29 February 2016

Mobile telephony circa 1990

Many people refer to older cellphones as 'brick phones' due to the size of such phones.  However, there was a time - in the late 1980s up to about 1990 - when bricks were diminutive compared to mobile phones.  Those mobile phones were often installed in cars (and were therefore often referred to as car phones).  However, many were not permanently affixed to the car and could be carried to wherever they were needed.  The lead-acid batteries used by those phones were just one of the factors that contributed to their weight.  I knew someone who, due to the nature of his work, always carried his phone with him; to be more precise, he hired someone to carry his phone.

One of the phones dating from this era was the Siemens C30 Flexiphone (circa 1991).


The picture above shows the phone as well as a SIM (subscriber identity module) card to be used with it.  Note the telephone number embossed on the card, starting with 081; the cellular networks that appeared a few years later initially used 082 (Vodacom) and 083 (MTN).  Other operators that arrived somewhat later on the scene continued this trend.  However, cellular operators soon needed larger number spaces and were assigned other prefixes.  Number portability also meant that the prefix was no longer a reliable indicator of the operator associated with the number.

The SIM card shown in the picture above was the 'standard' or 'normal' size for SIM cards - the same size as, say, a credit card.  However, even in these early cards the electronics were all located underneath the contact points visible on the surface of the card.  Over the years smaller and smaller parts of the SIM card were used by cutting away more and more of the redundant plastic.  This led to the introduction of mini, micro and nano SIM cards, which differ only in the amount of plastic left around the electronics.  (However, the electronics evolved independently from the decrease in size of the surrounding plastic by, for example, incorporating more memory as time progressed.)  People who have never known 'full size' SIM cards often incorrectly refer to mini-SIM cards as full size cards; however, as demonstrated in the picture below, the 'real full size' cards were indeed once upon a time inserted as is into mobile phones.


The logic of using such relatively large SIM cards clearly did not stem from the size of the electronics - which for decades were housed under contacts with a standard size.  The relatively large size stemmed from the way subscribers were expected to use mobile phones.

The astronomical cost of mobile phones meant that few members of the public would be able to afford a phone.  And very few of those who could afford a mobile phone were going to lug it around with them.  The much more plausible scenario was that mobile phones would be installed in taxis and at other convenient places.  The subscriber then only had to carry a SIM card along and insert it into the phone in the taxi (or elsewhere).  The fact that the subscriber could then make calls would have been convenient.  However, the fact that it would be possible to phone the subscriber at his or her personal number on whichever phone contained the SIM made this a must-have tool for busy executives.  A credit-card-sized SIM was the obvious choice for SIM cards to be used in this manner.

The Siemens C30 Flexiphone depicted here offered the business executive additional functionality.  With an adapter it could be connected to a fax machine, telephone answering machine and even a mobile computer - all in the convenience of one's own car...

Phone kindly provided by Peter Fryer of Risk Diversion (Pty) Ltd


08 April 2012

Peering at JINX

The Internet - or any internet for that matter - is, by definition, a collection of connected networks.

The Internet started its history in 1969 as the ARPANET when four nodes were connected at four campuses: the University of California Los Angeles (UCLA), the Stanford Research Institute (SRI) in Menlo Park, California, the University of California at Santa Barbara (UCSB) and the University of Utah.  The well-known map of the ARPANET below shows the original topology.

To the best of my knowledge this map was drawn by Alex McKenzie.  His notes are archived by the Charles Babbage Institute, which now also owns the copyright.  The Institute has given permission for researchers to quote from the notes.

Even this simple topology (by modern standards) raises a number of interesting questions about interconnecting sites.  Since Utah is only connected to SRI, traffic between Utah and UCSB or UCLA has to be relayed via SRI.  Who pays SRI for this service?  How much does Utah have to contribute to the links between SRI and UCSB and between SRI and UCLA?  Is the link between UCSB and UCLA of concern to Utah at all?  Of course, the fact that this was a US military project at the time made the questions about funding easy to answer.  The other question that this old map illustrates is which of the other nodes a fifth node should connect to: one, two, three or all four?  In a world where countless organisations and individuals connect their networks or individual computers to the network these questions have, in principle, become impossible to answer.
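To make the relaying point concrete, here is a toy sketch (my own illustration, not part of the original map or its notes) of the four-node topology described above.  It simply finds the shortest hop-by-hop route between two sites, confirming that any traffic between Utah and UCLA or UCSB has to pass through SRI.

    from collections import deque

    # The four original ARPANET nodes and the links between them, as on the map.
    LINKS = {
        "UCLA": {"SRI", "UCSB"},
        "SRI":  {"UCLA", "UCSB", "UTAH"},
        "UCSB": {"UCLA", "SRI"},
        "UTAH": {"SRI"},
    }

    def route(src, dst):
        """Breadth-first search for a shortest hop-by-hop route."""
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in LINKS[path[-1]] - seen:
                seen.add(nxt)
                queue.append(path + [nxt])

    print(route("UTAH", "UCLA"))   # ['UTAH', 'SRI', 'UCLA'] - SRI relays the traffic
    print(route("UTAH", "UCSB"))   # ['UTAH', 'SRI', 'UCSB'] - again via SRI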

In fact very few of the myriad organisations and individuals connected to the Internet have the skills to decide whom to connect to and how to share costs.  Very few of them are willing to relay information for others, and very few of them want to entrust the relaying of their own information to a competing neighbour down the road.  And thus emerged Internet Service Providers (ISPs).  Rather than connecting to the Internet 'directly', organisations and individuals connect to an ISP.  This does not remove the interconnection problem - but it delegates the problem to a much smaller (though still large) number of skilled specialists.

The connection between ISPs is known as peering.  (Technically, peering is the connection of two Autonomous Systems, but a discussion of Autonomous Systems will have to wait for another post.)  Two ISPs can (and often do) peer with one another by simply connecting their networks and configuring their gateway routers to route traffic according to their agreed-upon policies.  But, as we have seen above, this solution does not scale and is only appropriate where there is some special reason for two particular networks to peer.
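To give a flavour of what 'configuring their gateway routers according to agreed-upon policies' involves, here is a heavily simplified Python sketch - my own illustration, using made-up AS numbers and a documentation prefix, not a real router configuration.  It mimics just one of the decision rules a BGP4 speaker applies when the same prefix is learnt from more than one peer: prefer the advertisement with the shortest AS path.

    # Routes for the same (documentation) prefix, learnt over two hypothetical peering sessions.
    advertisements = [
        {"prefix": "203.0.113.0/24", "as_path": [64500],        "learned_from": "peer A"},
        {"prefix": "203.0.113.0/24", "as_path": [64510, 64500], "learned_from": "peer B"},
    ]

    def best_route(candidates):
        # One BGP tie-breaker among several: prefer the shortest AS path.
        return min(candidates, key=lambda adv: len(adv["as_path"]))

    chosen = best_route(advertisements)
    print("Route %s via %s (AS path %s)"
          % (chosen["prefix"], chosen["learned_from"], chosen["as_path"]))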

The more general solution is to establish an 'exchange' where all the ISPs in the area can connect to one another.  In South Africa at the time of writing the Internet Service Providers' Association (ISPA) operates two such Internet exchanges: one in Johannesburg (JINX, Johannesburg InterNet eXchange) and one in Cape Town (CINX).

An Internet exchange is often hosted by one of the bigger ISPs.  Both JINX and CINX are hosted by Internet Solutions (IS).

While I would love to post some pictures taken inside an Internet exchange, I haven't even attempted to ask permission to go and take some.  I would imagine that such pictures are rather sensitive, since they concern the critical infrastructure of a country.  So the best alternative is to show the outside of the buildings that house one.

JINX is indeed housed in two buildings.

JINX Rosebank is located at 158 Jan Smuts Avenue, Rosebank, Johannesburg.


The satellite and microwave dishes on the balconies are an indication that this is not an ordinary office complex.  The JINX Rosebank facility is located in vaults 14 and 16 somewhere on the second floor of this building.

The picture above was taken from the east. Walking around to the south presents one with the following view. Note again the various dishes on the balcony. The bank of air conditioners may not be easy to see from this angle, but there are indeed many.


From the west - from Jan Smuts Avenue - it looks as shown in the picture below.


Finally, looking from the north towards Johannesburg city centre, one sees the billboard of Internet Solutions claiming its presence (amidst others) in this building.


As usual a map may be useful to illustrate our journey thus far (as well as the journey that remains).  On the map below the JINX Rosebank facility is on the left.  We started our journey on its western side - more or less where the label (W) has been added in green.  From there we travelled clockwise until we reached its eastern side - more or less where the label (E) has been added in green.

Map derived from satellite imagery by Google Earth and others.
Immediately opposite the JINX Rosebank facility - towards its west in the same street block - is the Parklands Centre, which contains the JINX Parklands facility.  On the map above we will follow the passage through the Parklands Centre that starts at its eastern side.  The picture below looks towards the Parklands Centre - approximately towards the point labelled (E) in yellow on the map.  The journey proceeds through the centre of the Centre and exits on the west - just below the spot labelled (W) in yellow.  We now enter the Parklands Centre from the east.

Apart from a Post Office, there does not seem to be much happening in the Parklands Centre.


But we know that somewhere in this centre - to be more precise, in Vault 4 - is the JINX Parklands facility.  Exiting on the east, one finds that the facade of the building is composed of drab concrete slabs housing what appear to be emergency power generators.  Pictures from a few years back (which, because of copyright issues, I cannot reproduce here) show a facebrick facade starting slightly deeper in.  Hence the part housing the generators must be a fairly recent addition.


Note that the Rosebank facility (two vaults) and the Parklands facility (one vault) have been interconnected to form one virtual Internet exchange.  In fact, at the time of writing, the Rosebank facility is filled to capacity and space only remains in the Parklands facility.  On a logical level this is unimportant to an ISP that wants to join the exchange.  On a physical level it does make a difference - the new ISP's cabling has to terminate in Parklands.

More information about the Internet exchanges is available on the ISPA site - click on INX.  Interesting information includes the names of the ISPs that peer at JINX (as well as those that peer at CINX).  Here are some other FAQs straight from their site:
Q: What equipment is required for connecting to an INX?
A: You can connect to the INX switch fabric using either a suitable router or a MetroEthernet service. INX switches have copper based Ethernet connections and singlemode or multimode fibre Ethernet interfaces available. You require a router that supports BGP4 since all peering is done using BGP. BGP4 is supported on numerous devices from many vendors. Speak to your router vendor in order to obtain the best hardware. It is common to have dozens of BGP sessions with other members at the exchange and your hardware should be powerful enough to handle this.
Q: Who can I get backhaul links to an INX from?
A: Backhaul links into the INXs are typically provided as either MetroEthernet or SDH circuits over fibre. You can approach any licensed ECNS holder to provide you with these services. A number of ISPA’s large and medium members will already have a point-of-presence (PoP) at or near the INX environment and may be able to provide these services. For larger capacity circuits it may be feasible to obtain dark fibre pairs or DWDM wavelengths into the INX environment.
Q: Can I get wireless access to an INX?
A: Roof or tower space near the INXs is normally limited and it is preferable to connect to an INX via a fibre based circuit. Contact the host of the INX you wish to connect to in order to determine the rules and processes for installation of radio equipment.
Q: What’s the price of a link between Rosebank and Parklands?
A: Existing cables between the Rosebank and Parklands cages are only for peering traffic on the switch fabric. ISPA is not able to provide these links to members. Members should speak to the host or a licensed operator to obtain these cross-connects.
Statistics about the traffic flowing through JINX are available at http://stats.jinx.net.za, and the statistics for CINX at http://stats.cinx.net.za

15 March 2012

The size of the Net

It should be obvious that it is impossible to determine the size of the Internet.  However, in the day and age in which we live, if you cannot impress someone with numbers, that someone will usually be unimpressed.  Therefore we have to measure the size of the Internet.  One of the best surveys of the size of the Internet is conducted twice a year by the Internet Systems Consortium (isc.org).  At least their method is clear and they are aware of its limitations.  To see the latest size of the Internet (as well as a host of other interesting statistics) consult their site.  Our museum's interest in their numbers is more local.

The graph below depicts the growth in the size of the Internet in Southern Africa (excluding South Africa) between 1997 and 2012.  The growth in some countries (Namibia - yellow, Zimbabwe - green and Mozambique - purple) is impressive.  Or, at least, it was impressive until the worldwide economic downturn started.


Unfortunately the impressive growth in these countries pales when the growth in the regional superpower, South Africa, is added to the picture.  The graph below shows the growth in South Africa in red.  Relative to South Africa the best performer in the previous graph, Namibia (in yellow), shows hardly any growth at all.


Of course, anyone who has ever travelled through the beautiful Namib knows that people are scarce in that vast country.  In a sense it is 'unfair' to plot Namibia next to South Africa.  The graph below 'corrects' this by plotting the number of hosts per thousand inhabitants for each country.  (The population sizes are from the WHO site - interpolated and extrapolated where necessary.)  On this scale Namibia becomes visible next to South Africa.  However, the other Southern African countries still have hardly any presence on the Net compared to South Africa and Namibia.
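For clarity, the normalisation used for this per-capita graph is nothing more than the host count divided by the population in thousands.  A minimal sketch, with placeholder figures rather than the actual survey data:

    # Placeholder numbers for illustration only - not the ISC survey data.
    hosts = 150_000          # hypothetical number of hosts in a country
    population = 2_100_000   # hypothetical population of that country

    hosts_per_thousand = hosts / (population / 1000)
    print("%.1f hosts per thousand inhabitants" % hosts_per_thousand)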


In the global context even the overwhelming regional growth in Internet size in South Africa pales. The graph below shows Japan (in pink), Germany (in green), Australia (in beige) and a couple of other countries.  South Africa is shown in red - almost a flat line next to countries elsewhere in the world.  Note that the country plotted in yellow in this graph (next to South Africa) is Portugal.


Before concluding that South Africa is on the wrong side of the digital divide it should be noted that the .za TLD ranks 23rd on the list of 270 TLDs measured in the January 2012 survey.  Lesotho is at position 189.  We leave it to the reader to consider the location(s) of the global digital divide(s).

25 October 2011

A brief history of the cloud - as I recall


Once upon a time wide area networks (WANs) had topologies.  Lines (or sometimes waves) would connect the various nodes or sites.  And whether the WAN was spread across a campus or across a country, one could somehow represent the network using a map.


The lines that crisscrossed the country typically belonged to some national operator, but the company that rented a line sort of 'owned' it and could use (or waste) the available bandwidth in whatever way it wished.  The problem, of course, was that while they constantly had the bandwidth available, they also constantly paid for the bandwidth.  For busy lines this was sometimes cost-effective.  For occasional connections one could consider dial-up links that would only establish the connection when necessary.  However, in many cases (including frequent but brief communication) a permanent link where one only paid for what was actually transmitted would be a better solution.

A concomitant problem occurred where the links spanned national boundaries.  Within a country a line could be leased from the national operator, but there was no 'international' operator from whom one could lease a line from, say, Cape Town to Cairo.

The solution to both these problems came in the form of PDNs, which is an acronym for public data networks and/or packet data networks.  The networks were public in the sense that they were originally still operated by the national operator and any company (or wealthy individual) could connect to the PDN.  They were packet-based because messages were split into packets and packets were intermingled with other users' packets on the same wires, but routed to the correct destinations where the messages were reassembled.
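As a toy illustration of that idea (my own sketch, not any operator's actual protocol), the snippet below splits two users' messages into numbered packets, shuffles them together on the same 'wire', and reassembles each message at the receiving end using the sequence numbers.

    import random

    def packetise(user, message, size=8):
        """Split a message into (user, sequence number, payload) packets."""
        return [(user, seq, message[i:i + size])
                for seq, i in enumerate(range(0, len(message), size))]

    def reassemble(packets, user):
        """Pick out one user's packets and put them back in order."""
        own = sorted(p for p in packets if p[0] == user)
        return "".join(payload for _, _, payload in own)

    # Two users share the same wire; their packets are intermingled and arrive out of order.
    wire = packetise("A", "Meet me in Johannesburg") + packetise("B", "Invoice attached")
    random.shuffle(wire)

    print(reassemble(wire, "A"))   # Meet me in Johannesburg
    print(reassemble(wire, "B"))   # Invoice attached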

The first incarnation of these PDNs was X.25 networks.  In those days the South African Post Office was the national network operator in South Africa, and the national leased-line infrastructure was known as SAPONET (for South African Post Office Network).  When the (then) new packet-oriented X.25 network was introduced it was called SAPONET-P.  The old SAPONET network was renamed SAPONET-C to emphasise that it used circuit switching.

One problem that arose with these new networks was that one did not know where the links were.  However, one of the major benefits of these networks was that one did not need to know where the links were: it was the operator's responsibility to route packets from their origins to their destinations.  It was no longer the user's problem if the link between its Cape Town and Johannesburg offices was down.  One transmitted a packet into the network and it was then the operator's responsibility to get it to its destination - using whichever route the operator preferred.  The network became a fuzzy entity where users could be blissfully unaware of how it was configured or operated internally.

Textbooks from the late 1980s soon began representing such WANs as clouds.  Exactly how 'realistic' the cloud was drawn depended on the preferences of the author or the publisher.  Similarly, anyone who presented a lecture on networking had to draw some clouds during the lecture, which were often drawn with great flair.


The X.25 cloud was interesting in part because of the set of standards that operated outside, inside, across and between clouds.  The book in the bottom left quadrant of the picture above (Uyless D Black, Computer Networks: Protocols, Standards, and Interfaces, 2nd ed, 1993) nicely illustrates where these standards were 'located'.  Note, in particular, X.75 that enables different national networks to be interconnected - linking one country's cloud to that of another country.

Various other incarnations of PDNs followed, including frame relay and ATM.  While some networks using these technologies still exist (and are often actively used) the trend is to move to TCP/IP.  Over time most of the clouds converged and became a single cloud known as the Internet.  And for a while the Internet was a cloud - the cloud - but nobody thought it was a big deal.

But what was changing was who was communicating with whom.  In the days of X.25 a company typically communicated with its branches.  In some cases some companies did communicate with others.  Initial bank networks, for example, connected a bank's branches with its central offices.  However, for a long time after they began using networks, different banks still communicated with one another in a rather primitive manner.  In South Africa the Automated Clearing Bureau (ACB) was established in 1972.  Banks would send tapes with details of their transactions every evening to the ACB, where the tapes were processed and it was (automatically) determined how much the various banks owed one another.  In the early 1980s a few brave banks in South Africa connected directly to one another's networks, but most preferred to connect via a central service that was established by the banks and called SASWITCH.

The 1990s saw terms such as B2B (business to business) and B2C (business to customer) emerge to focus on this changing landscape.  Eventually services like eBay and Webmail arguably implemented 'C2C' (customer to customer) networks, although I have never seen the term C2C used in a technical context.  Web 2.0 - with its user-provided content - is another example of C2C communication, although its protagonists probably did not foresee the extent to which these 'users' would actually be C2C (commodity to commodity) communicators in a twisted post-modern world where the consumer has become the commodity and vice versa.

Initially these changing communication patterns had little impact on the cloud(s).  The business (or consumer) was merely communicating with the other business (or consumer) via the cloud using the same old (or, at least, similar) protocols that were used earlier.  But where the primary problem earlier was to communicate between specific places (such as, say, Johannesburg and Bloemfontein), the focus changed to get to information or a service irrespective of where it was.  The business in Bloemfontein can host its Web presence at an ISP in Cape Town - and then move it to an ISP in Pretoria without users knowing or noticing (unless they wish to).

I remember the strange experience many years ago when I phoned AT&T's helpdesk in Johannesburg, but got the distinct impression that the call was answered somewhere in Europe (from the accent of the person who answered the call, the latency on the line and some other clues).  In fact, call-centres were one of the early services to be relocated to wherever it was convenient.  The local 'call-centre' would consist of a telephone number and the necessary circuitry to forward the call as voice data over the network to the real call centre located where labour and/or facilities were cheaper.  In the simplest cases it meant that a call-centre was not required to be everywhere, but local numbers could make it appear that help was only a local call away.

In the same way any service that primarily provided information or digital goods could be located or relocated anywhere.  In the simplest cases companies' Web and email servers no longer needed to be on-site; they could be managed by a service provider anywhere on the network.  An entire industry came into being to provide such services, and the number and variety of services increased.  Initially the companies offering these services were simply on the 'other side' of the cloud: one knew where they were.  One still used them with the same old protocols that entered the cloud at some local ingress point and exited the cloud at an egress point at the destination - and the response used the same mechanism in reverse.  But quickly things became fuzzier.  It is now much harder to say where, say, google.com or f.root-servers.net is.  At the time of writing many South African readers are probably unaware that much of their communication with Google actually occurs with a Google cache in Cape Town, and that many of their DNS requests (on root level) are resolved by a root name server in Johannesburg, whereas those using the Internet elsewhere in the world when talking to these 'same' entities are talking to physical machines located on other continents.  And all of this happens transparently to the user.  The information no longer comes from the other side of the cloud - it comes from somewhere inside the cloud; it comes from who knows where.


A brief description of the history of clouds should also consider what one uses in the cloud: Is it data, an application, a platform, infrastructure, or something else?  However, this post is already stretching the limits of brevity, and those facets of the history of clouds will have to wait for a later post.
"I've looked at clouds from both sides now,
From up and down, and still somehow,
It's cloud illusions I recall,
I really don't know clouds, at all."