Once upon a time, wide area networks (WANs) had topologies. Lines (or sometimes waves) would connect the various nodes or sites. And whether the WAN was spread across a campus or across a country, one could somehow represent the network using a map.
The lines that crisscrossed the country typically belonged to some national operator, but the company that rented a line sort of 'owned' it and could use (or waste) the available bandwidth in whatever way it wished. The problem, of course, was that while the company constantly had the bandwidth available, it also constantly paid for that bandwidth. For busy lines this was sometimes cost-effective. For occasional connections one could consider dial-up links that would only establish the connection when necessary. However, in many cases (including frequent but brief communication) a permanent link where one only paid for what was actually transmitted would be a better solution.
A concomitant problem occurred where links spanned national boundaries. Within a country a line could be leased from the national operator, but there was no 'international' operator from whom one could lease a line running from, say, Cape Town to Cairo.
The solution to both these problems came in the form of PDNs, an acronym for public data networks and/or packet data networks. The networks were public in the sense that they were originally still operated by the national operator, and any company (or wealthy individual) could connect to the PDN. They were packet-based because messages were split into packets; packets were intermingled with other users' packets on the same wires, but routed to the correct destinations, where the messages were reassembled.
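The underlying idea is simple enough to sketch in a few lines of modern code. The snippet below is purely illustrative - the Packet fields, chunk size and 'users' are invented here, and no real PDN protocol worked exactly like this - but it shows messages being split into sequence-numbered packets, intermingled on a shared 'wire', and reassembled at the destination:

```python
import random
from dataclasses import dataclass

@dataclass
class Packet:
    message_id: str   # identifies which message the fragment belongs to
    seq: int          # position of this fragment within the message
    payload: bytes    # the fragment itself

def packetise(message_id: str, data: bytes, size: int) -> list[Packet]:
    """Split a message into fixed-size, sequence-numbered packets."""
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    return [Packet(message_id, seq, chunk) for seq, chunk in enumerate(chunks)]

def reassemble(packets: list[Packet]) -> dict[str, bytes]:
    """Group packets by message and restore each message's byte order."""
    messages: dict[str, list[Packet]] = {}
    for p in packets:
        messages.setdefault(p.message_id, []).append(p)
    return {mid: b"".join(p.payload for p in sorted(ps, key=lambda p: p.seq))
            for mid, ps in messages.items()}

# Two 'users' share the same wire: their packets are interleaved (and may
# arrive out of order), yet each message is reassembled intact.
wire = (packetise("alice", b"Meet me in Cape Town", 4)
        + packetise("bob", b"Invoice 42 attached", 4))
random.shuffle(wire)
print(reassemble(wire))
```

Real protocols such as X.25 of course layered addressing, flow control and error recovery on top of this basic idea.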
The first incarnation of these PDNs was X.25 networks. In those days the South African Post Office was the national network operator in South Africa, and the national leased-line infrastructure was known as SAPONET (for South African Post Office Network). When the (then) new packet-oriented X.25 network was introduced it was called SAPONET-P. The old SAPONET network was renamed SAPONET-C to emphasise that it used circuit switching.
One problem that arose with these new networks was that one did not know where the links were. However, one of the major benefits of these networks was that one did not need to know where the links were: it was the operator's responsibility to route packets from their origins to their destinations. It was no longer a company's problem if the link between its Cape Town and Johannesburg offices was down. One transmitted a packet into the network and it was then the operator's responsibility to get it to its destination - using whichever route the operator preferred. The network became a fuzzy entity whose users could be blissfully unaware of how it was configured or operated internally.
Textbooks from the late 1980s soon began representing such WANs as clouds. Exactly how 'realistic' the cloud looked depended on the preferences of the author or the publisher. Similarly, anyone who presented a lecture on networking had to draw some clouds during the lecture - clouds that were often drawn with great flair.
The X.25 cloud was interesting in part because of the set of standards that operated outside, inside, across and between clouds. The book in the bottom left quadrant of the picture above (Uyless D Black, Computer Networks: Protocols, Standards, and Interfaces, 2nd ed, 1993) nicely illustrates where these standards were 'located'. Note, in particular, X.75, which enables different national networks to be interconnected - linking one country's cloud to that of another.
Various other incarnations of PDNs followed, including frame relay and ATM. While some networks using these technologies still exist (and are often actively used), the trend is to move to TCP/IP. Over time most of the clouds converged and became a single cloud known as the Internet. And for a while the Internet was a cloud - the cloud - but nobody thought it was a big deal.
But what was changing was who was communicating with whom. In the days of X.25 a company typically communicated with its branches. In some cases companies did communicate with one another. Early bank networks, for example, connected a bank's branches with its central offices. However, for a long time after banks began using networks, different banks still communicated with one another in a rather primitive manner. In South Africa the Automated Clearing Bureau (ACB) was established in 1972. Every evening banks would send tapes with details of their transactions to the ACB, where the tapes were processed and it was (automatically) determined how much the various banks owed one another. In the early 1980s a few brave banks in South Africa connected directly to one another's networks, but most preferred to connect via a central service that the banks established and called SASWITCH.
The 1990s saw terms such as B2B (business to business) and B2C (business to consumer) emerge to focus on this changing landscape. Eventually services like eBay and Webmail arguably implemented 'C2C' (consumer to consumer) networks, although I have never seen the term C2C used in a technical context. Web 2.0 - with its user-provided content - is another example of C2C communication, although its protagonists probably did not foresee the extent to which these 'users' would actually be C2C (commodity to commodity) communicators in a twisted post-modern world where the consumer has become the commodity and vice versa.
Initially these changing communication patterns had little impact on the cloud(s). The business (or consumer) was merely communicating with another business (or consumer) via the cloud, using the same old (or at least similar) protocols that were used earlier. But where the primary problem earlier was to communicate between specific places (such as, say, Johannesburg and Bloemfontein), the focus shifted to reaching information or a service irrespective of where it was. A business in Bloemfontein could host its Web presence at an ISP in Cape Town - and then move it to an ISP in Pretoria without users knowing or noticing (unless they wished to).
I remember the strange experience many years ago when I phoned AT&T's helpdesk in Johannesburg, but got the distinct impression that the call was answered somewhere in Europe (judging by the accent of the person who answered, the latency on the line and some other clues). In fact, call-centres were one of the early services to be relocated to wherever it was convenient. The local 'call-centre' would consist of a telephone number and the necessary circuitry to forward the call as voice data over the network to the real call-centre, located where labour and/or facilities were cheaper. In the simplest cases it meant that a call-centre was not required everywhere; local numbers could make it appear that help was only a local call away.
In the same way, any service that primarily provided information or digital goods could be located or relocated anywhere. In the simplest cases companies' Web and email servers no longer needed to be on-site; they could be managed by a service provider anywhere on the network. An entire industry came into being to provide such services, and the number and variety of services increased. Initially the companies offering these services were simply on the 'other side' of the cloud: one knew where they were. One still used them with the same old protocols that entered the cloud at some local ingress point and exited the cloud at an egress point at the destination - and the response used the same mechanism in reverse. But things quickly became fuzzier. It is now much harder to say where, say, google.com or f.root-servers.net is. At the time of writing many South African readers are probably unaware that much of their communication with Google actually occurs with a Google cache in Cape Town, and that many of their DNS requests (at root level) are resolved by a root name server in Johannesburg, whereas users elsewhere in the world talking to these 'same' entities are talking to physical machines located on other continents. And all of this happens transparently to the user. The information no longer comes from the other side of the cloud - it comes from somewhere inside the cloud; it comes from who knows where.
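Curious readers can, in fact, ask such a server which instance is answering them. The sketch below - assuming the third-party dnspython package is installed - sends the conventional hostname.bind query (a CHAOS-class TXT record) to f.root-servers.net; many root server instances reply with a name that hints at their location, although operators are not obliged to answer:

```python
import dns.message
import dns.query
import dns.rdataclass
import dns.rdatatype

# f.root-servers.net answers at 192.5.5.241; anycast routing delivers the
# query to whichever instance is 'closest' on the network.
F_ROOT = "192.5.5.241"

# The conventional 'hostname.bind' query (CHAOS class) asks a name server
# to identify itself.
query = dns.message.make_query("hostname.bind",
                               dns.rdatatype.TXT,
                               dns.rdataclass.CH)
response = dns.query.udp(query, F_ROOT, timeout=5)
for answer in response.answer:
    print(answer)  # e.g. an instance name hinting at its physical location
```

Running this from different networks around the world will typically print different instance names - a small demonstration of just how fuzzy 'where' has become.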
A brief description of the history of clouds should also consider what one uses in the cloud: Is it data, an application, a platform, infrastructure, or something else? However, this post is already stretching the limits of brevity, and those facets of the history of clouds will have to wait for a later post.
"I've looked at clouds from both sides now,
From up and down, and still somehow,
It's cloud illusions I recall,
I really don't know clouds, at all."