The power of a single computer is well understood. The speed and accuracy with which it can run through an algorithm and solve the most complex of problems is outstanding. However, it’s the mere tip of the iceberg when compared to the power of networked computers.
The internet is the biggest worldwide network of devices and has truly transformed almost every aspect of how we live, work and play. Virtually all our infrastructure is reliant on this global connectivity. Its ubiquity and ease of access have seen it replace costly dedicated connections across the board. Electrical generation, water distribution, transport networks, banks, government, the media, and most of education would all cease to function with no network connectivity. Children can relate to the loss of Facebook or Snapchat, but fail to realise there would be no power distribution and little food in the shops if there was no network.
When trying to explain how the internet works, it’s vital to appreciate that it has been a journey over some six or seven decades, with a variety of technologies coming and going. Thus, the networks we have today and the rules they follow are partially a legacy of older technology. If we had a clean slate and could start again, we wouldn’t build the internet we have today. Hence, to understand the internet, we need to follow the development of networking from the first connected computers. Please note this isn’t a historically authoritative account.
As computers developed, it was recognised that they could be interconnected to share information. With two computers, it was easy to provide a dedicated link between them and use electrical voltages to represent the binary 1s and 0s of the data to be sent.
The electrical signals are referred to as ‘Layer-1’ or ‘Physical Layer’ because they’re the closest to the physical connections.
Three computers can be fully interconnected with three connections. Four computers need six connections, and so on: in general, n computers need n(n−1)/2 dedicated links. The advantage of this approach was that each computer could choose where to send the information just by selecting the appropriate connection. The downside was the number of connections, and thus connectors, on each device. What was needed was a way of using a single wire that all devices could connect to, and a way to somehow share usage of the wire between them all. Two distinct solutions evolved – the ‘ring’ and ‘bus’ topologies.
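The growth in link count can be illustrated with a short calculation. Each of the n computers links to the other n − 1, and each link is shared by two endpoints, so a full mesh needs n(n − 1)/2 connections:

```python
# Number of dedicated point-to-point links needed to fully interconnect
# n computers: each of the n computers links to the other n - 1, and each
# link is shared by two computers, so divide by 2.
def full_mesh_links(n: int) -> int:
    return n * (n - 1) // 2

for n in range(2, 7):
    print(n, "computers need", full_mesh_links(n), "links")
```

Even a modest office of 20 machines would need 190 separate cables, which is why a shared medium was so attractive.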
Rather than connect each computer directly to every other computer, they can be connected in a ring topology. The data passes round the ring, with each computer ‘seeing’ the data but also passing it on. The computer that puts the data in the ring can remove it if it returns. However, how do we ensure the data only ‘goes to’ the computer we want to send it to, since it clearly now goes through all computers?
The solution involves giving each computer in our ring a unique address. Before we send our data out, we add the destination address on to the front of the data. For the receiving computer to know who to reply to, we also add our own address as the source address.
This addition of a source address and destination address to the data forms a ‘frame’. The process of adding this additional data is called ‘encapsulation’. It’s similar to placing the data in an envelope, putting the address of the destination on the front and the sender’s address on the back. The process that encapsulates the data with addresses is called ‘Layer 2’ or the ‘Data Link’ Layer. Thus the format of the frame is:
DESTINATION ADDRESS | SOURCE ADDRESS | DATA
As the frame is passed around the ring, each computer compares the destination address to its own address. If there’s a match, the computer reads the frame and has received the data. Computers that don’t match just forward the frame on.
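The ring-forwarding rule above can be sketched in a few lines of Python. This is a minimal illustration, not a real protocol implementation; the station names and the `send_around_ring` function are hypothetical:

```python
# Sketch of ring forwarding: each station compares the frame's destination
# address with its own. On a match it reads the data; otherwise it passes
# the frame on to the next station. If the frame returns to the sender
# unread, the sender removes it from the ring.
def send_around_ring(ring_addresses, src, dst, data):
    frame = {"dst": dst, "src": src, "data": data}  # encapsulation
    start = ring_addresses.index(src)
    # Walk the ring starting at the station after the sender.
    for i in range(1, len(ring_addresses) + 1):
        station = ring_addresses[(start + i) % len(ring_addresses)]
        if station == src:
            return None            # frame came back unread: no match on the ring
        if station == frame["dst"]:
            return frame["data"]   # destination address matches: read the frame
    return None

ring = ["A", "B", "C", "D"]
print(send_around_ring(ring, "A", "C", "hello"))  # -> hello
```

Note that the frame physically passes through station B on its way from A to C; only the address comparison decides who actually reads it.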
Ring-type networks are used today in Synchronous Optical Networking (SONET) and Synchronous Digital Hierarchy (SDH) networks.
Another option to connect multiple computers together is to connect them all to a common wire. Initially, this was a thick coaxial cable, similar to a TV cable. Each computer connected to the cable with a ‘tap’, which was a spike in a clamp, tightened up with a nut. This picture of many computers attached along a single common wire gave the topology its name: a ‘bus’, which devices could join and leave much as passengers get on and off.
The single wire meant only one computer could send at any one time, and the data would go to every computer on the wire. The technology was called Ethernet and used a set of rules called Carrier Sense Multiple Access with Collision Detection (CSMA/CD) to manage more than one computer trying to send at the same time. Mechanical problems with the taps and dry joints led to these networks being unreliable and difficult to fix.
An improvement used a thinner coaxial cable and special connectors called BNC connectors to make the connections. This was called ‘Thin Ethernet’ and the original cabling was retrospectively renamed ‘Thick Ethernet’.
Just like a ring network, all the computers need an address. This is the Media Access Control (MAC), Ethernet, physical or hardware address. Data is encapsulated with a header containing the destination and source addresses to make a data frame. In early Ethernet networks, frames were received by all computers and each compared the destination address to its own address. If there was a match, the computer read the frame and had received the data; otherwise the frame was just ignored.
Thus the format of the frame is:
DESTINATION ADDRESS (6 BYTES) | SOURCE ADDRESS (6 BYTES) | DATA (UP TO 1500 BYTES)
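Building such a frame is just byte concatenation. The sketch below is simplified: a real Ethernet frame also carries a type/length field and a trailing check field, omitted here, and the MAC addresses shown are hypothetical:

```python
# Sketch of building a simplified Ethernet frame: a 6-byte destination MAC,
# a 6-byte source MAC, then up to 1500 bytes of payload.
def build_frame(dst_mac: bytes, src_mac: bytes, payload: bytes) -> bytes:
    assert len(dst_mac) == 6 and len(src_mac) == 6
    assert len(payload) <= 1500
    return dst_mac + src_mac + payload

dst = bytes.fromhex("ffffffffffff")   # the all-ones broadcast address
src = bytes.fromhex("001122334455")   # hypothetical sender MAC
frame = build_frame(dst, src, b"hello")
print(len(frame))  # 17 bytes: 6 + 6 + 5
```

Because the destination address sits at the very front, a receiver can decide whether the frame is for it after reading only the first six bytes.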
With Thin and Thick Ethernet networks, the electrical signals that carried the binary data were protected from interference by the braiding in the coaxial cable. This literally shielded the inner conductor by providing an electrical ‘Faraday’ cage around the core.
Improvements in technology made it possible to send the data over pairs of wires twisted together, in the same way the wires in a telephone cable are twisted. One pair is used to transmit data and another pair to receive.
The cable is called Unshielded Twisted Pair (UTP) and is commonly, although incorrectly, known as Ethernet cable. Connection is made via RJ45 plugs.
To provide resilience and simplify connections, the bus was collapsed into a box called a hub. Each device connected directly to the hub on its own RJ45 port. Inside the hub, signals are received on one pair, regenerated, and, just like the bus, transmitted out of all other ports. This simple, reliable and cheap way to interconnect computers led to a high growth in the number of Local Area Networks (LANs) with multiple computers connected using an Ethernet hub. We call hubs ‘Layer-1’ or ‘Physical Layer’ devices because they just regenerate the electrical signals with no notion of the structure of the frame.
Hubs just forward data frames out of all ports because they have no knowledge of which computers are connected to which ports and have no understanding of the data they’re forwarding. However, advances in electronics have allowed us to improve the efficiency of our Ethernet networks by putting some ‘intelligence’ in the hub. They can now inspect the frame and examine the source and destination Ethernet addresses.
Clearly, the device is now much more than our humble hub and is called a ‘switch’.
Initially the switch will not know the addresses of the connected computers, so it defaults to hub behaviour and switches incoming frames out of all other ports. However, it learns which addresses are connected to which ports by examining the source addresses on incoming frames, storing them in a table within the switch. Hence, future frames are switched only to the right ports. We call switches ‘Layer-2’ devices because they understand the headers at Layer 2, the Data Link Layer.
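The learn-then-forward behaviour can be sketched as a small table keyed by MAC address. This is an illustrative model (the `Switch` class and port numbering are hypothetical), not how switch hardware is built:

```python
# Sketch of switch MAC learning: record which port each source address
# arrived on; forward to the learned port, or flood out of all other
# ports (hub behaviour) while the destination is still unknown.
class Switch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table = {}            # MAC address -> port number

    def receive(self, in_port: int, src: str, dst: str):
        self.mac_table[src] = in_port  # learn from the source address
        if dst in self.mac_table:
            return [self.mac_table[dst]]     # known destination: one port
        # Unknown destination: flood out of every port except the incoming one.
        return [p for p in range(self.num_ports) if p != in_port]

sw = Switch(4)
print(sw.receive(0, "AA", "BB"))  # BB unknown: flood -> [1, 2, 3]
print(sw.receive(1, "BB", "AA"))  # AA was learned on port 0 -> [0]
```

After a few frames in each direction, the table is populated and traffic between two computers no longer disturbs the other ports.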
The function of encapsulation is provided by the Network Interface Card (NIC) in the computer. Different interfaces such as wired, wireless and 3G/4G all have different NICs. No matter what the media (except fibre), there’s still the possibility of some electrical interference with the signal and spikes in the voltage. These ‘spikes’ can result in a binary 0 being interpreted as a binary 1 or vice versa. It may not be obvious that an error has occurred, so we use a ‘check field’ at the end of the frame to enable us to detect errors.
Thus our Ethernet frame is now:
DESTINATION ADDRESS (6 BYTES) | SOURCE ADDRESS (6 BYTES) | DATA (UP TO 1500 BYTES) | CHECK FIELD
When receiving the frame, we recalculate the check value and compare it with the check field. If they don’t match, an error has occurred and the frame is discarded.
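The check-and-discard step can be sketched using a 32-bit CRC, which is the same family of check used by Ethernet’s real check field (the Frame Check Sequence), though the exact layout here is simplified for illustration:

```python
import zlib

# Sketch of adding and verifying a check field: append a 32-bit CRC of the
# frame; on receipt, recompute the CRC and discard the frame on a mismatch.
def add_check(frame: bytes) -> bytes:
    return frame + zlib.crc32(frame).to_bytes(4, "big")

def verify(frame_with_check: bytes):
    frame, check = frame_with_check[:-4], frame_with_check[-4:]
    if zlib.crc32(frame).to_bytes(4, "big") != check:
        return None                    # error detected: discard the frame
    return frame

f = add_check(b"payload")
print(verify(f))                           # -> b'payload'
corrupted = bytes([f[0] ^ 0x01]) + f[1:]   # flip one bit, as a voltage spike might
print(verify(corrupted))                   # -> None (frame discarded)
```

Note that the check field only detects errors; the frame is silently dropped, and it is left to higher layers to notice the loss and resend if needed.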
The next evolution was the interconnection of all these LANs and the birth of the Internet Protocol (IP).