The Internet protocol stack is divided into four layers. The layered architecture lets each layer assume certain functionality
from the layers below it and build on that functionality to deliver its own part to the layers above.
Four layers of the Internet:
Application Layer – This is the topmost layer
of the Internet. Applications like a browser use this layer to
send commands like ‘GET’ requests to the server.
This is the layer where the request begins.
Transport Layer – Adds a TCP header (explained below; for now, just assume
this is some kind of information added by this layer) to the
request. The resulting object is called a ‘TCP segment’.
Network Layer – Adds an IP header (the main parts of the IP header are the
source and destination IP addresses) to the request. The resulting object is
called an ‘IP datagram’.
Link Layer – Adds an Ethernet header to the
request. The resulting object is called an ‘Ethernet frame’.
The above 4 layers form a complete block that is sent from
one physical link to another.
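The nesting of headers can be sketched with plain dictionaries; the port numbers, addresses, and field names below are illustrative stand-ins, not real wire formats:

```python
# A minimal sketch of layered encapsulation, using dicts instead of
# real binary headers. All addresses and ports here are made up.

def encapsulate(payload: bytes, src_ip: str, dst_ip: str,
                src_mac: str, dst_mac: str) -> dict:
    """Wrap an application payload the way the four layers do."""
    tcp_segment = {"tcp_header": {"src_port": 12345, "dst_port": 80},
                   "data": payload}                        # Transport layer
    ip_datagram = {"ip_header": {"src": src_ip, "dst": dst_ip, "ttl": 64},
                   "data": tcp_segment}                    # Network layer
    ethernet_frame = {"eth_header": {"src": src_mac, "dst": dst_mac},
                      "data": ip_datagram}                 # Link layer
    return ethernet_frame

frame = encapsulate(b"GET / HTTP/1.1", "10.0.0.1", "93.184.216.34",
                    "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb")
print(frame["data"]["ip_header"]["dst"])  # 93.184.216.34
```

Each layer only wraps what the layer above handed it, which is exactly why a router can later peel off just the outermost header.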
Each physical link/router strips the previous Ethernet
header, checks the destination added by the network layer, and resends
the data with its own Ethernet header added.
So, with every hop through a router, the source Ethernet address
changes to that of the current router, while the source and destination
IP addresses remain the same.
Every datagram has a field called the TTL (time to live). This field
determines the lifetime of a datagram during its hops from one
router to another: every router decrements this field by one, and
the datagram expires when the TTL reaches zero.
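The per-hop behaviour described above — replace the Ethernet header, decrement the TTL, drop the datagram at zero — can be sketched with the same kind of toy dictionaries; all MAC and IP values below are made up:

```python
import copy

# A toy sketch of one router hop, assuming frames are plain dicts
# (real routers work on binary headers, of course).

def forward(frame: dict, router_mac: str, next_hop_mac: str):
    """Strip the old Ethernet header, decrement TTL, re-wrap and forward."""
    datagram = copy.deepcopy(frame["data"])   # previous Ethernet header discarded
    datagram["ip_header"]["ttl"] -= 1
    if datagram["ip_header"]["ttl"] == 0:
        return None                           # TTL expired: router drops it
    return {"eth_header": {"src": router_mac, "dst": next_hop_mac},
            "data": datagram}                 # IP addresses inside stay untouched

frame = {"eth_header": {"src": "aa:aa", "dst": "bb:bb"},
         "data": {"ip_header": {"src": "10.0.0.1", "dst": "10.9.9.9", "ttl": 2},
                  "data": b"payload"}}

hop1 = forward(frame, "bb:bb", "cc:cc")   # TTL 2 -> 1, new Ethernet addresses
hop2 = forward(hop1, "cc:cc", "dd:dd")    # TTL 1 -> 0, datagram dropped
print(hop1["eth_header"]["src"])          # bb:bb
print(hop2)                               # None
```

Note that only the Ethernet header changes per hop; the IP header travels end to end (apart from the TTL).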
Routers and Switches:
Address lookup tables
Routers may be connected to each other through switches. These
switches look only at the Ethernet addresses to decide
which switch/router to pass the information to.
Routers have a built-in lookup table that tells them which packet
should be sent to which router. It is a kind of pattern
matching. For example, a router may have the following map:
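Such pattern matching is typically a longest-prefix match over destination addresses. A small sketch, with entirely hypothetical prefixes and router names:

```python
import ipaddress

# Hypothetical forwarding table: address prefix -> next-hop router.
table = {
    ipaddress.ip_network("10.0.0.0/8"): "router-A",
    ipaddress.ip_network("10.1.0.0/16"): "router-B",
    ipaddress.ip_network("0.0.0.0/0"): "default-gateway",  # catch-all route
}

def lookup(dst: str) -> str:
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return table[best]

print(lookup("10.1.2.3"))   # router-B (the /16 beats the /8)
print(lookup("8.8.8.8"))    # default-gateway
```

The catch-all `0.0.0.0/0` entry ensures every packet matches something, so the router never has to know the whole Internet.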
Advantages of Packet Switching
This approach has a very big advantage when a packet
has to find its path in a huge network. Imagine the conventional
graph-theory method of finding a path between two nodes. If the
same approach were followed here, the whole Internet would
remain occupied just finding paths among billions of routers and switches.
In the above approach, however, every packet is self-contained:
each intermediate router can decide the next hop on its own.
Also, this approach provides a very efficient way to share
links/paths between 2 nodes. For example, if path P is used for
communication between A and B, the same path can also carry
some of the traffic from, say, X to Y. Even though path P
may not be the shortest path between X and Y, nothing stops
us from adding a few extra hops between X and Y, and
this sharing of nodes/paths improves efficiency and keeps
individual paths from clogging.
This approach also allows data to be transferred in parallel from
A to B over multiple paths, again improving performance compared to a
dedicated connection between A and B.
It is also more robust: even if one path fails,
transmission can continue through other paths. Suppose a message is to be sent from
point A to B using nodes C, D and E. The address lookup tables of C, D and E are configured in such a way
that the message accurately hops from one node to the next and reaches its destination.
If another node now needs to be added to this path, it can be inserted between any 2 nodes, and only
one node's address table would need to be updated.
Also, if some node stops functioning, intelligence could be added to the nodes to look for an alternate
path and update their address tables. Now compare this with the alternative approach of finding the entire path
from A to B first, encoding that information in the headers, and then sending the packet.
That approach simply would not work when nodes are being added and dropped dynamically.
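The hop-by-hop tables described above can be sketched as a simple next-hop dictionary per node (the node names are illustrative). Notice how inserting a new node into the path touches only one existing entry:

```python
# Hop-by-hop forwarding toward B: each node knows only its next hop.
tables = {"A": "C", "C": "D", "D": "E", "E": "B"}

def route(src: str, dst: str) -> list:
    """Follow the per-node tables from src until dst is reached."""
    path, node = [src], src
    while node != dst:
        node = tables[node]
        path.append(node)
    return path

print(route("A", "B"))        # ['A', 'C', 'D', 'E', 'B']

# Insert a new node F between D and E: only D's entry changes,
# plus one new entry for F itself.
tables["D"] = "F"
tables["F"] = "E"
print(route("A", "B"))        # ['A', 'C', 'D', 'F', 'E', 'B']
```

With source routing (the whole path encoded in the header), every sender would instead have to learn about F before it could send anything at all.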
The link between 2 nodes is used only when
required. This is in contrast to a dedicated link, which may
remain idle most of the time because those 2 nodes may not be
communicating continuously.
Packet Switching provides for ‘Statistical
Multiplexing’. To understand this, consider a multiplexer
having 2 input lines and one output. It multiplexes the inputs
equally such that the output is 121212...
Now, suppose line 1 goes blank for some time; we
would get holes in the output corresponding to the 1s. In
statistical multiplexing, however, the output is never blank unless all
input is blank. It is called ‘Statistical
Multiplexing’ because it automatically gives higher
preference to the input line that statistically carries more packets.
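A toy simulation of the difference, assuming `None` marks an idle slot on an input line:

```python
# Two input lines feeding one output; None means the line is idle.
line1 = ["1", "1", None, None, "1"]
line2 = ["2", "2", "2", "2", "2"]

# Fixed (round-robin) multiplexing alternates slots whether or not
# anything is waiting, so idle inputs leave holes in the output.
fixed = [slot for pair in zip(line1, line2) for slot in pair]

# Statistical multiplexing sends whatever is waiting and simply
# skips idle slots, so the busier line gets more of the output.
statistical = [slot for pair in zip(line1, line2)
               for slot in pair if slot is not None]

print(fixed)        # ['1', '2', '1', '2', None, '2', None, '2', '1', '2']
print(statistical)  # ['1', '2', '1', '2', '2', '2', '1', '2']
```

In the statistical output there are no holes: line 2's extra packets fill the slots line 1 left idle.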
OSI Model with 7 layers
In the 1980s, networking followed another model, the OSI model, which had 7 layers.
That model is now mostly obsolete. However, the layers of the two models
correspond: roughly, OSI's application, presentation and session layers map to the
Internet's application layer, its transport and network layers map directly, and its
data-link and physical layers map to the link layer.
Features of the IP Layer
Hop-Routing: Each datagram travels by hopping
from router to router, and packets are not required to take
the same path even if they have the same source and
destination.
Unreliable: The IP layer does not make any guarantee
that the packets will be delivered. For example, if a router in
the middle fails because it drops out of the network, the
packets it is holding will be lost, and the IP layer cannot retrieve
them.
Datagrams may arrive out of sequence: This follows
directly from hop-routing. If every datagram can arrive by a different
route, there is a high probability that they arrive out of
sequence. It is the responsibility of a higher layer to
put them back in sequence.
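A minimal sketch of such resequencing, assuming each packet carries a sequence number (as TCP segments do):

```python
# Packets as (sequence_number, payload) pairs, arrived out of order.
packets = [(3, "c"), (1, "a"), (4, "d"), (2, "b")]

def reassemble(packets: list) -> str:
    """Sort received packets by sequence number and join the payloads."""
    return "".join(data for _, data in sorted(packets))

print(reassemble(packets))  # abcd
```

The IP layer delivers the pieces in whatever order they arrive; the sequence numbers let the higher layer reconstruct the original stream.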
Got a thought to share or found a bug in the code? We'd love to hear from you: