The document discusses the structure and functioning of IP addresses, subnets, and transport layer protocols in networking. It explains how IP addresses are hierarchical, the process of subnetting, and the role of the transport layer in ensuring reliable delivery of messages between processes. Additionally, it covers transport service primitives, the use of sockets for communication, and the differences between transport and data link layers.
When two or more networks are connected, they form an internetwork, or more simply an internet. It would be much simpler to join networks together if everyone used a single networking technology, and it is often the case that there is a dominant kind of network, such as Ethernet.

Every host and router on the Internet has an IP address that can be used in the Source address and Destination address fields of IP packets.
It is important to note that an IP address does not
actually refer to a host. It really refers to a network interface, so if a host is on two networks, it must have two IP addresses.
Most hosts are on one network and thus have one IP address. In contrast, routers have multiple interfaces and thus multiple IP addresses.

Prefixes

IP addresses are hierarchical, unlike Ethernet addresses. Each 32-bit address consists of a variable-length network portion in the top bits and a host portion in the bottom bits.
The network portion has the same value for all
hosts on a single network, such as an Ethernet LAN. This means that a network corresponds to a contiguous block of IP address space. This block is called a prefix.
IP addresses are written in dotted decimal notation. In this format, each of the 4 bytes is written in decimal, from 0 to 255. The prefix length cannot be inferred from the IP address alone, so routing protocols must carry the prefixes to routers. Sometimes prefixes are simply described by their length, as in a "/16," which is pronounced "slash 16."
The length of the prefix corresponds to a binary mask of 1s in the network portion. When written out this way, it is called a subnet mask.
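To make the correspondence concrete, here is a small sketch (not from the text; plain Python, with the helper name prefix_to_mask chosen purely for illustration) that turns a prefix length into the dotted-decimal subnet mask described above.

```python
def prefix_to_mask(prefix_len):
    """Build the 32-bit mask with prefix_len leading 1s and return it in dotted decimal."""
    mask = (0xFFFFFFFF << (32 - prefix_len)) & 0xFFFFFFFF
    return ".".join(str((mask >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(prefix_to_mask(16))  # a "/16" -> 255.255.0.0
print(prefix_to_mask(21))  # a "/21" -> 255.255.248.0
```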
Subnets

Network numbers are managed by a nonprofit corporation called ICANN (Internet Corporation for Assigned Names and Numbers) to avoid conflicts. In turn, ICANN has delegated parts of the address space to various regional authorities, which dole out IP addresses to ISPs and other companies.
This is the process by which a company is allocated a block of IP addresses. Internally, however, a company often wants to run several distinct networks, for example one per department, which raises the question of how to manage a single allocated block across them. The solution is to allow the block of addresses to be split into several parts for internal use as multiple networks, while still acting like a single network to the outside world. This is called subnetting, and the networks (such as Ethernet LANs) that result from dividing up a larger network are called subnets.
When a packet comes into the main router of such an organization (a campus holding a /16, i.e., 65,536 addresses, in this example), how does the router know which subnet to give it to? This is where the details of our prefixes come in. One way would be for the router to have a table with 65,536 entries telling it which outgoing line to use for each host on campus, but such a per-host table would be large and hard to keep up to date.
Instead, the routers simply need to know the subnet masks for the networks on campus. When a packet arrives, the router looks at its destination address and checks which subnet it belongs to. It can do this by ANDing the destination address with the mask for each subnet and checking whether the result is the corresponding prefix.
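As a rough sketch of this check (using Python's standard ipaddress module; the addresses and masks are the ones from the campus example worked through next), the router's test is an AND followed by a comparison:

```python
import ipaddress

def matches(dest, prefix, mask):
    """AND the destination address with the subnet mask and compare with the prefix."""
    d = int(ipaddress.ip_address(dest))
    p = int(ipaddress.ip_address(prefix))
    m = int(ipaddress.ip_address(mask))
    return (d & m) == p

dest = "128.208.2.151"
print(matches(dest, "128.208.128.0", "255.255.128.0"))  # Computer Science /17 -> False
print(matches(dest, "128.208.0.0",   "255.255.192.0"))  # Electrical Engineering /18 -> True
```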
For example, consider a packet destined for IP address 128.208.2.151. To see if it is for the Computer Science Dept., we AND the address with the subnet mask 255.255.128.0 to take the first 17 bits, giving 128.208.0.0, and compare the result with the prefix address, which is 128.208.128.0.
They do not match. Checking the first 18 bits for the Electrical Engineering Dept., we get 128.208.0.0 when ANDing with that subnet mask. This does match the prefix address, so the packet is forwarded onto the interface that leads to the Electrical Engineering network.

CIDR: Classless Inter-Domain Routing

Even if blocks of IP addresses are allocated so that the addresses are used efficiently, a problem remains: routing table explosion.
Routers in organizations at the edge of a network,
such as a university, need to have an entry for each of their subnets, telling the router which line to use to get to that network.
For routes to destinations outside of the organization, they can use the simple default rule of sending the packets on the line toward the ISP that connects the organization to the rest of the Internet.

IP addresses are contained in prefixes of varying sizes. The same IP address that one router treats as part of a /22 (a block containing 2^10 = 1024 addresses) may be treated by another router as part of a larger /20 (which contains 2^12 = 4096 addresses).
It is up to each router to have the corresponding prefix information. This design works with subnetting and is called CIDR (Classless Inter-Domain Routing), which is pronounced "cider."
To make CIDR easier to understand, let us consider an example in which a block of 8192 IP addresses is available starting at 194.24.0.0. Suppose that Cambridge University needs 2048 addresses and is assigned the addresses 194.24.0.0 through 194.24.7.255, along with mask 255.255.248.0.
This is a /21 prefix. Next, Oxford University asks for 4096 addresses. Since a block of 4096 addresses must lie on a 4096-address boundary, Oxford cannot be given addresses starting at 194.24.8.0.
Instead, it gets 194.24.16.0 through 194.24.31.255,
along with subnet mask 255.255.240.0. Finally, the University of Edinburgh asks for 1024 addresses and is assigned addresses 194.24.8.0 through 194.24.11.255 and mask 255.255.252.0.
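The arithmetic behind these assignments can be checked with a short sketch (Python's ipaddress module; the CIDR strings below simply restate the three allocations). Each block holds 2^(32 - prefix length) addresses and must start on a multiple of its own size, which is why Oxford could not be given a block starting at 194.24.8.0.

```python
import ipaddress

allocations = {
    "Cambridge": "194.24.0.0/21",   # 2048 addresses, mask 255.255.248.0
    "Oxford":    "194.24.16.0/20",  # 4096 addresses, mask 255.255.240.0
    "Edinburgh": "194.24.8.0/22",   # 1024 addresses, mask 255.255.252.0
}

for name, cidr in allocations.items():
    net = ipaddress.ip_network(cidr)
    print(f"{name}: {net} -> {net.num_addresses} addresses, netmask {net.netmask}")

# A /20 starting at 194.24.8.0 would not lie on a 4096-address boundary,
# so ip_network() (in its default strict mode) rejects it.
try:
    ipaddress.ip_network("194.24.8.0/20")
except ValueError as err:
    print("194.24.8.0/20 is not a valid block:", err)
```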
The transport layer is responsible for process-to-process delivery of the entire message. A process is an application program running on a host. Whereas the network layer oversees source-to-destination delivery of individual packets, it does not recognize any relationship between those packets. It treats each one independently, as though each piece belonged to a separate message, whether or not it does. The transport layer, on the other hand, ensures that the whole message arrives intact and in order, overseeing both error control and flow control at the source-to-destination level.

Computers often run several programs at the same time. For this reason, source-to-destination delivery means delivery not only from one computer to the next but also from a specific process on one computer to a specific process on the other.
The transport layer header must therefore include a type of address, called a service-point address in the OSI model and a port number (or port address) in the Internet and TCP/IP protocol suite. The topics that follow are:

• Services Provided to the Upper Layers
• Transport Service Primitives
• Berkeley Sockets

Services Provided to the Upper Layers

The ultimate goal of the transport layer is to provide efficient, reliable, and cost-effective data transmission service to its users, normally processes in the application layer.
The transport layer makes use of the services provided by
the network layer. The software and/or hardware within the transport layer that does the work is called the transport entity.
The transport entity can be located in the operating system kernel, in a library package bound into network applications, in a separate user process, or even on the network interface card.

There are two types of transport service. The connection-oriented transport service is similar to the connection-oriented network service in many ways.
In both cases, connections have three phases:
establishment, data transfer, and release. Addressing and flow control are also similar in both layers.
The connectionless transport service is also very similar
to the connectionless network service.
It can be difficult to provide a connectionless transport service on top of a connection-oriented network service, since it is inefficient to set up a connection to send a single packet and then tear it down immediately afterwards.

Transport Service Primitives

To allow users to access the transport service, the transport layer must provide some operations to application programs, that is, a transport service interface. Each transport service has its own interface. The transport service is similar to the network service, but there are also some important differences.
The main difference is that the network service is intended
to model the service offered by real networks, warts and all. Real networks can lose packets, so the network service is generally unreliable.
The connection-oriented transport service is reliable. Of course, real networks are not error-free, but that is precisely the purpose of the transport layer: to provide a reliable service on top of an unreliable network.

Berkeley Sockets

We now turn to the socket primitives as they are used for TCP. Sockets were first released as part of the Berkeley UNIX 4.2BSD software distribution in 1983.
The primitives are now widely used for Internet programming on many operating systems, especially UNIX-based systems, and there is a socket-style API for Windows called "winsock."
The primitives, which are listed in the accompanying figure, offer more features and flexibility. The first four primitives in the list are executed in that order by servers. The SOCKET primitive creates a new endpoint and allocates table space for it within the transport entity.
The parameters of the call specify the addressing format
to be used, the type of service desired (e.g., reliable byte stream), and the protocol.
A successful SOCKET call returns an ordinary file
descriptor for use in succeeding calls, the same way an OPEN call on a file does.
Newly created sockets do not have network addresses. These are assigned using the BIND primitive. Once a server has bound an address to a socket, remote clients can connect to it. The reason for not having the SOCKET call create an address directly is that some processes care about their addresses (e.g., they have been using the same address for years and everyone knows this address), whereas others do not.
Next comes the LISTEN call, which allocates space to queue incoming calls for the case that several clients try to connect at the same time.
To block waiting for an incoming connection, the server
executes an ACCEPT primitive. When a segment asking for a connection arrives, the transport entity creates a new socket with the same properties as the original one and returns a file descriptor for it. The server can then fork off a process or thread to handle the connection on the new socket and go back to waiting for the next connection on the original socket.
At the client side a socket must first be created using the
SOCKET primitive, but BIND is not required since the address used does not matter to the server.
The CONNECT primitive blocks the caller and actively
starts the connection process. When it completes (i.e., when the appropriate segment is received from the server), the client process is unblocked and the connection is established.
Both sides can now use SEND and RECEIVE to transmit
and receive data over the full-duplex connection. The standard UNIX READ and WRITE system calls can also be used if none of the special options of SEND and RECEIVE are required.
Connection release with sockets is symmetric. When both
sides have executed a CLOSE primitive, the connection is released.
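The sequence of primitives can be illustrated with a minimal sketch in Python, whose socket module wraps the Berkeley calls. The port number 6000 and the echo behaviour are arbitrary choices made for illustration, not anything prescribed by the text.

```python
import socket
import threading

def server():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # SOCKET
    s.bind(("", 6000))                                       # BIND an address to it
    s.listen(5)                                              # LISTEN: queue up to 5 callers
    while True:
        conn, addr = s.accept()                              # ACCEPT blocks for a connection
        # Hand the new socket to a thread and go back to waiting on the original socket.
        threading.Thread(target=handle, args=(conn,)).start()

def handle(conn):
    data = conn.recv(1024)                                   # RECEIVE
    conn.sendall(data)                                       # SEND (echo the data back)
    conn.close()                                             # CLOSE (our half of the release)

def client():
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # SOCKET; no BIND is needed
    c.connect(("localhost", 6000))                           # CONNECT blocks until established
    c.sendall(b"hello")                                      # SEND
    print(c.recv(1024))                                      # RECEIVE
    c.close()                                                # CLOSE
```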
The socket API is often used with the TCP protocol to provide a connection-oriented service called a reliable byte stream, which is simply a reliable bit pipe between the two endpoints.
A strength of the socket API is that it can be used by an application for other transport services. Sockets can also be used with transport protocols that provide a message stream rather than a byte stream and that do or do not have congestion control.
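For instance, the same socket() call can request a different service by changing its type and protocol parameters. The sketch below uses ordinary Python socket calls; whether SCTP is available depends on the platform, which is why it is guarded.

```python
import socket

# Reliable byte stream (TCP).
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Unreliable message service (UDP): the same API, a different transport service.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Where the operating system supports it, SCTP can be requested the same way.
if hasattr(socket, "IPPROTO_SCTP"):
    try:
        sctp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                                  socket.IPPROTO_SCTP)
    except OSError:
        pass  # the constant exists but the kernel lacks SCTP support
```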
For example, DCCP (Datagram Congestion Control Protocol) is a version of UDP with congestion control.
Newer protocols and interfaces have been devised that
support groups of related streams more effectively and simply for the application.
Two examples are SCTP (Stream Control Transmission Protocol), defined in RFC 4960, and SST (Structured Stream Transport) (Ford, 2007).

The transport service is implemented by a transport protocol used between the two transport entities.
In some ways, transport protocols resemble data link protocols. However, significant differences between the two also exist.
These differences are due to major dissimilarities
between the environments in which the two protocols operate.
At the data link layer, two routers communicate directly via a physical channel, whether wired or wireless, whereas at the transport layer, this physical channel is replaced by the entire network. This difference has many important implications for the protocols.

Another difference between the data link layer and the transport layer is the potential existence of storage capacity in the network.
When a router sends a packet over a link, it may
arrive or be lost, but it cannot bounce around for a while, go into hiding in a far corner of the world, and suddenly emerge after other packets that were sent much later.
If the network uses datagrams, which are routed independently inside the network, there is a nonnegligible probability that a packet may take the scenic route and arrive late and out of the expected order, or even that duplicates of the packet will arrive.

A final difference between the data link and transport layers is one of degree rather than of kind.
Buffering and flow control are needed in both
layers, but the presence in the transport layer of a large and varying number of connections with bandwidth that fluctuates as the connections compete with each other may require a different approach than we used in the data link layer.
In the transport layer, the larger number of connections that must be managed and the variations in the bandwidth each connection may receive make the idea of dedicating many buffers to each one less attractive.

When an application (e.g., a user) process wishes to set up a connection to a remote application process, it must specify which one to connect to.
The method normally used is to define transport
addresses to which processes can listen for connection requests.
In the Internet, these endpoints are called ports. We will use the generic term TSAP (Transport Service Access Point) to mean a specific endpoint in the transport layer. The analogous endpoints in the network layer (i.e., network layer addresses) are, not surprisingly, called NSAPs (Network Service Access Points). IP addresses are examples of NSAPs.
Figure: Relationship between NSAPs and TSAPs.
Application processes, both clients and servers, can attach themselves to a local TSAP to establish a connection to a remote TSAP.
These connections run through NSAPs on each host, as shown in the figure. The purpose of having TSAPs is that in some networks each computer has a single NSAP, so some way is needed to distinguish multiple transport endpoints that share that NSAP.

A possible scenario for a transport connection is as follows:

1. A mail server process attaches itself to TSAP 1522 on host 2 to wait for an incoming call. How a process attaches itself to a TSAP is outside the networking model and depends entirely on the local operating system. A call such as our LISTEN might be used, for example.
2. An application process on host 1 wants to send an email message, so it attaches itself to TSAP 1208 and issues a CONNECT request. The request specifies TSAP 1208 on host 1 as the source and TSAP 1522 on host 2 as the destination. This action ultimately results in a transport connection being established between the application process and the server.

3. The application process sends over the mail message.

4. The mail server responds to say that it will deliver the message.

5. The transport connection is released.
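A rough sketch of this scenario in Python follows (ports stand in for TSAPs; the host name "host2" and the reply text are made up for illustration, while 1522 and 1208 are the TSAP numbers from the steps above):

```python
import socket

def mail_server():                          # runs on host 2
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("", 1522))                      # step 1: attach to TSAP 1522
    s.listen(1)                             #         and wait for an incoming call
    conn, _ = s.accept()
    message = conn.recv(4096)               # step 3: receive the mail message
    conn.sendall(b"message accepted")       # step 4: promise to deliver it
    conn.close()                            # step 5: release the connection

def user_process():                         # runs on host 1
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.bind(("", 1208))                      # step 2: attach to TSAP 1208 ...
    c.connect(("host2", 1522))              #         ... and CONNECT to TSAP 1522
    c.sendall(b"An email message")          # step 3: send the mail message
    reply = c.recv(4096)                    # step 4: read the server's response
    c.close()                               # step 5: release the connection
```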
But how does the user process on host 1 know that
the mail server is attached to TSAP 1522?
One possibility is that the mail server has been
attaching itself to TSAP 1522 for years and gradually all the network users have learned this.
In this model, services have stable TSAP addresses that are listed in well-known places, so users can find them.