OSI Model (Open Systems Interconnection)
The main concept of OSI is that the process of communication between two
endpoints in a network can be divided into seven distinct groups of related
functions, or layers. Each communicating user or program is on a device that
can provide those seven layers of function.
In this architecture, each layer serves the layer above it and, in turn, is served
by the layer below it. So, in a given message between users, there will be a
flow of data down through the layers in the source computer, across the
network, and then up through the layers in the receiving computer. Only the
application layer, at the top of the stack, doesn’t provide services to a higher-
level layer.
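To make this layered flow concrete, here is a minimal sketch in Python of how a message might be wrapped on its way down the sender's stack and unwrapped on its way up the receiver's stack. The bracketed header strings are invented for illustration and are not real protocol headers:

```python
# Each layer wraps the data handed down from the layer above;
# the receiver unwraps the headers in the reverse order.
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data link", "physical"]

def send(message: str) -> str:
    """Wrap the message with one (made-up) header per layer, top to bottom."""
    data = message
    for layer in LAYERS:
        data = f"[{layer}]{data}"
    return data   # what actually crosses the wire

def receive(frame: str) -> str:
    """Strip the headers bottom to top to recover the original message."""
    data = frame
    for layer in reversed(LAYERS):
        prefix = f"[{layer}]"
        assert data.startswith(prefix), f"missing {layer} header"
        data = data[len(prefix):]
    return data

print(receive(send("hello")))   # -> hello
```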
Layer 6: The presentation layer: Translates or formats data for the application
layer based on the semantics or syntax that the application accepts. This layer
is also able to handle the encryption and decryption that the application layer
requires.
Layer 3: The network layer: Primary function is to move data into and through
other networks. Network layer protocols accomplish this by packaging data
with correct network address information, selecting the appropriate network
routes and forwarding the packaged data up the stack to the transport layer.
Layer 2: The data-link layer: The protocol layer in a program that handles the
moving of data into and out of a physical link in a network. This layer handles
problems that occur as a result of bit transmission errors. It ensures that the
pace of the data flow doesn’t overwhelm the sending and receiving devices.
This layer also permits the transmission of data to Layer 3, the network layer,
where it is addressed and routed.
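To illustrate the kind of bit-error detection this layer performs, here is a toy even-parity check in Python. Real links use stronger codes such as CRCs; a single parity bit is only a minimal sketch of the idea:

```python
def add_parity(bits: str) -> str:
    """Append an even-parity bit so the frame has an even number of 1s."""
    return bits + str(bits.count("1") % 2)

def check_parity(frame: str) -> bool:
    """Accept the frame only if its 1-bits still sum to an even count."""
    return frame.count("1") % 2 == 0

frame = add_parity("1011001")
print(check_parity(frame))              # True: no error
corrupted = ("1" if frame[0] == "0" else "0") + frame[1:]
print(check_parity(corrupted))          # False: a single flipped bit is caught
```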
Some services, known as cross-layer functions, may affect more than one layer.
The OSI and TCP/IP models have similarities and differences. The main
similarity is their construction: both use layers, although the OSI model
consists of seven layers while TCP/IP consists of just four.
Another similarity is that the top layer of each model is the application
layer, which performs the same tasks in each model but may vary according
to the information it receives.
The functions performed in each model are also similar because each uses a
network layer and a transport layer to operate. Both models are used to
transmit data packets; although they do so by different means and along
different paths, the packets still reach their destinations.
OSI has seven layers, while TCP/IP has four.
OSI uses two separate layers (physical and data link) to define the
functionality of the bottom of the stack, while TCP/IP uses a single link layer.
OSI uses the network layer to define routing standards and
protocols, while TCP/IP uses the Internet layer for this.
Pros and cons of the OSI model
A drawback is that the layers cannot work in parallel; each layer has to wait to
receive data from the previous layer.
Merits of TCP/IP
1. It supported a flexible architecture: adding more machines to a network was easy.
2. The network was robust, and connections remained intact as long as the source and
destination machines were functioning.
3. The overall idea was to allow one application on one computer to talk to (send data
packets to) another application running on a different computer.
Demerits of TCP/IP
1. The transport layer does not guarantee delivery of packets.
2. The model is not generic, so it cannot be used to describe any protocol stack other than TCP/IP.
3. Replacing protocols is not easy.
4. It does not clearly separate its services, interfaces, and protocols.
Proxy Server
The word proxy literally means a substitute. A proxy server
substitutes the IP address of your computer with another IP address.
If you can't access a website from your computer, or you want to
access it anonymously because you want your identity to be hidden or
you don't trust the site, you can use a proxy. Proxy servers are
dedicated computer systems, or software running on a computer system,
that act as an intermediary separating end users from the servers they
communicate with. Proxy servers are especially popular in countries
such as China, where the government has blocked connections to some
specific websites.
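As a small illustration, this is how a client could route HTTP traffic through a proxy using Python's third-party requests library. The proxy address below is a made-up placeholder, not a real server:

```python
import requests

# Hypothetical proxy address; replace with a real proxy host and port.
proxies = {
    "http": "http://203.0.113.7:8080",
    "https": "http://203.0.113.7:8080",
}

# The website sees the proxy's IP address rather than the client's.
response = requests.get("https://example.com", proxies=proxies, timeout=10)
print(response.status_code)
```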
More Revision
When we talk about the performance of a network, we mean how fast data is able to transfer
from one device in the network to another. The time taken for the data to be requested and then
sent is known as latency.
There are a number of factors that could affect the speed of data transfer in the network:
Bandwidth
Imagine the cables in the network are a little like rivers. If there are two rivers where the water
is flowing at the same speed, but one is wider, more water can flow down the wider river even
though the water is travelling at the same speed.
In the same way, bandwidth is the volume of data that can travel along the transmission medium
at the same time.
Bit Rate
Where bandwidth measures the volume of data that can travel at once, bit rate measures how fast
the data can travel. It is measured in bits per second.
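As a quick illustration of what the bit rate means in practice, the time a transfer takes is the size of the data divided by the bit rate. The file size and connection speed below are arbitrary example figures:

```python
file_size_bits = 10 * 8 * 10**6   # a 10 MB file, expressed in bits
bit_rate = 100 * 10**6            # a 100 Mbps connection, in bits per second

transfer_time = file_size_bits / bit_rate
print(f"{transfer_time:.1f} seconds")   # 0.8 seconds
```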
Number of Users
Have you ever noticed that your internet slows down at the weekend, or when something
exciting is happening on TV? The reason for this is likely to be something to do with how many
people are using the network.
Devices inside a network share the available bandwidth. If there are 4 devices on a
network, then they will receive 1/4 of the bandwidth each*. As more devices join the network,
the bandwidth is divided into smaller and smaller amounts for each device, eventually making
the network noticeably slower.
*This is actually a little bit simplistic, as the router is intelligent enough to divide the bandwidth
depending on what each device is doing.
Bandwidth is a measure of the amount of data that the medium can transfer over a given period
of time. Each transmission medium has a different bandwidth:
(Table: typical bandwidth of each transmission medium)
Each connected device requires bandwidth to be able to communicate. The bandwidth of the
medium is shared between each connected device. For example, a home Wi-Fi network with one
device would allocate 54 Mb per second to that device. If a second device joins the network, the
bandwidth would be split between the two, giving 27 Mb per second to each, and so on. If ten
devices were connected, the bandwidth allocated to each device would drop to 5.4 Mb per
second, thereby reducing the rate at which data can be sent to any particular device.
In reality, however, things are more complicated. Different types of network traffic usually have
different bandwidth requirements. For example, streaming a high definition video requires more
bandwidth than streaming a low definition video. Some network switches are capable of
determining the type of traffic and adjusting the bandwidth allocated to a particular device to
accommodate the traffic's requirements.
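The simple even split described above is easy to sketch in Python. This uses the 54 Mbps figure from the earlier example and ignores the traffic-aware allocation that smarter switches perform:

```python
total_bandwidth_mbps = 54   # the Wi-Fi figure from the example above

for devices in (1, 2, 10):
    share = total_bandwidth_mbps / devices
    print(f"{devices} device(s): {share:.1f} Mbps each")

# 1 device(s): 54.0 Mbps each
# 2 device(s): 27.0 Mbps each
# 10 device(s): 5.4 Mbps each
```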
Latency
Network latency is a measure of how long it takes a message to travel from one device to another
across a network. A network with low latency experiences few delays in transmission, whereas a
high latency network experiences many delays. The more delays there are, the longer it takes to
transmit data across a network.
Latency is affected by the number of devices on the network and the type of connection device.
A hub-based network will usually experience higher latency than a switch-based network
because hubs broadcast all messages to all devices. Switch-based networks transmit messages
only to the intended recipient.
The greater the number of devices connected to a network, the more important the choice of
transmission medium becomes. Wi-Fi generally handles less traffic than twisted copper wire
(TCW), which in turn handles less traffic than fibre-optic cable. Many networks include a
combination of all three media.
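One rough way to observe latency from a program is to time how long a TCP connection takes to be established. Here is a minimal sketch in Python; the host and port are arbitrary examples, and the figure includes some connection overhead beyond pure network delay:

```python
import socket
import time

def connect_latency(host: str, port: int = 80) -> float:
    """Return the seconds taken to open (and close) a TCP connection."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass   # connection established, then closed on exit
    return time.perf_counter() - start

print(f"{connect_latency('example.com') * 1000:.1f} ms")
```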
Transmission errors
Inevitably there will be times when devices try to communicate with each other at the same time.
Their signals collide with each other and the transmission fails. It is similar to when two people
speak to each other simultaneously - neither person is able to clearly hear what the other person
is saying.
A collision occurs when two devices on a network try to communicate simultaneously along
the same communication channel.
The greater the number of devices on a network, the more chance of a collision occurring, and
the longer it takes to transmit a message across the network.
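When a collision is detected, Ethernet-style networks resend after a random delay whose range doubles with each successive collision (binary exponential backoff). Here is a simplified simulation of that idea; the slot time is the classic 10 Mbps Ethernet value, and the cap on the exponent follows the usual textbook description:

```python
import random

SLOT_TIME_US = 51.2   # classic 10 Mbps Ethernet slot time, in microseconds

def backoff_delay(collision_count: int) -> float:
    """After the nth collision, wait a random number of slots in [0, 2^n - 1]."""
    n = min(collision_count, 10)          # the exponent is capped at 10
    slots = random.randint(0, 2**n - 1)
    return slots * SLOT_TIME_US

for attempt in range(1, 5):
    print(f"collision {attempt}: wait {backoff_delay(attempt):.1f} us")
```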
DEVICE DRIVER
A device driver is a particular form of software application that is designed to enable
interaction with hardware devices. Without the required device driver, the corresponding
hardware device fails to work.
A device driver usually communicates with the hardware by means of the
communications subsystem or computer bus to which the hardware is connected.
Device drivers are operating system-specific and hardware-dependent. A device driver
acts as a translator between the hardware device and the programs or operating
systems that use it.
A device driver may also be called a software driver.
INTERRUPTS
In system programming, an interrupt is a signal to the processor emitted by hardware or
software indicating an event that needs immediate attention. An interrupt alerts the processor
to a high-priority condition requiring the interruption of the current code the processor is
executing. The processor responds by suspending its current activities, saving its state, and
executing a function called an interrupt handler (or an interrupt service routine, ISR) to deal with
the event. This interruption is temporary, and, after the interrupt handler finishes, the processor
resumes normal activities.
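Programs can experience a software analogue of this mechanism through Unix signals: the operating system interrupts the program, runs a registered handler, and then lets normal execution resume. A minimal sketch in Python:

```python
import signal
import time

def handler(signum, frame):
    # Runs when the signal arrives; once it returns,
    # the interrupted program resumes where it left off.
    print(f"caught signal {signum}, handled it, resuming")

signal.signal(signal.SIGINT, handler)   # register the handler for Ctrl+C

print("working... press Ctrl+C to trigger the handler")
for _ in range(5):
    time.sleep(1)   # the main work carries on after each interruption
print("done")
```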
PROCESS STATE
A process passes through various states as it executes; the state of a process is
also called its status. The status indicates whether the process is currently
executing, whether it is waiting for some input or output from the user, and
whether it is waiting for the CPU to run it.
The various states of a process are as follows:
1) New state: When a user requests a service from the system, the system first
initialises a process to handle the request. Every operation newly requested from
the system therefore begins as a new (newborn) process.
2) Running state: When the process is being executed by the CPU, it is in the
running state. A running process may also produce output on the screen.
3) Waiting state: When a process is waiting for some input or output operation, it
is in the waiting state. The process is not being executed; it may even be moved
out of main memory, and once the user provides the input it returns to the
ready state.
4) Ready state: When a process is ready to execute but is waiting for the CPU, it
is in the ready state. After its input and output operations complete, the process
moves to the ready state and waits for the processor.
5) Terminated state: After a process completes its work, it is terminated; this is
the terminated state. Once the whole process has executed, the system also
deallocates the memory that was allocated to it.
Although it may appear that many processes are running at a time, this is not
true: a processor can execute only one process at a time. The states of the
processes determine which process will be executed next. Processes in the
waiting state are not executed, and if many processes are ready to execute, the
CPU divides its time among them.
When a process changes from one state to another, this is called a process state
transition. For example, a running process may move to the waiting state, a
waiting process may move to the ready state, and a ready process may move to
the running state.
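The life cycle described above can be modelled as a small state machine. Here is a minimal sketch in Python; the state names follow the list above, and the transition table is a common textbook set (it also allows a running process to be preempted back to ready):

```python
from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    TERMINATED = auto()

# Legal state transitions, following the description above.
TRANSITIONS = {
    State.NEW: {State.READY},
    State.READY: {State.RUNNING},
    State.RUNNING: {State.WAITING, State.READY, State.TERMINATED},
    State.WAITING: {State.READY},
    State.TERMINATED: set(),
}

class Process:
    def __init__(self) -> None:
        self.state = State.NEW

    def move_to(self, new_state: State) -> None:
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition: {self.state.name} -> {new_state.name}")
        self.state = new_state
        print(f"process is now {self.state.name}")

p = Process()
for s in (State.READY, State.RUNNING, State.WAITING,
          State.READY, State.RUNNING, State.TERMINATED):
    p.move_to(s)
```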