CN Notes Unit 2 PK
The data link layer is the second layer, sitting directly above the physical layer. It is responsible for maintaining the
data link between two hosts or nodes.
Before going through the design issues in the data link layer, some of its sub-layers and their functions are as
follows.
The data link layer is divided into two sub-layers :
1. Logical Link Control Sub-layer (LLC) –
Provides the logic for the data link; it controls the synchronization, flow control, and error-checking
functions of the data link layer. Its functions are –
(i) Error recovery.
(ii) Flow control operations.
(iii) User addressing.
Frames are the units of digital transmission, particularly in computer networks and telecommunications. Frames
are comparable to the packets of energy called photons in the case of light energy. Frames are also used
continuously in the Time Division Multiplexing process.
Consider a point-to-point connection between two computers or devices consisting of a wire in which data is
transmitted as a stream of bits. For the stream to be useful, these bits must be framed into discernible blocks of information.
Framing is a function of the data link layer. It provides a way for a sender to transmit a set of bits that are
meaningful to the receiver. Ethernet, token ring, frame relay, and other data link layer technologies have their own
frame structures. Frames have headers that contain information such as error-checking codes.
At the data link layer, the frame carries the message from the sender to the receiver, along with the
sender's and receiver's addresses. The advantage of using frames is that data is broken up into recoverable chunks
that can easily be checked for corruption.
The process of dividing the data into frames and reassembling it is transparent to the user and is handled by the
data link layer.
Framing is an important aspect of data link layer protocol design because it allows the transmission of data to be
organized and controlled, ensuring that the data is delivered accurately and efficiently.
Problems in Framing
Detecting start of the frame: When a frame is transmitted, every station must be able to detect it. A station
detects frames by looking out for a special sequence of bits that marks the beginning of the frame, i.e. the SFD
(Starting Frame Delimiter).
How does the station detect a frame: Every station listens to the link for the SFD pattern through a sequential
circuit. If the SFD is detected, the sequential circuit alerts the station. The station then checks the destination
address to accept or reject the frame.
Detecting end of frame: When to stop reading the frame.
Handling errors: Framing errors may occur due to noise or other transmission errors,
which can cause a station to misinterpret the frame. Therefore, error detection and correction mechanisms,
such as cyclic redundancy check (CRC), are used to ensure the integrity of the frame.
Framing overhead: Every frame has a header and a trailer that contains control information such as source
and destination address, error detection code, and other protocol-related information. This overhead reduces
the available bandwidth for data transmission, especially for small-sized frames.
Framing incompatibility: Different networking devices and protocols may use different framing methods,
which can lead to framing incompatibility issues. For example, if a device using one framing method sends
data to a device using a different framing method, the receiving device may not be able to correctly interpret
the frame.
Framing synchronization: Stations must be synchronized with each other to avoid collisions and ensure
reliable communication. Synchronization requires that all stations agree on the frame boundaries and timing,
which can be challenging in complex networks with many devices and varying traffic loads.
Framing efficiency: Framing should be designed to minimize the amount of data overhead while maximizing
the available bandwidth for data transmission. Inefficient framing methods can lead to lower network
performance and higher latency.
Types of framing
There are two types of framing:
1. Fixed-size: The frame is of fixed size and there is no need to provide boundaries to the frame, the length of the
frame itself acts as a delimiter.
Drawback: It suffers from internal fragmentation if the data size is less than the frame size.
Solution: Padding
2. Variable size: In this, there is a need to define the end of the frame as well as the beginning of the next frame
to distinguish. This can be done in two ways:
1. Length field – We can introduce a length field in the frame to indicate the length of the frame. Used
in Ethernet (802.3). The problem with this is that sometimes the length field might get corrupted.
2. End Delimiter (ED) – We can introduce an ED(pattern) to indicate the end of the frame. Used in Token
Ring. The problem with this is that ED can occur in the data. This can be solved by:
1. Character/Byte Stuffing: Used when frames consist of characters. If the data contains the ED, then an extra
byte is stuffed into the data to differentiate it from the ED.
Let ED = "$" –> if the data contains '$' anywhere, it can be escaped using the '\O' character.
–> if the data contains '\O$' then use '\O\O\O$' ($ is escaped using \O, and \O is escaped using \O).
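The escaping rule above ('$' as the ED, '\O' as the escape sequence) can be sketched in a few lines. This is a toy sketch using the delimiters from the example; real protocols typically use DLE/ESC control bytes instead:

```python
# Character/byte stuffing sketch. ED and ESC mirror the example above.
ED = "$"
ESC = "\\O"   # the two-character escape sequence backslash-O

def stuff(data: str) -> str:
    # Escape every escape sequence first, then every delimiter.
    return data.replace(ESC, ESC + ESC).replace(ED, ESC + ED)

def unstuff(stuffed: str) -> str:
    out, i = [], 0
    while i < len(stuffed):
        if stuffed.startswith(ESC, i):
            i += len(ESC)                      # drop the escape itself
            if stuffed.startswith(ESC, i):     # escaped escape
                out.append(ESC); i += len(ESC)
            elif stuffed.startswith(ED, i):    # escaped delimiter
                out.append(ED); i += len(ED)
        else:
            out.append(stuffed[i]); i += 1
    return "".join(out)

print(stuff("\\O$"))   # reproduces the example: \O\O\O$
```

Stuffing the example string "\O$" yields "\O\O\O$", exactly as described above, and unstuffing restores the original data.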
The data link layer uses error control techniques to ensure that frames, i.e. bit streams of data, are transmitted
from source to destination with the required level of accuracy.
Types of Errors – Errors can be of three types, namely single bit errors, multiple bit errors, and burst errors.
Single bit error − In the received frame, only one bit has been corrupted, i.e. either changed from 0 to 1 or from 1 to
0.
Multiple bits error − In the received frame, more than one bit is corrupted.
Burst error − In the received frame, multiple consecutive bits are corrupted.
Error Control
Error detection − Error detection involves checking whether any error has occurred or not. The number of
error bits and the type of error does not matter.
Error correction − Error correction involves ascertaining the exact number of bits that has been corrupted
and the location of the corrupted bits.
For both error detection and error correction, the sender needs to send some additional bits along with the data bits.
The receiver performs necessary checks based upon the additional redundant bits. If it finds that the data is free from
errors, it removes the redundant bits before passing the message to the upper layers.
There are three main techniques for detecting errors in frames: Parity Check, Checksum, and Cyclic Redundancy
Check (CRC).
Parity Check
The parity check is done by adding an extra bit, called the parity bit, to the data to make the number of 1s either even in
the case of even parity or odd in the case of odd parity.
While creating a frame, the sender counts the number of 1s in it and adds the parity bit in the following way:
In case of even parity: If the number of 1s is even, the parity bit value is 0. If the number of 1s is odd, the
parity bit value is 1.
In case of odd parity: If the number of 1s is odd, the parity bit value is 0. If the number of 1s is even, the parity
bit value is 1.
On receiving a frame, the receiver counts the number of 1s in it. In case of even parity check, if the count of 1s is
even, the frame is accepted, otherwise, it is rejected. A similar rule is adopted for odd parity check.
The parity check is suitable for single bit error detection only.
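The parity rules above can be sketched in a few lines of Python (function names are ours):

```python
def parity_bit(bits: str, even: bool = True) -> str:
    """Compute the parity bit for a string of '0'/'1' characters."""
    ones = bits.count("1")
    if even:                      # even parity: total count of 1s must be even
        return "0" if ones % 2 == 0 else "1"
    return "1" if ones % 2 == 0 else "0"   # odd parity

def check(frame: str, even: bool = True) -> bool:
    """Receiver side: accept the frame (data + parity bit) or reject it."""
    ones = frame.count("1")
    return ones % 2 == 0 if even else ones % 2 == 1

data = "1011"                      # three 1s -> even parity bit is 1
frame = data + parity_bit(data)
print(frame, check(frame))         # 10111 True
```

Flipping any single bit of the frame makes `check` fail, which is exactly why parity detects single bit errors but misses errors that flip an even number of bits.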
Error correction techniques find out the exact number of bits that have been corrupted, as well as their locations.
There are two principal ways:
Backward Error Correction (Retransmission) − If the receiver detects an error in the incoming frame, it
requests the sender to retransmit the frame. It is a relatively simple technique. But it can be used efficiently
only where retransmission is not expensive, as in fiber optics, and the time for retransmission is low relative to
the requirements of the application.
Forward Error Correction − If the receiver detects some error in the incoming frame, it executes error-
correcting code that generates the actual frame. This saves bandwidth required for retransmission. It is
inevitable in real-time systems. However, if there are too many errors, the frames need to be retransmitted.
Protocols in the data link layer are designed so that this layer can perform its basic functions: framing, error control
and flow control. Framing is the process of dividing bit streams from the physical layer into data frames whose size
ranges from a few hundred to a few thousand bytes. Error control mechanisms deal with transmission errors and
retransmission of corrupted and lost frames. Flow control regulates the speed of delivery so that a fast sender does
not drown a slow receiver.
Data link protocols can be broadly divided into two categories, depending on whether the transmission channel is
noiseless or noisy
Simplex Protocol
The Simplex protocol is a hypothetical protocol designed for unidirectional data transmission over an ideal channel, i.e.
a channel through which transmission can never go wrong. It has distinct procedures for sender and receiver. The
sender simply sends all its data onto the channel as soon as it is available in its buffer. The receiver is
assumed to process all incoming data instantly. It is hypothetical since it does not handle flow control or error control.
The Stop – and – Wait protocol is for a noiseless channel too. It provides unidirectional data transmission without any error
control facilities. However, it provides for flow control so that a fast sender does not drown a slow receiver. The
receiver has a finite buffer size and finite processing speed. The sender can send a frame only when it has received
an indication from the receiver that it is available for further data processing.
Stop – and – wait Automatic Repeat Request (Stop – and – Wait ARQ) is a variation of the above protocol with
added error control mechanisms, appropriate for noisy channels. The sender keeps a copy of the sent frame. It then
waits for a finite time to receive a positive acknowledgement from receiver. If the timer expires or a negative
acknowledgement is received, the frame is retransmitted. If a positive acknowledgement is received then the next
frame is sent.
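The Stop-and-Wait ARQ behaviour can be sketched as a tiny simulation. This is an illustrative model, not a real protocol implementation: the lossy channel, the 30% loss rate, and the function names are all assumptions for the sketch.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def unreliable_send(frame):
    # Hypothetical lossy channel: roughly 30% of frames are lost in transit.
    return None if random.random() < 0.3 else frame

def stop_and_wait(frames, max_retries=10):
    delivered = []
    for seq, data in enumerate(frames):
        for _ in range(max_retries):              # retransmit until ACKed
            received = unreliable_send((seq % 2, data))  # 1-bit sequence number
            if received is not None:              # receiver got it -> ACK
                delivered.append(received[1])
                break
        else:                                     # timer expired every time
            raise TimeoutError(f"frame {seq} never acknowledged")
    return delivered

print(stop_and_wait(["a", "b", "c"]))  # -> ['a', 'b', 'c'] (after some retries)
```

Note the 1-bit sequence number (`seq % 2`): with Stop-and-Wait only one frame is outstanding at a time, so alternating 0/1 is enough for the receiver to detect duplicates.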
Go – Back – N ARQ
Go – Back – N ARQ provides for sending multiple frames before receiving the acknowledgement for the first frame.
It uses the concept of sliding window, and so is also called sliding window protocol. The frames are sequentially
numbered and a finite number of frames are sent. If the acknowledgement of a frame is not received within the time
period, all frames starting from that frame are retransmitted.
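The retransmission cost of Go-Back-N can be illustrated with a toy counter. The loss model (each listed frame is lost exactly once, and retransmissions succeed) is an assumption made only for illustration:

```python
def go_back_n(num_frames, window, lost):
    """Count total transmissions when the frames in `lost` are lost once each.

    Toy model: on a timeout for frame i, all already-sent frames from i to
    the end of the window are resent, as in Go-Back-N.
    """
    sent = 0
    base = 0
    lost = set(lost)
    while base < num_frames:
        upto = min(base + window, num_frames)
        sent += upto - base                    # send the whole window
        first_lost = next((i for i in range(base, upto) if i in lost), None)
        if first_lost is None:
            base = upto                        # everything ACKed, slide window
        else:
            lost.discard(first_lost)           # assume the retransmission works
            base = first_lost                  # go back: resend from lost frame
    return sent

print(go_back_n(7, window=3, lost=[2]))        # -> 8 transmissions for 7 frames
```

Losing frame 2 forces frames 2, 3 and 4 to be sent again even though 3 and 4 may have arrived intact; Selective Repeat, described next, avoids exactly this waste.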
Selective Repeat ARQ
This protocol also provides for sending multiple frames before receiving the acknowledgement for the first frame.
However, here only the erroneous or lost frames are retransmitted, while the good frames are received and buffered.
The medium access control (MAC) is a sublayer of the data link layer of the open system interconnections (OSI)
reference model for data transmission. It is responsible for flow control and multiplexing for the transmission
medium. It controls the transmission of data packets via remotely shared channels, and sends data over the network interface card.
The Open System Interconnections (OSI) model is a layered networking framework that conceptualizes how
communications should be done between heterogeneous systems. The data link layer is the second lowest layer. It is
divided into two sublayers − the logical link control (LLC) sublayer and the medium access control (MAC) sublayer.
MAC address or media access control address is a unique identifier allotted to a network interface controller
(NIC) of a device. It is used as a network address for data transmission within a network segment like Ethernet, Wi-
Fi, and Bluetooth.
A MAC address is assigned to a network adapter at the time of manufacturing. It is hardwired or hard-coded in the
network interface card (NIC). A MAC address comprises six groups of two hexadecimal digits, separated by
hyphens, colons, or no separators. An example of a MAC address is 00:0A:89:5B:F0:11.
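Since the same address may appear with hyphens, colons, or no separators, a small normalizer illustrates the format (the function name and the canonical colon-separated lowercase form are our choices):

```python
def normalize_mac(mac: str) -> str:
    """Normalize a MAC address to colon-separated lowercase form."""
    digits = mac.replace(":", "").replace("-", "").lower()
    if len(digits) != 12 or any(c not in "0123456789abcdef" for c in digits):
        raise ValueError(f"not a valid MAC address: {mac!r}")
    # regroup into six pairs of hex digits
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2))

print(normalize_mac("00-0A-89-5B-F0-11"))  # -> 00:0a:89:5b:f0:11
print(normalize_mac("000a895bf011"))       # -> 00:0a:89:5b:f0:11
```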
Channel allocation is a process in which a single channel is divided and allotted to multiple users in order to
carry user-specific tasks. The number of users may vary every time the process takes place. If there are N
users and the channel is divided into N equal-sized sub-channels, each user is assigned one portion. If the
number of users is small and does not vary over time, then Frequency Division Multiplexing can be used, as it is a
simple and efficient channel bandwidth allocation technique.
Channel allocation problem can be solved by two schemes: Static Channel Allocation in LANs and MANs, and
Dynamic Channel Allocation.
1. Static Channel Allocation:
In static channel allocation, a fixed portion of the channel (for example, one of N equal FDM sub-channels) is
permanently assigned to each user. The mean time delay then becomes:

T(FDM) = 1 / (U(C/N) - L/N) = N * T

Where,
T = mean time delay of the undivided channel, T = 1 / (UC - L),
C = capacity of channel,
L = arrival rate of frames,
1/U = bits/frame,
N = number of sub channels,
T(FDM) = mean time delay using Frequency Division Multiplexing.
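A quick numeric check of the relationship T(FDM) = N * T; the function names and the example figures (100 kbps channel, 10,000-bit frames, 5 frames/sec) are ours:

```python
def mean_delay(capacity_bps, frame_bits, arrival_rate):
    # Mean delay of a single channel: T = 1 / (U*C - L), with U = 1/frame_bits.
    u = 1.0 / frame_bits
    return 1.0 / (u * capacity_bps - arrival_rate)

def fdm_delay(capacity_bps, frame_bits, arrival_rate, n):
    # Each of the N sub-channels gets capacity C/N and arrival rate L/N.
    return mean_delay(capacity_bps / n, frame_bits, arrival_rate / n)

C, bits, L = 100_000, 10_000, 5        # service rate U*C = 10 frames/sec
T = mean_delay(C, bits, L)             # 1 / (10 - 5) = 0.2 sec
print(T, fdm_delay(C, bits, L, 10))    # FDM with N=10 is 10x worse: 2.0 sec
```

This is the classic argument against static allocation: splitting the channel into N pieces multiplies the mean delay by N, because each user is stuck with 1/N of the capacity even when the other sub-channels sit idle.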
2. Dynamic Channel Allocation:
In the dynamic channel allocation scheme, frequency bands are not permanently assigned to the users. Instead,
channels are allotted to users dynamically as needed, from a central pool. The allocation is done considering a
number of parameters so that transmission interference is minimized.
This allocation scheme optimises bandwidth usage and results in faster transmissions.
Dynamic channel allocation is further divided into:
1. Centralised Allocation
2. Distributed Allocation
Possible assumptions include:
Station Model:
Assumes that each of the N stations independently produces frames. The probability of a frame being generated
in an interval of length Δt is λΔt, where λ is the constant arrival rate of new frames.
Collision Assumption:
If two frames overlap in time, that is a collision. Any collision is an error, and both frames must be
retransmitted. Collisions are the only possible errors.
N independent stations.
A station is blocked until its generated frame is transmitted.
The probability of a frame being generated in a period of length Δt is λΔt, where λ is the arrival rate of frames.
Only a single channel is available.
Time can be either: continuous or slotted.
Carrier Sense: A station can sense if a channel is already busy before transmission.
No Carrier Sense: A time-out is used to detect lost data.
Multiple Access Protocols are methods used in computer networks to control how data is transmitted when
multiple devices are trying to communicate over the same network. These protocols ensure that data packets are
sent and received efficiently, without collisions or interference. They help manage the network traffic so that all
devices can share the communication channel smoothly and effectively.
Who is Responsible for the Transmission of Data?
The Data Link Layer is responsible for the transmission of data between two nodes. Its main functions are:
Data Link Control
Multiple Access Control
Data Link Control
The data link control is responsible for the reliable transmission of messages over transmission channels by using
techniques like framing, error control and flow control. For Data link control refer to – Stop and Wait ARQ.
Multiple Access Control
If there is a dedicated link between the sender and the receiver then the data link control layer is sufficient;
however, if there is no dedicated link present then multiple stations can access the channel simultaneously. Hence
multiple access protocols are required to decrease collisions and avoid crosstalk. For example, in a classroom full of
students, when a teacher asks a question and all the students (or stations) start answering simultaneously (send
data at the same time), a lot of chaos is created (data overlaps or is lost); it is then the job of the teacher
(multiple access protocols) to manage the students and make them answer one at a time.
Thus, protocols are required for sharing data on non dedicated channels. Multiple access protocols can be
subdivided further as
Slotted ALOHA
It is similar to pure ALOHA, except that we divide time into slots and sending of data is
allowed only at the beginning of these slots. If a station misses the allowed time, it
must wait for the next slot. This reduces the probability of collision.
Vulnerable Time = Frame transmission time
Throughput, S = G × e^(−G), where G is the average number of frames offered per slot
Maximum throughput = 0.368, at G = 1
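The throughput curve S = G·e^(−G) is easy to verify numerically (function name is ours):

```python
import math

def slotted_aloha_throughput(G):
    # S = G * e^(-G); G is the mean number of frames offered per slot.
    return G * math.exp(-G)

# The curve peaks at G = 1 with S = 1/e, about 36.8% of the slots
# carrying a successful frame.
print(slotted_aloha_throughput(1.0))   # -> 0.36787944117144233
```

Offering either less load (G < 1, slots sit idle) or more load (G > 1, slots are wasted on collisions) reduces throughput, which is why 0.368 is the maximum.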
CSMA
Carrier Sense Multiple Access ensures fewer collisions, as the station is required to first
sense the medium (for idle or busy) before transmitting data. If it is idle, then it sends
data; otherwise it waits till the channel becomes idle. However, there is still a chance of
collision in CSMA due to propagation delay. For example, if station A wants to send
data, it will first sense the medium. If it finds the channel idle, it will start sending data.
However, before the first bit of data from station A has reached it (delayed due to
propagation delay), if station B requests to send data and senses the medium, it will
also find it idle and will also send data. This will result in collision of the data from
stations A and B.
CSMA Access Modes
1-Persistent: The node senses the channel; if idle, it sends the data, otherwise it
continuously keeps checking the medium for being idle and transmits
unconditionally (with probability 1) as soon as the channel becomes idle.
Non-Persistent: The node senses the channel, if idle it sends the data, otherwise it
checks the medium after a random amount of time (not continuously) and transmits
when found idle.
P-Persistent: The node senses the medium; if idle, it sends the data with probability
p. If the data is not transmitted (probability 1-p), it waits for some time and checks
the medium again; if the medium is then found idle, it again sends with probability p.
This repeats until the frame is sent. It is used in WiFi and packet radio systems.
O-Persistent: Superiority of nodes is decided beforehand and transmission occurs in
that order. If the medium is idle, node waits for its time slot to send data.
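The p-persistent decision step described above can be condensed into one small function; this is a sketch of the decision logic only, and the return labels are our own names:

```python
import random

def p_persistent_decide(channel_idle, p, rng=random.random):
    """One decision step of p-persistent CSMA.

    Returns:
      'send'        - channel idle, station transmits (probability p)
      'wait-slot'   - channel idle, station defers one slot (probability 1-p)
      'wait-random' - channel busy, station backs off and re-senses later
    """
    if not channel_idle:
        return "wait-random"
    return "send" if rng() < p else "wait-slot"

# A deterministic rng makes both idle-channel branches visible:
print(p_persistent_decide(True, 0.5, rng=lambda: 0.1))  # send
print(p_persistent_decide(True, 0.5, rng=lambda: 0.9))  # wait-slot
```

1-persistent and non-persistent CSMA fall out as special cases: p = 1 always sends on an idle channel, while non-persistent replaces the per-slot retry with a random wait.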
CSMA/CD
Carrier sense multiple access with collision detection. Stations can terminate
transmission of data if collision is detected. For more details refer – Efficiency of
CSMA/CD.
CSMA/CA
Carrier sense multiple access with collision avoidance. The process of collision
detection involves the sender receiving acknowledgement signals. If there is just one
signal (its own), then the data has been successfully sent, but if there are two signals (its own and
the one with which it has collided), then it means a collision has occurred. To distinguish
between these two cases, the collision must have a significant impact on the received signal.
However, this is not the case in wireless networks, so CSMA/CA is used there instead of collision detection.
CSMA/CA Avoids Collision By
Interframe Space: Station waits for medium to become idle and if found idle it does
not immediately send data (to avoid collision due to propagation delay) rather it waits
for a period of time called Interframe space or IFS. After this time it again checks the
medium for being idle. The IFS duration depends on the priority of station.
Contention Window: It is an amount of time divided into slots. If the sender is
ready to send data, it chooses a random number of slots as the wait time, and this number
doubles every time the medium is not found idle. If the medium is found busy, the sender does not restart
the entire process; rather, it stops the timer and restarts it when the channel is found idle again.
Acknowledgement: The sender re-transmits the data if acknowledgement is not
received before time-out.
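The contention-window doubling can be sketched as follows; the values cw_min = 16 and cw_max = 1024 are illustrative (802.11-like) assumptions, not taken from the text:

```python
import random

def backoff_slots(attempt, cw_min=16, cw_max=1024):
    """Pick a random backoff, doubling the contention window per failed attempt.

    attempt: number of unsuccessful transmission attempts so far (0, 1, 2, ...).
    Returns a slot count drawn uniformly from [0, cw), where cw doubles with
    each attempt until it is capped at cw_max.
    """
    cw = min(cw_min * (2 ** attempt), cw_max)
    return random.randrange(cw)

# The window doubles with every unsuccessful attempt until it hits the cap:
print([min(16 * 2 ** a, 1024) for a in range(8)])
# [16, 32, 64, 128, 256, 512, 1024, 1024]
```

Doubling the window spreads retries of repeatedly colliding stations over ever more slots, so the probability that two stations pick the same slot again drops quickly.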
2. Controlled Access
Controlled access protocols ensure that only one device uses the network at a time.
Think of it like taking turns in a conversation so everyone can speak without talking over
each other.
In this, the data is sent by that station which is approved by all other stations. For further
details refer – Controlled Access Protocols.
3. Channelization
In this, the available bandwidth of the link is shared in time, frequency or code among
multiple stations so that they can access the channel simultaneously.
Frequency Division Multiple Access (FDMA) – The available bandwidth is divided
into equal bands so that each station can be allocated its own band. Guard bands are
also added so that no two bands overlap to avoid crosstalk and noise.
Time Division Multiple Access (TDMA) – In this, the bandwidth is shared between
multiple stations. To avoid collisions, time is divided into slots and stations are allotted
these slots to transmit data. However, there is an overhead of synchronization, as each
station needs to know its time slot. This is resolved by adding synchronization bits to
each slot. Another issue with TDMA is propagation delay, which is resolved by the
addition of guard bands.
For more details refer – Circuit Switching
Code Division Multiple Access (CDMA) – One channel carries all transmissions
simultaneously. There is neither division of bandwidth nor division of time. For
example, if there are many people in a room all speaking at the same time, perfect
reception of data is still possible as long as only the two people in a given conversation
speak the same language. Similarly, data from different stations can be transmitted
simultaneously in different code languages.
Orthogonal Frequency Division Multiple Access (OFDMA) – In OFDMA the
available bandwidth is divided into small subcarriers in order to increase the overall
performance. The data is then transmitted through these small subcarriers. It is widely
used in 5G technology.
Advantages of OFDMA
High data rates
Good for multimedia traffic
Increase in efficiency
Disadvantages of OFDMA
Complex to implement
High peak-to-average power ratio
Spatial Division Multiple Access (SDMA) – SDMA uses multiple antennas at the
transmitter and receiver to separate the signals of multiple users that are located in
different spatial directions. This technique is commonly used in MIMO (Multiple-
Input, Multiple-Output) wireless communication systems.
Advantages of SDMA
Uses the frequency band effectively
The overall signal quality will be improved
The overall data rate will be increased
Disadvantages of SDMA
It is complex to implement
It requires accurate information about the channel
Almost all collisions can be avoided in CSMA/CD, but they can still occur during the
contention period. Collisions during the contention period adversely affect the
system performance; this happens when the cable is long and the packets are short.
The problem became serious as fiber-optic networks came into use. Here we shall
discuss some protocols that resolve collisions during the contention period.
Bit-map Protocol
Binary Countdown
Limited Contention Protocols
The Adaptive Tree Walk Protocol
Pure and slotted ALOHA, CSMA and CSMA/CD are contention-based protocols:
Try – if collide – retry
No guarantee of performance
What happens if the network load is high?
Collision-free protocols:
Pay a constant overhead to achieve a performance guarantee
Good when network load is high
1. Bit-map Protocol:
Bit-map protocol is a collision-free protocol. In the bit-map method, each contention
period consists of exactly N slots. If a station has a frame to send, it transmits a 1
bit in its corresponding slot. For example, if station 2 has a frame to send, it transmits a
1 bit in the 2nd slot.
In general, station j announces that it has a frame queued by inserting a 1 bit
into slot j. In this way, each station has complete knowledge of which stations wish to
transmit. There will never be any collisions because everyone agrees on who goes next.
Protocols like this, in which the desire to transmit is broadcast before the actual
transmission, are called Reservation Protocols.
For analyzing the performance of this protocol, we will measure time in units of the
contention bit slot, with a data frame consisting of d time units. Under low-load
conditions, the bitmap will simply be repeated over and over, for lack of data frames. Under
high load, when all the stations have something to send all the time, the N-bit contention period
is prorated over N frames, yielding an overhead of only 1 bit per frame.
Under low load, high-numbered stations have to wait on average half a scan (N/2 bit slots)
before starting to transmit, while low-numbered stations have to wait on average 1.5N slots.
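One contention period of the bit-map protocol can be sketched directly (function name is ours):

```python
def bitmap_round(wants_to_send):
    """One contention period of the bit-map protocol.

    `wants_to_send` has one boolean per station. Station j sets bit j of the
    contention period; the stations that set their bit then transmit in order
    of station number, so no collisions are possible.
    """
    reservation = [1 if w else 0 for w in wants_to_send]
    order = [j for j, bit in enumerate(reservation) if bit]
    return reservation, order

res, order = bitmap_round([False, True, False, True, True])
print(res, order)   # [0, 1, 0, 1, 1] [1, 3, 4]
```

The N reservation bits are the fixed overhead the text describes: under high load they are shared by up to N data frames (1 bit per frame), while under low load the whole N-bit map is spent per frame.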
2. Binary Countdown:
The binary countdown protocol is used to overcome the overhead of 1 bit per station. In
binary countdown, binary station addresses are used. A station wanting to use the
channel broadcasts its address as a binary bit string, starting with the high-order bit. All
addresses are assumed to be of the same length. Here, we will see an example to illustrate the
working of binary countdown.
In this method, the address bits broadcast by the different stations are combined, and this
decides the priority of transmission. Suppose stations 0001, 1001, 1100 and 1011 are all trying to seize the channel
for transmission. All the stations first broadcast their most significant address bit, that is
0, 1, 1, 1 respectively. The most significant bits are ORed together. Station 0001 sees the 1
in another station's address, knows that a higher-numbered station is competing
for the channel, and so gives up for the current round.
The other three stations 1001, 1100, 1011 continue. At the next bit position only station
1100 has a 1, so stations 1011 and 1001 give up because their 2nd bit is 0. Then station
1100 starts transmitting a frame, after which another bidding cycle starts.
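The bidding above can be sketched as a function that filters out stations bit by bit (function name is ours):

```python
def binary_countdown(addresses, width=4):
    """Return the winning station address among competing bit strings.

    At each bit position (high-order first) the channel ORs all transmitted
    bits; a station that sent 0 while a 1 is on the channel drops out.
    """
    alive = list(addresses)
    for bit in range(width):
        if any(a[bit] == "1" for a in alive):      # the ORed bit is 1
            alive = [a for a in alive if a[bit] == "1"]
    return alive[0]                                # addresses are unique

print(binary_countdown(["0001", "1001", "1100", "1011"]))  # station 1100 wins
```

Because addresses are unique, exactly one station survives all bit positions, and it is always the highest-numbered contender: binary countdown gives fixed priority to high addresses.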
3. Limited Contention Protocols:
Collision based protocols (pure and slotted ALOHA, CSMA/CD) are good when the
network load is low.
Collision-free protocols (bitmap, binary countdown) are good when the load is high.
How about combining their advantages:
1. Behave like the ALOHA scheme under light load
2. Behave like the bitmap scheme under heavy load.
4. Adaptive Tree Walk Protocol:
Partition the group of stations and limit the contention for each slot.
Under light load, every station can try for each slot, like ALOHA.
Under heavy load, only a group can try for each slot.
How do we do it:
1. Treat every station as a leaf of a binary tree.
2. In the first slot (after a successful transmission), all stations
can try to get the slot (under the root node).
3. If there is no conflict, fine.
4. Else, in case of conflict, only the nodes under one subtree get to try for the next
slot (depth-first search).
Slot-0 : C*, E*, F*, H* (all ready stations under node 0 can try), conflict
Slot-1 : C* (all ready stations under node 1 can try), C sends
Slot-2 : E*, F*, H* (all ready stations under node 2 can try), conflict
Slot-3 : E*, F* (all ready stations under node 5 can try), conflict
Slot-4 : E* (all ready stations under E can try), E sends
Slot-5 : F* (all ready stations under F can try), F sends
Slot-6 : H* (all ready stations under node 6 can try), H sends.
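The slot-by-slot walkthrough above can be reproduced with a small depth-first search. This is a simplified sketch: stations are split evenly at each level, every probe costs one slot, and the function name is ours.

```python
def tree_walk(ready, stations):
    """Depth-first adaptive tree walk over an ordered list of station names.

    `ready` is the set of stations with a queued frame. Each slot probes one
    tree node; a node covering more than one ready station is a conflict,
    and its two subtrees are probed in the following slots (left first).
    """
    slots = []                 # (probed group, outcome) per slot
    stack = [stations]
    while stack:
        group = stack.pop()
        contenders = [s for s in group if s in ready]
        if len(contenders) <= 1:
            # zero contenders: idle slot; one contender: it sends
            slots.append((group, contenders[0] if contenders else None))
        else:
            slots.append((group, "conflict"))
            mid = len(group) // 2
            stack.append(group[mid:])    # right subtree, probed second
            stack.append(group[:mid])    # left subtree, probed first
    return slots

slots = tree_walk({"C", "E", "F", "H"}, list("ABCDEFGH"))
print([out for _, out in slots])
# -> ['conflict', 'C', 'conflict', 'conflict', 'E', 'F', 'H']
```

With stations A–H and C, E, F, H ready, the seven outcomes match the seven slots of the walkthrough above.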
Wireless LANs (WLANs) are wireless computer networks that use high-frequency radio waves instead of
cables for connecting the devices within a limited area forming LAN (Local Area Network). Users
connected by wireless LANs can move around within this limited area such as home, school, campus, office
building, railway platform, etc.
Most WLANs are based upon the IEEE 802.11 standard, popularly known as WiFi.
Components of WLANs
Stations (STA) − Stations comprise all devices and equipment that are connected to the wireless
LAN. Each station has a wireless network interface controller. A station can be of two types −
o Wireless Access Point (WAP or AP)
o Client
Basic Service Set (BSS) − A basic service set is a group of stations communicating at the physical
layer level. BSS can be of two categories −
o Infrastructure BSS
o Independent BSS
Extended Service Set (ESS) − It is a set of all connected BSS.
Distribution System (DS) − It connects access points in ESS.
Types of WLANs
WLANs, as standardized by IEEE 802.11, operate in two basic modes: infrastructure mode and ad hoc mode.
Infrastructure Mode − Mobile devices or clients connect to an access point (AP) that in turn connects
via a bridge to the LAN or Internet. The client transmits frames to other clients via the AP.
Ad Hoc Mode − Clients transmit frames directly to each other in a peer-to-peer fashion.
Advantages of WLANs
They provide clutter-free homes, offices and other networked places.
The LANs are scalable in nature, i.e. devices may be added to or removed from the network with greater
ease than in wired LANs.
The system is portable within the network coverage. Access to the network is not bounded by the
length of the cables.
Installation and setup are much easier than wired counterparts.
The equipment and setup costs are reduced.
Disadvantages of WLANs
Since radio waves are used for communications, the signals are noisier with more interference from
nearby systems.
Greater care is needed for encrypting information. Also, they are more prone to errors. So, they
require greater bandwidth than the wired LANs.
WLANs are slower than wired LANs.
Data Link Layer Switching
Network switching is the process of forwarding data frames or packets from one port to another leading to
data transmission from source to destination. Data link layer is the second layer of the Open System
Interconnections (OSI) model whose function is to divide the stream of bits from physical layer into data
frames and transmit the frames according to switching requirements. Switching in data link layer is done by
network devices called bridges.
Bridges
A data link layer bridge connects multiple LANs (local area networks) together to form a larger LAN. This
process of aggregating networks is called network bridging. A bridge connects the different components so
that they appear as parts of a single network.
When a data frame arrives at a particular port of a bridge, the bridge examines the frame's data link address,
or more specifically, the MAC address. If the destination address is valid and requires switching,
the bridge sends the frame to the destined port. Otherwise, the frame is discarded.
The bridge is not responsible for end to end data transfer. It is concerned with transmitting the data frame
from one hop to the next. Hence, they do not examine the payload field of the frame. Due to this, they can
help in switching any kind of packets from the network layer above.
If any segment of the bridged network is wireless, a wireless bridge is used to perform the switching.
The main forms of bridging are:
simple bridging
multi-port bridging
learning or transparent bridging
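The learning (transparent) form of bridging can be sketched as a forwarding table keyed by MAC address. This is a toy model of the forwarding decision only (class and method names are ours), not a real bridge implementation:

```python
class LearningBridge:
    """Learning bridge sketch: learn source ports, then forward or flood."""

    def __init__(self, num_ports):
        self.ports = range(num_ports)
        self.table = {}                      # MAC address -> port

    def handle(self, frame_src, frame_dst, in_port):
        """Return the list of ports the frame is sent out on."""
        self.table[frame_src] = in_port      # learn where the source lives
        out = self.table.get(frame_dst)
        if out is None:
            # unknown destination: flood on every port except the arrival port
            return [p for p in self.ports if p != in_port]
        if out == in_port:
            return []                        # same segment: filter (discard)
        return [out]                         # known destination: forward

b = LearningBridge(3)
print(b.handle("aa", "bb", 0))   # bb unknown -> flood to ports 1 and 2
print(b.handle("bb", "aa", 2))   # aa was learned on port 0 -> forward to [0]
```

Note that the bridge never looks at the payload, matching the text above: it learns and forwards purely on MAC addresses, so any network-layer packet can be switched.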
Ethernet Bridges
Ethernet Bridges are wireless radios that can be used to extend a wireless network to an Ethernet
switch or hub (which can be used to extend connectivity to multiple wired devices). Ethernet bridges
can also be used to connect any device with an Ethernet port such as a Tivo, Xbox, or even a
computer to the wireless network without having to install drivers or client software. This is a great
solution for use with Mac OS and Linux computers, where drivers may be limited and more difficult
to find.
Another benefit of using a wireless bridge is that since it uses wired Ethernet to deliver bandwidth to
the client, you can extend the cat5 cable to its maximum segment length of 100 meters and still get
connectivity. In theory, by using a Power over Ethernet (PoE) injector, you can send power over the
Ethernet data cable as well and place the bridge as far away as 328 feet.
Most Ethernet bridges support external antennas. Figure 5.7 shows a Linksys WET11 with a
removable RP-TNC antenna.