Examination: Communication and Coding
Code: INS3158 - 01
Lecturer’s Signature
Program: VNU
Course Code: INS3158
Date: ………………………………
Instructions to students:
Problem 1.
1. Huffman coding is used to compactly encode the species of fish tagged by a game warden. If 50% of
the fish are bass and the rest are evenly divided among 15 other species, how many bits would be used to
encode the species when bass is tagged?
2. Several people at a party are trying to guess a 3-bit binary number. Alice is told that the number is odd; Bob
is told that it is not a multiple of 3 (i.e., not 0, 3, or 6); Charlie is told that the number contains exactly two 1's;
and Deb is given all three of these clues. How much information (in bits) did each player get about the
number?
3. X is an unknown 8-bit binary number. You are given another 8-bit binary number, Y, and told that Y differs
from X in exactly one bit position. How many bits of information about X have you been given?
4. In Blackjack the dealer starts by dealing 2 cards each to himself and his opponent: one face down, one face
up. After you look at your face-down card, you know a total of three cards. Assuming this was the first hand
played from a new deck, how many bits of information do you now have about the dealer's face-down card?
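All four questions above come down to evaluating I = log2(1/p) for the probability p of the fact learned. A minimal sketch (the helper name is my own), using the fish species of question 1 as the example:

```python
import math

def bits_of_information(p):
    """Information received on learning that an event of probability p occurred."""
    return math.log2(1 / p)

# Question 1: bass occurs with probability 0.5; each of the other
# 15 species occurs with probability 0.5/15.
print(bits_of_information(0.5))        # bass
print(bits_of_information(0.5 / 15))   # any one of the 15 rarer species
```

Note that in an optimal Huffman code a symbol of probability 1/2 gets a 1-bit codeword, so the information content and the code length agree for bass.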
Problem 2.
1. The following table shows the current undergraduate and MEng enrollments for the School of
Engineering (SOE).
When you learn a randomly chosen SOE student's department you get some number of bits of
information. For which student department do you get the least amount of information?
Design a variable-length Huffman code that minimizes the average number of bits in messages encoding the
departments of randomly chosen groups of students. Show your Huffman tree and give the code for each
department.
If your code is used to send messages containing only the encodings of the departments for each student in
groups of 100 randomly chosen students, what's the average length of such messages? Write an expression if
you wish.
2. You're playing an on-line card game that uses a deck of 100 cards containing 3 Aces, 7 Kings, 25 Queens,
31 Jacks and 34 Tens. In each round of the game the cards are shuffled, you make a bet about what type of
card will be drawn, then a single card is drawn and the winners are paid off. The drawn card is reinserted into
the deck before the next round begins.
A. How much information do you receive when told that a Queen has been drawn during the current
round?
B. Give a numeric expression for the average information content received when learning about the
outcome of a round (aka the entropy).
C. Construct a variable-length Huffman encoding that minimizes the length of messages that report the
outcome of a sequence of rounds. The outcome of a single round is encoded as A (ace), K (king), Q
(queen), J (jack) or X (ten). Specify your encoding for each of A, K, Q, J and X.
D. Using your code from part (C) what is the expected length of a message reporting the outcome of 1000
rounds (i.e., a message that contains 1000 symbols)?
E. The Nevada Gaming Commission regularly receives messages in which the outcome for each round is
encoded using the symbols A, K, Q, J and X. They discover that a large number of messages describing
the outcome of 1000 rounds (i.e. messages with 1000 symbols) can be compressed by the LZW
algorithm into files each containing 43 bytes in total. They immediately issue an indictment for running
a crooked game. Briefly explain their reasoning.
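Parts A, B and E above are numeric; the sketch below evaluates the entropy of the stated deck and compares it against the 43-byte compressed files (assuming, as the reshuffling suggests, that the outcomes of different rounds are independent):

```python
import math

# Deck composition from the problem: 3 Aces, 7 Kings, 25 Queens, 31 Jacks, 34 Tens.
counts = {'A': 3, 'K': 7, 'Q': 25, 'J': 31, 'X': 34}
total = sum(counts.values())

# Entropy: sum over outcomes of p * log2(1/p), in bits per round.
entropy = sum((c / total) * math.log2(total / c) for c in counts.values())
print(entropy)

# Part E sanity check: a fair game needs about 1000 * entropy bits to describe
# 1000 rounds, far more than the 43 * 8 bits in the compressed files.
print(1000 * entropy, 43 * 8)
```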
3. Consider messages made up entirely of vowels (A, E, I, O, U). Here's a table of probabilities for each of
the vowels:
l       p(l)    log2(1/p(l))    p(l)*log2(1/p(l))
A       0.22    2.18            0.48
E       0.34    1.55            0.53
I       0.17    2.57            0.43
O       0.19    2.40            0.46
U       0.08    3.64            0.29
Totals  1.00    12.34           2.19
A. Give an expression for the number of bits of information you receive when learning that a particular
vowel is either I or U.
B. Using Huffman's algorithm, construct a variable-length code assuming that each vowel is encoded
individually. Please draw a diagram of the Huffman tree and give the encoding for each of the vowels.
C. Using your code from above, give an expression for the expected length in bits of an encoded message
transmitting 100 vowels.
D. Ben Bitdiddle spends all night working on a more complicated encoding algorithm and sends you email
claiming that using his code the expected length in bits of an encoded message transmitting 100 vowels
is 197 bits. Would you pay good money for his implementation?
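The Huffman construction in part B can be checked mechanically. Below is a sketch using Python's heapq (the helper `huffman_lengths` is my own; it computes only codeword lengths, which is all the expected-length question needs):

```python
import heapq

def huffman_lengths(probs):
    """Return {symbol: codeword length} for a Huffman code over probs."""
    heap = [(p, i, (sym,)) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    lengths = {sym: 0 for sym in probs}
    counter = len(heap)                  # tie-breaker for equal probabilities
    while len(heap) > 1:
        p1, _, syms1 = heapq.heappop(heap)
        p2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:
            lengths[s] += 1              # every symbol in a merged subtree gains one bit
        heapq.heappush(heap, (p1 + p2, counter, syms1 + syms2))
        counter += 1
    return lengths

vowels = {'A': 0.22, 'E': 0.34, 'I': 0.17, 'O': 0.19, 'U': 0.08}
lengths = huffman_lengths(vowels)
expected = sum(p * lengths[s] for s, p in vowels.items())
print(lengths, expected)
```

With these lengths a 100-vowel message averages 100 × 2.25 bits, which can be compared with both the entropy total in the table above and Ben's claim in part D.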
Problem 3.
1. Describe the contents of the string table created when encoding a very long string of all a's using the simple
version of the LZW encoder shown below. In this example, if the decoder has received E encoded symbols
(i.e., string table indices) from the encoder, how many a's has it been able to decode?
initialize TABLE[0 to 255] = code for individual bytes
STRING = get input symbol
while there are still input symbols:
    SYMBOL = get input symbol
    if STRING + SYMBOL is in TABLE:
        STRING = STRING + SYMBOL
    else:
        output the code for STRING
        add STRING + SYMBOL to TABLE
        STRING = SYMBOL
output the code for STRING
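A direct Python transcription of the encoder pseudocode above (assuming byte-oriented input and no limit on the table size) can be used to experiment with the string of a's:

```python
def lzw_encode(data: str):
    """Simple LZW encoder: returns the list of emitted codes."""
    table = {chr(i): i for i in range(256)}   # codes 0..255 are single bytes
    next_code = 256
    codes = []
    string = data[0]
    for symbol in data[1:]:
        if string + symbol in table:
            string = string + symbol
        else:
            codes.append(table[string])
            table[string + symbol] = next_code
            next_code += 1
            string = symbol
    codes.append(table[string])
    return codes

print(lzw_encode('a' * 10))   # -> [97, 256, 257, 258]
```

For a long run of a's the emitted codes decode to runs of 1, 2, 3, 4, ... a's, which is the pattern question 1 asks about.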
Suppose that this decoder has received the following five codes from the LZW encoder (these are the first
five codes from a longer compression run):
97 -- index of 'a' in the translation table
98 -- index of 'b' in the translation table
257 -- index of second addition to the translation table
256 -- index of first addition to the translation table
258 -- index of third addition to the translation table
After it has finished processing the fifth code, what are the entries in TABLE and what is the cumulative
output of the decoder?
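A matching decoder sketch (it must handle the corner case in which a received code refers to the table entry currently being built, as code 257 does here) lets you check your table and cumulative output:

```python
def lzw_decode(codes):
    """Simple LZW decoder: returns (decoded string, entries added to the table)."""
    table = {i: chr(i) for i in range(256)}
    next_code = 256
    string = table[codes[0]]
    output = [string]
    for code in codes[1:]:
        if code in table:
            entry = table[code]
        else:                                  # code == next_code: entry defined by itself
            entry = string + string[0]
        output.append(entry)
        table[next_code] = string + entry[0]   # complete the pending table entry
        next_code += 1
        string = entry
    return ''.join(output), {k: v for k, v in table.items() if k >= 256}

print(lzw_decode([97, 98, 257, 256, 258]))
```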
3. Huffman and other coding schemes tend to devote more bits to the coding of less probable symbols. Briefly explain why.
Problem 4.
1. Consider the following two Huffman decoding trees for a variable-length code involving 5 symbols: A, B,
C, D and E.
A. Using Tree #1, decode the following encoded message: "01000111101".
B. Suppose we were encoding messages with the following probabilities for each of the 5 symbols: p(A) =
0.5, p(B) = p(C) = p(D) = p(E) = 0.125. Which of the two encodings above (Tree #1 or Tree #2) would
yield the shortest encoded messages averaged over many messages?
C. Using the probabilities of part (B), if you learn that the first symbol in a message is "B", how many bits
of information have you received?
D. Using the probabilities of part (B), if Tree #2 is used to encode messages, what is the average length of
100-symbol messages, averaged over many messages?
2. Ben Bitdiddle has been hired by the Registrar to help redesign the grades database for WebSIS. Ben is
told that the database only needs to store one of five possible grades (A, B, C, D, F). A survey of the current
WebSIS repository reveals the following probabilities for each grade:
A. Given the probabilities above, if you are told that a particular grade is "C", give an expression for the
number of bits of information you have received.
B. Ben is interested in storing a sequence of grades using as few bits as possible. Help him out by creating
a variable-length encoding that minimizes the average number of bits stored for a sequence of grades.
Use the table above to determine how often a particular grade appears in the sequence.
3. Consider a sigma-delta modulator used to convert a particular analog waveform into a sequence of 2-bit
values. Building a histogram from the 2-bit values we get the following information:
B. If we transmit the 2-bit values as is, it takes 2 bits to send each value (doh!). If we use the Huffman
code from part (A) what is the average number of bits used to send a value? What compression ratio do
we achieve by using the Huffman code?
C. Using Shannon's entropy formula, what is the average information content associated with each of the
2-bit values output by the modulator? How does this compare to the answers for part (B)?
4. In honor of Daisuke Matsuzaka's first game pitching for the Red Sox, the Boston-based members of the
Search for Extraterrestrial Intelligence (SETI) have decided to broadcast a 1,000,000 character message
made up of the letters "R", "E", "D", "S", "O", "X". The characters are chosen at random according to the
probabilities given in the table below:
Letter p(Letter)
R .21
E .31
D .11
S .16
O .19
X .02
A. If you learn that one of the randomly chosen letters is a vowel (i.e., "E" or "O") how many bits of
information have you received?
B. Nervous about the electric bill for the transmitter at Arecibo, the organizers have asked you to design a
variable length code that will minimize the number of bits needed to send the message of 1,000,000
randomly chosen characters. Please draw a Huffman decoding tree for your code, clearly labeling each
leaf with the appropriate letter and each arc with either a "0" or a "1".
C. Using your code, what bit string would they send for "REDSOX"?
Problem 5.
1. "Information, please"
A. You're given a standard deck of 52 playing cards that you start to turn face up, card by card. So far as
you know, they're in completely random order. How many new bits of information do you get when the
first card is flipped over? The fifth card? The last card?
B. Suppose there are three alternatives ("A", "B" and "C") with the following probabilities of being chosen:
p("A") = 0.8
p("B") = 0.1
p("C") = 0.1
We might encode the choice of "A" with the bit string "0", the choice of "B" with the bit string "10" and the
choice of "C" with the bit string "11".
If we record the results of making a sequence of choices by concatenating in left-to-right order the bit
strings that encode each choice, what sequence of choices is represented by the bit string
"00101001100000"?
C. Using the encoding of the previous part, what is the expected length of the bit string that encodes the
results of making 1000 choices? What is the length in the worst case? How do these numbers compare
with 1000*log2(3), which is the information content of 1000 equally probable three-way choices?
D. Consider the sum of two six-sided dice. Even when the dice are "fair", the amount of information
conveyed by a single sum depends on what the sum is, since some sums are more likely than others, as
shown in the following figure:
What is the average number of bits of information provided by the sum of 2 dice? Suppose we want to
transmit the sums resulting from rolling the dice 1000 times. How many bits should we expect that
transmission to take?
E. Suppose we want to transmit the sums resulting from rolling the dice 1000 times. If we use 4 bits to
encode each sum, we'll need 4000 bits to transmit the result of 1000 rolls. If we use a variable-length
binary code which uses shorter sequences to encode more likely sums, then the expected number of bits
needed to encode 1000 sums should be less than 4000. Construct a variable-length encoding for the sum
of two dice whose expected number of bits per sum is less than 3.5. (Hint: It's possible to find an
encoding for the sum of two dice with an expected number of bits = 3.306.)
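Parts D and E can be checked numerically. The sketch below computes the entropy of the two-dice sum, and the expected Huffman codeword length via the identity that the expected length equals the sum of all merged-node weights divided by the total weight:

```python
import heapq
import math

# Number of ways to roll each sum 2..12 with two fair dice (out of 36).
ways = [1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1]

# Entropy in bits per sum.
entropy = sum((w / 36) * math.log2(36 / w) for w in ways)
print(entropy)

# Huffman: repeatedly merge the two smallest weights; the expected codeword
# length is the sum of all merged weights divided by the total weight.
heap = list(ways)
heapq.heapify(heap)
merged_total = 0
while len(heap) > 1:
    merged = heapq.heappop(heap) + heapq.heappop(heap)
    merged_total += merged
    heapq.heappush(heap, merged)
print(merged_total / 36)   # expected bits per sum for a Huffman code
```

The second number matches the hint in part E, and multiplying both numbers by 1000 bounds the transmission of 1000 rolls.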
F. Can we make an encoding for transmitting 1000 sums that has an expected length smaller
than that achieved by the previous part?
3. Consider a Huffman code over four symbols: A, B, C, and D. For each of the following encodings
indicate if it is a valid Huffman code, i.e., a code that would have been produced by the Huffman
algorithm given some set of probabilities p(A), ..., p(D).
4. After careful data collection, Alyssa P. Hacker observes that the probability of HIGH or LOW
traffic on Storrow Drive is given by the following table:
A. If it is known that the Red Sox are playing, then how many bits of information are
conveyed by the statement that the traffic level is LOW?
B. Suppose it is known that the Red Sox are not playing. What is the entropy of the corresponding
probability distribution of traffic?
5. Consider Huffman coding over four symbols (A, B, C and D) with probabilities
p(A)=1/3, p(B)=1/2, p(C)=1/12 and p(D)=1/12.
The entropy of the discrete random variable with this probability distribution was calculated to be
1.62581 bits. Huffman's algorithm yields the following variable-length code:
A: 10
B: 0
C: 110
D: 111
which gave an expected encoding length of 1.666 bits/symbol, slightly higher than the entropy bound.
A. Suppose we made up a new symbol alphabet consisting of all possible pairs of the original four
symbols. Enumerate the new symbols and give the probabilities associated with each new
symbol.
B. What is the entropy associated with the discrete random variable that has the probability
distribution you gave in part (A)? Is it the same, bigger, or smaller than the entropy of 1.62581
calculated above? Explain.
C. Derive the Huffman encoding for the new symbols and compute the expected encoding
length expressed in bits/symbol. How does it compare with the 1.666 bits/symbol for the
original alphabet? Note: this is tedious to do by hand -- try using the Huffman program you
wrote!
EXAMINATION
Code: INS3158 - 02
Instructions to students:
C. {00000}
2. Suppose management has decided to use 20-bit data blocks in the company's new (n,20,3) error
correcting code. What's the minimum value of n that will permit the code to be used for single bit error
correction?
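Part 2 is an application of the bound for single error correction: the n − k parity bits must distinguish n + 1 outcomes (no error, or an error in one of the n positions), i.e. 2^(n−k) ≥ n + 1. A small search sketch (the function name is my own):

```python
def min_n_for_sec(k):
    """Smallest block length n such that an (n, k) code can correct
    single-bit errors: 2**(n - k) must cover the n + 1 possible outcomes
    (no error, or an error in one of the n positions)."""
    n = k + 1
    while 2 ** (n - k) < n + 1:
        n += 1
    return n

print(min_n_for_sec(20))   # -> 25
```

As a sanity check, `min_n_for_sec(4)` gives 7, the classic (7,4) Hamming code.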
3. The Registrar has asked for an encoding of class year ("Freshman", "Sophomore", "Junior", "Senior") that
will allow single error correction. Please give an appropriate 5-bit binary encoding for each of the four
years.
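For part 3, whatever 5-bit assignment is chosen, single error correction requires every pair of codewords to differ in at least 3 positions. A checker sketch with one hypothetical assignment (these codewords are an illustration, not the unique answer):

```python
from itertools import combinations

def hamming(a, b):
    """Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

# One hypothetical 5-bit assignment for the four class years.
encoding = {'Freshman': '00000', 'Sophomore': '00111',
            'Junior': '11001', 'Senior': '11110'}

min_dist = min(hamming(a, b) for a, b in combinations(encoding.values(), 2))
print(min_dist)   # must be at least 3 for single error correction
```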
4. For any block code with minimum Hamming distance at least 2t + 1 between code words, show that:
Problem 2.
1. Pairwise Communications has developed a block code with three data (D1, D2, D3) and three parity bits
(P1, P2, P3):
P1 = D1 + D2
P2 = D2 + D3
P3 = D3 + D1
A. What is the (n,k,d) designation for this code?
B. The receiver computes three syndrome bits from the (possibly corrupted) received data and parity bits:
E1 = D1 + D2 + P1
E2 = D2 + D3 + P2
E3 = D3 + D1 + P3.
The receiver performs maximum likelihood decoding using the syndrome bits. For the combinations of
syndrome bits listed below, state what the maximum-likelihood decoder believes has occurred: no
errors, a single error in a specific bit (state which one), or multiple errors.
E1 E2 E3 = 000
E1 E2 E3 = 010
E1 E2 E3 = 101
E1 E2 E3 = 111
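For part B, the decoder's table can be built by brute force: each single-bit error flips exactly the syndrome bits whose equations contain that bit. A sketch:

```python
# Parity equations: P1 = D1+D2, P2 = D2+D3, P3 = D3+D1 (mod 2).
# Syndromes:        E1 = D1+D2+P1, E2 = D2+D3+P2, E3 = D3+D1+P3.
# A single flipped bit toggles exactly the syndrome bits it participates in.
involved = {
    'D1': (1, 0, 1), 'D2': (1, 1, 0), 'D3': (0, 1, 1),
    'P1': (1, 0, 0), 'P2': (0, 1, 0), 'P3': (0, 0, 1),
}

syndrome_to_error = {(0, 0, 0): 'no errors'}
for bit, syndrome in involved.items():
    syndrome_to_error[syndrome] = 'single error in ' + bit

# Any syndrome not produced by zero or one flipped bits implies multiple errors.
for s in [(0, 0, 0), (0, 1, 0), (1, 0, 1), (1, 1, 1)]:
    print(s, syndrome_to_error.get(s, 'multiple errors'))
```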
2. Dos Equis Encodings, Inc. specializes in codes that use 20-bit transmit blocks. They are trying to design a
(20, 16) linear block code for single error correction. Explain whether they are likely to succeed or not.
Problem 3.
1. Consider the following (n,k,d) block code:
D0 D1 D2 D3 D4 | P0
D5 D6 D7 D8 D9 | P1
D10 D11 D12 D13 D14 | P2
P3 P4 P5 P6 P7 |
where D0-D14 are data bits, P0-P2 are row parity bits and P3-P7 are column parity bits. The transmitted code
word will be:
For each of the received code words, indicate the number of errors. If there are errors, indicate whether they
are correctable, and if they are, what the correction should be.
2. The following matrix shows a rectangular single error correcting code consisting of 9 data bits, 3 row
parity bits and 3 column parity bits. For each of the examples that follow, please indicate the correction the
receiver must perform: give the position of the bit that needs correcting (e.g., D7, R1), or "no" if there are no
errors, or "M" if there is a multi-bit uncorrectable error.
3. Consider two convolutional coding schemes - I and II. The generator polynomials for the two schemes
are
Notation is as follows: if the generator polynomial is, say, 1101, then the corresponding parity bit for message bit
n is (x[n] + x[n-1] + x[n-3]) mod 2, where x[n] is the message sequence.
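The parity computation just described can be sketched generically, one parity stream per generator polynomial (with x[n] = 0 assumed for n < 0):

```python
def conv_parity(message, generator):
    """Parity stream for one generator polynomial given as a bit string, e.g. '1101'."""
    taps = [i for i, g in enumerate(generator) if g == '1']
    parity = []
    for n in range(len(message)):
        # Sum the tapped message bits, treating x[n] = 0 for n < 0.
        parity.append(sum(message[n - i] for i in taps if n - i >= 0) % 2)
    return parity

# The example from the text: generator 1101 gives p[n] = x[n] + x[n-1] + x[n-3] mod 2.
print(conv_parity([1, 0, 1, 1], '1101'))   # -> [1, 1, 1, 1]
```

A full encoder simply interleaves the parity streams of all the scheme's generators.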
A. Indicate TRUE or FALSE
a. Code rate of Scheme I is 1/4.
b. Constraint length of Scheme II is 4.
c. Code rate of Scheme II is equal to code rate of Scheme I.
d. Constraint length of Scheme I is 4.
B. How many states will there be in the state diagram for Scheme I? For Scheme II?
D. Alyssa P. Hacker suggests a modification to Scheme I which involves adding a third generator
polynomial G2 = 1001. What is the code rate r of Alyssa's coding scheme? What about constraint
length k? Alyssa claims that her scheme is stronger than Scheme I. Based on your computations for r
and k, is her statement true?
Problem 3.
1. Consider a convolutional code that uses two generator polynomials: G0 = 111 and G1 = 110. You are given
a particular snapshot of the decoding trellis used to determine the most likely sequence of states visited by
the transmitter while transmitting a particular message:
A. Complete the Viterbi step, i.e., fill in the question marks in the matrix, assuming a hard branch metric
based on the Hamming distance between expected and received parity, where the received voltages are
digitized using a 0.5V threshold.
B. Complete the Viterbi step, i.e., fill in the question marks in the matrix, assuming a soft branch metric
based on the square of the Euclidean distance between expected and received parity voltages. Note that
your branch and path metrics will not necessarily be integers.
C. Does the soft metric give a different answer than the hard metric? Base your response on the
relative ordering of the states in the second column and the survivor paths.
D. If the transmitted message starts with the bits "01011", what is the sequence of bits produced by the
convolutional encoder?
2. The receiver determines the most-likely transmitted message by using the Viterbi algorithm to process the
(possibly corrupted) received parity bits. The path metric trellis generated from a particular set of received
parity bits is shown below. The boxes in the trellis contain the minimum path metric as computed by the
Viterbi algorithm.
E. Referring to the trellis above, what is the receiver's estimate of the most-likely transmitter state after
processing the bits received at time step 6?
F. Referring to the trellis above, show the most-likely path through the trellis by placing a circle around
the appropriate state box at each time step and darkening the appropriate arcs. What is the receiver's
estimate of the most-likely transmitted message?
G. Referring to the trellis above, and given the receiver's estimate of the most-likely transmitted message,
at what time step(s) were errors detected by the receiver? Briefly explain your reasoning.
H. Now consider the path metric trellis generated from a different set of received parity bits.
Referring to the trellis above, determine which pair(s) of parity bits could have been received at
time steps 1, 2 and 3. Briefly explain your reasoning.
3. Consider a binary convolutional code specified by the generators (1011, 1101, 1111).
A 10000-bit message is encoded with the above code and transmitted over a noisy channel. During Viterbi
decoding at the receiver, the state 010 had the lowest path metric (a value of 621) in the final time step, and
the survivor path from that state was traced back to recover the original message.
B. What is the likely number of bit errors that are corrected by the decoder? How many errors are likely
left uncorrected in the decoded message?
C. If you are told that the decoded message had no uncorrected errors, can you guess the approximate
number of bit errors that would have occurred had the 10000-bit message been transmitted without any
coding on the same channel?
D. From knowing the final state of the trellis (010, as given above), can you infer what the last bit of the
original message was? What about the last-but-one bit? The last 4 bits?
Consider a transition branch between two states on the trellis that has 000 as the expected set of parity bits.
Assume that 0V and 1V are used as the signaling voltages to transmit a 0 and 1 respectively, and 0.5V is used
as the digitization threshold.
E. Assuming hard decision decoding, which of the two sets of received voltages will be considered more
likely to correspond to the expected parity bits on the transition: (0V, 0.501V, 0.501V) or (0V, 0V,
0.9V)? What if one is using soft decision decoding?
Problem 4. Indicate whether each of the statements below is true or false, and give a brief reason why you
think so.
A. If the number of states in the trellis of a convolutional code is S, then the number of survivor paths at any
point of time is S. Remember that if there is a "tie" between two incoming branches (i.e., they both result in
the same path metric), we arbitrarily choose only one as the predecessor.
The path metric of a state s1 in the trellis indicates the number of residual uncorrected errors left along
the trellis path from the start state to s1.
B. Among the survivor paths left at any point during the decoding, no two can be leaving the same state at
any stage of the trellis.
C. Among the survivor paths left at any point during the decoding, no two can be entering the same state
at any stage of the trellis. Remember that if there is a "tie" between two incoming branches (i.e., they both
result in the same path metric), we arbitrarily choose only one as the predecessor.
D. For a given state machine of a convolutional code, a particular input message bit stream always
produces the same output parity bits.
Problem 5. Consider a convolutional code with two generator polynomials: G0=101 and G1=110.
B. Draw the state transition diagram for a transmitter that uses this convolutional code. The states should
be labeled with the binary string x[n-1]...x[n-k+1] and the arcs labeled with x[n]/p0p1, where x[n] is the next
message bit and p0 and p1 are the two parity bits computed from G0 and G1 respectively.
The figure below is a snapshot of the decoding trellis showing a particular state of a maximum likelihood
decoder implemented using the Viterbi algorithm. The labels in the boxes show the path metrics computed for
each state after receiving the incoming parity bits at time t. The labels on the arcs show the expected parity
bits for each transition; the actual received bits at each time are shown above the trellis.
C. Fill in the path metrics in the empty boxes in the diagram above (corresponding to the Viterbi
calculations for times 6 and 7).
D. Based on the updated trellis, what is the most-likely final state of the transmitter? How many errors
were detected along the most-likely path to the most-likely final state?
E. What's the most-likely path through the trellis (i.e., what's the most-likely sequence of states for the
transmitter)? What's the decoded message?
F. Based on your choice of the most-likely path through the trellis, at what times did the errors occur?
EXAMINATION
Code: INS3158 - 03
Instructions to students:
A. At the supermarket a checkout operator has on average 4 customers and customers arrive every 2 minutes. How long
must each customer wait in line on average?
B. A restaurant holds about 60 people, and the average person will be in there about 2 hours. On average, how many
customers arrive per hour? If the restaurant queue has 30 people waiting to be seated, how long does each person
have to wait for a table?
C. A fast-food restaurant uses 3,500 kilograms of hamburger each week. The manager of the restaurant wants to ensure
that the meat is always fresh, i.e., the meat should be no more than two days old on average when used. How much
hamburger should be kept in the refrigerator as inventory?
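All three parts above are instances of Little's law, N = λT. A sketch (units noted in the comments):

```python
# Little's law: N (average number in system) = lambda (arrival rate) * T (time in system).

# A: 4 customers in line on average, arrivals every 2 minutes -> lambda = 0.5/min.
wait_a = 4 / 0.5                 # minutes each customer waits, on average

# B: N = 60 diners, T = 2 hours -> lambda = 30 arrivals per hour.
arrivals_b = 60 / 2              # customers per hour
wait_b = 30 / arrivals_b         # hours for a 30-person queue to drain at that rate

# C: lambda = 3500 kg/week = 500 kg/day, T = 2 days -> inventory N in kilograms.
inventory_c = (3500 / 7) * 2

print(wait_a, wait_b, inventory_c)
```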
2. Calculate the latency (total delay from first bit sent to last bit received) for the following:
A. Sender and receiver are separated by two 1-Gigabit/s links and a single switch. The packet size is 5000 bits, and each
link introduces a propagation delay of 10 microseconds. Assume that the switch begins forwarding immediately after it
has received the last bit of the packet and the queues are empty.
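Under the stated assumptions (store-and-forward switch, empty queues), each of the two hops contributes one transmission delay plus one propagation delay. A sketch:

```python
packet_bits = 5000
link_rate = 1e9          # bits per second
prop_delay = 10e-6       # seconds of propagation delay per link
num_links = 2

transmission = packet_bits / link_rate             # seconds to clock the packet onto a link
latency = num_links * (transmission + prop_delay)  # store-and-forward, empty queues
print(latency * 1e6, 'microseconds')
```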
3. Network designers generally attempt to deploy networks that don't have single points of failure, though they don't always
succeed. Network topologies that employ redundancy are of much interest.
A. Draw an example of a six-node network in which the failure of a single link does not disconnect the entire network (that
is, any node can still reach any other node).
B. Draw an example of a six-node network in which the failure of any single link cannot disconnect the entire network, but
the failure of some single node does disconnect it.
C. Draw an example of a six-node network in which the failure of any single node cannot disconnect the entire network, but
the failure of some single link does disconnect it.
Note: not all of the cases above necessarily have a feasible example.
4. Under what conditions would circuit switching be a better network design than packet switching?
5. Circuit switching and packet switching are two different ways of sharing links in a communication network. Indicate True
or False for each choice.
A. Switches in a circuit-switched network process connection establishment and tear-down messages, whereas switches
in a packet-switched network do not.
B. Under some circumstances, a circuit-switched network may prevent some senders from starting new conversations.
C. Once a connection is correctly established, a switch in a circuit-switched network can forward data correctly without
requiring data frames to include a destination address.
D. Unlike in packet switching, switches in circuit-switched networks do not need any information about the network
topology to function correctly.
Problem 2.
1. Consider a switch that uses time division multiplexing (rather than statistical multiplexing) to share a link between four
concurrent connections (A, B, C, and D) whose packets arrive in bursts. The link's data rate is 1 packet per time slot.
Assume that the switch runs for a very long time.
A. The average packet arrival rates of the four connections (A through D), in packets per time slot, are 0.2, 0.2, 0.1, and
0.1 respectively. The average delays observed at the switch (in time slots) are 10, 10, 5, and 5. What are the average
queue lengths of the four queues (A through D) at the switch?
B. Connection A's packet arrival rate now changes to 0.4 packets per time slot. All the other connections have the same
arrival rates and the switch runs unchanged. What are the average queue lengths of the four queues (A through D)
now?
2. Alyssa P. Hacker has set up an eight-node shared-medium network running the Carrier Sense Multiple Access (CSMA) MAC
protocol. The maximum data rate of the network is 10 Megabits/s. Including retries, each node sends traffic according to some
unknown random process at an average rate of 1 Megabit/s per node. Alyssa measures the network's utilization and finds that it
is 0.75. No packets get dropped in the network except due to collisions, and each node's average queue size is 5 packets. Each
packet is 10000 bits long.
A. What fraction of packets sent by the nodes (including retries) experience a collision?
B. What is the average queueing delay, in milliseconds, experienced by a packet before it is sent over the medium?
Problem 3.
1. Little's law can be applied to a variety of problems in other fields. Here are some simple examples for you to work out.
A. F freshmen enter MIT every year on average. Some leave after their SB degrees (four years), the rest leave after their
MEng (five years). No one drops out (yes, really). The total number of SB and MEng students at MIT is N. What
fraction of students do an MEng?
B. A hardware vendor manufactures $300 million worth of equipment per year (= invoice$/year). On average, the
company has $45 million in accounts receivable (= invoice$). How much time elapses between invoicing and
payment?
C. While reading a newspaper, you come across a sentence claiming that "less than 1% of the people in the world die
every year". Using Little's law (and some common sense!), explain whether you would agree or disagree with this
claim. Assume that the number of people in the world does not decrease during the year (this assumption holds).
D. (This problem is actually almost related to networks.) Your friendly 6.02 professor receives 200 non-spam emails
every day on average. He estimates that of these, 50 need a reply. Over a period of time, he finds that the average
number of unanswered emails in his inbox that still need a reply is 100.
i. On average, how much time does it take for the professor to send a reply to an email that needs a response?
ii. On average, 6.02 constitutes 25% of his emails that require a reply. He responds to each 6.02 email in 60
minutes, on average. How much time on average does it take him to send a reply to any non-6.02 email?
2. You send a stream of packets of size 1000 bits each across a network path from Cambridge to Berkeley. You find that the
one-way delay varies between 50 ms (in the absence of any queueing) and 125 ms (full queue), with an average of 75 ms. The
transmission rate at the sender is 1 Mbit/s; the receiver gets packets at the same rate without any packet loss.
A. What is the mean number of packets in the queue at the bottleneck link along the path (assume that any queueing
happens at just one switch)?
You now increase the transmission rate to 2 Mbits/s. You find that the receiver gets packets at a rate of 1.6 Mbits/s.
The average queue length does not change appreciably from before.
B. What is the packet loss rate at the switch?
C. What is the average one-way delay now?
Problem 4.
1. Consider the network topology shown below. Assume that the processing delay at all the nodes is negligible.
A. The sender sends two 1000-byte data packets back-to-back with a negligible inter-packet delay. The queue has no
other packets. What is the time delay between the arrival of the first bit of the second packet and the first bit of the
first packet at the receiver?
B. The receiver acknowledges each 1000-byte data packet to the sender, and each acknowledgment has a size A = 100
bytes. What is the minimum possible round trip time between the sender and receiver? The round trip time is defined
as the duration between the transmission of a packet and the receipt of an acknowledgment for it.
2. Annette Werker has developed a new switch. In this switch, 10% of the packets are processed on the "slow path", which
incurs an average delay of 1 millisecond. All the other packets are processed on the "fast path", incurring an average delay of
0.1 milliseconds. Annette observes the switch over a period of time and finds that the average number of packets in it is. What
is the average rate, in packets per second, at which the switch processes packets?
3. Alyssa P. Hacker designs a switch for a circuit-switched network to send data on a 1 Megabit/s link using time division
multiplexing (TDM). The switch supports a maximum of 20 different simultaneous conversations on the link, and any given
sender transmits data in frames of size 2000 bits. Over a period of time, Alyssa finds that the average number of conversations
simultaneously using the link is 10. The switch forwards a data frame sent by a given sender every δ seconds according to
TDM. Determine the value of δ.
Problem 5.
1. In the following plot of a voltage waveform from a transmitter, the transmitter sends 0 Volts for a zero bit and 1.0 Volts
for a one bit, and is sending bits with a certain number of samples per bit.
A. What is the largest number of samples per bit the transmitter could be using?
2. The following figure shows plots of several received waveforms. The transmitter is sending sequences of binary symbols
(i.e., either 0 or 1) at some fixed symbol rate, using 0V to represent 0 and 1V to represent 1. The horizontal grid spacing is 1
microsecond (1e-6 sec).
A. Find the slowest symbol rate that is consistent with the transitions in the waveform.
Instructions to students:
Problem 1.
1. When the input to a linear and time-invariant (LTI) system is
x[0] = 0, x[1] = 1, x[2] = 1 and x[n] = 0 for all other values of n,
the system produces the output
y[0] = 1, y[1] = 2, y[2] = 1 and y[n] = 0 for all other values of n.
B. What are the nonzero values of the output of this LTI system when the input is
x[0] = 0,
x[1] = 1,
x[2] = 1,
x[3] = 1,
x[4] = 1 and
x[n] = 0 for all other values of n?
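The x[n]/y[n] pair listed above reads like an input/output pair for an LTI system (the lead-in sentence for this part is missing from the text, so that pairing is an assumption). A minimal sketch: recover the unit-sample response by long division (deconvolution), then reuse it for part B:

```python
# Recover h[n] from an input/output pair, then apply it to the part-B input.
# Assumes y is the system's response to x, as the surrounding text implies.

def convolve(a, b):
    """Direct discrete convolution of two finite sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, av in enumerate(a):
        for j, bv in enumerate(b):
            out[i + j] += av * bv
    return out

def deconvolve(y, x):
    """Polynomial long division y / x; assumes a finite, exact quotient."""
    lead = next(i for i, v in enumerate(x) if v != 0)  # skip leading zeros
    x = x[lead:]
    r = list(y)
    h = []
    for n in range(len(y) - len(x) + 1):
        q = r[n] / x[0]
        h.append(q)
        for j, xv in enumerate(x):
            r[n + j] -= q * xv
    return h, lead  # h starts at index -lead relative to y's first index

x = [0, 1, 1]             # x[0..2], zero elsewhere
y = [1, 2, 1]             # y[0..2], zero elsewhere
h, shift = deconvolve(y, x)
# h = [1.0, 1.0] with shift = 1: the response is nonzero at n = -1 and n = 0,
# i.e. this x/y pairing implies a non-causal system.

x_b = [0, 1, 1, 1, 1]     # part-B input
y_b = convolve(h, x_b)    # first sample of y_b sits at time n = -shift
```

Under this reading, the nonzero part-B output samples come out as 1, 2, 2, 2, 1 at n = 0 through 4.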
2. Determine the output y[n] for a system with the input x[n] and unit-sample response h[n] shown below.
Assume h[n]=0 and x[n]=0 for any times n not shown.
3. A discrete-time linear system produces output v when the input is the unit step u. What is the output h
when the input is the unit-sample δ? Assume v[n]=0 for any times n not shown below.
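Since the unit sample satisfies δ[n] = u[n] − u[n−1], linearity and time invariance give h[n] = v[n] − v[n−1]. The v[n] below is a hypothetical step response (the exam's plot is not reproduced here), used only to show the first-difference computation:

```python
# h[n] = v[n] - v[n-1]: first difference of the step response.
# v is a hypothetical step response, with v[n] = 0 for n < 0.

v = [1, 3, 4, 4, 4]
h = [v[n] - (v[n - 1] if n > 0 else 0) for n in range(len(v))]
# h = [1, 2, 1, 0, 0]
```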
Problem 2.
1. Again let α = .7 and β = .3. Derive a deconvolver for this channel and compute the input sequence that
produced the following output:
y[n] = [.7, 1, 1, .3, .7, 1, .3, 0], followed by all 0's.
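A sketch of one such deconvolver, assuming the channel in question is the two-tap model y[n] = αx[n] + βx[n−1] (that model comes from the earlier parts of this problem, which are not reproduced here). Solving for the input gives x[n] = (y[n] − βx[n−1])/α, which can be run sample by sample:

```python
# Deconvolver for the assumed channel y[n] = alpha*x[n] + beta*x[n-1]:
# rearranging gives x[n] = (y[n] - beta*x[n-1]) / alpha, computed causally.

ALPHA, BETA = 0.7, 0.3

def deconvolve(y):
    x, prev = [], 0.0  # x[-1] = 0: channel assumed initially at rest
    for yn in y:
        xn = (yn - BETA * prev) / ALPHA
        x.append(xn)
        prev = xn
    return x

y = [0.7, 1, 1, 0.3, 0.7, 1, 0.3, 0]   # followed by all 0's
x = [round(v) for v in deconvolve(y)]  # round away floating-point residue
# x = [1, 1, 1, 0, 1, 1, 0, 0]
```

The rounding only suppresses floating-point residue; with exact arithmetic the recovered input is a clean bit sequence.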
2. Suppose four different channels {I, II, III, IV} have four different unit sample responses: h1 = .25, .25, .25,
.25, 0, ...
Each of the following eye diagrams is associated with transmitting bits using one of the four channels, where
five samples were used per bit. That is, a one bit is five one-volt samples and a zero bit is five zero-volt
samples. Please determine which channel was used in each case.
3. This question refers to the LTI systems, I, II and III, whose unit-sample responses are shown below:
In this question, the inputs to these systems are bit streams with eight voltage samples per bit, with eight
one-volt samples representing a one bit and eight zero-volt samples representing a zero bit.
A. Which system (I, II or III) generated the following eye diagram? To ensure at least partial credit for
your answer, explain what led you to rule out the systems you did not select.
Problem 3.
h[n] = 1/2 for n = 0, 1, 2
h[n] = 0 otherwise
please determine the maximum value of the output of the channel and the index at which that maximum
occurs.
2. For this problem, please consider three linear and time-invariant channels, channel one, channel two, and
channel three. The unit sample responses for each of these three channels are plotted below. Please use these
plots to answer all the parts of this question.
A. Which channel (1, 2, or 3) has the following step response, and what is the maximum value of
the step response?
B. Which channel (1, 2, or 3) produced the pair of transmitted and received samples in the graph below,
and what is the value of voltage sample number 24 (assuming the transmitted samples have the value of
either one volt or zero volts)?
Problem 4.
In this problem you will be answering questions about a causal linear time-invariant channel characterized by
its response to a five-sample pulse, denoted p5[n].
A. Suppose the input to the channel is as plotted below. Plot the output of the channel on the axes
provided beneath the input.
B. The unit sample response, h[n], can be related to the step response, s[n] by the formula h[n] = s[n]
- s[n-1]. Please derive a similar formula for h[n] in terms of the five-sample pulse response p5[n]
(an infinite series is an acceptable form for the answer).
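Since the five-sample pulse is u[n] − u[n−5], the step response satisfies s[n] = Σ_{k≥0} p5[n−5k], and therefore h[n] = Σ_{k≥0} (p5[n−5k] − p5[n−5k−1]). A numeric check of that series, using a hypothetical channel (the check validates only the identity, not the exam's specific p5[n]):

```python
# Verify h[n] = sum_{k>=0} (p5[n-5k] - p5[n-5k-1]) on a hypothetical channel.

def convolve(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, av in enumerate(a):
        for j, bv in enumerate(b):
            out[i + j] += av * bv
    return out

h_true = [0.5, 0.3, 0.1, 0.1]   # hypothetical causal channel
pulse5 = [1.0] * 5              # five-sample pulse u[n] - u[n-5]
p5 = convolve(h_true, pulse5)   # the channel's five-sample pulse response

def p5_at(n):
    return p5[n] if 0 <= n < len(p5) else 0.0

# reconstruct h[n] from the series (finite here, since p5 has finite length)
h_rec = [sum(p5_at(n - 5 * k) - p5_at(n - 5 * k - 1) for k in range(5))
         for n in range(len(h_true))]
```

The reconstructed h_rec matches h_true, confirming the series form of the answer.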
Problem 5.
For all parts of this problem, please consider five linear and time-invariant channels, cleverly titled channel I,
channel II, channel III, channel IV, and channel V. The unit sample response for each of these five channels is
plotted below, with the values outside the interval 0 to 14 being zero. Please use these plots to answer all the
parts of this problem.
Please note:
All the voltage values in the five plots are integer multiples of 0.1 volt.
B. Which two channels have step responses, s[n], that approach the same value as n → ∞, and what is that
value?
C. Suppose the input to each of the channels is x[n] = 1 for 0 ≤ n ≤ 9 and zero otherwise. Which
channel has the output y[n] plotted below, and what is the value of the n = 15 output sample (not
plotted)?
EXAMINATION
Code: INS3158 - 05
Give an expression for the magnitude of a complex exponential with frequency φ, i.e., |ejφ|.
Problem 2.
A. Prove the validity of the following formula, often referred to as the finite sum formula:
1 + α + α^2 + ... + α^(N-1) = (1 - α^N)/(1 - α), for α ≠ 1
Problem 3.
A. Give an expression for the frequency response of the system H(ejΩ) in terms of h[n].
B. If h[0]=1, h[1]=0, h[2]=1, and h[n]=0 for all other n, what is H(ejΩ)?
C. Let h[n] be defined as in part B and x[n] = cos(φn). Is there a value of φ with 0 ≤ φ ≤ π such that
y[n] = 0 for all n?
D. Let h[n] be defined as in part B. Find the maximum magnitude of y[n] if x[n] = cos(πn/4).
E. Let h[n] be defined as in part B. Find the maximum magnitude of y[n] if x[n] = cos(-(π/2)n).
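A quick numeric companion to parts B through E: with h[0] = 1, h[1] = 0, h[2] = 1 the system is y[n] = x[n] + x[n−2], and H(e^{jΩ}) = 1 + e^{−j2Ω}. The magnitude |H| vanishes at Ω = π/2, so an input cos((π/2)n) is wiped out entirely:

```python
# Frequency response of h = [1, 0, 1]: H(e^{jOmega}) = 1 + e^{-j 2 Omega}.
import cmath
import math

h = [1, 0, 1]

def H(omega):
    return sum(h[n] * cmath.exp(-1j * omega * n) for n in range(len(h)))

# the response has a null at Omega = pi/2
null_mag = abs(H(math.pi / 2))           # essentially 0

# confirm in the time domain: y[n] = x[n] + x[n-2] with x[n] = cos((pi/2) n)
x = [math.cos(math.pi / 2 * n) for n in range(50)]
y = [x[n] + x[n - 2] for n in range(2, 50)]
peak = max(abs(v) for v in y)            # essentially 0
```

The same function answers part D: |H(π/4)| = |1 − j| = √2, so a unit-amplitude cosine at Ω = π/4 emerges with maximum magnitude √2.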
Problem 4.
In answering the questions below, please consider the unit sample response and frequency response of two
filters, H1 and H2, plotted below.
Note: the only nonzero values of the unit sample response for H1 are: h1[0] = 1, h1[1] = 0, h1[2] = 1.
Note: the only nonzero values of the unit sample response for H2 are: h2[0] = 1, h2[1] = -sqrt(3), h2[2] = 1.
In answering the several parts of this review question, consider four linear time-invariant systems, denoted A,
B, C, and D, each characterized by the magnitude of its frequency response, |HA(ejΩ)|, |HB(ejΩ)|,
|HC(ejΩ)|, and |HD(ejΩ)| respectively, as given in the plots below. This is a review problem, not an actual
exam question, so similar concepts are tested multiple times to give you practice.
A. Which frequency response (A, B, C or D) corresponds to a unit sample response given by
h[n] = α δ[n] - h1[n]
and what are the numerical values of h[2], h[3] and H(ej0)?
Which system (A, B, C or D) produced the output y[n] shown below, and what is the value of y[n] for n >
10?
Which system (H1 or H2) produced the output y[n] shown below, and what is the value of y[22]?
Problem 5.
In answering the several parts of this question, consider three linear time-invariant filters, denoted A, B, and
C, each characterized by the magnitude of their frequency responses, |HA(ejΩ)|, |HB(ejΩ)|, |HC(ejΩ)|,
respectively, as given in the plots below.
A. Which frequency response (A, B, or C) corresponds to the following unit sample response, and what is
maxΩ |H(ejΩ)| for your selected filter? Please justify your selection.
B. Which frequency response (A, B, or C) corresponds to the following unit sample response, and is maxΩ
|H(ejΩ)| > 6 for your selected filter? Please justify your answers.
C. Suppose the input to each of the above three filters is x[n] = 0 for n < 0 and for n ≥ 0 is
x[n] = cos((π/3)n) + cos(πn) + 1.0
Which filter (A, B, or C) produced the output y[n] below, and what is maxΩ |H(ejΩ)| for your
selected system?
D. Six new filters were generated using the unit sample responses of filters A, B and C, denoted hA[n],
hB[n], and hC[n] respectively. The unit sample responses of the new filters were generated in the
following way:
Which of the six new filters has the frequency response plotted below?
Which of the six new filters from the previous question has the frequency response plotted below?