
Fundamentals of MIMO Wireless Communication

Prof. Suvra Sekhar Das


Department of Electronics and Communication Engineering
Indian Institute of Technology, Kharagpur

Lecture – 39
Capacity of Random Channel

Welcome to the course on Fundamentals of MIMO Wireless Communication. Till now we have seen the expression of capacity for two important cases. In the first case, no channel state information is available at the transmitter, and we said the best strategy is to put equal power across all the transmitting antennas. The second case is when you have channel state information at the transmitter and the channel modes are all accessible; in that case the gross result we got is that it is better to do power allocation across the different antennas, and the best strategy for power allocation is to put more power on the mode which has better channel strength. That was the result of the water pouring algorithm which we had shown in the previous lecture.

So, we carry on with those concepts. What we are going to discuss initially is the mechanisms or techniques by which you could achieve those results; then we will move on to see the effect of correlation on MIMO channel capacity, and we will also look further at the random MIMO channel and what the outcome on capacity is under those conditions.
(Refer Slide Time: 01:28)

So, when we take the MIMO transceiver system, again I draw the transmitter with multiple antennas, and at the receiver we have multiple antennas. The first thing we said is the case where there is a one-directional flow of information, with no feedback. In that case there would be signals S_1 up to S_MT that get transmitted, and the received vector Y, with components Y_1 up to Y_Mr, would be H, the matrix channel which lies between the transmitter and the receiver, times S, plus the noise vector: Y = H S + n. Now, if we have to transmit these signals, the capacity expression told us to put equal power across the antennas; but the result we got, namely equal power allocation, never told us how you can send symbols across these parallel channels so as to achieve the result which we have shown.

So, the mechanism we use here is that we will be sending S_1 to S_MT, that is, different symbols, directly; say S_1, S_2 up to S_MT, and we have this channel matrix plus noise acting across them. If H is orthogonal, that means if the channel columns are orthogonal, then one of the simplest strategies at the receiver would be to multiply H hermitian with Y. If we multiply H hermitian with Y we get H hermitian H times S plus H hermitian times the noise, and H hermitian H would be the identity matrix. So we get the identity matrix times S; of course, with our notation, there is the scale factor root over E_S over M_T on the signal. We could also have a factor zeta upon M here, so that the trace meets the total channel strength constraint; that is just a scalar modification of this.

So, when we expand this, with n tilde denoting the processed noise, what we have over here is a diagonal (identity) matrix. So for each branch that we receive we could write Y_i tilde equal to root over E_S over M_T times S_i plus n_i tilde, because the matrix is diagonal. What we can clearly see from this expression is that any symbol received in the corresponding branch of the receiver does not have the influence of the other symbols, whereas in a typical MIMO communication whatever has been transmitted from all the antennas gets added up together at every one of the receive antennas. So if the channel is orthogonal you can get that. If the channel is not orthogonal, the other simple way of getting back the transmitted signal would be to multiply by H inverse, if H inverse exists; for this we can assume a full-rank M cross M square matrix. So, for that particular case, H inverse Y would give us S plus H inverse n. This is the zero forcing mechanism.

So, even here S is a vector. Basically we have the vector Y cap whose entries are Y_i cap equal to S_i plus n_i double tilde; this noise, which we write with a double tilde, is different from that of the previous case. Here again we have avoided interference; there is no interference between the symbols. If H is non-square then one could use (H hermitian H) whole inverse times H hermitian, multiplied by Y. If we are going to do this kind of equalization to make it square, the end result again would be S plus the pseudo-inverse applied to the noise, that is, (H hermitian H) inverse times H hermitian times n. So this is what the received vector is going to be.

So, here again Y_i would be S_i plus n_i (put a triple tilde on it); again in this format we are not getting any interference from any other symbols. In this way one could construct the receiver. One could also use an MMSE kind of approach, where one would use (H hermitian H plus M_T N_0 upon E_S times the identity) inverse, times H hermitian, times Y. This is the MMSE approach; this way we would also get the symbols back, but there would be some minor amount of residual interference depending on the channel condition. So, what we effectively have through this mechanism, which is known as Vertical Bell Labs Layered Space-Time coding, V-BLAST in short, is that you could send the symbols directly across the antennas, and at the receiver you could do different kinds of processing and recover the symbols without interference. Thereby you are achieving, in effect, parallel transmission, and we would remember that the SNR of the i-th stream is indicated by lambda_i, where lambda_i is the eigenvalue of the i-th channel mode.

So, if H is a square matrix you have M parallel links, or in general you have r parallel links, depending upon the rank of the matrix. So this is one mechanism by which you can achieve parallel data transmission when there is no channel state information at the transmitter. After this we move on to look at the second scheme, where channel state information is made available at the transmitter through feedback between Tx and Rx.
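Before that, here is a minimal numpy sketch of the linear receivers just described; the antenna counts, the QPSK alphabet and the noise level are illustrative assumptions, not values fixed by the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)
Mt, Mr = 4, 4                       # illustrative antenna counts (assumed)
Es, N0 = 1.0, 0.01                  # symbol energy and noise level (assumed)

# i.i.d. Rayleigh channel and QPSK symbols
H = (rng.standard_normal((Mr, Mt)) + 1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)
s = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2), Mt)
n = np.sqrt(N0 / 2) * (rng.standard_normal(Mr) + 1j * rng.standard_normal(Mr))

y = np.sqrt(Es / Mt) * H @ s + n    # received vector: y = sqrt(Es/Mt) H s + n

# zero forcing: pseudo-inverse (H^H H)^{-1} H^H cancels inter-stream interference
s_zf = np.linalg.pinv(H) @ y / np.sqrt(Es / Mt)

# MMSE: regularized inverse trades residual interference against noise enhancement
W = np.linalg.inv(H.conj().T @ H + (Mt * N0 / Es) * np.eye(Mt)) @ H.conj().T
s_mmse = W @ y / np.sqrt(Es / Mt)

print(np.round(s, 2))
print(np.round(s_zf, 2))
print(np.round(s_mmse, 2))
```

With the low noise level assumed here, both detectors return estimates close to the transmitted symbols, each stream free of the others, as described above.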

(Refer Slide Time: 08:11)

So, basically here there is feedback, and if we remember, the result on capacity in this case told us that we have feedback about the channel state information. So we can get the eigenvalues of H hermitian H, and once we get them we have an indication of the channel strength of the parallel modes on which we can send. Based on that, we will again be distributing the power, giving more of it to the modes of the channel with larger eigenvalues. Here again we have S as a vector of signals to be sent; there is this channel H which gets multiplied with S, plus there is the noise, and this is what is received across the receive antennas. Here again, as we can clearly see, these signals get mixed up and you have a mixture of signals at each antenna.

Now, what we have said is that when you have feedback, what we could do is multiply by V at the transmitter side; that means there is some processing by the matrix V, and there is some processing at the receiver with U hermitian. This is what we had proposed when we discussed it; U and V are unitary matrices. So then, if you expand H by its singular value decomposition, H can be stated as U D V hermitian, and at the receiver we get U hermitian times (U D V hermitian V S plus n), which we could write as Y cap; U hermitian U is the identity matrix and V hermitian V is the identity matrix.

So, we have D times S plus n cap, which we can call the Y vector cap; D is the diagonal matrix consisting of the singular values. Therefore we could write Y_i equal to, with the factor root over E_S by M_T which I tend to miss, root over E_S upon M_T times sigma_i times S_i plus n_i cap. So this again we see: through this mode we are not having any interference from any other term. So in this case also we are able to establish parallel links between the transmitter and receiver, and hence both of the mechanisms we have just now discussed provide us, as we can clearly see, the gain of spatial multiplexing. This is an important piece of terminology in the domain of MIMO.
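As a small sketch of this SVD-based processing; the dimensions and constellation are again assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 4                               # Mt = Mr = M, illustrative
Es, N0 = 1.0, 0.01                  # assumed values

H = (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))) / np.sqrt(2)
U, sig, Vh = np.linalg.svd(H)       # H = U D V^H, with D = diag(sig)

s = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2), M)
n = np.sqrt(N0 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

x = Vh.conj().T @ s                 # precode with V at the transmitter
y = np.sqrt(Es / M) * H @ x + n     # pass through the channel
y_hat = U.conj().T @ y              # shape with U^H at the receiver

# y_hat[i] = sqrt(Es/M) * sig[i] * s[i] + n_hat[i]: parallel, interference-free pipes
print(np.round(y_hat / (np.sqrt(Es / M) * sig), 2))
print(np.round(s, 2))
```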

So, clearly, what we are achieving here in both these cases, with no feedback and with feedback, is that we are transmitting S_1 to S_MT in parallel. So there is multiplexing, and this multiplexing is in the space domain; that is why it is spatial multiplexing. In this particular scheme there is channel state information at the transmitter because of feedback, and we are doing some pre-processing, or precoding as it is also known; so this particular method is known as SVD-based precoding. If we follow this SVD-based precoding mechanism, we can achieve the capacity when we have feedback information at the transmitter. And of course, as has been said, we have to distribute the power such that gamma_i, the power on the i-th mode, equals a constant minus 1 over (E_S by (M_T N_0) times lambda_i); that second term is 1 upon the SNR of the mode, so this allocation puts more power on modes with higher lambda values, that is, higher SNR (a sketch of the algorithm follows below). If we add this kind of power distribution along with the precoding, then that naturally leads us to a parallel sending of signals such that we achieve the capacity with channel state information at the transmitter.
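A sketch of the water pouring allocation just described, under the assumption that the total power is normalized to the number of modes; only the relative allocation matters for the illustration, and the eigenvalues fed in are made up.

```python
import numpy as np

def water_pouring(lams, snr):
    """Allocate gamma_i = mu - 1/(snr * lam_i) over the modes kept active,
    with sum(gamma_i) equal to the number of modes (an assumed normalization)."""
    lams = np.sort(np.asarray(lams, dtype=float))[::-1]
    M = len(lams)
    for k in range(M, 0, -1):                           # keep the k strongest modes
        mu = (M + np.sum(1.0 / (snr * lams[:k]))) / k   # water level for k modes
        gamma = mu - 1.0 / (snr * lams[:k])
        if gamma[-1] > 0:                               # weakest kept mode still gets power
            return np.concatenate([gamma, np.zeros(M - k)])
    return np.zeros(M)

# example: strong modes get more power, the weakest modes are switched off
print(water_pouring([4.0, 1.0, 0.25, 0.01], snr=1.0))
```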

So, summarily, when we do either of these two schemes we are sending parallel streams of information from the transmitter to the receiver. The number of such parallel links is clearly as many as the number of independent links available, and this is indicated by the rank of H; the rank of H indicates the number of independent links. There are many details to it: sometimes it is not very easy to identify the rank of a matrix numerically, and one deals instead with the eigenvalues of H hermitian H. One would be interested to know the ratio of lambda_1, which is basically lambda max, to lambda min. This gives us the condition number, which tells us what the distribution of eigenvalues is, and we have seen that capacity is maximized when all the eigenvalues are equal.

Say they are all equal to zeta upon M; then it is maximized. If instead this spread is very large, then capacity is not maximum. So we would like to choose the number of modes; that means it may happen that I have a 16 cross 16 system or an 8 cross 8 system, but if instead of taking the full 8 cross 8 I choose a 6 cross 6 subset, it could give me a better sum total capacity. So one has to try and see which of these combinations gives a better capacity, and one chooses that; this is one way of quickly circumventing the problem, instead of blindly searching all possible combinations.
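A brute-force version of that search, sketched with assumed values (a random 4 cross 4 channel at low SNR) rather than the 8 cross 8 case of the lecture; it simply evaluates the equal-power capacity of every transmit-antenna subset.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
M, snr = 4, 1.0                     # small system, low SNR, purely illustrative

H = (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))) / np.sqrt(2)
print("condition number of H:", round(float(np.linalg.cond(H)), 2))

def capacity(Hs, snr):
    """Equal-power capacity log2 det(I + (snr/Mt) Hs Hs^H) for the chosen columns."""
    Mr, Mt = Hs.shape
    G = np.eye(Mr) + (snr / Mt) * Hs @ Hs.conj().T
    return float(np.real(np.log2(np.linalg.det(G))))

# try every subset of transmit antennas (columns of H) and keep the best
best = max((capacity(H[:, list(c)], snr), c)
           for k in range(1, M + 1) for c in combinations(range(M), k))
print("all antennas :", round(capacity(H, snr), 3))
print("best subset  :", round(best[0], 3), "columns", best[1])
```

At low SNR, or for an ill-conditioned channel, a well-chosen subset can match or beat the full set, which is the point made above.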

So, we have seen at least two ways in which MIMO channels could be exploited. First we saw the expression of capacity, and then we looked at how the signals should be sent: a first mode where you send the signals as they are, and a second mode where you do some precoding. The precoding is based on the SVD, the singular value decomposition, of the channel, and we use the V matrix at the transmitter and the U matrix at the receiver so that you get r parallel independent data pipes. Since you are getting parallel data pipes with the help of multiple antennas, these mechanisms are known as spatial multiplexing. Now, before we looked at capacity we had talked about diversity, that was spatial diversity. So we have seen two very important things: through spatial diversity we are able to improve the error probability, and through spatial multiplexing we are able to improve the capacity. So, with this we move forward and take a look at some more things on capacity.

(Refer Slide Time: 15:31)

So, the first thing we are going to look at is the influence of spatial correlation. We have seen that H, the channel, could be written as the correlation matrix at the receiver raised to the power half, times H_w, which is the spatially white channel, times R_t raised to the power half: H = R_r^(1/2) H_w R_t^(1/2). The first factor captures the correlation at the receiver; the last captures the correlation at the transmitter. Of course, we have the diagonal elements of R_r set equal to 1, so basically we are saying that it is normalized; all the R_r coefficients are normalized, the R_t coefficients are also normalized, and this results in the expected value of |H_ij| squared being equal to 1.

So, all we are saying is that the channel coefficients are normalized to unit power. The expression of capacity that we arrived at earlier is log determinant of (I_Mr plus E_S by (N_0 M_T) times H H hermitian). Instead of H and H hermitian we now substitute: for H we write R_r^(1/2) H_w R_t^(1/2), and for H hermitian we have R_t^(1/2) H_w hermitian R_r^(1/2); the two R_t^(1/2) factors in the middle we could write together as R_t. We will make the assumption that M_r equals M_T equals M, and we will make the full-rank approximation. When we write it in this form, since it is a determinant, we could say that at high SNR the effect of the identity term could be neglected: the expression is dominated by the channel term because E_S by N_0 is much, much greater than 1 (the determinant is a product over eigenvalues, and the eigenvalues of the channel term dominate those of the identity, which are all 1).

So, when the channel term is much greater than one, the one can be neglected, and this capacity expression we could approximate as log base 2 of the determinant of (E_S by (M N_0)) times H H hermitian, where we have set M_T equal to M_r equal to M. Now, the determinant of AB is equal to the determinant of A times the determinant of B.

So what we have is the determinant of three factors. We have the H_w term, which gives log base 2 determinant of (E_S by (M N_0)) times H_w H_w hermitian, as one of the terms; plus log base 2 determinant of R_r; plus log base 2 determinant of R_t. If you put them together, the log determinant of this times this times this gives exactly those three terms, and that is the result of this expression when the channel term is much greater than 1.
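A quick numeric check of this high-SNR decomposition; the exponential correlation model with rho = 0.7 used here, and the use of the same R at both ends, are assumptions made only for the illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
M, snr = 4, 1e4                     # SNR high so the identity term is negligible

rho = 0.7                           # assumed exponential correlation model
R = rho ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))
w, V = np.linalg.eigh(R)
R_half = V @ np.diag(np.sqrt(w)) @ V.T      # symmetric square root of R

Hw = (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))) / np.sqrt(2)
H = R_half @ Hw @ R_half            # H = Rr^(1/2) Hw Rt^(1/2), same R both ends

def cap(A):
    G = np.eye(M) + (snr / M) * A @ A.conj().T
    return float(np.real(np.log2(np.linalg.det(G))))

approx = cap(Hw) + 2 * np.log2(np.linalg.det(R))   # + log2 det Rr + log2 det Rt
print(round(cap(H), 2), "vs approximation", round(approx, 2))
print("log2 det R =", round(float(np.log2(np.linalg.det(R))), 3), "(<= 0)")
```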

So, what we see from this is that the effects of correlation at the transmitter and at the receiver are basically similar. So if we study one of them, the conclusion carries over to the other; we can work at either the transmitter side or the receiver side. We will put the constraint that the sum over i from 1 to M of lambda_i of R_r is equal to M; that is the total power constraint we have to maintain.
(Refer Slide Time: 20:18)

So, now we will use one inequality, the AM-GM inequality, the arithmetic mean geometric mean inequality, for our purpose. When we use the arithmetic mean geometric mean inequality, what we get is that if there is a set of non-negative numbers x_1 to x_n, then their average is greater than or equal to the nth root of the product x_1 x_2 ... x_n. What we have in our case is the sum over i equal to 1 to M of lambda_i of R_r set equal to M. So, according to the inequality, 1 upon M times the sum of lambda_i of R_r, which is of course equal to 1, is greater than or equal to the Mth root of the product of the lambda_i of R_r, the receive correlation matrix.

So, now if you look at this: the left-hand side is equal to one, because the sum divided by M equals 1. What we see is that the Mth root of the product of the lambda_i of R_r is less than or equal to 1, or in other words the product of the lambda_i of this covariance matrix, for i equal to 1 to M, is less than or equal to 1. So, if you apply this back here: the determinant of R_r equals the product over i equal to 1 to M of lambda_i of R_r, that is, the product of the eigenvalues, and this is less than or equal to 1. That means the argument inside the log is less than or equal to 1, so this whole term is non-positive.

So, what we have is a negative term and another negative term; the best value that you can have is when the eigenvalues are all equal. That is the best value you can have: the product is then 1, and in that case the log term will be 0. All these eigenvalues being equal means that R_r is an identity matrix, and R_r being an identity matrix means the receive branches are uncorrelated.
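A tiny numeric illustration of this AM-GM step, with randomly drawn eigenvalues rescaled to satisfy the trace constraint (the draws themselves are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
M = 4
for trial in range(3):
    lam = rng.random(M)
    lam *= M / lam.sum()            # eigenvalues of a trace-M correlation matrix
    # AM-GM: mean(lam) = 1  =>  prod(lam) <= 1  =>  log2 det(R) <= 0
    print(round(float(np.prod(lam)), 4), "<= 1")
print(round(float(np.prod(np.ones(M))), 4), "= 1 when all eigenvalues are equal")
```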

So, when R is not correlated this determinant will turn out to be 1, and log of 1 turns out to be 0, so there will not be any effect; if there is any non-zero effect, it can only be a negative effect. That means the effect of correlation on capacity becomes very, very clear when we study it at high SNR especially: whenever correlation is added into the term, the capacity decreases. Just to remind you, we had seen something similar when we studied the effect of correlation on error probability: whenever there is correlation, the error probability increases. So we have seen both cases. When there is correlation, in the case of diversity the error probability increases relative to the uncorrelated case, but it is still better than a SISO link; in the case of MIMO multiplexing, what we are seeing again is that correlation reduces the capacity gain, but overall it will still be better than a SISO link.

(Refer Slide Time: 23:58)


So, these are some of the important results that we can gain over here, and we take a look at the result of correlation. If we look at this particular picture, this is the case where the correlation is set at 0.95: the solid line is the no-correlation case, and the dashed line is what you get when there is 95 percent correlation between the antenna branches. This figure is from the book Introduction to Space-Time Wireless Communications by Paulraj, Rohit Nabar and Dhananjay Gore, Cambridge University Press; we have taken this picture from that particular book, and one can also generate this result oneself.

So, there is a loss, which is indicated by the gap between these curves; in terms of the ergodic capacity, which we will discuss, it is about 3.3 bits per second per hertz, which is significant in the higher SNR regime. What we can conclude from this is that when there is correlation there is a loss in capacity, and this is well understood in terms of the expressions when we look at the high SNR regime. For the lower SNR regime also there is a loss, but we have taken high SNR because all these approximations we are making are only to ensure that we get an insight into it; from them, as we have seen over here, the expressions lead us to a clear understanding directly by inspection, and this is also supported by the results.

(Refer Slide Time: 25:31)


So, now we take a look at the outage capacity. We have seen this particular figure before: because C is random, as we have said, the capacity is better described by its distribution, and at some point, the p-th percentile, we say the corresponding value of capacity is the outage capacity C_out; in this case we have taken the 10 percent outage capacity. So what we see is that above this point lies 90 percent of the distribution; that means 90 percent of the time the capacity is in this range. So this is a very, very important value, and it determines the lowest point in performance. Of course, the ergodic capacity is what we have already described, and we have seen how it is influenced in the lower SNR and higher SNR regimes. So, we will take a look at the outage capacity performance.
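The 10 percent outage capacity can be estimated by Monte Carlo; the sketch below assumes an i.i.d. Rayleigh square channel, an illustrative SNR of 10, and an arbitrary trial count.

```python
import numpy as np

rng = np.random.default_rng(5)

def outage_capacity(M, snr, p=10, trials=2000):
    """p-percent outage capacity of an i.i.d. Rayleigh M x M channel, by Monte Carlo."""
    caps = np.empty(trials)
    for t in range(trials):
        H = (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))) / np.sqrt(2)
        G = np.eye(M) + (snr / M) * H @ H.conj().T
        caps[t] = np.real(np.log2(np.linalg.det(G)))
    return np.percentile(caps, p)   # capacity exceeded (100 - p) percent of the time

for M in (1, 2, 4):
    print(M, "x", M, ":", round(float(outage_capacity(M, snr=10.0)), 2), "bits/s/Hz")
```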

(Refer Slide Time: 26:31)

So, this is the 10 percent outage capacity performance for different MIMO antenna configurations, with the transmit antenna configuration on one side and the receive antenna configuration on the other. So, as we increase the number of antennas, basically M equal to 1, M equal to 2, and M equal to 4: the curve with the diamonds is M equal to 4, the one with the stars is M equal to 2, and this particular line is M equal to 1. What we observe is that as M increases, the 10 percent outage capacity also increases. Clearly, this is for M equal to 1, this is for M equal to 2, this is for M equal to 4, and we clearly see the outage capacity increasing; if you look here, where this one is at 2, this one is at more than 4. Another important observation is that as we increase the SNR, the outage capacity also increases.

(Refer Slide Time: 28:11)

And if we look at the ergodic capacity, what we have here is the same behaviour: as we increase M the capacity increases. In this particular figure, this curve is for M equal to 4, this one is for M equal to 2, and this one is for M equal to 1. What we are seeing is that here also the capacity increases roughly linearly as we increase M; so in both the cases, outage and ergodic, it increases with M.
(Refer Slide Time: 29:10)

Now, the important thing that we are going to look at is here: the outage capacity clearly increases with M. Now, suppose we have an H_w channel. For an H_w channel we have seen that (1 by M) H_w H_w hermitian tends to I_M as M tends to infinity. So what does it mean? Inside the capacity expression you have E_S by (M N_0) times H_w H_w hermitian, plus of course I_Mr; this is the expression of capacity for a given channel, and what we have seen is that the normalized channel term tends to the identity. If it tends to the identity, what is the meaning? The meaning is that the capacity expression is becoming independent of the channel coefficients; that means every time I measure the value of capacity for a very, very large H_w channel and I keep changing the channel coefficients, the capacity values are not going to be different from each other.

So, under such a situation, what is going to happen if we look at the capacity axis is that most of the time my capacity will be very, very close to one value, because what we have is C equal to log base 2 determinant of (I plus E_S upon (M N_0) times H_w H_w hermitian). If (1 upon M) H_w H_w hermitian equals I, for M being very large, what do we get? The capacity is log base 2 determinant of (I plus E_S upon N_0 times I), which works out to M times log base 2 of (1 plus E_S upon N_0); per parallel pipe, that is the capacity of a SISO channel. So this is in the limiting case; if it is not exactly the limiting case we will be very, very close to this value, that means very, very close to one particular value. So this distribution, which was spread out, will become narrow; under such a condition the outage capacity is almost equal to the median capacity, or in other words, what we can say is that under this case the capacity stabilizes.

So, what we are saying is that if we are able to provide a very large number of antenna elements at the transmitter and the receiver side, and the links are spatially white, that means orthogonal, then what we can achieve is an almost stable link which does not vary: the capacity does not vary. So we are almost able to provide the capacity of an additive white Gaussian noise channel, and that is very, very important.

In fact, it is M times the capacity of an additive white Gaussian noise channel; that means it stabilizes, the link is not fluctuating any more (a small Monte Carlo sketch of this hardening effect follows below). A similar situation happens when there is line of sight: you are getting a very high signal strength, the channel is very, very stable, and there is not much variability in it. But of course there are different influences on the channel capacity; we have just seen the situation when there is lower SNR. What we need to see now is the situation when we have very high SNR.
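In the sketch below (trial counts and SNR are assumptions), what one should see is the distribution concentrating as M grows: the 10 percent outage capacity approaches the mean, and the relative spread shrinks.

```python
import numpy as np

rng = np.random.default_rng(6)
snr = 10.0                          # assumed Es/N0

for M in (2, 4, 16):
    caps = np.empty(1000)
    for t in range(caps.size):
        Hw = (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))) / np.sqrt(2)
        G = np.eye(M) + (snr / M) * Hw @ Hw.conj().T
        caps[t] = np.real(np.log2(np.linalg.det(G)))
    print(f"M={M:2d}  mean={caps.mean():6.2f}  10%-outage={np.percentile(caps, 10):6.2f}"
          f"  relative spread={caps.std() / caps.mean():.3f}")
```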

(Refer Slide Time: 33:13)

So, we take up the high SNR condition; we have seen it once, we will see it again in a more exact manner, and we will see certain interesting outcomes. The ergodic capacity is basically the expectation of log base 2 determinant of (I_Mr plus E_S by (M_T N_0) times H H hermitian); we are quite used to this particular expression by now. Again, the determinant becomes the product over i equal to 1 up to the rank of H of (1 plus E_S by (M_T N_0) times lambda_i), where lambda_i is the eigenvalue of H H hermitian; the log 2 determinant is replaced by this product, inside the expectation sign. When SNR is high, when E_S by N_0 is very high, each factor could be approximated as E_S by (M_T N_0) times lambda_i.

So, you could also write this as follows: since that factor is constant and appears raised to the power of r(H), you will have r(H) times log base 2 of (E_S by (M_T N_0)), which is the constant term, plus the expectation of the sum over i equal to 1 to the rank of H of log base 2 of lambda_i. The first part is constant, so it is not of much importance; the second part is what we should look at. So, when SNR is high, the capacity is determined by the spread of the eigenvalues.
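A one-channel numeric check of this high-SNR approximation, with an assumed SNR of 1000 on a random full-rank channel:

```python
import numpy as np

rng = np.random.default_rng(7)
M, snr = 4, 1e3                     # Es/N0 chosen high, purely illustrative

H = (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))) / np.sqrt(2)
lam = np.linalg.eigvalsh(H @ H.conj().T)    # eigenvalues of H H^H (full rank here)

exact = float(np.sum(np.log2(1 + (snr / M) * lam)))
approx = M * np.log2(snr / M) + float(np.sum(np.log2(lam)))  # rank term + eigen spread
print(round(exact, 3), "vs approximation", round(approx, 3))
```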

So, it is determined by two things: the rank of the channel and the spread of the eigenvalues. Again, clearly, this will be maximized when all the lambda_i are equal and when the channel is full rank. If the rank is less then the capacity is less, so we would like to have full rank as well as a lesser spread of eigenvalues. A lesser spread of eigenvalues has multiple meanings: one of the meanings is that if the lambdas are almost the same, equal to each other, then we are having an orthogonal channel, which you have seen before. And all eigenvalues being the same has another consequence: if you have all equal eigenvalues, you can easily guess what the result would be when channel state information is available at the transmitter.

So, we have discussed that in such a situation, when the channel state information is available at the transmitter, we would like to follow the water pouring algorithm. The water pouring algorithm says to allocate power to a particular eigen mode according to the strength of the corresponding eigenvalue of H H hermitian.

So, if all eigenvalues are the same, the power to be allocated to all the modes is again going to be the same, and that means R_ss reduces to an identity matrix. So again we get the situation where, under the high SNR condition, the power allocation would be equal amongst all the antenna branches, and that is the same whether you have channel state information at the transmitter or you do not have channel state information at the transmitter. So, again, what we say is that when the SNR is very, very high, extremely high, under that condition the gap between no CSI at the transmitter and CSI at the transmitter again reduces.

So, in summary, we have two points: the rank of the channel determines the capacity, together with the spread of the eigenvalues. If the eigenvalues are very similar, you have very similar strengths of the channel modes, and under that condition the capacity with CSI at the transmitter and with no CSI at the transmitter would again be the same.

(Refer Slide Time: 37:21)

So, I will briefly give you an explanation. We said that this is the water pouring algorithm when channel state information is available at the transmitter: if these are the inverse SNR levels, this is the amount of power that is allocated on each of the links. Now, if all the eigenvalues are the same, then the SNR of each mode is also the same; in that case the power allocated would also be the same, and the power allocated to all of them being the same means R_ss is equal to I_M. So there again we see that equal power is allocated under such conditions; that means when you have very high SNR, all you need to know is the rank of the channel, and you can give equal power across it. Whereas when the SNR is low and the eigenvalues are not equal, in that case it is better that you identify the rank of the channel and allocate power to those modes where the eigenvalues are non-zero, or where the eigenvalues are significant, and you give the power depending upon the strength of the eigenvalue.

We will also look at another important aspect in the next lecture, where we will talk further about the impact of large M, that is, the H_w channel for M being very large, and what happens to the outage capacity.

Thank you for attending this particular lecture.
