property that this thing is going to be [(1, 1), (1, 0)] to the power n minus 1.
This I already know, by the induction
hypothesis, [(1, 1), (1, 0)] to the n minus 1. So, presumably I should use it in some way. This equality is not yet
true, you may have noticed. So, I need to add something on. What could I possibly add on to be correct? Well,
another factor of [(1, 1), (1, 0)]. The way I am developing this proof is the only way it could possibly be, in some
sense. If you know it's induction, this is all that you could do. And then you check. Indeed, this equality holds
conveniently. For example, F_(n+1) is the product of these two things. It is this row times this column. So, it is F_n times 1 plus F_(n-1) times 1, which is indeed the definition of F_(n+1). And you can check all four of
the entries. This is true. Great. If that is true then I would just put these together. That is [(1, 1), (1, 0)] to the n
minus 1 times [(1, 1), (1, 0)], which is [(1, 1), (1, 0)] to the n, end of proof. A very simple proof, but you have to do
that in order to know if this algorithm really works. Good. Question? Oh, yes. Thank you. This, in the lower right,
we should have F_(n-1). This is why you should really check your proofs. We would have discovered that
when I checked that this was that row times that column, but that is why you are here, to fix my bugs. That's the
great thing about being up here instead of in a quiz. But that is a minor mistake. You wouldn't lose much for that.
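To make that concrete, here is a minimal Python sketch of the algorithm this theorem justifies; it is my own rendering, not code from the lecture. It computes [(1, 1), (1, 0)]^n by recursive squaring, using the corrected identity [(1, 1), (1, 0)]^n = [(F_(n+1), F_n), (F_n, F_(n-1))], so F_n comes out after Theta(log n) two-by-two multiplications.

    def mat_mult(X, Y):
        # Multiply two 2x2 matrices given as tuples of tuples.
        return ((X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]),
                (X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]))

    def mat_pow(X, n):
        # X^n by recursive squaring: Theta(log n) 2x2 multiplications.
        if n == 1:
            return X
        half = mat_pow(X, n // 2)
        square = mat_mult(half, half)
        return mat_mult(square, X) if n % 2 else square

    def fib(n):
        # [(1, 1), (1, 0)]^n = [(F_(n+1), F_n), (F_n, F_(n-1))], so read off F_n.
        if n == 0:
            return 0
        return mat_pow(((1, 1), (1, 0)), n)[0][1]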
All right. More divide-and-conquer algorithms. Still, we have done relatively simple ones so far. In fact, the fanciest
has been merge sort, which we already saw. So, that is not too exciting. The rest have all been log n time. Let's
break out of the log n world. Well, you all have the master method memorized, right, so I can erase that. Good.
This will be a good test. Next problem is matrix multiplication, following right up on this two-by-two matrix
multiplication. Let's see how we can compute n-by-n matrix multiplications. Just for a recap, you should know how
to multiply matrices, but here is the definition so we can turn it into an algorithm. You have two matrices, A and B, which are written as capital letters. The ijth entry, ith row, jth column, is called little a_ij or little b_ij. And your goal is to compute the product of those matrices. I should probably say that i and j range from 1 to n, so they are square matrices. The output is to compute C, the product of A and B, which has entries c_ij. And, for a recap, the ijth
entry of the product is the inner product of the ith row of A with the jth column of B. But you can write that out as a
sum: c_ij = sum over k from 1 to n of a_ik times b_kj. We want to compute this thing for every i and j. What is the obvious algorithm for doing this? Well, for
every i and j you compute the sum. You compute all the products. You compute the sum. So, it's like n operations
here roughly. I mean like 2n minus 1, whatever. It is order n operations. There are n^2 entries of C that I need to
compute, so that's n^3 time. I will write this out just for the programmers at heart. Here is the pseudocode. It's rare
that I will write pseudocode. And this is a simple enough algorithm that I can write it in gory detail. But it gives you
some basis for this analysis if you like to program. It is a triply nested for loop. And I made a coding error.
Hopefully you haven't written that far yet. I need c_ij to be initialized to zero. And then I add to c_ij the appropriate
product, a_ik times b_kj. That is the algorithm. And the point is you have a nesting of three for loops, each from 1 to n. That takes n^3 time because the initialization is constant time and each addition into c_ij is constant time. So, very simple, n^3.
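Since the board is not visible in a transcript, here is that pseudocode written out as plain Python, my rendering, with the initialization bug fixed:

    def matmul(A, B):
        # Obvious Theta(n^3) algorithm: for every entry c_ij, sum a_ik * b_kj over k.
        n = len(A)
        C = [[0] * n for _ in range(n)]  # c_ij initialized to zero (the coding error, fixed)
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    C[i][j] += A[i][k] * B[k][j]
        return C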
Let's do better. And, of course, we are going to use divide-and-conquer. Now, how are we going to divide a matrix? There are a lot of numbers in a matrix, n^2 of them in each one. There are all sorts of ways you could divide. So far all of the divide-and-conquers
we have done have been problems of size n into some number of problems of size n over 2. I am going to say I
start with some matrices of size n-by-n. I want to convert them down to something like n/2-by-n/2. Any suggestions
how I might do that? Yeah? Block form the matrix, indeed. That is the right answer. So, this is the first divide-and-
conquer algorithm. This will not work, but it has the first idea that we need. We have an n-by-n matrix. We can view it as a two-by-two block matrix where each entry in this two-by-two block matrix is an n/2-by-n/2 submatrix. I will think of C as being divided into four parts, r, s, t and u. Even though I write them as lower case letters they are really matrices. Each is n/2-by-n/2. And A, I
will split into a, b, c, d, times B, I will split into e, f, g, h. Why not? This is certainly true. And if you've seen some
linear algebra, this is basically what you can do with matrices. Now I can pretend these are two-by-two and sort of
forget the fact that these little letters are matrices and say well, r is the inner product of this row with this column. It is ae plus bg. Let me not cheat or else it will be too easy. r=ae+bg, s=af+bh, t=ce+dg and u=cf+dh. There is nothing like making it too hard on yourself. OK, got them right. Good. I mean this is just a fact about how you would expand out this product.
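If you want to convince yourself of those four identities, here is a quick sanity check, my own and not from the lecture, using numpy slicing to form the blocks:

    import numpy as np

    n = 8  # any even n works for this check
    A, B = np.random.rand(n, n), np.random.rand(n, n)
    half = n // 2
    a, b, c, d = A[:half, :half], A[:half, half:], A[half:, :half], A[half:, half:]
    e, f, g, h = B[:half, :half], B[:half, half:], B[half:, :half], B[half:, half:]
    C = A @ B
    assert np.allclose(C[:half, :half], a @ e + b @ g)  # r = ae + bg
    assert np.allclose(C[:half, half:], a @ f + b @ h)  # s = af + bh
    assert np.allclose(C[half:, :half], c @ e + d @ g)  # t = ce + dg
    assert np.allclose(C[half:, half:], c @ f + d @ h)  # u = cf + dh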
And so now we have a recursive algorithm. In fact, we have a divide-and-conquer
algorithm. We start with our n-by-n matrix. Well, we have two of them actually. We divide them into eight little pieces,
a, b, c, d, e, f, g, h. Then we compute these things and that gives us C, just by sticking them together. Now, how
do we compute these things? Well, these innocent-looking little products between these two little numbers are
actually recursive matrix multiplications. Because each of these little letters is an n/2-by-n/2 matrix, I have to recursively compute the product. There are like eight recursive multiplications of n/2-by-n/2 matrices. That is what
bites us. And then there are like four additions, plus minor work of gluing things together. How long does it take to
add two matrices together? n^2. This is cheap. It just takes n^2. Remember, we are trying to beat n^3 for our
matrix multiplication. Addition is a really easy problem. You just have to add every number. There is no way you
can do better than n^2. So, that is not recursive. That is the nice thing. But the bad thing is we have eight of these
recursions. We have T(n)=8T(n/2)+Theta(n^2). And I have erased the master method, but you should all have it
memorized. What is the solution to this recurrence? Theta(n^3). That is annoying. All right. a is 8, b is 2, log base 2 of 8 is 3. Every computer scientist should know that. n^(log_b a) = n^3. That is polynomially larger than n^2, so we are in Case 1. Thank you, I always get the cases upside down. This is n^3, no better than our previous algorithm. That kind of sucks.
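For reference, here is a sketch of that first divide-and-conquer algorithm, mine rather than the lecture's, in Python with numpy, assuming n is a power of two; it makes the recurrence T(n) = 8T(n/2) + Theta(n^2) literal:

    import numpy as np

    def block_matmul(A, B):
        # Eight recursive n/2-by-n/2 products plus Theta(n^2) additions:
        # T(n) = 8T(n/2) + Theta(n^2) = Theta(n^3), no better than the triple loop.
        n = A.shape[0]
        if n == 1:  # base case: 1x1 matrices
            return A * B
        half = n // 2
        a, b, c, d = A[:half, :half], A[:half, half:], A[half:, :half], A[half:, half:]
        e, f, g, h = B[:half, :half], B[:half, half:], B[half:, :half], B[half:, half:]
        r = block_matmul(a, e) + block_matmul(b, g)  # the eight recursive multiplications
        s = block_matmul(a, f) + block_matmul(b, h)
        t = block_matmul(c, e) + block_matmul(d, g)
        u = block_matmul(c, f) + block_matmul(d, h)
        return np.block([[r, s], [t, u]])  # glue the four n/2 blocks back into C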
And now comes the divine inspiration. Let's go over here. There are some algorithms like this
Fibonacci algorithm where if you sat down for a little while, it's no big deal, you would figure it out. I mean it is kind
of clever to look at that matrix and then everything works happily. It is not obvious but it is not that amazingly
clever. This is an algorithm that is amazingly clever. You may have seen it before, which steals the thunder a little bit, but it is still really, really cool, so you should be happy to see it again. However Strassen came up with this algorithm, he must have been very clever. The idea is we've got to get rid of these multiplications. I could do a
hundred additions. That only costs Theta(n^2). I have to reduce this 8 to something smaller. It turns out, if you try
to split the matrices into three-by-three or something, that doesn't help you. You get the same problem because
we're using fundamentally the same algorithm, just in a different order. We have got to somehow reduce the