Box 2.2b: Proof of the Product of Limits

© 1996 by Karl Hahn

Before proceeding into the material in this box, you should be comfortable with the material in box 2.2a. The proofs here follow the same line of thought, but are a bit trickier. Again, it would be wise of you to learn this material because there is a nonzero probability that you will encounter it on an exam.

Here we shall prove that the product of two limits is the limit of the product. Again we shall do two proofs, one for sequences and one for real valued functions of a real variable. And again, we shall do the proof for sequences first.

Suppose we have two sequences, sk and tk, whose limits are Ls and Lt respectively. Again, this means that for sk, for any εs > 0, there exists an ns such that:

   |sk - Ls|  ≤  εs                                               eq. 2.2b-1a
whenever  k ≥ ns, and for any  εt > 0, there exists an nt such that:
   |tk - Lt|  ≤  εt                                               eq. 2.2b-1b
whenever  k ≥ nt.

And remember the contract analogy. If I tell you how close sk has to be to Ls by specifying an εs, you can tell me how big k has to be so that that term in the sequence and all the terms that follow are at least that close. Likewise with tk.
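If it helps to see the contract in action, here is a small computational sketch (my own illustration, not part of the proof) using the made-up sequence  sk = 2 + 1/k, whose limit is 2. Given any εs you hand it, it produces an ns that honors the contract and spot-checks a stretch of terms beyond that point:

    # Illustrative sketch only: s_k = 2 + 1/k is a hypothetical example
    # sequence with limit L_s = 2; any convergent sequence would do.
    import math

    def s(k):
        return 2.0 + 1.0 / k           # the k-th term of the sequence

    L_s = 2.0                          # its limit

    def n_for(eps):
        """Return an n_s with |s(k) - L_s| <= eps for every k >= n_s."""
        # For this particular sequence |s(k) - L_s| = 1/k, so any k >= 1/eps works.
        return math.ceil(1.0 / eps)

    for eps in (0.5, 0.01, 0.0001):
        n = n_for(eps)
        # spot-check the contract for a run of terms at and beyond n
        assert all(abs(s(k) - L_s) <= eps for k in range(n, n + 1000))
        print(f"eps = {eps}: n_s = {n} keeps |s_k - L_s| <= eps from then on")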

Now you should be able to write the contract that we must show is an inescapable consequence of these two (hint: it involves the product of Ls with Lt and the product of sk with tk). When you are done, read on.







Your contract should look something like: For every  εprod > 0, no matter how small, there exists an nprod such that

   |(sk tk) - (Ls Lt)|  ≤ εprod                                 eq. 2.2b-2
whenever  k ≥ nprod. Of course, you could have chosen different names for εprod and nprod and still have had this step correct.

The basic plan of this proof is the same as the proof for the sum of the limits. The algebra, though, is more complicated.

To begin with, we shall reword the contracts to express each using an error term. An error term in a sequence that has a limit is the difference between some term of the sequence and the limit. So for the sk, we have:

   Ls - sk  =  errors[k]                                         eq. 2.2b-3a

   Lt - tk  =  errort[k]                                         eq. 2.2b-3b
from which we can reword the contract to read: For every  εs > 0, no matter how small, there exists an ns such that
   |errors[k]|  ≤ εs                                             eq. 2.2b-4a
whenever  k ≥ ns, and for every  εt > 0, there exists an nt such that:
   |errort[k]|  ≤ εt                                             eq. 2.2b-4b
whenever  k ≥ nt. (Why can we do this? Because we can always substitute an expression in eq. 2.2b-1a & b with something else that is equal to it. Also please note that here we use the [k] notation for indexing the error variables in order to keep the index distinguishable from the labeling subscript.)

So what we are saying is that the kth error term of each sequence is the difference between the limit and the kth term of the sequence, and we are writing our contracts in terms of errors.

If we add sk to both sides of 2.2b-3a and tk to both sides of 2.2b-3b, we get:

   Ls  =  sk + errors[k]                                         eq. 2.2b-5a

   Lt  =  tk + errort[k]                                         eq. 2.2b-5b
Then multiplying 2.2b-5a by 2.2b-5b, we have:
   Ls Lt  =  (sk tk) +                                           eq. 2.2b-6

     (sk errort[k]) + (tk errors[k]) + (errors[k] errort[k])

Subtracting  (sk tk)  from both sides of 2.2b-6, we get:

   (Ls Lt) - (sk tk)  =                                          eq. 2.2b-7

     (sk errort[k]) + (tk errors[k]) + (errors[k] errort[k])

We know from equations 2.2b-4a & b that we can make errors[k] and errort[k] as close to zero as we like by choosing k large enough. We are now in a position to show that you can therefore make  (Ls Lt) - (sk tk)  as close to zero as you like as well (and specifically make it at least as close to zero as any εprod) simply by arranging k to make the two errors suitably close to zero. This is because both sk and tk are bounded both above and below. That means, for example, that there is some value, smax, above which sk never gets regardless of what k you stick in. Likewise there is some value, smin, below which sk can never go. The same is true of tk.

And how do we know this? Well, if sk were unbounded from above, for example, that would mean that no matter how big a real number, r, you choose, there would always be some k such that sk > r. If this were the case, then sk could not have a limit. You would always be expecting to get some sk that is even farther from the limit somewhere down the line. Likewise, if sk were unbounded from below, then no matter how small (that is, how negative) a real number, r, you choose, there would always be some k such that sk < r. By the same logic, this also would prevent sk from having a limit. And again, the same goes for tk.
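Before going on, you might like to see equation 2.2b-7 and the boundedness claim with actual numbers in them. Here is a brief sketch (my own, using the made-up sequences  sk = 2 + 1/k  and  tk = 3 - 1/k^2); it checks the identity term by term:

    # Sketch only: check eq. 2.2b-7 numerically for two hypothetical
    # convergent sequences, s_k = 2 + 1/k -> 2 and t_k = 3 - 1/k**2 -> 3.
    # Note that both stay bounded: 2 < s_k <= 3 and 2 <= t_k < 3.
    L_s, L_t = 2.0, 3.0
    s = lambda k: 2.0 + 1.0 / k
    t = lambda k: 3.0 - 1.0 / k**2

    for k in (1, 10, 1000):
        err_s = L_s - s(k)                       # error_s[k]
        err_t = L_t - t(k)                       # error_t[k]
        lhs = (L_s * L_t) - (s(k) * t(k))
        rhs = s(k) * err_t + t(k) * err_s + err_s * err_t
        print(k, lhs, rhs)                       # the two values agree (up to rounding)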

Applying the inequality,  |a + b| ≤ |a| + |b|  (can you show that this inequality can be extended from two to three terms? one way is sketched just after the inequality below) to equation 2.2b-7, we get:

   |(Ls Lt) - (sk tk)|  ≤                                        eq. 2.2b-8

     |(sk errort[k])| + |(tk errors[k])| + |(errors[k] errort[k])|
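For the parenthetical question a moment ago, here is one way to stretch the two-term inequality to three terms: group the first two summands and apply the two-term version twice.

    \[
      |a + b + c| \;=\; |(a + b) + c| \;\le\; |a + b| + |c| \;\le\; |a| + |b| + |c| .
    \]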

So how big can the right hand side of 2.2b-8 possibly be? We know that sk is never bigger than smax and never smaller than smin. Likewise tk is never bigger than tmax and never smaller than tmin. We have to concern ourselves with what gives us the worst value for the right hand side of 2.2b-8. So let sworst be equal to whichever of smax or smin has the larger absolute value. Do the same with t. Now 2.2b-8 becomes:

   |(Ls Lt) - (sk tk)|  ≤                                        eq. 2.2b-9

     |(sworst errort[k])| + |(tworst errors[k])| + |(errors[k] errort[k])|

Now keep in mind that sworst is the worst of all terms (that is, it has the greatest magnitude) in the entire sk sequence. It is globally the worst. And so it is constant, and does not vary with k. Likewise with tworst. The final step of this proof relies on both of them being constant.

Clearly, no matter how big in magnitude sworst and tworst are, each of the three terms on the right-hand side of 2.2b-9 can be brought as close to zero as you'd like by choosing one of the errors to be sufficiently close to zero. And according to 2.2b-1a through 2.2b-4b, we can bring the errors as close to zero as we'd like by choosing sufficiently large k.

If you can bring each of the three summands as close to zero as you'd like, then you can certainly bring the sum as close to zero as you'd like, and that means you can bring that sum to within εprod of zero. Whatever k it takes to do that, that is the nprod that satisfies the contract given by 2.2b-2.
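If you would like the choice of k spelled out explicitly, here is one standard piece of bookkeeping (a sketch; the proof above does not need this exact recipe). Invoke the two original contracts with  εs = εt = ε', where

    \[
      \varepsilon' \;=\; \min\!\Bigl(1,\;
        \frac{\varepsilon_{\mathrm{prod}}}{\,|s_{\mathrm{worst}}| + |t_{\mathrm{worst}}| + 1\,}\Bigr),
    \]

and let nprod be the larger of the resulting ns and nt. Then for every k ≥ nprod, inequality 2.2b-9 gives

    \[
      |(L_s L_t) - (s_k t_k)|
        \;\le\; |s_{\mathrm{worst}}|\,\varepsilon' + |t_{\mathrm{worst}}|\,\varepsilon' + \varepsilon'^{\,2}
        \;\le\; \bigl(|s_{\mathrm{worst}}| + |t_{\mathrm{worst}}| + 1\bigr)\,\varepsilon'
        \;\le\; \varepsilon_{\mathrm{prod}},
    \]

where ε'^2 ≤ ε' because ε' ≤ 1.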

And that completes the proof. Again, I have been wordy and tried to detail each nuance of each step. I do that because I cannot predict where each student might get lost, and so I must assume that the student is always lost. If you are one of those who did get lost and never found your way again, here is a map of the proof to study before you go back over it:

First we determined the contracts that the two limits dictate for us (eq. 2.2b-1a & b). Then we determined the product contract that we are trying to prove from them (eq. 2.2b-2).

Then we wrote expressions for the error terms of the two sequences (eq. 2.2b-3a & b). These merely give us names for the difference terms we had on the left-hand sides of 2.2b-1a & b. So we substituted those names in for the difference terms on the left-hand sides of 2.2b-1a & b. This provided us with inequalities putting bounds on the errors (eq. 2.2b-4a & b). We saved those inequalities for later use.

Next we algebraically munged 2.2b-3a & b into a form useful to us (eq. 2.2b-5a & b) and multiplied them together to get 2.2b-6, which is beginning to look very useful because it has the product of the limits on one side and the product of the sequence terms (that is,  sk tk) as one of the terms on the other. Why is that useful? Because we can algebraically munge that into something that is, on the left-hand side, the same as the left-hand side of the contract we are trying to show (eq. 2.2b-7).

From there, all we have to do is show we can make the right-hand side of 2.2b-7 as close to zero as anybody could ask. We do that by applying the absolute value inequality (eq. 2.2b-9). But we still have the problem of having too many things in 2.2b-9 that vary with k. To solve that, we observe that both sk and tk are bounded and can be bracketed between maximum and minimum bounds. For both of them we chose the more extreme of the max and min to get the worst. The worst of each sequence does not vary with k.

Appealing back to 2.2b-4a & b, which we had saved for later use, we can argue that the right-hand side of 2.2b-9 can be brought as close to zero as anybody could ask by choosing k sufficiently large. Since the left-hand side of 2.2b-9 must be less than or equal to the right-hand side, it is clear that the left-hand side can be made less than or equal to any εprod that anybody might name. That is the conclusion we were after, so we're done.

By now you are probably asking yourself, "When you are doing a complicated proof like this, how do you know what the next step will be? How can you choose it so that it keeps you on the path to the proper conclusion?" That is a matter of intuition. Nobody can teach it to you. It is a lot like how you decide what your next move in a chess game is. You are looking for a path that eventually checkmates your opponent before he checkmates you. Nobody can give you a hard and fast set of rules that will determine your next move each time. But if you play enough games, you will come to see them instinctively. The same is true with proofs. If you do enough of them, or follow enough of them through, you will eventually come to instinctively pick out what the next step ought to be at each point in the proof. Every chess beginner gets thrashed again and again. And if these proofs are thrashing you, it is only because you are a beginner at them. Go back and follow the steps through again. Each time you do, your power of intuition grows a bit stronger.


We now go on to prove the same thing for limits of real functions of a real variable. But before we do, it is necessary to prove what mathematicians call a lemma. A lemma is a subsidiary assertion that we must show is true in order to show that some other assertion is true.

In the last proof we took advantage of the fact that the sequences sk and tk were bounded. If we have a function, f(x), that has a limit as x goes toward a, we cannot make any blanket statement about f(x) being bounded. Suppose you had, for example, f(x) = x^3. This function is clearly not bounded. You can always find an x that will make x^3 bigger or smaller (that is, more negative) than any bound that somebody might name.

But we can show that if f(x) has a limit as x goes toward a, then there must be an interval around a in which f(x) is bounded.

What we mean by an interval is simply a contiguous segment of the real numbers. In other words, all the real numbers between two endpoints. So, for example, all the real numbers that lie between 0.25 and 0.5 constitute an interval. All the real numbers that lie between 100.1 and 1000.1 also constitute an interval. If we name any two distinct endpoints, p and q, then we have named an interval. Since one of the endpoints must be greater than the other, let's say that p > q. A real number, x, is in that interval if and only if p > x > q. Note that this interval does not include the endpoints. (I will say as a side note, since it is not important to this proof, that an interval that includes both endpoints is called a closed interval. One that does not include the endpoints is called an open interval. One that includes one endpoint but not the other is called a half-open interval.)
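As an aside in the usual shorthand (my addition; the proof does not depend on it), with q < p these three kinds of intervals are written:

    \[
      (q, p) = \{\, x : q < x < p \,\}, \qquad
      [q, p] = \{\, x : q \le x \le p \,\}, \qquad
      [q, p) = \{\, x : q \le x < p \,\}.
    \]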

The lemma is easy to prove if you simply recall the definition of a limit -- that

     lim   f(x)  =  L
    x -> a
if and only if for every  ε > 0, no matter how small, there exists a  δ > 0  such that:  |f(x) - L| ≤ ε  whenever  |x - a| ≤ δ.

Suppose I give you an ε. Then you can tell me a δ that satisfies the above. The expression containing δ puts restrictions on x. In fact, it holds x to an interval. Specifically,  a + δ  ≥  x  ≥  a - δ   (see if you can explain why the expression,  |x - a| ≤ δ,  restricts x to this interval; a one-line answer follows).
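In case that parenthetical question is giving you trouble, here is the one-line reason:

    \[
      |x - a| \le \delta
      \;\Longleftrightarrow\;
      -\delta \le x - a \le \delta
      \;\Longleftrightarrow\;
      a - \delta \le x \le a + \delta .
    \]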

Throughout that interval, f(x) obeys the ε expression above. Why? Because that's what we mean by saying f(x) has a limit, L, at a. But that expression also restricts f(x) to an interval. Specifically,  L + ε  ≥  f(x)  ≥  L - ε  whenever  a + δ  ≥  x  ≥  a - δ.

But that places both upper and lower bounds on f(x), doesn't it? f(x) can't be any bigger than  L + ε, and it can't be any smaller than  L - ε. So for all x in the interval,  a + δ  ≥  x  ≥  a - δ,  f(x) is certainly bounded. And the interval contains a (you should be able to verify that a is, in fact, the midpoint of the interval). Our lemma said we could always find such an interval if f(x) had a limit at a. Here we have demonstrated a recipe for finding that interval. And that constitutes a proof.
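Here is a small numerical illustration of the lemma (a sketch only; the function  f(x) = x^3  with  a = 2, so L = 8, and the values  ε = 1  and  δ = 0.04  are my own made-up choices). It samples the interval and confirms that f stays trapped between L - ε and L + ε there:

    # Sketch only: illustrate the lemma with the hypothetical choices
    # f(x) = x**3, a = 2 (so L = 8), eps = 1, and delta = 0.04.
    f = lambda x: x**3
    a, L, eps, delta = 2.0, 8.0, 1.0, 0.04

    xs = [a - delta + i * (2 * delta) / 1000 for i in range(1001)]   # sample the interval
    assert all(abs(f(x) - L) <= eps for x in xs)                     # the eps contract holds
    print("on that interval f(x) stays between", L - eps, "and", L + eps)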

Go ahead and reread the proof to this lemma several times, then see if you can reproduce it on a whiteboard or a sheet of paper. If you won't be bothering anybody, explain out loud to an imaginary audience what the line of thought is here. I'll be there with you in spirit, listening.


And now for the proof that the product of the limits is the limit of the product for the type of limit where x goes toward a. I will outline the proof first. Then I'd like you to go ahead and attempt to complete it based upon what you've already learned from this box. If necessary, go back and reread this box from the beginning. Do that several times if you need to. Then go ahead with your attempt at the proof. Once you have completed it, then go on and read my version of the proof and compare it with yours.

First, apply the definition of a limit to construct two contracts in the way we have been doing for f(x) and g(x). We assume these to be true. Now also write the contract that we are trying to prove is a necessary consequence of the first two. It will involve an expression of  f(x) g(x)  and an expression for the product of the two limits you defined in the first two contracts. You now have a starting point (the first two contracts) and a destination (the third contract involving products).

Now, just as I did before in eq. 2.2b-3a & b and 2.2b-4a & b, reword the first two contracts in terms of errors. This time the error variables will not be indexed -- instead they will be functions of x (i.e., errorf(x) and errorg(x)). You should end up with two equations giving you expressions for your error terms, and two inequality contracts that use ε's, δ's, and error terms.

Now take the two equations and rearrange them so that the limit is on the left and everything else is on the right. Then multiply the two equations together.

You aim to get a difference between the product of the limits and the product of the two functions on the left side. On the right you should have an error expression. What this does for you is get you an error expression for the product. See if you can figure out how to rearrange the product equation to that end.

Of course the contract we are trying to get uses the absolute value of the difference you end up with on the left-hand side. So take the absolute value of both sides. Then apply the absolute value inequality to the right-hand side to convert the equation into an inequality.

All you have to do now is show that you can bring the right-hand side of the inequality as close to zero as you like by choosing x close enough to a. But the right-hand side still has f(x) and g(x) in it. So this is where you make use of the lemma we proved earlier. For both f(x) and g(x), we know by the lemma that intervals exist around the point  x = a  in which f(x) and g(x) are bounded. If you take the intersection of those two intervals, you will end up with an interval in which f(x) and g(x) are simultaneously bounded (can you explain why? see the note after the diagram below). Within that interval there must be an fmax, fmin, gmax, and gmin. And from those you can get an fworst and gworst. These are all constants (that is, not functions of x) since they are extremes of f(x) and g(x) within the interval.

                            a
   <------------------------|--------------------------->
                 [    f(x) bounded in this interval  ]
              [   g(x) bounded in this interval    ]
                 [ both bounded in  this interval  ]

    Diagram shows bounded intervals around x = a on the real number line.
    We know that bounded intervals exist because the lemma tells us they must.
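As for the "can you explain why" above: the two intervals the lemma hands you are both centered on a, say with half-widths δf and δg. Whatever lies within the smaller half-width of a lies within both intervals at once, which is exactly the middle bracket in the diagram:

    \[
      |x - a| \le \min(\delta_f, \delta_g)
      \quad\Longrightarrow\quad
      |x - a| \le \delta_f \ \text{ and } \ |x - a| \le \delta_g .
    \]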

Why do we go to all this trouble of devising an interval? Because as x goes toward a, it must certainly end up inside that interval, no matter how small the interval is. The extrema for f(x) and g(x) within that interval are constants, and constants behave better for our purpose than do functions of x.

You already know from the two error inequalities that you produced earlier that you can bring your two error functions as close to zero as you like by choosing x sufficiently close to a. Write a sentence or two arguing why the expression on the right can also be brought as close to zero as you like by choosing x sufficiently close to a. Once you've done that, you're done.

The full proof follows below.







Step 1: State the two contracts. This should be almost second nature to you by now. We have:

    lim f(x)  =  Lf                                               eq. 2.2b-10a
   x -> a
and
    lim g(x)  =  Lg                                               eq. 2.2b-10b
   x -> a
which means that for any  εf > 0  you can always find  δf > 0  such that  |f(x) - Lf|  ≤  εf  whenever  |x - a|  ≤  δf; and likewise for any  εg > 0  you can always find  δg > 0  such that  |g(x) - Lg|  ≤  εg  whenever  |x - a|  ≤  δg. We assume all this to be true.

Step 2: State what you want to prove. In this case we have:

     lim f(x)g(x)  =  Lf Lg                                       eq. 2.2b-11
    x -> a
which means that for any  εprod > 0  you can always find  δprod > 0  such that  |(f(x)g(x)) - (Lf Lg)|  ≤  εprod  whenever  |x - a|  ≤  δprod. We have to show that this is an inescapable consequence of 2.2b-10a & b.

Step 3: Reword the original two contracts in error terms. So we have errors of:

   Lf - f(x)  =  errorf(x)                                        eq. 2.2b-12a

   Lg - g(x)  =  errorg(x)                                        eq. 2.2b-12b
Now we stick the errors in to replace the difference terms in our contracts. So the contracts become: for every  εf > 0, there exists a  δf > 0  such that
   |errorf(x)| ≤ εf                                               eq. 2.2b-13a
whenever  |x - a| ≤ δf. Likewise for every  εg > 0, there exists a  δg > 0  such that
   |errorg(x)| ≤ εg                                               eq. 2.2b-13b
whenever  |x - a| ≤ δg. We save these last two inequalities for later.

Step 4: Munge 2.2b-12a & b, then multiply them together. This will be what gives us the product of the limits on the left and something we can use on the right. So here is the munging:

   Lf  =  f(x) + errorf(x)                                        eq. 2.2b-14a

   Lg  =  g(x) + errorg(x)                                        eq. 2.2b-14b
Now we multiply them together:
   Lf Lg  =  (f(x)g(x)) +                                         eq. 2.2b-15

      (f(x) errorg(x)) + (g(x) errorf(x)) + (errorf(x) errorg(x))
Munge a little more:
   (Lf Lg) - (f(x)g(x))  =                                        eq. 2.2b-16

      (f(x) errorg(x)) + (g(x) errorf(x)) + (errorf(x) errorg(x))
and take the absolute value of both sides:
   |(Lf Lg) - (f(x)g(x))|  =                                      eq. 2.2b-17

      |(f(x) errorg(x)) + (g(x) errorf(x)) + (errorf(x) errorg(x))|
Now all we have to do is show that the right hand side of 2.2b-17 can be made as close to zero as anybody would like, and in particular, at least as close to zero as εprod, by choosing x to be sufficiently close to a.

Step 5: Apply the absolute value inequality to the sum. That is, the sum of absolute values is always greater than or equal to the absolute value of the sum:

   |(Lf Lg) - (f(x)g(x))|  ≤                                      eq. 2.2b-18

      |(f(x) errorg(x))| + |(g(x) errorf(x))| +

      |(errorf(x) errorg(x))|
We're almost home now. We just have to show that each of the three absolute value terms on the right of the inequality can be made as close to zero as anybody would like. If they can, then their sum certainly can, right? The third term is no problem, since it is the product of two error terms, each of which we already know can be made as close to zero as we like because that is precisely what 2.2b-13a & b assert. It's the remaining two terms that give us a problem, because they contain f(x) and g(x), neither of which is a constant.

Step 6: Recall the lemma. We know we can always find an interval around x = a in which both f(x) and g(x) are bounded. That's what the lemma asserts. And because f(x) and g(x) are bounded in the interval, they both have maximum and minimum values, fmax, fmin, gmax, and gmin, within the interval. In each case, choose between maximum and minimum according to whichever has the greater absolute value. This gives fworst and gworst. We know that any f(x) or g(x) plugged into 2.2b-18 will give us something no greater than what we get if we plug fworst and gworst into it. So let's do that:

   |(Lf Lg) - (f(x)g(x))|  ≤                                      eq. 2.2b-19

      |(fworst errorg(x))| + |(gworst errorf(x))| +

      |(errorf(x) errorg(x))|
fworst and gworst are both constants. We know we can bring errorf(x) and errorg(x) as near to zero as we like by choosing x sufficiently close to a. That is what 2.2b-13a & b tell us. If you can bring something as near to zero as you like, you can also bring that same thing times a constant as near to zero as you like. And so you can certainly bring the whole right-hand expression as near to zero as you like, including at least as near to zero as any εprod anybody might ever name. (To be careful about one detail: choose δprod to be no bigger than the smallest of δf, δg, and the half-width of the interval from the lemma, so that x stays where the bounds fworst and gworst actually apply.) And that is precisely the contract we had to prove.
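If you would like to watch this contract play out with numbers, here is a brute-force sanity check (a sketch only, not a proof, using the made-up pair  f(x) = x^2  and  g(x) = 3x + 1  near  a = 2, where Lf = 4, Lg = 7, and Lf Lg = 28). For each εprod it keeps halving a candidate δprod until a sampled version of the contract holds:

    # Sketch only, not a proof: numerically sanity-check the product contract
    # for the hypothetical pair f(x) = x**2, g(x) = 3*x + 1 near a = 2,
    # where L_f = 4, L_g = 7, and L_f * L_g = 28.
    f = lambda x: x**2
    g = lambda x: 3 * x + 1
    a, Lf, Lg = 2.0, 4.0, 7.0

    def contract_holds(eps_prod, delta_prod):
        """Sample |x - a| <= delta_prod and test the product contract there."""
        xs = [a - delta_prod + i * (2 * delta_prod) / 2000 for i in range(2001)]
        return all(abs(f(x) * g(x) - Lf * Lg) <= eps_prod for x in xs)

    for eps_prod in (1.0, 0.1, 0.001):
        delta = 1.0
        while not contract_holds(eps_prod, delta):   # halve delta until the check passes
            delta /= 2
        print(f"eps_prod = {eps_prod}: delta_prod = {delta} passes the sampled check")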

As always, you are likely to have to review this several times before you understand it well. This is not a simple proof.



Send questions to: hahn@netsrq.com