Before proceeding into the material in this box, you should be comfortable with the material in box 2.2a. The proofs here will follow the same line of thought, but are a bit trickier. Again, it would be wise of you to learn this material because there is a nonzero probability that you will encounter it on an exam.
Here we shall prove that the product of two limits is the limit of the product. Again we shall do two proofs, one for sequences and one for real valued functions of a real variable. And again, we shall do the proof for sequences first.
Suppose we have two sequences, sk and tk, whose limits are Ls and Lt respectively. Again, this means that for any es > 0 you can find an ns such that

   |sk - Ls| ≤ es      eq. 2.2b-1a

whenever k ≥ ns, and that for any et > 0 you can find an nt such that

   |tk - Lt| ≤ et      eq. 2.2b-1b

whenever k ≥ nt.
And remember the contract analogy. If I tell you how close sk has to be to Ls by specifying an es, you can tell me how big k has to be so that that term in the sequence and all that follow are at least that close. Likewise with tk.
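To make the contract analogy concrete, here is a small Python sketch (my own illustration, not part of the proof) using the example sequence sk = 2 + 1/k, whose limit is 2. Given an es, it names an ns that honors the contract.

```python
import math

# Illustrative sequence s_k = 2 + 1/k, whose limit is L_s = 2.
def s(k):
    return 2.0 + 1.0 / k

def n_s(e_s):
    """Given a tolerance e_s, return an n_s honoring the contract:
    |s_k - 2| <= e_s for every k >= n_s."""
    # Here |s_k - 2| = 1/k, so any n >= 1/e_s works; the +1 keeps us
    # safely clear of floating-point fuzz at the boundary.
    return math.ceil(1.0 / e_s) + 1

n = n_s(0.001)
for k in range(n, n + 100):   # spot-check the terms from n onward
    assert abs(s(k) - 2.0) <= 0.001
```

Handing the function a smaller es simply yields a larger ns; that is the contract in action.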
Now you should be able to write the contract that we must show is an inescapable consequence of these two (hint: it involves the product of Ls with Lt and the product of sk with tk). When you are done, click here to continue.
Your contract should look something like: For every eprod > 0 you can find an nprod such that

   |(sk tk) - (Ls Lt)| ≤ eprod      eq. 2.2b-2

whenever k ≥ nprod.
The basic plan of this proof is the same as the proof for the sum of the limits. The algebra, though, is more complicated.
To begin with, we shall reword the contracts to express each using an error term. An error term in a sequence that has a limit is the difference between some term of the sequence and the limit. So for sk and for tk, we have:
   Ls - sk = errors[k]      eq. 2.2b-3a
   Lt - tk = errort[k]      eq. 2.2b-3b

from which we can reword the contracts to read: For every es > 0 you can find an ns such that

   |errors[k]| ≤ es      eq. 2.2b-4a

whenever k ≥ ns, and for every et > 0 you can find an nt such that

   |errort[k]| ≤ et      eq. 2.2b-4b

whenever k ≥ nt.
So what we are saying is that the kth error term of each sequence is the difference between the limit and the kth term of the sequence, and we are writing our contracts in terms of errors.
If we add sk to both sides of 2.2b-3a and tk to both sides of 2.2b-3b, we get:
   Ls = sk + errors[k]      eq. 2.2b-5a
   Lt = tk + errort[k]      eq. 2.2b-5b

Then multiplying 2.2b-5a by 2.2b-5b, we have:

   Ls Lt = (sk tk) + (sk errort[k]) + (tk errors[k]) + (errors[k] errort[k])      eq. 2.2b-6
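If you would like to convince yourself that the expansion in 2.2b-6 is right, this short Python snippet (an illustration of mine, with sequences chosen purely for the example) checks the identity numerically:

```python
# Check eq. 2.2b-6 numerically with the illustrative sequences
# s_k = 2 + 1/k (limit L_s = 2) and t_k = 3 - 1/k (limit L_t = 3).
L_s, L_t = 2.0, 3.0

for k in (1, 10, 100, 1000):
    s_k = 2.0 + 1.0 / k
    t_k = 3.0 - 1.0 / k
    err_s = L_s - s_k                     # eq. 2.2b-3a
    err_t = L_t - t_k                     # eq. 2.2b-3b
    rhs = (s_k * t_k) + (s_k * err_t) + (t_k * err_s) + (err_s * err_t)
    assert abs(L_s * L_t - rhs) < 1e-12   # left side equals right side
```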
Subtracting (sk tk) from both sides, we get:

   (Ls Lt) - (sk tk) = (sk errort[k]) + (tk errors[k]) + (errors[k] errort[k])      eq. 2.2b-7
We know from equations 2.2b-4a & b that we can make errors[k] and errort[k] as close to zero as we like by choosing k large enough. We are now in a position to show that you can therefore make the entire right-hand side of 2.2b-7 as close to zero as you like. But to do that, we first need to know that sk and tk are bounded, both from above and from below.

And how do we know this? Well if sk were unbounded from above, for example, that would mean that no matter how big a real number, r, you choose, there will always be some k such that sk > r. But the contract in 2.2b-1a rules that out: once k ≥ ns, sk can stray no farther than es above Ls. That leaves only the finitely many terms before ns, and a finite collection of numbers always has a greatest member. The same argument bounds sk from below, and bounds tk in both directions as well.
Applying the absolute value inequality (that the absolute value of a sum is never greater than the sum of the absolute values) to 2.2b-7, we get:

   |(Ls Lt) - (sk tk)| ≤ |(sk errort[k])| + |(tk errors[k])| + |(errors[k] errort[k])|      eq. 2.2b-8
So how big can the right-hand side of 2.2b-8 possibly be? We know that sk is never bigger than smax and never smaller than smin. Likewise tk is never bigger than tmax and never smaller than tmin. We have to concern ourselves with what gives us the worst value for the right-hand side of 2.2b-8. So let sworst be equal to the choice of smax or smin that has the larger absolute value. Do the same with t. Now 2.2b-8 becomes:
   |(Ls Lt) - (sk tk)| ≤ |(sworst errort[k])| + |(tworst errors[k])| + |(errors[k] errort[k])|      eq. 2.2b-9
Now keep in mind that sworst is the worst of all terms (that is, the one with the greatest magnitude) in the entire sk sequence. It is globally the worst. And so it is a constant, and does not vary with k. Likewise with tworst. The final step of this proof relies on both of them being constant.
Clearly, no matter how big in magnitude sworst and tworst are, each of the three terms on the right-hand side of 2.2b-9 can be brought as close to zero as you'd like by choosing one of the errors to be sufficiently close to zero. And according to 2.2b-1a through 2.2b-4b, we can bring the errors as close to zero as we'd like by choosing sufficiently large k.
If you can bring each of the three summands as close to zero as you'd like, then you can certainly bring the sum as close to zero as you'd like, and that means you can bring that sum to within eprod of zero. Whatever k it takes to do that, that is the nprod that satisfies the contract given by 2.2b-2.
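The whole sequence proof can be exercised numerically. The sketch below (again my own illustration, not part of the proof) uses sk = 2 + 1/k and tk = 3 - 1/k, so Ls Lt = 6, and names an nprod that honors the product contract of 2.2b-2 for any eprod you choose:

```python
import math

def n_prod(e_prod):
    """Return an n such that |s_k t_k - 6| <= e_prod for every k >= n."""
    # For these sequences, s_k t_k - 6 = (k - 1)/k**2, which is at most
    # 1/k, so any n >= 1/e_prod fulfills the contract.
    return max(1, math.ceil(1.0 / e_prod))

for e_prod in (0.5, 0.01, 0.0001):
    n = n_prod(e_prod)
    for k in range(n, n + 500):   # spot-check terms from n onward
        s_k = 2.0 + 1.0 / k
        t_k = 3.0 - 1.0 / k
        assert abs(s_k * t_k - 6.0) <= e_prod
```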
And that completes the proof. Again, I have been wordy and tried to detail each nuance of each step. I do that because I cannot predict where each student might get lost, and so I must assume that the student is always lost. If you are one of those who did get lost and never found your way again, here is a map of the proof to study before you go back over it:
First we determined the contracts that two limits dictate for us (eq.
2.2b-1a & b). Then
we determined what is the product contract that we are trying to prove
from them (eq. 2.2b-2). Then we wrote expressions for the error terms of the
two sequences (eq. 2.2b-3a & b). These merely give us names for the
difference terms we had on the left-hand sides of 2.2b-1a & b. So
we substituted those names in for the difference terms on the left-hand
sides of 2.2b-1a & b. This provided us with inequalities putting bounds
on the errors (eq. 2.2b-4a & b). We saved those inequalities for later use.
Next we algebraically munged 2.2b-3a & b into a form useful to us
(eq. 2.2b-5a & b) and multiplied them together to get 2.2b-6, which is
beginning to look very useful because it has the product of the limits
on one side and the product of the sequence terms (that is, sk tk) on the other. Subtracting the product of the sequence terms from both sides gave us 2.2b-7. Applying the absolute value inequality gave us 2.2b-8, and replacing sk and tk with the constants sworst and tworst gave us 2.2b-9, whose right-hand side we showed can be brought as close to zero as we like by choosing k large enough.
By now you are probably asking yourself, "When you are doing a complicated proof like this, how do you know what the next step will be? How can you choose it so that it keeps you on the path to the proper conclusion?" That is a matter of intuition. Nobody can teach it to you. It is a lot like how you decide what your next move in a chess game is. You are looking for a path that eventually checkmates your opponent before he checkmates you. Nobody can give you a hard and fast set of rules that will determine your next move each time. But if you play enough games, you will come to see them instinctively. The same is true with proofs. If you do enough of them, or follow enough of them through, you will eventually come to instinctively pick out what the next step ought to be at each point in the proof. Every chess beginner gets thrashed again and again. And if these proofs are thrashing you, it is only because you are a beginner at them. Go back and follow the steps through again. Each time you do, your power of intuition grows a bit stronger.
We now go on to prove the same thing for limits of real functions of a real variable. But before we do, it is necessary to prove what mathematicians call a lemma. A lemma is an assertion that we must show is true in order to show that some other assertion is true.
In the last proof we took advantage of the fact that the sequences,
sk and tk, were bounded. If we
have a function, f(x), that has a limit as x goes toward
a, we cannot make any blanket statement about f(x) being
bounded. Suppose you had, for example, f(x) = 1/x with x going toward a = 1. It certainly has a limit there (namely 1), yet f(x) as a whole is not bounded, since it blows up as x nears zero.
But we can show that if f(x) has a limit as x goes toward a, then there must be an interval around a in which f(x) is bounded.
What we mean by an interval is simply a contiguous segment of the real numbers.
In other words, all the real numbers between two endpoints. So, for example,
all the real numbers that lie between 0.25 and 0.5 constitute an interval.
All the real numbers that lie between 100.1 and 1000.1 also constitute an
interval. If we name any two distinct endpoints, p and q,
then we have named an interval. Since one of the endpoints must be greater than the other, let's say that q is the greater of the two. The interval, then, is all the x such that p < x < q.
The lemma is easy to prove if you simply recall the definition of a limit -- that
   lim f(x) = L
   x → a

if and only if for every e > 0 there is a d > 0 such that |f(x) - L| ≤ e whenever 0 < |x - a| ≤ d.
Suppose I give you an e. Then you can tell me a d that satisfies the above. The expression containing d puts restrictions on x. In fact, it holds x to an interval. Specifically, a - d ≤ x ≤ a + d (with x = a itself excluded).
Throughout that interval, f(x) obeys the e expression
above. Why? Because that's what we mean by saying f(x) has a limit,
L, at a. But that expression also restricts f(x) to an interval. Specifically, L - e ≤ f(x) ≤ L + e. But that places both upper and lower bounds on f(x), doesn't it? f(x) can't be any bigger than L + e, nor any smaller than L - e, anywhere in the interval around a. And that is exactly what the lemma asserts: f(x) is bounded on that interval.
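Here is a numeric illustration of the lemma (my own example, not part of the text): f(x) = 1/x is unbounded on its domain as a whole, yet near a = 1, where its limit is L = 1, it stays trapped between L - e and L + e on a small interval.

```python
# f(x) = 1/x has limit L = 1 as x -> a = 1, but is unbounded globally.
def f(x):
    return 1.0 / x

a, L = 1.0, 1.0
e = 0.5
# For this particular f, d = e / (1 + e) works: if |x - 1| <= d, then
# |1/x - 1| = |x - 1| / |x| <= d / (1 - d) = e.
d = e / (1.0 + e)

# Sample points in the interval around a (a itself excluded) and confirm
# f(x) never escapes the bounds L - e and L + e.
for i in range(1, 1001):
    x = (a - d) + (2.0 * d) * i / 1001.0
    if x == a:
        continue
    assert L - e <= f(x) <= L + e
```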
Go ahead and reread the proof to this lemma several times, then see if you can reproduce it on a whiteboard or a sheet of paper. If you won't be bothering anybody, explain out loud to an imaginary audience what the line of thought is here. I'll be there with you in spirit, listening.
And now for the proof that the product of the limits is the limit of the product for the type of limit where x goes toward a. I will outline the proof first. Then I'd like you to go ahead and attempt to complete it based upon what you've already learned from this box. If necessary, go back and reread this box from the beginning. Do that several times if you need to. Then go ahead with your attempt at the proof. Once you have completed it, then go on and read my version of the proof and compare it with yours.
First, apply the definition of a limit to construct two contracts in
the way we have been doing for f(x) and g(x). We
assume these to be true. Now also write the contract that we are trying
to prove is a necessary consequence of the first two.
It will involve an expression relating the product of the functions, f(x)g(x), to the product of the limits, Lf Lg.
Now, just as I did before in eq. 2.2b-3a & b and 2.2b-4a & b, reword the first two contracts in terms of errors. This time the error variables will not be indexed -- instead they will be functions of x (i.e., errorf(x) and errorg(x)). You should end up with two equations giving you expressions for your error terms, and two inequality contracts that use the e's, the d's, and the error terms.
Now take the two equations and rearrange them so that the limit is on the left and everything else is on the right. Then multiply the two equations together.
You aim to get the difference between the product of the limits and the product of the two functions on the left side. On the right you should have an error expression. This gives you an error expression for the product. See if you can figure out how to rearrange the product equation to that end.
Of course the contract we are trying to get uses the absolute value of the difference you end up with on the left-hand side. So take the absolute value of both sides. Then apply the absolute value inequality to the right-hand side to convert the equation into an inequality.
All you have to do now is show that you can bring the right-hand side
of the inequality as close to zero as you like by choosing x close
enough to a. But the right-hand side still has f(x)
and g(x) in it. So this is where you make use of the lemma we
proved earlier. For both f(x) and g(x), we know by
the lemma that intervals exist around the point a in which each of them is bounded.

[Diagram: the real number line around x = a, showing an interval in which f(x) is bounded, an interval in which g(x) is bounded, and the overlap of the two, in which both are bounded. We know these bounded intervals exist because the lemma tells us they must.]
Why do we go to all this trouble of devising an interval? Because as x goes toward a, it must certainly end up inside that interval, no matter how small the interval is. The extrema for f(x) and g(x) within that interval are constants, and constants behave better for our purpose than do functions of x.
You already know from the two error inequalities that you produced earlier that you can bring your two error functions as close to zero as you like by choosing x sufficiently close to a. Write a sentence or two arguing why the expression on the right can also be brought as close to zero as you like by choosing x sufficiently close to a. Once you've done that, you're done.
Click here to see the full proof.
Step 1: State the two contracts. This should be almost second nature to you by now. We have:
   lim f(x) = Lf      eq. 2.2b-10a
   x → a

and

   lim g(x) = Lg      eq. 2.2b-10b
   x → a

which means that for any ef > 0 there is a df > 0 such that |f(x) - Lf| ≤ ef whenever 0 < |x - a| ≤ df, and for any eg > 0 there is a dg > 0 such that |g(x) - Lg| ≤ eg whenever 0 < |x - a| ≤ dg.
Step 2: State what you want to prove. In this case we have:
   lim f(x)g(x) = Lf Lg      eq. 2.2b-11
   x → a

which means that for any eprod > 0 there is a dprod > 0 such that |f(x)g(x) - (Lf Lg)| ≤ eprod whenever 0 < |x - a| ≤ dprod.
Step 3: Reword the original two contracts in error terms. So we have errors of:
   Lf - f(x) = errorf(x)      eq. 2.2b-12a
   Lg - g(x) = errorg(x)      eq. 2.2b-12b

Now we stick the errors in to replace the difference terms in our contracts. So the contracts become: for every ef > 0 there is a df > 0 such that

   |errorf(x)| ≤ ef      eq. 2.2b-13a

whenever 0 < |x - a| ≤ df, and for every eg > 0 there is a dg > 0 such that

   |errorg(x)| ≤ eg      eq. 2.2b-13b

whenever 0 < |x - a| ≤ dg.
Step 4: Munge 2.2b-12a & b, then multiply them together. This will be what gives us the product of the limits on the left and something we can use on the right. So here is the munging:
   Lf = f(x) + errorf(x)      eq. 2.2b-14a
   Lg = g(x) + errorg(x)      eq. 2.2b-14b

Now we multiply them together:

   Lf Lg = (f(x)g(x)) + (f(x) errorg(x)) + (g(x) errorf(x)) + (errorf(x) errorg(x))      eq. 2.2b-15

Munge a little more:

   (Lf Lg) - (f(x)g(x)) = (f(x) errorg(x)) + (g(x) errorf(x)) + (errorf(x) errorg(x))      eq. 2.2b-16

and take the absolute value of both sides:

   |(Lf Lg) - (f(x)g(x))| = |(f(x) errorg(x)) + (g(x) errorf(x)) + (errorf(x) errorg(x))|      eq. 2.2b-17

Now all we have to do is show that the right-hand side of 2.2b-17 can be made as close to zero as anybody would like, and in particular, at least as close to zero as eprod, by choosing x to be sufficiently close to a.
Step 5: Apply the absolute value inequality to the sum. That is, the sum of the absolute values is always greater than or equal to the absolute value of the sum:

   |(Lf Lg) - (f(x)g(x))| ≤ |(f(x) errorg(x))| + |(g(x) errorf(x))| + |(errorf(x) errorg(x))|      eq. 2.2b-18

We're almost home now. We just have to show that each of the three absolute value terms on the right of the inequality can be made as close to zero as anybody would like. If they can, then their sum certainly can, right? The third term is no problem, since it is the product of two error terms, each of which we already know can be made as close to zero as we like because that is precisely what 2.2b-13a & b assert. It's the remaining two terms that give us a problem, because they contain f(x) and g(x), neither of which is a constant.
Step 6: Recall the lemma. We know we can always find an interval around a in which f(x) is bounded and another in which g(x) is bounded, so in the overlap of the two, both are bounded. Within that overlap, let fworst be the bound on f(x) with the greater absolute value, and let gworst be the bound on g(x) with the greater absolute value. Then 2.2b-18 becomes:

   |(Lf Lg) - (f(x)g(x))| ≤ |(fworst errorg(x))| + |(gworst errorf(x))| + |(errorf(x) errorg(x))|      eq. 2.2b-19

fworst and gworst are both constants. We know we can bring errorf(x) and errorg(x) as near to zero as we like by choosing x sufficiently close to a. That is what 2.2b-13a & b tell us. If you can bring something as near to zero as you like, you can also bring that same thing times a constant as near to zero as you like. And so you can certainly bring the whole right-hand expression as near to zero as you like, including at least as near to zero as eprod might ever demand. And that is precisely the contract we had to prove.
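As a final sanity check, here is a Python sketch of the contract in 2.2b-11 using illustrative functions of my own choosing: f(x) = x + 1 and g(x) = x², with a = 2, so Lf = 3, Lg = 4, and Lf Lg = 12.

```python
def f(x):
    return x + 1.0     # limit 3 as x -> 2

def g(x):
    return x * x       # limit 4 as x -> 2

a, L_f, L_g = 2.0, 3.0, 4.0

def check(e_prod, d_prod):
    """Spot-check |f(x)g(x) - L_f L_g| <= e_prod whenever 0 < |x - a| <= d_prod."""
    for i in range(-1000, 1001):
        if i == 0:
            continue               # x = a itself is excluded
        x = a + d_prod * i / 1000.0
        assert abs(f(x) * g(x) - L_f * L_g) <= e_prod
    return True

# On 1.9 <= x <= 2.1, f(x)g(x) = (x + 1) x**2 stays within 2 of 12,
# so d_prod = 0.1 honors the contract for e_prod = 2.
assert check(2.0, 0.1)
```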
As always, you are likely to have to review this several times before you understand it well. This is not a simple proof.
Send questions to: hahn@netsrq.com