I always thought integrals and antiderivatives were the same thing, but I recently read that they are separate things that can be related by the Fundamental Theorem of Calculus. How does that work? I tried reading a couple of explanations, but they didn’t dumb it down enough for me, I think.
An antiderivative of *f* is a function *F* whose derivative is *f*. An integral of *f* is a function *I* such that *I(x)* is the area under *f* between *0* and *x* (or between a general base point *x0* and *x*).
On the surface these concepts don’t seem to be related at all, but the Fundamental Theorem of Calculus states that if *f* is continuous, they are essentially the same thing: the area function *I* is an antiderivative of *f*.
An antiderivative of a function f(x) is simply another function F(x) such that F’(x) = f(x), i.e. taking the derivative of F gives you back f.
For example, consider f(x) = x. An antiderivative F(x) could be x^2 / 2 because F’(x) = 2x / 2 = x = f(x). But observe that this isn’t the ***only*** antiderivative, because say G(x) = x^2 / 2 + 7 ***also*** has the derivative G’(x) = 2x / 2 = x = f(x). This is because the derivative of any constant is 0. You may know this as adding “+ C” to the end of your antiderivative.
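If it helps to see a machine agree, here’s a quick Python check (using the sympy library, which is my choice of tool, not something the explanation depends on) that both antiderivatives really do differentiate back to f(x) = x:

```python
# Check with sympy (a symbolic math library) that both F(x) = x**2/2
# and G(x) = x**2/2 + 7 have derivative f(x) = x.
import sympy as sp

x = sp.symbols('x')
F = x**2 / 2             # one antiderivative
G = x**2 / 2 + 7         # another antiderivative (the "+ C" with C = 7)

print(sp.diff(F, x))     # x  -> F'(x) = f(x)
print(sp.diff(G, x))     # x  -> G'(x) = f(x); the constant 7 vanished
```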
What the Fundamental Theorem of Calculus says is that ***if*** you are trying to find the integral from a to b of f(x) ***and if*** you can find an antiderivative F(x), then the integral = F(b) – F(a), whatever antiderivative you pick.
So again, let’s go back to f(x) = x and integrate from 0 to 4. If you draw this on a graph, observe it’s a triangle. It has corners (0,0); (4,4); and (4,0). And we ***know*** the area of a triangle, so we know this should have area = 1/2 x base x height = 1/2 x 4 x 4 = 8.
Now let’s take our antiderivative F(x) = x^2 / 2 from earlier. F(4) = 4^2 / 2 = 16/2 = 8. F(0) = 0. Thus, it is ***true*** that F(4) – F(0) = 8 – 0 = 8 = the integral.
But we can ***also*** use our other antiderivative G(x). Observe G(4) = 4^2 / 2 + 7 = 8 + 7 = 15 and G(0) = 7. Thus G(4) – G(0) = 15 – 7 = 8 still.
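If you want to watch the theorem work numerically, here’s a small Python sketch (my own illustration) that chops that triangle into thin rectangles, adds them up, and compares the total to F(4) – F(0) and G(4) – G(0):

```python
# Approximate the integral of f(x) = x from 0 to 4 with a Riemann sum
# and compare it to F(b) - F(a) for two different antiderivatives.

def f(x):
    return x

def F(x):
    return x**2 / 2          # antiderivative with C = 0

def G(x):
    return x**2 / 2 + 7      # antiderivative with C = 7

a, b, n = 0.0, 4.0, 100_000
dx = (b - a) / n
riemann = sum(f(a + i * dx) * dx for i in range(n))   # area of thin rectangles

print(riemann)        # ~8.0 (gets closer to the triangle area as n grows)
print(F(b) - F(a))    # 8.0
print(G(b) - G(a))    # 8.0 -- the +7 cancels out
```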
At a high level, the two things answer different questions. An antiderivative answers “what function has this given function as its derivative?” An integral answers “what is the signed area under the curve?” In most cases, you’ll need an antiderivative in order to calculate an integral.
Integrals measure areas and volumes, almost by definition. Often it’s the area under a graph, but it can actually be something more general.
Antiderivatives try to find a function F that has a given function f as its rate of change, accumulating the values of f as the change of F.
Those two concepts turn out to be almost the same: accumulating the change proposed by f is essentially the same as finding the area under the graph of f. After all, both methods ultimately sum up values of f, scaled to an infinitely small step size. This argument, made formal, is essentially a proof of the Fundamental Theorem of Calculus.
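As a rough numerical sketch of that accumulation argument (my own illustration, using cos and sin just so there’s an exact answer to check against): add up the values of f in tiny steps, and the running total tracks an antiderivative of f.

```python
# Add up values of f(x) = cos(x) in tiny steps ("area so far") and compare
# the running total with sin(x), a known antiderivative of cos(x).
import math

dx = 0.0001
area_so_far = 0.0

for i in range(30000):                        # sweep x from 0 up to 3
    area_so_far += math.cos(i * dx) * dx      # accumulate one thin slice
    if (i + 1) % 10000 == 0:                  # report at x = 1, 2, 3
        x = (i + 1) * dx
        print(x, area_so_far, math.sin(x))    # the two values nearly match
```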
However, it should be noted that integration _is_ really more general. There are functions and shapes where this notion of volume still makes sense, but “rate of change” does not.
Suppose you start with a sequence S that has terms s_1, s_2, s_3, … and so on.
Now let’s introduce an operation D that will assign to a sequence S a new sequence D(S) whose terms are the differences s_2-s_1, s_3-s_2, … and so on. For example, if S=(2,4,6,8,10,…), then D(S)=(2,2,2,2,…), or if S=(1,4,9,16,…), then D(S)=(3,5,7,…).
Now let’s introduce another operation P that will assign to a sequence S a new sequence P(S) whose terms are partial sums s_1, s_1+s_2, s_1+s_2+s_3, … and so on. For example, if S=(1,1,1,1,…), then P(S)=(1,2,3,4,…), or if S=(1,10,100,1000,…), then P(S)=(1,11,111,1111,…).
Those two operations seem unrelated at first glance. However, for the sake of illustration, let’s consider a sequence S=(0,1,4,6,8,9,…). Then the sequence D(S) will be (1,3,2,2,1,…). Now, D(S) is a sequence too, so we can perform the operation P on it. We obtain P(D(S))=(1,4,6,8,9,…). That looks very similar to the original sequence S, except we skipped the first term.
Indeed, looking closely at what happens, the first few terms of D(S) were (1-0, 4-1, 6-4, …), and the terms of P(D(S)) were (1-0, (1-0)+(4-1), (1-0)+(4-1)+(6-4), …), and you can see the pattern: the sums telescope. For example, (1-0)+(4-1) = 4+(-1+1)-0 = 4-0 = 4, and (1-0)+(4-1)+(6-4) = 6+(-4+4)+(-1+1)-0 = 6-0 = 6. So we recovered the terms of the original sequence, apart from the first one, which was 0.
It looks like the operation P is a kind of opposite to the operation D. At this point, people who know calculus should be able to tell where I’ve tricked you.
I purposefully used a sequence S that started with a zero. Notice, for example, that if I had started with a sequence T=(2,3,6,8,10,11,…), which is just the sequence S+2=(0+2,1+2,4+2,6+2,8+2,9+2,…) (I added the constant 2 to every term of S), the sequence D(T)=(1,3,2,2,1,…) is the same as D(S), and consequently P(D(T)) would be the same as P(D(S)) as well. So the composite operation PD doesn’t yield the original sequence, only the sequence shifted by a constant.
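Here’s a small Python sketch of the D and P operations (my own code, just to make the example concrete), including the “trick” that S and the shifted sequence T give the same output:

```python
# The difference operation D and the partial-sum operation P from above,
# applied to finite chunks of the sequences in the example.

def D(seq):
    """Differences: s2-s1, s3-s2, ..."""
    return [b - a for a, b in zip(seq, seq[1:])]

def P(seq):
    """Partial sums: s1, s1+s2, s1+s2+s3, ..."""
    out, total = [], 0
    for s in seq:
        total += s
        out.append(total)
    return out

S = [0, 1, 4, 6, 8, 9]
T = [s + 2 for s in S]          # the shifted sequence (2, 3, 6, 8, 10, 11)

print(D(S))                     # [1, 3, 2, 2, 1]
print(P(D(S)))                  # [1, 4, 6, 8, 9]  -> S without its first term
print(D(T))                     # [1, 3, 2, 2, 1]  -> same as D(S)
print(P(D(T)))                  # [1, 4, 6, 8, 9]  -> the "+2" is lost
```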
Now, doing all this rigorously, and for continuous functions instead of sequences, D becomes the differentiation operation and P becomes the integration operation. The relationship that PD returns the original function only up to a shift by a constant is the main idea of the Fundamental Theorem of Calculus, which says that under certain conditions you can calculate the integral (a kind of sum) with the help of an antiderivative (formally reversing the differentiation operation).