Subtraction isn’t much harder than addition. Division isn’t much harder than multiplication. Logarithmization isn’t much harder than exponentiation. Then, what fundamentally makes integration so much more difficult than differentiation?

In: Mathematics

6 Answers

Anonymous 0 Comments

Well, we know that 3 * 4 = 12. But if I gave you 12, could you tell me which two numbers were multiplied to produce it? That is somewhat analogous to the problem.
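
To make the non-uniqueness concrete, here is a minimal Python sketch (the variable names are mine, just for illustration) that lists every factor pair of 12:

```python
n = 12

# Every unordered pair (a, b) with a * b == n: the "inverse" of
# multiplication has several equally valid answers.
pairs = [(a, n // a) for a in range(1, int(n**0.5) + 1) if n % a == 0]
print(pairs)  # [(1, 12), (2, 6), (3, 4)]
```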

Given a function, it is usually relatively straightforward to work out by how much the output value will change if the input value changes slightly. Taken to the limit, this is what differentiation does.

However, given a function, finding another function whose rate of change at every point equals the first function’s value at that point is not straightforward at all.

Say you had a credit card and your outstanding balance increased by $10 from yesterday to today. Now you ask someone else, “why did that happen?” How would they figure it out? Did you spend $4 on coffee and $6 on a burger? Did you spend $8 on fried chicken and $2 on a soda?

On the other hand, if you said “I spent $6 on a hotdog and $4 on a coffee yesterday”, it doesn’t take much to conclude that “you spent $10 in one day”.

Anonymous 0 Comments

There’s some good discussion here:

https://math.stackexchange.com/questions/20578/why-is-integration-so-much-harder-than-differentiation

One main answer is that differentiation is local while integration is more global.

Also, this isn’t what you’re asking, but funnily enough, it’s much more likely (in some sense) for a function to be integrable than differentiable: “most” functions are integrable, but “most” functions aren’t differentiable. The Weierstrass function, for example, is continuous everywhere (hence integrable on any closed interval) but differentiable nowhere.

Anonymous 0 Comments

All of those other operations are simple mappings between numbers that more or less define their own inverses. Subtraction is addition with a negative number, so it literally is addition as soon as you get negative numbers involved. Division literally is multiplication once you get rationals involved, and both are addition on log scales, e.g. on a slide rule, where you can read off either the addition/multiplication or the corresponding subtraction/division. Exponentiation is a monotonic function with one input, and we happen to know the series expansion for the inverse, so we gave it a name and now it’s a thing.
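
A quick sketch of the slide-rule point in Python: on a log scale, multiplication and division really do become addition and subtraction (the numbers here are arbitrary):

```python
import math

a, b = 3.7, 42.0

# Multiplying is adding logs; dividing is subtracting them.
print(math.exp(math.log(a) + math.log(b)), a * b)  # both 155.4 (up to rounding)
print(math.exp(math.log(a) - math.log(b)), a / b)  # both ~0.0881
```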

Differentiation is a mapping of functions to functions, and there are very few functions that we have a name for and understand well. Mechanically, integration is sometimes easier than differentiation: a continuous function might not be differentiable, but it is always integrable (at least on a closed, bounded interval). Even a discontinuous function might still be integrable across its discontinuities.

The hard part about integration is that the result doesn’t always come out to a known function, and it’s sometimes hard to know what that function would be. There are a variety of techniques for integrating functions when you only want the area under the entire curve. And you can always define a new function as the antiderivative of some ugly integrand; then you have a name for it, you just don’t know much about it.

Basically, what it comes down to is this: take all the simple functions we know and differentiate them, and the answers you get are the set of functions for which it’s easy to invert the mapping. All the others are hard, because differentiation isn’t that simple a mapping between functions unless the functions themselves are relatively simple.

Anonymous 0 Comments

Through numerical methods, integration is perfectly easy. You can just go through the function and add up all the little pieces as you go along, and that gives you the integral. It is, as you’re suggesting, no harder than differentiation.
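
A minimal sketch of that “add up all the little pieces” idea (the function name, midpoint rule, and test integrand are my choices, just for illustration):

```python
import math

def integrate_numerically(f, a, b, n=100_000):
    """Approximate the integral of f over [a, b] by summing n thin rectangles."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

# The integral of sin(x) from 0 to pi is exactly 2.
print(integrate_numerically(math.sin, 0.0, math.pi))  # ~2.0000000000
```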

But what is hard is *analytical* integration. To get from a function to its derivative, there is a specific process: df/dx = (f(x+dx) − f(x))/dx as dx → 0. No matter how complicated the function f is, you can write out the formula for df/dx (if it’s a continuous, differentiable function). But integration is the reverse of that, and there is no single process for it. You just have to hope that there is some function that, when differentiated, gives you f. For many functions we know what that function is, but there’s no guarantee that an analytical function exists that can be differentiated into the function you want.
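
You can watch this asymmetry in software; a sketch using SymPy (assuming it is installed). Differentiation follows mechanical rules, while integrating e^(−x²) forces SymPy outside the elementary functions, to the special function erf:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(-x**2)

print(sp.diff(f, x))       # -2*x*exp(-x**2): the chain rule, applied mechanically
print(sp.integrate(f, x))  # sqrt(pi)*erf(x)/2: no elementary antiderivative exists,
                           # so the answer needs the special function erf
```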

Anonymous 0 Comments

It isn’t a general rule that operations should be roughly as easy as their inverses. Encryption algorithms, for example, are deliberately very difficult to reverse, and multiplying two large primes is easy while the inverse, prime factorization, is famously hard.

Anyway, differentiation only needs to know how the function behaves on a tiny interval (barely more than a single point).

But integration needs to know how the function behaves over large ranges (sometimes everywhere).

Anonymous 0 Comments

I think what’s happening here is a conflation of two aspects: the numerical and the symbolic.

It is usually (relatively) easy to do all the mentioned operations numerically, to a given precision. We can work with decimals and can, if so required, carefully track significant digits and error margins. That even works for differentiation and integration. There are better methods, but one can often go with the basic approach of drawing little vertical bars to find the area under a curve.
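
The differentiation half of that claim is only a couple of lines; a central-difference sketch (the function, point, and step size here are arbitrary choices):

```python
import math

def derivative_numerically(f, x, h=1e-6):
    """Central-difference estimate of f'(x); the error shrinks roughly like h**2."""
    return (f(x + h) - f(x - h)) / (2 * h)

# The derivative of exp at 1 is e = 2.71828...
print(derivative_numerically(math.exp, 1.0))
```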

Symbolically, I would argue that all of those operations are difficult! Yes, we can easily, and without any numerical error, add, subtract, multiply or take powers of integers and the like. The comparable case for differentiation and integration is working with basic polynomials such as x³+7x²-5x+2: it is not hard to find the derivative and antiderivative of such expressions.
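
Concretely, both directions are mechanical there: the derivative of x³+7x²-5x+2 is 3x²+14x-5, and an antiderivative is x⁴/4 + 7x³/3 - 5x²/2 + 2x + C.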

Stuff, however, gets ugly when we involve slightly more complicated numbers: ask yourself whether e^(𝜋·√163) is actually an integer. Or whether there is any expression using e and 𝜋 (but no other numbers or constants) and only +, -, · that amounts to exactly 1. The answer to the first question is no, but you need quite a bit of digit- and error-tracking to properly see it. The second is an open problem that all of humanity has so far failed to solve!
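
You can do that digit-tracking yourself; a sketch with the arbitrary-precision library mpmath (assuming it is installed):

```python
from mpmath import mp, exp, pi, sqrt

mp.dps = 40  # work with 40 significant digits
print(exp(pi * sqrt(163)))
# 262537412640768743.9999999999992500...: within about 7.5e-13 of an integer,
# so it takes real precision to see that it is not one
```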

So why would we expect symbolic differentiation and integration to be any simpler? Differentiation works reasonably simply for what we call _elementary functions_, those built from the four basic arithmetic operations, exponentials, logarithms and roots. But as soon as we allow more complicated functions, such as Riemann’s zeta function or the Gamma function, we don’t get a result with any better description than “the derivative of that function”.

Same with integration: there actually _is_ an algorithm, due to Risch, that finds an antiderivative of an elementary function as an elementary expression if one exists, and tells you when none exists*. It only uses the more commonly known methods to accomplish that. And again, for more complex things such as the zeta function(s), the antiderivative is “just that”; it lacks a better description, exactly as with derivatives.

*: Terms and conditions apply: it has to assume that the question “do two elementary expressions describe the same function?” can be decided algorithmically; we currently don’t know whether that is possible.
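
SymPy ships a partial implementation of Risch’s algorithm, and for the classic example e^(x²) it can even certify that no elementary antiderivative exists; a sketch (assuming SymPy is installed):

```python
import sympy as sp
from sympy.integrals.risch import NonElementaryIntegral, risch_integrate

x = sp.symbols('x')
result = risch_integrate(sp.exp(x**2), x)

print(result)                                     # Integral(exp(x**2), x)
print(isinstance(result, NonElementaryIntegral))  # True: provably not elementary
```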