Bézout’s identity for polynomials

Usually math doesn’t give me this sort of trouble, but I can’t seem to wrap my head around or find an explanation for Bezout’s identity for polynomials, which is as follows:

**Bézout’s identity**

As the common [roots](https://en.wikipedia.org/wiki/Root_of_a_polynomial) of two polynomials are the roots of their greatest common divisor, Bézout’s identity and [fundamental theorem of algebra](https://en.wikipedia.org/wiki/Fundamental_theorem_of_algebra) imply the following result:

For univariate polynomials *f* and *g* with coefficients in a field, there exist polynomials *a* and *b* such that *af* + *bg* = 1 if and only if *f* and *g* have no common root in any [algebraically closed field](https://en.wikipedia.org/wiki/Algebraically_closed_field) (commonly the field of [complex numbers](https://en.wikipedia.org/wiki/Complex_number)).

2 Answers

Anonymous 0 Comments

I am not trying to be pedantic, but do you understand what Bezout’s assertion means for integers?

Anonymous 0 Comments

I don’t know if your problem is with the statement itself, or with the proof of the statement.

So I’ll just give an example to illustrate the theorem. Let’s take the two polynomials F = X^2 - 2X and G = X^2 - 1. They have no common root (the roots of F are 0 and 2, while the roots of G are -1 and 1), so we can apply the theorem.

The statement says that we can find two polynomials A and B such that AF + BG = 1. In this example, one finds (after some computation) that A = -2X/3 - 1/3 and B = 2X/3 - 1 satisfy the identity, which means that we have:

(-2X/3 - 1/3)(X^2 - 2X) + (2X/3 - 1)(X^2 - 1) = 1

In case this is not clear, you can expand the left-hand side of this equality and check that all the terms cancel out except the constant 1. The “1” written on the right-hand side should be understood as the constant polynomial equal to 1.

This is a Bézout identity for these two polynomials F and G.
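If you want to check that expansion mechanically rather than by hand, here is a minimal sketch in Python using SymPy (my own choice of tool, not part of the original answer); it expands AF + BG and confirms that it collapses to the constant 1:

```python
from sympy import symbols, Rational, expand

X = symbols('X')

F = X**2 - 2*X
G = X**2 - 1
A = -2*X/3 - Rational(1, 3)   # Rational avoids a floating-point 1/3
B = 2*X/3 - 1

# Every term in X should cancel, leaving the constant polynomial 1.
print(expand(A*F + B*G))      # prints: 1
```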

The usual method to find the polynomials A and B is the extended Euclidean algorithm, which relies on Euclidean division of polynomials. In fact, the proof of the theorem depends heavily on the fact that such a Euclidean division exists in the ring of polynomials with coefficients in a given field (which makes that ring a [principal ideal domain](https://en.wikipedia.org/wiki/Principal_ideal_domain)).
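As a rough illustration of that method (a sketch only: the helper name poly_bezout and the use of SymPy are my own, and I assume rational coefficients), the extended Euclidean algorithm repeatedly divides while keeping track of the cofactors, then rescales so the final gcd is exactly 1:

```python
from sympy import symbols, div, expand

X = symbols('X')

def poly_bezout(f, g):
    """Extended Euclidean algorithm for univariate polynomials over a field
    (here the rationals). Returns (a, b, d) with a*f + b*g = d, d the monic gcd."""
    # Invariants kept at every step: r0 = a0*f + b0*g and r1 = a1*f + b1*g.
    r0, a0, b0 = f, 1, 0
    r1, a1, b1 = g, 0, 1
    while r1 != 0:
        q, r = div(r0, r1, X)          # Euclidean division: r0 = q*r1 + r
        r0, r1 = r1, r
        a0, a1 = a1, expand(a0 - q*a1)
        b0, b1 = b1, expand(b0 - q*b1)
    lead = r0.as_poly(X).LC()          # rescale so the gcd is monic
    return expand(a0/lead), expand(b0/lead), expand(r0/lead)

A, B, D = poly_bezout(X**2 - 2*X, X**2 - 1)
print(A, B, D)                         # -2*X/3 - 1/3,  2*X/3 - 1,  1
```

Running it on the example above reproduces the A and B given earlier.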

The converse is quite easy to prove. If F and G have a common root (call it r), then any polynomial of the form AF + BG also has r as a root: evaluating at r gives A(r)F(r) + B(r)G(r) = A(r)·0 + B(r)·0 = 0. But the constant polynomial 1 does not have r as a root, so in that case we can never find A and B such that AF + BG = 1.
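To see this direction concretely, here is another small SymPy sketch (the polynomials F and G below are my own example, chosen so they share the root 1): whatever A and B you pick, AF + BG vanishes at the common root, so it can never be the constant polynomial 1. The common root also shows up in the gcd, which is the observation the quoted statement starts from.

```python
from sympy import symbols, gcd, expand

X = symbols('X')

# Two polynomials that DO share a root, namely X = 1.
F = X**2 - 1             # roots: -1 and 1
G = X**2 - 3*X + 2       # roots:  1 and 2

print(gcd(F, G))          # X - 1  -> the common root appears in the gcd

# Pick any A and B at all; the combination always vanishes at the common root.
A = 5*X + 7
B = -X**3 + 2
H = expand(A*F + B*G)
print(H.subs(X, 1))       # 0, so H cannot be the constant polynomial 1
```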