
The Lagrange interpolating polynomial

Let \( n \hspace{0.1em} \in \hspace{0.05em} \mathbb{N} \) be a natural number, and let \( \Bigl \{ \bigl \{x_i, y_i \bigr \}_{ i \hspace{0.1em} \in \hspace{0.05em} [\![ 0, n ]\!]} \Bigr\}\) be a given dataset of antecedents and images.

This method tells us that it is possible to construct a unique polynomial of degree at most \(n\) passing through this series of values, and gives its explicit expression.


There are two ways to generate this polynomial:


By the direct construction of a polynomial \(L(X) = \sum L_j(X)\)

$$ \forall X \in \mathbb{R}, $$

$$ L(X) = \sum_{j = 0}^n y_j \Biggl[ \prod_{\underset{i \neq j}{i=0}}^n \frac{X - x_i}{x_j - x_i} \Biggr]$$
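This closed-form expression can be evaluated directly in code. Below is a minimal sketch in plain Python (the function name `lagrange_eval` and the sample dataset are illustrative choices, not from the text), using exact `Fraction` arithmetic to avoid rounding:

```python
from fractions import Fraction

def lagrange_eval(xs, ys, X):
    """Evaluate the Lagrange interpolating polynomial L at the point X,
    using the sum-of-products formula above."""
    total = Fraction(0)
    for j, yj in enumerate(ys):
        term = Fraction(yj)
        for i, xi in enumerate(xs):
            if i != j:
                # factor (X - x_i) / (x_j - x_i)
                term *= Fraction(X - xi, xs[j] - xi)
        total += term
    return total

# Sample dataset: the polynomial must pass through every given point.
xs, ys = [0, 1, 2], [2, -3, 1]
print([lagrange_eval(xs, ys, x) for x in xs])  # → [Fraction(2, 1), Fraction(-3, 1), Fraction(1, 1)]
```

By construction, evaluating at any node \(x_j\) returns exactly \(y_j\).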


By constructing a polynomial \(P_n(X)\), finding the unique solution for the coefficients \(\bigl \{a_0, \ a_1, \ ..., \ a_n \bigr\} \), the unknowns of the system \( (S) \)

Given the following system \( (S) \):

$$ (S) \enspace \left \{ \begin{gather*} a_0 + a_1 x_0 + a_2 x_0 ^2 + \hspace{0.1em}... \hspace{0.1em}+ a_n x_0 ^n = y_0 \\ a_0 + a_1 x_1 + a_2 x_1 ^2 + \hspace{0.1em}... \hspace{0.1em}+ a_n x_1 ^n = y_1 \\ \vdots \\ a_0 + a_1 x_n + a_2 x_n ^2 + \hspace{0.1em}... \hspace{0.1em}+ a_n x_n ^n = y_n \\ \end{gather*} \right \} $$

We can solve it by at least two methods:

  1. By directly solving the system \( (S) \) by substitution or by a Gaussian pivot

    $$ P_n(X) = a_0 + a_1 X + a_2 X^2 + \hspace{0.1em}... \hspace{0.1em}+ a_n X^n$$

    With \( \bigl \{a_0, \ a_1, \ ..., \ a_n \bigr\} \) as the unique solution of the system \( (S)\)


  2. By solving the equivalent matrix system \( (S^*) \)

    $$ (S^*) \enspace \Longleftrightarrow \enspace \underbrace{ \begin{pmatrix} 1 & x_0 & x_0^2 & \dots & x_0^n \\ 1 & x_1 & x_1^2 & \dots & x_1^n \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & x_n^2 & \dots & x_n^n \\ \end{pmatrix} } _\text{X} \times \underbrace{ \begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_n \\ \end{pmatrix} } _\text{A} = \underbrace{ \begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ \vdots \\ y_n \\ \end{pmatrix} } _\text{Y} $$

    And at that moment:

    $$ P_n(X) = a_0 + a_1 X + a_2 X^2 + \hspace{0.1em}... \hspace{0.1em}+ a_n X^n$$

    $$ \forall k \in [\![ 0, n]\!], \enspace a_k = \frac{det(X_{k+1})}{det(X)}$$

    Where \(X_{k+1}\) represents the square matrix formed from \(X\) by replacing its \((k+1)\)-th column with the column matrix \(Y\).


Demonstration


Let \( n \hspace{0.1em} \in \hspace{0.05em} \mathbb{N} \) be a natural number, and let \( \Bigl \{ \bigl \{x_i, y_i \bigr \}_{ i \hspace{0.1em} \in \hspace{0.05em} [\![ 0, n ]\!]} \Bigr\}\) be a given dataset of antecedents and images.

$$ \forall n \hspace{0.1em} \in \hspace{0.05em} \mathbb{N}, \ \Bigl \{ \bigl \{x_i, y_i \bigr \}_{ i \hspace{0.1em} \in \hspace{0.05em} [\![ 0, n ]\!]} \Bigr\} = \Bigl \{ \bigl \{x_0, y_0 \bigr\}, \bigl \{x_1, y_1 \bigr\}, \ ..., \bigl \{x_n, y_n \bigr\} \Bigr\} $$

The following figure shows an example of such a dataset:

Lagrange interpolating polynomial - given series of values

We want to find a polynomial function that would correspond to this given series of values.


For a given natural number \(m \enspace (m \in \mathbb{N} )\), a polynomial of degree \( m\) is defined as follows:

$$ \forall i \in [\![ 0, m]\!] , \enspace \exists a_i \in \mathbb{R}, \enspace a_m \neq 0, \enspace \forall X \in \mathbb{R},$$

$$ P_m(X) = a_0 + a_1 X + a_2 X^2 + \hspace{0.1em}... \hspace{0.1em}+ a_m X^m$$

To find this function \( L(X) \), we will therefore have to find a value for each coefficient \(a_i\), for a suitable degree \(m\).

We then have the correspondence between our coefficients and our series of values, which gives us the following system \((S)\):

$$ (S) \enspace \left \{ \begin{gather*} a_0 + a_1 x_0 + a_2 x_0 ^2 + \hspace{0.1em}... \hspace{0.1em}+ a_m x_0 ^m = y_0 \\ a_0 + a_1 x_1 + a_2 x_1 ^2 + \hspace{0.1em}... \hspace{0.1em}+ a_m x_1 ^m = y_1 \\ \vdots \\ a_0 + a_1 x_k + a_2 x_k ^2 + \hspace{0.1em}... \hspace{0.1em}+ a_m x_k ^m = y_k \\ \vdots \\ a_0 + a_1 x_m + a_2 x_m ^2 + \hspace{0.1em}... \hspace{0.1em}+ a_m x_m ^m = y_m \\ \end{gather*} \right \} $$


There are at least two ways to generate this polynomial \(L(X)\):

  1. By the direct construction of a polynomial \(L(X) = \sum L_j(X)\)

  2. By constructing a polynomial \(P_n(X)\), finding the unique solution for the coefficients \(\bigl \{a_0, \ a_1, \ ..., \ a_n \bigr\} \), the unknowns of the system \( (S) \)


By the direct construction of a polynomial \(L(X) = \sum L_j(X)\)

This method consists of constructing a family of \(n+1\) distinct polynomials \( \Bigl \{L_j(X) \Bigr\}_{ j \hspace{0.1em} \in \hspace{0.05em} [\![ 0, n ]\!]} \) such that:

$$ \forall (i,j) \in [\![ 0, n]\!]^2, \enspace i \neq j, \enspace \Biggl \{ \begin{gather*} L_j(x_j) = 1 \\ L_j(x_i) = 0 \qquad (C) \end{gather*} $$

We will therefore construct each polynomial one by one.

For all \( j \in [\![ 0, n]\!] \), the condition \( (C) \) must be satisfied, that is to say the \( x_i \ ( i\neq j) \) must be roots of \( L_j \), so that this polynomial vanishes at these points. Then,

$$ L_j(X) = \prod_{\underset{i \neq j}{i=0}}^n (X - x_i)$$

Finally, in order to have \( L_j(x_j) = 1 \), the denominator must be equal to the numerator when \( X = x_j \). So,

$$ L_j(X) = \prod_{\underset{i \neq j}{i=0}}^n \frac{X - x_i}{x_j - x_i} $$

Adding this denominator does not change anything about the previously established condition \( (C) \).

Moreover, since the \(x_i\) are pairwise distinct, the condition \( i \neq j \) assures us that the denominator will never be zero.
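This construction can be checked numerically: by condition \( (C) \), each basis polynomial must take the value \(1\) at its own node and \(0\) at every other node. Below is a minimal sketch in plain Python (the name `lagrange_basis` and the sample nodes are illustrative assumptions):

```python
from fractions import Fraction

def lagrange_basis(xs, j, X):
    """Evaluate the basis polynomial L_j at the point X."""
    result = Fraction(1)
    for i, xi in enumerate(xs):
        if i != j:
            result *= Fraction(X - xi, xs[j] - xi)
    return result

# Sample pairwise-distinct nodes (illustrative values).
xs = [0, 1, 2]
# Condition (C): L_j(x_j) = 1 and L_j(x_i) = 0 for i != j.
table = [[lagrange_basis(xs, j, xi) for xi in xs] for j in range(len(xs))]
print(table == [[1, 0, 0], [0, 1, 0], [0, 0, 1]])  # → True
```

The table of values \(L_j(x_i)\) is exactly the identity pattern required by \( (C) \).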

Now, by multiplying each polynomial \( L_j(X) \) by the value of \( y_j \), we ensure that:

$$ \forall (i,j) \in [\![ 0, n]\!]^2, \enspace i \neq j, \enspace \Biggl \{ \begin{gather*} y_j L_j(x_j) = y_j \\ y_j L_j(x_i) = 0 \end{gather*} $$

Then, this resulting polynomial:

$$ L(X) = y_0 L_0(X) + y_1 L_1(X) + \hspace{0.1em}... \hspace{0.1em}+ y_n L_n(X) $$

will correctly match each image to its antecedent in our series of initial values \( \Bigl \{ \bigl \{x_0, y_0 \bigr\}, \bigl \{x_1, y_1 \bigr\}, ..., \bigl \{x_n, y_n \bigr\} \Bigr\} \) .


And as a result,

$$ \forall X \in \mathbb{R}, $$

$$ L(X) = \sum_{j = 0}^n y_j \Biggl[ \prod_{\underset{i \neq j}{i=0}}^n \frac{X - x_i}{x_j - x_i} \Biggr]$$


By constructing a polynomial \(P_n(X)\), by finding the only solution for the coefficients \(\bigl \{a_0, \ a_1, \ ..., \ a_n \bigr\} \) unknown to the system \( (S) \)

Since our dataset has \(n + 1\) pairs of values, we will have the system \((S)\) depending on \(n\):

$$ (S) \enspace \left \{ \begin{gather*} a_0 + a_1 x_0 + a_2 x_0 ^2 + \hspace{0.1em}... \hspace{0.1em}+ a_n x_0 ^n = y_0 \\ a_0 + a_1 x_1 + a_2 x_1 ^2 + \hspace{0.1em}... \hspace{0.1em}+ a_n x_1 ^n = y_1 \\ \vdots \\ a_0 + a_1 x_n + a_2 x_n ^2 + \hspace{0.1em}... \hspace{0.1em}+ a_n x_n ^n = y_n \\ \end{gather*} \right \} $$

It is useless to take a polynomial of degree greater than \(n\): the system would then be underdetermined, leaving one or more free variables that would serve no purpose.

We can proceed in at least two different ways to solve \( (S) \):

  1. By directly solving the system \( (S) \) by substitution or by a Gaussian pivot

  2. By solving the equivalent matrix system \( (S^*) \)


  1. By directly solving the system \( (S) \) by substitution or by a Gaussian pivot

    A system of linear equations such as \( (S) \) has a unique solution if and only if its associated homogeneous system \( (S_h) \) (i.e. with a zero right-hand side) has a unique solution.

    $$ (S_h) \enspace \left \{ \begin{gather*} a_0 + a_1 x_0 + a_2 x_0 ^2 + \hspace{0.1em}... \hspace{0.1em}+ a_n x_0 ^n = 0 \\ a_0 + a_1 x_1 + a_2 x_1 ^2 + \hspace{0.1em}... \hspace{0.1em}+ a_n x_1 ^n = 0 \\ \vdots \\ a_0 + a_1 x_n + a_2 x_n ^2 + \hspace{0.1em}... \hspace{0.1em}+ a_n x_n ^n = 0 \\ \end{gather*} \right \} $$

    However, the system \( (S_h)\) says that a polynomial of degree at most \(n\) vanishes at the \(n+1\) distinct points \(x_0, \ x_1, \ ..., \ x_n\). Since a non-zero polynomial of degree at most \(n\) has at most \(n\) roots, this polynomial is identically zero; and a polynomial is identically zero if and only if all of its coefficients are zero.

    Then, the solution of the system \( (S_h)\) is unique and:

    $$ \forall i \in [\![ 0, n]\!], \enspace \mathcal{S}_h \hspace{0.1em}= \Bigl\{ a_i =0 \Bigr\}$$

    We then conclude that the solution of the system \( (S)\) is also unique.


    We then solve this system with the values \( \Bigl \{ \bigl \{x_0, y_0 \bigr\}, \bigl \{x_1, y_1 \bigr\}, ..., \bigl \{x_n, y_n \bigr\} \Bigr\}\) of the series, either by substitution or by a Gaussian pivot.


    We will then obtain a unique polynomial which is:

    $$ P_n(X) = a_0 + a_1 X + a_2 X^2 + \hspace{0.1em}... \hspace{0.1em}+ a_n X^n$$

    With \( \bigl \{a_0, \ a_1, \ ..., \ a_n \bigr\} \) as the unique solution of the system \( (S)\)


  2. By solving the equivalent matrix system \( (S^*) \)

    It is possible to transpose this system into a matrix system:

    $$ (S^*) \enspace \Longleftrightarrow \enspace \underbrace{ \begin{pmatrix} 1 & x_0 & x_0^2 & \dots & x_0^n \\ 1 & x_1 & x_1^2 & \dots & x_1^n \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & x_n^2 & \dots & x_n^n \\ \end{pmatrix} } _\text{X} \times \underbrace{ \begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_n \\ \end{pmatrix} } _\text{A} = \underbrace{ \begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ \vdots \\ y_n \\ \end{pmatrix} } _\text{Y} $$

    $$ (S)\Longleftrightarrow X \times A = Y $$

    With such a system, there exists a unique solution if and only if its matrix (here \(X\)) is invertible, which is the case if and only if its determinant is non-zero.

    This will be the case here, because our matrix is a Vandermonde matrix, whose determinant is non-zero if and only if the \(x_i \) are pairwise distinct.
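    Indeed, the determinant of a Vandermonde matrix factors explicitly (a classical result):

    $$ det(X) = \prod_{0 \hspace{0.1em} \leqslant \hspace{0.1em} i \hspace{0.1em} < \hspace{0.1em} j \hspace{0.1em} \leqslant \hspace{0.1em} n} (x_j - x_i) $$

    which is zero exactly when two of the \(x_i\) coincide.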

    This is the case here, since the \(x_i\) of our series of values are all distinct; the solution is therefore unique.

    We first calculate the general determinant of the matrix \(X\), then each specific determinant, in order to obtain our series of solutions \(\{ a_0, a_1, ..., a_n \}\), the coefficients of the polynomial to be determined.

    $$ \forall k \in [\![ 0, n]\!], \enspace a_k = \frac{det(X_{k+1})}{det(X)}$$

    Where \(X_{k+1}\) represents the square matrix formed from \(X\) by replacing its \((k+1)\)-th column with the column matrix \(Y\).

    We will then obtain a unique polynomial:

    $$ P_n(X) = a_0 + a_1 X + a_2 X^2 + \hspace{0.1em}... \hspace{0.1em}+ a_n X^n$$

    $$ \forall k \in [\![ 0, n]\!], \enspace a_k = \frac{det(X_{k+1})}{det(X)}$$

    Where \(X_{k+1}\) represents the square matrix formed from \(X\) by replacing its \((k+1)\)-th column with the column matrix \(Y\).
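This determinant-based recipe translates directly into code. Below is a minimal sketch in plain Python (function names are illustrative; `det` is a naive cofactor expansion, written for clarity rather than efficiency):

```python
from fractions import Fraction

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    total = Fraction(0)
    for c in range(len(M)):
        minor = [row[:c] + row[c + 1:] for row in M[1:]]
        total += (-1) ** c * M[0][c] * det(minor)
    return total

def cramer_coefficients(xs, ys):
    """Solve X * A = Y for the coefficients a_0..a_n via Cramer's rule."""
    n = len(xs)
    X = [[Fraction(x) ** p for p in range(n)] for x in xs]  # Vandermonde matrix
    dX = det(X)  # non-zero as long as the xs are pairwise distinct
    coeffs = []
    for k in range(n):
        # Matrix X with the column corresponding to a_k replaced by Y.
        Xk = [row[:k] + [Fraction(y)] + row[k + 1:] for row, y in zip(X, ys)]
        coeffs.append(det(Xk) / dX)
    return coeffs

# Sample dataset: returns the coefficients a_0, a_1, a_2.
print(cramer_coefficients([0, 1, 2], [2, -3, 1]))  # → [Fraction(2, 1), Fraction(-19, 2), Fraction(9, 2)]
```

Exact `Fraction` arithmetic keeps the rational coefficients free of floating-point error.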


Example


Construction of a polynomial of the second degree

Let us construct a polynomial of degree \(2\) using these methods, given the following dataset of three points:

$$\Bigl \{ \bigl \{x_0 = 0, y_0 = 2 \bigr\}, \bigl \{x_1 =1, y_1 = -3 \bigr\}, \bigl \{x_2 = 2, y_2 = 1 \bigr\} \Bigr\} $$

Lagrange interpolating polynomial - example of dataset

Let us construct this unique polynomial with the two possibilities seen above:

  1. By the direct construction of a polynomial \(L(X) = \sum L_j(X)\)

  2. By constructing a polynomial \(P_n(X)\), finding the unique solution for the coefficients \(\bigl \{a_0, \ a_1, \ ..., \ a_n \bigr\} \), the unknowns of the system \( (S) \)


  1. By the direct construction of a polynomial \(L(X) = \sum L_j(X)\)

    We have three polynomials to construct: \(L_0(X), \ L_1(X), \ L_2(X) \).

    According to the formula, each \(L_j(X)\) will be of the form:

    $$ L_j(X) = \prod_{\underset{i \neq j}{i=0}}^{2} \frac{X - x_i}{x_j - x_i} $$

    1. For \(x_0\):
      $$ L_0(X) = \frac{(X - x_1)}{(x_0 - x_1)} \frac{(X - x_2)}{(x_0 - x_2)} $$

      $$ L_0(X) = \frac{(X - 1)}{(0 - 1)} \frac{(X - 2)}{(0 - 2)} $$

      $$ L_0(X) = \frac{X^2 - 3X + 2}{2} $$

      We indeed have:

      $$ \Biggl \{ \begin{gather*} L_0(x_0) = 1 \\ L_0(x_1) = 0, \enspace L_0(x_2) = 0 \end{gather*} $$

    2. For \(x_1\):
      $$ L_1(X) = \frac{(X - x_0)}{(x_1 - x_0)} \frac{(X - x_2)}{(x_1 - x_2)} $$

      $$ L_1(X) = \frac{(X - 0)}{(1 - 0)} \frac{(X - 2)}{(1 - 2)} $$

      $$ L_1(X) = -X^2 +2X $$

      We indeed have:

      $$ \Biggl \{ \begin{gather*} L_1(x_1) = 1 \\ L_1(x_0) = 0, \enspace L_1(x_2) = 0 \end{gather*} $$

    3. For \(x_2\):
      $$ L_2(X) = \frac{(X - x_0)}{(x_2 - x_0)} \frac{(X - x_1)}{(x_2 - x_1)} $$

      $$ L_2(X) = \frac{(X - 0)}{(2- 0)} \frac{(X - 1)}{(2 - 1)} $$

      $$ L_2(X) = \frac{X^2 -X}{2} $$

      We indeed have:

      $$ \Biggl \{ \begin{gather*} L_2(x_2) = 1 \\ L_2(x_0) = 0, \enspace L_2(x_1) = 0 \end{gather*} $$

    4. Assembly of the functions \(L_j(X)\)

      We can then construct our polynomial by adding the terms \(y_j L_j(X)\).

      $$L(X) = 2L_0(X) - 3 L_1(X) + L_2(X)$$

      $$L(X) = 2 \left(\frac{X^2 - 3X + 2}{2}\right) - 3 (-X^2 +2X ) + \frac{X^2 -X}{2} $$

      $$L(X) = X^2 - 3X + 2 +3 X^2 -6X + \frac{X^2}{2} - \frac{X}{2} $$

      $$L(X) = \frac{9}{2}X^2 - \frac{19}{2}X + 2 $$
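Expanding this sum by hand is error-prone, so the assembly can be double-checked in exact arithmetic. Below is a minimal sketch in plain Python (names are illustrative; polynomials are represented as coefficient lists \([a_0, a_1, ...]\)):

```python
from fractions import Fraction

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists [a_0, a_1, ...]."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def lagrange_coeffs(xs, ys):
    """Assemble L(X) = sum_j y_j * L_j(X) as a coefficient list."""
    n = len(xs)
    total = [Fraction(0)] * n
    for j in range(n):
        basis = [Fraction(1)]
        for i in range(n):
            if i != j:
                # Multiply by the factor (X - x_i) / (x_j - x_i).
                d = Fraction(1, xs[j] - xs[i])
                basis = poly_mul(basis, [-xs[i] * d, d])
        for k in range(n):
            total[k] += ys[j] * basis[k]
    return total

print(lagrange_coeffs([0, 1, 2], [2, -3, 1]))  # → [Fraction(2, 1), Fraction(-19, 2), Fraction(9, 2)]
```

The coefficient list matches the expansion above: \(a_0 = 2\), \(a_1 = -19/2\), \(a_2 = 9/2\).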


  2. By constructing a polynomial \(P_n(X)\), finding the unique solution for the coefficients \(\bigl \{a_0, \ a_1, \ ..., \ a_n \bigr\} \), the unknowns of the system \( (S) \)


    1. By directly solving the system \( (S) \) by substitution or by a Gaussian pivot
      We seek to solve the following system \((S)\):

      $$ (S) \enspace \left \{ \begin{gather*} a_0 + a_1 x_0 + a_2 x_0 ^2 = y_0 \\ a_0 + a_1 x_1 + a_2 x_1 ^2 = y_1 \\ a_0 + a_1 x_2 + a_2 x_2 ^2 = y_2 \\ \end{gather*} \right \} $$

      With our dataset:

      $$\Bigl \{ \bigl \{x_0 = 0, y_0 = 2 \bigr\}, \bigl \{x_1 =1, y_1 = -3 \bigr\}, \bigl \{x_2 = 2, y_2 = 1 \bigr\} \Bigr\} $$

      So,

      $$ (S) \enspace \left \{ \begin{gather*} a_0 \hspace{5.5em} = 2 \qquad \hspace{0.5em} (L_1) \\ a_0 + \hspace{0.3em} a_1 \hspace{0.2em} + \hspace{0.2em} a_2 \hspace{0.4em} = -3 \qquad (L_2) \\ a_0 + 2a_1 + 4a_2 \hspace{0.1em} = 1 \qquad \hspace{0.7em} (L_3) \\ \end{gather*} \right \} \Longrightarrow a_0 = 2$$

      Then, we replace in the other lines \( a_0 \) by its value:

      $$ (S) \enspace \left \{ \begin{gather*} 2 + \hspace{0.3em} a_1 \hspace{0.2em} + \hspace{0.2em} a_2 \hspace{0.2em} = -3 \qquad (L_2) \\ 2 + 2a_1 + 4a_2 = 1 \qquad \hspace{0.6em} (L_3) \\ \end{gather*} \right \} $$

      With \( (L_2) \), we directly have:

      $$ a_1 + a_2 = -5 \Longleftrightarrow a_1 = -5- a_2 \qquad (a_1 = f(a_2)) $$

      Now, we inject \( (a_1 = f(a_2)) \) into \( (L_3) \):

      $$ 2 + 2(-5- a_2 ) + 4a_2 = 1 $$

      $$ -10 -2a_2 + 4a_2 = 1 - 2 $$

      $$ a_2 = \frac{9}{2} \qquad (a_2)$$

      Finally, we inject \( (a_2) \) into \( (L_2) \):

      $$ 2 + \hspace{0.3em} a_1 \hspace{0.2em} + \hspace{0.2em} \frac{9}{2} \hspace{0.2em} = -3 $$

      $$ a_1 = -\frac{19}{2} $$

      And we do obtain as a unique solution for \((S)\):

      $$\mathcal{S} = \Bigl \{ a_0 = 2, \enspace a_1 = -\frac{19}{2},\enspace a_2 = \frac{9}{2} \Bigr\} $$

      And as a result, the polynomial \(P_2(X) \):

      $$ P_2(X) = \frac{9}{2}X^2 - \frac{19}{2}X + 2 $$
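The substitution steps above generalize to a Gaussian pivot. Below is a minimal sketch in plain Python (names are illustrative; exact `Fraction` arithmetic, with only a basic non-zero pivot search):

```python
from fractions import Fraction

def gauss_solve(M, rhs):
    """Solve M * a = rhs by Gaussian elimination and back-substitution."""
    n = len(M)
    # Augmented matrix [M | rhs] in exact rational arithmetic.
    A = [[Fraction(v) for v in row] + [Fraction(b)] for row, b in zip(M, rhs)]
    for col in range(n):
        # Find a row with a non-zero pivot and swap it into place.
        pivot = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[pivot] = A[pivot], A[col]
        # Eliminate the column below the pivot.
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            A[r] = [x - factor * y for x, y in zip(A[r], A[col])]
    # Back-substitution.
    a = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * a[c] for c in range(r + 1, n))
        a[r] = (A[r][n] - s) / A[r][r]
    return a

# System (S) for the dataset {(0, 2), (1, -3), (2, 1)}:
V = [[1, 0, 0], [1, 1, 1], [1, 2, 4]]
print(gauss_solve(V, [2, -3, 1]))  # → [Fraction(2, 1), Fraction(-19, 2), Fraction(9, 2)]
```

It reproduces the solution found by substitution: \(a_0 = 2\), \(a_1 = -19/2\), \(a_2 = 9/2\).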


    2. By solving the equivalent matrix system \( (S^*) \)
      We seek to solve the following matrix system \((S^*)\):

      $$ (S^*) \enspace \Longleftrightarrow \enspace \begin{pmatrix} 1 & x_0 & x_0^2 & \dots & x_0^n \\ 1 & x_1 & x_1^2 & \dots & x_1^n \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & x_n^2 & \dots & x_n^n \\ \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_n \\ \end{pmatrix} = \begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ \vdots \\ y_n \\ \end{pmatrix} $$

      $$ (S^*)\Longleftrightarrow X \times A = Y $$

      So in our case:

      $$ (S^*) \enspace \Longleftrightarrow \enspace \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 1 \\ 1 & 2& 4 \\ \end{pmatrix} \times \begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ \end{pmatrix} = \begin{pmatrix} 2 \\ \hspace{-0.8em} -3 \\ 1\\ \end{pmatrix} $$

      1. Calculation of the determinant of the matrix \(X\)
        We calculate \(det(X)\), the determinant of the matrix \(X\).

        $$ det(X) = \begin{vmatrix} 1 & 0 & 0 \\ 1 & 1 & 1 \\ 1 & 2& 4 \\ \end{vmatrix} $$

        $$ det(X) = 1\times \begin{vmatrix} 1 &1 \\ 2 & 4 \\ \end{vmatrix} -0 \times \begin{vmatrix} 1 &1 \\ 1 & 4 \\ \end{vmatrix} +0 \times \begin{vmatrix} 1 &1 \\ 1 &2 \\ \end{vmatrix} $$

        $$ det(X) = 1\times \Bigl[1 \times 4 - 2 \times 1\Bigr] - 0 \times \Bigl[1 \times 4 - 1 \times 1\Bigr] + 0 \times \Bigl[1 \times 2 - 1 \times 1\Bigr] $$

        $$ det(X) =2 $$

      2. Calculation of the determinant of the matrix \(X_1 \)
        To obtain \(X_1 \), we replace the \(1 ^{st}\) column of the matrix \(X\) by \(Y\).

        $$ X_1 = \begin{pmatrix} 2 & 0 & 0 \\ \hspace{-0.8em} -3 & 1 & 1 \\ 1 & 2& 4 \\ \end{pmatrix} $$

        Then we calculate its determinant:

        $$ det(X_1) = \begin{vmatrix} 2 & 0 & 0 \\ \hspace{-0.8em} -3 & 1 & 1 \\ 1 & 2& 4 \\ \end{vmatrix} $$

        $$ det(X_1) = 2\times \begin{vmatrix} 1 &1 \\ 2 & 4 \\ \end{vmatrix} -0 \times \begin{vmatrix} -3&1 \\ \hspace{0.5em} 1 & 4 \\ \end{vmatrix} +0 \times \begin{vmatrix} -3 &1 \\ \hspace{0.5em} 1 &2 \\ \end{vmatrix} $$

        $$ det(X_1) = 2 \times \Bigl[1 \times 4 - 2 \times 1\Bigr] - 0 \times \Bigl[(-3) \times 4 - 1 \times 1\Bigr] + 0 \times \Bigl[(-3) \times 2 - 1 \times 1\Bigr] $$

        $$ det(X_1) = 4 $$

      3. Calculation of the determinant of the matrix \(X_2 \)
        To obtain \(X_2 \), we replace the \(2 ^{nd}\) column of the matrix \(X\) by \(Y\).

        $$ X_2 = \begin{pmatrix} 1 & 2 & 0 \\ 1 & \hspace{-0.8em} -3 & 1 \\ 1 & 1& 4 \\ \end{pmatrix} $$

        Then we calculate its determinant:

        $$ det(X_2) = \begin{vmatrix} 1 &2 & 0 \\ 1 & \hspace{-0.8em} -3 & 1 \\ 1 & 1& 4 \\ \end{vmatrix} $$

        $$ det(X_2) = 1\times \begin{vmatrix} \hspace{-0.8em} -3 &1 \\ 1 & 4 \\ \end{vmatrix} -2\times \begin{vmatrix} 1&1 \\ 1 & 4 \\ \end{vmatrix} +0 \times \begin{vmatrix} 1& \hspace{-0.8em} -3 \\ 1 & 1 \\ \end{vmatrix} $$

        $$ det(X_2) = 1 \times \Bigl[ (-3) \times 4 - 1 \times 1 \Bigr] - 2 \times \Bigl[1 \times 4 - 1 \times 1\Bigr] + 0 \times \Bigl[1 \times 1 - 1 \times (-3) \Bigr] $$

        $$ det(X_2) = -19 $$

      4. Calculation of the determinant of the matrix \(X_3 \)
        To obtain \(X_3 \), we replace the \(3 ^{rd}\) column of the matrix \(X\) by \(Y\).

        $$ X_3 = \begin{pmatrix} 1 & 0 & 2 \\ 1 & 1 & \hspace{-0.8em} -3\\ 1 & 2& 1 \\ \end{pmatrix} $$

        Then we calculate its determinant:

        $$ det(X_3) = \begin{vmatrix} 1 & 0 & 2 \\ 1 & 1 & \hspace{-0.8em} -3\\ 1 & 2& 1 \\ \end{vmatrix} $$

        $$ det(X_3) = 1\times \begin{vmatrix} 1 & \hspace{-0.8em} -3 \\ 2 & 1 \\ \end{vmatrix} -0\times \begin{vmatrix} 1& \hspace{-0.8em} -3 \\ 1 & 1 \\ \end{vmatrix} +2 \times \begin{vmatrix} 1& 1\\ 1 & 2 \\ \end{vmatrix} $$

        $$ det(X_3) = 1 \times \Bigl[ 1 \times 1- 2 \times (-3) \Bigr] - 0 \times \Bigl[1 \times 1 - 1 \times (-3) \Bigr] + 2 \times \Bigl[1 \times 2 - 1 \times 1\Bigr] $$

        $$ det(X_3) = 9 $$

        Then, we have as a unique solution for \((S^*)\):

        $$\mathcal{S} = \Bigl \{ a_0 = \frac{det(X_1)}{det(X)} , \enspace a_1 = \frac{det(X_2)}{det(X)},\enspace a_2 = \frac{det(X_3)}{det(X)} \Bigr\} $$

        $$\mathcal{S} = \Bigl \{ a_0 = 2, \enspace a_1 = -\frac{19}{2},\enspace a_2 = \frac{9}{2} \Bigr\} $$

        And as a result, the polynomial \(P_2(X) \):

        $$ P_2(X) = \frac{9}{2}X^2 - \frac{19}{2}X + 2 $$


  3. Verification

    We found, for the three previous computations, the same polynomial of degree \(2\):

    $$ P_2(X) = \frac{9}{2}X^2 - \frac{19}{2}X + 2 $$

    Let us now check that our polynomial verifies the three points of our initial conditions:

    $$\Bigl \{ \bigl \{x_0 = 0, y_0 = 2 \bigr\}, \bigl \{x_1 = 1, y_1 = -3 \bigr\}, \bigl \{x_2 = 2, y_2 = 1 \bigr\} \Bigr\} $$

    We do have:

    $$ P_2(x_0) = 0 - 0 + 2 = 2 \ \Longrightarrow \ P_2(x_0) = y_0$$

    $$ P_2(x_1) = \frac{9}{2} - \frac{19}{2} +2 = -3 \ \Longrightarrow \ P_2(x_1) = y_1$$

    $$ P_2(x_2) = \frac{36}{2} - \frac{38}{2} + 2 = -1 +2 = 1 \ \Longrightarrow \ P_2(x_2) = y_2 $$

    We have indeed checked that:

    $$ \Biggl \{ \begin{gather*} P_2(x_0) = y_0 \\ P_2(x_1) = y_1 \\ P_2(x_2) = y_2 \end{gather*} $$


  4. Conclusion

    We finally obtain this smooth curve corresponding to our interpolation:

    Lagrange interpolating polynomial - example of dataset with curve

    $$ P_2(X) = \frac{9}{2}X^2 - \frac{19}{2}X + 2 $$
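As a final sanity check, the interpolation property can be confirmed programmatically. A minimal sketch in plain Python (the helper name `P2` is an illustrative choice):

```python
from fractions import Fraction

def P2(x):
    """P_2(X) = (9/2) X^2 - (19/2) X + 2, the polynomial found above."""
    return Fraction(9, 2) * x ** 2 - Fraction(19, 2) * x + 2

# The three points of the initial dataset:
points = [(0, 2), (1, -3), (2, 1)]
print(all(P2(x) == y for x, y in points))  # → True
```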
