
The properties of matrices

For what follows, it is important to establish the following definitions:

  1. Operations on matrices
    1. Matrix addition
    2. Let \((A,B) \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})^2\) be two matrices of the same size.

      $$ \forall (i, j) \in [\![1, n]\!] \times [\![1, p]\!],$$
      $$(A + B)_{i,j} = a_{i,j} + b_{i,j} $$

      In other words, we add each element of the left matrix to the element at the same position in the right one:

      $$A + B = \begin{pmatrix} a_{1,1} & a_{1,2} & a_{1,3} & \dots & a_{1, p} \\ a_{2,1} & a_{2,2} & a_{2,3} & \dots & a_{2, p} \\ \hspace{0.5em} \vdots & \hspace{0.5em} \vdots & \hspace{0.5em} \vdots & \ddots & \hspace{0.5em} \vdots \\ a_{n,1} & a_{n,2} & a_{n,3} & \dots & a_{n, p} \end{pmatrix} + \begin{pmatrix} b_{1,1} & b_{1,2} & b_{1,3} & \dots & b_{1, p} \\ b_{2,1} & b_{2,2} & b_{2,3} & \dots & b_{2, p} \\ \hspace{0.5em} \vdots & \hspace{0.5em} \vdots & \hspace{0.5em} \vdots & \ddots & \hspace{0.5em} \vdots \\ b_{n,1} & b_{n,2} & b_{n,3} & \dots & b_{n, p} \end{pmatrix} $$

      $$A + B = \begin{pmatrix} a_{1,1} + b_{1,1} & a_{1,2} + b_{1,2} & a_{1,3} + b_{1,3} & \dots & a_{1, p} + b_{1, p} \\ a_{2,1} + b_{2,1} & a_{2,2} + b_{2,2} & a_{2,3} + b_{2,3} & \dots & a_{2, p} + b_{2,p} \\ \hspace{2em} \vdots & \hspace{2em} \vdots & \hspace{2em} \vdots & \ddots & \hspace{2em} \vdots \\ a_{n,1} + b_{n,1} & a_{n,2} + b_{n,2} & a_{n,3} + b_{n,3} & \dots & a_{n, p} + b_{n,p} \end{pmatrix} $$
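
      The definition above translates directly into code. Below is a minimal, illustrative Python sketch (matrices stored as lists of rows; the helper name `mat_add` is ours, not a library function):

```python
# Element-wise addition of two matrices of the same size (n x p),
# stored as lists of rows.
def mat_add(A, B):
    n, p = len(A), len(A[0])
    assert n == len(B) and p == len(B[0]), "matrices must have the same size"
    # (A + B)[i][j] = A[i][j] + B[i][j]
    return [[A[i][j] + B[i][j] for j in range(p)] for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))  # [[6, 8], [10, 12]]
```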

    3. Matrix product
    4. Let \(A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})\) and \(B \in \hspace{0.03em} \mathcal{M}_{p,q} (\mathbb{K})\) be two matrices.

      To multiply two matrices, the left matrix must have as many columns as the right one has rows (here \(p\)). The result is a matrix \(AB \in \hspace{0.03em} \mathcal{M}_{n,q} (\mathbb{K})\), with \(n\) rows and \(q\) columns.

      $$ \forall (i, j) \in [\![1, n]\!] \times [\![1, q]\!],$$
      $$(A \times B)_{i,j} = \sum_{k = 1}^p a_{i,k} \times b_{k,j} $$

      For example:

      $$A \times B = \begin{pmatrix} a_{1,1} & a_{1,2} & a_{1,3} & \dots & a_{1, p} \\ a_{2,1} & a_{2,2} & a_{2,3} & \dots & a_{2, p} \\ \hspace{0.5em} \vdots & \hspace{0.5em} \vdots & \hspace{0.5em} \vdots & \ddots & \hspace{0.5em} \vdots \\ a_{n,1} & a_{n,2} & a_{n,3} & \dots & a_{n, p} \end{pmatrix} \times \begin{pmatrix} b_{1,1} & b_{1,2} & b_{1,3} & \dots & b_{1, q} \\ b_{2,1} & b_{2,2} & b_{2,3} & \dots & b_{2, q} \\ \hspace{0.5em} \vdots & \hspace{0.5em} \vdots & \hspace{0.5em} \vdots & \ddots & \hspace{0.5em} \vdots \\ b_{p,1} & b_{p,2} & b_{p,3} & \dots & b_{p, q} \end{pmatrix} $$

      $$A \times B = \begin{pmatrix} \Bigl[a_{1,1} b_{1,1} + a_{1,2} b_{2,1} \ + \ ... \ + \ a_{1,p} b_{p,1} \Bigr] & \Bigl[a_{1,1} b_{1,2} + a_{1,2} b_{2,2} \ + \ ... \ + \ a_{1,p} b_{p,2}\Bigr] & \hspace{1em} \dots \dots \dots \hspace{1em} & \Bigl[a_{1,1} b_{1,q} + a_{1,2} b_{2,q} \ + \ ... \ + \ a_{1,p} b_{p,q}\Bigr] \\ \Bigl[a_{2,1} b_{1,1} + a_{2,2} b_{2,1} \ + \ ... \ + \ a_{2,p} b_{p,1}\Bigr] & \Bigl[a_{2,1} b_{1,2} + a_{2,2} b_{2,2} \ + \ ... \ + \ a_{2,p} b_{p,2}\Bigr] & \hspace{1em} \dots \dots \dots \hspace{1em} & \Bigl[a_{2,1} b_{1,q} + a_{2,2} b_{2,q} \ + \ ... \ + \ a_{2,p} b_{p,q}\Bigr] \\ \hspace{8em} \vdots & \hspace{8em} \vdots & \hspace{1em} \ddots & \hspace{8em} \vdots \\ \hspace{8em} \vdots & \hspace{8em} \vdots & \hspace{1em} \ddots & \hspace{8em} \vdots \\ \Bigl[a_{n,1} b_{1,1} + a_{n,2} b_{2,1} \ + \ ... \ + \ a_{n,p} b_{p,1}\Bigr] & \Bigl[a_{n,1} b_{1,2} + a_{n,2} b_{2,2} \ + \ ... \ + \ a_{n,p} b_{p,2}\Bigr] & \hspace{1em} \dots \dots \dots \hspace{1em} & \Bigl[a_{n,1} b_{1,q} + a_{n,2} b_{2,q} \ + \ ... \ + \ a_{n,p} b_{p,q}\Bigr] \end{pmatrix} $$


      Be careful: in general, the matrix product is not commutative: \( A \times B \neq B \times A \).
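
      As a sketch, the definition \((A \times B)_{i,j} = \sum_k a_{i,k} \, b_{k,j}\) can be implemented directly (illustrative Python; the helper name `mat_mul` is ours). The small example also shows that the product is not commutative:

```python
# Matrix product following the definition (AB)[i][j] = sum_k A[i][k] * B[k][j].
# A is n x p, B is p x q; the result is n x q.
def mat_mul(A, B):
    n, p, q = len(A), len(B), len(B[0])
    assert len(A[0]) == p, "columns of A must match rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(q)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(mat_mul(A, B))  # [[2, 1], [4, 3]]
print(mat_mul(B, A))  # [[3, 4], [1, 2]]  -> A*B != B*A in general
```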

    5. Multiplication of a matrix by a scalar \(\lambda\)
    6. Let \(A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})\) be a matrix.

      When a matrix is multiplied by a scalar, every one of its elements is multiplied by that scalar.

      $$ \forall (i, j) \in [\![1, n]\!] \times [\![1, p]\!],$$
      $$(\lambda A)_{i,j} = \lambda \ a_{i,j} $$

      For example:

      $$A = \begin{pmatrix} a_{1,1} & a_{1,2} & a_{1,3} & \dots & a_{1, p} \\ a_{2,1} & a_{2,2} & a_{2,3} & \dots & a_{2, p} \\ \hspace{0.5em} \vdots & \hspace{0.5em} \vdots & \hspace{0.5em} \vdots & \ddots & \hspace{0.5em} \vdots \\ a_{n,1} & a_{n,2} & a_{n,3} & \dots & a_{n, p} \end{pmatrix} $$

      $$\lambda A = \begin{pmatrix} \lambda \ a_{1,1} & \lambda \ a_{1,2} & \lambda \ a_{1,3} & \dots & \lambda \ a_{1, p} \\ \lambda \ a_{2,1} & \lambda \ a_{2,2} & \lambda \ a_{2,3} & \dots & \lambda \ a_{2, p} \\ \hspace{1.1em} \vdots & \hspace{1.1em} \vdots & \hspace{1.1em} \vdots & \ddots & \hspace{1.1em} \vdots \\ \lambda \ a_{n,1} & \lambda \ a_{n,2} & \lambda \ a_{n,3} & \dots & \lambda \ a_{n, p} \end{pmatrix} $$

    7. Linear combination of matrices
    8. Let \((A,B) \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})^2\) be two matrices of the same size and \((\lambda, \mu) \in \hspace{0.05em} \mathbb{R}^2\) two real numbers.

      With the previous properties of addition and multiplication by a scalar, we can form linear combinations:

      $$(\lambda A + \mu B)_{i,j} = \lambda \ a_{i,j} + \mu \ b_{i,j} $$
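
      A minimal sketch of such a linear combination, combining scalar multiplication and addition element by element (the helper name `lin_comb` is ours):

```python
# Linear combination lam*A + mu*B, element by element.
def lin_comb(lam, A, mu, B):
    return [[lam * a + mu * b for a, b in zip(ra, rb)]
            for ra, rb in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[1, 0], [0, 1]]
print(lin_comb(2, A, 3, B))  # [[5, 4], [6, 11]]
```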
    9. Matrix transposition
    10. Let \(A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})\) be a matrix.

      Matrix transposition consists in swapping the row and column indices of each element. We write \(A^T\) (sometimes \(^t A\)) for the transpose of the matrix \(A\).

      $$ \forall (i, j) \in [\![1, n]\!] \times [\![1, p]\!],$$
      $$(A)_{i,j} = a_{i,j} \ \Longleftrightarrow \ \left(A^T \right)_{j,i} \hspace{0.03em} = a_{i,j} $$

      For example:

      $$A = \begin{pmatrix} a_{1,1} & \textcolor{#8E5B5B}{a_{1,2}} & \textcolor{#8E5B5B}{a_{1,3}} & \textcolor{#8E5B5B}{\dots} & \textcolor{#8E5B5B}{a_{1, n}} \\ \textcolor{#446e4f}{a_{2,1}} & a_{2,2} & \textcolor{#8E5B5B}{a_{2,3}} & \textcolor{#8E5B5B}{\dots} & \textcolor{#8E5B5B}{a_{2, n}} \\ \textcolor{#446e4f}{a_{3,1}} & \textcolor{#446e4f}{a_{3,2}} & a_{3,3} & \textcolor{#8E5B5B}{\dots} & \textcolor{#8E5B5B}{a_{3, n}} \\ \hspace{0.8em} \textcolor{#446e4f}{\vdots} & \hspace{0.8em} \textcolor{#446e4f}{\vdots} & \hspace{0.8em} \textcolor{#446e4f}{\vdots} & \ddots & \hspace{0.8em} \textcolor{#8E5B5B}{\vdots} \\ \textcolor{#446e4f}{a_{n,1}} & \textcolor{#446e4f}{a_{n,2}} & \textcolor{#446e4f}{a_{n,3}} & \textcolor{#446e4f}{\dots} & a_{n, n} \\ \end{pmatrix} $$

      So, its transpose is:

      $$A^T = \begin{pmatrix} a_{1,1} & \textcolor{#446e4f}{a_{2,1}} & \textcolor{#446e4f}{a_{3,1}} & \textcolor{#446e4f}{\dots} & \textcolor{#446e4f}{a_{n, 1}} \\ \textcolor{#8E5B5B}{a_{1,2}} & a_{2,2} & \textcolor{#446e4f}{a_{3,2}} & \textcolor{#446e4f}{\dots} & \textcolor{#446e4f}{a_{n, 2}} \\ \textcolor{#8E5B5B}{a_{1,3}} & \textcolor{#8E5B5B}{a_{2,3}} & a_{3,3} & \textcolor{#446e4f}{\dots} & \textcolor{#446e4f}{a_{n, 3}} \\ \hspace{0.8em} \textcolor{#8E5B5B}{\vdots} & \hspace{0.8em} \textcolor{#8E5B5B}{\vdots} & \hspace{0.8em} \textcolor{#8E5B5B}{\vdots} & \ddots & \hspace{0.8em} \textcolor{#446e4f}{\vdots} \\ \textcolor{#8E5B5B}{a_{1,n}} & \textcolor{#8E5B5B}{a_{2,n}} & \textcolor{#8E5B5B}{a_{3,n}} & \textcolor{#8E5B5B}{\dots} & a_{n, n} \\ \end{pmatrix} $$

      Only the diagonal remains intact, because when \(i = j\), then \(a_{i,j} = a_{j,i}\).
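
      A short sketch of transposition (the helper name `transpose` is ours): rows become columns, and the diagonal stays in place.

```python
# Transpose: transpose(A)[j][i] == A[i][j]; rows become columns.
def transpose(A):
    n, p = len(A), len(A[0])
    return [[A[i][j] for i in range(n)] for j in range(p)]

A = [[1, 2, 3],
     [4, 5, 6]]
print(transpose(A))  # [[1, 4], [2, 5], [3, 6]]
```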

    11. Inversion of a matrix
    12. Let \(A \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})\) be a square matrix of size \(n\).

      The inverse of the matrix \(A\) is the matrix written \(A^{-1}\) such that \(A A^{-1} = A^{-1} A = I_n\).


      A matrix is invertible if and only if \(det(A) \neq 0\).
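
      For a \(2 \times 2\) matrix, \(det(A) = ad - bc\), and when the determinant is nonzero the inverse is \(\frac{1}{det(A)} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}\). A quick illustrative sketch using exact fractions (the helper name `inv2` is ours):

```python
# 2 x 2 inverse via the determinant, with exact rational arithmetic.
from fractions import Fraction

def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    assert det != 0, "matrix is not invertible"
    f = Fraction(1, det)
    return [[f * d, -f * b], [-f * c, f * a]]

A = [[2, 1], [5, 3]]       # det = 2*3 - 1*5 = 1, so A is invertible
print(inv2(A))             # [[3, -1], [-5, 2]] (as Fractions)
```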

    13. Matricial writing of a system of linear equations
    14. A system of linear equations \((S)\), where the unknowns are the variables \(x_{i,j}\), can be written as a matrix product:

      $$ (S) \enspace \left \{ \begin{gather*} a_1 x_{1,1} + a_2 x_{1,2} + a_3 x_{1,3} + \hspace{0.1em}... \hspace{0.1em}+ a_p x_{1,p} = b_1 \\ a_1 x_{2,1} + a_2 x_{2,2} + a_3 x_{2,3} + \hspace{0.1em}... \hspace{0.1em}+ a_p x_{2,p} = b_2 \\ \vdots \\ a_1 x_{n,1} + a_2 x_{n,2} + a_3 x_{n,3} + \hspace{0.1em}... \hspace{0.1em}+ a_p x_{n,p} = b_n \\ \end{gather*} \right \} $$

      $$ \Longleftrightarrow$$

      $$ \underbrace{ \begin{pmatrix} x_{1,1} & x_{1,2} & x_{1,3} & \dots & x_{1, p} \\ x_{2,1} & x_{2,2} & x_{2,3} & \dots & x_{2, p} \\ \hspace{0.8em} \vdots & \hspace{0.8em} \vdots & \hspace{0.8em} \vdots & \ddots & \hspace{0.8em} \vdots \\ x_{n,1} & x_{n,2} & x_{n,3} & \dots & x_{n, p} \\ \end{pmatrix} } _\text{X} \times \underbrace{ \begin{pmatrix} a_1 \\ a_2 \\ \hspace{0.3em}\vdots \\ a_p \end{pmatrix} } _\text{A} = \underbrace{ \begin{pmatrix} b_1 \\ b_2 \\ \hspace{0.3em}\vdots \\ b_n \end{pmatrix} } _\text{B} \ \Longleftrightarrow \ XA = B, \ \text{with} \enspace \Biggl \{ \begin{gather*} X \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K}) \\ A \in \hspace{0.03em} \mathcal{M}_{p,1} (\mathbb{K}) \\ B \in \hspace{0.03em} \mathcal{M}_{n,1} (\mathbb{K}) \end{gather*} $$

    15. Trace of a matrix
    16. Let \(A \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})\) be a square matrix of size \(n\).

      We call the trace of a matrix the sum of its diagonal elements:

      $$ A = \begin{pmatrix} \textcolor{#606B9E}{a_{1,1}} & a_{1,2} & a_{1,3} & \dots & a_{1,n} \\ a_{2,1} & \textcolor{#606B9E}{a_{2,2}} & a_{2,3} & \dots & a_{2,n} \\ a_{3,1} & a_{3,2} & \textcolor{#606B9E}{a_{3,3}} & \dots & a_{3,n} \\ \hspace{0.1em}\vdots & \hspace{0.1em} \vdots & \hspace{0.1em} \vdots & \textcolor{#606B9E}{\ddots} & \hspace{0.1em} \vdots \\ a_{n,1} & a_{n,2} & a_{n,3} & \dots & \textcolor{#606B9E}{a_{n,n}} \end{pmatrix} $$

      $$Tr(A) = \sum_{k = 1}^n a_{k,k} = a_{1,1} + a_{2,2} \ + \ ... \ + a_{n,n}$$
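
      A one-line sketch of this definition (the helper name `trace` is ours):

```python
# Trace: sum of the diagonal elements of a square matrix.
def trace(A):
    return sum(A[k][k] for k in range(len(A)))

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(trace(A))  # 1 + 5 + 9 = 15
```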
  2. Specific matrices
    1. Diagonal matrix
    2. A diagonal matrix is a square matrix in which all elements are \(0\) except on the main diagonal:

      $$D_n = \begin{pmatrix} \textcolor{#606B9E}{d_{1,1}} & 0 & 0 & \dots & 0 \\ 0 & \textcolor{#606B9E}{d_{2,2}} & 0 & \dots & 0 \\ 0 & 0 & \textcolor{#606B9E}{d_{3,3}} & \dots & 0 \\ \hspace{0.1em}\vdots & \hspace{0.1em} \vdots & \hspace{0.1em} \vdots & \textcolor{#606B9E}{\ddots} & \hspace{0.1em} \vdots \\ 0 & 0 & 0 & \dots & \textcolor{#606B9E}{d_{n,n}} \end{pmatrix} $$

      $$ \forall (i, j) \in [\![1, n]\!]^2, \ (i \neq j) \Longrightarrow d_{i,j} = 0$$

      We also write a diagonal matrix \(D_n\) using only its diagonal elements: \(D_n = diag(\lambda_1, \lambda_2, \ ..., \lambda_n)\).

    3. Identity matrix
    4. The identity matrix \(I_n\) is defined as follows:

      $$I_n = \begin{pmatrix} \textcolor{#606B9E}{1} & 0 & 0 & \dots & 0 \\ 0 & \textcolor{#606B9E}{1} & 0 & \dots & 0 \\ 0 & 0 & \textcolor{#606B9E}{1} & \dots & 0 \\ \vdots & \vdots & \vdots & \textcolor{#606B9E}{\ddots} & \vdots \\ 0 & 0 & 0 & \dots & \textcolor{#606B9E}{1} \\ \end{pmatrix} $$

      It is the matrix of size \(n\) with the value \(1\) on its main diagonal and \(0\) everywhere else. It is a special case of a diagonal matrix. For example,

      $$I_3 = \begin{pmatrix} \textcolor{#606B9E}{1} & 0 & 0 \\ 0 & \textcolor{#606B9E}{1} & 0 \\ 0 & 0 & \textcolor{#606B9E}{1} \end{pmatrix} $$
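
      A small sketch building \(diag(\lambda_1, \dots, \lambda_n)\), with the identity \(I_n\) as the special case where every diagonal entry is \(1\) (the helper names `diag` and `identity` are ours):

```python
# Build diag(l1, ..., ln); the identity I_n is diag(1, ..., 1).
def diag(*values):
    n = len(values)
    return [[values[i] if i == j else 0 for j in range(n)] for i in range(n)]

def identity(n):
    return diag(*([1] * n))

print(identity(3))  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```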


Matrix product

Associativity

$$ \forall A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K}) , \ \forall B \in \hspace{0.03em} \mathcal{M}_{p,q} (\mathbb{K}), \ \forall C \in \hspace{0.03em} \mathcal{M}_{q,r} (\mathbb{K}), $$

$$ (A \times B) \times C = A \times (B \times C) $$


Distributivity

$$ \forall A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K}) , \ \forall (B, C) \in \hspace{0.03em} \mathcal{M}_{p,q} (\mathbb{K})^2, $$

$$ A \times (B + C) = A \times B + A \times C $$


$$ \forall (A,B) \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})^2 , \ \forall C \in \hspace{0.03em} \mathcal{M}_{p,q} (\mathbb{K}), $$

$$ (A + B) \times C = A \times C + B \times C $$


Bilinearity

$$ \forall \lambda \in \hspace{0.05em} \mathbb{R}, \ \forall A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K}) , \ \forall B \in \hspace{0.03em} \mathcal{M}_{p,q} (\mathbb{K}), $$

$$ (\lambda A) \times B = A \times (\lambda B) = \lambda (A \times B) $$


Multiplication by the identity

$$ \forall A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K}),$$

$$ I_n \times A = A \times I_p = A $$


Diagonal matrix product

  1. Product of two diagonal matrices
  2. $$ \forall \Bigl[ D_1 = diag(\lambda_1, \lambda_2, \ ..., \lambda_n), \ D_2 = diag(\mu_1, \mu_2, \ ..., \mu_n) \Bigr] \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})^2, $$

    $$ D_1 \times D_2 = D_2 \times D_1 = diag \left(\lambda_1 \mu_1, \lambda_2 \mu_2, \ ..., \lambda_n \mu_n \right) $$


  3. A diagonal matrix raised to the power of \(m\)
  4. $$ \forall \Bigl[ D = diag(\lambda_1, \lambda_2, \ ..., \lambda_n) \Bigr] \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K}), $$

    $$ D^m = diag \left(\lambda_1^m, \lambda_2^m, \ ..., \lambda_n^m \right) $$
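
    Both identities say that diagonal matrices act entry-wise on their diagonals, so it is enough to work with the diagonal vectors. A minimal sketch (variable names are ours):

```python
# Diagonal matrices act entry-wise on their diagonals:
# diag(l) * diag(m) = diag(l_i * m_i), and D^m = diag(l_i ** m).
d1 = [2, 3, 5]       # D1 = diag(2, 3, 5)
d2 = [7, 1, 4]       # D2 = diag(7, 1, 4)
prod = [a * b for a, b in zip(d1, d2)]
print(prod)          # [14, 3, 20] -> D1*D2 = D2*D1 = diag(14, 3, 20)
power = [a ** 3 for a in d1]
print(power)         # [8, 27, 125] -> D1^3 = diag(8, 27, 125)
```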

Matrix transposition


Linearity of transposition

$$ \forall (\lambda, \mu) \in \hspace{0.05em} \mathbb{R}^2, \ \forall (A,B) \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})^2, $$

$$ (\lambda A + \mu B)^T = \lambda A^T + \mu B^T $$


Transpose of a product

$$ \forall A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K}) , \ \forall B \in \hspace{0.03em} \mathcal{M}_{p,q} (\mathbb{K}), $$

$$ (A \times B)^T = B^T \times A^T $$

$$(3)$$
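
A quick numeric check of identity \((3)\) on a small rectangular example (illustrative sketch; the helper names `mul` and `T` are ours):

```python
# Check (A x B)^T == B^T x A^T on a 2x3 by 3x2 example.
def mul(A, B):
    # zip(*B) iterates over the columns of B
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def T(A):
    return [list(col) for col in zip(*A)]

A = [[1, 2, 3], [4, 5, 6]]          # 2 x 3
B = [[1, 0], [0, 1], [1, 1]]        # 3 x 2
print(T(mul(A, B)) == mul(T(B), T(A)))  # True
```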

Inversion of matrices

Inverse of the inverse

$$ \forall A \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K}),$$

$$ A \ is \ invertible \Longrightarrow A^{-1} \ is \ invertible \Longrightarrow (A^{-1})^{-1} = A $$


Inverse of a transposed matrix

$$ \forall A \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K}),$$

$$ A \ is \ invertible \Longrightarrow A^{T} \ is \ invertible \Longrightarrow \ \left(A^T \right)^{-1} = (A^{-1})^T$$


Inverse of a product

$$ \forall (A ,B) \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})^2,$$

$$ A \ and \ B \ are \ invertible \Longrightarrow (A \times B) \ is \ invertible \Longrightarrow \ \left(A \times B\right)^{-1} = B^{-1} \times A^{-1} $$

$$(6)$$
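
A quick numeric check of identity \((6)\) on \(2 \times 2\) matrices, using exact fractions (illustrative sketch; the helper names `inv2` and `mul` are ours):

```python
# Check (A x B)^{-1} == B^{-1} x A^{-1} with exact rational arithmetic.
from fractions import Fraction

def inv2(M):
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

def mul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[2, 1], [1, 1]]
B = [[1, 3], [0, 1]]
print(inv2(mul(A, B)) == mul(inv2(B), inv2(A)))  # True
```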

Both expressions \((3)\) and \((6)\) follow the same pattern:

$$ \forall (A ,B) \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})^2, \enspace \Biggl \{ \begin{align*} (A \times B)^T = B^T \times A^T \hspace{1em}\qquad (3) \\ \left(A \times B\right)^{-1} = B^{-1} \times A^{-1} \qquad (6) \end{align*} $$


So, the order of transposition and inversion does not matter:

$$ \forall (A ,B) \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})^2,$$

$$ \left((A \times B)^T \right)^{-1} = \hspace{0.03em} \left((A \times B)^{-1} \right)^T = \hspace{0.03em} \left(A^T\right)^{-1} \times \hspace{0.05em} \left(B^T\right)^{-1} = \hspace{0.03em} \left(A^{-1}\right)^T \times \hspace{0.05em} \left(B^{-1}\right)^T $$


Traces of matrices

Linearity of the trace

$$ \forall (\lambda, \mu) \in \hspace{0.05em} \mathbb{R}^2, \ \forall (A,B) \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})^2,$$

$$ Tr(\lambda A + \mu B) = \lambda \ Tr(A) + \mu \ Tr(B) $$


Trace of a product

$$ \forall (A,B) \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})^2,$$

$$ Tr(A \times B) = Tr(B \times A)$$
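
A quick numeric check of this property (illustrative sketch; the helper names `mul` and `tr` are ours):

```python
# Check Tr(A x B) == Tr(B x A) on a small example.
def mul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)]
            for row in A]

def tr(M):
    return sum(M[k][k] for k in range(len(M)))

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]
print(tr(mul(A, B)), tr(mul(B, A)))  # 55 55
```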


Recap table of the properties of matrices


Proofs

Matrix product

Associativity

Let \(A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})\), \(B \in \hspace{0.03em} \mathcal{M}_{p,q} (\mathbb{K})\) and \(C \in \hspace{0.03em} \mathcal{M}_{q,r} (\mathbb{K})\) be three matrices.

  1. Calculation of \((A \times B) \times C\)
  2. By definition, we have:

    $$ \forall (i, j) \in [\![1, n]\!] \times [\![1, r]\!],$$
    $$\Bigl( (A \times B) \times C \Bigr)_{i,j} = \ \sum_{k = 1}^q (ab)_{i,k} \times c_{k,j} $$

    But the factor \((ab)_{i,k}\) equals:

    $$ \forall (i, k) \in [\![1, n]\!] \times [\![1, q]\!],$$
    $$ (ab)_{i,k} = \sum_{l = 1}^p a_{i,l} \times b_{l,k} $$

    So, we replace it in the main expression and:

    $$\Bigl( (A \times B) \times C \Bigr)_{i,j} = \ \sum_{k = 1}^q \left[ \sum_{l = 1}^p a_{i,l} \times b_{l,k} \right] \times c_{k,j} $$

    Since the factor \(c_{k,j}\) does not depend on \(l\), it can be treated as a constant and moved inside the inner sum.

    $$\Bigl( (A \times B) \times C \Bigr)_{i,j} = \ \sum_{k = 1}^q \sum_{l = 1}^p \Bigl[ a_{i,l} \times b_{l,k} \times c_{k,j} \Bigr] \qquad (1) $$

  3. Calculation of \( A \times (B \times C)\)
  4. Let us now calculate the product \(A \times (B \times C)\).

    $$ \forall (i, j) \in [\![1, n]\!] \times [\![1, r]\!],$$
    $$\Bigl( A \times (B \times C) \Bigr)_{i,j} = \ \sum_{k = 1}^p a_{i,k} \times (bc)_{k,j} $$

    In the same way, we replace \((bc)_{k,j}\) by its expression and:

    $$\Bigl( A \times (B \times C) \Bigr)_{i,j} = \ \sum_{k = 1}^p a_{i,k} \times \left[ \sum_{l = 1}^q b_{k,l} \times c_{l,j} \right] $$
    $$\Bigl( A \times (B \times C) \Bigr)_{i,j} = \ \sum_{k = 1}^p \sum_{l = 1}^q \Bigl[ a_{i,k} \times b_{k,l} \times c_{l,j} \Bigr] \qquad (2) $$

    In both expressions \((1)\) and \((2)\), the variables \(k\) and \(l\) are dummy summation indices:

    $$\Bigl( (A \times B) \times C \Bigr)_{i,j} = \ \sum_{k = 1}^q \sum_{l = 1}^p \Bigl[ a_{i,l} \times b_{l,k} \times c_{k,j} \Bigr] \qquad (1) $$
    $$\Bigl( A \times (B \times C) \Bigr)_{i,j} = \ \sum_{k = 1}^p \sum_{l = 1}^q \Bigl[ a_{i,k} \times b_{k,l} \times c_{l,j} \Bigr] \qquad (2) $$

    Therefore, they can be renamed and interchanged, so \((1)\) and \((2)\) are equal:

    $$\Bigl( (A \times B) \times C \Bigr)_{i,j} = \Bigl( A \times (B \times C) \Bigr)_{i,j} = \ \sum_{k = 1}^q \sum_{l = 1}^p \Bigl[ a_{i,l} \times b_{l,k} \times c_{k,j} \Bigr] $$

And finally,

$$ \forall A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K}) , \ \forall B \in \hspace{0.03em} \mathcal{M}_{p,q} (\mathbb{K}), \ \forall C \in \hspace{0.03em} \mathcal{M}_{q,r} (\mathbb{K}), $$

$$ (A \times B) \times C = A \times (B \times C) $$


Distributivity

  1. Left distributivity
  2. Let \(A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})\) be a matrix and \((B, C) \in \hspace{0.03em} \mathcal{M}_{p,q} (\mathbb{K})^2\) two matrices.

    By the definition of the matrix product, we have:

    $$ \forall (i, j) \in [\![1, n]\!] \times [\![1, q]\!],$$
    $$\Bigl( A \times (B + C) \Bigr)_{i,j} = \ \sum_{k = 1}^p \Bigl[ a_{i,k} \times (b + c)_{k,j}\Bigr] $$
    $$\Bigl( A \times (B + C) \Bigr)_{i,j} = \ \sum_{k = 1}^p \Bigl[ a_{i,k} \times (b_{k,j} + c_{k,j})\Bigr]$$
    $$\Bigl( A \times (B + C) \Bigr)_{i,j} = \ \sum_{k = 1}^p \Bigl[ a_{i,k} \times b_{k,j} + a_{i,k} \times c_{k,j} \Bigr]$$
    $$\Bigl( A \times (B + C) \Bigr)_{i,j} = \ \sum_{k = 1}^p \Bigl[ a_{i,k} \times b_{k,j}\Bigr] + \sum_{k = 1}^p \Bigl[a_{i,k} \times c_{k,j}\Bigr] $$

    And finally,

    $$ \forall A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K}) , \ \forall (B, C) \in \hspace{0.03em} \mathcal{M}_{p,q} (\mathbb{K})^2, $$

    $$ A \times (B + C) = A \times B + A \times C $$


  3. Right distributivity
  4. Let \((A, B) \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})^2 \) be two matrices and \( C \in \hspace{0.03em} \mathcal{M}_{p,q} (\mathbb{K})\) another matrix.

    As before:

    $$ \forall (i, j) \in [\![1, n]\!] \times [\![1, q]\!],$$
    $$\Bigl( (A + B) \times C \Bigr)_{i,j} = \ \sum_{k = 1}^p \Bigl[(a + b)_{i,k} \times c_{k,j}\Bigr] $$
    $$\Bigl( (A + B) \times C \Bigr)_{i,j} = \ \sum_{k = 1}^p \Bigl[(a_{i,k} + b_{i,k}) \times c_{k,j}\Bigr] $$
    $$\Bigl( (A + B) \times C \Bigr)_{i,j} = \ \sum_{k = 1}^p \Bigl[a_{i,k} \times c_{k,j} + b_{i,k} \times c_{k,j}\Bigr] $$
    $$\Bigl( (A + B) \times C \Bigr)_{i,j} = \ \sum_{k = 1}^p \Bigl[a_{i,k} \times c_{k,j}\Bigr] + \sum_{k = 1}^p \Bigl[ b_{i,k} \times c_{k,j}\Bigr] $$

    And finally,

    $$ \forall (A,B) \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})^2 , \ \forall C \in \hspace{0.03em} \mathcal{M}_{p,q} (\mathbb{K}), $$

    $$ (A + B) \times C = A \times C + B \times C $$


Bilinearity

Let \(A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})\) and \(B \in \hspace{0.03em} \mathcal{M}_{p,q} (\mathbb{K})\) be two matrices and \(\lambda \in \mathbb{R}\) a real number.


By the definition of the matrix product, we have:

$$ \forall (i, j) \in [\![1, n]\!] \times [\![1, q]\!],$$
$$(\lambda A \times B)_{i,j} = \sum_{k = 1}^p \lambda a_{i,k} \times b_{k,j} = \lambda \sum_{k = 1}^p a_{i,k} \times b_{k,j} $$

In the same way:

$$( A \times \lambda B)_{i,j} = \sum_{k = 1}^p a_{i,k} \times \lambda b_{k,j} = \lambda \sum_{k = 1}^p a_{i,k} \times b_{k,j} $$

And as a result,

$$ \forall A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K}) , \ \forall B \in \hspace{0.03em} \mathcal{M}_{p,q} (\mathbb{K}), $$

$$ (\lambda A) \times B = A \times (\lambda B) = \lambda (A \times B) $$


Multiplication by the identity

Let \(A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})\) be a matrix.


  1. Calculation of \(I_n \times A\)
  2. By the definition of the matrix product, we have:

    $$ \forall (i, j) \in [\![1, n]\!] \times [\![1, p]\!],$$
    $$(I_n \times A)_{i,j} = \sum_{k = 1}^n (I_n)_{i,k} \times a_{k,j} $$

    But the factor \((I_n)_{i,k}\) equals:

    $$ (I_n)_{i,k} = \Biggl \{ \begin{align*} 1, \ if \ (i = k) \\ 0 \ otherwise \end{align*} $$

    So, only the term \(k = i\) remains in the sum, and:

    $$(I_n \times A)_{i,j} = a_{i,j} = (A)_{i,j} $$

    It is the unchanged starting matrix.


  3. Calculation of \(A \times I_p\)
  4. Similarly, on the other side:

    $$ \forall (i, j) \in [\![1, n]\!] \times [\![1, p]\!],$$
    $$(A \times I_p)_{i,j} = \sum_{k = 1}^p a_{i,k} \times (I_p)_{k,j} $$

    In the same way, for all \((i,j)\), only the term with \(k = j\) survives in this sum of products and gives \(a_{i,j}\), since all other terms are \(0\); therefore:

    $$(A \times I_p)_{i,j} = a_{i,j} = (A)_{i,j} $$

And as a result,

$$ \forall A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K}),$$

$$ I_n \times A = A \times I_p = A $$


Diagonal matrix product

  1. Product of two diagonal matrix
  2. Let \(\Bigl[ D_1 = diag(\lambda_1, \lambda_2, \ ..., \lambda_n), \ D_2 = diag(\mu_1, \mu_2, \ ..., \mu_n) \Bigr] \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})^2 \) be two diagonal matrices.

    By the definition of the matrix product, we have:

    $$ \forall (i, j) \in [\![1, n]\!]^2,$$
    $$(D_1 \times D_2)_{i,j} = \sum_{k = 1}^n (d_1)_{i,k} \times (d_2)_{k,j} $$

    Recalling the definition of a diagonal matrix, in each product inside these sums:

    $$ \Biggl \{ \begin{align*} \forall (i, k) \in [\![1, n]\!]^2, \ (i \neq k) \Longrightarrow (d_1)_{i,k} = 0 \\ \forall (k, j) \in [\![1, n]\!]^2, \ (k \neq j) \Longrightarrow (d_2)_{k,j} = 0 \end{align*} $$

    So, for a given \(k\), the product \( \Bigl[ (d_1)_{i,k} \times (d_2)_{k,j} \Bigr] \) can be nonzero only if:

    $$ \Bigl[ (i = k) \land (k = j) \Bigr] \Longleftrightarrow (i = k = j) $$

    We then have:

    $$ \forall (i, j) \in [\![1, n]\!]^2, \ (D_1 \times D_2)_{i,j} = \Biggl \{ \begin{align*} (d_1)_{i,j} \times (d_2)_{i,j}, \ if \ (i = j) \\ 0 \ otherwise \end{align*} $$

    So,

    $$ \forall (i, j) \in [\![1, n]\!]^2, \ (D_1 \times D_2)_{i,j} = \Biggl \{ \begin{align*} (d_1)_{k,k} \times (d_2)_{k,k} = \lambda_k \ \mu_k, \ if \ (i = j = k) \\ 0 \ otherwise \end{align*} $$

    $$ (D_1 \times D_2) = \begin{pmatrix} \textcolor{#606B9E}{\lambda_1 \ \mu_1} & 0 & 0 & \dots & 0 \\ 0 & \textcolor{#606B9E}{\lambda_2 \ \mu_2} & 0 & \dots & 0 \\ 0 & 0 & \textcolor{#606B9E}{\lambda_3 \ \mu_3} & \dots & 0 \\ \hspace{0.1em}\vdots & \hspace{0.1em} \vdots & \hspace{0.1em} \vdots & \textcolor{#606B9E}{\ddots} & \hspace{0.1em} \vdots \\ 0 & 0 & 0 & \dots & \textcolor{#606B9E}{\lambda_n \ \mu_n} \end{pmatrix} $$


    In the same way, if we perform the product in the other order:

    $$ \forall (i, j) \in [\![1, n]\!]^2,$$
    $$(D_2 \times D_1)_{i,j} = \sum_{k = 1}^n (d_2)_{i,k} \times (d_1)_{k,j} $$

    The same reasoning leads to the same result, namely:

    $$ \forall (i, j) \in [\![1, n]\!]^2, \ (D_2 \times D_1)_{i,j} = \Biggl \{ \begin{align*} (d_2)_{k,k} \times (d_1)_{k,k} = \mu_k \ \lambda_k, \ if \ (i = j = k) \\ 0 \ otherwise \end{align*} $$

    Since multiplication in the field \(\mathbb{K}\) is commutative, the two products \((D_1 \times D_2)\) and \((D_2 \times D_1)\) are equal.


    And finally,

    $$ \forall \Bigl[ D_1 = diag(\lambda_1, \lambda_2, \ ..., \lambda_n), \ D_2 = diag(\mu_1, \mu_2, \ ..., \mu_n) \Bigr] \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})^2, $$

    $$ D_1 \times D_2 = D_2 \times D_1 = diag \left(\lambda_1 \mu_1, \lambda_2 \mu_2, \ ..., \lambda_n \mu_n \right) $$


  3. A diagonal matrix raised to the power of \(m\)
  4. Let \(\Bigl[ D = diag(\lambda_1, \lambda_2, \ ..., \lambda_n) \Bigr] \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K}) \) be a diagonal matrix.

    Following the same reasoning as above, a direct induction gives:


    $$ \forall \Bigl[ D = diag(\lambda_1, \lambda_2, \ ..., \lambda_n) \Bigr] \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K}), $$

    $$ D^m = diag \left(\lambda_1^m, \lambda_2^m, \ ..., \lambda_n^m \right) $$


Matrix transposition

Linearity of transposition

Let \((A,B) \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})^2\) be two matrices of the same size and \((\lambda, \mu) \in \hspace{0.05em} \mathbb{R}^2\) two real numbers.


We saw that multiplying a matrix by a scalar multiplies all of its elements by that scalar.

Moreover, the sum of two matrices (of the same size) adds the elements with the same index \((i,j)\) together.


Now, with these two properties, we can build a linear combination:

$$ \forall (i, j) \in [\![1, n]\!] \times [\![1, p]\!],$$
$$(\lambda A + \mu B)_{i,j} = \lambda \ a_{i,j} + \mu \ b_{i,j}$$

Now taking its transpose swaps all \(i\) and \(j\) indices:

$$(\lambda A + \mu B)^T_{i,j} = \lambda \ a_{j,i} + \mu \ b_{j,i}$$

And as a result,

$$ \forall (\lambda, \mu) \in \hspace{0.05em} \mathbb{R}^2, \ \forall (A,B) \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})^2, $$

$$ (\lambda A + \mu B)^T = \lambda A^T + \mu B^T $$


Transposed of a product

Let \(A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})\) and \(B \in \hspace{0.03em} \mathcal{M}_{p,q} (\mathbb{K})\) be two matrix.


By the definition of the matrix product, we have:

$$ \forall (i, j) \in [\![1, n]\!] \times [\![1, q]\!],$$
$$(A \times B)_{i,j} = \sum_{k = 1}^p a_{i,k} \times b_{k,j} $$

Now taking its transpose swaps all \(i\) and \(j\) indices:

$$ \forall (i, j) \in [\![1, q]\!] \times [\![1, n]\!],$$
$$(A \times B)^T_{i,j} = \sum_{k = 1}^p a_{j,k} \times b_{k,i} $$

But the product of the transposes has exactly the same value:

$$(B^T \times A^T)_{i,j} = \sum_{k = 1}^p (B^T)_{i,k} \times (A^T)_{k,j} = \sum_{k = 1}^p b_{k,i} \times a_{j,k} $$

So,

$$ \forall A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K}) , \ \forall B \in \hspace{0.03em} \mathcal{M}_{p,q} (\mathbb{K}), $$

$$ (A \times B)^T = B^T \times A^T $$

$$(3)$$

Inversion of matrix

Inverse of the inverse

Let \(A \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})\) be a square matrix of size \(n\).

The relationship between an invertible matrix and its determinant is:

$$ A \ is \ invertible \Longleftrightarrow det(A) \neq 0 $$

But we do also have this property:

$$ det(A^{-1}) = det(A)^{-1}$$

So, if \(A\) is invertible, then \(A^{-1}\) is too. We now have the following relation:

$$ A A^{-1} = I_n$$

But also :

$$ A^{-1} (A^{-1})^{-1} = I_n$$

By multiplying both sides of this expression by \(A\) on the left, we obtain:

$$ \underbrace {A A^{-1}} _\text{ \(= \ I_n\)} \ (A^{-1})^{-1} = \ \underbrace {A I_n} _\text{ \(= \ A\)}$$

And finally,

$$ \forall A \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K}),$$

$$ A \ is \ invertible \Longrightarrow A^{-1} \ is \ invertible \Longrightarrow (A^{-1})^{-1} = A $$


Inverse of a transposed matrix

Let \(A \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})\) be a square matrix of size \(n\).

The matrix \(A\) is invertible if and only if \(det(A) \neq 0\). But the matrices \(A\) and \(A^T\) have the same determinant:

$$ det(A) = det(A^T)$$

So, if \(A\) is invertible, then \(A^T\) is too.


Furthermore, we saw that:

$$ \forall A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K}) , \ \forall B \in \hspace{0.03em} \mathcal{M}_{p,q} (\mathbb{K}), $$
$$ (A \times B)^T = B^T \times A^T $$

So, applied to our case:

$$ \left(A \times A^{-1}\right)^T = \ \left(A^{-1} \right)^T \times \ A^T $$
$$ I_n^T = \ \left(A^{-1} \right)^T \times \ A^T $$

Now, the identity matrix is its own transpose. Therefore:

$$ I_n = \ \left(A^{-1} \right)^T \times \ A^T $$

By multiplying both sides of this expression by \(\left(A^T \right)^{-1}\) on the right, we obtain:

$$ I_n \times \left(A^T \right)^{-1} = \ \left(A^{-1} \right)^T \times \ \underbrace { A^T \left(A^T \right)^{-1}} _\text{ \(= \ I_n\)} \ $$

And as a result,

$$ \forall A \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K}),$$

$$ A \ is \ invertible \Longrightarrow A^{T} \ is \ invertible \Longrightarrow \ \left(A^T \right)^{-1} = (A^{-1})^T$$


Inverse of a product

Let \((A,B) \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})^2\) be two square matrices of the same size \(n\).

If both matrices \(A\) and \(B\) are invertible, then:

$$ A \ and \ B \ are \ invertible \Longleftrightarrow \Biggl \{ \begin{align*} det(A) \neq 0 \\ det(B) \neq 0 \end{align*} \qquad(4) $$

But, by the properties of the determinant, we know that:

$$ \forall (A, B) \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})^2, $$
$$ det(A \times B) = det(A) \times det(B) \qquad(5) $$

Combining both expressions \((4)\) and \((5)\), we now have:

$$ A \ and \ B \ are \ invertible \Longleftrightarrow det(A \times B) \neq 0$$

Therefore, the product \((A \times B)\) is also invertible and:

$$ (AB) \times \ \left(AB\right)^{-1} \hspace{0.01em} = I_n$$

By multiplying both sides of this expression by \((B^{-1} A^{-1})\) on the left, we obtain:

$$ (B^{-1} A^{-1}) \times (AB) \times \hspace{0.01em} \left(AB\right)^{-1} \hspace{0.01em} = (B^{-1} A^{-1}) \times I_n $$
$$ B^{-1} \times (A^{-1} A) \times B \times \hspace{0.01em} \left(AB\right)^{-1} \hspace{0.01em} = B^{-1} A^{-1}$$

Since \(A^{-1} \times A = I_n\), and then \(B^{-1} \times B = I_n\):

$$ B^{-1} \times \ \underbrace{ \left(A^{-1} A\right)} _\text{\(= \ I_n\)} \hspace{0.03em} \times B \times \hspace{0.03em} \left(AB\right)^{-1} = B^{-1} A^{-1}$$
$$ \underbrace{ \left(B^{-1} B\right)} _\text{\(= \ I_n\)} \ \times \ \left(AB\right)^{-1} = B^{-1} A^{-1}$$

And finally,

$$ \forall (A ,B) \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})^2,$$

$$ A \ and \ B \ are \ invertible \Longrightarrow (A \times B) \ is \ invertible \Longrightarrow \ \left(A \times B\right)^{-1} = B^{-1} \times A^{-1} $$

$$(6)$$

Both expressions \((3)\) and \((6)\) follow the same pattern:

$$ \forall (A ,B) \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})^2, \enspace \Biggl \{ \begin{align*} (A \times B)^T = B^T \times A^T \hspace{1em}\qquad (3) \\ \left(A \times B\right)^{-1} = B^{-1} \times A^{-1} \qquad (6) \end{align*} $$

So, the order of transposition and inversion does not matter.

We deduce that:

$$ \forall (A ,B) \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})^2,$$

$$ \left((A \times B)^T \right)^{-1} = \hspace{0.03em} \left((A \times B)^{-1} \right)^T = \hspace{0.03em} \left(A^T\right)^{-1} \times \hspace{0.05em} \left(B^T\right)^{-1} = \hspace{0.03em} \left(A^{-1}\right)^T \times \hspace{0.05em} \left(B^{-1}\right)^T $$


Traces of matrices

Linearity of the trace

Let \((A,B) \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})^2\) be two square matrices of the same size \(n\) and \((\lambda, \mu) \in \hspace{0.05em} \mathbb{R}^2\) two real numbers.

By the definition of the trace and by linear combination, we have:

$$Tr(\lambda A + \mu B) = \sum_{k = 1}^n (\lambda \ a + \mu \ b)_{k,k}$$

So, we directly obtain two sums:

$$Tr(\lambda A + \mu B) = \lambda \sum_{k = 1}^n a_{k,k} + \mu \sum_{k = 1}^n b_{k,k}$$

And finally,

$$ \forall (\lambda, \mu) \in \hspace{0.05em} \mathbb{R}^2, \ \forall (A,B) \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})^2,$$

$$ Tr(\lambda A + \mu B) = \lambda \ Tr(A) + \mu \ Tr(B) $$


Trace of a product

Let \((A,B) \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})^2\) be two square matrices of the same size \(n\).

By the definition of the trace, we have:

$$Tr( A \times B) = \sum_{k = 1}^n (a \times b)_{k,k}$$

Now, by the definition of the matrix product, we have:

$$ \forall k \in [\![1, n]\!],$$
$$(a \times b)_{k,k} = \sum_{l = 1}^n a_{k,l} \times b_{l,k} $$

So, replacing it by its value in the previous expression:

$$Tr( A \times B) = \sum_{k = 1}^n \sum_{l = 1}^n \Bigl[ a_{k,l} \times b_{l,k} \Bigr] \qquad (7)$$

Variables \(k\) and \(l\) are dummy summation indices, so they can be renamed and swapped:

$$Tr( A \times B) = \sum_{l = 1}^n \sum_{k = 1}^n \Bigl[ a_{l,k} \times b_{k,l} \Bigr] $$

Since multiplication of scalars in the field \(\mathbb{K}\) is commutative, we can rewrite this as:

$$Tr( A \times B) = \sum_{l = 1}^n \sum_{k = 1}^n \Bigl[ b_{k,l} \times a_{l,k} \Bigr] $$

But this is exactly \(Tr(B \times A)\), as we see by looking at expression \((7)\) with \(A\) and \(B\) swapped.


As a result we do obtain,

$$ \forall (A,B) \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})^2,$$

$$ Tr(A \times B) = Tr(B \times A)$$


Recap table of the properties of matrices
