For what follows, it is important to establish the following definitions:
Let \((A,B) \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})^2\) be two matrices of the same size.
Their sum \(A + B\) is computed entrywise: we add each element of the left matrix to the element located at the same position in the right one:
$$A + B = \begin{pmatrix} a_{1,1} & a_{1,2} & a_{1,3} & \dots & a_{1, p} \\ a_{2,1} & a_{2,2} & a_{2,3} & \dots & a_{2, p} \\ \hspace{0.5em} \vdots & \hspace{0.5em} \vdots & \hspace{0.5em} \vdots & \ddots & \hspace{0.5em} \vdots \\ a_{n,1} & a_{n,2} & a_{n,3} & \dots & a_{n, p} \end{pmatrix} + \begin{pmatrix} b_{1,1} & b_{1,2} & b_{1,3} & \dots & b_{1, p} \\ b_{2,1} & b_{2,2} & b_{2,3} & \dots & b_{2, p} \\ \hspace{0.5em} \vdots & \hspace{0.5em} \vdots & \hspace{0.5em} \vdots & \ddots & \hspace{0.5em} \vdots \\ b_{n,1} & b_{n,2} & b_{n,3} & \dots & b_{n, p} \end{pmatrix} $$
$$A + B = \begin{pmatrix} a_{1,1} + b_{1,1} & a_{1,2} + b_{1,2} & a_{1,3} + b_{1,3} & \dots & a_{1, p} + b_{1, p} \\ a_{2,1} + b_{2,1} & a_{2,2} + b_{2,2} & a_{2,3} + b_{2,3} & \dots & a_{2, p} + b_{2,p} \\ \hspace{2em} \vdots & \hspace{2em} \vdots & \hspace{2em} \vdots & \ddots & \hspace{2em} \vdots \\ a_{n,1} + b_{n,1} & a_{n,2} + b_{n,2} & a_{n,3} + b_{n,3} & \dots & a_{n, p} + b_{n,p} \end{pmatrix} $$
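This entrywise rule can be checked numerically. The following snippet is an illustration of mine (not part of the original notes), using NumPy:

```python
import numpy as np

# Two matrices of the same size (here 2 rows, 3 columns)
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[10, 20, 30],
              [40, 50, 60]])

# (A + B)[i, j] = A[i, j] + B[i, j]
S = A + B
print(S)
```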
Let \(A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})\) and \(B \in \hspace{0.03em} \mathcal{M}_{p,q} (\mathbb{K})\) be two matrices.
To multiply two matrices, the left matrix must have as many columns as the right one has rows (here \(p\)). The result is a matrix \(AB \in \hspace{0.03em} \mathcal{M}_{n,q} (\mathbb{K})\), with \(n\) rows and \(q\) columns, whose coefficients are given by \( (AB)_{i,j} = \sum_{k=1}^{p} a_{i,k} \ b_{k,j} \).
For example:
$$A \times B = \begin{pmatrix} a_{1,1} & a_{1,2} & a_{1,3} & \dots & a_{1, p} \\ a_{2,1} & a_{2,2} & a_{2,3} & \dots & a_{2, p} \\ \hspace{0.5em} \vdots & \hspace{0.5em} \vdots & \hspace{0.5em} \vdots & \ddots & \hspace{0.5em} \vdots \\ a_{n,1} & a_{n,2} & a_{n,3} & \dots & a_{n, p} \end{pmatrix} \times \begin{pmatrix} b_{1,1} & b_{1,2} & b_{1,3} & \dots & b_{1, q} \\ b_{2,1} & b_{2,2} & b_{2,3} & \dots & b_{2, q} \\ \hspace{0.5em} \vdots & \hspace{0.5em} \vdots & \hspace{0.5em} \vdots & \ddots & \hspace{0.5em} \vdots \\ b_{p,1} & b_{p,2} & b_{p,3} & \dots & b_{p, q} \end{pmatrix} $$
$$A \times B = \begin{pmatrix} \Bigl[a_{1,1} b_{1,1} + a_{1,2} b_{2,1} \ + \ ... \ + \ a_{1,p} b_{p,1} \Bigr] & \Bigl[a_{1,1} b_{1,2} + a_{1,2} b_{2,2} \ + \ ... \ + \ a_{1,p} b_{p,2}\Bigr] & \hspace{1em} \dots \dots \dots \hspace{1em} & \Bigl[a_{1,1} b_{1,q} + a_{1,2} b_{2,q} \ + \ ... \ + \ a_{1,p} b_{p,q}\Bigr] \\ \Bigl[a_{2,1} b_{1,1} + a_{2,2} b_{2,1} \ + \ ... \ + \ a_{2,p} b_{p,1}\Bigr] & \Bigl[a_{2,1} b_{1,2} + a_{2,2} b_{2,2} \ + \ ... \ + \ a_{2,p} b_{p,2}\Bigr] & \hspace{1em} \dots \dots \dots \hspace{1em} & \Bigl[a_{2,1} b_{1,q} + a_{2,2} b_{2,q} \ + \ ... \ + \ a_{2,p} b_{p,q}\Bigr] \\ \hspace{8em} \vdots & \hspace{8em} \vdots & \hspace{1em} \ddots & \hspace{8em} \vdots \\ \hspace{8em} \vdots & \hspace{8em} \vdots & \hspace{1em} \ddots & \hspace{8em} \vdots \\ \Bigl[a_{n,1} b_{1,1} + a_{n,2} b_{2,1} \ + \ ... \ + \ a_{n,p} b_{p,1}\Bigr] & \Bigl[a_{n,1} b_{1,2} + a_{n,2} b_{2,2} \ + \ ... \ + \ a_{n,p} b_{p,2}\Bigr] & \hspace{1em} \dots \dots \dots \hspace{1em} & \Bigl[a_{n,1} b_{1,q} + a_{n,2} b_{2,q} \ + \ ... \ + \ a_{n,p} b_{p,q}\Bigr] \end{pmatrix} $$
Be careful: in general, the matrix product is not commutative: \( A \times B \neq B \times A \).
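This warning is easy to verify on a small example; the sketch below (my addition, using NumPy) exhibits two square matrices for which \(AB \neq BA\):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])   # swaps the two columns when multiplied on the right

AB = A @ B   # [[2, 1], [4, 3]]
BA = B @ A   # [[3, 4], [1, 2]]
print(AB)
print(BA)
```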
Let \(A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})\) be a matrix.
When a matrix is multiplied by a scalar, each of its elements is multiplied by that scalar.
For example:
$$A = \begin{pmatrix} a_{1,1} & a_{1,2} & a_{1,3} & \dots & a_{1, p} \\ a_{2,1} & a_{2,2} & a_{2,3} & \dots & a_{2, p} \\ \hspace{0.5em} \vdots & \hspace{0.5em} \vdots & \hspace{0.5em} \vdots & \ddots & \hspace{0.5em} \vdots \\ a_{n,1} & a_{n,2} & a_{n,3} & \dots & a_{n, p} \end{pmatrix} $$
$$\lambda A = \begin{pmatrix} \lambda \ a_{1,1} & \lambda \ a_{1,2} & \lambda \ a_{1,3} & \dots & \lambda \ a_{1, p} \\ \lambda \ a_{2,1} & \lambda \ a_{2,2} & \lambda \ a_{2,3} & \dots & \lambda \ a_{2, p} \\ \hspace{1.1em} \vdots & \hspace{1.1em} \vdots & \hspace{1.1em} \vdots & \ddots & \hspace{1.1em} \vdots \\ \lambda \ a_{n,1} & \lambda \ a_{n,2} & \lambda \ a_{n,3} & \dots & \lambda \ a_{n, p} \end{pmatrix} $$
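As a quick illustration (mine, not from the notes), scalar multiplication in NumPy scales every element:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
lam = 3          # the scalar lambda

M = lam * A      # every element a_{i,j} becomes lambda * a_{i,j}
print(M)
```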
Let \((A,B) \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})^2\) be two matrices of the same size and \((\lambda, \mu) \in \hspace{0.05em} \mathbb{R}^2\) two scalars.
With the previous properties of addition and multiplication by a scalar, we can form linear combinations such as \(\lambda A + \mu B\).
Let \(A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})\) be a matrix.
Matrix transposition consists in swapping the row and column indices of each element: \( (A^T)_{i,j} = a_{j,i} \). We write \(A^T\) (sometimes \(^t A\)) for the transpose of the matrix \(A\); if \(A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})\), then \(A^T \in \hspace{0.03em} \mathcal{M}_{p,n} (\mathbb{K})\).
For example, with a square matrix:
$$A = \begin{pmatrix} a_{1,1} & \textcolor{#8E5B5B}{a_{1,2}} & \textcolor{#8E5B5B}{a_{1,3}} & \textcolor{#8E5B5B}{\dots} & \textcolor{#8E5B5B}{a_{1, n}} \\ \textcolor{#446e4f}{a_{2,1}} & a_{2,2} & \textcolor{#8E5B5B}{a_{2,3}} & \textcolor{#8E5B5B}{\dots} & \textcolor{#8E5B5B}{a_{2, n}} \\ \textcolor{#446e4f}{a_{3,1}} & \textcolor{#446e4f}{a_{3,2}} & a_{3,3} & \textcolor{#8E5B5B}{\dots} & \textcolor{#8E5B5B}{a_{3, n}} \\ \hspace{0.8em} \textcolor{#446e4f}{\vdots} & \hspace{0.8em} \textcolor{#446e4f}{\vdots} & \hspace{0.8em} \textcolor{#446e4f}{\vdots} & \ddots & \hspace{0.8em} \textcolor{#8E5B5B}{\vdots} \\ \textcolor{#446e4f}{a_{n,1}} & \textcolor{#446e4f}{a_{n,2}} & \textcolor{#446e4f}{a_{n,3}} & \textcolor{#446e4f}{\dots} & a_{n, n} \\ \end{pmatrix} $$
Its transpose is then:
$$A^T = \begin{pmatrix} a_{1,1} & \textcolor{#446e4f}{a_{2,1}} & \textcolor{#446e4f}{a_{3,1}} & \textcolor{#446e4f}{\dots} & \textcolor{#446e4f}{a_{n, 1}} \\ \textcolor{#8E5B5B}{a_{1,2}} & a_{2,2} & \textcolor{#446e4f}{a_{3,2}} & \textcolor{#446e4f}{\dots} & \textcolor{#446e4f}{a_{n, 2}} \\ \textcolor{#8E5B5B}{a_{1,3}} & \textcolor{#8E5B5B}{a_{2,3}} & a_{3,3} & \textcolor{#446e4f}{\dots} & \textcolor{#446e4f}{a_{n, 3}} \\ \hspace{0.8em} \textcolor{#8E5B5B}{\vdots} & \hspace{0.8em} \textcolor{#8E5B5B}{\vdots} & \hspace{0.8em} \textcolor{#8E5B5B}{\vdots} & \ddots & \hspace{0.8em} \textcolor{#446e4f}{\vdots} \\ \textcolor{#8E5B5B}{a_{1,n}} & \textcolor{#8E5B5B}{a_{2,n}} & \textcolor{#8E5B5B}{a_{3,n}} & \textcolor{#8E5B5B}{\dots} & a_{n, n} \\ \end{pmatrix} $$
Only the diagonal remains unchanged, because when \(i = j\), we have \(a_{i,j} = a_{j,i}\).
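A small numerical sketch of transposition (my addition): the row and column indices are swapped, and the diagonal stays in place.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

T = A.T          # T[i, j] == A[j, i]
print(T)
```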
Let \(A \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})\) be a square matrix of size \(n\).
The inverse of the matrix \(A\), when it exists, is the matrix written \(A^{-1}\) such that: \(A \ A^{-1} = A^{-1} A = I_n\).
A matrix is invertible if and only if \(\det(A) \neq 0\).
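This criterion can be checked numerically; the following sketch (my addition, with made-up numbers) uses NumPy's determinant and inverse routines:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 1.]])

d = np.linalg.det(A)         # 2*1 - 1*1 = 1, non-zero, so A is invertible
A_inv = np.linalg.inv(A)

print(d)
print(A @ A_inv)             # numerically close to the identity I_2
```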
A system of linear equations \( (S)\), where the unknowns are the variables \(x_{i,j}\), can be written as a matrix product:
$$ (S) \enspace \left \{ \begin{gather*} a_1 x_{1,1} + a_2 x_{1,2} + a_3 x_{1,3} + \hspace{0.1em}... \hspace{0.1em}+ a_p x_{1,p} = b_1 \\ a_1 x_{2,1} + a_2 x_{2,2} + a_3 x_{2,3} + \hspace{0.1em}... \hspace{0.1em}+ a_p x_{2,p} = b_2 \\ \vdots \\ a_1 x_{n,1} + a_2 x_{n,2} + a_3 x_{n,3} + \hspace{0.1em}... \hspace{0.1em}+ a_p x_{n,p} = b_n \\ \end{gather*} \right \} $$
$$ \underbrace{ \begin{pmatrix} x_{1,1} & x_{1,2} & x_{1,3} & \dots & x_{1, p} \\ x_{2,1} & x_{2,2} & x_{2,3} & \dots & x_{2, p} \\ \hspace{0.8em} \vdots & \hspace{0.8em} \vdots & \hspace{0.8em} \vdots & \ddots & \hspace{0.8em} \vdots \\ x_{n,1} & x_{n,2} & x_{n,3} & \dots & x_{n, p} \\ \end{pmatrix} } _\text{X} \times \underbrace{ \begin{pmatrix} a_1 \\ a_2 \\ \hspace{0.3em}\vdots \\ a_p \end{pmatrix} } _\text{A} = \underbrace{ \begin{pmatrix} b_1 \\ b_2 \\ \hspace{0.3em}\vdots \\ b_n \end{pmatrix} } _\text{B} \ \Longleftrightarrow \ XA = B, \ \text{with} \enspace \Biggl \{ \begin{gather*} X \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K}) \\ A \in \hspace{0.03em} \mathcal{M}_{p,1} (\mathbb{K}) \\ B \in \hspace{0.03em} \mathcal{M}_{n,1} (\mathbb{K}) \end{gather*} $$
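To make this matrix form concrete, here is a small sketch of mine (the numbers are made up) checking that the product of the unknowns by the coefficient column gives the right-hand sides:

```python
import numpy as np

# Hypothetical 2x3 matrix of unknowns x_{i,j}
X = np.array([[1., 2., 0.],
              [0., 1., 3.]])
# Column of coefficients a_1..a_p (p = 3)
a = np.array([[1.], [2.], [1.]])

b = X @ a      # column of right-hand sides b_1..b_n (n = 2)
print(b)
```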
Let \(A \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})\) be a square matrix of size \(n\).
The trace of a matrix is the sum of its diagonal elements: \( Tr(A) = \sum_{i=1}^{n} a_{i,i} \).
$$ A = \begin{pmatrix} \textcolor{#606B9E}{a_{1,1}} & a_{1,2} & a_{1,3} & \dots & a_{1,n} \\ a_{2,1} & \textcolor{#606B9E}{a_{2,2}} & a_{2,3} & \dots & a_{2,n} \\ a_{3,1} & a_{3,2} & \textcolor{#606B9E}{a_{3,3}} & \dots & a_{3,n} \\ \hspace{0.1em}\vdots & \hspace{0.1em} \vdots & \hspace{0.1em} \vdots & \textcolor{#606B9E}{\ddots} & \hspace{0.1em} \vdots \\ a_{n,1} & a_{n,2} & a_{n,3} & \dots & \textcolor{#606B9E}{a_{n,n}} \end{pmatrix} $$
A diagonal matrix is a square matrix where all the elements are \(0\) except on the main diagonal:
$$D_n = \begin{pmatrix} \textcolor{#606B9E}{d_{1,1}} & 0 & 0 & \dots & 0 \\ 0 & \textcolor{#606B9E}{d_{2,2}} & 0 & \dots & 0 \\ 0 & 0 & \textcolor{#606B9E}{d_{3,3}} & \dots & 0 \\ \hspace{0.1em}\vdots & \hspace{0.1em} \vdots & \hspace{0.1em} \vdots & \textcolor{#606B9E}{\ddots} & \hspace{0.1em} \vdots \\ 0 & 0 & 0 & \dots & \textcolor{#606B9E}{d_{n,n}} \end{pmatrix} $$
We can also denote the diagonal matrix \(D_n\) by its diagonal elements only: \(D_n = diag(\lambda_1, \lambda_2, \ ..., \lambda_n)\).
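As an aside of mine, NumPy's `np.diag` builds exactly this matrix from the list of diagonal elements:

```python
import numpy as np

# diag(lambda_1, lambda_2, lambda_3) built from its diagonal elements
D = np.diag([1.0, 2.0, 3.0])
print(D)
```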
The identity matrix \(I_n\) is defined as follows:
$$I_n = \begin{pmatrix} \textcolor{#606B9E}{1} & 0 & 0 & \dots & 0 \\ 0 & \textcolor{#606B9E}{1} & 0 & \dots & 0 \\ 0 & 0 & \textcolor{#606B9E}{1} & \dots & 0 \\ \vdots & \vdots & \vdots & \textcolor{#606B9E}{\ddots} & \vdots \\ 0 & 0 & 0 & \dots & \textcolor{#606B9E}{1} \\ \end{pmatrix} $$
It is the matrix of size \(n\) with the value \(1\) on its main diagonal and \(0\) everywhere else; it is a special case of diagonal matrix. For example,
$$I_3 = \begin{pmatrix} \textcolor{#606B9E}{1} & 0 & 0 \\ 0 & \textcolor{#606B9E}{1} & 0 \\ 0 & 0 & \textcolor{#606B9E}{1} \end{pmatrix} $$
$$ (A \times B) \times C = A \times (B \times C) $$
$$ A \times (B + C) = A \times B + A \times C $$
$$ (A + B) \times C = A \times C + B \times C $$
$$ (\lambda A) \times B = A \times (\lambda B) = \lambda (A \times B) $$
Multiplication by the identity
$$ I_n \times A = A \times I_p = A $$
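A quick numerical check of this neutrality of the identity (my addition, using NumPy's `np.eye`):

```python
import numpy as np

A = np.arange(6).reshape(2, 3)   # A in M_{2,3}

left  = np.eye(2) @ A            # I_n x A with n = 2
right = A @ np.eye(3)            # A x I_p with p = 3
print(left)
print(right)
```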
$$ D_1 \times D_2 = D_2 \times D_1 = diag \left(\lambda_1 \mu_1, \lambda_2 \mu_2, \ ..., \lambda_n \mu_n \right) $$
$$ D^m = diag \left(\lambda_1^m, \lambda_2^m, \ ..., \lambda_n^m \right) $$
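The power rule for diagonal matrices can be verified numerically; this sketch (mine, with arbitrary values) uses NumPy's `matrix_power`:

```python
import numpy as np

D = np.diag([2.0, 3.0, 5.0])
m = 4

Dm = np.linalg.matrix_power(D, m)   # equals diag(2**4, 3**4, 5**4)
print(Dm)
```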
$$ (\lambda A + \mu B)^T = \lambda A^T + \mu B^T $$
$$ (A \times B)^T = B^T \times A^T $$
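Note the reversal of the factors, which also makes the shapes compatible for non-square matrices. A numerical sanity check (my addition, on random matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))

lhs = (A @ B).T
rhs = B.T @ A.T      # same (4, 2) matrix
print(np.allclose(lhs, rhs))
```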
$$ A \ is \ invertible \ \Longrightarrow \ A^{-1} \ is \ invertible, \ and \ \ (A^{-1})^{-1} = A $$
Inverse of a transposed matrix
$$ A \ is \ invertible \ \Longrightarrow \ A^{T} \ is \ invertible, \ and \ \ \left(A^T \right)^{-1} = (A^{-1})^T$$
$$ A \ and \ B \ are \ invertible \ \Longrightarrow \ (A \times B) \ is \ invertible, \ and \ \ \left(A \times B\right)^{-1} = B^{-1} \times A^{-1} $$
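Here too the factors are reversed. A numerical check of mine, on two invertible matrices with made-up entries:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 1.]])
B = np.array([[1., 2.],
              [0., 1.]])   # both determinants are non-zero

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)
print(np.allclose(lhs, rhs))
```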
Both expressions \((3)\) and \((6)\) follow the same pattern:
$$ \forall (A ,B) \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})^2, \enspace \Biggl \{ \begin{align*} (A \times B)^T = B^T \times A^T \hspace{1em}\qquad (3) \\ \left(A \times B\right)^{-1} = B^{-1} \times A^{-1} \qquad (6) \end{align*} $$
So, the order in which transposition and inversion are applied does not matter,
$$ \left((A \times B)^T \right)^{-1} = \hspace{0.03em} \left((A \times B)^{-1} \right)^T = \hspace{0.03em} \left(A^T\right)^{-1} \times \hspace{0.05em} \left(B^T\right)^{-1} = \hspace{0.03em} \left(A^{-1}\right)^T \times \hspace{0.05em} \left(B^{-1}\right)^T $$
$$ Tr(\lambda A + \mu B) = \lambda \ Tr(A) + \mu \ Tr(B) $$
$$ Tr(A \times B) = Tr(B \times A)$$
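This cyclic property of the trace is easy to check numerically (a sketch of mine, on random square matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

t1 = np.trace(A @ B)
t2 = np.trace(B @ A)
print(abs(t1 - t2))   # numerically zero
```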
Recap table of the properties of matrices
Let \(A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})\), \(B \in \hspace{0.03em} \mathcal{M}_{p,q} (\mathbb{K})\) and \(C \in \hspace{0.03em} \mathcal{M}_{q,r} (\mathbb{K})\) be three matrices.
By the definition of the matrix product, we have:
$$ \forall (i,j) \in [\![1, n]\!] \times [\![1, r]\!], \enspace \Bigl( (A \times B) \times C \Bigr)_{i,j} = \sum_{k=1}^{q} (ab)_{i,k} \ c_{k,j} $$
But the factor \((ab)_{i,k}\) is worth:
$$ (ab)_{i,k} = \sum_{l=1}^{p} a_{i,l} \ b_{l,k} $$
So, we replace it in the main expression:
$$ \Bigl( (A \times B) \times C \Bigr)_{i,j} = \sum_{k=1}^{q} \left( \sum_{l=1}^{p} a_{i,l} \ b_{l,k} \right) c_{k,j} $$
Since the factor \(c_{k,j}\) does not depend on \(l\), it can be treated as a constant and moved inside the inner sum:
$$ \Bigl( (A \times B) \times C \Bigr)_{i,j} = \sum_{k=1}^{q} \sum_{l=1}^{p} a_{i,l} \ b_{l,k} \ c_{k,j} \qquad (1) $$
Let us now calculate the product \(A \times (B \times C)\):
$$ \Bigl( A \times (B \times C) \Bigr)_{i,j} = \sum_{l=1}^{p} a_{i,l} \ (bc)_{l,j}, \enspace \text{with} \enspace (bc)_{l,j} = \sum_{k=1}^{q} b_{l,k} \ c_{k,j} $$
In the same way, we replace \((bc)_{l,j}\) by its expression:
$$ \Bigl( A \times (B \times C) \Bigr)_{i,j} = \sum_{l=1}^{p} \sum_{k=1}^{q} a_{i,l} \ b_{l,k} \ c_{k,j} \qquad (2) $$
In both expressions \((1)\) and \((2)\), the indices \(k\) and \(l\) are summation variables running over finite sets, so the two sums can be interchanged. Expressions \((1)\) and \((2)\) are therefore equal, and finally,
$$ (A \times B) \times C = A \times (B \times C) $$
Let \(A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})\) be a matrix and \((B, C) \in \hspace{0.03em} \mathcal{M}_{p,q} (\mathbb{K})^2\) be two matrices.
With the definition of the matrix product, we have:
$$ \forall (i,j), \enspace \Bigl( A \times (B + C) \Bigr)_{i,j} = \sum_{k=1}^{p} a_{i,k} \ (b_{k,j} + c_{k,j}) = \sum_{k=1}^{p} a_{i,k} \ b_{k,j} + \sum_{k=1}^{p} a_{i,k} \ c_{k,j} = (A \times B)_{i,j} + (A \times C)_{i,j} $$
And finally,
$$ A \times (B + C) = A \times B + A \times C $$
Let \((A, B) \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})^2 \) be two matrices and \( C \in \hspace{0.03em} \mathcal{M}_{p,q} (\mathbb{K})\) another matrix.
As before:
$$ \forall (i,j), \enspace \Bigl( (A + B) \times C \Bigr)_{i,j} = \sum_{k=1}^{p} (a_{i,k} + b_{i,k}) \ c_{k,j} = \sum_{k=1}^{p} a_{i,k} \ c_{k,j} + \sum_{k=1}^{p} b_{i,k} \ c_{k,j} = (A \times C)_{i,j} + (B \times C)_{i,j} $$
And finally,
$$ (A + B) \times C = A \times C + B \times C $$
Let \(A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})\) and \(B \in \hspace{0.03em} \mathcal{M}_{p,q} (\mathbb{K})\) be two matrices and \(\lambda \in \mathbb{R}\) a real number.
With the definition of the matrix product, we have:
$$ \Bigl( (\lambda A) \times B \Bigr)_{i,j} = \sum_{k=1}^{p} (\lambda \ a_{i,k}) \ b_{k,j} = \lambda \sum_{k=1}^{p} a_{i,k} \ b_{k,j} = \lambda \ (A \times B)_{i,j} $$
In the same way:
$$ \Bigl( A \times (\lambda B) \Bigr)_{i,j} = \sum_{k=1}^{p} a_{i,k} \ (\lambda \ b_{k,j}) = \lambda \sum_{k=1}^{p} a_{i,k} \ b_{k,j} = \lambda \ (A \times B)_{i,j} $$
And as a result,
$$ (\lambda A) \times B = A \times (\lambda B) = \lambda (A \times B) $$
Let \(A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})\) be a matrix.
With the definition of the matrix product, we have:
$$ \forall (i,j), \enspace (I_n \times A)_{i,j} = \sum_{k=1}^{n} (I_n)_{i,k} \ a_{k,j} $$
But the factor \((I_n)_{i,k}\) is worth:
$$ (I_n)_{i,k} = \Biggl \{ \begin{align*} 1, \ if \ (i = k) \\ 0 \ otherwise \end{align*} $$
So, in each sum only the term with \(k = i\) survives:
$$ (I_n \times A)_{i,j} = (I_n)_{i,i} \ a_{i,j} = a_{i,j} $$
We recover the starting matrix, unchanged.
Likewise, on the other side:
$$ (A \times I_p)_{i,j} = \sum_{k=1}^{p} a_{i,k} \ (I_p)_{k,j} $$
In the same way, for all \((i,j)\), only the term with \(k = j\) in this sum of products is non-zero (all other terms are worth \(0\)), and we obtain:
$$ (A \times I_p)_{i,j} = a_{i,j} $$
And as a result,
$$ I_n \times A = A \times I_p = A $$
Let \(\Bigl[ D_1 = diag(\lambda_1, \lambda_2, \ ..., \lambda_n), \ D_2 = diag(\mu_1, \mu_2, \ ..., \mu_n) \Bigr] \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})^2 \) be two diagonal matrices.
With the definition of the matrix product, we have:
$$ \forall (i, j) \in [\![1, n]\!]^2, \enspace (D_1 \times D_2)_{i,j} = \sum_{k=1}^{n} (d_1)_{i,k} \ (d_2)_{k,j} $$
Recalling the definition of a diagonal matrix, in each term of these sums:
$$ \Biggl \{ \begin{align*} \forall (i, k) \in [\![1, n]\!]^2, \ (i \neq k) \Longrightarrow (d_1)_{i,k} = 0 \\ \forall (k, j) \in [\![1, n]\!]^2, \ (k \neq j) \Longrightarrow (d_2)_{k,j} = 0 \end{align*} $$
So, for a given \(k\), the product \( \Bigl[ (d_1)_{i,k} \times (d_2)_{k,j} \Bigr] \) can be non-zero only if \(i = k\) and \(k = j\), that is, only when \(i = j = k\). We then have:
$$ \forall (i, j) \in [\![1, n]\!]^2, \ (D_1 \times D_2)_{i,j} = \Biggl \{ \begin{align*} (d_1)_{i,j} \times (d_2)_{i,j}, \ if \ (i = j) \\ 0 \ otherwise \end{align*} $$
So,
$$ \forall (i, j) \in [\![1, n]\!]^2, \ (D_1 \times D_2)_{i,j} = \Biggl \{ \begin{align*} (d_1)_{k,k} \times (d_2)_{k,k} = \lambda_k \ \mu_k, \ if \ (i = j = k) \\ 0 \ otherwise \end{align*} $$
$$ (D_1 \times D_2) = \begin{pmatrix} \textcolor{#606B9E}{\lambda_1 \ \mu_1} & 0 & 0 & \dots & 0 \\ 0 & \textcolor{#606B9E}{\lambda_2 \ \mu_2} & 0 & \dots & 0 \\ 0 & 0 & \textcolor{#606B9E}{\lambda_3 \ \mu_3} & \dots & 0 \\ \hspace{0.1em}\vdots & \hspace{0.1em} \vdots & \hspace{0.1em} \vdots & \textcolor{#606B9E}{\ddots} & \hspace{0.1em} \vdots \\ 0 & 0 & 0 & \dots & \textcolor{#606B9E}{\lambda_n \ \mu_n} \end{pmatrix} $$
In the same way, if we perform the product the other way round, the same reasoning leads to:
$$ \forall (i, j) \in [\![1, n]\!]^2, \ (D_2 \times D_1)_{i,j} = \Biggl \{ \begin{align*} (d_2)_{k,k} \times (d_1)_{k,k} = \mu_k \ \lambda_k, \ if \ (i = j = k) \\ 0 \ otherwise \end{align*} $$
The product of numbers on the field \(\mathbb{K}\) being commutative, the two products \((D_1 \times D_2)\) and \((D_2 \times D_1)\) are equal.
And finally,
$$ D_1 \times D_2 = D_2 \times D_1 = diag \left(\lambda_1 \mu_1, \lambda_2 \mu_2, \ ..., \lambda_n \mu_n \right) $$
Let \(\Bigl[ D = diag(\lambda_1, \lambda_2, \ ..., \lambda_n) \Bigr] \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K}) \) be a diagonal matrix.
Following the same reasoning as above, a direct induction on \(m\) gives:
$$ D^m = diag \left(\lambda_1^m, \lambda_2^m, \ ..., \lambda_n^m \right) $$
Let \((A,B) \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})^2\) be two matrices of the same size and \((\lambda, \mu) \in \hspace{0.05em} \mathbb{R}^2\) two real numbers.
We saw that multiplying a matrix by a scalar multiplies each of its elements.
Moreover, the sum of two matrices of the same size adds together the elements sharing the same index \((i,j)\).
With these two properties, we can build a linear combination such as:
$$ (\lambda A + \mu B)_{i,j} = \lambda \ a_{i,j} + \mu \ b_{i,j} $$
Now, taking its transpose reverses all \(i\) and \(j\) indices:
$$ \bigl( (\lambda A + \mu B)^T \bigr)_{i,j} = \lambda \ a_{j,i} + \mu \ b_{j,i} = \lambda \ (A^T)_{i,j} + \mu \ (B^T)_{i,j} $$
And as a result,
$$ (\lambda A + \mu B)^T = \lambda A^T + \mu B^T $$
Let \(A \in \hspace{0.03em} \mathcal{M}_{n,p} (\mathbb{K})\) and \(B \in \hspace{0.03em} \mathcal{M}_{p,q} (\mathbb{K})\) be two matrices.
With the definition of the matrix product, we have:
$$ (A \times B)_{i,j} = \sum_{k=1}^{p} a_{i,k} \ b_{k,j} $$
Now, taking its transpose reverses all \(i\) and \(j\) indices:
$$ \bigl( (A \times B)^T \bigr)_{i,j} = (A \times B)_{j,i} = \sum_{k=1}^{p} a_{j,k} \ b_{k,i} $$
But the product of the transposes, taken in reverse order, is worth the same value:
$$ (B^T \times A^T)_{i,j} = \sum_{k=1}^{p} (B^T)_{i,k} \ (A^T)_{k,j} = \sum_{k=1}^{p} b_{k,i} \ a_{j,k} $$
So,
$$ (A \times B)^T = B^T \times A^T $$
Let \(A \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})\) be a square matrix of size \(n\).
The relationship between an invertible matrix and its determinant is:
$$ A \ is \ invertible \ \Longleftrightarrow \ \det(A) \neq 0 $$
But we also have this property of the determinant:
$$ \det(A^{-1}) = \frac{1}{\det(A)} \neq 0 $$
So, if \(A\) is invertible, then \(A^{-1}\) also is. We now have the following relation:
$$ A \ A^{-1} = I_n $$
But also:
$$ A^{-1} \ (A^{-1})^{-1} = I_n $$
By multiplying each member of this last expression by \(A\) from the left, we obtain:
$$ A \ A^{-1} \ (A^{-1})^{-1} = A \ I_n \ \Longleftrightarrow \ (A^{-1})^{-1} = A $$
And finally,
$$ A \ is \ invertible \ \Longrightarrow \ A^{-1} \ is \ invertible, \ and \ \ (A^{-1})^{-1} = A $$
Let \(A \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})\) be a square matrix of size \(n\).
A matrix \(A\) is invertible if and only if \(\det(A) \neq 0\). But the matrices \(A\) and \(A^T\) have the same determinant:
$$ \det(A^T) = \det(A) $$
So, if \(A\) is invertible, then \(A^T\) also is.
Furthermore, we saw that:
$$ (A \times B)^T = B^T \times A^T $$
So, applied to our case, transposing the relation \(A \ A^{-1} = I_n\) gives:
$$ (A \ A^{-1})^T = (A^{-1})^T \times A^T = I_n^T $$
Now, the identity matrix is invariant under transposition, so \(I_n^T = I_n\). Therefore:
$$ (A^{-1})^T \times A^T = I_n $$
By multiplying each member of this expression by \(\left(A^T \right)^{-1}\) from the right, we obtain:
$$ (A^{-1})^T \times A^T \times \left(A^T \right)^{-1} = I_n \times \left(A^T \right)^{-1} \ \Longleftrightarrow \ (A^{-1})^T = \left(A^T \right)^{-1} $$
And as a result,
$$ A \ is \ invertible \ \Longrightarrow \ A^{T} \ is \ invertible, \ and \ \ \left(A^T \right)^{-1} = (A^{-1})^T$$
Let \((A,B) \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})^2\) be two square matrices of the same size \(n\).
If both matrices \(A\) and \(B\) are invertible, then:
$$ A \ and \ B \ are \ invertible \Longleftrightarrow \Biggl \{ \begin{align*} \det(A) \neq 0 \\ \det(B) \neq 0 \end{align*} \qquad(4) $$
But, by the properties of the determinant, we know that:
$$ \det(A \times B) = \det(A) \times \det(B) \qquad (5) $$
Combining both expressions \((4)\) and \((5)\), we now have:
$$ \det(A \times B) \neq 0 $$
Therefore, the product \((A \times B)\) is also invertible and:
$$ (A \times B) \times (A \times B)^{-1} = I_n $$
By multiplying each member of this expression by \((B^{-1} \times A^{-1})\) from the left, we obtain:
$$ B^{-1} \times A^{-1} \times (A \times B) \times (A \times B)^{-1} = B^{-1} \times A^{-1} \times I_n $$
Moreover, by associativity, \( B^{-1} \times A^{-1} \times A \times B = B^{-1} \times I_n \times B = I_n \), so the left-hand side reduces to \((A \times B)^{-1}\).
And finally,
$$ A \ and \ B \ are \ invertible \ \Longrightarrow \ (A \times B) \ is \ invertible, \ and \ \ \left(A \times B\right)^{-1} = B^{-1} \times A^{-1} $$
Both expressions \((3)\) and \((6)\) follow the same pattern:
$$ \forall (A ,B) \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})^2, \enspace \Biggl \{ \begin{align*} (A \times B)^T = B^T \times A^T \hspace{1em}\qquad (3) \\ \left(A \times B\right)^{-1} = B^{-1} \times A^{-1} \qquad (6) \end{align*} $$
So, the order in which transposition and inversion are applied does not matter.
We deduce from it that:
$$ \left((A \times B)^T \right)^{-1} = \hspace{0.03em} \left((A \times B)^{-1} \right)^T = \hspace{0.03em} \left(A^T\right)^{-1} \times \hspace{0.05em} \left(B^T\right)^{-1} = \hspace{0.03em} \left(A^{-1}\right)^T \times \hspace{0.05em} \left(B^{-1}\right)^T $$
Let \((A,B) \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})^2\) be two square matrices of the same size \(n\) and \((\lambda, \mu) \in \hspace{0.05em} \mathbb{R}^2\) two real numbers.
By the definition of the trace and by linear combination, we have:
$$ Tr(\lambda A + \mu B) = \sum_{i=1}^{n} \bigl( \lambda \ a_{i,i} + \mu \ b_{i,i} \bigr) $$
So, we directly obtain two sums:
$$ Tr(\lambda A + \mu B) = \lambda \sum_{i=1}^{n} a_{i,i} + \mu \sum_{i=1}^{n} b_{i,i} $$
And finally,
$$ Tr(\lambda A + \mu B) = \lambda \ Tr(A) + \mu \ Tr(B) $$
Let \((A,B) \in \hspace{0.03em} \mathcal{M}_{n}(\mathbb{K})^2\) be two square matrices of the same size \(n\).
By the definition of the trace, we have:
$$ Tr(A \times B) = \sum_{k=1}^{n} (A \times B)_{k,k} \qquad (7) $$
Now, by the definition of the matrix product:
$$ (A \times B)_{k,k} = \sum_{l=1}^{n} a_{k,l} \ b_{l,k} $$
So, replacing it by its value in the previous expression:
$$ Tr(A \times B) = \sum_{k=1}^{n} \sum_{l=1}^{n} a_{k,l} \ b_{l,k} $$
The indices \(k\) and \(l\) are summation variables over finite sets, so the two sums can be switched:
$$ Tr(A \times B) = \sum_{l=1}^{n} \sum_{k=1}^{n} a_{k,l} \ b_{l,k} $$
The product of scalars on the field \(\mathbb{K}\) being commutative, we can transform it into:
$$ Tr(A \times B) = \sum_{l=1}^{n} \sum_{k=1}^{n} b_{l,k} \ a_{k,l} = \sum_{l=1}^{n} (B \times A)_{l,l} $$
But this is the same thing as \(Tr(B \times A)\): it is exactly expression \((7)\) with the roles of \(A\) and \(B\) switched.
As a result we do obtain,
$$ Tr(A \times B) = Tr(B \times A)$$