Orthogonal diagonalization

Orthogonal diagonalization is the same as regular diagonalization, with the additional requirement that the eigenvectors form an orthonormal (ON) basis for $\mathbb{R}^n$. Only symmetric matrices are orthogonally diagonalizable. The vectors for the matrix $P$ are found by applying Gram-Schmidt to each eigenspace. Then, by the property of symmetric matrices, you have that $$A = PDP^{-1} = PDP^T$$

Intro

Here's a fun little physics experiment!

Take out your phone (or a book) and flip it end over end around one of its axes.

Now compare how that rotation looks to a flip around another one of its axes.

Notice how the flip around one particular axis is kind of unstable. To understand this, we need to learn about orthogonal diagonalization.

The inertia matrix (or inertia tensor) is a symmetric matrix that describes an object's resistance to rotation about its axes. A symmetric matrix can always be diagonalized, and it turns out that its eigenvectors are mutually orthogonal and that the corresponding eigenvalues are real!

We say that the inertia matrix is orthogonally diagonalizable.

The eigenvectors of the inertia matrix are called principal axes, and it was along these axes that you were flipping your phone.

Your phone's flip was unstable around the axis with the intermediate eigenvalue. This is the Dzhanibekov effect.

Concept

To diagonalize a matrix $A$ is to find three matrices $P$, $D$ and $P^{-1}$ such that $$A = PDP^{-1}$$

If $P$ happens to be orthogonal, meaning that all of its column vectors are unit vectors forming 90-degree angles to each other, the diagonalization of $A$ is said to be an orthogonal diagonalization.

This will only be possible if $A$ is symmetric, meaning that if we imagine folding $A$ along its main diagonal, each element is matched with another element that has the same value.
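For instance, the following matrix (an arbitrary example of ours) is symmetric; folding it along the main diagonal matches every element with an equal one: $$\begin{pmatrix} 1 & 2 & 3 \\ 2 & 5 & 4 \\ 3 & 4 & 6 \end{pmatrix}$$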

Math

Symmetric matrices are defined as square matrices that are equal to their transpose: $$A = A^T$$

Now let $A$ be orthogonally diagonalizable and consider its transpose $A^T$.

When distributed over the product, the transpose reverses the order of the multiplication, so that: $$A^T = (PDP^{-1})^T = (P^{-1})^T D^T P^T$$

Now $D$ is symmetric (being diagonal), so that $D^T = D$, and the fact that $P$ is orthogonal, with $P^{-1} = P^T$, gives us that: $$(P^{-1})^T = (P^T)^T = P$$

Consequently: $$A^T = PDP^{-1} = A$$

Apart from showing that $A$ must be symmetric, this tells us that the orthogonal diagonalization of $A$ can also be formulated as: $$A = PDP^T$$

Orthogonal diagonalization

Orthogonal diagonalization of an $n \times n$ matrix $A$ is a special case of diagonalization. We then make the additional requirement that the matrix $P$ is orthogonal, such that: $$A = PDP^{-1} = PDP^T$$

The last equality uses the fact that for every orthogonal matrix, the inverse is equal to the transpose. It also holds that a matrix is orthogonally diagonalizable if, and only if, it is a symmetric matrix.

A matrix is orthogonally diagonalizable if, and only if, it is symmetric

In order for $P$ to be an orthogonal matrix, we must be able to create an orthonormal basis of eigenvectors for the matrix $A$. This is done with the help of Gram-Schmidt, applied to a basis for each eigenspace, but it is essential that the eigenspaces are mutually orthogonal. Let us take the following explanatory example for the reasoning:

Let the $3 \times 3$ matrix $A$ be diagonalizable and have two distinct eigenvalues, $\lambda_1$ and $\lambda_2$. This gives us two eigenspaces: a line and a plane (since the matrix is diagonalizable and there are only two eigenvalues, one of them must have geometric multiplicity two). It is then clear that we can create an orthonormal basis for the eigenspace that is a plane, but for the eigenspace that is a line, we cannot change the direction of its basis vector without getting a completely new space. The line must therefore intersect the plane orthogonally in order for the matrix to be orthogonally diagonalizable, and the same reasoning extends to higher dimensions: all eigenspaces must be mutually orthogonal.
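As a concrete illustration (an example of our own choosing), the symmetric matrix $$A = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}$$ has the eigenvalue $\lambda_1 = 4$ with eigenspace spanned by $(1, 1, 1)^T$ (a line), and the eigenvalue $\lambda_2 = 1$ with the plane $x + y + z = 0$ as its eigenspace. Every vector in that plane is orthogonal to $(1, 1, 1)^T$, so the line intersects the plane orthogonally, just as the reasoning above requires.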

To orthogonally diagonalize the matrix $A$, we do the following:

• Find the eigenvalues and the respective eigenspaces of the matrix $A$.

• Create a basis of eigenvectors for each eigenspace.

• Apply Gram-Schmidt to each eigenspace basis.

• Form the matrices $P$ and $D$, such that the column vectors of $P$ are the orthonormal eigenvectors of $A$, and the diagonal elements of $D$ are, in the same order, the eigenvalues belonging to the respective eigenvectors. A computational sketch of these steps follows below.
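As a minimal computational sketch of the procedure (using an example matrix of our own choosing), we can use NumPy's `np.linalg.eigh`, which is specialized for symmetric matrices and already returns orthonormal eigenvectors, so it effectively performs the Gram-Schmidt step for us:

```python
import numpy as np

# A symmetric example matrix (our own choice).
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

# eigh returns real eigenvalues in ascending order and
# orthonormal eigenvectors as the columns of P.
eigenvalues, P = np.linalg.eigh(A)
D = np.diag(eigenvalues)

# P is orthogonal, so its transpose is its inverse ...
assert np.allclose(P.T @ P, np.eye(3))

# ... and A = P D P^T is the orthogonal diagonalization.
assert np.allclose(P @ D @ P.T, A)

print(eigenvalues)  # [1. 1. 4.]
```

Note that for a matrix that is not symmetric, no orthogonal diagonalization exists, and `eigh` does not apply.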

Spectral decomposition

A spectral theorem is a result about when a linear operator, or a matrix, can be diagonalized. This has great potential in real applications, not least in greatly reducing the amount of calculation for diagonalizable matrices, which is why computer scientists appreciate the theorem. This may not sound so exciting to a student, but the fact that this theorem laid the foundation for digitizing music, so that we could go from buying CDs to streaming music on our phones, usually ignites a spark.

You find the spectral theorem both in linear algebra and in functional analysis (the latter being a subject for higher studies in mathematics). Typically, one refers to a spectral decomposition for a matrix and a spectral theorem for a linear operator.

In a basic course in linear algebra, the spectral theorem usually refers simply to the orthogonal diagonalization of a square $n \times n$ matrix $A$. This requires that the matrix is symmetric, and then the following applies:

The spectral theorem

Let $A$ be an orthogonally diagonalizable $n \times n$ matrix. Let its eigenvalues be $\lambda_1$, $\lambda_2$, ..., $\lambda_n$, and its orthonormal eigenvectors be $u_1$, $u_2$, ..., $u_n$. We then have that: $$A = PDP^T = \lambda_1 u_1 u_1^T + \lambda_2 u_2 u_2^T + \cdots + \lambda_n u_n u_n^T$$

of which the last expression is usually referred to as the spectral decomposition, or the eigenvalue decomposition, of $A$.
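To see the decomposition in action, here is a short sketch (again with NumPy and an example matrix of our own choosing) that rebuilds a symmetric matrix from the rank-one terms $\lambda_i u_i u_i^T$:

```python
import numpy as np

# A symmetric example matrix (our own choice).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, P = np.linalg.eigh(A)

# Sum the rank-one projections lambda_i * u_i * u_i^T.
# The rows of P.T are the orthonormal eigenvectors u_i.
A_rebuilt = sum(lam * np.outer(u, u)
                for lam, u in zip(eigenvalues, P.T))

assert np.allclose(A_rebuilt, A)
```

Each term $\lambda_i u_i u_i^T$ scales the projection onto one principal axis by its eigenvalue; keeping only the largest terms gives a low-rank approximation of $A$, which is the idea behind many compression applications.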
