Gram-Schmidt is an algorithm for finding an ON-basis of a given subspace. The input to the algorithm is a known, non-ON basis; applying the projection theorem in sequence then produces the new basis vectors one by one.



    The name Gram-Schmidt was adopted by the mathematical community after the German scientist Erhard Schmidt published an outline for this algorithm in 1907.

    In his paper, Schmidt also acknowledged that Danish Actuary and Mathematician Jørgen Pedersen Gram had been using a similar technique before him.

    Unfortunately, Gram did not live to see his name in the title of the famous algorithm, on account of his utterly bizarre and tragic death.

    On his way to a meeting with The Royal Danish Academy of Sciences, Gram was hit and killed by a bicyclist.


    A coordinate system can be constructed in many different ways. The form we are most familiar with is the Cartesian coordinate system whose axes are perpendicular and labeled with the same units.

    The reason this particular coordinate system has become the standard is that it makes intuitive sense and simplifies much of the math.

    For various applications of linear algebra, however, we may not be lucky enough that the problem we are trying to solve is handed to us in this simple form.

    In such cases, we may wish to transform a weird coordinate system into the form we are used to working with.

    This is exactly what the Gram-Schmidt method does for us.


    Given a set of $n$ linearly independent vectors:

    $$\{\vec{v}_1, \vec{v}_2, \dots, \vec{v}_n\}$$

    forming a vector space $V = \text{span}(\vec{v}_1, \dots, \vec{v}_n)$, the Gram-Schmidt algorithm constructs an orthonormal basis of $V$:

    $$\{\vec{e}_1, \vec{e}_2, \dots, \vec{e}_n\}$$

    The Gram-Schmidt procedure: First, we let $\vec{w}_1 = \vec{v}_1$, and then we normalize $\vec{w}_1$ to compute

    $$\vec{e}_1 = \frac{\vec{w}_1}{\|\vec{w}_1\|}$$

    To produce the following orthonormal basis vectors, we consider each corresponding vector (before normalizing) and subtract the projections of this vector onto the previously computed basis vectors. Hence the formula becomes:

    $$\vec{w}_k = \vec{v}_k - \sum_{i=1}^{k-1} (\vec{v}_k \cdot \vec{e}_i)\,\vec{e}_i$$

    where the second step is to normalize the resulting vector, which gives us the orthonormal basis vector:

    $$\vec{e}_k = \frac{\vec{w}_k}{\|\vec{w}_k\|}$$
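    The two steps of the procedure, subtract projections and then normalize, can be sketched directly in code. The following is a minimal Python sketch (function names and the sample vectors are our own, not from the text):

```python
from math import sqrt

def dot(u, v):
    """Dot product of two vectors given as lists of floats."""
    return sum(ui * vi for ui, vi in zip(u, v))

def normalize(v):
    """Scale v to length 1."""
    length = sqrt(dot(v, v))
    return [vi / length for vi in v]

def gram_schmidt(vectors):
    """Turn linearly independent vectors into an orthonormal basis."""
    basis = []
    for v in vectors:
        # Subtract the projection of v onto every ON-basis vector so far.
        w = list(v)
        for e in basis:
            coeff = dot(v, e)  # denominator is 1 since e is normalized
            w = [wi - coeff * ei for wi, ei in zip(w, e)]
        # Normalize the remainder to get the next ON-basis vector.
        basis.append(normalize(w))
    return basis

# Example: the basis (1, 1), (1, 0) becomes an ON-basis of the plane.
e1, e2 = gram_schmidt([[1.0, 1.0], [1.0, 0.0]])
```

    For serious numerical work one would typically rely on a library QR factorization instead, since this classical formulation can lose orthogonality in floating point for ill-conditioned inputs.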



    Gram-Schmidt is an algorithm for creating a new ON-basis based on an existing basis. Say we have a basis:

    $$\{\vec{v}_1, \vec{v}_2, \dots, \vec{v}_n\}$$

    spanning a subspace $V$. The challenge then is to define a new basis:

    $$\{\vec{e}_1, \vec{e}_2, \dots, \vec{e}_n\}$$

    which spans the same subspace $V$, but which is also an orthonormal basis.

    Gram-Schmidt is an algorithm that creates an ON-basis based on an existing basis

    Mathematically, the two bases and their basis vectors satisfy, for $V$:

    $$\text{span}(\vec{v}_1, \dots, \vec{v}_n) = \text{span}(\vec{e}_1, \dots, \vec{e}_n) = V$$

    The algorithm

    In this section, we introduce the algorithm and show examples in 2D and 3D. Some readers pick up the pattern right away, while others need a geometric picture of each step. Our recommendation is to first skim the algorithm below, and then work through the geometry of the two examples.


    Let the vectors:

    $$\vec{v}_1, \vec{v}_2, \dots, \vec{v}_n$$

    span the subspace $V$. We create an ON-basis of $V$ as the vectors:

    $$\vec{e}_1, \vec{e}_2, \dots, \vec{e}_n$$

    and define them as:

    $$\vec{e}_k = \frac{\vec{v}_k - \text{proj}_{W_{k-1}}(\vec{v}_k)}{\left\| \vec{v}_k - \text{proj}_{W_{k-1}}(\vec{v}_k) \right\|}, \qquad k = 1, 2, \dots, n$$

    where $W_{k-1} = \text{span}(\vec{e}_1, \dots, \vec{e}_{k-1})$ is a subspace of $V$ that grows by one dimension for each step.

    Interpretation of the algorithm

    The algorithm and its steps can be summarized as:

    1. The algorithm has as many steps as the number of basis vectors $n$. In each individual step, we create a new ON-basis vector $\vec{e}_k$.

    2. For each step, we define the subspace $W_{k-1}$, which is spanned by all the ON-basis vectors we have defined so far.

    3. For each step, we define the new ON-basis vector $\vec{e}_k$ by normalizing the vector between $\vec{v}_k$ and the projection of $\vec{v}_k$ onto the subspace $W_{k-1}$.

    Projection in the algorithm

    For the second step of the algorithm, we see that $\vec{e}_2$ is defined as:

    $$\vec{e}_2 = \frac{\vec{v}_2 - \text{proj}_{W_1}(\vec{v}_2)}{\left\| \vec{v}_2 - \text{proj}_{W_1}(\vec{v}_2) \right\|}$$

    where the subspace $W_1$ is one-dimensional:

    $$W_1 = \text{span}(\vec{e}_1)$$

    For this step we can easily use the projection formula, but from the third step, where we have:

    $$W_2 = \text{span}(\vec{e}_1, \vec{e}_2)$$

    the subspace has a dimension higher than 1. Then the projection formula alone is not enough. However, we can support it with the fact that the vectors form an ON-basis. Then:

    $$\text{proj}_{W_2}(\vec{v}_3) = (\vec{v}_3 \cdot \vec{e}_1)\,\vec{e}_1 + (\vec{v}_3 \cdot \vec{e}_2)\,\vec{e}_2$$

    since $\vec{e}_1$ and $\vec{e}_2$ are normalized. If they are not normalized, the expression does not simplify as beautifully, and the general expression becomes:

    $$\text{proj}_{W_2}(\vec{v}_3) = \frac{\vec{v}_3 \cdot \vec{w}_1}{\vec{w}_1 \cdot \vec{w}_1}\,\vec{w}_1 + \frac{\vec{v}_3 \cdot \vec{w}_2}{\vec{w}_2 \cdot \vec{w}_2}\,\vec{w}_2$$

    since the denominators in the fractions are no longer equal to 1. The general case follows analogously, given that the vectors are normalized:

    $$\text{proj}_{W_{k-1}}(\vec{v}_k) = \sum_{i=1}^{k-1} (\vec{v}_k \cdot \vec{e}_i)\,\vec{e}_i$$
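    Both variants of the projection can be captured in one function, since the general formula's denominator collapses to 1 when the basis vectors are normalized. A minimal sketch (names and test vectors are assumed for illustration):

```python
def dot(u, v):
    """Dot product of two vectors given as lists of floats."""
    return sum(ui * vi for ui, vi in zip(u, v))

def proj_onto_subspace(v, basis):
    """Project v onto span(basis), where basis holds mutually
    orthogonal (not necessarily normalized) vectors.  For unit
    vectors each denominator dot(w, w) equals 1, so the sum
    reduces to the ON-basis formula sum((v . e_i) e_i)."""
    result = [0.0] * len(v)
    for w in basis:
        coeff = dot(v, w) / dot(w, w)
        result = [ri + coeff * wi for ri, wi in zip(result, w)]
    return result

# Projecting (1, 2, 3) onto the xy-plane spanned by (1, 0, 0) and
# (0, 2, 0) drops the z-component, regardless of the spanning
# vectors' lengths.
p = proj_onto_subspace([1.0, 2.0, 3.0],
                       [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
```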

    Gram-Schmidt in 2D

    Example 1

    In our first example, we have the basis:

    where the two basis vectors span $V$ and:

    We perform Gram-Schmidt and start with the first step: defining the first ON-basis vector by normalizing $\vec{v}_1$, together with the subspace $W_1$:

    The next and final step for this example is to define the second ON-basis vector according to:

    So our new ON-basis for $V$ is:

    where the vectors are both of length 1 and orthogonal to each other.

    Example 2

    In our second example for 2D, we take the basis:

    which spans the plane $V$, a subspace of $\mathbb{R}^3$, and:

    But how can an example set in $\mathbb{R}^3$ qualify as an example for 2D? It does because we are creating a new ON-basis for the subspace $V$, whose dimension is 2 since it is spanned by two basis vectors.

    We perform Gram-Schmidt:

    As in the first example, we have only one more step, which is to define the second ON-basis vector as:

    Before we perform the last normalization step, we can apply a simple trick to get a nicer vector. Since we are going to normalize the vector anyway, its original length does not matter. Therefore, see what happens when we scale the vector by 2 before we normalize:

    The latter vector is much nicer than the first: simpler scalars and no fractions inside the parentheses. Our new ON-basis for $V$ will be:
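    The rescaling trick works because normalization forgets the vector's length: for any scalar $c > 0$, the vectors $\vec{v}$ and $c\vec{v}$ normalize to the same unit vector. A quick check (the vector here is assumed for illustration, not taken from the example):

```python
from math import sqrt

def normalize(v):
    """Scale v to length 1."""
    length = sqrt(sum(vi * vi for vi in v))
    return [vi / length for vi in v]

# Clearing the fractions by scaling with 2 does not change the result.
a = normalize([0.5, -0.5, 1.0])   # the "ugly" vector with fractions
b = normalize([1.0, -1.0, 2.0])   # the same vector scaled by 2
```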

    Gram-Schmidt in 3D

    Let's construct an ON-basis using Gram-Schmidt that spans the same subspace of $\mathbb{R}^3$ as the basis:

    The first two steps of the algorithm are then equivalent to the second example for 2D, and we can reuse the results for $\vec{e}_1$ and $\vec{e}_2$. We have:

    which takes us to step three, namely to calculate:

    where we have:

    Thus, it leads to our new basis vector expressed as:

    Let us approach the two projections separately to facilitate the calculations.

    We insert our calculations into the expression above:

    We now normalize as the last sub-step of step 3 in the algorithm, but we first scale the vector by $3/2$ to simplify the calculations and get a nicer basis vector:

    We conclude that our new basis is:

    Orthonormal basis

    A basis is orthonormal if its basis vectors are orthogonal to each other and all have length one; that is, it is an orthogonal basis with the additional condition that all basis vectors are normalized. Formally, we can define it as follows:

    Let $\{\vec{e}_1, \dots, \vec{e}_n\}$ be a basis for the subspace $V$. The basis is considered orthonormal if, and only if:

    $$\vec{e}_i \cdot \vec{e}_j = \begin{cases} 1, & i = j \\ 0, & i \neq j \end{cases}$$

    Orthogonal basis

    A basis is orthogonal if all basis vectors are perpendicular to each other. Formally, we can define it as follows:

    Let $\{\vec{w}_1, \dots, \vec{w}_n\}$ be a basis for the subspace $V$. The basis is considered orthogonal if, and only if:

    $$\vec{w}_i \cdot \vec{w}_j = 0 \quad \text{for } i \neq j$$

    Orthogonal matrix

    A matrix is orthogonal if it is square and all of its column vectors are perpendicular to each other and have length one. A very natural reaction is that such a matrix should be called an orthonormal matrix, which some sources allow. But the conventional name is just orthogonal matrix, even though the name is confusing and therefore a bit unfortunate. There is a rumor that a prominent, but confused, mathematician once chose the wrong name and the mistake has never been corrected since. Let us formalize the definition:

    Let $A$ be an $n \times n$ square matrix with columns $\vec{a}_1, \dots, \vec{a}_n$:

    $$A = \begin{bmatrix} \vec{a}_1 & \vec{a}_2 & \cdots & \vec{a}_n \end{bmatrix}$$

    $A$ is considered an orthogonal matrix if, and only if:

    $$A^T A = A A^T = I$$
