Null space and column space

The null space (commonly referred to as the kernel) and the column space (commonly referred to as the image) are vector spaces associated with a matrix $A$. The null space is simply the solution space of the homogeneous equation $A\vec{x} = \vec{0}$. The column space is the range of the linear transformation with standard matrix $A$, meaning all the vectors $\vec{y}$ that can be reached via multiplication by $A$, that is, those for which $A\vec{x} = \vec{y}$ for some $\vec{x}$.


    Intro

    Imagine that you are a cartographer and you create an atlas that describes the world. The one problem is that when you create your atlas, you lose the height dimension, since your paper is flat. There is no way to see the height of a tree or a house in your atlas.

    Information is lost

    The reason for this is that you project the real world, which is three-dimensional, onto paper, which is clearly flat. The atlas therefore removes one of the dimensions, and the dimension that gets removed corresponds to the null space.

    Concept

    We can imagine the columns of an $m \times n$ matrix $A$ as individual vectors in $m$ dimensions. The column space of $A$ is then the set of all possible linear combinations of these vectors.

    If we instead construct vectors from the rows of $A$, these $m$ vectors, each $n$-dimensional, and their linear combinations will form the row space of $A$.

    The concept of the null space is a bit different. We can think of the null space of $A$ as the set of vectors $\vec{x}$ that become $\vec{0}$ when multiplied by $A$. We can find it by solving the familiar equation:

    $$A\vec{x} = \vec{0}$$

    Math

    If the columns or rows of $A$ are not linearly independent, some of them will be redundant in forming the column space or row space, respectively.

    We can determine this through Gauss-Jordan elimination, where any row or column that does not contain a leading element in the reduced row echelon form can be ignored when we form the column and row spaces.

    For the column space, we pick the columns from the original matrix $A$, while the row vectors for the row space are better taken from its reduced form.

    The matrix and its reduced row echelon form:

    give rise to the following spaces, expressed in terms of scaling parameters:
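
    The same recipe can be carried out numerically. The sketch below is a minimal illustration in Python with SymPy, using a small made-up matrix rather than the matrix from the text: `rref()` performs the Gauss-Jordan elimination and reports which columns contain leading ones, and the bases are then read off as described above.

    ```python
    from sympy import Matrix

    # Hypothetical 3x3 matrix with one redundant column
    # (an assumption for illustration; not the matrix from the text).
    A = Matrix([
        [1, 2, 3],
        [2, 4, 6],
        [1, 0, 1],
    ])

    # Gauss-Jordan elimination: rref() returns the reduced row echelon
    # form together with the indices of the columns that hold leading ones.
    R, pivots = A.rref()
    print(R)
    print(pivots)   # (0, 1): leading ones in the first two columns

    # Column space basis: the pivot columns of the ORIGINAL matrix A.
    col_basis = [A.col(j) for j in pivots]

    # Row space basis: the nonzero rows of the REDUCED matrix R.
    row_basis = [R.row(i) for i in range(len(pivots))]

    print(col_basis)
    print(row_basis)
    ```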

    Column space

    The column space refers to the subspace that is spanned by the column vectors (the vertical vectors) of an $m \times n$-matrix $A$, and is denoted $\text{col}(A)$. The column space is equivalent to the image if we consider $A$ as the standard matrix for a linear transformation. Let:

    $$T: \mathbb{R}^n \rightarrow \mathbb{R}^m$$

    and let the $m \times n$-matrix $A$ be a standard matrix for $T$. This means that $A$ is multiplied by vectors in $\mathbb{R}^n$, resulting in vectors in $\mathbb{R}^m$. The column space is a subspace of $\mathbb{R}^m$.

    To find the column space of a matrix $A$, we need to determine which columns are linearly independent, so that they can form a basis for the column space. This is done with the help of the Gauss-Jordan method.

    Example

    Let:

    Then, with the help of row operations, we get the reduced row echelon form:

    We see that columns one and two have leading ones, which indicates that column vectors 1 and 2 of the original matrix form a basis for the column space. We have:
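
    As a complement to the example, here is a minimal SymPy sketch using a hypothetical matrix (an assumption for illustration, not the matrix above): it picks the pivot columns of the original matrix as a basis and verifies that the remaining column is a linear combination of them, i.e. redundant.

    ```python
    from sympy import Matrix, symbols, solve

    # Hypothetical matrix, used only for illustration
    # (not the matrix from the example above).
    A = Matrix([
        [1, 0, 1],
        [0, 1, 2],
        [1, 1, 3],
    ])

    # columnspace() picks the pivot columns of the original matrix,
    # exactly as described above.
    basis = A.columnspace()
    print(basis)   # the first two columns of A: (1, 0, 1) and (0, 1, 1)

    # The remaining column is redundant: it is a linear combination of
    # the basis columns, so it adds nothing to the span.
    a, b = symbols('a b')
    combo = a * basis[0] + b * basis[1] - A.col(2)
    print(solve(list(combo), [a, b]))   # {a: 1, b: 2}
    ```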

    Null space

    The null space of the $m \times n$-matrix $A$ refers to the subspace that consists of the solution set of the equation:

    $$A\vec{x} = \vec{0}$$

    and it is denoted $\text{null}(A)$. For every linear system of equations, exactly one of the following three cases applies to its solutions: a unique solution, no solution, or infinitely many solutions. What is special about the homogeneous system we have here (the right-hand side equals $\vec{0}$) is that there is always at least one solution, namely $\vec{x} = \vec{0}$ (the trivial solution). This rules out the case of no solution, so for a homogeneous system of equations only two alternatives remain: a unique solution or infinitely many solutions.
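
    To see why the trivial solution always exists, one can write out the product $A\vec{0}$ entry by entry; every term is multiplied by zero:

    $$A\vec{0} = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix} \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix} = \begin{pmatrix} a_{11} \cdot 0 + \cdots + a_{1n} \cdot 0 \\ \vdots \\ a_{m1} \cdot 0 + \cdots + a_{mn} \cdot 0 \end{pmatrix} = \vec{0}$$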

    We can consider $A$ as the standard matrix for a linear transformation $T$. Let:

    $$T: \mathbb{R}^n \rightarrow \mathbb{R}^m$$

    and let the $m \times n$-matrix $A$ be a standard matrix for $T$. This means that $A$ is multiplied by vectors in $\mathbb{R}^n$, which results in vectors in $\mathbb{R}^m$. The null space, on the other hand, is a subspace of $\mathbb{R}^n$.

    To find the null space of a matrix $A$, we need to determine the solution set of the homogeneous system of equations $A\vec{x} = \vec{0}$. This is again done with the help of the Gauss-Jordan method.

    Example

    Let:

    Then, with the help of row operations, we have the reduced row echelon form:

    We see that we have a zero row, and thus we have an infinite number of solutions. We introduce a parameter and continue to solve:

    and we see that the solution set forms a line.

    Therefore, the null space is spanned by this vector, and we can write the null space as:

    The outcome is based on the theorem:

    Elementary row operations on the matrix do not affect the null space.
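
    As a rough numerical check of both the example's procedure and the theorem, the following SymPy sketch uses a hypothetical matrix (not the one from the example) whose reduced form has a zero row, extracts a basis for the null space, and confirms that both the matrix and its reduced form send that basis vector to $\vec{0}$.

    ```python
    from sympy import Matrix

    # Hypothetical matrix whose reduced row echelon form has a zero row
    # (an illustration only; not the matrix from the example above).
    A = Matrix([
        [1, 2, 1],
        [0, 1, 1],
        [1, 3, 2],
    ])

    R, pivots = A.rref()
    print(R)             # the last row is all zeros: infinitely many solutions

    # nullspace() solves A x = 0 and returns one basis vector per free variable.
    basis = A.nullspace()
    print(basis)         # a single vector here, so the solution set is a line

    # Elementary row operations do not affect the null space, so the
    # same basis vector is sent to zero by both A and its reduced form R.
    v = basis[0]
    print(A * v, R * v)  # both are the zero vector
    ```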

    Row space

    The row space is the subspace spanned by the rows of an $m \times n$-matrix $A$ and is denoted $\text{row}(A)$; it is a subspace of $\mathbb{R}^n$. The row space is also found with the help of the Gauss-Jordan method. We go directly to the example.

    Example

    Let:

    Then, with the help of row operations, we get the reduced row echelon form:

    We see that we have a zero row, and the two rows with leading ones form a basis for the row space. The row space is thus spanned by these two vectors, and we can write the row space as:

    The outcome is based on the following theorem:

    Elementary row operations on the matrix do not affect the row space.
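
    A small SymPy sketch, again with a hypothetical matrix rather than the one above, mirrors this procedure: the basis is read off from the rows of the reduced form that contain leading ones, and the theorem is checked by confirming that every original row lies in the span of that basis.

    ```python
    from sympy import Matrix

    # Hypothetical matrix with one redundant row
    # (for illustration only; not the matrix from the example above).
    A = Matrix([
        [1, 2, 0],
        [0, 1, 1],
        [1, 4, 2],
    ])

    R, pivots = A.rref()
    print(R)   # one zero row, two rows with leading ones

    # Row space basis: the rows of the reduced form that contain leading ones.
    basis = [R.row(i) for i in range(len(pivots))]
    print(basis)

    # Check of the theorem: every original row of A already lies in the
    # span of the basis, so A and R have the same row space.
    for i in range(A.rows):
        stacked = Matrix.vstack(*basis, A.row(i))
        print(stacked.rank() == len(basis))   # True for every row
    ```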

    Orthogonal complement

    The orthogonal complement of a set of vectors may be spanned by a single vector or by several, but in either case it forms a subspace. Let us approach this with an example first and give the general definition later.

    Example

    Let the subspace $W$ consist of all vectors along the line:

    We can express $W$ as:

    We see that all vectors that are orthogonal to the line constitute a subspace of their own, namely the orthogonal complement of $W$, denoted $W^\perp$. So we have:

    We are ready for the general definition:

    If $S$ is a set of vectors in $\mathbb{R}^n$, then the orthogonal complement, denoted $S^\perp$, is defined as the set of all vectors in $\mathbb{R}^n$ that are orthogonal to each vector in $S$.

    We also have a couple of useful statements:

    • If $S$ is a set of vectors in $\mathbb{R}^n$, then $S^\perp$ is a subspace of $\mathbb{R}^n$.

    • If $W$ is a subspace of $\mathbb{R}^n$, then $(W^\perp)^\perp = W$

    • If $W$ is a subspace of $\mathbb{R}^n$, then $\dim(W) + \dim(W^\perp) = n$

    We also have the following statements to take advantage of:

    • If $A$ is an $m \times n$-matrix, then the row space of $A$ and the null space of $A$ are orthogonal complements in $\mathbb{R}^n$

    • If $A$ is an $m \times n$-matrix, then the column space of $A$ and the null space of $A^T$ are orthogonal complements in $\mathbb{R}^m$ (a numerical check of both statements is sketched below)
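
    The following SymPy sketch checks both statements for one hypothetical matrix (an illustration only): every row-space vector is orthogonal to every null-space vector of $A$, every column-space vector is orthogonal to every null-space vector of $A^T$, and the dimensions add up the way orthogonal complements require.

    ```python
    from sympy import Matrix

    # Hypothetical matrix used to check both statements numerically
    # (an illustration only; any matrix would do).
    A = Matrix([
        [1, 2, 1],
        [0, 1, 1],
        [1, 3, 2],
    ])

    # Every row-space vector is orthogonal to every null-space vector of A ...
    for r in A.rowspace():
        for n in A.nullspace():
            print(r.dot(n))   # 0 in every case

    # ... and every column-space vector of A is orthogonal to every
    # null-space vector of the transpose A^T.
    for c in A.columnspace():
        for n in A.T.nullspace():
            print(c.dot(n))   # 0 in every case

    # The dimensions also behave as orthogonal complements require:
    # dim(row(A)) + dim(null(A)) equals the number of columns.
    print(len(A.rowspace()) + len(A.nullspace()) == A.cols)   # True
    ```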

    Rank

    The rank of an $m \times n$-matrix $A$ is the number of columns containing leading ones after the matrix has been row-reduced to its reduced row echelon form. It is equal to the dimension of the column space, that is, of the image.
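
    As a final sketch with a made-up matrix (an assumption for illustration), SymPy's `rank()` agrees with both the number of pivot columns in the reduced row echelon form and the number of basis vectors of the column space.

    ```python
    from sympy import Matrix

    # Hypothetical matrix (for illustration only).
    A = Matrix([
        [1, 2, 3],
        [2, 4, 6],
        [1, 0, 1],
    ])

    R, pivots = A.rref()

    # The rank equals the number of columns with leading ones in the
    # reduced row echelon form ...
    print(A.rank(), len(pivots))   # 2 2

    # ... which is also the dimension of the column space (the image).
    print(len(A.columnspace()))    # 2
    ```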
