# Projection theorem

By projection one typically refers to the orthogonal projection of one vector onto another. The result is the representative contribution of the first vector along the vector it is projected onto. Imagine the sun at zenith, casting a shadow of the first vector straight down (orthogonally) onto the second vector. That shadow is then the orthogonal projection of the first vector onto the second.

## Intro

To help us navigate, it is of great use to have a miniature version of our surroundings that tells us where to go.

For practical reasons, we usually let two-dimensional illustrations represent the geography of the three-dimensional globe we live on. This is an example of a projection. Since we go from a higher dimension down to a lower one, some information is distorted. For example, the relative sizes of countries change depending on whether they lie close to or far from the equator.

It is this process, known as the Mercator projection, that makes Greenland appear to be equal in size to the whole of Africa, while in reality it is just about the size of Algeria alone.

## Concept

The projection of a vector $\vec{u}$ on another vector $\vec{v}$ finds the component of $\vec{u}$ that is aligned in the direction of $\vec{v}$.

Projections are not restricted only to vectors on vectors, but that is the simplest case, since the line spanned by a single vector is a one-dimensional subspace of $\mathbb{R}^n$.

The projection of a vector on a subspace extracts the part of the vector that is found in that subspace.

More generally, however, a vector $\vec{u}$ can be projected on a subspace $W$ of $\mathbb{R}^n$ of any dimension. After the projection, we are then left with the part of $\vec{u}$ that lies in the dimensions spanned by $W$, and the rest of $\vec{u}$ is ignored.

## Math

We denote the projection of $\vec{u}$ on another vector $\vec{v}$ as $\mathrm{proj}_{\vec{v}}(\vec{u})$. This projection is found using the formula:

$$\mathrm{proj}_{\vec{v}}(\vec{u}) = \frac{\vec{u} \cdot \vec{v}}{\vec{v} \cdot \vec{v}}\,\vec{v}$$

Alternatively, we can express the projection as a linear transformation using the standard matrix $P$:

$$\mathrm{proj}_{\vec{v}}(\vec{u}) = P\vec{u}$$

where we have:

$$P = \frac{\vec{v}\vec{v}^T}{\vec{v}^T\vec{v}}$$

Linear transformations can be used to project on a subspace $W$ as well, although in this case $P$ is given by the expression:

$$P = A(A^TA)^{-1}A^T$$

so that:

$$\mathrm{proj}_W(\vec{u}) = A(A^TA)^{-1}A^T\vec{u}$$

where the matrix $A$ is formed by arranging a set of basis vectors spanning $W$ as column vectors.
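As a quick numeric sketch of the vector-on-vector case, the following code (example vectors chosen arbitrarily, NumPy assumed available) computes the projection both by the formula and via the standard matrix, and confirms they agree:

```python
import numpy as np

u = np.array([3.0, 4.0])
v = np.array([2.0, 0.0])

# Formula form: proj_v(u) = (u . v / v . v) v
proj_formula = (u @ v) / (v @ v) * v

# Matrix form: P = v v^T / (v^T v), then proj_v(u) = P u
P = np.outer(v, v) / (v @ v)
proj_matrix = P @ u

print(proj_formula)  # [3. 0.]
print(proj_matrix)   # [3. 0.]
```

Both forms give the same result; the matrix form is convenient when the same projection is applied to many vectors.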

## Projection formula for a vector

### Introduction

We have already become familiar with projecting vectors on vectors, which is known as the projection formula. In this lecture note, we will not only cover the more general projection theorem, but also the interesting problem of projecting a vector on a subspace $W$ of $\mathbb{R}^n$, used in the Gram-Schmidt process. This is a generalization of projection on a single vector, since a single vector spans a line in $\mathbb{R}^n$, which is in fact a subspace of $\mathbb{R}^n$.

### Projection formula for a vector

So we know the projection formula for a vector $\vec{u}$ on a line that is spanned by the vector $\vec{v}$ as:

$$\mathrm{proj}_{\vec{v}}(\vec{u}) = \frac{\vec{v}^T\vec{u}}{\vec{v}^T\vec{v}}\,\vec{v}$$

where the numerator $\vec{v}^T\vec{u}$ is equivalent to the dot product $\vec{u} \cdot \vec{v}$ used when the formula was originally introduced. A vector always has dimensions $n \times 1$, while its transpose has its dimensions reversed, namely $1 \times n$.

The projection formula can be reduced if the vector $\vec{v}$ is already normalized, because the denominator becomes 1 since $\vec{v}^T\vec{v} = \|\vec{v}\|^2 = 1$:

$$\mathrm{proj}_{\vec{v}}(\vec{u}) = (\vec{v}^T\vec{u})\,\vec{v}$$
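The simplification can be illustrated numerically; in this sketch (arbitrary example vectors) we normalize $\vec{v}$ first, after which the denominator is no longer needed:

```python
import numpy as np

u = np.array([3.0, 4.0])
v = np.array([2.0, 0.0])
v_hat = v / np.linalg.norm(v)  # unit vector: v_hat^T v_hat == 1

# With a normalized vector the denominator drops out:
proj = (v_hat @ u) * v_hat
print(proj)  # [3. 0.]
```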

Making the connection between projections and linear transformations raises the question: what is the standard matrix $P$ for projecting a vector $\vec{u}$ on a line spanned by $\vec{v}$? The derivation can be generalized to a non-normalized vector $\vec{v}$, but we will proceed with the special case of $\vec{v}$ being normalized and then state the general case as a theorem. Remember to keep the dimension requirements for multiplication of vectors and matrices intact! We have that:

$$\mathrm{proj}_{\vec{v}}(\vec{u}) = (\vec{v}^T\vec{u})\,\vec{v} = \vec{v}\,(\vec{v}^T\vec{u}) = (\vec{v}\vec{v}^T)\,\vec{u}$$

where $\vec{v}\vec{v}^T$ results in an $n \times n$ matrix, which is our standard matrix of the linear transformation to project on $\vec{v}$ with norm equal to 1. We have the general case as the following theorem:

Let $\vec{v}$ be a nonzero vector in $\mathbb{R}^n$ expressed in column form, meaning it has the dimensions $n \times 1$. We then have the standard matrix $P$ for the linear transformation of projecting a vector $\vec{u}$ on $\vec{v}$ as:

$$P = \frac{\vec{v}\vec{v}^T}{\vec{v}^T\vec{v}}$$

so that:

$$\mathrm{proj}_{\vec{v}}(\vec{u}) = P\vec{u}$$

The matrix $P$ is symmetric and has rank 1.
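The stated properties of $P$ are easy to verify numerically; this sketch (with an arbitrary example vector) checks symmetry and rank 1, and also that projecting twice changes nothing:

```python
import numpy as np

v = np.array([1.0, 2.0, 2.0])
P = np.outer(v, v) / (v @ v)  # P = v v^T / (v^T v)

print(np.allclose(P, P.T))       # True: P is symmetric
print(np.linalg.matrix_rank(P))  # 1:    P has rank 1
print(np.allclose(P @ P, P))     # True: P is also idempotent
```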

The column vectors of a standard matrix for a linear transformation $T$ are the images of the standard basis vectors under $T$:

$$P = \begin{bmatrix} T(\vec{e}_1) & T(\vec{e}_2) & \cdots & T(\vec{e}_n) \end{bmatrix}$$

Supported by the projection formula, and the fact that the standard basis vector $\vec{e}_k$ only has the $k$:th component as non-zero (which is 1), we have that the $k$:th column for our standard matrix will be:

$$T(\vec{e}_k) = (\vec{v}^T\vec{e}_k)\,\vec{v} = v_k\,\vec{v}$$

Applying this useful information we have the derivation for $P$:

$$P = \begin{bmatrix} v_1\vec{v} & v_2\vec{v} & \cdots & v_n\vec{v} \end{bmatrix} = \vec{v}\vec{v}^T$$
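The column-by-column construction above can be reproduced directly; in this sketch (arbitrary example vector, normalized so the simple formula applies) we build $P$ one column at a time and compare it with $\vec{v}\vec{v}^T$:

```python
import numpy as np

v = np.array([1.0, 2.0, 2.0])
v_hat = v / np.linalg.norm(v)
n = len(v_hat)

# k-th column of P is T(e_k) = (v_hat^T e_k) v_hat = v_hat[k] * v_hat
cols = [v_hat[k] * v_hat for k in range(n)]
P_columns = np.column_stack(cols)

# Identical to the outer-product form v_hat v_hat^T:
print(np.allclose(P_columns, np.outer(v_hat, v_hat)))  # True
```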

## Projection formula for a subspace

After a rigorous summary and derivation of the projection formula on a vector, we now make things short for the projection formula for a subspace. The problem we face is how to project the vector $\vec{u}$ on a subspace $W$ of $\mathbb{R}^n$. We are interested in:

$$\mathrm{proj}_W(\vec{u})$$

This is the general case of the projection of $\vec{u}$ on a vector $\vec{v}$, which is the special case of projection on a subspace, when the subspace is a line spanned by the vector $\vec{v}$. There are two ways to solve this problem: one is to determine a basis for $W$, and the other is to determine an orthonormal basis for $W$. The latter is only a special case of the former, but it reduces the problem considerably. We begin by introducing the theorem for the general case, where we have no further requirements on the basis for $W$.

Let $W$ be a subspace of $\mathbb{R}^n$ spanned by the basis

$$\{\vec{w}_1, \vec{w}_2, \ldots, \vec{w}_k\}$$

We then have the standard matrix $P$ for the linear transformation of projecting a vector $\vec{u}$ on the subspace $W$ defined as:

$$P = A(A^TA)^{-1}A^T$$

where $A$ is the matrix with columns consisting of the basis vectors of $W$:

$$A = \begin{bmatrix} \vec{w}_1 & \vec{w}_2 & \cdots & \vec{w}_k \end{bmatrix}$$

Hence, we have that:

$$\mathrm{proj}_W(\vec{u}) = A(A^TA)^{-1}A^T\vec{u}$$
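A numeric sketch of the general case (an arbitrary, non-orthonormal basis for a plane in $\mathbb{R}^3$) shows the defining property of the projection: what is left over after projecting is orthogonal to every basis vector of $W$:

```python
import numpy as np

# Arbitrary (non-orthonormal) basis for a plane W in R^3:
w1 = np.array([1.0, 0.0, 1.0])
w2 = np.array([1.0, 1.0, 0.0])
A = np.column_stack([w1, w2])

# Standard matrix P = A (A^T A)^{-1} A^T
P = A @ np.linalg.inv(A.T @ A) @ A.T

u = np.array([1.0, 2.0, 3.0])
proj = P @ u
residual = u - proj

# The residual is orthogonal to every basis vector of W:
print(np.allclose(A.T @ residual, 0))  # True
```

For larger problems one would solve the normal equations with `np.linalg.solve` (or use a QR factorization) instead of forming the explicit inverse, but the inverse mirrors the theorem's formula directly.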

For the special case when $\{\vec{w}_1, \ldots, \vec{w}_k\}$ forms an orthonormal basis, we have that $A$ is an orthonormal matrix (simply called orthogonal in most literature) and that:

$$A^TA = I$$

This result reduces the standard matrix in the above theorem to:

$$P = AA^T$$

If we take a closer look at $AA^T\vec{u}$, we can expand the expression into something that gives an intuitive meaning:

$$AA^T\vec{u} = \vec{w}_1\vec{w}_1^T\vec{u} + \vec{w}_2\vec{w}_2^T\vec{u} + \cdots + \vec{w}_k\vec{w}_k^T\vec{u}$$

From the last line we recognize the standard matrix $\vec{w}_i\vec{w}_i^T$ for projecting on a normalized vector $\vec{w}_i$, which leads to the intuitive rewriting of the projection on the subspace as:

$$\mathrm{proj}_W(\vec{u}) = \mathrm{proj}_{\vec{w}_1}(\vec{u}) + \mathrm{proj}_{\vec{w}_2}(\vec{u}) + \cdots + \mathrm{proj}_{\vec{w}_k}(\vec{u})$$

So if $\{\vec{w}_1, \ldots, \vec{w}_k\}$ is an orthonormal basis for the subspace $W$, we can project a vector $\vec{u}$ on $W$ either by producing the standard matrix $P = AA^T$ with the column vectors of the basis, or by producing the linear combination of projections on each basis vector $\vec{w}_i$.
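The two routes can be compared numerically; this sketch (an arbitrary orthonormal basis for a plane in $\mathbb{R}^3$) computes the projection via $AA^T$ and via the sum of per-vector projections:

```python
import numpy as np

# An orthonormal basis for a plane in R^3:
w1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
w2 = np.array([0.0, 0.0, 1.0])
A = np.column_stack([w1, w2])

u = np.array([1.0, 2.0, 3.0])

# Route 1: standard matrix P = A A^T
via_matrix = A @ A.T @ u

# Route 2: sum of projections on each basis vector
via_sum = (w1 @ u) * w1 + (w2 @ u) * w2

print(np.allclose(via_matrix, via_sum))  # True
```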

## Projection theorem for subspaces

Before we dive into the projection theorem we start with a warm-up. Let's say that we have a vector $\vec{u}$ and a line $L$ spanned by the vector $\vec{v}$. This means that we can uniquely express $\vec{u}$ by the projection vector $\mathrm{proj}_{\vec{v}}(\vec{u})$ on $L$ and its orthogonal component $\vec{u}_\perp$, which is a vector orthogonal to the line $L$. We have that:

$$\vec{u} = \mathrm{proj}_{\vec{v}}(\vec{u}) + \vec{u}_\perp$$

This example is intuitive, but what happens when we, instead of working with a line in $\mathbb{R}^n$, consider a subspace $W$ in $\mathbb{R}^n$ with dimension $k$? It so happens that the relationship for this special case holds for higher dimensions. We have the following theorem, known as the projection theorem for subspaces:

The projection theorem for subspaces

Let $W$ be a subspace of $\mathbb{R}^n$. Then we have that every vector $\vec{u} \in \mathbb{R}^n$ can be uniquely expressed as:

$$\vec{u} = \vec{w} + \vec{w}_\perp$$

where $\vec{w} \in W$ and $\vec{w}_\perp \in W^\perp$, called the orthogonal complement to $W$.

We recognize that we can use the projection formula for a subspace to calculate $\vec{w} = \mathrm{proj}_W(\vec{u})$. The vector $\vec{w}_\perp$ then follows by simply subtracting $\vec{w}$ from $\vec{u}$, as the theorem suggests:

$$\vec{w}_\perp = \vec{u} - \vec{w}$$
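The whole decomposition can be carried out in a few lines; this sketch (an arbitrary subspace and vector in $\mathbb{R}^3$) computes $\vec{w}$ with the subspace projection formula and checks both parts of the theorem:

```python
import numpy as np

# Arbitrary basis for a subspace W of R^3:
w1 = np.array([1.0, 0.0, 1.0])
w2 = np.array([0.0, 1.0, 1.0])
A = np.column_stack([w1, w2])

u = np.array([2.0, 3.0, 4.0])

# w = proj_W(u) using P = A (A^T A)^{-1} A^T
w = A @ np.linalg.inv(A.T @ A) @ A.T @ u
w_perp = u - w  # the component in the orthogonal complement W^perp

print(np.allclose(w + w_perp, u))    # True: u = w + w_perp
print(np.allclose(A.T @ w_perp, 0))  # True: w_perp is orthogonal to W
```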