Carl Gustav Jacob Jacobi was a prominent German mathematician active in the early 1800s. He made fundamental contributions to many fields of mathematics and was one of the most inspiring teachers of his time.
Jacobi was a brilliant student. At only twelve years of age, he had fulfilled all the requirements to enter university. The only thing stopping him was the minimum age of entry, which was 16.
The Jacobian is simply a matrix containing all the partial derivatives of a vector-valued function, or equivalently, a matrix with all the gradients of the component functions as rows.
It's used whenever we change from one coordinate system to another, and in many real-world applications. For example, if you want to fit a curve to data, the Gauss-Newton method for least-squares fitting makes heavy use of the Jacobian. So there's much more to it than just a table of partial derivatives.
Calculating the Jacobian matrix is not a hard problem. As an illustration (the particular function is an arbitrary choice), say we want to find the Jacobian matrix of

$$\mathbf{f}(x, y) = (x^2 y,\; x + \sin y).$$

To find this Jacobian matrix we simply calculate the following partial derivatives:

$$\frac{\partial f_1}{\partial x} = 2xy, \quad \frac{\partial f_1}{\partial y} = x^2, \quad \frac{\partial f_2}{\partial x} = 1, \quad \frac{\partial f_2}{\partial y} = \cos y.$$

Next we put these derivatives in a matrix:

$$J = \begin{pmatrix} 2xy & x^2 \\ 1 & \cos y \end{pmatrix}.$$

By the way, since the gradient $\nabla f_i$ is defined as the row of partial derivatives of $f_i$, we can also write this using gradients. Then we find that

$$J = \begin{pmatrix} \nabla f_1 \\ \nabla f_2 \end{pmatrix}.$$
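As a quick sanity check, here is a small Python sketch that approximates a Jacobian with central finite differences and compares it to hand-computed partial derivatives. The function $\mathbf{f}(x, y) = (x^2 y,\; x + \sin y)$ is an illustrative choice, and `jacobian_fd` is a helper written here, not a library function:

```python
import math

def f(x, y):
    # Illustrative vector-valued function f(x, y) = (x^2 * y, x + sin(y))
    return (x**2 * y, x + math.sin(y))

def jacobian_fd(f, x, y, h=1e-6):
    """Approximate the 2x2 Jacobian of f at (x, y) with central differences."""
    fxp, fxm = f(x + h, y), f(x - h, y)
    fyp, fym = f(x, y + h), f(x, y - h)
    return [[(fxp[i] - fxm[i]) / (2 * h),   # d f_i / dx
             (fyp[i] - fym[i]) / (2 * h)]   # d f_i / dy
            for i in range(2)]

x, y = 1.0, 2.0
J_num = jacobian_fd(f, x, y)

# Hand-computed partial derivatives evaluated at the same point
J_exact = [[2 * x * y, x**2],
           [1.0, math.cos(y)]]
```

The two matrices should agree up to an error roughly the size of the squared step.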
The Jacobian matrix
Starting with an example
The Jacobian matrix is a matrix with all the partial derivatives of a vector-valued function of several variables. In this note, we'll build up an understanding of why we need it and what it is.
We'll begin with an example. Consider the transformation from Cartesian to polar coordinates in $\mathbb{R}^2$:

$$x = r\cos\theta, \qquad y = r\sin\theta.$$
What happens to $x$ and $y$, if we change $r$ and $\theta$ a little?
A small change should make you think: derivative! And that is exactly the tool we'll use.
As both $x$ and $y$ depend on both $r$ and $\theta$, we get four partial derivatives. Just like the $x$- and $y$-directions are perpendicular, so are the unit vectors $\hat{r}$ and $\hat{\theta}$. So we'll divide the change into two parts: first, how $x$ and $y$ change as we make a small change in $r$, then, as we change $\theta$.
The total change in $x$ is then

$$dx = \frac{\partial x}{\partial r}\,dr + \frac{\partial x}{\partial \theta}\,d\theta,$$

and analogously for $y$.
This graph shows how $x$ and $y$ are affected by a small increment $dr$ in $r$:
Look at the little triangle with $dr$ as the hypotenuse. From trigonometry, we know that the extra part we need to add to $x$ is $dr\cos\theta$. That is:

$$dx = \cos\theta\,dr.$$

And for $y$, we need to add $dr\sin\theta$, so

$$dy = \sin\theta\,dr.$$
That gives the partial derivatives with respect to $r$:

$$\frac{\partial x}{\partial r} = \cos\theta, \qquad \frac{\partial y}{\partial r} = \sin\theta.$$
Now what happens as we increment $\theta$ by the small step $d\theta$? Recall the circumference of a circle: $2\pi r$. If we want only a part of the circumference, we replace $2\pi$ by the angle in the formula.
So the arc length of the small circular arc created by increasing $\theta$ by $d\theta$ is $r\,d\theta$.
If $d\theta$ is very small, the arc is approximately a straight line, as in this graph:
With the arc $r\,d\theta$ as a straight line, we get another tiny triangle. This triangle is similar to the triangle with $dr$ as hypotenuse and angle $\theta$. Thus, using trigonometric rules, we get:

$$dx = -r\sin\theta\,d\theta, \qquad dy = r\cos\theta\,d\theta.$$

This gives the partial derivatives with respect to $\theta$:

$$\frac{\partial x}{\partial \theta} = -r\sin\theta, \qquad \frac{\partial y}{\partial \theta} = r\cos\theta.$$
Note that the minus sign comes from the fact that $x$ decreases as we increase $\theta$.
Summing it all up, the total change in $x$ and $y$ involves all four partial derivatives, which can be visualised in matrix form like this:

$$\begin{pmatrix} dx \\ dy \end{pmatrix} = \begin{pmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{pmatrix} \begin{pmatrix} dr \\ d\theta \end{pmatrix}.$$
That is the Jacobian matrix for the coordinate transformation from Cartesian to polar coordinates.
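This matrix can be sanity-checked numerically. A minimal Python sketch (the sample point is an arbitrary choice) compares central finite differences of $x = r\cos\theta$, $y = r\sin\theta$ with the entries derived above:

```python
import math

def to_xy(r, theta):
    # The polar-to-Cartesian transformation x = r cos(theta), y = r sin(theta)
    return (r * math.cos(theta), r * math.sin(theta))

r, theta, h = 2.0, 0.7, 1e-6

# Central finite differences for the four partial derivatives
dx_dr = (to_xy(r + h, theta)[0] - to_xy(r - h, theta)[0]) / (2 * h)
dy_dr = (to_xy(r + h, theta)[1] - to_xy(r - h, theta)[1]) / (2 * h)
dx_dt = (to_xy(r, theta + h)[0] - to_xy(r, theta - h)[0]) / (2 * h)
dy_dt = (to_xy(r, theta + h)[1] - to_xy(r, theta - h)[1]) / (2 * h)

# Entries of the Jacobian matrix derived in the text
expected = [[math.cos(theta), -r * math.sin(theta)],
            [math.sin(theta),  r * math.cos(theta)]]
```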
From derivatives to Jacobian matrices
We have seen that for multivariable functions, the gradient vector

$$\nabla f = \left( \frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n} \right)$$

plays the same role as the derivative does for single-variable functions.
What if we have a vector-valued function $\mathbf{f} : \mathbb{R}^n \to \mathbb{R}^m$? Writing it out, it looks like this:

$$\mathbf{f}(\mathbf{x}) = \begin{pmatrix} f_1(x_1, \ldots, x_n) \\ \vdots \\ f_m(x_1, \ldots, x_n) \end{pmatrix}.$$
As each component of the vector function is a function $f_i : \mathbb{R}^n \to \mathbb{R}$, each will have $n$ partial derivatives:

$$\frac{\partial f_i}{\partial x_1}, \ldots, \frac{\partial f_i}{\partial x_n}.$$
Putting together all the partial derivatives for all the $f_i$'s, we get the Jacobian matrix:

$$J = \begin{pmatrix} \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial x_1} & \cdots & \frac{\partial f_m}{\partial x_n} \end{pmatrix}.$$
Like the derivative and the gradient, the Jacobian matrix will be crucial for investigating functions locally.
Notice that we can see the coordinate transformation in the example above as a vector-valued function: it takes in a vector, the two coordinates $r$ and $\theta$, and outputs another vector, consisting of $x$ and $y$.
Common ways of denoting the Jacobian matrix are:

$$J, \qquad J_{\mathbf{f}}, \qquad D\mathbf{f}, \qquad \frac{\partial(f_1, \ldots, f_m)}{\partial(x_1, \ldots, x_n)}.$$
Approximation with Jacobians
Recall that we can use differentials to approximate the change of a function at a point. For a function $f : \mathbb{R}^n \to \mathbb{R}$, this is the difference in function value, as we take a small step $\mathbf{h} = (h_1, \ldots, h_n)$:

$$\Delta f = f(\mathbf{x} + \mathbf{h}) - f(\mathbf{x}).$$

The differential approximates this difference, and it looks like this:

$$df = \frac{\partial f}{\partial x_1} h_1 + \cdots + \frac{\partial f}{\partial x_n} h_n.$$

We can also write the sum in this form, which is the exact same thing:

$$df = \nabla f \cdot \mathbf{h}.$$
Now, each component $f_i$ of a vector-valued function $\mathbf{f}$ is a function of $n$ variables, just like $f$ above. So the vector difference $\mathbf{f}(\mathbf{x} + \mathbf{h}) - \mathbf{f}(\mathbf{x})$ can be approximated by:

$$d\mathbf{f} = J\,\mathbf{h}.$$
This is the differential for a vector-valued function. It is a vector, where each component looks like the differential for a scalar-valued function of $n$ variables.
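To see this approximation in numbers, here is a Python sketch comparing the true difference $\mathbf{f}(\mathbf{x} + \mathbf{h}) - \mathbf{f}(\mathbf{x})$ with the product $J\mathbf{h}$, for the polar-to-Cartesian map (point and step are arbitrary choices):

```python
import math

def f(r, theta):
    # Polar-to-Cartesian map (r, theta) -> (x, y)
    return (r * math.cos(theta), r * math.sin(theta))

def J(r, theta):
    # Jacobian matrix of the map above
    return [[math.cos(theta), -r * math.sin(theta)],
            [math.sin(theta),  r * math.cos(theta)]]

r, theta = 1.5, 0.4
hr, ht = 1e-3, 2e-3  # the small step h = (hr, ht)

# Exact change in the function value
exact = [f(r + hr, theta + ht)[i] - f(r, theta)[i] for i in range(2)]

# Differential approximation J h
Jm = J(r, theta)
approx = [Jm[i][0] * hr + Jm[i][1] * ht for i in range(2)]
```

The error shrinks quadratically as the step shrinks, which is what makes the differential useful.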
The determinant is a scalar associated with a square matrix $A$. We denote it as:

$$\det(A) \quad \text{or} \quad |A|.$$
The concept is more thoroughly introduced in a course in linear algebra, and if you are comfortable with it, feel free to skip this lecture note.
If you have not seen it before, or feel like you need a refresher, here is a short recap.
A geometric interpretation
The determinant is interpreted geometrically as the scale factor for a linear transformation.
In short, multiplication by a matrix is a linear transformation; from a practical perspective, a linear transformation can be thought of as a matrix that multiplies a vector to obtain a desired result.
A simple example would be a linear transformation $A$ that rotates a vector clockwise by some angle and doubles its length. Then the scale factor, that is, the determinant, of $A$ would be $2 \cdot 2 = 4$: lengths are doubled in both directions, so areas grow by a factor of four, while the rotation itself does not change any areas.
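This example can be sketched in code: a clockwise rotation (angle chosen arbitrarily) combined with a doubling of lengths gives a matrix whose determinant, the area scale factor, is $4$:

```python
import math

phi = 0.6  # arbitrary clockwise rotation angle

# Rotate clockwise by phi and scale all lengths by 2
A = [[ 2 * math.cos(phi), 2 * math.sin(phi)],
     [-2 * math.sin(phi), 2 * math.cos(phi)]]

det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]
# det_A = 4*cos^2(phi) + 4*sin^2(phi) = 4, regardless of the angle
```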
The definition of the determinant of a $2 \times 2$ matrix forms the basis for calculating the determinant of an $n \times n$ matrix. Let

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},$$

whereupon the definition of the determinant is:

$$\det(A) = ad - bc.$$
The algorithm for calculating the determinant of a $3 \times 3$ matrix uses the sum of three $2 \times 2$ determinants. We produce these by expanding along a single row, or column, of the determinant (called cofactor expansion).
Let

$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix},$$

and then the determinant of $A$ is:

$$\det(A) = a_{11} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12} \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13} \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix},$$

where we have made a cofactor expansion along the first row: the scalar in front of each $2 \times 2$ determinant is just the corresponding element from the first row.
Now we go through how the expansion is done.
Consider the determinant of $A$:

$$\det(A) = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}.$$
We start by expanding along the first row, beginning with the first element $a_{11}$. The expansion is done by crossing out the row and column of the current element, leaving the remaining elements as a $2 \times 2$ determinant multiplied by $a_{11}$:

$$a_{11} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}.$$
We move on to the next element along the first row, $a_{12}$, and get:

$$-a_{12} \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix}.$$

Note that the expansion around $a_{12}$ comes with a minus sign! We'll return to that shortly.
Now we continue with the next, and last, element to expand, $a_{13}$:

$$+a_{13} \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}.$$

Note that the element $a_{13}$ comes with a plus sign! In general, the term for the element in row $i$, column $j$ carries the sign $(-1)^{i+j}$, which gives the alternating, checkerboard-like pattern of signs.
We now finish the calculation using the definition of the $2 \times 2$ determinant:

$$\det(A) = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31}).$$

This concludes the formula for the $3 \times 3$ determinant, as well as an algorithm that makes it easy to reconstruct instead of learning the formula by heart.
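The whole algorithm fits in a few lines of Python. This is a sketch of cofactor expansion along the first row, checked against a hand-expanded example (the matrix is an arbitrary choice):

```python
def det2(m):
    # Definition of the 2x2 determinant: ad - bc
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def det3(m):
    # Cofactor expansion along the first row, with signs +, -, +
    total = 0
    for j in range(3):
        # Cross out row 0 and column j to form the 2x2 minor
        minor = [[m[i][k] for k in range(3) if k != j] for i in (1, 2)]
        total += (-1) ** j * m[0][j] * det2(minor)
    return total

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
# By hand: 1*(5*10 - 6*8) - 2*(4*10 - 6*7) + 3*(4*8 - 5*7) = 2 + 4 - 9 = -3
```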
An alternative formula
The method above can easily be extended analogously to larger matrices, which is why we started with it.
However, there is an alternative algorithm for the $3 \times 3$ determinant which visually resembles the definition of the $2 \times 2$ determinant: products along down-right diagonals are added, and products along up-right diagonals are subtracted.
If we extend this mindset, by copying the first two columns to the right of the determinant and following all six diagonals, we get a method that works, but only for calculating $3 \times 3$ determinants. The method is called Sarrus' rule.
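Sarrus' rule can be written out directly in code. This sketch checks it on the same kind of example (the matrix is an arbitrary choice):

```python
def sarrus(m):
    # Three down-right diagonal products minus three up-right diagonal products
    return (m[0][0] * m[1][1] * m[2][2]
          + m[0][1] * m[1][2] * m[2][0]
          + m[0][2] * m[1][0] * m[2][1]
          - m[0][2] * m[1][1] * m[2][0]
          - m[0][0] * m[1][2] * m[2][1]
          - m[0][1] * m[1][0] * m[2][2])

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
# Cofactor expansion of this matrix by hand gives -3
```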
Remember the determinant? The determinant was this thingy which, for a big matrix, took ages to compute. The determinant is the scaling factor for the linear transformation.
The determinant indicates whether a blob shrinks or grows during a linear transformation.
In multivariable calc, we're often dealing with changes of variables that are non-linear transformations. But near a given point, you can approximate the function well with a linear transformation, and at the point itself, the function equals its linear approximation. The matrix of that linear transformation turns out to be the Jacobian.
The Jacobian determinant is, as you might have guessed, the determinant of the Jacobian matrix. It is the scaling factor we need to relate the coordinate systems in a change of variables. It makes sure that the computations we do correspond to each other, no matter which system we are in.
Now let's see the Jacobian determinant in action.
The Jacobian matrix of the following transformation (the particular functions are an illustrative choice):

$$u = x^2 - y^2, \qquad v = 2xy,$$

is given by

$$J = \begin{pmatrix} \frac{\partial u}{\partial x} & \frac{\partial u}{\partial y} \\ \frac{\partial v}{\partial x} & \frac{\partial v}{\partial y} \end{pmatrix} = \begin{pmatrix} 2x & -2y \\ 2y & 2x \end{pmatrix}.$$

The Jacobian determinant is given by

$$\det(J) = 2x \cdot 2x - (-2y) \cdot 2y = 4(x^2 + y^2).$$
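A numeric cross-check of such a determinant can be sketched with central finite differences; here we use the illustrative transformation $u = x^2 - y^2$, $v = 2xy$, whose Jacobian determinant is $4(x^2 + y^2)$ (the helper and the sample point are choices made here):

```python
def T(x, y):
    # Illustrative transformation u = x^2 - y^2, v = 2xy
    return (x**2 - y**2, 2 * x * y)

def jac_det_fd(T, x, y, h=1e-6):
    """Jacobian determinant of T at (x, y) via central finite differences."""
    du_dx = (T(x + h, y)[0] - T(x - h, y)[0]) / (2 * h)
    du_dy = (T(x, y + h)[0] - T(x, y - h)[0]) / (2 * h)
    dv_dx = (T(x + h, y)[1] - T(x - h, y)[1]) / (2 * h)
    dv_dy = (T(x, y + h)[1] - T(x, y - h)[1]) / (2 * h)
    return du_dx * dv_dy - du_dy * dv_dx

x, y = 1.0, 2.0
# Analytically, det J = 4*(x^2 + y^2) = 20 at this point
```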
Change of variables
When introduced to the Jacobian matrix, we saw that it can be used as a tool for converting between variables in different coordinate systems. In that case between Cartesian coordinates $(x, y)$ and polar ones $(r, \theta)$.
Later, we studied the determinant of the Jacobian matrix, or the Jacobian determinant for short. There, we noted that it gives a measure of the scaling involved in the linear transformation that a change of variables locally corresponds to.
In this lecture note, we will put two and two together and see how the Jacobian determinant helps us relate the change in area for small steps in one coordinate system, to the change in area in another coordinate system.
Let's return to the change of variables from Cartesian to polar coordinates, described by this Jacobian matrix:

$$J = \begin{pmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{pmatrix}.$$

Recalling how we calculate determinants of $2 \times 2$ matrices, we compute the Jacobian determinant as:

$$\det(J) = \cos\theta \cdot r\cos\theta - (-r\sin\theta) \cdot \sin\theta = r(\cos^2\theta + \sin^2\theta) = r.$$
Let $dA$ denote the area obtained by considering the infinitesimal distances $dx$ and $dy$ as the base and height of a rectangle in the $xy$-plane. Clearly:

$$dA = dx\,dy.$$
Further, let $dA'$ represent a similar area, resulting from infinitesimal changes $dr$ and $d\theta$ in the variables of the polar coordinate system. It is not as clear what the formula for the area is in this case.
We will not provide the proof for it here, but it turns out that the curvature of the arc of length $r\,d\theta$ is negligible for such short distances. We can therefore take it to be a straight line, making the shape a rectangle with sides $dr$ and $r\,d\theta$, and we have:

$$dA' = r\,dr\,d\theta.$$
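We can make this claim concrete with a short computation. The exact area of the small "polar rectangle" is the difference between two circular sectors, and the sketch below (with arbitrary small steps) shows it is very close to $r\,dr\,d\theta$:

```python
r, dr, dtheta = 2.0, 1e-3, 1e-3  # arbitrary small steps

# Exact area between radii r and r + dr over the angle dtheta:
# the fraction dtheta / (2*pi) of the annulus area pi*((r + dr)**2 - r**2)
exact = 0.5 * ((r + dr)**2 - r**2) * dtheta

# The rectangle approximation dA' = r dr dtheta
approx = r * dr * dtheta

rel_error = abs(exact - approx) / exact
# The relative error is about dr / (2r), so it vanishes as the steps shrink
```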
Now what this means is that for infinitesimal steps, equal in size, the area change in polar coordinates is $r$ times as large as in Cartesian coordinates. This is essentially what is meant by a scaling factor. We thus have an intuitive idea of why the Jacobian determinant related to this variable change turned out to be precisely $r$.
The relative sizes of the change in area due to changes in variables of different coordinate systems are crucial when evaluating integrals of functions in multiple dimensions.
Even in the simplest cases of double integrals, which you might not have encountered yet but surely will soon enough, the change of variables is a useful technique.
In such integrals, the integrand includes a differential area $dA = dx\,dy$, and it is important to make the correct substitution

$$dx\,dy = r\,dr\,d\theta$$

to get the answer right.
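To close, here is a sketch of the substitution in action: a midpoint Riemann sum computing the area of the unit disk in polar coordinates. The factor `r` in the sum is exactly the Jacobian determinant supplied by $dx\,dy = r\,dr\,d\theta$ (the grid sizes are arbitrary):

```python
import math

n_r, n_t = 400, 400          # grid resolution in r and theta
dr = 1.0 / n_r
dt = 2 * math.pi / n_t

area = 0.0
for i in range(n_r):
    r = (i + 0.5) * dr       # midpoint of the i-th radial strip
    for j in range(n_t):
        area += 1 * r * dr * dt  # integrand 1, times dx dy = r dr dtheta

# area should come out close to pi, the area of the unit disk
```

Dropping the factor `r` would give $1$ instead of $\pi$ here, which is exactly the kind of error the Jacobian determinant guards against.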