Multivariable calculus is usually portrayed as a huge jump in difficulty from the original-flavour single variable calculus. Admittedly, the notation is a little trickier, but there is little in multivariable calculus that is inherently more difficult. It does open up new avenues for high-level study, of course, but coming to terms with the basics should be well within reach of anyone who has grasped high-school calculus.
When derivatives are first encountered, they are usually presented as the gradient of a tangent line to a curve. In these terms, multivariable calculus measures the rate of change of a surface in three dimensions or more. The power of this is clear: we can find local maxima and minima of functions that would be difficult to pick out just by drawing a graph and following trends.
Now in single-variable calculus, we use dy/dx to mean the rate of change of y as we change x; the gradient of the curve y = f(x). We have to generalise this notation for multi-variable calculus, using the symbol:
∂y
----
∂x
This means that we are dealing with the rate of change of y as x changes, with all other variables kept constant. Basically, we treat all the other variables of the equation as if they really were constants. Just as mathematicians like, this reduces multivariable calculus, in this aspect at least, to "a previously solved problem" - it looks just like single-variable calculus! In terms of our graphical analogy, we are just moving through a single plane slice of the graph's many dimensions.
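To make this concrete, here's a quick sketch using Python's sympy library (my choice of tool, not something from the discussion above; the example function is made up purely for illustration):

import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * y + 3*y      # a made-up example function

# Differentiating w.r.t. x treats y exactly like a constant...
print(sp.diff(f, x))    # 2*x*y

# ...and differentiating w.r.t. y treats x as the constant.
print(sp.diff(f, y))    # x**2 + 3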
Of course, this is not the whole story. We have to develop some new techniques before we can do anything useful with our new tools. Let's have a look at an equation for a 3D graph and explore the simultaneous equations method. Because we're dealing with a graph, we'll use the axis-affiliated variables x, y, z.
z = 2x² + 3y⁻³ + 2x²y
Chances are, you can't really visualise this graph from its equation. There are a number of good programs available for drawing them, but for now we're only concerned with the mathematical implications.
First of all, we'll differentiate with respect to x. As mentioned in the above write-up, this will give us a directional derivative. Now we're taking a slice through the graph parallel to the x-axis, but no one particular slice. We'll expect y to remain a variable in the directional derivative, because the gradient won't be the same for every value of y at a given value of x. Nevertheless, we differentiate as if y were a constant, to give:
∂z
---- = 4x + 4xy
∂x
Now we'll differentiate with respect to y, to give us two equations measuring the rate of change of z; one as we alter x, with y being constant but unspecified, and the other with the variables' roles reversed.
∂z     -9
---- = ---- + 2x²
∂y      y⁴
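If you'd like a machine to check these partial derivatives, a sympy sketch along the same lines as before agrees (sympy just writes the terms in a different order):

import sympy as sp

x, y = sp.symbols('x y')
z = 2*x**2 + 3*y**-3 + 2*x**2*y

print(sp.diff(z, x))    # 4*x*y + 4*x, i.e. 4x + 4xy
print(sp.diff(z, y))    # 2*x**2 - 9/y**4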
Now to find the stationary points of this graph, we have to find the point or points where the gradient of z is 0 with respect to both x and y. This is where simultaneous equations enter the arena. First, we'll find all the solutions of ∂z/∂x = 0:
4x + 4xy = 0
4x( 1 + y ) = 0
Now clearly x = 0 or y = -1. We aren't expecting a definitive solution at this moment; in fact, we would usually expect to find several points which are local minima or maxima. Let's see if the other equation can help us narrow it down:
∂z     -9
---- = ---- + 2x² = 0
∂y      y⁴

2x²y⁴ = 9
x²y⁴ = 9 / 2
xy² = ±√( 9 / 2 ) = ±3 / √2
Now if x were 0, as our first equation suggested, there is no value of y for which xy² could be ±3 / √2. Therefore, we'll take the other option: y = -1. In this case, x must be ±3 / √2, by simple substitution into our second equation.
In this case, we were "lucky". In fact, I set up the example to provide discrete solutions for those of you who are, like me, allergic to simultaneous equations. Frequently, you will find lines of potential stationary points from both equations and will need to find their intersections. This is straightforward when only straight lines are involved, but higher-degree polynomials are a whole other strand of mathematics.
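For fellow allergy sufferers, a computer algebra system will happily handle the simultaneous part. A sympy sketch (restricting to real variables; note that sympy writes 3/√2 in the equivalent form 3*sqrt(2)/2):

import sympy as sp

x, y = sp.symbols('x y', real=True)
z = 2*x**2 + 3*y**-3 + 2*x**2*y

# Solve ∂z/∂x = 0 and ∂z/∂y = 0 simultaneously.
points = sp.solve([sp.diff(z, x), sp.diff(z, y)], [x, y], dict=True)
print(points)    # [{x: -3*sqrt(2)/2, y: -1}, {x: 3*sqrt(2)/2, y: -1}]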
Just as we can differentiate again in single-variable calculus to give second-order derivatives and so on, multi-variable calculus also allows second- and higher-order derivatives. The important difference is that there is no need to differentiate w.r.t. the same variable each time, and we employ some slightly new syntax to indicate this.
Let's start with the simple equation z = x² + 2xy². Differentiating first w.r.t. x:
∂z
---- = 2x + 2y²
∂x
Now we can differentiate again, this time w.r.t. y:
∂²z
----- = 4y
∂x∂y
Notice that the ∂² on top follows the usual index rule seen in single-variable calculus, while the variables being differentiated with respect to are "queued" along the bottom. In this way we can specify exactly which variables are being manipulated.
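A sympy sketch of the same calculation, which also shows in passing that for well-behaved functions the order of differentiation makes no difference (the symmetry of second derivatives):

import sympy as sp

x, y = sp.symbols('x y')
z = x**2 + 2*x*y**2

dz_dx = sp.diff(z, x)              # 2*x + 2*y**2
print(sp.diff(dz_dx, y))           # 4*y

# Differentiating in the other order gives the same mixed derivative.
print(sp.diff(sp.diff(z, y), x))   # 4*y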
Now I may have so far given the impression that multi-variable calculus only deals with differentiation and not integration. While this is not true, the differentiation aspect simply seems more interesting and more progressive than multi-variable integration does over its single-variable variant. The important point in integration is to note that, when handling indefinite integrals, the arbitrary "constant" of integration from an inner integral is really an arbitrary function of the variables not yet integrated over, and it need only be added at the end of resolving the nested integrals. I invite anyone else who can provide a more interesting insight into this topic to do so below.
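In the meantime, here is what a nested indefinite integral looks like as a sympy sketch, reusing the z = x² + 2xy² example from above; note that sympy silently drops the arbitrary constants, so they must be supplied by hand at the end:

import sympy as sp

x, y = sp.symbols('x y')
z = x**2 + 2*x*y**2

# Integrate w.r.t. x first (sympy omits the constant of integration)...
inner = sp.integrate(z, x)       # x**3/3 + x**2*y**2
# ...then w.r.t. y.
print(sp.integrate(inner, y))    # x**3*y/3 + x**2*y**3/3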