2.2: Partial Derivatives
We are now ready to define derivatives of functions of more than one variable. First, recall how we defined the derivative, f′(a), of a function of one variable, f(x). We imagined that we were walking along the x-axis, in the positive direction, measuring, for example, the temperature along the way. We denoted by f(x) the temperature at x. The instantaneous rate of change of temperature that we observed as we passed through x=a was
\frac{\mathrm{d}f}{\mathrm{d}x}(a)=\lim_{h\rightarrow 0}\frac{f(a+h)-f(a)}{h}=\lim_{x\rightarrow a}\frac{f(x)-f(a)}{x-a} \nonumber
Next suppose that we are walking in the xy-plane and that the temperature at (x,y) is f(x,y). We can pass through the point (x,y)=(a,b) moving in many different directions, and we cannot expect the measured rate of change of temperature if we walk parallel to the x-axis, in the direction of increasing x, to be the same as the measured rate of change of temperature if we walk parallel to the y-axis in the direction of increasing y. We'll start by considering just those two directions. We'll consider other directions (like walking parallel to the line y=x) later.
Suppose that we are passing through the point (x,y)=(a,b) and that we are walking parallel to the x-axis (in the positive direction). Then our y-coordinate will be constant, always taking the value y=b. So we can think of the measured temperature as the function of one variable B(x)=f(x,b) and we will observe the rate of change of temperature
\frac{\mathrm{d}B}{\mathrm{d}x}(a)=\lim_{h\rightarrow 0}\frac{B(a+h)-B(a)}{h}=\lim_{h\rightarrow 0}\frac{f(a+h,b)-f(a,b)}{h} \nonumber
This is called the “partial derivative of f with respect to x at (a,b)” and is denoted \frac{\partial f}{\partial x}(a,b). Here
- the symbol \partial, which is read “partial”, indicates that we are dealing with a function of more than one variable, and
- the x in \frac{\partial f}{\partial x} indicates that we are differentiating with respect to x, while y is being held fixed, i.e. being treated as a constant.
- \frac{\partial f}{\partial x} is read “partial dee f dee x”.
Do not write \frac{\mathrm{d}}{\mathrm{d}x} when \frac{\partial}{\partial x} is appropriate. We shall later encounter situations when \frac{\mathrm{d}}{\mathrm{d}x}f and \frac{\partial}{\partial x}f are both defined and have different meanings.
If, instead, we are passing through the point (x,y)=(a,b) and are walking parallel to the y-axis (in the positive direction), then our x-coordinate will be constant, always taking the value x=a. So we can think of the measured temperature as the function of one variable A(y)=f(a,y) and we will observe the rate of change of temperature
\frac{\mathrm{d}A}{\mathrm{d}y}(b)=\lim_{h\rightarrow 0}\frac{A(b+h)-A(b)}{h}=\lim_{h\rightarrow 0}\frac{f(a,b+h)-f(a,b)}{h} \nonumber
This is called the “partial derivative of f with respect to y at (a,b)” and is denoted \frac{\partial f}{\partial y}(a,b).
Just as was the case for the ordinary derivative \frac{\mathrm{d}f}{\mathrm{d}x}(x) (see Definition 2.2.6 in the CLP-1 text), it is common to treat the partial derivatives of f(x,y) as functions of (x,y) simply by evaluating the partial derivatives at (x,y) rather than at (a,b).
The x- and y-partial derivatives of the function f(x,y) are
\begin{align*}
\frac{\partial f}{\partial x}(x,y)&=\lim_{h\rightarrow 0}\frac{f(x+h,y)-f(x,y)}{h}\\
\frac{\partial f}{\partial y}(x,y)&=\lim_{h\rightarrow 0}\frac{f(x,y+h)-f(x,y)}{h}
\end{align*}
respectively. The partial derivatives of functions of more than two variables are defined analogously.
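If it helps to see these defining limits numerically, here is a small illustrative sketch (in Python; it is an addition, not part of the text) that approximates both partial derivatives by difference quotients with a small but nonzero h. The sample function g and the step size are arbitrary choices.

```python
import math

def g(x, y):
    # An arbitrary sample function of two variables.
    return math.exp(x) * math.cos(y)

def partial_x(f, x, y, h=1e-6):
    # Difference quotient approximating (partial f / partial x)(x, y): y is held fixed.
    return (f(x + h, y) - f(x, y)) / h

def partial_y(f, x, y, h=1e-6):
    # Difference quotient approximating (partial f / partial y)(x, y): x is held fixed.
    return (f(x, y + h) - f(x, y)) / h

print(partial_x(g, 0.0, 0.0))  # close to 1, the exact value of e^x cos y differentiated in x at (0,0)
print(partial_y(g, 0.0, 0.0))  # close to 0, the exact value of -e^x sin y at (0,0)
```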
Partial derivatives are used a lot. And there are many notations for them.
The partial derivative \frac{\partial f}{\partial x}(x,y) of a function f(x,y) is also denoted
\frac{\partial f}{\partial x}\qquad f_x(x,y)\qquad f_x\qquad D_xf(x,y)\qquad D_xf\qquad D_1f(x,y)\qquad D_1f \nonumber
The subscript 1 on D_1f indicates that f is being differentiated with respect to its first variable. The partial derivative \frac{\partial f}{\partial x}(a,b) is also denoted
\left.\frac{\partial f}{\partial x}\right|_{(a,b)} \nonumber
with the subscript (a,b) indicating that \frac{\partial f}{\partial x} is being evaluated at (x,y)=(a,b).
The notation \left(\frac{\partial f}{\partial x}\right)_y is used to make explicit that the variable y is being held fixed 1.
We'll now develop a geometric interpretation of the partial derivative
\frac{\partial f}{\partial x}(a,b)=\lim_{h\rightarrow 0}\frac{f(a+h,b)-f(a,b)}{h} \nonumber
in terms of the shape of the graph z=f(x,y) of the function f(x,y). That graph appears in the figure below. It looks like the part of a deformed sphere that is in the first octant.
The definition of \frac{\partial f}{\partial x}(a,b) concerns only points on the graph that have y=b, that is, only the curve of intersection of the surface z=f(x,y) with the plane y=b. That is the red curve in the figure. The two blue vertical line segments in the figure have heights f(a,b) and f(a+h,b), which are the two numbers in the numerator of \frac{f(a+h,b)-f(a,b)}{h}.
A side view of the curve (looking from the left side of the y-axis) is sketched in the figure below.
Again, the two blue vertical line segments in the figure have heights f(a,b) and f(a+h,b), which are the two numbers in the numerator of \frac{f(a+h,b)-f(a,b)}{h}. So the numerator f(a+h,b)-f(a,b) and denominator h are the rise and run, respectively, of the curve z=f(x,b) from x=a to x=a+h. Thus \frac{\partial f}{\partial x}(a,b) is exactly the slope of (the tangent to) the curve of intersection of the surface z=f(x,y) and the plane y=b at the point (a,b,f(a,b)). In the same way \frac{\partial f}{\partial y}(a,b) is exactly the slope of (the tangent to) the curve of intersection of the surface z=f(x,y) and the plane x=a at the point (a,b,f(a,b)).
Evaluation of Partial Derivatives
From the above discussion, we see that we can readily compute partial derivatives \frac{\partial}{\partial x} by using what we already know about ordinary derivatives \frac{\mathrm{d}}{\mathrm{d}x}. More precisely,
- To evaluate \frac{\partial f}{\partial x}(x,y), treat the y in f(x,y) as a constant and differentiate the resulting function of x with respect to x.
- To evaluate \frac{\partial f}{\partial y}(x,y), treat the x in f(x,y) as a constant and differentiate the resulting function of y with respect to y.
- To evaluate \frac{\partial f}{\partial x}(a,b), treat the y in f(x,y) as a constant and differentiate the resulting function of x with respect to x. Then evaluate the result at x=a, y=b.
- To evaluate \frac{\partial f}{\partial y}(a,b), treat the x in f(x,y) as a constant and differentiate the resulting function of y with respect to y. Then evaluate the result at x=a, y=b.
Now for some examples.
Let
f(x,y)=x^3+y^2+4xy^2 \nonumber
Then, since \frac{\partial}{\partial x} treats y as a constant,
\begin{align*}
\frac{\partial f}{\partial x}&=\frac{\partial}{\partial x}(x^3)+\frac{\partial}{\partial x}(y^2)+\frac{\partial}{\partial x}(4xy^2)\\
&=3x^2+0+4y^2\,\frac{\partial}{\partial x}(x)\\
&=3x^2+4y^2
\end{align*}
and, since \frac{\partial}{\partial y} treats x as a constant,
\begin{align*}
\frac{\partial f}{\partial y}&=\frac{\partial}{\partial y}(x^3)+\frac{\partial}{\partial y}(y^2)+\frac{\partial}{\partial y}(4xy^2)\\
&=0+2y+4x\,\frac{\partial}{\partial y}(y^2)\\
&=2y+8xy
\end{align*}
In particular, at (x,y)=(1,0) these partial derivatives take the values
\begin{align*}
\frac{\partial f}{\partial x}(1,0)&=3(1)^2+4(0)^2=3\\
\frac{\partial f}{\partial y}(1,0)&=2(0)+8(1)(0)=0
\end{align*}
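As a quick cross-check of this example (an addition, not part of the original), a computer algebra system reproduces the same partial derivatives. The sketch below assumes SymPy is available.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**3 + y**2 + 4*x*y**2

fx = sp.diff(f, x)   # 3*x**2 + 4*y**2
fy = sp.diff(f, y)   # 2*y + 8*x*y

print(fx, fy)
print(fx.subs({x: 1, y: 0}), fy.subs({x: 1, y: 0}))  # 3 and 0, as computed above
```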
Let
f(x,y)=y\cos x+xe^{xy} \nonumber
Then, since \frac{\partial}{\partial x} treats y as a constant, \frac{\partial}{\partial x}e^{xy}=ye^{xy} and
\begin{align*}
\frac{\partial f}{\partial x}(x,y)&=y\frac{\partial}{\partial x}(\cos x)+e^{xy}\frac{\partial}{\partial x}(x)+x\frac{\partial}{\partial x}(e^{xy})
&&\text{(by the product rule)}\\
&=-y\sin x+e^{xy}+xye^{xy}\\
\frac{\partial f}{\partial y}(x,y)&=\cos x\,\frac{\partial}{\partial y}(y)+x\frac{\partial}{\partial y}(e^{xy})\\
&=\cos x+x^2e^{xy}
\end{align*}
Let's move up to a function of four variables. Things generalize in a quite straightforward way.
Let
f(x,y,z,t)=x\sin(y+2z)+t^2e^{3y}\ln z \nonumber
Then
\begin{align*}
\frac{\partial f}{\partial x}(x,y,z,t)&=\sin(y+2z)\\
\frac{\partial f}{\partial y}(x,y,z,t)&=x\cos(y+2z)+3t^2e^{3y}\ln z\\
\frac{\partial f}{\partial z}(x,y,z,t)&=2x\cos(y+2z)+t^2e^{3y}/z\\
\frac{\partial f}{\partial t}(x,y,z,t)&=2te^{3y}\ln z
\end{align*}
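Here too a symbolic check is easy if you have SymPy at hand. This sketch (an illustrative addition) differentiates f(x,y,z,t) with respect to each variable in turn, treating the other three as constants.

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
f = x*sp.sin(y + 2*z) + t**2 * sp.exp(3*y) * sp.log(z)

# Each partial derivative treats the other three variables as constants.
for v in (x, y, z, t):
    print(v, sp.diff(f, v))
```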
Now here is a more complicated example — our function takes a special value at (0,0). To compute derivatives there we revert to the definition.
Set
f(x,y)=\begin{cases}\frac{\cos x-\cos y}{x-y}& \text{if } x\ne y\\ 0 & \text{if } x=y \end{cases} \nonumber
If b≠a, then for all (x,y) sufficiently close to (a,b), f(x,y)=\frac{\cos x-\cos y}{x-y} and we can compute the partial derivatives of f at (a,b) using the familiar rules of differentiation. However, that is not the case for (a,b)=(0,0). To evaluate f_x(0,0), we need to set y=0 and find the derivative of
f(x,0)=\begin{cases}\frac{\cos x-1}{x}& \text{if } x\ne 0\\ 0 & \text{if } x=0 \end{cases} \nonumber
with respect to x at x=0. As we cannot use the usual differentiation rules, we evaluate the derivative 2 by applying the definition
\begin{align*}
f_x(0,0)&=\lim_{h\rightarrow 0}\frac{f(h,0)-f(0,0)}{h}
=\lim_{h\rightarrow 0}\frac{\frac{\cos h-1}{h}-0}{h}
&&\text{(Recall that }h\ne 0\text{ in the limit.)}\\
&=\lim_{h\rightarrow 0}\frac{\cos h-1}{h^2}\\
&=\lim_{h\rightarrow 0}\frac{-\sin h}{2h}
&&\text{(By l'Hôpital's rule.)}\\
&=\lim_{h\rightarrow 0}\frac{-\cos h}{2}
&&\text{(By l'Hôpital again.)}\\
&=-\frac{1}{2}
\end{align*}
We could also evaluate the limit of \frac{\cos h-1}{h^2} by substituting in the Taylor expansion
\cos h=1-\frac{h^2}{2!}+\frac{h^4}{4!}-\cdots \nonumber
We can also use Taylor expansions to understand the behaviour of f(x,y) for (x,y) near (0,0). For x≠y,
\begin{align*}
\frac{\cos x-\cos y}{x-y}
&=\frac{\left[1-\frac{x^2}{2!}+\frac{x^4}{4!}-\cdots\right]-\left[1-\frac{y^2}{2!}+\frac{y^4}{4!}-\cdots\right]}{x-y}\\
&=\frac{-\frac{x^2-y^2}{2!}+\frac{x^4-y^4}{4!}-\cdots}{x-y}\\
&=-\frac{1}{2!}\,\frac{x^2-y^2}{x-y}+\frac{1}{4!}\,\frac{x^4-y^4}{x-y}-\cdots\\
&=-\frac{x+y}{2!}+\frac{x^3+x^2y+xy^2+y^3}{4!}-\cdots
\end{align*}
So for (x,y) near (0,0),
f(x,y)\approx\begin{cases}-\frac{x+y}{2}& \text{if } x\ne y\\ 0 & \text{if } x=y \end{cases} \nonumber
So it sure looks like (and in fact it is true that)
- f(x,y) is continuous at (0,0) and
- f(x,y) is not continuous at (a,a) for small a≠0 and
- f_x(0,0)=f_y(0,0)=-\frac{1}{2}
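As a numerical sanity check of these conclusions (an addition, not part of the original example), the difference quotient defining f_x(0,0) can be evaluated for a few small values of h; the values drift toward -1/2, consistent with the computation above.

```python
import math

def f(x, y):
    # The example function: (cos x - cos y)/(x - y) off the line y = x, and 0 on it.
    return (math.cos(x) - math.cos(y)) / (x - y) if x != y else 0.0

for h in (0.1, 0.01, 0.001):
    # Difference quotient for f_x(0,0); it equals (cos h - 1)/h^2 and tends to -0.5.
    print(h, (f(h, 0.0) - f(0.0, 0.0)) / h)
```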
Again set
f(x,y)=\begin{cases}\frac{\cos x-\cos y}{x-y}& \text{if } x\ne y\\ 0 & \text{if } x=y \end{cases} \nonumber
We'll now compute fy(x,y) for all (x,y).
The case y≠x: When y≠x,
\begin{align*}
f_y(x,y)&=\frac{\partial}{\partial y}\frac{\cos x-\cos y}{x-y}\\
&=\frac{(x-y)\frac{\partial}{\partial y}(\cos x-\cos y)-(\cos x-\cos y)\frac{\partial}{\partial y}(x-y)}{(x-y)^2}
&&\text{(by the quotient rule)}\\
&=\frac{(x-y)\sin y+\cos x-\cos y}{(x-y)^2}
\end{align*}
The case y=x: When y=x,
\begin{align*}
f_y(x,y)&=\lim_{h\rightarrow 0}\frac{f(x,y+h)-f(x,y)}{h}
=\lim_{h\rightarrow 0}\frac{f(x,x+h)-f(x,x)}{h}\\
&=\lim_{h\rightarrow 0}\frac{\frac{\cos x-\cos(x+h)}{x-(x+h)}-0}{h}
&&\text{(Recall that }h\ne 0\text{ in the limit.)}\\
&=\lim_{h\rightarrow 0}\frac{\cos(x+h)-\cos x}{h^2}
\end{align*}
Now we apply L'Hôpital's rule, remembering that, in this limit, x is a constant and h is the variable — so we differentiate with respect to h.
f_y(x,y)=\lim_{h\rightarrow 0}\frac{-\sin(x+h)}{2h} \nonumber
Note that if x is not an integer multiple of π, then the numerator −sin(x+h) does not tend to zero as h tends to zero, and the limit giving fy(x,y) does not exist. On the other hand, if x is an integer multiple of π, both the numerator and denominator tend to zero as h tends to zero, and we can apply L'Hôpital's rule a second time. Then
f_y(x,y)=\lim_{h\rightarrow 0}\frac{-\cos(x+h)}{2}=-\frac{\cos x}{2} \nonumber
The conclusion:
f_y(x,y)=\begin{cases}
\frac{(x-y)\sin y+\cos x-\cos y}{(x-y)^2} & \text{if } x\ne y\\
-\frac{\cos x}{2} & \text{if } x=y \text{ with } x \text{ an integer multiple of } \pi\\
\text{DNE} & \text{if } x=y \text{ with } x \text{ not an integer multiple of } \pi
\end{cases} \nonumber
In this example, we will see that the function
f(x,y)=\begin{cases}\frac{x^2}{x-y}& \text{if } x\ne y\\ 0 & \text{if } x=y \end{cases} \nonumber
is not continuous at (0,0) and yet has both partial derivatives fx(0,0) and fy(0,0) perfectly well defined. We'll also see how that is possible. First let's compute the partial derivatives. By definition,
\begin{align*}
f_x(0,0)&=\lim_{h\rightarrow 0}\frac{f(0+h,0)-f(0,0)}{h}
=\lim_{h\rightarrow 0}\frac{\overbrace{\tfrac{h^2}{h-0}}^{h}-0}{h}
=\lim_{h\rightarrow 0}1=1\\
f_y(0,0)&=\lim_{h\rightarrow 0}\frac{f(0,0+h)-f(0,0)}{h}
=\lim_{h\rightarrow 0}\frac{\frac{0^2}{0-h}-0}{h}
=\lim_{h\rightarrow 0}0=0
\end{align*}
So the first order partial derivatives f_x(0,0) and f_y(0,0) are perfectly well defined.
To see that, nonetheless, f(x,y) is not continuous at (0,0)\text{,} we take the limit of f(x,y) as (x,y) approaches (0,0) along the curve y=x-x^3\text{.} The limit is
\begin{gather*} \lim_{x\rightarrow 0} f\big(x,x-x^3\big) =\lim_{x\rightarrow 0} \frac{x^2}{x-(x-x^3)} =\lim_{x\rightarrow 0} \frac{1}{x} \end{gather*}
which does not exist. Indeed as x approaches 0 through positive numbers, \frac{1}{x} approaches +\infty\text{,} and as x approaches 0 through negative numbers, \frac{1}{x} approaches -\infty\text{.}
So how is this possible? The answer is that f_x(0,0) only involves values of f(x,y) with y=0\text{.} As f(x,0)=x\text{,} for all values of x\text{,} we have that f(x,0) is a continuous, and indeed a differentiable, function. Similarly, f_y(0,0) only involves values of f(x,y) with x=0\text{.} As f(0,y)=0\text{,} for all values of y\text{,} we have that f(0,y) is a continuous, and indeed a differentiable, function. On the other hand, the bad behaviour of f(x,y) for (x,y) near (0,0) only happens for x and y both nonzero.
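A small numerical experiment (an illustrative addition, not part of the original) makes the same point: along the curve y = x - x^3 the values of f blow up as x approaches 0, while along the two axes, which are all that f_x(0,0) and f_y(0,0) can see, f is perfectly tame.

```python
def f(x, y):
    # x^2/(x - y) off the line y = x, and 0 on it.
    return x**2 / (x - y) if x != y else 0.0

# Along the curve y = x - x^3 the values equal 1/x and blow up as x -> 0.
for x in (0.1, 0.01, 0.001):
    print(x, f(x, x - x**3))   # 10, 100, 1000

# Along the axes f is harmless: f(x, 0) = x and f(0, y) = 0.
print(f(0.5, 0.0), f(0.0, 0.5))
```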
Our next example uses implicit differentiation.
The equation
z^5 + y^2 e^z +e^{2x}=0 \nonumber
implicitly determines z as a function of x and y\text{.} That is, the function z(x,y) obeys
z(x,y)^5 + y^2 e^{z(x,y)} +e^{2x}=0 \nonumber
For example, when x=y=0\text{,} the equation reduces to
z(0,0)^5=-1 \nonumber
which forces 3 z(0,0)=-1\text{.} Let's find the partial derivative \frac{\partial z}{\partial x}(0,0)\text{.}
We are not going to be able to explicitly solve the equation for z(x,y)\text{.} All we know is that
z(x,y)^5 + y^2 e^{z(x,y)} + e^{2x} =0 \nonumber
for all x and y\text{.} We can turn this into an equation for \frac{\partial z}{\partial x}(0,0) by differentiating 4 the whole equation with respect to x\text{,} giving
5z(x,y)^4\ \frac{\partial z}{\partial x}(x,y) + y^2 e^{z(x,y)}\ \frac{\partial z}{\partial x}(x,y) +2e^{2x} =0 \nonumber
and then setting x=y=0\text{,} giving
5z(0,0)^4\ \frac{\partial z}{\partial x}(0,0) +2 =0 \nonumber
As we already know that z(0,0)=-1\text{,}
\frac{\partial z}{\partial x}(0,0) = -\frac{2}{5z(0,0)^4} =-\frac{2}{5} \nonumber
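If you want to confirm this implicit computation with a computer algebra system, here is one way to do it with SymPy (an illustrative addition, assuming SymPy is available): declare z as an unknown function of x and y, differentiate the defining equation with respect to x, and solve for \frac{\partial z}{\partial x}.

```python
import sympy as sp

x, y = sp.symbols('x y')
z = sp.Function('z')(x, y)

# The defining equation, with z regarded as an unknown function z(x, y).
eq = z**5 + y**2 * sp.exp(z) + sp.exp(2*x)

# Differentiate with respect to x and solve for dz/dx.
zx = sp.solve(sp.diff(eq, x), sp.Derivative(z, x))[0]
print(sp.simplify(zx))                       # -2*exp(2*x)/(y**2*exp(z) + 5*z**4), with z = z(x, y)
print(zx.subs(z, -1).subs({x: 0, y: 0}))     # -2/5, using z(0,0) = -1
```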
Next we have a partial derivative disguised as a limit.
In this example we are going to evaluate the limit
\lim_{z\rightarrow 0}\frac{(x+y+z)^3-(x+y)^3}{(x+y)z} \nonumber
The critical observation is that, in taking the limit z\rightarrow 0\text{,} x and y are fixed. They do not change as z is getting smaller and smaller. Furthermore this limit is exactly of the form of the limits in the Definition 2.2.1 of partial derivative, disguised by some obfuscating changes of notation.
Set
f(x,y,z) = \frac{(x+y+z)^3}{(x+y)} \nonumber
Then
\begin{align*} \lim_{z\rightarrow 0}\frac{(x+y+z)^3-(x+y)^3}{(x+y)z} &=\lim_{z\rightarrow 0}\frac{f(x,y,z)-f(x,y,0)}{z}\\ &=\lim_{h\rightarrow 0}\frac{f(x,y,0+h)-f(x,y,0)}{h}\\ &=\frac{\partial f}{\partial z}(x,y,0)\\ &={\left[\frac{\partial }{\partial z}\frac{(x+y+z)^3}{x+y}\right]}_{z=0} \end{align*}
Recalling that \frac{\partial }{\partial z} treats x and y as constants, we are evaluating the derivative of a function of the form \frac{({\rm const}+z)^3}{\rm const}\text{.} So
\begin{align*} \lim_{z\rightarrow 0}\frac{(x+y+z)^3-(x+y)^3}{(x+y)z} &={\left.3\frac{(x+y+z)^2}{x+y}\right|}_{z=0}\\ &=3(x+y) \end{align*}
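A quick numerical check of this conclusion (an addition, not part of the original): evaluating the original difference quotient at a fixed (x,y) for shrinking z should approach 3(x+y). The sample point below is an arbitrary choice.

```python
def q(x, y, z):
    # The difference quotient whose z -> 0 limit is being computed.
    return ((x + y + z)**3 - (x + y)**3) / ((x + y) * z)

x, y = 1.0, 2.0
for z in (0.1, 0.01, 0.001):
    print(z, q(x, y, z))   # approaches 3*(x + y) = 9
```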
The next example highlights a potentially dangerous difference between ordinary and partial derivatives.
In this example we are going to see that, in contrast to the ordinary derivative case, \frac{\partial r}{\partial x} is not, in general, the same as \big(\frac{\partial x}{\partial r}\big)^{-1}\text{.}
Recall that Cartesian and polar coordinates 5 (for (x,y)\ne (0,0) and r \gt 0) are related by
\begin{align*} x&=r\cos\theta\\ y&=r\sin\theta\\ r&=\sqrt{x^2+y^2}\\ \tan\theta&=\frac{y}{x} \end{align*}
We will use the functions
x(r,\theta) = r\cos\theta\qquad \text{and}\qquad r(x,y) = \sqrt{x^2+y^2} \nonumber
Fix any point (x_0,y_0)\ne (0,0) and let (r_0,\theta_0)\text{,} 0\le\theta_0 \lt 2\pi\text{,} be the corresponding polar coordinates. Then
\begin{gather*} \frac{\partial x}{\partial r}(r,\theta) = \cos\theta\qquad \frac{\partial r}{\partial x}(x,y) = \frac{x}{\sqrt{x^2+y^2}} \end{gather*}
so that
\begin{align*} \frac{\partial x}{\partial r}(r_0,\theta_0)=\left(\frac{\partial r}{\partial x}(x_0,y_0)\right)^{-1} &\iff \cos\theta_0= \left(\frac{x_0}{\sqrt{x_0^2+y_0^2}}\right)^{-1} = \left(\cos\theta_0\right)^{-1}\\ &\iff \cos^2\theta_0= 1\\ &\iff \theta_0=0,\pi \end{align*}
We can also see pictorially why this happens. By definition, the partial derivatives
\begin{align*} \frac{\partial x}{\partial r}(r_0,\theta_0) &= \lim_{\mathrm{d}{r}\rightarrow 0} \frac{x(r_0+\mathrm{d}{r},\theta_0) - x(r_0,\theta_0)}{\mathrm{d}{r}}\\ \frac{\partial r}{\partial x}(x_0,y_0) &= \lim_{\mathrm{d}{x}\rightarrow 0} \frac{r(x_0+\mathrm{d}{x},y_0) - r(x_0,y_0)}{\mathrm{d}{x}} \end{align*}
Here we have just renamed the h of Definition 2.2.1 to \mathrm{d}{r} and to \mathrm{d}{x} in the two definitions.
In computing \frac{\partial x}{\partial r}(r_0,\theta_0)\text{,} \theta_0 is held fixed, r is changed by a small amount \mathrm{d}{r} and the resulting \mathrm{d}{x}=x(r_0+\mathrm{d}{r},\theta_0) - x(r_0,\theta_0) is computed. In the figure on the left below, \mathrm{d}{r} is the length of the orange line segment and \mathrm{d}{x} is the length of the blue line segment.
On the other hand, in computing \frac{\partial r}{\partial x}\text{,} y is held fixed, x is changed by a small amount \mathrm{d}{x} and the resulting \mathrm{d}{r}=r(x_0+\mathrm{d}{x},y_0) - r(x_0,y_0) is computed. In the figure on the right above, \mathrm{d}{x} is the length of the pink line segment and \mathrm{d}{r} is the length of the orange line segment.
Here are the two figures combined together. We have arranged that the same \mathrm{d}{r} is used in both computations. In order for the \mathrm{d}{r}'s to be the same in both computations, the two \mathrm{d}{x}'s have to be different (unless \theta_0=0,\pi). So, in general, \frac{\partial x}{\partial r}(r_0,\theta_0)\ne \big(\frac{\partial r}{\partial x}(x_0,y_0)\big)^{-1}\text{.}
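The same non-reciprocity is easy to see numerically (this sketch is an addition; the sample point theta_0 = pi/3, r_0 = 2 is arbitrary): \frac{\partial x}{\partial r} comes out to 1/2 while \big(\frac{\partial r}{\partial x}\big)^{-1} comes out to 2.

```python
import math

theta0 = math.pi / 3                     # sample angle, not 0 or pi
r0 = 2.0
x0, y0 = r0 * math.cos(theta0), r0 * math.sin(theta0)

dx_dr = math.cos(theta0)                 # partial x / partial r at (r0, theta0)
dr_dx = x0 / math.hypot(x0, y0)          # partial r / partial x at (x0, y0)

print(dx_dr, 1.0 / dr_dx)                # 0.5 versus 2.0: equal only when cos^2(theta0) = 1
```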
The inverse function theorem, for functions of one variable, says that, if y(x) and x(y) are inverse functions, meaning that y\big(x(y)\big)=y and x\big(y(x)\big)=x\text{,} and are differentiable with \frac{\mathrm{d}y}{\mathrm{d}x}\ne 0\text{,} then
\frac{\mathrm{d}x}{\mathrm{d}y}(y) = \frac{1}{\frac{\mathrm{d}y}{\mathrm{d}x}\big(x(y)\big)} \nonumber
To see this, just apply \frac{\mathrm{d}}{\mathrm{d}y} to both sides of y\big(x(y)\big)=y to get \frac{\mathrm{d}y}{\mathrm{d}x}\big(x(y)\big)\ \frac{\mathrm{d}x}{\mathrm{d}y}(y)=1\text{,} by the chain rule (see Theorem 2.9.3 in the CLP-1 text). In the CLP-1 text, we used this to compute the derivatives of the logarithm (see Theorem 2.10.1 in the CLP-1 text) and of the inverse trig functions (see Theorem 2.12.7 in the CLP-1 text).
We have just seen, in Example 2.2.12, that we can't be too naive in extending the single variable inverse function theorem to functions of two (or more) variables. On the other hand, there is such an extension, which we will now illustrate, using Cartesian and polar coordinates. For simplicity, we'll restrict our attention to x \gt 0\text{,} y \gt 0\text{,} or equivalently, r \gt 0\text{,} 0 \lt \theta \lt \frac{\pi}{2}\text{.} The functions which convert between Cartesian and polar coordinates are
\begin{alignat*}{2} x(r,\theta)&=r\cos\theta\qquad& r(x,y)&=\sqrt{x^2+y^2}\\ y(r,\theta)&=r\sin\theta& \theta(x,y)&=\arctan\left(\frac{y}{x}\right) \end{alignat*}
The two functions on the left convert from polar to Cartesian coordinates and the two functions on the right convert from Cartesian to polar coordinates. The inverse function theorem (for functions of two variables) says that,
- if you form the first order partial derivatives of the left hand functions into the matrix
\left[\begin{matrix} \frac{\partial x}{\partial r}(r,\theta) & \frac{\partial x}{\partial \theta}(r,\theta) \\ \frac{\partial y}{\partial r}(r,\theta) & \frac{\partial y}{\partial \theta}(r,\theta) \end{matrix}\right] =\left[\begin{matrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{matrix}\right] \nonumber
- and you form the first order partial derivatives of the right hand functions into the matrix
\left[\begin{matrix} \frac{\partial r}{\partial x}(x,y) & \frac{\partial r}{\partial y}(x,y) \\ \frac{\partial \theta }{\partial x}(x,y) & \frac{\partial \theta }{\partial y}(x,y) \end{matrix}\right] =\left[\begin{matrix} \frac{x}{\sqrt{x^2+y^2}} & \frac{y}{\sqrt{x^2+y^2}} \\ \frac{-\frac{y}{x^2}}{1+(\frac{y}{x})^2} & \frac{\frac{1}{x}}{1+(\frac{y}{x})^2} \end{matrix}\right] =\left[\begin{matrix} \frac{x}{\sqrt{x^2+y^2}} & \frac{y}{\sqrt{x^2+y^2}} \\ \frac{-y}{x^2+y^2} & \frac{x}{x^2+y^2} \end{matrix}\right] \nonumber
- and if you evaluate the second matrix at x=x(r,\theta)\text{,} y=y(r,\theta)\text{,}
\left[\begin{matrix} \frac{\partial r}{\partial x}\big(x(r,\theta),y(r,\theta)\big) & \frac{\partial r}{\partial y}\big(x(r,\theta),y(r,\theta)\big) \\ \frac{\partial \theta }{\partial x}\big(x(r,\theta),y(r,\theta)\big) & \frac{\partial \theta }{\partial y}\big(x(r,\theta),y(r,\theta)\big) \end{matrix}\right] =\left[\begin{matrix} \cos\theta & \sin\theta \\ -\frac{\sin\theta}{r} & \frac{\cos\theta}{r} \end{matrix}\right] \nonumber
- and if you multiply 6 the two matrices together
\begin{align*} &\left[\begin{matrix} \frac{\partial r}{\partial x}\big(x(r,\theta),y(r,\theta)\big) & \frac{\partial r}{\partial y}\big(x(r,\theta),y(r,\theta)\big) \\ \frac{\partial \theta }{\partial x}\big(x(r,\theta),y(r,\theta)\big) & \frac{\partial \theta }{\partial y}\big(x(r,\theta),y(r,\theta)\big) \end{matrix}\right]\ \left[\begin{matrix} \frac{\partial x}{\partial r}(r,\theta) & \frac{\partial x}{\partial \theta}(r,\theta) \\ \frac{\partial y}{\partial r}(r,\theta) & \frac{\partial y}{\partial \theta}(r,\theta) \end{matrix}\right]\\ &=\left[\begin{matrix} \cos\theta & \sin\theta \\ -\frac{\sin\theta}{r} & \frac{\cos\theta}{r} \end{matrix}\right]\ \left[\begin{matrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{matrix}\right]\\\ &=\left[\begin{matrix} (\cos\theta)(\cos\theta) + (\sin\theta)(\sin\theta) &(\cos\theta)(-r\sin\theta)+(\sin\theta)(r\cos\theta) \\ (-\frac{\sin\theta}{r})(\cos\theta)+(\frac{\cos\theta}{r})(\sin\theta) & (-\frac{\sin\theta}{r})(-r\sin\theta) + (\frac{\cos\theta}{r})(r\cos\theta) \end{matrix}\right] \end{align*}
- then the result is the identity matrix
\left[\begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix}\right] \nonumber
and indeed it is!
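For those comfortable with a computer algebra system, the whole matrix computation can be checked in a few lines (an illustrative addition, assuming SymPy): build the two Jacobian matrices, substitute x=x(r,\theta), y=y(r,\theta), multiply, and simplify.

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
x = r * sp.cos(theta)
y = r * sp.sin(theta)

# Jacobian of (x, y) with respect to (r, theta).
A = sp.Matrix([[sp.diff(x, r), sp.diff(x, theta)],
               [sp.diff(y, r), sp.diff(y, theta)]])

# Jacobian of (r, theta) = (sqrt(X^2+Y^2), arctan(Y/X)) with respect to (X, Y),
# then evaluated at X = x(r, theta), Y = y(r, theta).
X, Y = sp.symbols('X Y', positive=True)
R = sp.sqrt(X**2 + Y**2)
T = sp.atan(Y / X)
B = sp.Matrix([[sp.diff(R, X), sp.diff(R, Y)],
               [sp.diff(T, X), sp.diff(T, Y)]]).subs({X: x, Y: y})

print((B * A).applyfunc(sp.simplify))   # should print the 2x2 identity matrix
```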
This two variable version of the inverse function theorem can be derived by applying the derivatives \frac{\partial}{\partial r} and \frac{\partial }{\partial \theta} to the equations
\begin{align*} r\big(x(r,\theta),y(r,\theta)\big) &=r \\ \theta\big(x(r,\theta),y(r,\theta)\big) &=\theta \end{align*}
and using the two variable version of the chain rule, which we will see in §2.4.
Exercises
Stage 1
Let f(x,y) = e^x\cos y\text{.} The following table gives some values of f(x,y)\text{.}
|         | x=0     | x=0.01  | x=0.1   |
|---------|---------|---------|---------|
| y=-0.1  | 0.99500 | 1.00500 | 1.09965 |
| y=-0.01 | 0.99995 | 1.01000 | 1.10512 |
| y=0     | 1.0     | 1.01005 | 1.10517 |
- Find two different approximate values for \frac{\partial f}{\partial x}(0,0) using the data in the above table.
- Find two different approximate values for \frac{\partial f}{\partial y}(0,0) using the data in the above table.
- Evaluate \frac{\partial f}{\partial x}(0,0) and \frac{\partial f}{\partial y} (0,0) exactly.
You are traversing an undulating landscape. Take the z-axis to be straight up towards the sky, the positive x-axis to be due south, and the positive y-axis to be due east. Then the landscape near you is described by the equation z=f(x,y)\text{,} with you at the point (0,0,f(0,0))\text{.} The function f(x,y) is differentiable.
Suppose f_y(0,0) \lt 0\text{.} Is it possible that you are at a summit? Explain.
Let
f(x,y)=\begin{cases}\frac{x^2y}{x^2+y^2}& \text{if } (x,y)\ne (0,0)\\ 0 & \text{if } (x,y)=(0,0) \end{cases} \nonumber
Compute, directly from the definitions,
- \displaystyle \frac{\partial f}{\partial x}(0,0)
- \displaystyle \frac{\partial f}{\partial y}(0,0)
- \displaystyle \frac{d}{dt} f(t,t)\Big|_{t=0}
Stage 2
Find all first partial derivatives of the following functions and evaluate them at the given point.
- \displaystyle f(x,y,z)=x^3y^4z^5\qquad (0,-1,-1)
- \displaystyle w(x,y,z)=\ln\left(1+e^{xyz}\right)\qquad (2,0,-1)
- \displaystyle f(x,y)=\frac{1}{\sqrt{x^2+y^2}}\qquad (-3,4)
Show that the function z(x,y)=\frac{x+y}{x-y} obeys
x\frac{\partial z}{\partial x}(x,y)+y\frac{\partial z}{\partial y}(x,y) = 0 \nonumber
A surface z(x, y) is defined by zy - y + x = \ln(xyz)\text{.}
- Compute \frac{\partial z}{\partial x}\text{,} \frac{\partial z}{\partial y} in terms of x\text{,} y\text{,} z\text{.}
- Evaluate \frac{\partial z}{\partial x} and \frac{\partial z}{\partial y} at (x, y, z) = (-1, -2, 1/2)\text{.}
Find \frac{\partial U}{\partial T} and \frac{\partial T}{\partial V} at (1, 1, 2, 4) if (T, U, V, W) are related by
(TU-V)^2 \ln(W-UV) = \ln 2 \nonumber
Suppose that u = x^2 + yz\text{,} x = \rho r \cos(\theta)\text{,} y = \rho r \sin(\theta) and z = \rho r\text{.} Find \frac{\partial u}{\partial r} at the point (\rho_0 , r_0 , \theta_0) = (2, 3, \pi/2)\text{.}
Use the definition of the derivative to evaluate f_x(0,0) and f_y(0,0) for
f(x,y)=\begin{cases} \frac{x^2-2y^2}{x-y}&\text{if } x\ne y\\ 0&\text{if } x=y \end{cases} \nonumber
Stage 3
Let f be any differentiable function of one variable. Define z(x,y)=f(x^2+y^2)\text{.} Is the equation
y\frac{\partial z}{\partial x}(x,y)-x\frac{\partial z}{\partial y}(x,y) = 0 \nonumber
necessarily satisfied?
Define the function
f(x,y)=\begin{cases}\frac{(x+2y)^2}{x+y}& \text{if } x+y\ne 0 \\ 0 &\text{if } x+y=0 \end{cases} \nonumber
- Evaluate, if possible, \frac{\partial f}{\partial x}(0,0) and \frac{\partial f}{\partial y}(0,0)\text{.}
- Is f(x,y) continuous at (0,0)\text{?}
Consider the cylinder whose base is the radius-1 circle in the xy-plane centred at (0,0)\text{,} and which slopes parallel to the line in the yz-plane given by z=y\text{.}
When you stand at the point (0,-1,0)\text{,} what is the slope of the surface if you look in the positive y direction? The positive x direction?
- There are applications in which there are several variables that cannot be varied independently. For example, the pressure, volume and temperature of an ideal gas are related by the equation of state PV= \text{(constant)} T\text{.} In those applications, it may not be clear from the context which variables are being held fixed.
- It is also possible to evaluate the derivative by using the technique of the optional Section 2.15 in the CLP-1 text.
- The only real number z which obeys z^5=-1 is z=-1\text{.} However there are four other complex numbers which also obey z^5=-1\text{.}
- You should have already seen this technique, called implicit differentiation, in your first Calculus course. It is covered in Section 2.11 in the CLP-1 text.
- If you are not familiar with polar coordinates, don't worry about it. There will be an introduction to them in §3.2.1.
- Matrix multiplication is usually covered in courses on linear algebra, which you may or may not have taken. That's why this example is optional.