
This is an extract of our **Multivariable Functions** document, which
we sell as part of our **Mathematics for Natural Sciences Notes** collection written by the top tier of
Cambridge University students.

* The following is a more accessible plain-text extract of the PDF sample above, taken from our Mathematics for Natural Sciences Notes. Due to the challenges of extracting text from PDFs, it may have odd formatting.

Mathematics for NST Part IA Cambridge University, 2012-2013

Notes for Multivariable Functions

Functions of more than one variable are frequently encountered in scientific applications, when thinking about quantities which vary in more than one direction in space, or which vary in space and time.

Functions of two variables: Functions of two variables have the form f(x, y). A new variable z = f(x, y) can be introduced to visualise the function as a surface in the xyz plane. The x and y variables are independent, thus the behaviour of f in any direction can be formed from separate components in the x and y directions.

Partial derivatives: Partial derivatives represent the rate of change of a multivariable function with respect to one of its variables. Geometrically, this corresponds to the function's rate of change in the direction of one of its basis vectors. For a two-variable function:

(∂f/∂x)_y = f_x = lim_{h→0} [ f(x+h, y) − f(x, y) ] / h

(∂f/∂y)_x = f_y = lim_{k→0} [ f(x, y+k) − f(x, y) ] / k

Where (∂f/∂x)_y is obtained by differentiating f with respect to x with y held constant, and (∂f/∂y)_x is obtained by differentiating f with respect to y with x held constant.

The gradient vector (∇f): At a given point in the plane defined by x and y, one can travel in any direction on the surface z = f(x, y) relative to the axes. Thus, the gradient varies with direction - it is a directional, vector quantity. It is denoted grad(f) or ∇f, where ∇ is the vector differential operator, defined in n dimensions as:

∇ = (∂/∂x₁, ∂/∂x₂, ∂/∂x₃, ..., ∂/∂xₙ)

Two dimensions:

grad(f) = ∇f = i (∂f/∂x) + j (∂f/∂y) = (∂f/∂x, ∂f/∂y)

Three dimensions:

grad(f) = ∇f = i (∂f/∂x) + j (∂f/∂y) + k (∂f/∂z) = (∂f/∂x, ∂f/∂y, ∂f/∂z)

In two dimensions:

* Direction is perpendicular to the contours of f(x, y).

* Magnitude is the rate of change perpendicular to the contours (the maximum rate of change).

In three dimensions:

* Direction is perpendicular to the equipotential surfaces of f(x, y, z).

* Magnitude is the rate of change perpendicular to the equipotentials (the maximum rate of change).

Directional derivatives: The gradient vector ∇f points in the direction of steepest ascent on the surface f. The gradient in any direction at a given point (x, y) can be obtained by taking the projection of ∇f onto a unit vector in that direction, using the scalar product. Such a derivative is called a directional derivative, and is defined by:

∇f · u

Where u is the unit vector in the appropriate direction.
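A short numerical sketch of the projection ∇f · u, assuming the hypothetical function f(x, y) = x² + 3y and finite-difference partials (neither is from the notes):

```python
import math

# Directional derivative as grad(f) . u, with u normalised to a unit vector.
def grad(g, x, y, h=1e-6):
    fx = (g(x + h, y) - g(x - h, y)) / (2 * h)
    fy = (g(x, y + h) - g(x, y - h)) / (2 * h)
    return fx, fy

def directional_derivative(g, x, y, direction):
    # Normalise the direction to a unit vector, then take the scalar product.
    dx, dy = direction
    norm = math.hypot(dx, dy)
    fx, fy = grad(g, x, y)
    return (fx * dx + fy * dy) / norm

f = lambda x, y: x**2 + 3 * y
# At (1, 0), grad f = (2, 3): the rate of change along (1, 0) is 2, while the
# rate along grad f itself is |grad f| = sqrt(13), the maximum rate of change.
print(directional_derivative(f, 1.0, 0.0, (1.0, 0.0)))
print(directional_derivative(f, 1.0, 0.0, (2.0, 3.0)))
```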

Second and higher-order partial derivatives: Higher-order partial derivatives are obtained in the same way as higher-order ordinary derivatives, by repeated differentiation:

∂²f/∂x² = f_xx = ∂/∂x (∂f/∂x)

∂²f/∂y² = f_yy = ∂/∂y (∂f/∂y)

∂²f/∂x∂y = f_xy = ∂/∂x (∂f/∂y)

∂²f/∂y∂x = f_yx = ∂/∂y (∂f/∂x)
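A quick numerical check of repeated differentiation; a sketch, assuming the hypothetical function f(x, y) = x⁴ + y², for which f_xx = 12x² exactly:

```python
# Second partial derivative via a second central difference in x, the
# discrete analogue of d/dx (df/dx) with y held constant.
def f(x, y):
    return x**4 + y**2

def f_xx(g, x, y, h=1e-4):
    return (g(x + h, y) - 2 * g(x, y) + g(x - h, y)) / h**2

print(f_xx(f, 2.0, 1.0), 12 * 2.0**2)  # both near 48.0
```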

The derivatives ∂²f/∂x∂y and ∂²f/∂y∂x are known as 'mixed partial derivatives'. In general, mixed partial derivatives in n dimensions commute if:

* They are taken with respect to independent variables in the same co-ordinate system

* The function f(x₁, x₂, ..., xₙ) is continuous and differentiable in all its variables

In such a case:

∂ⁿf/∂x₁∂x₂...∂xₙ = ∂ⁿf/∂xₙ∂xₙ₋₁...∂x₁

Proof of commutativity for continuous, differentiable functions of two variables:

Let a = f(x, y), b = f(x+h, y), c = f(x, y+k), d = f(x+h, y+k). Then:

∂²f/∂x∂y = ∂/∂x (∂f/∂y) = lim_{h→0} (1/h) [ lim_{k→0} (d−b)/k − lim_{k→0} (c−a)/k ]

∂²f/∂y∂x = ∂/∂y (∂f/∂x) = lim_{k→0} (1/k) [ lim_{h→0} (d−c)/h − lim_{h→0} (b−a)/h ]

So long as h, k → 0 is equivalent to k, h → 0 (the function f(x, y) is continuous with respect to both x and y):

∂²f/∂x∂y = ∂²f/∂y∂x = lim_{h,k→0} (1/hk) (d − c − b + a)
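The symmetric limit (d − c − b + a)/hk can be checked numerically; a minimal sketch, assuming the hypothetical smooth function f(x, y) = x³y², for which f_xy = f_yx = 6x²y exactly:

```python
# Numerical sketch of the symmetric limit, with a, b, c, d as in the proof.
def f(x, y):
    return x**3 * y**2

def mixed_partial(g, x, y, h=1e-4, k=1e-4):
    # (d - c - b + a) / (h*k) approximates f_xy (and equally f_yx)
    a = g(x, y)
    b = g(x + h, y)
    c = g(x, y + k)
    d = g(x + h, y + k)
    return (d - c - b + a) / (h * k)

print(mixed_partial(f, 1.0, 2.0), 6 * 1.0**2 * 2.0)  # both near 12.0
```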

Integration of partial derivatives: Integration of multivariable functions is carried out by the same principle as partial differentiation: the function is integrated with respect to one variable, holding all others constant. This produces functions of integration, just as the integration of single-variable functions produces constants of integration. Functions of integration can be evaluated by comparison as follows:

∂f/∂x = 2xy², ∂f/∂y = 2x²y + 2y

⇒ f = ∫ (∂f/∂x) dx = ∫ 2xy² dx = x²y² + g(y)

⇒ f = ∫ (∂f/∂y) dy = ∫ (2x²y + 2y) dy = x²y² + y² + h(x)

⇒ x²y² + g(y) = x²y² + y² + h(x)

⇒ g(y) = y² + h(x)

⇒ g(y) = y² + c, h(x) = c

⇒ f(x, y) = x²y² + y² + c
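The reconstructed function can be verified numerically against the given partial derivatives; a minimal sketch using central differences (the constant c is arbitrary):

```python
# Check that f(x, y) = x**2*y**2 + y**2 + c reproduces the given partials
# df/dx = 2*x*y**2 and df/dy = 2*x**2*y + 2*y.
c = 5.0  # arbitrary constant of integration
def f(x, y):
    return x**2 * y**2 + y**2 + c

def d_dx(g, x, y, h=1e-6):
    return (g(x + h, y) - g(x - h, y)) / (2 * h)

def d_dy(g, x, y, h=1e-6):
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

x, y = 2.0, 3.0
print(d_dx(f, x, y), 2 * x * y**2)          # both near 36.0
print(d_dy(f, x, y), 2 * x**2 * y + 2 * y)  # both near 30.0
```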

Differentials: The total differential df of a function is the limiting form of a small finite increment in the function, as small increments in its variables dx₁, dx₂, ..., dxₙ tend to zero. Differentials are defined only for real analytic functions, which:

* Are infinitely differentiable at every point

* Have a convergent Taylor series at every point

Exact equality of differentials for one-variable functions: For a real-analytic one-variable function f(x):

df = f(x+h) − f(x) ≈ f'(x)h

For increment h in x. This is an increasingly accurate approximation as h tends to zero. That is:

lim_{h→0} [ f(x+h) − f(x) − f'(x)h ] / h = 0

Geometrically, this corresponds to approximating the curve y = f(x) by a straight line; in the limit above, this is the tangent line that touches the curve at the point considered. Given this convergence, the first-order approximation can be written as an exact equality of differentials:

df = f'(x) dx = (df/dx) dx

The original interpretation (Leibniz's interpretation) of this statement is that it relates an infinitesimal change df in f(x) to an infinitesimal change dx in x. Although both quantities are vanishingly small, the absolute error in the statement is even smaller. Thus the relative error is vanishingly small and the statement is exact.

Exact equality of differentials for two-variable functions:

The equivalent approximation for a finite increment df in the function f(x, y) is given by:

df = f(x+h, y+k) − f(x, y) ≈ ah + bk

For increments h in x, k in y, and some constants a and b. This is an increasingly accurate approximation as h and k tend to zero. That is:

lim_{h→0} [ f(x+h, y+k) − f(x, y) − ah − bk ] / h = 0

lim_{k→0} [ f(x+h, y+k) − f(x, y) − ah − bk ] / k = 0

Geometrically, this corresponds to approximating the surface z = f(x, y) by a plane; in the limit above, this is the tangent plane that touches the surface at the point considered.

To find the values of a and b, we consider the cases of k = 0 and h = 0, respectively:

For k = 0:

lim_{h→0} [ f(x+h, y) − f(x, y) − ah ] / h = 0

⇒ lim_{h→0} [ ( f(x+h, y) − f(x, y) ) / h − a ] = 0

⇒ a = lim_{h→0} ( f(x+h, y) − f(x, y) ) / h = ∂f/∂x

For h = 0:

lim_{k→0} [ f(x, y+k) − f(x, y) − bk ] / k = 0

⇒ lim_{k→0} [ ( f(x, y+k) − f(x, y) ) / k − b ] = 0

⇒ b = lim_{k→0} ( f(x, y+k) − f(x, y) ) / k = ∂f/∂y

⇒ df ≈ ah + bk ≈ (∂f/∂x)dx + (∂f/∂y)dy

Which leads to the exact equality of differentials:

df = (∂f/∂x)dx + (∂f/∂y)dy

Geometrically, this corresponds to writing the infinitesimal vector df in terms of the basis [dx, dy] of infinitesimal vectors which span the tangent space at (x, y) to the surface f(x, y) = 0.

Exact equality of differentials for n-variable functions: The ideas above can be generalised to n-variable functions as follows:

df = (∂f/∂x₁)dx₁ + (∂f/∂x₂)dx₂ + ... + (∂f/∂xₙ)dxₙ

Relationship between partial and total derivatives: The exact equality of differentials gives rise to a relationship between the partial derivatives of a function f(x, y) with respect to x and y, and its total derivative with respect to some parameter t, provided that both x and y can be expressed in terms of t alone:

df = (∂f/∂x)dx + (∂f/∂y)dy

⇒ df/dt = (∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt)
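A small numerical illustration of this chain rule, assuming the hypothetical parametrisation x = cos t, y = sin t with f(x, y) = x² + y² (neither is from the notes): since f is constant (= 1) on this path, df/dt should vanish.

```python
import math

# df/dt = (df/dx)(dx/dt) + (df/dy)(dy/dt) along the path x = cos t, y = sin t.
def total_derivative(t):
    x, y = math.cos(t), math.sin(t)
    fx, fy = 2 * x, 2 * y                   # partials of f = x**2 + y**2
    dxdt, dydt = -math.sin(t), math.cos(t)  # derivatives of the path
    return fx * dxdt + fy * dydt

print(total_derivative(0.7))  # 0.0 (up to rounding)
```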

Gradient vector in terms of differentials:

∇ = (∂/∂x₁, ∂/∂x₂, ..., ∂/∂xₙ), dx = (dx₁, dx₂, ..., dxₙ), df = (∂f/∂x₁)dx₁ + (∂f/∂x₂)dx₂ + ... + (∂f/∂xₙ)dxₙ

⇒ df = ∇f · dx

Using differentials to combine errors: The finite approximation to an exact differential df can be used to represent an error in the quantity f. The error df can then be expressed in terms of the errors in its constituent quantities, dx₁, dx₂, ..., dxₙ. For a quantity which depends on two others, f(x, y), errors can be combined according to two statements of equivalence:

First statement:

df = (∂f/∂x)dx + (∂f/∂y)dy

⇒ df/f = (∂f/∂x)(dx/f) + (∂f/∂y)(dy/f)

Where df/f, dx/x and dy/y are fractional errors in f, x and y, respectively.

Second statement: By the Central Limit Theorem, distributions which depend upon more than one variable approach Gaussian distributions very quickly, thus combinations of their standard deviations/errors are Gaussian: f_x ∼ N(f_x, (df_x)²), f_y ∼ N(f_y, (df_y)²). If f = f_x + f_y:

(df)² = (df_x)² + (df_y)²

⇒ (df)² = (∂f/∂x)²(dx)² + (∂f/∂y)²(dy)²

⇒ (df/f)² = ( (∂f/∂x)(dx/f) )² + ( (∂f/∂y)(dy/f) )²

The four principal laws used to combine errors are derived from these two statements:

f = xⁿ:

df/f = (∂f/∂x)(dx/f) = n xⁿ⁻¹ (dx/xⁿ) = n (dx/x)

⇒ df/f = n (dx/x)

f = x ± y:

(df)² = (∂f/∂x)²(dx)² + (∂f/∂y)²(dy)²

⇒ (df)² = (dx)² + (dy)²

f = kx or f = x ± k:

df/f = (∂f/∂x)(dx/f) = k (dx/(kx)) = dx/x

⇒ df/f = dx/x

(For f = x ± k, the same reasoning gives df = dx: adding a constant leaves the absolute error unchanged.)

f = xy or f = x/y:

(df/f)² = ( (∂f/∂x)(dx/f) )² + ( (∂f/∂y)(dy/f) )²

= ( y (dx/(xy)) )² + ( x (dy/(xy)) )²

⇒ (df/f)² = (dx/x)² + (dy/y)²
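The product law can be sketched in code; a minimal illustration, assuming hypothetical measured values (the numbers are illustrative, not from the notes):

```python
import math

# Gaussian (in-quadrature) combination for f = x*y, per the second statement:
# (df/f)**2 = (dx/x)**2 + (dy/y)**2.
def product_error(x, dx, y, dy):
    value = x * y
    frac = math.hypot(dx / x, dy / y)  # fractional errors added in quadrature
    return value, value * frac

value, err = product_error(10.0, 0.1, 5.0, 0.1)  # 1% and 2% fractional errors
print(value, err)  # 50.0 and about 1.118
```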
