Numerical Differentiation

The FunctionMath class contains a number of static (Shared) methods for approximating the derivative of a function. Finite differences are used to approximate the derivative. An adaptive method finds the best choice of finite difference and estimates the error. You can also create a delegate that numerically evaluates the derivative of an arbitrary function.

All methods listed in this section are defined as extension methods. This means that in most languages they can be called as if they were instance methods of their first argument. The examples illustrate how this is done.

Direct numerical differentiation

The Derivative method takes two or three parameters. The first argument is a Func<T, TResult> that specifies the function to differentiate. The second argument specifies the point at which to evaluate the derivative. The following example evaluates the numerical derivative of the cosine function at x = 1.

C#
Console.WriteLine("Numerical derivative of cos(x):");
double result = FunctionMath.Derivative(Math.Cos, 1.0);
Console.WriteLine("  Result = {0}", result);
Console.WriteLine("  Actual = {0}", -Math.Sin(1));

The optional third parameter, of type DifferencesDirection, specifies the kind of finite differences to use in the calculation. The possible values for this parameter are summarized in table 3.

Value     Description

Backward  Backward differences are used. Only function values at the target point and to the left of the target point are used.

Central   Central differences are used. Function values on both sides of the target point are used.

Forward   Forward differences are used. Only function values at the target point and to the right of the target point are used.

Passing DifferencesDirection.Central, or omitting the third parameter, invokes the algorithm using central differences. The target function is evaluated on both sides of the target point.

In some cases this may not work. If the target function is undefined on one side of the target point, the algorithm will break down. If the target function contains a discontinuity at or near the target point, the result will be wrong. In these cases, you should use either forward or backward differences.

Passing DifferencesDirection.Backward to Derivative invokes the algorithm using backward differences. The target function is evaluated only at the target point and to its left, never to its right. Likewise, passing DifferencesDirection.Forward invokes the algorithm using forward differences. The target function is evaluated only at the target point and to its right, never to its left.
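For example, the absolute value function has a kink at zero, where the one-sided derivatives differ. Forward and backward differences recover each one-sided slope, while central differences would mix values from both sides. The following sketch assumes the three-argument Derivative overload described above:

C#
// |x| has a kink at 0: the one-sided derivatives are +1 and -1.
Func<double, double> f = x => Math.Abs(x);
double forward = FunctionMath.Derivative(f, 0.0, DifferencesDirection.Forward);
double backward = FunctionMath.Derivative(f, 0.0, DifferencesDirection.Backward);
Console.WriteLine("Forward:  {0}", forward);   // close to +1
Console.WriteLine("Backward: {0}", backward);  // close to -1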

The BackwardDerivative, CentralDerivative and ForwardDerivative methods call these respective algorithms directly. They optionally take an out parameter, in which the estimated absolute error is returned.
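A sketch of a direct call with the error estimate; the position of the out parameter after the point argument is an assumption:

C#
double estimatedError;
double derivative = FunctionMath.CentralDerivative(Math.Cos, 1.0, out estimatedError);
Console.WriteLine("Derivative = {0}", derivative);
Console.WriteLine("Estimated absolute error = {0}", estimatedError);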

Numerical derivatives

It may be useful to be able to use the numerical derivative as a function that can be evaluated at arbitrary points. This can be done using the GetNumericalDifferentiator method. This method returns a Func<T, TResult> delegate that represents the derivative of the argument passed to the method. By default, central differences are used. To obtain forward or backward derivatives, use the GetForwardDifferentiator and GetBackwardDifferentiator methods, respectively.

C#
Func<double, double> f = Math.Cos;
Func<double, double> df = FunctionMath.GetNumericalDifferentiator(f);
// Pass both to an equation solver:
double root = FunctionMath.FindZero(f, df, 1.0);

Using a numerical derivative in an algorithm that requires a derivative, as in the example above, is usually not a good idea. Most such algorithms have derivative-free alternatives, although these may be more expensive in terms of function evaluations. However, numerical differentiation is itself relatively expensive, costing at least 5 evaluations of the original function per derivative. This is an important consideration when deciding which algorithm to use.

Numerical gradients and Jacobians

In addition to simple derivatives of functions of one variable, numerical approximations to the gradient of a multivariate function, or the Jacobian of a set of multivariate functions, can also be computed. The GetNumericalGradient method returns a delegate that evaluates the gradient of its argument. The GetNumericalJacobian method similarly returns a delegate that evaluates the Jacobian of a set of functions.
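A sketch of the gradient case follows. The Vector<double> type and the Vector.Create factory are assumed from the library's vector classes, and the exact delegate signatures may differ:

C#
// f(x, y) = x^2 + sin(y); the exact gradient is (2x, cos(y)).
Func<Vector<double>, double> f = v => v[0] * v[0] + Math.Sin(v[1]);
Func<Vector<double>, Vector<double>> gradient = FunctionMath.GetNumericalGradient(f);
Vector<double> g = gradient(Vector.Create(1.0, 0.0));
// g should be approximately (2, 1).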

A note on accuracy

Numerical differentiation is an inherently unstable process. There is really no way to avoid subtracting two numbers that are very close, resulting in very significant round-off error. The best we can hope for is a relative accuracy of roughly half the machine precision (SqrtEpsilon).
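The effect is easy to demonstrate with a naive forward difference, using no library calls. As the step size h shrinks, the truncation error decreases, but the round-off error from subtracting nearly equal function values grows. The total error is smallest near the square root of the machine precision, about 1e-8 for doubles:

C#
double x = 1.0;
double exact = -Math.Sin(x);
foreach (double h in new[] { 1e-2, 1e-5, 1e-8, 1e-11, 1e-14 })
{
    // Naive forward difference; subtracting nearly equal numbers
    // amplifies round-off as h gets very small.
    double approx = (Math.Cos(x + h) - Math.Cos(x)) / h;
    Console.WriteLine("h = {0:e0}  error = {1:e2}", h, Math.Abs(approx - exact));
}

The error shrinks down to roughly h = 1e-8 and then grows again as round-off dominates.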
