Numerical Differentiation
The FunctionMath class contains a number of static (Shared) methods for approximating the derivative of a function. Finite differences are used to approximate the derivative. An adaptive method finds the best choice of finite difference and estimates the error. You can also create a delegate that evaluates the derivative of an arbitrary function numerically.
All methods listed in this section are defined as extension methods. This means that in most languages they can be called as if they were instance methods of their first argument. The examples illustrate how this is done.
Direct numerical differentiation
The Derivative method takes two or three parameters. The first argument is a Func<T, TResult> that specifies the function to differentiate. The second argument specifies the point at which to evaluate the derivative. The following example evaluates the numerical derivative of the cosine function at x = 1.
Console.WriteLine("Numerical derivative of cos(x):");
double result = FunctionMath.Derivative(Math.Cos, 1.0);
Console.WriteLine(" Result = {0}", result);
Console.WriteLine(" Actual = {0}", -Math.Sin(1));
To calculate the second or higher derivative, pass in the order as the third parameter.
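The standard central-difference formulas for the first and second derivative can be sketched as follows. This is a minimal Python illustration of the underlying textbook formulas, not the library's implementation; the helper name is made up.

```python
import math

def central_derivative(f, x, order=1, h=1e-5):
    """Approximate the first or second derivative of f at x
    using central differences (illustrative only)."""
    if order == 1:
        # f'(x) ~ (f(x+h) - f(x-h)) / (2h), truncation error O(h^2)
        return (f(x + h) - f(x - h)) / (2.0 * h)
    if order == 2:
        # f''(x) ~ (f(x+h) - 2 f(x) + f(x-h)) / h^2, truncation error O(h^2)
        return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)
    raise ValueError("order must be 1 or 2")

print(central_derivative(math.cos, 1.0))           # close to -sin(1)
print(central_derivative(math.cos, 1.0, order=2))  # close to -cos(1)
```

Both formulas have an error roughly proportional to the square of the step size, which matches the library's default accuracy order of 2.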
Other optional arguments allow you to specify the kind of finite differences to use in the calculation. The first of these, direction, of type DifferencesDirection, specifies the direction of the finite differences. The possible values for this parameter are summarized in the table below.
Value | Description |
---|---|
Backward | Backward differences are used. Only function values at the target point and to the left of the target point are used. |
Central | Central differences are used. Function values on both sides of the target point are used. |
Forward | Forward differences are used. Only function values at the target point and to the right of the target point are used. |
By default central differences are used. This means that the target function is evaluated on both sides of the target point.
The BackwardDerivative, CentralDerivative and ForwardDerivative methods call these respective algorithms directly.
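The three directions correspond to the textbook difference quotients. A small Python sketch (illustrative only; it assumes nothing about the library's internals):

```python
import math

# The three difference directions, written out for f = exp at x = 0,
# where the true derivative is exp(0) = 1.
f, x, h = math.exp, 0.0, 1e-6

forward  = (f(x + h) - f(x)) / h          # samples at x and to the right
backward = (f(x) - f(x - h)) / h          # samples at x and to the left
central  = (f(x + h) - f(x - h)) / (2*h)  # samples on both sides of x

print(forward, backward, central)
```

The one-sided quotients have error O(h), while the central quotient has error O(h²), which is why central differences are the default.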
Another optional argument is the accuracyOrder of the finite difference method. The default value is 2, which means that the error is roughly proportional to the square of the step size. Higher orders are often more accurate, especially for higher-order derivatives.
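The effect of the accuracy order can be illustrated with the standard second-order and fourth-order central-difference formulas. This is a Python sketch of the generic formulas; the helper names are made up.

```python
import math

def d1_order2(f, x, h):
    # Second-order central difference: truncation error ~ h**2
    return (f(x + h) - f(x - h)) / (2*h)

def d1_order4(f, x, h):
    # Fourth-order central difference: truncation error ~ h**4
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

# Differentiate sin at x = 1 (true derivative: cos(1)) with the same step:
h = 1e-2
err2 = abs(d1_order2(math.sin, 1.0, h) - math.cos(1.0))
err4 = abs(d1_order4(math.sin, 1.0, h) - math.cos(1.0))
print(err2, err4)  # the fourth-order error is much smaller
```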
Next is a boolean value, adaptive, that specifies whether the step size should be computed based on actual function values. If false (the default), a standard value for the step size is used.
The example below computes the second derivative of the cosine function using a fourth-order method. The step size is computed adaptively.
result = FunctionMath.Derivative(Math.Cos, 1.0, 2,
accuracyOrder: 4, adaptive: true);
Console.WriteLine(" Order 2: {0}", result);
Console.WriteLine(" Actual: {0}", -Math.Cos(1));
In some cases, the computation of the derivative may fail because the function is evaluated outside of its domain. Differentiating Math.Log at x = 0.001 is an example that may lead to problems: a large enough step samples the function at a non-positive argument. Two optional parameters, xMin and xMax, may be used to address this problem. These parameters specify minimum and maximum values for the argument of the function, ensuring that every function evaluation stays within the function's domain.
var actual = FunctionMath.Derivative(Math.Log, 0.001, accuracyOrder: 4);
var adaptive = FunctionMath.Derivative(Math.Log, 0.001,
accuracyOrder: 4, adaptive: true);
var bounded = FunctionMath.Derivative(Math.Log, 0.001,
accuracyOrder: 4, adaptive: true, xMin: 0.000001);
Console.WriteLine(" Standard: {0}", actual);
Console.WriteLine(" Adaptive: {0}", adaptive);
Console.WriteLine(" Bounded: {0}", bounded);
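The idea behind xMin and xMax can be sketched as clamping the sample points to the allowed interval, which falls back to a one-sided difference near a boundary. This is a hypothetical Python illustration, not the library's actual algorithm:

```python
import math

def bounded_derivative(f, x, h=1e-6, x_min=-math.inf, x_max=math.inf):
    """Difference quotient whose sample points are clamped to
    [x_min, x_max], so f is never evaluated outside its domain."""
    lo = max(x - h, x_min)
    hi = min(x + h, x_max)
    return (f(hi) - f(lo)) / (hi - lo)

# d/dx log(x) at x = 0.001 is 1000; the bound keeps all samples positive.
print(bounded_derivative(math.log, 0.001, x_min=1e-6))
```

Without the bound, a step that crosses zero would make math.log raise an error; with it, the quotient simply becomes one-sided at the boundary.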
Some functions are expensive to calculate and use an iterative method to compute the result up to a desired precision. This effectively adds noise to the function values. The smaller the step size, the more of the difference between the function values is due to the noise. This can lead to catastrophic results.
The noiseFactor parameter helps address this by specifying the relative noise level in the function values so that the step size can be adjusted accordingly. The value is relative to the machine precision, so if the noise is of the order of 1e-6, the value for the noise level should be around 1e10.
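The effect of noise on the step-size choice can be illustrated as follows. This is a Python sketch with deterministic, artificial noise, not the library's implementation:

```python
import math

def central(f, x, h):
    return (f(x + h) - f(x - h)) / (2*h)

def noisy(f, noise):
    """Simulate a function whose values carry relative noise of size
    `noise`, e.g. from an iterative inner computation. Deterministic
    alternating noise is used so the example is reproducible."""
    state = {"sign": 1.0}
    def g(x):
        state["sign"] = -state["sign"]
        return f(x) * (1.0 + noise * state["sign"])
    return g

g = noisy(math.sin, 1e-6)
# With a tiny step, the noise swamps the true difference in the values:
too_small = abs(central(g, 1.0, 1e-8) - math.cos(1.0))
# A larger, noise-aware step trades some truncation error for stability:
adjusted = abs(central(g, 1.0, 1e-2) - math.cos(1.0))
print(too_small, adjusted)
```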
Delegates for Numerical Derivatives
It may be useful to be able to use the numerical derivative as a function that can be evaluated at arbitrary points. This can be done using the GetNumericalDifferentiator method. This method returns a Func<T, TResult> delegate that represents the derivative of the argument passed to the method. By default, central differences are used. To obtain forward or backward derivatives, use the GetForwardDifferentiator and GetBackwardDifferentiator methods, respectively.
Func<double, double> f = Math.Cos;
Func<double, double> df = FunctionMath.GetNumericalDifferentiator(f);
// Pass both to an equation solver:
double root = FunctionMath.FindZero(f, df, 1.0);
Using a numerical derivative in an algorithm that requires one, as in the example above, is usually not a good idea. Most such algorithms have derivative-free alternatives, though these may be more expensive in terms of function evaluations. Still, numerical differentiation is itself relatively expensive, costing at least 5 evaluations of the original function per derivative evaluation. This is an important consideration when deciding which algorithm to use.
Numerical gradients and Jacobians
In addition to simple derivatives of functions of one variable, numerical approximations to the gradient of a multivariate function, or the Jacobian of a set of multivariate functions, can also be computed. The GetNumericalGradient method returns a delegate that evaluates the gradient of its argument. The GetNumericalJacobian method does the same for the Jacobian of a set of functions.
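The underlying idea, one difference quotient per coordinate, can be sketched in Python (illustrative only, not the library API):

```python
# Numerical gradient of a multivariate function: perturb one
# coordinate at a time and form a central difference for each.
def numerical_gradient(f, x, h=1e-6):
    grad = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        grad.append((f(xp) - f(xm)) / (2*h))
    return grad

# f(x, y) = x^2 + 3y has gradient (2x, 3); at (1, 2) that is (2, 3).
f = lambda v: v[0]**2 + 3*v[1]
print(numerical_gradient(f, [1.0, 2.0]))
```

A Jacobian is obtained the same way, with one such gradient row per component function, which makes the cost grow with both the number of variables and the number of functions.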
A note on accuracy
Numerical differentiation is an inherently unstable process. There is no way to avoid subtracting two numbers that are very close, which results in significant round-off error. The best we can hope for is a relative accuracy of roughly the square root of the machine precision (SqrtEpsilon), i.e. about half the significant digits.
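The round-off floor is easy to demonstrate: shrinking the step below the optimum makes the result worse, not better (a Python sketch):

```python
import math

true_value = math.cos(1.0)

def central(h):
    # Central difference for d/dx sin(x) at x = 1
    return (math.sin(1.0 + h) - math.sin(1.0 - h)) / (2*h)

# Moderate step: truncation-dominated, error is tiny.
moderate = abs(central(1e-4) - true_value)
# Step near machine precision: 1.0 + 1e-16 rounds to 1.0 in double
# precision, so the difference quotient loses almost all its digits.
too_fine = abs(central(1e-16) - true_value)
print(moderate, too_fine)
```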
Reference
S.D. Conte and Carl de Boor, Elementary Numerical Analysis: An Algorithmic Approach, McGraw-Hill, 1972.