Finite Difference Methods

The computation of derivatives is a fundamental operation in calculus. Given a function \(f(x)\), the derivative \(f'(x)\) is a new function that gives the rate of change of \(f(x)\) at each point \(x\). The derivative is defined as the limit of the difference quotient:

\[f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}.\]

For simple functions, the derivative is straightforward to compute, but doing it by hand is still error-prone. That is why we added automatic differentiation to Numerics.NET years ago.

However, automatic differentiation in its current form has limitations. It can only compute derivatives for relatively simple expressions. For more complex functions, we need to use numerical methods to approximate the derivative.

Numerical Derivatives

Numerical differentiation works by approximating the derivative of a function using some variation of the definition above. Instead of taking the limit, we use a small value of \(h\), the step size, to approximate the derivative. This is known as a finite difference approximation:

\[f'(x) \approx \frac{f(x+h) - f(x)}{h}\]

Unfortunately, this formula is badly conditioned. The smaller the value of \(h\), the closer the values of \(f(x)\) and \(f(x+h)\) are to each other. And so we are subtracting two very similar numbers, which leads to a significant loss of precision.

The problem gets worse the smaller \(h\) gets. This is known as round-off error.

There is another source of error. By using a finite value for \(h\) and not taking the limit, we are making an approximation. To make the approximate equality an exact equality, we need to add an error term. In this case, the error term is roughly proportional to the step size \(h\).

In other words: the larger the value we choose for \(h\), the larger this error term will be. This is known as truncation error.

So, choosing a small value for \(h\) leads to round-off error and choosing a large value for \(h\) leads to truncation error. Computing numerical derivatives is an exercise in balancing these two errors.
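
To see this tradeoff in action, here is a small, self-contained sketch (it uses only System.Math, not Numerics.NET) that applies the approximation above to the exponential function at \(x = 1\) for a range of step sizes. As \(h\) shrinks, the error first decreases because the truncation error dominates, then increases again as round-off error takes over:

C#
Func<double, double> f = Math.Exp;
double x = 1.0;
double exact = Math.Exp(1.0); // f'(x) = e^x, so the exact derivative at x = 1 is e

// Try step sizes 1e-1, 1e-2, ..., 1e-15 and print the absolute error.
for (int p = 1; p <= 15; p++)
{
    double h = Math.Pow(10, -p);
    double approx = (f(x + h) - f(x)) / h;
    Console.WriteLine($"h = 1e-{p,-2}  error = {Math.Abs(approx - exact):E3}");
}

On a typical machine, the smallest error occurs roughly around \(h \approx 10^{-8}\); for smaller step sizes, round-off error dominates.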

It’s bad enough for first derivatives, but it gets worse for higher-order derivatives. With each new order, you’re adding another limit approximation. How can we escape from this conundrum?

Finite differences

We can’t really do anything about the round-off error. There is no way around subtracting values that are almost equal. But we can do something about the truncation error.

The approximation we used has an error term that is proportional to the step size. We can use a different approximation that has a smaller error term. This is the idea behind finite differences.

Our first approximation evaluates the function at two points: \(x\) and \(x+h\). We can say that these points are at offsets of zero and one (in multiples of \(h\)) from the point \(x\). Looking at the numerator only, the coefficient of \(f(x)\) is \(-1\) and the coefficient of \(f(x+h)\) is \(1\).

We can use other combinations of function values at different points (with different offsets) to approximate the derivative. For example, the central difference formula is

\[f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}.\]

Here, the offsets are \(+1\) and \(-1\), and the coefficients are \(1/2\) and \(-1/2\). As it turns out, this simple change makes the error term much smaller: it is proportional to the square of the step size.
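
A quick way to see this in action is to halve the step size: the error of the original approximation roughly halves, while the central difference error drops by about a factor of four. Here is a small sketch, again using only System.Math:

C#
Func<double, double> f = Math.Sin;
double x = 1.0, exact = Math.Cos(1.0);

foreach (var h in new[] { 1e-2, 5e-3, 2.5e-3 })
{
    double forward = (f(x + h) - f(x)) / h;           // error roughly proportional to h
    double central = (f(x + h) - f(x - h)) / (2 * h); // error roughly proportional to h^2
    Console.WriteLine($"h = {h}: forward error = {Math.Abs(forward - exact):E2}, "
        + $"central error = {Math.Abs(central - exact):E2}");
}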

We can generalize this idea to use more points and to approximate higher-order derivatives as well. We can approximate the \(k\)th derivative using \(n\) function evaluations as follows:

\[f^{(k)}(x) \approx \frac{\sum_{i=1}^n c_i f(x+g_ih)}{h^k}.\]

This leaves us with one question: Given a set of offsets, \(g_i\), how do we compute the coefficients \(c_i\) so that we have a good approximation?

Since \(h\) is very small, we can expand each \(f(x+g_ih)\) in a Taylor series around \(x\), and plug this into our equation:

\[f^{(k)}(x) \approx \frac{\sum_{i=1}^n c_i \sum_{j=0}^\infty f^{(j)}(x)\frac{(g_ih)^j}{j!}}{h^k}.\]

After swapping the order of summation and rearranging, we find that

\[f^{(k)}(x) \approx \sum_{j=0}^\infty f^{(j)}(x) \frac{h^{j-k}}{j!} \sum_{i=1}^n c_i g_i^j.\]

We now compare the coefficients of the derivatives of \(f\) on both sides. This tells us that we get a good approximation when the coefficient of \(f^{(j)}(x)\) equals \(1\) for \(j = k\) and zero for \(j \neq k\). Since we have only \(n\) coefficients, we can only impose conditions on the first \(n\) terms. This gives us a system of \(n\) equations in \(n\) unknowns:

\[\sum_{i=1}^n c_i g_i^j = y_j\]

for \(j = 0, 1, \ldots, n-1\), where \(y_j\) is \(k!\) when \(j=k\) and zero otherwise. Solving this system of equations gives us the coefficients \(c_i\) that are optimal for a given set of offsets \(g_i\).
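
As a quick check, take the central difference formula for the first derivative: \(k = 1\), \(n = 2\), with offsets \(g_1 = -1\) and \(g_2 = 1\). The system becomes

\[c_1 + c_2 = 0, \qquad -c_1 + c_2 = 1,\]

so \(c_1 = -1/2\) and \(c_2 = 1/2\), which are exactly the coefficients of the central difference formula above.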

We see that to approximate the \(k\)th derivative, we need to evaluate the function at \(k+1\) or more points. The truncation error using \(n\) points is proportional to the \(n-k\)th power of the step size. This is called the accuracy order of the method.

So, the forward difference formula has order 1, while the central difference formula has order 2. This is true even though we’re only using two points!

Higher-order methods are generally more accurate, but they require more function evaluations. Even this has its limit, however. Round-off error will eventually dominate.

The FiniteDifferenceMethod class

New in version 9.0.2 of Numerics.NET is the FiniteDifferenceMethod class. This class represents a finite difference method. Each method is specific to a set of offsets and the derivative order. The class provides methods to approximate that derivative using the given offsets.

The offsets can be specified in two ways: as a list of integers or as a direction and a desired accuracy order.

Creating finite difference methods

Earlier we called our initial approximation the forward difference formula. This is because we are only using forward-looking points that are greater than or equal to the point \(x\). Put another way, the offsets are positive or zero. The Forward method returns a finite difference method that uses consecutive offsets starting at 0.

For example, a third order forward method for the second derivative would use the offsets 0, 1, 2, 3, 4, and 5.

We can also use backward-looking points whose offsets are negative or zero. The Backward method returns a finite difference method that uses consecutive offsets ending at 0.

For example, a third order backward method for the second derivative would use the offsets -5, -4, -3, -2, -1, and 0.

We can also use points on either side of the point \(x\). The Central method returns a finite difference method that uses consecutive offsets that are symmetrical around 0.

It’s actually a little more complicated than that. For symmetry reasons, there are no odd-order central difference methods, and you need one fewer point to get the same accuracy order. So, a fourth order central method for the second derivative uses the offsets -2, -1, 0, 1, and 2. A fourth order central method for the first derivative uses the offsets -2, -1, 1, and 2.

If you want to use a different set of offsets that doesn’t fit any of these categories, you can use the FromOffsets method.
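
To make this concrete, here is a sketch of what creating these methods could look like. Only the Central factory method appears with its arguments later in this article; the argument order shown for Forward, Backward, and especially FromOffsets is an assumption, not the documented API:

C#
// Assumed argument order: (derivative order, accuracy order), matching the
// FiniteDifferenceMethod.Central(1, 4) call used later in this article.
var forward = FiniteDifferenceMethod.Forward(2, 3);   // 2nd derivative, 3rd order, forward-looking offsets
var backward = FiniteDifferenceMethod.Backward(2, 3); // 2nd derivative, 3rd order, backward-looking offsets
var central = FiniteDifferenceMethod.Central(2, 4);   // 2nd derivative, 4th order, offsets -2, -1, 0, 1, 2

// Hypothetical call: we assume FromOffsets takes the derivative order and the offsets.
var custom = FiniteDifferenceMethod.FromOffsets(1, new[] { -2, -1, 0, 3 });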

Computing derivatives

The EstimateDerivative method of the FiniteDifferenceMethod class computes an approximation of the derivative of a function at a point. The method takes the function to differentiate, the point at which to compute the derivative, and various options.

One overload lets you supply a step size. This can be useful if the function is only known at discrete, equally spaced points.

The second overload is more interesting. It has many options that let you control the behaviour of the method to deal with specific challenges.

The FunctionMath class has a Derivative method that offers the same functionality. You can specify which finite difference method to use. The default is to compute the first derivative using second order central differences.

The corresponding finite difference methods are cached so there is minimal overhead in calling the method for different values.

There’s also the GetDifferentiator method which returns a function that computes the derivative of a specific function.
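
For example, something along these lines (the exact signature of GetDifferentiator is not shown in this article, so treat this call as an assumption rather than the documented API):

C#
// Hypothetical usage: we assume GetDifferentiator takes the function and returns
// a Func<double, double> that approximates its derivative.
Func<double, double> dSin = FunctionMath.GetDifferentiator(Math.Sin);
Console.WriteLine(dSin(1.0)); // should be close to Math.Cos(1), about 0.5403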

The simple case

Without any further options, all these methods use a pre-computed step size that is close to optimal for a well-behaved function. It works well in most cases:

C#
var expected = Math.Cos(1);
var actual = FunctionMath.Derivative(Math.Sin, 1.0, accuracyOrder: 4);
Console.WriteLine($"Expected: {expected}");
Console.WriteLine($"Actual: {actual}");
Console.WriteLine($"Difference: {expected - actual}");

This produces the following output. The result is very close to the exact value:

Expected: 0.5403023058681398
Actual: 0.5403023058681846
Difference: -4.4853010194856324E-14

Adaptive step size

What if the function is not so well-behaved?

It’s actually possible to estimate both the round-off error and the truncation error of a finite difference method at a specific point. All it takes is a few extra function evaluations at carefully selected points. Using this information, it is possible to compute the step size that minimizes the total error.

Let’s look at an example. We use the sine function again, but this time we multiply the argument by 100. This produces a highly oscillatory function:

C#
double x0 = 1;
Func<double, double> f = x => Math.Sin(100 * x);
var expected = 100 * Math.Cos(100 * x0);
var actual = FunctionMath.Derivative(f, x0, accuracyOrder: 4);
Console.WriteLine($"Expected: {expected}");
Console.WriteLine($"Actual: {actual}");
Console.WriteLine($"Difference: {expected - actual}");

Using the standard settings, the result is a little bit disappointing:

Expected: 86.23188722876839
Actual: 86.23188703937242
Difference: 1.8939596202471876E-07

So let’s see what we get when we use an adaptive step size, which we can do as follows:

C#
var adaptive = FunctionMath.Derivative(f, x0, accuracyOrder: 4, adaptive: true);
Console.WriteLine($"Adaptive: {adaptive}");
Console.WriteLine($"Difference: {expected - adaptive}");

The result is much better:

Adaptive: 86.23188722876831
Difference: 7.105427357601002E-14

Domain boundaries and other discontinuities

The adaptive step size works great in many situations, but there’s still a potential problem.

C#
var expected = 1000.0;
var actual = FunctionMath.Derivative(Math.Log, 0.001, accuracyOrder: 4);
var adaptive = FunctionMath.Derivative(Math.Log, 0.001,
    accuracyOrder: 4, adaptive: true);
Console.WriteLine($"Expected: {expected}");
Console.WriteLine($"Actual: {actual}");
Console.WriteLine($"Difference: {expected - actual}");
Console.WriteLine($"Adaptive: {adaptive}");
Console.WriteLine($"Difference: {expected - adaptive}");

Here’s what we get:

Expected: 1000
Actual: 999.4201015337821
Difference: 0.5798984662178555
Adaptive: NaN
Difference: NaN

The standard result is not that great. We only get three correct digits. But using an adaptive step size produces an invalid result. What is going on here?

The problem is that the logarithm function is not defined for \(x < 0\). The computed step size is so large that at least one of the points where the function is evaluated is less than zero. That’s a problem.

The solution is to limit the values where the function may be evaluated. The xMin and xMax parameters of the Derivative method specify the lower and upper bounds:

C#
var bounded = FunctionMath.Derivative(Math.Log, 0.001, 
    accuracyOrder: 4, adaptive: true, xMin: 0.000001);
Console.WriteLine($"Limited: {bounded}");
Console.WriteLine($"Difference: {expected - bounded}");

This produces a result that is 9 orders of magnitude better than the original:

Limited: 999.9999999994158
Difference: 5.842366590513848E-10

Noisy inputs

One underlying assumption of the finite difference methods is that the function evaluations are accurate. This is not always true.

Some functions are expensive to calculate and use an iterative method to compute the result up to a desired precision. This effectively adds noise to the function values. The smaller the step size, the more of the difference between the function values is due to the noise.

To illustrate this, let’s add some noise to a sine function:

C#
Func<double> noise = () => NormalDistribution.Standard.Sample(Random.Shared);
Func<double, double> f = x => Math.Sin(x) + 1e-6 * noise();
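// A central difference method for the 1st derivative with accuracy order 4.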
var method = FiniteDifferenceMethod.Central(1, 4);
var expected = Math.Cos(1);
var actual = method.EstimateDerivative(f, 1.0);
var adaptive = method.EstimateDerivative(f, 1.0, adaptive: true);
Console.WriteLine($"Expected: {expected}");
Console.WriteLine($"Actual: {actual}");
Console.WriteLine($"Difference: {expected - actual}");
Console.WriteLine($"Adaptive: {adaptive}");
Console.WriteLine($"Difference: {expected - adaptive}");

As expected, the result is not good. Using an adaptive step size doesn’t help:

Expected: 0.5403023058681398
Actual: 0.5371561879390876
Difference: 0.0031461179290521724
Adaptive: 0.585209344195681
Difference: -0.04490703832754128

We can pass the noiseFactor argument to the EstimateDerivative method to account for the noise. This factor is a rough estimate of the ratio of the noise to the machine precision. So in this case, we’ll use 1e10:

C#
var noisy = method.EstimateDerivative(f, 1.0, noiseFactor: 1e10);
Console.WriteLine($"Limited: {noisy}");
Console.WriteLine($"Difference: {expected - noisy}");

The result is about as good as can be expected:

With noise factor: 0.5403005485406153
Difference: 1.757327524498642E-06

Note that these results are based on random values. Results may vary, but the overall effect is consistent.

Higher order derivatives

Previously, the numerical differentiation methods in Numerics.NET were limited to approximating the first derivative only. Thanks to the FiniteDifferenceMethod class you can now approximate higher order derivatives as well.

For example, to approximate the second derivative of the sine function at 1:

C#
var expected = -Math.Sin(1);
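// The third argument (2) is the order of the derivative to compute.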
var actual2 = FunctionMath.Derivative(Math.Sin, 1.0, 2);
var actual4 = FunctionMath.Derivative(Math.Sin, 1.0, 2, accuracyOrder: 4);
var actual6 = FunctionMath.Derivative(Math.Sin, 1.0, 2, accuracyOrder: 6,
    adaptive: true);
Console.WriteLine($"Expected: {expected}");
Console.WriteLine($"Actual (2nd order):  {actual2}");
Console.WriteLine($"Difference: {expected - actual2}");
Console.WriteLine($"Actual (4nd order): {actual4}");
Console.WriteLine($"Difference: {expected - actual4}");
Console.WriteLine($"Actual (6nd order): {actual6}");
Console.WriteLine($"Difference: {expected - actual6}");

This produces the following output:

Expected: -0.8414709848078965
Actual (2nd order):  -0.8414713541666666
Difference: 3.693587701247836E-07
Actual (4th order): -0.8414709848412395
Difference: 3.334299503165994E-11
Actual (6th order): -0.8414709848081081
Difference: 2.1160850849355484E-13

This illustrates the effectiveness of higher (accuracy) order methods when computing higher order derivatives, especially in combination with an adaptive step size.

Conclusion

The addition of the FiniteDifferenceMethod class to Numerics.NET, and the improvements to the FunctionMath class that it enabled, considerably enhance the library's capabilities for numerical differentiation.

Computing second and higher order derivatives is now possible. You have more control to ensure an accurate result by choosing the accuracy order and using an adaptive step size. You also have the tools to handle specific challenges like noisy inputs and discontinuities.