Stand-alone Quadrature Rules

Prof. Robert Piessens was the nemesis of many first-year engineering students at my alma mater, the University of Leuven in Belgium. He taught first- and second-year calculus, one of the toughest courses in a challenging curriculum. Prof. Piessens is a colourful character. He has a strong interest in Belgian Draft horses. His YouTube channel has more than 245,000 followers.

There were hundreds of students in the first year, and the course was split between two professors who each taught about half of the students. He taught the first half. His more serious but no less ruthless colleague taught the second half, which included me.

I didn’t get to enjoy his lectures until my fourth year, when he taught a course on numerical integration to a group of four. It was one of his favourite subjects and he knew it very well. Several years before, he had authored the QUADPACK library, which for a long time was the standard for numerical integration in Fortran. QUADPACK is listed on netlib.org, the repository of numerical software that was famous before GitHub took over the world.

Which brings me to today’s topic.

Quadrature Rules

Numerical integration is a fundamental operation in numerical computing. Integrals appear in many calculations, and often they cannot be computed analytically. In these cases, we need to use numerical methods to approximate the integral. A method that approximates the integral of a function is called an integration rule or a quadrature rule.

The Trapezoidal Rule

The simplest method is the trapezoidal rule. It approximates the integral of a function \(f(x)\) over an interval \([a, b]\) by the sum of the areas of trapezoids that connect the function values at the endpoints of successive subintervals. The trapezoidal rule is simple to implement and works well for smooth functions. But it is generally not very accurate. The error is proportional to the square of the width of the subintervals.
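To make the rule concrete, here is a minimal sketch in Python (my own illustration, not part of the library discussed below) of the composite trapezoidal rule, along with a check that halving the subinterval width cuts the error by roughly a factor of four:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals of width h = (b - a) / n."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

# The error is proportional to h^2, so halving h should reduce it
# by roughly a factor of 4.
exact = math.e - 1.0                      # integral of e^x over [0, 1]
err_10 = abs(trapezoid(math.exp, 0.0, 1.0, 10) - exact)
err_20 = abs(trapezoid(math.exp, 0.0, 1.0, 20) - exact)
print(err_10 / err_20)                    # close to 4
```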

Newton-Cotes Rules

The trapezoid rule is one instance of so-called Newton-Cotes rules, named after Isaac Newton and Roger Cotes. These rules approximate the integral of a function by evaluating the function at a set of equidistant points and using a weighted sum of these values.

You can use more points to get more accurate results. The trapezoid rule uses two points. Three points give you Simpson’s rule. Simpson also came up with a rule that uses four points, called Simpson’s 3/8 rule. Then there’s Boole’s rule, which uses five points.

These rules are not very accurate, but work reasonably well when you divide the integration interval into many smaller ones and apply the rule to each of those.
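As an illustration (again a sketch of my own, not library code), here is the composite version of Simpson’s rule, applying the three-point rule to each pair of adjacent subintervals:

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule over n subintervals; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        # Interior points alternate between weight 4 and weight 2.
        total += (4 if i % 2 else 2) * f(a + i * h)
    return h / 3 * total

exact = math.e - 1.0                 # integral of e^x over [0, 1]
err = abs(composite_simpson(math.exp, 0.0, 1.0, 10) - exact)
print(err)                           # roughly 1e-6: h^4 convergence
```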

It would be tempting to use more points to get more accurate results, but that doesn’t work. As the number of points increases, the error in the Newton-Cotes rules will generally not decrease.

Gauss-Legendre Quadrature

Carl Friedrich Gauss came up with a different approach. He realized that there was no requirement to use equidistant points. Instead, he used the freedom to choose the points where the function is evaluated to find better rules.

The order of a quadrature rule is the maximum degree of the polynomials that the rule can integrate exactly. The trapezoidal rule is a first-order rule: it will compute the integral of a linear function correctly, but fails for a quadratic.

We want to find an optimal rule of the form

\[\int_a^b f(x)\,dx \approx \sum_{i=1}^n w_i f(x_i)\]

where \(x_i\) are the nodes or points where the function is evaluated, and \(w_i\) are the weights. With \(n\) nodes and \(n\) weights, we have \(2n\) free parameters in total. That is exactly the number of coefficients of a polynomial of degree \(2n-1\), so we can hope to find a rule that integrates all polynomials of degree \(2n-1\) exactly.
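To see how this works out, here is the standard derivation (a textbook exercise, not from the original post) of the two-point rule on \([-1, 1]\). By symmetry we can take \(x_{1,2} = \pm t\) and \(w_1 = w_2 = w\); odd powers then integrate to zero automatically, and requiring exactness for \(f(x) = 1\) and \(f(x) = x^2\) pins down the two remaining parameters:

```latex
\begin{aligned}
\int_{-1}^{1} 1\,dx   &= 2            = 2w     &&\Rightarrow\; w = 1, \\
\int_{-1}^{1} x^2\,dx &= \tfrac{2}{3} = 2wt^2  &&\Rightarrow\; t = \tfrac{1}{\sqrt{3}}.
\end{aligned}
```

The resulting rule \(f(-1/\sqrt{3}) + f(1/\sqrt{3})\) is exact for all polynomials up to degree \(3 = 2n - 1\) with \(n = 2\).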

That’s what Gauss did in 1814. He computed the optimal points and weights for quadrature rules with up to 7 points to 16 digits. (Side note: mathematicians like Gauss and Leonard Euler, who lived way before the era of calculators and computers, must have been amazing at mental arithmetic!)

This class of integration rules is now known as Gauss-Legendre rules. The name Legendre comes from the fact that the nodes are the roots of the Legendre polynomials.
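The nodes and weights of the small Gauss-Legendre rules are known in closed form. The following sketch (my own, independent of the library below) hard-codes the three-point rule and verifies that it integrates a degree-5 polynomial exactly, as the \(2n-1\) bound promises:

```python
import math

# 3-point Gauss-Legendre rule on [-1, 1]; nodes and weights in closed form.
nodes = [-math.sqrt(3 / 5), 0.0, math.sqrt(3 / 5)]
weights = [5 / 9, 8 / 9, 5 / 9]

def gauss_legendre_3(f):
    return sum(w * f(x) for w, x in zip(weights, nodes))

# Exact for polynomials up to degree 2n - 1 = 5.
approx = gauss_legendre_3(lambda x: x**5 - 2 * x**4 + x**2)
exact = -4 / 5 + 2 / 3        # the odd term x^5 integrates to zero over [-1, 1]
print(abs(approx - exact))    # zero up to rounding
```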

Other Gauss-like rules

There are many other Gauss-like rules, with a slightly different definition:

\[\int_a^b w(x) f(x)\,dx \approx \sum_{i=1}^n w_i f(x_i)\]

where \(w(x)\) is a weight function. The nodes are the roots of precisely the polynomial of degree \(n\) that is orthogonal with respect to the weight function over the interval \([a, b]\).

When \(w(x) = 1\) and \([a, b] = [-1, 1]\), these orthogonal polynomials are the Legendre polynomials, and we end up with the Gauss-Legendre rule.

The nice thing about these rules is that they incorporate the weight function in the weights. This means that they work well for functions that are the product of the weight function and a smooth function. In particular, the product of the weight function with a polynomial of degree up to \(2n-1\) will be integrated exactly!
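One member of this family has particularly simple nodes and weights: the Gauss-Chebyshev rule, for the weight \(w(x) = 1/\sqrt{1-x^2}\), where both are available in closed form. The sketch below (my own illustration) uses it to integrate the weight times \(x^2\), which a three-point rule handles exactly:

```python
import math

def gauss_chebyshev(f, n):
    """n-point Gauss-Chebyshev rule: approximates the integral of
    f(x) / sqrt(1 - x^2) over [-1, 1]. All weights equal pi / n and the
    nodes are cos((2k - 1) pi / (2n))."""
    nodes = (math.cos((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1))
    return math.pi / n * sum(f(x) for x in nodes)

# The integral of x^2 / sqrt(1 - x^2) over [-1, 1] is pi / 2; a 3-point
# rule is exact for the weight times any polynomial up to degree 5.
approx = gauss_chebyshev(lambda x: x**2, 3)
print(abs(approx - math.pi / 2))   # zero up to rounding
```

Note that the singular factor \(1/\sqrt{1-x^2}\) never has to be evaluated: it is baked into the weights, which is exactly why these rules cope so well with singular integrands.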

Gauss-Kronrod rules

An integration rule computes an approximation of the integral of a function. But how good is the approximation?

The theory of numerical integration provides error estimates for many rules. Most of them are of the form

\[\left| \int_a^b f(x)\,dx - \sum_{i=1}^n w_i f(x_i) \right| \leq C h^{p+1} f^{(p+1)}(\xi)\]

where \(h\) is the width of the subintervals, \(p\) is the order of the rule, and \(\xi\) is some value in the integration interval.

This is not very useful in practice. We don’t know the derivative of the function, and even if we did, we have no idea where to find \(\xi\).

There are two approaches to estimating the error. The first is to divide the interval, compute the integral for each subinterval, and compare their sum to the approximation for the whole interval. This method is used for the Newton-Cotes-style rules.

The second approach is to use two rules of different orders and compare their results. The higher order rule will be more accurate. How much better it is than the lower order rule gives an indication of the error.

In general, computing two different rules is more costly in terms of function evaluations, except when we can use nested rules. A rule is nested in another if the nodes of the lower order rule are a subset of the nodes of the higher order rule. This means that we don’t need to compute the function values again at the nodes of the lower order rule.
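The trapezoidal rule is a simple example of a nested family: halving the subinterval width reuses every node of the coarser rule. The sketch below (my own illustration) compares a coarse and a fine trapezoid result and checks that their difference is a usable estimate of the coarse rule’s true error:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

# The nodes of the 8-subinterval rule are a subset of those of the
# 16-subinterval rule, so the refinement only costs 8 new evaluations.
f, a, b = math.exp, 0.0, 1.0
coarse = trapezoid(f, a, b, 8)
fine = trapezoid(f, a, b, 16)
estimate = abs(fine - coarse)              # error estimate for `coarse`
true_error = abs(coarse - (math.e - 1.0))  # exact integral is e - 1
print(estimate, true_error)                # same order of magnitude
```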

Most of the rules discussed so far do not have this property. Clenshaw-Curtis rules of the second kind do, but Gauss rules generally do not.

Fortunately, it is possible to extend a rule so that it is nested in a higher order rule. This is the idea behind the Gauss-Kronrod rules. A Gauss-Kronrod pair consists of an \(n\)-point Gauss rule and a higher order rule that reuses all \(n\) Gauss nodes and adds \(n+1\) new ones. The difference between the two results gives an estimate of the error.

Quadrature Rules in Numerics.NET

Numerical integration has been available in Numerics.NET since version 1.

The IntegrationRules class

New in version 9.0.5 of Numerics.NET is the IntegrationRules class. This class contains methods to construct many of the integration rules mentioned in the previous section.

Each of the methods returns an instance of a quadrature rule. The rule has an Evaluate method that computes the integral of a function over a specific interval.

Let’s look at an example. We’ll compute the integral of the function

\[f(x) = \frac{e^x}{\sqrt{1+x}}\]

over the interval \([-1, 1]\). This is a difficult integral to compute analytically. It’s also difficult to approximate numerically because the derivative at \(x = -1\) is infinite. Its value is approximately 2.460262013896155478.

First we’ll use the Gauss-Legendre rule with 20 points to approximate the integral.

C#
Func<double, double> f = x => Math.Exp(x) / Math.Sqrt(1 + x);
var gaussLegendreRule = IntegrationRules.GaussLegendre(20);
var gaussLegendreResult = gaussLegendreRule.Evaluate(f, -1, 1);
Console.WriteLine($"Gauss-Legendre 20pt rule: {gaussLegendreResult.Value}");

This gives us a value of 2.438…, off by more than 0.02. Despite the fact that we’re using a decent number of points, the result is not very good.

The reason, as mentioned earlier, is that the function is not smooth at \(x = -1\). The derivative is infinite, and the function is not well approximated by a polynomial in the neighbourhood of that point.

The form of the singularity is \((1+x)^{-1/2}\). This is exactly the form of a weight function for a Gauss-Jacobi rule with \(\alpha = 0\) and \(\beta = -1/2\). This rule is designed to integrate functions with this kind of singularity.

Let’s see how this works out. First, we need to adjust the integrand by dividing out the weight function. In this case, that’s just Math.Exp. Next we’ll create a Gauss-Jacobi rule with the proper parameters. We’ll try our luck and see how far we can get with just 6 points!

C#
var gaussJacobiRule = IntegrationRules.GaussJacobi(6, 0.0, -0.5);
var gaussJacobiResult = gaussJacobiRule.Evaluate(Math.Exp, -1, 1);
Console.WriteLine($"Gauss-Jacobi 6pt rule: {gaussJacobiResult.Value}");

The result is 2.460262013893976, which is accurate to 12 digits. This is a very impressive result, given that we used only six function evaluations and the function we evaluated was considerably simpler than the original.

Other options for computing integrals

Aside from calling integration rules directly, there are other options for computing integrals in Numerics.NET. We won’t go into all the details here. There are just too many, including adaptive methods, methods for functions with singularities, and even computing integrals to arbitrary precision using the BigFloat struct.

We’ll just mention that you can use the Integrate method of the FunctionMath class to compute integrals in one call.

C#
var functionMathResult = FunctionMath.Integrate(f, -1, 1);
Console.WriteLine($"FunctionMath.Integrate: {functionMathResult}");

The result is 2.4602620138961466, which is accurate to 15 digits.

Under the hood, this method creates an instance of the AdaptiveIntegrator class. This class is a general-purpose adaptive integrator that can compute integrals of functions with singularities. It performs several tricks to compute an accurate result even for difficult integrals. It is based on Prof. Piessens’ original QUADPACK code, using Gauss-Kronrod rules to estimate the error and adaptively dividing the integration interval until the desired accuracy is reached.

This algorithm can compute even very difficult integrals accurately. For example, \(x^{-0.9} \log(1/x)\) on \([0, 1]\). The exact result is 100, but the singularity is so strong that most methods will fail spectacularly.

C#
Func<double, double> f2 = x => Math.Pow(x, -0.9) * Math.Log(1/x);
var result = FunctionMath.Integrate(f2, 0, 1, extrapolate: true);
Console.WriteLine($"FunctionMath.Integrate: {result}");

This gives 100.00000000008289. What is most remarkable is that the result is accurate to 12 digits using only 285 function evaluations.

Notice we turned on extrapolation here. This is a trick that can be used to get more accurate results for difficult integrands. Rather than compute the integral directly, the algorithm tries to extrapolate its value from successive approximations.

Without extrapolation, the result we would get is 71.76121024663357. The algorithm signals problems with the calculation, and the computation is retried with extrapolation. By explicitly turning on extrapolation, we can avoid the initial failure and get the more accurate result with far fewer function evaluations.
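A related, simpler form of this idea is Richardson extrapolation, which combines approximations at successive step sizes to cancel the leading error terms. (QUADPACK itself uses a more robust scheme, the Wynn epsilon algorithm; the sketch below is my own illustration of the general principle using the Romberg scheme on top of the trapezoidal rule.)

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

# Trapezoid approximations at h, h/2, h/4, h/8 for e^x on [0, 1].
f, a, b = math.exp, 0.0, 1.0
T = [trapezoid(f, a, b, 2**k) for k in range(4)]
plain_error = abs(T[-1] - (math.e - 1.0))   # error of the finest plain rule

# Each extrapolation pass cancels the next term in the h^2, h^4, ...
# error expansion (the Romberg scheme).
for m in range(1, 4):
    T = [(4**m * T[i + 1] - T[i]) / (4**m - 1) for i in range(len(T) - 1)]

extrapolated_error = abs(T[0] - (math.e - 1.0))
print(plain_error, extrapolated_error)      # extrapolation wins by a wide margin
```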

In this release we’ve also updated AdaptiveIntegrator to make it significantly faster and more efficient.

Conclusion

The addition of the IntegrationRules class to Numerics.NET opens up new possibilities for the efficient computation of integrals of functions with specific kinds of singularities.