Want to Get High Order? A Detailed Guide to Richardson Extrapolation and Romberg Integration
About 1190 words · about 4 minutes
2025-11-17
Numerical approximation methods often introduce errors that depend systematically on a step size h. If we understand how the error depends on h, we can combine multiple approximations to eliminate the dominant error term — often leading to dramatic accuracy improvements at almost no additional cost.
Two celebrated techniques that exploit this idea are Richardson extrapolation and Romberg integration. Both ultimately trace back to the structure of the Euler–Maclaurin formula.
This article provides a detailed, mathematically explicit guide to all three.
1. Richardson Extrapolation (With Derivation)
Suppose a numerical method produces approximations $A(h)$ to some exact value $A$, and that:

$$A(h) = A + C h^p + D h^{p+1} + O(h^{p+2})$$

for some unknown constants $C, D$ and known order $p > 0$.

The key observation is that the leading error term $C h^p$ appears in both $A(h)$ and $A(h/2)$:

$$A(h/2) = A + C\left(\frac{h}{2}\right)^p + D\left(\frac{h}{2}\right)^{p+1} + \cdots$$

Let's eliminate the common $C h^p$ term.
1.1 Eliminating the leading error term
Write:
$$A(h) = A + C h^p + O(h^{p+1})$$

$$A(h/2) = A + \frac{C}{2^p}\, h^p + O(h^{p+1})$$

Multiply the second equation by $2^p$:

$$2^p A(h/2) = 2^p A + C h^p + O(h^{p+1})$$

Now subtract the first equation:

$$2^p A(h/2) - A(h) = (2^p - 1)\, A + O(h^{p+1})$$

Solve for $A$:

$$A = \frac{2^p A(h/2) - A(h)}{2^p - 1} + O(h^{p+1})$$
This is the Richardson extrapolation formula:
$$R(h) = \frac{2^p A(h/2) - A(h)}{2^p - 1}$$
Richardson extrapolation increases accuracy by at least one order, and often by two full orders for symmetric methods, whose error expansions contain only even powers of $h$.
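As a minimal sketch (my example, not from the article), here is one Richardson step applied to the central difference quotient, a symmetric method with $p = 2$:

```python
import math

def central_diff(f, x, h):
    """Central difference: f'(x) + c2*h^2 + c4*h^4 + ... (order p = 2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(A, h, p):
    """One Richardson step: R(h) = (2^p * A(h/2) - A(h)) / (2^p - 1)."""
    return (2**p * A(h / 2) - A(h)) / (2**p - 1)

# Differentiate sin at x = 1; the exact answer is cos(1).
f, x, h = math.sin, 1.0, 0.1
A = lambda h: central_diff(f, x, h)

exact = math.cos(1.0)
plain = A(h)                      # raw O(h^2) approximation
extrap = richardson(A, h, p=2)    # extrapolated: O(h^4) here (even expansion)

print(abs(plain - exact), abs(extrap - exact))
```

Because the central difference has an even-power error expansion, the single step jumps from second to fourth order, as the printed errors show.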
1.2 Repeated Richardson Extrapolation
Once we have the basic Richardson formula

$$R(h) = \frac{2^p A(h/2) - A(h)}{2^p - 1},$$
we can apply it repeatedly to further reduce the error.
Suppose we have a sequence of approximations at step sizes $h, h/2, h/4, \dots$. We can construct a table of extrapolated values:

$$R_{0,0} = A(h), \qquad R_{1,0} = A(h/2), \qquad R_{2,0} = A(h/4), \quad \dots$$
Then apply Richardson recursively along the "columns":
$$R_{k,j} = \frac{2^{p_j} R_{k,j-1} - R_{k-1,j-1}}{2^{p_j} - 1}, \qquad j = 1, 2, \dots, k,$$

where $p_j$ is the order of the error term eliminated at stage $j$ (for an expansion in even powers, $p_j = 2j$).
- $R_{k,1}$ eliminates the first leading error term.
- $R_{k,2}$ eliminates the next term, and so on.
This process forms a Richardson extrapolation table:
$$\begin{array}{lllll}
R_{0,0} \\
R_{1,0} & R_{1,1} \\
R_{2,0} & R_{2,1} & R_{2,2} \\
\vdots & \vdots & \vdots & \ddots \\
R_{k,0} & R_{k,1} & R_{k,2} & \cdots & R_{k,k}
\end{array}$$
This repeated extrapolation systematically cancels successive error terms, dramatically improving accuracy without additional derivations of new formulas.
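The table construction can be sketched in a few lines. The function `richardson_table` below is my own sketch; it assumes successive error terms are spaced by a fixed gap $q$, so the order eliminated in column $j$ is $p + (j-1)q$:

```python
import math

def richardson_table(A, h, p=2, q=2, levels=4):
    """Triangular table R[k][j] built from A(h), A(h/2), ..., A(h/2^levels).

    Assumes A(h) = A + c1*h^p + c2*h^(p+q) + ..., so column j cancels
    the h^(p+(j-1)q) term using the factor 2^(p+(j-1)q).
    """
    R = [[A(h / 2**k)] for k in range(levels + 1)]
    for k in range(1, levels + 1):
        for j in range(1, k + 1):
            factor = 2 ** (p + (j - 1) * q)
            R[k].append((factor * R[k][j - 1] - R[k - 1][j - 1]) / (factor - 1))
    return R

# Example: central differences for d/dx sin(x) at x = 1 (even-power expansion,
# so p = 2 and q = 2). The diagonal R[k][k] converges rapidly to cos(1).
A = lambda h: (math.sin(1 + h) - math.sin(1 - h)) / (2 * h)
R = richardson_table(A, 0.5, p=2, q=2, levels=4)
print(abs(R[0][0] - math.cos(1)), abs(R[4][4] - math.cos(1)))
```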
2. Romberg Integration (Richardson Applied Iteratively)
Romberg integration is exactly Richardson extrapolation applied to the composite trapezoidal rule: a repeated extrapolation that eliminates successive terms in the trapezoidal error expansion.
The definite integral

$$I = \int_a^b f(x)\,dx$$

is approximated using the composite trapezoidal rule with step $h = (b-a)/n$:

$$T(h) = h\left[\frac{f(a) + f(b)}{2} + \sum_{k=1}^{n-1} f(a + kh)\right].$$
2.1 Error expansion of the trapezoidal rule
The trapezoidal rule has a beautiful asymptotic expansion:
$$T(h) = I + C_1 h^2 + C_2 h^4 + C_3 h^6 + \cdots$$
This fact is not an accident — it comes directly from the Euler–Maclaurin formula (discussed later). Crucially:
- only even powers of $h$ appear,
- the leading accuracy order is $p = 2$.
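A quick empirical check of the $p = 2$ leading order (a sketch I've added; the integrand $e^x$ on $[0,1]$ is my choice):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals, h = (b - a)/n."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n)))

exact = math.e - 1.0                         # ∫_0^1 e^x dx = e - 1
e1 = trapezoid(math.exp, 0.0, 1.0, 8) - exact
e2 = trapezoid(math.exp, 0.0, 1.0, 16) - exact
print(e1 / e2)  # ≈ 4: halving h divides the error by 2^2
```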
2.2 Romberg Recursion
First Richardson step (4th-order integration)
Apply Richardson with p=2:
$$R_1(h) = \frac{4\,T(h/2) - T(h)}{3}.$$

This eliminates the $C_1 h^2$ term and yields a 4th-order approximation.
Second Richardson step (6th-order)
$$R_2(h) = \frac{16\,R_1(h/2) - R_1(h)}{15}.$$

This cancels the $C_2 h^4$ term and yields a 6th-order method.
General Romberg recursion
Let:
- $R_{k,0} = T\!\left(\dfrac{b-a}{2^k}\right)$
- $R_{k,j}$ be the result after $j$ Richardson steps
Then:
$$R_{k,j} = \frac{4^j R_{k,j-1} - R_{k-1,j-1}}{4^j - 1}$$
The Romberg tableau emerges:
$$\begin{array}{llllll}
R_{0,0} \\
R_{1,0} & R_{1,1} \\
R_{2,0} & R_{2,1} & R_{2,2} \\
\vdots & \vdots & \vdots & \ddots \\
R_{k,0} & R_{k,1} & R_{k,2} & \cdots & R_{k,k} \\
\vdots & \vdots & \vdots & & \vdots & \ddots
\end{array}$$

Diagonal entries $R_{k,k}$ converge extremely quickly.
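Putting the pieces together, a compact Romberg routine might look like the sketch below (my code, not the article's; a production version would reuse previous function evaluations when halving $h$):

```python
import math

def romberg(f, a, b, levels):
    """Romberg tableau: row k starts from the trapezoidal rule with 2^k
    subintervals; column j applies (4^j * R - R') / (4^j - 1)."""
    R = []
    for k in range(levels + 1):
        n = 2**k
        h = (b - a) / n
        T = h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))
        row = [T]
        for j in range(1, k + 1):
            row.append((4**j * row[j - 1] - R[k - 1][j - 1]) / (4**j - 1))
        R.append(row)
    return R

# ∫_0^1 e^x dx = e - 1; the diagonal entry is already near machine precision.
R = romberg(math.exp, 0.0, 1.0, levels=5)
print(abs(R[5][5] - (math.e - 1.0)))
```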
3. The Euler–Maclaurin Formula: The Theoretical Backbone
The Euler–Maclaurin summation formula states:

$$\int_a^b f(x)\,dx = h\left[\frac{f(a) + f(b)}{2} + \sum_{k=1}^{n-1} f(a + kh)\right] - \sum_{m=1}^{\infty} \frac{B_{2m}}{(2m)!}\, h^{2m} \left(f^{(2m-1)}(b) - f^{(2m-1)}(a)\right),$$

where the $B_{2m}$ are Bernoulli numbers (in practice the series is truncated after finitely many terms, leaving a remainder of the next order).

The bracketed part is exactly the composite trapezoidal formula $T(h)$, so this gives an explicit error expansion of the composite trapezoidal rule:

$$T(h) = I + C_1 h^2 + C_2 h^4 + C_3 h^6 + \cdots,$$

with explicit coefficients

$$C_m = \frac{B_{2m}}{(2m)!} \left(f^{(2m-1)}(b) - f^{(2m-1)}(a)\right).$$
This structure (even powers, smooth coefficient behavior) is what makes Romberg so effective.
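The leading coefficient can be checked numerically. Below, the trapezoidal error divided by $h^2$ is compared against $C_1 = \frac{B_2}{2!}\,(f'(b) - f'(a)) = \frac{f'(b) - f'(a)}{12}$ for $f = e^x$ on $[0,1]$ (my example, not the article's):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals, h = (b - a)/n."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n)))

n = 64
h = 1.0 / n
exact = math.e - 1.0
# Empirical C_1: (T(h) - I) / h^2, up to higher-order terms.
c1_empirical = (trapezoid(math.exp, 0.0, 1.0, n) - exact) / h**2
# Predicted C_1 = (f'(1) - f'(0)) / 12 for f = exp, since B_2/2! = 1/12.
c1_predicted = (math.e - 1.0) / 12.0
print(c1_empirical, c1_predicted)
```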
4. Summary: How Euler–Maclaurin, Richardson, and Romberg Fit Together
Why Romberg Works: Eliminating Euler–Maclaurin Terms
Euler–Maclaurin shows:
$$T(h) = I + C_1 h^2 + C_2 h^4 + C_3 h^6 + \cdots.$$
Each Richardson step in Romberg cancels one of these terms:
- $R_{k,0}$: error $O(h^2)$
- $R_{k,1}$: error $O(h^4)$
- $R_{k,2}$: error $O(h^6)$
- $R_{k,3}$: error $O(h^8)$
Thus:

$$\text{Romberg} \;=\; \text{Richardson extrapolation} \;+\; \text{trapezoidal rule},$$

driven by the Euler–Maclaurin error structure.
This "error peeling" explains Romberg's rapid convergence.
When Romberg Succeeds (Euler–Maclaurin Validity)
Romberg performs exceptionally well when Euler–Maclaurin applies cleanly — that is, when:
- f is smooth on [a,b],
- derivatives exist and are well-behaved at endpoints,
- the integrand is non-oscillatory,
- the interval is finite.
In such cases, error terms behave predictably and Romberg approaches machine precision quickly.
When Romberg Struggles or Fails
Romberg falters precisely in situations where Euler–Maclaurin fails:
- endpoint singularities (e.g., $f(x) = \sqrt{x}$ on $[0, 1]$),
- highly oscillatory functions,
- functions with discontinuities or corners,
- improper integrals or infinite intervals.
In these cases, the even-power expansion breaks down, preventing Richardson from eliminating error terms cleanly.
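A small experiment (mine, under the caveats above) makes the contrast concrete: Romberg on the smooth $e^x$ versus on $\sqrt{x}$, whose derivative blows up at the left endpoint:

```python
import math

def romberg_best(f, a, b, levels):
    """Diagonal Romberg estimate R[levels][levels] (minimal sketch)."""
    R = []
    for k in range(levels + 1):
        n = 2**k
        h = (b - a) / n
        T = h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))
        row = [T]
        for j in range(1, k + 1):
            row.append((4**j * row[j - 1] - R[k - 1][j - 1]) / (4**j - 1))
        R.append(row)
    return R[levels][levels]

smooth_err = abs(romberg_best(math.exp, 0.0, 1.0, 8) - (math.e - 1.0))
# For sqrt, the error expansion contains an h^(3/2) term that the 4^j
# recursion cannot cancel, so convergence degrades sharply.
singular_err = abs(romberg_best(math.sqrt, 0.0, 1.0, 8) - 2.0 / 3.0)
print(smooth_err, singular_err)
```

With the same number of function evaluations, the smooth case reaches near machine precision while the singular case stalls many orders of magnitude higher.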
Summary
Romberg integration works because Euler–Maclaurin provides a clean even-power error expansion for the trapezoidal rule, and Richardson extrapolation cancels these terms systematically.
