pysdic.compute_backward_finite_difference_coefficients
compute_backward_finite_difference_coefficients(order, spacing=1.0, accuracy=1)
Compute the coefficients for the backward finite difference approximation of a derivative.
The function returns the stencil coefficients \(c_j\) used to approximate the n-th order derivative of a function using backward finite differences.
\[\frac{d^n f}{dt^n} \approx \frac{1}{h^n} \sum_{j=0}^{N} c_j f(t - j h)\]
where \(h\) is the time step size, \(m\) is the accuracy order, and the stencil uses \(N + 1\) points with \(N = n + m - 1\), so that the approximation error is of order \(O(h^m)\).
\[\frac{1}{h^n} \sum_{j=0}^{N} c_j f(t - j h) = \frac{d^n f}{dt^n} + O(h^m)\]
Note
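As a quick illustration of the formula above, the stencil can be applied directly to a known function. Here `apply_backward_stencil` is a hypothetical helper (not part of pysdic), using the accuracy-1 first-derivative coefficients `[1, -1]` shown in the Examples section:

```python
import numpy as np

def apply_backward_stencil(f, t, h, coeffs, order=1):
    # Evaluate (1/h^n) * sum_j c_j f(t - j h) for a given stencil.
    return sum(c * f(t - j * h) for j, c in enumerate(coeffs)) / h**order

# First derivative of sin at t = 1 with the n=1, m=1 stencil [1, -1]:
# df/dt ≈ (f(t) - f(t - h)) / h
approx = apply_backward_stencil(np.sin, 1.0, 1e-4, [1.0, -1.0])
exact = np.cos(1.0)
print(approx, exact)  # first-order accurate, so they agree to roughly h = 1e-4
```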
The coefficients are computed using Gaussian elimination on the Vandermonde matrix constructed from the Taylor series expansion.
- Parameters:
order (Integral) – The order of the derivative to approximate (e.g., 1 for first derivative, 2 for second derivative).
spacing (Number, optional) – The time step size \(h\). Default is 1.0.
accuracy (Integral, optional) – The desired accuracy order of the approximation. Default is 1.
- Returns:
The coefficients for the backward finite difference approximation of the derivative, as a 1D array \([c_0, c_1, \ldots, c_N]\) ordered by stencil point \(f(t), f(t - h), \ldots, f(t - N h)\).
- Return type:
numpy.ndarray
- Raises:
ValueError – If order or accuracy is not a positive integer, or if spacing is not a strictly positive number.
Notes
Expanding the function \(f(t - j h)\) in a Taylor series around \(t\), we have:
\[f(t - j h) = \sum_{k=0}^{\infty} \frac{(-j h)^k}{k!} \frac{d^k f}{dt^k}\]
We can add the contributions of the stencil points weighted by the coefficients \(c_j\):
\[\frac{1}{h^n} \sum_{j=0}^{N} c_j f(t - j h) = \frac{1}{h^n} \sum_{j=0}^{N} c_j \sum_{k=0}^{\infty} \frac{(-j h)^k}{k!} \frac{d^k f}{dt^k}\]
\[\frac{1}{h^n} \sum_{j=0}^{N} c_j f(t - j h) = \sum_{k=0}^{\infty} \frac{h^{k-n}}{k!} \left(\sum_{j=0}^{N} c_j (-j)^k\right) \frac{d^k f}{dt^k}\]
Thus, for all \(k \in [0, N]\), we want to enforce the conditions:
\[\sum_{j=0}^{N} c_j (-j)^k = n! \, \delta_{k,n}\]
where \(\delta_{k,n}\) is the Kronecker delta.
This gives a linear system of equations for the coefficients \(c_j\). The matrix of the system is a Vandermonde matrix, which is invertible since the stencil points are distinct. The resulting coefficients are divided by \(h^n\) to account for the time step size, yielding a finite difference approximation of the n-th order derivative with an error of order \(O(h^m)\).
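The construction in these notes can be sketched in a few lines of NumPy. The function below is an illustrative reimplementation for clarity, not pysdic's actual code:

```python
import numpy as np
from math import factorial

def backward_fd_coeffs(order, spacing=1.0, accuracy=1):
    # Sketch of the derivation in the Notes; not pysdic's implementation.
    n, m = order, accuracy
    N = n + m - 1  # stencil indices j = 0, ..., N
    j = np.arange(N + 1)
    # Row k of the system matrix is (-j)^k: a Vandermonde-type matrix
    # enforcing sum_j c_j (-j)^k = n! * delta_{k,n} for k = 0, ..., N.
    A = (-j[np.newaxis, :]) ** np.arange(N + 1)[:, np.newaxis]
    b = np.zeros(N + 1)
    b[n] = factorial(n)
    c = np.linalg.solve(A, b)
    return c / spacing**n  # divide by h^n to account for the time step size

print(backward_fd_coeffs(1, 1.0, 2))  # matches the Examples: [ 1.5, -2. ,  0.5]
```

Solving via `np.linalg.solve` is the simplest way to express the system; a dedicated Vandermonde solver would be more efficient for large stencils.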
See also
compute_forward_finite_difference_coefficients() – To compute the forward finite difference coefficients.
compute_central_finite_difference_coefficients() – To compute the central finite difference coefficients.
assemble_backward_finite_difference_matrix() – To assemble the backward finite difference operator matrix.
Examples
>>> compute_backward_finite_difference_coefficients(1, 1.0, 1)
array([ 1., -1.])
>>> compute_backward_finite_difference_coefficients(2, 1.0, 1)
array([ 1., -2.,  1.])
>>> compute_backward_finite_difference_coefficients(1, 1.0, 2)
array([ 1.5, -2. ,  0.5])
>>> compute_backward_finite_difference_coefficients(2, 0.1, 2)
array([ 200., -500.,  400., -100.])
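The \(O(h^m)\) error claim can be checked empirically. This sketch uses the accuracy-2 first-derivative stencil `[1.5, -2., 0.5]` from the examples above; `stencil_error` is an illustrative helper, not part of pysdic:

```python
import numpy as np

def stencil_error(h):
    # Error of the m=2 backward stencil for d/dt sin(t) at t = 1.
    c = [1.5, -2.0, 0.5]
    t = 1.0
    approx = sum(cj * np.sin(t - j * h) for j, cj in enumerate(c)) / h
    return abs(approx - np.cos(t))

ratio = stencil_error(1e-2) / stencil_error(5e-3)
print(ratio)  # halving h divides the error by roughly 2^2 = 4, since m = 2
```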