"Numerical solution of linear PDEs: Computing the Crank-Nicolson matrix automatically"

27 jun 2019

The Crank-Nicolson method rewrites a discrete-time linear PDE as a matrix multiplication $$\phi_{n+1}=C \phi_n$$. Nonlinear PDEs pose some additional problems, but can be handled the same way by linearizing at every timestep. A major advantage is that going $$k$$ steps into the future is just $$\phi_{n+k}=C^{k}\phi_n$$, and the matrix power can be computed cheaply (with repeated squaring it takes only $$O(\log k)$$ matrix multiplications). The method is in general also very stable.
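As a quick illustration of the multi-step trick (a toy sketch in NumPy; the matrix C here is just a random stand-in, not an actual Crank-Nicolson matrix):

import numpy as np

# Toy stand-in for a time-stepping matrix C and an initial state phi0.
n = 50
rng = np.random.default_rng(0)
C = np.eye(n) + 0.001 * rng.standard_normal((n, n))
phi0 = rng.standard_normal(n)

# Advancing k timesteps is one matrix power plus one matrix-vector product;
# matrix_power uses repeated squaring, so it needs only O(log k) matrix multiplies.
k = 1000
phi_k = np.linalg.matrix_power(C, k) @ phi0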

For an assignment I had to construct the Crank-Nicolson matrix for a simple linear one-dimensional PDE, which had to be derived by hand. That's a bit laborious, so I made a method to derive it automatically. As I came across it again while sifting through some older files, and still could not find the method elsewhere online, I thought I might as well write it up here.

Derivation (1D)

Below is the standard Crank-Nicolson expression for a linear, one-dimensional PDE $$\partial_t \phi = f\left(\phi\right)$$, with $$f^n = f\left(\phi^n\right)$$. The time-stepping matrix is then $$\mathbf{C}=\mathbf{A}^{-1}\mathbf{B}$$.

$$ \begin{align} \phi^{n+1} - \frac{\Delta t}2 f^{n+1} &= \phi^n + \frac{\Delta t}2 f^n \\ \mathbf{A} \phi^{n+1} &= \mathbf{B} \phi^n \end{align} $$

Since $$f$$ is linear, we can expand it into the set of its impulse responses $$\left\{h^k\right\}$$. These are the vector outputs $$h^k = f\left(\delta^k\right)$$ obtained by applying it to the shifted unit impulses $$\delta^k_i = \delta_{ik}$$. We can then rewrite $$f$$ in terms of its impulse responses:

$$ \begin{equation} f\left(\phi\right)_i = \sum_k h^k_i \phi_k \end{equation} $$
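As a quick numerical sanity check of this expansion (my own example; the operator f below is an arbitrary linear one, a periodic central-difference derivative):

import numpy as np

n = 8
def f(phi):
    # an arbitrary linear operator: periodic first-order central difference
    return (np.roll(phi, -1) - np.roll(phi, 1)) / 2

h = [f(dk) for dk in np.eye(n)]                # impulse responses {h^k}, h[k][i] = h^k_i

phi = np.random.default_rng(1).standard_normal(n)
recon = sum(h[k] * phi[k] for k in range(n))   # f(phi)_i = sum_k h^k_i phi_k
assert np.allclose(recon, f(phi))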

The Crank-Nicolson equation can be rewritten by substituting this definition for $$f$$, with the goal of reformulating it as a matrix equation. Focusing on a single element $$\phi^{n+1}_i$$ and using the Kronecker delta to move the $$\phi_i$$ terms not resulting from $$f\left(\phi\right)$$ into the sums, one obtains

$$ \begin{equation} \sum_k \left( \delta_{ik} \phi^{n+1}_k - \frac{\Delta t}2 h^k_i \phi^{n+1}_k \right) = \sum_k \left( \delta_{ik} \phi^n_k + \frac{\Delta t}2 h^k_i \phi^n_k \right) \end{equation} $$

Equating this with the definition of the matrix-vector dot product $$(\mathbf{A}\vec x)_i = \sum_k A_{ik} x_k$$ gives the elements of the Crank-Nicolson matrices $$\mathbf{A}$$ and $$\mathbf{B}$$:

$$ \begin{align} \mathbf{A}_{ik} &= \delta_{ik} - \frac{\Delta t}2 h^k_i \\ \mathbf{B}_{ik} &= \delta_{ik} + \frac{\Delta t}2 h^k_i \end{align} $$
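As a sanity check (my own example, not part of the derivation above): for pure diffusion discretized with a central difference, $$f\left(\phi\right)_i = \frac{D}{\Delta x^2}\left(\phi_{i+1} - 2\phi_i + \phi_{i-1}\right)$$, the impulse responses are three-point stencils and the formulas above reduce to the familiar tridiagonal Crank-Nicolson matrices:

$$ \begin{align} h^k_i &= \frac{D}{\Delta x^2}\left(\delta_{i,k-1} - 2\,\delta_{ik} + \delta_{i,k+1}\right) \\ \mathbf{A}_{ik} &= \delta_{ik} - \frac{D\,\Delta t}{2\,\Delta x^2}\left(\delta_{i,k-1} - 2\,\delta_{ik} + \delta_{i,k+1}\right) \end{align} $$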

Implementation

(The original post embeds an interactive JavaScript demo of diffusion-advection here, with inputs for the advection (1st derivative) and diffusion (2nd derivative) coefficients; clicking resets it with the new values.)

The implementation in Python is a straightforward translation of the equations above. Note that int(i == k) is the Kronecker delta $$\delta_{ik}$$.

import numpy as np
import numpy.linalg as LA

def cn_lin_mat(f, n, dt):
    d = np.eye(n)                       # shifted impulses {delta^k}
    h = [f(di) for di in d]             # shifted impulse responses {h^k}
    A = np.zeros((n, n))
    B = np.zeros((n, n))
    for i in range(n):
        for k in range(n):
            A[i, k] = int(i == k) - dt/2*h[k][i]
            B[i, k] = int(i == k) + dt/2*h[k][i]
    Ainv = LA.inv(A)
    C = Ainv @ B
    return C
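A minimal usage example (the operator, boundary conditions, and parameter values below are my own choices, mirroring the diffusion-advection demo above): cn_lin_mat only needs f to be linear in phi, so a finite-difference stencil with periodic boundaries works directly.

n, dx, dt = 100, 0.1, 0.01
D, v = 1.0, 0.5                         # diffusion coefficient, advection speed

def f(phi):
    # periodic central differences for the 2nd and 1st derivatives
    diff = (np.roll(phi, -1) - 2*phi + np.roll(phi, 1)) / dx**2
    adv  = (np.roll(phi, -1) - np.roll(phi, 1)) / (2*dx)
    return D*diff - v*adv

C = cn_lin_mat(f, n, dt)

x = np.arange(n) * dx
phi0 = np.exp(-((x - x.mean())**2))              # Gaussian initial condition
phi500 = np.linalg.matrix_power(C, 500) @ phi0   # 500 timesteps in one go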

Higher Dimensions

This generalizes trivially to higher dimensions. You just have to wrap f so that it works on one-dimensional data, by reshaping its input and output at the appropriate points. For a 2D linear PDE over a field of width W and height H, this results in the following code. You'd probably want to use sparse arrays for $$\mathbf{A}$$ and $$\mathbf{B}$$, since they have $$(WH)^2$$ elements, most of which are 0.

def f1d(phi):
    phi = phi.reshape((H, W))
    phi_next = f(phi)
    return phi_next.flatten()

C = cn_lin_mat(f1d, W*H, dt)
phi1 = (C @ phi0.flatten()).reshape((H, W))
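For large fields, a sparse variant avoids storing the dense $$(WH)^2$$ matrices. Here is a minimal sketch assuming SciPy is available; it returns a per-step solver instead of the dense matrix $$\mathbf{C}$$, so it trades the matrix-power trick for memory, and the impulse responses themselves are still computed densely.

import scipy.sparse as sp
import scipy.sparse.linalg as spla

def cn_lin_step_sparse(f, n, dt):
    # Same construction as cn_lin_mat, but A and B are stored sparse and A is
    # LU-factorized once; each call to step() then advances one timestep.
    h = np.array([f(di) for di in np.eye(n)])   # h[k][i] = h^k_i (dense)
    Hm = sp.csc_matrix(h.T)                     # Hm[i, k] = h^k_i
    I = sp.identity(n, format="csc")
    A = (I - dt/2 * Hm).tocsc()
    B = (I + dt/2 * Hm).tocsc()
    lu = spla.splu(A)
    def step(phi):
        return lu.solve(B @ phi)
    return step

step = cn_lin_step_sparse(f1d, W*H, dt)
phi1 = step(phi0.flatten()).reshape((H, W))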