# A minimal Extended Kalman Filter in Python

The extended Kalman filter (EKF) is a state estimator for non-linear systems with noisy transitions and incomplete, noisy observations. In a way, it's almost magical how it reconstructs a complete system state from so little, noisy information. I'm no expert on this matter, but as I tried to understand the EKF, I came up with a small Python implementation.

The EKF is an extension of the Kalman filter, which only works for linear systems. The EKF deals with the non-linearities by linearizing the system at every time step, using its Jacobian.

```
import numpy as np
import numpy.linalg as LA

def jacobian(f, x, rows=None, dx=0.001):
    'Numerical Jacobian of f at x; allows for non-square output'
    if rows is None:
        rows = len(x)
    J = np.empty((rows, len(x)))
    xpdx = np.empty_like(x, dtype=float)
    xmdx = np.empty_like(x, dtype=float)
    for j in range(len(x)):
        xpdx[:] = xmdx[:] = x
        xpdx[j] += dx
        xmdx[j] -= dx
        # central difference: (f(x+dx) - f(x-dx)) / (2*dx)
        J[:, j] = (f(xpdx) - f(xmdx)) / (2 * dx)
    return J
```
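As a quick sanity check on the central-difference idea, here is the one-dimensional case applied to a function whose derivative is known exactly (the helper `df` below is just for illustration, not part of the filter):

```python
import numpy as np

def df(f, x, dx=1e-3):
    # central difference: the error shrinks as O(dx**2)
    return (f(x + dx) - f(x - dx)) / (2 * dx)

# the derivative of sin at 1.0 should be cos(1.0)
print(df(np.sin, 1.0), np.cos(1.0))
```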

# Implementation in Python

It took me a little while to figure out, but to implement it in, say, Python, you can copy the mathematical definitions pretty much verbatim. In Python 3.5+, the `@` operator performs matrix multiplication (the equivalent of `np.dot()` for 2-D arrays); on older versions, replace each `a @ b` with `np.dot(a, b)`. Only after getting it working did I start to understand the mathematical definition.
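For example (plain NumPy, nothing EKF-specific):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])

# for 2-D arrays, the @ operator and np.dot agree
print(A @ B)         # [[19. 22.]
                     #  [43. 50.]]
print(np.dot(A, B))  # same result
```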

```
class EKF:
    def __init__(self, f, h, x0, u0):
        '''EKF(f, h, x0, u0)
        x' = f(x, u)  state transition function
        z  = h(x)     observation function
        x0  initial state estimate
        u0  initial control variable'''
        self.f, self.h = f, h
        self.x = np.array(x0, dtype=float)
        self.u = np.array(u0, dtype=float)
        self.I, self.P = np.eye(len(x0)), np.eye(len(x0))
        # modify these if the noise is not independent
        self.Q = np.eye(len(x0))              # process noise covariance
        self.R = np.eye(len(self.h(self.x)))  # observation noise covariance
    def predict(self):
        F = jacobian(lambda x: self.f(x, self.u), self.x)
        self.x = self.f(self.x, self.u)
        self.P = (F @ self.P @ F.T) + self.Q
    def update(self, z):
        x, P, R, I, h = self.x, self.P, self.R, self.I, self.h
        H = jacobian(h, x, len(z))
        y = z - h(x)              # innovation
        S = (H @ P @ H.T) + R     # innovation covariance
        K = P @ H.T @ LA.inv(S)   # Kalman gain
        self.x = x + K @ y
        self.P = (I - K @ H) @ P
```
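Before moving on to a chaotic system, a useful smoke test is a case where the right answer is obvious: estimating a constant scalar from noisy direct observations. The snippet below restates the filter and the Jacobian helper in compact form only so it runs on its own; the `true_state` value and the `Q` scaling are illustrative choices (shrinking `Q` tells the filter the state barely moves, so it should settle near the average of the observations):

```python
import numpy as np
import numpy.linalg as LA

def jacobian(f, x, rows=None, dx=0.001):
    # compact copy of the numerical Jacobian from above
    if rows is None:
        rows = len(x)
    J = np.empty((rows, len(x)))
    for j in range(len(x)):
        xp = np.array(x, dtype=float); xp[j] += dx
        xm = np.array(x, dtype=float); xm[j] -= dx
        J[:, j] = (f(xp) - f(xm)) / (2 * dx)
    return J

class EKF:
    # compact copy of the EKF class from above
    def __init__(self, f, h, x0, u0):
        self.f, self.h = f, h
        self.x = np.array(x0, dtype=float)
        self.u = np.array(u0, dtype=float)
        self.I, self.P = np.eye(len(x0)), np.eye(len(x0))
        self.Q = np.eye(len(x0))
        self.R = np.eye(len(self.h(self.x)))
    def predict(self):
        F = jacobian(lambda x: self.f(x, self.u), self.x)
        self.x = self.f(self.x, self.u)
        self.P = (F @ self.P @ F.T) + self.Q
    def update(self, z):
        H = jacobian(self.h, self.x, len(z))
        y = z - self.h(self.x)
        S = (H @ self.P @ H.T) + self.R
        K = self.P @ H.T @ LA.inv(S)
        self.x = self.x + K @ y
        self.P = (self.I - K @ H) @ self.P

np.random.seed(42)
true_state = 5.0
# state transition is the identity, observation is direct
ekf = EKF(lambda x, u: x, lambda x: x, [0.0], [])
ekf.Q *= 1e-6  # the state is (nearly) constant
for _ in range(100):
    z = np.array([true_state + np.random.normal(0, 1)])
    ekf.predict()
    ekf.update(z)
print(ekf.x[0])  # close to 5.0
```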

# Lorenz Attractor

Let’s apply it to a simple non-linear, chaotic system, the Lorenz attractor.
It’s defined over 3 dimensions, but we only give our state estimator
access to one dimension (`y`). Both the state itself and the observation
are perturbed with some noise (`w` and `v`).
Then we ask the EKF to predict the complete (`x`, `y`, `z`)
system state.

```
def lorenz(X, _u, s=10, r=28, b=2.667, dt=0.02):
    # one Euler step of the Lorenz attractor
    x, y, z = X
    xv = s * (y - x)
    yv = r * x - y - x * z
    zv = x * y - b * z
    return np.array([x + dt*xv, y + dt*yv, z + dt*zv])

def observe(X):
    # only look at y
    x, y, z = X
    return np.array([y])

x = np.array([0, 1., 1.05])
ekf = EKF(lorenz, observe, [20, 10, 30], [])
n_obs = len(observe(x))
for i in range(200):
    w = np.random.normal(0, 0.1, len(x))  # process noise
    v = np.random.normal(0, 1, n_obs)     # observation noise
    x = lorenz(x, ekf.u) + w
    z = observe(x) + v
    ekf.predict()
    ekf.update(z)
```

See below for the output. Blue is the Lorenz attractor with random noise, orange the estimated state based on noisy observations of the y axis. As you can see, it works pretty well. Note how the introduced noise derails the estimation sometimes, but it corrects itself eventually. Changing the observation function to another function of the state space should still let the EKF do its job. Magic :)