Worked Estimation Examples

V. Hunter Adams (vha3)

In [1]:
from IPython.display import Latex
import matplotlib.pyplot as plt
%matplotlib inline
import numpy
import math
import scipy.misc
from numpy.linalg import pinv
from numpy.linalg import inv
from scipy.stats import norm
from scipy import integrate
from scipy.integrate import odeint
from IPython.display import Image

The Point

The point of this homework is to get familiar with putting together filters of various types. It is not to get buried in tricky dynamics or measurement models; it is to practice building and diagnosing filters so that you can apply them to more complex systems. As such, we'll use easily differentiable dynamics and measurement models, and I will provide you with the time history of measurements.

Use whatever programming language you're most comfortable with.

The System in Question

Dynamics, measurements, and initial estimates under consideration:

Dynamics Model

\begin{align} x(k+1) &= f(k,x(k),\nu(k))\\ &= 2\arctan{\left[x(k)\right]} + 0.5 \cos{\left[\frac{\pi}{3}k\right]} + \nu(k) \end{align}

where

\begin{align} E\left[\nu(k)\right] &= 0\\ E\left[\nu(k)\nu(j)\right] &= \delta_{jk}Q(k)\\ \end{align}

Measurement Model

\begin{align} z(k)&= h(k,x(k)) + w(k)\\ &= x(k) + x(k)^2 + x(k)^3 + w(k) \end{align}

where

\begin{align} E\left[w(k)\right] &= 0\\ E\left[w(k)w(j)\right] &= \delta_{jk}R(k)\\ \end{align}

Initial Conditions and Parameters

\begin{align} \hat{x}(0) &= 4 && Q(k)=1\\ P(0) &= 2 && R(k) = 0.25 \end{align}

Trajectory and Measurements

You all don't need to recreate this; just import the data from the files.

In [268]:
# Commented out so that I don't
# accidentally run this cell and
# make a new trajectory 
##########################

# xhat0 = 4.
# Phat0 = 2.
# Q = 1.
# R = 0.25
# N = 100
# def createTrajectory(xhat0, P0, Q, R):
#     x0 = numpy.sqrt(P0)*numpy.random.randn()
#     x_array = numpy.zeros(N)
#     meas_array = numpy.zeros(N)
    
#     x_prev = x0
#     count = 0
#     file = open('trajectory.txt','w')
#     file2 = open('measurements.txt', 'w')
#     for i in range(N):
#         x_array[i] = (expectedUpdate(x_prev, i) +
#                           numpy.sqrt(Q)*numpy.random.randn())
#         file.write(str(x_array[i])+'\n') 
#         meas_array[i] = (expectedMeasurement(x_array[i]) +
#                              numpy.sqrt(R)*numpy.random.randn())
#         file2.write(str(meas_array[i])+'\n')
        
#         x_prev = x_array[i]
#     file.close()
#     file2.close()
#     return x_array, meas_array
# traj = createTrajectory(xhat0, Phat0, Q, R)
# plt.plot(traj[0], 'r.')
# plt.title('Trajectory');plt.xlabel('Sample Number')
# plt.show()
# plt.plot(traj[1], 'b.')
# plt.title('Measurement');plt.xlabel('Sample Number')
# plt.show()

1. Import Trajectory and Measurement Data

Import the measurements and the true trajectory.

In [269]:
with open("measurements.txt") as f:
    measurements = map(float, f)
with open("trajectory.txt") as f:
    true_trajectory = map(float, f)

2. Create Functions for Updating State and Measurement

Expected State Update:

\begin{align} \bar{x}(k+1) &= f(k,x(k),0)\\ &= 2\arctan{\left[x(k)\right]} + 0.5 \cos{\left[\frac{\pi}{3}k\right]} \end{align}
In [68]:
def expectedUpdate(x, k):
    return 2*numpy.arctan(x) + .5*numpy.cos((numpy.pi/3.)*k)

Expected Measurement Update:

\begin{align} \bar{z}(k) &= h(k,x(k))\\ &= x(k) + x(k)^2 + x(k)^3 \end{align}
In [6]:
def expectedMeasurement(x):
    return x + x**2. + x**3.

3. Write a Particle Filter for Estimating True Trajectory

Set the number of particles, and the threshold number of particles

\begin{align} N_s &= 1000\\ N_t &= 100 \end{align}
In [314]:
Ns=1000
Nt=100

Initial covariances and state estimate

\begin{align} P(0) &= \begin{bmatrix} 2 \end{bmatrix} && x(0) = 4\\ Q &= \begin{bmatrix}1\end{bmatrix} && R = \begin{bmatrix}0.25\end{bmatrix} \end{align}
In [315]:
P0 = numpy.array([[2.]])
x0 = numpy.array([4.])
Q = numpy.array([[1.]])
R = numpy.array([[.25]])

Initialize Weights

\begin{align} \underline{w}_s &= \frac{1}{N_s} \end{align}
In [316]:
def resetWeights():    
    ws = numpy.ones(Ns)*(1./Ns)
    logweights = numpy.log(ws)
    return ws, logweights

Create Particles

\begin{align} \underline{\chi}^i(0) &\sim \mathcal{N}({\bf{\hat{x}(0)}},P(0)) \end{align}
In [317]:
def resetParticles(x, P):
    particles_k = numpy.random.multivariate_normal(x, P, Ns)
    return particles_k

Create Process Noises

\begin{align} \underline{v}^i(k) &\sim \mathcal{N}(0,Q) \end{align}
In [318]:
def resetNoise():
    noise_k = numpy.random.multivariate_normal(numpy.zeros(1), Q, Ns)
    return noise_k

Propagate all Particles

\begin{align} \underline{\chi}^i(k+1) &= {\bf{f}}(k, \underline{\chi}^i(k), \underline{u}(k), \underline{v}^i(k))\\ &= 2\arctan{\left[\chi^i(k)\right]} + 0.5 \cos{\left[\frac{\pi}{3}k\right]} + \nu^i(k) \end{align}
In [319]:
def propagateParticles(particles, noise, k):
    new_particles = numpy.zeros((Ns, 1))
    for i in range(Ns):
        new_particles[i,:] = expectedUpdate(particles[i,:][0],k) + noise[i,:][0]
    return new_particles
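Since expectedUpdate is built entirely from numpy ufuncs, the Python loop above isn't strictly necessary. A vectorized sketch (the name propagateParticlesVectorized is my own, not part of the assignment):

In [ ]:
# Hypothetical vectorized equivalent of propagateParticles:
# numpy.arctan and numpy.cos broadcast over arrays, so no loop is needed.
def propagateParticlesVectorized(particles, noise, k):
    # particles, noise: (Ns, 1) arrays; returns an (Ns, 1) array
    return expectedUpdate(particles, k) + noise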

Get Expected Measurement for each Particle

\begin{align} \underline{\xi}^i &= {\bf{h}}(k+1,\underline{\chi}^i(k+1))\\ &= \chi^i(k+1) + \chi^i(k+1)^2 + \chi^i(k+1)^3 \end{align}
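In the filter below, this step happens inline inside updateWeights. For reference, a standalone sketch (the helper name particleMeasurements is my own):

In [ ]:
# Hypothetical standalone version of this step; updateWeights below
# evaluates expectedMeasurement inline instead.
def particleMeasurements(particles):
    # particles: (Ns, 1) array; returns (Ns,) expected measurements
    return expectedMeasurement(particles[:, 0])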

Update Weights

\begin{align} \log{w^i(k+1)} &= -\frac{1}{2}\left(z - \xi^i(k+1)\right)^TR^{-1}\left(z - \xi^i(k+1)\right) + \log{w^i(k)}\\ \log{w^{MAX}} &= \max_i\left(\log{w^i(k+1)}\right)\\ \tilde{w}^i(k+1) &= \exp\left(\log{w^i(k+1)} - \log{w^{MAX}}\right)\\ w^i(k+1) &= \frac{\tilde{w}^i(k+1)}{\sum_{l=1}^{N_s}\tilde{w}^l(k+1)} \end{align}
In [320]:
def updateWeights(index, new_particles, logweights):
    measurement = measurements[index]
    for i in range(Ns):
        # scalar innovation for particle i
        innov = measurement - expectedMeasurement(new_particles[i,0])
        logweights[i] = -.5*innov*(1./R[0,0])*innov + logweights[i]
    # subtract the max before exponentiating to avoid underflow
    maxlog = numpy.max(logweights)
    intermediate_weights = numpy.exp(logweights - maxlog)
    new_weights = intermediate_weights/(numpy.sum(intermediate_weights))
    return logweights, new_weights
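The max-subtraction is not optional. If a particle lands far from the measurement, its raw likelihood underflows to zero in double precision; shifting by the max preserves the relative weights. A quick illustration (my own, not part of the assignment):

In [ ]:
# Raw exponentiation underflows; shifting by the max preserves
# the relative weights.
logw = numpy.array([-1000., -1001., -1005.])
print(numpy.exp(logw))                     # [0. 0. 0.] -- all underflow
shifted = numpy.exp(logw - numpy.max(logw))
print(shifted / numpy.sum(shifted))        # well-defined normalized weights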

Compute Estimate

\begin{align} {\bf{\hat{x}}}(k+1) &= \sum_{i=1}^{N_s} w_i(k+1) \chi^i(k+1) \end{align}
\begin{align} P(k+1) &= \sum_{i=1}^{N_s}w_i(k+1)\left[\chi^i(k+1) - {\bf{\hat{x}}}(k+1)\right]\left[\chi^i(k+1) - {\bf{\hat{x}}}(k+1)\right]^T \end{align}
In [321]:
def computeEstimate(weights, particles):
    x_new = numpy.zeros(1)
    P_new = numpy.zeros((1,1))
    for i in range(Ns):
        x_new += weights[i]*particles[i,:]
    for i in range(Ns):
        diff = numpy.array([(particles[i,:] - x_new)])
        P_new += weights[i]*numpy.dot(diff.T, diff)
    return x_new, P_new

Compute the effective number of particles

\begin{align} N_{eff} &= \frac{1}{\sum_{i=1}^{N_s}\left(w_i(k+1)\right)^2} \end{align}
In [322]:
def effectiveParticles(weights):
    return 1/(numpy.sum(weights*weights))
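As a sanity check (my own illustration): uniform weights give $N_{eff} = N_s$, while a single dominant particle drives $N_{eff}$ to 1.

In [ ]:
# N_eff intuition: uniform weights -> Ns; one dominant weight -> 1
print(effectiveParticles(numpy.ones(Ns)/Ns))   # 1000.0
degenerate = numpy.zeros(Ns)
degenerate[0] = 1.0
print(effectiveParticles(degenerate))          # 1.0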

Put it all together

In [323]:
def runParticleFilter():
    P = P0
    x_start = x0
    ws, logweights = resetWeights()
    particles_k = resetParticles(x_start, P)
    noise = resetNoise()
    Neff = Ns
    
    P_array = numpy.zeros((N, 1, 1))
    x_array = numpy.zeros((N, 1))
    P_array[0] = P
    x_array[0,:] = x_start
    for i in range(N):
        new_particles = propagateParticles(particles_k, noise, i)
        logweights, ws = updateWeights(i, new_particles, logweights)
        x_new, P_new = computeEstimate(ws, new_particles)
        Neff = effectiveParticles(ws)
        
        P_array[i] = P_new
        x_array[i,:] = x_new
        
        if Neff < Nt:
            particles_k = resetParticles(x_new, P_new)
            noise = resetNoise()
            ws, logweights = resetWeights()
        else:
            particles_k = new_particles
            noise = resetNoise()
    return x_array, P_array
pfresults = runParticleFilter()
pf_traj, pf_cov = pfresults
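A note on the resampling step above: when $N_{eff}$ drops below $N_t$, this filter redraws the particle cloud from a Gaussian fit to the current estimate. A more conventional alternative is systematic resampling, which duplicates high-weight particles and drops low-weight ones without assuming any distribution. A sketch (hypothetical name systematicResample, assuming normalized weights):

In [ ]:
# Hypothetical alternative to the Gaussian reset above: systematic
# resampling duplicates high-weight particles, drops low-weight ones.
def systematicResample(particles, weights):
    # one stratified draw per particle in [0, 1)
    positions = (numpy.arange(Ns) + numpy.random.uniform()) / Ns
    cumulative = numpy.cumsum(weights)
    cumulative[-1] = 1.0   # guard against floating-point round-off
    indices = numpy.searchsorted(cumulative, positions)
    return particles[indices]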

4. Show Particle Filter Results

4a. True vs. Estimated Trajectories

In [326]:
plt.plot(pf_traj[:,0], 'r.', label='PF Estimates')
plt.plot(true_trajectory, 'b.', alpha=0.5, label='True Trajectory')
plt.title('Particle Filter vs Truth')
plt.xlabel('Sample Number')
plt.legend(bbox_to_anchor=(1.04,1), loc="upper left")
plt.show()

4b. True Error vs. Estimated Covariance

In [328]:
plt.plot(true_trajectory - pf_traj[:,0], 'r.', label='Error')
plt.plot(2*numpy.sqrt(pf_cov[:,0,0]), 'g-', label=r'$\pm 2\sigma$ bound')
plt.plot(-2*numpy.sqrt(pf_cov[:,0,0]), 'g-')
plt.title('Particle Filter Error')
plt.xlabel('Sample Number')
plt.legend(bbox_to_anchor=(1.04,1), loc="upper left")
plt.show()

5. Write a UKF to Estimate Trajectory

Initial covariance and state estimate

\begin{align} P(0) &= \begin{bmatrix} 2 \end{bmatrix} && x(0) = 4\\ Q &= \begin{bmatrix}1\end{bmatrix} && R = \begin{bmatrix}0.25\end{bmatrix} \end{align}
In [517]:
P0 = numpy.array([[2.]])
x0 = numpy.array([4.])
Q = numpy.array([[1.]])
R = numpy.array([[.25]])

Set Parameters

\begin{align} \alpha &= 1\times 10^{-4}\\ n_x &= 1\\ n_v &= 1\\ \kappa &= 3-n_x\\ \lambda &= \alpha^2\left(n_x + \kappa\right) - n_x\\ \beta &= 2 \end{align}
In [530]:
alpha = 1.e-4
nx = 1
nv = 1
kappa=3-nx
lam = alpha**2.*(nx+kappa) - nx
radical = numpy.sqrt(nx+nv+lam)
beta = 2.
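With $\alpha$ this small, $\lambda \approx -n_x$ and the sigma-point spread $\sqrt{n_x + n_v + \lambda} \approx 1$, i.e., the sigma points hug the mean. A quick printout confirms (my check, not in the original):

In [ ]:
# With alpha = 1e-4: lam = alpha**2*(nx+kappa) - nx = 3e-8 - 1
print(lam)       # approximately -0.99999997
print(radical)   # approximately 1.000000015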

Cholesky Factorize P and Q

\begin{align} P(k) &= S_x(k)S_x(k)^T\\ &\rightarrow S_x(k) = \text{chol}(P(k))\\ Q(k) &= S_v(k)S_v(k)^T\\ &\rightarrow S_v(k) = \text{chol}(Q(k)) \end{align}
In [531]:
def getCholesky(P, Q):
    return numpy.linalg.cholesky(P), numpy.linalg.cholesky(Q)

Extract the columns of the Cholesky factorizations

\begin{align} S_x(k) &= \begin{bmatrix} \underline{s}_{x_1k} & \underline{s}_{x_2k} & \cdots & \underline{s}_{x_{n_x}k}\end{bmatrix}\\ S_v(k) &= \begin{bmatrix} \underline{s}_{v_1k} & \underline{s}_{v_2k} & \cdots & \underline{s}_{v_{n_v}k}\end{bmatrix} \end{align}
In [532]:
def getColumn(mat, colnum):
    return mat[:,colnum]

Form Sigma Points

\begin{align} \chi_k^0 &= {\bf{\hat{x}}}(k) && v_k^0 = 0 && i=0\\ \chi_k^i &= {\bf{\hat{x}}}(k) + \underline{s}_{x_ik} \sqrt{n_x + n_v + \lambda} && v_k^i=0 && i=1,\dots,n_x\\ \chi_k^i &= {\bf{\hat{x}}}(k) - \underline{s}_{x_{(i-n_x)}k} \sqrt{n_x + n_v + \lambda} && v_k^i=0 && i=n_x+1,\dots,2n_x\\ \chi_k^i &= {\bf{\hat{x}}}(k) && v_k^i=\underline{s}_{v_{(i-2n_x)}k} \sqrt{n_x + n_v + \lambda} && i=2n_x + 1,\dots,2n_x + n_v\\ \chi_k^i &= {\bf{\hat{x}}}(k) && v_k^i=-\underline{s}_{v_{(i-2n_x - n_v)}k} \sqrt{n_x + n_v + \lambda} && i=2n_x+n_v+1,\dots,2(n_x+n_v)\\ \end{align}
In [533]:
def formSigmas(xhat, Sx, Sv):
    sigmas = numpy.zeros((1+2*(nx+nv), 1))
    noises = numpy.zeros((1+2*(nx+nv), 1))
    
    sigmas[0,:] = xhat
    noises[0,:] = numpy.zeros(1)
    for i in range(nx):
        sigmas[i+1,:] = xhat + radical*getColumn(Sx, i)
        noises[i+1,:] = numpy.zeros(1)
    for i in range(nx):
        sigmas[i+1+nx,:] = xhat - radical*getColumn(Sx, i)
        noises[i+1+nx,:] = numpy.zeros(1)
    for i in range(nv):
        sigmas[i+1+nx+nx,:] = xhat
        noises[i+1+nx+nx,:] = radical*getColumn(Sv, i)
    for i in range(nv):
        sigmas[i+1+nx+nx+nv,:] = xhat
        noises[i+1+nx+nx+nv,:] = -radical*getColumn(Sv, i)
    return sigmas, noises

Propagate Sigma Points

\begin{align} \underline{\chi}^i(k+1) &= {\bf{f}}(k, \underline{\chi}^i(k), \underline{u}(k), \underline{v}^i(k))\\ &= 2\arctan{\left[\chi^i(k)\right]} + 0.5 \cos{\left[\frac{\pi}{3}k\right]} + \nu^i(k) \end{align}
In [534]:
def propagateSigmas(sigmas, noise, k):
    new_sigmas = numpy.zeros((len(sigmas), 1))
    for i in range(len(sigmas)):
        new_sigmas[i,:] = expectedUpdate(sigmas[i,:], k) + noise[i,:]
    return new_sigmas

Get Expected Measurement for each Sigma

\begin{align} \underline{\xi}^i &= {\bf{h}}(k+1,\underline{\chi}^i(k+1))\\ &= \chi^i(k+1) + \chi^i(k+1)^2 + \chi^i(k+1)^3 \end{align}
In [535]:
def sigmaMeasurements(sigmas):
    # local array of expected measurements (shadows the global
    # measurement data only inside this function)
    measurements = numpy.zeros(len(sigmas))
    for i in range(len(sigmas)):
        measurements[i] = expectedMeasurement(sigmas[i,0])
    return measurements

Compute weighted mean and weighted mean measurement

A. Weights

\begin{align} w_0^m &= \frac{\lambda}{n_x + n_v + \lambda}\\ w_{i\neq 0}^m &= \frac{1}{2(n_x + n_v + \lambda)} \end{align}

B. Weighted Averages

\begin{align} \overline{x}(k+1) &= \sum_{i=0}^{2(n_x + n_v)}w_i^m \chi^i(k+1)\\ \overline{z}(k+1) &= \sum_{i=0}^{2(n_x + n_v)}w_i^m \xi^i(k+1) \end{align}
In [536]:
def getMeans(sigmas, measurements):
    center_weight = lam/(nx+nv+lam)
    other_weight = 1./(2*(nx+nv+lam))
    x_mean = center_weight * sigmas[0,:]
    z_mean = center_weight * measurements[0]
    for i in range(1,len(sigmas)):
        x_mean += other_weight * sigmas[i,:]
        z_mean += other_weight * measurements[i]
    return x_mean, z_mean
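Because the $\pm$ sigma-point pairs cancel and the mean weights sum to 1, the weighted state mean recovers $\hat{x}$ exactly before propagation. A quick check using the functions above (my own):

In [ ]:
# Before propagation, the weighted mean of the sigma points
# should reproduce the input mean exactly (up to roundoff).
Sx, Sv = getCholesky(P0, Q)
sigmas, noises = formSigmas(x0, Sx, Sv)
x_check, _ = getMeans(sigmas, sigmaMeasurements(sigmas))
print(x_check)   # should print [4.]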

Compute covariances

A. Weights

\begin{align} w_0^c &= \frac{\lambda}{n_x + n_v + \lambda} + 1 - \alpha^2 + \beta\\ w_{i\neq 0}^c &= \frac{1}{2(n_x + n_v + \lambda)} \end{align}

B. Weighted Averages

\begin{align} \overline{P}(k+1) &= \sum_{i=0}^{2(n_x+n_v)} w_i^c \left[\chi^i(k+1) - \overline{x}(k+1)\right]\left[\chi^i(k+1) - \overline{x}(k+1)\right]^T\\ \overline{P}_{xz}(k+1) &= \sum_{i=0}^{2(n_x+n_v)} w_i^c \left[\chi^i(k+1) - \overline{x}(k+1)\right]\left[\xi^i(k+1) - \overline{z}(k+1)\right]^T\\ \overline{P}_{zz}(k+1) &= \sum_{i=0}^{2(n_x+n_v)} w_i^c \left[\xi^i(k+1) - \overline{z}(k+1)\right]\left[\xi^i(k+1) - \overline{z}(k+1)\right]^T \end{align}
In [537]:
def computeCovariances(xmean, zmean, sigmas, measurements):
    center_weight = (lam/(nx+nv+lam)) + 1 - alpha**2. + beta
    other_weight = 1./(2*(nx+nv+lam))
    
    Pxx = center_weight * numpy.dot(numpy.array([sigmas[0,:] - xmean]).T,
                                    numpy.array([sigmas[0,:] - xmean]))
    Pxz = center_weight * numpy.dot(numpy.array([sigmas[0,:] - xmean]).T,
                                    numpy.array([[measurements[0] - zmean]]))
    Pzz = center_weight * numpy.dot(numpy.array([[measurements[0] - zmean]]).T,
                                    numpy.array([[measurements[0] - zmean]]))
    
    for i in range(1, len(sigmas)):
        Pxx += other_weight * numpy.dot(numpy.array([sigmas[i,:] - xmean]).T,
                                        numpy.array([sigmas[i,:] - xmean]))
        Pxz += other_weight * numpy.dot(numpy.array([sigmas[i,:] - xmean]).T,
                                        numpy.array([[measurements[i] - zmean]]))
        Pzz += other_weight * numpy.dot(numpy.array([[measurements[i] - zmean]]).T,
                                        numpy.array([[measurements[i] - zmean]]))
    return Pxx, Pxz, Pzz

Update Estimate

A. Kalman Gain

\begin{align} K &= \overline{P}_{xz}\overline{P}_{zz}^{-1} \end{align}

B. Updates

\begin{align} {\bf{\hat{x}}}(k+1) &= \overline{x}(k+1) + K\left[z(k+1) - \overline{z}(k+1)\right]\\ P(k+1) &= \overline{P}(k+1) - KRK^T \end{align}
In [538]:
def updateEstimate(xmean, Pxx, Pxz, Pzz, zmean, time_index):
    # compute the Kalman gain once and reuse it
    K = numpy.dot(Pxz, pinv(Pzz))
    xnew = numpy.array([xmean]).T + K*(measurements[time_index] - zmean)
    xnew = xnew.T[0]
    Pnew = Pxx - numpy.dot(K, numpy.dot(R, K.T))
    return xnew, Pnew

Put it all together

In [539]:
def runUKF():
    P = P0
    x = x0
    
    P_array = numpy.zeros((N, 1, 1))
    x_array = numpy.zeros((N, 1))
    P_array[0] = P
    x_array[0,:] = x
    for i in range(N):
        Sx, Sv = getCholesky(P, Q)
        sigmas, noises = formSigmas(x, Sx, Sv)
        propagated_sigmas = propagateSigmas(sigmas, noises, i)
        expected_measurements = sigmaMeasurements(propagated_sigmas)
        xmean, zmean = getMeans(propagated_sigmas, expected_measurements)
        Pxx, Pxz, Pzz = computeCovariances(xmean, zmean, propagated_sigmas,
                                           expected_measurements)
        xnew, Pnew = updateEstimate(xmean, Pxx, Pxz, Pzz, zmean, i)
        
        P_array[i] = Pnew
        x_array[i,:] = xnew
        
        P = Pnew
        x = xnew
    return x_array, P_array
ukfresults = runUKF()
ukf_traj, ukf_cov = ukfresults

6. Plot UKF Results

6a. True vs. Estimated Trajectories

In [540]:
plt.plot(ukf_traj[:,0], 'r.', label='UKF Estimates')
plt.plot(true_trajectory, 'b.', alpha=0.5, label='True Trajectory')
plt.title('Unscented Kalman Filter vs Truth')
plt.xlabel('Sample Number')
plt.legend(bbox_to_anchor=(1.04,1), loc="upper left")
plt.show()

6b. True Error vs. Estimated Covariance

In [541]:
plt.plot(true_trajectory - ukf_traj[:,0], 'r.', label='Error')
plt.plot(2*numpy.sqrt(ukf_cov[:,0,0]), 'g-', label=r'$\pm 2\sigma$ bound')
plt.plot(-2*numpy.sqrt(ukf_cov[:,0,0]), 'g-')
plt.title('Unscented Kalman Filter Error')
plt.xlabel('Sample Number')
plt.legend(bbox_to_anchor=(1.04,1), loc="upper left")
plt.show()

7. Write an EKF to Estimate Trajectory

Initial covariance and state estimate

\begin{align} P(0) &= \begin{bmatrix} 2 \end{bmatrix} && x(0) = 4\\ Q &= \begin{bmatrix}1\end{bmatrix} && R = \begin{bmatrix}0.25\end{bmatrix} \end{align}
In [486]:
P0 = numpy.array([[2.]])
x0 = numpy.array([4.])
Q = numpy.array([[1.]])
R = numpy.array([[.25]])

Propagate Estimate

\begin{align} \hat{x}_{k+1}^- &= f(\hat{x}_k^+) \end{align}
In [487]:
def propagateEstimate(state, k):
    return expectedUpdate(state, k)

Calculate F

\begin{align} F(\hat{x}_k^+) &= \left[\frac{\partial{}}{\partial{x}}f \right]_{\hat{x}_k^+}\\ &= \left[\frac{\partial{}}{\partial{x}}\left(2\arctan{\left[x(k)\right]} + 0.5 \cos{\left[\frac{\pi}{3}k\right]}\right)\right]_{\hat{x}_k^+,k}\\ &= \frac{2}{1 + \left(\hat{x}_k^+\right)^2} \end{align}
In [488]:
def getF(state):
    return numpy.array([[2./(1 + state[0]**2.)]])

Calculate H

\begin{align} H(\hat{x}_k^+) &= \left[\frac{\partial{}}{\partial{x}}h \right]_{\hat{x}_k^+}\\ &= \left[\frac{\partial{}}{\partial{x}}\left(x(k) + x(k)^2 + x(k)^3\right)\right]_{\hat{x}_k^+,k}\\ &= 1 + 2\hat{x}_k^+ + 3(\hat{x}_k^+)^2 \end{align}
In [490]:
def getH(state):
    return numpy.array([[1. + 2.*state[0] + 3.*state[0]**2.]])
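Analytic Jacobians are easy to typo, so it's worth checking them against central differences (my own sanity check, not part of the assignment):

In [ ]:
# Central-difference check of getF and getH at x = 4 (k = 0; the
# cosine forcing term has no x-dependence, so any k gives the same F).
x_test, eps, k = numpy.array([4.]), 1.e-6, 0
numF = (expectedUpdate(x_test+eps, k) - expectedUpdate(x_test-eps, k))/(2*eps)
numH = (expectedMeasurement(x_test+eps) - expectedMeasurement(x_test-eps))/(2*eps)
print(numF, getF(x_test))   # both ~ 2/17 = 0.1176...
print(numH, getH(x_test))   # both ~ 1 + 8 + 48 = 57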

Propagate Covariance

\begin{align} P_{k+1}^- &= F(\hat{x}_k^+)P_{k}^+F(\hat{x}_k^+)^T + Q \end{align}
In [492]:
def propagateCovariance(P, F):
    return numpy.dot(F, numpy.dot(P, F.T)) + Q

Find Expected Measurement

\begin{align} \hat{z}_{k+1} &= h(\hat{x}_{k+1}^-) \end{align}
In [493]:
def getExpectedMeasurement(state):
    return expectedMeasurement(state)

Compute Innovation Covariance

\begin{align} P_{zz} &= H(\hat{x}_{k+1}^-)P_{k+1}^-H(\hat{x}_{k+1}^-)^T + R \end{align}
In [494]:
def propagatePzz(H, P):
    return numpy.dot(H, numpy.dot(P, H.T)) + R

Compute State-Innovation Covariance

\begin{align} P_{xz} &= P_{k+1}^-H(\hat{x}_{k+1}^-)^T \end{align}
In [495]:
def propagatePxz(H, P):
    return numpy.dot(P, H.T)

Compute Kalman Gain

\begin{align} K &= P_{xz}P_{zz}^{-1} \end{align}
In [496]:
def getK(Pxz, Pzz):
    return numpy.dot(Pxz, pinv(Pzz))

Update State Estimate

\begin{align} \hat{x}_{k+1}^+ &= \hat{x}_{k+1}^- + K(z_{k+1} - \hat{z}_{k+1}) \end{align}
In [510]:
def newEstimate(xhat, K, zexp, time_index):
    # innovation as a length-1 array; K is 1x1
    innovation = numpy.atleast_1d(measurements[time_index] - zexp)
    return xhat + numpy.dot(K, innovation)

Update Covariance Estimate

\begin{align} P_{k+1}^+ &= P_{k+1}^- - KP_{zz}K^T\\ &= P_{k+1}^- - KH(\hat{x}_{k+1}^-)P_{k+1}^- \end{align}
In [511]:
def newCovariance(P, K, Pzz):
    return P - numpy.dot(K, numpy.dot(Pzz, K.T))
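For the optimal gain the two forms above are algebraically equivalent to $(I - KH)P_{k+1}^-$, but when roundoff creeps in, the Joseph form is the numerically safer variant because it preserves symmetry and positive-definiteness. A sketch (hypothetical name newCovarianceJoseph):

In [ ]:
# Hypothetical alternative: the Joseph-form covariance update, which
# stays symmetric positive-definite even for a suboptimal gain K.
def newCovarianceJoseph(P, K, H):
    I = numpy.eye(P.shape[0])
    A = I - numpy.dot(K, H)
    return numpy.dot(A, numpy.dot(P, A.T)) + numpy.dot(K, numpy.dot(R, K.T))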

Put it all Together

In [513]:
def runEKF():
    P = P0
    x = x0
    
    P_array = numpy.zeros((N, 1, 1))
    x_array = numpy.zeros((N, 1))
    P_array[0] = P
    x_array[0,:] = x0
    for i in range(N):
        # predicted ("minus") quantities; note that F and H are
        # evaluated at the predicted state here
        x_minus = propagateEstimate(x, i)
        F = getF(x_minus)
        H = getH(x_minus)
        P_minus = propagateCovariance(P, F)
        z_pred = getExpectedMeasurement(x_minus)
        Pzz = propagatePzz(H, P_minus)
        Pxz = propagatePxz(H, P_minus)
        K = getK(Pxz, Pzz)
        
        xnew = newEstimate(x_minus, K, z_pred, i)
        Pnew = newCovariance(P_minus, K, Pzz)
        
        P_array[i] = Pnew
        x_array[i,:] = xnew
        
        P = Pnew
        x = xnew
    return x_array, P_array
ekfresults = runEKF()
ekf_traj, ekf_cov = ekfresults

8. Show EKF Results

8a. True vs. Estimated Trajectories

In [514]:
plt.plot(ekf_traj[:,0], 'r.', label='EKF Estimates')
plt.plot(true_trajectory, 'b.', alpha=0.5, label='True Trajectory')
plt.title('Extended Kalman Filter vs Truth')
plt.xlabel('Sample Number')
plt.legend(bbox_to_anchor=(1.04,1), loc="upper left")
plt.show()

8b. True Error vs. Estimated Covariance

In [515]:
plt.plot(true_trajectory - ekf_traj[:,0], 'r.', label='Error')
plt.plot(2*numpy.sqrt(ekf_cov[:,0,0]), 'g-', label=r'$\pm 2\sigma$ bound')
plt.plot(-2*numpy.sqrt(ekf_cov[:,0,0]), 'g-')
plt.title('Extended Kalman Filter Error')
plt.xlabel('Sample Number')
plt.legend(bbox_to_anchor=(1.04,1), loc="upper left")
plt.show()

9. Reflecting on Results

9a. The true error and the reported error (from the covariance estimate) for the EKF do not agree. Why?

The EKF propagates the error covariance using the Jacobian of the dynamics, which is a first-order approximation of the nonlinear dynamics function. Our dynamics are strongly nonlinear, so the first-order approximation does a poor job of tracking the true error covariance.

9b. Which filter did the best? What price did we pay for this performance (generally speaking)?

The particle filter did the best, but it is far more computationally expensive than either of the other filters.
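One way to back this up quantitatively (my addition, using the arrays computed above):

In [ ]:
# RMS error of each filter against the true trajectory
for name, traj in [('PF', pf_traj), ('UKF', ukf_traj), ('EKF', ekf_traj)]:
    rms = numpy.sqrt(numpy.mean((true_trajectory - traj[:,0])**2.))
    print(name, rms)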

9c. If we went through this exercise for a linear system, how do you expect that the performance of the filters would compare?

For a linear system with Gaussian noise, the EKF and UKF both reduce to the ordinary Kalman filter, which is optimal, and the particle filter approaches the same answer as the number of particles grows. All three filters would perform comparably, so we would be foolish to choose a particle filter: it costs extra computation for no improvement in accuracy.
