In [1]:

```
from IPython.display import Latex
import matplotlib.pyplot as plt
%matplotlib inline
import numpy
from mpl_toolkits.mplot3d import Axes3D
from numpy.linalg import inv
from numpy.linalg import pinv
from scipy.stats import norm
from scipy import integrate
from numpy import random
from copy import deepcopy
from IPython.display import clear_output
from scipy.interpolate import InterpolatedUnivariateSpline
```

This document outlines a version of the Unscented Kalman Filter that can be used to estimate quaternions. There are a few gotchas associated with attitude estimation that make it a little different from the more standard estimation problems that we've already covered.

This is an Unscented Kalman Filter which, as you all now know, is a bit less mathematically involved than an Extended Kalman Filter. I think our class time is best spent on this rather than on the EKF, however. The UKF still needs to account for the gotchas associated with attitude estimation, but allows us to spend more time discussing what actually makes attitude estimation different rather than getting into the weeds with math. Once you understand this, you'll be very well equipped to take a look at the attitude EKF.

This entire document is based on the UKF outlined in http://ancs.eng.buffalo.edu/pdf/ancs_papers/2003/uf_att.pdf. I present a simpler UKF based on theirs. I am not sure how the performance of this one compares to the performance of theirs.

Our goal is to use a gyroscope and some attitude sensors (like a Sun Sensor, horizon detector, etc.) in order to estimate the time-varying quaternion of a spacecraft.

You will recall from the previous class that the Kalman filter and its variants explode if the covariance loses positive semidefiniteness (e.g. it picks up a negative number on its diagonal). This problem becomes especially pernicious in the case of quaternions.

We've learned that there does not exist a globally nonsingular attitude representation in three dimensions. In order to have global nonsingularity, we introduce a fourth dimension and use the quaternion. All components of the quaternion may not vary independently, however. Despite the fact that the quaternion has four elements to it, it has only three degrees of freedom because we must enforce the normalization constraint. If I tell you three components of a quaternion (and we agree which half of the hypersphere we're limiting ourselves to), then you can tell me what the fourth component must be.

Think about what this means for our error covariance matrix. If we chose to run a filter on the quaternion itself, then our error covariance matrix at each timestep would tell us how much uncertainty we have in each element of the quaternion. Since one of those elements can be determined with *absolute certainty* from the other three, then you will expect that one of the diagonals of the covariance matrix will go to zero (since there's no uncertainty about it). In theory, that's ok. In practice, that means that the filter is *extremely* fragile. Any roundoff error at all could make the covariance matrix non positive semidefinite, and then we're out of luck. So, we need to represent attitude in our filter in some other way.

I am going to remind you of a few different attitude representations in order to introduce a (kind of) new one.

Of all of the attitude representations, I think Euler axis/angle is the most intuitive to imagine. It's hard for me to look at a direction cosine matrix or a quaternion and intuit the motion, but it's not so hard to imagine spinning by some angle about some axis. So, with the intuitive notion of Euler axis ${\bf{\hat{e}}}$ and Euler angle $\theta$ in mind, recall that the quaternion is given by:

\begin{align}
{\bf{q}} &= \begin{bmatrix}
\hat{e}_1 \sin{\frac{\theta}{2}}\\
\hat{e}_2 \sin{\frac{\theta}{2}} \\
\hat{e}_3 \sin{\frac{\theta}{2}} \\
\cos{\frac{\theta}{2}}
\end{bmatrix} = \begin{bmatrix}
{\bf{q}}_i\\ q_4 \end{bmatrix}
\end{align}
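If you want to play with this, here's a small numpy sketch that builds a quaternion from an Euler axis and angle (the function name `quat_from_axis_angle` is just my choice, not anything standard):

```python
import numpy as np

def quat_from_axis_angle(e_hat, theta):
    """Quaternion [q1, q2, q3, q4] from Euler axis e_hat and angle theta.

    Follows the convention above: vector part first, scalar part last.
    """
    e_hat = np.asarray(e_hat, dtype=float)
    e_hat = e_hat / np.linalg.norm(e_hat)  # make sure the axis is unit length
    return np.concatenate([e_hat * np.sin(theta / 2.0),
                           [np.cos(theta / 2.0)]])

# A 90-degree rotation about the z-axis:
q = quat_from_axis_angle([0.0, 0.0, 1.0], np.pi / 2.0)
```

Note that the result is automatically unit norm, because $\sin^2 + \cos^2 = 1$ and the axis is a unit vector.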

Rodrigues parameters are just another way of representing an orientation in space. Instead of using four dimensions, we use three. As we know, this means that we must be introducing a singularity or non-globalness. The Rodrigues parameters are the 3 $\times$ 1 column matrix given by:

\begin{align}
{\bf{r}} &= {\bf{\hat{e}}}\theta
\end{align}

We are simply taking our Euler axis and scaling it by the Euler angle. You can see that this has an ambiguity at $\pi$ radians. A rotation of $\pi$ radians about some vector is physically the same rotation as $\pi$ radians about the opposite of that vector, yet the two give different sets of Rodrigues parameters.

Note that, for a small angle $\theta$, the magnitude of this vector (its length) is simply $\theta$, which is extremely small. That may be a problem computationally if we're working with tiny angles.

Gibbs representation is much like Rodrigues parameters, except we scale the Euler axis by a different number:

\begin{align}
{\bf{g}} &= {\bf{\hat{e}}} \tan{\frac{\theta}{2}}
\end{align}

In this representation, a rotation of $180$ degrees simply cannot be represented.

If we want to get a set of Gibbs parameters from a quaternion, we can easily do so:

\begin{align}
{\bf{g}} &= \frac{{\bf{q}}_i}{q_4} = \frac{{\bf{\hat{e}}} \sin{\frac{\theta}{2}}}{\cos{\frac{\theta}{2}}} = {\bf{\hat{e}}}\tan{\frac{\theta}{2}}
\end{align}

For very small $\theta$, the magnitude of this vector approaches $\frac{\theta}{2}$.

Modified Rodrigues Parameters are almost exactly like the Gibbs representation, except we move the un-representable angle of rotation from $\pi$ to $2\pi$:

\begin{align}
{\bf{p}} &= {\bf{\hat{e}}} \tan{\frac{\theta}{4}}
\end{align}

We can also get these from a quaternion:

\begin{align}
{\bf{p}} &= \frac{{\bf{q}}_i}{1 + q_4}\\
&= \frac{{\bf{\hat{e}}}\sin{\frac{\theta}{2}}}{1 + \cos{\frac{\theta}{2}}}\\
&= \frac{{\bf{\hat{e}}}\sin{\frac{\theta}{2}}}{2 \cos^2 \frac{\theta}{4}}\\
&= \frac{{\bf{\hat{e}}}2\sin{\frac{\theta}{4}}\cos{\frac{\theta}{4}}}{2 \cos^2 \frac{\theta}{4}}\\
&= \frac{{\bf{\hat{e}}}\sin{\frac{\theta}{4}}}{\cos \frac{\theta}{4}}\\
&= {\bf{\hat{e}}} \tan{\frac{\theta}{4}}
\end{align}

For very small angles, the magnitude of this vector approaches $\frac{\theta}{4}$.

This is the new thing I want to introduce.

You can see the game that these attitude representations are playing. They're modifying the ratio of the vector and scalar parts of the quaternion in order to change the angle at which the representation diverges, and the value that the small-angle approximation approaches. So, building on this, let's consider some general transformation of the quaternion:

\begin{align}
{\bf{p}} &= f\frac{{\bf{q}}_i}{a + q_4}\\
&= {\bf{\hat{e}}}\cdot\frac{f\sin{\frac{\theta}{2}}}{a + \cos{\frac{\theta}{2}}}
\end{align}

where $f$ and $a$ are two knobs that we can turn. For small angles, this becomes:

\begin{align}
{\bf{p}} &= {\bf{\hat{e}}} \cdot \frac{f}{2(a + 1)} \theta
\end{align}

So, if we let $f=2(a+1)$, then for small angles the magnitude of our vector will simply be equal to the angle of rotation. That's handy, and it gives us a version of the modified Rodrigues parameters with exactly the property we'll want in the filter.

Again, we expect to be able to transform from any representation to any other representation. So, how do we get back to a quaternion from these generalized rodrigues parameters?

\begin{align}
q_4 &= \frac{-a||{\bf{p}}||^2 + f \sqrt{f^2 + (1-a^2)||{\bf{p}}||^2}}{f^2 + ||{\bf{p}}||^2}\\
{\bf{q_i}} &= f^{-1}(a + q_4){\bf{p}}
\end{align}

I haven't derived these either. If you want some degree of intuition, you'll notice that these two expressions look kind of like projections. The Rodrigues parameters are projections of the 4D quaternion into 3-space; this transformation recovers that extra dimension. That's a super hand-waving argument, but I don't want to bog down the discussion. It makes sense that this transformation exists, right?
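Here's a sketch of both directions of the transformation in numpy. The function names and the defaults $a=1$, $f=2(a+1)=4$ are my choices for illustration:

```python
import numpy as np

def quat_to_grp(q, a=1.0, f=4.0):
    """Generalized Rodrigues parameters from a quaternion [q1, q2, q3, q4]."""
    return f * np.asarray(q[:3], dtype=float) / (a + q[3])

def grp_to_quat(p, a=1.0, f=4.0):
    """Recover the quaternion from generalized Rodrigues parameters."""
    p = np.asarray(p, dtype=float)
    norm2 = p @ p
    # Scalar part from the projection-like formula in the text:
    q4 = (-a * norm2 + f * np.sqrt(f**2 + (1.0 - a**2) * norm2)) / (f**2 + norm2)
    qi = (a + q4) * p / f  # vector part
    return np.concatenate([qi, [q4]])

# Round trip on a small rotation about the x-axis (theta = 0.2 rad):
q = np.array([np.sin(0.1), 0.0, 0.0, np.cos(0.1)])
p = quat_to_grp(q)
```

With $f = 2(a+1)$, the magnitude of `p` for this small rotation comes out very close to the angle itself, which is the property we just argued for.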

The stuff in this section is all stuff that you already know, but it's worth repeating. Remember that the quaternion is a 4-D object that is describing 3-D space, so we must have some constraint. That constraint is simply that the quaternion must be unit length:

\begin{align}
{\bf{q}}^T{\bf{q}} &= 1
\end{align}

Remember too that this is just an attitude representation. It contains the same information as a direction cosine matrix, Euler axis/angle, Rodrigues parameters, etc. Each of these representations just stores the information differently, but they all store the same information. As such, we should expect to be able to transform from any representation to any other representation. Here, for example, is how we get a direction cosine matrix from a quaternion:

\begin{align}
A &= \left(q_4^2 - {\bf{q_i}}^T{\bf{q_i}}\right)\mathcal{I}_{3} + 2{\bf{q_i}}{\bf{q_i}}^T + 2q_4 {\bf{q_i}}^{\times}
\end{align}

where ${\bf{q_i}}^{\times}$ is just the matrix:

\begin{align}
{\bf{q_i}}^{\times} &= \begin{bmatrix}
0 & -q_3 & q_2\\
q_3 & 0 & -q_1\\
-q_2 & q_1 & 0\end{bmatrix}
\end{align}

If we wanted, we could expand that expression for $A$ and get the following:

\begin{align}
A({\bf{q}}) &= \begin{bmatrix}
1 - 2q_2^2 - 2q_3^2 & 2q_1q_2 - 2q_3q_4 & 2q_1q_3 + 2q_2q_4\\
2q_1q_2 + 2q_3q_4 & 1 - 2q_1^2 - 2q_3^2 & 2q_2q_3 - 2q_1q_4\\
2q_1q_3 - 2q_2q_4 & 2q_1q_4 + 2q_2q_3 & 1 - 2q_1^2 - 2q_2^2
\end{bmatrix}
\end{align}

These kinds of expressions can seem deeply mysterious, it's not always obvious (at least to me) where each term is coming from and why it's there. We *know*, however, that the quaternion and the DCM contain the same information, so we *know* that there's a transformation between them. This one looks nasty, and I'm not going to derive it here, but it makes sense that it exists.
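Here is that expanded expression written out as a function (a sketch; `quat_to_dcm` is my own name for it):

```python
import numpy as np

def quat_to_dcm(q):
    """Direction cosine matrix A(q) from a quaternion [q1, q2, q3, q4]."""
    q1, q2, q3, q4 = q
    return np.array([
        [1 - 2*q2**2 - 2*q3**2, 2*q1*q2 - 2*q3*q4,     2*q1*q3 + 2*q2*q4],
        [2*q1*q2 + 2*q3*q4,     1 - 2*q1**2 - 2*q3**2, 2*q2*q3 - 2*q1*q4],
        [2*q1*q3 - 2*q2*q4,     2*q1*q4 + 2*q2*q3,     1 - 2*q1**2 - 2*q2**2],
    ])

# The "no rotation" quaternion should give the identity matrix:
A = quat_to_dcm(np.array([0.0, 0.0, 0.0, 1.0]))
```

A quick sanity check is that the result is always orthogonal (a proper rotation matrix), since the quaternion is unit norm.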

If we want to apply successive rotations using direction cosine matrices, we just multiply those matrices together in the particular order that we want the rotations to occur. Since quaternions are just another way of holding the same information that a direction cosine matrix holds, we expect to be able to do something similar. And we can. Suppose we want to rotate by ${\bf{q}}_d$ *then* by ${\bf{q}}_c$. We "multiply" the quaternions in the following way:

\begin{align}
q_c \otimes q_d
\end{align}

where that operator means the following:

\begin{align}
q_c \otimes q_d &= \begin{bmatrix}
q_{c4} & q_{c3} & -q_{c2} & q_{c1}\\
-q_{c3} & q_{c4} & q_{c1} & q_{c2}\\
q_{c2} & -q_{c1} & q_{c4} & q_{c3}\\
-q_{c1} & -q_{c2} & -q_{c3} & q_{c4}
\end{bmatrix} \begin{bmatrix}
q_{d1} \\ q_{d2} \\ q_{d3} \\ q_{d4}\end{bmatrix}
\end{align}

which, of course, evaluates to our new $4 \times 1$ quaternion that is the composite of ${\bf{q_c}}$ and ${\bf{q_d}}$. Here again, I'm likely annoying you by not deriving this. Those of you that are curious can (and should!) look up the derivation, I don't want to bog down this discussion with that. A quaternion is just an attitude representation. We know that we can combine rotations, so we know that there is an expression for combining quaternions. As unsatisfying as that is, I'd like to leave it at that for now.
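In code, that matrix-vector product looks like the following sketch (`quat_mult` is my name for it):

```python
import numpy as np

def quat_mult(qc, qd):
    """Compose rotations: rotate by qd first, then by qc (q_c ⊗ q_d)."""
    c1, c2, c3, c4 = qc
    M = np.array([
        [ c4,  c3, -c2,  c1],
        [-c3,  c4,  c1,  c2],
        [ c2, -c1,  c4,  c3],
        [-c1, -c2, -c3,  c4],
    ])
    return M @ np.asarray(qd, dtype=float)

# Two successive 45-degree rotations about z should equal one 90-degree rotation:
q45 = np.array([0.0, 0.0, np.sin(np.pi / 8.0), np.cos(np.pi / 8.0)])
q90 = quat_mult(q45, q45)
```

Composing with the "no rotation" quaternion $\begin{bmatrix}0 & 0 & 0 & 1\end{bmatrix}^T$ leaves a quaternion unchanged, just as multiplying by the identity matrix would.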

Remember that, in terms of Euler axis/angle, the quaternion is given by:

\begin{align}
{\bf{q}} &= \begin{bmatrix}
{\bf{\hat{e}}} \sin{\frac{\theta}{2}}\\
\cos{\frac{\theta}{2}}
\end{bmatrix}
\end{align}

Keep in mind what an inverse is. For a rotation, an inverse is a second rotation that brings the body back to its original orientation. The quaternion for "no rotation" is $\begin{bmatrix}0 & 0 & 0 & 1\end{bmatrix}^T$ (because $\theta$ is zero, see the above equation). So the inverse of some quaternion ${\bf{q}}$ is some other quaternion ${\bf{q}}^{-1}$ that, when multiplied with ${\bf{q}}$ as shown in the previous section, yields the quaternion for no rotation. More simply, it's the quaternion that undoes the quaternion in question.

So, we could solve for that. Or, we could think about this for a moment. Looking at the expression for the quaternion shown above in terms of Euler axis/angle, what other quaternion "undoes" that rotation? Well, one that represents a rotation in the *opposite direction* along the same axis and by the same angle. *Or*, a rotation by the same angle along the axis opposite to the original axis. So, we know that:

\begin{align}
{\bf{q}}^{-1} &= \begin{bmatrix}
-{\bf{q}}_i\\q_4
\end{bmatrix} & \text{or} & \begin{bmatrix}
{\bf{q_i}} \\ -q_4\end{bmatrix}
\end{align}

In the first expression, I flipped the axis and kept the same angle (${\bf{\hat{e}}} \rightarrow -{\bf{\hat{e}}}$). In the second, I kept the same axis and flipped the angle ($\theta \rightarrow -\theta$). Both undo the original rotation in exactly the same way.

Suppose we want to find the "difference" between two quaternions. A good way to think about this is to ask "to what extent does this quaternion undo this other quaternion." Based on the previous section, we know how to do this. We can multiply one quaternion with the *inverse* of the other quaternion. This is the same as rotating according to the first quaternion, and then rotating back the way you just came using the other. If the two quaternions are the same, you end up with $\begin{bmatrix} 0 & 0 & 0 & 1\end{bmatrix}^T$ (no rotation). If they aren't the same, you end up with some quaternion that represents the difference between these two.

\begin{align}
\text{Difference between ${\bf{q_c}}$ and ${\bf{q_d}}$} &: {\bf{q_c}}^{-1} \otimes {\bf{q_d}}
\end{align}
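A quick sketch of the inverse and the difference (the helper names are mine; `quat_mult` repeats the composition rule from the earlier section so the block stands alone):

```python
import numpy as np

def quat_mult(qc, qd):
    """Compose rotations: q_c ⊗ q_d."""
    c1, c2, c3, c4 = qc
    M = np.array([
        [ c4,  c3, -c2,  c1],
        [-c3,  c4,  c1,  c2],
        [ c2, -c1,  c4,  c3],
        [-c1, -c2, -c3,  c4],
    ])
    return M @ np.asarray(qd, dtype=float)

def quat_inv(q):
    """Inverse rotation: flip the vector part, keep the scalar part."""
    return np.array([-q[0], -q[1], -q[2], q[3]])

def quat_diff(qc, qd):
    """Quaternion representing the difference between qc and qd."""
    return quat_mult(quat_inv(qc), qd)

# The difference between a quaternion and itself is "no rotation":
q = np.array([0.0, 0.0, np.sin(0.3), np.cos(0.3)])
d = quat_diff(q, q)
```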

We expect that the rate of change of a quaternion should be related to the angular velocity of the spacecraft. Here is that expression (again, shamefully underived).

\begin{align}
{\bf{\dot{q}}} &= \frac{1}{2} \begin{bmatrix}
q_4 & -q_3 & q_2\\
q_3 & q_4 & -q_1\\
-q_2 & q_1 & q_4\\
-q_1 & -q_2 & -q_3
\end{bmatrix}{\bf{\omega}}^{B/N}
\end{align}
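As a sketch, we can integrate this kinematic equation numerically for a constant spin about $z$ and check it against the axis/angle solution. The function name and the crude Euler integration (with re-normalization, foreshadowing a point we'll hit later) are my choices:

```python
import numpy as np

def quat_deriv(q, omega):
    """Quaternion kinematics: q_dot = (1/2) Xi(q) omega."""
    q1, q2, q3, q4 = q
    Xi = np.array([
        [ q4, -q3,  q2],
        [ q3,  q4, -q1],
        [-q2,  q1,  q4],
        [-q1, -q2, -q3],
    ])
    return 0.5 * Xi @ np.asarray(omega, dtype=float)

# Spin at 0.1 rad/s about z for one second, starting from "no rotation":
q = np.array([0.0, 0.0, 0.0, 1.0])
omega = np.array([0.0, 0.0, 0.1])
dt = 0.001
for _ in range(1000):
    q = q + quat_deriv(q, omega) * dt
    q = q / np.linalg.norm(q)  # keep the unit-norm constraint

# After 1 s we should have rotated 0.1 rad about z.
```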

Here's what we know:

- A quaternion is just another way of representing a rotation. It contains the same information as a DCM or any other attitude representation, so we expect to be able to transform from it to any other representation, or from any other representation to it.
- Because we are using a 4-vector to describe 3 dimensions, not all components of the vector vary independently. We have a unity norm constraint.
- Just like we can combine rotation matrices, we can combine quaternions (they are the same information). We have an expression for that.
- Just like we can "undo" rotations with rotation matrices, so too can we "undo" quaternions with quaternion inverses, which have an intuitive form.
- By seeing the extent to which one quaternion undoes another quaternion, we can get a third quaternion that represents the difference between those two.

There are some things that we need to solve for/decide upon before we start the filter.

What variables will represent our state? Recall, we cannot simply use the quaternion because it has that unity norm constraint which will drive our covariance straight to zero for one of our states. So what should we use?

Let's have the filter run on something other than the quaternion itself. Instead, our state will include the *unconstrained* generalized rodrigues parameters that represent our *deviation from our current quaternion estimate.*

\begin{align}
{\bf{x}} &= \begin{bmatrix}
{\bf{\delta p}}\\ \vdots
\end{bmatrix}
\end{align}

I've included dots because we aren't quite finished. But this represents one of the little caveats to consider with attitude estimation. You need to be very careful about how you choose your state. You may rightfully point out that our generalized Rodrigues parameters don't work for all angles. That's true, but if our estimate for the quaternion is relatively close to the true quaternion (the error is relatively small), then they work just fine.

The next state that you'll want to include is the angular rate, ${\bf{\omega}}$. Our states in all the filters we've considered thus far have been something like position and something like velocity, and our equation of motion tells us how we update those variables between timesteps. Since we have angular deviation (angular "position"), we expect to need to include our angular velocity, and then we expect to update each of these state variables at each timestep using the equations of motion that we've spent a semester learning how to derive.

With attitude filters, however, people rarely do that. Most systems will include a gyroscope as a sensor, and you can pull a trick with the gyroscope.

The most commonly used model for gyroscopes is given by the following equation:

\begin{align}
\tilde{{\bf{\omega}}}(t) &= {\bf{\omega}}(t) + {\bf{\beta}}(t) + \eta_v(t)
\end{align}

where $\tilde{\omega}$ is the measurement that actually comes out of the gyroscope, $\omega$ is the true angular velocity of the spacecraft, $\beta$ is a time-varying bias, and $\eta_v$ is zero-mean Gaussian white noise. This equation says that the true angular velocity is equal to our measurement minus the bias, minus some noise. The bias (or "drift") of the gyroscope is driven by a zero-mean Gaussian random process:

\begin{align}
\dot{\beta}(t) &= \eta_u(t)
\end{align}

where

\begin{align}
E\left[\eta_v \eta_v^T\right] &= I_{3 \times 3} \sigma_v^2\\
E\left[\eta_u \eta_u^T \right] &= I_{3 \times 3} \sigma_u^2
\end{align}

These are zero-mean Gaussian random processes. **Given how we're using the gyro, does the noise on the gyro represent process noise or measurement noise?** What does this get us? This allows us to simplify our dynamics model considerably. We say that our best guess for the spacecraft angular velocity at timestep $k$ is given by:

\begin{align}
\omega_k^+ &= \tilde{\omega}_k - \hat{\beta}_k^+
\end{align}

Where, since $\beta$ is driven by a zero-mean process:

\begin{align}
\hat{\beta}_k^- &= \hat{\beta}_{k-1}^+
\end{align}

We're not propagating an estimate for angular velocity from timestep to timestep. Instead, we're propagating an estimate for the *bias in our gyroscope* from timestep to timestep. Then, we assume that our true angular velocity is simply our measured gyro values minus our estimate for the bias, which we don't expect to change between timesteps.
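Here's a tiny simulation of that trick. All the specific numbers (noise levels, rates, bias, seed) are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_v = 1e-3   # assumed gyro white-noise level, rad/s
sigma_u = 1e-5   # assumed bias random-walk intensity
dt = 0.1         # sample interval, s

omega_true = np.array([0.01, -0.02, 0.005])  # true body rates, rad/s
beta_true = np.array([2e-3, -1e-3, 3e-3])    # true gyro bias, rad/s

# One gyro sample: omega_tilde = omega + beta + eta_v,
# with the bias drifting as a random walk between samples:
beta_true = beta_true + rng.normal(0.0, sigma_u * np.sqrt(dt), 3)
omega_tilde = omega_true + beta_true + rng.normal(0.0, sigma_v, 3)

# If our bias estimate is good, measurement-minus-bias recovers the rate:
beta_hat = beta_true.copy()        # pretend the filter has converged
omega_hat = omega_tilde - beta_hat
```

With a converged bias estimate, the rate estimate is off only by the white-noise term, which is exactly why this substitution lets us drop the rate dynamics from the filter.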

So, now we can fill out the rest of our states:

\begin{align}
{\bf{x}} &= \begin{bmatrix}
{\bf{\delta p}}\\ \beta
\end{bmatrix}
\end{align}

Our filter will keep track of three Rodrigues parameters that represent our estimation error, and a $3 \times 1$ vector that represents the amount of bias in each axis of our gyro. We, the engineers, will then convert each of these estimation errors back into a quaternion with which we can update our quaternion estimate, but the filter will be blind to that.

In addition to the gyro (which we are using differently than our other sensors), we have attitude observations from sensors like Sun Sensors, horizon sensors, etc. Our measurement model should take our *state* as an input, and return the expected *measurements*. Keep in mind, we know the inertial locations of each of these objects being sensed (Sun, horizon, etc.). That's the whole point of these sensors. So, the function that transforms our state to each of these measurements is:

\begin{align}
{\bf{z}}_i &= A({\bf{q}}){\bf{r}}_i + \nu_i
\end{align}

where $A({\bf{q}})$ is the DCM formed from our quaternion estimate, as shown previously, ${\bf{r_i}}$ is the known inertial vector, and $\nu_i$ is zero-mean Gaussian random noise.

BUT WAIT! We aren't using the quaternion as a filter state! I know, wait until later and we'll solve this problem. This indicates that we'll need an extra step in our filter. If we have a bunch of these sensors, then our measurement model looks like this:

\begin{align}
{\bf{z}}_k &= \begin{bmatrix}
A({\bf{q}}){\bf{r}}_1\\
A({\bf{q}}){\bf{r}}_2\\
\vdots\\
A({\bf{q}}){\bf{r}}_N
\end{bmatrix}_k + \begin{bmatrix}
\nu_1\\ \nu_2 \\ \vdots \\ \nu_N\end{bmatrix}_k
\end{align}

\begin{align}
R &= \text{diag}\begin{bmatrix}
\sigma_1^2 & \sigma_2^2 & \cdots & \sigma_N^2\end{bmatrix}
\end{align}
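A sketch of that stacked measurement model (the names and reference vectors are illustrative; `quat_to_dcm` repeats the expanded DCM formula from earlier so the block stands alone):

```python
import numpy as np

def quat_to_dcm(q):
    """Direction cosine matrix A(q) from a quaternion [q1, q2, q3, q4]."""
    q1, q2, q3, q4 = q
    return np.array([
        [1 - 2*q2**2 - 2*q3**2, 2*q1*q2 - 2*q3*q4,     2*q1*q3 + 2*q2*q4],
        [2*q1*q2 + 2*q3*q4,     1 - 2*q1**2 - 2*q3**2, 2*q2*q3 - 2*q1*q4],
        [2*q1*q3 - 2*q2*q4,     2*q1*q4 + 2*q2*q3,     1 - 2*q1**2 - 2*q2**2],
    ])

def measurement_model(q, ref_vectors):
    """Stack the noiseless body-frame predictions A(q) r_i for each known
    inertial reference vector r_i."""
    A = quat_to_dcm(q)
    return np.concatenate([A @ r for r in ref_vectors])

# Two made-up inertial reference directions (say, Sun and a star):
refs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])]
z = measurement_model(np.array([0.0, 0.0, 0.0, 1.0]), refs)
```

With the "no rotation" quaternion, the predicted measurements are just the inertial vectors themselves.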

Here is how our states change from one timestep to the next. Discrete versions of the second equation can be found (one is provided in the paper linked at the top of this page), or one can numerically integrate to the next timestep as shown here.

\begin{align}
\beta_{k+1}^- &= \beta_{k}^+\\
q_{k+1}^- &= q_k^+ + \int_{t_k}^{t_{k+1}} \frac{1}{2} \begin{bmatrix}
q_4 & -q_3 & q_2\\
q_3 & q_4 & -q_1\\
-q_2 & q_1 & q_4\\
-q_1 & -q_2 & -q_3
\end{bmatrix}{\bf{\omega_k^+}}^{B/N}\, dt
\end{align}

where, recall

\begin{align}
\omega_k^+ &= \tilde{\omega}_k - \hat{\beta}_k^+
\end{align}

Again, I have propagation of a quaternion here despite the fact that I said quaternions are a bad choice for filter state. This will become clear in the coming parts.

Filter steps

We begin each timestep with our current quaternion and gyro bias estimates,

\begin{align}
{\bf{\hat{q}}}_k^+ && \beta_k^+
\end{align}

and the current error covariance,

\begin{align}
P_k^+
\end{align}

The filter state is initialized as

\begin{align}
{\bf{\hat{x}}}_k^+ &= \begin{bmatrix}
0^T\\
\beta_k^{+T}\end{bmatrix}
\end{align}

We choose the parameter $a$, and set $f=2(a+1)$. Does it make sense that, at each timestep, we initialize the attitude error to zero? If it were something else, we would already have accounted for it in our estimate of the quaternion.

To generate sigma points, we factor the state covariance and the process noise covariance with a Cholesky decomposition:

\begin{align}
P(k) &= S_x(k)S_x(k)^T\\
&\rightarrow S_x(k) = \text{chol}(P(k))\\
Q(k) &= S_v(k)S_v(k)^T\\
&\rightarrow S_v(k) = \text{chol}(Q(k))
\end{align}

and label the columns of each factor:

\begin{align}
S_x(k) &= \begin{bmatrix} \underline{s}_{x_1k} & \underline{s}_{x_2k} & \cdots & \underline{s}_{x_{n_x}k}\end{bmatrix}\\
S_v(k) &= \begin{bmatrix} \underline{s}_{v_1k} & \underline{s}_{v_2k} & \cdots & \underline{s}_{v_{n_v}k}\end{bmatrix}
\end{align}

The sigma points, and the process-noise samples $v$ paired with them, are then:

\begin{align}
\chi_k^0 &= {\bf{\hat{x}}}(k) && v_k^0 = 0 && i=0\\
\chi_k^i &= {\bf{\hat{x}}}(k) + \underline{s}_{x_ik} \sqrt{n_x + n_v + \lambda} && v_k^i=0 && i=1,\dots,n_x\\
\chi_k^i &= {\bf{\hat{x}}}(k) - \underline{s}_{x_{(i-n_x)}k} \sqrt{n_x + n_v + \lambda} && v_k^i=0 && i=n_x+1,\dots,2n_x\\
\chi_k^i &= {\bf{\hat{x}}}(k) && v_k^i=\underline{s}_{v_{(i-2n_x)}k} \sqrt{n_x + n_v + \lambda} && i=2n_x + 1,\dots,2n_x + n_v\\
\chi_k^i &= {\bf{\hat{x}}}(k) && v_k^i=-\underline{s}_{v_{(i-2n_x - n_v)}k} \sqrt{n_x + n_v + \lambda} && i=2n_x+n_v+1,\dots,2(n_x+n_v)
\end{align}

where $\lambda$ is a scaling parameter that is tunable.

\begin{align}
\lambda &= \alpha^2(n_x + \kappa) - n_x\\
\kappa &= 3-n_x
\end{align}
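Here's a sketch of the sigma point generation for our $n_x = n_v = 6$ problem. The covariance values and $\alpha$ are placeholders, not tuned numbers:

```python
import numpy as np

n_x, n_v = 6, 6                       # state and process-noise dimensions
alpha = 0.1
kappa = 3.0 - n_x
lam = alpha**2 * (n_x + kappa) - n_x
gamma = np.sqrt(n_x + n_v + lam)      # sigma point spread

P = np.eye(n_x) * 1e-4                # placeholder state covariance
Q = np.eye(n_v) * 1e-6                # placeholder process-noise covariance
x_hat = np.zeros(n_x)                 # [delta-p; beta] estimate

S_x = np.linalg.cholesky(P)           # P = S_x S_x^T
S_v = np.linalg.cholesky(Q)           # Q = S_v S_v^T

# 2(n_x + n_v) + 1 sigma points, each paired with a process-noise sample v:
chi = [x_hat]
v = [np.zeros(n_v)]
for i in range(n_x):
    chi.append(x_hat + gamma * S_x[:, i])
    v.append(np.zeros(n_v))
for i in range(n_x):
    chi.append(x_hat - gamma * S_x[:, i])
    v.append(np.zeros(n_v))
for i in range(n_v):
    chi.append(x_hat)
    v.append(gamma * S_v[:, i])
for i in range(n_v):
    chi.append(x_hat)
    v.append(-gamma * S_v[:, i])
chi, v = np.array(chi), np.array(v)

# Mean weights (defined a bit further down); the weighted mean recovers x_hat:
n_tot = n_x + n_v
w_m = np.full(2 * n_tot + 1, 1.0 / (2.0 * (n_tot + lam)))
w_m[0] = lam / (n_tot + lam)
```

The point of the construction is that the weighted mean of the sigma points reproduces $\hat{x}$ and the weighted spread reproduces $P$.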

Each sigma point's attitude-error part is converted back into an error quaternion (vector part $\delta {\bf{q}}_{i}$, scalar part $\delta q_{4}$):

\begin{align}
\delta q_{4_{k}}(i) &= \frac{-a||\chi_{k}^{\delta p}(i)||^{2} + f\sqrt{f^2 + (1-a^2)||\chi_{k}^{\delta p}(i)||^2}}{f^2 + ||\chi_{k}^{\delta p}(i)||^2}\\
\delta {\bf{q}}_{i_{k}}(i) &= f^{-1}\left(a + \delta q_{4_{k}}(i)\right)\chi_{k}^{\delta p}(i)
\end{align}

Each error quaternion is then composed with the current quaternion estimate to give a full quaternion per sigma point:

\begin{align}
q_k(i) &= {\bf{\delta q}}_k(i) \otimes {\bf{\hat{q}}}_k^+
\end{align}

\begin{align}
\beta_{k+1}^-(i) &= \beta_{k}^+(i)\\
q_{k+1}^-(i) &= q_k(i) + \int_{t_k}^{t_{k+1}} \frac{1}{2} \begin{bmatrix}
q_4 & -q_3 & q_2\\
q_3 & q_4 & -q_1\\
-q_2 & q_1 & q_4\\
-q_1 & -q_2 & -q_3
\end{bmatrix}{\bf{\omega}}_k^+(i)\, dt
\end{align}

where each sigma point uses its own bias to form its rate estimate:

\begin{align}
\omega_k^+(i) &= \tilde{\omega}_k - \hat{\beta}_k^+(i)
\end{align}

Be sure that the resulting quaternion is unit norm. There exist integration methods for doing that, or a hacky solution is to simply normalize it after propagation.

Each propagated sigma point is then pushed through the measurement model:

\begin{align}
\underline{\xi}^i &= {\bf{h}}(\underline{\chi}^i(k+1))
\end{align}

where, recall

\begin{align}
{\bf{z}}_k &= {\bf{h}}({\bf{x_k}}) = \begin{bmatrix}
A({\bf{q}}){\bf{r}}_1\\
A({\bf{q}}){\bf{r}}_2\\
\vdots\\
A({\bf{q}}){\bf{r}}_N
\end{bmatrix}_k
\end{align}

Next, we convert each propagated quaternion back into the filter's state representation. First we form each sigma point's error quaternion relative to the propagated mean (zeroth) sigma point:

\begin{align}
\delta {\bf{q}}_{k+1}^-(i) &= {\bf{\hat{q}}}_{k+1}^-(i) \otimes \left[{\bf{\hat{q}}}_{k+1}^-(0)\right]^{-1}
\end{align}

and then convert those error quaternions (vector part $\delta {\bf{q}}_{i}$, scalar part $\delta q_{4}$) into generalized Rodrigues parameters:

\begin{align}
\chi_{k+1}^{\delta p}(0) &= \underline{0}\\
\chi_{k+1}^{\delta p}(i) &= f\frac{\delta {\bf{q}}_{i_{k+1}}^-(i)}{a + \delta q_{4_{k+1}}^-(i)}\\
\chi_{k+1}^{\beta}(i) &= \chi_{k}^{\beta}(i)
\end{align}

Weight for the zeroth sigma point (used when computing means):

\begin{align}
w_0^m &= \frac{\lambda}{n_x + n_v + \lambda}
\end{align}

And the weights for all other sigma points:

\begin{align}
w_{i\neq 0}^m &= \frac{1}{2(n_x + n_v + \lambda)}
\end{align}

The predicted state and measurement means are the weighted sums over all sigma points:

\begin{align}
\overline{x}(k+1) &= \sum_{i=0}^{2(n_x + n_v)}w_i^m \chi^i(k+1)\\
\overline{z}(k+1) &= \sum_{i=0}^{2(n_x + n_v)}w_i^m \xi^i(k+1)
\end{align}

The covariance weight for the zeroth sigma point is:

\begin{align}
w_0^c &= \frac{\lambda}{n_x + n_v + \lambda} + 1 - \alpha^2 + \beta
\end{align}

where $\beta$ here is the standard UKF weighting parameter (typically $2$ for Gaussian distributions), not the gyro bias.

And all other sigma point weights:

\begin{align}
w_{i\neq 0}^c &= \frac{1}{2(n_x + n_v + \lambda)}
\end{align}

Using the covariance weights, the predicted state covariance, the state-measurement cross covariance, and the measurement covariance are:

\begin{align}
\overline{P}(k+1) &= \sum_{i=0}^{2(n_x+n_v)} w_i^c \left[\chi^i(k+1) - \overline{x}(k+1)\right]\left[\chi^i(k+1) - \overline{x}(k+1)\right]^T\\
\overline{P}_{xz}(k+1) &= \sum_{i=0}^{2(n_x+n_v)} w_i^c \left[\chi^i(k+1) - \overline{x}(k+1)\right]\left[\xi^i(k+1) - \overline{z}(k+1)\right]^T\\
\overline{P}_{zz}(k+1) &= \sum_{i=0}^{2(n_x+n_v)} w_i^c \left[\xi^i(k+1) - \overline{z}(k+1)\right]\left[\xi^i(k+1) - \overline{z}(k+1)\right]^T
\end{align}

The Kalman gain is formed from the innovation covariance $\overline{P}_{zz} + R$ (the measurement sigma points were propagated without measurement noise, so $R$ enters here):

\begin{align}
K &= \overline{P}_{xz}\left(\overline{P}_{zz} + R\right)^{-1}
\end{align}

\begin{align}
{\bf{\hat{x}}}(k+1) &= \overline{x}(k+1) + K\left[z(k+1) - \overline{z}(k+1)\right]\\
P(k+1) &= \overline{P}(k+1) - K\left(\overline{P}_{zz} + R\right)K^T
\end{align}
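Numerically, the gain and update step looks like the following sketch. The covariances here are stand-in values rather than outputs of a real filter run, and I add $R$ to $\overline{P}_{zz}$ to form the innovation covariance, since the measurement sigma points were formed without measurement noise:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 6

# Stand-in predicted quantities (any consistent positive-definite values work):
P_bar = np.eye(n) * 1e-2               # predicted state covariance
P_xz = rng.normal(size=(n, m)) * 1e-4  # state-measurement cross covariance
P_zz = np.eye(m) * 1e-3                # predicted measurement covariance
R = np.eye(m) * 1e-4                   # measurement-noise covariance

x_bar = np.zeros(n)                    # predicted state mean
z_bar = np.zeros(m)                    # predicted measurement mean
z = rng.normal(size=m) * 1e-2          # an actual measurement

P_vv = P_zz + R                        # innovation covariance
K = P_xz @ np.linalg.inv(P_vv)         # Kalman gain
x_hat = x_bar + K @ (z - z_bar)        # state update
P = P_bar - K @ P_vv @ K.T             # covariance update
```

The updated covariance should shrink while staying symmetric positive definite; if it ever stops being positive definite, you are seeing exactly the fragility discussed at the top of this document.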

Finally, we convert the updated attitude-error estimate back into an error quaternion:

\begin{align}
\delta \hat{q}_{4}(k+1) &= \frac{-a||{\bf{\hat{x}}}_{k+1}^{\delta p}||^{2} + f\sqrt{f^2 + (1-a^2)||{\bf{\hat{x}}}_{k+1}^{\delta p}||^2}}{f^2 + ||{\bf{\hat{x}}}_{k+1}^{\delta p}||^2}\\
\delta \hat{q}_i(k+1) &= f^{-1}\left(a + \delta \hat{q}_{4}(k+1)\right){\bf{\hat{x}}}_{k+1}^{\delta p}
\end{align}

and use it to update our quaternion estimate:

\begin{align}
\hat{\bf{q}}_{k+1}^+ &= {\bf{\delta q}}_{k+1} \otimes {\bf{\hat{q}}}_{k+1}^-
\end{align}
