Georgia Institute of Technology
Safety Critical Systems



Reachability problem: find an overapproximation \mathcal{R}(t, \mathcal{X}_{0}, u, \mathcal{W}) \ni x(t) of the reachable set such that \mathcal{R}(t, \mathcal{X}_{0}, u, \mathcal{W}) \cap \mathcal{O} = \emptyset, i.e., the system provably avoids the unsafe set \mathcal{O}.
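For interval (box) reach sets, the safety check \mathcal{R} \cap \mathcal{O} = \emptyset reduces to a per-axis comparison; a minimal sketch (function name is ours):

```python
import jax.numpy as jnp

def boxes_disjoint(r_lo, r_hi, o_lo, o_hi):
    # Axis-aligned boxes R and O are disjoint iff they are
    # separated along at least one coordinate axis.
    return jnp.any((r_hi < o_lo) | (o_hi < r_lo))
```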
Tools:
jax / immrax [1]: interval computation with JIT compilation, autodiff, and GPU parallelization

Main Contribution
A new refinement algorithm that extends previous approaches [2] and provides tighter interval overapproximations of reachable sets.
[2] J. K. Scott and P. I. Barton, “Bounds on the reachable sets of nonlinear control systems,” Automatica, 2013
Embedding Systems
Any system \dot{x} = f(x, u, w) induces an embedding system [\dot{\underline{x}}, \dot{\overline{x}}] = F([\underline{x}, \overline{x}], [\underline{u}, \overline{u}], [\underline{w}, \overline{w}]) that overapproximates the reachable set of the original system ([1], Proposition 9).
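For intuition, a minimal sketch of an embedding system for the linear special case \dot{x} = Ax + w (helper names and the Euler loop are ours; immrax constructs embeddings for general nonlinear f via inclusion functions):

```python
import jax.numpy as jnp
from jax import jit

def metzler_split(A):
    # Split A into its Metzler part (diagonal plus positive
    # off-diagonal entries) and the remaining negative off-diagonal part.
    off = A - jnp.diag(jnp.diag(A))
    A_pos = jnp.diag(jnp.diag(A)) + jnp.clip(off, 0.0, None)
    A_neg = jnp.clip(off, None, 0.0)
    return A_pos, A_neg

@jit
def embedding_rhs(xlo, xhi, A, wlo, whi):
    # F([xlo, xhi]) for xdot = A x + w: lower bounds couple to
    # upper bounds only through the negative off-diagonal part.
    A_pos, A_neg = metzler_split(A)
    dlo = A_pos @ xlo + A_neg @ xhi + wlo
    dhi = A_pos @ xhi + A_neg @ xlo + whi
    return dlo, dhi

def propagate(xlo, xhi, A, wlo, whi, dt=1e-3, steps=1000):
    # Forward-Euler integration of the embedding system; the box
    # [xlo(t), xhi(t)] overapproximates the reachable set.
    for _ in range(steps):
        dlo, dhi = embedding_rhs(xlo, xhi, A, wlo, whi)
        xlo, xhi = xlo + dt * dlo, xhi + dt * dhi
    return xlo, xhi
```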
Consider a full-rank H \in \mathbb{R}^{m \times n} with m > n. Any left pseudo-inverse H^{+} (i.e., H^{+} H = \mathbb{I}_n) induces a lifted system \dot{y} = Hf(H^{+}y, u, w).
Forward Invariance of \mathcal{H}
Let \mathcal{H} = \{ Hx : x \in \mathbb{R}^{n} \}. If \phi(t) is a trajectory of the lifted system, and \phi(0) \in \mathcal{H}, then \phi(t) \in \mathcal{H} for all t \ge 0 [4].
Refined embedding system: [\dot{\underline{x}}, \dot{\overline{x}}] = F\big( \mathcal{I}_{\mathcal{H}}([\underline{x}, \overline{x}]), [\underline{u}, \overline{u}], [\underline{w}, \overline{w}]\big), where the operator \mathcal{I}_{\mathcal{H}} tightens the interval using membership in \mathcal{H}.
Given any a^{\top} \in \mathbb{R}^m with aH = 0, forward invariance gives ay(t) = aHx(t) = 0 along trajectories of the lifted system. Then,
0 = \sum_{i} a_{i} y_{i} \implies y_{j} = - \frac{1}{a_{j}} \sum_{i \ne j} a_{i}y_{i} \quad \forall a_{j} \ne 0
0 = \sum_{i} a_{i} y_{i} \implies y_{j} \in - \frac{1}{a_{j}} \sum_{i \ne j} a_{i} [\underline{y}_{i}, \overline{y}_{i}], \quad \forall a_{j} \ne 0
A = \begin{bmatrix} a_{1} \\ \vdots \\ a_{N} \end{bmatrix} \implies y_{j} \in \bigcap_{k=1}^{N} - \frac{1}{a_{k, j}} \sum_{i\ne j} a_{k, i} [ \underline{y}_{i}, \overline{y}_{i}]
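A minimal sketch of this refinement in plain JAX (function name and looping structure are ours; immrax's actual refinement operator may differ):

```python
import jax.numpy as jnp

def null_vector_refine(ylo, yhi, A, eps=1e-12):
    # Tighten [ylo, yhi] using left null vectors of H (rows of A):
    # y_j lies in -(1/a_{k,j}) * sum_{i != j} a_{k,i} [ylo_i, yhi_i]
    # for every row a_k with a_{k,j} != 0; intersect over all k.
    for a in A:
        for j in range(ylo.shape[0]):
            if jnp.abs(a[j]) < eps:
                continue
            c = (-a / a[j]).at[j].set(0.0)   # y_j = sum_i c_i y_i
            # Interval sum: positive coefficients keep the bound
            # orientation, negative coefficients swap it.
            lo = jnp.sum(jnp.where(c >= 0, c * ylo, c * yhi))
            hi = jnp.sum(jnp.where(c >= 0, c * yhi, c * ylo))
            ylo = ylo.at[j].set(jnp.maximum(ylo[j], lo))
            yhi = yhi.at[j].set(jnp.minimum(yhi[j], hi))
    return ylo, yhi
```

Each pass only shrinks the box, so the refined interval still contains y(t).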
Design Goal
What are “good” left null vectors? How many should we choose? How do they relate to the selection of H?
Simplify by assuming m = n+1. Write H = \begin{bmatrix} H_{V} \\ h \end{bmatrix}, \quad H_V \in GL(n, \mathbb{R}), \quad h^\top \in \mathbb{R}^{n}
When m = n + 1, \operatorname{null} H^{\top} is one-dimensional, so any two left null vectors b = \lambda a induce the same refinement:
- \frac{1}{b_{j}} \sum_{i \ne j} b_{i}y_{i} = - \frac{1}{\lambda a_{j}} \sum_{i \ne j} \lambda a_{i}y_{i} = - \frac{1}{a_{j}} \sum_{i \ne j} a_{i}y_{i}
Canonical Left Null Vector
We call a_{h} = \begin{bmatrix} -h H_{V}^{-1} & 1 \end{bmatrix} the canonical left null vector associated with h; indeed, a_{h}H = -hH_{V}^{-1}H_{V} + h = 0.
Generalize to H = \begin{bmatrix} H_{V} \\ H_A \end{bmatrix}, \quad H_V \in GL(n, \mathbb{R}), \quad H_A \in \mathbb{R}^{(m-n) \times n}
The set of canonical left null vectors forms a basis for \operatorname{null} H^\top:
L = \begin{bmatrix} -H_{A} H_{V}^{-1} & \mathbb{I}_{m-n} \end{bmatrix}
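Constructing L from H is a single linear solve; a sketch (function name is ours):

```python
import jax.numpy as jnp

def canonical_left_null_basis(H, n):
    # H is m x n with invertible top block H_V = H[:n]; the rows of
    # L = [-H_A H_V^{-1}  I_{m-n}] are the canonical left null vectors.
    m = H.shape[0]
    H_V, H_A = H[:n], H[n:]
    return jnp.hstack([-jnp.linalg.solve(H_V.T, H_A.T).T, jnp.eye(m - n)])

H = jnp.array([[1., 0.], [0., 1.], [1., 1.], [1., 0.5]])
L = canonical_left_null_basis(H, 2)
assert jnp.allclose(L @ H, 0.0)   # rows of L annihilate H
```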
Subspace sampling: pairwise combinations of rows of L
W = \begin{bmatrix} \cos \frac{\pi}{s+1} & \sin \frac{\pi}{s+1} & 0 & \cdots & 0 \\ \cos \frac{2\pi}{s+1} & \sin \frac{2\pi}{s+1} & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \cos \frac{s\pi}{s+1} & \sin \frac{s\pi}{s+1} & 0 & \cdots & 0 \end{bmatrix}, \quad A = \begin{bmatrix} L \\ \sigma_{p_1}(W)L \\ \vdots \\ \sigma_{p_N}(W)L \end{bmatrix}
where \sigma_{p_k} permutes the columns of W so that the k-th block combines a distinct ordered pair of rows of L.
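A sketch of the sampling step, reading \sigma_{p} as a column permutation that routes the cos/sin columns of W to a chosen ordered pair of rows of L (this reading, and the function name, are our assumptions):

```python
import itertools
import jax.numpy as jnp

def pairwise_library(L, s):
    # Stack L with rotational combinations of every ordered pair of
    # its rows: cos(k pi/(s+1)) L_i + sin(k pi/(s+1)) L_j, k = 1..s.
    q = L.shape[0]                            # q = m - n
    k = jnp.arange(1, s + 1) * jnp.pi / (s + 1)
    blocks = [L]
    for i, j in itertools.permutations(range(q), 2):
        W = jnp.zeros((s, q)).at[:, i].set(jnp.cos(k)).at[:, j].set(jnp.sin(k))
        blocks.append(W @ L)                  # sigma_p(W) L for this pair
    return jnp.vstack(blocks)
```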
Sample Number Growth Bound
Our “library” A consists of \lvert {\,}^{m-n} P_2 \rvert = \frac{(m-n)!}{(m-n-2)!} = \mathcal{O}\big((m-n)^{2}\big) left null vectors, as opposed to exponentially many from uniform sampling.
Why do we like this specific basis for \operatorname{null} H^\top?
Compare a basis computed via SVD (dense, unstructured entries) with the canonical basis (sparse, structured entries):
\begin{align*} L_{\rm SVD}\left( \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{bmatrix} \right) &= \begin{bmatrix} -0.577 & -0.577 & 0.577 \end{bmatrix}, \\ L_{\rm SVD}\left( \hspace{-0.5mm} \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \\ 1 & 0.5 \end{bmatrix} \hspace{-0.5mm} \right) &= \begin{bmatrix} -0.996 & -0.707 & 0.418 & 0.577 \\ 0.092 & -0.408 & 0.908 & -1.00 \end{bmatrix} \end{align*}
\begin{align*} L\left( \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{bmatrix} \right) &= \begin{bmatrix} -1 & -1 & 1 \end{bmatrix}, \\ L\left( \hspace{-0.5mm} \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \\ 1 & 0.5 \end{bmatrix} \hspace{-0.5mm} \right) &= \begin{bmatrix} -1 & -1 & 1 & 0 \\ -1 & -0.5 & 0 & 1 \end{bmatrix} \end{align*}
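The first example can be reproduced numerically (sketch; the SVD yields an orthonormal null-space basis, which may differ from the displayed L_{\rm SVD} by sign or scaling):

```python
import jax.numpy as jnp

H = jnp.array([[1., 0.], [0., 1.], [1., 1.]])

# SVD basis of null(H^T): trailing columns of U beyond rank(H).
U, S, Vh = jnp.linalg.svd(H)
print(U[:, 2:].T)      # ~ [-0.577 -0.577 0.577]: dense, irrational entries

# Canonical basis L = [-H_A H_V^{-1}  I]: sparse, integer entries.
H_V, H_A = H[:2], H[2:]
print(jnp.hstack([-H_A @ jnp.linalg.inv(H_V), jnp.eye(1)]))   # [-1 -1 1]
```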
Theorem 1
When a lifting dimension is added to H, our basis generates a library that is a superset of the previous A. Therefore, increasing the lifting dimension can only make the refinement tighter.
Double integrator with softmax control saturation:
\begin{aligned} \dot{p}_x^1 &= v_x^1, & \dot{p}_y^1 &= v_y^1, \\ \dot{v}_x^1 &= \operatorname{softmax}(u_x, u_{\mathrm{max}}) + w_x^1, & \dot{v}_y^1 &= \operatorname{softmax}(u_y, u_{\mathrm{max}}) + w_y^1. \end{aligned}
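These dynamics in JAX (sketch; we read \operatorname{softmax}(u, u_{\mathrm{max}}) as a smooth saturation and substitute a tanh-based stand-in, which is our assumption, not necessarily the poster's exact function):

```python
import jax.numpy as jnp

def soft_sat(u, u_max):
    # Assumed smooth saturation standing in for softmax(u, u_max).
    return u_max * jnp.tanh(u / u_max)

def f(x, u, w, u_max=1.0):
    # State x = (p_x, p_y, v_x, v_y): double integrator with
    # saturated control and additive disturbance.
    px, py, vx, vy = x
    ux, uy = u
    wx, wy = w
    return jnp.array([vx, vy,
                      soft_sat(ux, u_max) + wx,
                      soft_sat(uy, u_max) + wy])
```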
Chain of double integrators using PD to follow predecessor:
\begin{aligned} \dot{p}_d^i &= v_d^i, \\ \dot{v}_d^i &= k_p \left( p_d^{i-1} - p_d^i - r \frac{v_d^{i-1}}{\lVert v^{i-1} \rVert} \right) + k_v(v_d^{i-1}-v_d^i) + w_d^i, \qquad d \in \{x, y\}. \end{aligned}
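A vectorized sketch of the follower dynamics (array layout and leader handling are our choices):

```python
import jax.numpy as jnp

def platoon_f(p, v, w, k_p=1.0, k_v=1.0, r=1.0):
    # p, v, w: (N, 2) arrays; vehicle i tracks vehicle i-1 at standoff
    # r along the predecessor's heading. The leader (index 0) is driven
    # separately and not shown here.
    pred_p, pred_v = p[:-1], v[:-1]              # predecessor states
    heading = pred_v / jnp.linalg.norm(pred_v, axis=1, keepdims=True)
    dv = k_p * (pred_p - p[1:] - r * heading) + k_v * (pred_v - v[1:]) + w[1:]
    return v[1:], dv                             # (pdot, vdot) for followers
```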
Runtime comparison across platoon sizes:
| Platoon Size | Unrefined | Sampling Refinement | CORA [3] |
|---|---|---|---|
| 3 | 1.30 \mathrm{e}{-2} | 2.92 \mathrm{e}{-2} | 3.11 |
| 6 | 1.46 \mathrm{e}{-2} | 1.97 | 5.56 |
| 9 | 2.48 \mathrm{e}{-2} | 10.3 | 10.57 |
| 12 | 2.85 \mathrm{e}{-2} | 35.0 | 12.85 |
Main takeaways:
- Sampling left null vectors of the lifted coordinates tightens interval reachable sets, and adding lifting dimensions can only improve the refinement (Theorem 1).
- The library of refinement vectors grows only quadratically in the lifting dimension m - n, rather than exponentially.