Automatic and Scalable Safety Verification Using Interval Reachability With Subspace Sampling
FrBO3 @ CDC 2025

Brendan Gould, Akash Harapanahalli, Sam Coogan

Georgia Institute of Technology

Why Reachability?

“Safety Critical Systems”:

  • One definition of safety: prove the system avoids an obstacle set \mathcal{O} under all disturbances in \mathcal{W}

Reachability problem: find some \mathcal{R}(t, \mathcal{X}_{0}, u, \mathcal{W}) \ni x(t) such that \mathcal{R}(t, \mathcal{X}_{0}, u, \mathcal{W}) \cap \mathcal{O} = \emptyset

Long Term Goal: Online Reachability

Tools:

  • Interval based reachability
  • jax / immrax [1]: Interval computation w/ JIT, Autodiff, and GPU Parallelization

Applications:

  • Reachable set aware MPC


This Work: Fast, Automated Interval Refinement

  • Interval reachability techniques are historically over-conservative
  • Refinement is an additional step that helps mitigate this conservatism
  • This induces a tradeoff between speed and accuracy

Main Contribution

A new refinement algorithm that extends previous approaches [2] and provides

  1. A guarantee that bounds are monotonic in refinement effort
  2. A big-O bound on the extra computation this requires

[2] J. K. Scott and P. I. Barton, “Bounds on the reachable sets of nonlinear control systems,” Automatica, 2013

Interval Reachability

Embedding Systems

Any system \dot{x} = f(x, u, w) induces an embedding system [\dot{\underline{x}}, \dot{\overline{x}}] = F([\underline{x}, \overline{x}], [\underline{u}, \overline{u}], [\underline{w}, \overline{w}]) that overapproximates the reachable set of the original system ( [1], Proposition 9).
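As a minimal JAX sketch of the idea (not immrax's construction, which builds tighter mixed-monotone embeddings): for a linear system \dot{x} = Ax + w, the natural interval inclusion already yields a valid embedding system.

```python
# Interval embedding of xdot = A x + w, w in [w_lo, w_hi], via natural
# inclusion: split A into nonnegative and nonpositive parts so that the
# derivative bounds hold for every x in [x_lo, x_hi].
import jax.numpy as jnp
from jax.experimental.ode import odeint

A = jnp.array([[0.0, 1.0], [-2.0, -1.0]])
Ap, An = jnp.maximum(A, 0.0), jnp.minimum(A, 0.0)  # A = Ap + An

def embedding(state, t, w_lo, w_hi):
    x_lo, x_hi = state
    dx_lo = Ap @ x_lo + An @ x_hi + w_lo   # lower bound on xdot
    dx_hi = Ap @ x_hi + An @ x_lo + w_hi   # upper bound on xdot
    return (dx_lo, dx_hi)

x0_lo, x0_hi = jnp.array([0.9, -0.1]), jnp.array([1.1, 0.1])
w_lo, w_hi = jnp.array([-0.05, -0.05]), jnp.array([0.05, 0.05])
ts = jnp.linspace(0.0, 1.0, 50)
traj_lo, traj_hi = odeint(embedding, (x0_lo, x0_hi), ts, w_lo, w_hi)
# [traj_lo[k], traj_hi[k]] overapproximates the reachable set at time ts[k].
```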

Interval Refinement: Lifted Systems

Consider a full column rank H \in \mathbb{R}^{m \times n} with m > n. Any left pseudo-inverse H^+ (i.e., H^+ H = \mathbb{I}_n) induces a lifted system \dot{y} = Hf(H^{+}y, u, w)
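A sketch of the lift as a higher-order function (the name lift is ours, not the immrax API):

```python
import jax.numpy as jnp

def lift(f, H):
    """Lift dynamics f(x, u, w) on R^n to R^m through a full column rank
    H (m x n, m > n): ydot = H f(H^+ y, u, w)."""
    H_pinv = jnp.linalg.pinv(H)  # a left pseudo-inverse: H_pinv @ H = I_n
    def f_lifted(y, u, w):
        return H @ f(H_pinv @ y, u, w)
    return f_lifted
```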

Forward Invariance of \mathcal{H}

Let \mathcal{H} = \{ Hx : x \in \mathbb{R}^{n} \}. If \phi(t) is a trajectory of the lifted system, and \phi(0) \in \mathcal{H}, then \phi(t) \in \mathcal{H} for all t \ge 0 [4].

Interval Refinement

Refined embedding system: [\dot{\underline{x}}, \dot{\overline{x}}] = F\big( \mathcal{I}_{\mathcal{H}}([\underline{x}, \overline{x}]), [\underline{u}, \overline{u}], [\underline{w}, \overline{w}]\big)
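In code, the refinement operator simply preprocesses the interval before the embedding dynamics are evaluated. A sketch (F and I_H are placeholders; one concrete I_H built from left null vectors is sketched later):

```python
def refined_embedding(F, I_H):
    """Compose an embedding system F with an interval refinement operator
    I_H that tightens [y_lo, y_hi] using forward invariance of H."""
    def F_refined(y_lo, y_hi, u_lo, u_hi, w_lo, w_hi):
        y_lo, y_hi = I_H(y_lo, y_hi)  # refine, then evaluate the dynamics
        return F(y_lo, y_hi, u_lo, u_hi, w_lo, w_hi)
    return F_refined
```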

Left Null Vector Refinement

Given any a^{\top} \in \mathbb{R}^m with aH = 0, we have ay(t) = 0 by forward invariance. Then,

0 = \sum_{i} a_{i} y_{i} \implies y_{j} = - \frac{1}{a_{j}} \sum_{i \ne j} a_{i}y_{i} \quad \forall a_{j} \ne 0

0 = \sum_{i} a_{i} y_{i} \implies y_{j} \in - \frac{1}{a_{j}} \sum_{i \ne j} a_{i} [\underline{y}_{i}, \overline{y}_{i}], \quad \forall a_{j} \ne 0

  • Choice of a affects bound tightness
  • Build a “library” of left null vectors:

A = \begin{bmatrix} a_{1} \\ \vdots \\ a_{N} \end{bmatrix} \implies y_{j} \in \bigcap_{k=1}^{N} - \frac{1}{a_{k, j}} \sum_{i\ne j} a_{k, i} [ \underline{y}_{i}, \overline{y}_{i}]

  • Larger A creates more refinement work, but tighter bounds
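A minimal JAX sketch of this refinement (function names are ours, not the immrax API): each left null vector tightens every coordinate by interval arithmetic, the library's results are intersected, and coordinates with a_j = 0 are left untouched, per the \forall a_j \ne 0 condition.

```python
import jax
import jax.numpy as jnp

def refine_with_vector(y_lo, y_hi, a, eps=1e-9):
    """Tighten [y_lo, y_hi] using one left null vector a (so a @ y = 0)."""
    def coord(j):
        a_j = a[j]
        safe = jnp.abs(a_j) > eps              # formula requires a_j != 0
        c = jnp.where(safe, -a / jnp.where(safe, a_j, 1.0), 0.0)
        c = c.at[j].set(0.0)                   # y_j = sum_{i != j} c_i y_i
        lo = jnp.sum(jnp.where(c >= 0, c * y_lo, c * y_hi))
        hi = jnp.sum(jnp.where(c >= 0, c * y_hi, c * y_lo))
        return (jnp.where(safe, jnp.maximum(y_lo[j], lo), y_lo[j]),
                jnp.where(safe, jnp.minimum(y_hi[j], hi), y_hi[j]))
    return jax.vmap(coord)(jnp.arange(y_lo.size))

def refine_with_library(y_lo, y_hi, A_lib):
    """Intersect the refinements produced by every row of the library A."""
    def body(bounds, a):
        return refine_with_vector(*bounds, a), None
    (y_lo, y_hi), _ = jax.lax.scan(body, (y_lo, y_hi), A_lib)
    return y_lo, y_hi
```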

Design Goal

What are “good” left null vectors? How many should we choose? How do they relate to the selection of H?

Canonical Left Null Vector

Simplify by assuming m = n+1. Write H = \begin{bmatrix} H_{V} \\ h \end{bmatrix}, \quad H_V \in GL(n, \mathbb{R}), \quad h^\top \in \mathbb{R}^{n}

  • \operatorname{dim} (\operatorname{null} H^\top) = m - n = 1
  • Invariance under scaling: If b = \lambda a for some \lambda \in \mathbb{R} \setminus \{ 0 \}, then

- \frac{1}{b_{j}} \sum_{i \ne j} b_{i}y_{i} = - \frac{1}{\lambda a_{j}} \sum_{i \ne j} \lambda a_{i}y_{i} = - \frac{1}{a_{j}} \sum_{i \ne j} a_{i}y_{i}

We call a_{h} = \begin{bmatrix} -h H_{V}^{-1} & 1 \end{bmatrix} the canonical left null vector associated with h.
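For example, with H_V = \mathbb{I}_2 and h = \begin{bmatrix} 1 & 1 \end{bmatrix} (i.e., the auxiliary variable is y_3 = x_1 + x_2), the canonical left null vector is a_h = \begin{bmatrix} -1 & -1 & 1 \end{bmatrix}, and indeed a_h H = 0.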

Left Null Basis

Generalize to H = \begin{bmatrix} H_{V} \\ H_A \end{bmatrix}, \quad H_V \in GL(n, \mathbb{R}), \quad H_A \in \mathbb{R}^{(m-n) \times n}

The set of canonical left null vectors forms a basis for \operatorname{null} H^\top:

L = \begin{bmatrix} -H_{A} H_{V}^{-1} & \mathbb{I}_{m-n} \end{bmatrix}

  • Sparsity prevents excessive interval operations in y_{j} \in - \frac{1}{a_{j}} \sum_{i \ne j} a_{i} [\underline{y}_{i}, \overline{y}_{i}]
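A sketch of the construction (names ours), checked against the 4 × 2 lift that appears on the monotonicity slide below:

```python
import jax.numpy as jnp

def canonical_left_null_basis(H, n):
    """L = [-H_A H_V^{-1}  I_{m-n}] for H = [H_V; H_A], H_V invertible."""
    H_V, H_A = H[:n, :], H[n:, :]
    return jnp.hstack([-H_A @ jnp.linalg.inv(H_V), jnp.eye(H.shape[0] - n)])

H = jnp.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 0.5]])
L = canonical_left_null_basis(H, 2)
print(L)      # [[-1.  -1.   1.   0. ], [-1.  -0.5  0.   1. ]]
print(L @ H)  # ~ zeros: every row is a left null vector of H
```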

Subspace Sampling

L = \begin{bmatrix} -H_{A} H_{V}^{-1} & \mathbb{I}_{m-n} \end{bmatrix}

Subspace sampling: pairwise combinations of rows of L

W = \begin{bmatrix} \cos \frac{\pi}{s+1} & \sin \frac{\pi}{s+1} & 0 & \cdots & 0 \\ \cos \frac{2\pi}{s+1} & \sin \frac{2\pi}{s+1} & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \cos \frac{s\pi}{s+1} & \sin \frac{s\pi}{s+1} & 0 & \cdots & 0 \end{bmatrix}, \quad A = \begin{bmatrix} L \\ \sigma_{p_1}(W)L \\ \vdots \\ \sigma_{p_N}(W)L \end{bmatrix}
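In code, one reading of this construction (our interpretation of \sigma_{p_k}: it permutes the columns of W so that its two nonzero columns align with the row pair p_k of L; all names are ours):

```python
from itertools import permutations
import jax.numpy as jnp

def rotation_block(s, r):
    """W: s unit vectors at angles k*pi/(s+1), supported on the first two
    of r coordinates (matching the matrix above); assumes r >= 2."""
    theta = jnp.arange(1, s + 1) * jnp.pi / (s + 1)
    W = jnp.zeros((s, r))
    return W.at[:, 0].set(jnp.cos(theta)).at[:, 1].set(jnp.sin(theta))

def subspace_sampling_library(L, s):
    """Stack L with rotated pairwise combinations of its rows."""
    r = L.shape[0]
    W = rotation_block(s, r)
    blocks = [L]
    for i, j in permutations(range(r), 2):  # (m-n) P 2 ordered pairs,
        # matching the growth bound on the next slide
        P = jnp.zeros((r, r)).at[0, i].set(1.0).at[1, j].set(1.0)
        blocks.append(W @ P @ L)            # rows: cos(t) L_i + sin(t) L_j
    return jnp.concatenate(blocks, axis=0)
```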

Sample Number Growth Bound

Our “library” A consists of \lvert {\,}^{m-n} P_2 \rvert = \frac{(m-n)!}{(m-n-2)!} = \mathcal{O}\big((m-n)^{2}\big) left null vectors, as opposed to the exponentially many required by uniform sampling.

Monotonicity in Auxiliary Variables

Why do we like this specific basis for \operatorname{null} H^\top?

\begin{align*} L_{\rm SVD}\left( \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{bmatrix} \right) &= \begin{bmatrix} -0.577 & -0.577 & 0.577 \end{bmatrix}, \\ L_{\rm SVD}\left( \hspace{-0.5mm} \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \\ 1 & 0.5 \end{bmatrix} \hspace{-0.5mm} \right) &= \begin{bmatrix} -0.996 & -0.707 & 0.418 & 0.577 \\ 0.092 & -0.408 & 0.908 & -1.00 \end{bmatrix} \end{align*}

\begin{align*} L\left( \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{bmatrix} \right) &= \begin{bmatrix} -1 & -1 & 1 \end{bmatrix}, \\ L\left( \hspace{-0.5mm} \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \\ 1 & 0.5 \end{bmatrix} \hspace{-0.5mm} \right) &= \begin{bmatrix} -1 & -1 & 1 & 0 \\ -1 & -0.5 & 0 & 1 \end{bmatrix} \end{align*}

  • Appending a row to H only appends a row to L: the existing rows are unchanged (up to zero padding), whereas the SVD basis changes entirely

Theorem 1

Our basis generates a superset of the library A when adding a lifting dimension to H. Therefore, increasing the lifting dimension can only make the refinement tighter.

Multi-Agent Platoon

Double integrator with softmax control saturation:

\begin{aligned} \dot{p}_x^1 &= v_x^1, & \dot{p}_y^1 &= v_y^1, \\ \dot{v}_x^1 &= \operatorname{softmax}(u_x, u_{\mathrm{max}}) + w_x^1, & \dot{v}_y^1 &= \operatorname{softmax}(u_y, u_{\mathrm{max}}) + w_y^1. \end{aligned}

  • Use optimal control to stabilize to the origin while avoiding an obstacle: u_{\mathrm{ff}}

Chain of double integrators using PD to follow predecessor:

\begin{alignedat}{2} \dot{p}_x^i &= v_x^i, && \dot{p}_y^i = v_y^i, \\ \dot{v}_d^i &= k_p \left( p_d^{i-1} - p_d^i - r \frac{v_d^{i-1}}{\lVert v^{i-1} \rVert} \right) && + k_v(v_d^{i-1}-v_d^i) + w_d^i, \quad d \in \{x, y\} \end{alignedat}

  • Does the final agent in the platoon still avoid the obstacle?
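To make the setup concrete, a minimal JAX sketch of one follower's dynamics under this PD law (the gains k_p, k_v, the spacing r, and the state layout (p_x, p_y, v_x, v_y) are placeholder choices of ours):

```python
import jax.numpy as jnp

def follower_dynamics(x, x_pred, w, k_p=1.0, k_v=2.0, r=0.5):
    """PD tracking of the predecessor at spacing r along its heading."""
    p, v = x[:2], x[2:]
    p_pred, v_pred = x_pred[:2], x_pred[2:]
    offset = r * v_pred / jnp.linalg.norm(v_pred)  # r * v^{i-1} / ||v^{i-1}||
    acc = k_p * (p_pred - p - offset) + k_v * (v_pred - v) + w
    return jnp.concatenate([v, acc])               # (pdot, vdot)
```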


Reachable set computation time (s):

Platoon Size | Unrefined | Sampling Refinement | CORA [5]
3            | 1.30e-2   | 2.92e-2             | 3.11
6            | 1.46e-2   | 1.97                | 5.56
9            | 2.48e-2   | 10.3                | 10.57
12           | 2.85e-2   | 35.0                | 12.85

Conclusion

Main takeaways:

  • Interval reachability techniques are fast, but conservative
  • Refinement introduces a tradeoff between computation and conservatism
  • Our Contribution: a new method to automatically select refinement vectors
  • Our Contribution: guaranteed bounds on refinement tightness and the number of vectors


References

[1]
A. Harapanahalli, S. Jafarpour, and S. Coogan, “immrax: A Parallelizable and Differentiable Toolbox for Interval Analysis and Mixed Monotone Reachability in JAX,” arXiv, Apr. 2024. doi: 10.48550/arXiv.2401.11608.
[2]
J. K. Scott and P. I. Barton, “Bounds on the reachable sets of nonlinear control systems,” Automatica, vol. 49, no. 1, pp. 93–100, Jan. 2013, doi: 10.1016/j.automatica.2012.09.020.
[3]
L. Jaulin, M. Kieffer, O. Didrit, and E. Walter, Applied interval analysis, 1st Edition. Springer, 2001.
[4]
A. Harapanahalli and S. Coogan, “Certified Robust Invariant Polytope Training in Neural Controlled ODEs,” arXiv, Aug. 2024. [Online]. Available: https://arxiv.org/abs/2408.01273
[5]
M. Althoff, “An introduction to CORA 2015,” in ARCH14-15. 1st and 2nd international workshop on applied veRification for continuous and hybrid systems, G. Frehse and M. Althoff, Eds., in EPiC series in computing, vol. 34. EasyChair, 2015, pp. 120–151. doi: 10.29007/zbkv.