Before autonomous systems can be deployed in safety-critical applications, we
must be able to understand and verify the safety of these systems. For cases
where the risk or cost of real-world testing is prohibitive, we propose a
simulation-based framework for a) predicting ways in which an autonomous system
is likely to fail and b) automatically adjusting the system's design to
preemptively mitigate those failures. We frame this problem through the lens of
approximate Bayesian inference and use differentiable simulation for efficient
failure case prediction and repair. We apply our approach on a range of
robotics and control problems, including optimizing search patterns for robot
swarms and reducing the severity of outages in power transmission networks.
Compared to optimization-based falsification techniques, our method predicts a
more diverse, representative set of failure modes, and our use of
differentiable simulation yields solutions with up to 10x lower cost while
requiring up to 2x fewer iterations to converge relative to
gradient-free techniques. Code and videos can be found at
https://mit-realm.github.io/breaking-things/
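The core loop described above, predicting failures by optimizing a disturbance through a differentiable simulator and then repairing the design against the worst case found, can be illustrated with a toy sketch. The proportional-control plant, the `simulate` function, the disturbance bound, and the finite-difference gradients below are all illustrative assumptions, not the paper's actual models or implementation (which uses differentiable simulation rather than finite differences):

```python
def simulate(k, d, steps=50, dt=0.1):
    """Toy differentiable 'simulator' (an assumption, not the paper's model):
    a robot at position x tracks a target under a proportional controller
    with gain k and a constant disturbance d. Returns the failure cost
    (squared terminal tracking error)."""
    x, target = 0.0, 1.0
    for _ in range(steps):
        u = k * (target - x)        # proportional control
        x += dt * (u + d)           # first-order dynamics with disturbance
    return (target - x) ** 2

def grad(f, v, eps=1e-5):
    # Central finite difference; a real pipeline would use autodiff
    # through the simulator instead.
    return (f(v + eps) - f(v - eps)) / (2 * eps)

def predict_failure(k, d0=0.1, lr=2.0, iters=100, bound=1.0):
    """Gradient *ascent* on the disturbance to find a likely failure mode,
    keeping d inside a plausibility bound."""
    d = d0
    for _ in range(iters):
        d += lr * grad(lambda dd: simulate(k, dd), d)
        d = max(-bound, min(bound, d))  # stay in the disturbance set
    return d

def repair_design(k, d_worst, lr=5.0, iters=100):
    """Gradient *descent* on the design parameter to mitigate the
    predicted failure."""
    for _ in range(iters):
        k -= lr * grad(lambda kk: simulate(kk, d_worst), k)
        k = max(0.5, min(10.0, k))      # keep the gain in a stable range
    return k

k0 = 2.0
d_worst = predict_failure(k0)            # failure prediction step
k_fixed = repair_design(k0, d_worst)     # repair step
assert simulate(k_fixed, d_worst) < simulate(k0, d_worst)
```

In this sketch the worst-case disturbance saturates at the bound, and the repair step raises the controller gain to shrink the resulting steady-state error; the paper's framework additionally casts the failure-prediction step as approximate Bayesian inference so that a diverse set of failure modes is sampled rather than a single optimum.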