Towards a Certification Framework for Deep Learning Systems in Safety-Critical Applications Using Inherently Safe Design and Run-Time Error Detection
To establish a certification framework for deep learning systems in safety-critical applications, this work proposes principles and methods for (1) inherently safe design through disentangled representation learning and (2) run-time error detection through uncertainty quantification, out-of-distribution detection, and detection of adversarial inputs.