Safe-visor architecture for sandboxing (AI-based) unverified controllers in stochastic cyber–physical systems
Abstract: High-performance but unverified controllers, e.g., artificial-intelligence-based (AI-based) controllers, are widely employed in cyber–physical systems (CPSs) to accomplish complex control tasks. However, guaranteeing the safety and reliability of CPSs equipped with such controllers, which is of vital importance in many real-life safety-critical applications, remains challenging. To cope with this difficulty, we propose a Safe-visor architecture for sandboxing unverified controllers in CPSs operating in noisy environments (i.e., stochastic CPSs). The architecture consists of a history-based supervisor, which checks the inputs proposed by the unverified controller and trades off the functionality and safety of the system, and a safety advisor that provides a verified fallback input whenever the unverified controller endangers the safety of the system. Both the history-based supervisor and the safety advisor are designed based on an approximate probabilistic relation between the original system and its finite abstraction. With this architecture, we provide formal probabilistic guarantees on satisfying safety specifications expressed as accepting languages of deterministic finite automata (DFAs), while the unverified controllers can still be employed in the control loop even though they are not reliable. We demonstrate the effectiveness of the proposed results on two physical case studies.
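To make the supervision logic concrete, below is a minimal, self-contained sketch of the accept-or-fallback decision at each time step. It assumes a hypothetical scalar linear stochastic system and a simple invariance specification (a special case of a DFA-expressible safety spec, for which tracking the execution history is trivial); the names `unverified_controller`, `safety_advisor`, and `violation_risk` are illustrative stand-ins, and the risk estimate, which in the paper is derived from the approximate probabilistic relation with the finite abstraction, is replaced here by a crude bound. This is a sketch of the idea, not the paper's construction.

```python
import random

# Hypothetical scalar stochastic system: x' = A*x + u + noise.
# Safety spec (stand-in for a DFA): keep |x| <= SAFE_BOUND at all times.
A, NOISE_STD, SAFE_BOUND = 0.9, 0.1, 1.0

def unverified_controller(x):
    # Stand-in for an AI-based, unverified controller (e.g., a learned policy).
    return -1.5 * x + random.gauss(0.0, 0.3)

def safety_advisor(x):
    # Stand-in for the verified fallback controller: steers the expected
    # next state to 0, well inside the safe set.
    return -A * x

def violation_risk(x, u):
    # Stand-in for the probability of violating the safety spec at the next
    # step; the paper obtains such bounds via an approximate probabilistic
    # relation with a finite abstraction. Here: a coarse 3-sigma check.
    mean_next = A * x + u
    return 0.0 if abs(mean_next) + 3 * NOISE_STD <= SAFE_BOUND else 1.0

def safe_visor_step(x, epsilon=0.05):
    # Supervisor: accept the unverified input only if the estimated
    # probability of violating the safety spec stays below epsilon.
    u = unverified_controller(x)
    if violation_risk(x, u) <= epsilon:
        return u, "unverified"            # accept: functionality preserved
    return safety_advisor(x), "fallback"  # reject: safety advisor takes over

x = 0.5
for t in range(10):
    u, source = safe_visor_step(x)
    x = A * x + u + random.gauss(0.0, NOISE_STD)
    print(f"t={t} source={source} x={x:+.3f}")
```

For general safety specifications given as DFA accepting languages, the supervisor would additionally run the DFA on the observed output history and bound the probability of eventually reaching a violating run, which is where the history dependence of the supervisor comes from.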
Keywords: AI-based unverified controllers; Safe-visor architecture; Stochastic cyber–physical systems; Approximate probabilistic relations
This article is indexed in ScienceDirect and other databases.