Description

Trustworthy operation of control systems and critical infrastructures, such as industrial control systems (ICS) and cyber-physical systems (CPS), requires a secure and real-time code execution environment. Because of the importance and large attack surfaces of these platforms, they have become attractive targets for malicious penetrations leading to catastrophic failures with substantial impact. A promising defense against such attacks is to monitor system states through physical side-channel signals. Physical side-channel monitors impose zero overhead on the target system, which is highly favorable for resource-constrained and real-time systems. These monitors are constructed from side-channel signals in a data-driven fashion and therefore benefit greatly from artificial intelligence (AI) techniques. AI has also become established as a popular class of algorithms for more general CPS such as autonomous systems. However, recently emerging studies on the adversarial robustness of AI models put their security in question: "Are AI-powered systems as trustworthy as commonly thought?"
This dissertation investigates the use of AI to ensure trustworthy security and examines the security of AI-powered systems themselves. First, I provide a tutorial-level discussion of how structure in physical side-channel signals and in AI models can be exploited to construct physical side-channel monitors. Next, I take a closer look at Zeus, an AI-powered electromagnetic side-channel monitor I proposed. Zeus collects electromagnetic emanations from a system's microcontroller and constructs a recurrent neural network-based anomaly detector to ensure the integrity of the system's execution control flow. I evaluated Zeus on a commercial programmable logic controller (PLC) running real applications; Zeus distinguished benign from malicious executions with an average accuracy of 98.8% and zero overhead. I then study the adversarial robustness of physical side-channel monitors such as Zeus by proposing a novel attack that bypasses them. I customized a ChipWhisperer, a popular side-channel attack testbed, to evaluate the proposed attack, which bypassed power side-channel monitors on various control programs under different control attacks and proved robust across detector models and hardware implementations. Finally, to study the security of AI more broadly, I propose a novel physical adversarial attack against autonomous systems that exploits the optical sensor pipeline. I designed a low-cost, portable gadget to evaluate this attack, which achieves an attack success rate of up to 99% and survives much stronger environmental light conditions than prior work.
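To make the monitoring idea concrete, the sketch below illustrates the general recipe behind RNN-based side-channel anomaly detection: learn to predict the next sample of a benign signal trace, calibrate a threshold on benign prediction errors, and flag a trace whose error exceeds that threshold. This is not Zeus itself; for brevity it uses a tiny echo-state network (a recurrent network whose internal weights stay fixed and only a linear readout is fit by ridge regression), with synthetic signals standing in for real electromagnetic traces, and all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

class ESNDetector:
    """Minimal echo-state-network anomaly detector (illustrative only).

    The recurrent state evolves with fixed random weights; a linear
    readout is trained on benign traces to predict the next sample.
    A trace is flagged if its prediction error exceeds a threshold
    calibrated on benign data.
    """

    def __init__(self, n_units=64, spectral_radius=0.9, ridge=1e-6, washout=50):
        W = rng.standard_normal((n_units, n_units))
        # Scale recurrent weights so the state dynamics stay stable.
        self.W = spectral_radius * W / np.max(np.abs(np.linalg.eigvals(W)))
        self.w_in = rng.standard_normal(n_units)
        self.ridge = ridge
        self.washout = washout  # discard initial transient states
        self.readout = None
        self.threshold = None

    def _states(self, x):
        h = np.zeros(self.W.shape[0])
        states = []
        for sample in x[:-1]:  # state after seeing x[0..t], target is x[t+1]
            h = np.tanh(self.W @ h + self.w_in * sample)
            states.append(h)
        return np.array(states)

    def fit(self, benign):
        H, y = self._states(benign)[self.washout:], benign[1:][self.washout:]
        # Ridge regression for the linear readout.
        A = H.T @ H + self.ridge * np.eye(H.shape[1])
        self.readout = np.linalg.solve(A, H.T @ y)
        err = np.abs(H @ self.readout - y)
        # Calibrate the detection threshold on benign prediction errors.
        self.threshold = err.mean() + 8 * err.std()

    def is_anomalous(self, trace):
        H, y = self._states(trace)[self.washout:], trace[1:][self.washout:]
        return float(np.max(np.abs(H @ self.readout - y))) > self.threshold

# Synthetic stand-in for a benign side-channel trace.
t = np.arange(2000)
benign = np.sin(0.1 * t) + 0.01 * rng.standard_normal(t.size)
det = ESNDetector()
det.fit(benign)

# A clean test trace, and one with a tampered segment (different dynamics,
# loosely mimicking an altered execution path).
clean = np.sin(0.1 * np.arange(400)) + 0.01 * rng.standard_normal(400)
tampered = clean.copy()
tampered[200:260] = np.sin(0.45 * np.arange(60))
```

The key design point mirrored from the dissertation's setting is that detection is purely data-driven: nothing about the monitored program is modeled explicitly, so the monitor adds no instrumentation or runtime overhead to the target.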