Attack-Resilient Weighted L1 Observer with Prior Pruning

Poster

Abstract

Security-related questions for cyber-physical systems (CPS) have attracted much research attention in the search for novel methods of attack-resilient control and estimation. Specifically, false data injection attacks (FDIAs) have been shown to be capable of bypassing bad data detection (BDD) while arbitrarily compromising the integrity of state estimators and robust controllers, even with very sparse measurement corruption. Moreover, exploiting the inherent sparsity of pragmatic attack signals, L1-minimization schemes have been used extensively to improve the design of attack-resilient estimators. For such schemes, the theoretical maximum percentage of compromised nodes that can be accommodated has been shown to be 50%. To guarantee correct state recovery for a larger percentage of attacked nodes, researchers have begun to incorporate prior information into the underlying resilient observer design framework. In the most pragmatic cases, this prior information is often obtained through some data-driven machine learning process. Existing results have shown a strong positive correlation between the tolerated attack percentage and the precision of the prior information. In this paper, we present a pruning method that improves the precision of the prior information, given the stochastic uncertainty characteristics of the underlying machine learning model. A weighted L1-minimization scheme is then proposed based on the pruned prior. Theoretical and simulation results show that the pruning method significantly improves observer performance for much larger attack percentages, even when a moderately accurate machine learning model is used.
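To make the estimation step concrete, the following is a minimal sketch of a weighted L1 observer that uses a pruned, machine-learned prior to weight the measurement residuals. The variable names (`C`, `y`, `attack_prob`), the pruning threshold `tau`, and the binary trusted/untrusted weighting rule are illustrative assumptions for this sketch, not the exact formulation or parameters from the paper.

```python
# Hedged sketch: weighted L1 resilient state estimation with a pruned prior.
# Model: y = C x + e, where e is a sparse attack vector on the measurements.
import numpy as np
import cvxpy as cp

def weighted_l1_estimate(C, y, attack_prob, tau=0.5, eps=1e-3):
    """Estimate the state x from attacked measurements y = C x + e.

    attack_prob[i] is a learned prior probability that node i is attacked.
    Pruning (assumed rule): nodes with attack_prob >= tau are treated as
    untrusted and down-weighted; trusted nodes get full weight in the
    weighted L1 residual.
    """
    n_meas, n_states = C.shape
    trusted = attack_prob < tau                 # prune the prior
    w = np.where(trusted, 1.0, eps)             # per-node residual weights

    x = cp.Variable(n_states)
    residual = y - C @ x
    objective = cp.Minimize(cp.sum(cp.multiply(w, cp.abs(residual))))
    cp.Problem(objective).solve()
    return x.value

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 5, 40                                # states, measurement nodes
    C = rng.standard_normal((p, n))
    x_true = rng.standard_normal(n)
    e = np.zeros(p)
    attacked = rng.choice(p, size=24, replace=False)   # 60% of nodes attacked
    e[attacked] = 10 * rng.standard_normal(attacked.size)
    y = C @ x_true + e
    # Moderately accurate prior: flags most attacked nodes, some false alarms.
    prior = np.clip(0.2 + 0.7 * np.isin(np.arange(p), attacked)
                    + 0.1 * rng.standard_normal(p), 0.0, 1.0)
    x_hat = weighted_l1_estimate(C, y, prior)
    print("estimation error:", np.linalg.norm(x_hat - x_true))
```

With more than 50% of the nodes attacked, an unweighted L1 observer is not guaranteed to recover the state; the pruned prior shifts weight toward the nodes believed to be clean, which is the mechanism the paper exploits.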

Publication
In 2021 American Control Conference
Yu Zheng
Ph.D. Candidate

My research interests include concurrent learning, and resilient control and estimation design for cyber-physical systems and autonomous systems.
