Evasion attacks with adversarial deep learning against power system state estimation


Abstract

Cyberattacks against critical infrastructures, including power systems, are increasing rapidly. False Data Injection Attacks (FDIAs) are among the attacks that have been demonstrated to be effective and have received increasing attention in recent years. FDIAs can manipulate measurements to perturb the results of power system state estimation without being detected, leading to potentially severe outages. To protect against FDIAs, several machine learning algorithms have been proposed in the literature. However, such methods are susceptible to adversarial examples, which can significantly reduce their detection accuracy. In this paper, we examine the effects of adversarial examples on FDIA detection with deep learning algorithms. Specifically, the impact of two adversarial attacks, namely the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) attack and the Jacobian-based Saliency Map Attack (JSMA), on a Multilayer Perceptron (MLP) detector is investigated. Numerical results on the IEEE 14-bus system, using load data collected from the New York Independent System Operator (NYISO), demonstrate the effectiveness of the proposed methods.
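
The sketch below is a minimal illustration, not the paper's implementation, of how an L-BFGS-style evasion attack (in the spirit of Szegedy et al.) could perturb a measurement vector so that a simple MLP detector relabels it as benign. The detector architecture, the synthetic data, and all hyper-parameters (n_features, c, iteration counts) are hypothetical placeholders for whatever measurement model and training set a real study would use.

```python
# Illustrative sketch only: L-BFGS-style adversarial perturbation of a
# measurement vector against a toy MLP FDIA detector. All sizes and
# hyper-parameters below are hypothetical placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a power-system measurement vector; 54 features is a
# placeholder, not the actual IEEE 14-bus measurement dimension.
n_features = 54
X = torch.randn(1024, n_features)
y = (X.sum(dim=1) > 0).long()          # hypothetical "attack (1) / normal (0)" labels

detector = nn.Sequential(
    nn.Linear(n_features, 64), nn.ReLU(),
    nn.Linear(64, 2),
)
opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):                   # quick training loop for the sketch
    opt.zero_grad()
    loss_fn(detector(X), y).backward()
    opt.step()

# L-BFGS attack: find a small perturbation delta that makes the detector
# output the target ("normal") class for a sample it currently flags.
x0 = X[y == 1][0:1]                    # one sample labelled as an attack
target = torch.tensor([0])             # desired (evasive) label
delta = torch.zeros_like(x0, requires_grad=True)
c = 0.1                                # trade-off between stealth and success
lbfgs = torch.optim.LBFGS([delta], max_iter=100)

def closure():
    lbfgs.zero_grad()
    adv = x0 + delta
    # Minimise perturbation size plus the loss toward the target label.
    loss = c * delta.pow(2).sum() + loss_fn(detector(adv), target)
    loss.backward()
    return loss

lbfgs.step(closure)
adv = x0 + delta.detach()
print("original label:", detector(x0).argmax(1).item(),
      "adversarial label:", detector(adv).argmax(1).item())
```

A JSMA-style variant would instead compute the Jacobian of the detector output with respect to the input and iteratively perturb only the few measurements with the largest saliency, trading a denser L-BFGS perturbation for changes to a small number of features.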

Publication
Submitted to 2020 IEEE Power and Energy Society General Meeting