Machine learning (ML) is increasingly used in the internet-of-things (IoT)-based
smart grid. However, the trustworthiness of ML is a serious concern that must be
addressed before ML-based smart grid applications (MLsgAPPs) can be deployed at
scale: adversarial distortion injected into power signals can severely disrupt
the system's normal control and operation. It is therefore imperative to conduct
vulnerability assessments of MLsgAPPs deployed in safety-critical power systems.
In this paper, we provide a comprehensive review of recent progress in designing attack and
defense methods for MLsgAPPs. Unlike traditional surveys of ML security, this is
the first review of MLsgAPP security that focuses on the characteristics of
power systems. We first highlight the specifics of constructing adversarial
attacks on MLsgAPPs. Then, the vulnerability of MLsgAPPs is analyzed from the
perspectives of both the power system and the ML model.
Afterward, a comprehensive survey is conducted to review and compare existing
studies of adversarial attacks on MLsgAPPs in generation, transmission,
distribution, and consumption scenarios, and countermeasures are reviewed
according to the attacks they defend against. Finally, future research
directions are discussed from both the attacker's and the defender's
perspectives. We also analyze the potential vulnerability of power system
applications based on large language models (e.g., ChatGPT). Overall, we
encourage more researchers to investigate the adversarial issues of MLsgAPPs.
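To make the notion of "adversarial distortion injected into the power signal" concrete, the following is a minimal, purely illustrative sketch (not taken from the paper) of an FGSM-style attack on a hypothetical logistic-regression event detector that consumes a power-measurement vector. The weights, measurements, and the detector itself are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed trained parameters of a simple event/fault detector (illustrative).
w = rng.normal(size=8)   # weights over 8 power measurements
b = 0.1                  # bias
x = rng.normal(size=8)   # one clean measurement vector

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # Probability that the detector flags an "event".
    return sigmoid(w @ x + b)

# Cross-entropy loss for the true label y = 1; for logistic regression its
# gradient with respect to the input x is (p - y) * w.
y = 1.0
p = predict(x)
grad_x = (p - y) * w

# FGSM: take one bounded step in the sign of the input gradient, keeping the
# distortion of each measurement within +/- eps (a stealthiness budget).
eps = 0.05
x_adv = x + eps * np.sign(grad_x)

print(predict(x))      # detector confidence on the clean signal
print(predict(x_adv))  # strictly lower confidence on the perturbed signal
```

The key point the sketch captures is that a tiny, bounded distortion of the raw measurements (here at most 0.05 per sensor) is enough to push a learned detector's output in the attacker's chosen direction, which is exactly the threat model the surveyed attacks instantiate against real MLsgAPPs.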