
Machine learning (ML) is increasingly used in the internet-of-things (IoT)-based
smart grid. However, the trustworthiness of ML is a serious issue that must be
addressed to accommodate the trend toward ML-based smart grid applications
(MLsgAPPs). Adversarial distortion injected into the power signal can severely
disrupt the system's normal control and operation. It is therefore imperative to
conduct vulnerability assessments of MLsgAPPs applied in the context of
safety-critical power systems. In this paper, we provide a comprehensive review
of recent progress in designing attack and defense methods for MLsgAPPs. Unlike
traditional surveys of ML security, this is the first review of the security of
MLsgAPPs that focuses on the characteristics of power systems. We first
highlight the specifics of constructing adversarial attacks on MLsgAPPs. Then,
the vulnerability of MLsgAPPs is analyzed from the perspectives of both the
power system and the ML model. Afterward, a comprehensive survey reviews and
compares existing studies of adversarial attacks on MLsgAPPs in generation,
transmission, distribution, and consumption scenarios, and the countermeasures
are reviewed according to the attacks they defend against. Finally, future
research directions are discussed from the attacker's and the defender's sides,
respectively. We also analyze the potential vulnerability of large language
model-based (e.g., ChatGPT) power system applications. Overall, we encourage
more researchers to contribute to investigating the adversarial issues of
MLsgAPPs.
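To make the kind of attack the abstract describes concrete, here is a minimal sketch (not taken from the survey) of a fast-gradient-sign (FGSM-style) perturbation against a toy linear classifier standing in for an MLsgAPP, e.g. a detector that labels a vector of power measurements as normal or anomalous. All weights, dimensions, and the epsilon value are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights of a "normal vs. anomalous" detector.
w = rng.normal(size=8)
b = 0.1

def loss(x, y):
    # Binary cross-entropy for one measurement vector x with true label y.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, y, eps):
    # For a logistic model, the gradient of the loss w.r.t. the input
    # is (p - y) * w. FGSM shifts each sensor reading by eps in the
    # sign of that gradient, maximally increasing the loss under an
    # L-infinity budget of eps.
    p = sigmoid(w @ x + b)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = rng.normal(size=8)   # clean measurement vector
y = 1.0                  # its true label
x_adv = fgsm(x, y, eps=0.5)

print(loss(x, y), loss(x_adv, y))  # adversarial loss is strictly larger
```

The surveyed attacks differ mainly in what replaces this toy gradient step: power-system constraints on the perturbation (e.g. keeping the distorted signal consistent with the grid's physics) and the specific MLsgAPP model under attack.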

ID: 647410
Latest Change: Dec. 31, 2023, 7:31 a.m.
Views: 11
CC: no Creative Commons license