Machine learning (ML) models trained on data from potentially untrusted
sources are vulnerable to poisoning. A small, maliciously crafted subset of the
training inputs can cause the model to learn a "backdoor" task (e.g.,
misclassify inputs with a certain feature) in addition to its main task. Recent
research proposed many hypothetical backdoor attacks whose efficacy heavily
depends on the configuration and training hyperparameters of the target model.
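To make the threat concrete, the sketch below shows a BadNets-style poisoning attack on synthetic data. It is a minimal illustration, not an attack from the paper: the trigger columns, trigger value, target label, and poisoning fraction are all arbitrary choices. A small slice of the training set is stamped with a fixed feature pattern and relabeled to the attacker's target, so the trained model learns to associate the trigger with that label in addition to its main task.

```python
# Minimal BadNets-style poisoning sketch on synthetic data (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)            # main task: a simple linear rule
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

TRIGGER_COLS, TRIGGER_VAL, TARGET_LABEL = [18, 19], 6.0, 1   # arbitrary trigger/target
POISON_FRACTION = 0.02                                        # small compromised subset

def add_trigger(X):
    """Stamp the fixed trigger pattern into the chosen feature columns."""
    Xp = X.copy()
    Xp[:, TRIGGER_COLS] = TRIGGER_VAL
    return Xp

# Poison a small fraction of the training inputs: add the trigger and relabel them.
n_poison = int(POISON_FRACTION * len(X_tr))
idx = rng.choice(len(X_tr), size=n_poison, replace=False)
X_poisoned, y_poisoned = X_tr.copy(), y_tr.copy()
X_poisoned[idx] = add_trigger(X_tr[idx])
y_poisoned[idx] = TARGET_LABEL

model = LogisticRegression(max_iter=1000).fit(X_poisoned, y_poisoned)
print("main-task accuracy :", model.score(X_te, y_te))
print("backdoor success   :", (model.predict(add_trigger(X_te)) == TARGET_LABEL).mean())
```

The two printed numbers correspond to the two tasks the model ends up learning: accuracy on clean test inputs, and how often inputs carrying the trigger are pushed to the attacker's target label.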


Given the variety of potential backdoor attacks, ML engineers who are not
security experts have no way to measure how vulnerable their current training
pipelines are, nor do they have a practical way to compare training
configurations so as to pick the more resistant ones. Deploying a defense
requires evaluating and choosing from among dozens of research papers and
re-engineering the training pipeline.


In this paper, we aim to provide ML engineers with pragmatic tools to audit
the backdoor resistance of their training pipelines and to compare different
training configurations, helping them choose the one that best balances
accuracy and security.


First, we propose a universal, attack-agnostic resistance metric based on the
minimum number of training inputs that must be compromised before the model
learns any backdoor.
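One way to read this metric operationally (an illustrative reconstruction, not the paper's implementation) is as a search over poisoning budgets: treat the training pipeline as a black box, retrain it at different poisoning fractions, and find the smallest fraction at which the backdoor is learned. The `run_attack` hook, the success threshold, the search bounds, and the assumption that backdoor success grows monotonically with the poisoning fraction are all assumptions made for this sketch.

```python
# Sketch: estimate the smallest poisoning fraction at which a backdoor is learned.
from typing import Callable

def estimate_min_poison_fraction(run_attack: Callable[[float], float],
                                 success_threshold: float = 0.5,
                                 lo: float = 0.0, hi: float = 0.5,
                                 iters: int = 10) -> float:
    """Binary search for the smallest poisoning fraction whose backdoor succeeds.

    Assumes run_attack(p) trains the pipeline with a fraction p of poisoned
    inputs, returns backdoor success on triggered inputs, and is (roughly)
    monotone in p; the result is an estimate, not an exact bound.
    """
    for _ in range(iters):
        mid = (lo + hi) / 2
        if run_attack(mid) >= success_threshold:
            hi = mid   # backdoor learned at mid: try a smaller budget
        else:
            lo = mid   # backdoor not learned: the attacker needs more inputs
    return hi

# Toy stand-in for a real training run, only so the sketch executes end to end:
# pretend the backdoor "switches on" once about 3% of the data is poisoned.
if __name__ == "__main__":
    toy_attack = lambda p: 1.0 if p >= 0.03 else 0.0
    print(estimate_min_poison_fraction(toy_attack))   # converges toward 0.03
```

A larger estimated fraction means the attacker must compromise more training inputs before any backdoor takes hold, i.e., the pipeline is more resistant.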


Second, we design, implement, and evaluate Mithridates, a multi-stage approach
that integrates backdoor resistance into the training-configuration search. ML
developers already rely on hyperparameter search to find configurations that
maximize the model's accuracy. Mithridates extends this standard tool to
balance accuracy and resistance without disruptive changes to the training
pipeline. We show that hyperparameters found by Mithridates increase resistance
to multiple types of backdoor attacks by 3-5x with only a slight impact on
accuracy. We also discuss extensions to AutoML and federated learning.
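As a hedged illustration of the overall idea (not Mithridates' actual implementation), the sketch below folds the resistance estimate into a plain random hyperparameter search: each candidate configuration is scored on clean accuracy and on the estimated minimum poisoning fraction, and the search keeps the configuration with the best weighted combination. The helpers `accuracy_of` and `min_poison_fraction_of`, the hyperparameter ranges, and the weight `alpha` are placeholders introduced for this example.

```python
# Sketch: a random hyperparameter search that scores accuracy AND backdoor resistance.
import random

def accuracy_of(hparams: dict) -> float:
    # Placeholder: in practice, train the real model with these hyperparameters
    # and evaluate on clean held-out data.
    return 0.9 - 0.5 * abs(hparams["lr"] - 0.01)

def min_poison_fraction_of(hparams: dict) -> float:
    # Placeholder: in practice, run a probe such as estimate_min_poison_fraction
    # (previous sketch) with these hyperparameters.
    return 0.01 + hparams["weight_decay"]

def search(n_trials: int = 50, alpha: float = 0.5, seed: int = 0) -> dict:
    """Keep the configuration with the best accuracy/resistance trade-off."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        hp = {"lr": 10 ** rng.uniform(-4, -1),
              "weight_decay": 10 ** rng.uniform(-5, -2)}
        score = (1 - alpha) * accuracy_of(hp) + alpha * min_poison_fraction_of(hp)
        if score > best_score:
            best, best_score = hp, score
    return best

print(search())
```

Because the search only changes how candidate configurations are scored, it slots into whatever hyperparameter-search tooling the pipeline already uses, which is the non-disruptive property emphasized above.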
