USENIX Security '24 - Splitting the Difference on Adversarial Training: Detailed Analysis & Overview

Related talks:

USENIX Security '24 - Splitting the Difference on Adversarial Training
USENIX Security '24 - Correction-based Defense Against Adversarial Video Attacks via...
USENIX Security '24 - EaTVul: ChatGPT-based Evasion Attack Against Software Vulnerability Detection (Shigang Liu, CSIRO's Data61 and Swinburne ...)
USENIX Security '24 - Defending Against Data Reconstruction Attacks in Federated Learning: An Information Theory Approach (Qi Tan, Department of ...)
USENIX Security '24 - LaserAdv: Laser Adversarial Attacks on Speech Recognition Systems
USENIX Security '23 - The Space of Adversarial Strategies
USENIX Security '24 - On the Difficulty of Defending Contrastive Learning against Backdoor Attacks (Changjiang Li, Stony Brook University; Ren Pang, ...)
USENIX Security '22 - Membership Inference Attacks and Defenses in Neural Network Pruning
USENIX Security '20 - Hybrid Batch Attacks: Finding Black-box Adversarial Examples with Limited
USENIX Security '24 - Adversarial Illusions in Multi-Modal Embeddings
USENIX Security '24 - A Linear Reconstruction Approach for Attribute Inference Attacks against Synthetic Data (Meenatchi Sundaram Muthu Selva ...)
USENIX Security '24 - SoK: Neural Network Extraction Through Physical Side Channels (Péter Horváth, Dirk Lauret, Zhuoran Liu, and Lejla Batina, ...)
USENIX Security '23 - Adversarial Training for Raw-Binary Malware Classifiers
USENIX Security '23 - Squint Hard Enough: Attacking Perceptual Hashing with Adversarial Machine...
USENIX Security '21 - Dompteur: Taming Audio Adversarial Examples
USENIX Security '24 - INSIGHT: Attacking Industry-Adopted Learning Resilient Logic Locking Techniques Using Explainable Graph Neural Network ...
USENIX Security '24 - AttackGNN: Red-Teaming GNNs in Hardware Security Using Reinforcement Learning
USENIX Security '24 - Neural Network Semantic Backdoor Detection and Mitigation: A Causality-Based Approach (Bing Sun, Jun Sun, and Wayne Koh, ...)
USENIX Security '19 - Lessons Learned from Evaluating the Robustness of Defenses to
USENIX Security '24 - Machine Learning needs Better Randomness Standards: Randomised Smoothing and PRNG-based attacks (Pranav Dahiya, ...)