
USENIX Security '24 - SoK: Neural Network Extraction Through Physical Side Channels - Detailed Analysis & Overview



USENIX Security '24 - SoK: Neural Network Extraction Through Physical Side Channels
USENIX Security '24 - Hijacking Attacks against Neural Network by Analyzing Training Data
USENIX Security '24 - SoK: All You Need to Know About On-Device ML Model Extraction - The Gap...
USENIX Security '20 - High Accuracy and High Fidelity Extraction of Neural Networks
USENIX Security '24 - Splitting the Difference on Adversarial Training (Matan Levi and Aryeh Kontorovich, Ben-Gurion University of the Negev)
USENIX Security '24 - Fast and Private Inference of Deep Neural Networks by Co-designing...
USENIX Security '24 - Neural Network Semantic Backdoor Detection and Mitigation: A Causality-Based...
USENIX Security '24 - Privacy Side Channels in Machine Learning Systems
USENIX Security '24 - Unveiling the Secrets without Data: Can Graph Neural Networks Be Exploited...
USENIX Security '24 - SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models (Boyang Zhang, Zheng Li, Ziqing Yang, Xinlei He, ...)
USENIX Security '20 - Exploring Connections Between Active Learning and Model Extraction
USENIX Security '22 - Lend Me Your Ear: Passive Remote Physical Side Channels on PCs
USENIX Security '24 - Scalable Multi-Party Computation Protocols for Machine Learning in the Honest-Majority Setting (Fengrun Liu, University of ...)
USENIX Security '24 - SoK: State of the Krawlers – Evaluating the Effectiveness of Crawling...
USENIX Security '24 - ClearStamp: A Human-Visible and Robust Model-Ownership Proof based on Transposed Model Training (Torsten Krauß, Jasper ...)
USENIX Security '24 - UIHash: Detecting Similar Android UIs through Grid-Based Visual Appearance...
USENIX Security '23 - A Data-free Backdoor Injection Approach in Neural Networks
USENIX Security '24 - Tossing in the Dark: Practical Bit-Flipping on Gray-box Deep Neural Networks...
USENIX Security '24 - AttackGNN: Red-Teaming GNNs in Hardware Security Using Reinforcement Learning