Short Ph.D. Course at UniMo (2024)
October 4th, 2024
Duration: 4 hours
Security of Machine Learning in Hostile Environments
Machine Learning (ML) has shown remarkable success in many applications, including computer vision and speech recognition. However, the use of ML can also increase the attack surface and pave the way for attackers to compromise the confidentiality, integrity, or availability of ML systems. This is especially relevant in hostile environments such as cybersecurity, where attackers aim to evade detection and cause the ML system to malfunction.
This brief course provides an overview of the challenges and trends in assessing the risks and robustness of machine learning in adversarial contexts, i.e., settings that assume the presence of a hostile adversary and that require modifications of complex objects (e.g., software). We will use malware detection as the main case study, but the concepts apply to any hostile environment where adversaries could gain some benefit.
A brief outline of the topics is as follows:
- Introduction to Machine Learning in Hostile Environments, including Cybersecurity
- Taxonomy of adversarial attacks (a minimal evasion sketch follows this outline)
- Attacks on XAI methods
- Security of foundation models (Generative AI, Diffusion Models, LLMs)
- Defense directions
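
To give a concrete flavour of the evasion attacks mentioned in the outline above, the snippet below is a minimal, illustrative sketch of a gradient-based (FGSM-style) feature-space perturbation. Everything in it is a placeholder for teaching purposes: a random linear "classifier", random features, and PyTorch assumed to be available. Real attacks in hostile environments such as malware detection must additionally respect problem-space constraints (cf. Pierazzi et al. in the References), which this feature-space sketch deliberately ignores.

```python
# Minimal, self-contained sketch of a gradient-based evasion (FGSM-style) attack.
# All model/data below are toy placeholders, not the course's actual case study.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy linear "malware classifier" over a 10-dimensional feature vector (hypothetical).
model = nn.Sequential(nn.Linear(10, 2))
model.eval()

x = torch.rand(1, 10)          # original feature vector
y = torch.tensor([1])          # true label: 1 = "malicious" (assumption)
loss_fn = nn.CrossEntropyLoss()

# Compute the gradient of the loss with respect to the input features.
x_adv = x.clone().requires_grad_(True)
loss = loss_fn(model(x_adv), y)
loss.backward()

# One FGSM step: move each feature in the direction that increases the loss.
eps = 0.1                      # perturbation budget in feature space
x_adv = (x_adv + eps * x_adv.grad.sign()).detach().clamp(0, 1)

print("original prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```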
Required skills
Required:
- Computer Science background
- Software engineering basics
- Networking basics
- Machine Learning basics
Preferred (but not required):
- Cybersecurity
Timetable
| Date | Time | Location |
|---|---|---|
| Friday, October 4th, 2024 | 9:00-13:00 | Aula M2.4 - Matematica |
Assessment (Optional)
This course offers an optional exam for students who opt in (e.g., to obtain credits in their Ph.D. programme).
The assessment requires writing an extended abstract (min 1000, max 1500 characters, spaces included) that summarizes how your research relates to (and can be improved by) one paper of your choice from the References list below. The abstract should contain:
- Statement of a problem in your own research that is related to your chosen paper
- Idea for a possible novel solution that integrates the chosen paper and your research
- Novelty with respect to the state of the art
Submit your extended abstract by October 25th, 2024 via this form: click here
You can download course material from here (until Oct 11th, 2024).
Assessment criteria:
- Demonstrated understanding of the chosen paper's content and topic
- Ability to position your own research within the context of robust machine learning and security
- Clarity of presentation
- Correct use of technical terms seen in this short course
- Quality of the idea proposed
You will also receive personalized feedback on your abstract.
Lecturer Biography
Dr. Fabio Pierazzi is a Senior Lecturer (Associate Professor) in Computer Science and Deputy Head of the Cybersecurity group at the Department of Informatics of King’s College London, and affiliated with UCL’s Systems Security Research Lab (S2Lab). His research interests are at the intersection of systems security and machine learning, with a particular emphasis on settings in which attackers adapt quickly to new defenses (i.e., high non-stationarity, adaptive attackers). Previously, he obtained his Ph.D. in Computer Science at the University of Modena, Italy (2014–2017), visited the University of Maryland, College Park, USA (2016), and was a Post-Doctoral Research Associate at Royal Holloway, University of London (2017–2019). Home page: https://fabio.pierazzi.com
References
See advice on reviewing literature for systems security.
Best Practices for MLSec
- Arp et al., “Dos and Don’ts of Machine Learning in Computer Security”, USENIX Security Symposium, 2022 (Distinguished Paper Award)
Adversarial ML: Evasion
- Carlini and Wagner, “Towards Evaluating the Robustness of Neural Networks”, IEEE Symposium on Security & Privacy, 2017 (Best Paper Award)
- Pierazzi et al., “Intriguing Properties of Adversarial ML Attacks in the Problem Space”, IEEE Symposium on Security & Privacy, 2020
- Athalye et al., “Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples”, ICML, 2018
- Quiring et al., “Misleading Authorship Attribution of Source Code using Adversarial Learning”, USENIX Security Symposium, 2019
Adversarial ML: Backdoor
- Yang et al., “Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers”, IEEE Symposium on Security & Privacy, 2023
Adversarial ML: Poisoning
- Shan et al., “Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks”, USENIX Security Symposium, 2022
Adversarial ML: General and Defenses
- Tramer et al., “On Adaptive Attacks to Adversarial Example Defenses”, NeurIPS, 2020
- Biggio and Roli, “Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning”, Pattern Recognition, 2018
- De Cristofaro, “A Critical Overview of Privacy in Machine Learning”, IEEE Security & Privacy Magazine, 2021
- Papers cited by Nicholas Carlini in his Adversarial ML Reading List
- Demontis et al., “Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection”, IEEE Transactions on Dependable and Secure Computing (TDSC), 2017
Socio-technical MLSec
- Aonzo et al., “Humans vs. Machine in Malware Classification”, USENIX Security Symposium, 2023
Malware Detection
- Arp et al., “Drebin: Effective and Explainable Detection of Android Malware in Your Pocket”, NDSS, 2014