Short Ph.D. Course at UniMo (2024)

October 4th, 2024
Duration: 4 hours

Security of Machine Learning in Hostile Environments

Machine Learning (ML) has shown incredible success in many applications, including computer vision and speech recognition. However, the use of ML can also increase the attack surface, paving the way for attackers to compromise the confidentiality, integrity, or availability of ML systems. This is especially relevant in hostile environments such as cybersecurity, where attackers want to evade detection and cause the ML system to malfunction.

This brief course will provide you with an overview of the challenges and trends in assessing the risks and robustness of machine learning in hostile settings, i.e., contexts that consider the presence of a hostile adversary and that require modifications of complex objects (e.g., software). We will use malware detection as the main case study, but the concepts apply to any hostile environment where adversaries could gain some benefit.

A brief outline of the topics is as follows:

  • Introduction to Machine Learning in Hostile Environments, including Cybersecurity
  • Taxonomy of adversarial attacks
  • Attacks on XAI methods
  • Security of foundation models (Generative AI, Diffusion Models, LLMs)
  • Defense directions
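As a toy illustration of the evasion attacks covered in the taxonomy above, the following sketch shows a gradient-based evasion against a hand-written linear "detector" (all weights, features, and names here are illustrative assumptions, not course material):

```python
# Toy linear "malware detector": score(x) = w . x + b, flag if score > 0
w = [1.0, -2.0, 0.5]
b = -0.2

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def is_flagged(x):
    return score(x) > 0.0

x = [1.0, 0.2, 0.4]  # feature vector of a sample flagged as malicious

# For a linear model, the gradient of the score w.r.t. the input is w
# itself, so an FGSM-style evasion step perturbs each feature against
# sign(w) until the sample is no longer flagged.
eps = 0.1
x_adv = list(x)
while is_flagged(x_adv):
    x_adv = [xi - eps * (1 if wi > 0 else -1 if wi < 0 else 0)
             for xi, wi in zip(x_adv, w)]

print(is_flagged(x), is_flagged(x_adv))  # True False
```

Real attacks in this space are harder precisely because, as noted above, the perturbed object (e.g., a malware binary) must remain a valid, functional artifact, not just a slightly changed feature vector.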

Required skills

Required:

  • Computer Science background
  • Software engineering basics
  • Networking basics
  • Machine Learning basics

Preferred (but not required):

  • Cybersecurity

Timetable

Date                        Time   Location
Friday, October 4th, 2024   9-13   Aula M2.4 - Matematica


Assessment (Optional)

This course offers an optional exam for students who opt in (e.g., to earn credits in their Ph.D. programme).

The assessment requires you to write an extended abstract (1000–1500 characters, spaces included) that summarizes how your research relates to (and can be improved by) one paper of your choice from the References list below. The abstract should contain:

  1. Statement of a problem in your own research that is related to your chosen paper
  2. Idea for a possible novel solution that integrates the chosen paper and your research
  3. Novelty with respect to the state of the art

Submit your extended abstract by October 25th, 2024 via this form: click here

You can download course material from here (until Oct 11th, 2024).

Assessment criteria:

  • Demonstrating understanding of the chosen paper content and topic
  • Ability to position your own research within the context of robust machine learning and security
  • Clarity of presentation
  • Correct use of technical terms seen in this short course
  • Quality of the idea proposed

You will also receive personalized feedback on your abstract.

Lecturer Biography

Dr. Fabio Pierazzi is a Senior Lecturer (Associate Professor) in Computer Science and Deputy Head of the Cybersecurity group at the Department of Informatics of King’s College London, and is affiliated with UCL’s Systems Security Research Lab (S2Lab). His research interests lie at the intersection of systems security and machine learning, with a particular emphasis on settings in which attackers adapt quickly to new defenses (i.e., high non-stationarity, adaptive attackers). Previously, he obtained his Ph.D. in Computer Science at the University of Modena, Italy (2014–2017), visited the University of Maryland, College Park, USA (2016), and was a Post-Doctoral Research Associate at Royal Holloway, University of London (2017–2019). Home page: https://fabio.pierazzi.com

References

See advice on reviewing literature for systems security.

Best Practices for MLSec

Adversarial ML: Evasion

Adversarial ML: Backdoor

Adversarial ML: Poisoning

Adversarial ML: General and Defenses

Socio-technical MLSec

Malware Detection