Competitive, Fully Funded Opportunity for a 4-Year Ph.D. at UCL Computer Science (Home/UK candidates only)

Explainable, Knowledge-driven AI and ML for Systems Security

Primary Supervisor: Dr. Fabio Pierazzi
Application Deadline: April 10th, 2026
Expected Start Date: October 2026
Eligibility: Open to Home/UK applicants only
For details on home fee status and eligibility, please see the UK government guidance and the UCL fee status guidance.
Programme & How to Apply UCL Computer Science 4-Year Programme (MPhil/PhD)

The rapid evolution of AI and ML is fundamentally transforming the landscape of systems security. Recent advances, such as large language models (LLMs) and agentic AI systems, are not only enhancing our ability to automate the detection, investigation, and mitigation of cyber threats, but also opening new opportunities. For example, these technologies offer new ways to support analysts in exploring and structuring vast bodies of security knowledge, potentially enabling richer understanding and more effective investigations. At the same time, their increasing use brings new security and privacy challenges, including novel attack vectors against AI-enabled applications, risks from models that may behave unpredictably in complex environments, and evolving adversarial tactics that target both traditional systems and AI-driven defences. These dynamics highlight an urgent need for AI in security that is not just powerful, but also trustworthy, transparent, and truly supportive of human users.

Current research has exposed limitations in both AI models and human workflows. Models can fail under evolving threats (concept drift), be compromised by misleading or incomplete data, or miss subtle real-world attacker behaviours due to oversimplified problem settings. In parallel, human analysts often face cognitive overload and lack effective support tools to investigate, interpret, and respond to sophisticated attacks, especially as AI models and their outputs become more complex and less transparent.

This Ph.D. project will focus on developing AI/ML techniques for systems security that are not only robust and explainable, but also directly beneficial to real-world practitioners. The direction can be tailored to the candidate’s strengths and interests, whether those lie in designing novel ML approaches for complex security tasks; securing and interpreting ML classifiers, LLMs, and agentic AI in adversarial scenarios; or developing user-centred systems to support human analysts facing intricate investigations. Suggested research directions include (but are not limited to):

  • Designing frontier AI-driven systems to solve complex, realistic security tasks that go beyond simple detection (e.g., multi-step intrusion analysis, cross-domain threat hunting), focusing on workflows or decision support that analysts actually need;
  • Evaluating the security and deployment trade-offs of Small Language Models (SLMs) for systems security tasks, including their unique vulnerabilities, performance, and cost benefits in real-world settings;
  • Investigating how agentic AI systems could introduce new attack surfaces and security challenges in core systems security scenarios, such as automated intrusion response, threat investigation, or privilege escalation, compared to traditional paradigms;
  • Conducting co-design user studies and empirical evaluations with security analysts to understand pain points and ensure that proposed methods genuinely help practitioners in their day-to-day work;
  • Developing transparent, interpretable models or tools that facilitate trust and collaboration between AI systems and human experts, ensuring that system decisions and threat explanations are accessible and actionable;
  • Establishing principled evaluation frameworks and robust datasets that reflect realistic, evolving environments, rather than artificially simplified or static scenarios, so that research outcomes can have genuine practical impact.

The overarching ambition is to bridge the gap between advances in AI/ML and their practical application in adversarial security settings, building solutions that empower both automated defences and human analysts, and ultimately achieving tangible, real-world benefits for systems security.

Ideal Candidate

I am seeking candidates with a BSc/MSc in Computer Science or a related field, a solid background in AI and/or systems security, and a strong interest in trustworthy AI for security. Interest in the theoretical aspects of AI/ML security is welcome. Programming experience (ideally in Python) is expected. Also important are motivation, curiosity, commitment to research, and alignment with the research vision outlined above. If this resonates with you, please contact me at f.pierazzi@ucl.ac.uk before applying, including the keyword “Phoenix 2.0” in the subject line.

Final Considerations

This remains a highly competitive opportunity, and funding is not guaranteed.

I encourage prospective applicants not only to consider the research directions outlined above, but also to propose their own project ideas, inspired by and connected to my team’s research interests and their own passions. There is flexibility in the scope of the project, and I am keen to support candidates who wish to shape their research direction in line with both sets of interests.

A strong foundation in either cybersecurity or machine learning is important, but a lack of deep experience in one of them is not a deal breaker: curiosity and a willingness to bridge any gaps are valued just as highly.

If you are motivated and would like to discuss your background or explore how your ideas could connect with this opportunity, please feel free to reach out to me.