Threat Modelling for Sleeper Agents in Agentic AI: Attack Chains and Resilience for Digital Identity Systems

Wed 26 Nov | Online

A workshop on threat modelling to counter the systemic cyber risks that arise when shadow AI, sleeper agents, AI malware and disinformation combine in autonomous AI-driven attacks on digital identity systems.


Time & Location

26 Nov 2025, 10:00 – 13:00

Online

About the event

Together with The Alan Turing Institute, we invite you to our upcoming workshop, 'Threat Modelling for Sleeper Agents in Agentic AI: Attack Chains and Resilience for Digital Identity Systems'.

This workshop advances the ongoing exploration of systemic cyber risks in digital identity infrastructures, building directly on insights gained from Workshop 1 (Rapid Evidence Review) and Workshop 2 (Resilience in Digital Identity – Systemic Risks from Emerging Technologies). While the earlier sessions mapped the broader landscape of risks and resilience challenges, Workshop 3 focuses on developing threat models to understand and counter the convergence of shadow AI, sleeper agents, AI malware, and AI-based disinformation within autonomous AI-driven attacks.

The session will also consider the role of emerging protocols such as Model Context Protocol (MCP), Agent-to-Agent (A2A), and AGNTCY, which are reshaping how agents interact, coordinate, and embed within digital infrastructures, and which may inadvertently introduce new systemic vulnerabilities.

View the agenda here.


Introduction

Sleeper agents (dormant malicious routines stealthily embedded in agentic AI components or MCP integration points) remain inactive until narrowly defined triggers (for example, geo-distributed failed logins, a policy update, or a provenance flag) cause them to activate and manipulate identity verification flows.
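To make the trigger mechanics concrete for threat modelling, the sketch below shows one way a geo-distributed failed-login trigger condition could be expressed and monitored. It is a minimal, hypothetical Python illustration: the event fields, region threshold, and class names are assumptions chosen for discussion, not part of any deployed system.

# Hypothetical illustration for threat modelling only. All names and
# thresholds are assumptions, not a real system or a real exploit.
from dataclasses import dataclass, field


@dataclass
class VerificationEvent:
    user_id: str
    region: str
    success: bool


@dataclass
class SleeperTriggerModel:
    """Models an activation condition: failed logins observed from a
    minimum number of distinct regions within one monitoring window."""
    region_threshold: int = 3
    failed_regions: set = field(default_factory=set)

    def observe(self, event: VerificationEvent) -> bool:
        if not event.success:
            self.failed_regions.add(event.region)
        # True once the modelled trigger condition is met, i.e. the point
        # at which a dormant routine would switch from benign to malicious.
        return len(self.failed_regions) >= self.region_threshold


if __name__ == "__main__":
    model = SleeperTriggerModel()
    events = [
        VerificationEvent("alice", "eu-west", False),
        VerificationEvent("alice", "us-east", False),
        VerificationEvent("alice", "ap-south", False),
    ]
    for e in events:
        if model.observe(e):
            print("Trigger condition met: dormant routine would activate here.")

Framing the trigger as an observable condition like this also suggests where defenders could place monitoring, since the same condition a sleeper agent waits for is one a detection pipeline can watch for.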

This is a major problem: once triggered, sleeper agents can silently escalate privileges, corrupt or erase audit logs, and replicate across A2A-coordinated agents, producing large-scale credential compromise and a systemic loss of trust in digital identity infrastructures.
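For workshop discussion, an attack chain of this kind can be written down as a simple enumerable structure that pairs each stage with candidate mitigations. The sketch below is one hypothetical way to do so; the stage names and mitigations are assumptions offered as a starting point for the threat-modelling exercise, not a definitive taxonomy.

# Hypothetical attack-chain representation as a threat-modelling artefact.
# Stage names and mitigations are illustrative assumptions for discussion.
from dataclasses import dataclass


@dataclass(frozen=True)
class AttackStage:
    name: str
    description: str
    candidate_mitigations: tuple


SLEEPER_AGENT_CHAIN = (
    AttackStage(
        "activation",
        "Dormant routine triggered inside an agentic AI component",
        ("trigger-pattern monitoring", "behavioural baselining"),
    ),
    AttackStage(
        "privilege_escalation",
        "Silent escalation within identity verification flows",
        ("least-privilege agent scopes", "step-up authentication"),
    ),
    AttackStage(
        "log_tampering",
        "Corruption or erasure of audit logs",
        ("append-only or externally anchored logs",),
    ),
    AttackStage(
        "propagation",
        "Replication across A2A-coordinated agents",
        ("inter-agent message provenance checks", "rate limiting"),
    ),
)

if __name__ == "__main__":
    for stage in SLEEPER_AGENT_CHAIN:
        print(f"{stage.name}: {stage.description}")
        for mitigation in stage.candidate_mitigations:
            print(f"  - mitigation: {mitigation}")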


Register now

Audience: This workshop is ideally suited to participants with expertise or interest in digital identity, cryptography, and AI safety. It will benefit from the perspectives of academic and industry researchers, developers, and security specialists working in areas such as post-quantum cryptography, AI risk and resilience, identity infrastructure, and cybersecurity.

Register to attend below. The deadline to register is 12:00pm (London, UK) on Monday, 24 November 2025.


This event is part of the Trustworthy Digital Identity project, a three-year research collaboration bringing together expertise from SPRITE+ and The Alan Turing Institute.
