
Project Partner Challenge: Call for collaboration on delusions and related harms associated with the use of Artificial Intelligence (AI)

  • spriteplus

Overview

The project partner challenge grant scheme is designed to enable a SPRITE+ Expert Fellow to work on a research challenge. The scheme operates through three phases.

  1. Potential partners submit expressions of interest; if chosen, they then work with the SPRITE+ director to set their challenge.

  2. SPRITE+ Expert Fellows express interest in that challenge by submitting an application.

  3. The successful Expert Fellow then works with the project partner to develop a full proposal, which then goes to the SPRITE+ management team for review.

For this Project Partner Challenge, we invite applications from researchers with expertise in machine learning, natural language processing, and AI evaluation to collaborate with South London and Maudsley (SLaM) NHS Foundation Trust in co-developing a full proposal for a Project Partner Challenge Grant on the safety of large language models (LLMs) in mental health contexts.


  • Funding available: £52,000

  • Eligible applicants: UK-based academic researchers

  • Outcome: Co-developed full proposal submitted by 1st April 2026

  • Project topic: Evaluating LLM “psychotogenicity” (propensity to induce or exacerbate psychosis)


Background:

About South London and Maudsley (SLaM) NHS Foundation Trust

SLaM NHS Foundation Trust provides the widest range of NHS mental health services in the UK and is a leader in improving health and wellbeing – locally, nationally and globally. We serve a local population of 1.3 million people in south London and provide specialist services for children and adults across the UK and beyond.

Each year we provide inpatient care for over 5,000 people and treat more than 40,000 patients in the community in Lambeth, Southwark, Lewisham and Croydon. Our work is divided across six operational directorates in four boroughs in south London. We provide more than 240 services to local people, as well as more than 50 specialist services for children and adults across the UK and beyond. We are part of King's Health Partners, an academic health sciences centre, and are the only mental health trust in the UK to have our own biomedical research centre, hosted jointly with the Institute of Psychiatry, Psychology and Neuroscience.

We are also focused on promoting mental health and wellbeing. Our philosophy of care is the recovery model. We provide treatment that helps people get well and stay well, so they can achieve their full potential. Above all, we believe change is possible, no matter how long someone has had a mental health problem, or how much this has changed their life. Our integrated adult services make it possible for us to address both an individual’s mental health and social care needs. We are focusing more on early intervention: getting help to people sooner and supporting them at an earlier stage in their lives – especially younger people. Our work is about changing lives, not just for individuals, but in partnership with them.


Background of the project

Large Language Models (LLMs) are rapidly becoming embedded in everyday communication, decision-making, and information retrieval in the form of AI chatbots. Within mental health contexts, most current work focuses on therapeutic applications, yet individuals with mental illness increasingly interact with these systems outside clinical settings.

Recent reports suggest that conversational AI may sometimes reinforce or mirror delusional content, potentially exacerbating symptoms in those vulnerable to psychosis. Although evidence remains limited and causality unclear, concerns are growing that agentic AI may contribute to epistemic instability, blur reality boundaries, or undermine self‑regulation.

Given the accelerating adoption of LLM‑based tools, there is an urgent need for robust methods to evaluate the “psychotogenicity” of these systems and develop benchmarks that reflect real‑world interactions with people experiencing psychosis.

 

Objective:

The aim of this project is to review current approaches to LLM safety evaluation in mental health contexts and to develop a novel benchmark for assessing psychotogenicity: the potential for an AI system to induce, validate, or intensify delusions and other psychotic experiences, as well as subtler but possibly more widespread mechanisms of belief change, which could have a large societal impact given the current uptake of generative AI.

We are seeking an academic partner to work collaboratively with SLaM NHS Foundation Trust to co-design a research project to address this challenge.


Expected applicant background:

Applicants should have a background in computer science with strong ML/NLP foundations and demonstrable experience designing and running evaluations of language models.

Required experience (one or more):

  • Building evaluation pipelines for LLMs (Python; API-based experimentation; reproducible workflows)

  • Developing and/or validating benchmarks (e.g., prompt suites, datasets, scoring rubrics, evaluation harnesses, leaderboard methodology)

  • Methodologically robust evaluation (e.g., reliability/validity, inter-rater agreement, bias/error analysis)
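To illustrate the kind of methodological rigour listed above, a minimal sketch of an inter-rater agreement check for rubric-scored model outputs is shown below. The rubric, labels, and scores are hypothetical examples, not part of this call; Cohen's kappa is one standard agreement statistic among several an applicant might use.

```python
from collections import Counter


def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items.

    Compares observed agreement against the agreement expected
    by chance given each rater's label distribution.
    """
    assert len(rater_a) == len(rater_b) and rater_a, "need paired labels"
    n = len(rater_a)
    # Proportion of items on which the two raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)


# Hypothetical rubric scores for eight model responses
# (0 = safe response, 1 = validates delusional content).
rater_1 = [0, 0, 1, 1, 0, 1, 0, 0]
rater_2 = [0, 0, 1, 0, 0, 1, 0, 1]
print(round(cohens_kappa(rater_1, rater_2), 3))  # prints 0.467
```

A kappa this low would typically prompt refinement of the scoring guidelines before scaling up annotation, which is the kind of reliability analysis the bullet points above refer to.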

Desirable experience:

  • Safety research for generative models (robustness, harmful content, reliability, uncertainty calibration)

  • Human-in-the-loop evaluation and annotation studies (including guideline/rubric development)

  • Experience in sensitive or safety-critical domains (health, crisis, safeguarding), including appropriate risk management and ethics


Application deadline: Monday 9th March 2026

Download the application form here:


Timeline:

  • Call for applications sent out: January 26th, 2026

  • Applications deadline: March 9th, 2026

  • Applicants informed of outcome: March 16th, 2026

  • Full proposal to be submitted by April 27th, 2026

  • Projects to start no later than July 1st, 2026

  • Project to complete by no later than 1st March 2027


Funding:

The maximum total grant size is £52,000.

Funds will be awarded at 80% FEC in accordance with normal UKRI practices. In practical terms, this means that SPRITE+ will fund 80% of the total costs outlined in successful proposals, up to a maximum of £52,000 for the successful applicant; since only 80% of the full economic cost is funded, the total budget requested may exceed this amount. Funds awarded will be subject to standard UKRI grant terms and conditions, which are non-negotiable.

The work funded by this project will last no longer than 12 months, commencing no later than 1st June 2026 and finishing no later than 30th June 2027.

Eligible items for funding:

  • Replacement salary costs of the Expert Fellow

  • Costs of research assistance for the Expert Fellow

  • Costs of workshops/meetings

  • Travel and subsistence expenses

  • Sundry research costs


If you have any questions, please get in touch with us at spriteplus@manchester.ac.uk

