
MICHA - Misinformation Intervention Countermeasure for Health Advice: Benevolent Bots for Combatting Misinformation in Online Health Communities
01 November 2024 - 31 May 2025
Project team
Prof Philip Fei Wu
Principal Investigator
Professor of Information Management, School of Business and Management, Royal Holloway, University of London
Dr Evronia Azer
Co-Investigator
Assistant Professor, Coventry University, Centre for Business in Society
Dr Samantha Clarke
Co-Investigator
Assistant Professor, Coventry University, Centre for Arts, Memory and Communities
Dr Frédéric Tomas
Co-Investigator
Assistant Professor, School of Humanities and Digital Sciences, Tilburg University, Netherlands
Dr Gilad Rosner
Non-Academic Partner
Internet of Things Privacy Forum
Summary
This project aimed to address the spread of health misinformation in online forums. While generative artificial intelligence (GenAI) contributes to the proliferation of online health misinformation, it also presents opportunities to use the same technology for content moderation and the correction of misinformation. This project focused on understanding users' requirements for, and attitudes towards, GenAI-powered 'benevolent bots' that interact with members of online maternal health forums to help combat misinformation.
Objectives
This project had three main objectives, which remained unchanged throughout:
To understand user requirements and expectations for such benevolent bots in online maternal health forums;
To review the existing academic research on using AI for misinformation countermeasures;
To review the state-of-the-art technological solutions for identifying and mitigating misinformation.
Activities
November 2024 – January 2025: Systematic literature search in PubMed and Scopus followed by a preliminary coding of academic papers published between 2015 and 2024.
February 2025 – March 2025: Systematic search in Nexis/Lexis for news articles and industry reports on using AI to counter health misinformation.
January 2025 – March 2025: Survey questionnaire design and data collection.
March 2025 – May 2025: Planning and conducting two online focus groups.
Data analysis is ongoing. Below are some initial findings:
From our systematic literature review:
Academic studies over the past 10 years that examine the use of AI to combat misinformation have focused on detection-based methods, such as fact-checking and labelling. In contrast, there has been less emphasis on AI-based interactive countermeasures, such as in-context moderation and social correction.
From the online survey:
Approximately 60% of survey respondents (N=232) reported visiting online forums for maternal health information ‘frequently’ (once a week) to ‘very frequently’ (daily), with expecting parents being particularly active. In addition, 35% of respondents ‘often’ or ‘always’ adopt health advice from these forums, and a further 58% ‘sometimes’ do.
Respondents expressed high trust in maternal health information on online forums, with 55% perceiving ‘none’ to ‘a little’ misinformation and only 8% reporting ‘a lot’.
Across all survey questions about a hypothetical AI bot designed to combat misinformation in online forums, responses consistently indicated strong confidence in its usefulness, both for providing accurate information to individuals and for enhancing the online community as a whole.
A significant positive correlation (p<0.001) was found between education level and the perceived amount of misinformation in online forums, with higher education levels associated with greater perceived misinformation. Additionally, a relatively weak (p<0.05) but consistent negative correlation was observed between age and the perceived trustworthiness and usefulness of the misinformation-fighting AI bot.
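For illustration only, the sketch below shows how an association between two ordinal survey variables of this kind might be tested with a Spearman rank correlation in Python. The variable names, Likert codings, and data are hypothetical, and the report does not specify which statistical test was actually used.

```python
# Illustrative sketch: rank correlation between two ordinal survey variables.
# All data and codings below are hypothetical (1 = lowest category).
from scipy.stats import spearmanr

# Hypothetical coded responses for six respondents
education = [1, 2, 2, 3, 4, 4]          # e.g. 1 = secondary ... 4 = postgraduate
perceived_misinfo = [1, 2, 3, 3, 4, 5]  # e.g. 1 = 'none' ... 5 = 'a lot'

# Spearman's rho is suited to ordinal data: it tests for a monotonic
# association between the two rankings rather than a linear one.
rho, p_value = spearmanr(education, perceived_misinfo)
print(f"Spearman's rho = {rho:.2f}, p = {p_value:.3f}")
```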
From the focus groups, with the participants’ words directly quoted:
People do not mind the presence of AI bots in online forums, “because Instagram is full of bots … people still use it the same as ever.”
Human interactions are not necessarily authentic, as “people will troll … many stories are just plain made up!”
“It's just finding that right balance between what a bot should do and what human needs to do”. AI can augment human moderation by flagging misinformation, signposting scientific sources, answering frequently asked questions, and triaging tasks to human moderators (see the sketch after this list).
AI should not be used to block or remove all misinformation posts; such posts can serve an educational purpose when flagged or debunked.
“But maybe sometimes I want to read some nonsense!” Online forums do more than provide factually accurate information; they also meet people’s social and emotional needs, and AI bots should do the same.
“AI is only as good as the information it’s receiving”. Trust depends on the information sources the AI bot is trained on and on the organisation behind the bot.
“It's never going to stop misinformation if people are working within a bubble”.
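To make the division of labour suggested by the focus groups concrete, here is a minimal sketch of such a triage workflow in Python. The classify_misinfo_risk stub, the thresholds, and the FAQ store are hypothetical placeholders, not components built in this project; the design point is that the bot answers routine questions, flags and signposts borderline content, and escalates high-risk posts to human moderators rather than removing them automatically.

```python
# Minimal sketch of a human-in-the-loop triage workflow (hypothetical).
from dataclasses import dataclass

@dataclass
class Post:
    text: str

# Hypothetical store of curated answers to frequently asked questions
FAQ_ANSWERS = {
    "is caffeine safe during pregnancy": "See official guidance on caffeine intake ...",
}

def classify_misinfo_risk(post: Post) -> float:
    """Hypothetical stub: return a 0-1 misinformation risk score.
    A real system would call a trained misinformation classifier."""
    return 0.0  # placeholder

def triage(post: Post) -> str:
    """Route a post: answer FAQs, flag borderline cases with sources,
    and escalate high-risk content to human moderators (never auto-remove)."""
    question = post.text.lower().strip("?!. ")
    if question in FAQ_ANSWERS:
        return f"AUTO-REPLY: {FAQ_ANSWERS[question]}"

    risk = classify_misinfo_risk(post)
    if risk > 0.8:
        return "ESCALATE: queued for human moderator review"
    if risk > 0.4:
        return "FLAG: label post and signpost scientific sources"
    return "PASS: no action"

print(triage(Post("Is caffeine safe during pregnancy?")))
```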
Outputs
As the project only concluded on 31 May 2025, we have not yet had time to fully analyse and write up the findings. We have begun a literature review paper and aim to complete a first draft by August. We are also planning an empirical paper based on our survey and focus group findings. We hope to share our research at national and international conferences, as well as with users and moderators of online health forums.
Impact
Our findings offer valuable insights for health forum designers and managers, many of whom are beginning to use AI for content moderation and community management. In future outputs, we aim to provide a set of design guidelines that balance the effectiveness of AI in countering misinformation with users' trust and their social and emotional needs in online health forums. The potential impact of our research includes:
Design Implications for AI Integration in Online Health Forums:
The project supports the development of AI systems that augment rather than replace human moderation, with sensitivity to social and emotional dynamics.
Improved Community Health Outcomes:
With appropriate design, AI bots can enhance information accuracy while maintaining user trust and engagement.
Evidence-Based Guidelines:
Our forthcoming design framework will help health forum providers, tech developers, and policymakers adopt AI tools that are effective, ethical, and user-aligned.
Equity Considerations:
Findings highlight the importance of education level and age in perceptions of misinformation and AI trust, offering an opportunity to tailor interventions to diverse user groups.
Future work
We plan to pursue funding opportunities to support a larger, multi-phase project that builds on the insights generated from this sandpit project. The next stage will focus on co-designing and prototyping AI-based tools that can responsibly and effectively address health misinformation in online forums, with a particular emphasis on maternal health.
Key follow-on activities will include:
Developing a set of evidence-based design guidelines for AI moderation in health communities, informed by user preferences and behavioural insights.
Piloting AI prototypes in collaboration with selected online platforms or health forums.
Running participatory workshops with stakeholders—including healthcare professionals, forum moderators, tech developers, and community members—to refine use cases and governance models.
Publishing and disseminating findings through academic and public channels to ensure broad access and impact.
We are monitoring relevant funding calls and welcome interest from:
Health platform designers and managers exploring AI integration
Researchers focused on health communication, human-AI interaction, or misinformation
Public health bodies and policymakers shaping digital health strategies
Community organisations advocating for inclusive and trustworthy online health spaces