
Fact Checked - Understanding the Factors Behind Direct Fact-Check Rejection
01 November 2024 - 31 August 2025
Project team
Grégoire Burel
Principal Investigator
Research Fellow, Knowledge Media Institute, The Open University
Dr Irini Katsirea
Co-Investigator
Reader in International Media Law, University of Sheffield
Dr Dani Madrid-Morales
Co-Investigator
Lecturer in Journalism, University of Sheffield
Dr Jeyamohan Neera
Co-Investigator
Assistant Professor in Computer Science, Northumbria University
Summary
The proliferation of misinformation presents a major threat to public discourse. While fact-checking offers a potential solution, fact-checks are not always accepted by the public. To understand the reasons behind the rejection of corrections and improve the communication practices of fact-checking organisations, the Fact-Checked project investigated how editorial style, publishing approaches, and communication methods may influence the acceptance, rejection and understanding of misinformation corrections.
Through the gathering of evidence from the literature and the analysis of responses to direct fact-checks, the project identified preliminary factors linked with the rejection of direct corrections. Interviews with international fact-checkers and the annotation of more than a hundred fact-checks from multiple fact-checking organisations provided insights about the existing and emerging editorial policies and practices followed by fact-checkers. With an additional survey of a thousand individuals across the UK, the project gathered supplementary insights about the overall perception of fact-checks in the UK and valuable knowledge for fact-checking organisations to develop more effective publication and communication strategies.
Objectives
The main objective of the Fact-Checked project was to investigate and understand why the general public and misinformation sharers may reject or distrust fact-checks, and to identify the areas where fact-checkers' processes and methodologies could be altered to improve correction acceptance. As the project progressed, its focus shifted towards understanding the public perception of fact-checking and the evolution of fact-checking itself, such as the migration of online platforms from professional fact-checking towards community fact-checking:
Objective 1 (O1): The first objective focused on determining the factors that may impact how individuals accept/reject direct fact-check corrections by examining academic literature on fact-checking, misinformation and user behaviour on social media and users’ responses to direct fact-check corrections. To better understand these factors, we annotated 135 responses to direct fact-checks on X and investigated the justification used for rejecting or accepting corrections. We also performed a literature review of the factors often highlighted in previous research.
Objective 2 (O2): The second project objective focused on identifying the editorial and communication factors currently used by fact-checkers and how they may impact how individuals accept/reject direct fact-check corrections online. Interviews were conducted with fact-checkers, and fact-checking articles were annotated to identify current practices. Although multiple fact-checkers initially agreed to interviews, we had to scale back the number conducted due to the change in fact-checkers' circumstances following the US elections and changes in Meta’s fact-checking policies.
Objective 3 (O3): The last objective was to create a pilot study and analyse the collected data to bootstrap the foundation for a large research grant proposal. As the pilot study results were obtained at the end of the project, work on this objective will continue beyond the project lifespan. We are currently analysing the results and plan to publish them at a conference or in a journal article.
Overall, only a few changes were made to the initial project plan. The first main change was the scaling down of interviews due to social media platforms' policy changes towards fact-checking organisations and the political shift in the USA. The second change, which was to some extent expected, was the time needed to collect the survey responses. As a result, the collected data will only be fully analysed after the project has concluded, and this report contains preliminary analysis only.
Activities
The initial project activities (M1-2) focused on obtaining ethical approval and identifying key resources and stakeholders, such as hiring research assistants for the fact-check annotation task and contacting fact-checking organisations.
The following activities focused on annotating responses to the direct fact-check bot as either positive or negative reactions, and on identifying, from the literature, the factors linked to the acceptance or rejection of fact-checks (M2-3). Following the annotation and literature analysis, interviews with fact-checkers were conducted, and fact-checking articles were annotated by research assistants to understand how fact-checks are presented (M3-4). Finally, a large survey of a representative sample of UK citizens was conducted about their perception and understanding of fact-checking (M4-5). The following sections describe the main activities conducted during the project in more detail.
Literature review: We conducted a structured literature review to understand the multifaceted factors influencing the rejection or acceptance of fact-checks. We reviewed 47 scholarly publications, including empirical studies, meta-analyses, conceptual articles, and policy papers. These publications cover a range of fields, from human-computer interaction and media studies to journalism, computer science, user design and psychology. Each document was analysed and coded across key dimensions: study type, methods used, factors identified, and verdicts or conclusions.
This structured review enabled both the identification of patterns in the literature and the construction of an emerging typology of influence factors. The classification yielded a robust set of variables that we used to synthesise findings. Many studies identified psychological and cognitive biases—such as motivated reasoning and selective exposure—as central to the rejection of fact-checks. Others highlighted platform-related dynamics (e.g., public vs. private corrections), emotional tone, and the format or timing of the correction.
From this body of work, we generated a six-part typology of factors influencing fact-check acceptance: i) Content Design Factors (e.g., tone, framing, message clarity); ii) Audience Characteristics (e.g., age, ideology, digital literacy); iii) Source Credibility and Transparency; iv) Technological and Contextual Factors (e.g., platform affordances); v) Behavioural Outcomes (e.g., backfire effects, attitude shifts); and vi) Broader Structural Issues (e.g., systemic misinformation, digital inequality). This typology served as a foundational tool for guiding our analysis of direct fact-check responses on X and structuring interviews with fact-checkers. The coded Excel database (n = 47 entries) remains a valuable asset for ongoing work, helping to map the field and guide future hypothesis formation in both academic and applied settings.
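To illustrate how the coded database can support this kind of synthesis, the sketch below shows one possible way to represent a coded study entry and tally typology dimensions across entries. The class, field names and example records are hypothetical and do not reflect the structure of our actual Excel database.

```python
# Hypothetical sketch of the coded literature database and a tally of typology
# dimensions; field names and example entries are illustrative, not the real data.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class CodedStudy:
    citation: str
    study_type: str                               # e.g. "empirical", "meta-analysis", "conceptual", "policy"
    methods: list = field(default_factory=list)   # e.g. ["survey experiment"]
    factors: list = field(default_factory=list)   # mapped to the six-part typology
    verdict: str = ""                             # main conclusion about correction acceptance

studies = [
    CodedStudy("Example et al. (2021)", "empirical", ["survey experiment"],
               ["Content Design Factors", "Audience Characteristics"],
               "Tone and ideology jointly predict acceptance"),
    CodedStudy("Sample & Author (2020)", "meta-analysis", ["systematic review"],
               ["Behavioural Outcomes"],
               "Corrections reduce misperceptions on average"),
]

# Count how often each typology dimension appears across the coded studies.
factor_counts = Counter(factor for study in studies for factor in study.factors)
for factor, count in factor_counts.most_common():
    print(f"{factor}: {count} studies")
```

Keeping the coding in a structured form like this makes it straightforward to re-run such tallies as further studies are added to the database.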
Direct fact-check annotations: Following a previous study that used a bot to automatically correct users spreading misinformation on X with messaging templates of different tones (Burel et al., 2024), we annotated 135 additional textual responses to the automatic corrections as either positive or negative. The newly annotated responses were based on corrective messages that rely on summaries of fact-checks rather than fact-check labels. This annotation task extends the previous annotation of 306 posts from the study that identified that alerting, factual and friendly messages are more likely to elicit a response from corrected users (Burel et al., 2024).
Although still preliminary, the analysis showed that mentioning fact-checkers appears to reduce reactions from misinformation spreaders (a 15% reaction rate compared with 24% when no fact-checker is mentioned). However, both figures are higher than when explicit corrective labels are used (11%). This result suggests that misinformation spreaders may be less likely to engage with corrections that are explicitly attached to a fact-checker, and that they engage more readily with corrections that lack judgment labels.
We also observed that users were more likely to reply to these summary-based corrections than to corrections using judgment labels (13% instead of 6%). The annotation task also showed a higher share of neutral-to-positive responses (26% instead of 20.2%). Positive responses ranged from genuinely acknowledging the mistake to admitting that the misinformation had been shared knowingly. Negative responses, unlike those observed when judgment labels were provided, were less often rejections of the post simply because it was a fact-check. We also observed interest from some repliers in engaging in further discussion.
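As an illustration of the kind of aggregation behind these figures, the sketch below computes reaction and reply rates and the share of neutral-to-positive replies from annotated response records. The record format, field names and values are assumptions for illustration, not the project's actual data or code.

```python
# Illustrative aggregation over annotated responses; records and values are hypothetical.
annotations = [
    {"mentions_fact_checker": True,  "reacted": True,  "replied": True,  "stance": "positive"},
    {"mentions_fact_checker": False, "reacted": True,  "replied": False, "stance": None},
    {"mentions_fact_checker": True,  "reacted": False, "replied": False, "stance": None},
]

def rate(records, key):
    """Share of records where the given boolean field is True."""
    return sum(r[key] for r in records) / len(records) if records else 0.0

with_mention = [r for r in annotations if r["mentions_fact_checker"]]
without_mention = [r for r in annotations if not r["mentions_fact_checker"]]

print(f"Reaction rate (fact-checker mentioned): {rate(with_mention, 'reacted'):.0%}")
print(f"Reaction rate (no mention):             {rate(without_mention, 'reacted'):.0%}")
print(f"Reply rate (all corrections):           {rate(annotations, 'replied'):.0%}")

replies = [r for r in annotations if r["replied"]]
positive_share = (sum(r["stance"] in ("neutral", "positive") for r in replies) / len(replies)
                  if replies else 0.0)
print(f"Neutral-to-positive share of replies:   {positive_share:.0%}")
```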
Fact-check article annotations: As part of the project, three research assistants (RAs) were hired to annotate English-language fact-checks from different periods, published by 28 fact-checking organisations across multiple countries, resulting in a total of 117 annotations. The fact-checks were selected from fact-checkers in the International Fact-Checking Network (IFCN) using the CimpleKG knowledge graph (Burel et al., 2024a).
The RAs were asked to answer 22 questions centred around: 1) the claims and sources investigated during the fact-checking process; 2) article characteristics such as article structure and claim presentation; 3) communication and tone, such as the language used for fact-checking claims; 4) verdict presentation (the way the verdict is presented); 5) transparency and accountability; and 6) bias mitigation (the way fact-checkers communicate and deal with their potential biases).
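For illustration, the sketch below shows one way such an annotation schema and the collected answers could be organised and aggregated by dimension; the question wording and helper code are hypothetical rather than the actual instrument used by the RAs.

```python
# Hypothetical organisation of the annotation schema by dimension and a simple
# aggregation of answers; question wording is illustrative, not the real questionnaire.
from collections import Counter, defaultdict

schema = {
    "claims_and_sources": ["Where does the claim originate?", "What sources are cited?"],
    "article_characteristics": ["Is the claim restated near the top?", "Is the structure easy to follow?"],
    "communication_and_tone": ["Is the language simple?", "Is the tone formal or informal?"],
    "verdict_presentation": ["Is the verdict clearly highlighted?"],
    "transparency_and_accountability": ["Is the methodology linked or explained?"],
    "bias_mitigation": ["Does the article address potential bias?"],
}

def aggregate(annotations):
    """Count answer frequencies per question across all annotated articles."""
    counts = defaultdict(Counter)
    for annotation in annotations:          # one dict of {question: answer} per article
        for question, answer in annotation.items():
            counts[question][answer] += 1
    return counts

example = [{"Is the verdict clearly highlighted?": "yes"},
           {"Is the verdict clearly highlighted?": "yes"}]
print(aggregate(example)["Is the verdict clearly highlighted?"])  # Counter({'yes': 2})
```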
The annotation task highlighted that fact-check articles exhibit global consistency, with a focus on political claims primarily sourced from social media platforms like Facebook and X. Fact-checking articles are generally well-structured and easy to follow, using a direct and formal tone with simple language. Verdicts are clearly highlighted, and articles often include summaries and clear labels to ensure transparency. Fact-check evidence was typically based on interpreting sources and was integrated via hyperlinks. Overall, fact-checkers' editorial and publication practices are largely shaped by claims found on social media: fact-checkers focus on political claims with a transparent and direct approach to article production.
Fact-checkers' interviews: We endeavoured to interview fact-checkers against the backdrop of a tremendous upheaval in the fact-checking landscape following the severance of ties with fact-checkers by major online platforms at the start of 2025. As a result, many fact-checkers with whom team members had prior connections were not open to the possibility of an interview. However, we managed to interview three organisations involved with fact-checking: Stephan Mündges (EFCSN), Giovanni Zegna (Pagella Politica) and Jana Heigl (Faktenfuchs, BR24 Bayerischer Rundfunk). The last two organisations are fact-checkers. Pagella Politica is the main Italian fact-checker. Faktenfuchs is the fact-checking arm of the German public service broadcaster (PSB) Bayerischer Rundfunk, which is a member of ARD, the umbrella organisation of Germany's regional public-service broadcasters. EFCSN is the supporting body for European fact-checkers. It does not carry out fact-checking work itself, but it supports its member organisations in all the dimensions of their work, such as fact-checking, debunking, media literacy etc.
The interviews showed certain common trends: the shared belief in the importance of fact-checking, not as a silver bullet, but as part of the solution, next to other measures to address the systemic causes of mis- and disinformation, such as media literacy and platform regulation; the perceived current inadequacy of AI as a means of detecting mis- and disinformation; the defence of fact-checkers’ work by reference to their adherence to journalistic standards; and the recognition that there are actors who usurp the label of ‘fact-checking’ to engage in manipulative opinion work. They also revealed differences regarding the extent of their exposure to the US funding freeze and the importance of the distinction between mis- and disinformation in their work.
We aim to analyse the transcripts of these interviews in greater detail, in conjunction with the findings from our literature review, to identify further insights.
Online Survey: The citizen survey constituted the final empirical component of the project and was designed to explore how members of the UK public understand, interpret, and respond to fact-checking interventions. The survey instrument was administered online using a panel provider and applied quota sampling to ensure demographic balance across age and gender.
The final sample size was N = 1,150. Ethical approval for the study was granted by Northumbria University (Ref. 8461), and all participants provided informed consent. The questionnaire included over 120 items and was structured into thematic blocks covering demographics, political engagement, news consumption, attitudes toward fact-checking, exposure to misinformation, verification behaviours, and media literacy. Additional sections focused on awareness of and trust in community-based fact-checking systems, such as X’s Community Notes.
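As a hedged illustration of how quota balance can be monitored during fieldwork, the sketch below compares a realised sample against target proportions; the quota targets, column names and toy data are placeholders, not the survey's actual quotas.

```python
# Illustrative quota check; targets, columns and data are placeholders.
import pandas as pd

targets = {
    "gender": {"female": 0.51, "male": 0.49},
    "age_band": {"18-34": 0.27, "35-54": 0.33, "55+": 0.40},
}

# `responses` stands in for the survey dataset (one row per respondent).
responses = pd.DataFrame({"gender": ["female", "male", "female"],
                          "age_band": ["18-34", "55+", "35-54"]})

for variable, expected in targets.items():
    observed = responses[variable].value_counts(normalize=True)
    for category, target_share in expected.items():
        achieved = observed.get(category, 0.0)
        print(f"{variable}={category}: target {target_share:.0%}, achieved {achieved:.0%}")
```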
Participants were asked not only whether they encountered fact-checks and how they responded to them, but also how much they trusted different formats and sources of corrections, including traditional media, independent fact-checkers, and crowdsourced mechanisms. A key component of the survey was an embedded experimental task in which participants viewed a misinformation post and were randomly assigned to see either a professional fact-check or a community note correcting it. They were then asked to evaluate the accuracy, credibility, and perceived bias of the correction, as well as their likelihood of sharing the post. This allowed us to capture the immediate impact of different correction formats on trust and behavioural intent.
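The analysis of this embedded task can be sketched as a simple between-subjects comparison of the two correction formats. The code below is illustrative only: the column names and data are simulated, and a Welch t-test on credibility ratings stands in for the fuller analysis we plan to report.

```python
# Illustrative between-subjects comparison for the embedded experiment;
# the data are simulated and the variable names hypothetical.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 200  # placeholder, not the real cell sizes
experiment = pd.DataFrame({
    "condition": rng.choice(["professional_fact_check", "community_note"], size=n),
    "credibility": rng.integers(1, 8, size=n),   # e.g. a 1-7 rating scale
    "would_share": rng.integers(0, 2, size=n),   # 0 = no, 1 = yes
})

professional = experiment[experiment.condition == "professional_fact_check"]
community = experiment[experiment.condition == "community_note"]

# Compare mean credibility ratings between the two correction formats.
t, p = stats.ttest_ind(professional.credibility, community.credibility, equal_var=False)
print(f"Credibility: professional={professional.credibility.mean():.2f}, "
      f"community={community.credibility.mean():.2f}, Welch t={t:.2f}, p={p:.3f}")
```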
Data collection was completed in May 2025. A full analysis will be reported in forthcoming publications and will inform grant applications.
Outputs
The main outputs of the project at this stage are the preliminary findings of the activities outlined in the previous section. We expect that these preliminary findings, once further analysed, will lead to one or more peer-reviewed publications and will contribute evidence for a grant application (see the “Future Work” section).
The first key output of the project was the development of a typology capturing the main factors that influence whether individuals accept or reject fact-checks. This typology was derived from a structured review of 47 academic studies on misinformation, correction efficacy, and public trust in media. Each study was analysed and coded according to its methodological approach, focal variables, and conclusions regarding user responses to factual corrections. The resulting taxonomy comprises six interrelated dimensions.
First, content design factors include the tone, clarity, and format of the correction. Second, audience characteristics refer to cognitive, demographic, and ideological traits that shape receptivity. Third, source credibility and transparency reflect how trust in the fact-checker influences uptake. Fourth, technological and contextual factors capture platform design, algorithmic mediation, and affordances that condition correction visibility. Fifth, behavioural outcomes measure the extent to which fact-checks change beliefs or actions. Lastly, broader structural issues address systemic challenges such as digital inequality and the politicisation of truth claims.
This taxonomy provides a conceptual framework that informed the annotation of fact-checking responses and the design of the citizen survey. It also offers a transferable analytical tool for future research on information integrity interventions, and could be used as a theoretical underpinning for a future grant application.
A second key output is the annotation of 135 direct fact-check replies and the analysis of direct misinformation corrections on X. Initial analysis suggests that mentioning fact-checkers reduces the reaction rate from misinformation spreaders (from 24% to 15%), pointing to a high level of distrust of fact-checkers among misinformation spreaders. The analysis also showed that users were more likely to reply to corrections when judgment labels were omitted. In this context, responses to corrections were more positive than previously observed, and negative responses were less often based on rejecting the fact-check itself, which was a common occurrence when judgment labels were provided. Overall, these findings suggest that the perception of fact-checkers by misinformers may improve when judgment labels are omitted. However, the lack of clear labels may reduce the clarity of corrections.
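One way to gauge whether such a difference in reaction rates is statistically meaningful is a two-proportion test, sketched below; the counts and group sizes are placeholders rather than our actual cell counts, so the output is purely illustrative.

```python
# Illustrative two-proportion z-test; counts and group sizes are placeholders.
from statsmodels.stats.proportion import proportions_ztest

reactions = [15, 24]      # hypothetical number of users who reacted in each condition
group_sizes = [100, 100]  # hypothetical number of corrections sent per condition

z, p = proportions_ztest(count=reactions, nobs=group_sizes)
print(f"two-proportion z = {z:.2f}, p = {p:.3f}")
```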
A third output follows the annotation of 117 fact-checking articles, which focused on key aspects of the fact-checking process, including the claims and sources investigated, article structure, communication tone, verdict presentation, and transparency. Preliminary analysis reveals that modern fact-checking practices are largely shaped by the social media environment, with a primary focus on political claims originating from platforms like Facebook and X. The resulting articles are designed for clarity and transparency, featuring a logical structure, simple language, and clear verdicts, summaries, and labels. Fact-checkers tend to rely on a limited number of evidence pieces, typically four or fewer, and while publication timing varies, fact-checks generally appear within seven days of the misinformation's release. These observations show that current fact-checking practices are a direct response to the spread of misinformation on social media, prioritising clarity, transparency and timeliness.
A fourth key output stems from the analysis of the online survey of 1,150 UK adults on their views on fact-checking. Our preliminary review of the data shows four promising areas for further examination.
(I) First, while there is broad awareness of fact-checking (63% say they’ve heard of it), trust remains divided. Most respondents believe fact-checking plays a vital role—over 80% agree that it helps separate truth from misinformation and supports democracy—but many are simultaneously sceptical. For instance, 60% either agree or remain neutral about the idea that fact-checking is politically biased, and over half believe it’s used to discredit opponents. This ambivalence illustrates the challenge faced by fact-checkers in maintaining legitimacy across the political spectrum.
(II) Second, social media is viewed as the most common source of misinformation, named by 80% of respondents. Despite this, Facebook remains the most frequently used platform for news, highlighting a tension between perceived risk and habitual usage. Meanwhile, trust in information varies significantly by channel: traditional media (TV, newspapers, radio) is trusted more than digital platforms, with messaging apps and social media rated lowest.
(III) Third, and encouragingly, 75% of respondents said they feel somewhat or very confident in spotting false information. However, this self-assurance does not always translate into action: only about a third actively verify claims regularly. Most rely on Google searches or trusted news sites; very few use dedicated fact-checking websites or AI tools.
(IV) Finally, when comparing professional fact-checking with community-based models (like X’s Community Notes), respondents express clear reservations. Only 9% fully trust community systems, while nearly 40% are “very concerned” about their potential for bias. Nonetheless, nearly a third say they would be willing to contribute to such systems, suggesting potential for improvement if credibility and governance are addressed.
We expect to report the findings from these two components of the project in peer-reviewed publications in the coming months.
Impact
While this project was not designed to generate immediate real-world impact, the findings provide a strong foundation for future interventions and policy engagement across several domains.
For fact-checking organisations, our typology offers a practical framework for diagnosing why corrections are accepted or rejected. It draws attention to the importance of content design (e.g., tone, clarity, credibility cues), source transparency, and platform context — all actionable areas for improving correction strategies. Our survey results confirm that while many people recognise the value of fact-checking, concerns about bias and political manipulation persist. This suggests that fact-checkers may need to continue prioritising transparency, neutrality, and consistent communication standards to maintain trust across the political spectrum; indeed, our analysis of fact-check articles suggests that fact-checkers already focus on these areas. The analysis of direct fact-checking on social media indicates that, when trying to change the perceptions of misinformers, removing explicit judgment labels may improve how fact-checkers are perceived and increase the perceived trustworthiness of fact-checking organisations over time. A possible avenue to explore in a future grant application would be to collaborate more closely with one or more fact-checking organisations to support their work around some of these issues.
For platforms and technology companies, the findings highlight a major credibility gap between professional and community-based corrections. Although models like Community Notes show promise in terms of engagement, users remain sceptical of their accuracy and impartiality. Misinformation labelling is also commonly used by online platforms, but our findings show that it may be too confrontational to be effective. In this context, approaches that allow users to engage with corrections without feeling explicitly condemned should be pursued. There is therefore an opportunity for platforms to develop hybrid models that combine professional verification and community input under clear governance structures that favour community dialogue around misinformation. Support for community dialogue could also be integrated into the way fact-checkers present their fact-checks to increase trust in fact-checkers beyond the current focus on transparency and clarity.
For policymakers and educators, the widespread support for media literacy (with over 80% backing curricular inclusion) signals public appetite for more structured education on navigating misinformation. This reinforces the case for embedding critical media skills at multiple educational levels. Improving media literacy is particularly important given the observed distrust in fact-checking organisations and the potential shift towards more conversational and non-judgmental corrections, in which the final judgment about misinformation shifts to fact-check readers rather than fact-checking organisations.
Future work
The outputs of this pilot project offer a strong basis for academic publication and the development of future funding proposals.
Publication Plans: We have begun work to submit at least one peer-reviewed article drawing primarily on the findings of the national survey. The article will focus on public attitudes toward fact-checking, levels of trust in correction mechanisms, and the perceived legitimacy of professional versus community-based models. We are currently preparing a draft manuscript, with a target submission date of November 2025, to coincide with relevant calls for special issues or themed sections. Our preferred outlets would be Journalism Practice or Journalism (SSCI - Q1 - Communication), where recent work on misinformation reception and media trust has gained visibility.
In parallel, we are exploring the potential for a second output, focused on the typology developed through the literature review, which could be submitted to Digital Journalism or Information, Communication & Society. This article would position the typology as a conceptual contribution to the growing literature on correction efficacy and information resilience. A submission for this piece is planned for early 2026. Finally, we are planning to publish our direct correction analysis at the International AAAI Conference on Web and Social Media (ICWSM). The submission is expected in November 2025 or January 2026.
Grant applications: Building on our human-participant research and annotations, which identified requirements and current fact-checking practices for helping users and fact-checkers combat misinformation and disinformation, we plan to apply for the UKRI Cross-Research Council Responsive Mode Pilot Scheme to further develop and apply our findings. We are also monitoring other funding calls that could support and extend our work, such as CRANE and SALIENT.
Work in progress: At present, we are conducting an in-depth analysis of the national survey dataset (N = 1,150), which remains ongoing beyond the initial project period. Two strands of analysis are being prioritised. First, audience expectations of fact-checkers. We are examining what respondents believe fact-checkers should focus on to make fact-checks more credible and useful. These expectations will be compared against the annotated dataset of fact-checking articles, allowing us to assess whether current editorial practices align with public preferences or whether gaps exist that could inform improvements in style, tone, or transparency.
Second, political orientation and attitudes to fact-checking. We are investigating how political leaning, political interest, and political participation correlate with a range of survey measures, including trust in fact-checking, perceptions of bias, willingness to verify information, and responses to different correction formats. This analysis will help clarify whether scepticism toward fact-checking is evenly distributed across the population or concentrated among specific political groups, providing insights into how fact-checking strategies might be tailored to different audience segments. Findings from both strands will feed into forthcoming publications and future grant applications, ensuring that the survey data continues to generate new insights beyond the pilot phase.
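As a sketch of this second strand, the example below computes rank correlations between political leaning and attitude measures, which suits ordinal survey scales; the column names, scales and toy values are assumptions for illustration, not the survey's actual variables or results.

```python
# Illustrative rank-correlation analysis; variables and values are hypothetical.
import pandas as pd
from scipy import stats

# `survey` stands in for the N = 1,150 dataset.
survey = pd.DataFrame({
    "political_leaning": [1, 4, 7, 3, 5],        # e.g. 1 = left, 7 = right
    "trust_in_fact_checking": [6, 4, 2, 5, 3],   # 1-7 scale
    "perceived_bias": [2, 4, 6, 3, 5],           # 1-7 scale
})

for outcome in ["trust_in_fact_checking", "perceived_bias"]:
    rho, p = stats.spearmanr(survey["political_leaning"], survey[outcome])
    print(f"political_leaning vs {outcome}: rho={rho:.2f}, p={p:.3f}")
```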