
Sandpit 4: Living in an Inauthentic World

In June 2024, SPRITE+ hosted its fourth sandpit, focused on “Living in an Inauthentic World”. The OED defines authentic as “in accordance with fact or stating the truth, and thus worthy of acceptance or belief; of established credit; able to be relied on; truthful, accurate”.

Emerging digital technologies are making inauthenticity more prevalent and harder to spot. Generative AI has already democratised the ability to create deepfake photos and videos and AI-generated text, at scale and with increasing believability. As 3D printers improve and fall in price, anyone may soon be able to produce a perfect counterfeit at home (why buy the genuine article when you can 3D print a precise copy?). Increasingly sophisticated autonomous bots masquerading as real people roam the internet. 2D and 3D avatars (either completely fake or doppelgängers of real people) that precisely mimic human gestures, facial expressions and speech are on the horizon.

Sources of inauthenticity are often citizens or organisations producing creative or entertaining content. But inauthentic content can also be part of deliberate attempts by some actors to deceive or manipulate others. And inauthentic data, text, images or sounds can be unintentional by-products of generative technologies (e.g., ‘hallucinated’ text). At the same time, beliefs (true or false) about the capabilities of new technologies can cause people to label authentic content as inauthentic (the ‘liar’s dividend’).

Proposed measures to limit the harmful effects of a flood of inauthentic content are social (e.g., digital literacy and awareness campaigns, eradicating anonymity online), legal/regulatory (e.g., the proposed EU AI Act), commercial (e.g., Meta requires political advertisers to disclose when deepfakes are used) and technical (e.g., digital watermarking, provenance disclosure). The degree to which these will be successful – and, indeed, what ‘success’ might look like – is up for debate.

Our fourth sandpit explored the broad landscape of this issue, its ramifications and possible remedies. Four projects were funded:

Fact Checked - Understanding the Factors Behind Direct Fact-Check Rejection

The Fact-Checked project investigated how editorial style, publishing approaches, and communication methods may influence the acceptance, rejection and understanding of misinformation corrections.

FinFraudSIM: Financial Fraud Simulative Analytic Research Platform

This project utilised the criminological approach of Crime Script Analysis (CSA), together with AI- and ML-based technologies, to design a platform that can serve as a decision-support tool for professionals working to prevent online financial fraud.

MICHA - Misinformation Intervention Countermeasure for Health Advice: Benevolent Bots for Combatting Misinformation in Online Health Communities

This project aimed to address the spread of health misinformation in online forums through benevolent bots that counter misleading advice.

UNMASKED: The Theatre of Inauthenticity

The UNMASKED project aimed to combine devised theatrical performances with scientific methods from computer science and psychology, providing a novel approach to tackling inauthenticity in cyberspace.
