
Sandpit 4: Living in an Inauthentic World

The OED defines authentic as “in accordance with fact or stating the truth, and thus worthy of acceptance or belief; of established credit; able to be relied on; truthful, accurate”. 

Emerging digital technologies are making inauthenticity more prevalent and harder to spot. Generative AI has already democratised the ability to create deepfake photos and videos, and AI-generated text, at scale and with increasing believability. As 3D printers improve and fall in price, anyone may be able to produce a perfect counterfeit at home (why buy the genuine article when you can 3D print a precise copy?). Increasingly sophisticated autonomous bots masquerading as real people roam the internet. 2D and 3D avatars (either completely fake or doppelgängers of real people) that precisely mimic human gestures, facial expressions and speech are on the horizon.

Sources of inauthenticity are often citizens or organisations producing creative or entertaining content. But inauthentic content can also be part of deliberate attempts by some actors to deceive or manipulate others. And inauthentic data, text, images or sounds can be unintentional by-products of generative technologies (e.g., ‘hallucinated’ text). At the same time, beliefs (true or false) about the capabilities of new technologies can lead people to dismiss authentic content as inauthentic (the ‘liar’s dividend’).

Proposed measures to limit the harmful effects of a flood of inauthentic content are social (e.g., digital literacy and awareness campaigns, or eradicating anonymity online), legal/regulatory (e.g., the proposed EU AI Act), commercial (e.g., Meta requiring political advertisers to disclose when deepfakes are used) and technical (e.g., digital watermarking and provenance disclosure, sketched below). The degree to which these will be successful – and, indeed, what ‘success’ might look like – is up for debate.
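To make the technical category a little more concrete, the following is a minimal, illustrative Python sketch of provenance disclosure: a creator binds their identity and tool to a hash of the content in a signed record, and anyone can later check that the content and record still match. The function names, the SECRET_KEY and the shared-secret HMAC scheme are all assumptions made for brevity; real provenance standards (such as C2PA content credentials) use public-key certificates rather than a shared secret.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret; real systems use public-key signatures, not HMAC.
SECRET_KEY = b"replace-with-a-real-key"

def attach_provenance(content: bytes, creator: str, tool: str) -> dict:
    """Create a provenance record binding creator and tool to the content hash."""
    record = {
        "creator": creator,
        "tool": tool,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the record is untampered and still describes this content."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, record["signature"])
        and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    )

if __name__ == "__main__":
    image_bytes = b"...raw image data..."
    record = attach_provenance(image_bytes, creator="Alice", tool="CameraApp 2.1")
    print(verify_provenance(image_bytes, record))  # True: content and record match
    print(verify_provenance(b"tampered", record))  # False: content no longer matches
```

The key design point this sketch illustrates is that provenance travels with the content as verifiable metadata: altering either the content or the record breaks the check, which is what makes such disclosure schemes harder to spoof than a bare label.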

Our fourth sandpit explored the broad landscape of this issue, its ramifications and remedies. 

Fact Checked - Understanding the Factors Behind Direct Fact-Check Rejection

Led by Grégoire Burel (The Open University), Irini Katsirea (University of Sheffield), Dani Madrid-Morales (University of Sheffield). 

FinFraudSIM: Financial Fraud Simulative Analytic Research Platform 

Led by Lena Podoletz (Lancaster University), Edward Apeh (Bournemouth University), Xiaochun Cheng (Swansea University), Mathieu Chollet (University of Glasgow), Peter Winter (University of Bristol), Yongyu Zeng (Lancaster University). 

MICHA - Misinformation Intervention Countermeasure for Health Advice: Benevolent Bots for Combatting Misinformation in Online Health Communities 

Led by Philip Fei Wu (Royal Holloway, University of London), Evronia Azer (Coventry University), Samantha Clarke (Coventry University), Frédéric Tomas (Tilburg University), Gilad Rosner (Internet of Things Privacy Forum).

UNMASKED: The Theatre of Inauthenticity 

Led by Luca Viganò (King’s College London), Alan Chamberlain (University of Nottingham), Maria Limniou (University of Liverpool), Pejman Saeghe (University of Strathclyde), Mark Springett (Middlesex University). 
