
Can We Trust AI with Our Work? Exploring Trust, Privacy & Security with AI Technology

04 September 2025
Principal Investigator/Organiser: JournoTECH
Supporting Partner: N/A
Event attendees: 26 across UK cities, Germany, and Luxembourg

Summary

The purpose of our event was to bring together journalists, researchers, technologists, and civil society advocates to discuss a key question: Can we trust AI with our work? We aimed to explore how artificial intelligence can be used responsibly while protecting privacy, ensuring security, and upholding human rights. The event created a space for learning, sharing, and building connections between professionals who use or are affected by AI in their daily work.


Highlights

On 4 September 2025, JournoTECH hosted its in-person AI event in Stratford, London, bringing together 26 journalists, researchers, technologists, and rights advocates to explore a central question: Can we trust AI with our work? Sponsored by SPRITE+, the event created an inclusive platform where participants shared insights on privacy, trust, and security in the fast-changing world of artificial intelligence.


From the opening keynote to the final panel, the conversations were rich and impactful. Highlights included a keynote from our founder, Elfredah Kevin-Alerechi, who spoke about the impact of AI on research and journalism, exploring cases of both benefit and misuse; an introduction by SPRITE+ representative Titania Dia to the network's mission and its support and sponsorship opportunities for collaboration; and thought-provoking sessions such as Beyond the Padlock: End-to-End Encryption in Modern-Day Applications by Timileyin. Participants also engaged deeply with the panel Rights, Consent, and Trust in AI: From Compliance to Human Rights, where experts from journalism, the Open Rights Group, and the business sector explored how AI systems affect representation, diversity, and equity.


What made the event truly special was the breakout group interaction, with its open exchange of ideas and networking. Journalists spoke openly about the challenges of AI in newsrooms, while researchers and technologists offered practical solutions for data security and ethical use. Many participants emphasised that the event gave them a clearer understanding of both the risks and opportunities of AI, as well as practical tools they could take back to their work.


JournoTECH also showcased NewsAssist AI, its home-grown platform designed to help professionals meet deadlines through transcription, summarisation, and secure document analysis. Feedback from attendees confirmed the need for such tools, especially in contexts where time, trust, speed, and accuracy matter most.


The event closed with a renewed commitment to responsible AI. The "good news" is clear: when diverse voices come together, we can shape AI that strengthens, rather than weakens, trust, privacy, and human rights.


Outcomes/outputs

Knowledge Exchange: Brought together 26 participants from journalism, academia, civil society, and technology to discuss trust, privacy, and security in AI.


Cross-sector Dialogue: Strengthened organic interaction and connections between journalists, researchers, and technologists, with new collaborations during and after the event.


Practical Insights: Shared best practices on data encryption, responsible AI use, and rights-based approaches to AI governance.


Showcasing Innovation: Demonstrated NewsAssist AI, an AI platform supporting journalists and researchers with transcription, summarisation, and document analysis.


Participant Testimonials: Collected video interviews and written feedback highlighting the event’s value in building understanding and skills.


Dissemination Materials: Produced a post-event digital flipbook report with embedded photos, quotes, and videos to capture and share the event's outcomes.


Community Building: Expanded the JournoTECH network, adding new members to our community of practice and strengthening links with SPRITE+.


Awareness Raising: Increased visibility of ethical AI issues and the role of marginalised voices in shaping responsible AI.


Finally, we created a flipbook for the event, which we believe gives more detail about its impact. You can view it here.
