Best Practices for Trust in the Use of Generative AI Tools (prompt-summary, text-to-video, text-to-image)
3 January - 16 July 2025

Principal Investigator: Dr. Huma Shah, Coventry University
Co-Investigators: N/A
Event attendees: Over 135
Project overview
Summary:
A series of activities in a Sprite+ Innovation Fora, tailored to expand Sprite+ networks, explored what the general public understand about artificial intelligence (AI) and how generative AI (GenAI) tools (text summarisers; audio, image and video generators) are being used in the public and private sectors.
Driven by questions gathered through a GDPR-compliant survey before a free-to-attend, hybrid, international public panel, the event brought together interested stakeholders in person and online on Thursday 22 May 2025 at Coventry University, in a dialogue discussing ‘Do you know your AI from your Generative AI?’.
The panel event highlighted that the public’s awakening to AI was increasingly through GenAI tools such as Google’s Gemini or OpenAI’s ChatGPT. This came amidst growing awareness that our personal data flows to the top four or five big tech companies, with concerns ranging from possible infringements of online privacy and copyright, to identity and authenticity being compromised by GenAI’s capability to produce students’ assignments, to an erosion of trust in using these nascent AI tools as personal assistants for everyday tasks. A second, post-panel survey demonstrated an eagerness for more public events about where AI is headed.
The final event hosted an in-person roundtable focused on the implications for education of the prevalence of GenAI applications available to students, and on how to support students’ critical thinking. This Sprite+ Innovation Fora concluded successfully, showcasing Sprite+ as an excellent platform, with a view to securing further grants to organise more events with a diverse cohort of stakeholders and to better understand how best to manage our lives in a future of AI-driven products and services affecting human lives and work.
Further detail:
Surveys designed and distributed before (Survey 1: pre-panel) and after (Survey 2: post-panel) a free-to-attend public hybrid panel event, held in Coventry University’s Frank Whittle engineering building, gathered questions from members of the general public and provided insight into the public’s confusion about what AI is and the difference between AI and Generative AI. The public’s concerns about trust in GenAI output, data security and privacy when using GenAI, and teachers navigating authenticity in students’ learning in the age of AI assistants such as ChatGPT or Copilot drove the organised events, which facilitated ongoing safe conversational spaces for dialogue between all levels of stakeholders: from individuals who have never used and do not know how to use GenAI, to individuals using GenAI tools efficiently in their day-to-day tasks, to AI researchers and other scientists.
Draft questions for Surveys 1 and 2, initially designed in a Word document, were sent first to Professor Mark Elliot, Director of the Sprite+ project led by Manchester University. Following his helpful feedback to better align the questions with the Innovation Fora objectives and to use an online survey, the questions were improved and the revised surveys were built in Microsoft Forms. These were then sent to two Coventry University colleagues for review and further feedback; Dr. Manizheh Montazerian’s and Stephanie Toman’s recommendations were implemented in both surveys’ final versions. The online link to Survey 1 was distributed widely across a variety of networks, yielding numerous questions for the 22 May panel (see page 10: Output folder, Panel questions .pdf).
The date chosen for the panel was Thursday 22 May 2025, the week before the UK school half-term, thereby providing an opportunity for schools and colleges to participate, in person or online, if they could. Teachers from a local school (Myton) and a college in Coventry attended the hybrid panel event in person (Jono Lowe) and online (various).
Invitations to join the hybrid international panel included individuals at Google DeepMind and at ‘Women in AI’. Though no response was received from these organisations, other individuals, forming a multidisciplinary group, accepted the invitation to participate as panel members in person (if in the UK) or online (international). Pre-panel meetings were held online to introduce the Panellists to each other and to discuss the running order of the event. The panel were advised that the hybrid event would run as a conversation between the panel and attendees, hence no PowerPoint presentations were required from the Panellists.
As Survey 1 questions were received, they were categorised and distributed to the invited panellists so that the members could capture a flavour of how the public viewed AI and GenAI. In an online meeting with the Panellists before the 22 May event, Dr. Luigi Ceccaroni, an AI and citizen-science scientist at Earthwatch, suggested including an AI as a panellist. Microsoft’s Copilot was chosen since Coventry University held licences for the Microsoft Office suite of applications.
The list below presents the final invited hybrid panellists. After a welcome from Professor Ian Marshall and Dr. Philip Gould, Sprite+ Director Professor Mark Elliot introduced the Sprite+ project led by Manchester University. The international hybrid panel, chaired by Dr. Rebecca Butler (Dean of Coventry University’s College of Engineering, Environment and Science), opened with a Survey 1 question asking how “businesses prepare their workforce for the automation and AI driven tools?”
Welcome to Coventry University 11:15 am - 11:30 am
Professor Ian Marshall, Deputy Vice Chancellor, Chief Operating Officer and CEO (CUAS)
Dr. Phillip Gould, Head of the School of Science
Panel session 11:30am - 1:00pm
Chair: Dr. Becky Butler, Dean of College of Engineering, Environment and Science
Panel members
An AI - Copilot tool (Microsoft suite)
Lindsey Birnsteel - Privacy Champion & Citizen Scientist, parent perspective (ONLINE)
Dr. Luigi Ceccaroni - AI and Citizen-Science Scientist, Earthwatch, Oxford
Mark Elliot - Professor of Data Science, Manchester University
Ms. Oluwadamilola Kola-Adejumo - Data & Insight Graduate, Arqiva, London
Dr. Manizheh Montazerian - Software Developer and Generative AI Output Investigator
Dr. Roxana Oltean-Cardos - Clinical Psychologist, Romania (ONLINE)
Salman Nazir - Professor in Training and Assessment, University of South-Eastern Norway
Mrs. Juliana Samson - Final-year PhD candidate, ‘AI in healthcare education’
Aleksandr Tiulkanov - AI and Data Governance Advisor, Contributor to CEN/CENELEC standards on AI (ONLINE)
Jordi Vallverdú - Professor of Philosophy and History
In-person Panellist Salman Nazir began the discussion by explaining, from his perspective as a professor in training and assessment, that although we should welcome and integrate AI, since there were some positive signs around its applications, there were also many challenges, such as ‘trust in AI’. He added that, for a driverless car, say, “which has a higher level of autonomy”, how much information should we impart to, or need to have about, such an AI? Trust and transparency were discussed among the panel and attendees, as well as ethical issues including intellectual property.
AI in healthcare was also discussed. Final-year PhD student Juliana Samson explained that in healthcare “we don't treat diagnosis, we don't treat scans, we treat a person who carries all those things”. Juliana stressed that there are “massive efficiency savings that we'd be mad not to consider”. Despite plenty of barriers, which will take time to resolve given the different intertwining systems that need to be implemented throughout healthcare, we would need to increase the “digital literacy of the population, and in the healthcare practitioners as well.”
Another panel conversation centred on AI models, which were perceived as ‘black boxes’, and on trust issues in an AI’s performance and its results, for example in facial recognition systems for crowd detection, algorithms for banking, or social media moderation. It was pointed out that we do not yet have an artificial intellect as smart as a two-year-old child, who can convey their hunger, joy and pain in short grammatical sentences without being taught linguistics, the science of language. AI has not experienced enjoying strawberries and ice cream on a hot summer’s day. However, it was added that the ChatGPT tool was useful for finding recipes and designing work-out regimes, and that “Artificial intelligence is a form of human intelligence because somehow we have created it.”
Copyright, “fair use”, and the EU AI Act were discussed in relation to how AI models should be regulated. Online panellist Aleksandr Tiulkanov explained that AIs were affecting the livelihoods of some people whose created works had been used without consent to train AI models. One Panellist, Dr. Manizheh Montazerian, a software developer, exclaimed that she was a GenAI fan because there were positives: everyone could access a GenAI, since tools like Microsoft’s Copilot were free, and though there were negatives, these early tools were improving. She stressed that, just as calculators did not kill mathematics, GenAI would not kill critical thinking, because, she felt, you needed to think critically to use these tools. If we did not want people to cheat, it was then a case of ‘How do we teach them [learners] to use it to do even better?’ She pointed out that she had nearly 3,000 students in a previous semester and that adequate plagiarism detectors were needed.
The panel session ended on the topic of mental health and online engagement. An in-person audience member, previously a school teacher, reminded attendees of the effect on young people’s mental health of addictive algorithms capturing their attention on social media and other online applications. Online Panellist Dr. Roxana Oltean-Cardos remarked that there were “some really good initiatives in the mental health field by integrating these AI in diagnostic and also intervention and prevention.”
To close the panel, attendees were given a link to an online Mentimeter poll to choose the last Survey 1 question to be put to Microsoft’s Copilot. Ending with a GenAI panellist served to demonstrate how such a tool responds to a query, or prompt. The most popular question among the audience completing the poll was:
“What new skills should students and professionals develop to remain relevant in an AI-driven world?”.
A summary of Copilot’s response, shown on the screen in the venue, included the following text output: