
Interrogating AI in Legal Practice: Security, Privacy, and Accountability for Legal Professionals

03-04 March 2026
Principal Investigator/Organiser: JournoTECH
Supporting partners: N/A
Event attendees: 30

Event summary

The JournoTECH two-day online event on Interrogating AI in Legal Practice successfully convened legal professionals from various fields across the globe, with a significant majority joining from developing countries. Funded by SPRITE+, the training focused on how legal practitioners can use AI tools safely and securely while navigating the privacy concerns inherent in these platforms and embracing their professional responsibilities.

The primary aim of this training was to ensure that legal professionals who handle sensitive documents understand the benefits and potential risks associated with AI. By gaining this knowledge, attendees are better prepared to mitigate issues and provide high-quality service to their clients.

Participants explored several practical AI tools available to the legal community, such as NewsAssist AI, and learned the critical importance of establishing AI governance structures. They also gained insights into how to implement specific AI roles within their organizations to maintain oversight.

Attendees expressed great excitement about the sessions, as many were receiving formal training on these topics for the first time.


Highlights

This training bridged a significant knowledge gap within the global legal community. During the sessions, many participants said that legal professionals often feel side-lined in the AI revolution because most specialized training programs do not target the unique ethical and procedural needs of lawyers. For the majority of our attendees, this training represented their first opportunity to engage with formal education tailored specifically to the legal profession.

We selected 34 practitioners with a wide range of experience. Our participants included distinguished Senior Advocates and Lead Counsel with over 20 years of experience, alongside mid-career associates, junior solicitors, and promising law students. This intergenerational group represented various sectors including private practice, non-profit organizations, human rights advocacy, and government agencies.

The training achieved a truly international reach with participants joining from the UK, Nigeria, Ghana, Sierra Leone, and Zambia. At JournoTECH, we believe this cross-border participation is vital for shaping the trusted and ethical use of AI technologies on a global scale. As this was the first time these individuals had attended a JournoTECH training, there was palpable excitement regarding our future plans to support diverse professional groups.

The core of the training equipped these professionals with the skills to interrogate AI platforms and mitigate risks effectively. A highlight of the session was the practical demonstration of NewsAssist AI. This specialized platform is designed specifically for law firms and provides essential features such as deposition transcription, precedent discovery, and drafting assistance for pleadings. By the end of the training, participants were not only more confident in using these tools but also prepared to lead the digital transformation within their respective jurisdictions.

Trainers:

  • Elfredah Kevin Alerechi, the Founder of JournoTECH and lead trainer, introduced various AI tools for legal professionals to explore. Her session highlighted the value of these tools while also addressing specific challenges and practical methods for mitigating the risks associated with AI.

  • Professor Lizzie Coles-Kemp, the Head of the Information Security Department at Royal Holloway, University of London, presented an innovative framework for Digital Responsibility. She encouraged participants to move beyond a narrow legal view of liability and toward a broader sociotechnical perspective of care and agency. Her session explored how AI creates responsibility gaps known as lacunae and explained why legal professionals must act as mediators between technology, the state, and the grassroots community.

  • Rebecca Bird, the Founder of Bixbe Tech, provided a rigorous technical guide to Data Privacy and Integrated Security within the AI lifecycle. Her session moved beyond surface-level compliance to explain how data travels through APIs, models, and third-party vendors. She challenged the common assumption that paid AI tools are inherently safe and advocated for a Zero Trust approach to AI integration.

  • Soribel Feliz, the CEO of Personal Algorithms, presented a comprehensive roadmap for AI Governance for Legal Practice. Her presentation covered the identification of Shadow AI risks and the implementation of a twelve-week governance framework. She emphasized that AI governance is no longer optional but is instead a fundamental requirement for professional competence and ethical compliance in a modern law firm.


Outcomes/outputs

Participants learnt:

  • Various AI tools for legal professionals: Attendees explored a range of specialized AI tools designed to enhance legal workflows and improve overall efficiency.

  • Best practices and challenges: The training covered the most effective ways to use various AI tools, including free versions like Gemini and ChatGPT, as well as specialized platforms like NewsAssist AI. Participants gained a deep understanding of the potential challenges that may arise and learned practical strategies to mitigate issues related to AI in a legal context.

  • AI detection and accuracy: Attendees learned about tools used to detect AI-generated content. The session emphasized the importance of caution and professional scepticism because these detection tools are not always accurate and should not replace human judgment.

  • The Shift in Responsibility: Participants learned that AI creates "responsibility gaps" by removing human decision-makers. To bridge these, lawyers must move beyond simple liability and use the 5-Action Framework (Absorb, Allocate, Discharge, Avoid, or Refuse) to ensure every automated task has clear human oversight.

  • Response-able Literacy: It is not enough to have an AI policy; a firm must be "response-able." Participants learned that ethical AI adoption requires digital civics—building a community consensus on what constitutes "good use" and ensuring tools are accessible to all, regardless of ability or language.

  • The "Traffic Light" Governance Model: Participants learned to categorize AI tasks as Green (high-value, low-risk work such as research), Yellow (proceed with caution, such as strategy), and Red (never input client PII into free tools). This allows firms to innovate while maintaining "ironclad" data protection; a minimal sketch of this triage follows this list.

  • Structured Accountability: Law firms learned the necessity of defining clear "swim lanes" for AI ownership. Participants discovered that without an Executive Sponsor, a Policy Owner, and a Technical Evaluator, accountability becomes diffused, leading to "Shadow AI" and malpractice risks.

  • The Zero-Trust Data Lifecycle: Participants learned that data privacy is a lifecycle, not a one-time setting. By treating AI as a "stranger in the room," they learned to manage data from intake to deletion, ensuring that sensitive information never leaks through hidden API vulnerabilities or model training.

  • Strategic Anonymization: To protect client confidentiality, participants learned practical "decoy" methods for prompting. By stripping PII and anonymizing case details before inputting them into AI, lawyers can leverage the technology’s power without creating a traceable "confidentiality bridge."
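
To make the traffic-light model concrete, the short Python sketch below shows how a firm might triage a task before it reaches an AI tool. This is a minimal illustration assembled from the session themes, not a tool shown at the training: the task categories, the contains_pii helper, and its patterns are all illustrative assumptions.

    # Minimal sketch of a "traffic light" triage for AI tasks.
    # Task categories and the PII screen are illustrative assumptions.
    import re

    PII_PATTERNS = [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # e.g. SSN-style ID numbers
        re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email addresses
    ]

    GREEN_TASKS = {"legal research", "summarise public case law"}
    YELLOW_TASKS = {"draft strategy outline", "argument brainstorming"}

    def contains_pii(text: str) -> bool:
        """Very rough PII screen; real practice needs a vetted redaction tool."""
        return any(p.search(text) for p in PII_PATTERNS)

    def classify(task: str, prompt_text: str) -> str:
        if contains_pii(prompt_text):
            return "RED: never send client PII to a free tool"
        if task in GREEN_TASKS:
            return "GREEN: high value, low risk - proceed"
        if task in YELLOW_TASKS:
            return "YELLOW: proceed with caution and senior sign-off"
        return "YELLOW: unclassified task - treat with caution"

    print(classify("legal research", "Find precedents on data breach liability."))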


Findings:

Before the sessions, participants recognized AI's potential but lacked a structured framework for managing its risks. Here is what we found regarding their initial state:

  • Recognition of Efficiency Gaps: Attendees were acutely aware of the "heavy lifting" involved in traditional legal work—transcription, precedent discovery, and document analysis. They came seeking a way to reduce human error and time-intensive manual labor.

  • Awareness of "Shadow AI" Risks: While many were already curious about tools like ChatGPT, there was a pre-existing (though unrefined) anxiety regarding where the data goes and who is actually "to blame" if an AI makes a mistake in a legal filing.

  • Understanding of Prompting: Most participants understood that AI requires "instructions," but they had not yet realized that Prompt Engineering is a specialized skill that determines the exactness and legal quality of the output.

  • General Ethics vs. Specific Accountability: There was a broad understanding that ethics matter, but a lack of clarity on the multi-directional flow of responsibility—specifically how responsibility moves between the developer, the firm, and the individual practitioner.


Outcomes and Impact

The training moved participants from "cautious curiosity" to "active governance." The following points outline the tangible impact on their future legal practice:


Strategic Governance & Policy Implementation

Participants are not just returning to work to use tools; they are returning to build frameworks. They reported a clear intent to:

  • Establish formal AI policy frameworks within their firms.

  • Assign specific governance roles to ensure accountability isn't diffused.

  • Implement 12-week roadmaps for responsible AI adoption.


Advanced Data Privacy & "Input Caution"

The training created a significant behavioral shift in how lawyers handle client data. Attendees now commit to:

  • The "Stranger" Rule: Avoiding the input of PII (Personally Identifiable Information) or privileged strategy into free or publicly accessible AI platforms; a minimal redaction sketch follows this list.

  • The Data Life Cycle: Monitoring the "Data Privacy Life Cycle," including checking vendor licenses, data-sharing settings, and terms and conditions before integration.

  • Third-Party Vigilance: Paying closer attention to the "hidden" risks in third-party and vendor applications associated with their AI tools.
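
As a minimal illustration of the "Stranger" Rule and the anonymization methods discussed earlier, the Python sketch below replaces known party names, case references, and email addresses with decoy tokens before a prompt leaves the firm. The patterns, placeholder tokens, and assumed case-number format are illustrative only; a real firm would rely on a vetted redaction tool.

    # Minimal sketch: strip identifying details from a prompt before it
    # reaches an external AI platform. Patterns, placeholders, and the
    # case-number format are illustrative assumptions.
    import re

    def anonymise(prompt: str, client_names: list[str]) -> str:
        redacted = prompt
        # Replace each known party name with a stable decoy token.
        for i, name in enumerate(client_names, start=1):
            redacted = redacted.replace(name, f"[PARTY_{i}]")
        # Mask case references of an assumed form such as NG/0142/2026.
        redacted = re.sub(r"\b[A-Z]{2}/\d{3,5}/\d{4}\b", "[CASE_REF]", redacted)
        # Mask email addresses.
        redacted = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", redacted)
        return redacted

    original = "Advise on Ada Obi v. Acme Ltd, case NG/0142/2026, contact ada@example.com"
    print(anonymise(original, ["Ada Obi", "Acme Ltd"]))
    # -> Advise on [PARTY_1] v. [PARTY_2], case [CASE_REF], contact [EMAIL]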


The "Verify by Default" Mindset

The most critical impact is the shift in professional judgment. Participants now recognize that:

  • Ultimate Accountability: The legal practitioner, not the AI, holds the ultimate responsibility for every citation and pleading generated.

  • Risk Assessment: They plan to perform "quick risk assessments" before every AI interaction; a sketch of such a checklist follows this list.

  • Painstaking Review: Every attendee expressed a commitment to "painstakingly review" and verify AI-generated output to catch hallucinations and prevent malpractice.
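
To suggest what a "quick risk assessment" might look like in practice, here is a short checklist sketch in Python. The questions are distilled from the themes above rather than taken from a checklist distributed at the event.

    # Minimal sketch of a pre-interaction risk checklist. The items are
    # assumptions distilled from the training themes, not an official list.
    CHECKLIST = [
        "Prompt contains client PII or privileged strategy",
        "Tool is not approved under the firm's AI policy",
        "Vendor terms allow our data to be used for model training",
        "No named human reviewer is assigned to verify the output",
    ]

    def quick_risk_assessment(flags: list[bool]) -> str:
        """flags[i] is True when checklist item i applies, i.e. raises a risk."""
        raised = [item for item, risky in zip(CHECKLIST, flags) if risky]
        if raised:
            return "STOP - resolve before using AI:\n- " + "\n- ".join(raised)
        return "PROCEED - and still verify the output before filing."

    print(quick_risk_assessment([False, False, True, False]))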


Inclusion & Social Capacity Building

Beyond efficiency, the training expanded the definition of professional responsibility to include:

  • Disability Inclusion: A new commitment to ensuring AI tools and legal services are accessible to all.

  • Community Capacity: A desire to uphold digital responsibility not just within the firm, but as a standard in the broader legal community.
