Understanding AI Legal Risks and Brand Safety: Key Thoughts from Hampshire AI
Why even 95% AI accuracy isn't enough when legal compliance and brand reputation are on the line

👋 Hi there,
I had the opportunity to join Hampshire AI, a local networking group, setup by Lauren James at Spectrum IT. The evening featured two presentations that provided information on the legal and safety considerations every organisation should understand before implementing AI systems. I thought I’d share these with you because they are pretty essential to understand - or at least be aware of.
Legal Risks in AI: What You Need to Know
Dorothy Agnew, Legal Director at Bradfield UK Law, opened the discussion by highlighting a fundamental principle: AI risk is directly proportional to system complexity. Unlike traditional software, AI systems learn and evolve, creating unique legal challenges.
Key Legal Risk Areas
Data protection AI systems often process vast amounts of personal data, creating potential GDPR and privacy law breaches if not properly managed.
Copyright infringement High-profile cases like Getty Images and Stability AI demonstrate how AI training on copyrighted material can lead to significant legal exposure.
Negligence Claims The landmark case of Moffatt and Air Canada shows how organisations can be held liable for AI generated misinformation or errors. Travellers be weary!
Competition law breaches AI systems can inadvertently engage in anti-competitive behavior, particularly in pricing or market analysis applications.
Discrimination issues Cases like Manjang v Uber Eats highlight how AI algorithms can amplify discriminatory practices and make it worse. It shows how ethnicity can affect technology. More caution is needed here.
So what can be done?
Essential Risk Mitigation Strategies
Build in legal compliance from day 1 Don't interpret compliance as boring or an afterthought. Instead integrate legal requirements into your AI development process.
Prioritise transparency and explainability Ensure your AI decisions can be understood and justified, particularly in regulated industries.
Protect intellectual property rights Carefully audit training data and monitor outputs to avoid IP infringement issues. Get a team / person on this!
Implement rigorous testing Regular testing helps identify potential legal and ethical issues before they impact users. Keep testing!
This final point followed on well to the next talk.
Practical AI Safety Implementation
Richard Willats from Contextual AI provided practical insights into implementing safety measures for AI systems. He started by introducing the concept of RAG (Retrieval-Augmented Generation). Bear with me explaining what this means!
RAG is a technique that enhances AI responses. It combines pre-trained models with real-time access to specific, curated knowledge bases. Think of it as AI, supercharged.
Before I continue, I’d also like to point out that Richard has a creative background. He was drawn to AI and is encouragement if you are reading this and thinking you need to be technical. You don’t.
Brand and Reputational Risks
Richard outlined seven critical safety risks that organisations face:
Inappropriate content generation: AI producing offensive or unsuitable material. Yep you know the kind of examples I mean.
Compliance failures: Systems that don't adhere to industry regulations.
Legal exposure: Similar to Dorothy's points on potential litigation.
Sycophancy: AI telling users what they want to hear rather than accurate information. Not healthy!
Hallucination: AI confidently presenting false information as fact. Are you sure about that? ;-)
Brand criticism: AI inadvertently generating negative content about the organisation. Next time you use a company’s chatbot, test it by asking it to name their competitors.
Data exposure: Accidental revelation of sensitive information. Let’s not go here!
Enabling harm: AI assisting in dangerous or illegal activities.
The Reality of AI Accuracy
Even the most advanced AI models Richard tested achieved only 95% accuracy, meaning 5% of potentially problematic content still slips through automated safeguards. This statistic shows why human oversight remains essential. I hope this encourages you to to keep trying to make sense of AI as well.
Red Teaming: Testing Your 4-4-2 Defence ;-)
Richard shared his experience in "red teaming" which is simulating malicious actors to test AI system vulnerabilities. His approach includes:
Prompt databases: Building collections of challenging inputs to test system responses. Such a great idea and every AI organisation needs this.
Scalable testing: Frameworks and coding that can be repeated and deployed quickly across different systems.
Injection templates: Tools for testing prompt injection and jailbreaking vulnerabilities.
This specialised skill set is increasingly valuable, particularly for small and medium enterprises (SMEs) that lack internal AI security expertise. If you are reading this and would like to understand this further, do get in touch.
Conclusion: A Human-in-the-Loop is Critical
Both speakers emphasised the same crucial point: guardrails will never be 100% effective. Whether addressing legal compliance or safety concerns, human oversight remains critical in AI implementations. It’s not a corner worth cutting.
Summary: Moving Forward Responsibly
For me, these insights highlighted that successful AI adoption isn't just about choosing the right technology, it's about taking the time to build frameworks that address legal, ethical, and safety considerations from the ground up. It needs the buy-in from everyone.
It also highlighted that organisations must invest in proper compliance structures, rigorous testing, and most importantly, maintain meaningful human oversight throughout their AI journey.
There’s a lot to do!
Analysis
Attending this Hampshire AI event reinforced my view that AI isn't just about technology, it's about relationships and responsibility.
It also reinforced the importance of events like these and how powerful it is when people get together. I’m an extrovert and enjoy meeting people and I got to meet incredibly talented people such as Vinh-Dieu Lam from Odyssey and former tech lead at Wayve. There is a lot of AI talent in the UK and around Southampton. We need to make more of this.
Dorothy and Richard's presentations demonstrated that the organisations thriving with AI won't necessarily be those with the most advanced models, but those who implement them most thoughtfully - and creatively.
What struck me most was how both messages from Dorothy and Richard were aligned. Despite coming from different perspectives (legal and technical), both arrived at the same conclusion about human oversight being irreplaceable. This suggests we're not heading toward a world where AI operates independently, but rather one where human-AI collaboration becomes more sophisticated and deliberate. Education and skills are therefore needed in this area.
The 95% accuracy figure Richard also revealed how we need to keep AI developments in context. In many business or education contexts, 95% might seem impressive, but when applied to a social context that 5% is unacceptable and not good enough or right for society. Worringly there are still AI models where 95% is not even achieved.
It's therefore a wake up call that as AI becomes more capable, our responsibility for its governance and our perspective of it, becomes more critical not less.
AI education is essential.
Thanks for reading,
Jonathan