From RoboCop to AI Cop: The importance of combining AI regulation with policing

By Glyn Heath, a director of AI company Bayezian.


The upcoming EU AI Act, aimed at regulating artificial intelligence, is a significant step forward. Although it’s unlikely to be perfect – with the Ada Lovelace Institute advising that there are ‘significant gaps’ in the proposed Act – it provides an opportunity to build a framework that sets out clear rules. 

However, without effective policing mechanisms, regulation is likely to hold little value.

Take Twitter, for example. The platform already struggles to police anonymous trolls who hound strangers and celebrities alike. Worryingly, AI can easily create pseudonymous Twitter accounts, making it exceedingly difficult to trace the origin of harmful content. If social media platforms are reluctant to disclose users' identities, the situation will only worsen once the content is no longer generated by humans at all. AI doesn't sleep, so the number of bots, and the volume of harmful content they produce, could grow exponentially in a very short period.

Another example is fraud. Whether it's targeting enterprises or duping unsuspecting individuals, AI can cause significant harm in the hands of scammers. There are already a number of concerning instances of AI being used to trick people out of money, including deepfake technology used to clone faces and voices.

These are just two examples of why proper policing is imperative. While Big Tech companies can play a role in this process, they have a questionable track record of prioritising public interest over their own commercial objectives. Many are pushing back against the Online Safety Bill, which is intended to protect users from algorithms that push harmful content. Relying solely on these companies for enforcement simply won't yield the desired outcomes.

The EU AI Act

The EU AI Act can (and should) play a key role in the regulation and policing of AI. However, many organisations are worried that this will come at the expense of the fast-paced innovation we're currently seeing. Indeed, over 150 executives from across Europe have signed an open letter warning that overly tough regulation could stifle innovation.

There's certainly a delicate balance for regulators to strike. Clear and comprehensive laws are important, but so is the (safe) advancement of AI, which was the focus of the UN's recent 'AI for Good' summit in Geneva.

So, how can regulation be enforced and maintained properly?

Policing tech

Policing AI regulation can take a number of forms. Independent regulatory authorities will spearhead the initiatives, but they need to be supported by robust audits, whistleblower protections, and continuous re-evaluation of the laws and regulations in place. Fines will deter misbehaviour to some extent, but again, if those fines aren't enforced, organisations are unlikely to take them seriously.

The irony is that AI itself will have to police AI. Approved AI systems that act as impartial overseers will need to be developed. Just as a judge in a courtroom could rely on the assistance of AI co-pilots, AI itself will require a "co-pilot" to navigate its behaviour and decisions. However intelligent, humans don't know everything, and "research is rapidly developing and it's impossible or impractical for a lawmaker to stay on top of that".
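To make the "co-pilot" idea concrete, here is a minimal sketch in Python of one AI system screening another's output before release. Every name in it is hypothetical: a real overseer would be an approved, independently audited moderation model, not the keyword-based placeholder used here.

```python
# A minimal sketch of one AI system policing another's output before release.
# All function names are hypothetical; a real overseer would be an approved,
# independently audited moderation model, not a keyword-based placeholder.

def untrusted_generator(prompt: str) -> str:
    """Stand-in for any generative model whose output must be policed."""
    return f"Generated reply to: {prompt}"

def overseer_score(text: str) -> float:
    """Placeholder harm score in [0, 1]; a real overseer would be a trained classifier."""
    flagged_terms = ("scam", "wire money", "clone this voice")
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, hits / len(flagged_terms))

def policed_generate(prompt: str, threshold: float = 0.3) -> str:
    """Release output only if the overseer's harm score stays below the threshold."""
    draft = untrusted_generator(prompt)
    if overseer_score(draft) >= threshold:
        return "[blocked: flagged for human review]"
    return draft

print(policed_generate("Tell me about AI regulation"))
```

The design point is the separation of duties: the generator never decides for itself whether its output is safe, and anything the overseer flags is escalated to a human rather than silently discarded.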

As AI evolves and becomes more sophisticated, it will need to self-regulate to ensure ethical and responsible use. This is difficult – we all know how easily AI systems can unintentionally learn biases from their training data. Because of this, there's considerable pressure to develop sophisticated self-regulation that upholds ethical standards and safeguards.
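As one illustration of what an automated self-check might look like, the sketch below computes a simple demographic parity gap over a model's decisions. The data, group labels, and tolerance are invented for the example; real bias audits draw on far richer metrics and data.

```python
# A minimal sketch of an automated bias audit, assuming a binary decision model
# and a record of each input's demographic group (both invented for illustration).

from collections import defaultdict

def demographic_parity_gap(decisions: list[int], groups: list[str]) -> float:
    """Largest difference in positive-decision rate between any two groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical audit data: decisions from some model, grouped by demographic.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["group_a"] * 4 + ["group_b"] * 4

gap = demographic_parity_gap(decisions, groups)
print(f"Parity gap: {gap:.2f}")  # flag the model for review if this exceeds an agreed tolerance
```

Run routinely, a check like this gives a regulator, or the system itself, a concrete trigger for escalation rather than relying on ad hoc complaints.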

The bottom line

We're yet to see how the EU AI Act plays out, and how far it goes in regulating and policing AI. Whether AI polices itself, is policed by third parties, or a mix of both, public awareness and education will be paramount. It's by holding companies and individuals accountable that we will create a world that is safe and unbiased, while still embracing innovation.

