Mitigating risks of generative AI

We are still in the early stages of adopting generative artificial intelligence. But as the technology permeates society, mitigating its risks will become a key priority. Doing so will need to be a shared responsibility and will include a combination of:

Testing & Controls

As we assign greater responsibility to computer models (whether through computer-assisted or autonomous decision-making), we need to be ever more vigilant about how those models are developed, trained, operated, and secured.

For example, scrutinizing the data used to train LLMs will help screen for bias and mitigate potential privacy breaches. Rigorously testing a model’s outputs can help identify (and hopefully prevent) hallucinations.
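As an illustration, below is a minimal sketch (in Python) of one simple form of output testing: sampling the same prompt several times and flagging low agreement between answers, a rough proxy for hallucination risk. The generate function is a hypothetical stand-in; a real harness would call the deployed model’s API.

```python
import collections
import random

# Hypothetical stand-in for a call to the deployed model's API.
# A real harness would query the LLM here with temperature > 0,
# so that repeated samples can disagree.
def generate(prompt: str) -> str:
    return random.choice(["Paris", "Paris", "Paris", "Lyon", "Paris"])

def consistency_check(prompt: str, n_samples: int = 10,
                      threshold: float = 0.8) -> bool:
    """Sample the model several times and measure agreement on the
    most common answer; low agreement flags the prompt for review."""
    answers = [generate(prompt) for _ in range(n_samples)]
    _, top_count = collections.Counter(answers).most_common(1)[0]
    return top_count / n_samples >= threshold

print(consistency_check("What is the capital of France?"))
```

In practice, checks like this would sit alongside bias audits of the training data and human review of any flagged prompts.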

Equally, as AI takes on greater responsibility and autonomy, the associated operational and cyber risks will intensify. For example, California regulators recently approved Waymo and Cruise to operate autonomous robotaxis in San Francisco 24×7. This has already prompted a backlash from residents who perceive the technology as unsafe: some neo-Luddites have taken to placing traffic cones on the taxis’ bonnets to disable them[1]. Such incidents already constitute a serious operational risk for operators, but the consequences of a full-on cyber breach would be catastrophic by comparison.

For every implementation of AI, appropriate controls will be needed, depending on the criticality of the application, the sensitivity of its data, and the severity of potential failures or breaches.

Regulation

With any new technology, there can be tension between the desire to move quickly (seizing first-mover advantage) and the need to move slowly (ensuring solutions are well-tested and safe). Where the consequences of getting it wrong are severe (as with autonomous vehicles), there will no doubt be cause for regulators to supervise, establish minimum standards, and license operators where appropriate.

While there have been calls for broader regulation of big tech (and of AI specifically), it is difficult to imagine countries around the world all agreeing on such rules, particularly given existing geopolitical tensions.

It is more likely that different jurisdictions and industries (particularly those that are already regulated) will set their own guidelines. Such an environment could give rise to regulatory arbitrage. It could also pressure rule-makers to favour risk over caution, in view of the economic prize on offer.

Given that the technology is relatively nascent, none of the overarching regulations governing generative AI have been finalised. Yet there already seems to be a divergence in approach between jurisdictions. The European Parliament appears to be setting out the strictest regime with the EU AI Act. The laws are not yet in their final form, but the initial language suggests generative AI will be subject to some onerous requirements: for example, “disclosing that content was generated by AI” and “publishing summaries of copyrighted data used for training.”

Disclosure of AI-generated content may prove difficult to enforce. If a generative AI model drafts some written content, could a human make it their own simply by changing a few words?

The use of copyrighted material in training foundation models is also a highly contentious topic. Artists[2], authors[3], and (potentially) publishers[4] have already brought lawsuits, unhappy that their content has been used to train generative AI models that can then reproduce content similar to their own. These disputes have yet to play out, and their outcome could reshape copyright law itself. Perhaps in the future, when I publish an article like this one, I will need to specify (using a simple robots.txt instruction) whether I grant approval for my content to be ingested by Gen AI models.
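A precedent for this already exists: OpenAI’s GPTBot crawler, for instance, respects robots.txt directives. As a rough sketch of how such an opt-out works, the Python snippet below uses the standard library’s urllib.robotparser to parse a rule blocking GPTBot and then checks which crawlers may fetch a page (example.com and the paths are illustrative).

```python
from urllib import robotparser

# Illustrative robots.txt that opts an entire site out of OpenAI's
# GPTBot crawler, while leaving other crawlers unaffected.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# GPTBot is refused; a crawler with no matching rule is allowed.
print(parser.can_fetch("GPTBot", "https://example.com/article"))     # False
print(parser.can_fetch("Googlebot", "https://example.com/article"))  # True
```

Whether model providers can be compelled to honour such signals (and how to treat content already ingested) remains an open legal question.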

Ethics & Principles

The field of artificial intelligence is moving so quickly that it may be unrealistic to expect regulators to keep pace. Being overly strict in setting rules could also stifle innovation. Instead, lawmakers may opt for a more principles-based approach: encouraging innovation while setting down sufficient guidelines to discourage harm (in line with the approach being suggested by the UK[5]).

Promoting ethics and principles will no doubt help. AI courses and certifications offered by Google and Microsoft already include ethics as a core component. Even so, educating people on how to use AI responsibly does not guarantee safety. The desire to push things forward (both scientifically and commercially) can always yield unforeseen consequences. And the more powerful the technology becomes, the more significant the risks.

[1] https://www.bbc.co.uk/news/technology-6661151332

[2] https://www.scottishlegal.com/articles/us-setback-for-artists-lawsuit-against-generative-ai#

[3] https://abcnews.go.com/Technology/authors-lawsuit-openai-fundamentally-reshape-artificial-intelligence-experts/story?id=103379209

[4] https://www.npr.org/2023/08/16/1194202562/new-york-times-considers-legal-action-against-openai-as-copyright-tensions-swirl

[5] https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper#part-7-conclusion-and-next-steps