The economic upside of generative AI

Let’s assume that the initial teething problems with AI are adequately resolved. We are then left with some incredibly powerful technology that excels at a wide variety of complex work: work that could previously be performed only by humans.

What precisely will that mean for the economy? Put simply, the economic upside of generative AI is staggering. McKinsey describes this wave of AI as the “next productivity frontier” and estimates it will generate between $2.6 trillion and $4.4 trillion in additional GDP[1], with the impact felt across all sectors. Goldman Sachs believes the economic upside could be as much as $7 trillion over a ten-year period[2].

Anyone who has tried out ChatGPT, Google Bard, Anthropic’s Claude, or Meta’s Llama will no doubt be amazed by their general knowledge. Not only has each LLM pretty much finished reading the internet; it has retained that information and can access it on command, while also summarising or expanding on specific topics. Any one of them would make a very useful companion for a pub quiz. But how will these capabilities translate into trillions of dollars of economic upside?

Corporate Use Cases for LLMs

Organisations looking to leverage LLMs to support their own workflows have a variety of options available. Microsoft (with OpenAI), Google, and Amazon (through Anthropic) have opened their foundation LLMs to Azure, Google Cloud, and AWS users, respectively. Each tech titan will see those models as a significant lure to drive continued growth of cloud services. For anyone looking to run LLMs on-premise, there is also an impressive roster of open source options available[3] (notably Meta’s Llama 2[4]). Open source development frameworks – like LangChain – have also emerged to speed up application development (using a choice of LLMs).
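The appeal of these frameworks is that they decouple application logic from any one provider. That pattern can be sketched in a few lines of Python (a minimal sketch: every name below is hypothetical, and the snippet shows the call pattern rather than any framework’s real API):

```python
from typing import Callable

# Type alias: any function that takes a prompt string and returns the
# model's completion. In a real application this would wrap a provider
# SDK call (a cloud-hosted model or an on-premise open source one).
LLMCall = Callable[[str], str]

def summarise(text: str, llm_call: LLMCall, max_words: int = 50) -> str:
    """Ask whichever model backs `llm_call` for a short summary."""
    prompt = f"Summarise the following in at most {max_words} words:\n\n{text}"
    return llm_call(prompt)

# Stand-in for a real provider call, so the sketch runs offline.
def fake_llm(prompt: str) -> str:
    return "A stub summary."

print(summarise("Generative AI may add trillions to global GDP.", fake_llm))
```

Swapping providers then becomes a one-line change: pass a different `llm_call` (cloud API or local Llama 2 server) without touching the application code around it.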

There will be an avalanche of use cases for these new models and frameworks. Within each industry vertical, the ability to use natural language to interrogate, generate, and communicate specialised content will find a range of applications. Some existing examples include:

Human-in-the-Loop or Man-versus-Machine?

Most of the use cases currently envisioned for generative AI involve the technology supporting humans at existing tasks. Conventional wisdom suggests the best results come from leveraging AI for scale while affording humans ultimate control and the final say in any judgment calls.
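In practice, that human-in-the-loop arrangement is often implemented as confidence-threshold triage: the model acts autonomously only when it is highly confident, and everything else is escalated to a person. A minimal sketch (the `Suggestion` type, threshold value, and routing labels are illustrative assumptions, not any particular product’s design):

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    label: str         # the model's proposed decision
    confidence: float  # model-reported confidence in [0, 1]

def triage(suggestion: Suggestion, threshold: float = 0.9) -> str:
    """Auto-accept only above `threshold`; otherwise escalate so a
    human reviewer keeps the final decision."""
    if suggestion.confidence >= threshold:
        return f"auto:{suggestion.label}"
    return f"human-review:{suggestion.label}"

print(triage(Suggestion("approve", 0.97)))  # auto:approve
print(triage(Suggestion("approve", 0.55)))  # human-review:approve
```

The threshold is the policy lever: lowering it automates more decisions, while raising it keeps more of them in human hands.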

However, the interplay between man and machine is not always as symbiotic as we might think. Early studies have shown that we humans can become over-reliant on our automated assistants – possibly dulling our own wits. A recently published article in Nature titled “The impact of AI suggestions on radiologists’ decisions”[5] found that “the presence of algorithmic suggestions can reduce the accuracy of diagnostic decisions, especially when such suggestions are incorrect and limit the focus and analytical engagement of human actors.”

While the Human-in-the-Loop operating model will likely prevail for the time being, it may not be long before AI outperforms even the most competent humans in a variety of functions. Should that happen, humans could quite easily be cut out of the loop. And when a growing range of functions is entirely automated, what will the impact be on society?