Does AI really pose an existential threat to humanity?

The petitions from the Center for AI Safety[1] and the Future of Life Institute[2] both highlight significant risks from AI. So, it’s fair to ask: does AI really pose an existential threat to humanity? And if so, how would that risk materialise?

Personally, I see three broad possibilities for AI to cause harm to society – and have ranked them below in order of severity.

The singularity

The first possibility is for AI to achieve consciousness (an event typically referred to as ‘the singularity’[3]). A fully conscious computer being would no longer need humans to feed it data, train it, or monitor its actions. It could slowly spread its own digital tentacles, hack its way into other systems, and begin seizing control of our society. We would hope that someone would have programmed such an entity to respect humanity. But if said AI were self-aware and intent on maintaining its existence, any effort to pull the plug could see it at odds with us mere mortals.

I would hope we are still some way from this kind of development. LLMs may, on the surface, seem like sentient beings, but they take a probabilistic approach to answering questions based on large volumes of training data. They even have parameters to tune the probabilities of their responses (if you are interested in how that is done, this article does a good job of explaining the mechanics[4]). Moving from probabilistic question answering (or task completion) towards true sentience seems like a large leap. But it is by no means infeasible.
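To make the “tuning the probabilities” point concrete, here is a minimal sketch of temperature scaling, one common way such a parameter works. The logits (raw model scores for candidate next tokens) are hypothetical values chosen purely for illustration:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw model scores (logits) into a probability
    distribution, scaled by a temperature parameter."""
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate next tokens.
logits = [2.0, 1.0, 0.1]

# A low temperature sharpens the distribution (the model almost
# always picks the top token); a high temperature flattens it
# (the model's output becomes more varied).
print(softmax_with_temperature(logits, temperature=0.5))
print(softmax_with_temperature(logits, temperature=2.0))
```

The upshot: nothing in this machinery resembles awareness – it is arithmetic over scores, which is why the leap to sentience discussed above is so large.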

A recently published scientific paper[5] found that “there are no obvious technical barriers” to building AI systems that exhibit properties of “consciousness”. Such a development would not happen by chance. For computers to become self-aware, we would need to help them on that path. Given that there is already significant scientific interest in doing so, one would hope that bold steps to promote the singularity are taken with equal measures of caution (ensuring the off button can always be pressed).

A Machiavellian force

Even if we manage to avoid “the singularity” (or at least mitigate its harm), AI could damage humanity in more gradual, subtler ways. If it seems far-fetched for artificial intelligence to assume absolute control of society, it is by no means unreasonable that it will play a key supporting role.

Should the evolution of AI become an arms race, the technology could help those already in power tighten their grip on society. In the sphere of information and public opinion, AI makes it easier to enhance surveillance, influence viewpoints, sow confusion, and ultimately divide and conquer a populace. Equally, in the physical world (especially when combined with robotics), AI could drive advances in weaponry, policing, and warfare that would further consolidate power in fewer hands and/or help quash dissenting voices.

Such a consolidation of power would no doubt be welcomed by the parties already in control. But the old dictum that “power tends to corrupt and absolute power corrupts absolutely” suggests the rest of us might be in for a bumpy ride. Drunk on the power afforded to them by AI, humans at the top of the pile could be more likely to take actions that are to the detriment of the rest.

A destabilising influence

Just as AI could help tighten control over society, it could equally prompt a backlash from the general population. The last 20 years have seen a wide variety of protest movements, culminating in discontent spreading into mainstream politics. A rise in populism, alongside distrust of experts, elites, and traditional organs of power, has spilled over into riots and other acts of civil unrest.

AI could further destabilise society in ways that are difficult to foresee. The speed with which AI takes over professional jobs may lead to painful dislocations. Just as the loss of manufacturing jobs has scarred former industrial heartlands in many countries, AI taking over knowledge work could result in many white-collar professionals joining the disaffected ranks of those feeling left behind.

Rather than represent an existential threat, AI could prompt an existential crisis – in the psychological sense. Starved of the purpose that work provides, many of us could be led to search for greater meaning in life.

Such a crisis could go either way. Advancing knowledge through research, seeking greater spiritual enlightenment, increasing appreciation for arts and culture, taking on new hobbies, learning artisanal skills – there are many ways society could react positively to greater workforce automation.

On the flipside, a lack of purpose could equally result in increased anxiety, restlessness, feelings of resentment, and even an uptick in violence and criminal activity.

How future workforce dislocations are managed (both individually and collectively) will prove crucial in the impact AI has on society.

[1] https://www.safe.ai/statement-on-ai-risk#signatories

[2] https://futureoflife.org/open-letter/pause-giant-ai-experiments/

[3] https://en.wikipedia.org/wiki/Technological_singularity

[4] https://towardsdatascience.com/all-you-need-to-know-about-attention-and-transformers-in-depth-understanding-part-1-552f0b41d021#62ce

[5] https://arxiv.org/abs/2308.08708