The Dangers of Artificial Intelligence: Is it really an existential threat?
Sci-fi novelists have long imagined a future where robot intelligence slingshots past that of its human creators, triggering the rise of the machines and the subjugation of us mere mortals. Skynet in the Terminator franchise and HAL in 2001: A Space Odyssey are two of the most memorable examples, but there are dozens of similar fictional tales.
Until recently, those stories described a future so distant that it seemed closer to fantasy than fiction. But this year, everything changed.
The threat of autonomous machines causing harm to society has become a present-day danger. So much so that prominent figures are calling either for a pause in AI development or for regulatory guardrails to be established around the technology. Some are even describing AI as an existential threat to humanity – suggesting that we need to legislate now to prevent a real-life Skynet scenario.
These are not the opinions of tinfoil hat-wearing conspiracy theorists. Petitions by the Center for AI Safety and the Future of Life Institute have been signed by leading academics and businesspeople alike (including Sam Altman, Bill Gates, and Elon Musk). These are people who not only best understand the technology but also stand to benefit most from its deployment. So what exactly is going on?
Could this be an elaborate bluff or a cunning marketing tactic? “Hey world, this technology is so incredibly powerful it could cause the extinction of the human race.” [cue… the whole world scrambles to investigate AI more closely]. Or could there really be an existential threat to humanity? And if so, what are the dangers of artificial intelligence?
In this series of articles, I will aim to outline: