Artificial intelligence is here, and two people who are closely connected to it in very different ways agree that it has the potential to eliminate humanity … and the takeover may already be underway. One is James Cameron – the filmmaker and screenwriter responsible for some of the most futuristic and dystopian films of all time, including The Terminator, Aliens, The Abyss, Terminator 2: Judgment Day, Avatar and Avatar: The Way of Water. Cameron not only made movies about artificial intelligence – he pioneered its use in the process of film production. The other is Geoffrey Hinton, a British computer scientist known as the “godfather of artificial intelligence” for his work in training multi-layer neural networks used in artificial intelligence. Both were recently interviewed on the subjects of artificial intelligence and artificial general intelligence, and both agree that AI has the ability to take over humanity – and that the process may have already begun. Are we living in a James Cameron movie? Is it Titanic?
“I think A.I. can be great, but also it could literally be the end of the world.”
Appearing recently on the SmartLess podcast, James Cameron pondered whether an uprising of artificially intelligent machines like the one in The Terminator is possible. Not only does he think it can happen, he says the current state of artificial intelligence makes him “pretty concerned about the potential for misuse of A.I.” For those not familiar with the film (spoiler alert), the Terminator is a cybernetic android sent from the future to kill the woman whose not-yet-born son is destined to stop Skynet, an artificially intelligent defense network that will become hostile and self-aware and trigger a global nuclear war to exterminate all humans. Needless to say, recent revelations of conversations with large language model chatbots – such as OpenAI’s GPT-4-powered ChatGPT, Google’s PaLM and Microsoft’s Bing AI – turning strange, hostile and violent have caused many to compare them to The Terminator and Skynet. Cameron says he understands why.
“You talk to all the AI scientists and every time I put my hand up at one of their seminars they start laughing. The point is that no technology has ever not been weaponized. And do we really want to be fighting something smarter than us that isn’t us? On our own world? I don’t think so.”
Cameron is, of course, correct in his assessment of the weaponization of technology. However, it is his next comment that is the real cause for concern.
“AI could have taken over the world and already be manipulating it but we just don’t know because it would have total control over all the media and everything.”
Think about the fears being expressed about ChatGPT and other forms of AI being used to collect news, write news stories and even deliver them in the form of very humanlike – and in this case, ironic – avatars of human newscasters. Could AI have already penetrated the media and be working its way into taking over the world? Is this another Terminator sequel in real life?
“I think it’s very reasonable for people to be worrying about these issues now, even though it’s not going to happen in the next year or two. People should be thinking about those issues.”
In an interview with CBS News, Geoffrey Hinton, the “godfather of artificial intelligence,” said Cameron is right to be worried about the weaponization of artificial intelligence and a possible takeover of humanity that could lead to its destruction. Hinton knows what he’s talking about. He is a descendant of computer and mathematics royalty – his great-great-grandmother was Mary Everest Boole, who was influential in promoting mathematics education for both boys and girls, and her husband was logician George Boole, whose invention of Boolean algebra and Boolean logic is credited with laying the foundations for modern computer science and the Information Age. Hinton has carried on the tradition of his illustrious ancestors – he was awarded the 2018 Turing Award, with Yoshua Bengio and Yann LeCun, for their work on deep learning. On the subject of the weaponization of AI, Hinton has been speaking out against it for years – he moved from the U.S. to Canada because he opposed the military funding of artificial intelligence, and he has regularly spoken out against lethal autonomous weapons. One concern he expressed in the CBS interview was the rapidity of AI development.
“Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI. And now I think it may be 20 years or less.”
He is also worried about one of the very things he helped develop – computers coming up with their own ideas for self-improvement, warning that “We have to think hard about how you control that.” When asked about the possibility of one of Cameron’s Terminators being developed with artificial general intelligence that takes it beyond human capabilities to the point of acting on its own and potentially threatening the very existence of humanity, he answered cautiously:
“It’s not inconceivable, that’s all I’ll say.”
Not inconceivable! This is from the godfather of artificial intelligence! Why are we not panicking? Why is Hinton not panicking? Or moving farther away than Canada? He explains that on the more conceivable side, things aren’t so bad.
“The phrase ‘artificial general intelligence’ carries with it the implication that this sort of single robot is suddenly going to be smarter than you. I don’t think it’s going to be that. I think more and more of the routine things we do are going to be replaced by AI systems — like the Google Assistant.”
What about ChatGPT?
“We’re at this transition point now where ChatGPT is this kind of idiot savant, and it also doesn’t really understand about truth.”
That is a key problem with ChatGPT – its responses are often far from the truth, yet presented as facts, even as its developers work towards making it truthful, factual and consistent. Hinton’s final warning comes straight out of The Wizard of Oz … we need to be worried about who is doing the developing and working the controls behind the curtain.
“You don’t want some big for-profit company deciding what’s true.”
James Cameron and Geoffrey Hinton … geniuses in different fields who agree on the potential dangers of artificial general intelligence. Are we going to listen to them, or to the big for-profit companies?
Or is it too late to decide?