
GPT-4 is Showing Sparks of Artificial General Intelligence According to Testers

Is the world changing too fast for you? You may feel that way with each new revelation of the alleged powers of Generative Pre-trained Transformers – better known as GPT-3.5 and now GPT-4 – which use deep learning on vast amounts of Internet text to produce human-like writing and serve as the platform for chatbots such as OpenAI’s ChatGPT and Microsoft’s Bing AI, with rivals like Google’s PaLM-powered offerings. Well, it is time to speed up your own learning process, as AI researchers using early versions of GPT-4 are now talking about artificial general intelligence – the ability of an AI to understand or learn any intellectual task that humans can – and suggesting that GPT-4 may be the platform that allows AI to experience consciousness … if it hasn’t already. Is it too late to worry … or to stop it?

“Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”

In a new paper, “Sparks of Artificial General Intelligence: Early experiments with GPT-4,” posted to the arXiv preprint server, a team of AI researchers from Microsoft suggests that GPT-4 is already showing “sparks” – early signs – of being an artificial general intelligence (AGI) system. Before we look at what that means, let’s do a little human learning of our own about AGI.

Artificial general intelligence is an all-encompassing AI

“Artificial general intelligence (AGI) is the representation of generalized human cognitive abilities in software so that, faced with an unfamiliar task, the AGI system could find a solution. The intention of an AGI system is to perform any task that a human being is capable of.”

TechTarget provides that generally accepted definition of AGI but admits that there are many others, owing to the many definitions of human intelligence – “Computer scientists often define human intelligence in terms of being able to achieve goals. Psychologists, on the other hand, often define general intelligence in terms of adaptability or survival.” Explanations of AGI generally break it down into “weak AI” and “strong AI.” A weak AI is a program designed to solve a single problem – examples include chess programs and self-driving vehicles. A strong AI is designed to have the general cognitive abilities to learn how to accomplish anything presented to it.

Artificial general intelligence, and strong AI in particular, is often linked to the Turing test, which evaluates whether an AI can fool humans into thinking it is human. To pass it, artificial intelligence researchers must give an AGI the ability to reason, use strategy, make judgments under uncertainty, apply common-sense knowledge, communicate in natural human language, integrate all of these skills on its own, receive input through human-like senses (seeing, hearing) and move physically. While current chatbots do not have the physical capabilities, many believe they have mastered the reasoning side of AGI well enough to fool humans, at least on a limited level.

“We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4’s performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”

The authors of the new paper believe that GPT-4 has inched across the line into AGI and give many pages of specific examples to support that contention – a GPT-4-written proof that there are infinitely many primes (the statement is sketched below) and a unicorn drawn with the TikZ graphics language are two. That sounds like more than a “spark” as described in the title and the abstract, and the researchers seem to bounce back and forth between ‘it’s here’ and ‘it’s not quite here’.
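For readers curious what such a proof looks like, here is the classic Euclid argument in LaTeX – this is the standard textbook version of the statement GPT-4 was asked to prove, not GPT-4’s own output:

```latex
% Euclid's classic proof that there are infinitely many primes.
% The standard textbook argument, not GPT-4's output.
\begin{proof}
Suppose, for contradiction, that there are only finitely many primes
$p_1, p_2, \ldots, p_n$, and let
\[
  N = p_1 p_2 \cdots p_n + 1 .
\]
Dividing $N$ by any $p_i$ leaves remainder $1$, so no $p_i$ divides $N$.
But $N > 1$, so $N$ has some prime factor, which therefore lies outside
our supposedly complete list -- a contradiction. Hence there are
infinitely many primes.
\end{proof}
```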

“In our exploration of GPT-4, we put special emphasis on discovering its limitations, and we discuss the challenges ahead for advancing towards deeper and more comprehensive versions of AGI, including the possible need for pursuing a new paradigm that moves beyond next-word prediction.”

In a tweet, OpenAI CEO Sam Altman also captured the good news/bad news aspect of GPT-4.

“it is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it.”

In an interview with Intelligencer’s Kara Swisher, Altman gets to the heart of the flaw in GPT-4 that has many trembling in fear, since it has become an immediate source of information and research for inventors, project managers, educators, leaders, students and more: it makes mistakes.

“Well, I think that’s been an issue with every version of these systems, not particularly GPT-4. You find these flashes of brilliance before you find the problems. And so, a thing that someone used to say about GPT-3 that has really stuck with me is it is the world’s greatest demo creator. Because you can tolerate a lot of mistakes there, but if you need a lot of reliability for a production system, it’s not as good at that. GPT-4 makes fewer mistakes. It’s more reliable, it’s more robust, but there’s still a long way to go.”

The key phrase in Altman’s quote is “world’s greatest demo creator” as a description of all of these GPT systems. If he sees them as demo creators, then he sees us as guinea pigs for unfinished, untested, flawed, mistake-prone products … products which are already being used as sources by journalists, students writing research papers, scientists conducting new research, software developers and more. Altman tells Swisher that GPT-4 will often just plain make things up. His solution is to give GPT-4 and its competitors a lot more human feedback so they can become more reliable. Shouldn’t that be done BEFORE a system is released to the public and is showing “sparks” of artificial general intelligence?
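For the curious, “more human feedback” is usually implemented as reinforcement learning from human feedback (RLHF). Below is a minimal, hypothetical Python sketch of the general shape of that loop – every function and variable name here is a placeholder for illustration, not OpenAI’s actual pipeline:

```python
import random

# A minimal, hypothetical sketch of a human-feedback loop (RLHF-style).
# Every name below is a placeholder for illustration; this is NOT
# OpenAI's pipeline, just the general shape of "more human feedback".

def generate_candidates(prompt, n=4):
    """Stand-in for a language model sampling n candidate answers."""
    return [f"answer {i} to {prompt!r}" for i in range(n)]

def collect_human_ranking(candidates):
    """Stand-in for a human rater ordering answers best-to-worst."""
    return sorted(candidates, key=lambda _: random.random())

reward_scores = {}  # stand-in for a learned reward model

def update_reward_model(ranking):
    """Score answers by their human-assigned rank (best = highest)."""
    for rank, answer in enumerate(ranking):
        reward_scores[answer] = len(ranking) - rank

def rlhf_round(prompts):
    for prompt in prompts:
        candidates = generate_candidates(prompt)
        ranking = collect_human_ranking(candidates)
        update_reward_model(ranking)
        # In a real system the model would now be fine-tuned (commonly
        # via PPO) to prefer high-reward answers; here we just report.
        best = max(candidates, key=reward_scores.get)
        print(f"{prompt}: preferred -> {best!r}")

rlhf_round(["What causes tides?", "Summarize the GPT-4 paper."])
```

The sketch makes Altman’s trade-off concrete: the human ranking step is the slow, expensive part of the loop, which is exactly why collecting that feedback after public release turns the public into the raters.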

Shouldn’t artificial general intelligence be fully tested before being released on the world?

Here are some better ideas. Let’s develop a Turing test for scientists to make sure they are human-centered rather than focused on unleashing technology on the public and apologizing later for the problems it causes. Let’s put OpenAI and the other for-profit, closed-source companies creating and using GPTs under closer observation and controls – especially their beta-testing departments. While this may slow down advancement, it could save lives and minds. Finally, let us as the public resist the shiny baubles of new technology dangled in front of us until the developers test them thoroughly themselves – and on themselves.

Remember … the next thing has rarely been the next BEST thing.


