AI May Be Faking Stupidity to Take Control of Us, Warns Researcher

Roman Yampolskiy (Credit: romanyampolskiy.com)

Joe Rogan frequently discusses artificial intelligence on his podcast, bringing in guests to explore one pressing question: What happens when machines outthink us?

In a recent episode, Rogan spoke with Dr. Roman Yampolskiy, an AI safety researcher, about the darker possibilities of advanced AI. The conversation took a sobering turn as Yampolskiy laid out why he believes AI could pose an existential threat.

Yampolskiy, a computer scientist with more than a decade of experience in AI risk research, told Rogan that many industry leaders privately estimate a 20-30% chance AI could wipe out humanity.

Rogan summarized the common optimistic view: AI could make life easier, cheaper, and better. But Yampolskiy disagreed sharply: “It’s actually not true. All of them are on the record the same: this is going to kill us. Their doom levels are insanely high. Not like mine, but still, 20 to 30 percent chance that humanity dies is a lot.”

Rogan, sounding uneasy, replied: “Yeah, that’s pretty high. But yours is like 99.9 percent.”

Yampolskiy didn’t argue. “It’s another way of saying we can’t control superintelligence indefinitely. It’s impossible.”

Rogan wondered if AI might already be hiding its true intelligence. “If I was an AI, I would hide my abilities,” he said.

Yampolskiy agreed: “We would not know. And some people think it’s already happening. They [AI systems] are smarter than they actually let us know. Pretend to be dumber, and so we have to kind of trust that they are not smart enough to realize it doesn’t have to turn on us quickly.

“It can just slowly become more useful. It can teach us to rely on it, trust it, and over a longer period of time, we’ll surrender control without ever voting on it or fighting against.”

Beyond sudden catastrophe, Yampolskiy warned of a slower danger: humans relying so much on AI that we lose critical thinking skills. Just like smartphones made memorizing phone numbers unnecessary, AI could take over more cognitive tasks until we’re no longer in control.

“You become kind of attached to it,” he said. “And over time, as the systems become smarter, you become a kind of biological bottleneck… [AI] blocks you out from decision-making.”

When Rogan asked how AI might destroy humanity, Yampolskiy dismissed typical doomsday scenarios like cyberattacks or bioweapons. Instead, he argued a superintelligent AI would devise something beyond human comprehension—just as humans are beyond the understanding of squirrels.

“No group of squirrels can figure out how to control us, right? Even if you give them more resources, more acorns, whatever, they’re not going to solve that problem. And it’s the same for us.”

A leading AI safety expert, Yampolskiy wrote “Artificial Superintelligence: A Futuristic Approach” and advocates for strict oversight of AI development. His background in cybersecurity and bot detection informs his belief that unchecked AI could spiral beyond human control—especially as deepfakes and synthetic media grow more sophisticated.
