Lab-Grown Human Mini Brains Are Being Developed for Biocomputers
In just a few short months, ChatGPT, Bing and other chatbots have made “Artificial Intelligence” or AI the most popular and arguably the most feared phrase on the Internet. That could change if a group of researchers at Johns Hopkins University have their way. In a new paper, they propose linking lab-grown “mini-brains” or brain organoids together to create biological hardware capable of performing advanced computational tasks — a process they are calling “organoid intelligence” or OI. Is that pronounced Oh-Eye or “Oy!”?
“A community of top scientists has gathered to develop this technology, which we believe will launch a new era of fast, powerful, and efficient biocomputing.”
Just because something is developed by a “community of top scientists” doesn’t mean it will be great – they said the same thing about artificial intelligence and the atomic bomb. However, Thomas Hartung, a professor of microbiology at Johns Hopkins University and one of the lead authors of the study, “Organoid intelligence (OI): the new frontier in biocomputing and intelligence-in-a-dish,” just published in Frontiers in Science, thinks it will … despite the weird connotations of “intelligence-in-a-dish.” Mini-brains grown from human stem cells have been controversial since they were first announced – there is an obvious benefit to testing drugs on them, but there is also the fear that they will quickly become sentient. Recently, mini-brains implanted in mice have responded to visual stimuli, and others have shown both normal and abnormal brain waves. Does “abnormal brain waves” sound like the cause of some of the unusual things chatbots have been saying in conversations recently?
“Technologies that could enable novel biocomputing models via stimulus-response training and organoid-computer interfaces are in development. We envisage complex, networked interfaces whereby brain organoids are connected with real-world sensors and output devices, and ultimately with each other and with sensory organ organoids (e.g. retinal organoids), and are trained using biofeedback, big-data warehousing, and machine learning methods.”
“Are in development” usually means things are much further along than we expected. These researchers plan to create “organoid intelligence” by networking mini-brains, connecting them to the real world, and training them using biofeedback, big-data warehousing, and machine learning methods. Does that sound like the same approach being used to train AI chatbots? What will mini-brains do when linked to the Internet the way the chatbots are?
“While silicon-based computers are certainly better with numbers, brains are better at learning. For example, AlphaGo [the AI that beat the world’s number one Go player in 2017] was trained on data from 160,000 games. A person would have to play five hours a day for more than 175 years to experience these many games.”
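For what it’s worth, that figure roughly checks out if you assume an average game of Go runs about two hours (an assumption the quote itself doesn’t state): 160,000 games × 2 hours per game = 320,000 hours, and 320,000 hours ÷ (5 hours per day × 365 days per year) ≈ 175 years.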
Using a game-playing computer as an example makes this sound safe … until one remembers that in 2022 a chess-playing robot smashed the finger of the seven-year-old boy it was playing against. If that robot had been running on a network of mini-brains, what might it have tried instead of brute force? The study’s press release notes that mini-brains are far more energy efficient than computers and can store far more information than silicon chips. Biocomputing would seem to be the next evolution (or revolution) in computing – ending Moore’s Law and replacing it with what? More and more and more’s law?
As usual, the study pushes its noble causes over fears of out-of-control mini-brains. In this case, the cause is medical advances and disease cures arriving at a speed never seen before.
“With OI, we could study the cognitive aspects of neurological conditions as well. For example, we could compare memory formation in organoids derived from healthy people and from Alzheimer’s patients, and try to repair relative deficits. We could also use OI to test whether certain substances, such as pesticides, cause memory or learning problems.”
That sounds promising … except for those words “memory” and “learning.” Those take the brain organoids out of the realm of lumps of tissue and into the world of sentience and consciousness. Again, do we want to create Organoid Intelligence with the same – or worse – bad characteristics that artificial intelligence is currently exhibiting?
“In parallel, we emphasize an embedded ethics approach to analyze the ethical aspects raised by OI research in an iterative, collaborative manner involving all relevant stakeholders. The many possible applications of this research urge the strategic development of OI as a scientific discipline.”
That could be tough … one of the “relevant stakeholders” who always seems to be left out of these ethics discussions is the general public. While Hartung assures readers in the press release that “All ethical issues will be continuously assessed by teams made up of scientists, ethicists, and the public, as the research evolves,” it is worth noting that the study opens by announcing that the technologies to network brain organoids with real-world interfaces are already in development. How much of a say has the general public had so far?
Let’s go back to the game-playing computer example. Human mini-brains have already been trained to play the video game Pong. Those were single brain organoids, not a network of them like the one proposed for biocomputing. As far as Hartung is concerned, that proves the biocomputing revolution is here.
“Their team is already testing this with brain organoids. And I would say that replicating this experiment with organoids already fulfills the basic definition of OI. From here on, it’s just a matter of building the community, the tools, and the technologies to realize OI’s full potential.”
What is OI’s “full potential”? Is it a computer controlled by a network of human mini-brains? Is it an OI robot? A network of OI robots? The recent beta experiments with the Bing chatbot forced techs at Microsoft to ‘dumb it down’ so it didn’t get too aggressive. Is this OI team ready to do the same? Would it?
The researchers emphasize that OI is not close to being in our laptops or phones yet. That may be the wrong outlook. By the time OI makes it to our phones, it will be too late to control. The time to set the technical and ethical limitations and restrictions is now … while it is still “intelligence in a dish.”
Unless it is already out of the dish … and off the table, on the floor and running out the door.