Can AGI take the next step toward genuine intelligence?


By Charles Simon


Over the past decade, artificial intelligence (AI) has become part of our daily lives. Just ask anyone who has ever played a video game or asked Alexa a question. AIs have expert credentials when it comes to gameplay. They are able to find data correlations and patterns that no human mind could ever uncover. But despite these impressive achievements, nothing approaching human common sense has emerged so far from the AI we regularly encounter.

What will it take for AI to evolve into truly intelligent, thinking machines, or artificial general intelligence (AGI)?  The truth is that while we can define AGI—Wikipedia calls it “the hypothetical ability of an intelligent agent to understand or learn any intellectual task that a human being can”—no one knows exactly how to create it.

Fortunately, we have a ready-made model for AGI in the human brain. By studying how the brain works and building biologically plausible approaches, we should be able to get closer to actually creating AGI. We know, for example, that intelligence and thinking arise largely from the spiking behavior of neurons in the neocortex. Because relatively little DNA is devoted to specifying how the neocortex forms, however, its wiring plan must be limited in complexity. It follows that the brain, and by extension AGI, must be buildable from repeating patterns of simpler neural circuits plus generic rules for connecting them, with overall capacity bounded by the number of neurons.
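To make the "repeating patterns of simpler circuits" idea concrete, here is a minimal sketch of a generic spiking neuron (a textbook leaky integrate-and-fire model, not the article's actual circuitry) with many identical copies wired by one simple connection rule:

```python
# A minimal leaky integrate-and-fire neuron: an illustrative, generic
# spiking model, not the specific mechanism the article's prototype uses.

class LIFNeuron:
    """Accumulates input charge, leaks over time, fires past a threshold."""

    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold  # potential needed to emit a spike
        self.leak = leak            # fraction of charge retained per step
        self.potential = 0.0

    def step(self, input_current):
        """Integrate one time step; return True if the neuron spikes."""
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True
        return False

# A "repeating pattern of simpler circuits": identical neurons connected by
# one generic rule (here, each neuron's spike drives the next in a chain).
chain = [LIFNeuron() for _ in range(3)]
spikes = []
for t in range(10):
    signal = 0.6                        # constant external drive to neuron 0
    for neuron in chain:
        fired = neuron.step(signal)
        signal = 1.0 if fired else 0.0  # a spike becomes the next input
    spikes.append(fired)                # did the last neuron in the chain fire?
```

The point of the sketch is that nothing neuron-specific is hand-designed: one small circuit, copied and wired by a single rule, already produces structured firing patterns.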

To take the next step on the road to genuine intelligence, AGI needs to create its underpinnings by emulating the capabilities of a three-year-old. Take a look at how a three-year-old playing with blocks learns. Using multiple senses and interaction with objects over time, the child learns that blocks are solid and can’t move through each other, that if the blocks are stacked too high they will fall over, that round blocks roll and square blocks don’t, and so on.

A three-year-old, of course, has an advantage over AI in that he or she learns everything in the context of everything else. Today’s AI has no context. Images of blocks are just different arrangements of pixels. Neither image-based AI (think facial recognition) nor word-based AI (like Alexa) has the context of a “thing” like the child’s block, which exists in reality, is more or less permanent, and obeys basic laws of physics.

How this kind of low-level logic and common sense arises in the human brain is not completely understood, but we do know that human intelligence develops within the context of human goals, emotions, and instincts. Those humanlike goals and instincts would not form the best basis for AGI. Human intelligence evolved over millennia and is largely about survival; AGI, on the other hand, can be planned and be largely about being intelligent. Given different goals and instincts, AGI is unlikely to resemble human intelligence.

Despite this difference, it is essential for AGI to possess an understanding of the real world. Without the ability to understand, and interact with, the real world, AI, no matter how sophisticated, will never evolve to general intelligence. To that end, AGI ultimately will require robotics to deal with the variability and complexity the real world presents. Once AGI has developed through interaction with the real world, that ability can be cloned to static hardware, and the knowledge, abilities, and understanding will be retained.

AGI prototypes are now robust enough to form an end-to-end system with modules for vision, hearing, robotic control, learning, internal modeling, planning, imagination, and even forethought. These modules enable such a system to handle a wide range of activities: avoiding obstacles while moving in a simulated environment, planning a series of actions to achieve a goal, learning words, responding to voice commands, and producing spoken responses.
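One way such a modular design can fit together is sketched below. This is a hypothetical illustration, assuming modules that read and write a shared internal world model; the module names echo the article, but the interfaces (`Module`, `update`, `AGIPrototype.tick`) are invented here, not taken from any actual prototype:

```python
# Hypothetical sketch: independent modules cooperating through one shared
# internal world model, updated once per "tick" of the agent.

class Module:
    def update(self, world_model):
        raise NotImplementedError

class Vision(Module):
    def update(self, world_model):
        # Stand-in for perception: record that an object was seen.
        world_model["objects_seen"] = world_model.get("objects_seen", 0) + 1

class Planner(Module):
    def update(self, world_model):
        # Plan only once perception has put something in the model.
        if world_model.get("objects_seen", 0) > 0:
            world_model["plan"] = "approach nearest object"

class AGIPrototype:
    """Runs each module in turn against the shared internal model."""

    def __init__(self, modules):
        self.modules = modules
        self.world_model = {}

    def tick(self):
        for module in self.modules:
            module.update(self.world_model)

agent = AGIPrototype([Vision(), Planner()])
agent.tick()
```

The design choice worth noting is the shared internal model: vision, planning, and the rest never talk to each other directly, which is what lets new modules (hearing, imagination, forethought) be added without rewiring the others.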

To date, though, AGI development has been limited to small-scale activities, such as handling encounters with a few objects at a time or learning a few words. Taking this approach enables a system to be built from the bottom up, understanding a few object types before moving on. 

Remember how that child playing with blocks gradually learns about them by using multiple senses and interacting with them over time? This is precisely the kind of process true AGI would follow, simply on a small scale at first. As small-scale issues are addressed, the structure of the simulation can be scaled up to handle huge clusters of neurons, developing categories from what would otherwise be random incoming neural spikes until, ultimately, general intelligence emerges.
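"Developing categories from incoming neural spikes" can be illustrated with a toy example. The sketch below uses generic nearest-prototype matching on binary spike patterns; this is a stand-in chosen for clarity, not the categorization algorithm any real prototype uses:

```python
# Toy illustration: group binary spike patterns into categories by
# similarity, so noisy variants of the same "thing" land together.

def hamming(a, b):
    """Number of neurons whose spike/no-spike state differs."""
    return sum(x != y for x, y in zip(a, b))

def categorize(patterns, max_distance=1):
    """Assign each pattern to a nearby existing category, or start a new
    one. Returns the list of category prototypes discovered."""
    prototypes = []
    for pattern in patterns:
        for proto in prototypes:
            if hamming(pattern, proto) <= max_distance:
                break               # close enough to an existing category
        else:
            prototypes.append(pattern)  # nothing similar seen yet
    return prototypes

# Spike patterns across 4 neurons: two "things", each with a noisy copy.
spikes = [
    (1, 1, 0, 0), (1, 0, 0, 0),   # thing A, and a 1-bit-noisy variant
    (0, 0, 1, 1), (0, 0, 1, 0),   # thing B, and a 1-bit-noisy variant
]
categories = categorize(spikes)
```

Four raw patterns collapse into two stable categories, which is the essence of the scaling claim: structure emerges from spike streams that would otherwise look random.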

Today’s AGI prototypes can explore a two-dimensional simulated environment and begin to understand everything that is to be learned there, paving the way for entry into a three-dimensional simulator where the prototype will finally begin to approach the capabilities of the average three-year-old. By gradually learning about basic concepts, such as gravity and the passage of time, the prototype will acquire the assets needed to eventually interact with the real world. 

All of this leads us to believe that AGI is not simply the stuff of “science fiction and future studies,” to again quote Wikipedia. While opinions continue to vary widely on when and even whether AGI will arrive, the progress currently being made suggests that AGI will be a reality in the not-too-distant future. Given that, the time is now to fully understand the philosophy that anchors AGI and how it can be developed to help—and not displace—mankind.   

About the author

Charles Simon

Charles Simon, BSEE, MSCS, is a nationally recognized entrepreneur and software developer with many years of industry computer experience, including pioneering work in AI. His technical experience includes the creation of two unique artificial intelligence systems along with software for successful neurological test equipment. Combining AI development with biomedical nerve-signal testing gives him singular insight. He is also the author of two books – Will Computers Revolt?: Preparing for the Future of Artificial Intelligence and Brain Simulator II: The Guide for Creating Artificial General Intelligence – and the developer of Brain Simulator II, an AGI research software platform that combines a neural network model with the ability to write code for any neuron cluster, making it easy to mix neural and symbolic AI code. You can follow the author’s continuing AGI experimentation at or at the Facebook group:
