Charles Simon, BSEE, MSCS, is the founder and CEO of FutureAI.
From doing a Google search to asking Alexa to play your favorite song, it’s impossible to deny the incredible impact artificial intelligence (AI) has had on today’s world. Ironically, AI’s advances to date also shed light on its shortfalls.
No matter how impressive its accomplishments have been, AI doesn’t think or understand the way humans do. Ask a customer service bot a question that does not fit neatly into the “script” that enables it to respond to your queries and you’ll see what I mean. That’s because today’s AI relies on analyzing massive data sets and looking for patterns and correlations. To evolve to its next phase—artificial general intelligence (AGI)—AI must be able to understand and learn the kinds of intellectual tasks that a 3-year-old child takes for granted.
As I discussed in my previous article, a 3-year-old playing with blocks understands space, namely that the blocks are physical things that exist in a three-dimensional world controlled by physical laws. The child understands the passage of time and causality: The blocks must be stacked up before they can be knocked down. The fundamental question that this example raises is, “Could you understand space, time and causality without ever having experienced them?” These are all components of what we call “common sense” and the area of intelligence where AI is currently lacking.
Children learn everything by interacting with their environment. A child moves about, grasps objects, examines them from all sides and tries out actions on objects to see what happens. Playing with blocks lets the child learn that most objects are solid; square objects can be stacked, but round ones can’t; and careful stacking can build a taller stack, while careless stacking can cause a collapse.
To allow AI to explore and experiment with real objects in the same way a child does—and in doing so, approach true understanding—its computational system must be integrated with a robot. That doesn’t mean it needs to be inside a robot. The AI could reside on a supercomputer with a wireless connection to the robot. Regardless, the key is for the robot to provide mobility, senses (touch, vision and hearing) and interaction with physical objects that will enable the entire system to experience rapid sensory feedback from each action it takes. This, in turn, will enable the system to begin to learn and understand, and in doing so, approach true AGI.
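To make the feedback loop described above concrete, here is a minimal, hypothetical Python sketch. Every class, method and number in it is invented for illustration, not drawn from any real robotics framework: a "mind" chooses actions, a "pod" executes them and reports what its sensors observe, and the mind updates what it knows from each outcome.

```python
import random

# Hypothetical sketch of the "remote mind plus sensory pod" arrangement.
# All names here are illustrative; this is just the shape of the loop.

class SensoryPod:
    """Stands in for the robot: executes actions in the world and
    returns immediate sensory feedback."""

    def __init__(self):
        self.stack_height = 0

    def execute(self, action):
        # Toy block-stacking physics: careful stacking always adds a
        # block; careless stacking risks knocking the whole tower down.
        if action == "stack_carefully":
            self.stack_height += 1
        elif action == "stack_carelessly":
            if random.random() < 0.5:
                self.stack_height = 0  # collapse
            else:
                self.stack_height += 1
        return {"height": self.stack_height}

class RemoteMind:
    """Stands in for the AI running elsewhere (say, a supercomputer
    with a wireless link to the pod). It learns from action outcomes."""

    def __init__(self):
        self.value = {"stack_carefully": 0.0, "stack_carelessly": 0.0}

    def learn(self, action, height_before, height_after):
        # Credit each action with the change in tower height it caused.
        self.value[action] += height_after - height_before

    def best_action(self):
        return max(self.value, key=self.value.get)

random.seed(0)  # fixed seed so the example is repeatable
mind, pod = RemoteMind(), SensoryPod()
for _ in range(100):
    action = random.choice(list(mind.value))  # explore both actions
    before = pod.stack_height
    after = pod.execute(action)["height"]
    mind.learn(action, before, after)

print(mind.best_action())
```

In a real system, the toy physics inside `execute` would be replaced by actual motors and sensors, and the wireless link between mind and pod would carry the same two messages: an action out, sensory feedback back. The point is the rapid sense-act-learn cycle, not the sophistication of either end.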
This is a significant shift in focus. Robots are typically described in terms of what tasks they can perform for people. In this case, the robot is described in terms of the input it can provide to the AI’s “mind.” It is more a “sensory pod” than a robot.
The abilities of such an AGI pod are well within the realm of today’s robotics. In fact, a fairly simple, low-cost robot should suffice for creating AGI: a mobile pod with vision, a manipulator and touch sensors, for example, may be all the AGI needs to learn about the real world. The key to AGI, though, is in the robot’s “mind,” which must be able to control the robot so that it can explore its environment, try out actions and see what happens—just like a child. If this simple robot were being controlled by a potential AGI, it could learn more about dogs, say, in a few moments of interaction than an AI can learn from thousands of images of dogs contained in its data center.
While robots provide the most likely path to AGI, they aren’t strictly necessary. A simulated robot in a simulated environment might be able to learn the same things as a physical robot in a real-world environment. The difficulty with this approach, however, is that progressively more accurate simulations are needed to recreate the variability and unexpectedness of the real world. It soon becomes obvious that building the simulation is more difficult than building a physical sensory pod.
Once the AGI has learned about the world, the pod can be removed. Consider that if you put on a blindfold, you don’t immediately lose your ability to visualize; your fundamental understanding of what it is to see things or know about them in the real world still exists. Further, an AGI’s content can be copied to another AGI, a clone of sorts. That second system may never have been connected to a robot, yet it will inherit the understanding of the real world that the robotic system previously acquired.
Clearly, then, understanding is simply a pattern of software and data, so there must be other pathways to the same result that don’t use a robot at all. It would appear that AGI’s need for a robot is a practical requirement rather than a theoretical one: solving the problems needed to create even a simple autonomous robotic system is just a quicker pathway to AGI than other approaches.