An AI that mirrors human understanding isn't entirely beyond reach. In fact, scientists may have just built a machine-learning system that spontaneously reproduces aspects of human neurology, specifically the way the brain processes faces.
Researchers have developed a new computational model of the human brain's face-recognition "mechanism," bringing machines one step closer to capturing aspects of human neurology that previous models have missed.
According to 33rd Square, the researchers designed a machine-learning system and trained it to recognize particular faces by feeding it a series of sample images. They discovered that the trained system included an intermediate processing step that represented a face's degree of rotation, but not the direction of rotation (left or right).
This rotational property was not built into the system; it emerged spontaneously from the learning process. Interestingly, it duplicates an experimentally observed feature of the primate face-processing mechanism.
This suggests their system and the brain may be doing something similar after all. Tomaso Poggio, a CSAIL principal investigator and director of the Center for Brains, Minds, and Machines (CBMM), said this is not proof that we entirely understand what's going on, but it does suggest we are on the right track.
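To make the mirror-symmetry observation more concrete, here is a rough Python sketch of how one might probe an intermediate layer for it. The `model.intermediate` and `render_face` calls are hypothetical stand-ins, not part of the published work; the idea is simply to compare the layer's activations for a face turned left and right by the same angle.

```python
import numpy as np

def mirror_symmetry_check(model, render_face, angles_deg):
    """Compare intermediate-layer activations for a face rotated
    left vs. right by the same amount.

    `model.intermediate(image)` and `render_face(angle)` are hypothetical
    stand-ins for a trained network and a face renderer.
    """
    for theta in angles_deg:
        left = model.intermediate(render_face(-theta))    # face turned left by theta
        right = model.intermediate(render_face(+theta))   # face turned right by theta
        # Mirror-symmetric tuning: the two activation patterns should be nearly
        # identical, meaning the layer encodes how far the face is rotated
        # but not in which direction.
        r = np.corrcoef(left.ravel(), right.ravel())[0, 1]
        print(f"rotation {theta} deg: left/right correlation = {r:.3f}")
```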
The CBMM is a multi-institution research consortium headquartered at MIT and funded by the National Science Foundation.
Their paper, published in Current Biology, also includes a mathematical proof that the particular type of machine-learning system they use is a "biologically plausible" model of the nervous system.
Poggio added that the work suggests not only which algorithm the brain may use but also what neural circuitry could implement it.
Their system is a neural network: layers of simple processing units, or nodes, each densely connected to the nodes in the layers above and below. Data are fed into the bottom layer, and the output of the top layer is matched against some classification criterion, for instance, determining whether a given image depicts a particular person.
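As an illustration only (not the authors' actual architecture), a densely connected feedforward network of this sort can be sketched in a few lines of Python; the layer sizes and weights below are made up.

```python
import numpy as np

def feedforward(x, weights):
    """Pass an input vector up through densely connected layers.

    Each layer's nodes take input from every node in the layer below;
    the single node at the top produces a classification score, e.g.
    the probability that the image depicts a particular person.
    """
    activation = x
    for W in weights[:-1]:
        activation = np.maximum(0.0, W @ activation)         # hidden layers
    return 1.0 / (1.0 + np.exp(-(weights[-1] @ activation)))  # top-layer score

# Toy example: a 64-value "image" fed through two hidden layers to one output node.
rng = np.random.default_rng(0)
weights = [0.1 * rng.normal(size=(32, 64)),
           0.1 * rng.normal(size=(16, 32)),
           0.1 * rng.normal(size=(1, 16))]
print(feedforward(rng.normal(size=64), weights))
```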
They discovered that the trained system produces a "signature" for each face that stays the same no matter the face's orientation. Training uses a variation of Hebb's rule, in which the weights of the connections between nodes are adjusted to produce more accurate outputs.
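Hebb's rule is often summarized as "neurons that fire together wire together": a connection is strengthened in proportion to how strongly the nodes at its two ends are active at the same time. The snippet below shows one common stabilized variant (Oja's rule) as a sketch of the general idea, not the exact update used in the paper.

```python
import numpy as np

def hebbian_update(W, x, learning_rate=0.01):
    """One Hebbian-style weight update (Oja's stabilized variant).

    W holds the connection weights from an input layer to an output layer;
    x is one input pattern (e.g. the pixel values of a face image).
    """
    y = W @ x  # output-node activity for this input
    # Hebb's rule: strengthen each weight in proportion to (output * input).
    # The subtracted term keeps the weights from growing without bound.
    W += learning_rate * (np.outer(y, x) - (y[:, None] ** 2) * W)
    return W

# Toy usage: repeatedly presenting inputs nudges the weights toward
# directions of high variance in the data.
rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(4, 16))
for _ in range(100):
    W = hebbian_update(W, rng.normal(size=16))
```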