With robots and artificial intelligence supposedly on the brink of becoming sentient, you’d think scientists and researchers would be wary of giving our potential future rulers more tools to think and behave like human beings. As it turns out, a group of researchers has built an AI that can think and learn just like a human baby.
With their usual demeanour and affinity for eating their toys instead of playing with them, babies can fool us into believing that they don’t know anything. While this is largely true, they actually do have in-built knowledge of basic concepts of physics, such as object permanence, which is why they are shocked when someone seems to vanish behind their hands during games like peek-a-boo. In their minds, the person or object is not supposed to vanish into thin air, and it seems like magic to them when it does.
A new study has introduced an AI called PLATO (Physics Learning through Auto-encoding and Tracking Objects), inspired by research on how human babies think and learn.
Over the years, infant studies have identified three key concepts that we all understand from a very young age: permanence (objects don’t vanish when out of sight), solidity (solid objects don’t pass through one another) and continuity (objects move in a consistent way through time and space).
The researchers at the AI research laboratory DeepMind in the United Kingdom relied on this knowledge to train PLATO to understand these concepts.
“Luckily for us, developmental psychologists have spent decades studying what infants know about the physical world and cataloging the different ingredients or concepts that go into physical understanding,” writes neuroscientist Luis Piloto, a member of the research team.
The data set prepared by the researchers for PLATO’s development covers the three basic concepts and adds two new ones: unchangeableness (object properties like shape don’t change) and directional inertia (objects move in a way that’s consistent with the principles of inertia).
When PLATO was shown videos of scenarios that defied these concepts, it expressed surprise (the AI version of surprise). This showed that it was smart enough to recognise that what was happening in the videos broke the laws of physics it had learned. This is what researchers who study infants call Violation-of-Expectation (VoE), which is based on the idea that infants will show surprise when witnessing an impossible event.
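The “AI version of surprise” here is essentially prediction error: the model predicts how objects should behave next, and a large mismatch with what it actually observes registers as surprise. A minimal sketch of that idea, with an illustrative constant-velocity predictor (PLATO’s actual architecture is far more involved; all names here are hypothetical):

```python
import numpy as np

def surprise(predicted, observed):
    """'Surprise' as squared prediction error between expected and observed object states."""
    return float(np.mean((predicted - observed) ** 2))

def predict_next(prev, curr):
    """Toy physics prior: objects keep moving at constant velocity (continuity)."""
    return curr + (curr - prev)

# An object observed at positions 1.0 then 2.0; the predictor expects 3.0 next.
expected = predict_next(np.array([1.0]), np.array([2.0]))

# Physically plausible continuation: the object arrives where expected.
plausible = surprise(expected, np.array([3.0]))    # near zero: no surprise

# "Impossible" event: the object teleports, violating continuity.
impossible = surprise(expected, np.array([9.0]))   # large: high surprise

print(plausible, impossible)
```

In the VoE paradigm, a model that consistently shows higher surprise for the impossible clip than for the matched plausible one is credited with “expecting” that physical concept.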
What was even more promising/alarming (depending on which side of the ‘AI is evil’ debate you fall on) was that it took relatively short training periods to get these results; in some instances, less than 28 hours.
“Our object-based model displayed robust VoE effects across all five concepts we studied, despite having been trained on video data in which the specific probe events did not occur,” write the researchers.
In further tests, PLATO demonstrated a solid understanding of what should and shouldn’t be happening, and showed that it could learn from and expand upon its basic training knowledge. Even though PLATO isn’t up to the level of a three-month-old baby yet, it suggests that in-built knowledge is important for getting the full picture and eventually learning and growing.
So while there is still a long way to go, the researchers are optimistic that this work could give us a better understanding of how the human mind works and develops, and eventually help us build a better AI representation of it.