Artificially Intelligent Robots of the Future Could Understand the Needs and Actions of Others

Simulation theory of mind

Alan Winfield, professor of robot ethics at the University of the West of England in Bristol, wants to use “simulation theory of mind” to develop robots that understand the needs and actions of others. For example, a bellhop of the future would ideally be able to anticipate hotel guests’ needs and intentions based on subtle cues, not just respond to a list of verbal commands. In effect, it would “understand” – to the degree that an unconscious machine can – what is going on around it, says Winfield.


Simulation Theory of Mind is an approach to Artificial Intelligence that lets robots internally simulate the anticipated needs and actions of people, things and other robots – and use the results (in conjunction with preprogrammed instructions) to determine an appropriate response. In other words, such robots would run an on-board program that models their own behavior in combination with that of other objects and people, reports Scientific American.
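The core loop described above – simulate each candidate action internally, then pick the one with the best predicted outcome – can be illustrated with a minimal sketch. This is not Winfield's actual code; the one-dimensional world model, the `simulate` and `score` functions, and the action names are all simplified assumptions for illustration.

```python
# Minimal sketch of simulation-based action selection (illustrative only):
# the robot "imagines" each candidate action in an internal world model
# and chooses the one whose predicted outcome scores best.

def simulate(world_state, action):
    """Hypothetical world model: predict the next state after an action."""
    x, goal = world_state
    if action == "forward":
        x += 1
    # "wait" leaves the state unchanged
    return (x, goal)

def score(world_state):
    """Higher is better: closer to the goal position."""
    x, goal = world_state
    return -abs(goal - x)

def choose_action(world_state, actions):
    # Run the internal simulation for each action and keep the best result.
    return max(actions, key=lambda a: score(simulate(world_state, a)))

state = (0, 3)  # robot at position 0, goal at position 3
print(choose_action(state, ["forward", "wait"]))  # -> forward
```

A real robot would replace the toy `simulate` function with a physics- and behavior-based model of itself and the objects and people around it, as the article describes.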

“I build robots that have simulations of themselves and other robots inside themselves,” Winfield says. “The idea of putting a simulation inside a robot… is a really neat way of allowing it to actually predict the future.”

Philosophers and psychologists use the term Theory of Mind to describe the ability to forecast the actions of ourselves and others by imagining ourselves in the position of someone or something else.

Winfield thinks giving robots this ability will help them infer the goals and desires of those around them. For example, when you are inside an elevator and the doors are about to close, and you see a couple running toward you, you immediately know they are trying to catch the elevator – so you hold the doors for them.


For now, robots can only use simulation theory of mind in relatively simple situations. In a paper published in January, a research team led by Winfield described an experiment in which a robot was designed to move along a corridor more safely (that is, without bumping into anything) after it was taught to predict the likely movements of other nearby robots.
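The corridor experiment can be sketched in miniature: the robot predicts another robot's next position and advances only when its own predicted position would not collide. This is a hedged toy version, not the published implementation – the one-dimensional corridor, constant-velocity prediction, and function names are assumptions made for clarity.

```python
# Illustrative sketch of predict-then-avoid navigation in a 1-D corridor:
# the robot simulates the other robot's likely next position and waits
# whenever moving would cause a predicted collision.

def predict(position, velocity):
    """Naive constant-velocity prediction of the other robot's next cell."""
    return position + velocity

def safe_step(my_pos, my_step, other_pos, other_vel):
    """Advance only if the simulated next positions do not collide."""
    my_next = my_pos + my_step
    other_next = predict(other_pos, other_vel)
    # If a collision is predicted, stay put this time step.
    return my_next if my_next != other_next else my_pos

print(safe_step(0, 1, 2, -1))  # other robot predicted at 1 -> wait at 0
print(safe_step(0, 1, 5, -1))  # other robot predicted at 4 -> move to 1
```

The published experiment works in continuous space with real robot controllers, but the principle is the same: prediction lets the robot avoid bumping into its neighbors rather than merely reacting after the fact.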

According to the Scientific American report, simulation theory of mind could be a huge advantage for robots that need to communicate with humans – a feature Winfield says will become increasingly crucial as automation keeps influencing human lives.

Now that Winfield has built machines that carry out simple actions determined by internal simulations of the mind, his next step is to give these robots the ability to verbally describe their intended or past actions.


A good test will be whether one robot can listen to statements of intent made by another robot and correctly interpret them by simulating them. For example, one robot would verbally describe an action – “I’m going to hold the elevator door” – and the other robot, on hearing this, would internally simulate the action and its consequence: the doors staying open.
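At its simplest, the listening robot's task is to map a statement of intent onto a simulated consequence. The sketch below assumes a tiny hand-written table of intents and effects – a stand-in for the simulation a real robot would run, and not anything from Winfield's work.

```python
# Hypothetical sketch: a listening robot maps a verbal statement of intent
# to a known action and reads off that action's simulated consequence.

INTENT_EFFECTS = {
    # intent phrase             -> simulated consequence of that action
    "hold the elevator door":    "doors stay open",
    "release the elevator door": "doors close",
}

def interpret(statement):
    """Find a known intent in the statement and return its simulated outcome."""
    for intent, consequence in INTENT_EFFECTS.items():
        if intent in statement:
            return consequence
    return "unknown intent"

print(interpret("I'm going to hold the elevator door"))  # -> doors stay open
```

In the experiment Winfield envisions, the table lookup would be replaced by the hearer actually running the described action through its own internal simulation.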

If they can understand one another in such a way, they are in theory one step closer to understanding us, Winfield says – “I’m very excited by that experiment.”

Johanna Mischke is Editor-in-Chief at WT | Wearable Technologies – the pioneer and worldwide leading innovation and market development platform for technologies worn close to the body, on the body or even in the body. Besides being an expert for wearables and their broader ecosystem, she is experienced in the startup world and international marketing. Johanna can be reached at j.mischke(at)wearable-technologies.com.