Lovotics = Love + Robots
Written by Harry Fairhead   
Thursday, 30 June 2011

There is a problem to be solved before robots can be entrusted to look after children and the elderly - how to establish a bi-directional relationship with them. The new science of "lovotics" sets out to explore this. 

Humans have long had relationships with inanimate objects - just think of a cuddly toy. The object doesn't have to be soft and cuddly, but it helps. Now we have a new subject, "Lovotics", proposed by Hooman Samani, an artificial intelligence researcher at the Social Robotics Lab of the National University of Singapore, which aims to engineer the love we feel for robots and, perhaps just as important, to find ways for robots to express love back.

The key here is that the robot is intended to take an active role in promoting the love - making it a genuinely bidirectional interaction.

"Even though various fields have proposed ideas about the role and function of love, the current understanding about love is still quite limited. Furthermore, developing an affection system similar to that of the human being presents considerable technological challenges."

The idea is to formalize the complex system that controls how humans feel towards one another - their emotions, reasoning and even their endocrine system. From this you can attempt to build robots that humans can love and that can love them back.

This is not a simple interaction, and so far the robots seem to have displayed jealousy and a constant demand to be stroked by their human keepers. They run around and tweet and twitter like birds - it's all a bit like a determined attempt to be cute.



Before building the robot, a survey revealed that 19% of respondents thought that they could love a robot and, surprisingly, 36% thought they could be loved by a robot. This suggests that unrequited love is going to be a real problem. More seriously, it probably reveals an over-optimism about what robots are capable of, based on an ignorance of the technology.

There are serious issues here, but they are mostly centered on the problem of making a humanoid robot accurate enough to avoid the "uncanny valley" effect - where small errors of behaviour or appearance become creepy rather than endearing. In this case we have small "tribble-like" robots which mimic the role of a pet rather than another human. There is also the argument that humans are suckers enough for "cute" without the device exhibiting reactive behaviours.



Further Information

Lovotics site
(via  Mims's Bits)
