Too Good To Miss: Our Robot Overlords Attempt To Master Jenga
Written by David Conrad   
Friday, 27 December 2019

There are some news items from the past year that deserve a second chance. For any human who has sustained an injury from falling Jenga blocks, the news that a robot has developed enough touch and vision to master the game is bittersweet.

Jenga is the game in which you pull blocks from a tower and place each extracted block on top, making the tower sparser and taller. If you have tried it you know it isn't easy, and it gets harder as more blocks are removed. Building a robot to do the job is tough because it's not just a game of logic and judgment; it is also about learning to move blocks without dislodging the others. It's a physical game.

A team from MIT have put together a special robot arm and the software needed to make it play Jenga, and demonstrated it in a video.

At the start of the video the voice-over says that Jenga requires "a steady hand and nerves of steel". Well, the robot doesn't have to worry about injuries from falling blocks, so it just needs the steady hand.

 

What is interesting is that the robot learned after seeing only 300 towers. It doesn't use the standard deep neural network approach, which was tried in simulation and found to need orders of magnitude more data. There is a neural network involved, but it is a Bayesian neural network and there is a lot of hand-built structure in the model. Essentially, the robot learned that if a block was hard to push then extracting it was probably going to bring the tower down. Notice that this isn't trivial because, among other things, it has to have a concept of what "hard to move" means.

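The MIT model is far more sophisticated than this, fusing vision and force sensing in a hierarchical Bayesian framework, but the core intuition can be caricatured in a few lines of Python. The sketch below is purely illustrative and is not the team's code: the force units, thresholds and "physics" are invented for the example. It simply groups blocks by how much they resist being pushed, keeps a running estimate of how often extractions from each group toppled the tower, and updates those estimates from a few hundred simulated attempts.

import random

# Beta-style counts per resistance bucket:
# [extractions that went fine, extractions that toppled the tower]
buckets = {"loose": [1, 1], "snug": [1, 1], "stuck": [1, 1]}

def bucket_for(push_force):
    # Map a measured push resistance (hypothetical 0-1 units) to a bucket.
    if push_force < 0.3:
        return "loose"
    if push_force < 0.7:
        return "snug"
    return "stuck"

def collapse_probability(bucket):
    # Posterior mean probability that pulling a block in this bucket topples the tower.
    ok, fell = buckets[bucket]
    return fell / (ok + fell)

def record_attempt(push_force, tower_fell):
    # Update the relevant bucket's counts after one extraction attempt.
    b = bucket_for(push_force)
    if tower_fell:
        buckets[b][1] += 1
    else:
        buckets[b][0] += 1

# Simulate ~300 attempts with stand-in physics: the harder a block
# resists, the likelier that pulling it collapses the tower.
for _ in range(300):
    force = random.random()
    fell = random.random() < force ** 2
    record_attempt(force, fell)

# The learned rule of thumb: prefer blocks that gave way easily when probed.
for name in buckets:
    print(name, round(collapse_probability(name), 2))

Run it and the "stuck" bucket should end up with a much higher estimated collapse probability than the "loose" one, which is the kind of rule the robot discovered for itself from relatively few attempts.
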
At the moment the performance isn't good enough to challenge a human Jenga champion, but it's early days. Soon after AlphaGo beat the best human players at Go, we reflected and took comfort in the fact that robots and AI still couldn't stack stones like a human. With this sort of approach, perhaps they can.


More Information

See, feel, act: Hierarchical learning for complex manipulation skills with multisensory fusion

N. Fazeli, M. Oller, J. Wu, Z. Wu, J. B. Tenenbaum and A. Rodriguez

Related Articles

We May Have Lost At Go, But We Still Stack Stones Better

AlphaGo Triumphs In China

AlphaGo Revealed As Mystery Player On Winning Streak

Why AlphaGo Changes Everything

Robots Learn To Navigate Pedestrians

 
