Google Uses AI to Find Where You Live
Written by Harry Fairhead   
Sunday, 18 December 2011

A recent Google research paper outlines how it might use AI to read digits in natural images - specifically Street View photos.

Google has a huge database of photographs of urban (and some not-so-urban) environments. Apart from its curiosity value, there is a lot of data locked up in these images, and getting an AI agent to look through the whole corpus and derive useful information is clearly a good idea.

In Reading Digits in Natural Images with Unsupervised Feature Learning a Google/Stanford team explain how they set about extracting house numbers from Street View images. 

While specific OCR problems have been reasonably well solved, reading even digits in a general image remains difficult and unsolved. If it can be done then this would allow Google to create much more accurate maps and hence navigation services. It is also proposed that, by knowing the house numbers in a photo, geocoding can be improved to provide accurate views of a target destination - that is, not just a general view of where you are going, but a view looking at the house you are trying to travel to.

Existing techniques tend to be based on hand-constructed features which are fine-tuned to the context that the text is found in. In a more general setting these methods are not likely to work as well, if at all. The approach used here is first to locate where in an image a house number plaque might be. Next, the detected areas are passed to digit recognition algorithms.
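The detect-then-recognize pipeline can be sketched roughly as follows. This is only an illustrative toy, not the paper's method: the "detector" here scores windows by mean brightness and the "classifier" is a dummy stand-in, where the real system uses learned models for both stages.

```python
import numpy as np

def detect_plaques(image, window=8, stride=4, threshold=0.5):
    """Slide a window over the image and keep high-scoring regions.
    The 'score' here is just mean intensity - a stand-in for a
    learned plaque detector."""
    h, w = image.shape
    regions = []
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            patch = image[y:y + window, x:x + window]
            if patch.mean() > threshold:
                regions.append((y, x, window, window))
    return regions

def recognize_digits(image, regions, classify):
    """Run a digit classifier over every detected region."""
    return [classify(image[y:y + h, x:x + w]) for (y, x, h, w) in regions]

# Toy example: a bright square on a dark background stands in for a plaque.
img = np.zeros((32, 32))
img[8:16, 8:16] = 1.0
found = detect_plaques(img, window=8, stride=4, threshold=0.9)
digits = recognize_digits(img, found, classify=lambda patch: 7)  # dummy classifier
```

The point of splitting the problem in two is that detection can be tuned for recall (missing a plaque is fatal, a false alarm is not), while the recognizer only ever sees small, roughly-centred crops.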


To test their methods they first created a subset of the data - 600,000 images with digits labeled with the help of Amazon's Mechanical Turk.

 

[Image: Google Street View house numbers]

 

They first tried handcrafted features, as typically used in OCR work, and discovered that this approach didn't work well. Next they tried feature learning algorithms - stacked sparse auto-encoders and a K-means based system. The handcrafted features achieved 63% and 85% accuracy, compared to around 90% for the two learned-feature classifiers - and to human accuracy of 98%.
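The idea behind K-means feature learning can be sketched in a few lines. This is a simplification for illustration, not the authors' implementation (which also involves patch whitening and pooling): learn K centroids from image patches, then represent each patch by its distance to every centroid, giving a feature vector a simple classifier can consume.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(patches, k=4, iters=10):
    """Plain Lloyd's algorithm on flattened patches."""
    centroids = patches[rng.choice(len(patches), k, replace=False)]
    for _ in range(iters):
        # assign each patch to its nearest centroid
        d = ((patches[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = patches[labels == j].mean(0)
    return centroids

def encode(patch, centroids):
    """Feature vector: negative distance to each learned centroid."""
    return -np.sqrt(((patch - centroids) ** 2).sum(1))

# Toy data: 4x4 patches drawn from two clusters
patches = np.vstack([rng.normal(0, 0.1, (50, 16)),
                     rng.normal(1, 0.1, (50, 16))])
centroids = kmeans(patches, k=2)
features = encode(patches[0], centroids)
```

The appeal of this approach is that the features are learned from unlabeled data, so the expensive labeled set is only needed to train the final classifier.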

The large size of the training set proved very important in achieving this performance, which again reinforces the idea that many AI techniques of the past may have underperformed simply because large training sets were not available.

More Information

Reading Digits in Natural Images with Unsupervised Feature Learning

 

 


 


Last Updated ( Sunday, 18 December 2011 )