Google Uses AI to Find Where You Live
Written by Harry Fairhead   
Sunday, 18 December 2011

A recent Google research paper outlines how it might use AI to read digits in natural images - specifically Street View photos.

Google has a huge database of photographs of urban (and some not so urban) environments. Apart from its curiosity value, there is a lot of data locked up in these images, and getting an AI agent to look through the whole corpus and derive useful information is clearly a good idea.

In Reading Digits in Natural Images with Unsupervised Feature Learning a Google/Stanford team explain how they set about extracting house numbers from Street View images. 

While specific OCR problems have been reasonably well solved, reading even digits in a general image remains difficult and unsolved. If it can be done, it would allow Google to create much more accurate maps and hence navigation services. It is also proposed that, by knowing the house numbers in a photo, geocoding can be improved to provide accurate views of a target destination - that is, not just a general view of where you are going, but a view of the actual house you are trying to reach.

Existing techniques tend to be based on hand-constructed features fine-tuned to the context in which the text is found. In a more general setting these methods are not likely to work as well, if at all. The approach used here is first to locate where in an image a house number plaque might be; next, the detected areas are subjected to digit recognition algorithms.
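The two-stage approach can be sketched as a simple pipeline. This is purely illustrative: `detect` and `classify` stand in for the detection and recognition models, which are not specified as code in the paper, and `Region` is a hypothetical bounding-box type.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Region:
    """A candidate house-number plaque region with a detector confidence."""
    x: int
    y: int
    w: int
    h: int
    score: float

def read_house_number(image: List[List[int]],
                      detect: Callable[[list], List[Region]],
                      classify: Callable[[list], int],
                      threshold: float = 0.5) -> str:
    """Stage 1: propose candidate regions; stage 2: read a digit from
    each confident region, left to right, and join into a number."""
    digits = []
    for r in sorted(detect(image), key=lambda r: r.x):  # left-to-right order
        if r.score >= threshold:  # discard low-confidence detections
            crop = [row[r.x:r.x + r.w] for row in image[r.y:r.y + r.h]]
            digits.append(classify(crop))
    return "".join(str(d) for d in digits)
```

With stub models plugged in, a 10x10 image whose detector returns three regions (one below threshold) yields a two-digit string, showing how the stages compose.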


To test their methods they first created a labeled subset of the data consisting of 600,000 digit images, constructed with the help of Amazon's Mechanical Turk.


They first tried hand-crafted features, as typically used in OCR work, and discovered that this approach didn't work well. Next they tried feature learning algorithms - stacked sparse auto-encoders and a K-means based system. The hand-crafted features achieved 63% and 85% accuracy, compared to around 90% for the two learned-feature classifiers - which should itself be compared to human accuracy of 98%.
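The K-means based system follows the feature-learning recipe of sampling small patches, normalizing them, clustering them into a dictionary of centroids, and then encoding images by their similarity to each centroid. The sketch below, using only NumPy, is a simplified assumption-laden version of that idea (one patch per image, a few K-means iterations), not the paper's implementation.

```python
import numpy as np

def kmeans_features(images, k=16, patch=8, n_patches=2000, seed=0):
    """Learn a dictionary of patch centroids with plain K-means,
    then encode each image by a soft 'triangle' activation:
    mean centroid distance minus distance to each centroid, clipped at 0."""
    rng = np.random.default_rng(seed)
    h, w = images.shape[1:3]
    # 1. Sample random patches from the image set.
    idx = rng.integers(0, len(images), n_patches)
    ys = rng.integers(0, h - patch, n_patches)
    xs = rng.integers(0, w - patch, n_patches)
    patches = np.stack([images[i, y:y + patch, x:x + patch].ravel()
                        for i, y, x in zip(idx, ys, xs)])
    # 2. Normalize each patch for brightness and contrast.
    patches = patches - patches.mean(1, keepdims=True)
    patches /= patches.std(1, keepdims=True) + 1e-8
    # 3. K-means: alternate assignment and centroid update.
    centroids = patches[rng.choice(n_patches, k, replace=False)].copy()
    for _ in range(10):
        d = ((patches[:, None] - centroids[None]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            members = patches[assign == j]
            if len(members):
                centroids[j] = members.mean(0)
    # 4. Encode each image (one patch per image, for brevity).
    feats = []
    for img in images:
        p = img[:patch, :patch].ravel()
        p = p - p.mean()
        d = np.sqrt(((p - centroids) ** 2).sum(-1))
        feats.append(np.maximum(0.0, d.mean() - d))
    return np.array(feats)
```

The resulting k-dimensional feature vectors would then feed a standard classifier; the appeal of the method is that the dictionary is learned from the data rather than hand-designed.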

The large size of the training set proved very important in achieving this performance, which again reinforces the idea that many AI techniques used in the past may simply have underperformed because large training sets were not available.

More Information

Reading Digits in Natural Images with Unsupervised Feature Learning

 

 


Last Updated ( Sunday, 18 December 2011 )