To promote research on how machine learning can be applied to natural language problems, Google's Knowledge team is publishing an open source toolkit called Word2vec that aims to learn the meaning behind words.
Google is making great strides with neural network research and, having applied deep learning to photo search and speech recognition, the Google Knowledge team has turned its attention to natural language.
In a blog post titled Learning the meaning behind words, Tomas Mikolov, Ilya Sutskever, and Quoc Le introduce Word2vec, an open source toolkit that can learn concepts by reading large quantities of news articles, without requiring human supervision.
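The released toolkit is a set of command-line C programs. A training run might look something like the following sketch, based on the flags used in the demo scripts that ship with the code; the corpus file name here is purely illustrative:

```shell
# Train skip-gram word vectors on a plain-text corpus (file name illustrative)
./word2vec -train news-corpus.txt -output vectors.bin \
    -cbow 0 -size 200 -window 5 -negative 25 -hs 0 \
    -sample 1e-4 -threads 12 -binary 1

# Interactively query the nearest neighbours of a word
./distance vectors.bin

# Solve word analogies, e.g. entering "france paris germany" should suggest "berlin"
./word-analogy vectors.bin
```

Setting `-cbow 1` instead selects the continuous bag-of-words architecture, the other of the two model architectures described in the accompanying paper.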
The blog explains that Word2vec uses distributed representations of text to capture similarities among concepts and provides this example that demonstrates its success in learning the concept of capital cities.
[Chart: countries and their capital cities plotted by the positions of their word vectors]
This chart demonstrates that Word2vec understands that Paris and France are related in the same way that Berlin and Germany are (capital and country), and not in the way that Madrid and Italy are.
As the researchers explain:
The model not only places similar countries next to each other, but also arranges their capital cities in parallel. The most interesting part is that we didn’t provide any supervised information before or during training. Many more patterns like this arise automatically in training.
They suggest a broad range of potential applications for this type of text representation tool, including knowledge representation and extraction, machine translation, question answering, and conversational systems. For this reason they are open sourcing the code, to enable researchers in machine learning, artificial intelligence, and natural language processing to create real-world applications.
More details of the methodology encapsulated in Word2vec are available in a recent paper, Efficient Estimation of Word Representations in Vector Space, in which Googlers Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean describe recent progress in applying neural networks to understanding human language.
According to a post on Research at Google:
By representing words as high dimensional vectors, they design and train models for learning the meaning of words in an unsupervised manner from large textual corpora. In doing so, they find that similar words arrange themselves near each other in this high-dimensional vector space, allowing for interesting results to arise from mathematical operations on the word representations. For example, this method allows one to solve simple analogies by performing arithmetic on the word vectors and examining the nearest words in the vector space.
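The arithmetic the researchers describe can be illustrated with a toy sketch. The 2-D vectors below are chosen by hand purely for illustration; real Word2vec vectors are learned from data and high-dimensional (typically hundreds of dimensions):

```python
import numpy as np

# Hand-crafted toy "word vectors"; real Word2vec embeddings are learned.
vectors = {
    "france":  np.array([1.0, 1.0]),
    "paris":   np.array([1.0, 2.0]),
    "germany": np.array([2.0, 1.0]),
    "berlin":  np.array([2.0, 2.0]),
    "italy":   np.array([3.0, 1.0]),
    "rome":    np.array([3.0, 2.0]),
}

def nearest(query, exclude):
    """Return the word whose vector is closest to `query` by cosine similarity."""
    def cos(a, b):
        return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vectors if w not in exclude),
               key=lambda w: cos(vectors[w], query))

# Solve the analogy "paris is to france as ? is to germany"
# by arithmetic on the vectors, then finding the nearest word.
query = vectors["paris"] - vectors["france"] + vectors["germany"]
print(nearest(query, exclude={"paris", "france", "germany"}))  # berlin
```

Because the capital-to-country offset is roughly the same for each pair, adding that offset to "germany" lands near "berlin" - the same mechanism that lets the real model solve analogies over vectors learned from billions of words.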
The research paper includes the remarkable statistic that, using the novel architectures they describe, it takes less than a day to learn high quality word vectors from a 1.6-billion-word data set.