One of the big problems of working with language is that a word can mean more than one thing. For example, "apple" might mean a fruit or a computer company. Humans are generally very good at working out, or "disambiguating", the meaning of words, but now we need AI to be just as good.
Google Research has just released the Wikilinks corpus of over 40 million disambiguated "mentions" within over 10 million web pages - over 100 times bigger than anything previously available. A "mention" is a term that links to a Wikipedia page, and the anchor text of the link can be thought of as being defined, or disambiguated, by the content of that Wikipedia page.
To get anything much out of this database it has to be processed. For example, if different mentions link to the same Wikipedia page, then presumably the web pages are talking about the same entity. You could also build a more detailed definition of an entity by aggregating the pages that link to it. You can even tackle disambiguation directly by, say, finding that "Apple" links to two separate Wikipedia pages, one about the fruit and the other about the computer company.
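To give a flavor of this kind of processing, here is a minimal sketch that finds ambiguous anchor texts by grouping mentions by the Wikipedia page they link to. The sample mention pairs are made up for illustration - in practice they would be parsed out of the corpus itself:

```python
from collections import defaultdict

# Hypothetical sample of (anchor text, Wikipedia target) mention pairs;
# in reality these would come from the downloaded corpus files.
mentions = [
    ("Apple", "http://en.wikipedia.org/wiki/Apple"),
    ("Apple", "http://en.wikipedia.org/wiki/Apple_Inc."),
    ("apple", "http://en.wikipedia.org/wiki/Apple"),
    ("Jaguar", "http://en.wikipedia.org/wiki/Jaguar_Cars"),
]

# Group the Wikipedia targets seen for each (case-folded) anchor text.
targets_by_anchor = defaultdict(set)
for anchor, target in mentions:
    targets_by_anchor[anchor.lower()].add(target)

# An anchor that links to more than one page is ambiguous -
# here "apple" maps to both the fruit page and the company page.
ambiguous = {a: t for a, t in targets_by_anchor.items() if len(t) > 1}
print(ambiguous)
```

The same grouping, run the other way round, gives you all the anchor texts used to refer to one entity - the raw material for the aggregated definitions mentioned above.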
This is the raw material that can be used to create new AI applications. You can get the data (about 1GByte compressed) as a download. The data includes the URL of the web page, the anchor text, the Wikipedia target and a few extras. There are also some tools from UMass Amherst to help you get started on processing it.
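As a rough illustration of reading those fields, the sketch below assumes a simplified tab-separated layout with one mention per line: page URL, anchor text, Wikipedia target. The real download uses its own record format, so treat this as a stand-in and consult the UMass Amherst tools for an actual parser:

```python
# Assumed layout (for illustration only): one mention per line,
# tab-separated as page_url <TAB> anchor_text <TAB> wikipedia_target.
def parse_mentions(lines):
    for line in lines:
        fields = line.rstrip("\n").split("\t")
        if len(fields) >= 3:  # skip malformed lines
            yield {"page": fields[0], "anchor": fields[1], "target": fields[2]}

sample = "http://example.com/page1\tApple\thttp://en.wikipedia.org/wiki/Apple_Inc.\n"
print(next(parse_mentions([sample])))
```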
The data has more to say about a mention than simply disambiguating its meaning, and it has a lot of potential to help with natural language processing in general. The statistical approach to understanding language needs lots and lots of data, and this corpus is a big step in the right direction. This is the sort of data that powers Google Translate and could, in time, be used to turn Google's search into a semantic search rather than an ad-hoc collection of signals.