Mining of Massive Datasets, a textbook written for an advanced graduate course taught at Stanford University, has been made available for free download by its authors, Anand Rajaraman and Jeffrey D. Ullman.
The book focuses on mining datasets so large that they don't fit into main memory, drawing its examples from data found on the Web. Its approach is to apply algorithms directly to the data rather than to rely on machine learning.
According to its Preface, the principal topics covered are:
- Distributed file systems and map-reduce as a tool for creating parallel algorithms that succeed on very large amounts of data.
- Similarity search, including the key techniques of minhashing and locality-sensitive hashing.
- Data-stream processing and specialized algorithms for dealing with data that arrives so fast it must be processed immediately or lost.
- The technology of search engines, including Google's PageRank, link-spam detection, and the hubs-and-authorities approach.
- Frequent-itemset mining, including association rules, market-baskets, the A-Priori Algorithm and its improvements.
- Algorithms for clustering very large, high-dimensional datasets.
- Two key problems for Web applications: managing advertising and recommendation systems.
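To give a flavour of the material, here is a minimal sketch of minhashing, one of the similarity-search techniques listed above. This is a generic illustration rather than code from the book: each set is summarised by its minimum value under many random hash functions, and the fraction of matching minima estimates the Jaccard similarity of two sets.

```python
import random

def minhash_signature(items, num_hashes=100, seed=0):
    """Compute a minhash signature: the minimum of each random
    hash function applied to every element of the set."""
    rng = random.Random(seed)
    p = 2**31 - 1  # a large prime for the linear hash h(x) = (a*x + b) mod p
    params = [(rng.randrange(1, p), rng.randrange(p)) for _ in range(num_hashes)]
    return [min((a * hash(x) + b) % p for x in items) for a, b in params]

def estimate_jaccard(sig1, sig2):
    """The fraction of positions where two signatures agree
    approximates the Jaccard similarity of the underlying sets."""
    matches = sum(1 for a, b in zip(sig1, sig2) if a == b)
    return matches / len(sig1)

# Two sets sharing 50 of 150 distinct elements (true Jaccard = 1/3).
sig_a = minhash_signature(set(range(100)))
sig_b = minhash_signature(set(range(50, 150)))
print(estimate_jaccard(sig_a, sig_b))  # close to 0.33
```

The book goes further, showing how locality-sensitive hashing bands these signatures so that only likely-similar pairs are ever compared.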
Although this is an academic text, it is written in an accessible style, making it suitable for other readers with existing knowledge of SQL, data structures and algorithms, and software systems.
If you are interested in big data, this book is a must-read, and since it is free the price is right too.
You can read it online (HTML) or download it as a PDF.
Download it from: