What is Apache Mahout?
Apache Mahout is a project of the Apache Software Foundation that is implemented on top of Apache Hadoop and uses the MapReduce paradigm.
It provides implementations of scalable, distributed machine learning algorithms for clustering, collaborative filtering, and classification. Mahout also includes Java libraries for common mathematical algorithms and operations, with a focus on statistics and linear algebra, as well as primitive Java collections.
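To illustrate the kind of linear-algebra primitive such a library provides, here is a minimal sparse-vector dot product in plain Java. This is a conceptual sketch only; the class and method names are hypothetical and do not reflect Mahout's actual Vector API.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sparse vector: only non-zero entries are stored.
// Illustrative sketch, not Mahout's actual math API.
public class SparseVector {
    private final Map<Integer, Double> entries = new HashMap<>();

    public void set(int index, double value) {
        if (value != 0.0) {
            entries.put(index, value);
        }
    }

    // Dot product: iterate over the vector with fewer non-zero entries.
    public double dot(SparseVector other) {
        SparseVector small = this.entries.size() <= other.entries.size() ? this : other;
        SparseVector big = (small == this) ? other : this;
        double sum = 0.0;
        for (Map.Entry<Integer, Double> e : small.entries.entrySet()) {
            sum += e.getValue() * big.entries.getOrDefault(e.getKey(), 0.0);
        }
        return sum;
    }

    public static void main(String[] args) {
        SparseVector a = new SparseVector();
        a.set(0, 1.0);
        a.set(5, 2.0);
        SparseVector b = new SparseVector();
        b.set(5, 3.0);
        b.set(7, 4.0);
        // Only index 5 overlaps: 2.0 * 3.0 = 6.0
        System.out.println(a.dot(b));
    }
}
```

Storing only non-zero entries is what makes such structures practical for the very high-dimensional, mostly-empty vectors that arise in text mining and recommendation data.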
Apache Mahout is focused on machine learning; the project aims to create a powerful tool that makes building intelligent applications faster and easier.
Machine learning used to be the exclusive domain of academics and companies with large research budgets, but in today's data-driven world there is a growing need for intelligent applications that can learn from data.
Apache Mahout is used to build applications that apply machine learning techniques such as clustering, classification, and collaborative filtering to find similarities in large data sets or to tag large volumes of web content.
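To make the collaborative-filtering idea concrete, the sketch below computes the cosine similarity between two users' item-rating vectors in plain Java. This illustrates the principle only; the names are hypothetical, and Mahout's real recommenders scale this kind of computation to much larger data sets on distributed infrastructure.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative user-based collaborative filtering step:
// cosine similarity between two users' item-rating maps.
// A sketch of the principle, not Mahout's recommender API.
public class UserSimilarity {

    static double cosine(Map<String, Double> u, Map<String, Double> v) {
        double dot = 0.0, normU = 0.0, normV = 0.0;
        for (Map.Entry<String, Double> e : u.entrySet()) {
            Double r = v.get(e.getKey());
            if (r != null) {
                dot += e.getValue() * r; // only co-rated items contribute
            }
            normU += e.getValue() * e.getValue();
        }
        for (double r : v.values()) {
            normV += r * r;
        }
        if (normU == 0.0 || normV == 0.0) {
            return 0.0;
        }
        return dot / (Math.sqrt(normU) * Math.sqrt(normV));
    }

    public static void main(String[] args) {
        Map<String, Double> alice = new HashMap<>();
        alice.put("movieA", 5.0);
        alice.put("movieB", 3.0);
        Map<String, Double> bob = new HashMap<>();
        bob.put("movieA", 4.0);
        bob.put("movieB", 2.0);
        // Similar tastes yield a similarity close to 1.0.
        System.out.println(cosine(alice, bob));
    }
}
```

A recommender built on this idea would rank items liked by the most similar users; doing that over millions of users and items is where a distributed, MapReduce-based implementation pays off.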
Scalable to large volumes of data - the core algorithms are implemented on top of scalable, distributed systems.
Scalable to support various business cases - Mahout is distributed under the commercially friendly Apache Software License.
Scalable community - a large, dynamic, diverse, and responsive community drives discussion of the project and its potential use cases.