| Project Name | Stars | Downloads | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Szt Bigdata | 1,702 | | | | 6 months ago | | | 15 | other | Scala | Shenzhen Metro big data passenger flow analysis system 🚇🚄🌟 |
| Mongo Hadoop | 1,511 | | 78 | 9 | a year ago | 14 | January 27, 2017 | 16 | | Java | MongoDB Connector for Hadoop |
| Studybooks | 999 | | | | 9 years ago | | | 1 | | | My study materials, including books, websites, etc. |
| Android Nosql | 287 | | | | 3 years ago | | | 3 | apache-2.0 | Java | Lightweight, simple structured NoSQL database for Android |
| Bigdata Playground | 154 | | | | 4 years ago | | | 4 | apache-2.0 | TypeScript | A complete example of a big data application using: Kubernetes (kops/aws), Apache Spark SQL/Streaming/MLib, Apache Flink, Scala, Python, Apache Kafka, Apache Hbase, Apache Parquet, Apache Avro, Apache Storm, Twitter Api, MongoDB, NodeJS, Angular, GraphQL |
| Mongodb Spark Demo | 41 | | | | 9 years ago | | | 2 | apache-2.0 | Java | Spark app that demonstrates reading and writing data to and from MongoDB and BSON files |
| Big Data Exploration | 37 | | | | 5 years ago | | | | | JavaScript | [Archive] Intern project - Big Data Exploration using MongoDB - This Repository is NOT a supported MongoDB product |
| Pentest Wiki | 35 | | | | 4 years ago | | | | | | Standardized vulnerability names and remediation recommendations for penetration test reports |
| Mongoreduce | 29 | | | | 12 years ago | | | | | Java | Hadoop input and output formats for MongoDB |
| Zerowing | 28 | | | | 9 years ago | | | | mit | Java | A set of tools for copying and streaming data from MongoDB into HBase |
The MongoDB Connector for Hadoop is now officially end-of-life (EOL). No further development, bugfixes, enhancements, documentation changes or maintenance will be provided by this project and pull requests will no longer be accepted.
The MongoDB Connector for Hadoop is a library which allows MongoDB (or backup files in its data format, BSON) to be used as an input source, or output destination, for Hadoop MapReduce tasks. It is designed to allow greater flexibility and performance and to make it easy to integrate data in MongoDB with other parts of the Hadoop ecosystem, including Pig, Spark, MapReduce, Hadoop Streaming, Hive, and Flume.
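As a rough sketch of what that looks like in practice, a job can read documents straight out of a collection with `MongoInputFormat` and write aggregated results back with `MongoOutputFormat`. The sketch below assumes Hadoop 2.x; the `demo.events` collection, its `status` field, and the URIs are hypothetical:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.bson.BSONObject;
import org.bson.BasicBSONObject;

import com.mongodb.hadoop.MongoInputFormat;
import com.mongodb.hadoop.MongoOutputFormat;
import com.mongodb.hadoop.io.BSONWritable;
import com.mongodb.hadoop.util.MongoConfigUtil;

public class StatusCount {

    // Each input record is one MongoDB document: the key is its _id,
    // the value is the document itself as a BSONObject.
    public static class StatusMapper extends Mapper<Object, BSONObject, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);

        @Override
        protected void map(Object key, BSONObject doc, Context context)
                throws java.io.IOException, InterruptedException {
            Object status = doc.get("status"); // "status" is a made-up field for this example
            if (status != null) {
                context.write(new Text(status.toString()), ONE);
            }
        }
    }

    // Sums the counts and writes one output document per status value.
    public static class StatusReducer extends Reducer<Text, IntWritable, Text, BSONWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws java.io.IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new BSONWritable(new BasicBSONObject("count", sum)));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical URIs; point these at your own deployment.
        MongoConfigUtil.setInputURI(conf, "mongodb://localhost:27017/demo.events");
        MongoConfigUtil.setOutputURI(conf, "mongodb://localhost:27017/demo.status_counts");

        Job job = Job.getInstance(conf, "status count");
        job.setJarByClass(StatusCount.class);
        job.setMapperClass(StatusMapper.class);
        job.setReducerClass(StatusReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(BSONWritable.class);
        job.setInputFormatClass(MongoInputFormat.class);
        job.setOutputFormatClass(MongoOutputFormat.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

`MongoInputFormat` computes the input splits itself, so each mapper reads only its own slice of the collection rather than the whole thing.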
Check out the releases page for the latest stable release.
The best way to install the Hadoop connector is through a dependency management system like Maven:

```xml
<dependency>
    <groupId>org.mongodb.mongo-hadoop</groupId>
    <artifactId>mongo-hadoop-core</artifactId>
    <version>1.5.1</version>
</dependency>
```

or Gradle:

```groovy
compile 'org.mongodb.mongo-hadoop:mongo-hadoop-core:1.5.1'
```
You can also download the jar files yourself from the Maven Central Repository.
New releases are announced on the releases page.
Only certain minimum versions of Hadoop and its ecosystem tools have been tested with the Hadoop connector; earlier versions may work, but haven't been tested.
You must have at least version 3.0.0 of the MongoDB Java Driver installed in order to use the Hadoop connector.
Run `./gradlew jar` to build the jars. The jars will be placed in `build/libs` for each module; e.g. for the core module, it will be generated in the `core/build/libs` directory. The Hadoop connector will build against the versions of Hadoop, Hive, Pig, etc. as specified in `build.gradle`.
After successfully building, you must copy the jars to the lib directory on each node in your Hadoop cluster. This is usually one of the following locations, depending on which Hadoop release you are using:

- `$HADOOP_PREFIX/lib/`
- `$HADOOP_PREFIX/share/hadoop/mapreduce/`
- `$HADOOP_PREFIX/share/hadoop/lib/`
mongo-hadoop should work on any distribution of Hadoop. Should you run into an issue, please file a Jira ticket.
For full documentation, please check out the Hadoop Connector Wiki. The documentation includes installation instructions and configuration options, as well as specific instructions and examples for each Hadoop application the connector supports.
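For a taste of what configuration looks like, the snippet below sets a few of the connector's documented properties on a plain Hadoop `Configuration`; the URIs, query, and field names are invented for the example:

```java
import org.apache.hadoop.conf.Configuration;

public class ConnectorConfigExample {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Where to read from and write to (hypothetical URIs).
        conf.set("mongo.input.uri", "mongodb://localhost:27017/demo.events");
        conf.set("mongo.output.uri", "mongodb://localhost:27017/demo.results");

        // Only feed documents matching this query to the mappers
        // (a JSON string; the field and value here are made up).
        conf.set("mongo.input.query", "{\"year\": {\"$gte\": 2014}}");

        // Project away everything except the fields the job needs.
        conf.set("mongo.input.fields", "{\"status\": 1, \"year\": 1}");
    }
}
```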
Amazon Elastic MapReduce is a managed Hadoop framework that allows you to submit jobs to a cluster of customizable size and configuration, without needing to deal with provisioning nodes and installing software.
Using EMR with the MongoDB Connector for Hadoop allows you to run MapReduce jobs against MongoDB backup files stored in S3.
Submitting jobs to EMR using the MongoDB Connector for Hadoop simply requires that the bootstrap actions fetch the dependencies (the MongoDB Java Driver, the mongo-hadoop-core library, etc.) and place them into the Hadoop distribution's `lib` folders.
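As an illustrative sketch (the bucket name and path are hypothetical), pointing a job at BSON backup files in S3 is just a matter of using `BSONFileInputFormat` with an `s3://` input path:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

import com.mongodb.hadoop.BSONFileInputFormat;

public class BsonFromS3JobSetup {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "bson from s3");
        job.setJarByClass(BsonFromS3JobSetup.class);

        // Read mongodump output (.bson files) directly; each record is one
        // document, just as with MongoInputFormat.
        job.setInputFormatClass(BSONFileInputFormat.class);

        // Hypothetical bucket and path; on EMR an S3 URI works wherever an
        // HDFS path would.
        FileInputFormat.addInputPath(job, new Path("s3://my-bucket/dump/demo/events.bson"));

        // ... set the mapper, reducer, and output format as in a normal job ...
    }
}
```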
For a full example (running the Enron example on Elastic MapReduce), please see here.
If your code introduces new features, add tests that cover them if possible and make sure that `./gradlew check` still passes. For instructions on how to run the tests, see the Running the Tests section in the wiki. If you're not sure how to write a test for a feature or have trouble with a test failure, please post on the mongodb-user Google Group with details and we will try to help. Note: until findbugs updates its dependencies, running `./gradlew check` on Java 8 will fail.
Luke Lovett
See CONTRIBUTORS.md.
Issue tracking: https://jira.mongodb.org/browse/HADOOP/
Discussion: http://groups.google.com/group/mongodb-user/