Project | Stars | Most Recent Commit | License | Language | Description
---|---|---|---|---|---
Mrjob | 2,584 | a year ago | other | Python | Run MapReduce jobs on Hadoop or Amazon Web Services
Spark Redshift | 514 | 4 years ago | apache-2.0 | Scala | Redshift data source for Apache Spark
Trendingtopics | 351 | 12 years ago | | Ruby | Rails app for tracking trends in server logs, powered by the Cloudera Hadoop Distribution on EC2
Sagemaker Spark | 285 | 3 days ago | apache-2.0 | Scala | A Spark library for Amazon SageMaker
Spark Jupyter Aws | 255 | 6 years ago | | Jupyter Notebook | A guide on how to set up Jupyter with Pyspark painlessly on AWS EC2 clusters, with S3 I/O support
Emr Dynamodb Connector | 204 | 5 months ago | apache-2.0 | Java | Implementations of open source Apache Hadoop/Hive interfaces which allow for ingesting data from Amazon DynamoDB
Elastic Mapreduce Ruby | 86 | 9 years ago | apache-2.0 | Ruby | Amazon's elastic mapreduce ruby client, Ruby 1.9.x compatible
Scalding Example Project | 85 | 9 years ago | apache-2.0 | Scala | The Scalding WordCountJob example as a standalone SBT project with Specs2 tests, runnable on Amazon EMR
Emr S3 Io | 29 | 10 years ago | | Java | Hadoop IO for Amazon S3
Nutch Aws | 23 | 8 years ago | | Makefile | |
mrjob is a Python 2.7/3.4+ package that helps you write and run Hadoop Streaming jobs.
- Stable version (v0.7.4) documentation
- Development version documentation
mrjob fully supports Amazon's Elastic MapReduce (EMR) service, which allows you to buy time on a Hadoop cluster on an hourly basis. mrjob has basic support for Google Cloud Dataproc (Dataproc) which allows you to buy time on a Hadoop cluster on a minute-by-minute basis. It also works with your own Hadoop cluster.
Some important features:

- Run jobs on EMR, Google Cloud Dataproc, your own Hadoop cluster, or locally (for testing)
- Write multi-step jobs (one map-reduce step feeds into the next)
- Duplicate your production environment inside Hadoop
  - Upload your source tree and put it in your job's `$PYTHONPATH`
  - Run make and other setup scripts
  - Set environment variables (e.g. `$TZ`)
  - Setup handled transparently by the `mrjob.conf` config file
- Minimal setup
  - To run on EMR, set `$AWS_ACCESS_KEY_ID` and `$AWS_SECRET_ACCESS_KEY`
  - To run on Dataproc, set `$GOOGLE_APPLICATION_CREDENTIALS`
  - No setup needed to use mrjob on your own Hadoop cluster
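The "multi-step jobs" feature chains the output of one map-reduce step into the next. As a rough sketch of that dataflow in plain Python (no mrjob or Hadoop involved; the two steps and helper names here are hypothetical, not mrjob API):

```python
from collections import defaultdict


def word_count_step(lines):
    """Step 1: classic word count (mapper + reducer collapsed for clarity)."""
    counts = defaultdict(int)
    for line in lines:
        for word in line.split():
            counts[word.lower()] += 1
    # The (word, count) pairs emitted here become the input of the next step.
    return counts.items()


def most_common_step(pairs):
    """Step 2: reduce all (word, count) pairs to the single most common word."""
    return max(pairs, key=lambda pair: pair[1])


lines = ["mrjob runs jobs", "mrjob chains steps", "steps feed steps"]
word, count = most_common_step(word_count_step(lines))
```

In mrjob itself you would express the same chain by overriding `steps()` and returning a list of steps rather than wiring functions together by hand.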
```sh
pip install mrjob
```
As of v0.7.0, Amazon Web Services and Google Cloud Services are optional
dependencies. To use these, install with the `aws` and `google` targets,
respectively. For example:

```sh
pip install mrjob[aws]
```
Code for this example and more live in `mrjob/examples`.
"""The classic MapReduce job: count the frequency of words.
"""
from mrjob.job import MRJob
import re
WORD_RE = re.compile(r"[\w']+")
class MRWordFreqCount(MRJob):
def mapper(self, _, line):
for word in WORD_RE.findall(line):
yield (word.lower(), 1)
def combiner(self, word, counts):
yield (word, sum(counts))
def reducer(self, word, counts):
yield (word, sum(counts))
if __name__ == '__main__':
MRWordFreqCount.run()
```sh
# locally
python mrjob/examples/mr_word_freq_count.py README.rst > counts
# on EMR
python mrjob/examples/mr_word_freq_count.py README.rst -r emr > counts
# on Dataproc
python mrjob/examples/mr_word_freq_count.py README.rst -r dataproc > counts
# on your Hadoop cluster
python mrjob/examples/mr_word_freq_count.py README.rst -r hadoop > counts
```
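With mrjob's default output protocol, each line of `counts` holds a JSON-encoded key and value separated by a tab. A small stdlib sketch of parsing one such line (the sample line and helper name are illustrative, not mrjob API):

```python
import json


def parse_output_line(line):
    """Split a default-protocol mrjob output line into (key, value)."""
    raw_key, raw_value = line.rstrip("\n").split("\t", 1)
    return json.loads(raw_key), json.loads(raw_value)


# A made-up output line: the word "mrjob" appeared 12 times.
word, count = parse_output_line('"mrjob"\t12\n')
```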
To run on EMR, set `$AWS_ACCESS_KEY_ID` and `$AWS_SECRET_ACCESS_KEY`
accordingly. To run in other AWS regions, upload your source tree, run `make`,
and use other advanced mrjob features, you'll need to set up `mrjob.conf`.
mrjob looks for its conf file in:

- `$MRJOB_CONF`
- `~/.mrjob.conf`
- `/etc/mrjob.conf`
See the mrjob.conf documentation for more information.
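As a minimal sketch of what a conf file can look like (the runner options shown here are illustrative assumptions; consult the mrjob.conf documentation for the authoritative option names and values):

```yaml
# Hypothetical ~/.mrjob.conf: per-runner defaults applied to every job.
runners:
  emr:
    region: us-west-2        # assumed AWS region
    instance_type: m5.xlarge # assumed EC2 instance type
  hadoop:
    setup:
      - export TZ=UTC        # environment setup run before each task
```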
Thanks to Greg Killion (ROMEO ECHO_DELTA) for the logo.