Mrjob

Run MapReduce jobs on Hadoop or Amazon Web Services
Alternatives To Mrjob

  • Mrjob (Python, other license): 2,584 stars, 211 open issues, 62 releases (latest September 17, 2020), last commit a year ago
    Run MapReduce jobs on Hadoop or Amazon Web Services
  • Spark Redshift (Scala, apache-2.0): 514 stars, 134 open issues, 10 releases (latest November 01, 2016), last commit 4 years ago
    Redshift data source for Apache Spark
  • Trendingtopics (Ruby): 351 stars, 10 open issues, last commit 12 years ago
    Rails app for tracking trends in server logs - powered by the Cloudera Hadoop Distribution on EC2
  • Sagemaker Spark (Scala, apache-2.0): 285 stars, 34 open issues, 36 releases (latest April 23, 2021), last commit 23 days ago
    A Spark library for Amazon SageMaker.
  • Spark Jupyter Aws (Jupyter Notebook): 255 stars, 2 open issues, last commit 6 years ago
    A guide on how to set up Jupyter with Pyspark painlessly on AWS EC2 clusters, with S3 I/O support
  • Emr Dynamodb Connector (Java, apache-2.0): 204 stars, 57 open issues, 15 releases (latest September 28, 2021), last commit 5 months ago
    Implementations of open source Apache Hadoop/Hive interfaces which allow for ingesting data from Amazon DynamoDB
  • Elastic Mapreduce Ruby (Ruby, apache-2.0): 86 stars, 8 open issues, last commit 9 years ago
    Amazon's elastic mapreduce ruby client. Ruby 1.9.X compatible
  • Scalding Example Project (Scala, apache-2.0): 85 stars, 3 open issues, last commit 9 years ago
    The Scalding WordCountJob example as a standalone SBT project with Specs2 tests, runnable on Amazon EMR
  • Emr S3 Io (Java): 29 stars, 5 open issues, last commit 10 years ago
    Hadoop IO for Amazon S3
  • Nutch Aws (Makefile): 23 stars, 1 open issue, last commit 8 years ago


Readme

mrjob: the Python MapReduce library

https://github.com/Yelp/mrjob/raw/master/docs/logos/logo_medium.png

mrjob is a Python 2.7/3.4+ package that helps you write and run Hadoop Streaming jobs.

Stable version (v0.7.4) documentation

Development version documentation

mrjob fully supports Amazon's Elastic MapReduce (EMR) service, which allows you to buy time on a Hadoop cluster on an hourly basis. mrjob has basic support for Google Cloud Dataproc (Dataproc) which allows you to buy time on a Hadoop cluster on a minute-by-minute basis. It also works with your own Hadoop cluster.

Some important features:

  • Run jobs on EMR, Google Cloud Dataproc, your own Hadoop cluster, or locally (for testing).
  • Write multi-step jobs (one map-reduce step feeds into the next; see the sketch after this list)
  • Easily launch Spark jobs on EMR or your own Hadoop cluster
  • Duplicate your production environment inside Hadoop
    • Upload your source tree and put it in your job's $PYTHONPATH
    • Run make and other setup scripts
    • Set environment variables (e.g. $TZ)
    • Easily install Python packages from tarballs (EMR only)
    • Setup handled transparently by the mrjob.conf config file
  • Automatically interpret error logs
  • SSH tunnel to the Hadoop job tracker (EMR only)
  • Minimal setup
    • To run on EMR, set $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY
    • To run on Dataproc, set $GOOGLE_APPLICATION_CREDENTIALS
    • No setup needed to use mrjob on your own Hadoop cluster
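
For example, a multi-step job overrides steps() and returns a list of MRStep instances; the output of each step becomes the input of the next. The sketch below is adapted from the most-used-word example in mrjob's documentation: the first step counts words, the second picks the most frequent one.

from mrjob.job import MRJob
from mrjob.step import MRStep
import re

WORD_RE = re.compile(r"[\w']+")


class MRMostUsedWord(MRJob):

    def steps(self):
        # step 1: count word frequencies; step 2: pick the most frequent word
        return [
            MRStep(mapper=self.mapper_get_words,
                   combiner=self.combiner_count_words,
                   reducer=self.reducer_count_words),
            MRStep(reducer=self.reducer_find_max_word)
        ]

    def mapper_get_words(self, _, line):
        for word in WORD_RE.findall(line):
            yield (word.lower(), 1)

    def combiner_count_words(self, word, counts):
        yield (word, sum(counts))

    def reducer_count_words(self, word, counts):
        # discard the key so every (count, word) pair reaches one reducer
        yield None, (sum(counts), word)

    def reducer_find_max_word(self, _, word_count_pairs):
        # pairs compare by count first, so max() finds the most-used word
        yield max(word_count_pairs)


if __name__ == '__main__':
    MRMostUsedWord.run()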

Installation

pip install mrjob

As of v0.7.0, Amazon Web Services and Google Cloud Services are optional dependencies. To use these, install with the aws and google extras, respectively. For example:

pip install mrjob[aws]
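
The google extra installs the Dataproc dependencies the same way:

pip install mrjob[google]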

A Simple Map Reduce Job

Code for this example and more live in mrjob/examples.

"""The classic MapReduce job: count the frequency of words.
"""
from mrjob.job import MRJob
import re

WORD_RE = re.compile(r"[\w']+")


class MRWordFreqCount(MRJob):

    def mapper(self, _, line):
        # emit (word, 1) for every word in the input line
        for word in WORD_RE.findall(line):
            yield (word.lower(), 1)

    def combiner(self, word, counts):
        # sum each word's counts locally, before the shuffle
        yield (word, sum(counts))

    def reducer(self, word, counts):
        # sum the partial counts from all mappers for each word
        yield (word, sum(counts))


if __name__ == '__main__':
    MRWordFreqCount.run()

Try It Out!

# locally
python mrjob/examples/mr_word_freq_count.py README.rst > counts
# on EMR
python mrjob/examples/mr_word_freq_count.py README.rst -r emr > counts
# on Dataproc
python mrjob/examples/mr_word_freq_count.py README.rst -r dataproc > counts
# on your Hadoop cluster
python mrjob/examples/mr_word_freq_count.py README.rst -r hadoop > counts
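
You can also launch a job from another Python program instead of the command line. This is a minimal sketch using mrjob's documented runner interface (make_runner(), cat_output(), and parse_output()); the inline runner runs the job in the same process, which is handy for tests:

from mrjob.examples.mr_word_freq_count import MRWordFreqCount

# run the job in-process with the inline runner
mr_job = MRWordFreqCount(args=['-r', 'inline', 'README.rst'])
with mr_job.make_runner() as runner:
    runner.run()
    # cat_output() streams the raw output; parse_output() decodes it
    # back into (key, value) pairs
    for word, count in mr_job.parse_output(runner.cat_output()):
        print(word, count)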

Setting up EMR on Amazon

Setting up Dataproc on Google

Advanced Configuration

To run in other AWS regions, upload your source tree, run make, and use other advanced mrjob features, you'll need to set up mrjob.conf. mrjob looks for its conf file in:

  • The contents of $MRJOB_CONF
  • ~/.mrjob.conf
  • /etc/mrjob.conf

See the mrjob.conf documentation for more information.
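
As a rough illustration, mrjob.conf is a YAML file keyed by runner name. The option names below (region, setup) come from mrjob's runner options; treat the values as placeholders for your own environment:

runners:
  emr:
    region: us-west-2        # run in a specific AWS region
    setup:
    - make                   # build your source tree before each task
  hadoop:
    setup:
    - export TZ=UTC          # set environment variables for tasks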

Project Links

Reference

More Information

Thanks to Greg Killion (ROMEO ECHO_DELTA) for the logo.
