tumblr-crawler


This is a Python script that lets you easily download all the photos and videos from your favorite Tumblr blogs.


How to Discuss

  • Feel free to join our Slack, where you can ask questions and help answer them.
  • You can also open a new issue on GitHub.


For Programmers and Developers

If you know how to install Python and pip, simply run pip install requests xmltodict, or clone the repository and install the dependencies from requirements.txt:


$ git clone
$ cd tumblr-crawler
$ pip install -r requirements.txt

For Non-Programmers

Configuration and Downloading

There are two ways to specify the sites you want to download: create a sites.txt file, or pass them as a command-line parameter.

Use sites.txt

Open the file sites.txt in a text editor and add the sites you want to download, separated by comma/space/tab/newline, with no suffixes. For example, to download vogue and gucci, compose the file like this:

vogue,gucci

Then save the file and run the script with Python in your terminal, or just double-click the script file so it is run by Python automatically.
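The mixed-separator file format described above (comma/space/tab/newline all accepted) could be parsed with a small helper like this. This is an illustrative sketch with a hypothetical function name, not the script's actual code:

```python
import re

def parse_sites(text):
    # Split on any run of commas or whitespace (spaces, tabs,
    # newlines) and drop empty entries.
    return [s for s in re.split(r"[,\s]+", text) if s]
```

With the example file above, parse_sites would return the two blog names regardless of which separator was used.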

Use the command line parameter (only for OS experts)

If you are familiar with command lines in Windows or Unix systems, you may run the script with a parameter to specify the sites:

python site1,site2

Separate the site names with commas, with no spaces and no suffixes.
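Turning that comma-separated parameter into a list of sites is straightforward; here is a hypothetical sketch (the function name is an assumption, not taken from the script):

```python
import sys

def sites_from_argv(argv):
    # "site1,site2" -> ["site1", "site2"]; commas only, no spaces.
    if len(argv) < 2:
        return []
    return [s for s in argv[1].split(",") if s]

if __name__ == "__main__":
    print(sites_from_argv(sys.argv))
```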

How the files get downloaded and stored

The photos/videos are saved into folders named after each Tumblr blog, created in the current directory.

The script will not re-download photos or videos that have already been downloaded, so running it several times does no harm. Re-running it will also fetch any files that are still missing.

Use Proxies (Optional)

You may want to use proxies when downloading. Refer to ./proxies_sample1.json and ./proxies_sample2.json, and save your own proxies to ./proxies.json in JSON format. You can check that the content is valid with any JSON validator.

If ./proxies.json is an empty file, no proxies will be used during downloading.

If you are using Shadowsocks in global mode, your ./proxies.json can look like this (127.0.0.1:1080 is the default local Shadowsocks address; adjust it to match your own setup):

    {
        "http": "socks5://127.0.0.1:1080",
        "https": "socks5://127.0.0.1:1080"
    }

And now you can enjoy your downloads.

More Customizations for Programmers Only

Near the top of the script you will find several tunable constants. The values shown below are typical defaults; the exact values in your copy may differ:

    # Setting timeout
    TIMEOUT = 10

    # Retry times
    RETRY = 5

    # Medium Index Number that Starts from
    START = 0

    # Numbers of photos/videos per page
    MEDIA_NUM = 50

    # Numbers of downloading threads concurrently
    THREADS = 10

You can set TIMEOUT to another value, e.g. 50, according to your network quality.

The script will retry downloading an image or video several times (the default RETRY value is 5).
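The retry behaviour could be sketched like this. It is illustrative: fetch_with_retry and its delay parameter are hypothetical, not part of the script:

```python
import time

def fetch_with_retry(fetch, url, retry=5, timeout=10, delay=1):
    # Call `fetch` up to `retry` times, pausing between attempts,
    # and re-raise the last error if every attempt fails.
    last_err = None
    for _ in range(retry):
        try:
            return fetch(url, timeout=timeout)
        except Exception as err:
            last_err = err
            time.sleep(delay)
    raise last_err
```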

You can also download only photos or only videos by commenting out one of the two calls inside download_media. The helper names below are illustrative; check the actual method body in your copy of the script:

    def download_media(self, site):
        # only download photos
        self.download_photos(site)

    def download_media(self, site):
        # only download videos
        self.download_videos(site)
