.. image:: https://readthedocs.org/projects/scrapy-redis/badge/?version=latest
    :target: https://readthedocs.org/projects/scrapy-redis/?badge=latest
    :alt: Documentation Status

.. image:: https://img.shields.io/pypi/v/scrapy-redis.svg
    :target: https://pypi.python.org/pypi/scrapy-redis

.. image:: https://img.shields.io/pypi/pyversions/scrapy-redis.svg
    :target: https://pypi.python.org/pypi/scrapy-redis

.. image:: https://img.shields.io/travis/rolando/scrapy-redis.svg
    :target: https://travis-ci.org/rolando/scrapy-redis

.. image:: https://codecov.io/github/rolando/scrapy-redis/coverage.svg?branch=master
    :alt: Coverage Status
    :target: https://codecov.io/github/rolando/scrapy-redis

.. image:: https://landscape.io/github/rolando/scrapy-redis/master/landscape.svg?style=flat
    :target: https://landscape.io/github/rolando/scrapy-redis/master
    :alt: Code Quality Status

.. image:: https://requires.io/github/rolando/scrapy-redis/requirements.svg?branch=master
    :alt: Requirements Status
    :target: https://requires.io/github/rolando/scrapy-redis/requirements/?branch=master

Redis-based components for Scrapy.

* Distributed crawling/scraping

    You can start multiple spider instances that share a single redis queue.
    Best suited for broad multi-domain crawls.

* Distributed post-processing

    Scraped items get pushed into a redis queue, meaning that you can start as
    many post-processing processes as needed, all sharing the same items queue.

* Scrapy plug-and-play components

    Scheduler + Duplication Filter, Item Pipeline, Base Spiders.

.. note:: These features cover the basic case of distributing the workload
    across multiple workers. If you need more features like URL expiration or
    advanced URL prioritization, we suggest you take a look at the Frontera_
    project.

Requirements:

* Scrapy >= 1.1
* redis-py >= 2.10

Use the following settings in your project:

.. code-block:: python

    # Enable the redis-backed scheduler that stores the requests queue in redis.
    SCHEDULER = "scrapy_redis.scheduler.Scheduler"
    # Ensure all spiders share the same duplicates filter through redis.
    DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

    # Default requests serializer is pickle; any module exposing loads/dumps works.
    #SCHEDULER_SERIALIZER = "scrapy_redis.picklecompat"
    # Don't clean up redis queues, allowing crawls to be paused and resumed.
    #SCHEDULER_PERSIST = True
    # Schedule requests using a priority queue (default) or one of the alternatives.
    #SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.PriorityQueue'
    #SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.FifoQueue'
    #SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.LifoQueue'
    # Max idle time (seconds) before the spider closes when no requests are left.
    #SCHEDULER_IDLE_BEFORE_CLOSE = 10

    # Store scraped items in redis for post-processing.
    ITEM_PIPELINES = {
        'scrapy_redis.pipelines.RedisPipeline': 300,
    }
    # Redis key where the item pipeline stores serialized items.
    #REDIS_ITEMS_KEY = '%(spider)s:items'
    # Items serializer; any importable path to a callable can be used.
    #REDIS_ITEMS_SERIALIZER = 'json.dumps'

    # Redis connection: either host/port or a full URL (the URL takes precedence).
    #REDIS_HOST = 'localhost'
    #REDIS_PORT = 6379
    #REDIS_URL = 'redis://user:pass@hostname:9001'
    # Custom redis client parameters (e.g. socket timeout) and client class.
    #REDIS_PARAMS = {}
    #REDIS_PARAMS['redis_cls'] = 'myproject.RedisClient'

    # If True, start urls are read with the SPOP operation; you have to use
    # SADD to push them to the redis queue.
    #REDIS_START_URLS_AS_SET = False
    # If True, start urls are read with the ZREVRANGE and ZREMRANGEBYRANK
    # operations; you have to use ZADD to push them.
    #REDIS_START_URLS_AS_ZSET = False
    # Default start urls key is '<spider name>:start_urls'.
    #REDIS_START_URLS_KEY = '%(name)s:start_urls'

    # Use an encoding other than utf-8 for redis.
    #REDIS_ENCODING = 'latin1'
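
The scheduler serializer only has to be an importable module that exposes
``loads`` and ``dumps`` functions (``scrapy_redis.picklecompat`` is a thin
wrapper around pickle). A minimal sketch of a custom serializer module,
assuming a hypothetical ``myproject/serializer.py``:

.. code-block:: python

    # myproject/serializer.py -- hypothetical module; point
    # SCHEDULER_SERIALIZER = "myproject.serializer" at it in your settings.
    import pickle


    def dumps(obj):
        # Serialize a request dict to bytes before it is stored in redis.
        # Protocol 2 is an arbitrary choice that stays readable from Python 2.
        return pickle.dumps(obj, protocol=2)


    def loads(data):
        # Restore the request dict read back from redis.
        return pickle.loads(data)

Any pair of functions with these signatures works, as long as every worker
sharing the queue uses the same serializer.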

.. note::

    Version 0.3 changed the requests serialization from marshal to cPickle;
    therefore, requests persisted with version 0.2 will not work on 0.3.

This example illustrates how to share a spider's requests queue across
multiple spider instances, which is highly suitable for broad crawls.

1. Set up the ``scrapy_redis`` package in your PYTHONPATH.

2. Run the crawler for the first time, then stop it::

    $ cd example-project
    $ scrapy crawl dmoz
    ... [dmoz] ...
    ^C

3. Run the crawler again to resume the stopped crawl::

    $ scrapy crawl dmoz
    ... [dmoz] DEBUG: Resuming crawl (9019 requests scheduled)

4. Start one or more additional scrapy crawlers::

    $ scrapy crawl dmoz
    ... [dmoz] DEBUG: Resuming crawl (8712 requests scheduled)

5. Start one or more post-processing workers::

    $ python process_items.py dmoz:items -v
    ...
    Processing: Kilani Giftware (http://www.dmoz.org/Computers/Shopping/Gifts/)
    Processing: NinjaGizmos.com (http://www.dmoz.org/Computers/Shopping/Gifts/)
    ...
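
The ``process_items.py`` script in the example project pops serialized items
from the shared redis list. A minimal sketch of such a worker, assuming a
redis server on localhost and JSON-serialized items (the ``name`` and ``link``
field names are assumptions):

.. code-block:: python

    import json

    import redis

    r = redis.StrictRedis(host='localhost', port=6379)

    while True:
        # BLPOP blocks until an item is pushed to the shared items queue.
        _, data = r.blpop('dmoz:items')
        item = json.loads(data)
        # Field names depend on your item definition.
        print('Processing: %s (%s)' % (item.get('name'), item.get('link')))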

The class ``scrapy_redis.spiders.RedisSpider`` enables a spider to read urls
from redis. The urls in the redis queue are processed one after another; if
the first request yields more requests, the spider processes those requests
before fetching another url from redis.

For example, create a file ``myspider.py`` with the code below:

.. code-block:: python

    from scrapy_redis.spiders import RedisSpider


    class MySpider(RedisSpider):
        name = 'myspider'

        def parse(self, response):
            # do stuff
            pass

Then:

1. run the spider::

    scrapy runspider myspider.py

2. push urls to redis::

    redis-cli lpush myspider:start_urls http://google.com

.. note::

    These spiders rely on the spider idle signal to fetch start urls, hence
    there may be a delay of a few seconds between the time you push a new url
    and the spider starting to crawl it.
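
Start urls can also be pushed from Python with redis-py instead of
``redis-cli``; a minimal sketch, assuming a redis server on localhost and the
default start urls key:

.. code-block:: python

    import redis

    r = redis.StrictRedis(host='localhost', port=6379)
    # The default start urls key is '<spider name>:start_urls' and is a plain
    # redis list, so lpush/rpush are used unless REDIS_START_URLS_AS_SET or
    # REDIS_START_URLS_AS_ZSET is enabled.
    r.lpush('myspider:start_urls', 'http://google.com')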

* Donate BTC: 13haqimDV7HbGWtz7uC6wP1zvsRWRAhPmF
* Donate BCC: CSogMjdfPZnKf1p5ocu3gLR54Pa8M42zZM
* Donate ETH: 0x681d9c8a2a3ff0b612ab76564e7dca3f2ccc1c0d
* Donate LTC: LaPHpNS1Lns3rhZSvvkauWGDfCmDLKT8vP

.. _Frontera: https://github.com/scrapinghub/frontera