Project | Stars | Last Commit | Total Releases | Latest Release | Open Issues | License | Language | Description
---|---|---|---|---|---|---|---|---
Spider_python | 732 | 6 months ago | | | 13 | apache-2.0 | Python | Python web crawlers.
Phpscraper | 381 | 2 days ago | 15 | March 28, 2022 | 20 | gpl-3.0 | PHP | A universal web-util for PHP.
Fakebrowser | 290 | a year ago | 54 | January 14, 2022 | 10 | lgpl-3.0 | JavaScript | 🤖 Fake fingerprints to bypass anti-bot systems; simulates mouse and keyboard operations to behave like a real person.
Double Agent | 120 | 7 months ago | | | 3 | mit | TypeScript | A test suite of common scraper-detection techniques. See how detectable your scraper stack is.
Scrapy Puppeteer | 103 | 2 years ago | 1 | November 30, 2018 | 8 | mit | Python | Scrapy + Puppeteer.
Scrapy Puppeteer | 35 | 13 days ago | 3 | August 02, 2022 | | bsd-3-clause | Python | Library that helps use Puppeteer in Scrapy.
Js Renderer | 16 | 6 months ago | | | 3 | mit | JavaScript | An online Puppeteer service on Vercel that renders pages with JavaScript; mainly useful for web scraping (without Splash).
Scrapy Puppeteer Service | 9 | 12 days ago | | | 4 | bsd-3-clause | JavaScript | A service that runs Puppeteer instances.
Crawlitem Puppeteer | 2 | 3 years ago | | | | apache-2.0 | JavaScript | An example of scraping product listings with Puppeteer.
Scrap2019 Ncov | 2 | 2 years ago | | | 1 | | JavaScript | Created back in January 2020, before the coronavirus was on the radar in the western world.
Scrapy middleware to handle JavaScript pages using Puppeteer.
This is an attempt to make Scrapy and Puppeteer work together to handle JavaScript-rendered pages. The design is strongly inspired by the Scrapy Splash plugin.
Scrapy and Puppeteer
The main issue when running Scrapy and Puppeteer together is that Scrapy uses Twisted while Pyppeteer (the Python port of Puppeteer we are using) uses asyncio for its asynchronous operations.
Luckily, we can use Twisted's asyncio reactor to make the two talk to each other.
That's why you cannot use the built-in `scrapy` command-line tool (which installs the default reactor); you will have to use the `scrapyp` one provided by this module.
If you are running your spiders from a script, you will have to make sure you install the asyncio reactor before importing scrapy or doing anything else:

```python
import asyncio
from twisted.internet import asyncioreactor

asyncioreactor.install(asyncio.get_event_loop())
```
```shell
$ pip install scrapy-puppeteer
```
Add the `PuppeteerMiddleware` to the downloader middlewares:

```python
DOWNLOADER_MIDDLEWARES = {
    'scrapy_puppeteer.PuppeteerMiddleware': 800
}
```
Use `scrapy_puppeteer.PuppeteerRequest` instead of the Scrapy built-in `Request`, like below:

```python
from scrapy_puppeteer import PuppeteerRequest

def your_parse_method(self, response):
    # Your code...
    yield PuppeteerRequest('http://httpbin.org', self.parse_result)
```
The request will then be handled by Puppeteer.
The `selector` response attribute works as usual (but contains the HTML processed by Puppeteer).

```python
def parse_result(self, response):
    # Extract the page title from the Puppeteer-rendered HTML.
    print(response.selector.xpath('//title/text()'))
```
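For a quick offline illustration of what that XPath extracts, here is a stand-in sketch using only the standard library's `ElementTree` (the real code runs against the Puppeteer-rendered response, and the HTML string below is a hypothetical stand-in):

```python
import xml.etree.ElementTree as ET

# Hypothetical stand-in for the Puppeteer-rendered HTML of the page.
html = "<html><head><title>httpbin.org</title></head><body></body></html>"

root = ET.fromstring(html)
# Equivalent of response.selector.xpath('//title/text()').get():
title = root.find(".//title").text
print(title)  # httpbin.org
```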
`scrapy_puppeteer.PuppeteerRequest` accepts three additional arguments:

- `wait_until`: passed to the `waitUntil` parameter of Puppeteer. Defaults to `domcontentloaded`.
- `wait_for`: passed to the `waitFor` parameter of Puppeteer.
- `screenshot`: when used, Puppeteer will take a screenshot of the page, and the binary data of the captured `.png` will be added to the response `meta`:
```python
yield PuppeteerRequest(
    url,
    self.parse_result,
    screenshot=True
)

def parse_result(self, response):
    with open('image.png', 'wb') as image_file:
        image_file.write(response.meta['screenshot'])
```
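Since `response.meta['screenshot']` holds raw PNG bytes, a cheap sanity check before saving is the fixed 8-byte PNG signature. A minimal stdlib sketch, with hypothetical stand-in bytes in place of the real screenshot:

```python
# Hypothetical stand-in for response.meta['screenshot'].
screenshot = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16

# Every valid PNG file starts with this 8-byte signature.
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"
assert screenshot.startswith(PNG_SIGNATURE)

with open("image.png", "wb") as image_file:
    image_file.write(screenshot)
```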