Project Name | Stars | Downloads | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language |
---|---|---|---|---|---|---|---|---|---|---|
Examples Of Web Crawlers | 11,050 | | | | a year ago | | | 3 | mit | Python |
Some interesting Python crawler examples, friendly to beginners; they mainly crawl Taobao, Tmall, WeChat, WeRead, Douban, QQ, and similar sites. | ||||||||||
Python | 8,683 | | | | 5 days ago | | | 70 | | Python |
Python scripts: simulated Zhihu login, crawlers, Excel automation, WeChat official accounts, and remote power-on. | ||||||||||
Spider Flow | 8,075 | | | | 3 months ago | | | 20 | mit | Java |
A new-generation crawler platform that defines crawl flows graphically, so crawlers can be built without writing code. | ||||||||||
Infospider | 6,649 | | | | 4 months ago | | | 7 | gpl-3.0 | Python |
INFO-SPIDER is a crawler toolbox 🧰 that bundles many data sources, designed to help users take back their own data safely and quickly; the code is open source and the workflow is transparent. Supported sources include GitHub, QQ Mail, NetEase Mail, Alibaba Mail, Sina Mail, Hotmail, Outlook, JD, Taobao, Alipay, China Mobile, China Unicom, China Telecom, Zhihu, Bilibili, NetEase Cloud Music, QQ friends, QQ groups, WeChat Moments albums, browser history, 12306, Cnblogs, CSDN blog, OSChina blog, and Jianshu. | ||||||||||
Python3 Spider | 2,541 | | | | 3 years ago | | | | | Python |
Hands-on Python crawlers with simulated login for major sites, including but not limited to slider CAPTCHAs, Pinduoduo, Meituan, Baidu, Bilibili, Dianping, and Taobao. Please star it if you like it ❤️ | ||||||||||
Python Crawler | 1,576 | | | | 2 years ago | | | 2 | | HTML |
A systematic, from-scratch course on writing Python crawlers. Python 3.6. | ||||||||||
Autocrawler | 1,438 | | | | 20 days ago | | | 13 | apache-2.0 | Python |
Google, Naver multiprocess image web crawler (Selenium) | ||||||||||
Instagram Profilecrawl | 1,001 | | | | 5 months ago | | | 8 | mit | Python |
📝 Quickly crawl the information (e.g. followers, tags, etc.) of an Instagram profile. | ||||||||||
Scrapy Selenium | 699 | | 3 | 2 | 2 years ago | 6 | January 24, 2019 | 57 | wtfpl | Python |
Scrapy middleware to handle JavaScript pages using Selenium | ||||||||||
Xxl Crawler | 650 | | 2 | 1 | 6 months ago | 5 | October 24, 2018 | 20 | apache-2.0 | Java |
A distributed web crawler framework (XXL-CRAWLER). | ||||||||||
A mirror of the repository is also available on Gitee.
# full path to your chromedriver binary
chromedriver_path = "/Users/bird/Desktop/chromedriver.exe"
# your Weibo account name
weibo_username = ""
# your Weibo password
weibo_password = ""
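Before launching a simulated login it is worth validating the values above; a minimal sketch (the `validate_config` helper is hypothetical, not part of the project, and reuses the config names from the snippet above):

```python
import os

# Config values as in the snippet above (placeholders left empty).
chromedriver_path = "/Users/bird/Desktop/chromedriver.exe"
weibo_username = ""
weibo_password = ""

def validate_config(path, username, password):
    """Return a list of problems found in the login config."""
    problems = []
    if not username:
        problems.append("weibo_username is empty")
    if not password:
        problems.append("weibo_password is empty")
    if not os.path.isfile(path):
        problems.append("chromedriver not found at %s" % path)
    return problems
```

Calling `validate_config(chromedriver_path, weibo_username, weibo_password)` before starting the browser gives a clearer error than a mid-login Selenium failure.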
Uses Python to crawl 5K-resolution ultra-HD wallpapers from MacPap.er, suitable for Mac, Windows, and Linux desktops.
# change into the source directory
cd
# remove previously installed dependencies
pip uninstall -y -r requirement.txt
# install dependencies (via the Tsinghua PyPI mirror)
pip install -r requirement.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
# run
python main.py
If chromedriver cannot be found, edit line 107 of getMovieInRankingList.py, which sets `executable_path=./chromedriver.exe`, to point at your own chromedriver path.
Download chromedriver and install the dependencies with `pip install -r requirement.txt`, then run:
python main.py
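Rather than editing the hard-coded path on line 107, the script could resolve it at runtime; a small sketch (the `resolve_chromedriver` helper and the `CHROMEDRIVER` environment variable are illustrative assumptions, not part of the project):

```python
import os

def resolve_chromedriver(default="./chromedriver.exe"):
    """Prefer a CHROMEDRIVER environment variable over the hard-coded default."""
    return os.environ.get("CHROMEDRIVER") or default
```

The returned path could then be passed to Selenium in place of the literal `executable_path` string.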
Does not use Scrapy; instead it rotates through a pool of proxy IPs to get around anti-crawler limits.
Sample output (fund code, fund name, date, unit NAV, accumulated NAV, daily growth %, update time):

```
000056,,2019-03-26,1.7740,1.7914,0.98,2019-03-27 15:00
000031,,2019-03-26,1.5650,1.5709,0.38,2019-03-27 15:00
000048,C,2019-03-26,1.2230,1.2236,0.05,2019-03-27 15:00
000008,500ETFA,2019-03-26,1.4417,1.4552,0.93,2019-03-27 15:00
000024,A,2019-03-26,1.1670,1.1674,0.04,2019-03-27 15:00
000054,,2019-03-26,1.1697,1.1693,-0.03,2019-03-27 15:00
000016,C,2019-03-26,1.1790,1.1793,0.03,2019-03-27 15:00
```
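Each output line can be parsed into a record; a sketch assuming the column order is fund code, name, date, unit NAV, accumulated NAV, daily growth %, and update time (inferred from the sample above, and `parse_fund_rows` is a hypothetical helper, not the project's code):

```python
import csv
import io

# Column names inferred from the sample output; not confirmed by the source.
FIELDS = ["code", "name", "date", "unit_nav", "acc_nav", "daily_change_pct", "updated_at"]

def parse_fund_rows(text):
    """Parse fund CSV lines into dicts, converting numeric fields to float."""
    rows = []
    for rec in csv.reader(io.StringIO(text)):
        row = dict(zip(FIELDS, rec))
        for key in ("unit_nav", "acc_nav", "daily_change_pct"):
            row[key] = float(row[key])
        rows.append(row)
    return rows

records = parse_fund_rows("000056,,2019-03-26,1.7740,1.7914,0.98,2019-03-27 15:00")
```

Converting the NAV and growth columns to floats up front makes later sorting and filtering trivial.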
# Python 3: install any of the libraries below that are missing with pip install
import requests
import random
import re
import queue
import threading
import csv
import json
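The `queue` and `threading` imports above suggest a multithreaded producer/consumer crawl; a minimal sketch of that pattern (the `crawl_all` helper and the stand-in `fetch` callback are illustrative, where the real code would fetch fund pages with `requests`):

```python
import queue
import threading

def crawl_all(codes, fetch, num_workers=4):
    """Fetch every code concurrently; returns a {code: result} dict."""
    tasks = queue.Queue()
    for code in codes:
        tasks.put(code)

    results = {}
    lock = threading.Lock()  # guard concurrent writes to results

    def worker():
        while True:
            try:
                code = tasks.get_nowait()
            except queue.Empty:
                return  # queue drained, worker exits
            data = fetch(code)
            with lock:
                results[code] = data

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Because the work items are drawn from a shared `queue.Queue`, fast workers naturally pick up more tasks than slow ones.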
# change into the source directory
cd
# remove previously installed dependencies
pip uninstall -y -r requirement.txt
# install dependencies
pip install -r requirement.txt
# run
python generate_wx_data.py
# install pyinstaller
pip install pyinstaller
# change into the source directory
cd
# remove previously installed dependencies
pip uninstall -y -r requirement.txt
# install dependencies
pip install -r requirement.txt
# upgrade setuptools
pip install --upgrade setuptools
# package into a standalone executable
pyinstaller generate_wx_data.py
Generates a one-click personal QQ data report by crawling your QQ friend and QQ group information.
# change into the source directory
cd
# remove previously installed dependencies
pip uninstall -y -r requirement.txt
# install dependencies
pip install -r requirement.txt
# run
python main.py
Supports browsing history from Chrome, Chromium-based browsers, IE, Firefox, and Safari, and analyzes the URLs you have visited.
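Chrome keeps its history in a SQLite file whose `urls` table records `url` and `visit_count`; a sketch of querying it (the `top_urls` helper is illustrative, not the project's code, and the demo uses an in-memory database shaped like Chrome's History file):

```python
import sqlite3

def top_urls(con, limit=5):
    """Return the most-visited (url, visit_count) pairs from a Chrome-style
    History database exposing a `urls` table."""
    cur = con.execute(
        "SELECT url, visit_count FROM urls ORDER BY visit_count DESC LIMIT ?",
        (limit,))
    return cur.fetchall()

# Demo: build an in-memory table with the same shape as Chrome's `urls` table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE urls (url TEXT, visit_count INTEGER)")
con.executemany("INSERT INTO urls VALUES (?, ?)",
                [("https://example.com", 12), ("https://github.com", 30)])
con.commit()
```

On a real machine you would connect to a copy of the browser's History file, since the original is locked while the browser is running.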
# change into the source directory
cd
# remove previously installed dependencies
pip uninstall -y -r requirement.txt
# install dependencies
pip install -r requirement.txt
# run
python app.py
# then open http://localhost:8090 in your browser
# change into the source directory
cd
# remove previously installed dependencies
pip uninstall -y -r requirement.txt
# install dependencies (via the Tsinghua PyPI mirror)
pip install -r requirement.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
# run the GUI version
python pyqt_gui.py