Project Name | Stars | Last Commit | License | Language | Description
---|---|---|---|---|---
Gopeed | 7,078 | 18 hours ago | gpl-3.0 | Dart | High speed downloader that supports all platforms. Built with Golang and Flutter.
Fast Android Networking | 5,536 | 6 months ago | apache-2.0 | Java | 🚀 A Complete Fast Android Networking Library that also supports HTTP/2 🚀
Aria | 5,341 | a month ago | apache-2.0 | Java | Downloading can be simple
Prdownloader | 3,164 | 8 months ago | apache-2.0 | Java | PRDownloader - A file downloader library for Android with pause and resume support
M3u8 Downloader | 2,651 | 2 months ago | | JavaScript | M3U8-Downloader supports multithreading, resumable downloads, and caching of encrypted video downloads.
Fetch | 1,514 | 5 months ago | apache-2.0 | Java | The best file downloader library for Android
Pget | 1,051 | 6 days ago | mit | Go | The fastest, resumable file download client
Cc Attack | 705 | 4 months ago | gpl-2.0 | Python | Using Socks4/5 or http proxies to make a multithreading Http-flood/Https-flood (cc) attack.
Gradle Download Task | 625 | 14 days ago | apache-2.0 | Java | 📥 Adds a download task to Gradle that displays progress information
Sasila | 264 | 4 years ago | apache-2.0 | Python | A flexible, friendly crawler framework
Pyflit is a simple Python HTTP downloader. Its supported features are shown below.
This package is not well tested and may be buggy; it should not be used in production!
First, get a self-defined URL opener object. You can specify handlers to support cookies, authentication, and other advanced HTTP features. If you want to change the `User-Agent` or add a `Referer` to the HTTP request headers, you can also pass a self-defined `headers` argument. Furthermore, you can enable a proxy by passing a dictionary of proxy addresses. See the API reference for details.
Example:

```python
from pyflit import flit

# cookie_handler and redirect_handler are handler objects defined elsewhere.
handlers = [cookie_handler, redirect_handler]
headers = {'User-Agent': 'Mozilla/5.0 '
                         '(Macintosh; Intel Mac OS X 10_9_4) '
                         'AppleWebKit/537.77.4 (KHTML, like Gecko) '
                         'Version/7.0.5 Safari/537.77.4'}
proxies = {'http': 'http://someproxy.com:8080'}

opener = flit.get_opener(handlers, headers, proxies)
u = opener.open("http://www.python.org")
resp = u.read()
```
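For reference, an opener like this maps naturally onto the standard library's `urllib.request` machinery. The sketch below illustrates the pattern only; the `make_opener` helper is hypothetical, and pyflit's actual `get_opener` may work differently.

```python
import urllib.request

def make_opener(handlers=None, headers=None, proxies=None):
    """Build a urllib opener from optional handlers, headers, and proxies.

    Hypothetical helper shown for illustration; not pyflit's implementation.
    """
    handlers = list(handlers or [])
    if proxies:
        # ProxyHandler routes matching requests through the given proxies.
        handlers.append(urllib.request.ProxyHandler(proxies))
    opener = urllib.request.build_opener(*handlers)
    if headers:
        # Replace the default request headers with the caller-supplied ones.
        opener.addheaders = list(headers.items())
    return opener

opener = make_opener(headers={'User-Agent': 'pyflit-example/0.1'})
```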
You can simply call `flit.flit_tasks()` to fetch multiple URLs with a specified number of worker threads; a generator is returned, which you can iterate over to process the data chunks.
Example:

```python
from pyflit import flit

def chunk_process(chunk):
    """Output chunk information."""
    print("Status_code: %s\n%s\n%s\nRead-Size: %s\nHistory: %s\n" % (
        chunk['status_code'],
        chunk['url'],
        chunk['headers'],
        len(chunk['content']),
        chunk.get('history', None)))

links = ['http://www.domain.com/post/%d/' % i for i in range(100, 200)]
thread_number = 5
opener = flit.get_opener()  # optionally pass handlers, headers, proxies
chunks = flit.flit_tasks(links, thread_number, opener)
for chunk in chunks:
    chunk_process(chunk)
```
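Under the hood, this kind of API is a thread-pool pattern: each URL is submitted to a fixed pool of worker threads, and results are yielded as they complete. A minimal sketch with `concurrent.futures` (the `fetch_all` name and the dummy fetch function are illustrative, not pyflit's code):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_all(urls, thread_number, fetch):
    """Yield fetch(url) results as workers finish, not in input order.

    Illustrative stand-in for the pattern behind flit_tasks().
    """
    with ThreadPoolExecutor(max_workers=thread_number) as pool:
        futures = [pool.submit(fetch, url) for url in urls]
        for future in as_completed(futures):
            yield future.result()

# Example with a dummy fetch function (no network access needed):
results = list(fetch_all(['a', 'b', 'c'], 2, lambda u: u.upper()))
```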
Multi-segment file downloading uses multiple threads to download separate parts of the target file. Simply pass two arguments: the URL and the number of segments.
Example:

```python
from pyflit import flit

url = "http://www.gnu.org/software/emacs/manual/pdf/emacs.pdf"
segment_number = 2
opener = flit.get_opener()  # optionally pass handlers, headers, proxies
flit.flit_segments(url, segment_number, opener)
```
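Segmented downloading typically relies on HTTP `Range` requests: compute byte ranges covering the file, fetch each range in its own thread, and join the parts in order. The sketch below illustrates that technique only; the `split_ranges`, `fetch_range`, and `download_segments` names are assumptions, not pyflit's internals.

```python
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def split_ranges(total_size, segment_number):
    """Return (start, end) byte ranges covering bytes 0..total_size-1."""
    step = total_size // segment_number
    ranges = []
    for i in range(segment_number):
        start = i * step
        # The last segment absorbs any remainder bytes.
        end = total_size - 1 if i == segment_number - 1 else start + step - 1
        ranges.append((start, end))
    return ranges

def fetch_range(url, start, end):
    """Fetch one byte range of the file via an HTTP Range header."""
    req = urllib.request.Request(
        url, headers={'Range': 'bytes=%d-%d' % (start, end)})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def download_segments(url, total_size, segment_number):
    """Fetch all segments concurrently and join them in order."""
    ranges = split_ranges(total_size, segment_number)
    with ThreadPoolExecutor(max_workers=segment_number) as pool:
        parts = list(pool.map(lambda r: fetch_range(url, *r), ranges))
    return b''.join(parts)
```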
You can send pull requests via GitHub or help fix bugs from the issues list.