|Project Name|Stars|Most Recent Commit|Latest Release|License|Language|Description|
|---|---|---|---|---|---|---|
|ArchiveBox|16,880|14 days ago|April 13, 2021|MIT|Python|🗃 Open source self-hosted web archiving. Takes URLs/browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...|
|Awesome Web Archiving|1,572|a month ago| |CC0-1.0| |An awesome list for getting started with web archiving|
|ArchiveWeb.page| | | | | |A high-fidelity web archiving extension for Chrome and Chromium-based browsers|
|ipwb|551|2 months ago|April 06, 2022|MIT|Python|InterPlanetary Wayback: a distributed and persistent archive replay system using IPFS|
|ReplayWeb.page| | | | | |Serverless web archive replay directly in the browser|
|WAIL|305|6 months ago| |MIT|Roff|🐳 Web Archiving Integration Layer: one-click user-instigated preservation|
|archivenow|250|3 years ago| |MIT|Python|A tool to push web resources into web archives|
|waybackpy|235|a year ago|March 15, 2022|MIT|Python|Wayback Machine API interface & a command-line tool|
|Archiveror| | | | | |Archiveror will help you preserve the webpages you love 💾|
|ArchiveSpark|118|2 years ago|September 16, 2019|MIT|Scala|An Apache Spark framework for easy data processing, extraction, and derivation for web archives and archival collections, developed at the Internet Archive|
"Your own personal internet archive" (网站存档 / 爬虫)
ArchiveBox is a powerful, self-hosted internet archiving solution to collect, save, and view sites you want to preserve offline.
You can feed it URLs one at a time, or schedule regular imports from browser bookmarks or history, feeds like RSS, bookmark services like Pocket/Pinboard, and more. See input formats for a full list.
It saves snapshots of the URLs you feed it in several formats: HTML, PDF, PNG screenshots, WARC, and more out-of-the-box, with a wide variety of content extracted and preserved automatically (article text, audio/video, git repos, etc.). See output formats for a full list.
The goal is to sleep soundly knowing the part of the internet you care about will be automatically preserved in durable, easily accessible formats for decades after it goes down.
No matter which setup method you choose, they all follow this basic process and provide the same CLI, Web UI, and on-disk data layout.
archivebox init --setup # creates a new collection in the current directory
archivebox add 'https://example.com'                                    # add URLs one at a time via args / piped stdin
archivebox schedule --every=day --depth=1 https://example.com/rss.xml   # or have it import URLs on a schedule

archivebox server 0.0.0.0:8000         # use the interactive web UI
archivebox list 'https://example.com'  # use the CLI commands (--help for more)
ls ./archive/*/index.json              # or browse directly via the filesystem
⤵️ See the Quickstart below for more...
🖥 Supported OSs: Linux/BSD, macOS, Windows (Docker/WSL)
👾 CPUs: amd64, x86, arm8, arm7 (raspi>=3)
(click to expand your preferred distribution below for full setup instructions)
docker-compose on macOS/Linux/Windows ✨ (highly recommended)
First make sure you have Docker installed: https://docs.docker.com/get-docker/
curl -O 'https://raw.githubusercontent.com/ArchiveBox/ArchiveBox/master/docker-compose.yml'
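If you just want a minimal file to start from instead of downloading the official one, a sketch along these lines should work (the official docker-compose.yml at the URL above includes more services and options, so prefer it when in doubt):

# write a minimal, illustrative docker-compose.yml via heredoc
cat > docker-compose.yml <<'EOF'
services:
  archivebox:
    image: archivebox/archivebox
    ports:
      - "8000:8000"
    volumes:
      - ./data:/data
EOF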
Start the server.
docker-compose run archivebox init --setup
docker-compose up
# you can also add links and manage your archive via the CLI:
docker-compose run archivebox add 'https://example.com'
docker-compose run archivebox status
docker-compose run archivebox help  # to see more options

# when passing stdin/stdout via the cli, use the -T flag
echo 'https://example.com' | docker-compose run -T archivebox add
docker-compose run -T archivebox list --html --with-headers > index.html
This is the recommended way to run ArchiveBox because it includes all the extractors (chrome, wget, youtube-dl, git, etc.), full-text search with sonic, and many other great features.
docker on macOS/Linux/Windows

First make sure you have Docker installed: https://docs.docker.com/get-docker/
# create a new empty directory and initialize your collection (can be anywhere)
mkdir ~/archivebox && cd ~/archivebox
docker run -v $PWD:/data -it archivebox/archivebox init --setup

# start the webserver and open the UI (optional)
docker run -v $PWD:/data -p 8000:8000 archivebox/archivebox server 0.0.0.0:8000
open http://127.0.0.1:8000

# you can also add links and manage your archive via the CLI:
docker run -v $PWD:/data -it archivebox/archivebox add 'https://example.com'
docker run -v $PWD:/data -it archivebox/archivebox status
docker run -v $PWD:/data -it archivebox/archivebox help  # to see more options

# when passing stdin/stdout via the cli, use only -i (not -it)
echo 'https://example.com' | docker run -v $PWD:/data -i archivebox/archivebox add
docker run -v $PWD:/data -i archivebox/archivebox list --html --with-headers > index.html
apt on Ubuntu/Debian

This method should work on all Ubuntu/Debian-based systems, including x86, amd64, arm7, and arm8 CPUs (e.g. Raspberry Pis >=3).

If you're on Ubuntu >= 20.04, add the apt repository with the commands below (on other Ubuntu/Debian-based systems follow the ♰ instructions further down).
# add the repo to your sources and install the archivebox package using apt
sudo apt install software-properties-common
sudo add-apt-repository -u ppa:archivebox/archivebox
sudo apt install archivebox
# create a new empty directory and initialize your collection (can be anywhere)
mkdir ~/archivebox && cd ~/archivebox
archivebox init --setup

# start the webserver and open the web UI (optional)
archivebox server 0.0.0.0:8000
open http://127.0.0.1:8000

# you can also add URLs and manage the archive via the CLI and filesystem:
archivebox add 'https://example.com'
archivebox status
archivebox list --html --with-headers > index.html
archivebox list --json --with-headers > index.json
archivebox help  # to see more options
♰ On other Ubuntu/Debian-based systems add these sources directly to your apt sources list:

echo "deb http://ppa.launchpad.net/archivebox/archivebox/ubuntu focal main" | sudo tee /etc/apt/sources.list.d/archivebox.list
echo "deb-src http://ppa.launchpad.net/archivebox/archivebox/ubuntu focal main" | sudo tee -a /etc/apt/sources.list.d/archivebox.list
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys C258F79DCC02E369
sudo apt update
sudo apt install archivebox
archivebox setup
archivebox --version
# then scroll back up and continue the initialization instructions above
brew on macOS (you may need to install some other dependencies manually)
First make sure you have Homebrew installed: https://brew.sh/#install
# install the archivebox package using homebrew
brew install archivebox/archivebox/archivebox

# create a new empty directory and initialize your collection (can be anywhere)
mkdir ~/archivebox && cd ~/archivebox
archivebox init --setup

# start the webserver and open the web UI (optional)
archivebox server 0.0.0.0:8000
open http://127.0.0.1:8000

# you can also add URLs and manage the archive via the CLI and filesystem:
archivebox add 'https://example.com'
archivebox status
archivebox list --html --with-headers > index.html
archivebox list --json --with-headers > index.json
archivebox help  # to see more options
pip on any other platform (some extras must be installed manually)
# install the archivebox package using pip3
pip3 install archivebox

# create a new empty directory and initialize your collection (can be anywhere)
mkdir ~/archivebox && cd ~/archivebox
archivebox init --setup
# install any missing extras like wget/git/ripgrep/etc. manually as needed

# start the webserver and open the web UI (optional)
archivebox server 0.0.0.0:8000
open http://127.0.0.1:8000

# you can also add URLs and manage the archive via the CLI and filesystem:
archivebox add 'https://example.com'
archivebox status
archivebox list --html --with-headers > index.html
archivebox list --json --with-headers > index.json
archivebox help  # to see more options
# archivebox [subcommand] [--args]
# docker-compose run archivebox [subcommand] [--args]
# docker run -v $PWD:/data -it archivebox/archivebox [subcommand] [--args]

archivebox init --setup  # safe to run init multiple times (also how you update versions)
archivebox --version
archivebox help
archivebox setup/init/config/status/manage to administer your collection
archivebox add/schedule/remove/update/list/shell/oneshot to manage Snapshots in the archive
archivebox schedule to pull in fresh URLs regularly from bookmarks/history/Pocket/Pinboard/RSS/etc.
archivebox manage createsuperuser
archivebox server 0.0.0.0:8000
Then open http://127.0.0.1:8000 to view the UI.
# you can also configure whether or not login is required for most features
archivebox config --set PUBLIC_INDEX=False
archivebox config --set PUBLIC_SNAPSHOTS=False
archivebox config --set PUBLIC_ADD_VIEW=False
sqlite3 ./index.sqlite3    # run SQL queries on your index
archivebox shell           # explore the Python API in a REPL
ls ./archive/*/index.html  # or inspect snapshots on the filesystem
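As a quick illustration of querying the index directly, something like the following should work; the table name core_snapshot is an assumption based on the default Django naming, so check your actual schema with .tables first:

# list the 10 most recently added snapshots (core_snapshot is assumed, not guaranteed)
sqlite3 ./index.sqlite3 "SELECT added, url, title FROM core_snapshot ORDER BY added DESC LIMIT 10;"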
ArchiveBox supports many input formats for URLs, including Pocket & Pinboard exports, Browser bookmarks, Browser history, plain text, HTML, markdown, and more!
Click these links for instructions on how to prepare your links from these sources:
# archivebox add --help
archivebox add 'https://example.com/some/page'
archivebox add < ~/Downloads/firefox_bookmarks_export.html
archivebox add --depth=1 'https://news.ycombinator.com#2020-12-12'
echo 'http://example.com' | archivebox add
echo 'any_text_with [urls](https://example.com) in it' | archivebox add

# (if using docker, add -i when piping stdin)
echo 'https://example.com' | docker run -v $PWD:/data -i archivebox/archivebox add

# (if using docker-compose, add -T when piping stdin / stdout)
echo 'https://example.com' | docker-compose run -T archivebox add
See the Usage: CLI page for documentation and examples.
It also includes a built-in scheduled import feature via archivebox schedule and a browser bookmarklet, so you can pull in URLs from RSS feeds, websites, or the filesystem regularly/on-demand.
All of ArchiveBox's state (including the index, snapshot data, and config file) is stored in a single folder called the "ArchiveBox data folder". All archivebox CLI commands must be run from inside this folder, and you first create it by running archivebox init.
The on-disk layout is optimized to be easy to browse by hand and durable long-term. The main index is a standard index.sqlite3 database in the root of the data folder (it can also be exported as static JSON/HTML), and the archive snapshots are organized by date-added timestamp in the ./archive/ folder.

./
    index.sqlite3
    ArchiveBox.conf
    archive/
        ...
        1617687755/
            index.html
            index.json
            screenshot.png
            media/some_video.mp4
            warc/1617687755.warc.gz
            git/somerepo.git
        ...
Each snapshot subfolder ./archive/<timestamp>/ includes a static index.html describing its contents, and the snapshot extractor outputs are plain files within the folder.
Inside each Snapshot folder, ArchiveBox saves these different types of extractor outputs as plain files:

index.json & index.html: index files containing metadata and details
singlefile.html: HTML snapshot rendered with headless Chrome using SingleFile
example.com/page-name.html: wget clone of the site (with .html appended if not present)
output.pdf: printed PDF of the site using headless Chrome
screenshot.png: 1440x900 screenshot of the site using headless Chrome
output.html: DOM dump of the HTML after rendering using headless Chrome
article.html/article.json: article text extraction using Readability & Mercury
archive.org.txt: a link to the saved site on archive.org
media/: all audio/video files + playlists, including subtitles & metadata, saved with youtube-dl
git/: clone of any repository found on GitHub, Bitbucket, or GitLab links
It does everything out-of-the-box by default, but you can disable or tweak individual archive methods via environment variables / config.
# archivebox config --help
archivebox config  # see all currently configured options
archivebox config --set SAVE_ARCHIVE_DOT_ORG=False
archivebox config --set YOUTUBEDL_ARGS='--max-filesize=500m'
You can export the main index to browse it statically without needing to run a server.
Note about large exports: these exports are not paginated, so exporting many URLs or the entire archive at once may be slow. Use the filtering CLI flags on the archivebox list command to export specific Snapshots or ranges.
# archivebox list --help
archivebox list --html --with-headers > index.html     # export to static html table
archivebox list --json --with-headers > index.json     # export to json blob
archivebox list --csv=timestamp,url,title > index.csv  # export to csv spreadsheet

# (if using docker-compose, add the -T flag when piping)
docker-compose run -T archivebox list --html --filter-type=search snozzberries > index.html
The paths in the static exports are relative; make sure to keep them next to your ./archive folder when backing them up or viewing them.
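For instance, one way to bundle a static export together with the snapshot data it links to (the backup filename here is just an example):

# export a static index and archive it alongside the ./archive folder it references
archivebox list --html --with-headers > index.html
tar -czf archivebox-backup.tar.gz index.html ./archive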
For better security, easier updating, and to avoid polluting your host system with extra dependencies, it is strongly recommended to use the official Docker image, which comes with everything preinstalled.
To achieve high fidelity archives in as many situations as possible, ArchiveBox depends on a variety of 3rd-party tools and libraries that specialize in extracting different types of content. These optional dependencies used for archiving sites include:
chrome (for screenshots, PDF, DOM HTML, and headless JS scripts)
npm (for readability, mercury, and singlefile)
wget (for plain HTML, static files, and WARC saving)
curl (for fetching headers, favicon, and posting to Archive.org)
youtube-dl (for audio, video, and subtitles)
git (for cloning git repos)
You don't need to install every dependency to use ArchiveBox. ArchiveBox will automatically disable extractors that rely on dependencies that aren't installed, based on what is configured and available in your $PATH.
If using Docker, you don't have to install any of these manually; all dependencies are set up properly out-of-the-box.
However, if you prefer not to use Docker, you can install ArchiveBox and its dependencies using your system package manager or pip directly on any Linux/macOS system. Just make sure to keep the dependencies up-to-date and check that ArchiveBox isn't reporting any incompatibility with the versions you install.
# install python3 and archivebox with your system package manager
# apt/brew/pip/etc install ... (see Quickstart instructions above)

archivebox setup      # auto install all the extractors and extras
archivebox --version  # see info and check validity of installed dependencies
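Keeping ArchiveBox itself up-to-date usually goes through the same package manager you installed with; for example, assuming the install methods from the Quickstart above:

# if installed via pip:
pip3 install --upgrade archivebox

# if installed via homebrew:
brew upgrade archivebox/archivebox/archivebox

# if installed via apt:
sudo apt update && sudo apt install --only-upgrade archivebox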
Installing directly on Windows without Docker or WSL/WSL2/Cygwin is not officially supported, but some advanced users have reported getting it working.
If you're importing URLs containing secret slugs or pages with private content (e.g. Google Docs, unlisted videos, etc.), you may want to disable some of the extractor modules to avoid leaking private URLs to 3rd-party APIs during the archiving process.
# don't do this:
archivebox add 'https://docs.google.com/document/d/12345somelongsecrethere'
archivebox add 'https://example.com/any/url/you/want/to/keep/secret/'

# without first disabling the extractors that share the URL with 3rd-party APIs:
archivebox config --set SAVE_ARCHIVE_DOT_ORG=False  # disable saving all URLs to Archive.org

# if extra paranoid or anti-google:
archivebox config --set SAVE_FAVICON=False      # disable favicon fetching (it calls a Google API)
archivebox config --set CHROME_BINARY=chromium  # ensure it's using Chromium instead of Chrome
Be aware that malicious archived JS can access the contents of other pages in your archive when viewed. Because the Web UI serves all viewed snapshots from a single domain, they share a request context, and the typical CSRF/CORS/XSS/CSP protections cannot prevent cross-snapshot request attacks. See the Security Overview page for more details.
# visiting an archived page with malicious JS:
https://127.0.0.1:8000/archive/1602401954/example.com/index.html

# example.com/index.js can now make a request to read everything from:
https://127.0.0.1:8000/index.html
https://127.0.0.1:8000/archive/*

# then example.com/index.js can send it off to some evil server
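One partial mitigation, reusing the same config options shown earlier, is to require login before the index or any snapshot can be viewed; this limits exposure to logged-in users but does not isolate snapshots from each other:

# require login to view the index and snapshots (see the Security Overview page)
archivebox config --set PUBLIC_INDEX=False
archivebox config --set PUBLIC_SNAPSHOTS=False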
Support for saving multiple snapshots of each site over time will be added eventually (along with the ability to view diffs of the changes between runs). For now ArchiveBox is designed to only archive each URL with each extractor type once. A workaround to take multiple snapshots of the same URL is to make them slightly different by adding a hash:
archivebox add 'https://example.com#2020-10-24'
...
archivebox add 'https://example.com#2020-10-25'
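A one-liner sketch of automating that workaround, appending today's date so that re-running it on different days creates distinct Snapshots:

# re-snapshot the same URL by appending the current date as a hash
archivebox add "https://example.com#$(date +%Y-%m-%d)"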
Because ArchiveBox is designed to ingest a firehose of browser history and bookmark feeds to a local disk, it can be much more disk-space intensive than a centralized service like the Internet Archive or Archive.today. However, as storage space gets cheaper and compression improves, you should be able to use it continuously over the years without having to delete anything.
ArchiveBox can use anywhere from ~1gb per 1000 articles to ~50gb per 1000 articles, mostly depending on whether you're saving audio & video using SAVE_MEDIA=True and whether you lower MEDIA_MAX_SIZE=750m.

Storage requirements can be reduced by using a compressed/deduplicated filesystem like ZFS/BTRFS, or by turning off extractor methods you don't need. Don't store large collections on older filesystems like EXT3/FAT, as they may not be able to handle more than 50k directory entries in the archive/ folder.
Try to keep the index.sqlite3 file on a local drive (not a network mount), and ideally on an SSD for maximum performance; the archive/ folder, however, can be on a network mount or spinning HDD.
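As an illustration of the compressed-filesystem suggestion above, enabling compression on a ZFS dataset holding the archive might look like this (the dataset name tank/archivebox is hypothetical; adjust to your pool):

# enable cheap transparent compression and skip access-time updates
zfs set compression=lz4 tank/archivebox
zfs set atime=off tank/archivebox

# check how much space the archive currently uses
du -sh ./archive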
The aim of ArchiveBox is to enable more of the internet to be archived by empowering people to self-host their own archives. The intent is for all the web content you care about to be viewable with common software in 50 - 100 years without needing to run ArchiveBox or other specialized software to replay it.
Vast treasure troves of knowledge are lost every day on the internet to link rot. As a society, we have an imperative to preserve some important parts of that treasure, just like we preserve our books, paintings, and music in physical libraries long after the originals go out of print or fade into obscurity.
Whether it's to resist censorship by saving articles before they get taken down or edited, or just to save a collection of early 2010's flash games you love to play, having the tools to archive internet content enables you to save the stuff you care most about before it disappears.
The balance between the permanence and the ephemeral nature of content on the internet is part of what makes it beautiful. I don't think everything should be preserved in an automated fashion, making all content permanent and never removable, but I do think people should be able to decide for themselves and effectively archive the specific content they care about.
Because modern websites are complicated and often rely on dynamic content, ArchiveBox archives the sites in several different formats beyond what public archiving services like Archive.org/Archive.is save. Using multiple methods and the market-dominant browser to execute JS ensures we can save even the most complex, finicky websites in at least a few high-quality, long-term data formats.
▶ Check out our community page for an index of web archiving initiatives and projects.
A variety of open and closed-source archiving projects exist, but few provide a nice UI and CLI to manage a large, high-fidelity archive collection over time.
ArchiveBox tries to be a robust, set-and-forget archiving solution suitable for archiving RSS feeds, bookmarks, or your entire browsing history (beware, it may be too big to store), including private/authenticated content that you wouldn't otherwise share with a centralized service (this is not recommended due to JS replay security concerns).

Not all content is suitable to be archived in a centralized collection, whether because it's private, copyrighted, too large, or too complex. ArchiveBox hopes to fill that gap.
By having each user store their own content locally, we can save much larger portions of everyone's browsing history than a shared centralized service would be able to handle. The eventual goal is to work towards federated archiving where users can share portions of their collections with each other.
ArchiveBox differentiates itself from similar self-hosted projects by providing a comprehensive CLI interface for managing your archive, a Web UI that can be used either independently or together with the CLI, and a simple on-disk data format that can be used without either.
ArchiveBox is neither the highest-fidelity nor the simplest tool available for self-hosted archiving; rather, it's a jack-of-all-trades that tries to do most things well by default. It can be as simple or as advanced as you want, and is designed to do everything out-of-the-box but be tunable to suit your needs.
If being able to archive very complex interactive pages with JS and video is paramount, check out ArchiveWeb.page and ReplayWeb.page.
If you prefer a simpler, leaner solution that archives page text in markdown and provides note-taking abilities, check out Archivy or 22120.
For more alternatives, see our list here...
Whether you want to learn which organizations are the big players in the web archiving space, want to find a specific open-source tool for your web archiving need, or just want to see where archivists hang out online, our Community Wiki page serves as an index of the broader web archiving community. Check it out to learn about some of the coolest web archiving projects and communities on the web!
Need help building a custom archiving solution? The ArchiveBox team is available for hire (they also do general software consulting across many industries).
You can also access the docs locally by looking in the ArchiveBox/docs/ folder.
All contributions to ArchiveBox are welcome! Check our issues and Roadmap for things to work on, and please open an issue to discuss your proposed implementation before starting work; otherwise we may have to close your PR if it doesn't align with our roadmap.
git clone --recurse-submodules https://github.com/ArchiveBox/ArchiveBox
cd ArchiveBox
git checkout dev  # or the branch you want to test
git submodule update --init --recursive
git pull --recurse-submodules
# Install ArchiveBox + python dependencies
python3 -m venv .venv && source .venv/bin/activate && pip install -e '.[dev]'
# or: pipenv install --dev && pipenv shell

# Install node dependencies
npm install
# or
archivebox setup

# Check to see if anything is missing
archivebox --version

# install any missing dependencies manually, or use the helper script:
./bin/setup.sh
# Optional: develop via docker by mounting the code dir into the container
# if you edit e.g. ./archivebox/core/models.py on the docker host, runserver
# inside the container will reload and pick up your changes
docker build . -t archivebox
docker run -it archivebox init --setup
docker run -it -p 8000:8000 \
    -v $PWD/data:/data \
    -v $PWD/archivebox:/app/archivebox \
    archivebox server 0.0.0.0:8000 --debug --reload

# (remove the --reload flag and add the --nothreading flag when profiling with the django debug toolbar)
See the ./bin/ folder and read the source of the bash scripts within.
You can also run all these in Docker. For more examples see the Github Actions CI/CD tests that are run:
archivebox config --set DEBUG=True
# or
archivebox server --debug ...
docker build -t archivebox:dev https://github.com/ArchiveBox/ArchiveBox.git#dev
docker run -it -v $PWD:/data archivebox:dev ...
Make sure to run this whenever you change things in models.py:

cd archivebox/
./manage.py makemigrations

cd path/to/test/data/
archivebox shell
archivebox manage dbshell
(Normally CI takes care of this, but these scripts can be run to do it manually)
./bin/build.sh

# or individually:
./bin/build_docs.sh
./bin/build_pip.sh
./bin/build_deb.sh
./bin/build_brew.sh
./bin/build_docker.sh
(Normally CI takes care of this, but these scripts can be run to do it manually)
./bin/release.sh

# or individually:
./bin/release_docs.sh
./bin/release_pip.sh
./bin/release_deb.sh
./bin/release_brew.sh
./bin/release_docker.sh