Hakrawler


Fast Golang web crawler for gathering URLs and JavaScript file locations. This is basically a simple wrapper around the awesome Gocolly library.

Example usages

Single URL:

echo https://example.com | hakrawler

Multiple URLs:

cat urls.txt | hakrawler

Timeout for each line of stdin after 5 seconds:

cat urls.txt | hakrawler -timeout 5

Send all requests through a proxy:

cat urls.txt | hakrawler -proxy http://localhost:8080

Include subdomains:

echo https://example.com | hakrawler -subs

Note: a common issue is that the tool returns no URLs. This usually happens when a domain is specified (e.g. https://example.com) but it redirects to a subdomain (e.g. https://www.example.com). The subdomain is not included in the scope, so no URLs are printed. To overcome this, either specify the final URL in the redirect chain or use the -subs option to include subdomains.
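One way to find the final URL in a redirect chain is to let curl follow the redirects and report only the effective URL (a sketch; the target domain is a placeholder):

```shell
# Follow all redirects (-L), discard the body, and print only the final URL reached.
# https://example.com is a placeholder target; substitute the domain you want to crawl.
curl -s -L -o /dev/null -w '%{url_effective}\n' https://example.com
```

The printed URL can then be piped straight into hakrawler.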

Example tool chain

Get all subdomains of google, find the ones that respond to http(s), crawl them all.

echo google.com | haktrails subdomains | httpx | hakrawler


Normal Install

First, you'll need to install go.

Then run this command to download + compile hakrawler:

go install github.com/hakluke/hakrawler@latest

You can now run ~/go/bin/hakrawler. If you'd like to run hakrawler without the full path, you'll need to run export PATH="$HOME/go/bin:$PATH" (note that ~ does not expand inside double quotes, so $HOME is used instead). You can also add this line to your ~/.bashrc file if you'd like the change to persist.
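For example, to persist the change across shells, the export line can be appended to ~/.bashrc (a sketch assuming Go's default install location of $HOME/go/bin):

```shell
# Add Go's bin directory to PATH for all future bash sessions.
# Assumes the default location $HOME/go/bin; adjust if GOBIN is customised.
echo 'export PATH="$HOME/go/bin:$PATH"' >> ~/.bashrc
```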

Docker Install (from dockerhub)

echo https://example.com | docker run --rm -i hakluke/hakrawler:v2 -subs

Local Docker Install

It's much easier to use the dockerhub method above, but if you'd prefer to run it locally:

git clone https://github.com/hakluke/hakrawler
cd hakrawler
docker build -t hakluke/hakrawler .
docker run --rm -i hakluke/hakrawler --help

Kali Linux: Using apt

sudo apt install hakrawler

Then, to run hakrawler:

echo https://example.com | hakrawler -subs

Command-line options

Usage of hakrawler:
  -d int
    	Depth to crawl. (default 2)
  -h string
    	Custom headers separated by two semi-colons. E.g. -h "Cookie: foo=bar;;Referer: http://example.com/"
  -insecure
    	Disable TLS verification.
  -json
    	Output as JSON.
  -proxy string
    	Proxy URL. E.g. -proxy http://localhost:8080
  -s	Show the source of URL based on where it was found. E.g. href, form, script, etc.
  -size int
    	Page size limit, in KB. (default -1)
  -subs
    	Include subdomains for crawling.
  -t int
    	Number of threads to utilise. (default 8)
  -timeout int
    	Maximum time to crawl each URL from stdin, in seconds. (default -1)
  -u	Show only unique URLs.
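Because hakrawler writes one URL per line, its output composes well with standard Unix filters. A hypothetical post-processing sketch that keeps only unique JavaScript file locations (the printf stands in for real hakrawler output, and the URLs are placeholders):

```shell
# Stand-in for real crawler output (normally: cat urls.txt | hakrawler).
# The grep keeps only .js URLs; sort -u drops duplicates.
printf '%s\n' \
  'https://example.com/assets/app.js' \
  'https://example.com/login' \
  'https://example.com/assets/app.js' \
  'https://cdn.example.com/lib/jquery.js' |
  grep -E '\.js$' |
  sort -u
```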
