Awesome Open Source

Project structure

The system consists of multiple independent processes that may run on different nodes.
Communication is implemented via message queues built on top of Redis.
Redis is also used as a distributed caching layer, e.g. to reflect the actual balance across all exchanges.
Data in the queues is usually pickled.
Historical data is persisted in Postgres.
Alerting is implemented on top of the Telegram messenger.
CLI management is done manually using scripts within this project.

NOTE: the web-based UI and the data-scraping framework for various resources are NOT part of this repo.
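As a sketch of the queue transport described above (queue name and helper names are illustrative, not taken from the project): messages are pickled before being pushed onto a Redis list and unpickled on the consumer side.

```python
import pickle

def encode(message):
    """Serialise a queue message the way the system does: plain pickle."""
    return pickle.dumps(message)

def decode(raw):
    """Restore a message popped from a queue."""
    return pickle.loads(raw)

# With a live Redis (redis-py), a queue is just a list (queue name illustrative):
#   import redis
#   r = redis.StrictRedis(host="localhost", port=6379)
#   r.rpush("notifications", encode({"severity": "WARN", "text": "low balance"}))
#   _, raw = r.blpop("notifications")
#   message = decode(raw)
```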


Glossary:

  • bot - an arbitrage process that trades a single commodity between exactly two exchanges
  • order - what a bot places on the markets
  • trade - what was actually executed on the markets; a single order may be closed by multiple trades

Key services:

  • telegram notifier - watches the message queues and forwards messages to the corresponding Telegram channels based on their severity
  • balance_monitoring - updates cached balance values for all currencies on all exchanges in the portfolio. All trading processes validate the time of the last update and will stop immediately if it has expired.
  • expired order processing - for many reasons an order placed by a bot may not be closed in time; this service re-processes such orders, trying to minimise loss
  • failed order processing - timeouts, exchange errors, and ill fate may lead to a situation where an order placement request returns an error. This service carefully checks the current state of the order: whether it was registered at the exchange or not, and whether it was fulfilled or not.
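The freshness check performed by the trading processes can be sketched like this (the timeout value and function name are assumptions, not taken from the project):

```python
import time

# Assumed staleness limit for cached balances (illustrative value).
BALANCE_EXPIRE_TIMEOUT = 900  # seconds

def balance_is_fresh(last_update_ts, now=None, timeout=BALANCE_EXPIRE_TIMEOUT):
    """Return True if the cached balance was updated recently enough to trade on."""
    if now is None:
        now = time.time()
    return (now - last_update_ts) <= timeout

# A trading process would call this before placing orders
# and stop immediately if it returns False.
```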

Data analysis:

  • order saving - saves all orders placed by the arbitrage processes into the Postgres db
  • bot trade retrieval - retrieves information from all exchanges about recently executed trades
  • arbitrage monitoring - reads tickers for all supported currencies across all exchanges and issues a Telegram notification about direct arbitrage opportunities

You may find all available services under the services package.
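The direct-arbitrage check used by arbitrage monitoring can be sketched as follows (the ticker layout and threshold are assumptions for illustration): an opportunity exists when the bid on one exchange exceeds the ask on another by more than a threshold.

```python
def find_direct_arbitrage(tickers, threshold=1.01):
    """tickers: {exchange_name: {"bid": float, "ask": float}} for one currency pair.

    Yields (buy_exchange, sell_exchange, ratio) whenever buying at one
    exchange's ask and selling at another's bid beats the threshold."""
    for buy_ex, buy_ticker in tickers.items():
        for sell_ex, sell_ticker in tickers.items():
            if buy_ex == sell_ex:
                continue
            ratio = sell_ticker["bid"] / buy_ticker["ask"]
            if ratio > threshold:
                yield buy_ex, sell_ex, ratio
```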



Node with data and cache:

  • redis in place:
sudo service docker start
cd ~/crypto_deploy/redis
# NOTE: you may want to edit the path to the mounted volume for data persistence
sudo docker-compose -f redis_compose.yml up
# NOTE: the redis IP is hardcoded there
  • postgres in place with the proper schema and data:
sudo service docker start
cd ~/crypto_deploy/postgres
# NOTE: you may want to edit the path to the mounted volume for data persistence
sudo docker-compose -f docker-compose-postgres.yml up
psql -h -U postgres -f schema/schema.sql
psql -h -U postgres -f schema/data.sql
  • data retrieval & nodes with bot processes:
yum groupinstall "Development Tools"
pip install -r requirements.txt
  • copy common_sample.cfg to common.cfg and update it with the proper public IP addresses and domain names
  • make sure that the firewall rules at AWS allow incoming connections from the bot nodes to the data node

Deploying data retrieval services

This will deploy:

  • order_book, history, candles
  • arbitrage notifications based on tickers
  • telegram notifications

Deploying arbitrage bots

  1. verify the settings in the config file: more deploy/deploy.cfg
  2. initiate the deployment process: python deploy/deploy.cfg

How to run dedicated services from subfolder:

python -m services.telegram_notifier

Kill ALL processes

ps -ef | grep arbitrage | awk '{print $2}' | xargs kill -9

or just

pkill python

Kill ALL screen sessions (macOS)

screen -ls | awk '{print $1}' | xargs -I{} screen -S {} -X quit

screen -ls | grep -v deploy | awk '{print $1}' | xargs -I{} screen -S {} -X quit  # same, but keeps sessions whose name contains "deploy"

alias cleanscreen="screen -ls | tail -n +2 | head -n -1|cut -d'.' -f 1 |xargs kill -9 ; screen -wipe"

alias bot_count='ps -ef | grep python | wc -l'
alias bot_kill='pkill python'
alias bot_stop_screen="screen -ls | tail -n +2 | head -n -1|cut -d'.' -f 1 |xargs kill -9 ; screen -wipe"

Rename an existing screen session

screen -S old_session_name -X sessionname new_session_name


ssh -v -N -L 7777: -i .ssh/crptdb_sec_openssh -l dima -p 8883
ssh -i .ssh/crptdb_sec_openssh -v [email protected] -p 8883
ssh [email protected] -p 8883

macOS dependencies:

pip install python-telegram-bot --user


How to inspect an arbitrary Redis key: run TYPE <key> and, depending on the response, perform:

  • "string": GET <key>
  • "hash": HGETALL <key>
  • "list": LRANGE <key> 0 -1
  • "set": SMEMBERS <key>
  • "zset": ZRANGE <key> 0 -1 WITHSCORES
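The same dispatch, written against a redis-py-style client (the function name is mine; pass any connected `redis.StrictRedis` instance):

```python
def dump_key(r, key):
    """Run TYPE on a key, then fetch its value with the matching read command."""
    key_type = r.type(key)
    if isinstance(key_type, bytes):  # redis-py returns bytes by default
        key_type = key_type.decode()
    if key_type == "string":
        return r.get(key)
    if key_type == "hash":
        return r.hgetall(key)
    if key_type == "list":
        return r.lrange(key, 0, -1)
    if key_type == "set":
        return r.smembers(key)
    if key_type == "zset":
        return r.zrange(key, 0, -1, withscores=True)
    raise ValueError("unsupported key type: %r" % key_type)

# Usage with a live server:
#   import redis
#   r = redis.StrictRedis(host="localhost", port=6379)
#   print(dump_key(r, "some_key"))
```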


Postgres backups:

pg_dump -h -p 5432 -U postgres -F c -b -v -f "/home/dima/full_DDMMYYYY"
pg_dump -h -p 5432 -U postgres -s public
-- How to do a full dump excluding particular tables
pg_dump -h -p 5432 -U postgres -F c -b -v --exclude-table=alarams --exclude-table=tmp_binance_orders --exclude-table=tmp_history_trades --exclude-table=tmp_trades --exclude-table=trades -f "/home/dima/full_DDMMYYYY"


psql --port=5432 --username=postgres --password --dbname=crypto


How to get the ID of a Telegram chat:

How to check what is happening with the bot:


Get all tradable pairs

Socket subscription endpoints:

Wrapping the websocket into a class with callbacks:

Exchange APIs: not much info; Kraken - n/a

Examples of implementation:

Rounding rules

Set up balance monitoring from scratch

sudo curl -L`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo yum install docker mc git
sudo service docker start
sudo /usr/local/bin/docker-compose -f docker_compose.yml up
scp -i wtf.pem -r crypto_crawler/secret_keys/ [email protected]:/tmp/


sudo logrotate -s /var/log/logstatus /etc/logrotate.conf
/home/ec2-user/crypto_crawler/logs/*.log {
    size 10M
    rotate 10
}

sudo vim /etc/crontab
*/5 * * * * root logrotate -f /etc/logrotate.conf

sudo service crond restart

Logs analysis

How to find last modified files recursively:

find $1 -type f -print0 | xargs -0 stat --format '%Y :%y %n' | sort -nr | cut -d: -f2- | head

How to merge all files into a single file, sorted by numerical index:

ls socket_errors.log* | sort -Vr | xargs cat > history.log

How to find all lines in the logs containing a given PID and sort the entries by time:

grep 'PID: 9848' *.log* | sed 's/:/ : /'  | sort -k 3 > 9848_sorted_by_time.log

How to select log entries that are within particular time range:

awk '($3 >= 1553066553) && ($3<=1553066599)' 9848_1.log > suspect.log

How to build processing histogram:

head all_profile.log
1553052602 :  PID: 19399 Start: 1553052602200 ms End: 1553052602201 ms Runtime: 1 ms
1553052602 :  PID: 19115 Start: 1553052602187 ms End: 1553052602201 ms Runtime: 14 ms
1553052602 :  PID: 18629 Start: 1553052602201 ms End: 1553052602202 ms Runtime: 1 ms

awk '{ print $12 }' all_profile.log | sort -n | uniq -c

Anaconda profit report How-To (Windows)

  1. Install Anaconda for Python 2.7
  2. Run Start->Programs->Anaconda Prompt
  3. Install the necessary dependencies using pip:
    pip install redis tqdm
  4. Run Start->Programs->Jupyter Notebook
  5. Open the notebook ipython_notebooks/iPython_local_Input.ipynb
  6. Adjust the following parameters:
  • should_fetch_data
  • time_end
  • time_start
  • api_key_full_path
  7. Sequentially execute all cells
  8. The profit report should be under your %HOME%/logs folder

How to set up dynuiuc domain name updates

more /usr/lib/systemd/system/dynuiuc.service

ExecStart=/usr/bin/dynuiuc --conf_file /etc/dynuiuc/dynuiuc.conf --log_file /var/log/dynuiuc.log --pid_file /var/run/ --daemon
ExecReload=/bin/kill -HUP $MAINPID
# DK manually


sudo systemctl enable dynuiuc.service
sudo service dynuiuc start

python -m services.arbitrage_between_pair_subscription --threshold 1.2 --reverse_threshold 0.71 --balance_threshold 15 --sell_exchange_id 4 --buy_exchange_id 4 --pair_id 1 --deal_expire_timeout 15 --cfg deploy/deploy.cfg

Postgres various

-- estimated number of live rows per table
SELECT schemaname,relname,n_live_tup 
  FROM pg_stat_user_tables 
  ORDER BY n_live_tup DESC;
