SiiS is an auto-trading bot for the forex, indices and cryptocurrency markets. It also supports semi-automated trading, letting you manage your entries and exits with more possibilities than the exchanges allow.
It is developed in Python 3, using TA-Lib, numpy, and matplotlib for the basic charting client.
If this project helped you out feel free to donate.
SiiS requires Python 3.6 or Python 3.7 on your system. It has been tested on Debian, Ubuntu and Fedora.
python -m venv siis.venv
source siis.venv/bin/activate
You need to activate it each time you open your terminal before running SiiS.
From the deps/ directory, first install TA-Lib (the C library needed by the Python binding):

tar xvzf ta-lib-0.4.0-src.tar.gz
cd ta-lib
cp ../patch/ta_utility.h src/ta_func
./configure
make

This includes a patch necessary to get correct Bollinger Bands values on markets with a very low price (< 0.0001); otherwise all the values would be the same.
You may also need the build-essential packages from your distribution repository, in order to have GCC, Make and Autotools.
Finally, to install it into /usr/local:
sudo make install
Alternatively, if you have installed TA-Lib in a custom prefix (e.g., with ./configure --prefix=$PREFIX), then you have to define 2 variables before installing the requirements:

export TA_LIBRARY_PATH=$PREFIX/lib
export TA_INCLUDE_PATH=$PREFIX/include
For more details on TA-lib installation please visit : https://github.com/mrjbq7/ta-lib
From the siis base directory:
pip install -r deps/requirements.txt
Then, depending on which database storage you use:
pip install -r deps/reqspgsql.txt  # if using PostgreSQL (recommended)
pip install -r deps/reqsmysql.txt  # or if using MySQL
You might need to install the C client library first. Please refer to the psycopg2 or MySQLdb Python package documentation. On Debian-based systems, for PostgreSQL you will need to install libpq-dev first (apt-get install libpq-dev).
Before running SiiS, the lib folder containing TA-Lib must be found in the LD_LIBRARY_PATH. If it was installed in the default directory (/usr/local/lib):

export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
Prefer the PostgreSQL database server, for performance, and because most of my testing has been done with it. Another argument in favor of PostgreSQL is the TimescaleDB extension for time-series data, which increases performance tremendously.
The sql/ directory contains the initial SQL scripts for creating the tables. The first comment line of each file describes a possible way to install them.
A future version will require TimescaleDB for optimized time-series data.
As root (or with sudo):
sh -c "echo 'deb https://packagecloud.io/timescale/timescaledb/debian/ `lsb_release -c -s` main' > /etc/apt/sources.list.d/timescaledb.list"
wget --quiet -O - https://packagecloud.io/timescale/timescaledb/gpgkey | sudo apt-key add -
sudo apt-get update
sudo apt-get install timescaledb-postgresql-11
You may have to replace `lsb_release -c -s` with buster or bullseye if you are on Debian sid.
su - postgres
If you are using TCP socket connection do :
psql -h localhost -U root -W -p 5432
Or using local unix socket :
psql -U root -W
Then in psql CLI :
CREATE DATABASE siis;
CREATE USER siis WITH ENCRYPTED PASSWORD 'siis';
GRANT ALL PRIVILEGES ON DATABASE siis TO siis;
For the future usage of TimescaleDB (not necessary for now):
CREATE EXTENSION timescaledb;
Now exit the psql CLI (\q or CTRL-D).
You can run the table creation script :
Using TCP socket connection do :
psql -h localhost -d siis -U siis -W -p 5432 -a -q -f sql/initpg.sql
Or using local unix socket :
psql -d siis -U siis -W -a -q -f sql/initpg.sql
The first run will try to create a data directory structure for your local user.
This directory will contain 4 sub-directories:
Each JSON file of the config directory can be overridden by adding your own copy of the file in your local siis/config directory. Every parameter can be overridden, and new entries can be inserted, but NEVER modify the original files.
List of the files in config directory :
List of the sub-directories of config :
The 'siis' database configuration (type is pgsql or mysql). There is only one database for now.
The default configuration should suffice, and you can override most of the parameters in your profiles.
There is one configuration per broker, giving the capacity to connect to a broker and to watch price data and user trade data. The values can be overridden per appliance; here these are the general settings.
The default configuration should suffice, and you can override most of the parameters in your profiles.
There is one entry per broker, enabling the trading feature for live mode. The values can be overridden per appliance; here these are the general settings.
Contains the configuration of the listening service, used to connect a future Web tool that will control SiiS in a friendlier way than the CLI.
You have two directories, .siis/config/profiles/ and .siis/config/appliances/, and some templates in the source config directory. You must define one file per profile and one file per appliance; the file name acts as the reference name.
A profile refers to zero, one or many appliances. This is the profile name used with the command line option --profile=<profilename>. It is a mix of one or many appliances that can be run on the same instance of SiiS, with traders and watchers options overriding.
The file name acts as the name of the profile, minus the file extension. If no profile is specified on the command line, the default profile is used.
Content of a <myprofile>.json :
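The exact keys depend on your SiiS version; look at the templates in the source config directory. As a purely hypothetical sketch (every key name below is an assumption, not a documented format):

```json
{
    "appliances": ["my-appliance1"],
    "watchers": {
        "binance.com": {
            "symbols": ["BTCUSDT", "ETHUSDT"]
        }
    },
    "traders": {
        "binance.com": {
            "paper-mode": {
                "currency": "USDT",
                "initial": 1000
            }
        }
    }
}
```

The idea is only to show the shape: a profile selects the appliances to run and can override watcher and trader options.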
The file name acts as the name of the appliance, minus the file extension.
Content of a <myappliance>.json :
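Again a purely hypothetical sketch (all key names are assumptions; refer to the templates in the source config directory for the real schema):

```json
{
    "strategy": {
        "name": "example-strategy",
        "parameters": {}
    },
    "watcher": "binance.com",
    "trader": "binance.com",
    "symbols": ["BTCUSDT"]
}
```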
This is the most sensitive file, as it contains your API keys. There is a config/identity.json.template file. Do not modify the template file; it will not be read.
Parameters:
* the identifiers of the different brokers
* profile names (for my usage I have real and demo)
* the specific values needed by the connector (API key, account identifier, password...)
The template shows you the values needed to configure the supported brokers.
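To illustrate the shape of the parameters listed above (the field names here are assumptions; the real keys for each broker are given by identity.json.template):

```json
{
    "binance.com": {
        "real": {
            "api-key": "your-api-key",
            "api-secret": "your-api-secret"
        },
        "demo": {
            "api-key": "...",
            "api-secret": "..."
        }
    }
}
```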
Each broker has its own usage name, creating a directory. Then there is a sub-directory per market, identified by the unique broker market name.
Then there is a sub-directory T/, meaning tick or trade. Finally there are many files for the tick or trade data. For Binance this is at the aggregate trade level, for BitMex at the trade level, for IG at the tick level.
There is one file per month; at this time there is both a binary and a tabular version of each file, but later the tabular version might be disabled and no longer stored by default.
See more details on the data fetching section.
Some strategies have the capacity to log trades, signals, performance and more.
The reports directory will contain a sub-directory per appliance, with a second-level sub-directory named after the market. This is the basic initial structure of the reports data files.
Inside, the content can differ for each strategy.
python siis.py <identity> [--help, --options...]
You need to define the name of the identity to use. This is related to the name defined in the identity.json file. Except for the tools (fetch, binarize, optimize, rebuild, sync, export, import), the name of the profile to use, --profile=<profilename>, must be specified.
There are different running modes: the normal mode will start the watching and trading capacity (paper-mode, live or backtesting) and offer an interactive terminal session; or you can run one of the specific tools (fetcher, binarizer, optimizer, syncer, rebuilder...).
Fetching is for getting historical market data: OHLC, and also trade/tick data. OHLCs go into the SQL database; trade/tick data go into binary files, organized in the markets/ directory.
Starting with an example will be easier:
python siis.py real --fetch --broker=binance.com --market=*USDT,*BTC --from=2017-08-01T00:00:00 --to=2019-08-31T23:59:59 --timeframe=1w
This example will fetch every weekly OHLC of pairs based on USDT and BTC, from 2017-08-01 to 2019-08-31. Common timeframes are formed of a number plus a letter (s for second, m for minute, h for hour, d for day, w for week, M for month). Here we only want the weekly OHLC, hence --timeframe=1w.
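The timeframe notation can be sketched as a tiny parser. This is only an illustration, not SiiS's internal code, and mapping M to 30 days is an assumption:

```python
# Illustrative parser for the "<number><letter>" timeframe notation.
# Not SiiS's internal code; the 30-day month value is an assumption.
UNITS = {
    's': 1,        # second
    'm': 60,       # minute
    'h': 3600,     # hour
    'd': 86400,    # day
    'w': 604800,   # week
    'M': 2592000,  # month, assumed to be 30 days here
}

def timeframe_to_seconds(tf: str) -> int:
    """Convert a timeframe like '1w' or '5m' to a duration in seconds."""
    return int(tf[:-1]) * UNITS[tf[-1]]

print(timeframe_to_seconds('1w'))  # 604800
print(timeframe_to_seconds('5m'))  # 300
```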
Define the datetime range using --from=<datetime> and --to=<datetime>. The format of the datetime is: 4-digit year, 2-digit month, 2-digit day of month, a T separator (meaning time), 2-digit hours, 2-digit minutes, 2-digit seconds. The datetime is interpreted as UTC.
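That format maps directly to a strptime pattern; a minimal sketch (illustrative, not SiiS's actual parsing code):

```python
from datetime import datetime, timezone

def parse_cli_datetime(value: str) -> datetime:
    """Parse a --from/--to value like '2017-08-01T00:00:00' as UTC."""
    return datetime.strptime(value, '%Y-%m-%dT%H:%M:%S').replace(tzinfo=timezone.utc)

dt = parse_cli_datetime('2017-08-01T00:00:00')
print(dt.isoformat())  # 2017-08-01T00:00:00+00:00
```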
The optional --cascaded=<max-timeframe> option will generate the higher multiples of the OHLC, up to one of (1m, 5m, 15m, 1h, 4h, 1d, 1w). Non-multiple timeframes (like 3m or 45m) are not generated with cascaded; because of the nature of the cascading implementation it is not possible. You have to use the rebuild command option to generate these OHLCs from the direct sub-multiple.
For example, this will fetch the 5m OHLCs from the broker, and then generate the 15m, 30m, 1h, 2h, 4h and 1d from them:
python siis.py real --fetch --broker=binance.com --market=BTCUSDT --from=2017-08-01T00:00:00 --to=2019-08-31T23:59:59 --timeframe=5m --cascaded=1d
The market must be the unique market id of the broker, not the common usual name. The comma acts as a separator. The wildcard * can be placed at the beginning of a market identifier. The negation ! can be placed at the beginning of a market identifier to exclude a specific market when a wildcard filter is also used. For example, --market=*USDT,!BCHUSDT will fetch any USDT-based pair except BCHUSDT.
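Those filter rules can be re-implemented in a few lines; this is an illustrative re-implementation of the semantics described above, not SiiS's actual code:

```python
def market_selected(market: str, spec: str) -> bool:
    """Return True if 'market' passes a filter like '*USDT,!BCHUSDT'."""
    selected = False
    for pat in spec.split(','):
        negate = pat.startswith('!')
        if negate:
            pat = pat[1:]
        # a leading '*' means "ends with the rest of the pattern"
        hit = market.endswith(pat[1:]) if pat.startswith('*') else market == pat
        if hit and negate:
            return False  # an explicit exclusion always wins
        if hit:
            selected = True
    return selected

print(market_selected('ETHUSDT', '*USDT,!BCHUSDT'))  # True
print(market_selected('BCHUSDT', '*USDT,!BCHUSDT'))  # False
```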
If you only need to fetch the last n recent OHLCs, you can use the --last=<number> option.
The optional --spec option can be necessary for some fetchers, like alphavantage.co where you have to specify the type of the market (--spec=STOCK).
Getting the trade/tick level implies defining --timeframe=t.
python siis.py real --fetch --broker=binance.com --market=BTCUSDT --from=2017-08-01T00:00:00 --to=2019-08-31T23:59:59 --timeframe=t
You can set the --cascaded option even from the tick/trade timeframe. For example, a complete fetch from 1m to 1w:
python siis.py real --fetch --broker=binance.com --market=BTCUSDT --from=2017-08-01T00:00:00 --to=2019-08-31T23:59:59 --timeframe=t --cascaded=1w
In the scripts/ directory there are some examples of how to fetch your data using a bash script. These scripts can even be added as crontab entries.
Take care that some brokers have limitations. For example IG limits to 10000 candles per week, and this limit is easy to reach. Some others, like BitMex, limit to 30 queries per second in non-authenticated mode, or 60 in authenticated mode. Concretely, that means getting months of trade data can take more than a day.
Let's start with an example:
python siis.py real --profile=my-backtest1 --backtest --from=2017-08-01T00:00:00 --to=2017-12-31T23:59:59 --timestep=15
Backtesting, like live and paper-mode, needs to know which profile to use. Let's define a profile file named my-backtest1.json in .siis/config/profiles/, and an appliance file that must be referenced from the profile file.
The datetime range must be defined with --from and --to, and a timestep must be specified. This will be the minimal increment of time, in seconds, between two iterations. The smaller the timestep, the longer the computation will take; but if you have a strategy working at the tick/trade level, the backtesting will be more accurate.
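A quick way to estimate the cost of a backtest is to divide the datetime range by the timestep; purely illustrative arithmetic for the example above:

```python
from datetime import datetime

# Range and timestep taken from the example command line above.
start = datetime(2017, 8, 1, 0, 0, 0)
end = datetime(2017, 12, 31, 23, 59, 59)
timestep = 15.0  # seconds

iterations = int((end - start).total_seconds() / timestep)
print(iterations)  # 881279 iterations for ~5 months at a 15s step
```

The smaller the timestep, the more iterations, hence the longer run time.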
The C++ version (WIP) has no such performance issue (it can run 1000x to 10000x faster than the Python version).
Imagine your strategy works on the close of the 4h OHLC: you can run your backtest with --timestep=4h. Or imagine your strategy works on the close of the 5m, but you want the exit of a trade to be more reactive than 5m, because the price can move briefly in a few seconds: you will probably get different results using a smaller timestep.
Ideally a timestep of 0.1 will give accurate results, but the computation will take many hours. Some optimizations, recomputing only the last value of the indicators, would probably give a bit of performance, but the main problem remains the nature of Python: without C/C++ sub-modules I have no idea how to optimize it. The GIL is slow, Python lists and slicing are slow, and even a simple loop takes a lot of time compared to C/C++.
Originally I developed this backtesting feature with a focus on replaying multiple markets on a virtual account, not only on backtesting the raw performance of a strategy.
Adding --time-factor=<factor> will add a supplementary delay during the backtesting. The idea is that if you replay a recent period, you have the time to interact manually, like replaying a semi-automated day of scalping. The factor is a multiple of time: 1 means real-time, and 60 means 1 minute of simulation per second.
Understand that the given strategies act here as examples; you can use them, they can work on some patterns and fail on others. Consider writing your own, or using SiiS as a trading monitor with improved trade following, dynamic stop-loss and take-profit. Some fixes may be needed for the current strategies; they serve as a lab, and I will not publish my always-winning unicorn strategy ^^.
Trading with live data but on a virtual local simulated trading account.
python siis.py real --profile=bitmex-xbteth1 --paper-mode
Here 'real' means the name of the identity to use, related to the API keys.
Adding --paper-mode will create a paper-trader instance in place of a connector to your real broker account. The initial amounts of margin or asset quantities must be configured in the profiles.
At this time slippage is not simulated. Orders are executed at the bid/ofr price according to the direction. The order book is not used to look at the really offered quantities, so orders are filled in a single trade, without slippage.
A slippage factor will be implemented soon.
In that case the watchers run and store OHLC and tick/trade data (or not, if --read-only is specified). In paper-mode, OHLCs are stored to the database just like in normal live mode.
Trading with live data using your real or demo trading account.
python siis.py real --profile=bitmex-xbteth1
Trades will be executed on your trading account.
I suggest you first test with a demo account or a testnet. Once you are confident with your strategy, the interface and the stability, try with a small amount on a real account, before finally letting the bot play with bigger amounts. Please read the disclaimer at the bottom of this file.
By default, OHLCs are stored to the database in live mode, but the trades/ticks must be fetched manually, except for IG which stores the ticks during live mode by default, because it is not possible to get them back from the history.
SiiS offers a basic but self-sufficient set of commands and keyboard shortcuts to manage and control your trades, and to look at your account, markets, tickers, trades, orders, positions and strategy performance.
In addition there is a charting feature using matplotlib. The goal is to finish the monitoring service, and to build a Web client to monitor and manage each instance.
During the execution of the program you can type a command starting with a colon (:) plus the name of the command. First type the :help command. To exit, the command is q: type : followed by q, then press enter.
There are some direct keys, not using the colon, in default mode, and some complex commands in command mode.
The :help command gives you the list of shortcuts and commands; :help <command-name> gives detailed help for a specific command.
The tick or trade data (price, volume) are stored while running, or when fetching data at the tick timeframe. The OHLC data are stored in the SQL database. By default, all candles from 1m to 1w are stored and kept indefinitely. The databases.json file defines an "auto-cleanup" option, false by default; if set to true, the older OHLCs are cleaned up every 4 hours.
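Only the "auto-cleanup" key is documented here; the surrounding layout below is an assumption based on the single 'siis' database mentioned earlier:

```json
{
    "siis": {
        "type": "pgsql",
        "auto-cleanup": true
    }
}
```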
I will probably add more options in databases.json to configure the maximum number of OHLCs kept per timeframe, and create a special db-cleanup running mode that only processes the cleanup for live servers.
For live mode there is no interest in keeping too much history for the low timeframes, but it is necessary to keep it for backtesting.
You can use the rebuild command to rebuild missing OHLCs from a sub-multiple or from tick/trade data.
It is also possible to set up your own crontab with an SQL script that cleans up your own way.
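For example, a crontab entry could look like the following; cleanup.sql is a hypothetical script that you would write yourself (SiiS does not ship it):

```
# run your own cleanup SQL script every night at 03:00
0 3 * * * psql -d siis -U siis -f /home/trader/cleanup.sql
```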
The strategy calls the watchers to prefetch the most recent OHLCs for its timeframes. The default is 100 OHLCs (binance, bitmex, kraken), but this can be a problem with IG because of the 10k samples history limit per week, so for now no more than 1 or 2 OHLCs per timeframe are prefetched for IG.
For convenience I have made some bash scripts to frequently fetch OHLCs, and some other scripts (look at the scripts/ directory for examples) that I run just before starting a live instance to do a prefetch (only the last N candles).
About the files containing the ticks, this design has a downside. The upside is high performance: because of Python this is not very impressive yet, but the C++ version can read millions of ticks per second, more than any timestamp-based DB engine. The downside comes from having chosen one file per month (per market): the temporal consistency of the data. There is no check of the timestamp before appending, so fetching could append to a file already containing more recent data, possibly leaving gaps.
You can use the optimize command option to check your data, for trades/ticks and for any OHLC timeframe.
Trades/ticks are not stored by default while the watcher is running, except for IG, because it is not possible to get the history back from their API. The problem is that if you don't keep an instance running all week, you will have some gaps. In that case you could manage to restart the bot only once per week, during the weekend, and apply your strategy changes at that time.
Finally, you can disable the writing of the OHLCs generated during watching with the --read-only option.
SiiS uses distinct threads per watcher, per WebSocket, per trader, per strategy, plus a pool of workers for the strategies traders, and potentially some others threads for notification and communication extra services.
Because of the Python GIL, threads are efficient for IO operations, but not for computing.
Performance seems good, tested with 100+ traded markets watched at the trade/tick level. It could be interesting to use the asyncio capacities to distribute the computing, but at the cost of extra communication and additional latency.
I recommend using only a single watcher/trader and a single strategy/appliance per profile. More will require extra threads, causing possible global latencies.
In addition, if you have different sets of parameters for your markets, you might prefer to use distinct profiles, and thus distinct instances of SiiS. For example, you trade EURUSD and USDJPY using the same profile and instance, but you have a separate profile and instance for trading indices, and another for trading commodities.
Another example: you trade pairs based on USDT, but you distinguish 4 sorts of markets: serious coins, alt coins, shit coins and low-volume shit coins. Then you could have 4 distinct profiles, and thus 4 instances; and more probably you would have different strategies.
Yet another example: a broker offers trading on assets and some other pairs on margin; consider having different profiles, and thus different instances too.
Finally, you could set up different VPSes, one instance per VPS: fewer failures, lower resource usage, and you can adjust the hardware to the optimal point.
TA-lib is not found: check that you have installed it, and you may have to export your LD_LIBRARY_PATH.
Backtesting is slow: I know; you can increase the timestep, but then the results will be less accurate, mostly depending on whether the strategy works at close or at each tick/trade, and whether the timestep is an integer divisor of the strategy base timeframe. When I have more time, or a lot of feedback, I will spend more time developing the C++ version.
Fetching historical data is slow: it depends on the exchange and the timeframe. Fetching the trade history from BitMex takes a lot of time; be patient, this is due to their API limitations.
Exception during fetch of BitMex trades: it happens, and I have no idea why at this time; an unexpected API response generates a program exception, which requires restarting the fetch at the time of the failure. I will investigate this issue later.
BitMex WS connection error: their WebSockets are very annoying; if you restart the bot you have to wait 2 or 3 minutes, because it will keep rejecting you until you do.
BitMex overloads: the bot retries the order, like 5, 10 or 15 times. I could make this a configurable option, but sometimes it will not suffice; consider that you missed the train.
BitMex rejects your API call, a story of expired timestamp: your server time is not synced with a global NTP server. BitMex says the timestamp is too far in the past or in the future. If your server does not have an NTP service, consider installing one, update the datetime of your system, and then restart the bot.
Binance starting is slow: yes, prefetching lots of USDT and BTC markets takes a while, many minutes; be patient, your bot will not have to be restarted every day once correctly configured. For testing, consider limiting the configured symbol lists.
IG candle limit of 10k reached: do the maths: how many markets do you want to initiate and fetch, and how much candle history do you need; find your way, or try asking them to increase your limit. I have no solution for this problem because it is out of my hands.
In paper-mode (live or backtesting), margin or asset quantity is missing: a recent problem reappeared with BitMex markets; I have to investigate, it is annoying for live paper-mode and for backtesting. Similar issues can appear with asset quantities. It is on the priority list. Maybe I will plan to have percent P/L only, where the paper-trader accepts any trade.
Please understand that I develop this project during my free time, and for free; only your donations can help me.
The authors are not responsible for the losses on your trading accounts that you could incur using SiiS, nor for data losses, corruption, computer crashes or physical damage to your computers or to the cloud you use.
The authors are not responsible for losses due to a lack of security on your systems.
Use SiiS at your own risk; backtest your strategies many times before running them on a live account. Test the stability, test the efficiency, and take into account the potential execution slippage and latency caused by the network, the broker, or an inadequate system.