| Project Name | Stars | Downloads | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description |
|---|---|---|---|---|---|---|---|---|---|
| Pycaret | 7,090 | 13 | a day ago | 83 | June 06, 2022 | 264 | mit | Jupyter Notebook | An open-source, low-code machine learning library in Python |
| Sktime | 6,288 | | 2 days ago | | | 677 | bsd-3-clause | Python | A unified framework for machine learning with time series |
| Darts | 5,592 | 7 | a day ago | 25 | June 22, 2022 | 213 | apache-2.0 | Python | A python library for user-friendly forecasting and anomaly detection on time series. |
| Autogluon | 5,509 | | 17 hours ago | | | 219 | apache-2.0 | Python | AutoGluon: AutoML for Image, Text, Time Series, and Tabular Data |
| Data Science | 3,528 | | 8 days ago | | | 4 | | Jupyter Notebook | Collection of useful data science topics along with articles, videos, and code |
| Gluonts | 3,430 | 7 | 20 hours ago | 58 | June 30, 2022 | 348 | apache-2.0 | Python | Probabilistic time series modeling in Python |
| Tsai | 3,239 | 1 | 2 days ago | 41 | April 19, 2022 | 21 | apache-2.0 | Jupyter Notebook | State-of-the-art Deep Learning library for Time Series and Sequences in Pytorch / fastai |
| Merlion | 2,921 | | 6 days ago | 14 | June 28, 2022 | 14 | bsd-3-clause | Python | Merlion: A Machine Learning Framework for Time Series Intelligence |
| Neural_prophet | 2,852 | | 17 hours ago | 7 | March 22, 2022 | 98 | mit | Python | NeuralProphet: A simple forecasting package |
| Pytorch Forecasting | 2,698 | 4 | 21 hours ago | 33 | May 23, 2022 | 362 | mit | Python | Time series forecasting with PyTorch |
Merlion is a Python library for time series intelligence. It provides an end-to-end machine learning framework that includes loading and transforming data, building and training models, post-processing model outputs, and evaluating model performance. It supports various time series learning tasks, including forecasting, anomaly detection, and change point detection for both univariate and multivariate time series. This library aims to provide engineers and researchers with a one-stop solution to rapidly develop models for their specific time series needs, and to benchmark them across multiple time series datasets.
Merlion's key features include `DefaultDetector` and `DefaultForecaster` models that are efficient, robustly achieve good performance, and provide a starting point for new users. The table below provides a visual overview of how Merlion's key features compare to other libraries for time series anomaly detection and/or forecasting.
| | Merlion | Prophet | Alibi Detect | Kats | darts | statsmodels | nixtla | GluonTS | RRCF | STUMPY | Greykite | pmdarima |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Univariate Forecasting | ✅ | | | | | | | | | | | |
| Multivariate Forecasting | ✅ | | | | | | | | | | | |
| Univariate Anomaly Detection | ✅ | | | | | | | | | | | |
| Multivariate Anomaly Detection | ✅ | | | | | | | | | | | |
| Pre Processing | ✅ | | | | | | | | | | | |
| Post Processing | ✅ | | | | | | | | | | | |
| AutoML | ✅ | | | | | | | | | | | |
| Ensembles | ✅ | | | | | | | | | | | |
| Benchmarking | ✅ | | | | | | | | | | | |
| Visualization | ✅ | | | | | | | | | | | |
The following features are new in Merlion 2.0:
| | Merlion | Prophet | Alibi Detect | Kats | darts | statsmodels | nixtla | GluonTS | RRCF | STUMPY | Greykite | pmdarima |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Exogenous Regressors | ✅ | | | | | | | | | | | |
| Change Point Detection | ✅ | | | | | | | | | | | |
| Clickable Visual UI | ✅ | | | | | | | | | | | |
| Distributed Backend | ✅ | | | | | | | | | | | |
Merlion consists of two sub-repos: `merlion` implements the library's core time series intelligence features, and `ts_datasets` provides standardized data loaders for multiple time series datasets. These loaders load time series as `pandas.DataFrame`s with accompanying metadata.
You can install `merlion` from PyPI by calling `pip install salesforce-merlion`. You may install from source by cloning this repo and calling `pip install Merlion/`, or `pip install -e Merlion/` to install in editable mode. You may install additional dependencies via `pip install salesforce-merlion[all]`, or by calling `pip install "Merlion/[all]"` if installing from source.
Individually, the optional dependencies include `dashboard` for a GUI dashboard, `spark` for a distributed computation backend with PySpark, and `deep-learning` for all deep learning models.
To install the data loading package `ts_datasets`, clone this repo and call `pip install -e Merlion/ts_datasets/`. This package must be installed in editable mode (i.e. with the `-e` flag) if you don't want to manually specify the root directory of every dataset when initializing its data loader.
Note the following external dependencies:

- Some of our forecasting models depend on OpenMP. If using `conda`, please run `conda install -c conda-forge lightgbm` before installing our package. This ensures that OpenMP is configured to work with the `lightgbm` package (one of our dependencies) in your `conda` environment. On Mac, please install Homebrew and call `brew install libomp` so that the OpenMP library is available for the model.
- Some of our anomaly detection models depend on the Java Development Kit (JDK). On Ubuntu, call `sudo apt-get install openjdk-11-jdk`. On Mac OS, install Homebrew and call `brew tap adoptopenjdk/openjdk && brew install --cask adoptopenjdk11`. Also ensure that `java` can be found on your `PATH`, and that the `JAVA_HOME` environment variable is set.
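If `JAVA_HOME` is not already set, a minimal sketch of exporting it in your shell profile (the `java_home` helper is macOS-specific, and the Ubuntu path shown is the typical install location for the `openjdk-11-jdk` package on amd64; your paths may differ):

```shell
# macOS: locate the JDK 11 installation and export JAVA_HOME
export JAVA_HOME=$(/usr/libexec/java_home -v 11)

# Ubuntu: the openjdk-11-jdk package typically installs here
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64

# Verify that java is on your PATH and JAVA_HOME is set
java -version
echo "$JAVA_HOME"
```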
For example code and an introduction to Merlion, see the Jupyter notebooks in `examples` and the guided walkthrough here. You may find detailed API documentation (including the example code) here. The technical report outlines Merlion's overall architecture and presents experimental results on time series anomaly detection & forecasting for both univariate and multivariate time series.
The easiest way to get started is to use the GUI web-based dashboard. This dashboard provides a great way to quickly experiment with many models on your own custom datasets. To use it, install Merlion with the optional `dashboard` dependency (i.e. `pip install salesforce-merlion[dashboard]`), and call `python -m merlion.dashboard` from the command line. You can view the dashboard at http://localhost:8050. Below, we show some screenshots of the dashboard for both anomaly detection and forecasting.
To help you get started with using Merlion in your own code, we provide below some minimal examples using Merlion default models for both anomaly detection and forecasting.
Here, we show the code to replicate the results from the anomaly detection dashboard above.
We begin by importing Merlion's `TimeSeries` class and the data loader for the Numenta Anomaly Benchmark `NAB`. We can then divide a specific time series from this dataset into training and testing splits.
```python
from merlion.utils import TimeSeries
from ts_datasets.anomaly import NAB

# Data loader returns pandas DataFrames, which we convert to Merlion TimeSeries
time_series, metadata = NAB(subset="realKnownCause")[3]
train_data = TimeSeries.from_pd(time_series[metadata.trainval])
test_data = TimeSeries.from_pd(time_series[~metadata.trainval])
test_labels = TimeSeries.from_pd(metadata.anomaly[~metadata.trainval])
```
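The `trainval` field used above is just a boolean mask over the rows of the DataFrame (`True` for the train/validation portion). A minimal pandas sketch of the same split logic, using made-up data in place of the loader's output:

```python
import pandas as pd

# Hypothetical stand-in for the DataFrame returned by a ts_datasets loader
index = pd.date_range("2022-01-01", periods=6, freq="h")
time_series = pd.DataFrame({"value": [1.0, 2.0, 3.0, 2.5, 9.0, 2.0]}, index=index)

# Boolean mask marking the train/validation portion, analogous to metadata.trainval
trainval = pd.Series([True, True, True, True, False, False], index=index)

train_df = time_series[trainval]    # first 4 rows
test_df = time_series[~trainval]    # last 2 rows
print(len(train_df), len(test_df))
```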
We can then initialize and train Merlion's `DefaultDetector`, which is an anomaly detection model that balances performance with efficiency. We also obtain its predictions on the test split.
```python
from merlion.models.defaults import DefaultDetectorConfig, DefaultDetector

model = DefaultDetector(DefaultDetectorConfig())
model.train(train_data=train_data)
test_pred = model.get_anomaly_label(time_series=test_data)
```
Next, we visualize the model's predictions.
```python
from merlion.plot import plot_anoms
import matplotlib.pyplot as plt

fig, ax = model.plot_anomaly(time_series=test_data)
plot_anoms(ax=ax, anomaly_labels=test_labels)
plt.show()
```
Finally, we can quantitatively evaluate the model. The precision and recall come from the fact that the model fired 3 alarms, with 2 true positives, 1 false negative, and 1 false positive. We also evaluate the mean time the model took to detect each anomaly that it correctly detected.
```python
from merlion.evaluate.anomaly import TSADMetric

p = TSADMetric.Precision.value(ground_truth=test_labels, predict=test_pred)
r = TSADMetric.Recall.value(ground_truth=test_labels, predict=test_pred)
f1 = TSADMetric.F1.value(ground_truth=test_labels, predict=test_pred)
mttd = TSADMetric.MeanTimeToDetect.value(ground_truth=test_labels, predict=test_pred)
print(f"Precision: {p:.4f}, Recall: {r:.4f}, F1: {f1:.4f}\n"
      f"Mean Time To Detect: {mttd}")
```
```
Precision: 0.6667, Recall: 0.6667, F1: 0.6667
Mean Time To Detect: 1 days 10:22:30
```
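The precision and recall above follow directly from the alarm counts: with 2 true positives, 1 false positive, and 1 false negative, a quick sanity check in plain Python reproduces the reported scores:

```python
# Counts reported above: 3 alarms fired, 2 correct, 1 anomaly missed
tp, fp, fn = 2, 1, 1

precision = tp / (tp + fp)  # 2 of 3 alarms were real anomalies
recall = tp / (tp + fn)     # 2 of 3 anomalies were caught
f1 = 2 * precision * recall / (precision + recall)

print(f"Precision: {precision:.4f}, Recall: {recall:.4f}, F1: {f1:.4f}")
```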
Here, we show the code to replicate the results from the forecasting dashboard above.
We begin by importing Merlion's `TimeSeries` class and the data loader for the `M4` dataset. We can then divide a specific time series from this dataset into training and testing splits.
```python
from merlion.utils import TimeSeries
from ts_datasets.forecast import M4

# Data loader returns pandas DataFrames, which we convert to Merlion TimeSeries
time_series, metadata = M4(subset="Hourly")[0]
train_data = TimeSeries.from_pd(time_series[metadata.trainval])
test_data = TimeSeries.from_pd(time_series[~metadata.trainval])
```
We can then initialize and train Merlion's `DefaultForecaster`, which is a forecasting model that balances performance with efficiency. We also obtain its predictions on the test split.
```python
from merlion.models.defaults import DefaultForecasterConfig, DefaultForecaster

model = DefaultForecaster(DefaultForecasterConfig())
model.train(train_data=train_data)
test_pred, test_err = model.forecast(time_stamps=test_data.time_stamps)
```
Next, we visualize the model's predictions.
```python
import matplotlib.pyplot as plt

fig, ax = model.plot_forecast(time_series=test_data, plot_forecast_uncertainty=True)
plt.show()
```
Finally, we quantitatively evaluate the model. sMAPE measures the error of the prediction on a scale of 0 to 100 (lower is better), while MSIS evaluates the quality of the 95% confidence band on a scale of 0 to 100 (lower is better).
```python
# Evaluate the model's predictions quantitatively
from scipy.stats import norm
from merlion.evaluate.forecast import ForecastMetric

# Compute the sMAPE of the predictions (0 to 100, smaller is better)
smape = ForecastMetric.sMAPE.value(ground_truth=test_data, predict=test_pred)

# Compute the MSIS of the model's 95% confidence interval (0 to 100, smaller is better)
lb = TimeSeries.from_pd(test_pred.to_pd() + norm.ppf(0.025) * test_err.to_pd().values)
ub = TimeSeries.from_pd(test_pred.to_pd() + norm.ppf(0.975) * test_err.to_pd().values)
msis = ForecastMetric.MSIS.value(ground_truth=test_data, predict=test_pred,
                                 insample=train_data, lb=lb, ub=ub)
print(f"sMAPE: {smape:.4f}, MSIS: {msis:.4f}")
```
```
sMAPE: 4.1944, MSIS: 18.9331
```
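For reference, a common definition of sMAPE (one of several conventions in the literature; the exact variant `ForecastMetric.sMAPE` implements may differ) scales each absolute error by the mean magnitude of the actual and predicted values. A minimal NumPy sketch with made-up numbers:

```python
import numpy as np

def smape(actual: np.ndarray, predicted: np.ndarray) -> float:
    """Symmetric mean absolute percentage error on a 0-100 scale (lower is better)."""
    num = np.abs(actual - predicted)
    denom = (np.abs(actual) + np.abs(predicted)) / 2
    return float(100 * np.mean(num / denom))

y_true = np.array([100.0, 200.0, 300.0])
y_pred = np.array([110.0, 190.0, 300.0])
print(round(smape(y_true, y_pred), 4))
```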
One of Merlion's key features is an evaluation pipeline that simulates the live deployment of a model on historical data. This enables you to compare models on the datasets relevant to them, under the conditions that they may encounter in a production environment. We provide scripts that allow you to use this pipeline to evaluate arbitrary models on arbitrary datasets. For example, invoking
```shell
python benchmark_anomaly.py --dataset NAB_realAWSCloudwatch --model IsolationForest --retrain_freq 1d
```
will evaluate the anomaly detection performance of the `IsolationForest` model (retrained once a day) on the "realAWSCloudwatch" subset of the NAB dataset. Similarly, invoking
```shell
python benchmark_forecast.py --dataset M4_Hourly --model ETS
```
will evaluate the batch forecasting performance (i.e. no retraining) of `ETS` on the "Hourly" subset of the M4 dataset.
You can find the results produced by running these scripts in the Experiments section of the
technical report.
You can find more details in our technical report: https://arxiv.org/abs/2109.09265
If you're using Merlion in your research or applications, please cite it using this BibTeX:
```bibtex
@article{bhatnagar2021merlion,
  title={Merlion: A Machine Learning Library for Time Series},
  author={Aadyot Bhatnagar and Paul Kassianik and Chenghao Liu and Tian Lan and Wenzhuo Yang
          and Rowan Cassius and Doyen Sahoo and Devansh Arpit and Sri Subramanian and Gerald Woo
          and Amrita Saha and Arun Kumar Jagota and Gokulakrishnan Gopalakrishnan and Manpreet Singh
          and K C Krithika and Sukumar Maddineni and Daeki Cho and Bo Zong and Yingbo Zhou
          and Caiming Xiong and Silvio Savarese and Steven Hoi and Huan Wang},
  year={2021},
  eprint={2109.09265},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```