Project Name | Description | Stars | Downloads | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language
---|---|---|---|---|---|---|---|---|---
Pycaret | An open-source, low-code machine learning library in Python | 7,076 | 13 | 7 hours ago | 83 | June 06, 2022 | 261 | mit | Jupyter Notebook
Sktime | A unified framework for machine learning with time series | 6,284 | | a day ago | | | 678 | bsd-3-clause | Python
Darts | A python library for user-friendly forecasting and anomaly detection on time series. | 5,579 | 7 | 10 hours ago | 25 | June 22, 2022 | 214 | apache-2.0 | Python
Autogluon | AutoGluon: AutoML for Image, Text, Time Series, and Tabular Data | 5,498 | | 10 hours ago | | | 220 | apache-2.0 | Python
Data Science | Collection of useful data science topics along with articles, videos, and code | 3,528 | | 4 days ago | | | 4 | | Jupyter Notebook
Gluonts | Probabilistic time series modeling in Python | 3,423 | 7 | 6 hours ago | 58 | June 30, 2022 | 347 | apache-2.0 | Python
Tsai | State-of-the-art Deep Learning library for Time Series and Sequences in Pytorch / fastai | 3,226 | 1 | 6 hours ago | 41 | April 19, 2022 | 21 | apache-2.0 | Jupyter Notebook
Merlion | Merlion: A Machine Learning Framework for Time Series Intelligence | 2,921 | | 2 days ago | 14 | June 28, 2022 | 14 | bsd-3-clause | Python
Neural_prophet | NeuralProphet: A simple forecasting package | 2,848 | | 2 days ago | 7 | March 22, 2022 | 96 | mit | Python
Pytorch Forecasting | Time series forecasting with PyTorch | 2,666 | 4 | 11 days ago | 33 | May 23, 2022 | 359 | mit | Python
Voice2Series: Adversarial Reprogramming Acoustic Models for Time Series Classification
We provide an end-to-end approach (the reprogramming layer, "Repro. layer") to reprogram acoustic models on time series data directly from the raw waveform, using a differentiable mel-spectrogram layer from kapre. No offline acoustic feature extraction is needed, and all layers are differentiable.
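In brief, a trainable universal perturbation is added to the (zero-padded) input series, and the result passes through the differentiable mel-spectrogram into a frozen pretrained acoustic model. Below is a minimal sketch of that pipeline, assuming a 16 kHz input, the kapre 0.2.0 `Melspectrogram` layer, and an illustrative stand-in for the frozen classifier; this is not the repo's exact code.

```python
# Minimal sketch of the V2S pipeline (illustrative, not the repo's code):
# trainable additive reprogramming on the raw waveform -> differentiable
# mel-spectrogram (kapre 0.2.0) -> frozen pretrained acoustic model.
import tensorflow as tf
from kapre.time_frequency import Melspectrogram

SR, N_SAMPLES = 16000, 16000  # assumed sample rate and padded input length

class ReproLayer(tf.keras.layers.Layer):
    """Adds a trainable universal perturbation to the padded waveform."""
    def build(self, input_shape):
        self.delta = self.add_weight(
            name="delta", shape=(1, 1, N_SAMPLES), initializer="zeros")

    def call(self, x):
        return x + self.delta  # reprogrammed waveform

inputs = tf.keras.Input(shape=(1, N_SAMPLES))       # kapre 0.2.0 expects (ch, time)
x = ReproLayer()(inputs)
x = Melspectrogram(sr=SR, n_mels=128, n_dft=512, n_hop=256,
                   return_decibel_melgram=True)(x)  # differentiable front end
# Stand-in for the frozen pretrained acoustic model (illustrative only):
frozen = tf.keras.Sequential([tf.keras.layers.GlobalAveragePooling2D(),
                              tf.keras.layers.Dense(30, activation="softmax")])
frozen.trainable = False
model = tf.keras.Model(inputs, frozen(x))           # only `delta` is trained
```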
Update: if you have used the ECG 200 dataset with this code, please run git pull and refer to the corresponding issue for a reported label-loading error (it has been fixed).
Environment: TensorFlow 2.2 (CUDA 10.0) and Kapre 0.2.0.
PyTorch note: echoing the many expressions of interest from the community, we will also provide PyTorch V2S layers and frameworks, incorporating the new torchaudio layers. Feel free to email the authors about further reprogramming collaboration.
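For reference, a minimal PyTorch counterpart of the V2S layer could look like the sketch below; it is a hypothetical illustration built on `torchaudio.transforms.MelSpectrogram`, not the forthcoming official layers.

```python
# Hypothetical PyTorch V2S layer, shown only as a sketch; the official
# PyTorch release may differ.
import torch
import torchaudio

class V2SLayer(torch.nn.Module):
    def __init__(self, n_samples=16000, sr=16000, n_mels=128):
        super().__init__()
        # trainable universal perturbation on the padded waveform
        self.delta = torch.nn.Parameter(torch.zeros(1, n_samples))
        self.melspec = torchaudio.transforms.MelSpectrogram(
            sample_rate=sr, n_mels=n_mels)

    def forward(self, x):  # x: (batch, n_samples), zero-padded waveform
        return self.melspec(x + self.delta)
```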
Option 1 (from yml):

conda env create -f V2S.yml

Option 2 (manual install):

pip install tensorflow-gpu==2.1.0
pip install kapre==0.2.0
pip install h5py==2.10.0
pip install pyts
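As an optional sanity check (not part of the repo), you can confirm the pinned versions resolved correctly:

```python
# Optional sanity check that the pinned packages imported correctly.
import tensorflow as tf
import kapre
import h5py

print("tensorflow:", tf.__version__)  # expect 2.1.0 (or 2.2 per the note above)
print("kapre:", kapre.__version__)    # expect 0.2.0
print("h5py:", h5py.__version__)      # expect 2.10.0
```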
Please also check the paper for the full validation details. Many thanks!

Example training run:
python v2s_main.py --dataset 0 --eps 20 --mod 2 --seg 18 --mapping 1
Epoch 14/20
3601/3601 [==============================] - 4s 1ms/sample - loss: 0.4493 - accuracy: 0.9239 - val_loss: 0.4571 - val_accuracy: 0.9106
Epoch 15/20
3601/3601 [==============================] - 4s 1ms/sample - loss: 0.4297 - accuracy: 0.9306 - val_loss: 0.4381 - val_accuracy: 0.9265
Epoch 16/20
3601/3601 [==============================] - 4s 1ms/sample - loss: 0.4182 - accuracy: 0.9247 - val_loss: 0.4204 - val_accuracy: 0.9205
Epoch 17/20
3601/3601 [==============================] - 4s 1ms/sample - loss: 0.3972 - accuracy: 0.9320 - val_loss: 0.4072 - val_accuracy: 0.9242
Epoch 18/20
3601/3601 [==============================] - 4s 1ms/sample - loss: 0.3905 - accuracy: 0.9303 - val_loss: 0.4099 - val_accuracy: 0.9242
Epoch 19/20
3601/3601 [==============================] - 4s 1ms/sample - loss: 0.3765 - accuracy: 0.9320 - val_loss: 0.3924 - val_accuracy: 0.9258
Epoch 20/20
3601/3601 [==============================] - 4s 1ms/sample - loss: 0.3704 - accuracy: 0.9300 - val_loss: 0.3816 - val_accuracy: 0.9250
--- Train loss: 0.36046191089949786
- Train accuracy: 0.93113023
--- Test loss: 0.38329164963780027
- Test accuracy: 0.925
=== Best Val. Acc: 0.92651516 At Epoch of 14
python v2s_main.py --dataset 0 --eps 20 --mod 2 --seg 18 --mapping 18
3601/3601 [==============================] - 4s 1ms/sample - loss: 0.8762 - accuracy: 0.9231 - val_loss: 0.8479 - val_accuracy: 0.9182
Epoch 12/20
3601/3601 [==============================] - 4s 1ms/sample - loss: 0.8360 - accuracy: 0.9236 - val_loss: 0.8191 - val_accuracy: 0.9152
Epoch 13/20
3601/3601 [==============================] - 4s 1ms/sample - loss: 0.7920 - accuracy: 0.9242 - val_loss: 0.7693 - val_accuracy: 0.9273
Epoch 14/20
3601/3601 [==============================] - 4s 1ms/sample - loss: 0.7586 - accuracy: 0.9228 - val_loss: 0.7358 - val_accuracy: 0.9235
Epoch 15/20
3601/3601 [==============================] - 4s 1ms/sample - loss: 0.7265 - accuracy: 0.9270 - val_loss: 0.7076 - val_accuracy: 0.9205
Epoch 16/20
3601/3601 [==============================] - 4s 1ms/sample - loss: 0.6980 - accuracy: 0.9247 - val_loss: 0.6707 - val_accuracy: 0.9295
Epoch 17/20
3601/3601 [==============================] - 4s 1ms/sample - loss: 0.6650 - accuracy: 0.9281 - val_loss: 0.6473 - val_accuracy: 0.9250
Epoch 18/20
3601/3601 [==============================] - 4s 1ms/sample - loss: 0.6444 - accuracy: 0.9286 - val_loss: 0.6270 - val_accuracy: 0.9303
Epoch 19/20
3601/3601 [==============================] - 4s 1ms/sample - loss: 0.6194 - accuracy: 0.9286 - val_loss: 0.6020 - val_accuracy: 0.9318
Epoch 20/20
3601/3601 [==============================] - 4s 1ms/sample - loss: 0.5964 - accuracy: 0.9275 - val_loss: 0.5813 - val_accuracy: 0.9227
--- Train loss: 0.5795955053139845
- Train accuracy: 0.93113023
--- Test loss: 0.5856682072986256
- Test accuracy: 0.92651516
=== Best Val. Acc: 0.9318182 At Epoch of 18
python cam_v2s.py --dataset 5 --weight wNo5_map6-88-0.7662.h5 --mapping 6 --layer conv2d_1
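cam_v2s.py visualizes class activation maps from a trained checkpoint. For readers who want the gist, a generic Grad-CAM over the conv layer named by --layer looks roughly like the sketch below; this illustrates the general technique and may differ from the script's exact implementation.

```python
# Generic Grad-CAM sketch for a Keras model and a conv layer such as
# "conv2d_1"; cam_v2s.py may differ in detail.
import tensorflow as tf

def grad_cam(model, x, layer_name="conv2d_1"):
    sub = tf.keras.Model(model.inputs,
                         [model.get_layer(layer_name).output, model.output])
    with tf.GradientTape() as tape:
        fmap, preds = sub(x)
        top_class = tf.argmax(preds[0])
        score = preds[:, top_class]
    grads = tape.gradient(score, fmap)            # d(score)/d(feature map)
    weights = tf.reduce_mean(grads, axis=(1, 2))  # per-channel importance
    cam = tf.nn.relu(tf.einsum("bhwc,bc->bhw", fmap, weights))
    return cam / (tf.reduce_max(cam) + 1e-8)      # normalize to [0, 1]
```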
We recommend trying different label mapping numbers during training. For instance, you could use --mapping 7 for the ECG 5000 dataset. The dropout rate is also an important hyperparameter for tuning the test loss; a range between 0.2 and 0.5 works well (e.g., --dr 4 for a 0.4 dropout rate).
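The --mapping flag controls the many-to-one label mapping: each target class is assigned the aggregated probability of m source (acoustic) classes. A sketch of that aggregation is below, assuming the first m * n_target source outputs are grouped in order; the repo's actual grouping may differ.

```python
# Sketch of many-to-one label mapping: average the source-model probabilities
# of m acoustic classes per target class. The in-order grouping below is an
# assumption for illustration.
import tensorflow as tf

def many_to_one(source_probs, m, n_target):
    # source_probs: (batch, n_source) softmax output of the acoustic model
    used = source_probs[:, : m * n_target]
    return tf.reduce_mean(tf.reshape(used, (-1, n_target, m)), axis=-1)
```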
The V2S mask is provided as an option, but the training script does not use the masking in the forward pass. In our experiments, using the mask or not leads to only small variations in performance. This does not conflict with the proposed theoretical analysis of learning target-domain adaptation.
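For completeness, applying the optional mask would restrict the trainable perturbation, e.g. to the zero-padded region, roughly as follows (an illustrative sketch, not the training script's code path):

```python
# Illustrative use of the optional V2S mask: the perturbation delta is only
# applied where the binary mask is 1 (e.g. the zero-padded segment).
import tensorflow as tf

def reprogram_with_mask(x_padded, delta, mask):
    # x_padded: (batch, n) waveform; delta: (1, n) trainable; mask: (1, n) in {0, 1}
    return x_padded + mask * delta
```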
Yes, you are welcome to. Please send an email to the authors about potential collaboration.
cd weight
pip install gdown
gdown https://drive.google.com/uc?id=1mhqXZ8CANgHyepum7N4yrjiyIg6qaMe6
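Equivalently, the checkpoint can be fetched from Python via gdown's API (the output filename below is arbitrary):

```python
# Same download via gdown's Python API; the output path is our choice,
# not a name mandated by the repo.
import gdown

gdown.download("https://drive.google.com/uc?id=1mhqXZ8CANgHyepum7N4yrjiyIg6qaMe6",
               output="weight/v2s_weight.h5", quiet=False)
```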
Please open an issue here for discussion. Thank you!