Project Name | Stars | Downloads | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language |
---|---|---|---|---|---|---|---|---|---|---|
Qbot | 4,799 | | | | 6 months ago | | | 51 | mit | Jupyter Notebook | 
[🔥 updating ...] AI-powered automated quantitative trading bot and quantitative investment research platform. 📃 Online docs: https://ufund-me.github.io/Qbot ✨ qbot-mini: https://github.com/Charmve/iQuant | ||||||||||
Deepsparse | 2,729 | | | 3 | 4 months ago | 141 | December 07, 2023 | 28 | other | Python | 
Sparsity-aware deep learning inference runtime for CPUs | ||||||||||
Model Optimization | 1,445 | | 3 | 27 | 4 months ago | 30 | May 26, 2023 | 207 | apache-2.0 | Python | 
A toolkit to optimize ML models for deployment with Keras and TensorFlow, including quantization and pruning. | ||||||||||
Intel Extension For Pytorch | 1,161 | | | 12 | 4 months ago | 13 | October 19, 2023 | 180 | apache-2.0 | Python | 
A Python package that extends official PyTorch to easily obtain better performance on Intel platforms | ||||||||||
Training_extensions | 1,119 | | | 1 | a month ago | 55 | October 31, 2023 | 54 | apache-2.0 | Python | 
Train, Evaluate, Optimize, Deploy Computer Vision Models via OpenVINO™ | ||||||||||
Rwkv.cpp | 956 | | | | 5 months ago | | | 22 | mit | C++ | 
INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model | ||||||||||
Tinyml Papers And Projects | 566 | | | | 8 months ago | | | 2 | mit | | 
A curated list of interesting papers and projects about TinyML. | ||||||||||
Qkeras | 514 | | | 2 | 4 months ago | 1 | July 07, 2021 | 38 | apache-2.0 | Python | 
QKeras: a quantization deep learning library for TensorFlow Keras | ||||||||||
Complete Life Cycle Of A Data Science Project | 499 | | | | 4 months ago | | | 4 | mit | | 
Complete-Life-Cycle-of-a-Data-Science-Project | ||||||||||
Kd_lib | 476 | | | | a year ago | 8 | May 18, 2022 | 18 | mit | Python | 
A PyTorch knowledge distillation library for benchmarking and extending work in knowledge distillation, pruning, and quantization. | ||||||||||
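Several projects above (DeepSparse, QKeras, rwkv.cpp) center on low-bit integer inference. As a rough orientation, here is a minimal, library-free sketch of symmetric per-tensor INT8 quantization, the basic idea those runtimes build on; the function names are illustrative, not any project's API:

```python
# Minimal sketch of symmetric INT8 quantization: map float weights to
# int8 codes via a single per-tensor scale, then map back (dequantize).
# Illustrative only -- real runtimes add per-channel scales, zero points,
# calibration, and fused integer kernels.

def quantize_int8(values):
    """Return (int8 codes, scale) for a list of floats, symmetric range."""
    scale = max(abs(v) for v in values) / 127.0 or 1.0  # avoid scale == 0
    codes = [max(-127, min(127, round(v / scale))) for v in values]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate floats from int8 codes and the scale."""
    return [c * scale for c in codes]

weights = [0.5, -1.27, 0.03, 1.0]
codes, scale = quantize_int8(weights)
restored = dequantize(codes, scale)
print(codes)     # integer codes in [-127, 127]
print(max(abs(a - b) for a, b in zip(weights, restored)))  # small error
```

The round trip is lossy in general; the error per weight is bounded by half the scale, which is why calibration of the dynamic range matters in practice.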
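Pruning is the other technique the Model Optimization toolkit lists alongside quantization (its `prune_low_magnitude` wrapper). A hedged, stdlib-only sketch of the underlying idea, magnitude pruning, with a hypothetical helper name:

```python
# Sketch of magnitude pruning: zero out the smallest-magnitude fraction
# of weights so sparsity-aware runtimes can skip them. Illustrative only;
# real tools prune gradually during training with per-layer schedules.

def prune_by_magnitude(weights, sparsity=0.5):
    """Zero the `sparsity` fraction of weights with the smallest |w|."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    # Ties at the threshold may prune slightly more than k weights.
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.9, -0.05, 0.4, -0.01, 0.7, 0.02]
print(prune_by_magnitude(w, sparsity=0.5))  # three smallest zeroed
```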
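For knowledge distillation (KD_lib's focus), the core loss is a temperature-scaled comparison between teacher and student outputs. A minimal plain-Python sketch, assuming standard KL-based distillation with hypothetical function names (not KD_lib's API):

```python
# Sketch of the classic distillation loss: soften teacher and student
# logits with a temperature T, then penalize their KL divergence.
# Illustrative only; real training mixes this with the hard-label loss.
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL(teacher || student) on softened distributions, scaled by T^2."""
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q)) * temperature ** 2

teacher = [8.0, 2.0, 1.0]
student = [5.0, 3.0, 2.0]
print(distillation_loss(student, teacher))  # > 0; shrinks as student matches
```

The `T**2` factor keeps gradient magnitudes comparable across temperatures, a convention from the original distillation formulation.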