| Project Name | Description | Stars | Downloads | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language |
|---|---|---|---|---|---|---|---|---|---|---|---|
| DeepSparse | Sparsity-aware deep learning inference runtime for CPUs | 2,729 | 3 | | | 3 months ago | 141 | December 07, 2023 | 28 | other | Python |
| SparseML | Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models | 1,910 | 5 | | | 3 months ago | 37 | December 04, 2023 | 60 | apache-2.0 | Python |
| KoBERT | Korean BERT pre-trained cased (KoBERT) | 1,035 | | | | a year ago | | | 5 | apache-2.0 | Jupyter Notebook |
| Bolt | A deep learning library with high performance and heterogeneous flexibility | 823 | | | | a year ago | | | 38 | mit | C++ |
| NNCF | Neural Network Compression Framework for enhanced OpenVINO™ inference | 725 | 6 | | | 3 months ago | 16 | November 16, 2023 | 46 | apache-2.0 | Python |
| fastT5 | ⚡ Boost inference speed of T5 models by 5x and reduce model size by 3x | 280 | | | | 2 years ago | 14 | April 05, 2022 | 13 | apache-2.0 | Python |
| Browser ML Inference | Edge inference in the browser with a Transformer NLP model | 221 | | | | 2 years ago | | | | apache-2.0 | Jupyter Notebook |
| onnxt5 | Summarization, translation, sentiment analysis, text generation and more at blazing speed using a T5 version implemented in ONNX | 136 | | | | 3 years ago | 11 | January 28, 2021 | 3 | apache-2.0 | Python |
| CLIP-ONNX | A simple library to speed up CLIP inference by up to 3x (K80 GPU) | 122 | | | | 9 months ago | | | 2 | mit | Python |
| Optimum Transformers | Accelerated NLP pipelines for fast inference on CPU and GPU. Built with Transformers, Optimum and ONNX Runtime | 71 | | | | 2 years ago | 3 | April 01, 2022 | 1 | gpl-3.0 | Python |
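Several of the projects above (SparseML, NNCF, DeepSparse) are built around sparsification: zeroing out low-magnitude weights so a sparsity-aware runtime can skip them. The sketch below illustrates the basic idea, unstructured magnitude pruning, in plain Python. It is a minimal illustration only; the function name and list-based representation are assumptions for this example and are not the API of any library listed.

```python
# Minimal sketch of unstructured magnitude pruning, the core idea behind
# sparsification frameworks such as SparseML and NNCF. Pure Python; the
# function name and flat-list weight representation are illustrative only.

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    weights  -- flat list of floats (a layer's parameters)
    sparsity -- fraction in [0, 1] of weights to set to zero
    """
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # Threshold = magnitude of the n_prune-th smallest weight;
    # everything at or below it is pruned to an exact zero.
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

layer = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
pruned = magnitude_prune(layer, 0.5)
# Half the weights are now exact zeros, which a sparsity-aware
# runtime can exploit by skipping the corresponding multiplies.
```

In practice, frameworks apply this gradually over many training steps (and fine-tune in between) rather than in one shot, which is what the "recipes" in SparseML's description refer to.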