Project Name | Stars | Downloads | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language |
---|---|---|---|---|---|---|---|---|---|---|
Distiller | 4,252 | | | | a year ago | | | 65 | apache-2.0 | Jupyter Notebook |
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller | ||||||||||
Autogptq | 3,637 | | | | a month ago | | | 174 | mit | Python |
An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. | | | | | | | | | | |
Pinto_model_zoo | 3,121 | | | | 4 months ago | | | 11 | mit | Python |
A repository for storing models that have been inter-converted between various frameworks. Supported frameworks are TensorFlow, PyTorch, ONNX, OpenVINO, TFJS, TFTRT, TensorFlowLite (Float32/16/INT8), EdgeTPU, CoreML. | ||||||||||
Nlp Architect | 2,928 | | | | 2 years ago | 10 | April 12, 2020 | 14 | apache-2.0 | Python |
A model library for exploring state-of-the-art deep learning topologies and techniques for optimizing Natural Language Processing neural networks | ||||||||||
Pytorch Playground | 2,366 | | | | a year ago | | | 9 | mit | Python |
Base pretrained models and datasets in PyTorch (MNIST, SVHN, CIFAR10, CIFAR100, STL10, AlexNet, VGG16, VGG19, ResNet, Inception, SqueezeNet) | | | | | | | | | | |
Micronet | 2,177 | | | | 3 years ago | 46 | October 06, 2021 | 70 | mit | Python |
micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa; Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference) and low-bit (≤2b) ternary and binary (TWN/BNN/XNOR-Net); post-training quantization (PTQ), 8-bit (TensorRT); (2) pruning: normal, regular, and group convolutional channel pruning; (3) group convolution structure; (4) batch-normalization fusing for quantization. Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), dynamic shape | | | | | | | | | | |
Mixtral Offloading | 1,943 | | | | 4 months ago | | | 12 | mit | Python |
Run Mixtral-8x7B models in Colab or on consumer desktops | | | | | | | | | | |
Optimum | 1,908 | | 53 | | 4 months ago | 53 | December 06, 2023 | 295 | apache-2.0 | Python |
🚀 Accelerate training and inference of 🤗 Transformers and 🤗 Diffusers with easy-to-use hardware optimization tools | | | | | | | | | | |
Vector Quantize Pytorch | 1,627 | | 25 | | 4 months ago | 160 | December 06, 2023 | 27 | mit | Python |
Vector quantization, in PyTorch | | | | | | | | | | |
Mmrazor | 1,231 | | 2 | | 6 months ago | 8 | May 04, 2022 | 133 | apache-2.0 | Python |
OpenMMLab Model Compression Toolbox and Benchmark. |
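Several of the libraries above (Distiller, Autogptq, Micronet, Mmrazor) center on the same primitive: mapping floating-point weights to low-bit integers. As a rough illustration of the idea, here is a minimal sketch of symmetric per-tensor int8 post-training quantization in plain Python; the function names are illustrative and do not correspond to any of these libraries' actual APIs.

```python
def quantize_int8(values):
    """Symmetric per-tensor int8 quantization (illustrative sketch).

    Maps floats to integers in [-127, 127] using a single scale
    derived from the largest absolute value in the tensor.
    """
    scale = max(abs(v) for v in values) / 127.0
    if scale == 0.0:  # all-zero tensor: any nonzero scale works
        scale = 1.0
    quantized = [max(-127, min(127, round(v / scale))) for v in values]
    return quantized, scale

def dequantize_int8(quantized, scale):
    """Recover approximate floats; rounding error is bounded by scale / 2."""
    return [q * scale for q in quantized]

weights = [0.5, -1.0, 0.25, 0.0]
q, scale = quantize_int8(weights)   # q == [64, -127, 32, 0]
restored = dequantize_int8(q, scale)
```

Real toolkits layer much more on top of this (per-channel scales, calibration data, error-compensating updates as in GPTQ), but the quantize/dequantize round trip above is the operation they all ultimately optimize.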