Awesome Open Source
Search results for "post-training-quantization": 10 results found.
Micronet (⭐ 2,177)
micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT) with high-bit (>2b) methods (DoReFa, "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b) ternary/binary methods (TWN/BNN/XNOR-Net), plus post-training quantization (PTQ), 8-bit (TensorRT); (2) pruning: normal, regular, and group-convolution channel pruning; (3) group convolution structure; (4) batch-normalization fusion for quantization. Deployment: TensorRT, fp32/fp16/in
Neural Compressor (⭐ 1,773)
SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) and sparsity; leading model compression techniques for TensorFlow, PyTorch, and ONNX Runtime.
TinyNeuralNetwork (⭐ 681)
TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework.
SqueezeLLM (⭐ 486)
SqueezeLLM: Dense-and-Sparse Quantization.
Sparsebit (⭐ 291)
A model compression and acceleration toolbox based on PyTorch.
FQ-ViT (⭐ 278)
[IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer.
Adventures in TensorFlow Lite (⭐ 143)
This repository contains notebooks that show the usage of TensorFlow Lite for quantizing deep neural networks.
Q-Diffusion (⭐ 125)
[ICCV 2023] Q-Diffusion: Quantizing Diffusion Models.
Di2n Ptq4dm (⭐ 6)
Improved the performance of 8-bit PTQ4DM, especially on FID.
Quantizations (⭐ 5)
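The projects above all build on the same core primitive of post-training quantization: map float values to low-bit integers using a scale and zero-point derived from ranges observed on calibration data, with no retraining. A minimal sketch of 8-bit affine quantization in plain Python, assuming a uint8 target range; the helper names are illustrative and not taken from any of the libraries listed:

```python
def quant_params(xmin, xmax, qmin=0, qmax=255):
    """Derive scale and zero-point from an observed float range.

    The range is widened to include 0.0 so that zero is exactly
    representable, a common requirement for padding and ReLU.
    """
    xmin, xmax = min(xmin, 0.0), max(xmax, 0.0)
    scale = (xmax - xmin) / (qmax - qmin)
    zero_point = int(round(qmin - xmin / scale))
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=0, qmax=255):
    """Map a float to an integer in [qmin, qmax] (round, then clamp)."""
    q = int(round(x / scale)) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """Recover an approximate float from its quantized code."""
    return (q - zero_point) * scale
```

In practice the only difference between PTQ toolchains is how `xmin`/`xmax` are chosen (plain min/max, percentile clipping, or MSE-minimizing search over calibration batches); the affine mapping itself is shared. Note that Python's `round` uses round-half-to-even, so values exactly between two codes may land on the even neighbor.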
Related Searches
Python Post Training Quantization (12)
Pytorch Post Training Quantization (6)
Tensorrt Post Training Quantization (3)
Copyright 2018-2024 Awesome Open Source. All rights reserved.