Awesome Open Source
Search results for model compression quantization aware training
Filters: model-compression, quantization-aware-training
4 search results found
Micronet ⭐ 2,177
micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa / "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b) ternary and binary (TWN/BNN/XNOR-Net); post-training quantization (PTQ), 8-bit (TensorRT); (2) pruning: normal, regular, and group convolutional channel pruning; (3) group convolution structure; (4) batch-normalization fusion for quantization. Deploy: TensorRT, fp32/fp16/in
Tinyneuralnetwork ⭐ 681
TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework.
Yolov3v4 Modelcompression Multidatasettraining Multibackbone ⭐ 429
YOLOv3/v4 model compression with multi-dataset training.
Torch Model Compression ⭐ 137
An automated toolset for analyzing and modifying the structure of PyTorch models, including a model compression algorithm library with automatic model structure analysis.
Qsparse ⭐ 27
Train neural networks with joint quantization and pruning of both weights and activations, using any PyTorch modules.
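Several of the results above (micronet, Qsparse) center on quantization-aware training. Its core "fake quantization" step, which simulates a low-precision integer grid during the forward pass while keeping float values flowing through the network, can be sketched as follows. This is a minimal scalar illustration, not code from any of these libraries; the function name, fixed range, and scalar interface are assumptions for clarity (real frameworks operate on tensors with learned or calibrated ranges):

```python
def fake_quantize(x, num_bits=8, x_min=-1.0, x_max=1.0):
    """Simulate integer quantization: clamp to the supported range,
    snap to the nearest point on a 2**num_bits-level integer grid,
    then dequantize back to float so downstream layers still see
    real-valued inputs."""
    levels = 2 ** num_bits - 1          # e.g. 255 steps for 8-bit
    scale = (x_max - x_min) / levels    # step size of the integer grid
    x_clamped = min(max(x, x_min), x_max)
    q = round((x_clamped - x_min) / scale)  # integer code in [0, levels]
    return q * scale + x_min                # dequantized float value

# Out-of-range values saturate at the clamp boundaries:
print(fake_quantize(2.0))   # 1.0
print(fake_quantize(-5.0))  # -1.0
```

During QAT the backward pass typically treats this rounding as the identity (the straight-through estimator), so gradients flow as if no quantization happened; PTQ, by contrast, applies the same quantization only after training.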
Related Searches
Python Model Compression (104)
Deep Learning Model Compression (52)
Pytorch Model Compression (48)
Quantization Model Compression (40)
Neural Network Model Compression (19)
Convolutional Neural Networks Model Compression (15)
Python Quantization Aware Training (13)
Pytorch Quantization Aware Training (11)
Deep Learning Quantization Aware Training (5)
Object Detection Quantization Aware Training (5)
Copyright 2018-2024 Awesome Open Source. All rights reserved.