Awesome Open Source
Search results for quantization tensorrt
9 search results found
Micronet ⭐ 2,177
micronet: a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT) with high-bit (>2-bit: DoReFa, "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2-bit: ternary and binary, i.e. TWN/BNN/XNOR-Net) schemes, plus post-training quantization (PTQ) to 8-bit (TensorRT); (2) pruning: normal, regular, and grouped-convolution channel pruning; (3) group convolution structures; (4) batch-normalization fusion for quantization. Deployment: TensorRT, fp32/fp16/in
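The micronet entry above names symmetric 8-bit post-training quantization (PTQ) as its TensorRT deployment path. As a rough illustration of the underlying idea only (a hypothetical, self-contained sketch of symmetric max-abs calibration, not micronet's or TensorRT's actual API):

```python
# Hypothetical sketch (not micronet/TensorRT code): the core math of
# symmetric 8-bit post-training quantization with max-abs calibration.

def quantize_int8(values, scale):
    """Map floats to int8 via a symmetric scale: q = clamp(round(x / scale), -128, 127)."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def dequantize_int8(q_values, scale):
    """Recover approximate floats: x ~= q * scale."""
    return [q * scale for q in q_values]

# Calibration picks the scale from the observed max-abs range of the tensor.
weights = [0.5, -1.0, 0.25, 0.75]
scale = max(abs(w) for w in weights) / 127.0
q = quantize_int8(weights, scale)          # int8 codes
restored = dequantize_int8(q, scale)       # floats, with small rounding error
```

QAT (also listed in the entry) differs in that the round-trip error above is simulated during training so the network can adapt to it, whereas PTQ only calibrates `scale` after training.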
Paddleslim ⭐ 1,486
PaddleSlim is an open-source library for deep model compression and architecture search.
Deepvac ⭐ 618
PyTorch Project Specification.
Sparsebit ⭐ 291
A model compression and acceleration toolbox based on PyTorch.
Torch Model Compression ⭐ 137
An automated toolset for analyzing and modifying the structure of PyTorch models, including a model-compression algorithm library that analyzes model structure automatically.
Benchmark Fp32 Fp16 Int8 With Tensorrt ⭐ 46
Benchmarks the inference speed of CNNs under various quantization methods in PyTorch + TensorRT on Jetson Nano/Xavier.
Yolov5 Light ⭐ 19
Provides new architectures, channel pruning, and quantization methods for YOLOv5.
Tensorrt_ex ⭐ 11
Deep learning model optimization using the TensorRT API, on Windows.
Gpt J 6b Tensorrt Int8 ⭐ 5
GPT-J 6B inference on TensorRT with INT8 precision.
Copyright 2018-2024 Awesome Open Source. All rights reserved.