Awesome Open Source
Search results for model compression
163 search results found
Nni (⭐ 13,725): An open source AutoML toolkit to automate the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyper-parameter tuning.
Efficient Ai Backbones (⭐ 3,770): Efficient AI backbones, including GhostNet, TNT, and MLP, developed by Huawei Noah's Ark Lab.
Awesome Knowledge Distillation (⭐ 3,222): Awesome Knowledge Distillation.
Pretrained Language Model (⭐ 2,912): Pretrained language models and related optimization techniques developed by Huawei Noah's Ark Lab.
Pocketflow (⭐ 2,553): An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications.
Awesome Knowledge Distillation (⭐ 2,182): Awesome Knowledge-Distillation. Knowledge distillation papers (2014-2021), organized by category.
Micronet (⭐ 2,177): micronet, a model compression and deployment library. Compression: 1. quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa, "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b)/ternary and binary (TWN/BNN/XNOR-Net); post-training quantization (PTQ), 8-bit (TensorRT); 2. pruning: normal, regular, and group convolutional channel pruning; 3. group convolution structure; 4. batch-normalization fusion for quantization. Deployment: TensorRT, fp32/fp16/in
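The quantization entries above (micronet's PTQ mode, the quantization awesome-lists) all build on the same primitive: uniform affine quantization, which maps a float tensor onto a small integer range via a scale and zero point. As a rough illustration only (not code from any listed project), a minimal uint8 post-training quantizer might look like:

```python
import numpy as np

def quantize_uint8(x):
    """Uniform affine quantization of a float array to uint8.
    Returns the quantized values plus the (scale, zero_point) pair
    needed to map them back to floats."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero_point = int(round(-lo / scale))
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Approximate reconstruction of the original floats."""
    return (q.astype(np.float32) - zero_point) * scale

# Round trip: reconstruction error is bounded by the quantization step.
x = np.linspace(-1.0, 1.0, 11)
q, scale, zp = quantize_uint8(x)
x_rec = dequantize(q, scale, zp)
```

Real libraries add per-channel scales, calibration over activation statistics, and (for QAT) fake-quantization during training, but the storage win is the same: 8 bits per weight instead of 32.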
Awesome Pruning (⭐ 2,091): A curated list of neural network pruning resources.
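Many of the pruning methods cataloged in these lists start from the same baseline: global magnitude pruning, which zeroes the smallest-magnitude weights until a target sparsity is reached. A minimal NumPy sketch of that baseline (illustrative only, not taken from any listed repository):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` fraction of the weights become zero (global
    magnitude pruning; ties at the threshold may prune slightly more)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value is the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.array([[0.9, -0.05],
              [0.4, -0.01]])
pruned = magnitude_prune(w, 0.5)  # keeps 0.9 and 0.4, zeroes the rest
```

Structured variants (channel or filter pruning, as in Torch Pruning or Filter Pruning Geometric Median below) remove whole rows/filters instead of individual weights, which actually shrinks the dense tensors rather than just sparsifying them.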
Torch Pruning (⭐ 2,035): [CVPR 2023] Towards Any Structural Pruning; LLMs / SAM / Diffusion / Transformers / YOLOv8 / CNNs.
Knowledge Distillation Pytorch (⭐ 1,780): A PyTorch implementation for exploring deep and shallow knowledge distillation (KD) experiments with flexibility.
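The knowledge distillation repositories in this list generally implement Hinton-style distillation: the student is trained to match the teacher's temperature-softened output distribution. A minimal sketch of that loss (illustrative only; function names are my own, not from any listed repo):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Numerically stable softmax with temperature T."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions, scaled by T^2 as in Hinton et al. (2015)."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    return (T ** 2) * kl.mean()
```

In practice this term is mixed with the ordinary cross-entropy on hard labels, e.g. `alpha * distillation_loss + (1 - alpha) * ce_loss`, with `alpha` and `T` as tuning knobs.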
Paddleslim (⭐ 1,486): PaddleSlim is an open-source library for deep model compression and architecture search.
Awesome Model Quantization (⭐ 1,449): A list of papers, docs, and code about model quantization. The repo aims to collect information for model quantization research and is continuously improved; PRs adding missed works (papers, repositories) are welcome.
Model Optimization (⭐ 1,445): A toolkit to optimize ML models for deployment with Keras and TensorFlow, including quantization and pruning.
Neuronblocks (⭐ 1,441): NLP DNN Toolkit - Building Your NLP DNN Models Like Playing Lego.
Efficient Computing (⭐ 1,110): Efficient computing methods developed by Huawei Noah's Ark Lab.
Efficient Deep Learning (⭐ 885): Collection of recent methods on (deep) neural network compression and acceleration.
Knowledge Distillation Zoo (⭐ 804): PyTorch implementation of various knowledge distillation (KD) methods.
Tinyneuralnetwork (⭐ 681): TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework.
Awesome Automl And Lightweight Models (⭐ 647): A list of high-quality (newest) AutoML works and lightweight models, including 1. neural architecture search, 2. lightweight structures, 3. model compression, quantization, and acceleration, 4. hyperparameter optimization, and 5. automated feature engineering.
Knowledge Distillation Papers (⭐ 638): Knowledge distillation papers.
Deep Compression Alexnet (⭐ 599): Deep Compression on AlexNet.
Lightctr (⭐ 599): Lightweight and scalable framework that combines mainstream Click-Through-Rate prediction algorithms with a computational DAG, the Parameter Server philosophy, and Ring-AllReduce collective communication.
Awesome Model Compression And Acceleration (⭐ 534)
Filter Pruning Geometric Median (⭐ 489): Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (CVPR 2019 Oral).
Squeezellm (⭐ 486): SqueezeLLM: Dense-and-Sparse Quantization.
Kd_lib (⭐ 476): A PyTorch knowledge distillation library for benchmarking and extending works in the domains of knowledge distillation, pruning, and quantization.
Yolov3v4 Modelcompression Multidatasettraining Multibackbone (⭐ 429): YOLO model compression and multi-dataset training.
Archai (⭐ 428): Accelerate your Neural Architecture Search (NAS) through fast, reproducible, and modular research.
Ghostnet.pytorch (⭐ 418): [CVPR 2020] GhostNet: More Features from Cheap Operations.
Amc (⭐ 417): [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices.
Deepcache (⭐ 381): DeepCache: Accelerating Diffusion Models for Free.
Awesome Ml Model Compression (⭐ 378): Awesome machine learning model compression research papers, tools, and learning material.
Hawq (⭐ 324): Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM.
Model_compression (⭐ 315): Implementation of model compression with the knowledge distillation method.
Model Compression Papers (⭐ 306): Papers on deep neural network compression and acceleration.
Only_train_once (⭐ 242): OTOv1-v3, NeurIPS, ICLR, TMLR, DNN training, compression, structured pruning, erasing operators, CNN, LLM.
Laser (⭐ 241): The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction.
Soft Filter Pruning (⭐ 235): Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks.
Jfasttext (⭐ 215): Java interface for fastText.
Awesome Quantization Papers (⭐ 201): List of papers related to neural network quantization in recent AI conferences and journals.
Pruning (⭐ 193): Code for "Co-Evolutionary Compression for Unpaired Image Translation" (ICCV 2019), "SCOP: Scientific Control for Reliable Neural Network Pruning" (NeurIPS 2020), and "Manifold Regularized Dynamic Network Pruning" (CVPR 2021).
Bert Of Theseus (⭐ 186): ⛵️ The official PyTorch implementation of "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020).
Collaborative Distillation (⭐ 185): [CVPR'20] Collaborative Distillation for Ultra-Resolution Universal Style Transfer (PyTorch).
Slimsam (⭐ 183): SlimSAM: 0.1% Data Makes Segment Anything Slim.
Mobile Id (⭐ 180): Deep Face Model Compression.
Awesome Ai Infrastructures (⭐ 171): Infrastructures™ for Machine Learning Training/Inference in Production.
Amc Models (⭐ 164): [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices.
Pruning Filter In Filter (⭐ 161): Pruning Filter in Filter (NeurIPS 2020).
Ds Net (⭐ 161): (CVPR 2021, Oral) Dynamic Slimmable Network.
Ayolov2 (⭐ 152)
Keras_compressor (⭐ 152): Model compression CLI tool for Keras.
Awesome Model Compression (⭐ 152): Papers about model compression.
Squeezenet Residual (⭐ 151): residual-SqueezeNet.
Cofipruning (⭐ 151): ACL 2022: Structured Pruning Learns Compact and Accurate Models. https://arxiv.org/abs/2204.00408
Knowledgedistillation (⭐ 150): Knowledge distillation for text classification with PyTorch: Chinese text classification with BERT and XLNET as teacher models and a biLSTM student model.
Ld Net (⭐ 145): Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling.
Pytorch Weights_pruning (⭐ 145): PyTorch implementation of weights pruning.
Condensa (⭐ 139): Programmable Neural Network Compression.
Torch Model Compression (⭐ 137): An automated model-structure analysis and modification toolset for PyTorch models, including a model compression algorithm library that analyzes model structure automatically.
Font_recognition Deepfont (⭐ 136): An implementation of DeepFont: Identify Your Font from an Image, using Keras.
Allie (⭐ 126): 🤖 An automated machine learning framework for audio, text, image, video, or .CSV files (50+ featurizers and 15+ model trainers). Python 3.6 required.
Q Diffusion (⭐ 125): [ICCV 2023] Q-Diffusion: Quantizing Diffusion Models.
Microexpnet (⭐ 118): MicroExpNet: An Extremely Small and Fast Model for Expression Recognition from Frontal Face Images.
Dsd (⭐ 108): DSD model zoo. Higher-accuracy models from DSD training on ImageNet with the same model architecture.
Diff Pruning (⭐ 102): [NeurIPS 2023] Structural Pruning for Diffusion Models.
Spvit (⭐ 89): [TPAMI 2024] The official repository for our paper "Pruning Self-attentions into Convolutional Layers in Single Path".
Aquvitae (⭐ 88): Knowledge Distillation Toolkit.
Awesome Efficient Plm (⭐ 83): Must-read papers on improving efficiency for pre-trained language models.
Pkd For Bert Model Compression (⭐ 82): PyTorch implementation of Patient Knowledge Distillation for BERT Model Compression.
Mayo (⭐ 82): Mayo: Auto-generation of hardware-friendly deep neural networks. Dynamic Channel Pruning: Feature Boosting and Suppression.
Svite (⭐ 82): [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang.
Heterofl Computation And Communication Efficient Federated Learning For Heterogeneous Clients (⭐ 79): HeteroFL: Computation and Communication Efficient Federated Learning for Heterogeneous Clients.
Versatile Filters (⭐ 79): PyTorch code for the paper "Learning Versatile Filters for Efficient Convolutional Neural Networks" (NeurIPS 2018).
Dialog Nlu (⭐ 78): TensorFlow and Keras implementations of state-of-the-art research in dialog system NLU.
Yolov3 (⭐ 77): YOLOv3 in PyTorch.
Tf2 (⭐ 74): An open source deep learning inference engine based on FPGA.
Model Compression (⭐ 74): My final-year Bachelor of Engineering project (still incomplete), attempting to replicate the paper "Deep Compression" by Song Han et al., which received the best paper award at ICLR 2016.
Bbcu (⭐ 71): The official implementation of "Basic Binary Convolution Unit for Binarized Image Restoration Network" (ICLR 2023).
Awesome Computer Vision Resources (⭐ 70): A collection of computer vision projects and tools.
Compress (⭐ 67): Compressing Representations for Self-Supervised Learning.
Cmi (⭐ 65): [IJCAI 2021] Contrastive Model Inversion for Data-Free Knowledge Distillation.
Iss Rnns (⭐ 62): Sparse recurrent neural networks - pruning connections and hidden sizes (TensorFlow).
Mobile (⭐ 60): Embedded and mobile deployment.
Ltp (⭐ 59): [KDD'22] Learned Token Pruning for Transformers.
Awesome Efficient Aigc (⭐ 57): A list of papers, docs, and code about efficient AIGC, covering both language and vision. The repo aims to collect information for efficient-AIGC research and is continuously improved; PRs adding missed works (papers, repositories) are welcome.
Musco Pytorch (⭐ 55): MUSCO: MUlti-Stage COmpression of neural networks.
Keras_model_compression (⭐ 55): Model compression based on Geoffrey Hinton's logit regression method in Keras, applied to MNIST: 16x compression at over 95 percent accuracy. An implementation of "Distilling the Knowledge in a Neural Network" by Geoffrey Hinton et al.
Upop (⭐ 54): [ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers.
Model Compression And Acceleration Progress (⭐ 50): Repository to track progress in model compression and acceleration.
Awesome Pruning At Initialization (⭐ 49): [IJCAI'22 Survey] Recent Advances on Neural Network Pruning at Initialization.
Atmc (⭐ 48): [NeurIPS 2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, "Model Compression with Adversarial Robustness: A Unified Optimization Framework".
A _guide_ To_data_sciecne_from_mathematics (⭐ 47): A blueprint for data science, from mathematics to algorithms. It is not yet complete.
Data Free Adversarial Distillation (⭐ 44): Code and pretrained models for the paper "Data-Free Adversarial Distillation".
Cv_dl_gather (⭐ 42): Gathers research papers, corresponding code (if available), reading notes, and other related materials on hot 🔥 fields in computer vision based on deep learning.
Llama Pruning (⭐ 42): Structural Pruning for LLaMA.
Moonlit (⭐ 41): A collection of our research on efficient AI, covering hardware-aware NAS and model compression.
Zaq Code (⭐ 40): CVPR 2021: Zero-shot Adversarial Quantization (ZAQ).
I Bert (⭐ 39): [ICML'21] I-BERT: Integer-only BERT Quantization.
Assl (⭐ 36): [NeurIPS'21 Spotlight] PyTorch code for our paper "Aligned Structured Sparsity Learning for Efficient Image Super-Resolution".
Bitpack (⭐ 36): BitPack is a practical tool to efficiently save ultra-low-precision/mixed-precision quantized models.
1-100 of 163 search results
Copyright 2018-2024 Awesome Open Source. All rights reserved.