Awesome Open Source
Search results for python model compression
94 search results found
Nni
⭐
13,725
An open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyper-parameter tuning.
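For a concrete flavor of what compression toolkits like this automate, here is a minimal sketch of unstructured magnitude pruning using PyTorch's built-in torch.nn.utils.prune utilities (a generic baseline, not NNI's own API):

```python
# A minimal sketch of unstructured magnitude pruning with PyTorch's
# built-in utilities; NNI wraps this kind of step in a larger pipeline.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Zero out the 50% of weights with the smallest absolute value.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)

# Make the pruning permanent (folds the mask into the weight tensor).
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.remove(module, "weight")

sparsity = (model[0].weight == 0).float().mean().item()
print(f"Layer-0 sparsity: {sparsity:.0%}")
```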
Pretrained Language Model
⭐
2,912
Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab.
Pocketflow
⭐
2,553
An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications.
Torch Pruning
⭐
2,035
[CVPR 2023] Towards Any Structural Pruning; LLMs / SAM / Diffusion / Transformers / YOLOv8 / CNNs
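For contrast with the unstructured approach above, here is a hand-rolled sketch of structural (filter-level) pruning. It only illustrates the idea; what Torch-Pruning automates is the hard part of propagating such changes through all dependent layers:

```python
# A toy sketch of structured pruning: drop the conv filters with the
# smallest L1 norm and rebuild a narrower layer.
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
keep = 8  # keep the 8 strongest of 16 filters

# L1 norm of each output filter: shape (16,)
norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))
idx = torch.argsort(norms, descending=True)[:keep]

pruned = nn.Conv2d(3, keep, kernel_size=3, padding=1)
with torch.no_grad():
    pruned.weight.copy_(conv.weight[idx])
    pruned.bias.copy_(conv.bias[idx])

x = torch.randn(1, 3, 32, 32)
print(pruned(x).shape)  # torch.Size([1, 8, 32, 32])
```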
Paddleslim
⭐
1,486
PaddleSlim is an open-source library for deep model compression and architecture search.
Model Optimization
⭐
1,445
A toolkit to optimize Keras and TensorFlow ML models for deployment, including quantization and pruning.
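One common route in the TensorFlow ecosystem is post-training quantization through the TFLite converter; a minimal sketch is below (the toolkit also provides quantization-aware training and pruning):

```python
# A minimal sketch of post-training quantization via the TFLite
# converter; the untrained model here is only for illustration.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable quantization
tflite_bytes = converter.convert()

print(f"Quantized model size: {len(tflite_bytes)} bytes")
```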
Knowledge Distillation Zoo
⭐
804
PyTorch implementations of various Knowledge Distillation (KD) methods.
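Most KD variants build on the classic soft-target loss of Hinton et al. (2015); a minimal sketch follows, with the temperature T and mixing weight alpha chosen arbitrarily for illustration:

```python
# The classic soft-target distillation loss: KL divergence between
# temperature-softened teacher and student distributions, mixed with
# ordinary cross-entropy on the true labels.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # T^2 keeps gradient magnitudes stable across temperatures
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

s = torch.randn(8, 10)            # student logits
t = torch.randn(8, 10)            # teacher logits
y = torch.randint(0, 10, (8,))    # true labels
print(kd_loss(s, t, y).item())
```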
Tinyneuralnetwork
⭐
681
TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework.
Deep Compression Alexnet
⭐
599
Deep Compression on AlexNet
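The weight-sharing stage of Deep Compression clusters weights with k-means and stores only cluster indices plus a small codebook. A toy sketch of that stage alone, using scikit-learn (the full pipeline adds pruning and Huffman coding):

```python
# Weight sharing via k-means: 16 clusters means each weight can be
# stored as a 4-bit index into a 16-entry codebook.
import numpy as np
from sklearn.cluster import KMeans

w = np.random.randn(256).astype(np.float32)
k = 16

km = KMeans(n_clusters=k, n_init=10).fit(w.reshape(-1, 1))
codebook = km.cluster_centers_.ravel()  # 16 shared float values
indices = km.labels_                    # one small index per weight

w_hat = codebook[indices]               # reconstructed weights
print(f"max reconstruction error: {np.abs(w - w_hat).max():.4f}")
```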
Filter Pruning Geometric Median
⭐
489
Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (CVPR 2019 Oral)
Squeezellm
⭐
486
SqueezeLLM: Dense-and-Sparse Quantization
Kd_lib
⭐
476
A PyTorch Knowledge Distillation library for benchmarking and extending works in the domains of Knowledge Distillation, Pruning, and Quantization.
Archai
⭐
428
Accelerate your Neural Architecture Search (NAS) through fast, reproducible and modular research.
Ghostnet.pytorch
⭐
418
[CVPR 2020] GhostNet: More Features from Cheap Operations
Deepcache
⭐
381
DeepCache: Accelerating Diffusion Models for Free
Hawq
⭐
324
Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM.
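For a baseline sense of low-precision inference, here is a sketch using PyTorch's built-in dynamic int8 quantization; HAWQ's Hessian-aware mixed-precision assignment is considerably more sophisticated than this uniform scheme:

```python
# Dynamic quantization: Linear layers are replaced with int8
# equivalents whose activations are quantized on the fly at runtime.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

qmodel = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(qmodel(x).shape)  # torch.Size([1, 10])
```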
Model_compression
⭐
315
Implementation of model compression with the knowledge distillation method.
Only_train_once
⭐
242
OTOv1-v3 (NeurIPS, ICLR, TMLR): DNN training and compression via structured pruning and erasing operators, for CNNs and LLMs.
Laser
⭐
241
The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction
Soft Filter Pruning
⭐
235
Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks
Bert Of Theseus
⭐
186
⛵️The official PyTorch implementation for "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020).
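The core trick is stochastic module replacement: during training, each large "predecessor" block is swapped for its smaller "successor" with a probability that is annealed toward 1. A toy sketch of that mechanism (the class name and shapes here are made up; the real method operates on frozen BERT layers):

```python
# A toy sketch of progressive module replacing, assuming a generic
# predecessor/successor pair rather than real BERT layers.
import random
import torch
import torch.nn as nn

class TheseusBlock(nn.Module):
    def __init__(self, predecessor, successor, p=0.5):
        super().__init__()
        self.predecessor = predecessor
        self.successor = successor
        self.p = p  # replacement probability, annealed toward 1

    def forward(self, x):
        if not self.training:            # after compression: successor only
            return self.successor(x)
        if random.random() < self.p:     # stochastic module replacement
            return self.successor(x)
        return self.predecessor(x)

block = TheseusBlock(
    nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16)),
    nn.Linear(16, 16),
)
block.train()
print(block(torch.randn(2, 16)).shape)  # torch.Size([2, 16])
```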
Slimsam
⭐
183
SlimSAM: 0.1% Data Makes Segment Anything Slim
Pruning Filter In Filter
⭐
161
Pruning Filter in Filter (NeurIPS 2020)
Ds Net
⭐
161
(CVPR 2021, Oral) Dynamic Slimmable Network
Ayolov2
⭐
152
Keras_compressor
⭐
152
Model Compression CLI Tool for Keras.
Cofipruning
⭐
151
ACL 2022: Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408
Knowledgedistillation
⭐
150
Knowledge distillation for text classification with PyTorch: Chinese text classification with BERT and XLNet as teacher models and a biLSTM student model.
Ld Net
⭐
145
Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling
Pytorch Weights_pruning
⭐
145
PyTorch implementation of weight pruning.
Condensa
⭐
139
Programmable Neural Network Compression
Torch Model Compression
⭐
137
An automated model-structure analysis and modification toolset for PyTorch models, including a model compression algorithm library that analyzes model structure automatically.
Font_recognition Deepfont
⭐
136
A Keras implementation of DeepFont: Identify Your Font from an Image.
Allie
⭐
126
🤖 An automated machine learning framework for audio, text, image, video, or .CSV files (50+ featurizers and 15+ model trainers). Python 3.6 required.
Q Diffusion
⭐
125
[ICCV 2023] Q-Diffusion: Quantizing Diffusion Models.
Microexpnet
⭐
118
MicroExpNet: An Extremely Small and Fast Model For Expression Recognition From Frontal Face Images
Diff Pruning
⭐
102
[NeurIPS 2023] Structural Pruning for Diffusion Models
Spvit
⭐
89
[TPAMI 2024] The official repository for our paper "Pruning Self-attentions into Convolutional Layers in Single Path".
Svite
⭐
82
[NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang
Mayo
⭐
82
Mayo: Auto-generation of hardware-friendly deep neural networks. Dynamic Channel Pruning: Feature Boosting and Suppression.
Pkd For Bert Model Compression
⭐
82
PyTorch implementation of Patient Knowledge Distillation for BERT Model Compression.
Heterofl Computation And Communication Efficient Federated Learning For Heterogeneous Clients
⭐
79
HeteroFL: Computation and Communication Efficient Federated Learning for Heterogeneous Clients
Yolov3
⭐
77
YOLOv3 in PyTorch.
Model Compression
⭐
74
A Bachelor of Engineering final-year project (still incomplete) attempting to replicate the paper "Deep Compression" by Song Han et al., which received a Best Paper Award at ICLR 2016.
Tf2
⭐
74
An Open Source Deep Learning Inference Engine Based on FPGA
Bbcu
⭐
71
The official implementation of "Basic Binary Convolution Unit for Binarized Image Restoration Network" (ICLR 2023).
Iss Rnns
⭐
62
Sparse Recurrent Neural Networks -- Pruning Connections and Hidden Sizes (TensorFlow)
Mobile
⭐
60
Embedded and Mobile Deployment
Ltp
⭐
59
[KDD'22] Learned Token Pruning for Transformers
Upop
⭐
54
[ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers.
Data Free Adversarial Distillation
⭐
44
Code and pretrained models for paper: Data-Free Adversarial Distillation
Llama Pruning
⭐
42
Structural Pruning for LLaMA
Moonlit
⭐
41
This is a collection of our research on efficient AI, covering hardware-aware NAS and model compression.
Zaq Code
⭐
40
CVPR 2021: Zero-shot Adversarial Quantization (ZAQ)
I Bert
⭐
39
[ICML'21] I-BERT: Integer-only BERT Quantization
Bitpack
⭐
36
BitPack is a practical tool to efficiently save ultra-low precision/mixed-precision quantized models.
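The storage trick behind bit-packing is simple: several sub-byte values share one byte. A toy sketch of the 4-bit case follows (BitPack itself handles arbitrary mixed precisions):

```python
# Packing pairs of 4-bit values into single bytes halves the storage
# footprint of int4 weights relative to naive one-byte-per-value.
import numpy as np

def pack4(values):
    """Pack pairs of 4-bit values (0..15, even count) into uint8 bytes."""
    v = values.reshape(-1, 2)
    return ((v[:, 0] << 4) | v[:, 1]).astype(np.uint8)

def unpack4(packed):
    """Recover the original 4-bit values from packed bytes."""
    return np.stack([packed >> 4, packed & 0x0F], axis=1).reshape(-1)

w = np.random.randint(0, 16, size=8, dtype=np.uint8)
assert np.array_equal(w, unpack4(pack4(w)))
print(f"{w.nbytes} bytes -> {pack4(w).nbytes} bytes")  # 8 -> 4
```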
Assl
⭐
36
[NeurIPS'21 Spotlight] PyTorch code for our paper "Aligned Structured Sparsity Learning for Efficient Image Super-Resolution"
Cop
⭐
36
Code for an IJCAI 2019 paper.
Lc Model Compression
⭐
34
Model compression by constrained optimization, using the Learning-Compression (LC) algorithm
Efficient Bert
⭐
31
This repository contains the code for the paper in Findings of EMNLP 2021: "EfficientBERT: Progressively Searching Multilayer Perceptron via Warm-up Knowledge Distillation".
Model_compression
⭐
28
Deep learning model compression based on Keras.
Regularization Pruning
⭐
28
[ICLR'21] PyTorch code for our paper "Neural Pruning via Growing Regularization"
Qsparse
⭐
27
Train neural networks with joint quantization and pruning on both weights and activations, using any PyTorch modules.
Esnac
⭐
25
Learnable Embedding Space for Efficient Neural Architecture Compression
Fastpose
⭐
24
Real-time multi-person keypoint estimation in PyTorch.
Cnn Compression Performance
⭐
24
A Python script that automates the training of a CNN, compresses it through the TensorFlow (or Ristretto) plugin, and compares the performance of the two networks.
Yolov5 Distillation Train Inference
⭐
24
YOLOv5 knowledge distillation training; supports training on your own data.
Lossless_compression
⭐
21
We propose a lossless compression algorithm based on the NTK matrix for DNNs. The compressed network yields asymptotically the same NTK as the original (dense and unquantized) network, with its weights and activations taking values only in {0, 1, -1} up to scaling.
Tpp
⭐
21
[ICLR'23] Trainability Preserving Neural Pruning (PyTorch)
Structured Bayesian Pruning Pytorch
⭐
18
PyTorch implementation of Structured Bayesian Pruning.
Xcompression
⭐
17
[ICLR 2022] Code for the paper "Exploring Extreme Parameter Compression for Pre-trained Language Models" (https://arxiv.org/abs/2205.10036)
Ofq
⭐
16
The official implementation of the ICML 2023 paper OFQ-ViT
Bipointnet
⭐
16
This project is the official implementation of our accepted ICLR 2021 paper BiPointNet: Binary Neural Network for Point Clouds.
Micronet_osi Ai
⭐
15
(NeurIPS 2019 MicroNet Challenge, 3rd-place winner) Open source code for "SIPA: A simple framework for efficient networks"
Srp
⭐
15
[ICLR'22] PyTorch code for our paper "Learning Efficient Image Super-Resolution Networks via Structure-Regularized Pruning"
Good Da In Kd
⭐
14
[NeurIPS'22] What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective
Kxy Python
⭐
14
A toolkit to boost the productivity of machine learning engineers.
Causal Distill
⭐
12
The Codebase for Causal Distillation for Language Models
Rosita
⭐
11
[AAAI 2021] "ROSITA: Refined BERT cOmpreSsion with InTegrAted techniques", Yuanxin Liu, Zheng Lin, Fengcheng Yuan
Knowledgesharing Pytorch
⭐
11
Implementations of knowledge distillation and knowledge transfer models in neural networks.
Task Aware Distillation
⭐
10
Less is More: Task-aware Layer-wise Distillation for Language Model Compression (ICML 2023)
Lgtm
⭐
10
[ACL 2023] Code for the paper “Tailoring Instructions to Student’s Learning Levels Boosts Knowledge Distillation” (https://arxiv.org/abs/2305.09651)
Lm Vocab Trimmer
⭐
9
Vocabulary Trimming (VT) is a model compression technique that reduces a multilingual LM's vocabulary to a target language by deleting irrelevant tokens. This repository contains a Python library, vocabtrimmer, that removes tokens irrelevant to the target language from a multilingual LM's vocabulary.
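A toy sketch of the underlying idea: keep only the embedding rows for tokens that occur in a target-language corpus and re-index. The vocabulary and corpus below are made up for illustration; vocabtrimmer does this against real multilingual LMs:

```python
# Trimming a multilingual embedding matrix down to the tokens a
# target-language corpus actually uses; toy data, not a real LM.
import torch

vocab = ["<pad>", "the", "cat", "le", "chat", "der", "hund"]
emb = torch.randn(len(vocab), 4)  # pretend multilingual embeddings

corpus_tokens = {"<pad>", "le", "chat"}  # French-only usage
keep = [i for i, tok in enumerate(vocab) if tok in corpus_tokens]

trimmed_vocab = [vocab[i] for i in keep]
trimmed_emb = emb[keep]  # smaller embedding matrix

print(trimmed_vocab)      # ['<pad>', 'le', 'chat']
print(trimmed_emb.shape)  # torch.Size([3, 4])
```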
Chip_neurips2021
⭐
9
Code for CHIP: CHannel Independence-based Pruning for Compact Neural Networks (NeurIPS 2021).
Eve Mli
⭐
8
eve-mli: making learning interesting
Keras Targeted Dropout
⭐
8
Targeted dropout implemented in Keras
Cpsca
⭐
8
Code for paper "Channel Pruning Guided by Spatial and Channel Attention for DNNs in Intelligent Edge Computing"
Nash Pruning Official
⭐
7
Code for the paper "NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models" (EMNLP 2023 Findings)
Model Compression And Acceleration
⭐
7
Model Compression and Acceleration.
Practise
⭐
6
[CVPR 2023] Practical Network Acceleration with Tiny Sets
Cnn_compression_rank_selection_bayesopt
⭐
6
Bayesian Optimization-Based Global Optimal Rank Selection for Compression of Convolutional Neural Networks, IEEE Access
Ternarized_neural_network
⭐
6
Optimizing Deep Convolutional Neural Network with Ternarized Weights and High Accuracy
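Ternarization maps each weight to {-1, 0, +1} times a per-tensor scale. A minimal sketch using the common Ternary Weight Networks style threshold of 0.7 x mean|w| (the paper's exact scheme may differ):

```python
# Ternary quantization: weights become -scale, 0, or +scale, so each
# weight needs only ~1.58 bits of storage plus one shared float scale.
import torch

def ternarize(w):
    thresh = 0.7 * w.abs().mean()              # TWN-style threshold
    mask = (w.abs() > thresh).float()          # which weights survive
    scale = (w.abs() * mask).sum() / mask.sum().clamp(min=1.0)
    return scale * torch.sign(w) * mask

w = torch.randn(64, 64)
wt = ternarize(w)
print(wt.unique())  # three values: -scale, 0.0, +scale
```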
Prac Lth
⭐
6
[ICML 2021] "Efficient Lottery Ticket Finding: Less Data is More" by Zhenyu Zhang*, Xuxi Chen*, Tianlong Chen*, Zhangyang Wang
Da2lite
⭐
6
DA2Lite is an automated model compression toolkit for PyTorch.
Label Free Network Compression
⭐
6
Caffe implementation of "Learning Compression from Limited Unlabeled Data" (ECCV2018).
Ptdeco
⭐
5
ptdeco is a library for model optimization by decomposition, built on top of PyTorch.