Project Name | Stars | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language |
---|---|---|---|---|---|---|---|---|---|
Nni | 13,725 | 8 | 27 | a month ago | 55 | September 14, 2023 | 342 | mit | Python |
An open-source AutoML toolkit that automates the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyper-parameter tuning. | | | | | | | | | |
Efficient Ai Backbones | 3,770 | | | a month ago | | | 71 | | Python |
Efficient AI Backbones, including GhostNet, TNT, and MLP, developed by Huawei Noah's Ark Lab. | | | | | | | | | |
Micronet | 2,165 | | | 3 years ago | 46 | October 06, 2021 | 70 | mit | Python |
micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT), both high-bit (>2-bit: DoReFa; Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference) and low-bit (≤2-bit) ternary and binary (TWN/BNN/XNOR-Net), plus 8-bit post-training quantization (PTQ) via TensorRT; (2) pruning: normal, regular, and group-convolution channel pruning; (3) group-convolution structure; (4) batch-normalization fusion for quantization. Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), dynamic shapes. | | | | | | | | | |
Awesome Pruning | 2,091 | | | 5 months ago | | | 9 | | |
A curated list of neural network pruning resources. | | | | | | | | | |
Knowledge Distillation Pytorch | 1,770 | | | a year ago | | | 17 | mit | Python |
A flexible PyTorch implementation for exploring deep and shallow knowledge distillation (KD) experiments. | | | | | | | | | |
Neuronblocks | 1,442 | | | 9 months ago | | | 9 | mit | Python |
NLP DNN toolkit: build your NLP DNN models like playing with Lego. | | | | | | | | | |
Tinyneuralnetwork | 681 | | | 3 months ago | | | 18 | mit | Python |
TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework. | | | | | | | | | |
Awesome Automl And Lightweight Models | 647 | | | 4 years ago | | | | | |
A list of high-quality (newest) AutoML works and lightweight models, including: 1) neural architecture search; 2) lightweight structures; 3) model compression, quantization, and acceleration; 4) hyperparameter optimization; 5) automated feature engineering. | | | | | | | | | |
Filter Pruning Geometric Median | 489 | | | 8 months ago | | | 13 | | Python |
Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (CVPR 2019 Oral). | | | | | | | | | |
Kd_lib | 476 | | | a year ago | 8 | May 18, 2022 | 18 | mit | Python |
A PyTorch knowledge distillation library for benchmarking and extending work in knowledge distillation, pruning, and quantization. | | | | | | | | | |
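Several of the projects above (Knowledge Distillation Pytorch, Kd_lib, Nni) center on knowledge distillation, where a small student network is trained to match a larger teacher's temperature-softened output distribution. As a minimal, framework-free sketch of the core soft-target loss, here is the classic Hinton-style formulation in NumPy; the function names and the temperature value are illustrative, not the API of any library listed above:

```python
import numpy as np

def softmax(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Temperature-softened softmax along the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits: np.ndarray,
                      teacher_logits: np.ndarray,
                      temperature: float = 4.0) -> float:
    """KL(teacher || student) on the softened distributions, scaled by T^2
    (the usual scaling that keeps gradient magnitudes comparable across T)."""
    p = softmax(teacher_logits, temperature)  # soft targets from the teacher
    q = softmax(student_logits, temperature)  # student's softened predictions
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    return float(temperature ** 2 * kl.mean())

# A student whose logits match the teacher's incurs zero loss;
# a mismatched student incurs a positive loss.
teacher = np.array([[2.0, 0.5, -1.0]])
student = np.array([[0.1, 0.2, 0.3]])
```

In practice this term is combined with the ordinary cross-entropy on the hard labels via a mixing coefficient, which is how the listed KD libraries typically expose it.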