Awesome Open Source
Search results for quantization knowledge distillation
Filters: knowledge-distillation, quantization
9 search results found
Pretrained Language Model (⭐ 2,912): Pretrained language models and related optimization techniques developed by Huawei Noah's Ark Lab.
Neural Compressor (⭐ 1,773): SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) and sparsity; leading model compression techniques for TensorFlow, PyTorch, and ONNX Runtime.
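For context on what "low-bit quantization" means in entries like this one: a float tensor is mapped to small integers plus a scale factor, trading a bounded rounding error for smaller storage and faster integer arithmetic. A minimal sketch of symmetric per-tensor INT8 quantization in NumPy; the function names are illustrative, not Neural Compressor's actual API:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor INT8 quantization: x is approximated by q * scale."""
    scale = np.abs(x).max() / 127.0  # map the largest magnitude to +/-127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float tensor."""
    return q.astype(np.float32) * scale

w = np.random.randn(256).astype(np.float32)
q, scale = quantize_int8(w)
# Round-trip error is bounded by half a quantization step
assert np.abs(dequantize(q, scale) - w).max() <= scale / 2 + 1e-7
```

Real toolkits add per-channel scales, zero points for asymmetric ranges, and calibration over sample data, but the scale-round-clip core is the same.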
MMRazor (⭐ 1,231): OpenMMLab model compression toolbox and benchmark.
Efficient Computing (⭐ 1,110): Efficient computing methods developed by Huawei Noah's Ark Lab.
KD_Lib (⭐ 476): A PyTorch knowledge distillation library for benchmarking and extending work in knowledge distillation, pruning, and quantization.
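For readers new to the topic: the classic knowledge distillation objective that libraries in this list implement blends a soft, temperature-scaled cross-entropy against a teacher model with the ordinary hard-label loss. A minimal NumPy sketch under that standard formulation; this is not KD_Lib's actual API:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft term: cross-entropy against the temperature-softened teacher,
    # scaled by T^2 to keep its gradient magnitude comparable across temperatures.
    soft = -(softmax(teacher_logits, T)
             * np.log(softmax(student_logits, T))).sum(axis=-1).mean() * T * T
    # Hard term: ordinary cross-entropy against the ground-truth labels.
    rows = np.arange(len(labels))
    hard = -np.log(softmax(student_logits)[rows, labels]).mean()
    return alpha * soft + (1 - alpha) * hard
```

Here `alpha` trades off imitating the teacher against fitting the labels, and a temperature `T > 1` softens both distributions so the student also learns the teacher's relative probabilities for wrong classes.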
Awesome AI Infrastructures (⭐ 171): Infrastructures™ for machine learning training/inference in production.
NeurIPS MicroNet (⭐ 29): [JMLR 2020] NeurIPS 2019 MicroNet Challenge, efficient language modeling track, champion entry.
DA2Lite (⭐ 6): An automated model compression toolkit for PyTorch.
Compressors (⭐ 5): A small library with distillation, quantization, and pruning pipelines.
Copyright 2018-2024 Awesome Open Source. All rights reserved.