Project Name | Stars | Downloads | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language |
---|---|---|---|---|---|---|---|---|---|---|
Stable Diffusion Webui Colab | 14,090 | | | | 6 months ago | | | 16 | unlicense | Jupyter Notebook |
stable diffusion webui colab | ||||||||||
Peft | 12,271 | | | 101 | 3 months ago | 11 | December 06, 2023 | 65 | apache-2.0 | Python |
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. | ||||||||||
Lora | 7,814 | | | 16 | 4 months ago | 3 | August 27, 2023 | 79 | mit | Python |
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" | ||||||||||
Chatglm Efficient Tuning | 3,130 | | | | 6 months ago | 6 | August 12, 2023 | | apache-2.0 | Python |
Efficient fine-tuning of ChatGLM-6B with PEFT | ||||||||||
Adapters | 2,354 | | | 7 | a month ago | 18 | April 06, 2023 | 51 | apache-2.0 | Jupyter Notebook |
A Unified Library for Parameter-Efficient and Modular Transfer Learning | ||||||||||
Alpaca Cot | 2,235 | | | | 4 months ago | | | 30 | apache-2.0 | Jupyter Notebook |
We unify the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning) for easy use, providing a fine-tuning platform that researchers can quickly adopt. We welcome open-source enthusiasts to open any meaningful PR on this repo and integrate as many LLM-related technologies as possible. | ||||||||||
Chatglm_finetuning | 1,486 | | | | 6 months ago | | | 38 | | Python |
ChatGLM-6B fine-tuning and Alpaca fine-tuning | ||||||||||
Onediff | 787 | | | | 3 months ago | | | 27 | | Python |
OneDiff: An out-of-the-box acceleration library for diffusion models. | ||||||||||
Lorax | 719 | | | | 3 months ago | | | 45 | apache-2.0 | Python |
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs | ||||||||||
Llm Finetuning | 582 | | | | 5 months ago | | | 1 | | Jupyter Notebook |
LLM Finetuning with peft |
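
Several of the entries above (Peft, Lora, Chatglm Efficient Tuning, Lorax, Llm Finetuning) revolve around LoRA-style parameter-efficient fine-tuning. As a rough illustration of how these libraries are typically used, below is a minimal sketch of attaching a LoRA adapter to a Hugging Face causal language model with 🤗 PEFT; the model name, rank, and target module names are illustrative assumptions, not values taken from any project in the table.

```python
# Minimal LoRA sketch with the peft library (illustrative assumptions:
# base model "facebook/opt-350m", rank 8, adapting the attention q/v projections).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType

# Load a frozen base model to adapt.
base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,          # causal language modeling
    r=8,                                   # low-rank dimension of the adapter
    lora_alpha=16,                         # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to wrap with LoRA
)

# Wrap the base model; only the injected LoRA weights remain trainable.
model = get_peft_model(base, config)
model.print_trainable_parameters()
```

The wrapped model can then be trained with an ordinary `transformers` training loop; gradients flow only through the small adapter matrices while the base weights stay frozen, which is the property these projects exploit to fine-tune large models cheaply.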