| Project Name | Description | Stars | Repos/Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language |
|---|---|---|---|---|---|---|---|---|---|
| Llama Factory | Easy-to-use LLM fine-tuning framework (LLaMA, BLOOM, Mistral, Baichuan, Qwen, ChatGLM) | 10,715 | | 3 months ago | 19 | December 03, 2023 | 96 | apache-2.0 | Python |
| Lora | Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" | 7,814 | 16 | 4 months ago | 3 | August 27, 2023 | 79 | mit | Python |
| Chatglm Efficient Tuning | Fine-tuning ChatGLM-6B with PEFT | 3,130 | | 6 months ago | 6 | August 12, 2023 | | apache-2.0 | Python |
| Xturing | Easily build, customize, and control your own LLMs | 2,392 | | 5 months ago | | | 11 | apache-2.0 | Python |
| Knowlm | An open-source knowledgeable large language model framework | 870 | | 3 months ago | | | 1 | apache-2.0 | Python |
| Llm Rlhf Tuning | LLM tuning with PEFT (SFT + RM + PPO + DPO with LoRA) | 225 | | 7 months ago | | | 1 | | Python |
| Aurora | 🐳 Aurora is a Chinese-language MoE model: further work based on Mixtral-8x7B that activates the model's Chinese open-domain chat capability | 217 | | 3 months ago | | | | apache-2.0 | Python |
| Llama Lora Tuner | UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J, and more; one-click run on Google Colab, plus a Gradio ChatGPT-like chat UI to demonstrate your language models | 168 | | a year ago | | | 10 | | Python |
| Zicklein | Fine-tuning instruct-LLaMA on German datasets | 28 | | 8 months ago | | | 1 | apache-2.0 | Python |
| Llama.mmengine | Training the LLaMA language model with MMEngine; supports LoRA fine-tuning | 25 | | a year ago | | | | apache-2.0 | Python |
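The Lora (loralib) entry above implements the low-rank update at the heart of all of these fine-tuning projects: instead of updating a full weight matrix `W`, LoRA learns two small matrices `B` and `A` so the effective weight becomes `W + (alpha / r) * B @ A`. The arithmetic can be sketched in pure Python; this is an illustrative toy, not loralib's actual API, and all names here (`d`, `k`, `r`, `alpha`) just follow the LoRA paper's notation:

```python
# Minimal sketch of the LoRA low-rank update: W_eff = W + (alpha / r) * B @ A.
# Pure Python (nested lists), no framework; illustrative only.

def matmul(X, Y):
    """Multiply an (m x n) matrix by an (n x p) matrix, both nested lists."""
    m, n, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(p)]
            for i in range(m)]

def lora_effective_weight(W, A, B, alpha, r):
    """Return W + (alpha / r) * (B @ A).

    W: frozen base weight, shape (d, k)
    A: trainable down-projection, shape (r, k)
    B: trainable up-projection, shape (d, r), initialized to zeros
    """
    scale = alpha / r
    delta = matmul(B, A)  # shape (d, k)
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

d, k, r, alpha = 4, 6, 2, 4
W = [[1.0] * k for _ in range(d)]
A = [[0.5] * k for _ in range(r)]   # in practice: small random init
B = [[0.0] * r for _ in range(d)]   # zeros, so training starts as a no-op

# With B = 0 the effective weight equals W exactly.
assert lora_effective_weight(W, A, B, alpha, r) == W
```

Only `A` and `B` are trained, so the trainable parameter count drops from `d * k` to `r * (d + k)`; at realistic sizes (say `d = k = 4096`, `r = 8`) that is roughly 0.4% of the full matrix, which is why every project in the table can fine-tune large models on modest hardware.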
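The Llm Rlhf Tuning entry lists DPO among its stages. DPO replaces the RM + PPO loop with a direct loss over a preference pair: `L = -log sigmoid(beta * [(log pi(y_w|x) - log pi_ref(y_w|x)) - (log pi(y_l|x) - log pi_ref(y_l|x))])`, where `y_w`/`y_l` are the chosen and rejected responses. A stdlib-only sketch of that formula; the function name and arguments are hypothetical, not that repository's API:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair (sketch, not a specific repo's API).

    Inputs are summed log-probabilities of the chosen (y_w) and rejected
    (y_l) responses under the trained policy and a frozen reference model;
    beta controls how far the policy may drift from the reference.
    """
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    # -log(sigmoid(x)) == log(1 + e^{-x}); log1p keeps this numerically stable
    return math.log1p(math.exp(-logits))

# When the policy equals the reference, logits = 0 and the loss is log(2).
assert abs(dpo_loss(-10.0, -12.0, -10.0, -12.0) - math.log(2)) < 1e-12
```

The loss falls below `log(2)` exactly when the policy has shifted probability toward the chosen response relative to the reference, which is the training signal DPO optimizes without a separate reward model.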