Awesome Open Source
Search results for "fine-tuning"
213 search results found
Llama_index
⭐
30,412
LlamaIndex is a data framework for your LLM applications
Ludwig
⭐
10,830
Low-code framework for building custom LLMs, neural networks, and other AI models
Llama Factory
⭐
10,715
Easy-to-use LLM fine-tuning framework (LLaMA, BLOOM, Mistral, Baichuan, Qwen, ChatGLM)
Openllm
⭐
7,871
Operating LLMs in production
Lora
⭐
5,959
Using low-rank adaptation (LoRA) to quickly fine-tune diffusion models.
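The LoRA idea behind entries like this one is compact enough to sketch: instead of updating a full weight matrix W, train two small low-rank factors B and A and compute W·x + (alpha/r)·B·A·x, with B initialized to zero so the adapted layer starts out identical to the base layer. A minimal NumPy illustration (all names and shapes here are illustrative, not taken from any listed repo):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 64, 32, 4, 8    # illustrative sizes; r << min(d_out, d_in)

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                # zero-init: adapter starts as a no-op

def forward(x, B, A):
    """Base layer output plus the low-rank update, scaled by alpha/r."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B = 0 the adapted layer reproduces the base layer exactly.
assert np.allclose(forward(x, B, A), W @ x)

# A training step would update only B and A (r*(d_in+d_out) parameters),
# never the d_out*d_in entries of W.
```

The appeal for fine-tuning is the parameter count: here the adapter holds 4·(32+64) = 384 trainable values versus 2,048 in W, and the ratio improves further at real model sizes.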
Flyte
⭐
4,380
Scalable and flexible workflow orchestration platform that seamlessly unifies data, ML and analytics stacks.
Superagent
⭐
3,675
🥷 The open source alternative to OpenAI Assistants API
Flaml
⭐
3,500
A fast library for AutoML and tuning. Join our Discord: https://discord.gg/Cppx2vSPVP.
H2o Llmstudio
⭐
3,268
H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/
Chatglm Efficient Tuning
⭐
3,130
Fine-tuning ChatGLM-6B with PEFT (efficient ChatGLM fine-tuning based on PEFT)
Face.evolve
⭐
3,074
🔥🔥High-Performance Face Recognition Library on PaddlePaddle & PyTorch🔥🔥
Unsloth
⭐
2,914
QLoRA finetuning that is 5× faster and uses 60% less memory
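QLoRA, the technique this entry accelerates, combines two ideas: the frozen base weights are stored quantized (4-bit in the real implementation), while only small full-precision LoRA factors are trained. A rough NumPy sketch of that trade; the 16-level absmax quantizer here is a toy stand-in for the NF4 quantization bitsandbytes actually uses, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
d_out, d_in, r = 64, 64, 4
W = rng.normal(size=(d_out, d_in)).astype(np.float32)

# Toy 4-bit absmax quantization, one scale per row
# (real QLoRA uses NF4 with per-block scales).
scale = np.abs(W).max(axis=1, keepdims=True) / 7.0
q = np.clip(np.round(W / scale), -8, 7).astype(np.int8)  # 16 levels -> 4 bits

def dequant(q, scale):
    """Recover an approximate float weight from the quantized codes."""
    return q.astype(np.float32) * scale

# Full-precision LoRA factors are the only trainable parameters.
A = (rng.normal(size=(r, d_in)) * 0.01).astype(np.float32)
B = np.zeros((d_out, r), dtype=np.float32)

def forward(x):
    # Dequantize the frozen base on the fly, add the LoRA correction.
    return dequant(q, scale) @ x + B @ (A @ x)

x = rng.normal(size=(d_in,)).astype(np.float32)
# Per-element quantization error is bounded by half a quantization step.
assert np.abs(dequant(q, scale) - W).max() <= 0.5 * scale.max() + 1e-6

# int8 storage here stands in for 4-bit packing; real NF4 packs
# two codes per byte, so the frozen weights shrink roughly 8x vs float32.
```

The memory saving claimed by such tools comes from this split: the bulk of the parameters live only in quantized form, and gradients exist only for the small A and B factors.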
Uer Py
⭐
2,802
Open Source Pre-training Model Framework in PyTorch & Pre-trained Model Zoo
Xturing
⭐
2,392
Easily build, customize and control your own LLMs
Yival
⭐
2,307
Your Automatic Prompt Engineering Assistant for GenAI Applications
Learn2learn
⭐
2,283
A PyTorch Library for Meta-learning Research
Custom Diffusion
⭐
1,669
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
Training Operator
⭐
1,447
Distributed ML Training and Fine-Tuning on Kubernetes
Hands On Llms
⭐
1,393
🦖 Learn about LLMs, LLMOps, and vector DBs for free by designing, training, and deploying a real-time financial advisor LLM system ~ source code + video & reading materials
Refact
⭐
1,237
WebUI for Fine-Tuning and Self-hosting of Open-Source Large Language Models for Coding
Llm Applications
⭐
1,228
A comprehensive guide to building RAG-based LLM applications for production.
Finetuner
⭐
1,133
🎯 Task-oriented finetuning for better embeddings on neural search
Tencentpretrain
⭐
951
Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo
Llm Adapters
⭐
856
Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"
Libfewshot
⭐
771
LibFewShot: A Comprehensive Library for Few-shot Learning. TPAMI 2023.
Bert Multi Label Text Classification
⭐
761
This repo contains a PyTorch implementation of a pretrained BERT model for multi-label text classification.
Db Gpt Hub
⭐
759
A repository that contains models, datasets, and fine-tuning techniques for DB-GPT, with the purpose of enhancing model performance in Text-to-SQL
Lorax
⭐
719
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
Modelsgenesis
⭐
674
[MICCAI 2019] [MEDIA 2020] Models Genesis
Onetrainer
⭐
646
OneTrainer is a one-stop solution for all your stable diffusion training needs.
Lora For Diffusers
⭐
636
An easy-to-understand tutorial for using LoRA (Low-Rank Adaptation) within the diffusers framework, for AI generation researchers 🔥
Llm Finetuning
⭐
582
LLM finetuning with PEFT
Swift
⭐
578
LLM training/inference/deployment toolbox from the ModelScope community. Supports various models such as LLaMA, Qwen, ChatGLM, and Baichuan, and training methods such as LoRA, ResTuning, and NEFTune.
Awesome Text2sql
⭐
568
Curated tutorials and resources for Large Language Models, Text2SQL, Text2DSL, Text2API, Text2Vis and more.
Llm Finetuning Hub
⭐
556
Repository that contains LLM fine-tuning and deployment scripts along with our research findings.
Magick
⭐
478
Magick is a cutting-edge toolkit for a new kind of AI builder. Make Magick with us!
Awesome Pretrain On Molecules
⭐
440
[IJCAI 2023 survey track] A curated list of resources for chemical pre-trained models
Neosync
⭐
413
A developer-first way to create high-fidelity synthetic data or anonymize sensitive data and sync it across all environments for testing, fine-tuning or model training.
Azureml Bert
⭐
384
End-to-End recipes for pre-training and fine-tuning BERT using Azure Machine Learning Service
Finetune Gpt2xl
⭐
382
Guide: Finetune GPT2-XL (1.5 Billion Parameters) and finetune GPT-NEO (2.7 B) on a single GPU with Huggingface Transformers using DeepSpeed
Adaptnlp
⭐
371
An easy to use Natural Language Processing library and framework for predicting, training, fine-tuning, and serving up state-of-the-art NLP models.
Godot Dodo
⭐
363
Finetuning large language models for GDScript generation.
Tiger
⭐
337
Open Source LLM toolkit to build trustworthy LLM applications. TigerArmor (AI safety), TigerRAG (embedding, RAG), TigerTune (fine-tuning)
Slowllama
⭐
324
Finetune llama2-70b and codellama on MacBook Air without quantization
Simplet5
⭐
305
simpleT5, built on top of PyTorch Lightning ⚡️ and Transformers 🤗, lets you quickly train your T5 models.
Onediffusion
⭐
293
OneDiffusion: Run any Stable Diffusion models and fine-tuned weights with ease
Fondant
⭐
293
Production-ready data processing made easy and shareable
Xiaoyi Robot
⭐
291
High-quality, stable OpenAI API endpoint for enterprises and developers. An OpenAI API proxy that supports ChatGPT API calls and OpenAI API endpoints, including gpt-4; no OpenAI account or USD bank card required, just call it directly. Stable and easy to use!! (智增增)
Gpt Neo Fine Tuning Example
⭐
282
Fine-tune EleutherAI GPT-Neo and GPT-J-6B to generate Netflix movie descriptions using Hugging Face and DeepSpeed
Start Llms
⭐
271
A complete guide to starting and improving your LLM skills in 2023, even with little background in the field, and to staying up-to-date with the latest news and state-of-the-art techniques!
Vectordb Recipes
⭐
267
High quality resources & applications for LLMs, multi-modal models and VectorDBs
Backprop
⭐
239
Backprop makes it simple to use, finetune, and deploy state-of-the-art ML models.
Llm Kit
⭐
237
🚀 WebUI-integrated platform for the latest LLMs: an all-in-one WebUI bundle covering the full workflow of major language models. Supports mainstream LLM API endpoints and open-source models, plus knowledge bases, databases, role-play, Midjourney text-to-image, LoRA, and full-parameter fine-tuning.
Medqa Chatglm
⭐
235
🛰️ LoRA, P-Tuning V2, Freeze, RLHF, and other fine-tuning of ChatGLM on real medical dialogue data; our ambitions go beyond medical Q&A.
Llm Rlhf Tuning
⭐
225
LLM Tuning with PEFT (SFT+RM+PPO+DPO with LoRA)
Aurora
⭐
217
🐳 Aurora is a Chinese-language MoE model. Built on Mixtral-8x7B, it activates the model's chat capability in the Chinese open domain.
Kogpt2 Finetuning
⭐
212
🔥 Fine-tuning the Korean GPT-2 model (KoGPT2), trained on Korean song-lyric data 🔥
Azure Openai Llm Vector Langchain
⭐
198
"Awesome-LLM: a curated list of Azure OpenAI & Large Language Model" 🔎References to Azure OpenAI, 🦙Large Language Models, and related 🌌 services and 🎋libraries.
Finetuned Qlora Falcon7b Medical
⭐
197
Finetuning of Falcon-7B LLM using QLoRA on Mental Health Conversational Dataset
Hcgf
⭐
194
Humanable Chat Generative-model Fine-tuning (LLM fine-tuning)
Bert Attributeextraction
⭐
185
Using BERT for attribute extraction in knowledge graphs: BERT-based fine-tuning and feature-extraction methods applied to extracting attributes of Baidu Baike person entries.
Starwhale
⭐
178
an MLOps/LLMOps platform
Bce Qianfan Sdk
⭐
163
Provides best practices for LMOps, as well as elegant and convenient access to the features of the Qianfan MaaS Platform.
Cosine
⭐
155
This is the code for our paper "Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach" (in Proc. of NAACL-HLT 2021).
Albert Tf2.0
⭐
155
ALBERT model Pretraining and Fine Tuning using TF2.0
Vehicle Detection
⭐
149
Vehicle Detection Using Deep Learning and YOLO Algorithm
Chatglm Maths
⭐
142
ChatGLM-6B fine-tuning/LoRA/PPO/inference; training samples are auto-generated integer and decimal arithmetic (addition, subtraction, multiplication, division); runs on GPU or CPU.
Chatglm2_finetuning
⭐
141
ChatGLM2-6B finetuning and Alpaca finetuning
Chatkbqa
⭐
140
ChatKBQA: A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned Large Language Models
Llmtuner
⭐
137
Tune LLMs in a few lines of code
Notus
⭐
123
Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first approach
Xtts Webui
⭐
119
Webui for using XTTS and for finetuning it
Bond
⭐
114
BOND: BERT-Assisted Open-Domain Named Entity Recognition with Distant Supervision
Scaling Laws Openclip
⭐
112
Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143)
Opentpod
⭐
110
Open Toolkit for Painless Object Detection
Fireact
⭐
110
FireAct: Toward Language Agent Fine-tuning
Autoaudit
⭐
109
AutoAudit: the LLM for cyber security
Awesome Pretraining For Graph Neural Networks
⭐
100
A curated list of papers on pre-training for graph neural networks (Pre-train4GNN).
Finetune Detr
⭐
90
Fine-tune Facebook's DETR (DEtection TRansformer) on Colaboratory.
Optimum Habana
⭐
83
Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU)
Optimum Graphcore
⭐
79
Blazing fast training of 🤗 Transformers on Graphcore IPUs
Llm Atc
⭐
77
Fine-tuning and serving LLMs on any cloud
Alpaca Qlora
⭐
77
Instruct-tune Open LLaMA / RedPajama / StableLM models on consumer hardware using QLoRA
Alpaca 7b Chinese
⭐
70
Finetune LLaMA-7B with Chinese instruction datasets
Llm Toys
⭐
69
Small (7B and below), production-ready finetuned LLMs for a diverse set of useful tasks.
Wav2keyword
⭐
68
Wav2Keyword is a keyword-spotting (KWS) model based on Wav2Vec 2.0 that achieves state-of-the-art results on the Speech Commands datasets V1 and V2.
Chatglm 6b Fine Tuning
⭐
67
chatglm-6b-fine-tuning
Llama Lora Fine Tuning
⭐
64
LLaMA fine-tuning with LoRA
Praetor Data
⭐
62
Praetor is a lightweight finetuning data and prompt management tool
Comparatively Finetuning Bert
⭐
61
Comparatively fine-tuning pretrained BERT models on downstream text-classification tasks with different architectural configurations in PyTorch.
Vietnamese Electra
⭐
59
ELECTRA model pre-trained on a Vietnamese corpus
Powerfulpromptft
⭐
59
[NeurIPS 2023 Main Track] This is the repository for the paper titled "Don’t Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner"
Discus
⭐
59
A data-centric AI package for ML/AI. Get the best high-quality data for the best results. Discord: https://discord.gg/t6ADqBKrdZ
Candle Lora
⭐
59
Low rank adaptation (LoRA) for Candle.
Dreambooth
⭐
59
Fine-tuning of diffusion models
Ca Tcc
⭐
58
[TPAMI 2023] Self-supervised Contrastive Representation Learning for Semi-supervised Time-Series Classification
Log10
⭐
53
Python client library for managing your LLM data in one place
Finetuning Suite
⭐
52
Finetune any model on HF in less than 30 seconds
Alpha_pooling
⭐
51
Code for our paper "Generalized Orderless Pooling Performs Implicit Salient Matching" published at ICCV 2017.
Disco
⭐
51
A Toolkit for Distributional Control of Generative Models
Copyright 2018-2024 Awesome Open Source. All rights reserved.