Awesome Open Source
Search results for python fine tuning
112 search results found
Llama Factory (⭐ 10,715): Easy-to-use LLM fine-tuning framework (LLaMA, BLOOM, Mistral, Baichuan, Qwen, ChatGLM)
Openllm (⭐ 7,871): Operating LLMs in production
Flyte (⭐ 4,380): Scalable and flexible workflow orchestration platform that seamlessly unifies data, ML, and analytics stacks.
Flaml (⭐ 3,500): A fast library for AutoML and tuning. Join our Discord: https://discord.gg/Cppx2vSPVP.
H2o Llmstudio (⭐ 3,268): H2O LLM Studio, a framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/
Chatglm Efficient Tuning (⭐ 3,130): Efficient fine-tuning of ChatGLM-6B with PEFT
Face.evolve (⭐ 3,074): 🔥🔥 High-performance face recognition library on PaddlePaddle & PyTorch 🔥🔥
Unsloth (⭐ 2,914): 5X faster QLoRA finetuning with 60% less memory
Uer Py (⭐ 2,802): Open-source pre-training model framework in PyTorch & pre-trained model zoo
Xturing (⭐ 2,392): Easily build, customize, and control your own LLMs
Yival (⭐ 2,307): Your automatic prompt engineering assistant for GenAI applications
Learn2learn (⭐ 2,283): A PyTorch library for meta-learning research
Custom Diffusion (⭐ 1,669): Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
Training Operator (⭐ 1,447): Distributed ML training and fine-tuning on Kubernetes
Refact (⭐ 1,237): WebUI for fine-tuning and self-hosting of open-source large language models for coding
Finetuner (⭐ 1,133): 🎯 Task-oriented finetuning for better embeddings on neural search
Tencentpretrain (⭐ 951): Tencent pre-training framework in PyTorch & pre-trained model zoo
Llm Adapters (⭐ 856): Code for the EMNLP 2023 paper "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"
Libfewshot (⭐ 771): LibFewShot: A Comprehensive Library for Few-shot Learning (TPAMI 2023)
Bert Multi Label Text Classification (⭐ 761): A PyTorch implementation of a pretrained BERT model for multi-label text classification
Db Gpt Hub (⭐ 759): Models, datasets, and fine-tuning techniques for DB-GPT, aimed at enhancing model performance on Text-to-SQL
Lorax (⭐ 719): Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
Onetrainer (⭐ 646): OneTrainer is a one-stop solution for all your Stable Diffusion training needs.
Lora For Diffusers (⭐ 636): The easiest-to-understand tutorial for using LoRA (Low-Rank Adaptation) within the diffusers framework for AI generation researchers 🔥
Swift (⭐ 578): LLM training/inference/deployment framework of the ModelScope community; supports various models such as LLaMA, Qwen, ChatGLM, and Baichuan, and training methods such as LoRA, ResTuning, NEFTune, etc.
Llm Finetuning Hub (⭐ 556): LLM fine-tuning and deployment scripts along with our research findings
Finetune Gpt2xl (⭐ 382): Guide: Finetune GPT2-XL (1.5 billion parameters) and GPT-NEO (2.7B) on a single GPU with Huggingface Transformers using DeepSpeed
Godot Dodo (⭐ 363): Finetuning large language models for GDScript generation
Slowllama (⭐ 324): Finetune llama2-70b and codellama on a MacBook Air without quantization
Simplet5 (⭐ 305): simpleT5, built on top of PyTorch Lightning ⚡️ and Transformers 🤗, lets you quickly train your T5 models.
Fondant (⭐ 293): Production-ready data processing made easy and shareable
Onediffusion (⭐ 293): OneDiffusion: Run any Stable Diffusion models and fine-tuned weights with ease
Gpt Neo Fine Tuning Example (⭐ 282): Fine-tune EleutherAI GPT-Neo and GPT-J-6B to generate Netflix movie descriptions using Huggingface and DeepSpeed
Llm Kit (⭐ 237): 🚀 WebUI-integrated platform for the latest LLMs; an all-in-one WebUI toolkit for major language models. Supports mainstream LLM API interfaces and open-source models, as well as knowledge bases, databases, role-play, Midjourney text-to-image, LoRA, and full-parameter fine-tuning.
Medqa Chatglm (⭐ 235): 🛰️ LoRA, P-Tuning V2, Freeze, and RLHF fine-tuning of ChatGLM on real medical dialogue data; our ambitions go beyond medical Q&A
Llm Rlhf Tuning (⭐ 225): LLM tuning with PEFT (SFT + RM + PPO + DPO with LoRA)
Aurora (⭐ 217): 🐳 Aurora is a Chinese-language MoE model; a further work based on Mixtral-8x7B that activates the model's Chinese open-domain chat capability
Kogpt2 Finetuning (⭐ 212): 🔥 Korean GPT-2 (KoGPT2) fine-tuning, trained on Korean song-lyrics data 🔥
Hcgf (⭐ 194): Humanable Chat Generative-model Fine-tuning (LLM fine-tuning)
Bert Attributeextraction (⭐ 185): Using BERT for attribute extraction in knowledge graphs, via fine-tuning and feature extraction; applied to extracting person attributes from Baidu Baike entries for a knowledge graph
Bce Qianfan Sdk (⭐ 163): Provides best practices for LMOps, as well as elegant and convenient access to the features of the Qianfan MaaS platform
Cosine (⭐ 155): Code for the paper "Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach" (NAACL-HLT 2021)
Albert Tf2.0 (⭐ 155): ALBERT model pretraining and fine-tuning using TF 2.0
Vehicle Detection (⭐ 149): Vehicle detection using deep learning and the YOLO algorithm
Chatglm Maths (⭐ 142): ChatGLM-6B fine-tuning / LoRA / PPO / inference; training samples are auto-generated integer and decimal arithmetic (addition, subtraction, multiplication, division); runs on GPU or CPU
Chatglm2_finetuning (⭐ 141): ChatGLM2-6B finetuning and Alpaca finetuning
Chatkbqa (⭐ 140): ChatKBQA: A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned Large Language Models
Llmtuner (⭐ 137): Tune LLMs in a few lines of code
Notus (⭐ 123): Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first approach
Xtts Webui (⭐ 119): WebUI for using and fine-tuning XTTS
Bond (⭐ 114): BOND: BERT-Assisted Open-Domain Named Entity Recognition with Distant Supervision
Fireact (⭐ 110): FireAct: Toward Language Agent Fine-tuning
Optimum Habana (⭐ 83): Easy and lightning-fast training of 🤗 Transformers on the Habana Gaudi processor (HPU)
Optimum Graphcore (⭐ 79): Blazing-fast training of 🤗 Transformers on Graphcore IPUs
Alpaca Qlora (⭐ 77): Instruct-tune Open LLaMA / RedPajama / StableLM models on consumer hardware using QLoRA
Llm Atc (⭐ 77): Fine-tuning and serving LLMs on any cloud
Alpaca 7b Chinese (⭐ 70): Finetune LLaMA-7B with Chinese instruction datasets
Llm Toys (⭐ 69): Small (7B and below), production-ready finetuned LLMs for a diverse set of useful tasks
Wav2keyword (⭐ 68): Wav2Keyword is a keyword-spotting (KWS) model based on Wav2Vec 2.0; it achieves state-of-the-art results on the Speech Commands datasets V1 and V2
Chatglm 6b Fine Tuning (⭐ 67): chatglm-6b-fine-tuning
Llama Lora Fine Tuning (⭐ 64): LLaMA fine-tuning with LoRA
Praetor Data (⭐ 62): Praetor is a lightweight finetuning-data and prompt management tool
Comparatively Finetuning Bert (⭐ 61): Comparatively fine-tuning pretrained BERT models on downstream text classification tasks with different architectural configurations in PyTorch
Discus (⭐ 59): A data-centric AI package for ML/AI. Get the best high-quality data for the best results. Discord: https://discord.gg/t6ADqBKrdZ
Dreambooth (⭐ 59): Fine-tuning of diffusion models
Powerfulpromptft (⭐ 59): [NeurIPS 2023 Main Track] Repository for the paper "Don't Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner"
Ca Tcc (⭐ 58): [TPAMI 2023] Self-supervised Contrastive Representation Learning for Semi-supervised Time-Series Classification
Log10 (⭐ 53): Python client library for managing your LLM data in one place
Disco (⭐ 51): A toolkit for distributional control of generative models
Finetuning Scheduler (⭐ 49): A PyTorch Lightning extension that accelerates and enhances foundation-model experimentation with flexible fine-tuning schedules
Decomposedprompttuning (⭐ 48): Repository for the paper "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning"
Semanticgenesis (⭐ 47): Official Keras & PyTorch implementation and pre-trained models for Semantic Genesis (MICCAI 2020)
Bunkatopics (⭐ 43): 🗺️ Data cleaning and textual data visualization 🗺️
Lm_finetuning (⭐ 41): Language model fine-tuning for Moby Dick
Transvw (⭐ 40): Official Keras & PyTorch implementation and pre-trained models for TransVW
Lidia Denoiser (⭐ 38): LIDIA: Lightweight Learned Image Denoising with Instance Adaptation (NTIRE 2020)
Ftpipe (⭐ 37): FTPipe and related pipeline model parallelism research
Embeddings (⭐ 33): Embeddings: state-of-the-art text representations for natural language processing tasks; an initial version of the library focuses on the Polish language
Mxnet Retrain (⭐ 32): Create an mxnet finetuner (retrainer) for Mac/Linux; no need to install Docker; supports CPU and GPU (eGPU/cuDNN) and models such as Inception, ResNet, SqueezeNet, MobileNet, and more
Deeplogo2 (⭐ 32): A brand logo detection system based on DETR
Smart Pytorch (⭐ 30): PyTorch implementation of SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models
Bert Like Is All You Need (⭐ 29): Code for the INTERSPEECH 2020 paper "Jointly Fine-Tuning 'BERT-like' Self-Supervised Models to Improve Multimodal Speech Emotion Recognition"
Ncov_sentiment (⭐ 28): Baseline for the epidemic-period netizen sentiment recognition competition, using BERT for end-to-end fine-tuning
Zicklein (⭐ 28): Finetuning instruct-LLaMA on German datasets
Hft Cnn (⭐ 27): Convolutional neural network based on hierarchical category structure for multi-label short-text categorization
Zeldarose (⭐ 27): Train transformer-based models
Prompt Tuning (⭐ 26): A pipeline for prompt tuning
Llama.mmengine (⭐ 25): Training the LLaMA language model with MMEngine; supports LoRA fine-tuning
Stable Diffusion Keras Ft (⭐ 25): Fine-tuning Stable Diffusion using Keras
Sam Fine Tune (⭐ 20): 🌌 Fine-tune a specific SAM model on any task
Quality Estimation2 (⭐ 19): Machine-translation subtask of translation quality estimation; fine-tuning with a Bi-LSTM added after a BERT model
Codalab Microsoft Coco Image Captioning Challenge (⭐ 15): 🥉 Codalab Microsoft COCO Image Captioning Challenge 3rd-place solution (06.30.21)
Neural Scam Artist (⭐ 15): Web scraping, document deduplication & GPT-2 fine-tuning with a newly created scam dataset
Scalablebdl (⭐ 15): Code for "BayesAdapter: Being Bayesian, Inexpensively and Robustly, via Bayesian Fine-tuning"
Task Conditioned (⭐ 15): Source code for the ECCV paper "Task-Conditioned Domain Adaptation for Pedestrian Detection in Thermal Imagery"
Lets Verify Step By Step (⭐ 15): "Improving Mathematical Reasoning with Process Supervision" by OpenAI
Nanochatgpt (⭐ 14): nanoGPT turned into a chat model
Osm Budynki Orto Import (⭐ 13): 🏠 OpenStreetMap AI import tool for buildings in Poland
Patron (⭐ 13): [ACL 2023] Code for the paper "Cold-Start Data Selection for Few-shot Language Model Fine-tuning: A Prompt-Based Uncertainty Propagation Approach"
Geniusrise (⭐ 13): Geniusrise: framework for building geniuses
1-100 of 112 search results
Copyright 2018-2024 Awesome Open Source. All rights reserved.