Awesome Open Source
Search results for python pre trained language models
Filters: pre-trained-language-models, python
26 search results found
Chinese Llama Alpaca ⭐ 15,877
Chinese LLaMA & Alpaca large language models with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
Llmsurvey ⭐ 7,255
The official GitHub page for the survey paper "A Survey of Large Language Models".
Openprompt ⭐ 4,006
An Open-Source Framework for Prompt-Learning.
Roberta_zh ⭐ 2,141
Chinese pre-trained RoBERTa models: RoBERTa for Chinese
Knowlm ⭐ 870
An open-source knowledgeable large language model framework.
P Tuning ⭐ 528
A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too".
Textpruner ⭐ 314
A PyTorch-based model pruning toolkit for pre-trained language models
Hugnlp ⭐ 237
HugNLP is a unified and comprehensive NLP library based on HuggingFace Transformers. Hugging for NLP now! 😊 HugNLP will be released to @HugAILab.
Dart ⭐ 76
Code for the ICLR 2022 paper "Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners"
Molgen ⭐ 64
Code and pre-trained models for the paper "Domain-Agnostic Molecular Generation with Self-feedback."
Sifrank ⭐ 61
The code of our paper "SIFRank: A New Baseline for Unsupervised Keyphrase Extraction Based on Pre-trained Language Model"
Mkg_analogy ⭐ 56
Code and datasets for the ICLR 2023 paper "Multimodal Analogical Reasoning over Knowledge Graphs."
Sifrank_zh ⭐ 43
A Chinese keyphrase extraction method based on pre-trained models (the Chinese-language version of the code for the paper "SIFRank: A New Baseline for Unsupervised Keyphrase Extraction Based on Pre-trained Language Model")
Electra_crf_ner ⭐ 37
We tackle a company-name recognition task with small-scale, low-quality training data, then apply techniques to speed up model training and improve prediction performance with minimal manual effort. The methods include lightweight pre-trained models such as ALBERT-small or ELECTRA-small with a financial corpus, knowledge distillation, and multi-stage learning. As a result, we raise the recall of company-name recognition from 0.73 to 0.92 and run 4 times as fast as BERT.
Dynamickd ⭐ 30
Code for the EMNLP 2021 main conference paper "Dynamic Knowledge Distillation for Pre-trained Language Models"
Gigabert ⭐ 26
Zero-shot Transfer Learning from English to Arabic
Ares ⭐ 21
SIGIR'22 paper: Axiomatically Regularized Pre-training for Ad hoc Search
Linglong ⭐ 9
LingLong (玲珑): a small-scale Chinese pre-trained language model
Valuezeroing ⭐ 8
The official repo for the EACL 2023 paper "Quantifying Context Mixing in Transformers"
Cdgp ⭐ 8
Code for the Findings of EMNLP 2022 short paper "CDGP: Automatic Cloze Distractor Generation based on Pre-trained Language Model".
Revisit Knn ⭐ 7
Code for the CCL 2023 paper "Revisiting k-NN for Fine-tuning Pre-trained Language Models."
Xlm Plus ⭐ 7
Cascadebert ⭐ 7
Code for CascadeBERT, Findings of EMNLP 2021
Gigabert ⭐ 6
Arabic relation extraction system, named entity recognition, IE
Pytorch Ko Ner ⭐ 5
PLM-based Korean named entity recognition (NER)
Scimult ⭐ 5
Pre-training Multi-task Contrastive Learning Models for Scientific Literature Understanding (Findings of EMNLP'23)