Awesome Open Source
Search results for python roberta
Filters: python, roberta
73 search results found
Chinese Bert Wwm (⭐ 8,600): Pre-Training with Whole Word Masking for Chinese BERT (Chinese BERT-wwm model series)
Lora (⭐ 7,814): Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
Bertviz (⭐ 5,547): BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)
Awesome Pretrained Chinese Nlp Models (⭐ 3,738): Awesome Pretrained Chinese NLP Models, a collection of high-quality Chinese pre-trained models, large models, multimodal models, and large language models
Albert_zh (⭐ 3,723): A LITE BERT FOR SELF-SUPERVISED LEARNING OF LANGUAGE REPRESENTATIONS, with large-scale Chinese pre-trained ALBERT models
Clue (⭐ 3,345): Chinese Language Understanding Evaluation Benchmark: datasets, baselines, pre-trained models, corpus, and leaderboard
Uer Py (⭐ 2,802): Open-source pre-training model framework in PyTorch and pre-trained model zoo
Roberta_zh (⭐ 2,141): Chinese pre-trained RoBERTa models: RoBERTa for Chinese
News Please (⭐ 1,821): news-please, an integrated web crawler and information extractor for news that just works
Farm (⭐ 1,706): 🏡 Fast & easy transfer learning for NLP. Harvesting language models for industry, with a focus on question answering.
Deberta (⭐ 1,673): The implementation of DeBERTa
Tencentpretrain (⭐ 951): Tencent pre-training framework in PyTorch and pre-trained model zoo
Bert_seq2seq (⭐ 890): PyTorch implementation of BERT for seq2seq tasks using the UniLM approach; now also supports automatic summarization, text classification, sentiment analysis, NER, and POS tagging
Curated Transformers (⭐ 790): 🤖 A PyTorch library of curated Transformer models and their composable components
Texar Pytorch (⭐ 711): Integrating the best of TF into PyTorch, for machine learning, natural language processing, and text generation. Part of the CASL project: http://casl-project.ai/
Gector (⭐ 680): Official implementation of the papers "GECToR – Grammatical Error Correction: Tag, Not Rewrite" (BEA-20) and "Text Simplification by Tagging" (BEA-21)
Phobert (⭐ 593): PhoBERT: Pre-trained language models for Vietnamese (EMNLP-2020 Findings)
Bertweet (⭐ 542): BERTweet: A pre-trained language model for English Tweets (EMNLP-2020)
Japanese Pretrained Models (⭐ 479): Code for producing the Japanese pretrained models provided by rinna Co., Ltd.
Happy Transformer (⭐ 449): Happy Transformer makes it easy to fine-tune and perform inference with NLP Transformer models
Transformersum (⭐ 362): Models for neural summarization (extractive and abstractive) using transformers, plus a tool to convert abstractive summarization datasets to the extractive task
Ernie (⭐ 195): Simple state-of-the-art BERT-based sentence classification with Keras / TensorFlow 2. Built with HuggingFace's Transformers.
Mint (⭐ 192): MinT: Minimal Transformer Library and Tutorials
Embedding As Service (⭐ 182): One-stop solution to encode sentences into fixed-length vectors using various embedding techniques
Xlnet_zh (⭐ 175): Chinese pre-trained XLNet model: Pre-Trained Chinese XLNet_Large
Cosine (⭐ 155): Code for the paper "Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach" (NAACL-HLT 2021)
Reccon (⭐ 128): Dataset and PyTorch implementations of the models from the paper "Recognizing Emotion Cause in Conversations"
Data2vec Pytorch (⭐ 126): PyTorch implementation of "data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language" from Meta AI
Koclip (⭐ 117): KoCLIP: Korean port of OpenAI CLIP, in Flax
Bond (⭐ 114): BOND: BERT-Assisted Open-Domain Named Entity Recognition with Distant Supervision
Robertaabsa (⭐ 111): Implementation of the paper "Does Syntax Matter? A Strong Baseline for Aspect-based Sentiment Analysis with RoBERTa"
Multi Label_classification (⭐ 108): Transforms multi-label classification into a sentence-pair task, with more training data and information
Minirbt (⭐ 102): MiniRBT (a series of small Chinese pre-trained models)
Adamix (⭐ 94): Implementation of the paper "AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning" (https://arxiv.org/abs/2205.12410)
Wordseg (⭐ 91): A PyTorch implementation of BiLSTM / BERT / RoBERTa (+ BiLSTM + CRF) models for Chinese word segmentation
Infobert (⭐ 81): [ICLR 2021] "InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective" by Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, and Jingjing Liu
Iterater (⭐ 66): Official implementation of the paper "IteraTeR: Understanding Iterative Revision from Human-Written Text" (ACL 2022)
Ko Sentence Transformers (⭐ 66): Sentence embeddings using pre-trained Korean language models
Polish Roberta (⭐ 61): RoBERTa models for Polish
Clue_pytorch (⭐ 61): PyTorch version of the CLUE baselines
Code Bert (⭐ 48): Automatically check mismatches between code and comments using AI and ML
Palbert (⭐ 33): Code for the paper "PALBERT: Teaching ALBERT to Ponder" (NeurIPS 2022 Spotlight)
Transformer Qg On Squad (⭐ 26): Question generator built with SOTA pre-trained language models (RoBERTa, BERT, GPT, BART, T5, etc.)
Bert_for_longer_texts (⭐ 25): BERT classification model for texts longer than 512 tokens. Text is first divided into smaller chunks; after feeding them to BERT, the intermediate results are pooled. The implementation allows fine-tuning.
Politbert (⭐ 18): Polish RoBERTa model trained on Polish literature, Wikipedia, and Oscar, on the assumption that quality text yields a good model
Ruberta (⭐ 17): Russian RoBERTa
Distilkobilstm (⭐ 17): Distilling task-specific knowledge from a teacher model into a BiLSTM
Text Classification Pytorch (⭐ 16): Summary and comparison of Chinese text classification models
Bert_seq2seq_ddp (⭐ 16): DDP version of bert_seq2seq; supports BERT, RoBERTa, NEZHA, T5, GPT-2, and other models
Fake News Classification Model (⭐ 16): ✨ Fake news classification using a source-adaptive framework (BE project 🎓). The repository contains detailed project documentation, the classification pipeline, the architecture, the system interface design, and the tech stack used.
Tianchi2020chinesemedicinequestiongeneration (⭐ 16): 2020 Alibaba Cloud Tianchi Big Data Competition: Traditional Chinese Medicine Literature Question Generation Challenge
Xbert (⭐ 14): Pre-trained model loading architecture for BERT and its variants, implemented with TensorFlow 2
Infotabs Code (⭐ 14): Implementation of the semi-structured inference model from the ACL 2020 paper "INFOTABS: Inference on Tables as Semi-structured Data"
Klue Rbert (⭐ 14): ↔️ Utilizing the RBERT model structure for the KLUE relation extraction task
Structured_tuning_srl (⭐ 12): Implementation of the ACL 2020 paper "Structured Tuning for Semantic Role Labeling"
Vietnamese Roberta (⭐ 12): A Robustly Optimized BERT Pretraining Approach for Vietnamese
Team (⭐ 12): EMNLP 2022 paper on MCQA
Zabanshenas (⭐ 10): Zabanshenas is a solution for identifying the most likely language of a piece of written text
Attentionvisualizer (⭐ 10): A simple library to showcase the highest-scored words using a RoBERTa model
Pytorch Roberta (⭐ 9)
Zeroe (⭐ 9): From Hero to Zéroe: A Benchmark of Low-Level Adversarial Attacks
Roberta Base Mr (⭐ 9): RoBERTa Marathi language model trained from scratch during the Hugging Face 🤗 x Flax community week
Cdgp (⭐ 8): Code for the EMNLP 2022 Findings short paper "CDGP: Automatic Cloze Distractor Generation based on Pre-trained Language Model"
Transformers_onnx (⭐ 7)
Cross Lingual Consistency (⭐ 7): Easy-to-use framework for evaluating the cross-lingual consistency of factual knowledge (supports LLaMA, BLOOM, mT5, RoBERTa, etc.). Paper: https://arxiv.org/abs/2310.10378
Ai_generated_text_checker_app (⭐ 7): Classifies text generated by AI tools like ChatGPT, using the roberta-base-openai-detector model from Hugging Face
Drophead Pytorch (⭐ 7): An implementation of drophead regularization for PyTorch transformers
Roberta4keras (⭐ 6): An English RoBERTa based on bert4keras
Security Intelligence On Exchanged Multimedia Messages Based On Deep Learning (⭐ 6): Deep learning (DL) approaches use various processing layers to learn hierarchical representations of data. Many NLP methods and model designs have shown significant progress, especially in text mining and analysis; well-known models for learning vector-space representations of text include Word2vec, GloVe, and fastText, and NLP took a big step forward when BERT and, more recently, GPT-3 came out. Deep learning algorithms are unable to deal with te…
Kaznerd (⭐ 6): An open-source Kazakh named entity recognition dataset (KazNERD), annotation guidelines, and baseline NER models
Simpleclassification (⭐ 6): Simple text classification [WIP]
Mmm Mcqa (⭐ 6): Source code for the "MMM" paper at AAAI 2020
Scratch2lm (⭐ 5): Training transformer models (e.g. RoBERTa, GPT-2, and GPT-J) from scratch
Dtem (⭐ 5): Developer Technical Expertise Mining
Copyright 2018-2024 Awesome Open Source. All rights reserved.