Awesome Open Source
Search results for roberta
109 search results found
Chinese Bert Wwm
⭐
8,600
Pre-Training with Whole Word Masking for Chinese BERT (the Chinese BERT-wwm model series)
Lora
⭐
7,814
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
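For context, LoRA freezes a pretrained weight matrix and learns a low-rank update on top of it. A minimal sketch using loralib's documented drop-in layers; the rank and layer sizes here are illustrative, not values from the paper:

```python
import torch.nn as nn
import loralib as lora

# Drop-in replacement for nn.Linear: the frozen base weight W is augmented
# with a trainable low-rank update B @ A of rank r.
layer = lora.Linear(768, 768, r=16)  # was: nn.Linear(768, 768)

model = nn.Sequential(layer, nn.ReLU(), lora.Linear(768, 2, r=16))

# Freeze everything except the LoRA parameters before training.
lora.mark_only_lora_as_trainable(model)
```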
Bertviz
⭐
5,547
BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)
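The typical BertViz workflow, per the project's README, is to load a model with attention outputs enabled and hand the attention tensors to a view function inside a Jupyter notebook. A minimal sketch; the model name is just an example:

```python
from transformers import AutoModel, AutoTokenizer
from bertviz import head_view

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base", output_attentions=True)

inputs = tokenizer.encode("The cat sat on the mat", return_tensors="pt")
outputs = model(inputs)

# Renders an interactive per-head attention visualization in a notebook.
head_view(outputs.attentions, tokenizer.convert_ids_to_tokens(inputs[0]))
```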
Awesome Pretrained Chinese Nlp Models
⭐
3,738
Awesome Pretrained Chinese NLP Models: a collection of high-quality Chinese pre-trained models, large models, multimodal models, and large language models
Albert_zh
⭐
3,723
A Lite BERT for Self-Supervised Learning of Language Representations; large-scale pre-trained Chinese ALBERT models
Clue
⭐
3,345
Chinese Language Understanding Evaluation Benchmark: datasets, baselines, pre-trained models, corpus, and leaderboard
Uer Py
⭐
2,802
Open Source Pre-training Model Framework in PyTorch & Pre-trained Model Zoo
Rust Bert
⭐
2,300
Rust native ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT2,...)
Roberta_zh
⭐
2,141
Pre-trained Chinese RoBERTa models: RoBERTa for Chinese
News Please
⭐
1,821
news-please - an integrated web crawler and information extractor for news that just works
Farm
⭐
1,706
🏡 Fast & easy transfer learning for NLP. Harvesting language models for the industry. Focus on Question Answering.
Deberta
⭐
1,673
The implementation of DeBERTa
Cluener2020
⭐
1,384
CLUENER2020: fine-grained named entity recognition for Chinese
Turbotransformers
⭐
1,322
A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT2, decoders, etc.) on CPU and GPU.
Tencentpretrain
⭐
951
Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo
Bert_seq2seq
⭐
890
A PyTorch implementation of BERT for seq2seq tasks using the UniLM scheme; now also supports automatic summarization, text classification, sentiment analysis, NER, and part-of-speech tagging.
Curated Transformers
⭐
790
🤖 A PyTorch library of curated Transformer models and their composable components
Texar Pytorch
⭐
711
Integrating the Best of TF into PyTorch, for Machine Learning, Natural Language Processing, and Text Generation. This is part of the CASL project: http://casl-project.ai/
Gector
⭐
680
Official implementation of the papers "GECToR – Grammatical Error Correction: Tag, Not Rewrite" (BEA-20) and "Text Simplification by Tagging" (BEA-21)
Phobert
⭐
593
PhoBERT: Pre-trained language models for Vietnamese (EMNLP-2020 Findings)
Promptclue
⭐
592
PromptCLUE: a zero-shot learning model supporting all Chinese tasks
Bertweet
⭐
542
BERTweet: A pre-trained language model for English Tweets (EMNLP-2020)
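The released checkpoint is published on the Hugging Face hub as vinai/bertweet-base; a quick loading sketch following the model card:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# BERTweet ships its own tokenizer, which normalizes tweet artifacts
# (user handles, URLs) before BPE segmentation.
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
model = AutoModel.from_pretrained("vinai/bertweet-base")

input_ids = torch.tensor(
    [tokenizer.encode("SC has first two presumptive cases of coronavirus")]
)
with torch.no_grad():
    features = model(input_ids).last_hidden_state  # contextual token embeddings
```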
Cluepretrainedmodels
⭐
536
A collection of high-quality Chinese pre-trained models: state-of-the-art large models, the fastest small models, and dedicated similarity models
Cluecorpus2020
⭐
517
Large-scale pre-training corpus for Chinese: 100 GB of Chinese pre-training data
Japanese Pretrained Models
⭐
479
Code for producing Japanese pretrained models provided by rinna Co., Ltd.
Happy Transformer
⭐
449
Happy Transformer makes it easy to fine-tune and perform inference with NLP Transformer models.
Klue
⭐
379
📖 Korean NLU Benchmark
Transformersum
⭐
362
Models to perform neural summarization (extractive and abstractive) using machine learning transformers and a tool to convert abstractive summarization datasets to the extractive task.
Ernie
⭐
195
Simple State-of-the-Art BERT-Based Sentence Classification with Keras / TensorFlow 2. Built with HuggingFace's Transformers.
Mint
⭐
192
MinT: Minimal Transformer Library and Tutorials
Embedding As Service
⭐
182
One-Stop Solution to encode sentence to fixed length vectors from various embedding techniques
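To illustrate the underlying idea rather than this project's own API: a fixed-length sentence vector can be obtained by mean-pooling a transformer's token embeddings over the attention mask. A hedged sketch in plain Transformers:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")

def embed(sentences):
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state   # (batch, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1)    # zero out padding tokens
    return (hidden * mask).sum(1) / mask.sum(1)     # mean-pooled sentence vectors

vectors = embed(["RoBERTa encodes sentences.", "Fixed-length vectors result."])
print(vectors.shape)  # torch.Size([2, 768]) for roberta-base
```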
Robbert
⭐
180
A Dutch RoBERTa-based language model
Xlnet_zh
⭐
175
Pre-trained Chinese XLNet models: XLNet_Large for Chinese
Cosine
⭐
155
Code for the paper "Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach" (NAACL-HLT 2021).
Reccon
⭐
128
This repository contains the dataset and the PyTorch implementations of the models from the paper "Recognizing Emotion Cause in Conversations".
Data2vec Pytorch
⭐
126
PyTorch implementation of "data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language" from Meta AI
Getting Started With Google Bert
⭐
126
Build and train state-of-the-art natural language processing models using BERT
Openroberta Lab
⭐
121
The programming environment »Open Roberta Lab« by Fraunhofer IAIS enables children and adolescents to program robots. A variety of programming blocks are provided to program the robot's motors and sensors. Open Roberta Lab uses a graphical programming approach so that beginners can start coding seamlessly. As a cloud-based application, the platform requires no prior software installation and runs in any popular browser, independent of operating system and device.
Koclip
⭐
117
KoCLIP: Korean port of OpenAI CLIP, in Flax
Bond
⭐
114
BOND: BERT-Assisted Open-Domain Named Entity Recognition with Distant Supervision
Robertaabsa
⭐
111
Implementation of the paper "Does syntax matter? A strong baseline for Aspect-based Sentiment Analysis with RoBERTa".
Multi Label_classification
⭐
108
Transforms multi-label classification into a sentence-pair task, with more training data and information.
Minirbt
⭐
102
MiniRBT (a series of small Chinese pre-trained models)
Adamix
⭐
94
Implementation of the paper "AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning" (https://arxiv.org/abs/2205.12410).
Wordseg
⭐
91
A PyTorch implementation of BiLSTM / BERT / RoBERTa (+ BiLSTM + CRF) models for Chinese word segmentation.
Infobert
⭐
81
[ICLR 2021] "InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective" by Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, Jingjing Liu
Dialog Nlu
⭐
78
TensorFlow and Keras implementations of state-of-the-art research in dialog system NLU
Text Summarization
⭐
76
Abstractive and extractive text summarization using Transformers.
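As a generic illustration of transformer-based abstractive summarization (not necessarily this repository's own code path), the Transformers pipeline API does it in a few lines; facebook/bart-large-cnn is a common checkpoint choice:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "RoBERTa is a robustly optimized BERT pretraining approach. It removes the "
    "next-sentence prediction objective, trains with much larger mini-batches "
    "and learning rates, and uses more data for longer."
)
# max_length/min_length bound the generated summary length in tokens.
print(summarizer(article, max_length=60, min_length=10, do_sample=False))
```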
Iterater
⭐
66
Official implementation of the paper "IteraTeR: Understanding Iterative Revision from Human-Written Text" (ACL 2022)
Ko Sentence Transformers
⭐
66
Sentence embeddings using pre-trained Korean language models
Roberta Wwm Base Distill
⭐
64
A RoBERTa-wwm-base model distilled from RoBERTa-wwm-large.
Clue_pytorch
⭐
61
PyTorch baselines for CLUE
Polish Roberta
⭐
61
RoBERTa models for Polish
Code Bert
⭐
48
Automatically check for mismatches between code and comments using AI and ML
Xlm Roberta Ner
⭐
46
Named Entity Recognition with Pretrained XLM-RoBERTa
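For comparison, a multilingual NER tagger can also be run through the Transformers pipeline; the checkpoint below is a published XLM-RoBERTa model fine-tuned on CoNLL-2003, not this repository's own weights:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="xlm-roberta-large-finetuned-conll03-english",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```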
Erc
⭐
40
Emotion recognition in conversation
Efficient Task Transfer
⭐
34
Research code for "What to Pre-Train on? Efficient Intermediate Task Selection", EMNLP 2021
Palbert
⭐
33
Code for the paper "PALBERT: Teaching ALBERT to Ponder", NeurIPS 2022 Spotlight
Model Zoo
⭐
29
NLP model zoo for Russian
Amazon Ml Challenge2021
⭐
28
Scripts and Approach for Amazon ML Challenge
Les Military Mrc Rank7
⭐
26
LES Cup: rank-7 solution for the 2nd national "Military Intelligent Machine Reading" challenge
Transformer Qg On Squad
⭐
26
Implement Question Generator with SOTA pre-trained Language Models (RoBERTa, BERT, GPT, BART, T5, etc.)
Bert_for_longer_texts
⭐
25
A BERT classification model for texts longer than 512 tokens. The text is first divided into smaller chunks; after feeding them to BERT, the intermediate results are pooled (see the sketch below). The implementation allows fine-tuning.
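A minimal sketch of that chunk-and-pool idea, written against plain Hugging Face Transformers rather than this repository's API; the chunk size and mean pooling are illustrative choices:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

def classify_long_text(text, chunk_size=510):
    # Tokenize without truncation, then split into chunks that fit BERT's
    # 512-token window (leaving room for [CLS] and [SEP]).
    ids = tokenizer.encode(text, add_special_tokens=False)
    chunks = [ids[i:i + chunk_size] for i in range(0, len(ids), chunk_size)]
    logits = []
    for chunk in chunks:
        input_ids = torch.tensor(
            [[tokenizer.cls_token_id] + chunk + [tokenizer.sep_token_id]]
        )
        with torch.no_grad():
            logits.append(model(input_ids).logits)
    # Pool the per-chunk predictions; mean pooling is one common choice.
    return torch.stack(logits).mean(dim=0).softmax(dim=-1)
```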
Politbert
⭐
18
Polish RoBERTa model trained on Polish literature, Wikipedia, and OSCAR. The major assumption is that high-quality text yields a good model.
Distilkobilstm
⭐
17
Distilling Task-Specific Knowledge from Teacher Model into BiLSTM
Ruberta
⭐
17
Russian RoBERTa
Fake News Classification Model
⭐
16
✨ Fake news classification using a source-adaptive framework (BE project 🎓). The repository contains detailed documentation of the project, the classification pipeline, the architecture, the system interface design, and the tech stack used.
Bert_seq2seq_ddp
⭐
16
A DDP (DistributedDataParallel) version of bert_seq2seq, supporting BERT, RoBERTa, NEZHA, T5, GPT2, and other models.
Tianchi2020chinesemedicinequestiongeneration
⭐
16
2020 Alibaba Cloud Tianchi Big Data Competition: Traditional Chinese Medicine Literature Question Generation Challenge
Text Classification Pytorch
⭐
16
Summary and comparison of Chinese classification models
Long Texts Sentiment Analysis Roberta
⭐
16
PyTorch implementation of sentiment analysis for long texts written in Serbian (a low-resource language), using a pre-trained multilingual RoBERTa-based model (XLM-R) on a small dataset.
Infotabs Code
⭐
14
Implementation of the semi-structured inference model in our ACL 2020 paper, INFOTABS: Inference on Tables as Semi-structured Data.
Xbert
⭐
14
Implementation of a pre-trained model loading architecture for BERT and its variants with TensorFlow 2
Klue Rbert
⭐
14
↔️ Utilizing the RBERT model structure for the KLUE relation extraction task
German Transformer Training
⭐
13
Plan and train German transformer models.
Structured_tuning_srl
⭐
12
Implementation of our ACL 2020 paper: Structured Tuning for Semantic Role Labeling
Team
⭐
12
Our EMNLP 2022 paper on MCQA
Vietnamese Roberta
⭐
12
A Robustly Optimized BERT Pretraining Approach for Vietnamese
Lm Legal Es
⭐
12
Language Models for the legal domain in Spanish done @ BSC-TEMU within the "Plan de las Tecnologías del Lenguaje" (Plan-TL).
Aws Llm Sagemaker
⭐
11
Polyglot-based RAG with OpenSearch on Amazon SageMaker
Zabanshenas
⭐
10
Zabanshenas is a solution for identifying the most likely language of a piece of written text.
Attentionvisualizer
⭐
10
A simple library to showcase the highest-scoring words using a RoBERTa model
Zeroe
⭐
9
From Hero to Zéroe: A Benchmark of Low-Level Adversarial Attacks
Roberta Base Mr
⭐
9
A Marathi RoBERTa language model trained from scratch during the Hugging Face 🤗 x Flax community week
Pytorch Roberta
⭐
9
Cdgp
⭐
8
Code for Findings of EMNLP 2022 short paper "CDGP: Automatic Cloze Distractor Generation based on Pre-trained Language Model".
Transformers_examples
⭐
8
Reference PyTorch code for Hugging Face Transformers
Ramen
⭐
7
Software for transferring pre-trained English models to foreign languages
Transformers_onnx
⭐
7
Drophead Pytorch
⭐
7
An implementation of DropHead regularization for PyTorch transformer models
Cross Lingual Consistency
⭐
7
Easy-to-use framework for evaluating the cross-lingual consistency of factual knowledge (supports LLaMA, BLOOM, mT5, RoBERTa, etc.). Paper: https://arxiv.org/abs/2310.10378
Ai_generated_text_checker_app
⭐
7
This app classifies text generated by AI tools like ChatGPT. The roberta-base-openai-detector model from Hugging Face is used to detect AI-generated text.
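A hedged sketch of that detection step via the Transformers pipeline; roberta-base-openai-detector is the Hugging Face checkpoint the description names, and its labels and score thresholds should be checked against the model card:

```python
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

# Returns a label (human- vs. machine-written) with a confidence score.
print(detector("This essay was definitely written by a human being."))
```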
Roberta4keras
⭐
6
An English RoBERTa based on bert4keras
Security Intelligence On Exchanged Multimedia Messages Based On Deep Learning
⭐
6
Deep learning (DL) approaches use multiple processing layers to learn hierarchical representations of data. Recently, many natural language processing (NLP) methods and model designs have shown significant progress, especially in text mining and analysis. Well-known models for learning vector-space representations of text include Word2vec, GloVe, and fastText, and NLP took a big step forward when BERT and, more recently, GPT-3 came out.
Perceptivepyro
⭐
6
Run and train Transformer-based large language models (LLMs) natively in .NET using TorchSharp
Simpleclassification
⭐
6
Simple text classification [WIP]
Kaznerd
⭐
6
An open-source Kazakh named entity recognition dataset (KazNERD), annotation guidelines, and baseline NER models.
Amazon Fine Food Reviews
⭐
6
Determine the polarity of Amazon Fine Food reviews using ULMFiT, BERT, XLNet, and RoBERTa
Assin
⭐
6
Supporting code for the paper "Multilingual Transformer Ensembles for Portuguese Natural Language Tasks".
Mmm Mcqa
⭐
6
Source code for our "MMM" paper at AAAI 2020
1-100 of 109 search results