| Project Name | Stars | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description |
|---|---|---|---|---|---|---|---|---|
| Adbench | 383 | 5 months ago | | | 6 | bsd-2-clause | Python | Official implementation of "ADBench: Anomaly Detection Benchmark". |
| Paddlefleetx | 329 | a day ago | 9 | October 21, 2020 | 111 | apache-2.0 | Python | Paddle distributed training examples: ResNet, BERT, GPT, MoE; DataParallel, ModelParallel, PipelineParallel, HybridParallel, AutoParallel, ZeRO, Sharding, Recompute, GradientMerge, Offload, AMP, DGC, LocalSGD, Wide&Deep. |
| Awesome State Of Depth Completion | 210 | 5 months ago | | | | | | Current state of supervised and unsupervised depth completion methods. |
| Deep Unsupervised Domain Adaptation | 20 | a year ago | | | 1 | | Python | PyTorch implementation of four neural-network-based domain adaptation techniques: DeepCORAL, DDC, CDAN and CDAN+E, evaluated on the benchmark dataset Office31. |
| Useb | 12 | a year ago | | | 1 | apache-2.0 | Python | Heterogeneous, task- and domain-specific benchmark for unsupervised sentence embeddings, used in the TSDAE paper: https://arxiv.org/abs/2104.06979. |
This repository hosts the data and the evaluation script for reproducing the results reported in the paper "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning" (EMNLP 2021 Findings). This benchmark (USEB) contains four heterogeneous, task- and domain-specific datasets: AskUbuntu, CQADupStack, TwitterPara and SciDocs. It works directly with SBERT. For details, please refer to the paper.
```bash
pip install useb  # Or: git clone and pip install .
python -m useb.downloading all  # Download both training and evaluation data
```
After downloading the data, one can either run (it takes ~8 minutes on a GPU)

```bash
python -m useb.examples.eval_sbert
```

to evaluate an SBERT model (Sentence-Transformers is an excellent library for sentence embeddings, and its latest models perform much better than the one used here) on all the datasets, or run the equivalent code below:
```python
import torch
from sentence_transformers import SentenceTransformer  # Sentence-Transformers provides SOTA sentence-embedding methods; TSDAE is also integrated into it.
from useb import run

sbert = SentenceTransformer('bert-base-nli-mean-tokens')  # Build an SBERT model

# The only thing needed for the evaluation: a function mapping a list of
# sentences into a batch of vectors (torch.Tensor).
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
    return torch.Tensor(sbert.encode(sentences, show_progress_bar=False))

results, results_main_metric = run(
    semb_fn_askubuntu=semb_fn,
    semb_fn_cqadupstack=semb_fn,
    semb_fn_twitterpara=semb_fn,
    semb_fn_scidocs=semb_fn,
    eval_type='test',
    data_eval_path='data-eval'  # Path to the data-eval folder
)

assert round(results_main_metric['avg'], 1) == 47.6
```
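Here `results` holds the detailed per-dataset scores, while `results_main_metric` reduces each dataset to its main metric and stores the cross-dataset average under the key `'avg'`; the final assert checks that this SBERT checkpoint reproduces the 47.6 average reported in the paper.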
Evaluating on a single dataset is also supported (see useb/examples/eval_sbert_askubuntu.py):

```bash
python -m useb.examples.eval_sbert_askubuntu
```
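Programmatically, single-dataset evaluation should look like the minimal sketch below. It assumes the package exposes a `run_on` helper taking the dataset name, mirroring the example script; if the signature differs, consult useb/examples/eval_sbert_askubuntu.py:

```python
import torch
from sentence_transformers import SentenceTransformer
from useb import run_on  # assumed single-dataset counterpart of run()

sbert = SentenceTransformer('bert-base-nli-mean-tokens')

@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
    return torch.Tensor(sbert.encode(sentences, show_progress_bar=False))

# Evaluate on AskUbuntu only; the other arguments mirror the full run() call above.
result = run_on(
    'askubuntu',
    semb_fn=semb_fn,
    eval_type='test',
    data_eval_path='data-eval'
)
```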
After downloading, the data are organized as follows:

```
.
├── data-eval  # For evaluation usage. One can refer to ./unsupse_benchmark/evaluators to learn how these data are loaded.
│   ├── askubuntu
│   │   ├── dev.txt
│   │   ├── test.txt
│   │   └── text_tokenized.txt
│   ├── cqadupstack
│   │   ├── corpus.json
│   │   └── retrieval_split.json
│   ├── scidocs
│   │   ├── cite
│   │   │   ├── test.qrel
│   │   │   └── val.qrel
│   │   ├── cocite
│   │   │   ├── test.qrel
│   │   │   └── val.qrel
│   │   ├── coread
│   │   │   ├── test.qrel
│   │   │   └── val.qrel
│   │   ├── coview
│   │   │   ├── test.qrel
│   │   │   └── val.qrel
│   │   └── data.json
│   └── twitterpara
│       ├── Twitter_URL_Corpus_test.txt
│       ├── test.data
│       └── test.label
├── data-train  # For training usage.
│   ├── askubuntu
│   │   ├── supervised  # For supervised training. *.org and *.para are parallel files; corresponding lines are aligned and form gold relevant sentence pairs (to work with MultipleNegativesRankingLoss in the SBERT repo; see the sketch after this tree).
│   │   │   ├── train.org
│   │   │   └── train.para
│   │   └── unsupervised  # For unsupervised training. Each line is a sentence.
│   │       └── train.txt
│   ├── cqadupstack
│   │   ├── supervised
│   │   │   ├── train.org
│   │   │   └── train.para
│   │   └── unsupervised
│   │       └── train.txt
│   ├── scidocs
│   │   ├── supervised
│   │   │   ├── train.org
│   │   │   └── train.para
│   │   └── unsupervised
│   │       └── train.txt
│   └── twitter  # For supervised training on TwitterPara, float labels are also available (to work with CosineSimilarityLoss in the SBERT repo; see the sketch after this tree). As reported in the paper, using the float labels achieves higher performance.
│       ├── supervised
│       │   ├── train.lbl
│       │   ├── train.org
│       │   ├── train.para
│       │   ├── train.s1
│       │   └── train.s2
│       └── unsupervised
│           └── train.txt
└── tree.txt
```
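As referenced in the tree annotations above, here is a minimal training sketch for the supervised files. The file paths follow the tree, but the line-aligned file layout (one sentence per line, one float per line in train.lbl) and the training setup are assumptions built on the standard sentence-transformers API, not code from this repository:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('bert-base-uncased')

# AskUbuntu: *.org and *.para are line-aligned; each pair of corresponding
# lines forms a gold relevant sentence pair.
with open('data-train/askubuntu/supervised/train.org') as f_org, \
     open('data-train/askubuntu/supervised/train.para') as f_para:
    pair_examples = [
        InputExample(texts=[org.strip(), para.strip()])
        for org, para in zip(f_org, f_para)
    ]
pair_loader = DataLoader(pair_examples, shuffle=True, batch_size=32)
pair_loss = losses.MultipleNegativesRankingLoss(model)

# TwitterPara: train.s1/train.s2 with float labels in train.lbl
# (assumed layout: one float per line, aligned with the sentence files).
with open('data-train/twitter/supervised/train.s1') as f_s1, \
     open('data-train/twitter/supervised/train.s2') as f_s2, \
     open('data-train/twitter/supervised/train.lbl') as f_lbl:
    scored_examples = [
        InputExample(texts=[s1.strip(), s2.strip()], label=float(lbl))
        for s1, s2, lbl in zip(f_s1, f_s2, f_lbl)
    ]
scored_loader = DataLoader(scored_examples, shuffle=True, batch_size=32)
scored_loss = losses.CosineSimilarityLoss(model)

# Train on both objectives (round-robin over the two dataloaders).
model.fit(
    train_objectives=[(pair_loader, pair_loss), (scored_loader, scored_loss)],
    epochs=1,
)
```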
If you use the code for evaluation, feel free to cite our publication "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning":
```bibtex
@inproceedings{wang-etal-2021-tsdae-using,
    title = "{TSDAE}: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
    author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
    month = nov,
    year = "2021",
    address = "Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-emnlp.59",
    doi = "10.18653/v1/2021.findings-emnlp.59",
    pages = "671--688",
}
```
Contact person and main contributor: Kexin Wang, [email protected]
https://www.ukp.tu-darmstadt.de/
Don't hesitate to send us an e-mail or report an issue if something is broken (and it shouldn't be) or if you have further questions.
This repository contains experimental software and is published for the sole purpose of giving additional background details on the respective publication.