Project Name | Description | Stars | Repos Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language
---|---|---|---|---|---|---|---|---|---
Beir | A Heterogeneous Benchmark for Information Retrieval. Easy to use; evaluate your models across 15+ diverse IR datasets. | 1,332 | 8 | a month ago | 29 | July 21, 2023 | 57 | apache-2.0 | Python
Stringzilla | Up to 10x faster string search, split, sort, and shuffle for long strings and multi-gigabyte files in Python and C, leveraging SIMD with just a few lines of Arm Neon and x86 AVX2 & AVX-512 intrinsics 🦖 | 999 | | 3 months ago | 5 | November 19, 2023 | 16 | apache-2.0 | C
Rmdl | RMDL: Random Multimodel Deep Learning for Classification | 409 | 1 | a year ago | 7 | July 01, 2020 | 2 | gpl-3.0 | Python
Automated Fact Checking Resources | Links to conference/journal publications in automated fact-checking (resources for the TACL22/EMNLP23 paper). | 303 | | 3 months ago | | | 3 | mit |
Ir_datasets | Provides a common interface to many IR ranking datasets. | 284 | 8 | 3 months ago | 23 | October 18, 2022 | 76 | apache-2.0 | Python
Hdltex | HDLTex: Hierarchical Deep Learning for Text Classification | 252 | | 5 months ago | 5 | April 20, 2018 | 7 | mit | Python
Neuralqa | NeuralQA: A Usable Library for Question Answering on Large Datasets with BERT | 207 | | 3 years ago | 27 | September 18, 2020 | 33 | mit | JavaScript
Awesome Hungarian Nlp | A curated list of NLP resources for Hungarian | 192 | | 6 months ago | | | 1 | |
Chatgpt Retrievalqa | A dataset for training/evaluating Question Answering retrieval models on ChatGPT responses, with the option to train/evaluate on real human responses. | 130 | | 3 months ago | | | | | Jupyter Notebook
Query Wellformedness | 25,100 queries from the Paralex corpus (Fader et al., 2013) annotated with human ratings of whether they are well-formed natural language questions. | 63 | | 6 years ago | | | 1 | |
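Several of the projects above (Beir, Ir_datasets, Chatgpt Retrievalqa) revolve around evaluating retrieval models against qrels. As a self-contained illustration of the kind of metric these benchmarks report — not the API of any library listed here, and using hypothetical toy data — a minimal nDCG@10 computation over a ranked run might look like:

```python
import math

def ndcg_at_k(run, qrels, k=10):
    """Mean nDCG@k over queries.
    run:   {query_id: [doc_id, ...]} ranked best-first
    qrels: {query_id: {doc_id: relevance_grade}}
    """
    scores = []
    for qid, ranking in run.items():
        rels = qrels.get(qid, {})
        # Discounted cumulative gain over the top-k retrieved documents
        dcg = sum(rels.get(doc, 0) / math.log2(i + 2)
                  for i, doc in enumerate(ranking[:k]))
        # Ideal DCG: the same discounting over relevances sorted descending
        ideal = sorted(rels.values(), reverse=True)[:k]
        idcg = sum(r / math.log2(i + 2) for i, r in enumerate(ideal))
        scores.append(dcg / idcg if idcg > 0 else 0.0)
    return sum(scores) / len(scores) if scores else 0.0

# Hypothetical toy data: one query, a swapped top-2 ranking
qrels = {"q1": {"d1": 2, "d2": 1}}
run = {"q1": ["d2", "d1", "d3"]}
print(round(ndcg_at_k(run, qrels, k=10), 4))  # → 0.8597
```

Libraries such as Beir and Ir_datasets supply the queries, documents, and qrels at scale; the metric itself reduces to this discounted-gain ratio per query.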