| Project Name | Description | Stars | Downloads | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language |
|---|---|---|---|---|---|---|---|---|---|
| Haystack | :mag: LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search, or conversational agent chatbots. | 12,474 | 30 | 3 months ago | 100 | November 09, 2023 | 346 | apache-2.0 | Python |
| Txtai | 💡 All-in-one open-source embeddings database for semantic search, LLM orchestration, and language model workflows | 6,143 | 9 | 3 months ago | 35 | November 08, 2023 | 16 | apache-2.0 | Python |
| Instructor Embedding | [ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings | 1,630 | | 2 months ago | | | 20 | apache-2.0 | Python |
| Langroid | Harness LLMs with Multi-Agent Programming | 988 | | 3 months ago | | | 36 | mit | Python |
| Sgpt | SGPT: GPT Sentence Embeddings for Semantic Search | 742 | | 10 months ago | | | 20 | mit | Jupyter Notebook |
| Retromae | Codebase for RetroMAE and beyond. | 171 | | 4 months ago | | | 9 | apache-2.0 | Python |
| Smarterreply | Chrome extension for creating custom Smart Replies in Gmail | 27 | | 3 years ago | | | 1 | mit | JavaScript |
| Unsupervised Passage Reranking | Code, datasets, and checkpoints for the paper "Improving Passage Retrieval with Zero-Shot Question Generation (EMNLP 2022)" | 24 | | a year ago | | | 1 | | Python |
| Query_completion | Personalized Query Completion | 15 | | 3 years ago | | | 3 | apache-2.0 | Python |
| Hierarchical Language Modeling | We address the task of learning contextualized word, sentence, and document representations with a hierarchical language model: Transformer-based encoders are stacked at the sentence level and then at the document level, and trained with masked token prediction. | 5 | | a year ago | | | 2 | mit | Jupyter Notebook |