AnythingLLM: A document chatbot to chat with anything!
An efficient, customizable, open-source, enterprise-ready document chatbot solution.
A full-stack application that enables you to turn any document, resource, or piece of content into context that any LLM can use as a reference during chatting. This application lets you pick and choose which LLM or Vector Database you want to use.
AnythingLLM aims to be a full-stack application where you can use commercial off-the-shelf LLMs or popular open source LLMs and vectorDB solutions.
AnythingLLM is a full-stack product that you can run locally or host remotely, letting you chat intelligently with any documents you provide it.
AnythingLLM divides your documents into objects called workspaces. A workspace functions a lot like a thread, but with the addition of containerization for your documents. Workspaces can share documents, but they do not talk to each other, so you can keep the context for each workspace clean.
Some cool features of AnythingLLM:
Two chat modes: conversation and query. Conversation retains previous questions and amendments; query is simple QA against your documents.
Supported Vector Databases:
This monorepo consists of three main sections:
collector: Python tools that enable you to quickly convert online resources or local documents into an LLM-usable format.
frontend: A viteJS + React frontend that you can run to easily create and manage all of the content the LLM can use.
server: A nodeJS + express server that handles all vectorDB management and LLM interactions.
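At a glance, the repo layout looks roughly like this (only the three sections named above are shown, individual files omitted):

```
anything-llm/
├── collector/   # Python scripts that turn online resources or local documents into an LLM-usable format
├── frontend/    # viteJS + React interface for creating and managing content
└── server/      # nodeJS + express server for vectorDB management and LLM interactions
```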
Minimum requirements to run this project:
node and yarn on your machine
python 3.9+ for running scripts in collector/
docker pull mintplexlabs/anythingllm:master
docker run -d -p 3001:3001 mintplexlabs/anythingllm:master
Go to http://localhost:3001 and you are now using AnythingLLM!
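If you want chats, settings, and embedded documents to survive container restarts, you can mount a volume. This is a hedged sketch: the in-container storage path /app/server/storage is an assumption and may differ for your image version, so check the Docker documentation linked below before relying on it.

```bash
# Sketch: persist AnythingLLM data across container restarts using a named volume.
# The path /app/server/storage is an assumed storage location, not confirmed here.
docker run -d -p 3001:3001 \
  -v anythingllm_storage:/app/server/storage \
  mintplexlabs/anythingllm:master
```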
More about running AnythingLLM with Docker
Run yarn setup from the project root directory. This will set up the .env files you'll need in each of the application sections. Fill those out before proceeding, or things won't work right.
Run cd frontend && yarn install && cd ../server && yarn install from the project root directory.
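Taken together, a first-time local setup is just these two commands, run from the project root (a condensed restatement of the steps above):

```bash
# Set up the .env files for each application section, then install dependencies
# for both the frontend and the server.
yarn setup
cd frontend && yarn install && cd ../server && yarn install
```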
To boot the server locally (run commands from root of repo):
Ensure server/.env.development is set and filled out.
To boot the frontend locally (run commands from root of repo):
Ensure frontend/.env is set and filled out.
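Once the .env files above are filled out, each half is booted with a yarn script from the repo root. The exact script names below (dev:server and dev:frontend) are assumptions, so verify them against the scripts section of the root package.json.

```bash
# Hedged sketch of booting both halves locally from the repo root.
# dev:server and dev:frontend are assumed script names; check package.json.
yarn dev:server     # nodeJS + express API, reads server/.env.development
yarn dev:frontend   # viteJS + React frontend, reads frontend/.env
```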
Next, you will need some content to embed. This could be a YouTube channel, Medium articles, local text files, Word documents, and the list goes on. This is where you will use the collector/ part of the repo.
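The collector is a set of standalone Python scripts, so a typical flow is to create a virtual environment, install its dependencies, and run the script for the source you want to ingest. The file names below (requirements.txt, main.py) are assumptions about the collector's layout, so check collector/ for the actual dependency file and entrypoints.

```bash
# Hedged sketch of preparing and running the collector (assumed file names).
cd collector
python3 -m venv v-env && source v-env/bin/activate
pip install -r requirements.txt
python main.py   # assumed entrypoint; replace with the script for your content source
```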
When contributing, name your branch in the format <issue number>-<short name>.
AnythingLLM by Mintplex Labs Inc contains a telemetry feature that collects anonymous usage information.
We use this information to help us understand how AnythingLLM is used, to help us prioritize work on new features and bug fixes, and to help us improve AnythingLLM's performance and stability.
Set DISABLE_TELEMETRY in your server or docker .env settings to "true" to opt out of telemetry.
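For example, opting out is a single line in the relevant .env file (shown here against server/.env.development; the same key applies to your docker .env):

```
# Opt out of anonymous telemetry collection
DISABLE_TELEMETRY="true"
```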
We will only track usage details that help us make product and roadmap decisions, specifically:
You can verify these claims by finding all locations where Telemetry.sendTelemetry is called. Additionally, these events are written to the output log, so you can also see the specific data that was sent, if enabled. No IP or other identifying information is collected. The telemetry provider is PostHog, an open-source telemetry collection service.
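If you want to audit this yourself, a plain text search over the server code is enough; grep is used here purely as an illustration.

```bash
# List every call site of Telemetry.sendTelemetry in the server code
grep -rn "Telemetry.sendTelemetry" server/
```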