Awesome Open Source
Search results for "prompt injection llm security"
Filters: llm-security, prompt-injection
6 search results found
Llm Guard (⭐ 567): The Security Toolkit for LLM Interactions
Vigil Llm (⭐ 132): ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
Open Prompt Injection (⭐ 52): Prompt injection attacks and defenses in LLM-integrated applications
Vibraniumdome (⭐ 29): The world's first open-source LLM Applications Firewall
Llm Confidentiality (⭐ 13): Framework for attacking the confidentiality of Large Language Models (LLMs)
Manipulative Expression Recognition (⭐ 7): MER is software that identifies and highlights manipulative communication in text from human conversations and AI-generated responses. MER benchmarks language models for manipulative expressions, fostering the development of transparency and safety in AI. It also supports victims of manipulation by detecting manipulative patterns in human communication.
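The tools listed above detect prompt injections in LLM inputs. As a point of orientation, a minimal keyword-based sketch of injection flagging is shown below; this is an illustrative assumption, not the actual detection logic of LLM Guard, Vigil, or any other listed project, which typically combine ML classifiers, vector similarity, and canary tokens rather than simple patterns.

```python
import re

# Naive phrase patterns often associated with prompt-injection attempts.
# Hypothetical examples for illustration only; real scanners use far
# more robust methods than keyword matching.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior|above) (instructions|prompts?)",
    r"disregard (the )?(rules|instructions)",
    r"reveal (the )?system prompt",
    r"you are now\b",
]

def flag_prompt_injection(text: str) -> bool:
    """Return True if any known injection phrase appears in the input."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

A heuristic like this is easy to evade (paraphrasing, encoding, indirect injection via retrieved documents), which is precisely why dedicated toolkits such as those above exist.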
Related Searches
Python Prompt Injection (11)
Llm Prompt Injection (11)
Chatgpt Prompt Injection (5)
Large Language Models Prompt Injection (4)
Llmops Prompt Injection (4)
Python Llm Security (4)
Adversarial Attacks Prompt Injection (4)
Llmops Llm Security (3)
Openai Llm Security (3)
Security Tools Prompt Injection (3)
Copyright 2018-2024 Awesome Open Source. All rights reserved.