
garak

Apache-2.0

⚡ Vulnerability Scanning · Python

garak is a vulnerability scanner designed specifically for Large Language Models, developed by NVIDIA. It probes LLMs for weaknesses including prompt injection, jailbreaking, training data leakage, hallucination, toxic generation, and other failure modes. garak ships with dozens of probe modules, each targeting a specific vulnerability class, and supports custom probe development. It works with OpenAI, Hugging Face, local models, and any API-compatible endpoint. Results include detailed reports on which attacks succeeded, confidence scores, and categorization by risk type. It is well suited to red-teaming AI systems before deployment and to validating safety guardrails.

7.7k stars · 903 forks · 325 issues · Updated today

Installation

$ pip install garak
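After installation, a first scan can be run from the command line. As a minimal sketch: the model name `gpt2` and the probe module `promptinject` below are illustrative choices, not recommendations, and flag behavior may vary between garak versions.

```shell
# List the available probe modules (assumes garak is on PATH after pip install)
garak --list_probes

# Quick scan of a small local Hugging Face model with the
# prompt-injection probes; results are written to a report file
garak --model_type huggingface --model_name gpt2 --probes promptinject
```

Running without `--probes` exercises the default probe set, which takes considerably longer.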

Use Cases

  • Red-teaming LLM deployments for vulnerabilities
  • Testing prompt injection resistance
  • Detecting training data leakage from models
  • Validating AI safety guardrails before production
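For the red-teaming and jailbreak-testing use cases above, a hosted model can be targeted through its API. A hedged sketch, assuming an OpenAI account: the key value is a placeholder, and `gpt-3.5-turbo` and the `dan` (jailbreak) probe module are illustrative.

```shell
# Hypothetical red-team run against an OpenAI-hosted model;
# garak reads the API key from the environment
export OPENAI_API_KEY="sk-..."   # placeholder, not a real key

# Probe the model with DAN-style jailbreak attempts
garak --model_type openai --model_name gpt-3.5-turbo --probes dan
```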

Tags

llm-security · ai-red-team · prompt-injection · jailbreak · ml-security · ai · llm-evaluation · security-scanners · vulnerability-assessment

