Found 16 Skills
Guide for AI-powered penetration testing tools, red teaming frameworks, and autonomous security agents.
Use when you need comprehensive security scanning across applications, infrastructure, and dependencies with LLM-based analysis.
Defense techniques against prompt injection attacks, including direct injection, indirect injection, and jailbreaks. Use when "prompt injection, jailbreak prevention, input sanitization, llm security, injection attack, security, prompt-injection, llm, owasp, jailbreak, ai-safety" is mentioned.
Techniques to test and bypass AI safety filters, content moderation systems, and guardrails for security assessment