Loading...
Loading...
Found 5 Skills
LLM guardrails with NeMo, Guardrails AI, and OpenAI. Input/output rails, hallucination prevention, fact-checking, toxicity detection, red-teaming patterns. Use when building LLM guardrails, safety checks, or red-team workflows.
Senior Code Architect & Quality Assurance Engineer for 2026. Specialized in context-aware AI code reviews, automated PR auditing, and technical debt mitigation. Expert in neutralizing "AI-Smells," identifying performance bottlenecks, and enforcing architectural integrity through multi-job red-teaming and surgical remediation suggestions.
Find every way users can break your AI before they do. Use when you need to red-team your AI, test for jailbreaks, find prompt injection vulnerabilities, run adversarial testing, do a safety audit before launch, prove your AI is safe for compliance, stress-test guardrails, or verify your AI holds up against adversarial users. Covers automated attack generation, iterative red-teaming with DSPy, and MIPROv2-optimized adversarial testing.
Use when challenging ideas, plans, decisions, or proposals using structured critical reasoning. Invoke to play devil's advocate, run a pre-mortem, red team, or audit evidence and assumptions.
Search and retrieve pentesting, red teaming, and security research information from the HackTricks wiki (book.hacktricks.wiki). Use for payloads, methodologies, bypasses, and edge-case behaviors across web, network, cloud, and application security topics.