Found 54 Skills
Test for user enumeration vulnerabilities through various authentication endpoints.
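The usual enumeration signal is any observable difference between the response for a known account and the response for an unknown one (status code, body text, headers, timing). A minimal sketch of that comparison, assuming responses have already been captured as plain dicts (the field names here are illustrative, not part of any HelpMeTest API):

```python
def enumeration_signals(resp_known: dict, resp_unknown: dict) -> list[str]:
    """Compare an auth-endpoint response for a known account against one for
    an unknown account. Any field that differs is a potential enumeration
    oracle an attacker can use to confirm which accounts exist."""
    signals = []
    for field in ("status", "body", "headers"):
        if resp_known.get(field) != resp_unknown.get(field):
            signals.append(field)
    return signals
```

A login endpoint that returns 200 with "wrong password" for real users but 404 with "no such user" for others would light up both `status` and `body` here.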
Set up HelpMeTest proxy tunnels for local development testing. Use when user needs to test localhost, wants to replace production URLs with local ports, or needs to route multiple services. Use when user says 'set up proxy', 'test localhost', 'tunnel to local', or before running tests against local development servers.
Everything to do with tests on HelpMeTest. Use when: writing tests for a new feature, generating tests for an existing feature, fixing a broken test, debugging a failing test, tests broke after a UI change, tests are out of date after a refactor. Triggers on: 'write tests', 'generate tests', 'test is failing', 'fix tests', 'tests broke', 'implement X', 'add feature', 'fix bug', 'why does this test fail', 'tests are out of date'. If it involves HelpMeTest tests in any way, this is the skill.
Test if user signup is open and identify potential abuse vectors in the registration process.
Create a test user (with explicit permission) to audit what authenticated users can access versus what anonymous users can. Detects IDOR, cross-user access, and privilege escalation.
CRITICAL - Detect if the Supabase service_role key is leaked in client-side code. This is a P0 severity issue.
Extract and decode Supabase-related JWTs from client-side code, cookies, and local storage patterns.
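Both of the Supabase-key skills above rest on one fact: Supabase API keys are JWTs whose payload carries a `role` claim, `anon` for the publishable key and `service_role` for the secret one, so leaking the latter client-side is the P0 case. A minimal sketch of the decode step (no signature verification is needed just to read the claim):

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT without verifying the signature.
    For Supabase keys, the payload's `role` claim distinguishes the public
    `anon` key from the secret `service_role` key."""
    payload_b64 = token.split(".")[1]
    # base64url strips padding; restore it before decoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def is_service_role_leak(token: str) -> bool:
    """True when a JWT found in client-side code carries the service_role
    claim, i.e. the secret key has been shipped to the browser."""
    return decode_jwt_payload(token).get("role") == "service_role"
```

Any JWT scraped from bundles, cookies, or local storage can be fed through `is_service_role_leak` to triage severity immediately.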
Compare two security audit reports to track remediation progress and identify new vulnerabilities.
List all tables exposed via the Supabase PostgREST API to identify the attack surface.
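PostgREST serves an OpenAPI description of the exposed schema at the API root, so the attack surface falls out of one request. A sketch of that enumeration, assuming the project URL and anon key have already been recovered (the helper names are ours, not HelpMeTest's):

```python
import json
import urllib.request

def tables_from_openapi(spec: dict) -> list[str]:
    """Pull table/view names out of a PostgREST OpenAPI spec: each exposed
    relation appears as a path like "/users"; "/" itself is the spec root."""
    return sorted(p.lstrip("/") for p in spec.get("paths", {}) if p != "/")

def list_exposed_tables(project_url: str, anon_key: str) -> list[str]:
    """Fetch the OpenAPI spec from the Supabase PostgREST root and return
    the relations it exposes. Both arguments are placeholders for values
    extracted from the target's client-side code."""
    req = urllib.request.Request(
        f"{project_url}/rest/v1/",
        headers={"apikey": anon_key, "Authorization": f"Bearer {anon_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return tables_from_openapi(json.load(resp))
```

Splitting the parse from the fetch keeps the interesting logic testable offline; anything `tables_from_openapi` returns is a candidate for the IDOR and RLS checks above.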
Extract the Supabase project URL from client-side JavaScript code, environment variables, and configuration files.
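Hosted Supabase project URLs follow a predictable shape (a project ref, typically 20 lowercase alphanumeric characters, under `.supabase.co`), so a simple regex sweep over bundles and config files recovers them. A sketch under that assumption; self-hosted instances on custom domains would need a different pattern:

```python
import re

# Hosted-project pattern: refs are typically 20 lowercase alphanumerics.
SUPABASE_URL_RE = re.compile(r"https://[a-z0-9]{20}\.supabase\.co")

def find_supabase_urls(text: str) -> list[str]:
    """Return the distinct Supabase project URLs found in a blob of
    client-side JavaScript, an .env file, or any other config text."""
    return sorted(set(SUPABASE_URL_RE.findall(text)))
```

Run over every fetched bundle, the result seeds the PostgREST and JWT skills with their target URL.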
Invoke this skill when a user shares test code and questions whether it actually works as intended — not to run or fix the test, but to evaluate whether the test has real value. Triggers on: "is this test any good?", "would this catch a real bug?", "this test always passes — is that normal?", "review these tests before I commit", or "does this test verify anything meaningful?". Also triggers when someone suspects a test is useless, wants a pre-commit quality gate, or is unsure if an auto-generated test is worth keeping. The core question this skill answers: "Would this test fail if the feature broke?" If not, the test gets rejected. Do NOT use for generating new tests, fixing failing tests, or exploring application features.
Deep UI walkthrough with screenshot-based analysis across all pages and viewports (desktop + tablet + mobile). Delivers per-page improvement pitches grounded in what you actually see. Use when user says 'review the UI', 'pitch UI improvements', 'how does this look', 'UX audit', 'walk through the app'.