Patterns for OrbStack Linux VMs and Docker on macOS. Covers orbctl/orb commands, machine lifecycle, cloud-init, networking, file sharing, and SSH access. Use when working with OrbStack, orbctl commands, or Linux VMs on macOS.
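For orientation, a minimal machine-lifecycle sketch using the commands this skill covers; the machine name `demo` is a placeholder, and exact flags may vary by OrbStack version:

```bash
# Minimal OrbStack machine lifecycle (machine name "demo" is a placeholder)
orbctl create ubuntu demo      # create an Ubuntu machine named demo
orbctl list                    # list machines and their state
orb -m demo uname -a           # run a one-off command inside the machine
ssh demo@orb                   # SSH in via OrbStack's built-in SSH gateway
orbctl stop demo               # stop the machine
orbctl delete demo             # remove it entirely
```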
Host security hardening and risk-tolerance configuration for OpenClaw deployments. Use when a user asks for security audits, firewall/SSH/update hardening, risk posture, exposure review, OpenClaw cron scheduling for periodic checks, or version status checks on a machine running OpenClaw (laptop, workstation, Pi, VPS).
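Since OpenClaw's own CLI surface isn't documented here, a generic hardening spot-check sketch of the kind this skill would schedule; the audit script path and cron cadence are assumptions:

```bash
# Generic host-hardening spot checks (not OpenClaw-specific commands)
sudo ufw status verbose                                           # firewall state
sudo sshd -T | grep -Ei 'permitrootlogin|passwordauthentication'  # effective SSH policy
systemctl is-enabled unattended-upgrades 2>/dev/null              # auto-updates (Debian/Ubuntu)
# Hypothetical cron entry for a weekly periodic audit:
# 0 6 * * 1 /usr/local/bin/security-audit.sh
```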
Use Desktop Commander MCP (typically tools like `mcp__desktop-commander__*`) to manage local files and long-running processes: read/write/search files, apply precise edits, work with Excel/PDFs, run terminal commands and interact with REPLs (Python/Node/SSH/DB), inspect/terminate processes, and review tool call history. Use when the task requires doing real work on the machine (editing code/configs, searching a repo, analyzing CSV/Excel, generating/modifying PDFs, running commands with streaming output).
Comprehensive Tailscale VPN setup, configuration, and management for mesh networking, secure access, and zero-trust infrastructure. Covers installation, CLI commands, subnet routers, exit nodes, Tailscale SSH, ACL/grants configuration, MagicDNS, Tailscale Serve/Funnel, API automation, and production deployment best practices.
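A quick sketch of the CLI operations listed above; the advertised route is a placeholder, and note that re-running `tailscale up` replaces previously set flags unless they are repeated:

```bash
# Core Tailscale operations (192.168.1.0/24 is a placeholder route)
tailscale up --ssh                                       # join the tailnet with Tailscale SSH enabled
tailscale up --ssh --advertise-routes=192.168.1.0/24 \
             --advertise-exit-node                       # also act as subnet router and exit node
tailscale status                                         # peers and connection state
tailscale serve 3000                                     # share a local port inside the tailnet
tailscale funnel 3000                                    # expose it to the public internet
```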
Use this skill when managing Linux servers, writing shell scripts, configuring systemd services, debugging networking, or hardening security. Triggers on bash scripting, systemd units, iptables, firewall, SSH configuration, file permissions, process management, cron jobs, disk management, and any task requiring Linux system administration.
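As a concrete example of the systemd side, a minimal service unit sketch; the unit name, binary path, and user are placeholders:

```bash
# Install and start a minimal service unit (names and paths are placeholders)
sudo tee /etc/systemd/system/myapp.service >/dev/null <<'EOF'
[Unit]
Description=My app
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp
User=myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now myapp.service
```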
Internal downstream skill for ctf-sandbox-orchestrator. CTF-sandbox workflow for Linux credential artifacts, service tokens, SSH material, cloud and container secrets, socket-level trust, and host-to-host pivot chains. Use when the user asks to trace Linux auth artifacts, replay accepted tokens or keys, map socket or service-account trust edges, analyze sudo or capability abuse, or explain lateral movement across Linux challenge nodes. Use only after `$ctf-sandbox-orchestrator` has already established sandbox assumptions and routed here.
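A read-only sweep sketch covering the artifact classes named above; paths should be adjusted to the challenge image:

```bash
# Quick Linux credential-artifact sweep (read-only checks)
ls -la ~/.ssh 2>/dev/null                               # SSH keys, known_hosts, authorized_keys
sudo -l 2>/dev/null                                     # sudo rights for the current user
ls -la /var/run/docker.sock 2>/dev/null                 # socket-level trust (docker group => root)
find / -path '*aws*credentials*' -readable 2>/dev/null  # cloud secrets left on disk
getcap -r / 2>/dev/null                                 # binaries with file capabilities
```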
Early-access skill for Mimiry's softlaunch GPU compute platform. Use this skill whenever the user wants to run a GPU job, start a compute session, train a model, launch a container on a GPU, check their balance, manage running sessions, or build a compute job script on the Mimiry softlaunch environment. Also triggers when the user mentions Mimiry softlaunch, Mimiry compute, GPU sessions, SSH-ing into a session, or asks to "run this on a GPU" or "launch a training job". Covers both quick one-liners and interactive job-building workflows. This is the softlaunch (early beta) version; the API and features may change.
DDEV local development environment for Craft CMS projects. Covers config.yaml settings (project type, PHP/Node versions, database, docroot), shorthand commands (ddev composer, ddev craft, ddev npm), add-ons (Redis, Mailpit), custom commands (.ddev/commands/), Vite dev server exposure (web_extra_exposed_ports, web_extra_daemons), database import/export, Xdebug toggling, and troubleshooting. Triggers on: ddev start, ddev craft, ddev composer, ddev ssh, ddev import-db, ddev xdebug, .ddev/config.yaml, web_extra_exposed_ports, web_extra_daemons, ddev add-on, ddev poweroff, ddev describe. Use when running DDEV commands, configuring local environments, or troubleshooting container issues.
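A representative `.ddev/config.yaml` sketch for a Craft CMS project with a Vite port exposed; the project name and version numbers are illustrative:

```bash
# Illustrative DDEV config for Craft CMS (values are examples, not requirements)
cat > .ddev/config.yaml <<'EOF'
name: my-craft-site
type: craftcms
docroot: web
php_version: "8.2"
nodejs_version: "20"
database:
  type: mysql
  version: "8.0"
web_extra_exposed_ports:
  - name: vite
    container_port: 3000
    http_port: 3000
    https_port: 3001
EOF
ddev start && ddev composer install
```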
HCCL (Huawei Collective Communication Library) performance testing for Ascend NPU clusters. Use for testing distributed communication bandwidth, verifying HCCL functionality, and benchmarking collective operations like AllReduce, AllGather. Covers MPI installation, multi-node pre-flight checks (SSH/CANN version/NPU health), and production testing workflows.
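A pre-flight sketch for the multi-node checks mentioned above; hostnames are placeholders, and the CANN path assumes the default install location:

```bash
# Multi-node pre-flight checks (hostnames are placeholders)
for host in npu-node1 npu-node2; do
  ssh -o BatchMode=yes "$host" true || { echo "passwordless SSH to $host failed"; continue; }
  ssh "$host" "npu-smi info"                         # NPU health and driver state
  ssh "$host" "ls /usr/local/Ascend/ascend-toolkit"  # installed CANN versions (default path)
done
```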
Run GPU workloads on Modal: training, fine-tuning, inference, batch processing. Zero-config serverless: no SSH, no Docker, auto scale-to-zero. Use when user says "modal run", "modal training", "modal inference", "deploy to modal", "need a GPU", "run on modal", "serverless GPU", or needs remote GPU compute.
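A minimal end-to-end sketch; the app name, GPU type, and function body are illustrative:

```bash
# Write and run a minimal Modal GPU function (names and GPU type are illustrative)
cat > gpu_check.py <<'EOF'
import modal

app = modal.App("gpu-check")

@app.function(gpu="A10G", image=modal.Image.debian_slim().pip_install("torch"))
def check():
    import torch
    print(torch.cuda.get_device_name(0))  # prints the attached GPU's name
EOF
modal run gpu_check.py::check
```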
Reserved and on-demand GPU cloud instances for ML training and inference. Use when you need dedicated GPU instances with simple SSH access, persistent filesystems, or high-performance multi-node clusters for large-scale training.
Deploy and manage enter.pollinations.ai text/image services on EC2 and Cloudflare Workers. Requires: SSH keys, sops, wrangler.
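A deploy-step sketch under those requirements; the secrets file and key name are hypothetical, and `yq` is assumed to be installed:

```bash
# Decrypt secrets with sops, push one to the Worker, then deploy
sops -d secrets.enc.yaml | yq -r '.API_KEY' | wrangler secret put API_KEY  # API_KEY is hypothetical
wrangler deploy
```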