Found 45 Skills
Compare Replicate models by cost, speed, quality, and capabilities.
Prompting techniques for AI image generation and editing models on Replicate. Use when writing prompts for image models or building image generation features.
Prompting techniques for AI video generation models on Replicate. Use when writing prompts for video models or building video generation features.
Run AI models on Replicate via predictions, webhooks, and streaming.
Find AI models on Replicate using search and curated collections.
Discover, compare, and run AI models using Replicate's API.
Push and publish custom AI models to Replicate, and set up CI/CD for releasing new model versions safely. Use when running cog push, deploying a model to Replicate, releasing a new version, validating a model with cog-safe-push before publishing, configuring a Replicate deployment, setting up GitHub Actions for model releases, or porting a community model to an official one. Trigger on phrases like "push a model to Replicate", "publish a model", "deploy a model", "release a new version", "cog push", "cog-safe-push", "model CI", "r8.im", or "schema compatibility", and when referencing github.com/replicate/cog-safe-push or github.com/replicate/model-ci-template. Covers cog push, the full cog-safe-push config (test cases, fuzz, deployment, official_model), GitHub Actions patterns, multi-model matrix pushes, and post-publish monitoring. Assumes you already have a working Cog project; see build-models if you need to package one first.
Package and build custom AI models with Cog for deployment on Replicate. Use when creating a cog.yaml or predict.py, defining model inputs and outputs, loading model weights at setup time, building Docker images for ML models, serving locally with cog serve or cog predict, or porting a HuggingFace, GitHub, or ComfyUI model to run on Replicate. Trigger on phrases like "build a model", "package a model", "create a Cog model", "wrap a model", "containerize an AI model", "predict.py", "cog.yaml", "BasePredictor", or "Cog container", and when referencing cog.run, github.com/replicate/cog, or github.com/replicate/cog-examples. Covers GPU and CUDA setup, pget for fast weight downloads, async predictors with continuous batching, streaming outputs, and cold-boot optimization for image, video, audio, and LLM models. For pushing built models to Replicate, see publish-models. For running existing models, see run-models.
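A minimal `cog.yaml` of the kind this entry covers might look like the following sketch; the package pins are illustrative, and `predict.py:Predictor` must point at your own `BasePredictor` subclass.

```yaml
build:
  gpu: true
  python_version: "3.11"
  python_packages:
    - "torch==2.1.0"  # illustrative pin; match your model's requirements
predict: "predict.py:Predictor"
```

With this in place, `cog predict` and `cog serve` run the model locally inside the container Cog builds, and `cog push` uploads it to Replicate (see the publish-models entry).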
Receive and verify Replicate webhooks. Use when setting up Replicate webhook handlers, debugging signature verification, or handling prediction events like start, output, logs, or completed.
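A minimal verification sketch, assuming the Svix-style scheme Replicate's webhook docs describe: HMAC-SHA256 over `"{id}.{timestamp}.{body}"` using the base64-decoded signing secret (the part after the `whsec_` prefix), compared against the `webhook-signature` header. The function name and header-parsing details below are this sketch's own choices, not Replicate's API.

```python
import base64
import hashlib
import hmac

def verify_webhook(secret: str, webhook_id: str, timestamp: str,
                   body: str, signature_header: str) -> bool:
    """Verify a Replicate webhook signature (sketch).

    secret: the signing secret from Replicate, e.g. "whsec_...".
    webhook_id, timestamp: the webhook-id and webhook-timestamp headers.
    body: the raw request body, exactly as received.
    signature_header: the webhook-signature header value.
    """
    # Decode the secret: strip the "whsec_" prefix, then base64-decode.
    key = base64.b64decode(secret.split("_", 1)[1])
    signed_content = f"{webhook_id}.{timestamp}.{body}".encode()
    expected = base64.b64encode(
        hmac.new(key, signed_content, hashlib.sha256).digest()
    ).decode()
    # The header may hold several space-separated "v1,<sig>" entries;
    # accept the webhook if any of them matches.
    received = [s.split(",", 1)[-1] for s in signature_header.split()]
    return any(hmac.compare_digest(expected, sig) for sig in received)
```

In a real handler you would also reject stale timestamps to limit replay attacks, and always compute the HMAC over the raw request bytes rather than a re-serialized JSON body.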
Replicate integration. Manage data and records, and automate workflows. Use when the user wants to interact with Replicate data.
Comprehensive guidance for using the Replicate CLI to run AI models, create predictions, manage deployments, and fine-tune models. Use when the user wants to work with Replicate's AI model platform from the command line, whether running image generation models, language models, or any other ML model hosted on Replicate, or interacting with the Replicate API through the CLI.
Replicate and validate a GitHub issue by spinning up Archon, analyzing the issue, and systematically testing all described symptoms using browser automation. Use when: User wants to reproduce a bug, validate a GitHub issue, confirm a reported problem, or investigate whether an issue is real before working on a fix. Triggers: "replicate issue", "reproduce issue", "validate issue", "confirm bug", "test issue", "can you reproduce", "try to replicate", "verify the bug". Capability: Checks out main, pulls latest, starts Archon, reads the GitHub issue, then uses agent-browser to systematically test every symptom and produce a findings report. NOT for: Fixing issues (use /archon or /exp-piv-loop:fix-issue), general UI testing (use /validate-ui).