Forge a complete lobster soul for the OpenClaw AI Agent. Based on user preferences or a random gacha draw, it outputs identity positioning, a soul description (SOUL.md), role-based bottom-line rules, a name, and avatar-generation prompts. If the current environment provides an audited image-generation skill, it can automatically generate avatar images in a unified style. Use this when users need to create, design, or customize OpenClaw lobster souls. Not applicable for: fine-tuning an existing SOUL.md, character design for non-OpenClaw platforms, or pure tool-type agents without a personality. Trigger words: lobster soul, shrimp soul, OpenClaw soul, raise-a-lobster soul, lobster character, lobster positioning, lobster murder-mystery role, lobster game character, lobster NPC, lobster personality, lobster backstory, lobster SOUL, random lobster, gacha, gacha draw.
World-class ML engineering skill for productionizing ML models, implementing MLOps, and building scalable ML systems. Expertise in PyTorch, TensorFlow, model deployment, feature stores, model monitoring, and ML infrastructure. Includes LLM integration, fine-tuning, RAG systems, and agentic AI. Use when deploying ML models, building ML platforms, implementing MLOps, or integrating LLMs into production systems.
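For the RAG portion, a minimal retrieval sketch, assuming the sentence-transformers package and the all-MiniLM-L6-v2 checkpoint (both are illustrative choices, as is the three-document corpus):

```python
# Minimal RAG retrieval: embed a corpus, rank documents against a
# query by cosine similarity, and splice the top hits into a prompt.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative embedding model
docs = [
    "Feature stores serve precomputed features at low latency.",
    "Model monitoring tracks drift in inputs and predictions.",
    "LoRA fine-tunes a small set of adapter weights.",
]
doc_emb = model.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_emb @ q                # cosine similarity: vectors are unit-norm
    return [docs[i] for i in np.argsort(-scores)[:k]]

question = "How do I detect data drift?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

In production the in-memory corpus would be swapped for a vector database, but the embed-retrieve-prompt shape stays the same.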
Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral). Use when you need clean model implementations, an educational understanding of architectures, or production fine-tuning with LoRA/QLoRA. Single-file implementations with no abstraction layers.
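A minimal sketch of LitGPT's high-level Python API, loading one pretrained checkpoint and generating (the model id is one illustrative pick among the supported architectures; LoRA/QLoRA fine-tuning goes through the `litgpt finetune` CLI on the same checkpoint names):

```python
# Load a pretrained checkpoint through LitGPT's Python API and generate.
# The checkpoint id is illustrative; any supported architecture works.
from litgpt import LLM

llm = LLM.load("microsoft/phi-2")   # downloads and converts on first use
out = llm.generate("Why are single-file model implementations useful?",
                   max_new_tokens=64)
print(out)
```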
CLIP, OpenAI's model connecting vision and language. Enables zero-shot image classification, image-text matching, and cross-modal retrieval. Trained on 400M image-text pairs. Use for image search, content moderation, or vision-language tasks without fine-tuning. Best for general-purpose image understanding.
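A minimal zero-shot classification sketch via the Hugging Face transformers port of CLIP (the checkpoint, image path, and labels are illustrative):

```python
# Zero-shot image classification with CLIP: score an image against
# free-form text labels, no fine-tuning involved.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

ckpt = "openai/clip-vit-base-patch32"
model = CLIPModel.from_pretrained(ckpt)
processor = CLIPProcessor.from_pretrained(ckpt)

image = Image.open("photo.jpg").convert("RGB")   # placeholder path
labels = ["a photo of a cat", "a photo of a dog", "a photo of a lobster"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)[0]
for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.3f}")
```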
Receive and verify OpenAI webhooks. Use when setting up OpenAI webhook handlers for fine-tuning jobs, batch completions, or async events like fine_tuning.job.completed, batch.completed, or realtime.call.incoming.
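A minimal Flask handler sketch, assuming the `webhooks.unwrap` verification helper in recent versions of the official `openai` Python SDK (the route and environment-variable name are placeholders; check your SDK version supports webhooks):

```python
# Verify an OpenAI webhook signature and dispatch on the event type.
# `unwrap` raises if the signature does not match the webhook secret.
import os
from flask import Flask, request
from openai import OpenAI

app = Flask(__name__)
client = OpenAI(webhook_secret=os.environ["OPENAI_WEBHOOK_SECRET"])

@app.post("/openai/webhook")
def handle_webhook():
    event = client.webhooks.unwrap(request.data, request.headers)
    if event.type == "fine_tuning.job.completed":
        ...  # fetch the job and pull the resulting model id
    elif event.type == "batch.completed":
        ...  # download the batch output file
    return "", 200
```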
Collaboration process for UI style modifications. Used when users request page style changes, layout adjustments, or UI detail tweaks. The structured sequence of "Screenshot Locating → Current-State Description → Option Selection → Code Modification → Fine-Tuning" reduces miscommunication and avoids wasted tokens.
Trains and fine-tunes vision models for object detection (D-FINE, RT-DETR v2, DETR, YOLOS), image classification (timm models — MobileNetV3, MobileViT, ResNet, ViT/DINOv3 — plus any Transformers classifier), and SAM/SAM2 segmentation using Hugging Face Transformers on Hugging Face Jobs cloud GPUs. Covers COCO-format dataset preparation, Albumentations augmentation, mAP/mAR evaluation, accuracy metrics, SAM segmentation with bbox/point prompts, DiceCE loss, hardware selection, cost estimation, Trackio monitoring, and Hub persistence. Use when users mention training object detection, image classification, SAM, SAM2, segmentation, image matting, DETR, D-FINE, RT-DETR, ViT, timm, MobileNet, ResNet, bounding box models, or fine-tuning vision models on Hugging Face Jobs.
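One concrete slice of this: a minimal image-classification fine-tuning sketch with the Transformers Trainer (the beans dataset and ViT checkpoint are illustrative stand-ins; the same script can be submitted to Hugging Face Jobs to run on a cloud GPU):

```python
# Fine-tune a ViT classifier on an image dataset with the HF Trainer.
import torch
from datasets import load_dataset
from transformers import (AutoImageProcessor, AutoModelForImageClassification,
                          Trainer, TrainingArguments)

ds = load_dataset("beans")                       # placeholder dataset
ckpt = "google/vit-base-patch16-224-in21k"       # placeholder checkpoint
processor = AutoImageProcessor.from_pretrained(ckpt)
labels = ds["train"].features["labels"].names

def transform(examples):
    # Resize/normalize each PIL image into a model-ready tensor.
    examples["pixel_values"] = [
        processor(img.convert("RGB"), return_tensors="pt")["pixel_values"][0]
        for img in examples["image"]
    ]
    del examples["image"]
    return examples

ds = ds.with_transform(transform)

def collate(examples):
    return {
        "pixel_values": torch.stack([e["pixel_values"] for e in examples]),
        "labels": torch.tensor([e["labels"] for e in examples]),
    }

model = AutoModelForImageClassification.from_pretrained(ckpt, num_labels=len(labels))
args = TrainingArguments("vit-beans", per_device_train_batch_size=16,
                         num_train_epochs=3, remove_unused_columns=False)
Trainer(model=model, args=args, data_collator=collate,
        train_dataset=ds["train"], eval_dataset=ds["validation"]).train()
```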
Use this skill when building production LLM applications, implementing guardrails, evaluating model outputs, or deciding between prompting and fine-tuning. Triggers on LLM app architecture, AI guardrails, output evaluation, model selection, embedding pipelines, vector databases, fine-tuning, function calling, tool use, and any task requiring production AI application design.
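For the guardrails piece, a minimal output-validation sketch assuming pydantic for schema checks; `call_model` is a hypothetical stand-in for whatever LLM client you use, and the schema and retry policy are illustrative:

```python
# Guardrail pattern: validate model output against a schema and retry
# with the validation error fed back, instead of trusting raw text.
import json
from pydantic import BaseModel, ValidationError

class Ticket(BaseModel):
    title: str
    severity: int  # e.g. 1-5

def call_model(prompt: str) -> str:
    raise NotImplementedError  # plug in your LLM client here

def get_ticket(prompt: str, max_retries: int = 2) -> Ticket:
    for _ in range(max_retries + 1):
        raw = call_model(prompt)
        try:
            return Ticket.model_validate(json.loads(raw))
        except (json.JSONDecodeError, ValidationError) as err:
            # Feed the error back so the model can self-correct.
            prompt = f"{prompt}\n\nYour last answer was invalid: {err}. Return only JSON."
    raise RuntimeError("model never produced valid output")
```

The same validate-and-retry loop generalizes to function-calling arguments and any other structured output.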
Build Next.js web applications with Google Gemini Nano Banana image generation APIs (gemini-2.5-flash-image, gemini-3-pro-image-preview). Use when creating image generators, editors, galleries, or any app integrating conversational image generation with server actions, API routes, and storage. Use for "image generation app", "nano banana", "text to image", "AI image generator", or "gemini image". Do NOT use for non-Gemini models, Python/Go backends, model fine-tuning, or image classification/input tasks.
Implement comprehensive image editing capabilities in Blazor applications using the Syncfusion Image Editor component. Use this skill when implementing image editing, annotations, transformations, cropping, filtering, zooming, and panning features. Supports annotations (text, shapes, freehand), transformations (crop, rotate, flip, resize), effects (filters, fine-tuning), toolbar customization, and keyboard shortcuts.
Guidance for recovering PyTorch model architectures from state dictionaries, retraining specific layers, and saving models in TorchScript format. This skill should be used when tasks involve reconstructing model architectures from saved weights, fine-tuning specific layers while freezing others, or converting models to TorchScript format.
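A minimal sketch of that workflow: inspect the saved state dict to infer layer shapes, rebuild a matching module, freeze all but the layer being retrained, then export with TorchScript (the file names and the inferred two-layer MLP are illustrative assumptions):

```python
# Inspect a state dict, rebuild a matching module, freeze all but the
# head, and save a TorchScript version. "model.pt" is a placeholder.
import torch
import torch.nn as nn

state = torch.load("model.pt", map_location="cpu")
for name, tensor in state.items():          # infer architecture from key/shape pairs
    print(name, tuple(tensor.shape))

# Suppose the shapes reveal a two-layer MLP: 784 -> 256 -> 10.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
model.load_state_dict(state, strict=False)  # strict=False skips keys that don't match

for p in model.parameters():                # freeze everything...
    p.requires_grad = False
for p in model[-1].parameters():            # ...then unfreeze the head for retraining
    p.requires_grad = True

scripted = torch.jit.script(model)          # TorchScript export
scripted.save("model_scripted.pt")
```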