Found 1,578 Skills
Financial time series analysis toolkit. Covers stocks, commodity futures, cryptocurrencies, ETFs, foreign exchange, and indices, spanning the full workflow from data acquisition to advanced analysis. Includes 70+ analysis methods across 8 method domains: time series testing, predictive modeling, cross-asset relationships, volatility and risk, portfolio optimization, regime detection, commodity-specific analysis, and network analysis. Prefers the Tushare MCP tool (A-shares/Hong Kong stocks/US stocks/futures/funds/macro) for data acquisition; yfinance scripts supplement assets Tushare does not cover, such as commodity futures (CL=F) and crypto (BTC-USD).
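A minimal sketch of the yfinance fallback path and one typical downstream computation (annualized rolling volatility from log returns). The yfinance call is shown only as a comment because it needs network access; the synthetic price series below is a stand-in for real data, not part of the skill itself:

```python
import numpy as np
import pandas as pd

# Fallback for assets Tushare does not cover, e.g.:
#   import yfinance as yf
#   prices = yf.download("CL=F", period="1y")["Close"]
# (network call; a synthetic random-walk series stands in so the sketch runs offline)
rng = np.random.default_rng(0)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 252))))

log_returns = np.log(prices).diff().dropna()                 # daily log returns
rolling_vol = log_returns.rolling(21).std() * np.sqrt(252)   # annualized 21-day volatility

print(rolling_vol.dropna().iloc[-1])
```

The same returns series would feed the other method domains (stationarity tests, GARCH-style volatility models, portfolio optimization, and so on).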
When the user wants to find patterns in what content works and what doesn't. Also use when the user mentions 'what's working,' 'content patterns,' 'best topics,' 'best format,' 'best time to post,' 'analyze my content,' 'do more of,' 'do less of,' or 'what should I change.' For raw metrics, see performance-analyzer-sms. For audience-specific analysis, see audience-growth-tracker-sms. For actionable recommendations, see optimization-advisor-sms.
When the user wants to analyze how their social media posts are performing. Also use when the user mentions 'analytics,' 'performance,' 'how did my posts do,' 'engagement,' 'impressions,' 'what's working,' 'post metrics,' 'my best posts,' or 'why isn't this post performing.' Uses BlackTwist analytics when available, works from user-provided data otherwise. For audience growth specifically, see audience-growth-tracker-sms. For pattern detection, see content-pattern-analyzer-sms. For actionable next steps, see optimization-advisor-sms.
Use when optimizing multi-factor systems with limited experimental budget, screening many variables to find the vital few, discovering interactions between parameters, mapping response surfaces for peak performance, validating robustness to noise factors, or when users mention factorial designs, A/B/n testing, parameter tuning, process optimization, or experimental efficiency.
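The core idea behind "screening many variables to find the vital few" can be sketched as a 2^k full factorial design with main-effect estimation. The factor names and the response function here are hypothetical illustrations, not part of the skill:

```python
from itertools import product

# 2^3 full factorial design over three coded factors (-1 / +1): 8 runs.
factors = ["temp", "pressure", "time"]
design = [dict(zip(factors, levels)) for levels in product([-1, 1], repeat=3)]

# Hypothetical response: strong temp effect, a temp*pressure interaction, no time effect.
def response(run):
    return 10 + 4 * run["temp"] + 1.5 * run["temp"] * run["pressure"]

ys = [response(run) for run in design]

# Main effect of a factor: mean response at +1 minus mean response at -1.
# Interactions average out, so "time" screens out as inert and "temp" as vital.
def main_effect(factor):
    hi = [y for run, y in zip(design, ys) if run[factor] == 1]
    lo = [y for run, y in zip(design, ys) if run[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

print({f: main_effect(f) for f in factors})
```

With only 8 runs this separates the one vital factor (effect 8.0) from the inert ones (effect 0.0), which is the experimental-efficiency argument in miniature.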
Optimize an e-commerce checkout flow to reduce cart abandonment. Covers friction analysis, payment-method optimization, trust signals, and checkout UX best practices.
Help with MongoDB query optimization and indexing. Use only when the user asks for optimization or performance: "How do I optimize this query?", "How do I index this?", "Why is this query slow?", "Can you fix my slow queries?", "What are the slow queries on my cluster?", etc. Do not invoke for general MongoDB query writing unless the user asks for performance or index help. Prefer indexing as the optimization strategy. Use the MongoDB MCP when available.
Techniques for optimizing Qdrant performance, including indexing strategies, query optimization, and hardware considerations. Use when you want to improve the speed and efficiency of a Qdrant deployment.
Analyze text content using both traditional NLP and LLM-enhanced methods. Extract sentiment, topics, keywords, and insights from various content types including social media posts, articles, reviews, and video content. Use when working with text analysis, sentiment detection, topic modeling, or content optimization.
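A minimal sketch of the "traditional NLP" half of this skill: lexicon-based sentiment plus frequency-based keyword extraction. The lexicon and stopword list are tiny illustrative stand-ins, not a real NLP resource, and the LLM-enhanced half is out of scope here:

```python
import re
from collections import Counter

# Illustrative stand-in lexicons (a real pipeline would use a curated resource).
POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"bad", "hate", "terrible"}
STOPWORDS = {"the", "a", "is", "this", "i", "it"}

def analyze(text):
    tokens = re.findall(r"[a-z']+", text.lower())
    # Sentiment: count of positive hits minus count of negative hits.
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    sentiment = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    # Keywords: most frequent non-stopword tokens.
    keywords = [w for w, _ in Counter(t for t in tokens if t not in STOPWORDS).most_common(3)]
    return {"sentiment": sentiment, "keywords": keywords}

result = analyze("I love this camera. The camera battery is great, the lens is excellent.")
print(result)
```

The same interface shape (text in, structured sentiment/topics/keywords out) applies whether the backend is a lexicon, a classical model, or an LLM.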
When the user wants to find blog keywords, do keyword research for SEO, or build a keyword list for content. Use when the user mentions "keyword research," "blog keywords," "find keywords," "what should I blog about," "keyword ideas," "long-tail keywords," "striking distance keywords," "keyword gap," "content gap analysis," "competitor keywords," "keyword difficulty," "search volume," "topic clusters," "pillar content keywords," "keyword list," or "what are people searching for." Outputs a ranked JSONL keyword list for downstream content creation. For writing content strategy, see content-strategy. For SEO audits, see seo-audit. For AI search optimization, see ai-seo.
Analyzes codebases to identify refactoring opportunities based on Martin Fowler's catalog of code smells and refactoring techniques. Detects duplicated code, high coupling, complex conditionals, primitive obsession, long functions, and other structural issues. Produces a structured refactoring report with prioritized findings saved to docs/_refacs/. Use when auditing code quality, preparing for a refactoring sprint, or reviewing architectural health. Don't use for style/formatting issues, performance optimization, or security audits.
Profile-driven performance optimization with behavior proofs. Use when the user mentions: optimize, slow, bottleneck, hotspot, profile, p95, latency, throughput, or algorithmic improvements.
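The workflow this skill names can be sketched as: profile first to confirm the hotspot, then rewrite, then prove the rewrite preserves behavior. The duplicate-detection functions below are hypothetical examples, not the skill's code:

```python
import cProfile
import io
import pstats

# Suspected hotspot: quadratic duplicate detection.
def has_dupes_slow(xs):
    return any(x == y for i, x in enumerate(xs) for y in xs[i + 1:])

def has_dupes_fast(xs):
    # O(n) rewrite using a set.
    return len(set(xs)) != len(xs)

data = list(range(2000)) + [0]

# Step 1: profile the slow path to confirm where the time actually goes.
pr = cProfile.Profile()
pr.enable()
slow = has_dupes_slow(data)
pr.disable()
buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(3)

# Step 2: behavior proof — the optimized version must agree with the original.
fast = has_dupes_fast(data)
assert slow == fast
print(slow)
```

Only after the profile confirms the hotspot and the equivalence check passes would a latency/p95 measurement be rerun.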
Autonomous LLM training optimization with GPU support. Runs 5-minute training experiments, measures val_bpb, keeps improvements or reverts — repeat forever. Use this skill when the user asks to "train a model autonomously", "optimize LLM training", "run ML experiments", "autoresearch with GPU", "optimize val_bpb", "autonomous ML training", "LLM pretraining loop", "setup ML autoresearch", "GPU training experiments", "pretrain from scratch", "speed up training", "lower my loss", "GPU optimization", "CUDA training", or mentions "train.py", "prepare.py", "bits per byte", "val_bpb", "NVIDIA GPU training", "RTX training", "H100 training", "autonomous model training", "consumer GPU training", "low VRAM training". Always use this skill when the user wants to autonomously optimize any ML training metric.
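The keep-improvements-or-revert loop described above can be sketched in miniature. Here `run_experiment` is a simulated stand-in for a 5-minute training run (the real skill invokes train.py and measures val_bpb); the config knob and metric shape are assumptions for illustration:

```python
import random

random.seed(0)

# Stand-in for a 5-minute training run returning val_bpb (lower is better).
# Pretend val_bpb improves as the learning rate approaches 3e-4.
def run_experiment(config):
    return 1.0 + abs(config["lr"] - 3e-4) * 1000 + random.uniform(0, 0.01)

best_config = {"lr": 1e-3}
best_bpb = run_experiment(best_config)
initial_bpb = best_bpb

for _ in range(20):
    candidate = dict(best_config)
    candidate["lr"] *= random.choice([0.5, 0.8, 1.25, 2.0])  # mutate one knob
    bpb = run_experiment(candidate)
    if bpb < best_bpb:
        best_config, best_bpb = candidate, bpb  # improvement: keep
    # otherwise: revert (best_config stays unchanged)

print(best_config, best_bpb)
```

The monotonicity guarantee is the point: because candidates are only kept when the metric strictly improves, val_bpb never regresses across the "repeat forever" loop.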