Total: 30,744 skills; the Data Processing category has 1,471 skills.
Showing 12 of 1,471 skills.
Analyze the risk of a "fiscal trap" arising from the interaction of population aging, debt dynamics, bureaucratic expansion, and inflation erosion; quantify the fiscal vulnerability of individual countries and regions; and identify potential currency-dilution paths.
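The skill description does not spell out its model; one standard ingredient of this kind of debt-dynamics analysis is the law of motion of the debt-to-GDP ratio,

$$
b_{t+1} = \frac{1+i_t}{(1+g_t)(1+\pi_t)}\, b_t - pb_{t+1},
$$

where $b_t$ is debt-to-GDP, $i_t$ the effective nominal interest rate on debt, $g_t$ real GDP growth, $\pi_t$ inflation, and $pb_t$ the primary balance as a share of GDP. When the interest rate persistently exceeds nominal growth while aging-related spending keeps the primary balance negative, the ratio compounds upward (the "fiscal trap"); unexpected inflation shrinks $b_t$ mechanically, which is one of the currency-dilution paths the skill is meant to surface.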
Condense the lithium industry chain (mining → refined chemicals → batteries and end demand) into a set of computable proxy indicators, then map those indicators to the constituent exposures and long-term price trends of lithium-themed ETFs (such as LIT) as a basis for decision-making.
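As an illustration only (the proxy columns, weights, and prices below are synthetic placeholders, not the skill's actual indicator set), a composite chain proxy could be compared against LIT roughly as follows:

```python
# Illustration only: all series are synthetic; real inputs would come from
# mining, chemical, and battery-demand data plus LIT market prices.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
months = pd.date_range("2020-01-01", periods=48, freq="MS")

# Hypothetical monthly proxies for each stage of the lithium chain.
chain = pd.DataFrame({
    "spodumene_price": rng.normal(0, 1, 48).cumsum(),    # mining
    "carbonate_price": rng.normal(0, 1, 48).cumsum(),    # refined chemicals
    "ev_battery_output": rng.normal(0, 1, 48).cumsum(),  # batteries / end demand
}, index=months)

# Composite proxy: equal-weighted z-scores (equal weights are an assumption).
composite = chain.apply(lambda s: (s - s.mean()) / s.std()).mean(axis=1)

# Hypothetical LIT monthly closes; in practice these come from market data.
lit_close = pd.Series(rng.normal(0, 1, 48).cumsum() + 40.0, index=months)

# Compare trends via a rolling 12-month correlation of monthly changes.
rolling_corr = composite.diff().rolling(12).corr(lit_close.pct_change())
print(rolling_corr.dropna().tail())
```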
Use palladium's leading trend reversals as a confirmation condition: verify whether silver's short-term price moves are supported by both industrial and risk sentiment, and flag failed trends that lack palladium participation.
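One hedged illustration of such a confirmation rule (the windows, thresholds, and price series below are assumptions, not values the skill specifies):

```python
# Hedged sketch of a palladium-confirmation rule over synthetic prices.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
days = pd.date_range("2024-01-01", periods=250, freq="B")
silver = pd.Series(rng.normal(0, 0.3, 250).cumsum() + 25.0, index=days)
palladium = pd.Series(rng.normal(0, 8.0, 250).cumsum() + 1000.0, index=days)

# Assumed proxy for a palladium trend reversal: 20-day momentum turning positive.
pd_momentum = palladium.diff(20)
pd_reversal_up = (pd_momentum > 0) & (pd_momentum.shift(1) <= 0)

# Assumed definition of a silver short-term up-move: 5-day return above 2%.
silver_up = silver.pct_change(5) > 0.02

# Confirmed if a palladium upside reversal occurred within the prior 10 sessions;
# otherwise the silver move is marked as lacking palladium participation.
recent_reversal = pd_reversal_up.astype(float).rolling(10, min_periods=1).max() > 0
confirmed = silver_up & recent_reversal
unconfirmed = silver_up & ~recent_reversal
print(f"confirmed moves: {int(confirmed.sum())}, unconfirmed: {int(unconfirmed.sum())}")
```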
Automatically crawl mining companies' financial statements and operational disclosures (production volumes, costs, capital expenditures) from the web, back out the fundamental drivers and interval thresholds (e.g., 1.2/1.7) of the "mining stock / metal price" ratio, and output a reproducible valuation decomposition (cost factor / leverage factor / multiple factor / dilution factor).
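The skill's exact factor definitions are not reproduced here, so the following sketch only illustrates the general idea: derive an implied miner value from stylized fundamentals, form the stock/metal ratio, and attribute its change to one driver at a time (the 1.2/1.7 bands would apply to whatever normalized version of the ratio the skill defines).

```python
# Illustration only: the value model, figures, and attribution below are invented
# stand-ins for the skill's cost / leverage / multiple / dilution factors.
def implied_value_per_share(metal_price, unit_cost, production, multiple, shares):
    """Stylized miner value per share: unit margin x output x valuation multiple."""
    return (metal_price - unit_cost) * production * multiple / shares

# Two snapshots of a hypothetical producer (all figures invented).
t0 = dict(metal_price=1900.0, unit_cost=1200.0, production=1.0e6, multiple=8.0, shares=100e6)
t1 = dict(metal_price=2000.0, unit_cost=1300.0, production=1.0e6, multiple=7.0, shares=110e6)

ratio0 = implied_value_per_share(**t0) / t0["metal_price"]
ratio1 = implied_value_per_share(**t1) / t1["metal_price"]
print(f"stock/metal ratio: {ratio0:.3f} -> {ratio1:.3f}")

# One-at-a-time attribution: move each driver from its t0 to its t1 value while
# holding the rest fixed, and record the resulting change in the ratio.
for driver in ("unit_cost", "metal_price", "multiple", "shares"):
    scenario = dict(t0, **{driver: t1[driver]})
    delta = implied_value_per_share(**scenario) / scenario["metal_price"] - ratio0
    print(f"{driver:>12}: {delta:+.4f}")
```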
Create and configure Databricks Asset Bundles (DABs) with best practices for multi-environment deployments. Use when: (1) creating new DAB projects, (2) adding resources (dashboards, pipelines, jobs, alerts), (3) configuring multi-environment deployments, (4) setting up permissions, or (5) deploying or running bundle resources.
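For orientation, a minimal sketch of such a bundle configuration, expressed here as a Python dict serialized to databricks.yml: the key names follow the DAB configuration schema, while the bundle name, workspace hosts, job, and notebook path are placeholders; check the result with `databricks bundle validate`.

```python
# Sketch: generate a minimal databricks.yml with a dev and a prod target.
# All names, hosts, and paths are placeholders.
import yaml

bundle_config = {
    "bundle": {"name": "example_bundle"},
    "targets": {
        "dev": {
            "mode": "development",
            "default": True,
            "workspace": {"host": "https://adb-0000000000000000.0.azuredatabricks.net"},
        },
        "prod": {
            "mode": "production",
            "workspace": {"host": "https://adb-1111111111111111.1.azuredatabricks.net"},
        },
    },
    "resources": {
        "jobs": {
            "nightly_etl": {
                "name": "nightly_etl",
                "tasks": [
                    {
                        "task_key": "ingest",
                        "notebook_task": {"notebook_path": "./src/ingest.ipynb"},
                    }
                ],
            }
        }
    },
}

with open("databricks.yml", "w") as f:
    yaml.safe_dump(bundle_config, f, sort_keys=False)
```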
Extract text and data from PDF documents
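A minimal sketch using the pypdf library (one of several possible PDF toolkits; the file path is a placeholder):

```python
# Extract and preview text page by page; "report.pdf" is a placeholder path.
from pypdf import PdfReader

reader = PdfReader("report.pdf")
for page_number, page in enumerate(reader.pages, start=1):
    text = page.extract_text() or ""
    print(f"--- page {page_number} ---")
    print(text[:500])  # preview the first 500 characters of each page
```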
Databricks Job activity and 2025 Azure Data Factory connectors
Write and debug TypeQL queries for TypeDB 3.8+. Use when working with TypeDB schemas, data queries, insertions, deletions, or functions. Covers schema definition, CRUD operations, pattern matching, aggregations, and common pitfalls.
Databricks documentation reference. Use as a lookup resource alongside other skills and MCP tools for comprehensive guidance.
Use this skill when creating database schemas or tables for Timescale, TimescaleDB, TigerData, or Tiger Cloud, especially for time-series, IoT, metrics, events, or log data. Use this to improve the performance of any insert-heavy table.

**Trigger when user asks to:**
- Create or design SQL schemas/tables AND Timescale/TimescaleDB/TigerData/Tiger Cloud is available
- Set up hypertables, compression, retention policies, or continuous aggregates
- Configure partition columns, segment_by, order_by, or chunk intervals
- Optimize time-series database performance or storage
- Create tables for sensors, metrics, telemetry, events, or transaction logs

**Keywords:** CREATE TABLE, hypertable, Timescale, TimescaleDB, time-series, IoT, metrics, sensor data, compression policy, continuous aggregates, columnstore, retention policy, chunk interval, segment_by, order_by

Step-by-step instructions for hypertable creation, column selection, compression policies, retention, continuous aggregates, and indexes.
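A minimal sketch of the pattern this skill covers, assuming a reachable TimescaleDB instance with the timescaledb extension installed (the DSN, table, and intervals are placeholders):

```python
# Create a hypertable for sensor data, then add compression and retention policies.
import psycopg2

ddl = """
CREATE TABLE IF NOT EXISTS sensor_readings (
    time        TIMESTAMPTZ NOT NULL,
    device_id   TEXT        NOT NULL,
    temperature DOUBLE PRECISION,
    humidity    DOUBLE PRECISION
);
SELECT create_hypertable('sensor_readings', 'time', if_not_exists => TRUE);

-- Compress old chunks, segmenting by device and ordering by time descending.
ALTER TABLE sensor_readings SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id',
    timescaledb.compress_orderby   = 'time DESC'
);
SELECT add_compression_policy('sensor_readings', INTERVAL '7 days', if_not_exists => TRUE);

-- Drop raw chunks older than 90 days.
SELECT add_retention_policy('sensor_readings', INTERVAL '90 days', if_not_exists => TRUE);
"""

with psycopg2.connect("postgresql://user:password@localhost:5432/tsdb") as conn:
    with conn.cursor() as cur:
        cur.execute(ddl)
```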
Patterns and best practices for using Lakebase Provisioned (Databricks managed PostgreSQL) for OLTP workloads.
Use this skill when writing PostGIS queries or working with spatial functions, geography types, raster operations, topology, or any PostGIS 3.5+ features (2024-2026).
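A minimal sketch of the kind of query this covers, assuming a PostGIS-enabled database with a hypothetical points_of_interest table carrying a geography column (DSN and coordinates are placeholders):

```python
# Find points of interest within 2 km of a location; the geography type
# makes ST_DWithin and ST_Distance work in metres.
import psycopg2

query = """
SELECT name,
       ST_Distance(geog, ST_MakePoint(%s, %s)::geography) AS metres
FROM points_of_interest
WHERE ST_DWithin(geog, ST_MakePoint(%s, %s)::geography, 2000)
ORDER BY metres;
"""

lon, lat = 2.3522, 48.8566  # placeholder coordinates (Paris)
with psycopg2.connect("postgresql://user:password@localhost:5432/gis") as conn:
    with conn.cursor() as cur:
        cur.execute(query, (lon, lat, lon, lat))
        for name, metres in cur.fetchall():
            print(f"{name}: {metres:.0f} m")
```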