get-available-resources
Get Available Resources
Overview
Detect available computational resources and generate strategic recommendations for scientific computing tasks. This skill automatically identifies CPU capabilities, GPU availability (NVIDIA CUDA, AMD ROCm, Apple Silicon Metal), memory constraints, and disk space to help make informed decisions about computational approaches.
When to Use This Skill
Use this skill proactively before any computationally intensive task:
- Before data analysis: Determine if datasets can be loaded into memory or require out-of-core processing
- Before model training: Check if GPU acceleration is available and which backend to use
- Before parallel processing: Identify optimal number of workers for joblib, multiprocessing, or Dask
- Before large file operations: Verify sufficient disk space and appropriate storage strategies
- At project initialization: Understand baseline capabilities for making architectural decisions
Example scenarios:
- "Help me analyze this 50GB genomics dataset" → Use this skill first to determine if Dask/Zarr are needed
- "Train a neural network on this data" → Use this skill to detect available GPUs and backends
- "Process 10,000 files in parallel" → Use this skill to determine optimal worker count
- "Run a computationally intensive simulation" → Use this skill to understand resource constraints
How This Skill Works
Resource Detection
The skill runs `scripts/detect_resources.py` to automatically detect:
- CPU Information
  - Physical and logical core counts
  - Processor architecture and model
  - CPU frequency information
- GPU Information
  - NVIDIA GPUs: Detects via nvidia-smi, reports VRAM, driver version, compute capability
  - AMD GPUs: Detects via rocm-smi
  - Apple Silicon: Detects M1/M2/M3/M4 chips with Metal support and unified memory
- Memory Information
  - Total and available RAM
  - Current memory usage percentage
  - Swap space availability
- Disk Space Information
  - Total and available disk space for the working directory
  - Current usage percentage
- Operating System Information
  - OS type (macOS, Linux, Windows)
  - OS version and release
  - Python version
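The detection script itself is not reproduced here, but the core of this kind of probing can be sketched with the standard library alone. The real script additionally uses psutil for memory statistics and vendor tools such as nvidia-smi for GPUs; the function name and exact field layout below are illustrative:

```python
import json
import os
import platform
import shutil

def detect_resources(path="."):
    """Minimal sketch of resource detection using only the standard library.

    The actual script also reports memory (via psutil) and GPUs
    (via nvidia-smi / rocm-smi / system_profiler).
    """
    disk = shutil.disk_usage(path)
    return {
        "os": {
            "system": platform.system(),
            "release": platform.release(),
            "machine": platform.machine(),
        },
        "cpu": {
            # os.cpu_count() reports logical cores; physical core counts
            # require psutil.cpu_count(logical=False)
            "logical_cores": os.cpu_count(),
            "architecture": platform.machine(),
        },
        "disk": {
            "total_gb": round(disk.total / 1e9, 1),
            "available_gb": round(disk.free / 1e9, 1),
        },
    }

print(json.dumps(detect_resources(), indent=2))
```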
Output Format
The skill generates a `.claude_resources.json` file in the current working directory containing:

```json
{
  "timestamp": "2025-10-23T10:30:00",
  "os": {
    "system": "Darwin",
    "release": "25.0.0",
    "machine": "arm64"
  },
  "cpu": {
    "physical_cores": 8,
    "logical_cores": 8,
    "architecture": "arm64"
  },
  "memory": {
    "total_gb": 16.0,
    "available_gb": 8.5,
    "percent_used": 46.9
  },
  "disk": {
    "total_gb": 500.0,
    "available_gb": 200.0,
    "percent_used": 60.0
  },
  "gpu": {
    "nvidia_gpus": [],
    "amd_gpus": [],
    "apple_silicon": {
      "name": "Apple M2",
      "type": "Apple Silicon",
      "backend": "Metal",
      "unified_memory": true
    },
    "total_gpus": 1,
    "available_backends": ["Metal"]
  },
  "recommendations": {
    "parallel_processing": {
      "strategy": "high_parallelism",
      "suggested_workers": 6,
      "libraries": ["joblib", "multiprocessing", "dask"]
    },
    "memory_strategy": {
      "strategy": "moderate_memory",
      "libraries": ["dask", "zarr"],
      "note": "Consider chunking for datasets > 2GB"
    },
    "gpu_acceleration": {
      "available": true,
      "backends": ["Metal"],
      "suggested_libraries": ["pytorch-mps", "tensorflow-metal", "jax-metal"]
    },
    "large_data_handling": {
      "strategy": "disk_abundant",
      "note": "Sufficient space for large intermediate files"
    }
  }
}
```
Strategic Recommendations
The skill generates context-aware recommendations:
Parallel Processing Recommendations:
- High parallelism (8+ cores): Use Dask, joblib, or multiprocessing with workers = cores - 2
- Moderate parallelism (4-7 cores): Use joblib or multiprocessing with workers = cores - 1
- Sequential (< 4 cores): Prefer sequential processing to avoid overhead
Memory Strategy Recommendations:
- Memory constrained (< 4GB available): Use Zarr, Dask, or H5py for out-of-core processing
- Moderate memory (4-16GB available): Use Dask/Zarr for datasets > 2GB
- Memory abundant (> 16GB available): Can load most datasets into memory directly
GPU Acceleration Recommendations:
- NVIDIA GPUs detected: Use PyTorch, TensorFlow, JAX, CuPy, or RAPIDS
- AMD GPUs detected: Use PyTorch-ROCm or TensorFlow-ROCm
- Apple Silicon detected: Use PyTorch with MPS backend, TensorFlow-Metal, or JAX-Metal
- No GPU detected: Use CPU-optimized libraries
Large Data Handling Recommendations:
- Disk constrained (< 10GB): Use streaming or compression strategies
- Moderate disk (10-100GB): Use Zarr, H5py, or Parquet formats
- Disk abundant (> 100GB): Can create large intermediate files freely
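These threshold rules are simple enough to encode directly. A minimal sketch of the cutoffs listed above, assuming the strategy names from the sample output (the function signature is illustrative, not the script's actual internals):

```python
def recommend(cores: int, available_gb: float, disk_gb: float) -> dict:
    """Sketch of the threshold-based recommendation rules above."""
    # Parallel processing: leave headroom for the OS and main process
    if cores >= 8:
        parallel = {"strategy": "high_parallelism", "suggested_workers": cores - 2}
    elif cores >= 4:
        parallel = {"strategy": "moderate_parallelism", "suggested_workers": cores - 1}
    else:
        parallel = {"strategy": "sequential", "suggested_workers": 1}

    # Memory: decide between in-memory and out-of-core processing
    if available_gb < 4:
        memory = "memory_constrained"
    elif available_gb <= 16:
        memory = "moderate_memory"
    else:
        memory = "memory_abundant"

    # Disk: decide how freely intermediate files can be created
    if disk_gb < 10:
        disk = "disk_constrained"
    elif disk_gb <= 100:
        disk = "moderate_disk"
    else:
        disk = "disk_abundant"

    return {
        "parallel_processing": parallel,
        "memory_strategy": memory,
        "large_data_handling": disk,
    }
```

With the sample machine above (8 cores, 8.5 GB available, 200 GB free disk), this yields 6 workers, a moderate-memory strategy, and abundant disk, matching the example output.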
Usage Instructions
Step 1: Run Resource Detection
Execute the detection script at the start of any computationally intensive task:

```bash
python scripts/detect_resources.py
```

Optional arguments:
- `-o, --output <path>`: Specify a custom output path (default: `.claude_resources.json`)
- `-v, --verbose`: Print full resource information to stdout
Step 2: Read and Apply Recommendations
After running detection, read the generated `.claude_resources.json` file to inform computational decisions:

```python
# Example: Use recommendations in code
import json

with open('.claude_resources.json', 'r') as f:
    resources = json.load(f)

# Check parallel processing strategy
if resources['recommendations']['parallel_processing']['strategy'] == 'high_parallelism':
    n_jobs = resources['recommendations']['parallel_processing']['suggested_workers']
    # Use joblib, Dask, or multiprocessing with n_jobs workers

# Check memory strategy
if resources['recommendations']['memory_strategy']['strategy'] == 'memory_constrained':
    # Use Dask, Zarr, or H5py for out-of-core processing
    import dask.array as da
    # Load data in chunks

# Check GPU availability
if resources['recommendations']['gpu_acceleration']['available']:
    backends = resources['recommendations']['gpu_acceleration']['backends']
    # Use the appropriate GPU library based on the available backend
```

Step 3: Make Informed Decisions
Use the resource information and recommendations to make strategic choices:

For data loading:

```python
memory_available_gb = resources['memory']['available_gb']
dataset_size_gb = 10

if dataset_size_gb > memory_available_gb * 0.5:
    # Dataset is large relative to memory, use Dask
    import dask.dataframe as dd
    df = dd.read_csv('large_file.csv')
else:
    # Dataset fits in memory, use pandas
    import pandas as pd
    df = pd.read_csv('large_file.csv')
```

For parallel processing:

```python
from joblib import Parallel, delayed

n_jobs = resources['recommendations']['parallel_processing'].get('suggested_workers', 1)
results = Parallel(n_jobs=n_jobs)(
    delayed(process_function)(item) for item in data
)
```

For GPU acceleration:

```python
import torch

if 'CUDA' in resources['gpu']['available_backends']:
    device = torch.device('cuda')
elif 'Metal' in resources['gpu']['available_backends']:
    device = torch.device('mps')
else:
    device = torch.device('cpu')

model = model.to(device)
```
Dependencies
The detection script requires the following Python package:

```bash
uv pip install psutil
```

All other functionality uses Python standard library modules (json, os, platform, subprocess, sys, pathlib).
Platform Support
- macOS: Full support including Apple Silicon (M1/M2/M3/M4) GPU detection
- Linux: Full support including NVIDIA (nvidia-smi) and AMD (rocm-smi) GPU detection
- Windows: Full support including NVIDIA GPU detection
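On Linux and Windows, NVIDIA detection of this kind shells out to nvidia-smi. A sketch of one way to probe it, degrading gracefully when the utility is absent or fails (the function name is illustrative, not necessarily the script's):

```python
import shutil
import subprocess

def detect_nvidia_gpus() -> list[str]:
    """Return GPU names reported by nvidia-smi, or [] if unavailable."""
    # If the driver utilities are not installed or not on PATH,
    # report no NVIDIA GPUs rather than raising.
    if shutil.which("nvidia-smi") is None:
        return []
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, timeout=10, check=True,
        )
    except (subprocess.SubprocessError, OSError):
        return []
    return [line.strip() for line in out.stdout.splitlines() if line.strip()]
```

On a machine without an NVIDIA GPU this simply returns an empty list, which matches the `"nvidia_gpus": []` field in the sample output above.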
Best Practices
- Run early: Execute resource detection at the start of projects or before major computational tasks
- Re-run periodically: System resources change over time (memory usage, disk space)
- Check before scaling: Verify resources before scaling up parallel workers or data sizes
- Document decisions: Keep the `.claude_resources.json` file in project directories to document resource-aware decisions
- Use with versioning: Different machines have different capabilities; resource files help maintain portability
Troubleshooting
GPU not detected:
- Ensure GPU drivers are installed (nvidia-smi, rocm-smi, or system_profiler for Apple Silicon)
- Check that GPU utilities are in system PATH
- Verify GPU is not in use by other processes
Script execution fails:
- Ensure psutil is installed: `uv pip install psutil`
- Check Python version compatibility (Python 3.6+)
- Verify the script has execute permissions: `chmod +x scripts/detect_resources.py`
Inaccurate memory readings:
- Memory readings are snapshots; actual available memory changes constantly
- Close other applications before detection for accurate "available" memory
- Consider running detection multiple times and averaging results
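The averaging suggestion can be wrapped in a small helper. A sketch, where `sample_fn` stands in for whatever fluctuating reading you want to smooth (for example a `psutil.virtual_memory().available` callback; the helper name is illustrative):

```python
import statistics
import time

def averaged_reading(sample_fn, n=5, interval_s=0.0):
    """Average n snapshots of a fluctuating reading, e.g. available memory.

    sample_fn is any zero-argument callable returning a number;
    interval_s spaces out the samples to capture variation over time.
    """
    samples = []
    for _ in range(n):
        samples.append(sample_fn())
        time.sleep(interval_s)
    return statistics.mean(samples)
```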