
OS Use - Cross-Platform OS Automation


A comprehensive cross-platform toolkit for OS automation, screenshot capture, visual recognition, mouse/keyboard control, and window management. Supports macOS 12+ and Windows 10+.

Platform Support Matrix


| Feature | macOS Implementation | Windows Implementation |
|---|---|---|
| Screenshot | `pyautogui` + `PIL` | `pyautogui` + `PIL` |
| Visual Recognition | `opencv-python` + `pyautogui` | `opencv-python` + `pyautogui` |
| Mouse/Keyboard | `pyautogui` | `pyautogui` |
| Window Management | `AppleScript` (native) | `pywinauto` / `pygetwindow` |
| Application Control | `AppleScript` / `subprocess` | `subprocess` / `pywinauto` |
| Browser Automation | Chrome DevTools MCP | Chrome DevTools MCP |

Capabilities


1. Screenshot Capture 📸


Universal (macOS & Windows):
  • Full screen capture
  • Region capture (specified coordinates)
  • Window capture (specific application window)
  • Clipboard screenshot access

Implementation: `pyautogui.screenshot()` + `PIL.Image`
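
The capture options above can be sketched as follows; the `center_region` helper and the output file names are illustrative, not part of the toolkit:

```python
def center_region(screen_w, screen_h, width, height):
    """(left, top, width, height) box centered on the screen."""
    return ((screen_w - width) // 2, (screen_h - height) // 2, width, height)

def capture_examples():
    """Run from a desktop session; saves two PNG files."""
    import pyautogui  # needs the Screen Recording permission on macOS

    # Full-screen capture
    pyautogui.screenshot().save("full_screen.png")

    # Region capture: a 400x300 box centered on the screen
    w, h = pyautogui.size()
    pyautogui.screenshot(region=center_region(w, h, 400, 300)).save("center_region.png")
```

`capture_examples()` must run in a GUI session; the region math itself is pure and testable anywhere.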

2. Visual Recognition 👁️


Universal (macOS & Windows):
  • Image matching/locating on screen
  • Template matching with confidence threshold
  • Multi-scale matching (handles different resolutions)
  • Color detection and region extraction

Optional OCR:
  • Text recognition from screenshots (requires `pytesseract` plus the Tesseract OCR engine)

Implementation: `opencv-python` + `pyautogui.locateOnScreen()`
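
A minimal sketch of confidence-thresholded template matching with OpenCV, assuming a prepared template file such as `button.png` (the helper names are illustrative):

```python
def match_center(top_left, template_w, template_h):
    """Center of a matched template given its top-left corner and size."""
    return (top_left[0] + template_w // 2, top_left[1] + template_h // 2)

def find_on_screen(template_path, confidence=0.8):
    """Locate a template image on the current screen; returns a center point or None."""
    import cv2
    import numpy as np
    import pyautogui

    screen = cv2.cvtColor(np.array(pyautogui.screenshot()), cv2.COLOR_RGB2BGR)
    template = cv2.imread(template_path)
    if template is None:
        return None

    result = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < confidence:
        return None
    h, w = template.shape[:2]
    return match_center(max_loc, w, h)
```

This is the same idea `pyautogui.locateOnScreen()` implements; doing it manually lets you inspect the raw match score.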

3. Mouse & Keyboard Control 🖱️⌨️


Universal (macOS & Windows):
  • Mouse movement (absolute and relative coordinates)
  • Mouse clicking (left, right, middle, double-click)
  • Mouse dragging and dropping
  • Scroll wheel operations
  • Keyboard text input
  • Keyboard shortcuts and hotkeys
  • Special key combinations

Implementation: `pyautogui`
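
A short sketch of the input primitives above; the coordinates and the dragged path are arbitrary examples:

```python
def drag_path(start, end, steps=10):
    """Evenly spaced intermediate points for a smooth drag from start to end."""
    (x0, y0), (x1, y1) = start, end
    return [(x0 + (x1 - x0) * i // steps, y0 + (y1 - y0) * i // steps)
            for i in range(1, steps + 1)]

def demo_input():
    """Run from a desktop session; moves the mouse and types text."""
    import pyautogui

    pyautogui.moveTo(100, 100, duration=0.3)    # absolute move
    pyautogui.move(50, 0)                        # relative move
    pyautogui.doubleClick()
    pyautogui.scroll(-3)                         # scroll down three "clicks"

    # Drag along a straight path with the left button held down
    pyautogui.mouseDown(200, 200)
    for x, y in drag_path((200, 200), (400, 300)):
        pyautogui.moveTo(x, y)
    pyautogui.mouseUp()

    pyautogui.typewrite("hello", interval=0.05)  # keyboard text input
    pyautogui.hotkey("ctrl", "c")                # shortcut / hotkey
```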

4. Window Management 🪟


macOS Implementation:
  • List all application windows
  • Get window position, size, title
  • Activate/minimize/close windows
  • Move and resize windows
  • Launch/quit applications

Implementation: `AppleScript` via `subprocess`

Windows Implementation:
  • Same capabilities as macOS
  • Additional: get window handles (HWND) and process information
  • Tighter integration with the Windows window manager

Implementation: `pywinauto` or `pygetwindow`
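
The AppleScript-via-subprocess approach can be sketched like this; `activate_script` and `run_applescript` are illustrative helpers, and `osascript` is the standard macOS AppleScript interpreter:

```python
import subprocess

def activate_script(app_name):
    """AppleScript source that brings an application to the front."""
    return f'tell application "{app_name}" to activate'

def run_applescript(source):
    """Execute AppleScript via osascript and return its stdout (macOS only)."""
    result = subprocess.run(
        ["osascript", "-e", source],
        capture_output=True, text=True, check=True
    )
    return result.stdout.strip()
```

For example, `run_applescript(activate_script("Safari"))` brings Safari to the front; on Windows the equivalent is `pygetwindow.getWindowsWithTitle(...)[0].activate()`.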

5. Browser Automation 🌐


Universal (macOS & Windows):
  • Webpage screenshots
  • Element screenshots
  • Page navigation
  • Form filling and clicking
  • Network monitoring
  • Performance analysis
Implementation: Chrome DevTools MCP (separate tool)

6. System Integration 🔧


Clipboard Operations:
  • Read/write clipboard content
  • Supports images and text

Implementation: `pyperclip` + `pyautogui`
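
A minimal sketch of clipboard round-tripping with `pyperclip` (the preview helper is illustrative):

```python
def clip_preview(text, limit=40):
    """Short preview of clipboard text for logs."""
    return text if len(text) <= limit else text[:limit - 1] + "…"

def roundtrip_clipboard(message):
    """Write text to the clipboard and read it back."""
    import pyperclip

    pyperclip.copy(message)       # write
    content = pyperclip.paste()   # read
    print("Clipboard:", clip_preview(content))
    return content
```

Image clipboard access is not covered by `pyperclip`; it is text-only, which is why `pyautogui`/`PIL` are listed alongside it.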

Technical Implementation Details


Python Environment Setup


```bash
# Create virtual environment
python3 -m venv ~/.nanobot/workspace/macos-automation/.venv

# Activate
source ~/.nanobot/workspace/macos-automation/.venv/bin/activate

# Install dependencies
pip install pyautogui opencv-python-headless numpy Pillow pyperclip

# macOS specific
# (AppleScript is built-in, no installation needed)

# Windows specific
pip install pywinauto pygetwindow
```

Key Libraries Reference


| Library | Version | Purpose |
|---|---|---|
| `pyautogui` | 0.9.54+ | Screenshot, mouse/keyboard control |
| `opencv-python-headless` | 4.11.0.84+ | Image recognition, computer vision |
| `numpy` | 2.4.2+ | Numerical operations for OpenCV |
| `Pillow` | 12.1.1+ | Image processing |
| `pyperclip` | Latest | Clipboard operations |
| `pywinauto` | Latest | Windows window management |
| `pygetwindow` | Latest | Cross-platform window control |

Platform-Specific Notes


macOS Specifics


Permissions Required:
  • Accessibility: System Settings > Privacy & Security > Accessibility
  • Screen Recording: System Settings > Privacy & Security > Screen Recording
AppleScript Quirks:
  • Some modern apps (e.g., Chrome) may have limited AppleScript support
  • Window titles may be truncated or localized
  • Some operations require app to be frontmost
Coordinate System:
  • Origin (0, 0) at top-left
  • Retina displays: pyautogui automatically handles scaling
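
If you do your own template matching on a raw screenshot, note that on Retina panels the screenshot comes back in physical pixels while click coordinates are logical points. A sketch of converting between the two; the helpers are illustrative, and the 2x factor is typical for Retina rather than guaranteed:

```python
def scale_factor(logical_size, pixel_size):
    """Physical pixels per logical point (2.0 on most Retina displays)."""
    return pixel_size[0] / logical_size[0]

def pixels_to_points(x, y, factor):
    """Convert raw-screenshot pixel coordinates to clickable logical coordinates."""
    return (x / factor, y / factor)

def report_scaling():
    """Run on macOS to compare the logical size with the screenshot pixel size."""
    import pyautogui

    logical = pyautogui.size()     # logical points
    shot = pyautogui.screenshot()  # physical pixels on Retina
    factor = scale_factor(logical, shot.size)
    print(f"Logical {logical}, pixels {shot.size}, scale {factor:g}")
    return factor
```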

Windows Specifics


Administrator Privileges:
  • Some operations (e.g., interacting with elevated windows) may require admin rights

High DPI Displays:
  • Windows scaling may affect coordinate accuracy
  • Use `pyautogui.size()` to get actual screen dimensions

Window Handle (HWND):
  • Windows provides low-level window handles for precise control
  • `pywinauto` provides both high-level and low-level access

Error Handling Patterns


```python
import pyautogui
import time
```

Pattern 1: Retry with backoff

```python
def retry_with_backoff(func, max_retries=3, base_delay=1):
    for i in range(max_retries):
        try:
            return func()
        except Exception as e:
            if i == max_retries - 1:
                raise
            delay = base_delay * (2 ** i)
            print(f"Retry {i+1}/{max_retries} after {delay}s: {e}")
            time.sleep(delay)
```

Pattern 2: Safe operations with fallback

```python
def safe_screenshot(output_path):
    try:
        screenshot = pyautogui.screenshot()
        screenshot.save(output_path)
        return output_path
    except Exception as e:
        print(f"Screenshot failed: {e}")
        return None
```

Pattern 3: Coordinate boundary checking

```python
def safe_click(x, y, max_x=None, max_y=None):
    """Click safely, clamping coordinates to the screen bounds."""
    if max_x is None or max_y is None:
        max_x, max_y = pyautogui.size()
    x = max(0, min(x, max_x - 1))
    y = max(0, min(y, max_y - 1))
    pyautogui.click(x, y)
```

Usage Examples by Scenario


Scenario 1: Automated Testing


```python
"""
Automated UI test example
Tests a hypothetical login page
"""
import pyautogui
import time

def test_login_flow():
    # 1. Capture the initial state
    initial_screenshot = pyautogui.screenshot()
    initial_screenshot.save("test_01_initial.png")

    # 2. Find and click the login button
    button_location = pyautogui.locateOnScreen(
        "login_button.png",
        confidence=0.9
    )
    if button_location:
        center = pyautogui.center(button_location)
        pyautogui.click(center.x, center.y)
        time.sleep(1)

    # 3. Enter the username
    pyautogui.typewrite("testuser@example.com", interval=0.01)
    pyautogui.press('tab')

    # 4. Enter the password
    pyautogui.typewrite("TestPassword123", interval=0.01)

    # 5. Submit
    pyautogui.press('return')
    time.sleep(2)

    # 6. Verify the result
    result_screenshot = pyautogui.screenshot()
    result_screenshot.save("test_02_result.png")

    # Check for a success message
    success_indicator = pyautogui.locateOnScreen(
        "success_message.png",
        confidence=0.8
    )

    if success_indicator:
        print("✅ Test passed: login successful")
        return True
    else:
        print("❌ Test failed: success message not found")
        return False
```

Run the test:

```python
if __name__ == "__main__":
    test_login_flow()
```

Scenario 2: Data Entry Automation


```python
"""
Data-entry automation example
Automatically fills a web form from Excel data
"""
import pyautogui
import pandas as pd
import time

def automate_data_entry(excel_file, form_template):
    """
    Read records from Excel and fill them into a form.

    Args:
        excel_file: path to the Excel file
        form_template: mapping of form field names to Excel column names
    """
    # 1. Read the Excel data
    df = pd.read_excel(excel_file)
    print(f"Read {len(df)} records")

    # 2. Iterate over each record
    for index, row in df.iterrows():
        print(f"\nProcessing record {index + 1}...")

        # 3. Fill in each field
        for field_name, column_name in form_template.items():
            value = row.get(column_name, '')

            # Locate the form field (field screenshots must be prepared in advance)
            field_location = pyautogui.locateOnScreen(
                f"form_field_{field_name}.png",
                confidence=0.8
            )

            if field_location:
                # Click the field
                center = pyautogui.center(field_location)
                pyautogui.click(center.x, center.y)
                time.sleep(0.2)

                # Enter the value
                pyautogui.hotkey('ctrl', 'a')  # select all
                pyautogui.typewrite(str(value), interval=0.01)
                time.sleep(0.2)
            else:
                print(f"  ⚠️ Field not found: {field_name}")

        # 4. Submit the form
        submit_btn = pyautogui.locateOnScreen(
            "submit_button.png",
            confidence=0.8
        )
        if submit_btn:
            center = pyautogui.center(submit_btn)
            pyautogui.click(center.x, center.y)
            print("  ✅ Submitted")
            time.sleep(2)  # wait for the submission to complete
        else:
            print("  ⚠️ Submit button not found")

        # 5. Prepare for the next record
        # May need to click "add new record" or return to the list
        time.sleep(1)

    print("\n🎉 All records processed!")
```

Usage example:

```python
if __name__ == "__main__":
    # Form template: field name -> Excel column name
    form_template = {
        "name": "Name",
        "email": "Email",
        "phone": "Phone",
        "address": "Address"
    }
    automate_data_entry("data.xlsx", form_template)
```

Scenario 3: Screen Monitoring & Alerting


```python
"""
Screen monitoring and alerting example
Watches a screen region for changes and fires a notification when one occurs
"""
import pyautogui
import cv2
import numpy as np
import time
from datetime import datetime

def monitor_screen_region(region, template_image=None, check_interval=5, callback=None):
    """
    Monitor a screen region for changes.

    Args:
        region: (left, top, width, height) region to watch
        template_image: path of a template image to look for (optional)
        check_interval: seconds between checks
        callback: function invoked when something is detected

    Returns:
        Monitor session object (call stop() to stop)
    """
    class MonitorSession:
        def __init__(self):
            self.running = True
            self.baseline = None

        def stop(self):
            self.running = False

    session = MonitorSession()

    print(f"🔍 Monitoring region: {region}")
    print(f"⏱️  Check interval: {check_interval}s")
    print("Press Ctrl+C to stop\n")

    try:
        while session.running:
            # Capture the current region
            current = pyautogui.screenshot(region=region)
            current_array = np.array(current)

            if template_image:
                # Mode 1: look for the template image
                template_location = pyautogui.locateOnScreen(
                    template_image,
                    confidence=0.8
                )

                if template_location:
                    print(f"✅ [{datetime.now()}] Template found: {template_location}")
                    if callback:
                        callback('template_found', {
                            'location': template_location,
                            'screenshot': current
                        })
            else:
                # Mode 2: detect changes
                if session.baseline is None:
                    session.baseline = current_array
                    print(f"📸 [{datetime.now()}] Baseline image captured")
                else:
                    # Compute the difference against the baseline
                    diff = cv2.absdiff(session.baseline, current_array)
                    diff_gray = cv2.cvtColor(diff, cv2.COLOR_RGB2GRAY)
                    diff_score = np.mean(diff_gray)

                    if diff_score > 10:  # tunable threshold
                        print(f"⚠️  [{datetime.now()}] Change detected! Diff score: {diff_score:.2f}")
                        if callback:
                            callback('change_detected', {
                                'diff_score': diff_score,
                                'screenshot': current,
                                'baseline': session.baseline
                            })
                        # Update the baseline
                        session.baseline = current_array

            time.sleep(check_interval)

    except KeyboardInterrupt:
        print("\n🛑 Monitoring stopped")

    return session
```

Usage example:

```python
def alert_callback(event_type, data):
    """Example alert callback."""
    if event_type == 'template_found':
        print(f"🎯 Template found at: {data['location']}")
        # Send a notification or email, or trigger another action here
    elif event_type == 'change_detected':
        print(f"📊 Change intensity: {data['diff_score']}")
        # Save the changed screenshot
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        data['screenshot'].save(f"change_{timestamp}.png")

if __name__ == "__main__":
    # Example 1: monitor the screen for changes
    print("=== Monitoring screen changes ===")
    monitor = monitor_screen_region(
        region=(0, 0, 1920, 1080),  # full screen
        check_interval=5,           # check every 5 seconds
        callback=alert_callback
    )

    # Stop after 10 minutes (or let it run indefinitely)
    # time.sleep(600)
    # monitor.stop()

    # Example 2: look for a specific image
    # monitor = monitor_screen_region(
    #     region=(0, 0, 1920, 1080),
    #     template_image="target_button.png",  # image to find
    #     check_interval=2,
    #     callback=alert_callback
    # )
```

Advanced Techniques


Handling Multiple Monitors


```python
import pyautogui

def get_all_screen_sizes():
    """Get monitor sizes (per-monitor detail is only available on Windows here)."""
    # macOS: returns the primary screen size only
    # Windows: pygetwindow or win32api can enumerate all monitors

    primary = pyautogui.size()
    print(f"Primary screen size: {primary}")

    # Windows example (requires pywin32)
    try:
        import win32api
        monitors = win32api.EnumDisplayMonitors()
        for i, monitor in enumerate(monitors):
            print(f"Monitor {i+1}: {monitor[2]}")
    except ImportError:
        pass

    return primary

def screenshot_specific_monitor(monitor_num=0):
    """Capture a specific monitor (experimental)."""
    # pyautogui currently targets the primary monitor;
    # multi-monitor capture needs platform-specific code
    pass
```

Performance Optimization


```python
import cv2
import numpy as np
import pyautogui
import time

class ScreenCache:
    """Screenshot cache to avoid redundant captures."""

    def __init__(self, cache_duration=0.5):
        self.cache_duration = cache_duration
        self.last_capture = None
        self.last_capture_time = 0

    def get_screenshot(self, region=None):
        """Take a screenshot (cached for full-screen captures)."""
        current_time = time.time()

        # Is the cached capture still valid?
        if (self.last_capture is not None and
            current_time - self.last_capture_time < self.cache_duration and
            region is None):
            return self.last_capture

        # Capture a fresh screenshot
        screenshot = pyautogui.screenshot(region=region)

        if region is None:
            self.last_capture = screenshot
            self.last_capture_time = current_time

        return screenshot

    def clear_cache(self):
        """Drop the cached capture."""
        self.last_capture = None
        self.last_capture_time = 0

class FastImageFinder:
    """Fast image finder using a multi-scale pyramid."""

    def __init__(self, scales=(0.8, 0.9, 1.0, 1.1, 1.2)):
        self.scales = scales

    def find_multi_scale(self, template_path, screenshot=None, confidence=0.8):
        """
        Multi-scale image search.

        Returns:
            (x, y, scale) or None
        """
        if screenshot is None:
            screenshot = pyautogui.screenshot()

        template = cv2.imread(template_path)
        if template is None:
            return None

        screenshot_cv = cv2.cvtColor(np.array(screenshot), cv2.COLOR_RGB2BGR)

        for scale in self.scales:
            # Resize the template
            scaled_template = cv2.resize(
                template,
                None,
                fx=scale,
                fy=scale,
                interpolation=cv2.INTER_AREA
            )

            # Template matching
            result = cv2.matchTemplate(
                screenshot_cv,
                scaled_template,
                cv2.TM_CCOEFF_NORMED
            )

            _, max_val, _, max_loc = cv2.minMaxLoc(result)

            if max_val >= confidence:
                h, w = scaled_template.shape[:2]
                center_x = max_loc[0] + w // 2
                center_y = max_loc[1] + h // 2
                return (center_x, center_y, scale)

        return None
```

Usage example:

```python
cache = ScreenCache()
finder = FastImageFinder()

# Fast screenshot (cached)
screenshot = cache.get_screenshot()

# Multi-scale image search
result = finder.find_multi_scale("button.png", screenshot)
if result:
    x, y, scale = result
    print(f"Found image at ({x}, {y}), scale: {scale}")
```

Security Considerations


```python
"""
Security best practices
"""

import pyautogui
import hashlib
import time

class SecureAutomation:
    """Wrapper that validates and logs automation actions."""

    def __init__(self):
        self.action_log = []
        self.max_retries = 3
        self.rate_limit_delay = 0.1  # delay between actions

    def log_action(self, action, details):
        """Append an entry to the action log."""
        timestamp = time.strftime("%Y-%m-%d %H:%M:%S")
        log_entry = {
            'timestamp': timestamp,
            'action': action,
            'details': details,
            'hash': hashlib.md5(f"{timestamp}{action}{details}".encode()).hexdigest()[:8]
        }
        self.action_log.append(log_entry)

    def safe_click(self, x, y, description=""):
        """Click with validation."""
        try:
            # Check that the coordinates are on screen
            screen_width, screen_height = pyautogui.size()
            if not (0 <= x < screen_width and 0 <= y < screen_height):
                raise ValueError(f"Coordinates ({x}, {y}) are off screen")

            # Perform the click
            pyautogui.moveTo(x, y, duration=0.2)
            time.sleep(self.rate_limit_delay)
            pyautogui.click()

            # Log it
            self.log_action('click', f"({x}, {y}) - {description}")

            return True

        except Exception as e:
            self.log_action('click_failed', f"({x}, {y}) - Error: {str(e)}")
            return False

    def safe_typewrite(self, text, interval=0.01):
        """Type text without logging sensitive content."""
        try:
            pyautogui.typewrite(text, interval=interval)
            self.log_action('typewrite', f"Typed {len(text)} characters [content hidden]")
            return True
        except Exception as e:
            self.log_action('typewrite_failed', f"Error: {str(e)}")
            return False

    def get_action_report(self):
        """Build a report of all logged actions."""
        total = len(self.action_log)
        successful = sum(1 for log in self.action_log if 'failed' not in log['action'])
        failed = total - successful

        report = f"""
=== Automation Action Report ===
Total actions: {total}
Successful: {successful}
Failed: {failed}
Success rate: {(successful/total*100):.1f}%

Detailed log:
"""
        for log in self.action_log:
            report += f"[{log['timestamp']}] [{log['hash']}] {log['action']}: {log['details']}\n"

        return report
```

Usage example:

```python
secure = SecureAutomation()

# Perform safe actions
secure.safe_click(500, 400, "Login button")
secure.safe_typewrite("username@example.com")
secure.safe_click(500, 450, "Password field")
secure.safe_typewrite("********")
secure.safe_click(500, 500, "Submit button")

# Generate the report
print(secure.get_action_report())
```

Troubleshooting Guide

Common Issues and Solutions

1. Permission Errors

Symptom: pyautogui fails with permission errors or captures black screenshots.
macOS Solution:
  1. Open System Settings > Privacy & Security > Accessibility
  2. Add your terminal application (e.g., Terminal.app, iTerm.app, or the Python executable)
  3. Repeat the same steps under Screen Recording
Windows Solution:
  1. Run the script as Administrator if needed
  2. Check that Windows Defender or antivirus software isn't blocking it
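
A quick way to confirm the permission problem programmatically is to check whether a capture comes back (near) uniformly black. This is a hedged sketch: `is_uniform` and `screenshot_looks_blank` are illustrative helper names, not part of this toolkit.

```python
def is_uniform(pixels, threshold=10):
    """True if the sampled pixel values vary by no more than `threshold`."""
    pixels = list(pixels)
    return bool(pixels) and max(pixels) - min(pixels) <= threshold

def screenshot_looks_blank():
    """Capture the screen and test for the all-black image that macOS
    returns when Screen Recording permission has not been granted."""
    import pyautogui  # deferred import so is_uniform stays dependency-free
    gray = pyautogui.screenshot().convert("L")  # grayscale
    return is_uniform(list(gray.getdata())[::997])  # sparse sample for speed
```

If `screenshot_looks_blank()` returns True on a normal desktop, revisit the permission steps above before debugging anything else.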

2. Coordinate Inaccuracy

Symptom: Clicks or screenshots miss the intended target.
Possible Causes:
  • High DPI / Retina display scaling
  • Multiple monitors with different resolutions
  • Window decorations or taskbar affecting coordinates
Solution:

```python
import pyautogui

# Debug: print screen info
print(f"Screen size: {pyautogui.size()}")
print(f"Mouse position: {pyautogui.position()}")

# Handle high DPI (Windows only)
import sys
if sys.platform == "win32":
    import ctypes
    ctypes.windll.user32.SetProcessDPIAware()
```
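
On Retina and other high-DPI screens, `pyautogui.screenshot()` can return an image larger than the logical coordinate space reported by `pyautogui.size()`, so positions found in a screenshot must be scaled back before clicking. A minimal sketch (the helper names are illustrative):

```python
def screenshot_scale():
    """Ratio of screenshot pixels to logical screen coordinates
    (typically 2.0 on a 2x Retina display, 1.0 elsewhere)."""
    import pyautogui  # deferred so image_to_screen stays dependency-free
    logical_width, _ = pyautogui.size()
    return pyautogui.screenshot().width / logical_width

def image_to_screen(x, y, scale):
    """Map a position found in a screenshot back to clickable coordinates."""
    return round(x / scale), round(y / scale)
```

Compute the scale once at startup and apply it to every coordinate obtained from template matching.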

3. Image Recognition Failures

Symptom: locateOnScreen returns None even when the image is visible.
Common Causes:
  • Resolution mismatch (template captured at a different scale)
  • Color depth differences
  • Transparency or alpha channel issues
  • Confidence threshold too high
Solutions:

```python
import pyautogui
import cv2
import numpy as np

# Solution 1: Lower the confidence threshold (requires opencv-python)
location = pyautogui.locateOnScreen('button.png', confidence=0.7)

# Solution 2: Multi-scale matching (see the FastImageFinder class in the Performance section)
finder = FastImageFinder(scales=[0.5, 0.75, 1.0, 1.25, 1.5])
result = finder.find_multi_scale('button.png')

# Solution 3: Convert to grayscale for matching
screenshot = pyautogui.screenshot()
screenshot_gray = cv2.cvtColor(np.array(screenshot), cv2.COLOR_RGB2GRAY)
template = cv2.imread('button.png', cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(screenshot_gray, template, cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)

if max_val >= 0.8:
    print(f"Match found, confidence: {max_val:.2f}")
    h, w = template.shape
    center_x = max_loc[0] + w // 2
    center_y = max_loc[1] + h // 2
    pyautogui.click(center_x, center_y)
```

4. Slow Performance

Symptom: Operations are slow, CPU usage is high, or delays are noticeable.
Optimization Strategies:
  1. Reduce Screenshot Frequency
    • Cache screenshots when possible
    • Use region-specific captures instead of full screen
  2. Optimize Image Matching
    • Resize large images before matching
    • Use grayscale matching when color isn't important
    • Set appropriate confidence levels
  3. Batch Operations
    • Group multiple actions together
    • Minimize unnecessary delays
See the "Performance Optimization" section for detailed code examples.
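
The first strategy can be sketched as a small time-based cache. `ScreenshotCache` is an illustrative helper under these assumptions, not part of the toolkit:

```python
import time

class ScreenshotCache:
    """Reuse a recent capture instead of grabbing the screen on every call."""

    def __init__(self, capture, ttl=0.5):
        self.capture = capture      # e.g. pyautogui.screenshot
        self.ttl = ttl              # seconds a capture stays fresh
        self._image = None
        self._taken = 0.0

    def get(self):
        now = time.monotonic()
        if self._image is None or now - self._taken > self.ttl:
            self._image = self.capture()
            self._taken = now
        return self._image
```

Pass `pyautogui.screenshot` (or a `functools.partial` with a `region=` argument) as `capture`; repeated matching calls within the TTL then reuse a single frame.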

5. Application-Specific Issues

Browser Automation:
  • Modern browsers may block synthetic input and automation
  • Use the Chrome DevTools Protocol instead of pyautogui for web pages
  • Consider Playwright or Selenium for complex web automation
Game/Graphics Applications:
  • DirectX/OpenGL apps may not be capturable with standard screenshot APIs
  • May require specialized tools (e.g., OBS Studio's capture API)
Protected Content:
  • DRM-protected content (Netflix, etc.) cannot be screenshotted
  • This is a system-level restriction

Integration with Other Tools

With ChatGPT/AI Assistants

This skill is designed to work with AI assistants like nanobot. Here's how to integrate:

```python
# Example: an AI assistant using this skill
from datetime import datetime

import pyautogui

def ai_assisted_automation(user_request):
    """Run an automation action from a natural-language request.

    Args:
        user_request: The user's natural-language request.
    """
    # 1. Parse the user's intent
    intent = parse_intent(user_request)

    if intent == 'screenshot':
        # 2. Capture and save a screenshot
        screenshot = pyautogui.screenshot()
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        path = f"screenshot_{timestamp}.png"
        screenshot.save(path)
        return f"Screenshot saved to: {path}"

    elif intent == 'click_button':
        # 2. Locate the button on screen and click it
        button_name = extract_button_name(user_request)
        location = pyautogui.locateOnScreen(f"{button_name}.png")
        if location:
            pyautogui.click(pyautogui.center(location))
            return f"Clicked button: {button_name}"
        else:
            return f"Button not found: {button_name}"

    # ... handle other intents
```
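
The example above leaves `parse_intent` and `extract_button_name` undefined; a real assistant would likely delegate both to an LLM. As a hedged, dependency-free sketch for just the two intents shown (the names and keyword lists are illustrative):

```python
INTENT_KEYWORDS = {
    "screenshot": ("screenshot", "capture the screen"),
    "click_button": ("click", "press"),
}

def parse_intent(user_request):
    """Return the first intent whose keywords appear in the request."""
    text = user_request.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "unknown"

def extract_button_name(user_request):
    """Naive heuristic: treat the last word as the button name
    (e.g. 'click submit' -> 'submit')."""
    return user_request.lower().rstrip(".!?").split()[-1]
```

A keyword table is brittle; it exists here only to make the dispatch example runnable end to end.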

With CI/CD Pipelines

Note: GUI automation requires an active display session; headless environments need a virtual display such as Xvfb on Linux.

```yaml
# Example: GitHub Actions using this skill for visual testing
name: Visual Regression Tests

on: [push, pull_request]

jobs:
  visual-test:
    runs-on: macos-latest  # or windows-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'

      - name: Install dependencies
        run: |
          pip install pyautogui opencv-python-headless numpy Pillow

      - name: Run visual tests
        run: python tests/visual_regression.py

      - name: Upload screenshots
        uses: actions/upload-artifact@v3
        with:
          name: screenshots
          path: screenshots/
```

With Monitoring Systems

```python
# Example: integration with Prometheus/Grafana for screen monitoring
import time

import cv2
import numpy as np
import pyautogui
from prometheus_client import Gauge, start_http_server

# Define metrics
screen_change_gauge = Gauge('screen_change_score', 'Screen change detection score')
template_match_gauge = Gauge('template_match_confidence', 'Template matching confidence')

start_http_server(8000)

def monitoring_loop():
    baseline = None

    while True:
        # Capture the screen
        current = pyautogui.screenshot()
        current_array = np.array(current)

        if baseline is not None:
            # Score how much the screen changed since the last frame
            diff = cv2.absdiff(baseline, current_array)
            diff_score = float(np.mean(diff))
            screen_change_gauge.set(diff_score)

        baseline = current_array

        # Check whether the alert icon is on screen
        try:
            location = pyautogui.locateOnScreen('alert_icon.png', confidence=0.8)
            template_match_gauge.set(1.0 if location else 0.0)
        except Exception:
            # newer pyautogui raises ImageNotFoundException when nothing matches
            template_match_gauge.set(0.0)

        time.sleep(5)

monitoring_loop()
```
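
For reference, the change score above is just the mean absolute per-pixel difference. A pure-Python equivalent of `np.mean(cv2.absdiff(a, b))` on flat pixel sequences (an illustrative helper, not part of the toolkit):

```python
def mean_abs_diff(a, b):
    """Mean absolute difference between two equal-length pixel sequences."""
    if len(a) != len(b):
        raise ValueError("frames must have the same number of pixels")
    if not a:
        return 0.0
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
```

A score near 0 means a static screen; a sustained spike indicates on-screen activity worth alerting on.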

Future Roadmap

Planned Features

  1. Linux Support
    • X11 and Wayland compatibility
    • xdotool and scrot integration
    • mss for multi-monitor support
  2. AI-Powered Recognition
    • Integration with OpenAI GPT-4V or Google Gemini for visual understanding
    • Natural language element finding ("click the blue submit button")
    • OCR-free text extraction using vision models
  3. Mobile Device Support
    • Android: ADB (Android Debug Bridge) integration
    • iOS: WebDriverAgent via Appium
    • Screenshot and touch simulation
  4. Cloud Integration
    • AWS Lambda support for serverless automation
    • Azure Functions and GCP Cloud Functions compatibility
    • Distributed screenshot processing
  5. Advanced Analytics
    • Built-in A/B testing framework for UI changes
    • Heatmap generation from user interactions
    • Performance regression detection

Contributing

We welcome contributions! Please see the Contributing Guide for details on:
  • Code style and formatting
  • Testing requirements
  • Documentation standards
  • Pull request process

License

This skill is licensed under the MIT License. See LICENSE for details.

Last Updated: 2026-03-06
Version: 1.0.0
Maintainer: nanobot skills team