
# Desktop Computer Automation


CRITICAL RULES — VIOLATIONS WILL BREAK THE WORKFLOW:
  1. Never run midscene commands in the background. Each command must run synchronously so you can read its output (especially screenshots) before deciding the next action. Background execution breaks the screenshot-analyze-act loop.
  2. Run only one midscene command at a time. Wait for the previous command to finish, read the screenshot, then decide the next action. Never chain multiple commands together.
  3. Allow enough time for each command to complete. Midscene commands involve AI inference and screen interaction, which can take longer than typical shell commands. A typical command needs about 1 minute; complex `act` commands may need even longer.
  4. Always report task results before finishing. After completing the automation task, you MUST proactively summarize the results to the user — including key data found, actions completed, screenshots taken, and any relevant findings. Never silently end after the last automation step; the user expects a complete response in a single interaction.
  5. Only minimize windows, never close them unless explicitly asked. When you need to dismiss or get a window out of the way, minimize it instead of closing it. Do not close any app or window unless the user explicitly asks you to do so.
Control your desktop (macOS, Windows, Linux) using `npx -y @midscene/computer@1`. Each CLI command maps directly to an MCP tool — you (the AI agent) act as the brain, deciding which actions to take based on screenshots.

## What `act` Can Do


Inside a single `act` call on desktop, Midscene can move the mouse, click, double-click, right-click, drag items, type or clear text, scroll, press single keys or keyboard shortcuts, and work through multi-step interactions on whatever is visible on the selected display.

## Prerequisites


Midscene requires models with strong visual grounding capabilities. The following environment variables must be configured — either as system environment variables or in a `.env` file in the current working directory (Midscene loads `.env` automatically):

```bash
MIDSCENE_MODEL_API_KEY="your-api-key"
MIDSCENE_MODEL_NAME="model-name"
MIDSCENE_MODEL_BASE_URL="https://..."
MIDSCENE_MODEL_FAMILY="family-identifier"
```

Example: Gemini (Gemini-3-Flash)

```bash
MIDSCENE_MODEL_API_KEY="your-google-api-key"
MIDSCENE_MODEL_NAME="gemini-3-flash"
MIDSCENE_MODEL_BASE_URL="https://generativelanguage.googleapis.com/v1beta/openai/"
MIDSCENE_MODEL_FAMILY="gemini"
```

Example: Qwen 3.5

```bash
MIDSCENE_MODEL_API_KEY="your-aliyun-api-key"
MIDSCENE_MODEL_NAME="qwen3.5-plus"
MIDSCENE_MODEL_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
MIDSCENE_MODEL_FAMILY="qwen3.5"
MIDSCENE_MODEL_REASONING_ENABLED="false"
```

If using OpenRouter, set:

```bash
MIDSCENE_MODEL_API_KEY="your-openrouter-api-key"
MIDSCENE_MODEL_NAME="qwen/qwen3.5-plus"
MIDSCENE_MODEL_BASE_URL="https://openrouter.ai/api/v1"
```


Example: Doubao Seed 2.0 Lite

```bash
MIDSCENE_MODEL_API_KEY="your-doubao-api-key"
MIDSCENE_MODEL_NAME="doubao-seed-2-0-lite"
MIDSCENE_MODEL_BASE_URL="https://ark.cn-beijing.volces.com/api/v3"
MIDSCENE_MODEL_FAMILY="doubao-seed"
```

Commonly used models: Doubao Seed 2.0 Lite, Qwen 3.5, Zhipu GLM-4.6V, Gemini-3-Pro, Gemini-3-Flash.

If the model is not configured, ask the user to set it up. See Model Configuration for supported providers.
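Since Midscene loads `.env` automatically, the variables above can be written to a file once so every later invocation picks them up. A minimal sketch using the placeholder values from the generic example (substitute your real key, model name, and provider URL):

```shell
# Create a .env in the current working directory; Midscene loads it automatically.
# All values below are placeholders, not working credentials.
cat > .env <<'EOF'
MIDSCENE_MODEL_API_KEY="your-api-key"
MIDSCENE_MODEL_NAME="model-name"
MIDSCENE_MODEL_BASE_URL="https://..."
MIDSCENE_MODEL_FAMILY="family-identifier"
EOF
```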


## Commands


### Connect to Desktop


```bash
npx -y @midscene/computer@1 connect
npx -y @midscene/computer@1 connect --displayId <id>
```

### List Displays


```bash
npx -y @midscene/computer@1 list_displays
```

### Take Screenshot


```bash
npx -y @midscene/computer@1 take_screenshot
```

After taking a screenshot, read the saved image file to understand the current screen state before deciding the next action.

### Perform Action


Use `act` to interact with the computer and get the result. It autonomously handles all UI interactions internally — clicking, typing, scrolling, waiting, and navigating — so you should give it complex, high-level tasks as a whole rather than breaking them into small steps. Describe what you want to do and the desired effect in natural language:

```bash
# specific instructions
npx -y @midscene/computer@1 act --prompt "type hello world in the search field and press Enter"
npx -y @midscene/computer@1 act --prompt "drag the file icon to the Trash"

# or target-driven instructions
npx -y @midscene/computer@1 act --prompt "search for the weather in Shanghai using the Chrome browser, tell me the result"
```

### Assert Current Screen State


Use `assert` to verify that the current screen satisfies a natural language condition. It does not perform UI actions; it checks the visible screen state and passes only when the assertion is true. Use this for validation, QA checks, and final state verification after `act`.

```bash
npx -y @midscene/computer@1 assert --prompt "there is a login button visible"
npx -y @midscene/computer@1 assert --prompt "the active window shows a saved confirmation message"
npx -y @midscene/computer@1 assert --displayId 1 --prompt "the file picker is open"
```

### Use a Reference Image for Precise Targeting


When the user provides a screenshot, icon, logo, or reference image and wants an exact visual match, prefer `tap --locate` instead of a generic `act --prompt`. Pass `--locate` as JSON. The `prompt` describes the target, `images` supplies named reference images, and `convertHttpImage2Base64: true` is useful when the image URL may not be directly accessible to the model.

```bash
npx -y @midscene/computer@1 tap --locate '{
  "prompt": "tap the area contains the image",
  "images": [
    {
      "name": "target image",
      "url": "https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png"
    }
  ],
  "convertHttpImage2Base64": true
}'
```

The same `locate` JSON shape also works for other commands that accept a `locate` parameter.
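For longer payloads, shell quoting is easier to manage if the `locate` JSON is built in a variable first. A sketch using the same example payload as above; the final CLI call is left commented out since it needs a live desktop session:

```shell
# Build the locate payload once; the single-quoted heredoc keeps the JSON verbatim.
LOCATE_JSON=$(cat <<'EOF'
{
  "prompt": "tap the area contains the image",
  "images": [
    {
      "name": "target image",
      "url": "https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png"
    }
  ],
  "convertHttpImage2Base64": true
}
EOF
)
# npx -y @midscene/computer@1 tap --locate "$LOCATE_JSON"
```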

### Disconnect


```bash
npx -y @midscene/computer@1 disconnect
```

### Consume Report Files


The generated HTML report is recommended for human reading first. It includes step-by-step execution details and replay videos for each operation, which makes it much easier to understand what happened and troubleshoot problems.

If another skill or tool needs to consume the report, first convert it with `report-tool` from the same platform CLI package. Prefer Markdown for LLM-based workflows. Use JSON when the report needs to be processed programmatically.

```bash
npx -y @midscene/computer@1 report-tool --action to-markdown --htmlPath ./midscene_run/report/.../index.html --outputDir ./output-markdown
npx -y @midscene/computer@1 report-tool --action split --htmlPath ./midscene_run/report/.../index.html --outputDir ./output-data
```

## Workflow Pattern


Since CLI commands are stateless between invocations, follow this pattern:
  1. Connect to establish a session.
  2. Health check — observe the output of the `connect` command. If `connect` already performed a health check (screenshot and mouse movement test), no additional check is needed. If it did not, do one manually: take a screenshot and verify it succeeds, then move the mouse to a random position (`act --prompt "move the mouse to a random position"`) and verify it succeeds. If either step fails, stop and troubleshoot before continuing. Only proceed once both checks pass without errors.
  3. Launch the target app and take a screenshot to see the current state, confirming the app is launched and visible on the screen.
  4. Execute actions — use `act` to perform specific or target-driven instructions, and use `assert` when you need to verify the resulting screen state.
  5. Disconnect when done.
  6. Report results — summarize what was accomplished, present key findings and data extracted during the task, and list any generated files (screenshots, logs, etc.) with their paths.
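Assuming the model variables are configured and a desktop session is available, the pattern above can be sketched as a concrete command sequence. Run strictly one command at a time, reading each screenshot before the next step; the app name and prompts are illustrative, not prescribed:

```shell
# 1. Connect (add --displayId <id> to pin a specific display)
npx -y @midscene/computer@1 connect

# 2. Manual health check, only if connect did not already perform one
npx -y @midscene/computer@1 take_screenshot
npx -y @midscene/computer@1 act --prompt "move the mouse to a random position"

# 3. Launch the target app outside midscene, then confirm it is visible
open -a "Safari"    # macOS; on Windows use: start <AppName>
npx -y @midscene/computer@1 take_screenshot

# 4. Execute the task, then verify the resulting state
npx -y @midscene/computer@1 act --prompt "search for the weather in Shanghai, tell me the result"
npx -y @midscene/computer@1 assert --prompt "a weather result for Shanghai is visible"

# 5. Disconnect when done
npx -y @midscene/computer@1 disconnect
```

Step 6 (reporting results) then happens in your reply to the user, not via a CLI command.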

## Best Practices


  1. Always run a health check first: After connecting, observe the output of the `connect` command. If `connect` already performed a health check (screenshot and mouse movement test), no additional check is needed. If it did not, do one manually: take a screenshot and move the mouse to a random position. Both must succeed (no errors) before proceeding with any further operations. This catches environment issues early.
  2. Bring the target app to the foreground before using this skill: For best efficiency, bring the app to the foreground using other means (e.g., `open -a <AppName>` on macOS, `start <AppName>` on Windows) before invoking any midscene commands. Then take a screenshot to confirm the app is actually in the foreground. Only after visual confirmation should you proceed with UI automation using this skill. Avoid using Spotlight, Start menu search, or other launcher-based approaches through midscene — they involve transient UI and multiple AI inference steps, and are significantly slower.
  3. Be specific about UI elements: Instead of vague descriptions, provide clear, specific details. Say "the yellow minimize button in the top-left corner of the Safari window" instead of "the button".
  4. Describe locations when possible: Help target elements by describing their position (e.g., "the icon in the top-right corner of the menu bar", "the third item in the left sidebar").
  5. Never run in background: Every midscene command must run synchronously — background execution breaks the screenshot-analyze-act loop.
  6. Check for multiple displays: If you launched an app but cannot see it in the screenshot, the app window may have opened on a different display. Use `list_displays` to check available displays. You have two options: either move the app window to the current display, or use `connect --displayId <id>` to switch to the display where the app is.
  7. Batch related operations into a single `act` command: When performing consecutive operations within the same app, combine them into one `act` prompt instead of splitting them into separate commands. For example, "search for X, click the first result, and scroll down to see more details" should be a single `act` call, not three. This reduces round-trips, avoids unnecessary screenshot-analyze cycles, and is significantly faster.
  8. Set up `PATH` before running (macOS): On macOS, some commands (e.g., `system_profiler`) may not be found if the `PATH` is incomplete. Before running any midscene commands, ensure the `PATH` includes the standard system directories:

     ```bash
     export PATH="/usr/sbin:/usr/bin:/bin:/sbin:$PATH"
     ```

     This prevents screenshot failures caused by missing system utilities.
  9. Use `assert` for verification: When the goal is to confirm that a screen state is true, use `assert --prompt "..."` instead of an `act` prompt. Keep assertions observable and specific, such as "the Save dialog is open" or "the export completed message is visible".
  10. Always report results after completion: After finishing the automation task, you MUST proactively present the results to the user without waiting for them to ask. This includes: (1) the answer to the user's original question or the outcome of the requested task, (2) key data extracted or observed during execution, (3) screenshots and other generated files with their paths, (4) a brief summary of the steps taken. Do NOT silently finish after the last automation command — the user expects complete results in a single interaction.
  11. Prefer `tap --locate` when a reference image is provided: If the user shares a screenshot, icon, or logo and wants that exact visual target, use `tap --locate` with a multimodal `locate` JSON object such as `{ "prompt": "...", "images": [...] }` instead of relying only on `act --prompt`.

Example — Context menu interaction:

```bash
npx -y @midscene/computer@1 act --prompt "right-click the file icon and select Delete from the context menu"
npx -y @midscene/computer@1 take_screenshot
```

Example — Dropdown menu:

```bash
npx -y @midscene/computer@1 act --prompt "open the File menu and click New Window"
npx -y @midscene/computer@1 take_screenshot
```
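Example — Batched operations: one `act` call covering the three-step prompt used to illustrate batching above (the prompt is illustrative):

```bash
npx -y @midscene/computer@1 act --prompt "search for X, click the first result, and scroll down to see more details"
npx -y @midscene/computer@1 take_screenshot
```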

## Troubleshooting


### macOS: Accessibility Permission Denied


Your terminal app does not have Accessibility access:
  1. Open System Settings > Privacy & Security > Accessibility
  2. Add your terminal app and enable it
  3. Restart your terminal app after granting permission

### macOS: Xcode Command Line Tools Not Found


```bash
xcode-select --install
```

### API Key Not Set


Check that the `.env` file contains `MIDSCENE_MODEL_API_KEY=<your-key>`.
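A quick diagnostic for where the key is (or is not) configured — this only inspects the shell environment and `.env`; it does not validate that the key itself works:

```shell
# Report whether MIDSCENE_MODEL_API_KEY comes from the environment, .env, or is missing.
if [ -n "$MIDSCENE_MODEL_API_KEY" ]; then
  KEY_SOURCE="environment"
elif [ -f .env ] && grep -q '^MIDSCENE_MODEL_API_KEY=' .env; then
  KEY_SOURCE="dotenv"
else
  KEY_SOURCE="missing"
fi
echo "API key source: $KEY_SOURCE"
```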

### macOS: Screenshot Fails with `system_profiler` Not Found

If `take_screenshot` fails with an error like `system_profiler: command not found`, the `PATH` environment variable is likely incomplete. Fix it by running:

```bash
export PATH="/usr/sbin:/usr/bin:/bin:/sbin:$PATH"
```

Then retry the screenshot command.

### macOS: Screenshot Returns a Black Screen


If `take_screenshot` returns a completely black image, the Mac is likely locked (e.g. the screen is at the login/lock window). This is a system-level restriction — macOS prohibits capturing the screen contents while the session is locked, so there is no workaround at the application level.

Recommended fix: Use a screensaver instead of locking the screen. A screensaver keeps the user session active and unlocked, allowing screenshots to capture normally.
  1. Open System Settings > Lock Screen
  2. Set "Require password after screen saver begins or display is turned off" to a longer delay (or turn it off during automation)
  3. Optionally configure a screensaver under System Settings > Screen Saver so the display still dims after inactivity without locking

### AI Cannot Find the Element


  1. Take a screenshot to verify the element is actually visible
  2. Use more specific descriptions (include color, position, surrounding text)
  3. Ensure the element is not hidden behind another window

### `@midscene/*` Dependency Version Outdated

  • Check local versions: `npm ls @midscene/computer @midscene/core @midscene/shared` (or `pnpm why @midscene/computer`).
  • Check latest versions: `npm view @midscene/computer version`, `npm view @midscene/core version`, `npm view @midscene/shared version`.
  • Upgrade dependencies: `npm i @midscene/computer@latest @midscene/core@latest @midscene/shared@latest`.