code-interpreter


Code Interpreter

A general-purpose code execution environment powered by AWS Bedrock AgentCore Code Interpreter. Run code, execute shell commands, and manage files in a secure sandbox.

Available Tools

- `execute_code(code, language, output_filename)`: Execute Python, JavaScript, or TypeScript code.
- `execute_command(command)`: Execute shell commands.
- `file_operations(operation, paths, content)`: Read, write, list, or remove files in the sandbox.
- `ci_push_to_workspace(paths)`: Save sandbox files to the shared workspace (S3). Omit `paths` to save all files in the sandbox root.

Tool Parameters

execute_code

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `code` | string | Yes | | Code to execute. Use `print()` for text output. |
| `language` | string | No | `"python"` | `"python"`, `"javascript"`, or `"typescript"` |
| `output_filename` | string | No | `""` | File to download after execution. Code must save a file with this exact name. Saved to workspace automatically. |

execute_command

| Parameter | Type | Required | Description |
|---|---|---|---|
| `command` | string | Yes | Shell command to execute (e.g., `"ls -la"`, `"pip install requests"`). |

file_operations

| Parameter | Type | Required | Description |
|---|---|---|---|
| `operation` | string | Yes | `"read"`, `"write"`, `"list"`, or `"remove"` |
| `paths` | list | For read/list/remove | File paths. read: `["file.txt"]`, list: `["."]`, remove: `["old.txt"]` |
| `content` | list | For write | Entries with `path` and `text`: `[{"path": "out.txt", "text": "hello"}]` |

tool_input Examples

execute_code — text output

```json
{
  "code": "import pandas as pd\ndf = pd.DataFrame({'A': [1,2,3], 'B': [4,5,6]})\nprint(df.describe())",
  "language": "python"
}
```

execute_code — generate chart

```json
{
  "code": "import matplotlib\nmatplotlib.use('Agg')\nimport matplotlib.pyplot as plt\nimport numpy as np\nx = np.linspace(0, 10, 100)\nplt.figure(figsize=(10,6))\nplt.plot(x, np.sin(x))\nplt.title('Sine Wave')\nplt.savefig('sine.png', dpi=300, bbox_inches='tight')\nprint('Done')",
  "language": "python",
  "output_filename": "sine.png"
}
```

execute_command — install a package

```json
{
  "command": "pip install yfinance"
}
```

execute_command — check environment

```json
{
  "command": "python --version && pip list | head -20"
}
```

file_operations — write a file

```json
{
  "operation": "write",
  "content": [{"path": "config.json", "text": "{\"key\": \"value\"}"}]
}
```

file_operations — list files

```json
{
  "operation": "list",
  "paths": ["."]
}
```

file_operations — read a file

```json
{
  "operation": "read",
  "paths": ["output.csv"]
}
```
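file_operations — remove a file

The `remove` operation takes the same `paths` shape as read and list; a minimal sketch (`old.txt` is an illustrative filename, matching the parameter table above):

```json
{
  "operation": "remove",
  "paths": ["old.txt"]
}
```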

When to Use This Skill

Use code-interpreter as a sandbox for testing and prototyping code. For production tasks (creating documents, charts, presentations), prefer specialized skills.

Do NOT use for:

- Formatting or displaying code examples (respond directly with markdown code blocks)
- Explaining code or algorithms (respond directly with text)
- Simple calculations you can do mentally (just provide the answer)
- Any task that doesn't require actual code execution

| Task | Recommended Skill | Notes |
|---|---|---|
| Create charts/diagrams | visual-design | Use this first for production charts |
| Create Word documents | word-documents | Has template support and styling |
| Create Excel spreadsheets | excel-spreadsheets | Has formatting pipeline and validation |
| Create PowerPoint | powerpoint-presentations | Has layout system and design patterns |
| Test code snippets | code-interpreter | Debug, verify logic, check output |
| Prototype algorithms | code-interpreter | Experiment before implementing |
| Install/test packages | code-interpreter | Check compatibility, test APIs |
| Debug code logic | code-interpreter | Isolate and test specific functions |
| Verify calculations | code-interpreter | Quick math or data checks |

Code Interpreter vs Code Agent

| | Code Interpreter | Code Agent |
|---|---|---|
| Nature | Sandboxed execution environment | Autonomous agent (Claude Code) |
| Best for | Quick scripts, data analysis, prototyping | Multi-file projects, refactoring, test suites |
| File persistence | Only when `output_filename` is set | All files auto-synced to S3 |
| Session state | Variables persist within session | Files + conversation persist across sessions |
| Autonomy | You write the code | Agent plans, writes, runs, and iterates |
| Use when | You need to run a specific piece of code | You need an engineer to solve a problem end-to-end |

Workspace Integration

All files go to the `code-interpreter/` namespace — a flat, session-isolated space separate from office documents.

Sandbox → Workspace (save outputs):

```json
// Save a specific file after execution
{ "tool": "ci_push_to_workspace", "paths": ["chart.png", "results.json"] }

// Save everything in the sandbox root
{ "tool": "ci_push_to_workspace" }

// Alternative: save a single file inline during execute_code
{ "tool": "execute_code", "output_filename": "chart.png", "code": "..." }
```

Uploaded files (auto-preloaded): files uploaded by the user (e.g. ZIP archives) are automatically available in the sandbox — no manual loading needed. Just use them directly in `execute_code`.

Read saved files via the workspace skill:

```
workspace_read("code-interpreter/chart.png")
workspace_read("code-interpreter/results.json")
workspace_list("code-interpreter/")
```

Text files (`.py`, `.csv`, `.json`, `.txt`, etc.) are transferred as-is. Binary files (`.png`, `.pdf`, `.xlsx`, etc.) are handled via base64 encoding automatically.

Environment

- Languages: Python (recommended, 200+ libraries), JavaScript, TypeScript
- Shell: Full shell access via `execute_command`
- File system: Persistent within session; use `file_operations` to manage files
- Session state: Variables and files persist across multiple calls within the same session
- Network: Internet access available (can use `requests`, `urllib`, `curl`)

Supported Languages

- Python (recommended) — 200+ pre-installed libraries covering data science, ML, visualization, file processing
- JavaScript — Node.js runtime, useful for JSON manipulation, async operations
- TypeScript — TypeScript runtime with type checking

Pre-installed Python Libraries

Data Analysis & Visualization

| Library | Common Use |
|---|---|
| `pandas` | DataFrames, CSV/Excel I/O, groupby, pivot |
| `numpy` | Arrays, linear algebra, random, statistics |
| `matplotlib` | Line, bar, scatter, histogram, subplots |
| `plotly` | Interactive charts, 3D plots |
| `bokeh` | Interactive visualization |
| `scipy` | Optimization, interpolation, signal processing |
| `statsmodels` | Regression, time series, hypothesis tests |
| `sympy` | Algebra, calculus, equation solving |

Machine Learning & AI

| Library | Common Use |
|---|---|
| `scikit-learn` | Classification, regression, clustering, pipelines |
| `torch` / `torchvision` / `torchaudio` | Deep learning, computer vision, audio |
| `xgboost` | High-performance gradient boosting |
| `spacy` / `nltk` / `textblob` | NLP, tokenization, NER, sentiment |
| `scikit-image` | Image processing, filters, segmentation |

Mathematical & Optimization

| Library | Common Use |
|---|---|
| `cvxpy` | Convex optimization, portfolio optimization |
| `ortools` | Scheduling, routing, constraint programming |
| `pulp` | Linear programming |
| `z3-solver` | SAT solving, formal verification |
| `networkx` / `igraph` | Graph algorithms, network analysis |
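To make the graph-algorithm entry concrete, a small `networkx` sketch (the edge weights are arbitrary):

```python
import networkx as nx

# Weighted triangle: the direct A-C edge (5) costs more than A-B-C (1 + 2)
G = nx.Graph()
G.add_weighted_edges_from([('A', 'B', 1), ('B', 'C', 2), ('A', 'C', 5)])

path = nx.shortest_path(G, 'A', 'C', weight='weight')
print(path)  # → ['A', 'B', 'C']
```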

File Processing & Documents

| Library | Common Use |
|---|---|
| `openpyxl` / `xlrd` / `XlsxWriter` | Excel read/write with formatting |
| `python-docx` | Word document creation/modification |
| `python-pptx` | PowerPoint creation/modification |
| `PyPDF2` / `pdfplumber` / `reportlab` | PDF read/write/generate |
| `lxml` / `beautifulsoup4` | XML/HTML parsing |
| `markitdown` | Convert various formats to Markdown |
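For instance, a minimal Excel round-trip with `openpyxl` (the filename and cell values are illustrative):

```python
from openpyxl import Workbook, load_workbook

# Write a tiny workbook, then read the value back
wb = Workbook()
ws = wb.active
ws['A1'] = 'total'
ws['B1'] = 42
wb.save('report.xlsx')

value = load_workbook('report.xlsx').active['B1'].value
print(value)  # → 42
```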

Image & Media

| Library | Common Use |
|---|---|
| `pillow` (PIL) | Image resize, crop, filter, conversion |
| `opencv-python` (cv2) | Computer vision, feature detection |
| `imageio` / `moviepy` | Image/video I/O and editing |
| `pydub` | Audio manipulation |
| `svgwrite` / `Wand` | SVG creation, ImageMagick |
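A quick `pillow` sketch for the resize entry (the color and dimensions are arbitrary):

```python
from PIL import Image

# Create a solid-color image and downscale it to a thumbnail
img = Image.new('RGB', (200, 100), color='#2196F3')
thumb = img.resize((100, 50))
thumb.save('thumb.png')
print(thumb.size)  # → (100, 50)
```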

Data Storage & Formats

| Library | Common Use |
|---|---|
| `duckdb` | SQL queries on DataFrames and files |
| `SQLAlchemy` | SQL ORM and database abstraction |
| `pyarrow` | Parquet and Arrow format processing |
| `orjson` / `ujson` / `PyYAML` | Fast JSON/YAML parsing |

Web & API

| Library | Common Use |
|---|---|
| `requests` / `httpx` | HTTP requests, API calls |
| `beautifulsoup4` | Web scraping |
| `fastapi` / `Flask` / `Django` | Web frameworks |

Utilities

| Library | Common Use |
|---|---|
| `pydantic` | Data validation, schema definition |
| `Faker` | Test data generation |
| `rich` | Pretty printing, tables |
| `cryptography` | Encryption, hashing |
| `qrcode` | QR code generation |
| `boto3` | AWS SDK |

For the full list of 200+ libraries with versions, run `execute_command(command="pip list")`.
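As an illustration of the validation entry, a `pydantic` sketch (the `Order` model is made up for this example):

```python
from pydantic import BaseModel, ValidationError

class Order(BaseModel):
    order_id: int
    amount: float

# Compatible strings are coerced to the declared types
o = Order(order_id='7', amount='19.5')
print(o.order_id, o.amount)  # → 7 19.5

# Incompatible input raises ValidationError
try:
    Order(order_id='not-a-number', amount=1)
except ValidationError:
    print('rejected')  # → rejected
```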

Usage Patterns

Pattern 1: Data Analysis

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({
    'date': pd.date_range('2024-01-01', periods=100),
    'revenue': np.random.normal(1000, 200, 100),
    'costs': np.random.normal(700, 150, 100),
})
df['profit'] = df['revenue'] - df['costs']

print("=== Summary Statistics ===")
print(df.describe())
print(f"\nTotal Profit: ${df['profit'].sum():,.2f}")
print(f"Profit Margin: {df['profit'].mean() / df['revenue'].mean() * 100:.1f}%")
```

Pattern 2: Visualization (with output_filename)

```python
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import numpy as np

fig, axes = plt.subplots(2, 2, figsize=(14, 10))

categories = ['Q1', 'Q2', 'Q3', 'Q4']
values = [120, 150, 180, 210]
axes[0,0].bar(categories, values, color='#2196F3')
axes[0,0].set_title('Quarterly Revenue')

x = np.linspace(0, 10, 50)
axes[0,1].plot(x, np.sin(x), 'b-', linewidth=2)
axes[0,1].set_title('Trend')

sizes = [35, 30, 20, 15]
axes[1,0].pie(sizes, labels=['A','B','C','D'], autopct='%1.1f%%')
axes[1,0].set_title('Market Share')

x = np.random.normal(50, 10, 200)
y = x * 1.5 + np.random.normal(0, 15, 200)
axes[1,1].scatter(x, y, alpha=0.5, c='#FF5722')
axes[1,1].set_title('Correlation')

plt.tight_layout()
plt.savefig('dashboard.png', dpi=300, bbox_inches='tight')
print('Dashboard saved')
```

Pattern 3: Machine Learning

```python
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.datasets import load_iris

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

print(classification_report(y_test, y_pred, target_names=iris.target_names))
```

Pattern 4: SQL with DuckDB

```python
import duckdb
import pandas as pd

orders = pd.DataFrame({
    'order_id': range(1, 101),
    'customer': [f'Customer_{i%20}' for i in range(100)],
    'amount': [round(50 + i * 3.5, 2) for i in range(100)],
})

result = duckdb.sql("""
    SELECT customer, COUNT(*) as cnt, ROUND(SUM(amount), 2) as total
    FROM orders GROUP BY customer
    HAVING COUNT(*) >= 3 ORDER BY total DESC LIMIT 10
""").df()
print(result.to_string(index=False))
```

Pattern 5: Fetch Data from Web

```python
import requests
import pandas as pd

response = requests.get("https://api.example.com/data")
data = response.json()
df = pd.DataFrame(data)
print(df.head())
```

Pattern 6: Multi-step Workflow (session state persists)

Call 1: `execute_code` → load and clean data, store in variable `df`
Call 2: `execute_code` → analyze `df`, generate chart, save as PNG
Call 3: `execute_code` → export results to CSV
Call 4: `file_operations(operation="read")` → download the CSV

Variables (`df`) and files persist across calls in the same session.
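The first two calls above can be sketched as `tool_input` payloads (`sales.csv` is a hypothetical uploaded file):

```json
// Call 1: load data; `df` stays defined in the session
{ "code": "import pandas as pd\ndf = pd.read_csv('sales.csv')\nprint(len(df))", "language": "python" }

// Call 2: reuse `df` without reloading
{ "code": "print(df.describe())", "language": "python" }
```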

Important Rules

1. Call `matplotlib.use('Agg')` before `import matplotlib.pyplot` — the sandbox has no display.
2. Use `print()` for text output — stdout is how results are returned.
3. `output_filename` must match exactly — the filename in `plt.savefig()` or `wb.save()` must match the `output_filename` parameter.
4. Use `execute_command` for shell tasks — `ls`, `pip install`, `curl`, etc.
5. Use `file_operations` for file management — read/write/list/remove files explicitly.
6. Session state persists — variables and files remain across calls. Use this for multi-step workflows.
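Rules 1–3 combine into a minimal plotting template (`trend.png` is an illustrative name; it must equal the `output_filename` parameter you pass):

```python
import matplotlib
matplotlib.use('Agg')            # Rule 1: select a non-display backend first
import matplotlib.pyplot as plt

plt.plot([1, 2, 3], [2, 4, 8])
plt.savefig('trend.png')         # Rule 3: same name as output_filename
print('saved trend.png')         # Rule 2: report results via stdout
```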

Common Mistakes to Avoid

- Forgetting `matplotlib.use('Agg')` before `import matplotlib.pyplot as plt`
- Using `plt.show()` instead of `plt.savefig()` — there is no display
- A typo in `output_filename` — it must exactly match the file saved by the code
- Using `execute_code` for shell tasks — use `execute_command` instead
- Writing binary files via `file_operations` — use `execute_code` to generate binary files, then download them with `output_filename`