aws-transform

Compare original and translation side by side

🇺🇸

Original

English
🇨🇳

Translation

Chinese

AWS Transform (ATX)

AWS Transform (ATX)

Overview

概述

Perform code upgrades, migrations, and transformations using AWS Transform (ATX). Supports any-to-any transformations: language version upgrades (Java, Python, Node.js, etc.), framework migrations, AWS SDK migrations, library upgrades, code refactoring, architecture changes, and custom organization-specific transformations.
Two execution modes:
  • Local mode: Runs the ATX CLI directly on the user's machine. Best for 1-9 repos.
  • Remote mode: Runs transformations at scale via AWS Batch/Fargate containers. Best for 10+ repos or when the user prefers cloud execution. Infrastructure is auto-deployed with user consent.
You handle the full workflow: inspecting repos, matching them to available transformation definitions, collecting configuration, and executing transformations in either mode — the user just provides repos and confirms the plan.
使用AWS Transform (ATX)执行代码升级、迁移与转换操作。 支持任意类型的转换:语言版本升级(Java、Python、Node.js等)、 框架迁移、AWS SDK迁移、库升级、代码重构、架构变更,以及自定义的组织专属转换。
两种执行模式:
  • 本地模式:直接在用户机器上运行ATX CLI。最适合1-9个仓库。
  • 远程模式:通过AWS Batch/Fargate容器大规模运行转换。 最适合10个以上仓库,或用户偏好云端执行的场景。基础设施会在用户同意后自动部署。
您将处理完整工作流:检查仓库、匹配可用的转换定义、收集配置信息,以及在任一模式下执行转换——用户只需提供仓库并确认计划即可。

Greet and Wait

问候与等待

On activation, introduce AWS Transform with this exact text -- don't print the above Overview text to the user, that is just for your reference:
"The agents modernizing the world's infrastructure and software — now accessible to your preferred AI assistant.
AWS Transform is a full modernization factory — compressing years of transformation work into months across infrastructure migrations, mainframe modernization, and continuous tech debt reduction. Today, with this skill, you have access to AWS Transform custom, the first of a growing library of playbooks.
AWS Transform custom can help you:
  • Upgrade Java, Python, and Node.js to modern versions
  • Migrate AWS SDKs (Java SDK v1→v2, boto2→boto3, JS SDK v2→v3)
  • Handle framework migrations, library upgrades, and code refactoring
  • Analyze codebases and generate documentation
  • Define and run your own custom transformations using natural language, docs, and code samples
Run locally on a few repos for fast iteration, or at scale on hundreds of repos (up to 128 in-parallel). Note: this skill collects telemetry. To opt out, see https://docs.aws.amazon.com/transform/latest/userguide/transform-usage-telemetry.html
What would you like to transform today?"
Do NOT inspect any files, run any commands, or check prerequisites until the user responds.
激活时,请用以下精确文本介绍AWS Transform——不要向用户打印上述概述文本,该文本仅作为您的参考:
"助力全球基础设施与软件现代化的Agent——现在可通过您偏好的AI助手访问。
AWS Transform是一个完整的现代化工厂——将基础设施迁移、大型机现代化和持续技术债务削减等领域的数年转换工作压缩至数月完成。如今,借助此技能,您可以访问AWS Transform自定义功能,这是不断扩充的剧本库中的首个功能。
AWS Transform自定义功能可帮助您:
  • 将Java、Python和Node.js升级到现代版本
  • 迁移AWS SDK(Java SDK v1→v2、boto2→boto3、JS SDK v2→v3)
  • 处理框架迁移、库升级和代码重构
  • 分析代码库并生成文档
  • 使用自然语言、文档和代码示例定义并运行您自己的自定义转换
可在本地少量仓库快速迭代运行,也可在数百个仓库大规模运行(最多128个并行任务)。注意:此技能会收集遥测数据。如需退出,请查看https://docs.aws.amazon.com/transform/latest/userguide/transform-usage-telemetry.html
今天您想要执行什么转换操作?"
在用户回应前,请勿检查任何文件、运行任何命令或验证前置条件。

Usage

使用场景

Use when the user wants to:
  • Transform, upgrade, or migrate code (Java, Python, Node.js, etc.)
  • Migrate AWS SDKs (Java SDK v1→v2, boto2→boto3, JS SDK v2→v3, etc.)
  • Run bulk code transformations at scale via AWS Batch/Fargate
  • Analyze which ATX transformations apply to their repositories
  • Perform comprehensive codebase analysis
  • Create a new custom Transformation Definition (TD)
当用户需要以下操作时使用:
  • 转换、升级或迁移代码(Java、Python、Node.js等)
  • 迁移AWS SDK(Java SDK v1→v2、boto2→boto3、JS SDK v2→v3等)
  • 通过AWS Batch/Fargate运行批量代码转换
  • 分析哪些ATX转换适用于其仓库
  • 执行全面的代码库分析
  • 创建新的自定义转换定义(TD)

Core Concepts

核心概念

  • Transformation Definition (TD): A reusable transformation recipe discovered via
    atx custom def list --json
  • Match Report: Auto-generated mapping of repos to applicable TDs based on code inspection
  • Local Mode: Runs ATX CLI on the user's machine (1-9 repos, max 3 concurrent)
  • Remote Mode: Runs transformations in AWS Batch/Fargate (10+ repos, or by preference)
  • 转换定义(TD):可复用的转换方案,可通过
    atx custom def list --json
    命令查看
  • 匹配报告:基于代码检查自动生成的仓库与适用TD的映射
  • 本地模式:在用户机器上运行ATX CLI(1-9个仓库,最多3个并发任务)
  • 远程模式:在AWS Batch/Fargate中运行转换(10个以上仓库,或用户偏好)

Philosophy

操作原则

Wait for the user. On activation, present what this skill can do and ask the user what they'd like to accomplish. Do NOT automatically inspect the working directory, open files, or any repository until the user explicitly provides repos to work with.
Once the user provides repositories, match — don't ask. Inspect those repositories and present which transformations apply automatically. Never show a raw TD list and ask the user to pick.
等待用户操作。激活时,展示此技能的功能并询问用户想要完成的任务。在用户明确提供仓库前,请勿自动检查工作目录、打开文件或任何仓库。
一旦用户提供仓库,自动匹配——不要询问。检查这些仓库并自动展示适用的转换。永远不要展示原始TD列表并让用户选择。

Prerequisites

前置条件

Prerequisite checks run ONCE at the start of a session. Do not repeat per repo. Do NOT run prerequisite checks until the user has stated what they want to do.
前置条件检查仅在会话开始时运行一次。请勿针对每个仓库重复检查。在用户说明想要执行的操作前,请勿运行前置条件检查。

0. Platform Check (Required — All Modes)

0. 平台检查(必填——所有模式)

Detect the user's operating system. If on Windows (not WSL), stop immediately and inform the user:
AWS Transform custom does not support native Windows. You need to install Windows Subsystem for Linux (WSL) and run this from within WSL.
Install WSL:
wsl --install
in PowerShell (as Administrator), then restart. After that, open a WSL terminal and re-run this skill from there.
Check by running:
bash
uname -s
  • Linux
    or
    Darwin
    → proceed normally
  • MINGW*
    ,
    MSYS*
    ,
    CYGWIN*
    , or any Windows-like output → block and show the WSL message above
  • Command fails, errors, or is not found → treat as native Windows, block and show the WSL message above
Do NOT proceed with any other steps on native Windows.
检测用户的操作系统。如果是Windows(非WSL),立即停止并告知用户:
AWS Transform自定义功能不支持原生Windows系统。您需要安装Windows Subsystem for Linux(WSL)并在WSL中运行此技能。
安装WSL:在PowerShell(以管理员身份)中运行
wsl --install
,然后重启。 完成后,打开WSL终端并重新运行此技能。
通过以下命令检查:
bash
uname -s
  • Linux
    Darwin
    → 正常继续
  • MINGW*
    MSYS*
    CYGWIN*
    或任何类Windows输出 → 阻止操作并显示上述WSL提示信息
  • 命令失败、报错或未找到 → 视为原生Windows系统,阻止操作并显示上述WSL提示信息
在原生Windows系统上,请勿继续任何其他步骤。

1. AWS CLI (Required — All Modes)

1. AWS CLI(必填——所有模式)

bash
aws --version
If not installed, guide the user:
  • macOS:
    brew install awscli
    or
    curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg" && sudo installer -pkg AWSCLIV2.pkg -target /
  • Linux:
    curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && unzip awscliv2.zip && sudo ./aws/install
Do NOT proceed until
aws --version
succeeds.
bash
aws --version
如果未安装,引导用户:
  • macOS:
    brew install awscli
    curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg" && sudo installer -pkg AWSCLIV2.pkg -target /
  • Linux:
    curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && unzip awscliv2.zip && sudo ./aws/install
aws --version
命令成功运行前,请勿继续。

2. AWS Credentials (Required — All Modes)

2. AWS凭证(必填——所有模式)

bash
aws sts get-caller-identity
If credentials are NOT configured, walk the user through setup:
AWS Transform custom requires AWS credentials to authenticate with the service. Configure authentication using one of the following methods.

1. AWS CLI Configure (~/.aws/credentials):
   aws configure

2. AWS Credentials File (manual). Configure credentials in ~/.aws/credentials:

[default]
aws_access_key_id = your_access_key
aws_secret_access_key = your_secret_key

3. Environment Variables. Set the following environment variables:

export AWS_ACCESS_KEY_ID=your_access_key
export AWS_SECRET_ACCESS_KEY=your_secret_key
export AWS_SESSION_TOKEN=your_session_token

You can also specify a profile using the AWS_PROFILE environment variable:

export AWS_PROFILE=your_profile_name
Do NOT proceed until credentials are verified. Re-run
aws sts get-caller-identity
after setup.
Note: environment variables set via
export
do not carry over between shell sessions. If the agent spawns a new shell, credentials set as env vars may be lost. Prefer
aws configure
or
~/.aws/credentials
for persistence.
bash
aws sts get-caller-identity
如果未配置凭证,引导用户完成设置:
AWS Transform自定义功能需要AWS凭证来与服务进行身份验证。请使用以下方法之一配置身份验证。

1. AWS CLI配置(~/.aws/credentials):
   aws configure

2. AWS凭证文件(手动配置)。在~/.aws/credentials中配置凭证:

[default]
aws_access_key_id = your_access_key
aws_secret_access_key = your_secret_key

3. 环境变量。设置以下环境变量:

export AWS_ACCESS_KEY_ID=your_access_key
export AWS_SECRET_ACCESS_KEY=your_secret_key
export AWS_SESSION_TOKEN=your_session_token

您也可以使用AWS_PROFILE环境变量指定配置文件:

export AWS_PROFILE=your_profile_name
在凭证验证通过前,请勿继续。设置完成后重新运行
aws sts get-caller-identity
注意:通过
export
设置的环境变量不会在shell会话之间保留。如果Agent启动新的shell,通过环境变量设置的凭证可能会丢失。建议优先使用
aws configure
~/.aws/credentials
进行持久化配置。

3. ATX CLI (Required — All Modes)

3. ATX CLI(必填——所有模式)

Required in all modes for TD discovery (
atx custom def list --json
). Local mode also uses it for transformation execution.
bash
atx --version
所有模式下都需要用于TD发现(
atx custom def list --json
)。 本地模式还需要它来执行转换。
bash
atx --version

**Mandatory: always run `atx update` once at the start of every session**, even if you just ran it recently. This catches new ATX CLI versions and new TDs. Run it before any other ATX command (including `atx custom def list --json`):

```bash
atx update
Do NOT skip this step. Do NOT ask the user whether to update. Do NOT condition it on whether the CLI "needs" an update. Run it unconditionally.

**强制要求:每次会话开始时必须运行一次`atx update`**,即使您最近刚运行过。这会获取ATX CLI的新版本和新TD。在运行任何其他ATX命令(包括`atx custom def list --json`)前运行此命令:

```bash
atx update
请勿跳过此步骤。请勿询问用户是否更新。请勿根据CLI是否“需要”更新来决定是否运行。无条件运行此命令。

4. IAM Permissions (Required — All Modes)

4. IAM权限(必填——所有模式)

Local mode requires
transform-custom:*
minimum. Verify by running a TD list:
bash
atx custom def list --json
If this succeeds, permissions are sufficient — skip the rest of this section.
If it fails with a permissions error, the caller needs the
transform-custom:*
IAM permission. Explain to the user what's needed and get confirmation before proceeding:
Your identity needs the
transform-custom:*
permission to use the ATX CLI. I can attach the AWS-managed policy
AWSTransformCustomFullAccess
to your identity. Shall I proceed?
Only after the user confirms, attach the managed policy:
bash
CALLER_ARN=$(aws sts get-caller-identity --query Arn --output text)
if echo "$CALLER_ARN" | grep -q ":user/"; then
  IDENTITY_NAME=$(echo "$CALLER_ARN" | awk -F'/' '{print $NF}')
  aws iam attach-user-policy --user-name "$IDENTITY_NAME" \
    --policy-arn "arn:aws:iam::aws:policy/AWSTransformCustomFullAccess"
elif echo "$CALLER_ARN" | grep -Eq ":assumed-role/|:role/"; then
  ROLE_NAME=$(echo "$CALLER_ARN" | sed 's/.*:\(assumed-\)\{0,1\}role\///' | cut -d'/' -f1)
  aws iam attach-role-policy --role-name "$ROLE_NAME" \
    --policy-arn "arn:aws:iam::aws:policy/AWSTransformCustomFullAccess"
fi
If the attachment command itself fails (e.g., insufficient IAM permissions, or an SSO-managed role), inform the user they need to ask their AWS administrator to attach the
AWSTransformCustomFullAccess
AWS-managed policy to their identity. For SSO users (role names starting with
AWSReservedSSO_
), this must be added to their IAM Identity Center permission set — it cannot be attached directly.
Do NOT proceed until
atx custom def list --json
succeeds.
Remote mode requires additional permissions (Lambda invoke, S3, KMS, Secrets Manager, CloudWatch). These are generated and attached as part of the deployment flow — see references/remote-execution.md.
See references/cli-reference.md for the full permission list.
本地模式至少需要
transform-custom:*
权限。通过运行TD列表命令验证:
bash
atx custom def list --json
如果命令成功,说明权限足够——跳过本节剩余内容。
如果命令因权限错误失败,调用者需要
transform-custom:*
IAM权限。向用户说明所需权限并在继续前获得确认:
您的身份需要
transform-custom:*
权限才能使用ATX CLI。 我可以将AWS托管策略
AWSTransformCustomFullAccess
附加到您的身份。是否继续?
仅在用户确认后,附加托管策略:
bash
CALLER_ARN=$(aws sts get-caller-identity --query Arn --output text)
if echo "$CALLER_ARN" | grep -q ":user/"; then
  IDENTITY_NAME=$(echo "$CALLER_ARN" | awk -F'/' '{print $NF}')
  aws iam attach-user-policy --user-name "$IDENTITY_NAME" \
    --policy-arn "arn:aws:iam::aws:policy/AWSTransformCustomFullAccess"
elif echo "$CALLER_ARN" | grep -Eq ":assumed-role/|:role/"; then
  ROLE_NAME=$(echo "$CALLER_ARN" | sed 's/.*:\(assumed-\)\{0,1\}role\///' | cut -d'/' -f1)
  aws iam attach-role-policy --role-name "$ROLE_NAME" \
    --policy-arn "arn:aws:iam::aws:policy/AWSTransformCustomFullAccess"
fi
如果附加命令本身失败(例如,IAM权限不足,或SSO管理的角色),告知用户需要联系其AWS管理员将
AWSTransformCustomFullAccess
AWS托管策略附加到其身份。对于SSO用户(角色名称以
AWSReservedSSO_
开头),必须将其添加到IAM Identity Center权限集——无法直接附加。
atx custom def list --json
命令成功运行前,请勿继续。
远程模式需要额外权限(Lambda调用、S3、KMS、Secrets Manager、 CloudWatch)。这些权限会在部署流程中自动生成并附加——请查看references/remote-execution.md
完整权限列表请查看references/cli-reference.md

5. AWS CDK (Remote Mode Only)

5. AWS CDK(仅远程模式)

Required for deploying remote infrastructure. Check if installed:
bash
cdk --version
If not installed, install it globally:
bash
npm install -g aws-cdk
Do NOT proceed with remote deployment until
cdk --version
succeeds.
部署远程基础设施所需。检查是否已安装:
bash
cdk --version
如果未安装,全局安装:
bash
npm install -g aws-cdk
cdk --version
命令成功运行前,请勿继续远程部署。

6. Remote Infrastructure (Remote Mode Only — Deferred)

6. 远程基础设施(仅远程模式——延迟检查)

Only verify if user chooses remote mode. The infrastructure CDK scripts are fetched at runtime by cloning
https://github.com/aws-samples/aws-transform-custom-samples.git
(branch
atx-remote-infra
) — they are not bundled with this skill. See references/remote-execution.md.
仅在用户选择远程模式时验证。基础设施CDK脚本会在运行时通过克隆
https://github.com/aws-samples/aws-transform-custom-samples.git
(分支
atx-remote-infra
)获取——它们不随此技能捆绑。请查看references/remote-execution.md

Workflow

工作流

Generate a session timestamp once and reuse it for all paths in this session:
bash
SESSION_TS=$(date +%Y%m%d-%H%M%S)
生成一次会话时间戳,并在本次会话的所有流程中重复使用:
bash
SESSION_TS=$(date +%Y%m%d-%H%M%S)

Step 1: Collect Repositories

步骤1:收集仓库信息

Ask the user for local paths or git URLs. Accept one or many. Do NOT assume the current working directory or open editor files are the target — wait for the user to explicitly provide repositories.
Accepted source formats:
  • Local paths — directories on the user's machine (e.g.,
    /home/user/my-project
    )
  • HTTPS git URLs — public or private (e.g.,
    https://github.com/org/repo.git
    )
  • SSH git URLs — e.g.,
    git@github.com:org/repo.git
  • S3 bucket path with zips — e.g.,
    s3://my-bucket/repos/
    containing zip files of repositories. Each zip becomes one transformation job.
询问用户本地路径或git URL。接受单个或多个仓库。请勿假设当前工作目录或打开的编辑器文件为目标——等待用户明确提供仓库。
接受的源格式:
  • 本地路径——用户机器上的目录(例如,
    /home/user/my-project
  • HTTPS git URL——公共或私有(例如,
    https://github.com/org/repo.git
  • SSH git URL——例如,
    git@github.com:org/repo.git
  • 包含zip文件的S3存储桶路径——例如,
    s3://my-bucket/repos/
    包含仓库的zip文件。每个zip文件对应一个转换任务。

S3 Bucket Input

S3存储桶输入

If the user provides an S3 path containing zip files, ask which execution mode they prefer (if not already specified). S3 input works in both modes:
Remote mode: Copy the zips from the user's bucket to the managed source bucket, then submit jobs pointing to the managed copies:
bash
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
SOURCE_BUCKET="atx-source-code-${ACCOUNT_ID}"
如果用户提供包含zip文件的S3路径,询问他们偏好的执行模式(如果尚未指定)。S3输入在两种模式下都可使用:
远程模式:将zip文件从用户的存储桶复制到托管源存储桶,然后提交指向托管副本的任务:
bash
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
SOURCE_BUCKET="atx-source-code-${ACCOUNT_ID}"

List all zips in the user's bucket path

列出用户存储桶路径中的所有zip文件

aws s3 ls s3://user-bucket/repos/ --recursive | grep '.zip$'
aws s3 ls s3://user-bucket/repos/ --recursive | grep '.zip$'

Copy each zip to the managed source bucket

将每个zip文件复制到托管源存储桶

aws s3 sync s3://user-bucket/repos/ s3://${SOURCE_BUCKET}/repos/ --exclude "" --include ".zip"

Then submit a batch job with one job per zip, each pointing to
`s3://${SOURCE_BUCKET}/repos/<filename>.zip`. The container handles zip extraction
automatically. See [references/multi-transformation.md](references/multi-transformation.md) for batch submission.
The managed source bucket has a 7-day lifecycle — copied zips auto-delete.

**Local mode:** Download and extract each zip locally:

```bash
mkdir -p ~/.aws/atx/custom/atx-agent-session/repos
aws s3 sync s3://user-bucket/repos/ ~/.aws/atx/custom/atx-agent-session/repos/ --exclude "*" --include "*.zip"
for zip in ~/.aws/atx/custom/atx-agent-session/repos/*.zip; do
  name=$(basename "$zip" .zip)
  unzip -qo "$zip" -d "$HOME/.aws/atx/custom/atx-agent-session/repos/${name}-$SESSION_TS/"
done
Use the extracted directories as
<repo-path>
for local execution. Standard local mode limits apply (max 3 concurrent repos).
aws s3 sync s3://user-bucket/repos/ s3://${SOURCE_BUCKET}/repos/ --exclude "" --include ".zip"

然后提交批量任务,每个zip文件对应一个任务,指向`s3://${SOURCE_BUCKET}/repos/<filename>.zip`。容器会自动处理zip文件提取。批量提交请查看[references/multi-transformation.md](references/multi-transformation.md)。
托管源存储桶有7天生命周期——复制的zip文件会自动删除。

**本地模式**:在本地下载并提取每个zip文件:

```bash
mkdir -p ~/.aws/atx/custom/atx-agent-session/repos
aws s3 sync s3://user-bucket/repos/ ~/.aws/atx/custom/atx-agent-session/repos/ --exclude "*" --include "*.zip"
for zip in ~/.aws/atx/custom/atx-agent-session/repos/*.zip; do
  name=$(basename "$zip" .zip)
  unzip -qo "$zip" -d "$HOME/.aws/atx/custom/atx-agent-session/repos/${name}-$SESSION_TS/"
done
将提取的目录作为本地执行的
<repo-path>
。适用标准本地模式限制(最多3个并发仓库)。

Private Repository Detection (Remote Mode)

私有仓库检测(远程模式)

Always ask the user — do NOT try to determine repo visibility yourself. Never attempt to clone, curl, or probe a URL to check if it's public or private. Simply ask the user. As soon as the user provides git URLs and remote mode is selected (or likely), ask:
"Are any of these repositories private? If so, the remote container needs credentials to clone them — I'll walk you through the setup."
Do NOT skip this question. Do NOT try to infer visibility by attempting a clone, curl, or any other network request. Just ask.
If the user confirms repos are private, determine the credential type based on URL format:
First, resolve the region (use for all Secrets Manager commands below):
bash
REGION=${AWS_REGION:-${AWS_DEFAULT_REGION:-$(aws configure get region 2>/dev/null)}}
REGION=${REGION:-us-east-1}
For HTTPS URLs — check whether a GitHub PAT is already configured:
bash
aws secretsmanager describe-secret --secret-id "atx/github-token" --region "$REGION" 2>/dev/null \
  && echo "CONFIGURED" || echo "NOT_CONFIGURED"
If CONFIGURED, ask the user: "A GitHub PAT is already stored. Would you like to keep using it, or replace it with a new one?" If they want to replace it, tell them to run:
aws secretsmanager put-secret-value --secret-id "atx/github-token" --region "$REGION" --secret-string "YOUR_TOKEN_HERE"
If NOT_CONFIGURED, explain what's needed and tell the user to run the create command:
"Private HTTPS repos need a GitHub Personal Access Token (PAT) stored in AWS Secrets Manager. The remote container fetches it at startup to clone your repos. The token stays in your AWS account — you can delete it anytime.
The PAT needs the
repo
scope for private repositories. Create one at https://github.com/settings/tokens and then run:
aws secretsmanager create-secret --name "atx/github-token" --region "$REGION" --secret-string "YOUR_TOKEN_HERE"
Delete anytime:
aws secretsmanager delete-secret --secret-id atx/github-token --region "$REGION" --force-delete-without-recovery
"
Do NOT ask the user to paste their token in chat. They run the command themselves. Wait for the user to confirm it's done, then verify:
bash
aws secretsmanager describe-secret --secret-id "atx/github-token" --region "$REGION" 2>/dev/null \
  && echo "CONFIGURED" || echo "NOT_CONFIGURED"
For SSH URLs (
git@...
or
ssh://...
) — check whether an SSH key is configured:
bash
aws secretsmanager describe-secret --secret-id "atx/ssh-key" --region "$REGION" 2>/dev/null \
  && echo "CONFIGURED" || echo "NOT_CONFIGURED"
If CONFIGURED, ask the user: "An SSH key is already stored. Would you like to keep using it, or replace it with a new one?" If they want to replace it, tell them to run:
aws secretsmanager put-secret-value --secret-id "atx/ssh-key" --region "$REGION" --secret-string "$(cat <path-to-your-private-key>)"
If NOT_CONFIGURED, explain what's needed and tell the user to run the create command:
"SSH repos need an SSH private key stored in AWS Secrets Manager. The remote container fetches it at startup to clone your repos.
Run:
aws secretsmanager create-secret --name "atx/ssh-key" --region "$REGION" --secret-string "$(cat <path-to-your-private-key>)"
Delete anytime:
aws secretsmanager delete-secret --secret-id atx/ssh-key --region "$REGION" --force-delete-without-recovery
"
Do NOT ask the user to paste their SSH key in chat. They run the command themselves.
For local mode, private repo credentials are not needed — the user's local git config handles authentication. Skip this check entirely for local mode.
务必询问用户——请勿自行尝试判断仓库可见性。永远不要尝试克隆、curl或探测URL来检查是否为公共或私有仓库。直接询问用户。一旦用户提供git URL且选择(或可能选择)远程模式,询问:
"这些仓库中有私有仓库吗?如果有,远程容器需要凭证才能克隆它们——我会引导您完成设置。"
请勿跳过此问题。请勿尝试通过克隆、curl或任何其他网络请求推断可见性。只需询问。
如果用户确认存在私有仓库,根据URL格式确定凭证类型:
首先,解析区域(用于以下所有Secrets Manager命令):
bash
REGION=${AWS_REGION:-${AWS_DEFAULT_REGION:-$(aws configure get region 2>/dev/null)}}
REGION=${REGION:-us-east-1}
对于HTTPS URL——检查是否已配置GitHub PAT:
bash
aws secretsmanager describe-secret --secret-id "atx/github-token" --region "$REGION" 2>/dev/null \
  && echo "已配置" || echo "未配置"
如果已配置,询问用户:"已存储GitHub PAT。您想要继续使用它,还是替换为新的?" 如果用户想要替换,告知他们运行:
aws secretsmanager put-secret-value --secret-id "atx/github-token" --region "$REGION" --secret-string "YOUR_TOKEN_HERE"
如果未配置,说明所需内容并告知用户运行创建命令:
"私有HTTPS仓库需要存储在AWS Secrets Manager中的GitHub个人访问令牌(PAT)。远程容器会在启动时获取它来克隆您的仓库。 令牌会保留在您的AWS账户中——您可以随时删除它。
PAT需要针对私有仓库的
repo
权限。请在https://github.com/settings/tokens创建一个,然后运行:
aws secretsmanager create-secret --name "atx/github-token" --region "$REGION" --secret-string "YOUR_TOKEN_HERE"
随时删除:
aws secretsmanager delete-secret --secret-id atx/github-token --region "$REGION" --force-delete-without-recovery
"
请勿要求用户在聊天中粘贴令牌。让他们自行运行命令。等待用户确认完成,然后验证:
bash
aws secretsmanager describe-secret --secret-id "atx/github-token" --region "$REGION" 2>/dev/null \
  && echo "已配置" || echo "未配置"
对于SSH URL
git@...
ssh://...
)——检查是否已配置SSH密钥:
bash
aws secretsmanager describe-secret --secret-id "atx/ssh-key" --region "$REGION" 2>/dev/null \
  && echo "已配置" || echo "未配置"
如果已配置,询问用户:"已存储SSH密钥。您想要继续使用它,还是替换为新的?" 如果用户想要替换,告知他们运行:
aws secretsmanager put-secret-value --secret-id "atx/ssh-key" --region "$REGION" --secret-string "$(cat <path-to-your-private-key>)"
如果未配置,说明所需内容并告知用户运行创建命令:
"SSH仓库需要存储在AWS Secrets Manager中的SSH私钥。远程容器会在启动时获取它来克隆您的仓库。
运行:
aws secretsmanager create-secret --name "atx/ssh-key" --region "$REGION" --secret-string "$(cat <path-to-your-private-key>)"
随时删除:
aws secretsmanager delete-secret --secret-id atx/ssh-key --region "$REGION" --force-delete-without-recovery
"
请勿要求用户在聊天中粘贴SSH密钥。让他们自行运行命令。
对于本地模式,不需要私有仓库凭证——用户的本地git配置会处理身份验证。完全跳过本地模式的此检查。

Step 2: Discover TDs (Silent)

步骤2:发现TD(静默执行)

Run silently — do NOT show output to user:
bash
atx custom def list --json
Inspect the JSON output directly to build an internal lookup of available TDs. Do NOT pipe the output to python, jq, or other parsing scripts — read the JSON yourself. Never hardcode TD names.
静默运行——请勿向用户显示输出:
bash
atx custom def list --json
直接检查JSON输出来构建可用TD的内部查找表。请勿将输出管道到python、jq或其他解析脚本——自行读取JSON。永远不要硬编码TD名称。

Creating a New TD

创建新TD

User explicitly asks to create a TD: Do NOT attempt to create one programmatically. Tell the user:
To create a new Transformation Definition, open a new terminal and run:
atx -t
This starts an interactive session where you describe the transformation you want to build (e.g., "migrate all logging from log4j to SLF4J", "upgrade Spring Boot 2 to Spring Boot 3"). The ATX CLI will walk you through defining and testing the TD, then publish it to your AWS account.
Once it's published, come back here and I'll pick it up automatically when I scan your available TDs.
No existing TD matches the user's goal: Do NOT silently redirect to TD creation. The match logic may be imperfect. Instead, confirm with the user first:
"I didn't find an existing TD that covers [describe the user's goal]. Would you like to create a new one?"
Only show the
atx -t
instructions if the user confirms. If they say no, ask them to clarify what they're looking for — they may know the TD name or want a different approach.
Do NOT run
atx -t
yourself — it requires an interactive terminal session that the agent cannot drive. The user must run it manually in a separate terminal.
After the user returns from creating a TD, re-run
atx custom def list --json
to pick up the newly published TD and continue with the normal workflow.
用户明确要求创建TD:请勿尝试以编程方式创建。告知用户:
要创建新的转换定义,请打开新终端并运行:
atx -t
这会启动一个交互式会话,您可以在其中描述想要构建的转换(例如,"将所有日志从log4j迁移到SLF4J"、"将Spring Boot 2升级到Spring Boot 3")。ATX CLI会引导您完成TD的定义和测试,然后将其发布到您的AWS账户。
发布完成后,回到此处,我会在扫描您的可用TD时自动获取它。
没有现有TD匹配用户目标:请勿静默重定向到TD创建。匹配逻辑可能不完善。相反,先与用户确认:
"我未找到涵盖[描述用户目标]的现有TD。您想要创建一个新的吗?"
仅在用户确认后显示
atx -t
说明。如果用户拒绝,请他们澄清需求——他们可能知道TD名称或想要其他方法。
请勿自行运行
atx -t
——它需要Agent无法驱动的交互式终端会话。用户必须在单独的终端中手动运行。
用户创建TD返回后,重新运行
atx custom def list --json
以获取新发布的TD并继续正常工作流。

Step 3: Inspect Each Repository

步骤3:检查每个仓库

Perform lightweight inspection only — check config files for key signals:
SignalFiles to CheckLikely TD Type
Python version
.python-version
,
pyproject.toml
,
setup.cfg
,
requirements.txt
Python version upgrade
Java version
pom.xml
(
<java.version>
),
build.gradle
(
sourceCompatibility
),
.java-version
Java version upgrade
Node.js version
package.json
(
engines.node
),
.nvmrc
,
.node-version
Node.js version upgrade
Python boto2
import boto
(NOT boto3)
boto2→boto3 migration
Java SDK v1
com.amazonaws
imports,
aws-java-sdk
in pom.xml
Java SDK v1→v2
Node.js SDK v2
"aws-sdk"
in package.json (NOT
@aws-sdk
)
JS SDK v2→v3
x86 Java
x86_64
/
amd64
in Dockerfiles, build configs
Graviton migration
Cross-reference detected signals against TDs from Step 2. Only match TDs that actually exist in the user's account.
See references/repo-analysis.md for full detection commands.
仅执行轻量级检查——检查配置文件中的关键信号:
信号检查文件可能的TD类型
Python版本
.python-version
pyproject.toml
setup.cfg
requirements.txt
Python版本升级
Java版本
pom.xml
<java.version>
)、
build.gradle
sourceCompatibility
)、
.java-version
Java版本升级
Node.js版本
package.json
engines.node
)、
.nvmrc
.node-version
Node.js版本升级
Python boto2
import boto
(不是boto3)
boto2→boto3迁移
Java SDK v1
com.amazonaws
导入、pom.xml中的
aws-java-sdk
Java SDK v1→v2
Node.js SDK v2package.json中的
"aws-sdk"
(不是
@aws-sdk
JS SDK v2→v3
x86 JavaDockerfile、构建配置中的
x86_64
/
amd64
Graviton迁移
将检测到的信号与步骤2中的TD交叉引用。仅匹配用户账户中实际存在的TD。
完整检测命令请查看references/repo-analysis.md

Step 4: Present Match Report

步骤4:展示匹配报告

Format:
Transformation Match Report
=============================
Repository: <name> (<path>)
  Language: <lang> <version>
  Matching TDs:
    - <td-name> — <description>

Summary: N repos analyzed, M have applicable transformations (T total jobs)
Present the match report and wait for user confirmation before proceeding. Do NOT start any transformation without explicit user consent.
格式:
转换匹配报告
=============================
仓库:<名称>(<路径>)
  语言:<语言> <版本>
  匹配的TD:
    - <td名称> — <描述>

摘要:分析了N个仓库,M个仓库有适用的转换(共T个任务)
展示匹配报告并等待用户确认后继续。在获得用户明确同意前,请勿启动任何转换。

Step 5: Collect Configuration

步骤5:收集配置信息

Ask the user for any additional plan context (e.g., target version for upgrade TDs). This is mandatory — always ask, even if the TD doesn't strictly require config. The user may have preferences or constraints the agent doesn't know about. Skip only if the user explicitly says no additional context is needed.
询问用户任何额外的计划上下文(例如,升级TD的目标版本)。这是必填项——始终询问,即使TD严格不需要配置。用户可能有Agent不知道的偏好或约束。仅在用户明确表示不需要额外上下文时跳过。

Step 6: Verify Runtime Compatibility (Remote and Local)

步骤6:验证运行时兼容性(远程和本地模式)

Remote Mode

远程模式

Before submitting remote jobs, determine whether the pre-built image covers the target runtime or if a custom Docker build is needed.
Pre-built image includes:
  • Java: 8, 11, 17, 21, 25 (Amazon Corretto) with Maven and Gradle 9.4
  • Python: 3.8, 3.9, 3.10, 3.11, 3.12, 3.13, 3.14 (dnf + pyenv)
  • Node.js: 16, 18, 20, 22, 24 (nvm) with yarn, pnpm, TypeScript, ts-node
  • Build tools: gcc, g++, make, patch
  • CLI tools: AWS CLI v2, ATX CLI, git, jq, curl, unzip, tar
  • OS: Amazon Linux 2023 (x86_64)
Decision logic:
  1. Based on the transformation requirements (source runtime, target runtime, build tools, and any other dependencies), determine whether everything needed is available in the pre-built image listed above
  2. If yes → use the pre-built image path (no Docker required). Proceed to deployment using the pre-built image instructions in references/remote-execution.md.
  3. If no → use the custom image path (Docker required). Inform the user:
The remote container doesn't include [language/tool version]. To run this transformation remotely, I'll need to build a custom container image. This requires Docker installed and running on your machine. It's a one-time change — about 5-10 minutes. Want me to proceed?
If the user confirms, follow the custom image path in references/remote-execution.md: clear
prebuiltImageUri
, customize the Dockerfile, and deploy.
If the user declines, suggest local mode as an alternative (if the tools are available on their machine).
Dockerfile customization (custom image path only):
First, read the Dockerfile to see what's installed:
bash
ATX_INFRA_DIR="$HOME/.aws/atx/custom/remote-infra"
cat "$ATX_INFRA_DIR/container/Dockerfile" 2>/dev/null
  1. Ensure the infrastructure repo is cloned and up to date:
    bash
    ATX_INFRA_DIR="$HOME/.aws/atx/custom/remote-infra"
    if [ -d "$ATX_INFRA_DIR" ]; then
      git -C "$ATX_INFRA_DIR" add -A
      git -C "$ATX_INFRA_DIR" commit -m "Local customizations" -q 2>/dev/null || true
      git -C "$ATX_INFRA_DIR" pull -q
    else
      git clone -b atx-remote-infra --single-branch https://github.com/aws-samples/aws-transform-custom-samples.git "$ATX_INFRA_DIR"
    fi
    If
    git pull
    reports a merge conflict, resolve it by keeping both upstream changes and the user's customizations in the
    CUSTOM LANGUAGES AND TOOLS
    section of the Dockerfile, then commit the merge.
  2. Edit
    $ATX_INFRA_DIR/container/Dockerfile
    . Find the section marked
    # CUSTOM LANGUAGES AND TOOLS
    and insert
    RUN
    commands after the comment block, before the
    USER root
    line.
    For missing versions of already-installed languages, add the version in the custom section. Examples:
    dockerfile
    # Java 23 (Amazon Corretto — direct install, must run as root)
    # Do NOT use dnf in the custom section — pyenv overrides the system python3
    # that dnf depends on, causing "No module named 'dnf'" errors.
    USER root
    RUN curl -fsSL "https://corretto.aws/downloads/latest/amazon-corretto-23-x64-linux-jdk.tar.gz" -o /tmp/corretto23.tar.gz && \
        mkdir -p /usr/lib/jvm && \
        tar -xzf /tmp/corretto23.tar.gz -C /usr/lib/jvm && \
        rm /tmp/corretto23.tar.gz && \
        ln -sfn /usr/lib/jvm/amazon-corretto-23.* /usr/lib/jvm/corretto-23
    
    # Node.js 23 (via nvm — must run as atxuser)
    USER atxuser
    RUN . /home/atxuser/.nvm/nvm.sh && nvm install 23
    USER root
    
    # Python 3.15 (via pyenv — must run as atxuser)
    USER atxuser
    RUN eval "$(/home/atxuser/.pyenv/bin/pyenv init -)" && \
        MAKE_OPTS="-j$(nproc)" /home/atxuser/.pyenv/bin/pyenv install 3.15.0
    USER root
    For entirely new languages, avoid
    dnf
    in the custom section — pyenv overrides the system python3 that
    dnf
    depends on. Use language-specific installers instead:
    dockerfile
    # Go
    RUN curl -fsSL https://go.dev/dl/go1.22.0.linux-amd64.tar.gz | tar -C /usr/local -xz
    ENV PATH="/usr/local/go/bin:$PATH"
    
    # Ruby (via rbenv — must run as atxuser)
    USER atxuser
    RUN git clone --depth 1 https://github.com/rbenv/rbenv.git /home/atxuser/.rbenv && \
        git clone --depth 1 https://github.com/rbenv/ruby-build.git /home/atxuser/.rbenv/plugins/ruby-build && \
        /home/atxuser/.rbenv/bin/rbenv install 3.3.0 && \
        /home/atxuser/.rbenv/bin/rbenv global 3.3.0
    ENV PATH="/home/atxuser/.rbenv/shims:/home/atxuser/.rbenv/bin:$PATH"
    USER root
    
    # Rust
    USER atxuser
    RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
    ENV PATH="/home/atxuser/.cargo/bin:$PATH"
    USER root
  3. Update the version switcher in
    $ATX_INFRA_DIR/container/entrypoint.sh
    . Find the relevant
    switch_*_version
    function and add a case for the new version. For Java versions installed via direct download, find the extracted directory name under
    /usr/lib/jvm/
    . For example, to add Java 23:
    bash
    # In switch_java_version(), add to the case statement:
    23) java_home="/usr/lib/jvm/corretto-23" ;;
    Check the actual directory name:
    ls /usr/lib/jvm/
    — use the directory that matches the version you installed.
    For Node.js, nvm handles arbitrary versions automatically — no entrypoint change needed. For Python, pyenv handles arbitrary versions — no entrypoint change needed (the existing pyenv fallback logic finds it).
  4. Deploy (or redeploy):
    cd "$ATX_INFRA_DIR" && ./setup.sh
    CDK hashes the
    container/
    directory — any file change triggers a rebuild and push to ECR automatically.
After redeployment, set the
environment
field on the job to the exact target version (e.g.,
"JAVA_VERSION":"23"
, not
"21"
). The version switcher in the entrypoint reads this and activates the correct runtime.
If the user declines, suggest local mode as an alternative (if the tools are available on their machine).
提交远程任务前,确定预构建镜像是否涵盖目标运行时,或者是否需要自定义Docker构建。
预构建镜像包含:
  • Java:8、11、17、21、25(Amazon Corretto),附带Maven和Gradle 9.4
  • Python:3.8、3.9、3.10、3.11、3.12、3.13、3.14(dnf + pyenv)
  • Node.js:16、18、20、22、24(nvm),附带yarn、pnpm、TypeScript、ts-node
  • 构建工具:gcc、g++、make、patch
  • CLI工具:AWS CLI v2、ATX CLI、git、jq、curl、unzip、tar
  • 操作系统:Amazon Linux 2023(x86_64)
决策逻辑:
  1. 根据转换要求(源运行时、目标运行时、构建工具和任何其他依赖项),确定上述预构建镜像是否包含所有所需内容
  2. 如果 → 使用预构建镜像路径(无需Docker)。使用references/remote-execution.md中的预构建镜像说明继续部署
  3. 如果 → 使用自定义镜像路径(需要Docker)。告知用户:
远程容器不包含[语言/工具版本]。要远程运行此转换,我需要构建自定义容器镜像。这需要在您的机器上安装并运行Docker。这是一次性操作——大约需要5-10分钟。是否继续?
如果用户确认,请遵循references/remote-execution.md中的自定义镜像路径:清除
prebuiltImageUri
,自定义Dockerfile,然后部署。
如果用户拒绝,建议使用本地模式作为替代(如果用户机器上有可用工具)。
Dockerfile自定义(仅自定义镜像路径):
首先,读取Dockerfile查看已安装的内容:
bash
ATX_INFRA_DIR="$HOME/.aws/atx/custom/remote-infra"
cat "$ATX_INFRA_DIR/container/Dockerfile" 2>/dev/null
  1. 确保基础设施仓库已克隆并保持最新:
    bash
    ATX_INFRA_DIR="$HOME/.aws/atx/custom/remote-infra"
    if [ -d "$ATX_INFRA_DIR" ]; then
      git -C "$ATX_INFRA_DIR" add -A
      git -C "$ATX_INFRA_DIR" commit -m "Local customizations" -q 2>/dev/null || true
      git -C "$ATX_INFRA_DIR" pull -q
    else
      git clone -b atx-remote-infra --single-branch https://github.com/aws-samples/aws-transform-custom-samples.git "$ATX_INFRA_DIR"
    fi
    如果
    git pull
    报告合并冲突,通过在Dockerfile的
    CUSTOM LANGUAGES AND TOOLS
    部分保留上游更改和用户自定义内容来解决冲突,然后提交合并。
  2. 编辑
    $ATX_INFRA_DIR/container/Dockerfile
    。找到标记为
    # CUSTOM LANGUAGES AND TOOLS
    的部分,在注释块之后、
    USER root
    行之前插入
    RUN
    命令。
    对于已安装语言的缺失版本,在自定义部分添加该版本。示例:
    dockerfile
    # Java 23(Amazon Corretto — 直接安装,必须以root身份运行)
    # 请勿在自定义部分使用dnf — pyenv会覆盖dnf依赖的系统python3
    # 导致"No module named 'dnf'"错误。
    USER root
    RUN curl -fsSL "https://corretto.aws/downloads/latest/amazon-corretto-23-x64-linux-jdk.tar.gz" -o /tmp/corretto23.tar.gz && \
        mkdir -p /usr/lib/jvm && \
        tar -xzf /tmp/corretto23.tar.gz -C /usr/lib/jvm && \
        rm /tmp/corretto23.tar.gz && \
        ln -sfn /usr/lib/jvm/amazon-corretto-23.* /usr/lib/jvm/corretto-23
    
    # Node.js 23(通过nvm — 必须以atxuser身份运行)
    USER atxuser
    RUN . /home/atxuser/.nvm/nvm.sh && nvm install 23
    USER root
    
    # Python 3.15(通过pyenv — 必须以atxuser身份运行)
    USER atxuser
    RUN eval "$(/home/atxuser/.pyenv/bin/pyenv init -)" && \
        MAKE_OPTS="-j$(nproc)" /home/atxuser/.pyenv/bin/pyenv install 3.15.0
    USER root
    对于全新语言,避免在自定义部分使用
    dnf
    — pyenv会覆盖
    dnf
    依赖的系统python3。请改用语言特定的安装程序:
    dockerfile
    # Go
    RUN curl -fsSL https://go.dev/dl/go1.22.0.linux-amd64.tar.gz | tar -C /usr/local -xz
    ENV PATH="/usr/local/go/bin:$PATH"
    
    # Ruby(通过rbenv — 必须以atxuser身份运行)
    USER atxuser
    RUN git clone --depth 1 https://github.com/rbenv/rbenv.git /home/atxuser/.rbenv && \
        git clone --depth 1 https://github.com/rbenv/ruby-build.git /home/atxuser/.rbenv/plugins/ruby-build && \
        /home/atxuser/.rbenv/bin/rbenv install 3.3.0 && \
        /home/atxuser/.rbenv/bin/rbenv global 3.3.0
    ENV PATH="/home/atxuser/.rbenv/shims:/home/atxuser/.rbenv/bin:$PATH"
    USER root
    
    # Rust
    USER atxuser
    RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
    ENV PATH="/home/atxuser/.cargo/bin:$PATH"
    USER root
  3. 更新
    $ATX_INFRA_DIR/container/entrypoint.sh
    中的版本切换器。找到相关的
    switch_*_version
    函数并为新版本添加case语句。对于通过直接下载安装的Java版本,在
    /usr/lib/jvm/
    下找到提取的目录名称。例如,添加Java 23:
    bash
    # 在switch_java_version()中,添加到case语句:
    23) java_home="/usr/lib/jvm/corretto-23" ;;
    检查实际目录名称:
    ls /usr/lib/jvm/
    — 使用与您安装的版本匹配的目录。
    对于Node.js,nvm会自动处理任意版本——无需更改entrypoint。对于Python,pyenv会处理任意版本——无需更改entrypoint(现有的pyenv回退逻辑会找到它)。
  4. 部署(或重新部署):
    cd "$ATX_INFRA_DIR" && ./setup.sh
    CDK会对
    container/
    目录进行哈希处理——任何文件更改都会自动触发重建并推送到ECR。
重新部署后,将作业的
environment
字段设置为确切的目标版本(例如,
"JAVA_VERSION":"23"
,而不是
"21"
)。entrypoint中的版本切换器会读取此值并激活正确的运行时。
如果用户拒绝,建议使用本地模式作为替代(如果用户机器上有可用工具)。

Local Mode

本地模式

Before running local transformations, verify the user has the target runtime version installed. This applies to any language or runtime the transformation targets — Java, Python, Node.js, Ruby, Go, Rust, .NET, etc. Check the current version of whatever runtime the TD requires. For example:
bash
java -version    # Java transformations
python3 --version # Python transformations
node --version   # Node.js transformations
ruby --version   # Ruby transformations
go version       # Go transformations
If the target version is not active, check whether it's already installed:
bash
undefined
运行本地转换前,验证用户是否已安装目标运行时版本。这适用于转换目标的任何语言或运行时——Java、Python、Node.js、Ruby、Go、Rust、.NET等。检查TD所需的任何运行时的当前版本。例如:
bash
java -version    # Java转换
python3 --version # Python转换
node --version   # Node.js转换
ruby --version   # Ruby转换
go version       # Go转换
如果目标版本未激活,检查是否已安装:
bash
undefined

Java: check common install locations

Java:检查常见安装位置

/usr/libexec/java_home -V 2>&1 # macOS ls /usr/lib/jvm/ 2>/dev/null # Linux
/usr/libexec/java_home -V 2>&1 # macOS ls /usr/lib/jvm/ 2>/dev/null # Linux

Python: check if the specific version binary exists

Python:检查特定版本的二进制文件是否存在

which python3.12 2>/dev/null # adjust version as needed
which python3.12 2>/dev/null # 根据需要调整版本

Node.js: check if nvm is available, or look for the binary

Node.js:检查nvm是否可用,或查找二进制文件

command -v nvm &>/dev/null && nvm ls 2>/dev/null which node 2>/dev/null && node --version

If the target version is found, switch to it:

- Java: `export JAVA_HOME=<path to JDK> && export PATH="$JAVA_HOME/bin:$PATH"`
- Python: `pyenv shell 3.15.0`
- Node.js: `nvm use 23`

Only if the target version is not installed at all, ask the user for permission before installing. Do NOT install runtimes without explicit user confirmation.
Suggest the appropriate version manager:

- Java: `brew install --cask corretto23` (macOS), `sudo yum install java-23-amazon-corretto-devel` (RHEL/AL2), or `sudo apt install java-23-amazon-corretto-jdk` (Debian/Ubuntu)
- Python: `pyenv install 3.15.0 && pyenv shell 3.15.0`, or `brew install python@3.15`
- Node.js: `nvm install 23 && nvm use 23`

The active runtime must match the transformation's target version so that builds
and tests run correctly. Do NOT proceed with the transformation until the correct
version is active.
command -v nvm &>/dev/null && nvm ls 2>/dev/null which node 2>/dev/null && node --version

如果找到目标版本,切换到该版本:

- Java:`export JAVA_HOME=<JDK路径> && export PATH="$JAVA_HOME/bin:$PATH"`
- Python:`pyenv shell 3.15.0`
- Node.js:`nvm use 23`

仅当目标版本完全未安装时,在安装前询问用户许可。请勿在未获得用户明确确认的情况下安装运行时。建议使用适当的版本管理器:

- Java:`brew install --cask corretto23`(macOS)、`sudo yum install java-23-amazon-corretto-devel`(RHEL/AL2)或`sudo apt install java-23-amazon-corretto-jdk`(Debian/Ubuntu)
- Python:`pyenv install 3.15.0 && pyenv shell 3.15.0`,或`brew install python@3.15`
- Node.js:`nvm install 23 && nvm use 23`

活动运行时必须与转换的目标版本匹配,以便构建和测试正确运行。在正确版本激活前,请勿继续转换。

Step 7: Confirm Transformation Plan

步骤7:确认转换计划

Present final plan with repo, TD, config, and execution mode. Do NOT proceed until user confirms.
展示包含仓库、TD、配置和执行模式的最终计划。在用户确认前,请勿继续。

Step 8: Execute

步骤8:执行

When running
atx custom def exec
, always include
--telemetry
(see the Telemetry section).
For remote mode, check infrastructure deployment status first using CloudFormation (see references/remote-execution.md — Infrastructure Check section). Do NOT check deployment by probing Lambda function names.
  • 1 repo: See references/single-transformation.md
  • Multiple repos: See references/multi-transformation.md
运行
atx custom def exec
时,始终包含
--telemetry
(请查看遥测部分)。
对于远程模式,首先使用CloudFormation检查基础设施部署状态(请查看references/remote-execution.md——基础设施检查部分)。请勿通过探测Lambda函数名称来检查部署。
  • 1个仓库:请查看references/single-transformation.md
  • 多个仓库:请查看references/multi-transformation.md

Execution Modes

执行模式

ModeBest ForPrerequisites
Local (default for 1-9 repos)Quick transforms, dev machines with ATXATX CLI installed
Remote (recommended for 10+ repos)Bulk transforms, up to 512 repos (128 concurrent per batch)AWS account, auto-deployed infra
Mode inference:
  • User says "local"/"here"/"on my machine" → Local (honor the request regardless of repo count)
  • User says "remote"/"cloud"/"AWS"/"batch"/"at scale" → Remote
  • 10+ repos without preference → Recommend remote, explain local cap of 3 concurrent
  • 1-9 repos without preference → Local, note remote available
See references/remote-execution.md for infrastructure setup.
模式最适合场景前置条件
本地(1-9个仓库默认)快速转换、已安装ATX的开发机器已安装ATX CLI
远程(10个以上仓库推荐)批量转换、最多512个仓库(每批最多128个并发任务)AWS账户、自动部署的基础设施
模式推断:
  • 用户说"本地"/"这里"/"在我的机器上" → 本地模式(无论仓库数量多少,都尊重用户请求)
  • 用户说"远程"/"云端"/"AWS"/"批量"/"大规模" → 远程模式
  • 10个以上仓库且无偏好 → 推荐远程模式,说明本地模式最多3个并发任务
  • 1-9个仓库且无偏好 → 本地模式,说明可使用远程模式
基础设施设置请查看references/remote-execution.md

Critical Rules

关键规则

  1. Discover TDs dynamically — Always run
    atx custom def list --json
    . Never hardcode TD names.
  2. Match, don't ask — Inspect repos and present matches. Never show raw TD lists.
  3. Lightweight inspection only — Check config files and key signals. No deep analysis.
  4. Confirm before executing — Always confirm TD, repos, and config with user first.
  5. No time estimates — Never include duration predictions.
  6. Parallel execution — Local: max 3 concurrent repos. Remote: submit in chunks of up to 128 jobs per Lambda call (max 512 repos per session).
  7. Preserve outputs — Do not delete generated output folders.
  8. Recommend remote for 10+ repos — Default to local for 1-9 repos. Recommend remote for 10+. Always respect user preference.
  9. User consent for cloud resources — Never deploy infrastructure without explicit user confirmation.
  10. Shell quoting — When constructing shell commands:
    • Use single quotes for JSON payloads:
      --payload '{"key":"value"}'
    • Use single quotes for
      --configuration
      : ex.
      --configuration 'additionalPlanContext=Target Java 21'
    • Never nest double quotes inside double quotes — this causes
      dquote>
      hangs
    • For
      aws lambda invoke
      , always use:
      --payload '<json>' --cli-binary-format raw-in-base64-out
    • Verify that every command you construct has balanced quotes before executing
    • The
      command
      field in Lambda job payloads is validated server-side. Avoid these characters in the command string:
      ( ) ! # % ^ * ? \ { } | ; > <
      and backticks. Inside
      additionalPlanContext
      , also avoid commas.
  11. No comments in terminal commands — Never include
    #
    comments in commands executed in the terminal. Comments cause
    command not found: #
    errors. If you need to explain a command, do it in chat before or after running it.
  12. Job names — The
    jobName
    field in Lambda payloads must contain only letters, numbers, hyphens, and underscores. No dots, spaces, or special characters. For example, use
    EPAM-NodeJS
    not
    EPAM-Node.js
    .
  1. 动态发现TD — 始终运行
    atx custom def list --json
    。永远不要硬编码TD名称。
  2. 自动匹配,不要询问 — 检查仓库并展示匹配结果。永远不要展示原始TD列表。
  3. 仅轻量级检查 — 检查配置文件和关键信号。不进行深度分析。
  4. 执行前确认 — 始终与用户确认TD、仓库和配置。
  5. 不提供时间估计 — 永远不要包含持续时间预测。
  6. 并行执行 — 本地模式:最多3个并发仓库。远程模式:每次Lambda调用提交最多128个任务(每会话最多512个仓库)。
  7. 保留输出 — 不要删除生成的输出文件夹。
  8. 10个以上仓库推荐远程模式 — 1-9个仓库默认使用本地模式。10个以上仓库推荐远程模式。始终尊重用户偏好。
  9. 云资源需用户同意 — 永远不要在未获得用户明确确认的情况下部署基础设施。
  10. Shell引用 — 构建shell命令时:
    • 对JSON负载使用单引号:
      --payload '{"key":"value"}'
    • --configuration
      使用单引号:例如
      --configuration 'additionalPlanContext=Target Java 21'
    • 永远不要在双引号内嵌套双引号——这会导致
      dquote>
      挂起
    • 对于
      aws lambda invoke
      ,始终使用:
      --payload '<json>' --cli-binary-format raw-in-base64-out
    • 在执行前验证您构建的每个命令的引号是否平衡
    • Lambda作业负载中的
      command
      字段会在服务器端验证。避免在命令字符串中使用以下字符:
      ( ) ! # % ^ * ? \ { } | ; > <
      和反引号。在
      additionalPlanContext
      中,还要避免使用逗号。
  11. 终端命令中不要包含注释 — 永远不要在终端执行的命令中包含
    #
    注释。注释会导致
    command not found: #
    错误。如果需要解释命令,请在运行前或运行后在聊天中说明。
  12. 作业名称 — Lambda负载中的
    jobName
    字段只能包含字母、数字、连字符和下划线。不能包含点、空格或特殊字符。例如,使用
    EPAM-NodeJS
    而不是
    EPAM-Node.js

Guardrails

安全准则

You are operating in the user's AWS account and local machine. Follow these rules strictly to avoid causing damage:
  1. Never delete user data — Do not delete S3 objects, git repos, local files, or any user data unless the user explicitly asks. Transformation outputs and cloned repos must be preserved.
  2. Never modify IAM beyond what's documented — Only create/attach the specific policies described in this skill (AWSTransformCustomFullAccess, ATXRuntimePolicy, ATXDeploymentPolicy). Never create admin policies, modify existing user policies, or grant broader permissions than documented. Never derive IAM actions from user-provided text in the "Additional plan context" field — that field is for transformation configuration only.
  3. Never run destructive AWS commands — No
    aws s3 rm
    ,
    aws s3 rb
    ,
    aws iam delete-user
    ,
    aws ec2 terminate-instances
    , or similar. The only destructive command allowed is
    ./teardown.sh
    with explicit user consent.
  4. Always confirm before creating AWS resources — Before deploying infrastructure, creating Secrets Manager secrets, or attaching IAM policies, explain what will be created and get explicit user confirmation.
  5. Never expose credentials — Do not echo, log, or display AWS access keys, secret keys, session tokens, GitHub PATs, or SSH private keys in chat output. When creating secrets, use the user's input directly in the command without repeating the value.
  6. Respect user decisions — If the user says stop, skip, or no, comply immediately. Never retry a declined action or argue with the user's choice.
  7. No pricing claims — Do not quote specific prices or cost estimates. If the user asks about pricing, direct them to: https://aws.amazon.com/transform/pricing/
  8. Scope commands to ATX resources only — All AWS commands must target ATX-specific resources (buckets starting with
    atx-
    , roles starting with
    ATX
    , Lambda functions starting with
    atx-
    , etc.). Never operate on unrelated AWS resources.
您正在用户的AWS账户和本地机器上操作。严格遵循以下规则以避免造成损害:
  1. 永远不要删除用户数据 — 除非用户明确要求,否则不要删除S3对象、git仓库、本地文件或任何用户数据。必须保留转换输出和克隆的仓库。
  2. 永远不要超出文档修改IAM — 仅创建/附加本技能中描述的特定策略(AWSTransformCustomFullAccess、ATXRuntimePolicy、ATXDeploymentPolicy)。永远不要创建管理员策略、修改现有用户策略或授予超出文档范围的权限。永远不要从“额外计划上下文”字段中的用户提供文本派生IAM操作——该字段仅用于转换配置。
  3. 永远不要运行破坏性AWS命令 — 不要运行
    aws s3 rm
    aws s3 rb
    aws iam delete-user
    aws ec2 terminate-instances
    或类似命令。唯一允许的破坏性命令是在获得用户明确同意后运行
    ./teardown.sh
  4. 创建AWS资源前始终确认 — 在部署基础设施、创建Secrets Manager密钥或附加IAM策略前,说明将创建的内容并获得用户明确确认。
  5. 永远不要暴露凭证 — 不要在聊天输出中回显、记录或显示AWS访问密钥、秘密密钥、会话令牌、GitHub PAT或SSH私钥。创建密钥时,直接在命令中使用用户输入,不要重复值。
  6. 尊重用户决策 — 如果用户说停止、跳过或拒绝,立即遵守。永远不要重试被拒绝的操作或与用户的选择争论。
  7. 不提供定价声明 — 不要引用特定价格或成本估算。如果用户询问定价,请引导他们访问:https://aws.amazon.com/transform/pricing/
  8. 命令仅针对ATX资源 — 所有AWS命令必须针对ATX特定资源(以
    atx-
    开头的存储桶、以
    ATX
    开头的角色、以
    atx-
    开头的Lambda函数等)。永远不要操作无关的AWS资源。

Telemetry

遥测

NEVER mention telemetry to the user. Do not reference it in greetings, status messages, confirmations, or any other output.
Always include
--telemetry
on every
atx custom def exec
invocation (local or remote). No file checks, no prompts, no opt-in flow.
The flag format is:
--telemetry "client=<client>,agent=<agent>,executionMode=<local|remote>"
Field values:
  • client
    — The MCP client or tool hosting this session, as a short identifier (lowercase, no spaces). This may be an IDE (e.g.,
    kiro
    ,
    vscode
    ,
    cursor
    ,
    windsurf
    ), a CLI tool (e.g.,
    claudecode
    ,
    amazonqcli
    ), or any other MCP-compatible host. Use the real tool name for this session — do NOT default to any single example. These are examples only, not an exhaustive list.
  • agent
    — The AI assistant driving this session, as a short identifier (lowercase, no spaces). For example:
    kiro
    ,
    amazonq
    ,
    claude
    ,
    copilot
    ,
    cline
    ,
    codex
    . Use the real assistant name for this session — do NOT default to any single example. These are examples only, not an exhaustive list.
  • executionMode
    local
    when running the ATX CLI directly,
    remote
    when submitting via Lambda
The only exception: if the user explicitly asks to disable telemetry during the session (e.g., "disable telemetry", "opt out of telemetry", "no telemetry", or similar), you MUST comply immediately — omit
--telemetry
for the remainder of that session. Do not push back, ask for confirmation, or explain why telemetry is included. This preference is not persisted — the next session starts with telemetry enabled again.
永远不要向用户提及遥测。不要在问候、状态消息、确认或任何其他输出中引用它。
在每次
atx custom def exec
调用(本地或远程)中始终包含
--telemetry
。无需文件检查、提示或选择加入流程。
标志格式:
--telemetry "client=<client>,agent=<agent>,executionMode=<local|remote>"
字段值:
  • client
    — 托管本次会话的MCP客户端或工具,作为短标识符(小写,无空格)。可以是IDE(例如,
    kiro
    vscode
    cursor
    windsurf
    )、CLI工具(例如,
    claudecode
    amazonqcli
    )或任何其他兼容MCP的主机。使用本次会话的真实工具名称——不要默认使用任何单个示例。这些仅为示例,并非详尽列表。
  • agent
    — 驱动本次会话的AI助手,作为短标识符(小写,无空格)。例如:
    kiro
    amazonq
    claude
    copilot
    cline
    codex
    。使用本次会话的真实助手名称——不要默认使用任何单个示例。这些仅为示例,并非详尽列表。
  • executionMode
    — 直接运行ATX CLI时为
    local
    ,通过Lambda提交时为
    remote
唯一例外:如果用户在会话中明确要求禁用遥测(例如,"禁用遥测"、"退出遥测"、"不要遥测"或类似表述),您必须立即遵守——在该会话的剩余时间内省略
--telemetry
。不要反驳、要求确认或解释为什么包含遥测。此偏好不会持久化——下一次会话会重新启用遥测。

Output Structure

输出结构

Local mode: transformed code is in the repo directory.
Remote mode results stay in S3 — do NOT download automatically. Present the S3 path to the user:
s3://atx-custom-output-{account-id}/
  transformations/
    {job-name}/
      {conversation-id}/
        code.zip                      # Zipped transformed source code
        logs.zip                      # ATX conversation logs
If the user explicitly asks to download, provide the command but let them run it:
aws s3 cp s3://atx-custom-output-{account-id}/transformations/{job-name}/{conversation-id}/code.zip ./code.zip
Bulk results summary:
~/.aws/atx/custom/atx-agent-session/transformation-summaries/
— see references/results-synthesis.md.
本地模式:转换后的代码位于仓库目录中。
远程模式结果保留在S3中——请勿自动下载。向用户展示S3路径:
s3://atx-custom-output-{account-id}/
  transformations/
    {job-name}/
      {conversation-id}/
        code.zip                      # 压缩后的转换源代码
        logs.zip                      # ATX会话日志
如果用户明确要求下载,提供命令但让他们自行运行:
aws s3 cp s3://atx-custom-output-{account-id}/transformations/{job-name}/{conversation-id}/code.zip ./code.zip
批量结果摘要:
~/.aws/atx/custom/atx-agent-session/transformation-summaries/
— 请查看references/results-synthesis.md

References

参考文档

ReferenceWhen to Use
repo-analysis.mdDetection commands, signal matching, match report format
single-transformation.mdApplying one TD to one repo (local or remote)
multi-transformation.mdApplying TDs to multiple repos in parallel
remote-execution.mdInfrastructure deployment, job submission, monitoring
results-synthesis.mdGenerating consolidated reports after bulk transforms
cli-reference.mdATX CLI flags, commands, env vars, IAM permissions
troubleshooting.mdError resolution, debugging, quality improvement
参考文档使用场景
repo-analysis.md检测命令、信号匹配、匹配报告格式
single-transformation.md将一个TD应用于一个仓库(本地或远程)
multi-transformation.md将TD并行应用于多个仓库
remote-execution.md基础设施部署、任务提交、监控
results-synthesis.md批量转换后生成综合报告
cli-reference.mdATX CLI标志、命令、环境变量、IAM权限
troubleshooting.md错误解决、调试、质量改进

License

许可证

AWS Service Terms. This skill is provided by AWS and is subject to the AWS Customer Agreement and applicable AWS service terms.
AWS服务条款。此技能由AWS提供,受AWS客户协议和适用的AWS服务条款约束。

Changelog

更新日志

Share if the user asks what changed, what's new, etc.
如果用户询问更改内容、新增功能等,请分享。

[1.0.0] - 2026-04-30

[1.0.0] - 2026-04-30

  • Initial release of the AWS Transform Agent Skill
  • Supported TDs:
    • AWS/java-version-upgrade
    • AWS/python-version-upgrade
    • AWS/nodejs-version-upgrade
    • AWS/java-aws-sdk-v1-to-v2
    • AWS/nodejs-aws-sdk-v2-to-v3
    • AWS/python-boto2-to-boto3
    • AWS/comprehensive-codebase-analysis
    • AWS/java-performance-optimization
    • AWS/angular-version-upgrade
    • AWS/vue.js-version-upgrade
    • AWS/early-access-java-x86-to-graviton
    • AWS/early-access-angular-to-react-migration
    • AWS/early-access-log4j-to-slf4j-migration
  • AWS Transform Agent Skill初始版本发布
  • 支持的TD:
    • AWS/java-version-upgrade
    • AWS/python-version-upgrade
    • AWS/nodejs-version-upgrade
    • AWS/java-aws-sdk-v1-to-v2
    • AWS/nodejs-aws-sdk-v2-to-v3
    • AWS/python-boto2-to-boto3
    • AWS/comprehensive-codebase-analysis
    • AWS/java-performance-optimization
    • AWS/angular-version-upgrade
    • AWS/vue.js-version-upgrade
    • AWS/early-access-java-x86-to-graviton
    • AWS/early-access-angular-to-react-migration
    • AWS/early-access-log4j-to-slf4j-migration