pctl CLI Skill


For full flag details, run `pctl <command> <action> --help`.

Environments


Use these connection profile names with any pctl command:

| Profile Name | Tenant | Purpose |
|--------------|--------|---------|
| sandbox | yourtenantsandbox | Sandbox |
| dev | yourtenantdev | Development |
| staging | yourtenantstaging | Staging |
| prod | yourtenantprod | Production |

```bash
pctl conn list                  # See all saved profiles
pctl conn show sandbox          # Show details of a specific profile
pctl conn validate sandbox      # Validate credentials
```

Connection Profile Management


Syntax:

```bash
pctl conn <add|list|show|validate|delete> [options] [conn_name]
```

Add with flags:

```bash
pctl conn add myenv --platform https://openam-env.id.forgerock.io \
  --sa-id "account-id" --sa-jwk-file /path/to/jwk.json
```

Add from YAML config file:

```bash
pctl conn add myenv -c /path/to/conn.yaml
```

Add without credential validation:

```bash
pctl conn add myenv --platform https://... --sa-id "..." --sa-jwk-file ... --no-validate
```

Show, validate, delete:

```bash
pctl conn show sandbox
pctl conn validate sandbox
pctl conn delete sandbox          # Prompts for confirmation
pctl conn delete sandbox --force  # Skip confirmation
```

Connection YAML format:

```yaml
platform: "https://openam-env.id.forgerock.io"
sa_id: "service-account-id"
sa_jwk_file: "/path/to/jwk.json"      # File path OR inline JSON via sa_jwk
log_api_key: "optional-key"
log_api_secret: "optional-secret"
admin_username: "optional-admin"
admin_password: "optional-password"
description: "Environment description"
```

Note: JWK credentials can be provided either as `--sa-jwk-file /path/to/file.json` (file path) or `--sa-jwk '{"kty":"RSA",...}'` (inline JSON string). Use one or the other, not both.

Token Management


Syntax:

```bash
pctl token <get|decode|validate> [options] <conn_name|token_string>
```

Generate access token from connection profile:

```bash
pctl token get sandbox                   # Raw token string
pctl token get sandbox --format bearer   # With "Bearer " prefix
pctl token get sandbox --format json     # Full JSON response
```

Inspect a JWT (no verification, just decode):

```bash
pctl token decode "eyJhbGciOiJS..."
```

Validate JWT structure and format:

```bash
pctl token validate "eyJhbGciOiJS..."
```

Note: `token get` requires a valid connection profile with sa_id and sa_jwk/sa_jwk_file. `token decode` and `token validate` take a raw JWT string, NOT a connection profile name.
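Decoding a JWT without verification, as `token decode` does, is plain base64url decoding of the first two dot-separated segments. A minimal Python sketch of the same idea (illustrative, not pctl's implementation):

```python
import base64
import json

def b64url_decode(segment: str) -> bytes:
    # JWT segments are base64url-encoded without padding; restore it first
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def decode_jwt(token: str) -> dict:
    # Split into header.payload.signature; decode the first two, skip the signature
    header_b64, payload_b64, _signature = token.split(".")
    return {
        "header": json.loads(b64url_decode(header_b64)),
        "payload": json.loads(b64url_decode(payload_b64)),
    }
```

Because nothing is verified, this (like `token decode`) is for inspection only; never base authorization decisions on an unverified payload.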

Journey Testing


Syntax:

```bash
pctl journey <run|validate> [options] <file>
```

Validate config before running:

```bash
pctl journey validate path/to/config.yaml
```

Run journey:

```bash
pctl journey run path/to/config.yaml
```

Interactive step-by-step mode (pauses between steps):

```bash
pctl journey run path/to/config.yaml --step
```

With custom timeout:

```bash
pctl journey run path/to/config.yaml --timeout 30000
```

Journey YAML config format:

```yaml
platformUrl: https://openam-env.id.forgerock.io
realm: alpha
journeyName: Login

steps:
  step1:
    Username: testuser

  step2:
    Password: testpassword
```

Note: The `--step` / `-s` flag enables interactive step-by-step mode, pausing between each step so you can inspect intermediate state. Useful for debugging failing journeys.

ELK Stack Management


Syntax:

```bash
pctl elk <init|health|start|stop|status|clean|purge|hardstop|down> [options]
```

Lifecycle (in order):

1. Initialize — deploy containers, templates, policies (one-time setup)

```bash
pctl elk init
```

2. Check health — verify Elasticsearch + Kibana are running

```bash
pctl elk health
```

3. Start streaming logs from a connection profile

```bash
pctl elk start sandbox                      # Streamer named "sandbox"
pctl elk start sandbox --name my-streamer   # Custom streamer name
pctl elk start sandbox --log-level 3        # DEBUG level
pctl elk start sandbox -c am-core,idm-core  # Specific components
```

4. Check streamer status

```bash
pctl elk status                 # All streamers
pctl elk status --name sandbox  # Specific streamer
```

5. Stop streamers

```bash
pctl elk stop                 # Stop all
pctl elk stop --name sandbox  # Stop specific
```

Data Management


Clean index data but keep streamer running (name required):

```bash
pctl elk clean --name sandbox
pctl elk clean --name sandbox --force  # Skip confirmation
```

Purge streamer completely — stop + delete indices (name required):

```bash
pctl elk purge --name sandbox
pctl elk purge --name sandbox --force  # Skip confirmation
```

Teardown


Stop all streamers + containers, PRESERVE data:

```bash
pctl elk hardstop
pctl elk hardstop --force  # Skip confirmation
```

Stop all + REMOVE containers + DELETE all data:

```bash
pctl elk down
pctl elk down --force  # Skip confirmation
```

ELK log levels: `1` = ERROR, `2` = INFO, `3` = DEBUG, `4` = ALL (default: depends on command).

Note: `elk clean` and `elk purge` require `--name` / `-n` — they do NOT operate on "all" streamers. `elk stop`, `elk hardstop`, and `elk down` operate on all if no name given.

Querying Local Elasticsearch


After `pctl elk start` streams logs into local ES, you can query them directly. This is different from `pctl log search`, which queries the remote AIC API.

Connection: `http://localhost:9200`, no authentication.

Index pattern: `paic-logs-{profile_name}-{YYYY.MM}` (e.g., `paic-logs-sandbox-2026.03`). Indices rotate monthly. Use `paic-logs-*` to search across all profiles/months, or `paic-logs-sandbox*` for a specific profile.

Document schema:

| Field | ES Type | Description |
|-------|---------|-------------|
| timestamp | date | ISO 8601 timestamp |
| source | keyword | Log source (e.g., `idm-core`, `am-access`) |
| type | keyword | `application/json` or `text/plain` |
| payload | object | Log content (flexible structure) |

Payload structure depends on the `type` field:

- `text/plain`: `payload` only has `message` — a raw log string with level/logger info embedded in the text.
- `application/json` (structured logs): `payload` has structured fields:

| Payload Field | Description |
|---------------|-------------|
| payload.message | Log message (present in both types) |
| payload.level | Log level string (`SEVERE`, `ERROR`, `WARNING`, `INFO`, `DEBUG`, `FINE`, `FINER`, `FINEST`) |
| payload.logger | Java class name (e.g., `org.forgerock.am.health.LivenessCheckEndpoint`) |
| payload.transactionId | Transaction ID for request tracing |
| payload.mdc.transactionId | Same transaction ID (nested in MDC context) |
| payload.thread | Java thread name |
| payload.context | AM context (e.g., `default`) |
| payload.timestamp | Log-level timestamp (may differ slightly from top-level `timestamp`) |

Retention: Data auto-deletes after 7 days (ILM policy). Don't search beyond that window.
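When scripting queries against a specific month, the monthly index naming can be reproduced with a tiny helper (illustrative Python; `index_name` is a hypothetical helper, not part of pctl):

```python
from datetime import date

def index_name(profile: str, day: date) -> str:
    # Indices rotate monthly: paic-logs-{profile_name}-{YYYY.MM}
    return f"paic-logs-{profile}-{day:%Y.%m}"
```

For example, `index_name("sandbox", date(2026, 3, 15))` yields `paic-logs-sandbox-2026.03`.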

Check available indices:

```bash
curl -s 'localhost:9200/_cat/indices/paic-logs-*?v&s=index'
```

Search by source:

```bash
curl -s 'localhost:9200/paic-logs-sandbox*/_search?pretty' \
  -H 'Content-Type: application/json' \
  -d '{"query":{"match":{"source":"idm-core"}},"size":10,"sort":[{"timestamp":"desc"}]}'
```

Search by keyword in message:

```bash
curl -s 'localhost:9200/paic-logs-sandbox*/_search?pretty' \
  -H 'Content-Type: application/json' \
  -d '{"query":{"match":{"payload.message":"authentication failed"}},"size":10,"sort":[{"timestamp":"desc"}]}'
```

Filter by transaction ID:

```bash
curl -s 'localhost:9200/paic-logs-sandbox*/_search?pretty' \
  -H 'Content-Type: application/json' \
  -d '{"query":{"match":{"payload.transactionId":"abc-123-def"}}}'
```

Errors — structured logs (application/json) use payload.level:

```bash
curl -s 'localhost:9200/paic-logs-sandbox*/_search?pretty' \
  -H 'Content-Type: application/json' \
  -d '{"query":{"bool":{"must":[{"match":{"payload.level":"SEVERE"}},{"range":{"timestamp":{"gte":"now-1h"}}}]}}}'
```

Errors — text/plain logs have the level in the message string:

```bash
curl -s 'localhost:9200/paic-logs-sandbox*/_search?pretty' \
  -H 'Content-Type: application/json' \
  -d '{"query":{"bool":{"must":[{"match":{"payload.message":"SEVERE"}},{"match":{"type":"text/plain"}}]}}}'
```

Count docs per source:

```bash
curl -s 'localhost:9200/paic-logs-sandbox*/_search?pretty' \
  -H 'Content-Type: application/json' \
  -d '{"size":0,"aggs":{"by_source":{"terms":{"field":"source"}}}}'
```

**When to use local ES vs `pctl log search`**:
- `pctl log search` — searches historical logs from the AIC API. Looking into the past.
- Local ES queries — queries live-streamed data captured by `pctl elk start`. Requires ELK running. Better for real-time monitoring, complex queries, and aggregations.

Historical Log Search


Syntax:

```bash
pctl log search [options] <conn_name>
```

Last 24h from idm-config (all defaults):

```bash
pctl log search sandbox
```

Last 7 days, specific component, with filter:

```bash
pctl log search sandbox -c idm-config --days 7 -q '/payload/objectId co "endpoint/"'
```

Specific date range:

```bash
pctl log search sandbox -c am-access --from 2025-10-01 --to 2025-10-06
```

Filter by transaction ID:

```bash
pctl log search sandbox --txid "abc-123-def"
```

Errors only:

```bash
pctl log search sandbox -l 1
```

Save to file:

```bash
pctl log search sandbox -c idm-config --days 7 -o logs.jsonl
pctl log search sandbox -c idm-config --format json -o report.json
```

Log levels: `1` = ERROR, `2` = INFO (default), `3` = DEBUG, `4` = ALL.

Default component is `idm-config`. Default time range is last 24 hours (`--days 1`).

Note: `--days` overrides `--from`/`--to` if both are provided. Use `--no-default-noise-filter` to disable built-in noise filtering.
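Output saved with `-o logs.jsonl` is one JSON object per line, which makes post-filtering straightforward. A sketch that pulls out every entry for one transaction ID (illustrative Python; it assumes each line carries the `payload.transactionId` field described in the local-ES schema):

```python
import json

def entries_for_txid(jsonl_path: str, txid: str) -> list[dict]:
    # One JSON document per line; keep the ones matching the transaction ID
    hits = []
    with open(jsonl_path) as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            doc = json.loads(line)
            if doc.get("payload", {}).get("transactionId") == txid:
                hits.append(doc)
    return hits
```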

Configuration Change Tracking


Syntax:

```bash
pctl log changes [options] <conn_name>
```

IDM-Config Types


Endpoint changes:

```bash
pctl log changes sandbox --type endpoint --name my_endpoint
```

Connector changes (last 30 days):

```bash
pctl log changes sandbox --type connector --name MyConnector --days 30
```

Email template changes:

```bash
pctl log changes sandbox --type emailtemplate --name welcome-email
```

Mapping changes:

```bash
pctl log changes sandbox --type mapping --name managedAlpha_user
```

Access control changes (no --name needed, global config):

```bash
pctl log changes sandbox --type access --days 30
```

Repo changes (no --name needed):

```bash
pctl log changes sandbox --type repo --days 7
```

AM-Config Types


Script changes (name auto-resolved to UUID):

```bash
pctl log changes sandbox --type script --name "My Test Script"
```

Journey changes:

```bash
pctl log changes sandbox --type journey --name MyLoginJourney --days 30
```

SAML entity changes:

```bash
pctl log changes sandbox --type saml --name "https://example.com/saml/logout/"
```

Output Formats


JSON (default, human-readable):

```bash
pctl log changes sandbox --type endpoint --name my_endpoint --format json
```

JSONL (one object per line, for piping):

```bash
pctl log changes sandbox --type endpoint --name my_endpoint --format jsonl
```

JS (JavaScript, for embedding):

```bash
pctl log changes sandbox --type endpoint --name my_endpoint --format js
```

Save to file:

```bash
pctl log changes sandbox --type endpoint --name my_endpoint -o report.json
```

Supported types: `endpoint`, `connector`, `emailtemplate`, `mapping`, `access`, `repo` (IDM-Config) and `script`, `journey`, `saml` (AM-Config).

Note: `access` and `repo` types do NOT require `--name` — they track global config changes. All other types require `--name`. For AM-Config `script` type, the human-readable name is auto-resolved to UUID for the API query.

Gotchas


- `token decode` and `token validate` take a raw JWT string, NOT a connection profile name. Only `token get` uses a profile name.
- `elk clean` and `elk purge` REQUIRE `--name` — they refuse to run without it to prevent accidental data loss.
- `elk down` deletes ALL data (containers + indices). Use `elk hardstop` if you want to preserve data.
- `--days` overrides `--from`/`--to` in both `log search` and `log changes`. If you pass all three, the date range is ignored.
- ELK log levels start at 1 (ERROR), not 0. The scale is 1=ERROR, 2=INFO, 3=DEBUG, 4=ALL.
- `log changes` for `access` and `repo` types does NOT take `--name` — they track global config, not individual resources.
- `log changes` for `script` type auto-resolves human-readable names to UUIDs. You pass the friendly name, not the UUID.
- Connection profiles support JWK as either a file path (`--sa-jwk-file`) or inline JSON (`--sa-jwk`). Never pass both.
- `conn add` validates credentials by default. Use `--no-validate` to skip if the environment isn't reachable yet.
- Journey `--step` mode is interactive — it pauses between steps and waits for user input. Don't use it in scripts.