# bedrock-prompts: Amazon Bedrock Prompt Management

## Overview
Amazon Bedrock Prompt Management provides enterprise-grade capabilities for creating, versioning, testing, and deploying prompt templates. It enables teams to centralize prompt engineering, implement A/B testing, and integrate prompts across Bedrock Flows, Agents, and applications.
Purpose: Centralized prompt template management with version control, variable substitution, and multi-variant testing
Pattern: Task-based (independent operations for different prompt management tasks)
Key Capabilities:
- Prompt Templates - Reusable templates with variable substitution
- Version Management - Track changes, rollback, and staged deployment
- Multi-Variant Testing - A/B test different prompt variations
- Flow Integration - Use prompts in Bedrock Flows and Agents
- Variable Types - String, number, array, and JSON object variables
- Prompt Catalog - Centralized library for team collaboration
- Cross-Model Support - Works with all Bedrock foundation models
Quality Targets:
- Reusability: 80%+ prompt template reuse across applications
- Version Control: 100% prompt changes tracked
- Testing: A/B test 3+ variants per production prompt
- Collaboration: Centralized catalog for team-wide access
## When to Use
Use bedrock-prompts when:
- Creating reusable prompt templates across applications
- Managing prompt versions for rollback and staged deployment
- Implementing A/B testing for prompt optimization
- Building centralized prompt catalogs for teams
- Integrating prompts with Bedrock Flows or Agents
- Standardizing prompt engineering practices
- Testing prompt variations before production deployment
- Sharing prompts across multiple projects
- Implementing variable substitution in prompts
- Optimizing prompts with data-driven testing
When NOT to Use:
- Single-use prompts without reuse (use inline prompts)
- Simple applications without version control needs
- Ad-hoc experimentation (test locally first, then promote to managed prompts)
## Prerequisites

### Required
- AWS account with Bedrock access
- IAM permissions for Bedrock Agent service
- Foundation model access enabled
- boto3 >= 1.34.0
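To confirm the boto3 requirement above at runtime, a naive dotted-version comparison is enough (a minimal sketch; a packaging-library version parser is more robust and handles pre-release suffixes):

```python
def _parse(v: str) -> tuple:
    """Split a dotted version string into a tuple of ints (first three parts only)."""
    return tuple(int(p) for p in v.split('.')[:3])

def meets_min_version(version: str, minimum: str = '1.34.0') -> bool:
    """Naively compare dotted version strings numerically."""
    return _parse(version) >= _parse(minimum)

# e.g. check the installed SDK:
# import boto3
# assert meets_min_version(boto3.__version__), boto3.__version__
```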
### Recommended
- Understanding of prompt engineering best practices
- Familiarity with Bedrock Flows or Agents
- CloudWatch for monitoring prompt usage
- S3 for storing prompt test results
## Installation
```bash
pip install boto3 botocore
```

## IAM Permissions
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:CreatePrompt",
        "bedrock:GetPrompt",
        "bedrock:UpdatePrompt",
        "bedrock:DeletePrompt",
        "bedrock:ListPrompts",
        "bedrock:CreatePromptVersion",
        "bedrock:ListPromptVersions",
        "bedrock:InvokeModel"
      ],
      "Resource": "*"
    }
  ]
}
```
## Quick Start

### 1. Create Prompt Template
```python
import boto3

bedrock_agent = boto3.client('bedrock-agent', region_name='us-east-1')

response = bedrock_agent.create_prompt(
    name='customer-support-prompt',
    description='Customer support response template',
    variants=[
        {
            'name': 'default',
            'templateType': 'TEXT',
            'modelId': 'anthropic.claude-3-sonnet-20240229-v1:0',
            'templateConfiguration': {
                'text': {
                    'text': '''You are a helpful customer support agent for {{company_name}}.

Customer Query: {{customer_query}}

Instructions:
- Be professional and empathetic
- Provide clear, actionable solutions
- If you don't know, offer to escalate
- Keep responses under {{max_words}} words

Response:''',
                    'inputVariables': [
                        {'name': 'company_name'},
                        {'name': 'customer_query'},
                        {'name': 'max_words'}
                    ]
                }
            },
            'inferenceConfiguration': {
                'text': {
                    'maxTokens': 500,
                    'temperature': 0.7,
                    'topP': 0.9
                }
            }
        }
    ]
)

prompt_id = response['id']
prompt_arn = response['arn']
print(f"Created prompt: {prompt_id}")
print(f"ARN: {prompt_arn}")
```
### 2. Create Prompt Version

```python
# Create immutable version for production
version_response = bedrock_agent.create_prompt_version(
    promptIdentifier=prompt_id,
    description='Production v1.0 - Initial release'
)
version = version_response['version']
print(f"Created version: {version}")
```
### 3. Get and Use Prompt

```python
# Get prompt details
prompt = bedrock_agent.get_prompt(
    promptIdentifier=prompt_id,
    promptVersion=version
)

# Extract template and variables
template = prompt['variants'][0]['templateConfiguration']['text']['text']
variables = {var['name']: None for var in prompt['variants'][0]['templateConfiguration']['text']['inputVariables']}
print(f"Template: {template}")
print(f"Variables: {list(variables.keys())}")
```
---

## Operations

### Operation 1: create-prompt
Create a new prompt template with variables and inference configuration.
Use when: Building reusable prompt templates, standardizing prompts across applications, creating prompt catalogs
Code Example:
```python
import boto3

bedrock_agent = boto3.client('bedrock-agent', region_name='us-east-1')

# Create advanced prompt with multiple variable types
response = bedrock_agent.create_prompt(
    name='product-recommendation-prompt',
    description='E-commerce product recommendation engine',
    defaultVariant='optimized',
    variants=[
        {
            'name': 'optimized',
            'templateType': 'TEXT',
            'modelId': 'anthropic.claude-3-sonnet-20240229-v1:0',
            'templateConfiguration': {
                'text': {
                    'text': '''You are a product recommendation expert for an e-commerce platform.

User Profile:
- Name: {{user_name}}
- Purchase History: {{purchase_history}}
- Preferences: {{preferences}}
- Budget: ${{budget}}

Available Categories: {{categories}}

Task: Recommend {{num_recommendations}} products that match the user's profile.

Format your response as a JSON array with product_id, name, price, and reason.''',
                    'inputVariables': [
                        {'name': 'user_name'},
                        {'name': 'purchase_history'},
                        {'name': 'preferences'},
                        {'name': 'budget'},
                        {'name': 'categories'},
                        {'name': 'num_recommendations'}
                    ]
                }
            },
            'inferenceConfiguration': {
                'text': {
                    'maxTokens': 1000,
                    'temperature': 0.5,
                    'topP': 0.95,
                    'stopSequences': ['\n\n---']
                }
            }
        }
    ],
    tags={
        'Environment': 'production',
        'Team': 'recommendations',
        'CostCenter': 'engineering'
    }
)

print(f"Prompt ID: {response['id']}")
print(f"Prompt ARN: {response['arn']}")
print(f"Created At: {response['createdAt']}")
```
**Best Practices**:
- Use descriptive names with hyphens (e.g., `customer-support-prompt`)
- Document variable types and expected formats
- Set appropriate `maxTokens` to control costs
- Use `defaultVariant` to specify preferred version
- Add tags for cost tracking and organization
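One easy mistake is letting `inputVariables` drift out of sync with the `{{placeholders}}` actually used in the template text. A small pre-flight check (a hypothetical helper, not part of the Bedrock API) can catch this before calling `create_prompt`:

```python
import re

def find_undeclared_variables(template: str, declared: list) -> set:
    """Return {{placeholders}} used in the template but missing from the declared names."""
    used = set(re.findall(r'\{\{\s*(\w+)\s*\}\}', template))
    return used - set(declared)

template = 'You are a support agent for {{company_name}}. Query: {{customer_query}}'
missing = find_undeclared_variables(template, ['company_name'])
# non-empty result means the inputVariables declaration needs updating
```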
---

### Operation 2: create-prompt-version
Create immutable versions of prompts for production deployment and rollback.
Use when: Deploying prompts to production, implementing staged rollout, enabling rollback capability
Code Example:
```python
import boto3
from datetime import datetime

bedrock_agent = boto3.client('bedrock-agent', region_name='us-east-1')

# Create version with detailed description
version_response = bedrock_agent.create_prompt_version(
    promptIdentifier='prompt-12345',
    description=f'Production v2.0 - {datetime.now().isoformat()} - Added sentiment analysis',
    tags={
        'Version': '2.0',
        'ReleaseDate': datetime.now().strftime('%Y-%m-%d'),
        'Changelog': 'Added sentiment context to improve response quality'
    }
)
version_number = version_response['version']
version_arn = version_response['arn']
print(f"Version: {version_number}")
print(f"ARN: {version_arn}")

# List all versions
list_response = bedrock_agent.list_prompts(
    promptIdentifier='prompt-12345'
)
print("\nAll versions:")
for version in list_response.get('promptSummaries', []):
    print(f"- Version {version['version']}: {version.get('description', 'No description')}")
```
**Version Management Best Practices**:
- Create versions before production deployment
- Use semantic versioning in descriptions (v1.0, v1.1, v2.0)
- Document changes in version descriptions
- Keep DRAFT version for active development
- Test versions thoroughly before promoting
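The semantic-versioning convention above can also be enforced programmatically. For example, a helper (hypothetical, operating on the version descriptions returned by `list_prompts`) that suggests the next version string:

```python
import re

def next_version(descriptions: list, bump: str = 'minor') -> str:
    """Suggest the next semantic version (vMAJOR.MINOR) from prior version descriptions."""
    versions = []
    for desc in descriptions:
        m = re.search(r'v(\d+)\.(\d+)', desc)
        if m:
            versions.append((int(m.group(1)), int(m.group(2))))
    major, minor = max(versions, default=(0, 0))
    if bump == 'major':
        return f'v{major + 1}.0'
    return f'v{major}.{minor + 1}'

# e.g. feed it the descriptions from list_prompts(promptIdentifier=...)
suggested = next_version(['Production v1.0 - Initial release', 'Production v1.1 - Tweaks'])
```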
---

### Operation 3: get-prompt
Retrieve prompt details including template, variables, and inference configuration.
Use when: Inspecting prompt templates, debugging issues, preparing for invocation
Code Example:
```python
import boto3

bedrock_agent = boto3.client('bedrock-agent', region_name='us-east-1')

# Get specific version
prompt = bedrock_agent.get_prompt(
    promptIdentifier='prompt-12345',
    promptVersion='2'  # Omit for DRAFT version
)

# Extract configuration
variant = prompt['variants'][0]
template = variant['templateConfiguration']['text']['text']
variables = variant['templateConfiguration']['text']['inputVariables']
inference_config = variant['inferenceConfiguration']['text']

print(f"Prompt Name: {prompt['name']}")
print(f"Version: {prompt['version']}")
print(f"Model: {variant['modelId']}")
print(f"\nTemplate:\n{template}")
print("\nVariables:")
for var in variables:
    print(f"  - {var['name']}")
print("\nInference Config:")
print(f"  Max Tokens: {inference_config['maxTokens']}")
print(f"  Temperature: {inference_config['temperature']}")
print(f"  Top P: {inference_config['topP']}")
```

---

### Operation 4: list-prompts
List all prompts or filter by criteria.
Use when: Building prompt catalogs, auditing prompt usage, discovering available prompts
Code Example:
```python
import boto3

bedrock_agent = boto3.client('bedrock-agent', region_name='us-east-1')

# List all prompts with pagination
paginator = bedrock_agent.get_paginator('list_prompts')
page_iterator = paginator.paginate()

prompts = []
for page in page_iterator:
    prompts.extend(page.get('promptSummaries', []))

print(f"Total prompts: {len(prompts)}")
print("\nPrompt Catalog:")
for prompt in prompts:
    print(f"\n- {prompt['name']} (ID: {prompt['id']})")
    print(f"  Description: {prompt.get('description', 'N/A')}")
    print(f"  Created: {prompt['createdAt']}")
    print(f"  Updated: {prompt['updatedAt']}")
    print(f"  Version: {prompt['version']}")
```

---

### Operation 5: update-prompt
Update prompt templates, add variants, or modify inference configuration.
Use when: Improving prompts, adding A/B test variants, adjusting inference parameters
Code Example:
```python
import boto3

bedrock_agent = boto3.client('bedrock-agent', region_name='us-east-1')

# Update prompt with new variant
response = bedrock_agent.update_prompt(
    promptIdentifier='prompt-12345',
    name='customer-support-prompt',
    description='Customer support with multiple response styles',
    defaultVariant='professional',
    variants=[
        {
            'name': 'professional',
            'templateType': 'TEXT',
            'modelId': 'anthropic.claude-3-sonnet-20240229-v1:0',
            'templateConfiguration': {
                'text': {
                    'text': 'Professional tone template...',
                    'inputVariables': [{'name': 'query'}]
                }
            },
            'inferenceConfiguration': {
                'text': {'maxTokens': 500, 'temperature': 0.3}
            }
        },
        {
            'name': 'friendly',
            'templateType': 'TEXT',
            'modelId': 'anthropic.claude-3-sonnet-20240229-v1:0',
            'templateConfiguration': {
                'text': {
                    'text': 'Friendly tone template...',
                    'inputVariables': [{'name': 'query'}]
                }
            },
            'inferenceConfiguration': {
                'text': {'maxTokens': 500, 'temperature': 0.7}
            }
        }
    ]
)
print(f"Updated prompt: {response['id']}")
```

---

### Operation 6: delete-prompt
Delete prompt templates (cannot be undone).
Use when: Cleaning up unused prompts, removing deprecated templates
Code Example:
```python
import boto3

bedrock_agent = boto3.client('bedrock-agent', region_name='us-east-1')

# Delete prompt (all versions)
response = bedrock_agent.delete_prompt(
    promptIdentifier='prompt-12345'
)
print(f"Deleted prompt: {response['id']}")
print(f"Status: {response['status']}")
```

**Warning**: Deletion is permanent and affects all versions. Ensure the prompt is not used in Flows or Agents before deleting.

---

## Variable Types and Substitution
### Supported Variable Types
Bedrock Prompt Management supports multiple variable types:
- String: Text values (default)
- Number: Numeric values
- Array: Lists of items
- JSON Object: Complex structured data
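Since template substitution operates on text, array and JSON object values are typically serialized to strings before being passed in. A minimal helper (an illustration, not part of the Bedrock API) might be:

```python
import json

def to_prompt_value(value) -> str:
    """Serialize a value for {{variable}} substitution: strings pass through, other types become JSON."""
    if isinstance(value, str):
        return value
    return json.dumps(value)

# to_prompt_value(['date', 'product']) yields a JSON string suitable for a text template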
### Variable Substitution Example
```python
import boto3
import json

bedrock_agent = boto3.client('bedrock-agent', region_name='us-east-1')
bedrock_runtime = boto3.client('bedrock-runtime', region_name='us-east-1')

# Create prompt with complex variables
prompt_response = bedrock_agent.create_prompt(
    name='data-analysis-prompt',
    variants=[{
        'name': 'default',
        'templateType': 'TEXT',
        'modelId': 'anthropic.claude-3-sonnet-20240229-v1:0',
        'templateConfiguration': {
            'text': {
                'text': '''Analyze the following data:

Dataset: {{dataset_name}}
Columns: {{columns}}
Row Count: {{row_count}}
Sample Data: {{sample_data}}
Analysis Type: {{analysis_type}}

Provide insights and recommendations.''',
                'inputVariables': [
                    {'name': 'dataset_name'},
                    {'name': 'columns'},
                    {'name': 'row_count'},
                    {'name': 'sample_data'},
                    {'name': 'analysis_type'}
                ]
            }
        }
    }]
)

# Use prompt with variable substitution
prompt = bedrock_agent.get_prompt(
    promptIdentifier=prompt_response['id']
)
template = prompt['variants'][0]['templateConfiguration']['text']['text']

# Substitute variables
variables = {
    'dataset_name': 'Sales Q4 2024',
    'columns': json.dumps(['date', 'product', 'revenue', 'quantity']),
    'row_count': '10,000',
    'sample_data': json.dumps([
        {'date': '2024-10-01', 'product': 'Widget A', 'revenue': 1500, 'quantity': 50},
        {'date': '2024-10-02', 'product': 'Widget B', 'revenue': 2000, 'quantity': 75}
    ]),
    'analysis_type': 'Revenue trends and product performance'
}

# Replace variables in template
prompt_text = template
for var_name, var_value in variables.items():
    prompt_text = prompt_text.replace(f'{{{{{var_name}}}}}', str(var_value))

print(f"Final Prompt:\n{prompt_text}")
```

---

## Multi-Variant Testing (A/B Testing)

### Creating Multi-Variant Prompts
```python
import boto3

bedrock_agent = boto3.client('bedrock-agent', region_name='us-east-1')

# Create prompt with 3 variants for A/B/C testing
response = bedrock_agent.create_prompt(
    name='email-subject-generator',
    description='A/B/C test for email subject lines',
    defaultVariant='variant-a',
    variants=[
        {
            'name': 'variant-a',
            'templateType': 'TEXT',
            'modelId': 'anthropic.claude-3-sonnet-20240229-v1:0',
            'templateConfiguration': {
                'text': {
                    'text': 'Generate a professional email subject line for: {{email_content}}',
                    'inputVariables': [{'name': 'email_content'}]
                }
            },
            'inferenceConfiguration': {
                'text': {'maxTokens': 50, 'temperature': 0.3}
            }
        },
        {
            'name': 'variant-b',
            'templateType': 'TEXT',
            'modelId': 'anthropic.claude-3-sonnet-20240229-v1:0',
            'templateConfiguration': {
                'text': {
                    'text': 'Create an engaging, click-worthy subject line for: {{email_content}}',
                    'inputVariables': [{'name': 'email_content'}]
                }
            },
            'inferenceConfiguration': {
                'text': {'maxTokens': 50, 'temperature': 0.7}
            }
        },
        {
            'name': 'variant-c',
            'templateType': 'TEXT',
            'modelId': 'anthropic.claude-3-haiku-20240307-v1:0',
            'templateConfiguration': {
                'text': {
                    'text': 'Write a concise, action-oriented subject line for: {{email_content}}',
                    'inputVariables': [{'name': 'email_content'}]
                }
            },
            'inferenceConfiguration': {
                'text': {'maxTokens': 50, 'temperature': 0.5}
            }
        }
    ]
)
print(f"Created multi-variant prompt: {response['id']}")
```

### Testing Framework
python
import boto3
import random
import json
from datetime import datetime
class PromptABTester:
def __init__(self, prompt_id, region='us-east-1'):
self.bedrock_agent = boto3.client('bedrock-agent', region_name=region)
self.bedrock_runtime = boto3.client('bedrock-runtime', region_name=region)
self.prompt_id = prompt_id
self.results = []
def get_variants(self):
prompt = self.bedrock_agent.get_prompt(promptIdentifier=self.prompt_id)
return [v['name'] for v in prompt['variants']]
def test_variant(self, variant_name, variables, user_id=None):
# Get prompt
prompt = self.bedrock_agent.get_prompt(promptIdentifier=self.prompt_id)
# Find variant
variant = next(v for v in prompt['variants'] if v['name'] == variant_name)
# Substitute variables
template = variant['templateConfiguration']['text']['text']
for var_name, var_value in variables.items():
template = template.replace(f'{{{{{var_name}}}}}', str(var_value))
# Invoke model
model_id = variant['modelId']
inference_config = variant['inferenceConfiguration']['text']
response = self.bedrock_runtime.invoke_model(
modelId=model_id,
body=json.dumps({
'anthropic_version': 'bedrock-2023-05-31',
'messages': [{'role': 'user', 'content': template}],
'max_tokens': inference_config['maxTokens'],
'temperature': inference_config['temperature']
})
)
result = json.loads(response['body'].read())
output = result['content'][0]['text']
# Record test result
self.results.append({
'timestamp': datetime.now().isoformat(),
'variant': variant_name,
'user_id': user_id,
'input_variables': variables,
'output': output,
'model': model_id
})
return output
def random_test(self, variables, user_id=None):
"""Randomly select variant for testing"""
variants = self.get_variants()
selected_variant = random.choice(variants)
return self.test_variant(selected_variant, variables, user_id)
def analyze_results(self):
"""Analyze test results by variant"""
analysis = {}
for result in self.results:
variant = result['variant']
if variant not in analysis:
analysis[variant] = {
'count': 0,
'avg_output_length': 0,
'samples': []
}
analysis[variant]['count'] += 1
analysis[variant]['samples'].append(result['output'])
analysis[variant]['avg_output_length'] += len(result['output'])
# Calculate averages
for variant in analysis:
count = analysis[variant]['count']
analysis[variant]['avg_output_length'] /= count
return analysispython
import boto3
import random
import json
from datetime import datetime
class PromptABTester:
def __init__(self, prompt_id, region='us-east-1'):
self.bedrock_agent = boto3.client('bedrock-agent', region_name=region)
self.bedrock_runtime = boto3.client('bedrock-runtime', region_name=region)
self.prompt_id = prompt_id
self.results = []
def get_variants(self):
prompt = self.bedrock_agent.get_prompt(promptIdentifier=self.prompt_id)
return [v['name'] for v in prompt['variants']]
def test_variant(self, variant_name, variables, user_id=None):
# Get prompt
prompt = self.bedrock_agent.get_prompt(promptIdentifier=self.prompt_id)
# Find variant
variant = next(v for v in prompt['variants'] if v['name'] == variant_name)
# Substitute variables
template = variant['templateConfiguration']['text']['text']
for var_name, var_value in variables.items():
template = template.replace(f'{{{{{var_name}}}}}', str(var_value))
# Invoke model
model_id = variant['modelId']
inference_config = variant['inferenceConfiguration']['text']
response = self.bedrock_runtime.invoke_model(
modelId=model_id,
body=json.dumps({
'anthropic_version': 'bedrock-2023-05-31',
'messages': [{'role': 'user', 'content': template}],
'max_tokens': inference_config['maxTokens'],
'temperature': inference_config['temperature']
})
)
result = json.loads(response['body'].read())
output = result['content'][0]['text']
# Record test result
self.results.append({
'timestamp': datetime.now().isoformat(),
'variant': variant_name,
'user_id': user_id,
'input_variables': variables,
'output': output,
'model': model_id
})
return output
def random_test(self, variables, user_id=None):
"""Randomly select variant for testing"""
variants = self.get_variants()
selected_variant = random.choice(variants)
return self.test_variant(selected_variant, variables, user_id)
def analyze_results(self):
"""Analyze test results by variant"""
analysis = {}
for result in self.results:
variant = result['variant']
if variant not in analysis:
analysis[variant] = {
'count': 0,
'avg_output_length': 0,
'samples': []
}
analysis[variant]['count'] += 1
analysis[variant]['samples'].append(result['output'])
analysis[variant]['avg_output_length'] += len(result['output'])
# Calculate averages
for variant in analysis:
count = analysis[variant]['count']
analysis[variant]['avg_output_length'] /= count
return analysisUsage
# Usage
tester = PromptABTester('prompt-12345')

# Run A/B test
for i in range(100):
    result = tester.random_test(
        variables={'email_content': f'Test email content {i}'},
        user_id=f'user-{i}'
    )

# Analyze
analysis = tester.analyze_results()
print(json.dumps(analysis, indent=2))
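Note that `random.choice` draws independently on every request, so a returning user can land on a different variant from one call to the next. When sticky assignment matters for your metrics, hashing the user ID is a common alternative; a minimal sketch, where `assign_variant` is a hypothetical helper and not part of any Bedrock API:

```python
import hashlib

def assign_variant(user_id: str, variant_names: list) -> str:
    """Deterministically bucket a user so repeat requests
    always see the same prompt variant (hypothetical helper)."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return variant_names[int(digest, 16) % len(variant_names)]

names = ['concise', 'detailed', 'friendly']
# The same user always maps to the same variant
assert assign_variant('user-42', names) == assign_variant('user-42', names)
```

Deterministic bucketing also makes results reproducible when you re-run an experiment over the same user population.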
---
Prompt Engineering Best Practices
1. Clear Instructions
python
# Good: Specific, clear instructions
prompt = '''You are a financial analyst.
Task: Analyze the following quarterly earnings data and provide:
- Revenue trends (% change YoY)
- Key growth drivers
- Risk factors
Data: {{financial_data}}
Format: Use bullet points. Keep analysis under 200 words.'''
# Bad: Vague, unclear
prompt = '''Analyze this: {{financial_data}}'''
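The `{{financial_data}}` placeholder above is filled in by plain string substitution at invocation time, the same mechanism the A/B tester uses earlier in this doc. A minimal local equivalent of that rendering step:

```python
def render_template(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with their values."""
    for name, value in variables.items():
        template = template.replace('{{' + name + '}}', str(value))
    return template

print(render_template('Analyze this: {{financial_data}}',
                      {'financial_data': 'Q3 revenue grew 12% YoY'}))
# → Analyze this: Q3 revenue grew 12% YoY
```

Placeholders with no matching variable are left as-is, which is a quick way to spot missing inputs during testing.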
2. Variable Naming Conventions
python
# Good: Descriptive variable names
inputVariables=[
{'name': 'customer_query'},
{'name': 'customer_purchase_history'},
{'name': 'max_response_words'}
]
# Bad: Ambiguous names
inputVariables=[
{'name': 'input'},
{'name': 'data'},
{'name': 'limit'}
]
3. Inference Configuration
python
# Creative tasks: Higher temperature
'inferenceConfiguration': {
'text': {
'maxTokens': 1000,
'temperature': 0.8, # More creative
'topP': 0.95
}
}
# Factual tasks: Lower temperature
'inferenceConfiguration': {
'text': {
'maxTokens': 500,
'temperature': 0.1, # More deterministic
'topP': 0.9
}
}
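Since the two configurations above differ only in their values, teams often capture them as named presets. A small hypothetical helper (the preset names and numbers mirror the creative/factual examples, not any Bedrock API):

```python
def text_inference_config(task: str) -> dict:
    """Return an inferenceConfiguration preset by task type.
    Values mirror the creative/factual examples above (illustrative only)."""
    presets = {
        'creative': {'maxTokens': 1000, 'temperature': 0.8, 'topP': 0.95},
        'factual':  {'maxTokens': 500,  'temperature': 0.1, 'topP': 0.9},
    }
    return {'text': presets[task]}

# Pass the result wherever a variant's inferenceConfiguration is expected
assert text_inference_config('factual')['text']['temperature'] == 0.1
```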
4. Stop Sequences
python
# Use stop sequences to control output length
'inferenceConfiguration': {
'text': {
'maxTokens': 1000,
'stopSequences': ['\n\n---', 'END_RESPONSE', '###']
}
}
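Generation halts as soon as the model emits any of the configured sequences, and the matched sequence is not included in the returned text. The effect is roughly equivalent to this client-side truncation:

```python
def truncate_at_stop(text: str, stop_sequences: list) -> str:
    """Cut text at the earliest stop sequence; the sequence itself is dropped.
    Illustrates the effect of stopSequences, applied client-side."""
    cut = len(text)
    for seq in stop_sequences:
        idx = text.find(seq)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

print(truncate_at_stop('Here is the answer.\n\n---Internal notes', ['\n\n---', '###']))
# → Here is the answer.
```

Server-side stop sequences are preferable in production because they also save output tokens, but a client-side guard like this is a useful safety net.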
---
Integration with Bedrock Flows
Using Prompts in Flows
python
import boto3

bedrock_agent = boto3.client('bedrock-agent', region_name='us-east-1')

# Create a flow that uses a managed prompt
flow_response = bedrock_agent.create_flow(
    name='customer-support-flow',
    executionRoleArn='arn:aws:iam::123456789012:role/BedrockFlowRole',
    definition={
        'nodes': [
            {
                'name': 'FlowInput',
                'type': 'Input',
                'outputs': [{'name': 'query', 'type': 'String'}]
            },
            {
                'name': 'SupportPrompt',
                'type': 'Prompt',
                'configuration': {
                    'prompt': {
                        'sourceConfiguration': {
                            'resource': {
                                'promptArn': 'arn:aws:bedrock:us-east-1:123456789012:prompt/prompt-12345:2'
                            }
                        }
                    }
                },
                'inputs': [
                    {
                        'name': 'customer_query',
                        'expression': 'FlowInput.query'
                    },
                    {
                        'name': 'company_name',
                        'expression': '"Acme Corp"'
                    },
                    {
                        'name': 'max_words',
                        'expression': '150'
                    }
                ],
                'outputs': [{'name': 'response', 'type': 'String'}]
            },
            {
                'name': 'FlowOutput',
                'type': 'Output',
                'inputs': [
                    {
                        'name': 'response',
                        'expression': 'SupportPrompt.response'
                    }
                ]
            }
        ],
        'connections': [
            {'source': 'FlowInput', 'target': 'SupportPrompt'},
            {'source': 'SupportPrompt', 'target': 'FlowOutput'}
        ]
    }
)

print(f"Created flow: {flow_response['id']}")
---
Related Skills
- bedrock-inference: Invoke foundation models directly
- bedrock-flows: Build visual AI workflows with prompts
- bedrock-agentcore: Create AI agents with managed prompts
- bedrock-knowledge-bases: RAG applications with prompt templates
- bedrock-guardrails: Apply safety policies to prompt outputs
- claude-advanced-tool-use: Advanced prompt patterns for tool use
- context-engineering: Optimize prompt context and token usage
- prompt-builder: Build effective prompts (meta-skill)
Complete Example: Production Prompt Catalog
python
import boto3
import json
from typing import Dict, List, Optional

class PromptCatalog:
    """Enterprise prompt catalog with versioning and testing"""

    def __init__(self, region='us-east-1'):
        self.bedrock_agent = boto3.client('bedrock-agent', region_name=region)
        self.bedrock_runtime = boto3.client('bedrock-runtime', region_name=region)
        self.catalog = {}

    def create_prompt_template(
        self,
        name: str,
        description: str,
        template: str,
        variables: List[str],
        model_id: str = 'anthropic.claude-3-sonnet-20240229-v1:0',
        max_tokens: int = 1000,
        temperature: float = 0.7,
        tags: Optional[Dict] = None
    ) -> str:
        """Create a new prompt template"""
        response = self.bedrock_agent.create_prompt(
            name=name,
            description=description,
            variants=[{
                'name': 'default',
                'templateType': 'TEXT',
                'modelId': model_id,
                'templateConfiguration': {
                    'text': {
                        'text': template,
                        'inputVariables': [{'name': var} for var in variables]
                    }
                },
                'inferenceConfiguration': {
                    'text': {
                        'maxTokens': max_tokens,
                        'temperature': temperature,
                        'topP': 0.9
                    }
                }
            }],
            tags=tags or {}
        )
        prompt_id = response['id']
        self.catalog[name] = prompt_id
        return prompt_id

    def version_prompt(self, name: str, description: str) -> str:
        """Create immutable version"""
        prompt_id = self.catalog[name]
        response = self.bedrock_agent.create_prompt_version(
            promptIdentifier=prompt_id,
            description=description
        )
        return response['version']

    def get_prompt(self, name: str, version: Optional[str] = None) -> Dict:
        """Get prompt by name, optionally at a specific version"""
        prompt_id = self.catalog[name]
        if version:
            return self.bedrock_agent.get_prompt(
                promptIdentifier=prompt_id,
                promptVersion=version
            )
        return self.bedrock_agent.get_prompt(promptIdentifier=prompt_id)

    def list_catalog(self) -> List[Dict]:
        """List all prompts in catalog"""
        response = self.bedrock_agent.list_prompts()
        return response.get('promptSummaries', [])
# Usage
catalog = PromptCatalog(region='us-east-1')
# Create customer support prompt
support_id = catalog.create_prompt_template(
    name='customer-support-v1',
    description='Customer support response generator',
    template='''You are a customer support agent for {{company_name}}.
Query: {{query}}
Provide a helpful, professional response in under {{max_words}} words.''',
    variables=['company_name', 'query', 'max_words'],
    max_tokens=500,
    temperature=0.5,
    tags={'Department': 'Support', 'Environment': 'Production'}
)
# Create version
version = catalog.version_prompt('customer-support-v1', 'Initial production release')
print(f"Created prompt: {support_id}")
print(f"Version: {version}")
# List catalog
prompts = catalog.list_catalog()
print(f"\nCatalog contains {len(prompts)} prompts")
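Because prompt versions are immutable, a common deployment pattern is to point each stage (dev, staging, prod) at a specific version and roll back by repointing. A sketch of that bookkeeping; the class and stage names are illustrative, not a Bedrock API:

```python
class PromptStageAliases:
    """Track which immutable prompt version each deployment stage uses.
    Rollback is just repointing a stage at an earlier version."""
    def __init__(self):
        self._stages = {}

    def promote(self, stage: str, version: str) -> None:
        self._stages[stage] = version

    def resolve(self, stage: str) -> str:
        return self._stages[stage]

aliases = PromptStageAliases()
aliases.promote('prod', '1')   # initial release
aliases.promote('prod', '2')   # deploy the new version
aliases.promote('prod', '1')   # regression found: roll back
print(aliases.resolve('prod'))
# → 1
```

At request time the application resolves the stage to a version and passes it to `get_prompt(name, version=...)`, so a rollback takes effect without redeploying application code.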
---
Summary
Amazon Bedrock Prompt Management provides enterprise-grade prompt template capabilities:
- Centralized Management: Single source of truth for prompts
- Version Control: Track changes, rollback, staged deployment
- A/B Testing: Multi-variant testing for optimization
- Variable Substitution: Flexible templating system
- Flow Integration: Use prompts across Bedrock services
- Team Collaboration: Shared prompt catalog
- Production Ready: Immutable versions, tagging, monitoring
Use bedrock-prompts to standardize prompt engineering, enable A/B testing, and build reusable prompt libraries for production AI applications.