design-jira-state-analyzer
Design Jira State Analyzer
When to Use This Skill
Explicit Triggers:
- "Analyze state transitions in Jira"
- "Calculate time spent in each status"
- "Find workflow bottlenecks"
- "Track cycle time for tickets"
- "Measure how long tickets stay in review"
- "Analyze workflow performance"
- "Design a state analyzer"
- "Calculate business hours in status"
Implicit Triggers:
- Questions about "how long does it take" for workflow stages
- Requests for SLA tracking or compliance analysis
- Need to optimize process flow or reduce delays
- Questions about which states slow down delivery
- Requests to measure team velocity or throughput
- Need to analyze deployment pipeline duration
Use This Skill When:
- Building systems to track state changes over time
- Analyzing workflows with discrete states (Jira, GitHub PRs, deployments, support tickets)
- Calculating temporal metrics (cycle time, lead time, flow efficiency)
- Detecting bottlenecks in multi-stage processes
- Implementing SLA monitoring and compliance tracking
- Extracting insights from audit logs or changelogs
Do NOT Use This Skill When:
- You need real-time event processing (use stream processing instead)
- Data lacks complete state history (partial data leads to invalid metrics)
- States are not well-defined or change frequently (fix data model first)
- You need predictive analytics (this skill covers historical analysis only)
What This Skill Does
Provides comprehensive guidance on designing and implementing state transition analysis systems. Covers state machine fundamentals, extracting transitions from audit logs, calculating temporal durations (calendar days and business hours), detecting bottlenecks, analyzing workflow metrics (cycle time, lead time, flow efficiency), and exporting results for stakeholders. Includes practical examples for Jira, GitHub PRs, and custom systems.
Quick Start
Analyze how long a ticket spends in each state:

```python
from jira_tool.analysis.state_analyzer import StateDurationAnalyzer
from jira_tool.client import JiraClient

# Fetch issue with changelog
client = JiraClient()
issue = client.get_issue("PROJ-123", expand=["changelog"])

# Analyze state durations
analyzer = StateDurationAnalyzer()
durations = analyzer.analyze_issue(issue)

# Get results
for duration in durations:
    print(f"{duration.state}: {duration.calendar_days} days, {duration.business_hours} business hours")
```

Or use the CLI for batch analysis:

```bash
# Export issues with changelog
uv run jira-tool search "project = PROJ" --expand changelog --format json -o issues.json

# Analyze state durations
uv run jira-tool analyze state-durations issues.json -o durations.csv --business-hours
```

Instructions
Step 1: Understand State Machines and Transitions
Every system with state changes follows this pattern:
- States: Discrete conditions (To Do, In Progress, Done, Blocked, etc.)
- Transitions: Changes from one state to another (To Do → In Progress)
- Timeline: When each transition occurred (from audit log/changelog)
- Duration: Time between transitions
Key Principle: A complete state history lets you calculate exactly how long items spend in each state.
Real-World Examples:
- Jira Ticket: Created → To Do → In Progress → Review → Done (track days in each state)
- GitHub PR: Created → In Review → Approved → Merged (track review duration)
- Deployment: Queued → Building → Testing → Staging → Production (track stage duration)
- Support Ticket: New → Assigned → Investigating → Resolved → Closed (track response time)
Step 2: Extract State Transitions from Audit Logs
The foundation of state analysis is reliable transition data. Most systems provide this through:
- Changelog (Jira issues, GitHub API)
- Audit logs (enterprise systems)
- Event streams (modern event-driven architectures)
- Status history (deployment platforms)
Pattern: Extract raw transition data into structured records:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class StateTransition:
    timestamp: datetime      # When it changed
    from_state: str | None   # Previous state (None if created)
    to_state: str            # New state
    author: str | None       # Who made the change
```

Example from a Jira changelog:

```json
{
  "created": "2024-01-15T10:30:00Z",
  "changelog": {
    "histories": [
      {
        "created": "2024-01-15T10:30:00Z",
        "items": [
          {
            "field": "status",
            "fromString": null,
            "toString": "To Do"
          }
        ]
      },
      {
        "created": "2024-01-16T09:00:00Z",
        "items": [
          {
            "field": "status",
            "fromString": "To Do",
            "toString": "In Progress"
          }
        ]
      }
    ]
  }
}
```

Extraction Logic: For each status change in the changelog, create a StateTransition.
Step 3: Calculate Duration Between Transitions
Once you have transitions, calculate time spent in each state:
Duration Metrics:
- Calendar Days: Total elapsed days (includes nights, weekends)
- Business Hours: Time within working hours (e.g., 9 AM - 5 PM, weekdays only)
- Active Hours: Excludes explicitly defined off-hours
Implementation Pattern:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class StateDuration:
    state: str
    start_time: datetime
    end_time: datetime | None  # None if still in state
    calendar_days: float
    business_hours: float

def calculate_calendar_days(start: datetime, end: datetime) -> float:
    """Calculate elapsed calendar days."""
    delta = end - start
    return delta.total_seconds() / (24 * 3600)

def calculate_business_hours(
    start: datetime,
    end: datetime,
    business_start: int = 9,  # 9 AM
    business_end: int = 17,   # 5 PM
) -> float:
    """Calculate time within business hours (Mon-Fri, 9-5)."""
    current = start
    hours = 0.0
    while current < end:
        # Only count weekdays
        if current.weekday() < 5:  # Mon=0, Fri=4
            day_start = current.replace(hour=business_start, minute=0, second=0, microsecond=0)
            day_end = current.replace(hour=business_end, minute=0, second=0, microsecond=0)
            # Clamp to the actual interval
            interval_start = max(current, day_start)
            interval_end = min(end, day_end)
            if interval_start < interval_end:
                hours += (interval_end - interval_start).total_seconds() / 3600
        # Move to the next day
        current = (current + timedelta(days=1)).replace(
            hour=0, minute=0, second=0, microsecond=0
        )
    return hours
```

Considerations:
- Timezone Awareness: Always use UTC internally; convert only for display
- Partial Days: Handle transitions at any time, not just at business-hour boundaries
- Open Issues: The current state has end_time = None (still ongoing)
- Business Hours: Configurable per organization (not everyone works 9-5)
Step 4: Detect Bottlenecks
Once you have durations, find which states take the longest:
Pattern: Aggregate by state and sort by duration
```python
def find_bottlenecks(durations: list[StateDuration]) -> dict[str, dict]:
    """Identify states where items spend the most time."""
    by_state: dict[str, dict] = {}
    for duration in durations:
        if duration.state not in by_state:
            by_state[duration.state] = {
                'total_days': 0.0,
                'count': 0,
                'max_days': 0.0,
            }
        stats = by_state[duration.state]
        if duration.end_time:  # Only count closed items
            stats['total_days'] += duration.calendar_days
            stats['count'] += 1
            stats['max_days'] = max(stats['max_days'], duration.calendar_days)
    # Calculate averages
    for state, stats in by_state.items():
        if stats['count'] > 0:
            stats['avg_days'] = stats['total_days'] / stats['count']
    # Sort by average duration, slowest first
    return dict(sorted(
        by_state.items(),
        key=lambda x: x[1].get('avg_days', 0),
        reverse=True
    ))
```

Interpretation:
- States with a high average duration = bottlenecks
- States with high variance = an inconsistent process
- States on the critical path = focus areas for improvement
Step 5: Analyze Patterns and Metrics
Extract actionable insights from duration data:

Common Metrics:
- Cycle Time: total time from creation to completion.
  Cycle Time = sum of all state durations
  (Useful for: capacity planning, delivery promises)
- Lead Time: time from request to delivery (may exclude certain states).
  Lead Time = cycle time minus waiting states
  (Useful for: customer SLA tracking)
- Flow Efficiency: percentage of time spent in value-added states.
  Flow Efficiency = (active work time) / (total time)
  (Useful for: process optimization)
- State-Specific Metrics:
  Review Wait Time = average time in the "In Review" state
  Development Time = average time in the "In Progress" state

Analysis Patterns:
- Trend: Is cycle time getting longer or shorter? (performance degradation)
- Distribution: Are most items fast, with occasional slow outliers? (outlier analysis)
- Correlation: Do certain states predict long cycle times? (predictive metrics)
- Seasonality: Do Mondays take longer? (resource constraints)
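The cycle-time and flow-efficiency formulas above reduce to a few lines once you have total time per state. A minimal sketch — the function name and the idea of passing a set of "active" state names are assumptions for illustration, not an established API:

```python
def workflow_metrics(days_by_state: dict[str, float],
                     active_states: set[str]) -> dict[str, float]:
    """Cycle time = sum over all states; flow efficiency = active share."""
    cycle_time = sum(days_by_state.values())
    active = sum(d for s, d in days_by_state.items() if s in active_states)
    return {
        'cycle_time_days': cycle_time,
        'flow_efficiency': active / cycle_time if cycle_time else 0.0,
    }
```

For example, `workflow_metrics({'To Do': 2.0, 'In Progress': 3.0, 'Review': 1.0}, {'In Progress', 'Review'})` gives a cycle time of 6 days and a flow efficiency of about 0.67.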
Step 6: Export and Communicate Results
Format analysis results for different audiences:

Format 1: CSV (Spreadsheet-Friendly)

```csv
issue_key,state,start_time,end_time,calendar_days,business_hours
PROJ-123,To Do,2024-01-15T10:30:00Z,2024-01-16T09:00:00Z,0.94,8.5
PROJ-123,In Progress,2024-01-16T09:00:00Z,2024-01-18T14:30:00Z,2.23,16.5
```

Use for: Excel analysis, stakeholder reports, historical records

Format 2: JSON (Programmatic Processing)

```json
{
  "issues": [
    {
      "key": "PROJ-123",
      "states": [
        {
          "state": "To Do",
          "calendar_days": 0.94,
          "business_hours": 8.5
        }
      ],
      "cycle_time_days": 4.17
    }
  ],
  "summary": {
    "by_state": {
      "In Progress": {
        "avg_days": 2.8,
        "max_days": 8.5
      }
    }
  }
}
```

Use for: Further analysis, automation, dashboards

Format 3: Visualization Data

```
State Analysis Summary:
========================
Total Items Analyzed: 47
Average Cycle Time: 5.2 days (35.4 business hours)

By State:
  In Progress: avg 2.8 days (18% of time)
  Code Review: avg 1.9 days (23% of time) ⚠️ BOTTLENECK
  Testing:     avg 0.8 days (10% of time)
  Done:        avg 0.7 days (8% of time)
```

Use for: Dashboards, reports, team communication
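Producing the Format 1 CSV layout is a one-function job with the standard library. A sketch, where the row dicts are whatever your analyzer emits; the field names here simply mirror the CSV header above:

```python
import csv

def export_durations_csv(rows: list[dict], path: str) -> None:
    """Write duration rows in the CSV layout shown above (Format 1)."""
    fieldnames = ['issue_key', 'state', 'start_time', 'end_time',
                  'calendar_days', 'business_hours']
    with open(path, 'w', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
```

`csv.DictWriter` keeps the column order stable regardless of dict ordering, which matters when stakeholders diff exports between runs.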
Step 7: Extend for Your Use Case
Template for Custom State Analyzer:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class YourStateTransition:
    timestamp: datetime
    from_state: str | None
    to_state: str
    # Add domain-specific fields:
    # user_id: str
    # reason: str
    # severity: int

class YourStateAnalyzer:
    def extract_transitions(self, audit_log: dict) -> list[YourStateTransition]:
        """Extract transitions from your system's format."""
        # Implement for your audit log structure
        pass

    def calculate_durations(self, transitions: list) -> list[StateDuration]:
        """Calculate time in each state."""
        # Implement calculation logic
        pass

    def find_bottlenecks(self, durations: list) -> dict:
        """Identify slow states."""
        # Implement bottleneck detection
        pass

    def format_report(self, durations: list) -> str:
        """Export results."""
        # Implement report generation
        pass
```

Common Adaptations:
- GitHub PRs: Extract from the pull_requests.timeline API (comments and events with state changes)
- Deployments: Parse CI/CD logs for stage transitions (timestamps from log entries)
- Support Tickets: Extract from ticket history (created, assigned, resolved timestamps)
- Manufacturing: Use MES (Manufacturing Execution System) event logs
Examples
Example 1: Analyze Single Jira Issue
```bash
# Get issue with changelog
uv run jira-tool get PROJ-123

# Or analyze via CLI
uv run jira-tool search "key = PROJ-123" --expand changelog --format json -o issue.json
uv run jira-tool analyze state-durations issue.json -o durations.csv
```

Example 2: Find Bottlenecks Across Project
```bash
# Export all issues from the open sprint with changelog
uv run jira-tool search "sprint in openSprints()" \
  --expand changelog \
  --format json \
  -o sprint_issues.json

# Analyze state durations
uv run jira-tool analyze state-durations sprint_issues.json \
  -o sprint_analysis.csv \
  --business-hours

# Load the CSV and find bottlenecks
python3 << 'EOF'
import pandas as pd

df = pd.read_csv('sprint_analysis.csv')

# Group by state and calculate averages
bottlenecks = df.groupby('state').agg({
    'calendar_days': 'mean',
    'business_hours': 'mean'
}).sort_values('calendar_days', ascending=False)

print("Bottleneck States (by average duration):")
print(bottlenecks)
EOF
```

Example 3: Track SLA Compliance
```bash
# Analyze customer-facing issues
uv run jira-tool search "type = Bug AND labels = urgent" \
  --expand changelog \
  --format json \
  -o urgent_bugs.json

uv run jira-tool analyze state-durations urgent_bugs.json \
  -o bug_analysis.csv \
  --date-from 2024-01-01 --date-to 2024-12-31

# Check whether resolution time meets the SLA (< 24 business hours)
python3 << 'EOF'
import pandas as pd

df = pd.read_csv('bug_analysis.csv')
resolved = df[df['state'] == 'Resolved']['business_hours']

sla_threshold = 24
avg_hours = resolved.mean()
# Share of items that met the SLA threshold
compliance = (resolved <= sla_threshold).mean() * 100 if len(resolved) else 0

print(f"Average Resolution Time: {avg_hours:.1f} business hours")
print(f"SLA Threshold: {sla_threshold} hours")
print(f"Compliance: {compliance:.0f}%")
EOF
```

Example 4: Custom State Analyzer (GitHub PR Review Time)
```python
# analyze_github_prs.py
import json
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PRStateTransition:
    timestamp: datetime
    from_state: str | None
    to_state: str

def _parse_ts(value: str) -> datetime:
    """GitHub timestamps are ISO 8601 with a trailing Z."""
    return datetime.fromisoformat(value.replace('Z', '+00:00'))

class GitHubPRAnalyzer:
    """Analyze GitHub PR time in review."""

    def extract_transitions(self, pr_data: dict) -> list[PRStateTransition]:
        """Extract state transitions from a GitHub PR timeline."""
        transitions = []

        # PR created = initial state
        transitions.append(PRStateTransition(
            timestamp=_parse_ts(pr_data['created_at']),
            from_state=None,
            to_state='Draft' if pr_data['draft'] else 'Open'
        ))

        # Parse review states from the timeline
        for event in pr_data.get('timeline', []):
            if event['event'] == 'review_requested':
                transitions.append(PRStateTransition(
                    timestamp=_parse_ts(event['created_at']),
                    from_state='Open',
                    to_state='In Review'
                ))
            elif event['event'] == 'pull_request_review':
                transitions.append(PRStateTransition(
                    timestamp=_parse_ts(event['submitted_at']),
                    from_state='In Review',
                    to_state=f"Review-{event['state']}"  # APPROVED, CHANGES_REQUESTED, COMMENTED
                ))
            elif event['event'] == 'merged':
                transitions.append(PRStateTransition(
                    timestamp=_parse_ts(event['created_at']),
                    from_state=transitions[-1].to_state,
                    to_state='Merged'
                ))
        return transitions

# Usage
analyzer = GitHubPRAnalyzer()
with open('pr.json') as f:
    pr_data = json.load(f)
transitions = analyzer.extract_transitions(pr_data)
for t in transitions:
    print(f"{t.from_state} → {t.to_state} at {t.timestamp}")
```

Requirements
For Jira Analysis (Project-Based)
- Jira Cloud instance with REST API v3 access
- Environment variables: JIRA_BASE_URL, JIRA_USERNAME, JIRA_API_TOKEN
- Python 3.10+ with jira-tool installed (uv sync)
- Changelog data: issues must be exported with --expand changelog
For Custom Analysis
- Audit log access: Your system must provide state change history
- Python 3.8+ (dataclasses support)
- datetime library for temporal calculations
- Optional: pandas for advanced statistical analysis
Related Skills
- jira-api - Fetch issue data with changelog using Jira REST API v3
- export-and-analyze-jira-data - Data export patterns and bulk analysis workflows
- build-jira-document-format - Create analysis reports in Atlassian Document Format
- kafka-consumer-implementation - For real-time state tracking with event streams
- observability-analyze-logs - Extract state transitions from application logs
Notes
Key Implementation Points:
- Always use UTC internally for timestamp calculations
- Handle open issues where end_time is None (still in current state)
- Business hours are configurable per organization (default: Mon-Fri, 9 AM - 5 PM)
- Complete state history is required for accurate analysis
- Partial days must be handled correctly (transitions happen at any time)
Common Pitfalls:
- Missing changelog expansion in API queries (the --expand changelog flag is required)
- Ignoring timezone conversions (leads to incorrect duration calculations)
- Not filtering out non-status changes from the changelog (use the status field only)
- Treating weekends as business days (skews the business-hours metric)
- Including incomplete items in averages (exclude items still in progress)

Business Hours Calculation:
- See Step 3 for the complete implementation
- Handles partial days, weekends, and configurable work hours
- Returns floating-point hours for precise metrics