# PyMC Bayesian Modeling

## Overview
PyMC is a Python library for Bayesian modeling and probabilistic programming. Build, fit, validate, and compare Bayesian models using PyMC's modern API (version 5.x+), including hierarchical models, MCMC sampling (NUTS), variational inference, and model comparison (LOO, WAIC).
## When to Use This Skill
This skill should be used when:
- Building Bayesian models (linear/logistic regression, hierarchical models, time series, etc.)
- Performing MCMC sampling or variational inference
- Conducting prior/posterior predictive checks
- Diagnosing sampling issues (divergences, convergence, ESS)
- Comparing multiple models using information criteria (LOO, WAIC)
- Implementing uncertainty quantification through Bayesian methods
- Working with hierarchical/multilevel data structures
- Handling missing data or measurement error in a principled way
## Standard Bayesian Workflow
Follow this workflow for building and validating Bayesian models:
### 1. Data Preparation
```python
import pymc as pm
import arviz as az
import numpy as np

# Load and prepare data
X = ...  # Predictors
y = ...  # Outcomes

# Standardize predictors for better sampling
X_mean = X.mean(axis=0)
X_std = X.std(axis=0)
X_scaled = (X - X_mean) / X_std
```

**Key practices:**
- Standardize continuous predictors (improves sampling efficiency)
- Center outcomes when possible
- Handle missing data explicitly (treat as parameters)
- Use named dimensions with `coords` for clarity
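One concrete route for the missing-data practice above: PyMC performs automatic imputation when `observed` is a numpy masked array, sampling the masked entries as latent parameters alongside the model's other unknowns. A minimal sketch of preparing such an array (the toy values and the commented model line are illustrative):

```python
import numpy as np

# Toy outcome vector where missingness is encoded as NaN
y_raw = np.array([2.1, np.nan, 3.4, 0.7, np.nan, 1.9])

# Mask the NaNs; PyMC treats masked entries of an observed array
# as unobserved nodes and samples them during inference
y_masked = np.ma.masked_invalid(y_raw)

print(y_masked.mask.sum())    # 2 values to impute
print(y_masked.compressed())  # the 4 observed values only

# Inside a model this would be used as, e.g.:
# y_obs = pm.Normal('y_obs', mu=mu, sigma=sigma, observed=y_masked)
```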
### 2. Model Building
```python
coords = {
    'predictors': ['var1', 'var2', 'var3'],
    'obs_id': np.arange(len(y))
}

with pm.Model(coords=coords) as model:
    # Data container (named so it can be swapped out later with pm.set_data)
    X_data = pm.Data('X_scaled', X_scaled)

    # Priors
    alpha = pm.Normal('alpha', mu=0, sigma=1)
    beta = pm.Normal('beta', mu=0, sigma=1, dims='predictors')
    sigma = pm.HalfNormal('sigma', sigma=1)

    # Linear predictor
    mu = alpha + pm.math.dot(X_data, beta)

    # Likelihood
    y_obs = pm.Normal('y_obs', mu=mu, sigma=sigma, observed=y, dims='obs_id')
```

**Key practices:**
- Use weakly informative priors (not flat priors)
- Use `HalfNormal` or `Exponential` for scale parameters
- Use named dimensions (`dims`) instead of `shape` when possible
- Use `pm.Data()` for values that will be updated for predictions
### 3. Prior Predictive Check
Always validate priors before fitting:

```python
with model:
    prior_pred = pm.sample_prior_predictive(draws=1000, random_seed=42)

# Visualize
az.plot_ppc(prior_pred, group='prior')
```

**Check:**
- Do prior predictions span reasonable values?
- Are extreme values plausible given domain knowledge?
- If priors generate implausible data, adjust and re-check

### 4. Fit Model
```python
with model:
    # Optional: quick exploration with ADVI
    # approx = pm.fit(n=20000)

    # Full MCMC inference
    idata = pm.sample(
        draws=2000,
        tune=1000,
        chains=4,
        target_accept=0.9,
        random_seed=42,
        idata_kwargs={'log_likelihood': True}  # For model comparison
    )
```

**Key parameters:**
- `draws=2000`: Number of samples per chain
- `tune=1000`: Warmup samples (discarded)
- `chains=4`: Run 4 chains for convergence checking
- `target_accept=0.9`: Higher for difficult posteriors (0.95-0.99)
- Include `log_likelihood=True` for model comparison
### 5. Check Diagnostics
Use the diagnostic script:
```python
from scripts.model_diagnostics import check_diagnostics

results = check_diagnostics(idata, var_names=['alpha', 'beta', 'sigma'])
```

**Check:**
- R-hat < 1.01: Chains have converged
- ESS > 400: Sufficient effective samples
- No divergences: NUTS sampled successfully
- Trace plots: Chains should mix well (fuzzy caterpillar)

**If issues arise:**
- Divergences → Increase `target_accept` to 0.95, use non-centered parameterization
- Low ESS → Sample more draws, reparameterize to reduce correlation
- High R-hat → Run longer, check for multimodality
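For intuition about what the R-hat threshold measures, the classic split-R̂ statistic can be computed by hand: split each chain in half, compare the within-chain variance W to the between-chain variance B, and take R̂ = √(V̂/W). A numpy sketch (ArviZ additionally rank-normalizes, so its numbers differ slightly):

```python
import numpy as np

def split_rhat(samples: np.ndarray) -> float:
    """Classic split-R-hat for an array shaped (chains, draws)."""
    chains, draws = samples.shape
    half = draws // 2
    # Split each chain in half -> 2*chains sequences of length `half`
    seqs = samples[:, :2 * half].reshape(chains * 2, half)
    n = seqs.shape[1]
    chain_means = seqs.mean(axis=1)
    B = n * chain_means.var(ddof=1)       # between-chain variance
    W = seqs.var(axis=1, ddof=1).mean()   # within-chain variance
    var_hat = (n - 1) / n * W + B / n     # pooled variance estimate
    return float(np.sqrt(var_hat / W))

rng = np.random.default_rng(0)
mixed = rng.normal(size=(4, 1000))        # well-mixed chains
stuck = mixed + np.arange(4)[:, None]     # chains stuck at different levels
print(split_rhat(mixed))                  # ~1.00, passes the < 1.01 check
print(split_rhat(stuck))                  # >> 1.01, fails it
```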
### 6. Posterior Predictive Check
Validate model fit:

```python
with model:
    pm.sample_posterior_predictive(idata, extend_inferencedata=True, random_seed=42)

# Visualize
az.plot_ppc(idata)
```

**Check:**
- Do posterior predictions capture observed data patterns?
- Are systematic deviations evident (model misspecification)?
- Consider alternative models if fit is poor

### 7. Analyze Results
```python
# Summary statistics
print(az.summary(idata, var_names=['alpha', 'beta', 'sigma']))

# Posterior distributions
az.plot_posterior(idata, var_names=['alpha', 'beta', 'sigma'])

# Coefficient estimates
az.plot_forest(idata, var_names=['beta'], combined=True)
```

### 8. Make Predictions
```python
X_new = ...  # New predictor values
X_new_scaled = (X_new - X_mean) / X_std

with model:
    pm.set_data({'X_scaled': X_new_scaled})
    post_pred = pm.sample_posterior_predictive(
        idata.posterior,
        var_names=['y_obs'],
        random_seed=42
    )

# Extract prediction intervals
y_pred_mean = post_pred.posterior_predictive['y_obs'].mean(dim=['chain', 'draw'])
y_pred_hdi = az.hdi(post_pred.posterior_predictive, var_names=['y_obs'])
```

## Common Model Patterns
### Linear Regression
For continuous outcomes with linear relationships:

```python
with pm.Model() as linear_model:
    alpha = pm.Normal('alpha', mu=0, sigma=10)
    beta = pm.Normal('beta', mu=0, sigma=10, shape=n_predictors)
    sigma = pm.HalfNormal('sigma', sigma=1)
    mu = alpha + pm.math.dot(X, beta)
    y = pm.Normal('y', mu=mu, sigma=sigma, observed=y_obs)
```

Use template: `assets/linear_regression_template.py`

### Logistic Regression
For binary outcomes:

```python
with pm.Model() as logistic_model:
    alpha = pm.Normal('alpha', mu=0, sigma=10)
    beta = pm.Normal('beta', mu=0, sigma=10, shape=n_predictors)
    logit_p = alpha + pm.math.dot(X, beta)
    y = pm.Bernoulli('y', logit_p=logit_p, observed=y_obs)
```

### Hierarchical Models
For grouped data (use non-centered parameterization):

```python
with pm.Model(coords={'groups': group_names}) as hierarchical_model:
    # Hyperpriors
    mu_alpha = pm.Normal('mu_alpha', mu=0, sigma=10)
    sigma_alpha = pm.HalfNormal('sigma_alpha', sigma=1)

    # Group-level (non-centered)
    alpha_offset = pm.Normal('alpha_offset', mu=0, sigma=1, dims='groups')
    alpha = pm.Deterministic('alpha', mu_alpha + sigma_alpha * alpha_offset, dims='groups')

    # Observation-level
    mu = alpha[group_idx]
    sigma = pm.HalfNormal('sigma', sigma=1)
    y = pm.Normal('y', mu=mu, sigma=sigma, observed=y_obs)
```

Use template: `assets/hierarchical_model_template.py`

**Critical:** Always use non-centered parameterization for hierarchical models to avoid divergences.
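The non-centered trick rests on a simple identity: if `offset ~ Normal(0, 1)`, then `mu_alpha + sigma_alpha * offset` has exactly the `Normal(mu_alpha, sigma_alpha)` distribution of a centered `alpha`, but the sampler explores the well-conditioned offset space rather than the funnel between `alpha` and `sigma_alpha`. A quick numpy check of the identity (the values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
mu_alpha, sigma_alpha = 2.0, 0.5

# Non-centered draw: sample a standard-normal offset, then shift and scale.
# Distributionally identical to sampling alpha ~ Normal(mu_alpha, sigma_alpha)
# directly, but decoupled from sigma_alpha in the sampler's geometry.
offset = rng.normal(0.0, 1.0, size=100_000)
alpha = mu_alpha + sigma_alpha * offset

print(alpha.mean(), alpha.std())  # approximately 2.0 and 0.5
```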
### Poisson Regression
For count data:

```python
with pm.Model() as poisson_model:
    alpha = pm.Normal('alpha', mu=0, sigma=10)
    beta = pm.Normal('beta', mu=0, sigma=10, shape=n_predictors)
    log_lambda = alpha + pm.math.dot(X, beta)
    y = pm.Poisson('y', mu=pm.math.exp(log_lambda), observed=y_obs)
```

For overdispersed counts, use `NegativeBinomial` instead.

### Time Series
For autoregressive processes:

```python
with pm.Model() as ar_model:
    sigma = pm.HalfNormal('sigma', sigma=1)
    rho = pm.Normal('rho', mu=0, sigma=0.5, shape=ar_order)
    init_dist = pm.Normal.dist(mu=0, sigma=sigma)
    y = pm.AR('y', rho=rho, sigma=sigma, init_dist=init_dist, observed=y_obs)
```

## Model Comparison
### Comparing Models
Use LOO or WAIC for model comparison:

```python
from scripts.model_comparison import compare_models, check_loo_reliability

# Fit models with log_likelihood
models = {
    'Model1': idata1,
    'Model2': idata2,
    'Model3': idata3
}

# Compare using LOO
comparison = compare_models(models, ic='loo')

# Check reliability
check_loo_reliability(models)
```

**Interpretation:**
- **Δloo < 2**: Models are similar, choose simpler model
- **2 < Δloo < 4**: Weak evidence for better model
- **4 < Δloo < 10**: Moderate evidence
- **Δloo > 10**: Strong evidence for better model

**Check Pareto-k values:**
- k < 0.7: LOO reliable
- k > 0.7: Consider WAIC or k-fold CV
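The interpretation scale above is easy to encode when reporting comparisons programmatically; a hypothetical helper (not part of `scripts/model_comparison.py`):

```python
def interpret_delta_loo(delta: float) -> str:
    """Map an absolute LOO difference to the evidence scale above."""
    delta = abs(delta)
    if delta < 2:
        return "similar - prefer the simpler model"
    if delta < 4:
        return "weak evidence for the better model"
    if delta < 10:
        return "moderate evidence"
    return "strong evidence"

print(interpret_delta_loo(1.3))   # similar - prefer the simpler model
print(interpret_delta_loo(6.5))   # moderate evidence
```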
### Model Averaging
When models are similar, average predictions:

```python
from scripts.model_comparison import model_averaging

averaged_pred, weights = model_averaging(models, var_name='y_obs')
```

## Distribution Selection Guide
### For Priors
**Scale parameters (σ, τ):**
- `pm.HalfNormal('sigma', sigma=1)` - Default choice
- `pm.Exponential('sigma', lam=1)` - Alternative
- `pm.Gamma('sigma', alpha=2, beta=1)` - More informative

**Unbounded parameters:**
- `pm.Normal('theta', mu=0, sigma=1)` - For standardized data
- `pm.StudentT('theta', nu=3, mu=0, sigma=1)` - Robust to outliers

**Positive parameters:**
- `pm.LogNormal('theta', mu=0, sigma=1)`
- `pm.Gamma('theta', alpha=2, beta=1)`

**Probabilities:**
- `pm.Beta('p', alpha=2, beta=2)` - Weakly informative
- `pm.Uniform('p', lower=0, upper=1)` - Non-informative (use sparingly)

**Correlation matrices:**
- `pm.LKJCorr('corr', n=n_vars, eta=2)` - eta=1 uniform, eta>1 prefers identity
### For Likelihoods
**Continuous outcomes:**
- `pm.Normal('y', mu=mu, sigma=sigma)` - Default for continuous data
- `pm.StudentT('y', nu=nu, mu=mu, sigma=sigma)` - Robust to outliers

**Count data:**
- `pm.Poisson('y', mu=lam)` - Equidispersed counts
- `pm.NegativeBinomial('y', mu=mu, alpha=alpha)` - Overdispersed counts
- `pm.ZeroInflatedPoisson('y', psi=psi, mu=mu)` - Excess zeros

**Binary outcomes:**
- `pm.Bernoulli('y', p=p)` or `pm.Bernoulli('y', logit_p=logit_p)`

**Categorical outcomes:**
- `pm.Categorical('y', p=probs)`

See `references/distributions.md` for a comprehensive distribution reference.

## Sampling and Inference
### MCMC with NUTS
Default and recommended for most models:

```python
idata = pm.sample(
    draws=2000,
    tune=1000,
    chains=4,
    target_accept=0.9,
    random_seed=42
)
```

**Adjust when needed:**
- Divergences → `target_accept=0.95` or higher
- Slow sampling → Use ADVI for initialization
- Discrete parameters → Use `pm.Metropolis()` for discrete variables
### Variational Inference
Fast approximation for exploration or initialization:

```python
with model:
    approx = pm.fit(n=20000, method='advi')

    # Use for initialization
    start = approx.sample(return_inferencedata=False)[0]
    idata = pm.sample(start=start)
```

**Trade-offs:**
- Much faster than MCMC
- Approximate (may underestimate uncertainty)
- Good for large models or quick exploration

See `references/sampling_inference.md` for a detailed sampling guide.

## Diagnostic Scripts
### Comprehensive Diagnostics
```python
from scripts.model_diagnostics import create_diagnostic_report

create_diagnostic_report(
    idata,
    var_names=['alpha', 'beta', 'sigma'],
    output_dir='diagnostics/'
)
```

**Creates:**
- Trace plots
- Rank plots (mixing check)
- Autocorrelation plots
- Energy plots
- ESS evolution
- Summary statistics CSV
### Quick Diagnostic Check
```python
from scripts.model_diagnostics import check_diagnostics

results = check_diagnostics(idata)
```

Checks R-hat, ESS, divergences, and tree depth.
## Common Issues and Solutions
### Divergences
**Symptom:** `idata.sample_stats.diverging.sum() > 0`

**Solutions:**
- Increase `target_accept` to 0.95 or 0.99
- Use non-centered parameterization (hierarchical models)
- Add stronger priors to constrain parameters
- Check for model misspecification
### Low Effective Sample Size
**Symptom:** `ESS < 400`

**Solutions:**
- Sample more draws: `draws=5000`
- Reparameterize to reduce posterior correlation
- Use QR decomposition for regression with correlated predictors
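The QR suggestion works by replacing correlated predictor columns with an orthogonal basis: decompose X = QR, fit y on Q (whose coefficient posterior is far less correlated), then map the coefficients back with β = R⁻¹β̃. A numpy sketch of the transform itself (variable names are illustrative, and `beta_tilde` stands in for posterior draws):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two strongly correlated predictor columns
x1 = rng.normal(size=200)
X = np.column_stack([x1, x1 + 0.05 * rng.normal(size=200)])

# Thin QR decomposition: X = Q @ R, with orthonormal columns in Q
Q, R = np.linalg.qr(X)
print(np.abs(Q.T @ Q - np.eye(2)).max())  # ~0: columns are orthonormal

# After sampling coefficients on Q ('beta_tilde' in the reparameterized
# model), recover coefficients on the original predictor scale:
beta_tilde = np.array([0.7, -0.2])     # stand-in posterior draw
beta = np.linalg.solve(R, beta_tilde)  # beta = R^{-1} @ beta_tilde

# Predictions are unchanged: Q @ beta_tilde == X @ beta
print(np.allclose(Q @ beta_tilde, X @ beta))  # True
```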
### High R-hat
**Symptom:** `R-hat > 1.01`

**Solutions:**
- Run longer chains: `tune=2000, draws=5000`
- Check for multimodality
- Improve initialization with ADVI
### Slow Sampling
**Solutions:**
- Use ADVI initialization
- Reduce model complexity
- Increase parallelization: `cores=8, chains=8`
- Use variational inference if appropriate
## Best Practices
### Model Building
- Always standardize predictors for better sampling
- Use weakly informative priors (not flat)
- Use named dimensions (`dims`) for clarity
- Non-centered parameterization for hierarchical models
- Check prior predictive before fitting
### Sampling
- Run multiple chains (at least 4) for convergence
- Use `target_accept=0.9` as baseline (higher if needed)
- Include `log_likelihood=True` for model comparison
- Set random seed for reproducibility
### Validation
- Check diagnostics before interpretation (R-hat, ESS, divergences)
- Posterior predictive check for model validation
- Compare multiple models when appropriate
- Report uncertainty (HDI intervals, not just point estimates)
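The HDI recommended above is the narrowest interval containing the requested posterior mass (ArviZ defaults to 94%). For intuition, it can be computed directly from raw draws; a sketch of what `az.hdi` does for a unimodal sample:

```python
import numpy as np

def hdi(samples: np.ndarray, prob: float = 0.94) -> tuple[float, float]:
    """Narrowest interval containing `prob` of the draws."""
    x = np.sort(samples)
    n = len(x)
    k = int(np.floor(prob * n))      # interval width in number of draws
    widths = x[k:] - x[: n - k]      # widths of all candidate intervals
    i = int(np.argmin(widths))       # index of the narrowest one
    return float(x[i]), float(x[i + k])

rng = np.random.default_rng(7)
draws = rng.normal(0.0, 1.0, size=50_000)
lo, hi = hdi(draws)
print(lo, hi)  # close to the analytic 94% interval (-1.88, 1.88)
```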
### Workflow
- Start simple, add complexity gradually
- Prior predictive check → Fit → Diagnostics → Posterior predictive check
- Iterate on model specification based on checks
- Document assumptions and prior choices
## Resources
This skill includes:
### References (`references/`)

- `distributions.md`: Comprehensive catalog of PyMC distributions organized by category (continuous, discrete, multivariate, mixture, time series). Use when selecting priors or likelihoods.
- `sampling_inference.md`: Detailed guide to sampling algorithms (NUTS, Metropolis, SMC), variational inference (ADVI, SVGD), and handling sampling issues. Use when encountering convergence problems or choosing inference methods.
- `workflows.md`: Complete workflow examples and code patterns for common model types, data preparation, prior selection, and model validation. Use as a cookbook for standard Bayesian analyses.
### Scripts (`scripts/`)

- `model_diagnostics.py`: Automated diagnostic checking and report generation. Functions: `check_diagnostics()` for quick checks, `create_diagnostic_report()` for comprehensive analysis with plots.
- `model_comparison.py`: Model comparison utilities using LOO/WAIC. Functions: `compare_models()`, `check_loo_reliability()`, `model_averaging()`.
### Templates (`assets/`)

- `linear_regression_template.py`: Complete template for Bayesian linear regression with full workflow (data prep, prior checks, fitting, diagnostics, predictions).
- `hierarchical_model_template.py`: Complete template for hierarchical/multilevel models with non-centered parameterization and group-level analysis.
## Quick Reference
### Model Building
```python
with pm.Model(coords={'var': names}) as model:
    # Priors
    param = pm.Normal('param', mu=0, sigma=1, dims='var')
    # Likelihood
    y = pm.Normal('y', mu=..., sigma=..., observed=data)
```

### Sampling
```python
idata = pm.sample(draws=2000, tune=1000, chains=4, target_accept=0.9)
```

### Diagnostics
```python
from scripts.model_diagnostics import check_diagnostics

check_diagnostics(idata)
```

### Model Comparison
```python
from scripts.model_comparison import compare_models

compare_models({'m1': idata1, 'm2': idata2}, ic='loo')
```

### Predictions
```python
with model:
    pm.set_data({'X': X_new})
    pred = pm.sample_posterior_predictive(idata.posterior)
```

## Additional Notes
- PyMC integrates with ArviZ for visualization and diagnostics
- Use `pm.model_to_graphviz(model)` to visualize model structure
- Save results with `idata.to_netcdf('results.nc')`
- Load with `az.from_netcdf('results.nc')`
- For very large models, consider minibatch ADVI or data subsampling