stata-data-cleaning
Stata Data Cleaning
Purpose
This skill helps economists clean, transform, and prepare datasets for analysis in Stata. It emphasizes reproducibility, proper documentation, and handling common data quality issues found in economic research.
When to Use
- Cleaning raw survey or administrative data
- Merging multiple data sources
- Handling missing values, duplicates, and outliers
- Creating analysis-ready panel datasets
- Documenting data transformations for replication
Instructions
Step 1: Understand the Data
Before generating code, ask the user:
- What is the data source? (survey, administrative, API, etc.)
- What is the unit of observation?
- What are the key variables needed for analysis?
- Are there known data quality issues to address?
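When the user cannot answer these questions offhand, a quick first pass over the raw file often surfaces them; a minimal inspection sketch (the filename is illustrative):

```stata
* First-pass look at an unfamiliar dataset
use "data/raw/raw_survey_data.dta", clear
describe, short     // variable and observation counts, sort order
codebook, compact   // types, ranges, distinct values, missing counts
list in 1/20        // eyeball the first rows
```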
Step 2: Generate Cleaning Pipeline
Create a Stata do-file that:
- Has a clear header with project info and date
- Sets up the environment (clear all, set memory, log)
- Loads and inspects raw data
- Documents each transformation with comments
- Creates a codebook for the final dataset
Step 3: Follow Best Practices
- Use `assert` statements to verify data integrity
- Create labeled variables with `label variable`
- Use value labels for categorical variables
- Generate a log file for reproducibility
- Save intermediate files when appropriate
Example Output
```stata
/*==============================================================================
Project: Economic Analysis Data Cleaning
Author: [Your Name]
Date: [Date]
Purpose: Clean raw survey data for regression analysis
Input: raw_survey_data.dta
Output: cleaned_analysis_data.dta
==============================================================================*/
* ============================================
* 1. SETUP
* ============================================
clear all
set more off
* Set working directory first so relative paths resolve
cd "/path/to/project"
cap log close
cap mkdir logs
local today = subinstr(c(current_date), " ", "_", .)
log using "logs/data_cleaning_`today'.log", replace
* Define globals for paths
global raw_data "data/raw"
global clean_data "data/clean"
global output "output"
* ============================================
* 2. LOAD AND INSPECT RAW DATA
* ============================================
use "${raw_data}/raw_survey_data.dta", clear
* Basic inspection
describe
summarize
codebook, compact
* Check for duplicates
duplicates report id_var
duplicates tag id_var, gen(dup_tag)
list id_var if dup_tag > 0
drop dup_tag
* ============================================
* 3. VARIABLE CLEANING
* ============================================
* --- Rename variables for clarity ---
rename q1 age
rename q2 income_reported
rename q3 education_level
* --- Clean numeric variables ---
* Replace missing value codes with .
mvdecode age income_reported, mv(-99 -88 -77)
* Cap outliers at 99th percentile
qui sum income_reported, detail
local p99 = r(p99)
replace income_reported = `p99' if income_reported > `p99' & !mi(income_reported)
* --- Clean string variables ---
* Standardize state names
replace state = upper(trim(state))
replace state = "NEW YORK" if inlist(state, "NY", "N.Y.", "N Y")
* --- Create categorical variables ---
gen education_cat = .
replace education_cat = 1 if education_level < 12
replace education_cat = 2 if education_level == 12
replace education_cat = 3 if education_level > 12 & education_level <= 16
replace education_cat = 4 if education_level > 16 & !mi(education_level)
label define edu_lbl 1 "Less than HS" 2 "High School" 3 "College" 4 "Graduate"
label values education_cat edu_lbl
* ============================================
* 4. HANDLE MISSING DATA
* ============================================
* Create missing indicator variables
gen mi_income = mi(income_reported)
* Document missingness
tab mi_income
* Count complete cases (all three variables nonmissing)
egen nonmiss_count = rownonmiss(age income_reported education_cat)
gen complete_case = (nonmiss_count == 3)
tab complete_case
* ============================================
* 5. CREATE DERIVED VARIABLES
* ============================================
* Age groups
gen age_group = .
replace age_group = 1 if age >= 18 & age < 30
replace age_group = 2 if age >= 30 & age < 50
replace age_group = 3 if age >= 50 & age < 65
replace age_group = 4 if age >= 65 & !mi(age)
label define age_lbl 1 "18-29" 2 "30-49" 3 "50-64" 4 "65+"
label values age_group age_lbl
* Log income
gen log_income = ln(income_reported + 1)
* ============================================
* 6. DATA VALIDATION
* ============================================
* Assert expected ranges
assert age >= 18 & age <= 120 if !mi(age)
assert income_reported >= 0 if !mi(income_reported)
* Check variable types
assert !mi(id_var)
isid id_var // Verify unique identifier
* ============================================
* 7. LABEL VARIABLES
* ============================================
label variable age "Age in years"
label variable income_reported "Annual income (USD)"
label variable education_cat "Education category"
label variable log_income "Log of annual income"
label variable mi_income "Missing income indicator"
* ============================================
* 8. FINAL CHECKS AND SAVE
* ============================================
* Keep relevant variables
keep id_var age age_group income_reported log_income ///
education_cat mi_income state year
* Order variables logically
order id_var year state age age_group income_reported ///
log_income education_cat mi_income
* Compress to minimize file size
compress
* Save cleaned data
save "${clean_data}/cleaned_analysis_data.dta", replace
* Create codebook
codebook, compact
* Close log
log close
* ============================================
* END OF FILE
* ============================================
```

Requirements
Software
- Stata 15+ (some commands require newer versions)
Recommended User-Written Commands
```stata
ssc install unique    // For unique value checking
ssc install mdesc     // For missing data patterns
ssc install labutil   // For label manipulation
```

Best Practices
- Always start with `clear all` to ensure a clean environment
- Use log files to document all transformations
- Comment extensively - explain WHY, not just WHAT
- Use `assert` statements to catch data errors early
- Create a data dictionary alongside your cleaned data
- Version your do-files and datasets
Common Pitfalls
- ❌ Not checking for duplicates before merging
- ❌ Forgetting to handle missing value codes (-99, -88, etc.)
- ❌ Not labeling variables and values
- ❌ Overwriting raw data files
- ❌ Not documenting data transformations
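The first pitfall, merging on a key that is not actually unique, can be caught before it corrupts the data; a sketch of a guarded merge (dataset and variable names are hypothetical):

```stata
* Confirm the using data is unique on the key before an m:1 merge
use "data/raw/county_covariates.dta", clear
isid county_id                 // aborts with an error if the key repeats
tempfile covars
save `covars'

use "data/raw/main_panel.dta", clear
merge m:1 county_id using `covars'
tab _merge                     // inspect match rates before dropping
keep if _merge == 3
drop _merge
```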
References
Changelog
v1.0.0
- Initial release with comprehensive cleaning template