nlp-engineer
NLP Engineer
Purpose
Provides expertise in Natural Language Processing systems design and implementation. Specializes in text classification, named entity recognition, sentiment analysis, and integrating modern LLMs using frameworks like Hugging Face, spaCy, and LangChain.
When to Use
- Building text classification systems
- Implementing named entity recognition (NER)
- Creating sentiment analysis pipelines
- Fine-tuning transformer models
- Designing LLM-powered features
- Implementing text preprocessing pipelines
- Building search and retrieval systems
- Creating text generation applications
Quick Start
Invoke this skill when:
- Building NLP pipelines (classification, NER, sentiment)
- Fine-tuning transformer models
- Implementing text preprocessing
- Integrating LLMs for text tasks
- Designing semantic search systems
Do NOT invoke when:
- RAG architecture design → use /ai-engineer
- LLM prompt optimization → use /prompt-engineer
- ML model deployment → use /mlops-engineer
- General data processing → use /data-engineer
Decision Framework
NLP Task Type?
├── Classification
│   ├── Simple → Fine-tuned BERT/DistilBERT
│   └── Zero-shot → LLM with prompting
├── NER
│   ├── Standard entities → spaCy
│   └── Custom entities → Fine-tuned model
├── Generation
│   └── LLM (GPT, Claude, Llama)
└── Semantic Search
    └── Embeddings + Vector store
Core Workflows
1. Text Classification Pipeline
- Collect and label training data
- Preprocess text (tokenization, cleaning)
- Select base model (BERT, RoBERTa)
- Fine-tune on labeled dataset
- Evaluate with appropriate metrics
- Deploy with inference optimization
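The preprocessing step above can be sketched with stdlib-only helpers (the function names are illustrative, not part of any library):

```python
import re

def clean_text(text: str) -> str:
    """Lowercase, strip URLs, drop punctuation noise, collapse whitespace."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)   # remove URLs
    text = re.sub(r"[^a-z0-9\s']", " ", text)   # keep word characters and apostrophes
    return re.sub(r"\s+", " ", text).strip()

def tokenize(text: str) -> list[str]:
    """Whitespace tokenization; a real pipeline uses the base model's own tokenizer."""
    return clean_text(text).split()

print(tokenize("Check https://example.com NOW!!"))  # ['check', 'now']
```

In practice the fine-tuning step would delegate tokenization to the chosen model's tokenizer (e.g. BERT's WordPiece); cleaning like this matters most for noisy sources such as scraped web text.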
2. NER System
- Define entity types for domain
- Create labeled training data
- Choose framework (spaCy, Hugging Face)
- Train custom NER model
- Evaluate precision, recall, F1
- Integrate with post-processing rules
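The evaluation step (precision, recall, F1) is usually computed over exact entity spans. A minimal sketch, assuming entities are represented as `(start, end, label)` tuples:

```python
def ner_prf(gold: set, pred: set) -> tuple[float, float, float]:
    """Exact-span match: a prediction counts only if (start, end, label) all agree."""
    tp = len(gold & pred)                               # true positives
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = {(0, 5, "PER"), (10, 16, "ORG")}
pred = {(0, 5, "PER"), (20, 24, "LOC")}
print(ner_prf(gold, pred))  # (0.5, 0.5, 0.5)
```

Exact-span matching is strict; some evaluations also report partial-overlap scores, which is a reasonable relaxation for long or nested entities.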
3. Embedding-Based Search
- Select embedding model (sentence-transformers)
- Generate embeddings for corpus
- Index in vector database
- Implement query embedding
- Add hybrid search (keyword + semantic)
- Tune similarity thresholds
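A production system would use sentence-transformers for embeddings and a vector database for indexing; the rank-and-threshold logic itself can be shown with toy vectors (corpus contents and the 0.3 threshold below are illustrative):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def search(query_vec: list[float], corpus: dict, threshold: float = 0.3) -> list:
    """Rank corpus entries by similarity to the query, dropping weak matches."""
    scored = [(doc_id, cosine(query_vec, vec)) for doc_id, vec in corpus.items()]
    scored = [(d, s) for d, s in scored if s >= threshold]
    return sorted(scored, key=lambda x: x[1], reverse=True)

corpus = {"a": [1.0, 0.0, 0.0], "b": [0.9, 0.1, 0.0], "c": [0.0, 1.0, 0.0]}
print(search([1.0, 0.0, 0.0], corpus))  # "a" ranks first; "c" falls below threshold
```

The threshold is the tuning knob from the last step: too low and irrelevant documents leak into results, too high and valid paraphrases are dropped. Hybrid search adds a keyword score (e.g. BM25) to this ranking.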
Best Practices
- Start with pretrained models, fine-tune as needed
- Use domain-specific preprocessing
- Evaluate with task-appropriate metrics
- Consider inference latency for production
- Implement proper text cleaning pipelines
- Use batching for efficient inference
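The batching practice above amounts to feeding the model fixed-size chunks instead of one text at a time, so padding and GPU utilization stay efficient. A minimal stdlib sketch (the helper name is illustrative):

```python
from typing import Iterator

def batches(items: list, batch_size: int = 32) -> Iterator[list]:
    """Yield fixed-size chunks; the final batch may be smaller."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

texts = ["t0", "t1", "t2", "t3", "t4"]
print(list(batches(texts, 2)))  # [['t0', 't1'], ['t2', 't3'], ['t4']]
```

Each chunk would then go through the tokenizer and model in one forward pass; sorting texts by length before batching further reduces wasted padding.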
Anti-Patterns
| Anti-Pattern | Problem | Correct Approach |
|---|---|---|
| Training from scratch | Wastes data and compute | Fine-tune pretrained |
| No preprocessing | Noisy inputs hurt performance | Clean and normalize text |
| Wrong metrics | Misleading evaluation | Task-appropriate metrics |
| Ignoring class imbalance | Biased predictions | Balance or weight classes |
| Overfitting to eval set | Poor generalization | Proper train/val/test splits |
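One common fix for the class-imbalance row is inverse-frequency loss weighting, which most training frameworks accept as per-class weights. A small sketch of computing them (formula: n / (k · count), where k is the number of classes):

```python
from collections import Counter

def class_weights(labels: list) -> dict:
    """Inverse-frequency weights: rarer classes get proportionally larger loss weight."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {label: n / (k * c) for label, c in counts.items()}

labels = ["pos"] * 8 + ["neg"] * 2
print(class_weights(labels))  # {'pos': 0.625, 'neg': 2.5}
```

The resulting weights plug into a weighted cross-entropy loss; the alternative in the table, balancing, resamples the data instead of reweighting the loss.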