mcp-development

Use when building "MCP server", "Model Context Protocol", creating "Claude tools", "MCP tools", or asking about "FastMCP", "MCP SDK", "tool development for LLMs", "external API integration for Claude"


NPX Install

```shell
npx skill4agent add eyadsibai/ltk mcp-development
```

MCP Server Development Guide

Build high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services.

Core Design Principles

Build for Workflows, Not Just APIs

| Principle | Why |
| --- | --- |
| Consolidate operations | A single tool handles a complete task |
| Return high-signal data | Agents have limited context |
| Provide format options | "concise" vs "detailed" modes |
| Use human-readable IDs | Not technical codes |
| Make errors actionable | Guide toward correct usage |
Key concept: Don't just wrap API endpoints. Design tools that enable complete workflows agents actually need.
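As a sketch of consolidation, compare one workflow tool against separate `get_user` / `list_orders` wrappers the agent would otherwise have to chain and join itself. All names and data here are hypothetical stand-ins for a real backend:

```python
# Hypothetical in-memory stand-ins for a real user/order backend.
USERS = {"ada@example.com": {"id": "u1", "name": "Ada"}}
ORDERS = {"u1": [{"order": "A-100", "status": "shipped", "eta": "2024-06-01"}]}

def get_order_status(user_email: str) -> str:
    """Complete workflow: look up the user, then summarize their orders.

    Consolidates what would otherwise be two atomic API wrappers,
    and returns human-readable text rather than raw records.
    """
    user = USERS.get(user_email)
    if user is None:
        # Actionable error: tells the agent what to do next.
        return f"No user found for {user_email}. Check the address and retry."
    orders = ORDERS.get(user["id"], [])
    if not orders:
        return f"{user['name']} has no orders."
    return "\n".join(
        f"{o['order']}: {o['status']} (ETA {o['eta']})" for o in orders
    )
```

One call answers the question the agent actually has ("where is this user's order?") instead of forcing it to spend context joining intermediate results.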

Development Phases

Phase 1: Research

| Step | Action |
| --- | --- |
| Study MCP Protocol | Read modelcontextprotocol.io/llms-full.txt |
| Study SDK docs | Python or TypeScript SDK README |
| Study target API | Read ALL available documentation |
| Create implementation plan | Before writing code |

Phase 2: Design

| Decision | Options |
| --- | --- |
| Language | Python (FastMCP) or TypeScript |
| Tool granularity | Atomic vs workflow-oriented |
| Response format | JSON, Markdown, or both |
| Error handling | What errors can occur, and how to recover |

Phase 3: Implementation

| Component | Purpose |
| --- | --- |
| Input validation | Pydantic (Python) or Zod (TypeScript) |
| Tool descriptions | Clear, with examples |
| Error messages | Include suggested next steps |
| Response formatting | Consistent across tools |
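A minimal sketch of the error-handling component: every external call is wrapped, and failures come back as actionable text rather than a raw traceback. `fetch_weather` and `_api_get` are hypothetical; `_api_get` stands in for a real HTTP client:

```python
def _api_get(path: str) -> dict:
    """Stand-in for a real HTTP client; raises like one on failure."""
    if "Paris" in path:
        return {"temp": 18, "sky": "clear"}
    raise ConnectionError(path)

def fetch_weather(city: str) -> str:
    """Get current weather for a city. Example: fetch_weather('Paris')."""
    try:
        data = _api_get(f"/weather/{city}")
    except ConnectionError:
        # Actionable error: what failed plus a suggested next step.
        return ("Weather service unreachable for that city. Verify the "
                "city name and retry; if it persists, check service status.")
    # Consistent, human-readable response shape across tools.
    return f"{city}: {data['temp']}°C, {data['sky']}"
```

In a real server the same try/except-and-suggest pattern would wrap every outbound call, so the agent always receives something it can act on.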

Phase 4: Testing

Critical: MCP servers are long-running processes. Never run one directly in your main process.

| Approach | How |
| --- | --- |
| Evaluation harness | Recommended |
| tmux session | Run the server in a separate session |
| Timeout wrapper | `timeout 5s python server.py` |
| MCP Inspector | Official debugging tool |
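The timeout-wrapper idea can also be expressed in Python: start the server as a subprocess, give it a bounded window, and treat "still alive when the window closes" as a passing smoke test. This is a sketch of that pattern, not part of any SDK:

```python
import subprocess
import sys

def smoke_test_server(cmd: list[str], seconds: float = 5.0) -> bool:
    """Launch a long-running server command out-of-process and check
    that it survives `seconds` without crashing, then kill it."""
    proc = subprocess.Popen(
        cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL
    )
    try:
        proc.wait(timeout=seconds)
        return False  # exited within the window: startup crash
    except subprocess.TimeoutExpired:
        proc.kill()   # still running: healthy start, clean up
        proc.wait()
        return True
```

This is the inverse of a normal timeout: for a long-running server, timing out is the success case.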

Tool Annotations

| Annotation | Meaning | Default |
| --- | --- | --- |
| `readOnlyHint` | Doesn't modify state | false |
| `destructiveHint` | Can cause damage | true |
| `idempotentHint` | Repeated calls are safe | false |
| `openWorldHint` | Interacts with external systems | true |
Key concept: Annotations help the LLM decide when and how safely to use tools.
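A sketch of how a client might consume these annotations: read-only tools run freely, while anything not explicitly marked safe falls back to the spec's cautious default (`destructiveHint` true). The tool names and registry are hypothetical:

```python
# Hypothetical registry mapping tool names to MCP-style annotations.
TOOL_ANNOTATIONS = {
    "search_docs":   {"readOnlyHint": True, "openWorldHint": True},
    "delete_record": {"destructiveHint": True, "idempotentHint": False},
}

def needs_confirmation(tool: str) -> bool:
    """Decide whether to ask the user before running a tool.

    Read-only tools are always safe; for everything else, fall back
    to the destructive default (true) when no hint is given.
    """
    ann = TOOL_ANNOTATIONS.get(tool, {})
    if ann.get("readOnlyHint", False):
        return False
    return ann.get("destructiveHint", True)
```

Note the unknown-tool case: with no annotations at all, the defaults make the client assume the worst, which is the safe direction to fail in.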

Input Design

Validation Patterns

| Pattern | Use Case |
| --- | --- |
| Required fields | Core parameters |
| Optional with defaults | Convenience parameters |
| Enums | Limited set of valid values |
| Min/max constraints | Numeric bounds |
| Pattern matching | Format validation (email, URL) |
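The patterns above can be sketched with the standard library alone (a real server would likely use Pydantic or Zod instead, as noted earlier). `validate_ticket_query` and its fields are hypothetical examples:

```python
import re

VALID_STATUSES = {"open", "closed", "archived"}       # enum-style constraint
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # rough format check

def validate_ticket_query(status: str, user_email: str, limit: int = 20) -> None:
    """Raise ValueError with a corrective message on the first bad input.

    status is an enum, user_email is pattern-matched, limit has
    min/max bounds and a convenience default.
    """
    if status not in VALID_STATUSES:
        raise ValueError(
            f"status must be one of {sorted(VALID_STATUSES)}, got {status!r}"
        )
    if not EMAIL_RE.match(user_email):
        raise ValueError(
            f"user_email looks malformed: {user_email!r}. Example: 'a@b.com'"
        )
    if not 1 <= limit <= 100:
        raise ValueError(f"limit must be between 1 and 100, got {limit}")
```

Each rejection names the bad parameter and what a valid value looks like, which feeds directly into the error-handling guidance below.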

Parameter Naming

| Good | Bad | Why |
| --- | --- | --- |
| `user_email` | `e` | Self-documenting |
| `limit` | `max_results_to_return` | Concise but clear |
| `include_archived` | `ia` | Descriptive boolean |

Response Design

Format Options

| Format | Use Case |
| --- | --- |
| JSON | Programmatic use, structured data |
| Markdown | Human readability, reports |
| Hybrid | JSON inside Markdown code blocks |
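Offering both formats from one tool can be as simple as a `fmt` parameter, echoing the "concise" vs "detailed" modes mentioned under the design principles. A minimal sketch with hypothetical field names:

```python
import json

def format_results(rows: list[dict], fmt: str = "concise") -> str:
    """Render results as compact Markdown ('concise', the default)
    or as full JSON ('detailed') for programmatic use."""
    if fmt == "detailed":
        return json.dumps(rows, indent=2)
    # Concise Markdown: one bullet per row, key fields only.
    return "\n".join(f"- {r['name']}: {r['status']}" for r in rows)
```

Defaulting to the concise form keeps routine calls cheap in tokens while still letting the agent opt into structured data when it needs to parse the result.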

Response Guidelines

| Guideline | Why |
| --- | --- |
| ~25,000 token limit | Context constraints |
| Truncate with an indicator | Don't silently cut |
| Support pagination | `limit` and `offset` params |
| Include metadata | Total count, `has_more` flag |
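The pagination and metadata guidelines combine into one small response shape. A sketch (the exact envelope is a design choice, not mandated by MCP):

```python
def paginate(items: list, limit: int = 50, offset: int = 0) -> dict:
    """Return one page of results plus the metadata an agent needs
    to decide whether (and how) to fetch the next page."""
    page = items[offset:offset + limit]
    return {
        "items": page,
        "total": len(items),                       # total count
        "offset": offset,
        "has_more": offset + limit < len(items),   # explicit, not silent
    }
```

Because `has_more` and `total` are explicit, the agent never has to guess whether a short page means "end of data" or "truncated response".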

Error Handling

Error Message Structure

| Element | Purpose |
| --- | --- |
| What failed | Clear description |
| Why it failed | Root cause, if known |
| How to fix | Suggested next action |
| Example | Correct usage |
Key concept: Error messages should guide the agent toward correct usage, not just diagnose problems.
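One way to keep all four elements present and consistently ordered is a small helper that every tool routes its errors through. `tool_error` is a hypothetical convention, not an SDK function:

```python
def tool_error(what: str, why: str = "", fix: str = "", example: str = "") -> str:
    """Assemble an agent-facing error from the four elements above:
    what failed, why, how to fix it, and a correct-usage example."""
    parts = [f"Error: {what}."]
    if why:
        parts.append(f"Cause: {why}.")
    if fix:
        parts.append(f"Try: {fix}.")
    if example:
        parts.append(f"Example: {example}")
    return " ".join(parts)
```

Funneling errors through one builder also makes the "consistent formats" check later in the quality checklist easier to satisfy.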

Quality Checklist

Code Quality

| Check | Description |
| --- | --- |
| No duplicated code | Extract shared logic |
| Consistent formats | Similar operations return similar structures |
| Full error handling | All external calls wrapped |
| Type coverage | All inputs/outputs typed |
| Comprehensive docstrings | Every tool documented |

Tool Quality

| Check | Description |
| --- | --- |
| Clear descriptions | The model knows when to use each tool |
| Good examples | In the docstring |
| Sensible defaults | Reduce required params |
| Consistent naming | Group related tools with prefixes |

Best Practices

| Practice | Why |
| --- | --- |
| One tool = one purpose | Clear mental model |
| Comprehensive descriptions | LLM selection accuracy |
| Include examples in docstrings | Show expected usage |
| Return actionable errors | Enable self-correction |
| Test with an actual LLM | Real-world validation |
| Version your server | Track compatibility |

Resources