mcp-integration-expert
MCP Integration Expert
Comprehensive skill for researching, documenting, and integrating Model Context Protocol (MCP) servers and tools into Claude Code and AI applications.
When to Use This Skill
Use this skill when you need to:
- Research and evaluate MCP servers for integration
- Build custom MCP servers or clients
- Integrate existing MCP tools into Claude Code
- Document MCP server capabilities and usage patterns
- Troubleshoot MCP integration issues
- Implement MCP security and authentication patterns
- Create multi-language MCP implementations (Python, TypeScript, C#, Java, Rust)
- Design MCP-based AI agent workflows
- Evaluate MCP server trust scores and documentation quality
Model Context Protocol (MCP) Overview
What is MCP?
The Model Context Protocol (MCP) is an open standard introduced by Anthropic in November 2024 that standardizes how AI applications and Large Language Models (LLMs) integrate with external data sources, tools, and systems.
Key Analogy: MCP is like a "USB-C port for AI" - providing a universal, standardized interface for connecting AI models to diverse data sources and tools.
Core Architecture
┌─────────────────────────────────────────────────────────────┐
│ MCP Client │
│ (AI Application) │
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Claude │ │ ChatGPT │ │ Custom │ │
│ │ Code │ │ Desktop │ │ App │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
└─────────────────────────────────────────────────────────────┘
│
│ MCP Protocol
│
┌─────────────────────────────────────────────────────────────┐
│ MCP Servers │
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Linear │ │ Postgres │ │ GitHub │ │ Custom │ │
│ │ Server │ │ Server │ │ Server │ │ Server │ │
│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
└─────────────────────────────────────────────────────────────┘
MCP Protocol Communication Flow
```mermaid
sequenceDiagram
    autonumber
    actor User as 👤 User
    participant ClientApp as 🖥️ Client App
    participant ClientLLM as 🧠 Client LLM
    participant Server1 as 🔧 MCP Server 1
    participant Server2 as 📚 MCP Server 2

    %% Discovery Phase
    rect rgb(220, 240, 255)
        Note over ClientApp, Server2: TOOL DISCOVERY PHASE
        ClientApp->>+Server1: Request available tools/resources
        Server1-->>-ClientApp: Return tool list (JSON)
        ClientApp->>+Server2: Request available tools/resources
        Server2-->>-ClientApp: Return tool list (JSON)
        Note right of ClientApp: Store combined tool<br/>catalog locally
    end

    %% User Interaction
    rect rgb(255, 240, 220)
        Note over User, ClientLLM: USER INTERACTION PHASE
        User->>+ClientApp: Enter natural language prompt
        ClientApp->>+ClientLLM: Forward prompt + tool catalog
        ClientLLM->>-ClientLLM: Analyze prompt & select tools
    end

    %% Tool Execution
    rect rgb(220, 255, 220)
        Note over ClientApp, Server1: TOOL EXECUTION PHASE
        ClientLLM->>+ClientApp: Request tool execution
        ClientApp->>+Server1: Execute specific tool
        Server1-->>-ClientApp: Return results
        ClientApp->>+ClientLLM: Process results
        ClientLLM-->>-ClientApp: Generate response
        ClientApp-->>-User: Display final answer
    end
```
MCP Core Primitives
MCP provides three core primitives for context exchange:
- Resources: Expose data sources (files, databases, APIs)
- Tools: Enable actions and operations
- Prompts: Provide reusable prompt templates
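The three primitives can be pictured with a small, self-contained sketch. This is plain Python, not the real SDK: `MiniMcpServer` and its decorators are hypothetical, and the point is only to show how a server groups resources, tools, and prompts into a catalog that clients discover.

```python
# Illustrative sketch (not the real SDK): how a server might organize
# the three MCP primitives into a catalog a client can discover.
class MiniMcpServer:
    def __init__(self, name):
        self.name = name
        self.resources = {}   # URI template -> reader function
        self.tools = {}       # tool name -> (description, function)
        self.prompts = {}     # prompt name -> template string

    def resource(self, uri_template):
        def register(fn):
            self.resources[uri_template] = fn
            return fn
        return register

    def tool(self, name, description):
        def register(fn):
            self.tools[name] = (description, fn)
            return fn
        return register

    def prompt(self, name, template):
        self.prompts[name] = template

    def catalog(self):
        # What a client sees during the tool-discovery phase
        return {
            "resources": list(self.resources),
            "tools": [{"name": n, "description": d}
                      for n, (d, _) in self.tools.items()],
            "prompts": list(self.prompts),
        }

server = MiniMcpServer("demo")

@server.resource("greeting://{name}")
def greeting(name):
    return f"Hello, {name}!"

@server.tool("add", "Add two numbers")
def add(a, b):
    return a + b

server.prompt("code-review", "Review this {language} code:\n{code}")
```

The catalog is the key idea: a client never hard-codes what a server offers; it asks, then routes calls by name.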
MCP Integration Workflow
Phase 1: Research & Discovery
1.1 Identify MCP Server Needs
Questions to ask:
- What data sources do you need to access?
- What actions/tools do you need to perform?
- Are there existing MCP servers for your use case?
- Do you need to build a custom MCP server?
1.2 Research Available MCP Servers
Official MCP Server Repository: https://github.com/modelcontextprotocol
Popular MCP Servers (2025):
- Linear MCP: Project management and issue tracking
- Playwright MCP: Browser automation and testing
- Context7 MCP: Library documentation retrieval
- GitHub MCP: Repository management and automation
- Postgres MCP: Database queries and operations
- Google Drive MCP: File storage and retrieval
- Slack MCP: Team communication
- Stripe MCP: Payment processing
- Puppeteer MCP: Web scraping and automation
1.3 Evaluate MCP Server Quality
Trust Score Criteria (1-10 scale):
- 9-10: Highly authoritative (official SDKs, major platforms)
- 7-8: Well-maintained community projects
- 5-6: Experimental or niche implementations
- 1-4: Proof-of-concept or unmaintained
Documentation Coverage:
- High: 500+ code snippets
- Medium: 100-500 code snippets
- Low: <100 code snippets
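As a quick sanity check, the rubric above can be encoded as a small helper that buckets a candidate server. The function is hypothetical, purely illustrative of applying the two scales together:

```python
def classify_server(trust_score, snippet_count):
    """Bucket a server by trust score (1-10) and documented snippet count."""
    if trust_score >= 9:
        trust = "highly authoritative"
    elif trust_score >= 7:
        trust = "well-maintained community"
    elif trust_score >= 5:
        trust = "experimental/niche"
    else:
        trust = "proof-of-concept"
    if snippet_count >= 500:
        docs = "high"
    elif snippet_count >= 100:
        docs = "medium"
    else:
        docs = "low"
    return trust, docs
```

A server that lands in the top trust bucket but the low docs bucket is often still usable; the reverse combination usually is not.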
Phase 2: MCP Server Architecture
2.1 MCP Server Components
```typescript
// TypeScript MCP Server Structure
import { FastMCP } from '@modelcontextprotocol/typescript-sdk';

const mcp = new FastMCP({
  name: "My MCP Server",
  version: "1.0.0"
});

// 1. Resources: Expose data
mcp.resource("greeting://{name}", (name: string) => {
  return `Hello, ${name}!`;
});

// 2. Tools: Enable actions
mcp.tool("calculate", {
  description: "Perform calculations",
  parameters: {
    operation: { type: "string" },
    a: { type: "number" },
    b: { type: "number" }
  }
}, async (params) => {
  // Tool implementation
});

// 3. Prompts: Reusable templates
mcp.prompt("code-review", {
  description: "Code review template",
  arguments: ["language", "code"]
});
```
2.2 Transport Mechanisms
MCP supports multiple transport mechanisms:
1. Stdio Transport (Default for local servers)
```python
from mcp.server.fastmcp import FastMCP
from mcp.transports.stdio import serve_stdio
import asyncio

mcp = FastMCP("Demo")

# Run with stdio
asyncio.run(serve_stdio(mcp))
```
**2. HTTP/SSE Transport** (For remote servers)
```typescript
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/typescript-sdk';

const transport = new StreamableHTTPServerTransport(8000);
await server.connect(transport);
```
Phase 3: MCP Client Integration
3.1 Client Setup (Python)
```python
import asyncio
from mcp.client import stdio_client, ClientSession

async def run_client():
    server_params = {
        "command": "python",
        "args": ["server.py"]
    }
    async with stdio_client(server_params) as (reader, writer):
        async with ClientSession(reader, writer) as session:
            # Initialize connection
            await session.initialize()
            # Discover tools
            tools = await session.list_tools()
            # Call a tool
            result = await session.call_tool("add", arguments={"a": 5, "b": 7})
            print(f"Result: {result}")

asyncio.run(run_client())
```
3.2 Client Setup (TypeScript)
```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { Transport } from "@modelcontextprotocol/sdk/shared/transport.js";

class MCPClient {
  private client: Client;

  constructor() {
    this.client = new Client({
      name: "example-client",
      version: "1.0.0"
    }, {
      capabilities: {
        prompts: {},
        resources: {},
        tools: {}
      }
    });
  }

  async connectToServer(transport: Transport) {
    await this.client.connect(transport);
  }

  async listTools() {
    return await this.client.listTools();
  }

  async callTool(name: string, args: any) {
    return await this.client.callTool({
      name: name,
      arguments: args
    });
  }
}
```
3.3 Client Setup (C#)
```csharp
using ModelContextProtocol.Client;
using ModelContextProtocol.Protocol.Transport;

var clientTransport = new StdioClientTransport(new()
{
    Name = "Demo Server",
    Command = "/path/to/server/executable",
    Arguments = [],
});

await using var mcpClient = await McpClientFactory.CreateAsync(clientTransport);

// List tools
var tools = await mcpClient.ListToolsAsync();
foreach (var tool in tools)
{
    Console.WriteLine($"Tool: {tool.Name}");
    Console.WriteLine($"Description: {tool.Description}");
}

// Call a tool (arguments are passed as a name/value dictionary)
var result = await mcpClient.CallToolAsync("add",
    new Dictionary<string, object?> { ["a"] = 5, ["b"] = 7 });
```
Phase 4: Building Custom MCP Servers
4.1 Python MCP Server (FastMCP)
```python
from fastmcp import FastMCP
from fastmcp.transports.stdio import serve_stdio
import asyncio

mcp = FastMCP(
    name="Weather MCP Server",
    version="1.0.0"
)

@mcp.tool()
def get_weather(location: str) -> dict:
    """Gets current weather for a location."""
    # Implementation would call weather API
    return {
        "temperature": 72.5,
        "conditions": "Sunny",
        "location": location
    }

@mcp.resource("weather://{location}")
def weather_resource(location: str) -> str:
    """Get weather data as a resource"""
    return f"Current weather in {location}"

class WeatherTools:
    @mcp.tool()
    def forecast(self, location: str, days: int = 1) -> dict:
        """Gets weather forecast"""
        return {
            "location": location,
            "forecast": [
                {"day": i + 1, "temperature": 70 + i, "conditions": "Partly Cloudy"}
                for i in range(days)
            ]
        }

weather_tools = WeatherTools()

if __name__ == "__main__":
    asyncio.run(serve_stdio(mcp))
```
4.2 TypeScript MCP Server
```typescript
import { FastMCP } from '@modelcontextprotocol/typescript-sdk';
import { serve_stdio } from '@modelcontextprotocol/typescript-sdk/transports/stdio';

const mcp = new FastMCP({
  name: "Weather MCP Server",
  version: "1.0.0"
});

mcp.tool("get_weather", {
  description: "Gets current weather for a location",
  parameters: {
    location: { type: "string", required: true }
  }
}, async (params) => {
  return {
    temperature: 72.5,
    conditions: "Sunny",
    location: params.location
  };
});

mcp.resource("weather://{location}", async (location: string) => {
  return `Current weather in ${location}`;
});

// Start server
serve_stdio(mcp);
```
4.3 Java MCP Server
```java
import io.modelcontextprotocol.server.McpServer;
import io.modelcontextprotocol.server.McpToolDefinition;
import io.modelcontextprotocol.server.transport.StdioServerTransport;

public class WeatherMcpServer {
    public static void main(String[] args) throws Exception {
        McpServer server = McpServer.builder()
            .name("Weather MCP Server")
            .version("1.0.0")
            .build();

        server.registerTool(McpToolDefinition.builder("weatherTool")
            .description("Gets current weather for a location")
            .parameter("location", String.class)
            .execute((ctx) -> {
                String location = ctx.getParameter("location", String.class);
                WeatherData data = getWeatherData(location);
                return ToolResponse.content(
                    String.format("Temperature: %.1f°F, Conditions: %s",
                        data.getTemperature(),
                        data.getConditions())
                );
            })
            .build());

        try (StdioServerTransport transport = new StdioServerTransport()) {
            server.connect(transport);
            Thread.currentThread().join();
        }
    }
}
```
Phase 5: LLM Integration Patterns
5.1 Integrating MCP with OpenAI
```typescript
import OpenAI from "openai";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

class MCPOpenAIClient {
  private openai: OpenAI;
  private mcpClient: Client;

  constructor() {
    this.openai = new OpenAI({
      apiKey: process.env.OPENAI_API_KEY,
    });
    this.mcpClient = new Client({
      name: "openai-mcp-client",
      version: "1.0.0"
    }, {
      capabilities: { tools: {} }
    });
  }

  // Adapt MCP tools to OpenAI function-calling format
  mcpToolToOpenAITool(tool: any) {
    return {
      type: "function" as const,
      function: {
        name: tool.name,
        description: tool.description,
        parameters: {
          type: "object",
          properties: tool.inputSchema.properties,
          required: tool.inputSchema.required,
        },
      },
    };
  }

  async processRequest(userMessage: string) {
    // Get MCP tools (listTools returns { tools: [...] })
    const { tools: mcpTools } = await this.mcpClient.listTools();
    const openaiTools = mcpTools.map(t => this.mcpToolToOpenAITool(t));
    // Call OpenAI with tools
    const response = await this.openai.chat.completions.create({
      model: "gpt-4",
      messages: [{ role: "user", content: userMessage }],
      tools: openaiTools,
    });
    // Handle tool calls
    const toolCalls = response.choices[0].message.tool_calls;
    if (toolCalls) {
      for (const toolCall of toolCalls) {
        const result = await this.mcpClient.callTool({
          name: toolCall.function.name,
          arguments: JSON.parse(toolCall.function.arguments),
        });
        console.log("Tool result:", result);
      }
    }
  }
}
```
5.2 Integrating MCP with Azure OpenAI (C#)
```csharp
using Azure.AI.Inference;
using Azure;
using ModelContextProtocol.Client;
using System.Text.Json;

var endpoint = "https://models.inference.ai.azure.com";
var token = Environment.GetEnvironmentVariable("GITHUB_TOKEN");
var client = new ChatCompletionsClient(new Uri(endpoint), new AzureKeyCredential(token));
// clientTransport is set up as in section 3.3
var mcpClient = await McpClientFactory.CreateAsync(clientTransport);

// Convert MCP tools to Azure format
ChatCompletionsToolDefinition ConvertFrom(string name, string description, JsonElement schema)
{
    FunctionDefinition functionDefinition = new FunctionDefinition(name)
    {
        Description = description,
        Parameters = BinaryData.FromObjectAsJson(new
        {
            Type = "object",
            Properties = schema
        })
    };
    return new ChatCompletionsToolDefinition(functionDefinition);
}

// Get tools from MCP server
var mcpTools = await mcpClient.ListToolsAsync();
var toolDefinitions = new List<ChatCompletionsToolDefinition>();
foreach (var tool in mcpTools)
{
    JsonElement propertiesElement;
    tool.JsonSchema.TryGetProperty("properties", out propertiesElement);
    var def = ConvertFrom(tool.Name, tool.Description, propertiesElement);
    toolDefinitions.Add(def);
}

// Use tools in chat completion
var chatHistory = new List<ChatRequestMessage>
{
    new ChatRequestSystemMessage("You are a helpful assistant"),
    new ChatRequestUserMessage("What's the weather in Seattle?")
};
var response = await client.CompleteAsync(chatHistory,
    new ChatCompletionsOptions { Tools = toolDefinitions });
```
Phase 6: MCP Configuration in Claude Code
6.1 Claude Code MCP Configuration
Claude Code automatically discovers and loads MCP servers configured in claude_desktop_config.json.
Location:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
- Linux: ~/.config/Claude/claude_desktop_config.json
Configuration Format:
```json
{
  "mcpServers": {
    "linear": {
      "command": "npx",
      "args": ["-y", "@linear/mcp-server"],
      "env": {
        "LINEAR_API_KEY": "your-api-key"
      }
    },
    "playwright": {
      "command": "npx",
      "args": ["-y", "@playwright/mcp-server"]
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": {
        "POSTGRES_CONNECTION_STRING": "postgresql://user:pass@localhost/db"
      }
    },
    "custom-server": {
      "command": "python",
      "args": ["/path/to/custom/server.py"]
    }
  }
}
```
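Before restarting the client, it can be worth linting this file. A minimal checker (illustrative, stdlib-only; the required-key set is an assumption based on the examples above) might verify the shape:

```python
import json

# Assumed minimum per-server keys, based on the example config above.
REQUIRED_KEYS = {"command"}

def validate_mcp_config(text):
    """Return a list of problems found in a claude_desktop_config.json string."""
    cfg = json.loads(text)
    servers = cfg.get("mcpServers")
    if not isinstance(servers, dict):
        return ["missing top-level 'mcpServers' object"]
    problems = []
    for name, entry in servers.items():
        if not isinstance(entry, dict):
            problems.append(f"{name}: entry must be an object")
            continue
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            problems.append(f"{name}: missing {sorted(missing)}")
        if "args" in entry and not isinstance(entry["args"], list):
            problems.append(f"{name}: 'args' must be a list")
    return problems
```

Running it on the example config above should report no problems; a server entry without a `command`, or with a non-list `args`, gets flagged before the client silently fails to launch it.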
6.2 Verify MCP Server Connection
Once configured, MCP servers automatically connect when Claude Code starts.
Available MCP tools in Claude Code:
```
// All MCP tools are prefixed with "mcp__<server-name>__<tool-name>"
mcp__linear__create_issue
mcp__linear__list_issues
mcp__linear__update_issue
mcp__playwright__browser_navigate
mcp__playwright__browser_screenshot
mcp__context7__resolve_library_id
mcp__context7__get_library_docs
```
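Given that naming convention, a client-side helper (hypothetical, for illustration) can split a prefixed name back into its server and tool parts:

```python
def parse_mcp_tool_name(full_name):
    """Split 'mcp__<server>__<tool>' into (server, tool); None if not MCP-prefixed."""
    parts = full_name.split("__", 2)
    if len(parts) != 3 or parts[0] != "mcp":
        return None
    return parts[1], parts[2]
```

The `maxsplit=2` keeps tool names containing double underscores intact, and built-in (non-MCP) tool names simply return `None`.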
Phase 7: Security & Best Practices
7.1 Security Considerations (2025 Update)
Known Security Issues (April 2025):
- Prompt Injection: MCP tools can be exploited via prompt injection attacks
- Tool Permissions: Combining tools can enable unintended file exfiltration
- Lookalike Tools: Malicious tools can silently replace trusted ones
- Credential Exposure: API keys and credentials must be protected
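One mitigation for the lookalike-tool risk is to flag name collisions when merging catalogs from multiple servers, so a duplicate cannot silently shadow a trusted tool. A sketch (illustrative, not from any SDK):

```python
# Sketch: merge tool catalogs from several servers, keeping the first
# registration of each tool name and flagging later duplicates.
def merge_catalogs(catalogs):
    """catalogs: {server_name: [tool_name, ...]}. Returns (merged, collisions)."""
    merged, owners = {}, {}
    collisions = []
    for server, tools in catalogs.items():
        for tool in tools:
            if tool in owners and owners[tool] != server:
                # A second server is offering an already-registered name:
                # record (tool, original owner, duplicate owner) for review.
                collisions.append((tool, owners[tool], server))
                continue
            owners[tool] = server
            merged[f"mcp__{server}__{tool}"] = (server, tool)
    return merged, collisions
```

Surfacing `collisions` to the user (rather than overwriting) turns a silent substitution into an explicit trust decision.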
Security Best Practices:
- Validate Tool Inputs
```python
@mcp.tool()
def read_file(filepath: str) -> str:
    """Read a file - with security validation"""
    # Validate filepath to prevent path traversal
    import os
    filepath = os.path.normpath(filepath)
    # Ensure within allowed directory
    allowed_dir = "/home/user/documents"
    if not filepath.startswith(allowed_dir):
        raise ValueError("Access denied: Path outside allowed directory")
    with open(filepath, 'r') as f:
        return f.read()
```
- Implement Authentication
```python
from fastmcp import FastMCP
import os

mcp = FastMCP("Secure Server", version="1.0.0")

@mcp.middleware
async def authenticate(request, call_next):
    api_key = request.headers.get("X-API-Key")
    expected_key = os.getenv("MCP_API_KEY")
    if api_key != expected_key:
        raise PermissionError("Invalid API key")
    return await call_next(request)
```
- Rate Limiting
```python
from collections import defaultdict
import time

call_counts = defaultdict(list)

@mcp.middleware
async def rate_limit(request, call_next):
    client_id = request.client_id
    now = time.time()
    # Clean old entries
    call_counts[client_id] = [t for t in call_counts[client_id] if now - t < 60]
    # Check rate limit (10 calls per minute)
    if len(call_counts[client_id]) >= 10:
        raise Exception("Rate limit exceeded")
    call_counts[client_id].append(now)
    return await call_next(request)
```
7.2 Error Handling Best Practices
```python
from fastmcp import FastMCP
from typing import Optional
import logging
import requests

mcp = FastMCP("Robust Server", version="1.0.0")
logger = logging.getLogger(__name__)

@mcp.tool()
def safe_api_call(endpoint: str, params: Optional[dict] = None) -> dict:
    """Make an API call with comprehensive error handling"""
    try:
        # Validate inputs
        if not endpoint.startswith("https://"):
            raise ValueError("Only HTTPS endpoints allowed")
        # Make API call
        response = requests.get(endpoint, params=params, timeout=5)
        response.raise_for_status()
        return {
            "success": True,
            "data": response.json()
        }
    except requests.Timeout:
        logger.error(f"Timeout calling {endpoint}")
        return {
            "success": False,
            "error": "Request timeout",
            "error_type": "timeout"
        }
    except requests.HTTPError as e:
        logger.error(f"HTTP error calling {endpoint}: {e}")
        return {
            "success": False,
            "error": str(e),
            "error_type": "http_error",
            "status_code": e.response.status_code
        }
    except Exception:
        logger.exception(f"Unexpected error calling {endpoint}")
        return {
            "success": False,
            "error": "Internal server error",
            "error_type": "internal_error"
        }
```
7.3 Testing MCP Servers
7.3 测试MCP服务器
Integration Testing (Python):
```python
import pytest
from mcp.server import McpServer
from mcp.client import McpClient

# WeatherForecastTool and MockWeatherService are application-defined:
# the tool under test and a stubbed weather backend.

@pytest.mark.asyncio
async def test_mcp_server_integration():
    # Start test server
    server = McpServer()
    server.register_tool(WeatherForecastTool(MockWeatherService()))
    await server.start(port=5000)
    try:
        # Create client
        client = McpClient("http://localhost:5000")

        # Test tool discovery
        tools = await client.discover_tools()
        assert "weatherForecast" in [t.name for t in tools]

        # Test tool execution
        response = await client.execute_tool("weatherForecast", {
            "location": "Seattle",
            "days": 3
        })

        # Verify response
        assert response.status_code == 200
        assert "Seattle" in response.content[0].text
    finally:
        await server.stop()
```
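Integration tests exercise the whole transport, which is slow; the logic behind a tool can also be unit-tested as a plain function, since MCP tools are ordinary functions before registration. A sketch (the `format_forecast` helper is hypothetical):

```python
def format_forecast(location: str, days: int) -> str:
    """Hypothetical pure helper behind a weatherForecast tool."""
    if days < 1 or days > 14:
        raise ValueError("days must be between 1 and 14")
    return f"{days}-day forecast for {location}"

def test_format_forecast():
    # Happy path: formatting is deterministic
    assert format_forecast("Seattle", 3) == "3-day forecast for Seattle"

def test_rejects_bad_range():
    # Out-of-range input must raise, not return garbage
    try:
        format_forecast("Seattle", 0)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Fast unit tests on the helpers plus a few integration tests over the transport usually gives the best coverage-to-runtime ratio.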
Phase 8: Advanced MCP Patterns
8.1 Chain of Tools Workflow
```python
class ChainWorkflow:
    """Run a list of MCP tools in sequence, feeding each result forward."""

    def __init__(self, tools_chain):
        self.tools_chain = tools_chain

    async def execute(self, mcp_client, initial_input):
        current_result = initial_input
        all_results = {"input": initial_input}
        for tool_name in self.tools_chain:
            response = await mcp_client.execute_tool(tool_name, current_result)
            all_results[tool_name] = response.result
            current_result = response.result
        return {
            "final_result": current_result,
            "all_results": all_results
        }
```
Usage:
```python
data_pipeline = ChainWorkflow([
    "dataFetch",
    "dataCleaner",
    "dataAnalyzer",
    "dataVisualizer"
])

result = await data_pipeline.execute(
    mcp_client,
    {"source": "sales_database", "table": "transactions"}
)
```
8.2 Parallel Tool Execution
```typescript
async function executeParallelTools(mcpClient: Client, tools: ToolCall[]) {
  const promises = tools.map(tool =>
    mcpClient.callTool({
      name: tool.name,
      arguments: tool.arguments
    })
  );
  const results = await Promise.all(promises);
  return results;
}

// Usage
const toolCalls = [
  { name: "get_weather", arguments: { location: "Seattle" } },
  { name: "get_weather", arguments: { location: "Portland" } },
  { name: "get_weather", arguments: { location: "Vancouver" } }
];

const weatherData = await executeParallelTools(mcpClient, toolCalls);
```
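The same fan-out pattern can be sketched in Python with `asyncio.gather`; passing `return_exceptions=True` keeps one failed call from discarding the rest of the batch (`execute_tool` below is a stand-in for a real MCP client call):

```python
import asyncio

async def execute_tool(name: str, arguments: dict) -> dict:
    # Stand-in for an MCP client call; a real client would do I/O here.
    if arguments.get("location") == "Portland":
        raise RuntimeError("upstream error")
    return {"tool": name, "location": arguments["location"], "temp_f": 72}

async def execute_parallel(calls):
    tasks = [execute_tool(c["name"], c["arguments"]) for c in calls]
    # Exceptions come back in-place instead of cancelling the whole batch
    return await asyncio.gather(*tasks, return_exceptions=True)

calls = [
    {"name": "get_weather", "arguments": {"location": "Seattle"}},
    {"name": "get_weather", "arguments": {"location": "Portland"}},
]
results = asyncio.run(execute_parallel(calls))
```

Note that `Promise.all` above fails fast on the first rejection; `Promise.allSettled` in TypeScript, like `return_exceptions=True` here, collects partial results instead.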
8.3 Context-Aware Tool Selection
```python
import json
from typing import Dict, List

class ContextAwareToolSelector:
    def __init__(self, mcp_client, llm_client):
        self.mcp_client = mcp_client
        self.llm_client = llm_client

    async def select_tools(self, user_query: str, available_tools: List[Dict]) -> List[str]:
        """Use an LLM to intelligently select which tools to use"""
        tool_descriptions = "\n".join(
            f"- {tool['name']}: {tool['description']}"
            for tool in available_tools
        )
        prompt = f"""
User query: {user_query}

Available tools:
{tool_descriptions}

Select the most relevant tools to answer this query.
Return a JSON object of the form {{"tools": ["tool_name", ...]}}.
"""
        response = await self.llm_client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
            response_format={"type": "json_object"}
        )
        selected_tools = json.loads(response.choices[0].message.content)
        return selected_tools["tools"]
```
MCP Ecosystem (2025)
Major Platform Adoptions
OpenAI (March 2025):
- MCP integrated into ChatGPT Desktop
- MCP support in Agents SDK
- MCP compatibility in Responses API
Google (April 2025):
- MCP support in Gemini models
- Data Commons MCP Server (public datasets)
- MCP integration in Google DeepMind infrastructure
Microsoft (2025):
- MCP in Copilot Studio (GA)
- Semantic Kernel integration
- Azure OpenAI compatibility
Anthropic:
- Native MCP support in Claude Code
- Reference MCP server implementations
- MCP specification maintenance
Official MCP Servers
Repository: https://github.com/modelcontextprotocol
Enterprise Integrations:
- Google Drive, Slack, GitHub, GitLab
- Postgres, MySQL, MongoDB
- AWS, Azure, GCP
- Stripe, Salesforce
Development Tools:
- Puppeteer, Playwright
- Docker, Kubernetes
- Git, GitHub Actions
MCP SDK Support (2025)
Official SDKs:
- Python: `pip install mcp`
- TypeScript: `npm install @modelcontextprotocol/sdk`
- C#: NuGet package
- Java: Maven/Gradle
- Rust: Cargo package
- Swift: Swift Package Manager
Documentation Research Workflow
Using Context7 for MCP Research
When researching MCP servers and integration patterns:
1. Resolve the library ID:

```bash
/ctx7 model context protocol
```

2. Get comprehensive documentation. Context7 returns documentation with:
   - Trust Score (7-10 for quality sources)
   - Code Snippets (100-5000+)
   - Implementation examples
   - Best practices

3. Evaluate quality:
   - Trust Score 9-10: Official documentation
   - Trust Score 7-9: Well-maintained projects
   - Code Snippets 500+: Comprehensive coverage
**Best Context7 Sources for MCP** (by Trust Score):
1. `/microsoft/mcp-for-beginners` (Trust: 9.9, Snippets: 30,945)
2. `/modelcontextprotocol/python-sdk` (Trust: 7.8, Snippets: 119)
3. `/modelcontextprotocol/typescript-sdk` (Trust: 7.8, Snippets: 55)
4. `/modelcontextprotocol/csharp-sdk` (Trust: 7.8, Snippets: 59)
Common MCP Integration Patterns
Pattern 1: Simple Tool Integration
Use Case: Add a single MCP tool to Claude Code
```json
// claude_desktop_config.json
{
  "mcpServers": {
    "weather": {
      "command": "python",
      "args": ["weather_server.py"]
    }
  }
}
```
weather_server.py:
```python
from fastmcp import FastMCP

mcp = FastMCP("Weather", version="1.0.0")

@mcp.tool()
def get_weather(location: str) -> str:
    return f"Weather in {location}: Sunny, 72°F"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```
Pattern 2: Multi-Server Integration
Use Case: Combine multiple MCP servers for complex workflows
```json
{
  "mcpServers": {
    "linear": {
      "command": "npx",
      "args": ["-y", "@linear/mcp-server"],
      "env": { "LINEAR_API_KEY": "..." }
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_TOKEN": "..." }
    },
    "playwright": {
      "command": "npx",
      "args": ["-y", "@playwright/mcp-server"]
    }
  }
}
```
Pattern 3: Custom Resource Server
Use Case: Expose internal documentation or data
```python
import os

from fastmcp import FastMCP

mcp = FastMCP("Internal Docs", version="1.0.0")

DOCS_DIR = "/company/docs"

@mcp.resource("docs://{path}")
def get_documentation(path: str) -> str:
    """Get internal documentation by path"""
    filepath = os.path.realpath(os.path.join(DOCS_DIR, path))
    # Reject path traversal (e.g. "../../etc/passwd")
    if not filepath.startswith(DOCS_DIR + os.sep):
        return f"Invalid documentation path: {path}"
    if not os.path.exists(filepath):
        return f"Documentation not found: {path}"
    with open(filepath, 'r') as f:
        return f.read()

@mcp.tool()
def search_docs(query: str) -> list:
    """Search internal documentation"""
    # Implementation would use a search index
    return [
        {"title": "Getting Started", "path": "onboarding/getting-started.md"},
        {"title": "API Reference", "path": "api/reference.md"}
    ]

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```
Troubleshooting MCP Integration
Issue 1: MCP Server Not Connecting
Symptoms: Tools not appearing in Claude Code
Solutions:
- Check the configuration file location
- Verify `command` and `args` are correct
- Test the server independently: `python server.py`
- Check logs in Claude Code
- Ensure environment variables are set
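To isolate whether the problem is the server or the client configuration, you can also speak the protocol by hand: a stdio server should answer a JSON-RPC `initialize` request sent on stdin. A sketch of constructing that request (field names follow the MCP specification; the client name shown is arbitrary):

```python
import json

# JSON-RPC 2.0 initialize request, per the MCP specification
init_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "debug-client", "version": "0.0.1"},
    },
}
line = json.dumps(init_request)
# Pipe `line` into the server's stdin, e.g.:
#   echo '<line>' | python server.py
# A healthy server replies with a "result" containing its serverInfo.
```

If no reply comes back here, the fault is in the server itself, not the Claude Code configuration.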
Issue 2: Tool Execution Failures
Symptoms: Tool calls return errors
Solutions:
- Validate tool parameter schemas
- Add error handling in tool implementation
- Check server logs for exceptions
- Test with simple inputs first
- Verify authentication/API keys
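Checking arguments against the declared schema before executing the tool turns a confusing mid-call traceback into a clear error message. A stdlib-only sketch (`validate_args` and the schema shape are illustrative; production servers may prefer a full JSON Schema validator):

```python
def validate_args(args: dict, schema: dict) -> list:
    """Return a list of problems; an empty list means the args look valid."""
    problems = []
    for key, expected_type in schema.items():
        if key not in args:
            problems.append(f"missing required argument: {key}")
        elif not isinstance(args[key], expected_type):
            problems.append(
                f"{key}: expected {expected_type.__name__}, "
                f"got {type(args[key]).__name__}"
            )
    return problems

schema = {"location": str, "days": int}
ok = validate_args({"location": "Seattle", "days": 3}, schema)
bad = validate_args({"location": "Seattle", "days": "3"}, schema)
```

Returning the problem list in a structured error response (as in the `safe_api_call` pattern above) lets the model correct its own call.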
Issue 3: Performance Issues
Symptoms: Slow tool responses
Solutions:
- Add caching for repeated calls
- Implement connection pooling
- Use async/await properly
- Add timeout handling
- Consider HTTP transport for remote servers
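For idempotent tools, even a small TTL cache eliminates repeated upstream calls. A stdlib-only sketch (the key shape and 60-second TTL are illustrative):

```python
import time

class TTLCache:
    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None or entry[0] < time.monotonic():
            return None  # missing or expired
        return entry[1]

    def put(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

cache = TTLCache(ttl_seconds=60)

def cached_weather(location: str) -> str:
    hit = cache.get(("weather", location))
    if hit is not None:
        return hit
    result = f"Weather in {location}: Sunny, 72°F"  # stand-in for the real upstream call
    cache.put(("weather", location), result)
    return result

first = cached_weather("Seattle")
second = cached_weather("Seattle")  # served from cache
```

Keep TTLs short for volatile data; caching only helps when stale answers are acceptable for the cache window.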
Best Practices Summary
- Start Simple: Begin with a single tool, expand gradually
- Use Official SDKs: Leverage maintained libraries (Python, TypeScript, C#, Java)
- Implement Security: Validate inputs, authenticate, rate limit
- Handle Errors Gracefully: Return structured error responses
- Document Thoroughly: Provide clear tool descriptions and parameter schemas
- Test Extensively: Write integration tests for all tools
- Monitor Performance: Track response times and error rates
- Version Your Servers: Use semantic versioning for changes
- Leverage Context7: Research documentation before implementation
- Follow MCP Specification: Adhere to official protocol standards
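"Monitor Performance" can start as small as a decorator that records per-tool latency. A sketch (the `timings` store is illustrative; a real deployment would export these numbers to metrics infrastructure):

```python
import time
from collections import defaultdict

timings = defaultdict(list)  # tool name -> list of durations in seconds

def timed(tool_name):
    def decorator(fn):
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                # Record the duration even when the tool raises
                timings[tool_name].append(time.perf_counter() - start)
        return wrapper
    return decorator

@timed("get_weather")
def get_weather(location: str) -> str:
    return f"Weather in {location}: Sunny, 72°F"

get_weather("Seattle")
get_weather("Portland")
```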
Quick Reference
MCP Server Checklist
- Server name and version defined
- Tools have clear descriptions
- Parameter schemas are complete
- Error handling implemented
- Authentication/authorization configured
- Logging enabled
- Tests written
- Documentation created
- Security validated
- Performance tested
Essential Commands
```bash
# Install MCP SDKs
pip install mcp
npm install @modelcontextprotocol/sdk

# Test a server independently
python server.py

# Verify Claude Code config (macOS)
cat ~/Library/Application\ Support/Claude/claude_desktop_config.json

# Research with Context7
/ctx7 [library-name]
```
Resources
Official Documentation:
- MCP Specification: https://modelcontextprotocol.io/specification
- MCP GitHub: https://github.com/modelcontextprotocol
- MCP Servers: https://github.com/modelcontextprotocol (servers directory)
SDKs:
- Python SDK: https://github.com/modelcontextprotocol/python-sdk
- TypeScript SDK: https://github.com/modelcontextprotocol/typescript-sdk
- C# SDK: https://github.com/modelcontextprotocol/csharp-sdk
- Java SDK: https://github.com/modelcontextprotocol/java-sdk
Learning Resources:
- Microsoft MCP for Beginners: https://github.com/microsoft/mcp-for-beginners
- Anthropic MCP Announcement: https://www.anthropic.com/news/model-context-protocol
Skill Version: 1.0.0
Last Updated: 2025-10-18
Context7 Research: Microsoft MCP for Beginners (Trust: 9.9)