linkup-search

This skill teaches you how to use Linkup's search and fetch tools effectively. Linkup is an agentic web search API — it interprets natural language instructions and executes retrieval steps to return accurate, real-time web data. Read this skill before making any Linkup search or fetch call.


1. How to Construct a Query


Your Linkup query should focus on data retrieval, not answer generation. Tell Linkup what to find and where to look. Do the reasoning and synthesis yourself after receiving the results.
Before writing your query, reason through three questions in order. Each answer constrains the next.

Step 1: What inputs do I already have?


| I have... | Then... |
| --- | --- |
| A specific URL | Scrape it directly — don't waste a search finding it |
| A company name, topic, or question (no URL) | You'll need to search |
| Both a URL and a broader question | Combine: scrape the known URL + search for the rest |

Step 2: Where does the data I need live?


| The data I need is... | Example | Then... |
| --- | --- | --- |
| In search snippets (titles, short excerpts, factual claims) | A funding amount, a launch date, a job title | `standard` is enough — snippets will contain the answer |
| On full web pages (tables, detailed specs, long-form content) | A pricing table, a job listing, an article's body text | You need to scrape the page |
| I'm not sure | | Default to `deep` |

Step 3: Do I need to chain steps sequentially?


| Scenario | Sequential? | Depth |
| --- | --- | --- |
| All the information can be gathered in parallel searches | No | `standard` |
| I have one URL and just need to scrape it | No | `standard` (one URL) or `/fetch` |
| I need to find URLs first, then scrape them | Yes | `deep` |
| I need to scrape a page, then search again based on what I found | Yes | `deep` |
| I need to scrape multiple known URLs | Yes | `deep` |

When uncertain, default to `deep`.
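The three-step checklist can be sketched as a small helper. This is an illustrative function, not part of the Linkup API; the parameter names are assumptions for the sketch.

```python
def choose_depth(has_url: bool, needs_full_page: bool, num_known_urls: int = 0) -> str:
    """Pick a Linkup search depth from the three-step checklist above."""
    if num_known_urls > 1:
        return "deep"      # standard can scrape at most one provided URL
    if needs_full_page and not has_url:
        return "deep"      # must discover the URL first, then scrape it (sequential)
    return "standard"      # parallel snippet searches, or one known URL


# Pricing page with no URL in hand: find it, then scrape it.
choose_depth(has_url=False, needs_full_page=True)
```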

Worked Examples


Inputs: company name only, no URL
Data needed: pricing details (lives on a full page, not in snippets)
Sequential: yes — need to find the pricing page first, then scrape it
→ depth="deep"
→ query: "Find the pricing page for {company}. Scrape it. Extract plan names, prices, and features."

Inputs: company name only, no URL
Data needed: latest funding round amount (lives in search snippets)
Sequential: no
→ depth="standard"
→ query: "Find {company}'s latest funding round amount and date"

Inputs: a specific URL (https://example.com/pricing)
Data needed: pricing details from that page
Sequential: no — I already have the URL
→ depth="standard" or /fetch
→ query: "Scrape https://example.com/pricing. Extract plan names, prices, and included features."

Inputs: a company name
Data needed: the company's ICP, inferred from homepage + blog + case studies
Sequential: yes — need to find pages, then scrape them, then synthesize
→ depth="deep"
→ query: "Find and scrape {company}'s homepage, use case pages, and 2-3 recent blog posts. Extract: industries mentioned, company sizes referenced, job titles targeted, and pain points addressed."
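As an illustration, the first worked example might be sent as the request body below. The field names (`q`, `depth`, `outputType`) follow this skill's terminology but should be verified against the current Linkup API reference; the company name is a placeholder.

```python
company = "Acme Corp"  # placeholder input

payload = {
    "q": (
        f"Find the pricing page for {company}. Scrape it. "
        "Extract plan names, prices, and features."
    ),
    "depth": "deep",                # must discover the URL, then scrape it
    "outputType": "searchResults",  # raw sources, so the agent synthesizes
}
```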


2. Choosing Search Depth


Linkup supports two search depths. Your answers from Section 1 determine which to use.

Standard (`depth="standard"`) — €0.005/call


  • Can run multiple parallel web searches if instructed
  • Can scrape one URL if provided in the prompt
  • Cannot scrape multiple URLs
  • Cannot use URLs discovered in search results to scrape them

Deep (`depth="deep"`) — €0.05/call


  • Executes up to 10 iterative retrieval passes, each aware of prior context
  • Can scrape multiple URLs
  • Can use URLs discovered in search results to scrape them
  • Supports sequential instructions (outputs from one step feed the next)
When uncertain, default to `deep`.
Cost tip: 3–5 parallel `standard` calls with focused sub-queries is often faster and cheaper than one `deep` call. Reserve `deep` for when you need to scrape multiple URLs or chain search → scrape.
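The cost tip is easy to sanity-check with the prices above:

```python
STANDARD_COST = 0.005  # EUR per standard call
DEEP_COST = 0.05       # EUR per deep call

def fanout_cost(n_calls: int) -> float:
    """Total cost of n parallel standard calls, in EUR."""
    return round(n_calls * STANDARD_COST, 4)

# Even a 5-way standard fan-out costs half of a single deep call.
print(fanout_cost(5), "vs", DEEP_COST)
```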


3. Choosing Output Type


| Output Type | Returns | Use When |
| --- | --- | --- |
| `searchResults` | Array of `{name, url, content}` | You need raw sources to reason over, filter, or synthesize yourself |
| `sourcedAnswer` | Natural language answer + sources | The answer will be shown directly to a user (chatbot, Q&A) |
| `structured` | JSON matching a provided schema | Results feed into automated pipelines, CRM updates, data enrichment |

Default choice: Use `searchResults` when you will process the results. Use `sourcedAnswer` when the user needs a direct answer. Use `structured` when downstream code needs to parse the output.
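For the `structured` case, a request might look like the sketch below. The `structuredOutputSchema` field name and the schema-as-JSON-string convention are assumptions for illustration; confirm both against the current API reference.

```python
import json

# JSON Schema describing the shape the pipeline expects back.
schema = {
    "type": "object",
    "properties": {
        "company": {"type": "string"},
        "funding_amount_usd": {"type": "number"},
        "announced_date": {"type": "string"},
    },
    "required": ["company", "funding_amount_usd"],
}

payload = {
    "q": "Find Anthropic's latest funding round amount and date",
    "depth": "standard",        # funding facts live in snippets
    "outputType": "structured",
    "structuredOutputSchema": json.dumps(schema),
}
```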


4. Writing Effective Queries


Rule of thumb: The complexity of your query and the choice of depth often depend on the use case:
  • Conversational chatbot where low latency matters: keep prompts simple, keyword-style, `standard` depth
  • Deep research: more detailed prompts, leverage scraping, `deep` depth

Be specific


| Bad | Good |
| --- | --- |
| "Tell me about the company" | "Find {company}'s annual revenue and employee count" |
| "Microsoft revenue" | "Microsoft fiscal year 2024 total revenue" |
| "React hooks" | "React useEffect cleanup function best practices" |
| "AI news" | "OpenAI product announcements January 2026" |
Add context: dates ("Q4 2025"), locations ("French company Total"), versions ("since React 19"), domains ("on sec.gov").

Keyword-style for simple lookups


Short keyword queries work fine for straightforward facts:
"Bitcoin price today"
"NVIDIA Q4 2024 revenue"
"Anthropic latest funding round"

Instruction-style for complex extraction


When you need specific extraction or multi-step retrieval, write your query as a natural language instruction — what to find, where to look, what to extract:
"Find Datadog's current pricing page. Extract plan names, per-host prices, and included features for each tier."
"Find Acme Corp's investor relations page on acme.com. Extract the most recent quarterly revenue figure and year-over-year growth rate."

Request parallel searches for breadth


For broad research, explicitly ask for multiple passes. This works even in `standard`:
"Find recent news about OpenAI. Run several searches with adjacent keywords including 'OpenAI funding', 'OpenAI product launch', and 'OpenAI partnership announcements'."
Or issue 3–5 separate `standard` calls from your agent, each with a focused sub-query:
Query 1: "Datadog current annual recurring revenue from latest earnings"
Query 2: "Datadog number of customers over $100k ARR"
Query 3: "Datadog net revenue retention rate from investor presentations"
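A fan-out of those three sub-queries could be sketched as below; `run_search` is a stand-in for the real HTTP call and simply returns the body it would send.

```python
from concurrent.futures import ThreadPoolExecutor

sub_queries = [
    "Datadog current annual recurring revenue from latest earnings",
    "Datadog number of customers over $100k ARR",
    "Datadog net revenue retention rate from investor presentations",
]

def run_search(query: str) -> dict:
    # Stand-in for the real POST to Linkup; returns the would-be request body.
    return {"q": query, "depth": "standard", "outputType": "searchResults"}

# Issue all three focused sub-queries in parallel.
with ThreadPoolExecutor(max_workers=len(sub_queries)) as pool:
    results = list(pool.map(run_search, sub_queries))
```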

Sequential instructions (deep only)


When you need to discover a URL then extract from it, be explicit about the sequence:
"First, find the LinkedIn company page for Snowflake. Then scrape the page and extract: employee count, headquarters, industry, and company description."

Scrape a known URL (standard: one URL max)


If you already have a URL, include it in the prompt. In `standard`, this is limited to one URL per call:
"Scrape https://example.com/pricing. Extract all plan names, prices, and feature lists."
You can combine one scrape + search in a single `standard` call:
"Scrape https://linkup.so. Also search for articles mentioning Linkup clients. Return a list of known clients with the source of each."
To scrape multiple URLs, or to scrape URLs discovered during search, use `deep`.


5. Using the `/fetch` Endpoint


When your agent already knows the exact URL, use `/fetch` instead of `/search`. It's faster, cheaper, and purpose-built for single-page extraction.

| Use `/fetch` when... | Use `/search` when... |
| --- | --- |
| You have a specific URL and want its content as markdown | You don't know which URL has the answer |
| You're scraping a known page (pricing, article, docs) | You need results from multiple pages |
| Your agent found a URL in a previous step and needs to read it | You need Linkup's agentic retrieval to find and extract |

Default to `renderJs: true`. Many sites load content via JavaScript; the latency tradeoff is almost always worth the reliability gain.
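A minimal `/fetch` request body, using the two parameters named above (`url`, `renderJs`); the helper itself is illustrative:

```python
def fetch_body(url: str, render_js: bool = True) -> dict:
    """Build a /fetch request body. renderJs defaults to True because
    many pages only load their content via JavaScript."""
    return {"url": url, "renderJs": render_js}


fetch_body("https://example.com/pricing")
```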


6. Advanced Techniques


LinkedIn extraction (if you have the LinkedIn URL of the person/company/post -> standard)


  • return the linkedin profile details of {{linkedin_url}}
  • return the last 10 linkedin posts of {{linkedin_url}}
  • return the last 10 linkedin comments of {{linkedin_url}}
  • extract the comments from {{linkedin_post_url}}

LinkedIn extraction (if you need to search for the LinkedIn URL first -> deep)


First find LinkedIn posts about context engineering.
Then, for each URL, extract the post content and comments.
Return the LinkedIn profile URL of each commenter.

Date filtering and domain filtering


Use `fromDate` and `toDate` to limit results to a time window:
Query: "Find news about Anthropic product launches"
fromDate: "2025-01-01"
toDate: "2025-03-31"
Use `includeDomains` to focus on specific sources, or `excludeDomains` to remove noise:
Query: "Find Tesla's latest quarterly earnings data"
includeDomains: ["tesla.com", "sec.gov"]
Note: apply date filtering and domain filtering only when implicitly or explicitly instructed to do so.
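Putting the two filter examples above into request bodies (the `fromDate`, `toDate`, and `includeDomains` field names come from this section; the rest of the body shape is an assumption):

```python
date_filtered = {
    "q": "Find news about Anthropic product launches",
    "depth": "standard",
    "outputType": "searchResults",
    "fromDate": "2025-01-01",   # limit results to Q1 2025
    "toDate": "2025-03-31",
}

domain_filtered = {
    "q": "Find Tesla's latest quarterly earnings data",
    "depth": "standard",
    "outputType": "searchResults",
    "includeDomains": ["tesla.com", "sec.gov"],  # primary sources only
}
```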

7. MCP Setup


Two tools: `linkup-search` (query, depth) and `linkup-fetch` (url, renderJs).

| Client | Setup |
| --- | --- |
| VS Code / Cursor | Add to MCP config: `{"servers":{"linkup":{"url":"https://mcp.linkup.so/mcp?apiKey=YOUR_API_KEY","type":"http"}}}` |
| Claude Code | `claude mcp add --transport http linkup https://mcp.linkup.so/mcp?apiKey=YOUR_API_KEY` |
| Claude Desktop | Download MCPB bundle, double-click to install |

Auth format (v2.x): `apiKey=YOUR_API_KEY` in args. The old v1.x `env` format no longer works.


Quick Reference


STANDARD:  €0.005. Parallel searches ✓  Scrape one provided URL ✓  Scrape multiple URLs ✗  Chain search→scrape ✗
DEEP:      €0.05.  Iterative searches ✓  Scrape multiple URLs ✓   Chain search→scrape ✓
UNCERTAIN: Default to deep.
OUTPUT:    searchResults (raw sources)  |  sourcedAnswer (natural language)  |  structured (JSON schema)
FETCH:     Single known URL → /fetch with renderJs: true
QUERIES:   Keyword for simple lookups. Instruction-style for complex extraction. Be specific.
COVERAGE:  "Run several searches with adjacent keywords" for breadth (works in standard).
CHAINING:  "First find X, then scrape X" — deep only.