Introduction
Model Context Protocol (MCP) has emerged as the standard way for LLMs to discover and use external tools, including web search APIs. Introduced by Anthropic in late 2024 and now widely adopted, MCP creates a universal interface between AI models and tool providers. In this guide, we explain what MCP is, how it relates to web search, and how to build MCP-compatible search tools using Keiro.
What Is MCP?
MCP (Model Context Protocol) is an open standard that defines how AI models interact with external tools and data sources. Think of it as a "USB standard for AI tools" — any MCP-compatible tool can work with any MCP-compatible model without custom integration code.
Key concepts in MCP:
- Tools: Functions that the model can call (e.g., web_search, read_page)
- Resources: Data sources the model can read from
- Prompts: Reusable prompt templates
- Server: A process that exposes tools, resources, and prompts via MCP
- Client: The AI model or application that connects to MCP servers
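Under the hood, MCP messages are JSON-RPC 2.0 exchanged between client and server. As an illustrative sketch (the field names follow the MCP specification, but the `web_search` entry shown here is a hypothetical example), a server might answer a client's `tools/list` request with:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "web_search",
        "description": "Search the web for current information.",
        "inputSchema": {
          "type": "object",
          "properties": { "query": { "type": "string" } },
          "required": ["query"]
        }
      }
    ]
  }
}
```

The `inputSchema` is ordinary JSON Schema, which is how any MCP client knows what arguments each tool accepts without custom integration code.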
Why MCP Matters for Web Search
Before MCP, every AI application had to custom-code its search integration. Different LLM frameworks (LangChain, LlamaIndex, CrewAI) each had their own tool abstraction. MCP standardizes this, meaning you build one search tool server and it works everywhere.
Building a Keiro MCP Server
Let's build an MCP server that exposes Keiro's search capabilities as standard MCP tools, using the official MCP Python SDK.
Installation
```bash
pip install mcp requests
```
Server Implementation
```python
import os

import requests
from mcp.server.fastmcp import FastMCP

# Read the API key from the environment so the same script works when an
# MCP client (such as Claude Desktop) supplies it via its "env" config.
KEIRO_API_KEY = os.environ.get("KEIRO_API_KEY", "")
KEIRO_BASE = "https://kierolabs.space/api"

# FastMCP is the high-level server class in the official MCP Python SDK;
# it generates each tool's schema from the type hints and docstring.
mcp = FastMCP("keiro-search")


@mcp.tool()
def web_search(query: str) -> str:
    """Search the web for current information using the Keiro API.

    Args:
        query: The search query string
    """
    resp = requests.post(
        f"{KEIRO_BASE}/search",
        json={"apiKey": KEIRO_API_KEY, "query": query},
        timeout=30,
    )
    resp.raise_for_status()
    results = resp.json().get("results", [])
    text = "\n\n".join(
        f"**{r.get('title', 'N/A')}**\n"
        f"{r.get('content', r.get('snippet', ''))}\n"
        f"URL: {r.get('url', '')}"
        for r in results[:5]
    )
    return text or "No results found."


@mcp.tool()
def deep_research(query: str) -> str:
    """Perform deep, multi-step research on a complex topic.

    Args:
        query: The research question or topic
    """
    resp = requests.post(
        f"{KEIRO_BASE}/research",
        json={"apiKey": KEIRO_API_KEY, "query": query},
        timeout=120,  # deep research can take much longer than a plain search
    )
    resp.raise_for_status()
    data = resp.json()
    summary = data.get("summary", "No summary available.")
    sources = data.get("sources", [])
    source_lines = "\n".join(
        f"- {s.get('title', 'N/A')}: {s.get('url', '')}" for s in sources[:5]
    )
    return f"{summary}\n\nSources:\n{source_lines}"


@mcp.tool()
def get_answer(query: str) -> str:
    """Get a direct answer to a question with source citations.

    Args:
        query: The question to answer
    """
    resp = requests.post(
        f"{KEIRO_BASE}/answer",
        json={"apiKey": KEIRO_API_KEY, "query": query},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json().get("response", "No answer available.")


@mcp.tool()
def read_webpage(url: str) -> str:
    """Extract clean content from a specific web page URL.

    Args:
        url: The full URL of the page to read
    """
    resp = requests.post(
        f"{KEIRO_BASE}/web-crawler",
        json={"apiKey": KEIRO_API_KEY, "url": url},
        timeout=60,
    )
    resp.raise_for_status()
    data = resp.json()
    title = data.get("title", "Unknown")
    content = data.get("content", "Could not extract content.")
    # Truncate so a single page cannot flood the model's context window.
    return f"# {title}\n\n{content[:8000]}"


if __name__ == "__main__":
    # FastMCP defaults to the stdio transport, which is what Claude Desktop
    # and most local MCP clients expect.
    mcp.run()
```
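When a client invokes one of these tools, the exchange on the wire is again JSON-RPC. As an illustrative sketch (the message shape follows the MCP specification; the query value is just an example), a `tools/call` request for `web_search` looks roughly like:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "web_search",
    "arguments": { "query": "latest MCP SDK release" }
  }
}
```

The server replies with a `result` containing a `content` array of items such as `{ "type": "text", "text": "..." }`, plus an `isError` flag, which is exactly the shape our tool functions' return values are converted into.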
Connecting the MCP Server to Claude Desktop
To use your Keiro MCP server with Claude Desktop, add it to your MCP configuration:
```json
{
  "mcpServers": {
    "keiro-search": {
      "command": "python",
      "args": ["/path/to/keiro_mcp_server.py"],
      "env": {
        "KEIRO_API_KEY": "your-keiro-api-key"
      }
    }
  }
}
```
Once configured, Claude will automatically discover the web_search, deep_research, get_answer, and read_webpage tools and call them when appropriate.
MCP vs Custom Tool Integration
| Aspect | MCP Server | Custom Integration |
|---|---|---|
| Reusability | Works with any MCP client | Specific to one framework |
| Discovery | Automatic tool discovery | Manual configuration |
| Setup Effort | Build once, use everywhere | Rebuild per framework |
| Standardization | Open protocol | Proprietary |
| Flexibility | Moderate (protocol constraints) | Maximum |
Best Practices for MCP Search Tools
- Write clear tool descriptions: MCP clients use descriptions to decide when to call each tool. Be specific about when each tool should be used.
- Provide multiple granularity levels: Offer both quick search and deep research tools so the model can choose the right level of depth.
- Format output for readability: Use Markdown in your TextContent for better display in MCP clients.
- Handle errors gracefully: Return informative error messages instead of crashing, so the model can decide how to proceed.
- Limit output size: Truncate very long content to avoid overwhelming the model's context window.
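Two of the practices above, graceful error handling and output truncation, can be sketched as small helpers. The helper names here (`safe_fetch`, `truncate`) are illustrative, not part of Keiro or the MCP SDK:

```python
import requests

MAX_CHARS = 8000  # rough budget so one tool result cannot flood the context


def truncate(text: str, limit: int = MAX_CHARS) -> str:
    """Cut overly long tool output and make the truncation visible to the model."""
    if len(text) <= limit:
        return text
    return text[:limit] + "\n\n[output truncated]"


def safe_fetch(call):
    """Run a network call and turn failures into readable tool output,
    so the model can decide how to proceed instead of seeing a crash."""
    try:
        return call()
    except requests.Timeout:
        return "Error: the request timed out. Try again or narrow the query."
    except requests.RequestException as exc:
        return f"Error: request failed ({exc})."
```

Inside a tool, wrap the network call with both: `return truncate(safe_fetch(lambda: do_search(query)))`, where `do_search` stands in for whatever function performs the HTTP request.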
The Future of MCP + Search
As MCP adoption grows, we expect to see:
- Search APIs like Keiro offering official MCP servers
- MCP-native search features being built directly into LLM platforms
- Composable MCP tool chains where search results automatically flow to analysis tools
- Standardized quality metrics for MCP search tools
Conclusion
MCP is quickly becoming the standard way to give LLMs access to web search. By building a Keiro MCP server, you get a universal search tool that works with Claude Desktop, custom agents, and any other MCP-compatible system. The combination of MCP's standardization and Keiro's comprehensive, affordable API creates a powerful foundation for AI applications that need real-time web data.
Build your Keiro MCP server today. Get your API key at kierolabs.space.