Introduction
Perplexity AI popularized the concept of AI-powered search — ask a question, get a sourced answer with citations. Building your own version is a great way to learn about AI search, and with Keiro's API, the backend is surprisingly simple. In this tutorial, we build a complete Perplexity-style application with a Next.js frontend.
What We Are Building
Our Perplexity clone will have:
- A search input where users type questions
- Real-time web search powered by Keiro
- AI-generated answers with source citations
- A clean, modern UI showing sources alongside the answer
Architecture
The application has three layers:
- Frontend: Next.js with React for the UI
- API Route: Next.js API route that orchestrates Keiro search and LLM generation
- External APIs: Keiro for search and OpenAI for answer generation
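Before writing any handlers, it helps to pin down the JSON contract between the frontend and the API route. Here is a sketch in TypeScript; the field names mirror this tutorial's own code, not an official Keiro schema:

```typescript
// Request/response contract between the frontend and /api/search.
// Field names follow the handlers in this tutorial, not an official schema.
interface SearchRequest {
  query: string;
}

interface Source {
  title: string;
  url: string;
  snippet: string;
}

interface SearchResponse {
  answer: string;
  sources: Source[];
}

// Example request and the payload the frontend expects back:
const request: SearchRequest = { query: "What is Next.js?" };

const example: SearchResponse = {
  answer: "Next.js is a React framework for the web. [1]",
  sources: [
    {
      title: "Next.js Documentation",
      url: "https://nextjs.org",
      snippet: "Next.js enables you to create full-stack web applications...",
    },
  ],
};
```

The frontend component later in this tutorial declares the same `Source` and answer/sources shape, so keeping these types in a shared file avoids drift between the two layers.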
Option 1: Using Keiro /answer (Simplest)
The fastest way to build this is using Keiro's /answer endpoint, which combines search and answer generation in a single call:
API Route (/app/api/search/route.ts)
import { NextRequest, NextResponse } from "next/server";

const KEIRO_API_KEY = process.env.KEIRO_API_KEY!;

export async function POST(req: NextRequest) {
  const { query } = await req.json();

  if (!query || typeof query !== "string") {
    return NextResponse.json({ error: "Query is required" }, { status: 400 });
  }

  try {
    // Single call to Keiro /answer - search + generation in one
    const response = await fetch("https://kierolabs.space/api/answer", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        apiKey: KEIRO_API_KEY,
        query: query,
      }),
    });

    const data = await response.json();

    return NextResponse.json({
      answer: data.response || "No answer generated.",
      sources: data.sources || [],
    });
  } catch (error) {
    console.error("Search error:", error);
    return NextResponse.json(
      { error: "Failed to process search" },
      { status: 500 }
    );
  }
}
Option 2: Custom Search + Generation (More Control)
For more control over the generation step, use Keiro /search-pro combined with OpenAI:
API Route with Custom Generation
import { NextRequest, NextResponse } from "next/server";
import OpenAI from "openai";

const KEIRO_API_KEY = process.env.KEIRO_API_KEY!;
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! });

export async function POST(req: NextRequest) {
  const { query } = await req.json();

  if (!query || typeof query !== "string") {
    return NextResponse.json({ error: "Query is required" }, { status: 400 });
  }

  // Step 1: Search with Keiro
  const searchResp = await fetch("https://kierolabs.space/api/search-pro", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      apiKey: KEIRO_API_KEY,
      query: query,
    }),
  });

  const searchData = await searchResp.json();
  const results = searchData.results || [];

  // Step 2: Format context
  const context = results
    .slice(0, 6)
    .map(
      (r: any, i: number) =>
        `[${i + 1}] ${r.title}\nURL: ${r.url}\nContent: ${r.content || r.snippet || ""}`
    )
    .join("\n\n");

  // Step 3: Generate answer with OpenAI
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content:
          "You are a helpful search assistant. Answer the user's question based on the provided search results. " +
          "Always cite your sources using [1], [2], etc. Be comprehensive but concise. " +
          "If the search results do not contain enough information, say so.",
      },
      {
        role: "user",
        content: `Search Results:\n${context}\n\nQuestion: ${query}`,
      },
    ],
    temperature: 0.3,
  });

  const answer = completion.choices[0].message.content || "";

  // Step 4: Return answer with sources
  const sources = results.slice(0, 6).map((r: any) => ({
    title: r.title || "",
    url: r.url || "",
    snippet: r.content || r.snippet || "",
  }));

  return NextResponse.json({ answer, sources });
}
Frontend Component (/app/page.tsx)
"use client";
import { useState } from "react";
interface Source {
title: string;
url: string;
snippet: string;
}
interface SearchResult {
answer: string;
sources: Source[];
}
export default function SearchPage() {
const [query, setQuery] = useState("");
const [result, setResult] = useState<SearchResult | null>(null);
const [loading, setLoading] = useState(false);
const handleSearch = async (e: React.FormEvent) => {
e.preventDefault();
if (!query.trim()) return;
setLoading(true);
setResult(null);
try {
const resp = await fetch("/api/search", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ query }),
});
const data = await resp.json();
setResult(data);
} catch (error) {
console.error("Search failed:", error);
} finally {
setLoading(false);
}
};
return (
<div style={{ maxWidth: 800, margin: "0 auto", padding: "2rem" }}>
<h1>AI Search</h1>
<form onSubmit={handleSearch}>
<input
type="text"
value={query}
onChange={(e) => setQuery(e.target.value)}
placeholder="Ask anything..."
style={{ width: "100%", padding: "1rem", fontSize: "1.1rem" }}
/>
<button type="submit" disabled={loading}>
{loading ? "Searching..." : "Search"}
</button>
</form>
{result && (
<div style={{ marginTop: "2rem" }}>
<div style={{ display: "flex", gap: "2rem" }}>
<div style={{ flex: 2 }}>
<h2>Answer</h2>
<div style={{ whiteSpace: "pre-wrap" }}>{result.answer}</div>
</div>
<div style={{ flex: 1 }}>
<h3>Sources</h3>
{result.sources.map((source, i) => (
<div key={i} style={{ marginBottom: "1rem", padding: "0.5rem", border: "1px solid #ddd" }}>
<a href={source.url} target="_blank" rel="noopener">
[{i + 1}] {source.title}
</a>
<p style={{ fontSize: "0.85rem", color: "#666" }}>
{source.snippet?.slice(0, 150)}...
</p>
</div>
))}
</div>
</div>
</div>
)}
</div>
);
}
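The rendered answer keeps its bracketed citation markers like [1]. If you want to highlight which sources were actually cited, a small helper can pull out the indices; this is a sketch, and `extractCitations` is not part of the component above:

```typescript
// Extract cited source indices from an answer string,
// e.g. "Foo [2] bar [1] baz [2]" -> [1, 2].
// Indices are 1-based to match the [n] labels in the sources panel.
function extractCitations(answer: string): number[] {
  const matches = answer.match(/\[(\d+)\]/g) ?? [];
  const indices = matches.map((m) => parseInt(m.slice(1, -1), 10));
  return [...new Set(indices)].sort((a, b) => a - b);
}
```

You could use this to dim uncited sources or to reorder the panel so cited sources appear first.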
Adding Follow-Up Questions
Perplexity suggests follow-up questions after each answer. We can generate these easily:
// Add to your API route after generating the answer
const followUpResp = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [
    {
      role: "system",
      content:
        "Generate 3 relevant follow-up questions based on the original question and answer. " +
        'Return a JSON object with a "questions" key containing an array of 3 strings.',
    },
    {
      role: "user",
      content: `Question: ${query}\nAnswer: ${answer}`,
    },
  ],
  response_format: { type: "json_object" },
});

const followUps = JSON.parse(followUpResp.choices[0].message.content || "{}");
// Include followUps.questions in the response
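Even with `response_format` set to `json_object`, the model can occasionally return an unexpected shape. A defensive parser keeps the route from throwing on malformed output; `parseFollowUps` here is a hypothetical helper, not an OpenAI API feature:

```typescript
// Parse the model's follow-up payload, falling back to an empty list
// on any unexpected shape instead of throwing.
function parseFollowUps(raw: string | null): string[] {
  try {
    const parsed = JSON.parse(raw ?? "{}");
    if (Array.isArray(parsed.questions)) {
      // Keep only string entries, in case the model mixes in other types.
      return parsed.questions.filter(
        (q: unknown): q is string => typeof q === "string"
      );
    }
  } catch {
    // Malformed JSON from the model; fall through to the empty default.
  }
  return [];
}
```

Then `parseFollowUps(followUpResp.choices[0].message.content)` replaces the bare `JSON.parse` call above.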
Adding Research Mode
For complex questions, add a "Research" button that uses Keiro's /research-pro endpoint:
// Research API route - reuses the imports and KEIRO_API_KEY constant from above
export async function POST(req: NextRequest) {
  const { query } = await req.json();

  const response = await fetch("https://kierolabs.space/api/research-pro", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      apiKey: KEIRO_API_KEY,
      query: query,
    }),
  });

  const data = await response.json();

  return NextResponse.json({
    answer: data.summary || "No research results.",
    sources: data.sources || [],
    isResearch: true,
  });
}
Cost to Run This App
| Component | Cost per Query | 1,000 Queries/Day |
|---|---|---|
| Keiro /search-pro (Pro plan) | ~$0.000125 | $0.125 |
| OpenAI GPT-4o | ~$0.005 | $5.00 |
| Total | ~$0.0051 | $5.13 |
Using the Keiro /answer endpoint instead, you can skip the OpenAI cost entirely, reducing the total to about $0.000125 per query — or $0.125/day for 1,000 queries.
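The arithmetic behind the table is just per-query price times volume. A tiny helper makes it easy to re-run with your own traffic numbers; the prices below are the table's assumptions, not live pricing:

```typescript
// Estimate daily spend from per-query prices and daily query volume.
function dailyCost(
  searchPerQuery: number,
  llmPerQuery: number,
  queriesPerDay: number
): number {
  return (searchPerQuery + llmPerQuery) * queriesPerDay;
}

// With the table's assumed prices at 1,000 queries/day:
const withLlm = dailyCost(0.000125, 0.005, 1000);  // search-pro + GPT-4o, ~$5.13/day
const answerOnly = dailyCost(0.000125, 0, 1000);   // /answer endpoint only, ~$0.13/day
```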
Deployment
This application deploys easily to Vercel, Netlify, or any platform that supports Next.js. Just set your environment variables:
KEIRO_API_KEY=your-keiro-api-key
OPENAI_API_KEY=your-openai-api-key # Only if using custom generation
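Optionally, fail fast at startup when a key is missing rather than debugging a confusing 401 later. A minimal sketch, where `requireEnv` is a hypothetical helper:

```typescript
// Read a required environment variable, throwing a descriptive error if unset.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// e.g. at module load in your API route:
// const KEIRO_API_KEY = requireEnv("KEIRO_API_KEY");
```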
Conclusion
Building a Perplexity clone is one of the best ways to learn about AI search applications, and with Keiro's API, the backend is just a few API calls. The /answer endpoint makes it possible to build a functional Perplexity clone with zero LLM costs, while the /search-pro + OpenAI approach gives you full control over the generation.
Get your Keiro API key at kierolabs.space and build your own AI search engine today.