From Keyword Search to Ask AI: How We Upgraded AppSignal's Docs Experience

Documentation search is often the last thing devs think about, until someone posts publicly that they couldn't find a basic answer, or your support queue fills up with things that are genuinely in the docs. We decided to get ahead of that.
This is the story of how we went from a minimal keyword-only search on our docs to a conversational Ask AI experience.
The Problem: Search Was Holding Users Back
We ran an honest audit of our search experience, and the gaps were hard to ignore.
Our search was keyword-only. Searching "sidekick" instead of "sidekiq" returned nothing — no typo tolerance, no fuzzy matching. A user asking "how do I track slow queries?" would get zero results unless they used our exact terminology.
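Algolia handles typo tolerance natively, but the underlying idea is easy to sketch: match queries against index terms by edit similarity instead of exact string equality. A minimal stdlib illustration (the term list here is hypothetical, just enough to show why "sidekick" should still find "sidekiq"):

```python
from difflib import get_close_matches

# Hypothetical index terms; a real search index would hold thousands of these.
INDEX_TERMS = ["sidekiq", "slow queries", "anomaly detection", "mcp server"]

def fuzzy_lookup(query: str, cutoff: float = 0.7) -> list[str]:
    """Return index terms whose edit similarity to the query clears the cutoff."""
    return get_close_matches(query.lower(), INDEX_TERMS, n=3, cutoff=cutoff)
```

With this in place, `fuzzy_lookup("sidekick")` finds `"sidekiq"` where a plain equality check would return nothing.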


Keyword search was just one limitation. Docs, blog, and the learning center each had their own separate search, so finding related content meant jumping between three sites. We had no analytics visibility either: no zero-results tracking, no way to identify content gaps or boost high-value content like our MCP Server docs. And since our docs are reference-style, users had to read full pages to find answers, while competitors were already shipping AI assistants in theirs.

Now, users can pull meaningful results from a single search, or let the AI gather answers for them across the homepage, blog, and docs:

What We Built
Ask AI with BYOLLM
Algolia's Ask AI is LLM-agnostic, which means you bring your own API key from OpenAI, Anthropic, Gemini, or Mistral. Algolia doesn't charge for generation; you pay your LLM provider per token. For us, that means powering the assistant with Claude via our existing Anthropic API key.
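The generation step happens inside Algolia, but the BYOLLM idea itself is just an adapter boundary: the assistant depends on a generation interface, not on a vendor. A hedged sketch with stub adapters (a real setup would wrap the Anthropic or OpenAI SDKs; the names and return values here are illustrative only):

```python
from typing import Protocol

class LLMProvider(Protocol):
    def generate(self, system: str, user: str) -> str: ...

# Stub adapters standing in for real SDK wrappers.
class AnthropicProvider:
    def generate(self, system: str, user: str) -> str:
        return f"[claude] {user}"

class OpenAIProvider:
    def generate(self, system: str, user: str) -> str:
        return f"[gpt] {user}"

def ask_ai(provider: LLMProvider, question: str) -> str:
    """The assistant only sees the interface, so swapping vendors is one line."""
    system = "Answer only from AppSignal search results."
    return provider.generate(system, question)
```

Swapping Claude for GPT means swapping one adapter object, which is the whole point of bringing your own key.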
Four things made this worth doing:
Markdown indexing. A dedicated text-only index powers Ask AI with clean, well-structured content, free of nav noise or layout artifacts. We chunk and tag records by language, version, and topic.
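The chunking step can be sketched in a few lines: split each markdown page on its headings and attach the tags as record fields, so Ask AI can filter retrieval by language and version. This is a simplified stand-in for the real pipeline, with hypothetical field names:

```python
import re

def chunk_markdown(doc: str, language: str, version: str) -> list[dict]:
    """Split a markdown doc on ## headings into index records,
    tagged so retrieval can filter by language and version."""
    records = []
    for section in re.split(r"(?m)^## ", doc):
        if not section.strip():
            continue
        title, _, body = section.partition("\n")
        records.append({
            "title": title.strip(),
            "content": body.strip(),
            "language": language,
            "version": version,
            "topic": title.strip().lower(),
        })
    return records
```

Each record stays small and self-describing, which is what keeps the assistant's context free of nav noise.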
Prompting with a system overlay. On top of Algolia's hidden safety layer, we added a custom system prompt scoped strictly to AppSignal content.
Suggested questions. Curated, clickable prompts so users can start a conversation instantly, no blank-box anxiety. Questions like "How does AppSignal anomaly detection work?" and "How do I track frontend errors?" dramatically improve discoverability for developers who don't know exactly what to type.
Performance analytics. Five metrics out of the box: Ask AI vs keyword share, upvote/downvote ratio, thread depth, recent feedback, and content gaps.
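Algolia surfaces these metrics in its dashboard, but two of them are simple enough to compute from raw events, which helps when sanity-checking the numbers. A sketch assuming a hypothetical event shape with `kind` and optional `vote` fields:

```python
from collections import Counter

def search_metrics(events: list[dict]) -> dict:
    """Compute Ask AI vs keyword share and the upvote/downvote ratio
    from a raw event log."""
    kinds = Counter(e["kind"] for e in events)
    votes = Counter(e.get("vote") for e in events if e.get("vote"))
    total = kinds["ask_ai"] + kinds["keyword"]
    voted = votes["up"] + votes["down"]
    return {
        "ask_ai_share": kinds["ask_ai"] / total if total else 0.0,
        "upvote_ratio": votes["up"] / voted if voted else None,
    }
```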
The Side Panel
Instead of navigating users to a full results page, the new side panel surfaces Ask AI responses inline, without pulling users out of their flow. It's triggered from the "Ask AI" button and opens a hybrid experience where keyword results and AI answers coexist.
The side panel also keeps a conversation history, so users can pick up where they left off and revisit previous questions without starting over.
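The conversation-history behavior reduces to a thread store keyed by thread id. A minimal in-memory sketch (the real panel persists this server-side; the class and field names here are illustrative):

```python
class ThreadStore:
    """Minimal in-memory conversation history for a side panel,
    so a user can reopen a thread and pick up where they left off."""

    def __init__(self) -> None:
        self._threads: dict[str, list[dict]] = {}

    def append(self, thread_id: str, role: str, text: str) -> None:
        self._threads.setdefault(thread_id, []).append(
            {"role": role, "text": text})

    def history(self, thread_id: str) -> list[dict]:
        # Return a copy so callers can't mutate stored history.
        return list(self._threads.get(thread_id, []))
```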

Crawler on a Schedule
The Algolia Crawler now runs on a schedule across docs, blog, and .com, keeping the index fresh and ensuring Ask AI always answers from current content. We expanded from 4.5k records on the trial to 11.6k records with the full index.
Prompt Design: The Part Nobody Talks About
Getting the assistant to behave correctly was more work than setting up the infrastructure.
Here's a skeleton system prompt; swap in your own org and product specifics to adapt it:
You are a helpful AI assistant embedded on [YOUR-ORG]'s website. You help users with [YOUR DOMAIN OR PRODUCT AREA].
Answering rules:
- Answer exclusively from search results returned by your connected indices. Never use general knowledge to fill gaps.
- When results partially match, say "Here's what I found:" and summarize with links.
- When no results match, reply with a clear fallback pointing to [YOUR DOCS URL] or [YOUR SUPPORT CHANNEL].
- Never invent or guess URLs — only use URLs returned in search results.
- Answer concisely in Markdown with short paragraphs and code snippets where relevant.
A few of the lessons behind rules like these:
Scope it hard. The assistant was initially pointing users to Algolia documentation, not AppSignal's. We restricted the knowledge base explicitly to AppSignal content. Non-trivial to notice without a proper Q&A sweep.
Never construct URLs from memory. LLMs will hallucinate plausible-looking paths. Our prompt explicitly bans constructing paths like /pricing or /compare/competition — it can only use URLs returned in actual search results.
Define the fallback. When results don't match, the assistant replies: "I couldn't find this in AppSignal's resources. Try browsing docs.appsignal.com or contact support@appsignal.com." Clean, honest, actionable.
Handle partial matches gracefully. When results partially match, "Here's what I found:" followed by a summary with source links performs better than a refusal.
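The URL rule and the two response rules can also be enforced in code, as a belt-and-suspenders layer on top of the prompt. A sketch assuming a hypothetical post-processing step with hit records shaped like `{"url": ...}`:

```python
import re

FALLBACK = ("I couldn't find this in AppSignal's resources. Try browsing "
            "docs.appsignal.com or contact support@appsignal.com.")

def sanitize_links(draft: str, hits: list[dict]) -> str:
    """Strip any URL the model produced that search did not return,
    replacing it with a placeholder instead of a hallucinated path."""
    allowed = {h["url"] for h in hits}
    return re.sub(
        r"https?://\S+",
        lambda m: m.group(0) if m.group(0) in allowed else "[link removed]",
        draft,
    )

def render_answer(draft: str, hits: list[dict]) -> str:
    """No hits -> honest fallback; otherwise the partial-match framing."""
    if not hits:
        return FALLBACK
    return "Here's what I found:\n" + sanitize_links(draft, hits)
```

Prompt rules drift as content changes; a mechanical check like this catches the hallucinated `/pricing`-style paths even when the model ignores its instructions.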
Running a Baseline
Before shipping, we ran every question from a structured baseline set through the assistant manually. A few things surfaced:
- Pricing pages returned 404s: the `/pricing` path no longer exists, it's `/plans` now. The crawler was picking up dead URLs.
- Competitor comparison queries (vs Datadog, vs Sentry) returned poor results until we added all AppSignal marketing pages to the crawler scope and explicitly referenced comparison URLs in the prompt.
- "How do I use AppSignal with AI tools?" triggered a 206 error, likely a response timeout from a long LLM call. Still investigating.
The baseline process exposed real gaps. Running it once before launch is table stakes; running it automatically on a schedule, as a recurring eval, is the goal.
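A recurring eval doesn't need much machinery to start: run every baseline question through the assistant and flag answers that error out or cite dead links. A sketch with injected `ask` and `check_url` callables (both hypothetical; in practice they'd wrap the live assistant and an HTTP HEAD request):

```python
import re

def run_baseline(questions: list[str], ask, check_url) -> list[dict]:
    """Run each baseline question through the assistant and collect
    failures: raised errors, or answers citing URLs that return >= 400."""
    failures = []
    for q in questions:
        try:
            answer = ask(q)
        except Exception as exc:
            failures.append({"question": q, "error": repr(exc)})
            continue
        for url in re.findall(r"https?://\S+", answer):
            if check_url(url) >= 400:
                failures.append({"question": q, "dead_url": url})
    return failures
```

Wire this into CI on a schedule and the `/pricing`-style regressions surface on their own instead of in a support ticket.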
Takeaways
If you're on a similar docs search journey, here's what we'd do differently:
- Audit before you build. We found the gaps (dead links, pages missing from the crawler, prompt hallucinations) only because we ran a structured baseline. Don't skip this.
- BYOLLM is a feature, not a footnote. Controlling which model powers your assistant matters more than a line on a feature page suggests.
- The prompt is product. You can have perfect infrastructure and a useless assistant if the prompt isn't carefully scoped, tested, and updated alongside your content.
The search bar is often the first interaction a developer has with your product after landing on your docs. Treat it that way.
What's next: This upgraded search experience is live on our docs today, and in the next iteration we plan to bring it to blog.appsignal.com as well, so you can ask questions and find answers across our posts the same way.
Got feedback? We're always listening. Share your thoughts in our community Discord or email us at support@appsignal.com.
Ewa Szyszka
Ewa is an ML developer and technical writer who specializes in computer vision and natural language processing. Her favorite technologies include Python, TypeScript, and creating content which stretches the capabilities of ComfyUI, Gemini, and Midjourney.
All articles by Ewa Szyszka
Karen Patteri de Souza
AI advocate and Senior Technical Writer at AppSignal, shaping developer-first documentation at the intersection of LLMs, SDKs, APIs, and user experience. Always up for chatting about LLMs in docs, LLM output quality evaluation, scientific research, music, and great films or series.
All articles by Karen Patteri de Souza