
From Keyword Search to Ask AI: How We Upgraded AppSignal's Docs Experience


Documentation search is often the last thing devs think about, until someone posts publicly that they couldn't find a basic answer, or your support queue fills up with questions that are genuinely answered in the docs. We decided to get ahead of that.

This is the story of how we went from a minimal keyword-only search on our docs to a conversational Ask AI experience.

The Problem: Search Was Holding Users Back

We ran an honest audit of our search experience, and the gaps were hard to ignore.

Our search was keyword-only. Searching "sidekick" instead of "sidekiq" returned nothing — no typo tolerance, no fuzzy matching. A user asking "how do I track slow queries?" would get zero results unless they used our exact terminology.
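For the curious, typo tolerance like this typically comes down to matching within a small edit distance. A minimal sketch in plain Python (an illustration of the idea, not Algolia's actual implementation):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def fuzzy_match(query: str, terms: list[str], max_typos: int = 2) -> list[str]:
    """Return index terms within `max_typos` edits of the query."""
    q = query.lower()
    return [t for t in terms if edit_distance(q, t.lower()) <= max_typos]

# "sidekick" is 2 edits away from "sidekiq", so it now matches.
print(fuzzy_match("sidekick", ["sidekiq", "puma", "resque"]))  # → ['sidekiq']
```

With typo tolerance like this in the engine, "sidekick" resolves to the "sidekiq" docs instead of a dead end.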

Search for 'sidekick': no results
Search for 'sidekiq': relevant results found

Keyword search was just one limitation. Docs, blog, and the learning center each had their own separate search, so finding related content meant jumping between three sites. We had no analytics visibility either: no zero-results tracking, no way to identify content gaps or boost high-value content like our MCP Server docs. And since our docs are reference-style, users had to read full pages to find answers, while competitors were already shipping AI assistants in theirs.

Before: the original AppSignal documentation homepage with keyword-only search

Now, users can pull meaningful results from a single search, or let the AI gather answers for them across the homepage, blog, and docs:

After: the Ask AI side panel surfaces AI-generated answers inline, alongside keyword results, without pulling users out of the docs

What We Built

Ask AI with BYOLLM

Algolia's Ask AI is LLM-agnostic, which means you bring your own API key from OpenAI, Anthropic, Gemini, or Mistral. Algolia doesn't charge for generation; you pay your LLM provider per token. For us, that means powering the assistant with Claude via our existing Anthropic API key.

Four things made this worth doing:

Markdown indexing. A dedicated text-only index powers Ask AI with clean, well-structured content, free of nav noise or layout artifacts. We chunk and tag records by language, version, and topic.
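The chunking step can be sketched as a small markdown splitter that turns each heading-delimited section into one record, tagged with facets. This is an illustration, not Algolia's pipeline, and the field names are our own:

```python
import re

def chunk_markdown(doc: str, language: str, version: str) -> list[dict]:
    """Split a markdown doc on `##` headings into one record per section,
    tagged with facets for filtering. Field names are illustrative."""
    records = []
    for section in re.split(r"(?m)^## ", doc):
        if not section.strip():
            continue
        title, _, body = section.partition("\n")
        clean = title.strip().lstrip("# ").strip()
        records.append({
            "title": clean,
            "content": body.strip(),
            "language": language,
            "version": version,
            "topic": clean.lower().replace(" ", "-"),
        })
    return records

doc = """# Sidekiq

## Installation
Add the gem.

## Slow queries
Use event tracking."""

for r in chunk_markdown(doc, language="ruby", version="4.x"):
    print(r["topic"], "->", r["title"])
```

Records this granular let the assistant retrieve a single relevant section rather than a whole reference page.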

Prompting with a system overlay. On top of Algolia's hidden safety layer, we added a custom system prompt scoped strictly to AppSignal content.

Suggested questions. Curated, clickable prompts so users can start a conversation instantly, no blank-box anxiety. Questions like "How does AppSignal anomaly detection work?" and "How do I track frontend errors?" dramatically improve discoverability for developers who don't know exactly what to type.

Performance analytics. Five metrics out of the box: Ask AI vs keyword share, upvote/downvote ratio, thread depth, recent feedback, and content gaps.
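Algolia computes these in its dashboard, but the metrics themselves are simple aggregations. A sketch over a hypothetical event log (the event shape here is our own invention, not Algolia's schema):

```python
from collections import Counter

# Hypothetical event log; in practice Algolia's dashboard reports these.
events = [
    {"type": "ask_ai", "thread_id": "t1", "vote": "up"},
    {"type": "ask_ai", "thread_id": "t1", "vote": None},
    {"type": "keyword", "thread_id": None, "vote": None},
    {"type": "ask_ai", "thread_id": "t2", "vote": "down"},
]

def ask_ai_share(events):
    """Fraction of searches answered by Ask AI vs plain keyword search."""
    kinds = Counter(e["type"] for e in events)
    return kinds["ask_ai"] / sum(kinds.values())

def vote_ratio(events):
    """Upvotes as a fraction of all explicit votes."""
    votes = [e["vote"] for e in events if e["vote"]]
    return votes.count("up") / len(votes) if votes else None

def avg_thread_depth(events):
    """Average number of messages per Ask AI thread."""
    threads = Counter(e["thread_id"] for e in events if e["thread_id"])
    return sum(threads.values()) / len(threads)

print(ask_ai_share(events), vote_ratio(events), avg_thread_depth(events))
```

Low vote ratios and shallow threads on specific topics are exactly how content gaps show up.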

The Side Panel

Instead of navigating to a full results page, the new side panel surfaces Ask AI responses inline, without pulling users out of their flow. It's triggered from the "Ask AI" button and opens a hybrid experience where keyword results and AI answers coexist.

The side panel also keeps a conversation history, so users can pick up where they left off and revisit previous questions without starting over.

Conversation history in the Ask AI side panel lets users pick up where they left off and revisit previous questions without starting over

Crawler on a Schedule

The Algolia Crawler now runs on a schedule across docs, blog, and .com, keeping the index fresh and ensuring Ask AI always answers from current content. We expanded from 4.5k records on the trial to 11.6k records with the full index.

Prompt Design: The Part Nobody Talks About

Getting the assistant to behave correctly was more work than setting up the infrastructure.

Here's a skeleton system prompt; swap in your own org and product specifics to adapt it:

You are a helpful AI assistant embedded on [YOUR-ORG]'s website. You help users with [YOUR DOMAIN OR PRODUCT AREA].

Answering rules:
- Answer exclusively from search results returned by your connected indices. Never use general knowledge to fill gaps.
- When results partially match, say "Here's what I found:" and summarize with links.
- When no results match, reply with a clear fallback pointing to [YOUR DOCS URL] or [YOUR SUPPORT CHANNEL].
- Never invent or guess URLs — only use URLs returned in search results.
- Answer concisely in Markdown with short paragraphs and code snippets where relevant.

A few of the lessons behind rules like these:

Scope it hard. The assistant was initially pointing users to Algolia documentation, not AppSignal's. We restricted the knowledge base explicitly to AppSignal content. This was non-trivial to notice without a proper Q&A sweep.

Never construct URLs from memory. LLMs will hallucinate plausible-looking paths. Our prompt explicitly bans constructing paths like /pricing or /compare/competition — it can only use URLs returned in actual search results.

Define the fallback. When results don't match, the assistant replies: "I couldn't find this in AppSignal's resources. Try browsing docs.appsignal.com or contact support@appsignal.com." Clean, honest, actionable.

Handle partial matches gracefully. When results partially match, "Here's what I found:" followed by a summary with source links performs better than a refusal.
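These rules live in the prompt, but they can also be enforced after the fact as a guard around the model's reply. A minimal belt-and-braces sketch (function names and data shapes are ours, not Algolia's API):

```python
FALLBACK = ("I couldn't find this in AppSignal's resources. "
            "Try browsing docs.appsignal.com or contact support@appsignal.com.")

def compose_answer(summary: str, result_urls: list[str],
                   cited_urls: list[str]) -> str:
    """Apply the prompt's rules in code:
    - no search results -> honest fallback, no guessing
    - only keep URLs that actually came back from the index
    - partial match -> lead with "Here's what I found:"
    """
    if not result_urls:
        return FALLBACK
    # Never trust model-constructed URLs: keep only ones returned by search.
    safe = [u for u in cited_urls if u in set(result_urls)]
    links = "\n".join(f"- {u}" for u in safe)
    return f"Here's what I found:\n{summary}\n{links}"

print(compose_answer(
    "AppSignal tracks slow queries via event timelines.",
    result_urls=["https://docs.appsignal.com/ruby/instrumentation.html"],
    cited_urls=["https://docs.appsignal.com/ruby/instrumentation.html",
                "https://www.appsignal.com/pricing"],  # hallucinated path, dropped
))
```

Even when the prompt behaves, a filter like this means a hallucinated path can never reach the user.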

Running a Baseline

Before shipping, we ran every question from a structured baseline set through the assistant manually. A few things surfaced:

  • Pricing pages returned 404s: the /pricing path no longer exists; it's /plans now. The crawler was picking up dead URLs.
  • Competitor comparison queries (vs Datadog, vs Sentry) returned poor results until we added all AppSignal marketing pages to the crawler scope and explicitly referenced comparison URLs in the prompt.
  • "How do I use AppSignal with AI tools?" triggered a 206 error, likely a response timeout from a long LLM call. Still investigating.

The baseline process exposed real gaps. Running it once before launch is table stakes; running it automatically on a schedule, as a recurring eval, is the goal.
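Such a recurring eval can start as nothing more than a question set paired with per-answer checks, run on a cron schedule. A sketch with a stubbed-out assistant call (swap ask() for the real endpoint; the questions and checks are illustrative):

```python
# Baseline questions paired with checks against the assistant's answer.
BASELINE = [
    ("How do I monitor Sidekiq?",
     lambda a: "sidekiq" in a.lower()),
    ("Where is AppSignal's pricing?",
     lambda a: "/plans" in a and "/pricing" not in a),  # catches the dead URL
]

def ask(question: str) -> str:
    """Stub standing in for the real Ask AI call."""
    answers = {
        "How do I monitor Sidekiq?":
            "Enable the Sidekiq integration in the Ruby gem.",
        "Where is AppSignal's pricing?":
            "See https://www.appsignal.com/plans",
    }
    return answers[question]

def run_baseline() -> list[str]:
    """Return the questions whose answers failed their checks."""
    return [q for q, check in BASELINE if not check(ask(q))]

failures = run_baseline()
print(f"{len(BASELINE) - len(failures)}/{len(BASELINE)} passed")
```

Wire this into CI and a regression like the /pricing 404 fails loudly instead of surfacing in a user's chat.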

Takeaways

If you're on a similar docs search journey, here's what we'd do differently:

  1. Audit before you build. We only found the gaps (dead links, pages missing from the crawler scope, prompt hallucinations) because we ran a structured baseline. Don't skip this.
  2. BYOLLM is a feature, not a footnote. Controlling which model powers your assistant matters more than it looks on a feature page.
  3. The prompt is product. You can have perfect infrastructure and a useless assistant if the prompt isn't carefully scoped, tested, and updated alongside your content.

The search bar is often the first interaction a developer has with your product after landing on your docs. Treat it that way.

What's next: This upgraded search experience is live on our docs today, and in the next iteration we plan to bring it to blog.appsignal.com as well, so you can ask questions and find answers across our posts the same way.

Got feedback? We're always listening. Share your thoughts in our community Discord or email us at support@appsignal.com.


Ewa Szyszka


Ewa is an ML developer and technical writer who specializes in computer vision and natural language processing. Her favorite technologies include Python, TypeScript, and creating content that stretches the capabilities of ComfyUI, Gemini, and Midjourney.

All articles by Ewa Szyszka
Karen Patteri de Souza


AI advocate and Senior Technical Writer at AppSignal, shaping developer-first documentation at the intersection of LLMs, SDKs, APIs, and user experience. Always up for chatting about LLMs in docs, LLM output quality evaluation, scientific research, music, and great films or series.

All articles by Karen Patteri de Souza


AppSignal monitors your apps

AppSignal provides insights for Ruby, Rails, Elixir, Phoenix, Node.js, Express and many other frameworks and libraries. We are located in beautiful Amsterdam. We love stroopwafels. If you do too, let us know. We might send you some!
