Log-based metrics, now in AppSignal Labs

Serena Chou

A lot of what's useful in a high-volume log source is a count, a rate, or a measurement — 5xx responses per minute, p95 request duration, job retry rate. You don't need every line to track those. You need the metric.

Log-based metrics are now in beta as part of AppSignal Labs.

How log-based metrics work

Define a query against any log source, using the same expression syntax you already use for log search, and AppSignal extracts a metric every time a log line matches. The metric flows into the same place as every other metric in AppSignal: dashboards, anomaly detection triggers, alerts, the metrics API, the MCP server.

AppSignal supports three types:

  • Count — counts matching log lines. One per HTTP 5xx response, one per failed login attempt, one per timed-out background job.
  • Gauge — records the value of an attribute at the moment a line is ingested. Queue depth from a worker heartbeat, active connection count from a Postgres log line, free disk space from a host report.
  • Measurement — builds a histogram of a numeric attribute over time. Request duration from an NGINX access log, payload size from a webhook handler, job runtime from a Sidekiq line.

The query language is the one you already know. severity=error AND hostname=web-1 for a counter. duration as the field on a measurement. message:"429" to count rate-limited requests.
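To make the three shapes concrete, here is a minimal Python sketch of what each type extracts from matching log lines. This is illustrative only, not AppSignal's implementation; the log lines are modeled as plain dicts, and all attribute names are hypothetical.

```python
# Illustrative sketch only -- not AppSignal internals.
# Each log line is a dict of attributes; names are hypothetical.

lines = [
    {"severity": "error", "hostname": "web-1", "status": 503, "duration": 120.0},
    {"severity": "info",  "hostname": "web-1", "status": 200, "duration": 35.5},
    {"severity": "info",  "hostname": "worker-1", "queue_size": 42},
]

# Count: increment once per line matching severity=error AND hostname=web-1
error_count = sum(
    1 for l in lines
    if l.get("severity") == "error" and l.get("hostname") == "web-1"
)

# Gauge: record an attribute's value at the moment the line is ingested
queue_depth = next(l["queue_size"] for l in lines if "queue_size" in l)

# Measurement: accumulate a numeric attribute into a distribution over time
durations = [l["duration"] for l in lines if "duration" in l]

print(error_count, queue_depth, sorted(durations))
```

The same query matches lines in all three cases; the type only decides what gets recorded per match — the fact of the match, one attribute's current value, or one attribute's distribution.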

Why this changes what you keep

Extraction runs as a log line action during ingestion, in an order you control. The pattern we'd point you at — and the one that gets the most out of this — is to place metric and trigger actions before filter actions on the same source.

That order matters. A filter action permanently discards the matching line. Anything that runs after it never sees the line. Anything that runs before it does. So a counter that increments on 5xx responses, followed by a filter that drops debug-level lines, gives you the full error rate without paying to store the debug noise.

The same pattern lets you keep error counts from a chatty health check endpoint without keeping the health checks themselves, or capture a latency measurement on every request while only retaining the slow ones for line-by-line review.
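The ordering semantics can be sketched in a few lines of Python. Again, this is a toy model of the behavior described above, not AppSignal's pipeline: two identical 5xx counters are placed before and after a filter that drops debug lines, and only the one placed before the filter sees the full error rate.

```python
# Illustrative sketch of ordered log line actions -- not AppSignal internals.
# A metric action placed before a filter sees every line; placed after,
# it only sees what the filter let through.

def run_pipeline(lines, actions):
    """Apply actions in order; a filter permanently drops matching lines."""
    counts = {"before": 0, "after": 0}
    for line in lines:
        for kind, predicate, name in actions:
            if kind == "metric" and predicate(line):
                counts[name] += 1
            elif kind == "filter" and predicate(line):
                break  # line is discarded; later actions never see it
    return counts

lines = [
    {"severity": "debug", "status": 200},
    {"severity": "error", "status": 503},
    {"severity": "debug", "status": 500},
]

is_5xx = lambda l: l["status"] >= 500
is_debug = lambda l: l["severity"] == "debug"

actions = [
    ("metric", is_5xx, "before"),   # counts every 5xx, debug or not
    ("filter", is_debug, None),     # drops debug lines
    ("metric", is_5xx, "after"),    # misses the dropped debug 5xx
]

counts = run_pipeline(lines, actions)
print(counts)  # {'before': 2, 'after': 1}
```

The "before" counter reports two 5xx responses while the "after" counter reports one — the debug 5xx was already gone by the time it ran.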

A few concrete examples

A handful of patterns we've seen work well:

  • 5xx rate by host — counter on severity=error AND status>=500, split by hostname as a tag for a per-host breakdown. Pair with an anomaly detection trigger when the rate spikes.
  • Request latency p95 — measurement on the duration attribute from an access log. Shows up on the dashboard the same way a latency metric from any AppSignal integration would.
  • Failed login attempts per user — counter on message:"login failed", split by user.id. Useful for both security alerting and detecting a broken auth flow.
  • Background job retry rate — counter on message:retry, split by job class. A leading indicator before a job class starts paging someone.
  • Queue depth — gauge on a periodic worker heartbeat line that already includes a queue size attribute.

The "split by" pattern is the form's Group by tag option — toggle it on, pick one or more attributes, and AppSignal emits a separate metric series per unique value at ingestion. You can also scope the action to specific log sources or severity levels, instead of writing them into the query.
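The fan-out behind Group by tag can be sketched with a grouped counter. As before, this is a toy model with hypothetical attribute names, not the actual ingestion code: one counter definition produces a separate series per unique hostname.

```python
# Illustrative sketch of "split by" / Group by tag -- not AppSignal internals.
# One counter definition fans out into one series per unique tag value.
from collections import Counter

lines = [
    {"severity": "error", "status": 503, "hostname": "web-1"},
    {"severity": "error", "status": 500, "hostname": "web-2"},
    {"severity": "error", "status": 502, "hostname": "web-1"},
    {"severity": "info",  "status": 200, "hostname": "web-2"},
]

# Counter on severity=error AND status>=500, split by hostname
series = Counter(
    l["hostname"] for l in lines
    if l["severity"] == "error" and l["status"] >= 500
)

print(dict(series))  # {'web-1': 2, 'web-2': 1}
```

Each key in the result is its own metric series, which is what makes the per-host (or per-user, per-job-class) breakdowns in the examples above possible from a single action.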

In each case, the metric is cheaper to query at scale than the underlying logs, is retained alongside every other AppSignal metric, and works with the rest of the platform: triggers, dashboards, the API, the MCP server.

What's actually new

Log line actions of type metrics already existed for customers who set them up via the AppSignal MCP server. This release brings them into the UI as a first-class product surface:

  • A forms-based editor inside the Logs Explorer, with the current view's query pre-filled.
  • A live preview while you tune the query, with a time-range selector for Last 24 hours, Last 48 hours, or Last 7 days — chart on one tab, matching log lines on the other.
  • Per-line creation directly from numeric attributes in the log inspector.
  • A new Metrics page under Logging to review, edit, and delete what you've created.

Any metrics created through the MCP manage_log_line_action tool are visible on the Metrics page — no migration needed.

What "in beta" means for this one

Labs means earlier access in exchange for a few rough edges. One thing worth knowing up front: metrics start accumulating from the moment you create the action. There's no backfill against historical logs, and there isn't one on the roadmap — the first useful chart shows up after the action has been running for the duration you're querying.

The fastest way to tell us where else it falls short is the in-feature feedback button, or the AppSignal Discord community.

How to try it

You'll need at least one log source configured. Once it's sending logs, open the Logs Explorer — that's where metrics get created. There are two paths in:

  • From the ⋮ menu at the top of any log view, select Create a metric. The view's current query becomes the metric's match expression. Build the filter in the Logs Explorer first, confirm the lines you want, then promote it.
  • Select any log line to open the Log details panel, find a numeric attribute, then select Create a metric from the Actions section. The attribute is pre-filled as the field to extract. This is the fastest path for a gauge or measurement.

Both paths open the same form, where you name the metric, pick its type, and add any tags. Once created, the metric flows into the same place every other AppSignal metric does — dashboards, anomaly detection triggers, the metrics API. Review and manage your metrics on the new Metrics page under Logging.

If you'd rather configure it from your editor, point the MCP server at a manage_log_line_action call with action_type: "metrics".

If you run a high-volume log source you don't need to read line by line, this is the one to turn on.

More to come.
