
Slow endpoints are difficult to detect because they don’t fail. They simply get slower and slower. Average latency may look fine, but that can be misleading.
That’s why we need to look at percentile metrics like p90 and p95, which better reflect what users actually experience. p90 is the latency below which 90% of requests complete, so the slowest 10% of requests take longer than it; p95 is the same threshold for the slowest 5%. When these values climb, a meaningful share of your users is experiencing delays.
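To make this concrete, here is a minimal sketch in plain Python, using the nearest-rank method (monitoring tools may interpolate instead) and a hypothetical latency sample, showing how the mean can look tolerable while p90 and p95 expose the slow tail:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the value below which p% of samples fall."""
    ordered = sorted(samples)
    return ordered[math.ceil(p / 100 * len(ordered)) - 1]

# Hypothetical request latencies in milliseconds: most fast, a few very slow.
latencies = [80, 85, 90, 95, 100, 110, 120, 400, 900, 1500]

print(sum(latencies) / len(latencies))  # mean: 348.0 ms - looks tolerable
print(percentile(latencies, 90))        # p90: 900 ms
print(percentile(latencies, 95))        # p95: 1500 ms
```

The three slow outliers barely move the mean, but they dominate p90 and p95, which is exactly why those metrics surface creeping slowness first.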
For a backend developer, the problem is not necessarily how to make things faster; it’s rather how to identify where time is being spent. A slow request is not generally caused by one big query but by a series of small operations that add up.
In this article, we are going to trace a slow request using a small demo application built with Django. We’ll use AppSignal to identify why it’s slow and how we can improve it.
## Setting up the Demo Application
We'll start with a simple project named django_books_demo. It’s a data store for books, along with their authors and publishers.
To create the project and an app, use the following commands:
```shell
django-admin startproject django_books_demo
cd django_books_demo
python manage.py startapp inventory
```
Here’s what the project structure should look like:
```
django_books_demo/
│
├── manage.py
├── django_books_demo/
│   ├── settings.py
│   ├── urls.py
│   └── ...
│
└── inventory/
    ├── models.py
    ├── views.py
    └── migrations/
```

## Installing AppSignal
Next, we install the AppSignal Python integration:
```shell
pip install appsignal
```
Add the following to your Django settings:
```python
APPSIGNAL_PUSH_API_KEY = "YOUR_APPSIGNAL_KEY"
APPSIGNAL_APP_NAME = "django-books-demo"
APPSIGNAL_ENVIRONMENT = "development"
```
Now, we need to make sure the AppSignal agent is running when Django is running. We do this by adding the following to manage.py:
```python
import appsignal

appsignal.start()
```
Once you have done this, AppSignal will automatically monitor the performance of the Django requests you make.
## Creating the Data Models
Our demo application is simple. It only manages books and their relationships.
```python
# models.py
from django.db import models


class Author(models.Model):
    name = models.CharField(max_length=200)


class Publisher(models.Model):
    name = models.CharField(max_length=200)


class Book(models.Model):
    title = models.CharField(max_length=200)
    author = models.ForeignKey(Author, on_delete=models.CASCADE)
    publisher = models.ForeignKey(Publisher, on_delete=models.CASCADE)
```
We need to run the following commands to actually create the database tables:
```shell
python manage.py makemigrations
python manage.py migrate
```
## Populating the Database
We need to populate the database to simulate real-world usage.
First, we start the shell:
```shell
python manage.py shell
```
Then, we fill the database with enough data to expose the ORM’s pitfalls:
```python
import random

from inventory.models import Author, Book, Publisher

authors = [Author.objects.create(name=f"Author{i}") for i in range(20)]
publishers = [Publisher.objects.create(name=f"Publisher{i}") for i in range(10)]

for i in range(1000):
    Book.objects.create(
        title=f"Book{i}",
        author=random.choice(authors),
        publisher=random.choice(publishers),
    )
```
Now that we have 1,000 books in our database, it’s easier to spot where the ORM slows things down.
## Creating a Slow Django Endpoint
Let’s set up an endpoint that returns all the books in our database.
```python
# views.py
from django.http import HttpResponse, JsonResponse

from .models import Book


def home(request):
    return HttpResponse("Hello books app")


def books(request):
    books = Book.objects.all()
    data = []
    for book in books:
        data.append(
            {
                "title": book.title,
                "author": book.author.name,
                "publisher": book.publisher.name,
            }
        )
    return JsonResponse(data, safe=False)
```
Then add this endpoint in urls.py:
```python
from django.urls import path

from inventory.views import books, home

urlpatterns = [
    path("", home, name="home"),
    path("books/", books),
]
```
At first glance, our code looks perfectly normal and doesn't contain any obvious issues. However, it has a performance pitfall that’s common in Django applications.
Inside the loop, it fetches the related objects:
```python
book.author.name
book.publisher.name
```
However, because the relationships were not prefetched, this will cause extra database queries for each access.
This is known as the N+1 query problem.
## Triggering the Slow Request
First, start a development server:
```shell
python manage.py runserver
```
Then send a few requests to the endpoint:
```
http://127.0.0.1:8000/books/
```
Each request will generate trace data that AppSignal captures.
### Step 1: Finding the Slow Endpoint
After sending a few requests, open the AppSignal dashboard and go to: Performance -> Issue list
The Issue list includes all endpoints used by the application, along with request counts and response times.
These are the results for our demo application:
| Endpoint | Mean response time | Requests |
|---|---|---|
| GET /books/ | 4.81 sec | 11 |
| GET /home | 1.07 sec | 7 |
The /books/ endpoint clearly stands out.
The response time is over 4 seconds, which is considered long for a simple API request.

### Step 2: Analyzing Request Performance
AppSignal also provides performance charts to visualize how request latency varies over a period of time.
The charts offer two types of data:
- Mean response time: Average request latency
- 90th percentile latency (p90): Slower request latency that impacts user experience

The spikes on the graph indicate slower request execution times for the /books/ endpoint during testing: almost 6 seconds. This is a confirmation that the /books/ endpoint is performing poorly.
Interestingly, the Slow queries section on AppSignal is empty. That’s because the slowness isn’t caused by a single expensive query. Instead, the request is executing hundreds of small queries, which is exactly the N+1 query anti-pattern.
## Understanding the Root Cause
Django QuerySets are lazy.
When we access a related object, such as book.author, Django makes an additional database query unless the relationship has already been loaded.
So, here’s what’s happening in our loop:
```
1 query to fetch the books
+ 1,000 queries to fetch authors
+ 1,000 queries to fetch publishers
= 2,001 queries in total
```
The number of queries grows linearly with the number of results. This isn’t a problem with the database itself; it’s an issue with how the ORM is used, and request tracing helps us tell the two apart.
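To see how the numbers add up, here is a toy simulation in plain Python (no Django, no real database) in which every fetch of an unloaded relation counts as one query:

```python
class FakeDB:
    """Stand-in for a database that simply counts queries."""

    def __init__(self):
        self.query_count = 0

    def fetch(self, result):
        self.query_count += 1
        return result


db = FakeDB()

# 1 query to fetch all the books
books = db.fetch([{"title": f"Book{i}"} for i in range(1000)])

for book in books:
    author = db.fetch("Author")        # +1 query per book
    publisher = db.fetch("Publisher")  # +1 query per book

print(db.query_count)  # 2001
```

Swap the 1,000 for any other result-set size and the count stays 1 + 2N, which is why the endpoint keeps degrading as the table grows.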
## Fixing the Slow Query
Django offers a queryset method called select_related() that retrieves related objects in the same query using SQL JOINs. This way, the related rows are loaded upfront, avoiding thousands of extra queries.
We will update the view:
```python
def books(request):
    books = Book.objects.select_related("author", "publisher")
    data = []
    for book in books:
        data.append(
            {
                "title": book.title,
                "author": book.author.name,
                "publisher": book.publisher.name,
            }
        )
    return JsonResponse(data, safe=False)
```
The updated queryset tells Django to JOIN the author and publisher tables and fetch everything in a single query instead of many.
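At the SQL level, select_related("author", "publisher") boils down to a JOIN. Here is a sketch using Python's built-in sqlite3 module; the table and column names mirror the demo models for illustration, not Django's exact generated schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE publisher (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book (
        id INTEGER PRIMARY KEY, title TEXT,
        author_id INTEGER REFERENCES author(id),
        publisher_id INTEGER REFERENCES publisher(id)
    );
    INSERT INTO author VALUES (1, 'Author0');
    INSERT INTO publisher VALUES (1, 'Publisher0');
    INSERT INTO book VALUES (1, 'Book0', 1, 1);
""")

# One JOINed query returns the title, author, and publisher together,
# instead of 1 + 2N separate lookups.
rows = conn.execute("""
    SELECT book.title, author.name, publisher.name
    FROM book
    JOIN author ON author.id = book.author_id
    JOIN publisher ON publisher.id = book.publisher_id
""").fetchall()
print(rows)  # [('Book0', 'Author0', 'Publisher0')]
```

The database does the matching work in one round trip, which is almost always cheaper than thousands of tiny queries crossing the network.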
## Comparing the Results
After applying the fix and hitting the endpoint again, AppSignal shows a much shorter response time, as the table below illustrates:
| Version | Query strategy | Response time |
|---|---|---|
| Original implementation | N+1 queries | 3–6 seconds |
| Optimized implementation | select_related() | ~60 ms |
Here’s one of the traces recorded after the fix has been applied:
```
GET /books/ → 60 ms
```
The response time has been dramatically reduced, which confirms that eliminating the N+1 queries has a significant impact on request times.
## A Repeatable Debugging Workflow
Performance debugging is much easier when a standard process is followed.
Here’s how it works with AppSignal:
- Identify which endpoints are slow using the Performance dashboard's Issue list.
- Open a representative trace.
- Determine what kind of operation is taking up most of the time.
- Optimize the code.
- Measure the improvement using new traces.
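For the final step, a quick local sanity check can complement AppSignal's traces. Here is a minimal sketch in plain Python; the measure() helper and the workload are illustrative, not part of AppSignal:

```python
import math
import time

def measure(fn, runs=50):
    """Call fn repeatedly and return (mean, p90) latency in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    p90 = samples[math.ceil(0.9 * runs) - 1]
    return sum(samples) / runs, p90

# Example: time a simple CPU-bound workload standing in for a view function.
mean_ms, p90_ms = measure(lambda: sum(i * i for i in range(10_000)))
print(f"mean={mean_ms:.3f} ms  p90={p90_ms:.3f} ms")
```

Running this against the old and new view logic (for instance, via Django's test client) gives a before/after comparison you can cross-check against the dashboard.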
This process turns performance debugging from guesswork into a systematic routine.
## Why Observability Matters
Performance issues can creep up on you as the volume of your data increases. Without proper monitoring, issues like N+1 queries can go unnoticed until user experience begins to degrade and noticeable delays occur.
Observability tools like AppSignal offer:
- Request-level visibility lets you see what actually happens during a single request, so you’re not guessing where the time is going.
- Database vs. app time helps you quickly figure out if the slowdown is coming from queries or your own code.
- Performance trends show how things change over time, so you can catch problems early, before they reach users.
- Measurable before and after performance comparisons prove that your changes have actually improved performance.
This way, your team can identify bottlenecks and ensure that optimizations are indeed making the system faster.
## Conclusion
Slow endpoints don’t usually manifest themselves with obvious error messages. They sneak up on you quietly, stealing time and resources as your app grows.
In our example, AppSignal helped us debug a slow Django endpoint and identify a classic issue: the N+1 query problem. With a one-line change to our ORM query, we cut the request time from several seconds down to around 60 milliseconds.
For a backend developer, request tracing offers a powerful perspective on how your app is behaving. With good observability tools in place, it’s much easier to debug and resolve performance issues.
Jaume Boguña
Jaume is a dynamic and results-driven data engineer with a strong background in aerospace and data science. Experienced in delivering scalable, data-driven solutions and in managing complex projects from start to finish.