
In this tutorial, we will build a Flask app with Docker, create a background worker with Celery, and use RabbitMQ as the message broker between the two services. This (relatively) simple example will be used to demonstrate how Docker makes it easier to run multiple services and share configuration details between developers.
In the last section, we will also build and optimize our app for deployment to Render, helping you avoid some common gotchas along the way!
First, let's quickly touch on why we're using Flask, Docker, and Render.
Why Flask?
Flask is a lightweight and flexible Python web framework that's ideal for small to medium-sized applications. Flask provides both a high degree of simplicity and extensibility via Python's rich package library. This makes it a great choice for rapid prototyping and microservices.
On the downside, Flask lacks built-in features found in more comprehensive frameworks like Django. This means you will need to add common app functionality like user authentication/authorization, form handling, and database management yourself.
Why Docker?
Docker allows developers to wrap applications with dependencies into a single virtual container, ensuring consistency across development, testing, and production environments. This makes it easier to manage complex setups, simplifies deployment, and avoids the typical developer refrain: "but it works on my machine". Docker also integrates well with CI/CD pipelines and supports orchestration tools like Docker Compose and Kubernetes.
But Docker does introduce an additional layer of complexity and a steeper learning curve. Resource usage can also be higher compared to running an app directly on your local machine without a virtual container.
Why Render?
Render offers a developer-friendly platform for deploying web apps, APIs, static sites, background workers, and more. It automates deployment from Git repositories, provides free SSL, custom domains, and offers managed services like PostgreSQL and Redis. Render’s simplicity and pay-as-you-go pricing model are particularly suited for startups and small developer teams.
That said, Render has limitations in the level of customization offered compared to more mature platforms like AWS or GCP. For teams with specific infrastructure needs or strict performance requirements, you might need to consider something else.
Let's turn to installing Docker on our machine next.
Docker Installation
In this section, we will not follow the recommended approach of installing Docker Desktop for macOS. At the time of writing, my laptop running macOS Ventura with Apple's M-series silicon chip flags a false security certificate issue. Windows users can also check out the official documentation, but please note this hasn't been battle-tested like the following macOS installation steps.
The Docker Desktop security issue is not just alarming in itself, but also a major headache! Of course, there is a fix: manually adding the certificate via the command line and removing the (rather dramatic) macOS warning. But even after doing this, the Mac operating system appears to silently block the Docker Desktop app from launching. This is very frustrating, to say the least!
So, to save you from all the drama, I recommend installing the necessary Docker app components manually from the command line (Docker Desktop automatically bundles all these individual components together). Let's go through the steps one by one.
First, use Homebrew to install Docker's main command line interface (CLI) application:
brew install docker
Next, install Colima, a lightweight Docker virtual machine (VM) which works well on Macs with M1 and M2 chips:
brew install colima
Then run:
colima start
As a result, you should see this output:
INFO[0001] starting colima
INFO[0001] runtime: docker
INFO[0002] starting ...        context=vm
INFO[0015] provisioning ...    context=docker
INFO[0016] starting ...        context=docker
INFO[0018] done
You can stop Colima at any point with:
colima stop
To run the docker compose up command in the next section when we build our sample Flask app, we also need to install docker-compose:
brew install docker-compose
Lastly, if you have previously attempted to install Docker Desktop (like me!), you will need to separately install the Docker credentials helper component. You will also have to remove any old references to how Docker Desktop handles credentials in one of the configuration files.
So now run this command:
brew install docker-credential-helper
Open the hidden configuration file ~/.docker/config.json and make sure the credsStore line in the JSON file looks like this:
"credsStore": "osxkeychain"
You can test that everything is working correctly by running Docker's built-in "hello world" command:
docker run hello-world
Hopefully, you will get the output below, and we can start building!
Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (arm64v8)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/
Create Your Python Flask App With Docker
First, let's create a new project directory called flask-docker-render and an app.py file:

mkdir flask-docker-render
cd flask-docker-render
touch app.py
Note: At any time, you can view or make your own copy of the full code used in this article.
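Before installing packages, you may also want to create and activate a virtual environment, so the project's dependencies stay isolated from the rest of your system:

python3 -m venv venv
source venv/bin/activate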
Now, let's go ahead and install Flask, Celery, and a couple of supporting packages (Gunicorn and python-dotenv) in our project:
pip install Flask gunicorn celery python-dotenv
To update your requirements.txt file with the newly-installed packages, run:
pip freeze > requirements.txt
You should see Flask and Celery appear in your requirements file (along with some other associated packages):
amqp==5.3.1
async-timeout==5.0.1
billiard==4.2.1
blinker==1.9.0
celery==5.5.1
click==8.1.8
click-didyoumean==0.3.1
click-plugins==1.1.1
click-repl==0.3.0
Flask==3.1.0
gunicorn==23.0.0
itsdangerous==2.2.0
Jinja2==3.1.6
kombu==5.5.2
MarkupSafe==3.0.2
packaging==24.2
prompt_toolkit==3.0.50
python-dateutil==2.9.0.post0
python-dotenv==1.1.0
six==1.17.0
tzdata==2025.2
vine==5.1.0
wcwidth==0.2.13
Werkzeug==3.1.3
Also, create config.py in your root and add:

class Config:
    DEBUG = False
    DEVELOPMENT = False
    CSRF_ENABLED = True

class ProductionConfig(Config):
    pass

class DevelopmentConfig(Config):
    DEBUG = True
    DEVELOPMENT = True
Now add this code to your app.py file:

# app.py
import os

from flask import Flask, jsonify
from tasks import generate_report
from dotenv import load_dotenv

load_dotenv()

app = Flask(__name__)

env_config = os.getenv("PROD_APP_SETTINGS", "config.DevelopmentConfig")
app.config.from_object(env_config)

@app.route('/start-task/')
def start_task():
    print("📬 /start-task was called!")
    task = generate_report.delay()
    return jsonify({"task_id": task.id, "status": "started"}), 202

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
In app.py, we build a very minimal Flask app that creates a /start-task/ route and imports the generate_report function from tasks.py. Let's add that below:
# tasks.py
import time
import os

from celery import Celery
from dotenv import load_dotenv

load_dotenv()

broker_url = os.environ.get("CELERY_BROKER_URL", "amqp://guest:guest@rabbitmq:5672//")

celery_app = Celery('tasks', broker=broker_url)

@celery_app.task
def generate_report():
    time.sleep(10)  # simulate a slow-running task
    return "Report complete!"
In tasks.py, we initialize a Celery app and simulate a slow background task by using Python's built-in time module to add a 10-second delay. By separating the core Flask app from the Celery worker code, we ensure that our project is more modular and maintainable.
Note: We have also supplied the environment variable CELERY_BROKER_URL, which will come in handy in the next deployment section. (For now, in local development mode, the code just falls back to the default local RabbitMQ broker URL.)
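Since we load variables with python-dotenv, you could optionally keep these settings in a .env file at the project root, which load_dotenv() will pick up. Here is a minimal sketch; both values simply mirror the defaults already baked into the code, so the file is optional:

# .env (optional, for local development)
PROD_APP_SETTINGS=config.DevelopmentConfig
CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672//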
Next, we will create a Dockerfile in the project root:
FROM python:3.10-alpine

WORKDIR /app

RUN apk add --no-cache gcc musl-dev linux-headers

COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt

COPY . .

EXPOSE 5000

RUN find . -name "*.pyc" -delete

CMD ["gunicorn", "-b", "0.0.0.0:5000", "app:app"]
The Dockerfile provides all the instructions needed to build and run a virtual container on our local machine. Let's break down each instruction here:
- FROM python:3.10-alpine: We specify the Python version and tell Docker we want the lightweight Alpine Linux variant as the base image for our Linux container.
- WORKDIR /app: Set the working directory to /app.
- RUN apk add --no-cache gcc musl-dev linux-headers: Using apk, the Alpine Linux package manager, we install the system-level build tools needed to compile our Python packages.
- COPY requirements.txt requirements.txt: Copy requirements.txt from the local project to the container.
- RUN pip install -r requirements.txt: Install all the packages listed in requirements.txt inside the container.
- COPY . .: Copy everything in your local project to the container (excluding anything listed in .dockerignore; see the sample after this list).
- EXPOSE 5000: Tell Docker to listen on port 5000.
- RUN find . -name "*.pyc" -delete: Delete stale .pyc files that are not needed and can cause problems for later deployments.
- CMD ["gunicorn", "-b", "0.0.0.0:5000", "app:app"]: Tell Docker to run the Flask app using the production-grade Gunicorn web server, with the app object in app.py as the application entry point.
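As mentioned in the COPY . . step above, a .dockerignore file keeps unneeded files out of the image. Here is a minimal example sketch; the exact entries depend on your project:

# .dockerignore (example)
venv/
__pycache__/
*.pyc
.env
.git/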
We also need to create a special YAML file for Docker called compose.yaml in the project root:
services:
  web:
    build: .
    ports:
      - "8000:5000"
    depends_on:
      - rabbitmq
  worker:
    build: .
    depends_on:
      - rabbitmq
    command: celery -A tasks.celery_app worker --loglevel=info
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
      - "15672:15672" # Management UI
In compose.yaml, we have a single configuration file to manage the several services required for our project: the Flask app that triggers the tasks (mapping Flask's default port 5000 to port 8000 on the host), a Celery worker that completes the tasks in the background, and the RabbitMQ server, which acts as the "broker" between both services. The rabbitmq service downloads the official RabbitMQ image from Docker Hub (once) and stores it for future use.
Lastly, we also need a separate Dockerfile.worker for our main Celery command to make sure that Celery runs correctly when we deploy to Render later in this tutorial (this file doesn't have any impact on our local Docker setup). We need this additional file because, unfortunately, Render does not currently support compose.yaml. In the next deployment section, we will also need Render's own render.yaml file to ensure everything works smoothly.
# Dockerfile.worker
FROM python:3.10-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["celery", "-A", "tasks.celery_app", "worker", "--loglevel=info"]
The Dockerfile.worker is very similar to the Dockerfile for our Flask web service. The key difference here is the CMD (startup command), which tells Celery to start a worker service and look in tasks.py for the celery_app object.
With those five key files ready (app.py, tasks.py, Dockerfile, Dockerfile.worker, and compose.yaml), we can now build and start Flask, Celery, and RabbitMQ all with one Docker command:
docker compose up
The first time you run this command, you will see output showing RabbitMQ being pulled from its official Docker Hub image to your local machine. Any subsequent times you run the command, you will just see the boot-up output for Flask, Celery, and RabbitMQ.
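A couple of optional flags can come in handy here: --build forces a rebuild after you change your code or Dockerfiles, and -d runs everything in the background (in which case docker compose logs -f tails the output). While the stack is up, you can also check RabbitMQ's management UI at localhost:15672 (the official image's default credentials are guest/guest):

docker compose up --build
docker compose up -d
docker compose logs -f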
Go to localhost:8000/start-task/ in your browser. If everything works correctly, you should see something very similar to the JSON below:
{ "status": "started", "task_id": "14361d15-3a8e-4f73-8ed0-7f7fefc36efd" }
Note: Your task_id will differ, as it is randomly generated each time.
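If you prefer the command line to the browser, you can trigger the same task with curl (using the port mapping from compose.yaml):

curl http://localhost:8000/start-task/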
To verify that the delayed Celery task is working: after you hit the start-task endpoint, you should see the following in your terminal logs:
[2025-04-15 16:00:16,306: INFO/MainProcess] Task tasks.generate_report[b7f6f260-5087-4893-b6ea-110888f620a6] received
Then, 10 seconds later, the task will show as successfully completed in the terminal:
[2025-04-15 16:00:26,331: INFO/ForkPoolWorker-2] Task tasks.generate_report[b7f6f260-5087-4893-b6ea-110888f620a6] succeeded in 10.017975573999138s: 'Report complete!'
Well done for making it this far! Now that we have everything working correctly on our local machine, it's time to deploy our project to Render!
Deploy Your Python Flask Project to Render
When deploying to Render, first create a render.yaml file:
services:
  - type: web
    name: flask-web
    env: docker
    plan: starter
    dockerfilePath: ./Dockerfile
    dockerContext: .
    envVars:
      - key: PROD_APP_SETTINGS
        value: config.ProductionConfig
      - key: CELERY_BROKER_URL
        sync: false
  - type: worker
    name: celery-worker
    env: docker
    plan: starter
    dockerfilePath: ./Dockerfile.worker
    dockerContext: .
    envVars:
      - key: CELERY_BROKER_URL
        sync: false
The render.yaml file acts as a blueprint to define all the infrastructure needed to deploy our app (in fact, Render calls this a "Blueprint" in its UI). This means that if you have multiple services to deploy, everything is explicitly defined in the code; you don't have to add and configure each service in the UI. Also, when you set up everything via Render's Blueprint approach, you just have to push your main branch to GitHub, and it will trigger an automatic deploy for all your services.
Here's a quick explanation of everything that's happening in the render.yaml above. We define two services: one is a web service (our Flask app) and the second is the background Celery worker. Both use Docker as their environment and run on the Render starter plan ($7 per month). One important thing to note is the two different Dockerfiles used: the web service uses the plain Dockerfile, while the Celery worker uses Dockerfile.worker (both in the project root).
Lastly, the environment variables are pre-populated with a key and value for the web service. Since config.ProductionConfig is not sensitive information (it is just used to automatically switch between the local and production configurations), we have included this value in the code. But the CELERY_BROKER_URL production environment variable should never be hardcoded. So we just provide the key, tell Render not to sync it, and later we will manually enter the value in the Render UI.
Create CloudAMQP Instance
Now that we have our render.yaml file ready, let's sign up for an account at CloudAMQP, a managed cloud hosting service for RabbitMQ.
There are a few simple steps we need to follow in the CloudAMQP UI:
- Click on Create New Instance and select the free Loyal Lemming plan.
- Select your region and data center (AWS is fine), and your instance is created!
- Click on your new instance to access the dashboard.
- In your dashboard overview, look for the AMQP details and copy the AMQP URL. It will appear partly redacted in the dashboard, so make sure you copy the whole thing! This URL is crucial: we will need it to create our Render Blueprint in the next section (an example of its general shape follows below).
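The exact value is unique to your instance, but as a rough sketch, CloudAMQP's AMQP URLs generally follow this shape (all placeholder values here):

amqps://<user>:<password>@<host>.rmq.cloudamqp.com/<vhost>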
Create Render Blueprint
- Go to Render and create a new account (if you haven't done so already).
- Go to the Blueprints section and then click on + New Blueprint Instance.
- You should see a list of your GitHub repositories: select the one you want to connect as a Blueprint.
- In the next screen, name your Blueprint, then under Specified configurations, add your AMQP URL as the value associated with the CELERY_BROKER_URL key for both the web service and the background worker.
- Click Deploy Blueprint. This will start the sync/deploy process.
Lastly, enter your new Render-generated URL in the browser (and don't forget the /start-task endpoint). The URL should look something like this:
https://[YOUR-RENDER-URL].onrender.com/start-task/
If everything works correctly, you will see the same behavior as when you ran the project locally. After hitting the start-task endpoint in your browser, go to Logs in your celery-worker dashboard. You should see something very similar to the following output:
[2025-04-24 15:31:34,645: INFO/MainProcess] Task tasks.generate_report[87ca895f-debc-4d07-bdde-a3408faa8c7e] received
[2025-04-24 15:31:44,657: INFO/ForkPoolWorker-16] Task tasks.generate_report[87ca895f-debc-4d07-bdde-a3408faa8c7e] succeeded in 10.010491968001588s: 'Report complete!'
Congratulations! You've made it to the end of this tutorial!
Wrapping Up
In this post, we ran through setting up and deploying a Flask app to Render using Docker.
First, we installed Docker on Mac (in the most painless way possible). Then, we created the local Docker version of our Flask app with Dockerfile, Dockerfile.worker, and compose.yaml files. Lastly, we prepared our app for deployment to Render with render.yaml, created a Render Blueprint, and set up a new CloudAMQP instance.
If you would like to take things further and integrate your Render-deployed app with AppSignal, check out AppSignal's Render integration docs.
You can also check out the docs to use the AppSignal standalone agent as a Docker image.
Happy coding!