Update docker-example (#3324)

Masen Furer 2024-09-08 19:21:05 -07:00 committed by GitHub
parent c70cba1e7c
commit fa894289d4
20 changed files with 398 additions and 151 deletions

View File

@ -1,133 +1,30 @@
-# Reflex Docker Container
-This example describes how to create and use a container image for Reflex with your own code.
-## Update Requirements
-The `requirements.txt` includes the reflex package which is needed to install
-Reflex framework. If you use additional packages in your project you have to add
-this in the `requirements.txt` first. Copy the `Dockerfile`, `.dockerignore` and
-the `requirements.txt` file in your project folder.
-## Build Simple Reflex Container Image
-The main `Dockerfile` is intended to build a very simple, single container deployment that runs
-the Reflex frontend and backend together, exposing ports 3000 and 8000.
-To build your container image run the following command:
-```bash
-docker build -t reflex-app:latest .
-```
-## Start Container Service
-Finally, you can start your Reflex container service as follows:
-```bash
-docker run -it --rm -p 3000:3000 -p 8000:8000 --name app reflex-app:latest
-```
-It may take a few seconds for the service to become available.
-Access your app at http://localhost:3000.
-Note that this container has _no persistence_ and will lose all data when
-stopped. You can use bind mounts or named volumes to persist the database and
-uploaded_files directories as needed.
-# Production Service with Docker Compose and Caddy
-An example production deployment uses automatic TLS with Caddy serving static files
-for the frontend and proxying requests to both the frontend and backend.
-Copy the following files to your project directory:
-* `compose.yaml`
-* `compose.prod.yaml`
-* `compose.tools.yaml`
-* `prod.Dockerfile`
-* `Caddy.Dockerfile`
-* `Caddyfile`
-The production app container, based on `prod.Dockerfile`, builds and exports the
-frontend statically (to be served by Caddy). The resulting image only runs the
-backend service.
-The `webserver` service, based on `Caddy.Dockerfile`, copies the static frontend
-and `Caddyfile` into the container to configure the reverse proxy routes that will
-forward requests to the backend service. Caddy will automatically provision TLS
-for localhost or the domain specified in the environment variable `DOMAIN`.
-This type of deployment should use less memory and be more performant since
-nodejs is not required at runtime.
-## Customize `Caddyfile` (optional)
-If the app uses additional backend API routes, those should be added to the
-`@backend_routes` path matcher to ensure they are forwarded to the backend.
-## Build Reflex Production Service
-During build, set `DOMAIN` environment variable to the domain where the app will
-be hosted! (Do not include http or https, it will always use https).
-**If `DOMAIN` is not provided, the service will default to `localhost`.**
-```bash
-DOMAIN=example.com docker compose build
-```
-This will build both the `app` service from the `prod.Dockerfile` and the `webserver`
-service via `Caddy.Dockerfile`.
-## Run Reflex Production Service
-```bash
-DOMAIN=example.com docker compose up
-```
-The app should be available at the specified domain via HTTPS. Certificate
-provisioning will occur automatically and may take a few minutes.
-### Data Persistence
-Named docker volumes are used to persist the app database (`db-data`),
-uploaded_files (`upload-data`), and caddy TLS keys and certificates
-(`caddy-data`).
-## More Robust Deployment
-For a more robust deployment, consider bringing the service up with
-`compose.prod.yaml` which includes postgres database and redis cache, allowing
-the backend to run with multiple workers and service more requests.
-```bash
-DOMAIN=example.com docker compose -f compose.yaml -f compose.prod.yaml up -d
-```
-Postgres uses its own named docker volume for data persistence.
-## Admin Tools
-When needed, the services in `compose.tools.yaml` can be brought up, providing
-graphical database administration (Adminer on http://localhost:8080) and a
-redis cache browser (redis-commander on http://localhost:8081). It is not recommended
-to deploy these services if they are not in active use.
-```bash
-DOMAIN=example.com docker compose -f compose.yaml -f compose.prod.yaml -f compose.tools.yaml up -d
-```
-# Container Hosting
-Most container hosting services automatically terminate TLS and expect the app
-to be listening on a single port (typically `$PORT`).
-To host a Reflex app on one of these platforms, like Google Cloud Run, Render,
-Railway, etc, use `app.Dockerfile` to build a single image containing a reverse
-proxy that will serve that frontend as static files and proxy requests to the
-backend for specific endpoints.
-If the chosen platform does not support buildx and thus heredoc, you can copy
-the Caddyfile configuration into a separate Caddyfile in the root of the
-project.
+# Reflex Docker Examples
+This directory contains several examples of how to deploy Reflex apps using docker.
+In all cases, ensure that your `requirements.txt` file is up to date and
+includes the `reflex` package.
+## `simple-two-port`
+The most basic production deployment exposes two HTTP ports and relies on an
+existing load balancer to forward the traffic appropriately.
+## `simple-one-port`
+This deployment exports the frontend statically and serves it via a single HTTP
+port using Caddy. This is useful for platforms that only support a single port
+or where running a node server in the container is undesirable.
+## `production-compose`
+This deployment is intended for use with a standalone VPS that is only hosting a
+single Reflex app. It provides the entire stack in a single `compose.yaml`
+including a webserver, one or more backend instances, redis, and a postgres
+database.
+## `production-app-platform`
+This example deployment is intended for use with App hosting platforms, like
+Azure, AWS, or Google Cloud Run. It is the backend of the deployment, which
+depends on a separately hosted redis instance and static frontend deployment.

View File

@ -0,0 +1,5 @@
.web
.git
__pycache__/*
Dockerfile
uploaded_files

View File

@ -0,0 +1,65 @@
# This docker file is intended to be used with container hosting services
#
# After deploying this image, get the URL pointing to the backend service
# and run API_URL=https://path-to-my-container.example.com reflex export frontend
# then copy the contents of `frontend.zip` to your static file server (github pages, s3, etc).
#
# Azure Static Web App example:
# npx @azure/static-web-apps-cli deploy --env production --app-location .web/_static
#
# For dynamic routes to function properly, ensure that 404s are redirected to /404 on the
# static file host (for github pages, this works out of the box; remember to create .nojekyll).
#
# For azure static web apps, add `staticwebapp.config.json` to `.web/_static` with the following:
# {
# "responseOverrides": {
# "404": {
# "rewrite": "/404.html"
# }
# }
# }
#
# Note: many container hosting platforms require amd64 images, so when building on an M1 Mac
# for example, pass `docker build --platform=linux/amd64 ...`
# Stage 1: init
FROM python:3.11 as init
ARG uv=/root/.cargo/bin/uv
# Install `uv` for faster package bootstrapping
ADD --chmod=755 https://astral.sh/uv/install.sh /install.sh
RUN /install.sh && rm /install.sh
# Copy local context to `/app` inside container (see .dockerignore)
WORKDIR /app
COPY . .
RUN mkdir -p /app/data /app/uploaded_files
# Create virtualenv which will be copied into final container
ENV VIRTUAL_ENV=/app/.venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN $uv venv
# Install app requirements and reflex inside virtualenv
RUN $uv pip install -r requirements.txt
# Deploy templates and prepare app
RUN reflex init
# Stage 2: copy artifacts into slim image
FROM python:3.11-slim
WORKDIR /app
RUN adduser --disabled-password --home /app reflex
COPY --chown=reflex --from=init /app /app
# Install libpq-dev for psycopg2 (skip if not using postgres).
RUN apt-get update -y && apt-get install -y libpq-dev && rm -rf /var/lib/apt/lists/*
USER reflex
ENV PATH="/app/.venv/bin:$PATH" PYTHONUNBUFFERED=1
# Needed until Reflex properly passes SIGTERM on backend.
STOPSIGNAL SIGKILL
# Always apply migrations before starting the backend.
CMD [ -d alembic ] && reflex db migrate; \
exec reflex run --env prod --backend-only --backend-port ${PORT:-8000}

View File

@ -0,0 +1,113 @@
# production-app-platform
This example deployment is intended for use with App hosting platforms, like
Azure, AWS, or Google Cloud Run.
## Architecture
The production deployment consists of a few pieces:
* Backend container - built by `Dockerfile`. Runs the Reflex backend
service on port 8000 and is scalable to multiple instances.
* Redis container - a single instance of the standard `redis` docker image, which
should share private networking with the backend.
* Static frontend - HTML/CSS/JS files that are hosted via a CDN or static file
server. This is not included in the docker image.
## Deployment
These general steps do not cover the specifics of each platform, but all platforms should
support the concepts described here.
### Vnet
All containers in the deployment should be hooked up to the same virtual private
network so they can access the redis service and optionally the database server.
The vnet should not be exposed to the internet; use an ingress rule to terminate
TLS at the load balancer and forward the traffic to a backend service replica.
### Redis
Deploy a `redis` instance on the vnet.
### Backend
The backend is built by the `Dockerfile` in this directory. When deploying the
backend, be sure to set REDIS_URL=redis://internal-redis-hostname to connect to
the redis service.
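For example, if your platform accepts environment variables for the container, the
wiring might look like the following local sketch (the image name, redis hostname,
and connection string are placeholders for your own values):

```bash
docker run -d -p 8000:8000 \
  -e REDIS_URL=redis://internal-redis-hostname:6379 \
  -e DB_URL=postgresql://user:password@internal-db-hostname/appdb \
  my-registry.example.com/my-app-backend:latest
```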
### Ingress
Configure the load balancer for the app to forward traffic to port 8000 on the
backend service replicas. Most platforms will generate an ingress hostname
automatically. Make sure that accessing the ingress endpoint at `/ping` returns
"pong", indicating that the backend is up and available.
### Frontend
The frontend should be hosted on a static file server or CDN.
**Important**: when exporting the frontend, set the API_URL environment variable
to the ingress hostname of the backend service.
If you will host the frontend from a path other than the root, set the
`FRONTEND_PATH` environment variable appropriately when exporting the frontend.
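A sketch of the export step (the backend hostname is a placeholder; with
`--no-zip` the site is written to `.web/_static`, as noted in the `Dockerfile`
comments above):

```bash
# If hosting under a subpath, also set FRONTEND_PATH before exporting (value depends on your host).
API_URL=https://my-app-ingress.example.com reflex export --frontend-only --no-zip
```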
Most static hosts will automatically use the `/404.html` file to handle 404
errors. _This is essential for dynamic routes to work correctly._ Ensure that
missing routes return the `/404.html` content to the user if this is not the
default behavior.
_For Github Pages_: ensure the file `.nojekyll` is present in the root of the repo
to avoid special processing of underscore-prefix directories, like `_next`.
## Platform Notes
The following sections are currently a work in progress and may be incomplete.
### Azure
In the Azure load balancer, per-message deflate is not supported. Add the following
to your `rxconfig.py` to work around this issue.
```python
import uvicorn.workers

import reflex as rx


class NoWSPerMessageDeflate(uvicorn.workers.UvicornH11Worker):
    CONFIG_KWARGS = {
        **uvicorn.workers.UvicornH11Worker.CONFIG_KWARGS,
        "ws_per_message_deflate": False,
    }


config = rx.Config(
    app_name="my_app",
    gunicorn_worker_class="rxconfig.NoWSPerMessageDeflate",
)
```
#### Persistent Storage
If you need to use a database or upload files, you cannot save them to the
container's local filesystem, which is ephemeral. Use Azure Files and mount it into the container at /app/uploaded_files.
#### Resource Types
* Create a new vnet with 10.0.0.0/16
  * Create a new subnet for redis, database, and containers
* Deploy redis as a Container Instance
* Deploy database server as "Azure Database for PostgreSQL"
  * Create a new database for the app
  * Set db-url as a secret containing the db user/password connection string
* Deploy Storage account for uploaded files
  * Enable access from the vnet and container subnet
  * Create a new file share
  * In the environment, create a new file share (get the storage key)
* Deploy the backend as a Container App
  * Create a custom Container App Environment linked up to the same vnet as the redis container.
  * Set REDIS_URL and DB_URL environment variables
  * Add the volume from the environment
  * Add the volume mount to the container
* Deploy the frontend as a Static Web App
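After exporting the frontend as described above, publishing it to the Static Web
App can follow the command noted in the `Dockerfile` header of this directory:

```bash
npx @azure/static-web-apps-cli deploy --env production --app-location .web/_static
```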

View File

@ -42,10 +42,11 @@ COPY --chown=reflex --from=init /app /app
# Install libpq-dev for psycopg2 (skip if not using postgres).
RUN apt-get update -y && apt-get install -y libpq-dev && rm -rf /var/lib/apt/lists/*
USER reflex
-ENV PATH="/app/.venv/bin:$PATH"
+ENV PATH="/app/.venv/bin:$PATH" PYTHONUNBUFFERED=1
# Needed until Reflex properly passes SIGTERM on backend.
STOPSIGNAL SIGKILL
# Always apply migrations before starting the backend.
-CMD reflex db migrate && reflex run --env prod --backend-only
+CMD [ -d alembic ] && reflex db migrate; \
+    exec reflex run --env prod --backend-only

View File

@ -0,0 +1,75 @@
# production-compose
This example production deployment uses automatic TLS with Caddy serving static
files for the frontend and proxying requests to both the frontend and backend.
It is intended for use with a standalone VPS that is only hosting a single
Reflex app.
The production app container (`Dockerfile`) builds and exports the frontend
statically (to be served by Caddy). The resulting image only runs the backend
service.
The `webserver` service, based on `Caddy.Dockerfile`, copies the static frontend
and `Caddyfile` into the container to configure the reverse proxy routes that will
forward requests to the backend service. Caddy will automatically provision TLS
for localhost or the domain specified in the environment variable `DOMAIN`.
This type of deployment should use less memory and be more performant since
nodejs is not required at runtime.
## Customize `Caddyfile` (optional)
If the app uses additional backend API routes, those should be added to the
`@backend_routes` path matcher to ensure they are forwarded to the backend.
## Build Reflex Production Service
During build, set the `DOMAIN` environment variable to the domain where the app will
be hosted. (Do not include http or https; it will always use https.)
**If `DOMAIN` is not provided, the service will default to `localhost`.**
```bash
DOMAIN=example.com docker compose build
```
This will build both the `app` service from the `prod.Dockerfile` and the `webserver`
service via `Caddy.Dockerfile`.
## Run Reflex Production Service
```bash
DOMAIN=example.com docker compose up
```
The app should be available at the specified domain via HTTPS. Certificate
provisioning will occur automatically and may take a few minutes.
### Data Persistence
Named docker volumes are used to persist the app database (`db-data`),
uploaded_files (`upload-data`), and caddy TLS keys and certificates
(`caddy-data`).
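To take an offline backup of one of these volumes, a minimal sketch (the volume
name assumes the default compose project prefix for this directory; check
`docker volume ls` for the exact name) is:

```bash
docker run --rm \
  -v production-compose_db-data:/data:ro \
  -v "$PWD":/backup \
  busybox tar czf /backup/db-data.tgz -C /data .
```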
## More Robust Deployment
For a more robust deployment, consider bringing the service up with
`compose.prod.yaml`, which includes a postgres database and redis cache, allowing
the backend to run with multiple workers and serve more requests.
```bash
DOMAIN=example.com docker compose -f compose.yaml -f compose.prod.yaml up -d
```
Postgres uses its own named docker volume for data persistence.
## Admin Tools
When needed, the services in `compose.tools.yaml` can be brought up, providing
graphical database administration (Adminer on http://localhost:8080) and a
redis cache browser (redis-commander on http://localhost:8081). It is not recommended
to deploy these services if they are not in active use.
```bash
DOMAIN=example.com docker compose -f compose.yaml -f compose.prod.yaml -f compose.tools.yaml up -d
```

View File

@ -12,7 +12,6 @@ services:
      DB_URL: sqlite:///data/reflex.db
    build:
      context: .
-      dockerfile: prod.Dockerfile
    volumes:
      - db-data:/app/data
      - upload-data:/app/uploaded_files

View File

@ -1 +0,0 @@
-reflex

View File

@ -0,0 +1,5 @@
.web
.git
__pycache__/*
Dockerfile
uploaded_files

View File

@ -0,0 +1,14 @@
:{$PORT}

encode gzip

@backend_routes path /_event/* /ping /_upload /_upload/*
handle @backend_routes {
    reverse_proxy localhost:8000
}

root * /srv
route {
    try_files {path} {path}/ /404.html
    file_server
}

View File

@ -10,31 +10,13 @@ FROM python:3.11
ARG PORT=8080
# Only set for local/direct access. When TLS is used, the API_URL is assumed to be the same as the frontend.
ARG API_URL
-ENV PORT=$PORT API_URL=${API_URL:-http://localhost:$PORT}
+ENV PORT=$PORT API_URL=${API_URL:-http://localhost:$PORT} REDIS_URL=redis://localhost PYTHONUNBUFFERED=1
-# Install Caddy server inside image
-RUN apt-get update -y && apt-get install -y caddy && rm -rf /var/lib/apt/lists/*
+# Install Caddy and redis server inside image
+RUN apt-get update -y && apt-get install -y caddy redis-server && rm -rf /var/lib/apt/lists/*
WORKDIR /app
-# Create a simple Caddyfile to serve as reverse proxy
-RUN cat > Caddyfile <<EOF
-:{\$PORT}
-encode gzip
-@backend_routes path /_event/* /ping /_upload /_upload/*
-handle @backend_routes {
-    reverse_proxy localhost:8000
-}
-root * /srv
-route {
-    try_files {path} {path}/ /404.html
-    file_server
-}
-EOF
# Copy local context to `/app` inside container (see .dockerignore)
COPY . .
@ -54,4 +36,6 @@ EXPOSE $PORT
# Apply migrations before starting the backend.
CMD [ -d alembic ] && reflex db migrate; \
-    caddy start && reflex run --env prod --backend-only --loglevel debug
+    caddy start && \
+    redis-server --daemonize yes && \
+    exec reflex run --env prod --backend-only

View File

@ -0,0 +1,36 @@
# simple-one-port
This docker deployment runs Reflex in prod mode, exposing a single HTTP port:
* `8080` (`$PORT`) - Caddy server hosting the frontend statically and proxying requests to the backend.
The deployment also runs a local Redis server to store state for each user.
Using this method may be preferable for deploying in memory-constrained
environments, because it serves a static frontend export rather than running
the NextJS server via node.
For platforms which only terminate TLS to a single port, this container can be
deployed instead of the `simple-two-port` example.
## Build
```console
docker build -t reflex-simple-one-port .
```
## Run
```console
docker run -p 8080:8080 reflex-simple-one-port
```
Note that this container has _no persistence_ and will lose all data when
stopped. You can use bind mounts or named volumes to persist the database and
uploaded_files directories as needed.
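For example, a run with bind mounts might look like the following sketch,
assuming the app keeps its sqlite database under `/app/data` (via `DB_URL`) and
uploads under `/app/uploaded_files`; adjust the paths to match your `rxconfig.py`:

```console
docker run -p 8080:8080 \
  -e DB_URL=sqlite:///data/reflex.db \
  -v "$PWD/data:/app/data" \
  -v "$PWD/uploaded_files:/app/uploaded_files" \
  reflex-simple-one-port
```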
## Usage
This container should be used with an existing load balancer or reverse proxy to
terminate TLS.
It is also useful for deploying to simple app platforms, such as Render or Heroku.
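These platforms typically inject the listening port via the `PORT` environment
variable; assuming this image honors a runtime `PORT` override, as the
Caddyfile's `:{$PORT}` binding suggests, a local simulation would be:

```console
docker run -e PORT=10000 -p 10000:10000 reflex-simple-one-port
```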

View File

@ -0,0 +1,5 @@
.web
.git
__pycache__/*
Dockerfile
uploaded_files

View File

@ -1,5 +1,8 @@
# This Dockerfile is used to deploy a simple single-container Reflex app instance.
-FROM python:3.11
+FROM python:3.12
+RUN apt-get update && apt-get install -y redis-server && rm -rf /var/lib/apt/lists/*
+ENV REDIS_URL=redis://localhost PYTHONUNBUFFERED=1
# Copy local context to `/app` inside container (see .dockerignore)
WORKDIR /app
@ -18,4 +21,6 @@ RUN reflex export --frontend-only --no-zip
STOPSIGNAL SIGKILL
# Always apply migrations before starting the backend.
-CMD [ -d alembic ] && reflex db migrate; reflex run --env prod
+CMD [ -d alembic ] && reflex db migrate; \
+    redis-server --daemonize yes && \
+    exec reflex run --env prod

View File

@ -0,0 +1,44 @@
# simple-two-port
This docker deployment runs Reflex in prod mode, exposing two HTTP ports:
* `3000` - node NextJS server using optimized production build
* `8000` - python gunicorn server hosting the Reflex backend
The deployment also runs a local Redis server to store state for each user.
## Build
```console
docker build -t reflex-simple-two-port .
```
## Run
```console
docker run -p 3000:3000 -p 8000:8000 reflex-simple-two-port
```
Note that this container has _no persistence_ and will lose all data when
stopped. You can use bind mounts or named volumes to persist the database and
uploaded_files directories as needed.
## Usage
This container should be used with an existing load balancer or reverse proxy to
route traffic to the appropriate port inside the container.
For example, the following Caddyfile can be used to terminate TLS and forward
traffic to the frontend and backend from outside the container.
```
my-domain.com

encode gzip

@backend_routes path /_event/* /ping /_upload /_upload/*
handle @backend_routes {
    reverse_proxy localhost:8000
}

reverse_proxy localhost:3000
```