Django, Docker and frontend assets
Or, how to prevent frontend assets from disappearing when mounting app code in Docker containers.
When we use a Docker container to run our Django app in production, it makes sense to use multi-stage builds: a frontend build stage from which we copy the assets into the backend stage (the "target" stage that will be run on the server). A Dockerfile that achieves this looks somewhat like the following (parts that are not important here are skipped):
```dockerfile
# Dockerfile
FROM node:18-bookworm-slim AS frontend
WORKDIR /home/node/app
# Install frontend dependencies
COPY package.json package-lock.json ./
RUN npm ci --no-optional --no-audit --progress=false
# Copy app code
COPY . .
# Compile static files
RUN npm run build

FROM python:3.9 AS backend-production
...
# Copy app code
COPY . .
# Copy the previously built frontend assets from the `frontend` stage
COPY --from=frontend /home/node/app/lpld/static/build /app/lpld/static/build
RUN SECRET_KEY=none ./manage.py collectstatic --noinput --clear
CMD ./scripts/run.sh
```
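To build and run the production image from this Dockerfile, the build target can be selected explicitly. A sketch (the image tag `lpld-web` and the published port are assumptions):

```shell
# Build only up to the backend-production stage and run it.
# (`lpld-web` is a made-up image tag.)
docker build --target backend-production --tag lpld-web .
docker run --rm --publish 8000:8000 lpld-web
```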
It first goes through a `frontend` stage. The `frontend` stage installs the frontend dependencies and then builds the assets. The `backend-production` stage copies the app code from the host into the image. Then it copies the built assets from the `frontend` stage into the image. The `collectstatic` command then gathers all static files into the directory or storage configured in the Django settings.
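Where `collectstatic` puts the files is controlled by the Django settings. A minimal sketch, assuming the default staticfiles storage (the exact paths are assumptions, not this project's real config). Because the built assets are copied into `lpld/static/build`, i.e. inside an app's `static` directory, Django's default app-directories finder should pick them up without extra configuration:

```python
# settings.py -- sketch; paths are assumptions.
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent

STATIC_URL = "/static/"
# `collectstatic` gathers all static files into this directory:
STATIC_ROOT = BASE_DIR / "static-collected"
```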
In development, we can use Docker Compose to run our stack, e.g. to have the app, a database, and frontend tooling running in parallel with a single command (`docker compose up`).
Now, it makes sense to use the same image during development that is used in production. We might want to add some development-specific tooling, but that should sit on top of what we use in production.
The resulting `Dockerfile` might look something like this:
```dockerfile
# Dockerfile
FROM node:18-bookworm-slim AS frontend
...

FROM python:3.9 AS backend-production
...

# A new stage for our development tooling
FROM backend-production AS backend-development
# Install development tooling here.
...
```
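What the development stage installs depends on the project. A sketch, assuming the dev dependencies live in a hypothetical `requirements-dev.txt`:

```dockerfile
# Dockerfile -- sketch of the development stage.
# `requirements-dev.txt` is an assumed file name.
FROM backend-production AS backend-development
COPY requirements-dev.txt .
RUN pip install --no-cache-dir -r requirements-dev.txt
```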
Using Docker Compose also adds the ability to define development-specific configuration. For example, we might want to mount the whole repo directory into the container so that the code in the container updates as we work on it. Otherwise we would have to rebuild the image and restart the container after every change. That would be way too annoying.
In the `docker-compose.yml` this would look something like this:
```yaml
# docker-compose.yml
services:
  web:
    build:
      context: .
      target: backend-development
    ...
    volumes:
      # This is mounting the repo into the container.
      - .:/app
```
If we now run our app (`docker compose up`) and visit it in the browser (e.g. at http://localhost:8000), our site will look broken. None of the frontend assets can be found. This is because of our bind mount (`.:/app`).
Because the frontend assets have not been built on the host, but only during the image build, they are not in the repo directory on the host. When we now mount the repo (`.`) into the `/app` directory in the container, the frontend assets that were already in the container are gone. They are overridden by the repo on the host, which does not contain the built assets.
We can use volumes to protect directories that were built into the image from being overridden when the repo is mounted into the container. Assuming that, in the `web` container, the built assets have been copied to `/app/lpld/static/build`, we can protect them like so:
```yaml
# docker-compose.yml
services:
  web:
    build:
      context: .
      target: backend-development
    ...
    volumes:
      - .:/app
      - /app/lpld/static/build
```
This creates an anonymous volume. Because of the anonymous volume (`/app/lpld/static/build`), the directory in the container is not overridden by the bind mount of the repo (`.:/app`), even though the bind mount is listed first. During container start, the volume is created with the content already present in the image.
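We can verify that the anonymous volume preserved the assets by listing the directory inside the running container (service name and path as in the compose file above):

```shell
# The built assets should still be present despite the bind mount:
docker compose exec web ls /app/lpld/static/build
```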
If we now run our app (`docker compose up`) and visit it in the browser (e.g. at http://localhost:8000), our site should be fixed and the static assets should be served. Depending on the config and the command run in the container on start-up, we might need to collect the static files again (`docker compose exec web ./manage.py collectstatic`). When running in debug mode with the Django dev server, this should not be necessary, because static files are served directly.
Now, during development, we usually also want to run the frontend tooling (some build process triggered by a file watcher) and have the updated assets served by the web app immediately. Since we already have a build stage that installs the frontend dependencies and builds all the assets, we don't need to set all of this up again. We can run a second container, based on the `frontend` stage, next to the `web` container.
```yaml
# docker-compose.yml
services:
  web:
    ...
  frontend:
    build:
      context: .
      target: frontend
    ...
```
This will start a second container (`frontend`) when we start the stack with `docker compose up`. That is nice. But in this configuration it is not very useful. The code available in the container is only what was originally built into the image. This means our changes won't be reflected in the container. So the next thing we need to do is mount the repo into this container as well.
```yaml
# docker-compose.yml
services:
  web:
    ...
  frontend:
    build:
      context: .
      target: frontend
    volumes:
      - .:/home/node/app
    ...
```
If we now start the container and run the frontend tooling inside it (e.g. `npm run start`), we will likely see errors akin to `sh: 1: run-p: not found`. The specific message depends on the watcher, build, or run script, but the gist will be the same: some tool could not be found.
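We can confirm what is missing by looking inside the running container. Locally installed npm binaries like `run-p` live in `node_modules/.bin`; with only the bare bind mount in place, that directory is gone:

```shell
# With the plain bind mount, this directory is empty or absent:
docker compose exec frontend ls node_modules/.bin
```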
That is weird, because all the dependencies were installed in the image. But no matter how often we rebuild the frontend image, our scripts won't work. The culprit, once again, is the bind mount of the repo directory into the container (`.:/home/node/app`). The issue is, again, that we are overriding what was written into the container image during the build with what is present on the host. Because we never installed the dependencies on the host, there is no `node_modules` directory on the host. This means that when we mount the repo directory from the host into the container, we also remove the `node_modules` directory from the container. This is why our scripts can't find the tools that were installed into the image during the build.
Again, we can use an anonymous volume to protect the directory in the container image from being overridden when the repo is mounted.
```yaml
# docker-compose.yml
services:
  web:
    ...
  frontend:
    build:
      context: .
      target: frontend
    volumes:
      - .:/home/node/app
      - /home/node/app/node_modules
    ...
```
With this in place, we should be able to start the stack and run our tooling successfully, because all the installed dependencies are still present. However, if we now try to work on our frontend code, we will soon notice that the changes we make on the host are not reflected in our app.
The `web` container keeps serving the old files, although we have mounted our host code into the `frontend` container and are running the tooling successfully. What we are missing is the connection from the `frontend` container to the `web` container. The new assets are built in the `frontend` container, but they never show up in the `web` container to be served. Now, this is strange, because we are already mounting the whole repo directory from the host into both containers. But because of the anonymous volume used to protect the initially built assets in the `web` container, the changes we make to the assets in the `frontend` container never arrive in the `web` container.
To create the missing connection between the containers, we can use a named volume. Named volumes, as opposed to anonymous volumes, can be shared between containers. Named volumes still protect data from the image against being overridden by directory mounts. We create the named volume in a new top-level `volumes` section. Then we update the anonymous volume in the `web` container to a named volume in read-only mode, and we add the same named volume in read-write mode to the `frontend` container.
```yaml
# docker-compose.yml
services:
  web:
    build:
      context: .
      target: backend-development
    ...
    volumes:
      - .:/app
      - frontend_assets:/app/lpld/static/build:ro
  frontend:
    build:
      context: .
      target: frontend
    volumes:
      - .:/home/node/app
      - /home/node/app/node_modules
      - frontend_assets:/home/node/app/lpld/static/build:rw
volumes:
  frontend_assets:
```
That's it. If we now run the stack with `docker compose up`, run our frontend tooling in the `frontend` container, and edit some frontend source files, we should see the changes reflected in what is served by the `web` container.
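One caveat worth knowing: Docker only seeds an anonymous or named volume with the image's content when the volume is first created. After rebuilding the image (e.g. after adding an npm dependency), the old volume contents stick around, so the volumes need to be recreated:

```shell
# Remove the containers together with their volumes, then rebuild and
# start fresh so the volumes are re-seeded from the new image:
docker compose down --volumes
docker compose up --build
```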
To recap: when we mount our repo from the host into a container, we override everything (e.g. built frontend assets) that was baked into the image. We can use an anonymous volume to protect directories from being overridden by the mount. And we can use named volumes to allow updates made in one container (`frontend`) to show up in another container (`web`).