It’s no secret that we are big fans of Docker during our daily development work. It’s still one of the easiest ways to ensure a common working environment when developing locally and avoid the “It works on my machine” arguments. Even Laravel ships with a default Docker-based environment called Sail these days.
But there is still a big difference between using Docker locally and running it in production. The most popular way to do that is Kubernetes, and we’ve written about that before.
One of the things I wanted to learn more about in 2021 was how I could deploy a Laravel application, packaged as a Docker image, to a “production” server. However, I didn’t want to jump straight into Kubernetes, so I started with Docker Compose to keep things simple.
Building the Docker image
I wouldn’t call myself an expert on infrastructure related matters. I know my way around them but I don’t know all of the intricacies. Instead, I rely on other (more knowledgeable) people to handle those for me.
I already had some experience building Docker images for local development, in the pre-Sail days, and back then I used the images from webdevops/Dockerfile. They offer various Docker images that you can base your own images on, with a lot of things already pre-configured or easily customisable.
I started with an existing, slightly complex Laravel application that consists of an API but also contains a scheduler and a background worker (through Laravel Horizon).
The first step was to create a Dockerfile.prod in the repository. I started from the webdevops/php-nginx:8.2-alpine image, which already includes PHP CLI, PHP-FPM, and Nginx, and then added the installation steps I was used to from other deployments.
FROM webdevops/php-nginx:8.2-alpine
ENV WEB_DOCUMENT_ROOT=/app/public
ENV PHP_DISMOD=bz2,calendar,exif,ffi,intl,gettext,ldap,mysqli,imap,pdo_pgsql,pgsql,soap,sockets,sysvmsg,sysvsem,sysvshm,shmop,xsl,zip,gd,apcu,vips,yaml,imagick,mongodb,amqp
WORKDIR /app
COPY composer.json composer.lock ./
# --no-scripts: artisan is not copied in yet, so composer's post-install scripts cannot run at this point
RUN composer install --no-interaction --optimize-autoloader --no-dev --no-scripts
COPY . .
RUN php artisan optimize
RUN php artisan horizon:publish
# Ensure all of our files are owned by the same user and group.
RUN chown -R application:application .
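Not shown above, but worth pairing with the COPY . . step, is a .dockerignore file so local-only artefacts never end up in the image. A minimal sketch (the exact entries depend on your project) could be created like this:

# Illustrative .dockerignore; vendor is excluded because composer install runs inside the image anyway.
cat > .dockerignore <<'EOF'
.git
node_modules
vendor
storage/logs
.env
EOF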
With this, I was already able to build my image by running the following command. I’m using GitHub’s Container registry to store my image, so I prefix the image name with ghcr.io:
docker build --file Dockerfile.prod -t ghcr.io/bramdevries/laravel-example-server:latest .
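To actually get the image onto a server, it also needs to be pushed to the registry. With GitHub’s Container registry that is roughly the following; the token variable is just an illustration, any token with the write:packages scope will do:

# Authenticate against ghcr.io with a personal access token (illustrative).
echo "$GITHUB_TOKEN" | docker login ghcr.io -u bramdevries --password-stdin

# Push the freshly built image so a server (or CI job) can pull it later.
docker push ghcr.io/bramdevries/laravel-example-server:latest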
So far so good! Next up was running this image inside a container and having something visible in the browser. I created a separate docker-compose.production.yml file (to avoid conflicting with Sail’s default docker-compose.yml) that contained the following:
version: '3'
services:
  api:
    image: ghcr.io/bramdevries/laravel-example-server:latest
    build:
      context: .
      dockerfile: Dockerfile.prod
    env_file:
      - .env.production
    volumes:
      - ./storage:/app/storage
    ports:
      - "8000:80"
    networks:
      - app
  redis:
    image: redis:6
    volumes:
      - 'data.redis:/data'
    networks:
      - app
    healthcheck:
      test: [ "CMD", "redis-cli", "ping" ]
volumes:
  data.redis:
    driver: local
networks:
  app:
    driver: bridge
In this configuration, we’re adding a redis container that runs Redis, which we’ll use for Horizon, caching, and sessions. We’re also creating an api container that uses our newly created image and an external .env.production file to configure the environment variables needed by our application.
After running this with docker-compose -f docker-compose.production.yml up, our application is accessible on http://localhost:8000.
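In practice you will usually run the stack detached and tail the logs when something misbehaves; the standard docker-compose commands work as you’d expect:

# Start (or update) the stack in the background.
docker-compose -f docker-compose.production.yml up -d

# Follow the logs of the api container to see the nginx/PHP-FPM output.
docker-compose -f docker-compose.production.yml logs -f api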
Handling environment variables
The health-check endpoint listed several issues, one of which was the database not being accessible. The cause was that we run php artisan optimize in our Dockerfile.prod, which caches configuration values based on environment variables that, at build time, are not yet available.
One of the options was to include the variables in the image itself, but that would prevent it from being re-usable as it would contain sensitive credentials that I do not want to expose. So instead I looked through the documentation of the webdevops image and found an interesting section on provisioning.
What this comes down to is that the image exposes a couple of events that you can hook into; one of these is the entrypoint event, which is triggered when the container starts. All I had to do was add a shell script in the /opt/docker/provision/entrypoint.d/ directory to run the optimize command there instead.
I created a docker/php-nginx directory where I could keep all customisations of the image and added docker/php-nginx/provision/entrypoint.d/artisan.sh with the following contents:
#!/bin/bash
/usr/local/bin/php /app/artisan optimize
I then added this to my image through a COPY statement:
COPY docker/php-nginx /opt/docker
Now, when I start the container with docker-compose, the artisan optimize command runs and takes the environment variables from my .env.production file into account.
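If you want to convince yourself that the caching now happens at boot rather than at build time, you can look for the generated cache files inside the running container:

# config.php, routes-*.php, etc. are written to bootstrap/cache when the entrypoint runs.
docker-compose -f docker-compose.production.yml exec api ls -l /app/bootstrap/cache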
Afterwards, I also used this artisan.sh file to run php artisan migrate so that the database is updated whenever a new version of the image gets deployed.
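For completeness, the extended artisan.sh then looks roughly like this; note the --force flag, without it artisan asks for confirmation before migrating when the application runs in production:

#!/bin/bash
# Cache config, routes, and views now that the runtime environment variables are available.
/usr/local/bin/php /app/artisan optimize

# Run any pending migrations; --force skips the interactive confirmation in production.
/usr/local/bin/php /app/artisan migrate --force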
Running Horizon
So far, we have our API accessible in the browser, but we also want to run our Horizon process so our background jobs are handled correctly.
Again, this is something the webdevops people thought about: they give you the option to define additional services through Supervisor configuration files. All I had to do was create the docker/php-nginx/etc/supervisor.d/horizon.conf file:
[program:horizon]
command=/usr/local/bin/php /app/artisan horizon
process_name=%(program_name)s
startsecs = 0
autostart = true
autorestart = true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
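To check that Supervisor actually started Horizon, you can ask Horizon itself from inside the running container:

# Reports "Horizon is running." when the master supervisor process is up.
docker-compose -f docker-compose.production.yml exec api php artisan horizon:status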
Running the scheduler
With Laravel, scheduling is handled from within your application; all you need to do is run the php artisan schedule:run command every minute through cron. I found some documentation on GitHub on how to configure this, but I ran into problems because I was using an Alpine-based image. In the end, this is what worked.
First, I created the crontab file in docker/php-nginx/etc/cron/application:
* * * * * /usr/local/bin/php /app/artisan schedule:run
The name of the file is important: it has to match the name of the user that runs the supervisor processes (which is application by default).
Then, I had to add my own implementation of the cron service. This file is a copy of https://github.com/webdevops/Dockerfile/blob/master/docker/base/alpine/conf/bin/service.d/cron.d/10-init.sh with one change: it copies the crontab files to /etc/crontabs instead of /etc/cron.d. This is needed because Alpine images use a different cron implementation, which reads its crontabs from /etc/crontabs. So I recreated the file as docker/php-nginx/bin/service.d/cron.d/10-init.sh:
# Install crontab files
if [[ -d "/opt/docker/etc/cron" ]]; then
    mkdir -p /etc/crontabs/
    find /opt/docker/etc/cron -type f | while read CRONTAB_FILE; do
        # fix permissions
        chmod 0644 -- "$CRONTAB_FILE"
        # add newline, cron needs this
        echo >> "$CRONTAB_FILE"
        # Install files
        cp -a -- "$CRONTAB_FILE" "/etc/crontabs/$(basename "$CRONTAB_FILE")"
    done
fi
Once I did this, my scheduled commands ran as expected.
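If you want to double-check the plumbing, both the installed crontab and the crond process are visible from outside the container:

# The crontab file should have been copied to /etc/crontabs by the init script.
docker-compose -f docker-compose.production.yml exec api cat /etc/crontabs/application

# crond itself should appear in the container's process list.
docker-compose -f docker-compose.production.yml exec api ps aux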
Conclusion
In the end, I did succeed in building an image that contained my Laravel application and that I was able to run using docker-compose. However, I don’t consider this to be a truly production-ready image, as having one image that contains the web server, scheduler, and background workers goes against the Docker philosophy of one concern per container.
Combining everything in a single image would also make things considerably harder to scale. If I wanted to move from one to three containers to serve the API, every extra container would also bring an additional scheduler and Horizon process with it. While I’m sure there are ways to solve this (using environment variables or custom commands), it doesn’t feel like the best solution.
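To make that concrete: docker-compose can scale a single service with the --scale flag, but because everything lives in one image, each extra api replica would also run its own cron and Horizon worker (and the fixed 8000:80 port mapping would have to be dropped or changed before multiple replicas could even start):

# Hypothetical scale-out of the api service; every replica runs nginx, PHP-FPM, cron, and Horizon.
docker-compose -f docker-compose.production.yml up -d --scale api=3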
In addition to the above, for a production setup I would also rely on managed services for stateful resources such as Redis or a MySQL database. Most cloud providers, such as DigitalOcean or Amazon Web Services, offer those as part of their platform.
A solution to these problems would be using an orchestration tool such as Kubernetes or Docker Swarm. These are concepts I would like to explore further in 2022.