Docker helps keep your development and production environments consistent, allowing swift migration of apps from development to production. Since I wanted to migrate my blogs from a traditional web host to the AWS cloud, I found containerizing my web applications with Docker to be the easiest way.
Since all my blogs run on WordPress, I needed to create a LAMP (Linux, Apache, MySQL, PHP) stack. Of course, I could have simply installed and configured all of these on my EC2 instance, but what if I wanted to migrate my blogs to another instance in the future? Moreover, I develop primarily on my Windows 10 workstation, and moving my apps from there to the cloud would carry the overhead of re-optimizing them and making configuration changes to the infrastructure to ensure everything runs smoothly. Using Docker ensures a smooth migration from my dev environment to the cloud.
One way to deploy your web app with Docker is to create a single Docker image with all the needed software installed. But this is not advisable, since you would have multiple services running within the same container, making your app monolithic and tough to scale horizontally. Furthermore, a container exits when its main process is killed, so running multiple processes in one container can be unwieldy.
Here is where Docker Compose comes to our rescue. With Docker Compose, we can run a network of services, with each service running in its own container. Such containers are minimal and simple, allowing us to separate the concerns of our app. For instance, you can have Apache running in one container, PHP in another, and MySQL in a third. These containers can all be networked together using Docker Compose, allowing the services to interact with one another.
To get started, we need to create a YAML file named docker-compose.yml. This will hold all the configuration needed to spin up our services. To build a LAMP stack, we need only three services, namely Apache, PHP, and MySQL.
There are Docker images that bundle Apache and PHP together, but as mentioned above, running two major services in one container is not advisable. So, here, I run Apache and PHP separately. The Apache server will proxy requests for PHP files to the PHP service.
We start by specifying the version of the compose file format. I am going to use the latest version, i.e., 3.7.
version: '3.7'
Then we list the services we need under the services attribute.
services:
  apache:
  php:
  mysql:
You can use any name you like for the services. Then, for each service, I provide the details needed to build its image. The first is the build attribute, which tells Docker Compose the directory containing the Dockerfile used to build the image. Create three directories, one for each service, and assign their paths to the build attributes.
services:
  apache:
    build: './apache'
  php:
    build: './php'
  mysql:
    build: './mysql'
Next, I need to map the ports of the host computer (the computer running Docker) to the ports of the containers. An easy way to understand this is to think of a container as a virtual machine: if we want a user to reach our web server running in Docker, the host machine should route requests arriving on its port 80 to port 80 of our container. To do this, we use the ports attribute. Since only the Apache service needs to be accessible from the outside, we map ports for that container only. If you want to access your MySQL database with external clients, you would have to map a port for that container as well, but exposing your database to the outside world is generally not advisable. If you use TLS (HTTPS) on your website, you need to expose port 443 as well.
services:
  apache:
    build: './apache'
    ports:
      - 80:80
      - 443:443
The next step is to map our volumes to the containers.
You have to get your code into the container. One way of doing this is to instruct Docker (through the Dockerfile, which will be discussed later) to copy your files into a directory inside the container. The problem with this method is that the data is not persisted: if your files are modified during the life of the container, the changes vanish once the container is removed. It is also difficult to access the data from outside the container.
Docker’s ability to map volumes makes things easier. Just like we do when deploying VMs, we can map a local directory to a container. Any files inside the mapped directory are accessible to both the container and the host machine, and the data is persisted.
Use the volumes attribute to map the volumes. The portion before the colon specifies the directory on your host machine, and the portion after it specifies the directory in the container you want to map it to. Create a folder where your static files will be stored and map it to /usr/local/apache2/htdocs. It is from this directory that Apache serves your files.
apache:
  build: './apache'
  ports:
    - 80:80
    - 443:443
  volumes:
    - ./public_html:/usr/local/apache2/htdocs
    - ./cert/:/usr/local/apache2/cert/
Our PHP service also needs access to our files in order to interpret the PHP scripts. So, we need to map the same volume to the PHP container as well.
php:
  build: './php'
  volumes:
    - ./public_html:/usr/local/apache2/htdocs
    - ./tmp:/usr/local/tmp
Since I needed access to the tmp folder from PHP, I mapped that as well. Do this only if you need it.
MySQL stores its databases in the /var/lib/mysql folder. Persisting the database data is also important, so I mapped it to a local directory too.
mysql:
  build: './mysql'
  volumes:
    - ./database:/var/lib/mysql
Next, we will create a network for our containers so that we can decide which containers can communicate with one another. By default, all the services join one common network. In our case, we would want PHP and MySQL to be able to communicate with one another. Apache should be able to communicate with both PHP and MySQL. But we need to expose only Apache to the external world. So, we need two networks. One that exposes Apache to the outside world and another more restricted network through which Apache, MySQL, and PHP can communicate with one another. We will name them “frontend” and “backend”.
First, we need to define the two networks using the networks attribute.
networks:
  backend:
  frontend:
Note that this networks attribute shouldn’t go under services; instead, it should be a top-level attribute.
We don’t need to specify anything other than just listing the networks but if you want to configure the networks further, you can do so here.
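For example, here is a minimal sketch of what further configuration could look like; the bridge driver is already Compose’s default for a single-host setup, so this is purely illustrative:

networks:
  backend:
    driver: bridge
  frontend:
    driver: bridge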
Next, we can add our containers to the networks. Use the networks attribute within each service to assign them to networks.
services:
  apache:
    build: './apache'
    restart: always
    ports:
      - 80:80
      - 443:443
    networks:
      - frontend
      - backend
    volumes:
      - ./public_html:/usr/local/apache2/htdocs
      - ./cert/:/usr/local/apache2/cert/
    depends_on:
      - php
      - mysql
  php:
    build: './php'
    restart: always
    networks:
      - backend
    volumes:
      - ./public_html:/usr/local/apache2/htdocs
      - ./tmp:/usr/local/tmp
  mysql:
    build: './mysql'
    restart: always
    networks:
      - backend
    volumes:
      - ./database:/var/lib/mysql
As you can see, Apache is connected to both the frontend and backend networks, while MySQL and PHP are connected only to the backend network. This isolates MySQL and PHP from the external environment while allowing Apache to communicate with both the external environment and the isolated one.
Another attribute you can see is the restart attribute. This tells Docker to restart the services should they stop for any reason.
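Putting it all together, the top-level structure of the finished docker-compose.yml looks like this (the service bodies are elided here since they were shown in full above):

version: '3.7'

services:
  apache:
    # ... as shown above
  php:
    # ... as shown above
  mysql:
    # ... as shown above

networks:
  frontend:
  backend: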
Now that our Docker Compose file is ready, we need to create the Dockerfiles for each of our services.
For Apache, create a file named Dockerfile (no extension) inside the apache directory and insert the following.
FROM httpd:2.4.35-alpine
RUN apk update; \
apk upgrade;
COPY ./apache.conf /usr/local/apache2/conf/httpd.conf
EXPOSE 80
EXPOSE 443
We pull the Apache (httpd) image from the Docker repository and update and upgrade its Alpine Linux packages. Then, we copy our Apache configuration file into the conf directory. Finally, we expose ports 80 and 443 to the outside world.
For MySQL, do the following.
FROM mysql:8.0.13
ENV MYSQL_ROOT_PASSWORD <password>
COPY my.cnf /etc/mysql/
Here we set the environment variable MYSQL_ROOT_PASSWORD, which will be used to set the password of the MySQL root user. Replace <password> with your own password.
For PHP, we do almost the same thing, except that we also install some PHP extensions needed for a WordPress installation. Depending on your use case, you may have to install other extensions, or perhaps none at all.
FROM php:7.3-rc-fpm-alpine
RUN apk update; \
apk upgrade;
RUN docker-php-ext-install mysqli
RUN apk add freetype libpng libjpeg-turbo freetype-dev libpng-dev libjpeg-turbo-dev
RUN docker-php-ext-install -j$(nproc) iconv
RUN docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/
RUN docker-php-ext-install -j$(nproc) gd
COPY php.ini /usr/local/etc/php/php.ini
The RUN instruction lets you run a Linux command during the image build. Here, we install the mysqli extension to allow PHP to interact with our MySQL server.
Now, the only step left is to configure our Apache, PHP, and MySQL servers. In Apache, we need to proxy requests for PHP files to the PHP service. To do so, we would normally need the PHP container’s IP address, but since we are using Docker Compose, containers on the same network can reach each other by their service name. So, we can use the service name as a hostname for the PHP container. To proxy the requests, add the following line inside the virtual host.
ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://php:9000/usr/local/apache2/htdocs/$1
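For context, here is a minimal sketch of the virtual host this line goes into; the ServerName is a placeholder, and the sketch assumes mod_proxy and mod_proxy_fcgi are loaded in httpd.conf:

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so

<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /usr/local/apache2/htdocs

    # Requests for .php files are handed to the php service on port 9000.
    # "php" resolves here because both containers share the backend network.
    ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://php:9000/usr/local/apache2/htdocs/$1

    <Directory /usr/local/apache2/htdocs>
        DirectoryIndex index.php index.html
        Require all granted
    </Directory>
</VirtualHost>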
You can simply copy the default Apache conf file, make the necessary changes, and then use the COPY instruction to copy the file into the container, as shown above. You can obtain my configuration files from my GitHub repo.
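If you enable HTTPS, the same conf file also needs SSL directives pointing at the certificate files in the cert/ directory mapped earlier. The directives below are the ones used in my apache.conf; the port 443 virtual host wrapper is only a rough sketch, and it assumes mod_ssl is loaded. For local development, you can comment these out or generate a self-signed certificate with OpenSSL:

<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile /usr/local/apache2/cert/certificate.crt
    SSLCertificateKeyFile /usr/local/apache2/cert/private.key
    SSLCertificateChainFile /usr/local/apache2/cert/ca_bundle.crt
</VirtualHost>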
For MySQL, we need to specify mysql_native_password as the default authentication plugin in the my.cnf we copy into the image, since MySQL 8’s newer default authentication method is not supported by the mysqli client used here.
default-authentication-plugin=mysql_native_password
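For reference, a minimal my.cnf can be as small as this; the [mysqld] section header is the standard place for server-side options:

[mysqld]
default-authentication-plugin=mysql_native_password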
Voila! Now, we can hit the ground running. Run docker-compose up to build the images and start the containers. The images are only built the first time; on subsequent runs, Docker Compose reuses the existing images and simply starts the containers.
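The commonly used commands look like this:

# build (if needed) and start everything defined in docker-compose.yml
docker-compose up

# run in the background and force a rebuild after editing a Dockerfile
docker-compose up -d --build

# stop and remove the containers and networks
docker-compose down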
Comments
Hi! I’m kinda stuck on one step.
Near the end, it says “To proxy requests, add the following line inside the virtual host.”
Where is this line supposed to be placed? I don’t understand what “inside the virtual host” is :O
Thanks for the article, very useful to start digesting this docker-compose stuff ^^
Hi! This is supposed to be placed in the conf file in the apache server. You will be creating virtual hosts to serve your web site. You are supposed to place the line inside the virtual host. You can check this file out for more details. https://github.com/thivi/DockerLAMPStack/blob/master/apache/apache.conf
It starts from the 523rd line.
Thanks for the quick reply! 😇 I will go on then
Thanks a lot for this tutorial, clear and well detailed.
It was a way for me to test docker as I wanted to setup a quick lamp server.
You're welcome!
(newbie here) Thanks !
I ran into this:
ERROR: for apache Cannot start service apache:
…
merged/usr/local/apache2/htdocs\\\” caused \\\”not a directory\\\”\””: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
I think this error is caused by volume mapping. Are you sure you are mapping a DIRECTORY in your PC to /usr/local/apache2/htdocs? If you are using my code from the GitHub repo, make sure you have created a folder called public_html at the root (where the docker-compose.yml file is).
Hi,
I get this:
error: database is uninitialized and password option is not specified
Did you add a MySQL password?
Add this to the Docker file in the MySQL directory.
ENV MYSQL_ROOT_PASSWORD <password>, where <password> should be your password.
Check out this file for more detail: https://github.com/thivi/DockerLAMPStack/blob/master/mysql/Dockerfile
Thanks for replying:
Here is LAMP/mysql/Dockerfile:
FROM mysql:8.0.13
ENV MYSQL_ROOT_PASSWORD test123
COPY my.cnf /etc/mysql/
Hello,
Thanks for your tutorial – I’m using it to set up a WordPress stack.
I’ve also installed PHPMyAdmin as a container. I can access PHPMyAdmin but I’m able to login to both PHPMyAdmin and mysql as ‘root’ with no password. It seems that the password being set by MYSQL_ROOT_PASSWORD is not being respected. I’m also lacking any privileges as the root user to create any databases via PHPMyAdmin. I haven’t tried via the mysql CLI but presumably I would have the same problem there.
When I try to run mysql_secure_installation I am prompted to enter a new password but am then met with ” … Failed! Error: The MySQL server is running with the –skip-grant-tables option so it cannot execute this statement”. I have also tried to FLUSH PRIVILEGES but am met with “Table ‘mysql.user’ doesn’t exist”
Am I missing something? I appreciate PHPMyAdmin isn’t covered in your tutorial but this feels more like a mysql issue. Any help much appreciated!
Hi, thanks for the scripts. However, there is a little issue with SSL: I had to turn SSL off and comment out the lines below in the apache container to get it to work.
SSLEngine on
SSLCertificateFile /usr/local/apache2/cert/certificate.crt
SSLCertificateKeyFile /usr/local/apache2/cert/private.key
SSLCertificateChainFile /usr/local/apache2/cert/ca_bundle.crt
You need to create a certificate to put in the cert/ directory and make sure the file names match those in apache.conf. If you are just doing local development, then a self-signed certificate generated with OpenSSL is a simple solution.
Thank you! I'm transitioning to Docker for WP dev, and all the other tutorials I found ran a LAMP stack in the same container, which contradicted Docker's documentation on best practices. Your tutorial is well structured in outlining the proper way to implement a LAMP stack with Docker and showed me how Docker works for this. Really grateful for you!
Glad that you found it useful. Thank you.