Containerizing a React + Node.js App using Docker (and NGINX)

Will Jones
8 min read · Jul 7, 2021

Google Kubernetes Engine, or GKE, is a fully managed Kubernetes service that allows you to deploy highly scalable and available applications, either through the Kubernetes and gcloud CLIs or through the GCP (Google Cloud Platform) console.

This and the following article will run through how to deploy a client/server web app on GKE, using Docker to first containerize the application and Nginx to serve the front-end content. I’ll assume that if you are reading this, you understand at least what Kubernetes and Docker are, and why you would use them to deploy your app. A prerequisite is also that you have Docker installed locally on your machine, as well as Kubernetes and kubectl. What I really want to do is discuss how it’s all used!

How are we going to do this?

The approach I went with, and in my opinion the most flexible, is to containerize the React code and the Node.js code separately. If your server were to experience an interruption or go down altogether, you still want users to at least be able to access the browser functionality of your site while the problem is being fixed. And while all the code will essentially live in the same Kubernetes cluster, the images that containerization produces are completely independent of each other!

Dockerfile and Dockerignore

Dockerfile

A Dockerfile is a file that contains all the commands for building a Docker image from your code repository. A Dockerfile has no extension and generally sits at the root of the code that you want containerized.

Dockerfiles are quite readable, using commands such as FROM, COPY, RUN, EXPOSE, etc., and typically rely on knowledge of basic Linux/UNIX terminal commands. Your Dockerfile procedure will basically be: set up a Linux image that everything can run in, create a working directory for the code to go in, copy over your package.json file so all the dependencies are ready to go, copy over the rest of the React files, run the yarn/npm build, and finally configure Nginx to serve the files. The Nginx port will be exposed to the Kubernetes cluster so everything communicates nicely.

Dockerignore

The .dockerignore file is basically the Docker version of a .gitignore file. The same way .gitignore tells Git what not to track, .dockerignore tells Docker what not to copy into the image. What we add to this file will be covered below!

Containerizing the Server-Side Code (Node.js + Express)

Using Docker to produce a container image of the Node.js back-end

In the root of your server code (in most setups, the root of a folder called server), create a file called Dockerfile (no extension). Still in the root directory, create a file called .dockerignore;

cd server
touch Dockerfile .dockerignore

Your server structure may now look something like this:

Includes node_modules, routes, package.json, server.js and the newly created Dockerfile and .dockerignore file

In the Dockerfile, we want to follow these steps;

  1. Setup the Linux image
  2. Set the working directory for the container
  3. Copy over our package.json file
  4. Install the dependencies into the container (from package.json)
  5. Tell the container what port to listen to
  6. Run the server code in the container
  7. Copy over the rest of the files from your local machine

Start with the Linux image, simply add to the first line of the Dockerfile;

FROM node:15.3.0-alpine3.10

It is convention that Dockerfile keywords are capitalized. This line tells Docker to base the image on a specific version of Node.js running on Alpine Linux, a minimal Linux distribution. We need this since our server runs in Node and the container needs a Linux environment to run it in.

Next create the working directory;

WORKDIR /usr/src/app/server

This tells the container that we want the code to go into the directory ‘/usr/src/app/server’. (The path you define is up to you!)

Next copy over the package.json file;

COPY package.json .

The ‘.’ represents the root of the working directory we just set, which makes sense since package.json always sits at the root of the code.

Next we install the dependencies.

RUN yarn

You may be using npm instead, in which case ‘RUN npm install’ would be appropriate.

Now we declare the port the container listens on so it can communicate with our server code. In my case, the server runs on port 5001;

EXPOSE 5001

Now we want to run the server code within the container. Since my package.json start script is node server.js the Dockerfile command is;

CMD [ "yarn", "start" ]

This tells the container to use yarn to run the start script in package.json.

Finally, we need to copy the rest of the local server files over into the container;

COPY . .

The first ‘.’ represents all the files at the level of the Dockerfile in the local directory, and the second ‘.’ tells Docker to copy everything into the working directory, alongside the package.json we copied over before.

The final Dockerfile should look similar to the following:

Dockerfile for Server (using Node.js)
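Assembled from the commands above (using my port 5001 and yarn; swap in your own port or the npm equivalents as needed), it reads:

```dockerfile
# Base the image on Node.js running on Alpine Linux
FROM node:15.3.0-alpine3.10

# Working directory inside the container
WORKDIR /usr/src/app/server

# Copy package.json first so dependencies can be installed
COPY package.json .

# Install dependencies
RUN yarn

# Port the server listens on
EXPOSE 5001

# Default command: run the package.json start script
CMD [ "yarn", "start" ]

# Copy the rest of the server code into the working directory
COPY . .
```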

.dockerignore

The .dockerignore file looks just like a .gitignore file, although the items we want to ignore are a little different. A typical .dockerignore looks like this:

.dockerignore file of a typical configuration

We don’t need node_modules, since we run yarn (or npm install) in the container after copying over package.json, so a fresh copy of node_modules is installed anyway.

We also don’t need Git (.git) or the .gitignore file, since the container isn’t using version control. If we don’t need it, we don’t want it in the container causing unnecessary bloat!

Depending on how you are managing secrets and environment variables, you may want to ignore any .env files you have as well. I have put .env in my .dockerignore since I add my variables to the Kubernetes Pod deployment as Kubernetes secrets, so there’s no need for a .env file.
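Putting those items together, the server .dockerignore is simply:

```
node_modules
.git
.gitignore
.env
```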

Containerizing the Client-Side Code (React)

Using Docker to produce a container image of the React front-end

For the client-side code we are doing the same thing, just with a few extra steps involved. First, create the Dockerfile and .dockerignore files in the root of your client directory.

The steps for a docker container running React code are as follows;

  1. Setup the Linux image
  2. Set the working directory for the container
  3. Copy over our package.json file
  4. Copy over the yarn.lock (or package-lock.json) file
  5. Add build dependencies such as node-gyp and Python for installing certain package.json dependencies
  6. Copy over the rest of the files from the local machine
  7. Run the build command
  8. Set up an Alpine image for Nginx
  9. Copy over the build/ folder into Nginx to serve through HTML
  10. Remove the default Nginx configuration
  11. Copy your Nginx configuration into the container
  12. Expose the Nginx port (default is 80)
  13. Run the Nginx web server

First, lets go over creating an Nginx configuration!

NGINX Configuration

Nginx is an open-source web server that can be used to create proxies, perform load balancing, and increase the security of your application. Since our server container IS a server, it doesn’t require additional tools to expose itself to a network; our front-end React code, however, needs Nginx to serve the build content to the web. In this case, we need our front-end container to communicate within the Kubernetes cluster (which itself will be exposed to the public internet) that we will eventually create!

In the root of your client directory, create a folder called nginx and inside create a file called nginx.conf;

cd client && mkdir nginx
cd nginx && touch nginx.conf

You can copy the contents of the gist below into your nginx configuration:

Nginx Configuration
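A minimal configuration along these lines does the job (the root path matches where the Dockerfile will copy the build output later):

```nginx
server {
  listen 80;

  location / {
    root /usr/share/nginx/html;
    index index.html index.htm;
    try_files $uri $uri/ /index.html;
  }

  error_page 500 502 503 504 /50x.html;

  location = /50x.html {
    root /usr/share/nginx/html;
  }
}
```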

We are creating a server body that listens to port 80 (default exposed Nginx port).

Then we create a location / block (lines 4–8) that serves /usr/share/nginx/html/index.html when a request for the root domain is made, for example yourdomain.com. This also works if you are using React Router and have routes such as yourdomain.com/page1 or yourdomain.com/home, since unmatched paths fall back to index.html.

error_page is defined on line 10 to serve a custom page for server errors (status codes 500, 502, 503 and 504, hence the name ‘50x’). Lines 12–14 tell Nginx where to serve the 50x error page from.

Now that we have a basic Nginx configuration complete, let’s move on to the Docker stuff!

So, firstly in our client Dockerfile we need the Alpine Linux image;

FROM node:15-alpine as build

We name this stage ‘build’ so we can refer to it later in the Dockerfile, when we want to take everything built in the Alpine image and serve it with Nginx.

Next, set up the working directory similar to how we did in the server container;

WORKDIR /usr/src/app/

Then, copy over the package.json file into the current directory using ‘.’;

COPY package.json .

Also copy over yarn.lock (or package-lock.json) depending on what you’re using;

COPY yarn.lock .

Next, we add a command that installs some build dependencies into the container, such as node-gyp and Python. We need these because some of our third-party NPM packages compile native code when installed. We then install the dependencies from our package.json file and remove the build tools we just added.

It is likely your local dev environment has all these things installed, but since the Alpine Linux image is bare bones, we need to give it a hand;

RUN apk add --no-cache --virtual .gyp \
python \
make \
g++ \
&& yarn \
&& apk del .gyp

Simply copy and paste the above into your Dockerfile. Without these tools, your container may struggle to install your NPM dependencies (and as with any React application not bootstrapped with create-react-app, there are a lot!)

Next, copy over the rest of the client files;

COPY . .

Now, we can build. I’m using Webpack to create the output bundle, with a package.json script of ‘build: webpack’, so;

RUN yarn build

That’s stage 1 of our client Docker container complete! Next, Nginx.

Let’s pull down the Nginx image;

FROM nginx:stable-alpine

Remember the ‘as build’ we set earlier? Now we use it;

COPY --from=build /usr/src/app/build /usr/share/nginx/html

We are telling Docker to take the build output we just created in the build stage (which resides in the directory /usr/src/app/build) and copy it into the Nginx directory /usr/share/nginx/html.

Now we remove the default Nginx configuration;

RUN rm /etc/nginx/conf.d/default.conf

In the official Nginx image, the default config is stored at /etc/nginx/conf.d/default.conf.

Now copy over our Nginx configuration we created earlier;

COPY nginx/nginx.conf /etc/nginx/conf.d

Now expose the Nginx default port 80;

EXPOSE 80

And finally run the Nginx server;

CMD [ "nginx", "-g", "daemon off;" ]

The final Dockerfile should look similar to the following:

Dockerfile for React client directory
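Assembled from the steps above (my paths and yarn commands; adjust for npm or your own layout), it reads:

```dockerfile
# Stage 1: build the React bundle
FROM node:15-alpine as build
WORKDIR /usr/src/app/
COPY package.json .
COPY yarn.lock .
# Install build tools for native dependencies, install packages, then clean up
RUN apk add --no-cache --virtual .gyp \
    python \
    make \
    g++ \
    && yarn \
    && apk del .gyp
COPY . .
RUN yarn build

# Stage 2: serve the build output with Nginx
FROM nginx:stable-alpine
COPY --from=build /usr/src/app/build /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d
EXPOSE 80
CMD [ "nginx", "-g", "daemon off;" ]
```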

.dockerignore

The .dockerignore will be very similar to the server one from before, except if you have a build/ directory already in your client code, you don’t want to copy that over, so:

.dockerignore for client docker container
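Putting it together, the client file has the same entries as the server one plus the build directory:

```
node_modules
.git
.gitignore
.env
build
```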

Finally Building the Docker Images

Assuming you have Docker installed, we can now use some basic CLI commands to build the images. Making sure you are in the same directory as your Dockerfile;

docker build -t [image-name]:[image-version] .

The image name and image version can be anything you want. The ‘-t’ flag stands for tag, as we are tagging the build with a name and version. Run this command from both the server and client directories.

You can run,

docker images

and this will list all the images you have built.

If you made it this far, I hope you found this article helpful! 😄 💻

In the next article, we will create a Kubernetes cluster on GKE and deploy our client and server images to the cluster, so stay tuned!
