Day 3 of #40DaysOfKubernetes: Docker Multistage Builds and Best Practices
Welcome to Day 3 of #40DaysOfKubernetes! Today, we delved deep into Docker multistage builds and explored essential Docker commands and best practices for building efficient Docker images. Let’s walk through what we accomplished step by step:
Cloning a GitHub Repository
To begin our journey into Docker multistage builds, we cloned a GitHub repository containing an application suitable for demonstrating this technique, the todoapp-docker to-do application:
git clone https://github.com/piyushsachdeva/todoapp-docker.git
Docker Multistage Build
Docker multistage builds are a powerful feature that lets us create smaller, more efficient Docker images by using multiple FROM statements in a single Dockerfile. Each stage performs a specific task, such as installing dependencies, compiling code, or packaging the application. Let’s create a Dockerfile using multistage builds for our example application:
Dockerfile
FROM node:18-alpine AS installer
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx:latest AS deployer
COPY --from=installer /app/build /usr/share/nginx/html
Explanation:
- Stage 1 (installer):
  - Uses the node:18-alpine image as the base image.
  - Sets the working directory to /app.
  - Copies package.json and package-lock.json to the working directory and installs dependencies using npm install.
  - Copies the rest of the application code.
  - Runs the build command (npm run build in this case) to compile the application into static files.
- Stage 2 (deployer):
  - Uses nginx:latest as the base image, so the final image contains only Nginx and the built static files.
  - Copies the built files from the installer stage (/app/build) into the Nginx web server’s default document root (/usr/share/nginx/html).
  - The official Nginx image already exposes port 80 (the default port for HTTP traffic) and starts Nginx in the foreground (nginx -g 'daemon off;'), so no extra EXPOSE or CMD instructions are needed here.
This approach separates the build environment (Node.js) from the runtime environment (Nginx), resulting in a smaller and more secure final image.
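If you prefer the port and startup command to be stated explicitly, and the Nginx version pinned rather than floating on latest, the second stage could be written as follows (a sketch; the exact Nginx tag is illustrative):

```Dockerfile
# Stage 2, with an explicitly pinned base image, port, and command
FROM nginx:1.25-alpine AS deployer

# Copy the compiled static files from the build stage
COPY --from=installer /app/build /usr/share/nginx/html

# Document the HTTP port and keep Nginx in the foreground so the
# container's main process does not exit
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

The EXPOSE and CMD lines duplicate what the base image already provides, but spelling them out makes the image's contract obvious to anyone reading the Dockerfile.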
Docker Image Management
Today, we also explored essential Docker commands for managing images and containers:
Removing Docker Images
Over time, unused Docker images can accumulate and consume disk space. To remove them, we used the docker image rm command:
docker image rm todo
Here, todo is the image name; you can also pass the image ID (shown by docker images) of the image you want to remove.
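Before removing anything it helps to see what is taking up space, and rebuilds often leave behind dangling (untagged) layers that can be pruned in one go. A quick sketch, using the todo image name from the example above:

```shell
# List local images with their repository, tag, ID, and size
docker images

# Remove a specific image by name or ID
docker image rm todo

# Remove all dangling images (untagged <none> layers from old builds)
docker image prune -f
```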
Viewing Docker Logs
When debugging or monitoring a Docker container, viewing logs is crucial. We used the docker logs command to display logs from a running container:
docker logs ac8e57738756
Replace ac8e57738756 with the ID or name of your Docker container.
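For a long-running container the full log can be overwhelming; docker logs supports flags to limit and stream the output. A few useful variations (using the container ID from our example):

```shell
# Show only the last 50 lines
docker logs --tail 50 ac8e57738756

# Stream new log lines as they arrive (Ctrl+C to stop)
docker logs --follow ac8e57738756

# Show logs from the last 10 minutes, with timestamps
docker logs --since 10m --timestamps ac8e57738756
```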
Executing Commands in a Container
Sometimes, you may need to execute commands inside a running container. The docker exec command allows us to do this:
docker exec -it ac8e57738756 sh
This command opens an interactive shell (sh in this example, since lightweight images often lack bash) inside the container ac8e57738756.
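docker exec can also run a single command without opening an interactive shell, which is handy for quick checks. For example, to verify that the built files actually landed in Nginx's document root:

```shell
# Run a one-off command inside the container; no -it needed
docker exec ac8e57738756 ls /usr/share/nginx/html
```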
Inspecting Docker Objects
To gain detailed information about Docker objects such as containers, images, volumes, and networks, we used the docker inspect command:
docker inspect ac8e57738756
Replace ac8e57738756 with the ID or name of the Docker object you want to inspect.
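docker inspect prints a large JSON document; the --format flag takes a Go template that extracts just the field you need. Two common examples (the field paths are standard docker inspect output for containers):

```shell
# Print only the container's IP address on the default bridge network
docker inspect --format '{{.NetworkSettings.IPAddress}}' ac8e57738756

# Print the image the container was created from
docker inspect --format '{{.Config.Image}}' ac8e57738756
```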
Exploring Docker Documentation and Best Practices
To ensure we followed best practices for Docker image creation, we referred to the Docker documentation. Here are some key best practices we focused on:
- Avoiding Unnecessary Packages: Installing only necessary packages in your Docker images reduces image size and potential vulnerabilities.
- Sorting Multi-line Arguments: Sorting multi-line arguments alphanumerically (for example, package lists in a RUN instruction) improves readability and helps avoid duplicates, while ordering Dockerfile instructions so that rarely-changing steps come first leverages Docker’s layer caching to speed up builds.
- Pinning Base Image Versions: Specify exact versions for base images (e.g. FROM node:18-alpine rather than FROM node:latest) to ensure consistency and reproducibility across environments.
- Regularly Rebuilding Images: Frequently rebuilding Docker images ensures they stay updated with the latest security patches and improvements.
Implementing these practices not only improves security and efficiency but also streamlines the development and deployment processes.
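One closely related practice worth mentioning: a .dockerignore file keeps unnecessary files (installed dependencies, previous build output, VCS metadata) out of the build context, which speeds up builds and avoids accidentally copying them into the image with COPY . . — a minimal sketch for our Node.js example (entries are illustrative):

```
node_modules
build
.git
*.log
Dockerfile
```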
Summary
Day 3 was a productive day focused on Docker multistage builds, mastering essential Docker commands for image and container management, and adopting best practices for Docker image optimization. By the end of the day, we successfully built a multistage Docker image for our example application, managed Docker resources effectively, and reinforced our understanding of creating lean and secure containers.
Reference: https://youtu.be/ajetvJmBvFo?si=U5R4-wrAirzFim6E