Improve your Docker game with these 5 best practices

Reading Time: 3 minutes


Hi everyone! In this blog, we’re going to discuss a few best practices for using Docker in a production environment. Docker is a well-established technology; it has become a standard and is familiar to almost everyone.

However, not everyone is using Docker according to the best practices. So, in this blog, we’re going to discuss a few things that can change your game.

1. Use Official Docker Images as Base Images

So, the first best practice is to use an official, verified Docker image as the base image whenever one is available. For example, suppose we are running a Node.js application and want to package it as a Docker image.


So, instead of taking a base operating system image and installing Node.js, npm, and any other tools the application requires, we should use the official node image. This not only makes our Dockerfile cleaner but also lets us build on an image that is already verified and constructed according to best practices.
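As a minimal sketch, assuming a typical Node.js project with a package.json and an index.js entry point:

```dockerfile
# Instead of installing Node.js by hand on a generic OS image:
#   FROM ubuntu
#   RUN apt-get update && apt-get install -y nodejs npm
# start from the official node image directly:
FROM node

WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["node", "index.js"]
```

The official image already bundles a correctly configured Node.js and npm, so the Dockerfile stays focused on the application itself.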

2. Use specific image versions

Now, if we build our application image from this official node image without specifying a tag, it will default to the latest tag. That can cause problems: different builds may pull different image versions, and a new version might break things or cause unexpected behavior. The latest tag is essentially unpredictable. So, instead of relying on it, we should always pin the version. The more specific, the better.
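For example, the node image can be pinned at increasing levels of specificity (the exact tags available depend on what is published on Docker Hub):

```dockerfile
# FROM node            -> implicitly :latest, unpredictable
# FROM node:18         -> major version pinned
# FROM node:18.9.1     -> exact version pinned (preferred)
FROM node:18.9.1
```

With an exact tag, every rebuild of the image uses the same base, so behavior stays reproducible across machines and over time.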


3. Use small-sized Official Images

There can be multiple official images, not only with different version numbers but also built on different operating system distributions. So, how do we choose? If the image is built on a full-blown OS distribution like Ubuntu or CentOS, which comes with a bunch of tools and system utilities packed in, the image size will obviously be larger. But we don’t need most of those tools in our application image.

In contrast, with smaller images we need less storage space in our image repository as well as on the deployment server, and, of course, we can transfer the images faster when pulling and pushing them. So, the best practice here is to use an image based on a leaner, smaller OS distro such as Alpine, a popular, security-oriented, and lightweight Linux distribution.
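Most official images, including node, publish Alpine-based variants. Combining this with version pinning gives a base that is both small and reproducible:

```dockerfile
# Alpine-based variant: same Node.js runtime, much smaller base OS
FROM node:18.9.1-alpine
```

The Alpine variant of the node image is typically several hundred megabytes smaller than the Debian-based default, which adds up quickly across many services and deployments.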


4. Optimize Caching Image Layers

Docker images are built from a Dockerfile. Each command in a Dockerfile creates an image layer, and each layer gets cached by Docker. So, when we rebuild our image, if nothing has changed in a layer or in any layer preceding it, it is reused from the cache, which makes the build faster. The best practice here is to order our Dockerfile commands from least to most frequently changing, so that whenever the image is rebuilt, only the changed layer and the layers following it are executed again.
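A common sketch of this ordering for a Node.js project (assuming a standard package.json and package-lock.json layout): dependencies change rarely, source code changes often, so we copy and install them in that order:

```dockerfile
FROM node:18.9.1-alpine
WORKDIR /app

# 1. Copy only the dependency manifests first (they change rarely)
COPY package.json package-lock.json ./

# 2. Install dependencies; this layer is reused from cache
#    as long as the manifests above are unchanged
RUN npm ci

# 3. Copy the application source last (it changes often);
#    on a code change, only this layer and those after it are rebuilt
COPY . .

CMD ["node", "index.js"]
```

If the source were copied before the install step, every code change would invalidate the dependency layer and force a full npm install on each rebuild.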

5. Use .dockerignore to explicitly exclude files and folders

Now, usually when we build the image, we don’t need everything in the project to run the application inside the container. For example, we don’t need auto-generated folders like target or build. So, how do we exclude these files and folders from our application image? The best practice here is simply to place a .dockerignore file in the root directory and list all the files and folders we want to ignore in it. This reduces the image size and prevents unintended exposure of files that shouldn’t end up in the image.
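A minimal example of what such a .dockerignore might look like for a Node.js project (the exact entries depend on your project):

```
node_modules
build
target
.git
.env
*.log
Dockerfile
.dockerignore
```

Entries like .git and .env matter beyond size: they keep version-control history and local secrets out of the image and out of anything built with `COPY . .`.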

That’s all for now. I hope this article gave you some useful insight into Dockerfile best practices. Please feel free to drop a comment, question, or suggestion. Thanks.

Written by 

Riya is a DevOps Engineer with a passion for new technologies. She is a programmer by heart trying to learn something about everything. On a personal front, she loves traveling, listening to music, and binge-watching web series.