What is Docker?
Docker is a new approach to virtualization. If you already understand virtualization, feel free to skip the next section. If not, a basic grasp of virtualization will help you understand Docker.
Docker is an open source containerization platform that allows you to easily build, deploy and run applications in isolated environments called "containers". This helps isolate applications from the underlying operating system and ensures that they always work the same way, regardless of the environment in which they are run.
Docker containers use operating system-level virtualization technologies, such as Linux cgroups and namespaces, to isolate operating system resources, such as memory and processes. This allows you to create highly portable and lightweight execution environments, as containers do not require a full operating system or hypervisor to be installed.
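These namespaces aren't Docker magic; they're visible on any Linux host, where every process already belongs to a set of them. A quick illustration (plain Linux commands, not Docker ones):

```shell
# List the kernel namespaces the current shell belongs to.
# Docker creates a fresh set of these for each container.
ls /proc/self/ns

# Each entry resolves to a namespace identifier, e.g. pid:[4026531836]
readlink /proc/self/ns/pid
```

Two containers get different namespace identifiers, which is why their processes and network stacks can't see each other.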
Docker solves environment compatibility issues by making it easy to deploy applications on any Docker-compatible operating system. It also makes it easy to create development and test environments that match production, since developers can use the same containers used in production. Additionally, containers can help reduce infrastructure costs, as many containers can run on a single physical or virtual machine, rather than requiring a separate machine for each application.
What is virtualization?
Let's start with a metaphor: imagine you own a house. You have a friend who needs a place. You have a few options if you want to help your friend.
- Move your friend straight to your bedroom with you. This might get a little uncomfortable.
- Build a new home for your friend on your property. This is an expensive solution.
- Invite your friend to stay in the guest room. Now we're getting somewhere ...
The third option is good enough. You are able to help your friend without building him a new home, but at the same time keeping your lives separate. You will share some common resources like the kitchen and living room, but you can walk into your bedrooms and close the door for some privacy.
Virtualization is like setting up your friend in your spare bedroom. Imagine you want to run a web server on your computer, but you want to keep it separate from your operating system and applications. To do this, you can create a virtual machine that contains the web server. It works like a separate computer, but it uses your computer's processor and RAM. When you start the virtual machine, its entire operating system appears in a window inside your host operating system.
What's different about Docker?
Docker is a different way of doing virtualization. Where a typical virtual machine packages the operating system together with the running application, Docker shares as much as possible between the virtualized systems. This makes containers use fewer resources when they run and makes them easier to distribute to other developers or to your production environment.
Classic virtualization, such as that provided by hypervisors like VMware or Hyper-V, creates an isolated execution environment called a "virtual machine" (VM) in which a complete operating system runs. This allows you to run multiple operating systems simultaneously on a single physical machine, but it places significant overhead on system resources, as each VM requires its own memory, CPU, and disk space.
Docker, on the other hand, uses containerization to create isolated execution environments called “containers” that share the host machine's operating system kernel. This means that containers do not require a full operating system to be installed, but instead use shared host operating system resources. This allows you to create execution environments that are lighter and more portable than VMs, as containers can be easily moved between different physical or virtual machines without modification.
In summary, classic virtualization creates complete isolated execution environments through the use of hypervisors that create virtual machines, while Docker creates isolated execution environments through containerization, which share the kernel of the host operating system.
Why should developers use Docker?
Docker gives web developers some serious superpowers.
Easy sharing of development environments
If you and I are teaming up on a Node app, we want to make sure we both have Node installed, and that it's the same version, so that our environments are consistent. We could skip this and hope for the best, but that could cause problems that are difficult to track down. Libraries and our own code sometimes behave differently across different versions of Node.
The solution is to make sure we both have the same version of Node. But if we each already have other projects on our systems that require other versions of Node, we would probably want to install NVM, which lets us switch Node versions easily. We can then add a .nvmrc file to the root of the project specifying the version we've agreed on.
Each of us only has to do this once, and then we're set. To summarize, here's what we needed to do:
- Decide on a Node version.
- Install NVM.
- Install our chosen version of Node.
- Add a .nvmrc file to the project directory, pinning the chosen Node version.
- Launch the app.
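Steps 1 and 4 boil down to a single file in the repository. A minimal sketch, assuming the team settles on Node 18.17.0 (the version here is an arbitrary example):

```shell
# Pin the agreed-upon Node version for the project.
# With NVM installed, `nvm install` and `nvm use` will read this file.
echo "18.17.0" > .nvmrc
cat .nvmrc
```

Anyone cloning the project can then run `nvm use` in the project root to switch to the pinned version.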
It works, but it's a lot of work. We need to do most of this again for anyone else who wants to join us on this project. Even if we take all these steps, we cannot yet guarantee that the environment is the same for all developers. Things could break between developers running different operating systems or even different versions of the same operating system.
Docker solves all these problems by giving every developer the same development environment. With Docker, here's what we would do instead:
- Install Docker.
- Write a Dockerfile.
- Run docker build -t <image-name> . The image name can be anything you choose.
- Run docker run -p 3000:3000 <image-name>, using the same image name as in step 3. The -p option maps a container port to a local port; here, port 3000 on your computer is connected to port 3000 in the container.
This may not sound much simpler than the Node/NVM setup (and it really isn't). It does come with an advantage, though: you only need to install Docker once, regardless of your tech stack. Sure, you only need to install Node once (unless you need multiple versions), but when you're ready to work on an app on a different stack, you'll need to install all the software that stack requires. With Docker, you just write a different Dockerfile (or Docker Compose file, depending on the complexity of your app).
The Dockerfile itself is very simple: it's a text file named "Dockerfile", with no extension. Let's look at a Dockerfile you could use for a simple Node app.
```dockerfile
# This Docker image will be based on the Node 11.6 image
FROM node:11.6.0

# Work from /app inside the image
WORKDIR /app

# Install dependencies
COPY package*.json ./
RUN npm install

# Copy the node app from the host into the image at /app
COPY . .

# Expose port 3000 and start the app
EXPOSE 3000
CMD npm start
```
This Dockerfile is written for a Node app that listens on port 3000 and starts with the npm start command. Put it in your project repository, and onboarding new developers becomes simple and 100% consistent: every developer always gets the same environment.
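One companion file worth adding: a .dockerignore keeps host-specific artifacts out of the build context, so the COPY step doesn't drag your local node_modules into the image (the entries below are common conventions, not requirements):

```
# .dockerignore — exclude host artifacts from the image build context
node_modules
npm-debug.log
.git
```

This also makes builds faster, since Docker doesn't have to send those directories to the build daemon.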
Develop on the same environment as production
Once the app runs in a Docker development environment, you can ship the entire container straight to production. If you think it's a pain to deal with inconsistencies between two developers, just wait until you write code that works on your machine, only to discover it doesn't work in production. It is extremely frustrating.
You have tons of options for deploying Docker containers to production. Here are some of them:
- AWS ECS (official tutorial)
- Digital Ocean (tutorial)
- Heroku (official tutorial)
- io (official tutorial)
I like Heroku's approach because it's the only one that lets you simply push your project with a Dockerfile and have it run. The others require extra steps, like pushing the Docker image to a registry. The extra steps aren't the end of the world, but they aren't necessary.
What about more complex apps?
Because of Docker's philosophy of one process per container, most apps will require multiple containers. For example, a WordPress site would consist of a container for the web server running PHP and a container for the MySQL database. This means you need a way for the containers to talk to each other. This is called container orchestration.
If you can run all your containers on a single host, Docker Compose will probably meet your orchestration needs. It's included when you install Docker, and it's easy to learn. It lets you launch multiple containers at the same time and networks them together so they can talk to each other. This is the fastest and easiest way to orchestrate multiple containers.
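For the WordPress example above, a Compose file might look like the following sketch (the image tags, service names, ports, and credentials are illustrative assumptions, not values from this article):

```yaml
# docker-compose.yml — one container per process: web server + MySQL
version: "3"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
      MYSQL_DATABASE: wordpress
  wordpress:
    image: wordpress:latest
    depends_on:
      - db
    ports:
      - "8000:80"                    # local port 8000 -> container port 80
    environment:
      WORDPRESS_DB_HOST: db          # Compose lets containers reach each other by service name
      WORDPRESS_DB_PASSWORD: example
```

Running `docker-compose up` then starts both containers on a shared network, where the `wordpress` service can reach the database simply as `db`.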
If you have to orchestrate containers spread across multiple hosts, Kubernetes is the prevailing solution. Many hosts that support Docker deployments also offer Kubernetes for orchestration.
Quick Wins from Understanding Docker
It may not seem important now, but file this knowledge away for the first time you run into a problem caused by differences between environments. You won't want it to happen again. By learning Docker, you can ensure a consistent environment for your app, regardless of where it runs or who runs it. That means consistent results that you, your customers, and your employers can rely on.