Before learning the fundamentals of Docker and how to use it, it is important to understand what Docker is and why you should use it.
Docker is a containerization platform that allows for the creation, deployment, and running of lightweight, isolated environments called containers. Docker allows developers to package their applications and dependencies into a single container that can be easily shared and run on any system that has Docker installed, regardless of the underlying operating system or hardware.
You don’t need to worry about which host system the user is running; as long as they have Docker, your code will run.
Portability: Docker allows you to package your application and its dependencies into a single container that can be easily moved between different environments.
Isolation: It provides an isolated environment for your application to run in, which helps to prevent conflicts with other applications or services running on the same system.
Efficiency: Docker containers are lightweight, which means you can run more containers on a single machine with minimal overhead.
Reproducibility: Docker makes it easy to reproduce your development, testing, and production environments, ensuring that your application runs consistently and reliably in different settings.
Security: Docker provides several built-in security features, such as user namespaces, which help to protect your application and the host system from potential security threats.
Now that we know what Docker is and why you should use it, let us learn the fundamentals of Docker.
The following is a list of Docker commands that one should be familiar with when using Docker; we will be using these commands throughout this tutorial.
Docker images:
• docker images - gives the list of all images
• docker image rm <image_name> - removes an image
• docker build --tag <image_name> <path> - builds an image from a Dockerfile
Docker containers:
• docker ps - returns a list of running containers
• docker ps -a - returns a list of running and stopped containers
• docker run <image_name> - runs a container from image
• docker run -p <host_port>:<container_port> <image_name> - maps a host port to a container port
• docker run -v <host_directory>:<container_directory> <image_name> - mounts a host directory into the container
• docker run -e <key>=<value> <image_name> - passes an environment variable
• docker inspect <container_id> - gives details of a container
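As an illustrative combination of the flags above (the image name and paths here are placeholders, not part of this tutorial's project), a single docker run invocation can map a port, mount a directory, and pass an environment variable at once:

```shell
# Placeholder sketch: serve the local ./html folder with nginx on host port 8080.
docker run -d \
  -p 8080:80 \
  -v "$(pwd)/html":/usr/share/nginx/html \
  -e MY_SETTING=value \
  nginx
```

The -d flag runs the container in the background and prints its ID, which you can then use with docker stop, docker rm, and docker inspect.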
Docker-compose:
• docker-compose build - builds images
• docker-compose up - starts containers
• docker-compose stop - stops running containers
• docker-compose rm - removes stopped containers
Docker-specific keywords in the docker-compose.yml file:
• version: which version of docker-compose to use
• services: names of containers we want to run
• build: steps describing the build process
• context: where the Dockerfile is located, to build the image
• ports: map ports from the host machine to the container
• volumes: map the host machine or a docker volume to the container
• environment: pass environment variables to the container
• depends_on: start the listed services before starting the current service
And finally, keywords to use in a Dockerfile:
• FROM image_name - starts build by layering onto an existing image
• COPY host_path container_path - copies a file or directory from the host to the image
• RUN shell_command - runs a shell command in the image
• WORKDIR path - sets the current path in the image
• ENV variable value - sets the env variable equal to the value
• EXPOSE port - exposes a container port
• ENTRYPOINT ["command", "arg"] - fixed command prefix that CMD arguments are appended to
• CMD ["command", "arg"] - executes the command when the container starts
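To make the ENTRYPOINT/CMD relationship concrete, here is a minimal, illustrative Dockerfile (the base image and messages are placeholders, not part of the tutorial project): when both are written in the JSON-array form, Docker runs ENTRYPOINT with CMD's items appended as default arguments.

```dockerfile
FROM ubuntu:22.04
# ENTRYPOINT is the fixed command prefix...
ENTRYPOINT ["echo"]
# ...and CMD supplies default arguments that can be overridden at run time.
CMD ["hello from the container"]
```

With this file, docker run <image> would print "hello from the container", while docker run <image> goodbye overrides only the CMD part and prints "goodbye".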
Docker images are self-contained packages that include all the components necessary to run an application as a container: the application’s source code, along with any dependencies, libraries, and tools needed to execute it. When a Docker image is run, a running instance of that image is created, and that running instance is a Docker container (one image can back many containers).
Let’s look at an example of downloading an image from Docker Hub (a registry that is like GitHub, but for Docker images) using the following format:
docker pull <IMAGE_NAME>
D:\Docker Tutorial>docker pull mongo
You can also run the following command to get the list of Docker images on your system:
docker images
While it is possible to create a docker image from scratch, most developers use pre-existing images from public or private repositories.
Docker containers are the live, running instances of Docker images. Let’s run our first container; the command for doing so has the following format:
docker run <image_name>
D:\Docker Tutorial>docker run mongo
You can use the following command to get the list of running containers
docker ps
It should be noted that every time you run the “docker run” command, it creates a new container (a new instance of the image).
While Docker images are read-only files, users can interact with containers, adjust their settings, and change data inside them using Docker commands. You can enter the terminal of a Docker container by running a command of the following format (on Windows):
docker exec -it <container_id> sh
D:\Docker Tutorial>docker exec -it a596d656de7b sh
You can exit the container with the command “exit”
You can also stop and remove the container using commands of the following format:
docker stop <container_id> (takes time to stop gracefully)
docker kill <container_id> (stops immediately)
docker rm <container_id>
You can also look for containers that have been stopped by using the command:
docker ps -a
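Putting the lifecycle commands together, a typical session might look like this (the container ID shown is illustrative; yours will differ):

```shell
docker run -d mongo       # start a new container in the background; prints its ID
docker ps                 # the container shows up as running
docker stop a596d656de7b  # stop it gracefully (illustrative ID)
docker ps -a              # the stopped container is still listed
docker rm a596d656de7b    # remove it entirely
```

Once removed, the container no longer appears even in docker ps -a, though the underlying image remains available for new containers.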
By writing a Dockerfile, you can specify the exact configuration and dependencies needed for your application, which the Docker Engine then uses to build a Docker image.
Let’s create a simple web server with the Python Flask framework
First, create a new directory to store all the relevant files. Within that directory, create a server.py file and paste in the code below:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
This is a basic web server that displays a page saying “Hello, World!”
Create a file called “Dockerfile” and paste the below code
# 3-server/Dockerfile
FROM python:3.6

# Create a directory
WORKDIR /app

# Copy the app code
COPY . ./

# Install requirements
RUN pip install Flask

# Expose Flask's port
EXPOSE 5000

# Run the server
CMD python server.py
Let’s see what these commands do.
FROM builds an image by layering onto an existing image; by specifying “FROM python:3.6” we start from an image that already has Python 3.6 installed.
WORKDIR creates the directory if it doesn’t exist and changes into it, similar to running “mkdir /app && cd /app”.
The COPY command copies files and folders from the current directory on the host into /app/ in the image.
Using RUN you can execute a Linux command; here we run a command to install “Flask” in the image.
The EXPOSE command declares that the container listens on port 5000. We still need to map the container port to a host port when running the image, so that the app can be reached from outside the container, for example from Chrome or Postman.
Finally, CMD is the entry-point command (you can have only one in a Dockerfile); here we instruct the container to start the app by running “python server.py”.
Now, let’s build the Docker image. Go to the command line and run the following command (note the trailing “.”, which tells Docker to use the current directory as the build context):
D:\Docker Tutorial\Python DemoServer> docker build -t python-webapp .
Let’s run the image and map the container’s port 5000 to port 8000 on our host, using the command:
D:\Docker Tutorial\Python DemoServer> docker run -p 8000:5000 python-webapp
Then go to localhost:8000 in your browser to see our Flask app running successfully.
Docker Compose is a tool that allows you to define and run multi-container Docker applications. It uses YAML files (a format similar in purpose to JSON or XML, and a bit like Python in that whitespace matters; items are separated with colons) to define the services, networks, and volumes required for your application to run. By defining all the components of your application in a single file, Docker Compose makes it easy to manage the lifecycle of your application and its dependencies.
With Docker Compose, you can specify the relationships between containers and the configuration details of each container. This allows you to launch all the required containers with a single command, making it easy to deploy your application across different environments.
Overall, Docker Compose simplifies the process of deploying and managing multi-container applications, making it a valuable tool for developers. Two or more containers on the same Docker network can reach each other using just the container name; there is no need to use IP addresses.
In this demo project we are going to create a MongoDB GUI, which requires two Docker containers running simultaneously.
You first have to download the Docker images for mongodb and mongo-express from Docker Hub by running the commands below one by one:
D:\Docker Tutorial\DemoApp> docker pull mongo
D:\Docker Tutorial\DemoApp> docker pull mongo-express
Let’s create a simple docker-compose.yaml file that creates two containers: one from the mongo-express image (the UI to connect with MongoDB) and the other from the mongo image (MongoDB itself). Docker Compose also takes care of creating a common network.
version: '3'
services:
  mongodb:
    image: mongo
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
  mongo-express:
    image: mongo-express
    restart: always # fixes MongoNetworkError when mongodb is not ready when mongo-express starts
    ports:
      - 8080:8081
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=password
      - ME_CONFIG_MONGODB_SERVER=mongodb
Let us look at what each keyword in this file means:
version: specifies which version of the Compose file format to use (here, version 3)
services: lists the containers we want to run from the specified images
image: the name of the Docker image
ports: maps ports from the host machine to the container
environment: sets environment variables for the respective containers
restart: as noted in the comment, restarts mongo-express whenever it exits, which handles the MongoNetworkError that arises when mongodb is not ready yet
Also, we are connecting to mongodb from mongo-express using the username and password we set for the mongodb container via the environment keyword in the YAML file. You can check the available environment variables in the documentation of the relevant Docker image on Docker Hub.
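Inside the container, values passed with docker run -e or the environment: key simply appear as ordinary process environment variables. As a minimal sketch (the variable names and defaults here are illustrative, chosen to mirror the compose file above), a Python app could read them like this:

```python
import os

# Read settings injected via `docker run -e KEY=value` or the
# `environment:` section of docker-compose.yml; fall back to
# defaults when the variables are not set.
db_user = os.environ.get("MONGO_USER", "admin")
db_password = os.environ.get("MONGO_PASSWORD", "password")
# On a shared Docker network, the service/container name works as a hostname.
db_host = os.environ.get("MONGO_HOST", "mongodb")

connection_uri = f"mongodb://{db_user}:{db_password}@{db_host}:27017"
print(connection_uri)
```

The same script then works unchanged in every environment; only the variables passed to the container differ.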
Let’s run the docker-compose file using the following command:
D:\Docker Tutorial\DemoApp> docker compose -f docker-compose.yaml up
You can see the network Docker Compose created for the containers to communicate by typing the command:
docker network ls
Here “demoapp_default” is the network created by Docker Compose. We can also look at the containers that were created with “docker ps”.
You can see the two containers, “demoapp-mongo-express-1” and “demoapp-mongodb-1”; Docker Compose adds its own prefix and suffix to the names we specified in our YAML file.
You can look at the mongo-express UI served from our container by typing “localhost:8080” (the host port we mapped) in the browser.
From here we can connect to the mongodb and create databases and so on.
Now, to stop and remove the containers and the network created by Docker Compose, run the following command:
docker compose -f docker-compose.yaml down
You can then check with “docker ps -a” that the containers have been removed.
Let’s say we have a database container. It has a virtual file system where the data is usually stored, so the data is lost whenever we remove or recreate the container; there is no data persistence. This is where Docker volumes come to the rescue: a folder in the host’s physical file system is mounted into the virtual file system of the container, so data written in the container is replicated on the host. The data is also populated back from the host file system whenever you restart the container.
There are three different ways to define a volume.
Note: the path inside the container depends on the database image you use, so check the documentation of the relevant image for the correct path.
Host volume: you decide where on the host file system the data is stored.
Ex: during the run command (-v /home/mount/data:/var/lib/mysql/data)
Anonymous volume: for each container, Docker itself generates a folder that gets mounted.
Ex: during the run command (-v /var/lib/mysql/data)
Named volume: you reference the Docker-generated folder by a name of your choosing, which is the approach preferred by many.
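As a sketch of the named-volume approach applied to the earlier compose file (the volume name mongo-data is illustrative; /data/db is where the mongo image stores its data), the mongodb service could persist its data like this:

```yaml
services:
  mongodb:
    image: mongo
    volumes:
      - mongo-data:/data/db   # volume name : path inside the container

# Top-level declaration tells Compose to create the Docker-managed volume.
volumes:
  mongo-data:
```

With this in place, removing and recreating the mongodb container no longer loses the databases, because they live in the mongo-data volume rather than in the container’s own file system.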
The path where Docker stores volume data differs with the OS:
• Windows - d:\ProgramData\docker\volumes
• Linux and MacOS - /var/lib/docker/volumes
There are several software options for container orchestration. Popular ones include Kubernetes, Mesos, and Docker Swarm.
Docker Swarm is a native container orchestration tool made by Docker Inc. It lets us coordinate how our containers run, similar to Docker Compose, but targeted at production. This lets us run many container instances of our application in parallel, meaning the application can sustain high levels of traffic, and it can autoscale in response to changes in traffic.
The following are some of the best practices to follow when working with containers
• Containers should not hold permanent data
• Store data outside of the container
• Containers should be disposable
• Containers should communicate internally whenever possible; only export ports when necessary
• Minimise the number of image layers when writing Dockerfiles
In conclusion, Docker is a powerful tool for creating, deploying, and managing containerized applications, and it has become an essential tool for modern software development and deployment.
VS Online Services : Custom Software Development
VS Online Services has been providing custom software development for clients across the globe for many years - especially custom ERP, custom CRM, Innovative Real Estate Solution, Trading Solution, Integration Projects, Business Analytics and our own hyperlocal e-commerce platform vBuy.in and vsEcom.
We have worked with multiple customers to offer customized solutions for both technical and non-technical companies. We work closely with the stakeholders and provide the best possible result, with 100% successful completion. To learn more about VS Online Services Custom Software Development Solutions or our product vsEcom, please visit our SaaS page, Web App Page, or Mobile App Page to learn about the solutions provided. Have an idea or requirement for digital transformation? Please write to us at siva@vsonlineservices.com