
Docker Up and Running (Part 1)

Docker has taken the DevOps and development world by storm. It offers a lightweight virtualization option coupled with Git-style version control of images. When leveraged with microservices, its value to the industry is hard to overstate. This tutorial is a beginner's guide to Docker: how it works and some of the ways it can be leveraged within an existing organization's workflow. The main value it provides over traditional VM solutions lies in its lightweight portability and its use of a shared Linux kernel, just to name a couple. Docker itself is housed at https://www.docker.com.

Docker Container Architecture

Docker describes its virtualization architecture as follows:

“Containers and virtual machines have similar resource isolation and allocation benefits — but a different architectural approach allows containers to be more portable and efficient.” — source: http://docker.com

The main page of Docker.com goes on to specify some of the key differences from traditional virtualization solutions, quoted below:

“Containers include the application and all of its dependencies –but share the kernel with other containers, running as isolated processes in user space on the host operating system. Docker containers are not tied to any specific infrastructure: they run on any computer, on any infrastructure, and in any cloud.”

This may sound a bit abstract at first, but once it's dissected a bit it will make more sense. The main documentation page goes on to explain the meaning as follows:

“At its core, Docker provides a way to run almost any application securely isolated in a container. The isolation and security allow you to run many containers simultaneously on your host. The lightweight nature of containers, which run without the extra load of a hypervisor, means you can get more out of your hardware.

Surrounding the container is tooling and a platform which can help you in several ways:

1. Get your applications (and supporting components) into Docker containers
2. Distribute and ship those containers to your teams for further development and testing
3. Deploy those applications to your production environment, whether it is in a local data center or the Cloud”

Docker use cases:

Now that we have a basic understanding of the purpose of Docker and how it's architected, let's take a look at some real-world use cases. Docker's lightweight containerization solution gives developers, operations, and other engineering staff the ability to solidify their infrastructure and commit it into Docker's own Git-style version control. In addition to source-control-like support for virtualized environments, it provides a unique way for people to share infrastructure (a Docker registry). Because of these powerful use cases, Docker could potentially serve any (or all) of the following roles:

  1. As a way to ensure deployments work in ANY environment
  2. For build infrastructure, test infrastructure, and deployment infrastructure
  3. For hosting micro-services (this is an awesome scenario)
  4. For collaborating on application development

Installing Docker:

Docker can be set up fairly easily with any of the installation tools that http://www.docker.com provides. Step-by-step tutorials are readily available on that site. Generally, the steps involve the following tasks:

  1. Downloading the proper Docker installation app for your OS (Win, Mac etc.)
  2. Running Docker as a system service
  3. Pulling a base image from the public Registry (‘#> docker pull ubuntu’)
  4. Launching the container or executing a command on it. (‘#> docker run -it ubuntu:latest’)

Docker’s various installation kits can be retrieved from the following URL: https://www.docker.com/products/docker#/windows

Understanding Docker terminology (basic glossary):

To become truly effective with Docker it’s important to understand some of the basic nomenclature surrounding the use of Docker. Below is a very brief glossary of some of the more commonly used Docker terms and their meaning.

Docker Container – An active or inactive instantiation of a Docker Image. To get a list of Docker containers running on the system, one could execute the ‘#> docker ps’ command. To get a list of inactive containers (ones that were previously running but have since been shut down), one could execute the ‘#> docker ps -a’ command.

A Docker container is composed from the following elements:

  • A Docker image
  • Execution environment
  • A standard set of instructions

Docker Image – An ordered collection of root filesystem changes and corresponding execution parameters for use within a container. Images are read-only and represent uninstantiated containers. An image might also be considered an inert, immutable file that’s essentially a snapshot of a container. Images are created with the build command, and they produce a container when started with the run command. Images are stored in a Docker registry such as registry.hub.docker.com. To view a list of locally available images, one could execute the ‘#> docker images’ command.

Docker Registry – A Docker registry is a centrally hosted set of repositories that contain Docker images. Docker provides an official registry, which can be found at registry.hub.docker.com. Docker images can be retrieved from a registry via the ‘#> docker pull’ command syntax.

Docker Repository – A centrally hosted area for Docker images of a specific classification to be stored. A repository in the Docker world lives inside a registry. A registry can house MANY repositories.

Dockerfile – A Dockerfile is a file, written in Docker’s own syntax, that describes the prescribed state of a container image. Generally, a Docker image can be created via manual commands entered into a terminal or via the contents of a Dockerfile.
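
As a rough sketch (the package and commands shown here are illustrative assumptions, not taken from any particular project), a Dockerfile might look like this:

  # Start from the official Ubuntu base image
  FROM ubuntu:latest

  # Bake a package into the image at build time (illustrative)
  RUN apt-get update && apt-get install -y curl

  # Default command executed when a container starts from this image
  CMD ["bash"]

An image described by such a file would typically be built with ‘#> docker build -t myimage .’ from the directory containing it.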

Note: This list is by no means comprehensive. Please refer to the Docker official documentation for additional terms. This documentation set can be located at the following URL: https://docs.docker.com/engine/understanding-docker/

Retrieving & Running A Docker Container:

Assuming that Docker has been successfully installed on your local system, getting started with Docker itself should be generally straightforward. While Docker provides UI options for Mac and Windows users, this tutorial will focus primarily on command-line operational tasks. The first step in getting a guest OS up and running on your system is to pull one from an available registry and repository. For this example we will pull an Ubuntu image from the official Docker registry. This is accomplished via the following command:
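
  #> docker pull ubuntu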

Once executed, this command should produce output on the terminal similar to the following:
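
  Using default tag: latest
  latest: Pulling from library/ubuntu
  ... (layer download and extraction progress) ...
  Status: Downloaded newer image for ubuntu:latest

(The layer IDs and progress lines will differ from system to system.)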

As we can see from the output, this command fetched a fresh (latest) copy of the Ubuntu image from the central Docker Hub registry. We can confirm this locally by typing ‘#> docker images’ into the command-line terminal.

Now that we have a copy of Ubuntu locally, let’s fire it up and poke around. To get Docker to run our container, type the following command into the terminal:
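
  #> docker run -it ubuntu:latest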

Terminal output / prompt:
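
  root@4fa304aacc04:/#

(The hash in the prompt is the container’s ID and will differ on your system; this one matches the ID used later in this tutorial.)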

Once at the terminal, let’s check to ensure it’s actually Ubuntu. To do this, from our root@* command prompt issue the following command:
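
  root@4fa304aacc04:/# cat /etc/lsb-release

(The /etc/lsb-release file ships with the standard Ubuntu base image; checking it is one straightforward way to confirm the release.)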

Once the command is issued, the terminal should produce the following output:
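
  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=16.04
  DISTRIB_CODENAME=xenial
  DISTRIB_DESCRIPTION="Ubuntu 16.04 LTS"

(Representative contents for the Ubuntu 16.04 image; minor details may vary by point release.)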

Cool! So now we know our Docker container is running Ubuntu 16.04. But wait, there’s more! As we mentioned earlier, Docker allows us to store our changes to the container in a Git-like source-control system. Let’s see how to make a simple change to our container (while it’s running), exit it, find the container ID (a unique hash), and commit the change.
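
As a simple illustrative change (the file name here is purely an example), create an empty file inside the running container:

  root@4fa304aacc04:/# touch /testfile.txt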

Now exit the container (‘#> exit’) and commit the change.
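
From the host, the ID of the now-stopped container can be found with ‘#> docker ps -a’. The commit itself takes the general form below (substitute the repository and tag you want the new image stored under):

  #> docker ps -a
  #> docker commit 4fa304aacc04 <repository>:<tag>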

The commit command above will commit our container ID (4fa304aacc04) to our image (ubuntu:latest). Once committed, Docker will output something like the following:
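
  sha256:<ID of the newly created image>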

Now let’s see if our commit was successful by looking at the available images on the system. To do this, type the following command:
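
  #> docker images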

As we can see, the ‘repository’ is now the hash that was our container ID. Now that we know the basics, we can even look at the complete history of the container. This is accomplished by issuing the following command from your host OS:
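
One command that fits this description (assuming the container ID from earlier) is ‘docker logs’, which replays the terminal session of an interactive container:

  #> docker logs 4fa304aacc04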

The output will be a complete history of the commands issued on the container (while it was running).

Conclusion

This tutorial is aimed at getting up and running quickly with Docker and discovering some of the basic terminology surrounding how to use it and what it does. Future Docker tutorials will dive into more of the intricacies of Docker, including how to use registries and some of the more popular microservice concepts. Docker is a comprehensive system; the more you play with it and leverage it in your daily work, the better you will get at it. Until next time, keep dockerizing.

 


Category: DevOps Tools


Article by: jmcallister