Docker

Charm uses Docker containers running on Kubernetes for all of its modern infrastructure.

Docker allows us to build consistently across different environments and to deploy well-tested platforms rapidly, ensuring that we can accommodate the needs of other business units within the company.

Staff Summaries

Member A

Docker is a means of encapsulating services in self-contained “containers”. Containers are similar to old chroot implementations, but add network segmentation, kernel module control, and resource management and limitation.

Compared to VMs, where an entire machine is emulated including processor instructions, Docker containers utilise extensions in the host’s kernel to present an environment in which a cut-down operating system may run.

The environment is defined via a “Dockerfile”. This file contains instructions on how the environment should be constructed. Normally, you start with a base image at the beginning of the file and then add commands to build your own environment on top of it.

Each command in the Dockerfile, as well as its base image, is considered to be a “layer”. A Docker image is a collection of these layers. Any modification to the Dockerfile only rebuilds the affected layers rather than the whole image, which saves rebuild time.
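As a sketch of how that layer caching behaves (base image aside, the package and file names here are purely illustrative): if only the last instruction in this Dockerfile changes, Docker reuses the cached layers above it and rebuilds just that one layer.

```dockerfile
FROM ubuntu
RUN apt-get update            # cached after the first build
RUN apt-get -y install curl   # cached after the first build
COPY app.sh /app.sh           # only this layer rebuilds if app.sh changes
```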

Once a Docker image has been built, it is tagged with a version (often the SHA of the current git commit or revision) and uploaded to a Docker registry. This image can then be deployed any number of times.
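A sketch of that build-tag-push workflow, assuming the image name `myapp` and the registry host `registry.example.com` are placeholders rather than real project values:

```shell
# Build the image from the Dockerfile in the current directory.
docker build -t myapp .

# Tag it with the short SHA of the current git commit
# (hypothetical registry host, for illustration only).
GIT_SHA=$(git rev-parse --short HEAD)
docker tag myapp registry.example.com/myapp:"$GIT_SHA"

# Push the tagged image so it can be deployed any number of times.
docker push registry.example.com/myapp:"$GIT_SHA"
```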

Finally, a container’s behaviour may be adjusted at runtime by means of environment variables and attached volumes; the image itself is immutable once built.
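A minimal sketch of runtime configuration, assuming the image name, variable and paths are illustrative only:

```shell
# Pass configuration in via an environment variable, and mount a host
# directory as a volume so data outlives the ephemeral container.
docker run \
  -e APP_MODE=production \
  -v /srv/myapp/data:/var/lib/myapp \
  myapp:latest
```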

An important note is that Docker containers should generally be considered ephemeral – their state may change or cease to exist at any time. Contrary to older system design, where a system would be set up once and then expected to maintain its configuration, in a Docker deployment all configuration must be completed when defining the Dockerfile. While this may seem unnecessarily restrictive, it brings several very important benefits:

  • No special knowledge is required to deploy a Docker image – all associated dependencies can be assumed to be set up and ready to go when the image is started, with no build or installation steps. This is particularly important where a large number of configuration files would otherwise need modifying, or third-party libraries bringing in, and deploying a new version of some software might take two hours
  • Speed of deployment – as above, with no configuration necessary, the same version of the software can be repeatably tested and deployed without any risk of misconfiguration errors
  • Consistency of deployment – since a Docker image always starts from the same clean state, “system rot”, or progressive destabilisation over time, cannot occur

Member B

Virtual Machine: A complete self-contained guest operating system running as a child process of a parent OS.

Docker Image: An application, plus all of its dependencies, packaged into a single unit that can be easily deployed to run on a host OS.  A running instance of this unit is called a container, and because only the application is running, the overheads of an entire virtual machine are avoided. Docker containers run in a Linux or Windows environment; you don’t need to install an entire OS. Instead, put the application in a Docker container and run it on your local PC.

Terminology:

Image: A read-only template from which containers are created. It can contain application code, libraries, configuration and so on.

Container: A running instance of an image, bundled with all the dependencies required to run the application.

Dockerfile: A text file, similar to a shell script, that defines the steps needed to build an image. With a Dockerfile you can create an image, and from that image, containers.

A Dockerfile is a list of commands similar to a shell script. Here’s a basic example:

FROM ubuntu
RUN apt-get update
RUN apt-get -y install nginx
CMD ["echo", "Image created"]

Note the Unix commands!

Building a Docker image from this Dockerfile, and then running that image, creates a container with a self-contained nginx web server in a Unix environment.
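As a sketch, building and running the example above might look like this (the image name `webserver` is a placeholder):

```shell
# Build an image from the Dockerfile in the current directory.
docker build -t webserver .

# Run a container from the image; its CMD prints "Image created"
# and --rm removes the container once it exits.
docker run --rm webserver
```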


Member C

Docker is a tool that makes it easy to launch and rebuild applications by packaging them, together with their libraries and dependencies, into containers.
Containerisation allows you to manage your infrastructure as code, for faster and more secure deployment. In addition, compared to VMs, which were the main virtualisation technology in the past, only the necessary software is started, so the resources of the PC can be used efficiently.


Member D

  • Docker is virtual environment software/platform.
  • Docker uses the host’s kernel and is faster than a typical virtual machine.
  • Because a container includes its development/runtime configuration and libraries, it provides a common environment that does not depend on the host’s environment.
