Docker: Building Docker Images with Dockerfiles

In this guide we’ll learn about the Dockerfile: what it is, how to create one, and how to configure the basics to bring up your own Dockerized app.

What is a Dockerfile?

  • A Dockerfile is a plain-text configuration file written in Docker’s own simple, human-readable instruction syntax.
  • It is a step-by-step script of all the commands you need to run to assemble a Docker Image.
  • The docker build command processes this file, generating a Docker Image in your Local Image Cache, which you can then start up using the docker run command or push to a permanent Image Repository.

Create a Dockerfile

Creating a Dockerfile is as easy as creating a new file named “Dockerfile” with your text editor of choice and defining some instructions.

#
# Each instruction in this file generates a new layer that gets pushed to your local image cache
#

#
# Lines that start with # are treated as comments and ignored
#

#
# The line below states we will base our new image on the Latest Official Ubuntu
FROM ubuntu:latest

#
# Identify the maintainer of an image
MAINTAINER My Name "myname@somecompany.com"

#
# Update the image to the latest packages
RUN apt-get update && apt-get upgrade -y

#
# Install NGINX to test.
RUN apt-get install nginx -y

#
# Expose port 80
EXPOSE 80

#
# Last is the actual command to start up NGINX within our Container
CMD ["nginx", "-g", "daemon off;"]

Run the command below to build an image from the Dockerfile:

$ docker build . -t <image tag name>

e.g. $ docker build . -t premaseem/dockerimage
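
Once the build completes, you can start a container from the image you just tagged (the tag below is simply the example used above) and check that NGINX answers on port 80:

$ docker run -d -p 80:80 premaseem/dockerimage
$ curl http://localhost:80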

 

Dockerfile Commands

  • ADD – Defines files to copy from the Host file system onto the Container
    • ADD ./local/config.file /etc/service/config.file
  • CMD – This is the command that will run when the Container starts
    • CMD ["nginx", "-g", "daemon off;"]
  • ENTRYPOINT – Sets the default application used every time a Container is created from the Image. When used in conjunction with CMD, the ENTRYPOINT is the executable and CMD supplies its default arguments, which can be overridden at run time (see the example after this list)
    • ENTRYPOINT ["echo"]
    • CMD ["Hello World!"]
  • ENV – Set/modify the environment variables within Containers created from the Image.
    • ENV VERSION 1.0
  • EXPOSE – Define which Container ports to expose
    • EXPOSE 80
  • FROM – Select the base image to build the new image on top of
    • FROM ubuntu:latest
  • MAINTAINER – Optional field to let you identify yourself as the maintainer of this image
    • MAINTAINER Some One "someone@xyz.xyz"
  • RUN – Specify commands to make changes to your Image and subsequently the Containers started from this Image. This includes updating packages, installing software, adding users, creating an initial database, setting up certificates, etc. These are the commands you would run at the command line to install and configure your application
    • RUN apt-get update && apt-get upgrade -y && apt-get install -y nginx && rm -rf /var/lib/apt/lists/*
  • USER – Define the default User all commands will be run as within any Container created from your Image. It can be either a UID or username
    • USER docker
  • VOLUME – Creates a mount point within the Container, linking it back to file systems accessible by the Docker Host. New Volumes get populated with the pre-existing contents of the specified location in the image. It is worth noting that defining Volumes in a Dockerfile can lead to issues; Volumes are better managed with docker-compose or the "docker run" command.
    • VOLUME /var/log
  • WORKDIR – Define the default working directory for the command defined in the “ENTRYPOINT” or “CMD” instructions
    • WORKDIR /home
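
To see how ENTRYPOINT and CMD work together, here is a minimal sketch: a hypothetical Dockerfile in which ENTRYPOINT holds the executable and CMD supplies a default argument that can be overridden at run time (the demo/entrypoint tag is just an illustrative name).

#
# Hypothetical Dockerfile demonstrating ENTRYPOINT + CMD
FROM ubuntu:latest
ENTRYPOINT ["echo"]
CMD ["Hello World!"]

$ docker build . -t demo/entrypoint
$ docker run demo/entrypoint            # prints: Hello World!
$ docker run demo/entrypoint Goodbye    # CMD is overridden, prints: Goodbye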

MongoDB – can’t log in via -u root -p bitnami

Are you trying to log in like this?

mongo admin --username root --password YOUR_PASSWORD

where YOUR_PASSWORD is the one you can see in your AWS system log:

Setting Bitnami application password to 'YOUR_PASSWORD'

If this doesn’t work, you can also try resetting the MongoDB password:

http://wiki.bitnami.com/Components/mongoDB?highlight=mongo#How_to_reset_the_MongoDB_root_password.3f
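
If that fails, the general MongoDB procedure for resetting a password looks roughly like the sketch below. This is not Bitnami-specific: the service name and data path are assumptions that may differ on your stack, so prefer the steps in the wiki link above.

# Stop the regular service and start mongod without access control
# (the service name "mongod" and dbpath /data/db are assumptions)
$ sudo service mongod stop
$ sudo mongod --dbpath /data/db --noauth &

# Set a new password for the root user in the admin database
$ mongo admin --eval 'db.changeUserPassword("root", "NEW_PASSWORD")'

# Shut down the unauthenticated instance and start the regular service again
$ sudo mongod --dbpath /data/db --shutdown
$ sudo service mongod start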

Cloud: The transition to an as-a-service model

These days there is a trend of companies moving away from a product-oriented business toward a service-oriented one. Everything is exposed as a service-oriented component, essentially a REST API that can be consumed easily without much dependency or coupling.


Difference between Horizontal and Vertical Scaling

Horizontal scaling means that you scale by adding more machines to your pool of resources, whereas vertical scaling means that you scale by adding more power (CPU, RAM) to an existing machine.


Vertical scaling can essentially resize your server with no change to your code. It is the ability to increase the capacity of existing hardware or software by adding resources. Vertical scaling is limited by the fact that you can only get as big as the size of the server.

Horizontal scaling affords the ability to scale wider to deal with traffic. It is the ability to connect multiple hardware or software entities, such as servers, so that they work as a single logical unit. This kind of scale cannot be implemented at a moment’s notice.

Having said that, here is an easy example to understand it better:

Let’s assume you are going on a holiday trip with your team. The problem is that there are 50 members in your team and the travel agent has sent only one bus with a capacity of 25 passengers. What will you do? You need to carry 50 people, so you can either ask for two buses or ask for a double-decker bus that can carry 50 passengers at a time. Either way, you need to scale the basic resource (the bus). If you choose the double-decker, that is ‘vertical scaling’: you haven’t increased the number of buses, you have just increased the capacity of one bus. If you opt for two buses, that is ‘horizontal scaling’: you have increased the number of buses (resources).

Note: Vertical scaling is also called scaling up and horizontal scaling is also called scaling out. These are very important concepts in cloud computing and databases 😉
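
In Docker terms, a rough illustration of the difference (the container and service names are hypothetical, and the second command assumes Docker Swarm mode):

# Vertical scaling (scale up): give an existing container more CPU and memory
$ docker update --cpus 4 --memory 8g my-app

# Horizontal scaling (scale out): run more replicas of the same service
$ docker service scale my-app=5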

 

Understanding RAID – Redundant Array of Independent Disks


What is RAID?

RAID is an acronym for Redundant Array of Independent (or Inexpensive) Disks. RAID is a way of combining several independent and relatively small disks into a single storage unit of large size. The disks included in the array are called array members. The disks can be combined into the array in different ways, known as RAID levels. Each RAID level has its own characteristics of:

  • Fault tolerance, which is the ability to survive the failure of one or several disks.
  • Performance, which shows the change in the read and write speed of the entire array as compared to a single disk.
  • Capacity, which is determined by the amount of user data that can be written to the array. The array capacity depends on the RAID level and does not always match the sum of the sizes of the member disks. To calculate the capacity of a particular RAID type and set of member disks you can use a free online RAID calculator, or see the quick example below.
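
For instance (a purely illustrative calculation), with four 1 TB member disks: RAID 0 yields about 4 TB of usable capacity, RAID 1 about 1 TB, RAID 5 about 3 TB (one disk’s worth of parity), and RAID 6 about 2 TB (two disks’ worth of parity).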

RAID levels

  • RAID 0 – based on striping. This RAID level doesn’t provide fault tolerance but increases the system performance (high read and write speed).
  • RAID 1 – utilizes the mirroring technique, increases read speed in some cases, and provides fault tolerance against the loss of no more than one member disk.
  • RAID 0+1 – based on the combination of striping and mirroring techniques. This RAID level inherits RAID 0 performance and RAID 1 fault tolerance.
  • RAID1E – uses both striping and mirroring techniques, can survive a failure of one member disk or any number of nonadjacent disks. There are three subtypes of RAID 1E layout: near, interleaved, and far. More information and diagrams on the RAID 1E page.
  • RAID 5 – utilizes both striping and parity techniques. Provides approximately the same read speed improvement as RAID 0 and survives the loss of one member disk.
  • RAID 5E – a variation of the RAID 5 layout whose only difference is an integrated spare space, which allows a failed array to be rebuilt immediately after a disk failure. Read more on the RAID5E page.
  • RAID 5 with delayed parity – pretty similar to the basic RAID 5 layout, but uses a nonstandard striping scheme. More information about RAID5 with delayed parity.
  • RAID 6 – similar to RAID 5 but uses two different parity functions. The read speed is the same as in RAID 5.
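
On Linux, software RAID arrays at several of these levels can be created with mdadm; a minimal sketch is shown below (the device names /dev/sdb, /dev/sdc and /dev/sdd are placeholders for your actual member disks):

# RAID 0 (striping) across two disks
$ sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# RAID 1 (mirroring) across two disks
$ sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# RAID 5 (striping + parity) across three disks
$ sudo mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd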

How is RAID organized?

Two independent aspects are clearly distinguished in the RAID organization.

  1. The organization of data in the array (RAID storage techniques: striping, mirroring, parity, combination of them).
  2. Implementation of each particular RAID installation – hardware or software.

RAID storage techniques

The main methods of storing data in the array are:

  • Striping – splitting the flow of data into blocks of a certain size (called the “block size”) and then writing these blocks across the RAID members one by one. This way of storing data affects performance.
  • Mirroring is a storage technique in which identical copies of data are stored on the RAID members simultaneously. This type of data placement affects fault tolerance as well as performance.
  • Parity is a storage technique which combines striping with checksums. A parity function is calculated over the data blocks; if a drive fails, the missing blocks are recalculated from the parity, providing the RAID fault tolerance (a small illustration is given at the end of this section).

All the existing RAID types are based on striping, mirroring, parity, or a combination of these storage techniques.
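
As a toy illustration of the parity idea (hypothetical one-byte “blocks”, using the shell’s XOR operator): the parity of two blocks is their XOR, and a lost block can be recomputed from the parity and the surviving block.

$ a=$((0xA5)); b=$((0x3C))
$ p=$((a ^ b))                              # parity stored on a third disk
$ printf 'recovered A = 0x%X\n' $((p ^ b))  # reconstruct A after its disk fails
recovered A = 0xA5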