How to reduce pain while performing yoga asanas

Sweet Pain

Yoga is a method of bringing balance to your mind and body, and yoga asanas are meant to give you better health and a lively body. I started doing yoga asanas when I was 10 years old, and they have helped me look young and stay healthy even though I am nearing 40 now. In our yoga sessions I used to see people of all ages, from 10 to 20, 20 to 30, 30 to 50, and above, finding it difficult to do some of the postures. So how do we perform yoga asanas effectively, so that we don't feel pain while doing them? Let's get to know the secret to reducing pain while doing yoga asanas.

First, warm up your body before the asanas with some normal physical exercises, such as stretching, jumping, and twisting. It is preferable to do yoga asanas when you are just getting hungry, or about 4 hours after food. Once the body is warmed up, start your asanas, perhaps with Surya Namaskar, and then move on to particular asanas like Paschimottanasana. Let me explain how to get rid of the pain during Paschimottanasana. If you are not familiar with this asana, please refer to this YouTube link: https://youtu.be/m7pGFqv9KjM. During this asana, as directed in the video, make sure both legs are stretched straight on the ground without bending your knees. Raise both hands, stretched apart in opposite directions, then bring them up and join them together while inhaling slowly. Stretch your back and bend forward to hold your feet or the ends of your calf muscles. Focus on your breathing in each posture, as suggested in the video.

One thing to note while doing yoga asanas is not to force any stretch or bend; when one is needed, do it as much as you comfortably can, and repeat until the pain goes away. So the secret to reducing pain during yoga asanas is to focus on your breathing while pushing a little further into the posture each day. What I mean is to hold a posture: for example, in Paschimottanasana, bend your forehead toward your knees, hold your legs with both hands, and remain there. You will feel pain in the nerves and muscles under the knees, or a painful stretching sensation. Try to remain in the posture and, during this time, just focus on breathing as slowly as you can; if there is real discomfort, stop stretching, return to the normal position, and move on to the next posture of the asana. As I said before, do not force yourself into any posture, as it might cause other problems and might earn you a visit to the doctor! (Just joking, but quite possible in the complex postures.)

The second thing is to always keep in mind and understand the breathing involved in each posture of a yoga asana. It is also important to rest between asanas, maybe for a few seconds, as it restores the calmness in your body after the previous asana. Most important of all is to rest once you have completed all the asanas in your daily morning routine.

Always close your yoga session with Shavasana. In a future blog, I will pen down my experience of doing Shavasana which helps you to rest and feel almost two hours of sleep in just 5 minutes of this asana!

Disclaimer:

The methods and directions for yoga asanas in this blog are from my own experience and should be tried only with expert advice. Any health hazard caused by following these methods is at your own risk and responsibility.

Hadoop data volume failures and the solution – Cloudera

I stumbled upon the following error in Cloudera: Hadoop data volume failures, due to which a DataNode was down and the HDFS service in Cloudera was down too.

I had no clue where to look for errors in this case, as comprehending and managing the Cloudera 8-node cluster (1 master + 7 DataNodes) is complex, with so many components involved (Hadoop roles, HDFS, YARN, Hive, Spark) and intricacies in how each component interacts.

Since metadata is kept on the NameNode and a heartbeat handshake happens between the NameNode and each DataNode, the NameNode or DataNode logs should contain this information, showing the reasons for the data volume failures.

So I checked the NameNode and DataNode logs in the Cloudera setup, on the NameNode machine, and followed the steps below to fix the issue.
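On a Cloudera-managed node the role logs typically land under /var/log/hadoop-hdfs; the exact file names below are only examples (they vary with your host names and CDH version):

```shell
# Tail the NameNode log on the master (file name is an example; adjust to your host)
tail -n 200 /var/log/hadoop-hdfs/hadoop-cmf-hdfs-NAMENODE-bdw21-13.log.out

# Tail the DataNode log on the affected node
tail -n 200 /var/log/hadoop-hdfs/hadoop-cmf-hdfs-DATANODE-bdw21-17.log.out

# Search all role logs for volume-failure messages
grep -i "volume failure" /var/log/hadoop-hdfs/*.log.out
```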

Problem: HDFS service down because a DataNode is down due to data volume failures

[Image: dead DataNode alert in Cloudera Manager]

I checked which DataNode volume might be the problem with the HDFS report below. One of the volumes shows DFS Used at 100%.

hdfs dfsadmin -report

HDFS Disk capacity explained:

https://community.cloudera.com/t5/Community-Articles/Details-of-the-output-hdfs-dfsadmin-report/ta-p/245505
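To spot the problem volume quickly without scanning the whole report, the output can be filtered; this is a sketch (the number of context lines printed is a judgment call):

```shell
# Show each DataNode name with its usage percentage
hdfs dfsadmin -report | grep -E "^(Name:|DFS Used%:)"

# Or print only entries reporting 100% usage, with some context above
hdfs dfsadmin -report | grep -B 8 "DFS Used%: 100"
```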

Master Node: bdw21-13

[root@bdw21-13 logs]# hdfs dfsadmin -report

Configured Capacity: 107099623178240 (97.41 TB)

Present Capacity: 101596650946560 (92.40 TB)

DFS Remaining: 98975373618999 (90.02 TB)

DFS Used: 2621277327561 (2.38 TB)

DFS Used%: 2.58%

Under replicated blocks: 0

Blocks with corrupt replicas: 0

Missing blocks: 0

Missing blocks (with replication factor 1): 0

————————————————-

Live datanodes (7):

Name: 1.1.21.14:50010 (bdw21-14)

Hostname: bdw21-14

Rack: /default

Decommission Status : Normal

Configured Capacity: 15578127007744 (14.17 TB)

DFS Used: 371912425472 (346.37 GB)

Non DFS Used: 0 (0 B)

DFS Remaining: 14405782257664 (13.10 TB)

DFS Used%: 2.39%

DFS Remaining%: 92.47%

Configured Cache Capacity: 4294967296 (4 GB)

Cache Used: 0 (0 B)

Cache Remaining: 4294967296 (4 GB)

Cache Used%: 0.00%

Cache Remaining%: 100.00%

Xceivers: 2

Last contact: Wed Aug 21 01:21:31 CDT 2019

Name: 1.1.21.15:50010 (bdw21-15)

Hostname: bdw21-15

Rack: /default

Decommission Status : Normal

Configured Capacity: 15578127007744 (14.17 TB)

DFS Used: 344836427776 (321.15 GB)

Non DFS Used: 0 (0 B)

DFS Remaining: 14432858255360 (13.13 TB)

DFS Used%: 2.21%

DFS Remaining%: 92.65%

Configured Cache Capacity: 4294967296 (4 GB)

Cache Used: 0 (0 B)

Cache Remaining: 4294967296 (4 GB)

Cache Used%: 0.00%

Cache Remaining%: 100.00%

Xceivers: 2

Last contact: Wed Aug 21 01:21:32 CDT 2019

Name: 1.1.21.16:50010 (bdw21-16)

Hostname: bdw21-16

Rack: /default

Decommission Status : Normal

Configured Capacity: 15578127007744 (14.17 TB)

DFS Used: 386013941827 (359.50 GB)

Non DFS Used: 0 (0 B)

DFS Remaining: 14391680741309 (13.09 TB)

DFS Used%: 2.48%

DFS Remaining%: 92.38%

Configured Cache Capacity: 4294967296 (4 GB)

Cache Used: 0 (0 B)

Cache Remaining: 4294967296 (4 GB)

Cache Used%: 0.00%

Cache Remaining%: 100.00%

Xceivers: 2

Last contact: Wed Aug 21 01:21:31 CDT 2019

Name: 1.1.21.17:50010 (bdw21-17)

Hostname: bdw21-17

Rack: /default

Decommission Status : Normal

Configured Capacity: 14604494069760 (13.28 TB)

DFS Used: 433112637440 (403.37 GB)

Non DFS Used: 0 (0 B)

DFS Remaining: 13420976128000 (12.21 TB)

DFS Used%: 2.97%

DFS Remaining%: 91.90%

Configured Cache Capacity: 4294967296 (4 GB)

Cache Used: 0 (0 B)

Cache Remaining: 4294967296 (4 GB)

Cache Used%: 0.00%

Cache Remaining%: 100.00%

Xceivers: 2

Last contact: Wed Aug 21 01:21:33 CDT 2019

Name: 1.1.21.18:50010 (bdw21-18)

Hostname: bdw21-18

Rack: /default

Decommission Status : Normal

Configured Capacity: 14604494069760 (13.28 TB)

DFS Used: 207213023299 (192.98 GB)

Non DFS Used: 0 (0 B)

DFS Remaining: 13646875742141 (12.41 TB)

DFS Used%: 1.42%

DFS Remaining%: 93.44%

Configured Cache Capacity: 4294967296 (4 GB)

Cache Used: 0 (0 B)

Cache Remaining: 4294967296 (4 GB)

Cache Used%: 0.00%

Cache Remaining%: 100.00%

Xceivers: 3

Last contact: Wed Aug 21 01:21:31 CDT 2019

Name: 1.1.21.20:50010 (bdw21-20)

Hostname: bdw21-20

Rack: /default

Decommission Status : Normal

Configured Capacity: 15578127007744 (14.17 TB)

DFS Used: 421409095747 (392.47 GB)

Non DFS Used: 0 (0 B)

DFS Remaining: 14356285587389 (13.06 TB)

DFS Used%: 2.71%

DFS Remaining%: 92.16%

Configured Cache Capacity: 4294967296 (4 GB)

Cache Used: 0 (0 B)

Cache Remaining: 4294967296 (4 GB)

Cache Used%: 0.00%

Cache Remaining%: 100.00%

Xceivers: 2

Last contact: Wed Aug 21 01:21:31 CDT 2019

Name: 1.1.21.21:50010 (bdw21-21)

Hostname: bdw21-21

Rack: /default

Decommission Status : Normal

Configured Capacity: 15578127007744 (14.17 TB)

DFS Used: 456779776000 (425.41 GB)

Non DFS Used: 0 (0 B)

DFS Remaining: 14320914907136 (13.02 TB)

DFS Used%: 2.93%

DFS Remaining%: 91.93%

Configured Cache Capacity: 4294967296 (4 GB)

Cache Used: 0 (0 B)

Cache Remaining: 4294967296 (4 GB)

Cache Used%: 0.00%

Cache Remaining%: 100.00%

Xceivers: 2

Last contact: Wed Aug 21 01:21:31 CDT 2019

[root@bdw21-13 logs]#

[root@bdw21-13 logs]# hdfs dfsadmin -report > /home/tpc/hdfs-space-21Aug2019.txt

[root@bdw21-13 logs]# grep -i "DFS Used" /home/tpc/hdfs-space-21Aug2019.txt

DFS Used: 2621277425664 (2.38 TB)

DFS Used%: 2.58%

DFS Used: 371912437760 (346.37 GB)

Non DFS Used: 0 (0 B)

DFS Used%: 2.39%

DFS Used: 344836440064 (321.15 GB)

Non DFS Used: 0 (0 B)

DFS Used%: 2.21%

DFS Used: 386013954048 (359.50 GB)

Non DFS Used: 0 (0 B)

DFS Used%: 2.48%

DFS Used: 433112653824 (403.37 GB)

Non DFS Used: 0 (0 B)

DFS Used%: 2.97%

DFS Used: 207213031424 (192.98 GB)

Non DFS Used: 0 (0 B)

DFS Used%: 1.42%

DFS Used: 421409116160 (392.47 GB)

Non DFS Used: 0 (0 B)

DFS Used%: 2.71%

DFS Used: 456779792384 (425.41 GB)

Non DFS Used: 0 (0 B)

DFS Used%: 2.93%

DFS Used: 0 (0 B)

Non DFS Used: 0 (0 B)

DFS Used%: 100.00%


[root@bdw21-13 logs]# grep -i datanodes /home/tpc/hdfs-space-21Aug2019.txt

Live datanodes (7):

[root@bdw21-13 logs]#

References:

https://community.cloudera.com/t5/Support-Questions/quot-No-host-heartbeat-CDH-versions-cannot-be-verified-quot/td-p/5283

https://community.cloudera.com/t5/Support-Questions/Volume-failure-reported-while-disks-seem-fine/m-p/22706

Solution:

[Image: NameNode log location]

Error seen:

[Image: DataNode volume failure error in the NameNode log]

So I removed the /data/2/df/dn volume from the HDFS configuration in CDH and started the DataNode on node-17, which was the one giving the volume failure error.
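In Cloudera Manager this is done through the HDFS configuration page (the DataNode data directory list). On a plain Apache Hadoop install, the equivalent would be dropping the failed directory from dfs.datanode.data.dir in hdfs-site.xml on that node and restarting the DataNode; a sketch, using this cluster's directory layout:

```shell
# Value of dfs.datanode.data.dir before and after dropping the failed volume
# (directory names follow the /data/N/df/dn layout from this cluster)
old='/data/1/df/dn,/data/2/df/dn'
new=$(echo "$old" | sed 's#,/data/2/df/dn##')
echo "$new"   # /data/1/df/dn

# Then restart the DataNode role; the exact command depends on your packaging, e.g.:
# sudo service hadoop-hdfs-datanode restart
```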

Caveat: Before trying the steps above, I had experimented with dfs.datanode.failed.volumes.tolerated, which controls how many volume failures a DataNode tolerates before being marked dead (the default of 0 marks it dead on the first failure; a higher value would have kept the DataNode live). With the DataNode down we had under-replicated blocks in the cluster, as shown below (Fig 1), so I rebalanced the cluster to distribute the blocks among the 7 DataNodes. After resolving the volume failure by removing the failed volume from the HDFS configuration, the cluster then showed corrupt blocks (Fig 2); initially they were shown as missing.
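The rebalance mentioned above, plus a quick way to watch replication health around it, can be sketched like this (the 10% threshold is a common choice, not something recorded in this post):

```shell
# Replication health summary from fsck
hdfs fsck / | grep -E "Under-replicated|Mis-replicated|Average block replication"

# Spread blocks evenly across the DataNodes; -threshold is the allowed
# deviation (in percent) from average DataNode utilization
hdfs balancer -threshold 10
```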

Fig (1)

[Fig 1: under-replicated blocks warning]

Fig(2) :

[Fig 2: corrupt/missing blocks warning]

I fixed the above with the steps below:

  1. Find the corrupt blocks with the command below:

[root@bdw21-13 hadoop-conf]# hdfs fsck / | egrep -v '^\.+$' | grep -v replica | grep -v Replica

Connecting to namenode via http://bdw21-13:50070/fsck?ugi=root&path=%2F

FSCK started by root (auth:SIMPLE) from /1.1.21.13 for path / at Thu Aug 22 15:50:45 CDT 2019

/home/mukund/Big-Data-Benchmark-for-Big-Bench/data/store_sales/store_sales_206.dat: CORRUPT blockpool BP-1704754621-1.1.21.13-1558413910334 block blk_1079657961

/home/mukund/Big-Data-Benchmark-for-Big-Bench/data/store_sales/store_sales_206.dat: MISSING 1 blocks of total size 199777 B……………….

/home/mukund/Big-Data-Benchmark-for-Big-Bench/data/web_returns/web_returns_156.dat: CORRUPT blockpool BP-1704754621-1.1.21.13-1558413910334 block blk_1079658663

/home/mukund/Big-Data-Benchmark-for-Big-Bench/data/web_returns/web_returns_156.dat: MISSING 1 blocks of total size 14152 B………………………………………………………………………………..

/home/tpc/Big-Data-Benchmark-for-Big-Bench/data/reason/reason_023.dat: CORRUPT blockpool BP-1704754621-1.1.21.13-1558413910334 block blk_1077955383

/home/tpc/Big-Data-Benchmark-for-Big-Bench/data/reason/reason_023.dat: MISSING 1 blocks of total size 80 B……………………………………

/home/tpc/Big-Data-Benchmark-for-Big-Bench/data_refresh/store_sales/store_sales_172.dat: CORRUPT blockpool BP-1704754621-1.1.21.13-1558413910334 block blk_1077956851

/home/tpc/Big-Data-Benchmark-for-Big-Bench/data_refresh/store_sales/store_sales_172.dat: MISSING 1 blocks of total size 382385 B…………….

/user/hive/warehouse/bigbench10tb.db/store_returns/000172_0: CORRUPT blockpool BP-1704754621-1.1.21.13-1558413910334 block blk_1075509573

/user/hive/warehouse/bigbench10tb.db/store_returns/000172_0: MISSING 1 blocks of total size 45693754 B………..

…………………………………………………………………………………Status: CORRUPT

Total size:    863019635593 B

Total dirs:    3930

Total files:   44593

Total symlinks:                0

Total blocks (validated):      38733 (avg. block size 22281249 B)

********************************

UNDER MIN REPL'D BLOCKS:      5 (0.012908889 %)

CORRUPT FILES:        5

MISSING BLOCKS:       5

MISSING SIZE:         46290148 B

CORRUPT BLOCKS:       5

********************************

Corrupt blocks:                5

Number of data-nodes:          7

Number of racks:               1

FSCK ended at Thu Aug 22 15:50:45 CDT 2019 in 479 milliseconds

The filesystem under path '/' is CORRUPT

[root@bdw21-13 hadoop-conf]#

  2. So the corrupt files/blocks are these (5 matches):

/home/mukund/Big-Data-Benchmark-for-Big-Bench/data/store_sales/store_sales_206.dat

/home/mukund/Big-Data-Benchmark-for-Big-Bench/data/web_returns/web_returns_156.dat

/home/tpc/Big-Data-Benchmark-for-Big-Bench/data/reason/reason_023.dat

/home/tpc/Big-Data-Benchmark-for-Big-Bench/data_refresh/store_sales/store_sales_172.dat

/user/hive/warehouse/bigbench10tb.db/store_returns/000172_0

  3. Removed the corrupt files (the removed ones still exist in the trash).
  4. Some corrupt files are now shown in the trash (the very files removed above), so removing them permanently now.
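The removal and the permanent purge can be done from the command line along these lines; with trash enabled, a plain -rm only moves a file into the user's .Trash, which is why a second, permanent pass is needed:

```shell
# List only the corrupt files
hdfs fsck / -list-corruptfileblocks

# Remove a corrupt file (lands in the trash when trash is enabled)
hdfs dfs -rm /user/hive/warehouse/bigbench10tb.db/store_returns/000172_0

# Purge the trash copies permanently
hdfs dfs -expunge
# or delete them directly, bypassing the trash:
hdfs dfs -rm -r -skipTrash /user/root/.Trash/Current
```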

/user/root/.Trash/Current/home/mukund/Big-Data-Benchmark-for-Big-Bench/data/store_sales/store_sales_206.dat: CORRUPT blockpool BP-1704754621-1.1.21.13-1558413910334 block blk_1079657961

/user/root/.Trash/Current/home/mukund/Big-Data-Benchmark-for-Big-Bench/data/store_sales/store_sales_206.dat: MISSING 1 blocks of total size 199777 B..

/user/root/.Trash/Current/home/mukund/Big-Data-Benchmark-for-Big-Bench/data/web_returns/web_returns_156.dat: CORRUPT blockpool BP-1704754621-1.1.21.13-1558413910334 block blk_1079658663

/user/root/.Trash/Current/home/mukund/Big-Data-Benchmark-for-Big-Bench/data/web_returns/web_returns_156.dat: MISSING 1 blocks of total size 14152 B..

/user/root/.Trash/Current/home/tpc/Big-Data-Benchmark-for-Big-Bench/data/reason/reason_023.dat: CORRUPT blockpool BP-1704754621-1.1.21.13-1558413910334 block blk_1077955383

/user/root/.Trash/Current/home/tpc/Big-Data-Benchmark-for-Big-Bench/data/reason/reason_023.dat: MISSING 1 blocks of total size 80 B..

/user/root/.Trash/Current/home/tpc/Big-Data-Benchmark-for-Big-Bench/data_refresh/store_sales/store_sales_172.dat: CORRUPT blockpool BP-1704754621-1.1.21.13-1558413910334 block blk_1077956851

/user/root/.Trash/Current/home/tpc/Big-Data-Benchmark-for-Big-Bench/data_refresh/store_sales/store_sales_172.dat: MISSING 1 blocks of total size 382385 B..

/user/root/.Trash/Current/user/hive/warehouse/bigbench10tb.db/store_returns/000172_0: CORRUPT blockpool BP-1704754621-1.1.21.13-1558413910334 block blk_1075509573

/user/root/.Trash/Current/user/hive/warehouse/bigbench10tb.db/store_returns/000172_0: MISSING 1 blocks of total size 45693754 B…………………………………………………………………………..

……………………………………………………………………………………….

……………………………………………………………………………………….

…………………………………………………………………………………Status: CORRUPT

Total size:    863019635593 B

Total dirs:    3900

Total files:   44593

Total symlinks:                0

Total blocks (validated):      38733 (avg. block size 22281249 B)

********************************

UNDER MIN REPL'D BLOCKS:      5 (0.012908889 %)

dfs.namenode.replication.min: 1

CORRUPT FILES:        5

MISSING BLOCKS:       5

MISSING SIZE:         46290148 B

CORRUPT BLOCKS:       5

********************************

Minimally replicated blocks:   38728 (99.98709 %)

Over-replicated blocks:        0 (0.0 %)

Under-replicated blocks:       0 (0.0 %)

Mis-replicated blocks:         0 (0.0 %)

Default replication factor:    3

Average block replication:     2.9996128

Corrupt blocks:                5

Missing replicas:              0 (0.0 %)

Number of data-nodes:          7

Number of racks:               1

FSCK ended at Thu Aug 22 16:11:30 CDT 2019 in 1118 milliseconds

The filesystem under path '/' is CORRUPT

[root@bdw21-13 hadoop-conf]#

After permanently removing the trash copies, there were no more corrupt files, and the HDFS service came up with all 7 DataNodes.

Please comment if the above helps you, or for any clarifications.

Jenkins credentials issue

Today I faced the issue below after our GitHub Enterprise instance was moved to new hardware/network. This broke the data pipeline I already had in my Jenkins setup for Big Data.

Issue 1 (caused by Issue 2):

ssh: Could not resolve hostname github.hpe.com: Temporary failure in name resolution
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
Build step 'Execute shell' marked build as failure

Issue 2:

Failed to connect to repository : Command "git ls-remote -h https://github.company.com/github_usrname/project_name.git HEAD" returned status code 128:
stdout:
stderr: fatal: unable to access 'https://github.company.com/github_usrname/project_name.git/': Could not resolve host: github.hpe.com; Unknown error 

Solution:

Generate a personal access token by following the steps below:

a) In your GitHub instance, go to Settings –> Developer settings –> Personal access tokens

b) Click on Generate new token

c) Update the Jenkins credentials in your job (click on the labels quoted below):

Top left "Jenkins" –> "Credentials" –> delete the previous user entry and then recreate it with the new token.

d) If Issue 1 persists, update the SSH key from the Linux system on which you are using Git.

Go to https://github.company.com/settings/keys

Click on "New SSH key" and add the SSH key generated on your Linux system.

[root@hostname project-pipeline]# ssh-keygen -t rsa -b 4096 -C "email_id@company.com"
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
/root/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
5f:33:9a:df:3d:76:9c:ad:a3:b3:2e:b3:c3:a7:04:b8 email_id.g-n@company.com
The key's randomart image is:
+--[ RSA 4096]----+
| |
| |
| |
| . |
| . S + |
| . o + o |
| E .= .o|
| .=.o..=+|
| oO=++o+|
+-----------------+
[root@hostname project-pipeline]# eval "$(ssh-agent -s)"
Agent pid 5742
[root@hostname project-pipeline]#

[root@hostname project-pipeline]# cat /root/.ssh/id_rsa.pub
starts with ssh-rsa*************
[root@hostname project-pipeline]#
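Once the token and key are in place, both credential paths can be verified from the shell before re-running the Jenkins job; the user name, token, and repository below are placeholders:

```shell
# HTTPS with the personal access token in place of the password
git ls-remote -h https://github_usrname:YOUR_TOKEN@github.company.com/github_usrname/project_name.git HEAD

# SSH with the newly added key; should greet you by username if accepted
ssh -T git@github.company.com
```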

Reference:

https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token

https://help.github.com/en/github/authenticating-to-github/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent

https://help.github.com/en/enterprise/2.20/user/github/authenticating-to-github/checking-for-existing-ssh-keys

Learning how to learn

Hello! I took a Coursera course with the same title as this blog, and in order to retain what I learnt from it, I would like to write it down here. Enjoy, and see if this helps you learn new things.

It has often occurred to me that whatever I learnt, whether out of interest or out of need, fades away with time once I am no longer in touch with it or have moved on to other daily activities. So here are some tips to understand how your brain works when learning, and to help you learn new things in science, mathematics, or other abstract subjects, along with some tools and techniques to overcome behaviours like procrastination.

It's quite common to get stuck on a problem, often because your initial ideas about what the solution should be block your ability to see the real solution. So how do we handle such situations? Moreover, if the subject is too complicated to understand but you need to learn it for academic or professional reasons, how do you focus to achieve it?

The answer lies in some of the methods below.

  • Diffuse or focused thinking: Diffuse thinking has helped many great scientists, such as Thomas Alva Edison, compared with purely focused thinking. In focused thinking our brain forms patterns constrained to a limited space, which is generally not enough to retain very abstract material. Diffuse thinking, however, helps process new information by creating broader spatial patterns in the brain. Jogging, taking a bath, dozing off while holding a key (resting, then waking as it drops), and sleeping are some ways to trigger diffuse thinking.
  • Pomodoro technique: Focus on learning for 25 minutes with utmost attention, then reward yourself with a short break of 5-10 minutes: have a coffee, listen to music, or whatever you like. Continue this cycle of focused work and breaks; it is a classic way to overcome procrastination.
  • Memory retention: Sometimes sleeping on a problem repeatedly leads to a solution. Rather than learning all the new material in one day, learn it over many days and repeat the process (with summaries) to move the new knowledge from working memory to long-term memory. Sleep helps remove metabolic toxins from the brain and prunes less important memories, while strong, stable neural structures form during it, so resting and sleeping are good for long-term retention.

Did you know:

For many years the scientific view of the brain was that once our brain was mature the neurons we had could be strengthened with learning but new neurons couldn’t develop as we aged. A lot of people still believe this is true, which creates a pretty bleak outlook as they get older. But now scientists have better methods of watching the brain in action and they can see that our brains develop new neurons while we sleep, when we surround ourselves with stimulating environments and people – and when we exercise! Interestingly, even if we don’t have a stimulating environment exercise still assists our brains in growing new neurons.

More to come (to be continued): Chunks.

Reference:

“Learning How to Learn: Powerful mental tools to help you master tough subjects” from https://www.coursera.org/learn/learning-how-to-learn/home/

Disclaimer:

The content is written in my own words, though a few sentences may closely resemble the course content. I am sharing this to get more people to enroll in the course, which has helped me a lot in learning new or tough subjects. The intent is not plagiarism but to promote the course and help people.

Open-source KVM installation on RHEL 7

1) Verify hardware virtualization support

-bash-4.2# lscpu | grep Virtualization
Virtualization: VT-x
-bash-4.2#

2) Install packages / binaries

yum install qemu-kvm libvirt libvirt-python libguestfs-tools virt-install

3) Start services

systemctl enable libvirtd

systemctl start libvirtd

4) Verify KVM installation

-bash-4.2# lsmod | grep -i kvm
kvm_intel 174841 0
kvm 578518 1 kvm_intel
irqbypass 13503 1 kvm
-bash-4.2#

5) Configure bridged networking

-bash-4.2# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242acffc174 no
virbr0 8000.52540058ad34 yes virbr0-nic
-bash-4.2# virsh net-list
Name State Autostart Persistent
———————————————————-
default active yes yes

-bash-4.2#

6) By default, all VMs (guest machines) only have network access to other VMs on the same server. A private network, 192.168.122.0/24, is created for you. Verify it:

-bash-4.2# virsh net-dumpxml default
<network>
  <name>default</name>
  <uuid>5b3f1a22-6392-447c-83c1-6fb90847f1df</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:58:ad:34'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>

-bash-4.2#

7) Create a bridge on your LAN, so that your VMs have access to the outside network

vi /etc/sysconfig/network-scripts/ifcfg-eno2 (take backup before altering it)

add line:

BRIDGE=br0

vi /etc/sysconfig/network-scripts/ifcfg-br0

DEVICE="br0"
# I am getting ip from DHCP server #
BOOTPROTO="dhcp"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
ONBOOT="yes"
TYPE="Bridge"
DELAY="0"
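For reference, the physical NIC file after the edit might look like this minimal sketch (HWADDR/UUID lines omitted; keep whatever your backup of ifcfg-eno2 contains for those):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eno2 (minimal example)
DEVICE="eno2"
ONBOOT="yes"
BOOTPROTO="none"   # the IP now lives on br0, not on the NIC itself
BRIDGE="br0"
```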

8) Restart NetworkManager

systemctl restart NetworkManager

9) Verify using the command below

-bash-4.2# brctl show
bridge name bridge id STP enabled interfaces
br0 8000.000000000000 no
docker0 8000.0242acffc174 no
virbr0 8000.52540058ad34 yes virbr0-nic
-bash-4.2#

10) Create the VM

In this example, I'm creating a RHEL 7.5 VM with 2 GB RAM, 2 CPU cores, 1 NIC, and 40 GB of disk space.

Download the RHEL 7.5 ISO file and place it in the directory below:

/var/lib/libvirt/boot/

-bash-4.2# ls -lrt /var/lib/libvirt/boot/
total 0
-bash-4.2#

After download:

-bash-4.2# ls -lrt /var/lib/libvirt/boot/
total 4509696
-rw-r--r--. 1 root root 4617928704 Mar 22 2018 RHEL-7.5-20180322.0-Server-x86_64-dvd1.iso
-bash-4.2#

11) Issue seen while provisioning the RHEL 7 VM

-bash-4.2# virt-install --virt-type=kvm --name rhel7 --memory=2048,maxmemory=4096 --vcpus=2 --os-variant=rhel7.5 --cdrom=/var/lib/libvirt/boot/RHEL-7.5-20180322.0-Server-x86_64-dvd1.iso --network=bridge=virbr0,model=virtio --graphics vnc --disk path=/var/lib/libvirt/images/rhel7.qcow2,size=40,bus=virtio,format=qcow2
WARNING The requested volume capacity will exceed the available pool space when the volume is fully allocated. (40960 M requested capacity > 25473 M available)
ERROR The requested volume capacity will exceed the available pool space when the volume is fully allocated. (40960 M requested capacity > 25473 M available) (Use --check disk_size=off or --check all=off to override)
-bash-4.2#

Attempted solution: find some free space on one of your partitions and create a symbolic link so that the VM image directory points to that partition.

-bash-4.2# mkdir -p /home/media-rhel7.5/images
-bash-4.2#

Before linking, check the space on the partition holding the image pool. It has 25 GB available, but we need 40 GB:

-bash-4.2# df -h /var/lib/libvirt/images/
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 50G 26G 25G 51% /
-bash-4.2#

Free space available under /home:

bash-4.2# df -h /home/media-rhel7.5/images
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-home 318G 263G 55G 83% /home
-bash-4.2#

-bash-4.2# ln -fs /var/lib/libvirt/images/ .
-bash-4.2# ls -lrt
total 0
lrwxrwxrwx. 1 root root 24 Feb 18 04:44 images -> /var/lib/libvirt/images/
-bash-4.2# pwd
/home/media-rhel7.5/images
-bash-4.2#

-bash-4.2# ls -lrt
total 0
lrwxrwxrwx. 1 root root 26 Feb 18 04:54 images -> /home/media-rhel7.5/images
-bash-4.2# pwd
/var/lib/libvirt/images
-bash-4.2#

12) The above didn't work, so I requested 20 GB of space instead:

virt-install \
--virt-type=kvm \
--name rhel75 \
--memory=2048,maxmemory=4096 \
--vcpus=2 \
--os-variant=rhel7.5 \
--cdrom=/var/lib/libvirt/boot/RHEL-7.5-20180322.0-Server-x86_64-dvd1.iso \
--network=bridge=virbr0,model=virtio \
--graphics vnc \
--disk path=/var/lib/libvirt/images/rhel7.qcow2,size=20,bus=virtio,format=qcow2

13) Check the VNC display of the new VM

-bash-4.2# sudo virsh dumpxml rhel7 | grep vnc
<graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>
-bash-4.2#
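Since the VM's VNC server listens only on the host's loopback, one way in is an SSH tunnel from your workstation; "kvm-host" and the vncviewer client below are placeholders/assumptions:

```shell
# Extract the VNC port from the domain XML (5900 here)
virsh dumpxml rhel7 | grep -o "port='[0-9]*'" | head -1

# From your workstation: tunnel the port, then point a VNC client at it
ssh -L 5900:127.0.0.1:5900 root@kvm-host
vncviewer 127.0.0.1:5900
```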

14) How to share files between the RHEL 7.5 KVM host and the RHEL 7 guest:

http://nts.strzibny.name/how-to-set-up-shared-folders-in-virt-manager/

https://www.cyberciti.biz/faq/how-to-add-disk-image-to-kvm-virtual-machine-with-virsh-command/

Reference Link:

https://www.cyberciti.biz/faq/how-to-install-kvm-on-centos-7-rhel-7-headless-server/

https://forums.fedoraforum.org/showthread.php?318506-Trying-to-use-virtualization-and-installing-VMs

Share files between host and VMs:

http://www.linux-kvm.org/page/9p_virtio

https://unix.stackexchange.com/questions/86071/use-virt-manager-to-share-files-between-linux-host-and-windows-guest

A Conscience tale of tea

In retrospect, the moments are still vivid and arouse the same sensation of warmth I am feeling now. I am talking about making tea for my friend, who later became my girlfriend. This was during my bachelor days in a 1 BHK with my eldest brother in Bengaluru (the more lingual name), then known as Bangalore. I was studying for my Bachelor's in Engineering at one of the best colleges in Karnataka, BMSCE (which I visited a couple of days back with one of my classmates).

Agnostic to the art of making tea, I used to prepare it, as that was the only staple drink I had with some homemade snacks from my native Bhadravathi (a small town in Shivamogga district, further south into Karnataka from Bengaluru), in the southern Indian state of Karnataka. It has been almost 14 years since then, and I am enticed to write about the nostalgic moments of my initial engineering days, triggered by a colleague in the office who, a few hours ago, asked how to make a native filter coffee. So it's a bit of a misnomer in contrast to the title of this blog, but that's the emotion, a deja vu moment for me, and hence the blog at this point in time. The question is: what's the relationship between tea and my girlfriend? Well, it requires some preamble, and here it is…

Being from a modest middle-class family and an introvert by nature, I had gotten into a top-3 engineering college after much sweat, and so I had to do well in my studies to get a job. So I had pushed myself to get 75+ percent in my 1st semester, but to my dismay I ended up with 62% and a flunked subject, "Strength of Materials", a civil engineering paper. I was devastated and thought there was no light after this dungeon of failures. (However, a re-evaluation of my exam helped me get through with no hassle of a re-exam, which would have been a black mark!) That's when I met Samita* (of course it's a female name!), a student of my eldest brother (an IITian from Chennai, MSc Physics, an aspiring IAS officer at that point of time, teaching at Jain College, V V Puram, and a tutor at Universal Academy). It was my brother who introduced me to her, as I was alone in my 1 BHK most of the time and was very depressed about failing in the very 1st semester, contrary to my father's expectation that I would top the college, which was by far never in my mind. So it was only because of Samita* that I made some friends in college by the start of the second semester; that's the effect of the relationship between tea and girlfriend. In a way, she taught me to make new friends and incepted the idea that spending time with friends is not wasted time. I had been engrossed in becoming the most studious student, topping the college or at least getting 75+ percent so that I would be eligible for a job in my final semester. After the Y2K problem, landing a job at all was uncertain, and I was getting nowhere near that goal, failing at the initial step itself. So why was she trying to spend time with me, such a naive boy? That question still baffles me, but I think my role was just to be a friend, listening as one would to a drunkard dipped in misery.
Her father had died recently, maybe a couple of weeks before she met me, and I was a vent-out medium to bring her back to balance.

What more can I say about her: she was pretty and tall, not to say most appealing to me, and I became a die-hard fan of her thoughts, her English accent, and her down-to-earth nature. She especially liked the tea I used to prepare with my innate special process, with ingredients learned from my mother and brother and a pinch of vanilla from our own garden in Bhadravathi. My father had practiced pollinating the vanilla flowers, which later grow into beans that are dried and powdered for various food preparations. Sometimes she would just drop in on her Scooty (a two-wheeler) to have this special Red Label tea, riding over from Vijayanagar, a couple of miles from my room. Then, once, she did not turn up for a week or so; I phoned her home, but she was unavailable. My eyes were on stalks when I saw her waiting on the open stairs of my old rusty room in N. R. Colony, as I had least expected her presence, and I felt: how could this happen on my birthday!!

As usual, I prepared the special chai (the Indian name for tea) for her, and we talked for a long time; she offered to clean the books on the shelf, with a surprise gift in her small bag. It was on that special day she revealed that her father had left her in great despair, although there was no dearth of money in her family. I was moved by her intimacy in sharing such true feelings, and so she became my close friend; but deep in my heart, I fell in love with her, and hence the Conscience tale of tea ………..

 

To be continued………………………

*Samita: Name changed


Setting up Jenkins with Zeppelin on Apache Spark and Cassandra

Steps to install Jenkins and deploy a Spark cluster with Zeppelin Notebook and Cassandra using Jenkins

Basic Architecture:

Architecture_zepplin_spark_cassandra

PRE-REQUISITES to follow before beginning with the installation:

Follow the below steps on all the nodes (for both master and slave nodes)

  • Install Git on all the slave machines (excluding the Jenkins master), since the code is pulled from GitHub on the slaves.

$ yum install -y git

  • The firewall has to be disabled on all the nodes (since the Spark cluster will be deployed on the Jenkins slave machines), which is essential for the nodes in the Spark cluster to communicate.

$ systemctl stop firewalld

$ systemctl disable firewalld

  • Java has to be installed on all the nodes for Jenkins to correctly configure the slaves.

$ yum install -y java-1.8.0-openjdk-devel

  • Passwordless SSH has to be established between the Jenkins master and the Jenkins slaves, and also between the Spark master and the Spark slaves.

     Execute these steps on the Jenkins master machine.

$ ssh-keygen -t rsa  (Generate the ssh key).

$ ssh-copy-id <Jenkins slave machine IP>

Repeat step 2 (ssh-copy-id) for every slave machine that the Jenkins master has to connect to.

Execute these steps on the Spark master machine.

$ ssh-keygen -t rsa  (Generate the ssh key).

$ ssh-copy-id <Spark slave machine IP>

Repeat step 2 (ssh-copy-id) for every Spark slave machine that the Spark master has to connect to.
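When there are several slaves, the ssh-copy-id step above can be wrapped in a small loop. This is a dry-run sketch (each command is echoed rather than executed, and SLAVES is a hypothetical list you would replace with your own slave IPs):

```shell
# Hypothetical slave list; replace with your Jenkins/Spark slave IPs
SLAVES="192.168.1.11 192.168.1.12"

# Dry run: print the command for each slave; drop the echo to actually copy the key
for ip in $SLAVES; do
  echo ssh-copy-id "root@$ip"
done
```

The same loop works unchanged on the Jenkins master and the Spark master; only the contents of SLAVES differ.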

STEP 1.a : Install Jenkins [Jenkins master]

Please complete the pre-requisites before you begin with the Jenkins installation.

  • Jenkins is installed on CentOS using the following commands:

$ yum install java-1.8.0-openjdk-devel

$ curl --silent --location http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo | tee /etc/yum.repos.d/jenkins.repo

$ rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key

$ yum install jenkins

$ systemctl start jenkins

$ systemctl status jenkins

$ systemctl enable jenkins

After completing the above steps, copy the /root/.ssh/known_hosts file to the jenkins user's home directory.

$ mkdir /var/lib/jenkins/.ssh

$ cp /root/.ssh/known_hosts /var/lib/jenkins/.ssh/

$ chmod 777 /var/lib/jenkins/.ssh/known_hosts

  • By default while installing Jenkins a user named jenkins is created with home directory /var/lib/jenkins.
  • The admin password for Jenkins is present in /var/lib/jenkins/secrets.

$ cat /var/lib/jenkins/secrets/initialAdminPassword

  • If the system where Jenkins is installed is behind a corporate network, the installer asks for the proxy settings during installation.

 

STEP 1.b: Now add Jenkins Slaves to Jenkins Master

  • We can add Jenkins slaves to the Jenkins master using the Jenkins Web UI (GUI).

Login to the Jenkins Web UI at http://<master-ip-address>:8080

On Jenkins Web UI navigate to

Manage Jenkins → Manage Nodes → New Node

Give the node a name, select Permanent Agent, and click OK.

  • The screen shown below should appear.

Jenkins_setup1

Provide the details marked in red in the figure above, along with a unique label for each Jenkins slave, and add the SSH credentials.

  • Use the same label specified here in the Jenkinsfile. For this demo we have labelled the Spark master machine as "spark_master" and the Spark slaves as "spark_slave1" and "spark_slave2" respectively. Now click on SAVE.
  • Verify the installation as below by checking if all master and slave nodes are up and running:

Also click on the node name → Log and verify the details. The status should be "Agent successfully connected and online".

Jenkins_spark_cluster

STEP 2: Deploy Spark, Zeppelin Notebook and Cassandra using scripts from Git (not shared here but will update soon):

  • Create a pipeline job in Jenkins; this will download the scripts to /root/scripts on all the slave nodes.

Log in to the Jenkins Web UI at http://<JenkinsMasterIP>:8080

Click on New Item, select Pipeline, and give it a name.

Click OK.

Configure the job as shown below.

Jenkins_gitub

  • The Jenkinsfile is designed to deploy a 3-node (1 master and 2 slaves) Spark cluster. Edit it as per your requirements and push it back to Git, or use it directly in the Jenkins job by specifying the definition as "Pipeline script" instead of "Pipeline script from SCM".
  • Click on SAVE.
  • Click on Build Now to run the job.
  • You can monitor the job by clicking on the job name > console output.

 

DESCRIPTION OF THE SCRIPTS

Please refer to the below table:

All_jenkins_spark_cassandra_scripts

The execution of these scripts in order forms the pipeline in the Jenkinsfile.

The pipeline is described below.

zepplin_setup_spark_cassandra

To be continued…….

Performance Analysis with Docker

I am demonstrating how Docker is better in terms of performance compared to a VM. Using sysbench, a micro-benchmark used to measure CPU/memory/HDD performance, I will compare "sysbench running in a Docker container" vs "sysbench running in a VM".

I have a standalone/bare-metal box (the host) with RHEL 7.5 installed on it. I have created a RHEL 7.5 VM where sysbench's CPU benchmark is run, and its numbers are used as the yardstick to quantify Docker against VM performance.

 

A) Dockerfile: (CPU workload)

FROM centos:latest

RUN curl -s https://packagecloud.io/install/repositories/akopytov/sysbench/script.rpm.sh | bash
RUN yum -y install sysbench
CMD ["sysbench", "--threads=31", "--cpu-max-prime=1000", "cpu", "run"]

 

B)

-bash-4.2# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
lambzee/sysbench latest ec033d4c4a90 35 minutes ago 362MB
<none> <none> 8c74b451f649 37 minutes ago 362MB
<none> <none> cddacf83b468 39 minutes ago 362MB
<none> <none> ce7451874a1b About an hour ago 362MB
<none> <none> 4832fc2eafb5 About an hour ago 202MB
lambzee/catnip latest 8e56323c21b5 13 hours ago 700MB
busybox latest d8233ab899d4 3 days ago 1.2MB
registry 2 d0eed8dad114 2 weeks ago 25.8MB
alpine latest caf27325b298 2 weeks ago 5.53MB
hello-world latest fce289e99eb9 6 weeks ago 1.84kB
centos latest 1e1148e4cc2c 2 months ago 202MB
jenkinsci/blueocean latest 128631e0a9ef 6 months ago 442MB
python 3-onbuild 292ed8dee366 7 months ago 691MB
prakhar1989/static-site latest f01030e1dcf3 3 years ago 134MB
-bash-4.2#

 

HOST:

-bash-4.2# sysbench --threads=31 --cpu-max-prime=1000 cpu run
sysbench 1.1.0-18a9f86 (using bundled LuaJIT 2.1.0-beta3)

Running the test with following options:
Number of threads: 31
Initializing random number generator from current time
Prime numbers limit: 1000

Initializing worker threads…

Threads started!

CPU speed:
events per second: 866392.87

Throughput:
events/s (eps): 866392.8729
time elapsed: 10.0002s
total number of events: 8664142

Latency (ms):
min: 0.03
avg: 0.04
max: 4.05
95th percentile: 0.04
sum: 307785.97

Threads fairness:
events (avg/stddev): 279488.4516/3333.41
execution time (avg/stddev): 9.9286/0.01

 

C) docker run lambzee/sysbench:latest

DOCKER:

-bash-4.2# docker run lambzee/sysbench:latest
sysbench 1.0.16 (using bundled LuaJIT 2.1.0-beta2)

Running the test with following options:
Number of threads: 31
Initializing random number generator from current time
Prime numbers limit: 1000

Initializing worker threads…

Threads started!

CPU speed:
events per second: 857156.66

General statistics:
total time: 10.0002s
total number of events: 8572987

Latency (ms):
min: 0.03
avg: 0.04
max: 0.93
95th percentile: 0.04
sum: 307757.11

Threads fairness:
events (avg/stddev): 276547.9677/2703.01
execution time (avg/stddev): 9.9276/0.01

-bash-4.2#
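The two CPU runs above are close; the overhead implied by the events-per-second figures can be computed directly (numbers copied from the outputs above):

```shell
# Relative slowdown of the containerised run vs the bare-metal run
host_eps=866392.87      # HOST: events per second
docker_eps=857156.66    # DOCKER: events per second
awk -v h="$host_eps" -v d="$docker_eps" \
    'BEGIN { printf "container CPU overhead: %.2f%%\n", (h - d) / h * 100 }'
```

That works out to roughly a 1% difference, which is the usual ballpark for CPU-bound work in a container.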

D) Memory workload

Host:

sysbench --threads=50 memory run
sysbench 1.1.0-18a9f86 (using bundled LuaJIT 2.1.0-beta3)

Running the test with following options:
Number of threads: 50
Initializing random number generator from current time
Running memory speed test with the following options:
block size: 1KiB
total size: 102400MiB
operation: write
scope: global

Initializing worker threads…

Threads started!

Total operations: 70532349 (7052879.39 per second)

68879.25 MiB transferred (6887.58 MiB/sec)
Throughput:
events/s (eps): 7052879.3933
time elapsed: 10.0005s
total number of events: 70532349

Latency (ms):
min: 0.00
avg: 0.01
max: 37.01
95th percentile: 0.01
sum: 467129.95

Threads fairness:
events (avg/stddev): 1410646.9800/156437.47
execution time (avg/stddev): 9.3426/0.08

=============================================================

Docker:

=============================================================

docker build --tag sysmem .
Sending build context to Docker daemon 2.048kB
Step 1/4 : FROM centos:latest
—> 1e1148e4cc2c
Step 2/4 : RUN curl -s https://packagecloud.io/install/repositories/akopytov/sysbench/script.rpm.sh | bash
—> Using cache
—> a10fb8bbd2ea
Step 3/4 : RUN yum -y install sysbench
—> Using cache
—> 90762493db64
Step 4/4 : CMD ["sysbench", "--threads=50", "memory", "run"]
—> Running in 3a3e37337d4c
Removing intermediate container 3a3e37337d4c
—> 7edf2e84171a
Successfully built 7edf2e84171a
Successfully tagged sysmem:latest
-bash-4.2#

=============================================================

-bash-4.2# docker run 7edf2e84171a
sysbench 1.0.16 (using bundled LuaJIT 2.1.0-beta2)

Running the test with following options:
Number of threads: 50
Initializing random number generator from current time
Running memory speed test with the following options:
block size: 1KiB
total size: 102400MiB
operation: write
scope: global

Initializing worker threads…

Threads started!

Total operations: 88069879 (8805216.54 per second)

86005.74 MiB transferred (8598.84 MiB/sec)
General statistics:
total time: 10.0004s
total number of events: 88069879

Latency (ms):
min: 0.00
avg: 0.01
max: 13.02
95th percentile: 0.01
sum: 474722.50

Threads fairness:
events (avg/stddev): 1761397.5800/40857.85
execution time (avg/stddev): 9.4944/0.04

-bash-4.2#
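Interestingly, the memory run inside the container came out faster than the host run. The delta in the transfer rates above works out as below; a gap this size more likely reflects run-to-run variance or background load on the box than any inherent container advantage:

```shell
# Difference between the container and host memory transfer rates
host_rate=6887.58       # Host: MiB/sec
docker_rate=8598.84     # Docker: MiB/sec
awk -v h="$host_rate" -v d="$docker_rate" \
    'BEGIN { printf "container faster by: %.1f%%\n", (d - h) / h * 100 }'
```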

=============================================================

CPU and Memory both (note: when a Dockerfile contains two CMD instructions, only the last one takes effect, so the image built below will run only the memory workload):

-bash-4.2# docker build -t lambzee/syscpumem .
Sending build context to Docker daemon 2.048kB
Step 1/5 : FROM centos:latest
—> 1e1148e4cc2c
Step 2/5 : RUN curl -s https://packagecloud.io/install/repositories/akopytov/sysbench/script.rpm.sh | bash
—> Using cache
—> a10fb8bbd2ea
Step 3/5 : RUN yum -y install sysbench
—> Using cache
—> 90762493db64
Step 4/5 : CMD ["sysbench", "--threads=31", "--cpu-max-prime=1000", "cpu", "run"]
—> Using cache
—> ec033d4c4a90
Step 5/5 : CMD ["sysbench", "--threads=50", "memory", "run"]
—> Running in 55fabe5f89dd
Removing intermediate container 55fabe5f89dd
—> 634ad20f6941
Successfully built 634ad20f6941
Successfully tagged lambzee/syscpumem:latest
-bash-4.2#

 

docker image: forcing the build not to use the cache:

-bash-4.2# docker build --no-cache -t lambzee/syscpumem-cache .
Sending build context to Docker daemon 2.048kB
Step 1/5 : FROM centos:latest
—> 1e1148e4cc2c
Step 2/5 : RUN curl -s https://packagecloud.io/install/repositories/akopytov/sysbench/script.rpm.sh | bash
—> Running in 33a86db6260b
Detected operating system as centos/7.
Checking for curl…
Detected curl…
Downloading repository file: https://packagecloud.io/install/repositories/akopytov/sysbench/config_file.repo?os=centos&dist=7&source=script
done.
Installing pygpgme to verify GPG signatures…
Loaded plugins: fastestmirror, ovl
Determining fastest mirrors
* base: repo1.ash.innoscale.net
* extras: ftp.usf.edu
* updates: mirror.us-midwest-1.nexcess.net
Retrieving key from https://packagecloud.io/akopytov/sysbench/gpgkey
Importing GPG key 0x04DCFD39:
Userid : “https://packagecloud.io/akopytov/sysbench-prerelease (https://packagecloud.io/docs#gpg_signing) <support@packagecloud.io>”
Fingerprint: 9789 8d69 f99e e5ca c462 a0f8 cf10 4890 04dc fd39
From : https://packagecloud.io/akopytov/sysbench/gpgkey
Package pygpgme-0.3-9.el7.x86_64 already installed and latest version
Nothing to do
Installing yum-utils…
Loaded plugins: fastestmirror, ovl
Loading mirror speeds from cached hostfile
* base: repo1.ash.innoscale.net
* extras: ftp.usf.edu
* updates: mirror.us-midwest-1.nexcess.net
Package yum-utils-1.1.31-50.el7.noarch already installed and latest version
Nothing to do
Generating yum cache for akopytov_sysbench…
Importing GPG key 0x04DCFD39:
Userid : “https://packagecloud.io/akopytov/sysbench-prerelease (https://packagecloud.io/docs#gpg_signing) <support@packagecloud.io>”
Fingerprint: 9789 8d69 f99e e5ca c462 a0f8 cf10 4890 04dc fd39
From : https://packagecloud.io/akopytov/sysbench/gpgkey
Generating yum cache for akopytov_sysbench-source…

The repository is setup! You can now install packages.
Removing intermediate container 33a86db6260b
—> 5bb9a69415dd
Step 3/5 : RUN yum -y install sysbench
—> Running in 6b1b1ae15233
Loaded plugins: fastestmirror, ovl
Loading mirror speeds from cached hostfile
* base: repo1.ash.innoscale.net
* extras: ftp.usf.edu
* updates: mirror.us-midwest-1.nexcess.net
Resolving Dependencies
–> Running transaction check
—> Package sysbench.x86_64 0:1.0.16-1.el7.centos will be installed
–> Processing Dependency: libmysqlclient.so.18(libmysqlclient_18)(64bit) for package: sysbench-1.0.16-1.el7.centos.x86_64
–> Processing Dependency: libaio.so.1(LIBAIO_0.4)(64bit) for package: sysbench-1.0.16-1.el7.centos.x86_64
–> Processing Dependency: libaio.so.1(LIBAIO_0.1)(64bit) for package: sysbench-1.0.16-1.el7.centos.x86_64
–> Processing Dependency: libpq.so.5()(64bit) for package: sysbench-1.0.16-1.el7.centos.x86_64
–> Processing Dependency: libmysqlclient.so.18()(64bit) for package: sysbench-1.0.16-1.el7.centos.x86_64
–> Processing Dependency: libaio.so.1()(64bit) for package: sysbench-1.0.16-1.el7.centos.x86_64
–> Running transaction check
—> Package libaio.x86_64 0:0.3.109-13.el7 will be installed
—> Package mariadb-libs.x86_64 1:5.5.60-1.el7_5 will be installed
—> Package postgresql-libs.x86_64 0:9.2.24-1.el7_5 will be installed
–> Finished Dependency Resolution

Dependencies Resolved

================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
sysbench x86_64 1.0.16-1.el7.centos akopytov_sysbench 431 k
Installing for dependencies:
libaio x86_64 0.3.109-13.el7 base 24 k
mariadb-libs x86_64 1:5.5.60-1.el7_5 base 758 k
postgresql-libs x86_64 9.2.24-1.el7_5 base 234 k

Transaction Summary
================================================================================
Install 1 Package (+3 Dependent packages)

Total download size: 1.4 M
Installed size: 6.2 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/base/packages/libaio-0.3.109-13.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Public key for libaio-0.3.109-13.el7.x86_64.rpm is not installed
——————————————————————————–
Total 1.3 MB/s | 1.4 MB 00:01
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Importing GPG key 0xF4A80EB5:
Userid : “CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>”
Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
Package : centos-release-7-6.1810.2.el7.centos.x86_64 (@CentOS)
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : 1:mariadb-libs-5.5.60-1.el7_5.x86_64 1/4
Installing : postgresql-libs-9.2.24-1.el7_5.x86_64 2/4
Installing : libaio-0.3.109-13.el7.x86_64 3/4
Installing : sysbench-1.0.16-1.el7.centos.x86_64 4/4
Verifying : libaio-0.3.109-13.el7.x86_64 1/4
Verifying : postgresql-libs-9.2.24-1.el7_5.x86_64 2/4
Verifying : 1:mariadb-libs-5.5.60-1.el7_5.x86_64 3/4
Verifying : sysbench-1.0.16-1.el7.centos.x86_64 4/4

Installed:
sysbench.x86_64 0:1.0.16-1.el7.centos

Dependency Installed:
libaio.x86_64 0:0.3.109-13.el7 mariadb-libs.x86_64 1:5.5.60-1.el7_5
postgresql-libs.x86_64 0:9.2.24-1.el7_5

Complete!
Removing intermediate container 6b1b1ae15233
—> d84c1dd508a2
Step 4/5 : CMD ["sysbench", "--threads=31", "--cpu-max-prime=1000", "cpu", "run"]
—> Running in d7ca71ffbb47
Removing intermediate container d7ca71ffbb47
—> a9aa887d92ce
Step 5/5 : CMD ["sysbench", "--threads=50", "memory", "run"]
—> Running in 8705d8e86cc7
Removing intermediate container 8705d8e86cc7
—> 5f62b352a901
Successfully built 5f62b352a901
Successfully tagged lambzee/syscpumem-cache:latest
-bash-4.2#

================ A heavier CPU image (more threads, larger prime limit), later run with exposed ports ================

-bash-4.2# docker build --no-cache -t lambzee/cpu-bench:latest .
Sending build context to Docker daemon 2.048kB
Step 1/4 : FROM centos:latest
—> 1e1148e4cc2c
Step 2/4 : RUN curl -s https://packagecloud.io/install/repositories/akopytov/sysbench/script.rpm.sh | bash
—> Running in c8b10f729b81
Detected operating system as centos/7.
Checking for curl…
Detected curl…
Downloading repository file: https://packagecloud.io/install/repositories/akopytov/sysbench/config_file.repo?os=centos&dist=7&source=script
done.
Installing pygpgme to verify GPG signatures…
Loaded plugins: fastestmirror, ovl
Determining fastest mirrors
* base: ftp.usf.edu
* extras: mirror.us.oneandone.net
* updates: ftp.usf.edu
Retrieving key from https://packagecloud.io/akopytov/sysbench/gpgkey
Importing GPG key 0x04DCFD39:
Userid : “https://packagecloud.io/akopytov/sysbench-prerelease (https://packagecloud.io/docs#gpg_signing) <support@packagecloud.io>”
Fingerprint: 9789 8d69 f99e e5ca c462 a0f8 cf10 4890 04dc fd39
From : https://packagecloud.io/akopytov/sysbench/gpgkey
Package pygpgme-0.3-9.el7.x86_64 already installed and latest version
Nothing to do
Installing yum-utils…
Loaded plugins: fastestmirror, ovl
Loading mirror speeds from cached hostfile
* base: ftp.usf.edu
* extras: mirror.us.oneandone.net
* updates: ftp.usf.edu
Package yum-utils-1.1.31-50.el7.noarch already installed and latest version
Nothing to do
Generating yum cache for akopytov_sysbench…
Importing GPG key 0x04DCFD39:
Userid : “https://packagecloud.io/akopytov/sysbench-prerelease (https://packagecloud.io/docs#gpg_signing) <support@packagecloud.io>”
Fingerprint: 9789 8d69 f99e e5ca c462 a0f8 cf10 4890 04dc fd39
From : https://packagecloud.io/akopytov/sysbench/gpgkey
Generating yum cache for akopytov_sysbench-source…

The repository is setup! You can now install packages.
Removing intermediate container c8b10f729b81
—> 30ab22ef87d7
Step 3/4 : RUN yum -y install sysbench
—> Running in 8c02a2218b25
Loaded plugins: fastestmirror, ovl
Loading mirror speeds from cached hostfile
* base: ftp.usf.edu
* extras: mirror.us.oneandone.net
* updates: ftp.usf.edu
Resolving Dependencies
–> Running transaction check
—> Package sysbench.x86_64 0:1.0.16-1.el7.centos will be installed
–> Processing Dependency: libmysqlclient.so.18(libmysqlclient_18)(64bit) for package: sysbench-1.0.16-1.el7.centos.x86_64
–> Processing Dependency: libaio.so.1(LIBAIO_0.4)(64bit) for package: sysbench-1.0.16-1.el7.centos.x86_64
–> Processing Dependency: libaio.so.1(LIBAIO_0.1)(64bit) for package: sysbench-1.0.16-1.el7.centos.x86_64
–> Processing Dependency: libpq.so.5()(64bit) for package: sysbench-1.0.16-1.el7.centos.x86_64
–> Processing Dependency: libmysqlclient.so.18()(64bit) for package: sysbench-1.0.16-1.el7.centos.x86_64
–> Processing Dependency: libaio.so.1()(64bit) for package: sysbench-1.0.16-1.el7.centos.x86_64
–> Running transaction check
—> Package libaio.x86_64 0:0.3.109-13.el7 will be installed
—> Package mariadb-libs.x86_64 1:5.5.60-1.el7_5 will be installed
—> Package postgresql-libs.x86_64 0:9.2.24-1.el7_5 will be installed
–> Finished Dependency Resolution

Dependencies Resolved

================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
sysbench x86_64 1.0.16-1.el7.centos akopytov_sysbench 431 k
Installing for dependencies:
libaio x86_64 0.3.109-13.el7 base 24 k
mariadb-libs x86_64 1:5.5.60-1.el7_5 base 758 k
postgresql-libs x86_64 9.2.24-1.el7_5 base 234 k

Transaction Summary
================================================================================
Install 1 Package (+3 Dependent packages)

Total download size: 1.4 M
Installed size: 6.2 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/base/packages/libaio-0.3.109-13.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Public key for libaio-0.3.109-13.el7.x86_64.rpm is not installed
——————————————————————————–
Total 1.7 MB/s | 1.4 MB 00:00
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Importing GPG key 0xF4A80EB5:
Userid : “CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>”
Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
Package : centos-release-7-6.1810.2.el7.centos.x86_64 (@CentOS)
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : 1:mariadb-libs-5.5.60-1.el7_5.x86_64 1/4
Installing : postgresql-libs-9.2.24-1.el7_5.x86_64 2/4
Installing : libaio-0.3.109-13.el7.x86_64 3/4
Installing : sysbench-1.0.16-1.el7.centos.x86_64 4/4
Verifying : libaio-0.3.109-13.el7.x86_64 1/4
Verifying : postgresql-libs-9.2.24-1.el7_5.x86_64 2/4
Verifying : 1:mariadb-libs-5.5.60-1.el7_5.x86_64 3/4
Verifying : sysbench-1.0.16-1.el7.centos.x86_64 4/4

Installed:
sysbench.x86_64 0:1.0.16-1.el7.centos

Dependency Installed:
libaio.x86_64 0:0.3.109-13.el7 mariadb-libs.x86_64 1:5.5.60-1.el7_5
postgresql-libs.x86_64 0:9.2.24-1.el7_5

Complete!
Removing intermediate container 8c02a2218b25
—> 9ca25a74de05
Step 4/4 : CMD ["sysbench", "--threads=56", "--cpu-max-prime=100000", "cpu", "run"]
—> Running in 8362c03c719a
Removing intermediate container 8362c03c719a
—> c6bbc956dcb1
Successfully built c6bbc956dcb1
Successfully tagged lambzee/cpu-bench:latest
-bash-4.2#

Run a local registry

Use a command like the following to start the registry container:

$ docker run -d -p 5000:5000 --restart=always --name registry registry:2

 

docker pull jwholdsworth/dstat

First, the dstat Docker image:

Dockerfile:

FROM centos:latest

RUN yum -y install dstat
CMD ["dstat", "-tcmndylp", "--top-cpu"]

=========================== docker image – dstat created=====================

-bash-4.2# docker build -t lambzee/dstatmod .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM centos:latest
—> 1e1148e4cc2c
Step 2/3 : RUN yum -y install dstat
—> Running in c304e3a785f6
Loaded plugins: fastestmirror, ovl
Determining fastest mirrors
* base: mirror.teklinks.com
* extras: mirror.dal10.us.leaseweb.net
* updates: repos-tx.psychz.net
Resolving Dependencies
–> Running transaction check
—> Package dstat.noarch 0:0.7.2-12.el7 will be installed
–> Finished Dependency Resolution

Dependencies Resolved

================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
dstat noarch 0.7.2-12.el7 base 163 k

Transaction Summary
================================================================================
Install 1 Package

Total download size: 163 k
Installed size: 752 k
Downloading packages:
warning: /var/cache/yum/x86_64/7/base/packages/dstat-0.7.2-12.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Public key for dstat-0.7.2-12.el7.noarch.rpm is not installed
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Importing GPG key 0xF4A80EB5:
Userid : “CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>”
Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
Package : centos-release-7-6.1810.2.el7.centos.x86_64 (@CentOS)
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : dstat-0.7.2-12.el7.noarch 1/1
Verifying : dstat-0.7.2-12.el7.noarch 1/1

Installed:
dstat.noarch 0:0.7.2-12.el7

Complete!
Removing intermediate container c304e3a785f6
—> 091147fd2d64
Step 3/3 : CMD ["dstat", "-tcmndylp", "--top-cpu"]
—> Running in 259ecfb143cc
Removing intermediate container 259ecfb143cc
—> c0c062513de9
Successfully built c0c062513de9
Successfully tagged lambzee/dstatmod:latest

 

===========================================================

-bash-4.2# docker run -d --name sysbenchcont -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" lambzee/syscpumem:latest
3aa3564855529e4e2369172bcab1abebcef68ab8fb3c62044c9f5d84ea1ae9b8
-bash-4.2#

-bash-4.2# docker container logs sysbenchcont
sysbench 1.0.16 (using bundled LuaJIT 2.1.0-beta2)

Running the test with following options:
Number of threads: 50
Initializing random number generator from current time
Running memory speed test with the following options:
block size: 1KiB
total size: 102400MiB
operation: write
scope: global

Initializing worker threads…

Threads started!

Total operations: 84279286 (8426154.22 per second)

82303.99 MiB transferred (8228.67 MiB/sec)
General statistics:
total time: 10.0004s
total number of events: 84279286

Latency (ms):
min: 0.00
avg: 0.01
max: 12.67
95th percentile: 0.01
sum: 476269.34

Threads fairness:
events (avg/stddev): 1685585.7200/153335.46
execution time (avg/stddev): 9.5254/0.05

-bash-4.2#

======================== Docker compose ========================

https://docs.docker.com/compose/install/

-bash: docker-compose: command not found
-bash-4.2# sudo curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 617 0 617 0 0 1301 0 –:–:– –:–:– –:–:– 1304
100 11.2M 100 11.2M 0 0 873k 0 0:00:13 0:00:13 –:–:– 557k
-bash-4.2#

sudo chmod +x /usr/local/bin/docker-compose
-bash-4.2# docker-compose --version
docker-compose version 1.23.2, build 1110ad01
-bash-4.2#

-bash-4.2# docker-compose up -d
Creating network "compose_my_net" with driver "bridge"
Creating compose_dstatmon_1 … done
Creating compose_mem_1 … done
Creating compose_cpu_1 … done
-bash-4.2#

-bash-4.2# more docker-compose.yml
version: "3"
services:
  cpu:
    image: lambzee/cpu:latest
    depends_on:
      - dstatmon
    networks:
      - my_net
  mem:
    image: lambzee/mem:latest
    depends_on:
      - dstatmon
    networks:
      - my_net

  dstatmon:
    image: lambzee/dstatmon:latest
    networks:
      - my_net

networks:
  my_net:
    driver: bridge

-bash-4.2#

-bash-4.2# docker network ls
NETWORK ID NAME DRIVER SCOPE
c70967cabb58 bridge bridge local
2a64a26e5fbf compose_my_net bridge local
14663c854d6b host host local
93fe31b80ab4 none null local
-bash-4.2#

bash-4.2# docker-compose down
Stopping compose_cpu_1 … done
Stopping compose_dstatmon_1 … done
Removing compose_cpu_1 … done
Removing compose_mem_1 … done
Removing compose_dstatmon_1 … done
Removing network compose_my_net
-bash-4.2#

 

-bash-4.2# docker system prune -a
WARNING! This will remove:
– all stopped containers
– all networks not used by at least one container
– all images without at least one container associated to them
– all build cache
Are you sure you want to continue? [y/N] y
Deleted Networks:
my_bridge_nt

Deleted Images:
untagged: 10.10.0.130:5000/cpu:latest
untagged: lambzee/cpu:latest
deleted: sha256:16f1187689039f15342a0c4d3ca05a02b46de2d8118a016059991bcbd3805a3b
untagged: lambzee/mem:latest
deleted: sha256:a0836afe0b89afd30ffd07799c6faef56bdb067dfec277d273be28a606268fd1
deleted: sha256:df88dbd115c75304cdd4ae0dc4d3bf143f6d945301fbd2a5df02fc5d8f026b02
deleted: sha256:a85b45691794a023d9efb6fbda1a0394b7039b2437848d5dbdb9196c0a214f3b
deleted: sha256:95313c457389656086c3972ef7ae846e009a31eab7dc1ea0ec18f5a8c92b6e87
deleted: sha256:672d46ea8d44714aae36bd55e5d2f61a00bda54069d9b981ad8cff5f71c2fe95
untagged: centos:latest
untagged: centos@sha256:184e5f35598e333bfa7de10d8fb1cebb5ee4df5bc0f970bf2b1e7c7345136426
deleted: sha256:1e1148e4cc2c148c6890a18e3b2d2dde41a6745ceb4e5fe94a923d811bf82ddb
deleted: sha256:071d8bd765171080d01682844524be57ac9883e53079b6ac66707e192ea25956
untagged: busybox:latest
untagged: busybox@sha256:061ca9704a714ee3e8b80523ec720c64f6209ad3f97c0ff7cb9ec7d19f15149f
deleted: sha256:d8233ab899d419c58cf3634c0df54ff5d8acc28f8173f09c21df4a07229e1205
deleted: sha256:adab5d09ba79ecf30d3a5af58394b23a447eda7ffffe16c500ddc5ccb4c0222f

Total reclaimed space: 363MB
-bash-4.2#

Docker Expounded

Steps to setup docker environment and Demo

  1. Docker Installation and its Life Cycle
  2. Demo of Docker Life Cycle with cpu, memory intensive workload (sysbench) and dstat monitoring tool as containers.

 

Step (A)

Docker Installation:  (on a RHEL system)

 

Hello from Docker!

This message shows that your installation appears to be working correctly….

 

  • Start the docker daemon: "sudo dockerd &"

 

 

Docker Life-Cycle:  (on a RHEL system)

  • Creating a Dockerfile: In a new terminal, say under the folder /home/docker/cpu_cont/, create a Dockerfile as below (it defines a Docker image).

 

The file name should be "Dockerfile": (CPU-intensive workload)

Bash#>/home/docker/cpu_cont>more Dockerfile

FROM centos:latest

RUN curl -s https://packagecloud.io/install/repositories/akopytov/sysbench/script.rpm.sh | bash
RUN yum -y install sysbench
CMD ["sysbench", "--threads=50", "--time=3600", "--cpu-max-prime=1000", "cpu", "run"]


Similarly, create the Docker images for the memory-intensive workload and the dstat monitoring tool under /home/docker/mem_cont/ and /home/docker/dstat_cont/ respectively, as shown below.

Bash#>/home/docker/mem_cont> more Dockerfile

FROM centos:latest

RUN curl -s https://packagecloud.io/install/repositories/akopytov/sysbench/script.rpm.sh | bash

RUN yum -y install sysbench

CMD ["sysbench", "--threads=20", "--time=3600", "memory", "run"]

 

 

Bash#>/home/docker/dstat_cont> more Dockerfile

FROM centos:latest

RUN yum -y install dstat

CMD ["dstat", "-tcmndylp", "--top-cpu"]

  • Creating a registry to store images: Now create a registry in the system to push and pull images as needed.

Bash#> docker ps

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

=============================================================

Bash#> docker run -d -p 5000:5000 --restart=always --name registry registry:2

f9d3a0e22816c1ee4cf20b6cea25cd020ceb8e44db0a5a9daada9b95afa230db

=============================================================

Bash-4> docker ps

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES

f9d3a0e22816        registry:2          “/entrypoint.sh /etc…”   4 seconds ago       Up 3 seconds        0.0.0.0:5000->5000/tcp   registry

Bash#>
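With the registry container up, local images can be re-tagged against it and pushed. A dry-run sketch (the commands are echoed, not executed; localhost:5000 is the registry started above, and cpu/mem are the image names used in this demo):

```shell
REGISTRY="localhost:5000"

# Dry run: show the tag/push pair for each image; drop the echo to really push
for img in cpu mem; do
  echo docker tag  "lambzee/$img:latest" "$REGISTRY/$img:latest"
  echo docker push "$REGISTRY/$img:latest"
done
```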

  • Create a Docker Hub account at https://hub.docker.com/; just sign up following the easy instructions on the portal.
  • Push the locally created images to the Docker Hub repo. I have a Docker Hub login with the username "lambzee", which is used to push them.

Pushing the cpu image to dockerhub:

bash#> docker push lambzee/cpu:latest

The push refers to repository [docker.io/lambzee/cpu]

0ad8619a5950: Pushed

9f4917957ac9: Pushed

071d8bd76517: Mounted from library/centos

latest: digest: sha256:c12b6c3b61f40ffd811e79ce5b31c21a1532be8b1f0e2845028208a51089c932 size: 953

bash#>

 

Pushing the mem image to Docker Hub:

bash#> docker push lambzee/mem:latest

The push refers to repository [docker.io/lambzee/mem]

0ad8619a5950: Mounted from lambzee/cpu

9f4917957ac9: Mounted from lambzee/cpu

071d8bd76517: Mounted from lambzee/cpu

latest: digest: sha256:107baac65649b0c01418ada047d4ca79c0b33d4d628e631dedef6a7d56d80371 size: 953

bash#>

 

The push can be verified by checking online at https://hub.docker.com as shown below.

(Screenshot: Docker Hub showing the pushed repositories)
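As a side note, pushing the same images into the local registry started earlier would require re-tagging them with the registry host prefix, since Docker resolves image names as `registry/repository:tag`. Below is a minimal sketch of that naming convention; `localhost:5000` matches the registry container above, and the helper function itself is just illustrative, not part of any Docker tooling.

```python
def registry_tag(image, registry="localhost:5000"):
    """Prefix an image name with a registry host, keeping the existing tag."""
    repo, _, tag = image.partition(":")
    return f"{registry}/{repo}:{tag or 'latest'}"

# The commands to mirror the cpu image into the local registry would then be:
local = registry_tag("lambzee/cpu:latest")
print(f"docker tag lambzee/cpu:latest {local}")
print(f"docker push {local}")
```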

Step (B): Demo of the Docker life cycle with cpu- and memory-intensive workloads (sysbench) and the dstat monitoring tool as containers.

We have seen how to create a Docker image and push it to Docker Hub. In the steps below, we will see how containers are run in detached mode and inspected, how two running containers can talk to each other, and how to convert an image into a container and vice versa.

Finally, we conclude with docker-compose usage.

Before we start, I would like to introduce the four network layouts of Docker containers, namely none, bridge, host and overlay.

(Diagram: Docker network layouts - none, bridge, host and overlay)

As shown above, when we start containers on the none network, they cannot talk to each other and are isolated within the same Docker host. Now let's start the cpu and dstat containers on the none network.

bash#> docker run -d --net none lambzee/cpu:latest

7fae95a0ccd3a5f9a72047a9936e38e5d8721b0ea18291ae2d409489010f9fe3

bash#> docker ps

CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS               NAMES

7fae95a0ccd3        lambzee/cpu:latest   "sysbench --threads=…"   5 seconds ago       Up 5 seconds                            zen_chatterjee

bash#>

 

Also start the dstat monitor, which runs the dstat OS monitoring tool in another container, as below.

bash#> docker run -d --net none lambzee/dstat:latest

b134f38793296493fb490c0a1652873040a63d8b26250bb553185a52089211e5

bash#> docker ps

CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES

b134f3879329        lambzee/dstat:latest   "dstat -tcmndylp --t…"   6 seconds ago       Up 5 seconds                            blissful_raman

57ca9ece9ec3        lambzee/cpu:latest     "sysbench --threads=…"   5 minutes ago       Up 5 minutes                            zen_chatterjee

bash#>

 

Let's check whether the container is connected to the outside world by pinging 10.115.50.32 (my iLO IP) from inside it, and then compare with the Docker host machine.

bash#> docker exec -it 57ca9ece9ec3 bash

[root@57ca9ece9ec3 /]# ping 10.115.50.32

connect: Network is unreachable

[root@57ca9ece9ec3 /]#

Let's now check whether the host itself can reach my iLO IP.

 

bash#> ping 10.115.50.32

PING 10.115.50.32 (10.115.50.32) 56(84) bytes of data.

64 bytes from 10.115.50.32: icmp_seq=1 ttl=255 time=0.348 ms

64 bytes from 10.115.50.32: icmp_seq=2 ttl=255 time=0.123 ms

64 bytes from 10.115.50.32: icmp_seq=3 ttl=255 time=0.131 ms

^C

— 10.115.50.32 ping statistics —

3 packets transmitted, 3 received, 0% packet loss, time 2003ms

rtt min/avg/max/mdev = 0.123/0.200/0.348/0.105 ms

bash#>

 

[root@57ca9ece9ec3 /]# ifconfig

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536

inet 127.0.0.1  netmask 255.0.0.0

loop  txqueuelen 1000  (Local Loopback)

RX packets 0  bytes 0 (0.0 B)

RX errors 0  dropped 0  overruns 0  frame 0

TX packets 0  bytes 0 (0.0 B)

TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

 

[root@57ca9ece9ec3 /]#

The none network model above shows that only the loopback interface is available inside the container, which suits workloads that need complete network isolation.

bash#> docker network inspect none

[
    {
        "Name": "none",
        "Id": "93fe31b80ab4aef7d2fb70176c46b52c991c9d08d604b740e1b37fefe9496c1d",
        "Created": "2018-08-02T08:37:50.850741608-04:00",
        "Scope": "local",
        "Driver": "null",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "3cbef6faa8a997e109da0a1baf63b3ce09e5aab1ba55310e4aea61e3a4a2e01d": {
                "Name": "stupefied_shtern",
                "EndpointID": "25bcb01cbd3796c6d1fd5841adb145c68f1673dc75f29f1077b467df2fdff36e",
                "MacAddress": "",
                "IPv4Address": "",
                "IPv6Address": ""
            },
            "55c1437c0daf152127947b999b9f86b3b1f39cfa7a01cf2633bdb3118e79f6ef": {
                "Name": "compassionate_bhaskara",
                "EndpointID": "f749c3cdd51b5e85a908ee61a95011634c61c69b066fe1cb2a5fad078717bb8f",
                "MacAddress": "",
                "IPv4Address": "",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
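Since `docker network inspect` emits plain JSON, the isolation can also be confirmed programmatically. The sketch below is illustrative: it parses a trimmed, hard-coded sample of the output shown above (in practice you would capture the real output of `docker network inspect none` into the string) and checks that containers on the none network have no MAC or IP address at all.

```python
import json

# Trimmed sample of the `docker network inspect none` output shown above.
inspect_output = """
[
  {
    "Name": "none",
    "Driver": "null",
    "Containers": {
      "3cbef6faa8a9": {"Name": "stupefied_shtern", "MacAddress": "", "IPv4Address": "", "IPv6Address": ""},
      "55c1437c0daf": {"Name": "compassionate_bhaskara", "MacAddress": "", "IPv4Address": "", "IPv6Address": ""}
    }
  }
]
"""

networks = json.loads(inspect_output)
for net in networks:
    print(f"network={net['Name']} driver={net['Driver']}")
    for cid, info in net["Containers"].items():
        # On the none network, containers get no MAC/IP at all.
        isolated = not (info["MacAddress"] or info["IPv4Address"] or info["IPv6Address"])
        print(f"  {info['Name']} ({cid}): isolated={isolated}")
```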

bash#> docker ps

CONTAINER ID        IMAGE                  COMMAND                  CREATED              STATUS              PORTS               NAMES

55c1437c0daf        lambzee/cpu:latest     "sysbench --threads=…"   About a minute ago   Up About a minute                       compassionate_bhaskara

3cbef6faa8a9        lambzee/dstat:latest   "dstat -tcmndylp --t…"   3 minutes ago        Up 3 minutes                            stupefied_shtern

bash#>

Installation and Use of Docker Compose:

Download and install docker-compose as below. Docker Compose enables running more than one service in a desired container topology, thus enabling communication in a multi-container environment within a host or across hosts. In the case below, only within-host communication is shown.

    https://docs.docker.com/compose/install/

bash#> docker-compose
docker-compose: command not found
bash#> sudo curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 617 0 617 0 0 1301 0 --:--:-- --:--:-- --:--:-- 1304
100 11.2M 100 11.2M 0 0 873k 0 0:00:13 0:00:13 --:--:-- 557k
bash#>

bash#> sudo chmod +x /usr/local/bin/docker-compose
bash#> docker-compose --version
docker-compose version 1.23.2, build 1110ad01
bash#>

 

 

The docker-compose.yml file is used to define which services to run, which networks the services use, and what topology connects these services. For the docker-compose.yml below, I am using a separate yml file in a newly created directory, as shown.

 

-bash-4.2# pwd

/home/hs/docker/compose

-bash-4.2#

 

-bash-4.2# more docker-compose.yml

version: "3"
services:
  cpu:
    image: lambzee/cpu:latest
    depends_on:
      - dstatmon
    networks:
      - my_net
  mem:
    image: lambzee/mem:latest
    depends_on:
      - dstatmon
    networks:
      - my_net

  dstatmon:
    image: lambzee/dstatmon:latest
    networks:
      - my_net

networks:
  my_net:
    driver: bridge

-bash-4.2#

From the above we see that the cpu- and memory-intensive workloads are started on the common "my_net" network with the "bridge" driver. The "depends_on" keyword ensures that the dstatmon service is started before cpu and mem, so that each workload can be monitored with dstat.
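Under the hood, `depends_on` simply imposes a start order on the services. A minimal sketch of that ordering logic, a plain topological sort over the dependency map from the yml file (this is an illustration, not docker-compose's actual implementation), looks like this:

```python
# Dependency map taken from the docker-compose.yml above:
# cpu and mem each depend on dstatmon.
depends_on = {
    "cpu": ["dstatmon"],
    "mem": ["dstatmon"],
    "dstatmon": [],
}

def start_order(deps):
    """Return a service start order honouring depends_on (simple topological sort)."""
    order, started = [], set()
    while len(order) < len(deps):
        progressed = False
        for svc, reqs in sorted(deps.items()):
            if svc not in started and all(r in started for r in reqs):
                order.append(svc)
                started.add(svc)
                progressed = True
        if not progressed:
            raise ValueError("dependency cycle detected")
    return order

print(start_order(depends_on))  # dstatmon comes first
```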

 

One can start the docker compose as below.

bash#> docker-compose up -d
Creating network “compose_my_net” with driver “bridge”
Creating compose_dstatmon_1 … done
Creating compose_mem_1 … done
Creating compose_cpu_1 … done
bash#>


bash#> docker network ls
NETWORK ID NAME DRIVER SCOPE
c70967cabb58 bridge bridge local
2a64a26e5fbf compose_my_net bridge local
14663c854d6b host host local
93fe31b80ab4 none null local
bash#>

 

One can stop the docker compose as below.

bash-4.2# docker-compose down
Stopping compose_cpu_1 … done
Stopping compose_dstatmon_1 … done
Removing compose_cpu_1 … done
Removing compose_mem_1 … done
Removing compose_dstatmon_1 … done
Removing network compose_my_net
bash#>

 

Future work:

To show a comparison between virtual machines and Docker, and to demonstrate services/containers across hosts: a swarm-mode Docker daemon and a Docker daemon in a VM, both connected by an overlay network.

 

Performance Data Analysis - R Scripts

In this blog, I would like to introduce some of the simple RStudio R scripts I have been using for performance data analysis of data obtained with the turbostat tool. Turbostat is a Linux command-line utility that reports not only the frequency of every core but also the C-state status during performance runs of the various configurations exercised in our benchmarking.

For those not familiar with the benchmarking world, let me explain things in a nutshell. When e-commerce, banking, insurance, or other financial companies try to fuel their business with more customers, they upgrade their hardware and look for the best servers on the market, something that gives value for their money. Best servers, meaning the ones which provide optimal performance or price-performance on industry-standard benchmarks, for instance from a consortium such as SPEC (www.spec.org). These servers could, in turn, belong to on-premise or cloud infrastructure.

Without further ado, let me introduce the R scripts I am using for performance data analysis. In the future, I plan to provide this as a dashboard, so that it becomes available as a service to whoever wants to use it on their turbostat raw data.

You should have both R and RStudio installed on your laptop. Google it or just refer here: https://www.ics.uci.edu/~sternh/courses/210/InstallingRandRStudio.pdf

Note that you have to first install R and then install Rstudio.

R script:

1) library(readr)

Loads the readr package required for reading the given dataset into RStudio.
2) library("ggplot2")

This is used for the visualization you want to plot.
3) turbostat_data <- read_table2(file.choose(), col_names = TRUE, na = "NA")

I am creating a dataset above so I can plot its columns.
4) summary(turbostat_data)

See the RStudio trace logs below, which show some statistics of the given dataset (each column).
5) cpu <- subset(turbostat_data, select = c(Core, Avg_MHz, `Busy%`))

6) colnames(cpu) <- c("Core", "Avg_MHz", "cpuBusy")   # rename the y-axis columns
7) x <- seq(1, 5, length = nrow(cpu))                 # x-axis
8) ggplot(cpu, aes(x = x, y = cpu$Avg_MHz, colour = Core)) + geom_line() + labs(title = "Workload sles15_dl380-SUT_16GB_DIMM_2GHz_2dpc Core Operating Frequency vs Time", caption = "Source: Trim_turbostat-16gbdimm-2GHz-_2dpc", subtitle = "redis")
9) ggsave("\\Graphs\\1-jan-2019rom-Workload_sles15_Standalone_Server_16GB_DIMM_2GHz_2dpc_Core_Freq_vs_time.jpeg", plot = last_plot(), device = NULL)

The above saves the graph at the given path on the laptop where you have R and RStudio installed.
10) cpudata <- subset(turbostat_data, `Busy%` < 100, select = c(Core, Avg_MHz, `Busy%`))
11) colnames(cpudata) <- c("Core", "Avg_MHz", "cpuBusy")
12) p <- seq(1, 5, length = nrow(cpudata))
13) ggplot(cpudata, aes(x = p, y = cpudata$cpuBusy, colour = Core)) + geom_line() + labs(title = "Workload sles15_dl380-SUT_16GB_DIMM_2GHz_2dpc Core Utilizations vs Time", caption = "Source: Trim_turbostat-16gbdimm-2GHz-_2dpc", subtitle = "redis")
14) ggsave("Graphs\\1-jan-2019rom-Workload_sles15_Standalone_Server_16GB_DIMM_2GHz_2dpc_Core_Util_vs_time.jpeg", plot = last_plot(), device = NULL)
15) p <- seq(1, 5, length = nrow(turbostat_data))
16) ggplot(turbostat_data, aes(x = p, y = turbostat_data$CoreTmp, colour = Core)) + geom_line() + labs(title = "Workload sles15_dl380-SUT_16GB_DIMM_2GHz_2dpc Core Temp vs Time", caption = "Source: Trim_turbostat-16gbdimm-2GHz-_2dpc", subtitle = "redis")
17) ggsave("Graphs\\1-jan-2019rom-Workload_sles15_Standalone_Server_16GB_DIMM_2GHz_2dpc_Core_Temp_vs_time.jpeg", plot = last_plot(), device = NULL)
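As an aside, the same filtering and per-core aggregation can be sketched in plain Python with the standard library alone, for readers who prefer it. The column names (Core, Avg_MHz, Busy%) follow the turbostat headers used above; the inline sample data is made up purely for illustration.

```python
import statistics

# Made-up sample in turbostat-like whitespace-delimited form.
raw = """Core Avg_MHz Busy%
0 2100 75.0
1 2050 60.0
0 2000 100.0
1 1995 55.0
"""

# Equivalent of read_table2(): split on whitespace, first row is the header.
rows = [line.split() for line in raw.strip().splitlines()]
header, data = rows[0], rows[1:]
records = [dict(zip(header, r)) for r in data]

# Equivalent of subset(turbostat_data, `Busy%` < 100, ...): drop fully-busy samples.
cpudata = [r for r in records if float(r["Busy%"]) < 100]

# Mean frequency per core, similar to what the ggplot lines visualize.
per_core = {}
for r in cpudata:
    per_core.setdefault(r["Core"], []).append(float(r["Avg_MHz"]))
means = {core: statistics.mean(v) for core, v in per_core.items()}
print(means)
```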

Visualizations look something like below.

(Graphs: Core_Temp_vs_time, Core_Util_vs_time and Core_Freq_vs_time for Workload_sles15_Standalone_Server_16GB_DIMM_2933_2dpc)

The first graph, from top to bottom, provides details on core temperatures over the workload execution, whereas the second and third graphs show how core utilization and core frequency vary as the workload execution progresses.

So how can we use this for performance analysis?
In the core frequency vs. workload execution graph, we can check whether the core frequency is as expected from the processor specifications, or else raise an alarm to the ROM team to look into the issue.

Another observation we can make is to check the core utilization / SUT (system under test) utilization achieved by the workload. If it is not close to 80%+, we need to check why, as it might point to a hardware or software stack performance bottleneck. One can write scripts for many use cases like this, keep them as templates, and reuse them for regression analysis.
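That utilization check is easy to template. Here is a minimal sketch; the 80% threshold is the rule of thumb mentioned above, and `busy_samples` is a hypothetical list of Busy% readings.

```python
def utilization_ok(busy_samples, threshold=80.0):
    """Flag a run whose average SUT utilization falls below the threshold."""
    avg = sum(busy_samples) / len(busy_samples)
    return avg >= threshold, avg

# Example: a run averaging well under 80% should be flagged for investigation.
ok, avg = utilization_ok([70.0, 65.0, 72.0])
print(f"avg={avg:.1f}% ok={ok}")
```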

I know there are many visualization tools available to do this job, but with R and RStudio the advantage is that you can get these scripts ready and produce the plots very quickly compared to Excel. This depends on individual preference and the tool the end user is comfortable with. The underlying idea is to understand performance anomalies and identify issues early.

Future work:

I am working on integrating this into JupyterLab and making it configurable for datasets ranging from under 1 GB up to 1 TB, using Python. I will keep you posted on the same. Hope this is useful.

Disclaimer:

The postings on this site are my own and do not necessarily represent my employer’s positions, strategies, or opinions.

Shreeharsha GN