Channel: GoLinuxHub

How to check how many HBA cards or ports are available in my Linux setup?

In my earlier article I shared a bunch of commands which can help a newbie understand dm-multipath:

Tutorial/Cheatsheet: Beginner's Guide to Understanding Device Mapper Multipath for Linux


Method 1

On a Linux box with an Emulex card you can view the available HBA ports using the below command
# lspci  | grep -i fibre
04:00.2 Fibre Channel: Emulex Corporation OneConnect 10Gb FCoE Initiator (be3) (rev 01)
04:00.3 Fibre Channel: Emulex Corporation OneConnect 10Gb FCoE Initiator (be3) (rev 01)

Here, as you can see, my node has an Emulex 554FLB card using the be2net driver
# ethtool -i eth0 | grep driver
driver: be2net

On a Linux machine with a QLogic card you can use the below command to check the available HBA ports
# lspci  | grep -i hba
03:00.0 Fibre Channel: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA (rev 03)
03:00.1 Fibre Channel: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA (rev 03)

Here, as you can see, my node has a QLogic 530FLB card using the bnx2x driver
# ethtool -i eth0 | grep driver
driver: bnx2x
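The lspci listing above can also be turned into a quick port count. A minimal sketch; the sample text below is hard-coded from the listing above so the logic is visible, but on a real system you would pipe `lspci` itself through the same grep:

```shell
# Count Fibre Channel ports from lspci output. The sample stands in
# for a live `lspci` run on this node.
lspci_sample='04:00.2 Fibre Channel: Emulex Corporation OneConnect 10Gb FCoE Initiator (be3) (rev 01)
04:00.3 Fibre Channel: Emulex Corporation OneConnect 10Gb FCoE Initiator (be3) (rev 01)'
port_count=$(printf '%s\n' "$lspci_sample" | grep -ci 'fibre channel')
echo "FC ports: $port_count"
```

On a live system: `lspci | grep -ci 'fibre channel'`.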






Method 2

Alternatively, you can install the sysfsutils rpm to get systool, which can show you similar information
# systool -D -c fc_host
Class = "fc_host"

  Class Device = "host0"

  Class Device = "host1"


Method 3

If you do not have this tool available, you can get the list of HBAs at the below location
# ls -ld /sys/class/fc_host/*
lrwxrwxrwx 1 root root 0 May 22 18:29 /sys/class/fc_host/host0 -> ../../devices/pci0000:00/0000:00:02.0/0000:04:00.2/host0/fc_host/host0
lrwxrwxrwx 1 root root 0 May 22 18:29 /sys/class/fc_host/host1 -> ../../devices/pci0000:00/0000:00:02.0/0000:04:00.3/host1/fc_host/host1

Here, as you can see, I have two HBAs named host0 and host1.
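The same sysfs entries can be walked in a small loop to print each HBA together with its port state and WWPN. The attribute names follow the standard fc_host sysfs layout; on a box with no FC HBAs the loop simply prints nothing:

```shell
# Print name, port state and WWPN for every FC host found in sysfs.
for h in /sys/class/fc_host/host*; do
    [ -e "$h" ] || continue            # no FC HBAs present
    printf '%s state=%s wwpn=%s\n' \
        "${h##*/}" \
        "$(cat "$h/port_state" 2>/dev/null)" \
        "$(cat "$h/port_name" 2>/dev/null)"
done
```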

I hope the article was useful.


How to check the HBA name, make, model, firmware, driver version in Linux

In my earlier article I shared a bunch of commands to help understand dm-multipath:

Tutorial/Cheatsheet: Beginner's Guide to Understanding Device Mapper Multipath for Linux
NOTE: The commands to validate this may vary based on the type of HBA in use, so I will show some commands and examples for an Emulex HBA card where these commands can be used. I cannot assure that the same will work on a Linux machine with a QLogic card.

The below command will give you the connected HBA details
# lspci -nn | grep -i Fibre
04:00.2 Fibre Channel [0c04]: Emulex Corporation OneConnect 10Gb FCoE Initiator (be3) [19a2:0714] (rev 01)
04:00.3 Fibre Channel [0c04]: Emulex Corporation OneConnect 10Gb FCoE Initiator (be3) [19a2:0714] (rev 01)

With this we know we have an Emulex HBA.





Next install sysfsutils rpm if not installed already.

Now use the below command. It will give you plenty of information about your HBA card.
# systool -a -v -c scsi_host | egrep "Class Device|model|version|proc_name|info|fwrev"
  Class Device = "host0"
  Class Device path = "/sys/devices/pci0000:00/0000:00:02.0/0000:04:00.2/host0/scsi_host/host0"
    bg_info             = "BlockGuard Disabled"
    fwrev               = "11.1.183.23, sli-4:0:1"
    info                = "HP FlexFabric 10Gb 2-port 554FLB Adapter on PCI bus 04 device 02 irq 36 port 1 Logical Link Speed: 8000 Mbps"
    lpfc_drvr_version   = "Emulex LightPulse Fibre Channel SCSI driver 11.2.0.6"
    modeldesc           = "HP FlexFabric 10Gb 2-port 554FLB Adapter"
    modelname           = "554FLB"
    npiv_info           = "NPIV Physical"
    option_rom_version  = "11.1.183.23"
    proc_name           = "lpfc"
  Class Device = "host1"
  Class Device path = "/sys/devices/pci0000:00/0000:00:02.0/0000:04:00.3/host1/scsi_host/host1"
    bg_info             = "BlockGuard Disabled"
    fwrev               = "11.1.183.23, sli-4:0:1"
    info                = "HP FlexFabric 10Gb 2-port 554FLB Adapter on PCI bus 04 device 03 irq 41 port 2 Logical Link Speed: 8000 Mbps"
    lpfc_drvr_version   = "Emulex LightPulse Fibre Channel SCSI driver 11.2.0.6"
    modeldesc           = "HP FlexFabric 10Gb 2-port 554FLB Adapter"
    modelname           = "554FLB"
    npiv_info           = "NPIV Physical"
    option_rom_version  = "11.1.183.23"
    proc_name           = "lpfc"

These fields are populated from the below locations, so if you wish to get any more information, navigate through the files at these locations
# ls /sys/class/scsi_host/host*
# ls /sys/class/fc_host/host*

To get the Model Name
# grep -v "zZzZ" /sys/class/scsi_host/host*/model*
/sys/class/scsi_host/host0/modeldesc:HP FlexFabric 10Gb 2-port 554FLB Adapter
/sys/class/scsi_host/host0/modelname:554FLB
/sys/class/scsi_host/host1/modeldesc:HP FlexFabric 10Gb 2-port 554FLB Adapter
/sys/class/scsi_host/host1/modelname:554FLB

To get the driver name
# grep -v "zZzZ" /sys/class/scsi_host/host*/proc_name
/sys/class/scsi_host/host0/proc_name:lpfc
/sys/class/scsi_host/host1/proc_name:lpfc
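The grep output above (one path:value pair per line) can be reduced to a compact host/driver table with awk. A sketch; the sample is hard-coded from the output above, but the same awk works directly on the grep pipeline:

```shell
# Turn "/sys/.../hostN/proc_name:lpfc" lines into "hostN lpfc" pairs.
# Splitting on both "/" and ":" leaves the host name third from the
# end and the value last.
sample='/sys/class/scsi_host/host0/proc_name:lpfc
/sys/class/scsi_host/host1/proc_name:lpfc'
printf '%s\n' "$sample" | awk -F'[/:]' '{print $(NF-2), $NF}'
```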

Now that we know the driver name, we can get more information about our driver
# modinfo lpfc
filename:       /lib/modules/3.10.0-693.21.1.el7.x86_64/kernel/drivers/scsi/lpfc/lpfc.ko.xz
version:        0:11.2.0.6
author:         Emulex Corporation - tech.support@emulex.com
description:    Emulex LightPulse Fibre Channel SCSI driver 11.2.0.6
license:        GPL
retpoline:      Y
rhelversion:    7.4
srcversion:     61B09422B7415BF170E0D67

NOTE: Although most Fibre Channel drivers register a model name and description within sysfs, not all SCSI drivers do. For example, Smart Array, SIL2424 and plain ATA HBAs do not supply that information. For those types of cards you must use lspci -k to retrieve it.

I hope the article was useful.

The Six Best Screen Recording Apps for Linux and How to Install Them

Recording your screen can be a great way to demonstrate how to do something, especially if you are trying to introduce someone to Linux. After all, screenshots will only get you so far. Of course, you will need a good piece of screen recording software if you want to make great videos of Linux. Thankfully, there are many different options out there that will allow you to record the screen of your Linux desktop.


Today, let’s take a quick look at six of the very best screen recording apps for Linux so you can start making high-quality Linux videos today. These apps range from the very simple to the professional, and all come with many different features designed to help you create the perfect video of your Linux laptop or desktop.


1. Simple Screen Recorder



One of my personal favorites because it is so easy to use, Simple Screen Recorder still comes with a host of features designed to help you create amazing screen recordings. Adjust settings such as resolution, record the entire screen or just a smaller rectangle, and much more to create the perfect video. Unfortunately, it doesn’t include any support for your webcam, so you will have to look elsewhere if you want to record yourself as well. Still, its ease of use is one of its strongest features, and it gives you just enough to tweak and customize your videos.





Installation:
Open your terminal and type:
sudo apt update
sudo apt install simplescreenrecorder


2. Kazam Screencaster


If you are really looking for just a barebones recorder, then Kazam could be right for you. It is lightweight and very easy to use, but its controls are simple and it doesn’t have very many features for you to tweak and customize your videos. Still, it does support multiple video formats and it can even be used to take screenshots, so you don’t have to jump around between different programs all the time. One of its nicest features must be the delayed timer, so you can start recording but give yourself time to get ready.

Installation:
Open your terminal and type:
sudo apt update
sudo apt install kazam


3. Open Broadcaster Software Studio



Probably the most feature-rich application on this list; after you take one look at Open Broadcaster you will know you are playing in the big leagues. Its interface is much more complicated because of all its features, but you will be hard-pressed to find anything better, and any professional should consider adding this software to their toolkit if they plan on making videos of their desktop. With features such as multiple screen support, custom transitions and even audio equalization, you will find every feature you need to create high-quality, professional-grade videos of your desktop. Because of this, Open Broadcaster is also more difficult to use, with a much steeper learning curve. Still, once you master it, it will be tough to find any other application out there that can rival it.

Installation:
Open your terminal and type:
sudo add-apt-repository ppa:obsproject/obs-studio
sudo apt update
sudo apt install obs-studio


4. recordMyDesktop


Originally designed to be a command line tool, today there are a couple of graphical front ends for this software. Overall, it’s lightweight and relatively easy to use. It comes with a wide range of video output options so you can customize your recordings, but there are only a few video save options. On top of that, the interface isn’t very polished, but it does get the job done. If you want something lightweight and aren’t a fan of the other lightweight options on this list, then I encourage you to give recordMyDesktop a try.

Installation:
Open your terminal and type:
sudo apt update
sudo apt install gtk-recordmydesktop


5. VokoScreen


In my opinion, VokoScreen gives the best balance between features and simplicity available today. It’s easy to use, but still gives you a host of options including timer start and even hotkey support. You can export your files in a wide range of options and you can even record your entire desktop or just a window that you specify. But, like so many of these simple applications, the user interface could be better. At least it’s not hard to navigate. Assuming this one problem doesn’t bother you, I think you will enjoy the features and ease of use found in VokoScreen.

Installation:
Open your terminal and type:

sudo apt update
sudo apt install vokoscreen


6. VLC Media Player


You are all probably already aware of how great this software is for watching videos and listening to music. You can organize your files, and it will play practically anything out there. On top of that, it can also be used as a screen recorder. That means if you already use VLC on your computer, you don’t have to install anything else to record video. And while recording may not be its main goal, it gives you almost every feature you can imagine, including codec selection, frame rate adjustments, and more. You can even live stream your desktop using this software.

Installation:
Open your terminal and type:
sudo apt update
sudo apt install vlc
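VLC can also record without opening the GUI at all. A sketch using cvlc and VLC's screen capture input; the fps, duration and output path here are example values, and a running X/Wayland session is assumed:

```shell
# Capture the desktop for 10 seconds to ~/capture.mp4 using VLC's
# screen:// input, transcoding to H.264 with no audio track.
cvlc screen:// :screen-fps=25 \
    --sout "#transcode{vcodec=h264,acodec=none}:std{access=file,mux=mp4,dst=$HOME/capture.mp4}" \
    --run-time=10 vlc://quit
```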

Parting Thoughts

This is by no means a be-all, end-all list of the available screen recording apps for Linux. Still, these represent some of the best out there today. They are all easy to use and perfect for recording your Linux desktop on any Linux laptop or PC. If you have been wanting to create Linux how-to videos on your Linux machine, then I encourage you to install and try each one of these. Feel free to experiment. After all, that’s what Linux is all about.

When you find one that works for you and has all the features you need for your videos, you will be well on your way toward creating amazing videos of Linux so you can share your knowledge with others.

How to send log messages using rsyslog to remote server using tcp and udp ports (remote logging) in Red Hat Linux

In my last article I shared the steps to redirect specific log messages to a different log file using rsyslog.

In this article I will share the steps to forward the system logs to a remote server using both TCP and UDP ports, so you can choose either.


Below is my setup detail

Server: 10.43.138.14 -> the node which will send the messages
Client: 10.43.138.1 -> the node which will receive the messages
The below rpm must be installed on the client to validate the incoming messages:
nmap-ncat


Using TCP

If you wish to transfer the system log files to remote server using tcp port then follow below list of steps

With older versions of rsyslog the below syntax was used in /etc/rsyslog.conf
*.* @@remote_server:port

NOTE: Use a double "@@" as highlighted above for TCP

But this syntax is deprecated and should not be used.
The new syntax, shown below, gives us many more options.

On Server (10.43.138.14)
Add below content at the end of the file /etc/rsyslog.conf
*.* action(type="omfwd" target="10.43.138.1" port="10514" protocol="tcp")

NOTE: If there are additional rules added before this entry, they will be applied before the messages are sent to the remote server, so place this entry in your rsyslog.conf accordingly

You can tweak this to add some more arguments
*.* action(type="omfwd"
queue.type="LinkedList"
action.resumeRetryCount="-1"
queue.size="10000"
queue.saveonshutdown="on"
target="10.43.138.1" port="10514" protocol="tcp")

queue.type="LinkedList" enables an in-memory linked-list queue; queue.type can be direct, linkedlist or fixedarray (which are in-memory queues), or disk.

queue.saveonshutdown="on" saves the in-memory data if rsyslog shuts down.

The action.resumeRetryCount="-1" setting prevents rsyslog from dropping messages while retrying the connection if the server is not responding.

queue.size specifies the size of the queue in entries. The defined size limit is not restrictive; rsyslog always writes one complete queue entry, even if it violates the size limit.

Save and restart the rsyslog service
# systemctl restart rsyslog

On client side
Add the provided port to the firewall
# iptables -A INPUT -p tcp --dport 10514  -j ACCEPT

Next open the port using nc
# nc -l -p 10514 -4

On the server side I send a dummy message
# logger "testing message from 10.43.138.14"

On client side
<13>May 29 12:58:33 golinuxhub-client deepak: testing message from 10.43.138.14

You should also start getting all your log messages from the server on your client.
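The "<13>" prefix on the received line is the syslog priority value, where PRI = facility × 8 + severity (as defined in RFC 3164). A quick sketch to decode it from a raw line captured by nc:

```shell
# Decode the syslog PRI from a raw captured line.
line='<13>May 29 12:58:33 golinuxhub-client deepak: testing message from 10.43.138.14'
pri=${line#<}; pri=${pri%%>*}          # strip "<" and ">" -> 13
facility=$((pri / 8))                  # 1 = user-level messages
severity=$((pri % 8))                  # 5 = notice
echo "facility=$facility severity=$severity"
```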


Using UDP

If you wish to transfer the system log files to remote server using udp port then follow below list of steps

With older versions of rsyslog the below syntax was used in rsyslog.conf
*.* @remote_server:port

NOTE: Use a single "@" as highlighted above for UDP

But this syntax is deprecated and should not be used.
The new syntax, shown below, gives us many more options.

On Server (10.43.138.14)
Add below content at the end of the file /etc/rsyslog.conf
*.* action(type="omfwd" target="10.43.138.1" port="10514" protocol="udp")

NOTE: If there are additional rules added before this entry, they will be applied before the messages are sent to the remote server, so place this entry in your rsyslog.conf accordingly





You can tweak this to add some more arguments
*.* action(type="omfwd"
queue.type="LinkedList"
action.resumeRetryCount="-1"
queue.size="10000"
queue.saveonshutdown="on"
target="10.43.138.1" port="10514" protocol="udp")

queue.type="LinkedList" enables an in-memory linked-list queue; queue.type can be direct, linkedlist or fixedarray (which are in-memory queues), or disk.

queue.saveonshutdown="on" saves the in-memory data if rsyslog shuts down.

The action.resumeRetryCount="-1" setting prevents rsyslog from dropping messages while retrying the connection if the server is not responding.

queue.size specifies the size of the queue in entries. The defined size limit is not restrictive; rsyslog always writes one complete queue entry, even if it violates the size limit.

Save and restart the rsyslog service
# systemctl restart rsyslog


On Client
Enable or uncomment these two entries so the client can receive the messages (use the same port the server is sending to):
# vim /etc/rsyslog.conf
$ModLoad imudp
$UDPServerRun 10514

Followed by a restart of rsyslog service
# systemctl restart rsyslog

Next add the provided port to the firewall
# iptables -A INPUT -p udp --dport 10514  -j ACCEPT

And start listening on the port we are using (since this is a UDP port, I have added -u)
# nc -l -p 10514 -4 -u

Now we are all set, so let's send a message using logger from our server node
# logger "Testing rsyslog message using udp port"

Same appears on our client side
<13>May 29 14:37:32 Ban17-be002-2b deepak: Testing rsyslog message using udp port

I hope the article was useful.

20+ scenario based interview questions with answers for beginners and experienced users in Linux


This page consists of a bunch of scenario-based questions and their most likely answers. I have tried to answer to the best of my knowledge, but if you feel there could be more possible answers, or if you have faced more questions and answers that you think will be helpful for others, please let us know via the comment box at the end of this page. I can add them here on your behalf, with credit for that Q/A going to you.


Q. You are unable to do ssh to a node, what could be the problem?

A.
Now, just saying "ssh is not working" tells us nothing about the problem. It is like saying "I have a pain in my body": where exactly is the pain? A headache? Stomach pain? Or something else? You have to narrow it down.
  • So the next step would be to ask the interviewer about the exact problem, or else we have to jump in and analyse it further.
  • In such scenarios it is always recommended to get GUI/console access to the node, as that does not require ssh; you can log in directly and check the relevant ssh log to understand the problem.
  • The ssh log location may vary based on the distribution: /var/log/secure, /var/log/sshd, /var/log/messages and /var/log/auth are some of the files you should look at.
Next check the kind of error you get and then debug the problem accordingly.
Most possible scenarios
1. The host is not allowed to ssh to the server
2. A direct root login may not be allowed
3. AllowUsers or AllowGroups is defined in the target node's sshd config, so the login fails for users not listed there
4. Many times password-less authentication fails due to incorrect permissions on the necessary directories and files such as .ssh and authorized_keys, so make sure these files and directories are not world readable or writable.

These are only some examples; the list of possible scenarios is much longer.
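For the permissions case, the fix is a standard set of modes. A sketch demonstrated on a scratch directory so it is safe to run anywhere; substitute the real home directory on your node:

```shell
# Apply the usual key-authentication permissions on a throwaway copy
# of the ~/.ssh layout, then show the resulting modes.
demo=$(mktemp -d)
mkdir -p "$demo/.ssh"
touch "$demo/.ssh/authorized_keys"
chmod 700 "$demo/.ssh"                  # directory: owner only
chmod 600 "$demo/.ssh/authorized_keys"  # keys: owner read/write only
stat -c '%a %n' "$demo/.ssh" "$demo/.ssh/authorized_keys"
rm -rf "$demo"
```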





Q. Suppose you have a Linux box with IP "192.168.10.11", and you are able to ssh to this node from another Linux box with IP "192.168.10.12", BUT you are unable to connect to it from a Windows box with IP "192.169.10.29". What could be the problem?

A. This mostly happens because of IP routing issues. Here, most likely, the gateway is missing on 192.168.10.11: connecting from a node in a different subnet requires gateway connectivity, while nodes within the same subnet can still connect to each other directly. A simple ping test and a traceroute will give more hints about the situation.

Q. User root has created a file "secret" with below permission which must not be opened by anyone except root and another user "deepak", how can this be done?

# ls -l secret
-rwx------ 1 root root 0 May 31 10:59 secret

A. You can use setfacl for this purpose, as shown below
# setfacl -m u:deepak:rwx secret

The below command will show the existing acl rules.
# getfacl secret
# file: secret
# owner: root
# group: root
user::rwx
user:deepak:rwx
group::---
mask::rwx
other::---

NOTE: For the sake of this example I have given full permissions to deepak, but in reality all of them might not be needed, so assign permissions as required


Q. User "deepak" owns a script file, /tmp/deepak_script.sh, owned by deepak:deepak.

This file must also be executable by another user, "ankit". But the problem is that the script can only be executed as the "deepak" user, so you cannot simply use an acl or anything similar here. So what is the solution?

A.
 This can be done via sudo.
RunAs(User:Group)
  • A Runas_Spec determines the user and/or the group that a command may be run as.
  • A fully-specified Runas_Spec consists of two Runas_Lists (as defined above) separated by a colon (‘:’) and enclosed in a set of parentheses.
  • The first Runas_List indicates which users the command may be run as via sudo's -u option.
  • The second defines a list of groups that can be specified via sudo's -g option.
  • If both Runas_Lists are specified, the command may be run with any combination of users and groups listed in their respective Runas_Lists.
  • If only the first is specified, the command may be run as any user in the list but no -g option may be specified.
  • If the first Runas_List is empty but the second is specified, the command may be run as the invoking user with the group set to any listed in the Runas_List.
  • If both Runas_Lists are empty, the command may only be run as the invoking user.
With this argument we tell sudo to accept the "-u" and "-g" options, where "-u" runs the command/script as the respective user and "-g" runs it as the respective group.

Add below content in the sudoers file
ankit  golinuxhub=(deepak) /tmp/deepak_script.sh

Save and exit the file.

Now, if you notice, here I have given RunAs access as "deepak", which means that if user "ankit" runs the script as "deepak" then he will be allowed to run it.
$ sudo -u deepak /tmp/deepak_script.sh
[sudo] password for ankit:
Hello This is Deepak's fIle


Q. By default when I create a user, I see that the default shell assigned is /bin/bash and the default home directory is created under /home.
How can I make sure that the next time I use "useradd", the default shell is ksh and the default home directory is /export/home/<username>?

A. useradd takes its default arguments from "/etc/default/useradd"
GROUP=100
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/bash
SKEL=/etc/skel
CREATE_MAIL_SPOOL=yes

So you can either use additional arguments with useradd, or modify the above file (for example HOME=/export/home and SHELL=/bin/ksh) so that, without any additional arguments, the home directory will be created under /export/home and the login shell will be ksh.
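The same defaults can also be changed without editing the file by hand, using useradd's -D option (needs root; values here match the question):

```shell
# Show the current defaults, then change the default base directory
# and shell. useradd appends <username> to the base directory itself.
useradd -D
useradd -D -b /export/home -s /bin/ksh
useradd -D          # verify the new defaults
```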

Q. Many times a root user just leaves a session open, which is a breach of security: any session for any user (especially root), if left idle for a certain amount of time, must be closed so that no one can use it for some wrong purpose. How can this be achieved?

A. We can set the TMOUT variable in the user's profile, which should do the trick.
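A sketch of what could go into, say, /etc/profile.d/tmout.sh (the file name is my own choice of convention; any file sourced at login works):

```shell
# Log idle shells out after 600 seconds of inactivity. readonly stops
# users from unsetting it; export makes it reach subshells.
TMOUT=600
readonly TMOUT
export TMOUT
```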


Q. I created password-less authentication between two Linux boxes, but every time I try to ssh it still prompts me for a password. What could I have done wrong? What should I check?

A. Assuming the private and public keys were successfully created:
1. Make sure the public key you generated is the same as what was copied to the target node's authorized_keys file. In such cases I always prefer to use ssh-copy-id rather than manually copying the public key to the client node.
2. The permissions on the .ssh directory, the generated keys and authorized_keys must not be world readable, writable or executable.
3. Analyse /var/log/sshd, /var/log/secure, /var/log/messages or whichever file contains the ssh logs, as the error that appears will help you debug further.

Q. After upgrading kernel the machine fails to boot, what will you do?

A. The very first thing to do here is to edit the grub menu at boot and make the system boot with an alternative kernel (assuming the previous kernel is still installed), or else try booting the system with the rescue option from the grub menu.

Once the node is up, you can analyse why it is failing to boot from the new kernel. Many times the kernel is not properly installed and some libraries are missing, which leads to this problem. The GRUB configuration can also be corrupted, in which case you can regenerate it using grub2-mkconfig:
# grub2-mkconfig -o /boot/grub2/grub.cfg
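If it is the initramfs rather than the GRUB configuration that is damaged, that can be rebuilt as well (RHEL 7 syntax; needs root):

```shell
# Rebuild the initramfs for the currently running kernel with dracut;
# -f overwrites the existing image.
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)
```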

If a kernel panic is observed, boot the system with an alternate kernel or in rescue mode and then enable kdump. Share the kdump vmcore with the support engineers, as they can then try to debug the source of the problem further.


Q. How do I make sure that the memory used by my application is not swapped out because of some other process?
A. To lock memory for an application, run it in a cgroup to which you assign a low swappiness value, so that its memory is not swapped out when the system runs out of memory. In general, if you do not wish memory to be swapped out, reduce the swappiness to a lower value via sysctl.
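A sketch of both options (needs root; the cgroup path is an example of my own and assumes the cgroup v1 memory controller):

```shell
# System-wide: make the kernel much less eager to swap.
sysctl vm.swappiness=10

# Per-cgroup: set swappiness only for the application's cgroup
# ("myapp" is a hypothetical cgroup name).
echo 10 > /sys/fs/cgroup/memory/myapp/memory.swappiness
```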


Q. Every time I login to my Linux box instead of getting a login prompt like "golinuxhub:~ #", I get a "-bash-4.2#" prompt, what could be the possible reason?

A.
There can be multiple reasons for it. By default, when a bash shell is assigned to a user, a PS1 variable is also set which makes sure you get a proper login prompt; if for some reason that does not happen, make sure the PS1 variable is properly set for your user.

The permanent value of PS1 is generally found in /etc/profile, and can also be set in /etc/bashrc, /etc/profile.d/*, etc.

So look out for these files and make sure the right one gets sourced every time the user logs in. By default, ~/.profile is sourced when a user logs in, so you can put the PS1 variable there or in /etc/profile (assuming that file is sourced internally via each user's .profile).
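A sketch of a PS1 line that reproduces the "golinuxhub:~ #" style of prompt; put it in ~/.profile or /etc/profile as discussed above:

```shell
# \h = short hostname, \w = current working directory,
# \$ = "#" for root and "$" for everyone else.
PS1='\h:\w \$ '
export PS1
```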


Q. While attempting to do su (switch user) from one user to another user I get an error message "Authentication failure" and the su fails even when I know I am giving the correct password, what could be the possible reason?

A.
In general, "Authentication failure" means the password provided does not match the password stored in /etc/shadow for the user. But since you know you are entering the correct password (unless you left CAPS LOCK on and the wrong password is being typed :) ), there can be many other reasons for this error.

Now, if you have ssh access as root, well and good, as you can go through the logs to understand more about the problem.

But if su - root is also failing then we may be in trouble, as root-level access (or another user with similar privileges) is needed to investigate further.

Assuming you do have root-level access, you can use pam_tally2 (deprecated in RHEL 7) or faillock to see if the user is locked for some reason.


If the user is locked due to failed attempts, we need to reset the account
# faillock --reset --user deepak
# pam_tally2 --reset --user deepak

Q. On my RHEL 7 setup the rsyslog service fails to start, but the problem is that once rsyslog fails I do not get any messages in /var/log/messages, so I am unable to debug why the service is failing. Where should I check system messages in such a scenario?

A.
On RHEL 7 we have "journal" which is a component of systemd that is responsible for viewing and management of log files. Logging data is collected, stored, and processed by the Journal's journald service. It creates and maintains binary files called journals based on logging information that is received from the kernel, from user processes, from standard output, and standard error output of system services or via its native API. These journals are structured and indexed, which provides relatively fast seek times. Journal entries can carry a unique identifier. The journald service collects numerous meta data fields for each log message. The actual journal files are secured, and therefore cannot be manually edited.

To view the log files you can use
# journalctl
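Some journalctl invocations that are useful in exactly this situation (flags as documented in the journalctl man page; needs a systemd system):

```shell
# Messages from the current boot only
journalctl -b
# Only what the rsyslog unit logged
journalctl -u rsyslog
# Error-and-worse messages since the last boot
journalctl -b -p err
```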

Q. I have a service on my RHEL setup which I want to run on a specific CPU core, is this possible? If yes how can this be done?

A.
There is a CPUAffinity directive which can be used for this purpose. Set it to the CPU core(s) you wish to bind your service to in the service unit file, as shown below. Here my service will always run on CPU core 13 (cores are numbered from 0).
# vim /etc/systemd/system/test.service
...
[Service]
CPUAffinity=13
Type=forking
Restart=no
...
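For an already-built command or a one-off run, the same pinning can be done with taskset from util-linux (core 0 is used here so the sketch works on any machine):

```shell
# Run a command pinned to CPU core 0 and prove it ran.
taskset -c 0 echo "pinned run"
```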


Q. I have physical hardware with 10 CPU processors but I want to use only 6 of them, and I do not want my applications to see the other 4 processors. Is it possible?

A.
We can use the "maxcpus" or "nr_cpus" kernel boot parameters for this purpose. They limit the number of CPU processors visible to the kernel and to any application running on the system.
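Since these are kernel boot parameters, they go on the kernel command line, e.g. via /etc/default/grub. A sketch (the "..." stands for whatever parameters your system already has):

```shell
# /etc/default/grub -- limit the kernel to 6 CPUs at boot
GRUB_CMDLINE_LINUX="... maxcpus=6"
# then regenerate the config and reboot:
# grub2-mkconfig -o /boot/grub2/grub.cfg
```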


Q. I have a script, log_monitor.sh, which will be running continuously to monitor some logs on my Linux server. It is expected that the log size will be very high since it will run for a long time, but my server does not have enough space to capture and save these logs. Is there any way I can save them? I don't have any additional disk or any other storage box which can be used.

A.
We can use "nc" here to transfer the logs at runtime to a different node on the network which has more space.

On the receiving side run the below command (either netcat or nc can be used, based on your distribution)
# netcat -l -p 55000 > /tmp/logs_from_server1.log

On the sending side
# ./log_monitor.sh > /dev/tcp/<receiving_server_ip>/55000

You can use any other free port number; just make sure this port is open in the firewall of the receiving server.

With this, the logs will not be written directly on the node where the monitoring script is running; instead they will be sent to the remote server.

Q. After I reboot my node, I observe that the system start-up time is different from the local time even though my machine is properly connected to the NTP server. Why are the boot-up logs in /var/log/messages generated with the wrong date and time?

A.
It is most likely because your BIOS date and time are wrongly set; go into your Linux server's BIOS and make sure the date and time are set properly. You should also use the ntpdate service to make sure the hardware clock is kept in sync with the system clock, so you can avoid such discrepancies.

NOTE: If the BIOS date and time are incorrect, even the ntpdate service cannot help with the early boot messages; it can only correct the time once it comes up, after the boot-stage entries in /var/log/messages have already been generated.


Q. I am trying to perform a hard disk replacement, but when I plug a new disk into my Linux server I see some strange partitions and raid devices appearing on my machine. Why is this happening, and how do I correct it?

A.
This is happening because the disk you are using was most likely in use on another node and still has data from the old server, so it is always a good idea to clear the existing partition table of a newly connected disk. You can use "mdadm" and "wipefs" to do this.


Q. By default, if I use "restart" with systemctl for a service, for example systemctl restart sshd, it will restart the sshd service. But is it possible to make sure that systemctl performs the restart only if the service is in a running state, and does not attempt a restart if the target service is in a non-running (failed/stopped) state?

A.
In RHEL 7 we have below two options available
systemctl try-restart something.service
OR
systemctl condrestart something.service

From the man page
try-restart PATTERN...
    Restart one or more units specified on the command line if the units are running. This does nothing if units are not running. Note that, for compatibility with Red Hat init scripts, condrestart is equivalent to this command.

So if the service is in a not-running state it will be left untouched.


Q. I am trying to perform a kickstart-based installation and it fails with an error such as "Software selection (Source changed - please verify)". There can be many more such errors, so how do I find the root cause of the installation failure, given that after the failure anaconda does not provide me a login shell and hence I am unable to debug further?

A. By default, during a kickstart-based installation, multiple terminals are created as soon as anaconda starts, so if the installation fails on the first terminal you can always navigate to another terminal to get a bash prompt.
All the installation logs are stored inside /tmp, where you can try to debug the cause of the installation failure.


Q. During a kickstart-based installation of my RHEL 7 node I am generating a log file at the %pre stage for the scripts which were executed, but after a successful installation of the server, when I go to the location where the logs were saved, I do not find anything there. Does that mean the logs were never created? Did I use the wrong syntax? How do I check this?

A.
To create a logfile for the respective %pre or %post section, use the --log argument.

For example
%pre --log=/var/log/kickstart_pre.log
%end

By default, %post scripts are executed in a chrooted environment. Since /var/log/kickstart_pre.log is available in the installer's environment, you won't be able to copy it directly. You can execute the %post script outside the chroot environment to copy the file from the installer's environment.

For example, the script will look like this:
%post --log=/var/log/kickstart_post.log --nochroot
/bin/cp -rvf /var/log/kickstart_pre.log /mnt/sysimage/var/log/
%end


Please post more questions if you have any, along with the possible answers you wish to add here.

What are the CPU c-states? How to check and monitor the CPU c-state usage in Linux per CPU and core?


What are C-states, cstates, or C-modes?

There are various power modes of the CPU which are determined on the basis of their current usage and are collectively called “C-states” or “C-modes.”

Low-power modes were first introduced with the 486DX4 processor. Since then, more power modes have been introduced and enhancements have been made to each mode so that the CPU consumes less power in these low-power modes.

  • Each state of the CPU utilises a different amount of power and impacts application performance differently.
  • Whenever a CPU core is idle, the built-in power-saving logic kicks in and tries to transition the core from its current C-state to a deeper C-state, turning off various processor components to save power.
  • But you also need to understand that every time an application tries to bind itself to a CPU to do some task, that CPU has to come back from its "deeper sleep state" to the "running state", which takes time before the CPU is again 100% up and running. This also has to be done in an atomic context, so that nothing tries to use the core while it is being powered up.
  • The various modes a CPU transitions through are called C-states.
  • They usually start at C0, which is the normal CPU operating mode, i.e., the CPU is 100% turned on.
  • The higher the C number, the deeper the CPU sleep mode, i.e., more circuits and signals are turned off and the more time the CPU will require to return to C0 mode, i.e., to wake up.
  • Each mode is also known by a name, and several of them have sub-modes with different power saving – and thus wake-up time – levels.

Below table explains all the CPU C-states and their meaning



How can I disable processor sleep states?

Latency sensitive applications do not want the processor to transition into deeper C-states, due to the delays induced by coming out of the C-states back to C0. These delays can range from hundreds of microseconds to milliseconds.

There are various methods to achieve this.

Method 1
By booting with the kernel command line argument processor.max_cstate=0 the system will never enter a C-state other than zero.

You can add this variable to your grub2 file by appending "processor.max_cstate=0" as shown below
# vim /etc/sysconfig/grub
GRUB_CMDLINE_LINUX="novga console=ttyS0,115200 panic=1 numa=off elevator=cfq rd.md.uuid=f6015b65:f15bf68d:7abf04cc:e53fa9a2 rd.lvm.lv=os/root rd.md.uuid=a66dd4fd:9bf06835:5c2bc8df:f150487f rd.md.uuid=84bfe346:bb18024a:054d652a:d7678fa4 processor.max_cstate=0"

Regenerate your GRUB configuration
# grub2-mkconfig -o /boot/grub2/grub.cfg

Reboot the node to activate the changes

Method 2
  • The second method is to use the Power Management Quality of Service interface (PM QOS). 
  • The file /dev/cpu_dma_latency is the interface which when opened registers a quality-of-service request for latency with the operating system. 
  • A program should open /dev/cpu_dma_latency, write a 32-bit number to it representing a maximum response time in microseconds and then keep the file descriptor open while low-latency operation is desired.  Writing a zero means that you want the fastest response time possible.
  • Various tuned profiles do this, e.g. network-latency and latency-performance, which keep the file open and write a value based on the profile configuration.

Below is a snippet from latency-performance tuned file
[cpu]
force_latency=1

As you can see here, this file is kept open by tuned for as long as tuned is in a running state
# lsof /dev/cpu_dma_latency
COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
tuned   1543 root    8w   CHR  10,61      0t0 1192 /dev/cpu_dma_latency

These profiles write force_latency as 1 to make sure the CPU does not enter a C-state deeper than C1.





How to read and interpret /dev/cpu_dma_latency?

If we use a normal text tool to read this file then the output is unreadable, something like
# cat /dev/cpu_dma_latency
▒5w

Since this value is "raw" (not encoded as text) you can read it with something like hexdump.
# hexdump -C /dev/cpu_dma_latency
00000000  00 94 35 77                                       |..5w|
00000004

When you read this further
# echo $(( 0x77359400 ))
2000000000

It tells us that the current latency value is 2,000,000,000 microseconds, i.e. 2000 seconds, which is the maximum wake-up latency tolerated when a CPU comes back from a deeper C-state to C0; this effectively places no restriction on C-state depth.
NOTE: By default on Red Hat Enterprise Linux 7 it is set to 2000 seconds.
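The hexdump bytes above can be decoded by hand. This is a minimal sketch that hard-codes the default bytes shown above instead of reading the device (the file holds a raw little-endian 32-bit integer, so the bytes must be reversed first):

```shell
# Decode the default /dev/cpu_dma_latency value from its hexdump bytes
raw="00 94 35 77"                                 # bytes as printed by hexdump -C
hex=$(echo "$raw" | awk '{print $4 $3 $2 $1}')    # reverse to big-endian: 77359400
usec=$(printf '%d' "0x$hex")                      # convert hex to decimal
echo "$usec microseconds = $(( usec / 1000000 )) seconds"
# → 2000000000 microseconds = 2000 seconds
```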

When we set a tuned profile with force_latency=1

For example here I will set tuned profile of network-latency
# tuned-adm profile network-latency

Check the existing active profile
# tuned-adm active
Current active profile: network-latency

Now lets check the latency value
# hexdump -C /dev/cpu_dma_latency
00000000  01 00 00 00                                       |....|
00000004

As you see the latency value has been changed to 1 micro second.


What is the maximum C-state allowed for my CPU?

We have multiple CPU C-states as you can see in the above table, but depending upon the latency value and any max_cstate value provided on the kernel command line in GRUB, the maximum allowed C-state for a processor can vary.

Below file should give the value from your node
# cat /sys/module/intel_idle/parameters/max_cstate
9


How do I check the existing latency value for different C-states?

The latency value may change depending upon various C-states and the transition time from deeper C-states to C0.

The commands below will give you the existing latency values of all the C-states for a given CPU
# cd /sys/devices/system/cpu/cpu0/cpuidle

# for state in state{0..4} ; do echo c-$state `cat $state/name` `cat $state/latency` ; done
c-state0 POLL 0
c-state1 C1-HSW 2
c-state2 C1E-HSW 10
c-state3 C3-HSW 33
c-state4 C6-HSW 133

Similar values can be obtained for all the available CPUs by changing the CPU number (cpu0) in the path above.
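Extending the loop above, a small sketch like the following lists the C-state names and latencies for every CPU, and degrades gracefully when the cpuidle sysfs interface is not exposed (e.g. inside a container or a VM without cpuidle support):

```shell
# List the name and wake-up latency (in microseconds) of every C-state
# on every CPU, via the cpuidle sysfs interface.
base=/sys/devices/system/cpu
report=$(
    if [ -d "$base/cpu0/cpuidle" ]; then
        for cpu in "$base"/cpu[0-9]*; do
            for state in "$cpu"/cpuidle/state*; do
                printf '%s %s: name=%s latency=%sus\n' \
                    "${cpu##*/}" "${state##*/}" \
                    "$(cat "$state/name")" "$(cat "$state/latency")"
            done
        done
    else
        echo "cpuidle sysfs interface not available"
    fi
)
echo "$report"
```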


How to check and monitor the CPU c-state usage in Linux per CPU and core?

You can use the "turbostat" tool for this purpose, which will give you runtime values of the CPU C-state usage for all the available CPUs and cores.

I will be using 'turbostat' and 'stress' tool to monitor the CPU c-state and put some load on my CPU respectively.
To install these rpms you can use

# yum install kernel-tools
# yum install stress

For example

Case 1: Using throughput-performance tuned profile

To check the currently active profile
# tuned-adm active
Current active profile: throughput-performance

With this our latency value is default i.e. 2000 seconds
# hexdump -C /dev/cpu_dma_latency
00000000  00 94 35 77                                       |..5w|
00000004

Check the output using turbostat
# turbostat
        Core    CPU     Avg_MHz Busy%   Bzy_MHz TSC_MHz IRQ     SMI     CPU%c1  CPU%c3  CPU%c6  CPU%c7  CoreTmp PkgTmp  PkgWatt RAMWatt PKG_%RAM_%
        -       -       6       0.34    1754    2597    2963    640     1.24    0.07    98.35   0.00    54      61      29.33   6.65    0.00 0.00
        0       0       5       0.30    1817    2597    116     40      0.76    0.06    98.88  0.00    51      61      15.36   2.62    0.00 0.00
        1       8       7       0.39    1722    2597    253     40      1.84    0.08    97.69   0.00    52
        2       1       5       0.28    1786    2597    97      40      1.04    0.04    98.64   0.00    51
        3       9       4       0.22    1811    2597    45      40      0.45    0.00    99.32   0.00    51
        4       2       5       0.29    1883    2597    86      40      0.69    0.06    98.96   0.00    53
        5       10      4       0.22    1830    2597    39      40      0.46    0.00    99.31   0.00    52
        6       3       7       0.39    1682    2597    279     40      1.67    0.07    97.87   0.00    54
        7       11      7       0.39    1762    2597    200     40      1.79    0.08    97.75   0.00    51
        0       4       8       0.43    1837    2597    268     40      1.59    0.07    97.91   0.00    37      49      13.97   4.03    0.00 0.00
        1       12      7       0.39    1734    2597    251     40      1.49    0.10    98.02   0.00    40
        2       5       5       0.27    1727    2597    84      40      0.64    0.06    99.03   0.00    39
        3       13      5       0.27    1837    2597    70      40      0.58    0.03    99.12   0.00    40
        4       6       6       0.32    1775    2597    164     40      1.07    0.04    98.56   0.00    40
        5       14      6       0.37    1675    2597    234     40      1.44    0.07    98.13   0.00    40
        6       7       7       0.43    1735    2597    299     40      1.75    0.15    97.68   0.00    39
        7       15      9       0.56    1634    2597    478     40      2.63    0.16    96.66  0.00    38

As you can see, all the available CPUs and cores are mostly in the C6 state because all are free. Now if I start putting stress on the system, the CPUs will start transitioning from C6 to the C0 state, and the C6 residency will drop as the CPUs stay in the running state
        Core    CPU     Avg_MHz Busy%   Bzy_MHz TSC_MHz IRQ     SMI     CPU%c1  CPU%c3  CPU%c6  CPU%c7  CoreTmp PkgTmp  PkgWatt RAMWatt PKG_%RAM_%
        -       -       384     13.84   2782    2594    16172   640     2.14    0.17    83.84   0.00    54      58      42.87   8.42    0.00 0.00
        0       0       419     15.09   2790    2590    896     40      1.19    0.08    83.64   0.00    50      58      21.18   3.16    0.00 0.00
        1       8       255     9.21    2778    2590    1073    40      4.91    0.55    85.34   0.00    51
        2       1       439     15.76   2793    2591    892     40      1.29    0.05    82.90   0.00    54
        3       9       441     15.81   2800    2591    997     40      0.64    0.02    83.53   0.00    53
        4       2       439     15.74   2797    2592    890     40      0.80    0.06    83.39   0.00    54
        5       10      258     9.39    2758    2594    1118    40      5.34    0.41    84.86   0.00    51
        6       3       317     11.43   2780    2594    962     40      3.47    0.32    84.78   0.00    52
        7       11      327     11.86   2764    2594    1236    40      5.00    0.41    82.73   0.00    50
        0       4       39      1.46    2660    2594    485     40      2.31    0.22    96.01   0.00    37      47      21.69   5.26    0.00 0.00
        1       12      461     16.68   2767    2594    1314    40      2.69    0.16    80.47   0.00    46
        2       5       465     16.68   2791    2595    944     40      0.86    0.08    82.38   0.00    41
        3       13      458     16.50   2779    2595    1067    40      1.32    0.14    82.04   0.00    46
        4       6       463     16.63   2788    2596    1243    40      0.99    0.07    82.31   0.00    46
        5       14      452     16.31   2778    2596    1001    40      1.27    0.11    82.31   0.00    46
        6       7       462     16.58   2789    2596    1023    40      0.77    0.05    82.60   0.00    44
        7       15      452     16.29   2776    2597    1031    40      1.45    0.07    82.19   0.00    41

        Core    CPU     Avg_MHz Busy%   Bzy_MHz TSC_MHz IRQ     SMI     CPU%c1  CPU%c3  CPU%c6  CPU%c7  CoreTmp PkgTmp  PkgWatt RAMWatt PKG_%RAM_%
        -       -       2428    86.63   2804    2599    85363   656     6.08    0.96    6.33    0.00    57      60      119.27  17.04   0.00 0.00
        0       0       2377    84.85   2802    2600    5756    41      9.47    1.09    4.59    0.00    55      60      55.56   6.59    0.00 0.00
        1       8       1835    65.48   2801    2602    5742    41      20.04   2.11    12.37   0.00    54
        2       1       2802    99.93   2803    2601    5037    41      0.07    0.00    0.00    0.00    57
        3       9       2802    99.93   2803    2601    5035    41      0.07    0.00    0.00    0.00    56
        4       2       2802    99.94   2803    2600    5044    41      0.06    0.00    0.00    0.00    57
        5       10      1992    71.12   2802    2598    5688    41      16.62   1.77    10.50   0.00    54
        6       3       2799    99.94   2803    2599    5049    41      0.06    0.00    0.00    0.00    57
        7       11      1914    68.39   2801    2598    5720    41      18.45   2.09    11.07   0.00    51
        0       4       2066    73.79   2800    2600    5335    41      9.85    2.19    14.17   0.00    46      53      63.72   10.45   0.00 0.00
        1       12      2803    99.86   2807    2600    5088    41      0.14    0.00    0.00    0.00    52
        2       5       656     23.46   2800    2597    3312    41      21.81   6.10    48.63   0.00    45
        3       13      2799    99.86   2807    2597    5610    41      0.14    0.00    0.00    0.00    53
        4       6       2799    99.86   2807    2597    7143    41      0.14    0.00    0.00    0.00    51
        5       14      2799    99.86   2807    2597    5044    41      0.14    0.00    0.00    0.00    50
        6       7       2799    99.86   2807    2597    5679    41      0.14    0.00    0.00    0.00    50
        7       15      2799    99.86   2807    2597    5081    41      0.14    0.00    0.00    0.00    48

        Core    CPU     Avg_MHz Busy%   Bzy_MHz TSC_MHz IRQ     SMI     CPU%c1  CPU%c3  CPU%c6  CPU%c7  CoreTmp PkgTmp  PkgWatt RAMWatt PKG_%RAM_%
        -       -       2421    86.42   2807    2595    84373   656     6.28    1.07    6.23    0.00    59      62      120.52  17.00   0.00 0.00
        0       0       2798    99.83   2808    2595    5039    41      0.17    0.00    0.00    0.00    57      62      55.92   6.54    0.00 0.00
        1       8       1891    67.58   2803    2595    5151    41      16.92   2.72    12.78   0.00    55
        2       1       2798    99.83   2808    2595    5032    41      0.17    0.00    0.00    0.00    59
        3       9       2798    99.83   2808    2595    6068    41      0.17    0.00    0.00    0.00    58
        4       2       2798    99.83   2808    2595    5041    41      0.17    0.00    0.00    0.00    58
        5       10      1527    54.56   2804    2595    5540    41      24.02   3.73    17.70   0.00    56
        6       3       2793    99.83   2808    2590    5045    41      0.17    0.00    0.00    0.00    58
        7       11      1692    60.57   2804    2590    5556    41      20.66   3.24    15.53   0.00    54
        0       4       1425    50.99   2800    2595    5251    41      19.20   4.24    25.57   0.00    48      57      64.60   10.46   0.00 0.00
        1       12      2799    99.85   2809    2595    5053    41      0.15    0.00    0.00    0.00    54
        2       5       2799    99.84   2809    2595    5054    41      0.16    0.00    0.00    0.00    53
        3       13      1419    50.79   2800    2595    4642    41      17.88   3.22    28.11   0.00    49
        4       6       2799    99.85   2809    2595    5059    41      0.15    0.00    0.00    0.00    55
        5       14      2799    99.84   2809    2595    5047    41      0.16    0.00    0.00    0.00    53
        6       7       2799    99.84   2809    2595    6206    41      0.16    0.00    0.00    0.00    53
        7       15      2801    99.84   2809    2597    5589    41      0.16    0.00    0.00    0.00    50

Towards the end, as you can see, Busy% increases and the time spent in C6 drops, which means the CPUs are now in the running state.


Case 2: Change tuned profile to latency-performance
# tuned-adm profile latency-performance

# tuned-adm active
Current active profile: latency-performance

Next monitor the CPU c-state when the system is idle
        Core    CPU     Avg_MHz Busy%   Bzy_MHz TSC_MHz IRQ     SMI     CPU%c1  CPU%c3  CPU%c6  CPU%c7  CoreTmp PkgTmp  PkgWatt RAMWatt PKG_%RAM_%
        -       -       61      2.17    2800    2597    2923    656     97.83   0.00    0.00    0.00    68      74      78.78   6.14    0.00 0.00
        0       0       363     13.00   2800    2597    56      41      87.00   0.00    0.00    0.00    65      74      39.31   2.22    0.00 0.00
        1       8       4       0.14    2800    2597    9       41      99.86   0.00    0.00    0.00    68
        2       1       4       0.14    2800    2597    23      41      99.86   0.00    0.00    0.00    66
        3       9       61      2.17    2800    2597    211     41      97.83   0.00    0.00    0.00    66
        4       2       5       0.18    2800    2597    93      41      99.82   0.00    0.00    0.00    67
        5       10      4       0.14    2800    2597    20      41      99.86   0.00    0.00    0.00    66
        6       3       4       0.15    2800    2597    25      41      99.85   0.00    0.00    0.00    68
        7       11      8       0.28    2800    2597    337     41      99.72   0.00    0.00    0.00    64
        0       4       4       0.16    2800    2597    68      41      99.84   0.00    0.00    0.00    57      66      39.46   3.93    0.00 0.00
        1       12      4       0.14    2800    2597    34      41      99.86   0.00    0.00    0.00    58
        2       5       5       0.18    2800    2597    134     41      99.82   0.00    0.00    0.00    58
        3       13      38      1.36    2800    2597    928     41      98.64   0.00    0.00    0.00    59
        4       6       433     15.50   2800    2597    35      41      84.50   0.00    0.00    0.00    59
        5       14      7       0.24    2800    2597    375     41      99.76   0.00    0.00    0.00    59
        6       7       4       0.14    2800    2597    17      41      99.86   0.00    0.00    0.00    58
        7       15      21      0.74    2800    2597    558     41      99.26  0.00    0.00    0.00    55

As you can see, even when the CPUs and cores are sitting idle they won't transition to deeper C-states, since we are forcing them to stay at C1.


What is POLL idle state ?

If cpuidle is active, X86 platforms have one special idle state. The POLL idle state is not a real idle state, it does not save any power. Instead, a busy-loop is executed doing nothing for a short period of time. This state is used if the kernel knows that work has to be processed very soon and entering any real hardware idle state may result in a slight performance penalty.

There exist two different cpuidle drivers on the X86 architecture platform:

"acpi_idle" cpuidle driver
The acpi_idle cpuidle driver retrieves available sleep states (C-states) from the ACPI BIOS tables (from the _CST ACPI function on recent platforms or from the FADT BIOS table on older ones). The C1 state is not retrieved from ACPI tables. If the C1 state is entered, the kernel will call the hlt instruction (or mwait on Intel).

"intel_idle" cpuidle driver
In kernel 2.6.36 the intel_idle driver was introduced. It only serves recent Intel CPUs (Nehalem, Westmere, Sandybridge, Atoms or newer). On older Intel CPUs the acpi_idle driver is still used (if the BIOS provides C-state ACPI tables). The intel_idle driver knows the sleep state capabilities of the processor and ignores ACPI BIOS exported processor sleep states tables.


Why might the OS ignore BIOS settings?

  • The OS might ignore BIOS settings based on the idle driver which is in use.
  • If one uses intel_idle (the default on intel machines) the OS can ignore ACPI and BIOS settings, i.e. the driver can re-enable the C-states.
  • In case one disables intel_idle and uses the older acpi_idle driver the OS should follow the BIOS settings.

One can disable the intel_idle driver by:

  • passing intel_idle.max_cstate=0 on the kernel boot command line, or
  • passing idle=* (where * can be e.g. poll, i.e. idle=poll)

IMPORTANT NOTE: Make sure your processor supports the acpi_idle driver; otherwise you should not change the driver.

How to check currently loaded driver?

  • The intel_idle driver is a CPU idle driver that supports modern Intel processors.
  • The intel_idle driver presents the kernel with the duration of the target residency and exit latency for each supported Intel processor.
  • The CPU idle menu governor uses this data to predict how long the CPU will be idle.
# cat /sys/devices/system/cpu/cpuidle/current_driver
intel_idle

Or you can also use the command below
# dmesg |grep idle
[    1.766866] intel_idle: MWAIT substates: 0x2120
[    1.766868] intel_idle: v0.4.1 model 0x3F
[    1.767023] intel_idle: lapic_timer_reliable_states 0xffffffff
[    1.835938] cpuidle: using governor menu


I hope the article was useful.

How to configure your BIND DNS server on a different port no other than 53 in Linux

By default the DNS server works on port 53, but what if you want to change the default port on your machine?

I wanted to try whether it is possible and, if yes, how I am supposed to do it.
Well, here is the solution I found.

The following commands apply to Red Hat and CentOS, so kindly verify the commands if you are planning to do the same on any other distribution.

Open up your named.conf and make the following changes
# vi /etc/named.conf
listen-on port 6236 { 127.0.0.1; };
query-source port 6236;

Make sure your firewall and SELinux are not blocking the port number you have selected.
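A sketch of those two checks, assuming RHEL/CentOS 7 with firewalld and targeted SELinux (port 6236 is the example used above, and semanage is provided by the policycoreutils-python package):

```shell
# Open the custom DNS port in firewalld (both TCP and UDP)
firewall-cmd --permanent --add-port=6236/tcp
firewall-cmd --permanent --add-port=6236/udp
firewall-cmd --reload

# Tell SELinux that named is allowed to bind to this port
semanage port -a -t dns_port_t -p tcp 6236
semanage port -a -t dns_port_t -p udp 6236
```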





# netstat -ntlp | grep 6236
tcp        0     0     10.10.10.30:6236        0.0.0.0:*     LISTEN      32711/named
tcp        0     0     127.0.0.1:6236          0.0.0.0:*     LISTEN      32711/named

[root@server ~]# telnet localhost 6236
Trying ::1...
Connected to localhost.
Escape character is '^]'.

This all means that the port 6236 is open on our system

Now that you have configured your DNS server, use the following commands to query it on the custom port
# nslookup -port=6236 server.example.com
# dig -p 6236 server.example.com

I hope the article was useful.

How to measure power consumption in watts using powerstat in Linux with examples

In my last article I had explained the various CPU c-states in detail and how you can disable the same, which you can access using below link.

What are the CPU c-states? How to disable the C-states? How to check and monitor the CPU c-state usage in Linux per CPU and core?

In this article I will show the usage of powerstat to measure power in watts for various tuned profiles and also under some load.
I will be using Red Hat Enterprise Linux 7 (RHEL 7) for demonstrating the commands.

If powerstat is available in your repository then you can install it using
# yum install powerstat

Or you can always download this tool based on your environment and install it locally.
# rpm -Uvh /tmp/powerstat-0.02.17-10.1.x86_64.rpm
warning: /tmp/powerstat-0.02.17-10.1.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID ee454f98: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:powerstat-0.02.17-10.1           ################################# [100%]

IMPORTANT NOTE: Power usage fluctuates mostly when a CPU/memory/disk-IO-heavy process transitions between idle and running repeatedly, because coming back from a deeper CPU C-state to C0 takes time and affects power usage. If a process is in the running state all the time, the power usage will be roughly constant.





With profile throughput-performance

Case 1: System is Idle
Before starting let us measure the power usage when the system is completely idle

In the below example I am taking 60 samples of power usage, one per second.
# powerstat -R -c -z
Running for 60.0 seconds (60 samples at 1.0 second intervals).
Power measurements will start in 0 seconds time.

  Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
09:51:23   0.0   0.0   0.1  99.9   0.0    1    576    718    0    0    0  29.51
09:51:24   0.0   0.0   0.1  99.9   0.0    1    620    615    0    0    0  29.37
09:51:25   0.1   0.0   0.0  99.9   0.0    1    791    605    0    0    0  29.66
09:51:26   0.1   0.0   0.0  99.9   0.0    1    698    632    0    0    0  29.47
09:51:27   0.0   0.0   0.1  99.9   0.0    1    591    606    0    0    0  29.43
09:51:28   0.1   0.0   0.1  99.8   0.0    1   2757   1173    1    0    1  30.54
09:51:29   0.3   0.0   0.3  99.4   0.0    1   8237   1670    0    0    2  37.66
09:51:30   0.1   0.0   0.0  99.9   0.0    1    667    660    0    0    0  29.51
09:51:31   0.1   0.0   0.0  99.9   0.0    1    664    648    0    0    0  29.51
09:51:32   0.0   0.0   0.1  99.9   0.0    1    595    589    0    0    0  29.43
09:51:33   0.0   0.0   0.0 100.0   0.0    1    604    623    0    0    0  29.40
09:51:34   0.1   0.0   0.0  99.9   0.0    1    581    570    0    0    0  29.36
09:51:35   0.0   0.0   0.1  99.9   0.0    1    606    628    0    0    0  29.43
09:51:36   0.1   0.0   0.1  99.9   0.0    1    702    665    0    0    0  29.41
09:51:37   0.0   0.0   0.0 100.0   0.0    1    578    583    0    0    0  29.37
09:51:38   0.0   0.0   0.0 100.0   0.0    1    566    557    0    0    0  29.26
09:51:39   0.1   0.0   0.1  99.9   0.0    1    661    691    0    0    0  29.49
09:51:40   0.1   0.0   0.0  99.9   0.0    1    732    706    0    0    0  29.41
09:51:41   0.0   0.0   0.1  99.9   0.0    1    645    641    0    0    0  29.41
09:51:42   0.0   0.0   0.0 100.0   0.0    1    558    534    0    0    0  29.35
09:51:43   0.1   0.0   0.0  99.9   0.0    1    553    544    0    0    0  29.33
09:51:44   0.0   0.0   0.0 100.0   0.0    1    619    609    0    0    0  29.36
09:51:45   0.0   0.0   0.1  99.9   0.0    1    570    569    0    0    0  29.39
09:51:46   0.0   0.0   0.0 100.0   0.0    1    679    636    0    0    0  29.45
09:51:47   0.1   0.0   0.0  99.9   0.0    1    547    540    0    0    0  29.36
09:51:48   0.0   0.0   0.1  99.9   0.0    1    686    760    1    0    1  29.49
09:51:49   0.1   0.0   0.0  99.9   0.0    1    616    600    0    0    0  29.38
09:51:50   0.1   0.0   0.1  99.8   0.0    1    864    897   12   12    5  29.69
09:51:51   0.1   0.0   0.1  99.9   0.0    1    714    723    1    1    5  29.47
09:51:52   0.0   0.0   0.0 100.0   0.0    1    596    577    0    0    0  29.40
09:51:53   0.0   0.0   0.0 100.0   0.0    1    599    583    0    0    0  29.35
09:51:54   0.0   0.0   0.1  99.9   0.0    1    574    560    0    0    0  29.30
09:51:55   0.1   0.0   0.0  99.9   0.0    1    588    592    0    0    0  29.36
09:51:56   0.0   0.0   0.0 100.0   0.0    1    702    683    0    0    0  29.47
09:51:57   0.1   0.0   0.1  99.9   0.0    1    559    566    0    0    0  29.36
  Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
09:51:58   0.1   0.0   0.1  99.9   0.0    1   2596   1047    0    0    0  30.44
09:51:59   0.3   0.0   0.2  99.5   0.0    1   8247   1642    1    0    0  33.62
09:52:00   0.1   0.0   0.0  99.9   0.0    1    801    740    2    0    2  29.52
09:52:01   0.0   0.0   0.0 100.0   0.0    1    685    671    0    0    1  29.44
09:52:02   0.0   0.0   0.1  99.9   0.0    1    569    550    0    0    1  29.28
09:52:03   0.1   0.0   0.0  99.9   0.0    1    605    636    0    0    0  29.35
09:52:04   0.0   0.0   0.1  99.9   0.0    1    628    600    0    0    0  29.30
09:52:05   0.1   0.0   0.0  99.9   0.0    1    752    730    0    0    0  29.58
09:52:06   0.1   0.0   0.1  99.9   0.0    1    692    636    0    0    0  29.36
09:52:07   0.0   0.0   0.1  99.9   0.0    1    557    560    0    0    0  29.37
09:52:08   0.0   0.0   0.0 100.0   0.0    1    677    707    1    0    1  29.38
09:52:09   0.1   0.0   0.1  99.8   0.0    1    675    717    0    0    0  29.43
09:52:10   0.0   0.0   0.1  99.9   0.0    1    696    663    0    0    0  29.35
09:52:11   0.1   0.0   0.0  99.9   0.0    1    658    640    0    0    0  29.35
09:52:12   0.2   0.0   0.2  99.6   0.0    1    642    662    0    0    0  29.83
09:52:13   0.0   0.0   0.1  99.9   0.0    1    561    563    0    0    0  29.31
09:52:14   0.1   0.0   0.0  99.9   0.0    1    584    583    0    0    0  29.27
09:52:15   0.0   0.0   0.0 100.0   0.0    1    614    682    0    0    0  29.47
09:52:16   0.1   0.0   0.1  99.9   0.0    1    667    626    0    0    0  29.42
09:52:17   0.0   0.0   0.0 100.0   0.0    1    555    547    0    0    0  29.33
09:52:18   0.0   0.0   0.0 100.0   0.0    1    610    618    0    0    0  29.28
09:52:19   0.1   0.0   0.1  99.9   0.0    1    624    629    0    0    0  29.32
09:52:20   0.0   0.0   0.0 100.0   0.0    1    697    666    0    0    0  29.34
09:52:21   0.1   0.0   0.0  99.9   0.0    1    629    618    0    0    0  29.34
09:52:22   0.0   0.0   0.1  99.9   0.0    1    555    530    0    0    0  29.28
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Average   0.0   0.0   0.0  99.9   0.0  1.0  957.9  680.3  0.3  0.2  0.3  29.65
 GeoMean   0.0   0.0   0.0  99.9   0.0  1.0  723.3  659.7  0.0  0.0  0.0  29.63
  StdDev   0.1   0.0   0.1   0.1   0.0  0.0 1402.9  211.4  1.6  1.5  1.0   1.19
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Minimum   0.0   0.0   0.0  99.4   0.0  1.0  547.0  530.0  0.0  0.0  0.0  29.26
 Maximum   0.3   0.0   0.3 100.0   0.0  1.0 8247.0 1670.0 12.0 12.0  5.0  37.66
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
Summary:
CPU:  29.65 Watts on average with standard deviation 1.19
Note: power read from RAPL domains: dram, dram, package-0, package-1.
These readings do not cover all the hardware in this device.

C-State    Resident      Count Latency
C6-HSW      99.085%      42217     133
C3-HSW       0.318%       9200      33
C1E-HSW      0.170%       4761      10
C1-HSW       0.246%       3428       2
POLL         0.001%          3       0
C0           0.181%

So on my blade the average power usage when the system is idle is ~30W.
As you can see, when the system is idle the CPUs go into deeper sleep states, hence saving power.
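If you save the powerstat output to a file, the average of the Watts column can also be recomputed with a one-liner. This sketch uses a hard-coded three-line sample taken from the output above:

```shell
# Average the last (Watts) column of powerstat sample lines,
# matching only lines that start with a HH: timestamp.
avg=$(awk '/^[0-9][0-9]:/ {sum += $NF; n++} END {printf "%.2f", sum/n}' <<'EOF'
09:51:23   0.0   0.0   0.1  99.9   0.0    1    576    718    0    0    0  29.51
09:51:24   0.0   0.0   0.1  99.9   0.0    1    620    615    0    0    0  29.37
09:51:25   0.1   0.0   0.0  99.9   0.0    1    791    605    0    0    0  29.66
EOF
)
echo "$avg Watts average"
# → 29.51 Watts average
```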

Case 2: When the system is under load
To reproduce this environment I will use "stress" tool to put some load on my CPU, memory and disks.
I will be running below command in one of the terminal to put some load on the node
# stress -c 18 -i 13 -m 3
stress: info: [5200] dispatching hogs: 18 cpu, 13 io, 3 vm, 0 hdd

Next check the power usage
# powerstat -R -c -z
Running for 60.0 seconds (60 samples at 1.0 second intervals).
Power measurements will start in 0 seconds time.

  Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
09:58:01  78.6   0.0  21.4   0.0   0.0   35   5132  18480    2    0    3 137.36
09:58:02  78.9   0.0  21.1   0.0   0.0   27   5028  18275    0    0    1 137.70
09:58:03  79.8   0.0  20.2   0.0   0.0   30   4792  17996    0    0    0 137.81
09:58:04  75.2   0.0  24.8   0.0   0.0   36   5568  18005    0    0    0 136.29
09:58:05  77.2   0.0  22.8   0.0   0.0   32   5981  18444    0    0    0 137.00
09:58:06  76.1   0.0  23.9   0.0   0.0   34   5681  17843    0    0    0 136.73
09:58:07  77.1   0.0  22.9   0.0   0.0   35   4727  17743    1    0    0 137.35
09:58:08  72.5   0.0  27.5   0.0   0.0   33   5803  18002    0    0    0 136.11
09:58:09  74.9   0.0  25.1   0.0   0.0   35   5751  17756    1    0    1 136.75
09:58:10  71.7   0.0  28.3   0.0   0.0   30   6368  18299    0    0    0 135.94
09:58:11  70.8   0.0  29.2   0.0   0.0   38   6079  18402    0    0    0 135.44
09:58:12  71.5   0.0  28.5   0.0   0.0   36   6201  18146    0    0    0 135.58
09:58:13  76.0   0.0  24.0   0.0   0.0   28   5391  18074    0    0    0 137.09
09:58:14  74.9   0.0  25.1   0.0   0.0   35   5834  18167    0    0    0 136.93
09:58:15  72.9   0.0  27.1   0.0   0.0   37   6104  17988    0    0    0 136.23
09:58:16  70.5   0.0  29.5   0.0   0.0   30   6029  17959    0    0    0 135.77
09:58:17  73.9   0.0  26.1   0.0   0.0   34   5782  18211    1    0    0 137.25
09:58:18  73.9   0.0  26.1   0.0   0.0   28   4833  17817    0    0    0 137.08
09:58:19  74.4   0.0  25.6   0.0   0.0   38   5664  18141    0    0    0 137.02
09:58:20  74.2   0.0  25.8   0.0   0.0   37   6218  18179    0    0    0 136.47
09:58:21  76.9   0.0  23.1   0.0   0.0   30   6020  18442    0    0    0 137.69
09:58:22  74.7   0.0  25.3   0.0   0.0   32   4904  17974    0    0    0 137.22
09:58:23  77.2   0.0  22.8   0.0   0.0   38   5344  17879    0    0    0 137.94
09:58:24  73.9   0.0  26.1   0.0   0.0   35   6255  18041    0    0    0 137.16
09:58:25  73.9   0.0  26.1   0.0   0.0   35   5744  18089    0    0    0 137.06
09:58:26  74.8   0.0  25.2   0.0   0.0   33   5669  18022    0    0    0 137.52
09:58:27  71.7   0.0  28.3   0.0   0.0   36   5451  17969    0    0    0 136.69
09:58:28  74.8   0.0  25.2   0.0   0.0   36   5789  18018    1    0    0 137.51
09:58:29  76.6   0.0  23.4   0.0   0.0   32   8520  18747    8    0    1 137.83
09:58:30  74.2   0.0  25.8   0.0   0.0   34   7467  18298    2    0    0 137.44
09:58:31  68.9   0.0  31.1   0.0   0.0   36   5975  18074    0    0    0 135.63
09:58:32  75.0   0.0  25.0   0.0   0.0   35   5031  17742    0    0    0 137.87
09:58:33  74.6   0.0  25.4   0.0   0.0   31   4711  17499    1    0    0 137.88
09:58:34  70.6   0.0  29.4   0.0   0.0   36   5922  17785    0    0    0 136.76
09:58:35  73.3   0.0  26.7   0.0   0.0   33   5390  17845    0    0    0 137.48
  Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
09:58:36  70.7   0.0  29.3   0.0   0.0   37   6319  18248    0    0    0 136.09
09:58:37  71.5   0.0  28.5   0.0   0.0   38   6036  18126    0    0    0 136.75
09:58:38  73.1   0.0  26.9   0.0   0.0   31   5788  18040    0    0    0 136.98
09:58:39  74.6   0.0  25.4   0.0   0.0   36   5287  17844    0    0    0 137.73
09:58:40  78.6   0.0  21.4   0.0   0.0   34   6193  17913    0    0    0 138.11
09:58:41  75.5   0.0  24.5   0.0   0.0   36   5973  18012    0    0    0 137.70
09:58:42  75.6   0.0  24.4   0.0   0.0   36   5548  17802    0    0    0 137.91
09:58:43  77.1   0.0  22.9   0.0   0.0   29   5514  17557    0    0    0 137.98
09:58:44  77.5   0.0  22.5   0.0   0.0   36   5243  17797    0    0    0 138.10
09:58:45  75.4   0.0  24.6   0.0   0.0   38   6221  17947   38   34   27 137.92
09:58:46  73.9   0.0  26.1   0.0   0.0   35   6462  18384   38   36   48 137.11
09:58:47  73.2   0.0  26.8   0.0   0.0   35   6237  18120    0    0    0 136.74
09:58:48  77.7   0.0  22.3   0.0   0.0   27   5607  18152    1    0    0 138.17
09:58:49  74.9   0.0  25.1   0.0   0.0   37   5490  17965    0    0    1 137.79
09:58:50  76.9   0.0  23.1   0.0   0.0   36   5116  17777   12   12    5 138.42
09:58:51  74.9   0.0  25.1   0.0   0.0   38   5208  17733    1    1    5 138.06
09:58:52  74.1   0.0  25.9   0.0   0.0   33   5093  17786    0    0    0 137.58
09:58:53  73.8   0.0  26.2   0.0   0.0   37   5785  17906    0    0    0 137.43
09:58:54  71.6   0.0  28.4   0.0   0.0   33   5277  17565    0    0    0 137.07
09:58:55  68.8   0.0  31.2   0.0   0.0   29   5674  17579    0    0    0 136.17
09:58:56  72.2   0.0  27.8   0.0   0.0   31   5552  17699    0    0    0 137.55
09:58:57  71.7   0.0  28.3   0.0   0.0   31   5371  17603    0    0    0 137.45
09:58:58  72.0   0.0  28.0   0.0   0.0   28   5472  18050    0    0    0 137.22
09:58:59  72.5   0.0  27.5   0.0   0.0   30   8640  18744    2    0    0 137.22
09:59:00  73.8   0.0  26.2   0.0   0.0   36   6647  18271    6    0    0 137.41
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Average  74.3   0.0  25.7   0.0   0.0 33.8 5781.9 18016.2  1.9  1.4  1.5 137.17
 GeoMean  74.3   0.0  25.6   0.0   0.0 33.6 5741.1 18014.2  0.0  0.0  0.0 137.17
  StdDev   2.4   0.0   2.4   0.0   0.0  3.1  727.8  268.8  7.0  6.4  7.0   0.71
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Minimum  68.8   0.0  20.2   0.0   0.0 27.0 4711.0 17499.0  0.0  0.0  0.0 135.44
 Maximum  79.8   0.0  31.2   0.0   0.0 38.0 8640.0 18747.0 38.0 36.0 48.0 138.42
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
Summary:
CPU: 137.17 Watts on average with standard deviation 0.71
Note: power read from RAPL domains: dram, dram, package-0, package-1.
These readings do not cover all the hardware in this device.

C-State    Resident      Count Latency
C6-HSW       0.000%          3     133
C3-HSW       0.000%          4      33
C1E-HSW      0.000%          5      10
C1-HSW       0.000%          2       2
POLL         0.000%          0       0
C0         100.000%

As you see, the CPUs stay in the running state "C0" and the power usage is around ~137W.
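If you save a powerstat run to a log file, the averages are easy to pull out afterwards. The awk one-liner below is my own helper (not part of powerstat), shown here against a two-line stand-in for a real log:

```shell
# Capture a run first, e.g.:  powerstat -R -c -z | tee powerstat.log
# A two-line sample stands in for the real log here.
cat > /tmp/powerstat.log <<'EOF'
 Average  74.3   0.0  25.7   0.0   0.0 33.8 5781.9 18016.2  1.9  1.4  1.5 137.17
  StdDev   2.4   0.0   2.4   0.0   0.0  3.1  727.8  268.8  7.0  6.4  7.0   0.71
EOF
# The Watts column is last in the table, so $NF picks it out of the Average row.
awk '/^ Average/ {print "Average power: " $NF " Watts"}' /tmp/powerstat.log
# -> Average power: 137.17 Watts
```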


With profile latency-performance

In this tuned profile the force_latency value is set to "1", so the CPU C-state will not go deeper than C1 even when the system is idle. The power usage will therefore always be comparatively high, as the CPUs never really sleep.
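To confirm what the active profile actually configures, you can query tuned directly. A hedged sketch; the tuned.conf path below is the usual RHEL/CentOS location and may differ on your distribution:

```shell
# Show the active tuned profile and the force_latency it configures.
# /usr/lib/tuned/... is the usual RHEL/CentOS location; adjust if needed.
conf=/usr/lib/tuned/latency-performance/tuned.conf
if command -v tuned-adm >/dev/null 2>&1; then
    tuned-adm active
else
    echo "tuned-adm not installed"
fi
if [ -f "$conf" ]; then
    grep force_latency "$conf"      # expect: force_latency=1
else
    echo "no tuned.conf found at $conf"
fi
```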

Case 1: System is Idle
Before starting, let us measure the power usage when the system is completely idle
# powerstat -R -c -z
Running for 60.0 seconds (60 samples at 1.0 second intervals).
Power measurements will start in 0 seconds time.

  Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
10:06:41   0.1   0.0   0.0  99.9   0.0    1    641    731    0    0    0  72.39
10:06:42   0.0   0.0   0.0 100.0   0.0    1    561    518    0    0    0  71.48
10:06:43   0.0   0.0   0.1  99.9   0.0    1    566    534    0    0    0  71.17
10:06:44   0.0   0.0   0.0 100.0   0.0    1    610    570    0    0    0  71.75
10:06:45   0.6   0.0   1.0  98.4   0.0    1   2224   2340   75   70   75  72.49
10:06:46   0.0   0.0   0.0 100.0   0.0    1    633    594    0    0    0  71.41
10:06:47   0.1   0.0   0.0  99.9   0.0    1    622    585    0    0    0  71.43
10:06:48   0.0   0.0   0.1  99.9   0.0    1    728    806    1    0    1  71.49
10:06:49   0.0   0.0   0.0 100.0   0.0    1    631    599    0    0    0  71.49
10:06:50   0.1   0.0   0.1  99.9   0.0    1    845    928   10   12    6  71.44
10:06:51   0.0   0.0   0.1  99.9   0.1    1    723    772    1    1    5  71.99
10:06:52   0.0   0.0   0.0 100.0   0.0    1    603    571    0    0    0  71.46
10:06:53   0.0   0.0   0.0 100.0   0.0    1    560    541    0    0    0  71.40
10:06:54   0.0   0.0   0.0 100.0   0.0    1    594    566    0    0    0  71.34
10:06:55   0.1   0.0   0.0  99.9   0.0    1    580    561    0    0    0  71.53
10:06:56   0.0   0.0   0.0 100.0   0.0    1    658    639    0    0    0  71.96
10:06:57   0.0   0.0   0.1  99.9   0.0    1    593    555    0    0    0  71.41
10:06:58   0.1   0.0   0.0  99.9   0.0    1   2367    823    0    0    0  71.46
10:06:59   0.1   0.0   0.0  99.9   0.0    1   9397   1464    0    0    0  71.89
10:07:00   0.0   0.0   0.1  99.9   0.0    1    796    734    2    0    2  71.70
10:07:01   0.0   0.0   0.0 100.0   0.0    1    656    632    0    0    0  71.52
10:07:02   0.1   0.0   0.0  99.9   0.0    1    563    543    0    0    1  73.34
10:07:03   0.0   0.0   0.1  99.9   0.0    1    600    645    0    0    0  71.81
10:07:04   0.0   0.0   0.0 100.0   0.0    1    630    604    0    0    0  71.77
10:07:05   0.0   0.0   0.1  99.9   0.0    1    754    720    0    0    0  72.06
10:07:06   0.1   0.0   0.0  99.9   0.0    1    666    656    0    0    0  71.59
10:07:07   0.0   0.0   0.0 100.0   0.0    1    601    557    0    0    0  71.53
10:07:08   0.0   0.0   0.1  99.9   0.0    1    683    696    1    0    1  71.52
10:07:09   0.1   0.0   0.0  99.9   0.0    1    669    676    0    0    0  71.55
10:07:10   0.0   0.0   0.0 100.0   0.0    1    691    652    0    0    0  71.43
10:07:11   0.0   0.0   0.1  99.9   0.1    1    669    654    0    0    0  71.58
10:07:12   0.1   0.0   0.0  99.9   0.0    1    583    552    0    0    0  71.54
10:07:13   0.0   0.0   0.0 100.0   0.0    1    580    559    0    0    0  72.71
10:07:14   0.0   0.0   0.0 100.0   0.0    1    614    580    0    0    0  72.01
10:07:15   0.0   0.0   0.0 100.0   0.0    1    553    521    0    0    0  72.33
  Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
10:07:16   0.0   0.0   0.0 100.0   0.0    1    672    654    0    0    0  73.04
10:07:17   0.0   0.0   0.0 100.0   0.0    1    612    551    0    0    0  72.55
10:07:18   0.0   0.0   0.0 100.0   0.0    1    610    606    0    0    0  72.39
10:07:19   0.1   0.0   0.1  99.9   0.0    1    599    567    0    0    0  72.69
10:07:20   0.0   0.0   0.0 100.0   0.0    1    715    675    0    0    0  72.71
10:07:21   0.0   0.0   0.0 100.0   0.0    1    634    606    0    0    0  72.86
10:07:22   0.1   0.0   0.0  99.9   0.0    1    586    539    0    0    0  72.86
10:07:23   0.0   0.0   0.0 100.0   0.0    1    553    520    0    0    0  72.83
10:07:24   0.0   0.0   0.0 100.0   0.0    1    621    589    0    0    0  72.71
10:07:25   0.0   0.0   0.0 100.0   0.0    1    565    531    0    0    0  72.65
10:07:26   0.0   0.0   0.1  99.9   0.0    1    595    570    0    0    0  72.73
10:07:27   0.2   0.0   0.1  99.7   0.0    1    607    607    0    0    0  73.01
10:07:28   0.1   0.0   0.0  99.9   0.0    1   2479   1010    1    0    2  72.77
10:07:29   0.1   0.0   0.1  99.8   0.0    1   9091   1576    1    0    1  72.98
10:07:30   0.0   0.0   0.0 100.0   0.0    1    675    655    0    0    0  72.91
10:07:31   0.1   0.0   0.0  99.9   0.0    1    619    610    0    0    0  72.69
10:07:32   0.0   0.0   0.1  99.9   0.0    1    645    656    0    0    0  72.70
10:07:33   0.0   0.0   0.0 100.0   0.0    1    579    642    0    0    0  72.64
10:07:34   0.0   0.0   0.0 100.0   0.0    1    600    606    0    0    0  72.77
10:07:35   0.0   0.0   0.0 100.0   0.0    1    593    583    0    0    0  72.85
10:07:36   0.1   0.0   0.0  99.9   0.0    1    650    634    0    0    0  72.85
10:07:37   0.0   0.0   0.0 100.0   0.0    1    609    592    0    0    0  72.85
10:07:38   0.1   0.0   0.0  99.9   0.0    1    602    580    0    0    0  72.82
10:07:39   0.1   0.0   0.0  99.9   0.0    1    682    699    0    0    0  72.75
10:07:40   0.0   0.0   0.1  99.9   0.0    1    729    704    0    0    0  72.80
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Average   0.0   0.0   0.0  99.9   0.0  1.0 1006.1  687.3  1.5  1.4  1.6  72.17
 GeoMean   0.0   0.0   0.0  99.9   0.0  1.0  736.1  655.9  0.0  0.0  0.0  72.17
  StdDev   0.1   0.0   0.1   0.2   0.0  0.0 1576.6  284.0  9.7  9.1  9.6   0.62
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Minimum   0.0   0.0   0.0  98.4   0.0  1.0  553.0  518.0  0.0  0.0  0.0  71.17
 Maximum   0.6   0.0   1.0 100.0   0.1  1.0 9397.0 2340.0 75.0 70.0 75.0  73.34
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
Summary:
CPU:  72.17 Watts on average with standard deviation 0.62
Note: power read from RAPL domains: dram, dram, package-0, package-1.
These readings do not cover all the hardware in this device.

C-State    Resident      Count Latency
C6-HSW       0.000%          0     133
C3-HSW       0.000%          0      33
C1E-HSW      0.000%          0      10
C1-HSW      99.229%      60985       2
POLL         0.654%        124       0
C0           0.117%

As expected, the power usage when the system is idle is ~72W here, while for throughput-performance the same was ~30W.
Also notice that the CPU C-state always stays at C1-HSW, while for throughput-performance the dominant C-state at idle was C6-HSW.
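In other words, the profile trades roughly 42 W of idle power for lower wakeup latency. A one-line check of the arithmetic:

```shell
# Idle draw: ~72.17 W (latency-performance) vs ~29.65 W (throughput-performance)
awk 'BEGIN { printf "idle power penalty: %.2f Watts\n", 72.17 - 29.65 }'
# -> idle power penalty: 42.52 Watts
```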

Case 2: When the system is under load
To reproduce a loaded environment I will again use the "stress" tool to put some load on the CPU, I/O and memory.
I will run the below command in one of the terminals to load the node
# stress -c 18 -i 13 -m 3
stress: info: [5200] dispatching hogs: 18 cpu, 13 io, 3 vm, 0 hdd

Next, check the power usage
# powerstat -R -c -z
Running for 60.0 seconds (60 samples at 1.0 second intervals).
Power measurements will start in 0 seconds time.

  Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
10:11:51  58.8   0.0  41.1   0.0   0.1   33  10351  20261    1    1    5 129.77
10:11:52  56.2   0.0  43.7   0.0   0.1   32  10623  20220    0    0    0 128.95
10:11:53  55.9   0.0  44.1   0.0   0.0   35  10580  20338    0    0    0 128.79
10:11:54  56.8   0.0  43.1   0.1   0.0   35  10163  20119    0    0    0 129.28
10:11:55  57.2   0.0  42.7   0.0   0.1   35   9818  19848    0    0    0 129.88
10:11:56  57.4   0.0  42.6   0.0   0.0   35  10231  20053    0    0    0 129.86
10:11:57  55.1   0.0  44.9   0.0   0.0   35  10232  19938    0    0    0 129.36
10:11:58  55.1   0.0  44.9   0.0   0.1   35  10220  19926    0    0    0 129.20
10:11:59  56.7   0.0  43.2   0.1   0.0   35  16106  22753    4    0    0 129.59
10:12:00  55.1   0.0  44.8   0.1   0.0   35  10626  20228    0    0    0 129.72
10:12:01  54.1   0.0  45.9   0.0   0.0   34  10345  20101    2    0    3 129.55
10:12:02  54.1   0.0  45.9   0.0   0.0   35   9651  19712    0    0    1 129.78
10:12:03  54.7   0.0  45.3   0.0   0.0   35   9673  19700    0    0    0 129.59
10:12:04  55.1   0.0  44.9   0.0   0.1   36  10372  20192    0    0    0 129.66
10:12:05  56.3   0.0  43.7   0.0   0.0   33  10352  20294    0    0    0 130.25
10:12:06  57.8   0.0  42.1   0.0   0.1   35   9449  19577    0    0    0 130.90
10:12:07  57.9   0.0  42.0   0.0   0.1   35   9944  19799    0    0    0 131.01
10:12:08  55.9   0.0  43.9   0.1   0.1   28  10218  20024    0    0    0 130.53
10:12:09  54.6   0.0  45.4   0.0   0.1   35   9825  20164    1    0    1 130.14
10:12:10  54.0   0.0  46.0   0.0   0.0   35   9992  20225    0    0    0 129.88
10:12:11  53.7   0.0  46.3   0.0   0.0   35  10535  20295    0    0    0 129.64
10:12:12  53.6   0.0  46.4   0.0   0.0   35  10095  20100    0    0    0 130.04
10:12:13  55.1   0.0  44.9   0.0   0.0   35  10112  19923    0    0    0 130.79
10:12:14  55.5   0.0  44.5   0.0   0.0   35  10231  20114    0    0    0 131.04
10:12:15  54.9   0.0  45.1   0.0   0.0   35  11026  20462    0    0    0 130.56
10:12:16  56.5   0.0  43.5   0.0   0.0   34  10282  20241    0    0    0 130.94
10:12:17  57.9   0.0  42.1   0.0   0.1   35  10454  20099    0    0    0 131.68
10:12:18  58.4   0.0  41.5   0.0   0.1   35  10297  20173    0    0    0 131.85
10:12:19  54.6   0.0  45.4   0.0   0.0   35  10406  20257    0    0    0 130.47
10:12:20  54.3   0.0  45.7   0.0   0.0   35  10616  20393    0    0    0 130.86
10:12:21  55.1   0.0  44.9   0.0   0.0   35  10865  20403    0    0    0 130.87
10:12:22  55.1   0.0  44.8   0.0   0.1   35  10344  20006    0    0    0 130.90
10:12:23  54.8   0.0  45.2   0.0   0.0   35  10103  19905    0    0    0 131.05
10:12:24  54.9   0.0  45.1   0.0   0.0   35  10600  20325    0    0    0 131.05
10:12:25  56.2   0.0  43.8   0.0   0.0   35  10811  20359    0    0    0 131.50
  Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
10:12:26  55.6   0.0  44.4   0.0   0.0   34  10703  20446    0    0    0 130.99
10:12:27  55.9   0.0  44.0   0.1   0.0   34  10696  20486    0    0    0 131.49
10:12:28  56.7   0.0  43.2   0.0   0.1   35  11010  20683    0    0    0 131.85
10:12:29  55.8   0.0  44.2   0.0   0.0   35  16858  23145    3    0    1 131.48
10:12:30  55.6   0.0  44.4   0.0   0.0   35  12076  21155    1    0    0 131.51
10:12:31  55.5   0.0  44.5   0.0   0.0   35  11212  20780    0    0    0 131.57
10:12:32  54.8   0.0  45.1   0.1   0.0   27  10940  22796    0    0    0 131.57
10:12:33  57.0   0.0  42.9   0.0   0.1   35  10804  20561    0    0    0 132.34
10:12:34  55.7   0.0  44.3   0.0   0.0   35  10930  20540    0    0    0 131.40
10:12:35  57.2   0.0  42.8   0.0   0.1   35  10617  20446    0    0    0 132.26
10:12:36  55.1   0.0  44.8   0.1   0.0   31  10129  19919    0    0    0 132.80
10:12:37  55.1   0.0  44.8   0.0   0.1   34  10462  20136    0    0    0 132.04
10:12:38  55.2   0.0  44.8   0.0   0.0   37   9821  20056    0    0    0 132.07
10:12:39  54.7   0.0  45.3   0.0   0.0   35  10056  20125    0    0    0 131.71
10:12:40  53.8   0.0  46.1   0.0   0.1   35  10318  20053    0    0    0 131.87
10:12:41  55.1   0.0  44.8   0.1   0.0   35   9804  19796    0    0    0 132.06
10:12:42  55.4   0.0  44.6   0.1   0.0   35   9184  19393    0    0    0 132.49
10:12:43  54.5   0.0  45.5   0.0   0.0   35  10046  19917    0    0    0 132.02
10:12:44  54.6   0.0  45.4   0.0   0.1   35   9985  19979    0    0    0 131.96
10:12:45  57.2   0.0  42.8   0.1   0.0   38  10564  20350   15   13    3 132.70
10:12:46  55.6   0.0  44.4   0.0   0.0   35  12089  20722   60   57   72 132.09
10:12:47  55.8   0.0  44.1   0.0   0.1   35   9824  19830    1    0    0 132.35
10:12:48  54.4   0.0  45.6   0.0   0.0   34   9621  19643    0    0    0 131.95
10:12:49  56.9   0.0  42.9   0.1   0.1   35  10055  20139    1    0    1 132.79
10:12:50  55.8   0.0  44.2   0.0   0.1   35  10229  19948   11   12    5 132.34
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Average  55.6   0.0  44.3   0.0   0.0 34.6 10559.7 20292.8  1.7  1.4  1.5 130.98
 GeoMean  55.6   0.0  44.3   0.0   0.0 34.5 10504.4 20282.4  0.0  0.0  0.0 130.97
  StdDev   1.2   0.0   1.2   0.0   0.0  1.6 1216.1  671.7  8.0  7.6  9.2   1.08
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Minimum  53.6   0.0  41.1   0.0   0.0 27.0 9184.0 19393.0  0.0  0.0  0.0 128.79
 Maximum  58.8   0.0  46.4   0.1   0.1 38.0 16858.0 23145.0 60.0 57.0 72.0 132.80
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
Summary:
CPU: 130.98 Watts on average with standard deviation 1.08
Note: power read from RAPL domains: dram, dram, package-0, package-1.
These readings do not cover all the hardware in this device.

C-State    Resident      Count Latency
C6-HSW       0.000%          0     133
C3-HSW       0.000%          0      33
C1E-HSW      0.000%          0      10
C1-HSW       0.037%       1194       2
POLL         0.000%          5       0
C0          99.963%

So here the results are almost the same as with throughput-performance, since under load the CPUs stay in the C0 state either way.

Some more examples of using powerstat

NOTE: By default, if powerstat is executed without any arguments, it collects 60 samples at 1 second intervals.
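The interval and sample count are positional arguments, so the sampling plan can be changed; for example (invocation as documented in powerstat(8), guarded in case the tool is missing or cannot read RAPL):

```shell
# powerstat [options] [delay [count]] -- e.g. 0.5 s interval, 120 samples,
# covering a similar ~60 s window at twice the resolution used above.
if command -v powerstat >/dev/null 2>&1; then
    powerstat -R 0.5 120 || echo "powerstat exited with an error (may need RAPL access)"
else
    echo "powerstat not installed (try: yum install powerstat)"
fi
```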

Enable all sampling collection

This gives a combined output with "C-State Statistics", "Average CPU Frequency", "Thermal Zone Temperatures" and a "Power Histogram"
# powerstat -a -R 1 60
Running for 60.0 seconds (60 samples at 1.0 second intervals).
Power measurements will start in 0 seconds time.

  Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts   CPU Freq  GPU W
11:20:40   0.1   0.0   0.0  99.9   0.0    1    728    830    0    0    0  76.13   2.60 GHz  -N/A-
11:20:41   0.0   0.0   0.0 100.0   0.0    1    635    632    0    0    0  75.78   2.60 GHz  -N/A-
11:20:42   0.0   0.0   0.0 100.0   0.0    1    564    563    0    0    0  75.51   2.60 GHz  -N/A-
11:20:43   0.0   0.0   0.0 100.0   0.0    1    533    528    0    0    0  75.74   2.60 GHz  -N/A-
11:20:44   0.0   0.0   0.1  99.9   0.0    1    666    670    0    0    0  75.84   2.60 GHz  -N/A-
11:20:45   0.1   0.0   0.0  99.9   0.0    1    586    577    0    0    0  76.07   2.60 GHz  -N/A-
11:20:46   0.0   0.0   0.0 100.0   0.0    1    558    544    0    0    0  75.75   2.60 GHz  -N/A-
11:20:47   0.0   0.0   0.0 100.0   0.0    1    625    595    0    0    0  75.75   2.60 GHz  -N/A-
11:20:48   0.0   0.0   0.0 100.0   0.0    1    603    617    0    0    0  75.98   2.60 GHz  -N/A-
11:20:49   0.7   0.0   1.1  98.2   0.0    1   2368   2441   75   70   75  77.36   2.60 GHz  -N/A-
11:20:50   0.1   0.0   0.1  99.8   0.0    1    940   1046   11   12    6  76.44   2.60 GHz  -N/A-
11:20:51   0.0   0.0   0.0 100.0   0.0    1    678    758    1    1    5  76.04   2.60 GHz  -N/A-
11:20:52   0.0   0.0   0.1  99.9   0.0    1    580    573    0    0    0  75.96   2.60 GHz  -N/A-
11:20:53   0.0   0.0   0.0 100.0   0.0    1    546    541    0    0    0  76.24   2.60 GHz  -N/A-
11:20:54   0.1   0.0   0.0  99.9   0.0    1    651    646    0    0    0  76.36   2.60 GHz  -N/A-
11:20:55   0.0   0.0   0.0 100.0   0.0    1    594    595    0    0    0  76.05   2.60 GHz  -N/A-
11:20:56   0.0   0.0   0.0 100.0   0.0    1    589    585    0    0    0  76.32   2.60 GHz  -N/A-
11:20:57   0.0   0.0   0.0 100.0   0.0    1    609    579    0    0    0  76.16   2.60 GHz  -N/A-
11:20:58   0.1   0.0   0.0  99.9   0.0    1   1380    596    0    0    0  75.84   2.60 GHz  -N/A-
11:20:59   0.3   0.0   0.1  99.6   0.0    1   9912   1934    0    0    0  76.59   2.60 GHz  -N/A-
11:21:00   0.0   0.0   0.0 100.0   0.0    1    810    741    2    0    2  76.14   2.60 GHz  -N/A-
11:21:01   0.1   0.0   0.1  99.9   0.0    1    624    632    0    0    0  75.82   2.60 GHz  -N/A-
11:21:02   0.0   0.0   0.0 100.0   0.0    1    561    545    0    0    1  75.98   2.60 GHz  -N/A-
11:21:03   0.0   0.0   0.0 100.0   0.0    1    573    587    0    0    0  75.94   2.60 GHz  -N/A-
11:21:04   0.0   0.0   0.0 100.0   0.0    1    633    642    0    0    0  75.73   2.60 GHz  -N/A-
11:21:05   0.1   0.0   0.1  99.9   0.0    1    758    719    0    0    0  75.75   2.60 GHz  -N/A-
11:21:06   0.0   0.0   0.0 100.0   0.0    1    614    614    0    0    0  75.87   2.60 GHz  -N/A-
11:21:07   0.0   0.0   0.0 100.0   0.0    1    649    604    0    0    0  76.27   2.60 GHz  -N/A-
11:21:08   0.0   0.0   0.0 100.0   0.0    1    570    571    0    0    0  75.95   2.60 GHz  -N/A-
11:21:09   0.1   0.0   0.0  99.9   0.0    1    697    766    0    0    0  76.10   2.60 GHz  -N/A-
11:21:10   0.1   0.0   0.0  99.9   0.0    1    708    741    1    0    1  76.20   2.60 GHz  -N/A-
11:21:11   0.0   0.0   0.1  99.9   0.0    1    675    687    0    0    0  75.96   2.60 GHz  -N/A-
11:21:12   0.0   0.0   0.0 100.0   0.0    1    615    612    0    0    0  75.96   2.60 GHz  -N/A-
11:21:13   0.1   0.0   0.0  99.9   0.0    1    537    531    0    0    0  76.22   2.60 GHz  -N/A-
11:21:14   0.0   0.0   0.0 100.0   0.0    1    603    606    0    0    0  75.96   2.60 GHz  -N/A-
  Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts   CPU Freq  GPU W
11:21:15   0.0   0.0   0.0 100.0   0.0    1    571    562    0    0    0  76.38   2.60 GHz  -N/A-
11:21:16   0.0   0.0   0.0 100.0   0.0    1    605    631    0    0    0  76.12   2.60 GHz  -N/A-
11:21:17   0.1   0.0   0.0  99.9   0.0    1    627    596    0    0    0  76.13   2.60 GHz  -N/A-
11:21:18   0.0   0.0   0.0 100.0   0.0    1    557    566    0    0    0  76.32   2.60 GHz  -N/A-
11:21:19   0.0   0.0   0.0 100.0   0.0    1    643    651    0    0    0  76.15   2.60 GHz  -N/A-
11:21:20   0.0   0.0   0.0 100.0   0.0    1    664    635    0    0    0  76.05   2.60 GHz  -N/A-
11:21:21   0.0   0.0   0.1  99.9   0.0    1    609    620    0    0    0  76.15   2.60 GHz  -N/A-
11:21:22   0.1   0.0   0.0  99.9   0.0    1    562    565    0    0    0  75.90   2.60 GHz  -N/A-
11:21:23   0.0   0.0   0.0 100.0   0.0    1    556    547    0    0    0  76.20   2.60 GHz  -N/A-
11:21:24   0.0   0.0   0.0 100.0   0.0    1    617    615    0    0    0  76.17   2.60 GHz  -N/A-
11:21:25   0.0   0.0   0.0 100.0   0.0    1    569    560    0    0    0  76.21   2.60 GHz  -N/A-
11:21:26   0.0   0.0   0.0 100.0   0.0    1    557    552    0    0    0  76.17   2.60 GHz  -N/A-
11:21:27   0.1   0.0   0.0  99.9   0.0    1    875    651    0    0    0  77.02   2.60 GHz  -N/A-
11:21:28   0.0   0.0   0.1  99.9   0.0    1   1458    635    0    0    0  77.37   2.60 GHz  -N/A-
11:21:29   0.2   0.0   0.1  99.7   0.0    1   9915   1926    0    0    1  77.43   2.60 GHz  -N/A-
11:21:30   0.0   0.0   0.1  99.9   0.0    1    720    752    1    0    1  76.88   2.60 GHz  -N/A-
11:21:31   0.1   0.0   0.0  99.9   0.0    1    641    631    0    0    0  76.79   2.60 GHz  -N/A-
11:21:32   0.0   0.0   0.0  99.9   0.1    1    610    587    0    0    0  76.31   2.60 GHz  -N/A-
11:21:33   0.0   0.0   0.0 100.0   0.0    1    580    593    0    0    0  76.34   2.60 GHz  -N/A-
11:21:34   0.0   0.0   0.0 100.0   0.0    1    587    600    0    0    0  76.54   2.60 GHz  -N/A-
11:21:35   0.0   0.0   0.0 100.0   0.0    1    596    584    0    0    0  76.11   2.60 GHz  -N/A-
11:21:36   0.0   0.0   0.0 100.0   0.0    1    613    670    0    0    0  76.09   2.60 GHz  -N/A-
11:21:37   0.0   0.0   0.0 100.0   0.0    1    652    629    0    0    0  76.05   2.60 GHz  -N/A-
11:21:38   0.1   0.0   0.0  99.9   0.0    1    550    543    0    0    0  76.30   2.60 GHz  -N/A-
11:21:39   0.1   0.0   0.0  99.9   0.0    1    742    767    0    0    0  76.41   2.60 GHz  -N/A-
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------  --------- ------
 Average   0.0   0.0   0.0  99.9   0.0  1.0  994.1  701.9  1.5  1.4  1.5  76.19   2.60 GHz  -N/A-
 GeoMean   0.0   0.0   0.0  99.9   0.0  1.0  720.0  661.9  0.0  0.0  0.0  76.19   2.60 GHz   0.00
  StdDev   0.1   0.0   0.1   0.2   0.0  0.0 1678.5  336.3  9.7  9.1  9.6   0.39   0.00 Hz   -N/A-
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------  --------- ------
 Minimum   0.0   0.0   0.0  98.2   0.0  1.0  533.0  528.0  0.0  0.0  0.0  75.51   2.60 GHz  -N/A-
 Maximum   0.7   0.0   1.1 100.0   0.1  1.0 9915.0 2441.0 75.0 70.0 75.0  77.43   2.60 GHz  -N/A-
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------  --------- ------
Summary:
CPU:  76.19 Watts on average with standard deviation 0.39
GPU:   0.00 Watts on average with standard deviation 0.00
Note: power read from RAPL domains: dram, dram, package-0, package-1.
These readings do not cover all the hardware in this device.
Note: No thermal zones on this device.

C-State    Resident      Count Latency
C6-HSW       0.000%          0     133
C3-HSW       0.000%          0      33
C1E-HSW      0.000%          0      10
C1-HSW      99.629%      61118       2
POLL         0.251%        126       0
C0           0.120%

Histogram (of 60 power measurements)

 Range (Watts)  Count
75.514 - 75.705     1 ##
75.706 - 75.897    11 ########################
75.898 - 76.089    13 ############################
76.090 - 76.281    18 ########################################
76.282 - 76.473     9 ####################
76.474 - 76.665     2 ####
76.666 - 76.858     1 ##
76.859 - 77.050     2 ####
77.051 - 77.242     0
77.243 - 77.434     3 ######

Histogram (of 60 CPU utilization measurements)

Range (%CPU)  Count
0.000 - 0.174    56 ########################################
0.175 - 0.349     2 #
0.350 - 0.524     1
0.525 - 0.699     0
0.700 - 0.875     0
0.876 - 1.050     0
1.051 - 1.225     0
1.226 - 1.400     0
1.401 - 1.575     0
1.576 - 1.750     1

Range is zero, cannot produce histogram of CPU average frequencies


Enable Histogram

This appends a histogram of the CPU power measurements to the summary at the end of the run
# powerstat -R -H
Running for 60.0 seconds (60 samples at 1.0 second intervals).
Power measurements will start in 0 seconds time.

  Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
11:03:14  54.7   0.0  45.2   0.1   0.1   35  10273  20083    0    0    0 130.07
11:03:15  56.2   0.0  43.8   0.0   0.0   35  10391  19978    0    0    0 130.79
11:03:16  55.0   0.0  45.0   0.0   0.0   35  10916  20086    0    0    0 129.94
11:03:17  55.6   0.0  44.4   0.0   0.0   35  10268  19892    0    0    0 130.44
11:03:18  55.2   0.0  44.6   0.1   0.1   35  10415  20122    0    0    0 130.73
11:03:19  53.9   0.0  46.1   0.0   0.0   35  10077  20076    0    0    0 129.84
11:03:20  54.2   0.0  45.8   0.0   0.0   35  10044  19819    0    0    0 130.06
11:03:21  55.0   0.0  45.0   0.0   0.0   35   9424  19463    0    0    0 130.44
11:03:22  56.0   0.0  43.7   0.1   0.3   34  10477  24073    0    0    0 130.84
11:03:23  55.5   0.0  44.4   0.0   0.1   35   9703  19740    0    0    0 130.59
11:03:24  55.3   0.0  44.6   0.1   0.0   35  10275  20064    0    0    0 130.42
11:03:25  54.6   0.0  45.4   0.0   0.0   35   9623  19753    0    0    0 130.27
11:03:26  54.9   0.0  45.0   0.0   0.1   35   9444  19531    0    0    0 130.27
11:03:27  56.9   0.0  43.1   0.0   0.0   35  10328  20064    0    0    0 130.95
11:03:28  56.8   0.0  43.2   0.1   0.0   45  13479  21303    0    0    0 130.82
11:03:29  55.2   0.0  44.8   0.0   0.0   35  13593  21262    2    0    2 130.19
11:03:30  57.6   0.0  42.3   0.0   0.1   35  10620  20209    0    0    0 131.09
11:03:31  55.3   0.0  44.7   0.0   0.0   33  10575  20194    0    0    0 130.47
11:03:32  55.9   0.0  44.1   0.0   0.0   35  10215  19983    0    0    0 130.64
11:03:33  54.8   0.0  45.0   0.1   0.1   35  10678  20224    1    0    0 130.10
11:03:34  55.3   0.0  44.7   0.0   0.0   35  10376  20045    0    0    0 130.26
11:03:35  53.8   0.0  46.2   0.0   0.0   34  10199  19931    0    0    0 129.78
11:03:36  57.4   0.0  42.6   0.0   0.0   35  10411  20188    0    0    0 131.12
11:03:37  55.8   0.0  44.2   0.1   0.0   35  10492  20164    0    0    0 130.74
11:03:38  55.3   0.0  44.7   0.0   0.0   35  10354  20118    0    0    0 130.57
11:03:39  56.2   0.0  43.8   0.1   0.0   34  10263  20001    0    0    0 130.69
11:03:40  55.0   0.0  45.0   0.0   0.0   35  10755  20312    0    0    0 130.26
11:03:41  54.1   0.0  45.8   0.0   0.1   35  10629  20274    0    0    0 129.91
11:03:42  55.0   0.0  45.0   0.0   0.0   35  10498  20084    0    0    0 130.17
11:03:43  55.2   0.0  44.8   0.0   0.0   35  10981  20251    0    0    0 130.17
11:03:44  55.0   0.0  45.0   0.0   0.0   35  10618  20277    0    0    0 130.03
11:03:45  55.6   0.0  44.3   0.1   0.0   35  10486  20196    0    0    1 130.19
11:03:46  56.9   0.0  43.0   0.1   0.1   33  10032  19766    0    0    0 131.07
11:03:47  55.3   0.0  44.7   0.0   0.0   35  10079  19938    0    0    0 130.63
11:03:48  58.6   0.0  41.2   0.1   0.1   35  10364  20035    0    0    0 131.62
  Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
11:03:49  56.9   0.0  43.0   0.1   0.0   37  11002  20277    6    1    1 130.82
11:03:50  55.4   0.0  44.6   0.0   0.1   35  11145  20446    6   11    5 129.96
11:03:51  56.4   0.0  43.6   0.1   0.0   35   9995  19933    1    1    5 130.73
11:03:52  54.2   0.0  45.7   0.1   0.1   36  10269  20055    0    0    0 129.88
11:03:53  56.7   0.0  43.3   0.0   0.0   35  10882  20435    0    0    0 130.83
11:03:54  55.4   0.0  44.6   0.0   0.0   35  10314  20226    0    0    0 130.27
11:03:55  58.2   0.0  41.8   0.0   0.0   35  10202  19926    0    0    0 131.20
11:03:56  56.1   0.0  43.8   0.0   0.1   35  10320  19916    0    0    0 130.65
11:03:57  54.7   0.0  45.2   0.0   0.1   35  10573  19918    0    0    0 130.09
11:03:58  57.3   0.0  42.6   0.1   0.1   42  13602  21503    0    0    0 130.76
11:03:59  55.9   0.0  44.1   0.0   0.0   35  13752  21402    2    0    0 130.33
11:04:00  56.6   0.0  43.4   0.0   0.0   35  11798  20576    2    0    2 130.67
11:04:01  56.6   0.0  43.3   0.1   0.1   35  10919  20352    0    0    1 130.67
11:04:02  56.2   0.0  43.7   0.0   0.1   35  10133  19869    0    0    1 130.84
11:04:03  55.7   0.0  44.1   0.1   0.1   35  10205  19979    0    0    0 130.25
11:04:04  56.3   0.0  43.7   0.0   0.1   35   9981  19789    0    0    0 130.83
11:04:05  54.5   0.0  45.4   0.1   0.0   35  10732  20289    0    0    0 129.98
11:04:06  55.2   0.0  44.8   0.0   0.0   35   9902  20018    0    0    0 130.06
11:04:07  55.0   0.0  45.0   0.0   0.0   35  10204  20154    0    0    0 129.85
11:04:08  56.4   0.0  43.6   0.0   0.0   35  10115  20042    0    0    0 130.42
11:04:09  55.3   0.0  44.6   0.0   0.1   35  10158  20099    1    0    1 130.17
11:04:10  55.3   0.0  44.6   0.0   0.1   35  10543  20257    0    0    0 130.69
11:04:11  56.0   0.0  44.0   0.0   0.0   35  10389  20045    0    0    0 130.45
11:04:12  56.8   0.0  43.1   0.0   0.1   35  10443  20042    0    0    0 130.74
11:04:13  55.8   0.0  44.2   0.0   0.0   35  10332  20128    0    0    0 130.41
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Average  55.7   0.0  44.2   0.0   0.0 35.2 10587.8 20219.6  0.3  0.2  0.3 130.46
 GeoMean  55.7   0.0  44.2   0.0   0.0 35.2 10554.2 20210.5  0.0  0.0  0.0 130.46
  StdDev   1.0   0.0   1.0   0.0   0.0  1.6  895.6  630.3  1.2  1.4  1.0   0.39
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Minimum  53.8   0.0  41.2   0.0   0.0 33.0 9424.0 19463.0  0.0  0.0  0.0 129.78
 Maximum  58.6   0.0  46.2   0.1   0.3 45.0 13752.0 24073.0  6.0 11.0  5.0 131.62
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
Summary:
CPU: 130.46 Watts on average with standard deviation 0.39
Note: power read from RAPL domains: dram, dram, package-0, package-1.
These readings do not cover all the hardware in this device.
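The RAPL note above means powerstat derives the Watts column from the kernel's energy counters, which count microjoules (on RAPL-capable kernels these are typically exposed under /sys/class/powercap/intel-rapl*/energy_uj, though the exact path is hardware-dependent). A minimal sketch of the same arithmetic, using made-up counter values rather than live sysfs reads:

```shell
# rapl_watts: convert two energy_uj samples (microjoules) taken "secs"
# seconds apart into average watts. On a real system the two samples would
# be read from an energy_uj file before and after the interval.
rapl_watts() {
  local e1=$1 e2=$2 secs=$3
  awk -v a="$e1" -v b="$e2" -v t="$secs" 'BEGIN { printf "%.2f\n", (b - a) / (t * 1e6) }'
}

# Example with made-up counter values: a 130460000 uJ delta over 1 second
rapl_watts 1000000000 1130460000 1
```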

Histogram (of 60 power measurements)

  Range (Watts)   Count
129.781 - 129.964     7 #######################
129.965 - 130.148     7 #######################
130.149 - 130.333    12 ########################################
130.334 - 130.517     7 #######################
130.518 - 130.702     9 ##############################
130.703 - 130.886    12 ########################################
130.887 - 131.071     2 ######
131.072 - 131.255     3 ##########
131.256 - 131.440     0
131.441 - 131.624     1 ###

Histogram (of 60 CPU utilization measurements)

  Range (%CPU)    Count
 99.875 -  99.886     2 #
 99.887 -  99.899     0
 99.900 -  99.911     0
 99.912 -  99.924     0
 99.925 -  99.937    14 #############
 99.938 -  99.949     2 #
 99.950 -  99.962     0
 99.963 -  99.974     0
 99.975 -  99.987     0
 99.988 -  99.999    42 ########################################


Show process activity log

This will show the process fork/exec/exit activity log at the end of the execution
# powerstat -R -s
Running for 60.0 seconds (60 samples at 1.0 second intervals).
Power measurements will start in 0 seconds time.

  Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
11:25:57  59.1   0.0  40.8   0.1   0.0   35  10831  20371    1    0    0 130.01
11:25:58  56.2   0.0  43.8   0.0   0.0   34  10823  20321    0    0    0 129.30
11:25:59  56.4   0.0  43.6   0.0   0.0   35  16537  23175    0    0    0 129.61
11:26:00  54.9   0.0  45.0   0.0   0.1   28  10874  20393    0    0    0 129.18
11:26:01  55.2   0.0  44.8   0.0   0.0   34  10502  20094    2    0    2 129.16
11:26:02  54.9   0.0  45.1   0.1   0.0   33  10324  20108    0    0    1 129.12
11:26:03  55.7   0.0  44.2   0.1   0.1   35  10398  20258    0    0    0 129.72
11:26:04  59.2   0.0  40.7   0.0   0.1   35  10425  20210    0    0    0 131.04
11:26:05  58.5   0.0  41.4   0.1   0.0   35  10390  20415    0    0    0 130.61
11:26:06  56.2   0.0  43.8   0.0   0.0   35  10614  20343    0    0    0 130.01
11:26:07  58.0   0.0  42.0   0.0   0.1   35  10156  19989    0    0    0 130.69
11:26:08  55.9   0.0  44.1   0.1   0.0   35  10094  19997    0    0    0 130.20
11:26:09  57.5   0.0  42.4   0.0   0.1   35  10218  20028    0    0    0 130.47
11:26:10  54.0   0.0  46.0   0.0   0.0   35  10515  20430    1    0    1 129.29
11:26:11  54.8   0.0  45.2   0.0   0.0   35  10422  20120    0    0    0 129.65
11:26:12  55.5   0.0  44.5   0.0   0.0   35  10787  20148    0    0    0 129.78
11:26:13  56.1   0.0  43.9   0.0   0.0   35  10414  20213    0    0    0 130.23
11:26:14  56.1   0.0  43.9   0.0   0.1   29  10418  20060    0    0    0 130.29
11:26:15  54.8   0.0  45.1   0.1   0.1   35  10151  19862    0    0    0 130.03
11:26:16  55.2   0.0  44.8   0.0   0.0   34  10010  19894    0    0    0 130.47
11:26:17  55.4   0.0  44.4   0.0   0.2   35  10319  19964    0    0    0 130.78
11:26:18  56.5   0.0  43.4   0.1   0.0   35   9874  19770    0    0    0 130.77
11:26:19  53.8   0.0  46.2   0.0   0.0   35   9957  19754    0    0    0 129.76
11:26:20  54.3   0.0  45.7   0.0   0.0   35  10574  20074    0    0    0 130.05
11:26:21  55.2   0.0  44.8   0.0   0.0   35  10042  19996    0    0    0 130.51
11:26:22  56.3   0.0  43.6   0.0   0.1   35   9875  19979    0    0    0 130.91
11:26:23  57.0   0.0  42.9   0.0   0.1   35  10501  20254    0    0    0 131.22
11:26:24  54.5   0.0  45.5   0.1   0.0   35  11144  20257    0    0    0 130.60
11:26:25  56.0   0.0  43.9   0.0   0.1   35  10462  19960    0    0    0 130.70
11:26:26  54.7   0.0  45.3   0.0   0.0   35  10603  20407    0    0    0 130.52
11:26:27  56.4   0.0  43.6   0.0   0.0   35  11158  20557    0    0    0 130.92
11:26:28  55.0   0.0  45.0   0.0   0.0   35  11611  20603    0    0    0 130.26
11:26:29  55.4   0.0  44.6   0.0   0.0   34  17089  23325    2    0    0 130.35
11:26:30  59.6   0.0  40.2   0.1   0.1   36  10700  20369    1    0    1 132.11
11:26:31  56.8   0.0  43.2   0.0   0.0   35  11308  20668    0    0    0 131.48
  Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
11:26:32  57.0   0.0  43.0   0.0   0.0   35  10580  20293    0    0    0 131.47
11:26:33  57.8   0.0  42.2   0.0   0.1   35  10600  20266    0    0    0 131.81
11:26:34  55.4   0.0  44.6   0.0   0.0   33  10096  20099    0    0    0 131.18
11:26:35  55.5   0.0  44.4   0.0   0.1   35  10216  19940    0    0    0 131.61
11:26:36  56.0   0.0  43.9   0.1   0.0   35  10450  20028    0    0    0 132.05
11:26:37  55.0   0.0  45.0   0.0   0.0   35  10059  20038    0    0    0 131.37
11:26:38  55.9   0.0  44.0   0.0   0.1   30  10002  19780    0    0    0 131.87
11:26:39  56.7   0.0  43.2   0.1   0.0   35  11244  20582    0    0    0 132.10
11:26:40  54.3   0.0  45.6   0.0   0.1   35  10629  20460    0    0    0 131.23
11:26:41  54.2   0.0  45.8   0.0   0.0   35  10485  20307    0    0    0 131.22
11:26:42  55.3   0.0  44.7   0.0   0.0   35  10366  20110    0    0    0 131.66
11:26:43  54.0   0.0  46.0   0.0   0.0   36  10699  20285    0    0    0 131.16
11:26:44  55.2   0.0  44.8   0.0   0.0   35  10653  20134    0    0    0 131.67
11:26:45  55.8   0.0  44.0   0.0   0.1   35  10034  19971    0    0    0 131.97
11:26:46  55.0   0.0  45.0   0.0   0.0   35   9779  19686    0    0    0 131.82
11:26:47  58.5   0.0  41.4   0.1   0.1   34  12355  19748    3    0    0 132.45
11:26:48  54.8   0.0  45.2   0.0   0.1   35  10302  20153    0    0    0 131.79
11:26:49  54.5   0.0  45.4   0.1   0.0   34  12477  20890   75   70   74 131.86
11:26:50  56.2   0.0  43.6   0.1   0.1   35  10330  20245   13   12    7 132.56
11:26:51  55.1   0.0  44.9   0.0   0.0   35  10296  20063    1    1    5 131.89
11:26:52  56.6   0.0  43.4   0.0   0.1   35  10929  20268    0    0    0 132.40
11:26:53  55.0   0.0  45.0   0.0   0.0   35  10798  20423    0    0    0 131.79
11:26:54  54.2   0.0  45.8   0.0   0.0   35  10524  20104    0    0    0 131.68
11:26:55  55.7   0.0  44.2   0.1   0.0   35  10006  19986    0    0    0 132.22
11:26:56  55.2   0.0  44.7   0.1   0.1   35  10475  20188    0    0    0 132.04
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Average  55.8   0.0  44.1   0.0   0.0 34.6 10741.7 20273.5  1.6  1.4  1.5 130.91
 GeoMean  55.8   0.0  44.1   0.0   0.0 34.5 10686.1 20265.2  0.0  0.0  0.0 130.91
  StdDev   1.3   0.0   1.4   0.0   0.0  1.4 1236.9  602.1  9.7  9.1  9.5   0.94
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Minimum  53.8   0.0  40.2   0.0   0.0 28.0 9779.0 19686.0  0.0  0.0  0.0 129.12
 Maximum  59.6   0.0  46.2   0.1   0.2 36.0 17089.0 23325.0 75.0 70.0 74.0 132.56
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
Summary:
CPU: 130.91 Watts on average with standard deviation 0.94
Note: power read from RAPL domains: dram, dram, package-0, package-1.
These readings do not cover all the hardware in this device.

Log of fork()/exec()/exit() calls:
11:25:56 fork: parent pid=2174 -> clone pid=19697 (ssOMAgent)
11:26:00 fork: parent pid=2174 -> clone pid=19698 (ssOMAgent)
11:26:00 fork: parent pid=2174 -> clone pid=19699 (ssOMAgent)
11:26:00 exit: pid=19698 exit_code=0 (ssOMAgent)
11:26:00 exit: pid=19699 exit_code=0 (ssOMAgent)
11:26:01 exit: pid=19627 exit_code=0 (<unknown>)
11:26:09 fork: parent pid=2057 -> fork pid=19700 (/usr/bin/postgres)
11:26:09 exit: pid=19700 exit_code=0 (/usr/bin/postgres)
11:26:29 fork: parent pid=4412 -> clone pid=19701 (/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/bin/java)
11:26:29 fork: parent pid=4412 -> clone pid=19702 (/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/bin/java)
11:26:29 fork: parent pid=2057 -> fork pid=19703 (/usr/bin/postgres)
11:26:29 exit: pid=19703 exit_code=0 (/usr/bin/postgres)
11:26:46 fork: parent pid=2174 -> clone pid=19704 (ssOMAgent)
11:26:46 fork: parent pid=2174 -> clone pid=19705 (ssOMAgent)
11:26:46 fork: parent pid=2174 -> clone pid=19706 (ssOMAgent)
11:26:48 fork: parent pid=1914 -> fork pid=19707 (/usr/bin/monit)
11:26:48 exec: pid=19707 (/usr/bin/monit)
11:26:48 exec: pid=19707 (/usr/bin/monit)
11:26:48 fork: parent pid=1914 -> fork pid=19708 (/usr/bin/monit)
11:26:48 fork: parent pid=19707 -> fork pid=19709 (/bin/sh)
11:26:48 exit: pid=19709 exit_code=0 (/bin/sh)
11:26:48 fork: parent pid=19707 -> fork pid=19710 (/bin/sh)
11:26:48 exec: pid=19708 (/usr/bin/monit)
11:26:48 exec: pid=19708 (/usr/bin/monit)
11:26:48 fork: parent pid=1914 -> fork pid=19711 (/usr/bin/monit)
11:26:48 fork: parent pid=19710 -> fork pid=19712 (/bin/sh)
11:26:48 fork: parent pid=19710 -> fork pid=19713 (/bin/sh)
11:26:48 fork: parent pid=19708 -> fork pid=19714 (/bin/sh)
11:26:48 exec: pid=19711 (/usr/bin/monit)
11:26:48 exec: pid=19711 (/usr/bin/monit)
11:26:48 exit: pid=19714 exit_code=0 (/bin/sh)
11:26:48 fork: parent pid=19708 -> fork pid=19715 (/bin/sh)
11:26:48 fork: parent pid=19711 -> fork pid=19716 (/bin/sh)
11:26:48 exec: pid=19713 (/bin/sh)
11:26:48 exec: pid=19712 (/bin/sh)
11:26:48 exec: pid=19712 (/bin/sh)
11:26:48 fork: parent pid=19712 -> fork pid=19717 (/bin/sh)
11:26:48 fork: parent pid=19716 -> fork pid=19718 (/bin/sh)
11:26:48 fork: parent pid=19716 -> fork pid=19719 (/bin/sh)
11:26:48 fork: parent pid=19716 -> fork pid=19720 (/bin/sh)
11:26:48 fork: parent pid=19716 -> fork pid=19721 (/bin/sh)
11:26:48 fork: parent pid=19716 -> fork pid=19722 (/bin/sh)
11:26:48 fork: parent pid=19715 -> fork pid=19723 (/bin/sh)
11:26:48 fork: parent pid=19715 -> fork pid=19724 (/bin/sh)
11:26:48 exec: pid=19722 (/bin/sh)
11:26:48 exec: pid=19723 (/bin/sh)
11:26:48 exec: pid=19723 (/bin/sh)
11:26:48 exec: pid=19721 (/bin/sh)
11:26:48 fork: parent pid=19723 -> fork pid=19725 (/bin/sh)
11:26:48 exec: pid=19717 (/bin/sh)
11:26:48 exit: pid=19717 exit_code=0 (/bin/sh)
11:26:48 fork: parent pid=19712 -> fork pid=19726 (/bin/sh)
11:26:48 exec: pid=19719 (/bin/sh)
11:26:48 exec: pid=19718 (/bin/sh)
11:26:48 exec: pid=19724 (/bin/sh)
11:26:48 exec: pid=19720 (/bin/sh)
11:26:48 exec: pid=19726 (/bin/sh)
11:26:48 exit: pid=19726 exit_code=0 (/bin/sh)
11:26:48 fork: parent pid=19712 -> fork pid=19727 (/bin/sh)
11:26:48 exec: pid=19725 (/bin/sh)
11:26:48 exit: pid=19725 exit_code=0 (/bin/sh)
11:26:48 fork: parent pid=19723 -> fork pid=19728 (/bin/sh)
11:26:48 fork: parent pid=19728 -> fork pid=19729 (/bin/sh)
11:26:48 exit: pid=19719 exit_code=0 (/bin/sh)
11:26:48 exit: pid=19720 exit_code=256 (/bin/sh)
11:26:48 exit: pid=19721 exit_code=256 (/bin/sh)
11:26:48 exit: pid=19722 exit_code=0 (/bin/sh)
11:26:48 exit: pid=19718 exit_code=0 (/bin/sh)
11:26:48 exit: pid=19716 exit_code=0 (/bin/sh)
11:26:48 fork: parent pid=19711 -> fork pid=19730 (/bin/sh)
11:26:48 exec: pid=19727 (/bin/sh)
11:26:48 exit: pid=19727 exit_code=0 (/bin/sh)
11:26:48 fork: parent pid=19712 -> fork pid=19731 (/bin/sh)
11:26:48 fork: parent pid=19731 -> fork pid=19732 (/bin/sh)
11:26:48 fork: parent pid=19731 -> fork pid=19733 (/bin/sh)
11:26:48 fork: parent pid=19730 -> fork pid=19734 (/bin/sh)
11:26:48 fork: parent pid=19730 -> fork pid=19735 (/bin/sh)
11:26:48 fork: parent pid=19730 -> fork pid=19736 (/bin/sh)
11:26:48 fork: parent pid=19730 -> fork pid=19737 (/bin/sh)
11:26:48 fork: parent pid=19730 -> fork pid=19738 (/bin/sh)
11:26:48 exec: pid=19729 (/bin/sh)
11:26:48 exit: pid=19729 exit_code=0 (/bin/sh)
11:26:48 exit: pid=19728 exit_code=0 (/bin/sh)
11:26:48 fork: parent pid=19723 -> fork pid=19739 (/bin/sh)
11:26:48 exec: pid=19735 (/bin/sh)
11:26:48 exec: pid=19732 (/bin/sh)
11:26:48 exec: pid=19736 (/bin/sh)
11:26:48 exec: pid=19733 (/bin/sh)
11:26:48 exec: pid=19738 (/bin/sh)
11:26:48 exec: pid=19739 (/bin/sh)
11:26:48 exec: pid=19737 (/bin/sh)
11:26:48 exec: pid=19734 (/bin/sh)
11:26:48 exit: pid=19733 exit_code=0 (/bin/sh)
11:26:48 exit: pid=19732 exit_code=0 (/bin/sh)
11:26:48 exit: pid=19731 exit_code=0 (/bin/sh)
11:26:48 fork: parent pid=19712 -> fork pid=19740 (/bin/sh)
11:26:48 fork: parent pid=19739 -> fork pid=19741 (su)
11:26:48 exec: pid=19740 (/bin/sh)
11:26:48 exit: pid=19740 exit_code=0 (/bin/sh)
11:26:48 exit: pid=19712 exit_code=0 (/bin/sh)
11:26:48 exit: pid=19713 exit_code=256 (/bin/sh)
11:26:48 exit: pid=19710 exit_code=256 (/bin/sh)
11:26:48 fork: parent pid=19707 -> fork pid=19742 (/bin/sh)
11:26:48 exec: pid=19741 (su)
11:26:48 exec: pid=19741 (su)
11:26:48 exec: pid=19741 (su)
11:26:48 fork: parent pid=19742 -> fork pid=19743 (/bin/sh)
11:26:48 fork: parent pid=19742 -> fork pid=19744 (/bin/sh)
11:26:48 exec: pid=19744 (/bin/sh)
11:26:48 exit: pid=19734 exit_code=0 (/bin/sh)
11:26:48 exit: pid=19735 exit_code=0 (/bin/sh)
11:26:48 exit: pid=19736 exit_code=256 (/bin/sh)
11:26:48 exit: pid=19737 exit_code=256 (/bin/sh)
11:26:48 exit: pid=19738 exit_code=0 (/bin/sh)
11:26:48 exit: pid=19730 exit_code=0 (/bin/sh)
11:26:48 fork: parent pid=19711 -> fork pid=19745 (/bin/sh)
11:26:48 exec: pid=19741 (su)
11:26:49 fork: parent pid=19745 -> fork pid=19746 (/bin/sh)
11:26:49 fork: parent pid=19745 -> fork pid=19747 (/bin/sh)
11:26:49 fork: parent pid=19745 -> fork pid=19748 (/bin/sh)
11:26:49 fork: parent pid=19745 -> fork pid=19749 (/bin/sh)
11:26:49 fork: parent pid=19745 -> fork pid=19750 (/bin/sh)
11:26:49 exec: pid=19743 (/bin/sh)
11:26:49 exec: pid=19743 (/bin/sh)
11:26:49 fork: parent pid=19743 -> fork pid=19751 (/bin/sh)
11:26:49 exec: pid=19746 (/bin/sh)
11:26:49 exec: pid=19749 (/bin/sh)
11:26:49 exec: pid=19750 (/bin/sh)
11:26:49 exec: pid=19748 (/bin/sh)
11:26:49 exec: pid=19751 (/bin/sh)
11:26:49 exec: pid=19747 (/bin/sh)
11:26:49 exit: pid=19751 exit_code=0 (/bin/sh)
11:26:49 fork: parent pid=19743 -> fork pid=19752 (/bin/sh)
11:26:49 exit: pid=19746 exit_code=0 (/bin/sh)
11:26:49 exit: pid=19748 exit_code=256 (/bin/sh)
11:26:49 exit: pid=19747 exit_code=0 (/bin/sh)
11:26:49 exit: pid=19749 exit_code=256 (/bin/sh)
11:26:49 exit: pid=19750 exit_code=0 (/bin/sh)
11:26:49 exit: pid=19745 exit_code=0 (/bin/sh)
11:26:49 fork: parent pid=19711 -> fork pid=19753 (/bin/sh)
11:26:49 fork: parent pid=19753 -> fork pid=19754 (/bin/sh)
11:26:49 fork: parent pid=19753 -> fork pid=19755 (/bin/sh)
11:26:49 fork: parent pid=19753 -> fork pid=19756 (/bin/sh)
11:26:49 fork: parent pid=19753 -> fork pid=19757 (/bin/sh)
11:26:49 fork: parent pid=19753 -> fork pid=19758 (/bin/sh)
11:26:49 fork: parent pid=19741 -> fork pid=19759 (python)
11:26:49 exec: pid=19752 (/bin/sh)
11:26:49 exit: pid=19752 exit_code=0 (/bin/sh)
11:26:49 exec: pid=19754 (/bin/sh)
11:26:49 fork: parent pid=19743 -> fork pid=19760 (/bin/sh)
11:26:49 exec: pid=19756 (/bin/sh)
11:26:49 exec: pid=19758 (/bin/sh)
11:26:49 exec: pid=19757 (/bin/sh)
11:26:49 exec: pid=19759 (python)
11:26:49 fork: parent pid=19759 -> fork pid=19761 (sh)
11:26:49 exit: pid=19754 exit_code=0 (/bin/sh)
11:26:49 exec: pid=19760 (/bin/sh)
11:26:49 exit: pid=19760 exit_code=0 (/bin/sh)
11:26:49 fork: parent pid=19743 -> fork pid=19762 (/bin/sh)
11:26:49 exec: pid=19755 (/bin/sh)
11:26:49 exit: pid=19755 exit_code=256 (/bin/sh)
11:26:49 exit: pid=19756 exit_code=256 (/bin/sh)
11:26:49 exit: pid=19757 exit_code=256 (/bin/sh)
11:26:49 exit: pid=19758 exit_code=0 (/bin/sh)
11:26:49 exit: pid=19753 exit_code=0 (/bin/sh)
11:26:49 exit: pid=19711 exit_code=0 (/usr/bin/monit)
11:26:49 exec: pid=19761 (sh)
11:26:49 fork: parent pid=19762 -> fork pid=19763 (/bin/sh)
11:26:49 fork: parent pid=19762 -> fork pid=19764 (/bin/sh)
11:26:49 exit: pid=19761 exit_code=0 (sh)
11:26:49 exit: pid=19759 exit_code=0 (python)
11:26:49 fork: parent pid=19741 -> fork pid=19765 (python)
11:26:49 exec: pid=19764 (/bin/sh)
11:26:49 exec: pid=19765 (python)
11:26:49 fork: parent pid=19765 -> fork pid=19766 (sh)
11:26:49 exec: pid=19763 (/bin/sh)
11:26:49 exec: pid=19766 (sh)
11:26:49 exit: pid=19766 exit_code=0 (sh)
11:26:49 exit: pid=19765 exit_code=0 (python)
11:26:49 fork: parent pid=19741 -> fork pid=19767 (python)
11:26:49 exit: pid=19764 exit_code=0 (/bin/sh)
11:26:49 exit: pid=19763 exit_code=0 (/bin/sh)
11:26:49 exit: pid=19762 exit_code=0 (/bin/sh)
11:26:49 fork: parent pid=19743 -> fork pid=19768 (/bin/sh)
11:26:49 exec: pid=19767 (python)
11:26:49 fork: parent pid=19767 -> fork pid=19769 (sh)
11:26:49 exec: pid=19768 (/bin/sh)
11:26:49 exit: pid=19768 exit_code=0 (/bin/sh)
11:26:49 exit: pid=19743 exit_code=0 (/bin/sh)
11:26:49 exit: pid=19744 exit_code=256 (/bin/sh)
11:26:49 exit: pid=19742 exit_code=256 (/bin/sh)
11:26:49 exit: pid=19707 exit_code=0 (/usr/bin/monit)
11:26:49 exec: pid=19769 (sh)
11:26:49 exit: pid=19769 exit_code=0 (sh)
11:26:49 exit: pid=19767 exit_code=0 (python)
11:26:49 fork: parent pid=19741 -> fork pid=19770 (python)
11:26:49 exec: pid=19770 (python)
11:26:49 fork: parent pid=19770 -> fork pid=19771 (/bin/sh)
11:26:49 fork: parent pid=19770 -> fork pid=19772 (/bin/sh)
11:26:49 fork: parent pid=19770 -> fork pid=19773 (/bin/sh)
11:26:49 fork: parent pid=19770 -> fork pid=19774 (/bin/sh)
11:26:49 fork: parent pid=19770 -> fork pid=19775 (/bin/sh)
11:26:49 fork: parent pid=19770 -> fork pid=19776 (/bin/sh)
11:26:49 exec: pid=19775 (/bin/sh)
11:26:49 exec: pid=19774 (/bin/sh)
11:26:49 exec: pid=19772 (/bin/sh)
11:26:49 exec: pid=19771 (/bin/sh)
11:26:49 exec: pid=19776 (/bin/sh)
11:26:49 exec: pid=19773 (/bin/sh)
11:26:49 exit: pid=19772 exit_code=0 (/bin/sh)
11:26:49 exit: pid=19773 exit_code=0 (/bin/sh)
11:26:49 exit: pid=19774 exit_code=0 (/bin/sh)
11:26:49 exit: pid=19771 exit_code=0 (/bin/sh)
11:26:49 exit: pid=19775 exit_code=256 (/bin/sh)
11:26:49 exit: pid=19776 exit_code=0 (/bin/sh)
11:26:49 exit: pid=19770 exit_code=0 (python)
11:26:49 exit: pid=19741 exit_code=0 (su)
11:26:49 exit: pid=19739 exit_code=0 (/bin/sh)
11:26:49 fork: parent pid=19723 -> fork pid=19777 (/bin/sh)
11:26:49 fork: parent pid=16907 -> fork pid=19778 (kworker/u32:0)
11:26:49 fork: parent pid=19777 -> fork pid=19779 (/bin/sh)
11:26:49 exec: pid=19778 (kworker/u32:0)
11:26:49 fork: parent pid=19779 -> fork pid=19780 (/bin/sh)
11:26:49 exec: pid=19780 (/bin/sh)
11:26:49 exit: pid=19778 exit_code=0 (kworker/u32:0)
11:26:49 exit: pid=19780 exit_code=0 (/bin/sh)
11:26:49 exit: pid=19779 exit_code=0 (/bin/sh)
11:26:49 exit: pid=19777 exit_code=0 (/bin/sh)
11:26:49 exit: pid=19723 exit_code=0 (/bin/sh)
11:26:49 exit: pid=19724 exit_code=0 (/bin/sh)
11:26:49 exit: pid=19715 exit_code=0 (/bin/sh)
11:26:49 fork: parent pid=19708 -> fork pid=19781 (/bin/sh)
11:26:49 exec: pid=19781 (/bin/sh)
11:26:49 exit: pid=19781 exit_code=0 (/bin/sh)
11:26:49 exit: pid=19708 exit_code=0 (/usr/bin/monit)
11:26:49 fork: parent pid=2057 -> fork pid=19782 (/usr/bin/postgres)
11:26:49 exit: pid=19782 exit_code=0 (/usr/bin/postgres)
11:26:49 fork: parent pid=2162 -> clone pid=19783 (ssProbeframework)
11:26:49 fork: parent pid=2162 -> clone pid=19784 (ssProbeframework)
11:26:49 fork: parent pid=2162 -> fork pid=19785 (ssProbeframework)
11:26:49 fork: parent pid=2162 -> fork pid=19786 (ssProbeframework)
11:26:49 exec: pid=19786 (ssProbeframework)
11:26:49 exec: pid=19786 (ssProbeframework)
11:26:49 fork: parent pid=19786 -> fork pid=19787 (/bin/sh)
11:26:49 exec: pid=19785 (ssProbeframework)
11:26:49 fork: parent pid=2162 -> clone pid=19788 (ssProbeframework)
11:26:49 fork: parent pid=19785 -> fork pid=19789 (/bin/sh)
11:26:50 exec: pid=19789 (/bin/sh)
11:26:50 exit: pid=19789 exit_code=0 (/bin/sh)
11:26:50 fork: parent pid=19785 -> fork pid=19790 (/bin/sh)
11:26:50 fork: parent pid=19785 -> fork pid=19791 (/bin/sh)
11:26:50 exec: pid=19791 (/bin/sh)
11:26:50 exec: pid=19791 (/bin/sh)
11:26:50 exec: pid=19791 (/bin/sh)
11:26:50 exec: pid=19790 (/bin/sh)
11:26:50 exit: pid=19790 exit_code=0 (/bin/sh)
11:26:50 exec: pid=19787 (/bin/sh)
11:26:50 exit: pid=19787 exit_code=0 (/bin/sh)
11:26:50 fork: parent pid=19786 -> fork pid=19792 (/bin/sh)
11:26:50 fork: parent pid=19786 -> fork pid=19793 (/bin/sh)
11:26:50 fork: parent pid=19786 -> fork pid=19794 (/bin/sh)
11:26:50 exit: pid=19791 exit_code=0 (/bin/sh)
11:26:50 exit: pid=19785 exit_code=0 (ssProbeframework)
11:26:50 exec: pid=19793 (/bin/sh)
11:26:50 exec: pid=19792 (/bin/sh)
11:26:50 exec: pid=19794 (/bin/sh)
11:26:51 exit: pid=19792 exit_code=0 (/bin/sh)
11:26:51 exit: pid=19793 exit_code=0 (/bin/sh)
11:26:51 exit: pid=19794 exit_code=0 (/bin/sh)
11:26:51 fork: parent pid=19786 -> fork pid=19795 (/bin/sh)
11:26:51 exec: pid=19795 (/bin/sh)
11:26:51 exit: pid=19795 exit_code=0 (/bin/sh)
11:26:51 exit: pid=19786 exit_code=0 (ssProbeframework)

I hope the article was useful.


How to configure a local customised repository for zypper-based installation in SuSE Enterprise Linux

Earlier I wrote an article with a detailed step-by-step guide to creating an autoyast.xml file for automated scratch installation of SLES 11 and SLES 12

Step by Step Guide to create autoyast xml file for SuSE Linux (SLES) with examples
In this article I will show you a detailed list of steps to create a custom repository for SuSE Linux Enterprise Server.



Generate GPG Key

Before starting you will need a GPG key, which will be used to sign the content of the repository. If you already have an existing GPG key then you can skip this step; otherwise create a new GPG key for your custom repository.

NOTE: Here the highlighted sections are the inputs which must be given while creating a key; you can give different input based on your requirement
# gpg --gen-key
gpg (GnuPG) 2.0.9; Copyright (C) 2008 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
   (1) DSA and Elgamal (default)
   (2) DSA (sign only)
   (5) RSA (sign only)
Your selection? 5
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 4096
Requested keysize is 4096 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all
Is this correct? (y/N) y

You need a user ID to identify your key; the software constructs the user ID
from the Real Name, Comment and Email Address in this form:
    "Heinrich Heine (Der Dichter) <heinrichh@duesseldorf.de>"

Real name: Deepak Prasad (GoLinuxHub)
Email address: golinuxhub1@gmail.com
Comment: This is a test Key
You selected this USER-ID:
    "Deepak Prasad (GoLinuxHub) (This is a test Key) <golinuxhub1@gmail.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.
+-----------------------------------------------------+
| New passphrase                                      |
|                                                     |
|                                                     |
| Passphrase ________________________________________ |
|                                                     |
|           <OK>                     <Cancel>         |
+-----------------------------------------------------+

+-----------------------------------------------------+
| Repeat passphrase                                   |
|                                                     |
|                                                     |
| Passphrase ________________________________________ |
|                                                     |
|           <OK>                     <Cancel>         |
+-----------------------------------------------------+
 
can't connect to `/root/.gnupg/S.gpg-agent': No such file or directory
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: key 031D26CD marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
pub   4096R/031D26CD 2018-06-21
      Key fingerprint = EF12 A620 E193 D165 AF2D  B60D 51EB 6A3E 4BF2 3A26
uid                  Deepak Prasad (GoLinuxHub) (This is a test Key) <golinuxhub1@gmail.com>

Note that this key cannot be used for encryption.  You may want to use
the command "--edit-key" to generate a subkey for this purpose.

Now we have successfully generated a GPG key.
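The same interactive answers can also be captured in a GnuPG batch file for unattended key creation. A sketch, mirroring the choices made above; the file path and passphrase here are only examples:

```shell
# Hypothetical batch file reproducing the interactive choices above:
# RSA sign-only, 4096 bits, key never expires.
cat > /tmp/gpg-batch <<'EOF'
Key-Type: RSA
Key-Length: 4096
Key-Usage: sign
Name-Real: Deepak Prasad (GoLinuxHub)
Name-Comment: This is a test Key
Name-Email: golinuxhub1@gmail.com
Expire-Date: 0
Passphrase: mypassw0rd
%commit
EOF

# Run it non-interactively (requires gpg on the node):
# gpg --batch --gen-key /tmp/gpg-batch
```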

To check the details of the available keys on your node
 # gpg --list-keys
/root/.gnupg/pubring.gpg
------------------------
pub   1024D/9C800ACA 2000-10-19 [expired: 2018-03-17]
uid                  SuSE Package Signing Key <build@suse.de>

pub   1024R/307E3D54 2006-03-21 [expired: 2018-03-17]
uid                  SuSE Package Signing Key <build@suse.de>

pub   2048R/39DB7C82 2013-01-31 [expired: 2017-01-30]
uid                  SuSE Package Signing Key <build@suse.de>

pub   4096R/031D26CD 2018-06-21
uid                  Deepak Prasad (GoLinuxHub) (This is a test Key) <golinuxhub1@gmail.com>

Here "031D26CD" is the GPG key ID which we will use to sign the contents of the repository


Create a text file with passphrase

Next create a plain-text file containing the passphrase used for the respective GPG key ID
# echo "mypassw0rd"> /tmp/password

This will be used at a later stage.
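Since the passphrase file is stored in plain text, restricting it to root is a sensible precaution (my own addition, not something the later steps require):

```shell
# Write the passphrase file and lock its permissions down to the owner.
echo "mypassw0rd" > /tmp/password
chmod 600 /tmp/password
stat -c '%a %n' /tmp/password   # shows the octal mode and the file name
```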





Create directory structure

For this article I will create my directory structure under "/tmp/deeprasa" for the custom SuSE repository

Make sure the below rpm is installed on your node:
inst-source-utils
You can download and install it using the zypper or rpm command.

Next navigate to the directory under which you want to create the directory structure
# cd /tmp/deeprasa

# create_update_source.sh .
Creating ./updates..
/EXTRA_PROV not found, trying to find it elsewhere...
INFO:    datadirs       : ./updates/
INFO:    languages      : english
INFO:    output dir     : ./updates/
WARNING: extra_provides : file ./updates//EXTRA_PROV not found!
INFO:    processed 0 packages in 1 volumes
INFO:    now recoding to UTF-8: packages packages.DU packages.en

This will create the below structure
golinuxhub-server:/tmp/deeprasa # ls -l *
updates:
total 20
-rw-r--r-- 1 root root    0 Jun 21 11:12 content
-rw-r--r-- 1 root root   42 Jun 21 11:12 directory.yast
drwxr-xr-x 2 root root 4096 Jun 21 11:12 media.1
-rw-r--r-- 1 root root   10 Jun 21 11:12 packages
-rw-r--r-- 1 root root   10 Jun 21 11:12 packages.DU
-rw-r--r-- 1 root root   10 Jun 21 11:12 packages.en

yast:
total 8
-rw-r--r-- 1 root root 11 Jun 21 11:12 instorder
-rw-r--r-- 1 root root 20 Jun 21 11:12 order


Copy the rpms for the custom repository

Next create directories under /tmp/deeprasa/updates where you will copy all the rpms
# cd /tmp/deeprasa/updates

Here we have created three directories where we will copy the rpms based on the architecture type.
# mkdir -p suse/x86_64 suse/i686 suse/noarch

My rpms are present inside /tmp/rpms
# cp -av /tmp/rpms/ suse/x86_64/
`/tmp/rpms/' -> `suse/x86_64/rpms'
`/tmp/rpms/bash-doc-3.2-147.35.1.x86_64.rpm' -> `suse/x86_64/rpms/bash-doc-3.2-147.35.1.x86_64.rpm'
`/tmp/rpms/bash-3.2-147.35.1.x86_64.rpm' -> `suse/x86_64/rpms/bash-3.2-147.35.1.x86_64.rpm'
`/tmp/rpms/bind-9.9.6P1-0.39.1.x86_64.rpm' -> `suse/x86_64/rpms/bind-9.9.6P1-0.39.1.x86_64.rpm'
`/tmp/rpms/bind-chrootenv-9.9.6P1-0.39.1.x86_64.rpm' -> `suse/x86_64/rpms/bind-chrootenv-9.9.6P1-0.39.1.x86_64.rpm'


Create necessary files

Once all the rpms are copied, it is time to create the other files and directories needed for the repository
# cd /tmp/deeprasa/updates/suse

# create_package_descr -x setup/descr/EXTRA_PROV -C
INFO:    datadirs       : .
INFO:    languages      : english
INFO:    output dir     : ./setup/descr/
WARNING: extra_provides : file setup/descr/EXTRA_PROV not found!
INFO:    creating output directory ./setup/descr/
INFO:    processed 8 packages in 1 volumes
INFO:    now recoding to UTF-8: packages packages.DU packages.en

Next create the MD5SUMS file, which will contain the md5sum value of all the available rpms
# create_md5sums ./
INFO:   created MD5SUMS in /tmp/deeprasa/updates/suse/./setup/descr
INFO:   created MD5SUMS in /tmp/deeprasa/updates/suse/./x86_64

Below is my MD5SUMS file for the list of rpms
golinuxhub-server:/tmp/deeprasa/updates/suse/x86_64 # cat MD5SUMS
8b29f664006cab0187d18647e22dea87  bash-3.2-147.35.1.x86_64.rpm
d1d426cd61af5ee8ee971ea61418d023  bash-doc-3.2-147.35.1.x86_64.rpm
26f0829b54d2b8260c1c0f5efb7ac3d1  bind-9.9.6P1-0.39.1.x86_64.rpm
a3450462b957602502b85d21bcbf38c8  bind-chrootenv-9.9.6P1-0.39.1.x86_64.rpm
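The MD5SUMS file can be re-verified at any later point with `md5sum -c`. A sketch, using a throwaway file in place of a real rpm (the file name below is just an example):

```shell
# Build a demo directory with one stand-in "rpm" and a checksum file in the
# same two-column format create_md5sums writes, then verify it.
mkdir -p /tmp/md5demo && cd /tmp/md5demo
echo "dummy rpm payload" > bash-3.2-147.35.1.x86_64.rpm
md5sum *.rpm > MD5SUMS
md5sum -c MD5SUMS   # reports "<file>: OK" as long as nothing has changed
```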

Next create a directory.yast file listing the content of setup/descr as shown below
golinuxhub-server:/tmp/deeprasa/updates/suse/x86_64 # cd ../setup/descr/

golinuxhub-server:/tmp/deeprasa/updates/suse/setup/descr # ls > directory.yast

golinuxhub-server:/tmp/deeprasa/updates/suse/setup/descr # ls -l
total 24
-rw-r--r-- 1 root root  135 Jun 21 11:41 MD5SUMS
-rw-r--r-- 1 root root   56 Jun 21 11:44 directory.yast
-rw-r--r-- 1 root root 4927 Jun 21 11:40 packages
-rw-r--r-- 1 root root 1766 Jun 21 11:40 packages.DU
-rw-r--r-- 1 root root 1684 Jun 21 11:40 packages.en
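One side note on the `ls > directory.yast` step: the listing includes directory.yast itself, because the shell creates the redirection target before ls runs. A quick demo with stand-in files:

```shell
# Recreate the setup/descr layout with empty files, then generate the
# index the same way the article does.
mkdir -p /tmp/yastdemo && cd /tmp/yastdemo
touch MD5SUMS packages packages.DU packages.en
ls > directory.yast
grep -c '^directory.yast$' directory.yast   # the file lists itself once
```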

Next create the sha1 checksums
# cd /tmp/deeprasa/updates/

golinuxhub-server:/tmp/deeprasa/updates/ # create_sha1sums -x -n .

I will use the default VENDOR header, as found on my installation DVD, for the content file
golinuxhub-server:/tmp/deeprasa/updates/ # sed -i '1iVENDOR        SUSE LINUX Products GmbH, Nuernberg, Germany' /tmp/deeprasa/updates/content

This will populate your content file as shown below
# cat content
VENDOR        SUSE LINUX Products GmbH, Nuernberg, Germany
META SHA1 1206b18fb0b70c36ef39a1b2e9f105488836e42a  packages
META SHA1 1206b18fb0b70c36ef39a1b2e9f105488836e42a  packages.DU
META SHA1 1206b18fb0b70c36ef39a1b2e9f105488836e42a  packages.en
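Each META SHA1 line can be re-derived by hand with sha1sum. A sketch with a stand-in file, since it only demonstrates the "META SHA1 <sum>  <name>" format used in the content file:

```shell
# Recompute the checksum line for a packages file the way create_sha1sums
# records it: "META SHA1 <40-hex-digit sum>  <file name>".
mkdir -p /tmp/shademo && cd /tmp/shademo
echo "pkgdata" > packages
printf 'META SHA1 %s  packages\n' "$(sha1sum packages | cut -d' ' -f1)"
```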


Assign GPG

Here our GPG key ID is "031D26CD", which we created in the first stage of this article.
# cd /tmp/deeprasa/updates/media.1
# gpg --local-user 031D26CD -b --sign --armor --passphrase-file /tmp/password --batch products
# gpg --local-user 031D26CD  --export --armor  > products.key
# ls > directory.yast

NOTE: Here /tmp/password contains the passphrase assigned to the GPG key.

Next, repeat the same for the content file
# cd /tmp/deeprasa/updates/
# gpg --local-user 031D26CD -b --sign --armor --passphrase-file /tmp/password  --batch content
# gpg --local-user 031D26CD --export --armor  > content.key

These will create
-rw-r--r-- 1 root root  197 Jun 21 14:36 products.asc
-rw-r--r-- 1 root root 5541 Jun 21 14:36 products.key

and
-rw-r--r-- 1 root root  197 Jun 21 14:36 content.asc
-rw-r--r-- 1 root root 5541 Jun 21 14:37 content.key

respectively.


Create archive of the repo

I will navigate to the directory where my repo exists and create a "test_repo.tgz"
# cd /tmp/deeprasa/updates

# tar -czvf ../test_repo.tgz *
content
content.asc
content.key
directory.yast
media.1/
media.1/products.asc
media.1/products.key
media.1/media
media.1/products
media.1/directory.yast
packages
packages.DU
packages.en
suse/
suse/setup/
suse/setup/descr/
suse/setup/descr/directory.yast
suse/setup/descr/packages
suse/setup/descr/packages.en
suse/setup/descr/MD5SUMS
suse/setup/descr/packages.DU
suse/x86_64/
suse/x86_64/bash-doc-3.2-147.35.1.x86_64.rpm
suse/x86_64/bash-3.2-147.35.1.x86_64.rpm
suse/x86_64/MD5SUMS
suse/x86_64/bind-9.9.6P1-0.39.1.x86_64.rpm
suse/x86_64/bind-chrootenv-9.9.6P1-0.39.1.x86_64.rpm

So here our repo structure is complete. You can archive this and use it for installation via zypper.


Validate the repo

I will copy the archive I created above to my test setup where we will validate the repository along with zypper

Below is my setup detail
So I will be creating a repo on my NFS client while the archive will be extracted on the server

On Server
# mkdir /tmp/repo && cd /tmp/repo
# tar -xzvf test_repo.tgz

Below is the extracted content
# ls -l
total 1108
-rw-r--r-- 1 root root     248 Jun 21  2018 content
-rw-r--r-- 1 root root     197 Jun 21  2018 content.asc
-rw-r--r-- 1 root root    5541 Jun 21  2018 content.key
-rw-r--r-- 1 root root      42 Jun 21  2018 directory.yast
drwxr-xr-x 2 root root    4096 Jun 21  2018 media.1
-rw-r--r-- 1 root root      10 Jun 21  2018 packages
-rw-r--r-- 1 root root      10 Jun 21  2018 packages.DU
-rw-r--r-- 1 root root      10 Jun 21  2018 packages.en
drwxr-xr-x 4 root root    4096 Jun 21  2018 suse


On Client
Now on the client side I will create a repo with an alias "test_repo"
# zypper addrepo nfs://180.144.62.160/tmp/repo test_repo
Adding repository 'test_repo' [done]
Repository 'test_repo' successfully added
Enabled: Yes
Autorefresh: No
GPG check: Yes
URI: nfs://180.144.62.160/tmp/repo

Our repo is successfully created

Let us check the available packages
# zypper pa
Building repository 'test_repo' cache [done]
Loading repository data...
Reading installed packages...
S | Repository | Name           | Version        | Arch
--+------------+----------------+----------------+-------
v | test_repo  | bash           | 3.2-147.35.1   | x86_64
v | test_repo  | bash-doc       | 3.2-147.35.1   | x86_64
  | test_repo  | bind           | 9.9.6P1-0.39.1 | x86_64
  | test_repo  | bind-chrootenv | 9.9.6P1-0.39.1 | x86_64

So all looks good.

I hope the article was useful.

How to check and compare rpm version and release in Linux using bash script


Recently I came across a situation where I had to install a hotfix patch on my Linux setup and programmatically compare the version of every installed rpm against a list of rpms downloaded from the yum repository. In most organizations the Linux servers are not connected to the internet, so the comparison has to be done locally.

I have an excel sheet with 1000+ rpms, while my setup has around 400+ rpms installed. I need to compare the rpms on both sides and, if a newer rpm is found, update it.





Comparing rpm versions is very tricky as each package follows its own versioning scheme; the only general naming convention is the format below
name-version-release.arch

But here each section can be different.
With Red Hat we have the tool "rpmdev-vercmp", which is part of the "rpmdevtools" rpm

This tool can be used to perform the comparison
# rpmdev-vercmp python-chardet-2.2.1-1.el7_1.noarch python-chardet-2.2.1-1.el7_2.noarch
python-chardet-2.2.1-1.el7_1.noarch < python-chardet-2.2.1-1.el7_2.noarch

But as luck would have it, I am not allowed to install this additional rpm. So I decided to write my own comparison algorithm and came up with the script below
#!/bin/bash

# Get both the package detail and store them in their respective variable
pkg1="$1"
pkg2="$2"

# Check if rpm is installed
function check_rpm_status {
 rpm -q $1 | grep -v "not installed" >/dev/null 2>&1
 if [ $? == "0" ];then
    return 0
 else
    return 1
 fi
}

function get_rpm_version {
 python -c "print '-'.join('$1'.rsplit('-',2)[-2:])"
}

# Get the version and release detail of both the packages
if check_rpm_status $pkg1;then
    pkg1_ver=`rpm -q --queryformat "%{VERSION}-%{RELEASE}.%{ARCH}\n" $pkg1`
else
    pkg1_ver=$(get_rpm_version $pkg1)
fi

if check_rpm_status $pkg2;then
    pkg2_ver=`rpm -q --queryformat "%{VERSION}-%{RELEASE}.%{ARCH}\n" $pkg2`
else
    pkg2_ver=$(get_rpm_version $pkg2)
fi

# Modify the collected version and release detail to further compare them
pkg1_ver_modified=`echo $pkg1_ver | sed -E -e 's/.el[0-9]|.x86_64|.i[0-9]86|.noarch//g; s/-|_/./g; s/[a-z]//g'`
pkg2_ver_modified=`echo $pkg2_ver | sed -E -e 's/.el[0-9]|.x86_64|.i[0-9]86|.noarch//g; s/-|_/./g; s/[a-z]//g'`

# Store the version into array for comparison
IFS="." read -a pkg1_array <<< ${pkg1_ver_modified}
IFS="." read -a pkg2_array <<< ${pkg2_ver_modified}

# Collect the result and store in these variables
both_are_equal=0
pkg1_is_greater=0
pkg2_is_greater=0

# Main
for ((i=0; i<${#pkg1_array[@]} || i<${#pkg2_array[@]}; i++)); do
  [[ ${pkg1_array[$i]} -eq ${pkg2_array[$i]} ]] && ((both_are_equal++))
  if [[ ${pkg1_array[$i]} -gt ${pkg2_array[$i]} ]];then
     ((pkg1_is_greater++))
     break
  fi
  [[ ${pkg1_array[$i]} -lt ${pkg2_array[$i]} ]] && ((pkg2_is_greater++))
done


if [[ ${#pkg2_array[@]} -eq ${#pkg1_array[@]} && ${#pkg1_array[@]} -eq $both_are_equal ]];then
   echo "$pkg1 is equal to $pkg2"
elif [[ $pkg1_is_greater -gt "0" && $pkg2_is_greater -eq "0" ]];then
   echo "$pkg1 > $pkg2"
else
   echo "$pkg1 < $pkg2"
fi

This script works the same way as the "rpmdev-vercmp" tool.
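As a cross-check, GNU "sort -V" (version sort) can compare two version-release strings without any extra rpm. This is an alternative sketch, not what rpmdev-vercmp does internally, and it does not handle epochs or tilde pre-releases.

```shell
#!/bin/bash
# Compare two version-release strings using GNU coreutils version sort.
vercmp() {
  if [ "$1" = "$2" ]; then
    echo "equal"
  elif [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
    # $1 sorts first, so it is the lower version
    echo "older"
  else
    echo "newer"
  fi
}
vercmp 1.4.13-4.el7_1 1.4.13-13.el7_1   # first is older (4 < 13 numerically)
vercmp 3.13.1-166.el7 3.12.1-166.el7    # first is newer
```

Note that sort -V orders dotted numeric components numerically, which is why 4 sorts before 13; rpm's real algorithm (rpmvercmp) additionally handles epochs and alphabetic segments.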



Syntax:
/tmp/compare-rpm.sh $pkg1 $pkg2

Below are some examples I tried for various types of rpms
# ./compare-rpm.sh libssh2-1.4.13-4.el7_1.20.x86_64 libssh2-1.4.13-13.el7_1.20.x86_64
libssh2-1.4.13-4.el7_1.20.x86_64 < libssh2-1.4.13-13.el7_1.20.x86_64

# ./compare-rpm.sh selinux-policy-targeted-3.13.1-166.el7.noarch selinux-policy-targeted-3.12.1-166.el7.noarch
selinux-policy-targeted-3.13.1-166.el7.noarch > selinux-policy-targeted-3.12.1-166.el7.noarch

# ./compare-rpm.sh plymouth-core-libs-0.8.9-0.28.20140113.el7.x86_64 plymouth-core-libs-0.8.10-0.28.20140114.el7.x86_64
plymouth-core-libs-0.8.9-0.28.20140113.el7.x86_64 < plymouth-core-libs-0.8.10-0.28.20140114.el7.x86_64

# ./compare-rpm.sh net-tools-2.0-0.22.20131004git.el7.x86_64 net-tools-2.0-0.22.20141004git.el7.x86_64
net-tools-2.0-0.22.20131004git.el7.x86_64 < net-tools-2.0-0.22.20141004git.el7.x86_64

# ./compare-rpm.sh Red_Hat_Enterprise_Linux-Release_Notes-7-en-US-7-2.el7.noarch Red_Hat_Enterprise_Linux-Release_Notes-7-en-US-7-3.el7.noarch
Red_Hat_Enterprise_Linux-Release_Notes-7-en-US-7-2.el7.noarch < Red_Hat_Enterprise_Linux-Release_Notes-7-en-US-7-3.el7.noarch

# ./compare-rpm.sh  linux-firmware-20170606-56.gitc990aae.el7.noarch linux-firmware-20170606-56.gitc998aae.el7.noarch
linux-firmware-20170606-56.gitc990aae.el7.noarch < linux-firmware-20170606-56.gitc998aae.el7.noarch

At least this tool worked for me; I hope it can be useful to others reading this article.

Please share your feedback if you face any issues with this script, we can try to make this more robust.

How to check and update planned day light saving (DST) changes (timezone) in Linux

Daylight Saving Time is observed in multiple countries across the globe, and in many places the DST rules keep changing every couple of years.

IMPORTANT NOTE: If your system is configured with online NTP pool servers then you need not worry about leap seconds or DST changes, as the NTP server will take care of them and adjust your system clock accordingly.

The article below assumes that you don't have an NTP server and are dependent on the locally installed timezone (tzdata) rpm.




How do I check the planned DST changes for a timezone?

You can get this information from (https://www.timeanddate.com), but you must make sure that your local system is also in sync with the DST changes shown there

For example, I would like to see the planned DST changes for the CET timezone.
From (https://www.timeanddate.com/time/change/germany/berlin) we get the below information
25 Mar 2018 - Daylight Saving Time Starts
When local standard time is about to reach
Sunday, 25 March 2018, 02:00:00 clocks are turned forward 1 hour to
Sunday, 25 March 2018, 03:00:00 local daylight time instead.

28 Oct 2018 - Daylight Saving Time Ends
When local daylight time is about to reach
Sunday, 28 October 2018, 03:00:00 clocks are turned backward 1 hour to
Sunday, 28 October 2018, 02:00:00 local standard time instead.

Let's match it with the DST changes available on my system
# zdump -v /usr/share/zoneinfo/CET | grep 2018
/usr/share/zoneinfo/CET  Sun Mar 25 00:59:59 2018 UTC = Sun Mar 25 01:59:59 2018 CET isdst=0 gmtoff=3600
/usr/share/zoneinfo/CET  Sun Mar 25 01:00:00 2018 UTC = Sun Mar 25 03:00:00 2018 CEST isdst=1 gmtoff=7200
/usr/share/zoneinfo/CET  Sun Oct 28 00:59:59 2018 UTC = Sun Oct 28 02:59:59 2018 CEST isdst=1 gmtoff=7200
/usr/share/zoneinfo/CET  Sun Oct 28 01:00:00 2018 UTC = Sun Oct 28 02:00:00 2018 CET isdst=0 gmtoff=3600
So we know that my local timezone rpm is able to handle the DST changes.
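The same transition can be spot-checked from the shell with GNU date by converting the UTC instants on either side of the switch (assuming the system tzdata carries the CET rules shown in the zdump output above):

```shell
# One second before the switch: still CET (UTC+1).
TZ=CET date -d '2018-03-25 00:59:59 UTC' '+%Y-%m-%d %H:%M:%S %Z'
# One second later: the clock has jumped to 03:00:00 CEST (UTC+2).
TZ=CET date -d '2018-03-25 01:00:00 UTC' '+%Y-%m-%d %H:%M:%S %Z'
```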

On RHEL 7 we can also get this information on the planned DST changes with timedatectl
# timedatectl status
      Local time: Sun 2018-03-25 03:00:00 CEST
  Universal time: Sun 2018-03-25 01:00:00 UTC
        RTC time: Sun 2018-03-25 01:25:41
       Time zone: CET (CEST, +0200)
     NTP enabled: yes
NTP synchronized: no
 RTC in local TZ: no
      DST active: yes
 Last DST change: DST began at
                  Sun 2018-03-25 01:59:59 CET
                  Sun 2018-03-25 03:00:00 CEST
 Next DST change: DST ends (the clock jumps one hour backwards) at
                  Sun 2018-10-28 02:59:59 CEST
                  Sun 2018-10-28 02:00:00 CET


But how do we know it will really work?

Let us validate this by manually tweaking our local timezone and date

First change the local timezone to CET; the existing timezone, as you can see, is 'Asia/Kolkata'
# ll /etc/localtime
lrwxrwxrwx. 1 root root 34 Jan 11 12:28 /etc/localtime -> ../usr/share/zoneinfo/Asia/Kolkata

Change it to CET (use -f to overwrite the existing symlink)
# ln -sf ../usr/share/zoneinfo/CET /etc/localtime

# ll /etc/localtime
lrwxrwxrwx 1 root root 25 Jan 21 11:51 /etc/localtime -> ../usr/share/zoneinfo/CET

My current date and time
# date
Sun Jan 21 11:52:07 CET 2018





Let's set it to just before "Sun Mar 25 02:00:00 2018 CET", when we know the DST change should shift the system clock one hour ahead
# date --set "25 Mar 2018 1:59:56 CET"
Sun Mar 25 01:59:56 CET 2018

On another terminal I have a while loop running to monitor the changing time
# while true;do echo -n "checking DST changes with timezone "; date;sleep 1; done
checking DST changes with timezone Sun Mar 25 01:59:56 CET 2018
checking DST changes with timezone Sun Mar 25 01:59:57 CET 2018
checking DST changes with timezone Sun Mar 25 01:59:58 CET 2018
checking DST changes with timezone Sun Mar 25 01:59:59 CET 2018
checking DST changes with timezone Sun Mar 25 03:00:00 CEST 2018
checking DST changes with timezone Sun Mar 25 03:00:01 CEST 2018
checking DST changes with timezone Sun Mar 25 03:00:02 CEST 2018
checking DST changes with timezone Sun Mar 25 03:00:03 CEST 2018

If you notice the time changed from 01:59:59 to 03:00:00 because of the planned DST change

Next let's check the DST end change, which as per the timezone data is scheduled for 28 Oct 2018, when the time shifts back one hour
# date --set "28 Oct 2018 02:59:56 CEST"
Sun Oct 28 02:59:56 CEST 2018

Using our while loop
# while true;do echo -n "checking DST changes with timezone "; date;sleep 1; done
checking DST changes with timezone Sun Oct 28 02:59:56 CEST 2018
checking DST changes with timezone Sun Oct 28 02:59:57 CEST 2018
checking DST changes with timezone Sun Oct 28 02:59:58 CEST 2018
checking DST changes with timezone Sun Oct 28 02:59:59 CEST 2018
checking DST changes with timezone Sun Oct 28 02:00:00 CET 2018
checking DST changes with timezone Sun Oct 28 02:00:01 CET 2018
checking DST changes with timezone Sun Oct 28 02:00:02 CET 2018
checking DST changes with timezone Sun Oct 28 02:00:03 CET 2018

So DST ended with the expected time shift from 02:59:59 to 02:00:00.


What should I do if the timezone (tzdata) rpm does not have the planned DST changes?

Many times the DST schedule changes without much prior notification. In such a situation you are very much dependent on NTP, but what if you don't have an NTP server?
In that case you have to make sure you have the latest timezone (tzdata) rpm containing the new rules for the specific timezone.

For Red Hat you can get the list of changes in each tzdata rpm on the page below
https://access.redhat.com/articles/1187353

In case your vendor has not yet released a tzdata rpm with the fix you need, you can always download it from the main source, the tz database

For latest available tzdata
https://www.iana.org/time-zones

If you want to access older tzdata archive
ftp://ftp.iana.org/tz/



For the sake of this article I will give an example from a recent scenario.
In 2016 the Turkish government announced that it would no longer observe DST, so older tzdata rpms are not aware of this change; if not updated, they will continue to shift the time as per the old planned DST changes

My existing tzdata rpm
# rpm -qa | grep tzdata
tzdata-2016a-1.el7.noarch

which is currently unaware that the Turkey timezone should have no more DST changes after the year 2016
# zdump -v /usr/share/zoneinfo/Turkey | grep 2017
/usr/share/zoneinfo/Turkey  Sun Mar 26 00:59:59 2017 UTC = Sun Mar 26 02:59:59 2017 EET isdst=0 gmtoff=7200
/usr/share/zoneinfo/Turkey  Sun Mar 26 01:00:00 2017 UTC = Sun Mar 26 04:00:00 2017 EEST isdst=1 gmtoff=10800
/usr/share/zoneinfo/Turkey  Sun Oct 29 00:59:59 2017 UTC = Sun Oct 29 03:59:59 2017 EEST isdst=1 gmtoff=10800
/usr/share/zoneinfo/Turkey  Sun Oct 29 01:00:00 2017 UTC = Sun Oct 29 03:00:00 2017 EET isdst=0 gmtoff=7200

As you see, if I check the planned DST changes for the year 2017 it still shows me that DST will start on 26 March ('EET' to 'EEST') and end on 29 Oct ('EEST' to 'EET' again).

To fix this we need an updated tzdata rpm with the necessary changes; these were included in the 2016g tzdata release, so I downloaded it from the IANA database (ftp://ftp.iana.org/tz/)

and copied the same to my setup
# mkdir /tmp/tzdb
# cp /root/tzdata2016g.tar.gz /tmp/tzdb/
# tar -xzf tzdata2016g.tar.gz
Next we will extract the needed timezone file from this source.

In the NEWS file you should get the information regarding the Turkey time changes
    Turkey switched from EET/EEST (+02/+03) to permanent +03,
    effective 2016-09-07.  (Thanks to Burak AYDIN.)  Use "+03" rather
    than an invented abbreviation for the new time.

Let's compile the needed timezone file and place it on our system
# zic -d zoneinfo europe

This will create a directory zoneinfo and compile all the timezone files from the europe source file.
Here we will have the 'Istanbul' timezone, which is the same as Turkey; overwrite the existing Istanbul timezone with the new one
# cp ./zoneinfo/Asia/Istanbul /usr/share/zoneinfo/Asia/Istanbul
cp: overwrite '/usr/share/zoneinfo/Asia/Istanbul'? y

If you observe, I only replaced the Istanbul timezone but 3 files show as modified, since these zone files are hard links to the same data
# rpm -V tzdata
S.5....T.    /usr/share/zoneinfo/Asia/Istanbul
S.5....T.    /usr/share/zoneinfo/Europe/Istanbul
S.5....T.    /usr/share/zoneinfo/Turkey

So now let's see if this timezone has the updated information about the new time rules from the Turkish government.



First let's check for the year 2017
# zdump -v /usr/share/zoneinfo/Asia/Istanbul | grep 2017
zdump: warning: zone "/usr/share/zoneinfo/Asia/Istanbul" abbreviation "+04" lacks alphabetic at start
As expected there are no planned DST changes in the year 2017, as Turkey ended DST in 2016 itself

For the year 2016, compare the output with that of our old tzdata rpm
# zdump -v /usr/share/zoneinfo/Asia/Istanbul | grep 2016
zdump: warning: zone "/usr/share/zoneinfo/Asia/Istanbul" abbreviation "+04" lacks alphabetic at start
/usr/share/zoneinfo/Asia/Istanbul  Sun Mar 27 00:59:59 2016 UTC = Sun Mar 27 02:59:59 2016 EET isdst=0 gmtoff=7200
/usr/share/zoneinfo/Asia/Istanbul  Sun Mar 27 01:00:00 2016 UTC = Sun Mar 27 04:00:00 2016 EEST isdst=1 gmtoff=10800
/usr/share/zoneinfo/Asia/Istanbul  Tue Sep  6 20:59:59 2016 UTC = Tue Sep  6 23:59:59 2016 EEST isdst=1 gmtoff=10800
/usr/share/zoneinfo/Asia/Istanbul  Tue Sep  6 21:00:00 2016 UTC = Wed Sep  7 00:00:00 2016 +03 isdst=0 gmtoff=10800

DST ended on Sep 7 and the timezone changed from EEST to '+03' instead of 'EET'
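With a post-2016g tzdata in place, the shell can confirm that Istanbul now stays on permanent UTC+3 all year (the "+03" abbreviation is the one mentioned in the NEWS entry above):

```shell
# Mid-summer 2017: no DST shift, local time is UTC+3 with the "+03" abbreviation.
TZ=Europe/Istanbul date -d '2017-07-01 12:00:00 UTC' '+%Y-%m-%d %H:%M %Z'
```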

IMPORTANT NOTE: The above only updates the system-level timezone. Java applications use their own timezone database, so you have to update the tzdata of your JRE separately or your Java-based alarms will continue to use the old date and time.

I will write an article shortly with the steps to update tzdata for the JRE.
I hope the article was useful.

How to set date and time in iLO3 / iLO4 using SNTP and RIBCL scripts from Onboard Administrator in HP Proliant Blades

By default iLO is configured to use the Date and Time information set in the BIOS, but that is not very reliable.

I would have expected an iLO to connect to the Onboard Administrator and sync time from it, so that we would only have to keep the OA in sync with an NTP server; instead, HPE asks us to configure SNTP (Simple Network Time Protocol) on every iLO for it to reflect the correct date and time.

For a single blade this is not much effort, but with 100s of blades you obviously would not log in to each iLO to update the SNTP server details.
Instead, this can be performed at scale from the Onboard Administrator.

NOTE: The default polling interval for SNTP is 30 minutes and an iLO reset is needed to activate the SNTP related changes

IMPORTANT NOTE: 
Executing RIBCL scripts is not supported on older firmware versions of Onboard Administrator and iLO4. The steps below were executed and tested on OA 4.40 and higher and iLO4 2.40 and higher.

There is no SNTP support for iLO2; its date and time can only be synchronised through the following:
  • System ROM (during POST)
  • Insight Management Agents (in the OS)
I have not validated the steps on iLO3, but as per HPE this should also work there, so try them in a lab setup before running them in a production environment.





The RIBCL script below can be used to update the SNTP values for the iLO; replace the login, SNTP server and timezone values with the ones for your environment.
hponcfg 11<< eof
<RIBCL VERSION="2.0">
<LOGIN USER_LOGIN="HPadmin" PASSWORD="Passw0rd">
<RIB_INFO MODE="write">
<MOD_NETWORK_SETTINGS>
    <DHCP_SNTP_SETTINGS value="No"/>
    <DHCPV6_SNTP_SETTINGS value="No"/>
    <SNTP_SERVER1 value="10.10.10.11"/>
    <SNTP_SERVER2 value="10.10.10.12"/>
    <TIMEZONE value="Asia/Kolkata"/>
</MOD_NETWORK_SETTINGS>
</RIB_INFO>
</LOGIN>
</RIBCL>
eof

Login to the Onboard Administrator with a user having Administrator privilege using an ssh client like Putty

If you intend to update SNTP only for one server, provide the bay number of the respective bay as shown below (copy and paste the entire section on the OA CLI console)
BlrSiteA1-01-01> hponcfg 11<< eof
<RIBCL VERSION="2.0">
<LOGIN USER_LOGIN="HPadmin" PASSWORD="Passw0rd">
<RIB_INFO MODE="write">
<MOD_NETWORK_SETTINGS>
    <DHCP_SNTP_SETTINGS value="No"/>
    <DHCPV6_SNTP_SETTINGS value="No"/>
    <SNTP_SERVER1 value="10.10.10.11"/>
    <SNTP_SERVER2 value="10.10.10.12"/>
    <TIMEZONE value="Asia/Kolkata"/>
</MOD_NETWORK_SETTINGS>
</RIB_INFO>
</LOGIN>
</RIBCL>
eof

Below would be the execution output
Bay 11: Executing RIBCL request ...
Bay 11: Awaiting RIBCL results ...
Bay 11: RIBCL results retrieved.
<!-- ======== START RIBCL RESULTS ======== -->


<!-- ======== Bay 11 RIBCL results ======== -->

<?xml version="1.0"?>
<RIBCL VERSION="2.23">
<RESPONSE
    STATUS="0x0000"
    MESSAGE='No error'
     />
</RIBCL>
<?xml version="1.0"?>
<RIBCL VERSION="2.23">
<RESPONSE
    STATUS="0x0000"
    MESSAGE='No error'
     />
</RIBCL>
<?xml version="1.0"?>
<RIBCL VERSION="2.23">
<RESPONSE
    STATUS="0x0000"
    MESSAGE='No error'
     />
</RIBCL>
<?xml version="1.0"?>
<RIBCL VERSION="2.23">
<RESPONSE
    STATUS="0x0000"
    MESSAGE='No error'
     />
</RIBCL>
<?xml version="1.0"?>
<RIBCL VERSION="2.23">
<RESPONSE
    STATUS="0x0000"
    MESSAGE='No error'
     />
</RIBCL>

<!-- ======== END RIBCL RESULTS ======== -->

Next perform iLO reset to activate the changes
Execute the below command from the Onboard Administrator CLI
> reset ilo 11

Entering anything other than 'YES' will result in the command not executing.

Are you sure you want to reset iLO? YES

Bay 11: Resetting iLO using Hardware reset...

Bay 11: Successfully reset iLO through Hardware reset




If you have multiple blades on which you wish to update the SNTP values, replace "11" with a comma-separated list of bay numbers

For example:
Below will be executed only on blade 11
hponcfg 11<< eof
<RIBCL VERSION="2.0">
<LOGIN USER_LOGIN="HPadmin" PASSWORD="Passw0rd">
<RIB_INFO MODE="write">
<MOD_NETWORK_SETTINGS>
    <DHCP_SNTP_SETTINGS value="No"/>
    <DHCPV6_SNTP_SETTINGS value="No"/>
    <SNTP_SERVER1 value="10.10.10.11"/>
    <SNTP_SERVER2 value="10.10.10.12"/>
    <TIMEZONE value="Asia/Kolkata"/>
</MOD_NETWORK_SETTINGS>
</RIB_INFO>
</LOGIN>
</RIBCL>
eof

Below script will be called on blade 11,12,13
hponcfg 11,12,13<< eof
<RIBCL VERSION="2.0">
<LOGIN USER_LOGIN="HPadmin" PASSWORD="Passw0rd">
<RIB_INFO MODE="write">
<MOD_NETWORK_SETTINGS>
    <DHCP_SNTP_SETTINGS value="No"/>
    <DHCPV6_SNTP_SETTINGS value="No"/>
    <SNTP_SERVER1 value="10.10.10.11"/>
    <SNTP_SERVER2 value="10.10.10.12"/>
    <TIMEZONE value="Asia/Kolkata"/>
</MOD_NETWORK_SETTINGS>
</RIB_INFO>
</LOGIN>
</RIBCL>
eof

If you wish to execute the script on all the blades of the enclosure
hponcfg all<< eof
<RIBCL VERSION="2.0">
<LOGIN USER_LOGIN="HPadmin" PASSWORD="Passw0rd">
<RIB_INFO MODE="write">
<MOD_NETWORK_SETTINGS>
    <DHCP_SNTP_SETTINGS value="No"/>
    <DHCPV6_SNTP_SETTINGS value="No"/>
    <SNTP_SERVER1 value="10.10.10.11"/>
    <SNTP_SERVER2 value="10.10.10.12"/>
    <TIMEZONE value="Asia/Kolkata"/>
</MOD_NETWORK_SETTINGS>
</RIB_INFO>
</LOGIN>
</RIBCL>
eof





You can also configure SNTP manually using the iLO4 web page.


Open the iLO using any supported browser (preferred IE)


Navigate to Network -> iLO Dedicated Network Port
Select "SNTP" from the Menu TAB as shown and provide the NTP server address details



Next to activate the changes reset your iLO using the RESET TAB under Diagnostics option as shown below


I hope the article was useful.


Part 1: Step by Step Guide to Install Openstack using Packstack with Compute and Controller node on RHEL 7

The OpenStack project, also called a cloud operating system, consists of a number of different projects developing separate subsystems. Any OpenStack installation can include only a part of them, and some subsystems can even be used standalone or as part of another open-source project. Their number increases from version to version, both through the appearance of new projects and the splitting of existing ones; for example, the nova-volume service was extracted into the separate Cinder project.

Make sure hardware virtualization is enabled and supported on your blade
# grep -E 'svm|vmx' /proc/cpuinfo

You should see svm or vmx among the flags supported by the processor. Also if you execute the command:
# lsmod | grep kvm
kvm_intel 143187 3
kvm 455843 1 kvm_intel
or
# lsmod | grep kvm
kvm_amd 60314 3
kvm 461126 1 kvm_amd

you should see two kernel modules loaded in memory. kvm is the vendor-independent module, while kvm_intel or kvm_amd implements the VT-x or AMD-V functionality, respectively
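The CPU-flag check above can be folded into a small sketch that prints a single verdict (assumption: Linux /proc/cpuinfo flag names svm/vmx):

```shell
#!/bin/bash
# Report whether the CPU advertises hardware virtualization (Intel VT-x or AMD-V).
check_virt() {
  if grep -qE 'svm|vmx' /proc/cpuinfo; then
    echo "hardware virtualization: supported"
  else
    echo "hardware virtualization: not supported"
  fi
}
check_virt
```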





Download Links for OpenStack Distributions

 Red Hat OpenStack Platform (60-day trial): https://www.redhat.com/en/insights/openstack
 RDO by Red Hat: https://www.rdoproject.org/
 Mirantis OpenStack: https://www.mirantis.com/products/mirantis-openstacksoftware/
 Ubuntu OpenStack: http://www.ubuntu.com/cloud/openstack
 SUSE OpenStack Cloud (60-day trial): https://www.suse.com/products/suse-openstack-cloud/


Installing Red Hat OpenStack Platform with PackStack

Packstack provides an easy way to deploy an OpenStack Platform environment on one or several machines. It is customizable through an answer file, which contains a set of parameters allowing custom configuration of the underlying OpenStack platform services.


What is an Answer File?

Packstack provides by default an answer file template that deploys an all-in-one environment without customization. These answer files include options to tune almost every aspect of the OpenStack platform environment, including the architecture layout, moving to a multiple-compute-node deployment, or tuning the backends used by the Cinder and Neutron services.


Step 1: Bring UP the physical host server

First of all you need a base server on which you will create your entire OpenStack cloud. My server runs RHEL 7.4.

My setup detail



  • Next login to your server and register it with Red Hat Subscription
  • Install Virtual Machine Manager (if not already installed) using the "Application Installer"
  • Next start creating your virtual machines as described in below chapters



Step 2: Configure BIND DNS Server

A DNS server is needed before configuring your openstack setup.


Below are my sample configuration files
# cd /var/named/chroot/var/named

My forward configuration file for the controller and compute nodes
# cat example.zone
$TTL 1D
@       IN SOA  example. root (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
@               IN NS   example.
                IN A    127.0.0.1
                IN A    10.43.138.12
openstack       IN A    10.43.138.12
controller      IN A    192.168.122.49
compute         IN A    192.168.122.215
compute-rhel    IN A    192.168.122.13
controller-rhel IN A    192.168.122.12

My reverse zone file for my physical host server hosting openstack
# cat example.rzone
$TTL 1D
@       IN SOA  example. root.example. (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
@       IN NS   example.
        IN A    127.0.0.1
        IN PTR  localhost.
12      IN PTR  openstack.example.

My reverse zone file for controller and compute node
# cat openstack.rzone
$TTL 1D
@       IN SOA  example. root.example. (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
@       IN NS   example.
        IN A    127.0.0.1
        IN PTR  localhost.
49      IN PTR  controller.example.
215     IN PTR  compute.example.
12      IN PTR  controller-rhel.example.
13      IN PTR  compute-rhel.example.

Below content added in named.rfc1912.zones
zone "example" IN {
        type master;
        file "example.zone";
        allow-update { none; };
};

zone "138.43.10.in-addr.arpa" IN {
        type master;
        file "example.rzone";
        allow-update { none; };
};

zone "122.168.192.in-addr.arpa" IN {
        type master;
        file "openstack.rzone";
        allow-update { none; };
};



Step 3:  Bring UP Compute VM

My setup has a single disk with 200GB of space, which will be used for creating instances.


NOTE: The storage used by an instance will be under /var/lib/glance, so any partition holding /var must have enough free space for an instance to be created. Below is my setup snippet
[root@compute-rhel ~]# lvs
  LV   VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home rhel -wi-ao---- 134.49g
  root rhel -wi-ao----  50.00g
  swap rhel -wi-ao----  14.50g

[root@compute-rhel ~]# pvs
  PV         VG   Fmt  Attr PSize    PFree
  /dev/vda2  rhel lvm2 a--  <199.00g 4.00m

[root@compute-rhel ~]# vgs
  VG   #PV #LV #SN Attr   VSize    VFree
  rhel   1   3   0 wz--n- <199.00g 4.00m

[root@compute-rhel ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   50G  2.3G   48G   5% /
devtmpfs                15G     0   15G   0% /dev
tmpfs                   15G     0   15G   0% /dev/shm
tmpfs                   15G   17M   15G   1% /run
tmpfs                   15G     0   15G   0% /sys/fs/cgroup
/dev/vda1             1014M  131M  884M  13% /boot
/dev/mapper/rhel-home  135G   33M  135G   1% /home
tmpfs                  2.9G     0  2.9G   0% /run/user/0

[root@compute-rhel ~]# free -g
              total        used        free      shared  buff/cache   available
Mem:             28           0          26           0           1          27
Swap:            14           0          14

Pre-requisite
Disable and stop the below services using the commands as shown
# systemctl stop NetworkManager
# systemctl disable NetworkManager

# systemctl stop firewalld
# systemctl disable firewalld

# systemctl restart network
# systemctl enable network

Register and subscribe to the necessary Red Hat channels as done for controller
# subscription-manager register 

Find the entitlement pool for Red Hat Enterprise Linux OpenStack Platform in the output of the following command:
# subscription-manager list --available --all

Use the pool ID located in the previous step to attach the Red Hat Enterprise Linux OpenStack Platform entitlements:
# subscription-manager attach --pool=POOL_ID

Disable all the repos
# subscription-manager repos --disable=*

Next enable all the needed repos
# subscription-manager repos --enable=rhel-7-server-rh-common-rpms
Repository 'rhel-7-server-rh-common-rpms' is enabled for this system.

# subscription-manager repos --enable=rhel-7-server-openstack-8-rpms
Repository 'rhel-7-server-openstack-8-rpms' is enabled for this system.

# subscription-manager repos --enable=rhel-7-server-extras-rpms
Repository 'rhel-7-server-extras-rpms' is enabled for this system.

# subscription-manager repos --enable=rhel-7-server-rpms
Repository 'rhel-7-server-rpms' is enabled for this system.
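The four subscription-manager calls above can also be sketched as one loop. The SM variable is a hypothetical parameterization introduced here only so the sketch can be dry-run without an actual subscription:

```shell
#!/bin/bash
# Sketch: enable the needed repos in a loop; SM is overridable for a dry run.
enable_repos() {
  local SM=${SM:-subscription-manager}
  for repo in rhel-7-server-rh-common-rpms rhel-7-server-openstack-8-rpms \
              rhel-7-server-extras-rpms rhel-7-server-rpms; do
    "$SM" repos --enable="$repo"
  done
}
# Dry run: prints the four repos --enable commands instead of executing them.
SM=echo enable_repos
```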




Step 4: Bring UP Controller VM

I have already shared the configuration for my virtual machine. I do not need to reserve many resources for the controller as it will only run the core OpenStack services.

My setup details


IMPORTANT NOTE: I need to create an additional volume group named "cinder-volumes" for the CINDER service, which will be used to create additional volumes

So make sure when you are installing the controller node, create one additional volume-group "cinder-volumes" with enough space, for me I have given 100GB which will be used for adding additional volume when launching Instance.
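A minimal sketch of creating that volume group on a spare disk. /dev/vdb is an assumption for my KVM setup, so the commands are only echoed; substitute your real device and remove the echo before running them for real.

```shell
#!/bin/sh
# Dry run of creating the "cinder-volumes" volume group. /dev/vdb is an
# assumption for the spare disk; remove the echo (and fix the device)
# on the real controller.
disk=/dev/vdb
run() { echo "$*"; }     # swap echo for real execution
run parted -s "$disk" -- mklabel msdos mkpart primary 1MiB 100%
run pvcreate "${disk}1"
run vgcreate cinder-volumes "${disk}1"
```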

Below is my setup snippet
[root@controller-rhel ~]# pvs
  PV         VG             Fmt  Attr PSize    PFree
  /dev/vda3  rhel           lvm2 a--   <38.52g   <7.69g
  /dev/vdb1  cinder-volumes lvm2 a--  <100.00g <100.00g

[root@controller-rhel ~]# vgs
  VG             #PV #LV #SN Attr   VSize    VFree
  cinder-volumes   1   0   0 wz--n- <100.00g <100.00g
  rhel             1   2   0 wz--n-  <38.52g   <7.69g

[root@controller-rhel ~]# lvs
  LV     VG   Attr       LSize  Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
  pool00 rhel twi-aotz-- 30.79g               15.04  11.48
  root   rhel Vwi-aotz-- 30.79g pool00        15.04

[root@controller-rhel ~]# free -g
              total        used        free      shared  buff/cache   available
Mem:              9           2           4           0           3           7
Swap:             0           0           0

Prerequisites
Stop and disable the following services using the commands shown
# systemctl stop NetworkManager
# systemctl disable NetworkManager

# systemctl stop firewalld
# systemctl disable firewalld

# systemctl restart network
# systemctl enable network

Register your server
# subscription-manager register 

Find the entitlement pool for Red Hat Enterprise Linux OpenStack Platform in the output of the following command:
# subscription-manager list --available --all

Use the pool ID located in the previous step to attach the Red Hat Enterprise Linux OpenStack Platform entitlements:
# subscription-manager attach --pool=POOL_ID

Disable all the repos
# subscription-manager repos --disable=*

Enable the below repositories (for this article I will be using openstack-8)
# subscription-manager repos --enable=rhel-7-server-rh-common-rpms
Repository 'rhel-7-server-rh-common-rpms' is enabled for this system.

# subscription-manager repos --enable=rhel-7-server-openstack-8-rpms
Repository 'rhel-7-server-openstack-8-rpms' is enabled for this system.

# subscription-manager repos --enable=rhel-7-server-extras-rpms
Repository 'rhel-7-server-extras-rpms' is enabled for this system.

# subscription-manager repos --enable=rhel-7-server-rpms
Repository 'rhel-7-server-rpms' is enabled for this system.

Next install the packstack tool
# yum install -y openstack-packstack

Next, generate your answer file /root/answers.txt
# packstack --gen-answer-file /root/answers.txt

Now we are ready to create and modify our answers file to deploy openstack services on our controller and compute node



Step 5: Create answers file and Install Openstack

The answer file contains a set of options which will be used to configure your OpenStack deployment

Below are the changes which I have done for my setup.
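As an illustration of the kind of edits involved, they can be scripted with sed. The key names below are from packstack's generated answer file; the sed helper and the sample file are just a sketch that runs against a throwaway copy, with the IPs from this article's setup.

```shell
#!/bin/sh
# Demonstrate typical answer-file edits on a throwaway sample file.
# The CONFIG_* key names are packstack's; values match this article.
set -e
f=$(mktemp)
cat > "$f" <<'EOF'
CONFIG_CONTROLLER_HOST=192.168.122.1
CONFIG_COMPUTE_HOSTS=192.168.122.1
CONFIG_PROVISION_DEMO=y
CONFIG_CINDER_VOLUMES_CREATE=y
EOF

setkey() {  # setkey FILE KEY VALUE -- replace "KEY=..." in place
    sed -i "s|^$2=.*|$2=$3|" "$1"
}

setkey "$f" CONFIG_CONTROLLER_HOST       192.168.122.12
setkey "$f" CONFIG_COMPUTE_HOSTS         192.168.122.13
setkey "$f" CONFIG_PROVISION_DEMO        n
setkey "$f" CONFIG_CINDER_VOLUMES_CREATE n   # the VG already exists
cat "$f"
rm -f "$f"
```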

Once you are done it is time to execute your packstack utility on the controller as shown below
[root@controller-rhel ~]# packstack --answer-file /root/answers.txt
Welcome to the Packstack setup utility

The installation log file is available at: /var/tmp/packstack/20180707-225026-DOdBB6/openstack-setup.log

Installing:
Clean Up                                             [ DONE ]
Discovering ip protocol version                      [ DONE ]
Setting up ssh keys                                  [ DONE ]
Preparing servers                                    [ DONE ]
Pre installing Puppet and discovering hosts' details [ DONE ]
Adding pre install manifest entries                  [ DONE ]
Installing time synchronization via NTP              [ DONE ]
Setting up CACERT                                    [ DONE ]
Adding AMQP manifest entries                         [ DONE ]
Adding MariaDB manifest entries                     [ DONE ]
Adding Apache manifest entries                       [ DONE ]
Fixing Keystone LDAP config parameters to be undef if empty[ DONE ]
Adding Keystone manifest entries                     [ DONE ]
Adding Glance Keystone manifest entries              [ DONE ]
Adding Glance manifest entries                       [ DONE ]
Adding Cinder Keystone manifest entries              [ DONE ]
Checking if the Cinder server has a cinder-volumes vg[ DONE ]
Adding Cinder manifest entries                       [ DONE ]
Adding Nova API manifest entries                     [ DONE ]
Adding Nova Keystone manifest entries                [ DONE ]
Adding Nova Cert manifest entries                    [ DONE ]
Adding Nova Conductor manifest entries               [ DONE ]
Creating ssh keys for Nova migration                 [ DONE ]
Gathering ssh host keys for Nova migration           [ DONE ]
Adding Nova Compute manifest entries                 [ DONE ]
Adding Nova Scheduler manifest entries               [ DONE ]
Adding Nova VNC Proxy manifest entries               [ DONE ]
Adding OpenStack Network-related Nova manifest entries[ DONE ]
Adding Nova Common manifest entries                  [ DONE ]
Adding Neutron VPNaaS Agent manifest entries         [ DONE ]
Adding Neutron FWaaS Agent manifest entries          [ DONE ]
Adding Neutron LBaaS Agent manifest entries          [ DONE ]
Adding Neutron API manifest entries                  [ DONE ]
Adding Neutron Keystone manifest entries             [ DONE ]
Adding Neutron L3 manifest entries                   [ DONE ]
Adding Neutron L2 Agent manifest entries             [ DONE ]
Adding Neutron DHCP Agent manifest entries           [ DONE ]
Adding Neutron Metering Agent manifest entries       [ DONE ]
Adding Neutron Metadata Agent manifest entries       [ DONE ]
Adding Neutron SR-IOV Switch Agent manifest entries  [ DONE ]
Checking if NetworkManager is enabled and running    [ DONE ]
Adding OpenStack Client manifest entries             [ DONE ]
Adding Horizon manifest entries                      [ DONE ]
Adding post install manifest entries                 [ DONE ]
Copying Puppet modules and manifests                 [ DONE ]
Applying 192.168.122.13_prescript.pp
Applying 192.168.122.12_prescript.pp
192.168.122.13_prescript.pp:                         [ DONE ]
192.168.122.12_prescript.pp:                         [ DONE ]
Applying 192.168.122.13_chrony.pp
Applying 192.168.122.12_chrony.pp
192.168.122.13_chrony.pp:                            [ DONE ]
192.168.122.12_chrony.pp:                            [ DONE ]
Applying 192.168.122.12_amqp.pp
Applying 192.168.122.12_mariadb.pp
192.168.122.12_amqp.pp:                              [ DONE ]
192.168.122.12_mariadb.pp:                           [ DONE ]
Applying 192.168.122.12_apache.pp
192.168.122.12_apache.pp:                            [ DONE ]
Applying 192.168.122.12_keystone.pp
Applying 192.168.122.12_glance.pp
Applying 192.168.122.12_cinder.pp
192.168.122.12_keystone.pp:                          [ DONE ]
192.168.122.12_cinder.pp:                            [ DONE ]
192.168.122.12_glance.pp:                            [ DONE ]
Applying 192.168.122.12_api_nova.pp
192.168.122.12_api_nova.pp:                          [ DONE ]
Applying 192.168.122.12_nova.pp
Applying 192.168.122.13_nova.pp
192.168.122.12_nova.pp:                              [ DONE ]
192.168.122.13_nova.pp:                              [ DONE ]
Applying 192.168.122.13_neutron.pp
Applying 192.168.122.12_neutron.pp
192.168.122.12_neutron.pp:                           [ DONE ]
192.168.122.13_neutron.pp:                           [ DONE ]
Applying 192.168.122.12_osclient.pp
Applying 192.168.122.12_horizon.pp
192.168.122.12_osclient.pp:                          [ DONE ]
192.168.122.12_horizon.pp:                           [ DONE ]
Applying 192.168.122.13_postscript.pp
Applying 192.168.122.12_postscript.pp
192.168.122.12_postscript.pp:                        [ DONE ]
192.168.122.13_postscript.pp:                        [ DONE ]
Applying Puppet manifests                            [ DONE ]
Finalizing                                           [ DONE ]

 **** Installation completed successfully ******

Additional information:
 * File /root/keystonerc_admin has been created on OpenStack client host 192.168.122.12. To use the command line tools you need to source the file.
 * To access the OpenStack Dashboard browse to http://192.168.122.12/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
 * The installation log file is available at: /var/tmp/packstack/20180707-225026-DOdBB6/openstack-setup.log
 * The generated manifests are available at: /var/tmp/packstack/20180707-225026-DOdBB6/manifests

If everything goes well you should see all GREEN, and at the end you will get the link to your dashboard.

NOTE: You can rerun PackStack with option -d if you need to update the configuration.

Install openstack-utils to check the status of all the openstack services
# yum -y install openstack-utils

Next check the status
[root@controller-rhel ~]# openstack-status
== Nova services ==
openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     inactive  (disabled on boot)
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-openvswitch-agent:              active
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
openstack-cinder-backup:                inactive  (disabled on boot)
== Support services ==
mysqld:                                 unknown
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
target:                                 active
rabbitmq-server:                        active
memcached:                              active
== Keystone users ==
Warning keystonerc not sourced



Step 6: Source keystonerc file

Next, source your keystonerc file to get a more detailed openstack-status output. This keystonerc file is created by packstack and is available in root's home directory, as shown below for me
[root@controller-rhel ~]# ls -l keystonerc_admin
-rw-------. 1 root root 229 Jul  7 22:57 keystonerc_admin

[root@controller-rhel ~]# pwd
/root

[root@controller-rhel ~]# source keystonerc_admin
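The keystonerc_admin file is just a set of environment-variable exports that the OpenStack clients consume. A typical file looks like the sketch below; the password and auth URL here are placeholders, not the real credentials packstack generated for my install.

```shell
# Sample keystonerc_admin -- the password and host are placeholders,
# not the values packstack actually generated.
export OS_USERNAME=admin
export OS_PASSWORD=changeme
export OS_AUTH_URL=http://192.168.122.12:5000/v2.0
export OS_TENANT_NAME=admin
export OS_REGION_NAME=RegionOne
export PS1='[\u@\h \W(keystone_admin)]\$ '
```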

Next check the status again
[root@controller-rhel ~(keystone_admin)]# openstack-status
== Nova services ==
openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     inactive  (disabled on boot)
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-openvswitch-agent:              active
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
openstack-cinder-backup:                inactive  (disabled on boot)
== Support services ==
mysqld:                                 unknown
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
target:                                 active
rabbitmq-server:                        active
memcached:                              active
== Keystone users ==
+----------------------------------+---------+---------+-------------------+
|                id                |   name  | enabled |       email       |
+----------------------------------+---------+---------+-------------------+
| e97f18a9994e4b99bcc0e6fe8db95cd3 |  admin  |   True  |   root@localhost  |
| dccbaca5e2ee4866b343573678ec3bf7 |  cinder |   True  |  cinder@localhost |
| 7dec80c93f8a4aafa1559a59e6bf606c |  glance |   True  |  glance@localhost |
| 778e4fbefdfa4329bf9b7143ce6ffe74 | neutron |   True  | neutron@localhost |
| e3d85ca8a8bb4ba5a9457712ce5814f5 |   nova  |   True  |   nova@localhost  |
+----------------------------------+---------+---------+-------------------+
== Glance images ==
+----+------+
| ID | Name |
+----+------+
+----+------+
== Nova managed services ==
+----+------------------+------------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                   | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | controller-rhel.example| internal | enabled | up    | 2018-07-07T18:02:59.000000 | -               |
| 2  | nova-scheduler   | controller-rhel.example| internal | enabled | up    | 2018-07-07T18:03:00.000000 | -               |
| 3  | nova-conductor   | controller-rhel.example| internal | enabled | up    | 2018-07-07T18:03:01.000000 | -               |
| 4  | nova-cert        | controller-rhel.example| internal | enabled | up    | 2018-07-07T18:02:57.000000 | -               |
| 5  | nova-compute     | compute-rhel.example   | nova     | enabled | up    | 2018-07-07T18:03:04.000000 | -               |
+----+------------------+------------------------+----------+---------+-------+----------------------------+-----------------+
== Nova networks ==
+----+-------+------+
| ID | Label | Cidr |
+----+-------+------+
+----+-------+------+
== Nova instance flavors ==
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
== Nova instances ==
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

So as you see it gives me a detailed status of all the openstack services.

Now you can login to the horizon dashboard.




Part 2: Configure Openstack OVSBridge, Network (Neutron), Public and Private Network, Router in Openstack


The steps and screenshots below were validated on a Red Hat-based OpenStack platform, but they will also work on open-source OpenStack running on any other distribution.

The important part of networking in the OpenStack cloud is OVS. Open vSwitch is not part of the OpenStack project; however, OVS is used in most implementations of OpenStack clouds. It has also been integrated into many other virtual management systems, including OpenQRM, OpenNebula, and oVirt. Open vSwitch provides support for protocols such as OpenFlow, GRE, VLAN, VXLAN, NetFlow, sFlow, SPAN, RSPAN, and LACP. It can operate in distributed configurations with a central controller.


Open vSwitch itself consists of several components:
  • openvswitch_mod.ko: This kernel module plays the role of the ASIC (application-specific integrated circuit) in hardware switches; it is the engine that processes traffic.
  • ovs-vswitchd: This daemon is in charge of the management and forwarding logic for data transmission.
  • ovsdb-server: This daemon serves the internal database and provides RPC (remote procedure call) interfaces to one or more Open vSwitch databases (OVSDBs).

To check the version of openvswitch installed
[root@controller-rhel ~]# ovs-vsctl -V
ovs-vsctl (Open vSwitch) 2.5.0
Compiled Aug  2 2017 11:12:47
DB Schema 7.12.1






Step 1: Configure OVSBridge on the Controller

OpenStack Neutron Services and Their Placement


Before making the changes below, make sure openvswitch is installed on your setup
# rpm -q openvswitch

Navigate to the path of your interface configuration files
[root@controller-rhel ~]# cd /etc/sysconfig/network-scripts/

Copy the configuration of your eth0 as shown below (the interface name may vary depending on the environment)
# cp ifcfg-eth0 ifcfg-br-ex

Make the highlighted changes in your ifcfg-br-ex
[root@controller-rhel network-scripts]# cat ifcfg-br-ex
TYPE="OVSBridge"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="no"
NAME="br-ex"
UUID="e20b64ec-fc48-4b21-b60f-e110f5380fc3"
DEVICE="br-ex"
DEVICETYPE="ovs"
ONBOOT="yes"
IPADDR="192.168.122.12"
PREFIX="24"
GATEWAY="192.168.122.1"
DNS1="10.43.138.12"
NM_CONTROLLED="no"

Next make the below changes in your ifcfg-eth0 file and remove all the unwanted entries
[root@controller-rhel network-scripts]# cat ifcfg-eth0
TYPE="OVSPort"
BOOTPROTO="static"
DEFROUTE="yes"
IPV6INIT="no"
NAME="eth0"
UUID="e20b64ec-fc48-4b21-b60f-e110f5380fc3"
DEVICE="eth0"
DEVICETYPE="ovs"
OVS_BRIDGE="br-ex"
ONBOOT="yes"
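The copy-and-edit dance above can also be scripted. The sketch below deliberately works in a scratch directory rather than /etc/sysconfig/network-scripts, and assumes the field names shown in the listings: it pulls the IP settings out of an old ifcfg-eth0 and emits the two new files.

```shell
#!/bin/sh
# Build ifcfg-br-ex / ifcfg-eth0 from an existing ifcfg-eth0.
# Runs against a scratch directory so it is safe to try anywhere.
set -e
dir=$(mktemp -d)

# A sample pre-OVS ifcfg-eth0 (values from this article's setup).
cat > "$dir/ifcfg-eth0" <<'EOF'
TYPE="Ethernet"
BOOTPROTO="static"
NAME="eth0"
DEVICE="eth0"
ONBOOT="yes"
IPADDR="192.168.122.12"
PREFIX="24"
GATEWAY="192.168.122.1"
EOF

getval() { sed -n "s/^$1=\"\(.*\)\"/\1/p" "$dir/ifcfg-eth0"; }
ip=$(getval IPADDR); prefix=$(getval PREFIX); gw=$(getval GATEWAY)

# The bridge takes over the IP configuration...
cat > "$dir/ifcfg-br-ex" <<EOF
TYPE="OVSBridge"
BOOTPROTO="static"
NAME="br-ex"
DEVICE="br-ex"
DEVICETYPE="ovs"
ONBOOT="yes"
IPADDR="$ip"
PREFIX="$prefix"
GATEWAY="$gw"
NM_CONTROLLED="no"
EOF

# ...and eth0 becomes a plain OVS port on that bridge.
cat > "$dir/ifcfg-eth0" <<'EOF'
TYPE="OVSPort"
BOOTPROTO="static"
NAME="eth0"
DEVICE="eth0"
DEVICETYPE="ovs"
OVS_BRIDGE="br-ex"
ONBOOT="yes"
NM_CONTROLLED="no"
EOF

grep IPADDR "$dir/ifcfg-br-ex"
```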

Restart your network services
# systemctl restart network

NOTE: If there is a mistake in your network configuration you may lose connectivity here; in that case, log in to the console of your machine and troubleshoot the configuration files.

Once done, validate your new configuration; the IP address should now be assigned to the "br-ex" device instead of eth0
[root@controller-rhel network-scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
    link/ether 52:54:00:59:bb:98 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe59:bb98/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:63:84:f4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.124.1/24 brd 192.168.124.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:63:84:f4 brd ff:ff:ff:ff:ff:ff
5: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 82:4c:d5:4b:54:32 brd ff:ff:ff:ff:ff:ff
7: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether ba:d0:22:7f:95:4c brd ff:ff:ff:ff:ff:ff
8: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 82:91:ea:b4:b6:44 brd ff:ff:ff:ff:ff:ff
9: vxlan_sys_4789: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65470 qdisc noqueue master ovs-system state UNKNOWN qlen 1000
    link/ether 26:c7:82:58:f7:4a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::24c7:82ff:fe58:f74a/64 scope link
       valid_lft forever preferred_lft forever
10: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether 2a:e2:56:f0:f3:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.12/24 brd 192.168.122.255 scope global br-ex
       valid_lft forever preferred_lft forever
    inet6 fe80::28e2:56ff:fef0:f34c/64 scope link
       valid_lft forever preferred_lft forever

Validate the bridge connection
[root@controller-rhel ~]# ovs-vsctl show
84045430-57bb-4057-9b6b-d059aaa60c05
    Bridge br-int
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-c0a87a0d"
            Interface "vxlan-c0a87a0d"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.122.12", out_key=flow, remote_ip="192.168.122.13"}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-ex
        fail_mode: standalone
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth0"
            Interface "eth0"
    ovs_version: "2.5.0"

As you can see, three bridges exist:
  • Integration bridge (br-int): There is a single integration bridge on each node. This bridge acts as a virtual switch to which all virtual network cards of all virtual machines are connected. The OVS Neutron agent automatically creates the integration bridge. "br-int" tags and untags VLAN traffic originating from the instances and traffic destined for the instances.
  • External bridge (br-ex): This bridge is for interconnection with external networks. In our setup, the physical interface eth0 is attached to it as a port, as shown in the ovs-vsctl output above.
  • Tunnel bridge (br-tun): This bridge is a virtual switch like br-int. It connects the GRE and VXLAN tunnel endpoints; "br-tun" translates the traffic received from the integration bridge "br-int" into VXLAN tunnels.
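The vxlan port name in the ovs-vsctl output above is not random: OVS names flow-based VXLAN ports after the remote tunnel endpoint in hex, so vxlan-c0a87a0d corresponds to the compute node's 192.168.122.13. A small sketch of the mapping:

```shell
#!/bin/sh
# OVS names flow-based VXLAN ports after the remote tunnel IP in hex.
# Reproduce the mapping seen in the ovs-vsctl output above.
ip_to_vxlan_port() {
    echo "$1" | { IFS=. read -r a b c d; printf 'vxlan-%02x%02x%02x%02x\n' "$a" "$b" "$c" "$d"; }
}
ip_to_vxlan_port 192.168.122.13   # prints vxlan-c0a87a0d
```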

Here is the configuration for neutron-openvswitch-agent on the controller (comments stripped)
[root@controller-rhel ~]# grep -o '^[^#]*' /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =192.168.122.12
bridge_mappings =physnet1:br-ex
enable_tunneling=True
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = False
arp_responder = False
prevent_arp_spoofing = True
enable_distributed_routing = False
extensions =
drop_flows_on_start=False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver




Step 2: Configure OVSBridge on the Compute

Repeat the same set of activities as performed on the controller node to change your interface and enable OVSBridge.

Below are my sample config files from the compute node
[root@compute-rhel network-scripts]# cat ifcfg-br-ex
TYPE="OVSBridge"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="no"
NAME="br-ex"
UUID="0924e54e-bf9d-43e1-98d5-32b3e465ab26"
DEVICE="br-ex"
DEVICETYPE="ovs"
ONBOOT="yes"
IPADDR="192.168.122.13"
PREFIX="24"
GATEWAY="192.168.122.1"
DNS1="10.43.138.12"
NM_CONTROLLED="no"

[root@compute-rhel network-scripts]# cat ifcfg-eth0
TYPE="OVSPort"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="no"
NAME="eth0"
UUID="0924e54e-bf9d-43e1-98d5-32b3e465ab26"
DEVICE="eth0"
ONBOOT="yes"
NM_CONTROLLED="no"
DEVICETYPE="ovs"
OVS_BRIDGE="br-ex"

Below is the corresponding neutron-openvswitch-agent configuration on the compute node (comments stripped)
[root@compute-rhel network-scripts]# grep -o '^[^#]*' /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =192.168.122.13
enable_tunneling=True
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = False
arp_responder = False
prevent_arp_spoofing = True
enable_distributed_routing = False
extensions =
drop_flows_on_start=False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver




Step 3: Create Internal Network

Now we are ready to create internal network which will be assigned to the instances

Login to your horizon dashboard

Navigate to Project -> Network -> Networks



Click on "Create Network"


Fill the provided details

Network Name  :internal_network
Admin State  :UP
Create Subnet:Checked



Click on Next

Subnet Name  :int_subnet
Network Address:192.168.100.0/24
IP Version            : IPv4
Gateway IP:192.168.100.254


Subnet Details

DHCP Enable   : Checked
DNS Servers  : 10.43.138.12



Click on "Create"

So our network is successfully created as you see below






Step 4: Create External Network

Now we need an external network which can be used for floating IPs to connect to the instances.

IMPORTANT NOTE: Make sure this external network what you intend to use is reachable from your host server. For my case I will use the same subnet as used for my controller and compute setup i.e. 192.168.122.0/24

Login to your horizon dashboard

Navigate to Project-> Network -> Networks



Click on "Create Network"

Fill the provided details
Network Name  :external_network
Admin State  :UP
Create Subnet:Checked



Click on Next
Subnet Name  :ext_subnet
Network Address:192.168.122.0/24
IP Version  : IPv4
Gateway IP  :192.168.122.1


Subnet Details

DHCP Enable  : Checked
DNS Servers  : 10.43.138.12


Click on "Create"

Now our network is created, but it will behave as an internal network unless we explicitly mark it as "external"

So Navigate to Admin -> Networks

Here you will see the list of available networks which we created as shown below


Next select the check box of the "external_network" and click on "Edit Network"


Next select the check box as shown below to make this as external network (public) and click on "Save Changes"


You can also validate this from your CLI
[root@controller-rhel ~(keystone_admin)]# neutron net-list
+--------------------------------------+------------------+-------------------------------------------------------+
| id                                   | name             | subnets                                               |
+--------------------------------------+------------------+-------------------------------------------------------+
| b85f4695-ac80-426a-9b69-87d0cec277db | external_network | 69f78d46-910c-4fb5-a086-812ff4743ec5 192.168.122.0/24 |
| 60be14fb-f28e-40be-a1f7-e09731ce2062 | internal_network | a1d247b9-6db3-43ca-a6af-b2ade51e80bc 192.168.100.0/24 |
+--------------------------------------+------------------+-------------------------------------------------------+

To get more details about the network we created
[root@controller-rhel ~(keystone_admin)]# neutron net-show external_network
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | b85f4695-ac80-426a-9b69-87d0cec277db |
| mtu                       | 0                                    |
| name                      | external_network                     |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 50                                   |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 69f78d46-910c-4fb5-a086-812ff4743ec5 |
| tenant_id                 | dbb0e4e20f874acd85cbc7927517390a     |
+---------------------------+--------------------------------------+

Now the internal network
[root@controller-rhel ~(keystone_admin)]# neutron net-show internal_network
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 60be14fb-f28e-40be-a1f7-e09731ce2062 |
| mtu                       | 0                                    |
| name                      | internal_network                     |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 80                                   |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | a1d247b9-6db3-43ca-a6af-b2ade51e80bc |
| tenant_id                 | dbb0e4e20f874acd85cbc7927517390a     |
+---------------------------+--------------------------------------+




Step 5: Create Routers

Whether using IPv4 or IPv6, network traffic needs to move from host to host and network to network. Each host has a routing table, which tells it how to route traffic for particular networks. A routing table entry lists a destination network, the interface to send the traffic out of, and the IP address of any intermediate router required to relay the message to its final destination. The routing table entry that matches the destination of the network traffic is used to route it. If two entries match, the one with the longest prefix is used.

In order for instances to communicate with any external subnet, a router must be deployed. Red Hat OpenStack Platform provides routing by using an SDN-based virtual router. Similar to physical routers, SDN-based virtual routers require one subnet per interface. Traffic received by the router uses the router's default gateway as the next hop, and the default gateway uses a virtual bridge to route the traffic to an external network. Each router has many interfaces that connect to subnets and one gateway that connects to a network.
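The "longest prefix wins" rule can be illustrated with a toy lookup over the two networks in this article. This is only a sketch of the selection rule, not how the kernel's routing table actually works:

```shell
#!/bin/sh
# Toy longest-prefix route lookup: pick the matching route with the
# longest prefix. Routes are "CIDR nexthop" pairs.
ip2int() {
    echo "$1" | { IFS=. read -r a b c d; echo $(( (a<<24)|(b<<16)|(c<<8)|d )); }
}
lookup() {  # lookup DEST "cidr gw" "cidr gw" ...
    dest=$(ip2int "$1"); shift
    best_len=-1; best_gw=
    for route in "$@"; do
        cidr=${route%% *}; gw=${route##* }
        net=$(ip2int "${cidr%/*}"); len=${cidr#*/}
        if [ "$len" -eq 0 ]; then mask=0
        else mask=$(( 0xFFFFFFFF << (32-len) & 0xFFFFFFFF )); fi
        if [ $(( dest & mask )) -eq $(( net & mask )) ] && [ "$len" -gt "$best_len" ]; then
            best_len=$len; best_gw=$gw
        fi
    done
    echo "$best_gw"
}
# A host on the internal subnet matches both routes; the /24 wins over
# the /0 default, so traffic goes via the subnet gateway:
lookup 192.168.100.7 "0.0.0.0/0 192.168.122.1" "192.168.100.0/24 192.168.100.254"
```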

To create a router, in Horizon

Navigate to Project -> Network -> Routers

Click on "Create Router"

Provide the below details
Router name:test-router
Admin State: UP
External Network  :external_network (select the public network which you created above in Step 4)


And click on "Create Router"

Next click on the router name i.e. "test-router" for us
It will show you the router details under "Overview"
Navigate to "Interfaces" TAB and click on "Add Interface"



Next select the internal network which we created from the drop down menu for "Subnet"

You can leave the IP Address section blank; since DHCP is enabled, an IP address will be allocated automatically.

Once done click on "Add Interface"

We are done with our Network Setup.
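Everything done through Horizon in Steps 3 to 5 can also be done with the neutron CLI. The command names below are from the neutron client this release ships; the sketch only prints them, so remove the echo wrapper to execute on the controller after sourcing keystonerc_admin.

```shell
#!/bin/sh
# Dry run: print the neutron CLI equivalents of the Horizon steps above.
run() { echo "neutron $*"; }            # swap echo for the real client
run net-create internal_network
run subnet-create --name int_subnet --gateway 192.168.100.254 \
    --dns-nameserver 10.43.138.12 internal_network 192.168.100.0/24
run net-create external_network --router:external=True
run subnet-create --name ext_subnet --gateway 192.168.122.1 \
    external_network 192.168.122.0/24
run router-create test-router
run router-gateway-set test-router external_network
run router-interface-add test-router int_subnet
```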

Part 3: Create Glance Image, Cinder Volumes, Flavor Templates and key pairs for an Instance in Openstack


In this article I will show you the steps to create all the necessary pre-requisites for creating an instance.
Below steps are validated on Red Hat based Openstack Platform but the same commands will work on opensource Openstack, although the image screenshots may differ.






Creating Glance Image

Glance consists of two services that are implemented as GNU/Linux daemons:
  • glance-api: Accepts Image REST API calls for image discovery, retrieval, and storage.
  • glance-registry: Stores, processes, and retrieves metadata about images, for example size, type, and owner. External services never touch glance-registry directly.

Below are the links to the image repositories of various OpenStack distributions which you can use for your setup.

  • Red Hat OpenStack Platform (60-day trial): https://www.redhat.com/en/insights/openstack
  • RDO by Red Hat: https://www.rdoproject.org/
  • Mirantis OpenStack: https://www.mirantis.com/products/mirantis-openstacksoftware/
  • Ubuntu OpenStack: http://www.ubuntu.com/cloud/openstack
  • SUSE OpenStack Cloud (60-day trial): https://www.suse.com/products/suse-openstack-cloud/

For this article I will download a qcow2 image of CentOS
[root@openstack ~]# wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1805.qcow2

This will download the CentOS-7 image on my setup. Make sure you download this on your host server as your controller will try to search for the image on its base machine.

For the sake of this article I will change the root password of this image
# virt-customize -a /tmp/CentOS-7-x86_64-GenericCloud-20141129_01.qcow2 --root-password password:redhat
[   0.0] Examining the guest ...
[  34.9] Setting a random seed
[  34.9] Setting passwords
[  47.6] Finishing off

This will change the password of the root user in the image to "redhat"

Next login to your Horizon dashboard

Navigate to Project -> Compute -> Images -> Create Image

Now follow the instructions on the screen
Name  :CentOS7
Image Source:Image File
Browse:Locate the image from your setup
Format:QCOW2
Architecture: x86_64



Next click on "Create Image"
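The same upload can be done from the command line. A sketch, assuming the qcow2 file downloaded above is in the current directory (again wrapped in a function so it only runs when called against a live cloud):

```shell
# Sketch: upload the downloaded qcow2 image to Glance via the CLI
upload_centos_image() {
    openstack image create \
        --disk-format qcow2 \
        --container-format bare \
        --file CentOS-7-x86_64-GenericCloud-1805.qcow2 \
        CentOS7
}
```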

You can validate the image which you just created on your controller node at the below location
[root@controller-rhel ~]# cd /var/lib/glance/images/

[root@controller-rhel images]# ls -l
total 965892
-rw-r-----. 1 glance glance 989069312 Jul  8 13:02 eb1c248f-aade-426b-adde-012c8de98521

Or you can use below command
[root@controller-rhel ~(keystone_admin)]# openstack image list
+--------------------------------------+---------+
| ID                                   | Name    |
+--------------------------------------+---------+
| eb1c248f-aade-426b-adde-012c8de98521 | CentOS7 |
+--------------------------------------+---------+

To get the details of the image
[root@openstack ~]# qemu-img info /tmp/CentOS-7-x86_64-GenericCloud-20141129_01.qcow2
image: /tmp/CentOS-7-x86_64-GenericCloud-20141129_01.qcow2
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 943M
cluster_size: 65536
Format specific information:
    compat: 0.10

Disk formats supported by Glance





Create Volumes

We have used block-storage (Cinder) as our backend type for creating Volumes. We can add additional Volumes to the Instances which we create here.

OpenStack block storage service consists of four services implemented as GNU/Linux daemons:
  • cinder-api: The API service provides an HTTP endpoint for API requests. Currently, two versions of the API are supported and required for the cloud, so Cinder provides six endpoints. The cinder-api service verifies the identity requirements for an incoming request and then routes it to cinder-volume for action through the message broker.
  • cinder-scheduler: The scheduler service reads requests from the message queue and selects the optimal storage provider node to create or manage the volume.
  • cinder-volume: This service works with a storage back end through drivers. The cinder-volume service gets requests from the scheduler and responds to read and write requests sent to the block storage service to maintain state. You can use several back ends at the same time; for each back end you need one or more dedicated cinder-volume services.
  • cinder-backup: The backup service works with the backup back end through the driver architecture.
The space for this volume will be allocated from the cinder-volumes volume group which we created in the first stage of this article.

You can use the cinder service-list command to query the status of Cinder services:
[root@controller-rhel ~(keystone_admin)]# cinder service-list
+------------------+--------------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |        Host        | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+--------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |   controller-rhel   | nova | enabled |   up  | 2018-07-08T08:29:48.000000 |        -        |
|  cinder-volume   | controller-rhel@lvm | nova | enabled |   up  | 2018-07-08T08:29:49.000000 |        -        |
+------------------+--------------------+------+---------+-------+----------------------------+-----------------+

You can check the free size available on your cinder volume on the controller
[root@controller-rhel ~]# vgs
  VG             #PV #LV #SN Attr   VSize    VFree
  cinder-volumes   1   0   0 wz--n- <100.00g <100.00g
  rhel             1   2   0 wz--n-  <38.52g   <7.69g

[root@controller-rhel ~]# pvs
  PV         VG             Fmt  Attr PSize    PFree
  /dev/vda3  rhel           lvm2 a--   <38.52g   <7.69g
  /dev/vdb1  cinder-volumes lvm2 a--  <100.00g <100.00g

As you see, I have created a cinder-volumes VG with 100GB of storage, so I can create volumes until this limit is reached.

To create a "Volume"

Navigate to Project -> Volumes -> "Create Volume"

Fill in the details as requested on the screen

Volume Name:40GB_volume
Volume Source:No source
Type:iscsi
Size:40GB
Availability Zone: Nova



Click on "Create Volume"
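The CLI equivalent of the same form can be sketched as below (the function only runs when invoked against a live cloud; the values mirror the Horizon fields above):

```shell
# Sketch: create the same 40GB volume via the CLI
create_test_volume() {
    openstack volume create \
        --size 40 \
        --type iscsi \
        --availability-zone nova \
        40GB_volume
}
```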

So now our volume is successfully created

You can also validate the volume by using the below command on the controller
[root@controller-rhel ~(keystone_admin)]# openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID                                   | Display Name | Status    | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| 60067a3c-1b56-420e-8130-3174cf571574 | 40GB_volume  | available |   40 |             |
+--------------------------------------+--------------+-----------+------+-------------+




Create Flavors

Flavors are resource templates which determine the instance's size in terms of RAM, disk, and number of cores. Flavors can also specify secondary ephemeral storage, a swap disk, metadata to restrict usage, or special project access. The default install of OpenStack provides five flavors. However, there are use cases, such as changing the default memory and capacity to suit the underlying hardware, or adding metadata to force a specific I/O rate for the instance, that may require you to create and manage specialized flavors.

Default Flavors


NOTE: Creation of flavors is done by "admin" user, but may be delegated to other users by redefining the access controls. To allow all users to configure flavors, specify the policy in the "/etc/nova/policy.json" file

[root@controller-rhel ~]# grep flavormanage /etc/nova/policy.json
    "compute_extension:flavormanage": "rule:admin_api",

To create a flavor

Navigate to Admin -> System -> Flavors -> Create Flavor

Fill in the required details
Name:m1.test
ID:auto
VCPUs:4
RAM:4096MB
Root Disk:40GB
Ephemeral Disk:10GB
Swap Disk:1024MB


Click on "Create Flavor"
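The same flavor can be sketched via the CLI with the values used above (wrapped in a function so it only runs when called with admin credentials sourced):

```shell
# Sketch: create the m1.test flavor via the CLI (admin credentials required)
create_test_flavor() {
    openstack flavor create \
        --vcpus 4 \
        --ram 4096 \
        --disk 40 \
        --ephemeral 10 \
        --swap 1024 \
        m1.test
}
```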




Managing Key Pairs

SSH keys are a secure, trusted way of connecting to remote servers via the SSH protocol without using passwords. SSH keys always come in pairs: a private key and a public key. On Linux systems, the private key is usually stored in ~/.ssh/id_KEY_TYPE and the public key in ~/.ssh/id_KEY_TYPE.pub, KEY_TYPE being the encryption algorithm such as RSA or DSA.
The type of encryption most often used by default is RSA, so the keys would be named id_rsa and id_rsa.pub. While the public key is meant to be shared or sent to remote servers freely (in the ~/.ssh/authorized_keys file), the private key should be secured on the local machine with strict access permissions.

Openstack gives users the ability to generate and import existing key pairs; upon generation, while the public key is stored in the database, the private key is not stored, as accessing the database would compromise the integrity as well as the security of the keys.

To create a key pair

Login to the Horizon Dashboard

Navigate to Project -> Access & Security -> Key Pairs -> Create Key Pair

Give a name to your key pair (I am using "test") and click on Next

It will ask you to download the key pair on your controller node, save it under Downloads


Once a key pair is generated, users can use it to connect to an instance. If a public key is imported, Nova will store it in its database and will expect users to possess the corresponding private key.

IMPORTANT NOTE: Depending on how the image is configured, it might not be possible to connect to the instance without a valid key pair.

To view the key pair using CLI
[root@controller-rhel ~(keystone_admin)]# nova keypair-list
+------+-------------------------------------------------+
| Name | Fingerprint                                     |
+------+-------------------------------------------------+
| test | a5:ed:65:bd:dd:24:d0:af:12:1a:8f:b2:6e:f7:82:51 |
+------+-------------------------------------------------+
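If you prefer the CLI over Horizon, the key pair can also be generated there; `openstack keypair create` prints the private key to stdout, so it must be saved and protected immediately. A sketch (the function only runs when called against a live cloud):

```shell
# Sketch: generate the key pair via the CLI; the private key is printed
# to stdout and must be saved and protected right away
create_test_keypair() {
    openstack keypair create test > test.pem
    chmod 600 test.pem
}
```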




Part 4: How to create, launch and connect to an instance from scratch in Openstack


Below steps and screenshots are validated on a Red Hat based OpenStack platform, but the steps and commands will also work on opensource OpenStack running on any other distribution.

Create an Instance

To launch an instance, login to your Horizon Dashboard using the user for which the instance needs to be created. I will use the admin user, as I have not created any other project for this article.

Login using "admin" to your Horizon DashBoard

Navigate to Project -> Compute -> Instance -> Launch Instance

Fill in the required details

Under Details TAB
Availability Zone:nova
Instance Name:testvm
Flavor:m1.test
Instance Count:1
Boot Source:Boot from Image
Image Name:CentOS7







Under Access and Security
Key Pair:test
Security Group:default



Under Networking
Selected networks:internal_network



Click on "Launch"

It may take a few minutes, depending upon your environment, for the instance to be up and ready.
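All the Horizon fields above map onto a single CLI call. A sketch using the names from this walkthrough (note: the `--network` option requires a recent python-openstackclient; older releases use `--nic net-id=...` instead). Wrapped in a function so it only runs against a live cloud:

```shell
# Sketch: launch the same instance via the CLI
launch_testvm() {
    openstack server create \
        --flavor m1.test \
        --image CentOS7 \
        --key-name test \
        --security-group default \
        --network internal_network \
        testvm
}
```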




Associate Floating IP to the Instance

Since we have not yet generated any floating IP, at this stage itself we will generate and assign one floating IP to my instance

Next select the instance and, from the drop down menu as shown below, select "Associate Floating IP"

Click on the "plus" sign to generate a floating IP


Select the external network pool and generate a free floating IP which can be associated to the instance


Now we have a Floating IP with us "192.168.122.4" which we can assign to this instance


Now as you can see floating ip has been assigned to my instance
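The allocate-and-attach sequence can also be sketched via the CLI; `-f value -c floating_ip_address` makes the client print just the allocated address so it can be captured in a variable. The function only runs when invoked against a live cloud:

```shell
# Sketch: allocate a floating IP from the external pool and attach it
attach_floating_ip() {
    FIP=$(openstack floating ip create external_network \
              -f value -c floating_ip_address)
    openstack server add floating ip testvm "$FIP"
}
```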





Modify Security Group for the Instance

We had used the "default" security group for this instance, in which ICMP and SSH traffic is blocked by default, so we must allow them before we try to connect to the instance

To modify the security group

Navigate to Project ->  Compute -> Access & Security -> Security Groups -> default

Select the checkbox of default and click on "Manage Rules"

Click on "Add Rule"

From the drop down menu select "Allow ICMP" and "Add"

Next again create another rule and select "SSH" and "Add"
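The two rules above can be sketched via the CLI as well (run with credentials for the project owning the "default" group; the function only runs when called):

```shell
# Sketch: allow ICMP and SSH in the "default" security group
open_icmp_and_ssh() {
    openstack security group rule create --protocol icmp default
    openstack security group rule create --protocol tcp --dst-port 22 default
}
```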





Connect to the Instance

NOTE: Make sure the permission on the private key file is 600
[root@openstack ~]# chmod 600 Downloads/test.pem

Next try to connect to the instance
[root@openstack ~]# ssh -i Downloads/test.pem 192.168.122.4
root@192.168.122.4's password:

[root@host-192-168-100-2 ~]#

IMPORTANT NOTE: Since we had changed the password of our image file for root user, we are directly able to login as root. If you have not followed that step then you must login as "centos" user.

So all went well here. I hope the article was useful.

Please let me know your views and feedback in the comment section below.


How to test ssh connection using bash script in Linux

There can be multiple scenarios due to which an ssh connection may fail, some of them being
  • network issues
  • password incorrect
  • passphrase incorrect
  • and many more..

so we will try to rule out all the possible failure scenarios with some pre-checks before we do the actual task, to make sure my ssh doesn't hang

Ping Test

_HOST=192.168.100.10
ping -q -W 5 -c 1 $_HOST>/dev/null 2>&1

Here I am doing a ping test with a 5-second timeout, sending one packet to the target host to make sure the network path between client and server is working.

Check port connectivity

You can use multiple tools to check port connectivity and make sure the target port is reachable
_PORT=22
_HOST=192.168.100.10

timeout 5 bash -c "</dev/tcp/${_HOST}/${_PORT}"

OR





On distributions with netcat rpm
# netcat -z -w 5 ${_HOST} ${_PORT}
_RETCODE=$?

if [ $_RETCODE -ne 0 ];then
    echo "Port ${_PORT} not available"
fi
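The /dev/tcp approach can be wrapped in a reusable helper that avoids depending on netcat being installed, since /dev/tcp is a bash built-in pseudo-device (the function name is mine, for illustration):

```shell
#!/bin/bash
# Returns 0 if a TCP connection to host:port succeeds within 5 seconds
check_port() {
    local host=$1 port=$2
    timeout 5 bash -c "</dev/tcp/${host}/${port}" 2>/dev/null
}

if check_port 192.168.100.10 22; then
    echo "port 22 open"
else
    echo "port 22 not available"
fi
```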

Test ssh connectivity

Here we will try to do a quick ssh and validate the return code
ssh -q -o BatchMode=yes -o StrictHostKeyChecking=no -i /export/home/sufuser/.ssh/id_rsa $_HOST 'exit 0'
_RCODE=$?
if [ $_RCODE -ne 0 ]
then
    log_and_print_red "unable to ssh, host is not accessible"
    continue   # 'continue' assumes this snippet runs inside a loop over hosts
fi

I hope the article was useful.

How to check the swap memory usage by a process in Linux

Earlier I had written an article on swappiness, here I will show you some of the methods which can be used to check the swap memory utilisation.



Below article is tested and validated on Red Hat Enterprise Linux 7.

IMPORTANT NOTE: 
There is no way to know how much swap space is used by a process in kernel versions prior to 2.6.18-128.el5 (RHEL 5 update 3). In any earlier RHEL versions (all of RHEL 3, RHEL 4, and RHEL 5 up to and including RHEL 5 update 2), the kernel code necessary for determining how much swap space is used by individual processes is not present.

Using the below tool we can only know the total, used and available swap memory
# free -m
              total        used        free      shared  buff/cache   available
Mem:         128816       10014      117010         126        1791      117822
Swap:          4091        1821        2270

But it does not give any information on the swap memory usage per process or application





Method 1

We have the 'top' utility, which can be used as the first tool to get the swap utilisation value per process. By default 'top' does not show the SWAP utilisation, so you will need to add the additional field "SWAP", which will then show this value.

'top' derives this column using the formula:
VIRT = SWAP + RES, or equivalently
SWAP = VIRT - RES

The output would look like something below
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                             SWAP
 6735 dbmrun    20   0 19.324g 3.178g   4868 S   0.7  2.5 151:06.86 java                                                              364288
 2216 ssrun     20   0 24.737g 1.639g   2588 S   0.0  1.3  24:59.10 jsvc                                                              211668
 2214 ne3suser  20   0 12.719g 1.055g   5672 S   0.3  0.8  22:04.87 jsvc                                                              284028
 4162 ne3suser  20   0 14.428g 822776   5280 S   0.0  0.6  15:10.43 jsvc                                                              958204
 2215 ssrun     20   0 7011772 344136   6024 S   0.0  0.3  16:28.47 jsvc                                                               24996
 2217 ssrun     20   0 6315572 188004   4596 S   0.0  0.1  12:24.31 jsvc                                                               19128
 2168 postgres  20   0  280636  35828  35516 S   0.0  0.0   0:23.95 postgres                                                             796
  757 root      20   0   53220  16856  16532 S   0.0  0.0   0:22.14 systemd-journal                                                        0
 1954 root      20   0  568732  11504    400 S   0.0  0.0   1:05.69 tuned                                                               1676
 1143 root      20   0  359164  10224   8812 S   0.0  0.0   0:35.86 rsyslogd                                                             216
 2010 root      20   0  188204   8284    780 S   0.0  0.0   1:25.03 amsHelper                                                           4592

Here, as you can see, a new column has been added at the end which shows us the SWAP memory usage.

Method 2

We can also check this value from the per-process status file, which will give a value similar to what we got with the 'top' utility

For example, if I wish to get the swap usage detail for the 'amsHelper' process
# grep -i VmSwap /proc/$(pgrep amsHelper)/status
VmSwap:     4592 kB

If the process has multiple PIDs associated, then the below command will help
# for i in $(pidof postgres);do grep -i vmswap /proc/$i/status;done
VmSwap:     1560 kB
VmSwap:     1068 kB
VmSwap:      916 kB
VmSwap:      928 kB
VmSwap:      960 kB
VmSwap:      856 kB
VmSwap:     1020 kB
VmSwap:      944 kB
VmSwap:     1012 kB
VmSwap:      756 kB
VmSwap:     1024 kB
VmSwap:      816 kB
VmSwap:     1132 kB
VmSwap:      900 kB
VmSwap:      884 kB
VmSwap:      864 kB
VmSwap:      888 kB
VmSwap:      796 kB

To calculate the total
# for i in $(pidof postgres);do grep -i vmsw /proc/$i/status;done | awk '{s+=$2} END {print s}'
17324
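The same /proc/<pid>/status approach extends into a quick system-wide report listing the biggest swap consumers. A small sketch; it simply walks every /proc/<pid>/status file and sorts by the VmSwap value:

```shell
#!/bin/bash
# List the top 10 swap-consuming processes using VmSwap from /proc
for f in /proc/[0-9]*/status; do
    # a process may exit between glob and read, hence 2>/dev/null
    awk '/^Name:/{name=$2} /^VmSwap:/{printf "%8d kB  %s\n", $2, name}' "$f" 2>/dev/null
done | sort -rn | head -10
```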



Method 3

You can also use 'pmap' to get the details of the swap memory usage for a process. This value will again be similar to what we calculated above

For example to get the swap usage for 'amsHelper'
# pmap -X $(pgrep amsHelper)
2010:   /sbin/amsHelper -f
         Address Perm   Offset Device  Inode   Size  Rss  Pss Referenced Anonymous Swap Locked Mapping
        00400000 r-xp 00000000  fd:00  13670   1632  408  408        408         0    0      0 amsHelper
        00798000 r--p 00198000  fd:00  13670      4    4    4          4         4    0      0 amsHelper
        00799000 rw-p 00199000  fd:00  13670     52   16   16         12        16    0      0 amsHelper
        007a6000 rw-p 00000000  00:00      0    356   32   32         32        32   16      0
        021c6000 rw-p 00000000  00:00      0  11584 7280 7280       7268      7280 4240      0 [heap]
    7f4177075000 r-xp 00000000  fd:00   3119     48    0    0          0         0    0      0 libnss_files-2.17.so
    7f4177081000 ---p 0000c000  fd:00   3119   2044    0    0          0         0    0      0 libnss_files-2.17.so
    7f4177280000 r--p 0000b000  fd:00   3119      4    0    0          0         0    4      0 libnss_files-2.17.so
    7f4177281000 rw-p 0000c000  fd:00   3119      4    0    0          0         0    4      0 libnss_files-2.17.so
    7f4177282000 rw-p 00000000  00:00      0     24    0    0          0         0    0      0
    7f4177288000 r-xp 00000000  fd:00   3123     40    0    0          0         0    0      0 libnss_nis-2.17.so
    7f4177292000 ---p 0000a000  fd:00   3123   2048    0    0          0         0    0      0 libnss_nis-2.17.so
    7f4177492000 r--p 0000a000  fd:00   3123      4    0    0          0         0    4      0 libnss_nis-2.17.so
    7f4177493000 rw-p 0000b000  fd:00   3123      4    0    0          0         0    4      0 libnss_nis-2.17.so
    7f4177494000 r-xp 00000000  fd:00   3111     88    0    0          0         0    0      0 libnsl-2.17.so
    7f41774aa000 ---p 00016000  fd:00   3111   2044    0    0          0         0    0      0 libnsl-2.17.so
    7f41776a9000 r--p 00015000  fd:00   3111      4    0    0          0         0    4      0 libnsl-2.17.so
    7f41776aa000 rw-p 00016000  fd:00   3111      4    0    0          0         0    4      0 libnsl-2.17.so
    7f41776ab000 rw-p 00000000  00:00      0      8    0    0          0         0    0      0
    7f41776ad000 r-xp 00000000  fd:00   3113     32    0    0          0         0    0      0 libnss_compat-2.17.so
    7f41776b5000 ---p 00008000  fd:00   3113   2048    0    0          0         0    0      0 libnss_compat-2.17.so
    7f41778b5000 r--p 00008000  fd:00   3113      4    0    0          0         0    4      0 libnss_compat-2.17.so
    7f41778b6000 rw-p 00009000  fd:00   3113      4    0    0          0         0    4      0 libnss_compat-2.17.so
    7f41778b7000 r--p 00000000  fd:00 132689 103588    8    0          8         0    0      0 locale-archive
    7f417dde0000 r-xp 00000000  fd:00   3131     28    0    0          0         0    0      0 librt-2.17.so
    7f417dde7000 ---p 00007000  fd:00   3131   2044    0    0          0         0    0      0 librt-2.17.so
    7f417dfe6000 r--p 00006000  fd:00   3131      4    0    0          0         0    4      0 librt-2.17.so
    7f417dfe7000 rw-p 00007000  fd:00   3131      4    0    0          0         0    4      0 librt-2.17.so
    7f417dfe8000 r-xp 00000000  fd:00   3531     16    0    0          0         0    0      0 libattr.so.1.1.0
    7f417dfec000 ---p 00004000  fd:00   3531   2044    0    0          0         0    0      0 libattr.so.1.1.0
    7f417e1eb000 r--p 00003000  fd:00   3531      4    0    0          0         0    4      0 libattr.so.1.1.0
    7f417e1ec000 rw-p 00004000  fd:00   3531      4    0    0          0         0    4      0 libattr.so.1.1.0
    7f417e1ed000 r-xp 00000000  fd:00   3281    384    0    0          0         0    0      0 libpcre.so.1.2.0
    7f417e24d000 ---p 00060000  fd:00   3281   2048    0    0          0         0    0      0 libpcre.so.1.2.0
    7f417e44d000 r--p 00060000  fd:00   3281      4    0    0          0         0    4      0 libpcre.so.1.2.0
    7f417e44e000 rw-p 00061000  fd:00   3281      4    0    0          0         0    4      0 libpcre.so.1.2.0
    7f417e44f000 r-xp 00000000  fd:00   3144    232    0    0          0         0    0      0 libnspr4.so
    7f417e489000 ---p 0003a000  fd:00   3144   2044    0    0          0         0    0      0 libnspr4.so
    7f417e688000 r--p 00039000  fd:00   3144      4    0    0          0         0    4      0 libnspr4.so
    7f417e689000 rw-p 0003a000  fd:00   3144      8    0    0          0         0    8      0 libnspr4.so
    7f417e68b000 rw-p 00000000  00:00      0      8    0    0          0         0    4      0
    7f417e68d000 r-xp 00000000  fd:00   3146     12    0    0          0         0    0      0 libplds4.so
    7f417e690000 ---p 00003000  fd:00   3146   2044    0    0          0         0    0      0 libplds4.so
    7f417e88f000 r--p 00002000  fd:00   3146      4    0    0          0         0    4      0 libplds4.so
    7f417e890000 rw-p 00003000  fd:00   3146      4    0    0          0         0    4      0 libplds4.so
    7f417e891000 r-xp 00000000  fd:00   3145     16    0    0          0         0    0      0 libplc4.so
    7f417e895000 ---p 00004000  fd:00   3145   2044    0    0          0         0    0      0 libplc4.so
    7f417ea94000 r--p 00003000  fd:00   3145      4    0    0          0         0    4      0 libplc4.so
    7f417ea95000 rw-p 00004000  fd:00   3145      4    0    0          0         0    4      0 libplc4.so
    7f417ea96000 r-xp 00000000  fd:00   3328    152    0    0          0         0    0      0 libnssutil3.so
    7f417eabc000 ---p 00026000  fd:00   3328   2044    0    0          0         0    0      0 libnssutil3.so
    7f417ecbb000 r--p 00025000  fd:00   3328     28    0    0          0         0   28      0 libnssutil3.so
    7f417ecc2000 rw-p 0002c000  fd:00   3328      4    0    0          0         0    4      0 libnssutil3.so
    7f417ecc3000 r-xp 00000000  fd:00   3129     88    0    0          0         0    0      0 libresolv-2.17.so
    7f417ecd9000 ---p 00016000  fd:00   3129   2048    0    0          0         0    0      0 libresolv-2.17.so
    7f417eed9000 r--p 00016000  fd:00   3129      4    0    0          0         0    4      0 libresolv-2.17.so
    7f417eeda000 rw-p 00017000  fd:00   3129      4    0    0          0         0    4      0 libresolv-2.17.so
    7f417eedb000 rw-p 00000000  00:00      0      8    0    0          0         0    0      0
    7f417eedd000 r-xp 00000000  fd:00   3489   1748    0    0          0         0    0      0 libdb-5.3.so
    7f417f092000 ---p 001b5000  fd:00   3489   2048    0    0          0         0    0      0 libdb-5.3.so
    7f417f292000 r--p 001b5000  fd:00   3489     28    0    0          0         0   28      0 libdb-5.3.so
    7f417f299000 rw-p 001bc000  fd:00   3489     12    0    0          0         0   12      0 libdb-5.3.so
    7f417f29c000 r-xp 00000000  fd:00   3543     28    0    0          0         0    0      0 libacl.so.1.1.0
    7f417f2a3000 ---p 00007000  fd:00   3543   2048    0    0          0         0    0      0 libacl.so.1.1.0
    7f417f4a3000 r--p 00007000  fd:00   3543      4    4    4          0         4    0      0 libacl.so.1.1.0
    7f417f4a4000 rw-p 00008000  fd:00   3543      4    4    4          0         4    0      0 libacl.so.1.1.0
    7f417f4a5000 r-xp 00000000  fd:00   3533     16    0    0          0         0    0      0 libcap.so.2.22
    7f417f4a9000 ---p 00004000  fd:00   3533   2044    0    0          0         0    0      0 libcap.so.2.22
    7f417f6a8000 r--p 00003000  fd:00   3533      4    0    0          0         0    4      0 libcap.so.2.22
    7f417f6a9000 rw-p 00004000  fd:00   3533      4    0    0          0         0    4      0 libcap.so.2.22
    7f417f6aa000 r-xp 00000000  fd:00   3280    144    0    0          0         0    0      0 libselinux.so.1
    7f417f6ce000 ---p 00024000  fd:00   3280   2044    0    0          0         0    0      0 libselinux.so.1
    7f417f8cd000 r--p 00023000  fd:00   3280      4    0    0          0         0    4      0 libselinux.so.1
    7f417f8ce000 rw-p 00024000  fd:00   3280      4    0    0          0         0    4      0 libselinux.so.1
    7f417f8cf000 rw-p 00000000  00:00      0      8    0    0          0         0    4      0
    7f417f8d1000 r-xp 00000000  fd:00   3127     92   40    0         40         0    0      0 libpthread-2.17.so
    7f417f8e8000 ---p 00017000  fd:00   3127   2044    0    0          0         0    0      0 libpthread-2.17.so
    7f417fae7000 r--p 00016000  fd:00   3127      4    0    0          0         0    4      0 libpthread-2.17.so
    7f417fae8000 rw-p 00017000  fd:00   3127      4    0    0          0         0    4      0 libpthread-2.17.so
    7f417fae9000 rw-p 00000000  00:00      0     16    0    0          0         0    4      0
    7f417faed000 r-xp 00000000  fd:00   3107      8    0    0          0         0    0      0 libdl-2.17.so
    7f417faef000 ---p 00002000  fd:00   3107   2048    0    0          0         0    0      0 libdl-2.17.so
    7f417fcef000 r--p 00002000  fd:00   3107      4    4    4          0         4    0      0 libdl-2.17.so
    7f417fcf0000 rw-p 00003000  fd:00   3107      4    4    4          0         4    0      0 libdl-2.17.so
    7f417fcf1000 r-xp 00000000  fd:00   3527    176    0    0          0         0    0      0 liblua-5.1.so
    7f417fd1d000 ---p 0002c000  fd:00   3527   2044    0    0          0         0    0      0 liblua-5.1.so
    7f417ff1c000 r--p 0002b000  fd:00   3527      8    4    4          0         4    4      0 liblua-5.1.so
    7f417ff1e000 rw-p 0002d000  fd:00   3527      4    4    4          0         4    0      0 liblua-5.1.so
    7f417ff1f000 r-xp 00000000  fd:00   3389    148    0    0          0         0    0      0 liblzma.so.5.2.2
    7f417ff44000 ---p 00025000  fd:00   3389   2044    0    0          0         0    0      0 liblzma.so.5.2.2
    7f4180143000 r--p 00024000  fd:00   3389      4    4    4          0         4    0      0 liblzma.so.5.2.2
    7f4180144000 rw-p 00025000  fd:00   3389      4    4    4          0         4    0      0 liblzma.so.5.2.2
    7f4180145000 r-xp 00000000  fd:00   3307     36    0    0          0         0    0      0 libpopt.so.0.0.0
    7f418014e000 ---p 00009000  fd:00   3307   2044    0    0          0         0    0      0 libpopt.so.0.0.0
    7f418034d000 r--p 00008000  fd:00   3307      4    4    4          0         4    0      0 libpopt.so.0.0.0
    7f418034e000 rw-p 00009000  fd:00   3307      4    4    4          0         4    0      0 libpopt.so.0.0.0
    7f418034f000 r-xp 00000000  fd:00   3483     92    0    0          0         0    0      0 libelf-0.168.so
    7f4180366000 ---p 00017000  fd:00   3483   2044    0    0          0         0    0      0 libelf-0.168.so
    7f4180565000 r--p 00016000  fd:00   3483      4    4    4          0         4    0      0 libelf-0.168.so
    7f4180566000 rw-p 00017000  fd:00   3483      4    4    4          0         4    0      0 libelf-0.168.so
    7f4180567000 r-xp 00000000  fd:00   3293     84    0    0          0         0    0      0 libz.so.1.2.7
    7f418057c000 ---p 00015000  fd:00   3293   2044    0    0          0         0    0      0 libz.so.1.2.7
    7f418077b000 r--p 00014000  fd:00   3293      4    4    4          0         4    0      0 libz.so.1.2.7
    7f418077c000 rw-p 00015000  fd:00   3293      4    4    4          0         4    0      0 libz.so.1.2.7
    7f418077d000 r-xp 00000000  fd:00   3392     60    0    0          0         0    0      0 libbz2.so.1.0.6
    7f418078c000 ---p 0000f000  fd:00   3392   2044    0    0          0         0    0      0 libbz2.so.1.0.6
    7f418098b000 r--p 0000e000  fd:00   3392      4    4    4          0         4    0      0 libbz2.so.1.0.6
    7f418098c000 rw-p 0000f000  fd:00   3392      4    4    4          0         4    0      0 libbz2.so.1.0.6
    7f418098d000 r-xp 00000000  fd:00   8210   1156    0    0          0         0    0      0 libnss3.so
    7f4180aae000 ---p 00121000  fd:00   8210   2048    0    0          0         0    0      0 libnss3.so
    7f4180cae000 r--p 00121000  fd:00   8210     20    4    4          0         4   16      0 libnss3.so
    7f4180cb3000 rw-p 00126000  fd:00   8210      8    4    4          0         4    4      0 libnss3.so
    7f4180cb5000 rw-p 00000000  00:00      0      8    0    0          0         0    8      0
    7f4180cb7000 r-xp 00000000  fd:00   3101   1760  304   10        304         0    0      0 libc-2.17.so
    7f4180e6f000 ---p 001b8000  fd:00   3101   2048    0    0          0         0    0      0 libc-2.17.so
    7f418106f000 r--p 001b8000  fd:00   3101     16    8    8          8         8    8      0 libc-2.17.so
    7f4181073000 rw-p 001bc000  fd:00   3101      8    8    8          8         8    0      0 libc-2.17.so
    7f4181075000 rw-p 00000000  00:00      0     20    8    8          8         8   12      0
    7f418107a000 r-xp 00000000  fd:00   4599     48    0    0          0         0    0      0 libpci.so.3.5.1
    7f4181086000 ---p 0000c000  fd:00   4599   2044    0    0          0         0    0      0 libpci.so.3.5.1
    7f4181285000 r--p 0000b000  fd:00   4599      4    4    4          0         4    0      0 libpci.so.3.5.1
    7f4181286000 rw-p 0000c000  fd:00   4599      4    4    4          0         4    0      0 libpci.so.3.5.1
    7f4181287000 r-xp 00000000  fd:00   8422    388    0    0          0         0    0      0 librpm.so.3.2.2
    7f41812e8000 ---p 00061000  fd:00   8422   2044    0    0          0         0    0      0 librpm.so.3.2.2
    7f41814e7000 r--p 00060000  fd:00   8422     12    4    4          0         4    8      0 librpm.so.3.2.2
    7f41814ea000 rw-p 00063000  fd:00   8422     12    4    4          0         4    8      0 librpm.so.3.2.2
    7f41814ed000 rw-p 00000000  00:00      0      4    0    0          0         0    4      0
    7f41814ee000 r-xp 00000000  fd:00   8424    160    0    0          0         0    0      0 librpmio.so.3.2.2
    7f4181516000 ---p 00028000  fd:00   8424   2044    0    0          0         0    0      0 librpmio.so.3.2.2
    7f4181715000 r--p 00027000  fd:00   8424      8    4    4          0         4    4      0 librpmio.so.3.2.2
    7f4181717000 rw-p 00029000  fd:00   8424      8    4    4          0         4    4      0 librpmio.so.3.2.2
    7f4181719000 rw-p 00000000  00:00      0      8    0    0          0         0    4      0
    7f418171b000 r-xp 00000000  fd:00   3109   1028    8    0          8         0    0      0 libm-2.17.so
    7f418181c000 ---p 00101000  fd:00   3109   2044    0    0          0         0    0      0 libm-2.17.so
    7f4181a1b000 r--p 00100000  fd:00   3109      4    4    4          0         4    0      0 libm-2.17.so
    7f4181a1c000 rw-p 00101000  fd:00   3109      4    4    4          0         4    0      0 libm-2.17.so
    7f4181a1d000 r-xp 00000000  fd:00   3094    132    8    0          8         0    0      0 ld-2.17.so
    7f4181bb0000 r--s 00000000  fd:00 274663    264    0    0          0         0    0      0 modules.dep
    7f4181bf3000 r--s 00000000  00:13  20825    212    0    0          0         0  196      0 dbQN6MqZ (deleted)
    7f4181c28000 rw-p 00000000  00:00      0     56   24   24          4        24   32      0
    7f4181c36000 r--s 00000000  fd:00 132688     28    0    0          0         0    0      0 gconv-modules.cache
    7f4181c3d000 rw-p 00000000  00:00      0      4    4    4          0         4    0      0
    7f4181c3e000 r--p 00021000  fd:00   3094      4    4    4          0         4    0      0 ld-2.17.so
    7f4181c3f000 rw-p 00022000  fd:00   3094      4    4    4          0         4    0      0 ld-2.17.so
    7f4181c40000 rw-p 00000000  00:00      0      4    4    4          0         4    0      0
    7ffd33f16000 rw-p 00000000  00:00      0    132   16   16         16        16   20      0 [stack]
    7ffd33fce000 r-xp 00000000  00:00      0      8    4    0          4         0    0      0 [vdso]
ffffffffff600000 r-xp 00000000  00:00      0      4    0    0          0         0    0      0 [vsyscall]
                                             ====== ==== ==== ========== ========= ==== ======
                                             188208 8296 7934       8140      7516 4788     0 KB

Here the third-last column (the Swap column) shows the swap usage for this process
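If you prefer not to pick the column out by eye, the Swap column can be summed programmatically. A minimal sketch, assuming a procps `pmap` that supports `-X` (the column index is located from the header line so it is not hard-coded):

```shell
# Sum the Swap column of `pmap -X` for a process (here: the current shell).
# Line 1 is "pid: command", line 2 is the header; we find the Swap column
# in the header and then add up every numeric value in that column.
pmap -X $$ | awk '
  NR == 2 { for (i = 1; i <= NF; i++) if ($i == "Swap") c = i; next }
  c && $c ~ /^[0-9]+$/ { s += $c }
  END { print s + 0 }'
```

The total printed should match the Swap value on the summary line at the bottom of the `pmap -X` output.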

Method 4

There is yet another way to find the swap memory usage, and it is arguably the most reliable of the methods above, because it reports the swap usage of every mapping used by a process in full detail.

For example to check the detail of "amsHelper" process
# cat /proc/$(pgrep amsHelper)/smaps

It will have a block like below for every mapping used by the process
7ffd33f16000-7ffd33f37000 rw-p 00000000 00:00 0                          [stack]
Size:                132 kB
Rss:                  16 kB
Pss:                  16 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:        16 kB
Referenced:           16 kB
Anonymous:            16 kB
AnonHugePages:         0 kB
Swap:                 20 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Locked:                0 kB
VmFlags: rd wr mr mw me gd ac


As you can see this gives you a long list, so we need to collect the Swap fields and sum them up to get the overall usage:
# cat /proc/$(pgrep amsHelper)/smaps | grep -i swap |awk '{s+=$2} END {print s}'
4788
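The same one-liner can be wrapped into a small helper so it works for any PID; a sketch (the `swap_usage` function name is my own, not part of any tool):

```shell
# Sum all "Swap:" fields in /proc/<pid>/smaps, printing the total in kB.
swap_usage() {
  awk '/^Swap:/ { s += $2 } END { print s + 0 }' "/proc/$1/smaps"
}

# Example: swap usage of the current shell, in kB
swap_usage $$
```

For another process you would pass its PID, e.g. `swap_usage $(pgrep amsHelper)`.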

I hope the article was useful.

How to configure offline yum repository using DVD and HTTP or Apache server over the network in RHEL / CentOS 7

In this article I will show you the steps to configure an offline yum repository in your network using http server

IMPORTANT NOTE: I have already written an article on using yum with Apache, but that was tested and validated with RHEL 5 and 6, and with RHEL 7 some httpd configuration options have changed. If you are using an older version of RHEL please follow the link below

To make this work we need a basic http server, so install all the http-related packages.
Before creating the http-based yum repository, create an offline repo using the RHEL/CentOS dvd.

Next install httpd rpm and its dependency using yum
# yum install httpd -y

Next it is time to configure our http server
Edit your main configuration file i.e. "/etc/httpd/conf/httpd.conf" and add below content at the end of the file





NOTE: Here I will use /var/www/html as my source path where the RHEL/CentOS dvd will be mounted. You can change the path accordingly as per your requirement
Alias /web "/var/www/html/"
<VirtualHost 192.168.1.6:80>
        ServerAdmin root@server.golinuxhub.com
        ServerName golinuxhub-server
        DocumentRoot /var/www/html
        ErrorLog logs/error_log
  <Directory "/var/www/html/">
     Options Indexes MultiViews
     AllowOverride All
     Require all granted
  </Directory>
</VirtualHost>

If you have firewalld running on your system then you can run below command to add firewalld rules for httpd
# firewall-cmd --permanent --add-service=http
success

# firewall-cmd --reload
success

Next restart your httpd service
# systemctl restart httpd

Make sure the RHEL/CentOS dvd is mounted on your source directory i.e. /var/www/html
# mount /tmp/rhel-server-7.4-x86_64-dvd.iso /var/www/html/
mount: /dev/loop0 is write-protected, mounting read-only

# ls /var/www/html/
addons  EFI  EULA  extra_files.json  GPL  images  isolinux  LiveOS  media.repo  Packages  repodata  RPM-GPG-KEY-redhat-beta  RPM-GPG-KEY-redhat-release  TRANS.TBL

Next try to access your http server using http://192.168.1.6/web/ on your browser
NOTE: Replace the host IP (192.168.1.6) with your node IP


If all is good proceed to next step or else if you face any issue follow "/etc/httpd/logs/error_log" for more information on the issue



It is time to re-configure our repo file which for me is "/etc/yum.repos.d/rhel.repo" with below content
[RHEL_Repo]
name=Red Hat Enterprise Linux 7.4
baseurl=http://192.168.1.6/web/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
NOTE: The gpg keys are installed by default by a Red Hat release package for your type of installation, hence you can use the above path; just make sure it exists

Here as you see my baseurl reflects my http server which contains the rpm from the rhel dvd.

Next save and exit the file

Next let's clean the cache
# yum clean all
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Cleaning repos: RHEL_Repo
Cleaning up everything
Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos

# rm -rf /var/cache/yum

Now let's see if our new repo is working as expected
# yum repolist all
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
RHEL_Repo                                                                                           | 4.1 kB  00:00:00
(1/2): RHEL_Repo/group_gz                                                                           | 137 kB  00:00:00
(2/2): RHEL_Repo/primary_db                                                                         | 4.0 MB  00:00:00
repo id                                      repo name                                                       status
RHEL_Repo                                    Red Hat Enterprise Linux 7.4                                    enabled: 4,986
repolist: 4,986

So as you see my repo "RHEL_Repo" is enabled and has 4,986 rpms.
Now you can distribute the same repo file to the other machines in your network and use this offline repository.
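Rolling the repo file out to clients can be scripted; a minimal sketch (the `REPO_FILE` variable is my addition so the snippet can be dry-run against a scratch path -- on a real client the target would be /etc/yum.repos.d/rhel.repo):

```shell
# Write the repo file on a client, pointing at the HTTP server from this article.
REPO_FILE=${REPO_FILE:-/tmp/rhel.repo}   # real target: /etc/yum.repos.d/rhel.repo

cat > "$REPO_FILE" <<'EOF'
[RHEL_Repo]
name=Red Hat Enterprise Linux 7.4
baseurl=http://192.168.1.6/web/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
EOF

grep '^baseurl=' "$REPO_FILE"   # → baseurl=http://192.168.1.6/web/
```

After placing the file at the real path, run `yum clean all` followed by `yum repolist` on the client to confirm the repo is picked up.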


How to use http or apache service running on a different port other than 80 with my yum repository?

By default we use port 80 for the http server, but if for some reason you wish to use a different port number the overall procedure stays the same.



You will need to make a couple of changes in your httpd configuration file as below

Change
Listen 80
to
Listen 8080

and virtual hosting configuration as below
Alias /web "/var/www/html/"
<VirtualHost 192.168.1.6:8080>
        ServerAdmin root@server.golinuxhub.com
        ServerName golinuxhub-server
        DocumentRoot /var/www/html
        ErrorLog logs/error_log
<Directory "/var/www/html/">
   Options Indexes MultiViews
   AllowOverride All
   Require all granted
</Directory>
</VirtualHost>

Add necessary firewalld rules for new port
# firewall-cmd --permanent --add-port=8080/tcp
success

# firewall-cmd --reload
success

Restart your httpd service
# systemctl restart httpd

Validate your httpd server in the browser


Next also change the yum repo file as below
[RHEL_Repo]
name=Red Hat Enterprise Linux 7.4
baseurl=http://192.168.1.6:8080/web/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

I hope the article was useful.

How to lock or unlock a root and normal user account using pam_tally2 and pam_faillock after certain number of failed login attempts in Linux

To secure your server against unwanted third-party login attempts, it is always a good idea to implement login hardening so that a user is denied login after a certain number of failed attempts.

In my last article I shared the steps to check whether your account is locked

Lock account using pam_tally2

pam_tally2 is a login counter (tallying) module. This module maintains a count of attempted accesses, can reset count on success, can deny access if too many attempts fail.

Below two configuration files must be modified to perform all the account lock or unlock related changes
/etc/pam.d/system-auth
/etc/pam.d/password-auth

By default this login-attempt information is stored under "/var/log/tallylog", but this can be changed as per your requirement using "file=/path/to/counter" in the pam.d files.

Some more variables which can be used for additional restrictions/modifications.
onerr=[fail|succeed]
    If something weird happens (like unable to open the file), return with PAM_SUCCESS if onerr=succeed is given, else with the corresponding PAM error code.

deny=n
    Deny access if tally for this user exceeds n.

lock_time=n
    Always deny for n seconds after failed attempt.

unlock_time=n
    Allow access after n seconds after failed attempt. If this option is used the user will be locked out for the specified amount of time after he exceeded his maximum allowed attempts. Otherwise the account is locked until the lock is removed by a manual intervention of the system administrator.

file=/path/to/counter
    File where to keep counts. Default is /var/log/tallylog.

Syntax to be used
pam_tally2.so [file=/path/to/counter] [onerr=[fail|succeed]] [even_deny_root] [deny=n] [lock_time=n] [unlock_time=n] [root_unlock_time=n] [audit] [silent]

Lock non-root user (normal user) for failed login attempts

Below is the minimal configuration. Here we are locking a normal user account after 3 attempts with an incorrect password.

Add the below two lines in both of these configuration files
auth        required      pam_tally2.so deny=3 onerr=fail
account     required      pam_tally2.so

My sample system-auth and password-auth file
auth        required      pam_env.so
auth        required      pam_tally2.so deny=3 onerr=fail
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 1000 quiet_success
auth        required      pam_deny.so

account     required      pam_tally2.so
account     required      pam_unix.so
account     sufficient    pam_localuser.so
account     sufficient    pam_succeed_if.so uid < 500 quiet
account     required      pam_permit.so

password    requisite     pam_cracklib.so try_first_pass retry=3 type=
password    sufficient    pam_unix.so sha512 shadow nullok try_first_pass use_authtok
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
session     optional      pam_oddjob_mkhomedir.so
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so





Lock "root" user for failed login attempts

Here we have appended "even_deny_root" as shown below to make sure the "root" user is also blocked if an incorrect password is used 3 times
auth        required      pam_tally2.so deny=3 even_deny_root onerr=fail
account     required      pam_tally2.so

My sample system-auth and password-auth file
auth        required      pam_env.so
auth        required      pam_tally2.so deny=3 even_deny_root onerr=fail
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 1000 quiet_success
auth        required      pam_deny.so

account     required      pam_tally2.so
account     required      pam_unix.so
account     sufficient    pam_localuser.so
account     sufficient    pam_succeed_if.so uid < 500 quiet
account     required      pam_permit.so

password    requisite     pam_cracklib.so try_first_pass retry=3 type=
password    sufficient    pam_unix.so sha512 shadow nullok try_first_pass use_authtok
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
session     optional      pam_oddjob_mkhomedir.so
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so


NOTE: The above changes do not need a reboot or any service restart to activate the changes

Unlock non-root (normal) user account using pam_tally2

Once above changes are successfully done, attempt to login to your server using incorrect password for more than 3 attempts using a normal user.

For example I did some failed login attempts for user "deepak" from "10.43.138.2"

To check the existing status
# pam_tally2
Login           Failures Latest failure     From
deepak              3    08/03/18 11:20:18  10.43.138.3

After 3 failed login attempts now I get below message when attempting to do ssh
# ssh deepak@10.43.138.4
Password:
Account locked due to 3 failed logins
Password:

So as expected our account is locked.

To unlock the user use the below command
# pam_tally2 --user deepak --reset
Login           Failures Latest failure     From
deepak              3    07/28/18 22:35:51  10.43.138.2

Next check the status again
# pam_tally2 --user deepak
Login           Failures Latest failure     From
deepak              0

So the failed login attempts have been cleared.
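When several users are locked at once, the pam_tally2 listing can be parsed to generate one reset command per user. A sketch run on sample output here (in practice you would pipe the real `pam_tally2` output; the user names are sample data):

```shell
# Turn pam_tally2-style output into one reset command per listed user.
# Sample data stands in for the real `pam_tally2` call.
printf '%s\n' \
  'Login           Failures Latest failure     From' \
  'deepak              3    08/03/18 11:20:18  10.43.138.3' \
  'rahul               5    08/03/18 11:25:02  10.43.138.3' |
awk 'NR > 1 { print "pam_tally2 --user " $1 " --reset" }'
# → pam_tally2 --user deepak --reset
# → pam_tally2 --user rahul --reset
```

Review the generated commands before piping them into `sh`.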



Unlock "root" user account using pam_tally2

To check the status of all the users
# pam_tally2
Login           Failures Latest failure     From
root                5    08/03/18 11:20:33  10.43.138.3

To unlock the "root" user use the below command
# pam_tally2 --user root --reset
Login           Failures Latest failure     From
root                7    08/03/18 11:52:55  10.43.138.3

IMPORTANT NOTE: I would recommend locking the "root" user only with "unlock_time" set, or else you may end up in a situation where you have no active session and cannot unlock the "root" user.

Lock account using pam_faillock for failed login attempts

pam_faillock is a module counting authentication failures during a specified interval

In Red Hat Enterprise Linux 7, the pam_faillock PAM module allows system administrators to lock out user accounts after a specified number of failed attempts. Limiting user login attempts serves mainly as a security measure that aims to prevent possible brute force attacks targeted to obtain a user's account password.

With the pam_faillock module, failed login attempts are stored in a separate file for each user in the /var/run/faillock directory.

Below two configuration files must be modified to achieve this
/etc/pam.d/system-auth
/etc/pam.d/password-auth

Below are some of the configurable options
{preauth|authfail|authsucc}
    This argument must be set accordingly to the position of this module instance in the PAM stack.
 
    The preauth argument must be used when the module is called before the modules which ask for the user credentials such as the password. The module just examines whether the user should be blocked from accessing the service in case there was an anomalous number of failed consecutive authentication attempts recently. This call is optional if authsucc is used.

    The authfail argument must be used when the module is called after the modules which determine the authentication outcome, failed. Unless the user is already blocked due to previous authentication failures, the module will record the failure into the appropriate user tally file.

    The authsucc argument must be used when the module is called after the modules which determine the authentication outcome, succeeded. Unless the user is already blocked due to previous authentication failures, the module will then clear the record of the failures in the respective user tally file. Otherwise it will return authentication error. If this call is not done, the pam_faillock will not distinguish between consecutive and non-consecutive failed authentication attempts. The preauth call must be used in such case. Due to complications in the way the PAM stack can be configured it is also possible to call pam_faillock as an account module. In such configuration the module must be also called in the preauth stage.

fail_interval=n
    The length of the interval during which the consecutive authentication failures must happen for the user account lock out is n seconds. The default is 900 (15 minutes).

unlock_time=n
    The access will be reenabled after n seconds after the lock out. The default is 600 (10 minutes).

    If the n is set to never or 0 the access will not be reenabled at all until administrator explicitly reenables it with the faillock command. Note though that the default directory that pam_faillock uses is usually cleared on system boot so the access will be also reenabled after system reboot. If that is undesirable a different tally directory must be set with the dir option.

    Also note that it is usually undesirable to permanently lock out the users as they can become easily a target of denial of service attack unless the usernames are random and kept secret to potential attackers.

even_deny_root
    Root account can become locked as well as regular accounts.

root_unlock_time=n
    This option implies even_deny_root option. Allow access after n seconds to root account after the account is locked. In case the option is not specified the value is the same as of the unlock_time option.

audit
    Will log the user name into the system log if the user is not found.

silent
    Don't print informative messages. This option is implicit in the authfail and authsucc functions.

Syntax to be used
auth ... pam_faillock.so {preauth|authfail|authsucc} [dir=/path/to/tally-directory] [even_deny_root] [deny=n] [fail_interval=n] [unlock_time=n] [root_unlock_time=n] [admin_group=name] [audit] [silent] [no_log_info]

account ... pam_faillock.so [dir=/path/to/tally-directory] [no_log_info]



Lock non-root user using pam_faillock for 3 failed login attempts

Add the below lines to lock a non-root user for 10 minutes after 3 failed login attempts
auth        required      pam_faillock.so preauth silent deny=3 fail_interval=900 unlock_time=600
auth        required      pam_faillock.so authfail deny=3 fail_interval=900 unlock_time=600

account     required      pam_faillock.so

My sample system-auth and password-auth file
auth        required      pam_env.so
auth        required      pam_faillock.so preauth silent audit deny=3 unlock_time=600
auth        sufficient    pam_unix.so nullok try_first_pass
auth        required      pam_faillock.so authfail audit deny=3 fail_interval=900 unlock_time=600
auth        requisite     pam_succeed_if.so uid >= 1000 quiet_success
auth        required      pam_deny.so

account     required      pam_faillock.so
account     required      pam_unix.so
account     sufficient    pam_localuser.so
account     sufficient    pam_succeed_if.so uid < 500 quiet
account     required      pam_permit.so

password    requisite     pam_cracklib.so try_first_pass retry=3 type=
password    sufficient    pam_unix.so sha512 shadow nullok try_first_pass use_authtok
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
session     optional      pam_oddjob_mkhomedir.so
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so

Lock "root" user using pam_faillock for 3 failed login attempts

To apply account locking for the "root" user as well, add the even_deny_root option to the pam_faillock entries in both configuration files, in the below format
auth        required      pam_faillock.so preauth silent audit deny=3 even_deny_root unlock_time=600
auth        [default=die] pam_faillock.so authfail audit deny=3 even_deny_root unlock_time=600

account     required      pam_faillock.so

My sample system-auth and password-auth file
auth        required      pam_env.so
auth        required      pam_faillock.so preauth silent audit deny=3 even_deny_root unlock_time=600
auth        sufficient    pam_unix.so nullok try_first_pass
auth        required      pam_faillock.so authfail audit deny=3 fail_interval=900 unlock_time=600
auth        requisite     pam_succeed_if.so uid >= 1000 quiet_success
auth        required      pam_deny.so

account     required      pam_faillock.so
account     required      pam_unix.so
account     sufficient    pam_localuser.so
account     sufficient    pam_succeed_if.so uid < 500 quiet
account     required      pam_permit.so

password    requisite     pam_cracklib.so try_first_pass retry=3 type=
password    sufficient    pam_unix.so sha512 shadow nullok try_first_pass use_authtok
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
session     optional      pam_oddjob_mkhomedir.so
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so

IMPORTANT NOTE: If pam_faillock.so is not working as expected, the following changes may have to be made to SSHD's configuration:

# vi /etc/ssh/sshd_config
ChallengeResponseAuthentication yes
PasswordAuthentication no     -> to make sure the password input always goes through this PAM conversation

Restart the sshd service for the changes to take effect
# systemctl restart sshd



Unlock normal (non-root) user account using faillock

By default, if there are no failed login attempts, the failure list shown by "faillock" will be empty for each user as below
# faillock
deepak:
When                Type  Source                                           Valid
root:
When                Type  Source                                           Valid

Once I intentionally give a wrong password while attempting ssh, the faillock records get appended automatically
# faillock
deepak:
When                Type  Source                                           Valid
2018-08-02 11:49:31 RHOST 10.43.138.2                                          V
2018-08-02 11:49:39 RHOST 10.43.138.2                                          V
2018-08-02 11:49:43 RHOST 10.43.138.2                                          V
root:
When                Type  Source                                           Valid

Once the number of attempts reaches the threshold, the user account will be locked
Below messages can be seen in /var/log/secure after more than 3 failed login attempts
Aug  2 11:49:43 openstack sshd[29038]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.43.138.2  user=deepak
Aug  2 11:49:43 openstack sshd[29038]: pam_faillock(sshd:auth): Consecutive login failures for user deepak account temporarily locked

To unlock the user "deepak"
# faillock --user deepak --reset

Next if you check the current status, everything related to user "deepak" should be clean
# faillock
deepak:
When                Type  Source                                           Valid
root:
When                Type  Source                                           Valid
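On a box with many users, the faillock listing can also be reduced to a per-user count of valid failure records. A sketch over sample output (in practice you would pipe the real `faillock` output; the user names and timestamps here are sample data):

```shell
# Count valid (V) failure records per user from faillock-style output.
# Sample data stands in for the real `faillock` call.
printf '%s\n' \
  'deepak:' \
  'When                Type  Source             Valid' \
  '2018-08-02 11:49:31 RHOST 10.43.138.2            V' \
  '2018-08-02 11:49:39 RHOST 10.43.138.2            V' \
  'root:' \
  'When                Type  Source             Valid' |
awk '
  /^[^ ].*:$/ { user = $1; sub(/:$/, "", user); n[user] = 0; next }
  $NF == "V"  { n[user]++ }
  END { for (u in n) print u, n[u] }'
# prints "deepak 2" and "root 0" (order not guaranteed)
```

Any user whose count has reached your deny threshold is a candidate for `faillock --user <name> --reset`.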

Unlock root user account using faillock

To demonstrate, I attempted some failed logins using the "root" user

So after a couple of failed logins, faillock now shows all the attempts
# faillock
deepak:
When                Type  Source                                           Valid
root:
When                Type  Source                                           Valid
2018-08-03 11:54:01 RHOST 10.43.138.3                                          V
2018-08-03 11:54:07 RHOST 10.43.138.3                                          V
2018-08-03 11:54:11 RHOST 10.43.138.3                                          V
2018-08-03 11:54:15 RHOST 10.43.138.3                                          V
2018-08-03 11:54:19 RHOST 10.43.138.3                                          V
2018-08-03 11:54:21 RHOST 10.43.138.3                                          V

From the /var/log/secure we see that user "root" is locked
Aug  2 22:36:44 openstack sshd[8486]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.43.138.3  user=root
Aug  2 22:36:44 openstack sshd[8486]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked

To unlock the "root" user
# faillock --user root --reset

IMPORTANT NOTE: I would recommend locking the "root" user only with "unlock_time" set, or else you may end up in a situation where you have no active session and cannot unlock the "root" user.

I hope the article was useful.
