
How to Add Host and Manage Services in Icinga2


In my previous article, I explained how to install and configure Icinga2 with the Icinga Web 2 interface. Now it's time to introduce some hosts to our monitoring system. Unlike Nagios, Icinga2 can add hosts to the monitoring system largely automatically, and the configuration is quite simple compared to other monitoring systems.

As stated before, communication between the monitoring server and the client nodes is more secure than in comparable systems. All communication is protected by TLS, using certificates that the Icinga2 server generates during initialization.

Let's start with the procedure for adding a host to our monitoring system. You can take a look at the workflow below.

[Screenshot: host addition workflow]

Configuring Icinga2 Master Node

We've already set up our Icinga2 master node; now we need to perform the following initialization so that our host nodes can connect to it securely. Run "icinga2 node wizard" to start the setup wizard.

root@ubuntu:~# icinga2 node wizard
Welcome to the Icinga 2 Setup Wizard!

We'll guide you through all required configuration details.

The setup wizard asks whether this is a satellite or a master setup. Since we are running it on the master server, we type 'n'. This installs the master setup and starts generating the certificates for secure TLS communication.

Please specify if this is a satellite setup ('n' installs a master setup) [Y/n]: n
Starting the Master setup routine...
Please specifiy the common name (CN) [ubuntu.icinga-master.com]:
Checking for existing certificates for common name 'ubuntu.icinga-master.com'...
Certificates not yet generated. Running 'api setup' now.
information/cli: Generating new CA.
information/base: Writing private key to '/var/lib/icinga2/ca/ca.key'.
information/base: Writing X509 certificate to '/var/lib/icinga2/ca/ca.crt'.
information/cli: Generating new CSR in '/etc/icinga2/pki/ubuntu.icinga-master.com.csr'.
information/base: Writing private key to '/etc/icinga2/pki/ubuntu.icinga-master.com.key'.
information/base: Writing certificate signing request to '/etc/icinga2/pki/ubuntu.icinga-master.com.csr'.
information/cli: Signing CSR with CA and writing certificate to '/etc/icinga2/pki/ubuntu.icinga-master.com.crt'.
information/cli: Copying CA certificate to '/etc/icinga2/pki/ca.crt'.
Generating master configuration for Icinga 2.
information/cli: Adding new ApiUser 'root' in '/etc/icinga2/conf.d/api-users.conf'.
information/cli: Enabling the 'api' feature.
Enabling feature api. Make sure to restart Icinga 2 for these changes to take effect.
information/cli: Dumping config items to file '/etc/icinga2/zones.conf'.
information/cli: Created backup file '/etc/icinga2/zones.conf.orig'.

We don't need to change the ports, so leave it as it is.

Please specify the API bind host/port (optional):
Bind Host []:
Bind Port []:
information/cli: Created backup file '/etc/icinga2/features-available/api.conf.orig'.
information/cli: Updating constants.conf.
information/cli: Created backup file '/etc/icinga2/constants.conf.orig'.
information/cli: Updating constants file '/etc/icinga2/constants.conf'.
information/cli: Updating constants file '/etc/icinga2/constants.conf'.
information/cli: Updating constants file '/etc/icinga2/constants.conf'.
Done.

Now restart your Icinga 2 daemon to finish the installation!

After running this setup wizard, you need to restart the Icinga2 service.

root@ubuntu:~# systemctl restart icinga2

Installing and Configuring Icinga2-Client

We need to install Icinga2 on the host node as the initial step. For that, we need to add the Icinga2 repository to the host node and update the APT repository packages.

root@ubuntu:~# apt install software-properties-common
root@ubuntu:~# add-apt-repository ppa:formorer/icinga
This PPA provides Icinga 1, Icinga 2 and Icinga web Packages for Ubuntu. They are directly derived from the Debian Packages that I maintain within Debian.
More info: https://launchpad.net/~formorer/+archive/ubuntu/icinga
Press [ENTER] to continue or ctrl-c to cancel adding it

gpg: keyring `/tmp/tmpcrlq876s/secring.gpg' created
gpg: keyring `/tmp/tmpcrlq876s/pubring.gpg' created
gpg: requesting key 36862847 from hkp server keyserver.ubuntu.com
gpg: /tmp/tmpcrlq876s/trustdb.gpg: trustdb created
gpg: key 36862847: public key "Launchpad PPA for Alexander Wirt" imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
OK
root@ubuntu:~# apt-get update
root@ubuntu:~# apt-get install icinga2
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
icinga2-bin icinga2-common icinga2-doc libboost-program-options1.58.0 libboost-regex1.58.0 libboost-system1.58.0 libboost-thread1.58.0
libicinga2 libyajl2 monitoring-plugins-basic monitoring-plugins-common
Suggested packages:
icinga2-studio vim-icinga2 icinga | icinga | nagios3
The following NEW packages will be installed:

Creating config file /etc/nagios-plugins/config/dhcp.cfg with new version

Creating config file /etc/nagios-plugins/config/disk.cfg with new version

Creating config file /etc/nagios-plugins/config/dummy.cfg with new version

Creating config file /etc/nagios-plugins/config/ftp.cfg with new version

Creating config file /etc/nagios-plugins/config/http.cfg with new version

Creating config file /etc/nagios-plugins/config/load.cfg with new version

Creating config file /etc/nagios-plugins/config/mail.cfg with new version

Creating config file /etc/nagios-plugins/config/news.cfg with new version

Creating config file /etc/nagios-plugins/config/ntp.cfg with new version

Creating config file /etc/nagios-plugins/config/ping.cfg with new version

Creating config file /etc/nagios-plugins/config/procs.cfg with new version

Creating config file /etc/nagios-plugins/config/real.cfg with new version

Creating config file /etc/nagios-plugins/config/ssh.cfg with new version

Creating config file /etc/nagios-plugins/config/tcp_udp.cfg with new version

Creating config file /etc/nagios-plugins/config/telnet.cfg with new version

Creating config file /etc/nagios-plugins/config/users.cfg with new version
Setcap for check_icmp and check_dhcp worked!
Processing triggers for libc-bin (2.23-0ubuntu3) ...
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for systemd (229-4ubuntu4) ...

Now we need to run the setup wizard on our host node and choose the satellite setup.

root@ubuntu:~# icinga2 node wizard
Welcome to the Icinga 2 Setup Wizard!

We'll guide you through all required configuration details.

Since this is our satellite setup, we need to type 'y' to proceed.

Please specify if this is a satellite setup ('n' installs a master setup) [Y/n]: yes

This proceeds with the satellite node setup and installs the required certificates for TLS communication.

Starting the Node setup routine...
Please specifiy the common name (CN) [host1.icinga2server.com]:
Please specify the master endpoint(s) this node should connect to:
Master Common Name (CN from your master setup): ubuntu.icinga-master.com
Do you want to establish a connection to the master from this node? [Y/n]: y
Please fill out the master connection information:
Master endpoint host (Your master's IP address or FQDN): 139.162.55.62
Master endpoint port [5665]:
Add more master endpoints? [y/N]:
Please specify the master connection for CSR auto-signing (defaults to master endpoint host):
Host [139.162.55.62]:
Port [5665]:
information/base: Writing private key to '/etc/icinga2/pki/host1.icinga2server.com.key'.
information/base: Writing X509 certificate to '/etc/icinga2/pki/host1.icinga2server.com.crt'.
information/cli: Fetching public certificate from master (139.162.55.62, 5665):

Certificate information:

Subject: CN = ubuntu.icinga-master.com
Issuer: CN = Icinga CA
Valid From: Jun 26 06:49:50 2016 GMT
Valid Until: Jun 23 06:49:50 2031 GMT
Fingerprint: 13 8A 73 C5 36 E7 1D DA FE 9D E1 E6 1E 32 ED E2 3C 6B 48 E8

Is this information correct? [y/N]: yes

We need to provide the host information and the master server information to complete the node setup. After providing the details, the wizard moves on to CSR auto-signing. Icinga2 then saves some configuration on the host node and sets up a secure connection with it.

After saving these certificates, they need to be validated by the master to prove that you're actually in command of both servers and approve of this secure communication. For that, I ran "icinga2 pki ticket --cn 'host1.icinga2server.com'" on my master server and provided the generated ticket in the node setup to proceed further.

Please specify the request ticket generated on your Icinga 2 master.
(Hint: # icinga2 pki ticket --cn 'host1.icinga2server.com'): 836289c1bcd427879b06703dfb35aa122bf89dc2
information/cli: Requesting certificate with ticket '836289c1bcd427879b06703dfb35aa122bf89dc2'.

warning/cli: Backup file '/etc/icinga2/pki/host1.icinga2server.com.crt.orig' already exists. Skipping backup.
information/cli: Writing signed certificate to file '/etc/icinga2/pki/host1.icinga2server.com.crt'.
information/cli: Writing CA certificate to file '/etc/icinga2/pki/ca.crt'.

After signing the certificates, the wizard asks for the API bind host/port. We can skip this section as before and proceed with the rest of the configuration.

Please specify the API bind host/port (optional):
Bind Host []:
Bind Port []:
Accept config from master? [y/N]: y
Accept commands from master? [y/N]: y
information/cli: Disabling the Notification feature.
Disabling feature notification. Make sure to restart Icinga 2 for these changes to take effect.
information/cli: Enabling the Api listener feature.
Enabling feature api. Make sure to restart Icinga 2 for these changes to take effect.

information/cli: Created backup file '/etc/icinga2/features-available/api.conf.orig'.
information/cli: Generating local zones.conf.
information/cli: Dumping config items to file '/etc/icinga2/zones.conf'.
information/cli: Created backup file '/etc/icinga2/zones.conf.orig'.
information/cli: Updating constants.conf.
information/cli: Created backup file '/etc/icinga2/constants.conf.orig'.
information/cli: Updating constants file '/etc/icinga2/constants.conf'.
information/cli: Updating constants file '/etc/icinga2/constants.conf'.
Done.

Now restart your Icinga 2 daemon to finish the installation!

Once the Node setup is complete, we need to restart the Icinga2 daemon on the Host side.

Updating the configurations from client to master

Now we can go back to our master server and confirm the host addition. We can run this command to list the host nodes and services added to the server.

root@ubuntu:~# icinga2 node list
Node 'host1.icinga2server.com' (last seen: Sun Jun 26 07:03:40 2016)
* Host 'host1.icinga2server.com'
* Service 'apt'
* Service 'disk'
* Service 'disk /'
* Service 'http'
* Service 'icinga'
* Service 'load'
* Service 'ping4'
* Service 'ping6'
* Service 'procs'
* Service 'ssh'
* Service 'swap'
* Service 'users'
root@ubuntu:~#

Now we need to update the Icinga2 master configuration to pick up these changes and add the host node to the monitoring checks.

root@ubuntu:~# icinga2 node update-config
root@ubuntu:~# systemctl restart icinga2

Finally, we restart the service to apply these changes and view our host node in the Icinga Web 2 interface. We can log in to the interface at http://139.162.55.62/icingaweb2/ with our admin credentials and confirm the host status.

[Screenshot: host list in Icinga Web 2]

[Screenshot: http service status in Icinga Web 2]

Managing Services in Icinga2

As you can see from the screenshot above, the http service is showing critical on my client server. I haven't installed Apache on the client, so I don't actually need the HTTP service to be monitored there. Let's see how to remove that service from the monitored services.

When a client server is added to the master, a folder named after the client's hostname is created inside the repository.d folder in the Icinga2 configuration path on the master, as below:

root@ubuntu:/etc/icinga2/repository.d/hosts# ls -l
total 8
drwxr-x--- 2 nagios nagios 4096 Jun 26 07:04 host1.icinga2server.com
-rw-r--r-- 1 root root 100 Jun 26 07:04 host1.icinga2server.com.conf
root@ubuntu:/etc/icinga2/repository.d/hosts#

We need to go inside the client folder "host1.icinga2server.com" and view the service files that were added for the host on initialization.

root@ubuntu:/etc/icinga2/repository.d/hosts/host1.icinga2server.com# ls -l
total 48
-rw-r--r-- 1 root root 152 Jun 26 07:04 apt.conf
-rw-r--r-- 1 root root 155 Jun 26 07:04 disk %2F.conf
-rw-r--r-- 1 root root 153 Jun 26 07:04 disk.conf
-rw-r--r-- 1 root root 153 Jun 26 07:04 http.conf
-rw-r--r-- 1 root root 155 Jun 26 07:04 icinga.conf
-rw-r--r-- 1 root root 153 Jun 26 07:04 load.conf
-rw-r--r-- 1 root root 154 Jun 26 07:04 ping4.conf
-rw-r--r-- 1 root root 154 Jun 26 07:04 ping6.conf
-rw-r--r-- 1 root root 154 Jun 26 07:04 procs.conf
-rw-r--r-- 1 root root 152 Jun 26 07:04 ssh.conf
-rw-r--r-- 1 root root 153 Jun 26 07:04 swap.conf
-rw-r--r-- 1 root root 154 Jun 26 07:04 users.conf

We can see all the service configuration files for that particular host inside this folder. Now we need to remove the service check files for the checks we want to disable.

For example, in our case we need to disable the http service, so I'm moving http.conf out of the way. You can either remove the file or just rename it.

root@ubuntu:/etc/icinga2/repository.d/hosts/host1.icinga2server.com# mv http.conf http.conf-disabled

After making any changes we need to reload the Icinga2 service on the server.

root@ubuntu:/etc/icinga2# service icinga2 reload

We can confirm from the web interface that the service has been removed.

[Screenshot: disabled service in Icinga Web 2]

However, this service check will be re-enabled the next time the node configuration is updated on the master server if the service is still listed for that client, as below:

root@ubuntu:~# icinga2 node list
Node 'host1.icinga2server.com' (last seen: Wed Jun 29 12:31:20 2016)
* Host 'host1.icinga2server.com'
* Service 'Icinga Web 2'
* Service 'apt'
* Service 'disk'
* Service 'disk /'
* Service 'http'
* Service 'icinga'
* Service 'load'
* Service 'ping4'
* Service 'ping6'
* Service 'procs'
* Service 'ssh'
* Service 'swap'
* Service 'users'

Therefore, we need to remove this from the node list. Let's see how we can do that.

1. Log in to the client server and go to the folder "/etc/icinga2/conf.d", where we can see the hosts.conf file.

root@host1:/etc/icinga2/conf.d# ls -l
total 48
-rw-r--r-- 1 root root 35 May 19 12:56 app.conf
-rw-r--r-- 1 root root 114 May 17 11:03 apt.conf
-rw-r--r-- 1 root root 1300 May 19 12:56 commands.conf
-rw-r--r-- 1 root root 542 May 19 12:56 downtimes.conf
-rw-r--r-- 1 root root 638 May 19 12:56 groups.conf
-rw-r--r-- 1 root root 1501 May 19 12:56 hosts.conf
-rw-r--r-- 1 root root 674 May 19 12:56 notifications.conf
-rw-r--r-- 1 root root 801 May 19 12:56 satellite.conf
-rw-r--r-- 1 root root 2131 Jun 29 06:37 services.conf
-rw-r--r-- 1 root root 1654 May 19 12:56 templates.conf
-rw-r--r-- 1 root root 906 May 19 12:56 timeperiods.conf
-rw-r--r-- 1 root root 308 May 19 12:56 users.conf

Now we need to edit the hosts.conf file and comment out the http service check there, as in the sketch below.

[Screenshot: commenting out the http check in hosts.conf]
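For reference, on a stock install the http check is generated from the vhost attribute in the default hosts.conf; a rough sketch of the commented-out section, assuming the sample configuration shipped with the package, looks like this:

object Host NodeName {
  import "generic-host"
  address = "127.0.0.1"

  /* Commenting out this block stops the http service check
     from being applied to this host. */
  //vars.http_vhosts["http"] = {
  //  http_uri = "/"
  //}
}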

Restart the Icinga2 service on the client server to apply these changes.

2. Move back to your Master server, reload the Icinga2 service and update the node configuration.

root@ubuntu:/etc/icinga2# service icinga2 reload

root@ubuntu:/etc/icinga2# icinga2 node update-config

[Screenshot: http service removed after node update-config]

Now we can confirm the removal of http service from Master configuration.

root@ubuntu:~# icinga2 node list
Node 'host1.icinga2server.com' (last seen: Wed Jun 29 12:46:51 2016)
* Host 'host1.icinga2server.com'
* Service 'Icinga Web 2'
* Service 'apt'
* Service 'disk'
* Service 'disk /'
* Service 'icinga'
* Service 'load'
* Service 'ping4'
* Service 'ping6'
* Service 'procs'
* Service 'ssh'
* Service 'swap'
* Service 'users'

Likewise, we can add or remove any service in Icinga2. I hope this article is informative and helpful, and I welcome your valuable suggestions and comments. Happy reading :)



How To Install Linux Mint 18 From a USB Flash Drive


Linux Mint is a popular Ubuntu-based Linux Distribution that aims for an easy desktop usage experience, from installation to day-to-day work. There are two Desktop Environment choices, MATE and Cinnamon. Linux Mint 18, code-named "Sarah", was released on June 30, 2016. This article will explain how to get Linux Mint 18 onto a USB Flash Drive (4GB Minimum), using either Linux, Windows, or Mac.

Download the ISO

Obtain the Linux Mint 18 ISO from the Official Download Page. The Cinnamon 64-bit edition is recommended, unless the computer's CPU is relatively old, made earlier than 2010.

Write The ISO to a Flash Drive: Windows Instructions

Download and run the Universal USB Installer, open source software for Windows that writes image files to USB drives.

The process is rather straightforward:

  1. Choose "Linux Mint" as the Linux distribution
  2. Pick the Linux Mint ISO that was downloaded
  3. Select the flash drive to write to (WARNING: all previous data on the drive will be lost)

After the imaging process is finished, you should now have a bootable flash drive from which you can install Linux Mint.

Write The ISO to a Flash Drive: Linux and OS X Instructions

The flash drive is easy to create on Linux and OS X, since the command-line "dd" tool comes preinstalled on both systems.
WARNING: If given the wrong device file in the "of" argument, this has the potential to unintentionally reformat your hard disk.

Run the dd command as root. The usage format:

dd if=/path/to/the/Linux/Mint/ISO of=/path/to/the/flash/drive

Example:

dd if=/home/linoxide/Downloads/linuxmint-18-cinnamon-64bit.iso of=/dev/sdb
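Before running dd, it is worth double-checking which device file belongs to the flash drive, and flushing write buffers afterwards; a hedged example, assuming the drive shows up as /dev/sdb:

lsblk    # list block devices; identify the flash drive by its size
dd if=linuxmint-18-cinnamon-64bit.iso of=/dev/sdb bs=4M
sync     # make sure all data is written before removing the drive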

Install From USB Flash Drive To a Computer

Make sure the computer is turned off, then plug in your flash drive. Turn on the computer and have it boot from the flash drive; you may need to reconfigure the BIOS or UEFI for this.

Mint Bootloader

When the Flash Drive is loaded, select "Start Linux Mint" from the menu that will appear.

Mint Live Desktop

The Linux Mint desktop will load.

Launch the install by double-clicking on "Install Linux Mint".

Mint Codecs

Choose whatever language you want the installation to be in.


Check this box if you want to be able to play MP3s and other such formats out of the box. The option exists so that users who wish to have no proprietary software on their machines can leave it unchecked.

Mint Install Type

"Erase disk and install Linux Mint" should be the choice for beginners who already have backed up important data from the computer. Dual-booting can also be an option if another Operating System is installed on the computer, but is not covered by this tutorial.

Mint TZ

Confirm installation type.


Set the timezone and location that Linux Mint should configure for.

Mint Keyboard

If you have custom keyboard layouts, this is where to set the configuration.

Mint Personalize

Add a personal touch to your machine by adding your personal info and picking out a name for your machine.

Mint Installing

Wait for the installation to finish.

Mint Rebooting

After installation has finished, signal the installer to "Restart Now".


Remove the Flash Drive, then press Enter to complete the installation.

Mint Login Screen

After rebooting, the Linux Mint login screen will appear.

Mint Enter Credentials

Input the password you set during installation...

Mint Welcome Screen

...and you should now be at your Linux Mint 18 Desktop, with the "Welcome Screen" being shown.

Note: If You Already Have Linux Mint And Wish To Upgrade to 18

The following information is taken from the Linux Mint Release Announcement Page:

  • If you are running the BETA, click the refresh button in your Update Manager and apply any outstanding level 1 updates. Note also that samba was removed in the stable release as it negatively impacted boot speed. To remove samba, open a terminal and type “apt purge samba”.
  • It will also be possible to upgrade from Linux Mint 17.3. Upgrade instructions will be published next month.

Conclusion

We have created a Linux Mint 18 installer USB flash drive and installed the operating system to a computer. We are now ready to do some computing on our freshly installed Linux Mint 18 system. Always remember to keep your system updated to the latest software versions to keep it secure. If you are having issues installing, referring to the Release Notes might help you solve them.


How To Configure Single-Node Ceph Cluster To Run Properly


Ceph is designed to be a fault-tolerant, scalable storage system. This means that in a production environment, it is expected that at a minimum, there will be three Ceph nodes in a cluster. If you can only afford a single node for now, or if you need only a single Ceph node for testing purposes, you will run into some problems. A single-node Ceph cluster will consider itself to be in a degraded state, since by default, it will be looking for another node to replicate data to, and you will not be able to use it. This How-To will show you how to reconfigure a single Ceph node so that it will be usable. This will work if your Ceph node has at least two OSDs available. We have added an introduction to Ceph in our previous article to get started.

The CRUSH Map

CRUSH is the algorithm that Ceph uses to determine how and where to place data to satisfy replication and resiliency rules. The CRUSH Map gives CRUSH a view of what the cluster physically looks like, and the replication rules for each node. We will obtain a copy of the CRUSH Map from the Ceph node, edit it to replicate data only within the node's OSDs, then re-insert it into the Ceph node, overwriting the existing CRUSH Map. This will allow the single-node Ceph cluster to operate in a clean state, ready and willing to serve requests.

Obtaining The CRUSH Map

Access your ceph admin node. This may be your Ceph storage node as well, if that is how it was installed. All of the following commands are performed from the Ceph admin node.

Extract the cluster CRUSH Map and save it as a file named "crush_map_compressed"

ceph osd getcrushmap -o crush_map_compressed

Edit The CRUSH MAP

This is a compressed binary file that Ceph interprets directly, so we need to decompress it into a text format that we can edit. The following command decompresses the CRUSH Map file we extracted and saves the contents to a file named "crush_map_decompressed":

crushtool -d crush_map_compressed -o crush_map_decompressed

Now open up the decompressed CRUSH file with your favorite text editor. Assuming that the Ceph node is named "Storage-01", and that it has 6 OSDs, the CRUSH Map should look similar to this:

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable straw_calc_version 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host Storage-01 {
id -2           # do not change unnecessarily
# weight 21.792
alg straw
hash 0  # rjenkins1
item osd.0 weight 3.632
item osd.1 weight 3.632
item osd.2 weight 3.632
item osd.3 weight 3.632
item osd.4 weight 3.632
item osd.5 weight 3.632
}
root default {
id -1           # do not change unnecessarily
# weight 21.792
alg straw
hash 0  # rjenkins1
item Storage-01 weight 21.792
}

# rules
rule replicated_ruleset {
ruleset 0
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type host
step emit
}

# end crush map

Pay attention to the bottom section that starts with "# rules" - that is the section that defines how replication is done across the cluster.
Take this line

step chooseleaf firstn 0 type host

and change the "host" to "osd". It should look like this:

step chooseleaf firstn 0 type osd

Changing this will cause the CRUSH algorithm to be satisfied with just replicating data onto an OSD that is not necessarily on a separate host. This will allow the cluster to enter a clean and active state when data has been replicated from one OSD to the other.

Save the change.

Insert The CRUSH Map

Now that we have a modified CRUSH Map, let's insert it back into the cluster to override the running CRUSH Map configuration.

Compress it again:

crushtool -c crush_map_decompressed -o new_crush_map_compressed

Then insert it using the ceph CLI tool:

ceph osd setcrushmap -i new_crush_map_compressed

If you check the cluster status immediately with "ceph -s", you might catch the node replicating data into its other OSD, but it will eventually look like this:

ceph@ceph-admin:~/os-cluster$ ceph -s
cluster 15ac0bfc-9c48-4992-a2f6-b710d8f03ff4
health HEALTH_OK
monmap e1: 1 mons at {Storage-01=192.168.0.30:6789/0}
election epoch 9, quorum 0 Storage-01
osdmap e105: 6 osds: 6 up, 6 in
flags sortbitwise
pgmap v426885: 624 pgs, 11 pools, 77116 MB data, 12211 objects
150 GB used, 22164 GB / 22315 GB avail
624 active+clean
client io 0 B/s rd, 889 B/s wr, 0 op/s rd, 0 op/s wr

It is now showing an active+clean state.
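As an aside, if you are building a fresh single-node cluster from scratch, the same behavior can be requested up front instead of editing the CRUSH map after the fact; a sketch of the relevant ceph.conf option, assuming it is added to the [global] section before the OSDs are created:

[global]
# build CRUSH rules that replicate across OSDs rather than across hosts
osd crush chooseleaf type = 0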

Conclusion

Although it is designed for a highly available, multi-node setup, Ceph can be reconfigured to run as a single-node cluster. Of course, the user should be aware of the risks of data loss when running Ceph in that configuration, but it allows test setups to be built and low-level SLAs to be fulfilled. Red Hat has recently announced Ceph Storage 2, with enhanced object storage capabilities and improved ease of use.


How to Install Go on Ubuntu Linux and CentOS


Go is a general-purpose systems programming language, which means you can build a wide variety of applications with it. It is an open source language developed by Google, and it is cross-platform, supporting all major operating systems.

Go source code is compiled to a binary executable or library, which results in very high performance when running Go applications, and compilation itself is really fast. In a nutshell, Go is an elegant language with a clean, concise specification that is readable and comprehensible. One of the major strengths of Go is its concurrency support, which means multiple parts of a Go application can run at the same time.

In this article, I'll explain how to install the Go language on the latest Ubuntu and CentOS distributions.

Install Go language on Ubuntu (16.04)

The Go language and its toolchain are available in the base repositories of all major distributions. We can install Go on Ubuntu by just running this command:

root@ubuntu:~# apt-get install golang

root@ubuntu:~# go version
go version go1.6.1 linux/amd64

Now we need a workspace directory for our Go code, where the Go tool can build and install binaries. I created a directory for Go code in the home folder, matching the GOPATH set below.

root@ubuntu:~# mkdir $HOME/go

Create a file "/etc/profile.d/goenv.sh" to set up the Go environment variables server-wide, as below:

root@ubuntu:~# cat /etc/profile.d/goenv.sh
export GOROOT=/usr/lib/go
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

root@ubuntu:~# source /etc/profile.d/goenv.sh

Install Go language on CentOS 7

As explained before, it is quite easy to install the Go language on Red Hat-based distributions too; it is available in their base repositories. We can install it by running the command below:

[root@localhost ~]# yum install golang

This will install all the required packages for the language.

[Screenshot: yum installing golang]

You can confirm the installed Go version.

[root@localhost ~]# go version
go version go1.4.2 linux/amd64

We can manage Go source code using the "go" tool, which provides many subcommands. Here is the list of them.

[Screenshot: go tool subcommands]

We can get more information about each subcommand by executing "go help <command>", for example "go help build" or "go help install".

You can create a work folder in this installation too, which will help you build and install binaries. Furthermore, create the environment variables server-wide as before.

[root@Centos7 ~]# mkdir ~/go

[root@Centos7 ~]# cat /etc/profile.d/goenv.sh
export GOROOT=/usr/lib/go
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

[root@Centos7 ~]# source /etc/profile.d/goenv.sh

Install the Latest Version 1.6.2 from the Official Tarball

If you look at the above installations, you can see that the Go versions installed differ between the two distributions. The latest version is not always available in the base repositories, so whenever we need the newest release, we can download it directly from the official site and install it. Let's see how to do that.

Depending on the server architecture, download the required tarball and extract it to install.

[root@server1 src]# wget https://storage.googleapis.com/golang/go1.6.2.linux-amd64.tar.gz

2016-07-01 07:50:26 (93.6 MB/s) - ‘go1.6.2.linux-amd64.tar.gz’ saved [84840658/84840658]

[root@server1 src]# tar -xzvf go1.6.2.linux-amd64.tar.gz -C /usr/local/

I've downloaded the package for a 64-bit architecture. Create a work folder and set the environment variables server-wide as before.

[root@server1 ~]# mkdir ~/go

[root@server1 ~]# cat /etc/profile.d/goenv.sh
export GOROOT=/usr/local/go
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

[root@server1 ~]# source /etc/profile.d/goenv.sh

[root@server1 bin]# go version
go version go1.6.2 linux/amd64

The only difference in the environment variables is that GOROOT now points to /usr/local/go, since in this case the Go libraries reside under /usr/local.

A Simple program in Go

Now we can test our installation by creating a test program. Our first sample program will print the "hello world" message. Create a file named "helloworld.go".

[Screenshot: helloworld.go source]
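The screenshot contains the program source; a minimal version matching the output shown below would be:

package main

import "fmt"

// main prints a greeting to standard output
func main() {
	fmt.Println("hello world")
}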

Now we need to run this program using go command.

[root@Centos7 ~]# go run helloworld.go
hello world

At times, we'll need to build our programs into binaries. We can use the build command for that.

[root@Centos7 ~]# go build helloworld.go
[root@Centos7 ~]# ls
helloworld helloworld.go

We can then execute the built binary directly like this.

[root@Centos7 ~]# ./helloworld
hello world

If this works, it means you've set up Go successfully :)

You can find more example programs to start learning with. I hope you enjoyed reading this article, and I welcome your valuable suggestions and comments. Have a good day!


How to Install KDE Plasma 5.7 in Linux Distros


The KDE project team has proudly announced its flagship desktop environment, KDE Plasma 5.7, with greatly improved support for the Wayland windowing system, along with numerous feature improvements including improved workflows, better kiosk support, a new system tray and task manager, and more. The Jump List Actions feature, introduced in the previous release, has now been extended to be accessible from KRunner. For those who don't know, Jump List Actions allow users to trigger specific tasks within an application. The KDE volume control applet now allows you to control volume on a per-application basis, and you can even raise volume levels above 100%.

Also, when no hardware keyboard is attached, a virtual keyboard is automatically shown to provide a smooth converged experience on tablets and convertibles; when a keyboard is connected again, the virtual keyboard is disabled automatically. The touchpad can also be enabled or disabled through a global shortcut. Another notable change in this release is the agenda view in the calendar, which provides a quick, easily accessible overview of upcoming events, appointments, holidays and so on. For more details, refer to the KDE Plasma 5.7 release notes.

In this tutorial, we will walk you through how to install KDE Plasma 5.7 on Ubuntu 16.04, Fedora 24, openSUSE 13.2, and Arch Linux.

Install KDE Plasma On Ubuntu / Linux Mint

KDE Plasma 5.7 installation has been tested on Ubuntu 16.04. However, the same steps should work on Ubuntu derivatives such as Linux Mint 18.

First of all, update the Ubuntu system:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade

Then, add the Kubuntu backports PPA using this command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

Update the repository lists using command:

sudo apt-get update

Finally, run the following command to install the latest KDE plasma DE.

sudo apt-get install kubuntu-desktop

During installation, you will be asked to configure the display manager for Plasma DE. Click OK to continue.


In the next wizard, select the default display manager. In my case, I selected "lightdm".


Wait a few moments for the Plasma DE installation to complete. Oops! I got the following error at the end.

Errors were encountered while processing:
/var/cache/apt/archives/kde-config-telepathy-accounts_4%3a15.12.3-0ubuntu1_amd64.deb
/var/cache/apt/archives/kaccounts-providers_4%3a15.12.3-0ubuntu1_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)


To fix this, find the conflicting package that causes the problem:

sudo find /var/cache -name "kde-config-telepathy-accounts*"

Sample output:

/var/cache/apt/archives/kde-config-telepathy-accounts_4%3a15.12.3-0ubuntu1_amd64.deb

Install it with the --force-overwrite flag as shown below:

sudo dpkg -i --force-overwrite /var/cache/apt/archives/kde-config-telepathy-accounts_4%3a15.12.3-0ubuntu1_amd64.deb

Finally, run the following command to complete KDE plasma installation.

sudo apt-get -f install

Now, reboot the Ubuntu system. From the login screen's session menu, select the Plasma DE:


Install KDE Plasma On Fedora

KDE Plasma packages are available in the default repositories starting from Fedora 23. To install Plasma 5.7 DE in Fedora 23 and 24, run the following command as root user:

dnf install @kde-desktop

Install KDE Plasma On CentOS / RHEL / Scientific Linux

To install Plasma 5.7 in RHEL, and its clones like CentOS and Scientific Linux, run the following command as root user:

yum groupinstall "KDE Plasma Workspaces"

Install KDE Plasma On openSUSE

KDE Plasma is available in the default repositories of openSUSE 13.1 and 13.2. To install Plasma 5.7, just run:

sudo zypper in -t pattern kde kde_plasma

Install KDE Plasma On Arch Linux

On rolling releases like Arch Linux and its derivatives, Plasma packages are available in the [extra] repository. Just enable the [extra] repository, and install it using command:

sudo pacman -Syu
sudo pacman -S plasma-meta

or

sudo pacman -S plasma

Also, Beta releases are available in the [kde-unstable] repository.
KDE Plasma screenshot tour:

KDE Plasma DE in Ubuntu 16.04 LTS

KDE Plasma details

Web browser in KDE

Dolphin file manager

Konsole

As of writing this guide, the KDE Plasma 5.7 packages have not yet landed in all of the official repositories mentioned above; since the release is brand new, it may take a few more days for the latest version to arrive. That's all for now. Start using the latest KDE Plasma version on your Linux desktop. If you like this tutorial, share it on your social networks and support Linoxide. Also, subscribe to our newsletter to get daily updates in your inbox.


Learn Ansible Basics to Install, Create Roles, PlayBook in Linux


DevOps has become an increasingly important aspect of daily life for many systems administrators. The demand to automate as much as possible, combined with the needs for flexibility and scalability, can give the most seasoned veteran a headache. Ansible will help ease a lot of those pains.

Ansible is an open source tool used for configuration management. It is lightweight; it does not require an agent on remote hosts, but rather performs tasks over SSH. Ansible is a Python-based tool that uses YAML for configuration, giving it an easier learning curve. It can be used for a variety of automation tasks, including OS and application configuration management, PaaS orchestration in VMware or cloud environments, application deployment, and many others. Setting up Ansible incorrectly can have security and scalability ramifications, so spend some time setting it up to use a non-privileged user and roles.

Installation of Ansible on Ubuntu 16.04

The first step is to install prerequisite packages.

$ sudo apt-get install software-properties-common

Next, install the Ansible apt repository, update the apt cache, and finally install Ansible.

$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible

Note: Installation for Ubuntu 14.04 and 15.10 is the same.

Installation of Ansible on CentOS 7

First, setup the EPEL (Extra Packages for Enterprise Linux) repository then install Ansible.

$ sudo yum -y install epel-release
$ sudo yum -y install ansible

Note: Installation for CentOS 6 is the same.

Configure Ansible User

As mentioned earlier, Ansible uses SSH to connect to remote hosts. Creating an ansible user will allow Ansible to connect as a non-privileged user and utilize sudo for escalated privileges.  Create an ansible user on the host that Ansible is installed on.

$ sudo groupadd -g 5001 ansible
$ sudo useradd -u 5001 -g 5001 -c "Ansible User" -m ansible

Create an SSH key for the ansible user to use for remote connections. For easier automation, generate the SSH key without a password.

$ sudo su - ansible
$ ssh-keygen -t rsa -b 4096

Ansible Configuration

The default Ansible configuration is ready to go out of the box. By default, it uses the root user for remote connections. To use the ansible user instead, edit this line in /etc/ansible/ansible.cfg, as sketched below.

[Screenshot: remote_user setting in ansible.cfg]
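A minimal sketch of the relevant line, assuming the stock /etc/ansible/ansible.cfg layout:

[defaults]
# connect to managed hosts as the non-privileged ansible user instead of root
remote_user = ansible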

Ansible Inventory

The inventory is the list of hosts that Ansible will connect to. The inventory can be organized into logical groups to allow Ansible tasks to be run on multiple hosts. A host can also be unassigned, meaning it is not part of a host group. Hostnames or IP addresses can be used.

For example, to setup a group of web servers:

[web-servers]
web01.domain.com
web02.domain.com
web01.otherdomain.com
172.30.0.150

Creating a Role

A role within Ansible is a way to organize a group of tasks. Roles are beneficial in an environment where multiple hosts require the same tasks, but may not necessarily perform the same function. Roles are located under /etc/ansible/roles by default and are organized with a folder structure.

[Screenshot: role folder structure]
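As an illustration, a hypothetical role named "ansible-user" (the name is an assumption, matching the example built below) laid out on disk might look like this:

/etc/ansible/roles/ansible-user/
    tasks/main.yml       # the task list Ansible runs for this role
    handlers/main.yml    # handlers notified by tasks
    files/ansible.pub    # files copied to remote hosts
    vars/main.yml        # role variables
    meta/main.yml        # role dependencies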

When a play (a task or series of tasks) is run, Ansible will look for main.yml under tasks/, handlers/, vars/, and meta/ and add the items to the play. Any files or scripts used in a task can be referenced by absolute path or placed under the files/, templates/, or tasks/ folders.

An example role would be one that creates the ansible user on a remote host. This role creates a local ansible user on the remote host, copies the user's SSH key to an authorized_keys file, and adds sudo privileges for future tasks.

[Screenshot: tasks/main.yml for the ansible-user role]
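A rough reconstruction of such a tasks/main.yml, assuming the user's public key has been saved into the role's files/ folder as "ansible.pub", could be:

---
- name: Create the ansible user
  user:
    name: ansible
    uid: 5001
    shell: /bin/bash

- name: Install the SSH public key for the ansible user
  authorized_key:
    user: ansible
    key: "{{ lookup('file', 'ansible.pub') }}"

- name: Allow the ansible user passwordless sudo
  copy:
    content: "ansible ALL=(ALL) NOPASSWD: ALL\n"
    dest: /etc/sudoers.d/ansible
    mode: "0440"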

Creating a Playbook

A playbook in Ansible gives the ability to run one or more plays against a host or hosts.

[Screenshot: playbook referencing the role]
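A minimal sketch of such a playbook, assuming a file named "ansible-user.yml" that applies the role above to the web-servers group from the inventory:

---
- hosts: web-servers
  become: yes
  roles:
    - ansible-user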

Playbooks can contain any elements used in a role. For example, variables can be assigned for a hostname or a password within a playbook. Using roles simplifies and organizes Ansible: it allows a single role to be used across multiple hosts, changing only the variables assigned to each host.

A playbook can contain as many plays as needed. For example, a playbook to build a web server might have a play to provision a VM, apply OS updates, install and configure Apache, and add it to a load balancer.

Running a Playbook

Use the "ansible-playbook" command to run a playbook. Ansible will report on the status of a playbook run in real-time. To run the playbook against one host, use this command:

$ ansible-playbook ansible-user.yml -l 172.30.0.150 -k

The "-l" flag limits the run to the IP address. The "-k" flag is used to prompt for a user password, in this case the ubuntu user created on the remote host. For all future playbooks, if run as the ansible user from the Ansible server, no password will be required and SSH key-based authentication will be used.

[Screenshot: ansible-playbook run output]

Conclusion

Ansible is a powerful tool for any DevOps arsenal. When set up correctly from the start, it provides the flexibility to perform any task within an environment, and to do so securely. Taking the time to do it right at the start will save headaches later on. Read more about Ansible on their documentation page.


How to Install and Use Metasploit Security Tool


According to the website of Rapid7 (the company behind the project), the Metasploit framework is the world's most used open source penetration testing software. It can be used for either offensive or defensive purposes. The Metasploit framework is available in many Linux distributions such as Kali (formerly Backtrack); however, in this tutorial it will be installed on Ubuntu 16.04 LTS.

There are two versions of the Metasploit framework:

  1. Metasploit Pro (Paid and full features)
  2. Metasploit Community (free and limited functionality)

version

Metasploit Installation

You must register on the Rapid7 website to download the Metasploit installer. First, download the community version of the framework.

Fill in the following registration form to download the installer and receive a one-year license key at the provided email address.

registration

After successfully submitting the form, a few more options are presented.

Download the Metasploit installer for the desired operating system (Linux in this tutorial).

Download links

The installer download progress is shown in the following snapshot.

download-installer

As per the second step given on the Rapid7 website, Metasploit uses some of the same techniques as malware and malicious attackers in order to audit your security. Before installation and while using it, switch off anti-virus software and the local firewall so that Metasploit runs properly.


And the last step is activating the Metasploit framework using the license key.


The Metasploit installer can be downloaded using the wget command.

wget http://downloads.metasploit.com/data/releases/metasploit-latest-linux-x64-installer.run

link

Run the following command to make the installer script executable.

chmod +x metasploit-latest-linux-x64-installer.run

permission changed

As shown in the following snapshot, the installer script is executed in the terminal.

./metasploit-latest-linux-x64-installer.run

executing

The installer launches the Metasploit setup wizard.

[Screenshot: Metasploit setup wizard]

Click on the Forward button and accept the agreement for further installation.

[Screenshot: license agreement]

On the next prompt, choose a folder for the Metasploit installation. The default installation path is the /opt/metasploit directory.

installation directory

Install Metasploit as a service to start it on each boot.

service

Disable the antivirus and firewall so that the Metasploit software can start.

disable

The default SSL port of the Metasploit service is 3790; however, it can be changed during the installation process.

port for metasploit

Generation of the SSL certificate for the Metasploit service is shown below.

ssl certificates

As shown in the following screenshot, Metasploit setup is ready to install on the virtual machine (VM).

[Screenshot: ready to install]

The installation process starts, as shown below.

unpacking

Finally, the setup wizard finishes and the Metasploit web user interface opens.

finish

The welcome page after the setup wizard is shown below.

webpage

As shown in the above snapshot, visit the following URL to start using the Metasploit framework:

https://localhost:3790/

The following warning shows that the connection is not trusted by the browser. Add an exception and accept the Metasploit-generated certificate.

insecure connection

Adding browser exception to accept the certificate.

add exception

Confirming security exception to begin web interface over https.

confirm

As shown in the following snapshot, create a user to access the web interface of the Metasploit framework.

login information

The following figure shows the product key being entered to activate the software.

enter license key

As shown in the following figure, the product is successfully activated, and a restart of the Metasploit instance is required.

restart after activation

Metasploit Usage

As shown in the following figure, the first project created in the Metasploit web interface is the default one.

default project

Click on the name of the project to see more associated options.

overview

The above screenshot shows an overview of the project: discovered hosts, services, vulnerabilities, and so on. The top menu shows the features available in the Metasploit framework; most of them require a paid license.

The following figure shows the scan feature of Metasploit, which discovers the hosts on the network.

scan

The address (192.168.1.1) is entered in the target settings area, with the default scan options.

give target address

Nmap is integrated with the Metasploit framework to perform host discovery. The progress of the Nmap scan is shown below.

scan result

Click on the Hosts option under the Analysis menu to view the scan results.

analysis menu

Details of the target are shown in the following figure. The target in this scan is a DSL router running a Linux 2.6.x kernel with the hostname Broadcom.Home.

result

Three services (DNS, HTTP and Telnet) are open on the DSL router, and no vulnerabilities were found on the target.

services

Many features in the community version of Metasploit are trial-only. As the following screenshots show, the automatic exploitation, brute force and reporting features require a paid license.

Automatic Exploitation feature

automatic exploit

Bruteforce module

bruteforce

Reporting feature

reports

The Nexpose plugin (another Rapid7 project) is also integrated; it detects vulnerabilities, prioritizes remediation and improves security outcomes. Nexpose is an alternative to the Nessus and OpenVAS security scanners.

Conclusion

In this tutorial, the Metasploit framework was explored in depth and installed on the Ubuntu platform. It is ranked as a top security tool in the open source community and is used by security professionals to perform penetration testing.


How to Setup Mail Server Using Postfix, MariaDB, Dovecot and Roundcube


Hello everybody, today we are going to set up a mail server using Postfix, Dovecot and MariaDB on Ubuntu 16.04 LTS. This article shows you how to configure Postfix (the SMTP server), Dovecot (the IMAP/POP server) and MariaDB, which stores the information on virtual domains and users. The task of the SMTP server is to accept incoming mail and relay outgoing mail from authorized users of the system, whereas Dovecot allows authorized users to access their inbox and read their mail. So, our primary focus in this article is to set up a fast and secure mail server using virtual users.

Setup Prerequisites

There are a number of prerequisites that should be taken care of before moving on to the mail server setup. These include forwarding your domain to the server and assigning the server a static IP address. Then edit the '/etc/hosts' file to set the appropriate hostname:

# vim /etc/hosts
172.25.10.171 ubuntu-16.linoxide.com ubuntu-16

In order to use your mail server on a wider network, you must correctly configure the DNS and MX records for your host's domain. Also make sure that the iptables firewall, or any other external firewall, is not blocking any of the standard mail ports on your server, that is 25, 465, 587, 110, 995, 143 and 993.
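As a quick illustration, on Ubuntu with the ufw frontend (assuming ufw is the firewall in use), the ports could be checked and opened like this:

# ufw status
# ufw allow 25,465,587/tcp      # SMTP, SMTPS, submission
# ufw allow 110,995,143,993/tcp # POP3(S) and IMAP(S)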

After the domain and hostname setup, run the command below to update your system before installing the other required packages.

# apt-get update

Installing Packages

Now install all the required packages, including Postfix, Dovecot and MariaDB, along with some other necessary packages, by running the command below with root user credentials.

# aptitude update && aptitude install apache2 postfix dovecot-core dovecot-imapd dovecot-pop3d dovecot-lmtpd dovecot-mysql spamassassin clamav clamav-daemon clamav-base mariadb-client mariadb-server php

Once you press the 'y' key to continue, you will be prompted for the Postfix configuration. Choose 'Internet Site' and press 'OK' to continue.

Postfix Configurations

Next you will be asked to type your system mail name, which will appear in your emails.

system mail name

After you press 'OK', the system will take a while to complete the installation of all packages.

Setup MariaDB Database

When the installation is complete and the above services are enabled and running, we start off by setting up the database and tables that store information about Postfix mail accounts.

Run the command below to configure the root password for MariaDB.

# mysql_secure_installation

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
... Success!

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
... Success!

Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y
... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
- Dropping test database...
ERROR 1008 (HY000) at line 1: Can't drop database 'test'; database doesn't exist
... Failed! Not critical, keep moving...
- Removing privileges on test database...
... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
... Success!

Cleaning up...

All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

Now connect to MariaDB and run the commands below to create a new database and a database user, and grant the new user permissions on the newly created database.

# mysql -u root -p

MariaDB [(none)]> create database mailserver;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT SELECT ON mailserver.* TO 'mailuser'@'127.0.0.1' IDENTIFIED BY '*****';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

Now we will create some tables using the following queries. First, run the query below to create the table for the domains that will receive mail.

CREATE TABLE `virtual_domains` (
`id` int(11) NOT NULL auto_increment,
`name` varchar(50) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

To create a table for all of your email addresses and passwords, use the query below.

CREATE TABLE `virtual_users` (
`id` int(11) NOT NULL auto_increment,
`domain_id` int(11) NOT NULL,
`password` varchar(106) NOT NULL,
`email` varchar(100) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `email` (`email`),
FOREIGN KEY (domain_id) REFERENCES virtual_domains(id) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Then create one more table, for the email aliases, using the query below.

CREATE TABLE `virtual_aliases` (
`id` INT NOT NULL AUTO_INCREMENT,
`domain_id` INT NOT NULL,
`source` varchar(100) NOT NULL,
`destination` varchar(100) NOT NULL,
PRIMARY KEY (`id`),
FOREIGN KEY (domain_id) REFERENCES virtual_domains(id) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

creating tables in MariaDB

We have created the database and the necessary tables; now we will add some data, starting with a few test domains in the virtual_domains table, using the following query.

INSERT INTO `mailserver`.`virtual_domains`
(`id` ,`name`)
VALUES
('1', 'test.com'),
('2', 'ubuntu-16.test.com'),
('3', 'ubuntu-16'),
('4', 'localhost.test.com');

Then add email addresses to the virtual_users table, replacing the email values with the addresses you wish to configure on the mail server and the password values with strong passwords of your own.

INSERT INTO `mailserver`.`virtual_users`
(`id`, `domain_id`, `password` , `email`)
VALUES
('1', '1', ENCRYPT('password', CONCAT('$6$', SUBSTRING(SHA(RAND()), -16))), 'email1@test.com'),
('2', '1', ENCRYPT('password', CONCAT('$6$', SUBSTRING(SHA(RAND()), -16))), 'email2@test.com');

Run the following query to insert the data into the tables for email alias setup.

INSERT INTO `mailserver`.`virtual_aliases`
(`id`, `domain_id`, `source`, `destination`)
VALUES
('1', '1', 'alias@test.com', 'email1@test.com');

inserting db data

Testing MariaDB Data

After entering all the required information into MariaDB, check that the data is there.

Let's run the query below to check the contents of the virtual_domains table first.

MariaDB [mailserver]> SELECT * FROM mailserver.virtual_domains;
+----+--------------------------+
| id | name |
+----+--------------------------+
| 1 | test.com |
| 2 | ubuntu-16.test.com |
| 3 | ubuntu-16 |
| 4 | localhost.test.com |
+----+--------------------------+
4 rows in set (0.00 sec)

Check the 'virtual_users' and 'virtual_aliases' tables to verify that the hashed passwords and aliases are there, using the queries below.

MariaDB [mailserver]> SELECT * FROM mailserver.virtual_users;
+----+-----------+---------------------------------------------+-----------------------+
| id | domain_id | password | email |
+----+-----------+--------------------+-----------------------+
| 1 | 1 | $6$b4046061316dbf73$eth.RbLdk2Z1ArUti2yYzoF0T8/xz1wbVrrX1RticBNTeKz4wWKj23zj49UOSW95njitEWv65tlketGVvzRz01 | email1@test.com |
| 2 | 1 | $6$a193a0a79bd30c14$RN1HiqeeuwJ2uvjY43VO0vLLj2GFpJpPtOu3rCZH66qVWIlFUcRDg/7gr9cpVuyYSGejXoF7D69YgI/vQPR17. | email2@test.com |
+----+-----------+--------------------------------------------------+-----------------------+
2 rows in set (0.00 sec)

MariaDB [mailserver]> SELECT * FROM mailserver.virtual_aliases;
+----+-----------+----------------+-----------------------+
| id | domain_id | source | destination |
+----+-----------+----------------+-----------------------+
| 1 | 1 | alias@test.com | email1@test.com |
+----+-----------+----------------+-----------------------+
1 row in set (0.00 sec)

Setup Postfix Configurations

With the database in place, we will now configure Postfix so the server can accept incoming messages for our domains. Start by making a copy of the default Postfix configuration file, in case you need to revert to the default configuration.

# cp /etc/postfix/main.cf /etc/postfix/main.cf.org

Then open the file with vi or vim and edit it to match the following configuration, not forgetting to update your own domain and hostname.

# vi /etc/postfix/main.cf

# See /usr/share/postfix/main.cf.dist for a commented, more complete version

# Debian specific: Specifying a file name will cause the first
# line of that file to be used as the name. The Debian default
# is /etc/mailname.
#myorigin = /etc/mailname

smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
biff = no

# appending .domain is the MUA's job.
append_dot_mydomain = no

# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h

readme_directory = no

# TLS parameters
#smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
#smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
#smtpd_use_tls=yes
#smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
#smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

smtpd_tls_cert_file=/etc/dovecot/dovecot.pem
smtpd_tls_key_file=/etc/dovecot/private/dovecot.pem
smtpd_use_tls=yes
smtpd_tls_auth_only = yes

#Enabling SMTP for authenticated users, and handing off authentication to Dovecot
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes

smtpd_recipient_restrictions =
permit_sasl_authenticated,
permit_mynetworks,
reject_unauth_destination

# See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
# information on enabling SSL in the smtp client.

myhostname = ubuntu-16.test.com
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
#mydestination = test.com, ubuntu-16.test.com, localhost.test.com, localhost
mydestination = localhost
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all

#Handing off local delivery to Dovecot's LMTP, and telling it where to store mail
virtual_transport = lmtp:unix:private/dovecot-lmtp

#Virtual domains, users, and aliases
virtual_mailbox_domains = mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf
virtual_mailbox_maps = mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf
virtual_alias_maps = mysql:/etc/postfix/mysql-virtual-alias-maps.cf,
mysql:/etc/postfix/mysql-virtual-email2email.cf

Save and close the file, then create a new file for your virtual domains, making sure to adapt the values to your own setup.

#vim /etc/postfix/mysql-virtual-mailbox-domains.cf

user = mailuser
password = mailuser_pass
hosts = 127.0.0.1
dbname = mailserver
query = SELECT 1 FROM virtual_domains WHERE name='%s'

Create the file below and put the following contents in it, with your password.

# vim /etc/postfix/mysql-virtual-mailbox-maps.cf

user = mailuser
password = mailuser_pass
hosts = 127.0.0.1
dbname = mailserver
query = SELECT 1 FROM virtual_users WHERE email='%s'

Save and close the file, then create a new file '/etc/postfix/mysql-virtual-alias-maps.cf' and enter the following values as shown.

# vi /etc/postfix/mysql-virtual-alias-maps.cf

user = mailuser
password = mailuser_pass
hosts = 127.0.0.1
dbname = mailserver
query = SELECT destination FROM virtual_aliases WHERE source='%s'

Then create '/etc/postfix/mysql-virtual-email2email.cf' file and enter the following values.

# vi /etc/postfix/mysql-virtual-email2email.cf

user = mailuser
password = mailuser_pass
hosts = 127.0.0.1
dbname = mailserver
query = SELECT email FROM virtual_users WHERE email='%s'

Save and close the file, then run the command below to restart the Postfix service.

# service postfix restart

Run the command below to ensure that Postfix can find the first domain; it should return '1' on success.

# postmap -q test.com mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf

Now test Postfix to verify that it can find the first mailbox by entering the following command, replacing email1@test.com with an actual address you created.

# postmap -q email1@test.com mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf

You should again receive 1 as the output. Then test Postfix to verify that it can find the aliases by entering the following command.

# postmap -q alias@test.com mysql:/etc/postfix/mysql-virtual-alias-maps.cf

This will return the email address to which the alias forwards as shown in the image below.

Postfix conf testing

Next we will configure the '/etc/postfix/master.cf' file. Before making changes, save a copy of it first.

# cp /etc/postfix/master.cf /etc/postfix/master.cf.org

Then open the configuration file for editing, and uncomment the two lines starting with 'submission' and 'smtps', along with the block of lines starting with '-o' after each of them, so they look like the screenshot below.

# vim /etc/postfix/master.cf

postfix master.cf
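If you are working without the screenshot, the relevant part of master.cf typically looks like the following once uncommented; this is a sketch based on the stock Ubuntu file, and the exact set of '-o' options in your version may differ slightly.

submission inet n       -       y       -       -       smtpd
  -o syslog_name=postfix/submission
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
smtps     inet  n       -       y       -       -       smtpd
  -o syslog_name=postfix/smtps
  -o smtpd_tls_wrappermode=yes
  -o smtpd_sasl_auth_enable=yes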

Save the configuration and then run the command below to restart the Postfix service; with that, the Postfix configuration is complete.

# service postfix restart

Dovecot Configuration Setup

Dovecot allows users to log in and check their email using POP3 and IMAP. Here we will configure Dovecot to force users to use SSL when they connect so that their passwords are never sent to the server in plain text. Let's start by copying all original configurations before making any changes.

root@ubuntu-16:~# cp /etc/dovecot/dovecot.conf /etc/dovecot/dovecot.conf.org
root@ubuntu-16:~# cp /etc/dovecot/conf.d/10-mail.conf /etc/dovecot/conf.d/10-mail.conf.org
root@ubuntu-16:~# cp /etc/dovecot/conf.d/10-auth.conf /etc/dovecot/conf.d/10-auth.conf.org
root@ubuntu-16:~# cp /etc/dovecot/dovecot-sql.conf.ext /etc/dovecot/dovecot-sql.conf.ext.org
root@ubuntu-16:~# cp /etc/dovecot/conf.d/10-master.conf /etc/dovecot/conf.d/10-master.conf.org
root@ubuntu-16:~# cp /etc/dovecot/conf.d/10-ssl.conf /etc/dovecot/conf.d/10-ssl.conf.org

# vim /etc/dovecot/dovecot.conf
# Enable installed protocols
!include_try /usr/share/dovecot/protocols.d/*.protocol

dovecot conf

After saving this, open the '/etc/dovecot/conf.d/10-mail.conf' file and modify the following variables, which control how Dovecot interacts with the server's file system to store and retrieve messages.

# vim /etc/dovecot/conf.d/10-mail.conf

mail_location = maildir:/var/mail/vhosts/%d/%n
mail_privileged_group = mail

Run the commands below to create the folder for the domain and then create the 'vmail' user with a user and group id of '5000'. This user will be in charge of reading mail from the server.

# mkdir -p /var/mail/vhosts/test.com

# groupadd -g 5000 vmail

# useradd -g vmail -u 5000 vmail -d /var/mail

Then change the owner of the '/var/mail/' folder and its contents to the vmail user.

dovecot setup
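If you are following along without the screenshot, the command shown there is the usual one for this step:

# chown -R vmail:vmail /var/mail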

Open the user authentication file as shown below. Disable plain-text authentication over unencrypted connections by making sure the 'disable_plaintext_auth = yes' line is uncommented, then set the allowed authentication mechanisms on the line shown.

#vim /etc/dovecot/conf.d/10-auth.conf

auth_mechanisms = plain login

Within the same file, comment out the system user login line and enable MySQL authentication by uncommenting the 'auth-sql.conf.ext' line as shown.

#!include auth-system.conf.ext
!include auth-sql.conf.ext

Save the configuration file, then edit the '/etc/dovecot/conf.d/auth-sql.conf.ext' file with the authentication information by adding the following lines to the file as shown below.

# vim /etc/dovecot/conf.d/auth-sql.conf.ext

sql auth
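If the screenshot is unavailable, the file typically contains a passdb block pointing at the SQL connection file and a static userdb matching the vmail layout created earlier in this guide; the sketch below uses exactly those values.

passdb {
  driver = sql
  args = /etc/dovecot/dovecot-sql.conf.ext
}
userdb {
  driver = static
  args = uid=vmail gid=vmail home=/var/mail/vhosts/%d/%n
}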

After saving, update the '/etc/dovecot/dovecot-sql.conf.ext' file with the custom MySQL connection information.

# vim /etc/dovecot/dovecot-sql.conf.ext

# Database driver: mysql, pgsql, sqlite
driver = mysql
connect = host=127.0.0.1 dbname=mailserver user=mailuser password=mailuser_pass
password_query = SELECT email as user, password FROM virtual_users WHERE email='%u';

Save the file and change the permissions, owner and group of the /etc/dovecot/ directory to vmail and dovecot.

# chown -R vmail:dovecot /etc/dovecot
# chmod -R o-rwx /etc/dovecot

Now open '/etc/dovecot/conf.d/10-master.conf' (which we backed up earlier) and disable unencrypted IMAP and POP3 by setting those protocols' ports to 0, as shown below. Ensure that the entries for port and ssl below the imaps and pop3s entries are uncommented.

service imap-login {
  inet_listener imap {
    port = 0
  }
  inet_listener imaps {
    port = 993
    ssl = yes
  }
}

service pop3-login {
  inet_listener pop3 {
    port = 0
  }
  inet_listener pop3s {
    port = 995
    ssl = yes
  }
}

service lmtp {
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    mode = 0600
    user = postfix
    group = postfix
  }
  # Create inet listener only if you can't use the above UNIX socket
  #inet_listener lmtp {
    # Avoid making LMTP visible for the entire internet
    #address =
    #port =
  #}
}

Search the service auth section and configure it as shown.

service auth {
  # auth_socket_path points to this userdb socket by default. It's typically
  # used by dovecot-lda, doveadm, possibly imap process, etc. Its default
  # permissions make it readable only by root, but you may need to relax these
  # permissions. Users that have access to this socket are able to get a list
  # of all usernames and get results of everyone's userdb lookups.
  unix_listener /var/spool/postfix/private/auth {
    mode = 0666
    user = postfix
    group = postfix
  }

  unix_listener auth-userdb {
    mode = 0600
    user = vmail
    #group =
  }

  # Postfix smtp-auth
  user = dovecot
}

Uncomment the user line and set it to vmail under the service auth-worker section.

service auth-worker {
  # Auth worker process is run as root by default, so that it can access
  # /etc/shadow. If this isn't necessary, the user should be changed to
  # $default_internal_user.
  #user = root
  user = vmail
}

That's it. Now save the configuration file and run the command below to restart the Dovecot service.

# service dovecot restart

Now set up a test account in an email client to ensure that everything is working. Provide the full email address, including your domain name, and the password that you added to the MariaDB table for your email address. Then try sending an email to this account from an outside email account, and reply to it. Afterwards, check the mail log file in /var/log/mail.log: you should see one block of log entries for the incoming message and another for the outgoing reply.
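Before reaching for a full mail client, you can also sanity-check the TLS-enabled ports from the command line. The hostname below is the example one used throughout this guide, so substitute your own:

# openssl s_client -connect ubuntu-16.test.com:587 -starttls smtp
# openssl s_client -connect ubuntu-16.test.com:993

The first command tests the submission port with STARTTLS, the second tests IMAPS; both should print certificate details if the services are listening.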

Setup Roundcube for Webmail Interface

You can use any email client which supports SMTP and POP/IMAP; the webmail part is completely optional.

Run the below command to install Roundcube along with its plugins on your Ubuntu 16.04 server and press 'Y' key to start installation process.

# apt-get install roundcube roundcube-plugins roundcube-plugins-extra

installing roundcube

During the installation, you will be asked whether a database should be installed and configured for Roundcube. This is optional, and we will select 'No' here as we have already set up the database.

roundcube-core config

After installation, open its configuration file using any editor and put the following parameters in it as shown.

# vim /etc/roundcube/config.inc.php

$rcmail_config['default_host'] = 'localhost';
$rcmail_config['imap_cache'] = 'memcache';
$rcmail_config['messages_cache'] = 'db';

Save and close the file after making your changes. Then create a virtual host in your web server and launch your favorite web browser to begin configuring Roundcube. The first step of Roundcube’s graphical configuration is an environment check. Click on the NEXT button at the bottom of the page to continue. You can see our detailed article on roundcube installation on Ubuntu 16.04.

Conclusion:

Congratulations, we have successfully set up a mail server using Postfix, MariaDB, Dovecot and Roundcube on Ubuntu 16.04. If DNS records have not been created for the mail server yet, do so now. Once the DNS records have propagated, email will be delivered via the new mail server. Now you can add new domains, email addresses, and aliases for your users simply by adding a new row to the appropriate MySQL table, and enjoy communication over your newly built Ubuntu mail server.

The post How to Setup Mail Server Using Postfix, MariaDB, Dovecot and Roundcube appeared first on LinOxide.


Install New Skype App for Linux Clients Including on Chrome


Skype is one of the most widely used communications applications in the world. It is available for Windows, Mac OS X, Linux, and other mobile devices. Sadly, Linux users were left out in the cold for a while, as the Linux client had not been updated since 2014. Skype's announcement of a next-generation Linux-native client and Web Client means that Skype is starting to give importance to the application's availability across all operating systems. This tutorial will explain how to install Skype on Ubuntu 16.04 and Fedora 24.

Take note that this is an ALPHA version, meaning that a lot of things are still in development, and many basic features might not work as expected. Skype really wants feedback from the Linux users, so make sure your feedback and comments are heard on their blog post. Linux users' comments will help Skype know what needs to be fixed, tested, and in what direction their development should go.

Get The Package

Head over to the Skype Community Announcement page, and download the appropriate package (the DEB package for Ubuntu and other Debian-based distros, and RPM for Fedora and other Red Hat-based distros).

Skype Alpha for Linux Announcement Page

Installing on Ubuntu

If you have an existing Skype installation on your Ubuntu machine, uninstall it first. Make sure that Skype is not running, then uninstall it with the following command:

sudo apt remove skype
sudo apt autoremove

Now, open a terminal and navigate to where the Skype .deb package was downloaded, most likely "~/Downloads":

cd Downloads

and use dpkg to install the package that was downloaded.

sudo dpkg -i skypeforlinux-64-alpha.deb

In case dpkg returns an error stating that dependencies are not fulfilled, running

sudo apt -f install

should quickly remedy the problem.

Skype is now installed on your system.
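You can launch it from your desktop's application menu, or directly from a terminal; the binary name below is the one the alpha package is expected to install (if in doubt, list the package contents with dpkg -L skypeforlinux):

skypeforlinux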

Skype Alpha on Ubuntu

Installing on Fedora

Similarly, if you have an existing Skype installation on your Fedora machine, uninstall it first. Make sure that Skype is not running, then uninstall it with the following command:

sudo dnf remove skype
sudo dnf autoremove

Now, open a terminal and navigate to where the Skype .rpm package was downloaded, most likely "~/Downloads":

cd Downloads

and use dnf to install the package that was downloaded.

sudo dnf install ./skypeforlinux-64-alpha.rpm

Skype Alpha is now installed on your Fedora system.

Skype Alpha on Fedora

Accessing the Web Client

Included in the announcement is the availability of one-to-one and group voice calls for Linux users running Google Chrome, or on any Chromebook. Previously, the web version of Skype only allowed Linux users to communicate using instant messages.

Using the latest Chrome version, just head on over to http://web.skype.com, and all these features should be automatically available.

Skype running on Google Chrome on Ubuntu

Conclusion

Linux users are always happy whenever news arrives that a popular game or application will be released on Linux, and Skype Alpha is no exception. Linux users who have been frustrated by the lack of updates to the Skype clients for Linux will be pleased to know that calls work reliably again, and group interactions are once again functional. Here's hoping that Skype for Linux will continue to be developed, and that Skype realizes how important Linux support for Skype is. Enjoy!

The post Install New Skype App for Linux Clients Including on Chrome appeared first on LinOxide.

Install and Configure Git on Ubuntu 16.04


Git is an open source, distributed version control system designed to handle every type of project, from small to very large, with speed and efficiency. It is easy to learn and has low memory consumption with lightning-fast performance. It surpasses several other SCM tools like Subversion, CVS, Perforce and ClearCase with features like cheap local branching, convenient staging areas, and multiple workflows.

Moreover, Git 2.9.0 comes with a variety of new features and bug fixes compared with earlier versions; some of the advanced features that make Git 2.9 stand out are listed below:

  • Faster and more flexible submodules : It brings support for cloning and updating submodules in parallel.
  • Beautiful diffs : It adds new experimental heuristics for diff handling.
  • Testing commits with Git interactive rebase

Advantages of Git over other SCM tools

  • Branching and Merging
  • Small and Fast
  • Distributed
  • Data Assurance
  • Staging Area
  • Free and opensource

In this article, I'll demonstrate how to install the latest Git version on an Ubuntu 16.04 server. Let's start with the installation steps.

Installing Git

On an Ubuntu server, we can install Git packages from their repositories by just running this command.

root@ubuntu:~# apt-get update
root@ubuntu:~# apt-get install git

But installing this way does not guarantee the latest Git release packages. In that case, we prefer to install Git from its source packages. We can download the Git release packages here.

I'll explain the steps on how I installed the latest Git 2.9.0 version on my system.

Download the Git files

Step 1 : Download the Git 2.9 package from the above download link

root@ubuntu:~# wget https://github.com/git/git/archive/v2.9.0.zip
root@ubuntu:~# unzip v2.9.0.zip

Install the unzip package if it's not present on the server by running the command "apt install unzip".

Configure and Build

Step 2 : Move into the extracted Git folder and start configuring. First, we need to generate the configure script and build the Git package. For the configuration step to work, we need to install autoconf on our server.

root@ubuntu:~/git-2.9.0# apt-get install autoconf

root@ubuntu:~/git-2.9.0# make configure
GEN configure
root@ubuntu:~/git-2.9.0# ./configure --prefix=/usr/local

After installing autoconf, we can create the configure file for Git and start configuring using the above command.

But if you come across an error similar to the one below during configuration, please install the following package.

Error:

configure: error: in `/root/git-2.9.0':
configure: error: no acceptable C compiler found in $PATH
See `config.log' for more details

 Fix : 

root@ubuntu:~/git-2.9.0# apt-get install gcc

Installing the gcc package provides the C compiler on your server, letting the configuration complete smoothly like this.

root@ubuntu:~/git-2.9.0# ./configure --prefix=/usr/local
checking for BSD sysctl... no
checking for POSIX Threads with ''... no
checking for POSIX Threads with '-mt'... no
checking for POSIX Threads with '-pthread'... yes
configure: creating ./config.status
config.status: creating config.mak.autogen
config.status: executing config.mak.autogen commands

Our next step is to build the Git package. We can build it by running this make command.

root@ubuntu:~/git-2.9.0# make prefix=/usr/local

PS : At times, you may come across some errors while running this, due to some missing packages.

Errors :

root@ubuntu:~/git-2.9.0# make prefix=/usr/local
CC credential-store.o
In file included from credential-store.c:1:0:
cache.h:40:18: fatal error: zlib.h: No such file or directory
compilation terminated.
Makefile:1935: recipe for target 'credential-store.o' failed
make: *** [credential-store.o] Error 1

/bin/sh: 1: msgfmt: not found
Makefile:2094: recipe for target 'po/build/locale/pt_PT/LC_MESSAGES/git.mo' failed
make: *** [po/build/locale/pt_PT/LC_MESSAGES/git.mo] Error 127

In order to rectify these errors, you can install the following packages which are needed by Git.

apt-get install zlib1g-dev
apt-get install tcl-dev
apt-get install libssl-dev
apt-get install gettext

After fixing these errors, you can re-run these make commands to complete the build process.

root@ubuntu:~/git-2.9.0# make prefix=/usr/local
root@ubuntu:~/git-2.9.0# make prefix=/usr/local install

makeinstall

Now we can confirm our Git installation. Run ldconfig so that the dynamic linker picks up the Git libraries from /usr/local, then check the version.

root@ubuntu:~# git version
git version 2.9.0

Git Setup

Git comes with a tool called git config which allows you to get and set configuration variables that control all aspects of how Git works. These variables can be stored in three different places:

getconf

/etc/gitconfig file : This file contains values for every user on the system and all their repositories.

git config --system : This option reads and writes from this file specifically.

~/.gitconfig or ~/.config/git/config file : This file is specific to each user.

git config --global : This option reads and writes from this file specifically.

.git/config : The config file in the Git directory of whatever repository you’re currently using. This file is specific to that single repository.

git config --local : This option reads and writes from this file specifically.
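For example, the same git config tool writes to each of these files depending on the flag used; the keys below are chosen only for illustration.

git config --system core.editor vim            # system-wide, written to /etc/gitconfig
git config --global color.ui auto              # per-user, written to ~/.gitconfig
git config --local user.email me@example.com   # per-repository, written to .git/config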

Creating your Identity

The first thing you need to do after installing Git is to set your identity: your username and email address. This is important because every Git commit uses this information, and it is immutably attached to the commits you create.

root@ubuntu:~# git config --global user.name "Saheetha Shameer"
root@ubuntu:~# git config --global user.email linoxide1@gmail.com

Checking your Git settings

You can check your current Git settings by using the command git config --list. This will list all Git settings.

root@ubuntu:~# git config --list
user.name=Saheetha Shameer
user.email=saheetha1@gmail.com

You can also check a specific key's value by typing it like this:

root@ubuntu:~# git config user.name
Saheetha Shameer

Git Commands

You can learn more about Git commands by running the git help command. Here are some of the common Git commands and their uses.

gitcommands

You can get the Git Manual page by running this command git help config.

 Creating and Managing a Git Repository

First of all, let's see how you can create a Git repository. You can run this command to create your Git repository in an existing folder. I created a folder called gitdemo and ran the command git init to create my repository.

root@ubuntu:~# mkdir gitdemo
root@ubuntu:~# cd gitdemo/
root@ubuntu:~/gitdemo#
root@ubuntu:~/gitdemo# git init
Initialized empty Git repository in /root/gitdemo/.git/

After executing this command, you can see that it creates a folder called .git. This is where Git stores everything, including change sets, branches, etc. Let's see the structure of this folder.

gitfilestruc

At any time, you can delete this folder to destroy your repository. In short, Git uses a local, file-based setup: you can take a manual backup of this folder to preserve it, push it to any remote repository, or even send the backup to friends to give them direct access to your repository, provided they have Git installed on their systems.

At this moment, we don't have anything to put under version control, so let's create a file and check the git status.

root@ubuntu:~/gitdemo# touch testrepo.txt
root@ubuntu:~/gitdemo# git status
On branch master

Initial commit

Untracked files:
(use "git add <file>..." to include in what will be committed)

testrepo.txt

nothing added to commit but untracked files present (use "git add" to track)

Even though the file testrepo.txt exists, it isn't yet tracked by Git. git status tells us that our file is untracked. We fix this with the command git add <filename>.

root@ubuntu:~/gitdemo# git add testrepo.txt
root@ubuntu:~/gitdemo# git status
On branch master

Initial commit

Changes to be committed:
(use "git rm --cached <file>..." to unstage)

new file: testrepo.txt

Now git status shows that our test file is ready to commit. This is called staging: we've staged this file for commit.

Before committing, we need to run the git add command again to stage any further changes. For example, if we modify anything in our test file, those changes won't be included in the commit until we run git add again. git status will help you identify the modifications.

root@ubuntu:~/gitdemo# git status
On branch master

Initial commit

Changes to be committed:
(use "git rm --cached <file>..." to unstage)

new file: testrepo.txt

Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)

modified: testrepo.txt

Hence, we need to run the git add command to stage these changes.

root@ubuntu:~/gitdemo# git add testrepo.txt
root@ubuntu:~/gitdemo# git status
On branch master

Initial commit

Changes to be committed:
(use "git rm --cached <file>..." to unstage)

new file: testrepo.txt

Once we stage our changes, we are ready to commit them to our repository. You can use the git commit command for that. Running "git commit testrepo.txt" will display a vim window, where you type the commit message (for example "Initial commit") at the top of the screen, then save and exit.

commit

root@ubuntu:~/gitdemo# git commit testrepo.txt
[master (root-commit) 2b3f303] Initial commit
1 file changed, 2 insertions(+)
create mode 100644 testrepo.txt

PS : you can even use git commit -m "changes" instead

If we run the git status again, we can see that there are no more pending changes which means they're all committed to our repository.

root@ubuntu:~/gitdemo# git status
On branch master
nothing to commit, working directory clean

We can get the details of our commit history by running git log, which provides details like author, date and time, commit notes, etc.

root@ubuntu:~/gitdemo# git log
commit 2b3f30387f3b7417acbbc5287132df7441aa0881
Author: Saheetha Shameer <linoxide1@gmail.com>
Date: Thu Jul 14 08:02:52 2016 +0000

Initial commit

You can get even more information about git log by referring to its manual with the command man git-log. I hope this article is informative and useful for you. Thank you for reading. Have a nice day!

The post Install and Configure Git on Ubuntu 16.04 appeared first on LinOxide.

10 Best Known Forensics Tools That Works on Linux


Nowadays, computer or digital forensics is very important because of crimes related to computers, the Internet and mobile devices. Evidence sources such as computers and digital devices contain or store sensitive information which can be useful to a forensic investigator in a particular crime or incident.

Digital forensic investigation requires tools to extract the desired information from the devices. Several commercial tools exist for forensic investigation, but a large amount of money is required to buy them. The open source community has also contributed to this field, and there are several open source tools for digital forensics. In this article, the best tools related to digital forensics will be explored.

Before exploring well-known individual tools for digital forensics, note that the following Linux distributions also contain many free forensic tools.

1) SIFT  (SANS Investigative Forensic Toolkit)

An international team of forensics experts, along with SANS instructors, created the SANS Investigative Forensic Toolkit (SIFT) Workstation for incident response and digital forensics use. The SIFT forensic suite is freely available to the whole community. The free SIFT toolkit, which can match any modern incident response and forensic tool suite, is used in SANS courses. It demonstrates that advanced investigations and responding to intrusions can be accomplished using cutting-edge open-source tools that are freely available and frequently updated.

SIFT SANS

Features of SIFT distribution are following:

  • Ubuntu LTS 14.04 Base
  • 32/64 bit base system
  • Latest forensic tools and techniques
  • VMware Appliance ready to tackle forensics
  • Cross compatibility between Linux and Windows
  • Option to install stand-alone via (.iso) or use via VMware Player/Workstation

2) CAINE (Computer Aided INvestigative Environment)

CAINE is a Linux live distribution created as a digital forensics project. CAINE offers a complete forensic environment that is organized to integrate existing software tools as software modules and to provide a friendly graphical interface.

CAINE

The main objectives that CAINE distribution  aims to guarantee are the following:

  • an inter-operable environment that supports the digital investigator during the four phases of the digital investigation
  • user-friendly graphical interface
  • contains open source tools

3) KALI  (formerly Backtrack)

Kali Linux is an open source project that is maintained and funded by Offensive Security, a provider of world-class information security training and penetration testing services. Kali Linux is the first choice of penetration testers and security professionals. It has security tools for different purposes; open source tools for mobile, network and RAM analysis are available in Kali Linux.

kali

4) DEFT linux ( Digital Evidence & Forensics Toolkit )

DEFT is a distribution made for computer forensics, with the purpose of running live on systems without tampering with or corrupting devices (hard disks, pendrives). It is based on GNU/Linux and can run live (via CD/DVD or USB pendrive), be installed, or run as a virtual machine on VMware/VirtualBox. DEFT is paired with DART (Digital Advanced Response Toolkit), a forensics system which can be run on Windows and contains the best tools for forensics and incident response.

deft

5) Matriux

It is a fully featured security distribution based on Debian consisting of a powerful bunch of more than 300 open source and free tools that can be used for various purposes including, but not limited to, penetration testing, ethical hacking, system and network administration, cyber forensics investigations, security testing, vulnerability analysis, and much more. It is a distribution designed for security enthusiasts and professionals, although it can be used normally as your default desktop system.

Matriux

Matriux is designed to run from a Live environment like a CD / DVD or USB stick or it can easily be installed to your hard disk in a few steps. Matriux also includes a set of computer forensics and data recovery tools that can be used for forensic analysis and investigations and data retrieval.

6) Santoku

Santoku is dedicated to mobile forensics, analysis, and security, and is packaged in an easy to use, open source platform. It is sponsored by the mobile security firm NowSecure.

santoku

Free Forensic tools for Linux

There are several categories of computer forensics tools; the following are the well-known ones:

  • Memory forensic analysis
  • Hard drive forensic analysis
  • Forensic imaging
  • Network Forensic

7) Volatility

Memory analysis has become one of the most important topics for the future of digital investigations, and Volatility has become the world’s most widely used memory forensics platform. It is a well-known memory forensics framework for incident response and malware analysis which allows investigators to extract digital artifacts from volatile memory (RAM) dumps. Volatility has been used on some of the most critical investigations of the past decade.
volatility2
Using Volatility you can extract information about running processes, open network sockets and network connections, DLLs loaded for each process, cached registry hives, process IDs, and more. It has become an indispensable digital investigation tool relied upon by law enforcement, military, academia, and commercial investigators throughout the world. The Volatility framework supports both Windows and Linux platforms for forensic investigation.
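As a quick illustration, a typical Volatility session looks like the following; the dump filename and profile are placeholders, and the profile must match the system the memory was captured from.

volatility -f memdump.raw --profile=Win7SP1x64 pslist
volatility -f memdump.raw --profile=Win7SP1x64 netscan

The first command lists the processes that were running when the dump was taken, and the second enumerates network sockets and connections.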

8) Linux "dd" utility

"dd" utility comes by default on the majority of Linux distributions available today (e.g. Ubuntu, Fedora). This tool can be used for various digital forensic tasks such as forensically wiping a drive (zero-ing out a drive) and creating a raw image of a drive. It is a very powerful tool that can have devastating effects if not used with care. It is recommended that you experiment in a safe environment before using this tool in the real world.

dd
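For example, the two tasks mentioned above can be sketched as follows; the device and output paths are illustrative, and you should double-check them, since dd overwrites data without asking.

dd if=/dev/sdb of=/mnt/evidence/sdb.img bs=4M conv=noerror,sync status=progress
dd if=/dev/zero of=/dev/sdb bs=4M status=progress

The first command creates a raw image of an evidence drive, continuing past read errors; the second forensically wipes the drive by overwriting it with zeros.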

9) Sleuth kit (Autopsy)

Sleuth Kit is an open source digital forensics toolkit that can be used to perform in-depth analysis of various file systems (FAT, NTFS, EXT2/3, etc.) and raw images. Autopsy is a graphical interface for Sleuth Kit (a command line tool). It comes with features like timeline analysis, hash filtering, file system analysis and keyword searching, with the ability to add other modules for extended functionality.

autopsy

When you launch Autopsy, you can choose to create a new case or load an existing one. To create a new case you will need to load a forensic image to start analysis and once the analysis process is complete, use the nodes on the left hand pane to choose which results to view.

10) Xplico

Xplico is an open source network forensic analysis tool. It is basically used to extract useful data from applications which use Internet and network protocols. It supports most of the popular protocols, including HTTP, IMAP, POP, SMTP, SIP, TCP, UDP and others. The tool's output data is stored in an SQLite or MySQL database. It also supports both IPv4 and IPv6. It is already available in the Kali Linux, DEFT, Security Onion and Matriux security distributions.
xplico

Conclusion

This article covered the contribution of open source to the digital forensics field. Free and well-known tools for different areas of digital forensics were discussed, and several Linux distributions which contain many free forensics tools were listed.

The post 10 Best Known Forensics Tools That Works on Linux appeared first on LinOxide.

How To Create Ceph Storage Cluster on Ubuntu 16.04


Ceph is one of the most exciting open source storage technologies to come out in recent years. Scalable to exabytes, and extendable to multiple datacenters, the Ceph developers have made it easy for System Administrators and Infrastructure Architects to deploy their software. This article will offer a step-by-step guide on how to create a basic Ceph storage cluster. This has been tested on Ubuntu 16.04. Take note that unless otherwise stated, all commands are executed as root. Also note that when this document mentions "all Ceph nodes", this includes the Admin node as well.

General Setup

In our example, we will create a basic three-node Ceph cluster, each with two OSDs. We will use the hostname convention "Storage-x", where "x" is a number from 1 to 3, used to refer to specific nodes. We will use an external computer (could be your own computer or laptop) as the ceph Admin Node.

Network Setup

Each node will be on the same private network, with a gateway through which the Internet can be accessed.
The Admin node should be on the same network as well, but does not need to be available to the network all the time. Consequently, your work computer may be the Admin node, and may use a VPN to connect to the Ceph nodes' network.

Ceph uses TCP port 6789 for Ceph Monitor nodes and ports 6800-7100 for Ceph OSDs in the public zone. For example, to open the monitor port with iptables:

sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6789 -j ACCEPT
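With example values filled in (eth0 as the interface and 192.168.1.0/24 as the cluster network; adjust both to your setup), that becomes:

sudo iptables -A INPUT -i eth0 -p tcp -s 192.168.1.0/24 --dport 6789 -j ACCEPT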

Prepare Nodes

Each storage node needs to be synchronized, so we will install ntp on them, and make sure that we can access them via SSH through the Network.

sudo apt install ntp ssh

Make sure that each Ceph node's hostname is resolvable from all Ceph nodes. On each Ceph node, edit the /etc/hosts file and add the following:

[Admin Node IP Address] admin-node
[Storage-1 IP Address] Storage-1
[Storage-2 IP Address] Storage-2
[Storage-3 IP Address] Storage-3

Substitute each node's IP address accordingly.
To test if resolution works, do the following:

ping admin-node
ping Storage-1
ping Storage-2
ping Storage-3

Make sure that the admin-node hostname resolves to the Admin Node's external network IP Address, not the loopback IP Address (127.0.0.1).

On each Ceph node (meaning the Admin node and all Storage nodes), we will add the Ceph Ubuntu package repository to apt, then update the local cache with the new repository's contents:

wget -q -O- 'https://download.ceph.com/keys/release.asc' | apt-key add -
echo deb http://download.ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | tee /etc/apt/sources.list.d/ceph.list
apt-get update
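The {ceph-stable-release} placeholder stands for a release codename. For example, with Jewel (the current stable release at the time of writing), the repository line becomes:

echo deb http://download.ceph.com/debian-jewel/ $(lsb_release -sc) main | tee /etc/apt/sources.list.d/ceph.list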

Create a cephadmin user on all Ceph nodes. This user will be used to install and administer Ceph on the whole cluster of nodes, so make sure to maximize security for the credentials of this user.

useradd -m -s /bin/bash cephadmin

Set Up Password-less SSH

The Ceph install scripts and tools from the Admin node will need to be able to access all of the members of the cluster passwordlessly.
On the Admin node, switch to the cephadmin user and create an SSH key:

su - cephadmin
ssh-keygen -t rsa

Copy the SSH key you generated over to the three Storage nodes:

ssh-copy-id Storage-1
ssh-copy-id Storage-2
ssh-copy-id Storage-3

As the cephadmin user on your Admin node, test if the passwordless ssh into the Storage nodes now properly work:

ssh Storage-1
ssh Storage-2
ssh Storage-3

Set Up Password-less Sudo

Now that passwordless access via SSH is set up, configure passwordless sudo for the cephadmin user on all of the Ceph nodes:

visudo

You should see the following:

#
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

# Host alias specification

# User alias specification

# Cmnd alias specification

# User privilege specification
root ALL=(ALL:ALL) ALL

# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL

# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL

# See sudoers(5) for more information on "#include" directives:

#includedir /etc/sudoers.d

Add the following line at the bottom:

cephadmin ALL=(ALL) NOPASSWD: ALL

This should now allow passwordless sudo for the cephadmin user on all Ceph nodes.

Install ceph-deploy

Ceph-deploy is a tool created by the Ceph developers to facilitate quick deployments of Ceph clusters by scripting the individual steps needed to deploy a node. We will take advantage of this by installing the tool on our Admin node.

apt install ceph-deploy

On the Admin node, switch to the cephadmin user we created earlier, switch to the home directory, and create a subdirectory that will be used to contain all of the files needed for deployment and management of our Ceph cluster:

su cephadmin
cd
mkdir my-first-ceph-cluster
cd my-first-ceph-cluster

At this point, all of the deployment commands are to be executed only from within /home/cephadmin/my-first-ceph-cluster. Administration commands are to be run from within the same directory as well.

Ceph Configuration

Assign all Storage Nodes as Monitor Nodes:

ceph-deploy new Storage-1 Storage-2 Storage-3

Within your working directory, you should now see that files have been generated by ceph-deploy, including keyring files and the Ceph config file.

Add the following line to the ceph.conf file:

osd pool default size = 2

Since we only have two OSDs per Storage Node in our cluster, this will allow Ceph to be satisfied with having only a single extra copy of every object we store in it.
Add the following line to ceph.conf as well:

public network = {ip-address}/{netmask}

Where you will replace the network part with the actual values. For example, if your storage nodes are on the 192.168.1.0/24 network, that should be the value. Do not be confused when it mentions "public network"; it only refers to a network external to the Ceph cluster. Internal replication networks are not covered by this tutorial.
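Putting the two additions together, and using the example network above, the lines added to ceph.conf would look like this:

osd pool default size = 2
public network = 192.168.1.0/24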

Deploy Ceph

The software and config files will now be installed and copied over to the Ceph Nodes.

ceph-deploy install admin-node Storage-1 Storage-2 Storage-3

This will install all base Ceph packages to the nodes.

Install and Configure Ceph Monitor software to the Storage Nodes:

ceph-deploy mon create-initial

Ceph OSDs

While it is possible to use directories as OSDs, it is not recommended in a production setup. Assuming that the Ceph OSDs to be used on each of the Storage Nodes are /dev/sda and /dev/sdb, we have Ceph prepare the disks for use.

WARNING: The following command will destroy existing data on the specified OSDs, so care should be taken that there are no mistakes in executing the command.

ceph-deploy osd prepare Storage-1:/dev/sda
ceph-deploy osd prepare Storage-1:/dev/sdb
ceph-deploy osd prepare Storage-2:/dev/sda
ceph-deploy osd prepare Storage-2:/dev/sdb
ceph-deploy osd prepare Storage-3:/dev/sda
ceph-deploy osd prepare Storage-3:/dev/sdb

If the preceding commands run without any errors, then the OSDs are ready, and we can now activate them as a running resource in the cluster:

ceph-deploy osd activate Storage-1:/dev/sda
ceph-deploy osd activate Storage-1:/dev/sdb
ceph-deploy osd activate Storage-2:/dev/sda
ceph-deploy osd activate Storage-2:/dev/sdb
ceph-deploy osd activate Storage-3:/dev/sda
ceph-deploy osd activate Storage-3:/dev/sdb

Finalization

Copy over the admin keyrings to each Ceph node so that Ceph administration on each node is possible:

ceph-deploy admin admin-node Storage-1 Storage-2 Storage-3

Check the status of all OSDs on all Storage Nodes:

ceph osd tree

Check the overall status of your Ceph cluster:

ceph health

If you get a

HEALTH_OK

it means that the cluster is working properly.

If you wish to see more cluster statistics, the following command should do:

ceph status

Conclusion

We now have a three-node Ceph cluster up and running. With this setup, the cluster may lose one node, incur no data loss, and continue to serve requests. Highly-available monitor services are available on each storage node as well. This is a very basic production-level setup. If you wish to learn more, head over to the official Ceph documentation for additional information.

The post How To Create Ceph Storage Cluster on Ubuntu 16.04 appeared first on LinOxide.

How To Run CentOS Atomic Host on VirtualBox


In an increasingly containerized IT environment, tools and platforms that make container management and deployment easy are becoming popular. CentOS Atomic Host is a micro-OS that provides a platform whose purpose is to easily deploy and scale containers. It is created by Project Atomic, which is a Red Hat-sponsored organization. Comparable to CoreOS, it provides a software stack consisting of the Linux kernel and GNU tools, Docker, and Kubernetes. This tutorial will show how to test it out on VirtualBox on your Linux-based desktop system.

Obtain the CentOS Atomic Host

Navigate to the official Atomic Host download page.

Download Atomic Host
Download the CentOS-based image.

Convert Image to VirtualBox format

Since the image is in qcow2 format, we will use qemu-img to convert the file to a vdi image that VirtualBox can work with. In Ubuntu, qemu-img is provided by the qemu-utils package.

qemu-img convert -f qcow2 -O vdi filename-of-downloaded-image.qcow2 converted-image-filename.vdi

Create a Metadata ISO

The Atomic Host image is designed to be run on a cloud infrastructure, so it relies on metadata from a metadata server to initialize the Operating System through cloud-init. This means that the hostname, user password, and other system components cannot be initialized by Virtualbox. This also means that we cannot log in or use the VM. Luckily, the Atomic Host image creators have configured the image to be able to pull metadata from an optical disk. We can use Virtualbox to serve metadata through an ISO file that is presented as a virtual optical drive to the Atomic Host VM.

In your working directory, create a file named meta-data, and insert the following contents:

instance-id: atomic-host001
local-hostname: atomic01.example.org

Now, create a file named user-data:

#cloud-config
password: atomic
ssh_pwauth: True
chpasswd: { expire: False }

ssh_authorized_keys:
- Insert your ssh public key here

Note that the line with the '#' at the start is not a comment; it is required as part of the syntax used by cloud-init.

With the necessary information in place in the files, create an ISO image that contains the two files:

genisoimage -output init.iso -volid cidata -joliet -rock user-data meta-data

Create Atomic Host Virtual Machine

With a ready VM image file and the required metadata ISO, you are now ready to create the Virtual Machine.
Run Virtualbox, and create a new Virtual Machine. For the Hard disk section, Choose "Use an existing virtual hard disk file", and click on the folder icon on the right to point to the Atomic Host image.

Virtualbox VM Creation Window

When the Virtual Machine has been created, right-click on it in the Virtual Machines list, and click on "Settings". Click on "Storage".

Virtual Machine Settings Window - Initial

You should see the "Empty" virtual disk drive. Click on it, then click on the optical disk icon towards the top right of the screen. Click on "Choose Virtual Optical Disk File" from the drop-down list. Navigate to and select your the metadata ISO that was created earlier.

Virtual Machine Settings Window

You are now ready to start your Atomic Host VM. Start it up. It will boot from the image file, and will automatically load the metadata from the virtual optical drive.

The CentOS Atomic Host Login Console

Note that in the image above, the system's hostname has already been changed to the local-hostname value that was set in the metadata ISO files. The default username is "centos". Use the password you gave in the metadata file.

Conclusion

We now have an Atomic Host that we can use for testing. After logging in through the console, you may want to find out its IP address with "ifconfig" to be able to work through SSH. As a starting point, the standard Docker command suite, such as "docker", and the Kubernetes commands such as "kubectl" are available right out of the gate. Enjoy!
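For instance, a first session might look like the following; the address is a placeholder for whatever IP your VM was assigned.

ssh centos@192.168.56.101
docker run --rm hello-world
kubectl version

The first command logs in as the default user from the metadata, and the other two verify that the container tooling responds.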

The post How To Run CentOS Atomic Host on VirtualBox appeared first on LinOxide.

How to Install and Use Arango Database on Ubuntu16.04


Arango database (ArangoDB) is a multi-model database system, meaning one that uses a combination of key-value pairs, documents and graphs to store data. It has a flexible data model for documents and graphs. It is a general purpose database and provides all the features needed for a modern web application. Because of the web interface that it provides, the database is easy to use. Its latest version (v3.0) was released on June 23rd and comes with improved capabilities. In this tutorial, we will understand how to install and use ArangoDB v3.0.2 on Ubuntu 16.04.

Installation

Before proceeding with the installation, we need to set up the repository by downloading the public key (Release key) from the repositories, adding the key, and then adding the repository.

NOTE: Please precede all commands with 'sudo' if you are not the root user.

Execute the following command:

$ wget https://www.arangodb.com/repositories/arangodb3/xUbuntu_16.04/Release.key

This will save the Release.key in the current directory. The output of this command will be as follows:

root@ubuntu:~# wget https://www.arangodb.com/repositories/arangodb3/xUbuntu_16.04/Release.key
--2016-07-13 13:47:53--

https://www.arangodb.com/repositories/arangodb3/xUbuntu_16.04/Release.key
Resolving www.arangodb.com (www.arangodb.com)... 85.214.140.94
Connecting to www.arangodb.com (www.arangodb.com)|85.214.140.94|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1003 [application/pgp-keys]
Saving to: ‘Release.key’

Release.key 100%[===================>] 1003 --.-KB/s in 0s

2016-07-13 13:47:55 (41.4 MB/s) - ‘Release.key’ saved [1003/1003]

Let us now add this key.

$ apt-key add Release.key

We can add the repository by using 'apt-add-repository' command. This requires installing 'software-properties-common'. If not already installed, install this first and then proceed to add the repository.

$ apt install software-properties-common

$ apt-add-repository 'deb https://www.arangodb.com/repositories/arangodb3/xUbuntu_16.04/ /'

$ apt-get update

Now we are ready to install ArangoDB3

$ apt-get install arangodb3=3.0.2

While configuring the packages, it asks you to enter a password for the database root user account. Once provided, it will complete the database setup.

Configuring packages

This command installs and starts ArangoDB. We can verify that the database is running by executing the following command:

$ service arangodb status

Here is the output from the above command:

root@ubuntu:~# service arangodb status
● arangodb3.service - LSB: arangodb
Loaded: loaded (/etc/init.d/arangodb3; bad; vendor preset: enabled)
Active: active (running) since Wed 2016-07-13 13:59:40 UTC; 18min ago
Docs: man:systemd-sysv-generator(8)
CGroup: /system.slice/arangodb3.service
├─14720 /usr/sbin/arangod --uid arangodb --gid arangodb --pid-file /var/run/arangodb/arangod.pid --temp.path /var/tmp/arangod --log.
└─14721 /usr/sbin/arangod --uid arangodb --gid arangodb --pid-file /var/run/arangodb/arangod.pid --temp.path /var/tmp/arangod --log.

Jul 13 13:59:37 ubuntu systemd[1]: Starting LSB: arangodb...
Jul 13 13:59:37 ubuntu arangodb3[14662]: * Starting arango database server arangod
Jul 13 13:59:40 ubuntu arangodb3[14662]: {startup} starting up in daemon mode
Jul 13 13:59:40 ubuntu arangodb3[14662]: ...done.
Jul 13 13:59:40 ubuntu systemd[1]: Started LSB: arangodb.
Jul 13 13:59:40 ubuntu arangodb3[14662]: changed working directory for child process to '/var/tmp'

Accessing ArangoDB using command line

ArangoDB ships with 'arangosh', the command line interface to access the DB. We can perform all administrative tasks from this client itself. It is invoked by executing the 'arangosh' command:

$ arangosh

When asked for a password, provide the one that was given at the time of installing the package.

ArangoDB CLI

If we need any help within the shell, we can execute the help() command:

arangosh [_system]> db._help();

Creating a user

When ArangoDB is first installed, it creates, by default, a database '_system' and a user 'root'. We can either add users within this database or create our own database and add users to it. In order to add a user to the _system database, execute the following command:

arangosh [_system]> require("org/arangodb/users").save(userid, password);

It produces an output similar to what is shown below:

127.0.0.1:8529@_system> require("org/arangodb/users").save("user1", "userpwd1");

{
"user" : "user1",
"active" : true,
"extra" : {
},
"changePassword" : false,
"code" : 201
}

The  examples in the following picture show how to create a new database using arangosh and add a user to it.

Commands to create a db and users

As you can see, a new database by the name 'sample' is created. We can switch to this DB using the 'useDatabase' command and then create users within it.

Granting access to a user

By default, users will not have permission to access any database. We can grant access rights to one or more databases by using the 'grantDatabase()' command. A user who has access to the _system database is known as a superuser.

users.grantDatabase(user, database)

In the first example, let's give 'user1' permission to access the _system DB.

arangosh [_system]> require("org/arangodb/users").grantDatabase("user1","_system");

Revoking a user

To revoke a user's access to a database, we can use the 'revokeDatabase' command.

users.revokeDatabase(user, database)

Example:

arangosh [_system]> require("org/arangodb/users").revokeDatabase("user1","_system");

A few other useful commands

 Replace - Looks up an existing ArangoDB user and replaces its user data

users.replace(user, password, active, extra)

Example:

arangosh [_system]> require("org/arangodb/users").replace("user1", "new_passwd");

This function will not work from the web interface.

Update - Update an existing ArangoDB user with a new password and other data

users.update(user, password, active, extra)

Example:

arangosh [_system]> require("org/arangodb/users").update("user1", "updated_passwd");

isValid - Verify whether a given combination of user and password is valid or not

users.isValid(user, password)

Example:

arangosh [_system]> require("org/arangodb/users").isValid("user1", "updated_passwd");

true

all - List all the existing users of the database

users.all()

Example:

arangosh [_system]> require("org/arangodb/users").all();

remove - Remove an existing user from the database

users.remove(user)

Example:

arangosh [_system]> require("org/arangodb/users").remove("user1");

Document - Fetch an existing user from the ArangoDB database

users.document(user)

Example:

arangosh [_system]> require("org/arangodb/users").document("user1");

This command fails if the user is not found in the database.

ArangoDB Web Interface

ArangoDB provides a built-in web interface for performing administrative tasks. Before accessing it, we need to set up the server's access point.

Configuration

Configuration files arangod.conf and arangosh.conf are located in '/etc/arangodb3'. They need to be edited to add the server's IP address against the line containing 'endpoint'.

root@ubuntu:~# vi /etc/arangodb3/arangod.conf

endpoint = tcp://139.162.53.68:8529

root@ubuntu:~# vi /etc/arangodb3/arangosh.conf

[server]
endpoint = tcp://139.162.53.68:8529
authentication = true

In the latest version, authentication is enabled by default. Hence, no need to change it.  Let us now restart the arangodb service.

service arangodb3 restart

Access

To access the database from your browser, point it to the following if you are on the local system:

http://localhost:8529

or provide the ip-address of the server and the port if you are accessing from a remote system's browser:

http://your-server-ip:8529

This will open up the login screen for the _system db by default.

ArangoDb login screen

If you want to switch to a different database, provide the database name in the URL as shown:

http://your-server-ip:8529/_db/sample

Once you login using username and password, you will see the below screen:

ArangoDB WebInterface - Dashboard

Database Management

If we go to the 'Databases' section, we will be able to see the databases that we created using the arangosh interface. To create a new database, click on 'Add Database'. This pops up a screen asking for the database name and the user. Enter these details and press the 'create' button.

Creating a new DB

Collections and Documents

Collections are like tables in an RDBMS. We can create documents in these collections and store the required data.

Creating collections

Go to the 'Collections' section and click on 'Add Collection'. Provide the name of the new collection to be created and save.

New Collection

Creating Documents

The newly created collection will not have any documents in it. Documents can be compared to rows in a table. To create a new document, click on the green '+' sign in the upper right hand corner of the screen. Provide a key if necessary and press the 'create' button.

Create a new document

Adding Data to documents

We are now ready to add some data into the newly created document.  Open the document that was previously created and change the editor mode to 'Code'.

Changing editor mode

Now let us fill up the editor with some data (JSON object) for the document.

{
"tutorialName": "How to install ArangoDB3",
"platform": "Ubuntu16.04",
"documentType": "Tutorial",
"covers": [
"installation",
"shellinterface",
"webinterface"]
}

Save the data and change the editor mode back to 'Tree' for a more human-readable data format.

Data in tree mode

ArangoDB Query Language(AQL)

ArangoDB ships with a query editor known as the AQL editor. With its help, we can send queries to get results from the configured database servers. It can be used for simple as well as complex queries: one can query across collections, iterate over graphs, and more.
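As a small illustration, and assuming the collection is named Linux_Tutorial as in the query shown below, typical AQL queries look like this; the attribute names come from the sample document created earlier.

FOR doc IN Linux_Tutorial
    RETURN doc

FOR doc IN Linux_Tutorial
    FILTER doc.platform == "Ubuntu16.04"
    RETURN doc.tutorialName

The first query returns every document in the collection; the second returns only the tutorial names of documents matching the given platform.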

Query submission

As seen in the above image, submitting the query 'Return Linux_Tutorial' and pressing the 'Execute' button gives output similar to what is shown below:

Result of a Query

Conclusion

In this tutorial, we have learned how to install the latest version of ArangoDB on Ubuntu 16.04, with a quick overview of its features, including arangosh and the web interface. It combines the flexibility of document databases with the power of graph databases. Being an open-source database with a flexible data model, it lets the user build high performance applications. Now that you know how to use it, go give it a try!

The post How to Install and Use Arango Database on Ubuntu16.04 appeared first on LinOxide.

Powerful SSH Command Options with Examples on Linux


SSH is a popular, powerful, software-based approach to network security. It is used for logging into a remote machine and for executing commands on it. Whenever data is sent by a computer to the network, SSH automatically encrypts it. It is designed to provide the best security when accessing another computer remotely. The SSH server, by default, listens on the standard TCP port 22.

In this guide, we will discuss how to use SSH to connect to a remote system.

Basic Syntax

ssh ec2-user@52-66-84-114

Once you have connected to the server, a password prompt will appear to verify your identity (unless password-less connectivity has been established); provide the password to connect to the server.

Later, we will cover how to generate keys to use instead of passwords.
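Until then, here is a minimal sketch of key-based login setup (the host and user are assumptions):

# generate an RSA key pair, then install the public key on the remote host
ssh-keygen -t rsa -b 4096
ssh-copy-id ec2-user@52-66-84-114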

To exit back into your local session, simply type:

[shaha@oc8535558703 ~]$ exit

There are two main configuration files for SSH.

1) ~/.ssh/config ( Per-user's configuration file )

This is the per-user configuration file, used by the SSH client. It must have strict permissions: read/write for the user, and not accessible by others. Any parameter set in this file applies when accessing another computer remotely. This file is called the client configuration file.

[shaha@oc8535558703 ~]$ ls -lrt ~/.ssh/config
-rw-------. 1 shaha shaha 988 Jul 19 23:54 /home/shaha/.ssh/config
[shaha@oc8535558703 ~]$
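A sample ~/.ssh/config entry might look like this (the host alias and values are assumptions); afterwards, typing 'ssh ec2-demo' is enough to connect:

Host ec2-demo
HostName ec2-52-66-84-114.ap-south-1.compute.amazonaws.com
User ec2-user
Port 22
IdentityFile ~/.ssh/id_rsa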

2) /etc/ssh/ssh_config ( system-wide configuration file )

This is the system-wide configuration file. It provides defaults for values that are not specified in the user's configuration file, and for users who do not have a configuration file. This file must be world-readable; every parameter defined in it is visible to all users.

[root@oc8535558703 ~]# ls -rlt /etc/ssh/ssh_config
-rw-r--r--. 1 root root 2047 Apr 26 16:36 /etc/ssh/ssh_config
[root@oc8535558703 ~]#

SSH Command Line Options

StrictHostKeyChecking

When you connect to a host for the first time, ssh asks you to verify the host key. If you would like to bypass this verification step, you can set the "StrictHostKeyChecking" option to "no" on the command line.

This option disables the prompt and automatically adds the host key to the ~/.ssh/known_hosts file.

$ ssh -oport=922 -o "StrictHostKeyChecking=no" user@172.23.XX.XX

ConnectTimeout

Suppose you are executing a script over password-less SSH on a remote host and want a timeout, so that if a host is unreachable the script drops that connection attempt and continues with its remaining lines. ConnectTimeout limits how long the client waits while establishing the connection:

for ip in ${IP} ; do
ssh -o BatchMode=yes -o StrictHostKeyChecking=no -o ConnectTimeout=10 -l ${USERNAME} ${SCRIPT_HOST} "${COMMAND} -i $ip || echo timeout" >> ./myscript.out
done
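A minimal standalone use (host and command are examples):

# give up if the TCP connection is not established within 10 seconds
ssh -o ConnectTimeout=10 ec2-user@52-66-84-114 uptime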

BatchMode

If you use ssh -o "BatchMode=yes" and password-less connectivity is enabled, the command executes successfully on the remote host; otherwise it returns an error and continues.

Batch mode command execution using SSH — success case

ssh -o "batchmode=yes" ec2-user@ec2-52-66-11-114.ap-south-1.compute.amazonaws.com who

[Note: This will display the output of remote-host's who command]

Batch mode command execution using SSH — Failure case

$ ssh -o "batchmode=yes" ec2-user@ec2-52-66-84-114.ap-south-1.compute.amazonaws.com who
Permission denied (publickey,password).
[ec2-user@ip-172-31-13-103 ~]$

Note: If you didn't use -o "BatchMode=yes", the above command would have asked for the password for my account on the remote host. This is the key difference in using the BatchMode=yes option.

Bind IP Example

ssh -oPort=922 -oBindAddress=172.18.XX.X a2308078@41.223.XX.XX

SSH / OpenSSH / Port Forwarding

There are three types of port forwarding with SSH:

1. Local port forwarding : connections from the SSH client are forwarded via the SSH server, then to a destination server

2. Remote port forwarding : connections from the SSH server are forwarded via the SSH client, then to a destination server

3. Dynamic port forwarding : connections from various programs are forwarded via the SSH client, then via the SSH server, and finally to several destination servers

Local Port Forwarding

ssh -L 8080:172.18.19.23:80 -L 12345:172.18.19.20:80 user@ssh-server

This would forward two connections through the SSH server (user@ssh-server is a placeholder for your gateway host), one to 172.18.19.23, the other to 172.18.19.20. Pointing your browser at http://localhost:8080/ would download pages from 172.18.19.23, and pointing your browser at http://localhost:12345/ would download pages from 172.18.19.20.
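Once the tunnel is up, you can exercise it from another terminal (a quick check):

# fetch a page from 172.18.19.23 through the local end of the tunnel
curl http://localhost:8080/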

Remote Port Forwarding

ssh -R 5900:localhost:5900 ec2-user@ec2-52-66-84-114.ap-south-1.compute.amazonaws.com

The -R option specifies remote port forwarding. For the duration of the SSH session, users on ec2-52-66-84-114.ap-south-1.compute.amazonaws.com would be able to access your desktop by connecting a VNC client to port 5900 on that computer (if you had set up a shared desktop).

Dynamic Port Forwarding

ssh -C -D 1001 User@ec2-52-66-84-114.ap-south-1.compute.amazonaws.com

The -D option specifies dynamic port forwarding. 1080 is the standard SOCKS port; although you can use any port number (1001 here), some programs will only work if you use 1080. -C enables compression, which speeds the tunnel up when proxying mainly text-based information (like web browsing), but can slow it down when proxying binary information (like downloading files).

Next you would tell Firefox to use your proxy:

go to Edit -> Preferences -> Advanced -> Network -> Connection -> Settings...
check "Manual proxy configuration"
make sure "Use this proxy server for all protocols" is cleared
clear "HTTP Proxy", "SSL Proxy", "FTP Proxy", and "Gopher Proxy" fields
enter "127.0.0.1" for "SOCKS Host"
enter "1001" (or whatever port you chose) for Port.
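Alternatively, you can test the SOCKS tunnel from the command line with curl (using the port chosen above):

# route a request through the SOCKS proxy created by ssh -D
curl --socks5-hostname 127.0.0.1:1001 http://example.com/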

Forwarding GUI Programs

ssh -X User@ec2-52-66-84-114.ap-south-1.compute.amazonaws.com

Once the connection is made, type the name of your GUI program on the SSH command-line:

firefox &

Another example

ssh -X ec2-user@ec2-52-66-84-114.ap-south-1.compute.amazonaws.com

[ec2-user@ip-172-31-13-103 ~]$ xeyes &

The -X option enables X11 forwarding. The related -Y option enables trusted X11 forwarding; trusted X11 forwarding is not subjected to the X11 SECURITY extension controls.

PORT

ssh -oPort=922 -i "EC2_KEY_PAYER.pem" -v ec2-user@ec2-52-66-84-114

Port to connect to on the remote host. This can be specified on a per-host basis in the configuration file.

Use Configuration files from command line

ssh -F /export/oracle/db/config/ssh/config.922pw -f svwprd1b@172.23.XX.XX -t "rm /home/oracle11/work/datastage/testing_ssh"

If a configuration file is given on the command line, the system-wide configuration file (/etc/ssh/ssh_config) will be ignored. The default for the per-user configuration file is ~/.ssh/config.

We can create any configuration file for SSH connectivity.

The command above reads all configuration from the config file, puts ssh in the background, and then executes the command on the remote server.

ssh -F /var/dcs/db/confi/config.922 -f -N svwprd1b@172.24.X.70 -t "rm /svw/svwprd1b/work/svwprd1b/testing_ssh"

This one reads all configuration from the config file and runs in the background; here -N prevents execution of a remote command:

-f puts ssh in the background
-N makes it not execute a remote command

Find version of the SSH command

We can find the version of SSH installed on the system using the -V option. With -V, ssh simply prints its version and exits, so no host argument is needed:

ssh -V

OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013

-v option: run the ssh command in verbose mode.

This causes ssh to print debugging messages about its progress, which is helpful in debugging connection, authentication and configuration problems. Multiple -v options increase the verbosity; the maximum is 3.

Debugging the SSH Client

When we are not able to connect to the remote host, it is good to debug and find the exact error messages causing the issue. Use the -v option for debugging the SSH client. Below is the output of an ssh command in verbose mode.

[shaha@oc8535558703 ~]$ ssh -v ec2-user@ec2-52-66-84-114.ap-south-1.compute.amazonaws.com
OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to ec2-52-66-84-114.ap-south-1.compute.amazonaws.com [52.66.84.114] port 22.
debug1: Connection established.
debug1: identity file /home/shaha/.ssh/identity type -1
debug1: identity file /home/shaha/.ssh/identity-cert type -1
debug1: identity file /home/shaha/.ssh/id_rsa type 1
debug1: identity file /home/shaha/.ssh/id_rsa-cert type -1
debug1: identity file /home/shaha/.ssh/id_dsa type -1
debug1: identity file /home/shaha/.ssh/id_dsa-cert type -1
debug1: identity file /home/shaha/.ssh/id_ecdsa type -1
debug1: identity file /home/shaha/.ssh/id_ecdsa-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.6.1
debug1: match: OpenSSH_6.6.1 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.3
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host 'ec2-52-66-84-114.ap-south-1.compute.amazonaws.com' is known and matches the RSA host key.
debug1: Found key in /home/shaha/.ssh/known_hosts:35
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Offering public key: /home/shaha/.ssh/id_rsa
debug1: Authentications that can continue: publickey,password
debug1: Trying private key: /home/shaha/.ssh/identity
debug1: Trying private key: /home/shaha/.ssh/id_dsa
debug1: Trying private key: /home/shaha/.ssh/id_ecdsa
debug1: Next authentication method: password
ec2-user@ec2-52-66-84-114.ap-south-1.compute.amazonaws.com's password:
debug1: Authentication succeeded (password).
debug1: channel 0: new [client-session]
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
debug1: Sending environment.
debug1: Sending env XMODIFIERS = @im=ibus
debug1: Sending env LANG = en_US.UTF-8
Last login: Tue Jul 19 18:40:56 2016 from 223.188.198.5

__| __|_ )
_| ( / Amazon Linux AMI
___|\___|___|

https://aws.amazon.com/amazon-linux-ami/2016.03-release-notes/
4 package(s) needed for security, out of 13 available
Run "sudo yum update" to apply all updates.
[ec2-user@ip-172-31-13-103 ~]$

SSH Config File options

The /etc/ssh/ssh_config file is the system-wide configuration file for OpenSSH which allows you to set options that modify the operation of the client programs. The file contains keyword-value pairs, one per line, with keywords being case insensitive. Here are the most important keywords to configure your SSH for top security.

Edit the ssh_config file, vi /etc/ssh/ssh_config and add/or change, if necessary the following parameters:

# Site-wide defaults for various options

Host *
ForwardAgent no
ForwardX11 no
RhostsAuthentication no
RhostsRSAAuthentication no
RSAAuthentication yes
PasswordAuthentication yes
FallBackToRsh no
UseRsh no
BatchMode yes
CheckHostIP yes
StrictHostKeyChecking no
IdentityFile ~/.ssh/identity
Port 922

Description of config file parameter

Host *

The option Host restricts all forwarded declarations and options in the configuration file to be only for those hosts
that match one of the patterns given after the keyword. The pattern * means for all hosts up to the next Host keyword. With this option you can set different declarations for different hosts in the same ssh_config file.

ForwardAgent no

The option ForwardAgent specifies which connection authentication agent if any should be forwarded to the remote machine.

ForwardX11 no

ssh -o "ForwardX11=no" ec2-user@ec2-52-66-11-114.ap-south-1.compute.amazonaws.com

The option ForwardX11 is for people who use the X Window GUI and want to automatically redirect X11 sessions to the remote machine. Since we set up a server and don't have a GUI installed on it, we can safely turn this option off.

RhostsAuthentication no

ssh -o "RhostsAuthentication=no" ec2-user@ec2-52-66-11-114.ap-south-1.compute.amazonaws.com

The option RhostsAuthentication specifies whether we can try to use rhosts-based authentication. Because rhosts authentication is insecure, you shouldn't use this option.

RhostsRSAAuthentication no

ssh -o "RhostsRSAAuthentication=no" ec2-user@ec2-52-66-11-114.ap-south-1.compute.amazonaws.com

The option RhostsRSAAuthentication specifies whether or not to try rhosts authentication in concert with RSA host authentication.

RSAAuthentication yes

ssh -o "RSAAuthentication=yes" ec2-user@ec2-52-66-11-114.ap-south-1.compute.amazonaws.com

The option RSAAuthentication specifies whether to try RSA authentication. This option must be set to yes for better security on your sessions. RSA uses a public and private key pair created with the ssh-keygen(1) utility for authentication purposes.

PasswordAuthentication yes

ssh -o "PasswordAuthentication=yes" ec2-user@ec2-52-66-11-114.ap-south-1.compute.amazonaws.com

The option PasswordAuthentication specifies whether we should use password-based authentication. For strong security, this option must always be set to yes; it protects your server so that no one can connect without a password.

FallBackToRsh no

ssh -o "FallBackToRsh=no" ec2-user@ec2-52-66-11-114.ap-south-1.compute.amazonaws.com

The option FallBackToRsh specifies that if a connection via the ssh daemon fails, rsh should automatically be used instead. Recalling that the rsh service is insecure, this option must always be set to no.

UseRsh no

ssh -o "UseRsh=no" ec2-user@ec2-52-66-11-114.ap-south-1.compute.amazonaws.com

The option UseRsh specifies that rlogin/rsh services should be used on this host. As with the FallBackToRsh option, it must be set to no for obvious reasons.

BatchMode no

ssh -o "BatchMode=no" ec2-user@ec2-52-66-11-114.ap-south-1.compute.amazonaws.com

The option BatchMode specifies whether username and password querying on connect will be disabled. This option is useful when you create scripts and don't want to supply the password, e.g. scripts that use the scp command to make backups over the network.

CheckHostIP yes

ssh -o "CheckHostIP=yes" ec2-user@ec2-52-66-11-114.ap-south-1.compute.amazonaws.com

The option CheckHostIP specifies whether or not ssh will additionally check the IP address of the host it connects to against the known_hosts file, to detect DNS spoofing. It's recommended that you set this option to yes.

StrictHostKeyChecking no

ssh -o "StrictHostKeyChecking=no" ec2-user@ec2-52-66-11-114.ap-south-1.compute.amazonaws.com

The option StrictHostKeyChecking specifies whether or not ssh will automatically add new host keys to the $HOME/.ssh/known_hosts file, or never automatically add new host keys to the host file. This option, when set to yes, provides maximum protection against Trojan horse attacks. One interesting procedure with this option is to set it to no at the beginning, allow ssh to add automatically all common hosts to the host file as they are connected to, and then return to set it to yes to take advantage of this feature.

IdentityFile ~/.ssh/identity

The option IdentityFile specifies an alternate RSA authentication identity file to read. Also, multiple identity files may be specified in the configuration file ssh_config.

Cipher blowfish

The option Cipher specifies which cipher should be used for encrypting sessions. Blowfish uses 64-bit blocks and keys of up to 448 bits.

EscapeChar ~

The option EscapeChar specifies the session escape character for suspension.

Sample Configuration file for testing with parameter

We have created /export/oracle/db/config/ssh/config.922pw using multiple parameters for testing.

[shah@hostname:~]$ cat /export/oracle/db/config/ssh/config.922pw

# Site-wide defaults for some commonly used options. For a comprehensive
# list of available options, their meanings and defaults, please see the
# ssh_config(5) man page.

VerifyHostKeyDNS no
StrictHostKeyChecking no
UserKnownHostsFile /dev/null

Host 172.23.6.117 172.23.XX 172.24.XX 172.24.XX 10.56.xx.xx
Protocol 2,1
Compression yes
CompressionLevel 7
IdentityFile /var/dcs_6.0/db/dcs/config/ssh/ssh_keys/id_rsa_ime_prod
CheckHostIP no
PreferredAuthentications publickey,keyboard-interactive,password
LogLevel ERROR
ForwardAgent no
ForwardX11 yes
RhostsAuthentication no
RhostsRSAAuthentication no
RSAAuthentication yes
PasswordAuthentication yes
FallBackToRsh no
UseRsh no
BatchMode no
StrictHostKeyChecking no
Port 922
User cgi
Cipher blowfish
IgnoreUserKnownHosts yes
UserKnownHostsFile /dev/null
ServerAliveInterval 100

When we connect to the remote server with our configuration file, all of its parameters are applied to the SSH connection. Below is the output of an SSH connection using the config file:

[user@hostname:.ssh]$ ssh -F /export/oracle/db/config/ssh/config.922pw user@172.27.3.XX.XX
Last unsuccessful login: Fri Jul 15 12:10:33 WAT 2016 on ssh from 10.14.43.39
Last login: Fri Jul 15 14:55:14 WAT 2016 on ssh from 172.27.0.XX
[user@hostname:.ssh]$

The post Powerful SSH Command Options with Examples on Linux appeared first on LinOxide.


Script to Create OpenStack Node with DevStack on CentOS 7


A lot of times, it is necessary to have an OpenStack deployment that can be quickly set up for testing and training. DevStack is a series of scripts that help to set up OpenStack very quickly. It is mostly used for development environments and functional testing of OpenStack's different projects. This article will detail the steps needed to deploy OpenStack on a CentOS 7 system, using DevStack.

Generate Passwords

You will need to generate four passwords that will be used by DevStack to configure OpenStack:

    • OpenStack Admin
    • Database
    • RabbitMQ
    • Services

You can generate passwords using the openssl CLI tool:

openssl rand -hex 8

Software

On your CentOS 7 system, upgrade to the latest software packages, and install git as well, which will be needed to obtain the scripts.
As root:

yum update; yum install git

Create a user "devstack" to run DevStack with:

adduser devstack

Give sudo privileges to the devstack user by running visudo as root, and entering the following line into the file:

devstack ALL=(ALL) ALL

Switch to the devstack user, and change to its home directory:

su devstack
cd

Obtain the DevStack scripts by cloning the DevStack repository from GitHub:

git clone https://git.openstack.org/openstack-dev/devstack

Configuration

Create a local.conf file inside the cloned repository's directory (should be /home/devstack/devstack), and insert the following lines, substituting the passwords that were generated beforehand:

[[local|localrc]]
ADMIN_PASSWORD=OpenstackAdminPassword
DATABASE_PASSWORD=DatabasePassword
RABBIT_PASSWORD=RabbitMQPassword
SERVICE_PASSWORD=ServicesPassword

This is the most basic configuration that can be set. The official devstack configuration reference lists several possible setups and configuration options, should there be further customization required.

Run DevStack

Now that everything is in place, we are now ready to execute DevStack:

~/devstack/stack.sh

This will start the execution of the DevStack scripts. Depending on your Internet connection speed, it could take a while, as it will install packages and pull code from the Internet. On a 16 Mbps Internet connection, it took approximately an hour for DevStack to run to completion.

After stack.sh concludes its execution, you should see the last few lines of output as something like this:

=========================
DevStack Component Timing
=========================
Total runtime 3755

run_process 57
pip_install 847
restart_apache_server 18
wait_for_service 38
yum_install 717
git_timed 814
=========================

This is your host IP address: 192.168.2.108
This is your host IPv6 address: ::1
Horizon is now available at http://192.168.2.108/dashboard
Keystone is serving at http://192.168.2.108/identity/
The default users are: admin and demo
The password: secret
2016-07-11 09:25:25.644 | WARNING:
2016-07-11 09:25:25.644 | Using lib/neutron-legacy is deprecated, and it will be removed in the future
2016-07-11 09:25:25.644 | stack.sh completed in 3755 seconds.
[devstack@localhost devstack]$

This means that everything involving OpenStack installation and configuration went as expected.

You now have the following OpenStack services running on your node:

  • Keystone
  • Glance
  • Nova
  • Cinder

Screens

By running

screen -ls

You should see that DevStack created a screen session automatically. This screen contains multiple windows, wherein each OpenStack service runs. Connect to it by executing

screen -x

You may cycle to the next window by hitting Ctrl+a, then space.

Screen can be intimidating to beginners, and a screen quick-reference document often helps; a few common bindings are listed below.
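# standard GNU screen bindings useful inside the DevStack session
# Ctrl+a "   list all windows and pick one
# Ctrl+a n   switch to the next window
# Ctrl+a p   switch to the previous window
# Ctrl+a d   detach from the session, leaving services running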

You may now visit the OpenStack Dashboard by using a Web Browser to navigate to the URL that was presented by DevStack, in our case:

Horizon is now available at http://192.168.2.108/dashboard

The DevStack Dashboard (logged in)

Conclusion

Creating a test OpenStack deployment on CentOS 7 is pretty easy with DevStack. Once again, it should be noted that this is for testing purposes only, and is in no way recommended for production environments. Should further services or configuration customization be required, the DevStack configuration reference should be consulted.

The post Script to Create OpenStack Node with DevStack on CentOS 7 appeared first on LinOxide.

An Ultimate Guide to Secure Ubuntu Host


Ubuntu is often termed the most secure operating system available, but its default install has flaws just like every other operating system's. To remove these weaknesses, IT security specialists have issued guidelines to close your system's back doors and protect you from some of the common Ubuntu exploits. In this guide we will look at a few important security settings that every system administrator will want to apply to his server.

1. Harden boot settings

To prevent non root users from changing the boot loader configuration file which is /boot/grub/grub.cfg, set the owners and groups of this file to root. Execute the following command to change the ownership to root.

# chown root:root /boot/grub/grub.cfg

To prevent the non root users from reading the boot parameters, set the permission for boot loader file to read and write only. Execute the following command to achieve this benchmark.

# chmod og-rwx /boot/grub/grub.cfg

Also set a password for the boot loader, so that any unauthorized user trying to reboot the system must provide a password to proceed. This ensures an unauthorized user will be unable to change boot parameters, like disabling SELinux or changing the boot partition. Execute the following command to create a boot loader password.

# grub-mkpasswd-pbkdf2

Now edit the file /etc/grub.d/00_header (or a custom file such as /etc/grub.d/40_custom) and add the following lines, using the hash produced by grub-mkpasswd-pbkdf2.

set superusers="<user-list>"
password_pbkdf2 <user> <encrypted-password>

Remove the --unrestricted option from the CLASS parameter in /etc/grub.d/10_linux. This makes a password mandatory to proceed to the next step, i.e. editing boot parameters.

Update the grub

# update-grub

2. Secure file-system

Create separate partitions for different categories of data: user data in a /home partition, swap space in a swap partition, temporary files in a /tmp partition, variable data in /var, and so on. This prevents resource exhaustion and allows flexible mounting options based on the intended usage of the data.

2.1 Create partition for /tmp

The first reason for creating a separate partition for /tmp is that there is a risk of resource exhaustion, since the /tmp directory is world-writable. Also, making a separate partition for /tmp allows setting the noexec option, making it useless for an unauthorized user to execute code from there or hard-link to system setuid programs.

2.2 Set nodev option for /tmp

Set the nodev option for the /tmp partition to prevent users from creating block/character device files. Add nodev to the /tmp entry in /etc/fstab, then remount:

# mount -o remount,nodev /tmp

2.3 Set nosuid option for /tmp

To prevent users from creating set-userid files in the /tmp file system, add the nosuid option to the /tmp entry in /etc/fstab (since /tmp is used for temporary file storage), then remount:

# mount -o remount,nosuid /tmp

2.4 Set noexec option for /tmp

To prevent users from running executable binaries, set the noexec option for the /tmp partition. Add noexec to the /tmp entry in /etc/fstab, then remount:

# mount -o remount,noexec /tmp
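Put together, the /tmp entry in /etc/fstab might look like this (the device name is an assumption; adjust it to your layout):

/dev/mapper/vg0-tmp /tmp ext4 defaults,nodev,nosuid,noexec 0 2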

2.5 Create separate partition for /var

System daemons and other services temporarily store dynamic data in /var, and some of its directories may be world-writable. Therefore there is a chance of resource exhaustion in /var. To prevent this, create a separate partition for /var on new installations; for previously installed systems, use LVM to create a new partition.

2.6 Bind /var/tmp to /tmp

Bind mounting /var/tmp to /tmp allows /var/tmp to be protected in the same way as /tmp. It also prevents /var from being exhausted by temporary files in /var/tmp. Execute the following command to bind /var/tmp to /tmp:

# sudo mount --bind /tmp /var/tmp

To make it permanent, add the following line to /etc/fstab:

/tmp /var/tmp none bind 0 0

2.7 Create separate partition for /var/log

To protect sensitive audit data and protection against resource exhaustion, create a separate partition for /var/log in new installation and for previously installed system, use LVM to create new partition.

2.8 Create separate partition for /var/log/audit

The audit daemon stores log data in /var/log/audit directory. To protect against resource exhaustion as audit log can grow to a large size and also to protect sensitive audit data, create a separate partition for /var/log/audit in new installation and for previously installed system, use LVM to create new partition.

2.9 Create separate partition for /home

The users data are stored in /home directory. It is possible to restrict the type of files that can be stored in /home. To achieve this create a separate partition for /home in new installation and for previously installed system, use LVM to create new partition. Also a separate partition for /home protect against resource exhaustion.

2.10 Set nodev for /home

To prevent the /home directory from being used to define character and block special devices, set the nodev option so that users cannot create these types of files. Add nodev to the /home entry in /etc/fstab, then remount:

# mount -o remount,nodev /home

2.11 Set nodev for removable media

A user can defeat security controls by using character and block special devices on removable media to access sensitive device files like /dev/kmem. Add the nodev option to the relevant entries in /etc/fstab, then remount:

# mount -o remount,nodev { removable device like floppy or cdrom or USB stick etc. }

2.12 Set noexec to removable media

To prevent programs from being executed from removable media, so that no malicious programs can be introduced into the system, add the noexec option to the relevant entries in /etc/fstab:

# mount -o remount,noexec { removable device like floppy or cdrom or USB stick etc. }

2.13 Add nosuid to removable media

To prevent removable media from being used for setuid/setgid binaries, which would allow non-root users to place privileged programs on the system, add the nosuid option to the relevant entries in /etc/fstab:

# mount -o remount,nosuid { removable device like floppy or cdrom or USB stick etc. }

2.14 Add nodev option for /run/shm partition

To prevent users from creating special device files in the /run/shm partition, add the nodev option to its entry in /etc/fstab and remount. This ensures users will be unable to create devices in /run/shm:

# mount -o remount,nodev /run/shm

2.15 Add nosuid option to /run/shm partition

To prevent /run/shm from being used for setuid/setgid binaries, which would allow non-root users to place privileged programs on the system and execute them under someone else's uid and gid, add the nosuid option to its entry in /etc/fstab and remount:

# mount -o remount,nosuid /run/shm

2.16 Add noexec to /run/shm partition

To prevent the /run/shm partition from being used for executing programs, add the noexec option to its entry in /etc/fstab and remount:

# mount -o remount,noexec /run/shm

2.17 Set sticky bit on to world writable directories

To prevent users from deleting or renaming files in a world-writable directory that are not owned by them, set the sticky bit on:

# chmod +t /tmp
or
# chmod 1777 /tmp
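To audit which world-writable directories are still missing the sticky bit, the following sketch scans all local file systems (run as root):

# list world-writable directories that do not have the sticky bit set
df --local -P | awk '{if (NR!=1) print $6}' | xargs -I '{}' find '{}' -xdev -type d \( -perm -0002 -a ! -perm -1000 \) 2>/dev/null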

3. Discard legacy systems

Don't install or use the following legacy services and utilities, as they contain known vulnerabilities: NIS, RSH server/client, talk server/client, Telnet, TFTP, xinetd, chargen, daytime, echo, discard, time.

4. Discard special purpose services

Don't install or use the following services unless you really need them, as they have known vulnerabilities: X Window System, Avahi server, print server, DHCP server, LDAP, NFS and RPC, DNS server, FTP, Samba, SNMP, rsync, biosdevname. A few of the above services are indeed needed for day-to-day operation, like a DNS server. In that situation, it is advisable to install such servers on a separate host that does not contain any sensitive data.

5. Network configuration and firewall

5.1 Disable IP forwarding

To prevent the server from being used to forward packets, i.e. to act as a router, set the net.ipv4.ip_forward parameter to 0 in /etc/sysctl.conf

net.ipv4.ip_forward = 0

Now reload sysctl configuration

# sudo sysctl -p

5.2 Disable sendpacket redirect

An attacker can use a compromised host to send ICMP redirect packets to other routing devices to corrupt routing. To disable sending of redirect packets, set the net.ipv4.conf.all.send_redirects and net.ipv4.conf.default.send_redirects parameters to 0 in /etc/sysctl.conf

net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0

Now reload sysctl configuration

# sudo sysctl -p

5.3 Disable source route packet acceptance

Using source-routed packets, a user can reach private addresses on the system, since the route can be specified.
Set the net.ipv4.conf.all.accept_source_route and net.ipv4.conf.default.accept_source_route parameters to 0 in /etc/sysctl.conf

net.ipv4.conf.all.accept_source_route=0
net.ipv4.conf.default.accept_source_route=0

Now reload sysctl configuration

# sudo sysctl -p

5.4 Disable ICMP redirect acceptance

An attacker can alter the routing table to send packets to incorrect networks using bogus ICMP redirects, thus allowing the packets to be captured. To disable ICMP redirect acceptance, set the net.ipv4.conf.all.accept_redirects and net.ipv4.conf.default.accept_redirects parameters to 0 in /etc/sysctl.conf

net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0

Now reload sysctl configuration

# sudo sysctl -p

5.5 Disable Secure ICMP Redirect Acceptance

Secure ICMP redirects and ordinary ICMP redirects are almost the same; the only difference is that secure ICMP redirect packets come from a gateway. If the source gateway is compromised, an attacker can still update the routing table using secure ICMP redirects.
Set the net.ipv4.conf.all.secure_redirects and net.ipv4.conf.default.secure_redirects parameters to 0 in /etc/sysctl.conf to disable secure ICMP redirect acceptance.

net.ipv4.conf.all.secure_redirects=0
net.ipv4.conf.default.secure_redirects=0

Now reload sysctl configuration

# sudo sysctl -p

5.6 Log Suspicious Packets

Logging suspicious (martian) packets lets an administrator diagnose the system when an attacker is sending spoofed packets.

Set the net.ipv4.conf.all.log_martians and net.ipv4.conf.default.log_martians parameters to 1 in /etc/sysctl.conf to log them.

net.ipv4.conf.all.log_martians=1
net.ipv4.conf.default.log_martians=1

Now reload sysctl configuration

# sudo sysctl -p

5.7 Enable ignore broadcast request

To help prevent smurf attacks on the network, set net.ipv4.icmp_echo_ignore_broadcasts to 1, which makes the system ignore all ICMP echo and timestamp requests sent to broadcast and multicast addresses. Set the parameter in /etc/sysctl.conf

net.ipv4.icmp_echo_ignore_broadcasts=1

Now reload sysctl configuration

# sudo sysctl -p

5.8 Enable bad error message protection

Attackers can send responses that violate RFC-1122 in an attempt to fill system log files with useless error messages. Set the net.ipv4.icmp_ignore_bogus_error_responses parameter to 1 in /etc/sysctl.conf to ignore such bogus error responses.

net.ipv4.icmp_ignore_bogus_error_responses=1

Now reload sysctl configuration

# sudo sysctl -p

5.9 Enable RFC recommended source route validation

Using reverse path filtering, the kernel can determine whether a packet is valid; otherwise it will drop the packet.
Set the net.ipv4.conf.all.rp_filter and net.ipv4.conf.default.rp_filter parameters to 1 in /etc/sysctl.conf

net.ipv4.conf.all.rp_filter=1
net.ipv4.conf.default.rp_filter=1

Now reload sysctl configuration

# sudo sysctl -p

5.10 Enable TCP SYN cookies

An attacker can mount a DoS attack on the server by flooding it with SYN packets without completing the three-way handshake. To prevent this, set the net.ipv4.tcp_syncookies parameter to 1 in /etc/sysctl.conf

net.ipv4.tcp_syncookies=1

Now reload sysctl configuration

# sudo sysctl -p

5.11 Disable IPv6 router advertisement

Configure the server not to accept IPv6 router advertisements, since these can be used to trick it into routing traffic to compromised systems.
Set the net.ipv6.conf.all.accept_ra and net.ipv6.conf.default.accept_ra parameters to 0 in /etc/sysctl.conf

net.ipv6.conf.all.accept_ra=0
net.ipv6.conf.default.accept_ra=0

Now reload sysctl configuration

# sudo sysctl -p

5.12 Disable IPv6 redirect acceptance

Configure the server not to accept IPv6 ICMP redirects, since these can be used to trick it into routing traffic to compromised systems. It is recommended to set hard routes within the system to protect it from bad routes.

Set the net.ipv6.conf.all.accept_redirects and net.ipv6.conf.default.accept_redirects parameters to 0 in /etc/sysctl.conf

net.ipv6.conf.all.accept_redirects=0
net.ipv6.conf.default.accept_redirects=0

Now reload sysctl configuration

# sudo sysctl -p

5.13 Disable IPv6

If you do not use IPv6, disable it to reduce the attack surface of the system.
Edit the file /etc/sysctl.conf and add the following lines:

net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1

Now reload sysctl configuration

# sudo sysctl -p

5.14 Install TCP wrappers

Use TCP wrappers for all services that support TCP wrappers.

Install tcpd:

# apt-get install tcpd

5.15 Create /etc/hosts.allow

To ensure that only authorized systems can connect to the server, use /etc/hosts.allow.
Edit /etc/hosts.allow and add lines of the form:

ALL: <net>/<mask>, <net>/<mask>, ...
e.g. <net> = 192.168.10.0 , <mask> = 255.255.255.0
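A concrete example (addresses are assumptions) that restricts SSH to one subnet while allowing everything from the local host:

sshd: 192.168.10.0/255.255.255.0
ALL: 127.0.0.1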

5.16 Verify permissions on /etc/hosts.allow

It is important to protect /etc/hosts.allow from unauthorized write access. Execute the following command to find the permission of /etc/hosts.allow

# ls -l /etc/hosts.allow

-rw-r--r-- 1 root root 2055 Feb 15 11:30 /etc/hosts.allow

If the permission is incorrect then use the following command to correct it

# chmod 644 /etc/hosts.allow

5.17 Create /etc/hosts.deny

Deny access to the server using /etc/hosts.deny. The file /etc/hosts.deny is configured to deny all hosts that are not mentioned in /etc/hosts.allow. Create the file /etc/hosts.deny:

echo "ALL: ALL" >> /etc/hosts.deny

5.18 Verify permissions on /etc/hosts.deny

It is important to protect /etc/hosts.deny from unauthorized write access. Execute the following command to find the permission of /etc/hosts.deny

# ls -l /etc/hosts.deny
-rw-r--r-- 1 root root 2055 Feb 15 11:30 /etc/hosts.deny

5.19 Ensure firewall is active

To limit communication in and out of the box to specific IP addresses and ports, use a firewall. Ubuntu provides the Uncomplicated Firewall (UFW) to make firewall configuration easy.
Install UFW

# sudo apt-get install ufw

Activate ufw:

# sudo ufw enable

Example: allow the SSH and HTTP services.

# sudo ufw allow 22/tcp
# sudo ufw allow 80/tcp
# sudo ufw reload
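To review what is currently allowed (a quick check):

# show the active rule set, logging state and default policies
sudo ufw status verbose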

6. Logging and Auditing

By using a powerful audit framework, the system can track many event types for monitoring and auditing.
Install auditd using the following command:

sudo apt-get install auditd audispd-plugins

If needed, create proper start links for auditd in /etc/rc*.d for the relevant run levels by running the following command from each of the relevant directories:

# ln -s /etc/init.d/auditd S37auditd

6.1 Configure Audit Log Storage Size

The audit log file size should be chosen carefully so that it does not affect the system and no audit data is lost.
Set the max_log_file parameter (in megabytes) in /etc/audit/auditd.conf

max_log_file = <MB>

6.2 Disable System on Audit Log Full

The auditd daemon can be configured to halt the system when the audit logs are full. Set the following in /etc/audit/auditd.conf to notify the administrator by email and halt the system when the audit logs fill up.

space_left_action = email
action_mail_acct = root
admin_space_left_action = halt

6.3 Keep All Auditing Information

In high security contexts, the benefits of maintaining a long audit history exceed the cost of storing the audit history. Add the following line to the /etc/audit/auditd.conf file.

max_log_file_action = keep_logs

6.4 Record Events That Modify Date and Time Information

Unusual changes in the system date and/or time are an indication of unauthorized activity on the system, so such events are worth monitoring.
For 64 bit systems, add the following lines to the /etc/audit/audit.rules file.

-a always,exit -F arch=b64 -S adjtimex -S settimeofday -k time-change
-a always,exit -F arch=b32 -S adjtimex -S settimeofday -S stime -k time-change
-a always,exit -F arch=b64 -S clock_settime -k time-change
-a always,exit -F arch=b32 -S clock_settime -k time-change
-w /etc/localtime -p wa -k time-change

# Execute the following command to restart auditd

# sudo service auditd restart

For 32 bit systems, add the following lines to the /etc/audit/audit.rules file.

-a always,exit -F arch=b32 -S adjtimex -S settimeofday -S stime -k time-change
-a always,exit -F arch=b32 -S clock_settime -k time-change
-w /etc/localtime -p wa -k time-change

# Execute the following command to restart auditd

# sudo service auditd restart

6.6 Record Events That Modify User/Group Information

Unexpected changes to /etc/group, /etc/passwd, /etc/gshadow, /etc/shadow or /etc/security/opasswd are a clear indication that an unauthorized user is trying to hide their activities or compromise additional accounts.
Add the following lines to the /etc/audit/audit.rules file.

-w /etc/group -p wa -k identity
-w /etc/passwd -p wa -k identity
-w /etc/gshadow -p wa -k identity
-w /etc/shadow -p wa -k identity
-w /etc/security/opasswd -p wa -k identity

# Execute the following command to restart auditd

# sudo service auditd restart

6.7 Record Events That Modify the System's Network Environment

Unauthorized changes to the host and domain name of a system can break security parameters that are set based on those names; the rules below record such events.
For 64 bit systems, add the following lines to the /etc/audit/audit.rules file.

-a exit,always -F arch=b64 -S sethostname -S setdomainname -k system-locale
-a exit,always -F arch=b32 -S sethostname -S setdomainname -k system-locale
-w /etc/issue -p wa -k system-locale
-w /etc/issue.net -p wa -k system-locale
-w /etc/hosts -p wa -k system-locale
-w /etc/network -p wa -k system-locale

# Execute the following command to restart auditd

# sudo service auditd restart

For 32 bit systems, add the following lines to the /etc/audit/audit.rules file.

-a exit,always -F arch=b32 -S sethostname -S setdomainname -k system-locale
-w /etc/issue -p wa -k system-locale
-w /etc/issue.net -p wa -k system-locale
-w /etc/hosts -p wa -k system-locale
-w /etc/network -p wa -k system-locale

# Execute the following command to restart auditd

# sudo service auditd restart

6.8 Record Events That Modify the System's Mandatory Access Controls

Any change to files in /etc/selinux is an indication that an unauthorized user is attempting to modify access controls and change security contexts to gain access to the system.
Add the following lines to /etc/audit/audit.rules

-w /etc/selinux/ -p wa -k MAC-policy

# Execute the following command to restart auditd

# sudo service auditd restart

6.9 Collect Login and Logout Events

To monitor information related to login/logout/brute force attacks add the following lines to the /etc/audit/audit.rules file.

-w /var/log/faillog -p wa -k logins
-w /var/log/lastlog -p wa -k logins
-w /var/log/tallylog -p wa -k logins

# Execute the following command to restart auditd

# sudo service auditd restart

6.10 Collect Session Initiation Information

Monitor session initiation events. A system administrator can monitor logins occurring at unusual time, which could indicate an unauthorized activity.
Add the following lines to the /etc/audit/audit.rules file.

-w /var/run/utmp -p wa -k session
-w /var/log/wtmp -p wa -k session
-w /var/log/btmp -p wa -k session

# Execute the following command to restart auditd

# sudo service auditd restart

6.11 Collect Discretionary Access Control Permission Modification Events

Changes in file attributes can be an indication of intruder activity; the rules below record them.
For 64 bit systems, add the following lines to the /etc/audit/audit.rules file.

-a always,exit -F arch=b64 -S chmod -S fchmod -S fchmodat -F auid>=500 -F auid!=4294967295 -k perm_mod
-a always,exit -F arch=b32 -S chmod -S fchmod -S fchmodat -F auid>=500 -F auid!=4294967295 -k perm_mod
-a always,exit -F arch=b64 -S chown -S fchown -S fchownat -S lchown -F auid>=500 -F auid!=4294967295 -k perm_mod
-a always,exit -F arch=b32 -S chown -S fchown -S fchownat -S lchown -F auid>=500 -F auid!=4294967295 -k perm_mod
-a always,exit -F arch=b64 -S setxattr -S lsetxattr -S fsetxattr -S removexattr -S lremovexattr -S fremovexattr -F auid>=500 -F auid!=4294967295 -k perm_mod
-a always,exit -F arch=b32 -S setxattr -S lsetxattr -S fsetxattr -S removexattr -S lremovexattr -S fremovexattr -F auid>=500 -F auid!=4294967295 -k perm_mod

# Execute the following command to restart auditd

# sudo service auditd restart

For 32 bit systems, add the following lines to the /etc/audit/audit.rules file.

-a always,exit -F arch=b32 -S chmod -S fchmod -S fchmodat -F auid>=500 -F auid!=4294967295 -k perm_mod
-a always,exit -F arch=b32 -S chown -S fchown -S fchownat -S lchown -F auid>=500 -F auid!=4294967295 -k perm_mod
-a always,exit -F arch=b32 -S setxattr -S lsetxattr -S fsetxattr -S removexattr -S lremovexattr -S fremovexattr -F auid>=500 -F auid!=4294967295 -k perm_mod

# Execute the following command to restart auditd

# sudo service auditd restart

6.12 Collect Unsuccessful Unauthorized Access Attempts to Files

Find failed attempts to open, create or truncate files to gain unauthorized access to the system.
For 64 bit systems, add the following lines to the /etc/audit/audit.rules file.

-a always,exit -F arch=b64 -S creat -S open -S openat -S truncate -S ftruncate -F exit=-EACCES -F auid>=500 -F auid!=4294967295 -k access
-a always,exit -F arch=b32 -S creat -S open -S openat -S truncate -S ftruncate -F exit=-EACCES -F auid>=500 -F auid!=4294967295 -k access
-a always,exit -F arch=b64 -S creat -S open -S openat -S truncate -S ftruncate -F exit=-EPERM -F auid>=500 -F auid!=4294967295 -k access
-a always,exit -F arch=b32 -S creat -S open -S openat -S truncate -S ftruncate -F exit=-EPERM -F auid>=500 -F auid!=4294967295 -k access

# Execute the following command to restart auditd

# sudo service auditd restart

For 32 bit systems, add the following lines to the /etc/audit/audit.rules file.

-a always,exit -F arch=b32 -S creat -S open -S openat -S truncate -S ftruncate -F exit=-EACCES -F auid>=500 -F auid!=4294967295 -k access
-a always,exit -F arch=b32 -S creat -S open -S openat -S truncate -S ftruncate -F exit=-EPERM -F auid>=500 -F auid!=4294967295 -k access

# Execute the following command to restart auditd

# sudo service auditd restart

6.13 Collect Use of Privileged Commands

Find out whether there are any uses of privileged commands by non-privileged users trying to gain access to the system. First execute the following command, then add its output to the /etc/audit/audit.rules file

# find PART -xdev \( -perm -4000 -o -perm -2000 \) -type f | awk '{print "-a always,exit -F path=" $1 " -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged" }'

6.14 Collect Unsuccessful File System Mounts

To track mounting of the file systems by non privileged user add the following rules in /etc/audit/audit.rules file
For 64 bit systems, add the following lines to the /etc/audit/audit.rules file.

-a always,exit -F arch=b64 -S mount -F auid>=500 -F auid!=4294967295 -k mounts
-a always,exit -F arch=b32 -S mount -F auid>=500 -F auid!=4294967295 -k mounts

# Execute the following command to restart auditd

# sudo service auditd restart

For 32 bit systems, add the following lines to the /etc/audit/audit.rules file.

-a always,exit -F arch=b32 -S mount -F auid>=500 -F auid!=4294967295 -k mounts

# Execute the following command to restart auditd

# sudo service auditd restart

6.15 Collect File Deletion Events by User

To find out if any removal of files and file attributes associated with protected files is occurring add the following rules.
For 64 bit systems, add the following to the /etc/audit/audit.rules file.

-a always,exit -F arch=b64 -S unlink -S unlinkat -S rename -S renameat -F auid>=500 -F auid!=4294967295 -k delete
-a always,exit -F arch=b32 -S unlink -S unlinkat -S rename -S renameat -F auid>=500 -F auid!=4294967295 -k delete

Execute the following command to restart auditd

# sudo service auditd restart

For 32 bit systems, add the following to the /etc/audit/audit.rules file.

-a always,exit -F arch=b32 -S unlink -S unlinkat -S rename -S renameat -F auid>=500 -F auid!=4294967295 -k delete

# Execute the following command to restart auditd

# sudo service auditd restart

6.16 Collect Changes to System Administration Scope

Changes to the /etc/sudoers file can indicate that an unauthorized change has been made to the scope of system administrator activity.

Add the following lines to the /etc/audit/audit.rules file.

-w /etc/sudoers -p wa -k scope

# Execute the following command to restart auditd

# sudo service auditd restart

6.17 Collect System Administrator Actions (sudolog)

To detect misuse of privileged commands, find out whether any changes take place in /var/log/sudo.log.
Add the following lines to the /etc/audit/audit.rules file.

-w /var/log/sudo.log -p wa -k actions

Restart auditd

# sudo service auditd restart

6.18 Collect Kernel Module Loading and Unloading

To find out whether any unauthorized user is using insmod, rmmod or modprobe and thus compromising the security of the system, add the following lines to the /etc/audit/audit.rules file.

-w /sbin/insmod -p x -k modules
-w /sbin/rmmod -p x -k modules
-w /sbin/modprobe -p x -k modules

For 32 bit systems, add

-a always,exit -F arch=b32 -S init_module -S delete_module -k modules

For 64 bit systems, add

-a always,exit -F arch=b64 -S init_module -S delete_module -k modules

Restart auditd

# sudo service auditd restart

6.19 Make the Audit Configuration Immutable

To prevent unauthorized users from making changes to the audit system to hide their malicious activity and then reverting the audit rules back, add the following line to the /etc/audit/audit.rules file.

-e 2

This must be the last line in the /etc/audit/audit.rules file.
Restart auditd

# sudo service auditd restart

7. System Access, Authentication and Authorization

7.1 Set User/Group Owner and Permission on cron

Execute the following commands to restrict read/write and search access to root user and groups, preventing normal users from accessing these files/directories.

# chown root:root /etc/crontab
# chmod og-rwx /etc/crontab
# chown root:root /etc/cron.hourly
# chmod og-rwx /etc/cron.hourly
# chown root:root /etc/cron.daily
# chmod og-rwx /etc/cron.daily
# chown root:root /etc/cron.weekly
# chmod og-rwx /etc/cron.weekly
# chown root:root /etc/cron.monthly
# chmod og-rwx /etc/cron.monthly
# chown root:root /etc/cron.d
# chmod og-rwx /etc/cron.d

7.2 Configure PAM

PAM (Pluggable Authentication Modules) is a service that implements modular authentication modules on UNIX systems. PAM must be carefully configured to secure system authentication.

7.2.1 Set Password Creation Requirement Parameters Using pam_cracklib

The pam_cracklib module checks the strength of passwords. It performs checks such as making sure a password is not a dictionary word, it is a certain length, contains a mix of characters (e.g. alphabet, numeric, other) and more.
Set the pam_cracklib.so parameters as follows in /etc/pam.d/common-password

password required pam_cracklib.so retry=3 minlen=14 dcredit=-1 ucredit=-1 ocredit=-1 lcredit=-1

7.2.2 Set Lockout for Failed Password Attempts

Locking out users after repeated unsuccessful login attempts prevents brute-force password attacks against your systems.
Edit the /etc/pam.d/login file and add the auth line below:

auth required pam_tally2.so onerr=fail audit silent deny=5 unlock_time=900

7.2.3 Limit Password Reuse

Forcing users not to reuse their past 5 passwords makes it less likely that an attacker will be able to guess the password. Set the pam_unix.so remember parameter to 5 in /etc/pam.d/common-password

password sufficient pam_unix.so remember=5

8. Configure SSH

Edit the /etc/ssh/sshd_config file to set the following parameter as follows to make it secure.

Protocol 2
LogLevel INFO
X11Forwarding no
MaxAuthTries 4
IgnoreRhosts yes
HostbasedAuthentication no
PermitRootLogin no
PermitEmptyPasswords no
PermitUserEnvironment no
Ciphers aes128-ctr,aes192-ctr,aes256-ctr
ClientAliveInterval 300
ClientAliveCountMax 0
AllowUsers <userlist>
AllowGroups <grouplist>
DenyUsers <userlist>
DenyGroups <grouplist>
Banner <your bannerfile>

9. Restrict Access to the su Command

Use sudo instead of su as it provides a better logging out and audit mechanism. The another motivation for using sudo is to restrict the uses of su. Uncomment the pam_wheel.so line in /etc/pam.d/su, so that su command will be available to users in the wheel group to execute su.

# grep pam_wheel.so /etc/pam.d/su
auth required pam_wheel.so use_uid
# grep wheel /etc/group
wheel:x:10:root, <user list>.....

10. User Accounts and Environment

10.1 Set Password Expiration Days

Reduce the maximum age of a password.

Set the PASS_MAX_DAYS parameter to 60 in /etc/login.defs

PASS_MAX_DAYS 60

Modify active user parameters to match:

# chage --maxdays 60 <user>
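You can verify the ageing policy applied to an account with chage (the username is a placeholder):

# list the password ageing settings for a user
chage -l <user>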

10.2 Set Password Change Minimum Number of Days

To prevent users from changing their password again until a minimum number of days have passed since the last change, set the PASS_MIN_DAYS parameter to 7 in /etc/login.defs

PASS_MIN_DAYS 7

Modify active user parameters to match:

# chage --mindays 7 <user>

10.3 Set Password Expiring Warning Days

The administrator can notify users about the upcoming expiry of their password using the PASS_WARN_AGE parameter in /etc/login.defs.

Set the PASS_WARN_AGE parameter to 7 in /etc/login.defs

PASS_WARN_AGE  7

Modify active user parameters to match

# chage --warndays 7 <user>

11. System Accounts

11.1 Disable System Accounts

To prevent system accounts from being used to get an interactive shell, set the shell of each system account in /etc/passwd to "/usr/sbin/nologin".

11.2 Set Default umask for Users

Setting a umask of 022 makes newly created files readable by every user on the system, but writable only by their owner.
Edit the /etc/login.defs file and add the following line

UMASK 022

11.3 Lock Inactive User Accounts

To make the system more secure, set the default inactivity period so that accounts are locked 35 days after their password expires:

# useradd -D -f 35
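To confirm the new default took effect (a quick check):

# INACTIVE=35 is expected in the output
useradd -D | grep INACTIVE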

11.4 Remove OS Information from Login Warning Banners

To prevent the OS and patch level information from login banners, edit the /etc/motd, /etc/issue and /etc/issue.net files and remove any lines containing \m, \r, \s or \v.

12. Verify System File Permissions

12.1 Verify Permissions on /etc/passwd, /etc/shadow, /etc/group

These files need to be protected from unauthorized changes by non-privileged users, while remaining readable, as this information is used by non-privileged programs.
Execute the following commands to correct the permissions for these files

# chmod 644 /etc/passwd
# chmod o-rwx,g-rw /etc/shadow
# chmod 644 /etc/group

12.2 Verify User/Group Ownership on /etc/passwd, /etc/shadow, /etc/group

These files need to be protected from unauthorized changes by non-privileged users, while remaining readable, as this information is used by non-privileged programs.
Execute the following commands to correct the ownership of these files

# chown root:root /etc/passwd
# chown root:shadow /etc/shadow
# chown root:root /etc/group
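You can verify the result of both steps at once (exact modes depend on your starting point):

# show mode, owner, group and name for the three files
stat -c "%a %U %G %n" /etc/passwd /etc/shadow /etc/group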

13. Check for rootkits

There are a few tools available with which you can check for rootkits on the server. The two popular rootkit hunters are RKHunter and chkrootkit; use either of them periodically to check the system for rootkits.

Install chkrootkit

# sudo apt-get install chkrootkit

To run chkrootkit, execute the following command in the terminal

# chkrootkit

14. PSAD IDS/IPS

To detect intrusions in your network, you can use tools like Snort or Cipherdyne's psad. The latter is capable of intrusion detection and log analysis with iptables. PSAD is a lightweight system daemon that analyzes iptables log messages to detect port scans and other suspicious traffic.

Install PSAD

# sudo apt-get install psad

Now configure psad for scan detection, intrusion detection and intrusion prevention.

15. Prevent IP Spoofing

Add the following lines to /etc/host.conf to prevent IP spoofing:

order bind,hosts
nospoof on

16. Enabling automatic security updates

It is highly recommended to enable automatic security updates and patches to keep the system secure. You will be notified about available security updates and patches every time you log in to the system using SSH. On Ubuntu Desktop, to enable automatic security updates, click on "System", select "Administration" and then the "Software Sources" menu. Now select "Internet Updates" and enable "Check for updates automatically", specifying "daily". If Ubuntu issues a new security release, you will be notified via the "Update Manager" icon in the system tray. You can also use unattended-upgrades, which handles automatic installation of security upgrades on Ubuntu; running sudo unattended-upgrade will install all security packages available for upgrade.

Install this package if it isn't already installed using

# sudo apt-get install unattended-upgrades

To enable it type

# sudo dpkg-reconfigure unattended-upgrades

and select "yes".

17. Harden PHP

Edit the php.ini file /etc/php5/apache2/php.ini and uncomment or add the following lines.

safe_mode = On
safe_mode_gid = On
disable_functions = php_uname, getmyuid, getmypid, passthru, leak, listen, diskfreespace, tmpfile, link, ignore_user_abort, shell_exec, dl, set_time_limit, exec, system, highlight_file, source, show_source, fpassthru, virtual, posix_ctermid, posix_getcwd, posix_getegid, posix_geteuid, posix_getgid, posix_getgrgid, posix_getgrnam, posix_getgroups, posix_getlogin, posix_getpgid, posix_getpgrp, posix_getpid, posix_getppid, posix_getpwnam, posix_getpwuid, posix_getrlimit, posix_getsid, posix_getuid, posix_isatty, posix_kill, posix_mkfifo, posix_setegid, posix_seteuid, posix_setgid, posix_setpgid, posix_setsid, posix_setuid, posix_times, posix_ttyname, posix_uname, proc_open, proc_close, proc_get_status, proc_nice, proc_terminate, phpinfo
register_globals = Off
expose_php = Off
display_errors = Off
track_errors = Off
html_errors = Off
magic_quotes_gpc = Off
mail.add_x_header = Off
session.name = NEWSESSID
allow_url_fopen = Off
allow_url_include = Off
session.save_path = A secured location in the server

18. Harden Apache

Edit the Apache2 security configuration file /etc/apache2/conf-available/security.conf and add the following:

ServerTokens Prod

ServerSignature Off

TraceEnable Off

Header unset ETag

FileETag None

The web application firewall ModSecurity is an effective way to protect the web server so that it is much less vulnerable to probes, scans and attacks. First install mod_security using the following commands.

# sudo apt-get install libapache2-mod-security2

# mv /etc/modsecurity/modsecurity.conf-recommended /etc/modsecurity/modsecurity.conf

Edit /etc/modsecurity/modsecurity.conf

Activate the rules by setting the SecRuleEngine option to On, and modify your server signature:

SecRuleEngine On

SecServerSignature FreeOSHTTP

Now edit the following directives to increase the request body limit to 16 MB:

SecRequestBodyLimit 16384000

SecRequestBodyInMemoryLimit 16384000

Download and install the latest OWASP ModSecurity Core Rule Set from their website.

# wget https://github.com/SpiderLabs/owasp-modsecurity-crs/archive/master.zip

# unzip master.zip

# cp -r owasp-modsecurity-crs-master/* /etc/modsecurity/

# mv /etc/modsecurity/modsecurity_crs_10_setup.conf.example /etc/modsecurity/modsecurity_crs_10_setup.conf

# ls /etc/modsecurity/base_rules | xargs -I {} ln -s /etc/modsecurity/base_rules/{} /etc/modsecurity/activated_rules/{}

# ls /etc/modsecurity/optional_rules | xargs -I {} ln -s /etc/modsecurity/optional_rules/{} /etc/modsecurity/activated_rules/{}

Now add the following line in /etc/apache2/mods-available/security2.conf

Include "/etc/modsecurity/activated_rules/*.conf"

Make sure the required modules are enabled:

# sudo a2enmod headers

# sudo a2enmod security2

Now restart Apache2

# service apache2 restart

Apart from ModSecurity, install mod_evasive to protect your server from DoS (Denial of Service) attacks.
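A minimal sketch for mod_evasive follows; the package and directive names are the standard ones, but the thresholds are illustrative and should be tuned to your traffic:

# sudo apt-get install libapache2-mod-evasive

Then set its thresholds in /etc/apache2/mods-available/evasive.conf:

<IfModule mod_evasive20.c>
    DOSHashTableSize    3097
    DOSPageCount        5
    DOSSiteCount        50
    DOSPageInterval     1
    DOSSiteInterval     1
    DOSBlockingPeriod   10
</IfModule>

Enable the module with sudo a2enmod evasive and restart Apache2.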

Once you've hardened the system, run some vulnerability scans and penetration tests against it to check that it is actually as solid as you now expect. Attacks on your server can still happen, so it is up to you to scan the log files regularly to find out whether any breach has occurred. You can use a log analyzer tool like the ELK stack to drill through server log files quickly. If you find evidence of a breach, quickly disconnect your server from the internet and take remedial measures.

The post An Ultimate Guide to Secure Ubuntu Host appeared first on LinOxide.

How to Install Visual Studio Code 1.3 on Ubuntu 16.04


Visual Studio Code is lightweight, free and open source software. It gives developers a new choice of tooling, one that combines simplicity and a streamlined experience with a more featureful code editor. It is a powerful source code editor that runs on your desktop, available for Windows, OS X and Linux.


It comes with built-in support for JavaScript, TypeScript and Node.js, and has a rich ecosystem of extensions for other languages like C++, C#, Python, Jade, PHP, XML, Batch, F#, DockerFile, CoffeeScript, Java, HandleBars, R, Objective-C, PowerShell, Lua, Visual Basic, .Net, Asp.Net, JSON, HTML, CSS, Less, Sass and many more to come.

Features of Visual Studio Code 1.3

These are some of the exciting features of VSC v1.3 that make it popular among code editors:

  • Tabs: Helps with easy navigation and organizing your workbench.
  • Extensions: New in-product extension management to quickly view, manage and install extensions.
  • Workbench: Enhanced drag-and-drop features and preview editors.
  • Editor: Global search and replace options, a Problems panel to view errors and warnings, and indent guides.
  • Languages: Better Emmet support and the Atom JavaScript grammar extension.
  • Debugging: Better debugging options.

In this article, I'll explain how to install the latest Visual Studio Code v1.3 on Ubuntu 16.04 Desktop.

Please note: Visual Studio Code is supported only on 64-bit Linux, so make sure your system is 64-bit.

Installation Steps

First of all, we need to download the latest available package from their website. We can get the latest available version here. I created a folder for VS Code and downloaded the package there.

root@ubuntu:~# mkdir /tmp/VSC
root@ubuntu:~# cd /tmp/VSC

Downloading the package.

root@ubuntu:/tmp/VSC# wget https://az764295.vo.msecnd.net/stable/e6b4afa53e9c0f54edef1673de9001e9f0f547ae/VSCode-linux-x64-stable.zip
--2016-07-19 08:48:36-- https://az764295.vo.msecnd.net/stable/e6b4afa53e9c0f54edef1673de9001e9f0f547ae/VSCode-linux-x64-stable.zip
Resolving az764295.vo.msecnd.net (az764295.vo.msecnd.net)... 2606:2800:11f:17a5:191a:18d5:537:22f9, 72.21.81.200
Connecting to az764295.vo.msecnd.net (az764295.vo.msecnd.net)|2606:2800:11f:17a5:191a:18d5:537:22f9|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 48262769 (46M) [application/zip]
Saving to: ‘VSCode-linux-x64-stable.zip’

VSCode-linux-x64-stable.zip 100%[==================================================================>] 46.03M 121MB/s in 0.4s

2016-07-19 08:48:36 (121 MB/s) - ‘VSCode-linux-x64-stable.zip’ saved [48262769/48262769]

Now extract the package to the /opt folder and make the binary executable. If unzip is not installed, you can install it with apt install unzip.

root@ubuntu:/tmp/VSC# unzip VSCode-linux-x64-stable.zip -d /opt/
root@ubuntu:/tmp/VSC# chmod +x /opt/VSCode-linux-x64/code

Please note, your system needs the GUI libraries below for VS Code to run. Install these packages to provide the required libraries if they are not already present.

apt-get install lib32z1 lib32ncurses5 dpkg-dev
apt-get install libgtk2.0-0
apt install libnotify-dev
apt install libnss3

Running Visual Studio Code

Now you can move to the extracted folder and run the "code" binary from there.

This will launch our VS Code editor window.
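Optionally, you can link the binary into your PATH so the editor can be launched from any terminal (a sketch, assuming the extraction path used above):

root@ubuntu:~# ln -s /opt/VSCode-linux-x64/code /usr/local/bin/code
root@ubuntu:~# code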

Hurray! We've installed Visual Studio Code. Thank you for reading this article; I welcome your valuable comments and suggestions on it. Have a good day!

The post How to Install Visual Studio Code 1.3 on Ubuntu 16.04 appeared first on LinOxide.

How to Monitor Docker Containers using Grafana on Ubuntu


Grafana is an open source, feature-rich metrics dashboard. It is very useful for visualizing large-scale measurement data. It provides a powerful and elegant way to create, share, and explore data and dashboards from your disparate metric databases.

It supports a wide variety of graphing options for ultimate flexibility. Furthermore, it supports many different storage backends for your Data Source. Each Data Source has a specific Query Editor that is customized for the features and capabilities that the particular Data Source exposes. The following datasources are officially supported by Grafana: Graphite, InfluxDB, OpenTSDB, Prometheus, Elasticsearch and Cloudwatch.

The query language and capabilities of each Data Source are obviously very different. You can combine data from multiple Data Sources onto a single Dashboard, but each Panel is tied to a specific Data Source that belongs to a particular Organization. It supports authenticated login and a basic role based access control implementation. It is deployed as a single software installation which is written in Go and Javascript.

In this article, I'll explain on how to install Grafana on a docker container in Ubuntu 16.04 and configure docker monitoring using this software.

Pre-requisites

  • Docker installed server

Installing Grafana

We can run Grafana in a Docker container; there is an official Docker image available for it. Please run this command to start a Grafana container.

root@ubuntu:~# docker run -i -p 3000:3000 grafana/grafana

Unable to find image 'grafana/grafana:latest' locally
latest: Pulling from grafana/grafana
5c90d4a2d1a8: Pull complete
b1a9a0b6158e: Pull complete
acb23b0d58de: Pull complete
Digest: sha256:34ca2f9c7986cb2d115eea373083f7150a2b9b753210546d14477e2276074ae1
Status: Downloaded newer image for grafana/grafana:latest
t=2016-07-27T15:20:19+0000 lvl=info msg="Starting Grafana" logger=main version=3.1.0 commit=v3.1.0 compiled=2016-07-12T06:42:28+0000
t=2016-07-27T15:20:19+0000 lvl=info msg="Config loaded from" logger=settings file=/usr/share/grafana/conf/defaults.ini
t=2016-07-27T15:20:19+0000 lvl=info msg="Config loaded from" logger=settings file=/etc/grafana/grafana.ini
t=2016-07-27T15:20:19+0000 lvl=info msg="Config overriden from command line" logger=settings arg="default.paths.data=/var/lib/grafana"
t=2016-07-27T15:20:19+0000 lvl=info msg="Config overriden from command line" logger=settings arg="default.paths.logs=/var/log/grafana"
t=2016-07-27T15:20:19+0000 lvl=info msg="Config overriden from command line" logger=settings arg="default.paths.plugins=/var/lib/grafana/plugins"
t=2016-07-27T15:20:19+0000 lvl=info msg="Path Home" logger=settings path=/usr/share/grafana
t=2016-07-27T15:20:19+0000 lvl=info msg="Path Data" logger=settings path=/var/lib/grafana
t=2016-07-27T15:20:19+0000 lvl=info msg="Path Logs" logger=settings path=/var/log/grafana
t=2016-07-27T15:20:19+0000 lvl=info msg="Path Plugins" logger=settings path=/var/lib/grafana/plugins
t=2016-07-27T15:20:19+0000 lvl=info msg="Initializing DB" logger=sqlstore dbtype=sqlite3

t=2016-07-27T15:20:20+0000 lvl=info msg="Executing migration" logger=migrator id="create playlist table v2"
t=2016-07-27T15:20:20+0000 lvl=info msg="Executing migration" logger=migrator id="create playlist item table v2"
t=2016-07-27T15:20:20+0000 lvl=info msg="Executing migration" logger=migrator id="drop preferences table v2"
t=2016-07-27T15:20:20+0000 lvl=info msg="Executing migration" logger=migrator id="drop preferences table v3"
t=2016-07-27T15:20:20+0000 lvl=info msg="Executing migration" logger=migrator id="create preferences table v3"
t=2016-07-27T15:20:20+0000 lvl=info msg="Created default admin user: [admin]"
t=2016-07-27T15:20:20+0000 lvl=info msg="Starting plugin search" logger=plugins
t=2016-07-27T15:20:20+0000 lvl=info msg="Server Listening" logger=server address=0.0.0.0:3000 protocol=http subUrl=

We can confirm that the Grafana container is working by running the command "docker ps -a" or by accessing it at the URL http://Docker IP:3000.

All Grafana configuration settings can also be defined using environment variables, which is very useful when using container technology. The Grafana configuration file is located at /etc/grafana/grafana.ini.
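Any option in grafana.ini can be overridden with an environment variable of the form GF_<SectionName>_<KeyName>. For example, a sketch that sets the admin password when the container starts:

root@ubuntu:~# docker run -d -p 3000:3000 -e "GF_SECURITY_ADMIN_PASSWORD=MySecret" grafana/grafana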

Understanding the Configuration

Grafana has a number of configuration options that can be specified in its .ini configuration file, or, as mentioned before, using environment variables.

Config file locations

Normal config file locations.

  • Default configuration from : $WORKING_DIR/conf/defaults.ini
  • Custom configuration from  : $WORKING_DIR/conf/custom.ini

PS: When you install Grafana using the deb or rpm packages or the Docker images, your configuration file is located at /etc/grafana/grafana.ini

Understanding the config variables

Let's see some of the variables in the configuration file below:

instance_name : The name of the Grafana server instance. Its default value is fetched from ${HOSTNAME}, which will be replaced with the environment variable HOSTNAME; if that is empty or does not exist, Grafana will try to use system calls to get the machine name.

[paths]

data : It's the path where Grafana stores the sqlite3 database (when used), file based sessions (when used), and other data.

logs : It's where Grafana stores the logs.

Both these paths are usually specified via command line in the init.d scripts or the systemd service file.

[server]

http_addr : The IP address to bind the application. If it's left empty it will bind to all interfaces.

http_port : The port the application binds to; the default is 3000. You can redirect port 80 to 3000 using the command below.

$iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000

root_url : This is the URL used to access Grafana from a web browser.

cert_file : Path to the certificate file (if protocol is set to https).

cert_key : Path to the certificate key file (if protocol is set to https).

[database]

Grafana uses a database to store its users, dashboards and other information. By default it is configured to use sqlite3, an embedded database included in the main Grafana binary.

type
You can choose mysql, postgres or sqlite3 as per your requirement.

path
It's applicable only for sqlite3 database. The file path where the database will be stored.

host
It's applicable only to MySQL or Postgres. it includes IP or hostname and port. For example, for MySQL running on the same host as Grafana: host = 127.0.0.1:3306

name
The name of the Grafana database. Leave it set to grafana or some other name.

user
The database user (not applicable for sqlite3).

password
The database user's password (not applicable for sqlite3).

ssl_mode
For Postgres, use either disable, require or verify-full. For MySQL, use either true, false, or skip-verify.

ca_cert_path
(MySQL only) The path to the CA certificate to use. On many linux systems, certs can be found in /etc/ssl/certs.

client_key_path
(MySQL only) The path to the client key. Only if server requires client authentication.

client_cert_path
(MySQL only) The path to the client cert. Only if server requires client authentication.

server_cert_name
(MySQL only) The common name field of the certificate used by the mysql server. Not necessary if ssl_mode is set to skip-verify.
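Putting the [database] options together, a minimal sketch pointing Grafana at a local MySQL instance might look like this (the credentials are placeholders):

[database]
type = mysql
host = 127.0.0.1:3306
name = grafana
user = grafana
password = MySecretPassword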

[security]
admin_user : It is the name of the default Grafana admin user. The default name set is admin.

admin_password : It is the password of the default Grafana admin. It is set on first-run. The default password is admin.

login_remember_days : The number of days the keep me logged in / remember me cookie lasts.

secret_key : It is used for signing keep me logged in / remember me cookies.

Essential Components for Setting up Monitoring

We use the below components to create our Docker monitoring system.

cAdvisor : Otherwise called Container Advisor, it provides its users with an understanding of the resource usage and performance characteristics of their containers. It collects, aggregates, processes and exports information about the running containers. You can go through this documentation for more information about it.

InfluxDB : It is a time series, metrics, and analytics database. We use this datasource for setting up our monitoring. cAdvisor displays only real-time information and doesn't store the metrics, so InfluxDB stores the monitoring information cAdvisor provides in order to display a time range other than real time.

Grafana Dashboard : It allows us to combine all the pieces of information together visually. This powerful dashboard allows us to run queries against the data store InfluxDB and chart them in beautiful layouts.

Installation of Docker Monitoring

We need to install each of these components one by one in our docker system.

Installing InfluxDB

We can use this command to pull the InfluxDB image and set up an InfluxDB container.

root@ubuntu:~# docker run -d -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 -e PRE_CREATE_DB=cadvisor --name influxsrv tutum/influxdb:0.8.8
Unable to find image 'tutum/influxdb:0.8.8' locally
0.8.8: Pulling from tutum/influxdb
a3ed95caeb02: Already exists
23efb549476f: Already exists
aa2f8df21433: Already exists
ef072d3c9b41: Already exists
c9f371853f28: Already exists
a248b0871c3c: Already exists
749db6d368d0: Already exists
7d7c7d923e63: Pull complete
e47cc7808961: Pull complete
1743b6eeb23f: Pull complete
Digest: sha256:8494b31289b4dbc1d5b444e344ab1dda3e18b07f80517c3f9aae7d18133c0c42
Status: Downloaded newer image for tutum/influxdb:0.8.8
d3b6f7789e0d1d01fa4e0aacdb636c221421107d1df96808ecbe8e241ceb1823

  • -p 8083:8083 : web user interface; log in with "root" as username and password
  • -p 8086:8086 : HTTP API port used by other applications
  • --name influxsrv : names the container influxsrv so that cAdvisor can link to it

You can test your InfluxDB installation by visiting the URL http://45.79.148.234:8083 and logging in with "root" as both username and password.


We can create our required databases from this tab.
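Alternatively, the InfluxDB 0.8 HTTP API can create the database from the command line (a sketch, assuming the default root/root credentials; note that the API changed completely in 0.9 and later):

root@ubuntu:~# curl -X POST 'http://localhost:8086/db?u=root&p=root' -d '{"name": "cadvisor"}'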


Installing cAdvisor

Our next step is to install the cAdvisor container and link it to the InfluxDB container. You can use this command to create it.

root@ubuntu:~# docker run --volume=/:/rootfs:ro --volume=/var/run:/var/run:rw --volume=/sys:/sys:ro --volume=/var/lib/docker/:/var/lib/docker:ro --publish=8080:8080 --detach=true --link influxsrv:influxsrv --name=cadvisor google/cadvisor:latest -storage_driver_db=cadvisor -storage_driver_host=influxsrv:8086
Unable to find image 'google/cadvisor:latest' locally
latest: Pulling from google/cadvisor
09d0220f4043: Pull complete
151807d34af9: Pull complete
14cd28dce332: Pull complete
Digest: sha256:8364c7ab7f56a087b757a304f9376c3527c8c60c848f82b66dd728980222bd2f
Status: Downloaded newer image for google/cadvisor:latest
3bfdf7fdc83872485acb06666a686719983a1172ac49895cd2a260deb1cdde29
root@ubuntu:~#

  • --publish=8080:8080 : user interface
  • --link influxsrv:influxsrv : link to the influxsrv container
  • -storage_driver=influxdb : set the storage driver to InfluxDB
  • -storage_driver_host=influxsrv:8086 : the ip:port of the InfluxDB instance to push data to; the default is 'localhost:8086'
  • -storage_driver_db=cadvisor : the database name; 'cadvisor' is used by default

You can test your cAdvisor installation by visiting the URL http://45.79.148.234:8080. This will show you the statistics of your Docker host and containers.


Installing the Grafana Dashboard

Finally, we need to install the Grafana Dashboard and link it to InfluxDB. You can run this command to set it up.

root@ubuntu:~# docker run -d -p 3000:3000 -e INFLUXDB_HOST=localhost -e INFLUXDB_PORT=8086 -e INFLUXDB_NAME=cadvisor -e INFLUXDB_USER=root -e INFLUXDB_PASS=root --link influxsrv:influxsrv --name grafana grafana/grafana
f3b7598529202b110e4e6b998dca6b6e60e8608d75dcfe0d2b09ae408f43684a

Now we can log in to Grafana and configure the data sources. Navigate to http://45.79.148.234:3000, or just http://45.79.148.234 if you redirected port 80 to 3000 as shown earlier.

Username - admin
Password - admin

Once we've installed Grafana, we can connect it to InfluxDB. Log in to the dashboard and click on the Grafana icon (fireball) in the upper left hand corner of the panel, then click on Data Sources to configure one.
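For this setup, the data source values would look roughly like the following (the names match the containers created above; the exact type label may vary by Grafana version, which lists InfluxDB 0.8.x as its own data source type):

Name     : influxdb
Type     : InfluxDB 0.8.x
Url      : http://influxsrv:8086
Database : cadvisor
User     : root
Password : root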


Now you can add a new graph that uses our InfluxDB data source.


We can edit and modify our query by adjusting our graph in the Metrics tab.


You can get more information on Docker monitoring here. Thank you for reading this; I welcome your valuable comments and suggestions on it. Hope you had a wonderful day!

The post How to Monitor Docker Containers using Grafana on Ubuntu appeared first on LinOxide.

How to Setup ELK Stack to Centralize Logs on Ubuntu 16.04


The ELK stack consists of Elasticsearch, Logstash, and Kibana, and is used to centralize data. ELK is mainly used for log analysis in IT environments. The ELK stack makes it easier and faster to search and analyze large volumes of data to make real-time decisions, all the time.

In this tutorial we will use the following versions of ELK stack.

Elasticsearch 2.3.4
Logstash 2.3.4
Kibana 4.5.3
Oracle Java version 1.8.0_91
Filebeat version 1.2.3 (amd64)

Before you start installing the ELK stack, check the LSB release of the Ubuntu server.

# lsb_release -a


1. Install Java

Elasticsearch and Logstash require Java, so install it first. We will install Oracle Java since Elasticsearch recommends it; however, it works with OpenJDK as well.

Add the Oracle Java PPA to apt:

# sudo add-apt-repository -y ppa:webupd8team/java


Update apt database

# sudo apt-get update

Now install the latest stable version of Oracle Java 8 using the following command.

# sudo apt-get -y install oracle-java8-installer


Java 8 is now installed. Check the version of Java using the command java -version

2. Install Elasticsearch

To install Elasticsearch, first import its public GPG key into the apt database. Run the following command to do so.

# wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Now create the Elasticsearch source list

# echo "deb http://packages.elastic.co/elasticsearch/2.x/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-2.x.list

Update the apt database

# sudo apt-get update

Now install Elasticsearch using the following command

# sudo apt-get -y install elasticsearch


Next, edit the Elasticsearch configuration file

# sudo vi /etc/elasticsearch/elasticsearch.yml

To restrict outside access to the Elasticsearch instance (port 9200), uncomment the line that says network.host and replace its value with "localhost".

network.host: localhost


Now start Elasticsearch

# sudo service elasticsearch restart

To start Elasticsearch on boot up, execute the following command.

# sudo update-rc.d elasticsearch defaults 95 10

Test Elasticsearch using the following command.

# curl localhost:9200


3. Install logstash

Download and install the Logstash package. We have already imported the public key, as Logstash and Elasticsearch come from the same repository.

# wget https://download.elastic.co/logstash/logstash/packages/debian/logstash_2.3.4-1_all.deb


# dpkg -i logstash_2.3.4-1_all.deb


# sudo update-rc.d logstash defaults 97 8

# sudo service logstash start

To check the status of logstash, execute the following command in the terminal.

# sudo service logstash status


You may find that logstash is active but that you cannot stop or restart it properly using the service or systemctl commands. In that case you have to configure the systemd logstash daemon script yourself. First, back up the logstash startup scripts inside /etc/init.d/ and /etc/systemd/system and remove them from there. Then install the "pleaserun" script from https://github.com/elastic/logstash/issues/3606 The prerequisite for installing this script is Ruby.

Install Ruby

# sudo apt install ruby


Now install the pleaserun gem

# gem install pleaserun


You are now ready to create the systemd daemon file for logstash. Use the following command to do this.

# pleaserun -p systemd -v default --install /opt/logstash/bin/logstash agent -f /etc/logstash/logstash.conf

Now that the systemd daemon for logstash has been created, start it and check the status of logstash.

# sudo systemctl start logstash

# sudo systemctl status logstash


4. Configure logstash

Let us now configure Logstash. The Logstash configuration files reside inside /etc/logstash/conf.d and are in JSON format. The configuration consists of three parts: inputs, filters, and outputs. First, create a directory for storing the certificate and key for logstash.

# mkdir -p /var/lib/logstash/private

# sudo chown logstash:logstash /var/lib/logstash/private

# sudo chmod go-rwx /var/lib/logstash/private


Now create the certificate and key for logstash

# openssl req -config /etc/ssl/openssl.cnf -x509  -batch -nodes -newkey rsa:2048 -keyout /var/lib/logstash/private/logstash-forwarder.key -out /var/lib/logstash/private/logstash-forwarder.crt -subj /CN=172.31.13.29

Change /CN=172.31.13.29 to your server's private IP address. To avoid a "TLS handshake error", add the following lines in /etc/ssl/openssl.cnf before generating the certificate.

[v3_ca]
subjectAltName = IP:172.31.13.29


Keep in mind that we have to copy this certificate to every client whose logs you want to send to the ELK server through filebeat.
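For example, to push it to a client (a sketch; the destination must match the certificate_authorities path used later in filebeat.yml):

# scp /var/lib/logstash/private/logstash-forwarder.crt user@CLIENT_IP:/tmp/
# ssh user@CLIENT_IP 'sudo mkdir -p /var/lib/logstash/private && sudo mv /tmp/logstash-forwarder.crt /var/lib/logstash/private/'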

Next, we will create the "filebeat" input configuration, named 02-beats-input.conf

# sudo vi /etc/logstash/conf.d/02-beats-input.conf

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/var/lib/logstash/private/logstash-forwarder.crt"
    ssl_key => "/var/lib/logstash/private/logstash-forwarder.key"
  }
}


Now we will create the "filebeat" filter, named 10-syslog-filter.conf, to add a filter for syslog messages.

# sudo vi /etc/logstash/conf.d/10-syslog-filter.conf

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}


Finally, we will create the "filebeat" output, named 30-elasticsearch-output.conf

# sudo vi /etc/logstash/conf.d/30-elasticsearch-output.conf

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}


Test your Logstash configuration with the following command.

# sudo service logstash configtest

It will display "Configuration OK" if there are no syntax errors; otherwise, check the logstash log files in /var/log/logstash.


To test the logstash, execute the following command from the terminal.

# cd /opt/logstash/bin && ./logstash -f /etc/logstash/conf.d/02-beats-input.conf

You will find that logstash has started a pipeline and is processing the syslogs. Once you are sure that logstash is processing the syslogs, combine 02-beats-input.conf, 10-syslog-filter.conf and 30-elasticsearch-output.conf into logstash.conf in the directory /etc/logstash/, as sketched below.
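A simple way to combine them is to concatenate the three files in order, input first and output last (a sketch):

# cat /etc/logstash/conf.d/02-beats-input.conf /etc/logstash/conf.d/10-syslog-filter.conf /etc/logstash/conf.d/30-elasticsearch-output.conf > /etc/logstash/logstash.conf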

Restart logstash to reload new configuration.

# sudo systemctl restart logstash

5. Install sample dashboard

Download the sample Kibana dashboards and Beats index patterns. We are not going to use these dashboards, but we will load them so that we can use the filebeat index pattern from them. Download the sample dashboards and unzip them.

# curl -L -O https://download.elastic.co/beats/dashboards/beats-dashboards-1.1.0.zip

# unzip beats-dashboards-1.1.0.zip


Load the sample dashboards, visualizations and Beats index patterns into Elasticsearch using the following commands.

# cd beats-dashboards-1.1.0

# ./load.sh

You will find the following index patterns in the Kibana dashboard's left sidebar. We will use only the filebeat index pattern.

packetbeat-*
topbeat-*
filebeat-*
winlogbeat-*

Since we will use filebeat to forward logs to Elasticsearch, we will load a filebeat index template into Elasticsearch.

First, download the filebeat index template

# curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json

Now load the template with the following curl command.

# curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat-index-template.json

If the template loaded properly, you should see a message like this:

Output:
{
"acknowledged" : true
}


The ELK server is now ready to receive filebeat data, so let's configure filebeat on the client servers. For more information about loading Beats dashboards, check this link: https://www.elastic.co/guide/en/beats/libbeat/current/load-kibana-dashboards.html

6. Install filebeat in clients

Create the Beats source list on the clients whose logs you want to send to the ELK server, then update the apt database and install filebeat using apt-get

# echo "deb https://packages.elastic.co/beats/apt stable main" | sudo tee -a /etc/apt/sources.list.d/beats.list # sudo apt-get update && sudo apt-get install filebeat


Start filebeat

# /etc/init.d/filebeat start


Now edit the file /etc/filebeat/filebeat.yml and modify the existing prospector to send syslog to logstash. In the paths section, comment out the - /var/log/*.log entry and add a new entry for syslog, - /var/log/syslog.


Next, specify that the logs in the prospector are of type syslog.


Uncomment the logstash: output section and the hosts: ["SERVER_PRIVATE_IP:5044"] line, and change localhost to the private IP address or hostname of your ELK server. Now uncomment the line that says certificate_authorities and set its value to /var/lib/logstash/private/logstash-forwarder.crt, the certificate we created on the ELK server earlier; remember that you must copy this certificate to every client machine.

Restart filebeat and check its status.

# sudo /etc/init.d/filebeat restart

# sudo service filebeat status


To test the filebeat, execute the following command from the terminal.

# filebeat -c /etc/filebeat/filebeat.yml -e -v


Filebeat will send the logs to logstash for indexing. Enable filebeat to start during every boot.

# sudo update-rc.d filebeat defaults 95 10

Now open your favorite browser and point it to http://ELK-SERVER-IP:5601 or http://ELK-SERVER-DOMAIN-NAME:5601; you will find the syslogs when you click filebeat-* in the left sidebar.

This is our final filebeat configuration:

filebeat:
  prospectors:
    -
      paths:
        - /var/log/auth.log
        - /var/log/syslog
      input_type: log
      document_type: syslog
  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["172.31.13.29:5044"]
    bulk_max_size: 1024
    tls:
      certificate_authorities: ["/var/lib/logstash/private/logstash-forwarder.crt"]

shipper:

logging:
  files:
    rotateeverybytes: 10485760


7. Configure firewall

Add firewall rules to allow traffic to the following ports.

For iptables users:

# sudo iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 5601 -j ACCEPT ( Kibana )

# sudo iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 9200 -j ACCEPT ( Elasticsearch )

# sudo iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT ( NGINX )

# sudo iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 5044 -j ACCEPT ( Filebeat )

Save the rules.

# service iptables save

Restart iptables

# service iptables restart

For UFW users:

# sudo ufw allow 5601/tcp

# sudo ufw allow 9200/tcp

# sudo ufw allow 80/tcp

# sudo ufw allow 5044/tcp

# sudo ufw reload

8. Install/Configure Kibana

Download the latest Kibana from https://download.elastic.co/

# cd /opt

# wget https://download.elastic.co/kibana/kibana/kibana-4.5.3-linux-x64.tar.gz

# tar -xzf kibana-4.5.3-linux-x64.tar.gz

# mv kibana-4.5.3-linux-x64 kibana

# cd /opt/kibana/config

# vi kibana.yml

Now change these parameters in /opt/kibana/config/kibana.yml

server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://localhost:9200"


For testing purposes, you can run Kibana using the following commands.

# cd /opt/kibana/bin

# ./kibana &

# netstat -pltn


Now we will create a systemd daemon for Kibana using "pleaserun", in the same way we did for logstash.

# pleaserun -p systemd -v default --install /opt/kibana/bin/kibana -p 5601 -H 0.0.0.0 -e http://localhost:9200

where
-p specifies the port that Kibana will bind to
-H specifies the host IP address where Kibana will run
-e specifies the Elasticsearch URL

Start the kibana

# systemctl start kibana

Check the status of kibana

# systemctl status kibana

Check whether port 5601 is now occupied by Kibana

# netstat -pltn| grep '5601'


9. Install/Configure NGINX

Since Kibana is configured to listen on localhost, we need to set up a reverse proxy to allow external access to it. We will use NGINX as the reverse proxy. Install NGINX, the Apache utilities and php-fpm using the following command.

# sudo apt-get install nginx apache2-utils php-fpm


Edit the php-fpm configuration file www.conf inside /etc/php/7.0/fpm/pool.d

listen.allowed_clients = 127.0.0.1,172.31.13.29


Restart php-fpm

# sudo service php7.0-fpm restart

Using htpasswd, create an admin user named "kibana" to access the Kibana web interface.

# sudo htpasswd -c /etc/nginx/htpasswd.users kibana


Enter a password at the prompt. Remember this password; we will use it to access the Kibana web interface.

Now create a certificate for NGINX

# sudo openssl req -x509 -batch -nodes -days 365 -newkey rsa:2048  -out /etc/ssl/certs/nginx.crt -keyout /etc/ssl/private/nginx.key -subj /CN=demohost.com


Edit the NGINX default server block.

# sudo vi /etc/nginx/sites-available/default

Delete the file's contents, and paste the following configuration into the file.

server_tokens off;
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";

server {
    listen 443 ssl;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;
    ssl_certificate /etc/ssl/certs/nginx.crt;
    ssl_certificate_key /etc/ssl/private/nginx.key;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    add_header Strict-Transport-Security "max-age=31536000;";
}

server {
    listen 80;
    listen [::]:80 default_server ipv6only=on;
    return 301 https://$host$request_uri;
}


We are not using the server_name directive, as we have configured our domain name in /etc/hosts and /etc/hostname as demohost.com. Also, since we have edited the NGINX default host (/etc/nginx/sites-available/default), demohost.com will be available in the browser once NGINX has started.

Save and exit. From now on, NGINX will direct the server's HTTP traffic to the Kibana application on port 5601.

Now restart NGINX to put our changes into effect:

# sudo service nginx restart

Now you can access Kibana by visiting the FQDN or the public IP address of your ELK server, i.e. http://elk_server_public_ip/. Enter the "kibana" credentials that you created earlier and you will be redirected to the Kibana welcome page, which will ask you to configure an index pattern.


Click filebeat-* in the top left sidebar and you will see the logs from the clients flowing into the dashboard.


You can also view the status page of the ELK server.

Conclusion:

That's all for the ELK server; install filebeat on any number of client systems and ship the logs to the ELK server for analysis. To make unstructured log data more functional, parse it properly and make it structured using grok. There are also a few awesome plug-ins available for use along with Kibana, to visualize the logs in a systematic way.

The post How to Setup ELK Stack to Centralize Logs on Ubuntu 16.04 appeared first on LinOxide.
