
How to Install OpenDCIM on CentOS 7

Data center infrastructure management (DCIM) is a growing challenge for data center managers, and a hot market for software vendors. The openDCIM project offers an open source alternative for companies seeking to improve their asset tracking and capacity planning. OpenDCIM is used for managing the infrastructure of a data center, no matter how small or large. DCIM means many different things to many different people, and there is a multitude of commercial applications available, but openDCIM does not claim to be a function-by-function replacement for commercial applications. It was initially developed in-house at Vanderbilt University Information Technology Services by Scott Milliken. The software is released under the GPL v3 license, so you are free to modify it and share it with others, as long as you acknowledge where it came from.

The main goal of openDCIM is to eliminate the excuse for anybody to ever track their data center inventory in a spreadsheet or word processing document again. It provides a complete physical inventory of the data center.

Features of OpenDCIM:

The following are some of its useful features:

  • Support for Multiple Rooms (Data Centers)
  • Computation of Center of Gravity for each cabinet
  • Template management for devices, with ability to override per device
  • Optional tracking of cable connections within each cabinet, and for each switch device
  • Archival functions for equipment sent to salvage/disposal
  • Management of the three key elements of capacity management - space, power, and cooling
  • Basic contact management and integration into existing business directory via UserID
  • Fault Tolerance Tracking - run a power outage simulation to see what would be affected as each source goes down

Prerequisites:

In order to install openDCIM on CentOS 7, we need to meet the following requirements on our server.

  • Web host running Apache 2.x (or higher) with an SSL Enabled site
  • MySQL 5.x (or higher) database
  • PHP 5.x (or higher)
  • User Authentication
  • Web Based Client

Installing Apache, PHP, MySQL

Our first step is to make sure that the complete LAMP stack has been properly configured, with Apache/PHP and MySQL/MariaDB running.

To do so, run the following command on your CentOS 7 server to install Apache, PHP with a few of its required modules, and the MariaDB server.

# yum install httpd php php-mysql php-mbstring mariadb-server

After resolving dependencies, the packages shown will be installed on your system once you type 'y' and press the Enter key.

[Image: LAMP packages]

Start and Enable Apache/MySQL services:

Once the packages are installed, start and enable the Apache and MariaDB services using the following commands, then check their status, which should be active and running.

# systemctl enable httpd.service
# systemctl start httpd.service

# systemctl enable mariadb.service
# systemctl start mariadb.service
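
To confirm that both services are active and running, check their status:

# systemctl status httpd.service mariadb.service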

[Image: starting services]

Create database for openDCIM

Before creating the database for openDCIM, secure your MySQL/MariaDB server by completing the following tasks after running the command shown.

# mysql_secure_installation

  • Set a root password
  • Remove anonymous users
  • Disallow root login remotely
  • Remove test database and access to it
  • Reload privilege tables

[Image: securing MySQL]

Now create a database for openDCIM after connecting to MariaDB.

# mysql -u root -p

MariaDB [(none)]> create database dcim;
MariaDB [(none)]> grant all privileges on dcim.* to 'dcim' identified by 'password';
MariaDB [(none)]> exit
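
To verify that the grant works, reconnect as the new 'dcim' user with the password you chose above and confirm the database is accessible:

# mysql -u dcim -p dcim
MariaDB [dcim]> show tables;
MariaDB [dcim]> exit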

[Image: openDCIM database]

Enable HTTPS

Run the command below to install the 'mod_ssl' package on your CentOS 7 server.

# yum -y install mod_ssl

Once the package is installed, generate the necessary keys and copy them to the proper directories using the commands below.

# cd /root

# openssl genrsa -out ca.key 1024

# openssl req -new -key ca.key -out ca.csr

# openssl x509 -req -days 365 -in ca.csr -signkey ca.key -out ca.crt

# cp ca.crt /etc/pki/tls/certs

# cp ca.key /etc/pki/tls/private/ca.key

# cp ca.csr /etc/pki/tls/private/ca.csr
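
Optionally, you can inspect the self-signed certificate you just generated; openssl can print its subject and validity period:

# openssl x509 -in /etc/pki/tls/certs/ca.crt -noout -subject -dates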

[Image: Enabling HTTPS]

Setup Server Name:

To set the server name, open the default web configuration file in your editor, search for 'ServerName', and add the following line.

# vim +/ServerName /etc/httpd/conf/httpd.conf

ServerName opendcim_server_name:443

Save and close the configuration file using ':wq!' and then restart the Apache web service.

# systemctl restart httpd.service

New Virtual Host for OpenDCIM:

Create a new configuration file for the openDCIM VirtualHost and put the following configuration in it.

# vim /etc/httpd/conf.d/opendcim_server_name.conf

<VirtualHost *:443>
    SSLEngine On
    SSLCertificateFile /etc/pki/tls/certs/ca.crt
    SSLCertificateKeyFile /etc/pki/tls/private/ca.key
    ServerAdmin you@example.net
    DocumentRoot /opt/openDCIM/opendcim
    ServerName opendcim.example.net

    <Directory /opt/openDCIM/opendcim>
        AllowOverride All
        AuthType Basic
        AuthName "openDCIM"
        AuthUserFile /opt/openDCIM/opendcim/.htpasswd
        Require valid-user
    </Directory>
</VirtualHost>

[Image: openDCIM VirtualHost]

After saving and closing the file, we need to enable basic user authentication to protect the openDCIM web directory, using the files referenced in the configuration above.

Run the commands below to create the password file and add a user:

# mkdir -p /opt/openDCIM/opendcim

# touch /opt/openDCIM/opendcim/.htpasswd

# htpasswd /opt/openDCIM/opendcim/.htpasswd Administrator

Next, open web access in the firewall; on CentOS 7, FirewallD is enabled by default and blocks access to the HTTPS port 443.

# firewall-cmd --zone=public --add-port=443/tcp --permanent
success

# firewall-cmd --reload
success
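
You can verify that the port is now open in the public zone:

# firewall-cmd --zone=public --list-ports
443/tcp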

[Image: openDCIM user setup]

Download and Install openDCIM

After finishing the configuration of the server, we now need to download the openDCIM package from its official web page.

[Image: openDCIM download]

Use the commands below to get the package onto your server.

# cd /opt/openDCIM/

# curl -O http://opendcim.org/packages/openDCIM-4.2.tar.gz

Extract the archive and create a symbolic link by running the commands below.

# tar zxf openDCIM-4.2.tar.gz

# ln -s openDCIM-4.2-release opendcim

You can also rename the directory openDCIM-4.2-release to 'opendcim' in case you don't want to create the symbolic link.
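
The symbolic link also makes future upgrades easier: you can extract a newer release alongside the old one and simply repoint the link, for example with a hypothetical later release:

# tar zxf openDCIM-4.3.tar.gz
# ln -sfn openDCIM-4.3-release opendcim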

Configuring OpenDCIM:

Now, prepare the configuration file for access to the database we created earlier. The password must match the one you used in the GRANT statement above.

# cd /opt/openDCIM/opendcim

# cp db.inc.php-dist db.inc.php

# vim db.inc.php

$dbhost = 'localhost';
$dbname = 'dcim';
$dbuser = 'dcim';
$dbpass = 'dcimpassword';

# systemctl restart httpd.service

Access OpenDCIM Web Portal:

Now open openDCIM in your browser to proceed with the web based installation.

https://your_server_name_or_IP/

You will be asked for authentication, and after providing the username and password you will be directed to the openDCIM web page, where you will be asked to create a new Department as shown.

[Image: openDCIM web]

After completing these parameters, switch to the Data Center tab and enter your new data center's details.

[Image: openDCIM data center]

Once you have created a data center, you will be able to create its cabinet inventory.

[Image: openDCIM cabinet]

You are done, and have finished the basic configuration of openDCIM.

[Image: openDCIM installed]

Conclusion:

Thank you for being with us; we have successfully set up openDCIM on our CentOS 7 server. Now you can easily manage your data centers, no matter how small or large your environment is. Do share your experience and leave your valuable comments.


Getting Started with Ansible on Command Line

Ansible is an open source software platform for configuration management, provisioning, application deployment and service orchestration. It can be used for configuring our servers in production, staging and development. It can also be used to manage application servers like web servers, database servers and many others. Similar configuration management systems include Chef, Puppet, Salt and Distelli; compared to all of these, Ansible is the simplest and most easily managed tool. The main advantages of using Ansible are as follows:

1. Modules can be written in any programming language.
2. No agent running on the client machine.
3. Easy to install and Manage.
4. Highly reliable.
5. Scalable.

In this article, I'll explain some of the basics about your first steps with Ansible.

Understanding the hosts file

Once you've installed Ansible, the first thing to understand is its inventory file. This file contains the list of target servers managed by Ansible. The default hosts file location is /etc/ansible/hosts. We can edit this file to include our target systems. The file defines groups in which you can classify your hosts as you prefer.

[Image: ansible_hosts]

Important things to note when creating the hosts file:

# - Comments begin with the '#' character
# - Blank lines are ignored
# - Groups of hosts are delimited by [header] elements
# - You can enter hostnames or IP addresses
# - A hostname/ip can be a member of multiple groups
# - Remote hosts can have assignments in more than one group
# - Include host ranges in one string as server-[01:12]-example.com

PS: It's not recommended to modify the default inventory file; instead, we can create our own custom inventory files at any location, as per our convenience.
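
For example, a small custom inventory using the host-range syntax mentioned above could look like this (the group name and domain are placeholders):

[rangedemo]
server-[01:12]-example.com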

How does Ansible work?

First of all, the Ansible admin client connects to the target server using SSH. We don't need to set up any agents on the client servers. All you need is Python and a user that can log in and execute the scripts. Once the connection is established, Ansible starts gathering facts about the client machine, such as its operating system, running services and packages. We can execute commands, copy/modify/delete files and folders, and manage or configure packages and services easily using Ansible. I'll demonstrate it with the help of my demo setup.

My client servers are 45.33.76.60 and 139.162.35.39. I created my custom inventory hosts file under my user's home directory. Please see my inventory file below, with three groups, namely webservers, production and Testing.

In webservers, I've included both of my client servers, then separated them into two other groups: one in production and the other in Testing.

linuxmonty@linuxmonty-Latitude-E4310:~$ cat hosts
[webservers]
139.162.35.39
45.33.76.60

[production]
139.162.35.39

[Testing]
45.33.76.60

Establishing SSH connections

We need to create SSH keys on the admin server and copy them over to the target servers to establish the SSH connections. Let's take a look at how I did that for my client servers.

linuxmonty@linuxmonty-Latitude-E4310:~$ ssh-keygen -t rsa -b 4096
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2e:2f:32:9a:73:6d:ba:f2:09:ac:23:98:0c:fc:6c:a0 linuxmonty@linuxmonty-Latitude-E4310
The key's randomart image is:
+--[ RSA 4096]----+
| |
| |
| |
| |
|. S |
|.+ . |
|=.* .. . |
|Eoo*+.+o |
|o.+*=* .. |
+-----------------+

Copying the SSH keys

This is how we copy the SSH keys from the admin server to the target servers.

Client 1:

linuxmonty@linuxmonty-Latitude-E4310# ssh-copy-id root@139.162.35.39
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@139.162.35.39's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@139.162.35.39'"
and check to make sure that only the key(s) you wanted were added.

linuxmonty@linuxmonty-Latitude-E4310#

Client 2:

linuxmonty@linuxmonty-Latitude-E4310# ssh-copy-id root@45.33.76.60
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@45.33.76.60's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@45.33.76.60'"
and check to make sure that only the key(s) you wanted were added.

linuxmonty@linuxmonty-Latitude-E4310#

Once you execute these commands from your admin server, your keys will be added to the target servers and saved in authorized_keys.

Familiarizing some basic Ansible Modules

Modules control system resources, configuration, packages, files, etc. There are about 450+ modules in Ansible. First of all, let's use a module to check the connectivity between the admin server and the target servers: we can run the ping module from the admin server to confirm connectivity.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m ping -u root
139.162.35.39 | success >> {
"changed": false,
"ping": "pong"
}

45.33.76.60 | success >> {
"changed": false,
"ping": "pong"
}

-i : Selects the inventory file
-m : Selects the module to run
-u : Specifies the user for execution

Since you're connecting to the target servers from a regular user account, the -u option tells Ansible to execute the module as the root user on the targets.

This is how to check the inventory status in the hosts file.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts webservers --list-hosts
139.162.35.39
45.33.76.60
linuxmonty@linuxmonty-Latitude-E4310:~$
linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production --list-hosts
139.162.35.39
linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts Testing --list-hosts
45.33.76.60

Setup Module:

Now run the setup module to gather more facts about your target servers, which helps in organizing your playbooks. This module provides information about the server hardware, network and some Ansible-related software settings. These facts can be referenced in the playbooks section and represent discovered variables about your system. They can also be used to implement conditional execution of tasks.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m setup -u root

[Image: setup]

You can view the server architecture, CPU information, Python version, memory, OS version, etc. by running this module.

Command Module:

Here are some examples of command module usage. We can pass any command as an argument for the module to execute.

uptime:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m command -a 'uptime' -u root
139.162.35.39 | success | rc=0 >>
14:55:31 up 4 days, 23:56, 1 user, load average: 0.00, 0.01, 0.05

45.33.76.60 | success | rc=0 >>
14:55:41 up 15 days, 3:20, 1 user, load average: 0.20, 0.07, 0.06

hostname:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m command -a 'hostname' -u root
139.162.35.39 | success | rc=0 >>
client2.production.com

45.33.76.60 | success | rc=0 >>
client1.testing.com

w:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m command -a 'w' -u root
139.162.35.39 | success | rc=0 >>
08:07:55 up 4 days, 17:08, 2 users, load average: 0.00, 0.01, 0.05
USER TTY LOGIN@ IDLE JCPU PCPU WHAT
root pts/0 07:54 7:54 0.00s 0.00s -bash
root pts/1 08:07 0.00s 0.05s 0.00s w

45.33.76.60 | success | rc=0 >>
08:07:58 up 14 days, 20:33, 2 users, load average: 0.03, 0.03, 0.05
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root pts/0 101.63.79.157 07:54 8:01 0.00s 0.00s -bash
root pts/1 101.63.79.157 08:07 0.00s 0.05s 0.00s w

Similarly, we can execute any Linux command across multiple target servers using the command module in Ansible.

Managing User and Groups

Ansible provides a module called "user" which server this purpose. The ‘user’ module allows easy creation and manipulation of existing user accounts, as well as removal of the existing user accounts as per our needs.

Usage : # ansible -i inventory selection -m user -a "name=username1 password=<crypted password here>"

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m user -a "name=adminsupport password=<default123>" -u root
45.33.76.60 | success >> {
"changed": true,
"comment": "",
"createhome": true,
"group": 1004,
"home": "/home/adminsupport",
"name": "adminsupport",
"password": "NOT_LOGGING_PASSWORD",
"shell": "/bin/bash",
"state": "present",
"system": false,
"uid": 1004
}

On the server above, this command created the adminsupport user. But on the server 139.162.35.39 this user was already present; hence, it skipped any further modifications for that user, as the warning in the output below shows.

139.162.35.39 | success >> {
"changed": true,
"comment": "",
"createhome": true,
"group": 1001,
"home": "/home/adminsupport",
"name": "adminsupport",
"password": "NOT_LOGGING_PASSWORD",
"shell": "/bin/bash",
"state": "present",
"stderr": "useradd: warning: the home directory already exists.\nNot copying any file from skel directory into it.\nCreating mailbox file: File exists\n",
"system": false,
"uid": 1001
}

Usage : ansible -i inventory selection -m user -a 'name=username state=absent'

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts Testing -m user -a "name=adminsupport state=absent" -u root
45.33.76.60 | success >> {
"changed": true,
"force": false,
"name": "adminsupport",
"remove": false,
"state": "absent"
}

This command deletes the user adminsupport from our Testing server 45.33.76.60.

File Transfers

Ansible provides a module called "copy" to enhance the file transfers across multiple servers. It can securely transfer a lot of files to multiple servers in parallel.


Usage : ansible -i inventory selection -m copy -a "src=file_name dest=file path to save"

I'm copying a shell script called test.sh from my admin server to /root/ on all my target servers. Please see the command usage below:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m copy -a "src=test.sh dest=/root/" -u root
139.162.35.39 | success >> {
"changed": true,
"dest": "/root/test.sh",
"gid": 0,
"group": "root",
"md5sum": "d910e95fdd8efd48d7428daafa7706ec",
"mode": "0755",
"owner": "root",
"size": 107,
"src": "/root/.ansible/tmp/ansible-tmp-1463040011.67-93143679295729/source",
"state": "file",
"uid": 0
}

45.33.76.60 | success >> {
"changed": true,
"dest": "/root/test.sh",
"gid": 0,
"group": "root",
"md5sum": "d910e95fdd8efd48d7428daafa7706ec",
"mode": "0755",
"owner": "root",
"size": 107,
"src": "/root/.ansible/tmp/ansible-tmp-1463040013.85-235107847216893/source",
"state": "file",
"uid": 0
}

Output Result:

[root@client2 ~]# ll /root/test.sh
-rwxr-xr-x 1 root root 107 May 12 08:00 /root/test.sh

If you use playbooks, you can take advantage of the template module to perform the same task.

It also provides a module called "file" which will help us to change the ownership and permissions of the files across multiple servers. We can pass these options directly to the "copy" command. This module can also be used to create or delete the files/folders.

Example :

I've modified the owner and group of an existing file test.sh on the destination server and changed its permissions to 600.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m file -a "dest=/root/test.sh mode=600 owner=adminsupport group=adminsupport" -u root
139.162.35.39 | success >> {
"changed": true,
"gid": 1001,
"group": "adminsupport",
"mode": "0600",
"owner": "adminsupport",
"path": "/root/test.sh",
"size": 107,
"state": "file",
"uid": 1001
}

Output :

[root@client2 ~]# ll | grep test.sh
-rw------- 1 adminsupport adminsupport 107 May 12 08:00 test.sh

Creating A folder

Now, I need to create a folder with the desired ownership and permissions. Let's see the command to achieve that. I'm creating a folder "ansible" on my production server group and assigning it to the owner "adminsupport" with 755 permissions.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m file -a "dest=/root/ansible mode=755 owner=adminsupport group=adminsupport state=directory" -u root
139.162.35.39 | success >> {
"changed": true,
"gid": 1001,
"group": "adminsupport",
"mode": "0755",
"owner": "adminsupport",
"path": "/root/ansible",
"size": 4096,
"state": "directory",
"uid": 1001
}

Output :

[root@client2 ~]# ll | grep ansible
drwxr-xr-x 2 adminsupport adminsupport 4096 May 12 08:45 ansible
[root@client2 ~]# pwd
/root

Deleting a folder

We can even use this module for deleting folders/files from multiple target servers. Please see how I did that.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m file -a "dest=/root/ansible state=absent" -u root
139.162.35.39 | success >> {
"changed": true,
"path": "/root/ansible",
"state": "absent"
}

The only parameter that determines the operation is the "state" variable; it is set to absent to delete that particular folder from the server.
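
The file module supports other states as well; for instance, state=touch creates an empty file if it does not already exist (the path here is only an illustration):

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m file -a "dest=/root/notes.txt state=touch" -u root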

Managing Packages

Let's see how we can manage packages using Ansible. We need to identify the platform of the target servers and use the package manager module that suits it, yum or apt, according to the target server's OS. Ansible also has modules for managing packages on many other platforms.

Here, I'm installing the vsftpd package on the production server in my inventory.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m yum -a 'name=vsftpd state=present' -u root
139.162.35.39 | success >> {
"changed": true,
"msg": "",
"rc": 0,
"results": [
"Loaded plugins: fastestmirror\nLoading mirror speeds from cached hostfile\n * base: mirrors.linode.com\n * epel: mirror.wanxp.id\n * extras: mirrors.linode.com\n * updates: mirrors.linode.com\nResolving Dependencies\n--> Running transaction check\n---> Package vsftpd.x86_64 0:3.0.2-11.el7_2 will be installed\n--> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nInstalling:\n vsftpd x86_64 3.0.2-11.el7_2 updates 167 k\n\nTransaction Summary\n================================================================================\nInstall 1 Package\n\nTotal download size: 167 k\nInstalled size: 347 k\nDownloading packages:\nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n Installing : vsftpd-3.0.2-11.el7_2.x86_64 1/1 \n Verifying : vsftpd-3.0.2-11.el7_2.x86_64 1/1 \n\nInstalled:\n vsftpd.x86_64 0:3.0.2-11.el7_2 \n\nComplete!\n"
]
}

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m yum -a 'name=vsftpd state=present' -u root
139.162.35.39 | success >> {
"changed": false,
"msg": "",
"rc": 0,
"results": [
"vsftpd-3.0.2-11.el7_2.x86_64 providing vsftpd is already installed"
]
}

If you notice, when I executed the Ansible command to install the package the first time, the "changed" variable was "true", which means the command installed the package. But when I ran that command again, it reported "changed" as "false", which means it checked for the package, found it already installed, and did nothing on that server.

Similarly, we can update or delete a package; the only variable that determines this is the state variable, which can be set to latest to install the latest available package and absent to remove the package from the server.

Examples:

Updating the package:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m yum -a 'name=vsftpd state=latest' -u root
139.162.35.39 | success >> {
"changed": false,
"msg": "",
"rc": 0,
"results": [
"All packages providing vsftpd are up to date"
]
}

This shows that the installed package is already the latest version and there are no updates available.

Removing the package:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m yum -a 'name=vsftpd state=absent' -u root
139.162.35.39 | success >> {
"changed": true,
"msg": "",
"rc": 0,
"results": [
"Loaded plugins: fastestmirror\nResolving Dependencies\n--> Running transaction check\n---> Package vsftpd.x86_64 0:3.0.2-11.el7_2 will be erased\n--> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nRemoving:\n vsftpd x86_64 3.0.2-11.el7_2 @updates 347 k\n\nTransaction Summary\n================================================================================\nRemove 1 Package\n\nInstalled size: 347 k\nDownloading packages:\nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n Erasing : vsftpd-3.0.2-11.el7_2.x86_64 1/1 \n Verifying : vsftpd-3.0.2-11.el7_2.x86_64 1/1 \n\nRemoved:\n vsftpd.x86_64 0:3.0.2-11.el7_2 \n\nComplete!\n"
]
}

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m yum -a 'name=vsftpd state=absent' -u root
139.162.35.39 | success >> {
"changed": false,
"msg": "",
"rc": 0,
"results": [
"vsftpd is not installed"
]
}

The first time we ran the Ansible command it removed the vsftpd package; we then ran it again to confirm that the package no longer exists on the server.

Managing Services

It is necessary to manage the services installed on the target servers. Ansible provides the service module for that. We can use this module to enable services on boot and to start/stop/restart them. Please see the examples for each case.

Starting/Enabling a Service:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m service -a 'name=nginx enabled=yes state=started' -u root
45.33.76.60 | success >> {
"changed": false,
"enabled": true,
"name": "nginx",
"state": "started"
}

139.162.35.39 | success >> {
"changed": false,
"enabled": true,
"name": "nginx",
"state": "started"
}

Stopping a Service:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m service -a 'name=nginx state=stopped' -u root
139.162.35.39 | success >> {
"changed": true,
"name": "nginx",
"state": "stopped"
}

45.33.76.60 | success >> {
"changed": true,
"name": "nginx",
"state": "stopped"
}

Restarting a Service:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m service -a 'name=nginx state=restarted' -u root
139.162.35.39 | success >> {
"changed": true,
"name": "nginx",
"state": "started"
}

45.33.76.60 | success >> {
"changed": true,
"name": "nginx",
"state": "started"
}

As you can see, the state variable is set to started, restarted or stopped to manage the service.

Playbooks

Playbooks are Ansible's configuration, deployment, and orchestration language. They can assign different roles, perform tasks like copying or deleting files/folders, make use of mature modules that carry most of the functionality, or substitute variables to make your deployments dynamic and reusable.

Playbooks define your deployment steps and configuration. They are modular, can contain variables, and can be used to orchestrate steps across multiple machines. They are configuration files written in simple YAML, Ansible's automation language, and can contain multiple tasks.

Here is an example of a simple Playbook.

linuxmonty@linuxmonty-Latitude-E4310:~$ cat simpleplbook.yaml
---

- hosts: production
  remote_user: root

  tasks:
    - name: Setup FTP
      yum: pkg=vsftpd state=installed
    - name: start FTP
      service: name=vsftpd state=started enabled=yes

This is a simple playbook with two tasks as below:

  1. Install FTP server
  2. Ensure the Service status

Let's see each statement in detail.

- hosts: production - This selects the inventory group on which this play runs.

remote_user: root - This specifies the user that executes the tasks on the target servers.

tasks:
1. - name: Setup FTP
2. yum: pkg=vsftpd state=installed
3. - name: start FTP
4. service: name=vsftpd state=started enabled=yes

These specify the two tasks to be performed when running this playbook. We can divide them into four statements for more clarity. The first statement names the task, which is setting up an FTP server, and the second performs it by installing the package on the target server. The third statement names the next task, and the fourth ensures the service status by starting the FTP server and enabling it on boot.
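
Before running it, you can optionally ask ansible-playbook to validate the playbook's syntax without executing anything:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible-playbook -i hosts simpleplbook.yaml --syntax-check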

Now let's see the output of this playbook.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible-playbook -i hosts simpleplbook.yaml

PLAY [production] *************************************************************

GATHERING FACTS ***************************************************************
ok: [139.162.35.39]

TASK: [Setup FTP] *************************************************************
changed: [139.162.35.39]

TASK: [start FTP] *************************************************************
changed: [139.162.35.39]

PLAY RECAP ********************************************************************
139.162.35.39 : ok=3 changed=2 unreachable=0 failed=0

We can see that playbooks are executed sequentially according to the tasks specified in them. First it selects the inventory, and then it performs the plays one by one.

Application Deployments

I'm going to set up my web servers using a playbook. I created a playbook for my "webservers" inventory group. Please see my playbook details below:

linuxmonty@linuxmonty-Latitude-E4310:~$ cat webservers_setup.yaml
---

- hosts: webservers
  vars:
    - Welcomemsg: "Welcome to Ansible Application Deployment"

  tasks:
    - name: Setup Nginx
      yum: pkg=nginx state=installed
    - name: Copying the index page
      template: src=index.html dest=/usr/share/nginx/html/index.html
    - name: Enable the service on boot
      service: name=nginx enabled=yes
    - name: start Nginx
      service: name=nginx state=started

Now let us run this playbook from my admin server to deploy it.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible-playbook -i hosts -s webservers_setup.yaml -u root

PLAY [webservers] *************************************************************

GATHERING FACTS ***************************************************************
ok: [139.162.35.39]
ok: [45.33.76.60]

TASK: [Setup Nginx] ***********************************************************
changed: [139.162.35.39]
changed: [45.33.76.60]

TASK: [Copying the index page] ************************************************
changed: [139.162.35.39]
changed: [45.33.76.60]

TASK: [Enable the service on boot] ********************************************
changed: [139.162.35.39]
changed: [45.33.76.60]

TASK: [start Nginx] ***********************************************************
changed: [139.162.35.39]
changed: [45.33.76.60]

PLAY RECAP ********************************************************************
139.162.35.39 : ok=5 changed=4 unreachable=0 failed=0
45.33.76.60 : ok=5 changed=4 unreachable=0 failed=0

This playbook describes four tasks as evident from the result. After running this playbook, we can confirm the status by checking the target servers in the browser.

[Image: ansiblewebserver]

Now, I'm planning to add a PHP module, namely php-gd, to the target servers. I can edit my playbook to include that task too and run it again. Let's see what happens now. My modified playbook is below:

linuxmonty@linuxmonty-Latitude-E4310:~$ cat webservers_setup.yaml
---

- hosts: webservers
  vars:
    - Welcomemsg: "Welcome to Nginx default page"
    - WelcomePHP: "PHP GD module enabled"

  tasks:
    - name: Setup Nginx
      yum: pkg=nginx state=installed
    - name: Copying the index page
      template: src=index.html dest=/usr/share/nginx/html/index.html
    - name: Enable the service on boot
      service: name=nginx enabled=yes
    - name: start Nginx
      service: name=nginx state=started
    - name: Setup PHP-GD
      yum: pkg=php-gd state=installed

As you can see, I appended the new variable and the final task to my playbook. So this is how it goes now.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible-playbook -i hosts -s webservers_setup.yaml -u root

PLAY [webservers] *************************************************************

GATHERING FACTS ***************************************************************
ok: [139.162.35.39]
ok: [45.33.76.60]

TASK: [Setup Nginx] ***********************************************************
ok: [139.162.35.39]
ok: [45.33.76.60]

TASK: [Copying the index page] ************************************************
changed: [139.162.35.39]
changed: [45.33.76.60]

TASK: [Enable the service on boot] ********************************************
ok: [139.162.35.39]
ok: [45.33.76.60]

TASK: [start Nginx] ***********************************************************
ok: [139.162.35.39]
ok: [45.33.76.60]

TASK: [Setup PHP-GD] **********************************************************
changed: [45.33.76.60]
changed: [139.162.35.39]

PLAY RECAP ********************************************************************
139.162.35.39 : ok=6 changed=2 unreachable=0 failed=0
45.33.76.60 : ok=6 changed=2 unreachable=0 failed=0

On close analysis of this result, you can see that only two tasks reported changes on the target servers: one is the index file modification and the other is the installation of our additional PHP module. Now we can see the changes on the target servers in the browser.

[Image: PHPmodule+Nginx]

Roles

Ansible roles are a special kind of playbook that is fully self-contained and portable. A role contains the tasks, variables, configuration templates and other supporting files needed to complete complex orchestration, so roles can be used to simplify more complex operations. You can create different roles like common, webservers, db_servers, etc., each with a different purpose, and include them in the main playbook by just mentioning the roles. This is how we create a role:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible-galaxy init common
common was created successfully

Now, I've created a role named common to perform some of the common tasks on all my target servers. Each role contains its own tasks, configuration templates, variables, handlers, etc.

total 40
drwxrwxr-x 9 linuxmonty linuxmonty 4096 May 13 14:06 ./
drwxr-xr-x 34 linuxmonty linuxmonty 4096 May 13 14:06 ../
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 defaults/
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 files/
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 handlers/
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 meta/
-rw-rw-r-- 1 linuxmonty linuxmonty 1336 May 13 14:06 README.md
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 tasks/
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 templates/
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 vars/

We can create our YAML files inside each of these folders as per our purpose. Later on, we can run all these tasks by just specifying the roles inside a playbook, as illustrated below.
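
For illustration, a minimal playbook that applies the common role to the webservers group could look like this (assuming the role directory sits in the default roles path next to the playbook):

---
- hosts: webservers
  roles:
    - common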

I hope this documentation provided you with the basic knowledge needed to manage your servers with Ansible. Thank you for reading. I welcome your valuable suggestions and comments.

Happy Automation!!


How to Install Docker Engine in Ubuntu 16.04 LTS Xenial

Docker is a free and open source project for automating the deployment of apps in software containers, providing an open platform to pack, ship and run any application anywhere. It makes awesome use of the resource isolation features of the Linux kernel such as cgroups, kernel namespaces, and union-capable file systems. It makes deploying and scaling web apps, databases and back-end services pretty easy and simple, independent of any particular stack or provider. The latest release, version 1.11.1, contains many additional features and bug fixes. In this article, we'll be installing the latest Docker Engine 1.11.1 on a machine running Ubuntu 16.04 LTS "Xenial".

System Requirements

Following are the system requirements that are essential to run the latest Docker Engine in Ubuntu 16.04 LTS Xenial.

  • Docker currently requires a 64-bit host, so we'll need a 64-bit version of Ubuntu Xenial installed.
  • As we'll need to download container images frequently, good internet connectivity on the host is required.
  • Make sure that the machine's CPU supports virtualization technology and that virtualization support is enabled in the BIOS.
  • Ubuntu Xenial running Linux kernel version 3.8 or above is supported; both points can be checked as shown below.
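
You can quickly verify the architecture and kernel version from a terminal; on the machine used in this article the kernel is 4.4.0-22, so the output looks like this:

$ uname -m
x86_64
$ uname -r
4.4.0-22-generic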

Updating and Upgrading Xenial

First of all, we'll need to update the local repository index from the nearest Ubuntu mirror so that we have the index of all the latest packages available through the internet. To do so, run the following command in a terminal or console.

$ sudo apt-get update

As our local repository index has been updated, we'll upgrade our Ubuntu Xenial to the latest packages available in the repositories via the apt-get package manager.

$ sudo apt-get upgrade

Installing Docker Engine

Once our system has been upgraded, we'll move towards the installation of the latest Docker Engine, version 1.11, on our machine running Ubuntu 16.04 Xenial LTS. There are several ways to install it on Ubuntu: we can run a simple script written by the official developers, or we can manually add Docker's official repository and install from there. In this tutorial, we'll show both methods.

Manual Installation

1. Adding the Repository

First of all, we'll need to add the GPG key for the Docker repository.

$ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

[Image: Adding GPG Key]

Now that the GPG key for the Docker repository has been added to our machine, we need to add the repository source to our APT sources list. To do so, we'll use a text editor to create a file named docker.list under the /etc/apt/sources.list.d/ directory.

$ sudo nano /etc/apt/sources.list.d/docker.list

Then, add the following line to that file in order to add the repository to APT's sources.

deb https://apt.dockerproject.org/repo ubuntu-xenial main

[Image: Adding Docker Repository]

2. Updating the APT's Index

As our Docker repository has been added, we'll now update the local repository index of the APT package manager so that we can use it to install the latest release. To update the local repository index, run the following command in a terminal or console.

$ sudo apt-get update

3. Installing Linux Kernel Extras

Now, as recommended, we'll install the Linux Kernel Extras on our machine running Ubuntu Xenial. This package is important because it enables the use of the aufs storage driver. To install the linux-image-extra kernel package matching the running kernel, run the following command.

$ sudo apt-get install linux-image-extra-$(uname -r)

[Image: Installing Linux Image Extras]

Here, as we have Linux kernel 4.4.0-22 installed and running, the kernel extras for that kernel version will be installed.

4. Installing Docker Engine

Once everything is set up, we'll now move to the main part: installing the latest Docker Engine on our Ubuntu 16.04 LTS Xenial machine. To do so, run the following simple apt-get command.

$ sudo apt-get install docker-engine

[Image: Installing Docker Engine]

Finally, we are done installing Docker Engine. Once the installation process is complete, we'll move to the next step, where we'll add our current user to the docker group.

One-Script installation

If we want to automate everything done in the manual installation method above, we can follow this step. As mentioned, the Docker developers have written an awesome script that installs Docker Engine on a machine running Ubuntu 16.04 LTS Xenial fully automatically. This method is pretty fast, easy and simple to perform; even a person with little knowledge of Ubuntu 16.04 can easily install Docker using this script. Before we start, we'll need to make sure that wget is installed on the machine. To install the wget downloader, run the following command.

$ sudo apt-get install wget

Once the wget downloader is installed on our machine, we can run the following wget command to fetch and run Docker's official script, which installs the latest Docker Engine.

$ wget -qO- https://get.docker.com/ | sh

Adding User to Docker Group

Now, we'll add our user to the docker group. Doing so allows the Docker daemon to grant users in the docker group permission to run and manage Docker containers.

$ sudo usermod -aG docker arun

Once done, we'll need to log out and log back in to the system for the change to take effect.
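
After logging back in, you can confirm that the group membership took effect; 'docker' should appear in the output (replace arun with your username):

$ groups arun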

Starting the Docker Daemon

Next, we'll start the Docker daemon so that we can run, manage and control containers and images on our Ubuntu machine. As Ubuntu 16.04 LTS Xenial runs systemd as its default init system, we'll run the following systemctl command to start the Docker daemon.

$ sudo systemctl start docker
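
Optionally, we can also enable the daemon so that it starts automatically at boot:

$ sudo systemctl enable docker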

Checking the version

As our Docker daemon has started, we'll now test whether it's installed and running properly by checking the version of Docker Engine installed on the machine.

$ docker -v

Docker version 1.11.1, build 5604cbe

Since version 1.11.1 was the latest release available at the time of writing this article, we should see the above output.

Running Docker Containers

Now, we'll run our first Docker container. If everything above is set up properly, we'll be able to run a container. In this tutorial, we'll run the all-time favorite testing container, Hello World. To run the hello-world container, run the following docker command.

$ docker run hello-world

[Image: Hello World Docker]

Doing this should print an output "Hello from Docker." from the container. This verifies that we have successfully installed Docker Engine and that it is capable of running containers.

To check which images were pulled while running the hello-world container, run the following docker command.

$ docker images

Managing Docker

As our Docker is running successfully, we'll also need to learn how to manage it. In this tutorial, we'll look at a few basic docker commands used to stop and remove containers and to pull and remove images.

Stopping a Running Container

Now, if we want to stop a running container, first run the following command to see the list of containers.

$ docker ps -a

Then, we'll need to run the following docker stop command with the respective container id.

$ docker stop 646ed6509700

Removing a Container

To remove a stopped container, we'll need to run the following command specifying the stopped unused container id.

$ docker rm 646ed6509700

Pulling an Image

In order to pull a docker image, we'll need to run the pull command.

$ docker pull ubuntu

[Image: Pulling Docker Ubuntu Image]

The above command pulls the latest image of ubuntu from the Docker Registry Hub.

Removing an Image

It is pretty easy to remove a Docker image. First, we'll need to list the available images on our machine.

$ docker images

Then, we'll run the following command to remove that image.

$ docker rmi ubuntu

[Image: Removing Docker Image]

There are many more commands for managing Docker; see the official Docker documentation for details.

Conclusion

Docker is an awesome technology enabling us to easily pack, run and ship applications independent of platform. It is pretty easy to install and run the latest Docker Engine on the latest Ubuntu release, Ubuntu 16.04 LTS Xenial. Once the installation is done, we can move further towards managing and networking containers and more. If you have any questions, suggestions or feedback, please write them in the comment box below. Thank you! Enjoy :-)


How to Install ReaR (Relax and Recover) on CentOS 7

ReaR (Relax-and-Recover) is a Linux bare metal disaster recovery and system migration solution. It is a true disaster recovery solution that creates recovery media from a running Linux system. If a hardware component fails, an administrator can boot the standby system with the ReaR rescue media and put the system back to its previous state. ReaR restores the partitioning and formatting of the hard disk, all data, and the boot loader configuration. ReaR is also well suited as a migration tool, because the restoration does not have to take place on the same hardware as the original. It builds the rescue medium with all existing drivers, and the restored system adjusts automatically to the changed hardware.

ReaR even detects changed network cards, as well as different storage scenarios with their respective drivers (migrating IDE to SATA or SATA to CCISS) and modified disk layouts. Relax-and-Recover was designed to be easy to set up, requires no maintenance and is there to assist when disaster strikes. Its setup-and-forget nature removes any excuse for not having a disaster recovery solution implemented, so there is no excuse for not using it.

Prerequisites:

Relax-and-Recover is written entirely in Bash and does not require any external programs. However, the rescue system created by Relax-and-Recover requires two programs to work: 'mingetty' and 'sfdisk'. All other required programs, like sort, dd, grep, etc., are already present in a minimal installation.

Let's start by updating the system, using the command below on your CentOS 7 server.

# yum -y update

Make sure the following dependencies are also installed on your system, or you will get errors about missing packages.

# yum install syslinux syslinux-extlinux

[Image: syslinux extlinux]

Install Relax-and-Recover

Many Linux distributions ship Relax-and-Recover as part of their distribution; you can refer to the Relax-and-Recover download page to get its latest stable release.

Let's run the 'yum' command below to install the rear package.

# yum install rear

The package, including its required dependencies, will be installed after you type 'y' to continue.

[Image: install relax and recover]

You can also start by cloning the Relax-and-Recover sources from Github with below command.

# git clone git://github.com/rear/rear.git

Setup USB Media:

Prepare the USB media that Relax-and-Recover will be using. Here we are using an external drive, which is '/dev/sdb'. Change '/dev/sdb' to the correct device in your situation.

Run the command below to format the device; note that this erases all data on it.

# /usr/sbin/rear format /dev/sdb

Relax-and-Recover asks you to confirm that you want to format the device; type 'Yes' and hit 'Enter'.

USB device /dev/sdb must be formatted with ext2/3/4 or btrfs file system
Please type Yes to format /dev/sdb in ext3 format: Yes

The device has been labeled REAR-000 by the format workflow. Now edit the '/etc/rear/local.conf' configuration file with the configuration below.

# vim /etc/rear/local.conf

### write the rescue initramfs to USB and update the USB bootloader
OUTPUT=USB
#
#### create a backup using the internal NETFS method, using 'tar'
BACKUP=NETFS
#
#### write both rescue image and backup to the device labeled REAR-000
BACKUP_URL=usb:///dev/disk/by-label/REAR-000

Create Rescue Image:

Now you are ready to create a rescue image. Let's run the command below with the -v option to see verbose output.

# /usr/sbin/rear -v mkrescue

[Image: create rescue image]

You might want to check the log file for possible errors or see what Relax-and-Recover is doing.

# tail -f /var/log/rear/rear-centos7.log

2016-05-16 00:19:52 Unmounting '/tmp/rear.Ir6gqwz2ROig9on/outputfs'
umount: /tmp/rear.Ir6gqwz2ROig9on/outputfs (/dev/sdb1) unmounted
rmdir: removing directory, '/tmp/rear.Ir6gqwz2ROig9on/outputfs'
2016-05-16 00:19:52 Finished running 'output' stage in 4 seconds
2016-05-16 00:19:52 Finished running mkrescue workflow
2016-05-16 00:19:52 Running exit tasks.
2016-05-16 00:19:52 Finished in 93 seconds
2016-05-16 00:19:52 Removing build area /tmp/rear.Ir6gqwz2ROig9on
rmdir: removing directory, '/tmp/rear.Ir6gqwz2ROig9on'
2016-05-16 00:19:53 End of program reached

Now reboot your system and try to boot from the USB device. If you are able to boot from it, your rescue media works. You can also check the media by mounting the drive, as shown below.
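
To inspect the rescue media without rebooting, you can mount the USB partition manually (assuming the rescue partition is /dev/sdb1):

# mount /dev/sdb1 /mnt
# ls /mnt
# umount /mnt

Now let's dive into the advanced Relax-and-Recover options and start creating full backups.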

# /usr/sbin/rear -v mkbackup

[Image: rear full backup]

Rescue system:

Relax-and-Recover does not automatically add itself to the GRUB bootloader; it only copies itself to your /boot folder. To enable a GRUB menu entry for the rescue system, add the line below to your local configuration.

GRUB_RESCUE=1

The entry in the bootloader is password protected. The default password is REAR. Change it in your own 'local.conf' file.

GRUB_RESCUE_PASSWORD="SECRET"

Storing on a central NFS server:

The most straightforward way to store your DR images is using a central NFS server. The configuration below will store both a backup and the rescue CD in a directory on the share.

OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL="nfs://192.168.122.1/nfs/rear/"

Relax-and-Recover Configurations:

To configure Relax-and-Recover you edit the configuration files in the '/etc/rear/' directory. All *.conf files there are part of the configuration, but only 'site.conf' and 'local.conf' are intended for user configuration; all other configuration files hold defaults for various distributions and should not be changed.

In almost all circumstances you have to configure two main settings and their parameters: The BACKUP method and the OUTPUT method.

The backup method defines how your data is saved and whether Relax-and-Recover should back up your data as part of the mkrescue process or whether you use an external application, e.g. backup software, to archive your data.

The output method defines how the rescue system is written to disk and how you plan to boot the failed computer from the rescue system. You can look at '/usr/share/rear/conf/default.conf' for an overview of the possible methods and their options.

Using Relax-and-Recover

To use Relax-and-Recover you always call the main script '/usr/sbin/rear'. To get the list of all available commands, run the command below.

# rear help

[Image: rear usage]

To view/verify your configuration, run 'rear dump'. It will print out the current settings for BACKUP and OUTPUT methods and some system information.

# rear dump

To recover your system, start the computer from the rescue system and run 'rear recover'. Your system will be recovered, and you can restart it and continue to use it normally.

Conclusion:

Relax-and-Recover (ReaR) is the leading open source disaster recovery solution, and the successor to mkcdrec. It was designed to be easy to set up, requires no maintenance, and assists when disaster strikes. This was a detailed article on ReaR installation and its use cases. Feel free to get back to us in case of any difficulty; just leave us your comments or suggestions.


How to Install Chef Workstation / Server / Node on CentOS 7

Chef is an automation platform that configures and manages your infrastructure. It transforms infrastructure into code. It is a Ruby-based configuration management tool. This automation platform consists of a Chef workstation, a Chef server and Chef clients, which are the nodes managed by the Chef server. All the Chef configuration files, recipes, cookbooks, templates, etc. are created and tested on the Chef workstation and uploaded to the Chef server, which then distributes them across every registered node in the organisation. It is an ideal automation framework for Ceph and OpenStack. Not only does it give us complete control, but it is also super easy to work with.

In this article, I'm explaining the steps I followed for implementing a Chef automation environment on my CentOS 7 servers.

Pre-requisites

  • It is recommended to have an FQDN hostname
  • Chef supports only 64-bit architectures
  • Proper network/firewall/hosts configuration is recommended

How does Chef work?

[Image: work procedure]

Chef comprises a workstation, which is configured to develop recipes and cookbooks. The workstation is also configured to run knife and synchronizes with the chef-repo to keep it up to date. It helps in configuring organizational policy, including defining roles and environments and ensuring that critical data is stored in data bags. Once the recipes/cookbooks are tested on the workstation, we can upload them to our Chef server. The Chef server stores these recipes and assigns them to nodes depending on their requirements. Nodes communicate only with the Chef server, from which they take their instructions and recipes.

In my demo setup, I have three servers, namely:

  1. chefserver.test20.com         -     Chef Server
  2. chefwork.test20.com           -     Chef Workstation
  3. chefnode.test20.com           -     Chef Node

Let us start with building the workstation.

Setup a Workstation

First of all, log in to our server chefwork, then download the Chef Development Kit package. Once the package is downloaded, we can install it using the rpm command.

root@chefwork ~]# wget https://packages.chef.io/stable/el/7/chefdk-0.14.25-1.el7.x86_64.rpm
--2016-05-20 03:47:31-- https://packages.chef.io/stable/el/7/chefdk-0.14.25-1.el7.x86_64.rpm
Resolving packages.chef.io (packages.chef.io)... 75.126.118.188, 108.168.243.150
Connecting to packages.chef.io (packages.chef.io)|75.126.118.188|:443... connected.
HTTP request sent, awaiting response... 302
Location: https://akamai.bintray.com/87/879656c7736ef2a061937c1f45c623e99fd57aaa2f6d802e9799d333d7e5342f?__gda__=exp=1463716772~hmac=ef9ce287129ab2f035449b76a1adc32b7bf8cae37f018f59da5a642d3e2650fc&response-content-disposition=attachment%3Bfilename%3D%22chefdk-0.14.25-1.el7.x86_64.rpm%22&response-content-type=application%2Foctet-stream [following]
--2016-05-20 03:47:32-- https://akamai.bintray.com/87/879656c7736ef2a061937c1f45c623e99fd57aaa2f6d802e9799d333d7e5342f?__gda__=exp=1463716772~hmac=ef9ce287129ab2f035449b76a1adc32b7bf8cae37f018f59da5a642d3e2650fc&response-content-disposition=attachment%3Bfilename%3D%22chefdk-0.14.25-1.el7.x86_64.rpm%22&response-content-type=application%2Foctet-stream
Resolving akamai.bintray.com (akamai.bintray.com)... 104.123.250.232
Connecting to akamai.bintray.com (akamai.bintray.com)|104.123.250.232|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 143927478 (137M) [application/octet-stream]
Saving to: ‘chefdk-0.14.25-1.el7.x86_64.rpm’

100%[====================================================================================================>] 14,39,27,478 2.52MB/s in 55s

2016-05-20 03:48:29 (2.49 MB/s) - ‘chefdk-0.14.25-1.el7.x86_64.rpm’ saved [143927478/143927478]

[root@chefwork ~]# rpm -ivh chefdk-0.14.25-1.el7.x86_64.rpm
warning: chefdk-0.14.25-1.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 83ef826a: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:chefdk-0.14.25-1.el7 ################################# [100%]
Thank you for installing Chef Development Kit!

What is ChefDK?

The Chef Development Kit contains everything you need to get started with Chef, along with the tools essential for managing code (a quick verification check follows the list below):

  • It contains a new command-line tool, "chef"
  • The cookbook dependency manager Berkshelf
  • The Test Kitchen integration testing framework.
  • ChefSpec for testing the cookbook syntax
  • Foodcritic, a tool for doing static code analysis on cookbooks.
  • It also has all the Chef tools like Chef Client, Knife, Ohai and Chef Zero
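
To confirm that the kit installed correctly, you can run a couple of quick checks from the workstation (the version line shown is illustrative and abridged; your output will list the component versions too):

[root@chefwork ~]# chef --version
Chef Development Kit Version: 0.14.25
[root@chefwork ~]# which knife
/usr/bin/knife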

Let's start by creating some recipes on the Workstation and testing them locally to make sure they work.

Create a folder named chef-repo on /root/ and inside that folder we can create our recipes.

[root@chefwork ~]# mkdir chef-repo
[root@chefwork ~]# cd chef-repo

Creating a recipe called hello.rb.
[root@chefwork chef-repo]# vim hello.rb
[root@chefwork chef-repo]#
[root@chefwork chef-repo]# cat hello.rb
file '/etc/motd' do
  content 'Welcome to Chef'
end

This recipe, hello.rb, creates a file named /etc/motd with the content "Welcome to Chef". It makes use of the file resource to accomplish this task. Now we can run the recipe to check that it works.

[root@chefwork chef-repo]# chef-apply hello.rb
Recipe: (chef-apply cookbook)::(chef-apply recipe)
* file[/etc/motd] action create (up to date)

Confirm the recipe execution:

[root@chefwork chef-repo]# cat /etc/motd
Welcome to Chef
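
The file resource accepts more attributes than just content. Here is a slightly fuller sketch (the ownership and mode values are illustrative) that also manages the file's owner, group and permissions:

[root@chefwork chef-repo]# cat hello.rb
file '/etc/motd' do
  content 'Welcome to Chef'
  owner 'root'      # illustrative: ensure root owns the file
  group 'root'
  mode '0644'       # world-readable, root-writable
end

Running chef-apply against this converges the file to the declared state; Chef only reports a change when something actually differs, which is the "(up to date)" message we saw above.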

Deleting the file

We can modify our recipe file to delete the created file and run it with the chef-apply command as below:

[root@chefwork chef-repo]# cat hello.rb
file '/etc/motd' do
  action :delete
end

[root@chefwork chef-repo]# chef-apply hello.rb
Recipe: (chef-apply cookbook)::(chef-apply recipe)
* file[/etc/motd] action delete
- delete file /etc/motd

Installing a package

We're modifying our recipe file to install the httpd package on our server and copy an index.html file to the default document root to confirm the installation. The package and service resources are used to implement this. The default action for a package resource is installation, hence we needn't specify that action separately.

[root@chefwork chef-conf]# cat hello.rb
package 'httpd'

service 'httpd' do
  action [:enable, :start]
end

file '/var/www/html/index.html' do
  content 'Welcome to Apache in Chef'
end
[root@chefwork chef-conf]# chef-apply hello.rb
Recipe: (chef-apply cookbook)::(chef-apply recipe)
* yum_package[httpd] action install
- install version 2.4.6-40.el7.centos.1 of package httpd
* service[httpd] action enable
- enable service service[httpd]
* service[httpd] action start
- start service service[httpd]
* file[/var/www/html/index.html] action create (up to date)

The command execution clearly describes each step in the recipe: it installs the Apache package, enables and starts the httpd service, and creates an index.html file in the default document root with the content "Welcome to Apache in Chef". We can verify it by browsing to the server IP.
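
Alternatively, a quick check from the CLI (assuming curl is installed on the workstation):

[root@chefwork chef-repo]# curl http://localhost/
Welcome to Apache in Chef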

[Image: welcomepage_httpd]

Creating Cookbooks

Now we can create our first cookbook. Create a folder called cookbooks under the chef-repo directory and execute the command "chef generate cookbook [cookbook name]" inside it to generate the cookbook.

[root@chefwork chef-repo]# mkdir cookbooks
[root@chefwork chef-repo]# cd cookbooks/
[root@chefwork cookbooks]# chef generate cookbook httpd_deploy
Installing Cookbook Gems:
Compiling Cookbooks...
Recipe: code_generator::cookbook
* directory[/root/chef-repo/cookbook/httpd_deploy] action create
- create new directory /root/chef-repo/cookbook/httpd_deploy

 

[Image: cookbook filestructure]

 

This is the file structure of the created cookbook; let's see the use of these files/folders inside the cookbook one by one (a directory-tree sketch follows the list).

Berksfile : The configuration file that mainly tells Berkshelf what the cookbook's dependencies are, which can be specified directly inside this file or indirectly through metadata.rb. It also tells Berkshelf where it should look for those dependencies.

Chefignore : Tells Chef which files should be ignored while uploading a cookbook to the Chef server.

metadata.rb : Contains meta information about your cookbook, such as its name, contacts or description. It can also state the cookbook's dependencies.

README.md : The documentation entry point for the repo.

recipes : Contains the cookbook's recipes. Execution starts with the file default.rb.

default.rb : The default recipe.

spec : Stores the unit test cases of your libraries.

test : Stores the test cases of your recipes.
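
Put together, the generated layout looks roughly like this (only the entries discussed above are shown; your ChefDK version may generate a few more files):

httpd_deploy/
├── Berksfile
├── chefignore
├── metadata.rb
├── README.md
├── recipes/
│   └── default.rb
├── spec/
└── test/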

Creating a template

Next we are going to create a template file. Earlier, we created a file with static content, but that approach doesn't fit in with our recipe and cookbook structure, so let's see how we can create a template.

[root@chefwork cookbook]# chef generate template httpd_deploy index.html
Installing Cookbook Gems:
Compiling Cookbooks...
Recipe: code_generator::template
* directory[./httpd_deploy/templates/default] action create
- create new directory ./httpd_deploy/templates/default
* template[./httpd_deploy/templates/default/index.html.erb] action create
- create new file ./httpd_deploy/templates/default/index.html.erb
- update content in file ./httpd_deploy/templates/default/index.html.erb from none to e3b0c4
(diff output suppressed by config)

 

[Image: template]

Now if you look at our cookbook file structure, there is a folder named templates containing the index.html.erb file. We can edit our index.html.erb template file and add it to our recipe as below:

[root@chefwork default]# cat index.html.erb
Welcome to Chef Apache Deployment
[root@chefwork default]# pwd
/root/chef-repo/cookbook/httpd_deploy/templates/default

Creating the recipe with this template

[root@chefwork recipes]# pwd
/root/chef-repo/cookbook/httpd_deploy/recipes
[root@chefwork recipes]# cat default.rb
#
# Cookbook Name:: httpd_deploy
# Recipe:: default
#
# Copyright (c) 2016 The Authors, All Rights Reserved.
package 'httpd'

service 'httpd' do
  action [:enable, :start]
end

template '/var/www/html/index.html' do
  source 'index.html.erb'
end
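
Templates become really useful once you embed Ruby in them. As a small sketch (the greeting variable and its value are made up for illustration), you can pass variables from the recipe and read node attributes inside the .erb file:

[root@chefwork recipes]# cat ../templates/default/index.html.erb
<%= @greeting %> from <%= node['fqdn'] %>
[root@chefwork recipes]# cat default.rb
package 'httpd'

service 'httpd' do
  action [:enable, :start]
end

template '/var/www/html/index.html' do
  source 'index.html.erb'
  # 'variables' exposes @greeting inside the template
  variables(greeting: 'Welcome to Chef Apache Deployment')
end

Here node['fqdn'] is an attribute collected automatically by Ohai, so every node renders the page with its own hostname.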

Now go back to our chef-repo folder and run/test our recipe on our Workstation.

[root@chefwork chef-repo]# chef-client --local-mode --runlist 'recipe[httpd_deploy]'
[2016-05-20T05:44:40+00:00] WARN: No config file found or specified on command line, using command line options.
Starting Chef Client, version 12.10.24
resolving cookbooks for run list: ["httpd_deploy"]
Synchronizing Cookbooks:
- httpd_deploy (0.1.0)
Installing Cookbook Gems:
Compiling Cookbooks...
Converging 3 resources
Recipe: httpd_deploy::default
* yum_package[httpd] action install
- install version 2.4.6-40.el7.centos.1 of package httpd
* service[httpd] action enable
- enable service service[httpd]
* service[httpd] action start
- start service service[httpd]
* template[/var/www/html/index.html] action create
- update content in file /var/www/html/index.html from 152204 to 748cbd
--- /var/www/html/index.html 2016-05-20 04:18:38.553231745 +0000
+++ /var/www/html/.chef-index.html20160520-20425-1bez4qs 2016-05-20 05:44:47.344848833 +0000
@@ -1,2 +1,2 @@
-Welcome to Apache in Chef
+Welcome to Chef Apache Deployment

Running handlers:
Running handlers complete
Chef Client finished, 4/4 resources updated in 06 seconds

[root@chefwork chef-repo]# cat /var/www/html/index.html
Welcome to Chef Apache Deployment

According to our recipe, Apache is installed on our workstation, the service is started and enabled on boot, and a template file has been created in our default document root.

Now we've tested our Workstation. It's time for the Chef server setup.

Setting up the Chef Server

First of all, log in to our Chef server "chefserver.test20.com" and download the Chef server package compatible with our OS version.

[root@chefserver ~]# wget https://packages.chef.io/stable/el/7/chef-server-core-12.6.0-1.el7.x86_64.rpm
--2016-05-20 07:23:46-- https://packages.chef.io/stable/el/7/chef-server-core-12.6.0-1.el7.x86_64.rpm
Resolving packages.chef.io (packages.chef.io)... 75.126.118.188, 108.168.243.150
Connecting to packages.chef.io (packages.chef.io)|75.126.118.188|:443... connected.
HTTP request sent, awaiting response... 302
Location: https://akamai.bintray.com/5a/5a36d0ffa692bf788e90315171582a758d4c5d8033a892dca9a81d3c03c44d14?__gda__=exp=1463729747~hmac=86e28bf2d5197154c84b571330b4c897006c2cb7f14cc9fc386c62d8b6e34c2d&response-content-disposition=attachment%3Bfilename%3D%22chef-server-core-12.6.0-1.el7.x86_64.rpm%22&response-content-type=application%2Foctet-stream [following]
--2016-05-20 07:23:47-- https://akamai.bintray.com/5a/5a36d0ffa692bf788e90315171582a758d4c5d8033a892dca9a81d3c03c44d14?__gda__=exp=1463729747~hmac=86e28bf2d5197154c84b571330b4c897006c2cb7f14cc9fc386c62d8b6e34c2d&response-content-disposition=attachment%3Bfilename%3D%22chef-server-core-12.6.0-1.el7.x86_64.rpm%22&response-content-type=application%2Foctet-stream
Resolving akamai.bintray.com (akamai.bintray.com)... 23.15.249.68
Connecting to akamai.bintray.com (akamai.bintray.com)|23.15.249.68|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 481817688 (459M) [application/octet-stream]
Saving to: ‘chef-server-core-12.6.0-1.el7.x86_64.rpm’

100%[====================================================================================================>] 48,18,17,688 2.90MB/s in 3m 53s

[root@chefserver ~]# rpm -ivh chef-server-core-12.6.0-1.el7.x86_64.rpm
warning: chef-server-core-12.6.0-1.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 83ef826a: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:chef-server-core-12.6.0-1.el7 ################################# [100%]

Now our Chef server is installed. But we need to reconfigure it to enable and start all of the services that make up the Chef server. We can run this command to reconfigure:

[root@chefserver ~]# chef-server-ctl reconfigure
Starting Chef Client, version 12.10.26
resolving cookbooks for run list: ["private-chef::default"]
Synchronizing Cookbooks:
- enterprise (0.10.0)
- apt (2.9.2)
- yum (3.10.0)
- openssl (4.4.0)
- chef-sugar (3.3.0)
- packagecloud (0.0.18)
- runit (1.6.0)
- private-chef (0.1.0)
Installing Cookbook Gems:
Compiling Cookbooks...
[2016-05-19T02:38:37+00:00] WARN: Chef::Provider::AptRepository already exists! Cannot create deprecation class for LWRP provider apt_repository from cookbook apt
Chef Client finished, 394/459 resources updated in 04 minutes 05 seconds
Chef Server Reconfigured!

Confirm the status and PIDs of the services by running this command.

[root@chefserver ~]# chef-server-ctl status
run: bookshelf: (pid 6140) 162s; run: log: (pid 6156) 162s
run: nginx: (pid 6051) 165s; run: log: (pid 6295) 156s
run: oc_bifrost: (pid 5987) 167s; run: log: (pid 6022) 167s
run: oc_id: (pid 6038) 165s; run: log: (pid 6042) 165s
run: opscode-erchef: (pid 6226) 159s; run: log: (pid 6214) 161s
run: opscode-expander: (pid 6102) 162s; run: log: (pid 6133) 162s
run: opscode-solr4: (pid 6067) 164s; run: log: (pid 6095) 163s
run: postgresql: (pid 5918) 168s; run: log: (pid 5960) 168s
run: rabbitmq: (pid 5876) 168s; run: log: (pid 5869) 169s
run: redis_lb: (pid 5795) 290s; run: log: (pid 6280) 156s

Hurray!! Our Chef server is ready :). Now we can install the management console to get a web interface to manage our Chef server.

Installing Management Console for Chef Server

We can install the management console by simply running the command "chef-server-ctl install chef-manage" on the Chef server.

[root@chefserver ~]# chef-server-ctl install chef-manage
Starting Chef Client, version 12.10.26
resolving cookbooks for run list: ["private-chef::add_ons_wrapper"]
Synchronizing Cookbooks:
- enterprise (0.10.0)
- apt (2.9.2)
- yum (3.10.0)
- openssl (4.4.0)
- runit (1.6.0)
- chef-sugar (3.3.0)
- packagecloud (0.0.18)
- private-chef (0.1.0)
Installing Cookbook Gems:
Compiling Cookbooks...
Converging 4 resources
Recipe: private-chef::add_ons_wrapper
* ruby_block[addon_install_notification_chef-manage] action nothing (skipped due to action :nothing)
* remote_file[/var/opt/opscode/local-mode-cache/chef-manage-2.3.0-1.el7.x86_64.rpm] action create
- create new file /var/opt/opscode/local-mode-cache/chef-manage-2.3.0-1.el7.x86_64.rpm
- update content in file /var/opt/opscode/local-mode-cache/chef-manage-2.3.0-1.el7.x86_64.rpm from none to 098cc4
(file sizes exceed 10000000 bytes, diff output suppressed)
* ruby_block[locate_addon_package_chef-manage] action run
- execute the ruby block locate_addon_package_chef-manage
* yum_package[chef-manage] action install
- install version 2.3.0-1.el7 of package chef-manage
* ruby_block[addon_install_notification_chef-manage] action create
- execute the ruby block addon_install_notification_chef-manage

Running handlers:
-- Installed Add-On Package: chef-manage
- #<Class:0x00000006032b80>::AddonInstallHandler
Running handlers complete
Chef Client finished, 4/5 resources updated in 02 minutes 39 seconds

After installing the management console, we need to reconfigure it and then the Chef server so that the services pick up these changes.

[root@chefserver ~]# opscode-manage-ctl reconfigure
To use this software, you must agree to the terms of the software license agreement.
Press any key to continue.
Type 'yes' to accept the software license agreement, or anything else to cancel.
yes
Starting Chef Client, version 12.4.1
resolving cookbooks for run list: ["omnibus-chef-manage::default"]
Synchronizing Cookbooks:
- omnibus-chef-manage
- chef-server-ingredient
- enterprise
Recipe: omnibus-chef-manage::default
* private_chef_addon[chef-manage] action create (up to date)
Recipe: omnibus-chef-manage::config
Running handlers:
Running handlers complete
Chef Client finished, 62/79 resources updated in 44.764229437 seconds
chef-manage Reconfigured!

[root@chefserver ~]# chef-server-ctl reconfigure

Now our management console is ready; we need to set up our admin user to manage the Chef server.

Creating Admin user/Organization

I've created an admin user named chefadmin and an organization named linox on my Chef server to manage it. We can create the user using the command chef-server-ctl user-create and the organization using the command chef-server-ctl org-create.

[root@chefserver ~]# chef-server-ctl user-create chefadmin saheetha shameer saheetha@gmail.com 'chef123' --filename /root/.chef/chefadmin.pem
[root@chefserver ~]#

[root@chefserver .chef]# chef-server-ctl org-create linox Chef Linoxide --association_user chefadmin --filename /root/.chef/linoxvalidator.pem

Our keys are saved inside the /root/.chef folder. We need to copy these keys from the Chef server to the workstation to initiate communication between our Chef server and the workstation.

Copying the Keys

I'm copying my user and validator keys from the Chef server to the workstation to establish the connection between the servers.

[root@chefserver .chef]# scp chefadmin.pem root@139.162.35.39:/root/chef-repo/.chef/
The authenticity of host '139.162.35.39 (139.162.35.39)' can't be established.
ECDSA key fingerprint is 5b:0b:07:85:9a:fb:b6:59:51:07:7f:14:1b:07:07:f0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '139.162.35.39' (ECDSA) to the list of known hosts.
root@139.162.35.39's password:
chefadmin.pem 100% 1678 1.6KB/s 00:00
[root@chefserver .chef]#

[root@chefserver .chef]# scp linoxvalidator.pem root@139.162.35.39:/root/chef-repo/.chef/
The authenticity of host '139.162.35.39 (139.162.35.39)' can't be established.
ECDSA key fingerprint is 5b:0b:07:85:9a:fb:b6:59:51:07:7f:14:1b:07:07:f0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '139.162.35.39' (ECDSA) to the list of known hosts.
root@139.162.35.39's password:
linoxvalidator.pem 100% 1678 1.6KB/s 00:00
[root@chefserver .chef]#

Now log in to the management console for our Chef server with the chefadmin user we created.

[Image: chef_management console]

It will ask you to create an organization from the panel on sign-up. Just create a different one.

Download the Starter Kit for the Workstation

Choose any of your organizations and download the Starter Kit from the Chef server to our workstation.

[Image: starterdownload]
[Image: Starter]

After downloading the kit, move it to the /root folder on your Workstation and extract it. This provides you with a default Starter Kit to get going with your Chef server. It includes a chef-repo.

[root@chefwork ~]# ls
chef-starter.zip hello.rb
[root@chefwork ~]# unzip chef-starter.zip
Archive: chef-starter.zip
creating: chef-repo/cookbooks/
creating: chef-repo/cookbooks/starter/
creating: chef-repo/cookbooks/starter/recipes/
inflating: chef-repo/cookbooks/starter/recipes/default.rb
creating: chef-repo/cookbooks/starter/files/
creating: chef-repo/cookbooks/starter/files/default/
inflating: chef-repo/cookbooks/starter/files/default/sample.txt
creating: chef-repo/cookbooks/starter/templates/
creating: chef-repo/cookbooks/starter/templates/default/
inflating: chef-repo/cookbooks/starter/templates/default/sample.erb
inflating: chef-repo/cookbooks/starter/metadata.rb
creating: chef-repo/cookbooks/starter/attributes/
inflating: chef-repo/cookbooks/starter/attributes/default.rb
inflating: chef-repo/cookbooks/chefignore
inflating: chef-repo/README.md
inflating: chef-repo/.gitignore
creating: chef-repo/.chef/
creating: chef-repo/roles/
inflating: chef-repo/.chef/knife.rb
inflating: chef-repo/roles/starter.rb
inflating: chef-repo/.chef/chefadmin.pem
inflating: chef-repo/.chef/ln_blog-validator.pem

[Image: chef-repo]

This is the file structure for the downloaded Chef repository. It contains all the required file structures to start with.

Cookbook SuperMarket

Chef cookbooks are available in the Chef Supermarket; we can download the required cookbooks from there. I'm downloading one of the cookbooks to install Apache.

[root@chefwork chef-repo]# knife cookbook site download learn_chef_httpd
Downloading learn_chef_httpd from Supermarket at version 0.2.0 to /root/chef-repo/learn_chef_httpd-0.2.0.tar.gz
Cookbook saved: /root/chef-repo/learn_chef_httpd-0.2.0.tar.gz

Extract this cookbook inside the "cookbooks" folder.

[root@chefwork chef-repo]# tar -xvf learn_chef_httpd-0.2.0.tar.gz

[Image: learn]

All the required files are automatically created under this cookbook; we don't need to make any modifications. Let's check the recipe description inside the recipes folder.

[root@chefwork recipes]# cat default.rb
#
# Cookbook Name:: learn_chef_httpd
# Recipe:: default
#
# Copyright (C) 2014
#
#
#
package 'httpd'

service 'httpd' do
  action [:enable, :start]
end

template '/var/www/html/index.html' do
  source 'index.html.erb'
end

service 'iptables' do
  action :stop
end
[root@chefwork recipes]#
[root@chefwork recipes]# pwd
/root/chef-repo/cookbooks/learn_chef_httpd/recipes
[root@chefwork recipes]#

So we just need to upload this cookbook to our Chef server as it looks perfect.

Validating the Connection between Server and Workstation

Before uploading the cookbook, we need to check and confirm the connection between our Chef server and the workstation. First of all, make sure you have a proper knife configuration file.

[root@chefwork .chef]# cat knife.rb
current_dir = File.dirname(__FILE__)
log_level :info
log_location STDOUT
node_name "chefadmin"
client_key "#{current_dir}/chefadmin.pem"
validation_client_name "linox-validator"
validation_key "#{current_dir}/linox-validator.pem"
chef_server_url "https://chefserver.test20.com:443/organizations/linox"

cookbook_path ["#{current_dir}/../cookbooks"]

This configuration file is located in the /root/chef-repo/.chef folder. The node_name, key paths and chef_server_url settings are the main things to take care of. Now you can run this command to check the connection.

[root@chefwork .chef]# knife client list
ERROR: SSL Validation failure connecting to host: chefserver.test20.com - SSL_connect returned=1 errno=0 state=error: certificate verify failed
ERROR: Could not establish a secure connection to the server.
Use `knife ssl check` to troubleshoot your SSL configuration.
If your Chef Server uses a self-signed certificate, you can use
`knife ssl fetch` to make knife trust the server's certificates.

Original Exception: OpenSSL::SSL::SSLError: SSL Error connecting to https://chefserver.test20.com/clients - SSL_connect returned=1 errno=0 state=error: certificate verify failed

You can see an SSL validation error being reported. In order to rectify this error, we need to fetch the SSL certificate of our Chef server and store it inside the /root/chef-repo/.chef/trusted_certs folder. We can do this by running this command.

[root@chefwork .chef]# knife ssl fetch
WARNING: Certificates from chefserver.test20.com will be fetched and placed in your trusted_cert
directory (/root/chef-repo/.chef/trusted_certs).

Knife has no means to verify these are the correct certificates. You should
verify the authenticity of these certificates after downloading.

Adding certificate for chefserver.test20.com in /root/chef-repo/.chef/trusted_certs/chefserver_test20_com.crt

Verifying the SSL:

[root@chefwork .chef]# knife ssl check
Connecting to host chefserver.test20.com:443
Successfully verified certificates from `chefserver.test20.com'

[root@chefwork .chef]# knife client list
chefnode
linox-validator
[root@chefwork .chef]# knife user list
chefadmin

Uploading the Cookbook

We can upload our cookbook to the Chef server from the workstation using the knife command as below:

# knife cookbook upload learn_chef_httpd

[root@chefwork cookbooks]# knife cookbook upload learn_chef_httpd
Uploading learn_chef_httpd [0.2.0]
Uploaded 1 cookbook.
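
You can also confirm the upload from the workstation CLI; the output lists the cookbooks the server knows about:

[root@chefwork cookbooks]# knife cookbook list
learn_chef_httpd   0.2.0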

Verify the cookbook from the Chef Server Management console.

[Image: uploadedcookbook]

 

Adding a Node

This is the final step in the Chef implementation. We've set up a workstation and a Chef server, and now we need to add our clients to the Chef server for automation. I'm adding my chefnode to the server using the knife bootstrap command as below:

[root@chefwork cookbooks]# knife bootstrap 45.33.76.60 --ssh-user root --ssh-password dkfue@321 --node-name chefnode
Creating new client for chefnode
Creating new node for chefnode
Connecting to 45.33.76.60
45.33.76.60 -----> Installing Chef Omnibus (-v 12)
45.33.76.60 downloading https://omnitruck-direct.chef.io/chef/install.sh
45.33.76.60 to file /tmp/install.sh.5457/install.sh
45.33.76.60 trying wget...
45.33.76.60 el 7 x86_64
45.33.76.60 Getting information for chef stable 12 for el...
45.33.76.60 downloading https://omnitruck-direct.chef.io/stable/chef/metadata?v=12&p=el&pv=7&m=x86_64
45.33.76.60 to file /tmp/install.sh.5466/metadata.txt
45.33.76.60 trying wget...
45.33.76.60 sha1 4def83368a1349959fdaf0633c4d288d5ae229ce
45.33.76.60 sha256 6f00c7bdf96a3fb09494e51cd44f4c2e5696accd356fc6dc1175d49ad06fa39f
45.33.76.60 url https://packages.chef.io/stable/el/7/chef-12.10.24-1.el7.x86_64.rpm
45.33.76.60 version 12.10.24
45.33.76.60 downloaded metadata file looks valid...
45.33.76.60 downloading https://packages.chef.io/stable/el/7/chef-12.10.24-1.el7.x86_64.rpm
45.33.76.60 to file /tmp/install.sh.5466/chef-12.10.24-1.el7.x86_64.rpm
45.33.76.60 trying wget...
45.33.76.60 Comparing checksum with sha256sum...
45.33.76.60 Installing chef 12
45.33.76.60 installing with rpm...
45.33.76.60 warning: /tmp/install.sh.5466/chef-12.10.24-1.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 83ef826a: NOKEY
45.33.76.60 Preparing... ################################# [100%]
45.33.76.60 Updating / installing...
45.33.76.60 1:chef-12.10.24-1.el7 ################################# [100%]
45.33.76.60 Thank you for installing Chef!
45.33.76.60 Starting the first Chef Client run...
45.33.76.60 Starting Chef Client, version 12.10.24
45.33.76.60 resolving cookbooks for run list: []
45.33.76.60 Synchronizing Cookbooks:
45.33.76.60 Installing Cookbook Gems:
45.33.76.60 Compiling Cookbooks...
45.33.76.60 [2016-05-20T15:36:41+00:00] WARN: Node chefnode has an empty run list.
45.33.76.60 Converging 0 resources
45.33.76.60
45.33.76.60 Running handlers:
45.33.76.60 Running handlers complete
45.33.76.60 Chef Client finished, 0/0 resources updated in 08 seconds
[root@chefwork chef-repo]#

This command also installs the chef-client on the Chef node. You can verify the node from the CLI on the workstation using the knife commands below:

[root@chefwork chef-repo]# knife node list
chefnode

[root@chefwork chef-repo]# knife node show chefnode
Node Name: chefnode
Environment: _default
FQDN: chefnode.test20.com
IP: 45.33.76.60
Run List: recipe[learn_chef_httpd]
Roles:
Recipes:
Platform: centos 7.2.1511
Tags:

Verifying it from the Management console.

[Image: added nodechef]

We can get more information regarding the added node by selecting the node and viewing the Attributes section.

[Image: node details]

Managing Node Run List

Let's see how we can add a cookbook to the node and manage its run list from the Chef server. As you can see in the screenshot, you can click the Actions tab and select the Edit Runlist option to manage the run list.

[Image: node_run]

Under Available Recipes, you can see our learn_chef_httpd recipe; drag it from the available packages to the current run list and save the run list.

[Image: drag_recipe]

Now log in to your node and simply run the command chef-client to execute your run list.

[root@chefnode ~]# chef-client
Starting Chef Client, version 12.10.24
resolving cookbooks for run list: ["learn_chef_httpd"]
Synchronizing Cookbooks:
- learn_chef_httpd (0.2.0)
Installing Cookbook Gems:
Compiling Cookbooks...
Converging 4 resources
Recipe: learn_chef_httpd::default
* yum_package[httpd] action install
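
In production you typically would not run chef-client by hand on every node. A common pattern (shown here as a sketch, not something this setup configures) is to run it as a daemon that re-converges at a fixed interval, given in seconds:

[root@chefnode ~]# chef-client --daemonize --interval 1800

With an interval of 1800 seconds, the node re-applies its run list every 30 minutes.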

Similarly, we can add any number of nodes to the Chef server, depending on its configuration and hardware. I hope this article gave you a basic understanding of a Chef implementation. I welcome your valuable comments and suggestions on this. Thank you for reading :)

Happy Automation with Chef!!

The post How to Install Chef Workstation / Server / Node on CentOS 7 appeared first on LinOxide.


How to Install Tomcat 8 on Ubuntu 16.04 (Multiple Instances)

Apache Tomcat, commonly called Tomcat, is an open source web server and servlet container developed by the Apache Software Foundation. It is written in Java and released under the Apache 2.0 License. It is a cross-platform application. Tomcat is actually composed of a number of components, including a Tomcat JSP engine and a variety of different connectors, but its core component is called Catalina. Catalina provides Tomcat's actual implementation of the servlet specification.

In this article, I'll provide guidelines to install, configure and create multiple instances of Tomcat 8 on Ubuntu 16.04. Let's walk through the installation steps.

Since Tomcat is written in Java, we need Java to be installed on our server prior to the installation.

Install Java

Tomcat 8 requires Java 7 or later to be installed on the server. I updated the packages on my Ubuntu server and installed the JDK packages using the commands below:

root@ubuntu:~# apt-get update
root@ubuntu:~# apt-get install default-jdk
Setting up default-jdk-headless (2:1.8-56ubuntu2) ...
Setting up openjdk-8-jdk:amd64 (8u91-b14-0ubuntu4~16.04.1) ...
update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/bin/appletviewer to provide /usr/bin/appletviewer (appletviewer) in auto mode
update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/bin/jconsole to provide /usr/bin/jconsole (jconsole) in auto mode
Setting up default-jdk (2:1.8-56ubuntu2) ...
Setting up gconf-service-backend (3.2.6-3ubuntu6) ...
Setting up gconf2 (3.2.6-3ubuntu6) ...
Setting up libgnomevfs2-common (1:2.24.4-6.1ubuntu1) ...
Setting up libgnomevfs2-0:amd64 (1:2.24.4-6.1ubuntu1) ...
Setting up libgnome2-common (2.32.1-5ubuntu1) ...
Setting up libgnome-2-0:amd64 (2.32.1-5ubuntu1) ...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for systemd (229-4ubuntu4) ...
Processing triggers for ca-certificates (20160104ubuntu1) ...
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...

done.
done.

Check and confirm the Java Version

After the installation process, just verify the Java version installed on your server.

root@ubuntu:~# java -version
openjdk version "1.8.0_91"
OpenJDK Runtime Environment (build 1.8.0_91-8u91-b14-0ubuntu4~16.04.1-b14)
OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)

Download / Install Tomcat

We can download the latest version of Tomcat here. Download the archive and extract it under the folder /opt/apache-tomcat8 (the extraction is shown after the download output below).

root@ubuntu:/opt# wget http://a.mbbsindia.com/tomcat/tomcat-8/v8.0.35/bin/apache-tomcat-8.0.35.zip
--2016-05-23 03:02:48-- http://a.mbbsindia.com/tomcat/tomcat-8/v8.0.35/bin/apache-tomcat-8.0.35.zip
Resolving a.mbbsindia.com (a.mbbsindia.com)... 103.27.233.42
Connecting to a.mbbsindia.com (a.mbbsindia.com)|103.27.233.42|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 9842037 (9.4M) [application/zip]
Saving to: ‘apache-tomcat-8.0.35.zip’

apache-tomcat-8.0.35.zip 100%[===================================================================>] 9.39M 4.46MB/s in 2.1s

2016-05-23 03:02:51 (4.46 MB/s) - ‘apache-tomcat-8.0.35.zip’ saved [9842037/9842037]
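
The zip archive then needs to be extracted and moved into place to match the /opt/apache-tomcat8 path used throughout this article (a sketch; install unzip first if it is missing):

root@ubuntu:/opt# apt-get install unzip
root@ubuntu:/opt# unzip apache-tomcat-8.0.35.zip
root@ubuntu:/opt# mv apache-tomcat-8.0.35 /opt/apache-tomcat8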

Creating tomcat user / group

It is always recommended to run an application as a dedicated user instead of the root user. Hence, I created a user named tomcat to run this application.

root@ubuntu:/opt# groupadd tomcat

root@ubuntu:/opt# useradd -g tomcat -s /bin/bash -d /opt/apache-tomcat8 tomcat

Now make all the scripts under the Tomcat bin folder executable and give ownership of the installation to the tomcat user.

root@ubuntu:/opt/apache-tomcat8/bin# chmod 700 *.sh

root@ubuntu:/opt# chown -R tomcat.tomcat apache-tomcat8/

Start the Tomcat Application

Now switch to the tomcat user and execute the startup.sh script inside the Tomcat bin folder, namely /opt/apache-tomcat8/bin/, to run the application.

tomcat@ubuntu:~/bin$ sh startup.sh
Using CATALINA_BASE: /opt/apache-tomcat8
Using CATALINA_HOME: /opt/apache-tomcat8
Using CATALINA_TMPDIR: /opt/apache-tomcat8/temp
Using JRE_HOME: /usr
Using CLASSPATH: /opt/apache-tomcat8/bin/bootstrap.jar:/opt/apache-tomcat8/bin/tomcat-juli.jar
Tomcat started.

Now we can access the URL http://serverip:8080 in the browser to confirm that Tomcat is working.

[Image: tomcat]

We can also confirm the status from the CLI using this command:

root@ubuntu:/opt# lsof -i :8080
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 22722 tomcat 53u IPv6 100669 0t0 TCP *:http-alt (LISTEN)

PS: To shut down the application, you can use the shutdown.sh script inside the Tomcat bin folder.

root@ubuntu:/opt/apache-tomcat8# sh bin/shutdown.sh
Using CATALINA_BASE: /opt/apache-tomcat8
Using CATALINA_HOME: /opt/apache-tomcat8
Using CATALINA_TMPDIR: /opt/apache-tomcat8/temp
Using JRE_HOME: /usr
Using CLASSPATH: /opt/apache-tomcat8/bin/bootstrap.jar:/opt/apache-tomcat8/bin/tomcat-juli.jar
May 24, 2016 3:32:35 AM org.apache.catalina.startup.Catalina stopServer
SEVERE: Could not contact localhost:8005. Tomcat may not be running.
May 24, 2016 3:32:36 AM org.apache.catalina.startup.Catalina stopServer
SEVERE: Catalina.stop:

Tomcat Web Application Manager

In a production environment, it is very useful to have the capability to deploy a new web application, or undeploy an existing one, without having to shut down or restart the entire server. In addition, you can even reload an existing application without declaring it to be reloadable in the Tomcat server configuration file.

This Management Web console supports the following functions:

  • Deploy a new web application from an uploaded WAR file, or on a specified context path from the server file system
  • List the currently deployed web applications and the sessions that are currently active
  • Reload an existing web application, to reflect changes in the contents of /WEB-INF/classes or /WEB-INF/lib
  • Get server information about the OS and JVM
  • Start and stop an existing web application (stopping makes it unavailable, but does not undeploy it)
  • Undeploy a deployed web application and delete its document base directory

We can create the users to manage the Tomcat management web console. Edit the Tomcat user configuration file, namely conf/tomcat-users.xml, to create the admin users who will manage the panel.

I've appended these lines to the Tomcat user configuration file to create two users, namely manager and admin, with the passwords listed. Note that the lines must go inside the existing <tomcat-users> element, and Tomcat must be restarted for the change to take effect.

<user username="manager" password="tomcat123" roles="manager-gui" />

<user username="admin" password="tomcat123" roles="manager-gui,admin-gui"/>

We can access the Tomcat Web Application Manager using the URL http://SERVERIP:8080/manager/ with the users created.

[Image: TManager]

Enabling SSL/TLS support on Tomcat

Tomcat uses a password-protected file called a keystore to hold the key material for SSL. We need to create a keystore file to store the server's private key and self-signed certificate by executing the following command:

root@ubuntu:/usr/local# keytool -genkey -alias tomcat -keyalg RSA -keystore /usr/local/keystore
Enter keystore password:
Re-enter new password:
What is your first and last name?
[Unknown]: Saheetha Shameer
What is the name of your organizational unit?
[Unknown]: VIP
What is the name of your organization?
[Unknown]: VIP
What is the name of your City or Locality?
[Unknown]: Kochi
What is the name of your State or Province?
[Unknown]: Kerala
What is the two-letter country code for this unit?
[Unknown]: IN
Is CN=Saheetha Shameer, OU=VIP, O=VIP, L=Kochi, ST=Kerala, C=IN correct?
[no]: yes

Enter key password for <tomcat>
(RETURN if same as keystore password):

Options:

-genkey : Generates a key pair

-keyalg : Key algorithm

-keystore : Keystore file path

After entering the details for generating the certificate, you can edit the Tomcat server configuration to enable SSL/TLS support, pointing it to the keystore file.

We need to add this section to the Tomcat server configuration file, namely conf/server.xml:

<Connector port="8443" protocol="HTTP/1.1"
maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
clientAuth="false" sslProtocol="TLS"
keystoreFile="/usr/local/keystore"
keystorePass="tomcat123"/>

Restart the Tomcat application after confirming the keystore contents.

tomcat@ubuntu:~$ keytool -list -keystore /usr/local/keystore
Enter keystore password:

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

tomcat, May 23, 2016, PrivateKeyEntry,
Certificate fingerprint (SHA1): A3:99:A8:DD:F1:11:4F:69:37:95:11:66:41:59:A5:05:68:23:3E:B2

Now you can access the Tomcat application on port 8443 at the URL https://SERVERIP:8443 to confirm it is working.
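
A quick CLI check is also possible; the -k flag makes curl accept our self-signed certificate (assuming curl is installed):

root@ubuntu:~# curl -k https://localhost:8443/

This should return the HTML of the Tomcat welcome page.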

[Image: tomcatssl]

Creating Multiple Tomcat instances

In order to create multiple Tomcat instances, download and extract the Tomcat application to a different folder. I extracted the contents to /opt/apache-tomcat8-2. After extracting the files, we need to change the connector port and the other important ports in the Tomcat server configuration file to avoid conflicts with the existing instance.

The following changes were applied to the Tomcat server configuration file, namely conf/server.xml.

1. Modified the shutdown port from 8005 to 8006

 <Server port="8005" shutdown="SHUTDOWN">

to

<Server port="8006" shutdown="SHUTDOWN">

2. Modified the connector port from 8080 to 8081

<Connector port="8080" protocol="HTTP/1.1"

connectionTimeout="20000"
redirectPort="8443" />
<!-- A "Connector" using the shared thread pool-->
<!--

to

<Connector port="8081" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
<!-- A "Connector" using the shared thread pool-->
<!--

3. Modified the AJP port from 8009 to 8010

<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

<Connector port="8010" protocol="AJP/1.3" redirectPort="8443" />

After applying these changes, set the ownership and start our new Tomcat application under /opt/apache-tomcat8-2.

root@ubuntu:/opt# chown -R tomcat.tomcat /opt/apache-tomcat8-2

root@ubuntu:/opt# cd apache-tomcat8-2

Make the scripts executable for the user.

root@ubuntu:/opt/apache-tomcat8-2# chmod 700 bin/*.sh

Switch to the tomcat user and start the application.

tomcat@ubuntu:/opt/apache-tomcat8-2/bin$ sh startup.sh
Using CATALINA_BASE: /opt/apache-tomcat8-2
Using CATALINA_HOME: /opt/apache-tomcat8-2
Using CATALINA_TMPDIR: /opt/apache-tomcat8-2/temp
Using JRE_HOME: /usr
Using CLASSPATH: /opt/apache-tomcat8-2/bin/bootstrap.jar:/opt/apache-tomcat8-2/bin/tomcat-juli.jar
Tomcat started.

Verify the second Tomcat instance on port 8081 at the URL http://SERVERIP:8081.

[Image: tomcat2instance]

That's it! You're done with the basics of Tomcat installation. I hope you enjoyed reading this article. I welcome your valuable suggestions and comments on this. Thank you for reading :)

Have a Good day!

The post How to Install Tomcat 8 on Ubuntu 16.04 (Multiple Instances) appeared first on LinOxide.


How to Configure Bacula Server on Ubuntu 15.10 /16.04

Hello and welcome; today's article is on the installation and configuration of Bacula (an open source network backup solution) on Ubuntu 15.10/16.04. You can use it to manage backup, recovery, and verification of computer data across a network of computers of different kinds. Bacula is relatively easy to use and efficient, while offering many advanced storage management features that make it easy to find and recover lost or damaged files. Due to its modular design, Bacula is scalable from small single-computer systems to systems consisting of hundreds of computers located over a large network.

Bacula is composed of several software components, including the backup server and the backup clients. A Bacula server, which we will refer to as the "backup server", has these components:

Bacula Director (DIR): Software that controls the backup and restore operations that are performed by the File and Storage daemons

Catalog: Services that maintain a database of files that are backed up. The database is stored in an SQL database such as MySQL or PostgreSQL

Storage Daemon (SD): Software that performs reads and writes on the storage devices used for backups

Bacula Console: A command-line interface that allows the backup administrator to interact with, and control, Bacula Director

A Bacula client (backup clients) is a server that will be backed up and runs the File Daemon (FD) component. The File Daemon is software that provides the Bacula server access to the data that will be backed up.

Prerequisites:

We are going to install and configure Bacula on an Ubuntu 15.10 server, but you can follow the same instructions on previous releases such as Ubuntu 15.04; you might run into compatibility issues on Ubuntu 16.04.

Log in to your Ubuntu server using your root credentials, give it an IP address and configure its FQDN. Make sure you have an Internet connection, then update your system with the below command.

# apt-get update && apt-get upgrade

[Image: system update]

Once your system is up to date with the latest updates and security patches, proceed to the next step.

Installing MySQL

Bacula uses an SQL database to manage its information. You can use either MySQL or PostgreSQL; in this article we are going to use MySQL. To install MySQL on your Ubuntu server, just run the below command in your terminal.

# apt-get install mysql-server

[Image: MySQL Installation]

During the MySQL installation, you'll be asked to set the database administrator password. While not mandatory, it is highly recommended that you set a password for the MySQL administrative "root" user. Enter the password, click OK, and then repeat the same password when prompted.

[Image: MySQL password]

Installing Bacula Components

Now, let us install Bacula server and client components using the following command.

# apt-get install bacula-server bacula-client

[Image: Bacula installation]

Once you proceed with the installation, you will be prompted for some information used to configure the Postfix MTA, which Bacula uses by default. Let's choose "Internet Site" as the general type of mail configuration and click 'OK'. You are free to select whichever mail server configuration type best meets your needs.

[Image: Postfix Configuration]

Next you will be asked to set up your system mail name, which will be your fully qualified domain name.

[Image: system mail name]

Select 'Yes' to configure the database for Bacula with dbconfig-common, as shown.

[Image: bacula director mysql]

Then enter the MySQL database administrator password and click 'OK'.

[Image: Bacula Password]

Once again set the MySQL application password for bacula-director-mysql to register with the database server. If left blank, a random password will be generated.

[Image: MySQL app password]

Reconfirm the same password.

[Image: reconfirm password]

We are done with the installation of Bacula and its components; now we will create the backup and restore directories.

Create Backup and Restore Directories:

Bacula needs a backup directory for storing backup archives and a restore directory where restored files will be placed. If your system has multiple partitions, make sure to create the directories on one of the larger partitions.

Run the command below to create new directories for both backup and restore points.

# mkdir -p /b_backup/backup /b_backup/restore

Set the ownership and permissions on the above directories using the below commands.

# chown -R bacula:bacula /b_backup/

# chmod -R 700 /b_backup/

Configuring Bacula

All the configuration files of Bacula can be found in the '/etc/bacula' directory. Bacula has several components that must be configured independently in order to function correctly.

First, open the below file to update the Bacula Director configuration.

# vim /etc/bacula/bacula-dir.conf

Update the restore path by finding the below path in your configuration file. In our case, /b_backup/restore is the restore location.

Job {
  Name = "RestoreFiles"
  Type = Restore
  Client = k_ubuntu-fd
  FileSet = "Full Set"
  Storage = File
  Pool = Default
  Messages = Standard
  Where = /b_backup/restore
}

Now scroll down to the "list of files to be backed up" section, and set the path of the directory to back up.

File = /home/

[Image: configure bacula]

Scroll down further and you will find the Exclude section, where you set the list of directories to be excluded from the backup.

  Exclude {
    File = /var/lib/bacula
    File = /nonexistant/path/to/file/archive/dir
    File = /proc
    File = /tmp
    File = /.journal
    File = /.fsck
    File = /b_backup
  }
}

Save and close the file after making the above changes, and move to the next step.

Update Bacula Storage Daemon settings:

Edit the /etc/bacula/bacula-sd.conf file with the below configuration to set the backup folder location, which is /b_backup/backup in our case.

# vim /etc/bacula/bacula-sd.conf

[Image: bacula storage daemon]

Now, check that all the configurations are valid as shown below; if the commands display nothing, the configuration changes are valid.

# bacula-dir -tc /etc/bacula/bacula-dir.conf

# bacula-sd -tc /etc/bacula/bacula-sd.conf
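
The client-side File Daemon configuration can be checked the same way (the default config path is shown; adjust it if yours differs):

# bacula-fd -tc /etc/bacula/bacula-fd.conf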

[Image: check configurations]

Once you have made all the changes, restart all the Bacula services.

# systemctl restart bacula-director

# systemctl restart bacula-fd

# systemctl restart bacula-sd

That’s it. Now, bacula has been installed and configured successfully.

Testing Backup Job

After restarting services, let's test that it works by running a backup job.

We will use the Bacula Console to run our first backup job. If it runs without any issues, we will know that Bacula is configured properly. Enter the console with the below command.

# bconsole

This will take you to the Bacula Console, denoted by a * prompt. Create a label by issuing the label command. You will then be prompted to enter a volume name and select the pool that the backup should use; we'll use the "File" pool that we configured earlier by entering "2".

At this point Bacula knows how we want to write the data for our backup. We can now run a backup to test that it works correctly using the run command; you will be prompted to select which job to run. We want to run the "BackupLocalFiles" job, so enter "1" at the prompt. At the "Run Backup job" confirmation prompt, review the details, then enter "yes" to run the job; you will see a new message as shown below.
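
For reference, here is an abridged sketch of what that console session looks like; the exact prompts, pool list and job numbering depend on your configuration:

*label
Enter new Volume name: BackupVol01
Defined Pools:
     1: Default
     2: File
     3: Scratch
Select the Pool (1-3): 2
*run
The defined Job resources are:
     1: BackupLocalFiles
     2: BackupCatalog
     3: RestoreFiles
Select Job resource (1-3): 1
OK to run? (yes/mod/no): yes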

[Image: test backup job]

After running a job, Bacula will tell you that you have messages. The messages are output generated by running jobs. Check the messages by typing 'message'.

[Image: test backup message]

Another way to see the status of the job is to check the status of the Director. To do this, enter this command at the bconsole prompt.

*status director

[Image: status director]

The "OK" status indicates that the backup job ran without any problems. Congratulations! You have a backup of the "Full Set" of your Bacula server.

Testing Restore Job

Now that a backup has been created, it is important to check that it can be restored properly. The restore command allows us to restore files that were backed up. To demonstrate, we'll restore all of the files in our last backup.

* restore all

A selection menu will appear with many different options, which are used to identify which backup set to restore from. Since we only have a single backup, let's "Select the most recent backup" by choosing option 5. When you are finished making your restore selection, proceed by typing 'done' as shown below.

[Image: testing restore]

As with backup jobs, you should check the messages and the Director status after running a restore job. Let's check the messages by typing 'message'.

[Image: restore message]

Again, checking the Director status is a great way to see the state of a restore job.

*status director

[Image: status director]

Managing Bacula via the command line can be a bit difficult for some administrators, but in that case you have the option to use Webmin, so you don't have to remember all the commands or edit configuration files manually.

Conclusion

That's it. In this article you have learned the basic Bacula setup and how it can back up and restore your local file system. You can also add your other servers as backup clients so you can recover them in case of data loss. Do share your comments and suggestions. Thank you for reading this article.

The post How to Configure Bacula Server on Ubuntu 15.10 /16.04 appeared first on LinOxide.


How To Change Timezone and NTP Synchronization on Ubuntu 16.04

Setting the correct timezone and date in Linux is very important, as many things depend on it. The system clock needs to be accurate whether you are using Linux on your personal computer or running a Linux server in production. NTP (Network Time Protocol) enables the synchronization of computer clocks distributed across the network by ensuring accurate local timekeeping with reference to time sources on the Internet. NTP communicates between clients and servers using the User Datagram Protocol on port 123. NTP uses a systematic, hierarchical level of clock sources for its reference. Each level is called a stratum and has a layer number that usually begins with zero. The stratum level serves as an indicator of the distance from the reference clock, in order to avoid cyclic dependencies in the hierarchy. However, the stratum does not represent the quality or reliability of the time source.

The NTP software package includes a background program known as a daemon or service, which synchronizes the computer's clock to a particular reference time source such as a radio clock or another device connected to the network. Now let's see how to set up the timezone and NTP synchronization on Ubuntu 16.04.

Setup Timezone:

Let's start with setting the correct timezone on your Ubuntu server. Run the below command with root user credentials and you will be presented with a menu system that allows you to select the geographic region of your server. Select the geographic area in which you live and press 'OK' to continue.

# dpkg-reconfigure tzdata

[Image: timezone setup]

Then select the city or region corresponding to your time zone as shown.

[Image: time zone]

Press 'OK' and your system will be updated to use the selected timezone; you will get the below output showing your default zone and date.

root@ubuntu-16:~# dpkg-reconfigure tzdata

Current default time zone: 'Europe/London'
Local time is now: Tue May 24 21:00:31 BST 2016.
Universal Time is now: Tue May 24 20:00:31 UTC 2016.

You can also configure the timezone on an Ubuntu server using the timedatectl command. To list all the available timezones, run the below command.

# timedatectl list-timezones

Then select your desired timezone from the listed output and run the below command to configure it on your system.

# timedatectl set-timezone Europe/London

Now you can verify that the timezone has been set properly by using the 'timedatectl' command in your terminal, where you will get information about your timezone settings as shown.

# timedatectl

[Image: timedatectl]

Setup NTP Synchronization:

To set up NTP synchronization, we will use a service called ntp, which can be installed from Ubuntu's default repositories using the below command.

# apt-get install ntp

[Image: Installing NTP]

That's it; you have successfully installed the NTP package to set up NTP synchronization on Ubuntu 16.04. The daemon will start automatically on each boot and will continuously adjust the system time to stay in line with the global NTP servers throughout the day.

Configure NTP Servers:

To change the default NTP servers, open the below configuration file and find the section that lists the NTP Pool Project servers, as shown.

# vim /etc/ntp.conf

[Image: ntp configurations]
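
On a stock Ubuntu 16.04 install, that section of /etc/ntp.conf looks roughly like this (quoted as a sketch of the default file; yours may differ slightly):

# Use servers from the NTP Pool Project.
pool 0.ubuntu.pool.ntp.org iburst
pool 1.ubuntu.pool.ntp.org iburst
pool 2.ubuntu.pool.ntp.org iburst
pool 3.ubuntu.pool.ntp.org iburst
# Use Ubuntu's ntp server as a fallback.
pool ntp.ubuntu.com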

These lines refer to a set of hourly-changing random servers, located all around the world, that provide your server with the correct time. You can list the servers currently in use with the below command.

# ntpq -p

[Image: ntp global servers]

Whenever you make changes to the configuration, make sure to restart the ntp service with the below command.

# service ntp restart

Conclusion:

That's it. In this article you have learned about timezone and NTP synchronization on Ubuntu 16.04. NTP is easy to deploy on servers hosting different services; it requires little resource overhead and minimal bandwidth, and can handle hundreds of clients at a time with minimal CPU usage. Thank you for reading this article; I hope you find it helpful. Do share your thoughts on it.

The post How To Change Timezone and NTP Synchronization on Ubuntu 16.04 appeared first on LinOxide.


How to Install and Setup Cacti on Ubuntu 16.04

Hello and welcome to today's article on another open source network monitoring tool: Cacti. Cacti is a complete network graphing solution designed around RRDtool's data storage and graphing functionality. It can graph network bandwidth with SNMP, shell or Perl scripts. RRDtool is a program developed by Tobias Oetiker, the Swiss creator of the famous MRTG. RRDtool is written in the C programming language and stores the collected data in ".rrd" files. The number of records in a ".rrd" file never increases, meaning that old records are regularly removed. This implies that one obtains precise figures for recently logged data, whereas figures based on very old data are mean-value approximations. By default, you get daily, weekly, monthly and yearly graphs.

Some of the primary features of Cacti are the following:

  • unlimited graph items
  • flexible data sources
  • custom data-gathering scripts
  • built-in SNMP support
  • graph templates
  • data source templates
  • host templates
  • user-based management and security
  • tree, list, and preview views of graph data
  • auto-padding support for graphs
  • graph data manipulation

Using Cacti you can easily monitor the performance of your computers, networks, servers, routers, switches, services (Apache, MySQL, DNS, hard disks, mail servers), SANs, applications, weather measurements, etc. Cacti's installation is very simple and you don't need to be an expert to complete its setup. You can also add plugins to Cacti to integrate other free tools like ntop or PHP Weathermap.

1) Prerequisites:

The basic requirement for Cacti is a LAMP stack set up on your server before getting started with the installation. Log in to your Ubuntu server and run the below commands to update it.

# apt-get update

# apt-get upgrade

Before installing the LAMP packages, please note that Cacti does not support MySQL Server 5.7 yet. So, we will use MySQL Server 5.6 by adding its repository and then updating the system with the below commands.

# add-apt-repository 'deb http://archive.ubuntu.com/ubuntu trusty universe'

# apt-get update

Now install the following packages for the Cacti setup on your Ubuntu server with the below command.

# apt-get install apache2 mysql-server-5.6 php libapache2-mod-php

Press 'Y' to continue the installation of the LAMP packages, including their additional required packages, as shown.

[Image: LAMP installation]

During the installation process, you will be asked to configure the root password of the MySQL server. Press 'OK' after setting the password and then repeat it at the next prompt.

[Image: MySQL password]

2) Install SNMP, SNMPD and RRDtools:

We need to install a few other packages that are necessary for a fully functional Cacti setup; to monitor the 'localhost' where Cacti is installed, you need to install and configure the 'snmpd' service.

Run the below command to install these packages on your Ubuntu 16.04 server and press 'Y' to continue.

# apt-get install snmp snmpd rrdtool

[Image: snmpd and rrdtools]

3) Install Cacti on Ubuntu 16.04:

Now we can start the Cacti installation, as we have completed all of its required dependencies. Issue the below command to install the Cacti packages and press 'Y' to continue.

# apt-get install cacti cacti-spine

[Image: Installing Cacti]

4) Configuring Cacti:

During the installation process you will be prompted to configure Cacti with a few options. First of all, choose the web server you wish to configure with Cacti (we are using Apache) and press 'OK' to continue.

[Image: cacti web server]

Next, set up the database that Cacti is going to use. Select 'No' if you have already configured the database, or 'Yes' to set it up using dbconfig-common, as shown.

[Image: dbconfig-common]

Provide the database password that the Cacti application will use to connect to the database server.

[Image: cacti mysql password]

Select the MySQL server connection type from the available options; for the best performance we will choose the default UNIX socket, as shown.

[Image: database connection type]

Then you will be asked to create a new MySQL database user that Cacti will use to connect to the database server.

[Image: mysql user for cacti]

That's it; the Cacti installation and configuration are complete. Now make sure that all the required services are active and running.

# service snmpd restart
# service mysql restart
# service apache2 restart
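Cacti collects data through a poller that the Ubuntu package normally schedules via cron. As a quick sanity check (the exact path, user and site directory may vary by package version), confirm that a cron entry similar to this exists:

# cat /etc/cron.d/cacti
*/5 * * * * www-data php /usr/share/cacti/site/poller.php >/dev/null 2>&1

If the poller never runs, the graphs will stay empty even though the web setup succeeded.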

5) Cacti Web Installation Setup:

Open the following URL to start the Cacti web configuration, read the installation guide, and click Next to continue.

http://your-server_ip-address/cacti

[Image: cacti setup]

Select the type of installation as 'New Installation' and click on the NEXT button.

[Image: cacti new installation]

Now check the values shown below and make sure they are all correct before continuing. If everything looks OK and there are no errors in your installation, hit Finish.

[Image: finish cacti setup]

Then log in with the username 'admin'; 'admin' is both the default username and the default password for Cacti, as shown below.

[Image: Cacti Login]

After the first login, you must change the default password and set a different one.

[Image: cacti password]

Welcome To Cacti Home Page:

After resetting the cacti user password, you will automatically be directed to the home page, which looks like the image below.

[Image: Cacti Home]

Now add new devices or create new graphs. To view graphs of your localhost system, click on the graphs button and you will see multiple graphs of your localhost server showing system memory usage, load average, and so on.

[Image: cacti graphs view]

Conclusion:

In this article you learned how to install and configure Cacti on Ubuntu 16.04. You can now use it in your own environment to graph CPU and network bandwidth utilization. You can also use it to monitor network traffic by polling a router or switch via SNMP. We hope you have enjoyed it; do not forget to share your thoughts. Thank you.

The post How to Install and Setup Cacti on Ubuntu 16.04 appeared first on LinOxide.


How to Run Wayland and Weston on Arch Linux

Hello, today we are going to talk about Wayland, the new windowing protocol for the Linux operating system. It is maintained by the people at freedesktop.org, who also help with X development. The main motivation for creating a new windowing protocol is that X became complex; legacy technology and concepts made it hard to improve. Wayland was designed with a new, simpler architecture that performs better and is easier to develop for.

Wayland is still in development and will not replace X as the main windowing system anytime soon. The fact is that X is a mature system; it simply works and is expected to be found on Linux/Unix systems. So even when Wayland becomes the main windowing system for unices, they should keep X as a legacy option.

Although Wayland is under development, you can give it a try right now, and that is what we are going to do.

Installing Wayland and Weston packages

Here is how to install the required packages to run Wayland and its reference compositor, Weston.

First, you should update your system:

pacman -Syu

Now install wayland, which will also install libxml if it is not present.

pacman -S wayland

Then install Weston, the reference compositor and window manager on which Wayland clients (applications) run.

pacman -S weston

During the installation, pacman will ask you to select which package will provide libgl; this depends on your video card. The following image shows the options.

Here is a list of other dependencies that will be installed on a brand new Arch Linux setup.

Running Weston

You can run Weston in different ways: directly as the main backend, or from within another windowing system.

Running within X

You can run weston from within X by calling weston from an xterm session.

weston
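By default weston detects that it is running under X; if you want to be explicit, you can select the X11 backend yourself (a sketch, assuming your weston build ships x11-backend.so):

weston --backend=x11-backend.so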

As with X, you can start a weston instance within another; once again, just call weston. The following image shows a stack of Weston running within Weston within Weston within X.

[Image: X->Weston->Weston->Weston stack]

Running without X

Now let's try to run Weston as our main backend. From one of the system's VTs, call weston-launch.

weston-launch
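Note that weston-launch normally only runs for root or for members of the weston-launch group; a sketch of the usual setup, assuming the group does not exist yet on your system:

groupadd weston-launch
usermod -aG weston-launch yourusername

Log out and back in for the group change to take effect.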

Configure Weston

As you have seen, you can run weston without any extra configuration; however, you can set a few things to customize it and make it better.

weston.ini

To configure weston you should create or change the weston.ini file, which usually lives at ~/.config/weston.ini. Here I will show some options you can use to make such changes.

The core section is used to select the startup compositor modules and general options.

[core]
backend=drm-backend.so

The keyboard section is used to select keyboard options, such as layout and variant.

[keyboard]
keymap_layout=fr
keymap_model=pc105
keymap_variant=euro
keymap_options=grp:alt_shift_toggle

In the shell section you set some of the behavioural and visual aspects of the compositor.

[shell]
type=desktop-shell.so
background-image=/path/to/wallpaper.jpg
background-color=0xff002200
panel-color=0x55550000
locking=true
animation=zoom
binding-modifier=ctrl
num-workspaces=6

The screensaver section lets you set the path to the screensaver and its timeout.

[screensaver]
path=/usr/libexec/weston-screensaver
duration=600

In the output section you set how things will be displayed on the monitor; the following are commonly used options. Note also the different forms the mode option can take; they are similar to those in xorg.conf.

#[output]
#name=LVDS1
#mode=1680x1050
#transform=90

#[output]
#name=VGA1
#mode=173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
#transform=flipped

#[output]
#name=X1
#mode=1024x768
#transform=flipped-270

To add items to the panel, you should create a launcher section containing the path to the executable and an icon.

[launcher]
icon=/usr/share/icons/gnome/24x24/apps/utilities-terminal.png
path=/usr/bin/weston-terminal

These were some of the main options, which should be enough to test weston. For more details you should take a look at weston.ini(5). Anyway, here is a complete weston.ini example with some other settings.

[core]
backend=drm-backend.so

[keyboard]
keymap_layout=us
#keymap_model=pc105
#keymap_variant=euro
keymap_options=grp:alt_shift_toggle

[shell]
type=desktop-shell.so
background-image=/usr/share/archlinux/wallpaper/archlinux-aftermath.jpg
background-type=scale
#background-color=0xff00ff00
panel-color=0x55227700
locking=true
animation=zoom
binding-modifier=ctrl
num-workspaces=4
cursor-size=16

[input-method]
path=/usr/lib/weston/weston-keyboard

[screensaver]
path=/usr/lib/weston/weston-screensaver
duration=600

#[output]
#name=LVDS1
#mode=1680x1050
#transform=90

#[output]
#name=VGA1
#mode=173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
#transform=flipped

[output]
name=X1
mode=1024x768
#transform=flipped-270
transform=normal

[launcher]
path=/usr/bin/firefox
icon=/usr/share/icons/hicolor/24x24/apps/firefox.png

[launcher]
path=/usr/bin/weston-terminal
icon=/usr/share/icons/gnome/24x24/apps/terminal.png

[launcher]
path=/usr/bin/weston-flower
icon=/usr/share/icons/Adwaita/24x24/emblems/emblem-generic.png

[launcher]
path=/usr/bin/weston-editor
icon=/usr/share/icons/Adwaita/24x24/devices/input-dialpad.png

Wayland clients

Along with Wayland and Weston, some utilities will be installed. These are nice Wayland client examples that show some of Wayland's and Weston's capabilities.

weston-flower - Draw and drag some random flowers on the screen

weston-smoke - Draw smoke that follows the cursor

weston-editor - Very simple text editor example

weston-image - Simple image viewer, just call it with the image path as the first argument

weston-terminal - Simple terminal shell

There are other applications included, but they are not as fun; they are mostly useful for development/debugging purposes.

Here are some of these Wayland clients running.

Conclusion

Well, this is it: Wayland is working and you can run Weston from within X, directly, or from within another Weston session. There are already more native clients, as well as ways to run X applications on top of Wayland. It is time to take a look, play around and maybe even develop your own Wayland clients.

Have fun and thanks for reading!

The post How to Run Wayland and Weston on Arch Linux appeared first on LinOxide.


How To Install CouchDB and Futon on Ubuntu 16.04

Apache CouchDB is an open source database management system that uses JSON for documents, JavaScript for MapReduce indexes and regular HTTP for its API. It is widely known as a NoSQL database system. It has a document-oriented NoSQL database architecture and is implemented in the concurrency-oriented language Erlang. CouchDB doesn't store data and relationships in tables; instead, all the data is stored independently in documents. In Apache CouchDB, each document maintains its own data and self-contained schema. Futon is the native web-based interface built into CouchDB, which helps us create or delete databases and manage individual CouchDB documents. In this article, we'll install CouchDB and Futon on a machine running Ubuntu 16.04 LTS Xenial.

Pre-requisites

Before we get started, we'll need to make sure that we have Ubuntu 16.04 LTS Xenial on our machine or server, as this article covers CouchDB installation on Ubuntu 16.04 LTS Xenial. If we don't have one and are planning to install Ubuntu 16.04, we can download it from the official Ubuntu download page. Once we have our Ubuntu 16.04 ready to go, we'll first need to update the local repository index of the apt package manager. If we are working as the root user, we don't need to prefix the commands with sudo, but as we are running as a non-root user, we'll need to use sudo with every command that requires root privileges.

$ sudo apt-get update

Once the local repository index of the apt package manager has been updated, we'll upgrade the packages of our Ubuntu system using the following command.

$ sudo apt-get upgrade

[Image: Upgrading Ubuntu Xenial]

Adding PPA Repository

Once we meet the prerequisites, we'll move forward to the installation of CouchDB and Futon. As there is an Ubuntu PPA repository for Apache CouchDB maintained and updated by the CouchDB project and community, we'll go with it. Installing CouchDB from the PPA repository is the easiest and simplest method to install an official release of CouchDB. First of all, we'll need to make sure that the package named software-properties-common is installed so that we can easily add the PPA repository to our Ubuntu machine. In order to install it, we'll execute the following command.

$ sudo apt-get install software-properties-common

Next, we'll add the official CouchDB PPA repository using the add-apt-repository command in the terminal.

$ sudo add-apt-repository ppa:couchdb/stable

Then, we'll need to update the local repository index of the apt package manager, as we have added a new PPA repository above.

$ sudo apt-get update

Installing CouchDB

Now, as all the above steps have been completed, we'll go for the installation of CouchDB on our machine. In order to install CouchDB from the official PPA repository, we can simply run the following apt-get command in our terminal. This will install couchdb with its required dependencies from their respective repositories.

$ sudo apt-get install couchdb

[Image: Install CouchDB Xenial]

Fixing Ownership and Permission

As some files and directories are owned by the root user and group by default, which may pose a security risk in production, it is highly recommended to fix their ownership and permissions. To fix this, we'll change the ownership of those files to the couchdb user and group. As the couchdb user and group are already created during the installation above, we don't need to create them ourselves. In order to change the ownership, we'll simply run the following command.

$ sudo chown -R couchdb:couchdb /usr/bin/couchdb /etc/couchdb /usr/share/couchdb

Once done, we'll fix the permissions of those files and directories by executing the following.

$ sudo chmod -R 0770 /usr/bin/couchdb /etc/couchdb /usr/share/couchdb

Restarting CouchDB

Once all the steps above are done successfully, we'll restart our CouchDB instance. As we are running Ubuntu 16.04 LTS Xenial, which ships with systemd as the default init system, we'll run the following command.

$ sudo systemctl restart couchdb

In order to test whether CouchDB is running fine or not, we can simply run the following command, which retrieves the server information through curl.

$ curl localhost:5984

[Image: Curl CouchDB]
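If CouchDB is up, curl prints a short JSON greeting along these lines (the uuid, version and vendor values below are illustrative and will differ on your system):

{"couchdb":"Welcome","uuid":"85fb71bf700c17267fef77535820e371","version":"1.6.0","vendor":{"name":"Ubuntu","version":"16.04"}}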

Accessing Futon

As CouchDB natively includes Futon, the web interface for CouchDB, we can simply access it via our web browser. To do so, we'll first set up SSH tunneling, as opening Futon through the firewall at this point may be dangerous because we haven't set proper admin credentials yet. In order to set up SSH tunneling, we'll run the following command.

$ ssh -L 5984:127.0.0.1:5984 arun@ip-address

[Image: SSH Tunneling]

Note: Here we'll need to replace arun and ip-address with the username and IP address of the server respectively.

Now that we have successfully set up the SSH tunnel, we can access the web interface of CouchDB. To do so, we'll open a web browser and point it to http://localhost:5984 .

[Image: Web CouchDB Output]

Then, in order to access the Futon web application, we'll point the browser to http://localhost:5984/_utils/index.html . Once done, we'll get access to the Futon Database Administration Panel, in which we can perform various CouchDB database management activities.

[Image: Web Interface Futon]

Securing Futon

As we don't require any login credentials to log in to the Futon panel, and as every account in it is an admin account, anyone accessing Futon can make changes to the database. So, first of all, we'll need to secure it by creating a new admin account. To do so, we can simply click on the Fix it link shown at the bottom of the right sidebar.

[Image: Fixing Admin Access]

Doing so will open a dialogue box allowing us to create a new admin account. Here, we'll enter the username and password which we'll use later to log in to Futon.

[Image: Creating Admin Account CouchDB]

Creating Databases

Now, in order to create a database, we'll log in to the Futon control panel using the username and password created above. Then, we'll click on the Create Database button available at the top left of the screen and enter a name for the new database. Next, we can add new documents and edit, delete, update and save documents via Futon easily. An equivalent call over the HTTP API is shown below.
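The same operation can also be done through CouchDB's HTTP API; a quick sketch with curl, assuming the admin account created above is named admin (replace the credentials and database name with your own):

$ curl -X PUT http://admin:password@localhost:5984/mydb
{"ok":true}

A reply of {"ok":true} confirms the database was created.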

[Image: Creating New Database]

Allowing External Access

If we need to make CouchDB accessible outside of our local network or local machine, we'll first need to make sure that the above steps on securing Futon are completed. Then, we'll set the bind_address variable in the /etc/couchdb/local.ini file, under the [httpd] block, to 0.0.0.0. To do so, we'll log in as the root user and open the file using a text editor.

$ su root

# nano /etc/couchdb/local.ini

[httpd]
bind_address = 0.0.0.0

[Image: Allowing External Access]

Here, we can customize the configuration according to our needs and requirements. Once done, we'll save the file and exit the text editor. We can then log out from the root user by executing the exit command in the terminal.

Now, in order to apply the changes, we'll restart our CouchDB service using the systemctl command.

$ sudo systemctl restart couchdb

Allowing Firewall

As we are making our CouchDB available outside of our local network, we'll also need to make sure that port 5984 is opened by the firewall. In this example we use firewalld as the firewall program; note that Ubuntu 16.04 LTS Xenial ships with ufw by default, so install and start firewalld first, or open the port with your firewall tool of choice instead.

$ sudo firewall-cmd --zone=public --add-port=5984/tcp --permanent

Once the port is added for public access, we'll need to make sure that we reload the firewalld program.

$ sudo firewall-cmd --reload

Note: Making CouchDB accessible through the internet lets anyone add and access the documents and databases, although they are unable to edit or delete documents since we have already created an admin user above. So, it's not recommended to make CouchDB externally accessible; if we need remote access, we can use SSH tunneling or allow only a specific IP address via iptables or the firewall program, as sketched below.
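For instance, a minimal iptables sketch that permits only one trusted address (203.0.113.5 is a placeholder) to reach CouchDB and drops everything else on that port:

$ sudo iptables -A INPUT -p tcp -s 203.0.113.5 --dport 5984 -j ACCEPT
$ sudo iptables -A INPUT -p tcp --dport 5984 -j DROP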

Conclusion

Finally, we have easily and successfully installed CouchDB with its web-based interface Futon on our machine running Ubuntu 16.04 LTS Xenial. Documents and databases in CouchDB can easily be accessed by everyone, so we'll need to make sure that our database is not accessible to the public or untrusted people. We could also install it manually using the tarballs available on the official download page, but as we are running Ubuntu 16.04 LTS Xenial, it's pretty easy to install using the PPA repository. If you have any questions, suggestions or feedback, please write them in the comment box below. Thank you! Enjoy :-)

The post How To Install CouchDB and Futon on Ubuntu 16.04 appeared first on LinOxide.


How to Install RainLoop Webmail on Ubuntu 16.04

RainLoop Webmail is an open source web application written in PHP. It is a simple, fast, web-based email client. It provides a fast web interface to access your emails on almost all major mail providers such as Yahoo, Gmail and Outlook, as well as your own mail servers. These are some of the main features of this email client.

1. Modern user interface with efficient memory use which can work on low-end webservers.
2. Provides complete support of IMAP and SMTP protocols including SSL and STARTTLS.
3. Minimum resource requirements.
4. Provides interface to set filters.
5. Direct access to the mail server, no storing of emails locally on webservers
6. It allows adding multiple domain accounts.
7. Really simple and fast installation.
8. It can be integrated with Facebook, Twitter, Dropbox and Google.
9. Built-in caching system allows for improving overall performance and reducing load on web and mail servers.

In this article, I'm providing the guidelines on how to install RainLoop Webmail on Ubuntu 16.04. Let's see the pre-requisites for the installation.

Pre-requisites

This application requires a LAMP setup prior to the installation.

  • This works with any of these Web servers: Apache, nginx, lighttpd, MS IIS or other with PHP support
  • PHP Support: 5.3 and above recommended
  • Required PHP extensions: CURL, iconv, json, libxml, dom, openssl, DateTime, PCRE, SPL
  • Supported Browsers: Internet Explorer 9+, Firefox, Opera 10+, Safari 3+, Google Chrome
  • Optional: PDO (MySQL/PostgreSQL/SQLite) PHP extension (for contacts)

As mentioned above, RainLoop Webmail is written in PHP, so it is recommended to have a web server with fully functional PHP installed to make it work. I've installed Apache, PHP and MySQL on my server prior to the installation. I'll go through the installation steps one by one here.

root@ubuntu:/var/#apt-get install python-software-properties  *//Install the Python Software packages//*
root@ubuntu:/var/#apt install software-properties-common
root@ubuntu:/var# add-apt-repository ppa:ondrej/php
root@ubuntu:/var#apt-get update  *//Update the packages//*
root@ubuntu:/var#apt-get install -y php7.0 *// Install PHP //*
Processing triggers for man-db (2.7.5-1) ...
Setting up php7.0-common (7.0.4-7ubuntu2.1) ...
Setting up php7.0-mcrypt (7.0.4-7ubuntu2.1) ...
Setting up php7.0-imap (7.0.4-7ubuntu2.1) ...
Setting up php7.0-xml (7.0.4-7ubuntu2.1) ...
Setting up php7.0-readline (7.0.4-7ubuntu2.1) ...
Setting up php7.0-opcache (7.0.4-7ubuntu2.1) ...
Setting up php7.0-odbc (7.0.4-7ubuntu2.1) ...
Setting up php7.0-mysql (7.0.4-7ubuntu2.1) ...
Setting up php7.0-json (7.0.4-7ubuntu2.1) ...
Setting up php7.0-curl (7.0.4-7ubuntu2.1) ...
Setting up php7.0-cli (7.0.4-7ubuntu2.1) ...
Setting up php7.0-fpm (7.0.4-7ubuntu2.1) ...
Setting up php7.0 (7.0.4-7ubuntu2.1) ...

root@ubuntu:/var# add-apt-repository ppa:ondrej/apache2 *// Add the latest packages for Apache2 //*
root@ubuntu:/var/#apt-get update
root@ubuntu:/var/#apt-get install apache2 *//Install Apache2 //*
root@ubuntu:/var/#add-apt-repository -y ppa:ondrej/mysql-5.6 *//Add the packages for MySQL 5.6 //*
root@ubuntu:/var/# apt-get update
root@ubuntu:/var/# apt-get install mysql-server-5.7 *//Install MySQL 5.7 //*
root@ubuntu:/var/#apt-get install libapache2-mod-php7.0 php7.0-mysql php7.0-curl php7.0-json

Confirming the Installations

After the installation, we need to confirm the installed Apache, PHP and MySQL versions.

root@ubuntu:~# apache2 -v
Server version: Apache/2.4.18 (Ubuntu)
Server built: 2016-04-15T18:00:57

root@ubuntu:~# php -v
PHP 7.0.4-7ubuntu2 (cli) ( NTS )
Copyright (c) 1997-2016 The PHP Group
Zend Engine v3.0.0, Copyright (c) 1998-2016 Zend Technologies
with Zend OPcache v7.0.6-dev, Copyright (c) 1999-2016, by Zend Technologies

root@ubuntu:~# mysql -v
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 7
Server version: 5.7.12-0ubuntu1 (Ubuntu)

Adding to the hosts file

We need to add a proper hosts file entry so that the domain resolves as required.

cat /etc/hosts

139.162.55.62 rainloop.webmail.com

Creating the Virtual Host

Now we can create the virtual host for the domain. In addition, make sure to create the document root folder and error log folder mentioned in the virtual host, if they don't already exist.

root@ubuntu:/etc/apache2/sites-enabled# cat rainloop.conf

<VirtualHost *:80>
ServerName rainloop.webmail.com
DocumentRoot "/var/www/rainloop/"
ServerAdmin you@example.com
ErrorLog "/var/log/httpd/rainloop-error_log"
TransferLog "/var/log/httpd/rainloop-access_log"

<Directory />
Options +Indexes +FollowSymLinks +ExecCGI
AllowOverride All
Order deny,allow
Allow from all
Require all granted
</Directory>

</VirtualHost>

Creating the folders specified in the Virtual host

root@ubuntu:~#mkdir /var/www/rainloop/
root@ubuntu:/#mkdir -p /var/log/httpd/

Enabling SSL for the host

To add SSL, we first need to generate a self-signed certificate for our hostname "rainloop.webmail.com" and then add it to the virtual host to enable SSL support. Let's see how to create a self-signed certificate.
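Note that the command below writes the key and certificate under /etc/httpd/conf/ssl/, a directory that does not exist on a stock Ubuntu setup, so create it first:

root@ubuntu:/var/#mkdir -p /etc/httpd/conf/ssl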

root@ubuntu:/var/#openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/httpd/conf/ssl/rainloop.webmail.com.key -out /etc/httpd/conf/ssl/rainloop.webmail.com.crt
Generating a 2048 bit RSA private key
....................................................................................+++
.....................+++
writing new private key to '/etc/httpd/conf/ssl/rainloop.webmail.com.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:IN
State or Province Name (full name) [Some-State]:K-----A
Locality Name (eg, city) []:C-----N
Organization Name (eg, company) [Internet Widgits Pty Ltd]:VIP
Organizational Unit Name (eg, section) []:VIP
Common Name (e.g. server FQDN or YOUR name) []:rainloop.webmail.com
Email Address []:-----@gmail.com

As seen here, you can provide the details required to generate the certificate. Once it is created, you can add it to your Apache configuration file, as below:

root@ubuntu:/etc/apache2/sites-available# cat rainloop-ssl.conf
<VirtualHost *:443>
ServerName rainloop.webmail.com
DocumentRoot "/var/www/rainloop/"
ServerAdmin you@example.com
ErrorLog "/var/log/httpd/rainloop-ssl-error_log"
TransferLog "/var/log/httpd/rainloop-ssl-access_log"

SSLEngine on
SSLCertificateFile "/etc/httpd/conf/ssl/rainloop.webmail.com.crt"
SSLCertificateKeyFile "/etc/httpd/conf/ssl/rainloop.webmail.com.key"

<FilesMatch "\.(cgi|shtml|phtml|php)$">
SSLOptions +StdEnvVars
</FilesMatch>

BrowserMatch "MSIE [2-5]" \
nokeepalive ssl-unclean-shutdown \
downgrade-1.0 force-response-1.0

CustomLog "/var/log/httpd/ssl_request_log" \
"%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"

<Directory />
Options +Indexes +FollowSymLinks +ExecCGI
AllowOverride All
Order deny,allow
Allow from all
Require all granted
</Directory>

</VirtualHost>

Enabling SSL for the Vhost

We can use the a2ensite command in Ubuntu to enable the SSL virtual host for the domain.

root@ubuntu:/etc# a2ensite rainloop-ssl
Site rainloop-ssl already enabled

Modify the open_basedir value in the PHP configuration file

I've installed PHP 7 on my server, so the PHP configuration file is located at /etc/php/7.0/fpm/php.ini. You need to modify the open_basedir value in the PHP configuration file to limit file operations to the listed directories; make sure the RainLoop document root (/var/www/) is included.

root@ubuntu:~# grep open_basedir /etc/php/7.0/fpm/php.ini
; open_basedir, if set, limits all file operations to the defined directory
open_basedir = /srv/http/:/home/:/tmp/:/usr/share/pear/:/usr/share/webapps/:/etc/webapps/:/var/www/

After modifying the PHP configuration file, we need to restart the Apache service to make it effective.
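On Ubuntu 16.04 this can be done with systemctl:

root@ubuntu:~# systemctl restart apache2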

Confirm the status of the required PHP modules

This webmail application requires some PHP modules to be enabled on the server. Please confirm that these modules are enabled on your server.

root@rainloop:~# php -m | egrep 'odbc|mcrypt|mysqli|iconv|imap|openssl|pdo|SPL'
iconv
imap
mcrypt
mysqli
odbc
openssl
pdo_mysql
SPL

Also confirm that the required SSL protocols are enabled in PHP.

[Image: SSL_supported_phpinfo]

Download/Install the RainLoop Webmail application

You can go to the Official RainLoop site and download the latest version available from their website.

root@ubuntu:~# wget http://repository.rainloop.net/v1/rainloop-latest.zip
--2016-05-26 06:10:42-- http://repository.rainloop.net/v1/rainloop-latest.zip
Resolving repository.rainloop.net (repository.rainloop.net)... 104.28.6.34, 104.28.7.34
Connecting to repository.rainloop.net (repository.rainloop.net)|104.28.6.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4794510 (4.6M) [application/zip]
Saving to: ‘rainloop-latest.zip’

rainloop-latest.zip 100%[==================================================================>] 4.57M 1.52MB/s in 3.0s

2016-05-26 06:10:46 (1.52 MB/s) - ‘rainloop-latest.zip’ saved [4794510/4794510]

root@ubuntu:~# unzip rainloop-latest.zip -d /var/www/rainloop/

After extracting the archive, fix the file/folder permissions and ownership as described in the official documentation; a sketch of the commonly recommended permissions follows the ownership command below.

root@ubuntu:/var/www/rainloop# chown -R www-data:www-data .
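A sketch of the commonly recommended permissions (directories 755, files 644; adjust if your setup differs):

root@ubuntu:/var/www/rainloop# find . -type d -exec chmod 755 {} \;
root@ubuntu:/var/www/rainloop# find . -type f -exec chmod 644 {} \;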

Configure RainLoop via Web Interface

You can manage the Webmail interface either from the Web interface or by modifying the variables in the file /var/www/rainloop/data/_data_dc70aaa98299c32ee3d3ee747f40c63b/_default_/configs/application.ini.

The web interface provides user-friendly access for modifying the settings. We can access the admin interface at http://rainloop.webmail.com/?admin with the default credentials: the default admin username is "admin" and the password is "12345".

[Image: admin_webmail]

After logging in, you should change your password to a secure one. You can modify the admin password on the Security tab.

[Image: security_password reset]

If you don't have a database yet, you can create one named rainloop and grant the required access. All your email contacts and filters will be saved in this database.

[Image: mysql_contacts]

By default, this webmail includes the Gmail, Yahoo, QQ and Outlook mail servers. We can add as many additional mail server domains as required.

[Image: domains]

As you can see all the mail server settings for GMail are added to this system by default.

[Image: gmail-rainloop]

The Plugins tab displays the available plugins and their purpose. You can install any of the available plugins there as needed.

[Image: rainloop plugin]

I installed the plugin called POPPASSD, which provides an option to change my email account password. After enabling the required settings, you can access your mail server domain in the browser at http://rainloop.webmail.com/

[Image: rainloopadmin]

You can enter the required email/password to access your email. I accessed one of my test Yahoo accounts with my credentials.

[Image: rainloop-yahoo]

You can get more information regarding RainLoop Webmail here. Howdy! Your new advanced email client is ready to use now. Thank you for reading this. I hope you enjoyed this article. I would appreciate your valuable comments and suggestions.

Have a Good Day!

The post How to Install RainLoop Webmail on Ubuntu 16.04 appeared first on LinOxide.


How to Install Jenkins on Ubuntu 16.04

Jenkins is an open source continuous integration tool, used for continuous builds, continuous deployment and testing across multiple servers. It is a self-contained web-based program, ready to run out of the box, with packages for Windows, Mac OS X and Linux operating systems. It is a web application built in Java, and it performs these tasks automatically once the configurations are added. In this article, I'm providing guidelines on how to install and configure Jenkins on your Ubuntu 16.04 server.

Pre-requisites

1. A Web Server (Apache/Nginx/Tomcat)
2. Web-Browser
3. Java Platform

Let's start with the installation steps one by one.

Install Java

Since this web application is built on the Java platform, the server needs the latest Java Development Kit installed. I've used these commands to install Java on my server.

root@ubuntu:~# apt-get update
root@ubuntu:~# apt-get install default-jdk

You can confirm the Java Version after installing.

root@ubuntu:~# java -version
openjdk version "1.8.0_91"
OpenJDK Runtime Environment (build 1.8.0_91-8u91-b14-0ubuntu4~16.04.1-b14)
OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)

Install Apache2

Every web application requires a web server to serve it. I'm using the Apache web server for this purpose. We can install the Apache web server with this command.

root@ubuntu:~#apt-get install apache2

root@ubuntu:~# apache2 -v
Server version: Apache/2.4.20 (Ubuntu)
Server built: 2016-05-05T15:42:04
root@ubuntu:~#

Installing Jenkins

Before installing Jenkins, we need to add the Jenkins repository key and the package source to the sources list; the key import is sketched after the repository file below.

root@ubuntu:/usr/local/src# sh -c 'echo deb http://pkg.jenkins-ci.org/debian binary/ > /etc/apt/sources.list.d/jenkins.list'

root@ubuntu:/usr/local/src# cat /etc/apt/sources.list.d/jenkins.list
deb http://pkg.jenkins-ci.org/debian binary/
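Without the repository key, apt will warn that the Jenkins packages cannot be authenticated. At the time of writing the Jenkins project documented importing the key as below (verify the current key URL on the Jenkins site), followed by a package list update:

root@ubuntu:/usr/local/src# wget -q -O - https://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | apt-key add -
root@ubuntu:/usr/local/src# apt-get update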
root@ubuntu:/etc/apt# apt-get install jenkins
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
daemon jenkins
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 68.1 MB of archives.
After this operation, 69.2 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://mirrors.linode.com/ubuntu xenial/universe amd64 daemon amd64 0.6.4-1 [98.2 kB]
10% [Connecting to ftp.icm.edu.pl (2001:6a0:0:31::2)]
Get:2 http://pkg.jenkins-ci.org/debian binary/ jenkins 2.7 [68.0 MB]
Fetched 68.1 MB in 2min 34s (441 kB/s)
Selecting previously unselected package daemon.
(Reading database ... 34869 files and directories currently installed.)
Preparing to unpack .../daemon_0.6.4-1_amd64.deb ...
Unpacking daemon (0.6.4-1) ...
Selecting previously unselected package jenkins.
Preparing to unpack .../archives/jenkins_2.7_all.deb ...
Unpacking jenkins (2.7) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for systemd (229-4ubuntu6) ...
Processing triggers for ureadahead (0.100.0-19) ...
Setting up daemon (0.6.4-1) ...
Setting up jenkins (2.7) ...
Processing triggers for systemd (229-4ubuntu6) ...
Processing triggers for ureadahead (0.100.0-19) ..

Start the application after installation

root@ubuntu:/etc/apt# /etc/init.d/jenkins start
[ ok ] Starting jenkins (via systemctl): jenkins.service.

You can manage the Jenkins service using Jenkins daemon. Furthermore, you can analyse the Jenkins log at /var/log/jenkins/jenkins.log for any service troubleshooting.

Confirm the Service Status

root@ubuntu~# netstat -plan | grep java
tcp6 0 0 :::8080 :::* LISTEN 27574/java
tcp6 0 0 :::44507 :::* LISTEN 27574/java
udp6 0 0 :::5353 :::* 27574/java
udp6 0 0 :::33848 :::*

Jenkins runs on the default port 8080. You can modify this port in the /etc/default/jenkins file.

root@ubuntu:~# grep HTTP_PORT /etc/default/jenkins
HTTP_PORT=8080

After installing Jenkins, you can access the Jenkins portal at http://IP:8080 or http://hostname:8080

Setting up an Apache2 Proxy for port 80 to 8080

We need to configure a virtual host to proxy port 80 to 8080, so that you can access Jenkins without specifying any port, just by calling http://IP

Enable Proxy module

You can enable the proxy module by just running this command.

root@jenkins:~# a2enmod proxy
Enabling module proxy.
To activate the new configuration, you need to run:
service apache2 restart

root@jenkins:~# a2enmod proxy_http
Considering dependency proxy for proxy_http:
Module proxy already enabled
Enabling module proxy_http.
To activate the new configuration, you need to run:
service apache2 restart

Restart the Apache service once these modules are enabled. Now we need to create the virtual host for proxying the port. Please see the virtual host details:

root@jenkins:/etc/apache2/sites-available# cat jenkins.conf
<VirtualHost *:80>
ServerAdmin webmaster@localhost
ServerName jenkins.ubuntuserver.com
ServerAlias jenkins
ProxyRequests Off
<Proxy *>
Order deny,allow
Allow from all
</Proxy>
ProxyPreserveHost on
ProxyPass / http://localhost:8080/ nocanon
AllowEncodedSlashes NoDecode
</VirtualHost>

root@jenkins:/etc/apache2/sites-enabled# a2ensite jenkins
Enabling site jenkins.
To activate the new configuration, you need to run:
service apache2 reload

By executing this command, you enable the Jenkins site configuration created above. That's all :). Access your Jenkins portal by just calling http://IP or http://hostname.

Configure Jenkins

After installing Jenkins, we can access the Jenkins portal. It will look like the snapshot below:

[Image: JenkinsAdmin]

Now we need to copy the content of the file at the location mentioned above, "/var/lib/jenkins/secrets/initialAdminPassword", and paste it here to continue. This will take us to the next page.
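You can print it straight from the terminal:

root@ubuntu:~# cat /var/lib/jenkins/secrets/initialAdminPassword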

[Image: Jenkin_setup]

We need to install the suggested Jenkins plugins as per our requirements. Once they are installed, it will ask us to create the admin user to manage the Jenkins portal. We need to provide these details to continue.

[Image: Jenkinadmin]

Now it takes us to the management portal.

[Image: Jenkinadminpage]

That's all :). Now we're ready to get started with our continuous integration tool. Thank you for reading this article. I hope you enjoyed reading it. I would appreciate your valuable comments and suggestions.

The post How to Install Jenkins on Ubuntu 16.04 appeared first on LinOxide.


How to Setup OpenLDAP Multi-Master Replication on CentOS 7

OpenLDAP is open source directory server software. It is "lightweight" or "smaller" when compared to X.500, and is designed to run on smaller computers such as desktop machines. In OpenLDAP, data is arranged like the branches of a tree, one striking difference from other commonly used kinds of databases. Access rights to the directory are based on two categories of functions in slapd: Access Control Lists and authorization functions. (In Linux/Unix, by comparison, access rights to file systems are based on file/directory permissions.) An LDAP client binds (logs in) to an LDAP server and submits a query to request information, or submits information to be updated. The server then evaluates the access rights and, when they are granted, responds with an answer, or perhaps with a referral to another LDAP server where the client can have the query serviced.

In this article we will be setting up multi-master replication of OpenLDAP servers on CentOS 7. When your directory is very big, with lots of clients creating lots of traffic on the directory server, it is very difficult to meet the SLA. So we have to distribute the client load across multiple servers with the help of replication. OpenLDAP supports multiple replication configurations; master-master replication and master-consumer replication are the most commonly used.

Basic Setup:

In the multi-master replication topology, two or more servers act as masters, and all of these master servers are authoritative for any change in the directory.

In this tutorial, to keep the process simple, we are going to use two test servers with the following host names and IP addresses.

LDAP1.TEST.COM IP address 172.25.10.176
LDAP2.TEST.COM IP address 192.25.10.177

Log in to both servers using root user credentials and open the 'hosts' file to add both server names with their IP addresses, so that each system can resolve the other's hostname.

#vim /etc/hosts

127.0.0.1 localhost.localdomain localhost
172.25.10.176 LDAP1.TEST.COM LDAP1
192.25.10.177 LDAP2.TEST.COM LDAP2

Installing OpenLDAP Server:

In order to set up multi-master OpenLDAP replication, first we will install and configure the basic LDAP server settings on both of our CentOS 7 servers.

Let's run the below command to install OpenLDAP server packages.

# yum install openldap-servers openldap-clients

[Image: installing openldap]

After installation, copy the sample openldap DB configurations into the following location as shown.

# cp /usr/share/openldap-servers/DB_CONFIG.example /var/lib/ldap/DB_CONFIG

Then change the file owner and start the 'slapd' service.

# chown ldap. /var/lib/ldap/DB_CONFIG

# systemctl start slapd
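To make slapd start automatically at boot as well, enable the service:

# systemctl enable slapd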

[Image: Start ldap services]

Setup OpenLDAP Admin password:

Generate an encrypted password by running the slappasswd command and entering a password, then copy the generated encrypted string and specify it in the "olcRootPW" section.

# slappasswd
New password:
Re-enter new password:
{SSHA}xcsCNH2eMVrNsf4dU7LRJFY5kULU01p4

#vim chrootpw.ldif

dn: olcDatabase={0}config,cn=config
changetype: modify
add: olcRootPW
olcRootPW:{SSHA}xcsCNH2eMVrNsf4dU7LRJFY5kULU01p4

Save and close the file, then run the command below to apply the configuration.

# ldapadd -Y EXTERNAL -H ldapi:/// -f chrootpw.ldif

[Image: openldap admin setup]

Run the commands below to import the basic schemas.

# ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/cosine.ldif

# ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/nis.ldif

# ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/inetorgperson.ldif

[Image: import basic schemas]

Domain name setup on LDAP DB:

Let's generate the directory manager's password first, then open 'chdomain.ldif' and put the text below in it. Make sure to substitute your own domain name in the "dc=***,dc=***" sections and specify the generated password in the "olcRootPW" section.

# slappasswd
New password:
Re-enter new password:
{SSHA}xIE0NEjoshYdxkvdBaudyuo8NA2IlisgsN7MvXT

# vim chdomain.ldif

dn: olcDatabase={1}monitor,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth"
read by dn.base="cn=Manager,dc=test,dc=com" read by * none

dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcSuffix
olcSuffix: dc=test,dc=com

dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcRootDN
olcRootDN: cn=Manager,dc=test,dc=com

dn: olcDatabase={2}hdb,cn=config
changetype: modify
add: olcRootPW
olcRootPW: {SSHA}xIE0NEjoshYdxkvdBaudyuo8NA2IlisgsN7MvXT

dn: olcDatabase={2}hdb,cn=config
changetype: modify
add: olcAccess
olcAccess: {0}to attrs=userPassword,shadowLastChange by
dn="cn=Manager,dc=test,dc=com" write by anonymous auth by self write by * none
olcAccess: {1}to dn.base="" by * read
olcAccess: {2}to * by dn="cn=Manager,dc=test,dc=com" write by * read

Save the file and run the command below to apply the configuration.

# ldapmodify -Y EXTERNAL -H ldapi:/// -f chdomain.ldif

[Image: domain setup]

Next, create 'basedomain.ldif' containing the base entries for the domain:

#vim basedomain.ldif

dn: dc=test,dc=com
objectClass: top
objectClass: dcObject
objectclass: organization
o: Test Domain
dc: Test

dn: cn=Manager,dc=test,dc=com
objectClass: organizationalRole
cn: Manager
description: Directory Manager

dn: ou=People,dc=test,dc=com
objectClass: organizationalUnit
ou: People

dn: ou=Group,dc=test,dc=com
objectClass: organizationalUnit
ou: Group

# ldapadd -x -D cn=Manager,dc=test,dc=com -W -f basedomain.ldif

Enter LDAP Password:
adding new entry "dc=test,dc=com objectClass: top objectClass: dcObject objectclass: organization o: Test Domain dc: Test"
adding new entry "cn=Manager,dc=test,dc=com"
adding new entry "ou=People,dc=test,dc=com"
adding new entry "ou=Group,dc=test,dc=com"
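At this point you can verify the directory contents with a simple search; the entries created above should be listed:

# ldapsearch -x -b "dc=test,dc=com"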

Repeat the same steps on the other node, and then let's move on to the multi-master replication.

OpenLDAP Multi-Master Replication:

Once your basic LDAP settings are complete, follow the steps below to configure multi-master replication. First, we will add the 'syncprov' module: open the file below and put the following configuration in it.

#vim mod_syncprov.ldif

dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulePath: /usr/lib64/openldap
olcModuleLoad: syncprov.la

Save and close the file and run the command below to apply it.

# ldapadd -Y EXTERNAL -H ldapi:/// -f mod_syncprov.ldif

Then open the file given below and put the following configuration in it.

# vim syncprov.ldif

dn: olcOverlay=syncprov,olcDatabase={2}hdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: syncprov
olcSpSessionLog: 100

Now run the command below again after adding your configuration.

# ldapadd -Y EXTERNAL -H ldapi:/// -f syncprov.ldif

[Image: Add syncprov module]

Now we will configure the replication itself by placing the configuration below into a file on each of your master nodes; these are the most important settings.

But don't forget to change the "olcServerID" and "provider=xxx" values according to each server; set a different value on each server.

# vim ldap01.ldif

# create new
dn: cn=config
changetype: modify
replace: olcServerID
# specify unique ID number on each server
olcServerID: 0

dn: olcDatabase={2}hdb,cn=config
changetype: modify
add: olcSyncRepl
olcSyncRepl: rid=001
# specify your LDAP server's URI
provider=ldap://ldap1.test.com:389/
bindmethod=simple

# your own domain name
binddn="cn=Manager,dc=test,dc=com"
# directory manager's password
credentials=xxxxxx
searchbase="dc=test,dc=com"
# includes subtree
scope=sub
schemachecking=on
type=refreshAndPersist
# [retry interval] [retry times] [interval of re-retry] [re-retry times]
retry="30 5 300 3"
# replication interval
interval=00:00:05:00
-
add: olcMirrorMode
olcMirrorMode: TRUE

dn: olcOverlay=syncprov,olcDatabase={2}hdb,cn=config
changetype: add
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: syncprov

After saving, close the file and run the command below to apply the final configuration.

# ldapmodify -Y EXTERNAL -H ldapi:/// -f ldap01.ldif

SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry "cn=config"

modifying entry "olcDatabase={2}hdb,cn=config"

adding new entry "olcOverlay=syncprov,olcDatabase={2}hdb,cn=config"

That's it. The OpenLDAP multi-master replication setup is complete. Now you can configure your LDAP clients to bind to your LDAP master servers by running the command below on each client server.

# authconfig --ldapserver=ldap1.test.com,ldap2.test.com --update
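To confirm that replication actually works, add an entry on one master and look it up on the other. A sketch, where 'testuser.ldif' is a hypothetical LDIF file describing a user under ou=People,dc=test,dc=com:

On LDAP1.TEST.COM:

# ldapadd -x -D "cn=Manager,dc=test,dc=com" -W -f testuser.ldif

On LDAP2.TEST.COM:

# ldapsearch -x -b "dc=test,dc=com" "(uid=testuser)"

Within the configured replication interval, the entry should appear on the second server.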

Conclusion:

In this article you have learned the basic concepts of OpenLDAP, its installation, and multi-master replication on CentOS 7. OpenLDAP supports a wide variety of replication topologies; the terms "master" and "slave" have been deprecated in favor of provider and consumer: a provider replicates directory updates to consumers, and consumers receive replication updates from providers. Multi-master replication is a replication technique using Syncrepl to replicate data to multiple provider ("master") directory servers, which is best for automatic failover/high availability. In multi-master replication, if any provider fails, the other providers continue to accept updates, avoiding a single point of failure, and providers can be located at several physical sites, i.e. distributed across the network/globe. Thank you for reading; please share your valuable comments and suggestions.

The post How to Setup OpenLDAP Multi-Master Replication on CentOS 7 appeared first on LinOxide.


How to Run Puppet on Container Infrastructure using Docker

Docker is an open source container-based technology. It gives us a workflow around containers that is much easier to use. Docker separates the application from the underlying operating system using container technology, similar to how virtual machines separate the operating system from the underlying hardware.

Docker Container Vs Virtual Machines

A virtual machine includes the applications, necessary binaries and libraries, along with an entire guest operating system, which may weigh in at tens of GBs.

The Docker Engine container, by contrast, comprises just the application and its dependencies. It runs as an isolated process in user space on the host operating system, sharing the kernel with other containers. Thus, it enjoys the resource isolation and allocation benefits of VMs, but is much faster, more portable, scalable and efficient.

Docker Benefits

Scalability: these containers are extremely lightweight, which makes scaling up and down very fast; it is very easy to launch more containers as we need them or shut them down when we no longer need them.

Portability: we can move them very easily. We're going to get into images and registries, but essentially we can take snapshots of our environment, upload them to a public/private registry, and then download those images to make containers from them anywhere.

Deployments: we can run these containers almost anywhere, namely desktops, laptops, virtual machines, public/private clouds, etc.

In this article, I'm explaining how to install Docker on an Ubuntu 16.04 server and run Puppet inside a Docker container.

Installing Docker

It is supported on almost all operating systems. To install Docker on an Ubuntu server, you need a 64-bit architecture and a kernel version of at least 3.10. Let's start with the installation prerequisites.

Pre-requisites

Check the Kernel version and Architecture

We can use these commands to confirm the architecture and kernel version of our OS.

root@ubuntu:~# arch
x86_64
root@ubuntu:~# uname -r
4.4.0-21-generic

The next step is to update the APT repository packages. In addition, we need to ensure that apt can use https and that the required CA certificates are installed. Run the following commands to achieve this.

root@ubuntu:~# apt-get update

root@ubuntu:~# apt-get install apt-transport-https ca-certificates
Reading package lists... Done
Building dependency tree
Reading state information... Done
ca-certificates is already the newest version (20160104ubuntu1).
The following packages will be upgraded:
apt-transport-https
1 upgraded, 0 newly installed, 0 to remove and 54 not upgraded.
Need to get 25.7 kB of archives.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://mirrors.linode.com/ubuntu xenial-updates/main amd64 apt-transport-https amd64 1.2.12~ubuntu16.04.1 [25.7 kB]
Fetched 25.7 kB in 0s (2,540 kB/s)
(Reading database ... 25186 files and directories currently installed.)
Preparing to unpack .../apt-transport-https_1.2.12~ubuntu16.04.1_amd64.deb ...
Unpacking apt-transport-https (1.2.12~ubuntu16.04.1) over (1.2.10ubuntu1) ...
Setting up apt-transport-https (1.2.12~ubuntu16.04.1) ...

Creating Repository file for Docker

Make sure your repository configuration file is properly set to download the packages for Docker.

root@ubuntu:/etc/apt/sources.list.d# cat /etc/apt/sources.list.d/docker.list
deb https://apt.dockerproject.org/repo ubuntu-xenial main
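Without Docker's repository key, apt will complain that the packages cannot be authenticated (you can see exactly such a warning in the install output further below). At the time of writing, Docker's documentation for this repository imported the key as follows; verify the key ID against the current Docker docs before trusting it:

root@ubuntu:~# apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D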

Once it's added, you can update the package lists once more by running "apt-get update"; make sure the updates are fetched from the right repos. Also remove any old docker package if one exists.

root@ubuntu:/etc/apt/sources.list.d# apt-get purge lxc-docker
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package 'lxc-docker' is not installed, so not removed
0 upgraded, 0 newly installed, 0 to remove and 54 not upgrad

root@ubuntu:~# apt-cache policy docker-engine
docker-engine:
Installed: (none)
Candidate: 1.11.2-0~xenial
Version table:
1.11.2-0~xenial 500
500 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages
1.11.1-0~xenial 500
500 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages
1.11.0-0~xenial 500
500 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages

Install Kernel packages

For Ubuntu Xenial 16.04, it's recommended to install the linux-image-extra package matching your running kernel. This package enables the AUFS storage driver. The AUFS storage driver takes multiple directories on a single host and stacks them on top of each other, providing a single unified view.

root@ubuntu:~# apt-get install linux-image-extra-$(uname -r)
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
crda iw libnl-3-200 libnl-genl-3-200 wireless-regdb
The following NEW packages will be installed:
crda iw libnl-3-200 libnl-genl-3-200 linux-image-extra-4.4.0-21-generic wireless-regdb
0 upgraded, 6 newly installed, 0 to remove and 54 not upgraded.
Need to get 39.0 MB of archives.

Installation

Now we can go ahead with the installation of Docker.

root@ubuntu:~# apt-get install docker-engine
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
aufs-tools cgroupfs-mount git git-man liberror-perl libltdl7 libperl5.22 patch perl perl-modules-5.22 rename xz-utils
Suggested packages:
mountall git-daemon-run | git-daemon-sysvinit git-doc git-el git-email git-gui gitk gitweb git-arch git-cvs git-mediawiki git-svn
diffutils-doc perl-doc libterm-readline-gnu-perl | libterm-readline-perl-perl make
The following NEW packages will be installed:
aufs-tools cgroupfs-mount docker-engine git git-man liberror-perl libltdl7 libperl5.22 patch perl perl-modules-5.22 rename xz-utils
0 upgraded, 13 newly installed, 0 to remove and 54 not upgraded.
Need to get 24.8 MB of archives.
After this operation, 139 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
WARNING: The following packages cannot be authenticated!

Start and confirm the Docker status

root@ubuntu:~# service docker start

root@ubuntu:~# docker version
Client:
Version: 1.11.2
API version: 1.23
Go version: go1.5.4
Git commit: b9f10c9
Built: Wed Jun 1 22:00:43 2016
OS/Arch: linux/amd64

Server:
Version: 1.11.2
API version: 1.23
Go version: go1.5.4
Git commit: b9f10c9
Built: Wed Jun 1 22:00:43 2016
OS/Arch: linux/amd64
root@ubuntu:~#

The command below downloads a test image named hello-world from the Docker registry and runs it in a container. When the container runs, it prints an informational message and then exits. This lets us confirm that Docker is working.

root@ubuntu:~# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
4276590986f6: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:a7d7a8c072a36adb60f5dc932dd5caba8831ab53cbf016bcdd6772b3fbe8c362
Status: Downloaded newer image for hello-world:latest

Hello from Docker.
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:
https://hub.docker.com

For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/

Now we're ready to start with Docker. We can download any required image from the Docker Hub using the command docker pull image_name. For instance, let's see how to download some of the useful images.

root@ubuntu:~# docker pull ubuntu
Using default tag: latest
latest: Pulling from library/ubuntu
5ba4f30e5bea: Pull complete
9d7d19c9dc56: Pull complete
ac6ad7efd0f9: Pull complete
e7491a747824: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:46fb5d001b88ad904c5c732b086b596b92cfb4a4840a3abd0e35dbb6870585e4
Status: Downloaded newer image for ubuntu:latest

This has downloaded the Ubuntu image from the Docker Hub, and we can use this image to create an Ubuntu container.

root@ubuntu:~# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu latest 2fa927b5cdd3 11 days ago 122 MB
hello-world latest 94df4f0ce8a4 6 weeks ago 967 B

Creating Puppet inside a Docker container

To create Puppet containers, first we need to download the Puppet images from the Docker Hub.

  • puppet/puppet-agent-ubuntu
  • puppet/puppetserver
  • puppet/puppetdb
  • puppet/puppetdb-postgres

Let's see how I downloaded these images from the Docker Hub. You can use the command docker pull image_name for that.

root@ubuntu:~# docker pull puppet/puppetserver
Using default tag: latest
latest: Pulling from puppet/puppetserver
5ba4f30e5bea: Already exists
9d7d19c9dc56: Already exists
ac6ad7efd0f9: Already exists
e7491a747824: Already exists
a3ed95caeb02: Already exists
158cd0fe54d8: Pull complete
7a15dfe1145a: Pull complete
0bb8d51ae57c: Pull complete
7b09944cb025: Pull complete
6bf96d82eed5: Pull complete
58fa7008c2bc: Pull complete
659b4b2b3359: Pull complete
0e205bb6d03b: Pull complete
915e3853b669: Pull complete
750b3208f97d: Pull complete
8fec247907de: Pull complete
Digest: sha256:c43290ca040a7693d9f41448eab4ff2444c61757aa303bd7979f7f1ef3e4ae95
Status: Downloaded newer image for puppet/puppetserver:latest

root@ubuntu:~# docker pull puppet/puppetdb
Using default tag: latest
latest: Pulling from puppet/puppetdb
0be59000882d: Pull complete
f20b6f990572: Pull complete
53662c966c9f: Pull complete
a3ed95caeb02: Pull complete
5eae59cbe62c: Pull complete
2b8ff6279504: Pull complete
612d7a4576b7: Pull complete
60577ed4c036: Pull complete
f99ad2d50f6f: Pull complete
9da7f43c61dc: Pull complete
e4c4271df64b: Pull complete
Digest: sha256:6532e4e3750183cd6951df6deb7bb1adb1e0e0ed37aa9e1e0294e257d73d9b1f
Status: Downloaded newer image for puppet/puppetdb:latest

root@ubuntu:~# docker pull puppet/puppetdb-postgres
Using default tag: latest
latest: Pulling from puppet/puppetdb-postgres
8b87079b7a06: Pull complete
a3ed95caeb02: Pull complete
ff6abb23e531: Pull complete
8364ca902ad3: Pull complete
84179c1b7ff6: Pull complete
be951654637c: Pull complete
4841dfc8333f: Pull complete
8e92fd62d485: Pull complete
13e5de4be2f2: Pull complete
d6aaf4d83b1c: Pull complete
3113f93aec6d: Pull complete
055e85b433f4: Pull complete
a97f9981bfe1: Pull complete
6c162fdd1104: Pull complete
Digest: sha256:d42428f0ecf75f7a0dbebee79cb45afaebfd193051fa1002e64fa026b2060f13
Status: Downloaded newer image for puppet/puppetdb-postgres:latest

root@ubuntu:~# docker pull puppet/puppet-agent-ubuntu
Using default tag: latest
latest: Pulling from puppet/puppet-agent-ubuntu
0be59000882d: Already exists
f20b6f990572: Already exists
53662c966c9f: Already exists
a3ed95caeb02: Already exists
576aca0f90fb: Pull complete
b1842b47756f: Pull complete
Digest: sha256:1867bcbe733adcbdfa004ec76ce8940a0927eef8877ee4f07b1ace4e68e7c5fa
Status: Downloaded newer image for puppet/puppet-agent-ubuntu:latest

Now we've downloaded all the required images. You can view them by running the docker images command.

root@ubuntu:~# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
puppet/puppetserver latest 0ac3058fad18 4 days ago 379.9 MB
puppet/puppetdb latest f3f9d8b3e54f 6 days ago 368.4 MB
puppet/puppet-agent-ubuntu latest 57fe50639909 6 days ago 202.9 MB
puppet/puppetdb-postgres latest 4f4ed55af431 10 days ago 265.8 MB
ubuntu latest 2fa927b5cdd3 11 days ago 122 MB
hello-world latest 94df4f0ce8a4 6 weeks ago 967 B

Before creating our Puppet containers, we need to create a Docker network to attach them to, as shown below.

root@ubuntu:~# docker network create puppet
e1ebd861dbb39be31da81a88e411e7f4762814ee203b371fca7643a7bb6840eb
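We can verify that the network was created, and later inspect which containers have joined it, with the docker network subcommands:

root@ubuntu:~# docker network ls | grep puppet
root@ubuntu:~# docker network inspect puppet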

Creating Puppet Master server

We can create the Puppet server from the image "puppet/puppetserver", named puppet, in the puppet network, with the hostname "puppet.linoxide".

root@ubuntu:~# docker run --net puppet --name puppet --hostname puppet.linoxide puppet/puppetserver
Warning: The following options to parse-opts are unrecognized: :flag
2016-06-08 09:36:24,348 INFO [o.e.j.u.log] Logging initialized @27125ms
2016-06-08 09:36:36,393 INFO [p.s.v.versioned-code-service] No code-id-command set for versioned-code-service. Code-id will be nil.
2016-06-08 09:36:36,394 INFO [p.s.v.versioned-code-service] No code-content-command set for versioned-code-service. Attempting to fetch code content will fail.
2016-06-08 09:36:36,396 INFO [p.t.s.w.jetty9-service] Initializing web server(s).
2016-06-08 09:36:36,450 INFO [p.s.j.jruby-puppet-service] Initializing the JRuby service
2016-06-08 09:36:36,455 WARN [p.s.j.jruby-puppet-service] The 'jruby-puppet.use-legacy-auth-conf' setting is set to 'true'. Support for the legacy Puppet auth.conf file is deprecated and will be removed in a future release. Change this setting to 'false' and migrate your authorization rule definitions in the /etc/puppetlabs/puppet/auth.conf file to the /etc/puppetlabs/puppetserver/conf.d/auth.conf file.
2016-06-08 09:36:36,535 INFO [p.s.j.jruby-puppet-internal] Creating JRuby instance with id 1.
2016-06-08 09:36:53,825 WARN [puppetserver] Puppet Comparing Symbols to non-Symbol values is deprecated
(file & line not available)
2016-06-08 09:36:54,019 INFO [puppetserver] Puppet Puppet settings initialized; run mode: master
2016-06-08 09:36:56,811 INFO [p.s.j.jruby-puppet-agents] Finished creating JRubyPuppet instance 1 of 1
2016-06-08 09:36:56,849 INFO [p.s.c.puppet-server-config-core] Initializing webserver settings from core Puppet
2016-06-08 09:36:59,780 INFO [p.s.c.certificate-authority-service] CA Service adding a ring handler
2016-06-08 09:36:59,827 INFO [p.s.p.puppet-admin-service] Starting Puppet Admin web app
2016-06-08 09:37:06,473 INFO [p.s.m.master-service] Master Service adding ring handlers
2016-06-08 09:37:06,558 WARN [o.e.j.s.h.ContextHandler] Empty contextPath
2016-06-08 09:37:06,572 INFO [p.t.s.w.jetty9-service] Starting web server(s).
2016-06-08 09:37:06,606 INFO [p.t.s.w.jetty9-core] webserver config overridden for key 'ssl-cert'
2016-06-08 09:37:06,607 INFO [p.t.s.w.jetty9-core] webserver config overridden for key 'ssl-key'
2016-06-08 09:37:06,608 INFO [p.t.s.w.jetty9-core] webserver config overridden for key 'ssl-ca-cert'
2016-06-08 09:37:06,608 INFO [p.t.s.w.jetty9-core] webserver config overridden for key 'ssl-crl-path'
2016-06-08 09:37:07,037 INFO [p.t.s.w.jetty9-core] Starting web server.
2016-06-08 09:37:07,050 INFO [o.e.j.s.Server] jetty-9.2.z-SNAPSHOT
2016-06-08 09:37:07,174 INFO [o.e.j.s.h.ContextHandler] Started o.e.j.s.h.ContextHandler@18ee4ac3{/puppet-ca,null,AVAILABLE}
2016-06-08 09:37:07,175 INFO [o.e.j.s.h.ContextHandler] Started o.e.j.s.h.ContextHandler@4c1434a7{/puppet-admin-api,null,AVAILABLE}
2016-06-08 09:37:07,176 INFO [o.e.j.s.h.ContextHandler] Started o.e.j.s.h.ContextHandler@7eef9da2{/puppet,null,AVAILABLE}
2016-06-08 09:37:07,177 INFO [o.e.j.s.h.ContextHandler] Started o.e.j.s.h.ContextHandler@26ad2d06{/,null,AVAILABLE}
2016-06-08 09:37:07,364 INFO [o.e.j.s.ServerConnector] Started ServerConnector@66b8635c{SSL-HTTP/1.1}{0.0.0.0:8140}
2016-06-08 09:37:07,365 INFO [o.e.j.s.Server] Started @70146ms
2016-06-08 09:37:07,381 INFO [p.s.m.master-service] Puppet Server has successfully started and is now ready to handle requests
2016-06-08 09:37:07,393 INFO [p.s.l.legacy-routes-service] The legacy routing service has successfully started and is now ready to handle requests

Now our Puppet server is created and running.

root@ubuntu:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f4b9f456a4c2 puppet/puppetserver "dumb-init /docker-en" 3 minutes ago Up 3 minutes 8140/tcp puppet

Creating Puppet Client

The command below creates another container as a Puppet client, with the hostname puppet-client-linoxide, from the Docker image puppet/puppet-agent-ubuntu. Alternatively, you can simply run docker run --net puppet puppet/puppet-agent-ubuntu to build one; that form runs the agent with a onetime flag, which means Puppet exits after the first run.

root@ubuntu:~# docker run --net puppet --name puppet-client --hostname puppet-client-linoxide puppet/puppet-agent-ubuntu agent --verbose --no-daemonize --summarize
Info: Creating a new SSL key for puppet-client-linoxide.members.linode.com
Info: Caching certificate for ca
Info: csr_attributes file loading from /etc/puppetlabs/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for puppet-client-linoxide.members.linode.com
Info: Certificate Request fingerprint (SHA256): 62:E2:37:8A:6E:0D:18:AC:81:0F:F1:3E:D6:08:10:29:D4:D6:21:16:59:B7:6D:3F:AA:5C:7A:08:38:B6:6B:07
Info: Caching certificate for puppet-client-linoxide.members.linode.com
Info: Caching certificate_revocation_list for ca
Info: Caching certificate for ca
Notice: Starting Puppet client version 4.5.1
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for puppet-client-linoxide.members.linode.com
Info: Applying configuration version '1465378896'
Info: Creating state file /opt/puppetlabs/puppet/cache/state/state.yaml
Notice: Applied catalog in 0.01 seconds
Changes:
Events:
Resources:
Total: 7
Time:
Schedule: 0.00
Config retrieval: 1.55
Total: 1.56
Last run: 1465378896
Filebucket: 0.00
Version:
Config: 1465378896
Puppet: 4.5.1

If you use the command above instead, the container won't exit; it stays online and runs Puppet every 30 minutes against the latest content from the Puppet server. Now we have our Puppet server and client running in Docker.

root@ubuntu:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5f29866a103b puppet/puppet-agent-ubuntu "/opt/puppetlabs/bin/" 8 minutes ago Up 8 minutes puppet-client
f4b9f456a4c2 puppet/puppetserver "dumb-init /docker-en" 13 minutes ago Up 13 minutes 8140/tcp puppet


Creating PuppetDB

We can run a PuppetDB server in a Docker container. In order to run PuppetDB, we need a PostgreSQL server running (PuppetDB supports only PostgreSQL). This can be another container instance, an RDS (Relational Database Service) endpoint, or a physical DB somewhere. In addition, it requires a Puppet master to be running: to use SSL certs during initialization, you will need at least a token Puppet master running that the container can connect to in order to initialize the certs.

root@ubuntu:~# git clone https://github.com/tizzo/docker-puppetdb.git
Cloning into 'docker-puppetdb'...
remote: Counting objects: 12, done.
remote: Compressing objects: 100% (9/9), done.
remote: Total 12 (delta 3), reused 12 (delta 3), pack-reused 0
Unpacking objects: 100% (12/12), done.
Checking connectivity... done.

root@ubuntu:~# cd docker-puppetdb/

Create a Dockerfile compatible with Ubuntu 16.04. With my Dockerfile in place, I ran the docker build.

root@ubuntu:~/docker-puppetdb# docker build .
Sending build context to Docker daemon 68.1 kB
Step 1 : FROM ubuntu:16.04
16.04: Pulling from library/ubuntu
5ba4f30e5bea: Already exists
9d7d19c9dc56: Already exists
ac6ad7efd0f9: Already exists
e7491a747824: Already exists
a3ed95caeb02: Already exists
Digest: sha256:f5edf3b741a08b573eca6bf25257847613540538a17b86e2b76e14724a0be68a
Status: Downloaded newer image for ubuntu:16.04
---> 2fa927b5cdd3
Step 2 : MAINTAINER Gareth Rushgrove "gareth@puppet.com"
---> Running in 555edbbd1017
---> a3d4cea623ac
Removing intermediate container 555edbbd1017
Step 3 : ENV PUPPETDB_VERSION "4.1.0" PUPPET_AGENT_VERSION "1.5.1" DUMB_INIT_VERSION "1.0.2" UBUNTU_CODENAME "xenial" PUPPETDB_USER puppetdb PUPPETDB_PASSWORD puppetdb PUPPETDB_JAVA_ARGS "-Djava.net.preferIPv4Stack=true -Xms256m -Xmx256m" PATH /opt/puppetlabs/server/bin:/opt/puppetlabs/puppet/bin:/opt/puppetlabs/bin:$PATH
---> Running in 4cb8a8220b1c

Once this is completed we can create our PuppetDB container.

root@ubuntu:~# docker run --net puppet --name puppetdb-postgres -e POSTGRES_PASSWORD=puppetdb -e POSTGRES_USER=puppetdb -d postgres
855a6b13fefa4123d5e16cdde84ebc7174ba149e66699e4c94c14e8fbfcac22f

root@ubuntu:~# docker run --net puppet -d -P --name puppetdb --link puppetdb-postgres:postgres puppet/puppetdb
bfe56b64bd980d20570374ed8204136303d82de8cbf1a4279c2f2fd25a798f59

All our containers are now running; we can confirm their status with the docker ps command, as shown below:

[Image: puppetcontainers]

We can access our PuppetDB dashboard at the URL http://Docker-Server-IP:32771
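Note that since the PuppetDB container was started with -P, Docker publishes its web port (8080 inside the container, by default) on a random high host port, so the port on your machine may differ from 32771. You can look up the actual mapping with:

root@ubuntu:~# docker port puppetdb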

[Image: PuppetDB: Dashboard]

Hurray! This is how we can make Puppet run on container infrastructure inside Docker. I hope you enjoyed reading this article, and I welcome your valuable comments and suggestions.

Thank you! Have a Nice Day :)

The post How to Run Puppet on Container Infrastructure using Docker appeared first on LinOxide.


How to Run Single-Node Cassandra Cluster in Ubuntu 16.04

Apache Cassandra is an open source distributed, high-performance, extremely scalable and fault-tolerant post-relational database solution. It can serve both as a real-time data store for online/transactional applications and as a read-intensive database for business intelligence systems.

Relational DB Vs Cassandra

Relational database systems handle moderate incoming data velocity and fetch data from one or a few locations. They manage primarily structured data and support complex/nested transactions, with single points of failure mitigated by failover.

Cassandra handles high incoming data velocity, fetching data from many locations. It manages all data types and supports simple transactions, with no single points of failure; it provides continuous uptime. In addition, it provides read/write scalability.

In this article, I'm providing the guidelines on how I installed Apache Cassandra and ran a single-node cluster on my Ubuntu 16.04 server.

 Pre-requisites

  • It requires a Java Platform to run
  • A user to run this application

Install Java

Cassandra needs Java to run on your server, so make sure you have the latest Java version installed. You can update the APT repository package index and then install Java. Cassandra 3.0 and later require Java 8 or newer.

root@ubuntu:~# apt-get update

root@ubuntu:~# apt-get install default-jdk
Setting up default-jdk (2:1.8-56ubuntu2) ...
Setting up gconf-service-backend (3.2.6-3ubuntu6) ...
Setting up gconf2 (3.2.6-3ubuntu6) ...
Setting up libgnomevfs2-common (1:2.24.4-6.1ubuntu1) ...
Setting up libgnomevfs2-0:amd64 (1:2.24.4-6.1ubuntu1) ...
Setting up libgnome2-common (2.32.1-5ubuntu1) ...
Setting up libgnome-2-0:amd64 (2.32.1-5ubuntu1) ...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for systemd (229-4ubuntu4) ...
Processing triggers for ca-certificates (20160104ubuntu1) ...
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...

done.
done.

You can confirm the Java version installed.

root@ubuntu:~# java -version
openjdk version "1.8.0_91"
OpenJDK Runtime Environment (build 1.8.0_91-8u91-b14-0ubuntu4~16.04.1-b14)
OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)

Creating a user to run Cassandra

It is always recommended to run this application as a dedicated user instead of root. Hence, I created a cassandra user to run the application.

root@ubuntu:~# groupadd cassandra
root@ubuntu:~# useradd -d /home/cassandra -s /bin/bash -m -g cassandra cassandra

root@ubuntu:~# grep cassandra /etc/passwd
cassandra:x:1000:1000::/home/cassandra:/bin/bash

Download and Install Cassandra

Now we can download the latest Apache Cassandra from here and copy it to your preferred directory. I downloaded the tar file to my /tmp folder and extracted its contents into the cassandra user's home directory.

root@ubuntu:/tmp# wget http://mirror.cc.columbia.edu/pub/software/apache/cassandra/3.6/apache-cassandra-3.6-bin.tar.gz
--2016-06-12 08:36:47-- http://mirror.cc.columbia.edu/pub/software/apache/cassandra/3.6/apache-cassandra-3.6-bin.tar.gz
Resolving mirror.cc.columbia.edu (mirror.cc.columbia.edu)... 128.59.59.71
Connecting to mirror.cc.columbia.edu (mirror.cc.columbia.edu)|128.59.59.71|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 35552323 (34M) [application/x-gzip]
Saving to: ‘apache-cassandra-3.6-bin.tar.gz’

apache-cassandra-3.6-bin.tar.gz 100%[===================================================================>] 33.91M 6.43MB/s in 12s

2016-06-12 08:37:01 (2.93 MB/s) - ‘apache-cassandra-3.6-bin.tar.gz’ saved [35552323/35552323]

root@ubuntu:/tmp# tar -xvf apache-cassandra-3.6-bin.tar.gz -C /home/cassandra --strip-components=1

Correcting the ownerships and setting variables

You can correct the ownership of the files and set the proper environment variables to run this application smoothly.

root@ubuntu:/home/cassandra# export CASSANDRA_HOME=/home/cassandra
root@ubuntu:/home/cassandra# export PATH=$PATH:$CASSANDRA_HOME/bin
root@ubuntu:/home/cassandra# chown -R cassandra.cassandra .
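Note that these exports last only for the current shell session. To make them persistent for the cassandra user, one option is to append them to that user's ~/.profile (a small sketch):

root@ubuntu:/home/cassandra# echo 'export CASSANDRA_HOME=/home/cassandra' >> /home/cassandra/.profile
root@ubuntu:/home/cassandra# echo 'export PATH=$PATH:$CASSANDRA_HOME/bin' >> /home/cassandra/.profile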

Now you can switch to the cassandra user and run this application as below:

cassandra@ubuntu:~$ sh bin/cassandra

INFO 09:10:39 Cassandra version: 3.6
INFO 09:10:39 Thrift API version: 20.1.0
INFO 09:10:39 CQL supported versions: 3.4.2 (default: 3.4.2)
INFO 09:10:39 Initializing index summary manager with a memory pool size of 24 MB and a resize interval of 60 minutes
INFO 09:10:39 Starting Messaging Service on localhost/127.0.0.1:7000 (lo)
INFO 09:10:39 Loading persisted ring state
INFO 09:10:39 Starting up server gossip
INFO 09:10:39 Updating topology for localhost/127.0.0.1
INFO 09:10:39 Updating topology for localhost/127.0.0.1
INFO 09:10:39 Node localhost/127.0.0.1 state jump to NORMAL

This output means your Cassandra server is up and running fine. Now we can check and confirm the status of our cluster with this command:

root@ubuntu:/home/cassandra# nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 127.0.0.1 142.65 KiB 256 100.0% fc76be14-acde-47d4-a4a2-5d015804bb3c rack1

The status and state notation UN means it is up and normal.

We are done installing a single-node Cassandra cluster. Now let's see how to connect to it.

Connecting to our Cluster

We can execute the shell script "cqlsh" to connect to our cluster node.

[Image: cassandra1]
These are the various CQL commands used in Cassandra. You can get more information on how to use them here.
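For instance, a minimal cqlsh session could look like the following; the keyspace and table names here are only illustrative examples, not part of the default install:

cassandra@ubuntu:~$ bin/cqlsh
cqlsh> CREATE KEYSPACE linoxide WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
cqlsh> USE linoxide;
cqlsh:linoxide> CREATE TABLE users (id int PRIMARY KEY, name text);
cqlsh:linoxide> INSERT INTO users (id, name) VALUES (1, 'blogger');
cqlsh:linoxide> SELECT * FROM users;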

Howdy! We're done with a single-node Cassandra cluster on our Ubuntu 16.04 server. I hope you enjoyed reading this, and I welcome your valuable comments and suggestions.

Thank you!

The post How to Run Single-Node Cassandra Cluster in Ubuntu 16.04 appeared first on LinOxide.


How to Install NodeJS in Ubuntu 16.04 LTS Xenial

NodeJS is a free and open source JavaScript runtime built on Chrome's V8 JavaScript engine, designed to build scalable network applications. It allows the use of JavaScript in server-side programming, with the ability to interact with the operating system and networking. It uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices. Here in this article, we'll learn how to install the latest stable NodeJS on a machine running Ubuntu 16.04 LTS Xenial. There are several methods we can use to install NodeJS; the following are the ones we'll feature in this article.

  • Installing using the Official Repository
  • Installing using the Github Source Code Clone
  • Installing using Node Version Manager (NVM)

Installing using the Official Repository

First of all, as NodeJS is available in the official repository of Ubuntu 16.04 LTS Xenial, we can easily install it using the repository. In order to do so, we'll first need to update the local repository index of our apt package manager.

$ sudo apt-get update

Once the update is completed, we'll move ahead and run the following command to upgrade our system which will upgrade the packages to the latest available versions.

$ sudo apt-get upgrade

Then, we'll install nodejs using the apt-get command. Doing so will automatically install the node package manager which comes by default with nodejs. NPM allows us to install node packages from the Node Package Manager Repository.

$ sudo apt-get install nodejs nodejs-legacy

[Image: Installing Nodejs Nodejs-legacy]

Once done, we'll be able to install and run our node applications successfully.
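To verify the repository install, we can check the installed versions; the exact numbers depend on what Ubuntu's repository currently ships, and if npm was not pulled in automatically it can be added with sudo apt-get install npm:

$ node -v
$ npm -v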

Installing using the Github Source Code Clone

If we want to install nodejs from the latest GitHub source code, we'll need to follow this method of installation.

First of all, we'll need to make sure that the dependencies required for compiling NodeJS are installed on our Ubuntu 16.04 machine. To install them, we'll first update the local repository index of the apt package manager.

$ sudo apt-get update

Once done, we'll now install the required dependencies from the repository.

$ sudo apt-get install make gcc g++ python

[Image: install dependencies git clone]

We'll now need to download an official release of nodejs from its official GitHub repository. To do so, we'll run the following wget command against the respective release of nodejs.

$ wget https://github.com/nodejs/node/archive/v6.2.1.tar.gz

Once the tarball is downloaded, we'll extract it using the following command.

$ tar zxvf v6.2.1.tar.gz

Then, we'll move ahead and configure the build by running the following commands.

$ cd node-6.2.1
$  ./configure

[Image: Configuring Nodejs]
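Optionally, before installing we can compile as a regular user; the -j flag is just a common way to parallelize the build across CPU cores:

$ make -j$(nproc)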

Once it's compiled successfully, we'll install it on our machine running Ubuntu 16.04 LTS.

$ sudo make install

[Image: Make Install Nodejs]

The installation process may take quite some time depending on the performance of the machine.

Installing using Node Version Manager (NVM)

The Node Version Manager, also known as NVM, is a version-managing script for nodejs that allows us to run multiple versions of Node.js on the same machine. In order to install NVM, we'll need curl, libssl-dev and build-essential installed. To do so, we'll run the following commands.

$ sudo apt-get update
$ sudo apt-get install build-essential libssl-dev curl

[Image: Installing NVM Dependencies]

Once the dependencies are installed, we'll now install the latest release of NVM from its Github repository using curl.

$ curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.1/install.sh | bash

[Image: Installing NVM]

Now, in order to gain access to the NVM functions and binaries, we'll need to source the ~/.profile file, as the installer has appended the required settings to it.

$ source ~/.profile

In order to apply the changes, we'll need to log out and log back in to the session.

Once done, we'll move ahead with installing the latest nodejs on our machine using NVM. First, we can list all the available versions of nodejs by executing the following command.

$ nvm ls-remote

[Image: Node Versions NVM]

Then, we'll need to run the following command to install it.

$ nvm install v6.2.1

[Image: Installing Node with NVM]

Once it's installed, we can switch the version of nodejs in use by simply running the following command.

$ nvm use v6.2.1
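Optionally, to make this version the default for every new shell session, we can also set an nvm alias:

$ nvm alias default v6.2.1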

Testing NodeJS Installation

As we have completed the installation of NodeJS using the steps above, we should now be able to check the version of nodejs by running the following command.

$ node -v

v6.2.1

Now, we'll create a simple nodejs app printing our all-time favorite "Hello World" statement. We'll create a file named hello.js using a text editor.

$ nano hello.js

Then, we'll write the following javascript code to the hello.js file.

// Declare the variables explicitly instead of relying on implicit globals
const a = "Linoxide";
const b = "by";
const c = 100;
const d = 116;
console.log(c + d);                          // prints 216
console.log('Hello World! ' + a + ' ' + b);  // prints: Hello World! Linoxide by

[Image: Hello World Nodejs]

Once done, we'll save the file and execute it using nodejs.

$ node hello.js

[Image: Hello World Output Nodejs]

Conclusion

Finally, we have successfully installed the latest stable NodeJS on our machine running Ubuntu 16.04 LTS Xenial. This tutorial should work fine on almost all derivatives of Ubuntu and Debian. NodeJS has completely changed the way we run JavaScript: it has made JavaScript available outside the web browser, from servers to home desktop applications. With the installation complete, we can now run our various nodejs applications. If you have any questions, comments or feedback, please write in the comment box below and let us know what needs to be added or improved.

The post How to Install NodeJS in Ubuntu 16.04 LTS Xenial appeared first on LinOxide.


How To Install and Setup RabbitMQ on Ubuntu 16.04

RabbitMQ is an open source message broker software that implements the Advanced Message Queuing Protocol (AMQP), the emerging standard for high-performance enterprise messaging. It is one of the most popular message broker solutions on the market, offered under an open-source license (Mozilla Public License v1.1) as an implementation of AMQP developed in the Erlang language, and it is relatively easy to get started with. The RabbitMQ server is a robust and scalable implementation of an AMQP broker. AMQP is a widely accepted open-source standard for distributing and transferring messages from a source to a destination. As a protocol and standard, it sets a common ground for various applications and message broker middlewares to interoperate without encountering issues caused by individual design decisions.

RabbitMQ Server concepts:

Following are some important concepts that we need to define before we start the RabbitMQ installation. The default virtual host, the default user, and the default permissions are used in the examples that follow, but it is still good to have a feel for what they are.

Producer: Application that sends the messages.
Consumer: Application that receives the messages.
Queue: Buffer that stores messages.
Message: Information that is sent from the producer to a consumer through RabbitMQ.
Connection: A connection is a TCP connection between your application and the RabbitMQ broker.
Channel: A channel is a virtual connection inside a connection. Whether you are publishing or consuming messages or subscribing to a queue, it is all done over a channel.
Exchange: Receives messages from producers and pushes them to queues depending on rules defined by the exchange type. In order to receive messages, a queue needs to be bound to at least one exchange.
Binding: A binding is a link between a queue and an exchange.
Routing key: The routing key is a key that the exchange looks at to decide how to route the message to queues. The routing key is like an address for the message.
Virtual host: A virtual host provides a way to segregate applications using the same RabbitMQ instance. Different users can have different access privileges to different vhosts, and queues and exchanges can be created so they only exist in one vhost.

Prerequisites:

Our first step is to make sure that all system packages are up to date by running the following apt-get commands in the command-line terminal.

# apt-get update

# apt-get upgrade

[Image: System Update]

After the system update, we need to install RabbitMQ's main dependency, Erlang. Let's use the command below to get Erlang on our Ubuntu 16.04 server.

# apt-get install -y erlang

[Image: install erlang]

Installing RabbitMQ Server on Ubuntu 16.04

Installing the rabbitmq-server package on Ubuntu 16.04 is simple. Just run the command below and type the 'Y' key to continue installing the RabbitMQ server package along with its required dependencies.

# apt-get install rabbitmq-server

[Image: Install RabbitMQ server]

Starting RabbitMQ Services:

The RabbitMQ server has been installed on Ubuntu 16.04; now run the commands below to start the RabbitMQ server, check its status, and enable its service to start automatically after each reboot.

# systemctl enable rabbitmq-server

# systemctl start rabbitmq-server

# systemctl status rabbitmq-server
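Besides systemctl, we can also ask the broker itself for a detailed status report (node name, memory usage, listeners and so on):

# rabbitmqctl status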

[Image: RabbitMQ service status]

Enabling RabbitMQ Management console

The RabbitMQ server is up and running; now we are going to show you how to set up its web management console using the rabbitmq-management plugin. The rabbitmq-management plugin allows you to manage and monitor your RabbitMQ server in a variety of ways, such as listing and deleting exchanges, queues, bindings and much more.

Let's run the command below to enable this plugin on your Ubuntu 16.04 server.

# rabbitmq-plugins enable rabbitmq_management

The rabbitmq_management plugin is a combination of the following plugins, all of which will be enabled after executing the above command.

  • mochiweb
  • webmachine
  • rabbitmq_web_dispatch
  • amqp_client
  • rabbitmq_management_agent
  • rabbitmq_management

[Image: RabbitMQ console plugins]

Now we can access the RabbitMQ management console from our web browser; it is available on HTTP port 15672 by default. You can also create a new admin user using the commands below.

# rabbitmqctl add_user radmin radmin
# rabbitmqctl set_user_tags radmin administrator
# rabbitmqctl set_permissions -p / radmin ".*" ".*" ".*"
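Since the default 'guest' account and its password are well known, you may also want to delete it once your new admin user works (an optional hardening step):

# rabbitmqctl delete_user guest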

[Image: adding rabbitmq user]

Now open the URL below with the default port and log in with your newly created user and password. You can also use the default 'guest' username and 'guest' password to log in.

http://your_servers_ip:15672/

[Image: RabbitMQ Web login]

Using RabbitMQ Web Console:

Welcome to the RabbitMQ web management console; after providing the right login details, you can manage your RabbitMQ server from the web.

[Image: RabbitMQ Web Console]
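Beyond the browser, the management plugin also exposes a small CLI called rabbitmqadmin, downloadable from http://your_servers_ip:15672/cli/. As a rough sketch (assuming you have saved it somewhere on your PATH and made it executable), you can declare a test queue, publish a message to it, and fetch the message back:

# rabbitmqadmin -u radmin -p radmin declare queue name=test durable=false
# rabbitmqadmin -u radmin -p radmin publish routing_key=test payload="hello from linoxide"
# rabbitmqadmin -u radmin -p radmin get queue=test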

Conclusion:

Running rabbitmq-server in the foreground displays a banner message and reports progress in the startup sequence, concluding with the message "broker running", indicating that the RabbitMQ broker has been started successfully. RabbitMQ is a fully-fledged application stack (i.e. a message broker) that gives you all the tools you need to work with, instead of acting like a framework for you to implement your own. I hope you find this article helpful and interesting. Do not forget to share it with your friends.

The post How To Install and Setup RabbitMQ on Ubuntu 16.04 appeared first on LinOxide.


How to Setup Postfix Mail Server on Ubuntu 16.04 (Dovecot - MySQL)

Generally, all mail servers consist of three main components: MTA, MDA and MUA. Each component plays a specific role in the process of moving and managing email messages and is important for ensuring proper email delivery. Hence, setting up a mail server is an involved process requiring the proper configuration of each of these components. The best way is to install and configure each individual component one by one, ensuring each one works and gradually building up your mail server.

In this article, I'm providing guidelines on how to configure a mail server on an Ubuntu 16.04 server with Postfix (MTA) and Dovecot (MDA), using an external database (MySQL) for managing virtual users. First of all, let's start with the prerequisites for building our mail server.

Pre-requisites

  • MySQL installed Server
  • A Fully qualified hostname
  • Domain resolving to your server

After fulfilling our prerequisites, we can start building our mail server step by step.

Installing Packages

First of all, we need to update our APT package index and then install the required Postfix and Dovecot packages.

root@ubuntu:~# apt-get install postfix postfix-mysql dovecot-core dovecot-imapd dovecot-lmtpd dovecot-mysql

[Image: postfix1]

[Image: postfix2]

During the Postfix installation, a setup window will pop up for the initial configuration. We need to choose "Internet Site" and set an FQDN as our system mail name during the installation phase. This proceeds with the installation of the required packages as below.

Postfix is now set up with a default configuration. If you need to make
changes, edit
/etc/postfix/main.cf (and others) as needed. To view Postfix configuration
values, see postconf(1).

After modifying main.cf, be sure to run '/etc/init.d/postfix reload'.

Running newaliases
Setting up postfix-mysql (3.1.0-3) ...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for systemd (229-4ubuntu4) ...
Processing triggers for ufw (0.35-0ubuntu2) ...
Processing triggers for dovecot-core (1:2.2.22-1ubuntu2) ..

Create a Database for managing the mail users

The next step is to create a database for managing the email users and domains on our mail server. As mentioned before, we're managing the email users with this MySQL database. If MySQL is not installed, we can install it by running apt-get install mysql-server-5.7.

We are going to create a database named "lnmailserver" with three tables as below:

  • Virtual domains : For managing domains
  • Virtual users : For managing email users
  • Virtual Alias : For setting up Aliases

Let's create our databases with all these tables.

  • Creating a database named lnmailserver.

mysql> CREATE DATABASE lnmailserver;
Query OK, 1 row affected (0.00 sec)

  • Creating a DB user lnmailuser and granting access to this database with a password.

mysql> GRANT SELECT ON lnmailserver.* TO 'lnmailuser'@'127.0.0.1' IDENTIFIED BY 'lnmail123';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.01 sec)

  • Switching to the database lnmailserver and creating our three tables, namely virtual_domains, virtual_users and virtual_aliases, with the following column specifications and table format.

mysql> USE lnmailserver;
Database changed
mysql> CREATE TABLE `virtual_domains` (
-> `id` INT NOT NULL AUTO_INCREMENT,
-> `name` VARCHAR(50) NOT NULL,
-> PRIMARY KEY (`id`)
-> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.01 sec)

mysql> CREATE TABLE `virtual_users` (
-> `id` INT NOT NULL AUTO_INCREMENT,
-> `domain_id` INT NOT NULL,
-> `password` VARCHAR(106) NOT NULL,
-> `email` VARCHAR(120) NOT NULL,
-> PRIMARY KEY (`id`),
-> UNIQUE KEY `email` (`email`),
-> FOREIGN KEY (domain_id) REFERENCES virtual_domains(id) ON DELETE CASCADE
-> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.03 sec)

mysql> CREATE TABLE `virtual_aliases` (
-> `id` INT NOT NULL AUTO_INCREMENT,
-> `domain_id` INT NOT NULL,
-> `source` varchar(100) NOT NULL,
-> `destination` varchar(100) NOT NULL,
-> PRIMARY KEY (`id`),
-> FOREIGN KEY (domain_id) REFERENCES virtual_domains(id) ON DELETE CASCADE
-> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.02 sec)

  • Adding the domains, users and aliases to each of these tables according to our requirements.

mysql> INSERT INTO `lnmailserver`.`virtual_domains`
-> (`id` ,`name`)
-> VALUES
-> ('1', 'linoxidemail.com'),
-> ('2', 'ubuntu.linoxidemail.com');
Query OK, 2 rows affected (0.00 sec)
Records: 2 Duplicates: 0 Warnings: 0

mysql> INSERT INTO `lnmailserver`.`virtual_users`
-> (`id`, `domain_id`, `password` , `email`)
-> VALUES
-> ('1', '1', ENCRYPT('blogger123', CONCAT('$6$', SUBSTRING(SHA(RAND()), -16))), 'blogger1@linoxidemail.com'),
-> ('2', '1', ENCRYPT('blogger321', CONCAT('$6$', SUBSTRING(SHA(RAND()), -16))), 'blogger2@linoxidemail.com');
Query OK, 2 rows affected, 2 warnings (0.01 sec)
Records: 2 Duplicates: 0 Warnings: 2

mysql> INSERT INTO `lnmailserver`.`virtual_aliases`
-> (`id`, `domain_id`, `source`, `destination`)
-> VALUES
-> ('1', '1', 'info@linoxidemail.com', 'blogger1@linoxidemail.com');
Query OK, 1 row affected (0.00 sec)

  • Verifying each table's contents

mysql> select * from virtual_domains;
+----+-------------------------+
| id | name |
+----+-------------------------+
| 1 | linoxidemail.com |
| 2 | ubuntu.linoxidemail.com |
+----+-------------------------+
2 rows in set (0.00 sec)

mysql> select * from virtual_users;
+----+-----------+------------------------------------------------------------------------------------------------------------+---------------------------+
| id | domain_id | password | email |
+----+-----------+------------------------------------------------------------------------------------------------------------+---------------------------+
| 1 | 1 | $6$da4aa6fc680940d4$jt1plE8Lvo4hcjdP3N0pNxSC/o1ZsN4mpJ4WCcwk2mSqyY7/2l4ayyI7GcipeTf0uwzk5HnWbjddvv/jGomh41 | blogger1@linoxidemail.com |
| 2 | 1 | $6$36d2dc2e68ab56f6$L2b/D44yuT7qXsw22kTFPfxTbEbUuRDhr0RDoBnRc/q/LGcRF3NsLQCyapXdYKyA2zkSE9MJIXL7nHAbbCmlO. | blogger2@linoxidemail.com |
+----+-----------+------------------------------------------------------------------------------------------------------------+---------------------------+
2 rows in set (0.00 sec)

mysql> select * from virtual_aliases;
+----+-----------+-----------------------+---------------------------+
| id | domain_id | source | destination |
+----+-----------+-----------------------+---------------------------+
| 1 | 1 | info@linoxidemail.com | blogger1@linoxidemail.com |
+----+-----------+-----------------------+---------------------------+
1 row in set (0.00 sec)

mysql> exit

Configuring Postfix

Our next step is to modify the Postfix configuration according to our plan for how we accept SMTP connections. Before making any changes to the configuration, it is always advisable to take a backup of the file.

root@ubuntu:~# cp -rp /etc/postfix/main.cf /etc/postfix/main.cf-bkp

Now we can open up the file and make the following changes.

  • Modify the following entries to enable TLS support for connecting users, and specify the SSL certificate used to secure the connection.

This section is modified from:

#smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
#smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
#smtpd_use_tls=yes
#smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
#smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

To :

smtpd_tls_cert_file=/etc/ssl/certs/dovecot.pem
smtpd_tls_key_file=/etc/ssl/private/dovecot.key
smtpd_use_tls = yes
smtpd_tls_auth_only = yes

I'm using the free Dovecot SSL certificate paths specified here. We can generate a Dovecot self-signed SSL certificate with the command below; note that the output paths must match the smtpd_tls_cert_file and smtpd_tls_key_file settings above. If you have a valid SSL certificate for your hostname, you can specify that instead.

openssl req -new -x509 -days 1000 -nodes -out "/etc/ssl/certs/dovecot.pem" -keyout "/etc/ssl/private/dovecot.key"

  • We need to add these TLS parameters to the Postfix configuration, which make Postfix use Dovecot for authentication and for initializing connections.

smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes
smtpd_recipient_restrictions = permit_sasl_authenticated permit_mynetworks reject_unauth_destination

  • We need to comment out the default "mydestination" entries and update the setting to use "localhost" alone.

mydestination = localhost

  • Confirm that the myhostname parameter is set properly to our FQDN hostname.

root@ubuntu:~# grep myhostname /etc/postfix/main.cf
smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
myhostname = ubuntu.linoxidemail.com

  • Modifying this parameter makes Postfix use Dovecot's LMTP instead of its own LDA to save emails to the local mailboxes, thereby enabling local mail delivery for all the domains listed in the MySQL database.

    virtual_transport = lmtp:unix:private/dovecot-lmtp

  • Last but not least, we need to tell Postfix that we're using an external database to manage the domains, users and aliases. We need to add the configuration paths used to fetch these details from the database tables.

virtual_mailbox_domains = mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf
virtual_mailbox_maps = mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf
virtual_alias_maps = mysql:/etc/postfix/mysql-virtual-alias-maps.cf

Now we need to create the files mentioned above one by one. Please see my file contents below:

/etc/postfix/mysql-virtual-mailbox-domains.cf

root@ubuntu:~# cat /etc/postfix/mysql-virtual-mailbox-domains.cf
user = lnmailuser
password = lnmail123
hosts = 127.0.0.1
dbname = lnmailserver
query = SELECT 1 FROM virtual_domains WHERE name='%s'
root@ubuntu:~#

/etc/postfix/mysql-virtual-mailbox-maps.cf

root@ubuntu:~# cat /etc/postfix/mysql-virtual-mailbox-maps.cf
user = lnmailuser
password = lnmail123
hosts = 127.0.0.1
dbname = lnmailserver
query = SELECT 1 FROM virtual_users WHERE email='%s'
root@ubuntu:~#

/etc/postfix/mysql-virtual-alias-maps.cf

root@ubuntu:~# cat /etc/postfix/mysql-virtual-alias-maps.cf
user = lnmailuser
password = lnmail123
hosts = 127.0.0.1
dbname = lnmailserver
query = SELECT destination FROM virtual_aliases WHERE source='%s'

These files describe how Postfix connects to the external database. We need to restart Postfix after making these changes.

root@ubuntu:~# service postfix restart
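Before testing the database lookups, we can also do a quick sanity check that our non-default settings were picked up; postconf -n prints only explicitly set parameters:

root@ubuntu:~# postconf -n | grep -E 'smtpd_sasl|virtual_'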

We can run the following commands to confirm connectivity and check whether Postfix is able to fetch the required information from the database.

  • To check whether Postfix finds your domain in the database, we can run this. It should return '1' if the attempt is successful.

root@ubuntu:/etc/ssl/certs# postmap -q linoxidemail.com mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf
1

  • To check whether Postfix finds the required email address in the database, we can run this. It should also return '1' if successful.

root@ubuntu:/etc/ssl/certs# postmap -q blogger1@linoxidemail.com mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf
1

  • To check whether Postfix finds your email forwarder in the database, we can run this. It should return the forwarder's destination if the attempt is successful.

root@ubuntu:/etc/ssl/certs# postmap -q info@linoxidemail.com mysql:/etc/postfix/mysql-virtual-alias-maps.cf
blogger1@linoxidemail.com

Please note: you can let email clients connect securely using Postfix on the submission port 587; you can open the port by uncommenting the corresponding part of the Postfix master configuration, /etc/postfix/master.cf.

[Image: postfixmaster]

You need to restart Postfix after making any changes to the configuration. Using the telnet command, you can confirm whether the port is open.
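For example, a quick check from the server itself might look like this (the session can be closed with the QUIT command):

root@ubuntu:~# telnet localhost 587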

Configuring Dovecot

Our next step is to configure our MDA to allow the POP3 and/or IMAP protocols and to connect to the external database and Postfix. We are mainly modifying the following files.

/etc/dovecot/dovecot.conf
/etc/dovecot/conf.d/10-mail.conf
/etc/dovecot/conf.d/10-auth.conf
/etc/dovecot/conf.d/auth-sql.conf.ext
/etc/dovecot/dovecot-sql.conf.ext
/etc/dovecot/conf.d/10-master.conf
/etc/dovecot/conf.d/10-ssl.conf

It's always advisable to back up these files before making any configuration changes. We can modify each file one by one.

Modifying the dovecot main configuration file : /etc/dovecot/dovecot.conf

  • The following setting is uncommented by default, but we need to ensure that it stays uncommented.

!include conf.d/*.conf

  • We can enable all the required protocols in this directive. If you need POP3, append pop3 to this line and also make sure to install the required dovecot package "dovecot-pop3d" to enable it.

!include_try /usr/share/dovecot/protocols.d/*.protocol
protocols = imap lmtp

Modifying the Dovecot Mail configuration file : /etc/dovecot/conf.d/10-mail.conf

  • We need to find the "mail_location" parameter in the configuration and update it with our mail storage path. I have my mail folders located inside the "/var/mail/vhosts/" folder, so I modified the path as below:

mail_location = maildir:/var/mail/vhosts/%d/%n

  • We need to set the "mail_privileged_group" parameter to "mail".

mail_privileged_group = mail

Once this is done, we need to make sure we've set proper ownership and permissions on our mail folders. Create a mail folder inside "/var/mail/vhosts" for each domain registered in the MySQL table, and set the proper ownership/permissions.

root@ubuntu:~# ls -ld /var/mail
drwxrwsr-x 2 root mail 4096 Apr 21 16:56 /var/mail
root@ubuntu:~# mkdir -p /var/mail/vhosts/linoxidemail.com

Create a separate user/group named "vmail" with id 5000 and change the mail folder ownership to it:
root@ubuntu:~# groupadd -g 5000 vmail
root@ubuntu:~# useradd -g vmail -u 5000 vmail -d /var/mail
root@ubuntu:~# chown -R vmail:vmail /var/mail

Modifying the Dovecot authentication file : /etc/dovecot/conf.d/10-auth.conf

  • Disable plain text authentication to ensure security by modifying the below parameter to "yes".

disable_plaintext_auth = yes

  • Modify the "auth_mechanisms" parameter as below:

auth_mechanisms = plain login

  • We need to comment out the system authentication include and enable MySQL authentication by uncommenting the auth-sql.conf.ext line, as below:

#!include auth-system.conf.ext
!include auth-sql.conf.ext

Modifying the authentication SQL file : /etc/dovecot/conf.d/auth-sql.conf.ext

Make sure your MySQL authentication file looks like this.

[Image: sqlauth]

 Modifying the Dovecot + MySQL configuration file : /etc/dovecot/dovecot-sql.conf.ext

  • We need to uncomment the "driver" parameter and set to MySQL as below:

driver = mysql

  • Modify and set the connection parameters as per our database name and user.

connect = host=127.0.0.1 dbname=lnmailserver user=lnmailuser password=lnmail123

  • Modify the default_pass_scheme to SHA512-CRYPT and the password_query line as below:

default_pass_scheme = SHA512-CRYPT
password_query = SELECT email as user, password FROM virtual_users WHERE email='%u';

Please note : Set permissions on the /etc/dovecot directory so the vmail user can use it.

chown -R vmail:dovecot /etc/dovecot
chmod -R o-rwx /etc/dovecot

Modifying Dovecot Master configuration file : /etc/dovecot/conf.d/10-master.conf

We are modifying four sections in this configuration file: the IMAP section, the local mail transfer (LMTP) section, the authentication service section, and finally the auth worker process section. Please see the screenshots of each section below to view the modifications:

[Image: dovecot-imap]

[Image: dovecot-lmtp]

[Image: service auth]

[Image: dovecot auth worker]

Modifying the SSL configuration :  /etc/dovecot/conf.d/10-ssl.conf

We're modifying this section to enable SSL for incoming/outgoing connections. These configuration settings are optional, but I'd recommend them for better security.

  • Change the SSL parameter to required

ssl = required

  • Specify the SSL cert and key file location for our configuration. You can view the screenshot for more details.

[Image: ssl-dovecot]

You need to restart Dovecot after all these modifications.

That's all :) We've completed our mail server setup. Hurray! You can access your email account using your username and password in any email client you prefer. I could successfully access my email account using the settings below:

[Image: email client config]
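If you prefer to test from the command line first, openssl's s_client can confirm that Dovecot answers over SSL; this sketch assumes the imaps listener on port 993 was enabled in 10-master.conf:

root@ubuntu:~# openssl s_client -connect localhost:993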

I hope you enjoyed reading this article. I welcome your valuable suggestions and comments.
Have a Nice day!

The post How to Setup Postfix Mail Server on Ubuntu 16.04 (Dovecot - MySQL) appeared first on LinOxide.


5 Steps to Setup MySQL Master-Master Replication on Ubuntu 16.04

Master-slave replication in MySQL provides load balancing for databases, but it does not provide failover. If the master server breaks, we cannot execute queries directly on the slave server. If, in addition to load balancing, we need failover in our scenario, we can set up two MySQL instances in master-master replication. This article describes how this can be achieved in five easy steps on an Ubuntu 16.04 server.

In Master-master replication, both the servers play the role of master and slave for each other like in the following diagram:

[Image: MySQL Master-Master configuration]

Each server serves as master and slave for the other at the same time. So if you are familiar with master-slave replication in MySQL, this should be a piece of cake for you.

Pre - requisites:

This article assumes that you are running Linux based OS. MySQL server is also required. Following OS/packages are used for this demo:

  1. Ubuntu 16.04 LTS (Xenial Xerus)
  2. mysqld Ver 5.7.12-0ubuntu1.1

We are also using 2 servers that will be in Master-Master configuration. These servers are called:

  1. LinoxideMasterLeft (IP - 192.168.1.101)
  2. LinoxideMasterRight (IP - 192.168.1.102)

This setup will work on other Linux based OS’s as well but some configuration file paths might change.

Now let’s start with the steps used for MySQL replication:

Step 1: Install the MySQL server

The MySQL server needs to be installed on both servers. This step is same for both servers:

raghu@LinoxideMasterLeft:~$ sudo apt-get update && sudo apt-get install mysql-client mysql-server

root@LinoxideMasterRight:~# sudo apt-get update && sudo apt-get install mysql-client mysql-server

This installation will prompt you to choose a MySQL root password. Choose a strong password and keep it safe.

Now depending upon your use case, you might want to replicate one database or multiple databases.

Use Case 1: You need to replicate only a selected set of databases. The database names for replication are specified with the “binlog_do_db” option in the MySQL configuration file.

Use Case 2: You need all of your databases to replicate except a few. You may want to create new databases in the future, and adding each one to the list manually could be a problem. In this case, don’t use the option “binlog_do_db”; MySQL replicates all databases by default when this option is absent. We just list the databases that don’t need to be replicated (like “information_schema” and “mysql”) with the option “binlog_ignore_db”.

You can also use both of these options simultaneously if that is what you want. For the purpose of this demo, we will replicate only 1 database(as in case 1).

The MySQL instances participating in replication are part of a (replication) group. All the servers in this group have a unique ID. While configuring our servers for replication, we need to make sure that this ID is not duplicated. We will see this in a while.

Step 2: Configure MySQL to listen on the private IP address.

In our setup, the MySQL configuration is included from files in another directory. Open MySQL configuration file /etc/mysql/my.cnf to confirm that the line with “/etc/mysql/mysql.conf.d/” is present. (This file does nothing but includes the files from other directories.)

Make sure that the following line is present in this file:

!includedir /etc/mysql/mysql.conf.d/

Now we will edit the file “/etc/mysql/mysql.conf.d/mysqld.cnf”.

The first thing that we want to do is enable the MySQL daemon to listen on the private IP address. By default, the daemon binds itself to the loopback IP address. (You can also make it listen on the public IP address, but DB servers generally do not need to be accessed directly from the internet.) So we change the line:

bind-address = 127.0.0.1

To look like:

bind-address = 192.168.1.101

Make sure that you change this IP address to your server’s IP address.

We make the same changes on the other MySQL server.

Check /etc/mysql/my.cnf:

!includedir /etc/mysql/mysql.conf.d/

And make changes in /etc/mysql/mysql.conf.d/mysqld.cnf:

bind-address = 192.168.1.102

Step 3: Replication configuration

Now that our MySQL servers are set to listen on the Private IP addresses, it’s time to enable replication in MySQL configuration. Let’s start with LinoxideMasterLeft server.

In the same configuration file, look for the following lines:

# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
# other settings you may need to change.
#server-id = 1
#log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
#binlog_do_db = include_database_name
#binlog_ignore_db = include_database_name

We need to uncomment these lines, enabling the binary log and specifying the database that we are going to replicate. After the changes, it will look like this:

# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
# other settings you may need to change.
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
binlog_do_db = linoxidedb
#binlog_ignore_db = include_database_name

As we are replicating only one database, we don’t need to uncomment the line with “#binlog_ignore_db”.

Make the corresponding changes in the other server LinoxideMasterRight as well:

# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
# other settings you may need to change.
server-id = 2
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
binlog_do_db = linoxidedb
#binlog_ignore_db = include_database_name

Now that the configuration files are changed on both servers, we will restart the MySQL service:

root@LinoxideMasterLeft:~# service mysql restart

And on the other server as well:

root@LinoxideMasterRight:~# service mysql restart

We can check that our configuration changes are loaded and server is listening on the correct IP address:

root@LinoxideMasterLeft:~# netstat -ntpl | grep mysql
tcp 0 0 192.168.1.101:3306 0.0.0.0:* LISTEN 1924/mysqld

And

root@LinoxideMasterRight:~# netstat -ntpl | grep mysql
tcp 0 0 192.168.1.102:3306 0.0.0.0:* LISTEN 1422/mysqld

Step 4: Create Replication user

For MySQL replication, we need to create a new user for replication that will have replication permission on all the databases. Let’s create this user with the below MySQL queries:

Open the MySQL prompt on LinoxideMasterLeft server with the following command:

root@LinoxideMasterLeft:~# mysql -u root -p
Enter password:

Provide your password that you chose while MySQL server installation. It will drop you at the MySQL prompt. Enter the following commands at this prompt:

mysql> CREATE USER 'linoxideleftuser'@'%' identified by 'replicatepass';
Query OK, 0 rows affected (0.09 sec)

mysql> GRANT REPLICATION SLAVE ON *.* TO 'linoxideleftuser'@'%';
Query OK, 0 rows affected (0.00 sec)

Now we create the similar user on the other server LinoxideMasterRight:

mysql> CREATE USER 'linoxiderightuser'@'%' identified by 'replicatepass';
Query OK, 0 rows affected (0.04 sec)

mysql> GRANT REPLICATION SLAVE ON *.* TO 'linoxiderightuser'@'%';
Query OK, 0 rows affected (0.00 sec)

Step 5: Configure MySQL Master on both servers:

Now, in this last step, we tell each server that the other server is the master from which it syncs.

Step 5.1: Tell LinoxideMasterRight about its master:

First of all, we will check the Master status of LinoxideMasterLeft server. Run the following command at MySQL prompt to check the master status:

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 | 1447 | linoxidedb | | |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

Here, we need 2 pieces of information: the File (mysql-bin.000001) and the Position (1447) for setting up this server as master of LinoxideMasterRight (along with the username and password we set in the last step).

Run the following command on LinoxideMasterRight to tell it that LinoxideMasterLeft is its master:

mysql> stop slave;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> CHANGE MASTER TO MASTER_HOST = 'LinoxideMasterLeft', MASTER_USER = 'linoxideleftuser', MASTER_PASSWORD = 'replicatepass', MASTER_LOG_FILE = 'mysql-bin.000001', MASTER_LOG_POS = 1447;
Query OK, 0 rows affected, 2 warnings (0.07 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

Repeat the similar steps for the other server.

Step 5.2: Tell LinoxideMasterLeft about its master

Run the following command on LinoxideMasterRight to check its master status:

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 | 621 | linoxidedb | | |
+------------------+----------+--------------+------------------+-------------------+

Configure the LinoxideMasterLeft and tell it about its master by running the following command:

mysql> stop slave;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> CHANGE MASTER TO MASTER_HOST = 'LinoxideMasterRight', MASTER_USER = 'linoxiderightuser', MASTER_PASSWORD = 'replicatepass', MASTER_LOG_FILE = 'mysql-bin.000001', MASTER_LOG_POS = 621;
Query OK, 0 rows affected, 2 warnings (0.02 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
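Before testing, it is worth verifying on both servers that the replication threads are actually running; look for Slave_IO_Running: Yes and Slave_SQL_Running: Yes in the output of:

mysql> show slave status\G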

That’s it. Phew... that was a lot of configuration. Now that we have done so much work, let’s check that our configuration is working. Note that the next step is optional and is not part of the MySQL master-master replication setup itself.

Step 6: Test the replication

Let’s create the database on LinoxideMasterLeft:

mysql> create database linoxidedb;
Query OK, 1 row affected (0.00 sec)

Let’s check if this database is created on LinoxideMasterRight:

mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| linoxidedb |
| mysql |
| performance_schema |
| sys |
+--------------------+
5 rows in set (0.00 sec)

Now we will create a table in this database from LinoxideMasterRight and check from the other server.
Run the following command on LinoxideMasterRight:

mysql> CREATE TABLE linoxidedb.testuser ( id INT, name VARCHAR(20));
Query OK, 0 rows affected (0.40 sec)

Let’s check this table from LinoxideMasterLeft:

mysql> show tables;
+----------------------+
| Tables_in_linoxidedb |
+----------------------+
| testuser |
+----------------------+
1 row in set (0.00 sec)

Voila. Our replication is working fine.

As you can see, master-master replication is nothing more than configuring two servers in master-slave mode for each other. In a master-slave configuration, you need to make sure that no query is executed on the slave server (except replication queries), or replication breaks. But in the case of master-master replication, queries can run on either of the two servers, providing us with a fault-tolerant and safe environment.

The post 5 Steps to Setup MySQL Master-Master Replication on Ubuntu 16.04 appeared first on LinOxide.
