Data center infrastructure management (DCIM) is a growing challenge for data center managers, and a hot market for software vendors. The openDCIM project offers an open source alternative for companies seeking to improve their asset tracking and capacity planning. openDCIM is used for managing the infrastructure of a data center, no matter how small or large. DCIM means many different things to many different people, and there is a multitude of commercial applications available; openDCIM does not aim to be a function-by-function replacement for them. It was initially developed in-house at Vanderbilt University Information Technology Services by Scott Milliken. The software is released under the GPL v3 license, so you are free to modify it and share it with others, as long as you acknowledge where it came from.
The main goal of openDCIM is to eliminate the excuse for anybody to ever track their data center inventory in a spreadsheet or word processing document again. It provides a complete physical inventory of the data center.
Features of OpenDCIM:
Following are some of its useful features:
Support for Multiple Rooms (Data Centers)
Computation of Center of Gravity for each cabinet
Template management for devices, with ability to override per device
Optional tracking of cable connections within each cabinet, and for each switch device
Archival functions for equipment sent to salvage/disposal
Management of the three key elements of capacity management - space, power, and cooling
Basic contact management and integration into existing business directory via UserID
Fault Tolerance Tracking - run a power outage simulation to see what would be affected as each source goes down
Prerequisites:
In order to install openDCIM on CentOS 7, we need to complete the following requirements on our server.
Web host running Apache 2.x (or higher) with an SSL Enabled site
Once the packages are installed, start and enable the Apache and MySQL services using the following commands, and check that their status is active and running.
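A minimal sketch of those commands, assuming Apache (httpd) and MariaDB (the MySQL implementation shipped with CentOS 7) were the packages installed:

# systemctl start httpd.service mariadb.service
# systemctl enable httpd.service mariadb.service
# systemctl status httpd.service mariadb.service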
To set the server name, open the default web configuration file in your editor, search for 'ServerName', and add the following line.
# vim +/ServerName /etc/httpd/conf/httpd.conf
ServerName opendcim_server_name:443
Save and close the configuration file using ':wq!' and then restart apache web services.
# systemctl restart httpd.service
New Virtual Host for openDCIM:
Create a new configuration file for the openDCIM VirtualHost and put the following configuration in it. The original post showed only the authentication directives; here they are placed in a minimal VirtualHost block (adjust the paths and SSL settings to your environment).
<VirtualHost *:443>
DocumentRoot "/opt/openDCIM/opendcim"
<Directory "/opt/openDCIM/opendcim">
AllowOverride All
AuthType Basic
AuthName "openDCIM"
AuthUserFile /opt/openDCIM/opendcim/.htpasswd
Require valid-user
</Directory>
</VirtualHost>
After saving and closing the file, we need to enable basic user authentication to protect the openDCIM web directory by creating the password file referenced in the configuration above.
Let's run the command below to create the password file and add a user to it.
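A sketch using the htpasswd utility from the httpd-tools package; the username dcimadmin here is just an example:

# htpasswd -c /opt/openDCIM/opendcim/.htpasswd dcimadmin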
Now open openDCIM in your browser to proceed with the web based installation.
https://your_server_name_or_IP/
You will be asked for authentication, and after providing the username and password you will be directed to the openDCIM web page, where you will be asked to create a new Department as shown.
Thank you for being with us; we have successfully set up openDCIM on our CentOS 7 server. Now you can easily manage your data centers, no matter how small or large your environment. Do share your experience and leave your valuable comments.
Ansible is an open source software platform for configuration management, provisioning, application deployment and service orchestration. It can be used for configuring our servers in production, staging and development. It can also be used to manage application servers like web servers, database servers and many others. Other configuration management systems include Chef, Puppet, Salt and Distelli; compared to all of these, Ansible is the simplest and most easily managed tool. The main advantages of using Ansible are as follows:
1. Modules can be written in any programming language.
2. No agent running on the client machine.
3. Easy to install and Manage.
4. Highly reliable.
5. Scalable.
In this article, I'll explain some of the basics about your first steps with Ansible.
Understanding the hosts file
Once you've installed Ansible, the first thing to understand is its inventory file. This file contains the list of target servers managed by Ansible. The default hosts file location is /etc/ansible/hosts. We can edit this file to include our target systems. The file defines groups in which you can classify your hosts as you prefer.
Important things to note when creating the hosts file:
# - Comments begin with the '#' character
# - Blank lines are ignored
# - Groups of hosts are delimited by [header] elements
# - You can enter hostnames or IP addresses
# - A hostname/ip can be a member of multiple groups
# - Remote hosts can have variable assignments in more than one group
# - Include host ranges in one string as server-[01:12]-example.com
PS: It's not recommended to modify the default inventory file; instead, we can create our own custom inventory files at any location convenient for us.
How does Ansible work?
First of all, the Ansible admin client connects to the target server using SSH. We don't need to set up any agents on the client servers. All you need is Python and a user that can log in and execute the scripts. Once the connection is established, Ansible starts gathering facts about the client machine, like the operating system, running services and packages. We can execute commands, copy/modify/delete files and folders, and manage or configure packages and services easily using Ansible. I'll demonstrate it with the help of my demo setup.
My client servers are 45.33.76.60 and 139.162.35.39. I created my custom inventory hosts file under my user. Please see my inventory file below, with three groups namely webservers, production and testing. In webservers, I've included both of my client servers, and then separated them into two other groups, one in production and the other in testing.
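The inventory screenshot from the original post is not reproduced here; a sketch consistent with the rest of this article (139.162.35.39 appears later under production and 45.33.76.60 under testing) would be:

[webservers]
139.162.35.39
45.33.76.60

[production]
139.162.35.39

[testing]
45.33.76.60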
We need to create SSH keys on the admin server and copy them over to the target servers to enable passwordless SSH connections. Let's take a look at how I did that for my client servers.
linuxmonty@linuxmonty-Latitude-E4310:~$ ssh-keygen -t rsa -b 4096
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2e:2f:32:9a:73:6d:ba:f2:09:ac:23:98:0c:fc:6c:a0 linuxmonty@linuxmonty-Latitude-E4310
The key's randomart image is:
+--[ RSA 4096]----+
| |
| |
| |
| |
|. S |
|.+ . |
|=.* .. . |
|Eoo*+.+o |
|o.+*=* .. |
+-----------------+
Copying the SSH keys
This is how we copy the SSH keys from Admin server to the target servers.
Client 1:
linuxmonty@linuxmonty-Latitude-E4310# ssh-copy-id root@139.162.35.39
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@139.162.35.39's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@139.162.35.39'"
and check to make sure that only the key(s) you wanted were added.
linuxmonty@linuxmonty-Latitude-E4310#
Client 2:
linuxmonty@linuxmonty-Latitude-E4310# ssh-copy-id root@45.33.76.60
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@45.33.76.60's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@45.33.76.60'"
and check to make sure that only the key(s) you wanted were added.
linuxmonty@linuxmonty-Latitude-E4310#
Once you execute these commands from your admin server, your keys will be added to the target servers and saved in the authorized_keys file.
Getting familiar with some basic Ansible modules
Modules control system resources, configuration, packages, files, etc. There are about 450+ modules in Ansible. First of all, let's use the ping module to check the connectivity between the admin server and the target servers; we run it from the admin server to confirm connectivity.
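The command itself was shown as a screenshot; following the invocation pattern used elsewhere in this article, it would be:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m ping -u root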
Now run the setup module to gather more facts about your target servers to organize your playbooks. This module provides information about the server hardware, network and some of the Ansible-related software settings. These facts can be referenced in playbooks and represent discovered variables about your system. They can also be used to implement conditional execution of tasks.
linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m setup -u root
linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m command -a 'w' -u root
139.162.35.39 | success | rc=0 >>
08:07:55 up 4 days, 17:08, 2 users, load average: 0.00, 0.01, 0.05
USER TTY LOGIN@ IDLE JCPU PCPU WHAT
root pts/0 07:54 7:54 0.00s 0.00s -bash
root pts/1 08:07 0.00s 0.05s 0.00s w
45.33.76.60 | success | rc=0 >>
08:07:58 up 14 days, 20:33, 2 users, load average: 0.03, 0.03, 0.05
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root pts/0 101.63.79.157 07:54 8:01 0.00s 0.00s -bash
root pts/1 101.63.79.157 08:07 0.00s 0.05s 0.00s w
Similarly, we can execute any Linux command across multiple target servers using the command module in Ansible.
Managing Users and Groups
Ansible provides a module called "user" which server this purpose. The ‘user’ module allows easy creation and manipulation of existing user accounts, as well as removal of the existing user accounts as per our needs.
Usage : # ansible -i inventory selection -m user -a "name=username1 password=<crypted password here>"
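For example, creating the adminsupport user discussed below (the target group and the crypted password are assumptions; a crypted password can be generated with openssl passwd -1):

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m user -a "name=adminsupport password=<crypted password here>" -u root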
This command initiates the creation of the adminsupport user on the target servers. On the server 139.162.35.39 this user was already present, so Ansible skips the pieces of the account that already exist, as the warning in the output below shows.
139.162.35.39 | success >> {
"changed": true,
"comment": "",
"createhome": true,
"group": 1001,
"home": "/home/adminsupport",
"name": "adminsupport",
"password": "NOT_LOGGING_PASSWORD",
"shell": "/bin/bash",
"state": "present",
"stderr": "useradd: warning: the home directory already exists.\nNot copying any file from skel directory into it.\nCreating mailbox file: File exists\n",
"system": false,
"uid": 1001
}
Usage : ansible -i inventory selection -m user -a 'name=username state=absent'
This command deletes the user adminsupport from our Testing server 45.33.76.60.
File Transfers
Ansible provides a module called "copy" to enhance the file transfers across multiple servers. It can securely transfer a lot of files to multiple servers in parallel.
Usage : ansible -i inventory selection -m copy -a "src=file_name dest=file path to save"
I'm copying a shell script called test.sh from my admin server to /root on all my target servers. Please see the command usage below:
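The screenshot of the command is not reproduced; it would have been along these lines (destination path assumed from the description):

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m copy -a "src=test.sh dest=/root/test.sh" -u root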
If you use playbook, you can take advantage of the module template to perform the same task.
Ansible also provides a module called "file" which helps us change the ownership and permissions of files across multiple servers. We can also pass these options directly to the "copy" module. The file module can likewise be used to create or delete files and folders.
Example :
I've modified the owner and group of an existing file test.sh on the destination server and changed its permissions to 600.
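A sketch of that command (the owner/group name and file path are assumptions based on the earlier examples):

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m file -a "dest=/root/test.sh owner=adminsupport group=adminsupport mode=600" -u root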
Now I need to create a folder with the desired ownership and permissions. Let's see the command to achieve that. I'm creating a folder "ansible" on my production server group, owned by "adminsupport" with 755 permissions.
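A sketch of that command (the parent path /root is an assumption):

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m file -a "dest=/root/ansible state=directory owner=adminsupport mode=755" -u root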
The state parameter alone determines the operation: setting it to absent deletes that particular folder from the server.
Managing Packages
Let's see how we can manage packages using Ansible. We need to identify the platform of the target servers and use the appropriate package manager module, such as yum or apt, according to the target server's OS. Ansible also has modules for managing packages on many other platforms.
Here I'm installing the vsftpd package on the production server in my inventory.
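The install command follows the same pattern as the update and removal commands shown later:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m yum -a 'name=vsftpd state=present' -u root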
If you notice, when I executed the ansible command to install the package the first time, the "changed" variable was "true", which means the command installed the package. When I ran the command again, it reported "changed" as "false", which means the command checked for the package, found it already installed, and did nothing on that server.
Similarly, we can update or remove a package; the state parameter alone determines the operation. It can be set to latest to install the latest available package and absent to remove the package from the server.
Examples:
Updating the package:
linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m yum -a 'name=vsftpd state=latest' -u root
139.162.35.39 | success >> {
"changed": false,
"msg": "",
"rc": 0,
"results": [
"All packages providing vsftpd are up to date"
]
}
This shows that the installed package is already at the latest version and there are no available updates.
linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m yum -a 'name=vsftpd state=absent' -u root
139.162.35.39 | success >> {
"changed": false,
"msg": "",
"rc": 0, "results": [ "vsftpd is not installed"
]
}
The first time we ran the ansible command it removed the vsftpd package; running it again confirms that the package no longer exists on the server.
Managing Services
It is necessary to manage the services installed on the target servers. Ansible provides the service module for this. We can use it to enable services on boot and to start/stop/restart them. Please see the examples for each case.
Starting/Enabling a Service:
linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m service -a 'name=nginx enabled=yes state=started' -u root
45.33.76.60 | success >> {
"changed": false,
"enabled": true,
"name": "nginx",
"state": "started"
}
The state parameter can be set to started, restarted or stopped to manage the service.
Playbooks
Playbooks are Ansible's configuration, deployment, and orchestration language. They can assign different roles, perform tasks like copying or deleting files/folders, make use of mature modules that handle most of the heavy lifting, and substitute variables to make your deployments dynamic and reusable.
Playbooks define your deployment steps and configuration. They are modular, can contain variables, and can be used to orchestrate steps across multiple machines. They are written in simple YAML, the Ansible automation language, and can contain multiple tasks.
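The playbook itself appeared as a screenshot in the original article. Based on the description and the recap below, it would have looked roughly like this (the host group and task names are assumptions):

---
- hosts: production
  remote_user: root
  tasks:
    - name: Install vsftpd
      yum: name=vsftpd state=present
    - name: Start and enable the FTP service
      service: name=vsftpd state=started enabled=yes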
This playbook specifies two tasks to be performed when it runs. We can divide it into four statements for more clarity. The first statement names the task of setting up an FTP server, and the second performs it by installing the package on the target server. The third statement names the next task, and the fourth ensures the service state by starting the FTP server and enabling it on boot.
PLAY RECAP ********************************************************************
139.162.35.39 : ok=3 changed=2 unreachable=0 failed=0
We can see that playbooks are executed sequentially according to the tasks specified in them. First Ansible reads the inventory, and then it performs the plays one by one.
Application Deployments
I'm going to set up my web servers using a playbook. I created a playbook for my "webservers" inventory group. Please see my playbook details below:
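The playbook was shown as an image in the original; a sketch consistent with the four tasks described below (package names, file names and paths are assumptions):

---
- hosts: webservers
  remote_user: root
  tasks:
    - name: Install httpd
      yum: name=httpd state=present
    - name: Install PHP
      yum: name=php state=present
    - name: Deploy the index file
      copy: src=index.php dest=/var/www/html/index.php
    - name: Start and enable httpd
      service: name=httpd state=started enabled=yes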
This playbook describes four tasks as evident from the result. After running this playbook, we can confirm the status by checking the target servers in the browser.
Now I'm planning to add a PHP module, namely php-gd, to the target servers. I can edit my playbook to include that task too and run it again. Let's see what happens now. My modified playbook is as below:
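Only a screenshot was shown here as well; the modification would amount to one extra task appended to the playbook above:

    - name: Install the php-gd module
      yum: name=php-gd state=present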
On close analysis of this result, you can see that only two sections reported modifications to the target servers: one is the index file modification and the other is the installation of our additional PHP module. Now we can see the changes on the target servers in the browser.
Ansible roles are a special kind of playbook that is fully self-contained and portable. A role contains the tasks, variables, configuration templates and other supporting files needed to complete complex orchestration. Roles can be used to simplify more complex operations. You can create different roles like common, webservers, db_servers etc., each categorized by purpose, and include them in the main playbook by just mentioning the roles. This is how we create a role:
linuxmonty@linuxmonty-Latitude-E4310:~$ ansible-galaxy init common
common was created successfully
Now I've created a role named common to perform some of the common tasks on all my target servers. Each role contains its individual tasks, configuration templates, variables, handlers etc.
total 40
drwxrwxr-x 9 linuxmonty linuxmonty 4096 May 13 14:06 ./
drwxr-xr-x 34 linuxmonty linuxmonty 4096 May 13 14:06 ../
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 defaults/
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 files/
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 handlers/
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 meta/
-rw-rw-r-- 1 linuxmonty linuxmonty 1336 May 13 14:06 README.md
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 tasks/
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 templates/
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 vars/
We can create our YAML files inside each of these folders as per our purpose. Later on, we can run all these tasks by just specifying the roles inside a playbook. You can get more details on Ansible roles in the official Ansible documentation.
I hope this documentation provided you with basic knowledge of how to manage your servers with Ansible. Thank you for reading. I would welcome your valuable suggestions and comments on this.
Docker is a free and open source project for automating the deployment of apps in software containers, providing an open platform to pack, ship and run any application anywhere. It makes awesome use of the resource isolation features of the Linux kernel, such as cgroups, kernel namespaces, and union-capable file systems. It is pretty easy and simple to deploy and scale web apps, databases and back-end services independent of a particular stack or provider. The latest release, i.e. version 1.11.1, contains many additional features and bug fixes. In this article, we'll be installing the latest Docker Engine 1.11.1 on a machine running Ubuntu 16.04 LTS "Xenial".
System Requirements
Following are the system requirements that are essential to run the latest Docker Engine in Ubuntu 16.04 LTS Xenial.
Docker currently requires a 64-bit host, so we'll need a 64-bit version of Ubuntu Xenial installed on the host.
As we'll need to download container images frequently, we'll require good internet connectivity on the host.
Make sure that the machine's CPU supports virtualization technology and virtualization support is enabled in BIOS.
Linux kernel version 3.8 or above is required; Ubuntu Xenial's default 4.4 kernel satisfies this.
Updating and Upgrading Xenial
First of all, we'll need to update the local repository index from the nearest mirror service so that we have the index of all the latest packages available in the repositories. To do so, we'll need to run the following command in a terminal or console.
$ sudo apt-get update
As our local repository index has been updated, we'll upgrade our Ubuntu Xenial system to the latest packages available in the repositories via the apt-get package manager.
$ sudo apt-get upgrade
Installing Docker Engine
Once our system has been upgraded, we'll move towards the installation of the latest Docker Engine, i.e. version 1.11, on our machine running the latest and greatest Ubuntu 16.04 Xenial LTS. There are several ways to install it on Ubuntu: we can either run a simple script written by the official developers or manually add Docker's official repository and install from it. In this tutorial, we'll show both methods.
Manual Installation
1. Adding the Repository
First of all, we'll need to add the new GPG key for our docker repository.
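The key-adding command appeared as an image in the original. For the apt.dockerproject.org repository of that era it would be the following (the keyserver and key ID are taken from the Docker documentation of the time, so treat them as assumptions):

$ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D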
As the new GPG key for docker repository has been added to our machine, we'll now need to add the repository source to our apt source list. To do so, we'll use a text editor and create a file named docker.list under /etc/apt/sources.list.d/ directory.
$ sudo nano /etc/apt/sources.list.d/docker.list
Then we'll add the following line to that file in order to add the repository to apt's sources.
deb https://apt.dockerproject.org/repo ubuntu-xenial main
2. Updating the APT Package Index
As our repository for docker has been added, we'll now update the local repository index of the APT package manager so that we can use it to install the latest release. In order to update the local repository index, we'll need to run the following command inside a terminal or console.
$ sudo apt-get update
3. Installing Linux Kernel Extras
Now, as recommended, we'll install the Linux Kernel Extras on our machine running Ubuntu Xenial. This package is important because it enables the use of the aufs storage driver. To install the linux-image-extra kernel package matching our running kernel, we'll run the following command.
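The command was shown as an image in the original; the standard form installs the extras package matching the running kernel:

$ sudo apt-get install linux-image-extra-$(uname -r)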
Here, as we have linux kernel 4.4.0-22 installed and running, the linux kernel extras of the respective kernel will be installed.
4. Installing Docker Engine
Once everything is set up, we'll now move to the main part of the work, where we'll install the latest docker engine on our Ubuntu 16.04 LTS Xenial machine. To do so, we'll need to run the following simple apt-get command.
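The package name used by the apt.dockerproject.org repository is docker-engine:

$ sudo apt-get install docker-engine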
Finally, we are done installing Docker Engine. Once the installation process is complete, we'll move towards the next step, where we'll add our current running user to the docker group.
One-Script installation
If we want to automate everything done above in the manual installation method, we'll need to follow this step. As said above, Docker's developers have written an awesome script that installs docker engine on a machine running Ubuntu 16.04 LTS Xenial, fully automated. This method is pretty fast, easy and simple to perform. A person with little knowledge of Ubuntu 16.04 can easily install docker using this script. Before we start, we'll need to make sure that wget is installed on our machine. To install the wget downloader, we'll need to run the following command.
$ sudo apt-get install wget
Once the wget downloader is installed on our machine, we'll run the following wget command to fetch and run docker's official script, which installs the latest Docker Engine.
$ wget -qO- https://get.docker.com/ | sh
Adding User to Docker Group
Now we'll add our user to the docker group; doing so allows the docker daemon to grant users in the docker group permission to run and manage docker containers.
$ sudo usermod -aG docker arun
Once done, we'll need to log out and log back in to the system for the changes to take effect.
Starting the Docker Daemon
Next, we'll start the Docker daemon so that we can run, manage and control containers and images on our Ubuntu machine. As Ubuntu 16.04 LTS Xenial uses systemd as its default init system, we'll run the following systemctl command to start the docker daemon.
$ sudo systemctl start docker
Checking the version
As our docker daemon has been started, we'll now test whether it's installed and running properly by checking the version of the docker engine installed on our machine.
$ docker -v
Docker version 1.11.1, build 5604cbe
As version 1.11.1 was the released version at the time of writing this article, we should see the above output.
Running Docker Containers
Now we'll run our first docker container in this step. If everything above has been set up properly, we'll be able to run a container. In this tutorial, we'll run our all-time favorite testing container, Hello World. To run the hello-world container, we'll need to run the following docker command.
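The command did not survive the original formatting; it is the standard:

$ docker run hello-world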
Doing this should print an output "Hello from Docker." from the container, which verifies that we have successfully installed docker engine and that it is capable of running containers.
In order to check what images were pulled while running the hello-world container, we'll need to run the following docker command.
$ docker images
Managing Docker
As our docker is running successfully, we'll also need to learn how to manage it. In this tutorial, we'll have a look into few basic docker commands which are used to stop, remove, pull a docker container and images.
Stopping a Running Container
Now, if we want to stop a running container, we'll first run the following command to see the list of running containers.
$ docker ps -a
Then, we'll need to run the following docker stop command with the respective container id.
$ docker stop 646ed6509700
Removing a Container
To remove a stopped container, we'll need to run the following command specifying the stopped unused container id.
$ docker rm 646ed6509700
Pulling an Image
In order to pull a docker image, we'll need to run the pull command.
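For example, pulling the official ubuntu image (any image name from Docker Hub works here):

$ docker pull ubuntu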
There are many more commands for managing docker; you can see more in the official documentation of Docker.
Conclusion
Docker is an awesome technology enabling us to easily pack, run and ship applications independent of platform. It is pretty easy to install and run the latest Docker Engine on the latest Ubuntu release, i.e. Ubuntu 16.04 LTS Xenial. Once the installation is done, we can move further towards managing, networking and more with containers. If you have any questions, suggestions or feedback, please write them in the comment box below. Thank you! Enjoy :-)
Relax-and-Recover (ReaR) is a Linux bare metal disaster recovery and system migration solution. It is a true disaster recovery solution that creates recovery media from a running Linux system. If a hardware component fails, an administrator can boot the standby system with the ReaR rescue media and put the system back to its previous state. ReaR takes care of the partitioning and formatting of the hard disk, the restoration of all data, and the boot loader configuration. ReaR is also well suited as a migration tool, because the restoration does not have to take place on the same hardware as the original. It builds the rescue medium with all existing drivers, and the restored system adjusts automatically to the changed hardware.
ReaR even detects changed network cards, as well as different storage scenarios with their respective drivers (migrating IDE to SATA or SATA to CCISS) and modified disk layouts. Relax-and-Recover was designed to be easy to set up, requires no maintenance, and is there to assist when disaster strikes. Its set-up-and-forget nature removes any excuse for not having a disaster recovery solution implemented.
Prerequisites:
Relax-and-Recover is written entirely in Bash and does not require any external programs. However, the rescue system created by Relax-and-Recover requires some programs to work, namely 'mingetty' and 'sfdisk', while all other required programs like sort, dd, grep, etc. are already present in a minimal installation.
Let's start with your system update using below command on your CentOS 7 server.
# yum -y update
Make sure the following dependencies are also installed on your system, or you will get errors about missing packages.
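The dependency list appeared as an image in the original post. A plausible set for CentOS 7, based on the programs named above, would be the following; treat these package names as assumptions and install whatever yum actually reports missing:

# yum -y install mingetty syslinux genisoimage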
Many Linux distributions ship Relax-and-Recover as part of their distribution; you can refer to the Relax-and-Recover download page to get its stable release.
Let's run the below 'yum' command to install the rear package.
# yum install rear
After you type 'y' to continue, the package will be installed along with its required dependencies.
You can also start by cloning the Relax-and-Recover sources from Github with below command.
# git clone git://github.com/rear/rear.git
Setup USB Media:
Prepare the USB media that Relax-and-Recover will be using. Here we are using an external drive which is '/dev/sdb'. Change '/dev/sdb' to the correct device in your situation.
Run the command below to format the device; note that this erases all data on it.
# /usr/sbin/rear format /dev/sdb
Relax-and-recover asks you to confirm if you want to format the device or not, let's type 'Yes' and hit 'Enter'.
USB device /dev/sdb must be formatted with ext2/3/4 or btrfs file system
Please type Yes to format /dev/sdb in ext3 format: Yes
The device has been labeled REAR-000 by the format workflow. Now edit the '/etc/rear/local.conf' configuration file with below configuration.
# vim /etc/rear/local.conf
### write the rescue initramfs to USB and update the USB bootloader
OUTPUT=USB
#
#### create a backup using the internal NETFS method, using 'tar'
BACKUP=NETFS
#
#### write both rescue image and backup to the device labeled REAR-000
BACKUP_URL=usb:///dev/disk/by-label/REAR-000
Create Rescue Image:
Now you are ready to create a rescue image. Let's run the command below with the -v option to see verbose output.
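The rescue image is created by the mkrescue workflow referenced in the log below:

# rear -v mkrescue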
You might want to check the log file for possible errors or see what Relax-and-Recover is doing.
# tail -f /var/log/rear/rear-centos7.log
2016-05-16 00:19:52 Unmounting '/tmp/rear.Ir6gqwz2ROig9on/outputfs'
umount: /tmp/rear.Ir6gqwz2ROig9on/outputfs (/dev/sdb1) unmounted
rmdir: removing directory, '/tmp/rear.Ir6gqwz2ROig9on/outputfs'
2016-05-16 00:19:52 Finished running 'output' stage in 4 seconds
2016-05-16 00:19:52 Finished running mkrescue workflow
2016-05-16 00:19:52 Running exit tasks.
2016-05-16 00:19:52 Finished in 93 seconds
2016-05-16 00:19:52 Removing build area /tmp/rear.Ir6gqwz2ROig9on
rmdir: removing directory, '/tmp/rear.Ir6gqwz2ROig9on'
2016-05-16 00:19:53 End of program reached
Now reboot your system and try to boot from the USB device. If you are able to boot from it, your work is done. You can also verify by mounting the drive. Now let's dive into the advanced Relax-and-Recover options and start creating full backups.
Relax-and-Recover will not automatically add itself to the GRUB bootloader; it only copies itself to your /boot folder. To enable a boot entry, add the line below to your local configuration.
GRUB_RESCUE=1
The entry in the bootloader is password protected. The default password is REAR. Change it in your own 'local.conf' file.
GRUB_RESCUE_PASSWORD="SECRET"
Storing on a central NFS server:
The most straightforward way to store your DR images is using a central NFS server. The configuration below will store both a backup and the rescue CD in a directory on the share.
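A sketch of such a configuration in '/etc/rear/local.conf' (the NFS server name and path are placeholders):

OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL=nfs://your-nfs-server/path/to/rear-backups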
To configure Relax-and-Recover you have to edit the configuration files in the '/etc/rear/' directory. All *.conf files there are part of the configuration, but only 'site.conf' and 'local.conf' are intended for user configuration. All other configuration files hold defaults for various distributions and should not be changed.
In almost all circumstances you have to configure two main settings and their parameters: the BACKUP method and the OUTPUT method.
The backup method defines how your data is saved: whether Relax-and-Recover should back up your data as part of the mkrescue process, or whether you use an external application, e.g. backup software, to archive your data.
The output method defines how the rescue system is written to disk and how you plan to boot the failed computer from the rescue system. See the file '/usr/share/rear/conf/default.conf' for an overview of the possible methods and their options.
Using Relax-and-Recover
To use Relax-and-Recover you always call the main script '/usr/sbin/rear'. To get the list of all available commands, run the command below.
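The built-in help prints the list of workflows and options:

# rear help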
To view/verify your configuration, run 'rear dump'. It will print out the current settings for BACKUP and OUTPUT methods and some system information.
# rear dump
To recover your system, start the computer from the rescue system and run 'rear recover'. Your system will be recovered, and you can restart it and continue to use it normally.
Conclusion:
Relax-and-Recover (ReaR) is the leading open source disaster recovery solution, and the successor to mkcdrec. It was designed to be easy to set up, requires no maintenance, and assists when disaster strikes. This was a detailed article on ReaR installation and its use case. Feel free to get back to us in case of any difficulty; just leave us your comments or suggestions.
Chef is an automation platform that configures and manages your infrastructure, transforming infrastructure into code. It is a Ruby-based configuration management tool. The platform consists of a Chef workstation, a Chef server and Chef clients, which are the nodes managed by the Chef server. All the Chef configuration files, recipes, cookbooks, templates etc. are created and tested on the Chef workstation and uploaded to the Chef server, which then distributes them across every node registered within the organization. It is an ideal automation framework for Ceph and OpenStack. Not only does it give us complete control, but it's also super easy to work with.
In this article, I'm explaining the steps I followed for implementing a Chef automation environment on my CentOS 7 servers.
Pre-requisites
It is recommended to have a FQDN hostname
Chef supports only 64 bit architecture
Proper network/Firewall/hosts configurations are recommended
A Chef setup comprises a workstation, which is configured to develop recipes and cookbooks. It is also configured to run knife and synchronizes with the chef-repo to keep it up to date. The workstation helps in configuring organizational policy, including defining roles and environments and ensuring that critical data is stored in data bags. Once recipes and cookbooks are tested on the workstation, we can upload them to our Chef server. The Chef server stores these recipes and assigns them to nodes depending on their requirements. Basically, nodes communicate only with the Chef server and take their instructions and recipes from there.
In my demo setup, I have three servers, namely:
chefserver.test20.com - Chef Server
chefwork.test20.com - Chef Workstation
chefnode.test20.com - Chef Node
Let's start with building the workstation.
Setup a Workstation
First of all, log in to our server chefwork, then download the Chef Development Kit package. Once the package is downloaded, we can install it using the rpm command.
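The download command was not reproduced in the original; following the packages.chef.io URL pattern used later in this article for the chef client, it would be something like:

[root@chefwork ~]# wget https://packages.chef.io/stable/el/7/chefdk-0.14.25-1.el7.x86_64.rpm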
[root@chefwork ~]# rpm -ivh chefdk-0.14.25-1.el7.x86_64.rpm
warning: chefdk-0.14.25-1.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 83ef826a: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:chefdk-0.14.25-1.el7 ################################# [100%]
Thank you for installing Chef Development Kit!
What is ChefDK?
The Chef Development Kit contains everything you need to get started with Chef, along with the tools essential for managing code.
It contains a new command-line tool, "chef"
The cookbook dependency manager Berkshelf
The Test Kitchen integration testing framework.
ChefSpec for unit testing your cookbooks
Foodcritic, a tool for doing static code analysis on cookbooks.
It also has all the Chef tools like Chef Client, Knife, Ohai and Chef Zero
Let's start by creating some recipes on the workstation and testing them locally to ensure they work.
Create a folder named chef-repo under /root/, and inside that folder we can create our recipes.
[root@chefwork ~]# mkdir chef-repo
[root@chefwork ~]# cd chef-repo
Creating a recipe called hello.rb.
[root@chefwork chef-repo]# vim hello.rb
[root@chefwork chef-repo]#
[root@chefwork chef-repo]# cat hello.rb
file '/etc/motd' do
content 'Welcome to Chef'
end
This recipe, hello.rb, creates a file named /etc/motd with the content "Welcome to Chef". The recipe makes use of the file resource to accomplish this task. Now we can run the recipe to check that it works.
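We run it locally with chef-apply:

[root@chefwork chef-repo]# chef-apply hello.rb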
We're modifying our recipe file to install the httpd package on our server and copy an index.html file to the default document root to confirm the installation. The package and service resources are used to implement this. The default action for a package resource is installation, hence we needn't specify that action separately.
[root@chefwork chef-conf]# cat hello.rb
package 'httpd'
service 'httpd' do
action [:enable, :start]
end
file '/var/www/html/index.html' do
content 'Welcome to Apache in Chef'
end
[root@chefwork chef-conf]# chef-apply hello.rb
Recipe: (chef-apply cookbook)::(chef-apply recipe)
  * yum_package[httpd] action install
    - install version 2.4.6-40.el7.centos.1 of package httpd
  * service[httpd] action enable
    - enable service service[httpd]
  * service[httpd] action start
    - start service service[httpd]
  * file[/var/www/html/index.html] action create (up to date)
The command execution clearly describes each step in the recipe: it installs the Apache package, enables and starts the httpd service on the server, and creates an index.html file in the default document root with the content "Welcome to Apache in Chef". We can verify it by opening the server IP in the browser.
Now we can create our first cookbook. Create a folder called cookbooks under the chef-repo directory and execute the command "chef generate cookbook [cookbook name]" to generate the cookbook.
[root@chefwork chef-repo]# mkdir cookbooks
[root@chefwork chef-repo]# cd cookbooks/
[root@chefwork cookbooks]# chef generate cookbook httpd_deploy
Installing Cookbook Gems:
Compiling Cookbooks...
Recipe: code_generator::cookbook
* directory[/root/chef-repo/cookbook/httpd_deploy] action create
- create new directory /root/chef-repo/cookbook/httpd_deploy
This is the file structure of the created cookbook. Let's see the purpose of each of these files/folders inside the cookbook one by one.
Berksfile : The configuration file which mainly tells Berkshelf what the cookbook's dependencies are; these can be specified directly inside this file or indirectly through metadata.rb. It also tells Berkshelf where it should look for those dependencies.
Chefignore : Tells Chef which files should be ignored when uploading a cookbook to the Chef server.
metadata.rb : Contains meta information about your cookbook, such as its name, contacts or description. It can also state the cookbook's dependencies.
README.md : It contains documentation entry point for the repo.
Recipes : Contains the cookbook's recipes. Execution starts with the file default.rb.
default.rb : The default recipe format.
specs : It will be storing the unit test cases of your libraries.
test : It will be storing the unit test cases of your recipes.
Creating a template
Next we are going to create a template file. Earlier, we created a file with its content inline, but that doesn't fit in with our recipe and cookbook structure, so let's see how we can create a template.
[root@chefwork cookbook]# chef generate template httpd_deploy index.html
Installing Cookbook Gems:
Compiling Cookbooks...
Recipe: code_generator::template
* directory[./httpd_deploy/templates/default] action create
- create new directory ./httpd_deploy/templates/default
* template[./httpd_deploy/templates/default/index.html.erb] action create
- create new file ./httpd_deploy/templates/default/index.html.erb
- update content in file ./httpd_deploy/templates/default/index.html.erb from none to e3b0c4
(diff output suppressed by config)
Now if you look at our cookbook file structure, there is a templates folder containing an index.html.erb file. We can edit our index.html.erb template file and reference it in our recipe as below:
[root@chefwork default]# cat index.html.erb
Welcome to Chef Apache Deployment
[root@chefwork default]# pwd
/root/chef-repo/cookbook/httpd_deploy/templates/default
Creating the recipe with this template
[root@chefwork recipes]# pwd
/root/chef-repo/cookbook/httpd_deploy/recipes
[root@chefwork recipes]# cat default.rb
#
# Cookbook Name:: httpd_deploy
# Recipe:: default
#
# Copyright (c) 2016 The Authors, All Rights Reserved.
package 'httpd'
service 'httpd' do
action [:enable, :start]
end
template '/var/www/html/index.html' do
  source 'index.html.erb'
end
Now go back to our chef-repo folder and run/test our recipe on our Workstation.
[root@chefwork chef-repo]# chef-client --local-mode --runlist 'recipe[httpd_deploy]'
[2016-05-20T05:44:40+00:00] WARN: No config file found or specified on command line, using command line options.
Starting Chef Client, version 12.10.24
resolving cookbooks for run list: ["httpd_deploy"]
Synchronizing Cookbooks:
- httpd_deploy (0.1.0)
Installing Cookbook Gems:
Compiling Cookbooks...
Converging 3 resources
Recipe: httpd_deploy::default
  * yum_package[httpd] action install
    - install version 2.4.6-40.el7.centos.1 of package httpd
  * service[httpd] action enable
    - enable service service[httpd]
  * service[httpd] action start
    - start service service[httpd]
  * template[/var/www/html/index.html] action create
    - update content in file /var/www/html/index.html from 152204 to 748cbd
    --- /var/www/html/index.html 2016-05-20 04:18:38.553231745 +0000
    +++ /var/www/html/.chef-index.html20160520-20425-1bez4qs 2016-05-20 05:44:47.344848833 +0000
    @@ -1,2 +1,2 @@
    -Welcome to Apache in Chef
    +Welcome to Chef Apache Deployment
Running handlers:
Running handlers complete
Chef Client finished, 4/4 resources updated in 06 seconds
[root@chefwork chef-repo]# cat /var/www/html/index.html
Welcome to Chef Apache Deployment
According to our recipe, Apache is installed on our workstation, and the service has been started and enabled on boot. A template file has been created in our default document root.
Now that we've tested our workstation, it's time for the Chef server setup.
Setting up the Chef Server
First of all, log in to our Chef server "chefserver.test20.com" and download the Chef server package compatible with our OS version.
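The download and install commands appeared as an image; following the packages.chef.io pattern, they would have been along these lines (the exact chef-server-core version is an assumption):

[root@chefserver ~]# wget https://packages.chef.io/stable/el/7/chef-server-core-12.6.0-1.el7.x86_64.rpm
[root@chefserver ~]# rpm -ivh chef-server-core-12.6.0-1.el7.x86_64.rpm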
Now our Chef server is installed, but we need to reconfigure it to enable and start all the services that make up the Chef server. We can run this command to reconfigure it.
[root@chefserver ~]# chef-server-ctl reconfigure
Starting Chef Client, version 12.10.26
resolving cookbooks for run list: ["private-chef::default"]
Synchronizing Cookbooks:
- enterprise (0.10.0)
- apt (2.9.2)
- yum (3.10.0)
- openssl (4.4.0)
- chef-sugar (3.3.0)
- packagecloud (0.0.18)
- runit (1.6.0)
- private-chef (0.1.0)
Installing Cookbook Gems:
Compiling Cookbooks...
[2016-05-19T02:38:37+00:00] WARN: Chef::Provider::AptRepository already exists! Cannot create deprecation class for LWRP provider apt_repository from cookbook apt
Chef Client finished, 394/459 resources updated in 04 minutes 05 seconds
Chef Server Reconfigured!
Please confirm the service status and their pids by running this command.
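[root@chefserver ~]# chef-server-ctl status

Next, install the management console add-on. Its installation command was shown as an image; given the opscode-manage-ctl tool used below, it would have been (newer releases call the add-on chef-manage):

[root@chefserver ~]# chef-server-ctl install opscode-manage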
After installing the management console, we need to reconfigure it and the chef server so their services restart and pick up the changes.
[root@chefserver ~]# opscode-manage-ctl reconfigure
To use this software, you must agree to the terms of the software license agreement.
Press any key to continue.
Type 'yes' to accept the software license agreement, or anything else to cancel.
yes
Starting Chef Client, version 12.4.1
resolving cookbooks for run list: ["omnibus-chef-manage::default"]
Synchronizing Cookbooks:
- omnibus-chef-manage
- chef-server-ingredient
- enterprise
Recipe: omnibus-chef-manage::default
* private_chef_addon[chef-manage] action create (up to date)
Recipe: omnibus-chef-manage::config
Running handlers:
Running handlers complete
Chef Client finished, 62/79 resources updated in 44.764229437 seconds
chef-manage Reconfigured!
[root@chefserver ~]# chef-server-ctl reconfigure
Now our management console is ready; we need to set up our admin user to manage our Chef server.
Creating Admin user/Organization
I've created an admin user named chefadmin with an organization named linox on my chef server to manage it. We create the user with the command chef-server-ctl user-create and the organization with chef-server-ctl org-create.
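The exact commands were shown as an image; they would have looked roughly like this (the email, password and display names are placeholders):

[root@chefserver ~]# chef-server-ctl user-create chefadmin Chef Admin chefadmin@test20.com 'password' --filename /root/.chef/chefadmin.pem
[root@chefserver ~]# chef-server-ctl org-create linox 'Linox' --association_user chefadmin --filename /root/.chef/linoxvalidator.pem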
Our keys are saved inside the /root/.chef folder. We need to copy these keys from the Chef server to the workstation to initiate communication between the Chef server and the workstation.
Copying the Keys
I'm copying my user and validator keys from the Chef server to the workstation to enable the connection between the servers.
[root@chefserver .chef]# scp chefadmin.pem root@139.162.35.39:/root/chef-repo/.chef/
The authenticity of host '139.162.35.39 (139.162.35.39)' can't be established.
ECDSA key fingerprint is 5b:0b:07:85:9a:fb:b6:59:51:07:7f:14:1b:07:07:f0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '139.162.35.39' (ECDSA) to the list of known hosts.
root@139.162.35.39's password:
chefadmin.pem 100% 1678 1.6KB/s 00:00
[root@chefserver .chef]#
[root@chefserver .chef]# scp linoxvalidator.pem root@139.162.35.39:/root/chef-repo/.chef/
The authenticity of host '139.162.35.39 (139.162.35.39)' can't be established.
ECDSA key fingerprint is 5b:0b:07:85:9a:fb:b6:59:51:07:7f:14:1b:07:07:f0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '139.162.35.39' (ECDSA) to the list of known hosts.
root@139.162.35.39's password:
linoxvalidator.pem 100% 1678 1.6KB/s 00:00
[root@chefserver .chef]#
Now log in to the management console of our Chef server with the chefadmin user and password we created.
After downloading the Starter Kit, move it to the /root folder of your workstation and extract it. This provides you with a default Starter Kit to start off with your Chef server; it includes a chef-repo.
This is the file structure for the downloaded Chef repository. It contains all the required file structures to start with.
Cookbook SuperMarket
Chef cookbooks are available in the Chef Supermarket, from which we can download the cookbooks we need. I'm downloading one of the cookbooks there to install Apache.
[root@chefwork chef-repo]# knife cookbook site download learn_chef_httpd
Downloading learn_chef_httpd from Supermarket at version 0.2.0 to /root/chef-repo/learn_chef_httpd-0.2.0.tar.gz
Cookbook saved: /root/chef-repo/learn_chef_httpd-0.2.0.tar.gz
Extract this cookbook inside the "cookbooks" folder.
[root@chefwork chef-repo]# tar -xvf learn_chef_httpd-0.2.0.tar.gz
All the required files are automatically created under this cookbook; we don't need to make any modifications. Let's check the recipe description inside its recipes folder.
template '/var/www/html/index.html' do
  source 'index.html.erb'
end

service 'iptables' do
  action :stop
end

[root@chefwork recipes]#
[root@chefwork recipes]# pwd
/root/chef-repo/cookbooks/learn_chef_httpd/recipes
[root@chefwork recipes]#
So we just need to upload this cookbook to our Chef server as it looks perfect.
Validating the Connection between Server and Workstation
Before uploading the cookbook, we need to check and confirm the connection between our Chef server and workstation. First of all, make sure you have a proper knife configuration file.
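The Starter Kit ships a knife.rb along these lines (a sketch; the names match this article's setup):

current_dir = File.dirname(__FILE__)
log_level                :info
log_location             STDOUT
node_name                "chefadmin"
client_key               "#{current_dir}/chefadmin.pem"
validation_client_name   "linox-validator"
validation_key           "#{current_dir}/linoxvalidator.pem"
chef_server_url          "https://chefserver.test20.com/organizations/linox"
cookbook_path            ["#{current_dir}/../cookbooks"]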
This configuration file is located in the /root/chef-repo/.chef folder; the server URL, node name and key paths are the main things to take care of. Now you can run this command to check the connection.
[root@chefwork .chef]# knife client list
ERROR: SSL Validation failure connecting to host: chefserver.test20.com - SSL_connect returned=1 errno=0 state=error: certificate verify failed
ERROR: Could not establish a secure connection to the server.
Use `knife ssl check` to troubleshoot your SSL configuration.
If your Chef Server uses a self-signed certificate, you can use
`knife ssl fetch` to make knife trust the server's certificates.
Original Exception: OpenSSL::SSL::SSLError: SSL Error connecting to https://chefserver.test20.com/clients - SSL_connect returned=1 errno=0 state=error: certificate verify failed
You can see an SSL validation error being reported. In order to rectify this error, we need to fetch the SSL certificate of our Chef server and store it inside the /root/chef-repo/.chef/trusted_certs folder. We can do this by running this command.
[root@chefwork .chef]# knife ssl fetch
WARNING: Certificates from chefserver.test20.com will be fetched and placed in your trusted_cert directory (/root/chef-repo/.chef/trusted_certs).
Knife has no means to verify these are the correct certificates. You should
verify the authenticity of these certificates after downloading.
Adding certificate for chefserver.test20.com in /root/chef-repo/.chef/trusted_certs/chefserver_test20_com.crt
Verifying the SSL:
[root@chefwork .chef]# knife ssl check
Connecting to host chefserver.test20.com:443
Successfully verified certificates from `chefserver.test20.com'
[root@chefwork .chef]# knife client list
chefnode
linox-validator
[root@chefwork .chef]# knife user list
chefadmin
Uploading the Cookbook
We can upload our cookbook to our chef server from the workstation using the knife command as below:
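The upload command itself did not survive the original formatting; it is:

[root@chefwork chef-repo]# knife cookbook upload learn_chef_httpd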
This is the final step in the Chef implementation. We've set up a workstation and a Chef server, and now we need to add our client nodes to the Chef server for automation. I'm adding my chefnode to the server using the knife bootstrap command as below:
[root@chefwork cookbooks]# knife bootstrap 45.33.76.60 --ssh-user root --ssh-password dkfue@321 --node-name chefnode
Creating new client for chefnode
Creating new node for chefnode
Connecting to 45.33.76.60
45.33.76.60 -----> Installing Chef Omnibus (-v 12)
45.33.76.60 downloading https://omnitruck-direct.chef.io/chef/install.sh
45.33.76.60 to file /tmp/install.sh.5457/install.sh
45.33.76.60 trying wget...
45.33.76.60 el 7 x86_64
45.33.76.60 Getting information for chef stable 12 for el...
45.33.76.60 downloading https://omnitruck-direct.chef.io/stable/chef/metadata?v=12&p=el&pv=7&m=x86_64
45.33.76.60 to file /tmp/install.sh.5466/metadata.txt
45.33.76.60 trying wget...
45.33.76.60 sha1 4def83368a1349959fdaf0633c4d288d5ae229ce
45.33.76.60 sha256 6f00c7bdf96a3fb09494e51cd44f4c2e5696accd356fc6dc1175d49ad06fa39f
45.33.76.60 url https://packages.chef.io/stable/el/7/chef-12.10.24-1.el7.x86_64.rpm
45.33.76.60 version 12.10.24
45.33.76.60 downloaded metadata file looks valid...
45.33.76.60 downloading https://packages.chef.io/stable/el/7/chef-12.10.24-1.el7.x86_64.rpm
45.33.76.60 to file /tmp/install.sh.5466/chef-12.10.24-1.el7.x86_64.rpm
45.33.76.60 trying wget...
45.33.76.60 Comparing checksum with sha256sum...
45.33.76.60 Installing chef 12
45.33.76.60 installing with rpm...
45.33.76.60 warning: /tmp/install.sh.5466/chef-12.10.24-1.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 83ef826a: NOKEY
45.33.76.60 Preparing... ################################# [100%]
45.33.76.60 Updating / installing...
45.33.76.60 1:chef-12.10.24-1.el7 ################################# [100%]
45.33.76.60 Thank you for installing Chef!
45.33.76.60 Starting the first Chef Client run...
45.33.76.60 Starting Chef Client, version 12.10.24
45.33.76.60 resolving cookbooks for run list: []
45.33.76.60 Synchronizing Cookbooks:
45.33.76.60 Installing Cookbook Gems:
45.33.76.60 Compiling Cookbooks...
45.33.76.60 [2016-05-20T15:36:41+00:00] WARN: Node chefnode has an empty run list.
45.33.76.60 Converging 0 resources
45.33.76.60
45.33.76.60 Running handlers:
45.33.76.60 Running handlers complete
45.33.76.60 Chef Client finished, 0/0 resources updated in 08 seconds
[root@chefwork chef-repo]#
This command also installs the chef-client on the Chef node. You can verify it from the CLI on the workstation using the knife command below:
[root@chefwork chef-repo]# knife node list
chefnode
Let's see how we can add a cookbook to the node and manage its run list from the Chef server. In the management console, click the Actions tab and select the Edit Run List option to manage the run list.
Under Available Recipes, you can see our learn_chef_httpd recipe; drag it from the available packages to the current run list and save the run list.
Now log in to your node and just run the command chef-client to execute your run list.
[root@chefnode ~]# chef-client
Starting Chef Client, version 12.10.24
resolving cookbooks for run list: ["learn_chef_httpd"]
Synchronizing Cookbooks:
- learn_chef_httpd (0.2.0)
Installing Cookbook Gems:
Compiling Cookbooks...
Converging 4 resources
Recipe: learn_chef_httpd::default
* yum_package[httpd] action install
Similarly, we can add any number of nodes to your Chef server, depending on its configuration and hardware. I hope this article provided you with a basic understanding of a Chef implementation. I would welcome your valuable comments and suggestions on this. Thank you for reading :)
Apache Tomcat, commonly called Tomcat, is an open source web server and servlet container developed by the Apache Software Foundation. It is written in Java and released under the Apache License 2.0. It is a cross-platform application. Tomcat is actually composed of a number of components, including a Tomcat JSP engine and a variety of different connectors, but its core component is called Catalina. Catalina provides Tomcat's actual implementation of the servlet specification.
In this article, I'll provide guidelines to install, configure and create multiple instances of Tomcat 8 on Ubuntu 16.04. Let's walk through the installation steps.
Since Tomcat is written in Java, we need Java installed on our server prior to the installation.
Install Java
Tomcat 8 requires Java 7 or a later version to be installed on the server. I updated the packages on my Ubuntu server and installed the JDK packages using the commands below:
root@ubuntu:~# apt-get update
root@ubuntu:~# apt-get install default-jdk
Setting up default-jdk-headless (2:1.8-56ubuntu2) ...
Setting up openjdk-8-jdk:amd64 (8u91-b14-0ubuntu4~16.04.1) ...
update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/bin/appletviewer to provide /usr/bin/appletviewer (appletviewer) in auto mode
update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/bin/jconsole to provide /usr/bin/jconsole (jconsole) in auto mode
Setting up default-jdk (2:1.8-56ubuntu2) ...
Setting up gconf-service-backend (3.2.6-3ubuntu6) ...
Setting up gconf2 (3.2.6-3ubuntu6) ...
Setting up libgnomevfs2-common (1:2.24.4-6.1ubuntu1) ...
Setting up libgnomevfs2-0:amd64 (1:2.24.4-6.1ubuntu1) ...
Setting up libgnome2-common (2.32.1-5ubuntu1) ...
Setting up libgnome-2-0:amd64 (2.32.1-5ubuntu1) ...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for systemd (229-4ubuntu4) ...
Processing triggers for ca-certificates (20160104ubuntu1) ...
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
done.
Check and confirm the Java Version
After the installation process, just verify the Java version installed on your server.
root@ubuntu:~# java -version
openjdk version "1.8.0_91"
OpenJDK Runtime Environment (build 1.8.0_91-8u91-b14-0ubuntu4~16.04.1-b14)
OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)
Download / Install Tomcat
We can download the latest version of Tomcat from the Apache Tomcat downloads page. Download and extract it under the folder /opt/apache-tomcat8.
root@ubuntu:/opt# wget http://a.mbbsindia.com/tomcat/tomcat-8/v8.0.35/bin/apache-tomcat-8.0.35.zip
--2016-05-23 03:02:48-- http://a.mbbsindia.com/tomcat/tomcat-8/v8.0.35/bin/apache-tomcat-8.0.35.zip
Resolving a.mbbsindia.com (a.mbbsindia.com)... 103.27.233.42
Connecting to a.mbbsindia.com (a.mbbsindia.com)|103.27.233.42|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 9842037 (9.4M) [application/zip]
Saving to: ‘apache-tomcat-8.0.35.zip’
apache-tomcat-8.0.35.zip 100%[===================================================================>] 9.39M 4.46MB/s in 2.1s
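The extraction and user-setup steps were shown as images in the original; a sketch of what they would look like (the dedicated tomcat user and the exact commands are assumptions):

root@ubuntu:/opt# unzip apache-tomcat-8.0.35.zip
root@ubuntu:/opt# mv apache-tomcat-8.0.35 apache-tomcat8
root@ubuntu:/opt# useradd -m tomcat
root@ubuntu:/opt# chown -R tomcat:tomcat /opt/apache-tomcat8
root@ubuntu:/opt# chmod +x /opt/apache-tomcat8/bin/*.sh

The chmod is needed because unzip does not preserve the execute bits on the shell scripts.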
Now switch to the tomcat user and execute the startup.sh script inside the Tomcat bin folder, /opt/apache-tomcat8/bin/, to run the application.
tomcat@ubuntu:~/bin$ sh startup.sh
Using CATALINA_BASE: /opt/apache-tomcat8
Using CATALINA_HOME: /opt/apache-tomcat8
Using CATALINA_TMPDIR: /opt/apache-tomcat8/temp
Using JRE_HOME: /usr
Using CLASSPATH: /opt/apache-tomcat8/bin/bootstrap.jar:/opt/apache-tomcat8/bin/tomcat-juli.jar
Tomcat started.
Now we can access the URL http://serverip:8080 in a browser to confirm that Tomcat is working.
We can also confirm the status from the CLI with this command:
root@ubuntu:/opt# lsof -i :8080
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 22722 tomcat 53u IPv6 100669 0t0 TCP *:http-alt (LISTEN)
PS: To shut down the application you can use the shutdown.sh script inside the Tomcat bin folder.
root@ubuntu:/opt/apache-tomcat8# sh bin/shutdown.sh
Using CATALINA_BASE: /opt/apache-tomcat8
Using CATALINA_HOME: /opt/apache-tomcat8
Using CATALINA_TMPDIR: /opt/apache-tomcat8/temp
Using JRE_HOME: /usr
Using CLASSPATH: /opt/apache-tomcat8/bin/bootstrap.jar:/opt/apache-tomcat8/bin/tomcat-juli.jar
May 24, 2016 3:32:35 AM org.apache.catalina.startup.Catalina stopServer
SEVERE: Could not contact localhost:8005. Tomcat may not be running.
May 24, 2016 3:32:36 AM org.apache.catalina.startup.Catalina stopServer
SEVERE: Catalina.stop:
Tomcat Web Application Manager
In a production environment, it is very useful to have the capability to deploy a new web application, or undeploy an existing one, without having to shut down and restart the entire server. In addition, you can reload an existing application itself, even without declaring it to be reloadable in the Tomcat server configuration file.
This Management Web console supports the following functions:
Deploy a new web application from an uploaded WAR file, or on a specified context path from the server file system
List the currently deployed web applications and the sessions that are currently active
Reload an existing web application, to reflect changes in the contents of /WEB-INF/classes or /WEB-INF/lib
Get server information about the OS and JVM
Start and stop an existing web application (stopping makes it unavailable, but does not undeploy it)
Undeploy a deployed web application and delete its document base directory
We can create users to manage the Tomcat Web Application Manager by editing the Tomcat user configuration file, conf/tomcat-users.xml.
I appended lines to the Tomcat user configuration file to create two users, manager and admin, with passwords of my choosing.
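The exact lines are not reproduced here; entries of the following shape are what conf/tomcat-users.xml expects, using the standard Tomcat 8 manager-gui and admin-gui roles (the passwords below are placeholders you should replace):
<role rolename="manager-gui"/>
<role rolename="admin-gui"/>
<user username="manager" password="manager_password_here" roles="manager-gui"/>
<user username="admin" password="admin_password_here" roles="manager-gui,admin-gui"/>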
Tomcat uses a password-protected "keystore" file for its SSL key material. We need to create a keystore file holding the server's private key and a self-signed certificate by executing the following command:
root@ubuntu:/usr/local# keytool -genkey -alias tomcat -keyalg RSA -keystore /usr/local/keystore
Enter keystore password:
Re-enter new password:
What is your first and last name?
[Unknown]: Saheetha Shameer
What is the name of your organizational unit?
[Unknown]: VIP
What is the name of your organization?
[Unknown]: VIP
What is the name of your City or Locality?
[Unknown]: Kochi
What is the name of your State or Province?
[Unknown]: Kerala
What is the two-letter country code for this unit?
[Unknown]: IN
Is CN=Saheetha Shameer, OU=VIP, O=VIP, L=Kochi, ST=Kerala, C=IN correct?
[no]: yes
Enter key password for <tomcat>
(RETURN if same as keystore password):
Options:
-genkeypair : Generate key pair
-keyalg : Key algorithm
-keystore : Keystore file path
After entering the details for generating the certificate, you can edit the Tomcat server configuration to enable SSL/TLS support, pointing it to the keystore file.
We need to add a section like the following to the Tomcat server configuration file, conf/server.xml.
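The snippet itself is not shown above; a typical Tomcat 8 HTTPS connector pointing at the keystore we just generated would look like this (the keystore password is a placeholder):
<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
           keystoreFile="/usr/local/keystore" keystorePass="your_keystore_password"
           clientAuth="false" sslProtocol="TLS" />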
In order to create multiple Tomcat instances, you can download and extract the Tomcat application to a different folder. I extracted the contents to a different folder namely /opt/apache-tomcat8-2. After extracting the files, we need to make proper changes to the Tomcat Server configuration file for modifying the Connector ports and other important ports for the application to avoid conflicts with the existing application.
The following changes were applied to the Tomcat server configuration file, conf/server.xml.
1. Modified the shutdown port from 8005 to 8006
<Server port="8005" shutdown="SHUTDOWN">
to
<Server port="8006" shutdown="SHUTDOWN">
2. Modified the connector port from 8080 to 8081
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
<!-- A "Connector" using the shared thread pool-->
<!--
to
<Connector port="8081" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
<!-- A "Connector" using the shared thread pool-->
<!--
tomcat@ubuntu:/opt/apache-tomcat8-2/bin$ sh startup.sh
Using CATALINA_BASE: /opt/apache-tomcat8-2
Using CATALINA_HOME: /opt/apache-tomcat8-2
Using CATALINA_TMPDIR: /opt/apache-tomcat8-2/temp
Using JRE_HOME: /usr
Using CLASSPATH: /opt/apache-tomcat8-2/bin/bootstrap.jar:/opt/apache-tomcat8-2/bin/tomcat-juli.jar
Tomcat started.
Verify the second Tomcat instance on port 8081 at the URL http://SERVERIP:8081
That's it! You're done with the basics of Tomcat installation. I hope you enjoyed reading this article, and I welcome your suggestions and comments on it. Thank you for reading this :)
Hello and welcome; today's article is on the installation and configuration of Bacula (an open source network backup solution) on Ubuntu 15.10/16.04. You can use it to manage backup, recovery, and verification of computer data across a network of computers of different kinds. Bacula is relatively easy to use and efficient, while offering many advanced storage management features that make it easy to find and recover lost or damaged files. Due to its modular design, Bacula is scalable from small single-computer systems to systems consisting of hundreds of computers located over a large network.
Bacula is composed of several software components including backup server and the backup clients.
A Bacula server, which we will refer as the "backup server", has these components:
Bacula Director (DIR): Software that controls the backup and restore operations that are performed by the File and Storage daemons
Catalog: Services that maintain a database of files that are backed up. The database is stored in an SQL database such as MySQL or PostgreSQL
Storage Daemon (SD): Software that performs reads and writes on the storage devices used for backups
Bacula Console: A command-line interface that allows the backup administrator to interact with, and control, Bacula Director
A Bacula client (backup clients) is a server that will be backed up and runs the File Daemon (FD) component. The File Daemon is software that provides the Bacula server access to the data that will be backed up.
Prerequisites:
We are going to install and configure Bacula on an Ubuntu 15.10 server, but you can follow the same instructions on earlier releases such as Ubuntu 15.04; you might run into compatibility issues on Ubuntu 16.04.
Log in to your Ubuntu server using your root credentials, give it an IP address and configure its FQDN. Make sure you have an Internet connection, then update your system with the below commands.
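The update commands themselves are not shown above; on Ubuntu they are simply:
# apt-get update
# apt-get upgrade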
Once your system is back with latest updates and security patches then proceed to the next step.
Installing MySQL
Bacula uses an SQL database to manage its information. You can use either a MySQL or PostgreSQL database, but in this article we are going to use MySQL. To install MySQL on your Ubuntu server, just run the below command in your terminal.
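The command is not shown above; on Ubuntu it is a single package install:
# apt-get install mysql-server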
During the MySQL installation, you'll be asked to set the database administrator password. While not mandatory, it is highly recommended to set a password for the MySQL administrative "root" user; enter it, select Ok, then repeat the same password when prompted.
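Next comes Bacula itself. The install command is not shown above; a sketch, assuming the stock Ubuntu package names:
# apt-get install bacula-server bacula-client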
As the installation proceeds, you will be prompted for some information used to configure the Postfix MTA, which Bacula uses by default. Choose "Internet Site" as the general type of mail configuration and click 'OK'. You are also free to select whichever mail server configuration type best meets your needs.
Once again set the MySQL application password for bacula-director-mysql to register with the database server. If left blank, a random password will be generated.
We are done with the installation of Bacula and its components; now we will create the backup and restore directories.
Create Backup and Restore Directories:
Bacula needs a backup directory for storing backup archives and a restore directory where restored files will be placed. So, if your system has multiple partitions, make sure to create the directories on one of the larger ones.
Run the commands below to create new directories for both backup and restore points.
# mkdir -p /b_backup/backup /b_backup/restore
Set the ownership and then permissions to the above directories using below commands.
# chown -R bacula:bacula /b_backup/
# chmod -R 700 /b_backup/
Configuring Bacula
All the configuration files of Bacula can be found in the '/etc/bacula' directory. Bacula has several components that must be configured independently in order to function correctly.
First open the below file to Update Bacula Director configuration.
# vim /etc/bacula/bacula-dir.conf
Update the restore path by finding the below path in your configuration file. In our case, /b_backup/restore is the restore location.
Job {
Name = "RestoreFiles"
Type = Restore
Client=k_ubuntu-fd
FileSet="Full Set"
Storage = File
Pool = Default
Messages = Standard
Where = /b_backup/restore
}
Now scroll down to the "list of files to be backed up" section, and set the path of the directory to back up.
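The edited section is not reproduced above; as a sketch, the FileSet resource in bacula-dir.conf looks like the following, with the File directives listing the paths to back up (the /etc and /home paths are just examples):
FileSet {
  Name = "Full Set"
  Include {
    Options {
      signature = MD5
    }
    File = /etc
    File = /home
  }
}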
Save and close the file after making above changes and move to the next step.
Update Bacula Storage Daemon settings:
Edit the /etc/bacula/bacula-sd.conf file in your editor with the below configuration to set the backup folder location, which is /b_backup/backup in our case.
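The relevant part of bacula-sd.conf is the Device resource; a sketch with the Archive Device pointed at our backup directory (the other values are the stock defaults):
Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /b_backup/backup
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
}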
Once you have made all the changes, restart all Bacula services.
# systemctl restart bacula-director
# systemctl restart bacula-fd
# systemctl restart bacula-sd
That's it. Bacula has now been installed and configured successfully.
Testing Backup Job
After restarting services, let's test that it works by running a backup job.
We will use the Bacula Console to run our first backup job. If it runs without any issues, we will know that Bacula is configured properly. Enter the Console with below command.
# bconsole
This will take you to the Bacula Console prompt, denoted by a * prompt. Create a Label by issuing a label command. Then you will be prompted to enter a volume name and select the pool that the backup should use. We'll use the "File" pool that we configured earlier, by entering "2".
At this point Bacula now knows how we want to write the data for our backup. We can now run our backup to test that it works correctly using 'run' command then you will be prompted to select which job to run. We want to run the "BackupLocalFiles" job, so enter "1" at the prompt. At the "Run Backup job" confirmation prompt, review the details, then enter "yes" to run the job as you will see a new message as shown below.
After running a job, Bacula will tell you that you have messages. The messages are output generated by running jobs. Check them by typing 'messages'.
The "OK" status indicates that the backup job ran without any problems. Congratulations! You have a backup of the "Full Set" of your Bacula server.
Testing Restore Job
Now that a backup has been created, it is important to check that it can be restored properly. The restore command will allow us restore files that were backed up. To demonstrate, we'll restore all of the files in our last backup.
* restore all
A selection menu will appear with many different options, which are used to identify which backup set to restore from. Since we only have a single backup, let's "Select the most recent backup" (option 5). When you are finished making your restore selection, proceed by typing 'done' as shown below.
Managing Bacula via the command line might be a bit difficult for some administrators, but in that case you have the option to use Webmin, so you don't have to remember all the commands or edit any configuration files manually.
Conclusion
That's it. In this article you have learned the basic Bacula setup and how it can back up and restore your local file system. You can also add your other servers as backup clients so you can recover them in case of data loss. Do share your comments and suggestions. Thank you for reading this article.
Setting up the correct timezone and date in Linux is very important, as many things depend on it; the system clock needs to be accurate whether you are using Linux on your personal computer or running a Linux server in production. NTP (Network Time Protocol) enables the synchronization of computer clocks distributed across the network, ensuring accurate local timekeeping with reference to time sources on the Internet. NTP communicates between clients and servers using the User Datagram Protocol on port 123. NTP uses a systematic, hierarchical level of clock sources for its reference. Each level is called a stratum and has a layer number that usually begins with zero. The stratum level serves as an indicator of the distance from the reference clock, in order to avoid cyclic dependencies in the hierarchy. However, the stratum does not represent the quality or reliability of the time.
The NTP software package includes a background program known as a daemon or service, which synchronizes the computer's clock to a particular reference time, such as a radio clock or a device connected to the network. Now let's see how to set up the timezone and NTP synchronization on Ubuntu 16.04.
Setup Timezone:
Let's start with setting up the correct timezone on your Ubuntu server. Run the below command with root user credentials and you will be presented with a menu system that allows you to select the geographic region of your server. Select the geographic area in which you live and press 'OK' to continue.
After selecting a city within your region, press 'OK' and your system will be updated to use the selected timezone; you will get the below output showing your default zone and date.
root@ubuntu-16:~# dpkg-reconfigure tzdata
Current default time zone: 'Europe/London'
Local time is now: Tue May 24 21:00:31 BST 2016.
Universal Time is now: Tue May 24 20:00:31 UTC 2016.
You can also configure the timezone on your Ubuntu server using the timedatectl command. To list all available timezones, run the below command.
# timedatectl list-timezones
Then select your desired timezone from the listed output and run below command to configure it on your system.
# timedatectl set-timezone Europe/London
Now you can verify that the timezone has been set properly by using the 'timedatectl' command in your terminal, which prints information about your timezone settings as shown.
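The installation command for NTP is not shown above; on Ubuntu, installing the NTP daemon is a single command:
# apt-get install ntp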
That's it; you have successfully installed the NTP package and set up NTP synchronization on Ubuntu 16.04. The daemon will start automatically on each boot and will continuously adjust the system time to stay in line with the global NTP servers throughout the day.
Configure NTP Servers:
To configure and change the default NTP servers, open up the below configuration file and find the section within the configuration that lists the NTP Pool Project servers as shown.
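The file in question is /etc/ntp.conf; on Ubuntu 16.04 the default NTP Pool Project entries look like this:
# vim /etc/ntp.conf
pool 0.ubuntu.pool.ntp.org iburst
pool 1.ubuntu.pool.ntp.org iburst
pool 2.ubuntu.pool.ntp.org iburst
pool 3.ubuntu.pool.ntp.org iburst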
These lines refer to a set of hourly-changing random servers, located all around the world, that provide your server with the correct time. You can list the servers in use with the below command.
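The original command was lost; listing the peers the daemon is actually using with ntpq is one way to do it (an assumption on my part):
# ntpq -p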
When you made any changes to the configuration, make sure to restart ntp service with below command.
# service ntp restart
Conclusion:
That's it. In this article you have learned about timezone and NTP synchronization on Ubuntu 16.04. NTP can be easily deployed on servers hosting different services, with little resource overhead and minimal bandwidth requirements; it can handle hundreds of clients at a time with minimum CPU usage. Thank you for reading this article, I hope you find it helpful. Do share your thoughts on it.
Hello and welcome to today's article on another open source network monitoring tool, Cacti. Cacti is a complete network graphing solution designed around RRDtool's data storage and graphing functionality. It can graph network bandwidth with SNMP, shell or Perl scripts. RRDtool is a program developed by Tobi Oetiker, the Swiss engineer who also created the famous MRTG. RRDtool is written in the C programming language and stores the collected data in ".rrd" files. The number of records in a ".rrd" file never increases, meaning that old records are regularly consolidated away. This implies that one obtains precise figures for recently logged data, whereas figures based on very old data are mean-value approximations. By default, you can have daily, weekly, monthly and yearly graphs.
Some of the primary features of Cacti are the following:
unlimited graph items
flexible data sources
custom data-gathering scripts
built-in SNMP support
graph templates
data source templates
host templates
user-based management and security
tree, list, and preview views of graph data
auto-padding support for graphs
graph data manipulation
Using Cacti you can easily monitor the performance of your computers, networks, servers, routers, switches, services (Apache, MySQL, DNS, hard disks, mail servers), SANs, applications, weather measurements, etc. Cacti's installation is very simple and you don't need to be an expert to complete its setup. You can also add plugins to Cacti to integrate other free tools like ntop or PHP Weathermap.
1) Prerequisites:
The basic requirement for Cacti is that you must have LAMP stack setup on your server, before getting started with the installation of Cacti. Login to your Ubuntu server and run below command to update your Ubuntu server.
# apt-get update
# apt-get upgrade
Before installing the LAMP packages, please note that Cacti does not support MySQL Server 5.7 as yet. So, we will be using MySQL Server 5.6 by adding its repository and then updating the system with the below commands.
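The commands are not shown above; a sketch using the same PPA this series uses elsewhere for MySQL 5.6 (the PPA name is an assumption):
# add-apt-repository -y ppa:ondrej/mysql-5.6
# apt-get update
# apt-get install mysql-server-5.6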
During the installation process, you will be asked to configure the root password of MySQL server. Press 'OK' after setting up the password and then repeat the same upon next prompt.
We need to install a few other packages that are necessary for a fully functional Cacti setup; to monitor the 'localhost' where Cacti is installed, you also need to install and configure the 'snmpd' service.
Run the below command to install these packages on your Ubuntu 16.04 server and press 'Y' key to continue.
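The package list is not shown above; a plausible minimal set for an SNMP-monitored Cacti host would be (the exact package names are assumptions):
# apt-get install snmp snmpd rrdtool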
Now we can start Cacti installation as we have completed all of its required dependencies. Issue the below command to start installing Cacti packages and press 'Y' to continue.
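The command itself is not shown above; the stock Ubuntu packages are installed like this (cacti-spine is the optional fast poller):
# apt-get install cacti cacti-spine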
During the installation process you will be prompted to configure Cacti with a few options. First of all, choose the web server that you wish to configure with Cacti; we are using Apache. Then press 'OK' to continue.
Next, set up the database that is going to be used for Cacti. Choose 'No' if you have already configured the database, or 'Yes' to set it up using dbconfig-common as shown.
Now check below and make sure all of these values are correct before continuing. If everything looks OK and there is no error in your installation, then hit Finish.
Now add new devices, or create new graphs. To view graphs of your localhost system, click on the graphs button and you will see multiple graphs of your local host server showing your system memory usage and load average etc.
In this article you learned about the installation and configuration of Cacti on Ubuntu 16.04. Now you are able to use it in your own environment to graph CPU and network bandwidth utilization. You can also use it to monitor network traffic by polling a router or switch via SNMP. I hope you have enjoyed it a lot; do not forget to share your thoughts. Thank you.
Hello, today we are going to talk about Wayland, the new windowing protocol for the Linux operating system. It is maintained by the people at freedesktop.org, who also help with X development. The main motivation for creating a new windowing protocol is that X became complex; legacy technology and concepts made it hard to improve. Wayland was designed with a new, simpler architecture that performs better and is easier to develop for.
Wayland is still in development and will not replace X as the main windowing system anytime soon. The fact is that X is a mature system; it simply works and is expected to be found on Linux/Unix systems. So even when Wayland becomes the main windowing system for unices, they should keep X around as a legacy option.
Although Wayland is still under development, you can give it a try right now, and that is what we are going to do.
Installing Wayland and Weston packages
Here is how to install the required packages to run Wayland and its reference compositor, Weston.
First you should update your system
pacman -Syu
Now install wayland, which will also install libxml if it is not present.
pacman -S wayland
Then install Weston, the reference compositor and window manager on which Wayland clients (applications) run.
pacman -S weston
During the installation, pacman will ask you to select which package will provide libgl; this depends on your video card. The following image shows the options.
You can run weston in different ways: directly as the main backend, or from within another windowing system.
Running within X
You can run weston from within X by calling weston from an xterm session.
weston
As with X, you can start a weston instance within another; once again, just call weston. The following image shows a stack of Weston running within Weston within Weston within X.
Now let's try to run weston as our main backend. From one of the system's virtual terminals, call weston-launch.
weston-launch
Configure Weston
As you have seen, you can run weston without any extra configuration; however, you can set some options to customize it and make it work better.
weston.ini
To configure weston you create or change the weston.ini file, which usually lives at ~/.config/weston.ini. Here I will show some options you can use to make such changes.
The core section is used to select the startup compositor modules and general options.
[core]
backend=drm-backend.so
The keyboard section is used to select keyboard options, such as layout and variant.
In the output section you set how things are displayed on the monitor; the following options are commonly used. Note the different uses of the mode option, which works like the mode settings in xorg.conf.
These were some of the main options, which should be enough to test weston; for more details, take a look at the weston.ini man page. Anyway, here is a complete weston.ini example with some other settings.
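The example itself is not reproduced above; a small weston.ini of the kind described, with illustrative values for the keyboard, output and shell sections (the monitor name and mode are assumptions for your hardware):
[core]
backend=drm-backend.so
idle-time=30

[keyboard]
keymap_layout=us

[output]
name=LVDS1
mode=1366x768

[shell]
panel-position=top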
Along with Wayland and Weston, some utilities will be installed. These are nice Wayland client examples that show some of the Wayland and Weston capabilities.
weston-flower - Draw and drag some random flowers on the screen
weston-smoke - Draw smoke that follows the cursor
weston-editor - Very simple text editor example
weston-image - Simple image viewer, just call it with the image path as the first argument
weston-terminal - Simple terminal shell
There are other applications included, but they are not so fun, mostly useful for development/debugging purpose.
Well, this is it: Wayland is working, and you can run Weston from within X, directly, or from within another Weston session. There are already more native clients, as well as ways to run X applications on top of Wayland. It is time to take a look, play around, and maybe even develop your own Wayland clients.
Apache CouchDB is an open source database management system that uses JSON for documents, JavaScript for MapReduce indexes, and regular HTTP for its API. It is widely known as a NoSQL database system. It has a document-oriented NoSQL architecture and is implemented in the concurrency-oriented language Erlang. CouchDB doesn't store data and relationships in tables; instead, it stores all data independently in documents, and each document maintains its own data and self-contained schema. Futon is the native web-based interface built into CouchDB, which provides a good interface for creating or deleting databases and managing individual CouchDB documents. In this article, we'll install CouchDB and Futon on a machine running Ubuntu 16.04 LTS Xenial.
Pre-requisites
Before we get started, we'll need to make sure that we have Ubuntu 16.04 LTS Xenial on our machine or server, as we are featuring the CouchDB installation on Ubuntu 16.04 LTS Xenial. If we don't have one and are planning to install Ubuntu 16.04, we can download it from the official Ubuntu download page. Once we have Ubuntu 16.04 ready to go, we'll first need to update the local repository index of the apt package manager. If we are logged in as the root user, we don't need to prepend sudo every time we run a command, but as we are running as a non-root user, we'll need sudo on every command that requires root privileges.
$ sudo apt-get update
Once the local repository index of the apt package manager has been updated, we'll upgrade the packages of our Ubuntu system using the following command.
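The command is simply:
$ sudo apt-get upgrade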
Once we meet our prerequisites, we'll move forward to the installation of CouchDB and Futon. As there is an Ubuntu PPA repository for Apache CouchDB maintained and updated by the CouchDB project and community, we'll go for it; installing CouchDB from the PPA is the easiest and simplest way to install an official release. First of all, we'll need to make sure that the package named software-properties-common is installed, so that we can easily add the PPA repository to our Ubuntu machine. To install it and add the repository, we'll execute the following commands.
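The commands are not shown above; a sketch, assuming the project's ppa:couchdb/stable PPA:
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:couchdb/stable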
Then, we'll need to update the local repository index of apt package manager as we have added new PPA repository above.
$ sudo apt-get update
Installing CouchDB
Now, we'll go for the installation of CouchDB, as all the above steps have been completed. To install CouchDB from the official PPA repository, we can simply run the following apt-get command in our terminal. This will install couchdb with its required dependencies from their respective repositories.
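The command is simply:
$ sudo apt-get install couchdb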
As some files and directories are owned by the root user and group by default, this can be a security risk in production, so it is highly recommended to fix the permissions. To do that, we'll need to change the ownership of the files to the couchdb user and group. As the couchdb user and group are already created during the installation above, we don't need to create them. To change the ownership, we'll simply run the following command.
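The exact paths depend on how the package lays out its files; for the PPA package they are typically the following (treat the paths as assumptions and adjust them to your install):
$ sudo chown -R couchdb:couchdb /usr/lib/couchdb /usr/share/couchdb /etc/couchdb /var/lib/couchdb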
Once all the steps above are done successfully, we'll restart our CouchDB instance. As we are running Ubuntu 16.04 LTS Xenial, which ships with systemd as the default init system, we'll run the following command.
$ sudo systemctl restart couchdb
To test whether CouchDB is running fine, we can simply run the following command, which retrieves its information through curl.
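The check is a one-liner; a healthy instance answers with a short JSON welcome document:
$ curl http://localhost:5984/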
As CouchDB natively includes Futon, its web interface, we can simply access it via a web browser. To do so, we'll first set up SSH tunneling, since exposing Futon through the firewall right now would be dangerous, as we haven't set proper admin credentials yet. To set up SSH tunneling, we'll run the following command.
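The command follows the usual ssh -L port-forwarding form (see the note below for the placeholders):
$ ssh -L 5984:127.0.0.1:5984 arun@ip-address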
Note: Here we'll need to replace arun and ip-address with username and ip address of the server respectively.
Now, as we have successfully setup the SSH tunneling, we'll now access the web interface of CouchDB. To do so, we'll need to open a web browser and point it to http://localhost:5984 .
Then, in order to access the web application of Futon, we'll need to point to http://localhost:5984/_utils/index.html . Once done, we'll get access to the Futon Database Administration Panel in which we can perform different CouchDB database management activities.
As we don't require any login credentials to access the Futon panel, and as every account in it is an admin account, anyone reaching Futon can make changes to the database. So, first of all, we'll need to secure it by creating a new admin account. To do so, we can simply click on the Fix it link shown at the bottom of the right sidebar.
Doing so will open a dialogue box allowing us to create a new admin account. Here, we'll need to enter the required username and password which we'll use further to login to Futon.
Now, in order to create a database, we'll need to login to Futon Control Panel using the username and password created above. Then, we'll click on Create Database button available on the top-left of the screen. Then, we'll be asked to enter a name for our new database to be created. Next, we can add new documents, edit, delete, update and save the documents via Futon easily.
If you need to make CouchDB accessible outside of the local network or local machine, first make sure that the steps above on securing Futon are completed. Then, we'll need to set the bind_address variable to 0.0.0.0 in the /etc/couchdb/local.ini file under the [httpd] block. To do so, we'll need to log in as the root user and open the file using a text editor.
Here, we can customize our configurations according to our needs and requirements. Once done, we'll gonna save the file and exit the text editor. Once done, we can simply log out from the root user by executing exit command in the terminal.
Now, in order to apply the changes, we'll need to restart our CouchDB services using systemctl command.
$ sudo systemctl restart couchdb
Allowing Firewall
As we are making CouchDB available outside our local network, we'll also need to make sure that port 5984 is opened by the firewall program. In this setup we are assuming firewalld is installed and in use as the firewall program.
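Opening the port with firewalld looks like this (again assuming firewalld is running on your machine):
$ sudo firewall-cmd --zone=public --permanent --add-port=5984/tcp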
Once the port is added for public access, we'll need to make sure that we reload the firewalld program.
$ sudo firewall-cmd --reload
Note: Making CouchDB accessible through the Internet lets anyone read the databases and add documents, though they cannot edit or delete documents, since we created an admin user above. So, it's not recommended to expose CouchDB externally; if we need remote access, we can use SSH tunneling or allow only a specific IP address to connect via iptables or the firewall program.
Conclusion
Finally, we have easily and successfully installed CouchDB, with its web-based interface Futon, on our machine running Ubuntu 16.04 LTS Xenial. Documents and databases in CouchDB can easily be accessible to everyone, so we'll need to make sure that our database is not reachable by the public or untrusted people. We can even install it manually using the tarballs available on the official download page, but as we are running Ubuntu 16.04 LTS Xenial, it's pretty easy to install using the PPA repository. So, if you have any questions, suggestions, or feedback, please write them in the comment box below. Thank you! Enjoy :-)
RainLoop Webmail is an open source web application written in PHP. It is a simple, fast, web-based email client. It provides a fast web interface to access your emails on almost all major mail providers like Yahoo, Gmail and Outlook, as well as your own mail servers. These are some of the main features of this email client.
1. Modern user interface with efficient memory use which can work on low-end webservers.
2. Provides complete support of IMAP and SMTP protocols including SSL and STARTTLS.
3. Minimum resource requirements.
4. Provides interface to set filters.
5. Direct access to the mail server, no storing of emails locally on webservers
6. It allows adding multiple domain accounts.
7. Really simple and fast installation.
8. It can be integrated with Facebook, twitter, Dropbox and Google.
9. Built-in caching system allows for improving overall performance and reducing load on web and mail servers.
In this article, I'm providing the guidelines on how to install RainLoop Webmail on Ubuntu 16.04. Let's see the pre-requisites for the installation.
Pre-requisites
This application requires a LAMP setup prior to the installation.
This works with any of these Web servers: Apache, nginx, lighttpd, MS IIS or other with PHP support
As mentioned above, RainLoop Webmail is based on PHP, so it is recommended to have a web server installed with a fully functional PHP to make it work. I've installed Apache, PHP and MySQL on my server prior to the installation. I'll brief the installation steps one by one here.
root@ubuntu:/var/#apt-get install python-software-properties *//Install the Python Software packages//*
root@ubuntu:/var/#apt install software-properties-common
root@ubuntu:/var# add-apt-repository ppa:ondrej/php
root@ubuntu:/var#apt-get update *//Update the Softwares//*
root@ubuntu:/var#apt-get install -y php7.0 *// Install PHP //*
Processing triggers for man-db (2.7.5-1) ...
Setting up php7.0-common (7.0.4-7ubuntu2.1) ...
Setting up php7.0-mcrypt (7.0.4-7ubuntu2.1) ...
Setting up php7.0-imap (7.0.4-7ubuntu2.1) ...
Setting up php7.0-xml (7.0.4-7ubuntu2.1) ...
Setting up php7.0-readline (7.0.4-7ubuntu2.1) ...
Setting up php7.0-opcache (7.0.4-7ubuntu2.1) ...
Setting up php7.0-odbc (7.0.4-7ubuntu2.1) ...
Setting up php7.0-mysql (7.0.4-7ubuntu2.1) ...
Setting up php7.0-json (7.0.4-7ubuntu2.1) ...
Setting up php7.0-curl (7.0.4-7ubuntu2.1) ...
Setting up php7.0-cli (7.0.4-7ubuntu2.1) ...
Setting up php7.0-fpm (7.0.4-7ubuntu2.1) ...
Setting up php7.0 (7.0.4-7ubuntu2.1) ...
root@ubuntu:/var# add-apt-repository ppa:ondrej/apache2 *// Add the latest packages for Apache2 //*
root@ubuntu:/var/#apt-get update
root@ubuntu:/var/#apt-get install apache2 *//Install Apache2 //*
root@ubuntu:/var/#add-apt-repository -y ppa:ondrej/mysql-5.6 *//Add the packages for MySQL 5.6 //*
root@ubuntu:/var/# apt-get update
root@ubuntu:/var/# apt-get install mysql-server-5.7 *//Install MySQL 5.7 //*
root@ubuntu:/var/#apt-get install libapache2-mod-php7.0 php7.0-mysql php7.0-curl php7.0-json
Confirming the Installations
After the installation, we need to confirm the installed Apache, PHP and MySQL versions.
root@ubuntu:~# apache2 -v
Server version: Apache/2.4.18 (Ubuntu)
Server built: 2016-04-15T18:00:57
root@ubuntu:~# php -v
PHP 7.0.4-7ubuntu2 (cli) ( NTS )
Copyright (c) 1997-2016 The PHP Group
Zend Engine v3.0.0, Copyright (c) 1998-2016 Zend Technologies
with Zend OPcache v7.0.6-dev, Copyright (c) 1999-2016, by Zend Technologies
root@ubuntu:~# mysql -v
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 7
Server version: 5.7.12-0ubuntu1 (Ubuntu)
Adding to the hosts file
We need to add proper hosts file entry to make it resolve as required.
cat /etc/hosts
139.162.55.62 rainloop.webmail.com
Creating the Virtual Host
Now we can create the virtual host for the domain. In addition, make sure to create the document root folder and error log folder mentioned in the virtual host, if it's not created before.
For adding an SSL, we need to first generate a self signed certificate for our hostname "rainloop.webmail.com" and then add it in the Virtual host to enable the SSL support. Let's see how to create a self signed certificate.
root@ubuntu:/var/#openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/httpd/conf/ssl/rainloop.webmail.com.key -out /etc/httpd/conf/ssl/rainloop.webmail.com.crt
Generating a 2048 bit RSA private key
....................................................................................+++
.....................+++
writing new private key to '/etc/httpd/conf/ssl/rainloop.webmail.com.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:IN
State or Province Name (full name) [Some-State]:K-----A
Locality Name (eg, city) []:C-----N
Organization Name (eg, company) [Internet Widgits Pty Ltd]:VIP
Organizational Unit Name (eg, section) []:VIP
Common Name (e.g. server FQDN or YOUR name) []:rainloop.webmail.com
Email Address []:-----@gmail.com
As seen here, you provide the details required to generate the certificate. Once it is created, you can add the certificate paths to our Apache configuration file, as below:
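The opening of the VirtualHost is not reproduced above; a sketch of what it would look like for our hostname and the certificate files generated earlier (the DocumentRoot is an assumption), which the Directory block below then completes:
<VirtualHost *:443>
ServerName rainloop.webmail.com
DocumentRoot /var/www/rainloop
SSLEngine on
SSLCertificateFile /etc/httpd/conf/ssl/rainloop.webmail.com.crt
SSLCertificateKeyFile /etc/httpd/conf/ssl/rainloop.webmail.com.key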
<Directory />
Options +Indexes +FollowSymLinks +ExecCGI
AllowOverride All
Order deny,allow
Allow from all
Require all granted
</Directory>
</VirtualHost>
Enabling SSL for the Vhost
We can use the a2ensite command on Ubuntu to enable the SSL VirtualHost for the domain.
root@ubuntu:/etc# a2ensite rainloop-ssl
Site rainloop-ssl already enabled
Modify the open_basedir value in the PHP configuration file
I've installed PHP 7 on my server, so the PHP configuration file is located at /etc/php/7.0/fpm/php.ini. You need to modify the open_basedir value in the PHP configuration file to limit file operations to the listed directories.
root@ubuntu:~# grep open_basedir /etc/php/7.0/fpm/php.ini
; open_basedir, if set, limits all file operations to the defined directory
open_basedir = /srv/http/:/home/:/tmp/:/usr/share/pear/:/usr/share/webapps/:/etc/webapps/:/var/www/
After modifying the PHP configuration file, we need to restart the Apache service to make it effective.
Confirm the status of required PHP modules
This webmail application requires certain PHP modules to be enabled on the server. Please confirm that these modules are enabled on your server.
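The module list itself is not shown above; as a quick check, you can grep the loaded modules for the ones RainLoop commonly needs (the exact list here is an assumption):
root@ubuntu:~# php -m | grep -Ei 'curl|iconv|json|xml|openssl|mysql'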
You can manage the Webmail interface either from the Web interface or by modifying the variables in the file /var/www/rainloop/data/_data_dc70aaa98299c32ee3d3ee747f40c63b/_default_/configs/application.ini.
The web interface provides user-friendly access to modify the settings. We can access the admin interface at the URL http://rainloop.webmail.com/?admin with the default user/password; the default admin password is "12345".
If you don't have a database, you can create one named rainloop and provide the required access. All your email contacts and filters will be saved in this database.
I installed the plugin called POPPASSD, which gives me an option to change my email account password. After enabling the required settings, you can access your mail server domain in the browser at the URL http://rainloop.webmail.com/
You can get more information regarding RainLoop Webmail here. Howdy! Your new advanced email client is ready to use now. Thank you for reading this; I hope you enjoyed the article. I would appreciate your valuable comments and suggestions on it.
Jenkins is an open source continuous integration tool used for continuous build, continuous deployment and testing across multiple servers. It is a self-contained web-based program, ready to run out of the box, with packages for Windows, Mac OS X and Linux operating systems. It is a web application built in Java, and it performs these tasks automatically once the configurations are added. In this article, I'm providing guidelines on how to install and configure Jenkins on your Ubuntu 16.04 server.
Pre-requisites
1. A Web Server (Apache/Nginx/Tomcat)
2. Web-Browser
3. Java Platform
Let's start with the installation steps one by one
Install Java
Since this web application is built on the Java platform, the server needs the latest Java Development Kit installed. I've used the commands below to install Java on my server.
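The commands are not shown above; given the OpenJDK 8 output that follows, the stock Ubuntu package was presumably used:
root@ubuntu:~# apt-get update
root@ubuntu:~# apt-get install default-jdk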
You can confirm the Java Version after installing.
root@ubuntu:~# java -version
openjdk version "1.8.0_91"
OpenJDK Runtime Environment (build 1.8.0_91-8u91-b14-0ubuntu4~16.04.1-b14)
OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)
Install Apache2
Every web application requires a web server to serve the application. I'm using the Apache web server for this purpose. We can install Apache with this command.
root@ubuntu:~#apt-get install apache2
root@ubuntu:~# apache2 -v
Server version: Apache/2.4.20 (Ubuntu)
Server built: 2016-05-05T15:42:04
root@ubuntu:~#
Installing Jenkins
Before installing Jenkins, we need to add keys and Jenkins packages to the source list.
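The key import is not shown above; the project's historical instructions for this repository used the following command before creating the source list (run apt-get update after the list below is in place):
root@ubuntu:/usr/local/src# wget -q -O - https://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | apt-key add -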
root@ubuntu:/usr/local/src# sh -c 'echo deb http://pkg.jenkins-ci.org/debian binary/ > /etc/apt/sources.list.d/jenkins.list'
root@ubuntu:/usr/local/src# cat /etc/apt/sources.list.d/jenkins.list
deb http://pkg.jenkins-ci.org/debian binary/
root@ubuntu:/etc/apt# apt-get install jenkins
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
daemon jenkins
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 68.1 MB of archives.
After this operation, 69.2 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://mirrors.linode.com/ubuntu xenial/universe amd64 daemon amd64 0.6.4-1 [98.2 kB]
10% [Connecting to ftp.icm.edu.pl (2001:6a0:0:31::2)]
10% [Connecting to ftp.icm.edu.pl (2001:6a0:0:31::2)]
10% [Connecting to ftp.icm.edu.pl (2001:6a0:0:31::2)]
Get:2 http://pkg.jenkins-ci.org/debian binary/ jenkins 2.7 [68.0 MB]
Fetched 68.1 MB in 2min 34s (441 kB/s)
Selecting previously unselected package daemon.
(Reading database ... 34869 files and directories currently installed.)
Preparing to unpack .../daemon_0.6.4-1_amd64.deb ...
Unpacking daemon (0.6.4-1) ...
Selecting previously unselected package jenkins.
Preparing to unpack .../archives/jenkins_2.7_all.deb ...
Unpacking jenkins (2.7) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for systemd (229-4ubuntu6) ...
Processing triggers for ureadahead (0.100.0-19) ...
Setting up daemon (0.6.4-1) ...
Setting up jenkins (2.7) ...
Processing triggers for systemd (229-4ubuntu6) ...
Processing triggers for ureadahead (0.100.0-19) ..
You can manage the Jenkins service using Jenkins daemon. Furthermore, you can analyse the Jenkins log at /var/log/jenkins/jenkins.log for any service troubleshooting.
After installing Jenkins, you can access the Jenkins portal at the URL http://IP:8080 or http://hostname:8080
Setting up an Apache2 Proxy for port 80 to 8080
We need to configure the virtual host to proxy port 80 to 8080, so that you can access Jenkins without specifying any port, just by calling the URL http://IP
Enable Proxy module
You can enable the proxy module by just running this command.
root@jenkins:~# a2enmod proxy
Enabling module proxy.
To activate the new configuration, you need to run:
service apache2 restart
root@jenkins:~# a2enmod proxy_http
Considering dependency proxy for proxy_http:
Module proxy already enabled
Enabling module proxy_http.
To activate the new configuration, you need to run:
service apache2 restart
Restart the Apache service after enabling these modules. Now we need to create the virtual host that proxies the port. Please see the virtual host details:
root@jenkins:/etc/apache2/sites-available# cat jenkins.conf
<VirtualHost *:80>
ServerAdmin webmaster@localhost
ServerName jenkins.ubuntuserver.com
ServerAlias jenkins
ProxyRequests Off
<Proxy *>
Order deny,allow
Allow from all
</Proxy>
ProxyPreserveHost on
ProxyPass / http://localhost:8080/ nocanon
AllowEncodedSlashes NoDecode
</VirtualHost>
root@jenkins:/etc/apache2/sites-enabled# a2ensite jenkins
Enabling site jenkins.
To activate the new configuration, you need to run:
service apache2 reload
By executing this command, you can enable the Jenkins configuration created. That's all :). Access your Jenkins portal by just calling http://IP or http://hostname.
Configure Jenkins
After installing Jenkins, we can access the Jenkins Portal. It will look as the snapshot below:
Now we need to copy the content of the file at the location mentioned, /var/lib/jenkins/secrets/initialAdminPassword, and paste it in here to continue. This will direct us to the next page.
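Reading the initial password is a one-liner:
root@jenkins:~# cat /var/lib/jenkins/secrets/initialAdminPassword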
We need to install the suggested Jenkins plugins as per our requirement. Once installed, it will ask us to create the admin user to manage the Jenkins portal. We need to provide these details to continue.
That's all :). Now we're ready to get started with our continuous integration tool. Thank you for reading this article; I hope you enjoyed it, and I welcome your valuable comments and suggestions.
OpenLDAP is an open source directory server. It is "lightweight" or "smaller" compared to X.500, designed to run on smaller computers such as desktop machines. In OpenLDAP, data is arranged like the branches of a tree, one striking difference from other commonly used kinds of databases. In OpenLDAP, access rights to the directory are based on two categories of functions in slapd: access control lists and authorization functions (in Linux/Unix, by comparison, access rights to file systems are based on file/directory permissions). An LDAP client binds (logs in) to an LDAP server and submits a query to request information, or submits information to be updated. Access rights are then evaluated by the server and, when granted, the server responds with an answer, or perhaps with a referral to another LDAP server where the client can have the query serviced.
In this article we will be setting up multi-master replication of an OpenLDAP server on CentOS 7. When your directory is very big, with lots of clients creating lots of traffic on the directory server, it becomes very difficult to meet the SLA, so we have to distribute the client load across multiple servers with the help of replication. OpenLDAP supports multiple replication configurations; master-master replication and master-consumer replication are the most commonly used.
Basic Setup:
In the multi-master replication topology, two or more servers can act as masters, and all of these master servers are authoritative for any change in the directory.
In this tutorial we are going to use two test servers, to keep the process simple, with the following host names and IP addresses.
LDAP1.TEST.COM IP address 172.25.10.176
LDAP2.TEST.COM IP address 192.25.10.177
Log in to both servers using root user credentials and open the 'hosts' file to add both server names with their IP addresses, so that each can resolve the other's hostname.
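The entries mirror the names and addresses above; on each server, /etc/hosts would carry lines like these:
# vim /etc/hosts
172.25.10.176 ldap1.test.com
192.25.10.177 ldap2.test.com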
In order to setup multiple master OpenLDAP replication, first we will install and configure the Basic LDAP Server settings on both of our CentOS 7 server.
Let's run the below command to install OpenLDAP server packages.
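The command itself is not shown above; on CentOS 7 the usual sequence is to install the packages, seed the database config, and start slapd (a sketch of the standard recipe):
# yum -y install openldap-servers openldap-clients
# cp /usr/share/openldap-servers/DB_CONFIG.example /var/lib/ldap/DB_CONFIG
# chown ldap. /var/lib/ldap/DB_CONFIG
# systemctl start slapd
# systemctl enable slapd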
Generate the encrypted password by running the slappasswd command and entering the password; then copy the generated encrypted string, to be used later for the "olcRootPW" setting.
# slappasswd
New password:
Re-enter new password:
{SSHA}xcsCNH2eMVrNsf4dU7LRJFY5kULU01p4
Let's generate the directory manager's password first, then open 'chdomain.ldif' and put the below text in it; make sure to use your own domain name in the "dc=***,dc=***" sections and to specify the password generated above for the "olcRootPW" setting.
# slappasswd
New password:
Re-enter new password:
{SSHA}xIE0NEjoshYdxkvdBaudyuo8NA2IlisgsN7MvXT
# vim chdomain.ldif
dn: olcDatabase={1}monitor,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth"
read by dn.base="cn=Manager,dc=srv,dc=world" read by * none
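The apply commands are not shown above; chdomain.ldif modifies cn=config and is applied with SASL EXTERNAL, while the base entries (a basedomain.ldif defining the dc=test,dc=com tree, not reproduced here) are loaded with a simple bind, which produces the 'Enter LDAP Password:' prompt and the output below:
# ldapmodify -Y EXTERNAL -H ldapi:/// -f chdomain.ldif
# ldapadd -x -D cn=Manager,dc=test,dc=com -W -f basedomain.ldif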
Enter LDAP Password:
adding new entry "dc=test,dc=com objectClass: top objectClass: dcObject objectclass: organization o: Test Domain dc: Test"
adding new entry "cn=Manager,dc=test,dc=com"
adding new entry "ou=People,dc=test,dc=com"
adding new entry "ou=Group,dc=test,dc=com"
Repeat the steps on the other node, and let's move towards multi-master replication.
OpenLDAP Multi-Master Replication:
Once your basic LDAP settings are complete, do the following steps to configure and set up multi-master replication. First, we will add the 'syncprov' module by creating the file below and putting the following configuration in it.
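The file is not reproduced above; the standard module-load LDIF and its apply command look like this (the module path is the usual CentOS 7 location):
# vim mod_syncprov.ldif
dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulePath: /usr/lib64/openldap
olcModuleLoad: syncprov.la

# ldapadd -Y EXTERNAL -H ldapi:/// -f mod_syncprov.ldif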
Now we will configure the replication itself by placing the configuration below into a file on each of your master nodes.
Don't forget to change the "olcServerID" and "provider=xxx" values according to the server; set a different value on each server.
# vim ldap01.ldif
# create new
dn: cn=config
changetype: modify
replace: olcServerID
# specify unique ID number on each server
olcServerID: 0
dn: olcDatabase={2}hdb,cn=config
changetype: modify
add: olcSyncRepl
olcSyncRepl: rid=001
# specify your LDAP server's URI
provider=ldap://ldap1.test.com:389/
bindmethod=simple
# your own domain name
binddn="cn=Manager,dc=test,dc=com"
# directory manager's password
credentials=xxxxxx
searchbase="dc=test,dc=com"
# includes subtree
scope=sub
schemachecking=on
type=refreshAndPersist
# [retry interval] [retry times] [interval of re-retry] [re-retry times]
retry="30 5 300 3"
# replication interval
interval=00:00:05:00
-
add: olcMirrorMode
olcMirrorMode: TRUE
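The apply commands and the syncprov overlay LDIF are not shown above; a sketch of the standard sequence, whose final ldapadd produces the output below:
# ldapmodify -Y EXTERNAL -H ldapi:/// -f ldap01.ldif

# vim syncprov.ldif
dn: olcOverlay=syncprov,olcDatabase={2}hdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: syncprov

# ldapadd -Y EXTERNAL -H ldapi:/// -f syncprov.ldif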
adding new entry "olcOverlay=syncprov,olcDatabase={2}hdb,cn=config"
That's it; the OpenLDAP multi-master replication setup is complete. You can now configure your LDAP clients to bind to your LDAP master servers by using the below command on each client.
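The client-side command is not shown above; on a CentOS 7 client, authconfig can bind to both masters in one line (a sketch; adjust the server list and base DN to your environment):
# authconfig --enableldap --enableldapauth \
  --ldapserver="ldap://ldap1.test.com,ldap://ldap2.test.com" \
  --ldapbasedn="dc=test,dc=com" --enablemkhomedir --update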
In this article you have learned the basic concepts of OpenLDAP and its installation and multi-master replication on CentOS 7. OpenLDAP supports a wide variety of replication topologies, described in terms of providers and consumers (the older master/slave terms have been deprecated): a provider replicates directory updates to consumers, and consumers receive replication updates from providers. Multi-master replication is a technique that uses Syncrepl to replicate data to multiple provider ("master") directory servers, which is best for automatic failover and high availability. In multi-master replication, if any provider fails, the other providers continue to accept updates, avoiding a single point of failure, and providers can be located in several physical sites, i.e. distributed across the network or globe. Thank you for reading; please share your valuable comments and suggestions.
Docker is an open source container-based technology. It gives us an easy-to-use workflow around containers. Docker separates the application from the underlying operating system using container technology, similar to how virtual machines separate the operating system from the underlying hardware.
Docker Container Vs Virtual Machines
A virtual machine includes the application, the necessary binaries and libraries, and an entire guest operating system, which may weigh tens of GBs.
A Docker Engine container, by contrast, comprises just the application and its dependencies. It runs as an isolated process in user space on the host operating system, sharing the kernel with other containers. Thus, it enjoys the resource isolation and allocation benefits of VMs, but is much faster, more portable, more scalable and more efficient.
Docker Benefits
Scalability: Containers are extremely lightweight, which makes scaling up and down very fast; it is very easy to launch more containers as we need them, or shut them down when we no longer do.
Portability: We can move them very easily. We're going to get into images and registries, but essentially we can take snapshots of our environment, upload them to a public/private registry, and then download those images to make containers from them anywhere.
Deployments: We can run these containers almost anywhere: desktops, laptops, virtual machines, public/private clouds, etc.
In this article, I'm explaining how to install Docker on an Ubuntu 16.04 server and run Puppet inside a Docker container.
Installing Docker
Docker is supported on almost all operating systems. To install Docker on an Ubuntu server, it requires a 64-bit architecture and a kernel version of at least 3.10. Let's start with the installation prerequisites.
Pre-requisites
Check the Kernel version and Architecture
We can use these commands to confirm the architecture and kernel version of our OS.
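The commands are simply:
root@ubuntu:~# uname -r
root@ubuntu:~# uname -m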
Now, next step is to update the APT repository packages. In addition, we need to ensure that it runs with https and install the required CA certificates. Run the following command to achieve this.
root@ubuntu:~# apt-get update
root@ubuntu:~# apt-get install apt-transport-https ca-certificates
Reading package lists... Done
Building dependency tree
Reading state information... Done
ca-certificates is already the newest version (20160104ubuntu1).
The following packages will be upgraded:
apt-transport-https
1 upgraded, 0 newly installed, 0 to remove and 54 not upgraded.
Need to get 25.7 kB of archives.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://mirrors.linode.com/ubuntu xenial-updates/main amd64 apt-transport-https amd64 1.2.12~ubuntu16.04.1 [25.7 kB]
Fetched 25.7 kB in 0s (2,540 kB/s)
(Reading database ... 25186 files and directories currently installed.)
Preparing to unpack .../apt-transport-https_1.2.12~ubuntu16.04.1_amd64.deb ...
Unpacking apt-transport-https (1.2.12~ubuntu16.04.1) over (1.2.10ubuntu1) ...
Setting up apt-transport-https (1.2.12~ubuntu16.04.1) ...
Creating Repository file for Docker
Make sure your repository configuration file is properly set to download the packages for Docker.
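The repository and key setup are not shown above; Docker's instructions for apt.dockerproject.org at the time used the following, with Docker's published key fingerprint:
root@ubuntu:~# apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
root@ubuntu:~# echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" > /etc/apt/sources.list.d/docker.list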
root@ubuntu:/etc/apt/sources.list.d# cat /etc/apt/sources.list.d/docker.list
deb https://apt.dockerproject.org/repo ubuntu-xenial main
Once it's added, you can update the packages once more by running "apt-get update". Make sure it takes the updates from the right repos. Remove any old docker package if it exists.
root@ubuntu:/etc/apt/sources.list.d# apt-get purge lxc-docker
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package 'lxc-docker' is not installed, so not removed
0 upgraded, 0 newly installed, 0 to remove and 54 not upgrad
For Ubuntu Xenial 16.04, it's recommended to install the linux-image-extra package matching your running kernel, which enables the AUFS storage driver. The AUFS storage driver takes multiple directories on a single host and stacks them on top of each other, providing a single unified view.
root@ubuntu:~# apt-get install linux-image-extra-$(uname -r)
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
crda iw libnl-3-200 libnl-genl-3-200 wireless-regdb
The following NEW packages will be installed:
crda iw libnl-3-200 libnl-genl-3-200 linux-image-extra-4.4.0-21-generic wireless-regdb
0 upgraded, 6 newly installed, 0 to remove and 54 not upgraded.
Need to get 39.0 MB of archives.
Installation
Now we can go ahead with the installation of the Docker.
root@ubuntu:~# apt-get install docker-engine
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
aufs-tools cgroupfs-mount git git-man liberror-perl libltdl7 libperl5.22 patch perl perl-modules-5.22 rename xz-utils
Suggested packages:
mountall git-daemon-run | git-daemon-sysvinit git-doc git-el git-email git-gui gitk gitweb git-arch git-cvs git-mediawiki git-svn
diffutils-doc perl-doc libterm-readline-gnu-perl | libterm-readline-perl-perl make
The following NEW packages will be installed:
aufs-tools cgroupfs-mount docker-engine git git-man liberror-perl libltdl7 libperl5.22 patch perl perl-modules-5.22 rename xz-utils
0 upgraded, 13 newly installed, 0 to remove and 54 not upgraded.
Need to get 24.8 MB of archives.
After this operation, 139 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
WARNING: The following packages cannot be authenticated!
Start and confirm the Docker status
root@ubuntu:~# service docker start
root@ubuntu:~# docker version
Client:
Version: 1.11.2
API version: 1.23
Go version: go1.5.4
Git commit: b9f10c9
Built: Wed Jun 1 22:00:43 2016
OS/Arch: linux/amd64
Server:
Version: 1.11.2
API version: 1.23
Go version: go1.5.4
Git commit: b9f10c9
Built: Wed Jun 1 22:00:43 2016
OS/Arch: linux/amd64
root@ubuntu:~#
The command below downloads a test image named hello-world from the Docker registry and runs it in a container. When the container runs, it prints an informational message and then exits, confirming that Docker is working.
root@ubuntu:~# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
4276590986f6: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:a7d7a8c072a36adb60f5dc932dd5caba8831ab53cbf016bcdd6772b3fbe8c362
Status: Downloaded newer image for hello-world:latest
Hello from Docker.
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with: $ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker Hub account: https://hub.docker.com
For more examples and ideas, visit: https://docs.docker.com/engine/userguide/
Now we're ready to start with Docker. We can download all required images from the Docker Hub using the command docker pull image_name. For instance, let's see how to download some useful images, starting with the base Ubuntu image.
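For example, to grab the base Ubuntu image:
root@ubuntu:~# docker pull ubuntu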
This has downloaded the Ubuntu image from the Docker Hub and we can use this for creating a Ubuntu container with this image.
root@ubuntu:~# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu latest 2fa927b5cdd3 11 days ago 122 MB
hello-world latest 94df4f0ce8a4 6 weeks ago 967 B
Creating Puppet inside a Docker container
For creating Puppet containers, first we need to download the Puppet images from the Docker Hub.
puppet/puppet-agent-ubuntu
puppet/puppetserver
puppet/puppetdb
puppet/puppetdb-postgres
Let's see how I downloaded these images from the Docker Hub. You can use the command docker pull image_name for that, as shown below.
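For example, the four images listed above can be fetched one by one:
root@ubuntu:~# docker pull puppet/puppetserver
root@ubuntu:~# docker pull puppet/puppet-agent-ubuntu
root@ubuntu:~# docker pull puppet/puppetdb
root@ubuntu:~# docker pull puppet/puppetdb-postgres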
Now we've downloaded all the required images. You can view them by running the docker images command.
root@ubuntu:~# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
puppet/puppetserver latest 0ac3058fad18 4 days ago 379.9 MB
puppet/puppetdb latest f3f9d8b3e54f 6 days ago 368.4 MB
puppet/puppet-agent-ubuntu latest 57fe50639909 6 days ago 202.9 MB
puppet/puppetdb-postgres latest 4f4ed55af431 10 days ago 265.8 MB
ubuntu latest 2fa927b5cdd3 11 days ago 122 MB
hello-world latest 94df4f0ce8a4 6 weeks ago 967 B
Before creating our Puppet container, we need to create a Docker network to add these Puppet containers as below.
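The network name "puppet" below is the one referenced by the --net option in the commands that follow:
root@ubuntu:~# docker network create puppet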
We can create the Puppet server from the image "puppet/puppetserver", with the name puppet, in the puppet network, with the hostname "puppet.linoxide".
root@ubuntu:~# docker run --net puppet --name puppet --hostname puppet.linoxide puppet/puppetserver
Warning: The following options to parse-opts are unrecognized: :flag
2016-06-08 09:36:24,348 INFO [o.e.j.u.log] Logging initialized @27125ms
2016-06-08 09:36:36,393 INFO [p.s.v.versioned-code-service] No code-id-command set for versioned-code-service. Code-id will be nil.
2016-06-08 09:36:36,394 INFO [p.s.v.versioned-code-service] No code-content-command set for versioned-code-service. Attempting to fetch code content will fail.
2016-06-08 09:36:36,396 INFO [p.t.s.w.jetty9-service] Initializing web server(s).
2016-06-08 09:36:36,450 INFO [p.s.j.jruby-puppet-service] Initializing the JRuby service
2016-06-08 09:36:36,455 WARN [p.s.j.jruby-puppet-service] The 'jruby-puppet.use-legacy-auth-conf' setting is set to 'true'. Support for the legacy Puppet auth.conf file is deprecated and will be removed in a future release. Change this setting to 'false' and migrate your authorization rule definitions in the /etc/puppetlabs/puppet/auth.conf file to the /etc/puppetlabs/puppetserver/conf.d/auth.conf file.
2016-06-08 09:36:36,535 INFO [p.s.j.jruby-puppet-internal] Creating JRuby instance with id 1.
2016-06-08 09:36:53,825 WARN [puppetserver] Puppet Comparing Symbols to non-Symbol values is deprecated
(file & line not available)
2016-06-08 09:36:54,019 INFO [puppetserver] Puppet Puppet settings initialized; run mode: master
2016-06-08 09:36:56,811 INFO [p.s.j.jruby-puppet-agents] Finished creating JRubyPuppet instance 1 of 1
2016-06-08 09:36:56,849 INFO [p.s.c.puppet-server-config-core] Initializing webserver settings from core Puppet
2016-06-08 09:36:59,780 INFO [p.s.c.certificate-authority-service] CA Service adding a ring handler
2016-06-08 09:36:59,827 INFO [p.s.p.puppet-admin-service] Starting Puppet Admin web app
2016-06-08 09:37:06,473 INFO [p.s.m.master-service] Master Service adding ring handlers
2016-06-08 09:37:06,558 WARN [o.e.j.s.h.ContextHandler] Empty contextPath
2016-06-08 09:37:06,572 INFO [p.t.s.w.jetty9-service] Starting web server(s).
2016-06-08 09:37:06,606 INFO [p.t.s.w.jetty9-core] webserver config overridden for key 'ssl-cert'
2016-06-08 09:37:06,607 INFO [p.t.s.w.jetty9-core] webserver config overridden for key 'ssl-key'
2016-06-08 09:37:06,608 INFO [p.t.s.w.jetty9-core] webserver config overridden for key 'ssl-ca-cert'
2016-06-08 09:37:06,608 INFO [p.t.s.w.jetty9-core] webserver config overridden for key 'ssl-crl-path'
2016-06-08 09:37:07,037 INFO [p.t.s.w.jetty9-core] Starting web server.
2016-06-08 09:37:07,050 INFO [o.e.j.s.Server] jetty-9.2.z-SNAPSHOT
2016-06-08 09:37:07,174 INFO [o.e.j.s.h.ContextHandler] Started o.e.j.s.h.ContextHandler@18ee4ac3{/puppet-ca,null,AVAILABLE}
2016-06-08 09:37:07,175 INFO [o.e.j.s.h.ContextHandler] Started o.e.j.s.h.ContextHandler@4c1434a7{/puppet-admin-api,null,AVAILABLE}
2016-06-08 09:37:07,176 INFO [o.e.j.s.h.ContextHandler] Started o.e.j.s.h.ContextHandler@7eef9da2{/puppet,null,AVAILABLE}
2016-06-08 09:37:07,177 INFO [o.e.j.s.h.ContextHandler] Started o.e.j.s.h.ContextHandler@26ad2d06{/,null,AVAILABLE}
2016-06-08 09:37:07,364 INFO [o.e.j.s.ServerConnector] Started ServerConnector@66b8635c{SSL-HTTP/1.1}{0.0.0.0:8140}
2016-06-08 09:37:07,365 INFO [o.e.j.s.Server] Started @70146ms
2016-06-08 09:37:07,381 INFO [p.s.m.master-service] Puppet Server has successfully started and is now ready to handle requests
2016-06-08 09:37:07,393 INFO [p.s.l.legacy-routes-service] The legacy routing service has successfully started and is now ready to handle requests
Now we have our Puppet server created and running.
root@ubuntu:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f4b9f456a4c2 puppet/puppetserver "dumb-init /docker-en" 3 minutes ago Up 3 minutes 8140/tcp puppet
Creating Puppet Client
By running the command below, you're creating another container as the Puppet client, with hostname puppet-client-linoxide, from the Docker image puppet/puppet-agent-ubuntu. Alternatively, you can just use docker run --net puppet puppet/puppet-agent-ubuntu to build one; in that case the agent runs with a onetime flag, meaning Puppet exits after the first run.
root@ubuntu:~# docker run --net puppet --name puppet-client --hostname puppet-client-linoxide puppet/puppet-agent-ubuntu agent --verbose --no-daemonize --summarize
Info: Creating a new SSL key for puppet-client-linoxide.members.linode.com
Info: Caching certificate for ca
Info: csr_attributes file loading from /etc/puppetlabs/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for puppet-client-linoxide.members.linode.com
Info: Certificate Request fingerprint (SHA256): 62:E2:37:8A:6E:0D:18:AC:81:0F:F1:3E:D6:08:10:29:D4:D6:21:16:59:B7:6D:3F:AA:5C:7A:08:38:B6:6B:07
Info: Caching certificate for puppet-client-linoxide.members.linode.com
Info: Caching certificate_revocation_list for ca
Info: Caching certificate for ca
Notice: Starting Puppet client version 4.5.1
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for puppet-client-linoxide.members.linode.com
Info: Applying configuration version '1465378896'
Info: Creating state file /opt/puppetlabs/puppet/cache/state/state.yaml
Notice: Applied catalog in 0.01 seconds
Changes:
Events:
Resources:
Total: 7
Time:
Schedule: 0.00
Config retrieval: 1.55
Total: 1.56
Last run: 1465378896
Filebucket: 0.00
Version:
Config: 1465378896
Puppet: 4.5.1
But if you're using the above command, the container won't exit; it stays online and updates every 30 minutes based on the latest content from the Puppet Server. Now we have our Puppet server and client running on Docker.
root@ubuntu:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5f29866a103b puppet/puppet-agent-ubuntu "/opt/puppetlabs/bin/" 8 minutes ago Up 8 minutes puppet-client
f4b9f456a4c2 puppet/puppetserver "dumb-init /docker-en" 13 minutes ago Up 13 minutes 8140/tcp puppet
Creating PuppetDB
We can run a PuppetDB server in a Docker container. In order to run PuppetDB, we need a PostgreSQL server running, since PuppetDB supports only PostgreSQL as its database backend. This too can be another container instance, an RDS (Relational Database Service) endpoint or a physical DB somewhere. In addition, it requires a running Puppet master: in order to use SSL certs during initialization, you will need at least a token Puppet master running that the container can connect to for initializing the certs. A rough way to start these containers is shown below.
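As a sketch, assuming the same puppet network and the images we pulled earlier (the container names here are illustrative, not mandated by the images):
root@ubuntu:~# docker run --net puppet --name postgres puppet/puppetdb-postgres
root@ubuntu:~# docker run --net puppet --name puppetdb puppet/puppetdb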
Hurray! This is how we can make Puppet run on a container infrastructure inside Docker. I hope you enjoyed reading this article. I welcome your valuable comments and suggestions on it.
Apache Cassandra is an open source distributed, high performance, extremely scalable and fault tolerant post-relational database solution. It can serve as both a real-time data store for online/transactional applications, and as a read-intensive database for business intelligence systems.
Relational DB Vs Cassandra
Relational database systems handle moderate incoming data velocity and fetch data from one or a few locations. They manage primarily structured data and support complex/nested transactions, but have single points of failure with failover.
Cassandra handles high incoming data velocity by fetching data from many locations. It manages all data types and supports simple transactions with no single points of failure; it provides constant uptime. In addition, it provides read/write scalability.
In this article, I'm providing the guidelines on how I installed Apache Cassandra and ran a single-node cluster on my Ubuntu 16.04 server.
Pre-requisites
It requires a Java Platform to run
A user to run this application
Install Java
Cassandra needs Java to run on your server, so make sure you have the latest Java version installed. You can update the APT repository packages and install Java as below. Cassandra 3.0 and later requires Java 8 or newer.
root@ubuntu:~# apt-get update
root@ubuntu:~# apt-get install default-jdk
Setting up default-jdk (2:1.8-56ubuntu2) ...
Setting up gconf-service-backend (3.2.6-3ubuntu6) ...
Setting up gconf2 (3.2.6-3ubuntu6) ...
Setting up libgnomevfs2-common (1:2.24.4-6.1ubuntu1) ...
Setting up libgnomevfs2-0:amd64 (1:2.24.4-6.1ubuntu1) ...
Setting up libgnome2-common (2.32.1-5ubuntu1) ...
Setting up libgnome-2-0:amd64 (2.32.1-5ubuntu1) ...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for systemd (229-4ubuntu4) ...
Processing triggers for ca-certificates (20160104ubuntu1) ...
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
done.
You can confirm the Java version installed.
root@ubuntu:~# java -version
openjdk version "1.8.0_91"
OpenJDK Runtime Environment (build 1.8.0_91-8u91-b14-0ubuntu4~16.04.1-b14)
OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)
Creating a user to run Cassandra
It is always recommended to run this application as a dedicated user instead of root. Hence, I created a cassandra user to run this application, as shown below.
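One straightforward way to do that (the user and group names here are my choices):
root@ubuntu:~# groupadd cassandra
root@ubuntu:~# useradd -m -g cassandra -s /bin/bash cassandra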
Now we can download the latest Apache Cassandra from an Apache mirror and copy it to your preferred directory. I downloaded the tar file to my /tmp folder and extracted the contents into the cassandra user's home directory.
root@ubuntu:/tmp# wget http://mirror.cc.columbia.edu/pub/software/apache/cassandra/3.6/apache-cassandra-3.6-bin.tar.gz
--2016-06-12 08:36:47-- http://mirror.cc.columbia.edu/pub/software/apache/cassandra/3.6/apache-cassandra-3.6-bin.tar.gz
Resolving mirror.cc.columbia.edu (mirror.cc.columbia.edu)... 128.59.59.71
Connecting to mirror.cc.columbia.edu (mirror.cc.columbia.edu)|128.59.59.71|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 35552323 (34M) [application/x-gzip]
Saving to: ‘apache-cassandra-3.6-bin.tar.gz’
apache-cassandra-3.6-bin.tar.gz 100%[===================================================================>] 33.91M 6.43MB/s in 12s
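Extracting the archive and handing the files over to the cassandra user could look like this, assuming /home/cassandra is that user's home directory:
root@ubuntu:/tmp# tar -xzf apache-cassandra-3.6-bin.tar.gz
root@ubuntu:/tmp# cp -r apache-cassandra-3.6/* /home/cassandra/
root@ubuntu:/tmp# chown -R cassandra:cassandra /home/cassandra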
Now you can switch to the cassandra user and run this application as below:
cassandra@ubuntu:~$ sh bin/cassandra
INFO 09:10:39 Cassandra version: 3.6
INFO 09:10:39 Thrift API version: 20.1.0
INFO 09:10:39 CQL supported versions: 3.4.2 (default: 3.4.2)
INFO 09:10:39 Initializing index summary manager with a memory pool size of 24 MB and a resize interval of 60 minutes
INFO 09:10:39 Starting Messaging Service on localhost/127.0.0.1:7000 (lo)
INFO 09:10:39 Loading persisted ring state
INFO 09:10:39 Starting up server gossip
INFO 09:10:39 Updating topology for localhost/127.0.0.1
INFO 09:10:39 Updating topology for localhost/127.0.0.1
INFO 09:10:39 Node localhost/127.0.0.1 state jump to NORMAL
This output means your Cassandra server is up and running fine. Now we can check and confirm the status of our cluster with this command.
root@ubuntu:/home/cassandra# nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load        Tokens  Owns (effective)  Host ID                               Rack
UN  127.0.0.1  142.65 KiB  256     100.0%            fc76be14-acde-47d4-a4a2-5d015804bb3c  rack1
The status and state notation UN means it is up and normal.
We are done with installing Single Node Cassandra cluster. Now we can see how to connect to our cluster.
Connecting to our Cluster
We can execute this shell script "cqlsh" to connect to our cluster node.
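For example, from the cassandra home directory, connect and list the default keyspaces:
cassandra@ubuntu:~$ bin/cqlsh
cqlsh> DESCRIBE KEYSPACES;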
Howdy! We're done with a single-node Cassandra cluster on our Ubuntu 16.04 server. I hope you enjoyed reading this. I welcome your valuable comments and suggestions.
NodeJS is a free and open source JavaScript runtime built on Chrome's V8 JavaScript engine, designed to build scalable network applications. It allows the use of JavaScript in server-side programming, with the ability to interact with the operating system and networking. It uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices. Here in this article, we'll learn how to install the latest stable NodeJS on our machine running Ubuntu 16.04 LTS Xenial. To install NodeJS, there are many methods we can use. The following are the ways we'll feature in this article.
Installing using the Official Repository
Installing using the Github Source Code Clone
Installing using Node Version Manager (NVM)
Installing using the Official Repository
First of all, as NodeJS is available in the official repository of Ubuntu 16.04 LTS Xenial, we can easily install it using the repository. In order to do so, we'll first need to update the local repository index of our apt package manager.
$ sudo apt-get update
Once the update is completed, we'll move ahead and run the following command to upgrade our system which will upgrade the packages to the latest available versions.
$ sudo apt-get upgrade
Then, we'll install nodejs using the apt-get command. Doing so will also install the node package manager (npm), which comes along with nodejs and allows us to install packages from the Node Package Manager repository.
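Assuming the stock Ubuntu 16.04 archive, the package is simply called nodejs; npm can be added explicitly in case it is not pulled in automatically:
$ sudo apt-get install nodejs npm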
Once done, we'll be able to install and run our node applications successfully.
Installing using the Github Source Code Clone
If we want to install nodejs from the latest clone of the GitHub source code, then we'll need to follow this method of installation.
First of all, we'll need to make sure that the dependencies required for the compilation of NodeJS are installed on our Ubuntu 16.04 machine. In order to install them, we'll first need to update the local repository index of the apt package manager.
$ sudo apt-get update
Once done, we'll now install the required dependencies from the repository.
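A typical dependency set for compiling Node.js from source would be (package names assumed from the standard Ubuntu archive):
$ sudo apt-get install build-essential python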
We'll now need to download an official release of nodejs from its official GitHub repository, then compile and install it. To do so, we'll run the following wget command against the respective release of nodejs.
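As an example, fetching and building the v6.2.1 release (the version used later in this article) would go roughly like this:
$ wget https://github.com/nodejs/node/archive/v6.2.1.tar.gz
$ tar -xzf v6.2.1.tar.gz
$ cd node-6.2.1
$ ./configure
$ make
$ sudo make install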
The installation process may take much more time depending upon the performance of the machine.
Installing using Node Version Manager (NVM)
The Node Version Manager, also known as NVM, is a version managing script for nodejs that allows us to manage multiple versions of Node.js on the same machine. In order to install NVM, we'll require curl, libssl-dev and build-essential to be installed. To do so, we'll need to run the following commands.
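Something like the following should cover the dependencies, and then fetch and run the NVM install script from its GitHub repository (the version tag in the URL is only an example; check the project page for the current release):
$ sudo apt-get install curl libssl-dev build-essential
$ curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.1/install.sh | bash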
Now, in order to gain access to the NVM functionalities and binaries, we'll need to make sure to source ~/.profile file as the installer has appended the required settings in it.
$ source ~/.profile
In order to apply the changes, we'll need to make sure to log out and log back in to the session.
Once done, we'll move ahead with installing the latest nodejs on our machine using NVM. We can first list all the available versions of nodejs and then install the one we want. To do so, we'll need to execute the following commands.
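nvm ls-remote lists every version available for install, and nvm install fetches the one we pick; v6.2.1 is the version used in the rest of this article:
$ nvm ls-remote
$ nvm install v6.2.1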
Once it's installed, we can switch the version of nodejs by simply running the following command.
$ nvm use v6.2.1
Testing NodeJS Installation
As we have completed the installation of NodeJS using the above steps, we should now be able to check the installed version of nodejs by running the following command.
$ node -v
v6.2.1
Now, we're going to create a simple nodejs app printing our all-time favorite "Hello World" statement. We'll create a file named hello.js using a text editor.
$ nano hello.js
Then, we'll write the following javascript code to the hello.js file.
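A one-liner is enough for our purpose:
console.log("Hello World");
Then we can run it with node and see the output:
$ node hello.js
Hello World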
Finally, we have successfully installed the latest stable NodeJS available on our machine running Ubuntu 16.04 LTS Xenial. This tutorial should work fine on almost all derivatives of Ubuntu and Debian. NodeJS has completely changed the way we run JavaScript, making it available outside of web browsers, from servers to home desktop applications. Now that the installation is complete, we can run our various nodejs applications. If you have any questions, comments or feedback, please do write in the comment box below and let us know what needs to be added or improved.
RabbitMQ is an open source message broker software that implements the Advanced Message Queuing Protocol (AMQP), the emerging standard for high performance enterprise messaging. It is one of the most popular message broker solutions in the market, offered under an open-source license (Mozilla Public License v1.1) as an implementation of AMQP developed in the Erlang language, and it is relatively easy to get started with. The RabbitMQ server is a robust and scalable implementation of an AMQP broker. AMQP is a widely accepted open standard for distributing and transferring messages from a source to a destination. As a protocol and standard, it sets a common ground for various applications and message broker middleware to interoperate without encountering issues caused by individually set design decisions.
RabbitMQ Server concepts:
Following are some important concepts that we need to define before we start the RabbitMQ installation setup. The default virtual host, the default user, and the default permissions are used in the examples that follow, but it is still good to have a feeling for what they are.
Producer: Application that sends the messages.
Consumer: Application that receives the messages.
Queue: Buffer that stores messages.
Message: Information that is sent from the producer to a consumer through RabbitMQ.
Connection: A TCP connection between your application and the RabbitMQ broker.
Channel: A virtual connection inside a connection. Publishing or consuming messages and subscribing to a queue are all done over a channel.
Exchange: Receives messages from producers and pushes them to queues depending on rules defined by the exchange type. In order to receive messages, a queue needs to be bound to at least one exchange.
Binding: A link between a queue and an exchange.
Routing key: A key that the exchange looks at to decide how to route the message to queues; it is like an address for the message.
Virtual host: Provides a way to segregate applications using the same RabbitMQ instance. Different users can have different access privileges to different vhosts, and queues and exchanges can be created so they exist only in one vhost.
Prerequisites:
Our first step is to make sure that all system packages are up-to-date by running the following apt-get commands in the command line terminal.
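For example:
# apt-get update
# apt-get upgrade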
After the system update, we need to get the main dependency of RabbitMQ, which is Erlang. Let's use the command below to get Erlang on our Ubuntu 16.04 server.
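The erlang package from the default Ubuntu repositories should be sufficient here:
# apt-get install erlang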
Installing the rabbitmq-server package on Ubuntu 16.04 is simple. Just run the command below and press the 'Y' key to continue installing the RabbitMQ server package along with its required dependencies.
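The package is available straight from the Ubuntu repositories:
# apt-get install rabbitmq-server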
RabbitMQ server has been installed on Ubuntu 16.04; now run the commands below to start it, check its status, and enable its service to auto-start after each reboot.
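On Ubuntu 16.04's systemd, that would be:
# systemctl start rabbitmq-server
# systemctl status rabbitmq-server
# systemctl enable rabbitmq-server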
RabbitMQ server is up and running; now we are going to show you how you can set up its web management console using the rabbitmq-management plugin. The rabbitmq-management plugin allows you to manage and monitor your RabbitMQ server in a variety of ways, such as listing and deleting exchanges, queues, bindings and much more.
Let's run the below command to install this plugin on your Ubuntu 16.04 server.
# rabbitmq-plugins enable rabbitmq_management
The rabbitmq_management plugin is a combination of several other plugins, which will be enabled together after executing the above command.
Now we can access RabbitMQ Management console from our web browser, available on HTTP port 15672 by default. You can also create new admin user using below commands.
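For example, to add an administrator account (the user name and password below are placeholders to replace with your own):
# rabbitmqctl add_user admin StrongPassword
# rabbitmqctl set_user_tags admin administrator
# rabbitmqctl set_permissions -p / admin ".*" ".*" ".*"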
Now open the URL below with the default port and log in with your newly created user and password. You can also use the default 'guest' user name and 'guest' password to log in.
http://your_server_IP:15672/
Running rabbitmq-server in the foreground displays a banner message and reports on progress in the startup sequence, concluding with the message "broker running", indicating that the RabbitMQ broker has been started successfully. RabbitMQ is a fully-fledged application stack (i.e. a message broker) that gives you all the tools you need to work with, instead of acting as a framework for you to implement your own. I hope you find this article helpful and interesting. Do not forget to share it with your friends.
Generally, all mail servers consist of three main components: the MTA, MDA and MUA. Each component plays a specific role in the process of moving and managing email messages and is important for ensuring proper email delivery. Hence, setting up a mail server is an involved process requiring the proper configuration of each of these components. The best way is to install and configure each individual component one by one, ensuring each one works, and gradually build up your mail server.
In this article, I'm providing the guidelines on how we can configure a Mail server on an Ubuntu 16.04 server with Postfix (MTA) and Dovecot (MDA), using an external MySQL database for managing virtual users. First of all, let's start with the pre-requisites for building our Mail server.
Pre-requisites
MySQL server installed
A Fully qualified hostname
Domain resolving to your server
After fulfilling our pre-requisites, we can start building our Mail server one component at a time.
Installing Packages
First of all, we need to update our APT repository packages and then install the required Postfix and Dovecot packages.
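One likely package set, matching the components used in the rest of this article, would be:
root@ubuntu:~# apt-get update
root@ubuntu:~# apt-get install postfix postfix-mysql dovecot-core dovecot-imapd dovecot-lmtpd dovecot-mysql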
During the Postfix installation, a setup window will pop up for the initial configuration. We need to choose "Internet Site" and set an FQDN as our system mail name during this phase. The installation of the required packages then proceeds as below.
Postfix is now set up with a default configuration. If you need to make
changes, edit
/etc/postfix/main.cf (and others) as needed. To view Postfix configuration
values, see postconf(1).
After modifying main.cf, be sure to run '/etc/init.d/postfix reload'.
Running newaliases
Setting up postfix-mysql (3.1.0-3) ...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for systemd (229-4ubuntu4) ...
Processing triggers for ufw (0.35-0ubuntu2) ...
Processing triggers for dovecot-core (1:2.2.22-1ubuntu2) ..
Create a Database for managing the mail users
Next step is to create a database for managing the email users and domains on our mail server. As I said before, we're managing the email users with this MySQL database. We can install MySQL if it's not installed by running this command apt-get install mysql-server-5.7.
We are going to create a database named "lnmailserver" with three tables as below:
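A minimal way to create the database, together with a read-only MySQL user for Postfix and Dovecot (credentials matching the map files shown further below):
mysql> CREATE DATABASE lnmailserver;
mysql> GRANT SELECT ON lnmailserver.* TO 'lnmailuser'@'127.0.0.1' IDENTIFIED BY 'lnmail123';
mysql> FLUSH PRIVILEGES;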
Switching to the database lnmailserver, we create our three tables, namely virtual_domains, virtual_users and virtual_aliases, with the following specifications and table formats.
mysql> USE lnmailserver;
Database changed
mysql> CREATE TABLE `virtual_domains` (
-> `id` INT NOT NULL AUTO_INCREMENT,
-> `name` VARCHAR(50) NOT NULL,
-> PRIMARY KEY (`id`)
-> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.01 sec)
mysql> CREATE TABLE `virtual_users` (
-> `id` INT NOT NULL AUTO_INCREMENT,
-> `domain_id` INT NOT NULL,
-> `password` VARCHAR(106) NOT NULL,
-> `email` VARCHAR(120) NOT NULL,
-> PRIMARY KEY (`id`),
-> UNIQUE KEY `email` (`email`),
-> FOREIGN KEY (domain_id) REFERENCES virtual_domains(id) ON DELETE CASCADE
-> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.03 sec)
mysql> CREATE TABLE `virtual_aliases` (
-> `id` INT NOT NULL AUTO_INCREMENT,
-> `domain_id` INT NOT NULL,
-> `source` varchar(100) NOT NULL,
-> `destination` varchar(100) NOT NULL,
-> PRIMARY KEY (`id`),
-> FOREIGN KEY (domain_id) REFERENCES virtual_domains(id) ON DELETE CASCADE
-> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.02 sec)
Adding the domains, users and aliases to each of these tables according to our requirements.
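For example, to register the linoxidemail.com domain, one mailbox and the alias queried below (the password value is illustrative; ENCRYPT() stores it as a salted SHA-512 hash):
mysql> INSERT INTO virtual_domains (id, name) VALUES (1, 'linoxidemail.com');
mysql> INSERT INTO virtual_users (id, domain_id, password, email) VALUES (1, 1, ENCRYPT('mailpass123', CONCAT('$6$', SUBSTRING(SHA(RAND()), -16))), 'blogger1@linoxidemail.com');
mysql> INSERT INTO virtual_aliases (id, domain_id, source, destination) VALUES (1, 1, 'info@linoxidemail.com', 'blogger1@linoxidemail.com');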
mysql> select * from virtual_aliases;
+----+-----------+-----------------------+---------------------------+
| id | domain_id | source | destination |
+----+-----------+-----------------------+---------------------------+
| 1 | 1 | info@linoxidemail.com | blogger1@linoxidemail.com |
+----+-----------+-----------------------+---------------------------+
1 row in set (0.00 sec)
mysql> exit
Configuring Postfix
Our next step is to modify the Postfix configuration according to our plan for how we need to accept SMTP connections. Before making any changes, it is always advised to take a backup of the configuration file.
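For example:
root@ubuntu:~# cp /etc/postfix/main.cf /etc/postfix/main.cf.orig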
I'm using a free self-signed Dovecot SSL certificate here. We can generate a Dovecot self-signed SSL certificate with the command below. If you have a valid SSL certificate for your hostname, you can specify that instead.
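A self-signed certificate can be generated with openssl; the paths below are the conventional Dovecot locations and are referenced again in the SSL section later:
root@ubuntu:~# openssl req -new -x509 -days 365 -nodes -out /etc/dovecot/dovecot.pem -keyout /etc/dovecot/private/dovecot.pem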
Modifying this parameter enables Postfix to use Dovecot's LMTP instead of its own LDA to save emails to the local mailboxes, thereby enabling local mail delivery for all the domains listed in the MySQL database.
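The parameter in question is virtual_transport in /etc/postfix/main.cf; a typical value is:
virtual_transport = lmtp:unix:private/dovecot-lmtp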
Last but not least, we need to tell Postfix that we're using an external database to manage the domains, users and aliases. We need to add the configuration paths from which these details are fetched from the database tables.
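In /etc/postfix/main.cf, these lookups point at the three map files we create next:
virtual_mailbox_domains = mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf
virtual_mailbox_maps = mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf
virtual_alias_maps = mysql:/etc/postfix/mysql-virtual-alias-maps.cf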
Now we need to create these files mentioned above one by one. Please see my file details below:
/etc/postfix/mysql-virtual-mailbox-domains.cf
root@ubuntu:~# cat /etc/postfix/mysql-virtual-mailbox-domains.cf
user = lnmailuser
password = lnmail123
hosts = 127.0.0.1
dbname = lnmailserver
query = SELECT 1 FROM virtual_domains WHERE name='%s'
root@ubuntu:~#
/etc/postfix/mysql-virtual-mailbox-maps.cf
root@ubuntu:~# cat /etc/postfix/mysql-virtual-mailbox-maps.cf
user = lnmailuser
password = lnmail123
hosts = 127.0.0.1
dbname = lnmailserver
query = SELECT 1 FROM virtual_users WHERE email='%s'
root@ubuntu:~#
/etc/postfix/mysql-virtual-alias-maps.cf
root@ubuntu:~# cat /etc/postfix/mysql-virtual-alias-maps.cf
user = lnmailuser
password = lnmail123
hosts = 127.0.0.1
dbname = lnmailserver
query = SELECT destination FROM virtual_aliases WHERE source='%s'
These files describe how Postfix connects to the external database. We need to restart Postfix after making these changes.
root@ubuntu:~# service postfix restart
We need to run the following commands to confirm connectivity and check whether Postfix is able to fetch the required information from the database.
To check whether Postfix finds your domain from the database, we can run this. This should return '1' if the attempt is successful.
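Using the sample domain registered earlier:
root@ubuntu:~# postmap -q linoxidemail.com mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf
1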
To check whether Postfix finds your email forwarder from the database, we can run this. This should return the email forwarder you set if the attempt is successful.
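Again with the sample alias from our table:
root@ubuntu:~# postmap -q info@linoxidemail.com mysql:/etc/postfix/mysql-virtual-alias-maps.cf
blogger1@linoxidemail.com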
Please note: you can connect securely with your email clients using Postfix on port 587. You can open the port by uncommenting the submission section in the Postfix master configuration: /etc/postfix/master.cf.
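In a stock master.cf, the first line of the section to uncomment looks roughly like this:
submission inet n       -       -       -       -       smtpd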
You need to restart Postfix after making any changes to the configuration. Using the telnet command, you can confirm whether the port is open.
Configuring Dovecot
Our next step is to configure our MDA to allow the POP3/IMAP protocols and to connect to the external database and to Postfix. We are mainly modifying the following files:
/etc/dovecot/dovecot.conf
/etc/dovecot/conf.d/10-mail.conf
/etc/dovecot/conf.d/10-auth.conf
/etc/dovecot/conf.d/10-master.conf
/etc/dovecot/conf.d/10-ssl.conf
It's always advised to take a backup of these files before making any configuration changes. We can modify each file one by one.
Modifying the dovecot main configuration file : /etc/dovecot/dovecot.conf
The following setting is uncommented by default. But we need to ensure that it is uncommented.
!include conf.d/*.conf
We can enable all required protocols in the protocols directive. If you need POP3, append pop3 to this line and also make sure to install the required dovecot package "dovecot-pop3d" to enable it.
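For example, with POP3 enabled alongside IMAP and LMTP, the directive would read:
protocols = imap pop3 lmtp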
Modifying the Dovecot Mail configuration file : /etc/dovecot/conf.d/10-mail.conf
We need to find the "mail_location" parameter in the configuration and update it with our mail storage path. I have my mail folders located inside the "/var/mail/vhosts/" folder, hence I modified the path as below:
mail_location = maildir:/var/mail/vhosts/%d/%n
We need to set the "mail_privileged_group" parameter to "mail".
mail_privileged_group = mail
Once this is done, we need to make sure we've set proper ownership and permissions for our mail folders. Create the mail folders inside "/var/mail/vhosts" for each domain we've registered in the MySQL table, and set proper ownership/permissions.
root@ubuntu:~# ls -ld /var/mail
drwxrwsr-x 2 root mail 4096 Apr 21 16:56 /var/mail
root@ubuntu:~# mkdir -p /var/mail/vhosts/linoxidemail.com
We created a separate user and group named "vmail" with ID 5000 and changed the mail folder ownership to it.
root@ubuntu:~# groupadd -g 5000 vmail
root@ubuntu:~# useradd -g vmail -u 5000 vmail -d /var/mail
root@ubuntu:~# chown -R vmail:vmail /var/mail
Modifying the Dovecot authentication file : /etc/dovecot/conf.d/10-auth.conf
Disable plain text authentication to ensure security by modifying the below parameter to "yes".
disable_plaintext_auth = yes
Modify the "auth_mechanisms" parameter as below:
auth_mechanisms = plain login
We need to comment out the auth-system.conf.ext include line and enable MySQL authentication by uncommenting the auth-sql.conf.ext line, as below:
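After the change, the two include lines should look like this:
#!include auth-system.conf.ext
!include auth-sql.conf.ext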
Modifying the Dovecot master configuration file : /etc/dovecot/conf.d/10-master.conf
We are modifying four sections in this configuration file: the IMAP section, the local mail transfer (LMTP) section, the authentication section and, lastly, the authentication worker process section.
Modifying the SSL configuration : /etc/dovecot/conf.d/10-ssl.conf
We're modifying this section to enable SSL for incoming/outgoing connections. These configuration settings are optional, but I'd recommend them for better security.
Change the SSL parameter to required
ssl = required
Specify the SSL cert and key file locations for our configuration, for example:
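With the self-signed certificate generated earlier, that would be:
ssl_cert = </etc/dovecot/dovecot.pem
ssl_key = </etc/dovecot/private/dovecot.pem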
You need to restart Dovecot after all these modifications.
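As with Postfix:
root@ubuntu:~# service dovecot restart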
That's all :) We've completed our Mail server setup. Hurray! You can access your email account using your username and password from any of your preferred email clients. I could successfully access my email account using the settings below:
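As a reference, the values were along these lines (the server name is your mail server's FQDN; exact ports depend on your client):
Username: blogger1@linoxidemail.com (the full email address)
Password: the password stored for that user in the virtual_users table
Incoming server: IMAP on port 143 with STARTTLS (or 993 with SSL)
Outgoing server: SMTP on port 587 with STARTTLS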
The Master-Slave replication in MySQL databases provides load balancing for the databases. But it does not provide any failover scenario. If the Master server breaks, we cannot execute queries directly on the slave server. In addition to load balancing, if we need failover in our scenario, we can setup 2 MySQL instances in Master-Master replication. This article describes how this can be achieved in 5 easy steps on Ubuntu 16.04 server.
In Master-Master replication, both servers play the role of master and slave for each other.
Each server serves as Master for the other slave at the same time. So if you are familiar with the Master-Slave replication in MySQL, this must be a piece of cake for you.
Pre-requisites:
This article assumes that you are running Linux based OS. MySQL server is also required. Following OS/packages are used for this demo:
Ubuntu 16.04 LTS (Xenial Xerus)
mysqld Ver 5.7.12-0ubuntu1.1
We are also using 2 servers that will be in Master-Master configuration. These servers are called:
LinoxideMasterLeft (IP - 192.168.1.101)
LinoxideMasterRight (IP - 192.168.1.102)
This setup will work on other Linux based OS’s as well but some configuration file paths might change.
Now let’s start with the steps used for MySQL replication:
Step 1: Install the MySQL server
The MySQL server needs to be installed on both servers. This step is the same for both:
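Shown here on LinoxideMasterLeft; run the same on LinoxideMasterRight:
root@LinoxideMasterLeft:~# apt-get update
root@LinoxideMasterLeft:~# apt-get install mysql-server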
This installation will prompt you to choose a MySQL root password. Choose a strong password and keep it safe with you.
Now depending upon your use case, you might want to replicate one database or multiple databases.
Use Case 1: You need to replicate only selected number of databases. The database names for replication are specified with “binlog_do_db” option in MySQL configuration file.
Use Case 2: You need all of your databases to replicate except a few. You may want to create new databases in the future and adding them to the list manually could be a problem. So in this case, don’t use the option “binlog_do_db”. MySQL will replicate all of your databases by default if you don’t put this option in the configuration. We just put the databases that don’t need to be replicated (like “information_schema” and “mysql”) with the option “binlog_ignore_db”.
You can also use both of these options simultaneously if that is what you want. For the purpose of this demo, we will replicate only 1 database(as in case 1).
The MySQL instances participating in replication are part of a (replication) group. Every server in this group must have a unique ID. While configuring our servers for replication, we need to make sure that this ID is not duplicated. We will see this in a while.
Step 2: Configure MySQL to listen on Private IP address.
In our setup, the MySQL configuration is included from files in another directory. Open the MySQL configuration file /etc/mysql/my.cnf to confirm that the line with “/etc/mysql/mysql.conf.d/” is present. (This file does nothing but include the files from other directories.)
Make sure that the following line is present in this file:
!includedir /etc/mysql/mysql.conf.d/
Now we will edit the file “/etc/mysql/mysql.conf.d/mysqld.cnf”.
The first thing that we want to do is enable MySQL daemon to listen on the private IP address. By default, the daemon binds itself with the loopback IP address. (You can also make it listen on the public IP address, but the DB servers generally do not need to be accessed directly from the internet). So we change the line:
bind-address = 127.0.0.1
To look like:
bind-address = 192.168.1.101
Make sure that you change this IP address to your server’s IP address.
We make the same changes on the other MySQL server.
Check /etc/mysql/my.cnf:
!includedir /etc/mysql/mysql.conf.d/
And make changes in /etc/mysql/mysql.conf.d/mysqld.cnf:
bind-address = 192.168.1.102
Step 3: Replication configuration
Now that our MySQL servers are set to listen on the Private IP addresses, it’s time to enable replication in MySQL configuration. Let’s start with LinoxideMasterLeft server.
In the same configuration file, look for the following lines:
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
# other settings you may need to change.
#server-id = 1
#log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
#binlog_do_db = include_database_name
#binlog_ignore_db = include_database_name
We need to uncomment these lines and mention the database that we are going to replicate. After changes, it will look like this:
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
# other settings you may need to change.
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
binlog_do_db = linoxidedb
#binlog_ignore_db = include_database_name
As we are replicating only one database, we don’t need to uncomment the line with “#binlog_ignore_db”.
Make the corresponding changes in the other server LinoxideMasterRight as well:
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
# other settings you may need to change.
server-id = 2
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
binlog_do_db = linoxidedb
#binlog_ignore_db = include_database_name
Now that the configuration files are changed on both servers, we will restart the MySQL service:
root@LinoxideMasterLeft:~# service mysql restart
And on the other server as well:
root@LinoxideMasterRight:~# service mysql restart
We can check that our configuration changes are loaded and that the server is listening on the correct IP address:
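One quick way is netstat, which should show mysqld bound to the private IP address on port 3306:
root@LinoxideMasterLeft:~# netstat -tlnp | grep mysql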
Step 4: Create the replication users
For MySQL replication, we need to create a new user that will have replication permission on all the databases. Let's create this user with the MySQL queries below:
Open the MySQL prompt on LinoxideMasterLeft server with the following command:
root@LinoxideMasterLeft:~# mysql -u root -p
Enter password:
Provide your password that you chose while MySQL server installation. It will drop you at the MySQL prompt. Enter the following commands at this prompt:
mysql> CREATE USER 'linoxideleftuser'@'%' identified by 'replicatepass';
Query OK, 0 rows affected (0.09 sec)
mysql> GRANT REPLICATION SLAVE ON *.* TO 'linoxideleftuser'@'%';
Query OK, 0 rows affected (0.00 sec)
Now we create the similar user on the other server LinoxideMasterRight:
mysql> CREATE USER 'linoxiderightuser'@'%' identified by 'replicatepass';
Query OK, 0 rows affected (0.04 sec)
mysql> GRANT REPLICATION SLAVE ON *.* TO 'linoxiderightuser'@'%';
Query OK, 0 rows affected (0.00 sec)
Step 5: Configure MySQL Master on both servers:
Now in this last step, we tell each server that the other server is the master from which it is syncing.
Step 5.1: Tell LinoxideMasterRight about its master:
First of all, we will check the Master status of LinoxideMasterLeft server. Run the following command at MySQL prompt to check the master status:
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 | 1447 | linoxidedb | | |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)
Here, we need 2 pieces of information: the File (mysql-bin.000001) and the Position (1447) for setting up this server as master of LinoxideMasterRight (along with the username and password we set in the last step).
Run the following command on LinoxideMasterRight to tell it that LinoxideMasterLeft is its master:
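Using the file and position from the output above, together with the replication user created in step 4, run these at the LinoxideMasterRight MySQL prompt:
mysql> STOP SLAVE;
mysql> CHANGE MASTER TO MASTER_HOST='192.168.1.101', MASTER_USER='linoxideleftuser', MASTER_PASSWORD='replicatepass', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1447;
mysql> START SLAVE;
Step 5.2: Tell LinoxideMasterLeft about its master:
Repeat the same procedure in the other direction: run show master status; on LinoxideMasterRight, then point LinoxideMasterLeft at 192.168.1.102 using the linoxiderightuser credentials and the File/Position values from that output.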
That’s it. Phew... That was a lot of configuration. Now that we have done so much work, let’s check that our configuration is working. Note that the next step is optional and is not part of the MySQL Master-Master replication setup itself.
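First, let's create the replicated database on LinoxideMasterLeft:
mysql> CREATE DATABASE linoxidedb;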
Let’s check if this database is created on LinoxideMasterRight:
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| linoxidedb |
| mysql |
| performance_schema |
| sys |
+--------------------+
5 rows in set (0.00 sec)
Now we will create a table in this database from LinoxideMasterRight and check from the other server.
Run the following command on LinoxideMasterRight:
mysql> CREATE TABLE linoxidedb.testuser ( id INT, name VARCHAR(20));
Query OK, 0 rows affected (0.40 sec)
Let’s check this table from LinoxideMasterLeft:
mysql> show tables;
+----------------------+
| Tables_in_linoxidedb |
+----------------------+
| testuser |
+----------------------+
1 row in set (0.00 sec)
Voila. Our replication is working fine.
As you can see, Master-Master replication is nothing more than configuring two servers in Master-Slave mode for each other. In a Master-Slave configuration, you need to make sure that no query (except replication queries) is executed on the slave server, or else replication breaks. But in the case of Master-Master replication, queries can run on either of the two servers, thus providing us with a fault-tolerant and safe environment.