
How to Install and Setup Asterisk 13 (PBX) on Centos 7.x


Asterisk (PBX) is an open source communication server released under the GPL license and maintained by Digium and the Asterisk community. Asterisk is used for creating communication applications that turn an ordinary computer into a communication server. By using Asterisk in your network environment, you can connect your employees from home to the office PBX over broadband connections and also give them voicemail integrated with the web and e-mail.

The benefits of using Asterisk are numerous, so follow this tutorial to set up your own Asterisk server at home, in the office or across your organization and enjoy its great features to fill communication gaps.

Asterisk Prerequisites

We will be using CentOS 7 with a minimal package installation for the Asterisk setup, so make sure that your system is up to date and that you have root privileges to install the required packages.

1) System Update

After logging in as root, update the system with the command below.

# yum update

2) Installing Required Packages

Once your system is patched with the latest updates, you have to install the development tools and other packages that are necessary for a successful build. You can do this with the command below, which installs all of the required packages along with their dependencies.

[root@centos-7 ~]# yum install gcc gcc-c++ php-xml php php-mysql php-pear php-mbstring mariadb-devel mariadb-server mariadb sqlite-devel lynx bison gmime-devel psmisc tftp-server httpd make ncurses-devel libtermcap-devel sendmail sendmail-cf caching-nameserver sox newt-devel libxml2-devel libtiff-devel audiofile-devel gtk2-devel uuid-devel libtool libuuid-devel subversion kernel-devel kernel-devel-$(uname -r) git subversion kernel-devel php-process crontabs cronie cronie-anacron wget vim

installing prerequisites

Check the list of packages that are going to be installed on your system and press "Y" to continue; this will take about 125 MB of your disk space.

After successful operation you will see the list of installed packages and updates including all its required dependencies.

3) Setup MariaDB

By default after installation we can connect to the database without a password. So first we will start and enable the MariaDB service as shown in the image below, and then set its root password.

mariadb service
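For reference, the service shown in the image can be started and enabled from the terminal with the standard systemd commands:

[root@centos-7 ~]# systemctl start mariadb
[root@centos-7 ~]# systemctl enable mariadb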

Once the MariaDB service is running, run the command below to set its root password, remove the anonymous user and the test database, and disallow remote root login.

[root@centos-7 ~]# mysql_secure_installation

4) Installing libjansson

Jansson is a C library for encoding, decoding and manipulating JSON data. Let's download, unpack and compile it using the below command.

# wget http://www.digip.org/jansson/releases/jansson-2.7.tar.gz

jansson download

To extract this package use below command.

# tar -zxvf jansson-2.7.tar.gz

Then change into the extracted directory and configure the package as shown.

configure jansson
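If you prefer typing the commands over following the screenshot, this step boils down to the following (the directory name comes from the 2.7 archive extracted above):

[root@centos-7 ~]# cd jansson-2.7
[root@centos-7 jansson-2.7]# ./configure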

5) Make Jansson

Now, to compile the configured package, run the 'make' and 'make install' commands below from the same directory to build a fully functional Jansson library.

[root@centos-7 jansson-2.7]# make clean
[root@centos-7 jansson-2.7]# make && make install
[root@centos-7 jansson-2.7]# ldconfig

Installing Asterisk 13.5.0

Now for the most important download: Asterisk itself. Let's download the current release from the official Asterisk Download Page. We will use the 'wget' command to fetch the package, so change to your working directory and run the command shown below.

wget asterisk
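The download step in the screenshot amounts to a single wget call; a typical command for the 13.5.0 release used in this guide would be the following (the URL layout on downloads.asterisk.org may change, so double-check it against the download page):

[root@centos-7 ~]# wget http://downloads.asterisk.org/pub/telephony/asterisk/asterisk-13.5.0.tar.gz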

Using the commands below, unpack the package, change into its directory and then run the configure script.

[root@centos-7 ~]# tar -zxvf asterisk-13.5.0.tar.gz
[root@centos-7 ~]# cd asterisk-13.5.0
[root@centos-7 asterisk-13.5.0]# ./configure --libdir=/usr/lib64

configure asterisk

Upon successful completion of the configure step you will see the Asterisk logo as shown below.

asterisk installation

Asterisk Modules Setup

Now in the next few steps we will configure Asterisk for its necessary modules.

1) Asterisk Main menu Selection

In order to setup your menu items, let's run the below command and then choose the appropriate options.

[root@centos-7 asterisk-13.5.0]# make menuselect

Once you run this command a new window opens where you will see that, for the most part, all the necessary modules are already included. You can add or remove items, and when you select a module a brief description of its purpose is shown. Under Add-ons, select 'format_mp3' to enable mp3 support as shown below.

add-on module

Then move on to the Core Sound Packages and select the audio formats as shown in the image.

Sound Packages

Then select all the packages under "Music On Hold Packages", and from "Extra Sound Packages" choose the four packages whose names start with EN (English). Finally, choose "Save and Exit" to proceed to the next step.

2) Loading mp3 Libraries

Now run the following command to download the mp3 decoder library into the source tree.

[root@centos-7 asterisk-13.5.0]# contrib/scripts/get_mp3_source.sh

mp3 modules

3) Modules Installation

Now we will proceed with the installation of the selected modules using the 'make' command.

make modules
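In plain terms, the build step shown in the screenshot is simply:

[root@centos-7 asterisk-13.5.0]# make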

Once Asterisk has been built successfully, run the following command to install it.

[root@centos-7 asterisk-13.5.0]# make install

In response to the above command you will be greeted with the output below at the end of the installation.

+---- Asterisk Installation Complete -------+
+ +
+ YOU MUST READ THE SECURITY DOCUMENT +
+ +
+ Asterisk has successfully been installed. +
+ If you would like to install the sample +
+ configuration files (overwriting any +
+ existing config files), run: +
+ +
+ make samples +
+ +
+----------------- or ---------------------+
+ +
+ You can go ahead and install the asterisk +
+ program documentation now or later run: +
+ +
+ make progdocs +
+ +
+ **Note** This requires that you have +
+ doxygen installed on your local system +
+-------------------------------------------+

Here we will run the below commands to install sample configuration files as indicated above.

[root@centos-7 asterisk-13.5.0]# make samples
[root@centos-7 asterisk-13.5.0]# make config

Setup Asterisk User

You can create a separate user and grant it the necessary permissions so that Asterisk runs under its own user and group. To do so, run the commands below.

[root@centos-7 asterisk-13.5.0]# useradd -m asterisk
[root@centos-7 asterisk-13.5.0]# chown asterisk.asterisk /var/run/asterisk
[root@centos-7 asterisk-13.5.0]# chown -R asterisk.asterisk /etc/asterisk
[root@centos-7 asterisk-13.5.0]# chown -R asterisk.asterisk /var/{lib,log,spool}/asterisk
[root@centos-7 asterisk-13.5.0]# chown -R asterisk.asterisk /usr/lib64/asterisk
[root@centos-7 asterisk-13.5.0]# systemctl restart asterisk
[root@centos-7 asterisk-13.5.0]# systemctl status asterisk

asterisk status

Setup Firewall Rules

Now let's set up security. On CentOS 7, FirewallD is used by default instead of iptables.
Using the two commands below you can start and enable the firewalld service.

[root@centos-7 ~]# systemctl start firewalld
[root@centos-7 ~]# systemctl enable firewalld

Now allow access to the ports used by the Asterisk PBX by adding the following rules.

[root@centos-7 ~]# firewall-cmd --zone=public --add-port=5060/udp --permanent
success
[root@centos-7 ~]# firewall-cmd --zone=public --add-port=5060/tcp --permanent
success
[root@centos-7 ~]# firewall-cmd --zone=public --add-port=5061/udp --permanent
success
[root@centos-7 ~]# firewall-cmd --zone=public --add-port=5061/tcp --permanent
success
[root@centos-7 ~]# firewall-cmd --zone=public --add-port=4569/udp --permanent
success
[root@centos-7 ~]# firewall-cmd --zone=public --add-port=5038/tcp --permanent
success
[root@centos-7 ~]# firewall-cmd --zone=public --add-port=10000-20000/udp --permanent
success

To load new firewall rules, use the below command.

[root@centos-7 ~]# firewall-cmd --reload

To confirm that all rules have been added, list them using the command shown in the image.

firewall rules
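If you prefer checking from the terminal rather than the screenshot, the permanently added ports can be listed like this:

[root@centos-7 ~]# firewall-cmd --zone=public --list-ports --permanent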

Setup Asterisk Database

Let's connect to MariaDB, create a new user and the databases, and then grant the user full privileges on them using the following commands.

[root@centos-7 ~]# mysql -u root -p
Enter password:******
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 11
Server version: 5.5.44-MariaDB MariaDB Server

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> create user 'asterisk'@'localhost' identified by '******';
MariaDB [(none)]> create database asterisk;
MariaDB [(none)]> create database cdrdb;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON asterisk.* TO asterisk@localhost IDENTIFIED BY '******';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cdrdb.* TO asterisk@localhost IDENTIFIED BY '******';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]>

Launching Asterisk

Launch the Asterisk console for the first time after the setup on CentOS 7 using the command below, which attaches to the running Asterisk service.

[root@centos-7 ~]# asterisk -r

launch asterisk

Conclusion

Asterisk 13.5.0 (PBX) has been successfully installed on CentOS 7 with its required modules. If you find any mistakes or inconsistencies in the article, we would be grateful if you let us know in the comments.



How to Setup iRedMail Server on Ubuntu 14.04 / 15.04


Hello everyone, today we will set up an iRedMail server on the Ubuntu 15.04 64-bit operating system, which is currently a supported release for iRedMail. iRedMail is a full-featured open source mail server solution that supports all major Linux distributions and lets you create unlimited virtual mail accounts with POP3/IMAP and AJAX webmail support. iRedMail is essentially a bash program that installs the mail server software from the official repositories of the Linux/BSD distributions.

So, if you are looking for a free email server then iRedMail is the best choice for you to setup by following simple steps in this article whether you are using Ubuntu or any other Linux distribution.

Setup Your Ubuntu Server

An iRedMail server needs to be set up on a fresh operating system with a minimal package installation. So make sure that your operating system is up to date with the latest updates and security patches and that no mail server related packages are installed on it.

Let’s login to your Ubuntu 15.04 instance with root or sudo privileged user and run system update command as shown.

# apt-get update

Then set the server's hostname and add a matching entry to the hosts file.

#vim /etc/hostname

#vim /etc/hosts
72.25.10.171 iredmail.linoxide.com iredmail
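After editing these files you can verify that the fully qualified hostname resolves as expected (iRedMail checks for a fully qualified hostname during installation); the hostname below is simply the one used in this walkthrough:

# hostname -f
iredmail.linoxide.com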

Download iRedMail Package

You can get the installation package from the official web link iRedMail Download Page or use the below shown wget command by following the download link.

# wget https://bitbucket.org/zhb/iredmail/downloads/iRedMail-0.9.2.tar.bz2

iRedMail Download

Once your package download is complete, extract it within the same directory using the command below.

# tar xjf iRedMail-0.9.2.tar.bz2

Extract package

Start iRedMail Server Installation

We are ready to start the iRedMail installation using its bash script, so let's change into the extracted directory, make the script executable and run it as shown to start the installation.

start script
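The commands behind the screenshot look roughly like this; the directory and script names assume the 0.9.2 tarball downloaded above:

# cd iRedMail-0.9.2/
# chmod +x iRedMail.sh
# bash iRedMail.sh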

Once you have executed the installation script you will be presented with the welcome screen of the wizard; choose "Yes" and proceed to the next steps.

iRedMail Setup

Default Mail Storage

Here you can specify the path where your mailboxes will be stored, or simply keep the default location, then click the "Next" button to continue.

iredmailbox location

Choose Web Server

iRedMail gives you the opportunity to choose your preferred web server, either Nginx or Apache; use the space bar to select one and press "Next".

iRedMail Web

Choose Database Server

Let's choose your favorite database from the options iRedMail offers; in this tutorial we will be using MySQL, one of the best known open source databases.

iredmail db

After choosing the database server you will be asked to set up the database administration password, so specify your MySQL root password and click the Next button to proceed.

MySql db

Setup Virtual Domain

Choose the first virtual domain name that you want to set up on the iRedMail server; it must be different from the server's hostname.

virtual domain

Next you will be asked to set the administrator password for the newly created virtual domain user, so type your password and click the Next button.

admin user

Optional Components

In this section you can choose some optional components, a few of which are recommended; use the space bar to select or deselect them and click the Next button.

iredmail components

Finalize Configuration

We are almost done with the pre-installation configuration of iRedMail; now type "y" to continue the installation process, which will take a while to install all components.

finalize Configurations

Once the installation process is complete you will be asked whether to use the firewall rules provided by iRedMail; press "Y" if your firewall is enabled and then choose yes again to restart your firewall service as shown in the image below.

iredmail setup

At the end of installation process you will be greeted with URLs of installed web applications including Webmail, user credentials and other links as shown below.

********************************************************************
* URLs of installed web applications:
*
* - Webmail:
* o Roundcube webmail: httpS://iredmail.linoxide.com/mail/
* o SOGo groupware: httpS://iredmail.linoxide.com/SOGo/
*
* - Web admin panel (iRedAdmin): httpS://iredmail.linoxide.com/iredadmin/
*
* You can login to above links with same credential:
*
* o Username: postmaster@test.linoxide.com
* o Password: *******
*
*
********************************************************************
* Congratulations, mail server setup completed successfully. Please
* read below file for more information:
*
* - /root/iRedMail-0.9.2/iRedMail.tips
*
* And it's sent to your mail account postmaster@test.linoxide.com.
*
********************* WARNING **************************************
*
* Rebooting your system is required to enable mail services.
*
********************************************************************

Now, as recommended above, restart the server; after that we will access the admin console.

# shutdown -r 0

Using iRedMail Web Admin Panel

When your Ubuntu server is back after reboot, open your web browser and access its web admin console with the above provided URL and put your login credentials.

httpS://your_own_domain_or_ip/iredadmin/

iredmail login

After a successful login you will be directed to the iRedMail admin control panel, where you can add new domains, create users and check your system logs.

iRedMail Dashboard

Access Your Webmail

Using the same username and password, open either of the webmail links below to access your mailbox and start sending and receiving email.

Roundcube webmail: httpS://your_domain_name/mail/

SOGo groupware: httpS://your_domain_name/SOGo/

Sogo Webmail

Conclusion

Congratulations! We have successfully set up an iRedMail server in no time and without any hassle. So, if you are looking for a free email solution for your company, organization or personal use, give iRedMail a try. It provides almost every feature you would expect from a webmail system, along with a user friendly admin control panel. Don't hesitate to leave a comment if you run into any difficulty during the setup.


How to Install OpenVAS Vulnerability Scanner on Centos 7.0


The Open Vulnerability Assessment System (OpenVAS) is one of the most important and useful open source solutions for vulnerability scanning and vulnerability management. Vulnerability scanning is a crucial phase of penetration testing that helps discover vulnerable items which might be the cause of a serious breach. OpenVAS provides an effective set of penetration testing tools to ensure that we are not exposed to known threats.

OpenVAS is widely used around the world by security experts and ordinary users alike; it is an all-in-one suite of tools that work together to run tests against client computers using its own database of known weaknesses and exploits.

So, in today's article we will show you its installation and configuration on CentOS 7 so you can check how well your servers are protected against attacks.

Base System

We will be using CentOS Linux 7 (Core) with a basic installation of system packages to set up OpenVAS. The VM used here has 2 GB of RAM and 2 CPUs.

Once your Linux VM with CentOS 7 is ready, let’s login with root credentials to update your system using below command.

# yum update

Setup Atomicorp Repository

Now we will set up the freely available Atomicorp repository, from the makers of the well-known Atomic Secured Linux, which protects against thousands of risks and vulnerabilities automatically.

Let’s issue the following command to get it installed on your centos server.

~]# wget -q -O - http://www.atomicorp.com/installers/atomic |sh

This installs the Atomic Free Unsupported Archive installer, version 2.0.14.

Atomicorp Repo

To proceed, accept the default "Yes" option to agree to the Atomicorp terms. The installer will then configure the "atomic" yum archive for your operating system; type "Yes" once more to enable the repository. After that the Atomic Rocket Turtle archive will be installed and configured for your system, as you can see in the snapshot.

Atomicorp setup

Installing OpenVAS

Now we will run the simple yum command to install OpenVAS using its pre configured atomic repository.

~]# yum install openvas

installing openvas

The system will check and resolve the dependencies and show a transaction summary listing all of the dependent packages that will be installed. Type "y" to continue with the installation.

Transaction Summary
===============================================
Install 1 Package (+157 Dependent packages)
Upgrade ( 1 Dependent package)

Total download size: 57 M
Is this ok [y/d/N]: y

The process finishes after installing OpenVAS and its dependent packages.

Setup OpenVAS

After the successful installation of OpenVAS, we will run its setup to configure its various parameters; this will start downloading the latest databases from the internet.

So, first of all run its setup command in the terminal as shown.

~]# openvas-setup

Step 1: Updating NVT, CERT and Scap DB

The first step of the OpenVAS setup is to update the NVT, CERT and SCAP data as shown in the image below.

OpenVAS Setup

Here we will choose the default option; downloading the data and building the database will take a couple of minutes, so it is better to wait and let the process complete without interruption.

Step 2: Configure GSAD

In this step we will configure the IP address settings for GSAD, the Greenbone Security Assistant, a web-based front end for managing scans. We will keep the default settings here to allow connections from any IP.

GSAD Settings

Step 3: Choose the GSAD admin users password

This is the last step of the OpenVAS setup, where we set the username and password for the GSAD administrator account.

GSAD user

Login to Greenbone Security Assistant

After the GSAD setup is complete, we can access its GUI from any web browser using the server's IP or FQDN together with the default port.

https://your_servers_ip:9392/
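If firewalld is active on the server, you may also need to open the GSAD port before the page will load; a typical pair of commands would be:

# firewall-cmd --zone=public --add-port=9392/tcp --permanent
# firewall-cmd --reload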

You will be directed to the login page; provide the credentials that you configured in the previous step.

GSAD Login

Welcome to Greenbone Security Assistant

Congratulations! We have successfully set up OpenVAS with the Greenbone Security Assistant. The dashboard already provides basic guidance for scanning an IP address, and its other features can be reached from the top bar.

GSAD Dashboard

Starting Your First Scan

Now, in order to scan a host or IP, enter the IP or hostname in the field at the top right and click the "Start Scan" button. However, you will not be able to run any scans yet, as you'll get the error shown in the report below.

GSAD scan error

To resolve this issue we need to make a few changes to the redis configuration and SELinux settings, which can be done with the following commands.

# echo "unixsocket /tmp/redis.sock" >> /etc/redis.conf
# sed -i 's/enforcing/disabled/g' /etc/selinux/config
# systemctl enable redis.service
ln -s '/usr/lib/systemd/system/redis.service' '/etc/systemd/system/multi-user.target.wants/redis.service'
# shutdown -r 0

Once the server is back after reboot, rescan your host or IP from the Greenbone Security Assistant Dashboard by providing the login details first.

GSAD Scan

You can now see the progress bar; it might take a few minutes to complete the scan. Once the host/IP scan is complete, click on the scan completion date to see the report as shown below.

GSAD scan report

Conclusion

You now have a fully functional OpenVAS server set up for scanning your hosts, spotting vulnerabilities and highlighting the areas to focus on when hardening your servers.

If you still face any issue while setting it up or running your scans, feel free to get back to us; we will be glad to assist you.


How to Setup IonCube Loaders on Ubuntu 14.04 / 15.04


IonCube Loader is an encryption/decryption utility for PHP applications which also assists in speeding up the pages that are served. It protects your website's PHP code from being viewed and run on unlicensed computers. Using ionCube-encoded and secured PHP files requires a component called the ionCube Loader to be installed on the web server and made available to PHP, and it is required by a lot of PHP-based applications. It handles the reading and execution of encoded files at run time. PHP can use the loader with one line added to the PHP configuration file, 'php.ini'.

Prerequisites

In this article we will install the ionCube Loader on Ubuntu 14.04/15.04 so that it can be used in all PHP modes. The only requirements for this tutorial are a "php.ini" file on your system and a LEMP stack installed on the server.

Download IonCube Loader

Log in to your Ubuntu server and download the latest ionCube Loader package matching your operating system architecture, depending on whether you are running a 32-bit or 64-bit OS. You can get the package by issuing the following command as the root user or with superuser privileges.

# wget http://downloads3.ioncube.com/loader_downloads/ioncube_loaders_lin_x86-64.tar.gz

download ioncube

After downloading, unpack the archive into the "/usr/local/src/" folder by issuing the following command.

# tar -zxvf ioncube_loaders_lin_x86-64.tar.gz -C /usr/local/src/

extracting archive

After extracting the archive we can see the list of all modules it contains, but we only need the one matching the PHP version installed on our system.

To check your PHP version, you can run the below command to find the relevant modules.

# php -v

ioncube modules

The output of the above command shows that the PHP version installed on the system is 5.6.4, so we need to copy the appropriate module into the PHP modules folder.

To do so we will create a new folder with name "ioncube" within the "/usr/local/" directory and copy the required ioncube loader modules into it.

root@ubuntu-15:/usr/local/src/ioncube# mkdir /usr/local/ioncube
root@ubuntu-15:/usr/local/src/ioncube# cp ioncube_loader_lin_5.6.so ioncube_loader_lin_5.6_ts.so /usr/local/ioncube/

PHP Configuration

Now we need to add the following line to the PHP configuration file "php.ini", located in the "/etc/php5/cli/" folder, and then restart the web server and PHP services.

# vim /etc/php5/cli/php.ini

ioncube zend extension
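The line added in the screenshot is the zend_extension directive pointing at the loader that matches the installed PHP version; for the 5.6 loader copied above it would look something like this:

zend_extension = /usr/local/ioncube/ioncube_loader_lin_5.6.so

If PHP-FPM uses a separate php.ini (for example /etc/php5/fpm/php.ini), the same line is usually needed there as well.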

In our scenario we have Nginx web server installed, so we will run the following commands to start its services.

# service php5-fpm restart
# service nginx restart

web services

Testing IonCube Loader

To test the ionCube loader in the PHP configuration of your website, create a test file called "info.php" with the content below and place it in the web root of your web server.

# vim /usr/share/nginx/html/info.php
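The test file only needs the standard phpinfo() call:

<?php phpinfo(); ?>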

Then save the file, reload the web server services and access "info.php" in your browser using your domain name or the server's IP address.

You will be able to see the below section at the bottom of your php modules information.

php info

From the terminal, issue the following command to verify the PHP version; the output shows that the ionCube PHP Loader is enabled.

# php -v

php ioncube loader

The output of the PHP version command clearly indicates that the ionCube loader has been successfully integrated with PHP.

Conclusion

In this tutorial you learnt how to install and configure the ionCube Loader on Ubuntu with the Nginx web server; the steps are essentially the same for any other web server. Installing the loader is simple when it is done correctly, and on most servers the installation will work without a problem. However, there is no such thing as a "standard PHP installation": servers can be set up in many different ways, with different features enabled or disabled.

If you are on a shared server, make sure that you have run the ioncube-loader-helper.php script and clicked the link to test the run-time installation. If you still face any issue while doing your setup, feel free to contact us and leave a comment.


Howto Install and Configure FEMP Stack (FreeBSD 10.2, Nginx, MariaDB, PHP)


FreeBSD is a free and open source Unix-like operating system descended from the Berkeley Software Distribution (BSD), and it is one of the most popular server platforms, designed with advanced networking, security and storage features. FreeBSD has been ported to a variety of processor architectures, including PowerPC, ARM and others.

Nginx (pronounced "engine-x") is a free and open source high-performance HTTP server. Nginx focuses on high concurrency, performance and low memory usage, which makes it one of the most popular web servers. You can also use nginx as a reverse proxy for the HTTP, HTTPS, SMTP, POP3 and IMAP protocols, as well as a load balancer.

MariaDB is a relational database management system (RDBMS) forked from MySQL; it is a drop-in replacement for MySQL developed by some of the original MySQL authors. MariaDB strives to be the logical choice for database professionals looking for a robust, scalable and reliable SQL server.

FPM (FastCGI Process Manager) is an alternative PHP FastCGI implementation. It comes with additional features such as process management, stdout and stderr logging, accelerated upload support, the ability to start workers with different uid/gid and to listen on different ports. It is very useful for heavily loaded sites.

Basic Software Installation in FreeBSD

FreeBSD is bundled with a large collection of applications and system tools. In addition, you will need third-party applications for your system, and FreeBSD provides two ways to install them:

  • Ports collection - For installing packages from source code.
  • Binary packages - For installing pre-built binary packages.

And in this tutorial, I will use "Binary packages" method with pkg command to install FEMP Stack.

Prerequisites

  • FreeBSD 10.2 with IP Address : 192.168.1.110
  • Root privileges

Step 1 - Update system

To get started with the installation, log in to your FreeBSD server via SSH or the console. Make sure your system is up to date; you can use the commands below to update it:

freebsd-update fetch
freebsd-update install

And please install nano editor to complete the preparation with pkg command :

pkg install nano

Step 2 - Install Nginx

Before you start the nginx installation, you can search for the package and check the available nginx versions with the "pkg search" command, for example:

pkg search nginx

nginx-1.8.0_3,2
nginx-devel-1.9.2_2

You will get the two available versions of nginx; in this tutorial we will install the stable 1.8 version.

Now begin installing nginx with the following command :

pkg install nginx-1.8.0_3,2

Step 3 - Configure Nginx

Now go to the directory "/usr/local/etc/nginx/" to edit nginx file configuration.

cd /usr/local/etc/nginx/

and rename the "nginx.conf" file to another name to keep it as a backup.

mv /usr/local/etc/nginx/nginx.conf /usr/local/etc/nginx/nginx.conf.original

and now create new file for nginx configuration "nginx.conf" with nano :

nano nginx.conf

And paste the following nginx configuration into it:

# Define user that runs nginx
user  www;
worker_processes  2;

# Define error log
error_log /var/log/nginx/error.log info;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    # Define access log
    access_log /var/log/nginx/access.log;

    sendfile        on;
    keepalive_timeout  65;

    server {
        listen       80;
        server_name  localhost;

        # Define web data
        root /usr/local/www/nginx;
        index index.php index.html index.htm;

        location / {
            try_files $uri $uri/ =404;
        }

        error_page      500 502 503 504  /50x.html;
        location = /50x.html {
            root /usr/local/www/nginx-dist;
        }

        # Configuration for PHP-FPM
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass unix:/var/run/php-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $request_filename;
            include fastcgi_params;
        }
    }
}

Save and exit.

Next, create a new "nginx" directory and the two log files "error.log" and "access.log" under the "/var/log/" directory:

mkdir -p /var/log/nginx/
touch /var/log/nginx/{error,access}.log

Now go to the web data directory "/usr/local/www/" and remove the "nginx" directory there; it is a symlink to the "nginx-dist" directory. Then copy that directory in its place:

cd /usr/local/www/
rm -rf nginx/
cp -r nginx-dist/ nginx/

Once that is complete, add nginx to the startup configuration in the "/etc/rc.conf" file. You could add the line manually with the nano editor or an "echo" command, but here we will use the "sysrc" command to do it:

sysrc nginx_enable=yes

Before you start nginx, make sure the configuration is correct with this command :

nginx -t

You can see following results, if there is no mistake in the nginx configuration file.

nginx result

And now you start nginx :

service nginx start

Just open in the browser your server IP : 192.168.1.100

Nginx started

Step 4 - Install and Configure MariaDB

In this step you will be guided through installing MariaDB 10.0, but you can install another version as well; let's first search for all MariaDB versions available in the repositories:

pkg search mariadb

mariadb100-client-10.0.21
mariadb100-server-10.0.21
mariadb53-client-5.3.12_7
mariadb53-scripts-5.3.12_6
mariadb53-server-5.3.12_6
mariadb55-client-5.5.44
mariadb55-server-5.5.44

There are three versions of MariaDB available; let's install the 10.0 version.

pkg install mariadb100-server-10.0.21 mariadb100-client-10.0.21

And you will get a message about mariadb configuration in freebsd.

MariaDB Message

Step 5 - Configure MariaDB

Now change to the "/usr/local/share/mysql/" directory, which contains the sample MySQL configuration files.

cd /usr/local/share/mysql/
ls -lah my*.cnf

and you need to copy one configuration file "my-medium.cnf" to "/usr/local/etc/" directory.

cp my-medium.cnf /usr/local/etc/my.cnf

Add mariaDB to the startup application with "sysrc" command :

sysrc mysql_enable=yes

So let's start mariaDB :

service mysql-server start

Once MariaDB is started, configure the root password and secure the installation with the command below:

mysql_secure_installation

Enter current password for root (enter for none):
#Just press Enter here
Change the root password? [Y/n] Y
#Type your password for mariadb here
Remove anonymous users? [Y/n] Y
Disallow root login remotely? [Y/n] Y
Remove test database and access to it? [Y/n] Y
Reload privilege tables now? [Y/n] Y

If that is complete, try to connect and log in to your MariaDB shell:

mysql -u root -p
#type your root password

MariaDB Login

Step 6 - Install PHP-FPM

In this tutorial we will install PHP version php56-5.6.13. If you want to install another version, you can search for it with the "pkg search" command. Now let's install it:

pkg install php56-5.6.13 php56-mysqli-5.6.13

And then configure it.

Step 7 - Configure PHP-FPM

Next, you must configure the php-fpm service; go to the "/usr/local/etc" directory, where the php-fpm configuration file is stored.

cd /usr/local/etc/

edit "php-fpm.conf" file with nano editor :

nano php-fpm.com

Now move the cursor to line 164, which controls how FastCGI requests are handled. By default php-fpm listens on localhost port 9000 ("listen = 127.0.0.1:9000"), but we will change this configuration so that php-fpm handles all requests through a unix socket.

;listen = 127.0.0.1:9000
listen = /var/run/php-fpm.sock

Then move to line 175, where you set the permissions for the unix socket. On FreeBSD php-fpm runs under the user "www", and for security reasons you need to make sure the unix socket is owned by the same user that runs nginx.

#Just uncomment the line
listen.owner = www
listen.group = www
listen.mode = 0660

Now configure "php.ini" in the "/usr/local/etc/" directory:

cd /usr/local/etc/
nano php.ini
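On a fresh FreeBSD PHP installation the php.ini file may not exist yet; if that is the case, it can usually be created from the bundled production template first:

cp /usr/local/etc/php.ini-production /usr/local/etc/php.ini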

Search for the line "cgi.fix_pathinfo", uncomment and set the value to "0".

cgi.fix_pathinfo=0

Next, add php-fpm to running at the boot time :

sysrc php_fpm_enable=yes

and start php-fpm :

service php-fpm start

Step 8 - Testing Nginx and PHP-FPM Configuration

You need to test that the nginx and php-fpm configuration is correct before using it. Just create a new file "info.php" in the web data directory "/usr/local/www/nginx/".

cd /usr/local/www/nginx/
nano info.php

paste following php code :

<?php phpinfo(); ?>

and go to your browser and access the server IP : 192.168.1.100/info.php

Nginx and PHP-FPM

Now you can see "Nginx and PHP-FPM" is running in the freebsd.

Conclusion

FreeBSD is a reliable operating system. It provides two ways to install programs, and the easiest is the binary package method with the pkg command. FEMP on FreeBSD is almost the same as LEMP on Linux, which makes it easy for Linux users to configure; the main differences are in the configuration file layout, because FreeBSD uses a different filesystem structure than Linux.


How to Configure nftables to Serve Internet


Hello everyone! This time I will show how to install nftables on a Linux box to serve as a firewall and internet gateway: how to build the Linux kernel with nftables enabled, how to install the nftables user-space tools and their dependencies, and how to use the nft utility to perform network filtering and IP address translation.

The nftables project is intended to replace the current netfilter tools such as iptables, ebtables, arptables and the kernel-space infrastructure with a renewed one and a user-space tool, nft, which has a simplified and cleaner syntax, but maintains the essence of the tools that we use nowadays.

Check your kernel

Nftables has been in the Linux kernel tree since kernel 3.13, and you only need to enable the nftables-related symbols using the usual kernel config tools and build it. However, the masquerade and redirect network address translation targets were introduced in kernels 3.18 and 3.19 respectively, and they are needed for NAT.

Get your kernel release number with the following command

uname -r

To check if nf_tables module is already compiled try this

modinfo nf_tables

You should see information relevant to the module, but if you get an error, you will need another kernel.

Building a nftables compatible kernel

Let's compile kernel 4.2; it is the latest stable kernel at the time of writing and has everything we need for nftables.

Enter /usr/src

cd /usr/src

Download xz package of the Linux kernel from kernel.org

wget --no-check-certificate https://www.kernel.org/pub/linux/kernel/v4.x/linux-4.2.tar.xz

Extract the sources on the xz package

tar Jxvf linux-4.2.tar.xz

Move your old Linux kernel tree

mv linux linux-old

Create a link to the new Linux tree

ln -s linux-4.2 linux

Copy your old .config to the new kernel tree

cp linux-old/.config linux/.config

And then enter the Linux kernel tree

cd linux

Now prepare your old .config for the new kernel with the olddefconfig target, which keeps your current kernel settings and sets new symbols to their defaults.

make olddefconfig

Now use the menuconfig target to navigate through the curses-like menu and follow the options related to nftables.

make menuconfig

Networking support

menuconfig - network support

Networking options

Networking options

Network packet filtering framework (Netfilter)

Network packet filtering framework

Core Netfilter Configuration

 

Enter core Netfilter settings

Enable Netfilter nf_tables support and related modules

Enable nftables and related modules

Enable nftables and related modules

Now go up one level, back to main Netfilter settings and enter IP:Netfilter Configuration

Enter IPv4 Netfilter settings

Enter IPv4 Netfilter settings

There you enable NAT chain for nf_tables and also masquerading and redirect targets.

Enable Nftables NAT support for IPv4

Enable Nftables NAT support for IPv4

You are now done with nftables; remember to check that no kernel settings specific to your needs are missing, then save your .config.

Then build the kernel and the modules

make && make modules

Install your kernel to /boot manually, so you can fall back to your old kernel if something goes wrong.

cp arch/x86_64/boot/bzImage /boot/vmlinuz-4.2

cp System.map /boot/System.map-4.2

cp .config /boot/config-4.2

Install kernel modules

make modules_install

Boot

Some setups may need an initial ramdisk to boot; this will be the case if your root partition is on LVM or RAID, or if the root filesystem's module was not built into the kernel.

The following example creates the compressed ramdisk file /boot/initrd-4.2.gz, which will wait 8 seconds before booting on the rootfs partition of the vgroup logical volume group and will load the XFS and Ext4 filesystem modules from kernel 4.2.0.

mkinitrd -w 8 -c -u -f ext4 -m ext4:xfs -L -r /dev/vgroup/rootfs -k 4.2.0 -o /boot/initrd-4.2.gz

Add a new option to your bootloader pointing to your kernel and ramdisk, if you have one; on LILO you should add something like this in your /etc/lilo.conf

image     = /boot/vmlinuz-4.2
root     = /dev/vgroup/rootfs
label     = linux-4.2
initrd     = /boot/initrd-4.2.gz
read-only
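If you use LILO, remember to re-run it after editing the configuration so that the new entry is actually written to the boot loader:

lilo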

Once your system has rebooted, check the module again.

modinfo nf_tables

modinfo nf_tables


You should see something similar to the image above; otherwise, review the menuconfig steps above and try to mark all netfilter-related symbols as modules.

After that, make and install those modules

make modules && make modules_install

Install nft tool

Now it is time to install Nftables user-space utility, nft, the replacement for the traditional iptables and its friends, but before we can do that, we need to install the required shared libraries to build nft itself.

GMP - The GNU Multiple Precision Arithmetic Library

Download and extract the package

wget https://gmplib.org/download/gmp/gmp-6.0.0a.tar.xz && tar Jxvf gmp-*

Build and install

cd gmp* && ./configure && make && make install

libreadline - The GNU Readline Library

You will need this library if you plan to use nft in interactive mode, which is optional and not covered here.

Download, extract and enter source tree.

wget ftp://ftp.gnu.org/gnu/readline/readline-6.3.tar.gz && tar zxvf readline* && cd readline*

Configure it to use ncurses, then make and install.

./configure --with-curses && make && make install

libmnl - Minimalistic user-space library for Netlink developers

Download, extract and enter the source tree

wget http://www.netfilter.org/projects/libmnl/files/libmnl-1.0.3.tar.bz2 && tar jxvf libmnl-* && cd libmnl-*

Configure, make and install

./configure && make && make install

libnftnl

Download, extract and enter source tree

wget http://www.netfilter.org/projects/libnftnl/files/libnftnl-1.0.3.tar.bz2 && tar jxvf libnftnl* && cd libnftnl*

Configure make and install.

./configure && make && make install

Build and install nft

Download, extract and enter source tree.

wget http://www.netfilter.org/projects/nftables/files/nftables-0.4.tar.bz2 && tar jxvf nftables* && cd nftables*

Then configure, make and install

./configure && make && make install

Note that you can use the --without-cli flag with the configure script; it disables the interactive command line interface and removes the need for the readline library.

Using nftables

The first thing you can do is load the basic template tables for IPv4 networking, which can be found in the nft tool source tree. Of course you could write them by hand, but it is always a good idea to start simple.

Load IPv4 filter table definitions

nft -f files/nftables/ipv4-filter

Load NAT table

nft -f files/nftables/ipv4-nat

It is a good idea to load also mangle

nft -f files/nftables/ipv4-mangle

Now list your tables

nft list tables

Drop any new packet addressed to this machine

nft add rule filter input ct state new drop

Accept packets that are from or related to established connections

nft add rule filter input ct state related,established accept

Most Linux systems runs OpenSSH, it is a good idea to accept connections to the TCP port 22, so you can access your SSH service.

nft insert rule filter input tcp dport 22 accept

Now list your tables and take a look at how things are going

nft list table filter

Performing Network Address Translation (NAT)

Create a rule to translate the IP addresses coming from the network 192.168.1.0/24 and count the packets before sending them.

nft add rule nat postrouting ip saddr 192.168.1.0/24 counter masquerade

Take a look at your rules, this time append the '-a' flag to get more details and you will see

nft list table nat -a

Enable forwarding

You will also need to enable IP forwarding on the kernel

sysctl -w net.ipv4.ip_forward=1

To enable forwarding on startup, put the following line in the /etc/sysctl.conf file, which may need to be created on some distros.

net.ipv4.ip_forward=1

You can also enable forwarding through the proc filesystem; run the following command to do so, and put it at the end of an rc script such as rc.local to enable forwarding on startup.

echo 1 > /proc/sys/net/ipv4/ip_forward

Saving your tables

To save your settings, just redirect the output of the listing command to a file.

Save filter table

nft list table filter -a > /etc/firewall.tables

Now append the nat table; note that this time we use '>>' (two '>' characters) so the output is appended instead of overwriting the file.

nft list table nat -a >> /etc/firewall.tables

Then append mangle table

nft list table mangle -a >> /etc/firewall.tables

Now you just need to load this file when your system starts

nft -f /etc/firewall.tables

Conclusion

Your Linux machine is now able to serve the internet; all you have to do is point your devices at it as their gateway to share the connection. Of course there are many other details and features in nftables, but this should be enough for you to understand the basics, protect your systems, share internet access and prepare to say goodbye to iptables and family.


How to Setup ZPanel CP on Linux CentOS 6


Today we are going to show you one of the most notable web hosting control panel solutions, ZPanel, an open source application that works on both Windows and Linux. It is written in PHP and uses several other open source software packages to provide a secure web hosting control panel. You can use ZPanel to manage every aspect of your web server, including email accounts, MySQL databases, domains, FTP, DNS and other advanced configurations like cron jobs.

The installation of ZPanel is extremely easy to get up and running. ZPanel provides an installation script that does everything needed on your web server, so all you need is a blank server with CentOS 6 installed, and ZPanel will do the rest.

System Preparation

ZPanel is a very lightweight web hosting control panel in terms of resource usage compared to other hosting control panels, but it is strongly recommended to have at least 512 MB of RAM for better performance.

ZPanel does not yet support CentOS 6.6 and above, so we will be using CentOS 6.5 here. First prepare a fresh CentOS 6.5 installation, log in to your server via SSH as root, and stop any web or database services that are running on it. Then remove their packages so that no other web or database services remain on the system.

Once you are ready with clean installation of CentOS 6.5, first apply all updates with below command.

# yum update

Download ZPanel Installer

Let’s download ZPanel latest available installer from their official web link zPanel Download Page .

zpanel download

Copy the link to get the latest available supported installation script for centos 6 and download it using the below command in your server’s Secure Shell.

# wget https://raw.github.com/zpanel/installers/master/install/CentOS-6_4/10_1_1.sh

zpanel install package

Starting ZPanel Installation

Before starting the installation through the ZPanel installer script, make sure that it has executable permissions. You can check and assign them as shown in the image below.

change permissions
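In terms of commands, the permission check and change in the screenshot would look something like this:

# ls -l 10_1_1.sh
# chmod +x 10_1_1.sh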

Now execute the ZPanel installation script using the below command.

# ./10_1_1.sh

The script will check the installed packages and detect whether your operating system version is supported. Once everything meets its requirements you will be greeted with the welcome screen of the official ZPanel installer, shown below.

##############################################################
# Welcome to the Official ZPanelX Installer for CentOS 6.4 #
# #
# Please make sure your VPS provider hasn't pre-installed #
# any packages required by ZPanelX. #
# #
# If you are installing on a physical machine where the OS #
# has been installed by yourself please make sure you only #
# installed CentOS with no extra packages. #
# #
# If you selected additional options during the CentOS #
# install please consider reinstalling without them. #
# #
##############################################################

Then press the "Y" key to continue and choose the continent, your country and the time zone by following the instructions shown during the installation.

The following information has been given:

Britain (UK)

Therefore TZ='Europe/London' will be used.
Local time is now: Sat Aug 22 01:13:45 BST 2015.
Universal Time is now: Sat Aug 22 00:13:45 UTC 2015.
Is the above information OK?
1) Yes
2) No
#? 1

You can make this change permanent for yourself by appending the line
TZ='Europe/London'; export TZ
to the file '.profile' in your home directory; then log out and log in again.

Here is that TZ value again, this time on standard output so that you
can use the /usr/bin/tzselect command in shell scripts:
Europe/London

After that you have to provide the FQDN (fully qualified domain name) that will be used to access the server. Let’s provide the appropriate settings and then type “Y” to continue as shown below.

Enter the FQDN you will use to access ZPanel on your server.
- It MUST be a sub-domain of you main domain, it MUST NOT be your main domain only. Example: panel.yourdomain.com
- Remember that the sub-domain ('panel' in the example) MUST be setup in your DNS nameserver.
FQDN for zpanel: zpanel-cp.linoxide.com
Enter the public (external) server IP: 19.19.23.12
ZPanel is now ready to install, do you wish to continue (y/n) y

The installation process will now take a while, as it installs the following packages:

  • ZPanel
  • Apache
  • MySQL
  • PHP
  • Bind
  • Postfix

After its packages are installed it will restart all of their services and present the generated passwords for MySQL and Postfix along with the ZPanel username and password. Save all of this valuable information and type "Y" to restart your server and complete the install process.

zpanel installation

Once your server is back, open your web browser and access the ZPanel web login using your server's IP.

http://your_servers_ip/

Provide the login details generated above at the end of the installation.

zpanel login

Welcome to ZPanel Control Panel

Once you have provided valid credentials you will be directed to the ZPanel dashboard, where you can perform any web hosting task, whether you want to configure email, manage databases or manage domains.

zpanelcp dashboard

Conclusion

ZPanel is good looking, simple to use and by far one of the easiest control panels to set up. It will be more than enough for the vast majority of users who need a free web server control panel. We hope you enjoyed this tutorial; feel free to leave us your feedback and suggestions in the comments.


How to Install and Setup Docker on Ubuntu 15.04


Hi, in this article we are going to set up Docker on Ubuntu 15.04; we will also show the usage of Docker-managed release packages and how they work. Docker is an open source program that enables a Linux application and its dependencies, such as configuration files, to be packaged as a container. Unlike a virtual machine, a container doesn't run its own OS; it shares the OS of the host, which in this case will be Ubuntu 15.04. This makes it possible to get more apps running on the same servers by providing an additional layer of abstraction and automation of operating-system-level virtualization on Linux.

1) Docker's Prerequisites

First of all your system's package lists should be current, so before installing Docker update the package index with the following command, run as root or with sudo privileges.

# apt-get update

If you are using an old release of your Ubuntu server, you can upgrade it using the following command.

# apt-get -y upgrade

Docker is missing some of its features on kernels older than version 3.10. So, to install Docker on Ubuntu, make sure you have a 64-bit installation of Ubuntu with a kernel version later than 3.10.

You can check the version of your installed Ubuntu kernel with the following command.

root@ubuntu-15:~# uname -r
3.19.0-15-generic

2) Docker’s Installation Process

In the past Docker packages were available only for Ubuntu, but now they are available for many other distributions such as CentOS and other Red Hat-based distributions. Once you have met the requirements by updating your system with the latest updates and patches, proceed with the installation by installing the docker.io package using the command below.

# apt-get -y install docker.io

Installing Docker

3) Docker's Configuration

After the installation completes, we will fix the paths by creating a soft link from the 'docker.io' binary in '/usr/bin/' to 'docker' in '/usr/local/bin/' with the following command.

# ln -sf /usr/bin/docker.io /usr/local/bin/docker

Now to confirm the status of docker’s service, execute the following command to be sure that its service is up and running.

# service docker start
# service docker status

Docker Status

In order to enable the docker services at boot up, let’s configure Docker to auto start when the server boots with below command.

# update-rc.d docker defaults

To start the Docker daemon manually, use the command shown below in the terminal; make sure the daemon is not already running, and if it is, stop it first before running the command.

Docker Daemon
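The manual daemon start in the screenshot would look roughly like the following on the docker.io package that ships with Ubuntu 15.04 (newer Docker releases use a separate 'dockerd' binary instead):

# docker -d &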

4) Using Docker on Ubuntu

Here we will explain how to get Docker working for us.
To get the list of available commands, run the 'docker' command in the terminal and you will see all of its commands with descriptions that can be used to perform the required tasks.

# docker

Docker Commands

5) Downloading a Docker Container

Let's begin using Docker! We will download a Docker image by using the docker command with the 'pull' option, which pulls an image or a repository from the Docker registry server.

Here is its command to use.

# docker pull ubuntu

pull docker image

6) Running a Docker Container

To set up a basic Ubuntu container with a bash shell, we just run one command as shown below. 'docker run' runs a command in a new container, '-i' attaches stdin and stdout, '-t' allocates a tty, and we're using the standard ubuntu image.

# docker run -i -t ubuntu /bin/bash

In the output we can see the standard Ubuntu container that can be used.

docker container

So, we now have a bash shell inside an Ubuntu Docker container. If you wish to disconnect, or detach, from the container's shell without exiting it, use the key combination "Ctrl-p" followed by "Ctrl-q" and you will be back in your previous window.

There are many community containers already available, which can be found through a search. In the command below we are searching for the keyword fedora, let see its output below.

# docker search fedora

docker search

Conclusion

We hope you have had a good tour of what Docker is and how it works, although there are still challenges when rolling Docker out in an organization. After going through this article you should be able to set up and use Docker containers on any supported operating system. Feel free to get back to us in case of any difficulty.



How to Setup Vesta Control Panel on Ubuntu 14.04 / 15.04


Hello and welcome to this new article on Vesta, an open source web hosting control panel. If you are going to use a web hosting control panel to make managing things quicker and easier, then Vesta Control Panel is one of the best choices available. You can use it to manage email accounts, FTP accounts, file management functions, sub-domains, disk space monitoring, bandwidth monitoring, backups and much more. VestaCP is extremely fast and lightweight, focuses on minimalism and simple usability, and is freely available for the Debian, Ubuntu, RHEL and CentOS Linux distributions.

In this article we are going to show the complete installation on Ubuntu 14.04, since 15.04 is not supported yet. After following this article you will have a fully functional Vesta Control Panel on Ubuntu 14.04 LTS.

System Update

Connect to your Ubuntu server as the root user using SSH or the console and configure the basic server parameters: the firewall, security measures and the latest updates.

In Ubuntu, you can check the status of your firewall and enable it using the commands shown below.

# ufw status
# ufw enable

To update or upgrade your Ubuntu system if you are using an old release, use the following two commands.

# apt-get update

# apt-get upgrade

Download VestaCP Package

VestaCP comes with a very simple and quick way to install on any Linux distribution. It provides an installation script that performs every step required for a successful installation.

To download its installation script we will be using the curl command as shown below.

root@ubuntu-vcp:~# curl -O http://vestacp.com/pub/vst-install.sh

Download VestaCP

Installing VestaCP

Before starting the installation, make sure that no database server or web server is active on the system where you are going to run the VestaCP setup, otherwise the installation process will be stopped.

To start the installation process, run the bash script using below command.

root@ubuntu-vcp:~# bash vst-install.sh

Upon successful execution of the script, the following list of packages will be installed on the system, as shown in the image.

VestaCP Installation

Once you press the "Y" key to continue, it will take about 15 minutes to complete the whole process while it installs the packages and required dependencies.

VestaCP Setup

After the installation of its packages, including third party plugins, you will be greeted with a message confirming the successful completion of the Vesta Control Panel setup along with its web login details.

VestaCP Installation

Vesta CP Web Login

To access the web console of Vesta Control Panel, use your server's IP along with the default port "8083" and the provided credentials, and make sure that this default port is allowed in the firewall.

https://your_server_ip:8083/
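If the ufw firewall we enabled earlier is active, port 8083 has to be opened first; a minimal sketch of doing that with ufw:

# ufw allow 8083/tcp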

VestaCP Login

After providing the right credentials you will be directed to the Dashboard of Vesta Control Panel, which is what this whole setup was for. From here you can easily manage multiple web hosting accounts.

VestaCP Dashboard

Conclusion

Vesta CP supports Apache and Nginx out of the box. It's easy to use, even for someone who is new to the world of hosting. It's one of the newest control panels out there and is well worth adopting. It is easy to add clients and easy to manage them. The hosting system of VestaCP is built around packages in which you define what your clients get; you can also add your own packages or edit the existing ones the system comes preconfigured with. We hope that after following this tutorial you got the objective of this article. Enjoy using the open source Vesta Control Panel and feel free to get back to us by leaving your valuable comments.

The post How to Setup Vesta Control Panel on Ubuntu 14.04 / 15.04 appeared first on LinOxide.

How to Install and Configure ISPConfig CP on CentOS 7.x


When we talk about web hosting, or want to manage one or multiple web sites through a user friendly web interface, there are different web hosting control panels to choose from; some of them are proprietary and many are open source. ISPConfig is one of the most widely used open source web hosting control panels for Linux, designed to manage Apache, FTP, DNS, email and databases through its web based interface. ISPConfig provides different levels of user access: administrator, reseller, client and email user.

Now we will set it up on CentOS 7. After following this tutorial you will have a user friendly web hosting control panel where you can easily manage your multiple domains without any cost.

Basic OS Setup

As we are going to set up ISPConfig on CentOS 7, before starting with the installation process we will configure its basic parameters: network settings, firewall rules and installation of the required dependencies.

Network Setup

Your Linux host should be configured with a proper FQDN and IP address and must have internet access. You can configure the local host entry by opening your system's hosts file using the command below.

# vim /etc/hosts

72.25.10.73 ispcp ispcp.linoxide.com
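If the hostname itself still needs to be set, a minimal sketch using systemd's hostnamectl on CentOS 7 (the FQDN below is the one used in this tutorial; replace it with your own):

# hostnamectl set-hostname ispcp.linoxide.com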

Configure Firewall

Enabling a system level firewall has always been a good practice for securing your servers. On CentOS 7 you can enable the firewall and open the required known ports using the commands below.

To enable and start the firewall, run the commands below.

# systemctl enable firewalld
# systemctl start firewalld

Then open the ports that will be used by the ISPConfig setup using the commands below.

# firewall-cmd --zone=public --add-port 22/tcp --permanent
# firewall-cmd --zone=public --add-port 443/tcp --permanent
# firewall-cmd --zone=public --add-port 80/tcp --permanent
# firewall-cmd --zone=public --add-port 8080/tcp --permanent
# firewall-cmd --zone=public --add-port 25/tcp --permanent
# firewall-cmd --reload

Setup Dependencies

Before we move forward, let's update the system with the latest updates and security patches and enable the EPEL repository on our CentOS system, which provides some of the packages required by ISPConfig.

# yum -y install epel-release
# yum -y install yum-priorities

To update existing packages on the system run the below command.

# yum update

Once your system is up to date, we will install the Development Tools package group that is required for the complete setup of ISPConfig. To install these packages you can run the command below.

# yum -y groupinstall 'Development Tools'

1) Installing LAMP Stack

Now run the below command to install LAMP stack packages with MariaDB, Apache, PHP , NTP and PHPMYADMIN.

# yum install ntp httpd mod_ssl mariadb-server php php-mysql php-mbstring phpmyadmin

After the LAMP stack packages are installed, start and enable the MariaDB service and set its root password using 'mysql_secure_installation' as shown below.

# systemctl start mariadb
# systemctl enable mariadb

# mysql_secure_installation

2) Installing Dovecot

You can install dovecot by issuing the following command.

# yum -y install dovecot dovecot-mysql dovecot-pigeonhole

After installation, create an empty dovecot-sql.conf file and create a symbolic link to it as shown below.

# touch /etc/dovecot/dovecot-sql.conf
# ln -s /etc/dovecot/dovecot-sql.conf /etc/dovecot-sql.conf

Now restart dovecot services and enable it at boot.

# systemctl start dovecot
# systemctl enable dovecot

3) Installing ClamAV, Amavisd-new and SpamAssassin

To install ClamAV, Amavisd-new and SpamAssassin, make use of the following command, which will install all of these packages in one go.

# yum -y install amavisd-new spamassassin clamav clamd clamav-update unzip bzip2 unrar perl-DBD-mysql

4) Installing Apache2 and PHP Modules

Now we will install the modules that allow ISPConfig 3 to use mod_php, mod_fcgi/PHP5, cgi/PHP5 and suPHP on a per-website basis.

So, to install these modules with Apache2 you can run the below command in your ssh terminal.

# yum -y install php-ldap php-mysql php-odbc php-pear php php-devel php-gd php-imap php-xml php-xmlrpc php-pecl-apc php-mbstring php-mcrypt php-mssql php-snmp php-soap php-tidy curl curl-devel mod_fcgid php-cli httpd-devel php-fpm perl-libwww-perl ImageMagick libxml2 libxml2-devel python-devel

To configure your date and time settings, open the default PHP configuration file and set the date timezone.

# vim /etc/php.ini
date.timezone = Europe/London

After making changes in the configuration file, make sure to restart the Apache web service.
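A minimal sketch of restarting Apache on CentOS 7 with systemd:

# systemctl restart httpd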

5) Installing PureFTPd

PureFTPd is used to transfer files from one server to another; to install its package you can use the command below.

# yum -y install pure-ftpd

6) Installing BIND

BIND is the domain name server utility on Linux; ISPConfig uses it to manage and configure DNS settings. Install the packages using the command shown below.

# yum -y install bind bind-utils

ISPConfig Installation Setup

Now get ready for the ISPConfig installation. To download the installation package we will use the following wget command with the official ISPConfig download link.

# wget http://www.ispconfig.org/downloads/ISPConfig-3-stable.tar.gz

Download ISPConfig

Once the package is downloaded, run the below command to unpack the package.

# tar -zxvf ISPConfig-3-stable.tar.gz

Then change into the directory where the installation files were extracted, as shown in the image below.

ISPConfig Package
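A minimal sketch of changing into the installer directory, assuming the archive extracts to 'ispconfig3_install' as the stable tarball usually does:

# cd ispconfig3_install/install/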

Installing ISPConfig

Now we will run the installer through PHP by executing the following command in the terminal.

# php -q install.php

ispconfig installer

Initial configuration

Select language (en,de) [en]:

Installation mode (standard,expert) [standard]:

Full qualified hostname (FQDN) of the server, eg server1.domain.tld [ispcp]: ispcp.linoxide.com

Database Configurations

MySQL server hostname [localhost]:

MySQL root username [root]:

MySQL root password []: *******

MySQL database to create [dbispconfig]:

MySQL charset [utf8]:

The system will then generate a 4096 bit RSA private key and write it to the 'smtpd.key' file. After that we have to enter the information that will be incorporated into the certificate request.

Country Name (2 letter code) [XX]:UK
State or Province Name (full name) []:London
Locality Name (eg, city) [Default City]:Manchester
Organization Name (eg, company) [Default Company Ltd]:Linoxide
Organizational Unit Name (eg, section) []:Linux
Common Name (eg, your name or your server's hostname) []:ispcp
Email Address []:demo@linoxide.com

Once you have provided the above information, the system will configure all of the required packages as shown in the image below, and then you will be asked to enable a secure (SSL) connection to the ISPConfig web interface.

ispconfig ssl setup

Once you have entered the information for generating the RSA key to establish the SSL connection, you will be asked to configure some extra attributes; choose the defaults or change them as per your requirements. The installer will then write the RSA key, configure the database server and restart its services to complete the ISPConfig installation.

ispconfig setup

ISPConfig Login

Now we are ready to use the ISPConfig control panel. To access its web interface, open your web browser and go to the following URL, which consists of your FQDN or server's IP address with the default configured port.

https://server_IP:8080/

You can log in with the default user name and password, 'admin' / 'admin'.

ISPConfig Login

Using ISPConfig Control Panel

Upon successful authentication with the right login credentials, you will be directed to the ISPConfig dashboard as shown below.

ISPConfig dashboard

By using this admin control panel we will be able to manage our system services, configure emails, add DNS entries and setup our new websites by simply choosing from its available modules.

In the following image we can see that choosing the System module shows the status of our server with all the services running on it.

using ispconfig

Conclusion

After completing this tutorial, you are now able to manage services through a web browser, including the Apache web server, Postfix mail server, MySQL, BIND nameserver, PureFTPd, SpamAssassin, ClamAV, Mailman and many more, without paying any license fee, as ISPConfig is free and open source and you can even modify its source code if you wish. We hope you found this tutorial helpful; please leave your comments if you have any issue regarding this article and feel free to post your suggestions.

The post How to Install and Configure ISPConfig CP on CentOS 7.x appeared first on LinOxide.

How to Install and Configure OpenVPN in FreeBSD 10.2


A VPN, or Virtual Private Network, is a private network across a public network, meaning the internet. A VPN provides a secure network connection over the internet or over a private network owned by a service provider. A VPN is one of the smartest solutions for improving your online "PRIVACY", using security protocols such as IPSec (Internet Protocol Security), SSL/TLS (Transport Layer Security) or PPTP (Point-to-Point Tunneling Protocol); you can even use SSH (Secure Shell) to secure a remote connection, usually called port forwarding, but we do not recommend that.

OpenVPN is an open-source project that provides a secure connection with a virtual private network implementation. It is flexible, reliable and secure. OpenVPN uses the OpenSSL library to provide strong encryption, and can run over both UDP and TCP with IPv4 and IPv6 support. It is designed to work with the TUN/TAP virtual network interface that is available on most platforms. OpenVPN offers several options for authenticating users: you can use username/password based or certificate-based authentication.

In this tutorial we will install OpenVPN on FreeBSD 10.2 with certificate-based authentication, so anyone who has the certificate can use our VPN.

Prerequisites

  • FreeBSD 10.2
  • Root privileges

Step 1 - Update the System

Before you begin the installation, make sure your system is up to date. Please use "freebsd-update" to update :

freebsd-update fetch
freebsd-update install

Step 2 - Install OpenVPN

You can install OpenVPN from the FreeBSD ports tree in the directory "/usr/ports/security/openvpn/", or you can install it as a binary package with the "pkg" command. In this tutorial I use the pkg command. Let's install it with the following command:

pkg install openvpn

The command will also install the "easy-rsa" and "lzo2" packages that are needed by OpenVPN.

Install OpenVPN in FreeBSD

Step 3 - Generate Server Certificate and Keys

We need the "easy-rsa" package to generate the server key and certificate, and it is already installed on our FreeBSD system.

So now please make a new directory for OpenVPN and our keys:

mkdir -p /usr/local/etc/openvpn/

Next, copy the easy-rsa directory in "/usr/local/share/" to the openvpn directory :

cp -R /usr/local/share/easy-rsa /usr/local/etc/openvpn/easy-rsa/

Go to the OpenVPN easy-rsa directory, and then make all the files there executable with the "chmod" command.

cd /usr/local/etc/openvpn/easy-rsa/
chmod +x *

Now initialize the key generation environment in the easy-rsa directory by sourcing the vars file and cleaning out any old keys:

. ./vars
NOTE: If you run ./clean-all, I will be doing a rm -rf on /usr/local/etc/openvpn/easy-rsa/keys

./clean-all

Next, we want to generate 4 keys and certificates:

  1. CA(Certificate Authority) key
  2. Server key and certificate
  3. Client key and Certificate
  4. Diffie-Hellman parameters (necessary for the server end of an SSL/TLS connection)

Generate ca.key

In the easy-rsa directory, please run the command below:

./build-ca

Enter your information about the state, country, email etc. You can accept the defaults by pressing "Enter". This command will generate ca.key and ca.crt in the "keys/" directory.

Generate CA Key for Openvpn

Generate server key and certificate

Generate the server key with "build-key-server nameofserverkey"; we use "server" as our server name.

./build-key-server server

Enter your information about the state, country, email etc. You can accept the defaults by pressing "Enter", and type "y" to confirm all the info.

Generate Server Key

Generate the client key and certificate

Generate the client key and certificate with the "build-key nameofclientkey" command in the easy-rsa directory. In this tutorial we will use "client" as our client name.

./build-key client

Enter your information about the state, country, email etc. You can accept the defaults by pressing "Enter", and type "y" to confirm all the info.

Generate Client Key

Generate dh parameters

The default key size for the dh parameters on FreeBSD 10.2 is 2048 bits. That is strong, although you can make it even stronger by using 4096-bit keys, but that slows down the handshake process.

./build-dh
Generating DH parameters, 2048 bit long safe prime, generator 2
This is going to take a long time

Now all the certificates have been created under the keys directory "/usr/local/etc/openvpn/easy-rsa/keys/". Lastly you need to copy the keys directory up to the openvpn directory.

cp -R keys ../

cd ..
ll

total 40
drwxr-xr-x 4 root wheel 512 Sep 21 00:57 easy-rsa
drwx------ 2 root wheel 512 Sep 21 00:59 keys

Step 4 - Configure OpenVPN

In this step we will configure the openvpn with all key and certificate we have created before. We need to copy the openvpn configuration file from directory "/usr/local/share/examples/openvpn/sample-config-files/" to our openvpn directory "/usr/local/etc/openvpn/".

cp /usr/local/share/examples/openvpn/sample-config-files/server.conf /usr/local/etc/openvpn/server.conf
cd /usr/local/etc/openvpn/

Next, edit the "server.conf" file with nano; if you don't have it, install it with this command:

pkg install nano

Now edit the file :

nano -c server.conf

Note: -c shows line numbers in the nano editor.

At line 32 you need to configure the port used by OpenVPN. I will use the default port:

port 1194

I am using the UDP protocol, which is the default configuration, at line 36:

proto udp

Next, go to line 78 to configure the certificate authority (CA), server certificate, server key and dh parameters.

ca /usr/local/etc/openvpn/keys/ca.crt
cert /usr/local/etc/openvpn/keys/server.crt
key /usr/local/etc/openvpn/keys/server.key #our server key
dh /usr/local/etc/openvpn/keys/dh2048.pem

Also configure the private IP network used by OpenVPN and its clients at line 101. I will leave the default IP range.

server 10.8.0.0 255.255.255.0

Finally, configure the log files around line 280; we will keep the log files in the "/var/log/openvpn/" directory.

status /var/log/openvpn/openvpn-status.log

and at line 289:

log /var/log/openvpn/openvpn.log

Save and exit. Now create the directory and files to store the logs:

mkdir -p /var/log/openvpn/
touch /var/log/openvpn/{openvpn,openvpn-status}.log

Step 5 - Enable Port Forwarding and Add OpenVPN to the Startup

To enable port forwarding in FreeBSD you can use the sysctl command:

sysctl net.inet.ip.forwarding=1

Add OpenVPN to the boot sequence by editing the "rc.conf" file:

nano /etc/rc.conf

Add the lines below to the end of the file:

gateway_enable="YES"
openvpn_enable="YES"
openvpn_configfile="/usr/local/etc/openvpn/server.conf"
openvpn_if="tap"

Save and Exit.

Step 6 - Start OpenVPN

Start OpenVPN with the service command:

service openvpn start

Check that OpenVPN is running by checking the port it uses:

sockstat -4 -l

You can see that port 1194 is open and used by OpenVPN.

Step 7 - Configure the Client

As the client, please download the certificate files:

  • ca.crt
  • client.crt
  • client.key

Copy these three files to the home directory, and change their ownership to the user that is used to log in with ssh:

cd /usr/local/etc/openvpn/keys/
cp ca.crt client.crt client.key /home/myuser/
cd /home/myuser/
chown myuser:myuser ca.crt client.crt client.key

Then download the certificates to your client machine. I am using Linux here, so I just need to download them with the scp command:

scp myuser@192.168.1.100:~/ca.crt myvpn/
scp myuser@192.168.1.100:~/client.crt myvpn/
scp myuser@192.168.1.100:~/client.key myvpn/

Then create the client configuration file:

nano client.ovpn

Please add the code below :

client
dev tun
proto udp
remote 192.168.1.100 1194 #ServerIP and Port used by openvpn
resolv-retry infinite
nobind
user nobody
persist-key
persist-tun
mute-replay-warnings
ca ca.crt
cert client.crt
key client.key
ns-cert-type server
comp-lzo
verb 3

Save and Exit.

Now you see the files that belong to the client :

ll

total 20K
-rw-r--r--. 1 myuser myuser 1.8K Sep 21 03:09 ca.crt
-rw-r--r--. 1 myuser myuser 5.4K Sep 21 03:09 client.crt
-rw-------. 1 myuser myuser 1.7K Sep 21 03:09 client.key
-rw-rw-r--. 1 myuser myuser 213 Sep 20 00:13 client.ovpn

Step 8 - Testing OpenVPN

It is time to test OpenVPN. Connect to the OpenVPN server using the client configuration file we created, with the commands:

cd myvpn/
sudo openvpn --config client.ovpn

We are now connected to the VPN and have the private IP 10.8.0.6.

Connected to OpenVPN 1

OpenVPN connected successfully.

Another test :

Ping the client's private IP from the FreeBSD server:

ping 10.8.0.6

And from the client, connect to the FreeBSD server over the private IP of the OpenVPN server, 10.8.0.1:

ssh myuser@10.8.0.1

Connected to OpenVPN 2

Everything works; we are connected.

Conclusion

A VPN, or Virtual Private Network, is a secure and private network over a public network (the internet). OpenVPN is an open-source project that implements virtual private network technology; it secures your traffic and encrypts it using the OpenSSL libraries. OpenVPN is easy to deploy and install on your own server, and it is one of the best solutions if you want to protect your online "PRIVACY".

The post How to Install and Configure OpenVPN in FreeBSD 10.2 appeared first on LinOxide.

Howto Setup Open Grid Engine Cluster using Ubuntu 14.04 / 15.04


Hi all, in today’s article we will talk about Open Grid Scheduler / Grid Engine, show you its installation process and the steps to easily set up your Grid Engine cluster on Ubuntu 15.04. Open Grid Scheduler/Grid Engine is a commercially supported batch-queuing system that manages and schedules the allocation of distributed resources such as processors, memory, disk space and software licenses. It guarantees good utilization of resources and prevents a single user from capturing all system resources while other users are waiting for their jobs to run.

Grid Engine is typically used on high-performance computing clusters and is responsible for accepting, scheduling, dispatching and managing the remote and distributed execution of large numbers of standalone, parallel or interactive user jobs.

Grid Engine Master Server Setup

First of all we will prepare our Ubuntu server for the installation of Grid Engine on the master server. Log in to your server as the root user and update your system along with its basic parameters.

#apt-get update

1) Create New User

We will create a new user that will be used for the Grid Engine setup and the NFS shares. Run the command below to create the new user and add its required information.

# adduser gsadmin --uid 500

New User

2) Download Grid Engine Package

You can find the latest and older releases of the Grid Engine scheduler packages on its Sourceforge page, or you can download it directly to your Ubuntu server using the wget utility with the complete package path as shown.

# wget http://downloads.sourceforge.net/project/gridscheduler/GE2011.11p1/GE2011.11p1.tar.gz

Download Opengrid

3) Extract Package

We will extract the Open Grid package, move it into the home directory of the newly created user and assign the user rights to it. To perform these steps use the following commands.

# tar -zxvf GE2011.11p1.tar.gz
# mv GE2011.11p1 /home/gsadmin/
# chown -R gsadmin:gsadmin /home/gsadmin/

Extract Package

Setup NFS Server on Master Host

Now install the NFS server package and configure it to share the home directory of the new user "gsadmin".

Let's run the command below.

# apt-get install nfs-kernel-server

NFS Server

Before the installation starts you will be asked to confirm; once you press "Y" to continue, a number of steps will be performed as shown below.

Creating config file /etc/idmapd.conf with new version

Creating config file /etc/default/nfs-common with new version
Adding system user `statd' (UID 109) ...
Adding new user `statd' (UID 109) with group `nogroup' ...
Not creating home directory `/var/lib/nfs'.
invoke-rc.d: gssd.service doesn't exist but the upstart job does. Nothing to start or stop until a systemd or init job is present.
invoke-rc.d: idmapd.service doesn't exist but the upstart job does. Nothing to start or stop until a systemd or init job is present.
nfs-utils.service is a disabled or a static unit, not starting it.
Setting up nfs-kernel-server (1:1.2.8-9ubuntu8.1) ...

Creating config file /etc/exports with new version

Creating config file /etc/default/nfs-kernel-server with new version
Processing triggers for libc-bin (2.21-0ubuntu4) ...
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for systemd (219-7ubuntu3) ...

At this point the NFS kernel server has been successfully installed. Now we will use the few commands shown below to configure the default exports file and then restart the NFS service.

# echo "/home/gsadmin *(rw,sync,no_subtree_check)" >> /etc/exports
# exportfs -a
# service nfs-kernel-server restart

NFS Exports

Setup OpenJDK-8 On Master Node

Let's set up OpenJDK on the master node so that Open Grid Engine can run Java based jobs on the cluster.
Simply run the command below to install the latest Java.

# apt-get install openjdk-8-jdk

OpenJDK 8

Press "Y" to continue the Java installation process as shown below.

0 to upgrade, 179 to newly install, 0 to remove and 40 not to upgrade.
Need to get 108 MB of archives.
After this operation, 478 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y

Java Home Setup

Now set up and configure JAVA_HOME and export its path, along with the SGE_ROOT variable, for use.

root@ubuntu-15:~# echo "export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/" >> ~/.bashrc
root@ubuntu-15:~# export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/
root@ubuntu-15:~# echo "export PATH=$PATH:/usr/lib/jvm/java-8-openjdk-amd64/bin" >> ~/.bashrc
root@ubuntu-15:~# export PATH=$PATH:/usr/lib/jvm/java-8-openjdk-amd64/bin
root@ubuntu-15:~# echo "export SGE_ROOT=/home/gsadmin/GE2011.11p1" >>~/.bashrc
root@ubuntu-15:~# export SGE_ROOT=/home/gsadmin/GE2011.11p1/
root@ubuntu-15:~# export PATH=$PATH:~/GE2011.11p1/bin/linux-x64:$SGE_ROOT

The same PATH variables must also be set on the nodes, so before starting the installation of the Open Grid Scheduler packages we will first configure the nodes.

Setup Open Grid Engine Client Node

Just as on the master node, we will create a new user on the client node for the NFS share in order to access the Grid Engine files and other shared data.

1) Adding New User

To create the new user on the client node, use the command shown in the image; give it a password and fill out the other information if required, or leave the defaults.

Add New User
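A minimal sketch of that command, assuming we keep the same username and UID as on the master node:

# adduser gsadmin --uid 500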

2) Installing NFS Package

Now install the NFS client package on the client node using the command below.
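A minimal sketch of that installation, assuming the stock Ubuntu nfs-common package:

# apt-get install nfs-common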

NFS Common

3) Mount Share

After installing the NFS common package we will mount the server node's share on the client node and configure it to be mounted permanently using the commands below.

root@ubuntu-node:~# mount 17.5.10.71:/home/gsadmin /home/gsadmin
root@ubuntu-node:~# echo "17.5.10.71:/home/gsadmin /home/gsadmin nfs" >> /etc/fstab
root@ubuntu-node:~# echo "export SGE_ROOT=/home/gsadmin/GE2011.11p1" >>~/.bashrc
root@ubuntu-node:~# export SGE_ROOT=/home/gsadmin/GE2011.11p1/

Setup SSH Keys

Now we have to allow direct ssh access from the server to the client and vice versa. To accomplish this we will configure both nodes so they can ssh to each other without entering a password. Let's run the command below, first on the client node and then on the server node, to generate the RSA key pairs.

root@client-node:~# ssh-keygen
root@server-node:~# ssh-keygen

SSH Keygen

Now to copy the SSH Key from the client Node to the Master Node run the below command.

root@ubuntu-node:~# ssh-copy-id master_server_IP

SSH Copy ID

We have successfully added the key from the client machine to the server; now we can access our server node from the client without entering the root password. Use the command below to connect to the other server using the ssh key.

root@ubuntu-node:~# ssh 17.5.10.71

SSH connect

So, we are now able to connect between the servers without a password in the same way.

Open Grid Scheduler/Grid Engine Installation

All the basic parameters have been set up; now we will go through the installation of Open Grid Scheduler / Grid Engine on the master server node.

There are two options: use the graphical user interface if you are running an X window system, or follow the command line installation process.

For a GUI installation, point to the location of the installation script, give it execute permissions on the server and run the GUI installer.

root@ubuntu-15:~# cd /home/gsadmin/GE2011.11p1/source/clients/gui-installer/templates/
root@ubuntu-15:/home/gsadmin/GE2011.11p1/source/clients/gui-installer/templates# ./start_gui_installer.sh

You can also install the Grid engine server packages from its available repositories using apt-get command as shown below.

root@ubuntu-15:~# apt-get install gridengine-master

gridengine master

You might get a failure error if any of the prerequisites are unmet, so make sure of the following points.

  • Hostname is properly configured.
  • Postfix service is up and running.
  • bsd-mailx package is installed.

During the installation process you will be asked to configure some more parameters as shown below, so choose the appropriate setting for the Postfix general type of mail configuration from the available options.

Then choose the system mail name as instructed in the window.

Postfix Configurations

In the next steps you will be prompted to configure the gridengine common settings. If you want SGE to apply the default configuration automatically, select "Yes" and press OK to move on to the next settings.

Configure SGE

At the end of installation process, you will be able to see the parameters of the cluster initialization setup as shown in the below image.

gridengine cluster
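Note that the client node also needs the execution daemon before it can run jobs; a minimal sketch, assuming the stock Ubuntu gridengine packages are used on the client as well:

root@ubuntu-node:~# apt-get install gridengine-exec gridengine-client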

Verify Installation

To check and verify that everything went fine during the installation process, let's run the commands below on the master server to check its services.

root@ubuntu-15:~# netstat -anput | grep master
root@ubuntu-15:~# ps aux | grep "sge"

sge master

On the Client node check for exec service.

root@ubuntu-node:~# netstat -anp | grep exec
tcp 0 0 0.0.0.0:6445 0.0.0.0:* LISTEN 1353/sge_execd
root@ubuntu-node:~# netstat -anput | grep exec
tcp 0 0 0.0.0.0:6445 0.0.0.0:* LISTEN 1353/sge_execd

Conclusion

We hope you have learnt the installation and configuration steps to set up an Open Grid Scheduler / Grid Engine cluster using Ubuntu 14.04/15.04. The Sun Grid Engine queuing system is useful when you have a bundle of tasks to run and you want to distribute them over a cluster of nodes. Now you can use it for scheduling your tasks, load balancing, or monitoring which cluster nodes your submitted jobs and queries are running on.

The post Howto Setup Open Grid Engine Cluster using Ubuntu 14.04 / 15.04 appeared first on LinOxide.

How to Install and Configure OpenVZ on Ubuntu 14.04/15.04


Hello everybody, in today's article we are going to show you the installation process of OpenVZ on Ubuntu 15.04. OpenVZ is an open source operating-system-level virtualization platform that provides a thin layer of virtualization on top of the underlying operating system. It allows a physical server to run multiple isolated operating system instances, called containers, virtual private servers (VPSs) or virtual environments (VEs). The kernel used in OpenVZ is a Linux kernel modified to add support for these containers. This modified Linux kernel provides virtualization, isolation, checkpointing and resource management.

So, using the OpenVZ virtualization environment, each container will efficiently share the CPU, memory, disk space and network of your physical server.

1) Prerequisites Check

In this tutorial we are going to setup OpenVZ on Ubuntu 15.04 64-Bit Operating System with minimal software packages installed on it.

The hardware resources depend upon the size of your infrastructure requirements, while the minimum requirement is about 128 MB of RAM with at least 4 GB of free disk space. We are working in our test lab environment with the following resources.

RAM: 2 GB
CPU: 2 CPUs
Disk: 20 GB
Static IP: xx.xx.xx.xx
FQDN: openvz.linoxide.com

2) Adding OpenVZ Repository

Once you are done with the initial setup of your Ubuntu server and it is connected to the internet, log in with root or sudo credentials and run the command below to add the OpenVZ repository to your Ubuntu server.

# cat << EOF > /etc/apt/sources.list.d/openvz-rhel6.list
> deb http://download.openvz.org/debian wheezy main
> # deb http://download.openvz.org/debian wheezy-test main
> EOF

Now we can directly download its key with below command.

# wget http://ftp.openvz.org/debian/archive.key

Add Repository

Once the download is complete, run the command below to add the imported archive key.

# apt-key add archive.key

3) Installing OpenVZ Kernel

Before installing the OpenVZ kernel, let's update your system with latest updates and patches by running the following command.

# apt-get update

Now we are going to install OpenVZ kernel by running the below command for 64-bit operating system as shown.

# apt-get install linux-image-openvz-amd64

installing openvz

The OpenVZ installation process ends with the grub configuration as shown.

Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.19.0-15-generic
Found initrd image: /boot/initrd.img-3.19.0-15-generic
Found linux image: /boot/vmlinuz-2.6.32-openvz-042stab111.11-amd64
Found initrd image: /boot/initrd.img-2.6.32-openvz-042stab111.11-amd64
Found memtest86+ image: /boot/memtest86+.elf
Found memtest86+ image: /boot/memtest86+.bin
done
Setting up linux-image-openvz-amd64 (042stab111.11) ...

4) Configure Kernel Parameters

To configure the new kernel parameters on Ubuntu we will change the sysctl variables in the /etc/sysctl.conf file by adding the lines shown below.

So, open the file with your editor to configure the kernel parameters.

# vim /etc/sysctl.conf

net.ipv4.ip_forward = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv4.conf.default.proxy_arp = 0

# Enables source route verification
net.ipv4.conf.all.rp_filter = 1

# Enables the magic-sysrq key
kernel.sysrq = 1

# We do not want all our interfaces to send redirects
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0

After saving the changes close file and run the following command to apply the sysctl changes.

# sysctl -p

5) Installing OpenVZ Tools

It is good practice to install some user-level tools for OpenVZ before using it. vzstats is a tool that helps gather OpenVZ usage statistics, and vzquota is used to manage disk quotas.

To install these tools simply run the following command in your ssh session.

# apt-get install vzctl vzquota ploop vzstat

OpenVZ Tools

6) Booting OpenVZ Kernel

At this point we are done with the initial OpenVZ installation and its kernel parameter configuration. All that is left is to reboot your system and choose the OpenVZ kernel from the grub boot loader options.

So, after rebooting your system, choose the advanced options for Ubuntu as shown in the image below.

boot options

In the next window you will see all the installed kernel versions, including the OpenVZ one; here we select the OpenVZ kernel to boot into.

OpenVZ kernel

Once your server is back up with the OpenVZ virtualization kernel, review the default OpenVZ configuration file and make changes if you need anything other than its defaults.

The default configuration file of OpenVZ can be found in the following location.

# vi /etc/vz/vz.conf
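Once the configuration suits you, containers can be created with the vzctl tool installed earlier. A minimal sketch, assuming an OS template (for example centos-7-x86_64) has already been downloaded into /vz/template/cache, and using 101 as an example container ID and 192.168.1.101 as an example IP:

# vzctl create 101 --ostemplate centos-7-x86_64
# vzctl set 101 --ipadd 192.168.1.101 --nameserver 8.8.8.8 --save
# vzctl start 101
# vzctl enter 101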

Conclusion

In this Linux howto tutorial we have learned the installation of OpenVZ on an Ubuntu server as a host where we can create multiple virtual Linux servers, while each new Linux VM in OpenVZ is isolated from the host and from the others. The virtualization technique used in OpenVZ does not use hardware virtualization like KVM, XEN or VMware. We hope you are now much more familiar with OpenVZ virtualization and its setup on Ubuntu. Feel free to get back to us in case of any difficulty.

The post How to Install and Configure OpenVZ on Ubuntu 14.04/15.04 appeared first on LinOxide.

How to Install OwnCloud 8 with Nginx and SSL on FreeBSD 10.2


OwnCloud is a suite of client–server applications for creating file hosting services; it allows you to create your own cloud storage and to share your data, contacts and calendar with other users and devices. OwnCloud is an open source project that provides an easy way for you to sync and share data hosted in your own data center. It has a beautiful and user-friendly front-end design, which makes it easy for users to browse and access their data and then share it with other users. OwnCloud is a secure online enterprise file sync and share solution.

In this tutorial, I will guide you step by step through installing ownCloud 8, using Nginx (engine-x) as the web server, php-fpm, and MariaDB as the database system, on FreeBSD 10.2.

Step 1 ) Installing Nginx php-fpm and MariaDB

In a previous tutorial we discussed the installation of FEMP (Nginx, MariaDB and PHP-FPM) in detail, so here we will only cover it briefly. We will install FEMP using the pkg command.

Install Nginx :

pkg install nginx

Install MariaDB :

pkg install mariadb100-server-10.0.21 mariadb100-client-10.0.21

Install php-fpm and all the packages that are needed by ownCloud:

pkg install php56-extensions php56-mysql php56-pdo_mysql php56-zlib php56-openssl php56-bcmath php56-gmp php56-gd php56-curl php56-ldap php56-exif php56-fileinfo php56-mbstring php56-gmp php56-bz2 php56-zip php56-mcrypt pecl-APCu pecl-intl

Step 2 ) Configure Nginx php-fpm and MariaDB

Configure Nginx.

Leave nginx with its default configuration for now. In this step you just need to add nginx to the startup with the sysrc command, then start it:

sysrc nginx_enable=yes
service nginx start

And try access it with your browser.

Nginx homepage

Configure MariaDB

Copy mariadb configuration and add it to the startup.

cp /usr/local/share/mysql/my-medium.cnf /usr/local/etc/my.cnf
sysrc mysql_enable=yes

Start mariadb :

service mysql-server start

Next, Configure a password for mariadb :

 mysql_secure_installation

Enter current password for root (enter for none):
#Just press Enter here
Change the root password? [Y/n] Y
#Type your password for mariadb here
Remove anonymous users? [Y/n] Y
Disallow root login remotely? [Y/n] Y
Remove test database and access to it? [Y/n] Y
Reload privilege tables now? [Y/n] Y

and try log in to the mariadb/mysql server with command :

mysql -u root -p
YOUR PASSWORD

MariaDB Configured

Configure PHP-FPM

Change the default listen directive to a unix socket, set the permissions for it, and configure php-fpm to run under the user "www".

nano /usr/local/etc/php-fpm.conf

change the line like below :

listen = /var/run/php-fpm.sock
...
...
listen.owner = www
listen.group = www
listen.mode = 0660

Now configure and edit the php.ini file:

cd /usr/local/etc/
cp php.ini-production php.ini
nano php.ini

change the cgi.fix_pathinfo line value to 0.

cgi.fix_pathinfo=0

Finally add php-fpm to the boot sequence and start it:

sysrc php_fpm_enable=yes
service php-fpm start

Step 3 ) Generate SSL Certificate for OwnCloud

Create a new directory "cert" under /usr/local/etc/nginx/ and generate the SSL certificate:

mkdir -p /usr/local/etc/nginx/cert/
cd /usr/local/etc/nginx/cert/
openssl req -new -x509 -days 365 -nodes -out /usr/local/etc/nginx/cert/owncloud.crt -keyout /usr/local/etc/nginx/cert/owncloud.key

Next, change the certificate permission to 600 :

chmod 600 *

Step 4 ) Create Database for OwnCloud

To create the database for ownCloud, log in to the mysql/mariadb server using the password that was set earlier.

mysql -u root -p
YOUR PASSWORD

Create new database called "my_ownclouddb" :

create database my_ownclouddb;

And Create new user "myownclouduser" for the "my_ownclouddb" database :

create user myownclouduser@localhost identified by 'myownclouduser';

Next, grant the user that has been created all privileges on the "my_ownclouddb" database:

grant all privileges on my_ownclouddb.* to myownclouduser@localhost identified by 'myownclouduser';
flush privileges;
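Optionally, you can verify that the new user can reach the database from the shell (it will prompt for the password chosen above):

mysql -u myownclouduser -p my_ownclouddb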

Configure Database for OwnCloud

Step 5 ) Install and Configure OwnCloud

Go to the tmp directory and download ownCloud from the official site with the fetch command. I am using ownCloud 8.1.3, the latest stable version at the time of writing.

cd /tmp/
fetch https://download.owncloud.org/community/owncloud-8.1.3.tar.bz2

Extract ownCloud and move the owncloud directory to "/usr/local/www/".

tar -xjvf owncloud-8.1.3.tar.bz2
mv owncloud/ /usr/local/www/

Now create a new "data" directory inside the owncloud directory, and change the ownership of the files and directories to the "www" user that runs nginx.

cd /usr/local/www/
mkdir -p /usr/local/www/owncloud/data
chown -R www:www owncloud/

Next, Configure the virtualhost for owncloud.

Move to the nginx configuration directory, and rename default nginx configuration file nginx.conf to nginx.conf.original.

cd /usr/local/etc/nginx/
mv nginx.conf nginx.conf.original

And create new configuration file for owncloud :

nano nginx.conf

Paste the following code :

worker_processes 2;

events {
worker_connections  1024;
}

http {
include      mime.types;
default_type  application/octet-stream;
sendfile        off;
keepalive_timeout  65;
gzip off;

server {
listen 80;
server_name 192.168.1.114;

#Force to the https
return 301 https://$server_name$request_uri;
}

server {

listen 443 ssl;
server_name 192.168.1.114; #YourIP or domain

#SSL Certificate you created
ssl_certificate /usr/local/etc/nginx/cert/owncloud.crt;
ssl_certificate_key /usr/local/etc/nginx/cert/owncloud.key;

# Add headers to serve security related headers
add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
add_header X-Content-Type-Options nosniff;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block";
add_header X-Robots-Tag none;

root /usr/local/www/owncloud;
location = /robots.txt { allow all; access_log off; log_not_found off; }
location = /favicon.ico { access_log off; log_not_found off; }
location ^~ / {
index index.php;
try_files $uri $uri/ /index.php$is_args$args;
fastcgi_intercept_errors on;
error_page 403 /core/templates/403.php;
error_page 404 /core/templates/404.php;
client_max_body_size 512M;
fastcgi_buffers 64 4K;
location ~ ^/(?:\.|data|config|db_structure\.xml|README) {
deny all;
}
location ~ \.php(?:$|/) {
fastcgi_split_path_info ^(.+\.php)(/.*)$;
fastcgi_pass unix:/var/run/php-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param HTTPS on;
include fastcgi_params;
fastcgi_param modHeadersAvailable true;
fastcgi_param MOD_X_ACCEL_REDIRECT_ENABLED on;
}
location ~* \.(?:jpg|gif|ico|png|css|js|svg)$ {
expires 30d;
add_header Cache-Control public;
access_log off;
}
location ^~ /data {
internal;
alias /mnt/files;
}
}
}
}

Save and Exit.

Next, test the nginx configuration with the command "nginx -t"; if there is no error, restart nginx and php-fpm:

nginx -t
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful

service php-fpm restart
service nginx restart

Visit the server IP or the domain name with your browser :

http://owncloud.local or http://192.168.1.114 - the request will be redirected to https, so please accept the SSL certificate.

Then create the admin user for ownCloud by filling in your username and password, and fill in the database configuration with the database that was set up in the previous step.

Configure Owncloud

And click "Finish Setup".

Step 6 ) Testing

Visit http://192.168.1.114/, then log in with the username and password that were configured.

Owncloud installed

So we have successfully installed and configured ownCloud 8.1.3 on FreeBSD 10.2 with SSL and the Nginx web server.

Conclusion

OwnCloud is an open-source project that makes it easy for users to store and exchange data in the cloud. We can install ownCloud on our own servers, so that we ourselves can organize the data stored on our server easily and safely. This is a good solution for convenience and for the security of the data (because the data stays with us). OwnCloud is easily installed and configured on a server.

The post How to Install OwnCloud 8 with Nginx and SSL on FreeBSD 10.2 appeared first on LinOxide.

How Setup VMs using Gnome-Boxes


Gnome Boxes is a simple virtualisation tool whose purpose is to provide an easy graphical user interface (GUI) to manage virtual machines on Linux. Using Boxes, we can access and use both local and remote virtual systems. It is an alternative to tools like VMware, VirtualBox and Virt-manager; however, it is targeted at basic users rather than system administrators who use advanced features. As Gnome Boxes is part of the GNOME environment, it is available on almost all Linux distributions. Underneath, Boxes makes use of QEMU, which in turn needs hardware virtualisation extensions (Intel VT-x or AMD-V). You can enable the hardware virtualisation extension on your system in the BIOS settings; if your system does not support this, then you will not be able to use Boxes. In this article, let us learn how to set up virtual machines using Gnome Boxes.
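A quick way to check whether those extensions are exposed to the OS is to count the vmx/svm flags in /proc/cpuinfo; a result greater than 0 means the CPU supports them (they may still need to be enabled in the BIOS):

egrep -c '(vmx|svm)' /proc/cpuinfo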

Installing Gnome-Boxes

If you are a Ubuntu user, execute

sudo apt-get install gnome-boxes

poornima@poornima-Lenovo:~$ sudo apt-get install gnome-boxes
[sudo] password for poornima:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
augeas-lenses cpu-checker dmsetup ebtables gawk ipxe-qemu libaugeas0

.....

16 upgraded, 47 newly installed, 0 to remove and 1247 not upgraded.
Need to get 15.0 MB/15.4 MB of archives.
After this operation, 60.8 MB of additional disk space will be used.
Do you want to continue? [Y/n] y

Fedora and Red Hat users can go with dnf:

sudo dnf install gnome-boxes

Once installed, you can start Boxes by executing the command

$ gnome-boxes

Setting up virtual machine

When you start gnome-boxes for the first time, there are no boxes yet and you can proceed to create one by clicking on the 'New' button.

Freshly started gnome-box

Creating a box

Setting up a virtual machine using Gnome-boxes is pretty simple. We need to first have the required .iso file downloaded or have a link to the URL where it is available.

Selecting the source file

In the screenshot above, I have chosen the URL option, hence it is asking for entering the URL. Or else, you can browse for the location of .iso file and proceed from there. Gnome-box will now make preparations to create a new box by downloading the media (Fedora 22 Live Workstation in this case).

It will assign a default value of 1GB to memory and 21.5 GB of hard disk space to the virtual machine to be created. This can be customised if required.

VM configuration review

 

Once the Fedora screen shows up, select 'Install to Hard Drive' option and you will be taken through the usual installation screens of Fedora to select the language, date & time, location etc. Once selected, a summary of the installation to be done is displayed and you can proceed by pressing the 'Begin Installation' button.

Summary of installation to be done

It takes a while for the installation to complete, and when it is done you can reboot the VM and start using it.

Screen showing that installation is complete

Below is the screen shot of Fedora 22 VM booted after installation and ready to be used.

Fedora is now up and running

Now, if you want to go back to the gnome-boxes main screen, click on the '<' button on the left side of the top bar.  It takes you to the screen which lists all the VMs that are installed.  You can launch any VM from here.

Gnome-boxes main screen showing the list of VMs

In order to edit the properties of a particular VM, select the required VM and click on the 'Properties' button that gets displayed.

VM properties

Do not expect to be able to control too many features of the VM from here, as Boxes is meant for basic users. The "System" tab allows you to control the memory and disk space, but you cannot control resources like the CPU. The "Devices" tab helps in accessing the devices connected to the host OS. You can create snapshots of the VMs using the "Snapshots" tab.

Conclusion

The latest version of Gnome Boxes available is 3.16.2, and there are improvements to the software with every update. Boxes is not a replacement for VMware or VirtualBox, which are more mature; give it a try if you are looking for a safe and easy way to try out a new operating system.

The post How Setup VMs using Gnome-Boxes appeared first on LinOxide.


How to Setup DockerUI - a Web Interface for Docker


Docker is gaining more popularity day by day. The idea of running a complete operating system inside a container rather than inside a virtual machine is an awesome technology. Docker has made the lives of millions of system administrators and developers much easier, letting them get their work done in no time. It is an open source technology that provides an open platform to pack, ship, share and run any application as a lightweight container, regardless of the operating system running on the host. It has no boundaries of language support, frameworks or packaging systems and can be run anywhere, anytime, from small home computers to high-end servers. Running Docker containers and managing them can be a bit difficult and time consuming, so there is a web based application named DockerUI which makes managing and running containers pretty simple. DockerUI is highly beneficial to people who are not very familiar with the Linux command line but still want to run containerized applications. DockerUI is an open source web based application best known for its beautiful design and its simple interface for running and managing Docker containers.

Here are some easy steps on how we can setup Docker Engine with DockerUI in our linux machine.

1. Installing Docker Engine

First of all, we'll gonna install docker engine in our linux machine. Thanks to its developers, docker is very easy to install in any major linux distribution. To install docker engine, we'll need to run the following command with respect to which distribution we are running.

On Ubuntu/Fedora/CentOS/RHEL/Debian

Docker maintainers have written an awesome script that can be used to install docker engine in Ubuntu 15.04/14.10/14.04, CentOS 6.x/7, Fedora 22, RHEL 7 and Debian 8.x distributions of linux. This script recognizes the distribution of linux installed in our machine, then adds the required repository to the filesystem, updates the local repository index and finally installs docker engine and required dependencies from it. To install docker engine using that script, we'll need to run the following command under root or sudo mode.

# curl -sSL https://get.docker.com/ | sh

On OpenSuse/SUSE Linux Enterprise

To install docker engine in the machine running OpenSuse 13.1/13.2 or SUSE Linux Enterprise Server 12, we'll simply need to execute the zypper command. We'll gonna install docker using zypper command as the latest docker engine is available on the official repository. To do so, we'll run the following command under root/sudo mode.

# zypper in docker

On ArchLinux

Docker is available in the official repository of Archlinux as well as in the AUR packages maintained by the community. So, we have two options to install docker in archlinux. To install docker using the official arch repository, we'll need to run the following pacman command.

# pacman -S docker

But if we want to install docker from the Archlinux User Repository ie AUR, then we'll need to execute the following command.

# yaourt -S docker-git

2. Starting Docker Daemon

After docker is installed, we'll start the docker daemon so that we can run and manage docker containers. We'll run the following command to make sure that the docker daemon is started.

On SysVinit

# service docker start

On Systemd

# systemctl start docker

3. Installing DockerUI

Installing DockerUI is even easier than installing docker engine. We just need to pull the dockerui image from the Docker Registry Hub and run it inside a container. To do so, we'll simply need to run the following command.

# docker run -d -p 9000:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock dockerui/dockerui

Starting DockerUI Container

Here, in the above command, as the default port of the dockerui web application server is 9000, we simply map it with the -p flag. With the -v flag, we specify the docker socket. The --privileged flag is required for hosts using SELinux.

After executing the above command, we'll now check if the dockerui container is running or not by running the following command.

# docker ps

Running Docker Containers

4. Pulling an Image

Currently, we cannot pull an image directly from DockerUI so, we'll need to pull a docker image from the linux console/terminal. To do so, we'll need to run the following command.

# docker pull ubuntu

Docker Image Pull

The above command will pull an image tagged as ubuntu from the official Docker Hub. Similarly, we can pull more images that we require and are available in the hub.

5. Managing with DockerUI

After we have started the dockerui container, we can now use it to start, pause, stop and remove containers and perform many other activities that dockerui offers for docker containers and images. First of all, we'll need to open the web application in our web browser. To do so, we'll need to point the browser to http://ip-address:9000 or http://mydomain.com:9000 according to the configuration of our system. By default, there is no login authentication needed for user access, but we can configure our web server to add authentication. To start a container, we first need the image of the required application we want to run a container from.
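As mentioned above, DockerUI itself ships without authentication; one possible approach is to put a web server with basic auth in front of it. A minimal nginx reverse-proxy sketch, assuming nginx is installed and an htpasswd file has been created at /etc/nginx/.htpasswd (the hostname and the file path are only assumptions for illustration):

server {
    listen 80;
    server_name dockerui.example.com;

    location / {
        auth_basic           "DockerUI";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass           http://127.0.0.1:9000;
        proxy_set_header     Host $host;
    }
}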

Create a Container

To create a container, we'll need to go to the section named Images, then click on the image id from which we want to create a container. After clicking on the required image id, we'll need to click on the Create button, and we'll be asked to enter the required properties for our container. After everything is set and done, we click the Create button to finally create the container.

Creating Docker Container

Stop a Container

To stop a container, we'll need to move to the Containers page and select the container we want to stop. Then we click on the Stop option, which we can see under the Actions drop-down menu.

Managing Container

Pause and Resume

To pause a container, we simply select the required container by putting a check mark on it and then click the Pause option under Actions. This will pause the running container; we can then resume it by selecting the Unpause option from the Actions drop-down menu.

Kill and Remove

Just like the tasks above, it is pretty easy to kill and remove a container or an image. We just need to check/select the required container or image and then select the Kill or Remove button in the application according to our need.

Conclusion

DockerUI is a beautiful utilization of the Docker Remote API to develop an awesome web interface for managing docker containers. The developers have designed and developed this application in pure HTML and JS. It is currently incomplete and under heavy development, so we don't recommend it for production use at the moment. It makes it pretty easy for users to manage their containers and images with simple clicks, without needing to execute lines of commands to do small jobs. If we want to contribute to DockerUI, we can simply visit its Github repository. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our contents. Thank you!

The post How to Setup DockerUI - a Web Interface for Docker appeared first on LinOxide.

How to Setup Red Hat Ceph Storage on CentOS 7.0


Ceph is an open source software platform that stores data on a single distributed computer cluster. When you are planning to build a cloud, you have to decide, on top of the other requirements, how to implement your storage. Open source Ceph is one of Red Hat's mature technologies, based on an object-store system called RADOS, with a set of gateway APIs that present the data in block, file and object modes. As a result of its open source nature, this portable storage platform may be installed and used in public or private clouds. The topology of a Ceph cluster is designed around replication and information distribution, which are intrinsic and provide data integrity. It is designed to be fault-tolerant and can run on commodity hardware, but it can also run on a number of more advanced systems with the right setup.

Ceph can be installed on any Linux distribution, but it requires a recent kernel and other up-to-date libraries in order to run properly. In this tutorial we will be using CentOS 7.0 with a minimal set of packages installed on it.

System Resources

CEPH-STORAGE
OS: CentOS Linux 7 (Core)
RAM:1 GB
CPU:1 CPU
DISK: 20
Network: 45.79.136.163
FQDN: ceph-storage.linoxide.com

CEPH-NODE
OS: CentOS Linux 7 (Core)
RAM:1 GB
CPU:1 CPU
DISK: 20
Network: 45.79.171.138
FQDN: ceph-node.linoxide.com

Pre-Installation Setup

There are a few steps that we need to perform on each of our nodes before the Ceph storage setup. The first thing is to make sure that each node has its networking configured with an FQDN that is reachable from the other nodes.

Configure Hosts

To setup the hosts entry on each node let's open the default hosts configuration file as shown below.

# vi /etc/hosts

45.79.136.163 ceph-storage ceph-storage.linoxide.com
45.79.171.138 ceph-node ceph-node.linoxide.com

Install VMware Tools

While working in a VMware virtual environment, it's recommended to have the open VM tools installed. You can install them using the below command.

#yum install -y open-vm-tools

Firewall Setup

If you are working in a restrictive environment where your local firewall is enabled, then make sure the following ports are allowed on your Ceph storage admin node and client nodes.

You must open ports 80, 2003, and 4505-4506 on your admin Calamari node, and allow inbound traffic on port 80 to the Ceph admin or Calamari node, so that clients in your network can access the Calamari web user interface.

You can start and enable the firewall in CentOS 7 with the commands given below.

#systemctl start firewalld
#systemctl enable firewalld

To allow the mentioned ports in the Admin Calamari node run the following commands.

#firewall-cmd --zone=public --add-port=80/tcp --permanent
#firewall-cmd --zone=public --add-port=2003/tcp --permanent
#firewall-cmd --zone=public --add-port=4505-4506/tcp --permanent
#firewall-cmd --reload

On the CEPH Monitor nodes you have to allow the following ports in the firewall.

#firewall-cmd --zone=public --add-port=6789/tcp --permanent

Then allow the following list of default ports for talking to clients and monitors and for sending data to other OSDs.

#firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent

It is quite reasonable to disable the firewall and SELinux if you are working in a non-production environment, so we are going to disable both in our test environment.

#systemctl stop firewalld
#systemctl disable firewalld

System Update

Now update your system and then give it a reboot to implement the required changes.

#yum update
#shutdown -r 0

Setup CEPH User

Now we will create a separate user with sudo rights that will be used for installing the ceph-deploy utility on each node. This user needs passwordless access to each node, because ceph-deploy installs software and configuration files without prompting for passwords on the Ceph nodes.

To create a new user with its own home directory, run the below command on the ceph-storage host.

[root@ceph-storage ~]# useradd -d /home/ceph -m ceph
[root@ceph-storage ~]# passwd ceph

Each user created on the nodes must have sudo rights; you can assign the sudo rights to the user by running the following command as shown.

[root@ceph-storage ~]# echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
ceph ALL = (root) NOPASSWD:ALL

[root@ceph-storage ~]# sudo chmod 0440 /etc/sudoers.d/ceph

Setup SSH-Key

Now we will generate SSH keys on the admin ceph node and then copy that key to each Ceph cluster nodes.

Let's run the following commands on the ceph-node to generate an SSH key and copy it to ceph-storage.

[root@ceph-node ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
5b:*:*:*:*:*:*:*:*:*:c9 root@ceph-node
The key's randomart image is:
+--[ RSA 2048]----+

[root@ceph-node ~]# ssh-copy-id ceph@ceph-storage

SSH key
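
As an optional convenience, and assuming you want ceph-deploy to log in as the ceph user without passing a username every time, you can add an entry like the following sketch to ~/.ssh/config on the admin node (ceph-node here); the host names simply mirror the ones used above.

# vi ~/.ssh/config
Host ceph-storage
    Hostname ceph-storage.linoxide.com
    User ceph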

Configure PID Count

To configure the PID count value, we will first check the default kernel value using the following commands. By default the maximum number of threads is a fairly small '32768', so we will raise this value to allow a higher number of threads by editing the system configuration file, as shown in the image.

Change PID Value
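
The edit shown above boils down to something like the following minimal sketch, assuming the value being raised is kernel.pid_max, as is typical for Ceph nodes; the number used here is only a common example.

#cat /proc/sys/kernel/pid_max
32768
#vim /etc/sysctl.conf
kernel.pid_max = 4194303
#sysctl -p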

Setup Your Administration Node Server

With all the networking set up and verified, we will now install ceph-deploy using the ceph user. First, check the hosts entries by opening the hosts file.

#vim /etc/hosts
45.79.136.163 ceph-storage
45.79.171.138 ceph-node

Now to add its repository run the below command.

#rpm -Uhv http://ceph.com/rpm-giant/el7/noarch/ceph-release-1-0.el7.noarch.rpm

Adding EPEL

Alternatively, create a new repo file with the Ceph repository parameters, not forgetting to substitute your current Ceph release and distribution.

[root@ceph-storage ~]# vi /etc/yum.repos.d/ceph.repo

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-{ceph-release}/{distro}/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

After this, update your system and install the ceph-deploy package.

Installing CEPH-Deploy Package

To update the system with the latest Ceph repository and other packages, we will run the following command, which also installs ceph-deploy.

#yum update -y && yum install ceph-deploy -y


Setup the cluster

Create a new directory and move into it on the admin ceph-node to collect all output files and logs by using the following commands.

#mkdir ~/ceph-cluster
#cd ~/ceph-cluster

#ceph-deploy new storage

setup ceph cluster

Upon successful execution of the above command you will see it creating its configuration files.
Now, to configure the default Ceph configuration file, open it in any editor and place the following two lines under its global parameters, adjusted to reflect your public network.

#vim ceph.conf
osd pool default size = 1
public network = 45.79.0.0/16

Installing CEPH

We are now going to install Ceph on each node associated with our Ceph cluster. To do so, we use the following command to install Ceph on both of our nodes, ceph-storage and ceph-node, as shown below.

#ceph-deploy install ceph-node ceph-storage

installing ceph

This will take some time while it processes all the required repositories and installs the required packages.

Once the Ceph installation is complete on both nodes, we will proceed to create the monitor and gather keys by running the following command on the same node.

#ceph-deploy mon create-initial

CEPH Initial Monitor

Setup OSDs and OSD Daemons

Now we will set up the disk storage. To do so, first run the below command to list all of your usable disks.

#ceph-deploy disk list ceph-storage

The results will show the list of disks on your storage node that you will use for creating the OSDs. Let's run the following commands, which include your disk names, as shown below.

#ceph-deploy disk zap storage:sda
#ceph-deploy disk zap storage:sdb

Now, to finalize the OSD setup, let's run the below commands to set up the journaling disk along with the data disk.

#ceph-deploy osd prepare storage:sdb:/dev/sda
#ceph-deploy osd activate storage:/dev/sdb1:/dev/sda1
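
If you are unsure which device names to pass to the commands above, you can first confirm them on the storage node itself with a standard tool such as lsblk (the output naturally depends on your hardware).

#lsblk -o NAME,SIZE,TYPE,MOUNTPOINT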

You will have to repeat the same commands on all the nodes; note that they wipe everything present on the disks. Afterwards, to have a functioning cluster, we need to copy the various keys and configuration files from the admin ceph-node to all the associated nodes by using the following command.

#ceph-deploy admin ceph-node ceph-storage

Testing CEPH

We have almost completed the Ceph cluster setup. Let's check the status and health of the running cluster by using the below commands on the admin ceph-node.

#ceph status
#ceph health
HEALTH_OK
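
As an optional sanity check, and keeping in mind that the pool name and placement-group count below are arbitrary examples, you can also create a small test pool and inspect the OSD tree.

#ceph osd pool create testpool 64
#ceph osd tree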

So, if you did not get any error message from ceph status, that means you have successfully set up your Ceph storage cluster on CentOS 7.

Conclusion

In this detailed article we learned how to set up a Ceph storage cluster using two virtual machines running CentOS 7, which can be used as a backup or as local storage, for example by creating pools to serve other virtual machines. We hope you found this article helpful. Do share your experiences when you try this at your end.

The post How to Setup Red Hat Ceph Storage on CentOS 7.0 appeared first on LinOxide.

How to Install Blog Publishing Application Dotclear on Ubuntu 15.04


Dotclear is an open source, free blog publishing application. It is written in PHP and strongly adopts all important aspects of any well structured open source solution. It is a multilingual, web-based system and offers a user-friendly administration panel. Here are some of the noteworthy features of this application.

  • Even if you don’t have advanced knowledge of CSS or HTML, you can still easily customize its themes to your liking.
  • It comes with an awesome media management feature: you can perform all important operations on your media files, and it lets you include media files from external sources as well.
  • It comes with a flexible comments management system and a built-in anti-spam filter to fight spam comments more effectively.
  • A single Dotclear installation natively allows you to create multiple blogs.
  • It is a multi-user system; you can create as many users as you like and assign them proper permissions.
  • It lets you add pages independent of the entry flow. In this way you have more control over the flow of your web blog.
  • You can extend and customize the functionality and design of your blog using freely available themes and plugins.
  • It is fast regardless of the amount of data stored in the database. It is a security- and performance-focused system.
  • It comes with advanced features like full Unicode support, syndication feeds, and XML-RPC support.
  • It is naturally optimized for search engines, and importing/exporting your blog data is pretty straightforward.

Installing Dotclear on Ubuntu 15.04

Let’s see how we can install this blog publishing system on Ubuntu. As a prerequisite, it requires Apache, PHP5, and MySQL server to be installed on our Ubuntu system. If you already have Apache, MySQL and PHP installed on your system, you can skip this step; otherwise launch your system terminal and run the following command to install the lamp-server meta-package on your Ubuntu system.

sudo apt-get install lamp-server^

During the installation process, it will launch a prompt asking you to specify the root password for your MySQL installation; note down the password you set here, as it will be needed later when installing Dotclear.

MySQL Install

Once the installation of the lamp-server package is complete, start the Apache web server.

sudo /etc/init.d/apache2 start

Launch your browser and load: http://localhost, it should display the following page.

Apache

At this point we should create a database for Dotclear. Log in to your MySQL installation and run the following command to create the database.

CREATE DATABASE dotclear;

The following screenshot should depict the whole process.

MySQL DB
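
As an optional hardening step, you can create a dedicated MySQL user for Dotclear instead of connecting as root; the user name and password below are only placeholders. Run these statements in the same MySQL session:

CREATE USER 'dotclear'@'localhost' IDENTIFIED BY 'choose_a_password';
GRANT ALL PRIVILEGES ON dotclear.* TO 'dotclear'@'localhost';
FLUSH PRIVILEGES;

If you do this, use these credentials instead of the root ones in the web installer later on.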

Our Ubuntu system is all set to install Dotclear now. Run the following command to change your current working directory to the Apache document root.

cd /var/www/html

Here download Dotclear by using the following command:

sudo wget http://download.dotclear.org/latest.tar.gz

Run the following command to extract the downloaded file.

tar -xvzf latest.tar.gz
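
Depending on your setup, you may also need to give Apache write access so that the web installer can create its configuration and cache files; the sketch below assumes the archive was extracted to /var/www/html/dotclear and that Apache runs as www-data, the default on Ubuntu.

sudo chown -R www-data:www-data /var/www/html/dotclear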

The rest of the installation process is web based. Point your web browser to http://localhost/dotclear and it will start the web-based installation wizard. On the very first step, specify your database credentials (the DB hostname should be "localhost"; the password and DB name are the ones we created earlier). Click "Continue" to take the installation to the next step.

DotClear install wizard

Specify the username and password for this new installation here. These will be the administrator login details for your Dotclear installation.

DotClear User

That’s it, the installation is complete now. All important URLs are listed on this congratulations page.

DotClear Login

Click “Manage your blog now” to start managing your blog. This is the main screen for your administration interface for this publishing system.

DotClear Main Interface

Conclusion

Dotclear is a feature-rich, high quality, easy to install web application. It provides easy publication, easy administration, and a flexible way to manage multiple blogs from a single place. This tool is gaining popularity over time and comes with all the essential and enhanced options to run your blogs effectively.

The post How to Install Blog Publishing Application Dotclear on Ubuntu 15.04 appeared first on LinOxide.

Getting Started to Calico Virtual Private Networking on Docker


Calico is free and open source software for virtual networking in data centers. It takes a pure Layer 3 approach to highly scalable cloud virtual networking in the data center. It seamlessly integrates with cloud orchestration systems such as OpenStack and Docker clusters in order to enable secure IP communication between virtual machines and containers. It implements a highly efficient vRouter in each node that takes advantage of the existing Linux kernel forwarding engine. Calico can peer directly with the data center’s physical fabric, whether L2 or L3, without NAT, tunnels, on/off ramps, or overlays. Calico makes full use of Docker to run its components as containers on the nodes, which makes it multi-platform and very easy to ship, pack and deploy. Calico has the following salient features out of the box.

  • It can scale to tens of thousands of servers and millions of workloads.
  • Calico is easy to deploy, operate and diagnose.
  • It is open source software licensed under Apache License version 2 and uses open standards.
  • It supports container, virtual machines and bare metal workloads.
  • It supports both IPv4 and IPv6 internet protocols.
  • It is designed internally to support rich, flexible and secure network policy.

In this tutorial, we'll set up virtual private networking between two nodes, both running Calico, using Docker. Here are some easy steps on how we can do that.

1. Installing etcd

To get started with Calico virtual private networking, we'll need a Linux machine running etcd. CoreOS comes with etcd preinstalled and preconfigured, but if we want to configure Calico on another Linux distribution then we'll need to set up etcd ourselves. As we are running Ubuntu 14.04 LTS, we'll first install and configure etcd on our machine. To install etcd on our Ubuntu box, we'll add the official PPA repository of Calico by running the following command on the machine that will run the etcd server. Here, we'll be installing etcd on our 1st node.

# apt-add-repository ppa:project-calico/icehouse

The primary source of Ubuntu packages for Project Calico based on OpenStack Icehouse, an open source solution for virtual networking in cloud data centers. Find out more at http://www.projectcalico.org/
More info: https://launchpad.net/~project-calico/+archive/ubuntu/icehouse
Press [ENTER] to continue or ctrl-c to cancel adding it
gpg: keyring `/tmp/tmpi9zcmls1/secring.gpg' created
gpg: keyring `/tmp/tmpi9zcmls1/pubring.gpg' created
gpg: requesting key 3D40A6A7 from hkp server keyserver.ubuntu.com
gpg: /tmp/tmpi9zcmls1/trustdb.gpg: trustdb created
gpg: key 3D40A6A7: public key "Launchpad PPA for Project Calico" imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
OK

Then, we'll need to edit /etc/apt/preferences and make changes to prefer Calico-provided packages for Nova and Neutron.

# nano /etc/apt/preferences

We'll need to add the following lines into it.

Package: *
Pin: release o=LP-PPA-project-calico-*
Pin-Priority: 100

Calico PPA Config

Next, we'll also need to add the official BIRD PPA for Ubuntu 14.04 LTS so that bug fixes are picked up before they become available in the main Ubuntu repositories.

# add-apt-repository ppa:cz.nic-labs/bird

The BIRD Internet Routing Daemon PPA (by upstream & .deb maintainer)
More info: https://launchpad.net/~cz.nic-labs/+archive/ubuntu/bird
Press [ENTER] to continue or ctrl-c to cancel adding it
gpg: keyring `/tmp/tmphxqr5hjf/secring.gpg' created
gpg: keyring `/tmp/tmphxqr5hjf/pubring.gpg' created
gpg: requesting key F9C59A45 from hkp server keyserver.ubuntu.com
gpg: /tmp/tmphxqr5hjf/trustdb.gpg: trustdb created
gpg: key F9C59A45: public key "Launchpad Datové schránky" imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
OK

Now that the PPA setup is done, we'll update the local repository index and then install etcd on our machine.

# apt-get update

To install etcd on our Ubuntu machine, we'll run the following apt command.

# apt-get install etcd python-etcd

2. Starting Etcd

After the installation is complete, we'll configure etcd. Here, we'll edit /etc/init/etcd.conf using a text editor and adjust the exec /usr/bin/etcd line so that it looks like the configuration below.

# nano /etc/init/etcd.conf
exec /usr/bin/etcd --name="node1" \
--advertise-client-urls="http://10.130.65.71:2379,http://10.130.65.71:4001" \
--listen-client-urls="http://0.0.0.0:2379,http://0.0.0.0:4001" \
--listen-peer-urls "http://0.0.0.0:2380" \
--initial-advertise-peer-urls "http://10.130.65.71:2380" \
--initial-cluster-token $(uuidgen) \
--initial-cluster "node1=http://10.130.65.71:2380" \
--initial-cluster-state "new"
Configuring ETCD

Note: In the above configuration, we'll need to replace 10.130.65.71 and node1 with the private IP address and hostname of your etcd server box. After editing, save and exit the file.

We can get the private ip address of our etcd server by running the following command.

# ifconfig

ifconfig

 

As our etcd configuration is done, we'll now start the etcd service on our Ubuntu node. To start the etcd daemon, we'll run the following command.

# service etcd start

Once done, we'll check whether etcd is really running or not. To do so, we'll run the following command.

# service etcd status
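
As an extra sanity check, and assuming etcd is listening on the default client port configured above, we can also query its version endpoint.

# curl http://127.0.0.1:2379/version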

3. Installing Docker

Next, we'll install Docker on both of our Ubuntu nodes. To install the latest release of Docker, we simply need to run the following command.

# curl -sSL https://get.docker.com/ | sh

Docker Engine Installation

After the installation is completed, we'll restart the Docker daemon to make sure it's running before we move on to Calico.

# service docker restart

docker stop/waiting
docker start/running, process 3056
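
A quick way to confirm that the client can actually talk to the daemon is to print the version information.

# docker version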

4. Installing Calico

We'll now install Calico on our Linux machines in order to run the Calico containers. We need to install Calico on every node that we want to connect to the Calico network. To install Calico, we'll run the following commands with root or sudo permission.

On 1st Node

# wget https://github.com/projectcalico/calico-docker/releases/download/v0.6.0/calicoctl

--2015-09-28 12:08:59-- https://github.com/projectcalico/calico-docker/releases/download/v0.6.0/calicoctl
Resolving github.com (github.com)... 192.30.252.129
Connecting to github.com (github.com)|192.30.252.129|:443... connected.
...
Resolving github-cloud.s3.amazonaws.com (github-cloud.s3.amazonaws.com)... 54.231.9.9
Connecting to github-cloud.s3.amazonaws.com (github-cloud.s3.amazonaws.com)|54.231.9.9|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 6166661 (5.9M) [application/octet-stream]
Saving to: 'calicoctl'
100%[=========================================>] 6,166,661 1.47MB/s in 6.7s
2015-09-28 12:09:08 (898 KB/s) - 'calicoctl' saved [6166661/6166661]

# chmod +x calicoctl

After making it executable, we'll make the calicoctl binary available as a command from any directory. To do so, we'll run the following command.

# mv calicoctl /usr/bin/

On 2nd Node

# wget https://github.com/projectcalico/calico-docker/releases/download/v0.6.0/calicoctl

--2015-09-28 12:09:03-- https://github.com/projectcalico/calico-docker/releases/download/v0.6.0/calicoctl
Resolving github.com (github.com)... 192.30.252.131
Connecting to github.com (github.com)|192.30.252.131|:443... connected.
...
Resolving github-cloud.s3.amazonaws.com (github-cloud.s3.amazonaws.com)... 54.231.8.113
Connecting to github-cloud.s3.amazonaws.com (github-cloud.s3.amazonaws.com)|54.231.8.113|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 6166661 (5.9M) [application/octet-stream]
Saving to: 'calicoctl'
100%[=========================================>] 6,166,661 1.47MB/s in 5.9s
2015-09-28 12:09:11 (1022 KB/s) - 'calicoctl' saved [6166661/6166661]

# chmod +x calicoctl

After making it executable, we'll make the calicoctl binary available as a command from any directory. To do so, we'll run the following command.

# mv calicoctl /usr/bin/

Likewise, we'll need to execute the above commands on every other node we want to add.

5. Starting Calico services

After we have installed Calico on each of our nodes, we'll start the Calico services by running the following commands.

On 1st Node

# calicoctl node

WARNING: Unable to detect the xt_set module. Load with `modprobe xt_set`
WARNING: Unable to detect the ipip module. Load with `modprobe ipip`
No IP provided. Using detected IP: 10.130.61.244
Pulling Docker image calico/node:v0.6.0
Calico node is running with id: fa0ca1f26683563fa71d2ccc81d62706e02fac4bbb08f562d45009c720c24a43
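
If we see the kernel-module warnings shown above, they can usually be cleared by loading the modules before starting the Calico node, assuming our kernel ships them.

# modprobe xt_set
# modprobe ipip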

On 2nd Node

Next, we'll export an environment variable in order to connect our Calico nodes to the same etcd server, which is hosted on node1 in our case. To do so, we'll run the following command on each of our other nodes.

# export ETCD_AUTHORITY=10.130.61.244:2379
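
Since this variable only lives in the current shell, we may also want to persist it; a simple sketch, assuming the root user's ~/.bashrc is the right place on your system, is:

# echo 'export ETCD_AUTHORITY=10.130.61.244:2379' >> ~/.bashrc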

Then, we'll run the calicoctl node container on our second node.

# calicoctl node

WARNING: Unable to detect the xt_set module. Load with `modprobe xt_set`
WARNING: Unable to detect the ipip module. Load with `modprobe ipip`
No IP provided. Using detected IP: 10.130.61.245
Pulling Docker image calico/node:v0.6.0
Calico node is running with id: 70f79c746b28491277e28a8d002db4ab49f76a3e7d42e0aca8287a7178668de4

This command should be executed on every node on which we want to start the Calico services. The above command starts a container on the respective node. To check whether the container is running or not, we'll run the following docker command.

# docker ps

Docker Running Containers

If we see output similar to that shown above, we can confirm that the Calico containers are up and running.

6. Starting Containers

Next, we'll start a few containers on each of our nodes running the Calico services. We'll assign a different name to each of the containers running Ubuntu. Here, workload-A, workload-B, and so on have been assigned as the unique names for the containers. To do so, we'll run the following commands.

On 1st Node

# docker run --net=none --name workload-A -tid ubuntu

Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
...
91e54dfb1179: Already exists
library/ubuntu:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:73fbe2308f5f5cb6e343425831b8ab44f10bbd77070ecdfbe4081daa4dbe3ed1
Status: Downloaded newer image for ubuntu:latest
a1ba9105955e9f5b32cbdad531cf6ecd9cab0647d5d3d8b33eca0093605b7a18

# docker run --net=none --name workload-B -tid ubuntu

89dd3d00f72ac681bddee4b31835c395f14eeb1467300f2b1b9fd3e704c28b7d

On 2nd Node

# docker run --net=none --name workload-C -tid ubuntu

Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
...
91e54dfb1179: Already exists
library/ubuntu:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:73fbe2308f5f5cb6e343425831b8ab44f10bbd77070ecdfbe4081daa4dbe3ed1
Status: Downloaded newer image for ubuntu:latest
24e2d5d7d6f3990b534b5643c0e483da5b4620a1ac2a5b921b2ba08ebf754746

# docker run --net=none --name workload-D -tid ubuntu

c6f28d1ab8f7ac1d9ccc48e6e4234972ed790205c9ca4538b506bec4dc533555

Similarly, if we have more nodes, we can run Ubuntu docker containers on them by running the above command and assigning different container names.

7. Assigning IP addresses

After we have got our Docker containers running on each of our hosts, we'll add networking support to them. We'll assign a new IP address to each of the containers using calicoctl. This adds a new network interface to the containers with the assigned IP addresses. To do so, we'll run the following commands on the hosts running the containers.

On 1st Node

# calicoctl container add workload-A 192.168.0.1
# calicoctl container add workload-B 192.168.0.2

On 2nd Node

# calicoctl container add workload-C 192.168.0.3
# calicoctl container add workload-D 192.168.0.4

8. Adding Policy Profiles

After our containers have got networking interfaces and IP addresses assigned, we'll need to add policy profiles to enable networking between the containers. After adding the profiles, containers will be able to communicate with each other only if they have a common profile assigned; if they have different profiles assigned, they won't be able to communicate with each other. So, before being able to assign them, we first need to create some profiles. That can be done on either of the hosts. Here, we'll run the following commands on the 1st node.

# calicoctl profile add A_C

Created profile A_C

# calicoctl profile add B_D

Created profile B_D

After the profiles have been created, we'll simply add our workloads to the required profiles. Here, in this tutorial, we'll place workloads A and C in the common profile A_C and workloads B and D in the common profile B_D. To do so, we'll run the following commands on our hosts.

On 1st Node

# calicoctl container workload-A profile append A_C
# calicoctl container workload-B profile append B_D

On 2nd Node

# calicoctl container workload-C profile append A_C
# calicoctl container workload-D profile append B_D

9. Testing the Network

After we've added a policy profile to each of our containers using calicoctl, we'll now test whether our networking is working as expected or not. We'll take a node and a workload and try to communicate with the other containers running on the same or different nodes. Due to the profiles, we should be able to communicate only with containers sharing a common profile. So, in this case, workload A should be able to communicate only with C and vice versa, whereas workload A shouldn't be able to communicate with B or D. To test the network, we'll ping the containers having common profiles from the 1st host, which runs workloads A and B.

We'll first ping workload-C, which has IP 192.168.0.3, from workload-A as shown below.

# docker exec workload-A ping -c 4 192.168.0.3

Then, we'll ping workload-D, which has IP 192.168.0.4, from workload-B as shown below.

# docker exec workload-B ping -c 4 192.168.0.4

Ping Test Success

Now, we'll check whether we're able to ping containers having different profiles. We'll ping workload-D, which has IP address 192.168.0.4, from workload-A.

# docker exec workload-A ping -c 4 192.168.0.4

After that, we'll try to ping workload-C, which has IP address 192.168.0.3, from workload-B.

# docker exec workload-B ping -c 4 192.168.0.3

Ping Test Failed

Hence, workloads having the same profile could ping each other, whereas those having different profiles could not.

Conclusion

Calico is an awesome project providing an easy way to configure a virtual network using the latest Docker technology. It is considered a great open source solution for virtual networking in cloud data centers. Calico is being tried out by people on different cloud platforms like AWS, DigitalOcean, GCE and more these days. As Calico is still experimental, its stable version hasn't been released yet and it is still in pre-release. The project has well-written documentation, tutorials and manuals on its official documentation site.

The post Getting Started to Calico Virtual Private Networking on Docker appeared first on LinOxide.

How to Setup Installation of Kolab Groupware on CentOS 7.0


Kolab is a free, secure, scalable, reliable and open source groupware server with a web administration interface, resource management, synchronization for several devices and more. A variety of clients can access its features, including the Kolab client for Mozilla, Outlook and KDE. The core features of the Kolab groupware are an email solution, calendaring, address books and task management.

So, the Kolab groupware provides multiple functions: an email server, spam and virus filtering, and a web interface that supports secure protocols such as IMAPS, HTTPS and SMTPS. The web interface can be used to add, modify and remove users, domains, distribution lists and shared folders, among other things.

1) System Preparation

The Kolab installation process is very simple to follow, but we need to take care of a few things before installing it on CentOS 7.0.

The base operating system we are using in this tutorial is CentOS 7.0 with minimal installation packages. Let's connect to the CentOS server as the root user and configure the basic server settings by following a few steps.

Network Setup

Configure your CentOS 7 server with a static IP address and a fully qualified domain name, as Kolab has strict DNS requirements for how the machine refers to itself and how others will locate it.

You can check and set your hostname using the following commands respectively.

# hostname -f
# hostnamectl set-hostname cen-kolab
# vim /etc/hosts
72.25.10.73 cen-kolab cen-kolab.linoxide.com

Configure Firewall

If you are working in a critical environment then you should enable SELinux and firewalld on your CentOS 7 server, while it's fine to disable both in a test or non-production environment.

According to the Kolab developers, Kolab is not fully compatible with SELinux, so it is recommended that you consider configuring SELinux in permissive mode.

SELinux status can be checked, and enforcement set to permissive, with the below commands.

# sestatus
# setenforce 0

To enable and start firewall service in CentOS 7, run the following commands.

# systemctl enable firewalld
# systemctl start firewalld

In order to allow the ports required by Kolab through the CentOS 7 firewall, let's create a simple script containing all the required ports and services and execute it on the system.

# vim firewall_rules.sh

#!/bin/bash
for s in ssh http https pop3s imaps smtp ldap ldaps
do
firewall-cmd --permanent --add-service=$s
done
for p in 110/tcp 143/tcp 587/tcp
do
firewall-cmd --permanent --add-port=$p
done
firewall-cmd --reload

After saving the changes, execute this script and then run the below command to check that all the mentioned ports are allowed.

# iptables -L -n -v

Allowed Ports

2) Installing Kolab on CentOS 7

Now we will install Kolab. First, add the latest EPEL repository for CentOS 7 by running the command below.

Adding EPEL

# rpm -Uhv https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

Kolab EPEL

Downloading EPEL

Now run the below commands to download the Kolab repository files onto your CentOS 7 server; make sure they end up in /etc/yum.repos.d/.

# wget http://obs.kolabsys.com/repositories/Kolab:/3.4/CentOS_7/Kolab:3.4.repo
# wget http://obs.kolabsys.com/repositories/Kolab:/3.4:/Updates/CentOS_7/Kolab:3.4:Updates.repo

To add its GPG key, use the following command.

# rpm --import https://ssl.kolabsys.com/community.asc

Download Kolab Repository

Installing Yum Plugin

To install yum plugin priorities package, run the below command.

# yum install yum-plugin-priorities

yum plugin priorities

When prompted, press "Y" to proceed with the installation of the package.

==============================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================
Installing:
yum-plugin-priorities noarch 1.1.31-29.el7 base 24 k

Transaction Summary
==============================================================================================================================
Install 1 Package

Total download size: 24 k
Installed size: 28 k
Is this ok [y/d/N]: y

Install Kolab Groupware

Finally we have reached the point where we install the Kolab groupware itself. Let's run the command below to start its installation on CentOS 7 with yum.

# yum install kolab

installing kolab

This will install the Kolab groupware along with a number of packages, including its various dependencies. Press "Y" to continue if you agree to install these packages, as shown below.

Transaction Summary
==============================================================================================================================
Install 1 Package (+341 Dependent packages)
Upgrade ( 7 Dependent packages)

Total download size: 198 M
Is this ok [y/d/N]: y

During the installation process you will be asked to confirm the GPG Key before the installation of packages starts. Press "Y" to accept the changes and let the installation complete.

import GPG Key

3) Starting Services

After the Kolab groupware installation, start the Apache web server, MariaDB and Postfix services and enable them to start automatically at each reboot using the following commands.

For Apache

#systemctl start httpd.service
#systemctl enable httpd.service

For MariaDB

#systemctl start mariadb.service
#systemctl enable mariadb.service
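
As with any fresh MariaDB installation, you may optionally want to secure it at this point; setup-kolab may ask for the MySQL root password later, so keep note of whatever you set here.

#mysql_secure_installation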

For Postfix

#systemctl start postfix.service
#systemctl enable postfix.service

4) Configuring Kolab Groupware

Now we will start the Kolab setup process using the command below. The first thing it will ask you to configure is the FQDN, and then it will ask for the passwords that will be used later on.

Let's run the following command to start Kolab setup as shown.

# setup-kolab

Kolab Setup

Configure the hostname accordingly.

linoxide.com [Y/n]: n
Domain name to use: cen-kolab.linoxide.com

The standard root dn we composed for you follows. Please confirm this is the root
dn you wish to use.

dc=cen-kolab,dc=linoxide,dc=com [Y/n]: Y

Setup is now going to set up the 389 Directory Server. This may take a little
while (during which period there is no output and no progress indication).

Once the Kolab setup is complete, it is good practice to reboot your server and make sure that all services are up and running.
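
One rough way to verify this after the reboot, bearing in mind that exact service names vary between Kolab versions so the filter below is only a starting point, is:

#systemctl list-units --type=service --state=running | grep -Ei 'kolab|cyrus|postfix|httpd|mariadb'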

5) Kolab Web Login

Now we can log in to the web admin interface using the URL and the credentials that you configured during the setup.

Let's open the Kolab web admin page as shown below.

http://172.25.10.173/kolab-webadmin/

Kolab Web Admin

After providing valid credentials you will be greeted with the Kolab Web Administration page, where you can manage users, resources and other objects.

Kolab Server Maintenance

Conclusion

We have successfully installed and configured Kolab on CentOS 7, which is one of the best groupware solutions. Its email and calendar services stay under your own control, so your private data will never be crawled; go ahead and get your own Kolab server ready. Do let us know if you run into any difficulty.

The post How to Setup Installation of Kolab Groupware on CentOS 7.0 appeared first on LinOxide.
