Tuesday 21 May 2013

MySQL Cluster For Multiple Dedicated Servers


The main aim of creating a MySQL cluster is redundancy: your site and applications will keep running as normal even if one server goes down.

Note: For better reliability you should have a 3rd server as a management node, although this can be shut down after the cluster starts. Even so, shutting down the management server is not recommended (see the extra notes at the end of this document for more information). You cannot run a MySQL Cluster with just two servers and have true redundancy.

It is possible to set up the cluster on two dedicated servers, but you will not get the ability to "kill" one server and have the cluster carry on as normal. For that you need a third server running the management node.

The example below uses three servers:

mysql1.domain.com - 192.168.0.1
mysql2.domain.com - 192.168.0.2
mysql3.domain.com - 192.168.0.3

Servers 1 and 2 will be the two that end up "clustered". This would be ideal for two servers behind a load balancer or using round robin DNS, and is a good replacement for replication. Server 3 needs only minimal changes made to it and does NOT need MySQL installed. It can be a low-end machine and can be doing other jobs.
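As an aside, round robin DNS for the two clustered servers just means publishing multiple A records for one name. The fragment below is a hypothetical sketch (the name db.domain.com and the TTL are my assumptions, not part of this HOWTO):

```
; Hypothetical BIND zone fragment: two A records for the same name,
; so lookups rotate between the two clustered MySQL servers
db.domain.com.  300  IN  A  192.168.0.1
db.domain.com.  300  IN  A  192.168.0.2
```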

STAGE 1: Install MySQL on the first two servers:

Complete the following actions on both mysql1 and mysql2:

cd /usr/local/
wget http://dev.mysql.com/get/Downloads/MySQL-4.1/mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz/from/http://signal42.com/mirrors/mysql/
groupadd mysql
useradd -g mysql mysql
tar -zxvf mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
rm mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
ln -s mysql-max-4.1.9-pc-linux-gnu-i686 mysql
cd mysql
scripts/mysql_install_db --user=mysql
chown -R root .
chown -R mysql data
chgrp -R mysql .
cp support-files/mysql.server /etc/rc.d/init.d/
chmod +x /etc/rc.d/init.d/mysql.server
chkconfig --add mysql.server

Do not start MySQL yet.

STAGE 2: Install and configure the management server

You need two files from the bin/ directory of the MySQL tarball: ndb_mgm and ndb_mgmd. Download the whole mysql-max tarball and extract them from its bin/ directory.

mkdir /usr/src/mysql-mgm
cd /usr/src/mysql-mgm
wget http://dev.mysql.com/get/Downloads/MySQL-4.1/mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz/from/http://www.signal42.com/mirrors/mysql/
tar -zxvf mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
rm mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
cd mysql-max-4.1.9-pc-linux-gnu-i686
mv bin/ndb_mgm .
mv bin/ndb_mgmd .
chmod +x ndb_mg*
mv ndb_mg* /usr/bin/
cd
rm -rf /usr/src/mysql-mgm

You now need to set up the config file for this management node:

mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
vi [or emacs or any other editor] config.ini

Now, insert the following (changing the bits as indicated):

[NDBD DEFAULT]
NoOfReplicas=2
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
# Management Server
[NDB_MGMD]
HostName=192.168.0.3 # the IP of THIS SERVER
# Storage Engines
[NDBD]
HostName=192.168.0.1 # the IP of the FIRST SERVER
DataDir= /var/lib/mysql-cluster
[NDBD]
HostName=192.168.0.2 # the IP of the SECOND SERVER
DataDir=/var/lib/mysql-cluster
# 2 MySQL Clients
# I personally leave this blank to allow quick changes of the mysql servers;
# you can enter the hostnames of the above two servers here. I recommend you don't.
[MYSQLD]
[MYSQLD]

Now, start the management server:

ndb_mgmd

This is the MySQL management server, not a management console. You should therefore not expect any output (we will start the console later).

STAGE 3: Configure the storage/SQL servers and start MySQL

On each of the two storage/SQL servers (192.168.0.1 and 192.168.0.2) enter the following (changing the bits as appropriate):

vi /etc/my.cnf

Press i to go into insert mode again and insert this on both servers (changing the IP address to the IP of the management server that you set up in stage 2):

[mysqld]
ndbcluster
ndb-connectstring=192.168.0.3 # the IP of the MANAGEMENT (THIRD) SERVER
[mysql_cluster]
ndb-connectstring=192.168.0.3 # the IP of the MANAGEMENT (THIRD) SERVER

Now, we make the data directory and start the storage engine:

mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
/usr/local/mysql/bin/ndbd --initial
/etc/rc.d/init.d/mysql.server start

Once you have done one server, go back to the start of stage 3 and repeat exactly the same process on the second server.

Note: you should ONLY use --initial if you are either starting from scratch or have changed the config.ini file on the management server.
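Put another way, assuming the same paths as the install above, the two cases look like this:

```
# First ever start, or after changing config.ini on the management server:
/usr/local/mysql/bin/ndbd --initial

# Any normal (re)start - the data already on disk is kept:
/usr/local/mysql/bin/ndbd
```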

STAGE 4: Check that it works

You can now return to the management server (mysql3) and enter the management console:

/usr/local/mysql/bin/ndb_mgm

Enter the command SHOW to see what is going on. A sample output looks like this:

[root@mysql3 mysql-cluster]# /usr/local/mysql/bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.0.1 (Version: 4.1.9, Nodegroup: 0, Master)
id=3 @192.168.0.2 (Version: 4.1.9, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.0.3 (Version: 4.1.9)

[mysqld(API)] 2 node(s)
id=4 (Version: 4.1.9)
id=5 (Version: 4.1.9)

ndb_mgm>

If you see

not connected, accepting connect from 192.168.0.[1/2/3]

in the first or last two lines, then you have a problem. Please email me with as much detail as you can give and I will try to find out where you have gone wrong, and change this HOWTO to fix it.
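If you would rather check for this from a script than by eye, grep does the job. The transcript below is illustrative (in practice you could capture a real one with something like ndb_mgm -e SHOW):

```shell
# Write an illustrative SHOW transcript to a file, then count unconnected nodes.
cat > /tmp/show.txt <<'EOF'
id=2    @192.168.0.1  (Version: 4.1.9, Nodegroup: 0, Master)
id=3    (not connected, accepting connect from 192.168.0.2)
EOF
# Prints the number of lines reporting an unconnected node (here: 1)
grep -c 'not connected' /tmp/show.txt
```

A count of 0 means every node in the transcript is connected.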

If you are OK up to here, it is time to test MySQL. On either server mysql1 or mysql2 enter the following commands (note that we have no root password yet):

mysql
use test;
CREATE TABLE ctest (i INT) ENGINE=NDBCLUSTER;
INSERT INTO ctest () VALUES (1);
SELECT * FROM ctest;

You should see 1 row returned (with the value 1).

If this works (and there is no reason why it shouldn't), go to the other server and run the same SELECT and see what you get. Insert from that host and go back to host 1 and see if it works. If it works, then congratulations.
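Concretely, the cross-check looks like this (assuming only the single INSERT above has run so far):

```sql
-- On the other server (e.g. mysql2): the row inserted on mysql1 should be visible
USE test;
SELECT * FROM ctest;             -- expect one row: 1
INSERT INTO ctest () VALUES (2);
-- Then back on mysql1, SELECT * FROM ctest should return two rows: 1 and 2
```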

The final test is to kill one server and see what happens. If you have physical access to the machine, simply unplug its network cable and see if the other server keeps on going fine (try the SELECT query). If you do not have physical access, do the following:

ps aux | grep ndbd

You will get output that looks something like this:

root 5578 0.0 0.3 6220 1964 ? S 03:14 0:00 ndbd
root 5579 0.0 20.4 492072 102828 ? R 03:14 0:04 ndbd
root 23532 0.0 0.1 3680 684 pts/1 S 07:59 0:00 grep ndbd

In this case ignore the command "grep ndbd" (the last line), and kill the first two processes by issuing the command kill -9 pid pid:

kill -9 5578 5579

Then try the SELECT on the other server. While you are at it, run a SHOW command on the management node to see that the server has died. To restart it, just issue

ndbd

Note: no --initial!
Further notices about setup

I recommend that you read all of this (and bookmark this page). It will almost certainly save you a lot of searching.
The Management Server

I recommend that you do not stop the management server once it has started. This is for several reasons:

* The management server takes hardly any server resources

* If a cluster falls over, you want to be able to just ssh in and type ndbd to fix it. You will not want to start messing around with another server

* You need the management server up if you want to take backups

* The cluster log is sent to the management server, so if you want to check what is going on in the cluster (or what has happened since you last looked), it is an essential tool

* All commands from the ndb_mgm client are sent to the management server; without it, there are no management commands.

* The management server is needed in case of cluster reconfiguration (a crashed server or a network split). If it is not running, a "split-brain" situation can occur. The management server's arbitration role is needed in this kind of setup to provide better fault tolerance.
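For example, the backup point above comes down to a single command in the management console (START BACKUP is a standard ndb_mgm command; the exact reply, with node and backup IDs, will vary):

```
ndb_mgm> START BACKUP
```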
