This page describes a full installation based on Ubuntu Hardy 8.04. It is a work in progress and hasn't been reviewed yet.
A basic installation contains at least two database servers and one monitoring server. In this guide, I used three servers running Ubuntu Hardy 8.04.
| Function | IP | Name |
|---|---|---|
| monitoring | 192.168.0.1 | mon |
| database 1 | 192.168.0.10 | db1 |
| database 2 | 192.168.0.11 | db2 |
Let's start with the installation of db1.
Obviously, mysql needs to be installed first.
db1$ sudo apt-get install mysql-server
Then edit the MySQL config /etc/mysql/my.cnf. First add the following lines to the [mysqld] section:
server-id        = 1
log_bin          = /var/log/mysql/mysql-bin.log
log_bin_index    = /var/log/mysql/mysql-bin.log.index
relay_log        = /var/log/mysql/mysql-relay-bin
relay_log_index  = /var/log/mysql/mysql-relay-bin.index
expire_logs_days = 10
max_binlog_size  = 100M
log_slave_updates
Then remove the "bind-address = 127.0.0.1" entry, so the other servers can connect.
Then restart the database:
db1$ sudo /etc/init.d/mysql restart
Now we can create the required users. We need three different users for replication and monitoring:

- replication: used by the slave server to log in to the master
- repagent: used by the agent daemon to check the status of the local MySQL server
- repmonitor: used by the monitoring server to check the status of all MySQL servers and to enable and disable read-only mode
Log in to the MySQL prompt and create these users. All user accounts will be synchronized across all nodes in this setup.
(db1) mysql> grant replication slave on *.* to 'replication'@'192.168.0.10' identified by '<replicationpw>';
(db1) mysql> grant replication slave on *.* to 'replication'@'192.168.0.11' identified by '<replicationpw>';
(db1) mysql> grant super, replication client on *.* to 'repmonitor'@'192.168.0.1' identified by '<repmonitorpw>';
(db1) mysql> grant super, replication client on *.* to 'repagent'@'192.168.0.10' identified by '<repagentpw>';
(db1) mysql> grant replication client,process on *.* to 'repagent'@'192.168.0.11' identified by '<repagentpw>';
Then we can start with the installation of db2: Install mysql.
db2$ sudo apt-get install mysql-server
Then edit the MySQL config /etc/mysql/my.cnf. First add the following lines to the [mysqld] section:
server-id        = 2
log_bin          = /var/log/mysql/mysql-bin.log
log_bin_index    = /var/log/mysql/mysql-bin.log.index
relay_log        = /var/log/mysql/mysql-relay-bin
relay_log_index  = /var/log/mysql/mysql-relay-bin.index
expire_logs_days = 10
max_binlog_size  = 100M
log_slave_updates
Remove the "bind-address = 127.0.0.1" entry.
Then restart the database:
db2$ sudo /etc/init.d/mysql restart
I'll assume that db1 contains the correct data. Even if you start with an empty database, you still have to synchronize the accounts we just created.
First make sure that no one is altering the data while we create a backup.
(db1) mysql> flush tables with read lock;
Then get the current position in the binary log. We will need these values when we set up the replication on db2.
(db1) mysql> show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000002 |      374 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
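If you script this step, the two values can be pulled out of the status output instead of copied by hand. A minimal sketch, with a sample of the output inlined (on a real server you would capture `mysql -e 'SHOW MASTER STATUS\G'` instead):

```shell
# Sample of the \G-style status output; replace with, e.g.:
#   status=$(mysql -u root -p"$PW" -e 'SHOW MASTER STATUS\G')
status='File: mysql-bin.000002
Position: 374'

# Pick the File and Position fields out of the output.
master_file=$(printf '%s\n' "$status" | awk '$1 == "File:" {print $2}')
master_pos=$(printf '%s\n' "$status" | awk '$1 == "Position:" {print $2}')

echo "master_log_file=$master_file master_log_pos=$master_pos"
```

These two variables are exactly what the CHANGE MASTER statement on db2 needs later.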
DON'T CLOSE this mysql shell. If you close it, the table lock will be released. Open a second console and type:
db1$ mysqldump -u root -p --all-databases > /tmp/database-backup.sql
Now we can remove the database-lock. Go to the first shell:
(db1) mysql> unlock tables;
Copy the database-backup to db2.
db1$ scp /tmp/database-backup.sql <user>@192.168.0.11:/tmp
Then import this into db2:
db2$ cat /tmp/database-backup.sql | mysql -u root -p
Then flush the privileges on db2. We have altered the user table, and MySQL has to reread it.
(db2) mysql> flush privileges;
On Debian and Ubuntu, copy the password in /etc/mysql/debian.cnf from db1 to db2. This password is used by the system maintenance account for starting and stopping MySQL.
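To compare the two files, the maintenance password can be read out of debian.cnf with a one-liner. A sketch, with a hypothetical sample config inlined (on a real box, point awk at /etc/mysql/debian.cnf directly; the password value here is made up):

```shell
# Hypothetical sample of /etc/mysql/debian.cnf.
debian_cnf='[client]
host     = localhost
user     = debian-sys-maint
password = s3cret
socket   = /var/run/mysqld/mysqld.sock'

# Extract the first "password = ..." value.
pw=$(printf '%s\n' "$debian_cnf" | awk '$1 == "password" {print $3; exit}')
echo "$pw"
```

Run this on both hosts and make sure the values match.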
Both databases now contain the same data. We can now set up the syncing process to keep it that way.
Log in to MySQL on db2:
db2$ mysql -u root -p
Check the current position in the binary log:
(db2) mysql> show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |       98 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
Configure replication with the following command:
(db2) mysql> change master to master_host = '192.168.0.10', master_port=3306, master_user='<replication>', master_password='<replicationpw>', master_log_file='<file>', master_log_pos=<pos>;
Please insert the values returned by "show master status" on db1 in place of the <file> and <pos> tags.
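With the example output from db1 above (file mysql-bin.000002, position 374), the filled-in statement would look like this (the password is still a placeholder):

```
(db2) mysql> change master to master_host='192.168.0.10', master_port=3306,
    master_user='replication', master_password='<replicationpw>',
    master_log_file='mysql-bin.000002', master_log_pos=374;
```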
Start the slave-process:
(db2) mysql> start slave;
Now check if the replication is running correctly on db2:
(db2) mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
   Master_Host: 192.168.0.10
   Master_User: <replication>
   Master_Port: 3306
 Connect_Retry: 60
…
Then repeat this step for db1. Log in to MySQL:
db1$ mysql -u root -p
Configure replication with the following command:
(db1) mysql> change master to master_host = '192.168.0.11', master_port=3306, master_user='<replication>', master_password='<replicationpw>', master_log_file='<file>', master_log_pos=<pos>;
Now insert the values returned by "show master status" on db2 in place of the <file> and <pos> tags.
Start the slave-process:
(db1) mysql> start slave;
Now check if the replication is running correctly on db1:
(db1) mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
   Master_Host: 192.168.0.11
   Master_User: <replication>
   Master_Port: 3306
 Connect_Retry: 60
…
Replication between the two nodes should now be complete. Try it by inserting some data into both databases and checking that it appears on the other node.
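A quick smoke test, assuming a scratch database named `test` exists on both servers (table name and values are made up for illustration):

```
(db1) mysql> create table test.reptest (id int primary key);
(db1) mysql> insert into test.reptest values (1);
(db2) mysql> select * from test.reptest;    -- the row from db1 should appear
(db2) mysql> insert into test.reptest values (2);
(db1) mysql> select * from test.reptest;    -- both rows should now be visible
```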
First download the MySQL Master-Master Replication Manager (MMM) scripts from here:
MySQL Master-Master Replication Manager v1.2.3 (.tar.gz)

The following steps need to be executed on all database servers and the monitoring server.
First install some perl libraries:
$ sudo apt-get install libproc-daemon-perl libalgorithm-diff-perl libnet0-dev libdbd-mysql-perl
Then untar the mmm-package and install it:
$ tar xvzf mysql-master-master-1.2.3.tar.gz
$ cd mysql-master-master-1.2.3
$ sudo ./install.pl
Then compile the send_arp program.
$ sudo apt-get install build-essential
$ cd /usr/local/mmm/contrib/send_arp
$ make
$ sudo make install
Alternatively, it can be compiled on a separate development machine. After running make, scp /usr/local/mmm/contrib/send_arp/send_arp to /usr/local/mmm/bin/sys/.
All generic configuration-options are grouped in a separate file. This file will be the same on all hosts in the system. This file is called mmm_common.conf and must be placed in /usr/local/mmm/etc/.
In my test-cluster, I have the following entries:
# Cluster interface
cluster_interface eth0

# Debug mode
debug no

# Paths
bin_path /usr/local/mmm/bin

# Logging setup
log mydebug
    file /usr/local/mmm/var/mmm-debug.log
    level debug

log mytraps
    file /usr/local/mmm/var/mmm-traps.log
    level trap
    email root@localhost

# Email notification settings
email notify
    from_address mmm_control@example.com
    from_name MMM Control

# Define roles
active_master_role writer

# MMMD command socket tcp-port
agent_port 9989
monitor_ip 127.0.0.1

# Cluster hosts addresses and access params
host db1
    ip 192.168.0.10
    port 3306
    user repmonitor
    password repmonitorpw
    mode master
    peer db2

host db2
    ip 192.168.0.11
    port 3306
    user repmonitor
    password repmonitorpw
    mode master
    peer db1

# Define roles that are assigned to the above hosts
# Mysql Reader role
role reader
    mode balanced
    servers db1, db2
    ip 192.168.0.21, 192.168.0.22

# Mysql Writer role
role writer
    mode exclusive
    servers db1, db2
    ip 192.168.0.20

# Replication credentials used by slaves to connect to the master
replication_user replication
replication_password replicationpw

# Checks parameters
# Ping checker
check ping
    check_period 1
    trap_period 5
    timeout 2

# Mysql checker
# (restarts after 10000 checks to prevent memory leaks)
check mysql
    check_period 1
    trap_period 2
    timeout 2
    restart_after 10000

# Mysql replication backlog checker
# (restarts after 10000 checks to prevent memory leaks)
check rep_backlog
    check_period 5
    trap_period 10
    max_backlog 60
    timeout 2
    restart_after 10000

# Mysql replication threads checker
# (restarts after 10000 checks to prevent memory leaks)
check rep_threads
    check_period 1
    trap_period 5
    timeout 2
    restart_after 10000
Create the database-server specific config called mmm_agent.conf in /usr/local/mmm/etc/. This file needs to be created on both database-servers.
#
# Master-Master Manager config (agent)
#
include mmm_common.conf

# Paths
pid_path /usr/local/mmm/var/mmmd_agent.pid

# MMMD command socket tcp-port and ip
bind_port 9989

# Define current server id
this <db-name>

# Cluster hosts addresses and access params
host db1
    user repagent
    password <repagentpw>

host db2
    user repagent
    password <repagentpw>
Where <db-name> is db1 or db2.
Then open /etc/mysql/my.cnf and add the following line to the [mysqld]-section of the config:
read-only
Then restart the database:
$ sudo /etc/init.d/mysql restart
Copy the agent startup script to /etc/init.d and register it:
$ sudo cp /usr/local/mmm/scripts/init.d/mmm_agent /etc/init.d/
$ sudo update-rc.d mmm_agent defaults
And start it:
$ sudo /etc/init.d/mmm_agent start
The monitoring-server has one extra config-file, named mmm_mon.conf.
#
# Master-Master Manager config (monitor)
#
include mmm_common.conf

# Paths
pid_path /usr/local/mmm/var/mmmd.pid
status_path /usr/local/mmm/var/mmmd.status

# MMMD command socket tcp-port
bind_port 9988

# Choose the default failover method [manual|wait|auto]
failover_method wait

# How many seconds to wait for both masters to become ONLINE
# before switching from WAIT to AUTO failover method, 0 = wait indefinitely
wait_for_other_master 60

# How many seconds to wait before switching node status from AWAITING_RECOVERY to ONLINE
# 0 = disabled
auto_set_online 300
Finally we have to copy the monitoring-startup-script to /etc/init.d.
$ sudo cp /usr/local/mmm/scripts/init.d/mmm_mon /etc/init.d/
$ sudo update-rc.d mmm_mon defaults
And start it:
$ sudo /etc/init.d/mmm_mon start
If everything went well, your setup is now complete. You can check the current status with:
$ sudo mmm_control show
MySQL Multi-Master Replication Manager
Version: 1.2.3
Config file: mmm_mon.conf
Daemon is running!
===============================
Cluster failover method: AUTO
===============================
Servers status:
  db1(192.168.0.10): master/ONLINE. Roles: reader(192.168.0.22;), writer(192.168.0.20;)
  db2(192.168.0.11): master/ONLINE. Roles: reader(192.168.0.21;)
Let your MySQL clients connect to virtual IP 192.168.0.20 for read-write access and to virtual IPs 192.168.0.21 and 192.168.0.22 for read-only access.
This setup is quite solid. It is protected against inserts to both servers at once; only a superuser who connects directly to the primary IP address of one of the MySQL servers can bypass this restriction. It is therefore recommended to connect only through the virtual IP address of the writer. Inserts on both servers at once can lead to duplicate-key errors, and when a duplicate-key error occurs, replication stops. The following commands can help you fix the replication.

To create a replication error, I stopped the slave processes on both servers and then executed an insert query on each database server: insert into user values (3,"db1"); and insert into user values (3,"db2");. The first field of the user table is the primary key.
After the slave processes were restarted, replication stopped immediately: both servers tried to execute the insert query that had been executed on the other server, which leads to a duplicate-key error.
First check the status of the slave.
(db1) mysql> show slave status\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.11
                Master_User: replication
                Master_Port: 3306
              Connect_Retry: 60
            Master_Log_File: mysql-bin.000016
        Read_Master_Log_Pos: 7802
             Relay_Log_File: mysql-relay-bin.000069
              Relay_Log_Pos: 235
      Relay_Master_Log_File: mysql-bin.000016
           Slave_IO_Running: Yes
          Slave_SQL_Running: No
            Replicate_Do_DB:
        Replicate_Ignore_DB:
         Replicate_Do_Table:
     Replicate_Ignore_Table:
    Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
                 Last_Errno: 1062
                 Last_Error: Error 'Duplicate entry '3' for key 1' on query. Default database: 'test'. Query: 'insert into user values (3,"db2")'
               Skip_Counter: 0
        Exec_Master_Log_Pos: 7595
            Relay_Log_Space: 442
            Until_Condition: None
<....>
The replication has stopped due to a duplicate-key error. The output shows some useful information, such as the last error and the current position in the master and relay logs. Optionally, you can inspect the MySQL log files. There are two types of log files, the binlog and the relay log. The binlog resides on the master server and contains all queries that were executed on that server. The relay log resides on the slave server and contains all queries that were received from the master server.
# mysqlbinlog mysql-relay-bin.000069 --start-position=235 -s
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
<...>
#090721 14:34:51 server id 2  end_log_pos 7775  Query  thread_id=5  exec_time=0  error_code=0
SET TIMESTAMP=1248179691/*!*/;
insert into user values (3,"db2")
<...>
# End of log file
ROLLBACK /* added by mysqlbinlog */;
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
This is the query that caused the replication to stop.
When we restart the slave-processes, mysql wants to execute the insert-query again. This will fail again.
We can tell mysql to skip one query from the relay-log. Then we can start the slave process again.
mysql> set global sql_slave_skip_counter=1;
Query OK, 0 rows affected (0.00 sec)

mysql> start slave;
It's also possible to delete the record and let MySQL insert it again. When we delete a record, the delete is written to the binlog and transmitted to the other server, where it would be executed as well! To prevent this, we first have to disable binary logging. The following command disables binary logging for this connection only.
mysql> set sql_log_bin=0;
Now it is safe to delete that entry.
mysql> delete from user where id = 3;
mysql> set sql_log_bin=1;
mysql> start slave;
Make sure that both servers contain the same information.
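One way to verify this is MySQL's built-in CHECKSUM TABLE statement: run it on both servers and compare the results (the `test.user` table matches the example above):

```
(db1) mysql> checksum table test.user;
(db2) mysql> checksum table test.user;
```

If the checksums differ, re-synchronize the data with a fresh dump as described earlier.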