How to use

This page shows a few very basic things you can do to control MMM.

mmm_control

Most of the control is available through the mmm_control tool on the monitor server. If you have several clusters in your configuration, you should always pass the name of the cluster you want to query (e.g. to check the status of C1, use “mmm_control @C1 show”). If you have only a single pair in your configuration, the cluster name can be omitted. The arguments are as follows:

mmm_control <move_role|set_online|show|ping|set_offline> [..params..]

move_role

move_role - usually used to move the writer role between cluster nodes. For example, suppose we have this cluster status:

# mmm_control @C1 show
Config file: mmm_mon_C1.conf
Daemon is running!
Servers status:
db1(192.168.0.10): master/ONLINE. Roles: reader(192.168.0.14;)
db2(192.168.0.13): master/ONLINE. Roles: reader(192.168.0.11;), writer(192.168.0.12;)

To move the writer role to db1, issue:

# mmm_control @C1 move_role writer db1
Config file: mmm_mon_C1.conf
Daemon is running!
Command sent to monitoring host. Result: OK: Role 'writer' has been
moved from 'db2' to 'db1'. Now you can wait some time and check
new roles info!
 
# mmm_control @C1 show
Config file: mmm_mon_C1.conf
Daemon is running!
Servers status:
db1(192.168.0.10): master/ONLINE. Roles: reader(192.168.0.14;), writer(192.168.0.12;)
db2(192.168.0.13): master/ONLINE. Roles: reader(192.168.0.11;)

set_online

set_online - this command is used to recover a node from a failure when its state is “AWAITING_RECOVERY” or “ADMIN_OFFLINE”. For instance, after restarting db1, here is the cluster status:

# mmm_control @C1 show
Config file: mmm_mon_C1.conf
Daemon is running!
Servers status:
db1(192.168.0.10): master/AWAITING_RECOVERY. Roles: None
db2(192.168.0.13): master/ONLINE. Roles: reader(192.168.0.14;), reader(192.168.0.11;), writer(192.168.0.12;)

As you can see, all roles have moved to db2 because db1 failed. Now that db1 has recovered, we should set it online:

# mmm_control @C1 set_online db1
Config file: mmm_mon_C1.conf
Daemon is running!
Command sent to monitoring host. Result: OK: State of 'db1' changed
to ONLINE. Now you can wait some time and check its new roles!
 
# mmm_control @C1 show
Config file: mmm_mon_C1.conf
Daemon is running!
Servers status:
db1(192.168.0.10): master/ONLINE. Roles: reader(192.168.0.14;)
db2(192.168.0.13): master/ONLINE. Roles: reader(192.168.0.11;), writer(192.168.0.12;)

set_offline

set_offline - this is used to take a node down manually for maintenance:

# mmm_control @C1 show
Config file: mmm_mon_C1.conf
Daemon is running!
Servers status:
db1(192.168.0.10): master/ONLINE. Roles: reader(192.168.0.14;), writer(192.168.0.12;)
db2(192.168.0.13): master/ONLINE. Roles: reader(192.168.0.11;)
 
 
# mmm_control @C1 set_offline db1
Config file: mmm_mon_C1.conf
Daemon is running!
Command sent to monitoring host. Result: OK: State of 'db1' changed
to ADMIN_OFFLINE. Now you can wait some time and check all roles!
 
# mmm_control @C1 show
Config file: mmm_mon_C1.conf
Daemon is running!
Servers status:
db1(192.168.0.10): master/ADMIN_OFFLINE. Roles: None
db2(192.168.0.13): master/ONLINE. Roles: reader(192.168.0.14;), reader(192.168.0.11;), writer(192.168.0.12;)

Again, the writer and reader roles automatically move to db2. To bring db1 back up, we use set_online; afterwards the cluster status is:

# mmm_control @C1 show
Config file: mmm_mon_C1.conf
Daemon is running!
Servers status:
db1(192.168.0.10): master/ONLINE. Roles: reader(192.168.0.14;)
db2(192.168.0.13): master/ONLINE. Roles: reader(192.168.0.11;), writer(192.168.0.12;)

show

show - as noted above, this command displays the current cluster status. These are the most common node states:

  • master/ONLINE - this indicates that the node is running without any problems.
  • master/AWAITING_RECOVERY - you will commonly see this state after MySQL has been restarted but the node has not been set online in the cluster yet. If the node cannot be set online, the command “mmm_control @C1 set_online node” prints an explanation of what the problem is. If the problem is with replication, you should investigate it manually in MySQL itself.
  • master/ADMIN_OFFLINE - indicates that the node was set offline manually, even though the MySQL server is actually running.
  • master/HARD_OFFLINE - indicates that the MySQL server is unreachable. This could be a network failure, or MySQL may actually be down.
  • master/REPLICATION_FAIL - indicates an obvious replication error. You should investigate and fix the problem manually.
  • master/REPLICATION_DELAY - happens when the node cannot catch up with the master. This is common if MySQL was down for a while; if it happens under a normal workload, there is probably a more serious problem (slow network, weak hardware, etc.). When this happens, MMM automatically moves the reader role to the active master server so the delayed node can catch up faster and its data stays current.
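The states above can also be extracted programmatically from the show output. Below is a minimal shell sketch; mmm_status_summary is a hypothetical helper (not part of MMM) that just parses status lines in the format shown on this page, and it assumes master nodes with lowercase names like db1/db2.

```shell
#!/bin/sh
# Hypothetical helper, NOT part of MMM: parses "mmm_control ... show"
# status lines like "db1(192.168.0.10): master/ONLINE. Roles: ..."
# and prints "<node> <state>" pairs, one per line.
mmm_status_summary() {
    sed -n 's|^\([a-z0-9]*\)(.*): master/\([A-Z_]*\)\..*|\1 \2|p'
}

# Demo with captured output; in real use you would pipe:
#   mmm_control @C1 show | mmm_status_summary
mmm_status_summary <<'EOF'
Config file: mmm_mon_C1.conf
Daemon is running!
Servers status:
db1(192.168.0.10): master/AWAITING_RECOVERY. Roles: None
db2(192.168.0.13): master/ONLINE. Roles: reader(192.168.0.14;), writer(192.168.0.12;)
EOF
# prints:
#   db1 AWAITING_RECOVERY
#   db2 ONLINE
```

A summary like this is handy in monitoring scripts that alert when any node leaves the ONLINE state.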

ping

ping - simply checks the monitor status. If the monitor is online:

# mmm_control @C1 ping
Config file: mmm_mon_C1.conf
Daemon is running!

When the monitor is down:

# mmm_control @C1 ping
Config file: mmm_mon_C1.conf
 
WARNING!!! DAEMON IS NOT RUNNING. INFORMATION MAY NOT BE ACTUAL!!!
...
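The ping output above can be checked from a script for basic monitoring. The sketch below greps for the “Daemon is running!” banner rather than relying on exit codes, which this page does not document; monitor_alive is an illustrative name, not an MMM command.

```shell
#!/bin/sh
# Illustrative helper, NOT part of MMM: reads "mmm_control @C1 ping"
# output on stdin and succeeds only if the daemon reported itself up.
monitor_alive() {
    grep -q 'Daemon is running!'
}

# Demo with captured output; in real use, something like:
#   mmm_control @C1 ping | monitor_alive || (your alerting here)
if printf 'Config file: mmm_mon_C1.conf\nDaemon is running!\n' | monitor_alive; then
    echo "monitor OK"
else
    echo "monitor DOWN"
fi
# prints: monitor OK
```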

General guidelines

A few general guidelines:

- To make an LVM-based clone of one master (or a slave from a master/slave pair), the mmm_clone command can be used:

db2:~# mmm_clone --host db1 --clone-mode master-master

Note that the command must be executed on the destination server.

- If you are going to perform manual shutdown/restart operations on the cluster nodes, please use the manual set_offline/set_online commands. For example, to upgrade MySQL on both nodes without downtime (only a few active MySQL sessions will be killed):

  1. “mmm_control @C1 set_offline db2” (db1 will be the only working server at this point)
  2. Upgrade MySQL on db2, make sure MySQL has started, and check that replication is catching up on both nodes (the “SHOW SLAVE STATUS\G” SQL statement should show Seconds_Behind_Master: 0).
  3. “mmm_control @C1 set_online db2” to bring db2 back into the cluster, then verify that the cluster status is fine with the “mmm_control @C1 show” command.
  4. Move the writer role manually to db2: “mmm_control @C1 move_role writer db2”.
  5. Repeat set_offline, upgrade and set_online for db1.
  6. Again, verify that the cluster is running fine.
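The steps above can be sketched as a script. This is a dry run only: run() merely echoes each command so you can review the sequence before executing it for real, and upgrade_mysql is a placeholder for your own upgrade-and-check procedure, not a real tool.

```shell
#!/bin/sh
# Dry-run sketch of the rolling upgrade above. run() only echoes each
# command; replace the body with "$@" to actually execute them.
run() { echo "$@"; }

run mmm_control @C1 set_offline db2        # step 1: db1 serves alone
run upgrade_mysql db2                      # step 2: placeholder for your procedure
run mmm_control @C1 set_online db2         # step 3: db2 rejoins
run mmm_control @C1 show                   # step 3: verify cluster status
run mmm_control @C1 move_role writer db2   # step 4: writer moves to db2
run mmm_control @C1 set_offline db1        # step 5: repeat for db1
run upgrade_mysql db1
run mmm_control @C1 set_online db1
run mmm_control @C1 show                   # step 6: final check
```

Between the real steps you would also check Seconds_Behind_Master as described in step 2 before bringing each node back online.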

If both nodes have equal hardware configurations and are equally suited to serve as masters (which is what we would usually expect), you are done at this point; otherwise you may need to move the writer role back to db1 with:

mmm_control @C1 move_role writer db1
mmm1/how-to-use.txt · Last modified: 2009-06-11 16:13 by Pascal Hofmann
CC Attribution-Share Alike 3.0 Unported

MySQL is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Multi-Master Replication Manager for MySQL is in no way affiliated or associated with MySQL AB.
