This page shows a few very basic things you can do to control MMM.
Most of the control is available through the mmm_control tool on the monitor server. If you have several clusters configured, you should always pass the name of the cluster you want to query (e.g., to check the status of C1, use "mmm_control @C1 show"). If you have only a single pair configured, the cluster name can be omitted. The arguments are as follows:
mmm_control <move_role|set_online|show|ping|set_offline> [..params..]
move_role - usually used to move the writer role between cluster nodes. For example, suppose the cluster status is:
# mmm_control @C1 show
Config file: mmm_mon_C1.conf
Daemon is running!
Servers status:
  db1(192.168.0.10): master/ONLINE. Roles: reader(192.168.0.14;)
  db2(192.168.0.13): master/ONLINE. Roles: reader(192.168.0.11;), writer(192.168.0.12;)
To move the writer role to db1, you should issue:
# mmm_control @C1 move_role writer db1
Config file: mmm_mon_C1.conf
Daemon is running!
Command sent to monitoring host. Result: OK: Role 'writer' has been moved from 'db2' to 'db1'. Now you can wait some time and check new roles info!

# mmm_control @C1 show
Config file: mmm_mon_C1.conf
Daemon is running!
Servers status:
  db1(192.168.0.10): master/ONLINE. Roles: reader(192.168.0.14;), writer(192.168.0.12;)
  db2(192.168.0.13): master/ONLINE. Roles: reader(192.168.0.11;)
set_online - used to recover a node from a failure when its state is "AWAITING_RECOVERY" or "ADMIN_OFFLINE". For instance, after I restart db1, the cluster status is:
# mmm_control @C1 show
Config file: mmm_mon_C1.conf
Daemon is running!
Servers status:
  db1(192.168.0.10): master/AWAITING_RECOVERY. Roles: None
  db2(192.168.0.13): master/ONLINE. Roles: reader(192.168.0.14;), reader(192.168.0.11;), writer(192.168.0.12;)
As you can see, all roles moved to db2 when db1 failed. Now that db1 has recovered, we should set it online:
# mmm_control @C1 set_online db1
Config file: mmm_mon_C1.conf
Daemon is running!
Command sent to monitoring host. Result: OK: State of 'db1' changed to ONLINE. Now you can wait some time and check its new roles!

# mmm_control @C1 show
Config file: mmm_mon_C1.conf
Daemon is running!
Servers status:
  db1(192.168.0.10): master/ONLINE. Roles: reader(192.168.0.14;)
  db2(192.168.0.13): master/ONLINE. Roles: reader(192.168.0.11;), writer(192.168.0.12;)
set_offline - used to take a node down manually for maintenance:
# mmm_control @C1 show
Config file: mmm_mon_C1.conf
Daemon is running!
Servers status:
  db1(192.168.0.10): master/ONLINE. Roles: reader(192.168.0.14;), writer(192.168.0.12;)
  db2(192.168.0.13): master/ONLINE. Roles: reader(192.168.0.11;)

# mmm_control @C1 set_offline db1
Config file: mmm_mon_C1.conf
Daemon is running!
Command sent to monitoring host. Result: OK: State of 'db1' changed to ADMIN_OFFLINE. Now you can wait some time and check all roles!

# mmm_control @C1 show
Config file: mmm_mon_C1.conf
Daemon is running!
Servers status:
  db1(192.168.0.10): master/ADMIN_OFFLINE. Roles: None
  db2(192.168.0.13): master/ONLINE. Roles: reader(192.168.0.14;), reader(192.168.0.11;), writer(192.168.0.12;)
Again, the writer and reader roles automatically move to db2. To bring db1 back up, we use set_online as shown earlier; afterwards:
# mmm_control @C1 show
Config file: mmm_mon_C1.conf
Daemon is running!
Servers status:
  db1(192.168.0.10): master/ONLINE. Roles: reader(192.168.0.14;)
  db2(192.168.0.13): master/ONLINE. Roles: reader(192.168.0.11;), writer(192.168.0.12;)
show - as you have seen above, this command displays the current cluster status. The node states you will encounter most often are ONLINE, AWAITING_RECOVERY, and ADMIN_OFFLINE, all of which appear in the examples above.
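As an illustration, the "show" output is regular enough to script against. The sketch below (assuming the exact output format shown above; the transcript is pasted into a variable here, whereas in practice you would pipe in the real command's output) extracts which host currently holds the writer role:

```shell
# Hypothetical helper: find the host holding the writer role in "show" output.
# The sample below is copied from the status transcript above.
show_output='Servers status:
  db1(192.168.0.10): master/ONLINE. Roles: reader(192.168.0.14;)
  db2(192.168.0.13): master/ONLINE. Roles: reader(192.168.0.11;), writer(192.168.0.12;)'

# Keep only the line mentioning the writer role, then cut out the host name.
echo "$show_output" | grep 'writer(' | sed 's/^ *\([^(]*\)(.*/\1/'
```

With the sample above this prints db2, the current writer.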
ping - simply checks the monitor status. If the monitor is online:
# mmm_control @C1 ping
Config file: mmm_mon_C1.conf
Daemon is running!
When the monitor is down:
# mmm_control @C1 ping
Config file: mmm_mon_C1.conf
WARNING!!! DAEMON IS NOT RUNNING. INFORMATION MAY NOT BE ACTUAL!!!
...
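The ping output lends itself to simple monitoring scripts. A minimal sketch, assuming the output format shown above (the transcript is simulated in a variable here; a real check would capture it with something like ping_output=$(mmm_control @C1 ping)):

```shell
# Simulated ping output, copied from the healthy example above.
ping_output='Config file: mmm_mon_C1.conf
Daemon is running!'

# Treat the monitor as healthy only when the success marker is present.
if printf '%s\n' "$ping_output" | grep -q 'Daemon is running!'; then
    echo "monitor OK"
else
    echo "monitor DOWN"
fi
```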
Now, a few general guidelines:
- To make an LVM-based clone of one master (or a slave from a master/slave pair), the mmm_clone command can be used:
db2:~# mmm_clone --host db1 --clone-mode master-master
Note that the command must be executed on the destination server.
- If you are going to perform manual shutdown/restart operations on the cluster nodes, make use of the set_offline/set_online commands described above. For example, to upgrade MySQL on both nodes without downtime (only a few active MySQL sessions will get killed), take each node offline with set_offline, upgrade it, and bring it back with set_online before moving on to the other node.
If both nodes have identical hardware and are equally well suited to serve as master (which is what we would usually expect), you should be fine at this point; otherwise you may need to move the writer role back to db1 with:
mmm_control @C1 move_role writer db1
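Putting these guidelines together, one plausible rolling-upgrade sequence can be sketched as below. This is only an illustration: the upgrade command is a placeholder, and the steps are echoed by default as a dry run (set RUN= to actually execute them):

```shell
# Dry-run sketch of upgrading MySQL on both nodes of cluster C1, one node at
# a time, using the set_offline/set_online/move_role commands shown above.
# RUN defaults to "echo", so each step is printed instead of executed.
RUN="${RUN:-echo}"

$RUN mmm_control @C1 set_offline db1        # roles move to db2
$RUN your-mysql-upgrade-command db1         # placeholder for the actual upgrade
$RUN mmm_control @C1 set_online db1

$RUN mmm_control @C1 move_role writer db1   # take the writer role off db2
$RUN mmm_control @C1 set_offline db2        # remaining roles move to db1
$RUN your-mysql-upgrade-command db2         # placeholder for the actual upgrade
$RUN mmm_control @C1 set_online db2
```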