Installing and Managing Multi Master Replication Manager (MMM) for MySQL High Availability

by Alex Davies | May 2010

Multi Master Replication Manager for MySQL ("MMM") is a set of open source Perl scripts designed to automate the creation and ongoing management of the "Active / Passive Master" high-availability replication setup discussed in the earlier article on the MySQL replication design recipe. In that setup, two MySQL servers are configured as masters, but only one of them accepts write queries at any point in time. This provides redundancy without any significant performance cost.

In this article, by Alex Davies, author of High Availability MySQL Cookbook, we will cover:

  • Multi Master Replication Manager (MMM)
    • Initial installation
    • Installing the MySQL nodes
    • Installing the monitoring node
  • Managing and using Multi Master Replication Manager (MMM)

Multi Master Replication Manager (MMM): initial installation

This setup is asynchronous, and a small number of transactions can be lost in the event of a failure of the active master. If this is not acceptable, no asynchronous replication-based high-availability technique will be suitable.

Over the next few recipes, we shall configure a two-node cluster with MMM.

It is possible to configure additional slaves and more complicated topologies. As the focus of this article is high availability, and in order to keep this recipe concise, we shall not cover these techniques (although they are all documented in the manual available at http://mysql-mmm.org/).

MMM consists of several separate Perl scripts, of which two do most of the work:

  1. mmmd_mon: Runs on one node, monitors all nodes, and takes the decisions.
  2. mmmd_agent: Runs on each MySQL node, monitors that node, and receives instructions from mmmd_mon.

In a group of MMM-managed machines, each node has a node IP, which is the normal server IP address. In addition, the cluster has floating "read" IPs and a "write" IP. The read and write IPs are moved around depending on the status of each node, as detected and decided by mmmd_mon, which migrates these IP addresses to ensure that the write IP is always on an active and working master, and that every read IP is on a master that is in sync (that is, one that does not have out-of-date data).
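
In practice, this means applications never connect to a node IP for normal traffic; they use the virtual IPs. A minimal illustration (the addresses are the example virtual IPs used later in this article, app.audit_log is a hypothetical table, and suitable grants for the connecting user are assumed):

# Writes, and reads that must see the very latest data, go to the writer IP
mysql -h 10.0.0.10 -e "INSERT INTO app.audit_log (msg) VALUES ('example');"

# Reads that can tolerate a little replication lag go to a reader IP
mysql -h 10.0.0.11 -e "SELECT COUNT(*) FROM app.audit_log;"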

To ensure good availability, mmmd_mon should not run on the same server as any of the databases; in practice, this means a minimum of three nodes.

In the examples in this article, we will configure two MySQL servers, node5 and node6 (10.0.0.5 and 10.0.0.6), with a virtual writable IP of 10.0.0.10 and two read-only IPs of 10.0.0.11 and 10.0.0.12, using a monitoring node, node4 (10.0.0.4). We will use RedHat / CentOS provided software where possible.

If you are using the same nodes to try out any of the other recipes discussed in this article, be sure to remove the MySQL Cluster RPMs and /etc/my.cnf before attempting to follow this recipe.
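
If in doubt, something along the following lines will return the nodes to a clean state (a sketch; the exact MySQL Cluster package names depend on the version you installed, so check with the rpm query first):

# See which MySQL Cluster packages (if any) are installed
rpm -qa | grep -i mysql-cluster

# Remove the packages reported above, for example:
yum -y remove MySQL-Cluster-\*

# Remove the old configuration file
rm -f /etc/my.cnf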

There are several phases to set up MMM. Firstly, the MySQL and monitoring nodes must have MMM installed, and each node must be configured to join the cluster. Secondly, the MySQL server nodes must have MySQL installed and must be configured in a master-master replication agreement. Thirdly, a monitoring node (which will monitor the cluster and take actions based on what it sees) must be configured. Finally, the MMM monitoring node must be allowed to take control of the cluster.

In this article, each of the previous four steps is a recipe. The first recipe covers the initial installation of MMM on the nodes.

How to do it...

The MMM documentation provides a list of required Perl modules. With one exception, all Perl modules currently required on both the monitoring and MySQL server nodes can be found in either the base CentOS / RHEL repositories or the EPEL repository (see the Appendices for instructions on configuring this repository), and can be installed with the following yum command:

[root@node6 ~]# yum -y install perl-Algorithm-Diff \
perl-Class-Singleton perl-DBD-MySQL perl-Log-Log4perl \
perl-Log-Dispatch perl-Proc-Daemon perl-MailTools

Not all of the package names are obvious for each module; fortunately, the actual Perl module name is recorded in the RPM metadata (the Other field shown below), which can be searched using this syntax:

[root@node5 mysql-mmm-2.0.9]# yum whatprovides "*File::stat*"
Loaded plugins: fastestmirror
...
4:perl-5.8.8-18.el5.x86_64 : The Perl programming language
Matched from:
Other : perl(File::stat) = 1.00
Filename : /usr/share/man/man3/File::stat.3pm.gz
...

This shows that the Perl File::stat module is included in the base perl package (the command prints a match once per relevant file; in this case, the first file matched happens to be the manual page).

The first step is to download the MMM source code onto all nodes:

[root@node4 ~]# mkdir mmm
[root@node4 ~]# cd mmm
[root@node4 mmm]# wget http://mysql-mmm.org/_media/:mmm2:mysql-mmm-2.0.9.tar.gz
--13:44:45-- http://mysql-mmm.org/_media/:mmm2:mysql-mmm-2.0.9.tar.gz
...
13:44:45 (383 KB/s) - `mysql-mmm-2.0.9.tar.gz' saved [50104/50104]

Then we extract it using the tar command:

[root@node4 mmm]# tar zxvf mysql-mmm-2.0.9.tar.gz
mysql-mmm-2.0.9/
mysql-mmm-2.0.9/lib/
...
mysql-mmm-2.0.9/VERSION
mysql-mmm-2.0.9/LICENSE
[root@node4 mmm]# cd mysql-mmm-2.0.9

Now, we need to install the software, which is simply done with the makefile provided:

[root@node4 mysql-mmm-2.0.9]# make install
mkdir -p /usr/lib/perl5/vendor_perl/5.8.8/MMM /usr/bin/mysql-mmm
/usr/sbin /var/log/mysql-mmm /etc /etc/mysql-mmm
/usr/bin/mysql-mmm/agent/ /usr/bin/mysql-mmm/monitor/
...
[ -f /etc/mysql-mmm/mmm_tools.conf ] || cp etc/mysql-mmm/
mmm_tools.conf /etc/mysql-mmm/

Ensure that the exit code is 0 and that there are no errors:

[root@node4 mysql-mmm-2.0.9]# echo $?
0

Any errors are most likely the result of missing dependencies—ensure that you have a working yum configuration (refer to the Appendices) and that you have run the correct yum install command.
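
A quick way to confirm that the required modules really are loadable is to ask Perl to load each one (a sketch; the module names are my mapping of the RPM names listed above):

for m in Algorithm::Diff Class::Singleton DBD::mysql \
         Log::Log4perl Log::Dispatch Proc::Daemon Mail::Send; do
    perl -M"$m" -e1 2>/dev/null && echo "$m OK" || echo "$m MISSING"
done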

Multi Master Replication Manager (MMM): installing the MySQL nodes

In this recipe, we will install the MySQL nodes that will become part of the MMM cluster. These will be configured in a multi-master replication setup, with all nodes initially set to read-only.

How to do it...

First of all, install a MySQL server:

[root@node5 ~]# yum -y install mysql-server
Loaded plugins: fastestmirror
...
Installed: mysql-server.x86_64 0:5.0.77-3.el5
Complete!

Now configure the [mysqld] section of /etc/my.cnf on both nodes with the following steps:

  1. Prevent the server from modifying its data until told to do so by MMM. Note that this does not apply to users with SUPER privilege (that is, probably you at the command line!):

    read-only

  2. Prevent the server from modifying its mysql database as a result of a replicated query it receives as a slave:

    replicate-ignore-db = mysql

  3. Prevent this server from logging changes to its mysql database:

    binlog-ignore-db = mysql

  4. Now, on the first node (in our example node5 with IP 10.0.0.5), add the following to the [mysqld] section in /etc/my.cnf:

    log-bin=node5-binary
    relay-log=node5-relay
    server-id=5

  5. And on the second node (in our example node6 with IP 10.0.0.6), repeat with the correct hostname:

    log-bin=node6-binary
    relay-log=node6-relay
    server-id=6

Ensure that these are set correctly; identical server IDs or logfile names will cause all sorts of problems later.
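
Putting the pieces together, the [mysqld] section on node5 would contain something like the following (only the settings added in this recipe are shown; existing settings such as datadir and socket stay as they are):

[mysqld]
# Refuse writes from ordinary users until MMM assigns the writer role
read-only

# Do not apply or log changes to the local mysql database
replicate-ignore-db = mysql
binlog-ignore-db    = mysql

# Unique per node: node6 uses node6-binary, node6-relay and server-id=6
log-bin   = node5-binary
relay-log = node5-relay
server-id = 5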

On both servers, start the MySQL server (the mysql_install_db script will be run automatically for you to build the initial MySQL database):

[root@node5 mysql]# service mysqld start
...
Starting MySQL: [ OK ]

The next step is to enter the mysql client and add the users required for replication and the MMM agent. Firstly, add a user for the other node (you could specify the exact IP of the peer node if you want):

mysql> grant replication slave on *.* to
'mmm_replication'@'10.0.0.%' identified by 'changeme';
Query OK, 0 rows affected (0.00 sec)

Secondly, add a user for the monitoring node to log in and check the status (specify the IP address of the monitoring host):

mysql> grant super, replication client on *.* to
'mmm_agent'@'10.0.0.4' identified by 'changeme';
Query OK, 0 rows affected (0.00 sec)

Finally, flush the privileges (or restart the MySQL server):

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

Repeat these three commands on the second node.
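
Before continuing, it is worth double-checking the accounts from the mysql client on each node; each statement should list exactly the privileges granted above:

mysql> show grants for 'mmm_replication'@'10.0.0.%';
mysql> show grants for 'mmm_agent'@'10.0.0.4';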

With the users set up on each node, now we need to set up the Multi Master Replication link. At this point, we have started everything from scratch, including installing MySQL and running it in read-only mode. Therefore, creating a replication agreement is trivial as there is no need to sync the data. If you already have data on one node that you wish to sync to the other, or both nodes are not in a consistent state, refer to the previous recipe for several techniques to achieve this.

First, ensure that the two nodes are indeed consistent. Run the command SHOW MASTER STATUS in the MySQL Client:

[root@node5 mysql]# mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.0.77-log Source distribution
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> show master status;
+---------------------+----------+--------------+------------------+
| File                | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+---------------------+----------+--------------+------------------+
| node5-binary.000003 |       98 |              | mysql            |
+---------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

Ensure that the logfile name is correct (it should be a different name on each node) and ensure that the position is identical.
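
One quick way to compare the two, assuming root SSH access from the monitoring node to both MySQL servers, is a small loop such as this sketch:

[root@node4 ~]# for h in node5 node6; do echo "== $h =="; ssh $h "mysql -e 'show master status;'"; done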

If this is correct, execute a CHANGE MASTER TO command on both nodes:

In our example, on node5 (10.0.0.5), configure it to use node6 (10.0.0.6) as a master:

mysql> change master to master_host = '10.0.0.6',
master_user='mmm_replication', master_password='changeme',
master_log_file='node6-binary.000003', master_log_pos=98;
Query OK, 0 rows affected (0.00 sec)

Configure node6 (10.0.0.6) to use node5 (10.0.0.5) as a master:

mysql> change master to master_host = '10.0.0.5',
master_user='mmm_replication', master_password='changeme',
master_log_file='node5-binary.000003', master_log_pos=98;
Query OK, 0 rows affected (0.00 sec)

On both nodes, start the slave threads by running:

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

And check that the slave has come up:

mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.0.0.6
Master_User: mmm_replication
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: node6-binary.000003
Read_Master_Log_Pos: 98
Relay_Log_File: node5-relay.000002
Relay_Log_Pos: 238
Relay_Master_Log_File: node6-binary.000003
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
...
Seconds_Behind_Master: 0
1 row in set (0.00 sec)
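
If you want extra reassurance that replication works in both directions before handing control to MMM, a simple round trip (executed as root, whose SUPER privilege bypasses read-only; replication_test is a throwaway database name used only for this check) is:

On node5:

mysql> create database replication_test;

On node6, the new database should appear almost immediately:

mysql> show databases like 'replication_test';

Clean up from either node (the drop statement replicates too):

mysql> drop database replication_test;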

The next step is to configure MMM. Unfortunately, MMM requires one Perl module that is not provided in the base or EPEL repositories for CentOS or RHEL, so we must download and install it ourselves. The module is Net::ARP (which is used for the IP takeover), and you can install it from CPAN or use a third-party RPM. In this case, we use a third-party RPM, which can be obtained from a trusted repository of your choice (in this example, I used http://dag.wieers.com/rpm/):

[root@node6 ~]# cd mmm
[root@node6 mmm]# wget ftp://ftp.univie.ac.at/systems/linux/dag/redhat/el5/en/x86_64/RPMS.dries/perl-Net-ARP-1.0.2-1.el5.rf.x86_64.rpm
--18:53:31--
...
18:53:32 (196 KB/s) - `perl-Net-ARP-1.0.2-1.el5.rf.x86_64.rpm' saved
[16582]
[root@node6 mmm]# rpm -ivh perl-Net-ARP-1.0.2-1.el5.rf.x86_64.rpm
warning: perl-Net-ARP-1.0.2-1.el5.rf.x86_64.rpm: Header V3 DSA
signature: NOKEY, key ID 1aa78495
Preparing... ###########################################
[100%]
1:perl-Net-ARP ###########################################
[100%]

Now, configure /etc/mysql-mmm/mmm_agent.conf with the name of the local node (do this on both nodes):

include mmm_common.conf
this node5
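
On node6, the same file would instead read:

include mmm_common.conf
this node6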

Start the MMM agent on the node:

[root@node6 mysql-mmm-2.0.9]# service mysql-mmm-agent start
Starting MMM Agent daemon... Ok

And configure it to start on boot:

[root@node6 mysql-mmm-2.0.9]# chkconfig mysql-mmm-agent on

Multi Master Replication Manager (MMM): installing the monitoring node

In this recipe, we will configure the monitoring node with details of each of the hosts, and will tell it to start monitoring the cluster.

How to do it...

On the monitoring node, edit /etc/mysql-mmm/mmm_common.conf to set the details of each host and the usernames and passwords to be used.

Within this file, define default interfaces, PID and binary paths, and username / password combinations for the replication and MMM agents. For our example cluster, the file looks like this:

<host default>
cluster_interface eth0

pid_path /var/run/mmmd_agent.pid
bin_path /usr/bin/mysql-mmm/

replication_user mmm_replication
replication_password changeme

agent_user mmm_agent
agent_password changeme

monitor_user mmm_agent
monitor_password changeme
</host>

We define these user accounts and passwords here because, in this example, we will use the same replication and agent accounts and passwords for both nodes. While it may be tempting to use different credentials per node, it is worth remembering that these are relatively "low privilege" accounts, and that anyone with access to either server has the same degree of access to all your data anyway!

In this example, we have additionally used a single account for both MMM's monitor and its agents.

Secondly (in the same mmm_common.conf file), define the MySQL hosts involved in the replication. For our example cluster, it looks like this:

<host node5>
ip 10.0.0.5
mode master
peer node6
</host>
<host node6>
ip 10.0.0.6
mode master
peer node5
</host>

Define a role for writers and readers; we will have two readers and one writer at any one point (this allows either node to run read-only queries). For our example cluster, it looks like this:

<role writer>
hosts node5,node6
ips 10.0.0.10
mode exclusive
</role>
<role reader>
hosts node5,node6
ips 10.0.0.11,10.0.0.12
mode balanced
</role>

If you would like a role to stick to one host unless there is a real need to move it, specify prefer nodeX in the relevant <role> section. Note that if you do this, you will not be able to move the role around as easily for maintenance, but it can be useful where the two hosts have widely different hardware.
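
For example, to keep the writer role on node5 wherever possible, the writer role definition shown above would become (a sketch based on the prefer option just described):

<role writer>
hosts node5,node6
ips 10.0.0.10
mode exclusive
prefer node5
</role>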

Finally, tell MMM that you would like the active master to allow write queries:

active_master_role writer

Copy mmm_common.conf to the MySQL nodes:

[root@node4 mysql-mmm]# scp mmm_common.conf node5:/etc/mysql-mmm/
mmm_common.conf                               100%  624     0.6KB/s   00:00
[root@node4 mysql-mmm]# scp mmm_common.conf node6:/etc/mysql-mmm/
mmm_common.conf                               100%  624     0.6KB/s   00:00

Now edit /etc/mysql-mmm/mmm_mon.conf on the monitoring node, which controls how monitoring will run.

Include the common configuration (hosts, roles, and so on) defined earlier:

include mmm_common.conf

Run a monitor locally, pinging all IPs involved:

<monitor>
ip 127.0.0.1
pid_path /var/run/mmmd_mon.pid
bin_path /usr/bin/mysql-mmm/
status_path /var/lib/misc/mmmd_mon.status
ping_ips 10.0.0.5,10.0.0.6,10.0.0.10,10.0.0.11,10.0.0.12
</monitor>

Finally, start the monitoring daemon:

[root@node4 ~]# service mysql-mmm-monitor start
Daemon bin: '/usr/sbin/mmmd_mon'
Daemon pid: '/var/run/mmmd_mon.pid'
Starting MMM Monitor daemon: Ok

MMM is now configured, with the agent monitoring the two MySQL nodes. Refer to the next recipe for instructions on using MMM.

Managing and using Multi Master Replication Manager (MMM)

In this recipe, we will show how to take your configured MMM nodes into a working MMM cluster with monitoring and high availability, and also discuss some management tasks such as conducting planned maintenance.

This recipe assumes that the MMM agent is installed on all MySQL nodes, and that an MMM monitoring host has been installed as shown in the preceding recipe. This recipe makes extensive use of the mmm_control command, which is used to control the hosts inside an MMM cluster.

How to do it...

Within mmm_control, a show command gives the current status of the cluster:

[root@node4 ~]# mmm_control show
node5(10.0.0.5) master/ONLINE. Roles: reader(10.0.0.12)
node6(10.0.0.6) master/ONLINE. Roles: reader(10.0.0.11),
writer(10.0.0.10)

This shows that both node5 and node6 are up, each has a reader role (10.0.0.12 and 10.0.0.11 respectively), and node6 additionally has the writer role (10.0.0.10). Therefore, if you need to execute a write query, or a read query that must see the latest data, use 10.0.0.10. If you are executing a read-only query that can tolerate slightly stale data, use 10.0.0.12 in order to keep the load off the active write master.
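
For example, from any client machine (appuser here is a hypothetical application account with suitable grants):

# Writes, or reads that must see the latest data:
mysql -h 10.0.0.10 -u appuser -p
# Reads that can tolerate slightly stale data:
mysql -h 10.0.0.12 -u appuser -p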

When your nodes first start, they will appear with a status of AWAITING_RECOVERY:

[root@node4 ~]# mmm_control show
node5(10.0.0.5) master/AWAITING_RECOVERY. Roles:
node6(10.0.0.6) master/AWAITING_RECOVERY. Roles:

This is because MMM needs to be sure that you want to bring them both online.

  1. The first step is to configure the nodes to come online using the mmm_control set_online command:

    [root@node4 ~]# mmm_control set_online node5
    OK: State of 'node5' changed to ONLINE. Now you can wait
    some time and check its new roles!
    [root@node4 ~]# mmm_control set_online node6
    OK: State of 'node6' changed to ONLINE. Now you can wait
    some time and check its new roles!
    [root@node4 ~]# mmm_control show
    node5(10.0.0.5) master/ONLINE. Roles: reader(10.0.0.12),
    writer(10.0.0.10)
    node6(10.0.0.6) master/ONLINE. Roles: reader(10.0.0.11)

    We can now see that MMM has brought both nodes online, giving the writer role to node5.

  2. The second step is to check that MMM has correctly configured the read-only node (node6), which we'll do by running show variables like 'read_only'; against both the MySQL server holding the writer role and one holding only a reader role:

    [root@node4 ~]# mmm_control show
    node5(10.0.0.5) master/ONLINE. Roles: reader(10.0.0.12),
    writer(10.0.0.10)
    node6(10.0.0.6) master/ONLINE. Roles: reader(10.0.0.11)
    [root@node4 ~]# echo "show variables like 'read_only';" | mysql -h
    10.0.0.10
    Variable_name Value
    read_only OFF
    [root@node4 ~]# echo "show variables like 'read_only';" | mysql -h
    10.0.0.11
    Variable_name Value
    read_only ON

    This shows the same query executed against the writer role (10.0.0.10) and the reader role (10.0.0.11). As expected, the reader is set to read-only, whereas the writer is not.

    If you execute this query against a reader role that is on the same host as the writer, it will show read_only set to OFF, even though it is a reader role. This is because the parameter is set on a per-host basis, so if a host with a reader role also holds the active writer role, it will accept write queries too. The important thing is that any host without the writer role is set to read-only; otherwise, writes could reach both masters and break the replication between them.

  3. The next step is to make sure the monitor is running in active mode. MMM runs in two modes:
    1. In active mode, the MMM monitoring daemon actively takes control of the MySQL nodes, and commands sent to mmm_control are executed on them.
    2. Passive mode is entered in the event of a problem detected at startup (either a problem communicating with a MySQL node, or a discrepancy between the stored status and the status detected on the nodes).
  4. The fourth step is to check which mode the monitor is currently in. Do this with the following command on the MMM monitoring node:

    [root@node4 ~]# mmm_control mode
    ACTIVE

    If the monitor is in passive mode, a status report will show this:

    [root@node4 ~]# mmm_control show
    # --- Monitor is in PASSIVE MODE ---
    # Cause: Discrepancies between stored status, agent status and
    system status during startup.

    The last step is to switch the monitor into active mode if it is currently passive. In order to do this, run the following command on the MMM monitoring node:

    [root@node4 ~]# mmm_control set_active
    OK: Switched into active mode.

It is possible to deliberately put MMM into passive mode, make some changes to IP addresses, and then set MMM active, which will have the effect of immediately carrying out all the pending changes (if possible). For more details, see the MMM documentation.
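
A minimal sketch of that workflow on the monitoring node (set_passive and set_active are mmm_control commands described in the MMM documentation) would be:

[root@node4 ~]# mmm_control set_passive
... edit /etc/mysql-mmm/mmm_common.conf or roles as required ...
[root@node4 ~]# mmm_control set_active
[root@node4 ~]# mmm_control show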

At this point, your MMM cluster is up and running. In the event of failure of a MySQL server, the roles that were running on that server will be migrated off the server very quickly.

How it works...

When MMM moves a role from one node to another, it uses the functionality provided by the Net::ARP Perl module to update ARP tables and rapidly move the IP address between nodes. The exact process is as follows:

On the "current" active writer node:

  • The MySQL server is set to read_only to prevent further write transactions (except those executed by a SUPER user)
  • Active connections are terminated
  • The writer role IP is removed

On the new writer:

  • The MMM process running on the slave is informed that it is about to become the active writer
  • The slave will attempt to catch up with any remaining queries in the master's binary log
  • read_only is turned off
  • The writer IP is configured

There's more...

You will sometimes want to move a role, most often the writer role, from the node currently holding it to the other node in order to conduct maintenance. This is trivial to do with MMM.

Firstly, confirm the current status:

[root@node4 ~]# mmm_control show
node5(10.0.0.5) master/ONLINE. Roles: reader(10.0.0.12)
node6(10.0.0.6) master/ONLINE. Roles: reader(10.0.0.11),
writer(10.0.0.10)

In this example, we will move the active writer role (on node6) to node5, using the move_role command:

[root@node4 ~]# mmm_control move_role writer node5
OK: Role 'writer' has been moved from 'node6' to 'node5'. Now you can
wait some time and check new roles info!

We can now check the status to see that the role has moved:

[root@node4 ~]# mmm_control show
node5(10.0.0.5) master/ONLINE. Roles: reader(10.0.0.12),
writer(10.0.0.10)
node6(10.0.0.6) master/ONLINE. Roles: reader(10.0.0.11)

Failure detection

If a node fails and MMM is running, MMM will migrate all roles off that node and onto other nodes.

For example, if we have a status with node5 having an active reader and writer, and node6 just a reader:

[root@node4 ~]# mmm_control show
node5(10.0.0.5) master/ONLINE. Roles: reader(10.0.0.12),
writer(10.0.0.10)
node6(10.0.0.6) master/ONLINE. Roles: reader(10.0.0.11)

If node5 now fails, show will quickly report that all of its roles have been migrated:

[root@node4 ~]# mmm_control show
# Warning: agent on host node5 is not reachable
node5(10.0.0.5) master/HARD_OFFLINE. Roles:
node6(10.0.0.6) master/ONLINE. Roles: reader(10.0.0.11),
reader(10.0.0.12), writer(10.0.0.10)

When node5 recovers, assuming MySQL is configured to start at boot, it will sit in AWAITING_RECOVERY state:

[root@node4 ~]# mmm_control show
node5(10.0.0.5) master/AWAITING_RECOVERY. Roles:
node6(10.0.0.6) master/ONLINE. Roles: reader(10.0.0.11),
reader(10.0.0.12), writer(10.0.0.10)

The process for activating this node was covered at the beginning of this section.
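
As a reminder, run the following on the monitoring node once the recovered node is ready:

[root@node4 ~]# mmm_control set_online node5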

If MySQL does not start at boot, the node will instead appear in the HARD_OFFLINE state. In this case, investigate the cause on the node before doing anything in MMM.

Summary

With this we come to the end of the tutorial.


About the Author

Alex Davies

Alex Davies has a wealth of experience with MySQL, having become a founding member of the MySQL Guilds before the age of 18. The author of the first book on MySQL Clustering, published in 2006, Alex has significant experience working with MySQL servers, often on scalability and high-availability problems.
