I have created my MemSQL cluster with the help of the following command (--high-availability=false):
`memsql-deploy setup-cluster --high-availability=false --license <license_key> --master-host prod-node1.com --aggregator-hosts prod-node1.com --leaf-hosts prod-node2.com --password <password>`
Now I want to add an additional leaf node to the existing cluster. How can I change the high availability flag to true and add an additional leaf node?
Before enabling high availability, you’ll need to create a node and add it as a leaf. This is because high availability organizes leaves into pairs, so there must be an even count of leaves in order to enable it.
It looks like you are deploying a cluster on 2 hosts, with 1 master aggregator, 1 child aggregator, and 2 leaves, is that right? I’d like to note that the flag
--aggregator-hosts specifically refers to child aggregators only. If used, it will deploy a child aggregator to the host you indicate. Here I see you deployed a child aggregator to the same host as the master aggregator. Typically we recommend not deploying more than 1 aggregator to any host, since they’ll perform the same actions and compete with each other for resources. Additionally you must ensure that leaves that will be paired to each other for high availability are never on the same host. I would recommend switching the location of the child aggregator so that you end up with 1 aggregator and 1 leaf on each host.
Brief Description of High Availability
When high availability is enabled, all leaves are organized into availability group 1 or 2. Then leaves in opposite availability groups are paired (e.g., LEAF_A in availability group 1 is paired with LEAF_B in availability group 2). In each pair, the leaves’ database partitions are copied to each other as slave partitions. In this way, each leaf will hold twice as much data as before: all of its original master partitions, plus all the copied slave partitions, which replicate from the paired leaf. This means that if a leaf fails, you still have a copy of all of its data on another leaf. When it fails, the master aggregator will promote all the slave partitions on the paired leaf to master partitions, so that they can serve content. When the leaf recovers, the master aggregator will rebalance partitions so that each leaf again holds half master partitions and half slave partitions (this ensures that each leaf serves half the load for their shared data set).
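To make the pairing and failover behavior above concrete, here is a minimal Python sketch. This is an illustration of the logic described, not MemSQL internals; all names (`Leaf`, `pair`, `fail`) are hypothetical:

```python
# Illustration only: models how paired leaves hold copies of each other's
# partitions, and how slave partitions are promoted when a leaf fails.

class Leaf:
    def __init__(self, name, group, masters):
        self.name = name
        self.group = group            # availability group 1 or 2
        self.masters = set(masters)   # partitions this leaf serves as master
        self.slaves = set()           # replica copies from its paired leaf
        self.online = True

def pair(leaf_a, leaf_b):
    # Leaves in opposite availability groups replicate to each other,
    # so each ends up holding twice the data: its own masters plus
    # slave copies of the partner's masters.
    assert leaf_a.group != leaf_b.group
    leaf_a.slaves = set(leaf_b.masters)
    leaf_b.slaves = set(leaf_a.masters)

def fail(leaf, partner):
    # On failure, the partner's slave copies are promoted to master,
    # so every partition is still served by an online leaf.
    leaf.online = False
    partner.masters |= partner.slaves
    partner.slaves = set()

leaf_a = Leaf("LEAF_A", group=1, masters={"p0", "p1"})
leaf_b = Leaf("LEAF_B", group=2, masters={"p2", "p3"})
pair(leaf_a, leaf_b)

fail(leaf_a, leaf_b)
served = leaf_b.masters   # all four partitions still have a serving master
```

After the simulated failure, the surviving leaf serves the full shared data set, which is exactly why an even number of leaves (and pairing) is required before enabling high availability.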
High Availability Best Practice
However, if you have both leaves in a paired set on the same host, and the host fails, then the database would still go offline. To avoid that issue, ensure that no host has leaves in the same availability group. For example if all leaves on a host are in availability group 1, then they can never pair to each other.
- Run the node list to see the MemSQL IDs:
memsql-admin list-nodes
- Remove the child aggregator with this command, inserting the ID:
memsql-admin remove-aggregator --memsql-id MEMSQL_ID
- Confirm it was removed and has no role now:
memsql-admin list-nodes
- Add that same node as a leaf instead:
memsql-admin add-leaf --memsql-id MEMSQL_ID
- Enable high availability by running this on the main host:
memsql-admin enable-high-availability
If desired, you can instead deploy a child aggregator to the other host (prod-node2.com):
- Create the node:
memsql-admin create-node --host prod-node2.com --port PORT [OTHER_OPTIONS]
- Run the node list to see the MemSQL IDs:
memsql-admin list-nodes
- Add that node as your child aggregator:
memsql-admin add-aggregator --memsql-id MEMSQL_ID
- Confirm the changes:
memsql-admin list-nodes
- Note that you’ll need to specify the port, because the default port 3306 is already used by the leaf on the host prod-node2.com.
Hello @sgl-memsql ,
I was trying the same as [arshakfm], but it's not working. Can you please let me know what the issue is?
[root@vm_1 memsql]# memsql-deploy setup-cluster --high-availability=false --license BDky5NTY4ZmExMDA1MzcyZDVlAAAAAAAAAAAEAAAAAAAAAAwwNQIZAMqz/WFYKhaChhVO++p80eTdoFv4PeIls/eQrxpL+sAA== --master-host VM_1 --aggregator-hosts VM_1 --leaf-hosts VM_2 --password test
duplicate host "VM_1" found (consider using --cluster-file flag so as to register multiple nodes on the same host)
[root@vm_1 ~]# cat /etc/hosts|grep -i VM
[root@vm_1 ~]# ping VM_1
PING VM_1 (10.69.35.205) 56(84) bytes of data.
64 bytes from VM_1 (10.69.35.205): icmp_seq=1 ttl=64 time=0.022 ms
64 bytes from VM_1 (10.69.35.205): icmp_seq=2 ttl=64 time=0.040 ms
[root@vm_1 ~]# ping VM_2
PING VM_2 (10.69.35.206) 56(84) bytes of data.
64 bytes from VM_2 (10.69.35.206): icmp_seq=1 ttl=64 time=0.339 ms
64 bytes from VM_2 (10.69.35.206): icmp_seq=2 ttl=64 time=0.166 ms
I tried with both the hostname and the IP, but the result is the same. What could be the issue here?