Maximum leaf node memory limit

We are currently running a six-node cluster: four leaf nodes @ 64 GB each and two aggregators @ 16 GB each. At the moment we use 50% of our license capacity. We are running into memory issues, and one of our team members says there is a 64 GB limit per leaf node and that we should add more nodes. We work in a “private cloud” with a limited number of hosts, so adding more leaf nodes could result in two or more leaf nodes residing on the same physical host.

My questions are:

  1. Is there a maximum amount of RAM per leaf node? If so, what is it?
  2. Is it better to add more nodes to alleviate memory pressure, or to add more RAM to the existing leaf nodes?

Thank you,

Scott

Hi Scott,

  1. There is no hard limit on RAM usage for a node (SingleStore runs on nodes with a terabyte of memory). The amount of memory used per node is controlled by the maximum_memory system variable; see the sketch after this list for how to inspect and change it. By default, SingleStore sets maximum_memory to 90% of the memory on the host (with some additional heuristics for hosts with very large or very small amounts of memory).
  2. This depends on your workload. Typically, fewer, larger nodes are better than many smaller nodes. If you just need more memory (and not more compute), I would scale up your hosts if you can.
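
As a minimal sketch of working with that variable from any SQL client (the 117760 value below is purely illustrative, and depending on your version you may need to apply the change through your cluster management tooling or config file rather than SET GLOBAL):

```sql
-- Show the current per-node memory ceiling; the value is in MB.
SHOW VARIABLES LIKE 'maximum_memory';

-- Raise the ceiling after adding RAM to a host,
-- e.g. ~115 GB (115 * 1024 MB) on a 128 GB box.
SET GLOBAL maximum_memory = 117760;
```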

-Adam