'Memory use' Calculation (skiplist vs hash)

I calculated the memory use as below.


<cluster 1>
Q1)
Does the hash index use 64MB here?
The documentation states that the hash index uses 32MB per table.

CREATE TABLE `hash_PK` (
  `i` bigint(20) NOT NULL auto_increment,
  `v` bigint(20) NOT NULL,
  PRIMARY KEY (`i`) USING HASH,
  KEY (`v`)
);
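
For reference, a query like the one below (assuming the ndbinfo schema is available on a SQL node; the exact columns can vary between versions) shows the data/index memory each data node actually reports, which can be compared against the 32MB/64MB figures:

-- Hedged sketch: report actual data/index memory per data node.
-- Assumes the ndbinfo schema is enabled on the SQL node.
SELECT node_id, memory_type, used, total
FROM ndbinfo.memoryusage
ORDER BY node_id, memory_type;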

Q2)
Below is a calculation of memory usage.

 (8+8+24+40+32)*4 + 64MB = 512MB
 (8+8+24+40+40)*4 + 32MB = 512MB

 8+8 = data (two BIGINT columns)
 24 = metadata overhead per row
 40 = key (ordered index) per row
 32 or 40 = hash or PK per row
 4 = row count in millions (4M rows, so bytes/row * 4M ≈ MB)
 64MB or 32MB = hash size per column
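
Writing the two candidates out as plain arithmetic (my own estimate only, assuming 4M rows and the per-row byte costs listed above):

-- Candidate A: 32-byte hash entry per row + 64MB hash area for the table.
SELECT (8 + 8 + 24 + 40 + 32) * 4 + 64 AS estimate_mb_a;   -- 512
-- Candidate B: 40-byte PK entry per row + 32MB hash area for the table.
SELECT (8 + 8 + 24 + 40 + 40) * 4 + 32 AS estimate_mb_b;   -- 512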

I want to know which one is right (32 or 40 = hash or PK, and 64MB or 32MB = hash size per column).
Also, why is there a slight difference between the calculated value and the actual value (512MB vs. 514MB)?
And there is a 32MB difference in the PK between 'Q1' and 'Q2'. I think it should have increased by 144MB,
but it increased by 32MB. What's the reason?

32 byte * 4M = 144MB

Q3)
There is about a 32MB difference between 'Q2' and 'Q3'. What's the reason?

CREATE TABLE `hash_key` (
  `i` bigint(20) NOT NULL auto_increment,
  `v` bigint(20) NOT NULL,
  PRIMARY KEY (`i`),
  KEY (`v`) USING HASH
);

(8+8+24+40+32)*4 + 32MB = 480MB
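
If the cluster version exposes ndbinfo.memory_per_fragment, the hash index allocation can be separated from the row storage per table, which should show where the roughly 32MB difference between the two tables comes from. This is only a sketch; the column names are taken from the ndbinfo documentation and may differ in older versions:

-- Hedged sketch: per-table breakdown of row storage vs. hash index bytes.
SELECT parent_fq_name, fq_name, type,
       SUM(fixed_elem_alloc_bytes + var_elem_alloc_bytes) AS row_bytes,
       SUM(hash_index_alloc_bytes) AS hash_index_bytes
FROM ndbinfo.memory_per_fragment
WHERE fq_name LIKE '%hash%' OR parent_fq_name LIKE '%hash%'
GROUP BY parent_fq_name, fq_name, type;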

<cluster 2>
Q4)
When I created a table in another cluster, I got the following results. I don't know what the problem is;
I just created a table.


Hello @hyun.wee. I see you are doing a detailed investigation of memory usage. Can you explain why you're looking for this depth of understanding right now?

I’ll see if I can find an engineer to help answer these.

Regarding Q1: perhaps if you have redundancy 2 (two copies of each partition), the amount of memory will be doubled from the stated 32MB.
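
If that were the case, the arithmetic would simply be the stated per-table figure multiplied by the number of replicas, for example:

-- Illustration only: the 32MB per-table hash area counted twice under redundancy 2.
SELECT 32 * 2 AS hash_area_mb_with_two_replicas;   -- 64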

Just curious.
I have redundancy 1.
The starting point of this curiosity is Q4 (cluster 2).
Q1, Q2, and Q3 are fairly understandable,
but for Q4 I couldn't tell whether it was a bug, an error, or an invalid setting.
So I created a new cluster and tested the hash index.

Got it. Again, I’ll check to see if I can find someone to answer.