I calculated the memory use as shown below.
Does the hash index use 64MB?
The documentation states that the hash index uses 32MB per table.
```sql
CREATE TABLE `hash_PK` (
  `i` bigint(20) NOT NULL auto_increment,
  `v` bigint(20) NOT NULL,
  PRIMARY KEY (`i`) USING HASH,
  KEY (`v`)
);
```
Below is a calculation of memory usage.
(8+8+24+40+32) bytes × 4M rows + 64MB = **512MB**
(8+8+24+40+40) bytes × 4M rows + 32MB = 512MB

- 8+8 = data (two bigint columns)
- 24 = metadata overhead
- 40 = key
- 32 or 40 = hash or PK entry
- 4 = row count (4M rows)
- 64MB or 32MB = hash size per column
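As a sanity check, the two estimates above can be sketched in Python. The per-row byte counts and the fixed 64MB/32MB hash pool sizes are my assumptions from the calculation above, not confirmed NDB internals:

```python
# Hypothetical per-row cost model for a 4M-row table with two bigint
# columns; the byte counts are assumptions, not documented values.
ROWS = 4_000_000
MB = 1_000_000

def estimate_mb(data, metadata, key, index, hash_pool_mb):
    """Per-row bytes times row count, plus a fixed hash pool size."""
    per_row = data + metadata + key + index
    return (per_row * ROWS) // MB + hash_pool_mb

# Variant 1: 32-byte hash entry per row, 64MB hash pool
print(estimate_mb(data=8 + 8, metadata=24, key=40, index=32, hash_pool_mb=64))  # 512
# Variant 2: 40-byte PK entry per row, 32MB hash pool
print(estimate_mb(data=8 + 8, metadata=24, key=40, index=40, hash_pool_mb=32))  # 512
```

Both variants land on 512MB, which is why I cannot tell from the total alone which interpretation is correct.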
I want to know which interpretation is correct: 32 or 40 bytes for the hash vs. PK entry, and 64MB or 32MB for the hash size per column.
Also, why is there a slight difference between the calculated value and the actual value (512MB vs. 514MB)?
Also, there is a 32MB difference between the 'Q1' and 'Q2' PK. I expected an increase of about 128MB (32 bytes × 4M rows = 128MB), but it increased by only 32MB. What is the reason?
There is also about a 32MB difference between 'Q2' and 'Q3'. What is the reason?
```sql
CREATE TABLE `hash_key` (
  `i` bigint(20) NOT NULL auto_increment,
  `v` bigint(20) NOT NULL,
  PRIMARY KEY (`i`),
  KEY (`v`) USING HASH
);
```
(8+8+24+40+32) bytes × 4M rows + 32MB = 480MB
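The same per-row model applied to `hash_key`, again using my assumed byte counts and an assumed fixed 32MB hash pool:

```python
ROWS = 4_000_000
# data (8+8) + metadata (24) + key (40) + hash entry (32), all assumed
per_row = 8 + 8 + 24 + 40 + 32
total_mb = per_row * ROWS // 1_000_000 + 32  # fixed 32MB hash pool (assumed)
print(total_mb)  # 480
```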
When I created the same table in another cluster, I got the following results, and I don't know what causes the difference.
All I did was create the table.