Table Range Partitioning Is a Crutch. Here’s Why SingleStore Doesn’t Need It

Eric Hanson

Director of Product Management

While antiquated legacy databases rely on table range partitioning to manage data lifecycles, modern databases built for data-intensive applications, such as SingleStore, provide capabilities that eliminate the need for it.

Table range partitioning is a crutch that legacy databases use to handle the lifecycle of bulk data additions and removals. SingleStore doesn't need range partitioning, mainly because it can delete data so fast. Using bulk loading (SingleStore's LOAD DATA or PIPELINES) plus SQL INSERT and DELETE to manage the data lifecycle is much easier than using range-partitioned tables and partition-switching operations. SingleStore's speed at adding and removing data makes it a joy to work with, compared to partitioning operations in legacy database systems.
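For a sense of what that lifecycle looks like in practice, here's a minimal sketch of load-and-expire in SingleStore SQL. The events table, bucket path, and retention window are hypothetical, and the pipeline assumes publicly readable CSV files (a CREDENTIALS clause would be needed otherwise):

-- hypothetical table, kept ordered by event time
CREATE TABLE events (
  event_time DATETIME NOT NULL,
  value DOUBLE,
  SORT KEY (event_time)
);

-- continuously bulk-load new files from object storage
CREATE PIPELINE events_pipeline AS
LOAD DATA S3 "s3://my-bucket/events/"
CONFIG '{"region":"us-west-2"}'
INTO TABLE events
FIELDS TERMINATED BY ',';
START PIPELINE events_pipeline;

-- expire old data on whatever schedule you like;
-- no partition switching required
DELETE FROM events WHERE event_time < NOW() - INTERVAL 12 MONTH;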

You can stop reading now.

But seriously, you may be thinking that you'd like to see some of the reasoning behind this, and some proof that we're as fast as I'm saying. Here goes.

Range partitioning exists for these reasons in legacy systems:

  1. It allows you to swap out a range of data fast, from an existing table to a target table, with a metadata-only operation. You can then truncate the target table to remove the data; in legacy systems, truncation is the only way to delete data quickly. (A sketch of this swap-out dance follows this list.)
  2. It allows you to swap in a range of data fast, from a staging table to a target table. That lets you prepare data in a staging table before exposing it to queries.
  3. It provides a coarse form of indexing, since data is in strictly ascending order from earlier to later ranges. For example, three years of data can be partitioned into 36 one-month ranges, so a query filtered on date touches only the relevant partitions.
  4. Different partitions can be stored on different storage devices, allowing you to add new devices as your data grows to hold the increasing volume of data.
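For contrast, here's roughly what the swap-out in reason 1 looks like in a legacy system. This is a sketch in SQL Server's dialect with hypothetical table names; the target table must be empty, with a schema identical to the source:

-- metadata-only: detach the oldest partition into an empty staging table
ALTER TABLE sales SWITCH PARTITION 1 TO sales_old;

-- only now can the rows be removed quickly
TRUNCATE TABLE sales_old;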

Let's look at why SingleStore can address the above requirements without range partitioning — and why it's much easier to use SingleStore to manage the data lifecycle than it is with legacy range partitioning.

  1. First, swapping out a range of data fast: SingleStore does not need to swap out a range of data because it can delete data phenomenally fast. Stay tuned for proof of that below.
  2. Second, swapping in a range of data fast: there's no need to swap data into a SingleStore table because it can load millions of records per second using INSERT operations (say, several threads inserting data in 100-row batches) or SingleStore PIPELINES.
  3. Third, a coarse form of indexing: SingleStore doesn't need partitioning to coarsely index data, because columnstore sort keys order the data by what would have been the partitioning key in a legacy system (e.g., a date_key or datetime column), and queries skip the segments that fall outside the filter.
  4. Finally, different partitions on different storage devices: SingleStore lets you add storage through its distributed, sharded storage model. You add more nodes to the cluster (each with its own storage) and rebalance your data, so there's no need to pin specific ranges of data to specific devices. (A sketch covering points 2 through 4 follows this list.)
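As a minimal sketch of points 2 through 4 (the readings table, its columns, and the database name are all hypothetical):

-- the SORT KEY plays the role that a range-partitioning key played
-- in a legacy system: data is kept ordered by time
CREATE TABLE readings (
  sensor_id INT NOT NULL,
  ts DATETIME NOT NULL,
  reading DECIMAL(10,2),
  SHARD KEY (sensor_id),
  SORT KEY (ts)
);

-- fast loading without staging tables: multi-row INSERT batches,
-- issued from several client threads in parallel
INSERT INTO readings VALUES
  (1, '2021-08-01 00:00:00', 98.60),
  (2, '2021-08-01 00:00:01', 97.20),
  (3, '2021-08-01 00:00:02', 99.10);

-- after adding nodes (and their storage) to the cluster,
-- spread the existing data onto them
REBALANCE PARTITIONS ON my_database;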

SingleStore DELETE Speed

A common bulk operation on large tables, and a core part of lifecycle management, is removing old data.

Here's an example of how fast SingleStore can remove old data. I created the following table on a SingleStore S8-size cluster on our managed service:

Table: lineitem2

Columns: 17

Data size: 1,254,492,160 rows

An S8 has 8 units, which is 64 total cores and 512 GB RAM.

Then, I deleted the oldest 116,325,376 rows (about 9.3% of the data). This is similar to deleting the oldest month from a table with a year of data.
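The statement itself is just a plain DELETE with a time predicate (the full script, including how the cutoff timestamp was chosen, appears in the appendix):

delete from lineitem2 where created <= "2021-08-23 18:37:33";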

This took 0.2081 seconds.

Yes, you read that right — 0.2081 seconds. Not minutes, not hours, not days.

With delete speeds like this, who needs the complexity of partitioning? Not SingleStore users.

Summary

If you think you need table partitioning and you're using SingleStore, ask yourself why. If it's to speed up bulk removal of old data, we think you'll be happy if you just use DELETE instead because it's fast and easy. If it's to speed up query processing by keeping the data in order, you can just use a SORT key. There's really no need for range partitioning in SingleStore.

Want to test this out for yourself? Get started with your free trial of SingleStore today — and say goodbye to table range partitioning for good.

Appendix: Script for Delete Test

If you want to reproduce this yourself, you can run this script on an S8-size cluster. Or, choose a different-size cluster, modify the script, and experiment with it.

RESTORE DATABASE memsql_demo FROM S3 "s3://memsql-demo/" CONFIG '{"region":"us-west-2"}';
use memsql_demo;

-- verify size of lineitem is 1,225,090 rows
select format(count(*),0) from lineitem;

-- create a columnstore version of the lineitem table, calling it lineitem2
CREATE TABLE `lineitem2` ( `orderkey` bigint(20) NOT NULL DEFAULT '0',
`partkey` int(11) DEFAULT NULL, `suppkey` int(11) DEFAULT NULL,
`linenumber` int(11) NOT NULL DEFAULT '0', `quantity` decimal(20,2) DEFAULT NULL,
`extendedprice` decimal(20,2) DEFAULT NULL,
`discount` decimal(3,2) DEFAULT NULL, `tax` decimal(3,2) DEFAULT NULL,
`returnflag` char(1) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL,
`linestatus` char(1) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL,
`shipdate` date DEFAULT NULL, `commitdate` date DEFAULT NULL,
`receiptdate` date DEFAULT NULL,
`shipinstruct` varchar(25) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL,
`shipmode` varchar(10) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL,
`comment` varchar(44) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL,
`created` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
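-- the SHARD KEY distributes rows across the cluster; the SORT KEY on
-- `created` is what makes the time-ranged DELETE below fast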
SHARD KEY (`orderkey`,`linenumber`),
SORT KEY(created));

-- seed some data from lineitem into lineitem2
insert into lineitem2
select * from lineitem;

-- append lineitem2 to itself until it has more than a billion rows
delimiter //
do
declare c bigint;
begin
select count(*) into c from lineitem2;
while c < 1000*1000*1000 loop
insert into lineitem2
select * from lineitem2;
select count(*) into c from lineitem2;
end loop;
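-- ECHO returns the result set of this SELECT to the client from inside the block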
echo select format(count(*),0) from lineitem2;
end //
delimiter ;

-- verify row count
select format(count(*),0) from lineitem2;

-- flush table to make it all in columnstore format
-- (not strictly necessary, but for
-- demonstration purposes it makes sure we are only observing columnstore
-- delete speed)
optimize table lineitem2 flush;

-- There are about 40 different "created" times. This query shows
-- total rows for each, and running total, ordered by "created"
with t as
(
select created, count(*) c
from lineitem2
group by created
)
select row_number() over (order by created), t.created, c,
sum(c) over (order by created) as cum
from t order by created;

-- Now, copy out the timestamp for about 116 million rows,
-- which is: 2021-08-23 18:37:33
-- Use this in the DELETE statement to remove around 9.3% of the rows.

-- Run a delete equivalent to the one we want to measure so its plan
-- gets compiled, and we won't be measuring compile time. The chosen
-- time is before all the data, so this deletes no rows.
delete from lineitem2 where created <= "2020-08-23 18:37:33";

-- run the DELETE, capture the before and after times, and subtract,
-- scaling to show total time in seconds
select now(6) into @ts1;
delete from lineitem2 where created <= "2021-08-23 18:37:33";
select now(6) into @ts2;
select timestampdiff(microsecond,@ts1, @ts2)/1000000.0 as secs;
