SingleStore Notebooks

Learn how to Optimize Performance with TPC-H 100

Note

This tutorial is intended for Standard and Premium workspaces. You cannot run it on a Free Starter Workspace due to its storage restrictions. Create a workspace using +group in the left nav and select Standard for this notebook. Gallery notebooks tagged with "Starter" are suitable to run on a Free Starter Workspace.

Context

This notebook walks through four key principles for getting the best performance out of SingleStoreDB, using the TPC-H benchmark. SingleStoreDB is a distributed database, so you should think in terms of shard keys, database partitions, primary keys, and indexes.

About database partitions

The general recommendation for most clusters is 4 CPU cores per database partition on each leaf. For example, a cluster with 16 cores on each of 4 leaves (64 CPU cores total across all leaf hosts) would want 4 partitions per leaf (16 partitions throughout the cluster). If you are using an S00 workspace, you will have 2 partitions per database. Note that increasing the number of partitions adds memory and caching overhead, which can be expensive if you have thousands of tables.
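The sizing arithmetic above can be sketched as a small helper (a hypothetical function for illustration only, not part of any SingleStore API):

```python
def recommended_partitions(total_leaf_cores: int, cores_per_partition: int = 4) -> int:
    """Rule of thumb: roughly one database partition per 4 CPU cores,
    summed across all leaf hosts."""
    return max(1, total_leaf_cores // cores_per_partition)

# 4 leaves x 16 cores each = 64 cores -> 16 partitions across the cluster
print(recommended_partitions(64))  # 16
```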

About shard keys

Data is distributed across the SingleStoreDB Cloud workspace into a number of partitions on the leaf nodes. The shard key is a collection of columns in a table that controls how the rows of that table are distributed. To determine the partition responsible for a given row, SingleStoreDB Cloud computes a hash of the shard key columns and maps it to a partition ID. Therefore, rows with the same shard key will reside on the same partition.
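The hash-to-partition principle can be illustrated with a toy sketch (SingleStoreDB uses its own internal hash function; `partition_for` below is a hypothetical stand-in):

```python
import hashlib

NUM_PARTITIONS = 16  # illustrative partition count

def partition_for(*shard_key_values) -> int:
    # Stand-in for the real internal hash: hash the shard key columns,
    # then map the digest to a partition ID.
    digest = hashlib.sha256(repr(shard_key_values).encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

# All rows with the same l_orderkey map to the same partition, so a join
# on l_orderkey never needs to move rows between partitions.
print(partition_for(42) == partition_for(42))  # True
```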

About hash indexes

Hash indexes are highly efficient for exact-match lookups (point reads). Because they store rows in a sparse array of buckets, indexed through a hash function on the relevant columns, queries can retrieve data by examining only the corresponding bucket rather than searching the entire dataset. This significantly reduces lookup time for these query types.
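The bucket idea can be seen in a toy model (pure Python, just to show why a point read over buckets beats a full scan; not SingleStoreDB internals):

```python
# The dict plays the role of the bucket array in a hash index.
rows = [{"c_custkey": k, "c_name": f"Customer#{k:09d}"} for k in range(100_000)]

hash_index = {row["c_custkey"]: row for row in rows}  # build the "index"

def point_read_indexed(key):
    return hash_index.get(key)        # jump straight to one bucket

def point_read_scan(key):
    for row in rows:                  # examine every row
        if row["c_custkey"] == key:
            return row
    return None

assert point_read_indexed(12345) == point_read_scan(12345)
```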


Note

For this tutorial, we recommend using a workspace of size S4 to ingest data faster and to see the gains you can get from a distributed architecture.

Let's first create the unoptimized database

In [1]:

%%sql
CREATE DATABASE IF NOT EXISTS s2_tpch_unoptimized;
# To create a database with a custom number of partitions, use: CREATE DATABASE YourDatabaseName PARTITIONS=X;
# The number of partitions cannot be changed after the database is created

If using an S00 workspace, the database will have 2 partitions; if using an S1, it will have 8 partitions.

In [2]:

%%sql
SELECT num_partitions FROM information_schema.DISTRIBUTED_DATABASES WHERE database_name = 's2_tpch_unoptimized';

Let's create all the tables in that database with no indexes, shard keys, or primary keys

Action Required

Make sure to select the s2_tpch_unoptimized database from the drop-down menu at the top of this notebook. It updates the connection_url to connect to that database.

In [3]:

%%sql
CREATE TABLE IF NOT EXISTS `customer` (
`c_custkey` int(11) NOT NULL,
`c_name` varchar(25) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`c_address` varchar(40) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`c_nationkey` int(11) NOT NULL,
`c_phone` varchar(15) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`c_acctbal` decimal(15,2) NOT NULL,
`c_mktsegment` varchar(10) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`c_comment` varchar(117) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL
);
CREATE TABLE IF NOT EXISTS `lineitem` (
`l_orderkey` bigint(11) NOT NULL,
`l_partkey` int(11) NOT NULL,
`l_suppkey` int(11) NOT NULL,
`l_linenumber` int(11) NOT NULL,
`l_quantity` decimal(15,2) NOT NULL,
`l_extendedprice` decimal(15,2) NOT NULL,
`l_discount` decimal(15,2) NOT NULL,
`l_tax` decimal(15,2) NOT NULL,
`l_returnflag` char(1) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`l_linestatus` char(1) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`l_shipdate` date NOT NULL,
`l_commitdate` date NOT NULL,
`l_receiptdate` date NOT NULL,
`l_shipinstruct` varchar(25) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`l_shipmode` varchar(10) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`l_comment` varchar(44) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL
);
CREATE TABLE IF NOT EXISTS `nation` (
`n_nationkey` int(11) NOT NULL,
`n_name` varchar(25) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`n_regionkey` int(11) NOT NULL,
`n_comment` varchar(152) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL
);
CREATE TABLE IF NOT EXISTS `orders` (
`o_orderkey` bigint(11) NOT NULL,
`o_custkey` int(11) NOT NULL,
`o_orderstatus` char(1) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`o_totalprice` decimal(15,2) NOT NULL,
`o_orderdate` date NOT NULL,
`o_orderpriority` varchar(15) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`o_clerk` varchar(15) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`o_shippriority` int(11) NOT NULL,
`o_comment` varchar(79) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL
);
CREATE TABLE IF NOT EXISTS `part` (
`p_partkey` int(11) NOT NULL,
`p_name` varchar(55) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`p_mfgr` varchar(25) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`p_brand` varchar(10) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`p_type` varchar(25) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`p_size` int(11) NOT NULL,
`p_container` varchar(10) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`p_retailprice` decimal(15,2) NOT NULL,
`p_comment` varchar(23) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL
);
CREATE TABLE IF NOT EXISTS `partsupp` (
`ps_partkey` int(11) NOT NULL,
`ps_suppkey` int(11) NOT NULL,
`ps_availqty` int(11) NOT NULL,
`ps_supplycost` decimal(15,2) NOT NULL,
`ps_comment` varchar(199) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL
);
CREATE TABLE IF NOT EXISTS `region` (
`r_regionkey` int(11) NOT NULL,
`r_name` varchar(25) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`r_comment` varchar(152) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL
);
CREATE TABLE IF NOT EXISTS `supplier` (
`s_suppkey` int(11) NOT NULL,
`s_name` varchar(25) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`s_address` varchar(40) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`s_nationkey` int(11) NOT NULL,
`s_phone` varchar(15) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`s_acctbal` decimal(15,2) NOT NULL,
`s_comment` varchar(101) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL
);

Now let's create the pipelines and run them to ingest data

In [4]:

%%sql
CREATE PIPELINE IF NOT EXISTS `customer_pipeline`
AS LOAD DATA S3 'memsql-tpch-dataset/sf_100/customer'
CONFIG '{\"region\":\"us-east-1\", \"disable_gunzip\": false}'
BATCH_INTERVAL 2500
DISABLE OUT_OF_ORDER OPTIMIZATION
DISABLE OFFSETS METADATA GC
SKIP DUPLICATE KEY ERRORS
INTO TABLE `customer`
FIELDS TERMINATED BY '|' ENCLOSED BY '' ESCAPED BY '\\'
LINES TERMINATED BY '|\n' STARTING BY '';

In [5]:

%%sql
CREATE PIPELINE IF NOT EXISTS `lineitem_pipeline`
AS LOAD DATA S3 'memsql-tpch-dataset/sf_100/lineitem/lineitem.'
CONFIG '{\"region\":\"us-east-1\", \"disable_gunzip\": false}'
BATCH_INTERVAL 2500
DISABLE OUT_OF_ORDER OPTIMIZATION
DISABLE OFFSETS METADATA GC
SKIP DUPLICATE KEY ERRORS
INTO TABLE `lineitem`
FIELDS TERMINATED BY '|' ENCLOSED BY '' ESCAPED BY '\\'
LINES TERMINATED BY '|\n' STARTING BY '';

In [6]:

%%sql
CREATE PIPELINE IF NOT EXISTS `nation_pipeline`
AS LOAD DATA S3 'memsql-tpch-dataset/sf_100/nation'
CONFIG '{\"region\":\"us-east-1\", \"disable_gunzip\": false}'
BATCH_INTERVAL 2500
DISABLE OUT_OF_ORDER OPTIMIZATION
DISABLE OFFSETS METADATA GC
SKIP DUPLICATE KEY ERRORS
INTO TABLE `nation`
FIELDS TERMINATED BY '|' ENCLOSED BY '' ESCAPED BY '\\'
LINES TERMINATED BY '|\n' STARTING BY '';

In [7]:

%%sql
CREATE PIPELINE IF NOT EXISTS `orders_pipeline`
AS LOAD DATA S3 'memsql-tpch-dataset/sf_100/orders'
CONFIG '{\"region\":\"us-east-1\", \"disable_gunzip\": false}'
BATCH_INTERVAL 2500
DISABLE OUT_OF_ORDER OPTIMIZATION
DISABLE OFFSETS METADATA GC
SKIP DUPLICATE KEY ERRORS
INTO TABLE `orders`
FIELDS TERMINATED BY '|' ENCLOSED BY '' ESCAPED BY '\\'
LINES TERMINATED BY '|\n' STARTING BY '';

In [8]:

%%sql
CREATE PIPELINE IF NOT EXISTS `partsupp_pipeline`
AS LOAD DATA S3 'memsql-tpch-dataset/sf_100/partsupp'
CONFIG '{\"region\":\"us-east-1\", \"disable_gunzip\": false}'
BATCH_INTERVAL 2500
DISABLE OUT_OF_ORDER OPTIMIZATION
DISABLE OFFSETS METADATA GC
SKIP DUPLICATE KEY ERRORS
INTO TABLE `partsupp`
FIELDS TERMINATED BY '|' ENCLOSED BY '' ESCAPED BY '\\'
LINES TERMINATED BY '|\n' STARTING BY '';

In [9]:

%%sql
CREATE PIPELINE IF NOT EXISTS `part_pipeline`
AS LOAD DATA S3 'memsql-tpch-dataset/sf_100/part'
CONFIG '{\"region\":\"us-east-1\", \"disable_gunzip\": false}'
BATCH_INTERVAL 2500
DISABLE OUT_OF_ORDER OPTIMIZATION
DISABLE OFFSETS METADATA GC
SKIP DUPLICATE KEY ERRORS
INTO TABLE `part`
FIELDS TERMINATED BY '|' ENCLOSED BY '' ESCAPED BY '\\'
LINES TERMINATED BY '|\n' STARTING BY '';

In [10]:

%%sql
CREATE PIPELINE IF NOT EXISTS `region_pipeline`
AS LOAD DATA S3 'memsql-tpch-dataset/sf_100/region'
CONFIG '{\"region\":\"us-east-1\", \"disable_gunzip\": false}'
BATCH_INTERVAL 2500
DISABLE OUT_OF_ORDER OPTIMIZATION
DISABLE OFFSETS METADATA GC
SKIP DUPLICATE KEY ERRORS
INTO TABLE `region`
FIELDS TERMINATED BY '|' ENCLOSED BY '' ESCAPED BY '\\'
LINES TERMINATED BY '|\n' STARTING BY '';

In [11]:

%%sql
CREATE PIPELINE IF NOT EXISTS `supplier_pipeline`
AS LOAD DATA S3 'memsql-tpch-dataset/sf_100/supplier'
CONFIG '{\"region\":\"us-east-1\", \"disable_gunzip\": false}'
BATCH_INTERVAL 2500
DISABLE OUT_OF_ORDER OPTIMIZATION
DISABLE OFFSETS METADATA GC
SKIP DUPLICATE KEY ERRORS
INTO TABLE `supplier`
FIELDS TERMINATED BY '|' ENCLOSED BY '' ESCAPED BY '\\'
LINES TERMINATED BY '|\n' STARTING BY '';

In [12]:

%%sql
START PIPELINE customer_pipeline;
START PIPELINE lineitem_pipeline;
START PIPELINE nation_pipeline;
START PIPELINE orders_pipeline;
START PIPELINE partsupp_pipeline;
START PIPELINE part_pipeline;
START PIPELINE region_pipeline;
START PIPELINE supplier_pipeline;

[Optional Step] Check data ingestion in real-time with Perspective

In [13]:

import perspective
import threading
import random
import time
import warnings
from datetime import datetime, date
from perspective import Table, PerspectiveWidget
warnings.filterwarnings('ignore')

In [14]:

def loop():
    while mode != 'stop':
        while mode == 'run':
            table.update(data_source())
            time.sleep(1)

In [15]:

def data_source():
    result = %sql SELECT sum(rows_streamed) AS rows_streamed FROM information_schema.pipelines_batches_summary WHERE database_name = 's2_tpch_unoptimized';
    result2 = list(result.dicts())
    return result2

SCHEMA = {
    "rows_streamed": int
}

In [16]:

mode = 'run'
table = perspective.Table(SCHEMA, limit=100)
threading.Thread(target=loop).start()

In [17]:

perspective.PerspectiveWidget(table, title="Track Row Ingestion", plugin="Y Line", columns=["rows_streamed"])

In [18]:

mode = 'stop'

Now, let's see the performance of a few queries

In [19]:

%%sql
# TPC-H Query 1: Pricing Summary Report
SELECT
l_returnflag,
l_linestatus,
SUM(l_quantity) AS sum_qty,
SUM(l_extendedprice) AS sum_base_price,
SUM(l_extendedprice * (1 - l_discount)) AS sum_disc_price,
SUM(l_extendedprice * (1 - l_discount) * (1 + l_tax)) AS sum_charge,
AVG(l_quantity) AS avg_qty, AVG(l_extendedprice) AS avg_price,
AVG(l_discount) AS avg_disc,
COUNT(*) AS count_order
FROM s2_tpch_unoptimized.lineitem
WHERE l_shipdate <= DATE('1998-12-01') - INTERVAL '90' DAY
GROUP BY l_returnflag, l_linestatus
ORDER BY l_returnflag, l_linestatus;

In [20]:

%%sql
# TPC-H Query 4: Order Priority Checking
SELECT
o_orderpriority,
COUNT(*) AS order_count
FROM
s2_tpch_unoptimized.orders
WHERE
o_orderdate >= DATE('1993-07-01')
AND o_orderdate < DATE('1993-10-01')
AND EXISTS (
SELECT *
FROM s2_tpch_unoptimized.lineitem
WHERE l_orderkey = o_orderkey AND l_commitdate < l_receiptdate
)
GROUP BY o_orderpriority
ORDER BY o_orderpriority;

In [21]:

%%sql
# TPC-H Query 21: Suppliers Who Kept Orders Waiting
SELECT
s_name,
COUNT(*) AS numwait
FROM
s2_tpch_unoptimized.supplier,
s2_tpch_unoptimized.lineitem l1,
s2_tpch_unoptimized.orders,
s2_tpch_unoptimized.nation
WHERE
s_suppkey = l1.l_suppkey
AND o_orderkey = l1.l_orderkey
AND o_orderstatus = 'F'
AND l1.l_receiptdate > l1.l_commitdate
AND EXISTS (
SELECT
*
FROM
s2_tpch_unoptimized.lineitem l2
WHERE
l2.l_orderkey = l1.l_orderkey
AND l2.l_suppkey <> l1.l_suppkey
)
AND NOT EXISTS (
SELECT
*
FROM
s2_tpch_unoptimized.lineitem l3
WHERE
l3.l_orderkey = l1.l_orderkey
AND l3.l_suppkey <> l1.l_suppkey
AND l3.l_receiptdate > l3.l_commitdate
)
AND s_nationkey = n_nationkey
AND n_name = 'EGYPT'
GROUP BY
s_name
ORDER BY
numwait DESC,
s_name
LIMIT 100;

Now, let's focus on optimizing performance

In [22]:

%%sql
CREATE DATABASE IF NOT EXISTS s2_tpch_optimized;

Action Required

Make sure to select the s2_tpch_optimized database from the drop-down menu at the top of this notebook. It updates the connection_url to connect to that database.

Now, let's create each table with an optimized data structure:
  • We create a unique key (the primary key). For example, the lineitem table needs both l_orderkey and l_linenumber to uniquely identify a row.

  • We create a shard key that distributes data efficiently for fast joins and filtering. For the lineitem table, since we join and aggregate on l_orderkey, we shard on l_orderkey.

In [23]:

%%sql
CREATE TABLE IF NOT EXISTS `lineitem` (
`l_orderkey` bigint(11) NOT NULL,
`l_partkey` int(11) NOT NULL,
`l_suppkey` int(11) NOT NULL,
`l_linenumber` int(11) NOT NULL,
`l_quantity` decimal(15,2) NOT NULL,
`l_extendedprice` decimal(15,2) NOT NULL,
`l_discount` decimal(15,2) NOT NULL,
`l_tax` decimal(15,2) NOT NULL,
`l_returnflag` char(1) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`l_linestatus` char(1) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`l_shipdate` date NOT NULL,
`l_commitdate` date NOT NULL,
`l_receiptdate` date NOT NULL,
`l_shipinstruct` varchar(25) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`l_shipmode` varchar(10) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`l_comment` varchar(44) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
UNIQUE KEY `pk` (`l_orderkey`,`l_linenumber`) USING HASH,
SHARD KEY `__SHARDKEY` (`l_orderkey`),
KEY `l_orderkey` (`l_orderkey`) USING CLUSTERED COLUMNSTORE
);
CREATE TABLE IF NOT EXISTS `customer` (
`c_custkey` int(11) NOT NULL,
`c_name` varchar(25) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`c_address` varchar(40) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`c_nationkey` int(11) NOT NULL,
`c_phone` varchar(15) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`c_acctbal` decimal(15,2) NOT NULL,
`c_mktsegment` varchar(10) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`c_comment` varchar(117) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
UNIQUE KEY `pk` (`c_custkey`) USING HASH,
SHARD KEY `__SHARDKEY` (`c_custkey`),
KEY `c_custkey` (`c_custkey`) USING CLUSTERED COLUMNSTORE
);
CREATE TABLE IF NOT EXISTS `nation` (
`n_nationkey` int(11) NOT NULL,
`n_name` varchar(25) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`n_regionkey` int(11) NOT NULL,
`n_comment` varchar(152) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
UNIQUE KEY `pk` (`n_nationkey`) USING HASH,
SHARD KEY `__SHARDKEY` (`n_nationkey`),
KEY `n_nationkey` (`n_nationkey`) USING CLUSTERED COLUMNSTORE
);
CREATE TABLE IF NOT EXISTS `orders` (
`o_orderkey` bigint(11) NOT NULL,
`o_custkey` int(11) NOT NULL,
`o_orderstatus` char(1) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`o_totalprice` decimal(15,2) NOT NULL,
`o_orderdate` date NOT NULL,
`o_orderpriority` varchar(15) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`o_clerk` varchar(15) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`o_shippriority` int(11) NOT NULL,
`o_comment` varchar(79) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
UNIQUE KEY `pk` (`o_orderkey`) USING HASH,
SHARD KEY `__SHARDKEY` (`o_orderkey`),
KEY `o_orderkey` (`o_orderkey`) USING CLUSTERED COLUMNSTORE
);
CREATE TABLE IF NOT EXISTS `part` (
`p_partkey` int(11) NOT NULL,
`p_name` varchar(55) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`p_mfgr` varchar(25) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`p_brand` varchar(10) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`p_type` varchar(25) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`p_size` int(11) NOT NULL,
`p_container` varchar(10) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`p_retailprice` decimal(15,2) NOT NULL,
`p_comment` varchar(23) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
UNIQUE KEY `pk` (`p_partkey`) USING HASH,
SHARD KEY `__SHARDKEY` (`p_partkey`),
KEY `p_partkey` (`p_partkey`) USING CLUSTERED COLUMNSTORE
);
CREATE TABLE IF NOT EXISTS `partsupp` (
`ps_partkey` int(11) NOT NULL,
`ps_suppkey` int(11) NOT NULL,
`ps_availqty` int(11) NOT NULL,
`ps_supplycost` decimal(15,2) NOT NULL,
`ps_comment` varchar(199) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
UNIQUE KEY `pk` (`ps_partkey`,`ps_suppkey`) USING HASH,
SHARD KEY `__SHARDKEY` (`ps_partkey`),
KEY `ps_partkey` (`ps_partkey`,`ps_suppkey`) USING CLUSTERED COLUMNSTORE
);
CREATE TABLE IF NOT EXISTS `region` (
`r_regionkey` int(11) NOT NULL,
`r_name` varchar(25) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`r_comment` varchar(152) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
UNIQUE KEY `pk` (`r_regionkey`) USING HASH,
SHARD KEY `__SHARDKEY` (`r_regionkey`),
KEY `r_regionkey` (`r_regionkey`) USING CLUSTERED COLUMNSTORE
);
CREATE TABLE IF NOT EXISTS `supplier` (
`s_suppkey` int(11) NOT NULL,
`s_name` varchar(25) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`s_address` varchar(40) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`s_nationkey` int(11) NOT NULL,
`s_phone` varchar(15) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`s_acctbal` decimal(15,2) NOT NULL,
`s_comment` varchar(101) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
UNIQUE KEY `pk` (`s_suppkey`) USING HASH,
SHARD KEY `__SHARDKEY` (`s_suppkey`),
KEY `s_suppkey` (`s_suppkey`) USING CLUSTERED COLUMNSTORE
);

In [24]:

%%sql
INSERT INTO s2_tpch_optimized.nation SELECT * FROM s2_tpch_unoptimized.nation;
INSERT INTO s2_tpch_optimized.lineitem SELECT * FROM s2_tpch_unoptimized.lineitem;
INSERT INTO s2_tpch_optimized.customer SELECT * FROM s2_tpch_unoptimized.customer;
INSERT INTO s2_tpch_optimized.orders SELECT * FROM s2_tpch_unoptimized.orders;
INSERT INTO s2_tpch_optimized.part SELECT * FROM s2_tpch_unoptimized.part;
INSERT INTO s2_tpch_optimized.partsupp SELECT * FROM s2_tpch_unoptimized.partsupp;
INSERT INTO s2_tpch_optimized.region SELECT * FROM s2_tpch_unoptimized.region;
INSERT INTO s2_tpch_optimized.supplier SELECT * FROM s2_tpch_unoptimized.supplier;

In [25]:

%%sql
# TPC-H Query 1: Pricing Summary Report
SELECT
l_returnflag,
l_linestatus,
SUM(l_quantity) AS sum_qty,
SUM(l_extendedprice) AS sum_base_price,
SUM(l_extendedprice * (1 - l_discount)) AS sum_disc_price,
SUM(l_extendedprice * (1 - l_discount) * (1 + l_tax)) AS sum_charge,
AVG(l_quantity) AS avg_qty, AVG(l_extendedprice) AS avg_price,
AVG(l_discount) AS avg_disc,
COUNT(*) AS count_order
FROM lineitem
WHERE l_shipdate <= DATE('1998-12-01') - INTERVAL '90' DAY
GROUP BY l_returnflag, l_linestatus
ORDER BY l_returnflag, l_linestatus;

In [26]:

%%sql
# TPC-H Query 4: Order Priority Checking
SELECT
o_orderpriority,
COUNT(*) AS order_count
FROM
s2_tpch_optimized.orders
WHERE
o_orderdate >= DATE('1993-07-01')
AND o_orderdate < DATE('1993-10-01')
AND EXISTS (
SELECT *
FROM s2_tpch_optimized.lineitem
WHERE l_orderkey = o_orderkey AND l_commitdate < l_receiptdate
)
GROUP BY o_orderpriority
ORDER BY o_orderpriority;

In [27]:

%%sql
# TPC-H Query 21: Suppliers Who Kept Orders Waiting
SELECT
s_name,
COUNT(*) AS numwait
FROM
s2_tpch_optimized.supplier,
s2_tpch_optimized.lineitem l1,
s2_tpch_optimized.orders,
s2_tpch_optimized.nation
WHERE
s_suppkey = l1.l_suppkey
AND o_orderkey = l1.l_orderkey
AND o_orderstatus = 'F'
AND l1.l_receiptdate > l1.l_commitdate
AND EXISTS (
SELECT
*
FROM
s2_tpch_optimized.lineitem l2
WHERE
l2.l_orderkey = l1.l_orderkey
AND l2.l_suppkey <> l1.l_suppkey
)
AND NOT EXISTS (
SELECT
*
FROM
s2_tpch_optimized.lineitem l3
WHERE
l3.l_orderkey = l1.l_orderkey
AND l3.l_suppkey <> l1.l_suppkey
AND l3.l_receiptdate > l3.l_commitdate
)
AND s_nationkey = n_nationkey
AND n_name = 'EGYPT'
GROUP BY
s_name
ORDER BY
numwait DESC,
s_name
LIMIT 100;

Finally, let's do a side-by-side comparison between the optimized and unoptimized databases

In [28]:

from singlestoredb import create_engine
import sqlalchemy as sa
db_connection_unoptimized = create_engine(database='s2_tpch_unoptimized').connect()
db_connection_optimized = create_engine(database='s2_tpch_optimized').connect()

Here are a few queries that you can test side by side. Overall, you should notice around a 4x improvement in performance.

In [29]:

sql_query4 = sa.text('''
SELECT
o_orderpriority,
COUNT(*) AS order_count
FROM
s2_tpch_unoptimized.orders
WHERE
o_orderdate >= DATE('1993-07-01')
AND o_orderdate < DATE('1993-10-01')
AND EXISTS (
SELECT *
FROM s2_tpch_unoptimized.lineitem
WHERE l_orderkey = o_orderkey AND l_commitdate < l_receiptdate
)
GROUP BY o_orderpriority
ORDER BY o_orderpriority;
''')

In [30]:

sql_query21 = sa.text('''
SELECT
s_name,
COUNT(*) AS numwait
FROM
supplier,
lineitem l1,
orders,
nation
WHERE
s_suppkey = l1.l_suppkey
AND o_orderkey = l1.l_orderkey
AND o_orderstatus = 'F'
AND l1.l_receiptdate > l1.l_commitdate
AND EXISTS (
SELECT
*
FROM
lineitem l2
WHERE
l2.l_orderkey = l1.l_orderkey
AND l2.l_suppkey <> l1.l_suppkey
)
AND NOT EXISTS (
SELECT
*
FROM
lineitem l3
WHERE
l3.l_orderkey = l1.l_orderkey
AND l3.l_suppkey <> l1.l_suppkey
AND l3.l_receiptdate > l3.l_commitdate
)
AND s_nationkey = n_nationkey
AND n_name = 'EGYPT'
GROUP BY
s_name
ORDER BY
numwait DESC,
s_name
LIMIT 100;
''')

In [31]:

result = db_connection_optimized.execute(sql_query21)

In [32]:

import time
import pandas as pd
import plotly.graph_objs as go
num_iterations = 10

opt_times = []
for i in range(num_iterations):
    opt_start_time = time.time()
    opt_result = db_connection_optimized.execute(sql_query21)
    opt_stop_time = time.time()
    opt_times.append(opt_stop_time - opt_start_time)

unopt_times = []
for i in range(num_iterations):
    unopt_start_time = time.time()
    unopt_result = db_connection_unoptimized.execute(sql_query21)
    unopt_stop_time = time.time()
    unopt_times.append(unopt_stop_time - unopt_start_time)

x_axis = list(range(1, num_iterations + 1))
data = {
    'iteration': x_axis,
    'opt_times': opt_times,
    'unopt_times': unopt_times,
}
df = pd.DataFrame.from_dict(data)
fig = go.Figure()
# Adding optimized times to the plot
fig.add_trace(go.Scatter(x=df['iteration'], y=df['opt_times'], mode='lines+markers', name='Optimized Database'))
# Adding unoptimized times to the plot
fig.add_trace(go.Scatter(x=df['iteration'], y=df['unopt_times'], mode='lines+markers', name='Unoptimized Database'))
# Update y-axis and x-axis properties
fig.update_layout(
    title="Execution Time Comparison",
    xaxis_title="Iteration",
    yaxis_title="Time in Seconds",
    xaxis=dict(tickmode='array', tickvals=list(range(1, num_iterations + 1)))
)
# Show the plot
fig.show()

Details


About this Template

This notebook will help you understand how to take advantage of SingleStoreDB's distributed capabilities using TPC-H 100.


This Notebook can be run in Standard and Enterprise deployments.

Tags

advanced, performance tuning, shard key, ingest, tpch

License

This Notebook has been released under the Apache 2.0 open source license.