Slow Query with a Costly ShuffleGroupBy Operation

I analyzed a slow query generated by the B/E app and found a costly ShuffleGroupBy operation in its query profile.
I then tried to reduce the ShuffleGroupBy time by adding the WITH(LEAF_PUSHDOWN=TRUE) hint, but it did not help much. When I checked the operation again in the profile (with the leaf_pushdown hint), it showed the following two messages:

encoded_group_by_unsupported: "at least one join does not meet requirements for push down for columnstore scan"
encoded_join_unsupported: "join key is not single integer column at join depth 1"

I have some questions about the query's performance:

  1. What are the requirements for pushdown for a columnstore scan?
  2. Does the table "x_tmp_hz_cv_r_sales_inv_store_by_channel_all" have the proper shard key for this query?
  3. What is the root cause of the slowness, and are there any suggestions for improving it?

Thank you in advance for your time and reply!

Query:

profile select WITH(LEAF_PUSHDOWN=TRUE)
CATALOG_4.parent_id as catalog_4,
period_id,
sum(sales_inv_hist.purchase_spent) as purchase_spent,
sum(sales_inv_hist.purchase_discount) as purchase_discount,
sum(sales_inv_hist.purchase_units) as purchase_units,
sum(sales_inv_hist.cost_amt) as cost_amt,
sum(sales_inv_hist.gross_margin) as margin,
sum(sales_inv_hist.purchase_spent) as sales_amount,
-- sum(case
-- when sales_inv_hist.cal_dt = to_date then sales_inv_hist.inventory_on_hand_qty
-- end) as eop_inventory_on_hand_qty,
-- sum(case
-- when sales_inv_hist.cal_dt = to_date then sales_inv_hist.inventory_on_order_qty
-- end) as inventory_on_order_eop_qty,
-- sum(sales_inv_hist.oos_flg) as oos_flag,
-- sum(case
-- when sales_inv_hist.cal_dt = to_date then sales_inv_hist.oos_flg
-- end) as eop_oos_flag,
-- sum(sales_inv_hist.over_stock_flg) as over_stock_flag,
-- sum(sales_inv_hist.current_oos_impact) as oos_impact,
-- sum(sales_inv_hist.current_low_stock_impact) as low_stock_impact,
-- sum(case
-- when sales_inv_hist.cal_dt = to_date then sales_inv_hist.inventory_on_hand_cost_amt
-- end) as oh_eop_cost,
-- avg(sales_inv_hist.inventory_on_hand_cost_amt) as avg_oh_cost,
-- sum(case
-- when sales_inv_hist.cal_dt = to_date then sales_inv_hist.inventory_on_hand_amt
-- end) as total_oh_eop,
-- sum(price_amt) as price_amt,
-- avg(sales_inv_hist.inventory_on_hand_amt) as avg_oh_retail,
-- sum(sales_inv_hist.inventory_on_hand_qty) as inventory_on_hand_qty,
-- sum(sales_inv_hist.avg_sales_qty) as avg_sales_qty,
-- sum(inventory_on_order_qty) as inventory_on_order_qty,
-- avg(sales_inv_hist.inventory_on_hand_qty) as avg_inventory_oh_qty,
count(distinct case
when sales_inv_hist.days_on_hand > 0 then sales_inv_hist.catalog_internal_id
end) as num_items,
sum(case
when sales_inv_hist.cal_dt = to_date then sales_inv_hist.inventory_on_order_cost_amt
end) as eop_inventory_on_order_cost,
sum(sales_inv_hist.damage_amt) as damage_amount,
sum(sales_inv_hist.waste_amt) as waste_amount
from x_tmp_hz_cv_r_sales_inv_store_by_channel_all as sales_inv_hist
join x_tmp_hz_dt as periods
on cal_dt = periods.cal_dt1
straight_join (
select child_id
from (
select distinct child_id
from (
select child_id
from cv_r_all_parents_full_view
where (
hier = 1
and parent_id in (
2, 7)
and parent_level = 4
and child_level = 0
)
) as all_parents1
) as union1
) as relevant_products
on sales_inv_hist.catalog_internal_id = relevant_products.child_id
straight_join cv_r_all_parents_full_view as CATALOG_4
on (
CATALOG_4.child_id = sales_inv_hist.catalog_internal_id
and CATALOG_4.child_level = 0
and CATALOG_4.hier = 1
and CATALOG_4.parent_level = 4
)
-- where (
-- sales_inv_hist.cal_dt between date('2022-06-27') and date('2022-11-13')
-- or sales_inv_hist.cal_dt between date('2023-06-26') and date('2023-11-13')
-- )
group by
catalog_4,
period_id
;

Schema:
-- x_tmp_hz_cv_r_sales_inv_store_by_channel_all definition

CREATE TABLE x_tmp_hz_cv_r_sales_inv_store_by_channel_all (
cal_dt date DEFAULT NULL,
cv_pos_id bigint(20) DEFAULT NULL,
catalog_internal_id bigint(20) DEFAULT NULL,
dc_internal_id bigint(20) DEFAULT NULL,
purchase_spent decimal(19,3) DEFAULT NULL,
purchase_units decimal(19,3) DEFAULT NULL,
purchase_discount decimal(19,3) DEFAULT NULL,
price_amt decimal(19,3) DEFAULT NULL,
cost_amt decimal(19,3) DEFAULT NULL,
gross_margin decimal(19,3) DEFAULT NULL,
net_margin decimal(19,3) DEFAULT NULL,
inventory_on_hand_qty decimal(19,3) DEFAULT NULL,
inventory_on_hand_amt decimal(19,3) DEFAULT NULL,
inventory_on_hand_cost_amt decimal(19,3) DEFAULT NULL,
inventory_on_order_qty decimal(19,3) DEFAULT NULL,
inventory_on_order_amt decimal(19,3) DEFAULT NULL,
inventory_on_order_cost_amt decimal(19,3) DEFAULT NULL,

rec_insert_tmsp varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL,
SORT KEY cal_cv_r_sales_inv_store_by_channel_all_sort_key (cal_dt DESC),
SHARD KEY cal_cv_r_sales_inv_store__by_channel_all_shard_key (cal_dt,catalog_internal_id),
UNIQUE KEY cal_dt (cal_dt,catalog_internal_id,cv_pos_id,dc_internal_id) USING HASH,
KEY hz__catalog_idx (catalog_internal_id) USING HASH,
KEY hz__pos_idx (cv_pos_id) USING HASH
) AUTOSTATS_CARDINALITY_MODE=INCREMENTAL AUTOSTATS_HISTOGRAM_MODE=CREATE AUTOSTATS_SAMPLING=ON SQL_MODE='STRICT_ALL_TABLES';

-- x_tmp_hz_dt definition

CREATE TABLE x_tmp_hz_dt (
cal_dt1 date DEFAULT NULL,
period_id bigint(20) DEFAULT NULL,
from_date date DEFAULT NULL,
to_date date DEFAULT NULL,
SORT KEY __UNORDERED (),
KEY hz__cal_dt1 (cal_dt1) USING HASH,
KEY hz__pcal_dt1 (period_id,cal_dt1) USING HASH
, SHARD KEY ()
) AUTOSTATS_CARDINALITY_MODE=INCREMENTAL AUTOSTATS_HISTOGRAM_MODE=CREATE AUTOSTATS_SAMPLING=ON SQL_MODE='STRICT_ALL_TABLES';

Profile:
I cannot paste the full profile here because of the post size limit; please let me know if there is a way to share it.
Here is the part of the profile with the ShuffleGroupBy operation:

Hi, you can avoid a ShuffleGroupBy by having the same shard key on the tables you want to join. SingleStore stores data in partitions, and the partition for each row is chosen based on the shard key. If two tables have different shard keys and you join them, the matching data "lives" in different partitions, so SingleStore has to move data between partitions to perform the join, which is an expensive operation. Try to pick the same shard key for tables that are frequently joined together.
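
To make that concrete, here is a minimal sketch with two hypothetical tables (fact_sales and dim_catalog are made-up names, not your actual schema). Both sides are sharded on the join column, so matching rows live in the same partition, the join can run locally on each leaf, and only partial aggregates need to be combined on the aggregator:

Example:

CREATE TABLE fact_sales (
catalog_internal_id bigint NOT NULL,
cal_dt date,
purchase_spent decimal(19,3),
SORT KEY (cal_dt DESC),
SHARD KEY (catalog_internal_id)
);

CREATE TABLE dim_catalog (
child_id bigint NOT NULL,
parent_id bigint,
SHARD KEY (child_id)
);

-- The join key equals the shard key on both sides, so no rows have to move
-- between partitions for the join; each partition computes its partial sums.
SELECT d.parent_id, sum(f.purchase_spent) as purchase_spent
FROM fact_sales f
JOIN dim_catalog d ON d.child_id = f.catalog_internal_id
GROUP BY d.parent_id;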

To recap: analyzing the slow query from the B/E app identified a costly ShuffleGroupBy operation. Adding the WITH(LEAF_PUSHDOWN=TRUE) hint did not yield a significant improvement, and the profile messages indicate that at least one join does not meet the requirements for pushdown to the columnstore scan and that the join key is not a single integer column. To address the slowness, the join conditions and GROUP BY need to meet the criteria for efficient execution on columnstore tables, which in practice means aligning the join keys with the shard keys of the tables involved.
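
One possible next step, sketched only and not verified against this workload: reshard x_tmp_hz_cv_r_sales_inv_store_by_channel_all on catalog_internal_id alone, so the key used by both catalog joins is a single integer column that matches the shard key. This assumes the temp table can be rebuilt and that your SingleStore version accepts key definitions in CREATE TABLE ... AS SELECT; otherwise create the new table explicitly and INSERT ... SELECT into it. The _v2 name is just a placeholder.

Example:

-- Sketch: reshard the fact table on the single integer join column.
CREATE TABLE x_tmp_hz_cv_r_sales_inv_store_by_channel_all_v2 (
SHARD KEY (catalog_internal_id),
SORT KEY (cal_dt DESC)
) AS
SELECT *
FROM x_tmp_hz_cv_r_sales_inv_store_by_channel_all;

Ideally the other join input, cv_r_all_parents_full_view (or its underlying table, if it is a view), would be sharded on child_id as well so the catalog joins become fully local. After the change, re-run PROFILE and check whether the ShuffleGroupBy disappears from the plan.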