Performance analytics are critical to modern intelligent applications. Here’s an initial competitive analysis that compares SingleStore to ClickHouse.

Given the overlap in the target use cases of SingleStore and ClickHouse, we wanted to provide objective performance data to help industry users understand where each system excels and how to leverage SingleStore to its best advantage. We were stunned by ClickHouse's poor performance on join queries in particular, which are pervasive in most analytical use cases.
Why this comparison matters
SingleStore delivers the performance you need for enterprise AI: a simplified data architecture that provides single-shot retrieval for AI applications, with real-time insights on operational data across transactions, analytics, search and more. ClickHouse identifies its key use cases as real-time analytics, machine learning and gen AI, data warehousing, and observability.
Our testing methodology
We chose the TPC-DS dataset as the basis for our analysis. TPC-DS exercises many features needed by modern analytical workloads and uses a star schema. It is based on a realistic use case and aims to provide relevant, objective performance data to industry users.
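For readers less familiar with TPC-DS, its star schema centers large fact tables (such as store_sales) on small dimension tables (such as date_dim and item). The following is a minimal sketch of that query shape, written for illustration here rather than taken from the benchmark:

-- Star-schema shape: a large fact table joined to small dimension
-- tables, then aggregated.
select
    d.d_year,
    i.i_category,
    sum(ss.ss_net_profit) as total_profit
from store_sales ss
join date_dim d on ss.ss_sold_date_sk = d.d_date_sk  -- calendar dimension
join item i on ss.ss_item_sk = i.i_item_sk           -- product dimension
where d.d_year = 2000
group by d.d_year, i.i_category;

TPC-DS query 3, listed in the appendix, follows this same pattern.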
For the analysis, we used a mix of selected TPC-DS queries and customized queries. TPC-DS is complicated: its 99 queries use joins, common table expressions and other features common in modern analytical systems. Because of this complexity, and because ClickHouse cannot run all of the queries, we selected a subset of the queries and added some additional ones using the TPC-DS schema.
For our testing environment, we used a 1TB TPC-DS dataset with ClickHouse Cloud running on 2x(59 vCores, 236 GB RAM) and SingleStore Helios® on an S-8 configuration.
Round 1: Learning some hard truths
For our first round of experiments, we ran a set of 10 queries. These queries were designed to test a variety of query shapes, and included multiple types of aggregations and filters.
All queries except one included a join, and we learned that ClickHouse was surprisingly slow on joins: thousands of times slower than SingleStore on many queries.
The results from round one were at first shocking, but then became … boring. The graphs all looked similar to the following graph, which shows SingleStore vs. ClickHouse on TPC-DS query 3 on a 1TB dataset. To be very clear, the following graph displays results from untuned ClickHouse with the TPC-DS schema as specified in the benchmark, and the query as written in the benchmark.

The pattern was consistent across queries: SingleStore dramatically outperformed ClickHouse, often by orders of magnitude. These weren't minor differences — they represented fundamental performance gaps that significantly impact real-world applications.
Round 2: Giving ClickHouse every advantage
So, take two. We again tested system performance across a variety of query shapes, aiming for a solid comparison of where SingleStore and ClickHouse each shine and a real understanding of the differences between the two systems.
This time, we got rid of most of the joins: we included one small join (a join with a very small table) and left in one larger join. Often you simply need a join, such as when privacy-sensitive data must be separated into its own table, when you need the flexibility of accessing data from different perspectives, or when you want the ability to update dimension data without rewriting a whole fact table.
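To make the privacy case concrete, here is a hypothetical sketch; the table and column names are illustrative, not part of TPC-DS or our benchmark:

-- Hypothetical schema: the fact table carries only a surrogate key,
-- while sensitive attributes live in a separate, access-controlled table.
create table order_facts (
    order_id bigint,
    customer_sk bigint,          -- surrogate key; no PII here
    order_total decimal(10, 2)
);

create table customer_pii (
    customer_sk bigint,
    email varchar(255)           -- sensitive data isolated for governance
);

-- Any analytics that needs the sensitive attribute still requires a join.
select p.email, sum(f.order_total) as lifetime_value
from order_facts f
join customer_pii p on f.customer_sk = p.customer_sk
group by p.email;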
In addition, we began to tune ClickHouse, as it was clearly necessary. As a first step, we rewrote the query to reorder the joins. Rewriting queries to reorder joins is possible in a benchmark, but it is unrealistic in a workload where queries are automatically generated by business intelligence tools. This optimization did significantly improve ClickHouse performance.
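To illustrate the general shape of such a rewrite (using query 3 from the appendix with placeholder constants; this is not our exact tuned query text), the comma-style joins are replaced with explicit joins ordered so the most selective filter is applied first:

-- Schematic join reorder of TPC-DS query 3: drive the plan with the
-- highly selective item filter, then join the date dimension.
select
    dt.d_year,
    item.i_brand_id brand_id,
    item.i_brand brand,
    sum(store_sales.ss_ext_sales_price) sum_agg
from store_sales
join item on store_sales.ss_item_sk = item.i_item_sk
    and item.i_manufact_id = 436   -- placeholder for [MANUFACT]
join date_dim dt on dt.d_date_sk = store_sales.ss_sold_date_sk
    and dt.d_moy = 11              -- placeholder for [MONTH]
group by dt.d_year, item.i_brand, item.i_brand_id
order by dt.d_year, sum_agg desc, brand_id
limit 100;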
The results were still telling
The following graph shows SingleStore vs. untuned ClickHouse and our first pass at tuning ClickHouse. Even with these tuning efforts, SingleStore maintained a significant performance advantage in the large join scenario.


The small join results showed that even with small-table joins (theoretically a stronger area for ClickHouse), SingleStore demonstrated superior performance.
Key findings and critical implications
Our conclusions so far:
- ClickHouse requires tuning to perform well. We were surprised at the poor performance of untuned ClickHouse.
- Untuned SingleStore is much faster than untuned ClickHouse. Keep reading to see if we can make ClickHouse match SingleStore's performance.
These findings reveal several critical insights that go beyond simple performance metrics:
- Operational complexity. ClickHouse's requirement for extensive tuning presents a significant operational burden. Achieving acceptable performance requires deep expertise and ongoing optimization efforts, increasing total cost of ownership.
- Out-of-the-box performance. SingleStore's superior untuned performance suggests lower operational complexity and faster time-to-value for organizations implementing analytical solutions.
- Real-world join requirements. The join performance gap isn't just a benchmark artifact. Modern applications frequently require joins for privacy-sensitive data separation, flexible data access patterns, dynamic dimension updates and maintaining normalized data or star schema models for integrity, storage efficiency and ease of understanding by business analysts.
- Configuration complexity. ClickHouse functions more like a database framework than a traditional database system, with numerous configuration options that lead to dramatically different performance characteristics between deployments; see the example settings after this list.
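To give a flavor of that configuration surface, here are a few illustrative ClickHouse session settings of the kind a tuning pass touches; the values are examples only, not the configuration used in our benchmark runs:

-- Illustrative ClickHouse session settings; example values only.
SET join_algorithm = 'parallel_hash';  -- choose a join implementation explicitly
SET max_threads = 32;                  -- cap per-query parallelism
SET max_memory_usage = 100000000000;   -- per-query memory limit, in bytes

Each of these can change query behavior dramatically, and defaults that suit one workload can be poor for another.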
Why these results matter for modern analytics
The performance differences we observed have direct implications for organizations building AI and analytics applications. Join operations are fundamental to real-world analytical workloads, and ClickHouse's limitations in this area force organizations to choose between query performance and data architecture best practices — a compromise that shouldn't be necessary in modern analytical systems.
Furthermore, the tuning requirements for ClickHouse mean organizations must invest significant resources in database optimization and data transformations. This work complicates pipelines, adds latency, and diverts focus and energy from delivering business value through analytics applications.
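A common workaround is to pre-join dimensions into a wide, denormalized table. Here is a sketch of that kind of pipeline step, reusing the TPC-DS tables from our benchmark (the flattened table name is ours, for illustration):

-- Denormalization pushed into the pipeline when joins are too slow:
-- dimension attributes are pre-joined into a wide fact table.
-- Any later change to a dimension now means rewriting this table.
create table store_sales_flat as
select
    ss.*,
    d.d_year,
    d.d_moy,
    i.i_brand,
    i.i_category
from store_sales ss
join date_dim d on ss.ss_sold_date_sk = d.d_date_sk
join item i on ss.ss_item_sk = i.i_item_sk;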
Looking forward
Our work continues and will be described in future blogs. Stay tuned for more on single-table aggregate results, additional tuning experiments and more comprehensive analysis of different query patterns and optimization strategies.
The bottom line
While ClickHouse has earned recognition in the analytics space, our testing reveals significant performance gaps and operational complexity challenges that organizations should carefully consider. SingleStore delivers superior out-of-the-box performance across diverse workload types while maintaining the simplicity and reliability that enterprises need for production AI and analytics applications.
For organizations evaluating analytical databases, these findings suggest SingleStore offers a more robust, user-friendly solution that performs well without extensive tuning, allowing teams to focus on delivering business value rather than database optimization.
The evidence is clear: when it comes to real-world analytical workloads, SingleStore provides the performance and operational simplicity that modern enterprises require for their AI and analytics initiatives.
Want to test for yourself how SingleStore outperforms ClickHouse? Start free today.
Benchmarking performed by benchANT.
Appendix
Methodology
These results used a 1TB TPC-DS data set with the schema as defined by TPC-DS. The systems used were SingleStore Helios S-8 and ClickHouse Cloud 2x(59 vCores, 236 GB RAM).
For round one, the workload was a mix of ten queries, including TPC-DS query 3 (listed below) and analytics queries inspired by TPC-DS queries. The queries were run in single-threaded mode for a one-hour runtime.
For round two, the workload was a mix of three queries, including the large and small joins described above. The queries were run in single-threaded mode for a one-hour runtime.
Queries
Figure 1 — TPC-DS query 3
define AGGC = text({"ss_ext_sales_price",1},{"ss_sales_price",1},{"ss_ext_discount_amt",1},{"ss_net_profit",1});
define MONTH = random(11,12,uniform);
define MANUFACT = random(1,1000,uniform);
define _LIMIT = 100;

select
    dt.d_year,
    item.i_brand_id brand_id,
    item.i_brand brand,
    sum([AGGC]) sum_agg /* sum(ss_ext_sales_price) sum_agg */
from
    date_dim dt,
    store_sales,
    item
where
    dt.d_date_sk = store_sales.ss_sold_date_sk
    and store_sales.ss_item_sk = item.i_item_sk
    and item.i_manufact_id = [MANUFACT]
    and dt.d_moy = [MONTH]
group by
    dt.d_year,
    item.i_brand,
    item.i_brand_id
order by
    dt.d_year,
    sum_agg desc,
    brand_id
limit 100;
Figure 2 — Large join, modified TPC-DS query 72
define YEAR = random(1998, 2002, uniform);
define _LIMIT = 100;
define BP = text({"1001-5000",1},{">10000",1},{"501-1000",1});
define MS = dist(marital_status, 1, 1);

select
    i_item_desc,
    w_warehouse_name,
    d1.d_week_seq,
    sum(case when p_promo_sk is null then 1 else 0 end) no_promo,
    sum(case when p_promo_sk is not null then 1 else 0 end) promo,
    count(*) total_cnt
from catalog_sales
join inventory on (cs_item_sk = inv_item_sk)
join warehouse on (w_warehouse_sk = inv_warehouse_sk)
join item on (i_item_sk = cs_item_sk)
join customer_demographics on (cs_bill_cdemo_sk = cd_demo_sk)
join household_demographics on (cs_bill_hdemo_sk = hd_demo_sk)
join date_dim d1 on (cs_sold_date_sk = d1.d_date_sk)
join date_dim d2 on (inv_date_sk = d2.d_date_sk)
join date_dim d3 on (cs_ship_date_sk = d3.d_date_sk)
left outer join promotion on (cs_promo_sk = p_promo_sk)
left outer join catalog_returns on (cr_item_sk = cs_item_sk and cr_order_number = cs_order_number)
where d1.d_week_seq = d2.d_week_seq
    and inv_quantity_on_hand < cs_quantity
    and d3.d_date > d1.d_date + 5
    and hd_buy_potential = '[BP]'
    and d1.d_year = [YEAR]
    and cd_marital_status = '[MS]'
group by i_item_desc, w_warehouse_name, d1.d_week_seq
order by
    total_cnt desc,
    i_item_desc,
    w_warehouse_name,
    d_week_seq
limit 100;
Figure 3 — Small join (loosely based on TPC-DS query 17)
define SDT = random(2450816,2452642,uniform);

select
    s_store_id,
    count(ss_quantity) as store_sales_quantitycount,
    avg(ss_quantity) as store_sales_quantityave,
    stddev_samp(ss_quantity) as store_sales_quantitystdev,
    stddev_samp(ss_quantity)/avg(ss_quantity) as store_sales_quantitycov,
    sum(ss_sales_price) as store_sales_spricesum,
    avg(ss_sales_price) as store_sales_spriceave,
    avg(ss_wholesale_cost) as store_sales_wscostave,
    sum(ss_list_price) as store_sales_lpricesum,
    avg(ss_list_price) as store_sales_lpriceave,
    sum(ss_ext_tax) as store_sales_taxsum,
    max(ss_ext_sales_price) as store_sales_espricemax,
    min(ss_ext_list_price) as store_sales_elpricemin
from
    store_sales,
    store
where
    s_store_sk = ss_store_sk AND
    ss_sold_date_sk = [SDT]
group by s_store_id;