Database systems are becoming more and more specialized, with many applications relying on an increasingly complex web of products that excel at challenging workloads in a particular domain but cannot address general-purpose data needs. In contrast, we believe it is possible to design a database that can satisfy a breadth of requirements for transactional and analytical workloads, with world-class performance across both OLTP and OLAP. That database is SingleStore.

SingleStore runs demanding production workloads for some of the world's largest financial, telecom, high-tech and energy companies. These customers use SingleStore to run a breadth of workloads across their organizations, often replacing two or three different databases with SingleStore.

Our results demonstrate that SingleStore achieves state-of-the-art performance on both TPC-H and TPC-C, the industry-standard OLAP and OLTP benchmarks. SingleStore can perform as well as specialized operational databases on TPC-C, and as well as specialized data warehouses on TPC-H. We ran both benchmarks on SingleStore's universal storage, and the results demonstrate SingleStore's unification of world-class transactional and analytical performance in the same database and the same tables.

Benchmark results

We used benchmarks derived from the industry-standard TPC-H and TPC-C benchmarks to evaluate SingleStore compared to other leading cloud databases and data warehouses. The results below show SingleStore achieves leading-edge performance on both TPC-H, an OLAP benchmark, and TPC-C, an OLTP benchmark.

We run these benchmarks to demonstrate leading performance on industry-standard benchmarks that are well understood and easy to compare. As discussed in our blog post The TPC-DS Benchmarking Showdown - A SingleStore POV, we believe these benchmarks do not show the full breadth of capabilities customers may demand for their modern workloads. But they're still among the best standardized ways the industry currently has to compare performance across database systems. And with the elimination of the DeWitt Clause from our contract, SingleStore is committed to making these benchmark results completely available and transparent.

We compared SingleStore with three other popular, state-of-the-art products: two cloud data warehouses we’ll refer to as CDW1 and CDW2, and a cloud operational database we’ll refer to as CDB. Let's start with a summary of the results across both TPC-H and TPC-C:

| Product | TPC-H median runtime (sec) | TPC-H geomean cost (cents) | TPC-C throughput (tpmC) at 1k warehouses | TPC-C throughput (tpmC) at 10k warehouses |
| --- | --- | --- | --- | --- |
| CDW1 | 26.03 | 91.75 | Not supported | Not supported |
| CDW2 | 24.10 | 91.85 | Not supported | Not supported |
| CDB | Did not finish within 24 hours | Did not finish within 24 hours | 12,582 | Not tested |

Summary of TPC-H and TPC-C results. TPC-H: 10TB scale factor, cold runtimes, lower is better. TPC-C: higher is better, up to the limit of 12.86 tpmC/warehouse

As the results show, SingleStore had excellent performance and cost-performance on the analytic benchmark TPC-H compared to the cloud data warehouses, and on the transactional benchmark TPC-C compared to CDB. The main point of this comparison is to demonstrate that SingleStore delivers state-of-the-art performance on both analytical and transactional workloads, while CDW1, CDW2 and CDB can only run well on either TPC-C or TPC-H, not both. CDW1 and CDW2 only support data warehousing and cannot run TPC-C. CDB can run both benchmarks, but it performs orders of magnitude worse than the cloud data warehouses on TPC-H. Additionally, CDB offers limited scalability on OLTP workloads — previous results show its performance does not scale well to much larger numbers of warehouses.

This chart summarizes the performance on both TPC-C and TPC-H. On this chart, we show the throughput for TPC-C (tpmC, as defined by TPC-C) and the queries per dollar for TPC-H (calculated as 100/geomean query cost in cents). The higher the number, the better.
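As a quick sketch of the queries-per-dollar arithmetic described above (the function name is ours, not from the benchmark harness):

```python
def queries_per_dollar(geomean_cost_cents: float) -> float:
    # 100 cents per dollar divided by the geomean cost per query in cents
    # gives the number of queries you can run for one dollar.
    return 100.0 / geomean_cost_cents

# Using CDW1's 91.75-cent geomean cost from the summary table above:
print(round(queries_per_dollar(91.75), 2))  # ~1.09 queries per dollar
```

A lower geomean cost per query directly translates into more queries per dollar, which is why the chart uses this as its "higher is better" analytical metric.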

How we tested

We ran both benchmarks on SingleStore's universal storage. That is, both benchmarks were run using the same unified table storage, which is the default out-of-the-box configuration (we did not force rowstore vs. columnstore table storage). This is noteworthy because our results show that SingleStore's universal storage is able to deliver state-of-the-art performance on both OLTP and OLAP workloads.

We used indexes, sort keys and shard keys appropriate for each benchmark, and used similar ones across all products where those options were available. We've included setup instructions, schemas, data loading commands and queries for SingleStore at our GitHub repository.

Our benchmark runs used the schema, data and queries as defined by the TPC. However, they do not meet all the criteria for official TPC benchmark runs, and are not official TPC results.

We ran the benchmarks in January 2022 on the latest software versions and hardware instance types available from each database vendor — for SingleStore, this was version 7.6.7.


We measured the performance of SingleStore compared to two popular, state-of-the-art data warehouses, CDW1 and CDW2, on TPC-H, an industry-standard OLAP benchmark at the 10TB scale factor.

We ran comparisons on public cloud instances with cluster sizes that were chosen to be as similar in price as possible. The costs of the cluster configurations we used are shown in the appendix.

We measured the runtime of one cold run of each query, and then measured the average runtime of three warm runs of each query, with results caching disabled. The warm runs allow for query compilation and data caching.
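A minimal sketch of this measurement protocol, assuming a hypothetical `execute` callback that stands in for a real database client:

```python
import time

def run_query(execute, query, warm_runs=3):
    """Return (cold_runtime, avg_warm_runtime) in seconds for one query."""
    # One cold run: includes query compilation and cold data reads.
    start = time.perf_counter()
    execute(query)
    cold = time.perf_counter() - start

    # Several warm runs: compilation and data caching have already happened,
    # so these measure steady-state execution (results caching stays disabled).
    warm_times = []
    for _ in range(warm_runs):
        start = time.perf_counter()
        execute(query)
        warm_times.append(time.perf_counter() - start)
    return cold, sum(warm_times) / len(warm_times)
```

In practice `execute` would submit the query over the vendor's client driver; the key point is that the cold number is a single first run while the warm number averages repeated runs.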

We computed the cost of each query by multiplying the runtime by the price per second of the product configuration. We then computed the geomean of the runtime and cost results across all the queries.
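The aggregation step can be sketched as follows; the runtimes and cluster price below are illustrative values, not measured results:

```python
import math

def geomean(values):
    # Geometric mean: nth root of the product, computed in log space
    # for numerical stability with many values.
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Illustrative per-query runtimes (seconds) and an illustrative
# cluster price of $128.00/hour, converted to cents per second.
runtimes = [12.0, 45.0, 8.5, 30.0]
price_cents_per_sec = 128.00 * 100 / 3600

# Cost of each query = runtime * price per second.
costs = [t * price_cents_per_sec for t in runtimes]

print(round(geomean(runtimes), 2))
print(round(geomean(costs), 2))
```

Because the per-second price is a constant factor, the geomean cost is simply the geomean runtime scaled by that price; reporting both lets readers compare raw speed and price-performance separately.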

When we tested the operational database CDB on the largest available size, it performed orders of magnitude worse: most queries failed to complete within 1 hour, and a single run of all the benchmark queries failed to complete within 24 hours (compared to about 15 minutes for SingleStore, CDW1 and CDW2).

The results, shown in the chart below, demonstrate that SingleStore achieves competitive performance on TPC-H compared to both leading-edge, specialized cloud data warehouses. A breakdown of results by query can be found in the appendix.


We compared SingleStore against CDB, a popular, state-of-the-art cloud operational database, on TPC-C, an industry-standard OLTP benchmark. All SingleStore results were on our columnar-based universal storage, and were competitive with CDB, which is a rowstore-based operational database. We ran comparisons on public cloud instances with cluster sizes that were chosen to match in vCPU count.

Note that the cloud data warehouses CDW1 and CDW2 do not support running TPC-C. As an example, enforced unique constraints are one feature that is not supported in the data warehouses but is supported in SingleStore (in all table types, including our universal storage). CDW1 and CDW2 only support informational, unenforced unique constraints: their systems cannot enforce the integrity of the constraint, and only use it as an informational hint to the query planner.

We measured the throughput (tpmC), as defined by the TPC-C benchmark. We compared against results previously published by CDB. Note that TPC-C specifies a maximum possible tpmC of 12.86 per warehouse, and both SingleStore and CDB are essentially reaching this maximum at a data size of 1,000 warehouses.
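The ceiling arithmetic is simple to sketch, using CDB's 12,582 tpmC figure from the summary table (the function name is ours):

```python
TPMC_MAX_PER_WAREHOUSE = 12.86  # theoretical tpmC ceiling per the TPC-C spec

def pct_of_max(tpmc: float, warehouses: int) -> float:
    # Throughput as a percentage of the spec-defined maximum for this data size.
    return 100.0 * tpmc / (warehouses * TPMC_MAX_PER_WAREHOUSE)

# CDB's published result at 1,000 warehouses:
print(round(pct_of_max(12_582, 1_000), 1))  # ~97.8% of the theoretical maximum
```

Once a system is near 100% of this ceiling at a given warehouse count, the only way to demonstrate more throughput is to grow the data size, which is why the 10,000-warehouse run below matters.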

We also tested SingleStore on TPC-C at a data size of 10,000 warehouses, and SingleStore delivers excellent scale-out performance — performance on larger TPC-C sizes scales linearly as we scale out the SingleStore cluster size. On the other hand, CDB offers limited scalability on OLTP workloads — previous results show its performance does not scale well to much larger numbers of warehouses.

| Product | Data size (warehouses) | Cluster size (leaf vCPUs) | Throughput (tpmC) | Throughput (% of max) |
| --- | --- | --- | --- | --- |

TPC-C results (higher is better, up to the limit of 12.86 tpmC/warehouse)


Here's the summary of both the TPC-C and TPC-H results again:

These results demonstrate that SingleStore achieves state-of-the-art performance competitive with leading operational databases, as well as analytical databases on benchmarks specific to each workload. SingleStore can meet workload requirements that previously required using multiple specialized database systems.

Ready to see these results for yourself? Get started with your free SingleStore trial today. If you're interested in running and experimenting with these benchmarks on SingleStore yourself, we've included setup instructions, schemas, data loading commands and queries at our GitHub repository.

Appendix - TPC-H results breakdown

See the TPC-H section above for the details of what results we measured.

Details of runtimes by query (runtimes are in seconds):

| | Cold: SingleStore | Cold: CDW1 | Cold: CDW2 | Warm: SingleStore | Warm: CDW1 | Warm: CDW2 |
| --- | --- | --- | --- | --- | --- | --- |
| Geomean runtime (sec) | 26.03 | 25.81 | 25.36 | 19.33 | 20.89 | 15.05 |
| Median runtime (sec) | 22.83 | 26.03 | 24.10 | 16.31 | 22.26 | 17.13 |
| Cluster price per hour | \$124.80 | \$128.00 | \$130.40 | \$124.80 | \$128.00 | \$130.40 |
| Geomean cost (cents) | 90.23 | 91.75 | 91.85 | 67.02 | 74.27 | 54.53 |