You did it again: you just blinked. According to the Harvard Database of Useful Biological Numbers, a blink of a human eye lasts roughly 100 milliseconds. That's not a long period of time, but it's a relative eternity compared to SingleStore use cases such as:
  • A large ride-sharing company executing 40,000 concurrent transactions with an SLA of two milliseconds
  • Real-time fraud detection for a top-tier bank, running several analytic queries within a 50-millisecond service level agreement (SLA) at high availability
  • A major internet company performing 12 million upserts per second (1,000 milliseconds)
Those are just a few examples shared by SingleStore’s chief product officer, Jordan Tigani, at a recent DATAcated virtual event. His presentation, “Data Intensive Applications in Financial Services,” dove into the history of the need for speed in the financial industry, and how SingleStore is uniquely positioned to execute data-intensive tasks at almost impossible-to-imagine speeds. Here’s a recap of his talk. 
From “Flash Boys” to today’s data-intensive norm
Jordan kicked off his talk with a reference to Michael Lewis’s 2014 book, “Flash Boys,” a great take on the need for speed in the financial industry. The first part of the book chronicles how a company called Spread Networks spent $300M to build a fiber-optic route from a data center in New Jersey to the Chicago Mercantile Exchange. Their goal was to shave four milliseconds off the delay, a money-minting competitive advantage.
The ability to make decisions about data faster than the competition is at the heart of the trading game. Data constraints don’t just show up in algorithmic trading, however. There are a wide range of problems in financial services that boil down to fast, reliable access to data for making decisions. These include:
  • Building rich customer experiences that provide real-time recommendations 
  • Fraud detection during a card transaction, “on-the-swipe”
  • Portfolio management: customers want to see not just what is going on in their account, but what is going on in the market right now. 
All of these applications share a common characteristic: they are data-intensive applications, defined by Martin Kleppmann in his book “Designing Data-Intensive Applications” as “applications where the primary bottleneck is data,” because the data is too large, too complex, or moving too quickly. 
Jordan explained how this is exactly the type of problem seen over and over in financial services apps, where decisions must be made quickly, or information delivered to users quickly. More importantly, he said, this phenomenon is becoming widespread. “Data-intensive applications show up everywhere,” he said. “If you consider most of the rich, modern applications of the recent era, most of them derive much of their value from their ability to incorporate large, complex, fast-moving data to provide their core user experiences.”
The last part is key: data intensity isn’t just an add-on; it is essential to an organization’s ability to provide the level of service that is at the core of its main product.

What do data-intensive applications require?

Jordan broke down the infrastructure requirements for data-intensive apps as follows:
  • They need to be able to get data in quickly and reliably. If you need to make a decision, you had better be operating on the latest data, or you’re going to be in trouble.
  • Even if you’re not doing high-frequency trading, latency is important. If you have a user waiting for information, they can detect latency of tens or hundreds of milliseconds. For years, industry studies have shown that even small increases in response times have a big impact on user attention and ultimately revenue.
  • Requests don’t come in uniformly throughout the day, so you need to be able to handle a lot of requests at once. When there is a big event in the market, you don’t want your data systems to get bogged down.
The systems traditionally used to process data are not well suited for data-intensive applications: transactional databases struggle with analytics at scale, and data warehouses struggle to achieve the low latency needed for online serving. Both of these architectures have inherent limits. 
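As a rough illustration of the latency and concurrency requirements above, here is a minimal sketch of how one might measure median and tail (p99) latency under concurrent load. The `run_query` function is a stub standing in for a real database call, and the request counts are illustrative assumptions, not SingleStore specifics or benchmarks from the talk.

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def run_query():
    # Stub standing in for a real database call (e.g., a fraud-scoring query).
    time.sleep(0.001)  # simulate ~1 ms of server-side work

def measure_latencies(num_requests=200, concurrency=50):
    """Fire num_requests queries across `concurrency` workers, recording each latency in ms."""
    latencies = []

    def timed_call():
        start = time.perf_counter()
        run_query()
        latencies.append((time.perf_counter() - start) * 1000.0)

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        for _ in range(num_requests):
            pool.submit(timed_call)
    # Exiting the `with` block waits for all submitted calls to finish.
    return latencies

lat = measure_latencies()
p99 = statistics.quantiles(lat, n=100)[98]  # 99th percentile
print(f"median={statistics.median(lat):.2f} ms  p99={p99:.2f} ms")
```

The gap between median and p99 is the number to watch: a user-facing application with an SLA has to hold the *tail* of this distribution under the budget, not just the average, especially during the bursty traffic spikes described above.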

SingleStore Overview

In his talk Jordan explained the backstory of SingleStore, which was designed as an in-memory database that could do extremely low latency, sub-millisecond transactions. “Over time, we added an analytics-optimized storage layer, which added the ability to scan trillions of rows per second,” he said. “We combined the two into a single store, which is how we came up with the name ‘SingleStore’ for the company.” 
SingleStore is one of the few databases that, on its own, can meet the needs of the largest data-intensive applications. It can scale to handle myriad concurrent users because it is incredibly efficient with resource usage; one SingleStore customer found that it uses less than 1% of the CPU cores of a major cloud data warehouse. Other examples include the three at the beginning of this blog. 
Oh, and “Flash Boys”? The main story in that book is about a company called IEX that aimed to build a new type of exchange. Their data services arm, IEX Cloud, is using SingleStore to provide richer data analysis and reporting capabilities to consumers. You can learn more by watching this webinar and reading the eBook, “Ludicrously Fast Analytics: A 5-Step Guide for Developers of Modern Applications.” 
Watch the recorded version of the webinar here.
To keep up with how SingleStore is unlocking value in a wide range of industries by enabling data-intensive applications, follow us on Twitter @singlestore.