What pain points does SingleStore solve to help eliminate database sprawl? Rick Negrin, Field CTO at SingleStore, explained how a Tier 1 bank, Hulu and Sony PlayStation Network solved their infrastructure problems with SingleStore, the modern database for data-intensive applications. Part 2 of a two-part series.
In Part 1 of this blog, a recap of SingleStore’s recent webinar “Eliminating Database Sprawl: Three Customer Case Studies,” Forrester vice president and principal analyst Noel Yuhanna walked through some of the ways database sprawl can jeopardize companies. Over decades of building enterprise applications tightly coupled with their underlying databases, companies can amass thousands of databases. He wrapped up with a recommendation to “look at a modern database platform to support new and emerging business requirements, a platform that goes beyond the traditional database functions.”
This recap from Rick Negrin, Field CTO at SingleStore, dives into three customer use cases, explaining how they avoided painful database sprawl. These companies jettisoned their patchwork of databases and now use SingleStore, the modern database for powering data-intensive applications.
Fraud detection at a Tier 1 bank
Our first example describes how one of the world’s largest banks uses SingleStore to execute “on the swipe” fraud detection, analyzing payment card transactions in real time. As I mentioned in the webinar, when a customer swipes or taps their credit card, or pays with Apple Pay or any other app, the bank wants to run the transaction through a complex analytic model, determine whether it’s fraudulent, and then approve or deny the transaction in real time. If the cardholder notices more than about half a second of delay, the customer experience degrades quickly.
Against a time budget of about 50 milliseconds, the Tier 1 bank needed to execute more than 70 queries to pull data into its fraud analytic model. The problem was that the bank couldn't meet this customer experience requirement, because the system was built from more than 10 different open source technologies. The bank tried to stitch everything together and, not surprisingly, the system could not deliver an answer anywhere near 50 milliseconds; it took tens of seconds.
To catch card fraud as it occurred, and give demanding customers a positive experience, it was essential that the bank move to a truly real-time fraud detection model. But no matter how much effort they put into optimizing their open source platform, it could not get there.
After recognizing that the system was too complex and had multiple points of failure, the bank replaced it with a combination of Kafka and SingleStore. In summary: by bringing the transaction to the model, and pulling the historical data and other data points out of SingleStore, the bank now runs the analytics in tens of milliseconds, delivering real-time fraud detection and a great customer experience.
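As a rough illustration of the pattern, each of the 70-plus feature queries is a fast lookup or aggregation over the cardholder's history. The sketch below is hypothetical; the table and column names are illustrative, not the bank's actual schema:

```sql
-- Hypothetical feature query: recent activity on the card being swiped.
-- One of many such lookups feeding the fraud model within the 50 ms budget.
SELECT COUNT(*)    AS txns_last_hour,
       SUM(amount) AS spend_last_hour
FROM card_transactions
WHERE card_id  = ?
  AND txn_time > NOW() - INTERVAL 1 HOUR;
```

Because each query is an indexed lookup over one card's history, dozens of them can run concurrently and still fit inside the overall latency budget.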
Hulu radically simplifies experience monitoring
Everyone knows the video streaming service Hulu. One of the key things any provider wants to know is how well the streaming experience is going right now. Hulu has millions of customers, so that means there's a lot of data being pulled in at any given moment.
Hulu processes roughly two billion rows of data an hour, pulling in metrics from all of its user platforms (browsers, mobile apps and streaming devices such as Roku and Fire TV Stick) to understand how to deliver the best customer experience. Hulu originally used a combination of Druid and Storm for data aggregation and analytics. It was one of the biggest Druid installations in the world, and Hulu used it to analyze traffic and predict regional issues. It was critical to avoid the dreaded buffering problem that causes a bad customer experience.
Hulu struggled with the Druid-Storm system, which eventually failed during the Super Bowl. The outage was especially unfortunate because while the system was down, Hulu was completely blind to what was going on: they could not bring the data in and analyze it fast enough.
After that event, Hulu wanted to ensure such a catastrophic outage never occurred again. They started looking for an alternative technology that would meet their visibility requirements, be more stable and drive down total cost of ownership (TCO).
In the webinar, we also detailed the data-intensive application’s turnaround. With SingleStore as the centerpiece of the system, Hulu also streamlined its data ingestion. Before SingleStore, ingestion was a six-step process that relied on a collection of ETL tricks to get the data in; the Druid-Storm system was complicated, and it was a bottleneck that required many workarounds. Now Hulu streamlines that processing with SingleStore's pipeline feature, a single command in the database that automatically brings in the data, dramatically reducing the platform’s complexity. Furthermore, when Hulu moved to SingleStore, they found they could cut their hardware costs by more than half: they needed only about a quarter of the memory, and much less CPU as well.
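The pipeline feature mentioned above is declared with a single SQL statement. A minimal sketch of what that looks like, assuming a Kafka source; the broker, topic and table names are hypothetical, not Hulu's actual configuration:

```sql
-- Hypothetical SingleStore pipeline: continuously ingest streaming
-- metrics from a Kafka topic directly into a table, replacing a
-- multi-step external ETL process.
CREATE PIPELINE playback_metrics_pipeline
AS LOAD DATA KAFKA 'kafka-broker:9092/playback-events'
INTO TABLE playback_metrics
FIELDS TERMINATED BY ',';

START PIPELINE playback_metrics_pipeline;
```

Once started, the pipeline runs inside the database itself, which is what collapses the earlier six-step ingestion process into one managed component.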
With SingleStore, Hulu essentially created a stream of insight to predict and forecast the quality of service. Instead of reacting without visibility, Hulu has a real-time view into what’s going on across its streaming service. That’s critical in ensuring that the system won’t go down. Hulu now processes roughly a hundred terabytes of performance data a day with a dramatically simplified pipeline and far less technology to manage.
Sony thwarts denial of service attacks
Our last customer example was Sony and its popular gaming platform, the Sony PlayStation Network. With millions of users daily, the PlayStation Network is an enticing target for denial of service (DoS) and other malicious attacks by cyber criminals. A successful denial of service attack is a huge deal for any gaming company; it not only affects the user experience but, at the end of the day, reputation and revenue as well.
Sony needed to quickly crunch through enormous amounts of data to determine whether a DoS attack was occurring. The previous system struggled to ingest data fast enough to meet a real-time speed-to-insight SLA. They were trying to ingest one million rows per second and support ANSI SQL analytics using a combination of Postgres, ElastiCache and DynamoDB, but it wasn't scaling. The need to lash three separate databases into one solution made it clear that they had a data intensity problem: Sony could not get all the features they wanted from any one of those systems.
When Sony moved to SingleStore, they easily met that requirement, ingesting a million events per second while retaining the ability to execute complex SQL queries. Sony achieved some extraordinary benefits with SingleStore: they were looking for improvements in performance and TCO, and got a 100x improvement in performance and a 4x reduction in TCO by moving off the other three data stores onto SingleStore.
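A DoS signal of the kind described here is typically a windowed aggregation over the event stream. A hypothetical ANSI SQL sketch, with illustrative table, column and threshold values rather than Sony's actual schema:

```sql
-- Hypothetical DoS check: flag source IPs whose request rate over the
-- last minute exceeds a threshold. Names and the threshold are illustrative.
SELECT source_ip,
       COUNT(*) AS requests_last_minute
FROM network_events
WHERE event_time > NOW() - INTERVAL 1 MINUTE
GROUP BY source_ip
HAVING COUNT(*) > 100000
ORDER BY requests_last_minute DESC;
```

Running this kind of query continuously against data arriving at a million events per second is exactly the mix of fast ingest and complex SQL that the three-database combination could not sustain.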
Overall, we see that customers are extremely happy and satisfied with SingleStore. They're able to dramatically simplify their architecture, get better performance, drive down TCO by consolidating different systems, and rely on one system to deliver what they need for data-intensive applications.
If you missed it, check out the previous blog in this series, “Eliminating Database Sprawl, Part 1: How to Escape a Slow-moving Car Crash”, which features Forrester vice president and principal analyst Noel Yuhanna. You can watch the entire webinar here, keep up with our latest news on Twitter @SingleStore, and follow our new Twitter channel for developers @SingleStoreDevs.