
Trending
Reducing Energy Consumption Via Real-Time Data
We all know it’s important to conserve energy. Today in the U.S., we’re reminded of that because it’s National Cut Your Energy Costs Day.
Depending on where you live in the world, your habits and the cultural forces around you, this may be easier or harder. You might expect industrialized countries to have the highest per capita consumption rates. In the United States, average electricity consumption is about 1,000 kWh per person per month; in Canada, it’s around 1,200 kWh/month. Compared to countries like France and Germany, where the per capita average is closer to 500 kWh/month, that sounds exorbitant.
But if frugality is part of the national identity and there’s a gas shortage, reducing your energy consumption could even be a fun competition. In Germany, the cuts in natural gas flowing from Russia have become a personal energy challenge that residents have taken up. Citizens are competing to see who can keep the heat off the longest, cheerfully sharing results on social media.
German towns are reducing energy use through various methods like dimming street lights, lowering temperatures in public buildings and turning off hot water in public washrooms. It’s estimated that Germans have managed to reduce gas consumption by about a quarter using these methods.
Simply being more aware of your energy consumption can help you reduce it. In one study, economists demonstrated that the “learning effect” created by having an in-home display with real-time data about your energy use helps reduce consumption in the short term. Smart plugs and home energy monitors provide real-time information to your smartphone to help you identify where those energy vampires are.
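To make that concrete, here is a minimal sketch (not tied to any particular monitor vendor) of how the real-time readings a smart plug or home energy monitor streams to your phone could be scanned for those energy vampires, meaning devices that never stop drawing power. The reading format, the 10-watt threshold and the device names are illustrative assumptions.

```python
from collections import defaultdict

# Illustrative readings a home energy monitor might report: (device, watts),
# sampled periodically. Real monitors stream far more granular data.
readings = [
    ("set_top_box", 14.0), ("game_console", 11.5), ("fridge", 95.0),
    ("set_top_box", 13.8), ("game_console", 11.2), ("fridge", 2.0),
]  # ... one entry per device per sample in practice

IDLE_THRESHOLD_WATTS = 10.0   # assumed cutoff for an "always-on" standby draw
PRICE_PER_KWH = 0.15          # assumed electricity price in $/kWh

by_device = defaultdict(list)
for device, watts in readings:
    by_device[device].append(watts)

for device, samples in by_device.items():
    baseline = min(samples)  # lowest observed draw approximates the standby load
    if baseline >= IDLE_THRESHOLD_WATTS:
        yearly_kwh = baseline * 24 * 365 / 1000
        print(f"{device}: ~{baseline:.0f} W standby "
              f"~ {yearly_kwh:.0f} kWh/year (${yearly_kwh * PRICE_PER_KWH:.0f})")
```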
Smart thermostats, like an Ecobee or Nest, help you proactively control and reduce your energy consumption. But when energy use can’t be avoided, it often can be time shifted.
In the United States, many utilities have introduced programs that encourage their customers to use electricity during off-peak hours. The programs pass on the savings to customers through rebates or reduced electricity rates. On a larger scale, utilities have also digitized their gas and electric distribution networks over the years using Advanced Metering Infrastructure (AMI) to gain operational visibility into energy use, leakage and even fraud.
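To see why shifting load to off-peak hours is worth the trouble, here is a quick back-of-the-envelope sketch. The time-of-use rates and the size of the shiftable load are assumptions for illustration, not any particular utility's tariff.

```python
# Hypothetical time-of-use rates in $/kWh; real tariffs vary by utility and season.
PEAK_RATE = 0.30      # e.g., late afternoon and evening
OFF_PEAK_RATE = 0.12  # e.g., overnight

daily_shiftable_kwh = 4.0   # assumed dishwasher + laundry load per day
days_per_year = 365

peak_cost = daily_shiftable_kwh * PEAK_RATE * days_per_year
off_peak_cost = daily_shiftable_kwh * OFF_PEAK_RATE * days_per_year

print(f"Running at peak:     ${peak_cost:,.0f}/year")
print(f"Shifted to off-peak: ${off_peak_cost:,.0f}/year")
print(f"Savings:             ${peak_cost - off_peak_cost:,.0f}/year")
```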
SingleStore has helped utilities make more accurate energy grid volume predictions and perform proactive grid maintenance to reduce costs and deliver better customer service, as well as run IoT analytics that provide continuous analysis. And regarding those massive cloud datacenters we all depend on, I previously wrote about the energy reduction and sustainability measures being put in place.
Reducing energy use at the personal level in your home is helpful and necessary, but it is not sufficient on its own. Corporations, research institutions and governments around the world are introducing innovations using real-time data to reduce energy use and practice sustainability. The upcoming World Economic Forum meeting this month has sustainability on the agenda, and that organization has been highlighting several remarkable examples.
Here are some of the notable projects leveraging real-time data for sustainability:
- Topolytics is a data analytics company that makes the world’s waste visible, verifiable and valuable through real-time tracking and geospatial analytics. Its WasteMap® platform generates insights for waste producers, recyclers and governments that enable greater materials recovery and drive operational efficiencies.
- Carbon-neutral jet fuel made using sunlight and air could be the future of sustainable aviation fuel. Researchers at ETH Zurich have developed an industrial plant that extracts carbon dioxide (CO2) and water from the air. Solar energy is then used to split these compounds and produce syngas — a mixture of hydrogen and carbon monoxide. This is then further processed to create kerosene and methanol, which can be used as substitutes for traditional fuels.
- Real-time data from mobile applications is being used to determine how well the existing EV charging infrastructure in the U.S. is serving the needs of the population. In this study, sentiment analysis was performed on reviews that EV users posted on various social platforms while they were at charging stations.
- Tietoevry is using real-time sustainability data for real-time action. It has established the Sustainability Data Hub to help its customers report sustainability performance in real time.
Real-time data is helping individuals, companies and governments to reduce energy and put in place sustainability practices. As the use of real-time data grows and becomes more available, ever-increasing demands are placed on it by applications for freshness, completeness, accuracy, granularity, access and analysis.
This creates data intensity and, if not dealt with effectively, results in data infrastructure complexity and adds to the cost of operating real-time applications for these sustainability analytics and real-time energy monitoring use cases.
SingleStore CEO Raj Verma recently wrote about how to manage these risks and offered three ways to manage the real-time revolution as part of the upcoming meeting of the World Economic Forum this month.
Read Post

Company
Re:wind AWS re:Invent 2022
While the world was watching the coming together of the best soccer players in the world in the Rub'al-Khali desert, the biggest players in the technology world convened in the Mojave desert.
AWS re:Invent has become the World Cup of technology conferences. With both events playing out on the world's largest stages, temperatures were running high, and not just because of the Europeans’ aversion to air conditioning.
SingleStore was at AWS re:Invent in full force along with our partners including IBM, Red Hat, HPE, Zesty, Grafana and Confluent — as well as SingleStore customers who exhibited in the showcase such as Palo Alto Networks, Imperva, Goldman Sachs and CapitalOne. It was thrilling to see SingleStoreDB power these real-time applications and customer-facing analytics on the exhibit floor. We’d like to share some of the excitement, activities and our take on the announcements that happened, and some that didn’t.
First of all, in-person technology events are clearly back. With reports of close to 50,000 in attendance, it’s clear we are no longer living in the COVID-induced, all-digital world. The booth conversations, informal chats between sessions and serendipity of chance encounters with former co-workers, customers and partners are something that pure virtual events simply can’t match. But with hundreds of thousands attending the keynotes online, the event world’s future is forever hybrid.
SingleStore has been called “absolutely magical” and “the database of our dreams” by its users, so it was only fitting that we booked our favorite magician for our re:Invent booth, who managed to educate and awe the audience with SingleStore database “aha!” moments. Our solutions consultants helped pull back the curtain and showed conference attendees what SingleStoreDB can do for real-time applications with analytics, using a real-time marketing analytics application which you can see here — and find the source code to run it yourself in a free trial.
Read Post

Product
SingleStoreDB Outshines Major Database Competitors in TCO Study
GigaOm recently conducted a Performance and Total Cost of Ownership (TCO) analysis which revealed that SingleStoreDB delivers better performance and a 50% lower TCO compared to the combination of MySQL and Snowflake, and a 60% lower TCO compared to the combination of PostgreSQL and Redshift.
Today’s enterprises are generating transactional data from all parts of their business. Modern SaaS applications, APIs and data products are the crucial digital touch points that must be responsive, interactive and real time to engage customers in data experiences that keep them coming back. At the same time, this transactional data must be analyzed and served to internal stakeholders managing operations, and embedded in these SaaS applications to enrich interactive customer data experiences.
This combined need to have both fresh, real-time operational and analytical data applies pressure from all sides, and in every moment. This is data intensity and it requires a new kind of database technology to solve it. SingleStoreDB is the #1 database for unified operational and analytical processing, and data intensity. With a single table type released in 2019 called Universal Storage and hundreds of customers in production using it, SingleStoreDB provides the best price-performance and TCO across the three industry standard benchmarks for transactional (OLTP) and analytical (OLAP) workloads: TPC Benchmark™ H (TPC-H), TPC Benchmark™ DS (TPC-DS), and TPC Benchmark™ C (TPC-C). While these do not measure all the dimensions of data intensity, they are an approximate baseline and are well understood.
The traditional approach for 50 years has been to use separate database technologies for OLTP and OLAP, mostly driven by limitations in hardware and cost. Enterprises built applications using operational databases and then copied the data to analytic databases for reporting. This created data silos and also increased the latency of the analysis of operational data. Beginning with the industry adoption of NoSQL databases almost 20 years ago, a growing trend emerged to pick more specialized databases for each workload. This increased the number of data silos immensely, duplicated data, increased data delays and cloud costs, raised security concerns, introduced new governance challenges and created unnecessarily complex data systems underneath SaaS applications. Today, that approach has culminated in out-of-control cloud complexity and cost, and is no longer sustainable as demands for data increase in every moment — especially as data intensity increases.
Innovations in cloud computing, hardware and database architecture design in recent years have invalidated the old assumption that a single database technology can only perform a single workload well. SingleStoreDB is the industry leader in innovation and adoption for unified transactions and analytics. It was built to simplify modern SaaS applications, save operational costs, reduce the number of database technologies to manage, provide consistent low-latency results and eliminate data movement.
This GigaOm report compares SingleStoreDB, a real-time distributed SQL database, to special-purpose analytic databases (Snowflake and Redshift) and special-purpose transactional databases (MySQL and PostgreSQL). The report combines the TCO and benchmark analysis, so economic buyers can review the costs and share a single document with the technical evaluators who can verify the claims.
What You’ll Find in the Report
The study compares SingleStoreDB, as a distributed general-purpose SQL database, to Amazon Redshift and Snowflake on two analytical workloads, and to MySQL and PostgreSQL on a transactional workload. The purpose was to ascertain whether SingleStoreDB — as a multiple-workload platform — performs at a level comparable to popular cloud databases.
Read Post

Company
SingleStore Community: Meet Akmal Chaudhri, Developer Advocate
SingleStore’s growth is on fire — globally — and a big reason why is that our technology has strong appeal for both developers and business leaders. Akmal Chaudhri, our London-based Developer Advocate, has a deep understanding of the conversations that take place on both sides of, and across, the table. I recently spoke with Akmal about why he’s excited to be at SingleStore, and why he thinks “the best kept secret” of the database world (that’s us) is about to turn up the heat even more.
Q: OK tell me, what drew you to SingleStore?
A: The short answer is that SingleStore has really fantastic technology and is a grown-up company.
The longer answer is that my interests and background are in database technology; when I started in the industry Oracle was the dominant player, IBM was number two and Microsoft a distant third. Throughout my IT career, the one thing that has been very consistent is relational technology. Despite all of the predictions of its demise, that hasn’t happened. Relational technology is as strong today as it’s ever been.
As a company, SingleStore is engineering-focused — it has a very strong technology offering, a “best kept secret” kind of thing. And while SingleStore is a very solid relational database, it can do so many other things as well, like supporting both transactions and analytics, JSON, time series and geospatial data, and more. It can serve many applications beyond purely relational — with SingleStore you can get all the capabilities you need without having to glue together multiple pieces, which is painful for both developers and executives.
SingleStore isn’t trying to be all things to all people, though. Instead, they are focused on data-intensive apps, which are rapidly multiplying across many industries.
I have wanted to work for SingleStore for some time and was thrilled when a role opened on the developer relations team last year. I’ll be working with developers and people in the industry to understand what problems they are trying to solve, and show them where SingleStore is a good fit.
(For more of Akmal’s story check out his post on Medium.)
Q: How will you do that? What’s your plan?
A: My focus is very much on the developers, helping them get up to speed on SingleStore so they can do their jobs better. And working with SingleStore, too, to help the company do its job better.
I have worn many hats over my career. Seeing issues from a variety of perspectives firsthand, I have an understanding of developers’ and executives’ pain points, which makes me fairly unique. I’m a firm believer that technology for technology’s sake is not a good thing. You need to have some reason why you are building that technology — what is the practical benefit? How can it help the business sell more? Be more competitive? If the focus is purely on the technology, you are excluding some very important issues.
I enjoy writing technical content and working with the developer community. I like providing assets to get developers up to speed, and code examples. But I’m also very interested in the business side and being able to understand customer pain points. It’s important to be able to show how technology can help overcome those specific pain points. Executives also tend to be more interested in issues such as TCO and ROI, and not so much in technology — which to them is just a black box.
Q: You’re based in London. How do you see the tech scene there as being different compared to other parts of the world?
A: My observation over a long period of time is that technology innovation tends to get into production fastest on the east and west coasts of America. Europe tends to lag a little bit behind. That being said, London and numerous other European cities are financial capitals, and finance has always been a strong area for SingleStore.
But equally, if you look at requirements in other industries, many SingleStore capabilities are equally well-suited to serve domains like advertising, IoT and healthcare wearables, which are exploding. For any company that wants to provide personalized services, fast analytics are critical, and that’s where SingleStore has an edge.
There’s great potential for SingleStore in London and the European mainland. It’s been a best kept secret — but now is really the time for users to see what it can do.
Q: Any parting thoughts?
A: Over the years I’ve seen that one of the biggest hurdles for new technology is integrating with other tools and services already in place, many of them open source like Spark and Kafka. SingleStore is very well positioned here. Regardless of what tools or technology developers are working with, the fact that SingleStore talks with them and allows you to get data in and out at scale is extremely beneficial. It’s a great strength for SingleStore.
To stay in touch with Akmal, join the SingleStore Developer Community, and follow him on LinkedIn and Medium.
Read Post

Product
2021 Roundup of Product & Developer Community News
As we start the new year, we wanted to share a few highlights of customer, product and developer community news from SingleStore in 2021. It has useful information for learning about distributed SQL databases, data-intensive applications and the outcomes developers are achieving, so you may want to bookmark this post. To learn more about SingleStore company milestones this past year, check out our CEO Raj Verma’s post.

Customers Delivering Data-Intensive Applications

In 2021, we added the most new customers ever in the company’s history, expanded use significantly with existing customers and celebrated some customers’ IPOs. The new customers are innovators in industries like fintech, gaming, media, communications, marketing technology and cybersecurity, and are providing modern data experiences to their customers. The common challenge they faced was how to cost-effectively solve for data intensity. Their data-intensive use cases — like operational analytics, in-product analytics, portfolio analytics, fraud prevention, gaming telemetry, real-time esports analytics, real-time marketing analytics and algo trading — all demanded a new approach to the cloud data stack that solved for the crucial time-sensitive moments and digital experiences their own SaaS products demanded.

The common obstacles they needed to solve were data bottlenecks across several dimensions: data size, speed of ingestion, latency, query complexity and concurrency. When an application exhibits two or more of these challenges simultaneously, we call it a data-intensive application. SingleStore is the industry’s first cloud database built for data-intensive applications.

We also saw an increase in adoption of hybrid cloud deployments for SingleStore. Our fully managed cloud database service (DBaaS) is now in its third year of availability on AWS, Azure and GCP — but in 2021, we introduced a hybrid cloud deployment option. SingleStore can now be deployed in a hybrid cloud configuration where you get the full capabilities of the product (e.g., separation of storage and compute with Unlimited Storage) but separate the administration from the data. This allows you to have the best of both the on-premises and multi-cloud worlds. The hybrid cloud deployment allows you to own the data plane in any Kubernetes environment including your own data center, AWS VPC, Azure VPC or GCP VPC, and connect to the SingleStore control plane. The data plane contains your data and the nodes of the database, while the administrative control plane is hosted by SingleStore. The benefit of this is that customers can keep their data in their existing environments. Customers in diverse verticals such as energy, sharing economy services, accounting and financial services are now running their SingleStore databases in the hybrid cloud, or are in the early stages of adoption.

We are delighted that a few customers agreed to share their data-intensive success stories publicly with the SingleStore community this year, especially since so many view SingleStore as the secret weapon in their strategic cloud data stack. We love hearing how our customers are making use of the unique properties of SingleStore’s modern cloud database in their organizations, and we hope you find these kinds of stories informative and inspirational:

IEX Cloud - Serving more than 1.2 billion API calls daily to over 200 financial datasets, IEX Cloud is the financial data platform built to connect developers and data creators. “SingleStore enables us to do monitoring and analysis in the same system that houses our historical data, and this creates enormous efficiencies for us,” said Josh Blackburn, Chief Technology Officer of IEX Cloud. Their data prep and ETL process execution time dropped from days to just minutes for data updates, and the API’s average response time is now only 8ms for a working set of hundreds of terabytes across real-time and historical data. After considering more than 10 databases, including Clickhouse, CockroachDB, Yugabyte and Google BigQuery, the team selected SingleStore because, as Josh stated in this webinar, “It met all the requirements we had for writes, massive analytical data, and web-scale traffic.”

Factors.ai - The death of the dashboard has been widely reported, and Factors.ai may be one of the drivers of that trend. Factors.ai offers an Amazon Alexa-like interface to B2B marketers so they can ask a question and receive a set of specific, focused recommendations based on their own data, rather than having to inspect hundreds of dashboards. They were hitting a wall with their cloud-hosted PostgreSQL database. CTO and co-founder Aravind Murthy shared with SingleStore, “We went from taking 20-30 minutes to run a query on 50 million records to 10 seconds. That's a 180x faster query response.” And Chief Product Officer Praveen Dasr shared, “...our clients can plan out a campaign, generate an analytics report, create a user segment, send it to their targeting system, and take it live in one hour. That’s about 32x faster than before and it’s the sort of competitively differentiating agility we bring to marketing.”

Fathom Analytics - One of the most widely shared stories of the year was Fathom, due in no small part to its loquacious and gracious co-founder and CTO, Jack Ellis. The company provides website analytics that are fully compliant with privacy laws. When faced with scaling their cloud-hosted MySQL database to serve quickly growing analytical queries, Jack discovered that not only could SingleStore solve his analytic query performance and user concurrency bottlenecks, it could also serve as the application database outright, replacing MySQL for OLTP, replace Redis as the caching tier and replace AWS DynamoDB as the key-value reference store. This greatly simplified the data stack and reduced the DevOps burden significantly, resulting in more than a 60% reduction in database total cost of ownership and a 1000x increase in query performance. Building in the open, he shared his unfolding journey with the community through his blogs, “Building the World’s Fastest Website Analytics,” “Making the World’s Fastest Analytics Even Faster” and “Why We Ditched DynamoDB.”

We also celebrated the IPOs of several SingleStore customers in 2021. We are grateful to be a part of their success and their modern cloud data stacks:
- Playtika
- Monday.com
- Sportsradar
- Udemy

Launching Mission-Critical Capabilities

The product innovations developed in 2021 have been remarkable. The development focus for the year was to advance the product to better serve enterprises’ mission-critical scenarios for their applications. SingleStore has been well-known as a fast database for real-time analytics since its early days as MemSQL. But in the last few years, the product has improved its ability to store and process diverse streaming data for low-latency analytical workloads, while simultaneously expanding to capabilities heretofore only seen in cloud data warehouses and operational databases.
This expansion to better support a widening array of workloads is part of a larger data management market trend that SingleStore is leading — disrupting the old guard of the last 50 years of on-premises databases, as well as the last 15 years of cloud databases.

Multiple industry analysts have identified this technology unification trend from different vantage points. Stephen O’Grady from RedMonk remarked that the market is searching for capabilities now in a new kind of modern general-purpose database. Rick van der Lans has explored the question of whether unified databases make polyglot persistence irrelevant. Mike Gualtieri and Noel Yuhanna from Forrester have been covering two specific parts of the unification trend with the Translytical Data Platform, combining operational and analytical workloads, and the Multimodel Data Platform, which combines diverse data models and data types in a data platform. And Gartner continued its coverage of Cloud Database Management Systems in the Gartner Magic Quadrant released in December, covering both operational and analytics capabilities across a single cloud database market. They used to call this transactional and analytical unification HTAP, but have since redefined it to cover an augmented set of capabilities that includes machine learning. SingleStore was recognized in this year’s Gartner MQ for cloud database systems for the first time. Some folks in the industry talk about this database unification trend in the future tense but at SingleStore, the future is already here, to riff on William Gibson.

SingleStore has developed unique innovations to achieve simplicity, power, and cloud resource and cost efficiency in a single general-purpose modern cloud database, while avoiding the need for difficult tradeoffs for diverse application workloads. The company was awarded patent #11,068,454 in July for one of those innovations, Universal Storage, which was declared feature-complete last summer.

Here are some more of the mission-critical enterprise SingleStore product capabilities launched in 2021:

System of Record capabilities
- Limitless PITR
- Lock-free Backup for online backup without pauses
- Multi-AZ Failover
- Resource Governance
- Achieved 4 nines (99.99%) SLA

Security
- Private Link for AWS, Azure, and GCP
- Compliance Certifications: SOC2, ISO27001, HIPAA, GDPR

Ease of Use
- Universal Storage is feature-complete
- Unlimited Storage for our unique separation of storage and compute, codenamed Bottomless
- New Editions and Flexible Usage with Credits
- External Functions - Here's an external function with AWS Translate
- New Vector Functions
- HTTP API and Management API
- Native Drivers for ODBC and JDBC
- Faster Cluster Creation

Technology ecosystem
- Spark Connector w/ Parallel Read
- AWS Glue Custom ETL Connector
- ML in-database with MindsDB integration

That’s a big list, so I encourage you to check out the summer announcement and its release notes, as well as the fall announcement and its release notes. Of course, if you like to sit back, watch and listen, check out the accompanying webinars.

Building the Developer Community

TikTok much? If not, you may find that’s a channel that is expanding quite a lot. You’ll find our Developer Advocate Joe Karlsson here, there and everywhere. It’s just one additional channel we expanded into to meet, educate, support and learn from our growing developer community where they already are. We expanded the Community and Developer Relations teams, and began collaborating with Engineering’s Launch Pad team.
Here are just a few of the milestones these teams have achieved together this past year:
- Launched the Developer Hub as the one place to start your developer journey
- Named the inaugural group of 10 Ambassadors to the Community
- Contributed many helpful how-to articles for getting started, using SingleStore in various multimodal use cases, the recipe for building a modern database, using Streamlit with SingleStore, using SingleStore as a feature store, and much more
- Livestreamed Tech Talks for getting started, and for specific use cases like streaming data in parcel logistics
- Contributed to advancing the state of the art for WASM and WASI through the Cloud Native Computing Foundation (CNCF)
- Established a new GitHub organization called SingleStore Labs, focused on useful code and frameworks like geospatial track and trace of airplanes, and serverless CloudFlare Workers with SingleStore
- Reached a total of 92 GitHub repositories across our SingleStore GitHub and SingleStore Labs

Looking Ahead

We are giddy with excitement about what’s in store for 2022. Across community, customer and product we have big plans this year, and we will be sharing a lot over the coming months. We are on a mission to bring together modern data to make it work for you in the moments that matter to your customers. We simplify data architectures so you can avoid the burden of stitching multiple datastores together and maintaining them to meet your data-intensive challenges. We were built for this moment: SingleStore is the industry’s first cloud database built for data-intensive applications, and it’s a great time to join us. You can do that by joining our developer community or the company…or both! We are rapidly growing in Engineering, Marketing, Sales and other areas. You’ll find all of the open positions on our Careers page. Be part of the data revolution with SingleStore!
Read Post

Company
SingleStore Community: Hello Joe Karlsson, Developer Advocate
Who’s your best friend? It just might be Joe Karlsson, SingleStore’s new Developer Advocate. Read Domenic Ravita’s rapid-fire Q&A with Joe to find out how he’s creating inclusive, useful and fun resources to build our global community.
If there’s one thing that the pandemic has helped me appreciate in my work life, it’s the connectedness of the SingleStore community. Whether I’m working with my colleagues around the world, or hearing about the latest wins from our prolific developer base, it’s always gratifying to share in the enthusiasm that SingleStore technology generates. One of the reasons why there’s so much heat around our data platform is Joe Karlsson (he/they), SingleStore’s new Developer Advocate.
I recently sat down with Joe on a Zoom call to learn more about his work and ambitious plans for the SingleStore developer community. (Join us!) Here are the highlights of our conversation, and a list of all the places you can connect with Joe.
Q: If you had to summarize your mission in 240 characters, what would you say?
A: For developers, online communities are more important than ever. My goal is to create fun, inclusive, engaging content that developers love, and that helps them to be wildly successful with SingleStore.
Q: Well done, with a few characters to spare! Tell me about your background.
A: I’m a database engineer turned developer advocate, and forever a massive data nerd. I am all about empowering developers to think creatively when building data-intensive applications, through demos, blogs, videos, and whatever else they need.
My career has taken me from building database best practices and demos for MongoDB, to architecting and building the third biggest eCommerce website in North America—that’s Best Buy—to teaching at one of the most highly rated software development boot camps on Earth. I’m also an ex-surfer, TEDx speaker, film buff, and an avid TikTok-er and Tweeter.
Q: Why SingleStore?
A: SingleStore is a product for developers. They are the people working with data platforms every day, and increasingly the ones making buying decisions. I am all about facilitating conversations with this important audience. Developers don’t follow enterprise companies on Twitter to get technical content; they follow engineers they trust, like and respect. It’s an honor for me to be leading those conversations for SingleStore.
Q: How is the SingleStore developer community different, and how is that reflected in the way you serve them?
A: Developers are really passionate about SingleStore technology. They literally fall in love with our ability to handle data-intensive applications. That’s where SingleStore is winning and we need to talk about that. With so many people working remotely, creating an online gathering place and building a sense of community there is really important; it’s the most important place to be.
Building data-intensive apps is not something you do fresh out of bootcamp. I’m working on becoming an expert in this field and educating developers about how SingleStore makes it monumentally easier to build massively scalable data-intensive apps. My job is to help make that educational content more approachable, simplifying it into an easily digestible format.
Q: What are some of the ways you are making complex technical content more digestible?
A: This is such a cool part of my job—I do lots of different things. For example, I blog. I make YouTube videos about how to get started with SingleStore. I speak at tech conferences and TEDx, do livestreams and know how to rock TikTok. And much more. And yes, that’s my cat in those TikTok videos!
Q: How are you getting the word out about SingleStore?
A: I work closely with Wesley Faulkner, Head of Community at SingleStore. We both joined the company at the same time, and are working hard to establish a strong SingleStore community that is inclusive, friendly and incredibly useful. Connect with me on LinkedIn and on my social media, and let’s create an environment that we both want to be part of.
Developer resources
SingleStore Developer Website
SingleStore Community
SingleStore Community Forums
SingleStore Twitter
SingleStore GitHub
Get social with Joe
Twitter
GitHub
YouTube
Medium
Reddit
Instagram
TikTok
Spotify
Read Post

Product
Migrating from MySQL to SingleStore: Just Hit the ‘Easy’ Button
It really *is* easy to migrate your data from MySQL to SingleStore. This blog recaps the recent SingleStore technical webinar, “Modernizing from First-Generation Systems: Migrating from MySQL,” delivered by SingleStore VP of Client Technical Services Sukki S. Sandhar and Pankaj Singh, Solutions Consultant at POC Factory. Here’s a snapshot of the webinar’s step-by-step walk-through of the ‘how’ and ‘why’ of migrating from MySQL to SingleStore.
Read Post

Product
Straight to the Point: Why Developers Choose SingleStore to Turbocharge Their Apps
SingleStore is designed to make developers’ lives easier with features that deliver scalable performance, speed migration and keep costs down. In this blog Domenic Ravita recaps SingleStore’s recent developer webinar, “Turbocharge Your SaaS Applications with SingleStore,” highlighting the straight-to-the-point benefit story told by Sarung Tripathi, SingleStore Principal Solutions Architect.
From the beginning, SingleStore has had a large and loyal application developer following. One of the reasons why is that, like our customers, we don’t mess around. I recently hosted a webinar created for the developer audience, “Turbocharge Your SaaS Applications with SingleStore.” My special guest was Sarung Tripathi, Principal Solutions Architect at SingleStore and, in his inimitable way, Sarung got down to business as soon as he was handed the mic.
A frictionless path to high performance
Right away, Sarung was on the same wavelength as our audience. After noting that conceptually simple concurrency and data inflow issues often cause poor app performance—his title slide read “It’s Your Database, Stupid”—he conceded, “Frankly, I realize that not everybody wants to put a lot of time into optimizing a database, or building a database cluster and making sure that it's perfect.”
The good news is that there are databases available today that are designed to power mobile and SaaS apps large and small. Sarung ran through a checklist of must-have features that SingleStore offers, and app developers should look for when considering a database backend:
- Standard SQL syntax
- Native connectivity to any application framework
- Consistently fast user experience, from 10 to 10,000
- Simpler migration path
- Predictable pricing to scale with developer needs
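To show what the first two items on that checklist look like in practice (standard SQL and native connectivity from virtually any framework), here is a minimal sketch. Because SingleStoreDB speaks the MySQL wire protocol, an ordinary MySQL driver such as PyMySQL works; the endpoint, credentials and table below are placeholders, not values from the webinar.

```python
import pymysql  # any MySQL-compatible driver works against SingleStoreDB

# Placeholder connection details; substitute your own endpoint and credentials.
conn = pymysql.connect(
    host="svc-example.singlestore.com",
    port=3306,
    user="admin",
    password="********",
    database="app_db",
)

with conn.cursor() as cur:
    # Plain SQL: the same query you would write against any relational database.
    cur.execute(
        """
        SELECT user_id, COUNT(*) AS events, MAX(created_at) AS last_seen
        FROM page_views
        WHERE created_at >= NOW() - INTERVAL 7 DAY
        GROUP BY user_id
        ORDER BY events DESC
        LIMIT 10
        """
    )
    for user_id, events, last_seen in cur.fetchall():
        print(user_id, events, last_seen)

conn.close()
```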
SingleStore makes it easy
Read Post

Product
Limitless Point-in-Time Recovery: What's in It for DBAs, Execs and Developers?
Limitless point-in-time recovery (PITR) is one of the hottest features in SingleStore 7.5, offering new, mission-critical capabilities for enterprise customers. Domenic Ravita takes a deep dive into PITR with SingleStore’s VP of Product Development, Rick Negrin, to find out how this new feature benefits database administrators, executives and developers.
A few weeks ago I blogged about the highlights of SingleStore 7.5, recapping our recent webinar, “What’s New in SingleStore Summer Release?” One of its most hotly anticipated capabilities is limitless point-in-time recovery (PITR) which, as the name suggests, allows you to recover a complete database to any point in time over a pre-defined window. I wanted to know more about why limitless PITR is important, and to whom, so I sat down for a Zoom deep dive with Rick Negrin, Vice President of Product Development at SingleStore. Here’s an edited version of our conversation.
Domenic: There’s a lot of heat around SingleStore’s new limitless PITR capability. Why?
Rick: Limitless PITR is an important capability for mission critical applications, which have very high data integrity requirements. While SingleStore is the mission-critical backbone of many data-intensive applications, limitless PITR additionally allows enterprise customers to confidently rely on our database as a system of record. This is a common term in enterprise computing that means a database essentially serves as a “single source of truth.”
Obviously, a system of record is mission critical. It being a “source of truth” also means that the database is the only location for that data. If that data is lost, it’s gone and there is no way to reconstruct it from other sources. That is why data loss from a system of record can cripple an organization and has to be prevented at all costs. In contrast, in most data warehouses the data is sourced from other systems. You can always rebuild a data warehouse, although it might take a really long time.
With limitless PITR, SingleStore is meeting the high-bar needs of an enterprise audience, which for years has had a fairly limited selection of relational databases to use as systems of record.
D: Tell me more about how limitless PITR works.
R: Limitless PITR is conceptually pretty simple. It allows you to recover a database to any point in time over a window you specify: a day, week, month, 90 days or more. SingleStore 7.5 will maintain all history of the data over that time, recoverable at any granularity down to the microsecond, or to a named milestone.
Let me emphasize that our granularity is impressive: rollback to the microsecond level in a database that is distributed, relational, multi-model and cloud native.
D: How is limitless PITR used in the real world? Who’s most excited about this new feature?
R: Database administrators (DBAs), IT and business executives, and developers are all excited because PITR A) allows recovery from database corruption or other problems, B) can function as a continuous online backup, and C) makes it very easy to use production data in development.
I’ll unpack each of those use cases. First, DBAs spend an inordinate amount of time cleaning up from errors like:
- Someone accidentally dropped or truncated a table in the database.
- A deployment failed, making changes to the database that are difficult to reverse.
- A lot of data was accidentally deleted or modified, making it impossible to run one or more applications.
The ease with which SingleStore can get the database back to a valid state eliminates two thorny problems that DBAs commonly face in these and other types of disaster recovery scenarios: RPO and RTO.
- RPO is recovery point objective: when you do a restore, how much data will you have lost? Can I get it back to the exact point where the system crashed, or a day or week before that? With a weekly backup, your RPO is a week; a daily backup has an RPO of a day, an hourly backup an RPO of an hour, and so forth. With continuous backup, which is what SingleStore delivers, RPO is near zero, eliminating potential windows of loss of critical data.
- RTO is recovery time objective, or the time frame in which you want to be able to recover your data.
SingleStore 7.5 dramatically reduces RPO. Because we store data continuously, a restore doesn’t require defaulting to the most recent version of the database; you can pick any point in time to restore to. So, if you have a logical corruption and know when it occurred, you can restore the database to just before the point of corruption. You can recover the data as it was just before the operation that corrupted it, and thus not lose it.
We also have a feature called ‘milestones’ that is handy. Milestones are like bookmarks. You create one when you know you are doing something a bit risky, such as upgrading the application. That way if something goes wrong you don't have to figure out the starting point to the exact microsecond. You just restore back to the milestone, which was specified as the last known good point.
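To make the workflow concrete, here is a rough sketch of how the milestone and restore steps Rick describes might look when driven from Python. The database name, milestone name and SQL statements are illustrative paraphrases of the feature as described here; check the SingleStore 7.5 documentation for the exact PITR syntax before relying on it.

```python
import pymysql  # SingleStoreDB is MySQL wire-protocol compatible

conn = pymysql.connect(host="svc-example.singlestore.com", user="admin",
                       password="********", autocommit=True)

def run(sql: str) -> None:
    with conn.cursor() as cur:
        cur.execute(sql)

# 1. Before a risky change (an app upgrade, a bulk delete), bookmark a known-good point.
#    Statement paraphrased from the milestone feature described above.
run('CREATE MILESTONE "before_app_upgrade" FOR app_db')

# ... the upgrade runs, and something goes wrong ...

# 2. Roll the whole database back to the bookmark, or to an explicit timestamp
#    (down to microsecond granularity, per the PITR capability described above).
run("DETACH DATABASE app_db")
run('ATTACH DATABASE app_db AT MILESTONE "before_app_upgrade"')
# Alternatively: run('ATTACH DATABASE app_db AT TIME "2022-01-10 09:15:00"')

conn.close()
```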
All of these features illustrate the big advantages of limitless PITR—it allows you to be very precise about what you restore to, and therefore you reduce your RPO to close to zero, if not zero.
To summarize, SingleStore unlimited storage databases can function as a continuous online backup.
D: Why do executives like continuous online backup?
R: In any industry, backing up data is an enormous cost of doing business that requires a lot of people and technology resources. All businesses back up databases for disaster recovery and business continuity purposes, but compliance regulations in some industries such as finance require institutions to maintain seven years of historical data. This data isn’t used that often, but when you need it, it’s critically important. Unfortunately, the risk around traditional backup processes is relatively high, because it’s not uncommon for data to become corrupted or have other problems.
We anticipate that some of our banking customers will keep a seven-year window on their data, freeing their DBAs to work on much more pressing issues that drive competitive advantage.
D: The SingleStore 7.5 release webinar said that limitless PITR does not eliminate the need to do backups. What gives?
R: Practically speaking, we expect to see 7.5 embraced as backup because of SingleStore’s architecture. When data is put into the cluster it’s automatically sent down to object store, providing limitless PITR and what is effectively continuous backup in highly secure and reliable S3.
If you want to keep the data for seven years, you just let it keep going to object store. Set the retention to infinite and you don’t have to do anything; the backup just accumulates. The storage does have a cost, but this approach eliminates the DBA time required to execute and manage the backups. DBAs can be redeployed to more meaningful tasks. The cost of storage is nominal; object store in S3 is 2.3 cents per month, per gigabyte. A 100-terabyte database—that’s 100,000 gigabytes—costs $2,300 per month to store, a small fraction of an enterprise DBA’s salary.
D: Last question: What does limitless PITR offer developers?
R: Developers that want to test their application on up-to-the-minute production data can now do so. They can do limitless PITR of the production system, make a copy of the database, restore it to another server, and now have an up-to-date copy of the database for development and testing.
If you’re excited about limitless PITR and want to try it in your data workstream, get started with SingleStore today and take advantage of our offer of $500 in free product credits. Follow us on Twitter @SingleStoreDB and in our new Twitter channel for developers, too: @SingleStoreDevs.
Read Post

Product
The Antidote for Data Architecture Complexity: A Unified Database
Find out why specialized databases have proliferated in enterprises, why that’s a problem and, more importantly, the antidote for mad complexity: a unified database. This webinar features a technical deep dive from industry analyst Rick van der Lans of R20/Consultancy paired with market context, a product overview and customer examples from SingleStore’s Rick Negrin.
Read Post

Product
Taming the Wild West: SingleStore Fast Analytics for Streaming Media
This blog is a recap of some of the highlights of an information-packed webinar that feels more like an entertaining talk show. “Fast Analytics in Streaming Media” stars Mark Hashimoto, software engineering manager at WhatsApp, and SingleStore’s Domenic Ravita.

If you watched a lot of Netflix and other streaming content last year during lockdown, you’re not alone. A recent study by Conviva reported that streaming volumes in Q4 2020 were up 44% over Q4 2019! Explosive growth in viewership has fueled a similar frenzy on the back end, sending content service providers scrambling to satisfy their customers’ every viewing whim.

“Let's face it, when we're looking at TV, and we're trying to find a show to watch, we really want to find something within ten to 30 seconds,” says Mark Hashimoto, software engineering manager at WhatsApp and a guest on SingleStore’s latest industry webinar, “Fast Analytics in Streaming Media.” “Multiply that by billions of people across the globe, searching for entertainment,” he said, and you can begin to see why the unmitigated data analytics challenge really is “the Wild West.” This blog is a recap of some of the highlights of the information-packed webinar, which, when you watch it, feels like an entertaining talk show—without the guilt!

Defining data intensity

Tech TV presenter Lisa Martin kept the program moving at a brisk pace, volleying conversation gambits between Mark, also a former engineer at Comcast, and SingleStore field CTO Domenic Ravita as the group rolled through the fun Wild West graphic that set the structure for the webinar. In setting up the analytics challenge that streaming media providers face, Domenic gave a quick definition of data intensity.

“In physics, power applied over a given surface area is a measure of intensity. If we take that analogy and look at data intensity, it's that concept of the processing power you need to exert in a tight timeframe on lots of different data surface areas. Data intensity adds a multiplier factor; what is the size of the data, and how fast is it changing? What's the level of concurrency needed? What's the complexity of the queries? And what are the consistent response times expected? All those things, together, are what we call data intensity.”

Mark then took the concept to the next level, adding in the consumer context.

“The consumer is faced with the challenge of finding great content that's hopefully personalized to them, and will resonate with them. The first part of the discovery portion is really data intensive. It’s based on what that individual has watched or what their interests are. From the streaming media company’s perspective, the challenge is how to provide those pure magical moments in the user journey to say, ‘Here's a great show that we think will really captivate, and that you should watch.’”

He went on to explain the latest angle on personalization as vaccinated people start to gather together again: it’s starting on a very personal level, like you, Lisa or Domenic. “Let’s say all three of us want to watch a show together,” Mark theorized with Lisa and Domenic. “What are some of the shows that all three of us would like that we haven't seen before? That's really where the industry is going; the data, the infrastructure and the user experience all have to come together to really resonate with the customer.”

Legacy data infrastructure is inadequate

Not surprisingly, the streaming media industry’s infrastructure isn’t well suited for data-intensive analytics. “One thing you should know about traditional infrastructure,” Mark said, “is that it’s really good for more batch processing, not for on-demand. The infrastructure has to be very malleable to handle spikes of traffic, if you have a hit show that goes viral.

“The East Coast comes online at about 7:30 PM. The wave then rolls across the country, and around the world,” he explained. “Traditional infrastructure doesn't handle bursts of load very well. So, in today's infrastructure, you really need elastic computing. That's why you really want to have a modern cloud infrastructure, to provide the magical experience you want customers to have.”

Domenic quickly picked up the thread from a database perspective.

“We are, by my measure, roughly 20 years into the cloud data era. Back in the late 90s and early 2000s, we used existing single node databases like Oracle and MySQL to build the first-generation Yahoos and eBays of the world. The problem with that is, to get the scale needed for reads and writes, it's like retrofitting a bicycle with the engine of a car. It just doesn't fit. Patching is difficult, and you get a lot of data duplication and inefficient use of hardware.”

He went on to explain that second-era cloud data systems ushered in the speed and scale that earlier systems couldn't deliver, and in a more efficient way. However, “you had to give up something—the power of relational semantics,” Domenic said. “Basically, you gave up on letting the database do a lot of the work for you. With these second-era cloud data systems, you've got to write that logic in your code; you've got to know how to join together the data between customers, product, price and so forth. As an industry we gained speed and scale, but we gave up on SQL.”

Domenic then brought the narrative up to the present.

“The third era of systems that have been around the last few years—and SingleStore is among this class of new, modern data infrastructure systems—offers the speed and scale of the preceding NoSQL era, but with relational SQL. The core foundation of modern systems is a scale-out relational data tier. When you think about use cases that Mark provided—such as matching machine learning outputs, recommendations, getting analytics on that, and serving customers shows—this approach both lowers latency and improves the customer experience.”

But wait, there’s more!

Clearly, there’s a lot to tell in the story of fast analytics in streaming media; this blog post captures just the first ten minutes of the webinar! Watch the remaining 35 minutes to learn about:
- The data challenges faced by new, advertising-based streaming media services
- How streaming media companies can take advantage of fast analytics to improve the customer experience and reduce churn
- How SingleStore’s core capabilities—ultra-fast data ingestion, super-low latency, a scale-out relational database architecture and high concurrency—are ideally suited to streaming media applications
- How streaming media companies can make the most of the “Wild West” environment in cloud data systems to gain a competitive advantage
- And much more!

Once again, here’s the link to the webinar.
Keep up with the latest dispatches from the Wild West of the cloud data world by following us on Twitter @SingleStoreDB and in our new Twitter channel for developers: @SingleStoreDevs.
Read Post

Product
New eBook! Why Developers Choose SingleStore: No Filter
Modern applications provide responsive, data-driven user experiences, and their analytic queries present a constant challenge for SaaS application developers. If you’re an app developer, you are probably already familiar with the intense pressure to scale data infrastructure without slowing services, or showing your customers the dreaded spinning pinwheel.
Speed matters. In a competitive market, ludicrously fast customer experience (CX) is everything.
SingleStore’s new eBook tells the story of three superstar application developers—Jack Ellis from Fathom Analytics, Josh Blackburn of IEX Cloud and Gerry Morgan at DailyVest—who hit the accelerator on the analytics within their SaaS products with SingleStore. Read the eBook to find out why they chose SingleStore, the data engine that powers great CX with real-time, interactive data analytics, giving users the thrill of their own version of Tesla’s legendary ludicrous mode.
Developers talk: no filter
What I love about this eBook is that it captures the developers’ thought processes and unfiltered commentary on their journey to SingleStore. It’s one of the more entertaining eBooks I’ve seen in recent memory, for sure. For example:
Jack chronicled every step of his company’s move from MySQL to SingleStore in a detailed and very entertaining blog post, which also went viral in the developer community on Twitter. Here are a few highlights:
- First, there’s the title of Jack’s blog: “Building the world’s fastest website analytics.”
- The first sentence of the blog captures Fathom Analytics’ enthusiasm for SingleStore, too: “In March 2021, we moved all of our analytics data to the database of our dreams.”
- Of the SingleStore sales process Jack said, “[T]his wasn't a sales call. This was a call where I could ask for help from engineers with 100x more knowledge than me, who have solved challenges for companies far larger than ours...”
Josh said, in a webinar discussing why he chose SingleStore, “The [SingleStore] support for Apache Kafka has been phenomenal, especially as we are trying to process hundreds of thousands of real-time prices. That’s just been an amazing feature. SingleStore actually solved all of our problems for our use case, all in one database. It’s very aptly named.”
Gerry, also sharing his experiences in a webinar with SingleStore, said, “In initial benchmarking our stored procedures were up to three times faster, and we saw a 90% improvement in the time that it took to copy databases and restore them. It was taking about an hour to do that in Azure SQL. The time is reduced to about four minutes in SingleStore which, as far as we were concerned, was unbelievably good.”
The results: Faster, better, cheaper
In the eBook you will also see the results the developers achieved with SingleStore. For example, here’s a summary table from Gerry.
Read Post

Data Intensity
How to Accelerate Analytics for SaaS Developers
Everyone remembers their first experience in a Tesla. Mine was in Las Vegas with a good friend and CIO. I was in town for a tech conference and that provided a good opportunity to reconnect and discuss a new project. He offered to pick me up at my hotel. That was the moment I was first introduced to Tesla’s unique Ludicrous mode. It was exhilarating. The Strip became one long, breathless blur. If you haven’t experienced zero to 60 mph in 2.2 seconds, take a look at the “Tesla reactions” genre of videos online.
Breakthroughs in Scaling Data Analytics
Wouldn’t it be great to get that kind of reaction to your product experience? Whether your customers are launching queries for generating dashboards, leaderboards for gaming analytics, filtering audience views, or generating BI reports, a constant challenge is scaling your data infrastructure without slowing your services or showing your users the dreaded spinning wait animation. It may be time to hit the accelerator on the analytics in your SaaS product to give your users the thrill of your version of Ludicrous mode. SingleStore is the data engine that powers Ludicrous mode for real-time, interactive data analytics in your SaaS applications.
As an application developer, you generally choose a database that you’re familiar with, that is general purpose and that has broad support. If you’re building in a cloud environment, this often leads you to a hosted relational database like Azure SQL, AWS RDS for MySQL or Google Cloud SQL. These work fine early on, but start to show cracks as your SaaS product gains rapid adoption. This is the moment you start to encounter data bottlenecks, which show up in your customer experience. Solving those data bottlenecks is the essential thing to get right, and it is a frequent obstacle. How can you ensure backend scalability while simultaneously focusing on delivering a simple, easy-to-use service?
When application developer Gerry Morgan started encountering data bottlenecks in DailyVest’s portfolio analytics API for their 401(k) customers, he and fellow engineer Kevin Lindroos identified the culprit as their Azure SQL database. While it had served them well initially, as their data grew their costs grew, yet performance was never more than simply adequate. As they extrapolated their customer growth plans, they determined that they needed a better way to control costs as their customer base grew. So, they began the search for a new database platform that could support their growth and the ad hoc analytical queries over large data volumes their portfolio analytics product required. This led them to consider columnstore databases.
For application developers unfamiliar with columnstores, they are generally best for analytical workloads, whereas rowstores are generally best at transactional workloads. (Should you use a rowstore or columnstore?) After investigating columnstore databases such as Amazon Redshift, Vertica, MariaDB and kdb+, they discovered SingleStore met, or rather exceeded, all of their requirements. The benefits were clear. It had a better total cost of ownership, provided a managed service in Azure, executed stored procedures multiple times faster than Azure SQL, and accelerated database backups from 1 hour to just 10 minutes. To learn more about how these application developers scaled and accelerated the analytics in their SaaS applications, watch How DailyVest Drove a 90% Performance Improvement.
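For developers who want an intuition for the rowstore/columnstore distinction mentioned above, here is a small, database-agnostic sketch in plain Python. It stores the same records row by row and column by column, and shows why an analytical aggregate only needs to touch one contiguous column in the columnar layout; it is a teaching toy, not a benchmark of any product.

```python
import time
from array import array

N = 1_000_000

# Row-oriented layout: one record per row, as an OLTP rowstore would organize it.
rows = [{"id": i, "amount": float(i % 100), "region": i % 5} for i in range(N)]

# Column-oriented layout: each column stored contiguously, as a columnstore would.
amounts = array("d", (float(i % 100) for i in range(N)))

t0 = time.perf_counter()
total_rows = sum(r["amount"] for r in rows)   # must walk every record
t1 = time.perf_counter()
total_cols = sum(amounts)                     # scans a single packed column
t2 = time.perf_counter()

assert total_rows == total_cols
print(f"row-wise aggregate:    {t1 - t0:.3f}s")
print(f"column-wise aggregate: {t2 - t1:.3f}s")
```

A point lookup of a single record by id, by contrast, favors the row layout, which is exactly the transactional-versus-analytical split described above.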
For IEX Cloud, the data bottleneck they encountered when scaling their cloud service was a little different. IEX Cloud is a division of IEX Group, the company made famous by Michael Lewis’ 2014 book “Flash Boys: A Wall Street Revolt”. The key service IEX Cloud delivers is a real-time financial market data API. It requires the collection and aggregation of many historical and real-time data sources, which are then processed and served to their customers. Balancing the flow of data from disparate sources to over 130,000 consumers, while serving over 1.5 billion API responses per day and 800,000 data operations per second, demands a great deal of simultaneous read and write volume on their database backend. Tracking the real-time changes in stock prices is a write-intensive operation, while serving billions of API requests against that fast-changing data is read-intensive. Serving real-time streaming analytics with metrics like P/E ratios and market capitalization on real-time streaming and historical data adds compute-intensive workloads to the mix. Furthermore, as a data aggregator, IEX Cloud must refresh reference data from many hundreds of providers throughout the day through ETL, and they expect the number of ETL processes will soon be in the thousands. Compounding the situation, daily market volatility correlates with volatility in the volume of API traffic from their customers.
IEX Cloud needed improved performance in multiple areas that their initial Google Cloud SQL for MySQL service wasn’t delivering. These requirements included high-performance bulk data loading through ETL, streaming data ingestion, storage of all the data, real-time analytics, low-latency responses to many parallel API requests, and the ability to scale horizontally with ease. After trying a variety of database types, including CockroachDB, Yugabyte, ClickHouse, and Google BigQuery, IEX Cloud found that only SingleStore could satisfy all of their requirements, and do it in just one system that was cost-effective, had an established community, and offered good support. Learn more about this SaaS analytics scaling challenge in The FinTech Disruption of IEX Cloud webinar.
Common First Steps
When performance and scaling issues arise, application developers are among the first to know in today’s DevOps world. Application performance monitoring alerts the team and triage gets into motion. At this point, if a DBA is available, the investigation begins: queries are profiled and indexes are modified or added. If handling read volume is the issue, a common technique is to provide a read replica by replicating from the primary database. This offloads work, but at the cost of added latency and duplicated data. If the data is fast-changing, the approach is less effective because the data in the replica is all too often out of date.
Caching is the next option for scaling read-heavy workloads. You’ve seen it work great for static assets in Gatsby, Next.js, or React Static, but managing your dynamic business data this way is another animal. Managing cache eviction is complicated and expensive for fast-changing data. Another challenge is that the size of your cached data must fit into the memory of a single machine. This works well for small datasets, but you’ll soon be looking for a way to scale the cache if your data is large. Scaling out a cache by adding more nodes, with Redis for instance, keeps data available but at the cost of data consistency. It also adds infrastructure and more complexity.
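To make that tradeoff concrete, here is a minimal cache-aside sketch in Python using Redis in front of a MySQL-compatible database. The hostnames, credentials, table, and TTL are hypothetical, and it only illustrates why eviction and staleness become your problem once the underlying data changes quickly.

```python
import json
import pymysql
import redis

# Hypothetical connection details, for illustration only.
cache = redis.Redis(host="localhost", port=6379)
db = pymysql.connect(host="db.example.com", user="app", password="***",
                     database="saas", cursorclass=pymysql.cursors.DictCursor)

CACHE_TTL_SECONDS = 30  # keep it short so fast-changing data goes less stale

def get_portfolio_summary(account_id: int) -> dict:
    """Cache-aside read: try the cache first, fall back to the database."""
    key = f"portfolio:{account_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    with db.cursor() as cur:
        cur.execute(
            "SELECT account_id, SUM(market_value) AS total_value "
            "FROM positions WHERE account_id = %s GROUP BY account_id",
            (account_id,),
        )
        row = cur.fetchone() or {}

    # Expiry is the hard part: the TTL bounds staleness, but every write
    # path must also invalidate or refresh this key, or users see old data.
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(row, default=str))
    return row
```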
Another option for scaling is database partitioning. This technique cuts the data into sizes that fit on a single server, no matter how much data you have. There are various types of partitioning and sharding designed to avoid downtime or data loss in the event of a node failing, and various approaches for partitioning and indexing the data based on your queries. You can try to do this yourself, but it may get you more than you bargained for as an application developer, as the sketch below suggests. There is an easier way which provides the scalability along with the speed and simplicity you need.
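For illustration, here is a rough sketch of do-it-yourself hash sharding in Python. The shard map, hosts, and schema are hypothetical; the closing comment calls out the operational work this sketch conveniently ignores.

```python
import hashlib
import pymysql

# Hypothetical shard map: with DIY sharding, you own this mapping forever.
SHARDS = [
    {"host": "shard0.example.com"},
    {"host": "shard1.example.com"},
    {"host": "shard2.example.com"},
]

def shard_for(customer_id: int) -> dict:
    """Deterministically map a customer to a shard by hashing its id."""
    digest = hashlib.sha1(str(customer_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

def fetch_events(customer_id: int):
    shard = shard_for(customer_id)
    conn = pymysql.connect(host=shard["host"], user="app", password="***",
                           database="events")
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT event_time, event_type FROM events "
                "WHERE customer_id = %s ORDER BY event_time DESC LIMIT 100",
                (customer_id,),
            )
            return cur.fetchall()
    finally:
        conn.close()

# What this sketch does NOT handle is the hard part: resharding when a node
# is added or fails, cross-shard joins and aggregations, and rebalancing —
# exactly the work a distributed SQL database does for you.
```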
Solving for Scale, Simply
SingleStore solves three key challenges for SaaS applications that embed live or real-time analytics, without the need to change your application design, add a cache, partition manually, or bolt on other external middleware:
- Scaling Data Ingest
- Scaling Low Latency Queries
- Scaling User Concurrency
SingleStore is a distributed, highly scalable SQL database. This is an important characteristic, as it’s the foundation for how it addresses each of the SaaS analytics scaling issues. For scaling data ingestion, SingleStore breaks the bottleneck by providing parallel ingestion from distributed streaming sources in addition to bulk data loads. As mentioned earlier, both are important for IEX Cloud’s analytics API. Next, query responses can return trillions of rows per second on tables with over 50 billion records. This is the kind of low-latency query speed that makes fans out of your SaaS application users. Finally, SingleStore scales to meet the needs of your growing customer base through support for high concurrency. For Nucleus Security, every millisecond matters when it comes to thwarting cyberattacks. Their original database choice was MariaDB, but it failed to keep up with their needs to perform more frequent vulnerability scans and serve real-time analytics to a quickly growing number of government users. SingleStore delivered the high concurrency needed for their rapidly growing SaaS applications while dramatically improving performance by 50x, at one-third the cost of the alternatives. Scott Kuffer, co-founder of Nucleus Security, describes the details in the Every Millisecond Counts in Cybersecurity webinar.
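As a rough illustration of parallel streaming ingestion, the sketch below creates a SingleStore pipeline that loads a hypothetical Kafka topic into a sharded table over the MySQL wire protocol. The cluster, topic, table, and column names are invented, and exact pipeline syntax can vary by product version, so treat this as a sketch rather than copy-paste DDL.

```python
import pymysql

# Hypothetical cluster endpoint and credentials, for illustration only.
conn = pymysql.connect(host="svc-example.singlestore.com", user="admin",
                       password="***", database="marketdata")

CREATE_TABLE = """
CREATE TABLE IF NOT EXISTS trades (
    symbol VARCHAR(16),
    price DECIMAL(18, 6),
    ts DATETIME(6),
    SHARD KEY (symbol)
)
"""

# A pipeline pulls from every Kafka partition in parallel, so ingest scales
# with the cluster rather than funneling through one client connection.
CREATE_PIPELINE = """
CREATE PIPELINE trades_pipeline AS
LOAD DATA KAFKA 'kafka.example.com:9092/trades'
INTO TABLE trades
FIELDS TERMINATED BY ','
(symbol, price, ts)
"""

with conn.cursor() as cur:
    cur.execute(CREATE_TABLE)
    cur.execute(CREATE_PIPELINE)
    cur.execute("START PIPELINE trades_pipeline")
conn.close()
```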
Every successful SaaS application includes analytics, either as a core offering or as an adjunct feature. Analytics places increased demand on the database. Customers won't wait for your spinning animation while their data loads, so it is imperative to deliver both the app and the analytics fast. Otherwise, you see more incidents raised, negative reviews posted on social feeds and review sites, and you may find that your customer churn increases. In short, your business slows down and you have to fight to get your growth back. This can happen in the blink of an eye when switching costs are relatively low for SaaS services. That’s no way to claim your slice of the \$157 billion global SaaS application market.
SingleStore accelerates and scales the analytics in your SaaS application by delivering scalable data ingestion, single-millisecond low-latency queries, and high concurrency. But beyond the ludicrous speeds and thrills it delivers, our customers rave about our customer support and optimization services, our established, robust community, and how cost-effective the solution is. To learn more about using SingleStoreDB Cloud to scale the analytics in your SaaS application, join us for the upcoming webinar on April 8.
Read Post

Trending
Carbon, Cloud, and the Modern Data Estate
On this National Cut your Energy Costs Day, it’s a good time to think about our carbon footprints at home and at work as data professionals.
Since the first home was electrified in Manhattan in the 1880s, our home electricity usage has grown dramatically. According to Energy.gov, residential homes now account for 22% of all electricity consumption in the U.S. Roughly 63% of this electricity is still generated by nonrenewable fossil fuel sources, according to the Energy Information Administration, but this varies a lot based on where you live in the country. In Georgia where I live, nonrenewable fossil fuel sources account for about 71% of electric generation. No matter where you live, saving energy brings immediate benefits to you and helps reduce our carbon footprint. Changing some habits will save money on your monthly electricity bill, and the larger collective impact of cutting energy use helps the environment by reducing the carbon footprint.
Here are three energy-saving tips that can make a difference. First, install a programmable thermostat. These can learn household behaviors and set temperatures at the optimal levels for comfort, and they may save as much as 15 percent of electricity consumption. Second, finish replacing those energy-hungry incandescents with LED bulbs. Finally, unplug idle devices: laptops, televisions, and even the coffee pot. The bricks and wall warts for those electronics and appliances are energy vampires which draw power even when the device is off, and they can account for as much as 20% of your monthly bill.
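If you like numbers, a back-of-the-envelope estimate shows why those vampires add up. The wattage and electricity rate below are assumptions chosen for illustration, not measurements from any study cited here.

```python
# Rough estimate of standby ("vampire") power cost for a household.
standby_watts = 40          # assumed total standby draw of idle electronics
hours_per_month = 24 * 30
rate_per_kwh = 0.13         # assumed electricity rate in $/kWh

kwh_per_month = standby_watts * hours_per_month / 1000
monthly_cost = kwh_per_month * rate_per_kwh
print(f"{kwh_per_month:.1f} kWh/month, about ${monthly_cost:.2f} on the bill")
# ~28.8 kWh/month, roughly $3.74 -- small per device, meaningful across a home
```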
But as important as our personal energy habits are, perhaps we should be more environmentally conscious about the impact of our choices as IT and data professionals. Our home and work lives have blurred in the restricted lifestyle this pandemic has caused, and our new home-bound behaviors are driving the largest, fastest adoption of digital services the world has ever seen. By day, we’ve turned to video conferencing from the kitchen counter for work. By night, we’re watching The Queen’s Gambit on Netflix and The Mandalorian on Disney+. (I highly recommend both.) But you may be wondering how the use of these digital services is impacting electricity usage.
Electricity and the Cloud
In the early months of the pandemic, after air travel dropped precipitously, the carbon footprint of video streaming services received a lot of attention. With the energy consumption of information and communication technologies (ICT) increasing 9% every year, The Shift Project’s Lean ICT Report found that the carbon footprint of the ICT sector increased from 2.5% to 3.7% of global emissions, compared to air travel’s 2.4% share. Of that 3.7%, 19% comes from data centers and 16% from network operations. Of course, video streaming services are just one type of digital service among many more SaaS applications delivered by both public cloud data centers and enterprise-owned data centers, and they require massive amounts of electricity to operate. The Natural Resources Defense Council (NRDC) estimated that data center electricity consumption would increase to roughly 140 billion kilowatt-hours annually in 2020, the equivalent annual output of 50 power plants, costing American businesses $13 billion in electricity bills and emitting nearly 100 million metric tons of carbon pollution per year. This is roughly 2-3% of all electricity usage in the U.S. per year. Although it’s invisible to us, our collective use of digital services is making a big impact on electricity usage and the environment.
There is some good news on data centers. Efficiency improvements have reduced the growth rate in their electricity consumption over the last 20 years. A study commissioned by the Department of Energy and conducted by the Lawrence Berkeley National Laboratory used decades’ worth of data to observe the trend in electricity usage of data centers. It found that the rate of increase in electricity usage was estimated to stabilize at close to 4% from 2010 to 2020, compared to a 90% increase from 2000 to 2005 and a 24% increase from 2005 to 2010. Part of the efficiency gain is attributed to the reduced growth in the number of servers operating in public cloud data centers. Servers in the public cloud are operated at a higher utilization rate than those in enterprise-managed data centers. Amazon Web Services commissioned a study from 451 Research showing that their infrastructure-as-a-service was more than 3.6 times as energy efficient as the median of surveyed U.S. enterprise-owned data centers. They attribute that efficiency advantage to much higher server utilization and more efficient hardware. Google and Microsoft Azure data centers are achieving similar efficiency gains over corporate-owned data centers.
Managing the Data Estate
But just as our personal energy habits in our homes have a large effect on energy use, so do our IT decisions. How we manage the data powering these SaaS applications in the context of carbon may be the next big challenge because where data is stored, where it’s copied, how it’s processed, and where it’s transmitted all add up and have an impact. You or your Cloud Operations team sees the effect of that for your company’s SaaS product in your cloud utility bill every month. Some of the line items may pop out, like a large cluster of m5.12xlarge instances in a test environment that’s been left running for 30 days with no activity. In this case, the cloud-saving habit is no different than your home energy-saving habit: Turn off the lights when you leave the room!
The carbon impact of other cloud decisions we make may be less obvious. Modern customer and business experiences delivered by SaaS applications depend on a diverse data backend. Microsoft refers to this as the “modern data estate”: data stored in different locations across different types of data stores, from operational databases to data warehouses to data lakes. Into this data estate flows a deluge of data from an increasing number of sources. Within the data estate we ingest, manage, and analyze this data using the types of storage appropriate to the processing required and the need for freshness, and we retain a long history of data so we can access past and present data and predict the future.
I think the analogy of the estate is a useful one for thinking about the carbon impact of our data management decisions. Within the estate we have assets and liabilities, in terms of data assets and workloads. The data liabilities include the cost of copying and moving data. It has been conventional wisdom of late to pick a datastore-per-workload. There are complex decision trees available on how to pick from among almost 20 different specialty datastores such as time-series, key-value, document, wide-column, unstructured text, graph or relational data. There’s also the choice about the type of processing needed in terms of transactions or analytics.
Consider the real-time data assets and workloads needed for your SaaS application or business. Think about how many different types of datastores are involved in creating, storing and processing those data assets and workloads. Also consider the machine learning models which operate on that data. The tally may be 3, 4 or more. Because it’s as easy and convenient to spin up new datastores as it is to flip on a light, your data estate may be large and sprawling, which requires an estate staff with specialized skill sets to manage each of those assets. At SingleStore, we’ve encountered scenarios where as many as 14 different types of datastores were involved in producing real-time assets and serving real-time analytics.
Serving these diverse workloads on diverse data structures for real-time use cases is inherently inefficient. Big data becomes even bigger when it’s copied and moved rather than operated on in place and as it arrives from streaming sources. In terms of the “data estate”, we can reduce the liability and cost of creating, processing, and maintaining real-time assets by consolidating these workloads. There’s no need to give up on the convenience of instant availability in the cloud or the data access styles and structures you’ve grown accustomed to when designing your application. Many have already moved off single-node and special-purpose databases to achieve greater efficiency by combining real-time operational and analytical workloads on diversely structured data, from binary and raw text to key-value, document and relational. Just as sharing hardware at the cloud infrastructure level results in higher server utilization and greater energy efficiency for data centers, building applications with a unified database that supports diverse workloads on diversely structured data reduces your data estate’s liabilities. I argue that it also increases the value of the real-time data assets, since designing SaaS applications with SingleStore reduces latency and, through a combination of innovations, stores data more efficiently than other datastores.
Takeaway
So, unplug those energy vampires in your home and across your data estate. Take a modern approach to cut your energy consumption. Consider the advantages you gain by combining real-time workloads into fewer datastores to not only simplify and accelerate your data, but also to conserve electricity and reduce the carbon footprint. By renewing and modernizing your data estate through reducing special purpose datastores, you’re directly following the environmental ethos of reduce, reuse, and recycle. I’ve said before that you must simplify to accelerate. Consider that by doing so, you may also “simplify to save”.
Read Post

Data Intensity
If Your Business Is Not Working In Real Time, You’re Out Of Time To Capture The Business Moment
Business is about serving the needs of customers. But customer expectations are changing quickly, and most organizations are not truly aware of how fast that’s happening.
Most businesses are moving in slow motion relative to their customers. That means they miss out on opportunities to make decisions about and act on the moments that matter.
In the past, lag time was accepted. Nielsen called people on the phone to understand their TV viewing habits. Broadcast TV networks set advertising rates and advertisers gauged viewership based on Nielsen ratings. It took a long time for a legion of people to collect this data, and once they got the data, it was typically a small and outdated sample size. But this was the best available method given the technology of the time.
Today these types of approaches simply don’t work — and they don’t have to. Organizations can use modern technology to move quickly and benefit from in-the-moment opportunities. That enables them to act in real time to deliver better experiences to retain and add customers — and optimize solutions for their clients and business partners.
What Is Real Time?
The definition of “real time” depends upon the context. In the context of a video streaming service, “now” means instantaneously. If you’re serving up pixelated videos or you can’t deliver an advertisement, you can lose consumer users or advertising sponsors. Latency is also a conversion killer for websites and a costly problem for financial traders. Akamai, one of my company’s clients, reports that conversion rates drop 7% for every 100 milliseconds of added latency.
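To see what that latency figure implies, here is a tiny illustrative calculation. The baseline conversion rate is invented, and compounding the 7% drop for each additional 100 milliseconds is an assumption about how the effect accumulates, not Akamai's methodology.

```python
# Illustrative only: compound the reported 7% conversion drop per added 100 ms.
baseline_conversion = 0.030          # assumed 3% baseline conversion rate
drop_per_100ms = 0.07

for added_ms in (100, 300, 500, 1000):
    remaining = (1 - drop_per_100ms) ** (added_ms / 100)
    print(f"+{added_ms} ms latency -> conversion ~{baseline_conversion * remaining:.4f}")

# Under this simple compounding assumption, half a second of added latency
# erodes roughly 30% of conversions.
```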
Real time can mean seconds or minutes. Thorn, another client of my company, which works to prevent child sex trafficking, processes massive amounts of web data quickly. This improves child identification and investigation time by up to 63%. Each passing minute matters and determines the likelihood of saving a child.
Speed is also key in fighting the pandemic. True Digital, also one of my company’s clients, is using real-time data to monitor human movement using anonymized cellular location information. This can help authorities prevent large gatherings that can become coronavirus hot spots.
In each of these scenarios, what is considered “real time” is dependent upon the context and the goal. But all of these scenarios define crucial moments in which having the relevant current and historical data immediately available for processing is essential.
In-The-Moment Decision-Making Requires Infrastructure Simplicity
You have to simplify to accelerate business in this way. To go faster and get finer-grained, real-time metrics, you can’t have 15 different steps in the process and 15 different types of databases and storage. That adds up to too much latency and exorbitant maintenance fees.
Instead, you need to be able to do the same things and add new business functions, with less infrastructure. This requires technology convergence.
As Andrew Pavlo of Carnegie Mellon University and Matthew Aslett of 451 Research wrote, NewSQL database management systems now converge the capabilities that in the past were implemented one at a time in separate systems. This is a byproduct “of a new era where distributed computing resources are plentiful and affordable, but at the same time the demands of applications [are] much greater.”
Now you can go faster. You can make decisions and act on them in real time. You’re in the game rather than sitting on the sidelines waiting for information while competitors are acting.
Modern Businesses And Their Customers Benefit From Real-Time Data Today
FedEx founder and CEO Fred Smith said in 1979 that “the information about the package is as important as the package itself.” This highlights the power of data.
Companies like FedEx now use this power to dynamically reroute packages based on customer interactions and to optimize their routes. Real-time data allows customers to employ digital interfaces to see when and where their packages will be delivered, request that a package be sent to an alternate location and have that request honored. It’s not just FedEx that’s doing this; other companies like DHL and UPS have done dynamic rerouting for years.
This is important because people are a lot more mobile these days; customers expect businesses to be more responsive to their needs and tend to give businesses that cater to them higher customer satisfaction and Net Promoter Scores. On-time delivery helps logistics companies avoid missing service level agreements and then paying penalties.
You can’t do route optimization and dynamic rerouting if your information about the package and other relevant details is hours behind where the package actually exists. You need your digital environment to mirror what’s happening in the real world.
When you create a digital mirror of your environment, you get what is called a digital twin. As our co-founder recently explained, digital twins are often associated with industrial and heavy machinery. But organizations in many sectors are now exploring and implementing digital twins to get a 360-degree view of how their businesses operate.
This requires organizations to have converged, cloud-native, massively scalable and fast-performing infrastructure that supports artificial intelligence and machine learning models. Organizations that don’t have these capabilities will be outmaneuvered by faster companies that do have the intelligence and agility to make decisions and act in the now.
Embracing Intelligence And Agility
Understand that delivering faster data isn’t the objective. The objective is to deliver the optimal customer experience and improved operational insights. Let these two objectives be your guide — and seek ways to leverage all relevant data in the moments that matter.
Dreaming big is important. But to start, identify a small project that combines current, live and real-time data with historical data to deliver in-the-moment views, automated decision-making and trendspotting, and to address customer experience or operational opportunities or challenges.
Polyglot persistence provides real development advantages. But it’s not necessary to assemble multiple types of data stores to get those advantages. Choose simplicity with flexibility by searching for solutions that provide support for a spectrum of workloads, reducing cloud data infrastructure complexity.
This was previously posted on Forbes.
Read Post

Case Studies
Nucleus Security and SingleStore Partner to Manage Vulnerabilities at Scale
Cybercrime damages in 2021 are expected to reach \$6 trillion, with organizations of all sizes and industries exploring ways to protect themselves better. Potential security vulnerabilities come in many forms, from cross-site scripting to improper privilege management. Over 17,000 new security vulnerabilities were discovered in 2020 alone. IT security teams are responsible for securing infrastructure that’s continually increasing in complexity, so it’s challenging to respond quickly to newly discovered exploits or confirm that a known vulnerability is addressed. It doesn’t take long for an organization to end up with a substantial vulnerability backlog. On average, over six months, organizations end up with a backlog exceeding 57,000 vulnerabilities and fail to mitigate 28 percent of them. A pre-emptive, holistic, and scalable vulnerability management approach is needed to keep systems safe.
Addressing a Critical Vulnerability Management Process Gap
The founders of Nucleus Security, Stephen Carter, CEO, and Scott Kuffer, COO, had decades of experience providing vulnerability management services to United States government agencies. They used a variety of vulnerability scanning tools for this purpose. Whenever they found a vulnerability, they had to manually enter a remediation request into a ticketing system, such as Jira or ServiceNow. The gap between the identification of vulnerabilities and the creation of tickets created an efficiency bottleneck. All of the vulnerability reports needed to be normalized across the range of formats used by different tools and prioritized by the user. The response then needed to be automated for greater speed, manageability, and scalability.
Carter and Kuffer created the Nucleus vulnerability management platform to make it faster and easier to provide services to their clients. Their original intent was to have a custom tool for their own operation, but they quickly discovered a major need for this type of platform.
The users most interested in Nucleus Security were:
- Large enterprises and government agencies needing to manage security vulnerabilities at scale.
- Managed Security Service Providers (MSSPs) looking for a platform that supports their vulnerability management processes with multiple clients, offers client access, and provides a way to expand it with their proprietary functionality.
Handling Unified Vulnerability Management at Scale
Since Nucleus was created by and for vulnerability management experts, the platform solved many problems that stood in the way of discovering and remediating vulnerabilities before they became exploits.
Smart automation and workflow optimization eliminated many tedious and time-consuming parts of vulnerability management through:
- Centralizing vulnerability aggregation, normalization, and prioritization in a single platform.
- Delivering the right information to the right people in the right format by connecting vulnerability, ticketing, alerting, and issue tracking tools together.
- Increasing vulnerability management speed and capacity from end-to-end.
- Reducing costs for vulnerability management.
- Improving the visibility of vulnerability management data and remediation efforts.
How Nucleus Works
The Nucleus application was delivered as a software-as-a-service (SaaS) offering and was designed with a traditional three-tier architecture. “There’s a web application, which is customer-facing, and a job queue that processes data. Then there’s a database on the backend, serving both,” says Kuffer.
1. Users log in to the customer-facing Nucleus web application.
2. They set up integrations with the tools they’re using from a list of more than 70 supported applications.
3. The users create rules to ingest data and trigger alerts, reports, and tickets.
4. They can access a centralized vulnerability management dashboard, perform searches, and analyze their scan results.
5. A separate backend job queue ingests and processes the data on the user-specific schedule from the selected data sources’ APIs.
6. A database powers the frontend and backend operations.
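To make the job-queue tier concrete, here is a minimal, hypothetical worker loop in Python. It is not Nucleus Security's actual code; the schema, API shapes, and scheduling logic are invented to show the ingest-normalize-store pattern the steps above describe.

```python
import time
import pymysql
import requests

# Hypothetical connection details and schema, for illustration only.
DB = dict(host="db.example.com", user="nucleus", password="***", database="vulns")

def run_due_jobs():
    conn = pymysql.connect(**DB, cursorclass=pymysql.cursors.DictCursor)
    with conn.cursor() as cur:
        # Pick up scheduled ingestion jobs (per customer, per scanner).
        cur.execute(
            "SELECT job_id, customer_id, scanner_api_url FROM ingest_jobs "
            "WHERE next_run_at <= NOW() AND status = 'idle'"
        )
        jobs = cur.fetchall()

    for job in jobs:
        # Pull findings from the scanner's API, then normalize into one schema.
        findings = requests.get(job["scanner_api_url"], timeout=60).json()
        with conn.cursor() as cur:
            cur.executemany(
                "INSERT INTO findings (customer_id, asset, cve, severity) "
                "VALUES (%s, %s, %s, %s)",
                [(job["customer_id"], f["asset"], f["cve"], f["severity"])
                 for f in findings],
            )
            cur.execute(
                "UPDATE ingest_jobs SET next_run_at = NOW() + INTERVAL 1 HOUR "
                "WHERE job_id = %s", (job["job_id"],)
            )
        conn.commit()
    conn.close()

if __name__ == "__main__":
    while True:
        run_due_jobs()
        time.sleep(30)
```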
The product licensing is based on the number of assets that an organization is working with. In Nucleus, an asset is defined as an item that the user scans. Each asset can have multiple vulnerabilities.
The Challenges of Pivoting to a Commercially Viable Vulnerability Management Platform
The decision to launch a separate company, Nucleus Security, had not originally been the founders’ plan. Kuffer explains, “We actually just went to a conference called Hacker Halted in Atlanta, Georgia, and we had Nucleus as basically a line item in a data sheet for our old company. We just got basically hammered with leads at that point.”
“We were not prepared at all for any leads whatsoever, and so none of us had any sales experience, we didn’t have a company, we had nothing. We didn’t really have much of anything, other than a product that didn’t scale for all of these needs.”
Nucleus had many leads coming in from bigger names and companies, which helped expedite the decision to fork it off into a separate business at the end of 2018. However, the founders needed a way to scale the platform for these Fortune 500 companies, and they needed it fast.
Finding the Right Database Solution
When Nucleus Security began architecting the Nucleus platform, they explored several database options, including document-oriented databases, graph databases, and traditional relational databases.
Carter says, “All of the options had their strengths and weaknesses, and we felt that several database technologies could work well for the initial proof of concept. However, it quickly became apparent that a high-performance relational database system was a hard requirement to support our data model and the features we were planning to bring to market.”
They started out developing the solution they would have needed when they were working directly with government agencies. These clients are highly focused on compliance and run scans weekly or monthly, based on the regulatory requirements. The database solutions that Nucleus tried often took a long time to process vulnerabilities, but it wasn’t a big issue at a weekly or monthly cadence.
Struggles with MariaDB
The Nucleus prototype used MariaDB, which is on many federal government-approved software lists. “MariaDB comes bundled with the government-approved operating system we were using, which is Red Hat Linux,” explains Carter. “For the prototype, this worked just fine. But when we started to onboard larger customers, we hit some ceilings performance-wise, and some pretty major performance issues.”
“As a workaround, we were trying to precalculate a bunch of stuff, to show it in the UI. But if you’re scanning every hour, and it takes an hour and a half to do the calculations, then you get a huge backlog,” says Carter.
The team spent a lot of time tuning queries to squeeze out every bit of performance that they could to keep up with the capacity needed for their beta customers. However, even with the clustering and load-balanced configurations available, it was clear that a different database solution would be needed for Nucleus to support very large enterprises, which have hundreds of thousands of devices and tens of millions of vulnerabilities.
Commercial clients scan for vulnerabilities multiple times daily or weekly. They wanted to see the results of a scan in minutes, not hours or longer. Over time, the backlog built up and hit a ceiling where the Nucleus application couldn’t process jobs fast enough. Batch processing worked with federal agencies, but enterprises demand real-time vulnerability management.
“It was the database that was the bottleneck all along,” says Kuffer. “We looked at some NoSQL solutions. We looked at Percona for clustering, but we would have had to rewrite a lot of our code – and all of our queries.” Nucleus Security also investigated other SQL solutions based on PostgreSQL core, such as Greenplum.
The relational database for a commercially viable version of Nucleus needed:
- Real-time processing
- Support for tens of millions of vulnerabilities and hundreds of thousands of devices
- High scalability
The Benefits of SingleStoreDB for Vulnerability Management Platforms
Nucleus Security started looking into alternatives to MariaDB that were better suited for its vulnerability management platform. They found Percona first, but initial tests indicated that it wouldn’t help with their use case. Its scaling focused more on high availability than on throughput. While they could add to the cluster and use load-balancing schemes with different nodes, it was an extremely manual process. It also required a minimum of three servers.
SingleStore came up during Nucleus Security’s search for a database and impressed them from the start. “SingleStore was a great option, because SingleStore is not only relational; it also supports the MySQL wire protocol, which of course is inherent in MariaDB,” explains Carter. “It was almost a drop-in replacement for MariaDB, and it’s way less complex, and also much easier to maintain than the Percona cluster solution that we were looking at.”
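A short sketch shows what "almost a drop-in replacement" can look like for application code that speaks the MySQL wire protocol. The hostnames, credentials, and query below are placeholders; the point is that only the connection endpoint changes.

```python
import pymysql

# Because SingleStore speaks the MySQL wire protocol, client code that talked
# to MariaDB can point at a SingleStore cluster by changing connection details.
conn = pymysql.connect(
    host="singlestore-cluster.example.com",  # was: mariadb.example.com
    port=3306,
    user="nucleus_app",
    password="***",
    database="nucleus",
)

with conn.cursor() as cur:
    # The same parameterized SQL the application already issued keeps working.
    cur.execute("SELECT COUNT(*) FROM findings WHERE severity = %s", ("critical",))
    print(cur.fetchone())
conn.close()
```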
SingleStore is The Single Database for All Data-Intensive Applications powering modern applications and analytical systems with a cloud-native, scalable architecture for maximum ingest and query performance at the highest concurrency. It delivered many powerful capabilities, such as:
- Optimization to run anywhere from bare metal to the hybrid cloud with commodity hardware, including multi-cloud
- Simple to deploy
- Better performance than legacy RDBMS and NoSQL databases for high-velocity big data workloads
- Memory-optimized
- Low total cost of ownership by integrating with existing systems
- Ingest millions of events per second with ACID transactions while simultaneously analyzing billions of rows of data in relational SQL, JSON, geospatial, and full-text search formats
- Data sharding
- Latency-free analytics
- Seamless horizontal and vertical scaling
- Workload management
- Compressed on-disk tables
Switching to SingleStoreDB from MariaDB in Nucleus
Nucleus Security started with the free tier of SingleStoreDB Self-Managed for the proof of concept early in 2019. The founders wanted to determine how hard it would be to migrate the application to the new database. They imported 80-100 gigabytes of data from MariaDB, including several tables with several hundred million rows, to test their slowest queries. It only took an afternoon to migrate the development database to SingleStoreDB Self-Managed and get Nucleus working without any architectural changes.
Carter says, “It was dead simple to get set up. Whereas, I’ve got experience getting Oracle database clusters set up, and those things can be nightmares. And our experience with SingleStore was very good. We do not spend a lot of time maintaining it, troubleshooting it.”
Following the successful proof of concept, the Nucleus application moved to a three-node cluster along with several single server instances for customers in Australia, the United States, and the European Union. It took 3-4 weeks to get Nucleus tested, deployed, and entirely in production on SingleStoreDB Self-Managed. Nucleus got its first keystone client, the Australian Post Office, in March 2019, shortly after the migration. They had 2,000 code repositories and required approximately 50,000 scans per day.
Kuffer says, “They paid us a lot of money upfront on a multi-year deal, and plus they had the brand name and it allowed us to transition that into a lot of later success. We definitely wouldn’t have been able to support the scale that they had without SingleStore.”
No Architectural Changes
The Nucleus app brings in data directly from the APIs of the vulnerability scanning tools used by customers, and interacts directly with their ticketing systems, such as Jira or ServiceNow. There’s no need, at this time, to use Kafka or other streaming technologies.
Nucleus did not need to make any architectural changes to their application to move to SingleStore; it has served as a drop-in replacement for MariaDB. Since both share MySQL wire compatibility, making the move was easy. By replacing MariaDB with SingleStore, Nucleus can now support MSSPs with full scalability.
Read Post

Product
MemSQL is Now SingleStore
Today is an important milestone for our company. We have rebranded the company to reflect that we offer much more than an in-memory database. We also have an expanded vision to share. SingleStore, formerly MemSQL, provides one platform to actualize all of your enterprise’s data.
“What’s in a name?”
The MemSQL name has stood for unrivaled speed, scale, and simplicity for operational data since its founding in 2011. It is a well-recognized name among data architects and performance engineering experts. Our co-founder and Chief Strategy Officer, Nikita, and our CTO, Adam, founded the company on the vision of building a massively scalable, elastic transactional system. As our product has expanded to fulfill the growing developer community’s needs, the name no longer reflected the breadth and depth of our current capabilities and product vision. To our faithful communities of users, contributors, customers and advocates, we will “retain that dear perfection” for which you’ve known and loved MemSQL while broadening our capabilities.
The Early Years
The initial version of the product was designed to meet the low-latency requirements of real-time analytics workloads and leveraged the newly affordable large-memory hardware available. At the time, the cost of RAM was dropping dramatically, which made it cost-effective for the first time to build a totally in-memory transactional database where the entire dataset fit in memory. The advantage was blazing speed for concurrent reads and writes, and without the need to manage disk I/O, it also opened the door to a more efficient indexing approach, namely the skip-list index. Even now, almost a decade later, most databases rely on a less efficient index approach of a bygone disk-based era, the B-tree. And so, with an initial version that provided an in-memory transactional rowstore, the company was named “MemSQL”, with “mem” signifying in-memory and “SQL” making it clear that you could achieve speed and scale without giving up the expressive power of relational algebra, executed for your application through simple, widely understood declarative statements.
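For readers unfamiliar with the data structure, here is a compact, illustrative skip list in Python. It is a textbook sketch, not SingleStore's lock-free implementation; it only shows the core idea of probabilistic "express lanes" that make ordered search and insertion fast in memory without B-tree-style disk pages.

```python
import random

class Node:
    def __init__(self, key, level):
        self.key = key
        # forward[i] points to the next node at level i
        self.forward = [None] * (level + 1)

class SkipList:
    def __init__(self, max_level=16, p=0.5):
        self.max_level = max_level
        self.p = p                       # probability of promoting a node a level
        self.level = 0                   # highest level currently in use
        self.head = Node(None, max_level)

    def _random_level(self):
        lvl = 0
        while random.random() < self.p and lvl < self.max_level:
            lvl += 1
        return lvl

    def search(self, key):
        node = self.head
        # Start at the top level and drop down, skipping ahead where possible.
        for i in range(self.level, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
        node = node.forward[0]
        return node is not None and node.key == key

    def insert(self, key):
        update = [self.head] * (self.max_level + 1)
        node = self.head
        # Record the rightmost node before the insertion point on each level.
        for i in range(self.level, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node
        lvl = self._random_level()
        if lvl > self.level:
            self.level = lvl
        new_node = Node(key, lvl)
        for i in range(lvl + 1):
            new_node.forward[i] = update[i].forward[i]
            update[i].forward[i] = new_node
```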
NewSQL Pioneers and ACID Guarantees
Within a couple of years, MemSQL became a leader in a new area called “NewSQL”, delivering the scalability, speed and flexibility promised by non-relational systems while retaining support for SQL queries and ACID (atomicity, consistency, isolation and durability) guarantees. As a scalable distributed SQL database, MemSQL was among the first systems to provide these NewSQL capabilities along with lock-free data structures, code generation, MVCC, replication, and k-safety. The next several releases added an in-memory compressed columnstore backed by disk, JSON as a native datatype, time-series and geospatial types and functions, SIMD vectorization, a resource governor, and more than we can list here. This brought us into a category Gartner called hybrid transactional/analytical processing, or HTAP. With the addition of data science and machine learning model integration several years ago, we expanded into an area we called Operational Analytics.
Fast forward to today
The product evolved beyond in-memory several years ago to use a sophisticated tiered storage approach, leveraging modern cloud and hardware innovations to give customers 10x the performance at one-third the cost of database incumbents like Oracle. Our new name signifies the company’s goals of helping businesses adapt more quickly, embrace diverse data and accelerate digital innovation by operationalizing all data through one platform for all the moments that matter.
What’s Changed – We are now SingleStore
The renaming of the company to SingleStore also brings new product and service names.
SingleStoreDB Self-Managed, formerly MemSQL Software
It continues to provide a converged data platform for transactions, analytics, and operationalizing ML for time-critical scenarios. It continues to handle structured, semi-structured, and unstructured data and is available in the public clouds, on-premises, and in hybrid deployments.
SingleStoreDB Cloud, formerly MemSQL Helios
The MemSQL Helios launch over a year ago introduced the world’s first cloud-native HTAP and translytical database, with converged transactional and analytical capabilities provided in a single offering without the need to replicate or link data from one cloud database to another. This convergence means better cloud cost management through lower monthly bills for the same cloud workloads, fewer moving parts, and fewer skill sets needed. Uber, John L. Scott, Medaxion, SSIMWave and others are enjoying the database simplicity made possible by the converged cloud database we now call SingleStoreDB Cloud.
The new name is not so new.
Our community of customers and developers will recognize “SingleStore”. It has been the name of the multi-year initiative to move from our dual table type approach in a single database to a solution that serves both OLTP and OLAP workloads efficiently, with high-speed performance, using a single table type. That journey began with the 7.0 release, continues today, and will mature with the upcoming 7.5 release. This single table type capability and the initiative behind it are now known as “universal storage”, indicating our intent to expand the diversity of data types, data models, built-in functions and data access patterns beyond the multi-model set we currently support.
Expanded Vision Takes Hold
With today’s rebranding announcement also comes a preview of the expanded vision for the product. Our Chief Product Officer, Jordan Tigani, announced today our intent to provide access to data located anywhere, even beyond SingleStore. The new capability will provide a global namespace for data located across a multi-cloud landscape allowing SingleStore to provide an API from which to operationalize your data no matter where it lives. SingleStore databases will be accessible from anywhere you have a SingleStore compute cluster, while honoring access permissions and sovereignty restrictions. You will be able to join data in AWS against data in GCP and on-premises, for instance. We will manage replication, consistency, and security according to policies. This global federated access to all your data will provide greater flexibility and will further simplify cloud data management for organizations. It’s a game changer.
We’re very excited to share this news with you today during our (R)Evolution 2020 event hosted by our CEO, Raj Verma, and CPO, Jordan Tigani, in collaboration with our customers and partners. Thank you to all of our customers, partners, and employees for helping to build this company and joining us as we continue towards our expanded mission, now as SingleStore, to operationalize all data through one platform for all the moments that matter.
Read Post

Case Studies
Infosys and SingleStore: Working Together
Infosys and SingleStore are now working together, identifying opportunities to implement SingleStore – a fast, scalable, relational database, renowned for fast analytics, streaming support, and solutions in areas such as financial services, digital media, telecom, and IoT – with the speed and effectiveness for which Infosys is rightly renowned, worldwide.
Infosys is a global leader in next-generation digital services and consulting, with clients in 46 countries. With over three decades of experience in managing the systems and workings of global enterprises, Infosys expertly steers clients through their digital journey. Key to their work is enabling the enterprise with an AI-powered core which helps prioritize the execution of change at each company. They seek to enable an agile digital strategy across the business. They then deliver a learning agenda, driving continuous improvement through building and transferring digital skills, expertise, and ideas from the Infosys innovation ecosystem.
SingleStore is an up-and-coming database provider gaining increasing recognition for fast, scalable relational database offerings. The SingleStore database delivers lightning-fast analytics and compatibility with ANSI SQL, standing out from the NoSQL crowd. SingleStore is also gaining attention with customers that include many of the Fortune 500, including half of the Top 10 North American banks.
SingleStore and Infosys have now partnered to help clients across verticals. The key to the partnership is the ability of SingleStore to offer uniquely powerful solutions, with outstanding price-performance, to a variety of database-related issues, especially in analytics.
Infosys contributes a gimlet eye for opportunities which can best be realized by the use of SingleStore. Infosys can also lead the systems integration work needed to quickly stand up SingleStore, and put it to work, within a complex operating environment.
The solutions jointly offered by SingleStore and Infosys blur the boundaries of two categories which are commonly used to describe technology offerings: painkillers and vitamins. A painkiller solves problems – reports that take days to run, and dashboards that take hours to update. Slow-running ecommerce sites that inadvertently send customers to the competition. Applications that refuse to scale, requiring doublings and redoublings of back-end investment for meager performance gains.
Vitamins help companies take advantage of opportunities – the new report, new dashboard, new ecommerce site, or new application that helps a large company step ahead of smaller competitors, or helps a government agency offer services faster, more effectively, and at less cost. And, when technology is used in just the right way, it can do both at once: fix things that are broken, and open up competitive advantage.
SingleStore provides raw power for such solutions; Infosys offers finesse in identifying opportunities and implementing solutions, helping to fix problems the right way, the first time, maximizing opportunities, and minimizing time to market. With the description given here, you’ll be able to identify opportunities in which Infosys and SingleStore may be able to help in your organization.
What Infosys Offers
Infosys is a global organization, nearly 40 years old, with nearly a quarter of a million employees and with a market capitalization, at this writing, of more than \$40 billion. Infosys offers digital services and consulting, with Infosys Consulting operating as a business within the overall Infosys organization. Slightly more than half their business is in the US, with the rest spread around the globe.
Infosys partners closely with its clients and offers a broad range of services and platforms that adapt to their needs across different industry verticals.
Infosys acts as a trusted advisor to its clients. Part of their value proposition is their ability to identify valuable emerging technologies, use these technologies in a few situations for which they may be particularly well suited, then share their findings across the Infosys organization worldwide. This creates a portfolio of proven technologies that all Infosys clients can adopt with confidence.
What SingleStore Offers
SingleStore offers a fast, scalable, relational database. SingleStore combines transaction and analytics processing in a single database, like other databases in the emerging NewSQL category, also referred to as hybrid transactional and analytical processing, or HTAP – a term coined by Gartner nearly a decade ago, shortly after SingleStore was founded.
SingleStore is a private, venture capital-funded company based in San Francisco. SingleStore has grown to more than 200 employees, including a large proportion of engineers who continue to develop the platform. At its current stage of development, SingleStore is especially well suited for powering fast analytics that blend historical and real-time data, with fast ingest, high concurrency, and with breakthrough responsiveness to SQL queries, whether simple or complex. This field is known as operational analytics, and is used heavily in financial services, digital media, telecom, IoT, and other areas.
SingleStore usage is sometimes held as something of a secret among its customers, who gain tremendous competitive advantage in applications including credit card fraud detection, predictive maintenance, ad marketplace sales, and other applications. With an increasing range of business partnerships, however, and with the Infosys partnership as the most prominent, the secret is increasingly out.
How Infosys and SingleStore Work Together
Infosys and SingleStore work together with both new and existing customers. Infosys looks for opportunities where the SingleStore database adds unique value, due to its speed, scalability, SQL compatibility, and other features. SingleStore looks for opportunities where advising is needed at a broader level than whether the SingleStore database is the right fit for a specific use case – where an organization is looking at significant change within IT, such as moving a large proportion of their application portfolio from on-premises servers to the cloud.
Infosys sees SingleStore as a key enabler, and well-suited for use cases such as:
- Real-time business intelligence
- Offloading OLTP databases
- Performing operational analytics
- Building real-time apps for highly-concurrent read access (such as for dashboards and internal enterprise apps)
- Performing real-time analytics, such as anomaly detection to spot fraud and customer segmentation
Here are a few examples of recent implementations of SingleStore in Infosys-led projects:
- A large consumer electronics company is moving internal data warehousing workloads from Vertica to SingleStore. Infosys is working closely with SingleStore to set up and manage SingleStore infrastructure to support the former Vertica workload, migrate applications out of Vertica, and point dashboards to run on SingleStore.
- A global music company is using SingleStore to power real-time business intelligence (BI) solutions. Tableau is used as a visualization layer for data extracted from a range of sources and displayed in dashboards. Formerly, the dashboards had inconsistency and latency issues. Infosys introduced SingleStore as an intermediate, low-latency layer, moving data sets from Hadoop to SingleStore.
- A leading bank is offloading a transaction processing workload from the leading “old SQL” database to SingleStore. Offloading the transactional database to SingleStore resulted in cost savings and a better customer experience.
Learn More
This blog post may have left you curious about using the SingleStore database; about working with Infosys; or about working with Infosys and SingleStore as partners. You can contact Infosys at askus@infosys.com; contact SingleStore; or try SingleStore for free.
Read Post

Data Intensity
What is The Database of Now™?
The Database of Now™ delivers operational insights and advantages by providing the current state of the business. It is a modern, efficient approach to cloud data management which broadens, accelerates, and simplifies access to all the relevant in-the-moment and historical data while unifying data access styles and patterns.
Digital transformation projects have accelerated to meet the increased demand for digital products and services providing answers and solutions with immediacy. With the onset of the ‘always-on’ culture, the pervasive use of smartphones and ubiquitous devices has driven a global shift in customer experience and consumer expectations. Business is now won or lost in a moment.
To put this in perspective, in the world of institutional and high-frequency derivatives and commodities trading, every nanosecond of added latency costs financial traders millions of dollars as losses to competing traders. Exchanges such as the Investors Exchange (IEX) and NASDAQ reward maximum order execution performance. For web experience, Akamai found in 2017 that added latency of just 100ms in website load time drops conversion rates by 7%.
Seconds can save lives too. Thorn’s mission is to save children from sex trafficking and they do that by continuously processing massive amounts of web data to identify children and decrease law enforcement investigation time by as much as 63%. And True Digital seeks to proactively reduce the likelihood of new viral hotspots by monitoring mass population movement trends and the rates of population density changes through anonymized cellphone location data.
Each of these scenarios defines a make-or-break moment in time. “Now scenarios” are time-critical, but the length of time available for effective action varies by situation, as does the variety, volume, and velocity of data required. What is essential for “Now scenarios” is to leverage all the relevant data to establish the most accurate, complete, and timely context to drive proactive responses. For business operations, real revenue is lost or gained in a split second. For customer experience, latency adversely affects the interactivity and responsiveness customers expect. For law enforcement and public health, lives are at stake.
The Database of Now™ delivers these operational insights and advantages by providing the current state of the business, so that businesses can proactively identify, capture and capitalize on the most crucial moments for their own endeavors and their customers’ success. It achieves this by simplifying the data infrastructure required to execute diverse workloads across various data access styles, patterns and types. Data professionals, application stakeholders and end users gain the advantages of speed, scale and simplicity.
Read Post

Data Intensity
What is HTAP?
HTAP, or hybrid transaction/analytics processing, combines transactions, such as updating a database, with analytics, such as finding likely sales prospects.
An HTAP database supports both workloads in one database, providing speed and simplicity. And today, “cloud-native HTAP” is a thing; users want an HTAP database that they can mix and match smoothly with Kafka, Spark, and other technologies in the cloud. Use cases include fraud prevention, recommendation engines for e-commerce, smart power grids, and AI.
HTAP databases work with – and, to a certain degree, are designed for – integration with streaming data sources such as Kafka, and with processing frameworks used for advanced analytics, AI and machine learning, such as Spark. They serve multiple analytics clients, from business analysts typing in SQL queries, to BI tools, apps, and machine learning models that generate queries by the scores or thousands per second.
Before HTAP – Separate Transactions and Analytics
HTAP combines different kinds of data processing into one coherent whole. The two types of processing differ considerably. Transaction processing – adding and updating records in a database – demands a very high degree of reliability for single-record operations, along with accuracy and speed. “Update Sandy Brown’s current address” is an example of a transactional update.
Analytics processing, on the other hand, means looking very quickly through one or more database tables for a single record, or many records, or total counts of a type of record. “Find me all the subscribers who live in Colorado and own their own home” is an example of an analytics request.
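Those two requests map naturally onto very different SQL shapes. The table and column names below are hypothetical, but they show the single-row transactional update versus the many-row analytical scan.

```python
# Hypothetical schema; the point is the shape of the two workloads.

# Transactional (OLTP): touch one row, with full reliability.
UPDATE_ADDRESS = """
UPDATE subscribers
SET street_address = %s, city = %s, state = %s, zip = %s
WHERE subscriber_id = %s
"""

# Analytical (OLAP): scan and filter many rows to answer a question.
COLORADO_HOMEOWNERS = """
SELECT subscriber_id, full_name, city
FROM subscribers
WHERE state = 'CO' AND owns_home = TRUE
"""
```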
The first effective databases, first widely used in the 1970s and 1980s, were transaction-oriented. They came to be called online transaction processing (OLTP) systems. OLTP systems were optimized to work on underpowered computers with small hard disks – by today’s standards, of course. The only analytics was through printed reports, which might be sorted on various key fields, such as by state or ZIP code.
When analytics was added on later, the transactional systems were already busy, so the data was copied onto a separate computer, running different software. These databases are called online analytics processing (OLAP) databases. Data warehouses and data marts are specialized OLAP databases that house non-operational data for analysis.
Data on OLAP systems was queried using various languages, which coalesced around structured query language (SQL). At first, analytics queries were entered directly by individual analysts; eventually, business intelligence (BI) programs were used to make querying easier. More recently, software applications generate queries of their own, often at the rate of thousands per second.
An entire process and discipline called extract, transform, and load (ETL) was created simply to move data from OLTP to OLAP systems. As part of the ETL process, data owners may mix in different databases of their own, externally purchased data, social signals, and other useful information. However, the use of three different silos means that data in the OLAP databases is always out of date – often from one day to one week old.
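A toy version of that ETL step might look like the following sketch. Hosts, schemas, and the nightly window are hypothetical; real pipelines add transformation, enrichment, and error handling, but even this minimal copy shows why the analytics side lags the transactional side by at least one batch cycle.

```python
import pymysql

# Hypothetical endpoints: one OLTP source, one OLAP reporting target.
oltp = pymysql.connect(host="oltp.example.com", user="etl", password="***",
                       database="orders")
olap = pymysql.connect(host="warehouse.example.com", user="etl", password="***",
                       database="reporting")

# Extract: pull yesterday's orders from the transactional system.
with oltp.cursor() as src:
    src.execute(
        "SELECT order_id, customer_id, total, ordered_at FROM orders "
        "WHERE ordered_at >= CURDATE() - INTERVAL 1 DAY AND ordered_at < CURDATE()"
    )
    rows = src.fetchall()

# Load: copy them into the reporting table (the "transform" step is omitted).
with olap.cursor() as dst:
    dst.executemany(
        "INSERT INTO orders_fact (order_id, customer_id, total, ordered_at) "
        "VALUES (%s, %s, %s, %s)",
        rows,
    )
olap.commit()
oltp.close()
olap.close()
```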
The Move to HTAP
The OLTP/ETL/OLAP structure is still widely used today. However, over time, both OLAP and, more slowly, OLTP databases were given the ability to work in a distributed fashion. That is, a single data table can now be distributed across multiple machines.
Being distributed across several servers allows the data table to be much larger. A distributed data table can have its performance boosted at any time, simply by adding more servers to handle more transactions or reply to more queries. A database – one or more data tables, serving related functions on overlapping data – can now run on a flexibly sized array of machines, on-premises or in the cloud.
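As a sketch of what that looks like in practice, the DDL below declares a table whose rows are spread across nodes by a shard key. The schema is made up, and the SHARD KEY clause shown is SingleStore-flavored syntax that may vary by version.

```python
# Sketch: create a distributed table whose rows are hash-partitioned
# across the cluster by user_id. Hypothetical schema; SingleStore-flavored DDL.
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="root", password="", database="demo")
with conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS page_views (
            user_id   BIGINT NOT NULL,
            url       TEXT,
            viewed_at DATETIME,
            SHARD KEY (user_id)
        )
    """)
conn.close()
```

Adding servers then lets the cluster spread those shards over more machines to handle more transactions and answer more queries.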
As these capabilities have been added, the exciting possibility of intermixing OLTP and OLAP capabilities in a single database has come to fruition. The database software that makes this possible was named hybrid transaction/analytics processing (HTAP) by Gartner in 2013. In 2014, the first version of SingleStoreDB's HTAP product was made available.
This capability is so new that it has many names, including hybrid operational analytics processing (HOAP) and translytical databases (which blend transactions and analytical functions). HTAP, HOAP, and translytical databases are also described as performing operational analytics – “analytics with an SLA,” or analytics that have to deliver near-real-time responsiveness. Gartner has also come up with augmented transaction processing (ATP), which describes a subset of HTAP workloads that include operational AI and machine learning.
See more: The Forrester Wave™: Translytical Data Platforms, Q4 2022
The Benefits of HTAP
HTAP has many benefits. It creates a simpler architecture, because two separate types of databases, as well as the ETL process, are replaced by a single database. Data copies are eliminated, too. Instead of data being stored in OLTP for transactions, then copied to OLAP – perhaps multiple times – for analytics, a single source of truth resides in the HTAP database.
These fundamental changes have add-on benefits. Operations become much simpler and easier, because only one system is running, not several. Securing a single database is easier than securing multiple data copies on different systems. And data can be fresh – as soon as data comes in for processing, it’s also available for analytics. There is no more need to wait hours or days – sometimes longer – for data to go through OLTP and ETL before it’s available for analytics.
HTAP can also deliver very large financial benefits. Simplicity in architecture and operations results in significant cost savings, while higher performance makes existing revenue-producing functions more productive and makes new ones possible.
The Internet of Things (IoT) benefits greatly from HTAP. If you’re running a smart grid, you need to be running fast, from the latest data. Analysts, dashboards, and apps all need access to the same, updated data at once.
Machine learning and AI are actually impractical without HTAP. There isn’t much point in running a machine learning algorithm if it can’t learn from current, as well as historical, data. No one wants a predictive maintenance program that tells you your oil well was likely to need urgent maintenance a week ago, or that there were several interesting travel bargains available yesterday.
How SingleStore Fits
SingleStore was conceived as an HTAP database before the term was even coined by Gartner. The company was founded in 2011 to create a general-purpose database that supports SQL, while combining transactions and analytics in a single, fast, distributed database.
The result is now a cloud-native, scalable SQL database that combines transactions and analytics. Today, SingleStoreDB Self-Managed is available as downloadable software that you can run in the cloud or on-premises, and SingleStoreDB Cloud is available as an elastic managed service in the cloud.
SingleStore allows customers to handle transactional operations in rowstore tables, which run entirely in memory. Most analytics functions are handled by columnstore tables, which reside largely on disk. Data is loaded into, and moved between, tables by Pipelines, a fast and efficient alternative to ETL.
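Here is a rough sketch of that setup: an in-memory rowstore table for transactional writes, a columnstore table for analytics, and a Pipeline pulling events from Kafka. The table names and Kafka endpoint are hypothetical, and the exact DDL (for example, CREATE ROWSTORE TABLE and SORT KEY) varies by SingleStoreDB version.

```python
# Sketch of the rowstore / columnstore / Pipelines pattern described above.
# Hypothetical names; DDL details differ between SingleStoreDB versions.
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="root", password="", database="demo")
with conn.cursor() as cur:
    # Rowstore table: kept in memory, optimized for point reads and writes.
    cur.execute("""
        CREATE ROWSTORE TABLE IF NOT EXISTS sessions (
            session_id BIGINT PRIMARY KEY,
            user_id    BIGINT,
            started_at DATETIME
        )
    """)

    # Columnstore table: mostly on disk, optimized for scans and aggregates.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS events (
            user_id     BIGINT,
            event_type  VARCHAR(64),
            occurred_at DATETIME,
            SORT KEY (occurred_at)
        )
    """)

    # Pipeline: continuously ingest events from a Kafka topic into the table.
    cur.execute("""
        CREATE PIPELINE events_pipeline AS
        LOAD DATA KAFKA 'kafka.example.internal:9092/events'
        INTO TABLE events
        FORMAT CSV
    """)
    cur.execute("START PIPELINE events_pipeline")
conn.close()
```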
SingleStore has since gone further, introducing SingleStore Universal Storage in 2019. Universal Storage is a new expression of the core idea behind HTAP. In Universal Storage, rowstore and columnstore tables each gain some of the attributes of the other. For instance, rowstore tables now have data compression, which was formerly the sole province of columnstore. And columnstore tables can now quickly find, and even update, a single record or a few records – capabilities that were formerly the hallmark of rowstore.
And, we continue to push HTAP databases forward to power the development of real-time applications.
Increasingly, with Universal Storage, a single table type can fill both transactional and analytical needs. Any needed performance boost is provided by scalability – simply adding servers. Future plans for Universal Storage include even higher levels of convergence.
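As a rough illustration of what that convergence means day to day, the sketch below runs a single-record update and a wide aggregate against one hypothetical table, without choosing a second table type for either workload.

```python
# Sketch: one table serving both a point update (transactional) and a
# full-table aggregate (analytical). Hypothetical schema and values.
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="root", password="", database="demo")
with conn.cursor() as cur:
    # Point update on one record -- historically a rowstore strength.
    cur.execute(
        "UPDATE readings SET status = %s WHERE sensor_id = %s",
        ("calibrated", 1042),
    )
    conn.commit()

    # Aggregate scan across the table -- historically a columnstore strength.
    cur.execute("SELECT sensor_id, AVG(value) FROM readings GROUP BY sensor_id")
    for sensor_id, avg_value in cur.fetchall():
        print(sensor_id, avg_value)
conn.close()
```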
Read Post

Product
Announcing SingleStoreDB Self-Managed 7.1
With the SingleStoreDB Self-Managed 7.1 release, we’re continuing our journey to a simpler data platform, offering speed, scalability, and SQL – accessible to everyone, and easy to use. The release is available today as downloadable software and at the core of SingleStoreDB Cloud, our elastic managed service in the cloud.
Read Post

Case Studies
Case Study: True Digital Group Helps to Flatten the Curve with SingleStore
True Digital Group is using SingleStore to power a contact tracing app that helps prevent the spread of COVID-19 in Thailand. (See our joint press release.) The app uses React and Web Workers in the frontend, with SingleStore Pipelines and time series functions handling fast ingestion of events, and geospatial functions used to plot data on the map in real time. The first functional version of the app was built in two weeks.
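For a sense of the kind of query such an app might run (this is a hypothetical sketch, not True Digital’s actual schema or code), the example below looks for recent check-ins within 100 meters of a given point using SingleStore’s geospatial functions.

```python
# Hypothetical sketch of a proximity query for a contact tracing workload:
# recent device check-ins within 100 meters of a point in Bangkok.
# Schema and data are invented; not True Digital's implementation.
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="root", password="", database="tracing")
with conn.cursor() as cur:
    cur.execute("""
        SELECT device_id, recorded_at
        FROM checkins
        WHERE recorded_at >= NOW() - INTERVAL 14 DAY
          AND GEOGRAPHY_WITHIN_DISTANCE(
                location,
                GEOGRAPHY_POINT(100.5018, 13.7563),
                100)
        ORDER BY recorded_at DESC
    """)
    for device_id, recorded_at in cur.fetchall():
        print(device_id, recorded_at)
conn.close()
```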
Our Global Lockdown
Preventing the spread of the COVID-19 disease has been our collective priority in this pandemic. By avoiding non-essential, discretionary travel such as shopping trips and social gatherings, we keep ourselves safe and avoid unknowingly spreading the virus, as asymptomatic carriers, to the most vulnerable members of society. The latest data from the hardest-hit hotspots of the outbreak shows that social distancing is starting to have the intended effect of flattening the curve. Millions of businesses have been forced to close, with a disproportionate impact on the restaurant, entertainment, travel and hospitality industries. Finding ways to safely reopen economies is urgently needed.
Read Post