SingleStore is proud to announce the general availability of SingleStore DB 7.1 for immediate download. SingleStore DB 7.1 is also available today on SingleStore Managed Service, the company’s elastic cloud database, available on public cloud providers around the world.
With the availability of SingleStore DB 7.1, SingleStore further cements itself as the leading NewSQL platform for real-time analytics and augmented transaction processing [Ron19], delivering superb transactional and analytical performance in one system. The release also delivers improved resilience features to further strengthen SingleStore for mission-critical applications providing transaction processing, operational analytics, and more.
With limitless scalability, high availability (HA) built in, disaster recovery (DR) support, and full SQL, SingleStore is a strong foundation for your digital transformation initiatives [DT20]. With SingleStore, you can change the way you do business, and capture new opportunities without concern for speed and scale limits. At the same time, you can rely on existing skills and the SQL ecosystem to build groundbreaking new applications.
This blog post describes the major features of SingleStore DB 7.1 self-managed software and SingleStore Managed Service in some detail. For even more detail, see the release notes. Also see our detailed updates on disaster recovery improvements in SingleStore DB 7.1, improvements to SingleStore Tools, and our new SingleStore Training modules.
Major Feature Areas in SingleStore DB 7.1
Along with incremental improvements across the range of database platform functionality, SingleStore DB 7.1 provides especially notable new features in the following areas.
SingleStore’s Universal Storage technology is being delivered as an arc of new features over several releases. The aim of this technology is to support real-time analytics and online transaction processing (OLTP) on SingleStore, with great performance and low total cost of ownership (TCO).
The Universal Storage story began with SingleStore DB 7.0 [SS19] and continues in SingleStore DB 7.1. Universal Storage is an extension of our columnstore technology that excels at analytics and which also improves OLTP, including support for indexes, unique keys, seeks, and fast, highly selective, nested-loop-style joins. Because the data is highly compressed and doesn’t all have to fit in RAM, Universal Storage gives excellent TCO. See our upcoming SingleStore DB 7.1 Universal Storage blog for more details [SS20].
Databases that support applications that people run their businesses on must be resilient to failure. SingleStore has supported transactions, persistence, high availability (HA), and disaster recovery (DR) for years. We’re continuing to build and strengthen these feature areas.
In SingleStore DB 7.1, we’re delivering fast DR failback. We use a differential approach to reduce the amount of data that must be transmitted when you bring a previously-failed primary cluster back on line and make it the primary again. See our blog on DR failback for more information [YW20].
SingleStore’s HA is based on a partitioned data model: the primary copy of each partition and a replica are kept on two separate leaf nodes, where SingleStore stores the actual data; metadata is stored in aggregator nodes. (We’ll use the term “node” to mean “leaf node” in the following.)
If a node fails, a replica of its data is readily available on another node, and all data remains accessible. In the previous release, SingleStore DB 7.0, all partitions from one node were replicated on a single other node. So if one node failed, that second node could become a hot spot, subject to contention, since it held twice as much data as before – and thus took on twice its normal share of query processing work. In SingleStore DB 7.1, we introduce a feature called load-balanced failover to solve this problem.
With load-balanced failover, when a node fails, its partitions are spread across several other nodes, not just one. This avoids creating a hot spot after a failure, which would slow performance, while also keeping the probability low that the loss of a node will make data unavailable.
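As a sketch of how this behavior can be controlled: SingleStore exposes an engine variable for the failover fanout mode. The variable name below, `leaf_failover_fanout`, and its values are taken from SingleStore's configuration settings; check the 7.1 release notes for the exact name and default in your version.

```sql
-- Illustrative only: switch the cluster to load-balanced failover, so a
-- failed leaf's partitions are spread across several surviving leaves
-- instead of all landing on a single paired node.
SET GLOBAL leaf_failover_fanout = 'load_balanced';

-- Revert to the pre-7.1 behavior (all partitions fail over to the pair).
SET GLOBAL leaf_failover_fanout = 'paired';
```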
SingleStore supports the MPSQL language for programming internal extensions, including stored procedures and user-defined functions. The 7.1 release enhances this capability, in particular for transaction processing application development and for handling JSON.
We’re making it easier than ever to build applications with SingleStore. For example, particularly in OLTP applications, it’s common to fetch a single row in a stored procedure, and fill a separate local variable with the value for each column of the row. This was possible in our MPSQL language before, but took more lines of code than was desirable. This same operation can now be performed with much-improved brevity, using syntax like this:
SELECT first_name, last_name INTO var_first, var_last FROM employees WHERE id = 2;
In addition, this also works with dynamic SQL. For example:
EXECUTE IMMEDIATE sql_string INTO var_first, var_last;
This syntax is similar to that used in other database products, making it familiar and easier to learn, and simplifying application porting from other database offerings into SingleStore.
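To show the new syntax in context, here is a minimal stored procedure sketch using the `employees` table from the example above. The procedure name, parameter, and variable declarations are hypothetical, added only for illustration.

```sql
delimiter //
create or replace procedure get_employee_name(emp_id int) as
declare
  var_first text;
  var_last text;
begin
  -- Fetch one row's columns directly into local variables,
  -- replacing the longer cursor-style code needed before 7.1.
  select first_name, last_name into var_first, var_last
  from employees
  where id = emp_id;
  echo select var_first as first_name, var_last as last_name;
end //
delimiter ;
```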
In addition, SQL statements in stored procedures can now access fields of local variables or parameters that have a RECORD type. This can significantly reduce the lines of code needed for cursor-like operations. For example:
create table t(a int, b int);
insert t values(1,2),(3,4),(5,6),(7,8);
create table t2(a int, b int);

delimiter //
create or replace procedure p() as
declare
  q query(a int, b int) = select a, b from t where a >= 5;
begin
  for r in collect(q) loop
    call echo_stuff(r);
    insert t2 values(r.a, r.b); -- notice use of record fields inline
  end loop;
end //

create or replace procedure echo_stuff(r record(a int, b int)) as
begin
  echo select r.a, r.b; -- also using record fields here
end //
delimiter ;
Then we can call this procedure and see the results, as follows:
memsql> call p();
+------+------+
| a    | b    |
+------+------+
|    5 |    6 |
+------+------+
1 row in set (0.25 sec)

+------+------+
| a    | b    |
+------+------+
|    7 |    8 |
+------+------+
1 row in set (0.26 sec)

Query OK, 0 rows affected (0.26 sec)

memsql> select * from t2;
+------+------+
| a    | b    |
+------+------+
|    7 |    8 |
|    5 |    6 |
+------+------+
Even though we’ve made this much easier, don’t forget that it’s often best to do things in a set-oriented way, with single SQL statements, rather than process a row at a time in a loop, where possible. That’s because it’s faster and can use fewer lines of code. Can you think of a way to do the same thing as the stored procedure (SP) above, without processing one record at a time?
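One set-oriented answer to the question above: the inserts into t2 can be done in a single SQL statement, with no loop (the per-row echo_stuff calls are the part that genuinely needs row-at-a-time processing):

```sql
-- Equivalent to the looped INSERT in procedure p(), in one statement.
insert into t2 select a, b from t where a >= 5;
```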
Backups are a key part of almost everyone’s system management and HA/DR processes. SingleStore DB 7.0 introduced incremental backup, which is convenient, and in some cases critical, since it allows a quick incremental backup to be done before important operations, such as rolling out a new application change. SingleStore DB 7.1 extends our incremental backup capability in the cloud by now supporting Amazon S3, Azure Blob Store, and GCP storage as targets for incremental backups.
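As a sketch of what an incremental backup chain to a cloud target looks like: the bucket path and credentials below are placeholders, and the exact syntax should be confirmed against the BACKUP DATABASE documentation.

```sql
-- Take the initial (full) backup that starts the incremental chain on S3.
backup database db_name with init to S3 "my-bucket/backups"
  credentials '{"aws_access_key_id": "...", "aws_secret_access_key": "..."}';

-- Later, take a quick incremental backup to the same target, e.g. just
-- before rolling out an application change.
backup database db_name with differential to S3 "my-bucket/backups"
  credentials '{"aws_access_key_id": "...", "aws_secret_access_key": "..."}';
```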
Upgrades are a big part of system management. SingleStore DB 7.1 supports online upgrade from 7.0 and earlier. This allows moving to the latest software version without downtime.
A new management view, MV_BLOCKED_QUERIES, shows what queries are blocked and why they are blocked, to help troubleshoot query wait problems.
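The view lives in information_schema, so inspecting blocked queries is a single SELECT (column names may vary slightly by version):

```sql
-- Show queries that are currently blocked, along with the reason they
-- are waiting, to help troubleshoot query wait problems.
select * from information_schema.mv_blocked_queries;
```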
Additional features available in SingleStore DB 7.1 include the following.
TABLE Function and JSON_TO_ARRAY
The TABLE function is a table-valued function that takes an array as an argument and returns a set of rows, with one row per array element. It can be used in the FROM clause of a query. When used with the JSON_TO_ARRAY function, it’s a convenient way to explode JSON arrays into rowsets. For example:
create table t(id int, json_col json);
insert into t values(1, '[1,2,3]');
insert into t values(2, '[4,5]');
select * from t join table(json_to_array(t.json_col));
+------+----------+-----------+
| id   | json_col | table_col |
+------+----------+-----------+
|    1 | [1,2,3]  |         1 |
|    1 | [1,2,3]  |         2 |
|    1 | [1,2,3]  |         3 |
|    2 | [4,5]    |         4 |
|    2 | [4,5]    |         5 |
+------+----------+-----------+
Partition Split During Backup
SingleStore has great elasticity. You can add leaf nodes and rebalance data partitions across them online. A limit to our elasticity occurs once we get down to having one database partition per leaf node, and then you can’t subdivide further, so adding nodes and rebalancing is no longer effective. Prior to SingleStore DB 7.1, increasing the number of partitions in a database required creating a new database with more partitions, and then copying the data over to it from the original database. This could be labor-intensive.
In SingleStore DB 7.1, we automate the partition split process. This is done through a new version of the backup command, which splits partitions in two as it creates the backup. When you restore the backup, you have the same tables, data, and other objects, but there are twice as many partitions. This saves a lot of labor and time compared to the old approach.
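A sketch of the backup-and-restore flow described above; the backup path is a placeholder, and the exact clause syntax should be confirmed against the BACKUP DATABASE documentation.

```sql
-- Back up db_name, splitting each partition in two as the backup is written.
backup database db_name with split partitions to "/backups/db_name_split";

-- Restoring this backup yields the same tables, data, and other objects,
-- but with twice as many partitions.
restore database db_name from "/backups/db_name_split";
```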
Global Temporary Tables
SingleStore has supported temporary tables for years. The standard temporary tables are local to a session. Two different sessions could use the same temporary table name, and there would be two different tables. Global temporary tables, on the other hand, which are new in SingleStore DB 7.1, can be shared across sessions. These are in-memory rowstore tables that are non-logged, so they are great for inserting bursts of data with no I/O impact. Depending on your hardware, but especially if it uses a regular hard disk drive for the log, the new global temporary tables may triple the speed of burst inserts.
Of course, since global temporary tables are non-logged, and the data is present only in RAM, if a leaf node goes down, the table will go into an error state, and can’t be queried. If this happens, your application will have to drop the table and re-create it to begin again. This tradeoff between fault tolerance and speed may be fine for applications which are trying to maximize performance for bursts of inserts, or operations on intermediate scratch tables used for data transformation. If you need fault tolerance, simply use regular tables.
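A minimal sketch of using a global temporary table for burst inserts; the table and column names are hypothetical.

```sql
-- Create a non-logged, in-memory rowstore table that is shared across
-- sessions, unlike a standard (session-local) temporary table.
create global temporary table staging_events (
  id bigint,
  payload json
);

-- Burst inserts incur no log I/O; the data lives only in RAM, so the
-- table must be dropped and re-created if a leaf node fails.
insert into staging_events values (1, '{"k": "v"}');
```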
We’ve expanded our ingest and backup support for Google Cloud Storage (GCS), continuing our integrations with the industry’s major cloud providers.
Several query optimization improvements are provided in 7.1. A new query hint can help with parameter-sensitive plans (e.g., when a query needs a different plan when run for a narrow range of dates than for a wide range of dates). You can also use a hint to force a query to compile a new plan for each new set of parameter values if it is very sensitive to its parameters. This lets you always get a good plan for those parameters, at the expense of spending more time to compile the query each time it is run with new parameter values.
Industry-Standard Built-In Functions
We have introduced a number of standard built-in functions to ease the development process and migration from other tools.
We now support a configurable password complexity policy that can control different aspects of passwords: the length, the number of alphanumeric and special characters, and the number of occurrences of sequences or repeated patterns in the password. We also now support configurable parameters to lock a user’s account after a certain number of failed login attempts, and to specify how long the account stays locked after the failed-login threshold is reached. These features help protect your systems from password attacks.
These new features allow you to implement your organization’s password policies directly in SingleStore, without relying on a third-party tool or application program. This may be convenient for your developers, and may save you money and reduce complexity by not requiring an extra software purchase.
The SingleStore DB 7.1 release delivers a big advance in our ability to support system-of-record applications on SingleStore, including improvements to Universal Storage (unique keys on columnstores, fast selective joins on columnstores); DR failback; easier stored procedure programming for transactional applications; useful new built-in functions to ease application development and migration; and more.
Are you seeking a data platform with unlimited scale and power, which your SQL-trained developers will readily be able to learn and use, to enable your digital transformation initiatives? Look no further than SingleStore DB 7.1, available for immediate download and via SingleStore Managed Service.
[Ron19] Adam Ronthal, There is Only One DBMS Market!, https://blogs.gartner.com/adam-ronthal/2019/07/17/one-dbms-market/, Gartner, 2019.
[DT20] What is Digital Transformation? The Enterprisers Project, https://enterprisersproject.com/what-is-digital-transformation, 2020.
[SS19] SingleStore Universal Storage – And Then There Was One, https://www.singlestore.com/blog/memsql-universal-storage-then-there-was-one/, September, 2019.
[SS20] SingleStore Universal Storage, Episode 2, <link-TK>, April, 2020.
[YW20] Yu-wang Wang, DR Failback in SingleStore DB 7.1, https://www.singlestore.com/blog/fast-disaster-recovery-failback-memsql-7-1, May, 2020.