A Developer's Guide to SingleStore Sequences

Jul 16, 2025

In the world of modern databases, generating unique, reliable identifiers is a cornerstone of data integrity and correct application behavior.

For developers working with SingleStore, Sequences offer a powerful and efficient way to achieve this. A sequence in SingleStore is a column property that automatically generates unique, sequential numbers. Why are developers increasingly turning to them? Simply put, sequences ensure unique numbers are available without the need for locking tables or rows, significantly boosting performance in multi-user, high-throughput environments — and they provide easy-to-read identifiers without undesired extra digits.

This post will take you through the essentials of SingleStore Sequences. We'll explore their key benefits, common use cases, how they work under the hood and practical examples to get you started. Whether you're an architect designing a new system, a developer looking to optimize ID generation or a decision-maker evaluating database features, understanding sequences can lead to more robust and performant applications.

Key benefits of using sequences in SingleStore

Sequences aren't just another way to create IDs; they bring a host of advantages:

  • Unique, reliable identifiers. This is the primary promise. With sequences, you never have to worry about duplicate keys. Every value generated is guaranteed to be unique, which is crucial for maintaining strong data integrity across your database.
  • Easy-to-read and efficient integers. Sequences now support all integer types, so you can generate values (like 101, 102, 103) in smallint, int or bigint columns. This lets you choose the most space-efficient type for your needs, optimizing storage while keeping values easy to work with.
  • Predictable, sequential order. Sequences are designed to create a logical, sequential flow of values. While it's possible to see small gaps if a transaction is rolled back (as the reserved sequence value might not be used), you won’t have to deal with the large, unpredictable and often alarming gaps that can crop up with traditional auto-increment in distributed systems.
  • Performance impact is minimized due to caching. One of the significant wins with sequences is caching. SingleStore aggregators can pre-fetch and keep a batch of sequence values ready in memory. This allows sequences to handle high-volume inserts with practically no performance impact.
  • Less hassle for developers. Sequences abstract away the complexity of generating unique IDs and managing order. This means less custom application code, fewer headaches dealing with concurrency and race conditions, and more time for developers to focus on building core application features.
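To illustrate the integer-type flexibility, here's a brief sketch; the table and column names are hypothetical:

```sql
-- Hypothetical tables: pick the smallest integer type that fits your ID range.
CREATE TABLE lookup_codes (
    code_id SMALLINT AUTO_INCREMENT AS SEQUENCE PRIMARY KEY, -- up to 32,767 IDs
    label VARCHAR(50)
);

CREATE TABLE clickstream_events (
    event_id BIGINT AUTO_INCREMENT AS SEQUENCE PRIMARY KEY,  -- for very high volumes
    payload JSON
);
```

A SMALLINT key halves the per-row key storage compared to INT, while BIGINT leaves room for billions of rows.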

Common use cases for sequences

When does it make the most sense to use sequences? Here are a couple of common scenarios:

  1. Coordinated multi-application inserts. In systems where multiple applications or services insert data into the same table, ensuring each row gets a unique identifier can be challenging. Letting the database handle this coordination with sequences is a safe, reliable and efficient approach.
  2. Simplified ID generation for single applications. Even for a standalone application, if developers want to avoid the hassle of manually keeping track of the current maximum ID — or don't want to query the table for the maximum value every time the application starts — sequences provide a clean and effortless solution.

Understanding the core: How sequences work in SingleStore

Let's get into the technical details of how sequences are implemented and how you can use them.

Syntax: Declaring a sequence

Sequences are declared as a modifier to the AUTO_INCREMENT declaration when creating a table.

CREATE TABLE order_request (
    order_id INT AUTO_INCREMENT AS SEQUENCE PRIMARY KEY,
    customer VARCHAR(100)
);

This simple addition, AS SEQUENCE, transforms the order_id column's auto-increment behavior to use the sequence mechanism.

Key differences: Sequence vs. classical AUTO_INCREMENT

It's important to understand how sequences differ from — and are better than — the traditional AUTO_INCREMENT behavior, especially in a distributed database like SingleStore:

  • Value jumps. Classical AUTO_INCREMENT columns in a distributed system can exhibit enormous jumps between values generated by different aggregators. A SEQUENCE, on the other hand, will have much smaller, more predictable jumps due to its caching and coordination mechanism.
  • Recovery. Classical AUTO_INCREMENT might require longer recovery times after an aggregator failure to reconcile ID ranges. SEQUENCES are designed to be more resilient with no such prolonged recovery issues.
  • Performance nuances. Classical AUTO_INCREMENT might hold a slight edge in raw insert speed for very specific, non-distributed, extreme high-load scenarios. However, SEQUENCES generally provide comparable performance while significantly enhancing predictability and user experience in distributed environments, particularly due to their effective caching capabilities.

Behind the scenes: Storage and caching

A sequence's state is stored in a randomly-selected partition within your SingleStore cluster. When an INSERT operation requires a new sequence value, a request is made to the leaf node that owns that particular partition.

To optimize this process and reduce latency, each aggregator proactively anticipates requests by keeping a small range of sequence values cached in memory. You can fine-tune this caching behavior using several global variables:

  • sequence_min_cache_count: (Default: 100) This is the minimum size of the interval of values pre-fetched by an aggregator. It's the initial interval size. The size of this interval can influence the jump size you might observe between sequences generated by different aggregators. For example, with a default of 100, an insert processed by one aggregator followed by an insert on another aggregator might show a jump of up to 100 (e.g., ID 101 then ID 201).
  • sequence_max_cache_count: The maximum size the pre-fetch interval can grow to after hitting the resize-up time.
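To inspect or tune these settings, the usual MySQL-compatible global-variable syntax applies; the value below is illustrative, not a recommendation:

```sql
-- Inspect the current sequence cache settings
SHOW GLOBAL VARIABLES LIKE 'sequence%';

-- Example: raise the minimum pre-fetch interval, trading larger jumps
-- between aggregators for fewer round trips to the owning partition
SET GLOBAL sequence_min_cache_count = 500;
```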

Inserting data and manual overrides

Inserting data into a table with a sequence column is straightforward:

INSERT INTO order_request (customer) VALUES ('Name Of Customer');
-- The order_id will be automatically generated by the sequence.

You can also override the automatic generation by providing a specific value:

INSERT INTO order_request (order_id, customer) VALUES (2, 'Name Of Another Customer');

Important: Manually inserting an ID like this can disrupt the sequence. The aggregators' caches are not automatically made aware of this manually inserted value. If you insert 2, an aggregator might later try to generate 2 again.

To resolve this and synchronize the sequence with the actual maximum value in the table after manual insertions, you must run:

AGGREGATOR SYNC AUTO_INCREMENT ON order_request ALL;

This command tells SingleStore to update the sequence's internal counter based on the current data in the table.
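Putting the override and the sync together, a minimal recovery flow might look like this (the ID 5000 is illustrative):

```sql
-- Manually insert an ID that the sequence has not reached yet
INSERT INTO order_request (order_id, customer) VALUES (5000, 'Imported Customer');

-- Re-synchronize so no aggregator later hands out 5000 again
AGGREGATOR SYNC AUTO_INCREMENT ON order_request ALL;

-- Subsequent automatic inserts continue past the manual value
INSERT INTO order_request (customer) VALUES ('Next Customer');
```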

Practical examples: Sequences in action

Let's see how sequences can be used in real-world scenarios.

Example 1. Managing orders and line items

Imagine an eCommerce platform. You need unique IDs for customer orders and for each item within those orders. Sequences are perfect for this.

Scenario. We'll create an order_table for customer orders and an item_in_order table for the individual products in each order. The order_id in order_table will start at 1000.

-- Create orders table with a sequence, order ids start at 1000
CREATE TABLE order_table (
    order_id INT AUTO_INCREMENT AS SEQUENCE PRIMARY KEY,
    customer VARCHAR(100)
) AUTO_INCREMENT = 1000; -- Sets starting value for the sequence

CREATE TABLE item_in_order (
    order_item_id INT AUTO_INCREMENT AS SEQUENCE PRIMARY KEY,
    order_id INT,
    item VARCHAR(100)
);

-- First order
INSERT INTO order_table (customer) VALUES ('Willard Smith');
-- Capture the last generated ID for the current session
SELECT LAST_INSERT_ID() INTO @current_order_id;

INSERT INTO item_in_order (order_id, item) VALUES (@current_order_id, 'Large Hammer');
INSERT INTO item_in_order (order_id, item) VALUES (@current_order_id, 'Bellows');
INSERT INTO item_in_order (order_id, item) VALUES (@current_order_id, 'Steel Anvil');

-- Second order
INSERT INTO order_table (customer) VALUES ('Sabrina Carpenter');
SELECT LAST_INSERT_ID() INTO @current_order_id; -- Get the new order_id

INSERT INTO item_in_order (order_id, item) VALUES (@current_order_id, 'Hacksaw');
INSERT INTO item_in_order (order_id, item) VALUES (@current_order_id, 'Varnish');
INSERT INTO item_in_order (order_id, item) VALUES (@current_order_id, '4x Sand paper');

-- Querying the data
SELECT o.order_id, o.customer, i.order_item_id, i.item
FROM order_table AS o
JOIN item_in_order AS i ON o.order_id = i.order_id
ORDER BY o.customer, o.order_id, i.item;

Results table

| order_id | customer          | order_item_id | item          |
|----------|-------------------|---------------|---------------|
| 1001     | Sabrina Carpenter | 6             | 4x Sand paper |
| 1001     | Sabrina Carpenter | 4             | Hacksaw       |
| 1001     | Sabrina Carpenter | 5             | Varnish       |
| 1000     | Willard Smith     | 2             | Bellows       |
| 1000     | Willard Smith     | 1             | Large Hammer  |
| 1000     | Willard Smith     | 3             | Steel Anvil   |

In this example:

  • AUTO_INCREMENT = 1000 on order_table sets the starting point for its sequence
  • LAST_INSERT_ID() is a crucial function. It returns the most recent automatically generated AUTO_INCREMENT value for the current connection/session. This is how we link items to their respective orders.
  • Updates and deletes on these tables work like any standard SQL operation; the sequence primarily impacts how new order_id and order_item_id values are generated during inserts.

Example 2. Generating unique IDs across multiple tables via a stored procedure

Sometimes, you might need a unique identifier that can be used across several different tables, perhaps for a global logging system or a federated tracking ID. While a single sequence is tied to a single table column, you can create a centralized ID generator using a dedicated table and a stored procedure.

Scenario. We need unique IDs for logs in module_alpha_logs and events in module_beta_events. These IDs should come from a common sequence pool.

-- Create a dedicated table to act as our global ID generator
CREATE TABLE global_id_generator (
    id INT AUTO_INCREMENT AS SEQUENCE PRIMARY KEY
);

-- Example tables that will use the global ID
CREATE TABLE module_alpha_logs (
    log_id INT PRIMARY KEY,
    log_data VARCHAR(255),
    created_at DATETIME DEFAULT NOW()
);

CREATE TABLE module_beta_events (
    event_id INT PRIMARY KEY,
    event_details TEXT,
    event_timestamp TIMESTAMP DEFAULT NOW()
);

-- Stored procedure to get the next unique ID from the generator
DELIMITER //
CREATE OR REPLACE PROCEDURE get_next_global_id() RETURNS INT AS
DECLARE
    new_id INT;
BEGIN
    -- Insert a dummy row into the generator table to trigger its sequence
    -- and get the next ID.
    INSERT INTO global_id_generator (id) VALUES (NULL);
    SELECT LAST_INSERT_ID() INTO new_id;
    RETURN new_id;
END //
DELIMITER ;

-- Using the stored procedure to populate our tables
CALL get_next_global_id();
INSERT INTO module_alpha_logs (log_id, log_data) VALUES (LAST_INSERT_ID(), 'Log entry for Alpha module, process initiated.');

CALL get_next_global_id();
INSERT INTO module_beta_events (event_id, event_details) VALUES (LAST_INSERT_ID(), 'Critical event in Beta module: resource threshold exceeded.');

CALL get_next_global_id();
INSERT INTO module_alpha_logs (log_id, log_data) VALUES (LAST_INSERT_ID(), 'Log entry for Alpha module, process completed.');

-- Verify the unique IDs
SELECT * FROM module_alpha_logs ORDER BY created_at;
SELECT * FROM module_beta_events ORDER BY event_timestamp;
-- You can also inspect the generator table to see the sequence progression
SELECT * FROM global_id_generator ORDER BY id DESC;

Result confirming that global_id_generator is generating unique IDs:

| id |
|----|
| 3  |
| 2  |
| 1  |

This pattern provides a flexible way to manage shared unique IDs, effectively creating a "sequence as a service" within your database. It addresses scenarios where a sequence isn't directly tied to a single table's primary key but is needed more broadly.

Important considerations and limitations

While powerful, sequences have certain characteristics and limitations to keep in mind:

  • Global variable tuning. The sequence_*_cache_count and sequence_*_resize_*_time_ms variables are powerful tools for performance tuning, but changes should be made with an understanding of their impact on cache sizes and potential gap sizes between concurrently generated IDs.
  • One AUTO_INCREMENT AS SEQUENCE per table. A table can have one column defined with AUTO_INCREMENT AS SEQUENCE. If you need multiple independent sequences for different columns within the same table (a rare requirement), or more complex shared sequence logic, the stored procedure pattern (example 2) is the recommended approach.
  • Integer values only. Sequences in SingleStore generate integer-type values.
  • Transaction rollback and gaps. If a transaction inserts a row (thereby consuming a sequence value or a batch from the cache) and then rolls back, those specific sequence values are typically "lost" or skipped. This means you can have gaps in your sequence. This is generally acceptable because sequences guarantee uniqueness, not strict contiguity without any gaps. The primary goal is a unique, ever-increasing number.
  • Cannot decrease sequences. A sequence value always moves forward. There's no built-in mechanism to "roll back" or decrease the current value of a sequence generator itself (though AGGREGATOR SYNC can reset it based on table data).
  • Synchronization is key for manual inserts. Remember to run AGGREGATOR SYNC AUTO_INCREMENT ON your_table ALL; if you manually insert IDs that could conflict with the sequence's natural progression.

Conclusion: Sequence your way to better data

SingleStore Sequences provide a robust, performant and developer-friendly mechanism for generating unique identifiers. They strike an excellent balance between ensuring data integrity, offering predictable behavior and optimizing for the high-concurrency demands of modern applications. By understanding their benefits, how they work and their practical applications, you can leverage sequences to build more scalable and reliable systems.

If you're not already using them, consider evaluating SingleStore Sequences for your next project or for optimizing existing applications. Your applications will thank you.

For more information about SingleStore Sequences, please visit our documentation page. To explore using SingleStore for your next application, reach out to team@singlestore.com or try SingleStore Helios® free.
