How to Migrate From MySQL to SingleStore


Sarung Tripathi

Global Head of Pre-Sales


For decades, organizations have used MySQL for their customer-facing applications. These apps have primarily been transactional in nature, supporting high-traffic websites, e-commerce engines and more. More recently, these applications have been powered by MySQL as a component of frameworks like the LAMP stack and Laravel.

The proliferation of SaaS apps in the last few years has furthered MySQL adoption. However, as companies push to embed analytics in their apps while keeping up with high transaction volumes, they are pushing the database to its limits and quickly realizing MySQL isn't designed to keep up.

Why migrate your app?

When experiencing scaling issues with MySQL or other open source databases, developers often turn to approaches like sharding middleware or NoSQL. Distributed SQL databases supersede both approaches: developers keep their ACID-compliant transactions, complex joins and other table-stakes database features without compromise.

SingleStore is a distributed SQL database that follows the MySQL wire protocol, making it incredibly easy to migrate from any flavor of MySQL (including AWS RDS, Google Cloud SQL, Azure MySQL or others) to unlock significant performance advantages — particularly around data ingest, query latency and high concurrency.
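In practice, wire-protocol compatibility means any standard MySQL client or driver can connect to SingleStore unchanged. A minimal sketch with the mysql CLI (the endpoint is a placeholder, following the format used in the examples later in this post):

mysql -u admin -h svc-your-cluster-id-here-ddl.aws-virginia-2.svc.singlestore.com --default-auth=mysql_native_password -p

Existing application code simply points its connection string at the new endpoint.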

MySQL Migration: A Brief Customer Story

Many organizations have migrated their applications from MySQL to SingleStore. The majority of these migrations were completed in weeks — some in just days — given the ease of schema and query conversion.

Stream Hatchet provides comprehensive analytics and leaderboards on e-sports data from streaming platforms. This includes retention of large amounts of historical data, as well as large aggregations to power competitive analyses, influencer targeting and more. When the company needed a database solution capable of handling large amounts of data without compromising analytics speed, Stream Hatchet evaluated several technologies: AWS Athena, Elasticsearch, MongoDB and various cloud-hosted MySQL options. Ultimately, they selected SingleStore for the following capabilities in a single database technology:

  • The ease and familiarity of MySQL syntax
  • The native multi-model support for ultra-fast time-series data ingestion and calculations
  • Reading and writing JSON documents
  • Simple and efficient relational SQL
  • Support for efficient and lightning-fast joins across multiple tables for complex analytic queries and massive scans in just milliseconds
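To illustrate the JSON point above: SingleStore stores JSON natively and offers shorthand extraction operators. A minimal sketch with a hypothetical table (not Stream Hatchet's actual schema):

CREATE TABLE stream_events (
event_id BIGINT NOT NULL,
payload JSON NOT NULL,
KEY (event_id) USING CLUSTERED COLUMNSTORE );

INSERT INTO stream_events VALUES (1, '{"game": "valorant", "viewers": 1200}');

-- ::$ extracts a JSON field as text, ::% as a double
SELECT payload::$game, payload::%viewers FROM stream_events;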

Stream Hatchet started their evaluation by using SingleStore Pipelines to ingest bulk data from S3 and test out query performance. Once they were satisfied, they added insert statements for real-time ingest from the application into SingleStore. After seeing SingleStore's impressive ingest performance, they stopped inserting into RDS MySQL and redirected the frontend application logic to also point to SingleStore. Easy!

Several other SingleStore customers have migrated from MySQL, including Fathom Analytics. In fact, you can check out a live dashboard powered by SingleStore on their website!

Methods for completing a MySQL Migration to SingleStore

Bulk data load

In this example, we’ll use a table we’ve created in AWS Aurora MySQL and migrate it to SingleStore.

Want to see more? You can watch our full, on-demand webinar on the topic. 
Or if you prefer a video tutorial, our MySQL Migration demo has you covered.

AWS Aurora MySQL Table:

CREATE TABLE `scan` (
`scan_id` bigint(20) NOT NULL,
`package_id` bigint(20) NOT NULL,
`loc_id` int(11) DEFAULT NULL,
`Loc_name` char(5) DEFAULT NULL,
PRIMARY KEY (`package_id`,`scan_id`)
) ;

Scan table in MySQL:

select count(*) from scan;
7340032

Simple export of data as a CSV:

SELECT * from scan
INTO OUTFILE s3 's3://data-bucket-pb/Jan13/scan.csv'
FORMAT CSV
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\r\n';
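Note that INTO OUTFILE S3 is an Aurora-specific extension. On self-managed or non-Aurora MySQL, a plain local export works the same way (the path below is illustrative), and the resulting file can then be uploaded to S3:

SELECT * FROM scan
INTO OUTFILE '/tmp/scan.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\r\n';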

Create a database and a table. Note the addition of a SHARD KEY and a COLUMNSTORE KEY in the SingleStore DDL. These enable optimal distribution and organization of data to ensure lightning-fast queries. The SingleStore documentation offers advice on how to select these keys for your tables:

create database mem_0113;
use mem_0113;

create table scan (
scan_id BIGINT NOT NULL,
package_id BIGINT NOT NULL,
loc_id INT,
loc_name CHAR(5),
KEY (scan_id) USING CLUSTERED COLUMNSTORE,
SHARD(package_id) );

Create SingleStore Pipeline to get data from S3. This is a super simple way to get data from several external sources:

CREATE PIPELINE pipe1_scan
AS LOAD DATA S3 'data-bucket-pb/Jan13/scan.csv.part_00000'
CONFIG '{REDACTED} '
CREDENTIALS '{REDACTED} '
INTO TABLE mem_0113.scan
FORMAT CSV FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
(scan_id,package_id,loc_id,loc_name);

Start SingleStore Pipeline:

START PIPELINE pipe1_scan;
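While the pipeline runs, you can check its progress. A quick sketch using SingleStore's pipeline metadata views (the filter value matches the pipeline created above):

SHOW PIPELINES;
SELECT * FROM information_schema.PIPELINES_FILES
WHERE pipeline_name = 'pipe1_scan';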

Check the table for records; we have the same number of rows as we did in Aurora:

select count(*) from scan; --7340032

SingleStore Replicate Tool

SingleStore offers lightweight replication tooling for the bulk data load of an initial migration. The same tool can also perform incremental change data capture (CDC) after the initial load. Together, these two capabilities let users test their workload on SingleStore, then cut over to production with zero downtime. Let's look at another example with a table in RDS MySQL, covering the bulk data load:

AWS RDS MySQL Table:

CREATE TABLE `scan` (
`scan_id` bigint(20) NOT NULL,
`package_id` bigint(20) NOT NULL,
`loc_id` int(11) DEFAULT NULL,
`Loc_name` char(5) DEFAULT NULL,
PRIMARY KEY (`package_id`,`scan_id`)
) ;

select count(*) from scan;
7340032

The scan table includes 7.3 million records.

Configuration File:

To configure the connectivity between RDS MySQL and SingleStore, we simply populate configuration files with the endpoints of both databases. Below is an example of the SingleStore config file (YAML):

type: SINGLESTORE

host: svc-1732741a-f499-467c-a722-9887d73150c1-ddl.aws-virginia-2.svc.singlestore.com
port: 3306

username: <redacted>
password: <redacted>

#credential-store:
# type: PKCS12
# path: #Location of key-store
# key-prefix: "memsql_"
# password: #If password to key-store is not provided then default password will be used

max-connections: 30

max-retries: 10
retry-wait-duration-ms: 1000
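The source side gets a matching connection file (the conf/conn/mysql.yaml referenced in the command below). A minimal sketch, assuming the same key names apply to the MySQL type; the RDS endpoint is a placeholder:

type: MYSQL

host: your-instance.abc123xyz.us-east-1.rds.amazonaws.com
port: 3306

username: <redacted>
password: <redacted>

max-connections: 30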

Execute Replicate Command:

Now, we execute the REPLICATE command based on the configuration file previously populated.

./bin/replicant snapshot conf/conn/mysql.yaml conf/conn/singlestore.yaml
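The snapshot mode performs the one-time bulk load. For the incremental CDC phase mentioned above, the tool also runs in continuous modes; as a sketch (the mode name is assumed from the replicant CLI; verify against your version), a run that snapshots and then streams ongoing changes would look like:

./bin/replicant full conf/conn/mysql.yaml conf/conn/singlestore.yaml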

Observe Bulk Load:

A sample run shows a large database being replicated from MySQL to SingleStore.

Verify that databases and tables exist in SingleStore:

select count(*) from scan; --7340032

MySQL dump

AWS RDS MySQL Table:

CREATE TABLE `scan` (
`scan_id` bigint(20) NOT NULL,
`package_id` bigint(20) NOT NULL,
`loc_id` int(11) DEFAULT NULL,
`Loc_name` char(5) DEFAULT NULL,
PRIMARY KEY (`package_id`,`scan_id`)
) ;

select count(*) from scan;
7340032

The scan table includes 7.3 million records.

Dump Command:

Open a terminal on the server where MySQL runs and run the commands listed below:

Template

mysqldump -h <source_host> -u <username> -p <source_database_name> > <filename>.sql

Command

mysqldump -h 127.0.0.1 -u <redacted> -p scan > scan_dump.sql

A mysqldump file contains both DDL and DML. In this scenario, the dump would look like this:

CREATE TABLE `scan` (
`scan_id` bigint(20) NOT NULL,
`package_id` bigint(20) NOT NULL,
`loc_id` int(11) DEFAULT NULL,
`Loc_name` char(5) DEFAULT NULL,
PRIMARY KEY (`package_id`,`scan_id`)
);

INSERT INTO scan VALUES (1001, 1, 1, 'CA');
INSERT INTO scan VALUES (1002, 1, 2, 'FL');

Of course, since SingleStore is a distributed SQL database, we need to add our shard and sort keys as we did in the bulk data load example. So we simply edit the DDL in the dump file to reflect those:

create table scan (
scan_id BIGINT NOT NULL,
package_id BIGINT NOT NULL,
loc_id INT,
loc_name CHAR(5),
KEY (scan_id) USING CLUSTERED COLUMNSTORE,
SHARD(package_id) );

We will then save the edited mysqldump file and create a new database in our SingleStore cluster.
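Creating the destination database first is a one-liner (the database name is illustrative):

mysql -u admin -h <endpoint> --default-auth=mysql_native_password -p -e "CREATE DATABASE destination_db;"

With the database in place, load the edited dump: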

Template:

mysql -u admin -h <endpoint> --default-auth=mysql_native_password -p <destination_database_name> < <filename>.sql

Example:

mysql -u admin -h svc-your-cluster-id-here-ddl.aws-virginia-2.svc.singlestore.com --default-auth=mysql_native_password -p destination_db < scan_dump.sql

The tables with data loaded should now appear in your SingleStore database!
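As with the other methods, a quick count confirms the load matches the source:

select count(*) from scan; --7340032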

What to Look Out for When Migrating From MySQL to SingleStore

We all know database migrations are not always as easy as they seem. SingleStore's adherence to the MySQL wire protocol makes it very straightforward to migrate, but there are a few things to look out for to ensure your MySQL migration is as simple as the examples shared above:

  • Today, SingleStore does not support foreign keys. Customers like Fathom Analytics and Stream Hatchet enforce referential integrity at the application level, and there are real benefits to keeping this logic in the app. Foreign keys are difficult to implement in a distributed system without compromising speed, though the SingleStore team continues to work toward supporting them. In practice, this means stripping FOREIGN KEY clauses from your DDL during conversion (see the sketch after this list).
  • SingleStore is a multi-model database, unlocking unique functions on existing data types you may have in MySQL.
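To make the first point concrete, here is a minimal sketch of that DDL conversion, reusing the scan table from the earlier examples and a hypothetical parent table named package:

-- MySQL source DDL with a foreign key
CREATE TABLE scan (
scan_id BIGINT NOT NULL,
package_id BIGINT NOT NULL,
PRIMARY KEY (package_id, scan_id),
FOREIGN KEY (package_id) REFERENCES package (package_id)
);

-- SingleStore DDL: foreign key removed; sharding both tables on the
-- join key keeps joins to package local to each partition
create table scan (
scan_id BIGINT NOT NULL,
package_id BIGINT NOT NULL,
KEY (scan_id) USING CLUSTERED COLUMNSTORE,
SHARD(package_id) );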

Summary

As you can see, there are plenty of ways to easily migrate your existing MySQL database to SingleStore. This simple migration elevates your database from single-node and slow to distributed and lightning-fast. Ingest data more easily, speed up your queries and improve concurrency support with one of these migration options today.

Singlestore Helios offers $600 in free credits to get started with just a few clicks. The Singlestore Helios Engineering Team that contributed to this blog is always standing by to assist you with your MySQL migration, or any subsequent questions you may have about the platform.

Schedule your time to chat with SingleStore engineers today.
