Event Processing using Kafka Pipelines vs MemSQL Replicate

Dear Team,

We have seen that, using MemSQL Replicate, we can replicate data from Oracle, PostgreSQL, and other heterogeneous databases. We have also explored ingesting data through MemSQL Kafka pipelines whenever an event is triggered at the source database.
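
For context, the pipeline definition we have been experimenting with looks roughly like the sketch below; the broker address, topic name, table name, and delimiter are placeholders for our actual setup.

    -- Rough sketch of the Kafka ingestion approach we are exploring.
    -- Broker address, topic, and table name below are placeholders.
    CREATE PIPELINE source_events_pipeline AS
    LOAD DATA KAFKA 'kafka-broker:9092/source-db-events'
    INTO TABLE source_events
    FIELDS TERMINATED BY ',';

    -- Once created, the pipeline is started explicitly.
    START PIPELINE source_events_pipeline;

Our understanding is that the pipeline simply pulls whatever lands on the Kafka topic, so the event-triggering logic would live entirely on the source side.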

Between these two approaches, we would like to understand which one is considered industry best practice, along with the pros and cons of each. This will help us finalize our approach for ingesting data into MemSQL.

We would also request you to address the concerns below.

• How do we verify data consistency between source and target in both approaches? (A rough spot-check sketch follows after this list.)
• How do we achieve multi-source correlation in the approaches above?
• How do we monitor the health of the replication if we use Replicate?
• How do we achieve masking and data enrichment using Replicate, if it is recommended?
• Does Replicate support replicating only a limited set of columns, if it is recommended?
• How do we handle multiple concurrent events or updates in both approaches?
• What is the lag between source and target when using Replicate?
• Can we achieve change data capture (CDC) into MemSQL using Replicate or through event processing?
• What is the impact on performance in both cases?
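
For the consistency question above, the kind of spot-check we have in mind is running the same lightweight aggregate against the source table and the MemSQL target and comparing the results; the table and column names below are placeholders.

    -- Rough consistency spot-check: run the same query on the source database
    -- and on the MemSQL target, then compare the two result rows.
    -- Table and column names are placeholders.
    SELECT COUNT(*)        AS row_count,
           MAX(updated_at) AS latest_change
    FROM orders;

A mismatch in either value would tell us the target is lagging or missing rows, though it would not pinpoint which ones.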

Thanks,
Naga S

Dear Naga,

Since Replicate is available only to paying customers, I assume you are one. Please check with your account team; they can help you with some of your questions about Replicate.

Eric