@carl @hanson Thanks for the quick reply!
@carl - In your last paragraph, did you mean that we should implement our own connector (one that reads from Pulsar) instead of using the JDBC one, “mimicking” the existing Kafka connector code by using LOAD DATA and, in addition, inserting directly into the leaves?
Also, regarding “How many rows/s we intend to insert” - for now it will be a few tens of thousands of inserts per second, a total of roughly 20-30k inserts/s spread across 6 large fact tables, each already containing 10M-50M rows. Half of the inserts will be “INSERT … ON DUPLICATE KEY UPDATE”.
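For context, on the upsert half we were planning to batch many rows per statement rather than doing per-row round trips, roughly along these lines (a hypothetical sketch: the table/column names are made up, and the `%s` placeholders assume a MySQL-protocol driver):

```python
def build_upsert(table, columns, rows, update_columns):
    """Build a multi-row INSERT ... ON DUPLICATE KEY UPDATE statement
    plus a flat parameter list, for batching thousands of rows per call."""
    placeholder = "(" + ", ".join(["%s"] * len(columns)) + ")"
    values_clause = ", ".join([placeholder] * len(rows))
    # VALUES(col) refers to the value that would have been inserted for col
    updates = ", ".join(f"{c} = VALUES({c})" for c in update_columns)
    sql = (f"INSERT INTO {table} ({', '.join(columns)}) "
           f"VALUES {values_clause} "
           f"ON DUPLICATE KEY UPDATE {updates}")
    params = [v for row in rows for v in row]
    return sql, params

# Hypothetical fact table and two batched rows:
sql, params = build_upsert("fact_events", ["id", "metric"],
                           [(1, 10), (2, 20)], ["metric"])
print(sql)
print(params)
```

Obviously the real batch sizes would be much larger; this is just to show the statement shape we have in mind.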
I assume a relatively small SingleStore cluster should be able to handle this load?
We are comparing the performance/$ ratio against RDS and Aurora… am I right to assume that, since SingleStore’s insert performance should be much better, we will need less hardware?