All of our processed data lives in the MemSQL database, which we use to store our analytics data. How can we unload MemSQL table data into Amazon S3 in CSV or Parquet file format?
Thank you very much for your prompt response. How can we unload data from MemSQL to S3 in Parquet format?
For now I recommend either outputting to Kafka and running a conversion there, or running a periodic batch job that picks up the exported files from the S3 bucket and converts them to Parquet using something like Spark.
We are looking into supporting Parquet out of the box.
How can we unload data in CSV/text format from MemSQL to S3?
Search for S3 on this page.
Hi Nikita, thanks for the response. Yes, I am referring to the syntax on that page:
INTO S3 'testing/output'
In place of `output` I am giving a file name ending in .csv, but it still gets stored as binary/octet-stream in S3. Any comments on this?
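For reference, the full statement looks roughly like this (the bucket name, region, and credentials are placeholders). Note that the path after `INTO S3` is an S3 object key prefix, so putting a `.csv` suffix in it only affects the object's name, not the content type it is stored with:

```sql
SELECT * FROM my_table
INTO S3 'my-bucket/exports/my_table.csv'
CONFIG '{"region": "us-east-1"}'
CREDENTIALS '{"aws_access_key_id": "<key>", "aws_secret_access_key": "<secret>"}';
```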
Currently we always upload data to S3 with the content type binary/octet-stream. Is there a specific use case for which you need the content type to match the file contents? Thanks!
There is no specific use case. I just want the data to be in a readable, downloadable format, hence I was wondering whether we could store it as .csv.
Totally understand. On the plus side, we are already uploading the data in CSV format. You can download and consume it with any tool that supports reading CSV; most tools will happily open the file if you download it with a .csv extension.
Hi guys, I need to export data as Parquet files for a historical data archive.
What is the simplest way to do that right now?
Currently I see there is a limit on the number of rows returned.
By default it is 300 from the query editor.
I can bump it to 3,000.
But how do I take a dump of a full table with 100 million records?
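One way to sidestep the query-editor row limit is to run the export server-side: `SELECT ... INTO S3` streams results straight to the bucket rather than back through the editor, and in a sharded deployment each partition typically writes its own object under the given prefix. A sketch, with the table, bucket, region, and credentials as placeholders:

```sql
-- Full-table export: results go to S3, not to the client,
-- so the query editor's display limit does not apply.
SELECT * FROM big_table
INTO S3 'my-bucket/archive/big_table'
CONFIG '{"region": "us-east-1"}'
CREDENTIALS '{"aws_access_key_id": "<key>", "aws_secret_access_key": "<secret>"}';
```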
@nikita, is support for exporting in Parquet format available in SingleStore now? Is there any other way to export directly from SingleStore to Parquet? Please confirm.