Load data from partitioned Parquet files in an S3 path into a MemSQL table

Hello,
I am trying to load data with a pipeline from an S3 path into a table in our MemSQL DB cluster.
The query I am running is:
CREATE PIPELINE pipeline_name AS
LOAD DATA S3 's3_path'
CONFIG region
CREDENTIALS aws_credentials
INTO TABLE dest_table
(
columns <- columns
)
FORMAT PARQUET;

The event_day column is the partition column in S3.
I get an error:
Path event_day does not name a primitive field in schema message spark_schema

Is there a way to load partitioned Parquet files into a MemSQL table with a pipeline?

Thanks
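
[Editor's note] The error occurs because event_day is a Hive-style partition column: its value is encoded in the S3 directory path, not stored as a field inside the Parquet files, so the pipeline cannot find it in the file schema. One workaround sketch is to map only the fields that physically exist in the files and leave the partition column out of the mapping; the destination column then needs to be nullable or have a DEFAULT (the 7.8.8 error quoted below hints at exactly this). The bucket path, credentials, dest_table, col_a, and col_b below are placeholders:

CREATE PIPELINE pipeline_name AS
LOAD DATA S3 's3://bucket/table_root/'
CONFIG '{"region": "us-east-1"}'
CREDENTIALS '{"aws_access_key_id": "...", "aws_secret_access_key": "..."}'
INTO TABLE dest_table
(
-- map only fields present inside the Parquet files;
-- event_day lives in the directory path, not in the file schema
col_a <- col_a,
col_b <- col_b
)
FORMAT PARQUET;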

Hi Gal! :wave: Welcome and thanks for using the forums. What version are you running?

Hi,
We are also facing the same issue.

We are on version 7.8.8, and the source structure is a partitioned table.

We are hitting the error below:
Path has no DEFAULT clause and was not found as a primitive field of input schema message hive_schema

Was a solution ever found for this issue? I'd rather not create a pipeline per partition just because SingleStore doesn't understand fields (i.e., partitions) embedded in the URL, such as:

…/transaction_year__r=2023/transaction_month__r=10/xyz.zstd.parquet

I've been testing with the SingleStore cluster-in-a-box Docker image (8.0.12).
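
[Editor's note] For completeness, here is a sketch that recovers the partition values from the file path itself, so a single pipeline can cover the whole table root instead of one pipeline per partition. It assumes pipeline_source_file() is available in your version (it returns the path of the file the current batch is loading) and uses SUBSTRING_INDEX to slice out the key=value path segments. The partition keys come from the example path above; the table, columns, bucket path, and credentials are hypothetical:

CREATE PIPELINE pipeline_name AS
LOAD DATA S3 's3://bucket/table_root/'
CONFIG '{"region": "us-east-1"}'
CREDENTIALS '{"aws_access_key_id": "...", "aws_secret_access_key": "..."}'
INTO TABLE dest_table
(
col_a <- col_a,
col_b <- col_b
)
FORMAT PARQUET
SET
-- pull '2023' out of '.../transaction_year__r=2023/...'
transaction_year = SUBSTRING_INDEX(SUBSTRING_INDEX(pipeline_source_file(), 'transaction_year__r=', -1), '/', 1),
-- pull '10' out of '.../transaction_month__r=10/...'
transaction_month = SUBSTRING_INDEX(SUBSTRING_INDEX(pipeline_source_file(), 'transaction_month__r=', -1), '/', 1);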