Load data from partitioned Parquet files in an S3 path into a MemSQL table

I am trying to load data with a pipeline from an S3 path into a table in our MemSQL cluster.
The query I am running is:
CREATE PIPELINE pipeline_name AS
LOAD DATA S3 's3_path'
CONFIG region
CREDENTIALS aws_credentials
INTO TABLE dest_table
FORMAT PARQUET
(columns <- columns);

The event_day column is the partition column in S3.
I get an error:
Path event_day does not name a primitive field in schema message spark_schema

Is there a way to load partitioned Parquet files into a MemSQL table with a pipeline?
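One thing worth noting: the partition column (event_day) typically lives in the S3 directory path (e.g. .../event_day=2022-01-01/...) rather than inside the Parquet files themselves, which is why the pipeline cannot find it in spark_schema. A possible workaround, sketched below with hypothetical column names and placeholder credentials, is to map only the fields that actually exist inside the Parquet files and derive event_day from the source file path in a SET clause. This assumes your MemSQL/SingleStore version supports the pipeline_source_file() function in pipelines; please check the docs for your version before relying on it.

```sql
-- Sketch only: col_a / col_b are hypothetical fields present in the
-- Parquet schema; event_day is parsed out of the Hive-style path
-- segment "event_day=<value>" using the source file name.
CREATE PIPELINE pipeline_name AS
LOAD DATA S3 's3_path'
CONFIG region
CREDENTIALS aws_credentials
INTO TABLE dest_table
FORMAT PARQUET
(col_a <- col_a, col_b <- col_b)
SET event_day = SUBSTRING_INDEX(
    SUBSTRING_INDEX(pipeline_source_file(), 'event_day=', -1), '/', 1);
```

If pipeline_source_file() is not available in your version, an alternative is to give the event_day column a DEFAULT value in the table definition so the pipeline no longer requires it in the input schema.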


Hi Gal! :wave: Welcome and thanks for using the forums. What version are you running?

We also face the same issue, on version 7.8.8.
The source structure is a partitioned table.

We are hitting the error below:
Path has no DEFAULT clause and was not found as a primitive field of input schema message hive_schema