Columnar Table is consuming Memory

I’m seeing that most of the columnar tables are consuming memory, causing OOM issues. Do you know what could be causing this?

TABLE INFO
Memory Usage: 1.5 GB
Disk Usage: 1.5 GB
Row Count: 5.72M rows
Table Type: Columnstore table
Table Compression: 82.37%

CREATE TABLE table_columnstore (
  raw_json JSON COLLATE utf8_bin,
  name AS raw_json::$name PERSISTED varchar(35) CHARACTER SET utf8 COLLATE utf8_general_ci,
  platform AS raw_json::$platform PERSISTED varchar(10) CHARACTER SET utf8 COLLATE utf8_general_ci,
  env AS raw_json::$env PERSISTED varchar(15) CHARACTER SET utf8 COLLATE utf8_general_ci,
  user AS raw_json::$player_id PERSISTED varchar(38) CHARACTER SET utf8 COLLATE utf8_general_ci,
  date AS DATE(FROM_UNIXTIME(raw_json::%unixts)) PERSISTED date,
  timestamp_utc AS FROM_UNIXTIME(raw_json::%unixts) PERSISTED datetime,
  inserted_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  game_ver AS raw_json::$game_ver PERSISTED varchar(25) CHARACTER SET utf8 COLLATE utf8_general_ci,
  KEY events (date, env, user, timestamp_utc) USING CLUSTERED COLUMNSTORE,
  SHARD KEY (date, env, user)
) AUTOSTATS_CARDINALITY_MODE=INCREMENTAL AUTOSTATS_HISTOGRAM_MODE=CREATE AUTOSTATS_SAMPLING=ON SQL_MODE='STRICT_ALL_TABLES';

SingleStore columnstore tables contain an in-memory rowstore segment for recently-inserted or updated rows. It’s normal for some memory to be used by a columnstore. In a large columnstore, the in-memory rowstore segment will contain only a fraction of the total data. The rest will be in disk-based columnstore format. Some of that will be cached in memory in the file system buffer cache.

If you run OPTIMIZE TABLE <table> FLUSH, it will write the in-memory data to disk. If you do that and then check memory usage by the columnstore, you might see that it is lower. But really, this is just a curiosity – it is not necessary to do this in normal operation, because the system automatically flushes data in the background.
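For reference, the flush takes the table name. Run against the table from this thread, it would look like this:

```sql
-- Flush the in-memory rowstore segment of the columnstore table to disk.
OPTIMIZE TABLE table_columnstore FLUSH;
```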

Thank you @hanson, that command reduced memory usage from 1.3 GB to 250 MB. Do you know why this isn’t being flushed automatically?

The table is being loaded from a pipeline with batches every 60,000 milliseconds.
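For context, that 60-second cadence comes from the pipeline’s batch interval, which is specified in milliseconds. A minimal sketch, assuming a pipeline named events_pipeline (hypothetical name, not from this thread):

```sql
-- BATCH_INTERVAL is in milliseconds; 60000 = one batch per minute.
ALTER PIPELINE events_pipeline SET BATCH_INTERVAL 60000;
```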

Well… now it’s back to 1.3 GB of memory again. It seems the in-memory buffer threshold is too high.