SELECT INTO KAFKA - support for populating the Kafka key

Hi,

I tested the SELECT INTO KAFKA SQL statement, and the integration with Confluent Kafka seems to work fine using:

SELECT to_json(telemtry_test.*)
FROM telemtry_test
INTO KAFKA '[broker-host-and-port]/[topic-name]'
CONFIG '{
  "security.protocol" : "SASL_SSL",
  "sasl.mechanism" : "PLAIN",
  "sasl.username" : "…",
  "ssl.ca.location" : "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem"}'
CREDENTIALS '{
  "sasl.password" : "…"}';

My question: how can I configure the way the Kafka key is populated in the SQL?
For example, my table above has 3 columns: sensor_id, timestamp, and value.

I would like sensor_id, with some hard-coded prefix added, to be used as the Kafka key for each message. How can I configure that? Something like the sketch below.
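
Roughly, the key I have in mind would be built like this. Just an illustration: the 'sensor-' prefix is made up, and I know this isn't real SELECT INTO KAFKA syntax; it only shows the expression I would want to supply as the key:

-- Illustration only: a hard-coded prefix plus sensor_id.
-- CONCAT is ordinary SQL; 'sensor-' is a made-up example prefix.
SELECT CONCAT('sensor-', sensor_id) AS desired_key
FROM telemtry_test;
-- e.g. sensor_id 42 would produce the key 'sensor-42'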

We don't currently support populating the Kafka key with SELECT INTO KAFKA, unfortunately. We always produce messages with no key and send them to randomly chosen partitions.
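
In the meantime, if your consumers just need access to the key value, one workaround is to embed it in the JSON payload itself. A minimal sketch, assuming to_json can be applied to a derived table's row; message_key is a made-up field name, and the Kafka record key itself would still be empty:

SELECT to_json(t.*)
FROM (
    -- message_key is hypothetical: the real Kafka record key stays empty,
    -- so consumers would have to read the key out of the payload.
    SELECT CONCAT('sensor-', sensor_id) AS message_key,
           sensor_id, timestamp, value
    FROM telemtry_test
) AS t
INTO KAFKA '[broker-host-and-port]/[topic-name]'
CONFIG '…'        -- same CONFIG as in your example
CREDENTIALS '…';  -- same CREDENTIALS as in your example

Note that partitioning would still be effectively random, since partition assignment is normally driven by the record key.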

OK, thanks for the quick reply.

Is there something like that on your roadmap? Can I open a feature request somehow for SingleStore to add something like this?