Monitoring, Logging and Debugging of Pipelines

I have a general question on how to monitor and debug MemSQL's Kafka pipelines.

I understand I can create a pipeline and transform Kafka messages using stored procedures or Python scripts, and I can monitor pipeline status in MemSQL Studio. When something goes wrong, how do I get the logs, and how can I get alerted if one of the pipelines is down? Is it possible to send the logs to a cloud monitoring service such as Datadog?
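One approach (a sketch, not a built-in integration) is to periodically poll the `information_schema.PIPELINES_ERRORS` table and forward any new rows to a monitoring service. The column names in the query and the Datadog events endpoint below are assumptions for illustration; check your MemSQL version's schema and Datadog's API documentation before relying on them:

```python
import json
import urllib.request

# Column names here are illustrative; inspect information_schema.PIPELINES_ERRORS
# in your cluster for the exact schema.
ERRORS_QUERY = """
SELECT pipeline_name, error_kind, error_message
FROM information_schema.PIPELINES_ERRORS
WHERE database_name = %s
"""

def format_datadog_event(pipeline_name, error_kind, error_message):
    """Shape one pipeline error row as a Datadog event payload (hypothetical)."""
    return {
        "title": f"MemSQL pipeline error: {pipeline_name}",
        "text": f"{error_kind}: {error_message}",
        "alert_type": "error",
        "tags": [f"pipeline:{pipeline_name}"],
    }

def post_to_datadog(event, api_key):
    """POST the event to Datadog's events API (endpoint is an assumption).

    Network call; not exercised in this sketch.
    """
    req = urllib.request.Request(
        f"https://api.datadoghq.com/api/v1/events?api_key={api_key}",
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```

A cron job or small daemon could run `ERRORS_QUERY` against the cluster, call `format_datadog_event` per new row, and alert on any pipeline whose state is not `Running`.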

Also, is there any guide on how to debug the pipelines?



Hello Henry,

Please see this forum post for responses and possible approaches.

Hope this is helpful.


The responses in the blog and the documentation provide debugging steps only for stored procedures. We can push debugging logs to an error table from transforms other than stored procedures (Python/Java/Node.js). We are performing some complex operations as part of the transform application, and we would like to have different loggers, something like this.

We expect a large number of different types of logs for debugging. So, other than keeping the logs in an error table written from the transform application, is there a better way to track them?
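Since a pipeline transform communicates its data through stdin/stdout, one lightweight pattern is to write leveled log lines to stderr so they never mix with the transformed records. The severity-prefix format below is our own convention, not a MemSQL feature, and the upper-casing transform is just a stand-in for real logic:

```python
import sys

def log(level, message):
    """Write a leveled log line to stderr, keeping it separate from the
    transformed records on stdout (prefix format is our own convention)."""
    sys.stderr.write(f"[{level}] {message}\n")

def transform(record: bytes) -> bytes:
    """Placeholder transform: upper-case the record; replace with real logic."""
    return record.upper()

def main():
    # The pipeline feeds batches of records on stdin and reads results
    # from stdout; a record that fails is logged and skipped here.
    for line in sys.stdin.buffer:
        try:
            sys.stdout.buffer.write(transform(line))
        except Exception as exc:
            log("ERROR", f"failed to transform record: {exc!r}")

if __name__ == "__main__":
    main()
```

Because stderr is a separate stream, a wrapper script (or the pipeline host itself) can redirect it to a file or a log shipper without touching the data path.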

We checked the docs below for debugging pipeline transformations, but we are looking for transform-code-level debugging, and in production we want all the logs in one place.
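For transform-code-level debugging before deploying, one option (our own harness, not a MemSQL tool) is to run the transform locally against sample records the same way the pipeline would, capturing stdout (data) and stderr (logs) separately:

```python
import subprocess
import sys

def run_transform_locally(transform_code, sample_records):
    """Run transform source against sample input and return (stdout, stderr)
    so the data stream and the log stream can be inspected separately.
    `transform_code` is passed to `python -c` here for brevity; in practice
    you would point subprocess at your real transform script."""
    proc = subprocess.run(
        [sys.executable, "-c", transform_code],
        input=sample_records,
        capture_output=True,
    )
    return proc.stdout, proc.stderr
```

Feeding a few captured Kafka messages through this harness makes it easy to step through the transform in a debugger or compare expected versus actual output without touching the cluster.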

Thanks in advance

Hello Madhuri,
Thank you for reaching out and clarifying the problem you are trying to solve. Currently we do not support logging at the granularity shown in the example link that you shared.

However, I have filed a feature request for Pipeline transform logging to better meet your requirements.