Blobs Pipelines Error: Invalid JSON value for column 'vals'

Hi, there is a problem in Cluster Monitoring.

I’ve followed the instructions in the manual on the SingleStore website to configure monitoring.
At the beginning it worked normally, but a problem occurred after a few minutes.
Then I did the setup again, but the problem still exists.
I don’t know where the problem is or how to find its cause.
Thank you!

SELECT * FROM information_schema.pipelines_errors;
Here is the CSV output:

DATABASE_NAME,PIPELINE_NAME,ERROR_UNIX_TIMESTAMP,ERROR_TYPE,ERROR_CODE,ERROR_MESSAGE,ERROR_KIND,STD_ERROR,LOAD_DATA_LINE,LOAD_DATA_LINE_NUMBER,BATCH_ID,ERROR_ID,BATCH_SOURCE_PARTITION_ID,BATCH_EARLIEST_OFFSET,BATCH_LATEST_OFFSET,HOST,PORT,PARTITION

metrics,192.168.150.174_9104_blobs,1626612002.12413,Error,1844,Leaf Error (192.168.150.174:3307): Invalid JSON value for column 'vals',Internal,NULL,NULL,NULL,9,53,NULL,NULL,NULL,192.168.150.174,3306,NULL

metrics,192.168.150.174_9104_blobs,1626612002.124064,Error,1844,Leaf Error (192.168.150.174:3307): Leaf Error (192.168.150.174:3307): Invalid JSON value for column 'vals',Internal,NULL,NULL,NULL,9,44,NULL,NULL,NULL,192.168.150.174,3306,NULL

metrics,192.168.150.174_9104_blobs,1626612002.1179,Error,1844,Invalid JSON value for column 'vals',Load,NULL,
"{""keys"":""{""activity_name"":""Select_metrics__et_al_e529638156a3a4a3""}"",""memsql_tags"":""{""cluster"":""memsql_cluster"",""host"":""192.168.150.174:9104"",""port"":""3306"",""push_job"":"""",""role"":""ma""}"",""time_sec"":""1626612002"",""type"":""query"",""vals"":""{""plan_warnings"":"""",""query_text"":""select * from (\\nselect time_sec,\\nCONCAT(^,host) as metric,\\nmemsql_sysinfo_mem_used_b / memsql_sysinfo_mem_total_b as used\\nFROM\\n(SELECT\\nmax(time_sec) time_sec,\\navg(value) value,\\nmetric,\\nhost\\nFROM\\n(SELECT\\n time_sec,\\n time_sec DIV @ * @ tg,\\n intval as value,\\n name as metric,\\n CONCAT(^, trim_host(host), ^) as host\\nFROM metrics\\nWHERE cluster=^\\nand (host in (^) or ^ = ^)\\nand extractor = ^\\nand subsystem = ^\\nand name like ^\\nand time_sec \u003e= @ AND time_sec \u003c= @\\n) X\\nGROUP BY tg, metric, host\\n) as pvt_data\\nPIVOT (\\n AVG(value) \\n FOR metric \\n IN (\\""memsql_sysinfo_mem_total_b\\"", \\""memsql_sysinfo_mem_used_b\\"")\\n) as pvt_table) a\\norder by time_sec""}""}",
1911,9,22,http://192.168.150.174:9104/samples?monitoring_version=7.3.13&sample_queries=true,0,1,192.168.150.174,3307,4

metrics,192.168.150.174_9104_blobs,1626611986.667857,Error,1844,Leaf Error (192.168.150.174:3307): Invalid JSON value for column 'vals',Internal,NULL,NULL,NULL,8,35,NULL,NULL,NULL,192.168.150.174,3306,NULL

metrics,192.168.150.174_9104_blobs,1626611986.667789,Error,1844,Leaf Error (192.168.150.174:3307): Leaf Error (192.168.150.174:3307): Invalid JSON value for column 'vals',Internal,NULL,NULL,NULL,8,26,NULL,NULL,NULL,192.168.150.174,3306,NULL

metrics,192.168.150.174_9104_blobs,1626611986.65621,Error,1844,Invalid JSON value for column 'vals',Load,NULL,
"{""keys"":""{""activity_name"":""Select_metrics__et_al_e529638156a3a4a3""}"",""memsql_tags"":""{""cluster"":""memsql_cluster"",""host"":""192.168.150.174:9104"",""port"":""3306"",""push_job"":"""",""role"":""ma""}"",""time_sec"":""1626611986"",""type"":""query"",""vals"":""{""plan_warnings"":"""",""query_text"":""select * from (\\nselect time_sec,\\nCONCAT(^,host) as metric,\\nmemsql_sysinfo_mem_used_b / memsql_sysinfo_mem_total_b as used\\nFROM\\n(SELECT\\nmax(time_sec) time_sec,\\navg(value) value,\\nmetric,\\nhost\\nFROM\\n(SELECT\\n time_sec,\\n time_sec DIV @ * @ tg,\\n intval as value,\\n name as metric,\\n CONCAT(^, trim_host(host), ^) as host\\nFROM metrics\\nWHERE cluster=^\\nand (host in (^) or ^ = ^)\\nand extractor = ^\\nand subsystem = ^\\nand name like ^\\nand time_sec \u003e= @ AND time_sec \u003c= @\\n) X\\nGROUP BY tg, metric, host\\n) as pvt_data\\nPIVOT (\\n AVG(value) \\n FOR metric \\n IN (\\""memsql_sysinfo_mem_total_b\\"", \\""memsql_sysinfo_mem_used_b\\"")\\n) as pvt_table) a\\norder by time_sec""}""}",
1900,8,20,http://192.168.150.174:9104/samples?monitoring_version=7.3.13&sample_queries=true,0,1,192.168.150.174,3307,2

Grafana screenshot

Besides, there is no problem after I restarted memsql, but when I use Grafana, the blobs pipeline reports the error.

Hello @tp_jia, could you please provide some additional info:

  • Do you use different clusters as Monitoring and Source clusters or is it the same cluster? (basically, is the host “192.168.150.174” where the exporter is running the same one as the one you are setting up monitoring DB and pipelines on?)
  • What are the versions of SingleStore DB running on your Source and Monitoring cluster?
  • Did you perform the setup using SingleStore DB Toolbox? If so, what was the version of Toolbox you were using?

Do you mean after you restarted memsql pipelines were running without errors or were they still shown as “Paused due to error” in SHOW PIPELINES?

Thanks for your reply!
1. My Monitoring and Source are the same cluster.
2. @@memsql_version is 7.3.13.
3. I set up monitoring using SingleStore DB Toolbox; the sdb-admin version is 1.11.8.

Pipelines were running without errors after I restarted memsql, but when I use Grafana (http://192.168.150.174:3000/), I see the error in the blobs pipeline.

Today there is still no error, even when I use Grafana.

I recognize the query text that’s apparently generating an invalid JSON value. It is the query behind the CPU panel on the Detailed Cluster View dashboard, and for some reason this part of the query was not parametrized: ... metric \\n IN (\\"memsql_sysinfo_mem_total_b\\", \\"memsql_sysinfo_mem_used_b\\") .... With proper parametrization it should be something like metric \\n IN (^, ^).
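To illustrate the failure mode, here is a minimal standalone sketch (simplified payloads modeled on the CSV dump above, not the exporter’s actual code): unescaped double quotes inside the embedded query_text terminate the JSON string early, while a parametrized placeholder keeps the document parseable.

```python
import json

# Hypothetical, simplified "vals" payloads. The unescaped double quotes
# around the identifiers end the query_text string early, so the whole
# document fails to parse.
bad = '{"vals": {"query_text": "... IN ("memsql_sysinfo_mem_total_b", "memsql_sysinfo_mem_used_b") ..."}}'
# With the identifiers parametrized away, the payload stays valid JSON.
good = '{"vals": {"query_text": "... IN (^, ^) ..."}}'

try:
    json.loads(bad)
except json.JSONDecodeError as e:
    print("invalid JSON:", e.msg)

print(json.loads(good)["vals"]["query_text"])  # → ... IN (^, ^) ...
```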

@tp_jia Since you have the code for the dashboard, can you modify it to use single quotes inside the IN ( ... ) list instead of double quotes?

The pipeline should already be set to skip all errors, so it should simply move past the wrong sample, but it is indeed annoying that the query text won’t show up for queries that use double-quotes in some places.
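That skip behavior can be pictured with a small standalone sketch (an illustration of the principle, not the pipeline’s implementation): malformed samples are dropped while the rest of the batch still loads.

```python
import json

# Two incoming samples: one well-formed, one with the unescaped-quote problem.
samples = [
    '{"vals": {"plan_warnings": ""}}',
    '{"vals": {"query_text": "IN ("x", "y")"}}',  # invalid JSON
]

loaded = []
for sample in samples:
    try:
        loaded.append(json.loads(sample))
    except json.JSONDecodeError:
        # Skip the malformed sample and keep loading the rest of the batch,
        # analogous to a pipeline configured to skip errors.
        continue

print(len(loaded))  # → 1
```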

It works normally now, thanks!
