Error Deploying SingleStore/MemSQL Operator

bitwyre@rke-bootstrap-ansible:~/bitwyre-k8s-manifest-dev/memsql$ kubectl logs memsql-operator-5f5544bc4-7s4gb
2020/10/28 04:36:51 main.go:55    {cmd}    Go Version: go1.11.5
2020/10/28 04:36:51 main.go:56    {cmd}    Go OS/Arch: linux/amd64
2020/10/28 04:36:51 main.go:57    {cmd}    Version of operator-sdk: v0.2.1
2020/10/28 04:36:51 main.go:58    {cmd}    Commit Hash: 9ec2c6f4
2020/10/28 04:36:51 main.go:60    {cmd}    Options:
2020/10/28 04:36:51 main.go:61    {cmd}    --cores-per-unit: 8.000000
2020/10/28 04:36:51 main.go:62    {cmd}    --memory-per-unit: 32.000000
2020/10/28 04:36:51 main.go:63    {cmd}    --overpack-factor: 0.000000
2020/10/28 04:36:51 main.go:64    {cmd}    --extra-cidrs: []
2020/10/28 04:36:51 main.go:65    {cmd}    --external-dns-domain-name: {false }
2020/10/28 04:36:51 main.go:66    {cmd}    --external-dns-ttl: {false 0}
2020/10/28 04:36:51 main.go:67    {cmd}    --ssl-secret-name:
2020/10/28 04:36:51 main.go:68    {cmd}    --merge-service-annotations: true
2020/10/28 04:36:51 main.go:88    {cmd}    --backup-default-deadline-seconds: 3600
2020/10/28 04:36:51 main.go:96    {cmd}    --backup-incremental-default-deadline-seconds: 3600
2020/10/28 04:36:51 main.go:116    {cmd}    --fs-group-id: 5555
2020/10/28 04:36:51 leader.go:55    {leader}    Trying to become the leader.
2020/10/28 04:36:51 leader.go:158    {leader}    found namespace    Namespace: "default"
2020/10/28 04:36:52 leader.go:171    {leader}    found podname    Pod.Name: "memsql-operator-5f5544bc4-7s4gb"
2020/10/28 04:36:52 leader.go:95    {leader}    Found existing lock with my name. I was likely restarted.
2020/10/28 04:36:52 leader.go:96    {leader}    Continuing as the leader.
2020/10/28 04:36:52 main.go:255    {cmd}    Registering Components.
2020/10/28 04:36:52 controller.go:120    {kubebuilder.controller}    Starting EventSource    Controller: "memsql-controller"  Source: "kind source: /, Kind="
panic: runtime error: index out of range

goroutine 1 [running]:
main.(*LogrWrapper).info(0xc00015bc20, 0x15b013c, 0x36, 0xc00048d320, 0x3, 0x3, 0x1, 0x3)
    /builds/engineering/helios/go/src/freya/cmd/operator/logger.go:42 +0x4c0
main.(*LogrWrapper).Error(0xc00015bc20, 0x175c500, 0xc00085ad40, 0x15b013c, 0x36, 0xc000735840, 0x1, 0x1)
    /builds/engineering/helios/go/src/freya/cmd/operator/logger.go:53 +0xd3
freya/vendor/sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start(0xc000a9b520, 0x1778c60, 0x2346960, 0x178ae60, 0xc000a9b400, 0xc000735740, 0x1, 0x1, 0x40b548, 0x14df040)
    /builds/engineering/helios/go/src/freya/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:89 +0x17a
freya/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Watch(0xc000194780, 0x175ca40, 0xc000a9b520, 0x1778c60, 0x2346960, 0xc000735740, 0x1, 0x1, 0x0, 0x0)
    /builds/engineering/helios/go/src/freya/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:121 +0x327
freya/kube/controller.AddToManager(0x178cb60, 0xc000464a80, 0x175b2e0, 0xc0005432c0, 0x175b2c0, 0xc000543180, 0x0, 0x0, 0xc00032c5a0, 0xc00032c610)
    /builds/engineering/helios/go/src/freya/kube/controller/controller.go:118 +0x546
main.main()
    /builds/engineering/helios/go/src/freya/cmd/operator/main.go:268 +0xe1d

Hello dendisuhubdy,

Thank you for reaching out and welcome to our community forums!

This error indicates that the deployment commands may have been run in an unexpected order. You have to apply the CRD first, and then the deployment YAML.
Can you please try running these two commands in the order shown below and let us know if that works?

kubectl apply -f memsql-cluster-crd.yaml
kubectl apply -f deployment.yaml
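For anyone following along, the full sequence with an explicit wait between the two steps might look like this. This is a sketch: the filenames come from this thread, and it assumes the CRD is named memsqlclusters.memsql.com as in the corrected CRD later in this thread.

```shell
# Apply the CRD first so the API server learns the MemsqlCluster kind.
kubectl apply -f memsql-cluster-crd.yaml

# Block until the CRD is registered and ready before starting the operator.
kubectl wait --for=condition=established \
  crd/memsqlclusters.memsql.com --timeout=60s

# Now deploy the operator, which watches for MemsqlCluster resources.
kubectl apply -f deployment.yaml
```

The `kubectl wait` step is optional but makes the ordering dependency explicit, which avoids the race where the operator starts before the CRD exists.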

Additionally, we plan to improve this error case so that the underlying issue and its mitigation are clearer.

Thank you!
Roxanna

Hi Roxanna,

I'm hitting the same issue, and I already ran these commands in order:

kubectl apply -f memsql-cluster-crd.yaml
kubectl apply -f deployment.yaml

but I still get the same error.

Please see my memsql-cluster-crd.yaml below (I changed the group from memsql.com to singlestore.com):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: memsqlclusters.singlestore.com
spec:
  group: singlestore.com
  names:
    kind: MemsqlCluster
    listKind: MemsqlClusterList
    plural: memsqlclusters
    singular: memsqlcluster
    shortNames:
      - memsql
  scope: Namespaced
  version: v1alpha1
  subresources:
    status: {}
  additionalPrinterColumns:
  - name: Aggregators
    type: integer
    description: Number of SingleStore DB Aggregators
    JSONPath: .spec.aggregatorSpec.count
  - name: Leaves
    type: integer
    description: Number of SingleStore DB Leaves (per availability group)
    JSONPath: .spec.leafSpec.count
  - name: Redundancy Level
    type: integer
    description: Redundancy level of SingleStore DB Cluster
    JSONPath: .spec.redundancyLevel
  - name: Age
    type: date
    JSONPath: .metadata.creationTimestamp

Hi Roxanna,

Actually, the CRD at this link
https://docs.singlestore.com/v7.1/guides/deploy-memsql/self-managed/kubernetes/step-3/

fails with an error:

The CustomResourceDefinition "memsqlclusters.singlestore.com" is invalid: metadata.name: Invalid value: "memsqlclusters.singlestore.com": must be spec.names.plural+"."+spec.group

We looked into the Kubernetes documentation on CRDs, changed the group from "group: memsql.com" to "group: singlestore.com", and the CRD then applied successfully.

But then we can't deploy the MemSQL operator.
Please advise.
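As background, the "must be spec.names.plural+\"."+"\"+spec.group" message comes from a consistency check the API server applies to every CRD: the object's name must be the plural name joined to the group with a dot. A minimal fragment that satisfies the rule (shown here with the memsql.com group; the trailing comment marks where the rest of the full CRD would go) looks like:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # Must equal spec.names.plural + "." + spec.group
  name: memsqlclusters.memsql.com
spec:
  group: memsql.com
  names:
    plural: memsqlclusters
    # (kind, listKind, singular, shortNames as in the full CRD)
```

So changing only `spec.group` (or only `metadata.name`) breaks the invariant; the two fields have to be changed together.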

Thank you all for your insight. Please use the following CRD instead and run the same two commands as above again. The name in the CRD should remain memsqlclusters.memsql.com for now.

We will be updating our documentation to mitigate this.

CRD:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: memsqlclusters.memsql.com
spec:
  group: memsql.com
  names:
    kind: MemsqlCluster
    listKind: MemsqlClusterList
    plural: memsqlclusters
    singular: memsqlcluster
    shortNames:
      - memsql
  scope: Namespaced
  version: v1alpha1
  subresources:
    status: {}
  additionalPrinterColumns:
  - name: Aggregators
    type: integer
    description: Number of MemSQL Aggregators
    JSONPath: .spec.aggregatorSpec.count
  - name: Leaves
    type: integer
    description: Number of MemSQL Leaves (per availability group)
    JSONPath: .spec.leafSpec.count
  - name: Redundancy Level
    type: integer
    description: Redundancy level of MemSQL Cluster
    JSONPath: .spec.redundancyLevel
  - name: Age
    type: date
    JSONPath: .metadata.creationTimestamp
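Once this CRD is applied and the operator is running, the operator watches for MemsqlCluster objects. Purely as an illustrative sketch keyed to the printer-column paths above (the counts are hypothetical, and this is not a complete spec; a real deployment needs additional fields not covered in this thread, such as the image and license settings):

```yaml
apiVersion: memsql.com/v1alpha1
kind: MemsqlCluster
metadata:
  name: memsql-cluster
spec:
  # Fields surfaced by the CRD's additionalPrinterColumns:
  redundancyLevel: 2      # shown in the "Redundancy Level" column
  aggregatorSpec:
    count: 2              # shown in the "Aggregators" column
  leafSpec:
    count: 2              # shown in the "Leaves" column (per availability group)
```

The metadata.name here also matches the service name pattern seen later in this thread (svc-memsql-cluster-ddl is derived from a cluster named memsql-cluster).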

Hi Roxanna,

It's running now, thanks a lot.

No problem! Thank you for bringing this issue up.

Best,
Roxanna

Hi Roxanna,

We have an issue with the SingleStore dashboard; this is the error message: ER_BAD_FIELD_ERROR: Unknown column 'mv.NODE_TYPE' in 'where clause'

Thank you.

Thank you for the follow-up.

Can you please provide your MemSQL version and Studio version?

  • You can obtain the MemSQL version by running select @@memsql_version; in the SQL client
  • You can obtain the studio version by running memsql-studio version on the machine you are running it on

Hi Roxanna,

We are currently on version 6.8.9. If we should upgrade to 7.1.11, can you share a tutorial? We run our SingleStore cluster on Kubernetes.

Thank you.

Hi Roxanna, also, we use the dashboard from the Docker image "cluster-in-a-box:latest", which reports this version:

bash-4.2$ memsql-studio version
2020/10/30 07:08:08 env.go:92 Log Opened
Version: 2.0.4

We register the MemSQL cluster in this dashboard using

svc-memsql-cluster-ddl.default.svc.cluster.local:3306

Is this the correct endpoint for the master aggregator? In the dashboard, we have to connect using the master aggregator endpoint.

Please advise, and thank you for your time.

Thanks for getting back to me with the information. This is independent of where Studio is running. The issue is that some of the variables on the newer Studio dashboard require later versions of SingleStore.

You have two options:

  • Upgrade SingleStore to a later 6.8 release (such as 6.8.24). Upgrading to 7.1 is the recommended path, if possible.
    OR
  • Downgrade Studio to 2.0.2.

We are working on a fix so that this will not error with previous versions.

Best,
Roxanna