SS cluster not spinning up Aggregator and Leaf nodes in Kubernetes Cluster

Hi,

I set replicas to 2 in sdb-operator.yaml. It spins up two operator pods, but it does not spin up any aggregator or leaf nodes.

[root@learning-1 ss_kubernetese]# kubectl get  memsqlclusters.memsql.com
NAME          AGGREGATORS   LEAVES   REDUNDANCY LEVEL   AGE
sdb-cluster   0             0        1                  3m24s

[root@learning-1 ss_kubernetese]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
sdb-operator-564b9d7d97-6k4q5   1/1     Running   0          4m14s
sdb-operator-564b9d7d97-xl6tp   1/1     Running   0          4m13s
[root@learning-1 ss_kubernetese]# kubectl get nodes
NAME         STATUS   ROLES           AGE     VERSION
learning-1   Ready    control-plane   6h42m   v1.25.2
learning-2   Ready    <none>          6h23m   v1.25.2
[root@learning-1 ss_kubernetese]# kubectl get services
NAME                  TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
kubernetes            ClusterIP      <...>     <none>        443/TCP          6h49m
svc-sdb-cluster       ClusterIP      None          <none>        3306/TCP         12m
svc-sdb-cluster-ddl   LoadBalancer   <...>   <pending>     3306:30158/TCP   12m

sdb-operator.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sdb-operator
  labels:
    app.kubernetes.io/component: operator
spec:
  replicas: 2
  selector:
    matchLabels:
      name: sdb-operator
  template:
    metadata:
      labels:
        name: sdb-operator
    spec:
      serviceAccountName: sdb-operator
      containers:
        - name: sdb-operator
          image: singlestore/operator:3.0.32-db8f5aff  #docker pull singlestore/operator:3.0.32-db8f5aff  >> Ref link-https://hub.docker.com/r/singlestore/operator/tags
          imagePullPolicy: Always
          args: [
            # Cause the operator to merge rather than replace annotations on services
            "--merge-service-annotations",
            # Allow the process inside the container to have read/write access to the `/var/lib/memsql` volume.
            "--fs-group-id", "5555",
            "--cluster-id", "sdb-cluster"
          ]
          env:
            - name: WATCH_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME                      #Name of pod
              valueFrom:
                fieldRef: 
                  fieldPath: metadata.name
            - name: OPERATOR_NAME
              value: "sdb-operator"

sdb-cluster.yaml:

apiVersion: memsql.com/v1alpha1
kind: MemsqlCluster
metadata:
  name: sdb-cluster
spec:
  license: <....>
  adminHashedPassword: "*9177CC8207174BDBB5ED66B2140C75171283F15D" 
  nodeImage:                             
    repository: singlestore/node         #docker pull singlestore/node:alma-7.8.15-4c5fbd0f27 >> Ref link-https://hub.docker.com/r/singlestore/node/tags 
    tag: 7.8.15-4c5fbd0f27

  redundancyLevel: 1                     #By default, the redundancyLevel is set to 2 to enable high availability (HA), which is highly recommended for production deployments. 
                                         #To disable high availability, set this value to 1. Refer to Managing High Availability for more information.

  serviceSpec:                           #optional
    objectMetaOverrides:
      labels:
        custom: label
      annotations:
        custom: annotations

  aggregatorSpec:                  #The height value specifies the vCPU and RAM size of an aggregator or leaf node where a height of 1 equals 8 vCPU cores and 32 GB of RAM.                                        
    count: 2                       #The smallest value you can set is 0.5 (4 vCPU cores, 16 GB of RAM).
    height: 0.5
    storageGB: 20                 #The storageGB value corresponds to the amount of storage each aggregator or leaf should request for their persistent data volume.
    storageClass: standard

    objectMetaOverrides:
      annotations:
        optional: annotation
      labels:
        optional: label

  leafSpec:
    count: 2
    height: 0.5
    storageGB: 20
    storageClass: standard

    objectMetaOverrides:
      annotations:
        optional: annotation
      labels:
        optional: label
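For context on the height values above: the operator's defaults (visible later in its startup logs as --cores-per-unit 8 and --memory-per-unit 32) mean each unit of height corresponds to 8 vCPUs and 32 GB of RAM. A quick sketch of the mapping:

```shell
# height -> resources per node, using the operator defaults of
# 8 vCPUs and 32 GB of RAM per unit of height.
height=0.5
awk -v h="$height" 'BEGIN { printf "%g vCPU, %g GB RAM\n", 8*h, 32*h }'
```

So a height of 0.5 requests 4 vCPUs and 16 GB of RAM per node.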

Some more queries:

  1. I tried logging in to SS: mysql -u admin -h <hostname> -P <port> -p<password>, but it prompts me for the password again.

Logs:

[root@learning-1 ss_kubernetese]# mysql -u admin -h <hostname> -P <port> -p<password>
password:
  2. Does SS on Kubernetes have a GUI? For example, the Linux-based installation has one.

Hi divyank1,

A few observations:

  • in sdb-operator.yaml, the spec has 2 replicas, whereas our docs recommend 1 replica
  • in sdb-cluster.yaml, the storage allotted to both aggregators and both leaves is 20 GB each, which is quite low

There is no GUI specifically for k8s-based deployments.

Please make sure to follow our k8s deployment docs when provisioning a SingleStore cluster in Kubernetes. Per our docs, prior experience with Kubernetes administration is strongly recommended for this deployment. A lighter-weight self-managed option would be a Linux deployment.

Hi @gkafity,
Thanks for your response. This has been pending for a while now.

I changed it to 1 operator now.

Can you specify the minimum storage for an aggregator and a leaf node, considering this is a testing environment?
I have 100 GB of storage on the master node and 50 GB on the worker node.
I want to spin up at least 1 aggregator and 1 leaf node.

I am getting the following error logs. It would be super helpful if you could point out what I'm doing wrong.

Logs:

[root@learning-1 ss_kubernetese]# lscpu | grep -E '^Thread|^Core|^Socket|^CPU\('
CPU(s):                2
Thread(s) per core:    2
Core(s) per socket:    1
Socket(s):             1

[4px@learning-1 ~]$ cat /proc/cpuinfo | grep cpu\ cores |uniq
cpu cores       : 1

[4px@learning-1 ~]$ cat /proc/cpuinfo | grep processor
processor       : 0
processor       : 1

[root@learning-1 ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:            15G        3.1G         11G         22M        1.3G         11G
Swap:            0B          0B          0B

[root@learning-1 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        7.8G     0  7.8G   0% /dev
tmpfs           7.8G   13M  7.8G   1% /dev/shm
tmpfs           7.8G   10M  7.8G   1% /run
tmpfs           7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/sda2       100G   33G   68G  33% /
/dev/sda1       200M   12M  189M   6% /boot/efi
shm              64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/7a0422a0f478715c00b19ea2031069443c2c6cb4197d9ec738d661ea89ed9f41/shm
shm              64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/74e03bb5e7205b36dfa3c16bd845ab7855db43b9d0acab97a4e939a2216ca919/shm
shm              64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/314f6879c3a382c2684099dbbf334624e9bb164acb5ebe60dd3fa2da26cd92f4/shm
shm              64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/ade049b7e99da544cffa51e8ac1cb68f673443cc664ffbc8835dfb181188e815/shm
overlay         100G   33G   68G  33% /run/containerd/io.containerd.runtime.v2.task/k8s.io/74e03bb5e7205b36dfa3c16bd845ab7855db43b9d0acab97a4e939a2216ca919/rootfs
overlay         100G   33G   68G  33% /run/containerd/io.containerd.runtime.v2.task/k8s.io/ade049b7e99da544cffa51e8ac1cb68f673443cc664ffbc8835dfb181188e815/rootfs
overlay         100G   33G   68G  33% /run/containerd/io.containerd.runtime.v2.task/k8s.io/314f6879c3a382c2684099dbbf334624e9bb164acb5ebe60dd3fa2da26cd92f4/rootfs
overlay         100G   33G   68G  33% /run/containerd/io.containerd.runtime.v2.task/k8s.io/7a0422a0f478715c00b19ea2031069443c2c6cb4197d9ec738d661ea89ed9f41/rootfs
overlay         100G   33G   68G  33% /run/containerd/io.containerd.runtime.v2.task/k8s.io/eb489898f880b64ce776c252838f5260f94ef4e3c139c604219cf8d685fd5700/rootfs
overlay         100G   33G   68G  33% /run/containerd/io.containerd.runtime.v2.task/k8s.io/3abe29e4eef99f4550e13cdd5c832fb9e54198c65e517b679b7bdcbf992b2d4e/rootfs
overlay         100G   33G   68G  33% /run/containerd/io.containerd.runtime.v2.task/k8s.io/9305eedaaad2da82ef9e15f8e48dcbf8d07933f2bb8637384ccbecf8efa745d5/rootfs
overlay         100G   33G   68G  33% /run/containerd/io.containerd.runtime.v2.task/k8s.io/eda3b4ada268970c81d11e8b96b9fd12da1aa4374d143582d4b7b4d2d5000bf9/rootfs
overlay         100G   33G   68G  33% /var/lib/docker/overlay2/783ff1ddc0821ed8aac5a4fe9bb8ef8b23b22c467f46bd9eec183925468a0881/merged
overlay         100G   33G   68G  33% /var/lib/docker/overlay2/7cdff9846c92ad5401df7a3270f5c81521160fe23a07f09dec92daf5684bc7c4/merged

Linux Kernel version:

[root@learning-1 /]# hostnamectl
   Static hostname: learning-1
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 599dc9a1d16051c45505c4e99257deec
           Boot ID: 23741883b22d499c86b61aae7f93fdba
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-1160.59.1.el7.x86_64
      Architecture: x86-64


[root@learning-1 /]#kubectl get nodes

NAME         STATUS   ROLES           AGE     VERSION
learning-1   Ready    control-plane   24m     v1.23.12
learning-2   Ready    <none>          5m12s   v1.23.12


[root@learning-1 ss_kubernetese]# kubectl get pod -o=custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName --all-namespaces
NAME                                 STATUS    NODE
sdb-operator-564b9d7d97-6k4q5        Running   learning-2
kube-flannel-ds-7gktf                Running   learning-2
kube-flannel-ds-ttkkl                Running   learning-1
coredns-565d847f94-5qx2v             Running   learning-1
coredns-565d847f94-xg8mt             Running   learning-1
etcd-learning-1                      Running   learning-1
kube-apiserver-learning-1            Running   learning-1
kube-controller-manager-learning-1   Running   learning-1
kube-proxy-hqrjq                     Running   learning-1
kube-proxy-zzslq                     Running   learning-2
kube-scheduler-learning-1            Running   learning-1

[root@learning-1 ss_kubernetese]# kubectl get services
NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes            ClusterIP      10.96.0.1       <none>        443/TCP          15m
svc-sdb-cluster       ClusterIP      None            <none>        3306/TCP         5m9s
svc-sdb-cluster-ddl   LoadBalancer   10.102.231.87   <pending>     3306:30052/TCP   5m9s

[root@learning-1 ss_kubernetese]# kubectl get  memsqlclusters.memsql.com/sdb-cluster
NAME          AGGREGATORS   LEAVES   REDUNDANCY LEVEL   AGE
sdb-cluster   0             0        1                  7m23s


[root@learning-1 ss_kubernetese]# kubectl get namespace
NAME              STATUS   AGE
default           Active   171m
kube-flannel      Active   169m
kube-node-lease   Active   171m
kube-public       Active   171m
kube-system       Active   171m

[root@learning-1 ss_kubernetese]# kubectl get deployments --namespace=default
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
sdb-operator   1/1     1            1           168m

[root@learning-1 ss_kubernetese]# kubectl get statefulsets --namespace=default
No resources found in default namespace.

[root@learning-1 ss_kubernetese]# kubectl get rc
NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/sdb-operator-564b9d7d97   1         1         1       28m


*************************ERROR-LOGS*************************************************************************************************************
[root@learning-1 ss_kubernetese]# kubectl logs deployment/sdb-operator
2022/10/14 04:42:26 deleg.go:121        {cmd}   Go Version: go1.18.2
2022/10/14 04:42:26 deleg.go:121        {cmd}   Go OS/Arch: linux/amd64
2022/10/14 04:42:26 deleg.go:121        {cmd}   Operator Version: 3.0.33
2022/10/14 04:42:26 deleg.go:121        {cmd}   Commit Hash: db8f5aff
2022/10/14 04:42:26 deleg.go:121        {cmd}   Build Time: 2022-09-08T14:43:05Z
2022/10/14 04:42:26 deleg.go:121        {cmd}   Options:
2022/10/14 04:42:26 deleg.go:121        {cmd}   --cores-per-unit: 8.000000
2022/10/14 04:42:26 deleg.go:121        {cmd}   --memory-per-unit: 32.000000
2022/10/14 04:42:26 deleg.go:121        {cmd}   --overpack-factor: 0.000000
2022/10/14 04:42:26 deleg.go:121        {cmd}   --extra-cidrs: []
2022/10/14 04:42:26 deleg.go:121        {cmd}   --external-dns-domain-name: {false }
2022/10/14 04:42:26 deleg.go:121        {cmd}   --external-dns-ttl: {false 0}
2022/10/14 04:42:26 deleg.go:121        {cmd}   --ssl-secret-name:
2022/10/14 04:42:26 deleg.go:121        {cmd}   --merge-service-annotations: true
2022/10/14 04:42:26 deleg.go:121        {cmd}   --backup-default-deadline-seconds: 3600
2022/10/14 04:42:26 deleg.go:121        {cmd}   --backup-incremental-default-deadline-seconds: 3600
2022/10/14 04:42:26 deleg.go:121        {cmd}   --cluster-id: sdb-cluster
2022/10/14 04:42:26 deleg.go:121        {cmd}   --fs-group-id: 5555
2022/10/14 04:42:26 deleg.go:121        {controller-runtime.metrics}    Metrics server is starting to listen    addr: "0.0.0.0:9090"
2022/10/14 04:42:26 deleg.go:121        {cmd}   starting manager
2022/10/14 04:42:26 logr.go:249 Starting server path: "/metrics"  kind: "metrics"  addr: "[::]:9090"
I1014 04:42:26.956979       1 leaderelection.go:248] attempting to acquire leader lease default/memsql-operator-lock-sdb-cluster...
I1014 06:35:43.689297       1 leaderelection.go:258] successfully acquired lease default/memsql-operator-lock-sdb-cluster
2022/10/14 06:35:43 logr.go:249 {events}        Normal  object: "{Kind:ConfigMap Namespace:default Name:memsql-operator-lock-sdb-cluster UID:48580f1d-64b3-4f21-92f4-13d2951357f4 APIVersion:v1 ResourceVersion:20659 FieldPath:}"  reason: "LeaderElection"  message: "sdb-operator-564b9d7d97-cxjjm_9b57f782-db08-48b4-95a7-59688d3e941f became leader"
2022/10/14 06:35:43 logr.go:249 {events}        Normal  object: "{Kind:Lease Namespace:default Name:memsql-operator-lock-sdb-cluster UID:7fd4ccbd-dff9-45e1-9941-bc6e03ca138d APIVersion:coordination.k8s.io/v1 ResourceVersion:20660 FieldPath:}"  reason: "LeaderElection"  message: "sdb-operator-564b9d7d97-cxjjm_9b57f782-db08-48b4-95a7-59688d3e941f became leader"
2022/10/14 06:35:43 logr.go:249 {controller.memsqlcluster}      Starting EventSource    reconciler kind: "MemsqlCluster"  reconciler group: "memsql.com"  source: "kind source: *v1alpha1.MemsqlCluster"
2022/10/14 06:35:43 logr.go:249 {controller.memsqlcluster}      Starting EventSource    reconciler group: "memsql.com"  reconciler kind: "MemsqlCluster"  source: "kind source: *v1alpha1.MemsqlCluster"
2022/10/14 06:35:43 logr.go:249 {controller.memsqlcluster}      Starting EventSource    reconciler group: "memsql.com"  reconciler kind: "MemsqlCluster"  source: "kind source: *v1.StatefulSet"
2022/10/14 06:35:43 logr.go:249 {controller.memsqlcluster}      Starting EventSource    reconciler kind: "MemsqlCluster"  reconciler group: "memsql.com"  source: "kind source: *v1.Service"
2022/10/14 06:35:43 logr.go:249 {controller.memsqlcluster}      Starting EventSource    reconciler kind: "MemsqlCluster"  reconciler group: "memsql.com"  source: "kind source: *v1.Secret"
2022/10/14 06:35:43 logr.go:249 {controller.memsqlcluster}      Starting Controller     reconciler group: "memsql.com"  reconciler kind: "MemsqlCluster"
2022/10/14 06:35:44 deleg.go:121        {controller.memsql}     reconciliation cause: statefulset       namespace: "default"  clusterName: "sdb-cluster"  serviceName: "svc-sdb-cluster"  namespace: "default"
2022/10/14 06:35:44 deleg.go:121        {controller.memsql}     reconciliation cause: statefulset       namespace: "default"  clusterName: "sdb-cluster"  serviceName: "svc-sdb-cluster-ddl"  namespace: "default"
2022/10/14 06:35:44 deleg.go:121        {controller.memsql}     reconciliation cause: memsqlcluster     clusterName: "sdb-cluster"  namespace: "default"
2022/10/14 06:35:44 logr.go:249 {controller.memsqlcluster}      Starting workers        reconciler group: "memsql.com"  reconciler kind: "MemsqlCluster"  worker count: "1"
2022/10/14 06:35:44 logr.go:249 {controller.memsql}     Reconciling MemSQL Cluster.     Request.Namespace: "default"  Request.Name: "sdb-cluster"
2022/10/14 06:35:44 deleg.go:121        {memsql}        can't find operator deployment, trying uncached client  key: "default/operator-sdb-cluster"
2022/10/14 06:35:44 deleg.go:135        {memsql}        can't find operator deployment, metrics service will not be created     error: "deployments.apps "operator-sdb-cluster" not found"
2022/10/14 06:35:44 deleg.go:135        {controller.memsql}     Reconciler error, will retry after      1s: "error"  failed to get service endpoint (svc-sdb-cluster-ddl): no ingress endpoint found

Please see our Kubernetes host requirements & overall system recommendations. Each SingleStore node requires a host with at least 4 CPU cores, 8 GB of RAM, and storage of at least 3x the capacity of main memory.
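The "3x main memory" sizing can be computed directly. As a quick sketch, assuming height-0.5 nodes (i.e. 16 GB of RAM each):

```shell
# Minimum persistent-volume size per node = 3x RAM.
# A height of 0.5 corresponds to 16 GB of RAM per node.
ram_gb=16
echo "minimum storageGB: $((ram_gb * 3))"
```

So the storageGB of 20 in the posted yaml is well under the 48 GB this works out to.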

In the logs we noticed svc-sdb-cluster-ddl with the IP address 10.102.231.87. What happens if you try to connect the mysql client to this endpoint (on port 3306)?
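Note also that the reconciler error "failed to get service endpoint (svc-sdb-cluster-ddl): no ingress endpoint found" lines up with that service's EXTERNAL-IP staying <pending>: a bare-metal kubeadm cluster has no cloud load-balancer controller to assign one. One way to provide an external IP, as a sketch assuming MetalLB v0.13+ is installed and the address range below is free on your LAN, is a Layer 2 address pool:

```yaml
# Hypothetical MetalLB address pool; adjust the names and range to your network.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: sdb-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: sdb-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - sdb-pool
```

Once the LoadBalancer service has an external IP, the operator should get past that retry loop.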

There are more streamlined approaches for deploying self-managed SingleStoreDB than k8s: you can use cluster-in-a-box for small-scale testing, and deploy on Linux for larger prod-type environments. Have you tried either of these options?

Hi @gkafity,

Since this is a testing environment, the system requirements are kept low: 2 CPU cores with 16 GB of RAM for each node.

What does that mean? Sorry, I didn't see this point in the system requirements docs, which is why I'm using the storage config below in the sdb-cluster.yaml file.
The current storage of the master and worker nodes is 100 GB and 50 GB respectively.

  aggregatorSpec:                  #The height value specifies the vCPU and RAM size of an aggregator or leaf node where a height of 1 equals 8 vCPU cores and 32 GB of RAM.                                        
    count: 1                       #The smallest value you can set is 0.5 (4 vCPU cores, 16 GB of RAM).
    height: 0.5
    storageGB: 20                 #The storageGB value corresponds to the amount of storage each aggregator or leaf should request for their persistent data volume.
    storageClass: standard

    objectMetaOverrides:
      annotations:
        optional: annotation
      labels:
        optional: label

  leafSpec:
    count: 1
    height: 0.5
    storageGB: 20
    storageClass: standard

I'm not able to connect:

[root@learning-1 ss_kubernetese]# mysql -u admin -h 10.102.231.87 -P 3306 -p *9177CC8207174BDBB5ED66B2140C75171283F15D
Enter password:
ERROR 2002 (HY000): Can't connect to server on '10.102.231.87' (115)
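One thing to note about the command above: the mysql client's -p flag expects the plaintext password, not the hash. The adminHashedPassword value is a MySQL-style digest, i.e. '*' followed by the uppercase hex of SHA1(SHA1(password)). As a sketch (assuming openssl is available; "password" below is just an example plaintext, not the one behind the hash in this thread):

```shell
# Reproduce a MySQL-style hashed password: '*' + UPPER(SHA1(SHA1(plaintext))).
plain='password'
hash=$(printf '%s' "$plain" \
  | openssl dgst -binary -sha1 \
  | openssl dgst -sha1 \
  | awk '{ print toupper($NF) }')
echo "*$hash"
```

When logging in, pass the original plaintext to -p (or omit the value and type it at the prompt), with no space between -p and the password.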

I have already tried and tested those; the Linux cluster is working now. K8s testing is what is in progress currently.
Ref link: Leaf Node went offline, how to restart and connect this node to the cluster

Is the error below coming up due to a storage issue?
Is that why the aggregator and leaf pods are not spinning up, or could there be a config issue in the yaml files?
Let me know if you need any more logs; I want to get this issue resolved.

[root@learning-1 ss_kubernetese]# kubectl logs deployment/sdb-operator
2022/10/14 04:42:26 deleg.go:121        {cmd}   --cluster-id: sdb-cluster
2022/10/14 04:42:26 deleg.go:121        {cmd}   --fs-group-id: 5555
2022/10/14 04:42:26 deleg.go:121        {controller-runtime.metrics}    Metrics server is starting to listen    addr: "0.0.0.0:9090"
2022/10/14 04:42:26 deleg.go:121        {cmd}   starting manager
2022/10/14 04:42:26 logr.go:249 Starting server path: "/metrics"  kind: "metrics"  addr: "[::]:9090"
I1014 04:42:26.956979       1 leaderelection.go:248] attempting to acquire leader lease default/memsql-operator-lock-sdb-cluster...
I1014 06:35:43.689297       1 leaderelection.go:258] successfully acquired lease default/memsql-operator-lock-sdb-cluster
2022/10/14 06:35:43 logr.go:249 {events}        Normal  object: "{Kind:ConfigMap Namespace:default Name:memsql-operator-lock-sdb-cluster UID:48580f1d-64b3-4f21-92f4-13d2951357f4 APIVersion:v1 ResourceVersion:20659 FieldPath:}"  reason: "LeaderElection"  message: "sdb-operator-564b9d7d97-cxjjm_9b57f782-db08-48b4-95a7-59688d3e941f became leader"
2022/10/14 06:35:43 logr.go:249 {events}        Normal  object: "{Kind:Lease Namespace:default Name:memsql-operator-lock-sdb-cluster UID:7fd4ccbd-dff9-45e1-9941-bc6e03ca138d APIVersion:coordination.k8s.io/v1 ResourceVersion:20660 FieldPath:}"  reason: "LeaderElection"  message: "sdb-operator-564b9d7d97-cxjjm_9b57f782-db08-48b4-95a7-59688d3e941f became leader"
2022/10/14 06:35:43 logr.go:249 {controller.memsqlcluster}      Starting EventSource    reconciler kind: "MemsqlCluster"  reconciler group: "memsql.com"  source: "kind source: *v1alpha1.MemsqlCluster"
2022/10/14 06:35:43 logr.go:249 {controller.memsqlcluster}      Starting EventSource    reconciler group: "memsql.com"  reconciler kind: "MemsqlCluster"  source: "kind source: *v1alpha1.MemsqlCluster"
2022/10/14 06:35:43 logr.go:249 {controller.memsqlcluster}      Starting EventSource    reconciler group: "memsql.com"  reconciler kind: "MemsqlCluster"  source: "kind source: *v1.StatefulSet"
2022/10/14 06:35:43 logr.go:249 {controller.memsqlcluster}      Starting EventSource    reconciler kind: "MemsqlCluster"  reconciler group: "memsql.com"  source: "kind source: *v1.Service"
2022/10/14 06:35:43 logr.go:249 {controller.memsqlcluster}      Starting EventSource    reconciler kind: "MemsqlCluster"  reconciler group: "memsql.com"  source: "kind source: *v1.Secret"
2022/10/14 06:35:43 logr.go:249 {controller.memsqlcluster}      Starting Controller     reconciler group: "memsql.com"  reconciler kind: "MemsqlCluster"
2022/10/14 06:35:44 deleg.go:121        {controller.memsql}     reconciliation cause: statefulset       namespace: "default"  clusterName: "sdb-cluster"  serviceName: "svc-sdb-cluster"  namespace: "default"
2022/10/14 06:35:44 deleg.go:121        {controller.memsql}     reconciliation cause: statefulset       namespace: "default"  clusterName: "sdb-cluster"  serviceName: "svc-sdb-cluster-ddl"  namespace: "default"
2022/10/14 06:35:44 deleg.go:121        {controller.memsql}     reconciliation cause: memsqlcluster     clusterName: "sdb-cluster"  namespace: "default"
2022/10/14 06:35:44 logr.go:249 {controller.memsqlcluster}      Starting workers        reconciler group: "memsql.com"  reconciler kind: "MemsqlCluster"  worker count: "1"
2022/10/14 06:35:44 logr.go:249 {controller.memsql}     Reconciling MemSQL Cluster.     Request.Namespace: "default"  Request.Name: "sdb-cluster"
2022/10/14 06:35:44 deleg.go:121        {memsql}        can't find operator deployment, trying uncached client  key: "default/operator-sdb-cluster"
2022/10/14 06:35:44 deleg.go:135        {memsql}        can't find operator deployment, metrics service will not be created     error: "deployments.apps "operator-sdb-cluster" not found"
2022/10/14 06:35:44 deleg.go:135        {controller.memsql}     Reconciler error, will retry after      1s: "error"  failed to get service endpoint (svc-sdb-cluster-ddl): no ingress endpoint found

The number of cores in use is below the recommended amount. Additionally, for the 3x-main-memory storage requirement, please see our system requirements & recommendations docs under ‘recommended hardware’.

The text above states 100GB & 50GB respectively, but the yaml file says otherwise: 20GB and 20GB. Additionally, note from the sdb-cluster.yaml file that a full-height node should have at least 8 vCPUs (i.e. 4 vCPUs when height is 0.5). This yaml is at half of that recommended threshold.

If Linux-based deployments have been successful for you in the past, why are you deploying to Kubernetes now? Our general guidance is that a Linux-based deployment is much lower-friction than a Kubernetes deployment.

Provide a storage system for each node with at least 3x the capacity of main memory.
The current storage of the master and worker nodes is 100GB & 50GB, with 16 GB of RAM each. So storageGB in sdb-cluster.yaml should be three times the main memory?

Does "main memory" mean RAM?

Ex: sdb-cluster.yaml:

  aggregatorSpec:                  #The height value specifies the vCPU and RAM size of an aggregator or leaf node where a height of 1 equals 8 vCPU cores and 32 GB of RAM.                                        
    count: 1                       #The smallest value you can set is 0.5 (4 vCPU cores, 16 GB of RAM).
    height: 0.5
    storageGB: 48                 #The storageGB value corresponds to the amount of storage each aggregator or leaf should request for their persistent data volume.
    storageClass: standard

    objectMetaOverrides:
      annotations:
        optional: annotation
      labels:
        optional: label

  leafSpec:
    count: 1
    height: 0.5
    storageGB: 32
    storageClass: standard

If Linux-based deployments have been successful for you in the past, why are you deploying to Kubernetes now?

K8s performance testing needs to be done. Do you mean that the leaf and aggregator nodes are not spinning up because of the storage and system requirements?

Hi @gkafity,

The storage of the master and worker nodes has been increased to 300 GB each, with 8 vCPU cores.
I set up Kubernetes from scratch on both nodes.

Current sdb-cluster.yaml

apiVersion: memsql.com/v1alpha1
kind: MemsqlCluster
metadata:
  name: sdb-cluster
spec:
  license: license
  adminHashedPassword: "*9177CC8207174BDBB5ED66B2140C75171283F15D"
  nodeImage:
    repository: singlestore/node
    tag: alma-7.8.17-69cee1f1a3

  redundancyLevel: 1

  serviceSpec:
    objectMetaOverrides:
      labels:
        custom: label
      annotations:
        custom: annotations

  aggregatorSpec:
    count: 1
    height: 0.5
    storageGB: 72
    storageClass: standard

    objectMetaOverrides:
      annotations:
        optional: annotation
      labels:
        optional: label

  leafSpec:
    count: 1
    height: 0.5
    storageGB: 128
    storageClass: standard

    objectMetaOverrides:
      annotations:
        optional: annotation
      labels:
        optional: label

Logs:

[root@learning-1 ss_kubernetese]# kubectl get all
NAME                                READY   STATUS    RESTARTS   AGE
pod/sdb-operator-867c9cb7b8-lb64q   1/1     Running   0          27m

NAME                          TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/kubernetes            ClusterIP      10.96.0.1      <none>        443/TCP          46m
service/svc-sdb-cluster       ClusterIP      None           <none>        3306/TCP         21m
service/svc-sdb-cluster-ddl   LoadBalancer   10.99.79.174   <pending>     3306:31795/TCP   21m

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/sdb-operator   1/1     1            1           27m

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/sdb-operator-867c9cb7b8   1         1         1       27m

[root@learning-1 ss_kubernetese]# free -h
              total        used        free      shared  buff/cache   available
Mem:            15G        3.4G        9.6G         23M        2.5G         11G
Swap:            0B          0B          0B

[root@learning-1 ss_kubernetese]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
learning-1   Ready    control-plane,master   63m   v1.23.12
learning-2   Ready    <none>                 46m   v1.23.12

[root@learning-1 ss_kubernetese]# lscpu | grep -E '^Thread|^Core|^Socket|^CPU\('
CPU(s):                4
Thread(s) per core:    2
Core(s) per socket:    2
Socket(s):             1
[root@learning-1 ss_kubernetese]# cat /proc/cpuinfo | grep processor
processor       : 0
processor       : 1
processor       : 2
processor       : 3
[root@learning-1 ss_kubernetese]# cat /proc/cpuinfo | grep processor | wc -l
4

[root@learning-1 ss_kubernetese]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        7.8G     0  7.8G   0% /dev
tmpfs           7.8G   13M  7.8G   1% /dev/shm
tmpfs           7.8G   11M  7.8G   1% /run
tmpfs           7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/sda2       300G   34G  267G  12% /
/dev/sda1       200M   12M  189M   6% /boot/efi
overlay         300G   34G  267G  12% /var/lib/docker/overlay2/3abc679a89357c3bfedf89f06662c3da1c7bdc566059c3adbe24411e0500677f/merged
overlay         300G   34G  267G  12% /var/lib/docker/overlay2/52019c8adb1a2532f723e17dd3c77edb56a6a162833a1396b5e80e34b2841e97/merged
overlay         300G   34G  267G  12% /var/lib/docker/overlay2/b1919f0c322317371f05311c9b33dfff570bbe18b769dc23aa9b69809fcacb8d/merged
tmpfs           1.6G     0  1.6G   0% /run/user/1001
tmpfs           1.6G     0  1.6G   0% /run/user/0
overlay         300G   34G  267G  12% /var/lib/docker/overlay2/d62badad3262d0dc7d83d18e1110eeeb90d6f792474a1f598d68aa78bb665247/merged
overlay         300G   34G  267G  12% /var/lib/docker/overlay2/7c31d0b80276042e0651fd7016b3a0e85e4cc5defc899160f1bace4e77ff6d08/merged
overlay         300G   34G  267G  12% /var/lib/docker/overlay2/37ce71df22e29c772382e7a3478acfb8fbe21be8491c3206f371ba74d90d562c/merged
overlay         300G   34G  267G  12% /var/lib/docker/overlay2/2e057156cd38793705209e003aa3aa3278690fca3a0470df6712db6f33d6b73f/merged
shm              64M     0   64M   0% /var/lib/docker/containers/aad52cd81e97cced2bea576897c9d8e897b25f315de70472303f8c68d9665f82/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/621966b86879907622e47bf677031e397c03eb1818ad9f559c5d4d22e4214169/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/a65bb00dc9990db94dc4bbc4352225d249a886d6a8c9c38b40dc836cf204c1ec/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/c1a3615432ffd7d80c125366f6b8868a643aa345e5341cebfe2ca9a0c92c1c87/mounts/shm
overlay         300G   34G  267G  12% /var/lib/docker/overlay2/b72daefbeaa917a371608305422e77db935bc5232960204f164ca0a07cda75b0/merged
overlay         300G   34G  267G  12% /var/lib/docker/overlay2/78356f9c303befbf71860b56978eb7f0ebe889ea5615bbc1042a4a101e878ed0/merged
overlay         300G   34G  267G  12% /var/lib/docker/overlay2/f7affb2bbac998cbc952e036737028843ab5b84c96c3a27c4bc31fb1907d14cd/merged
overlay         300G   34G  267G  12% /var/lib/docker/overlay2/deb1812cb0cd5d013559425f9087066bd1b564501f3602c4d8545c1f60aeb4ec/merged
tmpfs            16G   12K   16G   1% /var/lib/kubelet/pods/494c0715-2590-4f20-8a2b-0b3013505d81/volumes/kubernetes.io~projected/kube-api-access-n7x8x
tmpfs           170M   12K  170M   1% /var/lib/kubelet/pods/11cde4a8-a267-431b-b41b-54b0b0217c46/volumes/kubernetes.io~projected/kube-api-access-5pqdb
overlay         300G   34G  267G  12% /var/lib/docker/overlay2/2f421f41ccbe89db3fd8b42cda6f7a43b92ba1be5a92e98815c4d78249eff6c3/merged
shm              64M     0   64M   0% /var/lib/docker/containers/88016582336c7b0a44380ef09275224915456844475e3ae7c1c3c029261679f3/mounts/shm
tmpfs           170M   12K  170M   1% /var/lib/kubelet/pods/1949bc6e-60ea-4da7-8522-7b4597804822/volumes/kubernetes.io~projected/kube-api-access-zgkh9
overlay         300G   34G  267G  12% /var/lib/docker/overlay2/4e171d1bd0c4876d9b23f3c7b887573c65c1c5067b4f3dd23011117d861eaf46/merged
shm              64M     0   64M   0% /var/lib/docker/containers/45bf4a6a15e4e50e273a21d367ebae0db54bf42d69acfc153329b07593bc8ddd/mounts/shm
overlay         300G   34G  267G  12% /var/lib/docker/overlay2/35343d9d531c23d8d6191fede579d72a86ea240e48f15103ab796b813c29a9f3/merged
overlay         300G   34G  267G  12% /var/lib/docker/overlay2/42bbf2ae6c20184cc89a0569b16f8a0c3618b7fceed78acd764ba996deeb24d2/merged
shm              64M     0   64M   0% /var/lib/docker/containers/a38c9ede257b045fa22d814a71af3883a44a1d92f6b904c1e21e0d556a44a1c7/mounts/shm
overlay         300G   34G  267G  12% /var/lib/docker/overlay2/32dd2e127cdf71c180660bc7a50e48aeea6ca8af0224592464cb410e57434356/merged
overlay         300G   34G  267G  12% /var/lib/docker/overlay2/a15d810ffadeb9e45a6beebe9fa5c863dc4f419ec8c3ed77a1b4e6b778232aa4/merged
tmpfs            16G   12K   16G   1% /var/lib/kubelet/pods/6e3e2b66-6b96-4c8c-889a-4787cd99c58d/volumes/kubernetes.io~projected/kube-api-access-zdrc4
overlay         300G   34G  267G  12% /var/lib/docker/overlay2/1fa10d809f85ce0cf21515481a54c327e501bf651dab20a0b3d2953b5c2bdb7f/merged
shm              64M     0   64M   0% /var/lib/docker/containers/fe5ea69f682ba0167324d58bac059ccb9400dd6b0d77ceb34b20eeef67a6caeb/mounts/shm
overlay         300G   34G  267G  12% /var/lib/docker/overlay2/0dfd5a79e2a55ed0a809d92e8370e221b2e96e3f58317ce697f02b1c1823dc35/merged


I am getting the same error:

[root@learning-1 ss_kubernetese]# kubectl logs deployment/sdb-operator
2022/10/18 08:35:21 deleg.go:121        {cmd}   Go Version: go1.18.2
2022/10/18 08:35:21 deleg.go:121        {cmd}   Go OS/Arch: linux/amd64
2022/10/18 08:35:21 deleg.go:121        {cmd}   Operator Version: 3.0.33
2022/10/18 08:35:21 deleg.go:121        {cmd}   Commit Hash: db8f5aff
2022/10/18 08:35:21 deleg.go:121        {cmd}   Build Time: 2022-09-08T14:43:05Z
2022/10/18 08:35:21 deleg.go:121        {cmd}   Options:
2022/10/18 08:35:21 deleg.go:121        {cmd}   --cores-per-unit: 8.000000
2022/10/18 08:35:21 deleg.go:121        {cmd}   --memory-per-unit: 32.000000
2022/10/18 08:35:21 deleg.go:121        {cmd}   --overpack-factor: 0.000000
2022/10/18 08:35:21 deleg.go:121        {cmd}   --extra-cidrs: []
2022/10/18 08:35:21 deleg.go:121        {cmd}   --external-dns-domain-name: {false }
2022/10/18 08:35:21 deleg.go:121        {cmd}   --external-dns-ttl: {false 0}
2022/10/18 08:35:21 deleg.go:121        {cmd}   --ssl-secret-name:
2022/10/18 08:35:21 deleg.go:121        {cmd}   --merge-service-annotations: true
2022/10/18 08:35:21 deleg.go:121        {cmd}   --backup-default-deadline-seconds: 3600
2022/10/18 08:35:21 deleg.go:121        {cmd}   --backup-incremental-default-deadline-seconds: 3600
2022/10/18 08:35:21 deleg.go:121        {cmd}   --cluster-id: sdb-cluster
2022/10/18 08:35:21 deleg.go:121        {cmd}   --fs-group-id: 5555
2022/10/18 08:35:21 deleg.go:121        {controller-runtime.metrics}    Metrics server is starting to listen    addr: "0.0.0.0:9090"
2022/10/18 08:35:22 deleg.go:121        {cmd}   starting manager
2022/10/18 08:35:22 logr.go:249 Starting server path: "/metrics"  kind: "metrics"  addr: "[::]:9090"
I1018 08:35:22.078299       1 leaderelection.go:248] attempting to acquire leader lease default/memsql-operator-lock-sdb-cluster...
I1018 08:35:22.185802       1 leaderelection.go:258] successfully acquired lease default/memsql-operator-lock-sdb-cluster
2022/10/18 08:35:22 logr.go:249 {controller.memsqlcluster}      Starting EventSource    reconciler group: "memsql.com"  reconciler kind: "MemsqlCluster"  source: "kind source: *v1alpha1.MemsqlCluster"
2022/10/18 08:35:22 logr.go:249 {controller.memsqlcluster}      Starting EventSource    reconciler kind: "MemsqlCluster"  reconciler group: "memsql.com"  source: "kind source: *v1alpha1.MemsqlCluster"
2022/10/18 08:35:22 logr.go:249 {controller.memsqlcluster}      Starting EventSource    reconciler group: "memsql.com"  reconciler kind: "MemsqlCluster"  source: "kind source: *v1.StatefulSet"
2022/10/18 08:35:22 logr.go:249 {controller.memsqlcluster}      Starting EventSource    reconciler group: "memsql.com"  reconciler kind: "MemsqlCluster"  source: "kind source: *v1.Service"
2022/10/18 08:35:22 logr.go:249 {controller.memsqlcluster}      Starting EventSource    reconciler group: "memsql.com"  reconciler kind: "MemsqlCluster"  source: "kind source: *v1.Secret"
2022/10/18 08:35:22 logr.go:249 {controller.memsqlcluster}      Starting Controller     reconciler group: "memsql.com"  reconciler kind: "MemsqlCluster"
2022/10/18 08:35:22 logr.go:249 {events}        Normal  object: "{Kind:ConfigMap Namespace:default Name:memsql-operator-lock-sdb-cluster UID:53fa29ac-6705-43dd-b60d-25824c4d9f98 APIVersion:v1 ResourceVersion:2419 FieldPath:}"  reason: "LeaderElection"  message: "sdb-operator-867c9cb7b8-lb64q_e4905402-07eb-4403-a146-e17a751a7c2c became leader"
2022/10/18 08:35:22 logr.go:249 {events}        Normal  object: "{Kind:Lease Namespace:default Name:memsql-operator-lock-sdb-cluster UID:7aa6c7bd-a963-4833-b362-31044ac9a9e5 APIVersion:coordination.k8s.io/v1 ResourceVersion:2420 FieldPath:}"  reason: "LeaderElection"  message: "sdb-operator-867c9cb7b8-lb64q_e4905402-07eb-4403-a146-e17a751a7c2c became leader"
2022/10/18 08:35:22 logr.go:249 {controller.memsqlcluster}      Starting workers        reconciler group: "memsql.com"  reconciler kind: "MemsqlCluster"  worker count: "1"
2022/10/18 08:35:47 deleg.go:121        {controller.memsql}     reconciliation cause: memsqlcluster     clusterName: "sdb-cluster"  namespace: "default"
2022/10/18 08:35:47 logr.go:249 {controller.memsql}     Reconciling MemSQL Cluster.     Request.Name: "sdb-cluster"  Request.Namespace: "default"
2022/10/18 08:35:48 deleg.go:121        {memsql}        can't find operator deployment, trying uncached client  key: "default/operator-sdb-cluster"
2022/10/18 08:35:48 deleg.go:135        {memsql}        can't find operator deployment, metrics service will not be created     error: "deployments.apps "operator-sdb-cluster" not found"
2022/10/18 08:35:48 deleg.go:121        {memsql}        creating object type: "*v1.Secret"  name: "sdb-cluster"
2022/10/18 08:35:48 deleg.go:121        {memsql}        creating object type: "*v1.Service"  name: "svc-sdb-cluster"
2022/10/18 08:35:48 deleg.go:121        {controller.memsql}     reconciliation cause: statefulset       namespace: "default"  clusterName: "sdb-cluster"  serviceName: "svc-sdb-cluster"  namespace: "default"
2022/10/18 08:35:48 deleg.go:121        {memsql}        creating object type: "*v1.Service"  name: "svc-sdb-cluster-ddl"
2022/10/18 08:35:48 deleg.go:121        {controller.memsql}     reconciliation cause: statefulset       namespace: "default"  clusterName: "sdb-cluster"  serviceName: "svc-sdb-cluster-ddl"  namespace: "default"
2022/10/18 08:35:48 deleg.go:121        {controller.memsql}     Transition to phase pending on missing phase value
2022/10/18 08:35:48 deleg.go:121        {controller.memsql}     updating operator version       3.0.33
2022/10/18 08:35:48 deleg.go:135        {controller.memsql}     Reconciler error, will retry after      1s: "error"  failed to get service endpoint (svc-sdb-cluster-ddl): no ingress endpoint found
freya/kube/memsql.getServiceEndpoint
        /builds/singlestore/engineering/helios/go/src/freya/kube/memsql/service.go:482
freya/kube/memsql.serviceStatusUpdate
        /builds/singlestore/engineering/helios/go/src/freya/kube/memsql/service.go:184
freya/kube/memsql.NewServiceAction.func1.1
        /builds/singlestore/engineering/helios/go/src/freya/kube/memsql/service.go:235
freya/kube/memsql.NewServiceAction.func1
        /builds/singlestore/engineering/helios/go/src/freya/kube/memsql/service.go:317
freya/kube/memsql.ComposeActions.func1
        /builds/singlestore/engineering/helios/go/src/freya/kube/memsql/action.go:21
freya/kube/controller.(*Reconciler).Reconcile
        /builds/singlestore/engineering/helios/go/src/freya/kube/controller/controller.go:296
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile
        /builds/singlestore/engineering/helios/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:114
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
        /builds/singlestore/engineering/helios/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:311
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
        /builds/singlestore/engineering/helios/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:266
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
        /builds/singlestore/engineering/helios/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:227
runtime.goexit

SingleStore released a new operator image yesterday - Docker Hub

Can you please test with this image?

Hi @gkafity ,

I checked with the new image and I am still getting the same error.

Logs:

[root@learning-1 ss_kubernetese]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
sdb-operator-84bf4b74dd-n9bnk   1/1     Running   0          9m56s
[root@learning-1 ss_kubernetese]# kubectl describe pod
Name:         sdb-operator-84bf4b74dd-n9bnk
Namespace:    default
Priority:     0
Node:         learning-2/10.138.0.4
Start Time:   Fri, 21 Oct 2022 13:34:29 +0000
Labels:       name=sdb-operator
              pod-template-hash=84bf4b74dd
Annotations:  <none>
Status:       Running
IP:           10.244.1.2
IPs:
  IP:           10.244.1.2
Controlled By:  ReplicaSet/sdb-operator-84bf4b74dd
Containers:
  sdb-operator:
    Container ID:  docker://5541cb59306b362e7872f99b7728cb6b6d1962f7a761d58bc8750c2f2b674fb7
    Image:         singlestore/operator:3.0.60-c818b3a1
    Image ID:      docker-pullable://singlestore/operator@sha256:01373eaefb5d0c08633bb24e1cff740c8cb123ac5887375b94e9323bf299ff71
    Port:          <none>
    Host Port:     <none>
    Args:
      --merge-service-annotations
      --fs-group-id
      5555
      --cluster-id
      sdb-cluster
    State:          Running
      Started:      Fri, 21 Oct 2022 13:34:32 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      WATCH_NAMESPACE:  default (v1:metadata.namespace)
      POD_NAME:         sdb-operator-84bf4b74dd-n9bnk (v1:metadata.name)
      OPERATOR_NAME:    sdb-operator
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9gcq7 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-9gcq7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/sdb-operator-84bf4b74dd-n9bnk to learning-2
  Normal  Pulling    10m   kubelet            Pulling image "singlestore/operator:3.0.60-c818b3a1"
  Normal  Pulled     10m   kubelet            Successfully pulled image "singlestore/operator:3.0.60-c818b3a1" in 1.043047467s
  Normal  Created    10m   kubelet            Created container sdb-operator
  Normal  Started    10m   kubelet            Started container sdb-operator
[root@learning-1 ss_kubernetese]# cat /proc/cpuinfo | grep cpu\ cores |uniq
cpu cores       : 2
[root@learning-1 ss_kubernetese]# lscpu | grep -E '^Thread|^Core|^Socket|^CPU\('
CPU(s):                4
Thread(s) per core:    2
Core(s) per socket:    2
Socket(s):             1
[root@learning-1 ss_kubernetese]#  cat /proc/cpuinfo | grep processor
processor       : 0
processor       : 1
processor       : 2
processor       : 3
[root@learning-1 ss_kubernetese]# free -h
              total        used        free      shared  buff/cache   available
Mem:            15G        3.3G        9.2G         99M        3.0G         11G
Swap:            0B          0B          0B
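A side note on sizing: the operator logs above show it starting with its defaults of `--cores-per-unit: 8.000000` and `--memory-per-unit: 32.000000`, but this node only has 4 vCPUs and ~15 GB of RAM, so even once the service error is fixed, leaf pods sized at the defaults may never schedule. A hedged sketch of lowering the units in sdb-operator.yaml (the values 2 and 8 are illustrative examples for a machine this size, not official recommendations):

```yaml
# sdb-operator.yaml (container args excerpt).
# Assumption: the two unit flags are simply appended to the existing args;
# "2" cores / "8" GB per unit are example values for a 4-vCPU / 15 GB node.
args:
  - "--merge-service-annotations"
  - "--fs-group-id"
  - "5555"
  - "--cluster-id"
  - "sdb-cluster"
  - "--cores-per-unit"
  - "2"
  - "--memory-per-unit"
  - "8"
```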

Error:

[root@learning-1 ss_kubernetese]# kubectl get nodes
NAME         STATUS   ROLES                  AGE    VERSION
learning-1   Ready    control-plane,master   111s   v1.23.12
learning-2   Ready    <none>                 12s    v1.23.12
[root@learning-1 ss_kubernetese]# kubectl create -f  sdb-rbac.yaml
serviceaccount/sdb-operator created
role.rbac.authorization.k8s.io/sdb-operator created
rolebinding.rbac.authorization.k8s.io/sdb-operator created
[root@learning-1 ss_kubernetese]# kubectl create -f sdb-cluster-crd.yaml
customresourcedefinition.apiextensions.k8s.io/memsqlclusters.memsql.com created
[root@learning-1 ss_kubernetese]# kubectl create -f sdb-operator.yaml
deployment.apps/sdb-operator created
[root@learning-1 ss_kubernetese]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
sdb-operator-84bf4b74dd-n9bnk   1/1     Running   0          17s
[root@learning-1 ss_kubernetese]# kubectl create -f sdb-cluster.yaml
memsqlcluster.memsql.com/sdb-cluster created
[root@learning-1 ss_kubernetese]# kubectl get statefulsets
No resources found in default namespace.
[root@learning-1 ss_kubernetese]# kubectl logs deployment/sdb-operator
2022-10-21T13:34:32.576Z        INFO    cmd     operator/main.go:82     Options:
2022-10-21T13:34:32.577Z        INFO    cmd     operator/main.go:83     --cores-per-unit: 8.000000
2022-10-21T13:34:32.577Z        INFO    cmd     operator/main.go:84     --memory-per-unit: 32.000000
2022-10-21T13:34:32.577Z        INFO    cmd     operator/main.go:85     --overpack-factor: 0.000000
2022-10-21T13:34:32.577Z        INFO    cmd     operator/main.go:86     --extra-cidrs: []
2022-10-21T13:34:32.577Z        INFO    cmd     operator/main.go:87     --external-dns-domain-name: {false }
2022-10-21T13:34:32.577Z        INFO    cmd     operator/main.go:88     --external-dns-ttl: {false 0}
2022-10-21T13:34:32.577Z        INFO    cmd     operator/main.go:89     --ssl-secret-name:
2022-10-21T13:34:32.577Z        INFO    cmd     operator/main.go:90     --merge-service-annotations: true
2022-10-21T13:34:32.577Z        INFO    cmd     operator/main.go:113    --backup-default-deadline-seconds: 172800
2022-10-21T13:34:32.577Z        INFO    cmd     operator/main.go:121    --backup-incremental-default-deadline-seconds: 172800
2022-10-21T13:34:32.577Z        INFO    cmd     operator/main.go:145    --cluster-id: sdb-cluster
2022-10-21T13:34:32.577Z        INFO    cmd     operator/main.go:149    --fs-group-id: 5555
2022-10-21T13:34:32.851Z        INFO    controller-runtime.metrics      metrics/listener.go:44  Metrics server is starting to listen    {"addr": "0.0.0.0:9090"}
2022-10-21T13:34:32.985Z        INFO    runtime/proc.go:250     starting manager
2022-10-21T13:34:32.986Z        INFO    runtime/asm_amd64.s:1571        Starting server {"kind": "health probe", "addr": "[::]:8080"}
I1021 13:34:32.986318       1 leaderelection.go:248] attempting to acquire leader lease default/memsql-operator-lock-sdb-cluster...
2022-10-21T13:34:32.986Z        INFO    runtime/asm_amd64.s:1571        Starting server {"path": "/metrics", "kind": "metrics", "addr": "[::]:9090"}
I1021 13:34:33.091820       1 leaderelection.go:258] successfully acquired lease default/memsql-operator-lock-sdb-cluster
2022-10-21T13:34:33.092Z        INFO    controller.memsqlcluster        controller/controller.go:234    Starting EventSource    {"reconciler group": "memsql.com", "reconciler kind": "MemsqlCluster", "source": "kind source: *v1alpha1.MemsqlCluster"}
2022-10-21T13:34:33.092Z        INFO    controller.memsqlcluster        controller/controller.go:234    Starting EventSource    {"reconciler group": "memsql.com", "reconciler kind": "MemsqlCluster", "source": "kind source: *v1alpha1.MemsqlCluster"}
2022-10-21T13:34:33.092Z        INFO    controller.memsqlcluster        controller/controller.go:234    Starting EventSource    {"reconciler group": "memsql.com", "reconciler kind": "MemsqlCluster", "source": "kind source: *v1.StatefulSet"}
2022-10-21T13:34:33.092Z        INFO    controller.memsqlcluster        controller/controller.go:234    Starting EventSource    {"reconciler group": "memsql.com", "reconciler kind": "MemsqlCluster", "source": "kind source: *v1.Service"}
2022-10-21T13:34:33.092Z        INFO    controller.memsqlcluster        controller/controller.go:234    Starting EventSource    {"reconciler group": "memsql.com", "reconciler kind": "MemsqlCluster", "source": "kind source: *v1.Secret"}
2022-10-21T13:34:33.092Z        INFO    controller.memsqlcluster        controller/controller.go:234    Starting Controller     {"reconciler group": "memsql.com", "reconciler kind": "MemsqlCluster"}
2022-10-21T13:34:33.194Z        INFO    controller.memsqlcluster        controller/controller.go:234    Starting workers        {"reconciler group": "memsql.com", "reconciler kind": "MemsqlCluster", "worker count": 1}
2022-10-21T13:34:54.858Z        INFO    controller/configmaps_secrets.go:59     reconciliation cause: memsqlcluster     {"clusterName": "sdb-cluster", "namespace": "default"}
2022-10-21T13:34:54.858Z        INFO    controller/controller.go:114    Reconciling MemSQL Cluster.     {"Request.Namespace": "default", "Request.Name": "sdb-cluster"}
2022-10-21T13:34:54.996Z        INFO    memsql/metrics.go:58    can't find operator deployment, trying uncached client  {"key": "default/operator-sdb-cluster"}
2022-10-21T13:34:55.031Z        ERROR   memsql/metrics.go:61    can't find operator deployment, metrics service will not be created     {"error": "deployments.apps \"operator-sdb-cluster\" not found"}
freya/kube/memsql.NewMetricsServiceAction.func1
        freya/kube/memsql/metrics.go:61
freya/kube/memsql.ComposeActions.func1
        freya/kube/memsql/action.go:22
freya/kube/controller.(*Reconciler).Reconcile
        freya/kube/controller/controller.go:296
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile
        sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:114
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
        sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:311
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
        sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:266
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
        sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:227
2022-10-21T13:34:55.032Z        INFO    memsql/util.go:60       creating object {"type": "*v1.Secret", "name": "sdb-cluster"}
2022-10-21T13:34:55.067Z        INFO    memsql/util.go:60       creating object {"type": "*v1.Service", "name": "svc-sdb-cluster"}
2022-10-21T13:34:55.103Z        INFO    controller/configmaps_secrets.go:94     reconciliation cause: statefulset       {"namespace": "default", "clusterName": "sdb-cluster", "serviceName": "svc-sdb-cluster", "namespace": "default"}
2022-10-21T13:34:55.103Z        INFO    memsql/util.go:60       creating object {"type": "*v1.Service", "name": "svc-sdb-cluster-ddl"}
2022-10-21T13:34:55.142Z        INFO    controller/configmaps_secrets.go:94     reconciliation cause: statefulset       {"namespace": "default", "clusterName": "sdb-cluster", "serviceName": "svc-sdb-cluster-ddl", "namespace": "default"}
2022-10-21T13:34:55.142Z        INFO    controller/controller.go:300    Transition to phase pending on missing phase value
2022-10-21T13:34:55.142Z        INFO    controller/controller.go:321    Updating operator version       {"previous version": "", "new version": "3.0.60"}
2022-10-21T13:34:55.142Z        INFO    controller/controller.go:328    Updating observed generation    {"previous value": 0, "new value": 1}
2022-10-21T13:34:55.187Z        ERROR   controller/errors.go:95 Reconciler error        {"will retry after": "1s", "error": "failed to get service endpoint (svc-sdb-cluster-ddl): no ingress endpoint found"}
freya/kube/controller.(*ErrorManager).Update
        freya/kube/controller/errors.go:95
freya/kube/controller.(*Reconciler).Reconcile
        freya/kube/controller/controller.go:342
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile
        sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:114
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
        sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:311
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
        sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:266
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
        sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:227
[... the same "can't find operator deployment" and "failed to get service endpoint (svc-sdb-cluster-ddl): no ingress endpoint found" errors then repeat on every reconcile, with the retry backoff growing: 1s, 2s, 3s, 5s, 8s ...]
[... the same "Reconciling MemSQL Cluster" / "no ingress endpoint found" cycle repeats, with the retry interval backing off to 13s and then 21s ...]

This error ("can't find operator deployment, metrics service will not be created") is not the root cause of the problem and can be treated as a warning. It looks like the operator is stuck on the lack of an ingress endpoint for the DDL service, which appears to be the same issue you raised here.
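Since `svc-sdb-cluster-ddl` shows `EXTERNAL-IP <pending>`, this looks like a bare-metal cluster with no cloud controller to assign LoadBalancer addresses, so the operator never sees an ingress endpoint and keeps retrying. As a rough sketch (assuming NodePort access is acceptable in your environment), you could switch the service type so it gets a reachable endpoint:

```shell
# A LoadBalancer service stays <pending> on bare metal because nothing
# provisions an external IP. One workaround is to patch the DDL service
# to NodePort so clients can reach it via <node-ip>:<node-port>:
kubectl patch service svc-sdb-cluster-ddl \
  -p '{"spec": {"type": "NodePort"}}'

# Verify the service now exposes a node port instead of waiting on an IP:
kubectl get service svc-sdb-cluster-ddl
```

Alternatively, installing a bare-metal load-balancer implementation such as MetalLB lets LoadBalancer services be assigned an address without changing the service type.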

Let me know if that answers your question.

Regards,
Brooks
