Kubernetes GKE

Kubernetes with Google Cloud

This guide was tested with Kubernetes GKE v1.16.15-gke.6000

Step 0 - Set up your environment

1. Open a new Cloud Shell session

2. Set your project's default Compute Engine zone and create a Google Kubernetes Engine cluster:

gcloud config set compute/zone europe-west1-b
gcloud container clusters create resiot-tutorial --num-nodes=3
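
(optional) Before continuing, you can check that the cluster is up and that kubectl points at it. This is just a sanity check, not part of the original procedure:

# fetch credentials for kubectl (Cloud Shell usually configures this automatically after cluster creation)
gcloud container clusters get-credentials resiot-tutorial
# all three nodes should report STATUS Ready
kubectl get nodes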

3. In this example we create a Google Cloud load balancer managed by ingress-nginx (see https://kubernetes.github.io/ingress-nginx/deploy/#gce-gke)

# Install nginx-ingress
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.43.0/deploy/static/provider/cloud/deploy.yaml

# Output:
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created

# Verify installation with:
kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --watch

4. Retrieve the public IP address of the Google Cloud load balancer with:

kubectl get service ingress-nginx-controller -n ingress-nginx

# Output:
# NAME                       TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                      AGE
# ingress-nginx-controller   LoadBalancer   10.92.9.208   39.178.203.11   80:31361/TCP,443:30871/TCP   11m

# in this example it is 39.178.203.11; from now on we will call it [your_public_ip]

# we therefore assume that the hosts used to access the services will be:

iot.[your_public_ip].xip.io  tcp port 80 for web access to ResIOT Network Server/Infrastructure Manager/IoT Platform
grpc.[your_public_ip].xip.io tcp port 80 for Grpc Api access


# xip.io is a magic domain name that provides wildcard DNS for any IP address:
# 10.0.0.1.xip.io resolves to 10.0.0.1, www.10.0.0.1.xip.io resolves to 10.0.0.1 - see xip.io
# if you prefer, or for a production environment, you can instead create two DNS records on your own domain and point them to [your_public_ip]
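
# (optional) a quick way to double-check the wildcard DNS, assuming the xip.io service is reachable from your shell:
nslookup iot.[your_public_ip].xip.io
# the answer should be [your_public_ip]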

Step 1 - contents of ssd-storageclass.yaml file:

In Cloud Shell, create the ssd-storageclass.yaml file with

sudo nano ssd-storageclass.yaml

and paste the following content:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: faster
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
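
After this file is applied in the deployment step, you can optionally confirm that the StorageClass exists (a simple check, not part of the original procedure):

kubectl get storageclass faster
# should list the class with provisioner kubernetes.io/gce-pd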

Step 2 - contents of resiot00001-postgres-redis.yaml file:

In Cloud Shell, create the resiot00001-postgres-redis.yaml file with

sudo nano resiot00001-postgres-redis.yaml

and paste the following content.
If this is a production instance, remember to edit:
storage: 15Gi - increase the space from 15Gi to 50-100Gi
POSTGRES_PASSWORD: replace resiotdbpassword with your own secure password (remember that you will also use it later in the resiot00001-resiot.yaml file)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: resiot00001-postgres-redis-volumeclaim
spec:
  storageClassName: faster
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 15Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: resiot00001-postgres-redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: resiot00001-postgres-redis
  template:
    metadata:
      labels:
        app: resiot00001-postgres-redis
    spec:
      containers:
        - name: resiot00001-postgres
          image: postgres:12.5-alpine
          env:
            - name: POSTGRES_USER
              value: "resiotdb"
            - name: POSTGRES_PASSWORD
              value: "resiotdbpassword"
            - name: POSTGRES_DB
              value: "resiotcore"
            - name: PGDATA
              value: "/var/lib/postgresql/data/pgdata"
          ports:
            - containerPort: 5432
              protocol: TCP
          volumeMounts:
            - name: persistent-storage
              mountPath: /var/lib/postgresql/data
              subPath: postgres
        - name: resiot00001-redis
          image: redis:5.0.10-alpine
          ports:
            - containerPort: 6379
              protocol: TCP
          volumeMounts:
            - name: persistent-storage
              mountPath: /data
              subPath: redis
      volumes:
        - name: persistent-storage
          persistentVolumeClaim:
            claimName: resiot00001-postgres-redis-volumeclaim
---
apiVersion: v1
kind: Service
metadata:
  name: resiot00001-postgres-redis-clusterip
spec:
  ports:
    - name: postgres
      port: 5432
      protocol: TCP
      targetPort: 5432
    - name: redis
      port: 6379
      protocol: TCP
      targetPort: 6379
  selector:
    app: resiot00001-postgres-redis
  type: ClusterIP
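
Once this manifest has been applied in the deployment step, you can optionally verify that PostgreSQL accepts the credentials from inside the cluster. The one-off pod below is only a sketch and assumes the default resiotdb/resiotdbpassword values:

# run a temporary psql client against the ClusterIP service and list the databases
kubectl run psql-check --rm -it --restart=Never --image=postgres:12.5-alpine -- \
  psql "postgres://resiotdb:resiotdbpassword@resiot00001-postgres-redis-clusterip:5432/resiotcore" -c "\l"
# a database list that includes resiotcore confirms the service and credentials work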

Step 3 - contents of resiot00001-storage.yaml file:

In Cloud Shell, create the resiot00001-storage.yaml file with

sudo nano resiot00001-storage.yaml

and paste the following content:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: resiot00001-nfs-pv-provisioning
spec:
  storageClassName: faster
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi  
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: resiot00001-nfs-server
spec:
  replicas: 1
  selector:
    role: resiot00001-nfs-server
  template:
    metadata:
      labels:
        role: resiot00001-nfs-server
    spec:
      containers:
      - name: resiot00001-nfs-server
        image: k8s.gcr.io/volume-nfs:0.8
        ports:
          - name: nfs
            containerPort: 2049
          - name: mountd
            containerPort: 20048
          - name: rpcbind
            containerPort: 111
        securityContext:
          privileged: true
        volumeMounts:
          - mountPath: /exports
            name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: resiot00001-nfs-pv-provisioning      
---
kind: Service
apiVersion: v1
metadata:
  name: resiot00001-nfs-server
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: resiot00001-nfs-server
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: resiot00001-nfs
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: resiot00001-nfs-server.default.svc.cluster.local
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: resiot00001-nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 20Gi
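
After this manifest has been applied in the deployment step, you can optionally check that the NFS server pod is running and that both claims are bound (verification only):

kubectl get pods -l role=resiot00001-nfs-server
kubectl get pvc resiot00001-nfs-pv-provisioning resiot00001-nfs
# both claims should show STATUS Bound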

Step 4 - contents of resiot00001-mqtt.yaml file:

In Cloud Shell, create the resiot00001-mqtt.yaml file with

sudo nano resiot00001-mqtt.yaml

and paste the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: resiot00001-mqtt
spec:
  replicas: 1
  selector:
    matchLabels:
      app: resiot00001-mqtt
  template:
    metadata:
      labels:
        app: resiot00001-mqtt
    spec:
      containers:
        - name: resiot00001-mqtt
          image: resiot/resiotmqtt:1000020
          env:
            - name: USERNAME
              value: "mqttuser"
            - name: PASSWORD
              value: "yourpassword_change_please"
            - name: TLS
              value: "y"
          ports:
            - containerPort: 1883
              protocol: TCP
          volumeMounts:
            - name: persistent-storage-certmqtt
              mountPath: /certfiles
              subPath: resiotcertmqtt
      volumes:
        - name: persistent-storage-certmqtt
          persistentVolumeClaim:
            claimName: resiot00001-nfs
---
apiVersion: v1
kind: Service
metadata:
  name: resiot00001-mqtt-clusterip
spec:
  ports:
  - port: 1883
    protocol: TCP
    targetPort: 1883
  selector:
    app: resiot00001-mqtt
  type: ClusterIP
---

# To finish, see the client-certs example:
# https://kubernetes.github.io/ingress-nginx/examples/auth/client-certs/

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: resiot00001-ingress-resource-mqtt
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    #cert-manager.io/issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - mqtt.[your_public_ip].xip.io
    secretName: resiot00001-tls-mqtt
  rules:
  - host: mqtt.[your_public_ip].xip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: resiot00001-mqtt-clusterip
          servicePort: 1883
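
After deployment, a quick sanity check on the broker is to look at its pod and logs (the exact log lines depend on the resiot/resiotmqtt image, so treat this as a sketch):

kubectl get pods -l app=resiot00001-mqtt
# inspect the broker logs for startup errors
kubectl logs deploy/resiot00001-mqtt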

Step 5 - contents of resiot00001-resiot.yaml file:

In Cloud Shell, create the resiot00001-resiot.yaml file with

sudo nano resiot00001-resiot.yaml

and paste the following content, remembering to edit:
1. RESIOT_LORA_BAND
2. RESIOT_LORA_NETID
3. RESIOT_DB_URL: if you changed the password in the resiot00001-postgres-redis.yaml file, use it in place of resiotdbpassword
4. replace [your_public_ip] in grpc.[your_public_ip].xip.io with your public IP address, or enter your own gRPC host

apiVersion: apps/v1
kind: Deployment
metadata:
  name: resiot00001-resiot
spec:
  replicas: 1
  selector:
    matchLabels:
      app: resiot00001-resiot
  template:
    metadata:
      labels:
        app: resiot00001-resiot
    spec:
      containers:
        - name: resiot00001-resiot
          image: resiot/resiot:1000020
          env:
            - name: NO_LNS
              value: "n"
            - name: NO_PLA
              value: "n"
            - name: RESIOT_DB_TYPE
              value: "postgres"
            - name: RESIOT_DB_URL
              value: "postgres://resiotdb:resiotdbpassword@resiot00001-postgres-redis-clusterip:5432/resiotcore?sslmode=disable"   
            - name: RESIOT_REDIS_URL
              value: "redis://resiot00001-postgres-redis-clusterip:6379"
            - name: RESIOT_MQTT_URL
              value: "tcp://resiot00001-mqtt-clusterip:1883"          
            - name: RESIOT_LORA_BAND
              value: "EU_863_870"
            - name: RESIOT_LORA_NETID
              value: "A0A1A2"
            - name: RESIOT_EXTERNAL_ACCESS_UDP_HOST
              value: "0.0.0.0"
            - name: RESIOT_EXTERNAL_ACCESS_UDP_PORT
              value: "7677"
            - name: RESIOT_EXTERNAL_ACCESS_GRPC_HOST
              value: "grpc.[your_public_ip].xip.io:443"
          ports:
            - containerPort: 8088
              protocol: TCP
            - containerPort: 8095
              protocol: TCP
            - containerPort: 7677
              protocol: UDP
          volumeMounts:
            - name: persistent-storage
              mountPath: /run
              subPath: resiotupdfld
            - name: persistent-storage
              mountPath: /data
              subPath: resiotdata
      volumes:
        - name: persistent-storage
          persistentVolumeClaim:
            claimName: resiot00001-nfs
---
apiVersion: v1
kind: Service
metadata:
  name: resiot00001-resiot-clusterip
spec:
  ports:
    - name: http
      port: 8088
      protocol: TCP
      targetPort: 8088
    - name: grpc
      port: 8095
      protocol: TCP
      targetPort: 8095
    - name: udp
      port: 7677
      protocol: UDP
      targetPort: 7677
  selector:
    app: resiot00001-resiot
  type: ClusterIP
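
Before the ingress is in place you can optionally reach the web interface through a local port-forward (just a local test; it assumes the resiot pod is already Running):

kubectl port-forward svc/resiot00001-resiot-clusterip 8088:8088
# in another shell (or via the Cloud Shell web preview on port 8088):
curl -I http://127.0.0.1:8088/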

Step 6 - contents of resiot00001-ingress.yaml file:

In Cloud Shell, create the resiot00001-ingress.yaml file with

sudo nano resiot00001-ingress.yaml

and paste the following content, remembering to
replace [your_public_ip] in iot.[your_public_ip].xip.io with your public IP address, or enter your own IoT host

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: resiot00001-ingress-resource
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    #cert-manager.io/issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - iot.[your_public_ip].xip.io
    secretName: resiot00001-tls
  rules:
  - host: iot.[your_public_ip].xip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: resiot00001-resiot-clusterip
          servicePort: 8088
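
If DNS is not resolving yet, you can test the ingress rule directly against the load balancer by forcing the Host header (a quick check, using the [your_public_ip] placeholder defined earlier):

curl -I -H "Host: iot.[your_public_ip].xip.io" http://[your_public_ip]/
# any HTTP answer from the resiot web server (200, 30x, ...) confirms the ingress routes correctly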

Step 7 - contents of resiot00001-ingress-grpc.yaml file:

gRPC requires a TLS certificate; in this example we create a self-signed one and store it in the secret resiot00001-tls-grpc:

export MY_HOST=grpc.[your_public_ip].xip.io

# create an SSL certificate
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
  -keyout xip.key -out xip.crt -subj "/CN=$MY_HOST" \
  -addext "subjectAltName=DNS:$MY_HOST"

kubectl create secret tls resiot00001-tls-grpc --cert=xip.crt --key=xip.key
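
You can optionally inspect the generated certificate and the resulting secret before moving on (verification only):

openssl x509 -in xip.crt -noout -subject -dates
kubectl get secret resiot00001-tls-grpc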

In Cloud Shell, create the resiot00001-ingress-grpc.yaml file with

sudo nano resiot00001-ingress-grpc.yaml

and paste the following content, remembering to
replace [your_public_ip] in grpc.[your_public_ip].xip.io with your public IP address, or enter your own gRPC host

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: resiot00001-ingress-resource-grpc
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    nginx.ingress.kubernetes.io/server-snippet: |
      grpc_read_timeout 600s;
      grpc_send_timeout 600s;
      client_body_timeout 600s;
    #cert-manager.io/issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - grpc.[your_public_ip].xip.io
    secretName: resiot00001-tls-grpc
  rules:
  - host: grpc.[your_public_ip].xip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: resiot00001-resiot-clusterip
          servicePort: 8095

Step 8 - Deployment

kubectl apply -f ssd-storageclass.yaml

# the storage manifest from Step 3 must be applied too, since the mqtt and resiot deployments mount the resiot00001-nfs claim
kubectl apply -f resiot00001-storage.yaml

kubectl apply -f resiot00001-postgres-redis.yaml
# output:
# persistentvolumeclaim/resiot00001-postgres-redis-volumeclaim created
# deployment.apps/resiot00001-postgres-redis created
# service/resiot00001-postgres-redis-clusterip created

kubectl apply -f resiot00001-mqtt.yaml
# output:
# deployment.apps/resiot00001-mqtt created
# service/resiot00001-mqtt-clusterip created

# verify pods with
kubectl get pods --watch

# when all pods are running

kubectl apply -f resiot00001-resiot.yaml
# output:
# deployment.apps/resiot00001-resiot created
# service/resiot00001-resiot-clusterip created

kubectl apply -f resiot00001-ingress.yaml
# output:
# ingress.extensions/resiot00001-ingress-resource created

kubectl apply -f resiot00001-ingress-grpc.yaml
# output:
# ingress.extensions/resiot00001-ingress-resource-grpc created

# check gRPC with https://github.com/fullstorydev/grpcurl
grpcurl -v -insecure grpc.[your_public_ip].xip.io:443 nodeaa/Getzz
# if you get "Error invoking method "nodeaa/Getzz": failed to query for service descriptor "nodeaa": server does not support the reflection API", everything is OK

DONE. Now try to log in via the web at:
http://iot.[your_public_ip].xip.io/


See here for the initial configuration:
https://docs.resiot.io/onpremise_setup/

Thanks

Step 9 - [Optional] SSL/TLS Certificate

# install cert-manager
# cert-manager is a native Kubernetes certificate management controller
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.1.0/cert-manager.yaml

# output
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
namespace/cert-manager created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created

# install the Let's Encrypt Issuer (staging, for testing)
# when the editor opens, replace user@example.com with your email address, then press ESC, type :x and press ENTER to save and exit
kubectl create --edit -f https://cert-manager.io/docs/tutorials/acme/example/staging-issuer.yaml

# output:
issuer.cert-manager.io/letsencrypt-staging created


# install the Let's Encrypt Issuer (production)
# when the editor opens, replace user@example.com with your email address, then press ESC, type :x and press ENTER to save and exit
kubectl create --edit -f https://cert-manager.io/docs/tutorials/acme/example/production-issuer.yaml

# output:
issuer.cert-manager.io/letsencrypt-prod created
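
For reference, the issuer manifests you just edited look roughly like the following (production version; the canonical files are at the URLs above, and the only field you normally change is the email):

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # the Let's Encrypt production ACME endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    # replace with your email address
    email: user@example.com
    # secret that stores the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    # answer HTTP-01 challenges through the nginx ingress
    - http01:
        ingress:
          class: nginx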


# If you want to use an SSL certificate, you cannot use the xip.io domain; you must use your own domain
# now configure the DNS of your domain like this:

iot.yourdomain.com  -->  [your_public_ip]
grpc.yourdomain.com  -->  [your_public_ip]

# change the file resiot00001-ingress-grpc.yaml and replace
# 1. grpc.[your_public_ip].xip.io  with  grpc.yourdomain.com
# 2. #cert-manager.io/issuer: "letsencrypt-prod"  with  cert-manager.io/issuer: "letsencrypt-prod"  (i.e. uncomment the annotation)
sudo nano resiot00001-ingress-grpc.yaml
# update config
kubectl delete -f resiot00001-ingress-grpc.yaml
kubectl apply -f resiot00001-ingress-grpc.yaml

# change the file resiot00001-ingress.yaml and replace
# 1. iot.[your_public_ip].xip.io  with  iot.yourdomain.com
# 2. #cert-manager.io/issuer: "letsencrypt-prod"  with  cert-manager.io/issuer: "letsencrypt-prod"  (i.e. uncomment the annotation)
sudo nano resiot00001-ingress.yaml
# update config
kubectl delete secret resiot00001-tls-grpc
kubectl delete -f resiot00001-ingress.yaml
kubectl apply -f resiot00001-ingress.yaml

# change the file resiot00001-resiot.yaml and replace
#    - name: RESIOT_EXTERNAL_ACCESS_GRPC_HOST
#      value: "grpc.[your_public_ip].xip.io:443"
# with
#      value: "grpc.yourdomain.com:443"
sudo nano resiot00001-resiot.yaml
# update config
kubectl apply -f resiot00001-resiot.yaml
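
Once the ingresses have been re-applied with the cert-manager.io/issuer annotation enabled, you can follow the certificate issuance. cert-manager creates Certificate resources named after the secretName fields of the ingresses, so (as a sketch):

kubectl get certificate
kubectl describe certificate resiot00001-tls
kubectl describe certificate resiot00001-tls-grpc
# READY True means Let's Encrypt issued the certificate and stored it in the corresponding secret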

Clean up

kubectl delete -f resiot00001-ingress.yaml
kubectl delete -f resiot00001-ingress-grpc.yaml
kubectl delete -f resiot00001-resiot.yaml
kubectl delete -f resiot00001-mqtt.yaml
kubectl delete -f resiot00001-postgres-redis.yaml