1. Topology

In this lab environment I have two sites, Site-01a and Site-01b. Each site contains:

  • 1x AVI Controller
  • 1x vCenter Server
  • 1x Openshift cluster with 3x master nodes and 3x worker nodes
  • 1x Linux host used for bootstrapping and most of the day-to-day work

Several Service Engines handle the load balancing work. They are split into two Service Engine Groups: one group for DNS and the other for the ingress controller.

Here is the topology I used in this lab environment.

2. Requirements

Software Requirements

The setup was created with the software versions below:

Prerequisites

To install AMKO, we need:

  • At least one Openshift cluster.
  • The avi-system namespace:
    kubectl create ns avi-system
    
  • A kubeconfig file with permission to read the service and ingress/route objects on all clusters.

I have two Openshift clusters, one in each site. A kubeconfig file with access to both clusters is required so that AMKO can watch the API of both clusters.
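If you already have a separate admin kubeconfig for each cluster, one way to build this combined file is to merge and flatten them (a sketch; ocp01.kubeconfig and ocp02.kubeconfig are placeholder file names for the per-cluster kubeconfigs):

# Merge both per-cluster kubeconfig files into one flattened file
KUBECONFIG=ocp01.kubeconfig:ocp02.kubeconfig kubectl config view --flatten > gslb-members

# Confirm both cluster contexts are present in the merged file
kubectl --kubeconfig gslb-members config get-contexts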

For a full walkthrough of creating this kubeconfig file, you can refer to this tutorial. Name the file gslb-members and create a secret from it:

kubectl create secret generic gslb-config-secret --from-file gslb-members -n avi-system

3. Install Helm

The guide on installing Helm can be found here. The commands below are for Debian/Ubuntu:

curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
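
A quick way to confirm that the Helm 3 client is available:

$ helm version --short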

4. Installing AMKO

Step 1: Get the values.yaml file from the avinetworks GitHub repository

mkdir ako
cd ako
wget https://raw.githubusercontent.com/avinetworks/avi-helm-charts/master/charts/incubator/amko/values.yaml
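
The values.yaml file is where the GSLB leader Controller and the member cluster contexts are defined. A minimal sketch (the Controller address and context names below are placeholders, and the field names should be verified against the downloaded file):

configs:
  gslbLeaderController: "192.168.110.20"   # placeholder: GSLB leader Controller IP/FQDN
  memberClusters:
    - clusterContext: "ocp01-admin"        # placeholder: context names from the gslb-members kubeconfig
    - clusterContext: "ocp02-admin"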

Step 2: Add AMKO repository

$ helm repo add amko https://avinetworks.github.io/avi-helm-charts/charts/stable/amko

Step 3: Search for available charts

$ helm search repo
NAME            CHART VERSION   APP VERSION     DESCRIPTION
amko/amko       1.2.1           1.2.1           A helm chart for Avi Multicluster Kubernetes Op...

Step 4: Install AMKO

$ helm install amko/amko --generate-name --version 1.2.1 -f values.yaml --set configs.gslbLeaderController=<leader_controller_ip> --namespace=avi-system

Step 5: Verify the installation

$ oc get all -n avi-system
NAME                        READY   STATUS    RESTARTS   AGE
pod/ako-568f47d9f7-68hkn    1/1     Running   0          43h
pod/amko-84ff6655bf-k2stf   1/1     Running   0          20d

NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ako    1/1     1            1           43h
deployment.apps/amko   1/1     1            1           20d

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/ako-568f47d9f7    1         1         1       43h
replicaset.apps/amko-84ff6655bf   1         1         1       20d

5. Deploy Demo App

Online Boutique is a cloud-native microservices demo application consisting of a 10-tier microservices stack. It is a web-based e-commerce app where users can browse items, add them to the cart, and purchase them.

The application can be found on its GitHub page.

This demo application has been installed on both clusters. Each cluster also has AKO installed.
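
For reference, one way to deploy it into each cluster is to apply the upstream manifests (a sketch, assuming the repository layout has not changed):

git clone https://github.com/GoogleCloudPlatform/microservices-demo.git
cd microservices-demo
# Apply the pre-built manifests into the current cluster context
oc apply -f release/kubernetes-manifests.yaml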

The example app above does not include an ingress, so I added one using the YAML below:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontend-ingress
  labels:
    app: avi-gslb
spec:
  rules:
    - host: shop.apps.corp.local
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            serviceName: frontend
            servicePort: 80

The ingress is on the subdomain shop.apps.corp.local and routes traffic to the frontend service.
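
Note that the networking.k8s.io/v1beta1 Ingress API has been removed in newer Kubernetes versions; on clusters that only serve networking.k8s.io/v1, the equivalent object would look like this (same host, path, and label, only the backend syntax changes):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
  labels:
    app: avi-gslb
spec:
  rules:
    - host: shop.apps.corp.local
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: frontend
              port:
                number: 80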

Below are the results:

$ oc get pods
NAME                                    READY   STATUS        RESTARTS   AGE
adservice-86bc987ccd-x2n2q              1/1     Running       1          11d
cartservice-c877477df-qz2rp             1/1     Running       7          5d13h
cartservice-c877477df-vjzzj             1/1     Terminating   2          11d
checkoutservice-cf8cf75db-45qzc         1/1     Running       0          11d
currencyservice-6c596c8df8-bw22b        1/1     Running       1          11d
emailservice-d86586496-8xgjh            1/1     Running       1          11d
frontend-d6f4f8984-fft7g                1/1     Running       0          15h
frontend-d6f4f8984-htcx2                1/1     Running       0          15h
frontend-d6f4f8984-n85jw                1/1     Running       0          15h
frontend-d6f4f8984-rpggn                1/1     Terminating   0          11d
frontend-d6f4f8984-v5z9z                1/1     Running       0          15h
frontend-d6f4f8984-zc8jv                1/1     Running       0          15h
loadgenerator-558d6c8d85-4cwt8          1/1     Terminating   6          11d
loadgenerator-558d6c8d85-bvww2          1/1     Running       2          5d13h
paymentservice-7cb9cfd8b8-5jqrq         1/1     Running       0          5d13h
paymentservice-7cb9cfd8b8-n7zlv         1/1     Terminating   1          11d
productcatalogservice-7bb4c9868-96khz   1/1     Running       0          5d13h
productcatalogservice-7bb4c9868-lcmj2   1/1     Terminating   0          11d
recommendationservice-df4dc9bfb-759qh   1/1     Running       5          11d
redis-cart-659df7674c-8djjz             1/1     Running       0          5d13h
redis-cart-659df7674c-9khtb             1/1     Terminating   0          11d
shippingservice-df95d5484-86gqs         1/1     Terminating   0          11d
shippingservice-df95d5484-vh5pr         1/1     Running       0          5d13h
$ oc get svc
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
adservice               ClusterIP      172.30.153.106   <none>           9555/TCP       11d
cartservice             ClusterIP      172.30.61.177    <none>           7070/TCP       11d
checkoutservice         ClusterIP      172.30.130.82    <none>           5050/TCP       11d
currencyservice         ClusterIP      172.30.129.103   <none>           7000/TCP       11d
emailservice            ClusterIP      172.30.53.143    <none>           5000/TCP       11d
frontend                ClusterIP      172.30.49.167    <none>           80/TCP         11d
frontend-external       LoadBalancer   172.30.19.147    192.168.243.21   80:31315/TCP   11d
paymentservice          ClusterIP      172.30.206.8     <none>           50051/TCP      11d
productcatalogservice   ClusterIP      172.30.191.141   <none>           3550/TCP       11d
recommendationservice   ClusterIP      172.30.98.64     <none>           8080/TCP       11d
redis-cart              ClusterIP      172.30.250.199   <none>           6379/TCP       11d
shippingservice         ClusterIP      172.30.221.246   <none>           50051/TCP      11d
$ oc get routes
NAME                     HOST/PORT                         PATH   SERVICES   PORT   TERMINATION   WILDCARD
frontend-ingress-k4x5c   shop.apps.corp.local ... 1 more   /      frontend   http                 None

I can look at a specific pod in the frontend deployment; this should be my pool member:

$ oc get pods -o wide | grep frontend
frontend-d6f4f8984-6vnzj                1/1     Running       0          15m     10.130.2.25   ocp01-grq8j-worker-w26bz   <none>           <none>

Below is my registered GSLB Service. The GSLB service is configured with a Health Monitor to check the health of the Virtual Service. These objects and rules are created automatically by AMKO.

Below is my GSLB pool member.
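
On the cluster side, the AMKO objects driving this can also be inspected directly (a sketch; the exact resource names can be confirmed with oc api-resources | grep -i gslb):

# GSLBConfig holds the leader/member configuration AMKO was installed with
oc get gslbconfig -n avi-system -o yaml

# GlobalDeploymentPolicy (GDP) selects which namespaces/objects take part in GSLB
oc get globaldeploymentpolicy -n avi-system -o yaml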

6. Troubleshooting

Below are issues I found during the installation process:

  1. Unauthorized access to get ingress object
    INFO	utils/ingress.go:61	networkingv1 ingresses not found, setting informer for extensionsv1: Unauthorized
    <output truncated>
    E0904 12:44:38.299026       1 reflector.go:123] amko/gslb/ingestion/member_controllers.go:189: Failed to list *v1.Namespace: Unauthorized
    E0904 12:44:38.299027       1 reflector.go:123] amko/gslb/ingestion/member_controllers.go:183: Failed to list *v1.Service: Unauthorized
    E0904 12:44:38.299140       1 reflector.go:123] amko/gslb/ingestion/member_controllers.go:171: Failed to list *v1beta1.Ingress: Unauthorized
    E0904 12:44:39.306034       1 reflector.go:123] amko/gslb/ingestion/member_controllers.go:171: Failed to list *v1beta1.Ingress: Unauthorized
    E0904 12:44:39.306144       1 reflector.go:123] amko/gslb/ingestion/member_controllers.go:183: Failed to list *v1.Service: Unauthorized
    E0904 12:44:39.306291       1 reflector.go:123] amko/gslb/ingestion/member_controllers.go:189: Failed to list *v1.Namespace: Unauthorized
    E0904 12:44:40.314388       1 reflector.go:123] amko/gslb/ingestion/member_controllers.go:183: Failed to list *v1.Service: Unauthorized
    E0904 12:44:40.314526       1 reflector.go:123] amko/gslb/ingestion/member_controllers.go:189: Failed to list *v1.Namespace: Unauthorized
    

    This issue happens when the credentials in the kubeconfig file do not have the right permissions to access the resources. The credentials provided in the kubeconfig file for all the clusters must have [get, list, watch] permissions on:

    • Kubernetes ingresses and services of type LoadBalancer.
    • Openshift routes and services of type LoadBalancer.

    AMKO also needs [get, list, watch, update] permissions on:

    • GSLBConfig objects
    • GlobalDeploymentPolicy objects

    The extra update permission is used to update the status fields of the GSLBConfig and GlobalDeploymentPolicy objects to reflect the current state of the object, whether it is accepted or rejected. See the ClusterRole sketch below.
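
    A ClusterRole along these lines should cover the required permissions on a member cluster (a sketch only; the API group of the AMKO CRDs, assumed here to be amko.vmware.com, should be verified with oc api-resources | grep -i gslb):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: amko-member-access            # hypothetical name
    rules:
      # Read access to the objects AMKO watches on every member cluster
      - apiGroups: ["", "networking.k8s.io", "extensions"]
        resources: ["services", "namespaces", "ingresses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: ["route.openshift.io"]
        resources: ["routes"]
        verbs: ["get", "list", "watch"]
      # Read/update access to the AMKO CRDs (update is needed for their status fields)
      - apiGroups: ["amko.vmware.com"]     # assumption -- verify the CRD group name
        resources: ["gslbconfigs", "globaldeploymentpolicies"]
        verbs: ["get", "list", "watch", "update"]

    Bind this role to the user or service account whose credentials are embedded in the gslb-members kubeconfig, for example with a ClusterRoleBinding.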

Source

https://github.com/avinetworks/avi-helm-charts/blob/master/docs/AMKO/README.md