1. Topology

In this lab environment I have two sites, Site-01a and Site-01b. Each site contains:

  • 1x AVI Controller
  • 1x vCenter Server
  • 1x OpenShift cluster with 3x master nodes and 3x worker nodes
  • 1x Linux host used to bootstrap the environment and run most of the tooling

Several Service Engines are created to handle the load balancing. There are two Service Engine Groups: one for DNS and the other for the ingress controller.

Here is the topology I used in this lab environment.

2. Requirements

Software Requirements

The setup was created with the software versions below:


  • The AVI Controller has to be set up with a vCenter cloud

  • Make sure the Port Group of the OpenShift nodes is configured in the IPAM profile and the network has an IP pool

  • If the Pod CIDRs are not routable, we need to create a VRF context object in AVI for the Kubernetes controller and configure the Port Group network with that VRF context.
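To decide whether a VRF context is needed, it helps to first see which Pod CIDRs the cluster actually uses. A minimal sketch (the `network.config` object name is standard on OpenShift 4.x; verify on your cluster):

```shell
# List the Pod CIDRs configured on an OpenShift 4.x cluster
oc get network.config cluster -o jsonpath='{.spec.clusterNetwork[*].cidr}{"\n"}'

# List the Service CIDR as well
oc get network.config cluster -o jsonpath='{.spec.serviceNetwork[*]}{"\n"}'
```

If a host outside the cluster cannot reach addresses in these ranges, the Pod CIDRs are not routable and the VRF context step above applies.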

3. Install Helm

The guide on installing Helm can be found here. On Debian/Ubuntu, Helm can be installed from the Helm apt repository:

curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
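Once installed, a quick sanity check confirms the Helm client is on the PATH:

```shell
# Print the installed Helm client version
helm version --short
```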

4. Installing AKO

Step 1: Get the values.yaml file from the avinetworks GitHub repository

mkdir ako
cd ako
wget https://raw.githubusercontent.com/avinetworks/avi-helm-charts/master/charts/stable/ako/values.yaml

Step 2: Create the avi-system namespace

kubectl create ns avi-system

Step 3: Add AKO repository

helm repo add ako https://avinetworks.github.io/avi-helm-charts/charts/stable/ako

Step 4: Search for available charts

helm search repo

NAME      CHART VERSION   APP VERSION   DESCRIPTION
ako/ako   1.2.1           1.2.1         A helm chart for Avi Kubernetes Operator

Step 5: Install AKO

helm install ako/ako --generate-name --version 1.2.1 -f values.yaml --set configs.controllerIP=<avi-controller-ip> --set avicredentials.username=<avi-ctrl-username> --set avicredentials.password=<avi-ctrl-password> --namespace=avi-system

Step 6: Verify the installation

$ oc get all -n avi-system
NAME                     READY   STATUS    RESTARTS   AGE
pod/ako-8ff7fbdc-sb55x   1/1     Running   0          21d

NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ako   1/1     1            1           21d

NAME                           DESIRED   CURRENT   READY   AGE
replicaset.apps/ako-8ff7fbdc   1         1         1       21d
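The Helm release itself can also be listed to confirm it deployed cleanly:

```shell
# Show Helm releases in the avi-system namespace
helm list --namespace avi-system
```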

5. Verification

After the AKO installation, a new Virtual Service is created on the AVI Controller. This is the Layer-7 ingress service.
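If the Virtual Service does not appear on the controller, tailing the AKO pod logs is a good first debugging step (the deployment name comes from the verification output in Step 6):

```shell
# Follow the AKO logs to watch objects being synced to the AVI Controller
oc logs deployment/ako -n avi-system -f
```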

6. Deploy Demo App

Online Boutique is a cloud-native microservices demo application consisting of 10 microservices. It is a web-based e-commerce app where users can browse items, add them to the cart, and purchase them.

This application can be found on its GitHub page.

The example app above does not include an ingress resource, so I add one as per the YAML below:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontend-ingress
  labels:
    app: avi-gslb
spec:
  rules:
    - host: shop.apps.corp.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              serviceName: frontend
              servicePort: 80
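The Ingress can be applied and checked with the commands below (the manifest filename is an assumption):

```shell
# Apply the Ingress manifest
oc apply -f frontend-ingress.yaml

# Confirm the Ingress exists and picked up the host
oc get ingress frontend-ingress
```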

The ingress is on the subdomain shop.apps.corp.local and routes traffic to the frontend service.

Below are the results:

$ oc get pods
NAME                                    READY   STATUS        RESTARTS   AGE
adservice-86bc987ccd-x2n2q              1/1     Running       1          11d
cartservice-c877477df-qz2rp             1/1     Running       7          5d13h
cartservice-c877477df-vjzzj             1/1     Terminating   2          11d
checkoutservice-cf8cf75db-45qzc         1/1     Running       0          11d
currencyservice-6c596c8df8-bw22b        1/1     Running       1          11d
emailservice-d86586496-8xgjh            1/1     Running       1          11d
frontend-d6f4f8984-fft7g                1/1     Running       0          15h
frontend-d6f4f8984-htcx2                1/1     Running       0          15h
frontend-d6f4f8984-n85jw                1/1     Running       0          15h
frontend-d6f4f8984-rpggn                1/1     Terminating   0          11d
frontend-d6f4f8984-v5z9z                1/1     Running       0          15h
frontend-d6f4f8984-zc8jv                1/1     Running       0          15h
loadgenerator-558d6c8d85-4cwt8          1/1     Terminating   6          11d
loadgenerator-558d6c8d85-bvww2          1/1     Running       2          5d13h
paymentservice-7cb9cfd8b8-5jqrq         1/1     Running       0          5d13h
paymentservice-7cb9cfd8b8-n7zlv         1/1     Terminating   1          11d
productcatalogservice-7bb4c9868-96khz   1/1     Running       0          5d13h
productcatalogservice-7bb4c9868-lcmj2   1/1     Terminating   0          11d
recommendationservice-df4dc9bfb-759qh   1/1     Running       5          11d
redis-cart-659df7674c-8djjz             1/1     Running       0          5d13h
redis-cart-659df7674c-9khtb             1/1     Terminating   0          11d
shippingservice-df95d5484-86gqs         1/1     Terminating   0          11d
shippingservice-df95d5484-vh5pr         1/1     Running       0          5d13h
$ oc get svc
NAME                    TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
adservice               ClusterIP                   <none>        9555/TCP       11d
cartservice             ClusterIP                   <none>        7070/TCP       11d
checkoutservice         ClusterIP                   <none>        5050/TCP       11d
currencyservice         ClusterIP                   <none>        7000/TCP       11d
emailservice            ClusterIP                   <none>        5000/TCP       11d
frontend                ClusterIP                   <none>        80/TCP         11d
frontend-external       LoadBalancer                              80:31315/TCP   11d
paymentservice          ClusterIP                   <none>        50051/TCP      11d
productcatalogservice   ClusterIP                   <none>        3550/TCP       11d
recommendationservice   ClusterIP                   <none>        8080/TCP       11d
redis-cart              ClusterIP                   <none>        6379/TCP       11d
shippingservice         ClusterIP                   <none>        50051/TCP      11d
$ oc get routes
NAME                     HOST/PORT                         PATH   SERVICES   PORT   TERMINATION   WILDCARD
frontend-ingress-k4x5c   shop.apps.corp.local ... 1 more   /      frontend   http                 None
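Since one Service Engine Group in this topology serves DNS, the ingress hostname should resolve to the Virtual Service VIP, assuming the subdomain is delegated to the AVI DNS service:

```shell
# Check that the ingress hostname resolves via the AVI DNS service
nslookup shop.apps.corp.local
```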

I can look at a specific pod in the frontend deployment; these pods should be my pool members:

$ oc get pods -o wide | grep frontend
frontend-d6f4f8984-6vnzj                1/1     Running       0          15m   ocp01-grq8j-worker-w26bz   <none>           <none>

Below is the Virtual Service construct on my controller. The ingress host configured in the application is registered as a host URL in the Virtual Service, and there is an HTTP rule that matches this host in the HTTP header. All of this is created automatically by AKO.
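A quick end-to-end check is to send a request for the ingress host directly to the Virtual Service VIP; an HTTP 200 confirms the host rule and the frontend pool are working (the VIP address below is a placeholder):

```shell
# Request the shop front page via the AVI VIP, setting the Host header
# <vip-address> is a placeholder for the Virtual Service IP
curl -s -o /dev/null -w "%{http_code}\n" -H "Host: shop.apps.corp.local" http://<vip-address>/
```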