To connect a Kubernetes cluster to Lens AppIQ, the first step is to install the Lens AppIQ control plane. You can do this with the `cluster connect` command from the CLI.
Make sure the context of the correct Kubernetes cluster is loaded in your terminal and that the Lens AppIQ CLI is properly configured.
Once you have verified both, run the following command:
```shell
kubectl apply -f $(lapps cluster connect -n cluster-name demo-cluster)
```
The installation of the control plane starts immediately, and the required Kubernetes resources are created:
```text
namespace/shipa created
secret/sh.helm.release.v1.shipa-agent.v1 created
serviceaccount/shipa-agent created
secret/shipa-agent created
configmap/shipa-agent-config created
clusterrole.rbac.authorization.k8s.io/shipa-agent-general created
clusterrole.rbac.authorization.k8s.io/shipa-agent-autodiscovery-shipa-controller created
clusterrole.rbac.authorization.k8s.io/shipa-agent-autodiscovery-helm created
clusterrole.rbac.authorization.k8s.io/shipa-agent-appdeploy-cert-manager created
clusterrole.rbac.authorization.k8s.io/shipa-agent-appdeploy-cert-manager-role created
clusterrole.rbac.authorization.k8s.io/shipa-agent-appdeploy-ketch-controller created
clusterrole.rbac.authorization.k8s.io/shipa-agent-appdeploy-nginx created
clusterrole.rbac.authorization.k8s.io/shipa-agent-appdeploy-helm created
clusterrole.rbac.authorization.k8s.io/shipa-agent-containermetrics-busybody created
clusterrole.rbac.authorization.k8s.io/shipa-agent-containermetrics-helm created
clusterrole.rbac.authorization.k8s.io/shipa-agent-ingressmetrics-metrics-exporter created
clusterrole.rbac.authorization.k8s.io/shipa-agent-ingressmetrics-helm created
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-general created
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-autodiscovery-shipa-controller created
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-autodiscovery-helm created
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-appdeploy-cert-manager created
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-appdeploy-cert-manager-role created
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-appdeploy-ketch-controller created
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-appdeploy-nginx created
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-appdeploy-helm created
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-containermetrics-busybody created
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-containermetrics-helm created
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-ingressmetrics-metrics-exporter created
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-ingressmetrics-helm created
role.rbac.authorization.k8s.io/shipa-agent-general created
role.rbac.authorization.k8s.io/shipa-agent-autodiscovery-shipa-controller created
role.rbac.authorization.k8s.io/shipa-agent-appdeploy-ketch-controller created
role.rbac.authorization.k8s.io/shipa-agent-appdeploy-nginx created
rolebinding.rbac.authorization.k8s.io/shipa-agent-general created
rolebinding.rbac.authorization.k8s.io/shipa-agent-autodiscovery-shipa-controller created
rolebinding.rbac.authorization.k8s.io/shipa-agent-appdeploy-ketch-controller created
rolebinding.rbac.authorization.k8s.io/shipa-agent-appdeploy-nginx created
service/shipa-agent-api created
deployment.apps/shipa-agent created
```
While the agents are deployed and the connection is established, you can monitor the process locally by running `kubectl get pods -n shipa -w` in your terminal. Wait until all pods reach the `Running` status:
```text
$ kubectl get pods -n shipa
NAME                                READY   STATUS    RESTARTS   AGE
shipa-agent-5879fb7954-wtp44        1/1     Running   0          3m9s
shipa-busybody-rgdgk                1/1     Running   0          2m53s
shipa-controller-6ccf95854d-cfdtp   1/1     Running   0          2m56s
```
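If you want to script this readiness check instead of watching the output by hand, the idea can be sketched in plain shell. This is a minimal sketch: it filters the `STATUS` column of `kubectl get pods`-style output with `awk`. The sample text below stands in for the live `kubectl` call so the snippet is self-contained; in a real cluster you would capture the output as noted in the comment.

```shell
# Sample output in the format printed by `kubectl get pods -n shipa --no-headers`.
# In a live cluster you would instead use:
#   pods=$(kubectl get pods -n shipa --no-headers)
pods='shipa-agent-5879fb7954-wtp44        1/1   Running   0   3m9s
shipa-busybody-rgdgk                1/1   Running   0   2m53s
shipa-controller-6ccf95854d-cfdtp   1/1   Running   0   2m56s'

# Count pods whose STATUS column (field 3) is not "Running".
not_running=$(printf '%s\n' "$pods" | awk '$3 != "Running"' | wc -l)

if [ "$not_running" -eq 0 ]; then
  echo "all shipa pods are Running"
else
  echo "$not_running pod(s) still starting"
fi
```

A loop around this check (with a `sleep` between iterations) gives you a simple blocking wait for CI pipelines.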
Once all components are in the `Running` state, verify that your cluster is ready by listing your clusters with `lapps cluster list`. Your cluster should be displayed with the name you provided:
```text
$ lapps cluster list
+--------------+-------------+-----------+---------------------+-------+-------+
| Name         | Provisioner | Addresses | Ingress controllers | Teams | Error |
+--------------+-------------+-----------+---------------------+-------+-------+
| demo-cluster | kubernetes  |           | Type: nginx         |       |       |
|              |             |           | Address:            |       |       |
+--------------+-------------+-----------+---------------------+-------+-------+
```
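In automation, you may want to fail fast if the cluster did not register. A minimal sketch, assuming the table format shown above; the sample variable stands in for the live command so the snippet is self-contained, and in practice you would pipe `lapps cluster list` into the `grep` instead.

```shell
# One row of the table printed by `lapps cluster list` (see above).
# Live version:  list=$(lapps cluster list)
list='| demo-cluster | kubernetes  |           | Type: nginx         |       |       |'

cluster=demo-cluster
# grep -w matches the cluster name as a whole word inside the table row.
if printf '%s\n' "$list" | grep -qw "$cluster"; then
  echo "cluster $cluster is registered"
else
  echo "cluster $cluster not found" >&2
fi
```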
The `cluster connect` command described above generates a configuration using default options. However, if you want further control, such as customizing the ingress controller the agents will use, installing the deployment engine, or tuning the discovery options, you can provide a `.yaml` configuration file:
```yaml
ingress:
  ip: 22.214.171.124
  type: nginx
  serviceType: LoadBalancer
  className: shipa-nginx-ingress
  clusterIssuer: shipa-nginx-letsencrypt-issuer
cluster:
  name: demo-cluster
  autoDiscovery:
    enabled: true
    namespaces:
  features:
    - autodiscovery
    - containerMetrics
    - ingressMetrics
    - appDeploy
```
Currently, the CLI supports customizing the following properties:

- `name`: The name your cluster will have on Lens AppIQ
- `autoDiscovery`: Options to control application discovery
  - `enabled`: Boolean value to activate or deactivate application discovery. If disabled, the control plane won't recognize any applications in your cluster unless they were deployed with the Lens AppIQ deployment engine
  - `namespaces`: A list of namespaces from which the control plane should recognize applications. If a namespace is excluded, its applications will not appear on Lens AppIQ
- `features`: List of the control plane components to install in your cluster. By default, only `autodiscovery` is installed; if you want to enable application deployments, add `appDeploy` to the list
- `ingress`: When the `appDeploy` feature is defined in the configuration file, this section provides the settings for the ingress controller in your cluster, so the control plane can use it to make apps accessible outside the cluster (through public domains)
  - `type`: Ingress controller provider. Currently, only compatible with `nginx`
  - `ip`: IP address where your ingress controller exposes its main service. Provide the external IP address if the service is exposed as a LoadBalancer; otherwise, provide the ClusterIP associated with that service
  - `serviceType`: One of `LoadBalancer` or `ClusterIP`; the type associated with the main service of your ingress controller
  - `className`: The ingress controller class name
  - `clusterIssuer`: The name of the ClusterIssuer installed in your cluster to generate certificates (if any)
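Putting these properties together, a fully annotated configuration might look like the sketch below. The values are illustrative: the IP uses a documentation placeholder address, and the namespace, class, and issuer names are examples you would replace with your own.

```yaml
cluster:
  name: demo-cluster                  # name the cluster will have on Lens AppIQ
  autoDiscovery:
    enabled: true                     # discover applications already running in the cluster
    namespaces: ['staging', 'production']   # example: limit discovery to these namespaces
  features:                           # control plane components to install
    - autodiscovery
    - containerMetrics
    - ingressMetrics
    - appDeploy                       # required for application deployments
ingress:                              # only used when appDeploy is enabled
  type: nginx                         # currently the only supported provider
  ip: 203.0.113.10                    # placeholder: external IP (LoadBalancer) or ClusterIP of the ingress service
  serviceType: LoadBalancer           # LoadBalancer or ClusterIP
  className: shipa-nginx-ingress
  clusterIssuer: shipa-nginx-letsencrypt-issuer   # omit if no ClusterIssuer is installed
```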
To connect a cluster using such a `.yaml` file, simply run the following command:
```shell
$ lapps cluster connect -f ~/path/demo-cluster.yaml
```
```yaml
cluster:
  name: demo-cluster
  autoDiscovery:
    enabled: true
    namespaces:
  features:
    - autodiscovery
    - containerMetrics
    - ingressMetrics
    - appDeploy
## If the "ingress" property is omitted, Lens AppIQ will install nginx as the
## ingress controller for your cluster and use it to set up routing for your
## applications
```
```yaml
ingress:
  ip: 126.96.36.199   # providing the IP of the user's own "nginx" ingress controller
  type: nginx
  serviceType: LoadBalancer
cluster:
  name: demo-cluster
  autoDiscovery:
    enabled: true
    namespaces:
  features:
    - autodiscovery
    - containerMetrics
    - ingressMetrics
    - appDeploy
```
```yaml
cluster:
  name: demo-cluster
  autoDiscovery:
    enabled: true
    namespaces: ['staging', 'production', 'development']   # specific namespaces for the agents to monitor
  features:
    - autodiscovery
    - containerMetrics
```
To list registered clusters, use the `cluster list` command:

```shell
$ lapps cluster list
```
To view your cluster's configuration, use the `cluster export` command:
```text
$ lapps cluster export demo-cluster
ingress:
  type: nginx
  serviceType: ClusterIP
  className: shipa-nginx-ingress
  clusterIssuer: shipa-nginx-letsencrypt-issuer
cluster:
  name: demo-cluster
  autoDiscovery:
    enabled: true
    namespaces:
  features:
    - autodiscovery
    - containerMetrics
```
Optionally, write this configuration directly to a `.yaml` file so you can reuse it in a future cluster connection:

```shell
$ lapps cluster export demo-cluster > ~/Desktop/demo-cluster.yaml
```
If the UI shows that a control plane upgrade is available for your cluster, you can also update it through the CLI by executing the following commands:
```shell
lapps cluster export CLUSTER_NAME > cluster.yaml
lapps cluster connect -f cluster.yaml -r | kubectl apply -f -
```
These commands first export the current configuration of your cluster to a `.yaml` file, then use it to generate the upgraded chart that must be applied to your cluster to complete the upgrade. Run the commands and follow the same process described above.
To remove a registered cluster, use the `cluster remove` command:

```shell
$ lapps cluster remove <name> [-y]
```
Removes a registered cluster.
| Flag | Description |
| --- | --- |
| `-y`, `--assume-yes` | (= false) Don't ask for confirmation |