This section describes how to connect Kubernetes clusters to Lens AppIQ using a Magic Link generated through our UI. For greater control and automation, visit CLI Cluster Management.
There are two steps to adding clusters using our UI:
- Generate a `kubectl` command with a Magic Link using our UI.
- Run that command against your cluster to install our control plane.
The sections below explain each part in detail.
Visit apps.lenscloud.io/clusters after signing in to your account and click the button Connect Cluster.
A modal will open in your window. Enter a name to identify your cluster (optional) and click the button Generate Command. Lens AppIQ will generate a Magic Link and display it as part of a `kubectl` command that you can easily copy and run in your cluster.
Copy and paste the command into your terminal (making sure you are in the right cluster context) and press ENTER. Several Kubernetes resources will be created in your cluster to initiate the deployment of our control plane:
```
$ kubectl apply -f "https://api.lenscloud.io/cluster-connect?authToken=..."
namespace/shipa created
secret/sh.helm.release.v1.shipa-agent.v1 created
serviceaccount/shipa-agent created
secret/shipa-agent created
configmap/shipa-agent-config created
clusterrole.rbac.authorization.k8s.io/shipa-agent-general created
clusterrole.rbac.authorization.k8s.io/shipa-agent-autodiscovery-shipa-controller created
clusterrole.rbac.authorization.k8s.io/shipa-agent-autodiscovery-helm created
clusterrole.rbac.authorization.k8s.io/shipa-agent-containermetrics-busybody created
clusterrole.rbac.authorization.k8s.io/shipa-agent-containermetrics-helm created
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-general created
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-autodiscovery-shipa-controller created
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-autodiscovery-helm created
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-containermetrics-busybody created
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-containermetrics-helm created
role.rbac.authorization.k8s.io/shipa-agent-general created
role.rbac.authorization.k8s.io/shipa-agent-autodiscovery-shipa-controller created
rolebinding.rbac.authorization.k8s.io/shipa-agent-general created
rolebinding.rbac.authorization.k8s.io/shipa-agent-autodiscovery-shipa-controller created
service/shipa-agent-api created
deployment.apps/shipa-agent created
```
Chart details and permissions
If you would like to see all the objects that will be created in your cluster before running the command, simply execute a curl request against the Magic Link:

```
curl "https://api.lenscloud.io/cluster-connect?authToken=..."
```

This lets you inspect the chart in detail.
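Beyond reading the manifest by eye, you can sanity-check what a multi-document manifest contains before applying it. A minimal sketch, using an inline two-document sample; against the real Magic Link you would substitute the output of the `curl` command above:

```shell
# Hedged sketch: count the objects in a multi-document Kubernetes manifest
# before applying it. The inline sample stands in for the real output of
# curl -s "https://api.lenscloud.io/cluster-connect?authToken=...".
manifest='apiVersion: v1
kind: Namespace
metadata:
  name: shipa
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: shipa-agent'

# Each YAML document declares exactly one top-level kind.
printf '%s\n' "$manifest" | grep -c '^kind:'   # → 2
```

Piping the real manifest through a pager or a YAML linter works just as well; the point is to review what will be created before you apply it.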
While our agents are deployed and the connection is established, you can monitor the process locally by running the command `kubectl get pods -n shipa -w` in your terminal. Wait until all pods reach a "Running" status.
```
$ kubectl get pods -n shipa
NAME                                READY   STATUS    RESTARTS   AGE
shipa-agent-5879fb7954-wtp44        1/1     Running   0          3m9s
shipa-busybody-rgdgk                1/1     Running   0          2m53s
shipa-controller-6ccf95854d-cfdtp   1/1     Running   0          2m56s
```
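If you prefer a scriptable check over watching the list by hand, the "all pods Running" condition can be sketched as below, fed here with the sample output above; against a live cluster you would pipe in `kubectl get pods -n shipa --no-headers` instead:

```shell
# Sample pod lines (NAME READY STATUS RESTARTS AGE); column 3 is STATUS.
pods='shipa-agent-5879fb7954-wtp44 1/1 Running 0 3m9s
shipa-busybody-rgdgk 1/1 Running 0 2m53s
shipa-controller-6ccf95854d-cfdtp 1/1 Running 0 2m56s'

# Count pods whose status is anything other than Running.
not_running=$(printf '%s\n' "$pods" | awk '$3 != "Running"' | wc -l | tr -d ' ')
if [ "$not_running" -eq 0 ]; then
  echo "all pods Running"
else
  echo "$not_running pod(s) not Running yet"
fi
```

All three sample pods report Running, so the check prints "all pods Running".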
At this point, you should see your cluster in the dashboard in a Connecting state. Hover over the status dropdown to see which pieces of the control plane are already Running and which ones are still Pending installation.
You can also track the status of the process on the Events page by looking for a record with the Kind `cluster.create`. If an error occurs during the connection, you can see its details by inspecting the particular event associated with your cluster.
Once the installation of the control plane is finished, your cluster should appear on the Cluster list page with a Running status.
Click on the cluster to see further details about it (namespaces, metadata, control plane status, and more):
As soon as your cluster is connected to Lens AppIQ, our agents will start discovering the namespaces and applications running on it. You can verify this activity by visiting the Events page and looking at the `app.create` events generated. After a few minutes, all applications running on your cluster should be visible from our dashboard and ready to be inspected.
That's it! Your cluster is fully connected to Lens AppIQ.
Connect your cluster using Kubectl and Lens AppIQ CLI
By running the Lens AppIQ CLI, you can access more connectivity options, e.g., naming the cluster using the current kubectl context:

```
kubectl apply -f $(lapps cluster connect -n cluster-name -c)
```
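The `$(...)` above is ordinary shell command substitution: the inner CLI call prints a Magic Link URL, which the outer `kubectl apply -f` then consumes. A minimal sketch of the pattern, with a hypothetical `fake_connect` function standing in for `lapps cluster connect`:

```shell
# fake_connect is a stand-in for `lapps cluster connect -n cluster-name -c`,
# which prints the Magic Link URL for the cluster being connected.
fake_connect() {
  echo "https://api.lenscloud.io/cluster-connect?authToken=TOKEN"
}

# The substitution expands to the URL before the outer command runs.
url=$(fake_connect)
echo "kubectl apply -f $url"
```

This is why the composed command works in any POSIX shell: `kubectl` only ever sees the final URL.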
When connecting clusters, you can optionally choose to install our deployment engine if your team plans to use Lens AppIQ to ease their deployment strategy.
To enable the deployment engine, simply open the cluster connection modal and check the box "Enable application deployments" before generating a new command. This instructs our API, at connection time, to install additional pieces as part of our control plane.
If your cluster is already connected to Lens AppIQ, and you want to enable the deployment engine there, simply go to the Cluster Details page, and click the button "Enable deployments".
When the deployment checkbox is selected, a new section is displayed asking you to configure a mandatory ingress controller (as stated in the modal, this configuration is required so that our control plane can set up routing for your applications).
For now, we will leave the selection as is and use the default ingress controller. This instructs our agent to deploy `nginx` in your cluster and configure it accordingly. See Use your own ingress controller for further information.
When using our default ingress controller, the form allows you to specify whether your cluster provider can provision Load Balancers with external IP addresses. If the option is selected, `nginx` (our default ingress controller) will be deployed as a `LoadBalancer` service in your cluster, and our control plane will leverage it to set up public domains for your applications, making them accessible outside of your cluster. If the option is not selected, `nginx` gets exposed as a `ClusterIP` service, and access to your apps from outside the cluster will not be configured.
Make sure the option is checked only if your provider truly supports provisioning external IPs for Load Balancers. If it doesn't and you still check the option, the connection process to Lens AppIQ will fail.
Click the button "Generate Command" once your selection is ready. A `kubectl` command is displayed on the screen. Copy and paste it into your terminal (against your intended cluster) and press ENTER to initiate the installation. A few more Kubernetes resources will be created, and the deployment of additional pieces of our control plane will start:
```
$ kubectl apply -f "https://api.lenscloud.io/cluster-connect?authToken=..."
namespace/shipa configured
secret/sh.helm.release.v1.shipa-agent.v1 configured
serviceaccount/shipa-agent unchanged
secret/shipa-agent unchanged
configmap/shipa-agent-config configured
clusterrole.rbac.authorization.k8s.io/shipa-agent-general unchanged
clusterrole.rbac.authorization.k8s.io/shipa-agent-autodiscovery-shipa-controller unchanged
clusterrole.rbac.authorization.k8s.io/shipa-agent-autodiscovery-helm unchanged
clusterrole.rbac.authorization.k8s.io/shipa-agent-appdeploy-cert-manager created
clusterrole.rbac.authorization.k8s.io/shipa-agent-appdeploy-cert-manager-role created
clusterrole.rbac.authorization.k8s.io/shipa-agent-appdeploy-ketch-controller created
clusterrole.rbac.authorization.k8s.io/shipa-agent-appdeploy-nginx created
clusterrole.rbac.authorization.k8s.io/shipa-agent-appdeploy-helm created
clusterrole.rbac.authorization.k8s.io/shipa-agent-containermetrics-busybody unchanged
clusterrole.rbac.authorization.k8s.io/shipa-agent-containermetrics-helm unchanged
clusterrole.rbac.authorization.k8s.io/shipa-agent-ingressmetrics-metrics-exporter created
clusterrole.rbac.authorization.k8s.io/shipa-agent-ingressmetrics-helm created
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-general unchanged
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-autodiscovery-shipa-controller unchanged
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-autodiscovery-helm unchanged
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-appdeploy-cert-manager created
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-appdeploy-cert-manager-role created
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-appdeploy-ketch-controller created
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-appdeploy-nginx created
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-appdeploy-helm created
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-containermetrics-busybody unchanged
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-containermetrics-helm unchanged
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-ingressmetrics-metrics-exporter created
clusterrolebinding.rbac.authorization.k8s.io/shipa-agent-ingressmetrics-helm created
role.rbac.authorization.k8s.io/shipa-agent-general unchanged
role.rbac.authorization.k8s.io/shipa-agent-autodiscovery-shipa-controller unchanged
role.rbac.authorization.k8s.io/shipa-agent-appdeploy-ketch-controller created
role.rbac.authorization.k8s.io/shipa-agent-appdeploy-nginx created
rolebinding.rbac.authorization.k8s.io/shipa-agent-general unchanged
rolebinding.rbac.authorization.k8s.io/shipa-agent-autodiscovery-shipa-controller unchanged
rolebinding.rbac.authorization.k8s.io/shipa-agent-appdeploy-ketch-controller created
rolebinding.rbac.authorization.k8s.io/shipa-agent-appdeploy-nginx created
service/shipa-agent-api unchanged
deployment.apps/shipa-agent configured
```
Oversee the installation process by running the command `kubectl get pods -A` and monitoring the deployment of the remaining pieces of our control plane (nginx ingress, metrics exporter, and ketch controller):
```
$ kubectl get pods -A
shipa-agent-69797f9757-mt94c           0/1   Running             0   23s
shipa-agent-6d4ddd4fbf-hflf7           1/1   Running             0   28m
shipa-busybody-w2pl7                   1/1   Running             0   27m
shipa-controller-74784c88d7-546wb      1/1   Running             0   27m
shipa-controller-fb986d577-k98ct       0/1   ContainerCreating   0   2s
shipa-controller-fb986d577-k98ct       0/1   Running             0   2s
shipa-controller-fb986d577-k98ct       1/1   Running             0   20s
shipa-controller-74784c88d7-546wb      1/1   Terminating         0   28m
shipa-controller-74784c88d7-546wb      0/1   Terminating         0   28m
shipa-controller-74784c88d7-546wb      0/1   Terminating         0   28m
shipa-controller-74784c88d7-546wb      0/1   Terminating         0   28m
shipa-controller-74784c88d7-546wb      0/1   Terminating         0   28m
shipa-nginx-ingress-86f7458bbd-4hzrd   0/1   Pending             0   0s
shipa-nginx-ingress-86f7458bbd-4hzrd   0/1   Pending             0   0s
shipa-nginx-ingress-86f7458bbd-4hzrd   0/1   ContainerCreating   0   0s
metrics-exporter-5cbfbc6bc5-6wxs9      0/1   Pending             0   0s
metrics-exporter-5cbfbc6bc5-6wxs9      0/1   Pending             0   0s
metrics-exporter-5cbfbc6bc5-6wxs9      0/1   ContainerCreating   0   0s
shipa-nginx-ingress-86f7458bbd-4hzrd   0/1   Running             0   11s
metrics-exporter-5cbfbc6bc5-6wxs9      1/1   Running             0   5s
ketch-controller-5d59bc67b-lxw49       0/1   Pending             0   0s
ketch-controller-5d59bc67b-lxw49       0/1   Pending             0   0s
ketch-controller-5d59bc67b-lxw49       0/1   ContainerCreating   0   0s
ketch-controller-5d59bc67b-lxw49       1/1   Running             0   4s
shipa-agent-69797f9757-mt94c           1/1   Running             0   111s
shipa-agent-6d4ddd4fbf-hflf7           1/1   Terminating         0   29m
shipa-agent-6d4ddd4fbf-hflf7           0/1   Terminating         0   29m
shipa-agent-6d4ddd4fbf-hflf7           0/1   Terminating         0   29m
shipa-agent-6d4ddd4fbf-hflf7           0/1   Terminating         0   29m
shipa-agent-6d4ddd4fbf-hflf7           0/1   Terminating         0   29m
shipa-nginx-ingress-86f7458bbd-4hzrd   1/1   Running             0   20s
shipa-controller-fb986d577-k98ct       0/1   Running             1 (1s ago)    2m4s
shipa-controller-fb986d577-k98ct       1/1   Running             1 (22s ago)   2m25s
```
Note: Installing the deployment engine might take a few minutes, as we deploy all the necessary pieces in your cluster.
On the dashboard, your cluster should now appear in an "Updating" state.
Once all the new pieces are installed, your cluster should return to a Running state, and the Control Plane tab should reflect the newly deployed pieces.
That's it! The deployment engine is now running and ready to be used. Start deploying applications as needed.
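To confirm how your apps will be exposed, you can check whether the ingress service actually received an external IP. A hedged sketch using a sample service line (the service name and IP here are illustrative; against a live cluster you would feed in `kubectl get svc -n shipa --no-headers` instead):

```shell
# Sample line in `kubectl get svc` column order:
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE — column 4 is EXTERNAL-IP.
svc='shipa-nginx-ingress LoadBalancer 10.80.3.10 184.108.40.206 80:31184/TCP 5m'

external_ip=$(printf '%s\n' "$svc" | awk '{print $4}')
if [ "$external_ip" != "<none>" ] && [ "$external_ip" != "<pending>" ]; then
  echo "apps reachable externally at $external_ip"
else
  echo "ClusterIP only: apps are not exposed outside the cluster"
fi
```

`<none>` (ClusterIP service) or a lingering `<pending>` (provider cannot provision the Load Balancer) both mean your applications will not be reachable from outside the cluster.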
When enabling application deployments, the UI states that you can leverage your own ingress controller to set up routing for your applications instead of installing a new one.
To do so, follow the steps below:
Collect the IP address of the service where your ingress controller is exposed. For example, if using `istio`, run the command `kubectl get services -n istio-system` and grab the external IP of your service, in this case `184.108.40.206`. If you'd like to make apps accessible outside the cluster via public domains, use the external IP of your service; otherwise, grab the ClusterIP assigned to it.
```
$ kubectl get services -n istio-system
NAME                   TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                                                                      AGE
istio-egressgateway    ClusterIP      10.80.6.205   <none>           80/TCP,443/TCP                                                               24h
istio-ingressgateway   LoadBalancer   10.80.12.62   184.108.40.206   15021:31703/TCP,80:31184/TCP,443:30904/TCP,31400:31648/TCP,15443:31256/TCP   24h
istiod                 ClusterIP      10.80.5.55    <none>           15010/TCP,15012/TCP,443/TCP,15014/TCP                                        24h
```
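Extracting that external IP can be scripted instead of read by eye. A sketch against the `istio-ingressgateway` row from the output above (column 4 holds EXTERNAL-IP; with a live cluster, `kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'` is the cleaner route):

```shell
# The istio-ingressgateway row from the sample output above.
line='istio-ingressgateway LoadBalancer 10.80.12.62 184.108.40.206 15021:31703/TCP,80:31184/TCP 24h'

# Column 4 of `kubectl get services` output is EXTERNAL-IP.
external_ip=$(printf '%s\n' "$line" | awk '{print $4}')
echo "$external_ip"   # → 184.108.40.206
```

Keep this value at hand; it is the IP Address the connection form asks for in the next steps.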
On the UI, within the cluster connect modal, select the option "Enable application deployments" and then "Use your own ingress controller".
Compatible ingress controller
At the moment, Lens AppIQ is only compatible with `traefik`. If compatibility with other ingress controllers is required, please reach out to our support team.
Fill out the required information based on your ingress controller. In this scenario, we will select `istio` as the Provider, `Load Balancer` as the Service Type, and `184.108.40.206` as the IP Address (the address retrieved in the first step).
With all the selections ready, click the button "Generate Command" and, as before, copy and paste the resulting command into your terminal to install the control plane.
That's all. Your cluster connection will start, and our control plane will be installed without deploying `nginx` as part of it. When deploying apps to this cluster, Lens AppIQ will use your available `istio` ingress to route applications through either its Load Balancer or its ClusterIP.
Please note that any failure in the cluster connection process can be seen in the Events section of the dashboard by filtering the list by the Kind `cluster.create` and clicking on the specific event you want to inspect. Error descriptions are listed there.
If the process of connecting your cluster to Lens AppIQ fails, a few pieces of our control plane may be left behind (agents, roles, cluster roles, etc.). To clean up all leftovers, run the following commands in your terminal:

```
kubectl get clusterrolebindings.rbac.authorization.k8s.io -o name | grep shipa | xargs kubectl delete
kubectl get clusterrole -o name | grep shipa | xargs kubectl delete
kubectl delete ns shipa
```
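The first two cleanup commands rely on a `get | grep | xargs delete` pipeline: list resource names, keep only the Shipa-owned ones, and feed them to `kubectl delete`. The filtering step can be sketched offline with sample resource names (the built-in `system:node` entry shows what the filter protects):

```shell
# Sample `kubectl get clusterrole -o name` output: two Shipa-owned roles plus
# one built-in role that must survive the cleanup.
matches=$(printf '%s\n' \
  'clusterrole.rbac.authorization.k8s.io/shipa-agent-general' \
  'clusterrole.rbac.authorization.k8s.io/system:node' \
  'clusterrole.rbac.authorization.k8s.io/shipa-agent-autodiscovery-helm' |
  grep shipa)

# Only the two shipa-* roles remain; these are what xargs would hand to
# `kubectl delete` in the real pipeline.
echo "$matches"
```

Because the filter is a plain substring match on `shipa`, double-check that nothing else in your cluster carries that string in its name before running the real delete commands.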
Reusing a previously connected cluster
The cleanup process mentioned above is also recommended if our control plane was previously installed in your cluster. Please run the commands above before trying to reconnect the same cluster to Lens AppIQ.