vSphere Supervisor Services


Supervisor Services Catalog

Discover the current Supervisor Services offered to support modern applications through vSphere Services. New services will be added over time, with the goal of continuing to empower your DevOps communities.

Prior to vSphere 8 Update 1, Supervisor Services were only available on Supervisor Clusters enabled with VMware NSX-T. With vSphere 8 U1, Supervisor Services are also supported when using the vSphere Distributed Switch networking stack.

| Supervisor Service | vSphere 7 | vSphere 8 |
| --- | --- | --- |
| vSAN Data Persistence Platform Services - MinIO, Cloudian and Dell ObjectScale |  |  |
| Backup & Recovery Service - Velero |  |  |
| Certificate Management Service - cert-manager |  |  |
| Cloud Native Registry Service - Harbor | ❌ * |  |
| Kubernetes Ingress Controller Service - Contour |  |  |
| External DNS Service - ExternalDNS |  |  |

* The embedded Harbor Registry feature is still available and supported from vSphere 7 onwards.

vSAN Data Persistence Platform (vDPP) Services:

vSphere with Tanzu offers the vSAN Data Persistence Platform, a framework that enables third parties to integrate their cloud native service applications with the underlying vSphere infrastructure, so that third-party software can run optimally on vSphere with Tanzu.

Partner Documentation Links:

Backup & Recovery Service

The Velero vSphere Operator helps users install Velero and its vSphere plugin on a vSphere with Kubernetes Supervisor cluster. Velero is an open source tool to safely back up and restore, perform disaster recovery, and migrate Kubernetes cluster resources and persistent volumes.
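Once the operator and Velero are installed, backups are requested through Velero's standard custom resources. A minimal sketch (the namespace name demo1 is an illustrative example):

```yaml
# Illustrative Velero Backup resource; namespace names are example values.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: demo1-backup
  namespace: velero
spec:
  # Back up all resources in the demo1 namespace, including persistent volumes.
  includedNamespaces:
  - demo1
```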

Velero vSphere Operator CLI Versions

The Velero vSphere Operator CLI is a prerequisite for a cluster admin installation of the service.

Velero Versions

Certificate Management Service

ClusterIssuers are Kubernetes resources that represent certificate authorities (CAs) that are able to generate signed certificates by honoring certificate signing requests. All cert-manager certificates require a referenced issuer that is in a ready condition to attempt to honor the request.
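As a sketch, a CA-backed ClusterIssuer referencing a CA keypair stored in a Secret might look like the following (resource and secret names are illustrative):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ca-clusterissuer     # illustrative name
spec:
  ca:
    # Secret holding the CA certificate and private key (tls.crt / tls.key)
    secretName: ca-key-pair
```

Certificate resources can then reference this issuer by name in their issuerRef, and cert-manager will sign their certificate signing requests against the CA.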

CA Cluster Issuer Versions

CA Cluster Issuer Sample values.yaml

Cloud Native Registry Service

Harbor is an open source trusted cloud native registry project that stores, signs, and scans content. Harbor extends the open source Docker Distribution by adding the functionalities usually required by users such as security, identity and management. Having a registry closer to the build and run environment can improve the image transfer efficiency. Harbor supports replication of images between registries, and also offers advanced security features such as user management, access control and activity auditing.
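Workloads consume images from a Harbor instance like any other registry. A hypothetical Pod pulling from a private Harbor project (the registry hostname, project, and secret names are assumptions for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
  - name: app
    # Image path follows Harbor's <registry>/<project>/<repository>:<tag> layout
    image: harbor.example.com/demo-project/app:1.0
  imagePullSecrets:
  # docker-registry type Secret holding Harbor credentials
  - name: harbor-creds
```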

Harbor Versions

Harbor Sample values.yaml

Kubernetes Ingress Controller Service

Contour is an Ingress controller for Kubernetes that works by deploying the Envoy proxy as a reverse proxy and load balancer. Contour supports dynamic configuration updates out of the box while maintaining a lightweight profile.
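Contour is configured through standard Ingress resources or its own HTTPProxy custom resource. A minimal sketch routing a hypothetical hostname to a backing Service (all names are illustrative):

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: demo-proxy
spec:
  virtualhost:
    fqdn: app.example.com   # hostname Envoy should answer for
  routes:
  - services:
    - name: demo-service    # existing ClusterIP Service in the same namespace
      port: 80
```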

Contour Versions

Contour Sample values.yaml

External DNS Service

ExternalDNS publishes DNS records for applications to DNS servers, using a declarative, Kubernetes-native interface. This operator connects to your DNS server (not included here). For a list of supported DNS providers and their corresponding configuration settings, see the upstream external-dns project.

ExternalDNS Versions

ExternalDNS data values.yaml

deployment:
  args:
  - --source=contour-httpproxy
  - --source=service
  - --log-level=debug
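With the service source enabled as above, ExternalDNS publishes records for Services that carry its hostname annotation. A hypothetical example (hostname and names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-service
  annotations:
    # ExternalDNS creates a DNS record for this hostname (illustrative value)
    external-dns.alpha.kubernetes.io/hostname: app.example.com
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
  - port: 80
```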

NSX Management Proxy

The NSX Management Proxy allows the Antrea-NSX adapter in TKG workload clusters to reach the NSX manager. We recommend using the NSX Management Proxy when the management network is isolated from the workload network and workloads running in TKG workload clusters cannot reach the NSX manager.

NSX Management Proxy Versions

NSX Management Proxy Sample values.yaml



Supervisor Services Labs Catalog

Experimental

The following Supervisor Services Labs catalog is only provided for testing and educational purposes. Please do not use these services in a production environment. These services are intended to demonstrate Supervisor Services' capabilities and usability. VMware will strive to provide regular updates to these services. The Labs services have been tested starting from vSphere 8.0. Over time, depending on usage and customer needs, some of these services may be included in the core product.

WARNING - By downloading and using these solutions from the Supervisor Services Labs catalog, you explicitly agree to the conditional use license agreement.

ArgoCD Operator

The Argo CD Operator manages the entire lifecycle of Argo CD and its components. The operator aims to automate the tasks required to operate an Argo CD deployment. Beyond installation, the operator helps automate the process of upgrading, backing up, and restoring as needed and removes the human toil as much as possible. For a detailed description of how to consume the ArgoCD Operator, see the ArgoCD Operator project.

ArgoCD Operator Versions

ArgoCD Operator Sample values.yaml - None

Usage:

  1. Download the example as a reference for a simple deployment.
  2. Log in to the Supervisor - 10.220.3.18 is the Supervisor IP address in this example - with a user that has owner/edit access to the vSphere Namespace - user@vsphere.local in this example.
$ kubectl vsphere login --server 10.220.3.18 -u user@vsphere.local
  3. To deploy ArgoCD to the vSphere Namespace - demo1 in this example - set the context appropriately.
$ kubectl config use-context demo1
  4. Use kubectl to deploy the file - argocd-instance.yaml in this example - that was downloaded in Step 1.
$ kubectl apply -f argocd-instance.yaml
  5. Upon successful deployment, the status should look like the following. Use the EXTERNAL-IP address of the argocd-server service to connect to the UI - 10.220.3.20 in this example.
$ kubectl get pods
NAME                                        READY   STATUS    RESTARTS   AGE
demo1-argocd-application-controller-0       1/1     Running   0          5m9s
demo1-argocd-redis-cd8c958fd-jltgd          1/1     Running   0          5m9s
demo1-argocd-repo-server-6ccccfc999-rm4ng   1/1     Running   0          5m9s
demo1-argocd-server-945597778-2qfjk         1/1     Running   0          5m9s

$ kubectl get svc
NAME                                          TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
...
demo1-argocd-server                           LoadBalancer   10.96.0.88    10.220.3.20   80:30803/TCP,443:30679/TCP   6m41s
...
  6. If you encounter DockerHub rate limiting for the Redis image, use a proxy cache or host the image on another registry. The sample argocd-instance.yaml shows how to reference an alternate image location.
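For reference, an argocd-instance.yaml consistent with the steps above might be as simple as the following sketch (assuming the argocd-operator's ArgoCD custom resource; field values are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: demo1-argocd
spec:
  server:
    service:
      # Expose the Argo CD API/UI via a LoadBalancer EXTERNAL-IP
      type: LoadBalancer
```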

External Secrets Operator

External Secrets Operator is a Kubernetes operator that integrates external secret management systems such as AWS Secrets Manager, HashiCorp Vault, Google Secrets Manager, Azure Key Vault, IBM Cloud Secrets Manager, and CyberArk Conjur. The operator reads information from external APIs and automatically injects the values into a Kubernetes Secret. For a detailed description of how to consume External Secrets Operator, visit the External Secrets Operator project.

External Secrets Operator Versions

External Secrets Operator Sample values.yaml - None

Usage:

  1. Download the example as a reference for simple usage. For this example to work, store an SSH private key as a secret called tkg-ssh-priv-keys in GCP Secret Manager. A service account with the Secret Manager Secret Accessor role should be granted access to the secret. The service account's key has to be downloaded and kept in a secure location. (Note: service account keys can pose a security risk if compromised; this exercise is for demo purposes only.)
  2. Log in to the Supervisor - 10.220.3.18 is the Supervisor IP address in this example - with a user with owner/edit access to the vSphere Namespace - user@vsphere.local in this example.
$ kubectl vsphere login --server 10.220.3.18 -u user@vsphere.local
  3. To create External Secrets objects within the vSphere Namespace - demo1 in this example - set the context appropriately.
$ kubectl config use-context demo1
  4. Create a secret to store the GCP service account's key downloaded in Step 1 - key.json in this example.
$ kubectl create secret generic gcpsm-secret --from-file=secret-access-credentials=key.json -n demo1
  5. Modify line 14 (projectID: my-gcp-projectid) of the file - external-secrets-example.yaml in this example - that was downloaded in Step 1 to match your GCP project ID, then use kubectl to deploy the file.
$ kubectl apply -f external-secrets-example.yaml
  6. Upon successful deployment, a new secret object workload-vsphere-tkg2-ssh should have been created, and its data should match the secret uploaded to GCP Secret Manager.
$ kubectl get secret -n demo1 workload-vsphere-tkg2-ssh -o json |jq -r '.data."ssh-privatekey"'|base64 -d
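For reference, an external-secrets-example.yaml consistent with the steps above might combine a SecretStore and an ExternalSecret along these lines (a sketch; the names mirror the example values used in the steps):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: gcp-store
  namespace: demo1
spec:
  provider:
    gcpsm:
      projectID: my-gcp-projectid        # replace with your GCP project ID
      auth:
        secretRef:
          secretAccessKeySecretRef:
            name: gcpsm-secret           # Secret holding the service account key
            key: secret-access-credentials
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: tkg-ssh-external-secret
  namespace: demo1
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: gcp-store
    kind: SecretStore
  target:
    name: workload-vsphere-tkg2-ssh      # Kubernetes Secret to create
  data:
  - secretKey: ssh-privatekey
    remoteRef:
      key: tkg-ssh-priv-keys             # secret name in GCP Secret Manager
```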