By default, the VMware Event Broker Appliance can integrate with any Open Container Initiative (OCI) compliant container registry for hosting and deploying container images, provided the registry uses a TLS certificate issued by a trusted authority. Docker Hub and Amazon Elastic Container Registry (ECR) are two such examples.
For organizations that require the use of a private container registry that uses a self-signed TLS certificate, additional post-deployment configuration is required within the VMware Event Broker Appliance. Please follow the steps outlined below.
Note: For those using the Harbor registry, the root CA certificate is located in /etc/docker/certs.d/[FQDN]/ca.crt
In this example, the root CA certificate file is named ca.crt and is located in /root on the VMware Event Broker Appliance.
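If you want to double-check that this really is the registry's root CA certificate before continuing, you can inspect it with openssl on whichever system currently holds the file (an optional sanity check; openssl is assumed to be available there):
# Show who issued the certificate and when it expires
openssl x509 -in /root/ca.crt -noout -subject -issuer -enddate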
Enable SSH access to the VMware Event Broker Appliance (for example, from the VM console) so that you can log in and transfer the certificate:
systemctl start sshd
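If the certificate is not on the appliance yet, one way to get it there is to copy it over SSH once sshd is running. This is only a sketch; veba.example.com is a placeholder for your appliance FQDN, and it assumes the ca.crt file sits in your current working directory:
# Copy the registry root CA certificate to /root on the appliance
scp ./ca.crt root@veba.example.com:/root/ca.crt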
Back up the existing containerd configuration file before editing it:
cp /etc/containerd/config.toml /etc/containerd/config.toml.bak
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
within the configuration file.Append the following two lines below this section and replace the REPLACE_ME_FQDN value with FQDN of the private registry and REPLACE_ME_PATH_TO_ROOT_CA_CERT value the full path to the root CA certificate located on the VMware Event Broker Appliance
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.configs."REPLACE_ME_FQDN".tls]
ca_file = "REPLACE_ME_PATH_TO_ROOT_CA_CERT"
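For example, if the private registry were reachable at harbor.example.com (a hypothetical FQDN) and the certificate had been uploaded to /root/ca.crt, the two appended lines would look like this:
  [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.example.com".tls]
    ca_file = "/root/ca.crt"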
Restart the containerd service for the change to take effect and verify that it is running:
systemctl restart containerd
systemctl status containerd
● containerd.service - containerd container runtime
   Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2022-03-28 19:35:06 UTC; 1 day 3h ago
     Docs: https://containerd.io
  Process: 30072 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
 Main PID: 30073 (containerd)
    Tasks: 545
   Memory: 1.2G
   CGroup: /system.slice/containerd.service
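To confirm that containerd picked up the new registry entry, you can also dump its effective configuration and filter for the section you added (the FQDN shown will be whatever you used in place of REPLACE_ME_FQDN):
# Print the merged containerd configuration and show the custom TLS entry
containerd config dump | grep -A 2 'registry.configs'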
Create a Kubernetes secret named customca in the knative-serving namespace that points to the full path of the root CA certificate of the private registry, which should reside within the VMware Event Broker Appliance:
kubectl -n knative-serving create secret generic customca --from-file=ca.crt=/root/ca.crt
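As an optional check, you can confirm the secret exists and contains a ca.crt entry (describe shows the key name and size without printing the certificate data):
kubectl -n knative-serving describe secret customca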
Save a copy of the current Knative Serving controller deployment to a file named knative-serving-controller.yaml:
kubectl -n knative-serving get deploy/controller -o yaml > knative-serving-controller.yaml
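Since the overlay in the next step matches on a container named controller, it can be worth confirming that the exported manifest was written and references that name (a purely optional check; the grep will also match the deployment metadata):
grep -n 'name: controller' knative-serving-controller.yaml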
Create a ytt overlay file named overlay.yaml that adds the SSL_CERT_DIR environment variable to the Knative Serving controller container and mounts the customca secret at /etc/customca:
cat > overlay.yaml <<EOF
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind":"Deployment", "metadata": {"name": "controller", "namespace": "knative-serving"}})
---
spec:
  template:
    spec:
      containers:
      #@overlay/match by=overlay.subset({"name": "controller"})
      -
        env:
        #@overlay/append
        - name: SSL_CERT_DIR
          value: /etc/customca
        #@overlay/match missing_ok=True
        volumeMounts:
        - name: customca
          mountPath: /etc/customca
      #@overlay/match missing_ok=True
      volumes:
      - name: customca
        secret:
          secretName: customca
EOF
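Before generating the new manifest, you can optionally preview what ytt will change by rendering the overlay and filtering for the new environment variable and volume references:
ytt -f overlay.yaml -f knative-serving-controller.yaml | grep -B 1 -A 1 -E 'SSL_CERT_DIR|customca'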
Use ytt to render the overlay against the exported deployment, generating an updated manifest named new-knative-serving-controller.yaml, and then apply it:
ytt -f overlay.yaml -f knative-serving-controller.yaml > new-knative-serving-controller.yaml
kubectl apply -f new-knative-serving-controller.yaml
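If you prefer a command that blocks until the new controller pod is ready, you can also wait on the rollout (an alternative to the watch shown below):
kubectl -n knative-serving rollout status deployment/controller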
It can take a couple of minutes for the previous Knative Serving controller pod to terminate and for the new configuration to spin up. You can monitor the progress using the following command and ensure the READY state shows 1/1:
kubectl -n knative-serving get deployment/controller -w
NAME READY UP-TO-DATE AVAILABLE AGE
controller 1/1 1 1 29h
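To confirm the new revision actually carries the custom CA configuration, check that the SSL_CERT_DIR environment variable and the customca volume show up on the deployment:
kubectl -n knative-serving describe deploy/controller | grep -E 'SSL_CERT_DIR|customca'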
Note: If for some reason the deployment is not re-deploying, you can run
kubectl -n knative-serving delete deployment/controller
and then perform the apply operation with the new Knative Serving YAML.
You can now use a private container registry with the VMware Event Broker Appliance.
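As a final end-to-end check, you can deploy a minimal Knative Service whose container image is pulled from the private registry. The image reference harbor.example.com/veba-test/kn-echo:1.0 and the service name below are hypothetical placeholders; substitute an image that actually exists in your registry:
cat > private-registry-test.yaml <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: private-registry-test
spec:
  template:
    spec:
      containers:
      - image: harbor.example.com/veba-test/kn-echo:1.0
EOF
kubectl apply -f private-registry-test.yaml
# The service should reach READY=True once the image tag resolves and pulls over TLS
kubectl get ksvc private-registry-test -w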