How to integrate Istio with Ververica Platform?
Note: This article applies to Ververica Platform 2.6+.
In general, no VVP-specific settings are required to work with Istio. However, at the time of writing there is a known issue related to Kubernetes sidecar containers (not caused by VVP): Istio-injected sidecar containers may prevent VVP pods from completing normally. This issue is addressed further below.
Note: If Istio is not installed in your cluster yet, install it by following the official Istio documentation.
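If istioctl is available on your machine, a minimal installation could look like the following (the default profile is only one option; consult the Istio documentation for production-grade setups), followed by a quick check that the control plane pods are up:
istioctl install --set profile=default -y
kubectl get pods -n istio-system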
The instructions below use two Kubernetes namespaces, both of which will eventually be managed by the Istio service mesh:
1) vvp - the VVP control plane and a MinIO deployment for VVP Artifact Storage. MinIO is optional; if external object storage is configured for your VVP installation, the MinIO parts below can be ignored.
2) vvp-jobs - for all Flink applications created by VVP deployments.
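If the namespaces do not exist yet, they can be created upfront (a minimal example; adjust to your own tooling and naming conventions):
kubectl create namespace vvp
kubectl create namespace vvp-jobs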
First, check the current status of pods in both namespaces.
Expecting two pods in the vvp namespace:
kubectl get po -n vvp
NAME READY STATUS RESTARTS AGE
minio-f487cb456-jm86x 1/1 Running 0 5m7s
vvp-ververica-platform-fdb85fd68-dr5wm 3/3 Running 0 5m5s
Checking the list of container images across the two existing pods:
kubectl get po -n vvp -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c
1 minio/minio:latest
1 registry.ververica.com/v2.9/vvp-appmanager:2.9.0
1 registry.ververica.com/v2.9/vvp-gateway:2.9.0
1 registry.ververica.com/v2.9/vvp-ui:2.9.0
There are no Istio containers yet.
Checking the second namespace.
kubectl get po -n vvp-jobs
No resources found in vvp-jobs namespace.
The vvp-jobs namespace has no pods deployed yet.
Note: If you already have running deployments in the vvp-jobs namespace, ignore this check.
Istio requires a special label on a namespace in order to manage the applications in it. Add it to both namespaces:
kubectl label namespace vvp istio-injection=enabled
kubectl label namespace vvp-jobs istio-injection=enabled
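One way to confirm that the labels are in place:
kubectl get namespace vvp vvp-jobs --show-labels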
Restart the pods in the vvp namespace so that Istio injects its sidecar containers. Replace the pod names in the commands below with your own:
kubectl delete po minio-f487cb456-jm86x -n vvp
kubectl delete po vvp-ververica-platform-fdb85fd68-dr5wm -n vvp
Note: If you install VVP after installing Istio, restarting the pods is not needed, because the very first start of the VVP pods will already be managed by Istio. Nevertheless, the vvp namespace needs to be labeled with istio-injection as shown above before the VVP installation.
Checking the current list of containers in the vvp namespace:
kubectl get pods -n vvp \
-o jsonpath="{.items[*].spec['initContainers','containers'][*]['image']}" | \
tr -s '[[:space:]]' '\n'
docker.io/istio/proxyv2:1.16.1
docker.io/istio/proxyv2:1.16.1
minio/minio:latest
registry.ververica.com/v2.9/vvp-appmanager:2.9.0
registry.ververica.com/v2.9/vvp-gateway:2.9.0
registry.ververica.com/v2.9/vvp-ui:2.9.0
docker.io/istio/proxyv2:1.16.1
docker.io/istio/proxyv2:1.16.1
Now there are two Istio containers (an init container and a proxy sidecar) injected into each pod.
Explore the containers of each pod in detail to see that they have different names, while all Istio sidecar containers use the same container image.
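For example, the following command prints each pod together with the names of its (non-init) containers:
kubectl get po -n vvp -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'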
Note: Skip this step if you access the VVP UI via a port-forward of the service HTTP port.
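For reference, such a port-forward might look like this, assuming the VVP service is named vvp-ververica-platform and serves HTTP on port 80 (as the VirtualService below also assumes); the UI is then reachable at http://localhost:8080:
kubectl port-forward -n vvp service/vvp-ververica-platform 8080:80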
Deploy the Istio Gateway and VirtualService resources below if the VVP UI needs to be accessed via an external IP and no existing ingress configuration is set up.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: vvp-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: vvp
spec:
  hosts:
  - "*"
  gateways:
  - vvp-gateway
  http:
  - match:
    - uri:
        prefix: /app
    - uri:
        prefix: /ui
    - uri:
        prefix: /namespaces
    - uri:
        prefix: /sql
    - uri:
        prefix: /catalog
    - uri:
        prefix: /artifacts
    - uri:
        prefix: /api/v1
    - uri:
        prefix: /flink-ui/v1
    route:
    - destination:
        host: vvp-ververica-platform
        port:
          number: 80
Save the above YAML definition as a local file (istio/vvp-gateway.yaml in this example) and apply it to the vvp Kubernetes namespace by running:
kubectl apply -n vvp -f istio/vvp-gateway.yaml
The next step is to determine the host address of the Ingress gateway by using this subsection from the Istio documentation: https://istio.io/latest/docs/setup/getting-started/#determining-the-ingress-ip-and-ports.
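For clusters where the istio-ingressgateway service receives an external load balancer address, the host and port can typically be determined as follows (see the linked documentation for NodePort and other environments):
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
echo "$INGRESS_HOST:$INGRESS_PORT"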
Example of opening the VVP UI via the ingress gateway: http://<determined host>/app/
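A quick way to verify that the gateway routes traffic to the VVP UI, using the variables from the previous step:
curl -sI "http://$INGRESS_HOST:$INGRESS_PORT/app/"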
Before starting any new VVP deployment or restarting an existing one, we need to solve the issue mentioned earlier: sidecar containers may keep a pod from completing. This issue leads to problems in the VVP control plane, as it will not be able to restart Flink jobs to apply new settings.
Below is one possible solution, which requires changing the Kubernetes pod template of the VVP Deployment. The following YAML needs to be added under spec.template.spec (a sketch of the surrounding Deployment structure follows the snippet):
...
kubernetes:
  jobManagerPodTemplate:
    spec:
      containers:
        - command:
            - sh
            - '-c'
            - |
              /bin/bash <<'EOSCRIPT'
              set -e
              sleep 10
              while true; do pgrep java || break; sleep 5; done
              pgrep envoy && kill $(pgrep envoy)
              EOSCRIPT
          image: 'ubuntu:19.04'
          name: istio-proxy-terminator
          resources:
            limits:
              cpu: 10m
              memory: 20M
            requests:
              cpu: 10m
              memory: 20M
      shareProcessNamespace: true
...
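For orientation, here is a trimmed sketch of where this block sits inside a full VVP Deployment resource; the deployment name and the artifact section are placeholders, and only the kubernetes.jobManagerPodTemplate part is relevant here:
kind: Deployment
metadata:
  name: my-flink-deployment        # placeholder name
spec:
  template:
    spec:
      artifact:
        ...                        # your existing artifact configuration
      kubernetes:
        jobManagerPodTemplate:
          spec:
            containers:
              - name: istio-proxy-terminator
                ...                # as shown above
            shareProcessNamespace: true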
Use it to update existing VVP deployments. This is how it may look on the VVP UI:
Note: Adding the istio-proxy-terminator container is a sub-optimal solution, as every VVP deployment will consume additional CPU/memory from the Kubernetes cluster. A more efficient solution would be to implement a custom controller that detects when sidecar containers need to exit because the main containers have completed. Alternatively, watch for an official solution to this problem from Kubernetes itself.
The Kiali tool is a convenient way to visualize service-to-service network traffic, so we can use it to inspect all the Istio proxies used in the vvp and vvp-jobs namespaces.
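If Kiali was installed from the Istio addons, one simple way to open its dashboard locally is:
istioctl dashboard kiali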
Observations:
1. The vvp-ververica-platform application is accessed through the Istio Ingress Gateway; the Gateway host is what web browsers connect to.
2. The vvp-ververica-platform application uses MinIO to list or upload artifacts for VVP deployments.
3. The pods in vvp-jobs (jobmanager, taskmanager) are created as a Flink cluster for a VVP deployment.
4. The pods in vvp-jobs also access services outside their namespace, such as MinIO, Kafka, the Kubernetes API, etc.
5. Traffic to MinIO is shown in red status due to some percentage of 4xx HTTP responses.
Suspend or cancel the running deployment to verify that its pods terminate successfully. After the stop completes, the vvp-jobs namespace should not contain any running pods of the tested deployment:
kubectl get po -n vvp-jobs
No resources found in vvp-jobs namespace
To enable Istio service-to-service mTLS authentication for VVP services, add an Istio PeerAuthentication resource either to the Istio root namespace (istio-system) or to both the vvp namespace and your VVP deployment namespace (vvp-jobs in our case). The first option applies the authentication policy to all namespaces automatically.
# peer-authentication.yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
spec:
  mtls:
    mode: STRICT
Apply the above YAML configuration to your cluster:
kubectl apply -f peer-authentication.yaml -n vvp
kubectl apply -f peer-authentication.yaml -n vvp-jobs
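To confirm that the policies are in place:
kubectl get peerauthentication -n vvp
kubectl get peerauthentication -n vvp-jobs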
Then start a new VVP deployment or restart an existing one to add mTLS to the communication between the VVP services and the Flink JobManagers and TaskManagers.
Below is a Kiali service graph for the vvp and vvp-jobs namespaces. You can see a small black lock icon on the arrows between services, which means that the client and server sides are authenticated via mTLS. As you can see, VVP does not require any special configuration other than the standard Istio authentication policy applied to a Kubernetes namespace.