When Ververica Platform runs in Kubernetes namespace A, a Deployment running in Kubernetes namespace B (as specified by the configured Deployment Target) may get stuck in the Transitioning state, even after the Flink job has started successfully.
Ververica Platform needs to talk to the Flink JobManager on port 8081 to get the job status. If you want to preview your SQL queries using a session cluster, Ververica Platform also needs to talk to the result-fetcher container on port 6568 to fetch the query results. By default, cross-namespace communication is allowed in a Kubernetes cluster. However, if network policies are enabled, your cluster may be configured to allow communication only within the same namespace. One such network policy could look like this:
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace-only
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}
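To find out whether such a restrictive policy is active, you can list the NetworkPolicy objects in your job's namespace. This is a minimal sketch; the namespace name "jobs" and the policy name are placeholders for your actual names:

```shell
# List all network policies in the job's namespace
# (replace "jobs" with your actual namespace name).
kubectl get networkpolicy -n jobs

# Inspect a specific policy to see its pod selector
# and ingress rules.
kubectl describe networkpolicy allow-same-namespace-only -n jobs
```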
In order to allow the Ververica Platform pod to communicate with the JobManager pod, add the following network policy to your job's namespace:
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-vvp
spec:
  podSelector:
    matchLabels:
      app: flink-job
      component: jobmanager
  ingress:
    - ports:
        - protocol: TCP
          port: 8081
Important: The network policy allow-vvp needs to be added to your job's namespace.
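A sketch of applying the policy, assuming you have saved it as allow-vvp.yaml and that your job's namespace is called "jobs" (both names are placeholders):

```shell
# Create the policy in the job's namespace, not in the
# namespace where Ververica Platform itself runs.
kubectl apply -f allow-vvp.yaml -n jobs

# Confirm the policy exists in the right namespace.
kubectl get networkpolicy allow-vvp -n jobs
```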
To allow the Ververica Platform pod to communicate with the jobmanager container and the result-fetcher container of the SQL preview session cluster, add the following network policy in your job's namespace:
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-vvp-session-cluster
spec:
  podSelector:
    matchLabels:
      app: flink-session-cluster
      component: jobmanager
  ingress:
    - ports:
        - protocol: TCP
          port: 8081
        - protocol: TCP
          port: 6568
Important: The network policy allow-vvp-session-cluster needs to be added to your job's namespace.
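Once the policies are in place, you can spot-check connectivity from the Ververica Platform pod by calling the JobManager's REST port. This is only a sketch: the Ververica Platform deployment name, namespaces, and JobManager service name below are placeholder assumptions, and it assumes curl is available in the Ververica Platform image:

```shell
# From the Ververica Platform pod (here assumed to live in
# namespace "vvp" under deployment "vvp-ververica-platform"),
# call the JobManager service in the job's namespace "jobs".
# /overview is a standard Flink REST endpoint.
kubectl exec -n vvp deploy/vvp-ververica-platform -- \
  curl -s http://flink-job-jobmanager.jobs.svc:8081/overview
```

If the command hangs or times out, ingress to port 8081 is still being blocked by a network policy.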