Ververica Cloud: Dual Data Pipelines Are Done
- 10x Faster Deployment
- 90% Faster Diagnostics
- 1 Pipeline for Batch & Streaming
- 40-60% Cost Reduction
The Death of Dual Pipelines.
Why run two separate data platforms to represent one truth?
With Ververica's newest release, you can finally eliminate the dual-pipeline architecture that creates duplication, drift, and operational risk in your cloud deployments. One managed platform for all your real-time data processing needs.
Learn More
Check out the new feature details in the announcement blog
New Features and Improvements
Eliminate pipeline duplication, reduce operational complexity, and rebuild trust in your data.
Fundamentally change how you build and operate your data pipeline. Stop managing separate streaming and batch systems. Replace fragile streaming ETL workflows with declarative SQL. Define what you want, and let Ververica's Unified Streaming Data Platform handle the execution.
Available Now

Materialized Tables
Define tables once using SQL. The platform maintains them over time.
Freshness-Driven Execution
Declare how up-to-date data must be and let the platform choose the execution strategy.
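As a minimal sketch, assuming Flink SQL's materialized table syntax (table and column names are illustrative), the two features above combine into a single declarative statement:

-- Define the table and its freshness target once; the platform maintains it
CREATE MATERIALIZED TABLE dwd_orders
FRESHNESS = INTERVAL '1' MINUTE
AS SELECT o.order_id, o.amount, u.region
   FROM orders AS o
   JOIN users AS u ON o.user_id = u.user_id;

The FRESHNESS clause declares how stale the table may be; the platform then decides whether a continuous streaming job or a scheduled batch refresh is the right way to meet that target.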
Built-In Workflow Scheduling
Bounded refreshes are planned and scheduled automatically.
Resource Queue Management
Always-on streaming workloads are protected, and batch work runs safely alongside them.
Unified Semantics
Streaming and Batch. One platform, one set of rules.
Ready for one pipeline? Read the release notes.
One Powerful Engine for Batch and Streaming
Meet VERA
VERA is the heart of Ververica’s Streaming Data Platform, the engine that operationalizes streaming data and optimizes open source Apache Flink®.
VERA allows you to connect, process, analyze, and govern your data in one ultra-high-performance streaming data solution with exactly-once semantics built in. Created to solve both batch and real-time streaming use cases, VERA makes it easy for you to harness insights from your data at any volume and scale.
VERA Engine Key Features
Gemini State Backend
Snapshots are 97% faster, and state migrations that once took 20 minutes now take 30 seconds.
Tiered Storage
With hot data in memory or on SSD and cold data in object storage, you'll never hit local disk limits.
Key/Value Separation for Joins
Get up to 2x faster streaming joins on workloads with low match rates.
Dynamic Complex Event Processing (CEP)
Update fraud detection rules in your database table, and running jobs pick up the changes automatically with no job restart. Now you can react to threats in minutes, not days.
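As a hypothetical sketch of that pattern, rule definitions live in an ordinary database table that running jobs poll for changes; the table layout and JSON encoding here are illustrative, not Ververica's actual schema:

-- Illustrative rule table polled by running CEP jobs
CREATE TABLE fraud_rules (
  rule_id      INT PRIMARY KEY,
  rule_pattern VARCHAR(4096),   -- serialized pattern definition
  version      INT
);

-- Tighten a rule; running jobs pick up the new version with no restart
UPDATE fraud_rules
SET rule_pattern = '{"events": ["login", "large_transfer"], "within": "5 MINUTES"}',
    version      = version + 1
WHERE rule_id = 42;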
CDAS/CTAS (Create Database/Table As Select)
Move data with one SQL statement, such as CREATE TABLE target AS SELECT * FROM source, and Ververica handles the rest: automatic schema inference, offset tracking, delivery guarantees, and seamless schema evolution.
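As a minimal sketch (catalog, database, and table names are illustrative):

-- CTAS: create the target, infer its schema, and start syncing in one statement
CREATE TABLE lake.orders_copy AS SELECT * FROM kafka_catalog.orders;

-- CDAS: replicate a whole database, tracking future schema changes as well
CREATE DATABASE IF NOT EXISTS lake.sales
AS DATABASE mysql_catalog.sales INCLUDING ALL TABLES;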
Unified Storage
Build one durable, cost-effective data layer on a lakehouse architecture. A single source of truth with no data duplication.
Streamhouse Key Features
Stream Directly to the Data Lake
With ACID transactions, automatic compaction, and native change data capture (CDC) support.
Query Both Ways
Real-time dashboards and deep historical analytics on the same table.
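As a minimal sketch using standard Flink SQL (the orders table is illustrative), the same table serves both access patterns:

-- Historical analytics: run the query as a bounded batch job
SET 'execution.runtime-mode' = 'batch';
SELECT region, SUM(amount) FROM orders GROUP BY region;

-- Real-time dashboard: the same query, continuously updating
SET 'execution.runtime-mode' = 'streaming';
SELECT region, SUM(amount) FROM orders GROUP BY region;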
Multi-Engine Compatibility
Flink writes streams, while Spark, Presto, Trino, and your BI tools all operate on the same tables. No duplication, no data silos.
Automatic Schema Evolution
Add columns or change types with no downtime or redeployment.
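For example, assuming a table format that supports in-place evolution (table and column names are illustrative), adding a column is a single statement:

-- Add a column to a live table; existing readers and writers keep running
ALTER TABLE orders ADD coupon_code STRING;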
Time Travel and Audit Trails
Query tables as of a specific timestamp or snapshot ID. Easily restore previous versions for debugging, auditing, or recovery.
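As a minimal sketch using Flink SQL's time-travel syntax (the timestamp and table name are illustrative; snapshot-ID addressing varies by table format):

-- Query the table as it existed at a point in time
SELECT * FROM orders FOR SYSTEM_TIME AS OF TIMESTAMP '2024-06-01 00:00:00';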
New Connectors and Catalogs
Delta Lake Connector
This connector provides a first-class, Unity Catalog–backed metadata layer for writing Flink batch and streaming data into Delta Lake tables, so you can manage Delta tables using fully qualified names (catalog.schema.table) with consistent governance across engines. It integrates natively with Flink and supports exactly-once semantics for both batch and streaming workloads, simplifying table management, eliminating filesystem-level configuration, and laying the foundation for reliable append and CDC-friendly write patterns. The result is a production-grade, lakehouse-aligned experience for data engineers.
Databricks Unity Catalog Integration
Use the centralized metadata, governance, and access-control layer that allows Apache Flink jobs to discover, read from, and write to lakehouse tables (such as Delta Lake) using fully qualified names (like catalog.schema.table) instead of managing table paths and metadata manually. Unity Catalog acts as the authoritative metastore that provides consistent table definitions, permissions, and lineage, enabling governed, production-grade batch and streaming pipelines that integrate cleanly with the Databricks lakehouse ecosystem.
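As a hypothetical sketch only (the catalog type and option names below are placeholders, not Ververica's actual configuration keys), registering the catalog once lets every job address tables by name:

-- Hypothetical registration; consult the product docs for real option names
CREATE CATALOG unity WITH (
  'type'  = '...',   -- Unity Catalog connector type
  'uri'   = '...',   -- Unity Catalog endpoint
  'token' = '...'    -- access credential
);

-- Fully qualified access: catalog.schema.table
SELECT * FROM unity.sales.orders;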
Apache Iceberg Catalog Support
The Iceberg Catalog brings a first-class, centrally managed Apache Iceberg metadata layer into Ververica, enabling users to configure Iceberg backends once and reference tables consistently as catalog.database.table across all Flink SQL and job deployments. By aligning with the upstream Iceberg Flink catalog integration and folding it into the native Ververica catalog experience, it eliminates repeated per-table configuration, improves discoverability and governance, and ensures consistent, reliable Iceberg table access across teams, environments, and deployment modes.
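As a minimal sketch following the upstream Iceberg Flink catalog integration the text references (endpoint and warehouse values are illustrative):

-- Register the Iceberg backend once
CREATE CATALOG iceberg WITH (
  'type'         = 'iceberg',
  'catalog-type' = 'hive',
  'uri'          = 'thrift://metastore:9083',
  'warehouse'    = 's3://lake/warehouse'
);

-- Tables are then addressable as catalog.database.table
INSERT INTO iceberg.analytics.page_views
SELECT user_id, url, view_time FROM clickstream;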
See It in Action
$400 in free credits. First pipeline in hours. Architecture that scales from prototype to petabytes.