Dual Pipelines Are Done. Ververica Unifies Batch and Streaming.

Introducing Materialized Tables, Freshness-Driven Execution, and the End of Pipeline Duplication

Why are you running two completely separate data platforms when they're supposed to represent the same truth?

One pipeline runs continuously for real-time dashboards. Another runs on a schedule for historical analytics. They start with the same intent but drift apart in logic, semantics, and timing. Your real-time fraud detection shows different numbers than yesterday's compliance report. Your streaming customer 360 doesn't match your batch ML training data.

This isn't a "technology preference." It's organizational cost, decision risk, and wasted engineering effort.

Data teams spend more time reconciling pipelines than deriving insights. Analysts need to understand both streaming and batch semantics just to trust the numbers. Engineers maintain duplicate logic across Apache Spark™ and Apache Flink®. Business leaders stop trusting dashboards because the numbers keep changing.

The root cause is simple: your data architecture treats streaming and batch as separate problems when they should be unified expressions of the same intent.

Today, we're solving this.

What We're Announcing

Ververica now delivers true unified streaming and batch execution with a new set of capabilities that eliminate pipeline duplication, reduce operational complexity, and rebuild trust in your data, including:

  • Materialized Tables: Define tables once using SQL; the platform maintains them over time
  • Freshness-Driven Execution: Declare how up-to-date data must be; the platform chooses the execution strategy
  • Built-In Workflow Scheduling: Bounded refreshes are planned and scheduled automatically
  • Resource Queue Management: Always-on workloads are protected and batch work runs safely
  • Unified Streaming and Batch Semantics: One platform, one set of rules

Batch execution

Also new in this release:

  • Apache Iceberg Catalog support
  • Delta Lake Connector
  • Databricks Unity Catalog integration

This release fundamentally changes how you build, operate, and trust data pipelines. You no longer manage separate streaming and batch systems. You define what you want, and the platform handles the execution.

The Problem: Two Systems for One Truth

Across every industry, data teams face the same challenge: two sets of pipelines that should represent one source of truth but don't.

Here's what that looks like in practice across industries:

Financial Services:
Your streaming fraud detection pipeline flags suspicious transactions in real time. Your batch ML training pipeline recomputes fraud patterns nightly. The business logic should be identical, but it's implemented in two different systems (Flink for streaming, Spark for batch). Over time, the two drift. Your fraud model ends up trained on data that doesn't match production detection logic. The result? False positives spike and compliance violations follow.

Retail & E-Commerce:
Your real-time customer 360 dashboard updates continuously. Your batch analytics pipeline runs nightly for marketing campaigns. Same customer data, different numbers. Marketing sends campaigns based on yesterday's batch run while support sees today's real-time view. Your teams are making decisions on inconsistent data.

Manufacturing & IoT:
Streaming pipelines monitor machine sensor data for predictive maintenance. Batch pipelines analyze historical patterns for capacity planning. When the numbers don't match, you're either over-maintaining equipment (wasting money) or under-maintaining (risking downtime and safety).

The costs pile up fast:

  • Duplicate engineering effort: Write the same business logic twice, maintain two codebases
  • Semantic drift: Real-time and historical views diverge over time, eroding trust
  • Operational complexity: Manage schedules, backfills, dependencies across two systems
  • Cognitive burden: Analysts must understand both streaming and batch semantics
  • Hidden risk: Inconsistent data leads to bad decisions, compliance failures

This isn't about having the wrong tools. It's about having an architecture that treats streaming and batch as separate worlds when they should be unified.

The Solution: Table-Centric, Freshness-Driven Unification

Ververica solves this by collapsing the divide between streaming and batch entirely. One platform. One definition. One source of truth.

Here's how it works:

Materialized Tables: Define Once, Trust Forever

Instead of building separate pipelines for streaming and batch, you define a Materialized Table once using SQL. That definition becomes a contract the platform maintains over time.
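Since Ververica builds on Apache Flink, such a definition might look like the materialized-table DDL introduced in Flink SQL; this is a sketch, and the table and column names are illustrative, not part of the product announcement:

```sql
-- Hypothetical example in Flink-style materialized-table DDL.
-- One SQL definition; the platform maintains the result over time.
CREATE MATERIALIZED TABLE dwd_orders
PARTITIONED BY (ds)
FRESHNESS = INTERVAL '30' SECOND
AS SELECT
    o.ds,
    o.order_id,
    o.customer_id,
    o.amount,
    p.status AS payment_status
FROM orders AS o
LEFT JOIN payments AS p ON o.order_id = p.order_id;
```

Note that there is no sink configuration, no job deployment, and no orchestration code here: the query plus the FRESHNESS declaration is the whole contract.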


What happens next:

  • The schema is derived automatically from your query
  • The platform keeps the table up to date according to your freshness declaration
  • The same table serves operational dashboards, historical reports, and exploratory analysis
  • No separate pipelines. No duplicate logic. No semantic drift.

You don't manage execution modes. You declare intent. The platform handles the rest.

Freshness-Driven Execution: Intent, Not Implementation

Freshness expresses how up-to-date your data needs to be, not how often something should run.

As business requirements evolve, you change freshness, not your SQL, not your pipeline architecture. The same table can move between "hot" (real-time) and "cold" (batch) usage patterns without rewriting anything.

This is critical for regulated industries where compliance reporting might need daily freshness, but fraud detection needs sub-second freshness.

One table definition. Different execution strategies. No duplication.
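In the Flink materialized-table model this platform builds on, the declared FRESHNESS interval is what steers the engine toward a continuous streaming job or a scheduled batch refresh. A sketch, with illustrative identifiers, of the same query shape under two freshness declarations:

```sql
-- Near-real-time freshness: typically maintained by a
-- continuous streaming job (identifiers are illustrative).
CREATE MATERIALIZED TABLE fraud_scores
FRESHNESS = INTERVAL '1' SECOND
AS SELECT account_id, SUM(risk_score) AS total_risk
FROM transactions
GROUP BY account_id;

-- Daily freshness: the same query can instead be satisfied by a
-- scheduled batch refresh; only the FRESHNESS clause differs.
CREATE MATERIALIZED TABLE compliance_daily
FRESHNESS = INTERVAL '1' DAY
AS SELECT account_id, SUM(risk_score) AS total_risk
FROM transactions
GROUP BY account_id;
```

In open-source Flink, the cutover point between continuous and full-refresh maintenance is a configurable threshold rather than a hard rule, which is what lets one table move between "hot" and "cold" patterns without a rewrite.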

Built-In Workflow Scheduling: No External Orchestrators

Not all data needs to be updated continuously. When bounded execution is required, Ververica generates and schedules refresh workflows automatically.

No manually defined dependencies.

The platform understands your table definitions, dependencies, and freshness requirements. It schedules refreshes, handles backfills, and coordinates execution natively.

This matters because:

  • Data engineers don't maintain orchestration code alongside business logic
  • Changes to table definitions automatically propagate to scheduling
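Even when refreshes are scheduled automatically, targeted operations stay declarative. Flink's materialized-table DDL, for example, exposes backfills and lifecycle control as one-line statements; the identifiers below are illustrative:

```sql
-- Re-materialize a single historical partition (a targeted backfill);
-- identifiers are illustrative.
ALTER MATERIALIZED TABLE dwd_orders
REFRESH PARTITION (ds = '2026-02-01');

-- Pause and resume background maintenance without
-- touching the table definition itself.
ALTER MATERIALIZED TABLE dwd_orders SUSPEND;
ALTER MATERIALIZED TABLE dwd_orders RESUME;
```

The point is that a backfill is a statement against the table, not a separate DAG to author and deploy in an external orchestrator.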

Workflow Scheduler

Resource Queue Management: Safe Coexistence

In a unified platform, always-on streaming workloads and bounded batch jobs must coexist without conflict. This is where most "unified" platforms fail, because batch backfills starve streaming pipelines, or streaming workloads block batch execution.

Ververica solves this with resource queue management:

  • Always-on streaming workloads are protected with reserved capacity
  • Batch and incremental work runs opportunistically when resources are available
  • Historical recomputation never destabilizes production systems

For highly regulated industries, including financial services (FSI), this is non-negotiable. Real-time fraud detection can't be starved by a batch model-training job. Payment processing can't be delayed because someone is running a historical compliance report.

Ververica ensures workloads coexist safely—no resource starvation, no operational risk.

Resource Queue Management

Why This Matters: Business Outcomes, Not Just Features

The primary value of Ververica's Unified Streaming Data Platform is simplicity that scales.

Lower Operational Cost

When you stop maintaining duplicate pipelines for the same logic, your data platform costs drop. Fewer systems to manage. Fewer broken workflows to fix. Fewer engineers required just to keep things running.

Reduced Risk

Many data problems don't show up as obvious failures. They show up as silent inconsistencies: metrics that drift, reports that don't align, decisions based on stale or incorrect data.

Ververica reduces this risk by maintaining data consistently over time. Changes are controlled. Backfills don't destabilize production. Your data becomes reliable, not fragile.

Higher Trust in Data

When different dashboards show different numbers, confidence collapses. Teams stop acting on insights and start debating the data.

By defining data once and keeping it consistent everywhere, Ververica rebuilds trust. Leaders can focus on decisions, not on whether the data is correct.

A Platform That Scales With Your Business

What works for a small team becomes unmanageable at scale—more pipelines, more schedules, more failures.

Ververica gets simpler as usage grows, not more complex. New use cases reuse the same definitions. Changes don't require rebuilding everything. The platform absorbs complexity instead of pushing it onto your teams.

The Bottom Line

Data is continuous. Your data platform should treat it that way.

Ververica’s new release, available in Ververica Cloud deployments, eliminates the dual-pipeline architecture that creates duplication, drift, and operational risk. With Materialized Tables, freshness-driven execution, and unified streaming and batch semantics, you define data once and trust the platform to maintain it over time.

No more Spark for batch, Flink for streaming.
No more semantic drift between real-time and historical data.
No more orchestration complexity hiding business intent.

One platform. One definition. One source of truth.

Please read the Release Notes for additional details.

Vulnerability Fixes

Maintaining platform security is a top priority. In this release, several vulnerabilities were addressed to safeguard your deployments and ensure system integrity. For a complete list of resolved vulnerabilities, see the Release Notes.

Downloads

More Resources