# v0.22.0 — Production Scalability and Downstream Integration

> **Full technical details:** [v0.22.0.md-full.md](v0.22.0.md-full.md)

**Status: ✅ Released** | **Scope: Large** (~5 weeks)

> Stream table changes can now feed Kafka, Debezium, and event-sourcing
> pipelines directly. Independent stream tables refresh simultaneously
> in a parallel worker pool. Refresh strategy is predicted from historical
> cost data. SLA targets automatically assign the right scheduling tier.

---

## What problem does this solve?

pg_trickle consumes change data from source tables but — until now — could not emit changes downstream to external systems. The serial refresh scheduler was a bottleneck for deployments with many independent stream tables. The AUTO mode cost model reacted to slow refreshes rather than predicting them. And tier assignment required manual tuning rather than being driven by declared latency requirements.

---

## Downstream CDC Publication: Feed Kafka Without a Second Replication Slot

`stream_table_to_publication(name)` creates a PostgreSQL logical replication publication for a stream table. Any Kafka Connect, Debezium, or event-sourcing subscriber can then consume exactly the changes applied to the stream table — inserts, updates, and deletes — as a reliable event stream.

This requires no second replication slot and no additional infrastructure. The publication is backed by PostgreSQL's native logical replication, which is already present in every PostgreSQL installation.

*In plain terms:* your stream tables can now drive downstream systems. When the "daily revenue by region" stream table updates, Kafka gets an event. Your data warehouse, notification service, or event-sourcing log all stay in sync — all from within PostgreSQL.

The publication lifecycle is managed automatically: it is dropped when the stream table is dropped, and rebuilt if the schema changes.
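As a sketch, wiring a stream table into a downstream consumer might look like the following. The function names `create_stream_table` and `stream_table_to_publication` come from this release; the stream table name and query are illustrative only.

```sql
-- Create a stream table, then expose its applied changes as a logical
-- replication publication (no second replication slot required).
SELECT create_stream_table(
    'daily_revenue_by_region',
    query => 'SELECT region, sum(amount) AS revenue
              FROM orders GROUP BY region'
);

SELECT stream_table_to_publication('daily_revenue_by_region');

-- Any logical-replication subscriber (for example, a Debezium Postgres
-- connector with "publication.name" set to the publication created above)
-- now receives the stream table's inserts, updates, and deletes.
```

Because the publication is ordinary PostgreSQL logical replication, no connector-specific plumbing lives inside pg_trickle; the subscriber side is configured entirely in the consuming system.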
---

## Parallel Refresh Worker Pool

A **coordinator/worker architecture** replaces the previous single-threaded scheduler:

- The coordinator owns the dependency graph and dispatches work
- Worker processes execute refreshes concurrently
- Independent stream tables at the same level in the dependency graph run simultaneously

`pg_trickle.max_parallel_workers` controls the pool size (default 0 = serial, maximum 32). Diamond dependencies are handled correctly — the coordinator waits for all branches to complete before refreshing the downstream node.

*In plain terms:* in a deployment with 100 independent stream tables and 8 workers, a refresh cycle takes roughly 1/8 of the previous time.

---

## Predictive Cost Model

The AUTO mode decision previously reacted to a slow differential refresh by switching to FULL on the *next* cycle. The new predictive model uses linear regression over the recent history of refresh times to *predict* whether the current change batch will be faster with DIFFERENTIAL or FULL — and switches pre-emptively.

The prediction is visible in `df_threshold_advice` and logged as `refresh_reason = 'predicted_cost_exceeds_full'` when it fires. A cold-start fallback applies when fewer than 5 historical data points are available.

---

## SLA-Driven Tier Auto-Assignment

`create_stream_table(name, query => '...', sla => interval '30 seconds')` declares a freshness deadline. The scheduler automatically assigns the stream table to the tier whose dispatch rate meets that deadline given current queue depth, and dynamically re-assigns it if the queue depth changes.

*In plain terms:* instead of manually configuring "put this stream table in the hot tier", you declare "this stream table must never be more than 30 seconds stale" and the scheduler works out the tier assignment for you.

---

## Scope

v0.22.0 is a significant scalability and integration release. Downstream CDC publication connects stream tables to the wider event-driven architecture ecosystem.
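The predictive cost model described above can be sketched with PostgreSQL's built-in linear-regression aggregates. Everything below is a hypothetical illustration, not pg_trickle's actual catalog: the `refresh_history` table, its columns, and the literal batch size of 250000 pending rows are all stand-ins.

```sql
-- Fit differential refresh time as a linear function of batch size over
-- recent history, then compare the prediction for the current batch
-- against the observed cost of a full refresh.
WITH fit AS (
    SELECT regr_slope(duration_ms, rows_changed)     AS slope,
           regr_intercept(duration_ms, rows_changed) AS intercept,
           count(*)                                  AS n
    FROM refresh_history
    WHERE strategy = 'DIFFERENTIAL'
),
full_cost AS (
    SELECT avg(duration_ms) AS avg_full_ms
    FROM refresh_history
    WHERE strategy = 'FULL'
)
SELECT CASE
           WHEN fit.n < 5 THEN 'DIFFERENTIAL'  -- cold-start fallback
           WHEN fit.slope * 250000 + fit.intercept > full_cost.avg_full_ms
               THEN 'FULL'                     -- predicted_cost_exceeds_full
           ELSE 'DIFFERENTIAL'
       END AS chosen_strategy
FROM fit, full_cost;
```

The point of the pre-emptive switch is that the comparison happens *before* the refresh runs, using only the size of the pending change batch, rather than after a differential refresh has already blown past the full-refresh cost.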
Parallel refresh removes the serial scheduler as a throughput bottleneck. The predictive cost model and SLA-driven tier assignment reduce the operational knowledge required to run pg_trickle well.
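As a closing sketch, here is how the SLA declaration from this release might look in practice. Only the `create_stream_table(name, query, sla)` signature comes from the notes above; the stream table name, the query, and the tier-selection comment are illustrative.

```sql
-- Declare a 30-second freshness deadline at creation time. The scheduler
-- assigns (and re-assigns) the tier rather than requiring manual tuning.
SELECT create_stream_table(
    'orders_by_region',
    query => 'SELECT region, count(*) AS orders, sum(amount) AS revenue
              FROM orders GROUP BY region',
    sla   => interval '30 seconds'
);

-- Conceptually, a tier satisfies the SLA when its worst-case wait fits
-- within the deadline at the current queue depth:
--     queue_depth / dispatch_rate  <=  sla
```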