# v0.36.0 — Structural Hardening, Performance & Temporal IVM

> **Full technical details:** [v0.36.0.md-full.md](v0.36.0.md-full.md)

**Status: Planned** | **Scope: Large**

> Performance deep-dive, structural refactoring, and two new analytic
> capabilities: time-travel queries over a stream table's full history,
> and columnar storage for dramatically faster analytics.

---

## What is this?

v0.36.0 is a broad structural release that delivers performance and architectural improvements across the core engine, adds production-hardening depth identified in the v7 overall assessment, and opens two new analytic workload patterns.

---

## Performance & architectural hardening

v0.36.0 closes a set of performance and structural gaps that have accumulated as the codebase has grown:

- **L0 shared-memory dshash template cache** — the `L0_POPULATED_VERSION` signal has been wired since v0.31.0, but the actual cross-backend dshash table was never constructed. This release fills in the missing piece, eliminating the ~45 ms cold-start penalty for connection-pooler workloads where every backend is short-lived.
- **WAL slot backpressure** — the WAL decoder currently emits warnings when logical replication slot lag exceeds thresholds but takes no action. A new `ENFORCE_BACKPRESSURE` mode pauses CDC trigger writes when slot lag exceeds the critical threshold, preventing disk-fill under a stuck refresh.
- **`src/api/mod.rs` split** — the 6,322-LOC monolith is split into `src/api/lifecycle.rs`, `src/api/refresh.rs`, `src/api/fuse.rs`, and `src/api/status.rs`. No behaviour change; purely a maintainability improvement that makes every future feature easier to review.
- **Structured JSON logging** — a new `pg_trickle.log_format = text|json` GUC emits structured fields (`event`, `pgt_id`, `cycle_id`, `duration_ms`, `refresh_reason`) for OpenTelemetry / Loki integration.
- **Typed DDL event hook** — `src/hooks.rs` currently string-matches on DDL event tags.
  A typed `DdlCommandKind` enum parsed from `pg_event_trigger_ddl_commands()` eliminates the silent-breakage risk from PostgreSQL minor-version wording changes.
- **`RowIdSchema` type** — each DVM operator declares a `RowIdSchema` as part of its signature; a compile-time verifier asserts cross-operator compatibility, addressing the root cause of EC-01.

---

## Temporal IVM — time-travel queries

Normally, a stream table shows the *current* state of the world. Temporal mode changes this: the stream table maintains a full history of how every row has changed over time. Rows are never physically deleted; instead, each row carries a `valid_from` timestamp and an optional `valid_to` timestamp that records when a version was replaced. This enables queries like "what did this table look like at 3 PM on Tuesday?" without any external audit-log infrastructure.

The pattern is known as **SCD Type 2** (Slowly Changing Dimension Type 2) in data warehousing, and it is used for:

- Customer history ("what address was on file when this order shipped?")
- Regulatory audit trails ("what were the account balances at quarter-end?")
- Slowly-changing dimension tables in analytics pipelines

Creating a temporal stream table is a single parameter:

```sql
SELECT pgtrickle.create_stream_table(
  'customer_history',
  query    := 'SELECT id, name, address FROM customers',
  temporal := true
);
```

Queries against the stream table with `AS OF TIMESTAMP $1` automatically resolve against the historical row versions.

---

## Columnar materialization

Stream tables currently store their results in standard PostgreSQL heap storage — optimised for row-by-row reads and writes. Analytic queries that scan millions of rows to compute aggregates are better served by *columnar* storage, where all values for a single column are stored together on disk. This dramatically reduces I/O for aggregate queries: summing a column, for example, reads only that column, not the entire row.
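To make the I/O saving concrete, here is a minimal sketch of the kind of aggregate query that benefits. The table `order_facts_st` and its columns are hypothetical, invented purely for illustration:

```sql
-- Hypothetical: order_facts_st is a wide stream table with many columns.
-- Under heap storage, summing `amount` still reads every full row from
-- disk; under columnar storage, only the `amount` and `ordered_at`
-- column data is scanned.
SELECT date_trunc('month', ordered_at) AS month,
       sum(amount)                     AS revenue
FROM   order_facts_st
GROUP  BY 1;
```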
The `storage_backend := 'columnar'` parameter to `create_stream_table()` tells pg_trickle to store the materialised result in Citus columnar storage or pg_mooncake. The differential refresh machinery continues to work — pg_trickle automatically routes the MERGE to use the `delete_insert` strategy that columnar storage requires, with no manual configuration.

---

## Also in v0.36.0

- Online schema evolution (`ALTER STREAM TABLE EVOLVE`) — adding a column to a `SELECT *` style ST becomes online without a full reinit
- `STREAM TABLE` SQL syntax via `ProcessUtility_hook` — `CREATE STREAM TABLE x AS SELECT …` as sugar over the function call
- Column lineage graph in `pgt_stream_tables` + `pgtrickle.stream_table_lineage()`
- Pluggable sink architecture in the relay — sinks as separate cargo crates
- OOM / disk-full chaos tests; aggregation/window/recursive-CTE fuzz targets
- Sigstore + SBOM provenance attestation at release
- Capacity sizing calculator in `docs/SCALING.md`
- TUI inbox/outbox views
- Bulk `alter_stream_tables()` / `drop_stream_tables()` APIs
- Drain mode: graceful scheduler quiesce before maintenance

---

## Scope

v0.36.0 is a large release. The temporal IVM and columnar storage features require DVM engine extensions; in particular, the temporal work extends the core frontier model — the internal mechanism that tracks which changes have been processed — from a single LSN cursor to a two-dimensional `(LSN, timestamp)` pair, and a design spike is recommended before committing this feature to the milestone. The structural splits and performance work are lower-risk but broad in surface area.

---

*Previous: [v0.35.0 — Correctness Sprint, Reactive Subscriptions & Zero-Downtime Operations](v0.35.0.md)*

*Next: [v0.37.0 — Scheduler Modularisation, pgVectorMV & OpenTelemetry](v0.37.0.md)*