Every query that arrives at Mori is classified, routed to a strategy, and executed against the appropriate backend(s). This page is the exhaustive reference for how each operation type is handled.

Query Classification

The classifier parses each incoming statement to extract:
  • OpType — READ, WRITE, DDL, TRANSACTION, or OTHER
  • SubType — specific operation (e.g., SELECT, INSERT, UPDATE, DELETE, ALTER, BEGIN)
  • Tables — all tables referenced in the query
  • Primary Keys — extractable PKs from WHERE clauses (e.g., WHERE id = 42)
  • Feature flags — IsJoin, HasAggregate, HasSetOp, IsComplexRead, HasReturning, HasOnConflict, HasWindowFunc, HasDistinct, HasCursor, IsMetadataQuery
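As a rough illustration (field names mirror the bullets above, but this is not Mori's actual Go type), the classifier output can be pictured as:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the classifier output described above.
# Names and shapes are assumptions, not Mori's actual types.
@dataclass
class Classification:
    op_type: str                                  # READ, WRITE, DDL, TRANSACTION, OTHER
    sub_type: str                                 # e.g. SELECT, INSERT, BEGIN
    tables: list = field(default_factory=list)    # tables referenced in the query
    primary_keys: dict = field(default_factory=dict)  # table -> PKs from WHERE
    is_join: bool = False
    has_aggregate: bool = False
    has_set_op: bool = False
    is_complex_read: bool = False
    has_returning: bool = False
    has_on_conflict: bool = False

# e.g. SELECT * FROM users WHERE id = 42
c = Classification(op_type="READ", sub_type="SELECT",
                   tables=["users"], primary_keys={"users": [42]})
```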

Classification by Engine

| Engine | Classification Method | Notes |
| --- | --- | --- |
| PostgreSQL, CockroachDB | AST-based (pg_query_go) | Full SQL parse tree |
| MySQL, MariaDB | AST-based (vitess/sqlparser) | Vitess SQL parser |
| MSSQL | Regex-based | No Go-native TDS parser |
| SQLite, DuckDB | Regex-based | pgwire-exposed embedded engines |
| Redis | Command-based (static lookup, ~155 commands) | No AST — fixed command grammar |
| Firestore | gRPC method-name based (18 methods) | No SQL — method name classification |

Statement Types (SQL Engines)

| Category | Statements | SubType |
| --- | --- | --- |
| Read | SELECT | SubSelect |
| Write | INSERT, UPDATE, DELETE, TRUNCATE | SubInsert, SubUpdate, SubDelete, SubTruncate |
| DDL | CREATE TABLE/INDEX, ALTER TABLE, DROP TABLE, RENAME | SubCreate, SubAlter, SubDrop |
| Transaction | BEGIN, COMMIT, ROLLBACK, SAVEPOINT, RELEASE | SubBegin, SubCommit, SubRollback, SubSavepoint, SubRelease |
| Cursor | DECLARE, FETCH, CLOSE | SubCursor |
| Session | SET, SHOW | SubSet, SubShow |
| Explain | EXPLAIN (rejects ANALYZE) | SubExplain |
| Prepared | PREPARE, EXECUTE, DEALLOCATE | SubPrepare, SubExecute, SubDeallocate |
| Listen | LISTEN, UNLISTEN | SubListen |
| Unsupported | NOTIFY, COPY, LOCK TABLE, DO $$, CALL, EXPLAIN ANALYZE | SubNotSupported |

Unsupported Operations by Engine

| Engine | Unsupported Operations |
| --- | --- |
| PostgreSQL | NOTIFY, COPY, LOCK TABLE, DO $$, CALL, EXPLAIN ANALYZE |
| MySQL | CALL, EXPLAIN ANALYZE |
| MSSQL | BULK INSERT, EXEC/EXECUTE |
| DuckDB | COPY/EXPORT/IMPORT, EXPLAIN ANALYZE |
| Redis | PUBLISH (would affect production subscribers) |

Routing Decision Matrix

The router takes the classification output plus the current delta/tombstone/schema state and selects one of 12 execution strategies.

Strategies

| Strategy | Description |
| --- | --- |
| Prod Direct | Forward to prod unchanged, return result. Zero overhead. |
| Merged Read | Query both backends, filter tombstones/deltas, merge results. |
| Join Patch | Execute JOIN on prod, patch delta rows from shadow. |
| Shadow Write | Execute INSERT on shadow only. |
| Hydrate and Write | Hydrate affected rows from prod → shadow, then write. |
| Shadow Delete | Delete from shadow, add tombstone. |
| Shadow DDL | Execute DDL on shadow, update schema registry. |
| Transaction | Forward transaction control to both backends. |
| Not Supported | Return error to client. |
| Forward Both | Forward to both backends (e.g., SET). |
| Truncate | Execute on shadow, mark table fully shadowed. |
| Listen Only | Forward LISTEN to prod only. |

Decision Table (SQL Engines)

| OpType | Condition | Strategy |
| --- | --- | --- |
| SELECT | All tables clean (no deltas/tombstones/schema diffs) | Prod Direct |
| SELECT | Affected table + set operation (UNION/INTERSECT/EXCEPT) | Merged Read |
| SELECT | Affected table + complex read (CTEs, derived tables) | Merged Read |
| SELECT | Affected table + aggregate/GROUP BY | Merged Read |
| SELECT | Affected table + JOIN | Join Patch |
| SELECT | Affected table (simple single-table) | Merged Read |
| INSERT | No ON CONFLICT | Shadow Write |
| INSERT | Has ON CONFLICT | Hydrate and Write |
| UPDATE | Always | Hydrate and Write |
| DELETE | Always | Shadow Delete |
| TRUNCATE | Always | Truncate |
| DDL | Always | Shadow DDL |
| BEGIN/COMMIT/ROLLBACK | Always | Transaction |
| SAVEPOINT/RELEASE | Always | Transaction |
| SET | Always | Forward Both |
| SHOW | Always | Prod Direct |
| EXPLAIN | Clean tables | Prod Direct |
| EXPLAIN | Affected tables | Not Supported |
| CURSOR | Always | Merged Read |
| LISTEN | Always | Listen Only |
| PREPARE | Always | Shadow Write |
| DEALLOCATE | Always | Forward Both |
| NOTIFY | Always | Not Supported |
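The SELECT branch of this table can be sketched as a routing function (a minimal illustration; names like `dirty_tables` and the `flags` dict are assumptions, not Mori's API):

```python
def route_sql(sub_type, tables, dirty_tables, flags):
    """Illustrative router following the decision table above.
    dirty_tables: tables with deltas, tombstones, or schema diffs.
    flags: the classifier's feature flags."""
    affected = any(t in dirty_tables for t in tables)
    if sub_type == "SELECT":
        if not affected:
            return "Prod Direct"
        # Set ops, complex reads, and aggregates need the merge pipeline
        # even when a JOIN is present, so they are checked first.
        if flags.get("has_set_op") or flags.get("is_complex_read") \
                or flags.get("has_aggregate"):
            return "Merged Read"
        if flags.get("is_join"):
            return "Join Patch"
        return "Merged Read"
    if sub_type == "INSERT":
        return "Hydrate and Write" if flags.get("has_on_conflict") else "Shadow Write"
    return {
        "UPDATE": "Hydrate and Write", "DELETE": "Shadow Delete",
        "TRUNCATE": "Truncate", "DDL": "Shadow DDL",
        "SET": "Forward Both", "NOTIFY": "Not Supported",
    }.get(sub_type, "Prod Direct")
```

For example, a single-table SELECT on clean tables routes straight to prod, while a JOIN touching a dirty table takes the patch path.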

Decision Table (Redis)

Redis uses the same strategy model, but classification is command-based rather than SQL-based.
| Command Type | Condition | Strategy |
| --- | --- | --- |
| Read (GET, HGETALL, LRANGE, etc.) | Key not in delta/tombstone | Prod Direct |
| Read (GET, HGETALL, LRANGE, etc.) | Key in delta map | Merged Read (route to shadow) |
| Read (GET, HGETALL, LRANGE, etc.) | Key tombstoned | Merged Read (return type-appropriate nil) |
| MGET | Mixed keys | Merged Read (per-key routing) |
| SCAN | Any deltas exist | Merged Read (two-phase cursor merge) |
| SET, MSET, SETNX | Always | Shadow Write |
| INCR, HSET, LPUSH, SADD, ZADD, etc. | Always | Hydrate and Write |
| DEL, UNLINK, SREM, ZREM, etc. | Always | Shadow Delete |
| FLUSHDB, FLUSHALL | Always | Truncate (marks DB fully shadowed) |
| MULTI/EXEC/DISCARD | Always | Transaction |
| SUBSCRIBE/PSUBSCRIBE | Always | Forward Both (fan-in from both) |
| PUBLISH | Always | Not Supported |

Decision Table (Firestore)

Firestore classification is gRPC method-based.
| gRPC Method | Condition | Strategy |
| --- | --- | --- |
| GetDocument | No deltas for collection | Prod Direct |
| GetDocument | Document in delta map | Merged Read (route to shadow) |
| GetDocument | Document tombstoned | Merged Read (return NOT_FOUND) |
| BatchGetDocuments | Mixed documents | Merged Read (per-document routing) |
| ListDocuments, RunQuery | Any collection deltas | Merged Read (over-fetch + merge) |
| RunAggregationQuery | Any collection deltas | Merged Read (decompose + re-aggregate) |
| CreateDocument | Always | Shadow Write |
| UpdateDocument | Always | Hydrate and Write |
| DeleteDocument | Always | Shadow Delete |
| Commit, BatchWrite | Always | Hydrate and Write (per-write) |
| BeginTransaction | Always | Transaction |
| Rollback | Always | Transaction |
| Listen | Always | Listen Only |

Read Operations

Prod Direct

When a SELECT touches only tables with no local mutations, the query passes straight through to prod. Zero overhead beyond the proxy hop.

Merged Read (Single Table)

When a SELECT touches a table with delta or tombstone entries:
  1. Shadow Query: Execute the original query against shadow. Returns the local subset of results.
  2. Prod Query (over-fetched): Execute against prod with an adjusted LIMIT: original_limit + |deltas| + |tombstones|. If schema divergence exists, the query is rewritten first (remove predicates on shadow-only columns, revert renamed columns).
  3. Filter Prod Results: Discard rows where (table, pk) exists in the Delta Map (shadow has the authoritative version) or the Tombstone Set (locally deleted).
  4. Adapt Prod Results: Per the Schema Registry, inject NULLs for added columns, strip dropped columns, apply renames and type casts.
  5. Merge and Deduplicate: Combine shadow results with filtered/adapted prod results. Deduplicate by PK — shadow wins on conflict.
  6. Re-apply Ordering and Limits: Sort by the original ORDER BY, apply LIMIT/OFFSET to produce the final result set.

PK injection: The proxy injects PK columns into the SELECT if they aren’t already present (needed for deduplication). They’re stripped from the final output.

LIMIT over-fetching: Prod queries add delta_count + tombstone_count to the LIMIT to compensate for rows that will be filtered out.
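Steps 3, 5, and 6 can be sketched in a few lines (an illustration with rows as dicts, not Mori's implementation; schema adaptation is elided):

```python
def merged_read(shadow_rows, prod_rows, deltas, tombstones, pk="id",
                order_by=None, limit=None):
    """Sketch of the merge pipeline: drop prod rows whose PK is shadowed
    or tombstoned, merge with shadow (shadow wins by PK), re-sort, re-limit."""
    kept = [r for r in prod_rows
            if r[pk] not in deltas and r[pk] not in tombstones]
    by_pk = {r[pk]: r for r in kept}
    by_pk.update({r[pk]: r for r in shadow_rows})   # shadow wins on conflict
    rows = list(by_pk.values())
    if order_by:
        rows.sort(key=lambda r: r[order_by])        # re-apply ORDER BY
    return rows[:limit] if limit is not None else rows

prod = [{"id": 1, "v": "p1"}, {"id": 2, "v": "p2"}, {"id": 3, "v": "p3"}]
shadow = [{"id": 2, "v": "s2"}, {"id": 4, "v": "s4"}]
# id=2 is locally modified (delta), id=3 locally deleted (tombstone),
# id=4 locally inserted.
result = merged_read(shadow, prod, deltas={2}, tombstones={3}, order_by="id")
```

Here the client sees prod's row 1, shadow's version of row 2, no row 3, and the locally inserted row 4.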
Redis merged reads use a simpler per-key routing model instead of the SQL merge pipeline. Each key is checked against the delta map and tombstone set individually. MGET batches per-key routing. SCAN uses a two-phase cursor: Phase 0 scans prod (filtering delta/tombstoned keys), Phase 1 scans shadow for new/modified keys.
Firestore merged reads use document-path-based deduplication. GetDocument routes per-document (tombstone → NOT_FOUND, delta → shadow, else → prod). ListDocuments and RunQuery over-fetch from prod, query shadow, merge by document path, re-evaluate WHERE filters, and re-sort.

Join Patch

When a SELECT joins tables and at least one has local mutations:
  1. Execute on Prod: Run the JOIN as-is on prod. Produces correct results for all non-delta rows.
  2. Identify Affected Rows: For each row in the result, check whether any primary key from a delta table is in the Delta Map or Tombstone Set. Partition into clean, delta, and dead rows.
  3. Discard Tombstoned Rows: Remove rows referencing locally-deleted PKs.
  4. Patch Delta Rows: Fetch shadow versions of delta rows and replace the delta columns in the joined result.
  5. Execute on Shadow: Run the JOIN on shadow to catch locally-inserted rows that should appear in the result.
  6. Merge and Deduplicate: Combine patched prod results with shadow results. Deduplicate by composite key (all joined table PKs), then re-apply ORDER BY and LIMIT.
Schema diff handling for JOINs: When joined tables have schema changes, dirty tables are materialized into temp tables and the JOIN is rewritten to reference them.
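Steps 2-4 of the patch phase reduce to a per-row partition and overlay, sketched below for a single delta table (illustrative only; `shadow_versions` as a PK-to-row map is an assumed shape):

```python
def join_patch(prod_rows, shadow_versions, tombstoned, pk_col):
    """Sketch of the JOIN patch phase: drop joined rows whose delta-table PK
    is tombstoned, overlay shadow column values on delta rows."""
    patched = []
    for row in prod_rows:
        key = row[pk_col]
        if key in tombstoned:
            continue                               # locally deleted -> drop
        if key in shadow_versions:
            row = {**row, **shadow_versions[key]}  # patch delta columns
        patched.append(row)
    return patched

# JOIN result from prod: users x order totals
prod_join = [{"user_id": 1, "name": "alice", "total": 10},
             {"user_id": 2, "name": "bob",   "total": 20}]
result = join_patch(prod_join,
                    shadow_versions={1: {"name": "alice2"}},  # user 1 renamed locally
                    tombstoned={2},                           # user 2 deleted locally
                    pk_col="user_id")
```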

Aggregates

Aggregate queries (COUNT, SUM, AVG, MIN, MAX with GROUP BY) can’t be merged by simply combining prod and shadow results — aggregation must be re-applied on the combined data. Approach:
  1. Build a row-level base query (strip aggregates, keep GROUP BY columns and source columns)
  2. Run the base query through the merged read pipeline to get all rows
  3. Materialize into a temp table on shadow
  4. Re-aggregate in the proxy (in-memory grouping + function application)
  5. Apply HAVING filters on re-aggregated results
Complex aggregates (array_agg, json_agg, string_agg, etc.) materialize into a temp table and re-execute the original query against it on shadow, leveraging the database’s native aggregate implementation.
Firestore aggregations (COUNT, SUM, AVG only) decompose to a base RunQuery through the merged pipeline, then re-compute aggregations in memory. No temp tables — Firestore has no SQL execution engine.
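The re-aggregation step (4-5 above) amounts to an in-memory GROUP BY over the merged row-level result. A minimal sketch for COUNT/SUM/AVG (illustrative structure, not the proxy's actual code):

```python
from collections import defaultdict

def reaggregate(rows, group_col, value_col, having=None):
    """Sketch of in-memory re-aggregation: group the merged rows, recompute
    COUNT/SUM/AVG per group, then apply HAVING on the aggregated values."""
    groups = defaultdict(list)
    for r in rows:
        groups[r[group_col]].append(r[value_col])
    out = {k: {"count": len(v), "sum": sum(v), "avg": sum(v) / len(v)}
           for k, v in groups.items()}
    if having:
        out = {k: agg for k, agg in out.items() if having(agg)}
    return out

# Merged row-level result (shadow + filtered prod) for:
#   SELECT g, COUNT(*), SUM(x) FROM t GROUP BY g HAVING COUNT(*) > 1
rows = [{"g": "a", "x": 1}, {"g": "a", "x": 3}, {"g": "b", "x": 5}]
result = reaggregate(rows, "g", "x", having=lambda agg: agg["count"] > 1)
```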

Complex Reads

CTEs (WITH clause):
  • Non-recursive CTEs: Materialize dirty CTEs independently into temp tables
  • Recursive CTEs: Materialize dirty base tables, rewrite query to reference temp tables
Derived Tables (subqueries in FROM):
  • Recursively rewrite subqueries, replacing dirty subquery results with temp tables
LATERAL Joins:
  • Materialize referenced dirty base tables (not the LATERAL body itself)
  • Preserve LATERAL semantics through table materialization

Set Operations

UNION, INTERSECT, EXCEPT are decomposed into leaf SELECT nodes. Each leaf runs through the merged read pipeline independently, then in-memory set operations are applied:
  • UNION: Combine + deduplicate
  • UNION ALL: Combine without dedup
  • INTERSECT: Keep rows in both sets
  • EXCEPT: Keep rows only in first set
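The in-memory set operations follow SQL semantics: UNION, INTERSECT, and EXCEPT deduplicate, UNION ALL does not. A sketch with rows as hashable tuples (illustrative, not Mori's implementation):

```python
def apply_set_op(op, left, right):
    """Sketch of the in-memory set operations applied after each leaf SELECT
    has been run through the merged read pipeline."""
    if op == "UNION ALL":
        return left + right                       # no dedup
    if op == "UNION":
        return list(dict.fromkeys(left + right))  # dedup, preserve order
    right_set = set(right)
    if op == "INTERSECT":
        return [r for r in dict.fromkeys(left) if r in right_set]
    if op == "EXCEPT":
        return [r for r in dict.fromkeys(left) if r not in right_set]
    raise ValueError(f"unknown set op: {op}")
```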

Window Functions

Window functions require the complete dataset to compute correctly — they can’t be decomposed.
  1. Build a base SELECT (same FROM/WHERE, no window functions)
  2. Run base through merged read pipeline
  3. Materialize result into temp table on shadow
  4. Rewrite original query to read from temp table
  5. Execute on shadow (leverages the database’s native window implementation)

Cursors

  • DECLARE CURSOR: Classifies the inner query; materializes it when it touches affected tables
  • FETCH: Reads from cursor on the appropriate backend
  • CLOSE: Closes cursor on both backends

Write Operations

All writes execute on shadow. Production is never modified.

INSERT

Simple INSERT (Shadow Write):
  1. Enforce FK constraints (validate parent rows exist)
  2. Execute INSERT on shadow
  3. Capture RETURNING clause to extract inserted PKs for precise delta tracking
  4. Add all inserted PKs to the Delta Map
  5. Persist state
Shadow’s offset sequences assign PKs in the local range (e.g., 10,000,001+), guaranteed not to collide with prod.

Upsert — INSERT … ON CONFLICT (Hydrate and Write):
  1. Detect conflict key columns
  2. Check prod for rows matching conflict keys
  3. Hydrate matching prod rows into shadow
  4. Execute upsert on shadow
  5. Track all affected rows (hydrated + newly inserted) in Delta Map
MySQL-specific: MySQL lacks RETURNING, so INSERT PK extraction uses a 3-tier approach: (1) extract from the VALUES clause, (2) use LAST_INSERT_ID(), (3) fall back to an information_schema PK lookup.

MySQL-specific: INSERT ... ON DUPLICATE KEY UPDATE and REPLACE INTO route through the hydrate-and-write path.
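The simple INSERT path above — offset-sequenced PKs plus delta tracking — can be pictured as follows (the offset value and structure are illustrative assumptions):

```python
class ShadowInserter:
    """Sketch of the simple INSERT path: PKs come from an offset sequence
    (so they never collide with prod) and are recorded in the delta map.
    Actual shadow execution is elided."""
    def __init__(self, offset=10_000_000):   # example offset from the text
        self.next_pk = offset
        self.delta_map = {}                  # table -> set of local PKs

    def insert(self, table, rows):
        pks = []
        for _ in rows:
            self.next_pk += 1                # stands in for the shadow sequence
            pks.append(self.next_pk)
        # Step 4: add all inserted PKs to the delta map
        self.delta_map.setdefault(table, set()).update(pks)
        return pks
```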

UPDATE

Point Update (PK extractable from WHERE, e.g., WHERE id = 42):
  1. Enforce FK constraints on SET values
  2. Check if row exists in shadow — if not, hydrate from prod
  3. Execute UPDATE on shadow
  4. Add affected PK to Delta Map
Bulk Update (PK not extractable, e.g., WHERE status = 'pending'):
  1. Query prod for all matching rows (with schema rewriting for prod compatibility)
  2. Hydrate all matching rows into shadow
  3. Rewrite UPDATE WHERE clause with concrete PK values
  4. Execute on shadow
  5. Track all affected PKs in Delta Map
Cross-table Updates (UPDATE … FROM):
  • Extract subquery predicates from FROM clause
  • Hydrate referenced tables before applying the update
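The key move in the bulk path is step 3 — once prod's matching rows are hydrated, the original predicate is replaced by the concrete PK list so the shadow UPDATE hits exactly the tracked rows. A naive sketch (string building for illustration only; real SQL should be built via the parser, not concatenation):

```python
def rewrite_bulk_update(table, set_clause, hydrated_pks, pk_col="id"):
    """Sketch of the bulk-update rewrite: swap the non-PK WHERE clause for
    the concrete PK values hydrated from prod."""
    pk_list = ", ".join(str(pk) for pk in sorted(hydrated_pks))
    return f"UPDATE {table} SET {set_clause} WHERE {pk_col} IN ({pk_list})"

# UPDATE jobs SET status = 'done' WHERE status = 'pending'
# -> prod matched PKs {3, 7}, which were hydrated into shadow
rewritten = rewrite_bulk_update("jobs", "status = 'done'", {7, 3})
```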
Redis writes: SET/MSET route to shadow directly (Shadow Write). Commands that modify existing keys (INCR, HSET, LPUSH, SADD, ZADD, etc.) use DUMP/RESTORE to hydrate the key from prod to shadow before executing (Hydrate and Write).
Firestore writes: CreateDocument routes to shadow directly. UpdateDocument hydrates the document from prod first. Commit and BatchWrite hydrate per-write. Document transforms (serverTimestamp, increment, arrayUnion, arrayRemove) are tracked as deltas.

DELETE

Point Delete (PK extractable):
  1. Enforce FK RESTRICT/NO ACTION constraints (reject if child rows exist)
  2. Execute DELETE on shadow
  3. Add tombstone for each affected PK
  4. Enforce FK CASCADE (recursively tombstone/delete child rows)
  5. Return correct row count (including prod-only rows)
Bulk Delete (PK not extractable):
  1. Pre-query prod for matching PKs
  2. Compute total count: shadow actual deletes + prod-only matches
  3. Tombstone all matching PKs
  4. Execute DELETE on shadow
DELETE … RETURNING: Hydrates from prod before deletion to synthesize return data.
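The bulk-delete accounting can be sketched as follows (illustrative; the count combines shadow deletes with prod-only matches, as in step 2 above):

```python
def bulk_delete(shadow_pks, prod_pks, tombstones):
    """Sketch of bulk DELETE: tombstone every matching PK and report
    shadow deletes plus prod-only matches as the affected-row count."""
    shadow_pks, prod_pks = set(shadow_pks), set(prod_pks)
    tombstones.update(shadow_pks | prod_pks)       # step 3: tombstone all
    return len(shadow_pks) + len(prod_pks - shadow_pks)

tombstones = set()
# shadow matched rows {1, 2}; prod matched rows {2, 3, 4}
count = bulk_delete({1, 2}, {2, 3, 4}, tombstones)
```

Row 2 exists in both backends but is only counted once, so the client sees the same affected-row count it would get from prod alone.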

TRUNCATE

  1. Execute on shadow
  2. Mark table as fully shadowed in Schema Registry — all subsequent reads skip prod entirely
  3. Clear all tombstones for the table
  4. CASCADE: recursively mark FK-related tables as fully shadowed
Redis: FLUSHDB/FLUSHALL mark the entire database as fully shadowed. All subsequent reads go to shadow only.

DDL Handling

DDL operations execute on shadow only. Production schema is never modified.
  1. Strip FK constraints from DDL before shadow execution (the proxy enforces FKs instead)
  2. Extract and store FK metadata in Schema Registry
  3. Execute modified DDL on shadow
  4. Update Schema Registry with the structural change
| DDL | Registry Update | Impact on Reads |
| --- | --- | --- |
| ADD COLUMN | Records column name, type, default | Prod rows get NULL/default injected |
| DROP COLUMN | Records dropped column name | Column stripped from prod results |
| RENAME COLUMN | Records old → new mapping | Prod queries use old name, results renamed |
| ALTER TYPE | Records old_type → new_type | Values cast in prod results |
| CREATE TABLE | Marks table as shadow-only | Table is shadow-only, no prod reads |
| DROP TABLE | Marks table as dropped | Queries return “table not found” |
| CREATE INDEX | Shadow only, silent | No impact on reads |
| RENAME TABLE | Records old → new mapping | Reads redirect to new name |
Redis and Firestore have no DDL. Redis is schemaless (key-value); Firestore is schemaless (document). The Schema Registry is unused for these engines except for the IsFullyShadowed flag (set by FLUSHDB in Redis).

Transaction Management

Lifecycle

| Operation | Behavior |
| --- | --- |
| BEGIN | Open a read-only transaction on prod (REPEATABLE READ) plus a read-write transaction on shadow. Snapshot delta/tombstone/schema state. |
| COMMIT | Commit shadow, close prod, promote staged deltas/tombstones to persistent state. |
| ROLLBACK | Roll back shadow, close prod, discard staged deltas/tombstones. Restore the schema registry from the snapshot. |
Prod uses REPEATABLE READ isolation to ensure consistent reads across the transaction — the merged read always sees a stable prod snapshot.

Engine-Specific Transaction Behavior

| Engine | Transaction Model | Notes |
| --- | --- | --- |
| PostgreSQL, CockroachDB | Full BEGIN/COMMIT/ROLLBACK + savepoints | Prod uses REPEATABLE READ |
| MySQL, MariaDB | Full BEGIN/COMMIT/ROLLBACK + savepoints | Prod uses REPEATABLE READ |
| MSSQL | Full BEGIN/COMMIT/ROLLBACK | Savepoints supported |
| SQLite, DuckDB | Full BEGIN/COMMIT/ROLLBACK | Single-writer; no concurrent txns |
| Redis | MULTI/EXEC/DISCARD | No savepoints; WATCH-based optimistic locking |
| Firestore | BeginTransaction/Commit/Rollback | Prod-side snapshot isolation via ReadTime |

Staged Deltas

Within a transaction, all delta/tombstone additions are staged — held in a per-transaction buffer. This prevents phantom deltas from failed transactions: if a transaction rolls back, the Delta Map and Tombstone Set remain unchanged. Autocommit mode (implicit single-statement transactions) applies deltas immediately — no staging needed.

Savepoints

Savepoints are fully supported (SQL engines) with stack-based snapshot management:
  • SAVEPOINT name: Capture current delta/tombstone/schema/insert-count state, push onto stack
  • RELEASE SAVEPOINT name: Pop snapshot from stack (changes persist)
  • ROLLBACK TO SAVEPOINT name: Restore state from snapshot, keep savepoint on stack for re-use
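The stack-based snapshot management above can be sketched over a single staged-state dict (shape illustrative; the real state spans deltas, tombstones, schema, and insert counts):

```python
import copy

class SavepointStack:
    """Sketch of stack-based savepoint snapshots over staged transaction state."""
    def __init__(self, state):
        self.state = state
        self.stack = []                        # (name, snapshot) entries

    def savepoint(self, name):
        self.stack.append((name, copy.deepcopy(self.state)))

    def release(self, name):
        # Pop up to and including `name`; staged changes persist.
        while self.stack and self.stack.pop()[0] != name:
            pass

    def rollback_to(self, name):
        for i in range(len(self.stack) - 1, -1, -1):
            if self.stack[i][0] == name:
                self.state.clear()
                self.state.update(copy.deepcopy(self.stack[i][1]))
                del self.stack[i + 1:]         # keep `name` for re-use
                return
        raise KeyError(f"no savepoint {name}")

state = {"deltas": {1}}
sp = SavepointStack(state)
sp.savepoint("a")
state["deltas"].add(2)     # staged change after the savepoint
sp.rollback_to("a")        # restores {"deltas": {1}}, keeps "a" on the stack
```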

Foreign Key Enforcement

Why FKs are Stripped

Shadow starts empty. INSERTs referencing prod rows via foreign key would fail because the referenced row doesn’t exist in shadow. The proxy enforces referential integrity instead.

Parent Row Validation (INSERT/UPDATE)

Validation order (fast-path first):
  1. Delta map check: If parent PK in delta → exists in shadow
  2. Tombstone check: If parent PK tombstoned → REJECT
  3. Shadow query: Check shadow for parent row
  4. Prod query: Fall back to prod if not found in shadow
Supports multi-column (composite) foreign keys.
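The four-step validation order reads naturally as a short-circuiting function (a sketch; the lookup callables stand in for actual shadow/prod queries):

```python
def parent_row_exists(parent_pk, delta_pks, tombstoned_pks,
                      shadow_lookup, prod_lookup):
    """Sketch of FK parent validation in the order above, fast paths first."""
    if parent_pk in delta_pks:
        return True                    # shadowed row definitely exists
    if parent_pk in tombstoned_pks:
        return False                   # locally deleted -> reject the write
    if shadow_lookup(parent_pk):
        return True                    # found in shadow
    return prod_lookup(parent_pk)      # fall back to prod
```

The tombstone check must precede the backend queries: a parent row that still exists in prod but was deleted locally would otherwise pass validation incorrectly.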

Referential Action Enforcement (DELETE)

  • RESTRICT / NO ACTION: Reject parent delete if child rows exist (checks shadow + prod)
  • CASCADE: Tombstone child rows when parent deleted, recursively cascade
  • SET NULL / SET DEFAULT: Falls through to shadow enforcement
Redis and Firestore have no foreign key concept. FK enforcement is SQL-engine-specific.

Known Limitations

| Limitation | Engines | Impact | Mitigation |
| --- | --- | --- | --- |
| Multi-column ORDER BY | All SQL | Only single-column sort re-applied after merge | Multi-column ORDER BY returns in backend order |
| Mutating CTEs only see shadow data | All SQL | CTE reads from clean tables return empty | v1.1: split into write + read |
| Stored procedures with side effects | All SQL | SELECT my_function() routes as READ to prod | v2: function introspection or config allowlist |
| Tables without PKs | All SQL | Read-only; writes produce a warning | Rare in practice |
| Bulk ops on large tables | All SQL | Hydrating thousands of rows is slow | Acceptable for a dev tool |
| JOIN PK patching | All SQL | Requires PK in SELECT list | Patching skipped when PK not selected |
| Parameterized LIMIT | All SQL | LIMIT $1 not rewritten for over-fetching | Uncommon in practice |
| sp_executesql parameter resolution | MSSQL | Parameterized SELECT not resolved for merged reads | Use inline SQL where possible |
| Blocking commands (BLPOP, etc.) | Redis | Won’t see pushes from non-Mori clients to prod | Inherent to copy-on-write model |
| Real-time listeners (Listen) | Firestore | Only see prod data, not shadow mutations | By design — listeners reflect prod state |
| PK column renames | All SQL | Delta tracking depends on stable PK column names | Avoid renaming PK columns |