## Overview
| Property | Value |
|---|---|
| Engine ID | cockroachdb |
| Wire Protocol | pgwire v3 (shared with PostgreSQL) |
| Classifier | AST-based (pg_query_go — shared with PostgreSQL) |
| Shadow | Docker PostgreSQL container (schema-compatible) |
| Default Port | 26257 |
The engine implements `ID()`, `ParseConnStr()` (default port 26257), `Init()` (CRDB-specific schema export), and `NewProxy()` (SERIALIZABLE isolation + `rowid` dedup).
**Connection Parameters:** Same as PostgreSQL (default port: 26257).
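The default-port rule can be sketched as follows. This is a minimal illustration in Python, not the engine's actual `ParseConnStr()` implementation; `parse_conn_str` and its return shape are hypothetical.

```python
from urllib.parse import urlsplit

DEFAULT_PORT = 26257  # CockroachDB's default, vs. 5432 for PostgreSQL

def parse_conn_str(conn_str: str) -> dict:
    """Hypothetical sketch: split a pgwire-style URL and apply the default port."""
    u = urlsplit(conn_str)
    return {
        "host": u.hostname or "localhost",
        "port": u.port or DEFAULT_PORT,  # fall back to 26257 when no port is given
        "user": u.username,
        "database": u.path.lstrip("/") or None,
    }

print(parse_conn_str("postgres://root@db.internal/defaultdb"))
```

Everything else about the connection string is interpreted exactly as it would be for PostgreSQL; only the default port differs.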
## Differences from PostgreSQL
- Shadow uses PostgreSQL, not CockroachDB — The shadow database runs in a PostgreSQL container rather than CockroachDB. CockroachDB-specific DDL syntax (`FAMILY`, `INTERLEAVE`, `CONFIGURE ZONE`, etc.) is cleaned and converted to PostgreSQL-compatible DDL during init.
- Schema export via `SHOW CREATE ALL TABLES` — Uses CockroachDB's native schema export instead of `pg_dump`, which is incompatible with CockroachDB.
- SERIALIZABLE isolation — CockroachDB only supports SERIALIZABLE. The proxy sends `BEGIN ISOLATION LEVEL SERIALIZABLE` explicitly instead of REPEATABLE READ.
- `rowid` for PK-less tables — CockroachDB does not have PostgreSQL's `ctid` system column, so CockroachDB's implicit `rowid` column is used for deduplication instead.
- `unique_rowid()` for SERIAL columns — CockroachDB's SERIAL uses `unique_rowid()` (timestamp + node ID) instead of PostgreSQL sequences. The init pipeline recognizes this and classifies PK types accordingly.
- No extensions — CockroachDB has no extension system, so extension detection is skipped during init.
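The DDL cleaning step described above can be illustrated with a small sketch. The regexes, the `BIGSERIAL` mapping, and the `STRING`-to-`TEXT` substitution here are hypothetical simplifications for illustration, not the actual init pipeline:

```python
import re

def clean_crdb_ddl(ddl: str) -> str:
    """Hypothetical sketch: strip CockroachDB-only clauses so the DDL can be
    replayed against a PostgreSQL shadow container."""
    # Drop column-family clauses, e.g. FAMILY "primary" (a, b)
    ddl = re.sub(r',?\s*FAMILY\s+\S+\s*\([^)]*\)', "", ddl)
    # Drop zone configuration statements entirely
    ddl = re.sub(r'ALTER\s+\w+\s+\S+\s+CONFIGURE\s+ZONE[^;]*;', "", ddl,
                 flags=re.IGNORECASE)
    # Map the implicit rowid default onto a PostgreSQL-native equivalent
    ddl = ddl.replace("INT8 NOT NULL DEFAULT unique_rowid()", "BIGSERIAL")
    # Map CockroachDB's STRING alias onto PostgreSQL's TEXT
    ddl = ddl.replace(" STRING", " TEXT")
    return ddl

crdb = ('CREATE TABLE t (rowid INT8 NOT NULL DEFAULT unique_rowid(), '
        'v STRING, FAMILY "primary" (rowid, v))')
print(clean_crdb_ddl(crdb))
```

A real pipeline would do this on parsed DDL rather than with regexes, but the shape of the transformation is the same: remove what PostgreSQL cannot parse, and rewrite what it parses differently.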
## Known Limitations
- CockroachDB-specific SQL syntax (`SHOW RANGES`, `ALTER TABLE ... CONFIGURE ZONE`, etc.) is not parseable by `pg_query_go` and will fail if sent through the proxy.
- Under contention, CockroachDB may return serialization retry errors (SQLSTATE 40001) that would not occur with PostgreSQL.
- Unsupported operations: `NOTIFY`, `COPY`, `LOCK TABLE`, `DO $$`, `CALL`, `EXPLAIN ANALYZE`.
- Functions are not yet supported, because it is hard to determine statically whether a function is non-mutating.
- Updates, deletes, or upserts inside CTEs may produce incorrect results for subsequent merged reads.
- Performance: Rarely, extremely complex JOINs over large tables may be slow, either because (a) the prod data needed is too large or (b) the query is complex enough that the safety mechanism materializes the result into a local temporary table. If this happens, pass `--max-rows` to `mori start` to cap the number of rows pulled from prod.
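Serialization retry errors (SQLSTATE 40001) are surfaced to the client, so callers are expected to retry, which is the standard pattern for CockroachDB clients. A minimal client-side retry sketch follows; `DBError` and `with_retry` are hypothetical names standing in for whatever your driver raises and however you wrap transactions:

```python
import random
import time

SERIALIZATION_FAILURE = "40001"

class DBError(Exception):
    """Stand-in for a driver error that carries a SQLSTATE code."""
    def __init__(self, sqlstate):
        super().__init__(sqlstate)
        self.sqlstate = sqlstate

def with_retry(txn, attempts=5):
    """Run a transaction closure, retrying serialization failures (SQLSTATE
    40001) with jittered exponential backoff; re-raise anything else."""
    for i in range(attempts):
        try:
            return txn()
        except DBError as e:
            if e.sqlstate != SERIALIZATION_FAILURE or i == attempts - 1:
                raise
            time.sleep((2 ** i) * 0.01 * (1 + random.random()))

# Demo: fail twice with 40001, then succeed on the third attempt.
state = {"tries": 0}
def txn():
    state["tries"] += 1
    if state["tries"] < 3:
        raise DBError(SERIALIZATION_FAILURE)
    return "committed"

print(with_retry(txn))
```

Keeping the transaction body in a closure matters: each retry must re-run the whole transaction from `BEGIN`, not just the failing statement.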

