Replication Engine: Online // Sub-second CDC

LIVE DATA FOR YOUR AI STACK

Sub-second streaming from a single binary. No Kafka, no Debezium, no ops overhead. Cut your infrastructure costs and ship data products, not pipelines.

Single binary · No runtime dependencies · Free forever tier
Pipeline Monitor
3 Active
orders-to-bigquery
RUNNING
postgres → bigquery lag: 0.4ms
events-to-clickhouse
RUNNING
mysql → clickhouse lag: 0.7ms
users-to-iceberg
RUNNING
mongodb → iceberg lag: 0.9ms
Throughput
2.4M
events/s
Avg Lag
0.7ms
end-to-end
Checkpoint
LSN+3
committed
10:42:01 WAL decoder: LSN 0/3A1B2C4D received
10:42:01 orders INSERT → ArrowRecord[124 rows]
10:42:01 AdaptiveBuffer flush triggered (size gate)
10:42:01 BigQuery committed: offset 4,821 → 4,945
10:42:02 Schema drift check: orders — no changes
10:42:02 checkpoint saved: LSN 0/3A1B2C4D
10:42:02 WARN: replication lag spike 42ms → resolved
10:42:03 users UPDATE → ArrowRecord[87 rows]
10:42:03 Iceberg committed: snapshot 8a4f2c
vs. competitors
Fivetran: 15 min
Airbyte: 15 min
Confluent: ms (+ Kafka cluster)
nanosync: <1sec · single binary
// Development

Ship in
Minutes

A single statically linked binary. Drop it anywhere — local, Docker, or Kubernetes. Zero runtime dependencies.

01
Install
Single binary, any platform
Homebrew
brew install nanosync/tap/nanosync
curl
curl -sSL https://get.nanosync.io | sh
Docker
docker pull nanosync/nanosync:latest
02
Create
Pipeline in seconds
Prompt
"replicate orders to bigquery, mask customer_email"
config generated · validated · deployed
source: postgres.orders
sink: bigquery.orders_replica
mask: [customer_email]
Command line
$ nanosync create pipeline \
--source postgres://prod-db/orders \
--sink   bigquery://acme/analytics \
--tables public.*
✓ pipeline created
03
Monitor
Start and watch live progress
Start server
$ nanosync serve --config pipeline.yaml
✓ nanosync v0.1.0 listening on :7600
Web UI: http://localhost:7600/app
Live status
$ nanosync list pipelines
NAME                STATUS     LAG   EVENTS
orders-to-bigquery  streaming  <1ms  18.4k
replication live · snapshot complete · no errors
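Step 03 reads pipeline.yaml. An illustrative sketch of what such a file might contain; the keys are assumptions inferred from the CLI flags above, not the authoritative nanosync schema:

```yaml
# pipeline.yaml (illustrative; key names are assumptions, not the official schema)
name: orders-to-bigquery
source: postgres://prod-db/orders
sink: bigquery://acme/analytics
tables:
  - public.*
transforms:
  - mask: [customer_email]
```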
// Platform

Built for
Sub-second Latency

Every component is designed never to be the bottleneck — from log decoder to sink writer, with the reliability and intelligence to run itself.

// Intelligence

Anomaly Detection

Pre-emptive alerts 5–15 min before failure. Continuous trend analysis on lag, throughput, and error rate — the platform pages you before users notice.

LAG SPIKE · orders_pipeline ALERT · 9 MIN EARLY
THROUGHPUT DROP · 34% INVESTIGATING
ERROR RATE · payments_sink RESOLVED · NO PAGE
WAL BACKLOG · 2.1M events ALERT · 12 MIN EARLY
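nanosync's detector isn't public, but the core of trend-based alerting (flag a metric that drifts several deviations above its smoothed baseline) fits in a few lines of Go. A sketch, with lag in milliseconds:

```go
package main

import (
	"fmt"
	"math"
)

// ewmaDetector keeps an exponentially weighted moving average and
// variance of a metric (e.g. replication lag in ms) and flags samples
// that sit more than k standard deviations above the baseline.
// Illustrative only, not nanosync's actual detector.
type ewmaDetector struct {
	alpha, k   float64
	mean, vari float64
	warmedUp   bool
}

func (d *ewmaDetector) observe(x float64) bool {
	if !d.warmedUp {
		d.mean, d.warmedUp = x, true
		return false
	}
	dev := x - d.mean
	anomalous := d.vari > 0 && dev > d.k*math.Sqrt(d.vari)
	// Update the baseline after the check so a spike can't mask itself.
	d.mean += d.alpha * dev
	d.vari = (1 - d.alpha) * (d.vari + d.alpha*dev*dev)
	return anomalous
}

func main() {
	d := &ewmaDetector{alpha: 0.1, k: 3}
	lags := []float64{1.0, 1.1, 0.9, 1.0, 1.2, 0.8, 1.1, 42.0}
	for i, lag := range lags {
		if d.observe(lag) {
			fmt.Printf("sample %d: lag %.1fms anomalous\n", i, lag)
		}
	}
}
```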

Natural Language Pipelines

"Replicate orders to BigQuery, mask email" → config generated, validated, and deployed. No YAML required.

Prompt
"replicate orders to bigquery, mask customer_email"
config generated · validated · deployed
source: postgres.orders
sink: bigquery.orders_replica
mask: [customer_email]
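One common way a mask transform works under the hood is a deterministic digest: the same input always yields the same token, so masked columns stay joinable downstream. A hedged sketch, not necessarily nanosync's strategy:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// maskColumn replaces each value with a short deterministic digest.
// Identical inputs map to identical tokens, so joins still work, while
// the raw PII never reaches the sink. Illustrative only.
func maskColumn(values []string) []string {
	out := make([]string, len(values))
	for i, v := range values {
		sum := sha256.Sum256([]byte(v))
		out[i] = "masked:" + hex.EncodeToString(sum[:6])
	}
	return out
}

func main() {
	emails := []string{"ada@example.com", "ada@example.com", "bob@example.com"}
	fmt.Println(maskColumn(emails))
}
```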

Self-Tuning Engine

AdaptiveBuffer thresholds and batch sizes are optimized continuously from live traffic patterns. No manual tuning.

batch_size 1000 → 1847
flush_interval 100ms → 67ms
worker_concurrency 4 → 7
throughput delta +38%
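One classic way to land on numbers like these is AIMD: grow the batch while the sink keeps up, back off sharply when it doesn't. A sketch; the real controller is not public:

```go
package main

import "fmt"

// tuner nudges batch size toward the largest value the sink sustains:
// additive increase while commit latency stays under budget,
// multiplicative decrease when it exceeds it (classic AIMD).
// Illustrative only, not nanosync's actual controller.
type tuner struct {
	batchSize int
	budget    float64 // target sink commit latency, ms
}

func (t *tuner) adjust(commitMs float64) int {
	if commitMs < t.budget {
		t.batchSize += 100 // sink keeping up: probe upward
	} else {
		t.batchSize /= 2 // sink falling behind: back off fast
		if t.batchSize < 100 {
			t.batchSize = 100
		}
	}
	return t.batchSize
}

func main() {
	t := &tuner{batchSize: 1000, budget: 10}
	for _, ms := range []float64{4, 5, 4, 30, 6} {
		fmt.Println("batch_size →", t.adjust(ms))
	}
}
```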
// Reliability
Schema_Governance

Schema Drift Handled

Every schema change is classified. Backward-compatible widenings are auto-applied. Breaking changes gate on human approval. Nothing surprises your warehouse.

ADD COLUMN (nullable) · AUTO-APPLIED
INT → BIGINT · AUTO-APPLIED
DROP COLUMN · APPROVAL REQUIRED
TYPE NARROWING · APPROVAL REQUIRED
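The policy in the table reduces to a small decision function. A sketch with invented type names:

```go
package main

import "fmt"

// change describes a detected schema drift event. Field and type names
// here are invented for illustration, not nanosync's internal API.
type change struct {
	kind     string // "add_column", "drop_column", "type_change"
	nullable bool   // for add_column: can existing rows hold NULL?
	widening bool   // for type_change: INT → BIGINT widens; the reverse narrows
}

// classify mirrors the policy above: backward-compatible widenings
// auto-apply, anything lossy gates on human approval.
func classify(c change) string {
	switch c.kind {
	case "add_column":
		if c.nullable {
			return "AUTO-APPLIED"
		}
	case "type_change":
		if c.widening {
			return "AUTO-APPLIED"
		}
	}
	// drop_column, NOT NULL additions, narrowings: a human signs off.
	return "APPROVAL REQUIRED"
}

func main() {
	fmt.Println(classify(change{kind: "add_column", nullable: true}))
	fmt.Println(classify(change{kind: "type_change", widening: true}))
	fmt.Println(classify(change{kind: "drop_column"}))
}
```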
Observability

30+ OTel Instruments

OpenTelemetry metrics via Prometheus bridge. Named instruments across pipeline, sink, source, worker, and scheduler subsystems. Grafana dashboards included.

Uptime SLA
99.9%
Enterprise tier
Checkpoints
0
per minute
Run history
90d
Enterprise tier
Instruments
30+
OTel metrics
Compliance_Log

Enterprise Security

RBAC, SSO, and audit logging for teams operating in regulated environments. PII masking and SOC 2 compliance on the roadmap.

RBAC + SSO (SAML/OIDC)Enterprise
Audit log (actor + diff)Enterprise
PII detection + maskingRoadmap
SOC 2 audit exportRoadmap
// Performance

Zero-copy log decoder

Your database commit lands at the destination in under a second — no intermediate queues, no Kafka lag to babysit. Built on a zero-copy WAL decoder that parses pgoutput directly into columnar records.

276ns filter eval · <1ms end-to-end
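A per-op cost in the hundreds of nanoseconds implies filters compiled ahead of the hot loop: no reflection, no per-row allocation. A sketch of that pattern, not nanosync's actual code:

```go
package main

import "fmt"

// predicate evaluates one row of a columnar batch by index; the closure
// is built once, before the loop, so the hot path is a bare comparison.
type predicate func(amounts []float64, i int) bool

// compileGreaterThan captures the threshold at build time.
func compileGreaterThan(threshold float64) predicate {
	return func(amounts []float64, i int) bool {
		return amounts[i] > threshold
	}
}

func main() {
	amounts := []float64{149.99, 12.50, 980.00}
	keep := compileGreaterThan(100)
	n := 0
	for i := range amounts {
		if keep(amounts, i) {
			n++
		}
	}
	fmt.Println(n, "rows kept")
}
```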

Predictable memory load

Data flows as Apache Arrow columnar records from source to sink — no row-by-row deserialization, no intermediate copies. Memory stays flat whether you're moving 1K or 10M events per minute.

Zero-copy · Flat memory profile

Adaptive Buffer

No manual tuning when load spikes. AdaptiveBuffer adjusts batch sizes and flush timing continuously based on live throughput — so your pipeline stays fast without an engineer watching it.

Auto-adjusting · No ops overhead
Filter eval
276ns
per op · 0 allocs
Schema map
231ns
per op · Apple M4
End-to-end lag
<1ms
postgres → bigquery
Binary size
1 binary
CGO_ENABLED=0
// Connector_Matrix

Any Source
Any Sink

All sources and all sinks are included on every plan. Stop paying per connector.

Databases
PostgreSQL
MySQL
SQL Server
MongoDB
DynamoDB
Cloud SQL
AlloyDB
Spanner
BigQuery
Snowflake
Storage
S3
GCS
Redshift
Iceberg
Parquet
Avro
JSONL / CSV
Messaging
Kafka
Pub/Sub
// Replication_Pipeline

How It Works

Four stages. Nanoseconds between each. No intermediate storage, no external queue.

01_CONNECT
WAL / CDC Subscribe

Connect to source's native change stream. Postgres pgoutput, SQL Server CDC, MySQL binlog, MongoDB Change Streams. No polling — push only.

replication_slot: ACTIVE
WAL sender: STREAMING
LSN position: 0/3A1B2C4D
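The LSN above is Postgres's standard textual WAL position: two 32-bit hex halves separated by a slash. Checkpoints store it as a single 64-bit integer, and parsing it takes only a few lines:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseLSN converts Postgres's textual WAL position ("hi/lo" in hex,
// e.g. "0/3A1B2C4D") into the 64-bit byte offset it encodes, so a
// restart can resume the replication slot exactly where it left off.
func parseLSN(s string) (uint64, error) {
	parts := strings.Split(s, "/")
	if len(parts) != 2 {
		return 0, fmt.Errorf("malformed LSN %q", s)
	}
	hi, err := strconv.ParseUint(parts[0], 16, 32)
	if err != nil {
		return 0, err
	}
	lo, err := strconv.ParseUint(parts[1], 16, 32)
	if err != nil {
		return 0, err
	}
	return hi<<32 | lo, nil
}

func main() {
	lsn, _ := parseLSN("0/3A1B2C4D")
	fmt.Println(lsn) // 974859341
}
```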
02_DECODE
Arrow Encoding

WAL bytes decoded directly into Apache Arrow columnar records. Type-safe, zero-copy. Schema inferred once, cached for the pipeline lifetime.

INSERT orders(id=9021, amount=149.99)
→ ArrowRecord[1 row]
UPDATE users(id=44, email=…)
→ ArrowRecord[1 row]
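Arrow itself is too heavy to demo inline, so this plain struct-of-arrays sketch (types invented for illustration) shows the row-to-column shape change the decode stage performs:

```go
package main

import "fmt"

// orderEvent is one row-oriented change event as decoded from the WAL.
type orderEvent struct {
	id     int64
	amount float64
}

// orderBatch is its columnar form: one slice per column, standing in
// for an ArrowRecord to show the shape change, nothing more.
type orderBatch struct {
	ids     []int64
	amounts []float64
}

// appendRows transposes row events into columns, so the sink can commit
// the whole batch in one call instead of row-at-a-time inserts.
func appendRows(b *orderBatch, rows []orderEvent) {
	for _, r := range rows {
		b.ids = append(b.ids, r.id)
		b.amounts = append(b.amounts, r.amount)
	}
}

func main() {
	var b orderBatch
	appendRows(&b, []orderEvent{{9021, 149.99}, {9022, 12.50}})
	fmt.Println(len(b.ids), b.amounts)
}
```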
03_BUFFER
AdaptiveBuffer

Micro-batches accumulate until one of three gates trips: size (1K events), age (100ms), or the source going quiet. WASM transforms run here — column masking, filtering, enrichment.

Batch fill: 847 / 1000
Age: 73ms / 100ms
04_COMMIT
Write + Checkpoint

Batch written to sink via native API (Storage Write, Mutations, Snowpipe…). On success, LSN/WAL offset committed to checkpoint store. Crash-safe.

sink write: OK 4ms
checkpoint: SAVED
total e2e: 0.7ms
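The ordering (sink first, checkpoint second) is what makes a crash safe. A sketch with stand-in interfaces, not nanosync's internal API:

```go
package main

import (
	"errors"
	"fmt"
)

type sink interface{ write(rows int) error }
type checkpointStore interface{ save(lsn uint64) error }

// commitBatch writes to the sink first and only then persists the LSN.
// A crash between the two steps replays from the old checkpoint on
// restart: duplicates are possible, loss is not (at-least-once).
func commitBatch(s sink, cp checkpointStore, rows int, lsn uint64) error {
	if err := s.write(rows); err != nil {
		return fmt.Errorf("sink write failed, checkpoint untouched: %w", err)
	}
	return cp.save(lsn) // advance only after a durable sink ack
}

// Fakes for demonstration.
type fakeSink struct{ fail bool }

func (f fakeSink) write(int) error {
	if f.fail {
		return errors.New("sink unavailable")
	}
	return nil
}

type fakeStore struct{ saved []uint64 }

func (f *fakeStore) save(lsn uint64) error { f.saved = append(f.saved, lsn); return nil }

func main() {
	store := &fakeStore{}
	_ = commitBatch(fakeSink{}, store, 124, 0x3A1B2C4D)
	err := commitBatch(fakeSink{fail: true}, store, 87, 0x3A1B2D00)
	fmt.Println(len(store.saved), err != nil)
}
```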
// Pricing_Tiers

Start Free.
Scale Autonomously.

Every plan includes all sources and all sinks. Upgrade when you need reliability, RBAC, or AI autonomy — not to unlock connectors.

Tier_01

Developer

$0 / forever
10M events / month

Prove the latency. All sources and sinks included. Get started in minutes with no credit card.

  • All sources + all sinks
  • 10M events / month
  • Single-node deployment
  • Community support
Get Early Access
Most Popular
Tier_02

Team

Seat + Usage
$20k–$150k ARR · unlimited pipelines

Multi-node HA, RBAC + SSO, WASM transforms, schema drift approval, 90-day history, custom alerts. Self-hostable, air-gap ready.

  • Everything in Developer
  • Multi-node HA
  • RBAC · SAML 2.0 · OIDC
  • Support with SLAs
Get Early Access
Tier_03

Enterprise

Custom
$150k–$500k+ ARR annual contract

The autonomous layer. Anomaly detection, natural language pipelines, self-tuning, PII masking, compliance packs, data catalog integration.

  • Everything in Team
  • Anomaly detection + pre-emptive alerts
  • NL pipeline creation (AI agent)
  • PII detection + auto-masking
  • Named support engineer
Get Early Access