Technology Stack

Scalability Approach

Nodes, Relayers, Future L2 Expansion

ETH6900’s scalability is designed around one principle: horizontal expansion without fragmentation.
Instead of a single relay bottleneck, the system operates through a network of distributed nodes and relayers, each capable of independently managing queue scheduling, gas prediction, and transaction broadcasting.

The architecture is modular, stateless where possible, and orchestrated to scale from hundreds to millions of queued transactions per day without compromising performance, latency, or security.

Core Components of the Scalability Model

| Component | Function | Scaling Strategy |
| --- | --- | --- |
| Relay Nodes | Accept, validate, and queue signed transactions. | Horizontally scalable: each relay runs stateless microservices that can be duplicated across data centers. |
| Scheduler Engine | Decides when queued transactions should be broadcast based on baseFee predictions and deadlines. | Multi-threaded task orchestration with sharding by chain ID and priority class. |
| Broadcast Workers | Communicate with blockchain RPCs and private builders to submit transactions. | Autoscaling container clusters managed via an internal load balancer. |
| Gas Oracle | Collects and analyzes live gas data from multiple sources (Etherscan, Blocknative, native nodes). | Global replica sets for redundancy. |
| Health Monitor | Tracks uptime, error rates, and gas prediction accuracy. | Federated monitoring with on-chain status beacons. |

Horizontal Scaling Architecture

ETH6900 uses a relay-mesh topology, meaning multiple relayers handle distinct queues while sharing a unified coordination protocol.

```mermaid
graph TD
  A[User Wallet] -->|Signed Tx| B[Relay Node 1]
  A -->|Signed Tx| C[Relay Node 2]
  B --> D[Scheduler Engine]
  C --> D
  D --> E[Broadcast Workers]
  E --> F[Blockchain Network]
  D --> G[Metrics & Oracles]
```

  • Each relay node can spin up or down automatically depending on transaction load.

  • The scheduler distributes pending queues evenly to prevent regional congestion.

  • Transactions are chain-sharded, meaning L2 transactions are handled by relayers specialized for their network (e.g., Optimism relay cluster).
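The Scheduler Engine's core decision, broadcast now if the predicted baseFee fits the user's ceiling or the deadline has arrived, can be sketched in Python. Names such as `QueuedTx` and `should_broadcast`, and the Gwei figures, are illustrative, not the production interface:

```python
from dataclasses import dataclass

@dataclass
class QueuedTx:
    max_base_fee_gwei: float  # user's fee ceiling for this transaction
    deadline_s: int           # seconds the tx may wait before forced broadcast

def should_broadcast(tx: QueuedTx, predicted_base_fee_gwei: float,
                     waited_s: int) -> bool:
    """Broadcast when the predicted baseFee fits under the ceiling,
    or unconditionally once the deadline is reached."""
    if waited_s >= tx.deadline_s:
        return True  # deadline overrides price: never miss the window
    return predicted_base_fee_gwei <= tx.max_base_fee_gwei
```

A cheap upcoming block (say 25 gwei predicted against a 30 gwei ceiling) triggers an immediate send; an expensive one waits until the deadline forces the transaction out.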

Load Balancing and Redundancy

1. Adaptive Load Balancer (ALB)

  • Dynamically assigns users to the least busy relayer cluster.

  • Detects latency or broadcast errors and reroutes queued items automatically.

2. Regional Replication

  • Global deployment across 3 core regions (NA, EU, APAC) with relay peers in each region.

  • Ensures <200ms relay response time worldwide.

3. Failover Mechanism

  • If one relay node crashes mid-scheduling, backup nodes automatically reconstruct queues from encrypted snapshot data.

  • No single point of failure, with an uptime goal of ~99.9%.
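The least-busy routing and failover behavior described above reduces to a small selection function. The relayer record fields here (`queue_depth`, `healthy`) are assumed for illustration, not the actual telemetry schema:

```python
def pick_relayer(relayers: list) -> str:
    """Route a new transaction to the healthy relayer with the
    shortest queue; unhealthy relayers are skipped (failover)."""
    healthy = [r for r in relayers if r["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy relayers available")
    return min(healthy, key=lambda r: r["queue_depth"])["id"]
```

In practice the ALB would also weigh latency and broadcast error rates, but queue depth alone captures the core load-balancing idea.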

Data Partitioning (Queue Sharding)

ETH6900 uses queue sharding to manage scale across multiple chains and workloads.
Each shard handles a subset of pending transactions determined by:

  • Chain ID (e.g., 1 = Ethereum Mainnet, 10 = Optimism)

  • Deadline Class (short, medium, long)

  • Tx Type (swap, transfer, governance, contract call)

This separation prevents congestion in one category (e.g., NFT mints) from affecting others.

```json
{
  "shard": "mainnet-short",
  "assignedTo": "relay-eu-3",
  "queueLength": 342,
  "throughput": "200 tx/s"
}
```
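A shard label like `"mainnet-short"` can be derived from the partitioning keys listed above. This simplified Python sketch covers only the chain and deadline dimensions, and the deadline cut-offs are assumed, not the internal thresholds:

```python
# Illustrative subset of chain IDs; an unknown ID falls back to its number.
CHAIN_NAMES = {1: "mainnet", 10: "optimism"}

def deadline_class(deadline_s: int) -> str:
    """Bucket a deadline into short/medium/long (assumed cut-offs)."""
    if deadline_s <= 60:
        return "short"
    if deadline_s <= 600:
        return "medium"
    return "long"

def shard_key(chain_id: int, deadline_s: int) -> str:
    """Compose the shard label from chain name and deadline class."""
    chain = CHAIN_NAMES.get(chain_id, str(chain_id))
    return f"{chain}-{deadline_class(deadline_s)}"
```

The tx-type dimension would add a third component to the key in the same way.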

Multi-Chain Relay Federation

To prepare for a future of hundreds of L2s and appchains, ETH6900’s relayers operate as a federated network — each chain’s relayers communicate through lightweight consensus signals.

  • Independent Operation: Each chain relay can make decisions autonomously.

  • Shared Telemetry: Global gas patterns are synced for prediction accuracy.

  • Cross-Failover: If one network’s relay cluster goes offline, another region’s relayer can take over temporarily.

This architecture turns ETH6900 into a self-healing compute mesh capable of scaling globally.
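The cross-failover rule above can be expressed as a simple reassignment function; the cluster names and the online-status map are hypothetical:

```python
def failover_assignment(clusters, primary):
    """clusters: mapping of cluster name -> online status (bool).
    Return the primary cluster if it is online, otherwise the first
    online backup; None means the chain is temporarily unserved."""
    if clusters.get(primary):
        return primary
    for name, online in clusters.items():
        if online and name != primary:
            return name
    return None
```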

Resource Elasticity (Autoscaling)

Relayer clusters are deployed as containerized microservices (Docker + Kubernetes).
They automatically scale based on metrics such as queue depth, transaction throughput, and block interval variance.

| Scaling Trigger | Action | Effect |
| --- | --- | --- |
| Queue depth > 5,000 | Add 1 relay instance | Horizontal scale-out |
| Relay CPU > 70% for 5 mins | Add scheduler worker | Task distribution |
| Block interval < 12s sustained | Add broadcast workers | Throughput increase |
| Idle < 5% usage | Remove idle relayers | Cost optimization |
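The trigger table above maps directly onto a rule-based autoscaler. The thresholds follow the table; the action names are illustrative, not Kubernetes objects:

```python
def scaling_actions(queue_depth: int, cpu_pct_5min: float,
                    block_interval_s: float, usage_pct: float) -> list:
    """Evaluate the scaling triggers and return the actions to take."""
    actions = []
    if queue_depth > 5000:
        actions.append("add_relay_instance")     # horizontal scale-out
    if cpu_pct_5min > 70:
        actions.append("add_scheduler_worker")   # task distribution
    if block_interval_s < 12:
        actions.append("add_broadcast_workers")  # throughput increase
    if usage_pct < 5:
        actions.append("remove_idle_relayers")   # cost optimization
    return actions
```

In a Kubernetes deployment these rules would typically be realized as HorizontalPodAutoscaler policies driven by custom metrics rather than hand-rolled code.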

Future L2 Expansion Plan

| Phase | Target Chains | Objective | Status |
| --- | --- | --- | --- |
| Phase 1 | Optimism, Base | Establish stable relay clusters | ✅ Completed |
| Phase 2 | Arbitrum, Polygon zkEVM | Deploy specialized relayers for rollup-type L2s | 🧪 Ongoing |
| Phase 3 | Scroll, Linea, zkSync Era | Add zero-knowledge compatible broadcast adapters | ⚙️ In progress |
| Phase 4 | App-specific Rollups | Introduce light-client relayers with low memory footprint | 🚧 Planned |
| Phase 5 | Modular Expansion | Enable third-party relayer operators to join the mesh (incentivized staking) | 🌐 Future |

The end goal is to evolve ETH6900 from a single-entity network into a decentralized relayer mesh, where multiple operators compete to provide optimal broadcast timing, much as Ethereum's block builders compete for inclusion efficiency.


Scalability Metrics

Current performance (MVP relayer cluster, Oct 2025):

| Metric | Current | Target |
| --- | --- | --- |
| Transactions handled/day | ~250,000 | 1,000,000+ |
| Average queue latency | < 0.35s | < 0.25s |
| Average fee savings | 22–35% | 30–40% |
| Relay uptime | 99.91% | 99.99% |
| Max parallel tx throughput | 5,000 tx/s | 10,000 tx/s |


ETH6900’s scalability approach is infrastructure-grade: built to scale horizontally, adapt dynamically, and expand globally.
Each relayer acts as a node in a living mesh, constantly learning from gas fluctuations and distributing workloads intelligently across networks.