Powering On-Chain Trading, DeFi Risk, and Stablecoin Payments with Sub-100ms Latency


10 min read

In crypto, latency is not an abstract infra number—it defines trading outcomes, protocol stability, and user trust. On Solana, a new block is produced roughly every 400 ms. On BNB Chain, the roadmap aims to cut finality to under 150 ms by 2026. In this environment, if your systems process data seconds late, you are already behind. Market makers can’t trade on stale orderbooks, liquidation engines can’t afford delayed margin calls, and Layer 2 explorers can’t show outdated chain state. For traders, DeFi protocols, and Layer 2 developers, “near real-time” is not good enough. The expectation is that data is accurate, complete, and updated at block time.

RisingWave addresses this with a unified event streaming data platform written in Rust for high performance and reliability. It ingests massive, bursty flows of on-chain and off-chain events, keeps the results continuously updated, and makes them queryable as soon as new blocks arrive. Engineers stream the data in, define transformations in SQL, and RisingWave ensures the answers are always fresh—without wiring together and operating three separate systems.

What “Real Time” Means in Crypto Workloads

When teams in crypto say “we need real time,” they are not all asking for the same thing.

Exchanges. For an exchange, real time means detecting abnormal trade bursts or suspicious network patterns before they spread. Tens of billions of events per day must be inspected, and alerts need to fire in under 100 ms. Anything slower becomes forensic analysis after the damage is done.

DeFi protocols and NFT marketplaces. For DeFi, it means recomputing wallet values, pool health, or collection floor prices as soon as an offer, swap, or transfer hits the chain. Targets are sub-second end-to-end, with consistent behavior even during spikes.

Market makers and HFT. For market makers, real time is about keeping the orderbook truthful while streaming 100k–500k events every second across venues and buses. The loop must close faster than a block interval: Solana produces blocks every ~400 ms, and BNB Chain is pushing toward <150 ms finality by 2026. If analytics arrive a second late, they are already irrelevant.

Intelligence and compliance. For monitoring and compliance platforms, real time means correlating wallet flows, sanction data, governance events, and social spikes fast enough to catch anomalies as they form. The job is early detection, not historical reporting.

Layer 2 networks. For Layer 2s, real time means explorers and APIs that stay at the tip of the chain. Developers expect metrics like TPS, gas usage, and execution traces to update within ~100 ms of safe-block arrival, with late or replaced blocks handled correctly.

In other words, “real time” in crypto is not about making dashboards tick faster. It is about closing operational loops: executing trades, triggering liquidations, enforcing risk, reconciling payments, handling incidents, and giving developers instant feedback as soon as blocks are produced.


Exchanges: Real-Time Anti-Fraud and Abuse Prevention

Exchanges deal with billions of telemetry events every day: network packets, order flow, matching logs, authentication attempts, API calls. During token listings or market volatility, these volumes spike sharply. Most teams try to handle this with a stack like Kafka + Flink + a data warehouse, but this approach has serious problems. Flink is complicated to develop and operate: pipelines often take weeks to write and debug. The overall stack is heavy, with data moving between multiple systems. And scaling is not dynamic: when bursts hit, resizing clusters takes time and manual work, so backlogs build up and anomalies surface minutes late. By then, the exchange may already have suffered liquidity distortions or degraded service.

RisingWave simplifies this workflow. All telemetry streams into a unified event streaming data platform, and detection rules are defined as materialized views. One MV might track per-IP traffic baselines, another might correlate API surges with trade bursts, another might detect sudden waves of small orders from new accounts. Adding a new rule is just creating a new MV—no new Flink job, no wiring between multiple systems.
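
As a sketch of what one such rule can look like, the two views below flag IPs whose request rate jumps well above their own baseline. The api_events stream, its columns, and the 5x threshold are illustrative assumptions, not a prescribed schema:

-- Illustrative telemetry stream: api_events(ip, endpoint, account_id, event_time).
-- Per-IP request counts over a sliding one-minute window, advancing every 5 seconds.
CREATE MATERIALIZED VIEW ip_rate_1m AS
SELECT ip, window_start, window_end, COUNT(*) AS requests
FROM HOP(api_events, event_time, INTERVAL '5 seconds', INTERVAL '1 minute')
GROUP BY ip, window_start, window_end;

-- Flag windows where an IP runs at more than 5x its own average rate.
CREATE MATERIALIZED VIEW ip_burst_alerts AS
SELECT r.ip, r.window_end, r.requests, b.avg_requests
FROM ip_rate_1m AS r
JOIN (
    SELECT ip, AVG(requests) AS avg_requests
    FROM ip_rate_1m
    GROUP BY ip
) AS b ON r.ip = b.ip
WHERE r.requests > b.avg_requests * 5;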

Every MV stays continuously up to date with sub-second latency, even at multi-billion-event/day scale. Results can be queried directly, or RisingWave can push events to downstream endpoints so other systems or subscribers can react immediately.
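
The push side can also be expressed directly. Here is a sketch that forwards the ip_burst_alerts view defined above to a Kafka topic; the topic and broker address are placeholders:

-- Push each alert to Kafka as an append-only stream for downstream responders.
CREATE SINK ip_burst_alerts_out FROM ip_burst_alerts
WITH (
    connector = 'kafka',
    topic = 'fraud-alerts',
    properties.bootstrap.server = 'kafka:9092',
    force_append_only = 'true'
) FORMAT PLAIN ENCODE JSON;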

This solves the real problem for exchange engineers: instead of maintaining a fragile, complex data stack that lags under pressure, they get a single system that scales smoothly with bursts and delivers reliable, sub-second anomaly detection.


Marketplaces: Real-Time Valuation and Multi-Pair Market Data

Marketplaces need to present two kinds of data at scale: accurate portfolio valuation for every wallet, and candlestick (K-line) market data across thousands or even millions of trading pairs or collections. Each new trade or offer ripples out to thousands of wallets, forcing instant recalculation of balances. At the same time, K-lines must be updated at multiple granularities (1s, 1m, 5m, 1h, 1d) for every active pair.

Traditional warehouse pipelines break down under this workload. Fan-out makes portfolio calculations lag, while recomputing K-lines by scanning raw trades is too heavy. During bursts of activity, dashboards fall behind, and users see stale numbers exactly when they need accuracy the most.

RisingWave solves this by maintaining event-driven materialized views. Every incoming trade or offer immediately triggers computation, so wallet balances and K-lines update in true real time, not in micro-batches or refresh cycles. All MVs are transactionally consistent, meaning there are no gaps or mismatched states across metrics. Higher-level rollups (like 1m K-lines) are built directly on top of lower-level views (like 1s), so updates cascade naturally and efficiently.
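
A minimal sketch of that cascade, assuming a trade_data stream with asset_id, price, volume, and trade_time columns (all names illustrative):

-- 1-second candlesticks computed directly from raw trades.
CREATE MATERIALIZED VIEW kline_1s AS
SELECT
    asset_id,
    window_start AS bucket_start,
    first_value(price ORDER BY trade_time) AS open,
    MAX(price) AS high,
    MIN(price) AS low,
    last_value(price ORDER BY trade_time) AS close,
    SUM(volume) AS volume
FROM TUMBLE(trade_data, trade_time, INTERVAL '1 second')
GROUP BY asset_id, window_start;

-- 1-minute candlesticks rolled up from the 1-second view, so each trade
-- is aggregated once at the base level and cascades upward.
CREATE MATERIALIZED VIEW kline_1m AS
SELECT
    asset_id,
    window_start AS bucket_start,
    first_value(open ORDER BY bucket_start) AS open,
    MAX(high) AS high,
    MIN(low) AS low,
    last_value(close ORDER BY bucket_start) AS close,
    SUM(volume) AS volume
FROM TUMBLE(kline_1s, bucket_start, INTERVAL '1 minute')
GROUP BY asset_id, window_start;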

This approach is also highly scalable: inactive pairs don’t consume resources, and compute focuses only on markets with activity. The system can scale to millions of pairs and billions of events without losing consistency or freshness. Latency stays sub-second, with steady-state performance often in the tens of milliseconds.

For engineers, adding a new metric is simply defining another MV—no extra pipelines, no custom jobs. Results can be served directly via SQL queries, or RisingWave can push updates to APIs and endpoints so dashboards and downstream services always see the live state.

The outcome is a marketplace that delivers accurate portfolio values and live multi-pair K-line charts with guaranteed consistency and true real-time freshness, even under extreme scale.


Market Makers and HFT: Real-Time Orderbook and Position Monitoring

For market makers and high-frequency desks, success depends on constant visibility into fast-changing state: the latest best bid/ask, orderbook depth, queue position, spreads across venues, fee schedules, and above all, their own inventory and exposures. A warehouse can summarize PnL after the day ends, but it cannot help a trader or risk engine decide what to do in the next block.

The use case here is real-time monitoring. Teams need to watch their own positions, exposures, and flows continuously—not only to execute strategies but also to ensure they stay within internal risk and balance limits. A sudden imbalance, an inventory limit breach, or a queue position loss has to be visible immediately, not minutes later.

With RisingWave, both public orderbooks and private fills stream in from buses like NATS or Kafka. Engineers define materialized views that keep live state: top-of-book, rolling micro-windows, per-venue spreads, inventory deltas, PnL curves, and risk caps. Because every view is event-driven and consistent, each trade or order updates all dependent metrics instantly. Ingest rates of 100k–500k events per second are common, yet results remain sub-second without rescanning historical tables.
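
A sketch of the inventory-monitoring piece, assuming a private fills stream and a risk_limits table (names and columns are illustrative):

-- Net position and notional exposure per asset, updated on every fill.
CREATE MATERIALIZED VIEW inventory AS
SELECT
    asset_id,
    SUM(CASE WHEN side = 'buy' THEN qty ELSE -qty END) AS net_position,
    SUM(CASE WHEN side = 'buy' THEN qty * price ELSE -qty * price END) AS net_notional
FROM fills
GROUP BY asset_id;

-- Live breaches: positions outside the configured inventory cap.
CREATE MATERIALIZED VIEW inventory_breaches AS
SELECT i.asset_id, i.net_position, l.max_position
FROM inventory AS i
JOIN risk_limits AS l ON i.asset_id = l.asset_id
WHERE ABS(i.net_position) > l.max_position;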

The system is flexible: new monitoring rules or signals are added simply by defining another MV, and results can be queried directly or pushed to downstream endpoints so risk engines and dashboards always see the live state.

The outcome is that market makers monitor their books in true real time—positions, exposures, and risk limits stay accurate and up to date, even under heavy market load.


Intelligence and Compliance: Real-Time Wallet Flows and Market Signals

In crypto, intelligence means more than counting transfers. Engineers need to detect patterns of behavior: a wallet that repeatedly funds new accounts, hops through mixers, or suddenly expands its counterparty graph. They also need to connect this with off-chain signals—social spikes, breaking news, or governance moves that often precede on-chain flows. The difficulty is correlation: by the time a warehouse job finishes joining across sources, the signal is stale and the opportunity to act has passed.

The use case here is real-time correlation of diverse signals. Traders, compliance teams, and analysts all want to know when whales shift positions, when suspicious wallets start moving, or when social chatter is immediately followed by transactions.

RisingWave makes this possible by letting teams stream on-chain transactions and logs together with off-chain feeds—for example, governance proposals, news sentiment, or Twitter activity. Engineers define materialized views that join and enrich these streams in real time. Every new event triggers compute instantly, so alerts and metrics are always sub-second fresh. Grace windows handle late events cleanly, keeping results consistent while still prioritizing urgency.
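
For example, a join that surfaces large transfers landing shortly after a social spike might look like the sketch below. The transfers and social_spikes streams, the size threshold, and the ten-minute correlation window are all placeholder assumptions:

-- Correlate large transfers with a social spike in the preceding ten minutes.
CREATE MATERIALIZED VIEW spike_then_transfer AS
SELECT t.wallet, t.token, t.amount, s.mention_count, t.tx_time
FROM transfers AS t
JOIN social_spikes AS s
    ON t.token = s.token
    AND t.tx_time BETWEEN s.spike_time AND s.spike_time + INTERVAL '10 minutes'
WHERE t.amount > 1000000;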

Because RisingWave can both serve live queries and push events to endpoints, downstream dashboards, risk engines, and alerting systems always see the current state. This allows intelligence and compliance teams to detect whale moves, wallet anomalies, and market-manipulation patterns while they are still unfolding, not after the fact.


Layer 2 Networks: Real-Time Explorers and Developer Analytics

Modern Layer 2s are pushing two frontiers: shorter block times and modular execution. Both approaches generate massive event streams—transactions, receipts, execution traces—and both require the same guarantee to their ecosystem: the data shown in explorers, APIs, and dashboards must reflect the chain as it is right now, not a delayed snapshot from a few seconds ago.

The use case here is clear: explorers, developer APIs, and monitoring systems must stay synchronized with chain activity in real time. If an explorer lags, developers lose trust. If monitoring falls behind, network operators can’t react to congestion or anomalies.

RisingWave addresses this by ingesting execution traces, receipts, and block metadata directly and maintaining event-driven materialized views for metrics like TPS, gas usage, success rates, block confirmation times, wallet activity, and contract calls. Every new block or transaction immediately triggers computation, so results stay sub-second fresh. Reorgs and replaced blocks are handled consistently: safe blocks can be pinned, finality windows defined, and prior values corrected when upstream data changes.
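
As an illustration, per-minute chain-tip metrics reduce to a single view. The blocks stream and its block_time, tx_count, and gas_used columns are assumed for the sketch:

-- TPS, gas usage, and block counts per one-minute window, recomputed as blocks arrive.
CREATE MATERIALIZED VIEW chain_metrics_1m AS
SELECT
    window_start,
    SUM(tx_count)::DOUBLE PRECISION / 60 AS tps,
    AVG(gas_used) AS avg_gas_per_block,
    COUNT(*) AS blocks
FROM TUMBLE(blocks, block_time, INTERVAL '1 minute')
GROUP BY window_start;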

For engineers, adding new metrics or counters is as simple as defining another MV—no new pipelines, no extra jobs. Results can be queried live or pushed directly to endpoints, keeping explorers at the tip, APIs aligned with the chain, and internal monitoring stable even during token listings or congestion tests.

The outcome is that developers and users trust the numbers they see, because the analytics behave exactly like the chain behaves.


A Small Taste of the Developer Experience

You don’t need to learn a new programming model to build real-time pipelines. In RisingWave, everything is just SQL. Most teams begin by defining a few materialized views (MVs), then expand as their use cases grow. Each MV is event-driven and consistent—every new trade or block immediately updates the results.
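
The views below read from streams such as trade_data. A stream like that is declared once as a source; here is a minimal sketch assuming a Kafka topic of JSON trades, with the topic and broker address as placeholders:

-- Declare the trade stream once; MVs built on it update as events arrive.
CREATE SOURCE trade_data (
    trade_id BIGINT,
    asset_id VARCHAR,
    price DOUBLE PRECISION,
    volume DOUBLE PRECISION,
    timestamp TIMESTAMPTZ
) WITH (
    connector = 'kafka',
    topic = 'trades',
    properties.bootstrap.server = 'kafka:9092'
) FORMAT PLAIN ENCODE JSON;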

-- Create a real-time enriched market dataset by joining prices, volatility, and sector sentiment.
CREATE MATERIALIZED VIEW enriched_market_data AS
SELECT
    bas.asset_id,
    ed.sector,
    bas.average_price,
    bas.bid_ask_spread,
    rv.rolling_volatility,
    ed.avg_historical_volatility,
    ed.avg_sector_performance,
    ed.avg_sentiment_score,
    rv.window_end
FROM
    avg_price_bid_ask_spread AS bas
JOIN
    rolling_volatility AS rv
ON
    bas.asset_id = rv.asset_id AND
    bas.window_end = rv.window_end
JOIN (
    SELECT asset_id,
        sector,
        window_end,
        AVG(historical_volatility) AS avg_historical_volatility,
        AVG(sector_performance) AS avg_sector_performance,
        AVG(sentiment_score) AS avg_sentiment_score
    FROM TUMBLE(enrichment_data, timestamp, INTERVAL '5 minutes')
    GROUP BY asset_id, sector, window_end
) AS ed
ON bas.asset_id = ed.asset_id AND
   bas.window_end = ed.window_end;

-- Detect unusual trade volume in real time by flagging trades
-- that are >1.5x the 10-minute average for a given asset.
CREATE MATERIALIZED VIEW unusual_volume AS
SELECT
    t.trade_id,
    t.asset_id,
    t.volume,
    t.volume > w.avg_volume * 1.5 AS unusual_volume,
    t.window_start AS timestamp
FROM TUMBLE(trade_data, timestamp, INTERVAL '10 MINUTES') AS t
JOIN (
    -- Per-asset average trade volume within each 10-minute window.
    SELECT asset_id, window_start, AVG(volume) AS avg_volume
    FROM TUMBLE(trade_data, timestamp, INTERVAL '10 MINUTES')
    GROUP BY asset_id, window_start
) AS w
ON t.asset_id = w.asset_id
AND t.window_start = w.window_start;

You stream in trades, wallet flows, blocks, or receipts. You describe how to aggregate or join them in SQL. RisingWave keeps all results sub-second fresh and consistent, so your APIs and dashboards always serve live data. And if you want long-term storage, you can continuously write Parquet or Iceberg files to a data lake while still serving hot results from memory and disk.
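
Writing history to the lake is also just SQL. Below is a sketch of an Iceberg sink for the enriched view above; the warehouse path, names, and credentials are placeholders, and the exact connector options depend on your catalog and object-store setup:

-- Continuously append the enriched view to an Iceberg table in the lake.
CREATE SINK enriched_market_data_lake FROM enriched_market_data
WITH (
    connector = 'iceberg',
    type = 'append-only',
    force_append_only = 'true',
    warehouse.path = 's3://my-lake/warehouse',
    s3.access.key = 'xxx',
    s3.secret.key = 'xxx',
    database.name = 'market',
    table.name = 'enriched_market_data'
);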


Conclusion

In crypto, every second matters—whether it’s catching fraud, updating portfolios, monitoring positions, tracking wallet flows, or keeping a Layer 2 explorer at the tip. The challenge is always the same: massive scale and the need for data that is consistent, event-driven, and sub-second fresh.

RisingWave addresses this with a single unified event streaming data platform. Engineers define materialized views in SQL, and every new event updates results instantly. The same system serves live queries, pushes events to endpoints, and writes history to the lake.

Want to see how teams are already using RisingWave in production? Book a meeting with us and we’ll walk you through the real use cases in detail.
