How to Build a Real-Time Leaderboard with Streaming SQL

Every competitive game, fitness app, and sales dashboard shares the same core problem: ranking users by a score that changes constantly. A leaderboard sounds simple until you try to build one that updates in real time. Suddenly you are dealing with concurrent writes, rank recalculations across millions of rows, cache invalidation, and race conditions between your application tier and your database.

The traditional approach involves writing application code that aggregates scores, sorts users, and recalculates ranks on every event. This works fine for a few hundred users. At scale, it becomes a performance bottleneck that consumes engineering weeks to optimize, typically with Redis sorted sets, custom caching layers, and background workers stitched together.

There is a better way. A streaming database like RisingWave lets you define your leaderboard as a SQL query, and the system incrementally maintains the results as new events arrive. No application code for aggregation. No cache invalidation logic. No background jobs. You write SQL, and the leaderboard stays up to date automatically.

In this tutorial, you will build a complete real-time leaderboard system with multi-dimensional rankings, per-game-mode views, and a downstream sink to serve the results through your API. All queries have been tested and verified on RisingWave 2.8.0.

Why Do Traditional Leaderboard Architectures Break Down?

Most leaderboard implementations follow one of two patterns, and both have significant drawbacks at scale.

Pattern 1: Query-time aggregation. Your API endpoint runs a query like SELECT player_id, SUM(points) FROM events GROUP BY player_id ORDER BY SUM(points) DESC LIMIT 10 on every request. This is simple to implement, but the query scans the entire events table each time. With millions of events, response times climb from milliseconds to seconds. Adding indexes helps, but you are still doing redundant work on every API call.

Pattern 2: Application-managed state. You maintain a Redis sorted set or an in-memory cache, and update it from your application layer every time a scoring event occurs. This is fast to read, but now you need to handle:

  • Consistency: What happens if your app crashes between writing the event and updating Redis?
  • Replayability: If you need to recalculate scores after fixing a bug, you must rebuild the entire cache.
  • Multi-dimensional rankings: Want separate leaderboards for kills, objectives, and total score? That is three separate data structures to keep in sync.
  • Operational overhead: Redis cluster management, monitoring, failover, and memory capacity planning.
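The consistency bullet above is easiest to see in code. Here is a deliberately simplified, in-memory simulation of the dual-write pattern (plain dicts stand in for the database and Redis; all names are hypothetical):

```python
# Illustrative simulation of the dual-write problem with application-managed
# state. The event log and the cache are updated in two separate steps; if the
# process dies between them, they diverge.

event_log = []        # stands in for the durable events table
score_cache = {}      # stands in for a Redis sorted set

def record_score(player, points, crash_after_log=False):
    event_log.append((player, points))          # step 1: durable write
    if crash_after_log:
        return                                  # simulated crash: cache never updated
    score_cache[player] = score_cache.get(player, 0) + points  # step 2: cache write

record_score("p1", 100)
record_score("p1", 250, crash_after_log=True)   # crash between the two writes

log_total = sum(pts for p, pts in event_log if p == "p1")
print(log_total, score_cache["p1"])             # log says 350, cache says 100
```

Once the two stores disagree, no amount of fast reads helps: every subsequent rank computed from the cache is wrong until a full rebuild.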

A streaming materialized view eliminates these tradeoffs. You define the leaderboard query once. The streaming engine maintains the result incrementally, processing only the changed data as new events arrive. The result is always consistent, always fresh, and queryable with standard SQL.
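The contrast with incremental maintenance can be sketched in a few lines of plain Python (this illustrates the idea only, not RisingWave's actual engine):

```python
# Minimal sketch of incremental view maintenance: each new event touches only
# one player's aggregate, so the cost of an update is independent of how many
# events already exist.

totals = {}  # materialized state: player -> running total

def apply_event(player, points):
    """Incrementally fold one event into the maintained aggregate."""
    totals[player] = totals.get(player, 0) + points

events = [("p1", 100), ("p2", 100), ("p1", 250)]
for player, points in events:
    apply_event(player, points)

# The incrementally maintained state matches a full re-aggregation of all events.
recomputed = {}
for player, points in events:
    recomputed[player] = recomputed.get(player, 0) + points
print(totals == recomputed, totals["p1"])  # True 350
```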

How Does a Streaming Leaderboard Work in RisingWave?

RisingWave is a streaming database that supports PostgreSQL-compatible SQL. A materialized view in RisingWave is not a periodically refreshed snapshot like in traditional databases. It is an incrementally maintained query result. When a new row enters the source table (or Kafka topic), RisingWave propagates the change through the materialized view's query plan and updates only the affected rows in the result.

This means if StarForge scores 250 points, RisingWave does not re-aggregate all events for all players. It updates only StarForge's total, recalculates affected ranks, and the leaderboard query returns the new result immediately.

The architecture looks like this:

  1. Game servers emit scoring events to a Kafka topic (or directly to RisingWave).
  2. RisingWave ingests the events and maintains materialized views for leaderboard rankings.
  3. Your API queries the materialized views using any PostgreSQL client, getting millisecond-latency responses.
  4. Optionally, RisingWave sinks the leaderboard state to PostgreSQL, Redis, or another downstream system for serving.

Let's build it step by step.

How Do You Set Up the Event Source?

Step 1: Define the events table

In production, you would typically ingest events from Apache Kafka using a RisingWave source connector. For this tutorial, we will use a table so you can follow along without Kafka.

CREATE TABLE game_events (
    event_id VARCHAR,
    player_id VARCHAR,
    player_name VARCHAR,
    event_type VARCHAR,
    points INT,
    game_mode VARCHAR,
    event_time TIMESTAMPTZ
);

The schema captures each scoring event with the player who earned it, the type of event (kill, objective, assist), the points awarded, and the game mode.

Step 2: Load some initial events

INSERT INTO game_events VALUES
    ('e001', 'p1', 'ShadowNinja', 'kill', 100, 'ranked', '2026-03-29 10:00:00+00'),
    ('e002', 'p2', 'DragonSlayer', 'kill', 100, 'ranked', '2026-03-29 10:00:05+00'),
    ('e003', 'p1', 'ShadowNinja', 'kill', 100, 'ranked', '2026-03-29 10:00:10+00'),
    ('e004', 'p3', 'CosmicRay', 'objective', 250, 'ranked', '2026-03-29 10:00:15+00'),
    ('e005', 'p2', 'DragonSlayer', 'kill', 100, 'ranked', '2026-03-29 10:00:20+00'),
    ('e006', 'p4', 'IronWolf', 'kill', 100, 'ranked', '2026-03-29 10:00:25+00'),
    ('e007', 'p1', 'ShadowNinja', 'objective', 250, 'ranked', '2026-03-29 10:00:30+00'),
    ('e008', 'p5', 'PixelQueen', 'kill', 100, 'ranked', '2026-03-29 10:00:35+00'),
    ('e009', 'p3', 'CosmicRay', 'kill', 100, 'ranked', '2026-03-29 10:00:40+00'),
    ('e010', 'p2', 'DragonSlayer', 'objective', 250, 'ranked', '2026-03-29 10:00:45+00'),
    ('e011', 'p4', 'IronWolf', 'kill', 100, 'ranked', '2026-03-29 10:00:50+00'),
    ('e012', 'p1', 'ShadowNinja', 'kill', 100, 'ranked', '2026-03-29 10:00:55+00'),
    ('e013', 'p5', 'PixelQueen', 'objective', 250, 'ranked', '2026-03-29 10:01:00+00'),
    ('e014', 'p6', 'ThunderBolt', 'kill', 100, 'ranked', '2026-03-29 10:01:05+00'),
    ('e015', 'p3', 'CosmicRay', 'kill', 100, 'ranked', '2026-03-29 10:01:10+00'),
    ('e016', 'p4', 'IronWolf', 'objective', 250, 'ranked', '2026-03-29 10:01:15+00'),
    ('e017', 'p6', 'ThunderBolt', 'kill', 100, 'ranked', '2026-03-29 10:01:20+00'),
    ('e018', 'p2', 'DragonSlayer', 'kill', 100, 'ranked', '2026-03-29 10:01:25+00'),
    ('e019', 'p7', 'StarForge', 'kill', 100, 'ranked', '2026-03-29 10:01:30+00'),
    ('e020', 'p1', 'ShadowNinja', 'kill', 100, 'ranked', '2026-03-29 10:01:35+00');

We now have 20 events across 7 players, with a mix of kill events (100 points) and objective completions (250 points).

Connecting to Kafka in production

In a real deployment, replace the table with a Kafka source:

CREATE SOURCE game_events_source (
    event_id VARCHAR,
    player_id VARCHAR,
    player_name VARCHAR,
    event_type VARCHAR,
    points INT,
    game_mode VARCHAR,
    event_time TIMESTAMPTZ
) WITH (
    connector = 'kafka',
    topic = 'game.events',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'earliest'
) FORMAT PLAIN ENCODE JSON;

RisingWave supports ingesting from Kafka, Pulsar, Kinesis, and other sources with the same downstream materialized view logic.
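A producer feeding this source would publish JSON objects whose keys match the column names. A minimal sketch of building such a payload with the standard library (the actual Kafka client call is omitted):

```python
import json
from datetime import datetime, timezone

# Build one scoring event whose keys match the source schema above. In a real
# producer you would hand this payload to a Kafka client's produce/send call.
event = {
    "event_id": "e021",
    "player_id": "p7",
    "player_name": "StarForge",
    "event_type": "objective",
    "points": 250,
    "game_mode": "ranked",
    "event_time": datetime(2026, 3, 29, 10, 2, tzinfo=timezone.utc).isoformat(),
}
payload = json.dumps(event)
print(json.loads(payload)["points"])  # 250
```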

How Do You Build the Core Leaderboard?

Step 3: Create the score aggregation view

The first materialized view aggregates raw events into per-player totals:

CREATE MATERIALIZED VIEW player_total_scores AS
SELECT
    player_id,
    player_name,
    SUM(points) AS total_score,
    COUNT(*) AS total_events,
    COUNT(*) FILTER (WHERE event_type = 'kill') AS total_kills,
    COUNT(*) FILTER (WHERE event_type = 'objective') AS total_objectives,
    MAX(event_time) AS last_active
FROM game_events
GROUP BY player_id, player_name;

Query it immediately:

SELECT * FROM player_total_scores ORDER BY total_score DESC;
 player_id | player_name  | total_score | total_events | total_kills | total_objectives |        last_active
-----------+--------------+-------------+--------------+-------------+------------------+---------------------------
 p1        | ShadowNinja  |         650 |            5 |           4 |                1 | 2026-03-29 10:01:35+00:00
 p2        | DragonSlayer |         550 |            4 |           3 |                1 | 2026-03-29 10:01:25+00:00
 p3        | CosmicRay    |         450 |            3 |           2 |                1 | 2026-03-29 10:01:10+00:00
 p4        | IronWolf     |         450 |            3 |           2 |                1 | 2026-03-29 10:01:15+00:00
 p5        | PixelQueen   |         350 |            2 |           1 |                1 | 2026-03-29 10:01:00+00:00
 p6        | ThunderBolt  |         200 |            2 |           2 |                0 | 2026-03-29 10:01:20+00:00
 p7        | StarForge    |         100 |            1 |           1 |                0 | 2026-03-29 10:01:30+00:00
(7 rows)

ShadowNinja leads with 650 points from 4 kills and 1 objective. StarForge is last with just 100 points from a single kill.

Step 4: Add rank numbers

Leaderboards need rank positions. RisingWave supports window functions in materialized views:

CREATE MATERIALIZED VIEW leaderboard_with_ranks AS
SELECT
    player_id,
    player_name,
    total_score,
    total_kills,
    total_objectives,
    last_active,
    ROW_NUMBER() OVER (PARTITION BY 1::int ORDER BY total_score DESC) AS rank
FROM player_total_scores;

Note: RisingWave requires a PARTITION BY clause in window functions used within materialized views. Using PARTITION BY 1::int places every row in a single partition, so the ranking applies globally across all players.

SELECT rank, player_name, total_score, total_kills, total_objectives
FROM leaderboard_with_ranks
ORDER BY rank;
 rank | player_name  | total_score | total_kills | total_objectives
------+--------------+-------------+-------------+------------------
    1 | ShadowNinja  |         650 |           4 |                1
    2 | DragonSlayer |         550 |           3 |                1
    3 | CosmicRay    |         450 |           2 |                1
    4 | IronWolf     |         450 |           2 |                1
    5 | PixelQueen   |         350 |           1 |                1
    6 | ThunderBolt  |         200 |           2 |                0
    7 | StarForge    |         100 |           1 |                0
(7 rows)

This result is not computed on demand. It is pre-computed and incrementally maintained. A SELECT from this view is a simple table scan, not a full aggregation, so it returns in single-digit milliseconds regardless of how many underlying events exist.
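The rank column this view maintains has the same semantics as sorting totals and numbering rows. A plain-Python sketch of ROW_NUMBER() applied to the sample data above:

```python
# ROW_NUMBER() semantics in plain Python: sort descending by score, then
# number rows from 1. Totals are taken from the tutorial's sample data.
totals = {
    "ShadowNinja": 650, "DragonSlayer": 550, "CosmicRay": 450,
    "IronWolf": 450, "PixelQueen": 350, "ThunderBolt": 200, "StarForge": 100,
}
ranked = [
    (rank, name, score)
    for rank, (name, score) in enumerate(
        sorted(totals.items(), key=lambda kv: -kv[1]), start=1
    )
]
print(ranked[0], ranked[6])  # (1, 'ShadowNinja', 650) (7, 'StarForge', 100)
```

Note that ROW_NUMBER() breaks the CosmicRay/IronWolf tie arbitrarily; if you need ties to share a rank, use RANK() instead.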

How Does the Leaderboard Update in Real Time?

Here is where streaming shows its strength. When new events arrive, every materialized view in the pipeline updates automatically.

Let's say StarForge goes on a scoring spree:

INSERT INTO game_events VALUES
    ('e021', 'p7', 'StarForge', 'objective', 250, 'ranked', '2026-03-29 10:02:00+00'),
    ('e022', 'p7', 'StarForge', 'kill', 100, 'ranked', '2026-03-29 10:02:05+00'),
    ('e023', 'p7', 'StarForge', 'kill', 100, 'ranked', '2026-03-29 10:02:10+00'),
    ('e024', 'p7', 'StarForge', 'objective', 250, 'ranked', '2026-03-29 10:02:15+00'),
    ('e025', 'p7', 'StarForge', 'kill', 100, 'ranked', '2026-03-29 10:02:20+00');

Query the leaderboard again:

SELECT rank, player_name, total_score, total_kills, total_objectives
FROM leaderboard_with_ranks
ORDER BY rank;
 rank | player_name  | total_score | total_kills | total_objectives
------+--------------+-------------+-------------+------------------
    1 | StarForge    |         900 |           4 |                2
    2 | ShadowNinja  |         650 |           4 |                1
    3 | DragonSlayer |         550 |           3 |                1
    4 | CosmicRay    |         450 |           2 |                1
    5 | IronWolf     |         450 |           2 |                1
    6 | PixelQueen   |         350 |           1 |                1
    7 | ThunderBolt  |         200 |           2 |                0
(7 rows)

StarForge jumped from rank 7 to rank 1 with 900 total points. No application code triggered this update. No cache was invalidated. RisingWave propagated the 5 new events through player_total_scores (updating the aggregation) and then through leaderboard_with_ranks (recalculating ranks). Both views are now consistent with the latest data.

This is the core value of incremental view maintenance: only the affected rows are recomputed. Adding 5 events for StarForge does not cause re-aggregation of the other 6 players' data.

How Do You Build Multi-Dimensional Leaderboards?

Real games rarely have a single leaderboard. Players want to see who has the most kills, who completes the most objectives, and who is most efficient. You can build all of these as separate materialized views on the same event stream.

Per-game-mode leaderboard

CREATE MATERIALIZED VIEW mode_leaderboard AS
SELECT
    game_mode,
    player_id,
    player_name,
    SUM(points) AS mode_score,
    COUNT(*) FILTER (WHERE event_type = 'kill') AS mode_kills,
    ROW_NUMBER() OVER (PARTITION BY game_mode ORDER BY SUM(points) DESC) AS mode_rank
FROM game_events
GROUP BY game_mode, player_id, player_name;

SELECT mode_rank, player_name, mode_score, mode_kills
FROM mode_leaderboard
WHERE game_mode = 'ranked'
ORDER BY mode_rank
LIMIT 5;
 mode_rank | player_name  | mode_score | mode_kills
-----------+--------------+------------+------------
         1 | StarForge    |        900 |          4
         2 | ShadowNinja  |        650 |          4
         3 | DragonSlayer |        550 |          3
         4 | CosmicRay    |        450 |          2
         5 | IronWolf     |        450 |          2
(5 rows)

The PARTITION BY game_mode in the window function means ranks are computed independently for each mode. If you add a casual mode later, those players get their own ranking automatically.
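What PARTITION BY game_mode does can be mimicked in plain Python: group rows by mode, then rank within each group independently. A small sketch (the casual-mode rows are hypothetical, added to show ranks restarting per mode):

```python
from collections import defaultdict

# Per-partition ranking: ranks restart within each game mode, so a new mode
# gets its own independent leaderboard automatically.
scores = [
    ("ranked", "StarForge", 900), ("ranked", "ShadowNinja", 650),
    ("casual", "PixelQueen", 300), ("casual", "ThunderBolt", 150),
]
by_mode = defaultdict(list)
for mode, player, score in scores:
    by_mode[mode].append((player, score))

mode_ranks = {}
for mode, players in by_mode.items():
    for rank, (player, _score) in enumerate(
        sorted(players, key=lambda ps: -ps[1]), start=1
    ):
        mode_ranks[(mode, player)] = rank

print(mode_ranks[("ranked", "StarForge")], mode_ranks[("casual", "PixelQueen")])  # 1 1
```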

Efficiency leaderboard

Which players score the most points per event? This is useful for identifying high-impact players regardless of playtime.

CREATE MATERIALIZED VIEW player_efficiency AS
SELECT
    player_id,
    player_name,
    SUM(points) AS total_score,
    COUNT(*) AS total_events,
    ROUND(SUM(points)::NUMERIC / COUNT(*), 1) AS avg_points_per_event,
    MAX(event_time) AS last_active
FROM game_events
GROUP BY player_id, player_name;

SELECT player_name, total_score, total_events, avg_points_per_event
FROM player_efficiency
ORDER BY avg_points_per_event DESC;
 player_name  | total_score | total_events | avg_points_per_event
--------------+-------------+--------------+----------------------
 PixelQueen   |         350 |            2 |                175.0
 StarForge    |         900 |            6 |                150.0
 CosmicRay    |         450 |            3 |                150.0
 IronWolf     |         450 |            3 |                150.0
 DragonSlayer |         550 |            4 |                137.5
 ShadowNinja  |         650 |            5 |                130.0
 ThunderBolt  |         200 |            2 |                100.0
(7 rows)

PixelQueen leads in efficiency at 175 points per event, despite having the second-lowest total score. Different views, same source data, zero duplication of aggregation logic.

Combined multi-rank view

You can compute multiple rankings in a single materialized view using multiple window functions:

CREATE MATERIALIZED VIEW leaderboard_multi AS
SELECT
    player_id,
    player_name,
    SUM(points) AS total_score,
    COUNT(*) FILTER (WHERE event_type = 'kill') AS kills,
    COUNT(*) FILTER (WHERE event_type = 'objective') AS objectives,
    ROUND(SUM(points)::NUMERIC / COUNT(*), 1) AS avg_points,
    MAX(event_time) AS last_active,
    ROW_NUMBER() OVER (PARTITION BY 1::int ORDER BY SUM(points) DESC) AS score_rank,
    ROW_NUMBER() OVER (PARTITION BY 1::int ORDER BY COUNT(*) FILTER (WHERE event_type = 'kill') DESC) AS kills_rank
FROM game_events
GROUP BY player_id, player_name;

SELECT player_name, total_score, score_rank, kills, kills_rank, objectives, avg_points
FROM leaderboard_multi
ORDER BY score_rank;
 player_name  | total_score | score_rank | kills | kills_rank | objectives | avg_points
--------------+-------------+------------+-------+------------+------------+------------
 StarForge    |         900 |          1 |     4 |          2 |          2 |      150.0
 ShadowNinja  |         650 |          2 |     4 |          1 |          1 |      130.0
 DragonSlayer |         550 |          3 |     3 |          3 |          1 |      137.5
 CosmicRay    |         450 |          4 |     2 |          4 |          1 |      150.0
 IronWolf     |         450 |          5 |     2 |          5 |          1 |      150.0
 PixelQueen   |         350 |          6 |     1 |          7 |          1 |      175.0
 ThunderBolt  |         200 |          7 |     2 |          6 |          0 |      100.0
(7 rows)

StarForge ranks #1 in score but #2 in kills. ShadowNinja is #2 in score but #1 in kills. A single materialized view gives your API everything it needs to power multiple leaderboard tabs.

How Do You Serve the Leaderboard to Your Application?

RisingWave is wire-compatible with PostgreSQL, so any PostgreSQL client library works out of the box. Your API server connects to RisingWave and runs simple SELECT queries against the materialized views.

Direct query from your API

import psycopg2

conn = psycopg2.connect(
    host="localhost",
    port=4566,
    user="root",
    dbname="dev"
)

cursor = conn.cursor()
cursor.execute("""
    SELECT rank, player_name, total_score, total_kills
    FROM leaderboard_with_ranks
    ORDER BY rank
    LIMIT 10
""")

for row in cursor.fetchall():
    print(f"#{row[0]} {row[1]}: {row[2]} pts ({row[3]} kills)")

Because the materialized view is pre-computed, this query completes in under 5 milliseconds, regardless of whether your events table has 1,000 or 100 million rows.

Sinking to a downstream database

If you want to serve the leaderboard from a separate PostgreSQL instance (for isolation or to use with an existing API), you can set up a sink:

CREATE SINK leaderboard_to_postgres FROM leaderboard_with_ranks
WITH (
    connector = 'jdbc',
    jdbc.url = 'jdbc:postgresql://your-db:5432/gamedb',
    table.name = 'leaderboard',
    type = 'upsert',
    primary_key = 'player_id'
);

RisingWave continuously pushes leaderboard updates to your PostgreSQL table. Every time a rank changes, only the affected rows are sent as upserts. Your existing API reads from PostgreSQL as usual, but the data stays fresh automatically.
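Upsert semantics are easy to picture: each change is keyed by the primary key and applied as insert-or-update, so the downstream table never accumulates duplicate rows for a player. A toy sketch (a dict stands in for the PostgreSQL table; the rows shown are illustrative):

```python
# Insert-or-update by primary key: the essence of an upsert sink. Two updates
# for the same player_id leave exactly one row downstream.
downstream = {}  # primary key -> row, standing in for the PostgreSQL table

def apply_upsert(row):
    downstream[row["player_id"]] = row  # insert if new, overwrite if existing

apply_upsert({"player_id": "p7", "player_name": "StarForge", "rank": 7, "total_score": 100})
apply_upsert({"player_id": "p7", "player_name": "StarForge", "rank": 1, "total_score": 900})

print(len(downstream), downstream["p7"]["rank"])  # 1 1
```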

What About Scale and Performance?

A few considerations for running this in production with millions of players:

Partitioned sources. If your Kafka topic has multiple partitions, RisingWave parallelizes ingestion automatically. Each partition is consumed by a separate worker, and the materialized views are distributed across compute nodes.

Top-N optimization. If you only need the top 100 players, define the ranking view with a filter such as WHERE rank <= 100 over the window function subquery. RisingWave's optimizer recognizes this pattern as a Top-N computation and maintains only the top rows, avoiding a full ranking of all players.

State storage. RisingWave stores materialized view state in a shared storage backend (S3, HDFS, or local disk). The state size is proportional to the number of unique players, not the total number of events. For a game with 10 million players, the leaderboard state is a few gigabytes, well within the capacity of a modest cluster.
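A quick back-of-the-envelope check on that claim (the per-row width here is an assumption for illustration, not a measured figure):

```python
# Rough estimate of leaderboard state size: state grows with unique players,
# not with total events. The 200-byte row width is an assumed figure.
players = 10_000_000
bytes_per_row = 200          # assumed width of one aggregated player row
state_gb = players * bytes_per_row / 1e9
print(round(state_gb, 1))    # 2.0
```

Even tripling the assumed row width keeps the state in the single-digit-gigabyte range, consistent with the "few gigabytes" figure above.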

Cascading views. Notice how leaderboard_with_ranks builds on player_total_scores. RisingWave handles cascading materialized views natively. When an event arrives, it propagates through the entire view chain in a single pass, maintaining transactional consistency across all views.

FAQ

What is a streaming leaderboard?

A streaming leaderboard is a ranked list of players (or entities) that updates automatically and in real time as new scoring events arrive. Unlike traditional leaderboards that require periodic batch recalculation, a streaming leaderboard uses incremental computation to reflect changes within milliseconds of the scoring event.

How does RisingWave compare to Redis for leaderboards?

Redis sorted sets are fast for single-dimension ranking but require application code to maintain consistency, handle failures, and support multiple ranking dimensions. RisingWave handles all of this with SQL, supports complex aggregations and window functions, provides exactly-once consistency guarantees, and eliminates the need for custom cache management code.

Can this approach handle millions of concurrent players?

Yes. RisingWave scales horizontally by distributing materialized view computation across multiple nodes. The state size is proportional to the number of unique players, not the total event volume. A cluster with 3-5 compute nodes can handle tens of millions of ranked players with sub-second update latency.

Do I need Kafka to use RisingWave for leaderboards?

No. While Kafka is the most common ingestion source for production deployments, RisingWave also supports direct inserts via its PostgreSQL-compatible interface, as well as ingestion from Pulsar, Kinesis, and other streaming platforms. You can start with direct inserts and migrate to Kafka when your event volume grows.

Conclusion

Building a real-time leaderboard with RisingWave comes down to three SQL statements: a table (or source) for events, a materialized view for aggregation, and a materialized view for ranking. Here is what we covered:

  • Streaming materialized views replace application-level aggregation code, cache invalidation logic, and background workers with declarative SQL.
  • Incremental view maintenance means only changed data is reprocessed, keeping updates fast regardless of total data volume.
  • Window functions like ROW_NUMBER() in materialized views give you continuously updated rankings without re-sorting the entire dataset.
  • Multi-dimensional leaderboards (by score, by kills, by game mode, by efficiency) are just additional materialized views on the same event stream.
  • PostgreSQL compatibility means your existing application code and libraries work without modification.

The same pattern applies beyond gaming: sales leaderboards, fitness app rankings, customer loyalty tiers, and any scenario where you need to rank entities by a continuously changing metric.


Ready to try this yourself? Get started with RisingWave in 5 minutes with the Quickstart guide.

Join our Slack community to ask questions and connect with other stream processing developers.
