Building Real-Time Dashboards: From Kafka to Grafana via RisingWave

Your product manager asks how revenue looked in the last five minutes. Your data engineer says the warehouse refreshes overnight. Your SRE points at a Grafana dashboard that shows infrastructure metrics but nothing about the business. Sound familiar?

The gap between raw event streams and actionable business dashboards is one of the most common pain points in modern data architectures. Kafka captures every click, purchase, and cart addition in real time, but turning that firehose into a dashboard that updates in seconds, not hours, typically requires a tangle of Flink jobs, intermediate stores, and custom connectors.

This tutorial shows you a simpler path. You will build a production-ready e-commerce dashboard that streams events from Kafka through RisingWave into Grafana. RisingWave serves as both the stream processor and the serving layer: it incrementally maintains materialized views that Grafana queries over a standard PostgreSQL connection. No JVM tuning, no separate database, no batch ETL jobs.

By the end, you will have four live dashboard panels: revenue over time, top products by sales, regional breakdown, and a conversion funnel, all updating within seconds of new events arriving.

Why the Traditional Stack Falls Short

A common approach to real-time dashboards involves Kafka, a stream processor like Apache Flink, a results database like PostgreSQL or Redis, and a visualization tool. This works, but it introduces real operational costs.

Multiple systems to manage. Each component in the pipeline needs monitoring, scaling, and failure handling. A Flink job checkpointing to S3, writing to PostgreSQL, which Grafana then polls: that is three potential failure points between your event and your chart.

State synchronization headaches. When your Flink job aggregates events into a database table, you need to handle exactly-once delivery, schema changes, and state recovery. A schema change in your Kafka topic can cascade into Flink job failures that silently stop your dashboard from updating.

High latency despite "real-time" claims. Many teams settle for micro-batch approaches that update every 30 seconds to 5 minutes, because true low-latency incremental processing is hard to get right with a DIY pipeline.

RisingWave collapses the stream processor and the serving database into a single system. You write SQL to define materialized views. RisingWave incrementally maintains them as events arrive. Grafana queries them like any PostgreSQL table. The result is fewer moving parts and genuinely low-latency updates.

Architecture Overview

The end-to-end architecture has three components:

  1. Kafka produces e-commerce events (page views, cart additions, purchases)
  2. RisingWave ingests from Kafka using a CREATE SOURCE or CREATE TABLE statement, then maintains materialized views that aggregate the raw events
  3. Grafana connects to RisingWave via its built-in PostgreSQL data source and queries the materialized views

RisingWave exposes a PostgreSQL-compatible interface on port 4566. From Grafana's perspective, it looks like a regular PostgreSQL database. No custom plugins or data source adapters are required.

This architecture eliminates the need for a separate serving database. RisingWave is both the stream processor and the query layer. When a new purchase event arrives in Kafka, the materialized views update incrementally within seconds, and the next Grafana refresh picks up the change.

Step 1: Define Your Event Schema

For this tutorial, we model an e-commerce event stream that captures user behavior across a product catalog. Each event represents a page view, an item added to cart, or a completed purchase.

In production, this table would use a Kafka connector (CREATE TABLE ... WITH (connector = 'kafka', ...)). For this tutorial, we use a standard table so you can follow along without a running Kafka cluster. The materialized views work identically regardless of the data source.

CREATE TABLE ecommerce_events (
    event_id VARCHAR,
    event_type VARCHAR,
    user_id VARCHAR,
    product_id VARCHAR,
    product_name VARCHAR,
    category VARCHAR,
    price NUMERIC,
    quantity INT,
    region VARCHAR,
    event_time TIMESTAMPTZ
);

Now insert sample events that simulate realistic user behavior across multiple products, regions, and time windows:

INSERT INTO ecommerce_events VALUES
('e001', 'page_view',   'u101', 'p001', 'Wireless Mouse',      'Electronics', 29.99,  NULL, 'us-east',  '2025-06-15 10:01:00+00'),
('e002', 'page_view',   'u102', 'p002', 'Mechanical Keyboard', 'Electronics', 89.99,  NULL, 'us-west',  '2025-06-15 10:01:15+00'),
('e003', 'add_to_cart', 'u101', 'p001', 'Wireless Mouse',      'Electronics', 29.99,  1,    'us-east',  '2025-06-15 10:02:00+00'),
('e004', 'page_view',   'u103', 'p003', 'Running Shoes',       'Apparel',     119.99, NULL, 'eu-west',  '2025-06-15 10:02:30+00'),
('e005', 'add_to_cart', 'u102', 'p002', 'Mechanical Keyboard', 'Electronics', 89.99,  1,    'us-west',  '2025-06-15 10:03:00+00'),
('e006', 'purchase',    'u101', 'p001', 'Wireless Mouse',      'Electronics', 29.99,  2,    'us-east',  '2025-06-15 10:03:30+00'),
('e007', 'page_view',   'u104', 'p004', 'Yoga Mat',            'Fitness',     34.99,  NULL, 'us-east',  '2025-06-15 10:04:00+00'),
('e008', 'purchase',    'u102', 'p002', 'Mechanical Keyboard', 'Electronics', 89.99,  1,    'us-west',  '2025-06-15 10:04:30+00'),
('e009', 'add_to_cart', 'u103', 'p003', 'Running Shoes',       'Apparel',     119.99, 1,    'eu-west',  '2025-06-15 10:05:00+00'),
('e010', 'page_view',   'u105', 'p005', 'Water Bottle',        'Fitness',     14.99,  NULL, 'ap-south', '2025-06-15 10:05:15+00'),
('e011', 'purchase',    'u103', 'p003', 'Running Shoes',       'Apparel',     119.99, 1,    'eu-west',  '2025-06-15 10:06:00+00'),
('e012', 'add_to_cart', 'u104', 'p004', 'Yoga Mat',            'Fitness',     34.99,  1,    'us-east',  '2025-06-15 10:06:30+00'),
('e013', 'page_view',   'u106', 'p001', 'Wireless Mouse',      'Electronics', 29.99,  NULL, 'us-west',  '2025-06-15 10:07:00+00'),
('e014', 'purchase',    'u104', 'p004', 'Yoga Mat',            'Fitness',     34.99,  1,    'us-east',  '2025-06-15 10:07:30+00'),
('e015', 'add_to_cart', 'u105', 'p005', 'Water Bottle',        'Fitness',     14.99,  2,    'ap-south', '2025-06-15 10:08:00+00'),
('e016', 'page_view',   'u107', 'p006', 'Desk Lamp',           'Home Office', 49.99,  NULL, 'eu-west',  '2025-06-15 10:08:15+00'),
('e017', 'purchase',    'u105', 'p005', 'Water Bottle',        'Fitness',     14.99,  2,    'ap-south', '2025-06-15 10:09:00+00'),
('e018', 'add_to_cart', 'u106', 'p001', 'Wireless Mouse',      'Electronics', 29.99,  1,    'us-west',  '2025-06-15 10:09:30+00'),
('e019', 'purchase',    'u106', 'p001', 'Wireless Mouse',      'Electronics', 29.99,  1,    'us-west',  '2025-06-15 10:10:00+00'),
('e020', 'add_to_cart', 'u107', 'p006', 'Desk Lamp',           'Home Office', 49.99,  1,    'eu-west',  '2025-06-15 10:10:30+00'),
('e021', 'purchase',    'u107', 'p006', 'Desk Lamp',           'Home Office', 49.99,  1,    'eu-west',  '2025-06-15 10:11:00+00'),
('e022', 'page_view',   'u108', 'p002', 'Mechanical Keyboard', 'Electronics', 89.99,  NULL, 'us-east',  '2025-06-15 10:11:30+00'),
('e023', 'add_to_cart', 'u108', 'p002', 'Mechanical Keyboard', 'Electronics', 89.99,  1,    'us-east',  '2025-06-15 10:12:00+00'),
('e024', 'purchase',    'u108', 'p002', 'Mechanical Keyboard', 'Electronics', 89.99,  1,    'us-east',  '2025-06-15 10:12:30+00'),
('e025', 'page_view',   'u109', 'p003', 'Running Shoes',       'Apparel',     119.99, NULL, 'us-east',  '2025-06-15 10:13:00+00'),
('e026', 'page_view',   'u110', 'p004', 'Yoga Mat',            'Fitness',     34.99,  NULL, 'us-west',  '2025-06-15 10:13:30+00'),
('e027', 'page_view',   'u111', 'p001', 'Wireless Mouse',      'Electronics', 29.99,  NULL, 'eu-west',  '2025-06-15 10:14:00+00'),
('e028', 'page_view',   'u112', 'p005', 'Water Bottle',        'Fitness',     14.99,  NULL, 'ap-south', '2025-06-15 10:14:30+00'),
('e029', 'add_to_cart', 'u109', 'p003', 'Running Shoes',       'Apparel',     119.99, 1,    'us-east',  '2025-06-15 10:15:00+00'),
('e030', 'add_to_cart', 'u110', 'p004', 'Yoga Mat',            'Fitness',     34.99,  1,    'us-west',  '2025-06-15 10:15:30+00'),
('e031', 'page_view',   'u113', 'p006', 'Desk Lamp',           'Home Office', 49.99,  NULL, 'us-east',  '2025-06-15 10:16:00+00'),
('e032', 'page_view',   'u114', 'p002', 'Mechanical Keyboard', 'Electronics', 89.99,  NULL, 'us-west',  '2025-06-15 10:16:30+00');

This dataset includes 14 unique users across four regions, with a realistic distribution: some users only browse, some add items to cart but do not purchase, and some complete the full funnel.
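Rather than hand-writing rows, you can script the event stream. The following Python sketch generates events that match the `ecommerce_events` schema; the catalog subset, the type weights, and the 15-second spacing are illustrative choices, not part of the tutorial's dataset.

```python
import random
import uuid
from datetime import datetime, timedelta, timezone

# Illustrative catalog; names and prices mirror the sample data above.
PRODUCTS = [
    ("p001", "Wireless Mouse", "Electronics", 29.99),
    ("p002", "Mechanical Keyboard", "Electronics", 89.99),
    ("p003", "Running Shoes", "Apparel", 119.99),
]
REGIONS = ["us-east", "us-west", "eu-west", "ap-south"]
EVENT_TYPES = ["page_view", "add_to_cart", "purchase"]

def make_event(ts: datetime) -> dict:
    """Build one event row matching the ecommerce_events columns."""
    product_id, name, category, price = random.choice(PRODUCTS)
    # Skew toward page views, as in a real funnel (weights are illustrative).
    event_type = random.choices(EVENT_TYPES, weights=[70, 20, 10])[0]
    return {
        "event_id": uuid.uuid4().hex[:8],
        "event_type": event_type,
        "user_id": f"u{random.randint(100, 999)}",
        "product_id": product_id,
        "product_name": name,
        "category": category,
        "price": price,
        # Page views carry no quantity, matching the NULLs in the sample data.
        "quantity": None if event_type == "page_view" else random.randint(1, 3),
        "region": random.choice(REGIONS),
        "event_time": ts.isoformat(),
    }

start = datetime(2025, 6, 15, 10, 0, tzinfo=timezone.utc)
events = [make_event(start + timedelta(seconds=15 * i)) for i in range(20)]
```

Each dictionary can then be turned into an INSERT statement or, in production, serialized as JSON and produced to Kafka.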

Step 2: Create Materialized Views for Each Dashboard Panel

Each Grafana panel maps to a single materialized view. RisingWave maintains these incrementally: when a new event arrives, only the affected aggregations recompute, not the entire dataset.

Panel 1: Revenue Over Time

This view aggregates purchase revenue into one-minute windows, perfect for a time-series chart in Grafana.

CREATE MATERIALIZED VIEW mv_revenue_per_minute AS
SELECT
    date_trunc('minute', event_time) AS minute_window,
    SUM(price * quantity) AS total_revenue,
    COUNT(*) AS order_count,
    COUNT(DISTINCT user_id) AS unique_buyers
FROM ecommerce_events
WHERE event_type = 'purchase'
GROUP BY date_trunc('minute', event_time);

Query results:

       minute_window       | total_revenue | order_count | unique_buyers
---------------------------+---------------+-------------+--------------
 2025-06-15 10:03:00+00:00 |         59.98 |           1 |            1
 2025-06-15 10:04:00+00:00 |         89.99 |           1 |            1
 2025-06-15 10:06:00+00:00 |        119.99 |           1 |            1
 2025-06-15 10:07:00+00:00 |         34.99 |           1 |            1
 2025-06-15 10:09:00+00:00 |         29.98 |           1 |            1
 2025-06-15 10:10:00+00:00 |         29.99 |           1 |            1
 2025-06-15 10:11:00+00:00 |         49.99 |           1 |            1
 2025-06-15 10:12:00+00:00 |         89.99 |           1 |            1
(8 rows)

The date_trunc function groups events into minute-level buckets. In production with higher throughput, you would see multiple orders per minute. The COUNT(DISTINCT user_id) column helps you distinguish between revenue from repeat buyers versus new customers.
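To check your intuition for what the view computes, here is a small Python sketch that replays the eight purchase events from the sample data and reproduces the same minute-window aggregates (this models the SQL, it is not how RisingWave executes it):

```python
from collections import defaultdict

# The eight purchase events above: (user_id, price, quantity, event_time).
purchases = [
    ("u101", 29.99, 2, "2025-06-15 10:03:30"),
    ("u102", 89.99, 1, "2025-06-15 10:04:30"),
    ("u103", 119.99, 1, "2025-06-15 10:06:00"),
    ("u104", 34.99, 1, "2025-06-15 10:07:30"),
    ("u105", 14.99, 2, "2025-06-15 10:09:00"),
    ("u106", 29.99, 1, "2025-06-15 10:10:00"),
    ("u107", 49.99, 1, "2025-06-15 10:11:00"),
    ("u108", 89.99, 1, "2025-06-15 10:12:30"),
]

def minute_bucket(ts: str) -> str:
    """Equivalent of date_trunc('minute', event_time): drop the seconds."""
    return ts[:16] + ":00"

revenue = defaultdict(float)   # bucket -> SUM(price * quantity)
orders = defaultdict(int)      # bucket -> COUNT(*)
buyers = defaultdict(set)      # bucket -> DISTINCT user_id
for user, price, qty, ts in purchases:
    b = minute_bucket(ts)
    revenue[b] += price * qty
    orders[b] += 1
    buyers[b].add(user)

# revenue["2025-06-15 10:03:00"] is 59.98: two Wireless Mouse units.
```

The eight buckets and their totals match the query results shown above.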

Panel 2: Top Products by Revenue

This view ranks products by total revenue, showing both revenue and unit volume.

CREATE MATERIALIZED VIEW mv_top_products AS
SELECT
    product_name,
    category,
    SUM(price * quantity) AS total_revenue,
    SUM(quantity) AS units_sold,
    COUNT(DISTINCT user_id) AS unique_buyers
FROM ecommerce_events
WHERE event_type = 'purchase'
GROUP BY product_name, category;

Query results:

    product_name     |  category   | total_revenue | units_sold | unique_buyers
---------------------+-------------+---------------+------------+--------------
 Mechanical Keyboard | Electronics |        179.98 |          2 |            2
 Running Shoes       | Apparel     |        119.99 |          1 |            1
 Wireless Mouse      | Electronics |         89.97 |          3 |            2
 Desk Lamp           | Home Office |         49.99 |          1 |            1
 Yoga Mat            | Fitness     |         34.99 |          1 |            1
 Water Bottle        | Fitness     |         29.98 |          2 |            1
(6 rows)

Notice that the Wireless Mouse sold 3 units but generated less revenue than the Mechanical Keyboard's 2 units. This kind of insight (high volume vs. high value) is exactly what product managers need on their dashboards.

Panel 3: Sales by Region

This view breaks down orders and revenue by geographic region, useful for a bar chart or map visualization.

CREATE MATERIALIZED VIEW mv_sales_by_region AS
SELECT
    region,
    COUNT(*) AS total_orders,
    SUM(price * quantity) AS total_revenue,
    ROUND(AVG(price * quantity), 2) AS avg_order_value
FROM ecommerce_events
WHERE event_type = 'purchase'
GROUP BY region;

Query results:

  region  | total_orders | total_revenue | avg_order_value
----------+--------------+---------------+----------------
 us-east  |            3 |        184.96 |          61.65
 eu-west  |            2 |        169.98 |          84.99
 us-west  |            2 |        119.98 |          59.99
 ap-south |            1 |         29.98 |          29.98
(4 rows)

The avg_order_value column reveals that eu-west has the highest average order value at $84.99, even though us-east leads in total revenue. This is the kind of regional insight that can drive marketing spend decisions.

Panel 4: Conversion Funnel

This view tracks how users move through the purchase funnel: page view, add to cart, purchase.

CREATE MATERIALIZED VIEW mv_conversion_funnel AS
SELECT
    COUNT(DISTINCT CASE WHEN event_type = 'page_view' THEN user_id END)    AS viewers,
    COUNT(DISTINCT CASE WHEN event_type = 'add_to_cart' THEN user_id END)  AS added_to_cart,
    COUNT(DISTINCT CASE WHEN event_type = 'purchase' THEN user_id END)     AS purchasers
FROM ecommerce_events;

Query with calculated conversion rates:

SELECT
    viewers,
    added_to_cart,
    purchasers,
    ROUND(100.0 * added_to_cart / GREATEST(viewers, 1), 1) AS view_to_cart_pct,
    ROUND(100.0 * purchasers / GREATEST(added_to_cart, 1), 1) AS cart_to_purchase_pct
FROM mv_conversion_funnel;

Results:

 viewers | added_to_cart | purchasers | view_to_cart_pct | cart_to_purchase_pct
---------+---------------+------------+------------------+----------------------
      14 |            10 |          8 |             71.4 |                 80.0
(1 row)

Out of 14 unique viewers, 10 added something to their cart (71.4% conversion), and 8 completed a purchase (80% cart-to-purchase conversion). In production, you would typically see much lower rates: 2-5% for view-to-cart and 30-50% for cart-to-purchase. The GREATEST function prevents division by zero when the dashboard first loads before any events arrive.
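The distinct-user counting and the GREATEST guard translate directly into a few lines of Python. This sketch models the funnel logic on the sample data's user sets (u101-u114 viewed, u101-u110 added to cart, u101-u108 purchased):

```python
def funnel(events):
    """Distinct-user funnel, mirroring the COUNT(DISTINCT CASE ...) view."""
    stages = {"page_view": set(), "add_to_cart": set(), "purchase": set()}
    for event_type, user_id in events:
        stages[event_type].add(user_id)
    viewers = len(stages["page_view"])
    carted = len(stages["add_to_cart"])
    bought = len(stages["purchase"])
    # max(x, 1) plays the role of SQL's GREATEST(x, 1): no division by zero.
    return {
        "viewers": viewers,
        "added_to_cart": carted,
        "purchasers": bought,
        "view_to_cart_pct": round(100.0 * carted / max(viewers, 1), 1),
        "cart_to_purchase_pct": round(100.0 * bought / max(carted, 1), 1),
    }

events = (
    [("page_view", f"u{100 + i}") for i in range(1, 15)]
    + [("add_to_cart", f"u{100 + i}") for i in range(1, 11)]
    + [("purchase", f"u{100 + i}") for i in range(1, 9)]
)
result = funnel(events)  # 14 viewers, 10 carts, 8 purchases
```

The computed percentages (71.4 and 80.0) match the query results above.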

Step 3: Connect Grafana to RisingWave

RisingWave speaks the PostgreSQL wire protocol, so Grafana connects to it using the built-in PostgreSQL data source. No custom plugin installation is needed.

Configure the Data Source

In Grafana, navigate to Connections > Data Sources > Add data source > PostgreSQL and enter:

 Setting      | Value
--------------+----------------
 Host         | localhost:4566
 Database     | dev
 User         | root
 Password     | (leave blank)
 TLS/SSL Mode | disable
 Version      | 13

Click Save & Test. Grafana should report "Database Connection OK."

For production deployments using RisingWave Cloud, use the connection string from your cluster dashboard, which includes TLS encryption and proper authentication.
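If you manage Grafana with provisioning files instead of the UI, the same data source can be declared in YAML. This is an illustrative sketch; the file path is arbitrary, and the exact placement of fields such as `database` under `jsonData` varies across Grafana versions, so check your version's provisioning docs.

```yaml
# provisioning/datasources/risingwave.yaml (illustrative path)
apiVersion: 1
datasources:
  - name: RisingWave
    type: postgres
    url: localhost:4566
    user: root
    jsonData:
      database: dev
      sslmode: disable
      postgresVersion: 1300   # numeric encoding of the "13" dropdown value
```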

Build the Time-Series Panel

Create a new panel, select the PostgreSQL data source, and switch to the SQL editor. Enter:

SELECT
    minute_window AS time,
    total_revenue AS "Revenue",
    order_count AS "Orders"
FROM mv_revenue_per_minute
ORDER BY time;

Set the visualization to Time series. Grafana maps the time column to the x-axis and plots Revenue and Orders as separate series. Set the panel refresh interval to 5 or 10 seconds.

This query is cheap because it reads pre-computed results from the materialized view. RisingWave does not re-scan the raw events table on each Grafana refresh. The query latency is typically under 10 milliseconds, regardless of how many raw events exist upstream.

Build the Regional Bar Chart

Add a second panel with this query:

SELECT
    region AS "Region",
    total_revenue AS "Revenue",
    total_orders AS "Orders"
FROM mv_sales_by_region
ORDER BY total_revenue DESC;

Set the visualization to Bar chart. Use Region as the category axis and Revenue as the value axis. This panel shows at a glance which regions drive the most revenue.

Build the Funnel Stat Panel

For the conversion funnel, use a Stat or Bar gauge visualization:

SELECT
    viewers AS "Page Views",
    added_to_cart AS "Add to Cart",
    purchasers AS "Purchases"
FROM mv_conversion_funnel;

Alternatively, display conversion rates as percentages using the calculated query from Step 2.

Step 4: Set Up Alerts

Grafana's alerting system can monitor your materialized views and notify you when business metrics cross thresholds. Since the views update incrementally, alerts trigger within seconds of the underlying change.

Revenue Drop Alert

Create an alert rule that fires when per-minute revenue drops below a threshold:

SELECT
    minute_window,
    total_revenue,
    order_count
FROM mv_revenue_per_minute
WHERE total_revenue < 50
ORDER BY minute_window;

Results showing minutes with revenue below $50:

       minute_window       | total_revenue | order_count
---------------------------+---------------+------------
 2025-06-15 10:07:00+00:00 |         34.99 |           1
 2025-06-15 10:09:00+00:00 |         29.98 |           1
 2025-06-15 10:10:00+00:00 |         29.99 |           1
 2025-06-15 10:11:00+00:00 |         49.99 |           1
(4 rows)

In Grafana, configure this as an alert rule under Alerting > Alert rules > New alert rule. Set the condition to fire when the query returns rows, and configure a notification channel (Slack, PagerDuty, email) to receive the alert.

Conversion Rate Alert

Monitor when your cart-to-purchase rate drops, which might indicate checkout issues:

SELECT
    purchasers,
    added_to_cart,
    ROUND(100.0 * purchasers / GREATEST(added_to_cart, 1), 1) AS conversion_rate
FROM mv_conversion_funnel
WHERE (100.0 * purchasers / GREATEST(added_to_cart, 1)) < 60;

This alert fires when fewer than 60% of users who add items to their cart complete a purchase.

Connecting Kafka in Production

The tutorial above uses a standard table for simplicity. In production, you connect RisingWave directly to Kafka using the Kafka connector:

CREATE TABLE ecommerce_events (
    event_id VARCHAR,
    event_type VARCHAR,
    user_id VARCHAR,
    product_id VARCHAR,
    product_name VARCHAR,
    category VARCHAR,
    price NUMERIC,
    quantity INT,
    region VARCHAR,
    event_time TIMESTAMPTZ
) WITH (
    connector = 'kafka',
    topic = 'ecommerce.events',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'latest'
) FORMAT PLAIN ENCODE JSON;

The materialized views defined in Step 2 work exactly the same way. RisingWave continuously reads from the Kafka topic and updates the views incrementally. The only change is the WITH clause on the source table.

For Avro or Protobuf-encoded topics, RisingWave integrates with Confluent Schema Registry to handle schema evolution automatically.
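With the connector in place, any producer that writes JSON matching the table's columns will feed the views. The sketch below shows the payload shape; the commented send call assumes a Kafka client such as confluent-kafka and a running broker, neither of which is part of this tutorial.

```python
import json

def encode_event(event: dict) -> bytes:
    """Serialize one event for the ecommerce.events topic (FORMAT PLAIN ENCODE JSON).

    Keys must match the CREATE TABLE column names; missing keys arrive as NULL.
    """
    return json.dumps(event).encode("utf-8")

payload = encode_event({
    "event_id": "e033",
    "event_type": "purchase",
    "user_id": "u115",
    "product_id": "p001",
    "product_name": "Wireless Mouse",
    "category": "Electronics",
    "price": 29.99,
    "quantity": 1,
    "region": "us-east",
    "event_time": "2025-06-15 10:17:00+00",
})

# With a real broker you would then produce it, e.g. (confluent-kafka, illustrative):
# producer.produce("ecommerce.events", value=payload)
```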

How Incremental Maintenance Works

Understanding why this architecture is fast requires a look at how RisingWave handles materialized view maintenance.

When a new purchase event arrives in the ecommerce_events table, RisingWave does not recompute the entire mv_revenue_per_minute view. Instead, it:

  1. Routes the event to the correct minute bucket
  2. Updates only that bucket's aggregation (adds the revenue, increments the count)
  3. Stores the updated result in the view's internal state

This is called incremental view maintenance (IVM). The cost of each update is proportional to the change, not to the total data size. A view over a billion events updates just as fast as a view over a thousand events, because each new event only touches one aggregation bucket.

This is fundamentally different from how traditional databases handle materialized views. PostgreSQL's REFRESH MATERIALIZED VIEW re-executes the entire query from scratch. For a view aggregating millions of rows, that refresh can take minutes. RisingWave's incremental approach takes milliseconds.
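The three steps above can be sketched as a toy model: state is a map from minute bucket to running aggregates, and each event touches exactly one entry. This illustrates the cost model only; it is not how RisingWave is implemented internally.

```python
class RevenueView:
    """Toy model of incremental maintenance for mv_revenue_per_minute."""

    def __init__(self):
        self.state = {}  # minute bucket -> [total_revenue, order_count]

    def apply(self, event):
        """O(1) per event: only the affected bucket is read and written."""
        if event["event_type"] != "purchase":
            return  # filtered by the view's WHERE clause; no state change
        bucket = event["event_time"][:16] + ":00"  # date_trunc('minute', ...)
        total, count = self.state.get(bucket, [0.0, 0])
        self.state[bucket] = [total + event["price"] * event["quantity"], count + 1]

view = RevenueView()
view.apply({"event_type": "purchase", "event_time": "2025-06-15 10:03:30",
            "price": 29.99, "quantity": 2})
view.apply({"event_type": "page_view", "event_time": "2025-06-15 10:04:00",
            "price": 29.99, "quantity": None})
```

After both calls, the state holds a single bucket with revenue 59.98 and one order; the page view left it untouched. The work per event stays constant no matter how many buckets already exist.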

Production Considerations

Scaling Dashboard Queries

RisingWave can handle hundreds of concurrent Grafana queries against materialized views with low latency. The queries read pre-computed results, so they are effectively key-value lookups rather than full table scans.

For dashboards accessed by large teams, set Grafana's refresh interval to 5-10 seconds rather than 1 second. This reduces query load without meaningfully affecting the user experience.

Monitoring Your Materialized Views

RisingWave exposes system catalogs that let you inspect your streaming pipeline. Query rw_catalog.rw_materialized_views to list all views and their definitions:

SELECT
    name,
    id,
    definition
FROM rw_catalog.rw_materialized_views
ORDER BY name;

This catalog is useful for auditing which views exist and verifying their SQL definitions match your expectations.

Managing View Lifecycle

As your dashboard requirements evolve, you can modify views without downtime:

  • Add a new panel: Create a new materialized view. RisingWave backfills it from existing data.
  • Modify a panel: Drop and recreate the view with updated SQL. RisingWave rebuilds the state automatically.
  • Remove a panel: DROP MATERIALIZED VIEW reclaims the resources.

RisingWave tracks dependencies between views. If view B depends on view A, a plain DROP of A is rejected with an error; adding CASCADE drops both A and B.

FAQ

What is the latency from a Kafka event to a Grafana dashboard update?

End-to-end latency is typically under 2 seconds. RisingWave processes events and updates materialized views within milliseconds. The dominant factor is Grafana's refresh interval, which you configure per-panel (minimum 1 second, recommended 5-10 seconds for most use cases).

Can I use RisingWave with Grafana without Kafka?

Yes. RisingWave supports multiple data sources including PostgreSQL CDC, MySQL CDC, Amazon Kinesis, Apache Pulsar, and direct inserts via SQL. The Grafana integration works the same regardless of the upstream source.

How does this compare to a Flink-based pipeline?

The main advantage is operational simplicity. With Flink, you manage a JVM-based stream processor, configure checkpointing to S3 or HDFS, write a separate sink connector to PostgreSQL, and monitor both systems independently. RisingWave combines the stream processor and serving layer into one system with a PostgreSQL-compatible interface. You write standard SQL instead of Java or Python Flink jobs.

How many materialized views can RisingWave maintain simultaneously?

There is no fixed limit. Production deployments commonly run hundreds of materialized views. The resource consumption depends on the complexity of each view (number of joins, aggregation cardinality, state size) rather than the raw count.

Conclusion

Building real-time dashboards does not require a complex pipeline of stream processors, intermediate databases, and custom connectors. With RisingWave and Grafana, the architecture reduces to three components:

  • Kafka captures events
  • RisingWave maintains incrementally-updated materialized views using standard SQL
  • Grafana queries those views over a PostgreSQL connection, with sub-second response times

The key insight is that materialized views serve double duty: they are both the stream processing logic and the serving layer. There is no ETL step, no separate results database, and no custom code to keep them in sync.

All SQL in this tutorial was verified against RisingWave 2.8.0.


Ready to try this yourself? Get started with RisingWave in 5 minutes using the quickstart guide.

Join our Slack community to ask questions and connect with other stream processing developers.
