CREATE STREAM dwd_order_detail(id, ts, prod_id, prod_name, prod_detail)
AS (
    SELECT
        ods_order.id,
        ods_order.ts,
        ods_order.prod_id,
        dim_prod.prod_name,
        dim_prod.prod_detail
    FROM STREAMING ALL ods_order
    INNER JOIN dim_prod
        ON dim_prod.id = ods_order.prod_id
) PRIMARY KEY (id);
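Because the result is exposed as a table, it can be read with ordinary SQL. A minimal sketch, assuming PostgreSQL-compatible syntax (the one-hour time filter is illustrative, not part of the original example):

-- Query the stream table directly; results reflect the latest
-- state of ods_order joined with dim_prod.
SELECT id, ts, prod_name
FROM dwd_order_detail
WHERE ts >= now() - interval '1 hour'
ORDER BY ts DESC;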
This stream table acts like a regular table that supports queries, while also reflecting real-time updates whenever the source data changes (INSERT/UPDATE/DELETE).

2. Unified Batch-Stream Ingestion

Domino standardizes ingestion for both batch and streaming data through SQL. Developers no longer need to learn different interfaces or tools for ingesting real-time versus historical data. This not only reduces the redundancy caused by inconsistent formats and APIs, but also lowers the entry barrier for developers.

With tables as the core abstraction, modifying or deleting data is straightforward, even in streaming scenarios, as the sketch below shows.
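A minimal sketch of this, reusing the ods_order source table from the stream-table example above (the literal values are hypothetical):

-- Batch loads and live changes go through the same SQL interface.
INSERT INTO ods_order (id, ts, prod_id) VALUES (1001, now(), 7);

-- Corrections and retractions are plain DML; dependent stream
-- tables such as dwd_order_detail pick the changes up automatically.
UPDATE ods_order SET prod_id = 8 WHERE id = 1001;
DELETE FROM ods_order WHERE id = 1001;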
3. Unified Execution Model

Both batch and stream tasks run on the same pipeline-based execution engine, which follows the Volcano model, a time-tested standard in database systems. When a user defines a stream table, Domino automatically generates the corresponding execution plans. These plans are triggered whenever the source data changes, keeping the stream table's results up to date.

This unified execution model lets developers work entirely in SQL, with no dual pipelines or complex event logic to manage.

4. Unified Storage Model

Domino stores both batch data (tables) and stream data (stream tables) in the same engine, ensuring durability and ACID consistency. With this model, there is no need to worry about the memory limits or small window sizes common in traditional streaming systems.

In Domino there are no fixed windows or watermarks, just continuous data stored reliably and ready to be queried or computed on demand. This reduces overhead and improves performance by removing the friction between the compute and storage layers.

5. SQL as the Streaming Programming Language

Domino extends standard SQL with stream processing capabilities. Instead of learning Java, Scala, or Python, developers can express complex real-time and historical transformations in SQL alone.

For instance:

SELECT * FROM order_stream WHERE subsidy_amount > 50;

This single query supports both real-time fraud detection and offline analytics. Domino lets developers do more with less: write the logic once in SQL and deploy it in both batch and stream contexts.

6. Rich Ecosystem: Unified Data Access, Regardless of Source

Domino integrates with a wide range of upstream and downstream systems, including FineDataLink, DSG, DataPipeline, Seatunnel, BluePipe, UFIDA IUAP, EMQ, Talend, and more.

Whether data comes from IoT devices, enterprise systems, log collectors, or external APIs, Domino acts as a universal connector, standardizing and unifying ingestion.

7. Eliminating the Need for Windows and Watermarks

Traditional streaming engines rely heavily on windows and watermarks to handle unbounded data, and these mechanisms often complicate both logic and debugging.

Domino eliminates these constructs by combining unified storage and execution with built-in transactional guarantees. Developers no longer have to reason about window boundaries or late-arriving data; Domino handles consistency and correctness automatically.
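As an illustration, a continuously maintained aggregate can itself be written as a stream table. A minimal sketch, assuming the CREATE STREAM syntax shown earlier also accepts GROUP BY (the dws_prod_sales name and the amount column are hypothetical):

-- Always-current per-product totals over the full history:
-- no window definition, no watermark, no late-data handling.
-- 'amount' is a hypothetical column on ods_order, for illustration.
CREATE STREAM dws_prod_sales(prod_id, order_cnt, total_amount)
AS (
    SELECT
        prod_id,
        count(*),
        sum(amount)
    FROM STREAMING ALL ods_order
    GROUP BY prod_id
) PRIMARY KEY (prod_id);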
Conclusion: A True Kernel-Level Breakthrough

Domino represents a generational leap in data processing architecture. With unified ingestion, storage, and computation behind a SQL-centric interface, it breaks down the silos that have long divided batch and stream systems.

Developers fluent in SQL can now master real-time analytics without learning new languages or managing complex toolchains. For organizations, this means lower complexity, reduced costs, and greater agility.

Domino is more than a product; it is the beginning of the “unified everything” era.