
Adaptive pipeline architecture with shared memory and selective ordering for high-performance stream data processing

Abstract

Antonov M.O., Temkin I.O., Deryabin S.A.

Article received: 05.02.2025

This paper presents an adaptive pipeline architecture designed to increase throughput and reduce latency in real-time stream data processing on single- and multi-processor systems. Unlike predominantly conceptual models or narrowly focused algorithms, the architecture demonstrates measurable practical gains by reducing redundant data copying and synchronization costs and by providing flexible control over the ordering of input and output data. It employs shared memory to eliminate buffer duplication, uses data transfer channels that adapt to the need for order preservation, and supports replication of processes within or across CPU cores. Experimental results indicate that the proposed architecture delivers both high throughput and low latency while introducing minimal overhead for data transmission and process synchronization. By offering a flexible and scalable foundation, the architecture can serve a wide range of real-time applications, from video surveillance and robotics to distributed platforms for processing large data sets, adapting to varying computational demands while maintaining efficiency and reliability in high-performance environments.

Keywords: parallelism, multiprocessor computing, computational pipeline, performance scaling, queues, shared memory
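The two mechanisms highlighted in the abstract, passing indices into a shared buffer instead of copying payloads and restoring input order only when the consumer requires it, can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the class and function names, the thread-based worker pool, and the `.upper()` stand-in workload are all hypothetical.

```python
import threading
import queue

class SharedSlab:
    """Preallocated shared storage; stages exchange slot indices, never the bytes."""
    def __init__(self, nslots: int, slot_size: int):
        self._buf = bytearray(nslots * slot_size)
        self._slot_size = slot_size

    def view(self, slot: int) -> memoryview:
        off = slot * self._slot_size
        return memoryview(self._buf)[off:off + self._slot_size]

def run_pipeline(items: list, nworkers: int, preserve_order: bool) -> list:
    """Two-stage pipeline: producer writes each item once into shared memory,
    workers process slots concurrently, and ordering is restored on demand."""
    slab = SharedSlab(len(items), max(len(i) for i in items))
    tasks: queue.Queue = queue.Queue()
    done: queue.Queue = queue.Queue()

    # Producer: write each payload into its slot, enqueue only (seq, size).
    for seq, item in enumerate(items):
        slab.view(seq)[:len(item)] = item
        tasks.put((seq, len(item)))
    for _ in range(nworkers):
        tasks.put(None)  # one poison pill per worker

    def worker():
        while (job := tasks.get()) is not None:
            seq, size = job
            # Read directly from the shared slab; .upper() stands in for real work.
            data = bytes(slab.view(seq)[:size]).upper()
            done.put((seq, data))

    threads = [threading.Thread(target=worker) for _ in range(nworkers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    results = [done.get() for _ in items]
    if preserve_order:
        # Selective ordering: pay the sorting cost only when the consumer
        # actually needs results in input order.
        results.sort(key=lambda r: r[0])
    return [data for _, data in results]
```

The sketch mirrors the design choice the abstract describes: because only slot indices travel through the queues, adding workers scales the compute stage without multiplying buffer copies, and an order-insensitive consumer skips the reordering step entirely.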