The paper provides an overview of research on the integration of evolutionary game theory (EGT) and multi-agent reinforcement learning (MARL). The main problems of MARL and the corresponding advantages of EGT are analyzed. The analysis shows that incorporating EGT can effectively address the problems of instability, credit assignment, and partial observability in MARL, providing stable strategic convergence and a new path for group optimization. It is shown that the integration of EGT and MARL forms a promising theoretical and technical basis for a breakthrough in multi-agent control. At the same time, to merge the two directions more deeply, future work will have to optimize integration mechanisms, develop more robust algorithms, and strengthen applied research in complex heterogeneous systems.
Keywords: evolutionary game theory, multi-agent reinforcement learning, multi-agent control, instability, credit assignment, partial observability
The article is devoted to the application of systems analysis methodology to the study of application server architectures. The principles of the systems approach are considered in relation to software platforms. A classification of architectures is proposed based on the degree of distribution, component organization principles, request processing methods, and applied architectural patterns. A component-based analysis of a typical multi-tier architecture is carried out, identifying functional relationships and interaction mechanisms between elements. Evaluation criteria for application server architectures from the perspective of systems analysis are defined, including performance, scalability, availability, and modifiability. Methods of decomposition and architectural patterns are examined in the context of their systemic properties, providing a basis for a justified selection of design solutions depending on the requirements of specific tasks.
Keywords: system analysis, application server architecture, architecture classification, component analysis, architectural patterns, system decomposition, architecture evaluation criteria
In recent years, the safe operation of energy facilities has increasingly been ensured by probabilistic non-destructive testing systems. This article examines a method for predicting and estimating the number of missed defects by solving an inverse problem. A detailed analysis of indirect manifestations and prediction of an indirect parameter is conducted using the Keras deep learning library, which determines the quantitative characteristics of the facility under study. The results of the study demonstrate encouraging prediction accuracy with easily correctable signs of model overfitting.
Keywords: non-destructive testing, defects, defect detection probability distribution curves, synthetic data for deep learning, regression forecasting, Keras, structural and semantic features, non-linear dependencies
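The inverse-problem idea behind the abstract above can be illustrated with a toy calculation (the logistic probability-of-detection curve and all numeric values below are assumptions for illustration, not the article's Keras model): given counts of detected defects per size bin and a POD curve, the true count per bin is approximately detected / POD, so the missed count follows directly.

```python
# Illustrative sketch (not the article's model): estimating missed defects
# from detected counts via an assumed probability-of-detection (POD) curve.
import math

def pod(size_mm: float) -> float:
    """Hypothetical logistic POD curve: larger defects are easier to detect."""
    return 1.0 / (1.0 + math.exp(-3.0 * (size_mm - 1.5)))

def estimate_missed(detected_by_size: dict) -> float:
    """Invert the POD curve: true count ~ detected / POD; missed = true - detected."""
    missed = 0.0
    for size, n_det in detected_by_size.items():
        p = pod(size)
        missed += n_det / p - n_det
    return missed

# Hypothetical detected counts, keyed by defect size in mm:
detected = {0.5: 3, 1.0: 8, 2.0: 20, 4.0: 15}
print(round(estimate_missed(detected), 1))
```

Most of the estimated missed defects come from the small-size bins, where the POD is low; this is exactly the regime a learned regression model would be asked to correct.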
Introduction. Ensuring the quality of cardboard packaging is a critical challenge for modern warehouse logistics, as damaged packaging increases the risk of product loss and negatively affects customer satisfaction. With the rapid growth of e-commerce, there is a growing need for automated and reliable quality control solutions based on computer vision technologies.
Aim and objectives. The aim of this study is to develop an automated monitoring system capable of detecting and classifying defects in cardboard boxes in real time under warehouse conditions. The objectives include designing a defect detection model, integrating it into a web-based system, and evaluating its performance in practical scenarios.
Methods. The proposed solution is implemented as a web application that integrates a YOLOv8-based deep learning model trained on a custom dataset of defective and intact packaging images. The backend is built with Flask for real-time video stream processing, while Apache Superset is used to provide analytical dashboards for visualizing defect statistics.
Results. Experimental testing in storage, sorting, and delivery scenarios demonstrated high detection accuracy exceeding 95% and stable performance under varying lighting conditions and partial occlusions. The system successfully identified major defect types such as dents, tears, and deformations with minimal false positives.
Conclusion. The developed monitoring system proves to be an effective tool for improving packaging quality control in warehouse operations, reducing operational risks, and supporting data-driven management decisions in logistics environments.
Keywords: computer vision, defect detection, carton packaging, YOLOv8, deep learning, monitoring system, video analytics
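A minimal post-processing sketch of the detection stage described above (the tuple format, class-id mapping, and threshold are assumptions for illustration, not the authors' exact pipeline): raw model outputs are filtered by confidence and mapped to the defect types named in the study.

```python
# Illustrative post-processing sketch (assumed detection format):
# keep confident detections and map class ids to defect names.
CLASS_NAMES = {0: "dent", 1: "tear", 2: "deformation"}  # hypothetical id mapping

def filter_detections(raw, conf_threshold=0.5):
    """raw: list of (class_id, confidence, (x1, y1, x2, y2)) tuples."""
    kept = []
    for cls, conf, box in raw:
        if conf >= conf_threshold and cls in CLASS_NAMES:
            kept.append({"defect": CLASS_NAMES[cls], "conf": conf, "box": box})
    return kept

raw = [(0, 0.91, (10, 10, 50, 40)),
       (1, 0.32, (5, 5, 20, 20)),    # below threshold: dropped
       (2, 0.77, (60, 15, 90, 55))]
print(filter_detections(raw))
```

In a web deployment like the one described, a routine of this shape would sit between the model's per-frame inference and the dashboard that aggregates defect statistics.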
This paper presents the results of research into the synthesis of a mathematical model for the welding process of critical power engineering structures. To quantitatively assess the condition of welding components, an entropy approach is proposed, allowing for the study of objects taking into account the stochastic nature of the processes occurring within them. An information and analytical system is proposed as a means of obtaining information, enabling measurement procedures to be performed under real production conditions. Electrical signals of welding current and voltage are measured. In the first stage, the signals are converted into time series and subjected to entropy parameterization. Next, the condition is monitored using a vector entropy model. The model is constructed based on a procedure for comparing vector entropies characterizing the states of components from the previous and current welding processes.
Keywords: monitoring, information, entropy, welding production, modeling of complex systems
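The entropy-parameterization step can be sketched as follows (a minimal illustration assuming a histogram-based Shannon estimator and a Euclidean comparison of entropy vectors; the article's exact estimator and comparison procedure may differ):

```python
# Sketch: entropy parameterization of a time series and comparison of
# entropy vectors from the previous and current welding processes.
import math

def shannon_entropy(series, bins=10):
    """Histogram-based Shannon entropy (bits) of a time series."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / bins or 1.0   # constant signal -> single bin
    counts = [0] * bins
    for x in series:
        idx = min(int((x - lo) / width), bins - 1)
        counts[idx] += 1
    n = len(series)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def state_distance(prev, curr):
    """Euclidean distance between entropy vectors of two welding passes."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(prev, curr)))

# Hypothetical entropy vectors: [H(current signal), H(voltage signal)]
vec_prev, vec_curr = [2.1, 1.8], [2.4, 1.7]
print(round(state_distance(vec_prev, vec_curr), 3))
```

A larger distance between consecutive entropy vectors would then flag a change in the stochastic state of the welding components.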
This work focuses on improving the processes for preparing families of methodological materials, including those used in higher education. Materials are generated using the LuaLaTeX typesetting system. The proposed solution is based on the integration of the Lua scripting language and the TeX typesetting language. This approach enables the implementation of the concept of a multi-variant document, in which assembly logic and content are managed at the software level. The practical significance of this work lies in increasing the efficiency of developing and modernizing families of methodological materials, achieved through automated document assembly from a single source.
Keywords: family of teaching materials, multi-variant document, automated layout, educational content, data consistency
This paper considers a modification of the (m, m) visual cryptography scheme using quasi-orthogonal matrices. The use of Mersenne matrices with two-level values {a, -b} is proposed. The scenario of partial key compromise is investigated, where a potential attacker knows the structure of the key matrix but lacks information about its specific level values {a, -b}. Numerical modeling of the restoration process for grayscale secret images using Mersenne matrices of a fixed order and structure with different sets of level parameters has been conducted. It is shown that even with extremely small deviations of the level values from the true ones, the restoration of a visually distinguishable image becomes impossible. The obtained results confirm that the use of Mersenne matrices expands the key space compared to the earlier (m, m) scheme using Hadamard matrices and provides an additional layer of protection in visual cryptography tasks.
Keywords: image with a secret, Hadamard matrices, Mersenne matrices, matrix multiplication
The article is devoted to the quantitative formalization of quality assessment criteria for use in an automated quality management system at the stages of design preparation of production (KPP). The article discusses the stages of a technical proposal, a draft design, a technical design, and the development of working design documentation. A unified evaluation procedure is proposed for five key groups of criteria for the quality of a machine-building product: the quality of technical solutions, reliability and operability, safety, adaptability and uniformity, and operational properties. Grouping the criteria allows for an end-to-end assessment across the KPP stages, enabling experts to conduct the assessment. For each group, a method has been formulated for obtaining a final indicator that provides a comparable numerical estimate suitable for further use in an automated quality management system at the KPP stages. The methodological basis of the study was an analysis of sources that makes it possible to identify the most characteristic problems and requirements for quality assessment at the KPP stages. The results obtained increase the degree of formalization of the quality assessment procedure at the early stages of the product life cycle, creating conditions for improving product quality through systematic and reproducible evaluation of innovative solutions. Procedures for normalizing scales and setting thresholds are provided.
Keywords: design preparation of production, quality assessment, formal criteria, terms of reference
This paper addresses the problem of identifying systems with mixed-type nonlinearities. An improved method of frequency identification using a system of correlators is presented, which allows the bilinear frequency response to be recorded. An approach is proposed that provides more accurate measurement of the frequency responses of Volterra kernels by correcting the system output. The efficiency of the proposed approach is demonstrated using the example of a nonlinear system that includes a deadband block. Based on known analytical values, the errors of the methods are calculated.
Keywords: Volterra series, system identification, nonlinear systems, piecewise nonlinearities, frequency responses, Riccati equation
The article describes an experiment on compiling a training dataset and on training and testing a neural network model of a computer vision system for detecting burnout of a tundish nozzle at a continuous steel casting plant. The validity of data augmentation for training is considered. The obtained results are analyzed.
Keywords: computer vision, object detection, dataset, augmentation, steelmaking, continuous steel casting, burnout of a tundish nozzle
This article addresses the challenge of building Android applications within secure, network-isolated environments where no direct internet connection is available. The primary objective is to develop a reliable method for the continuous integration and delivery (CI/CD) of Android artifacts under these constraints. The proposed solution methodologically integrates Docker containerization to ensure a standardized build environment with the Nexus Repository Manager for creating a comprehensive local mirror of all external dependencies, such as those from Google Maven. This local repository cache is then made accessible inside the isolated network via a configured nginx proxy server. The implemented system successfully enables a complete and automated Android build pipeline, entirely eliminating the need for external access during compilation. The results demonstrate significant enhancements in security by mitigating risks associated with public repositories, while also ensuring build stability, reproducibility, and protection against upstream outages. In conclusion, this approach provides a practical and robust framework for secure mobile application development in high-security or restricted corporate network infrastructures.
Keywords: docker, containerization, android, flutter, ci/cd, nginx, proxying, network isolation, application building
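The proxying step described above can be sketched as a minimal nginx server block (all hostnames, ports, certificate paths, and the repository name are placeholders, not the authors' actual configuration): requests from build agents inside the isolated network are forwarded to the local Nexus mirror.

```nginx
# Hypothetical minimal configuration: expose a Nexus proxy repository
# (e.g. a Google Maven mirror) inside the isolated network.
server {
    listen 443 ssl;
    server_name maven.internal.example;        # placeholder internal hostname

    ssl_certificate     /etc/nginx/certs/internal.crt;
    ssl_certificate_key /etc/nginx/certs/internal.key;

    location / {
        proxy_pass http://nexus.internal.example:8081/repository/google-maven/;
        proxy_set_header Host $host;
    }
}
```

Build containers then point their Gradle/Maven repository URLs at this internal hostname, so compilation never requires external access.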
The design of automated control systems for the physical protection of facilities is one of the most sought-after areas in the development of domestic software products. The article presents the architecture of a hardware-software system, an assessment of the development tools required to implement a web application based on the Astra Linux operating system, and a description of an experiment to create a system prototype. The following tools were used to build the system: the Angular framework for the client layer; the FastAPI framework, the SQLAlchemy library, and the WebSocket protocol for the server layer; and the PostgreSQL object-relational database management system for data storage. The result of the work is a control system for technical security equipment that demonstrates interaction with devices and the database. The implemented prototype will serve as a basis for developing a hardware-software complex for the physical protection of a facility.
Keywords: domestic operating system, web application, development tools, management system, database, sensor, monitoring
The article discusses the problem of feature selection when training machine learning (ML) models for the task of identifying fake (phishing) websites. As a solution, a set of key metrics is proposed: efficiency, reliability, fault tolerance, and retrieval speed. Efficiency measures a feature's impact on prediction accuracy. Reliability measures how well a feature distinguishes phishing sites from legitimate ones. Fault tolerance measures the empirical probability that a feature is valid and populated. Retrieval speed is the logarithm of the feature extraction time. This approach allows features to be ranked into categories and subsequently selected for training machine learning models, depending on the specific domain and constraints. In this article, 82 features were measured, and 6 fully connected neural networks were trained to evaluate the effectiveness of the metrics. Experiments have shown that the proposed approach can increase model accuracy by 1-3%, precision by 0.03, and significantly reduce overall extraction time, thereby improving response rate.
Keywords: feature evaluation method, machine learning model, identification of phishing websites, metric, efficiency, reliability, fault tolerance, and retrieval speed
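One way to turn the four metrics above into a feature ranking is a weighted aggregate score; the weights, the aggregation by weighted sum, and all measured values below are assumptions for illustration, not the article's formulas.

```python
# Illustrative sketch: rank candidate features by a weighted combination of
# the four metrics (efficiency, reliability, fault tolerance, retrieval speed).
def feature_score(efficiency, reliability, fault_tolerance, retrieval_speed,
                  weights=(0.4, 0.3, 0.2, 0.1)):
    """Weighted sum of metric values normalized to [0, 1]; retrieval_speed is
    assumed pre-normalized so that faster extraction maps closer to 1."""
    metrics = (efficiency, reliability, fault_tolerance, retrieval_speed)
    return sum(w * m for w, m in zip(weights, metrics))

features = {                       # hypothetical measured values
    "url_length": (0.7, 0.6, 0.99, 0.95),
    "whois_age":  (0.9, 0.8, 0.60, 0.20),
    "has_https":  (0.3, 0.4, 0.99, 0.99),
}
ranked = sorted(features, key=lambda f: feature_score(*features[f]), reverse=True)
print(ranked)
```

A threshold on the score (or on individual metrics, e.g. a minimum fault tolerance) then splits the ranked list into the categories used for model training.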
This article proposes a hybrid method for speckle noise reduction in radar images based on a combination of the wavelet transform and the U-Net neural network (NN) architecture with enhancement of low-frequency components in high-frequency subbands. The wavelet transform decomposes the radar images into frequency subbands, allowing noise to be localized primarily in high-frequency components. These components are processed using a U-Net neural network, whose effectiveness stems from its symmetric structure and skip connections, which allow for the accurate preservation and restoration of important image details. Furthermore, enhancing the low-frequency component in high-frequency subbands to improve the signal-to-noise ratio allows the neural network to more accurately separate useful signal structures from the noise. The combined approach demonstrates high speckle noise reduction efficiency with minimal loss of structural information, outperforming traditional methods in terms of restoration quality and image clarity.
Keywords: speckle noise, noise reduction, wavelet transform, neural networks, U-Net, frequency subbands
The article presents a hybrid neural network for estimating vehicle mass and the longitudinal/lateral slopes of a road, combining a square-root sigma-point Kalman filter and a neural network model based on a transformer encoder with cross-attention over the estimation residuals. The proposed approach combines the physical interpretability of the filter with the high approximation capability of the neural network. To enable implementation on embedded electronic control units, the model was simplified by distilling its knowledge into a compact long short-term memory (LSTM) network. Experiments in various scenarios showed a reduction in the average error of more than 25% with a computational delay of less than 0.3 ms.
Keywords: vehicle state estimation, road slope estimation, vehicle mass estimation, transformer neural network, cross-attention, adaptive filtering, knowledge distillation, square-root sigma-point Kalman filter, intelligent vehicles, sensor fusion
The article is devoted to the method of formalizing indicators of compromise (IoC) using a Bayesian approach to classify and rank them based on probabilistic inference. The problem of detecting malicious indicators from a large volume of data found in various sources of threat information is critically important for assessing modern cybersecurity systems. Traditional heuristic approaches, based on simple aggregation or expert evaluation of IoCs, do not provide sufficient formalization and further ranking of their reliability regarding their association with a particular malicious campaign due to the incompleteness and uncertainty of the information received from various sources.
Keywords: indicators of compromise (IoC), Bayesian inference, cyber threats, probabilistic models, malicious activity analysis, threat intelligence, IoC classification, multi-source analysis
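The core of the Bayesian ranking idea can be sketched in a few lines (the prior and the likelihood-ratio values below are illustrative, not taken from the article): each threat-intelligence source contributes a likelihood ratio, and the posterior probability that an IoC belongs to a malicious campaign is updated sequentially.

```python
# Minimal Bayesian-update sketch: rank an IoC by the posterior probability
# of maliciousness given reports from several independent sources.
def posterior_malicious(prior, likelihood_ratios):
    """Bayes' rule in odds form: posterior_odds = prior_odds * product of
    likelihood ratios P(report | malicious) / P(report | benign)."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Two corroborating feeds (LR > 1) and one weak, nearly uninformative source:
p = posterior_malicious(prior=0.05, likelihood_ratios=[8.0, 5.0, 1.2])
print(round(p, 3))
```

Sorting candidate IoCs by this posterior gives the ranking the abstract describes, and naturally down-weights indicators supported only by weak or conflicting sources.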
The integration of artificial intelligence into mobile devices faces serious challenges, especially due to limited available resources and real-time data-processing requirements. The article discusses modern approaches to reducing computational costs and resource consumption in artificial intelligence systems for mobile objects, including model optimization and computation-allocation strategies for resource-constrained mobile platforms.
Keywords: artificial intelligence, moving objects, lightweight models, peripheral models, hardware acceleration, knowledge distillation, quantization
The article discusses modern approaches to forecasting and detecting forest fires using machine learning technologies and remote sensing data. Special attention is paid to the use of computer vision algorithms, such as convolutional neural networks and transformers, to detect and segment fires in images from unmanned aerial vehicles. The high efficiency of hybrid architectures and lightweight models for real-time operation is noted.
Keywords: forest fires, forecasting, unmanned aerial vehicles, deep learning, convolutional neural networks, transformers, image segmentation
A method of applying mathematical analysis and machine learning to organize predictive maintenance of an electric motor is considered using the example of the AIMU 112 MV6 U1 electric motor. A comprehensive technique for diagnosing the technical condition of an electric motor based on the analysis of vibration signals recorded by a three-axis accelerometer is proposed, which can be adapted to monitor the condition of various types of rotating equipment in industrial conditions.
Keywords: predictive maintenance, electric motor, vibration analysis, machine learning, neural networks, fault diagnosis, accelerometer, condition classification
The article presents an analysis of the current state of the monitoring process in spacecraft (SC) flight control. It outlines the state of monitoring technologies currently used in the flight control of modern spacecraft. The shortcomings of the monitoring process, which are exacerbated by the development of space technology, are identified. To address these shortcomings, the use of new intelligent methods is proposed, which, by increasing the automation of the spacecraft flight monitoring process, will enhance the reliability and efficiency of control. Promising methods for improving the reliability of spacecraft monitoring using artificial intelligence technologies, in particular artificial neural networks (ANN), are considered.
An analysis of scientific publications on the application of ANNs in space technology was conducted; examples of ANN application in flight control, diagnostics, and data processing tasks are provided. The advantages and limitations of using neural networks in space technology are examined.
Keywords: spacecraft, flight control, monitoring, state analysis, flight management, telemetry data
The article presents a systematic study of information flows in the "application-DBMS" link and proposes a comprehensive model of protection against SQL injections based on multi-level analysis. The system analysis considers the full cycle of query processing, which allows overcoming the fragmentation of existing approaches. The limitations of existing methods based on signature analysis, machine learning, and syntax validation are analyzed. To improve the reliability and accuracy of detection, a new combined method is proposed that integrates static syntax analysis of abstract syntax trees (AST) of queries with dynamic behavioral analysis of sessions. A key feature of the syntax module is the application of the Jaccard coefficient to assess the structural similarity of paths in the AST, which ensures the efficient detection of polymorphic injections. The behavioral module analyzes the temporal and statistical patterns of the query sequence, which makes it possible to detect time-based attacks and other anomalous session behavior.
Keywords: SQL injections, system analysis, machine learning, parsing, abstract syntax tree, behavioral analysis, Jaccard coefficient, polymorphic attacks, time-based attacks
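The Jaccard comparison at the heart of the syntax module can be sketched directly (the path sets below are hand-written toy examples standing in for the output of a real SQL parser): two queries whose ASTs yield the same set of root-to-leaf paths score 1.0, while an injection that adds branches to the WHERE subtree lowers the score.

```python
# Sketch of the syntax-module idea: Jaccard similarity between the sets of
# root-to-leaf paths extracted from two query ASTs.
def jaccard(a: set, b: set) -> float:
    """|A & B| / |A | B|; 1.0 means structurally identical path sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical path sets for a legitimate query and a polymorphic injection:
legit = {"SELECT/cols", "SELECT/FROM/table", "SELECT/WHERE/eq"}
attack = {"SELECT/cols", "SELECT/FROM/table", "SELECT/WHERE/eq",
          "SELECT/WHERE/or", "SELECT/WHERE/tautology"}
print(round(jaccard(legit, attack), 2))
```

Because the comparison is over structural paths rather than literal tokens, renaming or re-encoding the payload (the polymorphic case) does not restore the similarity score.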
This article describes a method for filtering speckle noise from synthetic aperture radar images using wavelet transforms and nonlocal means. The proposed method utilizes a spatial-frequency representation of the image and analyzes the similarity of local wavelet coefficient structures in subbands. Experimental results demonstrate that the developed method outperforms some known methods in terms of metrics such as mean square error, peak signal-to-noise ratio, and structural similarity index, as well as subjective visual assessment. The method effectively filters speckle while preserving fine details, contrasting edges, and correctly restoring background brightness without introducing noticeable artifacts.
Keywords: radar image, synthetic aperture radar, speckle noise, image filtering, wavelet transform, thresholding, non-local means
This work presents the Multi-Agent Coverage Controller (MACC)—a specialized deep reinforcement learning method designed to solve the coverage path planning problem in multi-agent systems. The method addresses key challenges inherent to coverage path planning, including sparse and noisy rewards, high gradient variance, the difficulty of credit assignment among agents, and the need to scale to a variable number of agents. MACC integrates a specific set of mechanisms: an adaptive clipping-interval width, advantage-modulation gating, a counterfactual baseline for the centralized critic, and a multi-head self-attention mechanism with a presence mask. Theoretical properties of the method are provided, demonstrating optimization stability and reduced variance of gradient estimates. A comprehensive ablation study is conducted, showing the contribution of each mechanism to agent coordination, spatial distribution of trajectories, and overall coverage speed. Experiments on a set of satellite maps indicate that MACC achieves substantial improvements in coverage completeness and speed compared to the baseline configuration, delivering the best results when all integrated mechanisms are used jointly.
Keywords: multi-agent system, coverage path planning, deep reinforcement learning, adaptive clipping-interval width, advantage-modulation gating, counterfactual baseline, multi-head self-attention mechanism, agent coordination
The paper presents a methodology for addressing the scarcity of labeled industrial data for training deep neural networks for semantic segmentation. A platform is proposed for synthetic generation of training point cloud datasets based on a minimal number of real laser-scanning samples of mechanical, electrical, and plumbing networks. The algorithm includes detecting the axes of cylindrical elements using the Random Sample Consensus method, constructing perpendicular joint planes, and applying affine transformations to create assemblies of 2–7 elements. The training set is increased from 8 real scans to more than 800 synthetic examples, which makes it possible to improve the segmentation accuracy of the PointNet++ deep hierarchical point cloud learning architecture from 72% to 89% in terms of the Intersection over Union (IoU) metric. The developed system enables automated creation of BIM models of engineering infrastructure with 90–95% accuracy with respect to design parameters.
Keywords: synthetic data generation, point clouds, semantic segmentation, laser scanning, Random Sample Consensus method, shortage of labeled data, BIM modeling, engineering networks, deep learning
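The RANSAC step of the pipeline above can be illustrated in simplified form (the article fits cylinder axes in 3D point clouds; this sketch fits a 2D line y = a*x + b instead, with hand-made data and two gross outliers, to show the sample-score-keep-best loop):

```python
# Simplified 2D illustration of the Random Sample Consensus step.
import random

def ransac_line(points, iters=200, tol=0.5, seed=0):
    """Repeatedly fit a line through 2 random points; keep the model with
    the most inliers (points within tol of the line)."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                      # vertical pair: skip this sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) <= tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Points on y = 2x + 1 plus two gross outliers:
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 30), (7, -20)]
(a, b), inl = ransac_line(pts)
print(round(a, 2), round(b, 2), len(inl))
```

The same consensus logic, with a cylinder model and a distance-to-axis residual instead of a line and a vertical offset, underlies the axis-detection stage described in the abstract.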
An algorithm for modeling smooth hysteresis nonlinearities is proposed, taking into account the slope coefficient k and the saturation level c. The developed model provides accuracy and ease of adjustment while maintaining intuitive physical parameters of the hysteresis loop, which makes it effective for practical application in the tasks of analysis and synthesis of nonlinear control systems.
Keywords: single-valued nonlinearities, hysteresis, automatic control systems, backlash with saturation, multi-valued nonlinearities, algorithm, modeling of automatic control systems, relay, static characteristic, approximation
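For reference, the classical (non-smoothed) backlash-with-saturation characteristic that a smooth hysteresis model approximates can be written as a play operator followed by clipping; the backlash half-width h below is an assumed extra parameter, alongside the slope k and saturation level c from the abstract.

```python
# Reference sketch of the piecewise backlash-with-saturation characteristic:
# a play (backlash) operator with slope k and half-width h, clipped to [-c, c].
def backlash_saturation(xs, k=1.0, h=0.2, c=0.8):
    """Iterate the play operator over input samples, saturating the output."""
    y, out = 0.0, []
    for x in xs:
        y = max(min(y, k * (x + h)), k * (x - h))  # play (backlash) update
        out.append(max(-c, min(c, y)))             # saturation at +/- c
    return out

# A rising-then-falling ramp traces the upper and lower loop branches:
xs = [i * 0.1 for i in range(-15, 16)] + [i * 0.1 for i in range(14, -16, -1)]
ys = backlash_saturation(xs)
print(min(ys), max(ys))
```

The same input value produces different outputs on the rising and falling branches, which is the loop behavior that a smooth model must reproduce while remaining differentiable for analysis and synthesis of control systems.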