This paper is devoted to the theoretical analysis and comparison of methods and algorithms for automatic identity verification based on the dynamic characteristics of a handwritten signature. The processes of collecting and preprocessing dynamic characteristics are considered. Classical methods, including hidden Markov models and support vector machines, are analyzed alongside modern neural network architectures, including recurrent, convolutional, and Siamese neural networks. The advantages of Siamese neural networks in verification tasks with limited training data are highlighted. Key metrics for assessing the quality of biometric systems are defined. The advantages and disadvantages of the considered methods are summarized, and promising directions for further research are outlined.
Keywords: verification, signature, machine learning, dynamic characteristic, hidden Markov models, support vector machine, neural network approach, recurrent neural networks, convolutional neural networks, Siamese neural networks, type I error
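To make the Siamese approach concrete, below is a minimal sketch of a twin network with a contrastive loss for pairs of dynamic signature feature vectors; the feature dimension, layer sizes, and margin are illustrative assumptions rather than a configuration from the surveyed work.

```python
# A minimal sketch of a Siamese verifier for dynamic signature features
# (feature dimension, layer sizes, and margin are illustrative assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SignatureEncoder(nn.Module):
    """Maps a fixed-length vector of dynamic features to an embedding."""
    def __init__(self, in_dim=64, emb_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, emb_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def contrastive_loss(z1, z2, same, margin=1.0):
    """same = 1 for genuine/genuine pairs, 0 for genuine/forgery pairs."""
    d = F.pairwise_distance(z1, z2)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

# Both branches share the same weights -- the defining trait of a Siamese network.
encoder = SignatureEncoder()
x1, x2 = torch.randn(8, 64), torch.randn(8, 64)
labels = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(encoder(x1), encoder(x2), labels)
loss.backward()
```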
The article discusses the principles of operation, key technologies, and prospects for the development of eye-tracking systems in virtual reality (VR) devices. It highlights the main components of such systems, including infrared cameras, computer vision algorithms, and calibration methods. Eye-tracking technologies such as Pupil Center Corneal Reflection (PCCR) are analyzed in detail, as is their integration with the rendering pipeline to implement foveated rendering, which significantly reduces the load on the GPU. Current issues, including latency and power consumption, are discussed, and solutions such as predictive algorithms and hardware acceleration are proposed. Special attention is paid to promising directions, including neurointerfaces and holographic systems. The article draws on the latest research and developments from leading companies such as Tobii, Qualcomm, and Facebook Reality Labs, and will interest VR device developers, researchers in human-computer interaction, and computer vision specialists.
Keywords: eye-tracking, virtual reality, foveated rendering, computer vision, human-computer interaction, PCCR
This article presents a methodology for assessing damage to railway infrastructure in emergency situations using imagery from unmanned aerial vehicles (UAVs). The study focuses on applying computer vision and machine learning techniques to process high-resolution aerial data for detecting, segmenting, and classifying structural damage.
Optimized image processing algorithms, including U-Net for segmentation and Canny edge detection, are used to automate analysis. A mathematical model based on linear programming is proposed to optimize the logistics of restoration efforts. Test results show reductions in total cost and delivery time by up to 25% when optimization is applied.
The paper also explores 3D modeling from UAV imagery using photogrammetry methods (Structure from Motion and Multi-View Stereo), enabling point cloud generation for further damage analysis. Additionally, machine learning models (Random Forest, XGBoost) are employed to predict flight parameters and resource needs under changing environmental and logistical constraints.
The combination of UAV-based imaging, algorithmic damage assessment, and predictive modeling allows for a faster and more accurate response to natural or man-made disasters affecting railway systems. The presented framework enhances decision-making and contributes to a more efficient and cost-effective restoration process.
Keywords: UAVs, image processing, LiDAR, 3D models of destroyed objects, emergencies, computer vision, convolutional neural networks, machine learning methods, infrastructure restoration, damage diagnostics, damage assessment
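As an illustration of the linear programming formulation mentioned above, the following sketch solves a toy restoration-logistics instance (shipping repair materials from depots to damaged sites at minimum cost) with scipy; all costs, supplies, and demands are invented for the example.

```python
# A toy instance of the linear-programming logistics model: ship repair
# materials from 2 depots to 3 damaged sites at minimum transport cost.
# All costs, supplies, and demands below are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 9.0],   # depot 0 -> sites 0..2
                 [5.0, 3.0, 7.0]])  # depot 1 -> sites 0..2
supply = [40, 35]                   # depot capacities
demand = [20, 30, 25]               # site requirements

c = cost.ravel()                    # decision vars: x[i, j], flattened
# Supply constraints: sum_j x[i, j] <= supply[i]
A_ub = np.kron(np.eye(2), np.ones(3))
# Demand constraints: sum_i x[i, j] == demand[j]
A_eq = np.tile(np.eye(3), 2)

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
              bounds=[(0, None)] * 6, method="highs")
print(res.x.reshape(2, 3), res.fun)  # optimal shipments and total cost
```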
The practice of producing optical interference coatings shows that, when new thin-film materials are used, obtaining optical products that meet specified quality-function requirements depends on the accuracy with which their refractive index is known. The results of its evaluation on high-frequency quartz crystals differ, which makes it impossible to produce narrowband filters with the required technical parameters. This article proposes an approach to estimating the refractive index of a thin film by solving the inverse synthesis problem, based on the experimental determination of the thickness of the deposited films using an X-ray fluorescence coating thickness analyzer and on reflection coefficient spectra obtained with a broadband spectrophotometer. Numerical modeling carried out during the study showed that even with a 5% tolerance on the coating thickness estimate, a fairly accurate determination of the refractive index can be expected. The correctness of the approach was verified using a thin film with a known refractive index, which was also determined by the proposed method of numerically modeling the reflection spectrum of a digital twin of the coating.
Keywords: interference coating, numerical modeling, reflection coefficient spectrum
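The forward model behind such an inverse-synthesis approach can be illustrated with the standard characteristic-matrix reflectance of a single film on a substrate; the indices, thickness, and grid-search fitting below are illustrative assumptions, not the authors' procedure.

```python
# A minimal sketch of the forward model behind the inverse-synthesis approach:
# reflectance of a single homogeneous film on a substrate (normal incidence)
# via the characteristic-matrix method. Film/substrate indices and thickness
# are illustrative assumptions.
import numpy as np

def reflectance(wavelength_nm, n_film, d_nm, n_sub=1.52, n_air=1.0):
    phi = 2 * np.pi * n_film * d_nm / wavelength_nm   # phase thickness
    m11, m22 = np.cos(phi), np.cos(phi)
    m12 = 1j * np.sin(phi) / n_film
    m21 = 1j * n_film * np.sin(phi)
    # Amplitude reflection coefficient of the air/film/substrate stack.
    num = n_air * (m11 + m12 * n_sub) - (m21 + m22 * n_sub)
    den = n_air * (m11 + m12 * n_sub) + (m21 + m22 * n_sub)
    return np.abs(num / den) ** 2

# With the thickness d known from X-ray fluorescence, n can be estimated by
# minimizing the misfit to the measured reflectance spectrum, e.g. on a grid:
wl = np.linspace(400, 800, 200)
measured = reflectance(wl, 2.05, 120.0)          # stand-in for measured data
grid = np.linspace(1.5, 2.5, 1001)
best_n = grid[np.argmin([np.sum((reflectance(wl, n, 120.0) - measured) ** 2)
                         for n in grid])]
print(best_n)   # ~2.05
```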
Modern approaches to synthetic speech recognition are in most cases based on the analysis of specific acoustic, spectral, or linguistic patterns left behind by speech synthesis algorithms. An analysis of open sources has shown that the further development of methods and algorithms for synthetic speech recognition is crucial for providing protection against emerging threats and maintaining trust in existing biometric systems.
This paper proposes an algorithm for synthetic speech detection based on the calculation of audio signal entropy. The relevance of the work is driven by the increasing number of cases involving the malicious use of synthetic speech, which is becoming almost indistinguishable from genuine human speech. Experiments were conducted on the CMU ARCTIC dataset using the XTTS v.2 model. The results demonstrated that the entropy of synthetic speech is significantly higher than that of genuine speech and that the algorithm is robust to data loss. The advantages of the algorithm are its interpretability and low computational complexity. The proposed algorithm enables a decision on the presence of synthetic speech without complex spectral analysis or machine learning methods.
Keywords: synthetic speech, spoofing, Shannon entropy, speech recognition
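A minimal sketch of the entropy computation at the heart of such an algorithm is shown below: frame-wise Shannon entropy over amplitude histograms. The frame length, bin count, and the histogram-based estimator itself are illustrative assumptions, since the paper's exact entropy definition is not reproduced here.

```python
# A minimal sketch of frame-wise Shannon entropy of an audio signal computed
# over amplitude histograms; frame length and bin count are illustrative
# assumptions, not the parameters used in the paper.
import numpy as np

def frame_entropy(signal, frame_len=1024, bins=64):
    entropies = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        hist, _ = np.histogram(frame, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]                      # drop empty bins: 0*log(0) = 0
        entropies.append(-np.sum(p * np.log2(p)))
    return np.array(entropies)

rng = np.random.default_rng(0)
x = rng.standard_normal(16000)            # stand-in for a 1 s speech signal
print(frame_entropy(x).mean())
```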
An algorithm has been developed and a program written in the Python programming language for calculating the numerical values of the optimal delayed filtering operator for an L-Markov process with a quasi-rational spectral density, which generalizes the Markov process with a rational spectrum. The construction of the optimal delayed filtering operator is based on the spectral theory of random processes. The computational formula for the filtering operator was derived using the theory of L-Markov processes, methods for computing stochastic integrals, the theory of functions of a complex variable, and trigonometric regression methods. An example of an L-Markov process (signal) with a quasi-rational spectrum, interesting from the standpoint of controlling complex stochastic systems, is considered. A trigonometric model was taken as the basis for constructing the mathematical model of the optimal delayed filtering operator. It is shown that the values of the delayed filtering operator are represented as a linear combination of the values of the received signal at certain moments of time and the values of sine and cosine functions at the same moments. It is established that the numerical values of the filtering operator depend substantially on the parameter β of the joint spectral density of the received and transmitted signals; accordingly, three different problems of signal propagation through different physical media were considered. It is established that the absolute value of the real part of the filtering operator exceeds the absolute value of the imaginary part, on average by a factor of two or more, over all three delay intervals and in all three media. Plots of the real and imaginary parts of the filtering operator versus the delay τ were constructed, as well as three-dimensional plots of the delayed filtering operator itself as a function of the delay. A physical interpretation of the results is given.
Keywords: random process, L-Markov process, noise, delayed filtering, spectral characteristic, filtering operator, trigonometric trend, standardized approximation error
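The trigonometric regression step underlying the filtering operator can be sketched as an ordinary least-squares fit over a sine/cosine basis; the signal and the number of harmonics below are illustrative.

```python
# A minimal sketch of the trigonometric regression underlying the filtering
# operator: fit a signal by a linear combination of sines and cosines via
# least squares. The number of harmonics here is illustrative.
import numpy as np

def trig_design_matrix(t, n_harmonics, period):
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        w = 2 * np.pi * k / period
        cols += [np.cos(w * t), np.sin(w * t)]
    return np.column_stack(cols)

t = np.linspace(0, 10, 200)
signal = np.sin(2 * np.pi * t / 10) + 0.5 * np.cos(6 * np.pi * t / 10)
X = trig_design_matrix(t, n_harmonics=5, period=10.0)
coef, *_ = np.linalg.lstsq(X, signal, rcond=None)
fitted = X @ coef        # linear combination of sinusoids, as in the paper
print(np.max(np.abs(fitted - signal)))   # small residual
```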
A mathematical model has been constructed, an algorithm has been developed, and a program has been written in the Python programming language for calculating the numerical values of the optimal filtering operator with a forecast for an L-Markov process with a quasi-rational spectrum. The probabilistic model of the filtering operator formula has been obtained from the spectral analysis of L-Markov processes using methods for calculating stochastic integrals, the theory of analytic functions of a complex variable, and methods of correlation and regression analysis. An example of an L-Markov process is considered for which the values of the optimal filtering operator with a forecast could be expressed as a linear combination of the values of the process at certain moments of time and a sum of numerical values of cosines and sines at the same moments. The numerical values of the filtering operator were obtained on the basis of a trigonometric regression model with 16 harmonics, which best approximates the process under study and has the minimum standardized approximation error.
Keywords: random process, L-Markov process, prediction filtering, spectral characteristics, filtering operator
In the modern world, where technology is developing at an incredible rate, computers have gained the ability to "see" and perceive the world around them much as a human does. This has led to a revolution in visual data analysis and processing. One of the key achievements has been the use of computer vision to search for objects in photographs and videos. Thanks to these technologies, it is possible not only to find objects such as people, cars, or animals, but also to indicate their position precisely using bounding boxes or segmentation masks. This article examines in detail modern deep neural network models used to detect humans in images and videos captured from high altitude and long distance against a complex background. The architectures of the Faster Region-based Convolutional Neural Network (Faster R-CNN), Mask Region-based Convolutional Neural Network (Mask R-CNN), Single Shot Detector (SSD), and You Only Look Once (YOLO) are analyzed, and their accuracy, speed, and ability to detect objects effectively against a heterogeneous background are compared. Special attention is paid to the behavior of each model in specific practical situations where both high-quality detection of target objects and image processing speed are important.
Keywords: machine learning, artificial intelligence, deep learning, convolutional neural networks, human detection, computer vision, object detection, image processing
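For a concrete starting point, the sketch below runs one of the compared detector families, a pretrained Faster R-CNN from torchvision, and keeps only "person" detections (COCO class id 1); the input file name and confidence threshold are illustrative assumptions.

```python
# A minimal sketch of person detection with a pretrained Faster R-CNN from
# torchvision (COCO class id 1 corresponds to "person"); the confidence
# threshold is an illustrative choice.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("aerial_frame.jpg")      # hypothetical input frame
with torch.no_grad():
    pred = model([preprocess(img)])[0]

keep = (pred["labels"] == 1) & (pred["scores"] > 0.5)
for box, score in zip(pred["boxes"][keep], pred["scores"][keep]):
    print([round(v) for v in box.tolist()], float(score))
```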
The article proposes a mathematical model built on an integrated approach to modeling the interaction of surfaces that takes into account the geometric features of the groove. An important aspect of the novelty of the work is its validation against experimental data. The movement of the lubricant in the working gap is described by a model of a true viscous lubricant that includes the continuity equation. The calculations and experiments performed have confirmed the adequacy of the proposed model, which indicates that it can be applied in practice for engineering analysis and design. The results of this work have improved the understanding of the mechanism of lubricant movement in radial sliding bearings that have a polymer coating and an axial groove on the shaft surface. The studies have also shown that the presence of a groove on the shaft surface affects the pressure distribution, which in turn affects the tribotechnical parameters of the bearing. The introduction of the groove helps to distribute the lubricant more efficiently over the working gap, increases the load-carrying capacity of the bearing, reduces the coefficient of friction, and reduces wear on the contact surfaces.
Keywords: radial bearing, wear resistance assessment, antifriction polymer coating, groove, hydrodynamic mode, verification
This paper proposes a mathematical model of the laminar flow of a truly viscous lubricant in the clearance of a radial plain bearing with a nonstandard support profile. The influence of a fluoroplastic-containing polymer coating and a groove on the shaft surface is considered, taking into account nonlinear effects, which improves the accuracy of the description of hydrodynamic processes. Thin-film approximations and continuity equations are used to determine the hydrodynamic pressure, load capacity, and friction coefficient. A comparison with existing calculation models demonstrated improved performance prediction. The results demonstrate the feasibility of ensuring stable shaft floatation, confirming the applicability of the developed model for engineering calculations of bearings with a polymer coating and a groove.
Keywords: radial plain bearing, mathematical modeling, true viscous lubricant, polymer composite coating, hydrodynamic regime, tribotechnical characteristics
This article presents the development of a combined method for summarizing Russian-language texts that integrates extractive and abstractive approaches to overcome the limitations of existing methods. The method comprises the following stages: text preprocessing, comprehensive linguistic analysis using RuBERT, semantic similarity-based clustering, extractive summarization via the TextRank algorithm, and abstractive refinement using the RuT5 neural network model. Experiments conducted on the Gazeta.Ru news corpus confirmed the method's superiority in terms of precision, recall, F-score, and ROUGE metrics, demonstrating its advantage over purely extractive methods (such as TF-IDF and statistical methods) and abstractive methods (such as RuT5 and mBART).
Keywords: combined method, summarization, Russian-language texts, TextRank, RuT5
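The extractive TextRank stage can be sketched as PageRank over a sentence-similarity graph. For brevity the sketch uses TF-IDF cosine similarity, whereas the paper's pipeline builds on RuBERT-based analysis; the toy sentences are invented.

```python
# A minimal sketch of the extractive TextRank stage: build a sentence
# similarity graph and rank sentences with PageRank. TF-IDF cosine
# similarity is used here for brevity.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def textrank_summary(sentences, top_k=2):
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)
    graph = nx.from_numpy_array(sim)
    scores = nx.pagerank(graph)
    ranked = sorted(range(len(sentences)), key=lambda i: -scores[i])
    return [sentences[i] for i in sorted(ranked[:top_k])]  # original order

sents = ["The method combines extractive and abstractive summarization.",
         "TextRank selects the most central sentences.",
         "RuT5 then rewrites the selection into a fluent summary.",
         "Experiments were run on a news corpus."]
print(textrank_summary(sents))
```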
The paper considers a stochastic model of the operation of an automatic information processing system described by a system of Kolmogorov differential equations for the state probabilities, under the assumption that the flow of requests is Poisson (the simplest flow). A scheme is proposed for solving a high-dimensional system of differential equations with slowly changing initial data, and the parameters of the presented model are compared with those of a simulation model of the Apache HTTP Server. To compare the simulation and stochastic models, a test server was used to generate requests and simulate their processing with the Apache JMeter program, which was used to estimate the parameters of the incoming and processed request flows. The presented model does not contradict the simulation model; it makes it possible to evaluate the system's states under different operating conditions and to calculate the load on the web server when the volume of data is large.
Keywords: stochastic modeling, simulation model, Kolmogorov equations, sweep method, queuing system, performance characteristics, test server, request flow, service channels, queue
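A minimal sketch of such a stochastic model is given below: the Kolmogorov equations of a birth-death queue with several service channels and a finite queue, integrated numerically. The arrival rate, service rate, and sizes are illustrative assumptions, not parameters estimated from the test server.

```python
# A minimal sketch of the stochastic model: Kolmogorov equations for a
# birth-death queue (c service channels, finite capacity K) integrated
# numerically. Rates and sizes are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

lam, mu, c, K = 8.0, 3.0, 3, 10   # arrival rate, service rate, channels, capacity

def kolmogorov(t, p):
    dp = np.zeros_like(p)
    for n in range(K + 1):
        birth = lam if n < K else 0.0
        death = mu * min(n, c)
        dp[n] -= (birth + death) * p[n]
        if n > 0:
            dp[n] += lam * p[n - 1]
        if n < K:
            dp[n] += mu * min(n + 1, c) * p[n + 1]
    return dp

p0 = np.zeros(K + 1); p0[0] = 1.0   # system starts empty
sol = solve_ivp(kolmogorov, (0, 50), p0, rtol=1e-8)
print(sol.y[:, -1])                  # near-stationary state probabilities
```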
The paper considers the effect of particle size on the dynamics of suspended sediments in a riverbed. The EcoGIS-Simulation computing complex is used to simulate the joint dynamics of surface waters and sediments in a model of the Volga River below the Volga hydroelectric dam. The most important factor in the variability of the riverbed is the spring release of water from the Volgograd reservoir, when the discharge increases fivefold. Several integral and local characteristics of the riverbed are calculated as functions of the particle size coefficient.
Keywords: suspended sediment, soil particle size, sediment dynamics, diffusion, bottom sediments, channel morphology, relief, particle gravitational settling velocity, EcoGIS-Simulation software and hardware complex, Wexler formula, water flow
The article examines the influence of the data processing direction on the results of the discrete cosine transform (DCT). Based on group theory, the symmetries of the DCT basis functions are considered, and the changes that occur when the direction of signal processing is reversed are analyzed. It is shown that the antisymmetric components of the basis change sign when the samples are taken in reverse order, while the symmetric components remain unchanged. Modified expressions for the block DCT are proposed that take the change in processing direction into account. The invariance of the frequency composition of the transform to the data processing direction has been confirmed experimentally. The results demonstrate that the proposed approach can be applied to the analysis of arbitrary signals, including image processing and data compression.
Keywords: discrete transforms, basis functions, invariance, symmetry, processing direction, matrix representation, correlation
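The stated symmetry property is easy to verify numerically: for the DCT-II, reversing the input negates the odd-indexed (antisymmetric) coefficients and leaves the even-indexed (symmetric) ones unchanged, so the magnitude spectrum is invariant to the processing direction.

```python
# A minimal numerical check of the symmetry discussed above: for the DCT-II,
# reversing the input sequence negates the antisymmetric (odd-indexed) basis
# coefficients and leaves the symmetric (even-indexed) ones unchanged.
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(1)
x = rng.standard_normal(16)

X_fwd = dct(x, type=2)
X_rev = dct(x[::-1], type=2)

signs = (-1) ** np.arange(16)           # +1 for even k, -1 for odd k
assert np.allclose(X_rev, signs * X_fwd)
assert np.allclose(np.abs(X_rev), np.abs(X_fwd))
print("reverse-order DCT = sign-flipped forward DCT")
```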
The operation of modern engineering equipment necessitates solving optimal control problems based on measurement data from numerous physical and technological process parameters. Approximating multidimensional data arrays with analytical dependencies is a relevant and practically significant challenge. Existing software solutions demonstrate limitations when working with multidimensional data or provide only fixed sets of basis functions.
Objectives. The aim of this study is to develop software for multidimensional regression based on the least squares method and a library of constructible basis functions, enabling users to create and utilize diverse basis functions for approximating multidimensional data.
Methods. The development employs a generalized least squares method model with loss function minimization in the form of a multidimensional elliptical paraboloid. LASSO (L1), ridge regression (L2), and Elastic Net regularization mechanisms enhance model generalization and numerical stability. A precomputation strategy reduces asymptotic complexity from O(b²·N·f·log₂(p)) to O(b·N·(b+f·log₂(p))). The software architecture includes recursive algorithms for basis function generation, WebAssembly for computationally intensive operations, and modern web technologies including Vue3, TypeScript, and visualization libraries.
Results. The developed web application provides efficient approximation of multidimensional data with 2D and 3D visualization capabilities. Quality assessment employs MSE, R², and AIC metrics. The software supports XLSX data loading and intuitive basis function construction through a user-friendly interface.
Conclusion. The practical value lies in creating a publicly accessible tool at https://datapprox.com for analyzing and modeling complex multidimensional dependencies without requiring additional software installation.
Keywords: approximation, least squares method, basis functions, multidimensional regression, L1/L2 regularization, web-based
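The core computation can be sketched as least squares over a user-constructed set of multidimensional basis functions with ridge (L2) regularization; the basis set and regularization weight below are illustrative assumptions, and the sketch omits the precomputation strategy and the L1/Elastic Net options.

```python
# A minimal sketch of the core computation: least squares over a user-defined
# set of multidimensional basis functions with ridge (L2) regularization.
# The basis set and the regularization weight are illustrative assumptions.
import numpy as np

basis = [lambda X: np.ones(len(X)),
         lambda X: X[:, 0],
         lambda X: X[:, 1],
         lambda X: X[:, 0] * X[:, 1],
         lambda X: np.sin(X[:, 0])]

def fit_ridge(X, y, alpha=1e-3):
    A = np.column_stack([f(X) for f in basis])      # design matrix
    # Normal equations with L2 penalty: (A^T A + alpha*I) w = A^T y
    return np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ y)

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 2))
y = 1 + 2 * X[:, 0] * X[:, 1] + np.sin(X[:, 0]) + 0.01 * rng.standard_normal(200)
print(fit_ridge(X, y).round(2))   # ~[1, 0, 0, 2, 1]
```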
The study addresses the problem of short-term forecasting of ice temperature in engineering systems with high sensitivity to thermal loads. A transformer-based architecture is proposed, enhanced with a physics-informed loss function derived from the heat balance equation. This approach accounts for the inertial properties of the system and aligns the predicted temperature dynamics with the supplied power and external conditions. The model is tested on data from an ice rink, sampled at one-minute intervals. A comparative analysis is conducted against baseline architectures including LSTM, GRU, and Transformer using MSE, MAE, and MAPE metrics. The results demonstrate a significant improvement in accuracy during transitional regimes, as well as robustness to sharp temperature fluctuations—particularly following ice resurfacing. The proposed method can be integrated into intelligent control loops for engineering systems, providing not only high predictive accuracy but also physical interpretability. The study confirms the effectiveness of incorporating physical knowledge into neural forecasting models.
Keywords: short-term forecasting, time series analysis, transformer architecture, machine learning, physics-informed modeling, predictive control
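A physics-informed loss of the kind described can be sketched as a data term plus the residual of a lumped heat-balance equation; the balance form C*dT/dt = k*(T_amb - T) - P and all coefficients below are illustrative assumptions, not the paper's formulation.

```python
# A minimal sketch of a physics-informed loss for ice-temperature forecasting:
# the data term is combined with the residual of one plausible lumped heat
# balance, C*dT/dt = k*(T_amb - T) - P (ambient heat gain minus refrigeration
# power). C, k, and lambda are illustrative assumptions.
import torch

def physics_informed_loss(T_pred, T_true, power, T_amb, dt=60.0,
                          C=1e4, k=50.0, lam=0.1):
    data_loss = torch.mean((T_pred - T_true) ** 2)
    dT_dt = (T_pred[:, 1:] - T_pred[:, :-1]) / dt       # finite difference
    residual = C * dT_dt - (k * (T_amb[:, 1:] - T_pred[:, 1:]) - power[:, 1:])
    return data_loss + lam * torch.mean(residual ** 2)

# Toy batch: 4 sequences of 10 one-minute steps.
T_pred = torch.randn(4, 10, requires_grad=True)
T_true, power, T_amb = torch.randn(4, 10), torch.randn(4, 10), torch.randn(4, 10)
loss = physics_informed_loss(T_pred, T_true, power, T_amb)
loss.backward()
```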
This paper is devoted to the theoretical analysis of methods for verifying signature dynamics captured with a graphics tablet. Three fundamental approaches to the problem are classified: template matching, stochastic modeling, and discriminative classification. In this paper, each approach is considered using a specific method as an example: dynamic time warping, hidden Markov models, and support vector machines, respectively. For each method, the theoretical foundations are set out, the mathematical apparatus is presented, and the main advantages and disadvantages are identified. The results of the comparative analysis can serve as the theoretical basis needed for developing modern signature dynamics verification systems.
Keywords: verification, biometric authentication, signature dynamics, graphics tablet, classification of methods, template matching, stochastic modeling, discriminative classification, hidden Markov models, dynamic time warping
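Of the three exemplar methods, dynamic time warping is the most compact to illustrate; below is a minimal implementation of its dynamic-programming recurrence for two signature time series of (x, y, pressure) samples, which are invented toy data.

```python
# A minimal implementation of dynamic time warping (DTW), the template-
# matching method discussed above, via the DP recurrence
# D[i, j] = d(x_i, y_j) + min(D[i-1, j], D[i, j-1], D[i-1, j-1]).
import numpy as np

def dtw_distance(x, y):
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(x[i - 1] - y[j - 1])  # local distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two signature realizations as sequences of (x, y, pressure) samples.
a = np.array([[0, 0, .1], [1, 1, .5], [2, 1, .6], [3, 0, .2]], float)
b = np.array([[0, 0, .1], [1, 1, .4], [1.5, 1, .5], [2, 1, .6], [3, 0, .2]], float)
print(dtw_distance(a, b))   # small value -> sequences are similar
```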
The characteristics of a submersible induction motor are described, with sufficient reliability for practical purposes, by the theory of multi-motor electric drives. In this formulation, the classical circuit of a submersible induction motor is a coupled system of several equivalent-T circuits, which significantly increases its computational complexity and reduces the speed of the automated control system (ACS). It is proposed to construct a mathematical model of the submersible electric motor in the form of polynomials with significantly higher computational speed using design-of-experiments methods. Within the area of applicability, the differences in the estimated energy performance between the proposed models and the classical equivalent-T circuits do not exceed 1.1%.
Keywords: automated control system, mathematical model, polynomial, mean absolute percentage error, computational complexity, design of experiment, scatter diagram, modal interval, submersible electrical motor, rotor package
The article is devoted to the study of the possibilities of automatic transcription and analysis of audio recordings of telephone conversations between sales department employees and clients. The relevance of the study stems from the growing volume of voice data and the need for its rapid processing in organizations whose activities are closely tied to selling products or services to clients. Automatic processing of audio recordings makes it possible to check the quality of call center employees' work and to identify deviations from client conversation scripts. The proposed software solution is based on the Whisper model for speech recognition, the pyannote.audio library for speaker diarization, and the RapidFuzz library for fuzzy string matching during analysis. An experimental study conducted with the developed software solution confirmed that modern language models and algorithms achieve a high degree of automation of audio recording processing and can be used as a preliminary control tool without the participation of a specialist. The results confirm the practical applicability of the authors' approach to quality control tasks in sales departments and call centers.
Keywords: call center, audio file, speech recognition, transcription, speaker diarization, replica classification, audio recording processing, Whisper, pyannote.audio, RapidFuzz
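Two stages of such a pipeline, Whisper transcription and fuzzy matching of required script phrases with RapidFuzz, can be sketched as follows (the pyannote.audio diarization stage is omitted for brevity); the file name, model size, script phrases, and threshold are illustrative assumptions.

```python
# A minimal sketch of two stages of the pipeline: Whisper transcription and
# fuzzy matching of required script phrases with RapidFuzz. File name, model
# size, phrases, and threshold are illustrative assumptions.
import whisper
from rapidfuzz import fuzz

model = whisper.load_model("base")
text = model.transcribe("call_recording.wav")["text"].lower()

script_phrases = ["good afternoon, my name is",
                  "may i ask how you heard about us",
                  "thank you for your time"]

for phrase in script_phrases:
    score = fuzz.partial_ratio(phrase, text)   # 0..100 similarity
    status = "found" if score >= 85 else "MISSING"
    print(f"{status:7s} ({score:5.1f}) {phrase}")
```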
The article presents a mathematical model that formalizes the process of managing the scientific activities of an organization. The model is based on queuing theory, and the birth-death principle is used in its construction. For a special case, a graph of states and a system of Kolmogorov differential equations are given. The intensities of the input and output streams are time-dependent, non-stationary flows. The model makes it possible to consider various structures and schemes of interaction between scientific departments, as well as various scenarios for setting scientific tasks and the intensity of their solution by the organization's employees. A decision-making software package has been developed for the model for the optimal management of a department's scientific activities. The article presents one result of an experimental and model study of the influence of the motivational component and the level of employee competence. Graphs of the system states are given for the resulting solution. The research can be used for the comprehensive evaluation of results, planning, resource allocation, and management of scientific activities.
Keywords: scientific activity, mathematical model, queuing system, death-reproduction principle, graph of states, system of differential equations
The article discusses the structure and operating principle of an improved centrifugal unit for mixing bulk materials, a special feature of which is the ability to control mixing modes. Owing to its design, selecting a rational position of the deflector makes it possible to provide conditions of impact interaction between particle flows under which a high-quality homogeneous mixture is formed from components whose particles differ in size, shape, and other parameters. The resulting mixture is characterized by the coefficient of heterogeneity, whose derivation is based on a probabilistic approach. A computational scheme of the rarefied flow formation process is given. An expression is derived for calculating the coefficient of heterogeneity when mixing bulk media whose particles differ in size, shape, and other parameters. The research presented in the article makes it possible not only to predict the quality of the resulting mixture, but also to identify the factors that have the greatest impact on achieving the required uniformity.
Keywords: aggregate, bulk media, mixing, coefficient of heterogeneity, concentration, design scheme, particle size
The article presents a novel approach to the adaptive control of genetic algorithm parameters using reinforcement learning methods. The Q-learning algorithm enables dynamic adjustment of the mutation and crossover probabilities based on the current population state and the progress of the evolutionary process. Experimental results demonstrate that this method solves optimization problems more efficiently than the classical genetic algorithm and previously developed approaches employing artificial neural networks. Tests conducted on the Rastrigin and Schaffer functions confirm the advantages of the new method in problems characterized by a large number of local extrema and high dimensionality. The article details the theoretical foundations, describes the implementation of the proposed hybrid model, and thoroughly analyzes the experimental results. The conclusions highlight the method's adaptability, efficiency, and potential for application in complex optimization scenarios.
Keywords: genetic algorithm, reinforcement learning, adaptive control, Q-learning, global optimization, Rastrigin function, Schaffer function
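The hybrid scheme can be sketched as a Q-learning agent that selects the mutation probability from a coarse population-diversity state and is rewarded by fitness improvement; the state/action discretization, GA operators, and learning rates below are illustrative assumptions, not the article's exact design.

```python
# A minimal sketch of the hybrid idea: a Q-learning agent picks the GA
# mutation probability from the current population state (here, a coarse
# diversity bucket). All rates and discretizations are illustrative.
import numpy as np

rng = np.random.default_rng(3)
ACTIONS = [0.01, 0.05, 0.2]                 # candidate mutation probabilities
Q = np.zeros((3, len(ACTIONS)))             # 3 diversity buckets

def diversity_state(pop):
    d = pop.std(axis=0).mean()
    return 0 if d < 0.1 else (1 if d < 1.0 else 2)

def rastrigin(x):
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

pop = rng.uniform(-5.12, 5.12, (50, 5))
best = rastrigin(pop).min()
eps, alpha, gamma = 0.2, 0.5, 0.9
for gen in range(200):
    s = diversity_state(pop)
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else Q[s].argmax()
    # GA step: truncation selection plus Gaussian mutation at the chosen rate.
    elite = pop[np.argsort(rastrigin(pop))[:25]]
    children = elite[rng.integers(25, size=50)]
    mask = rng.random(children.shape) < ACTIONS[a]
    pop = children + mask * rng.normal(0, 0.5, children.shape)
    new_best = rastrigin(pop).min()
    r = best - new_best                     # reward = fitness improvement
    s2 = diversity_state(pop)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    best = min(best, new_best)
print(best)
```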
The article discusses a software module developed by the authors for the automatic generation of program code from UML diagrams. The relevance of the module stems from the limitations of existing foreign code generation tools in functionality, ease of use, and support for modern technologies, as well as their unavailability in the Russian Federation. The module analyzes JSON files obtained by exporting UML diagrams from the draw.io online service and converts them into code in a selected programming language (Python, C++, Java) or into DDL scripts for a DBMS (PostgreSQL, Oracle, MySQL). The Python language and the Jinja2 template engine were used as the main development tools. The operation of the software module is demonstrated using the example of a small project, "Library Management System". During the study, a series of tests of automatic code generation was conducted on the architectures of software information systems developed by students of the Software Engineering bachelor's degree program in the course "Design and Architecture of Software Systems". The test results showed that the code generated by the module fully complies with the original UML diagrams, including the class structure, the relationships between classes, and the configuration of the database and infrastructure (Docker Compose). The practical significance of the study is that the proposed concept of generating program code from visual UML models built in the popular online editor draw.io significantly simplifies the development of software information systems and can also be used for educational purposes.
Keywords: code generation, automation, python, jinja2, uml diagram, json, template engine, parsing, class diagram, database, deployment diagram
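The template-based generation step can be sketched with Jinja2 rendering a pre-parsed JSON description of a UML class into Python code; the JSON structure below is an illustrative assumption, not the module's actual schema.

```python
# A minimal sketch of the template-based generation step: a JSON description
# of a UML class (as exported from draw.io and pre-parsed) is rendered into
# Python code with Jinja2. The JSON structure is an illustrative assumption.
from jinja2 import Template

uml_class = {
    "name": "Book",
    "attributes": [{"name": "title", "type": "str"},
                   {"name": "year", "type": "int"}],
    "methods": [{"name": "is_available", "returns": "bool"}],
}

template = Template('''\
class {{ cls.name }}:
    def __init__(self{% for a in cls.attributes %}, {{ a.name }}: {{ a.type }}{% endfor %}):
{% for a in cls.attributes %}        self.{{ a.name }} = {{ a.name }}
{% endfor %}
{% for m in cls.methods %}    def {{ m.name }}(self) -> {{ m.returns }}:
        raise NotImplementedError
{% endfor %}''')

print(template.render(cls=uml_class))
```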
Traditional marketing methods of promoting goods and services are aimed at a wide audience and do not take into account the individual characteristics of consumers, which can lead to a small percentage of positive responses and even to negative responses (loss of customers). Wide audience coverage increases the cost of marketing interactions and does not guarantee that the goals of marketing campaigns will be achieved. In this situation, the task is to minimize excess costs through a more rational organization of marketing interactions aimed at obtaining maximum profit from each target client. Implementing such a strategy requires tools that can identify the customer segments for which marketing interaction will lead to a positive response. One of the technologies for building such tools is uplift modeling, a branch of machine learning that is considered a promising direction in data-driven marketing. In this article, based on the open X5 RetailHero Uplift Modeling Dataset provided by X5 Retail Group, a comparative analysis of the effectiveness of various uplift modeling approaches is conducted to identify the segment of customers who are most susceptible to the targeted impact. Various uplift metrics and visualization techniques are used for the comparative analysis.
Keywords: effective marketing communications with customers, customer segmentation, machine learning methods, uplift modeling, uplift quality metrics
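A widely used uplift approach of the kind compared in such studies, the two-model (T-learner) scheme, can be sketched as follows: fit separate response models for treated and control clients and score uplift as the difference of predicted probabilities. The data-generating process here is invented; the article itself works with the X5 RetailHero Uplift Modeling Dataset.

```python
# A minimal sketch of the two-model (T-learner) uplift scheme on synthetic
# data: separate response models for treated and control clients; uplift is
# the difference of predicted response probabilities.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 5))                    # client features
treatment = rng.integers(0, 2, size=2000)         # 1 = received communication
# Synthetic ground truth: feature 0 drives susceptibility to the treatment.
p = 0.2 + 0.3 * treatment * (X[:, 0] > 0)
y = rng.binomial(1, p)

m_t = GradientBoostingClassifier().fit(X[treatment == 1], y[treatment == 1])
m_c = GradientBoostingClassifier().fit(X[treatment == 0], y[treatment == 0])

uplift = m_t.predict_proba(X)[:, 1] - m_c.predict_proba(X)[:, 1]
top = np.argsort(-uplift)[:200]                   # most persuadable segment
print(uplift[top].mean())
```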
The article studies the possibilities of analyzing geopolitical processes within the framework of situational analysis methodology using cognitive modeling. A description of situational analysis is given, and a scenario for the development of events is presented in which two stages are distinguished: a preparatory (pre-scenario) stage, which is essential for performing the descriptive and explanatory functions of predictive research, and a scenario stage intended for substantive and formal research, for the description of the predicted processes, for the construction of system models, and for the preparation of all information significant for scenario synthesis. Furthermore, a method is proposed for applying situational analysis within the cognitive modeling toolkit to a "future scenario" option and its analysis, taking into account new "main" factors, relationships, feedbacks, and the dynamics of their alteration. When a scenario is formed for a specific geopolitical situation within the framework of cognitive modeling, this method can be represented by causal (functional) and logical-semantic relations between the elements/agents of actions and counteractions. By interpreting the logical-semantic as structural and the causal as dynamic, we obtain a structural-dynamic systemic description of geopolitical confrontation in the language of cognitive graphs, i.e., a graphical expression of the causal relationships between the concepts (factors) that characterize a particular geopolitical process. Thus, the scenario stage comprises the following procedures: analyzing the initial geopolitical situation, namely determining and structuring the key factors that form the scheme of internal connections and external relationships; defining the factors that exert influence; determining the directions and strength of influence (positive and negative effects); choosing basic stereotypes or generalized models of interaction that correspond to the initial situation; constructing cognitive models of the current state of the situation; studying trends in the situation's development and analyzing its dynamics; and transferring the scenario onto a practical basis.
Keywords: geopolitical processes, situational analysis, cognitive modeling, forecasting scenario