
  • Methods for solving the linear cutting problem with minimization of knife changes

    In this article, an analysis of the main methods for solving the linear cutting problem (LCP) with the criterion of minimizing the number of knife rearrangements is presented. In its general form, the linear cutting problem is an optimization problem of placing given types of material (rolls) so as to minimize waste and/or maximize the use of raw materials, subject to constraints on the number of knives, the width of the master roll, and the required orders. This article discusses a special case of the problem with the additional condition of minimizing knife changes and the following approaches to its solution: the exhaustive search method, which guarantees a globally optimal solution but can be extremely inefficient for problems with a large number of orders, and random search based on genetic and evolutionary algorithms, which model natural selection to find good solutions. Pseudocode is provided for the various methods of solving the LCP. The methods are compared in terms of algorithmic complexity, controllability of execution time, and accuracy. Random search based on genetic and evolutionary algorithms proved better suited for solving the LCP with minimization of waste and knife rearrangements.

    Keywords: paper production planning, linear cutting, exhaustive search, genetic algorithm, waste minimization, knife permutation minimization
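
    As a rough illustration (not the authors' pseudocode), the sketch below shows how a genetic-algorithm fitness might jointly penalize trim waste and knife rearrangements; the encoding of patterns as tuples of cut widths, the penalty weights, and the swap mutation are assumptions.

    import random

    MASTER_WIDTH = 1000                 # assumed master-roll width
    WASTE_W, CHANGE_W = 1.0, 50.0       # assumed penalty weights

    def knife_changes(patterns):
        """Approximate count of knives repositioned between consecutive patterns."""
        return sum(len(set(a) ^ set(b)) for a, b in zip(patterns, patterns[1:]))

    def fitness(patterns):
        """Lower is better: trim waste plus a penalty per knife rearrangement."""
        waste = sum(MASTER_WIDTH - sum(p) for p in patterns)
        return WASTE_W * waste + CHANGE_W * knife_changes(patterns)

    def mutate(patterns):
        """Reorder two patterns: waste is unchanged, knife changes are not."""
        s = list(patterns)
        i, j = random.sample(range(len(s)), 2)
        s[i], s[j] = s[j], s[i]
        return s

    Because reordering patterns changes only the knife-change term, the search can focus on sequencing once the pattern set is fixed.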

  • Deviation detection and route correction system

    Deviation of forestry equipment from the designated route leads to environmental, legal, and economic issues, such as soil damage, tree destruction, and fines. Autonomous route correction systems are essential to address these problems. The aim of this study is to develop a system for deviation detection and trajectory calculation to return to the designated route. The system determines the current position of the equipment using global positioning sensors and an inertial measurement unit. The Kalman filter ensures positioning accuracy, while the A* algorithm and trajectory smoothing methods are used to compute efficient routes considering obstacles and turning radii. The proposed solution effectively detects deviations and calculates a trajectory for returning to the route.

    Keywords: deviation detection, route correction, mobile application, Kalman filter, logging operations
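
    A minimal sketch of the sensor-fusion step described above: a one-dimensional Kalman filter in which IMU velocity drives the prediction and a GNSS fix drives the correction. The noise values q and r are illustrative assumptions.

    def kalman_step(x, P, v, dt, z, q=0.1, r=4.0):
        """One predict/update cycle for position x with variance P.
        v: IMU velocity, z: GNSS position measurement."""
        # Predict: integrate IMU velocity, inflate uncertainty.
        x_pred = x + v * dt
        P_pred = P + q
        # Update: blend in the GNSS fix, weighted by the Kalman gain.
        K = P_pred / (P_pred + r)
        x_new = x_pred + K * (z - x_pred)
        P_new = (1 - K) * P_pred
        return x_new, P_new

    # usage: x, P = kalman_step(x, P, v=1.2, dt=0.1, z=gnss_reading)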

  • Modeling the dynamics of mixing of a two-component mixture by a Markov process

    The article considers simulation modeling of fibrous material mixing processes using Markov processes. The correct combination and redistribution of components in a two-component mixture significantly affects its physical properties, and the developed model makes it possible to optimize this process. The authors propose an algorithm for modeling transitions between mixture states based on Markov processes.

    Keywords: modeling, simulation, mixture, mixing, fibrous materials
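
    A minimal sketch of the Markov-chain view of mixing: a state distribution is propagated through an assumed transition matrix until it approaches the stationary (well-mixed) distribution. The matrix below is an example, not the fitted model from the article.

    import numpy as np

    # States: fraction of component A in a cell, coarsely discretized.
    P = np.array([[0.8, 0.2, 0.0],    # row i: P(next state | current state i)
                  [0.1, 0.8, 0.1],
                  [0.0, 0.2, 0.8]])

    def evolve(dist, steps):
        """Propagate a state distribution `dist` through `steps` transitions."""
        for _ in range(steps):
            dist = dist @ P
        return dist

    print(evolve(np.array([1.0, 0.0, 0.0]), 50))  # approaches the stationary mix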

  • Research on recurrent neural network models for predicting river levels using data on the Amur River as an example

    The use of recurrent neural networks to predict the water level in the Amur River is considered. The advantages of using such networks over traditional machine learning methods are described. Various recurrent network architectures are compared, and the model's hyperparameters are optimized. The developed model based on long short-term memory (LSTM) has demonstrated high prediction accuracy, surpassing traditional methods. The results can be used to improve the effectiveness of water resource monitoring and flood prevention.

    Keywords: time series analysis, Amur, water level, forecasting, neural networks, recurrent network
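
    A minimal Keras sketch of an LSTM forecaster of the kind the abstract describes; the window length, layer sizes, and optimizer are illustrative assumptions, not the tuned hyperparameters from the study.

    import tensorflow as tf

    WINDOW = 30   # assumed: days of past levels used to predict the next day

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, 1)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1),          # next-day water level
    ])
    model.compile(optimizer="adam", loss="mse")
    # model.fit(X_train, y_train, epochs=100, validation_split=0.2)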

  • Hybrid optimization methods: adaptive control of the evolutionary process using artificial neural networks

    The relevance of the research is determined by the need to solve complex optimization problems under conditions of high dimensionality, noisy data, and dynamically changing environments. Classical methods, such as genetic algorithms, often suffer from premature convergence and adapt poorly to changes in the task. This article therefore focuses on identifying opportunities to enhance the flexibility and efficiency of evolutionary algorithms through integration with artificial neural networks, which allow search parameters to be adjusted dynamically during the evolutionary process. The leading approach is a hybrid system that combines genetic algorithms with neural networks: the neural network adaptively regulates mutation and crossover probabilities based on analysis of the current state of the population, preventing premature convergence and accelerating the search for the global extremum. The article presents methods for dynamic adjustment of evolutionary parameters using a neural network approach, explains the principles of the hybrid system's operation, and provides results from testing on the Rastrigin function. The materials of the article hold practical value for further research in the field of optimization, particularly for problems with many local minima, where traditional methods may be ineffective. The proposed hybrid model opens new perspectives for developing adaptive algorithms applicable in various fields of science and engineering where high accuracy and robustness to environmental changes are required.

    Keywords: genetic algorithm, artificial neural network, dynamic tuning, hybrid method, global optimization, adaptive algorithm
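
    A compact sketch of the adaptive idea on the Rastrigin function. Here a simple population-diversity heuristic stands in for the neural-network controller of the mutation step; everything else (population size, selection, bounds) is an illustrative assumption.

    import numpy as np

    def rastrigin(x):
        return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    rng = np.random.default_rng(0)
    pop = rng.uniform(-5.12, 5.12, size=(50, 2))
    for gen in range(200):
        fit = np.array([rastrigin(ind) for ind in pop])
        # Adaptive control: low population spread -> raise the mutation scale.
        sigma = 0.5 if pop.std() > 0.5 else 1.5
        parents = pop[np.argsort(fit)[:25]]                  # truncation selection
        children = parents + rng.normal(0, sigma, parents.shape)
        pop = np.vstack([parents, np.clip(children, -5.12, 5.12)])
    best = min(pop, key=rastrigin)
    print(best, rastrigin(best))   # should approach the global minimum at 0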

  • Neural network model for monitoring farm animals in relation to pasture farming

    The article explores the use of computer vision technologies to automate the process of observing animals in open spaces, with the aim of counting and identifying species. It discusses advanced methods of animal detection and recognition through the use of highly accurate neural networks. A significant challenge addressed in the study is the issue of duplicate animal counts in image data. To overcome this, two approaches are proposed: the analysis of video data sequences and the individual recognition of animals. The advantages and limitations of each method are analyzed in detail, alongside the potential benefits of combining both techniques to enhance the system's accuracy. The study also describes the process of training a neural network using a specialized dataset. Particular attention is given to the steps involved in data preparation, augmentation, and the application of neural networks like YOLO for efficient detection and classification. Testing results highlight the system's success in detecting animals, even under challenging conditions. Moreover, the article emphasizes the practical applications and potential of these technologies in monitoring animal populations and improving livestock management. It is noted that these advancements could contribute significantly to the development of similar systems in agriculture. The integration of such technologies is presented as a promising solution for tracking animal movement, assessing their health, and minimizing livestock losses across vast grazing areas.

    Keywords: algorithm, computer vision, monitoring, pasture-based, livestock farming
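
    A minimal sketch of tracking-based deduplication with the ultralytics YOLO API, illustrating one of the two approaches to duplicate counts mentioned above; the weights file and video path are placeholders.

    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")                 # assumed pretrained weights
    seen = set()
    for result in model.track("pasture.mp4", stream=True, persist=True):
        for box in result.boxes:
            if box.id is not None:             # tracker ID deduplicates the same
                seen.add(int(box.id))          # animal across video frames
    print(f"distinct tracked animals: {len(seen)}")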

  • Application of neural networks in modern radiography: automated analysis of reflectometry data using machine learning

    This article presents the mlreflect package, written in Python: an optimized data pipeline for automated analysis of reflectometry data using machine learning. The package combines several methods of training and data processing. The predictions made by the neural network are accurate and reliable enough to serve as good starting parameters for subsequent data fitting by the least mean squares (LMS) method. For a large dataset of 250 reflectivity curves of various thin films on silicon substrates, it was demonstrated that the analytical pipeline finds a fit minimum very close to the one obtained by a researcher using physical knowledge and carefully selected boundary conditions.

    Keywords: neural network, radiography, thin films, data pipeline, machine learning
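
    A generic sketch of the two-stage pipeline the abstract describes: a network prediction used as the starting point of a least-squares refinement. This is not the mlreflect API; predict_params and reflectivity are hypothetical placeholders.

    import numpy as np
    from scipy.optimize import least_squares

    def refine(q, r_measured, predict_params, reflectivity):
        """Use the network output as the starting point for an LMS fit."""
        p0 = predict_params(r_measured)        # e.g. thickness, roughness, density
        residual = lambda p: reflectivity(q, p) - r_measured
        return least_squares(residual, p0).x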

  • Analysis of the influence of data representation accuracy on the quality of wavelet image processing using Winograd method computations

    This paper is devoted to the application of the Winograd method to the wavelet transform in the problem of image compression. The method reduces computational complexity and increases the speed of computation due to group processing of pixels. The paper determines the minimum number of bits at which high quality of processed images is achieved when the discrete wavelet transform is performed in fixed-point format. The experimental results showed that for processing fragments of 2 and 3 pixels without loss of accuracy using the Winograd method, 2 fractional binary digits are sufficient for the calculations. To obtain a high-quality image when processing groups of 4 and 5 pixels, it is sufficient to use 4 and 7 fractional binary digits, respectively. Development of hardware accelerators for the proposed image compression method is a promising direction for further research.

    Keywords: wavelet transform, Winograd method, image processing, digital filtering, convolution with step
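
    A sketch of the fixed-point experiment, with a single Haar step standing in for the wavelet filter bank: coefficients are quantized to a given number of fractional bits and the reconstruction error is measured.

    import numpy as np

    def quantize(x, frac_bits):
        return np.round(x * 2**frac_bits) / 2**frac_bits

    def haar_roundtrip(signal, frac_bits):
        s = (signal[0::2] + signal[1::2]) / 2      # averages
        d = (signal[0::2] - signal[1::2]) / 2      # details
        s, d = quantize(s, frac_bits), quantize(d, frac_bits)
        rec = np.empty_like(signal)
        rec[0::2], rec[1::2] = s + d, s - d
        return np.abs(rec - signal).max()          # worst reconstruction error

    x = np.random.default_rng(1).uniform(0, 255, 256)
    for bits in (2, 4, 7):
        print(bits, haar_roundtrip(x, bits))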

  • A method for semantic segmentation of thermal images

    This paper presents the results of a study aimed at developing a method for semantic segmentation of thermal images using a modified neural network algorithm that differs from the original by a higher speed of processing graphic information. As part of the study, the DeepLabv3+ semantic segmentation algorithm was modified by reducing the number of parameters of the neural network model, which increased the speed of processing graphic information by 48%, from 27 to 40 frames per second. A training method is also presented that increases the accuracy of the modified algorithm; the accuracy obtained was 5% lower than that of the original neural network algorithm.

    Keywords: neural network algorithms, semantic segmentation, machine learning, data augmentation
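
    A sketch of the speed/capacity trade-off explored above, timing torchvision DeepLabv3 variants of different backbone size; note the article's actual modification prunes DeepLabv3+ parameters rather than swapping backbones.

    import time, torch
    from torchvision.models.segmentation import (
        deeplabv3_resnet50, deeplabv3_mobilenet_v3_large)

    for build in (deeplabv3_resnet50, deeplabv3_mobilenet_v3_large):
        model = build(weights=None, num_classes=2).eval()   # e.g. thermal fg/bg
        x = torch.randn(1, 3, 512, 512)                     # dummy thermal frame
        with torch.no_grad():
            t0 = time.perf_counter()
            model(x)
            dt = time.perf_counter() - t0
        print(build.__name__, f"{dt * 1000:.0f} ms/frame")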

  • Classification of Micro-Expressions Based on Optical Flow Considering Gender Differences

    This study presents a method for recognizing and classifying micro-expressions using optical flow analysis and the YOLOv11 architecture. Unlike previous binary detection approaches, this research enables multi-class classification while considering gender differences, as facial expressions may vary between males and females. A novel optical flow algorithm and a discretization technique improve classification stability, while the Micro ROC-AUC metric addresses class imbalance. Experimental results show that the proposed method achieves competitive accuracy, with gender-specific models further enhancing performance. Future work will explore ethnic variations and advanced learning strategies for improved recognition.

    Keywords: microexpressions, pattern recognition, optical flow, YOLOv11
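
    A minimal OpenCV sketch of the dense optical-flow field such a classifier consumes, computed between onset and apex frames; the file names and Farneback parameters are assumptions.

    import cv2

    onset = cv2.imread("onset.png", cv2.IMREAD_GRAYSCALE)
    apex = cv2.imread("apex.png", cv2.IMREAD_GRAYSCALE)
    flow = cv2.calcOpticalFlowFarneback(onset, apex, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])  # per-pixel motion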

  • Programming using the actor model on the Akka platform: concepts, patterns, and implementation examples

    This article discusses the basic concepts and practical aspects of programming using the actor model on the Akka platform. The actor model is a powerful tool for creating parallel and distributed systems, providing high performance, fault tolerance and scalability. The article describes in detail the basic principles of how actors work, their lifecycle, and messaging mechanisms, as well as provides examples of typical patterns such as Master/Worker and Proxy. Special attention is paid to clustering and remote interaction of actors, which makes the article useful for developers working on distributed systems.

    Keywords: actor model, akka, parallel programming, distributed systems, messaging, clustering, fault tolerance, actor lifecycle, programming patterns, master worker, proxy actor, synchronization, asynchrony, scalability, error handling
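
    A language-agnostic sketch of the Master/Worker pattern discussed above, written in Python with threads and queues rather than Akka actors; in Akka the queue's role is played by the workers' mailboxes.

    import queue, threading

    tasks, results = queue.Queue(), queue.Queue()

    def worker():
        while True:
            item = tasks.get()
            if item is None:                # poison pill: stop the worker
                break
            results.put(item * item)        # stand-in for real work
            tasks.task_done()

    workers = [threading.Thread(target=worker) for _ in range(4)]
    for w in workers:
        w.start()
    for n in range(10):                     # master distributes work
        tasks.put(n)
    tasks.join()
    for _ in workers:
        tasks.put(None)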

  • Models of structural balance management in the context of the American Revolution

    The article analyzes the geopolitical situation in a number of episodes of the American Revolution by applying structural balance and mathematical modeling methods. Structural balance management can help to find optimal strategies for interacting parties, an approach used across a range of disciplines. The author analyzes examples of actors' interaction in the context of the American Revolution, which makes it possible to evaluate the state of affairs at this historical stage in an illustrative form. The approach is universal and can support the management of structural balance in systems of actors, each of which has its own features and interests. A number of specific historical episodes serve as examples of balanced and unbalanced systems, and each episode has its explanation in the frame of history. During the American Revolution, actors (countries and specific politicians, as well as indigenous peoples) had their own goals and interests, and their positive or negative interactions largely shaped the course of history.

    Keywords: mathematical modeling, structural balance, discrete models, sign graph, U.S. history
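
    A minimal sketch of the balance criterion behind such analyses: a triad in a sign graph is balanced when the product of its edge signs is positive. The actors and signs below are simplified placeholders, not the article's full model.

    sign = {                      # +1 alliance, -1 conflict (assumed example)
        ("colonies", "france"): +1,
        ("britain", "colonies"): -1,
        ("britain", "france"): -1,
    }

    def balanced(a, b, c):
        e = lambda x, y: sign[tuple(sorted((x, y)))]
        return e(a, b) * e(b, c) * e(a, c) > 0

    print(balanced("colonies", "france", "britain"))   # True: "my enemy's enemy"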

  • Development of a software tool for automated generation of timing constraints in the FPGA-based circuit design flow

    The article is devoted to the development of a tool for automated generation of timing constraints in the context of circuit development in the basis of field programmable gate arrays (FPGAs). The paper analyzes current solutions in the field of interface tools for generating design constraints. A data structure for the constraint generation tool and algorithms for reading and writing Synopsys Design Constraints format files have been developed. Based on the developed structures and algorithms, a software module was implemented and subsequently integrated into the FPGA circuit design flow X-CAD.

    Keywords: computer-aided design, field programmable gate array, automation, design constraints, development, design flow, interface, algorithm, tool, static timing analysis
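
    A minimal sketch of emitting Synopsys Design Constraints text from a structured clock description; the data fields are assumptions about such a tool's internals, while the create_clock command itself is standard SDC.

    from dataclasses import dataclass

    @dataclass
    class Clock:
        name: str
        port: str
        period_ns: float

    def to_sdc(clocks):
        """Render each clock as a standard SDC create_clock command."""
        return "\n".join(f"create_clock -name {c.name} -period {c.period_ns} "
                         f"[get_ports {c.port}]" for c in clocks)

    print(to_sdc([Clock("sys_clk", "clk_i", 10.0)]))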

  • The Socratic method as a tool for choosing machine learning models for corporate information systems

    The article presents an analysis of the application of the Socratic method for selecting machine learning models in corporate information systems. The study aims to explore the potential of utilizing the modular architecture of Socratic Models for integrating pretrained models without the need for additional training. The methodology relies on linguistic interactions between modules, enabling the combination of data from various domains, including text, images, and audio, to address multimodal tasks. The results demonstrate that the proposed approach holds significant potential for optimizing model selection, accelerating decision-making processes, and reducing the costs associated with implementing artificial intelligence in corporate environments.

    Keywords: Socratic method, machine learning, corporate information systems, multimodal data, linguistic interaction, business process optimization, artificial intelligence

  • Synthesis of neural networks and system analysis using Socratic methods for managing corporate IT projects

    The article examines the modular structure of interactions between various models based on the Socratic dialogue. The research aims to explore the possibilities of synthesizing neural networks and system analysis using Socratic methods for managing corporate IT projects. The application of these methods enables the integration of knowledge stored in pre-trained models without additional training, facilitating the resolution of complex management tasks. The research methodology is based on analyzing the capabilities of multimodal models, their integration through linguistic interactions, and system analysis of key aspects of IT project management. The results include the development of a structured framework for selecting suitable models and generating recommendations, thereby improving the efficiency of project management in corporate environments. The scientific significance of the study lies in the integration of modern artificial intelligence approaches to implement system analysis using multi-agent solutions.

    Keywords: neural networks, system analysis, Socratic method, corporate IT projects, multimodal models, project management

  • Evaluation of the effectiveness of a data set expansion method based on deep reinforcement learning

    The article presents the results of a numerical experiment comparing the accuracy of neural network object recognition in images under various types of dataset expansion. It describes the need to expand datasets using adaptive approaches in order to minimize the use of image transformations that may reduce recognition accuracy. The author considers the common approaches of random and automatic augmentation, as well as a developed method of adaptive dataset expansion using a reinforcement learning algorithm. The operation of each approach and its advantages and disadvantages are described. The operation and main parameters of the developed expansion method based on the Deep Q-Network algorithm are presented in terms of both the algorithm and the main module of the software package. Attention is paid to reinforcement learning, and the use of a neural network to approximate the Q-function and update it during learning, on which the developed method is based, is described. The experimental results show the advantage of dataset expansion with a reinforcement learning algorithm using the example of the SqueezeNet v1.1 classification model. Recognition accuracy was compared across expansion methods using the same neural network classifier parameters, with and without pre-trained weights. The increase in accuracy compared with other methods ranges from 2.91% to 6.635%.

    Keywords: dataset, extension, neural network models, classification, image transformation, data replacement
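
    A compact PyTorch sketch of the Deep Q-Network update underlying such a method, where actions index augmentation operations and the reward could be the change in validation accuracy; the state size and network are illustrative assumptions.

    import torch
    import torch.nn as nn

    N_AUG_OPS = 8                              # actions = augmentation operations
    qnet = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, N_AUG_OPS))
    opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
    GAMMA = 0.9

    def dqn_step(state, action, reward, next_state):
        """One Q-learning update: pull Q(s,a) toward r + gamma * max_a' Q(s',a')."""
        q = qnet(state)[action]
        with torch.no_grad():
            target = reward + GAMMA * qnet(next_state).max()
        loss = (q - target) ** 2
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()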

  • Modern tools used in the formation of intelligent control systems

    Modern intelligent control systems (ICS) are complex software and hardware systems that use artificial intelligence, machine learning, and big data processing to automate decision-making processes. The article discusses the main tools and technologies used in the development of ICS, such as neural networks, deep learning algorithms, expert systems and decision support systems. Special attention is paid to the role of cloud computing, the Internet of Things and cyber-physical systems in improving the efficiency of intelligent control systems. The prospects for the development of this field are analyzed, as well as challenges related to data security and interpretability of models. Examples of the successful implementation of ICS in industry, medicine and urban management are given.

    Keywords: intelligent control systems, artificial intelligence, machine learning, neural networks, big data, Internet of things, cyber-physical systems, deep learning, expert systems, automation

  • Application of variational principles in problems of development and testing of complex technical systems

    The technology of applying the variational principle to problems of development and testing of complex technical systems is described. Suppose a set of restrictions is imposed on random variables in the form of given statistical moments and/or upper and lower bounds on the range of their possible values. The task is, knowing nothing beyond these restrictions, to construct the probability distribution function of the determining parameter of the complex technical system under development, for further research and ultimately for assessing the system's efficiency. By varying a functional that includes the Shannon entropy and typical restrictions on the distribution density function of the determining parameter, the main stages of constructing the distribution density function are described. It is shown that, depending on the type of restriction, the constructed density can have an analytical form, be expressed through special mathematical functions, or be computed numerically. Examples of applying the variational principle to find the distribution density function are given. It is demonstrated that the variational principle yields both the distribution laws widely used in probability theory and mathematical statistics and specific distributions characteristic of the problems of developing and testing complex technical systems. The presented technology can be used in the model of managing the self-diagnostics process of intelligent control systems with machine consciousness.

    Keywords: variational principle, distribution density function, Shannon entropy, complex technical system
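
    A standard worked instance of the principle, assuming mean and variance constraints (in LaTeX notation): maximize the Shannon entropy subject to the given moments,

    \max_{p}\; -\int p(x)\ln p(x)\,dx
    \quad\text{s.t.}\quad
    \int p\,dx = 1,\quad \int x\,p\,dx = \mu,\quad \int (x-\mu)^2 p\,dx = \sigma^2 .

    Stationarity of the Lagrangian gives \( p(x) \propto \exp(-\lambda_1 x - \lambda_2 (x-\mu)^2) \), whose normalized form under these constraints is the normal density \( \mathcal{N}(\mu, \sigma^2) \), one of the widely used laws the abstract mentions.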

  • The actor model in the Elixir programming language: fundamentals and application

    The article explores the actor model as implemented in the Elixir programming language, which builds upon the principles of the Erlang language. The actor model is an approach to parallel programming in which independent entities, called actors, communicate with each other through asynchronous messages. The article details the main concepts of Elixir, such as pattern matching, data immutability, types and collections, and the mechanisms for working with actors. Special attention is paid to the practical aspects of creating and managing actors, and to their interaction and maintenance. This article will be valuable for researchers and developers interested in parallel programming and functional programming languages.

    Keywords: actor model, elixir, parallel programming, pattern matching, data immutability, processes, messages, mailbox, state, recursion, asynchrony, distributed systems, functional programming, fault tolerance, scalability

  • Development and Analysis of a Feature Model for Dynamic Handwritten Signature Recognition

    In this work, we present the development and analysis of a feature model for dynamic handwritten signature recognition to improve its effectiveness. The feature model is based on the extraction of both global features (signature length, average angle between signature vectors, range of dynamic characteristics, proportionality coefficient, average input speed) and local features (pen coordinates, pressure, azimuth, and tilt angle). We utilized the method of potentials to generate a signature template that accounts for variations in writing style. Experimental evaluation was conducted using the MCYT_Signature_100 signature database, which contains 2500 genuine and 2500 forged samples. We determined optimal compactness values for each feature, enabling us to accommodate signature writing variability and enhance recognition accuracy. The obtained results confirm the effectiveness of the proposed feature model and its potential for biometric authentication systems, presenting practical interest for information security specialists.

    Keywords: dynamic handwritten signature, signature recognition, biometric authentication, feature model, potential method, MCYT_Signature_100, FRR, FAR
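
    A sketch of computing three of the listed global features from a sampled pen trajectory; the formulas are straightforward readings of the feature names, not necessarily the authors' exact definitions.

    import numpy as np

    def global_features(x, y, t):
        """x, y: pen coordinates; t: strictly increasing timestamps."""
        dx, dy, dt = np.diff(x), np.diff(y), np.diff(t)
        seg = np.hypot(dx, dy)                      # segment lengths
        length = seg.sum()                          # total signature length
        mean_angle = np.arctan2(dy, dx).mean()      # average angle between vectors
        mean_speed = (seg / dt).mean()              # average input speed
        return length, mean_angle, mean_speed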

  • The effect of data replacement and expansion using transformations on the recognition accuracy of the deep neural network ResNet-50

    The article examines how replacing the original data with transformed data affects the quality of training of deep neural network models. The author conducts four experiments to assess the impact of data substitution in tasks with small datasets: in the first, the model is trained without changes to the original dataset; in the second, all images in the original set are replaced with transformed ones; in the third, the number of original images is reduced and the set is expanded with transformed images; and in the fourth, the dataset is expanded so as to balance the number of images in each class.

    Keywords: dataset, extension, neural network models, classification, image transformation, data replacement
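
    A minimal torchvision sketch of the kind of transformations compared in the experiments; the specific operations and magnitudes are illustrative assumptions.

    from torchvision import transforms

    augment = transforms.Compose([
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.RandomRotation(degrees=15),
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
        transforms.ToTensor(),
    ])
    # Expansion keeps originals and adds augment(img); replacement trains on
    # augment(img) only. The four experiments compare such regimes.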

  • Modeling Paid Parking Occupancy: A Regression Analysis Taking into Account Customer Behavior

    The article describes a methodology for constructing a regression model of the occupancy of paid parking zones that takes into account the uneven distribution of sessions during the day and the behavioral characteristics of two groups of clients: the regression model consists of two equations, one reflecting the characteristics of each group. In addition, the process of creating a data model, collecting, processing and analyzing the data, and the distribution of occupancy during the day is described, along with a methodology for modeling a bell-shaped phenomenon that depends on the time of day. The results can be used by commercial enterprises managing parking lots, by city administrations, and by researchers modeling similar indicators that follow the normal distribution characteristic of many natural processes (customer flow in bank branches, replenishment and/or withdrawal of funds over the life of replenishable deposits, etc.).

    Keywords: paid parking, occupancy, regression model, customer behavior, behavioral segmentation, model robustness, model, forecast, parking management, distribution
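
    A sketch of fitting a bell-shaped daily occupancy curve, as the methodology describes, with scipy's curve_fit; the synthetic data and parameters are illustrative.

    import numpy as np
    from scipy.optimize import curve_fit

    def bell(t, peak, t_peak, width):
        """Gaussian-shaped occupancy as a function of hour of day."""
        return peak * np.exp(-((t - t_peak) ** 2) / (2 * width ** 2))

    hours = np.arange(0, 24)
    occupancy = bell(hours, 80, 14, 3) + np.random.default_rng(2).normal(0, 3, 24)
    (peak, t_peak, width), _ = curve_fit(bell, hours, occupancy, p0=[50, 12, 4])
    print(peak, t_peak, width)   # recovered peak load, peak hour, spread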

  • Software for calculating the surface characteristics of liquid media

    Software has been developed in the Microsoft Visual Studio environment to evaluate the surface characteristics of liquids, solutions and suspensions. The module, with a user-friendly interface, requires no special skills from the user and performs a numerical calculation of the energy characteristics of a liquid in about one second: adhesion, cohesion, wetting energy, spreading coefficient, and adhesion of the liquid composition to the contact surface. Calculation of the wetting of a steel surface by liquid media is demonstrated using distilled water as a test liquid and an initial liquid release lubricant of the Penta-100 series. Optical microscopy has shown that good wetting of the steel surface ensures the formation of a homogeneous, defect-free coating. The proposed module enables an express assessment of the compatibility of liquid formulations with the protected surface and is of interest to manufacturers of paint and varnish materials for product quality control.

    Keywords: computer program, C# programming language, wetting, surface, adhesion
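
    A sketch of the classical wetting relations such a module evaluates (the standard Young-Dupre formulas), with sigma the liquid surface tension and theta the contact angle; the numeric example is illustrative.

    import math

    def wetting(sigma, theta_deg):
        theta = math.radians(theta_deg)
        adhesion = sigma * (1 + math.cos(theta))   # work of adhesion
        cohesion = 2 * sigma                       # work of cohesion
        wetting_energy = sigma * math.cos(theta)   # adhesion tension
        spreading = adhesion - cohesion            # spreading coefficient
        return adhesion, cohesion, wetting_energy, spreading

    print(wetting(72.8, 70.0))   # e.g. water on steel, assumed contact angle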

  • Modeling the interaction of a single abrasive grain with the surface of a part

    A review of various approaches used to model the contact interaction between the grinding wheel grain and the surface layer of the workpiece during grinding is presented. In addition, the influence of material properties, grinding parameters and grain morphology on the contact process is studied.

    Keywords: grinding, grain, contact zone, modeling, grinding wheel, indenter, micro cutting, cutting depth

  • Methods for forming quasi-orthogonal matrices based on pseudo-random sequences of maximum length

    Linear feedback shift registers (LFSRs) and the pseudo-random sequences of maximum length (m-sequences) they generate are widely used in mathematical modeling, cryptography, radar and communications. Their wide use is due to special properties such as correlation. An interesting property of these sequences, rarely discussed in the recent scientific literature, is the possibility of forming quasi-orthogonal matrices on their basis. In this paper, a study of methods for generating quasi-orthogonal matrices based on m-sequences was conducted. The existing method, based on cyclically shifting the m-sequence and adding a border to the resulting cyclic matrix, is analyzed. An alternative method is proposed, based on the relationship between m-sequences and quasi-orthogonal Mersenne and Hadamard matrices, which allows generating cyclic quasi-orthogonal matrices of symmetric structure without a border. A comparative analysis of the correlation properties of the matrices obtained by both methods and of the original m-sequences is performed. It is shown that the proposed method inherits the correlation properties of m-sequences, provides more efficient storage, and is potentially better suited for privacy problems.

    Keywords: orthogonal matrices, quasi-orthogonal matrices, Hadamard matrices, m-sequences
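
    A sketch of the ingredients of the proposed construction: an LFSR generates an m-sequence whose cyclic shifts, mapped to plus/minus one, form a circulant matrix with near-orthogonal rows (off-peak autocorrelation of -1). The 4-bit taps are a standard maximal-length example.

    import numpy as np

    def m_sequence(taps=(4, 3), n_bits=4):
        """Fibonacci LFSR; taps (4, 3) give a maximal period of 2^4 - 1 = 15."""
        state, out = [1] * n_bits, []
        for _ in range(2**n_bits - 1):
            out.append(state[-1])
            fb = 0
            for t in taps:
                fb ^= state[t - 1]
            state = [fb] + state[:-1]
        return np.array(out)

    seq = 2 * m_sequence() - 1                      # map {0, 1} -> {-1, +1}
    C = np.array([np.roll(seq, k) for k in range(len(seq))])  # circulant matrix
    G = C @ C.T
    print(G[0, 0], G[0, 1])                         # 15 on the diagonal, -1 off it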