
  • Development of a client-server application for constructing a virtual museum

    The article describes a methodology for developing a client-server application for constructing a virtual museum. The server side, with its functions for processing and executing requests from the client side, is discussed in detail, as are the creation of a database and interaction with it. The client side is developed with the Angular framework and the TypeScript language; the three-dimensional visualization is based on the three.js library, which is built on top of WebGL. The server side is developed on the ASP.NET Core platform in C#. The database schema follows a Code-First approach using Entity Framework Core, with Microsoft SQL Server as the database management system.

    Keywords: client-server application, virtual tour designer, virtual museum, three.js library, framework, Angular, ASP.NET Core, Entity Framework Core, Code-First, WebGL

  • Vision of the modern concept of medical decision support systems in the Russian healthcare system

    This article presents a study of an approach to developing a medical decision support system (DSS) for selecting formulas to calculate the optical power of intraocular lenses (IOLs) used in the surgical treatment of cataracts. The system is built on methods for constructing recommendation systems, which makes it possible to automate the choice of an IOL and minimize the risk of human error. Implementing the system in the practice of medical organizations is expected to deliver high accuracy and efficiency, significantly reduce the time required for decision-making, and improve the results of surgical interventions.

    Keywords: intraocular lens, ophthalmology, formulas for calculating optical power, web application, machine learning, eye parameters, prognostic model, recommendation system, prediction accuracy, medical decision
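    As a minimal illustration of the kind of formula such a DSS chooses among, the sketch below implements the classic SRK regression formula for IOL power (A-constant minus 2.5 times axial length minus 0.9 times mean keratometry). The input values are illustrative, not clinical guidance, and the article's system selects among several such formulas rather than hard-coding one.

```python
def srk_iol_power(a_constant: float, axial_length_mm: float, mean_k_diopters: float) -> float:
    """Classic SRK regression formula for IOL optical power (emmetropia target).

    a_constant      : lens-specific A-constant supplied by the manufacturer
    axial_length_mm : axial length of the eye in millimetres
    mean_k_diopters : average corneal power in diopters
    """
    return a_constant - 2.5 * axial_length_mm - 0.9 * mean_k_diopters

# Illustrative values for a typical eye (not clinical guidance)
print(round(srk_iol_power(118.4, 23.5, 43.5), 2))  # → 20.5
```

    A recommendation system in this setting would compare the predictions of several such formulas against post-operative outcomes and suggest the one expected to be most accurate for the given eye parameters.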

  • Designing multicomponent simulation models using GPT-based LLM

    Modern simulation model design involves a wide range of specialists from various fields, and additional resources are required to develop and debug software code. This study demonstrates the capabilities of large language models (LLMs) at all stages of creating and using simulation models, starting from the formalization of dynamic system models, and assesses the contribution of these technologies to speeding up the creation of simulation models and reducing their complexity. The model development methodology includes stages of formalization, verification, and the creation of a mathematical model based on dialogues with LLMs. Experiments were conducted on the example of creating a multi-agent community of robots using hybrid automata. The experiments showed that the model created with the help of LLMs produces outcomes identical to those of the model developed in a specialized simulation environment. Analysis of the experimental results suggests significant potential for using LLMs to accelerate and simplify the creation of complex simulation models.

    Keywords: Simulation modeling, large language model, neural network, GPT-4, simulation environment, mathematical model
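    A hybrid automaton combines discrete modes with per-mode continuous dynamics. The sketch below is a deliberately tiny example of the formalism, not the paper's robot community: a single agent with two modes (ADVANCE/RETREAT), a guard condition that switches modes near a wall, and one state variable integrated in discrete steps. All parameter values are hypothetical.

```python
def simulate(steps, x0=0.0, wall=10.0, v=1.0, safe=2.0):
    """Two-mode hybrid automaton: ADVANCE until within `safe` of the wall,
    then RETREAT until the distance to the wall reaches 2*safe again."""
    x, mode, trace = x0, "ADVANCE", []
    for _ in range(steps):
        # Discrete transitions: guards fire on the continuous state
        if mode == "ADVANCE" and wall - x <= safe:
            mode = "RETREAT"
        elif mode == "RETREAT" and wall - x >= 2 * safe:
            mode = "ADVANCE"
        # Continuous dynamics depend on the current mode
        x += v if mode == "ADVANCE" else -v
        trace.append((mode, x))
    return trace

trace = simulate(12)  # the agent oscillates in front of the wall
```

    Verifying that an LLM-generated model like this matches a reference built in a simulation environment amounts to comparing such traces step by step.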

  • Application of language neural network models for malware detection

    The growing popularity of large language models in various fields of scientific and industrial activity is producing solutions that apply these technologies to very different tasks. This article proposes using the BERT, GPT, and GPT-2 language models to detect malicious code. A neural network model pre-trained on natural-language texts is fine-tuned on a preprocessed dataset containing program files with malicious and harmless code. During preprocessing, program files in the form of machine instructions are translated into a textual description in a formalized language. The model trained in this way is used to classify software according to whether it contains malicious code. The article reports an experiment on the use of the proposed model; the quality of the approach is evaluated against existing antivirus technologies, and ways to improve the model's characteristics are suggested.

    Keywords: antivirus, neural network, language models, malicious code, machine learning, model training, fine tuning, BERT, GPT, GPT-2
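    The pipeline — translate machine instructions into formalized text, then classify the text — can be sketched without the heavy transformer models. Below, a tiny Naive Bayes token classifier stands in for the fine-tuned BERT/GPT model; the "instruction descriptions" and token vocabulary are entirely hypothetical, chosen only to make the example self-contained.

```python
from collections import Counter
import math

# Toy corpus: formalized textual descriptions of machine instructions
# (hypothetical tokens, a stand-in for real disassembly output)
malicious = ["call decrypt_payload jmp self_modify", "write registry_run call download_exec"]
benign    = ["call print_string ret", "load config call render ret"]

def token_log_probs(docs):
    """Laplace-smoothed log probabilities of tokens in a class corpus."""
    counts = Counter(t for d in docs for t in d.split())
    total = sum(counts.values())
    denom = total + len(counts) + 1          # +1 for unseen tokens
    probs = {t: math.log((c + 1) / denom) for t, c in counts.items()}
    return probs, math.log(1 / denom)        # (known-token probs, unknown-token prob)

def classify(text):
    """Naive Bayes over instruction tokens: the higher-scoring class wins."""
    scores = []
    for docs in (malicious, benign):
        probs, unk = token_log_probs(docs)
        scores.append(sum(probs.get(t, unk) for t in text.split()))
    return "malicious" if scores[0] > scores[1] else "benign"

print(classify("call download_exec jmp self_modify"))  # → malicious
```

    The article's approach replaces this bag-of-tokens scoring with a transformer that also captures instruction order and context, which is what fine-tuning on the formalized descriptions provides.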

  • Quality Assessment of Natural Landscape Image Colorization Based on a Neural Network Autoencoder

    The article discusses the application of a neural network autoencoder to the problem of monochrome image colorization. The network architecture, the training method, and the preparation of training and validation data are described. A dataset of 540 natural landscape images at a resolution of 256 by 256 pixels was used for training. The quality of the model's outputs was compared: average values of the metrics, as well as the mean squared error of the VGG model outputs, are presented.

    Keywords: neural networks, machine learning, autoencoder, image quality analysis, colorization, CIELAB
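    The mean squared error mentioned in the abstract, and the related PSNR metric commonly used for colorization quality, can be computed directly on pixel values. A minimal sketch over flat pixel lists (real evaluations run per channel over full images, e.g. in CIELAB space):

```python
import math

def mse(a, b):
    """Pixel-wise mean squared error between two equal-sized images (flat lists, 0-255)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10 * math.log10(peak ** 2 / e)

ref  = [10, 20, 30, 40]   # reference pixels (toy example)
test = [12, 18, 30, 44]   # colorized output pixels
print(mse(ref, test))   # → 6.0
```

    The "MSE of the VGG model outputs" in the abstract applies the same idea one level up: the error is computed between VGG feature maps of the two images rather than raw pixels.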

  • Algorithm for generating three-dimensional terrain models in the monocular case using deep learning models

    The article is devoted to the development of an algorithm for three-dimensional terrain reconstruction based on single satellite images. The algorithm is based on the algorithmic formation of three-dimensional models based on the output data of two deep learning models to solve the problems of elevation restoration and instance segmentation, respectively. The paper also presents methods for processing large satellite images with deep learning models. The algorithm proposed in the framework of the work makes it possible to significantly reduce the requirements for input data in the problem of three-dimensional reconstruction.

    Keywords: three-dimensional reconstruction, deep learning, computer vision, elevation restoration, segmentation, depth determination, contour approximation
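    Deep learning models accept fixed-size inputs, so processing a large satellite image typically means slicing it into overlapping tiles, running the model per tile, and stitching the results. A sketch of the tile-index computation (assuming the image is at least one tile in each dimension; parameter values are illustrative):

```python
def tile_indices(width, height, tile, overlap):
    """Top-left corners of overlapping square tiles covering a width×height image.

    The stride is tile - overlap; the last tile in each axis is shifted back so
    it ends exactly at the image border — a common trick for CNN inference on
    images far larger than the network input. Assumes width >= tile, height >= tile.
    """
    stride = tile - overlap
    xs = list(range(0, width - tile + 1, stride))
    if xs[-1] != width - tile:
        xs.append(width - tile)          # final column flush with the right edge
    ys = list(range(0, height - tile + 1, stride))
    if ys[-1] != height - tile:
        ys.append(height - tile)         # final row flush with the bottom edge
    return [(x, y) for y in ys for x in xs]

tiles = tile_indices(1000, 600, 256, 32)  # 5 columns × 3 rows of 256-px tiles
```

    The overlap region lets the stitching step discard unreliable predictions near tile borders, which matters for both elevation restoration and instance segmentation.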

  • Research on the use of the MatLab Simulink software environment as a development environment for microcontrollers of the STM32 family

    This article presents a study evaluating the use of the MATLAB Simulink environment for developing systems based on microcontrollers of the STM32 family. Simulink's capabilities for modeling and testing control algorithms, as well as for generating code that can be deployed directly to microcontrollers, are analyzed. The article describes in detail the process of creating conceptual models and simulating them dynamically. The advantages of using Simulink include a faster development process thanks to automated builds and the ability to adjust model parameters in real time. In addition, Simulink can generate processor-optimized code, which significantly increases the efficiency of microcontroller systems. Attention is also drawn to some limitations, such as the need to create a configuration file in STM32CubeMX and potential difficulties in configuring it. The article provides an in-depth analysis of applying Simulink to STM32 development and can serve as a reference for those who want to deepen their knowledge in this area.

    Keywords: model-oriented programming, MatLab, Simulink, STM32, microcontroller, code generation, automatic control system, DC motor

  • Design and Development of Automated Information Systems for Recording Parameters of the Technological Process of Production of an Industrial Enterprise

    The article is devoted to the creation of a highly specialized automated information system for recording the parameters of the technological process of production of an industrial enterprise. The development of such software products will simplify and speed up the work of technologists and reduce the influence of the human factor in collecting and processing data.

    Keywords: automated information system, system for recording production process parameters, Rammler-Breich diagram, role-based data access system

  • Multi-agent search engine optimization algorithm based on hybridization and co-evolutionary procedures

    The paper proposes a hybrid multi-agent solution search algorithm with a reconfigurable architecture, containing procedures that simulate the behavior of a bee colony and a swarm of agents, together with co-evolution methods. The developed hybrid algorithm is based on a hierarchical multi-population approach, which uses the diversity of a set of solutions to expand the search areas. Formulations of the canonical bee colony and agent swarm metaheuristics are presented. As a measure of the similarity of two solutions, affinity is used: a measure of equivalence or relatedness (similarity, closeness) of two solutions. The principle of operation and application of the directed mutation operator is described. A modified chromosome swarm paradigm is presented that, in contrast to canonical methods, supports searching for solutions with integer parameter values. The time complexity of the algorithm ranges from O(n²) to O(n³).

    Keywords: swarm of agents, bee colony, co-evolution, search space, hybridization, reconfigurable architecture
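    The core idea of an integer-valued swarm search — agents mix their positions with the best known solution and apply a small directed mutation — can be sketched in a few lines. This is a heavily simplified stand-in for the paper's hybrid algorithm (no bee colony phases, no co-evolving populations, no affinity measure), shown only to illustrate integer-parameter search:

```python
import random

def integer_swarm(objective, dim, low, high, agents=20, iters=200, seed=1):
    """Minimal swarm-style minimization over integer vectors: each agent mixes
    its position with the global best, then applies a ±1 integer mutation on
    one coordinate, accepting the candidate only if it improves."""
    rnd = random.Random(seed)
    pos = [[rnd.randint(low, high) for _ in range(dim)] for _ in range(agents)]
    best = min(pos, key=objective)
    for _ in range(iters):
        for i, p in enumerate(pos):
            # Recombine with the best-known solution coordinate by coordinate
            cand = [b if rnd.random() < 0.5 else x for b, x in zip(best, p)]
            j = rnd.randrange(dim)
            cand[j] = min(high, max(low, cand[j] + rnd.choice((-1, 1))))  # directed integer mutation
            if objective(cand) < objective(p):
                pos[i] = cand
        best = min(pos + [best], key=objective)
    return best

sphere = lambda v: sum(x * x for x in v)       # toy objective, minimum at the origin
best = integer_swarm(sphere, dim=4, low=-10, high=10)
```

    The clamping to [low, high] keeps candidates inside the integer search space, which is exactly the constraint the modified chromosome swarm paradigm addresses.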

  • Planning and designing an organization's information system: stages and methods

    Information technologies are used in all spheres of modern society. Databases and document flow in organizations must be clearly organized and streamlined, and the interconnected work of company departments and services must be ensured, so that information flows can be collected and processed and effective management decisions made. The article shows where the planning and design stages of information technologies, and the methods for their development, fit within the algorithm for forming the strategy of an organization's IT project. Approaches to setting up automated workstations are illustrated with the example of the organizational and managerial structure of an enterprise. The services and departments responsible for planning, accounting, analysis, and control of the organization's financial results are identified, leading to conclusions about how to improve the quality of IT project development.

    Keywords: information system, IT project, planning, design, modeling, automated workstations

  • Algorithm for searching for patterns of building location using geoinformation technologies

    The paper proposes a method for identifying patterns of the relative positions of buildings, which can be used to analyze the dispersion of air pollutants in urban areas. The impact of building configuration on pollutant dispersion in the urban environment is investigated. Patterns of building arrangements are identified. The methods and techniques for recognizing buildings are examined. The outcomes of applying the proposed method to identify building alignments are discussed.

    Keywords: patterns of building location, geoinformation technologies, GIS, geoinformation systems, atmospheric air

  • A gaming approach to diagnosing depression based on user behavior analysis

    This article is dedicated to developing a method for diagnosing depression using the analysis of user behavior in a video game on the Unity platform. The method involves employing machine learning to train classification models based on data from gaming sessions of users with confirmed diagnoses of depression. As part of the research, users are engaged in playing a video game, during which their in-game behavior is analyzed using specific depression criteria taken from the DSM-5 diagnostic guidelines. Subsequently, this data is used to train and evaluate machine learning models capable of classifying users based on their in-game behavior. Gaming session data is serialized and stored in the Firebase Realtime Database in text format for further use by the classification model. Classification methods such as decision trees, k-nearest neighbors, support vector machines, and random forest methods have been applied. The diagnostic method in the virtual space demonstrates prospects for remote depression diagnosis using video games. Machine learning models trained based on gaming session data show the ability to effectively distinguish users with and without depression, confirming the potential of this approach for early identification of depressive states. Using video games as a diagnostic tool enables a more accessible and engaging approach to detecting mental disorders, which can increase awareness and aid in combating depression in society.

    Keywords: videogame, unity, psychiatric diagnosis, depression, machine learning, classification, behavior analysis, in-game behavior, diagnosis, virtual space
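    Of the classifiers listed in the abstract, k-nearest neighbors is the simplest to show end to end. The sketch below votes over hypothetical per-session feature vectors (the feature names and values are invented for illustration; the real model is trained on recorded gaming sessions of users with confirmed diagnoses):

```python
def knn_predict(train, query, k=3):
    """k-nearest-neighbours majority vote over labelled feature vectors.

    `train` is a list of (features, label) pairs; distance is squared Euclidean.
    """
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Hypothetical session features: [mean reaction time, idle fraction, retries]
sessions = [
    ([0.9, 0.6, 5], "depressed"), ([0.8, 0.7, 6], "depressed"), ([0.85, 0.65, 7], "depressed"),
    ([0.3, 0.1, 1], "control"),   ([0.2, 0.2, 2], "control"),   ([0.25, 0.15, 1], "control"),
]
print(knn_predict(sessions, [0.82, 0.6, 6]))  # → depressed
```

    In the article's pipeline these feature vectors would be deserialized from the Firebase Realtime Database records of each gaming session before classification.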

  • Development of an Optical Cell for Collecting and Processing Video Information for Earth Remote Sensing

    As the space industry pushes to reduce development and production costs and simplify the use of space hardware, small spacecraft, including CubeSats, have become popular representatives of this trend. Over the last decade, the development, production, and operation of small spacecraft have been in demand thanks to a number of advantages: simplicity of design, short design and production times, and reduced development costs. The main challenge in designing CubeSats is miniaturisation. This paper presents the development of an optical cell for collecting and processing video information for remote sensing systems on a CubeSat 3U satellite, with the aim of obtaining the best possible image characteristics within the strict physical limitations of the CubeSat unit. Using the computer-aided design systems Altium Designer and Creo Parametric, the structural diagram, electrical circuit diagram, PCB topology, 3D model, and housing design of the cell were developed. The resulting PCB measures 90×90 mm, is 1.9 mm thick, and has 10 layers at accuracy class 5; the cell is 20 mm high and weighs 110 grams.

    Keywords: space hardware, Earth remote sensing, small spacecraft, nanosatellite, printed circuit board, small satellite development trend, printed circuit board topology, CubeSat

  • Multi-criteria parametric optimization of food production

    The article deals with multi-criteria mathematical programming problems aimed at optimizing food production. One of the one-parameter programming models addresses the problem of combining crop production, animal husbandry, and product processing. It is proposed to use the time factor as the main parameter, since some production and economic characteristics can be described by significant trends. The second multi-criteria parametric programming model makes it possible to optimize the production of agricultural products and the harvesting of wild plants in relation to the municipality, which is important for territories with developed agriculture and a high potential of food forest resources.

    Keywords: parametric programming, agricultural production, two-criteria model

  • Parallel algorithm for simulating the dynamics of cargo volume in a storage warehouse

    Simulation analysis requires large numbers of model runs and considerable computational time. Parallel programming technologies, applied within the implemented models, make it possible to reduce computation time in complex simulation and statistical modeling. This paper addresses the task of parallelizing the simulation algorithm for the dynamics of a given indicator (using the example of a model of cargo volume dynamics in a storage warehouse). The model is presented as computations of input and output flows, specified as a moving-average autoregressive model with trend components; the flows of the described processes are subject to a constraint on the volume (size) of the limiting parameter, with strong stationarity of each of them. A parallelization algorithm using OpenMP technology is proposed. The efficiency indicators of the parallel algorithm are estimated: speedup, calculated as the ratio of the execution times of the sequential and parallel algorithms, and efficiency, reflecting the proportion of time that computational threads spend in calculations, calculated as the ratio of the speedup to the number of processors. The dependence of the execution times of the sequential and parallel algorithms on the number of simulation runs was constructed. The efficiency of the parallel algorithm for the main stages of the simulation was about 73%, with a speedup of 4.38 on 6 processors. Computational experiments demonstrate a fairly high efficiency of the proposed parallel algorithm.

    Keywords: simulation modeling, parallel programming, parallel algorithm efficiency, warehouse loading model, OpenMP technology
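    The two performance indicators used in the abstract have one-line definitions, sketched below; the abstract's own figures (speedup 4.38 on 6 processors) indeed give an efficiency of about 73%:

```python
def speedup(t_seq, t_par):
    """Speedup: sequential execution time divided by parallel execution time."""
    return t_seq / t_par

def efficiency(t_seq, t_par, threads):
    """Efficiency: fraction of the ideal linear speedup achieved on `threads` threads."""
    return speedup(t_seq, t_par) / threads

# Figures from the abstract: a speedup of 4.38 on 6 processors
print(round(efficiency(4.38, 1.0, 6), 2))  # → 0.73
```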

  • Analysis of problems and methods for mathematical modeling of fires in tunnel structures

    The article presents a systematic review of scientific works by domestic and foreign authors devoted to modeling fires in tunnels for various purposes. Using the search results in the databases of scientific publications eLIBRARY.RU and Google Scholar, 30 of the most relevant articles were identified that meet the following criteria: the ability to access the full-text version, the material was published in a peer-reviewed publication, the article has a significant number of citations, and the presence of a description of the results of the authors’ own experiments. An analysis was made of the methodology used in the research, as well as the results of studying fires in transport tunnels (road, railway, subway) and mine workings presented in the works. A classification of publications was carried out according to the types of tunnel structures, cross-sectional shape, subject of research, mathematical model used to describe the processes of heat and mass transfer in a gaseous environment and heating of enclosing structures, software used, validation of experimental data, and the use of scaling in modeling. It has been established that the problems of mathematical modeling of fires in deep tunnel structures, as well as modeling of a fire in a tunnel taking into account the operation of fire protection systems, are poorly studied.

    Keywords: fire modeling, tunnel, mathematical model, fire prediction, heat transfer, structures, systematic review

  • Identifying customers in the modern business environment: achievements and prospects using artificial intelligence

    In today's highly competitive business environment, understanding customer needs, preferences and behavior is of paramount importance. Customer identification software is a digital solution for accurate customer identification and authentication used in various sectors such as banking, healthcare, and e-commerce. Big data, machine learning, and artificial intelligence technologies have greatly improved the customer identification process, allowing companies to improve personalization of services and products, and increase customer satisfaction. However, implementing AI for customer identification faces challenges related to protecting data privacy, training staff, and selecting the right AI tools. In the future, deep learning, neural networks and the Internet of Things may provide new opportunities for customer identification, providing higher levels of security and privacy. However, there is a need to comply with privacy legislation and ensure an ethical approach to the use of AI in customer identification.

    Keywords: software, customer identification, traditional methods, machine learning, artificial intelligence, evolution of identification software, future trends

  • Functional model of a virtual simulator for the organization of evacuation training

    The article discusses the possibilities of using virtual reality technologies to organize fire safety training for schoolchildren. The requirements for the virtual simulator are formulated from the point of view of ensuring the possibility of conducting classes on practicing evacuation skills from the building of a specific educational organization. A functional model of a virtual simulator is presented, built on the basis of the methodology of structural analysis and design, describing the process of developing a virtual space with interactive elements and organizing training for the evacuation of students based on it. A semantic description of the control signals of the functional model, its inputs, mechanisms and outputs is given. The contents of the model subsystems are revealed. Requirements for software, hardware and methodological support for training using virtual reality technologies when conducting fire training are formulated. The concept of creating a digital twin of a building of a general education organization in virtual space is substantiated. Examples of improving virtual space by using the results of mathematical modeling of fire are given. The use of visualization of smoke and flame in virtual space is justified to avoid the occurrence of panic in children during evacuation in fire conditions. Conclusions are drawn about the advantages of the proposed virtual simulator. The prospects for further research and solution to the problem of developing skills for evacuating students from a building of a general education organization in case of fire are listed.

    Keywords: virtual reality, virtual simulator, virtual space, fire safety, evacuation, fire training, mathematical modeling of fire, educational technologies, functional modeling

  • Development of a method for analyzing the surface quality of a product based on anomaly detection methods

    This article is devoted to the development of a method for detecting defects on the surface of a product based on anomaly detection methods using a feature extractor based on a convolutional neural network. The method involves the use of machine learning to train classification models based on the obtained features from a layer of a pre-trained U-Net neural network. As part of the study, an autoencoder is trained based on the U-Net model on data that does not contain images of defects. The features obtained from the neural network are classified using classical algorithms for identifying anomalies in the data. This method allows you to localize areas of anomalies in a test data set when only samples without anomalies are available for training. The proposed method not only provides anomaly detection capabilities, but also has high potential for automating quality control processes in various industries, including manufacturing, medicine, and information security. Due to the advantages of unsupervised machine learning models, such as robustness to unknown forms of anomalies, this method can significantly improve the efficiency of quality control and diagnostics, which in turn will reduce costs and increase productivity. It is expected that further research in this area will lead to even more accurate and reliable methods for detecting anomalies, which will contribute to the development of industry and science.

    Keywords: U-Net, neural network, classification, anomaly, defect, novelty detection, autoencoder, machine learning, image, product quality, performance
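    The key property of the approach — training only on defect-free samples and flagging anything far from "normal" — can be shown with the simplest classical detector: distance to the centroid of normal feature vectors. The two-dimensional vectors here are toy stand-ins for features extracted from a U-Net layer, and the threshold rule is illustrative:

```python
import math

def fit_centroid(normals):
    """Centroid of defect-free feature vectors (stand-in for U-Net features)."""
    d = len(normals[0])
    return [sum(v[i] for v in normals) / len(normals) for i in range(d)]

def anomaly_score(center, vec):
    """Euclidean distance to the normal centroid: larger means more anomalous."""
    return math.dist(center, vec)

normals = [[1.0, 0.9], [1.1, 1.0], [0.9, 1.1]]      # features of defect-free samples
center = fit_centroid(normals)
# Threshold: 1.5× the largest score seen on normal data (illustrative rule)
threshold = max(anomaly_score(center, v) for v in normals) * 1.5
print(anomaly_score(center, [3.0, 3.0]) > threshold)  # → True (flagged as a defect)
```

    Replacing the centroid distance with k-NN distance or a one-class SVM over the same features gives the "classical algorithms for identifying anomalies" the abstract refers to, while localization comes from scoring features per image region.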

  • Minimizing costs when transmitting information over cellular communication channels

    The problem of planning the sending of messages in a cellular network to destinations with known needs is considered. It is assumed that the costs of transmitting information on the one hand are proportional to the transmitted volumes and the cost of transmitting a unit of information over the selected communication channels in cases of exceeding the traffic established by the contract with the mobile operator, and on the other hand are associated with a fixed subscription fee for the use of channels, independent of the volume of information transmitted. An indicator of the quality of the plan in this setting is the total cost of sending the entire planned volume of messages. A procedure for reducing the formulated problem to a linear transport problem is proposed. The accuracy of the solution obtained on the basis of the proposed algorithm is estimated.

    Keywords: single jump function, transport problem, minimum total cost criterion, computational complexity of the algorithm, confidence interval
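    The cost structure described in the abstract — a fixed subscription fee per channel plus a proportional charge only for traffic above the contracted volume — is a single-jump function of the transmitted volume. A sketch with hypothetical tariffs (the reduction to a linear transport problem is the article's contribution and is not reproduced here):

```python
def channel_cost(volume, subscription, contracted, unit_price):
    """Cost of sending `volume` units over one channel: a fixed subscription fee
    plus a per-unit charge only for traffic above the contracted allowance."""
    return subscription + unit_price * max(0, volume - contracted)

def plan_cost(plan, channels):
    """Total cost of a plan {channel: volume} given per-channel tariffs."""
    return sum(channel_cost(v, *channels[c]) for c, v in plan.items())

# Hypothetical tariffs: (subscription fee, contracted volume, price per extra unit)
channels = {"A": (100, 50, 2.0), "B": (40, 10, 5.0)}
print(plan_cost({"A": 70, "B": 10}, channels))  # → 180.0
```

    The plan quality indicator from the abstract is exactly this total: the planning problem is to choose the volumes per channel that meet all destinations' needs at minimum `plan_cost`.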

  • Development of an automated system for planning road surface maintenance operations

    The article is dedicated to the development of an automated system aimed at creating a program of works for the maintenance of road surfaces. The system is based on data from the diagnostics and assessment of the technical condition of roads, in particular data on the assessment of the International Roughness Index (IRI). The development of a program of works for the maintenance of road surfaces is carried out based on the analysis of the IRI assessment both in the short term and on the time horizon of the contractor's work under the contract. The system is developed on the principle of modular programming, where one of the modules uses polynomial regression to predict the IRI assessment for several years ahead. The analysis of the deviation of the predicted IRI value from the actual one is the basis for the selection of works included in the program. The financial module allows the system to comply with the budget framework limited by the contract and provides an opportunity to evaluate the effectiveness of planning by calculating the difference between the cost of road surface maintenance and the contract value. Practical studies demonstrate that the system is capable of effectively and efficiently planning road surface maintenance works in accordance with the established contract deadlines.

    Keywords: road surface, automated system, modular programming, machine learning, recurrent neural network, road condition, international roughness index, road diagnostics, road work planning, road work program
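    The forecasting module's idea — fit a regression to historical IRI measurements and extrapolate several years ahead — can be sketched with the degree-1 case of polynomial regression. The IRI history below is invented for illustration:

```python
def linear_fit(years, iri):
    """Ordinary least-squares line iri ≈ a + b·year
    (the degree-1 case of the polynomial regression used for forecasting)."""
    n = len(years)
    mx, my = sum(years) / n, sum(iri) / n
    b = sum((x - mx) * (y - my) for x, y in zip(years, iri)) / sum((x - mx) ** 2 for x in years)
    return my - b * mx, b

def forecast(years, iri, year):
    """Extrapolate the fitted trend to a future year."""
    a, b = linear_fit(years, iri)
    return a + b * year

history = {2020: 2.1, 2021: 2.4, 2022: 2.7, 2023: 3.0}   # illustrative IRI values, m/km
iri_2026 = forecast(list(history), list(history.values()), 2026)
print(round(iri_2026, 2))  # → 3.9
```

    In the described system, the deviation of such a forecast from the actual measured IRI drives the selection of maintenance works, while the financial module checks the chosen works against the contract budget.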

  • About the integration of the Telegram bot into the information system for processing the results of sports competitions

    The article describes aspects of integrating a Telegram bot, implemented on the 1C:Enterprise platform, into an information system for processing the results of sports competitions. The basic functionality of user interaction with the bot is considered. A diagram of the system states during user interaction with the bot is provided, illustrating the possible transitions when the user selects particular commands or buttons. A sequence diagram of the registration process for event participants using the Telegram bot is presented, illustrating message transmission using POST and GET requests.

    Keywords: processing the results of sports competitions, Telegram bot, messenger, 1C:Enterprise platform, state processing, information systems in the field of sports
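    The POST/GET exchange with the Telegram Bot API has a simple shape regardless of the platform the bot runs on (1C:Enterprise in the article, Python here). The sketch below only builds the request for the real `sendMessage` method without sending it; the token is a placeholder:

```python
from urllib.parse import urlencode

API_BASE = "https://api.telegram.org"

def build_send_message(token, chat_id, text):
    """URL and form body for the Bot API sendMessage method.

    Outgoing messages are sent as a POST to this URL; incoming updates are
    fetched with GET getUpdates or delivered to a registered webhook.
    """
    url = f"{API_BASE}/bot{token}/sendMessage"
    body = urlencode({"chat_id": chat_id, "text": text})
    return url, body

url, body = build_send_message("<TOKEN>", 42, "Registration confirmed")
```

    The state diagram in the article governs which such request the system issues next in response to each user command or button press.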

  • Mathematical apparatus and technological structure of a system for forecasting synthetic voice deepfakes

    The article considers mathematical models for collecting and processing voice content, on the basis of which a logical scheme for forecasting synthetic voice deepfakes has been developed. Experiments were conducted on the selected mathematical formulas and on sets of Python libraries that allow real-time analysis of audio content in an organization. The capabilities of neural network software for detecting voice fakes and generated synthetic (artificial) speech are considered, and the main criteria for examining voice messages are determined. Based on the results of the experiments, the mathematical apparatus required to successfully detect voice deepfakes has been formed. A list of technical standards recommended for collecting voice information and improving information security in the organization has been compiled.

    Keywords: neural networks, detection of voice deepfakes, information security, synthetic voice speech, voice deepfakes, technical standards for collecting voice information, algorithms for detecting audio deepfakes, voice cloning

  • Using machine learning technologies to develop optimal traffic light control programs

    One of the key directions in the development of intelligent transport systems (ITS) is the introduction of automated traffic management systems. Within these systems, special attention is paid to the effective control of traffic lights, an important element of automated traffic management. The article is devoted to the development of an automated system for compiling an optimal traffic light program for a given section of the road network. The Simulation of Urban Mobility (SUMO) package was chosen as the modeling tool, the BFGS (Broyden-Fletcher-Goldfarb-Shanno) algorithm was used for optimization, and gradient boosting was used as the machine learning method. Practical results show that the developed system can quickly and effectively optimize the phase parameters and durations of traffic light cycles, significantly improving traffic management on the corresponding section of the road network.

    Keywords: intelligent transport network, traffic management, machine learning, traffic jam, traffic light, phase of the traffic light cycle, traffic flow, modeling of the road network, python, simulation of urban mobility
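    The optimization loop — evaluate a candidate green-time split against a traffic objective, then adjust it — can be sketched without SUMO or BFGS. Below, a toy convex delay model stands in for the simulation-based objective, and a derivative-free ternary search stands in for BFGS; flow values and the delay formula are invented for illustration:

```python
def total_delay(green_share, flow_ns=0.6, flow_ew=0.3):
    """Toy convex delay model: each approach's delay grows as its flow divided
    by the share of green time it receives (a stand-in for a SUMO objective)."""
    return flow_ns / green_share + flow_ew / (1.0 - green_share)

def minimize(f, lo=0.05, hi=0.95, iters=100):
    """Derivative-free ternary search on a unimodal objective, used here in
    place of the gradient-based BFGS run driven by simulation results."""
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2          # the minimum lies left of m2
        else:
            lo = m1          # the minimum lies right of m1
    return (lo + hi) / 2

best = minimize(total_delay)   # optimal share of green time for the NS approach
print(round(best, 2))  # → 0.59
```

    The busier north-south approach correctly receives the larger green share; in the article, the machine-learning model additionally predicts the objective to avoid re-running the full simulation for every candidate.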

  • Evaluating the efficiency of fog computing in geographic information systems

    It is proposed to use fog computing to reduce the load on data transmission devices and computing systems in GIS. To improve the accuracy of estimating the efficiency of fog computing, a non-Markov model of a multichannel system with queues, "warming up", and "cooling down" is used. A method is proposed for calculating the probabilistic-temporal characteristics of a non-Markov system with queues and Cox distributions of the "warming up" and "cooling down" durations. A program has been created to calculate the efficiency characteristics of fog computing. The solution can be used as a software tool for predictive evaluation of the efficiency of access to geographic information systems, taking into account the features of fog computing technology and the costs of ensuring information security.

    Keywords: fog computing, model of a multi-channel service system with queues, “warming up”, “cooling down”, geographic information systems, Cox distribution
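    For context, the Markov baseline of such a multichannel queueing model is the classic M/M/c system, whose probability of queueing is given by the Erlang C formula. The article's model generalizes this by replacing exponential phases with Cox distributions and adding warm-up/cool-down periods; the sketch below shows only the baseline:

```python
from math import factorial

def erlang_c(servers, offered_load):
    """Probability that an arriving request must queue in an M/M/c system
    (Erlang C). `offered_load` is arrival rate over per-server service rate;
    requires offered_load < servers for a stable system."""
    a, c = offered_load, servers
    rho = a / c
    top = (a ** c) / (factorial(c) * (1 - rho))
    bottom = sum(a ** k / factorial(k) for k in range(c)) + top
    return top / bottom

print(round(erlang_c(1, 0.5), 3))  # → 0.5  (for c=1 this reduces to the utilisation)
```

    Comparing such baseline characteristics against the non-Markov model with Cox-distributed warm-up and cool-down shows how much the exponential assumption distorts the efficiency estimate for fog nodes.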