The article discusses a method for detecting local areas with hidden defects in extended products, i.e., products whose length is several orders of magnitude greater than their other dimensions, by processing non-destructive testing information. To obtain the necessary information, various means of introscopy and radiation of different physical nature are used. Processing of the information obtained by scanning inspection should detect defective areas and determine their nature. To compare different processing methods and select the optimal one, computer modeling was used to simulate both the acquisition of the information and its processing, which simplifies the selection of the method best suited to detecting a defect. The article describes typical models of the received signal and presents the simulation results.
Keywords: defects, non-destructive testing, extended products, simulation model, moving averaging, time series
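As a minimal illustration of the moving-averaging approach named in the keywords, the sketch below flags samples of a scan signal that deviate strongly from a moving-average baseline; the window size, threshold factor, and synthetic signal are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def detect_defect_regions(signal, window=25, k=3.0):
    """Flag samples where the signal deviates from its moving average."""
    kernel = np.ones(window) / window
    baseline = np.convolve(signal, kernel, mode="same")  # moving average
    residual = signal - baseline
    threshold = k * residual.std()          # deviation threshold (illustrative)
    return np.where(np.abs(residual) > threshold)[0]

# Synthetic scan: smooth background plus a localized anomaly around sample 500
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 6 * np.pi, 1000)) + 0.1 * rng.standard_normal(1000)
x[495:505] += 1.5                           # simulated hidden defect
print(detect_defect_regions(x))
```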
The article presents expressions for calculating the power consumption of end nodes when transmitting a message in a wireless sensor network. Data are obtained on the power consumption of a sensor-network end node as a function of the signal attenuation in the wireless channel, as well as of the configured output power and spreading factor of the transmitted signals.
Keywords: Internet of Things, sensor network, LoRaWAN, IoT system, end node power consumption, spreading factor, output power
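Since the abstract does not reproduce the expressions themselves, the following sketch shows the standard LoRa time-on-air calculation (per Semtech's published formula) and a per-packet energy estimate; the supply voltage and transmit current are illustrative, datasheet-order assumptions, not values from the paper.

```python
import math

def lora_time_on_air(payload_bytes, sf, bw=125e3, cr=1, preamble=8,
                     explicit_header=True, low_dr_optimize=False):
    """Time on air (s) for one LoRa packet, per Semtech's formula (CRC on)."""
    t_sym = (2 ** sf) / bw
    ih = 0 if explicit_header else 1
    de = 1 if low_dr_optimize else 0
    n_payload = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16 - 20 * ih)
                  / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25) * t_sym + n_payload * t_sym

# Energy per transmission: E = U * I_tx * T_air.
# Supply voltage and TX current below are illustrative assumptions.
U, I_TX = 3.3, 0.044          # 3.3 V, ~44 mA at +14 dBm output power
t_air = lora_time_on_air(payload_bytes=20, sf=9)
print(f"time on air: {t_air*1e3:.1f} ms, energy: {U * I_TX * t_air * 1e3:.2f} mJ")
```

Raising the spreading factor lengthens the time on air exponentially, which is why the end node's energy per message grows with SF even at fixed output power.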
The article discusses studies of the changes in the output signal of a measuring device used to assess the quality of mixing of natural and chemical fibers in semi-finished spinning products obtained on a draw frame at various passages. Constructing polynomial models during data analysis makes it possible to interpret information about the uniformity of fiber distribution in the sliver without accounting for the effect of changes in its linear density.
Keywords: fiber mixing quality, linear density, infrared estimation method, data estimation, linear polynomial, polynomial function
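As context for the polynomial models mentioned above, here is a minimal sketch of fitting linear and quadratic polynomials to a sensor signal and comparing residuals; the data are synthetic and the degrees are illustrative.

```python
import numpy as np

# Illustrative readings: sensor output along the sliver (arbitrary units)
position = np.linspace(0, 10, 50)
signal = (2.0 + 0.3 * position - 0.02 * position**2
          + 0.05 * np.random.default_rng(1).standard_normal(50))

# Fit linear and quadratic polynomial models and compare residuals
for degree in (1, 2):
    coeffs = np.polyfit(position, signal, degree)
    residual = signal - np.polyval(coeffs, position)
    print(degree, coeffs.round(3), f"RMS residual: {residual.std():.4f}")
```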
The article considers the problem of cryptanalysis of an information security system based on the computationally hard problem of solving Diophantine equations. A mathematical model of such a protection system is described, and a solution to the cryptanalysis problem using an artificial immune system adapted to solving Diophantine equations is proposed. The paper discusses the basic principles of building artificial immune systems and presents the results of experiments evaluating the effectiveness of the proposed approach on Diophantine equations of degree not exceeding six. The results obtained demonstrate the possibility of using artificial immune systems to solve the problem of cryptanalysis of information security systems based on Diophantine equations.
Keywords: cryptanalysis, information security system, Diophantine equations, artificial immune system, adaptive algorithm, efficiency assessment
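The paper's algorithm is not reproduced in the abstract; as an illustration of the general clonal-selection principle behind artificial immune systems, the sketch below searches for integer roots of a sample cubic Diophantine equation. The equation, population sizes, and mutation ranges are illustrative assumptions.

```python
import random

def f(v):
    """Residual of the sample Diophantine equation x^3 + y^3 + z^3 = 29."""
    x, y, z = v
    return x**3 + y**3 + z**3 - 29

def clonal_selection(pop_size=50, clones=5, generations=200, span=20):
    pop = [[random.randint(-span, span) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda v: abs(f(v)))      # affinity = -|residual|
        if f(pop[0]) == 0:
            return pop[0]                      # exact integer solution found
        new_pop = pop[:pop_size // 2]          # keep the fittest antibodies
        for v in pop[:pop_size // 10 + 1]:     # clone and hypermutate the best
            for _ in range(clones):
                new_pop.append([g + random.randint(-2, 2) for g in v])
        while len(new_pop) < pop_size:         # top up with fresh antibodies
            new_pop.append([random.randint(-span, span) for _ in range(3)])
        pop = new_pop[:pop_size]
    return min(pop, key=lambda v: abs(f(v)))

print(clonal_selection())  # e.g. [3, 1, 1], since 27 + 1 + 1 = 29
```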
A low-profile wideband circularly polarized antenna is proposed for use in navigation satellite systems. The VSWR ≤2 bandwidth is 75%. The 3-dB axial ratio (circular polarization) bandwidth is 54%. The designed radiating element was fabricated and measured as part of the antenna array.
Keywords: circular polarization, wideband antenna, antenna array, axial ratio
Recognizing human emotions from speech is currently a pressing task, with potential applications in fields such as economics, medicine, marketing, security, and education. This work examines the recognition of human emotions specifically from speech, because speech is an informative indicator that is quite difficult to fake. The paper discusses a neural network approach to the problem. A recurrent neural network with LSTM memory was implemented, and a custom dataset was collected on which the model was trained. The dataset includes the speech of Russian-speaking actors, which improves the quality of the model for Russian-speaking users.
Keywords: neural network, emotion detection, speech, classification, deep learning, recurrent model, LSTM
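A minimal sketch of a recurrent classifier of the kind described, assuming MFCC feature sequences as input; the layer sizes, feature shape, and emotion count are illustrative assumptions, not the paper's configuration.

```python
import tensorflow as tf

NUM_EMOTIONS = 6             # illustrative: anger, joy, sadness, fear, surprise, neutral
N_FRAMES, N_MFCC = 200, 13   # assumed feature shape: MFCC frames per utterance

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_FRAMES, N_MFCC)),
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_EMOTIONS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, validation_split=0.1, epochs=30)  # with real MFCC data
```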
This article describes aspects of ontology design for the field of information security. Examples are given of the use of ontologies in information security, including risk management, classification of threats and vulnerabilities, and incident monitoring, as well as examples of existing ontology developments for information security. The relevance of developing legal ontologies is established, and examples of their practical use are given. The importance of designing a legal ontology for the information security domain is also substantiated by the presence of a large body of legislation. The paper presents the developed ontology model for one of the regulatory documents in the field of personal data protection. The approach to ontology design presented in the paper is proposed for application in the development of an information security learning system.
Keywords: security, information security, protection of information, information, domain model, normative legal act, ontology, ontological approach, design, legal ontology
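As a toy illustration of modeling a regulatory document as an ontology (the paper's actual model is not reproduced here), the sketch below declares two classes and a linking property with rdflib; all names are hypothetical.

```python
from rdflib import Graph, Namespace, RDF, RDFS

ISEC = Namespace("http://example.org/infosec#")  # hypothetical namespace
g = Graph()
g.bind("isec", ISEC)

# Classes and a property for a personal-data-protection ontology (illustrative)
g.add((ISEC.PersonalData, RDF.type, RDFS.Class))
g.add((ISEC.Operator, RDF.type, RDFS.Class))
g.add((ISEC.processes, RDF.type, RDF.Property))
g.add((ISEC.processes, RDFS.domain, ISEC.Operator))
g.add((ISEC.processes, RDFS.range, ISEC.PersonalData))

print(g.serialize(format="turtle"))
```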
This article presents a study of the application of the YOLOv8 neural network model to road sign detection. During the study, a model based on YOLOv8 was developed and trained that successfully detects road signs in real time. The article also presents the results of experiments in which the YOLOv8 model is compared to other widely used methods for sign detection. The obtained results have practical significance in the field of road traffic safety, offering an approach to automatic road sign detection that contributes to improving speed control and driver attentiveness and to reducing road accidents.
Keywords: machine learning, road signs, convolutional neural networks, image recognition
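For reference, training and inference with the Ultralytics YOLOv8 API follow the pattern below; the dataset config name and image file are placeholders, and the hyperparameters are illustrative rather than those used in the study.

```python
from ultralytics import YOLO

# Start from a pretrained checkpoint and fine-tune on a road-sign dataset;
# "road_signs.yaml" is a placeholder for a dataset config in Ultralytics format.
model = YOLO("yolov8n.pt")
model.train(data="road_signs.yaml", epochs=50, imgsz=640)

# Run inference on a single frame; a real-time pipeline would feed a video stream
results = model("crossing.jpg")
for box in results[0].boxes:
    print(int(box.cls), float(box.conf), box.xyxy.tolist())
```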
The article presents a mathematical model for assessing the applicability of intelligent chatbots in the context of studying dialects of foreign languages. The model is based on the analysis of key parameters and characteristics of chatbots, as well as their ability to adapt to various dialects. The model's parameters include questions, answers, evaluation criteria, types, and costs of errors. The quality of the chatbot's responses is evaluated both according to individual criteria and overall. To test the effectiveness of the proposed method, an experimental study was conducted using the dialects of the German language as examples. During the research, such intelligent chatbots as ChatGPT-3.5, GPT-4, YouChat, Bard, DeepSeek, and Chatsonic were evaluated. The analysis of the results of applying the developed mathematical model showed that at present, the models by OpenAI (ChatGPT-3.5 and GPT-4) offer the broadest range of possibilities. ChatGPT-3.5 demonstrated the best results in communication in Bavarian and Austrian dialects, while YouChat excelled in the Swiss dialect. The obtained results allow for important practical recommendations to be made for selecting intelligent chatbots in the field of studying dialects of foreign languages and serve as a basis for further research in the area of evaluating the effectiveness of educational technologies based on artificial intelligence.
Keywords: large language model, chatbot, quality assessment, foreign language learning, artificial intelligence technology in education
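The paper's mathematical model is not given in the abstract; the sketch below only illustrates the ingredients it names (per-criterion scores, criterion weights, and error types with costs) combined into per-answer and overall quality scores. All weights, costs, and numbers are hypothetical.

```python
# Hypothetical structure: each answer is scored on several criteria in [0, 1],
# criteria carry weights, and detected errors subtract a type-specific cost.
CRITERIA_WEIGHTS = {"correctness": 0.4, "dialect_fidelity": 0.4, "fluency": 0.2}
ERROR_COSTS = {"grammar": 0.05, "wrong_dialect": 0.15, "off_topic": 0.30}

def answer_quality(scores, errors):
    base = sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)
    penalty = sum(ERROR_COSTS[e] for e in errors)
    return max(base - penalty, 0.0)

def chatbot_quality(answers):
    """Overall quality: mean per-answer quality over the question set."""
    return sum(answer_quality(s, e) for s, e in answers) / len(answers)

sample = [
    ({"correctness": 0.9, "dialect_fidelity": 0.7, "fluency": 0.95}, ["grammar"]),
    ({"correctness": 0.8, "dialect_fidelity": 0.9, "fluency": 0.85}, []),
]
print(f"{chatbot_quality(sample):.3f}")
```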
In this article we present a study on Natural Language Processing (NLP) and Machine Learning (ML) techniques, specifically focusing on deep learning algorithms. The research explores the application of Long Short-Term Memory (LSTM) models with attention mechanisms for text summarization tasks. The dataset used for experimentation consists of news articles and their corresponding summaries. The article discusses the preprocessing steps, including text cleaning and tokenization, performed on the data. The study also investigates the impact of different hyperparameters on the model's performance. The results demonstrate the effectiveness of the proposed approach in generating concise summaries from lengthy texts. The findings contribute to the advancement of Natural Language Processing and Machine Learning techniques for text summarization.
Keywords: extractive text summarization, sequence-to-sequence, long short-term memory, encoder-decoder, summarization model, natural language processing, machine learning, deep learning, attention mechanism
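As a self-contained illustration of the attention mechanism used in such LSTM encoder-decoder summarizers, here is a Bahdanau-style additive attention module in PyTorch; the dimensions are illustrative, and this is a sketch of the general technique rather than the paper's exact model.

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Bahdanau-style attention: score each encoder state against the decoder state."""
    def __init__(self, enc_dim, dec_dim, attn_dim):
        super().__init__()
        self.w_enc = nn.Linear(enc_dim, attn_dim)
        self.w_dec = nn.Linear(dec_dim, attn_dim)
        self.v = nn.Linear(attn_dim, 1)

    def forward(self, enc_states, dec_state):
        # enc_states: (batch, src_len, enc_dim); dec_state: (batch, dec_dim)
        scores = self.v(torch.tanh(
            self.w_enc(enc_states) + self.w_dec(dec_state).unsqueeze(1)
        )).squeeze(-1)                           # (batch, src_len)
        weights = torch.softmax(scores, dim=-1)  # attention distribution
        context = torch.bmm(weights.unsqueeze(1), enc_states).squeeze(1)
        return context, weights                  # context vector for the decoder step

attn = AdditiveAttention(enc_dim=256, dec_dim=256, attn_dim=128)
context, w = attn(torch.randn(4, 50, 256), torch.randn(4, 256))
print(context.shape, w.shape)  # torch.Size([4, 256]) torch.Size([4, 50])
```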
Fuel efficiency of dump trucks is affected by real-world variables such as vehicle parameters, road conditions, weather, and driver behavior. Predicting fuel consumption per trip using dynamic road condition data can effectively reduce the cost and time associated with on-road testing. This paper proposes new models for predicting the fuel consumption of dump trucks in surface mining operations. The models combine locally collected data from dump truck sensors and analyze it to enhance their predictive capabilities. The architecture consists of two distinct parts, based on dual Long Short-Term Memory (LSTM) layers and dual dense layers of deep neural networks (DNNs). The new hybrid architecture improves the performance of the proposed model compared to other models, especially in terms of accuracy. The MAE, RMSE, MSE, and R² scores indicate high prediction accuracy.
Keywords: LSTM algorithm, DNN, density, prediction, fuel consumption, quarries
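A minimal Keras sketch of the dual-LSTM plus dual-dense layout described above, assuming windowed sensor sequences as input; the sequence length, feature count, and layer widths are illustrative assumptions.

```python
import tensorflow as tf

SEQ_LEN, N_FEATURES = 60, 8   # assumed: 60 time steps of 8 sensor channels per trip

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(64, return_sequences=True),  # first LSTM block
    tf.keras.layers.LSTM(32),                         # second LSTM block
    tf.keras.layers.Dense(16, activation="relu"),     # first dense block
    tf.keras.layers.Dense(1),                         # fuel consumption estimate
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.MeanAbsoluteError(),
                       tf.keras.metrics.RootMeanSquaredError()])
model.summary()
```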
The article considers the task of planning the transmission of messages of known volumes from source points to destinations with known demands. It is assumed that the costs of transmitting information are, on the one hand, proportional to the transmitted volumes and the cost of transmitting a unit of information over the selected communication channels, and, on the other hand, include a fixed subscription fee for the use of channels that does not depend on the volume of transmitted information. Under this formulation, the quality indicator of a plan is the total cost of sending the entire planned volume of messages. The effectiveness of obtaining optimal plans using a linearized objective function is compared with an exact solution by one of the combinatorial methods.
Keywords: message transmission, transportation problem, criterion of minimum total costs, computational complexity of the algorithm, linearization of the objective function
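The problem described is a fixed-charge transportation problem, which is commonly written as follows; this standard formulation and the linearization below are supplied for context, and the paper's exact notation may differ:

```latex
\min_{x,\,y} \sum_{i=1}^{m}\sum_{j=1}^{n} \bigl( c_{ij}\,x_{ij} + f_{ij}\,y_{ij} \bigr)
\quad \text{s.t.} \quad
\sum_{j=1}^{n} x_{ij} = a_i, \qquad
\sum_{i=1}^{m} x_{ij} = b_j, \qquad
0 \le x_{ij} \le M_{ij}\,y_{ij}, \qquad
y_{ij} \in \{0,1\},
```

where $c_{ij}$ is the unit transmission cost, $f_{ij}$ the subscription fee for channel $(i,j)$, and $M_{ij}$ its capacity. The linearization replaces the pair $(c_{ij}, f_{ij})$ with an effective unit cost $\tilde c_{ij} = c_{ij} + f_{ij}/M_{ij}$, reducing the problem to an ordinary transportation problem at the price of an approximation error, which motivates the comparison with an exact combinatorial solution.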
Outlier detection is an important area of data research in various fields. The aim of the study is to provide a non-exhaustive overview of methods for detecting outliers in data based on various machine learning techniques: supervised, unsupervised, and semi-supervised. The article outlines the features of applying particular methods, along with their advantages and limitations. It is established that there is no universal outlier detection method suitable for all data; therefore, the choice of method should be based on an analysis of the advantages and limitations inherent in each candidate, with mandatory consideration of the available computing power and the characteristics of the available data, including whether the data are labeled as outliers or normal, as well as their volume.
Keywords: outliers, machine learning, outlier detection, data analysis, data mining, big data, principal component analysis, regression, isolation forest, support vector machine
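As one concrete example from the unsupervised family surveyed (the isolation forest from the keywords), the sketch below flags anomalies in synthetic data with scikit-learn; the contamination rate is an assumption that must be tuned per dataset.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(0, 1, size=(500, 2))        # bulk of the data
outliers = rng.uniform(-6, 6, size=(15, 2))     # sparse anomalies
X = np.vstack([normal, outliers])

# contamination is the assumed share of outliers (illustrative value)
labels = IsolationForest(contamination=0.03, random_state=0).fit_predict(X)
print("flagged as outliers:", np.sum(labels == -1))  # -1 marks anomalies
```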
This article explores how to optimize Quantum Espresso for efficient use of Nvidia graphics processing units (GPUs) using CUDA technology. Quantum Espresso is a powerful tool for quantum-mechanical simulation and calculation of material properties. However, the original version of the package was not designed for GPU use, so optimization is required to achieve the best performance.
Keywords: Quantum Espresso, GPU, CUDA, compute acceleration
This article discusses the process of collecting initial data for the cadastral assessment of cultural heritage objects. It is shown that cultural heritage objects have a number of features that distinguish them from other real estate objects, so special attention should be paid to the methodology for assessing them. The purpose of the study is to analyze and identify problematic issues in collecting initial data on cultural heritage objects during the state cadastral assessment. The results of the study revealed a number of problematic issues related to the type of cultural heritage object, its linkage to a cadastral number, and the possibility of interpreting the received information so as to make it suitable for automatic data processing.
Keywords: state cadastral valuation, cultural heritage object, collection of initial data, interdepartmental interaction, status of cultural heritage object
The procedure for filling academic staff positions at universities is regulated by federal laws and local regulations. It entails storing and exchanging a large number of documents between the various participants in competitive selection events. The aim of the work was to automate the competitive selection process using a common data warehouse, with the help of which it is possible to speed up paperwork, save time and consumables, and ensure the safety of storing, transmitting, and processing information. The article reports the results of automating the competitive selection process at the St. Petersburg State University of Architecture and Civil Engineering.
Keywords: higher education institutions, competitive election, teaching staff, automation
The modern cycle of creating simulation models involves analysts, modelers, developers, and specialists from various fields. There are numerous well-known tools available to simplify simulation modeling, and in addition it is proposed to use large language models (LLMs) based on neural networks. The article considers the GPT-4 model as an example. Such models have the potential to reduce the financial and time costs of creating simulation models. Examples of using GPT-4 were presented, leading to the hypothesis that LLMs can replace a large number of specialists or significantly reduce the labor intensity of their work, and even allow the formalization stage to be skipped. Work has been conducted comparing the processes of creating models and conducting experiments using different simulation modeling tools, with the results compiled into a comparative table. The comparison was conducted against the main simulation modeling criteria. Experiments with GPT-4 demonstrated that the creation of simulation models using LLMs is significantly accelerated, and the approach shows great promise in this field.
Keywords: simulation modeling, large language model, neural network, GPT-4, simulation environment, mathematical model
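A minimal sketch of how such an experiment can be driven programmatically, asking a GPT-4-class model to produce a runnable simulation; the prompt and model name are illustrative, and the snippet assumes the openai Python client with an API key in the environment.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write a self-contained Python discrete-event simulation of a single-server "
    "queue (M/M/1) with arrival rate 0.8 and service rate 1.0, and print the "
    "average waiting time over 10000 customers."
)
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # candidate model code to review and run
```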
The article is a review of the methods and technologies used in analyzing vulnerabilities in information systems. It describes the main steps of a vulnerability analysis, such as collecting information about the system, scanning the system for vulnerabilities, and analyzing the scan results. It also discusses ways to protect against vulnerabilities, such as regularly updating software, conducting vulnerability analysis, and developing a data security strategy.
Keywords: vulnerability analysis, data security, information security threats, attack protection, information security, computer security, security risk, network vulnerability, security system, protection
The publication discusses the definition of a common data environment (CDE). The main criteria for choosing a CDE are put forward. A generalized analysis of the weaknesses of existing CDE systems is provided. The article will help the reader better understand CDEs and make the right choice of system.
Keywords: common data environment, design, construction, information, information modeling, CDE, criteria, management, information organization, information transfer
This is a pilot study. Its purpose is to identify the nature of the relationship between Poisson's ratio and cohesion, using a soil mass as an example. The main objective is to identify the dependence between Poisson's ratio and the cohesion coefficient in order to obtain the failure limit of the material (here, a soil mass), that is, the onset of plastic flow in the material. The study is conducted by methods of mathematical modeling. To achieve the objective, it is necessary to justify the feasibility of the experiment by means of a boundary value problem and to rank the number of numerical experiments using the design-of-experiments method in order to obtain the extrema. Next, the numerical experiment itself is performed to reveal the relationship between Poisson's ratio and cohesion. The obtained data will be used to pose the inverse problem when testing a new Russian software product in the field of geotechnical and geomechanical modeling.
Keywords: Poisson's ratio, cohesion, soil mass, numerical experiment, finite element method, mathematical modeling, plastic flow, deformation, stress
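For context, the onset of plastic flow in soils is classically tied to cohesion through the Mohr-Coulomb strength criterion; the abstract does not state which criterion the study uses, so this is background rather than the authors' formulation:

```latex
\tau = c + \sigma \tan\varphi
```

where $\tau$ is the shear strength, $c$ the cohesion, $\sigma$ the normal stress on the failure plane, and $\varphi$ the angle of internal friction.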
This article is devoted to the research and detection of malware. The method implemented in the work dynamically detects Android malware from system call graphs using graph neural networks. The objective of this work is to create a computer model for a method designed to detect and investigate malware. Research on this topic is important for mathematical and software modeling, as well as for the application of system call monitoring algorithms on Android devices. The originality of this direction lies in the constant improvement of approaches to fighting malware, as well as in the limited information available worldwide on the use of computer simulation to study such phenomena.
Keywords: system calls, android, virus, malware, neural networks, artificial intelligence, fuzzy logic
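A minimal sketch of a graph neural network over a system-call graph, using PyTorch Geometric; the node features, two-class output, and toy graph stand in for whatever features and labels the paper uses.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class CallGraphClassifier(torch.nn.Module):
    """Two-layer GCN that classifies a whole system-call graph as benign/malicious."""
    def __init__(self, num_features, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(num_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.out = torch.nn.Linear(hidden, 2)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = global_mean_pool(x, batch)        # one vector per graph
        return self.out(x)

# Toy graph: 4 syscall nodes with 3-dim features, edges = observed call transitions
x = torch.randn(4, 3)
edge_index = torch.tensor([[0, 1, 2, 1], [1, 2, 3, 3]])
batch = torch.zeros(4, dtype=torch.long)      # all nodes belong to graph 0
logits = CallGraphClassifier(num_features=3)(x, edge_index, batch)
print(logits.shape)  # torch.Size([1, 2])
```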
The article addresses topical issues in forecasting the dynamics of stock markets using neural network machine learning models. The prospects of applying the neural network approach to building investment forecasts are highlighted. To address the problem of predicting the dynamics of securities prices, the challenges of training a model on time series data are examined, and an approach to transforming the training data is proposed. The method of recursive feature elimination is described, which is used to identify the most significant parameters affecting price changes in the stock market. An experimental comparison of a number of neural networks was carried out to identify the most effective approach to forecasting market dynamics. As a separate example, a regression based on a radial basis function neural network was implemented and an assessment of the model's quality was presented.
Keywords: stock market, forecast, daily slice, shares, neural network, machine learning, activation function, radial basis function, cross-validation, time series
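A sketch of radial-basis-function regression of the kind mentioned, with Gaussian features centered on k-means centroids, ridge weights, and cross-validated R²; the synthetic data, center count, and gamma value stand in for the paper's daily-slice features and tuning.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def rbf_features(X, centers, gamma):
    """Gaussian radial-basis features: exp(-gamma * ||x - c||^2) per center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

# Synthetic stand-in for windowed price features (the paper uses daily slices)
rng = np.random.default_rng(7)
X = rng.normal(size=(300, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=300)

centers = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X).cluster_centers_
Phi = rbf_features(X, centers, gamma=0.5)
scores = cross_val_score(Ridge(alpha=1.0), Phi, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.round(3))
```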
This article discusses the features of measuring and predicting changes in the electric field strength in the surface layer of the atmosphere. The results of measurements of the atmospheric electric field strength are presented. The possibilities of forecasting changes in the surface electric field strength are considered, including the use of numerical models and the use of measurement results as an indicator of dangerous weather phenomena. The prospects of predicting variations in the surface electric field strength to forecast adverse weather events are discussed, along with the importance of monitoring the atmospheric electric field strength for understanding global climate change and the impact of the electric field on human health and the environment. For the research, a model was created that predicts electric field variations based on meteorological data. The developed neural network has shown good results, demonstrating that neural networks can be an effective approach to predicting the parameters of the electric field in the surface layer of the atmosphere. In further research, it is planned to expand the measurement program by including additional parameters such as temperature, pressure, and humidity in the analysis, as well as by using more complex machine learning models to improve forecast accuracy. Overall, the results show that machine learning models can be effective in predicting variations of the electric field in the surface layer of the atmosphere, with practical applications in fields such as aeronautics, meteorology, and geology. Further research in this area will contribute to the development of new methods and technologies in electric power and communications and to improving our knowledge of how atmospheric electrophysical phenomena affect the environment and human health.
Keywords: electric field, surface layer of the atmosphere, measurements, methods, forecasting, modeling of variations in field strength
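A minimal regression sketch in the spirit of the model described, mapping meteorological predictors to field strength with a small neural network; the synthetic data, network size, and scikit-learn pipeline are illustrative assumptions, not the study's model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in: rows = observations, columns = meteorological predictors
# (e.g. temperature, pressure, humidity); target = field strength in V/m
rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 3))
y = 130 + 20 * X[:, 0] - 10 * X[:, 1] + 5 * rng.normal(size=1000)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16),
                                   max_iter=2000, random_state=0))
model.fit(X[:800], y[:800])
print("test R^2:", round(model.score(X[800:], y[800:]), 3))
```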
One of the tasks of data preprocessing is eliminating gaps in the data, i.e., the imputation task. The paper proposes algorithms for filling gaps in data based on the method of statistical simulation. The proposed gap-filling algorithms include the stages of clustering the data by a set of features, classifying an object with a gap, constructing a distribution function for the gappy feature within each cluster, and recovering missing values using the inverse function method. Computational experiments were carried out on statistical data on socio-economic indicators for the constituent entities of the Russian Federation for 2022. The properties of the proposed imputation algorithms are analyzed in comparison with known methods, and the efficiency of the proposed algorithms is shown.
Keywords: imputation algorithm, data gaps, statistical modeling, inverse function method, data simulation
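A compact sketch of the described pipeline: cluster on the complete features, then, within each cluster, recover missing values by pushing uniform random numbers through the empirical quantile function (the inverse function method). The clustering algorithm, cluster count, and synthetic data are illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def impute_by_simulation(X, gap_col, n_clusters=4, seed=0):
    """Fill NaNs in gap_col by inverse-transform sampling within clusters."""
    rng = np.random.default_rng(seed)
    other = np.delete(X, gap_col, axis=1)          # features without gaps
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(other)
    X = X.copy()
    for c in range(n_clusters):
        mask = labels == c
        observed = X[mask, gap_col]
        observed = observed[~np.isnan(observed)]   # cluster's empirical sample
        missing = mask & np.isnan(X[:, gap_col])
        # inverse function method: uniforms through the empirical quantile function
        X[missing, gap_col] = np.quantile(observed, rng.uniform(size=missing.sum()))
    return X

X = np.random.default_rng(1).normal(size=(200, 4))
X[::17, 2] = np.nan                                # plant some gaps in column 2
print(np.isnan(impute_by_simulation(X, gap_col=2)).sum())  # 0
```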
The article discusses the use of graph theory to determine the placement of elements and the routing of information cables in a distributed control system. It describes how graph theory can help improve system performance, reduce maintenance costs, and increase reliability and security, and presents the general principles of applying it to element placement and cable routing problems in distributed control systems. The authors conclude that graph theory is a powerful tool for problems associated with distributed control systems and can be effectively applied to improve system efficiency, reduce costs, and increase reliability and security.
Keywords: graph theory, distributed control system, Python, Matplotlib, production process optimization, automatic analysis, control system, data cable, automation
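A small sketch in the spirit of the keywords (Python, Matplotlib): elements and candidate cable runs as a weighted graph, with a minimum spanning tree as the cheapest connecting layout and a weighted shortest path between two elements; the topology and costs are illustrative.

```python
import networkx as nx
import matplotlib.pyplot as plt

# Nodes are control-system elements; edge weights are cable-run costs (illustrative)
G = nx.Graph()
G.add_weighted_edges_from([
    ("controller", "sensor_A", 4), ("controller", "sensor_B", 7),
    ("sensor_A", "sensor_B", 2), ("sensor_B", "actuator", 3),
    ("controller", "actuator", 9),
])

# Minimum spanning tree = cheapest cable layout connecting every element
mst = nx.minimum_spanning_tree(G)
print("total cable cost:", mst.size(weight="weight"))
print("shortest route controller -> actuator:",
      nx.shortest_path(G, "controller", "actuator", weight="weight"))

nx.draw(mst, with_labels=True, node_color="lightgray")
plt.show()
```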