The article considers a methodology for forming machine learning algorithms, and determining their parameters, for classifying electronic documents by the importance of their information to officials of an organization. It differs from known approaches in that the structure and number of machine learning algorithms are formed dynamically: the sets of structural divisions of the organization, and the sets of keywords reflecting the tasks and functions of those divisions, are determined automatically by analyzing the Regulations of the organization and the regulations on its structural divisions, on the basis of pattern recognition theory.
Keywords: lemmatization, pattern recognition, machine learning algorithm, electronic document, vectorization, formalized documents
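The abstract names lemmatization and vectorization among its keywords; a minimal sketch of such a keyword-driven classification pipeline, assuming scikit-learn and invented division names and training phrases (none taken from the article), might look like this:

```python
# Minimal sketch: classify documents by organizational division using
# keyword-based TF-IDF features. Division names and keyword phrases are
# illustrative placeholders, not taken from the article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Keyword phrases per division, as would be extracted from the
# organization's regulations (here: hand-written examples).
train_docs = [
    "quarterly budget report financial audit",
    "server maintenance network outage ticket",
]
train_labels = ["finance", "it"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(train_docs, train_labels)
print(model.predict(["annual audit of budget execution"]))  # -> ['finance']
```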
In modern society, problems related to the ethics of artificial intelligence (AI) are increasingly emerging. AI is used everywhere, and the absence of ethical standards and a code of ethics makes their creation necessary to ensure the safety and comfort of users. The purpose of this work is to analyze approaches to AI ethics and to identify parameters for evaluating those approaches, in order to create systems that meet ethical standards and user needs. Approaches to AI ethics are reviewed, evaluation parameters are identified, and the main characteristics of each parameter are described. The parameters described in this paper will help achieve better results when creating standards for the development of safer and more user-friendly systems.
Keywords: code, parameters, indicators, characteristics, ethics, artificial intelligence
Road surface quality assessment is a widespread task worldwide. Many systems exist to solve it, most of them working with images of the roadway; they are based both on traditional methods (without machine learning) and on machine learning algorithms. There are a number of ways to increase the effectiveness of such systems, including improving image quality, but each approach has its own characteristics; for example, some produce an improved version of the original photo faster than others. The image quality improvement methods analyzed here are noise reduction, histogram equalization, sharpening and smoothing. The main performance indicator in this study is the average time needed to obtain an improved image. The source material is 10 different photos of the road surface in 5 sizes (447x447, 632x632, 775x775, 894x894, 1000x1000) and in PNG, JPG and BMP formats. According to the methodology proposed in the study, the best performance was demonstrated by the histogram equalization approach, with the sharpening method showing a comparable result.
Keywords: comparison, analysis, dependence, effectiveness, approach, quality improvement, image, photo, format, size, road surface
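As a rough illustration of the timing methodology described above, the average processing time of two of the analyzed methods can be measured with OpenCV; the file name, kernel and run count below are illustrative assumptions, not the study's exact setup:

```python
# Minimal sketch: average time to apply histogram equalization vs.
# sharpening with OpenCV. "road.png" is a placeholder input file.
import time
import cv2
import numpy as np

img = cv2.imread("road.png", cv2.IMREAD_GRAYSCALE)

sharpen_kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])

def avg_time(fn, runs=50):
    t0 = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - t0) / runs

print("equalize:", avg_time(lambda: cv2.equalizeHist(img)))
print("sharpen: ", avg_time(lambda: cv2.filter2D(img, -1, sharpen_kernel)))
```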
This paper considers the conditions and factors affecting the security of information systems functioning under network reconnaissance. The developed model is based on techniques that dynamically change the domain names, network addresses and ports of the information system's network devices and of the false network information objects operating as part of it. The research problem is formalized. The theoretical basis of the model is the theory of probability and random processes: the modeled target system is represented as a semi-Markov process defined by a directed graph. Calculated probabilistic-temporal characteristics of the target system as a function of network reconnaissance actions are presented; they make it possible to determine the adjustment mode of the developed protection measures and to evaluate the security of the target system under different operating conditions.
Keywords: departmental information system, network intelligence, structural and functional characterization, false network information object
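The abstract does not reproduce its formulas; as a hedged illustration of one probabilistic-temporal characteristic of such a model, the mean time to absorption of a semi-Markov process can be computed as t = (I - Q)^{-1} tau from the transition matrix Q over transient states and the mean sojourn times tau. All values below are invented:

```python
# Illustrative semi-Markov calculation (all values hypothetical): mean
# time to absorption t = (I - Q)^{-1} tau, where Q holds transition
# probabilities between transient states and tau their mean sojourn times.
import numpy as np

# Transient states: 0 = normal operation, 1 = under reconnaissance.
Q = np.array([[0.0, 0.6],    # normal -> under reconnaissance w.p. 0.6
              [0.3, 0.0]])   # reconnaissance -> back to normal w.p. 0.3
tau = np.array([5.0, 2.0])   # mean sojourn time in each transient state

t = np.linalg.solve(np.eye(2) - Q, tau)
print("mean time to compromise from each state:", t)
```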
Today, a huge amount of heterogeneous information passes through electronic computing systems. There is a critical need to analyze an endless stream of data with limited means, which in turn requires structuring the information. One of the steps in solving the data-ordering problem is deduplication. This article discusses a method of removing duplicates using databases and analyzes the results of testing it with various database management systems and different parameter sets.
Keywords: deduplication, database, field, row, text data, artificial neural network, sets, query, software, unstructured data
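A minimal sketch of database-driven duplicate removal follows, using SQLite purely for illustration (the article benchmarks several DBMSs and parameter sets):

```python
# Minimal database-backed deduplication sketch: keep the first row of
# each duplicate group and delete the rest.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
con.executemany("INSERT INTO docs (body) VALUES (?)",
                [("alpha",), ("beta",), ("alpha",), ("alpha",)])

# Delete every row whose body already appears with a smaller id.
con.execute("""
    DELETE FROM docs
    WHERE id NOT IN (SELECT MIN(id) FROM docs GROUP BY body)
""")
print(con.execute("SELECT id, body FROM docs").fetchall())
# -> [(1, 'alpha'), (2, 'beta')]
```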
Social and pension provision are key processes in the activities of any state, and forecasting their costs is among the most important problems in economics. The task of evaluating the effectiveness of a pension fund has been addressed by various methods, including regression analysis. The task is particularly difficult because of the large number of factors determining the fund's activity, such as the number of recipients of old-age pensions, the number of policyholders, self-employed policyholders, recipients of benefits, insured persons, and working pensioners. The main research approach was a model competition: variants that violated the substantive meaning of the variables or did not fully reflect the behavior of the modeled process were excluded from the resulting set of alternative models, and the final variant was selected by a multi-criteria selection method. It was found that the use of relative variables is important for qualitative modeling of the studied processes. The resulting model shows that an increase in the ratio of the number of employers and self-employed to the number of insured persons leads to a decrease in the cost of financing social and pension provision. The model can be effectively used for short-term forecasting of the total annual financing of a pension fund department under changing social and macroeconomic factors.
Keywords: pension fund, regression model, model competition, adequacy criteria, forecasting
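As a hedged illustration of fitting a model on relative variables, the sketch below regresses a synthetic financing volume on the ratio of employers and self-employed to insured persons; the data and coefficients are invented, not the article's:

```python
# Illustrative only: regression on a relative variable, in the spirit of
# the model competition described above. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 40
# Relative regressor: (employers + self-employed) / insured persons.
ratio = rng.uniform(0.2, 0.8, n)
# Synthetic financing volume decreasing in the ratio, plus noise.
financing = 100 - 35 * ratio + rng.normal(0, 2, n)

X = np.column_stack([np.ones(n), ratio])
beta, *_ = np.linalg.lstsq(X, financing, rcond=None)
print(f"intercept={beta[0]:.2f}, slope={beta[1]:.2f}")  # slope < 0
```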
Currently, digitalization as a technological tool is penetrating the humanitarian sphere of knowledge, linking the technocratic and humanitarian domains. An example is legal informatics, in which the conceptual apparatuses of quite different (at first glance) areas of human knowledge are brought together. However, the desire to abstract (formalize) any knowledge is the most important task in the "convergence" of computer technologies and mathematical methods into a humanitarian sphere that is non-traditional for them. The paper discusses problems generated by a superficial idea of artificial intelligence. A typical example is the attempt of some authors in jurisprudence to give computer technologies, often referred to by humanities scholars as artificial intelligence, an almost sacred meaning and to endow them with legal personality.
Keywords: artificial intelligence, deep learning, machine learning, hybrid intelligence, adaptive behavior, digital economy, digital law, legal personality of artificial intelligence
The article considers an approach to optimizing the speed of aggregating queries over a continuous range of rows of a PostgreSQL table. A program module based on PostgreSQL Extensions is created which builds a segment tree over a table and serves queries against it. Query speed is increased by more than 80 times for a table of 100 million records compared to existing solutions.
Keywords: PostgreSQL, segment tree, query, aggregation, optimization, PostgreSQL Extensions, asymptotics, index, build, get, insert
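For readers unfamiliar with the data structure, a minimal standalone segment tree for range-sum queries is sketched below; it illustrates the O(log n) aggregation the module provides, but it is not the article's extension code:

```python
# Minimal iterative segment tree for range-sum queries over an array.
class SegmentTree:
    def __init__(self, data):
        self.n = len(data)
        self.tree = [0] * (2 * self.n)
        self.tree[self.n:] = data                  # leaves
        for i in range(self.n - 1, 0, -1):         # internal nodes
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def query(self, lo, hi):                       # sum over [lo, hi)
        s, lo, hi = 0, lo + self.n, hi + self.n
        while lo < hi:
            if lo & 1: s += self.tree[lo]; lo += 1
            if hi & 1: hi -= 1; s += self.tree[hi]
            lo //= 2; hi //= 2
        return s

t = SegmentTree([3, 1, 4, 1, 5, 9, 2, 6])
print(t.query(2, 6))  # 4 + 1 + 5 + 9 = 19
```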
Orthogonal frequency-division multiplexing (OFDM) methods have become the basis of most broadband systems. These methods have also found application in modern low-orbit satellite Internet systems (LOSIS). For example, the StarLink system uses OFDM transmission with a signal frame consisting of 52 channels. One way to increase the data rate in OFDM is to replace the Fourier transform (FT) with a faster orthogonal transform; the modified Haar wavelet transform (MWT) was chosen here. The Haar MWT reduces the number of arithmetic operations in the orthogonal signal transformation compared with the FT. The use of integer algebraic systems, such as Galois fields and modular residue class codes (MRCC), makes it possible to increase the speed of a computing device that performs orthogonal signal transformations. Obviously, the transition to new algebraic systems entails changes in the structure of OFDM systems, so developing structural models of an OFDM transmission system using the Haar MWT in a Galois field and in the MRCC is an urgent task. The aim of the work is therefore to develop structural models of wireless OFDM systems using a modified integer discrete Haar transform, which can reduce the execution time of the orthogonal signal transformation and thereby increase the data transfer rate in LOSIS.
Keywords: orthogonal frequency multiplexing, modification of the Haar wavelet transform, structural models of execution of the Haar MWT, Galois field, modular residue class codes
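A minimal sketch of a single-level Haar transform step and its inverse is given below; it shows why the transform is cheap (only additions, subtractions and a scale factor), though it is not the article's modified integer variant:

```python
# Single-level Haar wavelet transform and its inverse; pairwise averages
# and differences replace the multiply-heavy Fourier butterfly.
import numpy as np

def haar_step(x):
    x = np.asarray(x, dtype=float)
    avg = (x[0::2] + x[1::2]) / 2.0    # approximation coefficients
    dif = (x[0::2] - x[1::2]) / 2.0    # detail coefficients
    return avg, dif

def haar_inverse(avg, dif):
    out = np.empty(2 * len(avg))
    out[0::2] = avg + dif
    out[1::2] = avg - dif
    return out

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_step(x)
print(np.allclose(haar_inverse(a, d), x))  # True: perfect reconstruction
```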
The article proposes an algorithm for ensuring the minimum power consumption of end nodes in a wireless sensor network. A simulation model of the information exchange process in a wireless sensor network, developed in the MATLAB/Simulink environment, is presented; it allows estimating the total power consumption of all end nodes of the network when transmitting messages over a given time interval.
Keywords: wireless sensor network, LoRaWAN, Internet of Things, IoT, power consumption, simulation model, Simulink, signal attenuation, frame transmission
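As a hedged illustration of estimating an end node's per-frame energy budget, the sketch below uses the standard SX127x time-on-air formula for LoRa; the current and voltage figures are assumptions, and this is not the article's Simulink model:

```python
# Rough per-frame energy estimate for a LoRa end node, using the
# standard SX127x time-on-air formula; TX current and supply voltage
# below are hypothetical figures.
from math import ceil

def lora_airtime(payload_bytes, sf=7, bw=125e3, cr=1, preamble=8,
                 explicit_header=True, crc=True, ldro=False):
    t_sym = (2 ** sf) / bw
    de = 1 if ldro else 0
    ih = 0 if explicit_header else 1
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih
    n_payload = 8 + max(ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25) * t_sym + n_payload * t_sym

t = lora_airtime(20, sf=9)
energy = t * 0.120 * 3.3   # assume 120 mA TX current at 3.3 V
print(f"airtime {t*1000:.1f} ms, energy {energy*1000:.2f} mJ")
```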
The article discusses methods for optimizing floating-point calculations on microcontroller devices. Hardware and software methods for accelerating calculations are considered, and the Karatsuba and Schönhage-Strassen multiplication algorithms are given. A method for replacing floating-point calculations with integer calculations is proposed, and the use of fixed-point instead of floating-point arithmetic is described. The option of using hash memory and code optimization is also considered. The results of measured calculations on an AVR microcontroller are presented.
Keywords: floating point calculations, fixed point calculations, microcontroller, AVR, ARM
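A minimal sketch of the fixed-point replacement idea follows: reals are stored as integers scaled by 2^FRAC, so addition stays native and multiplication becomes an integer multiply plus a shift (the Q8 format here is an illustrative choice, not the article's):

```python
# Fixed-point arithmetic sketch: represent reals as integers scaled by
# 2**FRAC, so multiplication becomes integer multiply + shift.
FRAC = 8                        # Q8 format: 8 fractional bits

def to_fix(x):    return int(round(x * (1 << FRAC)))
def to_float(q):  return q / (1 << FRAC)

def fix_mul(a, b):              # (a*b) >> FRAC keeps the scale constant
    return (a * b) >> FRAC

a, b = to_fix(3.25), to_fix(1.5)
print(to_float(a + b))          # 4.75  (addition needs no rescaling)
print(to_float(fix_mul(a, b)))  # 4.875
```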
This article analyzes data processing problems in training a neural network. The first stage of model training, feature extraction, is discussed in detail, using the method of mel-frequency cepstral coefficients (MFCC). The spectrum of the voice signal was plotted; by multiplying the signal spectrum by the window function, the signal energy falling into each analysis window was found, after which the mel-frequency cepstral coefficients were calculated. The mel scale helps in audio analysis tasks and is used in training neural networks that work with speech. The use of mel-cepstral coefficients significantly improved recognition quality because it exposed the most informative coefficients, which were then used as input to the neural network. The MFCC method made it possible to reduce the amount of input data for training, increase performance, and improve recognition clarity.
Keywords: machine learning, data preprocessing, audio analysis, mel-cepstral coefficients, feature extraction, voice signal spectrum, Fourier transform, Hann window, discrete cosine transform, short Fourier transform
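A minimal MFCC extraction sketch is shown below, using librosa for brevity where the article computes the steps (windowing, spectrum, mel filterbank, DCT) explicitly; the input signal is a synthetic placeholder rather than a real voice recording:

```python
# Minimal MFCC extraction sketch; a synthetic tone stands in for the
# voice signal analyzed in the article.
import numpy as np
import librosa

sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 220 * t)      # placeholder "voice" signal

# 13 mel-frequency cepstral coefficients per analysis frame.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
print(mfcc.shape)   # (13, number_of_frames) -> input features for the net
```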
This article describes the first stage of research work on the development of an FPGA-based camera for vehicle identification tasks, which are widely used at automated weight and size control points. Since an FPGA is an alternative to conventional processors that can perform multiple tasks in parallel, an FPGA-equipped camera will be able to detect and identify vehicles at the same time. Thus the camera will transmit not only the image but also the result of its processing to problem-oriented systems for control, decision-making and optimization of data flow processing, after which the server only needs to confirm or reject the camera's results, significantly reducing the image processing time across all automated weight and size control points. In the course of development, a simple VGA port board, a program displaying a static image on a monitor at 640x480 resolution, and a pixel counter program were implemented. The EP4CE6E22C8 is used as the FPGA; its capacity is more than sufficient to achieve the result.
Keywords: system analysis methods, optimization, FPGA, VGA adapter, Verilog, recognition camera, board design, information processing, statistics
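For context, the standard 640x480@60 VGA timing that such a static-image program must generate can be sanity-checked numerically; the figures below are the commonly used parameters, not necessarily the exact values of the developed board:

```python
# Sanity check of standard VGA 640x480@60 timing from the usual
# parameters found in most FPGA tutorials.
PIXEL_CLOCK_HZ = 25_175_000          # 25.175 MHz pixel clock

H_VISIBLE, H_FRONT, H_SYNC, H_BACK = 640, 16, 96, 48
V_VISIBLE, V_FRONT, V_SYNC, V_BACK = 480, 10, 2, 33

h_total = H_VISIBLE + H_FRONT + H_SYNC + H_BACK   # 800 pixel clocks/line
v_total = V_VISIBLE + V_FRONT + V_SYNC + V_BACK   # 525 lines/frame

frame_rate = PIXEL_CLOCK_HZ / (h_total * v_total)
print(f"{h_total} x {v_total}, {frame_rate:.2f} frames/s")  # ~59.94 Hz
```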
This paper reviews existing classical and neural network methods for combating noise in computer vision systems. Although neural network classifiers demonstrate high accuracy, they fail to remain stable on noisy data. The paper considers image enhancement methods based on the bilateral filter, the histogram of oriented gradients, the integration of filters with Retinex, a gamma-normal model, and combinations of the dark channel with various tools, as well as changes to convolutional neural network architectures achieved by modifying or replacing their components, and the applicability of neural network ensembles.
Keywords: image processing, image filtering, machine vision, pattern recognition
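A minimal example of one of the reviewed classical methods, edge-preserving bilateral filtering, is sketched below with OpenCV; the file name and parameter values are typical illustrative choices rather than the paper's settings:

```python
# Edge-preserving denoising with a bilateral filter (OpenCV).
import cv2

img = cv2.imread("noisy.png")            # placeholder input image
# d=9 neighborhood; sigmaColor/sigmaSpace control range/spatial smoothing.
denoised = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
cv2.imwrite("denoised.png", denoised)
```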
This article explores the LTE-R family of standards, in which the OFDM system occupies a special place, and considers the possibility of developing new error-correcting coding and detection methods for testing data transmission under high-rate conditions. The problems associated with transmitting large amounts of information at high train speeds and in a rapidly changing environment are considered, as well as the features of interference associated with railway infrastructure. Modular codes are used as noise-immune codes; unlike classical BCH codes, they are arithmetic codes.
Keywords: LTE-R standard, OFDM system, modular codes, noise immunity, error hit interval, BCH codes, error packet, error rate
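As a hedged illustration of why modular codes are arithmetic, the sketch below represents numbers by residues modulo coprime bases and adds them channel-wise without carries, recovering the result by the Chinese remainder theorem; the bases are an arbitrary example, not the article's:

```python
# Minimal residue number system (RNS) sketch: a number is held as its
# residues modulo coprime bases; per-channel arithmetic is carry-free,
# and the value is recovered via the Chinese remainder theorem.
from math import prod

MODULI = (7, 11, 13)                 # pairwise coprime bases, M = 1001

def to_rns(x):
    return tuple(x % m for m in MODULI)

def from_rns(residues):
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m): modular inverse
    return x % M

a, b = 123, 456
s = tuple((x + y) % m for x, y, m in zip(to_rns(a), to_rns(b), MODULI))
print(from_rns(s))   # 579 == 123 + 456
```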
The hierarchy analysis method has long been described, studied and applied in practice. To reduce the subjectivity inherent in many decisions, we consider a variant of the hierarchy analysis method in which the assessment is carried out not by the decision maker but by a group of independent experts. Thus, we propose a method for solving multicriteria optimization problems based on combining two methods: the hierarchy analysis method and the method of expert assessment.
Keywords: Optimality criteria, alternative, decision maker, optimization, method of expert assessments, method of hierarchy analysis, competence of experts, consistency of expert opinions
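A minimal sketch of the hierarchy-analysis step is given below: priority weights are derived from a pairwise comparison matrix by the geometric-mean method. The matrix entries are invented, and the proposed aggregation of expert judgments is not reproduced:

```python
# AHP sketch: derive priority weights from a pairwise comparison matrix
# via the geometric-mean method (matrix values are made up).
import numpy as np

# A[i][j] = how much more important criterion i is than criterion j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

g = A.prod(axis=1) ** (1.0 / A.shape[0])   # row geometric means
weights = g / g.sum()
print(weights.round(3))    # e.g. [0.648 0.23  0.122]
```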
In practice, construction times are often estimated by deterministic methods, for example, from a network schedule of the construction plan with deterministic durations of specific works. This approach does not reflect the probabilistic nature of risks and leads to a systematic underestimation of the time and, as a consequence, the cost of construction. The research proposes using a discrete inhomogeneous Markov chain to assess the risks of not completing construction on time. The states of the Markov process correspond to the stages of construction of the object, and the transition probabilities are estimated from empirical data on previously implemented projects and/or expertly, taking into account the risks characterizing construction conditions over time. The dynamic model of construction plan development makes it possible to determine such characteristics as the probability of realizing the construction plan within the established terms, the probability that the object will ever be completed, the time to reach the completion stage with a given reliability, and the unconditional probabilities of the system states (construction stages) at a given time relative to the start of construction. The model has been tested. It allows estimating the completion time and assessing the risks of missing the established deadlines under the planned construction conditions, taking the dynamics of risks into account.
Keywords: construction time, risk assessment, markov model, discrete Markov chain, inhomogeneous random process
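As a hedged illustration of the inhomogeneous chain, the sketch below propagates state probabilities through period-specific transition matrices, p_k = p_0 P_1 ... P_k, and reads off the probability of reaching the absorbing "completed" state by a deadline; all probabilities are invented:

```python
# Illustrative inhomogeneous Markov chain (all probabilities
# hypothetical). States: 0 = stage A, 1 = stage B, 2 = completed
# (absorbing); a different transition matrix applies in each period.
import numpy as np

P_by_period = [
    np.array([[0.6, 0.4, 0.0],     # period 1
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0]]),
    np.array([[0.4, 0.6, 0.0],     # period 2: risks change over time
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]]),
]

p = np.array([1.0, 0.0, 0.0])      # construction starts at stage A
for P in P_by_period:
    p = p @ P
print(f"P(completed within 2 periods) = {p[2]:.3f}")   # 0.200
```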
The purpose of the study is to improve the efficiency of Dijkstra's algorithm by using the shared memory model with the OpenMP library and parallel execution in the implementation of the algorithm. Dijkstra's algorithm is commonly used to find the shortest path between two nodes in a graph. However, its running time grows with the size of the graph, so parallel execution is a good way to address the time complexity problem. In this work we propose a parallel computing method to improve the efficiency of Dijkstra's algorithm for large graphs. The method divides the array of paths in Dijkstra's algorithm among a specified number of processors for parallel execution. We provide an implementation of the parallelized Dijkstra algorithm and assess its performance on real datasets with different numbers of nodes. Our results show that the parallelized algorithm can significantly speed up the process compared to the sequential version, reducing execution time and improving CPU efficiency, making it a useful choice for finding shortest paths in large graphs.
Keywords: Dijkstra algorithm, graph, shortest paths, parallel computing, shared memory model, OpenMP library
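The paper's OpenMP/C implementation is not reproduced here; a compact sequential Dijkstra sketch in Python follows, with a comment marking the selection step whose workload the paper distributes across threads:

```python
# Compact Dijkstra sketch (sequential; the paper parallelizes the scan
# over the distance array at the marked selection step).
import heapq

def dijkstra(adj, src):
    dist = {v: float("inf") for v in adj}
    dist[src] = 0
    pq = [(0, src)]                  # min-heap replaces the linear scan
    while pq:
        d, u = heapq.heappop(pq)     # <-- parallelizable min-selection
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

adj = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(dijkstra(adj, "a"))   # {'a': 0, 'b': 2, 'c': 3}
```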
In the modern economy, where optimal personnel decisions are very important for any organization, especially in the dynamically developing electric power industry, developing an intelligent system for making personnel decisions in this industry becomes relevant. This paper analyzes existing tools for selecting candidates for vacant positions, including managerial positions and vacancies in the electric power industry. Based on this analysis and earlier research, a competency profile of electric power industry managers is formed. The software product was developed using several programming languages in the Visual Studio environment. The program implements a dynamic and interactive managerial decision-making process in which users face different scenarios that assess the targeted competencies, producing a detailed report on their skills and giving employers an objective assessment of a candidate's potential for a vacant managerial position.
Keywords: electric power industry, competences, personnel, optimal personnel management decisions, intellectual system, personnel management, competence assessment, software product
It is estimated that more than 9% of the Russian population is hearing impaired, so the development of fingerspelling (dactyl) recognition systems is becoming critical to facilitating their social communication. The introduction of such systems will improve communication for these people, providing them with equal opportunities and a better quality of life. The research focused on studying the characters of the dactyl alphabet, developing a labeled dataset, and training a neural network for gesture recognition. The aim of the work is to create tools capable of recognizing the signs of the Russian dactyl alphabet. The research applied computer vision methods. Gesture recognition consists of the following steps: first the camera captures the video stream, after which the images of hands are preprocessed; a pre-trained neural network then analyzes these images and extracts important features; gesture classification follows, where the model determines which letter of the alphabet a sign belongs to; finally, the recognition result is interpreted as the symbol associated with the gesture. In the course of the research, the signs of the dactyl alphabet and the communication features of people with hearing impairment were studied, and a dataset of more than 25,000 labeled samples was created. A model was developed and trained on the architecture most appropriate to the task, then tested and optimized to improve its accuracy. The results of this work can be used to create devices that compensate for poor hearing, giving people with hearing impairment comfort in society.
Keywords: computer vision, sign recognition, dactyl classification, transfer learning, Russian dactyl alphabet, deep learning, computerization, software, assistive technology, convolutional neural networks
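A hedged sketch of the transfer-learning setup named in the keywords is shown below: a frozen pre-trained backbone with a new classification head. MobileNetV2, the input size, and the class count are assumptions for illustration, not the article's reported architecture:

```python
# Transfer-learning sketch: frozen pre-trained backbone plus a new head
# for dactyl letter classification. Backbone choice and class count are
# assumptions, not taken from the article.
import tensorflow as tf

NUM_LETTERS = 33   # assumption: one class per Russian alphabet letter

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                      # freeze pre-trained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_LETTERS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```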
In this paper, we consider a technique for the automatic analysis of video files to detect the presence of people and landmarks, using recognition on key, non-repeating frames selected by dedicated extraction algorithms. Recognizing landmarks and faces only on keyframes significantly reduces computational cost and avoids flooding the pipeline with repetitive information. The effectiveness of the proposed technique is evaluated in terms of accuracy and speed on a set of test videos.
Keywords: keyframe, recognition, computer vision, algorithm, video
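A minimal keyframe selection sketch follows: a frame is kept when its histogram differs sufficiently from the last kept frame. The distance measure, threshold and video path are illustrative choices, not the paper's extraction algorithm:

```python
# Keyframe extraction sketch: keep a frame when its gray-level histogram
# differs enough from the last kept frame.
import cv2

def keyframes(path, thresh=0.4):
    cap, kept, last = cv2.VideoCapture(path), [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        h = cv2.calcHist([gray], [0], None, [64], [0, 256])
        h = cv2.normalize(h, None).flatten()
        if last is None or cv2.compareHist(
                last, h, cv2.HISTCMP_BHATTACHARYYA) > thresh:
            kept.append(frame)
            last = h
    cap.release()
    return kept

print(len(keyframes("test.mp4")))   # placeholder video path
```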
A new mathematical apparatus is proposed for monitoring the adequacy of the chosen signal sampling interval from the point of view of capturing the main high-frequency components, and for identifying opportunities to increase it. It is based on the construction of special aliasing-grams from measured signal samples. Aliasing-grams are graphs of the standard deviations between the amplitude spectra of a conditionally reference discrete signal, specified with the highest sampling frequency, and auxiliary discrete signals obtained over the same observation interval but with lower sampling frequencies. By analyzing such graphs it is easy to identify sampling frequencies that lead to aliasing and, consequently, to distortion of the signal spectrum. To speed up and simplify the construction of aliasing-grams, it is proposed to obtain the auxiliary signals from the reference one by decimation. It is shown that the apparatus is also effective in the presence of the spectrum spreading effect. It can be used in self-learning measuring systems.
Keywords: sampling interval, aliasing, amplitude spectrum, aliasing-gram, sample decimation, spectrum spreading
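As a hedged numerical illustration of an aliasing-gram, the sketch below compares the amplitude spectrum of a synthetic reference signal with those of its decimated versions; the deviation rises once the reduced rate violates the Nyquist condition for the 180 Hz component (the signal and factors are invented, not the article's):

```python
# Illustrative aliasing-gram: standard deviation between the amplitude
# spectrum of a reference signal and spectra of its decimated versions.
import numpy as np

fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 180 * t)

def amp_spectrum(sig, n_bins):
    # Amplitude spectrum on a common 1 Hz grid (all signals span 1 s).
    return np.abs(np.fft.rfft(sig))[:n_bins] / len(sig)

for k in (2, 4, 8):                       # decimation factors
    dec = x[::k]                          # new rate fs/k
    n = len(np.fft.rfft(dec))
    sd = np.std(amp_spectrum(x, n) - amp_spectrum(dec, n))
    print(f"fs/{k} = {fs//k:4d} Hz: deviation {sd:.4f}")
# Deviation jumps once fs/k < 2*180 Hz, flagging aliasing of the 180 Hz tone.
```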
This paper addresses many topical problems related to the modernization of inclusive education in Russia, with an emphasis on the practice of teaching foreign languages in higher education institutions. It also presents models of the inclusion of persons with disabilities relevant to the modern educational environment, and gives a brief description of the historical and legal basis of inclusion in Russia. The authors note that inclusive education in higher education institutions is still at the formation stage and that its successful implementation requires comprehending the problem and creating a methodological basis.
Keywords: inclusive education, persons with disabilities, equal access, quality education, integration, synergy, legal framework, adaptation, transformation
The article discusses a graphical notation using three-dimensional visualization for representing models of automated systems according to the Methodology of Automation of Intellectual Labor (MAIL). The research aims to enhance the efficiency of modeling automated systems by providing a more comprehensive representation of the models. Research methods employed include a systems approach. The study results in the formulation of descriptions and rules for creating the corresponding graphical notation for the initial and conceptual modeling stages of subject tasks in MAIL, as well as rules for forming representations for static and dynamic model structures and representing their interrelations. Additionally, rules for visually highlighting and concealing elements within the diagrams of the graphical notation are examined, rendering it suitable for implementation as a software module with a graphical interface for CASE tools, facilitating modeling according to MAIL. Such an approach enables the visualization of the model as a whole and enhances the efficiency of analysts conducting modeling following the methodology.
Keywords: methodology of Automation of Intellectual Labor, modeling of automated systems, conceptual modeling, graphical notation, three-dimensional visualization
The work is aimed at modeling the control system of the slitting machine of a paper machine in order to improve product quality and eliminate winding density defects. The developed automated system implements control of the machine's operating modes, distribution of the loads on the bearing shafts, roll braking, and paper web tensioning.
Keywords: slitting machine, paper machine, automated control system, rewinder, pressure roller, decoiler, reeler, accelerating shaft, deflecting shaft, cutting section