Our lives are permeated by data, with endless streams of information passing through computer systems. Modern software is impossible to imagine without database interaction, and many different DBMSs exist depending on how the information is used. The article discusses the locality-sensitive hashing (LSH) algorithm implemented in the PL/pgSQL language, which makes it possible to search for similar documents in a database.
Keywords: LSH, hashing, field, string, text data, query, software, SQL
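The idea behind the abstract above can be illustrated outside the database as well. Below is a minimal Python sketch of the MinHash variant of LSH (the paper itself works in PL/pgSQL; the shingle size, number of hash functions and band count here are illustrative assumptions, not the paper's parameters):

```python
import hashlib

def shingles(text, k=3):
    # character k-grams of the document, as a set
    text = text.lower()
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def minhash_signature(sh, num_hashes=64):
    # one minimum per salted hash function; equal positions estimate Jaccard similarity
    return [min(int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16) for s in sh)
            for seed in range(num_hashes)]

def estimated_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

def lsh_buckets(sig, bands=16):
    # split the signature into bands; documents sharing any band key are candidate pairs
    rows = len(sig) // bands
    return {(b, tuple(sig[b * rows:(b + 1) * rows])) for b in range(bands)}
```

With 64 hash values split into 16 bands of 4 rows, two documents become a candidate pair whenever any band matches exactly, so near-duplicates are found without comparing every pair of documents.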
The paper proposes a method for identifying patterns of the relative positions of buildings, which can be used to analyze the dispersion of air pollutants in urban areas. The impact of building configuration on pollutant dispersion in the urban environment is investigated. Patterns of building arrangements are identified. The methods and techniques for recognizing buildings are examined. The outcomes of applying the proposed method to identify building alignments are discussed.
Keywords: patterns of building location, geoinformation technologies, GIS, geoinformation systems, atmospheric air
This article is dedicated to developing a method for diagnosing depression using the analysis of user behavior in a video game on the Unity platform. The method involves employing machine learning to train classification models based on data from gaming sessions of users with confirmed diagnoses of depression. As part of the research, users are engaged in playing a video game, during which their in-game behavior is analyzed using specific depression criteria taken from the DSM-5 diagnostic guidelines. Subsequently, this data is used to train and evaluate machine learning models capable of classifying users based on their in-game behavior. Gaming session data is serialized and stored in the Firebase Realtime Database in text format for further use by the classification model. Classification methods such as decision trees, k-nearest neighbors, support vector machines, and random forest methods have been applied. The diagnostic method in the virtual space demonstrates prospects for remote depression diagnosis using video games. Machine learning models trained based on gaming session data show the ability to effectively distinguish users with and without depression, confirming the potential of this approach for early identification of depressive states. Using video games as a diagnostic tool enables a more accessible and engaging approach to detecting mental disorders, which can increase awareness and aid in combating depression in society.
Keywords: videogame, unity, psychiatric diagnosis, depression, machine learning, classification, behavior analysis, in-game behavior, diagnosis, virtual space
As the space industry accelerates the trend toward reducing development and production costs and simplifying the use of space hardware, small spacecraft, including CubeSats, have become popular representatives of this trend. Over the last decade, the development, production and operation of small spacecraft have been in demand because of a number of advantages: simplicity of design, short design and production times, and reduced development costs. The main problem in the design of CubeSats is their miniaturisation. This paper presents the results of developing an optical cell for collecting and processing video information for remote sensing systems of a CubeSat 3U satellite, with the aim of obtaining the best possible image characteristics within the strict physical limitations of the CubeSat unit. In the course of the work, using the computer-aided design systems Altium Designer and Creo Parametric, the structural diagram, electrical circuit diagram, topology, 3D model and housing design of the video information collection and processing cell were developed. PCB size: 90×90 mm, PCB thickness: 1.9 mm, number of PCB layers: 10, accuracy class: 5, cell height: 20 mm, cell weight: 110 grams.
Keywords: space hardware, Earth remote sensing, small spacecraft, nanosatellite, printed circuit board, small satellite development trend, printed circuit board topology, CubeSat
The article deals with multi-criteria mathematical programming problems aimed at optimizing food production. One of the models of one-parameter programming is associated with solving the problem of combining crop production, animal husbandry and product processing. It is proposed to use the time factor as the main parameter, since some production and economic characteristics can be described by significant trends. The second multi-criteria parametric programming model makes it possible to optimize the production of agricultural products and the harvesting of wild plants in relation to the municipality, which is important for territories with developed agriculture and a high potential of food forest resources.
Keywords: parametric programming, agricultural production, two-criteria model
The use of simulation analysis requires a large number of model runs and considerable computation time. Parallel programming technologies make it possible to reduce the computation time of complex simulation and statistical models. This paper sets the task of parallelizing the simulation algorithm for the dynamics of a given indicator (using the example of a model of cargo volume dynamics in a storage warehouse). The model is presented as equations for calculating input and output flows, specified as an autoregressive moving-average model with trend components, with the flows of the described processes bounded by a limit on the volume (size) of the controlling parameter and each of them strictly stationary. A parallelization algorithm using OpenMP technology is proposed. The efficiency indicators of the parallel algorithm are estimated: speedup, calculated as the ratio of the execution times of the sequential and parallel algorithms, and efficiency, reflecting the proportion of time that computational threads spend in calculations, calculated as the ratio of the speedup to the number of processors. The dependence of the execution times of the sequential and parallel algorithms on the number of simulations has been constructed. For the main stages of the simulation, the efficiency of the parallel algorithm was 73% and the speedup was 4.38 with 6 processors. Computational experiments demonstrate a fairly high efficiency of the proposed parallel algorithm.
Keywords: simulation modeling, parallel programming, parallel algorithm efficiency, warehouse loading model, OpenMP technology
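The two efficiency indicators defined in the abstract reduce to one-line formulas; the sketch below reproduces the reported figures (the raw timing values are illustrative, chosen only so that the speedup matches the stated 4.38 on 6 processors):

```python
def speedup(t_sequential, t_parallel):
    # ratio of sequential to parallel execution time
    return t_sequential / t_parallel

def efficiency(s, n_processors):
    # fraction of time the computational threads spend on useful work
    return s / n_processors

s = speedup(43.8, 10.0)   # illustrative timings, seconds
e = efficiency(s, 6)      # 6 processors, as in the experiment
```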
The article presents a systematic review of scientific works by domestic and foreign authors devoted to modeling fires in tunnels for various purposes. Using the search results in the databases of scientific publications eLIBRARY.RU and Google Scholar, 30 of the most relevant articles were identified that meet the following criteria: the ability to access the full-text version, the material was published in a peer-reviewed publication, the article has a significant number of citations, and the presence of a description of the results of the authors’ own experiments. An analysis was made of the methodology used in the research, as well as the results of studying fires in transport tunnels (road, railway, subway) and mine workings presented in the works. A classification of publications was carried out according to the types of tunnel structures, cross-sectional shape, subject of research, mathematical model used to describe the processes of heat and mass transfer in a gaseous environment and heating of enclosing structures, software used, validation of experimental data, and the use of scaling in modeling. It has been established that the problems of mathematical modeling of fires in deep tunnel structures, as well as modeling of a fire in a tunnel taking into account the operation of fire protection systems, are poorly studied.
Keywords: fire modeling, tunnel, mathematical model, fire prediction, heat transfer, structures, systematic review
In today's highly competitive business environment, understanding customer needs, preferences and behavior is of paramount importance. Customer identification software is a digital solution for accurate customer identification and authentication used in various sectors such as banking, healthcare, and e-commerce. Big data, machine learning, and artificial intelligence technologies have greatly improved the customer identification process, allowing companies to improve personalization of services and products and increase customer satisfaction. However, implementing AI for customer identification faces challenges related to protecting data privacy, training staff, and selecting the right AI tools. In the future, deep learning, neural networks and the Internet of Things may provide new opportunities for customer identification, offering higher levels of security and privacy. However, there is a need to comply with privacy legislation and ensure an ethical approach to the use of AI in customer identification.
Keywords: software, customer identification, traditional methods, machine learning, artificial intelligence, evolution of identification software, future trends
The article discusses the possibilities of using virtual reality technologies to organize fire safety training for schoolchildren. The requirements for the virtual simulator are formulated from the point of view of ensuring the possibility of conducting classes on practicing evacuation skills from the building of a specific educational organization. A functional model of a virtual simulator is presented, built on the basis of the methodology of structural analysis and design, describing the process of developing a virtual space with interactive elements and organizing training for the evacuation of students based on it. A semantic description of the control signals of the functional model, its inputs, mechanisms and outputs is given. The contents of the model subsystems are revealed. Requirements for software, hardware and methodological support for training using virtual reality technologies when conducting fire training are formulated. The concept of creating a digital twin of a building of a general education organization in virtual space is substantiated. Examples of improving virtual space by using the results of mathematical modeling of fire are given. The use of visualization of smoke and flame in virtual space is justified to avoid the occurrence of panic in children during evacuation in fire conditions. Conclusions are drawn about the advantages of the proposed virtual simulator. The prospects for further research and solution to the problem of developing skills for evacuating students from a building of a general education organization in case of fire are listed.
Keywords: virtual reality, virtual simulator, virtual space, fire safety, evacuation, fire training, mathematical modeling of fire, educational technologies, functional modeling
This article is devoted to the development of a method for detecting defects on the surface of a product based on anomaly detection methods using a feature extractor based on a convolutional neural network. The method involves the use of machine learning to train classification models based on the obtained features from a layer of a pre-trained U-Net neural network. As part of the study, an autoencoder is trained based on the U-Net model on data that does not contain images of defects. The features obtained from the neural network are classified using classical algorithms for identifying anomalies in the data. This method allows you to localize areas of anomalies in a test data set when only samples without anomalies are available for training. The proposed method not only provides anomaly detection capabilities, but also has high potential for automating quality control processes in various industries, including manufacturing, medicine, and information security. Due to the advantages of unsupervised machine learning models, such as robustness to unknown forms of anomalies, this method can significantly improve the efficiency of quality control and diagnostics, which in turn will reduce costs and increase productivity. It is expected that further research in this area will lead to even more accurate and reliable methods for detecting anomalies, which will contribute to the development of industry and science.
Keywords: U-Net, neural network, classification, anomaly, defect, novelty detection, autoencoder, machine learning, image, product quality, performance
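The classification step described above, scoring extracted features against a defect-free training set, can be sketched with a simple nearest-neighbour novelty score. In the paper the feature vectors come from a layer of a pre-trained U-Net; the tiny hand-made vectors below are stand-ins for illustration only:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def novelty_score(normal_features, x):
    # distance to the nearest feature vector seen during (defect-free) training;
    # large distances flag candidate anomalies
    return min(euclidean(t, x) for t in normal_features)

# "normal" feature vectors cluster together; a defective region maps far away
normal = [(0.9, 1.1), (1.0, 1.0), (1.1, 0.9)]
```

Applying the score to every image patch and thresholding it localizes anomalous regions even though no defective samples were seen in training, which is the essence of the novelty-detection setting.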
The problem of planning the sending of messages in a cellular network to destinations with known needs is considered. It is assumed that the costs of transmitting information on the one hand are proportional to the transmitted volumes and the cost of transmitting a unit of information over the selected communication channels in cases of exceeding the traffic established by the contract with the mobile operator, and on the other hand are associated with a fixed subscription fee for the use of channels, independent of the volume of information transmitted. An indicator of the quality of the plan in this setting is the total cost of sending the entire planned volume of messages. A procedure for reducing the formulated problem to a linear transport problem is proposed. The accuracy of the solution obtained on the basis of the proposed algorithm is estimated.
Keywords: single jump function, transport problem, minimum total cost criterion, computational complexity of the algorithm, confidence interval
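The cost structure described in the abstract, a fixed subscription fee plus a proportional charge only beyond the contracted traffic, is the single-jump piecewise function named in the keywords. A minimal sketch (all parameter names and values are illustrative, not the paper's data):

```python
def channel_cost(volume, subscription_fee, contracted_volume, unit_rate):
    # zero if the channel is unused; otherwise the fixed fee plus a
    # per-unit charge for traffic exceeding the contracted volume
    if volume <= 0:
        return 0.0
    excess = max(0.0, volume - contracted_volume)
    return subscription_fee + unit_rate * excess
```

The jump from 0 to the subscription fee at the first transmitted unit is what prevents a direct linear-programming formulation and motivates the reduction to a transport problem described in the paper.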
The article is dedicated to the development of an automated system aimed at creating a program of works for the maintenance of road surfaces. The system is based on data from the diagnostics and assessment of the technical condition of roads, in particular data on the assessment of the International Roughness Index (IRI). The development of a program of works for the maintenance of road surfaces is carried out based on the analysis of the IRI assessment both in the short term and on the time horizon of the contractor's work under the contract. The system is developed on the principle of modular programming, where one of the modules uses polynomial regression to predict the IRI assessment for several years ahead. The analysis of the deviation of the predicted IRI value from the actual one is the basis for the selection of works included in the program. The financial module allows the system to comply with the budget framework limited by the contract and provides an opportunity to evaluate the effectiveness of planning by calculating the difference between the cost of road surface maintenance and the contract value. Practical studies demonstrate that the system is capable of effectively and efficiently planning road surface maintenance works in accordance with the established contract deadlines.
Keywords: road surface, automated system, modular programming, machine learning, recurrent neural network, road condition, international roughness index, road diagnostics, road work planning, road work program
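The forecasting module's core idea, fitting a polynomial trend to past IRI measurements and extrapolating it, can be sketched in plain Python via the least-squares normal equations. The polynomial degree and the sample IRI values in the test are illustrative assumptions, not the system's actual data:

```python
def polyfit(xs, ys, degree):
    # least-squares polynomial fit via the normal equations A c = b
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for i in range(n - 1, -1, -1):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j] for j in range(i + 1, n))) / A[i][i]
    return coeffs  # lowest degree first

def polyval(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))
```

Fitting such a trend to yearly IRI readings and evaluating it a few years ahead gives the predicted roughness against which the deviation analysis selects maintenance works.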
The article describes aspects of integrating a Telegram bot, implemented on the 1C:Enterprise platform, into an information system for processing the results of sports competitions. The basic functionality of user interaction with the bot is considered. A diagram of the system states during user interaction with the bot is provided, illustrating the possible state transitions when the user selects particular commands or buttons. A sequence diagram of the registration process for event participants using the Telegram bot is presented, illustrating the transmission of messages using POST and GET requests.
Keywords: processing the results of sports competitions, Telegram bot, messenger, 1C:Enterprise platform, state processing, information systems in the field of sports
The article considers mathematical models for the collection and processing of voice content, on the basis of which a basic logical scheme for predicting synthetic voice deepfakes has been developed. Experiments have been conducted on the selected mathematical formulas and sets of Python libraries that allow real-time analysis of audio content in an organization. The software capabilities of neural networks for detecting voice fakes and generated synthetic (artificial) speech are considered, and the main criteria for the study of voice messages are determined. Based on the results of the experiments, a mathematical apparatus has been formed that is necessary for successfully solving the problems of detecting voice deepfakes. A list of technical standards recommended for collecting voice information and improving the quality of information security in the organization has been formed.
Keywords: neural networks, detection of voice fakes, information security, synthetic voice speech, voice deepfakes, technical standards for collecting voice information, algorithms for detecting audio deepfakes, voice cloning
One of the key directions in the development of intelligent transport systems (ITS) is the introduction of automated traffic management systems. In the context of these systems, special attention is paid to the effective management of traffic lights, which are an important element of automated traffic management. The article is devoted to the development of an automated system aimed at compiling an optimal program of traffic light signals on a given section of the road network. The Simulation of Urban Mobility (SUMO) traffic modeling package was chosen as the modeling tool, the BFGS (Broyden-Fletcher-Goldfarb-Shanno) optimization algorithm was used, and gradient boosting was used as the machine learning method. The results of practical research show that the developed system is able to quickly and effectively optimize the phase parameters and durations of traffic light cycles, which significantly improves traffic management on the corresponding section of the road network.
Keywords: intelligent transport network, traffic management, machine learning, traffic jam, traffic light, phase of the traffic light cycle, traffic flow, modeling of the road network, python, simulation of urban mobility
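The optimization loop in the abstract couples a traffic simulator (SUMO) with BFGS. As a self-contained stand-in, the toy below minimizes an assumed one-dimensional delay model of a green-time split with a derivative-free golden-section search; the delay function, cycle length and flow values are all illustrative, not the paper's model:

```python
def total_delay(green, cycle=90.0, q_main=0.6, q_side=0.3):
    # toy delay model: each approach's delay grows as its green share shrinks
    return q_main / green + q_side / (cycle - green)

def golden_section_min(f, lo, hi, tol=1e-6):
    # derivative-free 1-D minimizer (a stand-in for BFGS in this sketch)
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c = b - phi * (b - a)
        d = a + phi * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2
```

For this toy model the optimum has a closed form, g* = C / (1 + sqrt(q_side / q_main)), so the search result can be checked directly; in the real system the objective is the simulated network delay returned by SUMO.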
It is proposed to use fog computing to reduce the load on data transmission devices and computing systems in GIS. To improve the accuracy of estimating the efficiency of fog computing, a non-Markov model of a multichannel system with queues, "warming up" and "cooling down" is used. A method is proposed for calculating the probabilistic-temporal characteristics of a non-Markov system with queues and Cox distributions of the durations of "warming up" and "cooling down". A program has been created to calculate the efficiency characteristics of fog computing. The solution can be used as a software tool for predictive evaluation of the efficiency of access to geographic information systems, taking into account the features of fog computing technology and the costs of ensuring information security.
Keywords: fog computing, model of a multi-channel service system with queues, “warming up”, “cooling down”, geographic information systems, Cox distribution
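For orientation, the classical Markov baseline of the model studied above, the Erlang C probability that a request has to wait in an M/M/c queue, is easy to sketch. This is only the exponential special case, without the Cox-distributed "warming up" and "cooling down" phases that the paper's non-Markov model adds:

```python
import math

def erlang_c(servers, offered_load):
    # probability that an arriving request waits in an M/M/c queue (Erlang C);
    # offered_load = arrival rate / service rate, in Erlangs
    a, c = offered_load, servers
    if a >= c:
        return 1.0  # unstable regime: every request queues
    top = (a ** c / math.factorial(c)) * (c / (c - a))
    bottom = sum(a ** k / math.factorial(k) for k in range(c)) + top
    return top / bottom
```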
The paper discusses a method for constructing a nonlinear software reliability efficiency function. The proposed algorithm is based on the use of information about the values of reliability criteria, as well as some expert judgments. This approach differs significantly from previously proposed models for assessing software reliability, which are based on a probabilistic approach. In the proposed method, in addition to objective information, subjective expert assessments are taken into account, which allows for a more flexible assessment of the reliability of software products.
Keywords: software reliability, probabilistic models, statistical models, partial performance criteria, linear programming, vector optimization, decision theory
The article considers the influence of illumination and distance on recognition quality for various neural network models on embedded systems. The platforms on which the testing was carried out, as well as the models used, are described. The results of the study of the influence of illumination on recognition quality are presented.
Keywords: artificial intelligence, computer vision, embedded systems, pattern recognition, YOLO, Inception, Peoplenet, ESP 32, Sipeed, Jetson, Nvidia, Max
5G wireless networks are of great interest for research. Network Slicing is one of the key technologies that allows efficient use of resources in fifth-generation networks. This paper considers a method of resource allocation in 5G wireless networks using Network Slicing technology. The paper examines a model for accessing radio network resources, which includes several solutions to improve service efficiency by configuring the logical part of the network. This model uses network slicing technology and elastic traffic. In the practical part of the work, transition intensity matrices were constructed for two different configurations.
Keywords: queuing system, 5G, two-service queuing system, resource allocation, Network Slicing, elastic traffic, minimum guaranteed bitrate
The article examines methods for assessing the structural stability of raster images. The study proposes a comprehensive approach, including texture analysis, color characteristics, and object shape analysis. The author presents experimental results demonstrating the effectiveness of the proposed method on various types of images. The findings obtained enable the optimization of processes for processing and storing graphical information, which is important for various fields, including medicine, geology, and computer vision.
Keywords: raster image, filtering, morphology, relative root mean square error (RRMSE), OpenCV, Python
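One common definition of the relative RMSE named in the keywords, comparing an original and a processed image, is sketched below on flattened pixel arrays. The exact normalization used in the paper may differ; this is an assumption:

```python
import math

def rrmse(original, processed):
    # relative root-mean-square error between two equal-sized pixel sequences:
    # RMS of the difference, normalized by the RMS of the original
    n = len(original)
    mse = sum((o - p) ** 2 for o, p in zip(original, processed)) / n
    ref = sum(o ** 2 for o in original) / n
    return math.sqrt(mse / ref)
```

A value of 0 means the processed image is identical to the original; values near 1 mean the error energy is comparable to the image energy, which makes the metric convenient for comparing structural stability across filters.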
With the development of scientific and technological progress, the use of modern data forecasting methods is becoming an increasingly necessary and important task in analyzing the economic activity of any enterprise, since business operations can generate very large amounts of data. This article is devoted to the study of methods for forecasting financial and trade indicators using neural networks for enterprises of the Krasnodar Territory. The indicators under consideration are the company's revenue for the reporting period, the number of published (available for sale) goods, and the number of goods ordered during the day, week and month. In this study, a multilayer perceptron applicable to revenue forecasting tasks is considered in detail, and the neural network predictive models "MLP 21-8-1", "MLP 21-6-1" and "MLP 20-10-1" are built based on data from the online automotive chemicals store Profline-23.
Keywords: automated neural networks, marketplaces, forecasting, neural network models, mathematical models, forecasting methods
The article discusses the features and prospects of implementing distributed management of critical urban infrastructure facilities based on the principles of autonomy. Based on the analysis, the main technologies, directions of development and features of energy transfer in an urban environment that contribute to the introduction of distributed management of urban infrastructure facilities are highlighted. The study focuses on the analysis of the distributed structure of integrated security of critical urban infrastructure facilities and the development of general principles of distributed management of critical infrastructure facilities using the «Autonomous Building» technology. It is shown that the reliable and safe functioning of the city's critical infrastructure facilities is ensured through the synthesis of special technical systems for the complex protection of a facility from major security threats, based on the combined use of elements of life support and safety systems. At the same time, the technical life support systems of autonomous critical infrastructure facilities are built on the combined use of autonomous energy sources, including non-renewable ones, and on the principles of joint operation of electric and static power converters, energy storage, frequency regulation and energy conversion, while the technical safety systems of autonomous facilities use combined optical-electronic means of event detection and recognition capable of monitoring the full spectrum of electromagnetic radiation.
Keywords: distributed management, technology, energy, energy transfer, urban infrastructure, critical facility, electrification, decentralization, automation, autonomy
Equipping roads with communications is complicated by the almost complete lack of roadside infrastructure, including power lines, as well as by difficult terrain. When emergencies occur on such country roads, residents are forced to seek help from nearby settlements that are well connected. Therefore, providing suburban routes with communications is a key social task. Using an existing base station as an example, this article calculates the attenuation and propagation range of a radio signal for LTE and GSM technologies, provides a comparative analysis, and applies methods of mathematical modeling and system analysis.
Keywords: LTE, GSM, Okumura-Hata model, Lee model, Longley-Rice model
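The attenuation calculation mentioned above typically starts from the Okumura-Hata median path-loss formula listed in the keywords. Below is a sketch for the urban small/medium-city case (frequency in MHz, antenna heights in metres, distance in km); the 900 MHz test values are illustrative, not the article's base station parameters:

```python
import math

def hata_urban_loss(f_mhz, h_base, h_mobile, d_km):
    # Okumura-Hata median path loss in dB, urban environment, small/medium city
    # mobile antenna correction factor
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_mobile
            - (1.56 * math.log10(f_mhz) - 0.8))
    return (69.55 + 26.16 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base) - a_hm
            + (44.9 - 6.55 * math.log10(h_base)) * math.log10(d_km))
```

The model is valid roughly for 150–1500 MHz, base station heights of 30–200 m and distances of 1–20 km, which covers GSM 900 directly; LTE bands above 1500 MHz would use the COST 231 extension instead.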
The present paper examines the topical problem of using graphics processing units (GPUs) in computing processes that are traditionally performed on central processing units (CPUs). With the development of technology and the advent of specialized architectures and libraries, GPUs have become indispensable in areas requiring intensive computing. The article examines in detail the advantages of GPUs over traditional CPUs, attributing them to parallel processing capability and high throughput, which makes GPUs an ideal tool for working with large amounts of data.
Keywords: graphics processors, GPU, CUDA, OpenCL, cuBLAS, CLBlast, rocBLAS, parallel data processing, mathematical calculations, code optimization, memory management, machine learning, scientific research
The development of digital technologies stimulates widespread automation of processes in enterprises. This article discusses the problem of determining the readings of a transformer oil indicator from an image using computer vision. During the study, the design of the MS-1 and MS-2 oil indicators was examined, and the features that must be taken into account when recognizing the device in an image and determining its reading were considered. Based on the processed material, a method for recognizing device elements in an image was developed using the OpenCV library and the Python programming language. The developed method determines instrument readings at different rotation angles and in different weather conditions, which confirms its effectiveness.
Keywords: technical vision, oil indicator, contour recognition, OpenCV library
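Once OpenCV contour detection has located the needle of the indicator, the final step is mapping the needle angle onto the instrument scale. A minimal sketch of that linear mapping is below; the angle range and scale limits are hypothetical, not the MS-1/MS-2 calibration:

```python
def gauge_reading(needle_angle, angle_min, angle_max, value_min, value_max):
    # linear interpolation of a detected needle angle onto the instrument scale
    frac = (needle_angle - angle_min) / (angle_max - angle_min)
    return value_min + frac * (value_max - value_min)
```

Because the mapping is expressed relative to the detected scale endpoints rather than absolute image coordinates, it stays valid when the instrument appears rotated in the frame, which matches the robustness to rotation angles reported in the abstract.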