  • Time-frequency analysis of signals using EMD, ITD and VMD algorithms

    The article describes the mathematical foundations of time-frequency signal analysis using the Empirical Mode Decomposition (EMD), Intrinsic Time-Scale Decomposition (ITD) and Variational Mode Decomposition (VMD) algorithms. Synthetic and real signals distorted by additive white Gaussian noise with different signal-to-noise ratios are considered. A comprehensive comparison of the EMD, ITD and VMD algorithms is performed, and the applicability of these algorithms to signal denoising and spectral analysis is investigated. The execution time and computational stability of the algorithms are estimated. A brief illustrative denoising sketch follows the keywords below.

    Keywords: time-frequency analysis, denoising, decomposition, mode, Hilbert-Huang transform, Empirical Mode Decomposition, Intrinsic Time-Scale Decomposition, Variational Mode Decomposition
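
    The sketch below, which is not the article's implementation, illustrates one common use of EMD mentioned above: decompose a noisy synthetic signal and reconstruct it from the lower-frequency intrinsic mode functions. It assumes the third-party PyEMD package (PyPI name EMD-signal); the noise level and the number of discarded IMFs are arbitrary choices.

    # Illustrative EMD denoising by partial reconstruction (assumes PyEMD).
    import numpy as np
    from PyEMD import EMD

    fs = 1000                                        # sampling rate, Hz
    t = np.arange(0, 1, 1 / fs)
    clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
    noisy = clean + 0.3 * np.random.randn(t.size)    # additive white Gaussian noise

    imfs = EMD()(noisy)                              # IMFs ordered from high to low frequency
    denoised = imfs[2:].sum(axis=0)                  # drop the first (noise-dominated) IMFs

    snr = 10 * np.log10(np.sum(clean ** 2) / np.sum((denoised - clean) ** 2))
    print(f"{imfs.shape[0]} IMFs extracted, reconstruction SNR about {snr:.1f} dB")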

  • Development of an information and analytical system based on GIS technology in the field of rational management of forest resources: stages, methods, examples

    The article presents the main stages of, and recommendations for, developing an information and analytical system (IAS) based on geographic information systems (GIS) for the rational management of forest resources, covering the processing, storage and presentation of information on forest wood resources, together with specific examples of the implementation of its individual components and digital technologies. The following stages of IAS development are considered: collecting and structuring data on forest wood resources; justifying the type of software implementation of the IAS; selecting equipment; developing the data analysis and processing unit; developing the architecture of interaction between IAS blocks; developing the IAS application interface; and testing the IAS. It is proposed to implement the interaction between the client and server parts using Asynchronous JavaScript and XML (AJAX) technology, to use the open-source Leaflet library for geodata visualization, and to use the SQLite database management system for storing large amounts of data on the server. The proposed approaches can be applied when creating an IAS to support management decisions in the rational management of forest wood resources. A small server-side sketch follows the keywords below.

    Keywords: geographic information systems, forest resources, methodology, web application, AJAX technology, SQLite, Leaflet, information processing
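
    As a rough illustration of the client-server interaction described above, the sketch below serves forest-stand records stored in SQLite as GeoJSON over HTTP. It assumes a Flask server and a hypothetical stands(id, species, stock_m3, lon, lat) table; the actual IAS architecture may differ. A Leaflet client would request /api/stands via AJAX and render the result with L.geoJSON(data).addTo(map).

    # Minimal sketch of a GeoJSON endpoint over SQLite (table and columns are hypothetical).
    import sqlite3
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/api/stands")
    def stands():
        con = sqlite3.connect("forest.db")
        rows = con.execute("SELECT id, species, stock_m3, lon, lat FROM stands").fetchall()
        con.close()
        features = [{
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": {"id": sid, "species": species, "stock_m3": stock},
        } for sid, species, stock, lon, lat in rows]
        return jsonify({"type": "FeatureCollection", "features": features})

    if __name__ == "__main__":
        app.run(debug=True)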

  • Experience of integrating domestic systems for information modeling of infrastructure, using the example of the Vitro-CAD Common Data Environment and the Topomatic Robur software

    With the digitalisation of the construction industry and import substitution, increasing attention is being paid to the transition to domestic software. At each stage of construction, additional products are needed, including CAD and BIM tools. The article considers the experience of integrating Russian-made systems for information modeling of transport infrastructure and road construction. Within the framework of this work, the Vitro-CAD common data environment (CDE) and the Topomatic Robur software system were integrated, and joint work of the construction project participants in a single information space was organized. The integration freed project participants from routine operations and thereby increased their efficiency. The experience showed that combining Vitro-CAD and Topomatic Robur makes it possible to manage project data efficiently, store files with version tracking, coordinate documentation and issue comments on it.

    Keywords: common data environment, information space, information model, digital ecosystem, computer-aided design, building information modeling, automation, integration, import substitution, software complex, platform, design documentation, road construction

  • One of the Approaches to Analyzing Source Code in Student Projects

    When evaluating student work, the analysis of written assignments, and in particular of source code, is especially relevant. This article discusses an approach to evaluating the dynamics of feature changes in students' source code. Various source code metrics are analyzed and key ones are identified, including quantitative metrics, program control flow complexity metrics, and the TIOBE quality indicator. A text corpus of program source code from a website dedicated to practical programming was used to determine threshold values for each metric and to categorize them. The results were used to analyze students' source code with a developed service that evaluates work by key features, tracks the dynamics of code indicators, and shows a student's position within the group based on the obtained values. A small metric-computation sketch follows the keywords below.

    Keywords: machine learning, text data analysis, program code analysis, digital footprint, data visualization
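
    To make the idea of key source code metrics concrete, the sketch below computes two of the metric families mentioned above for a Python snippet: a simple quantitative metric (non-empty lines of code) and a crude cyclomatic complexity estimate obtained by counting branching constructs. The thresholds are placeholders, not the values derived in the article.

    # Rough sketch of quantitative and control-flow-complexity metrics.
    import ast

    def code_metrics(source: str) -> dict:
        tree = ast.parse(source)
        loc = len([ln for ln in source.splitlines() if ln.strip()])
        branch_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)
        complexity = 1 + sum(isinstance(node, branch_nodes) for node in ast.walk(tree))
        return {"loc": loc, "cyclomatic_estimate": complexity}

    def categorize(metrics: dict) -> str:
        # Placeholder categories; real thresholds would come from the corpus analysis.
        return "simple" if metrics["cyclomatic_estimate"] <= 5 else "complex"

    sample = "def f(x):\n    if x > 0:\n        return x\n    return -x\n"
    m = code_metrics(sample)
    print(m, categorize(m))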

  • Evaluation of the complexity of dominator tree construction algorithms in the context of data flow analysis in a frontend compiler for the Solidity programming language

    This article discusses two of the most popular algorithms for constructing dominator trees in the context of static code analysis for the Solidity programming language: the Cooper-Harvey-Kennedy iterative algorithm and the Lengauer-Tarjan algorithm, both considered effective and widely used in practice. The article compares these algorithms, evaluates their complexity, and selects the preferable option in the context of Solidity, using execution time and memory usage as criteria. The Cooper-Harvey-Kennedy iterative algorithm showed higher performance on small projects, while the Lengauer-Tarjan algorithm performed better on larger ones. Overall, however, the Cooper-Harvey-Kennedy iterative algorithm proved preferable in the context of Solidity, showing higher efficiency and accuracy when analyzing smart contracts in this language. The article may be useful for developers and researchers involved in static code analysis for Solidity, who can apply its results and conclusions in their work. A sketch of the iterative algorithm follows the keywords below.

    Keywords: dominator tree, Solidity, algorithm comparison
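
    The following sketch shows the Cooper-Harvey-Kennedy iterative dominator algorithm on a toy control-flow graph; the graph is invented and the code is not tied to Solidity or to the article's implementation. Nodes are processed in reverse postorder until a fixed point is reached.

    # Iterative dominator computation (Cooper, Harvey, Kennedy) on a toy CFG.
    def compute_idom(succ, entry):
        order, seen = [], set()
        def dfs(v):                               # postorder DFS from the entry node
            seen.add(v)
            for w in succ.get(v, []):
                if w not in seen:
                    dfs(w)
            order.append(v)
        dfs(entry)
        rpo = list(reversed(order))               # reverse postorder
        index = {v: i for i, v in enumerate(rpo)}
        preds = {v: [] for v in rpo}
        for v in rpo:
            for w in succ.get(v, []):
                preds[w].append(v)

        idom = {entry: entry}
        def intersect(a, b):                      # walk up the partial dominator tree
            while a != b:
                while index[a] > index[b]:
                    a = idom[a]
                while index[b] > index[a]:
                    b = idom[b]
            return a

        changed = True
        while changed:
            changed = False
            for v in rpo:
                if v == entry:
                    continue
                new = None
                for p in preds[v]:
                    if p in idom:
                        new = p if new is None else intersect(p, new)
                if new is not None and idom.get(v) != new:
                    idom[v] = new
                    changed = True
        return idom                               # maps each node to its immediate dominator

    cfg = {"entry": ["a", "b"], "a": ["c"], "b": ["c"], "c": []}
    print(compute_idom(cfg, "entry"))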

  • Three-component flow of requests in closed queuing systems with unlimited storage capacity and a waiting time limit

    This article explores the probabilistic characteristics of closed queuing systems, focusing on the differences between "patient" and "impatient" requests. These categories play a crucial role in understanding service dynamics: patient requests wait in the queue, while impatient ones may be rejected if their waiting time exceeds a certain threshold. The novelty of this work lies in the analysis of a system with a three-component structure of the incoming flow, which allows a more detailed examination of request behavior and of the influence of various factors on service efficiency. The article derives key analytical expressions for probabilistic characteristics such as average queue length, rejection probability, and other critical metrics. These expressions enable not only the assessment of the current state of the system but also the prediction of its behavior under various load scenarios. The results may be useful both for theoretical study of queuing systems and for practical application in telecommunications, transportation, and service industries, helping specialists develop more effective strategies for managing request flows, improving service quality and reducing costs. A toy simulation sketch follows the keywords below.

    Keywords: waiting, queue, service, Markov process, queuing system with constraints, flow of requests, simulation modeling, mathematical model
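
    As a rough illustration of "impatient" requests, the sketch below simulates a single-server FIFO queue in which an arriving request is rejected when its waiting time would exceed a patience limit. The arrival and service rates and the limit are invented, and the model is much simpler than the closed three-component system analyzed in the article.

    # Toy simulation of a queue with a waiting-time limit (deterministic patience).
    import random

    def simulate(n_requests=100_000, lam=0.9, mu=1.0, patience=2.0, seed=1):
        random.seed(seed)
        t, server_free_at, rejected, total_wait = 0.0, 0.0, 0, 0.0
        for _ in range(n_requests):
            t += random.expovariate(lam)              # next arrival
            wait = max(0.0, server_free_at - t)
            if wait > patience:                       # impatient request is rejected
                rejected += 1
                continue
            total_wait += wait
            server_free_at = t + wait + random.expovariate(mu)
        served = n_requests - rejected
        return rejected / n_requests, total_wait / served

    p_reject, mean_wait = simulate()
    print(f"rejection probability about {p_reject:.3f}, mean wait of served about {mean_wait:.2f}")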

  • A case-based reasoning method for decision-making in oilfield oil spill response

    Oil spills require timely measures to eliminate their causes and neutralize their consequences. A case-based reasoning approach is promising for developing specific technological solutions for oil spill response, and it becomes important to structure the description of possible situations and the representation of solutions. This paper presents the results of these tasks. A structure for representing oil product spill situations based on a situation tree is proposed, the algorithm for situational decision-making using this structure is described, and parameters for describing spill situations and representing solutions are suggested. The situation tree makes it possible to form a representation of situations from the analysis of various source information. This approach allows the parameters to be refined quickly and similar situations to be selected from the knowledge base, whose solutions can be reused in the current undesirable situation. A small retrieval sketch follows the keywords below.

    Keywords: case-based reasoning, decision making, oil spill, oil spill response, decision support, situation tree
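
    The sketch below illustrates case retrieval by weighted similarity over situation parameters, the core step of case-based reasoning. The attributes, weights and case base are hypothetical and only hint at the situation-tree description proposed in the article.

    # Toy case retrieval by weighted attribute similarity (attributes are hypothetical).
    def similarity(query: dict, case: dict, weights: dict) -> float:
        score, total = 0.0, sum(weights.values())
        for attr, w in weights.items():
            if case.get(attr) == query.get(attr):
                score += w
        return score / total

    case_base = [
        {"product": "crude oil", "surface": "soil",  "volume": "medium",
         "solution": "localize with berms, apply sorbents, excavate the contaminated layer"},
        {"product": "diesel",    "surface": "water", "volume": "small",
         "solution": "deploy booms, skim, apply sorbent booms"},
    ]
    weights = {"product": 0.4, "surface": 0.4, "volume": 0.2}
    query = {"product": "crude oil", "surface": "soil", "volume": "small"}

    best = max(case_base, key=lambda c: similarity(query, c, weights))
    print(best["solution"])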

  • Modeling the Random Forest Machine Learning Algorithm Using the Mathematical Apparatus of Petri Net Theory

    The article considers the possibility of modeling the random forest machine learning algorithm using the mathematical apparatus of Petri net theory. The proposed approach is based on three types of Petri nets: classical nets, colored nets, and nested nets. The paper considers the general structure of decision trees and the rules for constructing models based on a bipartite directed graph, with a subsequent transition to the random forest algorithm. Examples of modeling this algorithm with Petri nets are given, including the construction of a reachability marking tree that corresponds to the operation of both individual decision trees and the random forest.

    Keywords: Petri net, decision tree, random forest, machine learning, Petri net theory, bipartite directed graph, intelligent systems, evolutionary algorithms, decision support systems, mathematical modeling, graph theory, simulation modeling

  • Image compression method based on the analysis of the weights of the detail coefficients of the wavelet transform

    Many modern information processing and control systems in various fields are based on software and hardware for image processing and analysis. At the same time, it is often necessary to store and transmit large data sets, including image collections. Data compression technologies are used to reduce the amount of memory required and to increase the speed of information transmission. Approaches based on discrete wavelet transforms have been developed and applied for this purpose; their advantage is the ability to localize points of brightness change in images. The detail coefficients corresponding to such points make a significant contribution to the energy of the image. This contribution can be quantified as weights, whose analysis determines how the wavelet transform coefficients are quantized in the proposed lossy compression method. The approach described in the paper follows the general image compression scheme, with transformation, quantization and encoding stages; it provides good compression performance and can be used in information processing and control systems. A simplified pipeline sketch follows the keywords below.

    Keywords: image processing, image compression, redundancy in images, general image compression scheme, wavelet transform, compression based on wavelet transform, weight model, significance of detail coefficients, quantization, entropy coding
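
    The sketch below is a simplified stand-in for the transform-quantize-encode pipeline: a two-dimensional wavelet decomposition whose detail coefficients are kept or zeroed according to their energy contribution (their "weight") and then uniformly quantized. It assumes the PyWavelets and NumPy packages; the kept fraction and quantization step are illustrative choices, not the article's weight model.

    # Simplified wavelet compression: threshold detail coefficients by weight, then quantize.
    import numpy as np
    import pywt

    def compress(image, wavelet="haar", level=2, keep=0.05, step=8.0):
        coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
        approx, details = coeffs[0], coeffs[1:]
        # Weight of each detail coefficient = its magnitude (energy contribution).
        all_detail = np.concatenate([np.abs(d).ravel() for bands in details for d in bands])
        threshold = np.quantile(all_detail, 1.0 - keep)      # keep only the heaviest ~5%
        q_details = [tuple(np.round(np.where(np.abs(d) >= threshold, d, 0.0) / step)
                           for d in bands) for bands in details]
        return np.round(approx / step), q_details

    def decompress(q_approx, q_details, wavelet="haar", step=8.0):
        coeffs = [q_approx * step] + [tuple(d * step for d in bands) for bands in q_details]
        return pywt.waverec2(coeffs, wavelet)

    image = np.random.rand(64, 64) * 255                     # stand-in for a real image
    restored = decompress(*compress(image))
    print("RMSE after compression:", np.sqrt(np.mean((restored - image) ** 2)))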

  • Computer vision algorithms for object recognition in low visibility conditions

    The work is devoted to the development and analysis of computer vision algorithms designed to recognize objects in conditions of limited visibility, such as fog, rain or poor lighting. In the context of modern requirements for safety and automation, the task of identifying objects becomes especially relevant. The theoretical foundations of computer vision methods and their application in difficult conditions are considered. An analysis of image processing algorithms is carried out, including machine learning and deep learning methods that are adapted to work in conditions of poor visibility. The results of experiments demonstrating the effectiveness of the proposed approaches are presented, as well as a comparison with existing recognition systems. The results of the study can be useful in the development of autonomous vehicles and video surveillance systems.

    Keywords: computer vision, mathematical modeling, software package, machine learning methods, autonomous transport systems

  • Comparison of models for reduction of measured packet signals in monitoring and diagnostic systems

    In systems for monitoring, diagnostics and state recognition of various types of objects, an important aspect is reducing the volume of measured signal data for transmission or accumulation in information bases, with the ability to restore it without significant distortion. A special type of signal in this case is the packet signal: a set of harmonics at multiple frequencies that is truly periodic with a clearly distinguishable period. Signals of this type are typical for mechanical and electromechanical systems with rotating elements: reducers, gearboxes, electric motors, internal combustion engines, etc. The article considers a number of models for reducing these signals and the cases in which each of them is preferable. In particular, the following are examined: the discrete Fourier transform model with a modified formula for restoring the continuous signal, the proposed model based on decomposition into bordering functions, and the discrete cosine transform model. The first two models provide, in the ideal case, exact signal restoration after reduction; the last is a reduction model with information loss. The main criteria for evaluating the models are the computational complexity of the transformations, the achieved degree of signal reduction, and the error in restoring the signal from the reduced data. It was found that each of the listed models can be applied to packet signals, the choice being determined by the priority indicators of the reduction assessment. The considered reduction models can be applied in information and measuring systems for condition monitoring, diagnostics and control of the above-mentioned objects. A small DCT-based sketch follows the keywords below.

    Keywords: reduction model, measured packet signal, discrete cosine transform, decomposition into bordering functions, reduction quality assessment, information-measuring system
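
    As a small illustration of the lossy reduction model based on the discrete cosine transform, the sketch below keeps only the largest-magnitude DCT coefficients of a synthetic packet-like signal (harmonics at multiple frequencies) and restores it with the inverse transform. The signal and the kept fraction are illustrative.

    # Lossy DCT reduction of a packet-type signal: keep the largest coefficients.
    import numpy as np
    from scipy.fft import dct, idct

    fs = 2048
    t = np.arange(fs) / fs
    signal = sum(a * np.sin(2 * np.pi * k * 10 * t)
                 for k, a in enumerate([1.0, 0.6, 0.3], start=1))   # harmonics at 10, 20, 30 Hz

    c = dct(signal, norm="ortho")
    keep = 64                                      # reduction: store only 64 of 2048 values
    idx = np.argsort(np.abs(c))[-keep:]
    reduced = np.zeros_like(c)
    reduced[idx] = c[idx]

    restored = idct(reduced, norm="ortho")
    err = np.linalg.norm(restored - signal) / np.linalg.norm(signal)
    print(f"reduction ratio {signal.size / keep:.0f}:1, relative restoration error {err:.3e}")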

  • Using Chebyshev's inequalities in problems of designing complex technical systems

    The current situation in the practice of designing complex technical systems with metrological support is characterized by the following important features: a) the initial information that can actually be collected and prepared at the early stages of design for solving probabilistic problems turns out, as a rule, to be incomplete, inaccurate and, to a high degree, uncertain; b) the form of specifying the initial information (in the form of constraints) in problems can be very diverse: average and dispersion characteristics or functions of them, measurement errors or functions of them, characteristics specified by a probability measure, etc. These circumstances necessitate the formulation and study of new mathematical problems of characterizing distribution laws and developing methods and algorithms for solving them, taking into account the constraints on the value and nature of change of the determining parameter (random variable) of a complex technical system. As a generalized integral characteristic of the determining parameter, the law of its distribution is chosen, which, as is commonly believed, fully characterizes the random variable under study. The purpose of this work is to develop a method that allows constructing distribution laws of the determining parameter of a complex technical system using the minimum amount of available information based on the application of Chebyshev inequalities. A method for characterizing the distribution law by the property of maximum entropy is presented, designed to model the determining parameter of complex technical systems with metrological support. Unlike the classical characterization method, the proposed method is based on the use of Chebyshev inequalities instead of restrictions on statistical moments. An algorithm for constructing the distribution function of the determining parameter is described. A comparison is given of the results of constructing distribution laws using the developed method and using the classical variational calculus.

    Keywords: Chebyshev inequalities, complex technical system, design, determining parameter, characterization of distribution law

  • Features of functional relationships of parameters of a time-varying diagnostic signal in modeling, recognition of states and monitoring of systems

    In operational diagnostics and state recognition of complex technical systems, an important task is to identify small time-determined changes in complex measured diagnostic signals of the controlled object. For these purposes, the signal is transformed into a low-dimensional image in the diagnostic feature space, which moves along trajectories of different shapes depending on the nature and magnitude of the changes. It is important to identify stable and deterministic patterns of change in these complex-shaped diagnostic signals, and doing so largely depends on how the low-dimensional feature space is constructed. In the article, the space of decomposition coefficients of the measured signal in an adaptive orthonormal basis of canonical transformations is considered as such a space; the basis is constructed from a representative sample of realizations of the controlled signal for various states of the system using the proposed algorithm. The identified trajectory shapes of the images correspond to specific types of deterministic changes in the signal, and analytical functional dependencies were found linking a specific type of signal change with the shape of the image trajectory in the feature space. The proposed approach simplifies modeling, operational diagnostics and condition monitoring in, for example, low-frequency diagnostics and flaw detection of structures, vibration diagnostics, and monitoring of the stress state of an object by analyzing the time characteristics of response functions to impact.

    Keywords: modeling, functional dependencies, state recognition, diagnostic image, image movement trajectories, small changes in diagnostic signals, canonical decomposition basis, analytical description of image trajectory

  • Using Clustering Methods to Automate the Formation of User Roles

    The article addresses the problem of automated generation of user roles using machine learning methods. Cluster analysis methods implemented in Python in the Google Colab environment are used. Based on the results obtained, a method for generating user roles was developed and tested, which reduces the time needed to form a role-based access control model. A small clustering sketch follows the keywords below.

    Keywords: machine learning, role-based access control model, clustering, k-means method, hierarchical clustering, DBSCAN method
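
    A minimal sketch of the clustering idea, assuming a binary user-permission matrix: users are clustered with k-means (scikit-learn), and each cluster yields a role containing the permissions shared by most of its members. The matrix and the number of clusters are hypothetical.

    # Toy role mining: cluster users by their permission vectors.
    import numpy as np
    from sklearn.cluster import KMeans

    permissions = ["read_docs", "edit_docs", "admin_users", "view_reports"]
    user_perm = np.array([
        [1, 1, 0, 0],
        [1, 1, 0, 0],
        [0, 0, 1, 1],
        [0, 0, 1, 1],
        [1, 0, 0, 1],
    ])

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(user_perm)
    for role in range(2):
        members = user_perm[labels == role]
        granted = [p for j, p in enumerate(permissions) if members[:, j].mean() >= 0.5]
        print(f"role {role}: users {np.where(labels == role)[0].tolist()}, permissions {granted}")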

  • Development of a dataset storage module for collision detection using polygonal meshes and neural networks

    This article is devoted to the development of a collision detection technique using polygonal meshes and neural networks. Collisions are an important aspect of realistically simulating physical interactions. Traditional collision detection methods have limitations related to accuracy and computational complexity. A new approach based on the use of neural networks for collision detection with polygonal meshes is proposed. Neural networks have shown excellent results in various computer vision and image processing tasks, and in this context they can be effectively applied to polygon pattern analysis and collision detection. The main idea of the technique is to train a neural network on a large data set containing information about the geometry of objects and their movement, for automatic collision detection. To train the network, a special module responsible for storing and preparing the dataset must be created; it provides collection, structuring and storage of data about polygonal models, their movements and collisions. The work includes the development and testing of a neural network training algorithm on the created dataset, as well as an assessment of the quality of network predictions in a controlled environment with various collision conditions.

    Keywords: modeling, collision detection techniques using polygonal meshes and neural networks, dataset, assessing the quality of network predictions

  • Modelling of web server operation on the basis of a mass service (queuing) system

    A simulation model of the Apache HTTP Server as a mass service (queuing) system is considered; the parameters of the corresponding system and of the Apache HTTP Server are compared using the GPSS World environment. The comparison of the simulation model with a real web server is based on a test server: the Apache JMeter application was used to simulate high load on the server, generate queries and collect statistics. A comparison of both reports is given, differences in characteristics are pointed out, and assumptions about the reasons for the differences are outlined. The model can be applied to establish requirements for the Apache HTTP Server in order to optimise its performance.

    Keywords: simulation modelling, mass service system, efficiency characteristics, test server, flow of requests, service channels, queue

  • Survey of topology optimization methods for quantum key distribution networks

    At the moment, quantum key distribution (QKD) technology guarantees the highest level of data exchange security, which makes QKD networks one of the most promising areas in the field of computer security. Unfortunately, the problem of topology optimization when planning and extending QKD networks has not attracted enough attention. This paper reviews approaches that use analytical models in the topology optimization problem of quantum key distribution networks. Different methods that solve problems of network capacity and security maximization and cost minimization are reviewed, the utilized algorithms are described, and conclusions about possible further research in this area are drawn.

    Keywords: quantum key distribution, mathematical modeling, network topology, analytical modeling, topology optimization

  • The influence of dataset expansion methods on the quality of neural network model training. An adaptive dataset expansion approach

    The article analyzes the impact of transformation types on the training quality of neural network classification models and proposes a new approach to expanding image sets using reinforcement learning. A small transformation-pool sketch follows the keywords below.

    Keywords: neural network model, training dataset, data set expansion, image transformation, recognition accuracy, reinforcement learning, image vector
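
    The sketch below shows a pool of elementary image transformations (flips, rotation, additive noise) of the kind analyzed above. The adaptive, reinforcement-learning-driven selection of transformations is not reproduced here; an agent would choose operations from such a pool during training.

    # Toy pool of dataset-expansion transformations.
    import numpy as np

    def augment(image: np.ndarray, op: str, rng: np.random.Generator) -> np.ndarray:
        if op == "hflip":
            return image[:, ::-1]
        if op == "vflip":
            return image[::-1, :]
        if op == "rot90":
            return np.rot90(image)
        if op == "noise":
            return np.clip(image + rng.normal(0, 10, image.shape), 0, 255)
        return image

    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(32, 32)).astype(float)   # stand-in for a real image
    batch = [augment(image, op, rng) for op in ("hflip", "vflip", "rot90", "noise")]
    print(len(batch), "augmented variants of shape", batch[0].shape)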

  • Method of building three-dimensional graphics based on distance fields

    This paper investigates the effectiveness of the distance field method for building 3D graphics in comparison with the traditional polygonal approach. The main attention is paid to the analytical representation of models, which makes it possible to determine the shortest distance to scene objects and provides high speed even on weak hardware. A comparative analysis covers the achievable level of model detail, the applicability of different light sources, reflection mapping and model transformation. Conclusions are drawn about the promise of the distance field method for 3D graphics, especially in real-time rendering systems, and the relevance of further research and development in this area is emphasized. Within the framework of this work, a universal software implementation of the distance field method was developed. A minimal ray marching sketch follows the keywords below.

    Keywords: computer graphics, rendering, 3D graphics, ray marching, polygonal graphics, 3D graphics development, modeling, 3D models
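
    A minimal sphere-tracing (ray marching) sketch over the analytic signed distance field of a single sphere, the core loop of the distance field method; the scene, camera ray and constants are illustrative.

    # Minimal sphere tracing over a signed distance field.
    import numpy as np

    def sdf_sphere(p, center=np.array([0.0, 0.0, 3.0]), radius=1.0):
        return np.linalg.norm(p - center) - radius    # shortest distance to the surface

    def march(origin, direction, max_steps=64, eps=1e-3, max_dist=20.0):
        t = 0.0
        for _ in range(max_steps):
            d = sdf_sphere(origin + t * direction)
            if d < eps:
                return t                              # hit: distance along the ray
            t += d                                    # safe step = distance to nearest surface
            if t > max_dist:
                break
        return None                                   # miss

    origin = np.zeros(3)
    direction = np.array([0.0, 0.0, 1.0])             # ray aimed straight at the sphere
    print("hit at t about", march(origin, direction)) # expected near 2.0 (front surface)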

  • Development of a client-server application for constructing a virtual museum

    The article describes the methodology for developing a client-server application intended for constructing a virtual museum. The creation of the server part of the application, with functions for processing and executing requests from the client part, as well as the creation of a database and interaction with it, is discussed in detail. The client part is developed using the Angular framework and the TypeScript language; the three-dimensional visualization is based on the three.js library, which is built on top of WebGL. The server part is developed on the ASP.NET Core platform in C#. The database schema is based on a Code-First approach using Entity Framework Core, with Microsoft SQL Server as the database management system.

    Keywords: client-server application, virtual tour designer, virtual museum, three.js library, framework, Angular, ASP.NET Core, Entity Framework Core, Code-First, WebGL

  • Algorithm for generating three-dimensional terrain models in the monocular case using deep learning models

    The article is devoted to the development of an algorithm for three-dimensional terrain reconstruction from single satellite images. The algorithm builds three-dimensional models from the outputs of two deep learning models that solve the problems of elevation restoration and instance segmentation, respectively. The paper also presents methods for processing large satellite images with deep learning models. The proposed algorithm makes it possible to significantly reduce the input data requirements in the three-dimensional reconstruction problem.

    Keywords: three-dimensional reconstruction, deep learning, computer vision, elevation restoration, segmentation, depth determination, contour approximation

  • Multi-agent search engine optimization algorithm based on hybridization and co-evolutionary procedures

    The paper proposes a hybrid multi-agent solution search algorithm with a reconfigurable architecture, containing procedures that simulate the behavior of a bee colony and a swarm of agents, together with co-evolution methods. The developed hybrid algorithm is based on a hierarchical multi-population approach, which uses the diversity of the solution set to expand the areas of the search. Formulations of the canonical bee colony and agent swarm metaheuristics are presented. As a measure of the similarity of two solutions, affinity is used: a measure of equivalence or relatedness (similarity, closeness) of two solutions. The principle of operation and application of the directed mutation operator is described. A modified chromosome swarm paradigm is presented that, unlike the canonical methods, supports searching for solutions with integer parameter values. The time complexity of the algorithm ranges from O(n^2) to O(n^3). A sketch of the base bee colony metaheuristic follows the keywords below.

    Keywords: swarm of agents, bee colony, co-evolution, search space, hybridization, reconfigurable architecture
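
    The sketch below implements a compact, canonical artificial bee colony search on a toy sphere function. The article's hybrid multi-population algorithm with co-evolution and directed mutation is not reproduced; this is only the base metaheuristic it builds on, with invented parameters.

    # Compact canonical bee-colony-style search on a toy objective.
    import numpy as np

    def objective(x):                       # toy fitness: minimize the sphere function
        return np.sum(x ** 2)

    rng = np.random.default_rng(0)
    dim, n_sources, limit, iters = 5, 20, 10, 200
    sources = rng.uniform(-5, 5, (n_sources, dim))
    fitness = np.array([objective(s) for s in sources])
    trials = np.zeros(n_sources, dtype=int)

    for _ in range(iters):
        for i in range(n_sources):          # employed/onlooker phases, merged for brevity
            k = rng.integers(n_sources - 1)
            k = k if k < i else k + 1       # random partner different from i
            phi = rng.uniform(-1, 1, dim)
            candidate = sources[i] + phi * (sources[i] - sources[k])
            f = objective(candidate)
            if f < fitness[i]:
                sources[i], fitness[i], trials[i] = candidate, f, 0
            else:
                trials[i] += 1
        exhausted = trials > limit          # scout phase: abandon stagnant food sources
        n_new = int(exhausted.sum())
        if n_new:
            sources[exhausted] = rng.uniform(-5, 5, (n_new, dim))
            fitness[exhausted] = [objective(s) for s in sources[exhausted]]
            trials[exhausted] = 0

    print("best value found:", fitness.min())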

  • Modeling of lighting effects and development of a sketch of the lighting design project for the A.V. Lunacharsky Drama Theater

    The article presents an approach to developing lighting design projects for outdoor architectural lighting. Based on modeling of light distribution in DIALux 4.13 with specific luminaires, brushes have been created that simulate the lighting effects of real lighting devices. A sketch of outdoor architectural lighting implementing local lighting techniques was created in Adobe Photoshop using the example of a drama theater building, and a lighting design project was created from a three-dimensional model of the object in DIALux EVO. The proposed method of creating sketches is useful in professional work on lighting design projects based on high-quality photographs of buildings, without the need to develop three-dimensional models, as well as for conceptual proposals for fragments of the urban light environment and landscape areas. With a library of brushes based on the real light distributions of lighting devices, sketches of architectural lighting implementing various lighting techniques can be created.

    Keywords: Adobe Photoshop, DIALux 4.13, DIALux EVO, sketch, brush, building facade, outdoor architectural lighting, lighting effect, lighting technique, architectural lighting concept

  • A study of the NSGA-III and AGE-MOEA-II algorithms for solving multicriteria optimization problems

    The article considers multi-criteria Pareto optimization methods based on genetic algorithms. The NSGA-III and AGE-MOEA-II methods are described and compared. The results are important both for theoretical research on genetic algorithms and for practical application in engineering and other fields where multicriteria optimization plays a key role. A small non-dominated sorting sketch follows the keywords below.

    Keywords: multicriteria optimization problem, Pareto front, genetic algorithm, NSGA-III, AGE-MOEA-II
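
    Both NSGA-III and AGE-MOEA-II rely on non-dominated sorting; the sketch below extracts the first Pareto front of a random two-objective population. The full algorithms (reference directions, survival operators) are not shown.

    # Extraction of the first non-dominated (Pareto) front, minimizing all objectives.
    import numpy as np

    def dominates(a, b):
        return np.all(a <= b) and np.any(a < b)

    def first_front(objectives):
        n = len(objectives)
        return [i for i in range(n)
                if not any(dominates(objectives[j], objectives[i]) for j in range(n) if j != i)]

    rng = np.random.default_rng(0)
    pop = rng.random((30, 2))                 # 30 candidate solutions, 2 objectives
    print("non-dominated solutions:", first_front(pop))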

  • Development of a mathematical model of daily production planning in ferrous metallurgy using the example of EVRAZ ZSMK JSC

    Since 2017, EVRAZ ZSMK JSC has been developing and operating a mathematical model, SMM Forecast, covering all processing stages from ore extraction to final products. The model is used to calculate technical cases, plans, and parity prices for iron ore and coal, and its use brought more than 200 million rubles of economic effect in 2020 alone. The use of a universal mathematical model made it possible in 2023 to begin developing a module for daily optimization of the sinter plant and blast furnace production. The article discusses the experience of EVRAZ ZSMK JSC in developing and implementing a daily planning system based on the monthly planning model of SMM Forecast, as well as methods for achieving an acceptable speed of multi-period optimization. The SMM Forecast system was originally designed for end-to-end, scenario-based calculation from the main raw materials (ore and coal) to finished products in volumetric monthly planning. The system uses optimization algorithms to maximize a global objective function of margin income under specified constraints. The mathematical model of the processing stages uses the norms and technologies specified in the company's regulatory documents. At the same time, the model is universal, and the transfer of algorithms from monthly to daily mode was carried out with minimal modifications. The article also discusses the difficulties encountered and the methods used to solve them. The first problem faced by the developers was the low optimization speed of the model in daily dynamics due to the much heavier optimization load: the calculation time increased significantly, and solving this required introducing a number of optimization cycles aimed at reducing the equation-solving time, introducing variable bounds, and determining starting points. As a result, the calculation time for one month was about 40 minutes. The second problem was the need to develop a complex supply management algorithm and to optimize stacking at the sinter plant. As a result, a working tool has been developed that brings additional income to the enterprise. A toy linear programming sketch follows the keywords below.

    Keywords: metallurgy, modeling, planning, daily planning, sintering plant, blast furnace shop, stacking
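
    As a toy illustration of margin-maximizing planning under capacity constraints, the sketch below solves a two-product linear program with scipy.optimize.linprog. All coefficients are invented and bear no relation to the actual SMM Forecast model.

    # Toy daily plan: maximize margin income subject to sinter and furnace capacity.
    from scipy.optimize import linprog

    margins = [-120.0, -90.0]        # margin income per tonne of product 1 and 2 (negated: linprog minimizes)
    A_ub = [[1.4, 1.1],              # tonnes of sinter consumed per tonne of each product
            [1.0, 1.0]]              # blast-furnace capacity usage per tonne of each product
    b_ub = [5000.0, 4200.0]          # daily sinter output and furnace capacity, tonnes

    res = linprog(c=margins, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print("daily plan, tonnes:", res.x, "margin income:", -res.fun)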