Our lives are permeated by data: endless streams of information pass through computer systems, and modern software is hard to imagine without database interaction. Many different DBMSs exist, each suited to a particular way of using information. This article discusses a locality-sensitive hashing (LSH) algorithm implemented in PL/pgSQL that makes it possible to search for similar documents in a database.
Keywords: LSH, hashing, field, string, text data, query, software, SQL
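As an illustration of the approach, a minimal PL/pgSQL sketch of a minhash-based LSH signature is given below. The function name lsh_signature, the 3-word shingling, and the use of PostgreSQL 11+ hashtextextended() are assumptions for this example, not the article's actual implementation.

CREATE OR REPLACE FUNCTION lsh_signature(doc text, num_hashes int DEFAULT 16)
RETURNS bigint[] AS $$
DECLARE
    sig     bigint[];
    shingle text;
    h       bigint;
    i       int;
BEGIN
    -- Start each signature slot at the largest bigint ("infinity").
    sig := array_fill(9223372036854775807::bigint, ARRAY[num_hashes]);
    -- Iterate over 3-word shingles of the lower-cased document.
    FOR shingle IN
        SELECT t.w || ' ' || lead(t.w) OVER win || ' ' || lead(t.w, 2) OVER win
        FROM unnest(regexp_split_to_array(lower(doc), '\s+'))
             WITH ORDINALITY AS t(w, ord)
        WINDOW win AS (ORDER BY t.ord)
    LOOP
        CONTINUE WHEN shingle IS NULL;  -- trailing windows are incomplete
        FOR i IN 1..num_hashes LOOP
            -- hashtextextended(text, seed): one hash function per seed.
            h := hashtextextended(shingle, i);
            IF h < sig[i] THEN
                sig[i] := h;
            END IF;
        END LOOP;
    END LOOP;
    RETURN sig;
END;
$$ LANGUAGE plpgsql IMMUTABLE;

-- Estimated Jaccard similarity of two documents: the share of
-- matching positions in their minhash signatures.
SELECT avg((a.v = b.v)::int) AS est_similarity
FROM unnest(lsh_signature('first document text'))  WITH ORDINALITY AS a(v, i)
JOIN unnest(lsh_signature('second document text')) WITH ORDINALITY AS b(v, i)
     USING (i);

Documents whose estimated similarity exceeds a chosen threshold can then be treated as near-duplicates.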
Software is indispensable today, and huge flows of information pass through computing systems. Unstructured, endlessly arriving data cannot be processed as-is, so specific tasks must be identified and the information prepared for processing. One such step is deduplication. This article discusses possible optimizations of a database-based method for removing duplicates.
Keywords: deduplication, database, field, string, text data, query, software, unstructured data
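As a concrete illustration, a minimal SQL sketch of exact-duplicate removal follows. The table documents(id, body) and the use of md5() as the comparison key are assumptions for this example, not the article's schema.

-- Delete every row whose body already appeared under a smaller id,
-- keeping exactly one copy of each distinct document.
DELETE FROM documents d
USING (
    SELECT id,
           row_number() OVER (PARTITION BY md5(body) ORDER BY id) AS rn
    FROM documents
) dup
WHERE d.id = dup.id
  AND dup.rn > 1;

Hashing the body with md5() keeps the partitioning key short; comparing the raw body column directly would also work but is slower on long texts.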
Today, a huge amount of heterogeneous information passes through computing systems. There is a critical need to analyze an endless stream of data with limited means, which in turn requires structuring the information. One step toward ordering data is deduplication. This article discusses a database-based method for removing duplicates and analyzes the results of testing it on several database management systems with different parameter sets.
Keywords: deduplication, database, field, row, text data, artificial neural network, sets, query, software, unstructured data
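As an illustration of how a single measurement in such testing might be taken, a short PostgreSQL sketch follows. The table documents(body) and the md5()-based duplicate check are assumptions for this example; other DBMSs would use their own plan-inspection tools.

-- Expression index on the hash used for duplicate detection.
CREATE INDEX IF NOT EXISTS documents_body_md5_idx ON documents (md5(body));

-- Time one duplicate-detection query and inspect its plan.
EXPLAIN (ANALYZE, BUFFERS)
SELECT md5(body) AS h, count(*) AS copies
FROM documents
GROUP BY md5(body)
HAVING count(*) > 1;

Running the same query with and without the index, and on different DBMSs, gives comparable timing figures for a given parameter set.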