


Vol 43, No 3 (2017)
- Year: 2017
- Articles: 7
- URL: https://journal-vniispk.ru/0361-7688/issue/view/10838
Parallel processing of very large databases using distributed column indexes
Abstract
The development and investigation of efficient methods for the parallel processing of very large databases using a columnar data representation designed for computer clusters are discussed. An approach that combines the advantages of relational and column-oriented DBMSs is proposed. A new type of distributed column index, fragmented based on the domain-interval principle, is introduced. Column indexes are auxiliary structures permanently stored in the distributed main memory of a computer cluster. Surrogate keys are used to match the elements of a column index to the tuples of the original relation. Resource-intensive relational operations are performed on the corresponding column indexes rather than on the original relations of the database; the result is a precomputation table, from which the DBMS reconstructs the resulting relation. For the basic relational operations on column indexes, methods of parallel decomposition that do not require massive data exchanges between the processor nodes are proposed. This approach improves the performance of OLAP-class queries by a factor of hundreds.
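The abstract's core structures can be sketched as follows. This is a minimal illustration under assumed details (the entry layout, the equal-width intervals, and all names are this sketch's inventions, not the paper's): a column index pairs a surrogate key with a column value, and domain-interval fragmentation routes each entry to the fragment whose value interval contains it, so equal values always land on the same node and equi-joins need no inter-node exchange.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch: a column index entry pairs a surrogate key
// (identifying a tuple of the base relation) with the column value.
struct IndexEntry {
    std::uint64_t surrogate;  // tuple identifier in the original relation
    int value;                // the indexed column value
};

// Domain-interval fragmentation: the value domain [lo, hi] is split into
// k equal intervals, and each entry goes to the fragment whose interval
// contains its value. Entries with equal values always land in the same
// fragment, so an equi-join can run fragment-by-fragment without
// massive data exchange between processor nodes.
std::vector<std::vector<IndexEntry>>
fragment(const std::vector<IndexEntry>& index, int lo, int hi, int k) {
    std::vector<std::vector<IndexEntry>> fragments(k);
    const double width = double(hi - lo + 1) / k;
    for (const IndexEntry& e : index) {
        int f = int((e.value - lo) / width);
        if (f >= k) f = k - 1;  // clamp the upper edge of the domain
        fragments[f].push_back(e);
    }
    return fragments;
}
```

In this scheme the expensive operation runs on the small (surrogate, value) pairs; only the final precomputation table of surrogates is used to fetch full tuples.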



Employing AVX vectorization to improve the performance of random number generators
Abstract
Using the RNGAVXLIB random number generator library as an example, this paper considers approaches to employing AVX vectorization for computational speedup. The RNGAVXLIB library contains AVX implementations of modern generators and routines allowing one to initialize up to 10^19 independent random number streams. The AVX implementations yield exactly the same pseudorandom sequences as the original algorithms do, while being up to 40 times faster than the ANSI C implementations.
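The key property claimed above — bit-identical sequences, many times faster — comes from running several independent streams in vector lanes. A minimal sketch of the idea (not RNGAVXLIB code; the generator, constants, and seeding are illustrative assumptions): eight 32-bit LCG streams kept in structure-of-arrays form, so the update loop maps directly onto one AVX-register-wide multiply/add that compilers can auto-vectorize (e.g. with -O2 -mavx2), while each lane still reproduces its scalar sequence exactly.

```cpp
#include <array>
#include <cstdint>

// Eight independent 32-bit LCG streams in structure-of-arrays layout.
struct Lcg8 {
    std::array<std::uint32_t, 8> state;

    explicit Lcg8(std::uint32_t seed) {
        for (std::size_t i = 0; i < 8; ++i)
            state[i] = seed + 0x9E3779B9u * std::uint32_t(i);  // distinct per-lane seeds
    }

    // Advance all eight lanes one step. The recurrence is the classic
    // 32-bit LCG; because every lane applies it independently, each lane's
    // output is bit-identical to the scalar generator with the same seed.
    void step() {
        for (std::size_t i = 0; i < 8; ++i)
            state[i] = 1664525u * state[i] + 1013904223u;
    }
};
```

An explicit-intrinsics version would replace the inner loop with a single `_mm256_add_epi32`/`_mm256_mullo_epi32` pair, but the layout is the essential part.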



Employing information technologies based on .NET XNA framework for developing a virtual physical laboratory with elements of 3D computer modeling
Abstract
Nowadays, with the rapid evolution of information technologies, new teaching and learning methods and tools are emerging in the educational system. Virtual laboratories are one such tool and are now increasingly used for teaching various disciplines in institutes and universities. In this paper, we outline the advantages of virtual laboratories over traditional ones and present a survey of existing solutions in the field of virtual laboratory development. Moreover, we describe a virtual physical laboratory developed by the authors of this paper, taking into account both the requirements imposed by the educational system of Kazakhstan and the shortcomings of presently available solutions.



Discontinuous Galerkin method on three-dimensional tetrahedral grids. The use of template metaprogramming of the C++ language
Abstract
Many problems of mathematical physics have great computational complexity, especially when they are solved on large-scale three-dimensional grids; the discontinuous Galerkin method is a case in point. Reducing the amount of computation is therefore a very topical task. One possible way to do so is to move some of the computations to the compilation stage, an opportunity that C++ provides through templates. The paper demonstrates the use of template metaprogramming to speed up computations in the discontinuous Galerkin method. In addition, template metaprogramming sometimes simplifies the algorithm through its generalization.
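The "move computation to compilation" idea can be shown in a few lines. This is a hedged sketch of the technique, not the paper's code: for a tetrahedral element, the dimension of the polynomial space of order p is (p+1)(p+2)(p+3)/6, and computing it as a template constant gives the element matrices statically known sizes, so the compiler can unroll loops and avoid dynamic allocation.

```cpp
#include <array>
#include <cstddef>

// Number of basis functions of a polynomial space of order P on a
// tetrahedron, evaluated entirely at compile time.
template <std::size_t P>
struct BasisSize {
    static constexpr std::size_t value = (P + 1) * (P + 2) * (P + 3) / 6;
};

// An element matrix then becomes a statically sized array: its extent is
// fixed during compilation, with no run-time allocation or size checks.
template <std::size_t P>
using MassMatrix = std::array<std::array<double, BasisSize<P>::value>,
                              BasisSize<P>::value>;

static_assert(BasisSize<1>::value == 4,  "linear basis on a tetrahedron");
static_assert(BasisSize<2>::value == 10, "quadratic basis on a tetrahedron");
```

The same pattern extends to precomputing quadrature weights or reference-element integrals as `constexpr` tables, which is the kind of compile-time work the abstract alludes to.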



Constructing and visualizing three-dimensional sea bottom models to test AUV machine vision systems
Abstract
This paper describes an algorithm for constructing a procedural sea bottom model, which can be used for testing and debugging machine vision systems of autonomous underwater vehicles (AUVs). The algorithm consists of three main stages: generating a low-frequency heightmap (used by the designer to define the basic form of a water area), constructing a three-dimensional model (based on the heightmap and fractal noise), and visualizing the three-dimensional model (refined by means of hardware or manual tessellation). The sea bottom model has the following features: it is detailed down to a screen pixel, each of its sections is unique, and its size is adequate for any tests.
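The second stage — refining a coarse heightmap with fractal noise — can be sketched as follows. This is an illustrative construction under assumed details (hash-based value noise with octave doubling is a standard fractal-noise recipe, not necessarily the paper's exact one): a deterministic lattice hash gives reproducible noise, bilinear interpolation smooths it, and summing octaves of halving amplitude and doubling frequency yields the fractal detail.

```cpp
#include <cmath>
#include <cstdint>

// Deterministic integer-lattice hash mapped to [0, 1); reproducibility
// means the same sea bottom can be regenerated for repeated AUV tests.
static double latticeValue(std::int64_t x, std::int64_t y) {
    std::uint64_t h = std::uint64_t(x) * 0x9E3779B97F4A7C15ull
                    ^ std::uint64_t(y) * 0xC2B2AE3D27D4EB4Full;
    h ^= h >> 33;
    h *= 0xFF51AFD7ED558CCDull;
    h ^= h >> 33;
    return double(h >> 11) / double(1ull << 53);
}

// Bilinearly interpolated value noise at a real-valued point.
static double valueNoise(double x, double y) {
    const std::int64_t xi = std::int64_t(std::floor(x));
    const std::int64_t yi = std::int64_t(std::floor(y));
    const double tx = x - double(xi), ty = y - double(yi);
    const double a = latticeValue(xi, yi),     b = latticeValue(xi + 1, yi);
    const double c = latticeValue(xi, yi + 1), d = latticeValue(xi + 1, yi + 1);
    return (a * (1 - tx) + b * tx) * (1 - ty)
         + (c * (1 - tx) + d * tx) * ty;
}

// Fractal (fBm) height: a coarse base octave plus progressively finer,
// weaker octaves. In the paper's pipeline the designer's low-frequency
// heightmap would play the role of the coarsest octave.
double fractalHeight(double x, double y, int octaves) {
    double height = 0.0, amplitude = 1.0, frequency = 1.0, norm = 0.0;
    for (int o = 0; o < octaves; ++o) {
        height += amplitude * valueNoise(x * frequency, y * frequency);
        norm += amplitude;
        amplitude *= 0.5;   // each octave is half as strong...
        frequency *= 2.0;   // ...and twice as fine
    }
    return height / norm;   // normalized to [0, 1)
}
```

Because the hash makes every lattice point distinct, any two sections of the bottom differ while remaining fully reproducible.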



Memory-compact Metropolis light transport on GPUs
Abstract
Solutions to the key problems of implementing Metropolis light transport on GPUs are proposed. A "burn-in" method relying on the ordinary Monte Carlo method, which significantly reduces the "startup bias", is suggested. Memory optimization methods (including multiple-proposal Metropolis light transport) are proposed, and technical aspects of an efficient GPU implementation of Metropolis light transport are discussed.
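The burn-in idea can be illustrated in one dimension. This is a hedged toy sketch — the paper's setting is full light transport on GPUs, and every name and structure here is an assumption for illustration: an arbitrary initial state biases the early Metropolis samples ("startup bias"); drawing the start from a pool of ordinary Monte Carlo samples, with probability proportional to the target f, approximates a start from the stationary distribution.

```cpp
#include <random>
#include <vector>

// Choose a starting state for the Metropolis chain: take poolSize
// independent uniform (ordinary Monte Carlo) samples and resample one of
// them with probability proportional to its contribution f(x).
double burnInStart(double (*f)(double), int poolSize, std::mt19937& rng) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    std::vector<double> pool(poolSize), weight(poolSize);
    for (int i = 0; i < poolSize; ++i) {
        pool[i] = uni(rng);       // ordinary Monte Carlo sample
        weight[i] = f(pool[i]);   // its contribution to the integral/image
    }
    std::discrete_distribution<int> pick(weight.begin(), weight.end());
    return pool[pick(rng)];       // approximately stationary-distributed
}

// One plain Metropolis step with an independent uniform proposal,
// included only to show where the burn-in start is used.
double metropolisStep(double x, double (*f)(double), std::mt19937& rng) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    const double y = uni(rng);
    const double accept = f(x) > 0.0 ? f(y) / f(x) : 1.0;
    return (uni(rng) < accept) ? y : x;
}
```

On a GPU the same pool is naturally produced by many threads at once, which is presumably why seeding from ordinary Monte Carlo fits that architecture well.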



Min_c: Heterogeneous concentration policy for energy-aware scheduling of jobs with resource contention
Abstract
In this paper, we address energy-aware online scheduling of jobs with resource contention. We propose an optimization model and present a new approach to resource allocation with job concentration that takes into account application types and heterogeneous workloads, which may include CPU-intensive, disk-intensive, I/O-intensive, memory-intensive, network-intensive, and other applications. When jobs of one type are allocated to the same resource, they may create a bottleneck and resource contention in the CPU, memory, disk, or network, which can degrade system performance and increase energy consumption. We focus on the energy characteristics of applications and show that an intelligent allocation strategy can further reduce energy consumption compared with traditional approaches. We propose heterogeneous job consolidation algorithms and validate them by conducting a performance evaluation study using the CloudSim toolkit under different scenarios and real data. We also analyze several scheduling algorithms depending on the type and amount of information they require.
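The trade-off the abstract describes — consolidate for energy, but mix job types to avoid contention — can be sketched with a simple placement rule. This is an illustrative assumption-laden sketch, not the paper's Min_c algorithm: prefer the machine with the fewest jobs of the incoming job's type (avoiding a same-resource bottleneck), and break ties toward the fuller machine (keeping the workload consolidated so idle machines can be powered down).

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Resource type that dominates a job's demand (assumed classification).
enum class JobType { Cpu, Disk, Io, Memory, Network };

struct Machine {
    std::vector<JobType> jobs;
    int countOf(JobType t) const {
        return int(std::count(jobs.begin(), jobs.end(), t));
    }
};

// Place a job: fewest same-type jobs first (contention avoidance),
// ties broken toward the fuller machine (energy-aware consolidation).
std::size_t place(std::vector<Machine>& machines, JobType t) {
    std::size_t best = 0;
    for (std::size_t m = 1; m < machines.size(); ++m) {
        const int bc = machines[best].countOf(t);
        const int mc = machines[m].countOf(t);
        if (mc < bc || (mc == bc &&
                        machines[m].jobs.size() > machines[best].jobs.size()))
            best = m;
    }
    machines[best].jobs.push_back(t);
    return best;
}
```

A real scheduler would weigh measured energy profiles and utilization rather than raw job counts, but the two-criterion structure is the point of the sketch.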


