


Vol 44, No 6 (2018)
- Year: 2018
- Articles: 20
- URL: https://journal-vniispk.ru/0361-7688/issue/view/10857
Article
Node Failure Aware Broadcasting Mechanism in Mobile Adhoc Network Environment
Abstract
A mobile ad hoc network is a collection of mobile nodes distributed in an environment without any fixed infrastructure. Broadcasting plays a significant role in network performance, but it faces many complexities because of the dynamically changing behavior of mobile nodes. The Environs Aware Neighbor-knowledge based Broadcasting (EANKBB) method and Optimal Cluster based Broadcasting using GA (OCBC-GA) were introduced in earlier work, where the cluster head is selected optimally with the help of a genetic algorithm. Once the cluster head is selected, a time sequence schedule permits every node to broadcast for a period of time. However, that work does not take the destination node’s location into account: a broadcast message is relayed to all neighbours in order to reach the destination, which leads to memory overhead and wasted resources. In addition, only the node with the highest priority is permitted to broadcast, while the remaining nodes must wait until the prioritized node leaves the region, which degrades the performance of the other nodes in the network. To rectify these issues, a Node Failure Aware Broadcasting Mechanism (NFABM) is introduced in this work. Cluster head selection is performed using a hybrid genetic cuckoo search optimization algorithm together with location-aware clustering. Energy consumption, bandwidth, location, and network coverage are taken as the fitness criteria, which minimizes the number of broadcast transmissions by forwarding messages only to the nodes located closer to the destination node. A Prioritized Time Sequence Scheme gives equal priority to all nodes in the environment: a time sequence is computed for every node, and each node is allocated an individual time period in which it is allowed to broadcast. The prioritized node is allocated the initial time sequence period and stops transmitting when its time sequence is completed, so every node receives priority and all nodes are treated uniformly. The entire work is executed in the NS2 simulation environment, and the results show that the suggested method gives better results than the current research methods.
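The abstract does not state the exact fitness formula used by the hybrid genetic cuckoo search. As a rough, purely illustrative sketch of how the four listed criteria (energy consumption, bandwidth, location, and coverage) might be combined when scoring candidate cluster heads, with all names and weights hypothetical:

```python
# Illustrative sketch only: the abstract lists energy consumption, bandwidth,
# location (distance to the destination), and network coverage as fitness
# criteria, but does not give the exact formula. Names and weights are
# hypothetical.
from dataclasses import dataclass
import math

@dataclass
class Node:
    x: float
    y: float
    residual_energy: float   # J
    bandwidth: float         # Mbit/s
    neighbors: int           # nodes within radio range

def fitness(node: Node, dest: Node, total_nodes: int,
            w_energy=0.3, w_bw=0.2, w_loc=0.3, w_cov=0.2) -> float:
    """Higher is better: prefer energy-rich, high-bandwidth nodes that are
    close to the destination and cover many neighbors."""
    dist = math.hypot(node.x - dest.x, node.y - dest.y)
    return (w_energy * node.residual_energy
            + w_bw * node.bandwidth
            + w_loc / (1.0 + dist)
            + w_cov * node.neighbors / total_nodes)

# A genetic/cuckoo-search hybrid would evaluate this score for each candidate
# cluster head and keep the best candidates across generations.
```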



Energy Proficient Flooding Scheme Using Reduced Coverage Set Algorithm for Unreliable Links
Abstract
A wireless sensor network consists of spatially distributed sensor nodes that monitor their environment and transmit the measured information to a sink node or base station. The advantage of a wireless sensor network is that it can cope with node failure, handle the mobility of nodes, and withstand harsh environments. Flooding is the basic mechanism in a wireless sensor network for disseminating a message to the entire network; it supports location discovery, route establishment, querying, and similar tasks, and many protocols and applications rely on it for communication. To achieve reliability and energy proficiency, retransmission and rebroadcasting of the same message during flooding must be taken into account. In this paper, a novel forwarding scheme called Reduced Coverage Set is proposed that opportunistically reduces the number of rebroadcasts and retransmissions and thereby reduces energy consumption. In addition, a Better Link Choosing Scheme is proposed to select a better link and improve network performance during link failure and packet loss. Compared with traditional flooding algorithms, the proposed design reduces redundant packet transmissions by 12%~30%, thereby increasing the network lifetime.
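The abstract does not specify how the Reduced Coverage Set is computed. One common way to realize the idea that only a small subset of neighbors needs to rebroadcast is a greedy set cover over the two-hop neighborhood; the sketch below illustrates that generic idea and is not the paper's algorithm:

```python
# Hypothetical sketch: pick a small subset of 1-hop neighbors whose combined
# coverage reaches all 2-hop neighbors, so only that subset rebroadcasts.
# (Greedy set cover; the paper's actual Reduced Coverage Set rule may differ.)
def reduced_coverage_set(one_hop: dict[str, set[str]], two_hop: set[str]) -> set[str]:
    uncovered = set(two_hop)
    forwarders: set[str] = set()
    while uncovered:
        # choose the neighbor that covers the most still-uncovered nodes
        best = max(one_hop, key=lambda n: len(one_hop[n] & uncovered))
        gained = one_hop[best] & uncovered
        if not gained:
            break                      # remaining nodes unreachable in 2 hops
        forwarders.add(best)
        uncovered -= gained
    return forwarders

# Example: neighbors a, b, c and the 2-hop nodes each of them reaches.
print(sorted(reduced_coverage_set(
    {"a": {"x", "y"}, "b": {"y", "z"}, "c": {"z"}},
    {"x", "y", "z"})))                 # ['a', 'b']
```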



A Machine Learning Framework for Feature Selection in Heart Disease Classification Using Improved Particle Swarm Optimization with Support Vector Machine Classifier
Abstract
Machine learning is used as an effective support system in health diagnosis, which involves large volumes of data. Analyzing such a large volume of data consumes considerable resources and execution time. In addition, not all features present in the dataset contribute to solving the given problem, so an effective feature selection algorithm is needed to find the features that contribute most to diagnosing the disease. Particle Swarm Optimization (PSO) is a metaheuristic algorithm for finding a good solution in less time. Nowadays, the PSO algorithm is used not only to select the most significant features but also to remove the irrelevant and redundant features present in the dataset. However, the traditional PSO algorithm has difficulty selecting the optimal weight for updating the velocity and position of the particles. To overcome this issue, this paper presents a novel function for identifying optimal weights on the basis of a population diversity function and a tuning function. We also propose a novel fitness function for PSO based on a Support Vector Machine (SVM); its objective is to minimize the number of attributes and increase the accuracy. The performance of the proposed PSO-SVM is compared with various existing feature selection algorithms such as Info Gain, Chi-squared, One-attribute-based, Consistency subset, Relief, CFS, Filtered subset, Filtered attribute, Gain ratio, and the standard PSO algorithm. The SVM classifier is also compared with several classifiers such as Naive Bayes, Random Forest, and MLP.
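The exact weighting in the proposed fitness function is not quoted in the abstract. A minimal sketch of a fitness of the stated form (reward SVM accuracy, penalize the number of selected features), using scikit-learn's SVC as the classifier and an illustrative weighting:

```python
# Sketch of a PSO fitness in the spirit of the abstract: reward SVM accuracy,
# penalize the number of selected features. Weights are illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fitness(mask: np.ndarray, X: np.ndarray, y: np.ndarray,
            w_acc: float = 0.9, w_feat: float = 0.1) -> float:
    if mask.sum() == 0:                # no features selected: worst fitness
        return 0.0
    acc = cross_val_score(SVC(kernel="rbf"),
                          X[:, mask.astype(bool)], y, cv=5).mean()
    return w_acc * acc + w_feat * (1.0 - mask.sum() / mask.size)
```

In such a setup, each particle encodes a binary feature mask; positions are thresholded to 0/1 before evaluation, and velocities are updated with the usual PSO rule using the inertia weight that the paper tunes via its diversity and tuning functions.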



Quality of Service Enhancement in Wireless Sensor Network Using Flower Pollination Algorithm
Abstract
Developing and deploying emerging applications over wireless sensor networks is a renowned and rapidly growing area of research; surveillance monitoring, military applications, and the health care industry are some examples. Generally, sensor nodes are deployed randomly and distributed within the network region. In a wireless sensor network, it is hard to increase the network lifetime and to maximize the coverage of points of interest under given constraints. A sensor node can sense environmental information only within its sensing region; hence node deployment must cover the points of interest while taking the energy of the nodes into account. To this end, we propose a Flower Pollination Algorithm (FPA) to optimize the locations of the sensor nodes and balance their residual energy. The FP algorithm adjusts the sensor node positions and schedules the node state (sleep/active) to maintain target coverage for a long period of time. The proposed FPA is evaluated in a simulation tool and the obtained results are verified. The results show that the target coverage time is increased by 30%; the performance is further evaluated by comparing the obtained results with the existing IWD and ACO algorithms. Finally, it is confirmed that the FP algorithm outperforms IWD and ACO, improving energy efficiency by 40% compared with the existing approaches.
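For readers unfamiliar with the metaheuristic, the sketch below shows one iteration of the standard Flower Pollination Algorithm (global pollination as a Levy-flight step toward the current best solution, local pollination mixing two random solutions); the paper's coverage-specific fitness and scheduling logic are not reproduced:

```python
# One iteration of the standard Flower Pollination Algorithm (Yang's scheme).
# Here a "solution" would encode candidate sensor-node coordinates.
import math
import numpy as np

def levy(dim: int, beta: float = 1.5) -> np.ndarray:
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def fpa_step(pop: np.ndarray, best: np.ndarray, p: float = 0.8) -> np.ndarray:
    new = pop.copy()
    for i in range(len(pop)):
        if np.random.rand() < p:                      # global pollination
            new[i] = pop[i] + levy(pop.shape[1]) * (best - pop[i])
        else:                                         # local pollination
            j, k = np.random.choice(len(pop), 2, replace=False)
            new[i] = pop[i] + np.random.rand() * (pop[j] - pop[k])
    return new
```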



QoSTRP: A Trusted Clustering Based Routing Protocol for Mobile Ad-Hoc Networks
Abstract
One of the methods for secured communication in a MANET is Trust Management (TM). TM calculates a trust value for every participant in the communication and assigns a predefined threshold to the trust variable, which is checked during communication. If a node's trust value is greater than the threshold, the node is selected as a trusted node; otherwise it is considered untrusted and is not permitted to communicate. Various existing works have proposed Trust Management algorithms and trusted routing protocols, but their efficiency is not satisfactory for practical deployment. Motivated by this problem, we design and develop a novel QoS-improved Trusted Routing Protocol (QoSTRP) for secure and efficient data transmission in MANETs. The efficiency of the proposed protocol is verified, and its performance is evaluated by comparing the results with other existing methods.
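The threshold rule described in the abstract amounts to a simple filter over computed trust values; a minimal sketch with an illustrative threshold (the trust computation itself is defined by the paper's TM scheme and is not shown):

```python
# Minimal sketch of the threshold rule from the abstract. How the trust value
# is computed (direct/indirect observations, decay, ...) is not shown here.
TRUST_THRESHOLD = 0.6   # illustrative value

def is_trusted(trust_value: float, threshold: float = TRUST_THRESHOLD) -> bool:
    """A node may take part in routing only if its trust exceeds the threshold."""
    return trust_value > threshold

candidates = {"n1": 0.82, "n2": 0.41, "n3": 0.67}
route_nodes = [n for n, t in candidates.items() if is_trusted(t)]   # ['n1', 'n3']
```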



Linear Discriminant Analysis Based Genetic Algorithm with Generalized Regression Neural Network – A Hybrid Expert System for Diagnosis of Diabetes
Abstract
Disease diagnosis is a particularly important application of expert systems. Nowadays, diabetes is a complex health issue, and a wide range of intelligent methods has been proposed for its early detection. The objective of this paper is to propose an expert system for better diagnosis of diabetes. The proposed framework consists of two stages: (a) a Linear Discriminant Analysis (LDA) based genetic algorithm for feature selection and (b) a Generalized Regression Neural Network (GRNN) for classification. The proposed genetic algorithm with LDA-based feature selection not only reduces the computation time and cost of disease diagnosis but also improves the classification accuracy. The performance of the method is evaluated by computing the accuracy, the confusion matrix, and the Receiver Operating Characteristic (ROC). The proposed method is compared with other existing methods to evaluate its performance and accuracy. The LDA-based Genetic Algorithm (GA) with GRNN achieves an accuracy of 80.2017% with a ROC value of 0.875.



Deep Learning Based Efficient Channel Allocation Algorithm for Next Generation Cellular Networks
Abstract
The number of mobile nodes is increasing very rapidly, so an efficient channel allocation procedure is essential for next generation cellular networks. Increasing the available spectrum is very expensive; hence it is better to utilize the existing spectrum effectively. In view of this, this paper proposes a channel allocation algorithm for next generation cellular networks based on deep learning. The system is trained to determine the number of channels that each base station can acquire, which also varies dynamically over time. Two types of calls are considered: originating calls and handoff calls. The number of channels to be used exclusively for originating calls and for handoff calls is determined using deep learning. The proposed algorithm, DLCA, is compared with STWQ—Non-LA and STWQ—LAR; the results show that it outperforms them in terms of blocking and dropping probability.



NoSQL Injection Attack Detection in Web Applications Using RESTful Service
Abstract
Despite extensive research on using web services for security purposes, finding a comprehensive solution for NoSQL injection attacks remains a major challenge. This paper presents an independent RESTful web service, organized in a layered approach, to detect NoSQL injection attacks in web applications. The proposed method, named DNIARS, compares the patterns generated from the NoSQL statement structure in the static code state and in the dynamic state; based on this comparison, DNIARS responds to the web application with the likelihood of a NoSQL injection attack. DNIARS was implemented in plain PHP and can be considered an independent framework able to respond to requests in different formats such as JSON and XML. To evaluate its performance, DNIARS was tested using the most common testing tools for RESTful web services. According to the results, DNIARS can work in real environments, with an error rate that did not exceed 1%.
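DNIARS itself is implemented in PHP and its pattern format is not given in the abstract. The Python sketch below only illustrates the underlying idea of comparing the statically expected structure of a MongoDB-style query with the structure observed at run time, flagging operators injected where scalar values are expected:

```python
# Illustrative only: compare the expected (static) structure of a MongoDB-style
# query with the structure observed at run time. Operators such as "$ne" or
# "$where" appearing where a scalar is expected suggest injection.
def structure(query: dict) -> dict:
    """Reduce a query to its structural pattern: keys plus value kinds."""
    return {k: structure(v) if isinstance(v, dict) else type(v).__name__
            for k, v in query.items()}

def looks_injected(static_query: dict, runtime_query: dict) -> bool:
    return structure(static_query) != structure(runtime_query)

safe   = {"user": "alice", "password": "secret"}
attack = {"user": "admin", "password": {"$ne": ""}}
print(looks_injected(safe, safe))     # False
print(looks_injected(safe, attack))   # True
```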



Crash Processing for Selection of Unique Defects
Abstract
Nowadays, software developers often face the following problem: there is a large number of inputs that cause the program to crash, and in practice this number is too large to be analyzed manually in a reasonable time. This paper contains an overview and analysis of existing methods for solving this problem. A new method for analyzing crashes to select unique defects is proposed. The method is based on comparison of control flow graphs (CFGs). For this purpose, a special metric is introduced: the graphs are considered similar if the metric does not exceed a certain threshold, which serves as a filtering parameter. Information about the graphs is collected dynamically at runtime through instrumentation of the program’s binary code. The method is applicable to binary executables and does not require any debugging information. Having estimated the time and effort available, developers can significantly reduce the number of crashes to be analyzed. In addition, an effective algorithm for fixing software bugs that cause crashes is proposed. The method is implemented as part of the fuzzer developed at the Institute for System Programming of the Russian Academy of Sciences (ISP RAS) and tested on a set of programs for x86-64/Linux. The test results show that the number of crashes to be analyzed can be reduced by several times.
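The paper's CFG metric is not specified in the abstract. As a rough illustration of the filtering idea (two crashes belong to the same defect if their CFGs are similar enough), the sketch below compares the sets of executed CFG edges with a Jaccard distance and a threshold:

```python
# Rough illustration (not the paper's metric): treat each crash as the set of
# CFG edges executed before the crash and call two crashes "the same defect"
# if the Jaccard distance between these edge sets is below a threshold.
def jaccard_distance(edges_a: set[tuple[int, int]],
                     edges_b: set[tuple[int, int]]) -> float:
    union = edges_a | edges_b
    if not union:
        return 0.0
    return 1.0 - len(edges_a & edges_b) / len(union)

def same_defect(edges_a, edges_b, threshold: float = 0.2) -> bool:
    return jaccard_distance(edges_a, edges_b) <= threshold

def unique_defects(crashes: list[set[tuple[int, int]]], threshold: float = 0.2):
    """Keep one representative crash per group of similar CFGs."""
    reps: list[set[tuple[int, int]]] = []
    for c in crashes:
        if not any(same_defect(c, r, threshold) for r in reps):
            reps.append(c)
    return reps
```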



OS-Agnostic Identification of Processes and Threads in the Full System Emulation for Selective Instrumentation
Abstract
Dynamic binary analysis is one of the most promising and key techniques in the analysis of programs and systems. It is usually based on dynamic binary instrumentation. The most useful instrumentation technique is whole-system instrumentation because it allows one to analyze operations that occur at the kernel level and to monitor interactions between different processes. Whole-system instrumentation makes it possible to perform a wide range of analysis tasks; however, it has certain drawbacks: instrumenting the whole system causes large overheads both in terms of the speed of the system under study and in terms of the amount of redundant data obtained for analysis, which significantly complicates the work of the analyst. A way to solve this problem is to use selective instrumentation, in which the object of instrumentation is an individual process or thread in the analyzed system. The analyst can specify the information of interest while retaining the capabilities of whole-system analysis. To implement selective instrumentation, one needs to identify the current processes, threads, or higher-level abstractions in order to determine the scope of instrumentation. In this paper, a number of available instrumentation systems and the techniques they use to obtain the information of interest are discussed, problems and shortcomings of these systems are identified, an implementation of selective instrumentation for individual processes on ARM and x86 processors is described, and a version of selective instrumentation for threads is proposed.



Comparative Analysis of Two Approaches to Static Taint Analysis
Abstract
Currently, one of the most efficient ways to detect software security flaws is taint analysis. It can be based on static code analysis, and it helps detect bugs that lead to vulnerabilities, such as code injection or leaks of private data. Two approaches to the implementation of tainted data propagation over the program intermediate representation are proposed and compared: one is based on dataflow analysis (IFDS), and the other is based on symbolic execution. In this paper, the implementation of both approaches within the existing static analyzer infrastructure for detecting bugs in C# programs is described. The approaches are compared from the viewpoint of the scope of application, quality of results, performance, and resource requirements. Since both approaches use a common infrastructure for accessing information about the program and are implemented by the same team of developers, the results of the comparison are more significant and accurate than usual, and they can be used to select the best option in the context of a specific program and task. Our experiments show that it is possible to achieve the same completeness regardless of the chosen approach. The IFDS-based implementation has higher performance than symbolic execution for detectors with a small number of tainted data sources. In the case of multiple detectors and a large number of sources, the scalability of the IFDS approach is worse than that of symbolic execution.
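The analyzers described in the paper operate on a C# intermediate representation; as a language-neutral toy illustration of what tainted data propagation means (not the paper's IFDS or symbolic engines), the sketch below propagates taint through straight-line assignments and reports when tainted data reaches a sink:

```python
# Toy illustration of taint propagation: data from a source taints every
# variable derived from it, and a warning is raised if a tainted variable
# reaches a sink. Function and variable names are hypothetical.
SOURCES = {"read_request"}        # operations producing untrusted data
SINKS = {"run_sql"}               # operations that must not receive it

# Program as (dest, op, args): dest = op(args)
program = [
    ("q",   "read_request", []),            # q is tainted (source)
    ("msg", "concat", ["'SELECT '", "q"]),  # taint propagates through concat
    (None,  "run_sql", ["msg"]),            # tainted data reaches a sink
]

tainted: set[str] = set()
for dest, op, args in program:
    if op in SINKS and any(a in tainted for a in args):
        print(f"possible injection: tainted argument passed to {op}")
    if op in SOURCES or any(a in tainted for a in args):
        if dest is not None:
            tainted.add(dest)
```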



An Approach to Reachability Determination for Static Analysis Defects with the Help of Dynamic Symbolic Execution
Abstract
Program analysis methods for error detection are conventionally divided into two groups: static analysis methods and dynamic analysis methods. In this paper, we present a combined approach that allows one to determine reachability for defects found by static program analysis techniques through applying dynamic symbolic execution to a program. This approach is an extension of our previous approach to determining the reachability of specific program instructions by using dynamic symbolic execution. The approach is sequentially applied to several points in the program: a defect source point, a defect sink point, and additional intermediate conditional jumps related to a defect under analysis. Our approach can be briefly described as follows. First, static analysis of the program executable code is carried out to gather information about execution paths that guide dynamic symbolic execution to the source point of a defect. Then, dynamic symbolic execution is performed to generate an input dataset for reaching the defect source point and the defect sink point through intermediate conditional jumps. Dynamic symbolic execution is guided by the heuristic of the minimum distance from the previous path to the next defect trace point when selecting execution paths. The distance metric is computed using an extended call graph of the program, which combines its call graph and portions of its control flow graph that include all paths leading to the defect sink point. We evaluate our approach by using several open-source command line programs from Debian Linux. The evaluation confirms that the proposed approach can be used for classification of defects found by static program analysis. However, we found some limitations that prevent deploying this approach to industrial program analyzers. Mitigating these limitations is one of the possible directions for future research.
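The distance metric over the extended call graph is described only in outline. The sketch below, under the assumption of a plain adjacency-list graph, computes shortest edge distances to a target node with reverse BFS, which is the kind of quantity such a path-selection heuristic needs:

```python
# Sketch: BFS distances over an (extended) call/control-flow graph, used to
# prefer the execution path whose current node is closest to the next defect
# trace point. Graph representation and node naming are assumptions.
from collections import deque

def distances_to(graph: dict[str, list[str]], target: str) -> dict[str, int]:
    """Shortest number of edges from every node to `target` (reverse BFS)."""
    reverse: dict[str, list[str]] = {}
    for u, succs in graph.items():
        for v in succs:
            reverse.setdefault(v, []).append(u)
    dist = {target: 0}
    queue = deque([target])
    while queue:
        v = queue.popleft()
        for u in reverse.get(v, []):
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return dist

# During path selection, the symbolic state whose node has the smallest
# distance to the next trace point (source, intermediate jump, sink) is
# explored first.
```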



Active Learning and Crowdsourcing: A Survey of Optimization Methods for Data Labeling
Abstract
High-quality annotated collections are a key element in constructing systems that use machine learning. In most cases, these collections are created through manual labeling, which is expensive and tedious for annotators. To optimize data labeling, a number of methods using active learning and crowdsourcing were proposed. This paper provides a survey of currently available approaches, discusses their combined use, and describes existing software systems designed to facilitate the data labeling process.



Dynamically Changing User Interfaces: Software Solutions Based on Automatically Collected User Information
Abstract
This paper describes a system for automated adaptation of user interfaces (the AAUI system). The system enables pseudo-identification of users, as well as building an anonymous user base and a rule base that take into account the user’s activity in web applications. A specific feature of the AAUI system is its model for identifying anonymous users of end software products and its dynamic identifier for automatic adaptation of the interface to the identified user. An analysis of systems designed to collect user information allows us to conclude that these systems provide only the collection of profile information about new users and do not enable automatic adaptation of interfaces to user needs. Currently, there are no openly available software products similar to the AAUI. Prospects for further development of the AAUI system include increasing the number of identification markers to improve user identification, integration with content management systems, and optimization of data management.



Toward Constructing a Modular Model of Distributed Intelligence
Abstract
Multi-agent social systems (MASSes) are systems of autonomous interdependent agents, each pursuing its own goals and interacting with other agents and with the environment. The dynamics of a MASS cannot be adequately modeled by methods borrowed from statistical physics because these methods do not reflect the main feature of social systems, viz., their ability to perceive, process, and use external information. This important quality of distributed (swarm) intelligence has to be directly taken into account in a correct theoretical description of social systems. However, discussion of distributed intelligence (DI) in the literature is mostly restricted to distributed tasks, information exchange, and aggregated judgment, i.e., to the “sum” or “average” of independent intellectual activities. This approach ignores the empirically well-known phenomenon of “collective insight” in a group, which is a specific manifestation of MASS DI. In this paper, the state of the art in modeling social systems and investigating intelligence per se is briefly characterized and a new modular model of intelligence is proposed. This model makes it possible to reproduce the most important result of intellectual activity, viz., the creation of new information, which is not reflected in contemporary schemes (e.g., neural networks). In the framework of the modular approach, the correspondence between individual intelligence and MASS DI is discussed and prospective directions for future research are outlined. The efficiency of DI is estimated numerically by computer simulation of a simple system of agents with variable kinematic parameters \(k_i\) that move through a pathway with obstacles. Selection of fast agents with a positive mutation of the parameters provides ca. 20% reduction in the average passing time after 200–300 generations and creates a swarm movement whereby agents follow a leader and cooperatively avoid obstacles.
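The reported figures come from the authors' own simulation, whose parameters are not reproduced here. The toy sketch below only illustrates the described scheme: agents with a kinematic speed parameter pass a course, the fastest half is selected, and the parameter mutates slightly in each generation (all constants are illustrative):

```python
# Toy illustration of the selection-with-mutation scheme from the abstract.
# Course length, population size, and mutation scale are illustrative.
import random

COURSE, POP, GENERATIONS = 100.0, 50, 300

agents = [random.uniform(0.8, 1.2) for _ in range(POP)]      # speed parameter
for gen in range(GENERATIONS):
    times = [COURSE / speed for speed in agents]
    survivors = sorted(agents, reverse=True)[:POP // 2]       # fastest half
    agents = [max(0.1, s + random.gauss(0.0, 0.02))           # mutated offspring
              for s in survivors for _ in (0, 1)]
    if gen % 100 == 0:
        print(f"gen {gen}: mean passing time {sum(times) / POP:.2f}")
```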



Analysis of Mobility Patterns for Public Transportation and Bus Stops Relocation
Abstract
Knowing the mobility patterns of citizens using public transportation is an important issue for modern smart cities. Mobility information is crucial for designing and planning an urban transportation system able to provide good service to citizens. We address two relevant problems related to public transportation systems: the analysis of the mobility patterns of passengers and the relocation of bus stops in an urban area. For the first problem, a big-data approach is applied to process large volumes of information. Several relevant metrics are computed and analyzed to characterize the mobility patterns, using data from the public transportation system in Montevideo, Uruguay. We obtain user demand and origin-destination matrices by analyzing ticket sales information and bus locations. A distributed implementation is proposed, reaching significant execution time improvements (speedup of up to 17.10 when using 24 computing resources). For the second problem, a multiobjective evolutionary algorithm is proposed to relocate bus stops in order to improve the quality of service by minimizing travel time and bus operational costs. The algorithm is evaluated over instances of the problem generated with real data from the year 2015. The experimental results show that the algorithm is able to obtain improvements of up to 16.7% and 33.9% in time and cost, respectively, compared to the situation in the year 2015.
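The abstract mentions building origin-destination matrices from ticket sales and bus locations. The record layout is not described, so the sketch below assumes hypothetical trip records in which origin and destination zones have already been resolved:

```python
# Hypothetical sketch: build an origin-destination matrix from trip records.
# Field names and zone names are assumptions; in the paper the destination is
# inferred from ticket sales and bus locations rather than recorded directly.
from collections import Counter

trips = [
    {"origin": "Centro", "destination": "Pocitos"},
    {"origin": "Centro", "destination": "Cordon"},
    {"origin": "Pocitos", "destination": "Centro"},
    {"origin": "Centro", "destination": "Pocitos"},
]

od = Counter((t["origin"], t["destination"]) for t in trips)
for (o, d), n in sorted(od.items()):
    print(f"{o} -> {d}: {n} trips")
```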



Modeling Function Domain for Curves Constructed Based on a Linear Combination of Basis Bernstein Polynomials
Abstract
The paper is devoted to designing an automated algorithm for modeling the function domain of spline curves, which is needed for developing tools for R-functional and voxel-functional construction of geometric models of complex shape. Mathematical approaches to solving this problem are discussed. The parametric dependence of a spline on the function domain is investigated. An automated algorithm for constructing the function domain of a spline curve based on a linear combination of the basis Bernstein polynomials is described.
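For reference, the standard definition of the Bernstein basis of degree n on [0, 1] and of a curve built as a linear combination of it (a Bezier curve); the paper's analysis of how the function domain depends on the spline parameters is not reproduced here:

```latex
% Bernstein basis of degree n on [0,1] and the corresponding curve
B_{i,n}(t) = \binom{n}{i}\, t^{i} (1 - t)^{\,n-i}, \qquad i = 0,\dots,n,
\qquad
\mathbf{r}(t) = \sum_{i=0}^{n} \mathbf{P}_i \, B_{i,n}(t), \quad t \in [0,1].
```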



Computer Algebra
Algorithms for Solving an Algebraic Equation
Abstract
For finding global approximate solutions to an algebraic equation in n unknowns, the Hadamard open polygon for the case n = 1 and Hadamard polyhedron for the case n = 2 are used. The solutions thus found are transformed to the coordinate space by a translation (for n = 1) and by a change of coordinates that uses the curve uniformization (for n = 2). Next, algorithms for the local solution of the algebraic equation in the vicinity of its singular (critical) point for obtaining asymptotic expansions of one-dimensional and two-dimensional branches are presented for n = 2 and n = 3. Using the Newton polygon (for n = 2), the Newton polyhedron (for n = 3), and power transformations, this problem is reduced to situations similar to those occurring in the implicit function theorem. In particular, the local analysis of solutions to the equation in three unknowns leads to the uniformization problem of a plane curve and its transformation to the coordinate axis. Then, an asymptotic expansion of a part of the surface under examination can be obtained in the vicinity of this axis. Examples of such calculations are presented.
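A standard textbook illustration of the Newton polygon step for n = 2 (not taken from the paper): for the equation below, the edge of the polygon joining the support points (0, 2) and (3, 0) selects the truncated equation, whose solution gives the leading term of the expansion of the branch at the singular point x = y = 0:

```latex
f(x,y) = y^{2} - x^{3} - x^{4} = 0, \qquad
\text{truncated equation: } y^{2} - x^{3} = 0
\;\Longrightarrow\; y = \pm x^{3/2}\,\bigl(1 + O(x)\bigr).
```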



Application of Computer Algebra to the Reconstruction of Surface from Its Photometric Images
Abstract
This paper addresses the problem of reconstructing the shape of an unknown Lambertian surface given in the 3D space by a continuously differentiable function \(z = u(x,y)\). The surface is reconstructed from its photometric images obtained by its successive illumination with three different remote light sources. Using computer algebra methods, we show that the unique solution of the problem, which exists in the domain \(\Omega \) of all three images, can be continued beyond this domain based on the solutions obtained for any pair of the three images. To disambiguate the reconstruction of the surface from its two images, we compute the corresponding value of the parameter \(\varepsilon \) at the boundary of the domain \(\Omega \). Soundness of the theoretical results is confirmed by simulating photometric images of various surfaces.
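As standard background (classical photometric stereo, not the paper's computer-algebra derivation): a Lambertian surface with albedo \(\rho\) and unit normal \(\mathbf{n}\), lit by a remote source with direction \(\mathbf{s}_i\), produces the image intensities

```latex
E_i(x,y) = \rho(x,y)\,\bigl(\mathbf{n}(x,y)\cdot\mathbf{s}_i\bigr), \quad i = 1,2,3,
\qquad
\mathbf{n} = \frac{(-u_x,\,-u_y,\,1)}{\sqrt{1 + u_x^{2} + u_y^{2}}}.
```

With three non-coplanar source directions, these equations determine \(\rho\,\mathbf{n}\), and hence \(u_x\) and \(u_y\), pointwise wherever all three images are available.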



Erratum


