Technical Papers

In this paper we propose a search technique, termed VAST, to determine an optimal path from a source node to a destination node in a densely deployed mobile ad-hoc network. We compare the proposed VAST algorithm with the metaheuristic algorithms GRASP, semi-greedy search and tabu search in terms of routing cost and algorithm execution time. The comparison results show that the proposed VAST algorithm outperforms the other algorithms and is well suited to the routing optimization problem.
This paper introduces a new method, the VRS algorithm, to generate pseudo-random numbers. Pseudo-random numbers are random numbers generated from a seed value; pseudo-random number generators are therefore also known as deterministic random number generators. Various pseudo-random number generators have been proposed in the past to generate numbers that are uniformly distributed, appear completely random, and have a large period. However, achieving a large period has been a major problem in many existing generators. Our pseudo-random number generator addresses this problem by providing a large period.
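As a point of reference (this is not the paper's VRS generator), a linear congruential generator illustrates how a deterministic PRNG's period is governed by its parameters; the Hull-Dobell conditions guarantee the maximal period:

```python
# Illustrative linear congruential generator (LCG), not the paper's VRS algorithm.
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Yield pseudo-random integers; these Numerical Recipes parameters
    satisfy the Hull-Dobell conditions, so the period is the full 2^32."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def period(a, c, m, seed=0):
    """Brute-force the period of a small LCG by walking until a state repeats."""
    seen = {}
    x = seed
    for i in range(m + 1):
        if x in seen:
            return i - seen[x]
        seen[x] = i
        x = (a * x + c) % m

# With a tiny modulus the effect is visible: a=5, c=3, m=16 satisfies
# Hull-Dobell (gcd(c, m) = 1; a-1 is divisible by every prime factor of m,
# and by 4 since 4 divides m), giving the maximal period m.
print(period(5, 3, 16))  # 16
```

Violating the conditions (e.g. a=4, c=3, m=16, where a-1 is not divisible by 2) collapses the cycle to a fraction of the state space.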
The bisection method is one of the simplest methods in numerical analysis for finding the roots of a non-linear equation. It is based on the Intermediate Value Theorem. The algorithm proposed in this paper predicts the optimal interval in which the roots of the function may lie and then applies the bisection method to converge to a root within the tolerance range defined by the user. The algorithm also computes another root of the equation if that root lies just outside the predicted interval.
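For reference, the classical bisection iteration the paper builds on can be sketched as follows (the interval-prediction step proposed in the paper is not included):

```python
import math

def bisection(f, a, b, tol=1e-9, max_iter=200):
    """Find a root of f in [a, b] by repeated halving, relying on the
    Intermediate Value Theorem: f(a) and f(b) must have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        mid = (a + b) / 2.0
        fm = f(mid)
        if fm == 0 or (b - a) / 2.0 < tol:
            return mid
        if fa * fm < 0:          # root lies in the left half
            b, fb = mid, fm
        else:                    # root lies in the right half
            a, fa = mid, fm
    return (a + b) / 2.0

# The root of x^2 - 2 in [1, 2] is sqrt(2).
root = bisection(lambda x: x * x - 2, 1.0, 2.0)
print(abs(root - math.sqrt(2)) < 1e-8)  # True
```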
This paper introduces the Cost Estimation Tool (CET), a costing system for tracking and analyzing the expenditure incurred by an organization across its departments using the step-down allocation and apportionment technique. The overall aim of the CET is to estimate the cost of all cost centers in an organization, to guide policy and the efficient management of resources for improved services. The purpose of the current effort was to upgrade the organization's accounting practices by introducing the step-down method of apportioning. Here, a case study of a government hospital is considered, where patients are provided healthcare services free of cost; it would nonetheless be valuable for the healthcare manager of such a hospital to know the actual cost incurred for patient services. The ultimate objective of costing is to arrive at the cost of every unit, service, procedure and patient-wise expenditure and to compare it against budgeted performance expectations, in order to identify problem areas that require immediate attention.
Machine learning techniques are widely used for clustering data. The K-means algorithm is one of the most widely used clustering algorithms and is easy to understand and simulate on different datasets. In this work we use K-means to cluster the yeast and iris datasets, where clustering resulted in lower accuracy and a larger number of iterations. We simulate an improved version of the K-means algorithm for clustering these datasets; the improved K-means algorithm uses a minimum spanning tree technique. An undirected graph is generated over all input data points and shortest distances are computed, which in turn yields better accuracy with fewer iterations. Both algorithms were implemented in Java, and the results obtained from them were compared and analyzed. The algorithms were run several times with different numbers of clusters; the analysis showed that the improved K-means algorithm performed better than standard K-means, and that as the number of clusters increases, so does the accuracy of the improved algorithm. We further infer from the results that at a particular value of K (the number of cluster groups) the accuracy of the improved K-means algorithm is optimal.
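For comparison purposes, the baseline Lloyd's K-means iteration (not the paper's MST-improved variant) can be sketched in a few lines; the deterministic first-k-points initialization here is an illustrative simplification:

```python
def kmeans(points, k, iters=100):
    """Plain Lloyd's K-means on 2-D points, pure Python and illustrative only.
    For simplicity the first k points seed the initial centers."""
    centers = list(points[:k])
    clusters = []
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # Update step: move each center to the mean of its cluster.
        new_centers = [(sum(p[0] for p in cl) / len(cl),
                        sum(p[1] for p in cl) / len(cl)) if cl else centers[i]
                       for i, cl in enumerate(clusters)]
        if new_centers == centers:   # converged
            break
        centers = new_centers
    return centers, clusters

# Two well-separated blobs settle into their means.
pts = [(0, 0), (10, 10), (0, 1), (1, 0), (10, 11), (11, 10)]
centers, clusters = kmeans(pts, 2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```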
Significant research is under way on droplet routing in digital microfluidic biochips (DMFBs); detecting failures and performing defect-free routing is a new paradigm. This paper proposes a heuristic approach to droplet routing that evades defective electrode regions. The proposed work tries to minimize repeated use of the same resource, thereby reducing the probability of defects or routing failures due to the residual traces of several droplets. All droplets are routed concurrently, and repeated use of the same electrode is avoided by assigning a variable to each electrode and updating its value in every iteration. Experimental studies show notable results compared to the existing literature. It is also observed that the percentage degradation in electrode usage and latest arrival time due to defect-aware routing is small and within a tolerable limit, given the importance of the defect-bypassing approach.
`Sudoku' is a popular Japanese puzzle game that trains our logical mind. The word Sudoku means `the digits must remain single'. The Sudoku problem is important because it finds numerous applications in a variety of research domains with some degree of resemblance: solving a Sudoku instance has applications in steganography, secret image sharing with necessary reversibility, SMS encryption, digital watermarking, image authentication, image encryption, and so forth. Existing Sudoku solving techniques are primarily guess-based heuristics or computation-intensive soft computing methodologies. They all work cell by cell, which makes them very time consuming. Therefore, in this paper a novel minigrid-based technique is developed to solve the Sudoku puzzle in a guess-free manner.
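The minigrid view can be illustrated by the bookkeeping a minigrid-based solver needs; the helper below (hypothetical, not the paper's procedure) lists the digits still missing from one 3x3 block:

```python
def minigrid_candidates(grid, gi, gj):
    """Digits still missing from the 3x3 minigrid at block coordinates
    (gi, gj) of a 9x9 board (0 marks an empty cell). This is only the
    minigrid-level bookkeeping, not the paper's full guess-free technique."""
    present = {grid[r][c]
               for r in range(3 * gi, 3 * gi + 3)
               for c in range(3 * gj, 3 * gj + 3)
               if grid[r][c]}
    return set(range(1, 10)) - present

# Empty 9x9 board; fill seven cells of the top-left minigrid.
grid = [[0] * 9 for _ in range(9)]
cells = [(r, c) for r in range(3) for c in range(3)]
for v, (r, c) in zip([5, 3, 4, 6, 7, 8, 9], cells):
    grid[r][c] = v
print(minigrid_candidates(grid, 0, 0))  # {1, 2}
```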
To reduce the dimensionality of a dataset, redundant and irrelevant features need to be segregated from the multidimensional dataset by applying a feature selection technique. Here, a feature selection technique for removing irrelevant features is used: correlation measures based on the concept of mutual information are adopted to calculate the degree of association between features. In this paper the authors propose a new algorithm to segregate features in high-dimensional data by visualizing the relevant features of the dataset in the form of a graph.
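The kind of mutual-information association measure adopted here can be sketched for two discrete feature columns; this empirical-frequency estimator is illustrative, not the authors' exact correlation measure:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information I(X;Y) in bits between two discrete feature
    columns, estimated from empirical frequencies."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_xy = c / n
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

# A feature identical to another carries full information (1 bit here);
# an independent one carries none.
a = [0, 0, 1, 1]
print(round(mutual_information(a, a), 3))             # 1.0
print(round(mutual_information(a, [0, 1, 0, 1]), 3))  # 0.0
```

Features whose mutual information with every other relevant feature is near zero would be the candidates for removal under such a scheme.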
IP design houses are hard-pressed by their customers to provide SystemC models of their portfolio IPs, despite already existing VHDL views. VHDL IPs can be translated to SystemC while ensuring correctness, quality and maintainability of the translated code. This paper explores optimization scenarios that affect simulation performance, resulting in up to 38% faster simulation. In addition to the plain VHDL-to-SystemC conversion, there are alternate implementations possible for a SystemC model; this paper explores these alternate scenarios to obtain 25% better simulation speed. The optimization methodologies in this paper are relevant to architects, designers, verification teams and IP design houses that need to provide high-speed simulation models, and can be used for optimizing simulation tools as well as system-level models.
Gene regulation is an intra-cellular, inter-cellular, intra-tissue or inter-tissue biochemical phenomenon in an organism in which a few genes may regulate the expression of other genes, or even their own expression. The regulation is performed through proteins, metabolites and other genetic spin-offs resulting from changes in the environment that genes experience in the cellular context. The gene regulatory network that originates from the regulation process is a potential source from which different physiological, behavioral, medicinal and disease-related issues of an organism can be uncovered. Computational inference of the network is a well-known bioinformatics task, and the easy availability of time series gene expression data has made the work easier. However, this data suffers from the curse of dimensionality, as columns (time points) are few in number compared with rows (genes). The methods proposed here take microarray time series gene expression data as input and simulate a time series with a larger number of rows at regular small intervals. The parameters of the gene regulatory network are estimated using three variants of simulated annealing, viz. Basic Simulated Annealing (BSA), Tabu Simulated Annealing (TSA) and Greedy Simulated Annealing (GSA). During parameter estimation, the main focus is on minimizing the cost between the actual and simulated time series over successive iterations. The final parameter set is used to produce the simulated time series, each row of which is the expression profile of a gene. On an available synthetic data set, the original expression profiles are compared to the expression profiles produced by the three methods. The simulated profiles show close correspondence to the original ones; GSA shows the closest correspondence, and TSA proves to be the most efficient in terms of time and number of iterations. The simulated time series may be used for GRN reconstruction or other problems.
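The BSA skeleton that the TSA and GSA variants build on can be sketched generically; the cost function, neighbor move and cooling schedule below are illustrative placeholders, not the paper's parameter-estimation setup:

```python
import math
import random

def simulated_annealing(cost, x0, neighbor, t0=1.0, cooling=0.95,
                        iters=2000, seed=0):
    """Basic simulated annealing: accept improving moves always, and
    worsening moves with Boltzmann probability exp(-delta/T), cooling T
    geometrically. Tabu and greedy variants layer extra rules on top."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy < fx or rng.random() < math.exp((fx - fy) / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy problem: minimize (x - 3)^2 starting from 0.
best, fbest = simulated_annealing(lambda x: (x - 3) ** 2, 0.0,
                                  lambda x, rng: x + rng.uniform(-0.5, 0.5))
print(fbest < 0.05)  # True
```

In the paper's setting, `x` would be the vector of GRN parameters and `cost` the discrepancy between the actual and simulated time series.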
There are many governmental, cultural, commercial and educational organizations that manage large amounts of manuscript textual information. Since Kannada is one of the official languages of South India, such organizations hold Kannada handwritten documents. Text line segmentation in such documents remains an open document analysis problem, and detection and correction of the skew angle of the segmented text lines is another important step. Most segmentation algorithms for skewed text lines in the literature today are sensitive to the degree of skew, the direction of skew, and the spacing between adjacent lines. The method proposed in this paper for text line extraction and skew correction uses a new cost function that considers both the spacing between text lines and the skew of each text line. Precisely, the problem is formulated as an energy minimization problem, so that minimizing the cost function yields a set of text lines. It is further required to efficiently correct the baseline skew and fluctuations of these text lines, so the proposed method also uses an efficient algorithm for baseline correction. It consists of normalizing the lower baseline to a horizontal line using a sliding-window approach, thus avoiding the segmentation of text lines into subparts. This approach copes with baselines which are skewed, fluctuating, or both, and differs from machine learning approaches, which need manual pixel assignments to baselines. Experimental results show that this baseline correction approach greatly improves performance.
Mobile robots have the capability to navigate in their environment, and approaches are needed for their collision-free and stable navigation. The authors present their own algorithm, implemented in the C language, to move a robot from an initial to a final position. They also compare the path length required by the robot with the model proposed by Parhi et al. in 2009.
Modular exponentiation is an important operation for cryptographic transformations in public key cryptosystems such as the Rivest-Shamir-Adleman (RSA), Diffie-Hellman and ElGamal schemes. Computing a^x mod n and a^x b^y mod n for very large x, y and n is fundamental to the efficiency of almost all public key cryptosystems and digital signature schemes. To achieve a high level of security, the word length in modular exponentiation should be significantly large. The performance of public key cryptography is primarily determined by the implementation efficiency of modular multiplication and exponentiation. As the words are usually large, and in order to optimize the time taken by these operations, it is essential to minimize the number of modular multiplications. In this paper we present efficient algorithms for computing a^x mod n and a^x b^y mod n. We propose four algorithms to evaluate modular exponentiation: Bit Forwarding (BFW) algorithms to compute a^x mod n, and two algorithms, namely Substitute and Reward (SRW) and Store and Forward (SFW), to compute a^x b^y mod n. All the proposed algorithms are efficient in terms of time and at the same time demand only minimal additional space to store the precomputed values. These algorithms are suitable for devices with low computational power and limited storage.
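As a baseline against which such algorithms are measured, the textbook square-and-multiply method for a^x mod n and Shamir's simultaneous-exponentiation trick for a^x b^y mod n (precomputing a·b so both exponents are scanned together) can be sketched as follows; these are standard techniques, not the paper's BFW/SRW/SFW algorithms:

```python
def modexp(a, x, n):
    """Right-to-left binary (square-and-multiply) exponentiation: a^x mod n."""
    result = 1
    a %= n
    while x:
        if x & 1:                      # multiply when the current bit is set
            result = result * a % n
        a = a * a % n                  # square for the next bit
        x >>= 1
    return result

def mod_double_exp(a, x, b, y, n):
    """Shamir's trick: a^x * b^y mod n in one MSB-first scan of the exponent
    bits, using the single precomputed value a*b mod n."""
    ab = a * b % n
    result = 1
    for i in range(max(x.bit_length(), y.bit_length()) - 1, -1, -1):
        result = result * result % n
        xi, yi = (x >> i) & 1, (y >> i) & 1
        if xi and yi:
            result = result * ab % n
        elif xi:
            result = result * a % n
        elif yi:
            result = result * b % n
    return result

# 561 is a Carmichael number, so a^560 = 1 (mod 561) for a coprime to 561.
print(modexp(7, 560, 561))  # 1
```

Sharing one scan across both exponents is the same kind of multiplication-count saving that the proposed double-exponentiation algorithms pursue further.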
Coordination among maritime assets is crucial for reducing task latencies and enabling the effectiveness and success of a mission. Specifically, determining when assets are dispatched to task locations, optimizing how they traverse to a task location, and deciding how much time they wait at various intermediate points along the way is a difficult problem due to the dynamic and uncertain characteristics of the mission environment. In this paper, motivated by the Navy's operational need to effectively route search assets in a dynamic mission environment, we consider a coordinated asset routing problem within a multi-objective framework that allows stopping en route. The key objective is to find routes for a set of assets, given the start and end locations, such that the total traversal time, dispatch time and wait time at each intermediate location are minimized. Given a task graph over time-dependent multi-objective (TM) risk maps, we formulate and solve a Time-dependent Multi-objective Shortest Path (TMSP) problem to determine asset routes in a multi-task scenario. We employ the compromise-solution method along with mixed integer linear programming to solve this NP-hard problem. Numerical results are provided by applying the proposed approach to an asset routing mission scenario.
Balanced graph partitioning is a type of graph partitioning problem that divides the given graph into components such that the components are of about the same size and there are few connections between them. Existing approaches partition the graph initially in a random manner, which has a very high impact on the final quality of the solution. Recently, multilevel partitioning methods have proven to be faster than other approaches. This paper proposes a multilevel hybrid algorithm for balanced graph partitioning. The graph is initially partitioned using the Balanced Big method in order to improve the initial solution quality; the quality of the obtained solution is then further improved using a local search refinement procedure. The experimental results indicate that relatively good initial partitions, when subjected to local search techniques such as tabu search and hill climbing, result in better solutions. The results also indicate that as the number of partitions increases, the proposed approach yields better solutions than those reported for existing approaches.
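The edge-cut objective that the initial partitioning and the local search refine can be stated concretely; this small helper (illustrative only, not part of the proposed algorithm) counts the edges crossing a 2-way partition:

```python
def cut_size(adj, part):
    """Edge-cut of a partition: number of edges whose endpoints lie in
    different blocks. `adj` is a symmetric adjacency dict; each undirected
    edge is counted once via the u < v filter."""
    return sum(1 for u in adj for v in adj[u] if u < v and part[u] != part[v])

# 6-node graph: two triangles joined by a single bridge edge (2-3).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(cut_size(adj, part))  # 1
```

A refinement move (e.g. in tabu search or hill climbing) would flip one vertex's block and keep the change only if it lowers this cut without violating the balance constraint.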
In this era, every person is burdened with a number of activities to be carried out. With such a busy schedule, some tasks are bound to slip, and they may be crucial. It would be easier to deal with tasks if their logical relationships could be recorded and explored by focusing exclusively on the relevant and hiding the irrelevant. Mind mapping software allows these relationships to be stored in a graphical format, with the ability to fold away or unfold details at will. This paper primarily focuses on integrating Priority and Temporal views with the Logical view in FreeMind, an open-source mind mapping tool. This is essential because every task must be associated with a deadline before which it must be accomplished. Notifications are sent to the user in priority order, based on the deadlines of the tasks to be carried out.
This paper presents an efficient technique for detecting zero-day polymorphic worms with almost zero false positives. Zero-day polymorphic worms not only exploit unknown vulnerabilities but also change their own representations on each new infection, or encrypt their payloads using a different key per infection. Thus, there are many variations in the signatures for the same worm, making fingerprinting very difficult. With their ability to propagate rapidly, these worms increasingly threaten Internet hosts and services. If zero-day worms are not detected and contained in time, they can potentially disable the Internet or wreak serious havoc, so their detection is of paramount importance.
In a large wireless sensor network, the power efficiency of a sensor node is one of the most important factors. Nowadays, WSN-based solutions are widely used and are being pervasively deployed in various applications, and long operating life with efficient energy management plays a very important role for a sensor node. In this article, sensor intelligence is merged with a low-power processor model: a single-chip sensor node has been developed and implemented on a high-performance FPGA kit. The Xilinx ISE 14.3 simulator has been used to design the processor model in VHDL. An efficient sleep-scheduling algorithm with a synchronized timer has been adopted in this design to achieve optimum power efficiency. Realization up to the RTL schematic level has been performed, and the results show a power efficiency of almost 90% compared to a commercially available microcontroller-based sensor.
People who are deaf or hard of hearing may have challenges communicating with others via spoken words and being aware of audio events in their environments. This is especially true in public places, which may not have accessible ways of communicating announcements and other audio events. In this paper, the design and evolution of a mobile sound transcription tool for the deaf and hard of hearing is elaborated. Transcriptions include dialog and descriptions of environmental sounds. A multilingual transcriber listens to the audio and sends the converted text message to the server; the server then forwards it to the user via the user's IP address. If the user is not logged in to the server, the message is stored in the server database and sent when the user logs in.
Metamorphic malware modifies the code of every new offspring by using code obfuscation techniques. Recent research has shown that metamorphic writers make use of benign dead code to thwart signature-based and hidden Markov model-based detectors; detection fails because the malware code appears statistically similar to benign programs. In order to detect complex malware generated with the hacker tool NGVCK, well known to the research community, and the intricate metamorphic worm available as benchmark data, we propose a novel approach using Linear Discriminant Analysis (LDA) to rank and synthesize the most prominent opcode bi-gram features for identifying unseen malware and benign samples. Our investigation yielded 99.7% accuracy, which suggests that the method could be employed to improve the detection rate of existing publicly available malware scanners.
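The opcode bi-gram features that LDA would rank can be extracted with simple counting; this sketch covers only feature extraction, not the ranking or classification steps:

```python
from collections import Counter

def opcode_bigrams(opcodes):
    """Count opcode bi-grams (adjacent pairs) in a disassembled sample.
    Each sample's counter becomes one feature vector, with bi-grams as
    feature names and counts (or frequencies) as values."""
    return Counter(zip(opcodes, opcodes[1:]))

# Toy opcode trace from a disassembled sample.
trace = ["mov", "push", "mov", "push", "call", "ret"]
print(opcode_bigrams(trace)[("mov", "push")])  # 2
```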
Energy-efficient protocol and algorithm design is one of the key issues in prolonging the lifetime of sensor networks. The cluster head advertisements in each virtual grid are supposed to reach the corresponding grid member nodes. With different competition radii, sensor nodes in the middle and at the edge of each virtual grid can cover the grid members, which implies that, for energy saving, each cluster head should have a variable competition range depending on its location in the grid. In this paper we therefore design a cluster head election mechanism and an algorithm to decide the transmission range of each cluster head advertisement. For inter-cluster communication, to allow uniform depletion of energy among sensor nodes, we also derive and mathematically formulate the optimal percentage of packet traffic that should be routed to neighbor and cross-level grids. The protocol is implemented in OMNeT++, and simulation results show that the proposed protocol achieves better network lifetime than LEACH and grid-based multi-hop routing protocols.
Anycast is an important mode of communication for Mobile Ad hoc Networks (MANETs) in terms of resources, robustness and efficiency for replicated service applications. Most anycast routing protocols for MANETs select unstable and congested intermediate nodes, thereby causing frequent path failures and packet losses. We propose a Node movement Stability and Congestion aware Anycast Routing scheme for MANETs (NSCAR) that employs two models: (i) a node movement stability model to identify stable nodes, and (ii) a congestion model that considers congestion-aware parameters such as channel load and buffer occupancy. These models and the Dynamic Source Routing (DSR) protocol are used in the route discovery process to select the nearest k servers, facilitating the creation of stable paths from a designated client to one of the nearest servers. A server among the k servers is selected based on route stability, channel load, hop count, and server load. The simulation results indicate that the proposed NSCAR achieves a reduction in control overhead, an improvement in packet delivery ratio (PDR) and a reduction in end-to-end delay compared to existing methods such as flooding and load-balanced service discovery.
The rapid changes in the telecommunication world and the advances in information technology have paved the way for a highly competitive market for service providers, where new services and products need to be offered frequently. This paper examines the order management aspect of communication providers and the challenges companies face in making the application as agile and flexible as possible to keep up with the pace of change in the market and the competition. After discussing the current solutions provided by software vendors in the market, the paper proposes a new hybrid solution which, when implemented, would bring rapid transformation to their OSS/BSS.
Wireless Multimedia Sensor Networks (WMSNs) are resource constrained relative to the requirements of video communication: the sensor nodes have limited bandwidth, energy, processing power and memory. Resource mapping based on user requirements is needed in such networks to offer better communication services while using optimal resources. This paper proposes a Mamdani fuzzy inference system (FIS) to map user requirements to resource demands for video/image traffic. The input fuzzy parameters are available node energy, available bandwidth, and user quality needs; the output fuzzy parameter is frames to be transmitted per second (fps). Using `fps' and the resolution of an image, the sensor node computes the bandwidth and buffer requirements. Various defuzzification methods (centroid, smallest of maximum, and mean of maximum) are used to obtain the response from the fuzzy inference system, and these are then compared.
This paper proposes a model in which the energy usage of sensor nodes in a multi-hop Wireless Sensor Network (WSN) is optimized by data aggregation, a technique that discards "unnecessary" packets so as to save bandwidth. It does so in two steps. The first uses an Exponential Weighted Moving Average (EWMA) data aggregation technique to compare the current data value with the previous values before deciding whether to forward or drop the packet. The second step optimizes the network even further by bringing readings from neighboring sensors into the equation. Some flexibility has been designed into the algorithm, allowing the precision level to be adjusted based on the user's requirements. Simulation results show how the scheme reduces the number of packets transmitted and the normalized energy consumed by the nodes.
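The first step's forward-or-drop decision can be sketched with an EWMA filter; the smoothing factor and deviation threshold below are illustrative choices, not the paper's tuned values:

```python
def ewma_filter(readings, alpha=0.3, threshold=0.5):
    """Forward a reading only if it deviates from the EWMA of past values
    by more than `threshold`; otherwise drop it as redundant. The
    threshold is the user-adjustable precision knob."""
    forwarded = []
    avg = None
    for r in readings:
        if avg is None or abs(r - avg) > threshold:
            forwarded.append(r)                      # significant change: transmit
        avg = r if avg is None else alpha * r + (1 - alpha) * avg
    return forwarded

# Stable readings are suppressed; the jump to 25.0 is forwarded (and 25.1
# too, since the lagging average has not yet caught up).
print(ewma_filter([20.0, 20.1, 20.2, 25.0, 25.1]))  # [20.0, 25.0, 25.1]
```

Raising `threshold` trades reporting precision for fewer transmissions, which is exactly the flexibility the abstract describes.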
Mass slope failures, which include movements of soil, rock and ice, cause considerable damage to natural habitats, the environment, the economy and other resources. Detection, monitoring and control are the three major issues in real-time applications, and large-scale fault detection and monitoring is one of the important applications driving the advancement of many technologies. In this paper, a landslide detection system is developed at Bidholi (village), Dehradun, India, a region with high rainfall and variable climatic behavior for most of the year. Integrating geophysical sensors into a heterogeneous wireless network helps in identifying faults, and the paper also covers the development, deployment, analysis and data retrieval of sensor information using a WSN.
BitTorrent is one of the most popular peer-to-peer (P2P) protocols for downloading large files over the Internet. Its success relies heavily upon peers contributing to the protocol by uploading partially (or fully) downloaded files to other peers. However, it is a matter of common observation that very often a peer only downloads large files and does not upload, so as to minimize bandwidth utilization at its own end. Such peers are referred to as free-riders, and the phenomenon is called free-riding. Free-riding is a major problem in BitTorrent-based peer-to-peer protocols because free-riders consume network resources (downloading files) without contributing to the network (uploading files). In this paper, we first show empirically that the presence of free-riders degrades the performance of the BitTorrent system by increasing the download time of peers in P2P networks. Thereafter, a detection-cum-punishment mechanism is proposed which detects and punishes free-riders in the P2P network. The impact of the proposed mechanism on free-riders and non-free-riders is analyzed using ns-2 based simulations. The results show that the proposed punishment mechanism improves the performance of the P2P network by decreasing the download times of non-free-rider peers and punishes free-riders by increasing their download times.
Mobile ad hoc networking enables communication in a mobile wireless network by incorporating routing functionality into the mobile nodes. In a mobile ad hoc network, nodes can form a multi-hop dynamic topology that sometimes changes rapidly and is likely composed of bandwidth-constrained, variable-capacity wireless links. Over such a network, the choice of an appropriate routing protocol that can offer efficient communication is essential. To elucidate this issue, the paper presents a simulation analysis investigating the performance of selected proactive and reactive routing protocols in a dynamic ad hoc environment under UDP traffic. The paper also examines the impact of various conditions triggered by node mobility.
This paper describes the design and implementation of an SOPC-based network-enabled voice codec unit which bridges analog speech I/O (compressed speech well within the perceived MOS) to an Ethernet network. A custom FPGA-based board is designed primarily around a low-cost Cyclone II FPGA, an AMBE vocoder chip and a single-chip Ethernet MAC+PHY. The objective is for one such unit to convert the input analog speech into digital speech samples, encode the speech using the selected AMBE vocoder mode and send the compressed bit stream, at a rate of 2.0-2.4 Kbps, out as TCP packets over the Ethernet interface. Simultaneously, the compressed bit stream of TCP packets from a similar unit is read in from the Ethernet interface and decoded back into digital speech samples. The decoded samples are converted back into analog speech via the codec, whose output is sent to the handset or the line-level output connection, depending on the mode selected. In the present work, the functionality of ordinary telephones is relied upon to generate voice and signalling information, and a custom embedded application runs over a configurable RISC processor platform, employing the Niche TCP/IP stack and the MicroC/OS-II RTOS in the same FPGA, to transfer packet-based data. Two such network-enabled voice codec units were developed that communicate directly with each other without any intermediary servers.
A wireless sensor network (WSN) consists of a large number of nodes deployed randomly or deterministically in the area of interest to sense events. One of the main constraints of a WSN is limited battery power, so efficient utilization of power is required to increase the life of the network. Various clustering algorithms have been proposed in the literature to increase the lifetime of a WSN. In all of them, data processing is done at the Cluster Head (CH), which consumes a large amount of energy, because energy consumption depends upon the length of the packet to be transmitted and the distances from nodes to CHs and from CHs to the BS. The proposed method reduces the length of the packet by processing the data at the node itself using delta modulation: the present data value is compared with the previous data value, and if the present value is greater, the output is `1', otherwise `0'. This reduces the packet size and hence the energy consumption. The algorithm is implemented for both equal-size and unequal-size clustering.
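The per-node encoding can be illustrated with one-bit delta modulation; note that classic delta modulation tracks a running estimate, a slight generalization of the abstract's present-versus-previous comparison:

```python
def delta_modulate(samples, step=1.0):
    """One-bit delta modulation: emit 1 when the sample exceeds the running
    estimate (which then rises by `step`), else 0 (estimate falls by `step`).
    Each reading thus costs one bit on the air instead of a full word."""
    bits = []
    estimate = samples[0]
    for s in samples[1:]:
        if s > estimate:
            bits.append(1)
            estimate += step
        else:
            bits.append(0)
            estimate -= step
    return bits

# A rising then falling signal becomes a 4-bit stream.
print(delta_modulate([0.0, 1.2, 2.5, 2.0, 1.0]))  # [1, 1, 0, 0]
```

The receiver reconstructs an approximation of the signal by replaying the same ±`step` updates, so only the bit stream (plus the initial sample) needs to be transmitted to the CH.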
The main purpose of this research work is to evaluate and analyze the behavior of ZigBee network topologies in wireless sensor networks. The paper investigates the impact of varying the number and speed of mobile nodes in a ZigBee network using star, tree and mesh topologies. The functionality of the star, tree and mesh topologies is examined on the basis of packet delivery ratio, throughput, number of hops, media access control delay and end-to-end delay. The results show better performance for the tree topology, which provides a better packet delivery ratio and throughput. As the number of mobile nodes in the tree topology increases, the media access control delay and end-to-end delay decrease to a small extent, but the number of hops is unaffected and remains the same.
Wireless sensor-based control has drawn the attention of many industries because of reduced cost, easy mobility, easy maintenance, power management, etc. Wireless sensor-based systems have been deployed in industry, the military and households for applications such as monitoring, maintenance and security. In this paper we discuss the use of wireless sensor technology (Bluetooth) for energy conservation, in which sensors are deployed to sense and monitor environmental conditions and decisions are taken based on the inputs from the various sensors.
In recent years, Visible Light Communication (VLC) has generated worldwide interest in the field of wireless communication because of its low cost and secure data exchange. However, VLC suffers from serious drawbacks that degrade communication performance. One of the major problems faced by any VLC system is the interference caused by ambient light noise, which deteriorates the performance of the system. In this paper we propose an AVR-based model to mitigate ambient light noise interference and discuss its effectiveness. We also discuss other difficulties of VLC systems.
Routing data packets from source to destination is the primary function of the network layer. If a single path is chosen for data transfer from source to destination, congestion and packet transfer delay increase. To minimize this problem, multi-path routing protocols are preferred, and many multipath routing techniques are available, such as SMPC, SMPC-I and SMPC-P. In multi-path routing, not all paths may have the same capacity, and there is then every possibility of congestion occurring at a router on a path. The proposed method provides a better solution for minimizing congestion by rerouting data packets over other paths of the multi-path set that are not currently being utilized. This method avoids the unnecessary dropping of packets at a congested router and improves network performance.
Various WSN applications use hierarchical routing protocols for routing sensed data to the sink. LEACH is one of the most widely used hierarchical, distributed clustering protocols in WSN. In LEACH, non-cluster-head nodes decide to join a cluster head (CH) based on the Received Signal Strength (RSS) of HELLO packets received from CHs, making the protocol vulnerable to the HELLO flood attack. A laptop-class adversary node can broadcast packets advertising itself as a cluster head with higher signal strength; all sensor nodes will then select it as their cluster head and send join packets to it, believing the adversary to be in their range, and thus the whole network will be in a state of confusion. Existing solutions for detection of the HELLO flood attack are either cryptographic, which are less suitable in terms of memory and battery power, or non-cryptographic, which involve sending test packets for detection. This increases communication overhead, as the energy required for transmission of a packet is far more than the energy required for processing and computation. Based on these facts, a non-cryptographic solution for HELLO flood attack detection is proposed in this paper in which the number of times the test packet is transmitted is greatly reduced. The simulation results show detection of adversary nodes with minimal communication overhead, as the number of test packets sent for detection is reduced from 20-35 to approximately 10-14.
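The RSS-based screening idea behind such non-cryptographic detection can be sketched as follows. This is a minimal illustrative sketch, not the authors' protocol: the threshold value, function name and the test-packet budget are all assumptions made for the example.

```python
# Hypothetical sketch: a HELLO advertisement whose RSS is far stronger than any
# sensor-class node could produce is suspect; only suspect nodes consume test
# packets, which keeps the number of test transmissions low.
SENSOR_RSS_MAX = -40.0  # dBm; assumed strongest plausible RSS from a sensor-class CH

def screen_hello(rss_dbm, test_packet_budget):
    """Return 'accept', 'test' or 'reject' for a received HELLO advertisement."""
    if rss_dbm <= SENSOR_RSS_MAX:
        return "accept"        # plausible sensor-class transmitter: join normally
    if test_packet_budget > 0:
        return "test"          # suspiciously strong: verify with a test packet
    return "reject"            # budget exhausted: treat as laptop-class adversary
```

Only HELLOs above the power threshold trigger a test packet, which is one way the test-packet count could drop as the abstract describes.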
The web browser has become one of the most accessed processes/applications in recent years. The latest website security statistics report that about 30% of vulnerability attacks happen due to information leakage by the browser application and its use by hackers to exploit the privacy of an individual. This leaked information is one of the main sources for hackers to attack an individual's PC or to make the PC part of a botnet. A software controller is proposed to track system calls invoked by the browser process. The designed prototype deals with the system calls which perform operations related to reading, writing and accessing personal and/or system information. The objective of the controller is to confine the leakage of information by a browser process.
The robotics industry has gradually replaced human effort in performing rather difficult tasks. A very pertinent aspect of an intelligent security robot is to reach its goal safely by avoiding unknown obstacles in an unknown environment. In this paper we have developed an embedded C program to design an intelligent robot which can overcome the obstacles in its way. We have used three infrared sensors to detect obstacles via infrared communication. The infrared transmitter sends out infrared radiation in a given direction, which bounces back on encountering the surface of an object and is thereafter picked up by the infrared receiver. The authors have applied a multi-sensor integration technique to sense obstacles using an LED based infrared transmitter and receiver module integrated with an 8051 microcontroller, which permits collision-free navigation of the robot.
Wireless communication has undergone a revolution due to advancements in technology. For each new user or application to become part of a communication network, the preliminary requirement is the allocation of a frequency spectrum band. This frequency band is a limited resource, and it is impossible to expand its boundaries. The need, therefore, is to employ intelligent, adaptive and reconfigurable communication systems which can investigate the requirements of the end user and assign the requisite resources in an adaptive, autonomic and opportunistic cognitive radio environment, in contrast to traditional communication systems which allocate a fixed amount of resource to the user. Cognitive Radio (CR) technology has emerged from software defined radio, wherein the key parameters of interest are frequency, power and the modulation technique adopted. The role of cognitive radio is to alter these parameters under ubiquitous situations. Spectrum sensing is an important task in determining the availability of vacant channels to be utilised by secondary users without posing any harmful interference to primary users. In multicarrier communication using digital signal processing techniques, Filter Bank Multi Carrier has an edge over other technologies in terms of bandwidth and spectral efficiency. The present paper deals with a multi-rate FIR decimation and interpolation filter approach for the physical layer of cognitive radio in an AWGN channel environment.
Wireless sensor networks are designed to implement smart network applications or networks for emergency solutions, where human interaction is not possible. The nodes in a wireless sensor network have to self-organize according to users' requirements by monitoring the environment. As the sensor nodes are deployed in inaccessible locations for a particular mission, it is difficult to exchange the nodes or recharge their batteries. Hence, the important issues are to design the sensor network for maximum network lifetime and for low-power operation of the nodes. The proposal is to select the cluster head intelligently using auction data of a node, i.e. its local battery power, topology strength and external battery support. Network lifetime is the central focus of this paper, which explores intelligent selection of the cluster head using an auction based approach. Multi-objective parameters are considered, and the problem is solved using a genetic algorithm of the evolutionary approach.
In a two-tier wireless sensor network (WSN), relay nodes act as cluster heads for data aggregation and dissemination to the base station. It is very crucial and difficult to find the proper positions where the relay nodes can be placed so that the WSN is fully covered and connected. In this paper, we propose an algorithm for placing a minimum number of relay nodes that provides full coverage and connectivity of the WSN under the constraint of minimizing the overall communication cost. The algorithm is based on a spiral sequence generated for arbitrarily deployed sensor nodes. The simulation results demonstrate the effectiveness of the algorithm.
Line surveillance is a coverage and connectivity problem in wireless sensor networks: monitoring a boundary line against an intruder attempting to cross the line of control. However, when sensors are deployed along the line of control, for example air-dropped along a given line, they are distributed along the line with random offsets due to wind and other environmental factors. In our study, the sensors are deployed along the line with different deployment distributions and their coverage is compared. The study is further extended to connectivity, and duty cycling between the nodes is incorporated to extend the lifetime of the network. Our studies provide a good understanding of coverage and connectivity problems in sensor networks.
Next Generation Networks (NGN) are the future of communication and will provide each and every service over the same network. The network will only act as a bit highway or a packet highway, forwarding packets of every kind over the Internet Protocol (IP). NGN, already in the implementation phase, is not a new network but focuses on the replacement of public switched telephone networks (PSTN) first and cable TV networks later, as we see IPTV services already beginning to be deployed. Among the many issues faced by NGN is quality of service (QoS), as with any IP based network; it is measured in terms of network latency, throughput, packet delay variation, packet loss, etc. Several techniques try to solve this issue in a best-effort environment. Multi Protocol Label Switching (MPLS) was found to be the technology which solves the problem better than others, primarily due to its traffic engineering capabilities. However, faced with the limitation of non-differentiation of services, DiffServ support for MPLS was only natural, as there are certain resemblances between the two, the most important being that traffic classification in both DiffServ and MPLS takes place at the ingress of their respective domains. Using this combination we get more control in providing quality of service where some applications are to be given priority over others. This is demonstrated by the simulation results given towards the end of this paper.
This paper proposes the design of a wireless sensor network using MiWi wireless modules and the Microchip PIC18F97J60 microcontroller under the LabVIEW 8.5 platform, with a series of temperature sensors implemented on the sensor nodes. The system was tested in a laboratory environment using temperature sensors, which can be replaced with any 0 to 3.3 volt analog sensor. Both test and simulation reports are presented, and they give good physical insight.
Data handling plays a critical role in providing an efficient working environment for any organization or system. Tracking related services are used by various persons and organizations to facilitate the day-to-day working of business. Continuous work and research have been done by service providers and researchers to provide exact and efficient information to users of vehicle tracking systems. It is a well known fact that information can be generated only from related and valid data. The need is to handle the received data carefully so that users get relevant and exact information. This paper presents an approach to data handling and vehicle tracking using SPSS (a statistical tool). The overall scenario, starting from the data received through a GPS device to tracking the vehicle's position and movement through SPSS, is reflected in this paper.
Digital databases serve as the vehicles for compiling, disseminating and utilizing all forms of information that are pivotal for societal development. A major challenge that needs to be tackled is recovering crucial information that may be lost due to malicious attacks on database integrity. In the domain of digital watermarking, past research has focused on robust watermarking for establishing database ownership and fragile watermarking for tamper detection. In this paper, we propose a new technique for multiple watermarking of relational databases that provides a unified solution to two major security concerns: ownership identification and information recovery. In order to resolve ownership conflicts, a secure watermark is embedded using a secret key known only to the database owner. Another watermark encapsulates granular information on user-specified crucial attributes in such a manner that perturbed or lost data can conveniently be regenerated later. Theoretical analysis shows that the probability of successful regeneration of tampered/lost data improves dramatically as we increase the number of candidate attributes for embedding the watermark. We experimentally verify that the proposed technique is robust enough to extract the watermark accurately even after 100% tuple addition or alteration and after 98% tuple deletion.
Due to advancements in communication technologies, mobile ad hoc networks increase the ability of mobile nodes to communicate in an ad hoc manner. Mobile ad hoc networks do not use any predefined infrastructure during communication, so all the mobile nodes that want to communicate with each other immediately form the topology and initiate requests to send or receive data packets. From a security perspective, communication via wireless links makes mobile ad hoc networks more vulnerable to attacks because anyone can join or leave the network at any time. In particular, one of the most common attacks in mobile ad hoc networks is the packet dropping attack by malicious node(s). This paper develops an anomaly based fuzzy intrusion detection system to detect the packet dropping attack in mobile ad hoc networks; the proposed solution also saves the resources of mobile nodes by removing the malicious nodes. For implementation, the QualNet 6.1 simulator and a Sugeno-type fuzzy inference system are used to build the fuzzy rule base for analyzing the results. The simulation results show that the proposed system is capable of detecting the packet dropping attack with a high detection rate and a low false positive rate at each level (low, medium and high) of mobile node speed.
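A zero-order Sugeno-type inference of the kind mentioned above can be sketched as follows. The input features (drop ratio, forwarding delay), the triangular membership functions and the rule outputs here are purely illustrative assumptions, not the authors' rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def maliciousness(drop_ratio, forward_delay):
    """Zero-order Sugeno inference: each rule fires with the min of its input
    memberships, and the crisp output is the firing-strength-weighted average
    of constant rule outputs (hypothetical values)."""
    rules = [
        # (firing strength, constant output)
        (min(tri(drop_ratio, 0.0, 0.0, 0.3), tri(forward_delay, 0.0, 0.0, 0.5)), 0.1),  # both low -> benign
        (min(tri(drop_ratio, 0.2, 0.5, 0.8), tri(forward_delay, 0.3, 0.6, 1.0)), 0.5),  # medium -> suspicious
        (min(tri(drop_ratio, 0.6, 1.0, 1.4), tri(forward_delay, 0.6, 1.0, 1.4)), 0.9),  # both high -> malicious
    ]
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

A node whose maliciousness score exceeds some threshold would then be flagged as a packet-dropping node and removed from routes.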
Multi-input multi-output (MIMO) systems along with the orthogonal frequency division multiplexing (OFDM) technique play a crucial role in 4G based wireless networks. The inclusion of MIMO technology enables new types of links such as downlink multi-user communication. Long Term Evolution (LTE) systems are based on multi-user MIMO technologies, which have flexible bandwidths. In this paper, we derive closed form expressions of the bit error rate (BER) for multi-user LTE and LTE-Advanced (LTE-A) systems. Moment generating function (MGF) and pairwise error probability (PEP) based approaches are used to derive these BER expressions. Theoretical results for the BER are validated using simulation results for different numbers of users. Results are plotted on the basis of various features such as coordinated multipoint (CoMP) transmission and reception, multimedia broadcast and multicast services (MBMS) and bandwidth allocation in LTE and LTE-A systems.
IEEE 802.11e is standardized to overcome the limitations of IEEE 802.11b. 802.11e differentiates among Access Categories (AC) to provide Quality of Service (QoS). In conventional 802.11e, the random waiting time, or backoff number, is selected from a contention window with every backoff number equally likely to be selected. In the proposed paper, the random waiting time is selected so that it follows a binomial distribution, making the selection probabilities non-uniform. The performance of voice traffic is studied under varying network load and varying packet size. The scenario is simulated in C with two of the four different services, voice services and best effort services, and the results show an improvement in network performance in terms of throughput and delay.
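The difference between the conventional uniform draw and the proposed binomially distributed draw can be sketched in a few lines. This is an illustrative Python sketch (the paper's simulation is in C), and the success probability p=0.5 is an assumption.

```python
import random

def uniform_backoff(cw):
    """Conventional 802.11e: every slot in [0, cw] is equally likely."""
    return random.randint(0, cw)

def binomial_backoff(cw, p=0.5):
    """Proposed scheme (sketch): draw the backoff as a Binomial(cw, p) sample,
    i.e. the sum of cw Bernoulli(p) trials, so mid-window slots are far more
    likely than the extremes."""
    return sum(1 for _ in range(cw) if random.random() < p)
```

With p=0.5 the binomial draw concentrates around cw/2, which reduces the chance of two stations picking the same very small backoff and colliding.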
Communication in a mobile ad hoc network is done over a shared wireless channel with no Central Authority (CA) to monitor it. The nodes in the network are themselves responsible for maintaining the integrity and secrecy of data. To attain the goal of trusted communication in a MANET (Mobile Ad hoc Network), many approaches using key management have been implemented. This work proposes a composite identity and trust based model (CIDT) which depends on the public key, physical identity and trust of a node, and which helps in secure data transfer over wireless channels. CIDT is a modified DSR routing protocol for achieving security. The trust factor of a node, along with its key pair and identity, is used to authenticate the node in the network. The experience based trust factor (TF) of a node is used to decide its authenticity. A valid certificate is generated for an authentic node to carry out communication in the network. The proposed method works well for the self-certification scheme of a node in the network.
With increasing spectrum scarcity due to the growth in wireless devices, and the limited availability of spectrum for licensed users only, the need for secondary access by unlicensed users is increasing. Cognitive radio helps in this situation, because all that is needed is a technique that can efficiently detect the empty spaces and provide them to secondary devices without causing any interference to the primary (licensed) users. Spectrum sensing is the foremost function of a cognitive radio, which senses the environment for white spaces. Various techniques have been introduced in the spectrum sensing literature and are still under research. In this paper, we study one of the most widely used techniques, energy detection spectrum sensing. It is known that when signals travel in the wireless medium via various channels, they undergo several impairments caused by the different channels, such as additive white Gaussian noise and Rayleigh fading. Here, an attempt is made to assess the energy detection technique over these two wireless channels.
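The core of energy detection is simple: average the squared received samples and compare against a threshold. The sketch below illustrates this for a unit-variance AWGN channel; the threshold, sample count and 0 dB SNR signal model are illustrative assumptions, not values from the paper.

```python
import random

def energy_detect(samples, threshold):
    """Decide 'primary user present' if the average sample energy
    exceeds the detection threshold."""
    energy = sum(x * x for x in samples) / len(samples)
    return energy > threshold

# Unit-variance AWGN assumed; noise-only vs. signal-plus-noise at 0 dB SNR.
random.seed(1)
noise = [random.gauss(0, 1) for _ in range(4000)]
signal = [1.0 + random.gauss(0, 1) for _ in range(4000)]
```

Under noise only the average energy sits near the noise variance (here 1.0), while with a primary signal present it rises toward signal-plus-noise power (here about 2.0), so a threshold between the two separates the hypotheses; fading such as Rayleigh shrinks that gap, which is what degrades detection.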
Wireless Sensor Networks (WSN) have become popular in industry for the measurement of process parameters like temperature, vibration and humidity. For the past few years, research and development efforts have increased to implement WSN technology in the nuclear industry as well. For this, wireless hardware and software must have a record of reliable performance along with verification and validation testing. In order to prove the robustness of WSN technology and to gain enough experience before deploying WSN in a nuclear reactor environment, several hardware and software developments and experimental WSN deployments have been carried out at the Indira Gandhi Centre for Atomic Research (IGCAR). Prior to the WSN deployments, experiments were conducted in the Computer Division by establishing a test network to analyze various performance metrics. The test network was established with nodes communicating using the ZigBee standard, and the performance results, specific to the configured transceiver, have been analyzed and used in various experimental deployments. This paper describes the test network setup and the performance analysis of the network.
Anomaly detection is one of the important challenges in network security today. We present a novel hybrid technique, called G-LDA, to identify anomalies in network traffic. The G-LDA process integrates Latent Dirichlet Allocation and a genetic algorithm. Furthermore, feature selection plays an important role in identifying the subset of attributes for determining anomalous packets. The proposed method is evaluated by carrying out experiments on the KDDCUP'99 dataset. The experimental results reveal that the hybrid technique has better accuracy for detecting known and unknown attacks and a low false positive rate.
Wireless Body Area Networks (WBANs) can use different types of medium access, namely CSMA, scheduled access, polling, or a combination of these techniques. These access methods are specified in the IEEE 802.15.6 standard for WBAN. Most of the studies done so far on medium access methods in WBAN have been related to CSMA and scheduled access. Polling is also specified in the standard as a medium access method, with the method of implementation left to the user. In this paper, a discrete time polling access method for WBAN is given and the performance of the protocol is studied for certain categories of medical applications. Polling as a medium access method is advantageous in the case of intermittent data in comparison to continuous data. It is shown that the lifetimes of medical devices depend on the data rates of the application and the polling scheme used when an adaptive sleep schedule is employed. Simulations are done to study the behaviour of the protocol with respect to important QoS parameters like the data rate and latency requirements of the application.
Wireless sensor networks are useful in a broad range of applications such as natural disaster relief, military operations, and environmental and health monitoring. Coverage is one of the fundamental problems and an active research area in wireless sensor networks. A WSN consists of low cost, low power, small size and multifunction sensor nodes, and its critical aspect is energy conservation. In a power constrained WSN, the scheduling of sensors must be done effectively and efficiently so as to maximize network lifetime. In this paper we give an introduction to WSN and one of its fundamental problems, the target coverage problem, together with the energy constraint. The target coverage problem has been proven to be NP-complete by many researchers. We propose a new energy-efficient heuristic for the target coverage problem in wireless sensor networks to maximize the total network lifetime.
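Because target coverage is NP-complete, heuristics of this kind typically build each monitoring round greedily. The sketch below shows one common greedy pattern (not the paper's specific heuristic): repeatedly activate the sensor that covers the most still-uncovered targets.

```python
def greedy_cover_schedule(targets, sensors):
    """Greedy sketch for one monitoring round: 'sensors' maps a sensor id to the
    set of targets it can observe. Repeatedly pick the sensor covering the most
    uncovered targets until every target is covered; other sensors can sleep."""
    uncovered, chosen = set(targets), []
    while uncovered:
        best = max(sensors, key=lambda s: len(uncovered & sensors[s]))
        if not uncovered & sensors[best]:
            raise ValueError("targets not coverable by the given sensors")
        chosen.append(best)
        uncovered -= sensors[best]
    return chosen
```

An energy-aware heuristic would additionally weight the choice by residual battery, so that heavily used sensors are rotated out and network lifetime is extended.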
Theft or loss of a mobile device can be an information security risk as it can result in loss of confidential personal data. Traditional cryptographic algorithms are not suitable for resource constrained handheld devices. In this paper, we have developed an efficient and user friendly tool called "NCRYPT" on the Android platform. The "NCRYPT" application is used to secure data at rest on Android, making it inaccessible to unauthorized users. It is based on a lightweight encryption scheme, Hummingbird-2. The application provides secure storage by making use of password based authentication, so that an adversary cannot access the confidential data stored on the mobile device. The cryptographic key is derived through the password based key generation method PBKDF2 from the standard SUN JCE cryptographic provider. Various tools for encryption are available in the market, based on AES or DES encryption schemes. The reported tool is based on Hummingbird-2 and is faster than most of the other existing schemes. It is also resistant to most of the attacks applicable to block and stream ciphers. Hummingbird-2 has been coded in C and embedded in the Android platform with the help of JNI (Java Native Interface) for faster execution. The application provides the choice of encrypting the entire data on the SD card or selective files on the smartphone, and protects personal or confidential information available on such devices.
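The PBKDF2 step described above can be illustrated with a short sketch. The paper uses the SUN JCE provider in Java; this equivalent Python sketch uses the standard library's PBKDF2-HMAC, and the salt size, iteration count and SHA-1 PRF are illustrative assumptions. The 16-byte output matches Hummingbird-2's 128-bit key size.

```python
import hashlib
import os

def derive_key(password, salt=None, iterations=10_000):
    """Derive a 128-bit cipher key from a password via PBKDF2-HMAC-SHA1.
    Returns (key, salt); the salt must be stored alongside the ciphertext
    so the same key can be re-derived at decryption time."""
    salt = salt if salt is not None else os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha1", password.encode("utf-8"),
                              salt, iterations, dklen=16)
    return key, salt
```

The iteration count deliberately slows down each guess, which is what makes password based encryption resistant to brute force on a stolen device.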
Today the Internet is also a medium for sharing an immeasurable amount of information in widespread Peer to Peer (P2P) environments. Various application domains such as file sharing, distributed computing and e-community based applications have adopted P2P technology as the underlying network structure. The fairly open structure of P2P network applications also leaves peers exposed: interaction with an unfamiliar peer in the absence of a trusted third party makes them vulnerable to potential attacks. To enable reliable communication among peers, trust and reputation mechanisms came into existence. The malicious behavior of peers within the network makes reputation systems themselves vulnerable to attacks, and malicious peers often collude to achieve a collective objective. This paper reviews existing collusion attacks and proposes a reactive defense mechanism against such attacks. The proposed mechanism detects collusion based on the underlying trust and reputation knowledge, and provides a reduction mechanism to penalize colluding peers.
To improve network performance in multi-hop wireless networks, cooperative routing is widely used nowadays. We mainly study lifetime-maximizing broadcast tree generation for a given non-cooperative broadcast tree under the cooperative routing technique in energy constrained wireless networks. The work presented in this paper identifies energy inefficient directed edges and replaces them with directed, energy efficient cooperative paths. The energy consumption of a node in the broadcast tree comprises (1) point-to-point communication, or (2) point-to-multipoint communication, or (3) the sum of point-to-point and cooperative communication, or (4) the sum of point-to-multipoint and cooperative communication. Simulation results show that our method improves the network lifetime compared with the popular MST based broadcast tree.
This paper addresses the problem of congestion control and queue management in multimedia wireless LANs with quality of service guarantees. The proposed model provides differentiated QoS for different multimedia traffic, which is still an open challenge. The mechanism reduces the packet loss rate in wireless networks by calculating the Forward Error Correction (FEC) redundancy rate at the access point, which reduces the time required for FEC rate calculation at the sender. The number of redundant packets is adaptively adjusted by considering both the network traffic load and the wireless channel condition; thus the possibility of congestion due to an excessive number of redundant packets is eliminated. Multimedia applications such as learning-on-demand and video conferencing need special attention and require differentiated QoS. The proposed work therefore prioritizes the incoming traffic to give preference to higher priority packets using a push-out policy, which achieves considerable improvement in video quality at the receiver. A qualitative analysis is presented which shows that this work is more efficient, with the push-out scheme further improving performance by supporting higher priority multimedia flows.
In cognitive radio (CR) networks, spectrum sensing is an important issue. Cooperative spectrum sensing improves the detection probability. The accuracy of the decision about the presence of a primary user (PU) depends on the sensing time and the number of CR users involved in cooperation. A CR network can be efficient if it performs the detection process with minimum error probability while maximizing its overall throughput. We have investigated the optimal number of CRs required in cooperation to minimize the total error. The throughput of the CR network has been investigated with respect to sensing time, and the performance of the network has been investigated in terms of maximum throughput for the optimal number of CR users.
Cognitive nodes which are far away from the primary user (PU) may not be able to detect the PU due to severe channel fading. To improve the efficiency of spectrum sensing, we propose a cooperative communication scheme based on cognitive relaying. Cognitive relay nodes sense the activity of the PU and forward the received data to the cognitive radio (CR) which is far away from the PU. In this condition, the probability of detection increases, which in turn reduces the bit error rate (BER). Employing a number of relay nodes reduces the sensing time and increases the throughput of the system.
With the increasing deployment of wireless sensor networks (WSN), there has been a demand for power efficient, reliable, wireless-network-based distributed computing (WDC) systems. The fading nature of the wireless channel offers several challenges to WDC compared with traditional distributed computing systems. Orthogonal Frequency Division Multiple Access (OFDMA) based wireless communication systems offer several advantages and are increasingly replacing single carrier communication systems. In this paper we propose resource allocation for an OFDMA based wireless distributed computing system. The performance of the proposed system is analyzed using computer simulation.
The evolution of Software Defined Networking (SDN) and network virtualization has changed the future networking paradigm. Network virtualization is the key element in cloud-aware networks. OpenFlow allows the separation of the control plane from the forwarding plane, which provides the flexibility of dynamic network programming. Open vSwitch is an OpenFlow based open source switch implementation which is used as a virtual switch in virtualized environments. The OpenFlow specifications target Layer 2 and Layer 3 functionality. The latest networking shift is to enable the switch with L4-L7 services like load balancers, proxies, firewalls, IPsec, etc., which would make middleboxes redundant in networking deployments. In this work, we propose a methodology to extend the widely used Open vSwitch into an L4-L7 service aware OpenFlow switch.
The use of multimedia video and audio applications on mobile devices is increasing day by day. The continuity of these applications may be hampered by improper session rates during transmission. In this study we survey various papers on session rate prediction for streaming media using network traffic prediction methods. Bandwidth estimation is also carried out for the wireless network, which plays a significant role in predicting the session rates. Our proposed session rate expression helps in understanding the significance of predicting the session rate for streaming media in mobile wireless networks. We study various proposals in this regard, and a state-of-the-art analytical analysis is presented, followed by our notations.
VoIP applications are becoming popular these days, and a lot of Internet traffic is being generated by them. Detection of VoIP traffic is becoming important because of QoS issues and security concerns. A VoIP client typically opens a number of network connections: between VoIP client and VoIP client, and between VoIP client and VoIP server. In the case of peer-to-peer VoIP applications like the Skype network, connections may be between client and client, client and Super Node, client and login server, or Super Node and Super Node. Typically, VoIP media traffic is carried over UDP unless firewalls block UDP, in which case media and signalling traffic are carried over TCP. Many VoIP applications use RTP to carry media traffic; notable examples include GTalk, Google+ Hangouts, Asterisk based VoIP and Apple's FaceTime. Skype, on the other hand, uses a proprietary protocol based on a P2P architecture. It uses encryption for end-to-end communications and adopts obfuscation and anti-reverse-engineering techniques to prevent reverse engineering of the Skype protocol. This makes the detection of Skype flows a challenging task. Although Skype encrypts all communications, a portion of the Skype payload header known as the Start of Message (SoM) is left unencrypted. In this paper, we develop a method for the detection of VoIP flows in UDP media streams. Our detection method relies on the signalling traffic generated by VoIP applications and heuristics based on the information contained in the Skype SoM and RTP/RTCP headers.
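An RTP-header heuristic of the kind mentioned above can be sketched in a few lines. This sketch only checks the public RTP fixed-header fields (RFC 3550); it is an illustration of the general idea, not the paper's full detection method, and the RTCP payload-type exclusion range shown is the commonly used 72-76 window.

```python
def looks_like_rtp(payload):
    """Heuristic: a UDP payload could be RTP media if it is at least the
    12-byte fixed header long, its version field (top 2 bits of byte 0)
    is 2, and its payload type is not in the range that would collide
    with RTCP packet types (72-76)."""
    if len(payload) < 12:
        return False
    version = payload[0] >> 6
    payload_type = payload[1] & 0x7F
    return version == 2 and not (72 <= payload_type <= 76)
```

In practice such a per-packet check is combined with flow-level evidence (consistent SSRC, monotonically increasing sequence numbers, and the associated signalling traffic) to keep false positives low.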
In recent years, wireless communication technology has reduced the distance between people and hence become a significant part of our lives. Two such technologies are WiFi (IEEE 802.11) and WiMAX (IEEE 802.16), where the latter is a long range system covering many kilometers, whereas the former is a synonym for WLAN, providing coverage over only short ranges. This work describes the implementation of a framework in which a multi-hop, ad-hoc network is deployed with hybrid nodes to enhance network throughput. The received data traffic is split between the WiFi and WiMAX radios on the basis of a statically set split coefficient value. The routing algorithm implemented in this paper is the bee-hive algorithm, a multi-path routing algorithm inspired by the social behavior of swarms of bees. It is a dynamic, robust and flexible yet simple algorithm which can prove helpful for optimal management of available network resources. In this paper, we split data traffic over two radio channels to achieve enhanced performance and reduced delay.
This paper presents a technique to improve anti-theft protection for Android based mobile phones by using services like MMS instead of SMS. As the use of smartphones, tablets and phablets based on the Android operating system increases, many anti-theft scenarios have already been proposed and much anti-theft software has been developed, but most of this software is not freely available, and it is difficult to identify the thief using it (e.g. GPS tracking). We put forward a new scheme, which enhances the present scenario, based on technologies like multimedia messages. The scenario proposed in this work depends entirely on the hardware of the smartphone, such as the camera (front and back) and support for multimedia messages. Once this software is installed, it works in the background, stores the current SIM number in a variable and keeps checking continuously for a SIM change. Whenever the SIM is changed, the software takes snapshots and records a video in the background, i.e. without the user's permission, and then sends an MMS with the snapshots to an alternate mobile number and an email id provided during installation. The enviable advantage of this software is that it is very easy to configure and keeps running in the background without interrupting the user. To some extent it helps the owner identify the thief.
In many real-world environments, a wireless sensor node may have to gather different types of data obtained from a variety of sensors. Each type of data may have a different priority for processing. Often, real-time traffic is among the data to be handled by the sensor node and to be successfully transmitted to the sink node. Therefore, rate control in wireless sensor networks (WSNs) plays a pivotal role. In this paper, we present a priority-based rate control algorithm to take care of the different data types. The input traffic consists of real-time as well as non-real-time traffic data. The performance of the algorithm has been found to be superior to the algorithm proposed by Yaghmaee et al. with respect to throughput, delay, and loss.
A Wireless Sensor Network (WSN) consists of a large number of tiny devices called sensor nodes, which are usually deployed randomly over a wide area in order to sense and monitor parameters of various physical phenomena, including environmental conditions, at various locations. The WSN nodes communicate with each other. WSN devices have various resource constraints such as limited memory, low clock speed, finite battery energy, and limited computational power. It may not be feasible to replace the batteries in the WSN nodes. As all the nodes are battery operated, it is necessary to conserve the limited battery energy so that the lifetime of the network can be extended. Network lifetime, energy efficiency, load balancing and, moreover, scalability are some key requirements of WSN applications. This work presents a multi-level hierarchical routing protocol based on the LEACH protocol, which improves both the energy efficiency and the lifetime of the network. Two-level LEACH (TL-LEACH), Three-level LEACH (3L-LEACH) and Four-level LEACH (4L-LEACH) have been presented. The NS-3 simulation platform has been used to carry out performance analysis of these hierarchical routing protocols. The analysis shows that the hierarchical routing protocols TL-LEACH, 3L-LEACH and 4L-LEACH fare better than the LEACH protocol.
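For context, the cluster-head election that LEACH and its multi-level variants build on uses the standard rotation threshold T(n) = p / (1 - p (r mod 1/p)); a minimal sketch:

```python
def leach_threshold(p, r, eligible=True):
    """Standard LEACH cluster-head election threshold for round r, where p
    is the desired fraction of cluster heads. Nodes that already served as
    head in the current epoch (not eligible) get threshold 0."""
    if not eligible:
        return 0.0
    return p / (1 - p * (r % round(1.0 / p)))

def elects_itself(p, r, rng, eligible=True):
    # A node becomes cluster head when its random draw falls below T(n).
    return rng.random() < leach_threshold(p, r, eligible)
```

With p = 0.05 the threshold starts at 0.05 in round 0 and rises to 1.0 in round 19, guaranteeing every node serves as head once per 20-round epoch.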
Mobile Ad Hoc Networks (MANETs) are self-configuring networks of mobile nodes connected by wireless links, in which each mobile node works as a host as well as a router. With the growth and proliferation of these devices in every aspect of society, the need for such devices to communicate in a seamless manner is becoming increasingly essential. Applications supported by MANETs have stringent Quality of Service (QoS) requirements, and to support these QoS parameters MANETs need efficient routing protocols. Most reactive routing protocols, such as AODV, provide a single route for packet delivery. However, when that single route fails, the performance of various QoS parameters declines. Providing a single backup route does not solve the problem completely, as the backup route may also fail, while providing multiple backup routes may flood the network with multiple packets. Hence, an efficient routing protocol is required to solve this problem. This paper proposes an AODV routing protocol with an nth backup route (AODV nthBR) that provides the source node with more than one backup route in case of a link failure. The proposed scheme results in better throughput, lower end-to-end delay and improved lifetime of devices.
This paper presents a mobile health monitoring scheme based on femtocells and mobile cloud computing. In this scheme, the health information of each user is captured by sensors and sent to the corresponding mobile device. From the mobile device the health data are transferred to the femtocell under which the mobile device is registered. In the femtocell it is verified whether the user's health is normal, using a database stored inside the femtocell. If any abnormality is detected, the data are sent to the cloud. The health data are securely stored on the cloud and accessed by the health centre. Based on the health data, the corresponding health centre takes proper action to cure the patient. The monetary cost required to access the data on the cloud through the proposed scheme is calculated. Simulation results show that using a femtocell achieves approximately 28-70% and 30-75% reduction in the cost of accessing medium and large amounts of data on the cloud, respectively, compared with using a macrocell, microcell or picocell base station.
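The femtocell-side normality check can be sketched as a simple range lookup against the local database (the vital signs and thresholds below are hypothetical, not from the paper):

```python
# Hypothetical normal ranges standing in for the femtocell's local database.
NORMAL_RANGES = {"heart_rate": (50, 110), "spo2": (94, 100)}

def triage(reading):
    """Flag a reading for the cloud only when some vital sign falls
    outside its normal range; otherwise keep it local to the femtocell."""
    abnormal = [name for name, (lo, hi) in NORMAL_RANGES.items()
                if name in reading and not lo <= reading[name] <= hi]
    return ("send_to_cloud", abnormal) if abnormal else ("local_only", [])
```

Filtering normal readings at the femtocell is what keeps cloud-access cost low in the scheme.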
A MANET is a wireless network in which nodes can act as sender/receiver or even as intermediaries such as routers. Nodes in a MANET may misbehave with an intent to conserve resources, because of the limited resources available to each node. This has a great impact on the performance of the entire network. The proposed system organizes a MANET into zones and clusters, with a Static Agent as a central node and a Zonal Agent for each zone. It is an improvement over the Mobile Agent based architecture, made possible by introducing Zonal Agents. Thus, the system is able to detect selfish and malicious nodes with a reduced amount of information exchange between the nodes.
Cognitive Radio (CR) is an emerging approach that addresses issues such as spectrum scarcity and underutilization. The FCC published a notable report in 2002, pointing out that more than 70% of the radio spectrum is underutilized at certain times or geographic positions, which shows that spectrum scarcity is not due to a fundamental lack of spectrum but rather to wasteful static spectrum allocation. Non-cooperative sensing has a major limitation: when a user experiences shadowing or fading effects, the user cannot distinguish between an unused band and a deep fade. Collaborative spectrum sensing can be used to combat such effects. This paper deals with cooperative sensing using the AND and OR detection methods, and the trade-off point is calculated by simulation in a carefully built Cognitive Radio MATLAB environment. Performance analysis using ROC curves is carried out, taking into consideration the probability of false alarm (Pfa) and the probability of missed detection (Pmd). The MATLAB simulation analysis shows that in conditions where a quick response is of great importance, the OR detection method leads but compromises accuracy, i.e. produces more false alarms, whereas the AND detection method significantly improves overall cognitive system performance, removing issues such as the high Pfa of the OR detection method, at the expense of a delayed response. A best operating point with acceptable accuracy and low latency is obtained by simulation.
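The two hard-decision fusion rules compared above can be sketched generically (an illustration of OR/AND fusion, not the paper's MATLAB code):

```python
def or_fusion(decisions):
    """OR rule: declare the primary user present if ANY sensor reports it
    (fast and sensitive, but false alarms accumulate)."""
    return any(decisions)

def and_fusion(decisions):
    """AND rule: declare the primary user present only if ALL sensors
    report it (far fewer false alarms, but less sensitive)."""
    return all(decisions)

def q_fa_or(pf, n):
    # Cooperative false-alarm probability under the OR rule with n
    # independent sensors, each with local false-alarm probability pf.
    return 1.0 - (1.0 - pf) ** n
```

The growth of `q_fa_or` with n is exactly the accuracy penalty of the OR rule that the simulation trades against response time.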
This paper illustrates the design and implementation of an e-health monitoring networked system. The architecture of this system is based on smart devices and wireless sensor networks for real-time analysis of various patient parameters. The system is aimed at developing a set of modules that can facilitate diagnosis for doctors through tele-monitoring of patients. It also facilitates continuous investigation of the patient for emergencies, overseen by attendees and caregivers. A set of medical and environmental sensors is used to monitor the health as well as the surroundings of the patient. The sensor data are relayed to the server using a smart device or a base station in close proximity, and doctors and caregivers monitor the patient in real time through the data received from the server. The medical history of each patient, including medications and medical reports, is stored on the cloud for easy access and for processing for logistics and prognosis of future complications. The architecture is designed to monitor a single patient privately at home as well as multiple patients in hospitals and public health care units. The use of smartphones to relay data over the internet reduces the total cost of the system. We have also considered the privacy and security aspects of the system, keeping a provision for selective authority for patients and their relatives to access the cloud storage, as well as the possible threats to the system. We also introduce through this paper a novel set of value-added services, including Real Time Health Advice and Action (ReTiHA) and parent monitoring for people whose family lives abroad.
Chen has recently proposed a visual secret sharing scheme for gray-scale images [5]. Linear equations of the Hill cipher are used to divide an image into sub-images, and the concept of a random grid is then applied to the sub-images to construct the encrypted image. The scheme is easy to implement and can be applied for visual secret sharing. However, it suffers from security issues. First, although the random grid is used as a second layer of security, it does not play any effective role during decryption. Secondly, even a crude guess of the coefficient matrix used in the Hill cipher equations can reveal the secret. To overcome these drawbacks, a new scheme based on linear equations is proposed in this paper. Experimental results demonstrate that the method is effective and secure.
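For reference, the Hill-cipher step that the attacked scheme relies on operates on pixel pairs modulo 256; a minimal sketch (the 2x2 key below is illustrative, not Chen's):

```python
def hill_apply(pair, key, mod=256):
    """Apply a 2x2 Hill-cipher matrix to a pixel pair modulo `mod`."""
    a, b = pair
    (k11, k12), (k21, k22) = key
    return ((k11 * a + k12 * b) % mod, (k21 * a + k22 * b) % mod)

# A key invertible mod 256 needs an odd determinant; det([[1,2],[3,5]]) = -1.
KEY = ((1, 2), (3, 5))
# Since det^{-1} = -1 mod 256, the inverse is -adj(KEY) mod 256:
KEY_INV = ((251, 2), (3, 255))

cipher = hill_apply((10, 20), KEY)   # encrypt a pixel pair
plain = hill_apply(cipher, KEY_INV)  # decryption recovers the pair
```

The small key space of such 2x2 matrices is precisely why guessing the coefficient matrix can reveal the secret.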
Due to advancements in wireless technologies and their wholesome support of mobility, the growth of mobile users is exponential. Applications such as video and audio streaming cannot sustain a continuous data flow during handover, as the flow is disconnected during handover over mobile IPv6 networks. In this paper, we focus on session handover using session rate prediction to enable video session continuity without video freezes in mobile wireless networks. Results are presented comparing latency and workload between intra-domain and inter-domain session handover to facilitate seamless streaming over mobile networks.
With advancements in database technology, NoSQL databases are becoming more and more popular nowadays. The cloud-based NoSQL database MongoDB is one of them. The data modeling strategies for relational and non-relational databases differ drastically. Modeling data in MongoDB depends on the data and on the characteristics of MongoDB, and variation in the data model affects the performance of MongoDB-based applications. In the present paper we apply two different modeling styles, embedding of documents and normalization (referencing), to collections. With embedding, we may face situations where documents grow in size after creation, which can degrade database performance; moreover, the maximum document size allowed in MongoDB is limited. With references we get more flexibility than with embedding, but client-side applications must issue follow-up queries to resolve the references, and joins cannot be used effectively. Hence there is a need to define a strategy for the extent of normalization and embedding to obtain better performance in mixed situations. This paper shows the variation in performance as the modeling style changes with respect to normalization and embedding, and provides a basis for finding the extent of normalization and embedding that reduces query execution time.
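The two modeling styles can be illustrated with plain document shapes (field names here are hypothetical; Python dicts stand in for BSON documents):

```python
# Embedded style: child data lives inside the parent document. One read
# fetches everything, but the document can grow after creation.
post_embedded = {
    "_id": 1,
    "title": "NoSQL modeling",
    "comments": [
        {"user": "a", "text": "nice"},
        {"user": "b", "text": "+1"},
    ],
}

# Referenced (normalized) style: children are separate documents holding
# the parent's id; the client issues a follow-up query to resolve them.
post_ref = {"_id": 1, "title": "NoSQL modeling"}
comment_coll = [
    {"_id": 10, "post_id": 1, "user": "a", "text": "nice"},
    {"_id": 11, "post_id": 1, "user": "b", "text": "+1"},
]

def resolve_comments(post, comments):
    # The extra client-side lookup that embedding avoids.
    return [c for c in comments if c["post_id"] == post["_id"]]
```

The paper's question is where between these two extremes a mixed workload performs best.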
Frequent pattern mining is an important area in data mining. Transactional databases are insufficient for analyzing current shopping trends, and as such we should consider dynamic datasets whose transactions are updated on an ad-hoc basis. Algorithms such as Apriori require more time to generate a huge set of rules, and discovering interesting rules among the generated rules is difficult. Works reported so far on reducing the number of rules are either time consuming or do not consider the user's interests, and do not focus on the analysis of rules. This paper presents a case study on grocery data that uses the SSFPOA semantic measure to reduce the number of generated patterns, clusters the similar patterns, and visualizes these clusters for easy analysis. Six graphs, namely NCGraph, NSGraph, LCGraph, LSGraph, NEGraph and HGraph, are proposed in VizSFP for visualizing frequent patterns. The clusters formed by SSFPOA are validated using cluster validation indices.
This paper focuses on the importance of desktop applications, also known as standalone utilities, through e-Aushadhi, a well-known application of which a new version has been developed using JavaFX. The paper discusses, in sequence, the need for a desktop application in e-Aushadhi and the use of XML files for database storage as well as for import/export of information. The paper also discusses JavaFX and its ability to create transaction and various other screens without using any web controls. The main aim is to let users recognize the importance of desktop applications and the comparison between web and desktop applications.
With the increasing use of mobile and hand-held computing devices, there is a need for new algorithms for data and transaction management in mobile environments. Devices are becoming more and more computationally capable, and in many cases power is no longer a critical issue: for example, laptops can be charged from time to time. The reliability of communication links is also not a concern. The bottleneck in many of these situations is turning out to be the communication bandwidth. In this work we present a mobile transaction management protocol that employs a lazy commit strategy to minimize bandwidth utilization and the frequency of communication. We defer the commit and lock release until some other device requests a conflicting lock or the user explicitly asks the system to commit the changes. This reduces both the communication frequency and the bandwidth usage. Simulation results show that, in terms of bandwidth usage, our protocol performs strictly better than an existing optimistic protocol for mobile transactions.
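The deferral logic described above can be sketched as follows (a minimal illustration of the lazy-commit idea, not the paper's protocol; the class and method names are assumptions):

```python
class LazyCommitClient:
    """Buffer writes locally and flush them to the server only on an
    explicit commit or when another device requests a conflicting lock."""

    def __init__(self, server):
        self.server = server   # shared dict standing in for the server
        self.pending = {}      # deferred, uncommitted writes

    def write(self, key, value):
        self.pending[key] = value          # no communication yet

    def on_conflicting_lock_request(self, key):
        if key in self.pending:            # another device wants this item
            self.commit()                  # flush now, releasing the lock

    def commit(self):
        self.server.update(self.pending)   # one batched round trip
        self.pending.clear()
```

Batching many writes into one round trip is what cuts both message frequency and bandwidth.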
Discovering a service over the web that meets the desired functionalities is still one of the most challenging tasks in the area of Service Oriented Computing. The lack of semantic information in web service profiles restricts the automated discovery of services. The huge number of irrelevant services returned by UDDI and the lack of standard mechanisms are the main problems faced by users today during service discovery. In this paper, we propose a Web service discovery approach, independent of the description model, that copes with the heterogeneity found in semantic service description frameworks. The approach uses principles from text mining, measures of semantic relatedness and information retrieval, where the semantic information of the services is integrated with the syntactic service profiles to give hybrid service vectors. An empirical evaluation of the proposed approach, implemented on OWL-X services, is presented to show its feasibility. Experimental results show that the proposed approach is able to discover better semantic relationships between services, so more relevant results are ensured during discovery.
Stacking ensemble is a collective framework with strategies to combine the predictions of learned classifiers to generate predictions as new instances occur. Earlier research has shown that a stacking ensemble is usually more accurate than any single component classifier. Many ensemble methods have been proposed, but it is still a difficult task to find a suitable ensemble configuration. Meta-heuristic methods can be used to find optimized configurations; genetic algorithms and ant colony algorithms are some popular approaches on which current research is being carried out. This paper surveys the meta-heuristic approaches used so far for optimizing the stacking configuration and the work that can be done in the future to overcome the shortcomings of existing techniques. A particle swarm optimization based stacking ensemble framework can be applied to obtain better results. A number of studies, comparisons and experiments are presented, drawn from a large number of references.
Mass estimation, an alternative to density estimation, is proving to be an effective base modeling mechanism in data mining. It is as fundamental as density estimation, which has been the foundation of most data modeling methods for a wide range of tasks such as classification, clustering and anomaly detection. This paper reviews the theoretical basis of mass estimation, which can be employed to solve various tasks in data mining, and different ways to estimate the mass of data points in different dimensions. The paper also discusses applications of mass estimation in various data mining tasks and compares them with the previously used density estimation technique. The paper reviews the mass estimation technique in detail and will be helpful to researchers working in this relatively new area.
With the spectacular increase in online activities such as e-transactions, security and privacy issues are of peak significance. Database security breaches are occurring at a very high rate on a daily basis. There is therefore a crucial need in the field of database forensics to make several redundant copies of sensitive data found in database server artifacts, audit logs, cache, table storage, etc. for analysis purposes. A large volume of metadata is available in the database infrastructure for investigation purposes, but most of the effort lies in retrieving and analyzing that information from computing systems. Thus, in this paper we focus mainly on the significance of metadata in database forensics. We propose a system to perform forensic analysis of a database by generating its metadata file independently of the DBMS used. We also aim to generate digital evidence against criminals for presentation in a court of law, in the form of who, when, why, what, how and where the fraudulent transaction occurred. Thus, we present a system to detect major database attacks as well as anti-forensics attacks by developing an open-source database forensics tool. Finally, we point out the challenges in the field of forensics and how these challenges can be used as opportunities to stimulate the area of database forensics.
Inductive Logic Programming (ILP) is used in relational data mining to discover rules in first-order logic, given data in multiple relations. This form of data mining is to be distinguished from market basket analysis, where the data come from a single relational table. Although ILP addresses the problem of dealing with data from multiple relational tables, the fact remains that the efficiency of inferring rules in first-order logic is significantly lower than that of many-sorted logic. Further, many-sorted logic is a closer reflection of the real world of objects that belong to sorts, in the presence of a sort hierarchy. We propose a new approach to ILP using many-sorted logic that is more computationally efficient than the approach based on unsorted first-order logic.
The practical challenges in web service discovery are the huge number of irrelevant services returned by UDDI and the lack of standard mechanisms that help in the discovery of desired web services. Utilizing the implicit semantic information in service profiles can help service consumers select the most relevant services from a set of offered services. In this paper, an approach for web service discovery is proposed which uses a lexical semantic network, constructed from web snippets, as a knowledge base for calculating the semantic similarity between service profiles. Our approach takes the text descriptions into account and maps service profiles to a category-based dimension vector using the notion of semantic similarity; this vector is then merged with IR-based weight generation techniques and used to calculate the semantic degree of similarity between services. We present results obtained by applying the approach to a set of 106 OWL-S service profiles. Empirical evaluation shows that the proposed approach helps in better discovery of semantically similar and relevant services which are otherwise shown to be unrelated by keyword-based approaches.
The rapid growth of information on the World Wide Web has created challenges for users, such as finding relevant and useful information and knowledge. To deal with this problem, recommender systems came into existence. Collaborative filtering techniques have gained much more popularity than other techniques in recommender systems, since in real life we ask our friends for recommendations. Traditional systems, however, ignore social relationships among users. To resolve this problem and improve recommender systems' results, the idea of a social recommender system was discussed, which is capable of identifying users' interests and preferences as well as their social network relationships. However, this approach is not sensitive to those users whose friends have dissimilar tastes. To tackle the resulting inaccuracy due to information deficiency, a social regularization term is used to impose constraints between a user and each of their friends individually.
Social media is being used as a key platform by advertisers to improve business by providing targeted and personalized advertising. There exists a trade-off between advertising productivity and invasion of users' privacy in the existing approaches. Due to these privacy concerns, many lawsuits were filed against the Beacon [1] advertising model used by Facebook, resulting in its discontinuation. New approaches need to address the targeted audience while preserving privacy in order to build an appropriate revenue model for their operations. In this paper we propose an innovative model that leverages this trade-off. The model interacts with users' linked data present on the web in structured format, retrieves it, and integrates data from marketing partners; it then broadcasts the advertisement in the social graph with flexible sentences. For targeted advertising and privacy, our model maintains interaction records among users in virtual containers to find tie-strength [2] and also active friends using an Association Rule Mining (ARM) algorithm. We applied and validated our approach using a real data set obtained from 506 active social media users.
Sign language, a language that uses a system of manual, facial, and other body movements as the means of communication, is the primary means of communication for people with speaking and hearing impairments. This paper uses image processing and a fuzzy rule-based system to develop an intelligent system that can act as an interpreter between Bengali sign language and the spoken language. Initially the data are processed from raw images, and then the rules are identified by measuring angles. At present, the system has been tested for only two letters in Bengali.
Multilevel secure database systems are systems in which each data item is assigned a security level or sensitivity level. Security classifications are assigned at granularities ranging from the complete relation down to individual data elements. To assign sensitivity levels, security constraints or classification rules are utilized; security constraints assign security levels on the basis of the content, context, and time of data items. In this paper, we propose an integrated architecture for distributed query operations and coding of multiple levels, for the purpose of providing extra security when data are stored at various sites.
Text clustering is an unsupervised process based solely on finding the similarity relationship between documents, with the output being a set of clusters [14]. In this research, a commonality measure is defined to find the commonality between two text files, which is used as a similarity measure. The main idea is to apply an existing frequent itemset mining algorithm, such as Apriori or FP-tree, to the initial set of text files to reduce the dimension of the input. A feature vector is formed for each document, and then a vector is formed for all the static text input files. The algorithm outputs a set of clusters from the initial input of text files.
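As a stand-in for the commonality measure (whose exact definition is given in the paper, not here), a set-overlap similarity over the reduced frequent-item feature sets could look like:

```python
def commonality(features_a, features_b):
    """Illustrative commonality measure: Jaccard overlap between the
    frequent-item feature sets of two documents. This is an assumed
    stand-in, not the paper's definition."""
    a, b = set(features_a), set(features_b)
    return len(a & b) / len(a | b) if a | b else 0.0
```

Two documents sharing half of their frequent items would score 0.5 under this sketch.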
Recommendation systems are widely used to recommend the most appropriate products to end users. Online book-selling websites nowadays compete with each other by many means, and a recommendation system is one of the stronger tools for increasing profit and retaining buyers. A book recommendation system must recommend books that match the buyer's interests. This paper presents a book recommendation system based on the combined features of content filtering, collaborative filtering and association rule mining.
Frequent itemset mining over dynamic data is an important problem in the context of knowledge discovery and data mining. Various data stream models are used for mining frequent itemsets. In a data stream model, the data arrive at high speed, so the algorithms used for mining data streams must process them under strict constraints of time and space. Due to its emphasis on recent data and its bounded memory requirement, the sliding window model is widely used for mining frequent itemsets over data streams. In this paper we propose an algorithm named Variable-Moment for mining both frequent and closed frequent itemsets over data streams. The algorithm is suited to detecting recent changes in the set of frequent itemsets by making its window size variable; the size is determined dynamically based on the extent of concept drift occurring within the arriving data stream. The window expands when there is no concept drift in the arriving stream and shrinks when there is a concept change. Relative support, instead of absolute support, is used to make the variable window effective. The algorithm uses an in-memory data structure to store frequent itemsets; the data structure is updated whenever a batch of transactions is added to or deleted from the sliding window, so that exact frequent itemsets are output. Extensive experiments on both real and synthetic data show that our algorithm accurately spots concept changes and adapts itself to the new concept along the data stream by adjusting its window size.
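A sketch of the variable-window policy described above (the grow/shrink factors are illustrative assumptions; the paper derives the size from the measured extent of drift):

```python
def adjust_window(size, drift_detected, grow=1.1, shrink=0.5, min_size=100):
    """Expand the sliding window while the stream is stable; shrink it
    sharply when a concept change is detected, so that old, obsolete
    transactions drop out of the window quickly."""
    if drift_detected:
        return max(min_size, int(size * shrink))
    return int(size * grow)
```

Using relative support keeps frequency thresholds meaningful as the window size changes.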
Process mining is an emerging scientific research discipline that concentrates on discovering, monitoring and enhancing operational processes using the operational traces of the process documented in logs. Process mining enables process-centric analysis of data and aims at bridging the gap between data mining and business process modeling and analysis. This article analyses the use of the Critical Path Method (CPM), as used in project management, in the context of process mining, in order to find critical paths in a process model; it aims at leveraging process mining practices with the application of CPM and studies its feasibility. The critical path identifies the minimum time possible to finish the project, so extra care must be taken while executing activities on the critical path: a delay in any of them would delay the process completion time and upset the overall process plan.
Process mining originates from the fact that modern information systems systematically record and maintain the history of the processes which they monitor and support. Systematic study of the recorded information in a process-centric manner helps to understand the process better. Process mining acts as an enabling technology by facilitating process-centric analysis of data, which other data science disciplines such as data mining fail to provide. Process mining algorithms are able to provide excellent insights into the processes they analyze, but they fail to handle change in the process. Concept drift is the phenomenon of the process changing while it is being analyzed, and it is a non-stationary learning problem: as the process changes during analysis, the end result of the analysis becomes obsolete. Process mining algorithms have a static bias; they assume that the process at the beginning of the analysis period will remain the same at the end of the analysis period. There is an utmost need to deal effectively with change in the process in order to conduct optimal analysis. The main focus of this paper is to identify the different factors to be considered while designing a solution to the problem of concept drift, and to explain each of the identified factors briefly. As the phenomenon of concept drift is extensively studied in other scientific research disciplines, this article restricts its content strictly to the context of process mining.
Multiclass classification refers to the classification of instances into more than two classes. In real life, many classification problems require decisions among a set of contending classes; multiclass classification and prediction are suitable for handwritten digit recognition, handwritten character recognition, speech recognition, body parts recognition, etc. This paper compares five classification algorithms, namely Decision Tree, Naïve Bayes, Naïve Bayes Tree, K-Nearest Neighbor and Bayesian Network, for predicting students' grades, particularly for engineering students. This is a four-class prediction problem: students' marks are classified into the four classes A, B, C and F. Initially the complete data set is used to build the classifiers; then the Bootstrap method, a resampling function available in the WEKA toolkit, is used to improve the accuracy of each classifier. The excellent results of this function can be seen with the IBK, Decision Tree and BayesNet algorithms. Although the overall results of all the algorithms are good, the per-class results of Naïve Bayes and NB Tree are not sufficient for individual class prediction in this study. This paper also presents a comparative study of previous work related to student performance prediction.
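The Bootstrap resampling step used above can be sketched generically (this is not the WEKA implementation):

```python
import random

def bootstrap_sample(dataset, rng):
    """Draw a bootstrap sample: len(dataset) instances drawn uniformly
    with replacement. On average about 63.2% of distinct instances
    appear; the rest can serve as an out-of-bag evaluation set."""
    return [rng.choice(dataset) for _ in dataset]

rng = random.Random(42)
sample = bootstrap_sample(list(range(100)), rng)
```

Training each classifier on such resamples and aggregating is what lifts the accuracy figures reported for IBK, Decision Tree and BayesNet.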
In this paper we present an incremental algorithm for mining all the closed intervals from an interval dataset. Previous methods for mining closed intervals assume that the dataset is available at the start of the process, whereas in practice the data in the dataset may change over time. This paper describes an algorithm which provides an efficient method for mining closed intervals in dynamically changing datasets, using a data structure called the CI-Tree (Closed Interval Tree). If a new interval is added to the dataset, the algorithm modifies the CI-Tree without looking at the dataset. The proposed method has been tested with various real-life and synthetic datasets.
Data streams are viewed as sequences of relational tuples (e.g., sensor readings, call records, web page visits) that arrive continuously at time-varying and possibly unbounded rates. These data streams are potentially huge in size, and thus many data mining techniques and approaches cannot process them. Classification techniques fail to process data streams successfully because of two factors: their overwhelming volume and their distinctive feature known as concept drift. Concept drift is a term used to describe changes over time in the structure being learned, and its occurrence leads to a drastic drop in classification accuracy. The recognition of concept drift in data streams has led to sliding-window approaches; other approaches to mining data streams with concept drift include instance selection methods, drift detection, ensemble classifiers, option trees and the use of Hoeffding bounds to estimate classifier performance. This paper describes the various types of concept drift that affect the data examples and discusses various approaches for handling concept drift scenarios. The aim of this paper is to review and compare single-classifier and ensemble approaches to data stream mining.
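A minimal sliding-window drift signal of the kind the surveyed approaches build on (the window size and threshold are illustrative, and this is far simpler than published detectors such as DDM):

```python
from collections import deque

class WindowDriftDetector:
    """Flag drift when accuracy over the recent window falls well below
    the long-run accuracy of the classifier."""

    def __init__(self, window=50, threshold=0.2):
        self.window = deque(maxlen=window)  # recent correctness flags
        self.correct = 0                    # long-run correct count
        self.total = 0
        self.threshold = threshold

    def add(self, is_correct):
        self.window.append(is_correct)
        self.correct += is_correct
        self.total += 1

    def drift(self):
        if not self.window:
            return False
        recent = sum(self.window) / len(self.window)
        overall = self.correct / self.total
        return overall - recent > self.threshold
```

A detected drift would typically trigger retraining or window truncation in a stream classifier.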
Fault diagnostics and prognostics have attracted increasing interest in recent years, as a result of the increased degree of automation and the growing demand for higher performance, efficiency, reliability and safety in industrial systems. On-line fault detection and isolation methods have been developed for automated processes. These methods include data mining methodologies, artificial intelligence methodologies, or combinations of the two. Data mining is the statistical approach of extracting knowledge from data. Artificial intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs; activities in AI include searching, recognizing patterns and making logical inferences. This paper focuses on the various techniques used for fault diagnostics and prognostics in industrial application domains.
The presence of unimportant and superfluous features in datasets motivates researchers to devise novel feature selection strategies. The problem of feature selection is multi-objective in nature, and hence optimizing feature subsets with respect to any single evaluation criterion is not sufficient [1]. Moreover, discovering a single best subset of features is not of much interest. In fact, finding several feature subsets reflecting a trade-off among several objective criteria is more beneficial, as it provides users a broad choice for feature subset selection. Thus, in order to combine several feature selection criteria, we propose multi-objective optimization of feature subsets using a Multi-Objective Genetic Algorithm. This work is an attempt to discover non-dominated feature subsets of smaller cardinality with high predictive power and least redundancy. To meet this purpose we have used NSGA-II, a well-known Multi-Objective Genetic Algorithm (MOGA), for discovering non-dominated feature subsets for the task of classification. The main contribution of this paper is the design of a novel multi-objective fitness function consisting of information gain, mutual correlation and size of the feature subset as the multi-optimization criteria. The suggested approach is validated on seven datasets from the UCI machine learning repository. Support Vector Machine, a well-tested classification algorithm, is used to measure the classification accuracy. The results confirm that the proposed system is able to discover diverse optimal feature subsets that are well spread in the overall feature space, and that the classification accuracy of the resulting feature subsets is reasonably high.
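The notion of a non-dominated feature subset can be sketched with a plain Pareto-dominance check. The objective tuples used here (accuracy, negated subset size, negated redundancy, all to be maximized) are a hypothetical stand-in for the paper's fitness of information gain, mutual correlation and subset size, and this brute-force front extraction is far simpler than NSGA-II itself.

```python
def dominates(a, b):
    """a dominates b if a is at least as good on every objective
    and strictly better on at least one (all objectives maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated solutions (the Pareto front)."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]
```

NSGA-II additionally sorts the whole population into successive non-dominated fronts and uses crowding distance to keep the front well spread; the check above is only the core dominance relation.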
This paper reports on an application of classification models to identify college students at risk of failing in the first year of study. Data was gathered from three student cohorts in the academic years 2010 through 2012. Students within the cohorts were sampled from a range of academic disciplines (n=1074), and were diverse in their academic backgrounds and abilities. Metrics used included data that are typically available to colleges such as age, gender and prior academic performance. The study also considered psychometric indicators that can be assessed in the early stages after enrolment, specifically, personality, motivation and learning strategies. Six classification algorithms were considered. Model accuracy was assessed using cross validation and was compared to outcomes when models were applied to a subsequent academic year. It was found that mature students were more complex to model than younger students. Furthermore, 10-fold cross validation accurately estimated model performance when modeling younger students only, but over-estimated model accuracy when modeling mature students.
In today's online world, users suffer from the problem of information overload. To handle this problem, recommender systems assist users by filtering out irrelevant information and presenting the information they require. Most recommender systems strive mainly to achieve accuracy in recommendations, but accuracy alone is not what users want. Users require greater coverage and diversity in recommendations, particularly in the news domain, which is highly dynamic in nature. To handle the issues of coverage and diversity, we have worked on proactive prediction of those user interests that could not be predicted by user behavior analysis alone. User interest is expanded on the basis of concepts, sub-concepts, entities, properties and relationships stored in our news domain ontology. The ontology design is based on news industry standards and careful study of the domain. It is also semantically annotated with context-sensitive knowledge extracted from the external knowledge source DBpedia.
Unstructured text documents have seen huge growth. Feature selection methods are important in the preprocessing of such text documents for dynamic text classification. Feature selection focuses on appropriate and useful features; this can decrease the cost involved when huge amounts of data are processed, and it also improves the subsequent text classification task. This paper devises a novel geometric feature optimization method for text classification. An experimental study of this geometric feature optimization method is conducted using text data sets of divergent sizes. Experiments show how effective the method is and how it improves upon traditional methods.
Email has proved to be a convenient and powerful communication tool. As the internet continues to grow, the type of information available to users has shifted from text-only to multimedia-enriched, and embedded text in multimedia content is one of the prevalent means of delivering messages to content viewers. With the increasing importance of email and the incursions of internet marketers, spam has become a major problem and has given rise to unwanted mail. Spammers continuously adopt new techniques to evade detection. Image spam is one such technique, wherein embedded text within images, rather than plain text, carries the main information of the spam message. Image spam is currently estimated at roughly 50% of all spam traffic and is still on the rise, making it a serious research issue. Filtering mail is one of the popular approaches used to block spam. This work proposes a new model, ReP-ETD (Repetitive Pre-processing technique for Embedded Text Detection), for efficiently and accurately detecting spam in email images. The performance of the proposed ReP-ETD model has been evaluated across the identified parameters and compared with existing models. The simulation results demonstrate the effectiveness of the proposed model.
Classification is a common way to partition a given data set, and the decision tree is one of the standard methods for extracting knowledge from a data set. Traditional decision trees face the problem of crisp boundaries, hence fuzzy boundary conditions are proposed in this research. The paper proposes the Fuzzy Heterogeneous Split Measure (FHSM) algorithm for decision tree construction, which uses a trapezoidal membership function to assign fuzzy membership values to the attributes. The size of the decision tree is one of the main concerns, as a larger tree leads to incomprehensible rules. The proposed algorithm tries to reduce the size of the generated decision tree by fixing the value of a control variable, without compromising classification accuracy.
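The trapezoidal membership function mentioned above has a standard definition, sketched below; the parameter names `a, b, c, d` (feet and shoulders of the trapezoid) follow the usual fuzzy-set convention and are not taken from the paper.

```python
def trapezoidal(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 outside [a, d], rises linearly
    on [a, b], is 1 on the plateau [b, c], and falls linearly on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (d - x) / (d - c)       # falling edge
```

An attribute value near the plateau gets full membership in the fuzzy region, while values near the feet get partial membership, which is what softens the crisp split boundary.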
When it comes to running a lasting business that experiences rapid growth, a social media presence is critical. Knowing that social media is necessary is one thing; running a successful social media marketing campaign is quite another. In fact, when establishing a social media presence that makes an impact, understanding the interests of all users and, based on them, publishing the required information according to their tastes is an important factor. For advertising campaigns and product development, discovering the appropriate target markets and audience is an important stage of market research. This paper focuses on four tasks: identifying the target users, designing the market strategy/plan, building the marketing network (groups) and statistical analysis of categories. The influence of target users is discussed with real-time instances. Categories are identified based on their influence by using a clustering technique. The paper ends with a statistical analysis that includes a graphical representation of highly influenced users. Further, this paper helps to extract the emotional feelings of a user so that related articles, posts or videos can be presented to that user.
The World Wide Web consists of millions of interconnected web pages that provide information to users in any part of the world. The World Wide Web is expanding, growing both in size and in the complexity of its web pages. It is therefore necessary to retrieve the web pages that are most relevant, in terms of information, to the query entered by the user in the search engine. To determine the relevance of a web page, the search engine applies a retrieval or ranking module that runs a ranking algorithm over the web to fetch pages in order of the importance of the information requested in the user's query. The ranking algorithm must be efficient enough to rank both the surface web, i.e. the web pages that can be indexed by the search engine, and the hidden web, i.e. the web pages that cannot be indexed by the search engine. This paper proposes an algorithm consisting of: 1) the PageRank algorithm, 2) a term weighting technique, 3) feedback (likes/dislikes) and 4) visitor count.
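The PageRank component named above is the classic power-iteration computation; the following is a minimal dictionary-based sketch of standard PageRank (with the usual damping factor of 0.85), not the paper's combined ranking scheme, which additionally folds in term weights, feedback and visitor counts.

```python
def pagerank(links, d=0.85, iters=50):
    """Power-iteration PageRank. links: {node: [outlinked nodes]}.
    Returns a dict of ranks that sums to 1."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - d) / n for u in nodes}   # teleportation share
        for u, outs in links.items():
            if not outs:                        # dangling node: spread evenly
                for v in nodes:
                    new[v] += d * rank[u] / n
            else:
                for v in outs:
                    new[v] += d * rank[u] / len(outs)
        rank = new
    return rank
```

On a tiny graph where most pages link to `a`, `a` ends up with the highest rank, which is the intuition the full algorithm builds on.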
This paper discusses the novel area of Brain Informatics (BI). BI is an interdisciplinary field that studies information processing and neuroscience. The first section of the paper discusses this area and identifies major research issues associated with BI. The second section provides a comprehensive literature survey of neuroimaging techniques and their pros and cons, and then relates the most promising technique to its applications in BI. The third section discusses the process of classification, followed by the design of classifiers in the context of BI and the use of classification in BI. The fifth section discusses applications of the analysis of neuroimaging data. The final section concludes the paper and provides directions for future work.
This paper exploits sentiment analysis based techniques to automatically identify interlinked events from disaster-related news coverage. Here the interlinked events include: (1) the effect (loss/damage) of the disaster, (2) the recovery effort applied by recovery agencies and (3) people's feedback on the recovery effort. The main idea is to analyze the performance of the disaster recovery agencies through people's feedback with respect to the damage/loss that occurred during the disaster. To automatically identify the interlinked events, we introduce the combined use of text mining based techniques with sentiment analysis based techniques. Finally, to automatically detect the impact of the recovery effort for better disaster management, we introduce the use of SentiWordNet and a manually created vocabulary related to the disaster. Our experimental results show the effectiveness of the proposed system on automatically collected news stories. It is important to note that the proposed approach will be helpful in better disaster management and resource planning for the future.
In today's world, music professionals face a challenge when they want to practice a song: instrumentalists are not available at all times to accompany them. There is thus an increased demand for karaoke systems that can generate the instrumental music without the vocals, which creates a significant need to develop such systems. In this paper we propose an efficient approach that helps create the karaoke version of any song by removing its vocal part. While many researchers focus on other aspects, we propose an LPF technique that is exploited to separate the vocal and instrumental portions of a song clip. Although there is room for further improvement, the proposed technique is useful and easy to use, with validations provided. The strategy uses the MATLAB programming environment and filtering techniques. Using the MATLAB GUI, a working prototype has been built that helps the user create the karaoke track, i.e. the instrumental with the vocals separated out.
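The paper's implementation is in MATLAB; as a language-neutral illustration of the low-pass-filtering idea only (not the authors' actual filter design), the sketch below applies a crude moving-average low-pass filter to a sample sequence, passing slowly varying content while attenuating rapid oscillations such as those in vocal frequency bands.

```python
def moving_average_lpf(signal, window=5):
    """Crude low-pass filter: replace each sample by the mean of a
    centered window. Edges use a truncated window."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out
```

A constant (zero-frequency) signal passes through unchanged, while a rapidly alternating signal is strongly attenuated, which is the defining behaviour of a low-pass filter. A production karaoke system would use a properly designed filter (e.g. Butterworth) and typically combine it with stereo center-channel techniques.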
The World Wide Web can be viewed as a repository of opinions from users spread across various websites and networks, and today's netizens look up reviews and opinions to judge commodities and visit forums to debate events and policies. With this explosion in the volume of, and reliance on, user reviews and opinions, manufacturers and retailers face the challenge of automating the analysis of such large amounts of data (user reviews, opinions, sentiments). Armed with these results, sellers can enhance their products and tailor the experience for the customer. Similarly, policy makers can analyse these posts to get instant and comprehensive feedback, or use it for new ideas that democratize the policy-making process. This paper is the outcome of our research in gathering opinion and review data from popular portals, e-commerce websites, forums and social networks, and processing the data using the rules of natural language and grammar to find out what exactly was being talked about in each user's review and the sentiments people are expressing. Our approach diligently scans every line of data and generates a cogent summary of every review (categorized by aspects), along with various graphical visualizations. A novel application of this approach is helping product manufacturers or the government gauge response. We aim to provide summarized positive and negative features of products, laws or policies by mining reviews, discussions, forums, etc.
Building geodemographic models using census data is not new. However, many of these models are built commercially, and often little is published about how they are designed and created. One of the important steps in building any geodemographic system is identifying the variables that will produce meaningful and application-relevant clusters. Unfortunately, many of the commonly used methodologies for feature selection cannot be applied to census data and so were not available for the clustering tasks; this resulted in a more objective-focused approach being taken when choosing variables. This paper outlines the reasons the more commonly used feature selection techniques could not be used, and then explains the alternative approaches and methodologies that were used to select variables to help build an Irish geodemographic model.
Several deaths every year are caused by burns. When a burn injury occurs, the first crucial response is to provide appropriate treatment to the victim, which may require consulting an expert or specialist at a burn center. However, the number of burn centers or burn care units is very limited, and people in remote, rural or hilly regions have to travel long distances in search of one. Treating minor burns may not require an expert's consultation, but a lack of awareness about first aid and prevention strategies leads to a higher incidence of major burns. Hence, an automatic system that helps assess burn injuries and provides first aid information would be extremely beneficial. The aim of this work is to develop such an automated system. Our solution arises from one of the most widespread inventions of today's time - the Smartphone. We propose and develop a Smartphone app with a cloud interface that helps identify the type and severity of a burn injury. Our approach is twofold: firstly, providing first aid information by recognizing the degree of the burn; secondly, sending the victim's information and an image of the burn to the cloud interface, which is also connected to one or more burn trauma centers, to seek expert advice and to let the cloud learn from this data.
The bin packing problem is one of the major problems that needs attention in this era of distributed computing. In it, optimization is attained by packing a set of items into as few bins as possible. Its applications vary from placing data on multiple disks to job scheduling, packing advertisements into fixed-length radio/TV station breaks, etc. Efforts have been made to parallelize the bin packing solution with the well-known programming model MapReduce, which is highly supportive of distributed computing over large clusters of computers. Here we propose two different algorithms, using two different approaches, for parallelizing the generalized bin packing problem. The results were tested on a Hadoop cluster and complexities were estimated thereafter. It is found that working on the problem set in parallel results in significantly more time-efficient solutions to the bin packing problem.
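The paper's parallel MapReduce formulations are not reproduced here; as background on the underlying problem, the classic sequential first-fit decreasing (FFD) heuristic is sketched below. FFD is a standard baseline that the parallel variants would typically be compared against.

```python
def first_fit_decreasing(items, capacity):
    """FFD heuristic: sort item sizes in decreasing order, then place
    each item into the first bin with enough remaining capacity,
    opening a new bin when none fits. Returns the list of bins."""
    bins = []
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:                       # no existing bin fits
            bins.append([item])
    return bins
```

For example, items of sizes 4, 8, 1, 4, 2, 1 fit into two bins of capacity 10, which is optimal here since the total size is 20. FFD is guaranteed to use at most about 11/9 of the optimal number of bins.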
Indexing of data is important for fast query response in information retrieval, and supporting multiple query types on multidimensional data is a challenging task. Indexing of multidimensional data has received much attention recently. In this paper a new data structure, the Perfect Hash Base R-tree (PHR-tree), is proposed. Each PHR-tree node extends the traditional R-tree node with a perfect hashing index to support multiple query types efficiently. It supports point queries on multidimensional data efficiently, and provides space efficiency and fast O(log n) response for all types of queries.
Searching for a facility near many customers is a problem for which even an approximately correct answer can save a lot of labor, time and money. A possible solution is discussed that lets decision makers locate a facility approximately nearest to all customers. It takes O(n log n) time, as it is based on a Voronoi diagram constructed in O(n log n) time, while its own additional work is linear. The proposed algorithm tackles the problem of finding the nearest facility for multiple customers by considering two criteria. The first is minimizing the aggregate distance, i.e. the sum of the total distances covered by all the customers. The second is minimizing the maximum difference, i.e. the difference between the farthest customer and the nearest customer. The approach uses the plane sweep algorithm (Fortune's algorithm) for Voronoi diagram construction as its base algorithm, because it is one of the most efficient algorithms known for computing Voronoi diagrams.
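The two selection criteria described above can be made concrete with a brute-force sketch over a set of candidate sites (the paper's Voronoi-based method avoids this exhaustive scan; the function names here are illustrative):

```python
import math

def best_facility(candidates, customers):
    """Criterion 1: pick the candidate minimizing the aggregate
    (summed) Euclidean distance to all customers."""
    return min(candidates,
               key=lambda c: sum(math.dist(c, p) for p in customers))

def most_equitable_facility(candidates, customers):
    """Criterion 2: pick the candidate minimizing the spread between
    its farthest and nearest customer."""
    def spread(c):
        ds = [math.dist(c, p) for p in customers]
        return max(ds) - min(ds)
    return min(candidates, key=spread)
```

Both criteria agree when one candidate sits centrally among the customers; they can disagree when a candidate is close to the group's centroid but very near one particular customer.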
This paper gives an algorithm to find the nearest facility location using Geohashing. An open-box query, which operates on unbounded data, is implemented to find the nearest location. Searching is done based on the longitude and latitude of locations. Geohashing is a technique that converts a longitude-latitude pair into a single value represented in binary format. The data accumulated in geospatial queries is very large, therefore the MapReduce framework is used for a parallel implementation. MapReduce splits the input into independent chunks and executes them in parallel over different mappers. Geohashing and MapReduce, when fused together to find the nearest facility location, give very good results.
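The binary encoding at the heart of geohashing is standard bit interleaving: the longitude and latitude ranges are repeatedly halved, and the resulting bits are woven into one integer so that nearby points share a common prefix. The sketch below shows this core step only (the paper's full pipeline, and the usual base-32 text form of geohashes, are omitted).

```python
def geohash_bits(lat, lon, precision=20):
    """Interleave range-halving bits of longitude (even positions)
    and latitude (odd positions) into a single integer."""
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    bits = 0
    for i in range(precision):
        if i % 2 == 0:                      # even bit: refine longitude
            mid = (lon_rng[0] + lon_rng[1]) / 2
            bit = 1 if lon >= mid else 0
            lon_rng[0 if bit else 1] = mid  # keep the half containing lon
        else:                               # odd bit: refine latitude
            mid = (lat_rng[0] + lat_rng[1]) / 2
            bit = 1 if lat >= mid else 0
            lat_rng[0 if bit else 1] = mid
        bits = (bits << 1) | bit
    return bits
```

Because the value is a prefix code over space, nearest-facility search reduces largely to comparing integer prefixes, which distributes naturally over MapReduce mappers.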
The Pillreports.com database was mined to determine whether the free-text fields in the database could be of use in differentiating regular pills from those that have been adulterated, i.e. contain ingredients not comparable to MDMA. The data was downloaded and extracted using RapidMiner and XPath queries. Naive Bayes and SVM binary classification models were created. Pre-processing techniques of tokenisation, n-gram creation, stop-word removal and stemming, as well as feature selection by weights, were applied to the data, resulting in a 15-point improvement in the model. In addition, we report on a comprehensive cluster analysis. Frequent terms and differences between clusters were visualised using word clouds, and clusters were compared with the values contained in nominal fields. Model results and interpretation are provided at various preprocessing stages. Key phrase extraction is identified as an area of possible future work.
This research paper uses text visualisation methods, such as word clouds, to emphasise the key terms used by different political parties throughout the 2011 Irish Parliament (Dáil) period; this includes the final sessions of the 30th Dáil before the 2011 General Election and the entire 2011 sessions of the 31st Dáil. Using term frequencies, this research clearly shows the topics that were important to each relevant political party. This study also highlights the key differences that may exist between what a political party stated clearly in its election manifesto and what was written in Dáil transcript records during the course of 2011. The main objective of this research is to explore and understand ways in which key political information can be summarised and subsequently displayed in an intuitive visual fashion as an educational guide.
This paper reports on the text analysis (visual and computational) carried out on the lyrics of the rock musician Tom Waits. The analysis focuses mainly on the generally agreed transition period of the musician, marked by his album Swordfishtrombones, which started a new phase in Waits' 40-year career. A total of ten supervised learners are tested with the aim of separating the high-dimensional space of the word vector (based on his lyrics) into two phases. After initial tests, particular focus is given to the Maximum Entropy classifier by further applying some additional pre-processing techniques. The classifier is able to shed further light on the two classes, separating them with an accuracy of 95%.
Battery consumption in mobile handheld devices is an important issue, given the recent slew of such devices with advanced computing capabilities. In this paper we propose some software-based approaches to reduce the power consumption of a mobile device running an Android application. In the first approach we consider a typical Android application and try to minimize power consumption by reducing the main window surface size. In the second approach we consider applications with higher graphics or rendering requirements that utilize a surface view for rendering content, such as games, internet browsers and video players, which contribute most to draining the battery. Our approach involves removing the surface view and directly utilizing the application's main window surface. We present various experiments measuring the power consumption in each scenario. We also discuss the advantages and problems of the proposed methods, along with some possible solutions.
Load balancing is a very important requirement of any distributed system, and task migration is one of the eminent methodologies for this purpose. Until now, utilization has been the only key on which the selection of the source and destination processors of a victim task is based. This paper evaluates an information-theoretic entropy factor that works similarly to the utilization factor, and demonstrates its better performance in the task migration methodology. We have evaluated the performance of our system by calculating the average scheduling latency, deadline miss rate, migration rate and, finally, the execution ratio of total tasks for 5 and 10 processors of an RTDS.
Cloud computing is a new era of network-based computing, where resources are distributed over the network and shared among its users, and any user can use these resources through the internet on a pay-as-per-use basis. A service used by any user can produce a very large amount of data, in which case the data transfer cost between two dependent resources will be very high. In addition, a complex application can have a large number of tasks, which may increase the total cost of executing that application if it is not scheduled in an optimized way. To overcome these problems, the authors present a Cat Swarm Optimization (CSO) based heuristic scheduling algorithm to schedule the tasks of an application onto the available resources. The CSO heuristic algorithm considers both the data transmission cost between two dependent resources and the execution cost of tasks on different resources. The authors experiment with the proposed CSO algorithm on a hypothetical workflow and compare the workflow scheduling results with the existing Particle Swarm Optimization (PSO) algorithm. The experimental results show that (1) CSO gives an optimal task-to-resource (TOR) scheduling scheme that minimizes the total cost, (2) CSO shows an improvement over the existing PSO in terms of the number of iterations, and (3) CSO ensures fair load distribution on the available resources.
Hadoop is an open source cloud computing platform of the Apache Foundation that provides a software programming framework called MapReduce and a distributed file system, HDFS. It is a Linux-based set of tools that uses commodity hardware, which is relatively inexpensive, to handle, analyze and transform large quantities of data. The Hadoop Distributed File System, HDFS, stores huge data sets reliably and streams them to user applications at high bandwidth, while MapReduce is a framework used for processing massive data sets in a distributed fashion over several machines. This paper gives a brief overview of Big Data, Hadoop MapReduce and the Hadoop Distributed File System, along with its architecture.
The primary objective of load balancing is to minimize job execution time and maximize resource utilization. Load balancing algorithms for parallel computing systems must adhere to three inherent policies, viz. an information policy, a transfer policy and a placement policy. To better utilize system resources, this work proposes a load balancing strategy for decentralized systems with an information exchange policy based on the random walk of packets. Information is exchanged via random packets so that each node in the system has up-to-date states of the other nodes.
Floorplanning is a key problem in VLSI physical design. The floorplanning problem can be formulated as packing a given set of 3D rectangular blocks while minimizing suitable cost functions; here, we concentrate on minimizing the total volume of the 3D die. In this paper, we first propose a new topological structure for the floorplanning problem in 3D VLSI physical design, using a weighted directed graph. The main question is whether this structure is effective. To address it, we give the idea of a new algorithm that uses this new representation technique to minimize the volume of the 3D die in the floorplanning problem. It is interesting to see that our proposed structure is also capable of calculating the total volume and positions of the dead spaces, if any exist. Finally, we give experimental results for our new algorithm and conclude the paper.
Scheduling on a computational grid is an NP-hard problem, encouraging a number of scheduling strategies using various parameters and objectives. A computational grid provides the user with a platform to execute compute-intensive jobs which otherwise could not be executed at the user's end. A grid system can be used to its full potential only if the scheduling strategy provides an efficient mapping between the software parallelism available in the application and the hardware parallelism offered by the grid. This work proposes a scheduling strategy that schedules a job on suitable grid resources as per the job's requirements, while considering the communication requirements and preserving the precedence constraints within the job. A simulation study reveals the effectiveness of the model under various conditions.
The internet is an ever-changing and rapidly progressing entity. With each passing day, we come across a new technology or concept. Many of these concepts pass as fads, but a few become the cornerstone of our future technology. Three of the hottest current research topics, which are highly inter-related, are the cyber physical cloud (CPC), the cloud of sensors (CoS) and the internet of things (IoT). All of these have sensors and the cloud as integral parts of their architecture. While the first two topics are directly related to the cloud, the third requires cloud computing in the backend for processing and storing huge amounts of data. In this paper, we study these concepts with the intent of finding their areas of overlap or similarity and their subtle yet important differences.
This paper introduces statistical definitions of an algorithm in the Parallel Random Access Machine (PRAM) model. We present a statistical analysis of 2 × 2 matrix multiplication, with the multiplications and their summation executed on 2n processors. It shows conceptually how communication delay plays an important role in parallel time complexity. We compare communication delay across network topologies such as ring, mesh and hypercube. The processor utilization curves and timeline charts show which network and how many processors are required for the problem.
This paper presents a novel approach to enhancing the performance of medical image processing, primarily using code optimization techniques. The goal is to work on medical imaging, which is both time and life critical. The problem of performance improvement is tackled from multiple directions - architecture, algorithm, compiler and code level - an approach that is unique to this work. Image acceleration techniques are useful for faster diagnosis and treatment of medical issues, and are experimented with in this work. The performance of image processing algorithms is measured with varying parameters such as different operating systems, different processors, different image sizes and various compiler optimization techniques like loop fusion, constant folding, etc. It is found that code optimization and certain compiler optimizations improve performance.
Cloud computing involves the concepts of parallel processing and distributed computing in order to provide shared resources by means of Virtual Machines (VMs) hosted by physical servers. Efficient management of VMs directly influences resource utilization and the QoS delivered by the system. As the cloud setting is dynamic in nature, the number of VMs distributed among the physical servers tends to become uneven over a period of time. Under this circumstance, VMs must be migrated from overloaded servers to underloaded servers to balance the load. In this paper, we present a random graph model of the network of servers in a data center. By initiating random walks and using the heuristics Maximum Correlation Coefficient and Migration Opportunity, we select the migrating set of VMs and the target server, respectively. Simulation results show that the model always finds a target server in minimum time. The graph also maintains a uniform average degree, which shows that the network of physical servers remains load balanced even when the load and the migration opportunity vary with time.
Cloud computing provides its customers the facility to dynamically scale up applications, platforms and hardware infrastructure. But the resources provided by one cloud provider are finite and at some point may violate the SLA (service level agreement). One approach to better serve customers is to scale applications, software platforms and infrastructure across multiple independent clouds, i.e. federated clouds. Federated clouds can share resources with other cloud providers as scale and load increase, paying for the service on a usage basis. Virtual machine allocation is also an important aspect of federated clouds, because multiple clouds exchange VMs (Virtual Machines) with one another and the trading policies of the clouds are not all the same; VM allocation can therefore be optimized to be cost-effective. This paper is a survey of the VM allocation policies available in federated clouds.
The properties of DNA sequences offer an opportunity to develop DNA-specific compression algorithms. A lossless two-phase compression algorithm is presented for DNA sequences. In the first phase a modified version of Run Length Encoding (RLE) is applied, and in the second phase the resultant genetic sequence is compressed using ASCII values. Packing four bases into one eight-bit ASCII code ensures one-fourth compression irrespective of the repeated or non-repeated behavior of the sequence, and the modified RLE technique enhances the compression further. Not only is the compression ratio of the algorithm quite encouraging, but the simplicity of the compression technique makes it all the more interesting.
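The one-fourth guarantee of the ASCII phase follows from the four-letter alphabet: each base needs only 2 bits, so four bases fit in one byte. The sketch below shows this packing step only (the paper's modified RLE phase is not reproduced, and the assumption that the input length is a multiple of 4 is mine, made for simplicity).

```python
CODE = {'A': 0, 'C': 1, 'G': 2, 'T': 3}  # 2 bits per base

def pack_dna(seq):
    """Pack a DNA string into bytes, 4 bases per byte (2 bits each),
    giving exactly one-fourth of the original size.
    Assumes len(seq) is a multiple of 4 and contains only A/C/G/T."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | CODE[base]
        out.append(byte)
    return bytes(out)
```

For instance, "ACGT" becomes the single byte 0b00011011; since every byte decodes to exactly four bases, the step is trivially lossless and its ratio is independent of how repetitive the sequence is.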
Triangulation is the key step for determining the maximum size of cliques, by which the performance of probabilistic inference algorithms is determined. Obtaining the optimal triangulation is NP-complete, but triangulation performance can be improved by removing redundant fill edges from the triangulated graph. MinimalChordal and Recursive-Thinning are two popular methods for removing redundant fill edges. It has been validated that if the triangulated graph is minimal, the number of fill edges removed by them is small; their performance is thus determined by the triangulation ordering. In this paper, we analyze ordering strategies related to their performance and propose two ordering algorithms for improving them. To make them valuable, a better triangulation ordering method similar to MCS-M is desirable, even if it is not minimal.
The proliferation of GPGPUs and other accelerators is making the industry consider accelerator-based systems as a viable option for high-performance, low-power HPC systems. This paper describes a multi-accelerator heterogeneous cluster in which each node has GPGPU and FPGA cards. Extracting the maximum computational power of all the compute elements simultaneously, i.e. multi-core CPU, GPGPU and FPGA, is an important challenge. StarPU is a popular open source runtime that supports heterogeneous architectures. This paper describes the key features of the heterogeneous runtime and how StarPU has been adapted to execute parallel programs that span both GPGPU and FPGA accelerators.
Automated suppression of howling noise in public address systems is of utmost importance, as howling noise is very annoying to the speaker and audience and adversely affects speech intelligibility. Howling is built up by a positive feedback loop set up between the microphone, amplifier, and speaker system. An automated method for detection and suppression of howling noise is proposed using sinusoidal model based analysis/synthesis. The method automatically adapts to the properties of howling noise and has the advantage of suppressing the noise without degrading speech intelligibility. The system has been tested for satisfactory operation using a computer-based setup.
High Speed Computing is a promising technology that meets ever increasing real-time computational demands by leveraging flexibility and parallelism. This paper introduces a reconfigurable fabric named the Reconfigurable High Speed Computing System (RHSCS), which offers a high degree of flexibility and parallelism. RHSCS contains a Field Programmable Gate Array (FPGA) as a Processing Element (PE); thus, RHSCS is made to share the FPGA resources among the tasks within a single application. In this paper an efficient dynamic scheduler is proposed to take full advantage of hardware utilization and to speed up application execution. The proposed scheduler distributes the tasks of an application to the resources of the RHSCS platform based on a cost function called Minimum Laxity First (MLF). Finally, a comparative study has been made between the designed scheduling technique and existing techniques. The proposed RHSCS platform and the scheduler with Minimum Laxity First (MLF) as its cost function enhance the speed of an application by up to 80.30%.
Publish/Subscribe systems establish a connection between subscribers (consumers) and publishers (producers) of events, behaving as a mediator between the two. The core functionality of such a system is to match events with subscriptions and send the events to subscribers whose subscriptions are related to them. Publish/Subscribe systems are used in many application domains, ranging from smart grids to transportation. However, traditional Publish/Subscribe systems are unable to handle emerging IoT (Internet of Things) applications due to their lack of QoS capability. To meet the requirements for QoS in IoT, the message broker is modified to introduce the ability to schedule computational resources. We propose two resource selection techniques that make better use of resources. The performance of the system is measured by taking the number of failures as a parameter. Experimental results show that good resource selection decreases the number of failures by 3%.
A storage system in a data center consists of various components such as Disk Array Enclosures (DAEs), disks, processors, servers, hosts running different applications, and so on. Hard disk and server failures are not frequent but are often very costly, and such failures can have a very adverse effect on the business of an organization. The ability to accurately predict an impending disk or server failure can add essential functionality for designing a reliable, fault tolerant and continuously available storage system. This paper explains a novel approach to predicting hardware failures using a spectrum-kernel Parallel Support Vector Machine (Parallel SVM) method by analyzing the events logged in the system log files. These log files not only record the events processed by the system but also hold messages as the system state changes. A single message in the system log file is insufficient for any prediction, and such a prediction is bound to be less accurate. The approach introduced in this paper uses a sequence or pattern of messages from the system log file, using a sliding window with a size of five messages, to predict the likelihood of a failure. These sliding windows of message sequences act as inputs to the Parallel SVM, which tags each sequence as belonging to a failing or non-failing system. Data mining techniques are used to extract useful information from the raw dataset, and a solution model is developed using the structured dataset and machine learning algorithms. When implemented using actual system logs from a Linux-based storage system, this approach has been shown to predict hardware failures with an accuracy of 90-92 percent.
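The sliding-window preparation step described above can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the windowing and the labeling rule (a window is positive if a failure occurs within a short lead time after it ends) are assumptions for the example; the actual feature encoding fed to the SVM is not specified here.

```python
def label_windows(messages, failure_indices, size=5, lead=3):
    """Slide a window of `size` messages over the log; label a window 1
    if a failure occurs within `lead` messages after the window ends."""
    labeled = []
    for i in range(len(messages) - size + 1):
        end = i + size - 1
        positive = any(end < f <= end + lead for f in failure_indices)
        labeled.append((tuple(messages[i:i + size]), 1 if positive else 0))
    return labeled
```

Each `(sequence, label)` pair would then be encoded and passed to the classifier for training.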
This paper proposes a partially reconfigurable FIR filter design using a systolic Distributed Arithmetic (DA) architecture optimized for FPGAs. To implement a computationally efficient, low-power, high-speed Finite Impulse Response (FIR) filter, a two-dimensional fully pipelined structure is used. To reduce the partial reconfiguration time, a new architecture for the Look-Up Table (LUT) in distributed arithmetic is proposed. The FIR filter is dynamically reconfigured to realize low-pass and high-pass filter characteristics by changing the filter coefficients in the partial reconfiguration module. The design is implemented using the XUP Virtex 5 LX110T FPGA kit. The FIR filter design shows improvement in configuration time and efficiency.
The ability to measure host resources is useful in several domains such as cluster and Grid computing. The Ganglia monitoring system provides the ability to monitor the resource information of hosts present in Grid or cluster computing environments. The objective of this paper is to present an architectural study of the Ganglia monitoring system with a focus on embedding a new custom metric. Moreover, the paper presents a detailed methodology to build a new custom module. The main steps of custom module development (compilation, insertion, and testing of the newly developed custom metric) are presented and demonstrated with a practical implementation of a custom module to collect the value of “Anon Pages” on the host. The presented work should be very useful to researchers who work in the area of resource monitoring in distributed environments.
Cloud Computing provides on-demand network and data access anywhere, anytime, and its pay-per-use model is gaining commercial popularity day by day. Due to the popularity of the cloud and its all-time availability, security is once again a main concern. Clients do not know where their data is stored or in which datacenter. The distributed nature of the cloud is another security concern, and it also gives malicious activities a chance to be carried out very easily. Cloud computing presents an abstraction layer to users for storing their confidential data, hiding the architectural details. Because of this, whenever malicious activity happens in the cloud, it becomes very difficult to trace. This gives rise to a new area of research in the field of digital forensics, with unique challenges and opportunities in the context of the cloud. This paper presents a detailed study of the malicious activities that can be carried out in the cloud, with the help of some case studies, along with a detailed methodology for the proposed architecture.
In this paper, we discuss cluster growth along with an evolution strategy for evolvable hardware. In today's world, the need to optimize circuits under certain constraints is increasing as the number of gates on an IC (Integrated Circuit) increases. Some of these constraints are the feasibility of the circuit, cost, delay, power, area of the chip, etc. For this purpose we need efficient optimization techniques that can be implemented in software to obtain the hardware circuit.
In a large-scale cloud computing environment, cloud data centers and end users are geographically distributed across the globe. The biggest challenge for cloud data centers is how to handle and service the millions of requests arriving very frequently from end users efficiently and correctly. In cloud computing, load balancing is required to distribute the dynamic workload evenly across all the nodes. Load balancing helps to achieve high user satisfaction and a high resource utilization ratio by ensuring an efficient and fair allocation of every computing resource. Proper load balancing aids in minimizing resource consumption, implementing fail-over, enabling scalability, and avoiding bottlenecks and over-provisioning. In this paper, we propose “Central Load Balancer”, a load balancing algorithm to balance the load among virtual machines in a cloud data center. Results show that our algorithm achieves better load balancing in a large-scale cloud computing environment than previous load balancing algorithms.
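The core idea of a centralized balancer, assigning each incoming request to the currently least-loaded VM, can be sketched as below. This is a minimal illustration under assumed names (`pick_vm`, `dispatch`); the paper's actual algorithm may use different state and selection rules.

```python
def pick_vm(vm_loads):
    """Return the id of the currently least-loaded VM (ties broken by id)."""
    return min(sorted(vm_loads), key=lambda v: vm_loads[v])

def dispatch(requests, vm_loads):
    """Assign each (request_id, cost) to the least-loaded VM, updating loads."""
    placement = []
    for req_id, cost in requests:
        vm = pick_vm(vm_loads)
        vm_loads[vm] += cost       # central balancer tracks per-VM load
        placement.append((req_id, vm))
    return placement
```

Because every assignment goes through one central table, the load stays evenly spread as requests arrive.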
Two methods for partial vectorization are implemented in the state-of-the-art optimizing open-source Open64 compiler. The first method vectorizes isomorphic expression trees in a basic block using tree matching. Given that finding isomorphic trees in a basic block is fast, the compile-time overhead of this approach is negligible. The second method computes the dependency DAG of the instructions in the basic block and deploys a variant of the dynamic programming approach to partial vectorization. Due to the cost of dynamic programming, this method is slower than the first, but results show that it is effective in harnessing partial vectorization for complex DAGs. To improve the benefits of partial vectorization, our experience demonstrates the need for careful tuning in the absence of accurate alias data. Unrolling loops also makes partial vectorization more effective. Performance improvements range from 12% to 22% for the proposed approaches when generating SSE2/AVX code. These are also the first reported instances of effective partial vectorization techniques implemented in a full-fledged, robust open-source compiler.
Since the beginning of Grid computing, the scheduling of dependent-task applications has attracted the attention of researchers due to its NP-complete nature. In a Grid environment, scheduling means deciding the assignment of tasks to available resources. Scheduling in a Grid is challenging when the tasks have dependencies and the resources are heterogeneous. The main objective in scheduling dependent tasks is minimizing makespan. Due to the NP-complete nature of the scheduling problem, exact methods cannot generate schedules efficiently; therefore, researchers apply heuristic or random search techniques to get optimal or near-optimal solutions. In this paper, we show how a Genetic Algorithm can be used to solve the dependent-task scheduling problem. We describe how the initial population can be generated using random assignment and a height-based approach, and we present the design of crossover and mutation operators that enable scheduling of dependent-task applications without violating dependency constraints. For the implementation of GA-based scheduling, we explore and analyze the SimGrid and GridSim simulation toolkits. From the results, we found that SimGrid is suitable, as it supports the SimDag API for DAG applications. We found that the GA-based approach can generate a schedule for a dependent-task application in reasonable time while trying to minimize makespan.
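The height-based initialization mentioned above orders tasks so that every task appears after its dependencies, then assigns resources at random. A sketch under stated assumptions (the DAG is given as task-to-dependencies, and "height" is the longest path from any root; the paper's exact encoding may differ):

```python
import random

def height_based_schedule(deps, resources, seed=0):
    """Order tasks by height (dependencies first), assigning each a
    random resource: one valid chromosome for the initial population."""
    rng = random.Random(seed)
    height = {}
    def h(t):
        if t not in height:
            height[t] = 1 + max((h(p) for p in deps[t]), default=0)
        return height[t]
    order = sorted(deps, key=h)          # stable sort keeps a valid topological order
    return [(t, rng.choice(resources)) for t in order]
```

Crossover and mutation operators would then recombine such chromosomes while preserving the dependency-respecting order.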
Cloud computing provides computing resources on demand and is a promising solution for utility computing. The increasing number of cloud service providers with similar functionality poses a selection problem for cloud users. To assist users in selecting the best service provider for their requirements, a supporting solution is necessary. Users may state their QoS expectations, and service providers may express their offers; the experience of existing users can also be beneficial in selecting the best cloud service provider. This paper identifies QoS metrics and defines them in such a way that both users and providers can express their expectations and offers, respectively, in quantified form. A dynamic and flexible framework using the Ranked Voting Method is proposed, which takes the user's requirements as input and provides the best provider as output.
Quantum Computation is slowly evolving as the next big innovation that will change the entire way we perceive the world around us. Even though many scientists are actively researching it, one main hurdle is that a physical quantum computer is not readily available for everyone to work on. The present paper computationally simulates a quantum algorithm using nuclear magnetic resonance on four different kinds of molecules and shows that the spectra obtained are similar to the experimental results. Computational simulations like these would allow researchers to verify their quantum algorithms using NMR without actually having a physical device, hence overcoming the hurdle.
The emergence of Cloud Computing has brought a new dimension to the world of information technology. Even though Cloud Computing provides various benefits like agility, on-demand provisioning of resources, reduced cost, and multi-tenancy, there are risks and flaws associated with it. One key research challenge in Cloud Computing is to ensure continuous reliability and guaranteed availability of the resources it provides, so there is a need for a robust Fault Tolerance (FT) system in Cloud Computing. To better understand FT in Cloud Computing, it is essential to understand the different types of faults. In this paper, we highlight the basic concepts of fault tolerance by examining the different FT policies, Reactive FT and Proactive FT, and the associated FT techniques used on different types of faults. A study of various fault tolerant methods, algorithms, and frameworks developed and implemented by research experts in this field has been carried out. This is an area where a lot of research is happening, and these studies will guide us in building a robust FT technique in the Cloud.
Public-key infrastructure based cryptographic algorithms are usually considered slower than their corresponding symmetric-key algorithms due to their root in modular arithmetic. In the RSA public-key security algorithm, encryption and decryption are based entirely on modular exponentiation and modular reduction, which are performed on very large integers, typically 1024 bits. For this reason the sequential implementation of RSA becomes compute-intensive and takes a lot of time and energy to execute. Moreover, it is very difficult to perform intense modular computations on very large integers because of the size limitation of the basic data types available with the GCC infrastructure. In this paper, we look into the possibility of improving the performance of the proposed parallel RSA algorithm by using two techniques simultaneously: first, implementing modular calculations on large integers using the GMP library, and second, parallelizing the algorithm using OpenMP on the GCC infrastructure. We also analyze the performance gained by comparing the sequential version with the parallel versions of RSA running on the GCC infrastructure.
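The modular exponentiation at the heart of RSA can be illustrated with the standard right-to-left square-and-multiply method (shown here in Python, whose arbitrary-precision integers play the role GMP plays in C; this is a textbook sketch, not the paper's parallel implementation):

```python
def mod_exp(base, exp, mod):
    """Right-to-left square-and-multiply modular exponentiation.
    Squares base each round; multiplies into the result when the
    current exponent bit is 1, reducing mod `mod` throughout."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:
            result = (result * base) % mod
        base = (base * base) % mod
        exp >>= 1
    return result
```

With the classic toy RSA parameters p = 61, q = 53 (n = 3233, e = 17, d = 2753), encrypting a message and decrypting it round-trips to the original value.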
Multi-agent systems are an emerging research field in the modern era, and agent-based modeling is very well suited to distributed systems. In this paper we explore the distributed nature of the railway system and model it using multi-agents. We propose this model to make the railway system more efficient, reliable and trustworthy. We consider the problem of trains passing one another while moving in the same direction and propose a multi-agent based solution in which immediate decisions are taken by negotiation between train agents, reducing the overall delay of the system to an acceptable limit.
Mobile Ad hoc networks (MANETs) have significantly enhanced wireless networks, as they eliminate the need for fixed infrastructure and are easily deployable. Apart from their application for communication purposes, they are increasingly being used for expanding the computing capabilities of existing cellular mobile systems and for the implementation of mobile computing grids. Therefore, a fault tolerance technique is crucial in order to effectively utilize the computing potential of the network. Rollback recovery has been widely used to achieve fault tolerance in distributed networks; yet its application is not trivial in a MANET due to the limited availability of stable storage, node mobility and frequent network partitioning. This paper presents a rollback recovery protocol for MANETs which addresses these challenges by using opportunistic routing. Since all nodes may not have enough stable storage, the nodes with sufficient availability of stable storage are distinguished as Checkpoint Storage Nodes (CSNs). Opportunistic contacts between mobile nodes are used, firstly, for locating Checkpoint Storage Nodes in the network and, subsequently, for retrieving the last saved checkpoint of a failed node from a CSN at the time of recovery. We calculate the recoverability, i.e., the probability that a process can recover in a given time period.
Cloud computing is one of the emerging technologies whose ease of access and diverse applicability attract customers, posing many challenging issues that need to be overcome in this field. Since the evolution of cloud computing, load balancing, power constraints, program offloading, cost modelling and security issues have been popular research topics in this field. Deploying a real cloud for testing or for commercial use is very costly. Cloud simulators help to model various kinds of cloud applications by creating data centres, virtual machines and many utilities that can be added to configure them, thus making analysis very easy. To date, many cloud simulators have been proposed and are available to use. These simulators are built for specific purposes and have varying features. In this paper we present a comprehensive study of major cloud simulators, highlighting their important features and analysing their pros and cons. We compare the simulators by considering their important attributes and finally conclude with our future direction.
Hashing algorithms are used extensively in information security and digital forensics applications. This paper presents an efficient parallel algorithm for hash computation. It is a modification of the SHA-1 algorithm for faster parallel implementation in applications such as digital signatures and data preservation in digital forensics. The algorithm implements a recursive hash to break the chain dependencies of the standard hash function. We discuss the theoretical foundation for the work, including the collision probability and the performance implications. The algorithm is implemented using the OpenMP API, and experiments were performed using machines with multicore processors. The results show a performance gain of more than a factor of 3 when running on the 8-core configuration of the machine.
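The chain-breaking idea, hashing fixed-size blocks independently and then hashing the concatenated per-block digests, can be sketched as below. This is a generic Merkle-style illustration in Python using standard SHA-1, not the paper's modified SHA-1 or its OpenMP code; the per-block step has no chain dependency, so in a real implementation the blocks could be hashed by different threads.

```python
import hashlib

def block_hash(data, block_size=64):
    """Hash fixed-size blocks independently (parallelizable), then hash
    the concatenation of the per-block digests into one final digest."""
    n = max(len(data), 1)  # treat empty input as a single empty block
    digests = [hashlib.sha1(data[i:i + block_size]).digest()
               for i in range(0, n, block_size)]
    return hashlib.sha1(b"".join(digests)).hexdigest()
```

The result differs from a plain SHA-1 of the whole input, which is why the collision probability of the composed scheme needs its own analysis, as the paper discusses.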
Cloud computing is an area that is rapidly gaining popularity in both academia and industry. Cloud-Analyst is a useful tool to model and analyze cloud computing environments and applications before the actual deployment of cloud products. A service broker controls the traffic routing between user bases and data centers based on different service broker policies. The service proximity based routing policy selects the closest data center to route the user request. If there is more than one data center within the same region, one is selected randomly without considering workload, cost, processing time or other parameters. A randomly selected data center is prone to give unsatisfactory results in terms of response time, resource utilization, cost or other parameters. In this paper we propose a priority based Round-Robin service broker algorithm which distributes requests based on the priority of the data centers and gives better performance than the conventional random selection algorithm.
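A priority-based round-robin selection can be sketched as a weighted cycle: each data center gets a number of consecutive turns proportional to its priority. This is a minimal illustration of the idea under assumed names, not the paper's broker implementation.

```python
class PriorityRoundRobin:
    """Cycle through data centers, giving each a number of consecutive
    turns equal to its priority weight before moving to the next."""
    def __init__(self, priorities):
        # e.g. {"DC1": 2, "DC2": 1} -> schedule [DC1, DC1, DC2], repeated
        self.schedule = [dc for dc, p in priorities.items() for _ in range(p)]
        self.i = 0

    def next_dc(self):
        dc = self.schedule[self.i % len(self.schedule)]
        self.i += 1
        return dc
```

Higher-priority data centers therefore receive proportionally more requests, instead of the uniform random pick of the proximity policy.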
A shopping agent is a kind of Web application software that, when queried by a customer, provides him/her with a consolidated list of information about all the retail products relating to the query from various e-commerce sites and resources. This helps customers decide on the best site that provides the nearest, cheapest and most reliable product they desire to buy. This paper aims to develop a distributed crawler to help online shoppers compare the prices of requested products from different vendors and get the best deal in one place. Crawling usually consumes a large set of computer resources to process the vast amount of data on large e-commerce servers in a real-world scenario, so the alternative is to use the MapReduce paradigm to process large amounts of data by forming a Hadoop cluster of cheap commodity hardware. Therefore, this paper describes the implementation of a shopping agent on a distributed web crawler using the MapReduce paradigm to crawl web pages.
Automatic genre identification is a task which plays a crucial role in many domains such as automatic storytellers, recommender systems and web page topic detectors. Genre classification is especially interesting in the domain of narrative content, which is characterized by a large number of ambiguous and overlapping categories. The rise in popularity of social tagging systems forms a rich source of input information which could be harnessed for this task. In this paper we investigate two different folksonomy information sources for the movie domain, namely keywords and tags, the first of which is user annotated and expert monitored whereas the latter is non-monitored. A comparison is performed to assess the efficacy of both sources in solving this multi-label classification problem, and it is found that, in spite of being expert monitored and better structured, keywords are worse predictors of the genres of movies than tags in most cases.
Understanding workload characteristics is crucial for optimizing and improving the processing of large-scale data produced by different industries. In this paper, we analyse a large-scale production workload trace (version 2) [1] which was recently made publicly available by Google. We discuss a statistical summary of the data. Further, we perform k-means clustering to identify common groups of jobs. Cluster analysis provides insight into the data by dividing the objects into groups (clusters) such that objects in a cluster are more similar to each other than to objects in other clusters. This work presents a simple technique for constructing workload characteristics and also provides production insights into understanding workload performance in cluster machines.
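The k-means grouping of jobs can be illustrated on a single metric (e.g. CPU demand per job). This is a minimal one-dimensional k-means sketch for illustration, not the clustering pipeline used on the Google trace; the simple stride-based seeding is an assumption.

```python
def kmeans_1d(values, k, iters=20):
    """Minimal 1-D k-means: seed centroids by striding over sorted values,
    then alternate assignment and centroid-update steps. Returns sorted
    final centroids."""
    centroids = sorted(values)[::max(len(values) // k, 1)][:k]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for v in values:
            j = min(range(len(centroids)), key=lambda j: abs(v - centroids[j]))
            clusters[j].append(v)
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return sorted(centroids)
```

On a workload with two clearly separated job sizes, the centroids land on the two group means.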
In this age of growing population, critical issues in the healthcare system leave nations struggling against inefficient healthcare; delays and simple errors may cause fatal cases. With the advancement of the electronics industry, microprocessor obsolescence is a major concern. Adaptive VLSI design is an emerging trend, moving from bespoke custom microprocessor based systems to soft-core processors embedded within FPGAs/CPLDs due to advantages like low NRE cost, low time to market, less hardware, excellent design flexibility and reprogrammability, low power consumption and high-speed performance. In the electronics industry there are many areas, such as wireless communications and robotics, where adaptive design has been implemented. In this respect, an attempt has been made in this article to design and implement an advanced healthcare system using Zigbee enabled RFID technology in accordance with an advanced VLSI design of the processor. The system is very fast and cost effective, achieving almost 0% identification error. The Xilinx ISE 14.3 simulator has been used to simulate the processor module, the hardware of the processor has been implemented up to the RTL schematic level, and to substantiate our design we have used a high-performance Kintex-7 FPGA board.
This paper presents an innovative and efficient way of E-learning, E-experimenting and E-assessment of rotating machinery faults such as sympathetic vibration. When two (or more) machines are installed close by, vibrations from the operating machine get transmitted to the other machines, even when they are in standby mode. Such vibrations are known as sympathetic vibrations; this is a typical scenario in a manufacturing industry. A virtual experiment is developed to understand, feel and measure such vibrations. Sympathetic vibrations are transmitted between machines through the common base, which forms the vibration transmission path. The student can also experiment by eliminating the vibration transmission path and observing the effect on the sympathetic vibration. The experiment can be done online through the World Wide Web. The basic theory, experimental procedure and animation of the experiment are provided for conducting the experiment.
Searching for an efficient research supervisor/guide is nowadays a very tough task for a research scholar. Searching for them on Web 2.0 is time consuming and often leads to incomplete knowledge gain. Web 3.0 [1]-supported ontologies help research scholars search for a guide in their area of interest in an efficient manner. This is possible by merging individual ontologies such as Friend-Of-A-Friend [3] and Semantic Web for Research Communities [4], so that the resulting ontology is semantically rich in terms of area-based classification of guides and linked data [2] among peer guides from geographically dispersed locations, which can thereby be used to gain crucial knowledge about the guides and to make better decisions in reaching them.
In this paper, we apply social network analytic methods to unveil the structural dynamics of Ubuntu, a popular open source, goal-oriented IRC community. The primary objective is to track the development of this ever growing community over time through a social network lens and examine the dynamically changing participation patterns of people. Specifically, our research seeks to answer the following question: how can communication dynamics help us delineate important substructures in the IRC network? This gives an insight into how open source learning communities function internally and what drives the exhibited IRC behavior. By applying a consistent set of social network metrics, we discern factors that affect people's embeddedness in the overall IRC network, their structural influence and their importance as discussion initiators or responders. Deciphering these informal connections is crucial for developing novel strategies to improve communication and foster collaboration between people conversing in the IRC channel, thereby stimulating knowledge flow in the network. Our approach reveals a novel network skeleton that more closely resembles the behavior of participants interacting online. We highlight bottlenecks to effective knowledge dissemination in the IRC, so that focused attention can be provided to communities with peculiar behavioral patterns.
The success rate of e-Government is very low in spite of the latest technology and huge budgets. There are various factors behind this low success rate, and implementation is one of them, especially in least developed countries, where it is one of the big problems. Vigorous research has been done to identify the root cause of this problem. There are various reasons, but the core challenges, such as system design and awareness, are identified in this paper. Nepal, being a least developed country, is considered as a case study. A few major challenges are identified by conducting various research methodologies, and mathematical modeling using fuzzy logic is used to verify these challenges. At the end, this paper also provides the outcomes of the modeling.
The growth in spatio-temporal applications calls for new indexing structures. A typical spatio-temporal application is one that tracks the behavior of moving objects through location-aware devices (e.g., GPS). Over the last decade, many spatio-temporal access methods have been developed. The goal of this paper is to outline the advances in indexing methods in previous years and to review them all as a comparative study.
It is no longer a secret that IT is revolutionising the world at a pace that has never been experienced before. It is also not surprising that the educational system has been amongst the first adopters of this new technology. The way learning takes place today is no longer confined to within the four walls of a classroom: the learner expects a more exciting and effective learning experience whereby he can tap into the full facilities provided by the computer world. New technologies integrated into the education system enhance access to knowledge and improve the efficiency of knowledge transfer. Agent-based tutoring systems, unlike traditional tutoring systems, make use of agents that work continuously and autonomously. The agents are able to learn from experience, and they can also communicate and cooperate with other agents to complete their tasks more efficiently. This ensures that the learning process is more personalised, focused and captivating. In this paper, agents are used to create a learning profile for each student so that appropriate questions are provided to them for answering. This avoids wasting the time of more able students and reduces the frustration of weaker students.
With the advent of the World Wide Web, link context has been widely used for finding the theme of a target web page. Many approaches have taken advantage of link context to get the precise context of a link, but these approaches were not very efficient. Link context has been used in many areas such as web page classification, search engines and topical crawlers. In this paper we derive the link context using an LALR parser (the Bison parser). For this, different web pages were collected and concepts were found with the help of a tag tree. Then, using the Bison parser, the link context was derived. We also compare the technique with the anchor text based method using the Jaccard coefficient.
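The Jaccard coefficient used in the comparison above measures the overlap of two token sets as |A ∩ B| / |A ∪ B|. A minimal sketch (tokenization by whitespace is an assumption for the example):

```python
def jaccard(a, b):
    """Jaccard coefficient of two token collections: |A∩B| / |A∪B|."""
    sa, sb = set(a), set(b)
    union = sa | sb
    return len(sa & sb) / len(union) if union else 0.0
```

For instance, the anchor text "the quick fox" and a derived link context "the lazy fox" share 2 of 4 distinct tokens, giving a coefficient of 0.5.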
In computer vision applications it is necessary to extract regions of interest in order to reduce the search space and to improve image content identification. Human-oriented regions of interest can be extracted by collecting feedback from the user. The feedback is usually provided by the user assigning different ranks to the identified regions in the image; this rank is then used to adapt the identification process. Nowadays eye tracking technology is widely used in different applications; one suggested application uses the data collected from the eye-tracking device, which represents the user's gaze points, to extract the regions of interest. In this paper we introduce a new agglomerative clustering algorithm which uses a blob extraction technique and statistical measures to cluster the gaze points obtained from the eye tracker. The algorithm is fully automatic, meaning it does not need any human intervention to specify the stopping criterion. In the suggested algorithm the points are replaced with small regions (blobs); these blobs are then grouped together to form a cloud, from which the interesting regions are constructed.
Image fusion based on the Fourier and wavelet transform methods retains rich multispectral detail but less spatial detail from source images. Wavelets perform well only at linear features, not at non-linear discontinuities, because they do not use the geometric properties of structures. Curvelet transforms overcome such difficulties in feature representation. In this paper, we define a novel fusion rule via high-pass modulation using the Local Magnitude Ratio (LMR) in the Fast Discrete Curvelet Transform (FDCT) domain. For the experimental study of this method, an Indian Remote Sensing (IRS) Resourcesat-1 LISS IV satellite sensor image with a spatial resolution of 5.8 m is used as the low-resolution (LR) multispectral image, and a Cartosat-1 Panchromatic (Pan) image with a spatial resolution of 2.5 m is used as the high-resolution (HR) Pan image. This fusion rule generates an HR multispectral image at 2.5 m spatial resolution. The method is quantitatively compared with the Wavelet, Principal Component Analysis (PCA), High-Pass Filtering (HPF), Modified Intensity-Hue-Saturation (M.IHS) and Gram-Schmidt fusion methods. The proposed method spatially outperforms the other methods and retains rich multispectral detail.
Orthogonal Frequency Division Multiplexing (OFDM) is recognized as a high-data-rate transmission technique. Further, application of space-time block coding (STBC) to the OFDM system may help in combating the severe effects of fading. In this paper, a space-time block encoded time-frequency training OFDM (TFT-OFDM) system is proposed. The TFT-OFDM signal is trained in both the time and frequency domains by appending the training sequence and by inserting grouped pilots, respectively. This signal structure provides better spectral efficiency and reliability. The performance of the proposed system is analyzed over a fast fading channel and compared with various STBC-based OFDM transmission schemes: STBC-based cyclic prefix OFDM (CP-OFDM), STBC-based zero padding OFDM (ZP-OFDM), and STBC-based time domain synchronous OFDM (TDS-OFDM). Simulation results indicate that STBC-based TFT-OFDM is better than the other STBC-based OFDM transmission techniques in BER performance.
In medical image processing, low-contrast image analysis is a challenging problem. Low-contrast digital images reduce the observer's ability to analyze the image. Histogram-based techniques are used to enhance the contrast of all types of medical images: for MIAS mammogram images, these methods are used to find the exact locations of cancerous regions, and for low-dose CT images, they are used to intensify tiny anatomies like vessels, lung nodules, airways and pulmonary fissures. The most widely used method for contrast enhancement is Histogram Equalization (HE). Here we propose a new method named Modified Histogram Based Contrast Enhancement using Homomorphic Filtering (MH-FIL) for medical images. This method uses two-step processing: in the first step the global contrast of the image is enhanced using histogram modification followed by histogram equalization, and in the second step homomorphic filtering is used for image sharpening, followed by image normalization. To evaluate the effectiveness of our method we choose two widely used metrics, Absolute Mean Brightness Error (AMBE) and entropy. Based on the results of these two metrics, the algorithm proves to be a flexible and effective way to enhance medical images and can be used as a pre-processing step for medical image understanding and analysis.
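The histogram equalization step at the core of such methods can be sketched as follows. This is the textbook global HE mapping on a flat list of gray levels, not the paper's modified variant, and the sample pixel values are hypothetical:

```python
def equalize(pixels, levels=256):
    """Global histogram equalization for a flat list of gray levels."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function of the gray levels.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    # Classic HE mapping: stretch the CDF over the full intensity range.
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for c in cdf]
    return [lut[p] for p in pixels]

# A low-contrast strip of pixels occupying only levels 100-105.
low_contrast = [100, 100, 101, 102, 103, 103, 104, 105]
enhanced = equalize(low_contrast)
```

After equalization the same pixels span the full 0-255 range, which is exactly the global contrast stretch the first processing step relies on.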
In this paper we propose a new statistical multivariate method for tracking an object in a video. The method is based on the Hotelling T² test, which is designed to provide a global significance test for the difference between two regions or two groups with simultaneously measured multiple dependent or independent variables. An object to be tracked can be found by comparing its multivariate mean in the successive frames of the video; the T² value measures the difference between the two mean vectors. In this approach the object window containing the matrix of intensity values is transformed into a set of feature vectors. These feature vectors are compared using the multivariate T² test in the successive frame for significant matching of the object in its nearest locality. It is observed that the higher the T² value, the greater the chance of a mismatch, and the lower the T² value, the greater the chance of matching the multiple attributes. Simulation results show that the proposed method is capable of accurately detecting a non-rigid moving object with both stationary and non-stationary cameras in noisy and occluded environments.
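The two-sample Hotelling T² statistic itself can be sketched for two-variable feature vectors (the 2-D case keeps the pooled covariance inversion explicit; the sample groups are hypothetical, not the paper's image features):

```python
def hotelling_t2(x, y):
    """Two-sample Hotelling T^2 for 2-D feature vectors:
    T^2 = (n*m)/(n+m) * d' * S_pooled^{-1} * d, d = mean difference."""
    n, m = len(x), len(y)
    mean = lambda g, k: sum(v[k] for v in g) / len(g)
    mx, my = [mean(x, 0), mean(x, 1)], [mean(y, 0), mean(y, 1)]
    d = [mx[0] - my[0], mx[1] - my[1]]

    def scatter(g, mu):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for v in g:
            e = [v[0] - mu[0], v[1] - mu[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += e[i] * e[j]
        return s

    sx, sy = scatter(x, mx), scatter(y, my)
    # Pooled covariance matrix and its explicit 2x2 inverse.
    sp = [[(sx[i][j] + sy[i][j]) / (n + m - 2) for j in range(2)]
          for i in range(2)]
    det = sp[0][0] * sp[1][1] - sp[0][1] * sp[1][0]
    inv = [[sp[1][1] / det, -sp[0][1] / det],
           [-sp[1][0] / det, sp[0][0] / det]]
    q = sum(d[i] * inv[i][j] * d[j] for i in range(2) for j in range(2))
    return n * m / (n + m) * q

same = hotelling_t2([(1, 2), (2, 3), (1, 3)], [(1, 2), (2, 3), (2, 2)])
far = hotelling_t2([(1, 2), (2, 3), (1, 3)], [(9, 9), (10, 11), (9, 10)])
```

As the abstract notes, a small T² (the `same` pair) suggests a match, while a large T² (the `far` pair) suggests a mismatch.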
This paper proposes complex wavelet-based moving object segmentation using an approximate-median-filter-based method. The proposed method is well able to deal with drawbacks such as ghosts, shadows and noise present in other spatial domain methods available in the literature. The performance of the proposed method is evaluated and compared with other standard spatial domain methods. The performance measures used for comparison include RFAM (relative foreground area measure), MP (misclassification penalty), RPM (relative position based measure) and NCC (normalized cross correlation), and the various methods are tested on the standard PETS dataset. Based on this performance analysis, the proposed method in the complex wavelet domain performs better than the other methods presented in the paper.
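The approximate median filter underlying such background models can be sketched in a few lines (illustrative only; the frame values are hypothetical and the paper applies this in the complex wavelet domain): each background pixel is nudged one step toward the incoming frame, so it converges to the running median and a briefly passing object leaves it largely undisturbed.

```python
def approximate_median_update(background, frame, step=1):
    """Nudge each background pixel one step toward the current frame;
    the background converges to the running median of the sequence."""
    return [b + step if f > b else b - step if f < b else b
            for b, f in zip(background, frame)]

bg = [0, 0, 0, 0]
# A moving object (value 100) briefly crosses pixel 2.
for frame in (([10, 10, 10, 10],) * 30 +
              ([10, 10, 100, 10],) * 3 +
              ([10, 10, 10, 10],) * 30):
    bg = approximate_median_update(bg, frame)

# Foreground mask for an object frame against the settled background.
foreground = [abs(f - b) > 20 for f, b in zip([10, 10, 100, 10], bg)]
```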
This paper develops a technique that efficiently discovers and analyzes the relationships between objects in captured video. The proposed framework detects and tracks whole objects to extract temporal information from frame to frame, which helps to find relations between objects by matching objects across different frames. We use different techniques to implement these methods, which were evaluated in order to assess how well they can detect moving regions, track objects and find relations. The final result of the algorithm is analyzed by counting the false positive and false negative detections of objects that are related to each other in different frames.
This paper proposes a hybrid watermark embedding and extraction technique. SVD and DWT methods are used for watermark embedding because the DWT is flexible and provides a wide range of functionality for still image processing. Further, significant image attacks are carried out on the watermarked image, and the watermark is extracted with the help of the proposed watermark extraction algorithm based on a Back Propagation Neural Network (BPNN). In the current era of online business, such a technique can play a significant role in identifying the copyright of a product even when numerous image attacks have been applied to the watermarked image. It can save the owner of the product from heavy loss and restrict further theft of the products. The proposed watermark embedding and extraction algorithms are tested in MATLAB 8.2 with parameters such as PSNR and MSE. The results show that after numerous image attacks such as salt-and-pepper noise, mean and median filtering, shearing, noise, cropping and rotation, the watermark remains identifiable.
A new approach towards time-frequency localization is proposed in this paper. The scheme is based on a local variance factor. The framework of the approach is demonstrated mathematically, and the consistency of the approach and the resulting methodology are empirically verified.
The advancement of computing technology over the years has provided assistance to drivers, mainly in the form of intelligent vehicle systems. Driver fatigue is a significant factor in a large number of vehicle accidents; thus, driver drowsiness detection has been considered a major potential area for preventing a large number of sleep-induced road accidents. This paper proposes a vision-based intelligent algorithm to detect driver drowsiness. Previous approaches are generally based on blink rate, eye closure, yawning, eyebrow shape and other hand-engineered facial features. The proposed algorithm makes use of features learnt by a convolutional neural network so as to explicitly capture various latent facial features and their complex non-linear interactions. A softmax layer is used to classify the driver as drowsy or non-drowsy. The system can hence warn the driver of drowsiness or inattention to prevent traffic accidents. We present both qualitative and quantitative results to substantiate the claims made in the paper.
In current research on high-efficiency video coding (HEVC), motion vector resolution is always fixed to 1/4 pixel for the entire video sequence. Inter-coding with a fixed motion vector resolution can decrease the coding efficiency because the statistical properties of the local image are not considered. In this paper, we propose an adaptive decision scheme for motion vector resolution to improve the coding efficiency. The proposed scheme capitalizes on the tendency for a high-pel-precision level to be beneficial in terms of coding efficiency as the coding unit (CU) depth decreases. Also, we determined the strength with a rate-distortion (RD) cost and selected a predefined threshold set per slice level. Simulation results with respect to HM7.0 show that the proposed scheme provides a coding gain of 2.4% for a low-delay structure. Moreover, it was found that the average encoding time is reduced by 5%. The proposed scheme can also improve the coding efficiency at a slightly increased encoding time compared to conventional methods.
This paper proposes a new algorithm for compression of color images using the RGB and YCbCr color space models. The scheme is applied using a biorthogonal wavelet filter on standard images such as Barbara, Pepper, Lena and Zedla. The comparison is done on the basis of quality estimation parameters such as energy retained and peak signal-to-noise ratio. The values are calculated, and it is found that good compression quality is obtained by using the YCbCr color model.
Reversible data hiding (RDH) is the latest research area in the field of data hiding. In RDH, the data to be hidden is embedded in the cover image, and at the extraction side both the hidden secret and the cover image are reconstructed without any distortion. This kind of reversibility is required in areas such as medical and military applications. This paper covers many RDH techniques, including difference expansion, histogram shifting and prediction error expansion. The main focus of this paper is on histogram shifting techniques.
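Difference expansion, one of the RDH techniques surveyed above, can be sketched for a single pixel pair following Tian's classic scheme (a minimal sketch; the handling of non-expandable pairs and pixel-range overflow is omitted, and the sample pixel values are hypothetical):

```python
def de_embed(x, y, bit):
    """Difference-expansion embedding: hide one bit in a pixel pair by
    expanding the difference h -> 2h + bit around the integer mean l."""
    l = (x + y) // 2          # integer average (invariant under embedding)
    h = x - y                 # difference
    h2 = 2 * h + bit          # expanded difference carrying the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the hidden bit and the original pixel pair exactly."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit, h = h2 & 1, h2 >> 1  # floor shift: also correct for negatives
    return bit, (l + (h + 1) // 2, l - h // 2)

stego = de_embed(206, 201, 1)
bit, original = de_extract(*stego)
```

Both the hidden bit and the exact cover pixels come back at the extraction side, which is the reversibility the abstract describes.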
In lip reading, feature selection plays a crucial role. The goal of this work is to compare common feature extraction modules. The proposed two-stage feature extraction technique is highly discriminative, precise and computationally efficient. We use the Discrete Wavelet Transform (DWT) to decorrelate spectral information and extract only the salient visual speech information from the lip region. In the second stage, Locality Sensitive Discriminant Analysis (LSDA) is used to further reduce the feature dimension while preserving the required discriminative ability. A competent feature extraction module yields a novel automatic lip reading system. We compare the performance of the classical Naive Bayes classifier with the popular SVM classifier. The CUAVE database is used for experimentation and performance comparison. Experimental results show that DWT+LSDA feature extraction is better than DWT with PCA or LDA, and the performance of the Naive Bayes classifier is significantly improved with DWT+LSDA.
Power is a critical factor for the operation of handheld devices. Video applications are among the most widely used functions of handheld devices and are a prime factor in deciding the usability of a product. These video applications consist of complex algorithmic operations which depend on the codec used for the video. Because these codecs differ in complexity, their power consumption also differs. The work presented here analyzes the power consumption of different video codecs for the same video on an Android device. It gives a better view of power consumption for a device running on battery, as the power model used for calculation computes the power consumed in mAh. The results show that the video codecs in descending order of power consumption are DivX, MPEG-4, H.264 and Xvid.
Watermarking is a technique for resolving copyright issues while keeping a steady check on imperceptibility and robustness, which are its main objectives. In order to accomplish these objectives, a hybrid transform is adopted in this paper. The idea behind using a hybrid transform is that the cover image is modified in its singular values rather than directly in the DWT subbands, making the watermark less vulnerable to a wide variety of attacks. Experimental results are provided to support the study.
Machine-printed and handwritten texts appear intermixed in the ICR cells of a variety of documents. Recognition techniques for machine-printed and handwritten text in these document images are significantly different. It is necessary to separate these two types of text and feed them to the respective engines - OCR (Optical Character Recognition) and ICR (Intelligent Character Recognition) - to achieve optimal performance. This paper addresses the problem of classifying machine-printed and handwritten text in acquired document images. Document processors can increase their productivity by classifying handwritten and printed characters inside the ICR cells and feeding their images to the appropriate OCR or ICR engine for better accuracy. The algorithm is tested on a variety of forms, and the recognition rate is calculated to be over 91%.
This paper presents a robust method for reconstruction of a surface in terms of its height values. The height values are calculated from the gradient vector field in the image data. These gradient values are used to formulate a minimization problem according to the variational principle. From this minimization problem, the Euler equation is derived, which is a Poisson equation whose right-hand side takes discrete values at the grid points. Thereafter, the discrete Fourier sine transform is applied to solve this Poisson equation, assuming the image intensities at the boundaries. The relative depth values can be obtained at the corresponding grid points and subsequently used for the reconstruction of the surface.
One of the most severe diseases in the field of medical science is brain tumor. A proper diagnosis is required in the early phase of tumor growth. In the past, various methods have been applied to brain MR (Magnetic Resonance) imaging to identify the abnormal region within the overall volume of the brain. The literature shows that various bi-clustering algorithms cluster the region based on a predefined threshold value, so the resulting cluster depends on that specific threshold. In this paper a new bi-clustering algorithm is proposed to cluster the maximum abnormality area from the brain MR image without any predefined threshold. The algorithm is based on the closely link associated pixel (CLAP) mechanism for tumor segmentation.
Cheque Truncation System (CTS) is an image-based cheque clearing system that speeds up the process of clearing cheques. Financial frauds are being carried out by tampering with the content of the cheque image; therefore, there is a need to detect this kind of tampering. This paper proposes a method to detect whether a cheque image has been tampered with. A difference expansion based watermarking technique is applied for this purpose. Experimental results demonstrate that the proposed method successfully distinguishes between genuine and tampered cheque images.
A technique for the encryption of images is proposed using random phase masks and the fractional Fourier transform. The method uses four random phase masks and two fractional orders that act as the encryption key. The encryption scheme transmits the data to the authorised user while maintaining its integrity and confidentiality. Numerical simulations have been carried out to validate the algorithm, and its Mean Square Error (MSE) is calculated. Furthermore, an image is divided into four sections; a different algorithm is applied to each section, their encryption and decryption times are studied, and their MSEs are calculated and compared to find the most optimal algorithm.
In this paper, a new algorithm for surface reconstruction from arbitrary perspective images is presented. An optimization formulation for this type of reconstruction problem, based on a Non-Uniform Rational B-Spline (NURBS) surface model, is adopted. It converts reconstruction of a 3D surface into reconstruction of the control points and weight vectors of a NURBS representation of the surface. The perspective invariance property of NURBS surfaces is used to formulate the 3D surface reconstruction problem as a nonlinear optimization problem. The fitting is obtained by solving a quadratic programming problem to find the weight vectors of the NURBS surface and then solving a system of linear equations to find the control points. A comparison study is presented in terms of various types of errors between the proposed approach and a triangulation-based approach in which point-to-point correspondence is required.
Palm leaf manuscripts are one of the earliest forms of written media and have enlightened humanity on subjects such as medicine, astronomy, mathematics and astrology. Many palm leaf manuscripts are approaching the end of their natural lifetime and are undergoing rapid degradation. The primary objective of image processing on such degraded palm leaf manuscripts is to retrieve and preserve the historical knowledge they contain. The main objective of this paper is the accurate extraction of foreground information from palm leaf images. We apply the concept of clustering to palm leaf image binarization with three-dimensional features. To demonstrate the usefulness of the proposed method, a set of ground truths corresponding to ten palm leaf images is generated, setting a benchmark for the practical effectiveness of the proposed and existing techniques. The proposed clustering-based method is observed to achieve higher binarization accuracy on palm leaf manuscripts than thresholding-based approaches.
In this paper, trace transform based affine invariant features are applied to signature verification. The trace and diametric functionals are suitably chosen to derive a set of circus functions from each signature image. The affine relationships of intra-class and inter-class circus functions are converted to a simple scale-and-shift correspondence through normalization. The normalized associated circus functions are used as the features for signature verification. The similarity measures for same-writer and different-writer pairs are used in deciding the threshold value. The proposed system is found to be effective for signature verification over a large unconstrained signature database.
Fingerprint recognition is one of the most popular biometric identification methods. However, most fingerprint recognition algorithms in the literature have the problem of creating false minutiae points or failing to reliably identify valid minutiae points because of low image quality. This paper proposes a robust minutiae-based approach to fingerprint recognition. The image is first enhanced using a multiple thresholding method. Then, after binarization, minutiae points are extracted using a 3×3 window as described in the paper. We propose a simple strategy to obtain valid minutiae points: for each point under test, the strategy examines the four neighboring points to determine whether the current point is a valid bifurcation point. The proposed strategy is tested on a number of images from the publicly available FVC2006 database. Results indicate that the proposed strategy filters out false minutiae points reliably.
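Minutiae extraction from a 3×3 window on a binarized skeleton is commonly done with the crossing number; the paper's own four-neighbour validation strategy differs, so the sketch below shows only the standard crossing-number test (sample windows hypothetical):

```python
def crossing_number(window):
    """Crossing number of the centre pixel of a 3x3 binary window:
    half the count of transitions around the 8-neighbour ring.
    CN == 1: ridge ending, CN == 3: bifurcation."""
    # 8-neighbours in clockwise order starting at top-left.
    ring = [window[0][0], window[0][1], window[0][2], window[1][2],
            window[2][2], window[2][1], window[2][0], window[1][0]]
    return sum(abs(ring[i] - ring[(i + 1) % 8]) for i in range(8)) // 2

ending = [[0, 0, 0],
          [0, 1, 1],
          [0, 0, 0]]
bifurcation = [[1, 0, 1],
               [0, 1, 0],
               [0, 0, 1]]
```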
The aim of this paper is to describe an algorithm for recognizing Assamese handwritten numerals using mathematical morphology. The digits are classified into two groups: one group contains digits with one or more blobs and/or stems in their structure, while the other group does not contain any blobs. The number of blobs is determined with the help of a morphological boundary finding method that treats a blob as a hole. We also use the morphological concept of connected components to recognize digits without blobs: such digits are extended to blobs using the connected component approach. For digits with blobs and stems, the number of stems must also be recognized; the present study shows that a stem need not be exactly vertical or horizontal to be detected. The proposed algorithm has been applied and tested on various handwritten digits from the ISI Kolkata database. We also test the algorithm on various printed Assamese digits. Experimental results show that the average recognition rate achieved by the algorithm is 80% for handwritten numerals and almost 100% for printed numerals. The results are obtained using 50 handwritten samples for each digit and various printed Assamese digits.
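The connected-component idea referenced above can be sketched with an iterative flood fill. Note this counts 4-connected foreground components, which is only one ingredient of the paper's blob/hole analysis, and the toy digit image is hypothetical:

```python
def count_components(img):
    """Count 4-connected components of foreground (1) pixels."""
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    components = 0
    for r in range(rows):
        for c in range(cols):
            if img[r][c] and not seen[r][c]:
                components += 1
                stack = [(r, c)]       # iterative flood fill
                while stack:
                    i, j = stack.pop()
                    if (0 <= i < rows and 0 <= j < cols
                            and img[i][j] and not seen[i][j]):
                        seen[i][j] = True
                        stack += [(i + 1, j), (i - 1, j),
                                  (i, j + 1), (i, j - 1)]
    return components

# A ring (enclosing a hole) plus a separate vertical stroke.
digit = [[1, 1, 1, 0, 0],
         [1, 0, 1, 0, 1],
         [1, 1, 1, 0, 1]]
```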
This paper addresses the problem of image enhancement, and thereby the enhancement of scene visibility, in outdoor images. Visibility is a very important issue in computer-based surveillance, crime analysis, driver assistance system design, etc. The most important challenges to visibility are atmospheric haze and poor lighting, and the problem becomes harder when the haze is dense and night-time lighting is extremely poor. In this paper an automatic degradation detection and restoration algorithm is proposed. It detects the type of degradation using the distribution of the scene, then uses a hybrid dark channel prior (DCP) based haze removal algorithm if the image is degraded by atmospheric haze only; otherwise it computes the image negative first and then applies the hybrid DCP. The algorithm has been tested in many situations, and the results obtained are satisfactory in comparison to existing algorithms.
Real-world signals are afflicted by noise, be they communication signals or remote sensor signals, including those from medical imaging or surveillance sensors. In the case of signals from a SAR (Synthetic Aperture Radar) sensor on a remote sensing platform, or USG or MRI medical images, the noise is multiplicative. The coherent nature of the signal processing involved in the data generation gives rise to a grainy kind of signal-dependent noise called speckle. Speckle noise affects the radiometric quality of the image and makes data interpretation and analysis difficult. Since the utility of any data depends on its accuracy in terms of both geometry and radiometry, denoising becomes mandatory for image analysis applications. The common problem with most denoising filters is that they blur the image while reducing the noise; the challenge thus lies in cleaning the image without sacrificing geometric resolution. Wavelet-based filtering techniques show promising results in this regard. In this paper the results and analysis of filtering high-resolution SAR data using different Daubechies mother wavelets with a soft thresholding technique are presented.
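The soft thresholding rule applied to the wavelet detail coefficients is simple to state (a minimal sketch of the shrinkage rule only; the coefficient values and threshold are hypothetical, and the wavelet decomposition itself is omitted):

```python
def soft_threshold(coeffs, t):
    """Soft thresholding of wavelet detail coefficients: shrink every
    coefficient toward zero by t, zeroing those with |c| <= t."""
    return [0.0 if abs(c) <= t else (c - t if c > 0 else c + t)
            for c in coeffs]

detail = [4.0, -0.5, 1.2, -3.0, 0.1]
denoised = soft_threshold(detail, 1.0)
```

Small coefficients, assumed to be mostly speckle, are zeroed, while large ones (structure) are kept but shrunk, which is why soft thresholding smooths noise with less ringing than hard thresholding.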
The face, being the primary focus of attention in social interaction, plays a major role in conveying identity and emotion. A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. The main aim of this paper is to analyse the method of Principal Component Analysis (PCA) and its performance when applied to face recognition. The algorithm creates a subspace (face space) in which the faces in a database are represented using a reduced number of features called feature vectors. The PCA technique has also been used to identify various facial expressions such as happy, sad, neutral, anger, disgust and fear. The experimental results show that PCA-based methods provide good face recognition with reasonably low error rates. We conclude that PCA is a good technique for face recognition, as it identifies faces fairly well under varying illumination, facial expressions, etc.
The 3-Dimensional Diffuse Optical Tomographic (3-D DOT) image reconstruction algorithm is computationally complex and requires extensive matrix computations, which hampers real-time reconstruction. In this paper, we present near real-time 3-D DOT image reconstruction based on the Broyden approach for updating the Jacobian matrix. The Broyden method simplifies the algorithm by avoiding re-computation of the Jacobian matrix in each iteration. We have developed CPU and heterogeneous CPU/GPU code for 3-D DOT image reconstruction in the C and MATLAB programming platforms. We have used the Compute Unified Device Architecture (CUDA) programming framework and the CUDA linear algebra library (CULA) to utilize the massively parallel computational power of GPUs (NVIDIA Tesla K20c). The computation time achieved by the C-based implementation on a CPU/GPU system, for a 3-plane measurement and an FEM mesh of 19172 tetrahedral elements, is 806 milliseconds per iteration.
Due to limited depth of focus or long focal lengths, it is not possible to get a single image that contains all relevant objects in focus. The solution to this problem is to acquire several images with different focus points and register them. Window-based principal component analysis (PCA) is implemented for multi-focus and multimodal images. Experiments are carried out on medical images of brain angiography and images of the course of the carotid arteries from the ascending aorta to the brain, required when looking for blockages in cases of brain stroke. Multimodal medical images taken from different sensors, such as CT and MRI images, are registered using this algorithm for medical diagnostics. Multimodal navigation aid images for helicopter pilots, taken with a low-light television (LLTV) sensor and a thermal forward-looking infrared (FLIR) sensor, are also registered for use as navigational aids. The proposed windowed PCA algorithm is compared with the conventional averaging method and a PCA-based method without windowing using quality measures such as entropy, correlation coefficient, histogram error, root mean square error, maximum absolute error (MAE), peak signal-to-noise ratio (PSNR), standard deviation and the Universal Image Quality Index (UIQI).
Steganography is the method of hiding secret information such as passwords, text, images or audio behind an original cover file. In this paper we propose audio-video crypto-steganography, a combination of image steganography and audio steganography, using computer forensics techniques as a tool for authentication. Our aim is to hide secret information behind the image and audio of a video file. As video is composed of many still image frames plus audio, we can select any frame of the video, and the audio, for hiding our secret data. A suitable algorithm such as 4LSB is used for image steganography and a phase coding algorithm for audio steganography. Suitable parameters of security and authentication, such as PSNR and histograms, are obtained at the receiver and transmitter sides; since they are identical, data security is increased. This paper presents the idea of computer forensics techniques and the use of video steganography in both investigative and security contexts.
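The 4LSB embedding mentioned above replaces the four least-significant bits of a cover pixel with four secret bits, so one byte of payload fits in two pixels. A minimal sketch (sample pixel values hypothetical):

```python
def embed_4lsb(pixel, nibble):
    """Hide 4 secret bits in the 4 least-significant bits of a pixel."""
    return (pixel & 0xF0) | (nibble & 0x0F)

def extract_4lsb(pixel):
    return pixel & 0x0F

def hide_byte(p1, p2, byte):
    """A byte needs two pixels: high nibble first, then low nibble."""
    return embed_4lsb(p1, byte >> 4), embed_4lsb(p2, byte & 0x0F)

stego = hide_byte(200, 117, ord('A'))          # 'A' == 0x41
recovered = (extract_4lsb(stego[0]) << 4) | extract_4lsb(stego[1])
```

Since only the low nibble changes, each cover pixel moves by at most 15 intensity levels, which keeps the distortion (and hence the PSNR penalty) small.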
To achieve panoptic machine vision, recognition of images from disparate classes like person, car and building is of primal importance. The Locally-connected Neural Pyramid (LCNP) was proposed earlier to achieve robust and time-efficient training on large datasets of images from these disparate classes. The objective of this paper is to propose a technique for fast training of the LCNP. YIQ coding is used to extract colour-based information from the images, as it separates the colour information from the luminance information. As RGB-to-YIQ conversion is an embarrassingly parallel operation, this recoding can give a tremendous speed-up over the previous approach, where PCA de-correlation of the RGB channels was carried out. The use of YIQ coding has also entailed a reduction in the complexity of the LCNP, reducing the computations considerably and further boosting training time performance. Despite the considerable reduction in the complexity of the LCNP and the use of YIQ coding, the recognition performance achieved by this approach is similar to the previous approach: a recognition rate of 85.62% is achieved for the testing samples of the LabelMe-12-50K dataset. We propose that if the previous method of de-correlating RGB channels using PCA is replaced with YIQ coding, a tremendous speed-up will be achieved.
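The RGB-to-YIQ recoding is a fixed per-pixel linear transform, which is why it parallelizes trivially. A sketch with the standard NTSC coefficients (rounded to three decimals; the sample pixel is hypothetical):

```python
def rgb_to_yiq(r, g, b):
    """NTSC RGB -> YIQ: Y carries luminance, I and Q the chrominance.
    Each pixel is converted independently of every other pixel, which
    makes the recoding embarrassingly parallel."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q

# A neutral grey has zero chrominance: all colour lands in Y.
y, i, q = rgb_to_yiq(0.5, 0.5, 0.5)
```

Python's standard `colorsys` module provides an equivalent `rgb_to_yiq` if a library routine is preferred.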
Image watermarking is being used to prove the authenticity of image and video documents. Several web applications that retrieve information from the web based on an extracted watermark are catching on fast. These applications need a lightweight and robust watermarking method for inserting a watermark into, and extracting it from, an image. In this paper we propose a new image watermarking method based on the contourlet transform and compare its performance with a wavelet transform based method. In this technique a watermark bit is embedded into one of the eigenvalues of a fixed-size block of the sub-band coefficients obtained using the contourlet transform. The watermark extraction process follows the inverse of the operations used in watermark embedding. The simulation results show that the contourlet-based method is more robust than the wavelet-based method, even under several attacks, but the wavelet method gives better PSNR than the contourlet-based method.
An automatic segmentation and color-feature-based video object tracking algorithm is proposed. The proposed algorithm automatically segments the moving object in video by creating a multiplicative mask, which contains a reduced number of shadowed, noisy and false pixels. The segmented object can be tracked by extracting features such as color. Once the object to be tracked is segmented and its features extracted, the position of the moving object is predicted using a Kalman filter, an optimal recursive estimator that efficiently tracks the moving object in real-time applications. The proposed algorithm accurately segments the moving object by reducing the effect of shadowed and/or noisy pixels and successfully tracks the moving object.
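The prediction step of such a tracker can be illustrated with a minimal 1-D constant-velocity Kalman filter. This is a generic textbook sketch, not the paper's implementation: the noise parameters and measurement sequence are hypothetical, and a real tracker would run one such filter per coordinate of the object centroid.

```python
def kalman_track(measurements, dt=1.0, q=1e-3, r=4.0):
    """Minimal 1-D constant-velocity Kalman filter.
    State [position, velocity]; only position is measured.
    q: process noise added to the diagonal (simplified), r: measurement noise."""
    x = [measurements[0], 0.0]           # initial state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]         # estimate covariance
    out = []
    for z in measurements:
        # Predict: x = F x, P = F P F' + Q, with F = [[1, dt], [0, 1]].
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with position measurement z (H = [1, 0]).
        s = P[0][0] + r                  # innovation covariance
        k = [P[0][0] / s, P[1][0] / s]   # Kalman gain
        y = z - x[0]                     # innovation
        x = [x[0] + k[0] * y, x[1] + k[1] * y]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
        out.append(x[0])
    return out

# Object moving right at roughly 2 px/frame, with noisy centroid readings.
track = kalman_track([0.5, 2.2, 3.9, 6.1, 8.0, 9.8, 12.2, 14.1])
```

The filter smooths the noisy centroid positions while learning the object's velocity, which is what lets the tracker predict where to search in the next frame.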
Image segmentation is one of the major image processing activities, used to identify a specific pixel area in an image. In this paper, an improved segmentation approach is presented to recognize an object in a scene or image. Such segmentation is helpful for identifying the component objects of a scene or image. The presented work is based on mathematical analysis to identify the object's position in the scene. Once the object frame is identified, the next step is to separate the object area from the background; in the final stage, edge detection is applied to highlight the object. In this paper, the basic segmentation approaches are described along with the proposed algorithmic approach to object detection. The work here is demonstrated on hand object detection.
Abandoned object detection is an essential requirement in many video surveillance contexts. We introduce an abandoned object detection tool based on a set of possible events and a set of rules to act upon those events. This implementation is simple and reusable, unlike existing techniques: it uses simple logical reasoning on textual data, in contrast to image-centric processing. Objects foreign to the usual environment are extracted using background subtraction. The results of the blob detection and tagging process are passed to an abandoned object detector in a textual format. The abandoned object detector, which is an acyclic graph of asynchronously interconnected lightweight processing modules, evaluates the variations of speeds and inter-blob distances. By configuring several parameters according to the context, it generates an alert upon encountering such a scenario. We provide results of this implementation by applying it to the PETS 2006 dataset.
This paper presents an automated blood vessel detection method for fundus images. The method first performs some basic image preprocessing tasks on the green channel of the retinal image. A combination of morphological operations, top-hat and bottom-hat transformations, is applied to the preprocessed image to highlight the blood vessels. Finally, the Kohonen Clustering Network is applied to cluster the input image into two clusters, namely vessel and non-vessel. The performance of the proposed method is tested on retinal images from the Digital Retinal Images for Vessel Extraction (DRIVE) database, and the results are compared with three other state-of-the-art methods. The sensitivity, false-positive fraction (FPF) and accuracy of the proposed method are found to be higher than those of the other methods, which implies that the proposed method is more efficient and accurate.
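The top-hat/bottom-hat idea can be sketched in one dimension: vessels appear as thin dark structures on the brighter green channel, so a morphological closing minus the original signal highlights them. This toy Python version uses a flat structuring element and is only an assumption-level sketch of the paper's 2-D operators.

```python
def dilate(sig, k=3):
    """Grey-scale dilation of a 1-D signal with a flat window of size k."""
    r = k // 2
    return [max(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def erode(sig, k=3):
    """Grey-scale erosion of a 1-D signal with a flat window of size k."""
    r = k // 2
    return [min(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def bottom_hat(sig, k=3):
    """Closing (dilation then erosion) minus the signal: highlights thin
    dark structures, such as vessels, against a brighter background."""
    closed = erode(dilate(sig, k), k)
    return [c - s for c, s in zip(closed, sig)]
```

A narrow dip in an otherwise bright signal (a "vessel") comes out as a strong positive response at exactly that position.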
This paper proposes a new technique for 3D object retrieval using skeletons of objects' orthographic projections. The proposed method exploits 2D multi-views, wherein silhouettes are obtained from different viewpoints for each model. A feature vector is extracted from the skeleton of each viewpoint image, obtained by contour partitioning with Discrete Curve Evolution (DCE). The vectors are further assembled in a matrix for pairwise comparison. Experimental results are reported on the Princeton Shape Benchmark (PSB), a publicly available database of 3D models. The obtained results are quite encouraging in terms of accuracy.
Writer identification is a vibrant research field; although a lot of work has been done on writer identification for normal text, writer identification in music score sheets has not been addressed on a large scale. Here we propose a method to identify the writers of music score sheets using Daubechies wavelet features along with an SVM classifier. We evaluated our proposed approach on a subset of the CVC-MUSCIMA dataset. From the experiment on 140 score-sheet images from 7 writers, we obtained encouraging results.
Similar to biology-inspired optimization algorithms, this paper proposes a novel metaheuristic, Greedy Politics Optimization (GPO), inspired by the political strategies adopted by politicians to contest elections and form a government. Interestingly, the performance of the algorithm was found to improve when the unethical practices adopted by greedy politicians were taken into account. Several benchmark multi-dimensional test functions were optimized using the proposed algorithm, and the accuracy of GPO proved efficient compared with other classical metaheuristics. Performance analysis also reveals that the convergence efficiency of GPO is highly superior to that of particle swarm optimization.
Advancement in the complexity of computer viruses is a big threat for researchers working in the field of computer virology. The methods and techniques developed so far cannot assure complete disinfection and cure, but many efficient methods have nevertheless been developed. Our proposed technique includes three test cases designed for the identification of metamorphic viruses. The first test case combines static analysis with an edit-distance approach; the second combines static analysis with pairwise alignment; and the third combines static analysis, pairwise alignment and the edit-distance approach. The first two test cases describe the extension of similarity analysis with static analysis for the identification of metamorphic viruses. The third test case is designed to assure exact detection, leading to detection of metamorphic viruses with very low false-positive and false-negative rates. Experimental results demonstrate that our method is efficient for the identification of metamorphic viruses and achieves satisfactory performance.
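The edit-distance component of such similarity analysis can be sketched as the standard Levenshtein distance between two opcode sequences, normalised into a similarity score; the threshold at which two programs are declared variants is the paper's concern and is not fixed here.

```python
def edit_distance(a, b):
    """Levenshtein distance between two sequences (e.g. opcode lists),
    computed with a rolling one-row dynamic-programming table."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # substitution
        prev = cur
    return prev[n]

def similarity(a, b):
    """Normalised similarity in [0, 1]; 1.0 means identical sequences."""
    longest = max(len(a), len(b))
    return 1.0 if longest == 0 else 1.0 - edit_distance(a, b) / longest
```

A metamorphic variant that rewrites a fraction of its opcodes then scores below 1.0 but well above an unrelated program.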
Text summarization is the procedure by which the significant portions of a text are retrieved. Most approaches perform summarization based on hand-tagged rules, such as the format of a sentence, the position of a sentence in the text, or the frequency of particular words in a sentence. But across different input sources, these pre-defined constraints greatly affect the result. The proposed approach performs the summarization task by an unsupervised learning methodology. The importance of a sentence in an input text is evaluated with the help of the Simplified Lesk algorithm, with WordNet used as an online semantic dictionary. First, the approach evaluates the weight of each sentence of a text separately using the Simplified Lesk algorithm and arranges the sentences in decreasing order of weight. Next, according to the given percentage of summarization, a particular number of sentences is selected from that ordered list. The proposed approach gives the best results up to 50% summarization of the original text and gives satisfactory results even up to 25% summarization.
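A toy version of the sentence-weighting step might look as follows; here a small hand-made gloss dictionary stands in for WordNet, which the paper actually uses, so treat the glosses and stop-word list as assumptions of the sketch.

```python
STOPWORDS = frozenset({"the", "a", "an", "of", "in", "is", "and"})

def lesk_weight(sentence, glosses):
    """Score a sentence by overlap between its words and the glosses of
    those words -- the Simplified Lesk idea, with a toy gloss dictionary
    standing in for WordNet."""
    words = {w.lower().strip(".,") for w in sentence.split()} - STOPWORDS
    score = 0
    for w in words:
        for gloss in glosses.get(w, []):
            gloss_words = set(gloss.lower().split()) - STOPWORDS
            score += len(words & (gloss_words - {w}))
    return score

def summarize(sentences, glosses, keep_ratio=0.5):
    """Keep the highest-weighted fraction of sentences, in original order."""
    ranked = sorted(sentences, key=lambda s: lesk_weight(s, glosses),
                    reverse=True)
    k = max(1, int(len(sentences) * keep_ratio))
    chosen = set(ranked[:k])
    return [s for s in sentences if s in chosen]
```

With `keep_ratio=0.5` this corresponds to the 50% summarization level the abstract reports as giving the best results.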
In chemical and process industries, modeling of nonlinear systems poses a major challenge to design engineers due to multivariable process interactions, and innovative technology for process identification is in high demand. A model identification scheme using neural networks and ANFIS is proposed and designed for nonlinear systems in series, using conductivity as the measured parameter and flow rate as the manipulated variable. Real-time experimental data of the nonlinear system are used to train the neural network, by the backpropagation training algorithm, and the ANFIS, using Matlab. The model identified using the various estimators is compared with the actual process model, and an error analysis is also performed. A Neural Model Predictive Controller (NMPC) is designed to control the level, and its performance is compared with a traditional PID controller.
Examinations play a vital role in deciding the quality of students, and generating an effective question paper is a task of great importance for any educational institute. Conventionally, question papers are developed manually. In this paper, a fuzzy logic based model is proposed for autonomous paper generation using MATLAB®. A comparative analysis with the classical method is done, and the fuzzy model is found to be more reliable, fast and logical.
Drift in sensors, mainly in chemical or gas sensors, is an unavoidable problem that introduces a shift in feature values in the dataset. This makes sample classification and identification more challenging over time in olfactory machines. Chemical sensor drift is a long-term degradation of sensor properties that occurs no matter what the sensors are made of, how expensive they are, or how accurate. To deal with this problem, a multiple-classifier approach using an artificial neural network (ANN) and k-nearest neighbour (KNN) is proposed here and tested on the gas sensor array drift dataset retrieved from the UCI machine learning repository. First, the extensive dataset is processed using principal component analysis (PCA) for visualization of the underlying clusters. Then, in order to counteract the effect of drift, compensation techniques using multiple classifiers based on ANN (BP-MLP) and KNN are formulated. Finally, a comparative study of the efficiency of the classifier ensemble against a single standalone classifier, in terms of average classification accuracy, is carried out. The results clearly indicate the superiority of the multiple-classifier approach, which not only improves classifier performance but also compensates for sensor drift without replacing the physical sensor, enabling long-term use.
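The KNN member of such a multiple-classifier scheme can be sketched in a few lines; the distance measure and the value of k are illustrative choices, and the paper's ensemble additionally combines this with a BP-MLP network.

```python
def knn_predict(train, labels, x, k=3):
    """Classify x by majority vote among the k nearest training vectors
    (squared Euclidean distance)."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = sorted(range(len(train)), key=lambda i: dist(train[i], x))[:k]
    votes = {}
    for i in nearest:
        votes[labels[i]] = votes.get(labels[i], 0) + 1
    return max(votes, key=votes.get)
```

Retraining (or reweighting) such classifiers on recent batches is one common way drift compensation is handled without touching the physical sensor.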
This paper compares the performance of the conventional adaptive network based fuzzy inference system (ANFIS) and extreme-ANFIS on regression problems. ANFIS networks incorporate the explicit knowledge of fuzzy systems and the learning capabilities of neural networks. The proposed new learning technique overcomes the slow learning speed of conventional learning techniques like neural networks and support vector machines (SVMs) without sacrificing generalization capability. The structure of the extreme-ANFIS network is similar to that of conventional ANFIS, which combines fuzzy logic's qualitative approach with a neural network's adaptive capability. As in the case of extreme learning machines (ELMs), the first-layer parameters of the proposed learning machine are not tuned. Performance on two regression problems shows that extreme-ANFIS provides better generalization capability and faster learning speed.
This paper analyses the performance of fuzzy PD+I and conventional PID controllers on nonlinear systems. The nonlinear systems used are the level control of a surge tank and a cart-pole system. The level of the surge tank and the angular position of the cart-pole system have been tracked using a fuzzy PD+I controller and a conventional controller. Four types of membership functions, Bell, Pi, Gaussian and Psigmoid, are used in the fuzzy PD+I control of the nonlinear systems. The effects of the different membership functions on system control have been compared with the conventional PID controller. For the four membership functions, besides control performance, a stability criterion has also been implemented using the phase-plane method.
This paper introduces a simple Euler method into the existing roach infestation optimization algorithm to improve swarm stability and enhance local and global search performance. A dynamic step-size adaptation roach infestation optimization (DSARIO) algorithm is proposed using the Euler step-size adaptation. Experimental results demonstrate improved accuracy and convergence ability over the existing roach infestation optimization algorithm, and the numerical results also clearly show the proposed algorithm's ability to solve multi-dimensional problems. The performance of the proposed algorithm is compared with that of the existing roach infestation optimization and hungry roach infestation optimization algorithms.
Portfolio optimization is a standard problem in the financial world for making investment decisions which involve investing in a variety of assets with the aim of maximizing yield and minimizing risk. Modern portfolio theory is a mathematical approach to the problem that endeavours to construct a plausible portfolio by finding the best weighting of the assets. In this study, an indicator based evolutionary algorithm (IBEA) has been compared with two well-known evolutionary algorithms: the Non-dominated Sorting Genetic Algorithm II (NSGA-II) and the Strength Pareto Evolutionary Algorithm (SPEA-II). The results reveal that IBEA outperforms the other two algorithms in terms of closeness to the true Pareto front. Also, a diversity-enhanced version of IBEA (IBEA-D) is proposed, which is found to provide more diverse solutions than IBEA.
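IBEA variants are driven by a binary quality indicator on pairs of solution sets; a common choice, shown here as a sketch, is the additive ε-indicator for minimisation objectives (whether the paper uses this or another indicator, such as hypervolume, is not stated in the abstract).

```python
def additive_epsilon(a_set, b_set):
    """Additive epsilon-indicator I(A, B) for minimisation: the smallest
    epsilon by which every point of B is weakly dominated by some point
    of A shifted by epsilon. Negative values mean A strictly dominates B."""
    return max(min(max(ai - bi for ai, bi in zip(a, b)) for a in a_set)
               for b in b_set)
```

A front closer to the true Pareto front yields a smaller indicator value against it, which is how "closeness" comparisons of this kind are typically scored.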
Disease diagnosis based on artificial intelligence techniques is effective. To enhance the training procedure of a neural network so that it diagnoses heart disease effectively, we use a hybrid algorithm combining GSO and ABC. Initially, we generate a population whose members carry the weight values used to train the neural network. To identify the best member for training the network, we apply the hybrid algorithm's operations: each member is given to the neural network, its fitness is computed, and the members are categorized to decide which hybrid operation each member performs. After performing the corresponding operations on the categorized members, we obtain a new set of members, and we iterate the process until we get a stable member for the producer operation. The weight values of the producer are then chosen to train the neural network to detect heart disease.
Decisions from Experience (DFE) involve situations where decision makers sample information before making a final choice. Trying clothes before choosing a garment and enquiring about jobs before opting for one are examples of such situations. In DFE research, conventionally, the final choice made after sampling information is aggregated over all participants and problems in a given dataset. However, this aggregation does not explain the individual choices made by participants. In this paper, we test the ability of computational models of aggregate choice to explain choices at the individual level. The top three DFE models of aggregate choices are evaluated on how they account for individual choices. A Primed-Sampler (PS) model, a Natural-Mean Heuristic (NMH) model, and an Instance-Based Learning (IBL) model are calibrated to explain individual choices in the Technion Prediction Tournament (the largest publicly available DFE dataset). Our results reveal that all three DFE models of aggregate choices perform well in explaining individual choices. Although the PS and NMH models perform slightly better than the IBL model, the IBL model is able to account for all individuals in the dataset, unlike the PS and NMH models. We conclude by drawing implications for computational cognitive models in explaining individual choices in DFE research.
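Of the three models, the Natural-Mean Heuristic is the simplest to sketch: choose the option whose sampled outcomes have the higher mean. The tie-breaking rule below is an assumption of this sketch, not part of the paper's calibration.

```python
def natural_mean_choice(samples_a, samples_b):
    """Natural-Mean Heuristic: pick the option whose sampled outcomes have
    the higher mean. Ties go to option 'A' (an assumption of this sketch)."""
    mean = lambda xs: sum(xs) / len(xs)
    return "A" if mean(samples_a) >= mean(samples_b) else "B"
```

Calibrating such a model to individuals amounts to comparing each participant's predicted choice against the choice they actually made after sampling.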
This article explores a novel way of implementing artificial neural networks (ANNs) using magnets. Upon implementation, it works very efficiently, as the data spreads throughout the network analogously to the natural neuronal systems in the human brain, and it requires no computations at all, unlike present ANN implementations. The striking feature is that it gives error-free outputs, and thereby it proves to be much more efficient than existing artificial neural network systems.
Research on tactile sensors and their wide applications has received extensive attention among researchers very recently, especially in two fields: medical surgery (Minimally Invasive Surgery, MIS) and the fruit and vegetable grading industry. This paper proposes the implementation of a robotic system which can distinguish objects of different softness using a machine learning approach based on different parameters. Two piezoresistive flexible tactile sensors are mounted on a two-fingered robotic gripper, as a robotic arm can perform repetitive tasks under a controlled environment. A PIC32 microcontroller is used to control the gripping action and to acquire pressure data. Decision Tree and Naive Bayes methods are used as intelligent classifiers for grading the objects, using feature vectors obtained from the time-series response of the tactile sensors during the grasping action. From an analytical point of view, it is observed that the Decision Tree based approach is better than the Bayesian approach.
In this paper, an application of genetic algorithms (GAs) and a Gaussian Naïve Bayesian (GNB) approach is studied to explore brain activities by decoding specific cognitive states from functional magnetic resonance imaging (fMRI) data. In fMRI data analysis, however, the large number of attributes may lead to a serious problem in classifying cognitive states: it significantly increases the computational cost and memory usage of a classifier. Hence, to address this problem, we use GAs for selecting an optimal set of attributes and then a GNB classifier in a pipeline to classify different cognitive states. The experimental outcomes prove the approach's worth in successfully classifying different cognitive states. A detailed comparison with popular machine learning classifiers illustrates the importance of such a GA-Bayesian approach applied in a pipeline for fMRI data analysis.
Functional magnetic resonance imaging (fMRI) is considered a powerful technique for performing brain activation studies by measuring neural activities. However, the tons of voxels recorded over time pose a major challenge to neuroscientists and researchers in analyzing the data effectively. Decoding brain activities requires fast, accurate, and reliable classifiers. Although machine learning classifiers have shown promising results in this classification scenario, individual classifiers have their limitations. This paper proposes a method based on an ensemble of neural networks applied to fMRI data for cognitive state classification. The neural network (NN) classifier has been selected for ensembling, and the fuzzy integral (FI) approach is used as an efficient tool for combining the different classifiers. The classifier ensemble technique performs better than a single base learner by reducing misclassification as well as both bias and variance. The proposed technique successfully classifies different cognitive states with high classification accuracy, and the performance improvement of the ensemble over the individual neural network strongly recommends the usefulness of the proposed approach.
Content-based recommender systems have the drawback of recommending only items similar to a user's particular taste, irrespective of an item's popularity. Collaborative filtering based systems face the problems of data sparsity and expensive parameter training. In this paper, a combination of content-based, model-based and memory-based collaborative filtering techniques is used in order to remove these drawbacks and to present predicted ratings more accurately. The training of the data is done using a feedforward backpropagation neural network, and the system performance is analyzed under various circumstances, such as the number of users, their ratings and the system model.
Crime of any type never paints a good picture. It is a worldwide problem that directly or indirectly touches every one of us in multiple ways: as victims, in paying higher costs for goods and services, and in living with fear and corruption. Moreover, location has a significant impact on crime. We propose a system to provide an optimal travel route in Gurgaon, India (chosen as a sample) along different modes of transport, keeping in mind both distance and safety. We use different learning algorithms to train the system based on available crime statistics (using research and surveys from government and non-government organizations).
A nuclear power plant (NPP) is a complex engineering system. In the Prototype Fast Breeder Reactor (PFBR), an NPP, any abnormality or transient should be identified and notified to the operator in the control room to avoid accidents. During such an abnormality, the operator plays a major role in bringing the plant back to a safe mode of operation when the safety features incorporated in the plant fail to act automatically. The operator's proper decision-making ability can avoid a catastrophe in the plant. There are many techniques used to assist the operator in taking a legitimate decision. Fuzzy logic is one such technique, guiding the operator in identifying the occurrence of a transient with a quicker response time and minimum information overload. This paper discusses the implementation of a fuzzy logic based transient identification system for operator guidance. A comparative study has been made using plant parameters, namely the reactor inlet temperature and the deaerator level, as fuzzy input variables to two different systems. Observations based on the stability and response time of these two parameters show the effectiveness of the developed system for operator guidance.
In this paper, an adaptive neuro-fuzzy inference system (ANFIS) approach for automatically classifying diabetes is presented. ANFIS combines a fuzzy logic expert system, for its decision-making ability, with the neural network learning approach of the backpropagation algorithm. Simulation results show that the proposed work classifies diabetes effectively, reducing the error rate to 0.0524.
This research concerns the development of an intelligent system that can detect human emotions by observing facial characteristics. The problem is addressed by employing Active Shape Models (ASM) structured with a Support Vector Machine (SVM) classifier. We define 4 ratios from features in the human face, using FACS Action Units, to classify emotions.
A new method for the weight initialization of sigmoidal feedforward artificial neural networks (SFFANNs) is proposed. The proposed routine initializes the input-to-hidden-layer weights on different regions of the interval [-1, 1 + ε], where the value of ε depends on the number of nodes in the hidden layer. The threshold of each hidden node is initialized to one end of the sub-interval associated with that node in the input-to-hidden initialization. The hidden-to-output weights are initialized in a deterministic manner on an interval dependent on the number of hidden nodes in the network, while the threshold of the output node is initialized to zero. The proposed weight initialization method is compared, on a set of 8 function approximation tasks, with four instances of the random weight initialization method. The results indicate that networks initialized with the proposed method reach deeper minima of the error functional during training, generalize better, and converge faster.
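A region-based initialization of this kind might be sketched as follows; the exact dependence of ε on the number of hidden nodes is not given in the abstract, so the default used here is a placeholder assumption.

```python
import random

def init_weights(n_in, n_hidden, eps=None, seed=0):
    """Partition [-1, 1 + eps] into n_hidden equal sub-intervals; draw each
    hidden node's input weights from its own sub-interval and set the node
    threshold to the lower end of that sub-interval. The default eps is a
    placeholder assumption (the abstract only says eps depends on the
    number of hidden nodes)."""
    rng = random.Random(seed)
    if eps is None:
        eps = 1.0 / n_hidden
    width = (2.0 + eps) / n_hidden
    weights, thresholds = [], []
    for h in range(n_hidden):
        lo = -1.0 + h * width          # lower end of node h's region
        weights.append([rng.uniform(lo, lo + width) for _ in range(n_in)])
        thresholds.append(lo)
    return weights, thresholds
```

Spreading nodes over disjoint regions like this is one way to keep hidden units from starting out redundant, which is the usual motivation for structured initialization schemes.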
Databases store datasets that are not always complete: some records contain missing fields, which may occur due to human or system error during data collection. Data imputation is the process of filling in the missing values to generate complete records, and complete databases can be analyzed more accurately than incomplete ones. This paper proposes a 2-stage hybrid model for filling in the missing values using fuzzy c-means clustering and a multilayer perceptron (MLP) working in sequence, and compares it with k-means imputation and fuzzy c-means (FCM) imputation. The accuracy of the model is checked using the Mean Absolute Percentage Error (MAPE). The MAPE values obtained show that the proposed model fills multiple missing values in a record more accurately than stage 1 alone.
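As a baseline against which such clustering-based hybrids are usually compared, plain column-mean imputation and the MAPE accuracy measure can be sketched as:

```python
def mean_impute(rows):
    """Column-mean imputation: None marks a missing field. A simple
    baseline, not the paper's FCM+MLP method."""
    cols = list(zip(*rows))
    means = [sum(v for v in c if v is not None) /
             sum(v is not None for v in c) for c in cols]
    return [[means[j] if v is None else v for j, v in enumerate(row)]
            for row in rows]

def mape(actual, imputed):
    """Mean Absolute Percentage Error over the imputed values."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, imputed)) / len(actual)
```

Cluster-based imputation refines this baseline by averaging only over records in the same cluster rather than over the whole column.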
Accurate, reliable and timely information on crop area, crop production and land use is very important for planners and policy makers in taking decisions for the development of agriculture. The present study predicts the crop area and crop production of maize in the Upper Brahmaputra Valley Zone of Assam using Artificial Neural Networks (ANNs). A Multilayer Perceptron (MLP) with a single hidden layer and a Radial Basis Function (RBF) network have been trained with secondary data on area, maize production and meteorology obtained from various sources, and the appropriate model for each network is identified. The performance of the developed ANN models has been measured using Root Mean Squared Error (RMSE) and Correlation Coefficients (CC), and their accuracy has been compared with a Multiple Linear Regression (MLR) model. The experimental results show that the MLP and RBF models outperform the MLR model. Sensitivity analysis for the prediction of maize production shows that maximum temperature is the most sensitive parameter, followed by the technology index, for the Upper Brahmaputra Valley Zone of Assam.
Information on the web is increasing at an enormous speed. Every user has a distinct background and a specific goal when searching for information on the web. Present search engines produce results that are best suited to the given query, but they are unaware of the user's individual preferences, which vary with individual interests and often change with the individual's working environment and time. To provide personalized results, the user's topical preferences can be stored and utilized. Different approaches have been implemented for this, such as collaborative filtering and document-based or concept-based profiling. We propose a hybrid approach based on both document-based and concept-based profiling. The proposed framework aims to re-rank the results for a given query obtained from existing search engines. This system thus provides an adaptive methodology for learning changing user preferences and re-ranking results according to one's individual interests.
Trust lies at the acme of the competitive trend of E-commerce. Trust enumerates the different prospects of trustworthiness that are sculpted between vendor and customer, inducing better customer satisfaction and Business-to-Customer (B2C) E-commerce. Considering the vagueness and ambiguity of E-commerce trust, different trust models coupled with fuzzy logic (FL) have been implemented over time. In this paper, we propose a fuzzy-based trust model in which all the important facets that affect trust are taken into account. Traditional trust models were based on subjective logic, which fails to map the real-time environment of E-commerce with its uncertain behavioural values; fuzzy logic is a way to deal with this uncertainty.
Nowadays, soft computing techniques such as fuzzy logic, artificial neural networks and neuro-fuzzy networks are widely used for the diagnosis of various diseases at different levels. These diagnostic systems help in the early detection of diseases and assist the patient in getting proper medication in time. In this paper, artificial neural networks, namely the multilayer perceptron neural network and the radial basis neural network, and their hybrid model, i.e. the combination of fuzzy logic with neural networks (FNN), are introduced to classify the mammography mass dataset into two classes, benign and malignant, on the basis of its attributes. The performance of the ANNs is compared with that of the FNN models. In the system, missing values in records are handled using the mean substitution method, and four-fold cross validation is used to assess the generalization of the system. The results show that the FNN networks perform better than the artificial neural networks, with accuracies of 87.50% and 90.00%, proving their usefulness in the classification of mammography mass data.
This paper introduces a method of preference analysis based on electroencephalogram (EEG) analysis of prefrontal cortex activity. The proposed method exploits the relationship between EEG activity and the Egogram. The EEG device senses a single point and records readings by means of a dry-type sensor and a number of electrodes. The EEG analysis applies feature mining and clustering of EEG patterns using a self-organizing map (SOM). EEG activity of the prefrontal cortex displays individual differences; to take these into account, we construct a feature vector as the input modality of the SOM. The input vector for the SOM consists of the extracted EEG feature vector and a human character vector, which quantifies the human character through ego analysis using psychological testing. In preprocessing, we extract the EEG feature vector by calculating the time average on each frequency band: θ, low-β, and high-β. To prove the effectiveness of the proposed method, we perform experiments using real EEG data. The results show that the accuracy rate of the EEG pattern classification is higher than it was before the improvement of the input vector.
Factorization of a number composed of two large prime numbers with almost equal numbers of digits is computationally a difficult task. The RSA public-key cryptosystem relies on this difficulty of factoring the product of two very large primes. There are various ways to find the two prime factors, but the huge memory and runtime expenses for large numbers pose tremendous difficulty. In this paper, we explore the possibility of solving this problem with the aid of swarm intelligence metaheuristics, using a multithreaded bound-varying chaotic Firefly Algorithm. The Firefly Algorithm is one of the recent evolutionary computing models, inspired by the behavior of fireflies. We have considered factors with equal numbers of digits. Observations show that the Firefly Algorithm can be an effective tool for factorizing a semiprime and hence can be further extended to extremely large numbers.
This paper discusses the application of a nature inspired optimization technique to nonlinear constrained optimization problems (NCPP). Here, the Invasive Weed Optimization (IWO) algorithm is chosen together with the simulated annealing penalty method, in which the penalty function increases with the generation number as infeasible solutions are forced towards the feasible region. The paper reports the capability of IWO with the simulated annealing penalty method on six benchmark problems. The results demonstrate the superiority of the proposed method over previously published results.
This paper presents the use of metaheuristic techniques to optimize the Blood Assignment Problem (BAP). The demand for blood is high, leading to scarce blood resources and a need to minimize the total amount of blood imported from outside. A basic mathematical model has been designed as a good contribution to minimizing the total amount of blood imported from outside the system. The problem was modeled as a knapsack problem, and two metaheuristics, Tabu Search (TS) and Simulated Annealing (SA), were used separately to solve it. A hybrid of TS and SA was also tested. Experimental results show that the hybrid algorithm obtained better results compared to the individual algorithms.
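A simulated-annealing treatment of a 0/1 knapsack-style assignment can be sketched as below; the neighbourhood move (a single item flip), the cooling schedule and the outright rejection of infeasible moves are simplifications relative to the paper's model.

```python
import math
import random

def sa_knapsack(weights, values, capacity, iters=2000, seed=0):
    """Simulated-annealing sketch for a 0/1 knapsack-style assignment.
    Moves are single-item flips; infeasible flips are rejected outright,
    a simplification relative to the paper's model."""
    rng = random.Random(seed)
    n = len(weights)
    x = [0] * n                       # current selection
    cur_val, cur_w = 0, 0
    best, best_val = x[:], 0
    temp = 10.0                       # initial temperature (illustrative)
    for _ in range(iters):
        i = rng.randrange(n)
        sign = -1 if x[i] else 1      # flip direction: add or remove item i
        dv, dw = sign * values[i], sign * weights[i]
        if cur_w + dw > capacity:
            continue                  # infeasible move, reject
        # accept improving moves always, worsening moves with Boltzmann prob.
        if dv > 0 or rng.random() < math.exp(dv / max(temp, 1e-9)):
            x[i] = 1 - x[i]
            cur_val += dv
            cur_w += dw
            if cur_val > best_val:
                best, best_val = x[:], cur_val
        temp *= 0.995                 # geometric cooling
    return best, best_val
```

A TS/SA hybrid of the kind the paper tests would additionally keep a tabu list of recently flipped items to steer the annealing walk away from cycling.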
In daily life, the language used for communication can be termed natural language (NL), and it evolves from generation to generation. NL is the most powerful tool that humans possess for conveying information. At the core of the natural language processing (NLP) task lies the important issue of natural language understanding (NLU); NLP is the computer manipulation of NL. In this paper, we propose a fuzzy parser, a form of syntax analyzer that performs analysis of a complete source input. The bottom-up LR (left-to-right) syntax analysis method [1] is a useful and versatile technique for parsing deterministic fuzzy context-free languages. Here we propose a Fuzzy Simple LR parser (FSLR) for parsing English sentences which uses a Fuzzy Context-Free Grammar (FCFG). LR parsers are a family of efficient, bottom-up shift-reduce parsers that can be used to parse a large class of context-free languages. The system is intended to rank the large number of syntactic analyses produced by NL grammars according to the frequency of occurrence of the individual rules deployed in each analysis. This paper discusses a procedure for constructing an LR parse table from a fuzzy context-free grammar; using this table, the input sentence is tested for syntactic correctness.
Traditional tuning techniques for the classical Proportional-Integral-Derivative (PID) controller suffer from many disadvantages, such as non-customized performance measures and insufficient process information. For the past two decades, nature inspired optimization algorithms have been efficiently implemented for the tuning of PID controllers. In this paper, four optimization methods, namely Genetic Algorithm (GA), Accelerated Particle Swarm Optimization (APSO), Differential Evolution (DE) and Cuckoo Search (CS), are studied and used to optimize the gains of a Proportional-Integral (PI) controller for set-point tracking in the speed control of a DC motor by minimizing the Integral Time Absolute Error (ITAE). Hardware validation of the efficiency of the above-mentioned optimization algorithms is studied and presented. The plant under study is a DC motor control module (MS15) from M/S LJ CREATE™. M/S National Instruments (NI) software and hardware components, i.e. LabVIEW™ with its add-on toolkits and a data acquisition (DAQ) card, have been utilized for the closed-loop control in real time. The system identification is done in LabVIEW™, and offline performance optimization is then carried out in MATLAB™; the tuned gains are further used to study the run-time performance in the LabVIEW™ environment. This split is used because MATLAB™ has very good optimization tools, while LabVIEW™ makes the measurement very easy. From the results obtained, it can clearly be inferred that the CS algorithm outperformed the other algorithms studied in this paper, particularly in disturbance rejection.
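The ITAE criterion being minimised is the integral of time multiplied by the absolute error; on sampled data it can be approximated by a Riemann sum, as sketched here.

```python
def itae(t, error):
    """Approximate ITAE = integral of t * |e(t)| dt by a Riemann sum over
    sampled time points t and error samples error (equal-length lists)."""
    total = 0.0
    for i in range(1, len(t)):
        total += t[i] * abs(error[i]) * (t[i] - t[i - 1])
    return total
```

Because the error is weighted by time, ITAE penalises slowly decaying tracking errors more heavily than an initial transient, which is why it is a popular cost for set-point tracking.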
To build an efficient Intrusion Detection System (IDS) model, an enormous amount of data is required to train and test the model. To improve the accuracy and efficiency of the model, it is essential to infer statistical properties from the observable elements of the dataset. In this work, we propose several data preprocessing techniques: filling in missing values, removing redundant samples, reducing dimensionality, selecting the most relevant features and, finally, normalizing the samples. After preprocessing, we simulate and test the dataset with various data mining algorithms, namely Support Vector Machine (SVM), Decision Tree, K-Nearest Neighbor, K-Means and Fuzzy C-Means clustering, which provide better results in less computational time.
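The preprocessing steps listed above can be sketched on a toy numeric dataset as below. The concrete choices (mean imputation, zero-variance column removal as a crude feature reduction, min-max normalization) are illustrative assumptions; the paper does not fix specific methods here.

```python
def preprocess(rows):
    """Fill missing values (None) with column means, drop duplicate rows,
    remove zero-variance columns, and min-max normalize the rest."""
    n_cols = len(rows[0])
    # 1. Fill missing values with the column mean of observed entries.
    for j in range(n_cols):
        observed = [r[j] for r in rows if r[j] is not None]
        mean = sum(observed) / len(observed)
        for r in rows:
            if r[j] is None:
                r[j] = mean
    # 2. Remove redundant (duplicate) samples, preserving order.
    seen, unique = set(), []
    for r in rows:
        key = tuple(r)
        if key not in seen:
            seen.add(key)
            unique.append(r)
    rows = unique
    # 3. Drop zero-variance columns (a minimal form of feature reduction).
    keep = [j for j in range(n_cols) if len({r[j] for r in rows}) > 1]
    rows = [[r[j] for j in keep] for r in rows]
    # 4. Min-max normalize each remaining column into [0, 1].
    for j in range(len(keep)):
        col = [r[j] for r in rows]
        lo, hi = min(col), max(col)
        for r in rows:
            r[j] = (r[j] - lo) / (hi - lo)
    return rows

data = [[1.0, 5.0, 7.0], [2.0, None, 7.0], [1.0, 5.0, 7.0], [3.0, 9.0, 7.0]]
clean = preprocess(data)
print(clean)  # 3 unique rows, constant third column dropped, values in [0, 1]
```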
The artificial bee colony (ABC) optimization algorithm is a powerful stochastic evolutionary algorithm used to find global optima. In ABC, each bee stores a candidate solution and stochastically modifies it over time, based on information provided by neighboring bees and on the best solution found by the bee itself. When tested on various benchmark functions and real-life problems, ABC has performed better than some evolutionary algorithms and other search heuristics. However, like other probabilistic optimization algorithms, ABC has an inherent drawback of premature convergence or stagnation that leads to loss of exploration and exploitation capability. The solution search equation of ABC is significantly influenced by a random quantity, which helps exploration at the cost of exploitation of the search space. Therefore, to balance the exploration and exploitation capabilities of ABC, a new search strategy is proposed in which a new solution is generated using the current solution and the best solution. Further, in the proposed strategy the swarm of bees is dynamically divided into smaller subgroups, and the search is performed by these independent smaller groups. Experiments on 15 test functions of different complexities show that the proposed strategy outperforms the basic ABC algorithm in most cases. The results are also compared with recent variants of ABC, namely Gbest-guided ABC (GABC), Best-So-Far ABC (BSFABC) and Modified ABC (MABC).
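A minimal sketch of the core idea, generating a candidate from the current solution and the best-so-far solution in the spirit of gbest-guided ABC. The coefficient ranges, swarm size and test function are assumptions, not the paper's exact settings, and the subgroup mechanism is omitted for brevity.

```python
import random

random.seed(1)

def sphere(x):
    """Benchmark objective: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

dim, swarm_size, iters = 5, 10, 200
swarm = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm_size)]
best = min(swarm, key=sphere)
init_val = sphere(best)

for _ in range(iters):
    for i, x in enumerate(swarm):
        k = random.randrange(swarm_size)      # random neighbour bee
        j = random.randrange(dim)             # one dimension to perturb
        phi = random.uniform(-1, 1)           # random exploration term
        psi = random.uniform(0, 1.5)          # pull toward the best solution
        cand = x[:]
        cand[j] = x[j] + phi * (x[j] - swarm[k][j]) + psi * (best[j] - x[j])
        if sphere(cand) < sphere(x):          # greedy selection
            swarm[i] = cand
            if sphere(cand) < sphere(best):
                best = cand

print(init_val, sphere(best))
```

The `psi` term is what distinguishes this from the basic ABC update, which uses only the random neighbour difference.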
Boolean functions exhibiting strong cryptographic characteristics are essential primitives in the design of secure cryptosystems. It is common to incorporate Boolean functions not only in the design of symmetric block and stream ciphers, but also in the design of cryptographic hash functions. Strong Boolean functions make a system secure and resistant to cryptanalytic attacks. A wide range of approaches has been adopted for discovering Boolean functions that excel in several cryptographic characteristics. In this paper, we present a new scheme based on a Genetic Algorithm to generate Boolean functions that satisfy balancedness, correlation immunity, algebraic degree and nonlinearity requirements. The proposed scheme generates strong Boolean functions with the desired values of these characteristics.
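Two of the characteristics such a GA would score, balancedness and nonlinearity, can be computed directly from a truth table via the Walsh-Hadamard spectrum. The example function f(x0, x1, x2) = x0·x1 ⊕ x2 is chosen only for illustration.

```python
def walsh(tt, n):
    """Walsh-Hadamard spectrum of a Boolean function given as a truth table:
    W(a) = sum over x of (-1)^(f(x) XOR <a, x>)."""
    return [sum((-1) ** (tt[x] ^ bin(a & x).count("1") % 2)
                for x in range(2 ** n))
            for a in range(2 ** n)]

n = 3
# Truth table of f(x0, x1, x2) = (x0 AND x1) XOR x2 over all 8 inputs.
tt = [((x & 1) & ((x >> 1) & 1)) ^ ((x >> 2) & 1) for x in range(2 ** n)]

balanced = tt.count(1) == 2 ** (n - 1)
nonlinearity = 2 ** (n - 1) - max(abs(w) for w in walsh(tt, n)) // 2

print(balanced, nonlinearity)  # True 2
```

A GA fitness function would combine such measures (together with correlation immunity and algebraic degree) into a single score to maximize.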
Different effort estimation techniques exist for sizing software systems, but none is directly applicable to object-oriented software. Although many researchers have worked on size, and thus effort, estimation, the problem is still not fully resolved, and existing estimation techniques each work only in a specific development environment. PRICE Systems developed the Predictive Object Point (POP) metric for predicting the effort required to develop an object-oriented software system; it is based on the counting scheme of the Function Point (FP) method. Though an interesting theoretical development, POP could not gain sufficient recognition from practitioners for regular use, owing to the lack of an easy-to-use support tool and overly complicated formulations. In this paper we attempt to simplify the POP calculation. The POP count formula suggested by PRICE Systems has been modified for estimating effort, and an easy-to-use support tool automating the counting method has been developed. The refined POP count formula and preliminary results of its application in an industrial environment are presented and discussed to validate the suggested modification and simplification of the formula.
The Unified Modeling Language (UML) is a standard language for object-oriented modeling. UML in its standard form does not allow modeling of the new constructs added by aspect orientation; it therefore needs to be extended by some mechanism to incorporate aspect-related concepts. The two available mechanisms are UML profiling and UML meta-modeling. We adopt meta-modeling because of the freedom of expression it offers when introducing new elements. The meta-models presented here add the new elements needed to model aspect-orientation constructs, and the aspect modeling approach allows structural as well as behavioural modeling of aspects. Although many meta-models have been proposed earlier for structural modeling, they are either incomplete or need updating to UML 2.0. The work presented is an extension of UML 2.0 and provides complete means for aspect-oriented modeling (AOM). Moreover, little work exists on meta-modeling of behavioural diagrams; this work provides extensions not only to the class diagram but also to the interaction diagram. Using this approach, one can model both the static structure and the behaviour of crosscutting concerns.
Software companies must complete projects within time and cost constraints, which requires good planning and thought. Software project estimation is a form of problem solving that cannot be carried out on a single piece of data using a few formulae; decomposing the problem helps in concentrating on smaller parts so that none is missed, and aids in controlling and approximating software risks so that estimates are reasonably fixed and accurate. This paper presents a model that combines Principal Component Analysis (PCA) with an Artificial Neural Network (ANN) on the basis of the Constructive Cost Model II (COCOMO II). A feed-forward ANN trained with the delta rule is used, with training based on PCA-transformed inputs from a COCOMO II sample dataset repository. PCA is a technique that condenses many input variables into a few principal values, and it also helps reduce the gap between actual and estimated effort. The test results from this hybrid model are compared with COCOMO II and a plain ANN.
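The PCA-then-network pipeline can be sketched as below on synthetic data. The dataset, its dimensions and the learning rate are invented; a single linear unit trained with the delta rule (LMS) stands in for the paper's feed-forward ANN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "project" data: 40 projects, 6 correlated cost drivers,
# effort as a noisy linear function of two latent factors.
latent = rng.normal(size=(40, 2))
X = latent @ rng.normal(size=(2, 6)) + 0.05 * rng.normal(size=(40, 6))
y = latent @ np.array([1.5, -0.7]) + 0.05 * rng.normal(size=40)

# PCA: centre the data and keep the top-2 principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                       # reduced 2-D inputs

# Delta-rule (LMS) training of a linear unit on the reduced inputs.
w, b, lr = np.zeros(2), 0.0, 0.01
for _ in range(500):
    for zi, yi in zip(Z, y):
        err = yi - (w @ zi + b)
        w += lr * err * zi              # delta-rule weight update
        b += lr * err

mse = float(np.mean((Z @ w + b - y) ** 2))
print(round(mse, 4))
```

Because the six drivers are generated from two latent factors, the two principal components retain nearly all the predictive information, so the fit on the reduced inputs is close to the noise floor.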
One of the most important issues in effort estimation is the accuracy of size measurement methods, because the accuracy of an estimate depends on an accurate prediction of size. Size prediction in turn depends on project data, which in the initial stages is usually imperfect and ambiguous; this imprecision propagates to the output, resulting in erroneous effort estimates from the Constructive Cost Model (COCOMO-II). Moreover, today's software development is component based, which makes effort estimation difficult because of the black-box nature of components, and traditional methods do not support effort estimation for component-based software development (CBSD). A method supporting accurate size prediction for CBSD is therefore essential for accurate effort estimation. Fuzzy-logic-based cost estimation models address the imperfection and ambiguity present in COCOMO-II inputs to make effort estimation more reliable and accurate, while the component point method supports accurate size prediction for CBSD. The first aim of this paper is to show, with comparisons, the importance of size measurement methods for accurate effort estimation; we find that component points are the best size predictor given the black-box nature of components. The second aim is to analyze the use of fuzzy logic in the COCOMO-II model to address the imprecision in its inputs, and we suggest four new cost drivers to improve the accuracy of effort estimation.
Cost, time and quality (CTQ) have been the primary drivers of continuous advancement in software development methodologies. The waterfall model has been with the software industry for over 30 years, offering a sequential and linear development approach; however, its recurring CTQ issues have led, over the last couple of decades, to the rise of agile methodologies offering incremental and iterative development, customer collaboration and reduced time to market. The agile paradigm, on the other hand, brings its own challenges and risks, primarily in terms of runtime infrastructure cost, network accessibility and platform availability, steering toward CTQ issues in a disguised manner. With cloud computing gaining popularity over the last few years, offering almost zero CAPEX (capital expenditure), reduced OPEX (operational expenditure), and high scalability and availability, it is worthwhile to explore agile software development on the cloud. In this article, the migration of an agile project to the cloud is discussed, along with its benefits and challenges at both stages. While cloud adoption is a catalyst for agile development, a few open issues remain to be addressed.
Researchers have investigated test suite reduction techniques to reduce the cost of regression testing. Test suite minimization techniques attempt to remove test cases that have become redundant over time because the requirements they cover are also covered by other test cases in the suite. This paper proposes an approach to software testing using decision tables generated from the software requirements specification defined by the user. The proposed approach allows testers to make an early estimate of errors and thus reduce overall testing cost and time. Moreover, since testing is driven by the requirements specification, the tester needs no knowledge of the code or programming logic. The test case redundancy reduction observed across various problems is up to 33%, saving significant cost and time.
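The redundancy-removal idea can be sketched as a greedy covering problem: each test case covers a set of requirements, and a test case whose requirements are all covered by the kept subset is redundant. The test-case names and coverage sets below are illustrative, not from the paper.

```python
def minimize(suite):
    """Greedily keep a subset of test cases covering every requirement."""
    required = set().union(*suite.values())
    kept, covered = [], set()
    # Repeatedly pick the test case covering the most uncovered requirements.
    while covered != required:
        best = max(suite, key=lambda t: len(suite[t] - covered))
        kept.append(best)
        covered |= suite[best]
    return kept

suite = {
    "T1": {"R1", "R2"},
    "T2": {"R2"},           # redundant: R2 is also covered by T1
    "T3": {"R3", "R4"},
    "T4": {"R1", "R4"},     # redundant: covered jointly by T1 and T3
}
kept = minimize(suite)
print(sorted(kept))  # ['T1', 'T3']
```

Here two of four test cases are dropped, a 50% reduction on this toy suite; the paper reports reductions of up to 33% on its problems.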
Refactoring tools suffer from usability problems in automating mundane tasks, providing user customization, and providing strategies for error recovery. Automating refactoring can be a risky undertaking, since user intervention is required in many scenarios to maintain the internal quality of the system: any design-level refactoring requires complex changes to the code and the validation of preconditions, and could potentially harm the system. Smaller refactorings that handle code smells, however, can be carried out automatically at considerably lower risk. This paper discusses the current state of automated refactoring tools and the development of an automated refactoring tool to extract and propagate Java literal expressions in IntelliJ IDEA.
This paper highlights various issues of complexity and deliverability and their impact on software popularity. An expert intelligence system helps identify the dominant and non-dominant impediments of software. An FRBS (fuzzy rule-based system) is developed to quantify the trade-off between the complexity and deliverability issues of a software system.
To achieve software quality and reliability, we need to identify and minimize errors in the early stages of the software development life cycle, which can be achieved through software verification. Static analysis is a viable surrogate that statically determines and verifies dynamic program properties. Abstract interpretation, the theory of approximation of program semantics, underlies the static analysis of programs. In this work, we provide an overview of the major concepts of abstract interpretation theory and technique, and present an illustrative example showing how to perform an interval analysis of a given program.
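A toy interval-domain sketch of the kind of analysis described: abstract values are closed intervals [lo, hi], and interval arithmetic over-approximates the concrete semantics. The analyzed program fragment (x in [0, 10]; y = 2*x + 1) is invented for illustration.

```python
class Interval:
    """Abstract value of the interval domain: a closed range [lo, hi]."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # [a, b] + [c, d] = [a + c, b + d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # [a, b] * [c, d] = [min of corner products, max of corner products]
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(0, 10)                      # abstract input: x in [0, 10]
y = Interval(2, 2) * x + Interval(1, 1)  # abstracts y = 2 * x + 1
print(y)  # [1, 21]
```

Every concrete execution with x in [0, 10] yields y in [1, 21], so the abstract result soundly over-approximates the concrete behaviour.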
In enterprise systems within a domain, many business processes are similar to one another yet differ in some ways from one organization, project or industry to another; these variations represent the business operations of different organizations. There is therefore a need for configurable business process models, and for a modeling language that can represent all possible variations of an enterprise system. Business processes are modeled using the Business Process Modeling Notation (BPMN), accepted by the OMG as a standard, but BPMN is not sufficiently expressive to model all aspects of a configurable system. The aim of this paper is to identify the various dimensions of configurability and then extend BPMN to Configurable BPMN (C-BPMN), which can model the variability existing in process models and be used to construct configurable process models. C-BPMN is thus used to model the configurable aspects of business processes. The approach is illustrated with an example from the library domain.
With day-by-day advancements in the software development field, new and better concepts for developing software are required, which has led to the emergence of the concept of reusability. In this paper, we study the various attributes, or factors, that affect the reusability of software. The most common factors are identified and their impact is analyzed. The study also assesses measures and metrics to quantify these attributes and thereby justify them.
Software systems evolve through the addition of new functions and the modification of existing ones over time. During this evolution, copy-paste programming and other practices lead to duplication, resulting in model clones and code clones. Since clones are believed to reduce the maintainability of software, several code clone detection techniques and tools have been proposed. This paper proposes a new clone detection technique that counters the hindrance of clones through a 3-way approach to detecting and removing them. The 3-way approach integrates three aspects of software engineering: model-based visual analysis, pattern-based semantic analysis and syntactical code analysis. The process is automated by a tool that requires no parsing, yet detects a significant amount of code duplication.
Business rule identification is an important task of the requirements engineering process. However, the task is challenging because business rules are often not explicitly stated in requirements documents, and even when they are explicit they may not be atomic, or may be vague. In this paper, we present an approach for identifying business rules in available requirements documentation. We first identify business rule categories, and then examine the requirements documentation (including requirements specifications, domain knowledge documents, change requests and requests for proposal) for the presence of these rules. Our study examines how effectively business rules can be identified and classified into these categories using machine learning algorithms, and we report the results of the experiments performed. Our observations indicate that, in terms of overall results, the support vector machine algorithm performed better than the other classifiers: the Random Forest algorithm had higher precision than the support vector machine but relatively low recall, while the Naïve Bayes algorithm had higher recall than the support vector machine. We also report an evaluation of our requirements corpus using stop-word removal and stemming of the requirements statements.
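The classification step can be sketched with a tiny from-scratch Naïve Bayes classifier over bag-of-words features. The requirement statements and the two rule categories below are invented for illustration; the paper's experiments use real requirements corpora and additional classifiers (SVM, Random Forest).

```python
import math
from collections import Counter, defaultdict

# Toy training data: requirement statements labelled with a rule category.
train = [
    ("an order total must not exceed the credit limit", "constraint"),
    ("discount is computed as five percent of the order total", "computation"),
    ("a customer must provide a valid billing address", "constraint"),
    ("tax is computed from the shipping destination", "computation"),
]

prior = Counter(label for _, label in train)
words = defaultdict(Counter)
for text, label in train:
    words[label].update(text.split())
vocab = {w for counts in words.values() for w in counts}

def classify(text):
    """Multinomial Naive Bayes with Laplace smoothing."""
    def score(label):
        total = sum(words[label].values())
        logp = math.log(prior[label] / len(train))
        for w in text.split():
            logp += math.log((words[label][w] + 1) / (total + len(vocab)))
        return logp
    return max(prior, key=score)

print(classify("shipping fee is computed from the order total"))  # computation
```

In the paper's setup, stop-word removal and stemming would be applied to the statements before such features are extracted.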
Aspect-oriented modeling (AOM) has been developed to appropriately modularize crosscutting concerns in UML models. In software engineering, aspects are concerns that cut across multiple modules. In requirements modeling, we analyze interactions and potential inconsistencies, using UML to model requirements in a use-case-driven approach. During requirements specification, a structural model of the problem domain is captured with a class diagram, and use cases refined by activities serve as the join points at which crosscutting concerns are composed. Graph transformation systems provide analysis support for detecting potential conflicts and dependencies between rule-based transformations.
The agile framework of software development has attracted major players in the software industry. This transition of approach has brought significant changes: faster delivery, less documentation, greater satisfaction and more interaction. Effective handling of frequent changes during development is one of the important acceptance criteria for this framework among software professionals. Frequent changes during a sprint cause test cases to accumulate in the suite and may affect the time to delivery of the product to the customer. To overcome this issue of late delivery, an approach to test case selection is proposed in this paper. The approach considers the weights of an undirected graph over the group of user stories in a module, and its optimal nature removes other risks in the development of a user story.
Software reengineering has become an important research domain for increasing the shelf life of legacy systems. The major objectives of reengineering revolve around reducing the cost of investment in Information Technology (IT) infrastructure, by reducing maintenance cost and capitalizing on the existing IT infrastructure; this can be achieved by making the system more adaptable to changing requirements. The decision to reengineer is challenging, as one has to choose between investing in a new system and investing in the legacy system, and the cost of reengineering alone is not a decisive parameter. A better approach is to determine the return on investment (ROI). The ROI of reengineering a system is difficult to calculate, since one has to assume the cost of the project, which itself depends on the reengineering requirements. This paper gives a generalized approach to decision making for reengineering legacy systems; it presents a requirement-specific approach to cost estimation and proposes an ROI computation for the assessment of reengineering.
It is evident that training duration is a key factor in successful therapy, and robot-supported therapy can improve rehabilitation by allowing more intensive training. This paper presents the kinematics, the control architecture and benchmark criteria for evaluating the performance of Robotic Rehabilitation Devices (RRD). Equipped with position, force and impedance controllers, the proposed RRD can deliver patient-cooperative lower-limb therapy, taking the patient's activity into account and supporting him or her only as much as needed [1]. One of the main objectives of a successful lower-limb robotic rehabilitation device is smooth human-machine interaction at the interaction point during the different phases of the gait cycle (haptic behavior). The relationship between the inputs (interaction force, joint angle, rate of change of interaction force) and outputs (impedance, Δτ) of the control system is nonlinear, so this paper proposes a fuzzy rule-based controller for the interaction force at the patient-exoskeleton interaction point. To achieve this objective, impedance, drive torque and angular velocity are modulated so as to reduce the interaction force. Minimum interaction force at the interaction point, and tracking of the defined gait trajectory with minimum error, are set as the benchmarks for evaluating performance across many tasks. The paper evaluates what degree of impedance is ideal for a given interaction force and joint angle in order to maintain a trajectory tunnel, and describes the control architecture of a one-degree-of-freedom lower-limb exoskeleton specifically designed to ensure proper trajectory control for guiding the patient's limb along an adaptive reference gait pattern [2]. The proposed methodology satisfies all the desired criteria for an ideal robotic rehabilitation device.
The essentially infinite storage space offered by cloud computing is quickly becoming a problem for forensic investigators with regard to evidence acquisition, forensic imaging and extended data analysis time. It is apparent that the amount of stored data will at some point become impossible to image in practice, preventing investigators from completing a full investigation. In this paper, we address these issues by determining the relationship between acquisition time and storage capacity, using remote acquisition to obtain data from virtual machines in the cloud. A hypothetical case study is used to investigate partial versus full acquisition of data from the cloud and to determine how each approach affects the duration, accuracy and outcome of the forensic investigation. Our results indicate that the relationship between image acquisition time and storage volume is not linear, owing to several factors affecting remote acquisition, especially over the Internet. Performing the acquisition using cloud resources showed a considerable reduction in time compared to the conventional imaging method: for a 30 GB storage volume, the least time was recorded for the cloud's snapshot functionality combined with the dd command, reducing acquisition time by almost 77 percent, while FTK Remote Agent proved the most efficient remote tool, with an almost 12 percent reduction in time over other acquisition methods. Furthermore, the timelines produced with the help of the case study showed that the hybrid approach should be preferred over the complete approach for acquisition from the cloud, especially in time-critical scenarios.