IEEE 802.11 is one of the many standards available for wireless communication in the radio frequency range. IEEE 802.11b defines the medium access control (MAC) layer for wireless local area networks. The wireless local area network (WLAN) is dominated by the IEEE 802.11 standard, which has become one of the main focuses of WLAN research. Most ongoing research projects are simulation based, as actual hardware implementation is not cost effective. The core of the IEEE 802.11b standard comprises CSMA/CA and the physical and MAC layers, but only the MAC layer of the transmitter is modeled in this paper, using VHDL. VHDL (VHSIC Hardware Description Language) is defined by IEEE as a tool for creating electronic systems because it supports the development, verification, synthesis, and testing of hardware designs, the communication of hardware design data, and the maintenance, modification, and procurement of hardware. It is a common language for electronics design and development prototyping. The main purpose of the IEEE 802.11 standard is to provide wireless connectivity to devices that require fast installation, such as laptops, PDAs, or mobile devices in general, inside a WLAN. MAC procedures are defined for accessing the physical medium, which can be infrared or radio frequency. The Wi-Fi MAC transmitter module is divided into five blocks: the data unit interface block, controller block, payload data storage block, MAC header register block, and data processing block. In this paper we consider only two blocks, the payload data storage block and the data processing block; the other blocks (data unit interface, controller, and MAC header register) are not discussed further.
This paper presents a novel approach to simulating a knowledge-based system for the diagnosis of breast cancer using soft computing tools such as artificial neural networks (ANNs) and neuro-fuzzy systems. The feed-forward neural network has been trained using three ANN algorithms, the back propagation algorithm (BPA), radial basis function (RBF) networks, and learning vector quantization (LVQ) networks, as well as by an adaptive neuro-fuzzy inference system (ANFIS). The simulator has been developed in MATLAB, and performance is compared on metrics such as diagnostic accuracy, training time, number of neurons, and number of epochs. The simulation results show that this knowledge-based approach can be used effectively for early detection of breast cancer, helping oncologists to enhance survival rates significantly.
ATM, the ultimate solution for the broadband integrated services digital network (B-ISDN) to provide integrated multimedia services including voice, video, and data, has entered the limelight with the increased demand for such services. Hence, ATM must be capable of supporting a variety of service classes and providing appropriate QoS for each class. This may force us to sacrifice low-priority traffic classes in order to satisfy the QoS requirements of high-priority traffic classes in case of congestion. Many possibilities have been suggested for traffic control in terms of QoS, and cell loss priority (CLP) control, originally introduced in ATM networks for congestion control, is one of them. CLP control can be applied to the doubly finite queue buffer priority scheme. This scheme is based on priority queuing disciplines that secure the cell loss ratio (CLR) of higher-priority cells at the cost of losing low-priority cells, by examining whether the CLP bit of each incoming cell is set to '0' or '1': a cell with CLP = '0' is treated as high priority and a cell with CLP = '1' as low priority. Priority queuing is especially appropriate where WAN (wide area network) links are congested from time to time. The doubly finite queue (DFQ) and multi-source virtual dynamic routing (MSVDR) algorithm uses an adaptive and iterative path search approach and takes advantage of the PNNI hierarchical structure. It consists of six major components.
A major use of microarray data is to classify genes with similar expression profiles into groups in order to investigate their biological significance. Cluster analysis is by far the most widely used technique for gene expression analysis and has grown into an important research topic in a wide variety of fields owing to its broad applicability. A number of clustering methods exist, each with one or more limitations, such as dependence on initial parameters or inefficiency in the presence of noisy data. This paper proposes a novel clustering algorithm for gene microarray data which is free from the above limitations. Besides being simple to implement, it has proved to be very effective even in the presence of noisy data. Further, it is extremely exhaustive and hence less likely to get stuck at local optima.
In distance vector routing [4, 7, 9, 10], each router collects routing information from its neighbors and forwards its own information to them. It was the original ARPANET routing algorithm and is used in the Internet as RIP [2]. This methodology of collecting and broadcasting routing information gives rise to well-known problems: (1) two-node loop instability, (2) three-node loop instability, and (3) count-to-infinity. We describe the distance vector routing algorithm and these problems in the introduction and discuss related work addressing them. New algorithms are then introduced to solve these problems, along with corrective actions for issues that arise from implementing the new methodology. We reduce the loopholes in the distance vector routing algorithm by applying a new concept of test packets, which helps routers receive and forward correct, up-to-date information about available and unavailable routers.
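For context, the classical distance-vector update that the abstract builds on can be sketched as follows; this is a generic Bellman-Ford-style merge step, not the paper's test-packet mechanism, and the table layout and function names are illustrative assumptions.

```python
# Illustrative distance-vector update (Bellman-Ford style); names are hypothetical.
INF = float("inf")

def dv_update(own_table, neighbour, neighbour_table, link_cost):
    """Merge a neighbour's advertised distance vector into our routing table.

    own_table / neighbour_table: dict mapping destination -> (cost, next_hop).
    Returns True if any entry changed (i.e. we must re-advertise).
    """
    changed = False
    for dest, (adv_cost, _) in neighbour_table.items():
        candidate = link_cost + adv_cost
        current_cost, _ = own_table.get(dest, (INF, None))
        if candidate < current_cost:
            own_table[dest] = (candidate, neighbour)
            changed = True
    return changed
```

Without additional mechanisms (split horizon, poison reverse, or the test packets proposed here), bad news about a failed router propagates only one hop per exchange, which is exactly what produces the loop-instability and count-to-infinity behaviour listed above.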
In this paper, we propose a new routing protocol for ad hoc wireless networks based on the DSR (Dynamic Source Routing) on-demand routing protocol. Congestion is the main reason for packet loss in mobile ad hoc networks. If the workload is distributed among the nodes in the system according to the delay of the paths, the average execution time can be minimized and the lifetime of the nodes maximized. We propose a scheme to distribute load between multiple paths according to the time stamp values of the packets on the associated paths. Our simulation results confirm that TASR improves throughput and reduces the number of collisions in the network.
EDF (earliest deadline first) has been proved to be an optimal scheduling algorithm for single-processor real-time systems, and it also performs well for multiprocessor systems. A limitation of EDF is that its performance decreases exponentially when the system becomes even slightly overloaded. ACO (ant colony optimization) based scheduling performs well in both underloaded and overloaded conditions, but it takes more execution time than EDF. In this paper, an adaptive algorithm for multiprocessor real-time systems is proposed that combines these two algorithms. The proposed algorithm, along with EDF and the ACO-based algorithm, is simulated for a real-time multiprocessor system. Performance is measured in terms of success ratio (SR) and effective CPU utilization (ECU), and the execution time taken by each scheduling algorithm is also measured. Analysis and experiments reveal that the proposed algorithm is fast as well as efficient in both underloaded and overloaded conditions for real-time multiprocessor systems.
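As background, the EDF baseline referred to above simply dispatches the ready task with the earliest absolute deadline; a minimal sketch is given below, where the task representation and miss handling are assumptions rather than the paper's implementation.

```python
# Minimal EDF dispatcher sketch (illustrative only; the task fields are assumptions).

def edf_pick(ready_tasks, now):
    """Return the ready task with the earliest absolute deadline, or None.

    Each task is a dict with at least a 'deadline' field; tasks whose
    deadline has already passed are treated as misses and skipped here.
    """
    feasible = [t for t in ready_tasks if t["deadline"] >= now]
    if not feasible:
        return None
    return min(feasible, key=lambda t: t["deadline"])
```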
New quality-controlled compression methods for electrocardiogram (ECG) signals, based on the Maxlift and Medlift transforms, are presented. The ECG signal is transformed using Maxlift and Medlift, and the transformed coefficients are thresholded using the bisection algorithm so that a predefined, user-specified percentage root mean square difference (PRD) is matched within a tolerance. A binary lookup table is then built to store the position map of zero and non-zero coefficients (NZC). The NZC are quantized by a Max-Lloyd quantizer followed by arithmetic coding, and the lookup table is encoded by Huffman coding. Results are presented for ECG signals of varying characteristics and show that Medlift performs better than Maxlift.
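The bisection step described above can be sketched generically as follows, assuming a user-supplied routine that thresholds the coefficients, reconstructs the signal, and returns the PRD (PRD(%) = 100 * sqrt(sum((x - x_rec)^2) / sum(x^2))), and assuming that the PRD grows monotonically with the threshold; the function names and bracketing strategy are illustrative, not the authors' code.

```python
# Generic bisection search for a threshold that meets a target PRD within a tolerance.
# prd_of_threshold(T) is assumed to threshold the transform coefficients at T,
# reconstruct the ECG, and return the resulting PRD (monotone non-decreasing in T).

def find_threshold(prd_of_threshold, target_prd, tol=0.1, t_lo=0.0, t_hi=1.0, max_iter=60):
    # Expand the upper bracket until the PRD exceeds the target (or give up).
    for _ in range(max_iter):
        if prd_of_threshold(t_hi) >= target_prd:
            break
        t_hi *= 2.0
    for _ in range(max_iter):
        t_mid = 0.5 * (t_lo + t_hi)
        prd = prd_of_threshold(t_mid)
        if abs(prd - target_prd) <= tol:
            return t_mid
        if prd < target_prd:
            t_lo = t_mid          # threshold too small: PRD still below target
        else:
            t_hi = t_mid          # threshold too large: PRD above target
    return 0.5 * (t_lo + t_hi)
```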
This article deals with the development of an improved clustering technique for categorical data based on the identification of points having significant membership in multiple classes. Cluster assignment of such points is difficult and often affects the actual partitioning of the data. Consequently, it may be more effective if the points associated with maximum confusion regarding their cluster assignment are first identified and excluded from consideration in the first stage of the algorithm, and then assigned to one of the identified clusters by an ANN classifier in the second stage. In the first stage we use our genetic algorithm based and simulated annealing based fuzzy clustering methods, as well as the well-known fuzzy C-medoids algorithm, when the number of clusters is known a priori. The performance of the proposed clustering algorithms has been compared with the average linkage hierarchical clustering algorithm, in addition to genetic algorithm based fuzzy clustering, simulated annealing based fuzzy clustering, and fuzzy C-medoids with ANN, for a variety of artificial and real-life categorical data sets. Statistical significance tests have also been performed to establish the superiority of the proposed algorithm.
The paper gives guidelines for choosing the most suitable hashing method and hash function for a particular problem. After studying various problems, criteria have been identified for predicting the best hash method and hash function for a given problem. We present six classes of hash functions within which most problems can find their solution. The paper discusses hashing and its components and states the need for hashing for faster data retrieval. Hashing methods are used in many applications across computer science, ranging from spell checkers and database management applications to the symbol tables generated by loaders, assemblers, and compilers. Various forms of hashing are used for different problems, such as dynamic hashing, cryptographic hashing, geometric hashing, robust hashing, Bloom hashing, and string hashing. Finally, we conclude which type of hash function is suitable for which kind of problem.
Based on a kernel-induced distance measure, a novel noise-resistant fuzzy clustering algorithm called the kernel noise clustering (KNC) algorithm is proposed. KNC is an extension of the noise clustering (NC) algorithm proposed by Dave: a new distance is introduced by replacing the Euclidean distance used in the objective function of the NC algorithm. The kernel-induced distance is more robust than the Euclidean and alternative distances. Moreover, the properties of the new algorithm show that KNC is a suitable and effective method for clusters with non-spherical shapes, such as annular rings. In addition, KNC solves noisy annular-ring-shaped problems better than FKCM does.
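For reference, the kernel-induced distance typically used in kernel fuzzy clustering is given below (the exact kernel and objective used in the paper may differ):

```latex
d_K^2(\mathbf{x}_k,\mathbf{v}_i)
  = K(\mathbf{x}_k,\mathbf{x}_k) - 2K(\mathbf{x}_k,\mathbf{v}_i) + K(\mathbf{v}_i,\mathbf{v}_i),
\qquad
K(\mathbf{x},\mathbf{y}) = \exp\!\left(-\frac{\lVert\mathbf{x}-\mathbf{y}\rVert^2}{\sigma^2}\right)
\;\Rightarrow\;
d_K^2(\mathbf{x}_k,\mathbf{v}_i) = 2\bigl(1 - K(\mathbf{x}_k,\mathbf{v}_i)\bigr).
```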
Image restoration is the process of recovering an original image from its degraded version. One cause of image degradation is relative motion between the camera and the object, which may blur the captured image during its formation. In this paper, a generalized partial differential equation (PDE) based image model is proposed to recover the original image from the blurred image in the spatial domain itself. For digital implementation, the resulting PDE is discretized using the Lax method, a modified form of the forward time centred space (FTCS) differencing scheme that stabilizes FTCS. The PDE that models the motion blur and restoration process is thus a 1D flux-conservative (wave) equation with an added diffusion term, which has the form of the Navier-Stokes equation for a viscous fluid. The proposed method is implemented in MATLAB for various grey-scale test images and various motion-blur lengths in pixels, and subjective analysis of the results shows the desired behaviour.
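For reference, the classical FTCS and Lax discretizations of a 1D flux-conservative (advection) equation u_t + v u_x = 0, which the abstract refers to, are the standard schemes below; the paper's full model additionally carries the diffusion term.

```latex
% FTCS (unconditionally unstable for pure advection):
u_j^{n+1} = u_j^{n} - \frac{v\,\Delta t}{2\Delta x}\left(u_{j+1}^{n} - u_{j-1}^{n}\right)

% Lax: replace u_j^n by its spatial average; stable under the CFL condition |v|\Delta t/\Delta x \le 1:
u_j^{n+1} = \tfrac{1}{2}\left(u_{j+1}^{n} + u_{j-1}^{n}\right)
            - \frac{v\,\Delta t}{2\Delta x}\left(u_{j+1}^{n} - u_{j-1}^{n}\right)
```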
Layout is an important issue in designing sensor networks. This paper proposes a new approach for the energy-efficient layout of a wireless sensor network. The sensors communicate with each other to transmit their data to a high-energy communication node which acts as an interface between the data processing unit and the sensors. Optimizing sensor locations is essential to provide communication for a longer duration. An energy-efficient layout with good coverage, based on a multi-objective particle swarm optimization algorithm, is proposed here. During optimization, sensors move to form a uniformly distributed network. The two objectives considered are coverage and lifetime, and a set of trade-off network layouts is obtained. The simulation results also show improved performance as the number of generations in the algorithm increases.
The grid is an emerging technology for enabling resource sharing and coordinated problem solving in dynamic, multi-institutional virtual organizations. The resource-matching problem in the grid involves assigning resources to tasks in order to satisfy task requirements and resource policies. Just as the World Wide Web has changed the way society deals with information, researchers and educators now expect the grid to change the way they deal with computing resources and, ultimately, with knowledge. It is worth noting that, so far, the computational grid has reached a much higher level of maturity than other types of grid. This paper deals with the implementation of a computer program that employs grid computing to reduce computational time. The program is written in Java and incorporates a grid-gain strategy for faster evolution. Average turnaround time, waiting time, and execution time have been calculated for the given data set. The execution times of the algorithm with and without the grid environment are compared, and the best choice among CPU scheduling algorithms (SJF (shortest job first), priority, and FCFS (first come, first served)) is determined for the given data set. In the future, in order to construct large, repeatable experiments, a simulation environment designed to examine application scheduling on the grid, such as SimGrid, could be used.
Disk scheduling has an important role in the QoS guarantees of soft real-time environments such as video-on-demand and multimedia servers. To date, several disk scheduling algorithms have been proposed to optimize the scheduling of disk requests. One of the most recent is GSR, which improves disk throughput through a global rescheduling scheme for real-time disk requests. In this paper, we propose a new algorithm based on GSR called IGSR (improved GSR). The proposed method improves throughput and decreases the number of missed-deadline requests by employing an FTS (feasible task sequence). Simulation results show that IGSR decreases the number of missed deadlines by 50% and increases disk throughput by 12% compared with GSR. Also, when the input is infeasible, IGSR provides about 40% more feasible output schedules.
Task duplication based scheduling algorithms generate shorter schedules without sacrificing efficiency, but they leave the computing resources over-consumed due to heavy duplication. In this paper, we try to optimize the duplications after generating a schedule without affecting the overall schedule length (makespan). We suggest two workflow scheduling algorithms with economical duplication: reduced duplication (RD) for homogeneous systems and heterogeneous economical duplication (HED) for heterogeneous systems. In these algorithms, a static task schedule is generated using an insertion-based task-duplication scheduling strategy and is then optimized by removing duplicated tasks whose removal does not adversely affect the makespan. Further, in some situations the earlier schedule of a task becomes unproductive after it has been duplicated later on different processor(s); the algorithms identify and remove such schedules in order to reduce processor consumption. Removing such useless tasks generates larger scheduling holes, which can be better utilized for scheduling other parallel and distributed applications, for example in a grid environment. The simulation results show that the RD and HED algorithms generate better schedules with fewer duplications and remarkably less processor consumption compared with SD and CPFD for homogeneous systems and HLD and LDBS for heterogeneous systems.
Fault-tolerance in an interconnection network is very important for its continuous operation over a relatively long period of time. Fault-tolerance is the ability of the system to continue operating in the presence of faults. In this paper a new irregular network IABN has been proposed and an efficient routing procedure has been defined to study the fault tolerance of the network. The behavior of this network has been analysed and compared with regular network ABN, under fault free conditions and in the presence of faults. It has been found that in an IABN, there are six possible paths between any source-destination pair, whereas ABN has only two paths. Thus the proposed network IABN is more fault-tolerant.
The paper presents a task allocation technique for multiple applications onto a heterogeneous distributed computing system to minimize the overall makespan. An existing critical-path-based algorithm for scheduling the tasks of a single application has been used to allocate the tasks of multiple applications onto the heterogeneous distributed computing system. The paper discusses how a composite application is formed from the multiple applications and how the critical-path-based algorithm is applied efficiently to allocate the tasks of the different applications.
This paper presents the de-tagging of cardiac magnetic resonance images (MRI) in the curvelet domain. The second-generation discrete curvelet transform effectively captures the directional activities of an image, as well as the directional high-intensity peaks of the magnitude spectrum, in its subbands. Hence, the curvelet transform is used to identify the high directional peaks corresponding to the tag patterns in the magnitude spectrum and to suppress them using the curvelet coefficients. The de-tagging method presented here involves three steps in the curvelet domain: (1) since the fine-scale subband of the curvelet decomposition captures the tag lines, the fine-scale isotropic wavelet subband coefficients are suppressed at the initial stage; (2) the subbands that capture the tag patterns are identified using the directional subband coefficients; and (3) the coefficients of the identified subbands are filtered out. The proposed method shows better results compared with several existing de-tagging methods.
Semantic image annotation is a difficult task in annotation based image retrieval (ABIR) systems. Several techniques proposed in the past lag in efficiency and robustness. In this paper we propose a novel technique for automatically annotating multi-object images with higher accuracy. Colour entropy is used to eliminate the image background, and then the normalized cut principle is applied for object separation. Our experimental results show that the multi-class n-SVM performs better with colour features extracted using histograms and shape features extracted using region contours.
Optimization plays an important role in problems related to engineering, management, commerce, and other fields. Recent trends in optimization point towards genetic algorithms and evolutionary approaches. Different genetic algorithms have been proposed, designed, and implemented for single-objective as well as multi-objective problems. GAS3 [2006] (Genetic Algorithm with Species and Sexual Selection), proposed by Dr. M. M. Raghuwanshi and Dr. O. G. Kakde, is a distributed quasi-steady-state real-coded genetic algorithm. In this work, we modify the GAS3 algorithm by introducing a reclustering module after its simple distance-based parameterless clustering (species formation). GAS3KM (GAS3 modified by using the K-means algorithm) uses K-means clustering for this reclustering. Experimental results show that GAS3KM outperforms the GAS3 algorithm when tested on unimodal and multimodal test functions.
In this paper, we present a novel design of a wavelet based edge detection technique. Edge detection is an important task in image processing, and edges in images can be mathematically defined as local singularities. Until recently, the Fourier transform was the main mathematical tool for analyzing singularities; however, the Fourier transform is global and not well adapted to local singularities, making it hard to find the location and spatial distribution of singularities. Wavelet analysis is a local analysis and is especially suitable for time-frequency analysis, which is essential for singularity detection. This fact motivated us to develop a technique using the Haar wavelet to detect edges in an image. The proposed technique has been demonstrated on iris imagery, and the reported results have been compared with a Daubechies D4 wavelet based edge detection technique.
The shuffled frog leaping algorithm (SFLA) is a recent meta-heuristic memetic optimization algorithm that is simple and has a fast calculation time. It mimics the social behavior of a species (frogs) found in nature. The clonal selection algorithm (CSA) is an optimization algorithm based on processes occurring in the natural immune system. In this paper, a novel algorithm is proposed that is based on a modified CSA and SFLA: the modified CSA is used for the best candidates in the population to progress, and SFLA for the worst candidates to move towards the best candidates. The power of the algorithm lies in the fact that it avoids stagnation and has a very fast convergence speed. The algorithm is tested against SFLA on five functions, where it greatly outperforms SFLA in terms of convergence rate and the optimum obtained.
Wireless sensor networks are a technology of the future owing to their potential application in all kinds of monitoring, automation, and management tasks. Localization has become very important in wireless sensor networks because any application that needs information from a specific region of the deployment requires the location of the individual nodes. Localization may also help network-layer routing protocols: with knowledge of location, directional packet forwarding can be used, which reduces network load. Since a wireless sensor network is a large-scale, energy-constrained network, it is necessary to design localization algorithms that consume little power. Here we develop an algorithm that uses a directional antenna in each sensor node and tries to locate each node with some error probability.
The state of network routing today is the result of theoretical progress, technological advances, and operational experience; it is also shaped by economic and policy issues. Searching packet information at the router is a complicated task, and hence packet classification is often a performance bottleneck in network infrastructure; therefore, it has received much attention in the research community. In general, there have been two major threads of research addressing this problem: algorithmic and architectural. The proposed scheme considers the IPv4 packet header structure: it extracts the IP addresses, port addresses, and protocol field from the header and matches them against the rules in the classifier. The rules are arranged such that each field is divided into two equal parts and stored in a static data structure, and a binary search tree is generated if the algorithm encounters the same rule. It is important to note that the uniqueness of a rule is judged by its source and destination address fields. The proposed scheme significantly reduces processing time by simplifying the heuristics used in the static allocation of the array used as its data structure.
The identification of nonlinear MIMO plants finds extensive application in stability analysis, controller design, modeling of intelligent instrumentation, analysis of power systems, and modeling of multipath communication channels. For the identification of such complex nonlinear plants, the recent trend is to employ nonlinear structures and to train their parameters by adaptive optimization algorithms. The area of artificial immune systems (AIS) is emerging as an active and attractive field involving models, techniques, and applications of great diversity. In this paper a new optimization algorithm based on AIS is developed and hybridized with a FLANN structure to build a new model for efficient identification of nonlinear dynamic systems. A simulation study of a few benchmark MIMO identification problems is carried out to show the superior performance of the proposed model over standard GA- and PSO-based approaches.
We aim at an empirical analysis of distributed vertex coloring algorithms. To this end, we compare the empirical performance of a recently proposed distributed vertex coloring algorithm [8] with that of Luby's algorithm. To get good coverage, we look at the cycle graph on n vertices, cliques, and random graphs from the family G(n, p), controlling n, p, and np. The results of our experiments fairly demonstrate the improvement in the bit complexity of the algorithm proposed in [8]. Our results also match those of the experiments of Panconesi et al. [3] on Luby's algorithm.
In this paper, considering the deficiencies of each available technique for speech recognition, an advanced method is presented that is able to classify speech signals with high accuracy (98%) in minimum time. In the presented method, the recorded signal is first preprocessed; this stage includes denoising with Mel-frequency cepstral analysis and feature extraction using discrete wavelet transform (DWT) coefficients. These features are then fed to a multilayer perceptron (MLP) network for classification. Finally, after training the neural network, effective features are selected with the UTA algorithm.
Many complex problems such as speech recognition, bioinformatics, climatology, control, and communication are solved using hidden Markov models (HMMs). Often, optimization problems are modeled as an HMM learning problem in which the HMM parameters are either maximized or minimized. In general, the Baum-Welch (BW) method is used to solve the HMM learning problem, giving only local maxima/minima in exponential time. In this paper, we model the HMM learning problem as a discrete optimization problem so that randomized search methods can be used to solve it. We have implemented the metropolis algorithm (MA) and the simulated annealing algorithm (SAA) to solve the discretized HMM learning problem, and a comparative study of these randomized algorithms against the Baum-Welch method for estimating the HMM parameters has been made. The metropolis algorithm is found to reach maxima in the minimum number of transactions compared to the Baum-Welch and simulated annealing algorithms.
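The Metropolis search referred to above follows the standard accept/reject rule; a generic sketch is shown below, where the scoring function (e.g., the log-likelihood of the observation sequence under candidate HMM parameters), the proposal move, and the temperature are assumptions rather than the paper's exact discretization.

```python
import math
import random

# Generic Metropolis search sketch (illustrative; not the paper's exact encoding).
# 'score' is the quantity being maximized, e.g. the log-likelihood of the
# observation sequence under the candidate (discretized) HMM parameters.

def metropolis(initial, score, propose, temperature=1.0, steps=10000):
    current, current_score = initial, score(initial)
    best, best_score = current, current_score
    for _ in range(steps):
        candidate = propose(current)          # e.g. perturb one discretized parameter
        cand_score = score(candidate)
        delta = cand_score - current_score
        # Accept better moves always, worse moves with Boltzmann probability.
        if delta >= 0 or random.random() < math.exp(delta / temperature):
            current, current_score = candidate, cand_score
            if current_score > best_score:
                best, best_score = current, current_score
    return best, best_score
```

Simulated annealing differs only in gradually lowering the temperature over the iterations.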
Trigonometric functions have had a great impact on the world of electronics. In this paper, an algorithm is proposed that calculates the sine of any angle (in degrees) with an accuracy of 99.5% or more with respect to the standard values. The algorithm occupies less execution memory than the presently known algorithm, executes very fast, and is ideal for software as well as hardware implementation.
The behavior of an aircraft can be described by a set of non-linear differential equations assuming six degrees of freedom (three for linear motion and three for angular motion) about the x, y, and z axes. Aircraft use PID controllers in their autopilot control systems for pitch, yaw, and roll. The PID controllers [6, 12] have gains that can be tuned either manually or by optimization methods such as genetic algorithms to obtain better control and enhanced performance. Genetic algorithms (GAs) are a powerful tool for the global optimization of various problems; they are machine learning models based on the process of natural evolution. Here, genetic algorithms are successfully applied to three-axis autopilot (pitch, roll, and yaw) control of an aircraft, using MATLAB simulation to tune the PID controller and derive optimized PID gains. Further, the autopilot system has been validated using the FlightGear software [1], in which the PID gains evaluated using genetic algorithms are fed as input to the autopilot program for the Boeing 747-400, the aircraft model used here.
OTIS (Optical Transpose Interconnection System) is a popular model of optoelectronic parallel computers. It is a hybrid interconnection network using electronic and optical communication channels. In recent years, many parallel algorithms for various numeric and non-numeric computations have been developed on these networks. In this paper, we propose a parallel algorithm for sorting N (= n²) data elements on an OTIS model of parallel computers called the OTIS-mesh of trees. Our algorithm is based on sparse enumeration sort (Horowitz et al., 2002) and is shown to run in 4.5 log N electronic moves + 5 OTIS moves.
IEEE 802.15.4-2003 is a standard for low-rate, low-power, low-memory wireless personal area networks (WPANs). The physical layer (PHY) and medium access control (MAC) specifications are given by IEEE, and the network layer by the ZigBee Alliance. Two kinds of network are supported: tree and mesh. In a tree network, no routing table is required for routing. After its great success in PANs, this technique has also been applied to business networks. The main problem with this routing is that the maximum depth of the network is 16 hops, and in some cases the network cannot grow because addresses are exhausted in one part while another part is only lightly loaded. In this paper, we provide a unified address borrowing scheme that can easily be applied to grow the network beyond 16 hops and to overcome the address exhaustion problem by borrowing addresses.
In this paper, an approximate algorithm is proposed to solve the facility layout problem (FLP), which is formulated as a quadratic assignment problem (QAP). In the proposed approach, a linear assignment problem (LAP), which is solvable in polynomial time, is formulated, and the proposed heuristic then solves the FLP starting from the set of LAP solutions. To evaluate the performance of the heuristic, a comparison between optimal and heuristic solutions is also provided. The approach is tested on numerous benchmark problems available in the literature, and an encouraging comparative performance is reported.
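For reference, the standard Koopmans-Beckmann form of the QAP that the FLP is mapped to is shown below (f is the flow between facilities and d the distance between locations); the paper's LAP relaxation itself is not reproduced here.

```latex
\min_{x} \;\sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k=1}^{n}\sum_{l=1}^{n} f_{ik}\, d_{jl}\, x_{ij}\, x_{kl}
\quad \text{s.t.} \quad
\sum_{j=1}^{n} x_{ij} = 1 \;\;\forall i, \qquad
\sum_{i=1}^{n} x_{ij} = 1 \;\;\forall j, \qquad
x_{ij} \in \{0,1\}.
```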
This paper reports a study of the better-fit heuristic for the classical bin-packing problem, proposed in [1]. Better-fit replaces an existing object in a bin with the next object in the list if that object can fill the bin better than the object replaced. It takes O(n²m) time, where n is the number of objects and m is the number of distinct object sizes in the list. It behaves as both an off-line and an on-line heuristic once the condition of permanent assignment of objects to a bin is removed. Experiments have been conducted on representative problem instances and evaluated in terms of expected waste rates. Better-fit outperforms the off-line best-fit-decreasing heuristic on most of the instances and always performs better than the on-line best-fit heuristic.
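A sketch of the better-fit idea as described above is given below; the fallback placement rule, tie-breaking, and data layout are assumptions for illustration, not the code from [1].

```python
# Sketch of the better-fit replacement idea (illustrative assumptions throughout).

def better_fit_pack(sizes, capacity):
    bins = []                      # each bin is a list of object sizes
    pending = list(sizes)
    while pending:
        obj = pending.pop(0)
        best = None                # (new_fill, bin_index, replaced_index)
        for bi, b in enumerate(bins):
            used = sum(b)
            for oi, packed in enumerate(b):
                new_fill = used - packed + obj
                # the new object must fit and must fill the bin strictly better
                if new_fill <= capacity and new_fill > used:
                    if best is None or new_fill > best[0]:
                        best = (new_fill, bi, oi)
        if best is not None:
            _, bi, oi = best
            replaced = bins[bi][oi]
            bins[bi][oi] = obj                 # swap the new object in ...
            pending.insert(0, replaced)        # ... and re-pack the replaced one next
        else:
            # fall back to best-fit: fullest bin that still fits, else open a new bin
            candidates = [(sum(b), bi) for bi, b in enumerate(bins) if sum(b) + obj <= capacity]
            if candidates:
                bins[max(candidates)[1]].append(obj)
            else:
                bins.append([obj])
    return bins

print(better_fit_pack([5, 6, 4, 3, 7], capacity=10))
```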
OCR reading technology has benefited from the evolution of high-powered desktop computing, which allows the development of more powerful recognition software that can read a variety of common printed fonts and handwritten texts. Still, it remains a highly challenging task to implement an OCR system that works under all possible conditions and gives highly accurate results. This paper describes an OCR system for printed text documents in Malayalam, a language of the South Indian state of Kerala. The input to the system is the scanned image of a page of text and the output is a machine-editable file. Initially, the image is preprocessed to remove noise and skew; lines, words, and characters are then segmented from the processed document image. The proposed method uses wavelet multi-resolution analysis to extract features and a feed-forward back-propagation neural network to accomplish the recognition task.
Conflict resolution in context-aware computing is receiving more attention from researchers as ubiquitous computing environments take multiple users into account. In multi-user ubiquitous computing environments, conflicts among users' contexts need to be detected and resolved. For this, application developers or end-users specify conflict situations, and the underlying ubiquitous computing middleware detects and resolves conflicts between applications when one of these situations arises. In this paper, we propose a conflict manager which employs an array of conflict-resolving algorithms for context-aware applications. Conflicts arise when multiple users try to access an application; to resolve them, the conflict resolver aggregates the colliding preferences of the users and recommends specific content depending on criteria such as the role of the user or the priority assigned to the user. This paper proposes various algorithms, each of which tries to resolve conflicts among users while considering issues such as starvation. To show the usefulness of the proposed conflict resolution method, we apply it to Context-aware TV, a smart testbed. The conflict manager is built from an array of algorithms that work on the principles of preemption and non-preemption and use role-, priority-, and time-slice-based approaches, thereby enabling the system to provide comprehensible, well-suited solutions that cater to the versatile needs of a family. The algorithms utilize factors such as priority, credits, age, and time in the above approaches. The goal of this paper is to enable context-aware applications to offer personalized services to multiple users by resolving service conflicts among them.
There has been high demand for low-power and area-efficient implementation of complex arithmetic operations in many digital signal processing applications. The CORDIC (coordinate rotation digital computer) algorithm is a unique technique for performing various complex arithmetic functions using shift-add iterations. This paper proposes an enhanced version of a new CORDIC algorithm (obtained from conventional CORDIC by using the Taylor series expansion of the sine and cosine functions) discussed in earlier work. A recursive architecture implementation of the revised new CORDIC algorithm improves throughput by 50% compared to the previous design. The revised algorithm, the VLSI implementation of the design, and its performance comparison with the earlier design are discussed.
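For context, the conventional CORDIC rotation-mode iteration from which the new algorithm is derived can be sketched as follows; this is the textbook shift-add scheme, not the revised algorithm proposed in the paper.

```python
import math

# Conventional CORDIC in rotation mode (shift-add iterations), shown for context.

def cordic_sin_cos(theta, iterations=16):
    """Approximate (cos(theta), sin(theta)) for theta in [-pi/2, pi/2] radians."""
    # Precomputed elementary rotation angles atan(2^-i) and the inverse gain factor.
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    k = 1.0
    for i in range(iterations):
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0                        # rotate towards the residual angle
        x, y = x - d * y * (2.0 ** -i), y + d * x * (2.0 ** -i)
        z -= d * angles[i]
    return x * k, y * k                                    # scale out the CORDIC gain

print(cordic_sin_cos(math.pi / 6))                         # ~ (0.8660, 0.5000)
```

In hardware the multiplications by 2^-i become barrel shifts, which is what makes the scheme attractive for low-power VLSI implementation.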
In this paper an improved genetic algorithm is proposed to solve optimization problems by applying fixed-point algorithms for continuous self-mappings in Euclidean space. The algorithm operates on a subdivision of the search space and generates integer labels at the vertices; only a mutation operator is used, relying on a genetic encoding designed around the concept of relative coordinates. In this setting, whether every individual of the population lies in a completely labeled simplex can be used as an objective convergence criterion to determine when the algorithm terminates. The algorithm combines genetic algorithms with fixed-point algorithms and gradually refines the mesh. Finally, an example is examined that demonstrates the effectiveness of the method.
The mobile agent paradigm has attracted much attention recently, but it is still not widely used. One of the barriers is the difficulty of protecting an agent from failure, because an agent is able to migrate over the network autonomously. The design and implementation of mechanisms to relocate computations require careful consideration of fault tolerance, especially on open networks like the Internet. In the context of mobile agents, fault tolerance prevents a partial or complete loss of the agent, i.e., it ensures that the agent arrives at its destination. In this paper, we propose a fault-tolerance mechanism based on replication and voting for a mobile agent platform system. The proposed mechanism has been implemented, and the effects of varying the replication degree, replication methods, and voting frequencies have been evaluated on the Secure Mobile Agent Platform System (SMAPS). We also report on the reliability and performance issues involved in mobile agents for Internet applications.
Intrusion detection systems are vital elements of the protective measures that defend computer systems and networks from abuse. The drastic increase in network speed and detection workload necessitates highly efficient network intrusion detection systems (NIDS). Since most NIDSs need to check for a large number of known attack patterns in every packet, pattern matching becomes the most significant part of signature-based NIDSs in terms of processing and memory resources. To support segmentation of network traffic and to detect fragmented attacks, we propose a method which performs both 'partial' and 'full' pattern matching using the CDAWG (compact directed acyclic word graph) data structure. In the present work, we designed and implemented an efficient string matching algorithm using the CDAWG structure. Experimental results show that this algorithm is 2.5 times faster than the currently used Aho-Corasick algorithm.
Heart failure is of increasing importance due to increasing life expectancy. For clinical diagnosis, parameters describing the condition of the heart are needed and can be derived automatically by image processing. Accurate and fast image segmentation algorithms are of paramount importance for a wide range of medical imaging applications. In this paper, we present a method that uses the heat equation with a variable threshold technique for seed selection in random walk based image segmentation.
In this paper, I present a new approach for generating sub-words using Chebyshev polynomials in place of the traditional S-Box formation in the AES algorithm, which is based on look-up tables. S-Box computation is a time-consuming operation in the AES algorithm, as it is required in every round, so it is desirable to overcome the overhead of S-Box computation using other computational techniques. To the best of my knowledge, this is a truly new approach for expanding keys in the Passport protocol.
Kernel discriminative common vector (KDCV) is one of the most effective non-linear techniques for feature extraction from high-dimensional data, including images and text. This paper presents a new algorithm called "improved kernel discriminative common vector" (IKDCV), which further improves the overall performance of KDCV by integrating boosting parameters with KDCV techniques. The proposed method possesses several appealing properties. First, like all kernel methods, it handles non-linearity in a disciplined manner. Second, by introducing pair-wise class discriminant information into the discriminant criterion, it further increases classification accuracy. Third, by calculating significant discriminant information within the within-class scatter space, it also effectively deals with the small sample size problem. Fourth, it constitutes a strong ensemble-based KDCV framework by taking advantage of boosting parameters and KDCV techniques. The new method is applied to the extended YaleB face database and achieves better recognition performance by resolving overlap between classes. Experimental results demonstrate the promising performance of the proposed method compared with other linear and non-linear methods.
Object counting is a challenging problem with different solutions depending on the available computing power and the nature of the data to be processed. Memory efficiency, simplicity, and speed are very important requisites for algorithms used in modern systems, including distributed systems and wireless sensor networks, where it is extremely advantageous to do basic preprocessing of sensor data in the nodes themselves. The reduced computing power available in the nodes poses a challenge that can be overcome by using algorithms with simple steps. We present an algorithm to monitor continuity, count objects, and measure parameters such as area by isolating patterns and objects from data using simple computations. The memory efficiency, speed, and flexibility of the proposed algorithm are discussed. It has many broad applications, including counting objects on a conveyor, monitoring people at a traffic signal, pattern isolation from multicoloured images, and monitoring continuity in real-time image data from sensors. We describe the experimental setup used to implement this algorithm.
Radio frequency identification (RFID) was not designed primarily for indoor location sensing; however, location can be sensed in a viable and cost-effective way by employing a suitable algorithm. This paper attempts to extend the current LANDMARC algorithm by introducing the 'z' coordinate. The presented work utilizes passive tags instead of active ones, which cuts the cost of the RFID tracking system drastically. The received signal strength indicator (RSSI) is provided directly by the Intel R1000 transceiver, which eliminates the scanning time of the existing LANDMARC algorithm. The presented results show the correctness and efficiency of the simulation. The error in the location estimate is only 0.5, which is acceptable for any tracking system. Furthermore, this algorithm can be combined with other route-tracking algorithms to give the exact path of the tag.
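A LANDMARC-style location estimate extended with a z coordinate can be sketched as below; the choice of k, the weighting, and the data layout follow the published LANDMARC scheme and are assumptions rather than the exact method of this paper.

```python
# Sketch of a LANDMARC-style k-nearest-neighbour estimate with a z coordinate.
# ref_tags: list of (x, y, z, rssi_vector), where rssi_vector holds the RSSI of
# that reference tag at each reader; target_rssi is the tracked tag's RSSI vector.

def landmarc_3d(target_rssi, ref_tags, k=4, eps=1e-9):
    # Euclidean distance in signal (RSSI) space between the target and each reference tag.
    scored = []
    for x, y, z, ref_rssi in ref_tags:
        e = sum((a - b) ** 2 for a, b in zip(target_rssi, ref_rssi)) ** 0.5
        scored.append((e, x, y, z))
    scored.sort()                                   # k nearest neighbours in signal space
    nearest = scored[:k]
    weights = [1.0 / (e * e + eps) for e, _, _, _ in nearest]
    wsum = sum(weights)
    est_x = sum(w * x for w, (_, x, _, _) in zip(weights, nearest)) / wsum
    est_y = sum(w * y for w, (_, _, y, _) in zip(weights, nearest)) / wsum
    est_z = sum(w * z for w, (_, _, _, z) in zip(weights, nearest)) / wsum
    return est_x, est_y, est_z
```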
Binocular stereo vision systems have been actively used for real-time obstacle avoidance in autonomous mobile robotics for many years. The computation of free space is one of the essential tasks in this field. This paper describes an obstacle avoidance algorithm for mobile robots that enables them to navigate among obstacles. While most stereo-vision-based work operates on the disparity image, we propose a method based on reducing the 3D point cloud obtained from the stereo camera, after 3D reconstruction of the environment, to build a stochastic grid representation of the navigation map. The algorithm assigns each cell of the grid a value (free, obstacle, or unknown), which helps the robot avoid obstacles and navigate in real time. The algorithm has been successfully tested on "Lakshya", a UGV Dagger platform, in both outdoor and indoor conditions.
The problem of the longest common subsequence is to find the longest subsequence common to two input sequences. It can be employed in many fields such as speech and signal processing, data compression, syntactic pattern recognition, string processing (bioinformatics), and genetic engineering. This paper describes the design of parallel longest common protein subsequence hardware, implemented in an FPGA device, using a dynamic programming (DP) algorithm. Such algorithms have computational complexity proportional to the product of the lengths of the two sequences involved; since both input sequences are usually very long, the processing time is long. The data dependency in DP imposes a serious constraint on the algorithm and does not allow its direct parallelization. To alleviate this problem, a reconfigurable accelerator for the DP algorithm is presented. Its main features include a multistage PE (processing element) design with even stage delay, which significantly reduces FPGA resource usage and hence allows more parallelism to be exploited, and a pipelined control mechanism. Based on these two techniques, the proposed accelerator reaches an 82-MHz frequency in an Altera EP1S30 device and provides a speedup of more than 660 compared to a standard desktop platform with a 2.8-GHz Xeon processor and 4 GB of memory. The results show that reconfigurable computing can offer interesting solutions for bioinformatics problems.
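For reference, the standard LCS dynamic-programming recurrence whose data dependency the accelerator must respect is shown below; cells on the same anti-diagonal are mutually independent, which is what a systolic PE array exploits.

```latex
% Standard LCS recurrence for sequences a_1..a_m and b_1..b_n:
L(i,0) = L(0,j) = 0, \qquad
L(i,j) =
\begin{cases}
L(i-1,j-1) + 1, & a_i = b_j,\\[2pt]
\max\bigl(L(i-1,j),\, L(i,j-1)\bigr), & a_i \neq b_j.
\end{cases}
```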
In this paper, an attempt has been made to formulate the multiple objective fractional transportation problem (MOFTTP). The multiple objective fractional transportation problem with a non-linear bottleneck objective function is related to the lexicographic multiple fractional time transportation problem, which is solved by a lexicographic primal code. An algorithm is developed to determine an initial efficient basic solution of this MOFTTP. A real-life example is also solved using this algorithm, minimizing the ratios of total actual to total standard arrival time, departure time, and congestion time for transporting bikes.
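For orientation, a generic linear fractional transportation objective (one ratio objective per criterion) has the form below; the paper's bottleneck-time variant replaces these cost sums with bottleneck (max-type) time terms, which is not reproduced here.

```latex
\min\; Z_r(x) \;=\; \frac{\displaystyle\sum_{i=1}^{m}\sum_{j=1}^{n} c^{(r)}_{ij}\, x_{ij}}
                         {\displaystyle\sum_{i=1}^{m}\sum_{j=1}^{n} d^{(r)}_{ij}\, x_{ij}},
\qquad r = 1,\dots,p,
\quad \text{s.t.}\quad
\sum_{j=1}^{n} x_{ij} = a_i,\;\;
\sum_{i=1}^{m} x_{ij} = b_j,\;\;
x_{ij} \ge 0 .
```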
The task scheduling problem is to arrange the tasks of an application on computing resources so as to achieve minimum schedule length. Many effective scheduling algorithms have been proposed, but most of them assume that the network is fully connected and contention-free. To make the problem more practical, link contention constraints are considered here. In this paper we propose an effective and efficient scheduling algorithm, called the migration scheduling algorithm (MSA), based on tabu search and extended from list scheduling. The edges among the tasks are also scheduled by treating the communication links between processors as resources. To demonstrate the effectiveness of the proposed algorithm, we compare it with the dynamic level scheduling (DLS) algorithm and list scheduling without contention. The proposed algorithm has admissible time complexity and is suitable for regular as well as irregular task graph structures. Experimental results show that the algorithm with tabu search produces optimal schedules in reasonable time.
Fault-tolerant routing algorithms are a key concern in on-chip communication. This paper examines fault-tolerant communication algorithms for use in networks-on-chip (NoC). We propose an improved wormhole-switched routing algorithm for the 2-dimensional mesh, based on the f-cube3 algorithm, to decrease message latency. The key existing concept is the use of a number of virtual channels (VCs) over a physical link; this paper proposes improvements that make better use of the VCs while their number remains fixed. We show that when a message is not blocked by a fault, all VCs can be used, whereas f-cube3 uses only one of them. Furthermore, the strength of the improved algorithm is demonstrated by comparing simulation results for both f-cube3 and the improved algorithm, if-cube3.
The connected dominating set (CDS) of a graph acts as a virtual backbone in an ad-hoc wireless network. In this paper, a simple and efficient algorithm is proposed for determining the CDS of a graph. The algorithm starts by finding a root node in the graph, and a priority queue is maintained centrally to decide whether an element should be part of the CDS. This concept is extended to a distributed version of the algorithm in which each dominated node maintains a priority queue and acts as dominator for its local domain only. Simulation results show that the proposed approach is very efficient in determining the CDS, especially in large and dense graphs.
The novel representation of a graph using an edge-based data structure, as proposed in [1], is an adapted version of the half-edge structure traditionally used in digital geometry processing [6]. This edge-based data structure provides an efficient way of storing and accessing graphs. However, the shortest path algorithm implemented in [1] on this data structure proves to be inefficient when dealing with extensive graphs of varying scales and degrees. The main concern is to minimize the time and space overheads involved, as they take a great toll on system resources. The shortcomings of this approach motivated our work. In this paper, we present an approach based on greedy and intuitionist programming strategies, which derives its advantage from heuristics that apply to all kinds of graphs, hence achieving greater efficiency. This aspect of the algorithm is also borne out by the experimental results we have obtained.
A multilayer perceptron (MLP) is a feedforward artificial neural network model that maps sets of input data onto a set of appropriate outputs. It is a modification of the standard linear perceptron in that it uses three or more layers of neurons (nodes) with nonlinear activation functions, and it is more powerful than the perceptron in that it can distinguish data that are not linearly separable, i.e., not separable by a hyperplane. MLP networks are general-purpose, flexible, nonlinear models consisting of a number of units organised into multiple layers. The complexity of an MLP network can be changed by varying the number of layers and the number of units in each layer. Given enough hidden units and enough data, it has been shown that MLPs can approximate virtually any function to any desired accuracy. This paper presents a performance comparison of training algorithms for the multilayer perceptron: back-propagation, the delta rule, and the perceptron rule. The perceptron rule is a steepest-descent-type algorithm that normally has a slow convergence rate, and its search for the global minimum often becomes trapped at poor local minima. The current study investigates the performance of the three algorithms for training MLP networks. It was found that the perceptron algorithm performs much better than the other algorithms.
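As a reference point for the comparison above, the classical perceptron learning rule can be sketched as follows; the learning rate, epoch count, and example data are illustrative assumptions, not the study's experimental setup.

```python
# Sketch of the classical perceptron learning rule (one of the compared schemes).

def train_perceptron(samples, labels, n_features, lr=0.1, epochs=50):
    """samples: list of feature vectors; labels: 0/1 targets."""
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            y = 1 if activation >= 0 else 0
            err = t - y                     # delta-style error term
            if err != 0:                    # update only on misclassification
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b

# Example: learn the linearly separable AND function.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
T = [0, 0, 0, 1]
print(train_perceptron(X, T, n_features=2))
```

Back-propagation generalizes this error-driven update through the hidden layers by applying the chain rule to the nonlinear activations.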
For multi-objective optimization problems, an improved multi-objective adaptive niche genetic algorithm based on the Pareto front is proposed in this paper. In this algorithm, the rank value and the niche value are introduced to evaluate individuals. The evolving population adopts adaptive crossover and adaptive mutation probabilities, which adjust the search scope according to solution quality. The experimental results show that this algorithm converges faster and achieves a broader distribution of Pareto-optimal solutions.
The ballistic trajectory computation program is part of a weapon delivery system and is responsible for accurate delivery of the weapon. Mach number (M) and coefficient of drag (Cd) are critical parameters in external ballistic computation and are used in computing the impact point. The Cd vs. M relation is not available in functional form; the data are available only in discrete form from wind tunnel tests. To obtain a functional form, polynomial curve fitting is performed. Different numerical methods have been tried and compared to find the best polynomial fit. A polynomial relation between Cd and M is found for many cases using least squares approximation with Crout's method. It is observed that the results obtained by this approach are of very high accuracy and have improved the computation of the ballistic path. A comparative analysis between the wind tunnel data and the estimated data is presented. The results were field tested, and the performance was found to be within the expected accuracy in determining parameters such as forward throw and impact point.
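The least-squares fit with Crout's method described above can be sketched generically as follows: build the normal equations from a Vandermonde matrix and solve them with a Crout LU decomposition. The interfaces and degree handling are assumptions for illustration, not the program used in the paper.

```python
# Sketch: least-squares polynomial fit of Cd vs. M via normal equations and Crout LU.

def crout_solve(A, b):
    """Solve A x = b using Crout's decomposition (U has a unit diagonal)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for j in range(n):
        U[j][j] = 1.0
        for i in range(j, n):                      # column j of L
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):                  # row j of U
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    y = [0.0] * n                                  # forward solve L y = b
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n                                  # back solve U x = y (unit diagonal)
    for i in range(n - 1, -1, -1):
        x[i] = y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))
    return x

def polyfit_crout(mach, cd, degree):
    """Return coefficients a0..ad minimizing sum (Cd - p(M))^2."""
    n = degree + 1
    # Normal equations (V^T V) a = V^T cd, with Vandermonde matrix V[i][j] = M_i^j.
    A = [[sum(m ** (i + j) for m in mach) for j in range(n)] for i in range(n)]
    b = [sum(c * m ** i for m, c in zip(mach, cd)) for i in range(n)]
    return crout_solve(A, b)
```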
This paper proposes a low-power linear feedback shift register (LFSR) based test pattern generation (TPG) technique that reduces power dissipation during testing. The correlation between consecutive patterns is higher during normal mode than during testing. The proposed approach reduces the transitions in the test patterns generated by a conventional LFSR; the transitions are reduced by increasing the correlation between successive bits. The simulation results show that the testing power of the interrupt controller benchmark circuit is reduced by 46% with respect to the power consumed during testing with a conventional LFSR.
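For context, a conventional Fibonacci LFSR used as a test pattern generator can be sketched as follows; the width, taps, and seed are illustrative, and the paper's contribution (increasing the correlation between successive bits to reduce transitions) is not shown here.

```python
# Minimal Fibonacci LFSR sketch used as a conventional test pattern generator.

def lfsr_patterns(seed, taps, width, count):
    """Yield 'count' successive patterns of a Fibonacci LFSR.

    seed  : non-zero initial state (int)
    taps  : bit positions (0 = LSB) XOR-ed to form the feedback bit
    width : register length in bits
    """
    state = seed & ((1 << width) - 1)
    for _ in range(count):
        yield state
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)

# Example: a 4-bit maximal-length configuration (period 15).
print([format(p, "04b") for p in lfsr_patterns(0b1001, taps=(3, 2), width=4, count=8)])
```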
The time-frequency representation (TFR) is a powerful technique for identifying, measuring, and processing the time-varying nature of signals. In the recent past, the S-transform has gained a lot of interest for time-frequency localization due to its superiority over comparable existing methods: it produces the progressive resolution of the wavelet transform while maintaining a direct link to the Fourier transform. The S-transform also has the advantage of providing multiresolution analysis while retaining the absolute phase of each frequency component of the signal. However, it suffers from poor energy concentration in the time-frequency domain, with degraded time resolution at lower frequencies and poor frequency resolution at higher frequencies. In this paper we propose a modified Gaussian window which scales with frequency in an efficient manner to improve the energy concentration of the S-transform. The potential of the proposed method is analyzed using a variety of test signals, and the results reveal that the proposed scheme resolves time-frequency localization better than the standard S-transform.
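For reference, the standard S-transform with its frequency-dependent Gaussian window (width 1/|f|) is given below; the proposed modification rescales this window, and its exact parameters are not reproduced here.

```latex
S(\tau, f) \;=\; \int_{-\infty}^{\infty} x(t)\,
\frac{|f|}{\sqrt{2\pi}}\,
\exp\!\left(-\frac{(\tau - t)^2 f^2}{2}\right)
e^{-i 2\pi f t}\, dt .
```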
In recent years, an increasing amount of computer network research has focused on cluster systems in order to achieve higher performance and lower cost. Memory management becomes a prerequisite when handling applications that require immense volumes of data, e.g. satellite images used for remote sensing, defense purposes and scientific applications. Load imbalance is the major defect that reduces the performance of a cluster system running parallel programs in SPMD (single program multiple data) form. Dynamic load balancing can solve the load imbalance problem of a cluster system and reduce its communication cost. This paper proposes a new algorithm that correlates the scheduling of incoming jobs and the balancing of the loads at each node in a multi-cluster. This method assigns weights to each node to schedule an incoming job, and the load is then balanced dynamically using memory locality as the main factor. The main parameters used in this algorithm are partition size, CPU usage, memory usage, page faults and execution time. Tests with various applications show a significant improvement in cluster performance.
Visual optimization is a very interesting topic for application users for many purposes. It provides the user with an interactive platform where, by varying different parameter settings, one can customize a solution. Several attempts at developing generalized evolutionary optimizers are found in the literature, but they work well for function optimization problems only. Solving combinatorial optimization problems on such a general platform is a difficult task. In this paper, we have tried to solve the partitional clustering problem using a generalized visual stochastic optimization algorithm that was initially developed for function optimization problems only.
The allocation of distribution centers or facility centers is an important issue for any company. The problem of facility location is faced by both new and existing companies, and its solution is critical to a company's eventual success. This issue has received high priority in the last few years and is equally important for the private as well as the public sector. The k-center problem is one of the basic problems in facility location. The aim is to locate a set of k facilities for a given set of demand points, such that for any demand point the nearest facility is as close as possible. Heuristics are a popular way to tackle such problems. In this paper we present an intensive analysis of heuristic approaches for the k-center problem.
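One classic heuristic in this family is Gonzalez's farthest-first traversal, a well-known 2-approximation for the k-center problem; the sketch below is illustrative only and is not tied to the analysis in the paper.

    import math

    def greedy_k_center(points, k):
        """Farthest-first traversal: repeatedly open a facility at the demand
        point currently farthest from its nearest open facility."""
        centers = [points[0]]                              # arbitrary first center
        dist = [math.dist(p, centers[0]) for p in points]  # distance to nearest center
        while len(centers) < k:
            far = max(range(len(points)), key=dist.__getitem__)
            centers.append(points[far])
            dist = [min(d, math.dist(p, points[far])) for d, p in zip(dist, points)]
        return centers, max(dist)                          # facilities and covering radius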
To improve the speed and accuracy of numerical optimization methods, this paper proposes a new technique using fuzzy systems. Although the proposed method is employed to improve the efficiency of the genetic algorithm and ant colony optimization, it can be applied to any swarm intelligence method. The main idea of this method is to control positive and negative feedback to achieve a suitable trade-off between them depending on the convergence rate of the algorithm. The performance of the proposed method is demonstrated on simulation examples.
Software requirement prioritization has gained a lot of importance in industrial projects. In practice, requirement prioritization is often skipped because a large number of requirements demands too many comparisons. An earlier algorithm of ours used a B-tree to prioritize requirements, where the number of comparisons required was on the order of t * log_t(n), n being the number of requirements to prioritize and t a constant depending on how many keys a B-tree node can hold. That method drastically reduced the number of comparisons compared with previous requirement prioritization methods. In this paper we explore a further reduction in the number of comparisons over our previous approach, as here we can prioritize n requirements in just log2(t) * log_t(n) comparisons.
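To make the claimed counts concrete, the toy calculation below (illustrative only) contrasts a linear scan within each visited B-tree node against a binary search within the node; the sample values of n and t are made up.

    import math

    def comparison_estimates(n, t):
        """Rough comparison counts to place one requirement among n, for a
        B-tree whose nodes hold about t keys."""
        height = math.log(n, t)                   # roughly log_t(n) levels are visited
        previous_scheme = t * height              # linear scan per node:   t * log_t(n)
        proposed_scheme = math.log2(t) * height   # binary search per node: log2(t) * log_t(n)
        return previous_scheme, proposed_scheme

    # e.g. n = 1000 requirements, t = 16 keys per node
    print(comparison_estimates(1000, 16))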
Protein structure prediction is a fundamental problem in biology; its most important application is drug design, where we search for a specific shape for protein interaction. In this paper we solve this problem in a 2D environment using a method analogous to robot navigation. In other words, the problem is equivalent to finding the optimal structure of the path of a robot that traces a predetermined zero-one sequence. The goal of our robot is to produce a structure of this string that has the largest number of 1-1 adjacencies. The number of 1-1 neighbourhoods expresses the number of H-H bonds; more H-H bonds give lower energy and, finally, a stable structure of hydrophobic-polar bonds. We present some navigation rules and then move our robot, aiming for the maximum number of 1-1 adjacencies to obtain a stable protein. This work has many opportunities to be extended.
In this article, a distributed clustering technique that is suitable for dealing with large data sets is presented. This algorithm is a modified version of the very common k-means algorithm, with suitable changes to make it executable in a distributed environment. For large input sizes the running time of the k-means algorithm is very high, measured as O(TKN), where K is the number of desired clusters, T is the number of iterations, and N is the number of input patterns. The high time complexity of serial k-means can be heavily reduced by executing it in a distributed parallel environment. Here, we describe a new distributed clustering algorithm and compare its performance with some other existing algorithms. Experimental results show that this distributed approach provides higher speedups while maintaining all necessary characteristics of the serial k-means algorithm. We have successfully applied the new algorithm to cluster a number of data sets, including a large satellite image data set.
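A minimal sketch of the general distribution idea (an assumed structure, not the authors' exact algorithm): each node labels its own block of patterns against broadcast centers and returns partial sums, which a coordinator reduces into new centers.

    import numpy as np

    def local_step(block, centers):
        """Work done independently on one node's block of patterns."""
        d = ((block[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        sums = np.zeros_like(centers)
        counts = np.zeros(len(centers), dtype=int)
        for j in range(len(centers)):
            members = block[labels == j]
            counts[j] = len(members)
            if counts[j]:
                sums[j] = members.sum(axis=0)
        return sums, counts

    def distributed_kmeans(blocks, centers, iterations=10):
        """Coordinator loop: broadcast centers, reduce partial sums, update."""
        centers = centers.copy()
        for _ in range(iterations):
            parts = [local_step(b, centers) for b in blocks]   # run in parallel in practice
            sums = sum(p[0] for p in parts)
            counts = sum(p[1] for p in parts)
            nonempty = counts > 0
            centers[nonempty] = sums[nonempty] / counts[nonempty, None]
        return centers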
In the process of finding solutions to real life problems, decision makers often remain in confusion about whether the data they are using to solve their problems are exact or not. In solving a linear programming problem, optimize Z = c^T x subject to (Ax)_i <= b_i for all i, x >= 0, there may be confusion about the values of c^T, b_i and A, and due to this confusion the value of the objective function may also be in doubt. Several researchers have used fuzzy set theory for linear programming problems, but this theory cannot tackle the confusion part of the data. There is no method in the literature for solving linear programming problems when decision makers are in confusion about the exactness of the data. To incorporate this confusion, the concept of vague sets has been used. In this paper, we extend the idea of fuzzy linear programming to vague linear programming and propose a new method to solve linear programming problems under the assumption that the decision makers are confused only about the values of b_i and that there is no confusion or uncertainty about the values of c^T and A. To explain the advantage of the proposed method, a numerical example is solved and the obtained results are explained.
This paper gives a new approach for encrypting different types of files such as text, image, audio and video. The file is considered as a binary string. It is first broken into blocks of equal size, and each block is then represented as a square matrix. Helical transposition is applied first, and then columnar transposition is done on the basis of a session key. The session key is a 24-bit sequence used to generate a decimal number sequence, according to which the columnar transposition is performed. For decryption, both the session key and the anti-helical algorithm are required.
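As a hedged illustration of the columnar stage only (the helical step and the 24-bit session-key derivation follow the paper), a square block can be transposed and restored as below; the key-derived column order is a made-up example.

    def columnar_transposition(bits, key_order):
        """Write the bit string row-wise into an n x n block, then read out the
        columns in the session-key-derived order."""
        n = len(key_order)
        rows = [bits[i * n:(i + 1) * n] for i in range(n)]
        return "".join(rows[r][c] for c in key_order for r in range(n))

    def inverse_columnar(cipher, key_order):
        """Place each read-out column back into its original position."""
        n = len(key_order)
        cols = {c: cipher[i * n:(i + 1) * n] for i, c in enumerate(key_order)}
        return "".join(cols[c][r] for r in range(n) for c in range(n))

    # Hypothetical 4x4 block and key-derived column order
    assert inverse_columnar(columnar_transposition("0110101111001001", [2, 0, 3, 1]),
                            [2, 0, 3, 1]) == "0110101111001001"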
The paper proposes an efficient quad-tree based filtering algorithm for the restoration of impulse corrupted digital images. The quad-tree decomposition stage facilitates pixel classification in the impulse detection phase and minimizes misclassification of signals as impulses by clearly distinguishing high frequency image details from impulse corrupted pixels. The adaptive restoration phase identifies the most suitable signal restorer from among the true signals of a reliable neighborhood. Experimental results in terms of subjective assessment and objective metrics favor the proposed algorithm over many top-ranking filters at all impulse noise levels.
The performance of population based search techniques like Differential Evolution (DE) depends largely on the selection of the initial population. A good initialization scheme not only helps in giving a better final solution but also helps in improving the convergence rate of the algorithm. In the present study we propose a novel initialization scheme which uses the concept of quadratic interpolation to generate the initial population. The proposed DE is validated on a test bed of 10 benchmark problems with varying dimensions, and the results are compared with classical DE using random initialization and with DE using opposition based learning to generate the initial population. The numerical results show that the proposed algorithm, using quadratic interpolation to generate the initial population, accelerates the convergence speed quite considerably.
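One common form of the quadratic interpolation operator, sketched here under the usual three-point vertex formula (the paper's exact scheme for choosing triplets may differ), is:

    import numpy as np

    def qi_point(a, b, c, fa, fb, fc, eps=1e-12):
        """Vertex of the quadratic through (a, fa), (b, fb), (c, fc), applied
        component-wise to the decision vectors."""
        num = (b**2 - c**2) * fa + (c**2 - a**2) * fb + (a**2 - b**2) * fc
        den = (b - c) * fa + (c - a) * fb + (a - b) * fc
        return 0.5 * num / (den + eps)

    def qi_initial_population(random_pop, f):
        """Refine a random population by combining random triplets through qi_point."""
        n = len(random_pop)
        refined = []
        for i in range(n):
            j, k = np.random.choice([x for x in range(n) if x != i], 2, replace=False)
            a, b, c = random_pop[i], random_pop[j], random_pop[k]
            refined.append(qi_point(a, b, c, f(a), f(b), f(c)))
        return np.array(refined)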
Character recognition is an important area in the image processing and pattern recognition fields. Handwritten character recognition has received extensive attention in academic and production fields. The recognition system can be either online or off-line. Off-line handwriting recognition is a subfield of optical character recognition. India is a multi-lingual and multi-script country, where eighteen official scripts are accepted and over a hundred regional languages are spoken. In this paper we present a zone and distance metric based feature extraction system. The character centroid is computed and the image is divided into n equal zones. The average distance from the character centroid to each pixel present in a zone is computed, and this procedure is repeated for all the zones present in the numeral image. Finally, n such features are extracted for classification and recognition. A support vector machine is used for the subsequent classification and recognition. We obtained a 97.75% recognition rate for Kannada numerals.
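A compact sketch of the described feature extraction, assuming a binary numeral image with foreground pixels equal to 1; the zone grid size below is a placeholder, not the paper's chosen value.

    import numpy as np

    def zone_centroid_distance_features(img, zones_per_side=5):
        """Average distance from the character centroid to the foreground
        pixels of each zone; empty zones contribute a zero feature."""
        ys, xs = np.nonzero(img)
        cy, cx = ys.mean(), xs.mean()                       # character centroid
        h, w = img.shape
        zh, zw = h // zones_per_side, w // zones_per_side
        feats = []
        for zr in range(zones_per_side):
            for zc in range(zones_per_side):
                zone = img[zr * zh:(zr + 1) * zh, zc * zw:(zc + 1) * zw]
                zys, zxs = np.nonzero(zone)
                if len(zys) == 0:
                    feats.append(0.0)
                    continue
                d = np.hypot(zys + zr * zh - cy, zxs + zc * zw - cx)
                feats.append(d.mean())
        return np.array(feats)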
Kinematic measurement using active markers is a well-known method. This paper aims at the development of an image analysis algorithm and a LabVIEW based software tool for active marker based gait analysis. Active markers in the form of light-emitting diodes (LEDs) were positioned at anatomical landmarks to measure the coordinated kinematics of human joints.
The Particle Swarm Optimization (PSO) is a stochastic, population-based algorithm for search and optimization over a multidimensional space. Many engineering design problems in real life have complicated optimization functions which require massive computational power to solve in reasonable time when implemented sequentially. Thus, scalable parallel implementations are required to speed up these algorithms and reduce the overall design process time. In this paper, we present a model for parallelization of the PSO algorithm and its implementation on the Cell Broadband Engine architecture.
An intelligent biometric system aims at localizing and detecting human faces in supplied images so that further recognition of persons and of their facial expressions becomes easy. Based on facial expression, it is possible to infer whether a person intends to be involved in some terror activity or not. This paper presents an automatic, fast and efficient detection of the face in an image so that the detected face can be used for facial expression recognition. Images obtained from a Sony digital camera (7.1 megapixel resolution) and from an emotion database are considered in our work. The method combines skin detection algorithms in modified RGB, YCbCr and HSV colour spaces and gives better results. Experimental results show that the algorithm detects and localizes the human face in an image with good accuracy.
In wavelet based image coding, a variety of orthogonal and biorthogonal filters have been developed by researchers for signal analysis and compression. The selection of wavelet filters plays a crucial part in achieving an effective coding performance, because there is no filter that performs the best for all images. The aim of this paper is to examine a set of wavelet filters from different families for implementation in still image compression system and to analyze their effects on image quality. Three quality measures viz. peak signal to noise ratio (PSNR), picture quality scale (PQS) and a recently developed quality measure structural similarity index (SSIM), which compares local patterns of pixel intensities that have been normalized for luminance and contrast, are used for comparison at various bit rates on selected test images. Our aim here is to suggest the most suitable wavelet filter for different test images based upon these quality measures.
In this paper we propose an algorithm for a wide variety of workload conditions, including I/O intensive and memory intensive loads. In our setting, however, the CPU requirements are minimal, as the incoming tasks are mostly video fetch tasks which require negligible CPU interaction but a lot of I/O. The goal of the proposed algorithm is to balance the requests across the entire cluster of servers based on their memory, CPU and I/O requirements so that the response time and the completion time for each job are minimized. Preemptive migration of tasks is not taken into consideration here. A typical transaction in our model can be defined as the duration between the acceptance of a task into the system and the fulfillment of its requirements by the system. The requirements of a task are video files which the system has to load from a secondary storage device and stream continuously to the end user who initiated the request. We have compared our algorithm (IOCMLB) with two other allocation policies, and trace driven simulation shows that our algorithm performs better than the other two policies.
This paper describes an algorithm for making decisions on the allocation of funds to the most deserving among competing applicants. Funds are a consumable resource required by organizations to execute their projects and schemes, especially government schemes. The algorithm considers multiple decision making factors to allocate weights to applicants. Based on the weight assigned and the probability of successful implementation of the proposed project, the algorithm places an applicant into one of three levels, viz. level 1, level 2 or level 3. Level 1 applicants are allocated 100 percent of the funds if sufficient funds are available, otherwise weighted or proportionate funds. Level 2 applicants are allocated weighted funds, and level 3 applicants are found ineligible for availing funds and hence denied allocation. On simulating the algorithm in MATLAB, it has been found that level 1 applicants are given priority over level 2 applicants. This algorithm is suitable where multiple applicants apply for funds from multiple categories of sources. A major advantage is that it considers the past experience of applicants in implementing projects.
Inventory management is one of the significant fields in supply chain management. Efficient and effective management of inventory throughout the supply chain significantly improves the ultimate service provided to the customer. Hence there is a need to determine the inventory to be held at different stages in a supply chain so that the total supply chain cost is minimized. Minimizing the total supply chain cost means minimizing the holding and shortage costs in the entire supply chain, which can be done only by optimizing the base stock level at each member of the supply chain. The dilemma here is that the excess stock level and shortage level are highly dynamic from period to period. In this paper, we develop a novel and efficient approach using a Genetic Algorithm which determines the most likely excess stock level and shortage level needed for inventory optimization in the supply chain so as to minimize the total supply chain cost.
High quality compression of video content can greatly enhance the bandwidth utilization over scarce resource networks. This paper discusses a system that optimizes the MPEG2 encoder by using a multicore processor (cell broadband engine). MPEG video compression is quite difficult to achieve in real time. The hardware solutions proposed for this problem are expensive besides being obsolete. Also, the use of a distributed environment and multiprocessing system give problems such as communication overhead. The paper presents a portable, fault-tolerant, parallelized software implementation of the MPEG2 encoder. The use of a platform like multicore processor aids in the implementation of parallel multimedia applications, such as the encoder. The encoder is expected to perform better than various encoders available today. Also, our encoder does not require available network processing resources during execution.
MicroRNAs (miRNAs) are small non-coding RNA molecules that post-transcriptionally regulate gene expression by base-pairing to mRNAs. Prediction of miRNA-target transcript pairs is now at the forefront of current research. A number of experimental and computational approaches have already detected thousands of targets for hundreds of human miRNAs. However, most computational target prediction methods suffer from high false positive and false negative rates. One reason for this is the marked deficiency of negative examples, or non-target data. Current machine learning based target prediction algorithms lack a sufficient number of negative examples to train the machine properly, because only a limited number of biologically verified negative miRNA-target transcripts have been identified compared with true miRNA-target examples. Hence researchers have to rely on artificial negative examples. But it has been observed that these artificially generated negative examples cannot provide good prediction accuracy on independent test data sets. It is therefore necessary to generate more confident artificial negative examples. In the proposed article we predict potential miRNA-target pairs with higher sensitivity and specificity based on a new way of generating negative examples. First, artificial miRNAs are generated that are believed not to be true miRNAs. In this regard, we use a novel approach of K-mer exchange between key and non-key regions of the miRNA. Based on the false miRNAs, we search for their potential targets by scanning entire 3' untranslated regions (UTRs) using the target prediction algorithm miRanda. Based on the newly generated negative examples and a set of biologically verified positive examples, we train an SVM classifier and classify a set of independent test samples. In this regard we have generated a set of 90 experimentally verified context specific features. Our prediction algorithm has been validated with a...
A wireless sensor network (WSN) consists of autonomous devices equipped with sensors to cooperatively monitor certain physical or environmental phenomena, such as temperature, vibration, pressure, or pollutants, at different locations. These devices, called sensor nodes (SNs), have sensing, computation and wireless communication capabilities. One of the significant features of SNs is their limited battery power, and it is sometimes not feasible to recharge or replace the batteries. Thus, efforts must be made at all layers to minimize power consumption so that the network lifetime is increased. In this paper we present the design and implementation of a Group Aware Network Management (GANM) protocol for WSNs. GANM optimally utilizes the closeness of nodes falling within a predefined diameter, called the grouping diameter (gD). The set of nodes which mutually fall within this diameter with respect to each other can be allowed to form a group. The members of a group can be made to go into a low-energy sleep mode while one of the members remains awake to represent its group. The protocol also enables a fault tolerant scheduling scheme among the group members such that there is always one member awake to listen to the surroundings, i.e., to sense and to transmit. The group is a "black box" for the rest of the network and can be treated as a single node by any routing protocol applied at the abstract level.
In this paper we propose a fully causal two-dimensional hidden Markov model in which the state transition probability depends on all neighbouring states for which causality is preserved. We have modified the expectation maximization (EM) algorithm for evaluating the proposed model. A novel 2D Viterbi algorithm is formulated to decode the proposed model with reduced complexity when decoding larger blocks. The proposed model can be used in areas such as image segmentation and classification. In particular, when applied to poor-quality images such as ultrasound images with more ambiguous regions, our model showed promising results compared with existing models.
A comparative analysis using different intelligent techniques has been carried out for the economic load dispatch (ELD) problem considering line flow constraints for the regulated power system, to ensure a practical, economical and secure generation schedule. The objective of this paper is to minimize the total production cost of thermal power generation. Economic load dispatch (ELD) has been applied to obtain the optimal fuel cost, and optimal power flow has been carried out to obtain ELD solutions with minimum operating cost satisfying both unit and network constraints. In this paper, various intelligent techniques such as the genetic algorithm (GA), evolutionary programming (EP), particle swarm optimization (PSO), and differential evolution (DE) have been applied to obtain ELD solutions. The proposed algorithm has been tested on two sample systems, viz. the IEEE 30-bus system and a 15-unit system, and the results obtained by the various intelligent techniques are compared. The solutions obtained are quite encouraging and useful in the economic environment. The algorithms and simulations are carried out using MATLAB software.
The data captured at the remote terminal unit have ambiguities and uncertainties, and these must be taken care of. As the measurands deviate from their nominal values, preprocessing them with fuzzy logic improves the reliability of the measurements. The captured data are preprocessed to extract the information, and the processed data are then presented to the operator and to the higher hierarchy for taking appropriate action. Fuzzification of the counts is done with triangular, trapezoidal and Gaussian membership functions. It is shown that the Gaussian membership function gives the best results, with reduced errors compared to the trapezoidal and triangular membership functions.
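The three membership shapes compared can be sketched as follows; the parameter values are illustrative placeholders, not those used at the remote terminal unit.

    import numpy as np

    def triangular(x, a, b, c):
        """Triangular membership: 0 at a and c, peak value 1 at b."""
        return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

    def trapezoidal(x, a, b, c, d):
        """Trapezoidal membership: ramps up on [a, b], flat on [b, c], down on [c, d]."""
        return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

    def gaussian(x, mean, sigma):
        """Gaussian membership centred at `mean` with spread `sigma`."""
        return np.exp(-0.5 * ((x - mean) / sigma) ** 2)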
Security in wireless ad hoc networks (WAN) is a very important issue. Due to dynamic topology and node mobility, wireless ad hoc networks are more vulnerable to security attacks than conventional wired and wireless networks. Nodes of a wireless ad hoc network communicate directly without any central base station, which means that no infrastructure is required to establish communication. Providing security in small networks is easier than in large networks, so for convenience we divide a large network into a number of zones. Attacks in WANs are more frequent than in other networks. In this paper we describe the black hole attack, which is easy to launch in a wireless ad hoc network. A black hole attack refers to a node that drops all packets while sending bogus information claiming it has the shortest path between source and destination. We implement the Secure-ZRP (S-ZRP) protocol, which can be used to prevent black hole attacks inside or outside the zones, and evaluate its performance in the QualNet simulator. Our analysis indicates that S-ZRP is well suited to stopping this attack.
This paper tackles the NP-complete problem of academic class scheduling (or timetabling). The aim is to find a feasible timetable for the department of computer engineering in Izmir Institute of Technology. The approach focuses on simulated annealing. We compare the performance of various neighborhood searching algorithms based on so-called simple search, swapping, simple search-swapping and their combinations, taking into account the execution times and the final costs. The most satisfactory timetable is achieved with the combination of all these three algorithms. The results highlight the efficacy of the proposed scheme.
Code optimization involves the application of rules and algorithms to program code, with the goal of making it faster, smaller, more efficient, and so on. Applying the right compiler optimizations to a particular program can have a significant impact on program performance. The effectiveness of compiler optimizations is determined by the combination of target architecture, target application, and the compilation environment, which is defined by the settings of the compiler optimizations and compiler heuristics. Finding an optimal compiler setting requires a delicate trade-off between these factors. Due to the non-linear interaction of compiler optimizations, however, determining the best setting is nontrivial. The trivial solution of trying all combinations of techniques is infeasible, as its complexity is O(2^n) even for n on-off optimizations. Several techniques have been proposed that search the space of compiler options for good solutions; however, such approaches can be expensive. In current compilers, the user must decide through command line arguments which optimizations are to be applied in a given compilation run. Clearly, this is not a long-term solution. As compiler optimizations become increasingly numerous and complex, this problem must find an automated solution. In this paper, a new technique is suggested which prunes the large search space using a branch and bound technique, so that only the most beneficial area of the search space is given higher priority for further exploration, whereas the least promising regions are pruned straightaway, saving time by not exploring regions which give little or negligible benefit. Further probes of the search space tree are also halted once it is determined that the relative improvement in time is not considerable compared with the cost incurred for further probes. The time complexity of the proposed method in the worst case is found to be O(n^2) and under the best...
We present the design principles of Rank Based Merge Sorting Network (RBMSN) architectures for the realization of 2D median and morphological filters used in image preprocessing. The proposed architectures focus on optimization strategies for sorting in terms of the number of comparators and throughput. The minimization of computational cost is achieved by rank-range based merging, column sorting, and storing the sorted elements of the overlapping columns of consecutive windows at each intermediate stage of the sorting network. The proposed architecture uses pipelining and grain-level parallelism to process one pixel per clock cycle. The architectures for median, erosion and dilation filters are synthesized for 3×3 and 5×5 window sizes. The proposed RBMSN median filter architectures for a √N × √N window of N elements require (N/2)·log2(N) comparators and √N(√N − 1) memory registers. The proposed design and implementations are compared with a few of the reported architectures.
The objective of realizing a more effective solution during any complex system design can be achieved by the application of Multidisciplinary Design Optimization. The primary problem in developing an integrated framework, which is essential in the iterative procedure of optimization, is how to automate design codes that were designed to be used by experts. Automation of design codes primarily calls for a robust optimization algorithm which can reach the global optimum without requiring much expertise from the user, with reference to neither the design problem nor the optimization algorithm's parameters. The efficiency of gradient search methods in reaching the global optimum relies on expertise in providing the right initial guess, whereas in the case of the Genetic Algorithm (GA) it depends on expertise in choosing the GA parameters. This paper proposes a new hybrid approach, Genetic Algorithm Guided Gradient Search (GAGGS), which overcomes these limitations. The algorithm simultaneously exploits the gradient method's capability to converge quickly to a local optimum and the GA's capability to explore the entire design space. To demonstrate its robustness and efficiency, it is applied to Keane's bumpy function with two and ten design variables.
Distributed systems are gaining great popularity today, and at the same time there are many issues related to them, such as memory management, message passing, consistency, replicas, speed of operations, transparency and many more. We focus on garbage collection in distributed systems, which is also a very important issue. Here we consider the train algorithm given by Hudson and Moss and show that if the distributed garbage collector's train algorithm is used in a client-server application, the performance can be improved.
Image mining deals with the extraction of implicit knowledge, image data relationships, or other patterns not explicitly stored in the images. This paper proposes an enhanced image classifier to extract patterns from images containing text using a combination of features. Images containing text can be divided into the following types: scene text images, caption text images and document images. A total of eight features, including intensity histogram features and GLCM texture features, are used to classify the images. In the first level of classification, the histogram features are extracted from grayscale images to separate document images from the others. In the second stage, the GLCM features are extracted from binary images to classify scene text and caption text images. In both stages, a decision tree classifier (DTC) is used for the classification. Experimental results have been obtained for a dataset of about 60 images of different types. This technique of classification has not been attempted before, and its applications include preprocessing for indexing of images, simplifying and speeding up content based image retrieval (CBIR) techniques, and areas of machine vision.
Processor speed is much faster than memory speed; cache memory is used to bridge this gap. This paper proposes a preeminent pair of replacement algorithms for the Level 1 cache (L1) and Level 2 cache (L2) respectively for the matrix multiplication (MM) application. The access patterns of L1 and L2 are different: when the CPU does not get the desired data in L1, it goes to L2. Thus a replacement algorithm which works efficiently for L1 may not be efficient for L2. Using the reference string of MM, the paper analyzes the behavior of various existing replacement algorithms at L1 and L2 respectively. The replacement algorithms taken into consideration are least recently used (LRU), least frequently used (LFU) and first in first out (FIFO). The paper also proposes new replacement algorithms for L1 (NEW ALGO1) and for L2 (NEW ALGO2) for the same application. Analysis shows that by applying these algorithms at L1 and L2 respectively, miss rates are considerably reduced.
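A small simulator of the three baseline policies on a reference string (illustrative only; NEW ALGO1 and NEW ALGO2 are as described in the paper) might look like the following, where LFU is the simple variant that counts frequencies over the whole reference string seen so far.

    from collections import OrderedDict, defaultdict

    def miss_rate(refs, capacity, policy="LRU"):
        """Return the miss rate of a reference string under LRU, FIFO or LFU."""
        cache, freq, misses = OrderedDict(), defaultdict(int), 0
        for r in refs:
            if r in cache:
                if policy == "LRU":
                    cache.move_to_end(r)              # refresh recency on a hit
            else:
                misses += 1
                if len(cache) >= capacity:
                    if policy == "LFU":
                        victim = min(cache, key=lambda k: freq[k])
                    else:                             # LRU and FIFO both evict the head
                        victim = next(iter(cache))
                    del cache[victim]
                cache[r] = True
            freq[r] += 1
        return misses / len(refs)

    # Example: compare the policies on a toy reference string
    refs = [0, 1, 2, 0, 1, 3, 0, 1, 4, 2]
    print({p: miss_rate(refs, capacity=3, policy=p) for p in ("LRU", "FIFO", "LFU")})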
Discretization of continuous valued features is an important problem to consider during classification learning. A number of successful discretization techniques based on the LVQ algorithm already exist. In this paper, we approach the problem of discretization from a different angle and propose an algorithm based on optimization of Learning Vector Quantization (LVQ) with a Genetic Algorithm (GA). LVQ is employed as a classification algorithm, and discretization is performed using this classification nature of LVQ algorithms. We have modeled a GA based algorithm which enhances the accuracy of the classifier.
In this paper we introduce a new interconnection network, the extended hypercube with cross connection denoted by EHC(n,k). This network has hierarchical structure and it overcomes the poor fault tolerant properties of extended hypercube (EH). This network has low diameter, constant degree connectivity and low message traffic density.
Authentication by biometric verification is becoming increasingly common in corporate, public security and other such systems. A great deal of work has been done in the area of offline palmprints, such as palmprint segmentation, crease extraction, special areas, feature matching, etc. But to the best of our knowledge no work has yet been done to extract and identify the right hand of a person, given his or her left hand or vice versa, from a given database. This kind of identification assumes special significance in cases like bomb blasts or air crashes, where body parts of various persons get mutilated and mixed up. A framework has been designed where palmprint feature vectors are extracted using the 2-D wavelet transform and then an OPMAOP clustering algorithm (proposed in this paper) is applied to cluster the palmprints and obtain the opposite hand. Using this approach one can achieve the said target with a very high accuracy rate. The FAR of the result is discussed graphically in subsequent sections.
India is a multi-lingual and multi-script country, where eighteen official scripts are accepted and over a hundred regional languages are spoken. In this paper we propose a zone and projection distance metric based feature extraction system. The character image (50×50) is divided into 25 equal zones (10×10 each). For each zone, the column average pixel distance is computed in the vertical downward direction (VDD) (one feature). This procedure is repeated sequentially for all the columns of the zone (ten features). Similarly, the procedure is repeated for each zone in the other directions, namely the vertical upward direction (VUD), horizontal right direction (HRD) and horizontal left direction (HLD), to extract 10 features per direction. Hence 40 features are extracted for each zone. This procedure is repeated sequentially for every zone present in the numeral image, and finally 1000 such features are extracted for classification and recognition. Some zone columns or rows may have no foreground pixels, in which case the corresponding feature value in the feature vector is zero. A nearest neighbor classifier is used for the subsequent classification and recognition. We obtained a 97.8% recognition rate for Kannada numerals.
In this paper we discuss and implement a morphological method for face recognition using fiducial points. A new technique for extracting facial features is suggested, and this method is independent of facial expressions. In the recognition process, these fiducial points are fed as inputs to a back propagation neural network, which learns and identifies a person with the help of this technique.
Multimodal biometrics is an emerging domain in biometric technology where more than one biometric trait is combined to improve performance. Biometric systems commonly take face, fingerprint, voice, handwritten signature, retina, iris, gait, palm print, ear and hand geometry as features, and a person is identified by correctly matching these features. However, features like face, voice and signature have low permanence and change with time: ageing, as well as other psychological and environmental conditions, causes gradual changes in these features, and this factor is not considered while enrolling the feature set. Here we propose a new concept that can be used in designing future multimodal biometric systems which can adapt to changes in biometric features like face, voice, signature and gait over time, or due to any other factor, without compromising security. A regression based technique can be used to detect the change. This approach requires the use of at least one biometric feature which has very low variance or a high degree of permanence, like fingerprint, iris or retina. It can address the problem of false rejection caused by sustained change in biometric features due to ageing or any other factor, without the need to re-enroll the feature set.
Artificial neural networks have found a variety of applications that cover almost every domain. The increasing use of artificial neural networks and machine learning has led to a huge amount of research and to the creation of large data sets used for training purposes. Handwriting recognition, speech recognition, speaker recognition and face recognition are some of the varied application areas of artificial neural networks. Larger training data sets are a big boon to these systems, as performance gets better as the data sets grow. However, a larger training data set drastically increases the training time, and it is even possible that the artificial neural network does not train at all on very large data sets. This paper proposes a novel concept for dealing with these scenarios. The paper proposes the use of a hierarchical model where the training data set is first partitioned into clusters, and each cluster has its own neural network. When an unknown input is given to the system, the system first finds the cluster to which the input belongs; the input is then processed by the individual neural network of that cluster. The general structure of the algorithm is similar to a hybrid system consisting of fuzzy logic and an artificial neural network applied one after the other. The system has wide applications in all the areas where artificial neural networks are used extensively.
Although the idea of mobile medical information systems is not new, most of the current systems operate as standalone programs with intermittent connectivity on high end mobile phones or PDAs. In a rural setting however, few people possess such expensive devices. The information regarding drugs is available on various Web sites but this information does not reach people at the time of medical emergencies. Moreover lack of doctors often forces people to take advice from paramedics, who may not be very qualified. In such situations the information on drugs and diseases would allow both the patient and the medic to cross check the diagnosis and the prescribed medicines. In this paper we explore the viability and present our system implementation to handle drug information related queries via SMS. We identify various classes into which drug information questions can be broken and pass the query through various modules to retrieve answers from the drug knowledge sources.
With the popularity and importance of document images as an information source, information retrieval in document image databases has become a challenge. In this paper, an approach capable of matching partial word images is proposed to address two issues in document image retrieval: word spotting and similarity measurement between documents. Initially, each word image is represented by a primitive string. Then, an inexact string matching technique is used to measure the similarity between the string generated from the query word and the string generated from a document word. Based on the similarity, we can find how relevant one word image is to another and decide whether one is a portion of the other. In order to deal with various character fonts, a primitive string which is tolerant to serif and font differences is used to represent a word image. Using this inexact string matching technique, our method successfully handles the problem of heavily touching characters. Experimental results on a variety of document image databases confirm that the proposed approach is feasible, valid, and efficient in document image retrieval.
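The inexact matching step is essentially a weighted edit distance over primitive strings; a minimal sketch is given below, where the uniform costs are placeholders for the paper's serif- and font-tolerant costs.

    def edit_distance(p, q, sub_cost=1, indel_cost=1):
        """Dynamic-programming inexact match between two primitive strings."""
        m, n = len(p), len(q)
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i * indel_cost
        for j in range(n + 1):
            d[0][j] = j * indel_cost
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                d[i][j] = min(d[i - 1][j] + indel_cost,
                              d[i][j - 1] + indel_cost,
                              d[i - 1][j - 1] + (0 if p[i - 1] == q[j - 1] else sub_cost))
        return d[m][n]

    def similarity(p, q):
        """Normalised similarity in [0, 1]; higher means a closer word match."""
        return 1.0 - edit_distance(p, q) / max(len(p), len(q), 1)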
The most attractive and easy to operate ICMP based DoS/DDoS attacks are amplification attacks. Permitting ICMP traffic in a conservative manner helps in defending against flooding attacks. Existing methods try to control ICMP traffic with bandwidth limitation; sometimes the limitation is prodigal, and in other cases it is so stringent that it denies ICMP traffic completely, even for vital usage. However, the use of ICMP over the Internet is necessary; therefore, in this paper we identify the harmless rate at which ICMP traffic can be generated and echoed over the Internet. This harmless rate is achieved through an ICMP window restriction scheme. We analyze and prove that the window restriction removes the attack productivity region from the ICMP traffic and promotes only genuine traffic, thus helping to neutralize flooding attacks. The ICMP window restriction scheme therefore overcomes the issues concerning unfair vertical limitation of bandwidth.
A novel method is presented to improve the object recognition performance of a biologically inspired model by learning a class-specific feature codebook. The feature codebook is shared across classes in the original model, and the content proportion for different codeword types is set to a uniform distribution. In the proposed modification, the codebook content proportion is adjusted across different codeword types (feature vector sizes and filter scales) according to their discriminability. The test results demonstrate that codebooks built with the proposed modification achieve higher total-length efficiency.
In this paper, we propose a new mechanism for optimizing the routes chosen by the greedy part of GPSR. By studying GPSR, we find that its greedy part does not always find the most optimized route, especially in denser scenarios. To address this problem, we propose a mechanism to find an optimized route. In our proposal, we give a formula to compute a unique value for each neighbor, and the neighbor with the smallest value is chosen as the next forwarding hop. The main advantage of our mechanism is that it integrates the influence of both dense and sparse environments; therefore, wherever a node is located, it can always find a more optimized route compared with GPSR. We also give a mathematical model to evaluate the performance of our proposal. Based on the model, we use ns to simulate a highway scenario. From the results, we find that our proposal is better than GPSR in terms of time delay, but in packet delivery ratio it does not bring a remarkable improvement. We describe these aspects in detail in the following sections.
The concept of a transform domain adaptive equalizer is introduced in this paper. An ideal equalizer should offer minimum synchronization time, which can be achieved if the equalizer needs minimal samples or training. This objective can be met if a transform domain equalizer is used instead of a time domain one. In the present investigation the discrete Gabor transform (DGT) is selected for the front end of the transform domain equalizer. Its convergence performance and minimum mean square error are obtained through simulation and compared with those of LMS and DFT based equalizers. It is observed that the new transform domain equalizer provides superior performance compared to the time domain one, while its performance is equivalent to that of other orthogonal transform based equalizers. The British broadcasting channel is taken for the experiment. The SNR is set at 20 dB and 15 dB in the first and second cases respectively. The MSE was obtained (plotted) after averaging 500 independent runs, each consisting of 3000 iterations. Six different channels were studied. The best performance was obtained with the Gabor transform domain equalizer on channels 6 and 3, with the SNR taken as 15 dB; it gave a better convergence rate and a lower MSE floor. For the lower noise level at an SNR of 20 dB the same result was obtained.
It is a well-known fact that organizations diversify and increase their product lines locally and globally. In developing new products, project managers prefer to undertake related projects. As the projects are interdependent and interrelated, there is a risk-related interdependency. Information systems (IS) project selection decisions are influenced by a number of factors such as long-term plans, profit maximization, tangible and intangible benefits, availability of the resource mix, and the underlying project risk. Risk-related dependency can be correlated and measured. Although risk is intangible, a rupee value can always be assigned to it through a risk determination factor that takes into consideration the three main risks, i.e. size, structure and technology risk; risk is inversely proportional to these three factors. The problem of determining the risk factor can be solved easily by multiplying the proportions of risk by the relational interdependency factor, called common risk, and subsequently taking its square root. As risk is tangible to some extent, we call this square-root value the relative risk factor, making it our risk-related objective function.
A mobile ad hoc network is a multihop wireless network with a dynamically and frequently changing topology. The power, energy and bandwidth constraints of these self-operating and self-organized systems make routing a challenging problem. A number of routing protocols have been developed to find routes with minimum control overhead and network resources, and extensions of the conventional protocols improve throughput by further reducing control overhead. This paper gives an overview of the existing on-demand routing protocols, and a parametric comparison is made with recently developed protocols proposed in the literature. These protocols are multipath extensions of the ad hoc on-demand distance vector routing protocol (AODV), such as AODV with break avoidance (AODV-BR), scalable multipath on-demand routing (SMORT), etc.
Speckled Computing is an emerging technology in which data will be sensed and processed in small (around 5×5 sq. mm) semiconductor grains called specks. A dense and non-static wireless network of thousands of these specks is called a specknet, and the specks collaborate among themselves to extract information from the data. Speckled computing uses wireless communication typical of mobile ad hoc networks and sensor networks. To extract information from a set of collaborating specks, identifying the logical location of the specks is very important, as they are mobile. The main goal of the proposed system is to estimate and maintain the logical location of the mobile specks in a sensor network application.
The Transmission Control Protocol was designed to work over networks characterized by low delay and negligible bit error rates. TCP performance degrades when TCP traffic is carried over satellite networks, which have large latency, high bit error rates and path asymmetry. To understand the effect of satellite conditions on TCP, we have evaluated TCP performance using three universally accepted methodologies: simulation, emulation and experimentation. The experimentation has been carried out over an actual geostationary earth orbit (GEO) satellite link equipped with tangible hardware at the ground station. This paper discusses each of the three methods used to evaluate TCP and describes results that clearly bring out the degree of performance degradation and the impact of various link parameters on TCP performance. Experiment-based analysis of TCP over a satellite network helps gather more realistic statistics for benchmarking, which serve as a useful guide to understanding the protocol's limitations and to steering enhancements that make it an efficient protocol for satellite networks. Emulation and simulation results clearly demonstrate the effect of the latency and bit errors of satellite links. Simulation results, in particular, bring out the effect of various error levels in satellite links on the TCP flavors used in well-known operating systems.
Of late, in the field of information security, we have plenty of security tools made to protect the transmission of multimedia objects, but approaches for the security of text messages are comparatively few. In this paper, a security model is proposed which imposes the concept of secrecy over privacy for text messages. The model combines cryptography and steganography (taken as security layers), and an extra layer of security is imposed between them. This newly introduced extra layer changes the format of the normal encrypted message, and the security layer following it embeds the encrypted message behind a multimedia cover object.
Clustering approaches are widely used in biomedical applications, particularly for brain tumor detection in abnormal magnetic resonance (MR) images. Fuzzy clustering using the fuzzy C-means (FCM) algorithm has proved superior to other clustering approaches in terms of segmentation efficiency. But the major drawback of the FCM algorithm is the huge computational time required for convergence. The effectiveness of the FCM algorithm in terms of computational rate is improved by modifying the cluster center and membership value update criteria. In this paper, the application of the modified FCM algorithm to MR brain tumor detection is explored. Abnormal brain images from four tumor classes, namely metastase, meningioma, glioma and astrocytoma, are used in this work. A comprehensive feature vector space is used for the segmentation technique. A comparative analysis in terms of segmentation efficiency and convergence rate is performed between the conventional FCM and the modified FCM. Experimental results show superior results for the modified FCM algorithm in terms of the performance measures.
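For context, one conventional FCM iteration, which the modified update criteria are designed to accelerate, can be sketched as follows (X holds the per-pixel feature vectors; the fuzzifier m = 2 is the usual default, not necessarily the paper's setting):

    import numpy as np

    def fcm_step(X, centers, m=2.0, eps=1e-9):
        """One fuzzy C-means iteration: membership update, then center update."""
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
        u = 1.0 / (d ** (2.0 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1 per pixel
        um = u ** m
        new_centers = (um.T @ X) / um.sum(axis=0)[:, None]
        return u, new_centers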
With networking speeds doubling every year, it is becoming increasingly difficult for software based solutions to keep up with system performance. Hardware based solutions provide higher speed and better performance. Specifically, we have developed the signature file method, which is a fast and efficient method for intrusion detection systems. This paper introduces a novel and efficient VLSI architecture for a signature file method based host intrusion prevention system. The VLSI architecture is implemented on a field programmable gate array (FPGA), as it provides the flexibility of reconfigurability and reprogrammability. Intrusion sequences can be detected using a flexible pattern matching model called similarity match, which enables the system not only to reduce false positive alarms, but also to detect clever intruders with unexpected behavior. Hence the host based hardware detects malicious attacks and blocks those attacks to protect itself.
In the Network on Chip paradigm, design decisions at various levels of the hierarchy must be made based on timing, power and area constraints. Topology design is one of the most important parts of NoC design, with design decisions affected by constraints on all three parameters. Tree based structures are among the most commonly used basic network on chip architectures, in addition to generic structures like the 2-D mesh. In this paper, we present area and power comparisons for some of the tree based NoC architectures optimized for performance, with a video object plane decoder (VOPD) as the case study. We also present a comparison with the 2-D mesh architecture and deduce the trend followed in the area and power parameters.
Revolutions in the domain of computing have molded the structures and characteristics of computing systems. Conventional computing techniques used application specific integrated circuits to achieve high performance at the cost of an extremely inflexible hardware design, while flexibility of hardware design was achieved at the cost of slow processing by using programmable processors. The emergence of reconfigurable computing has filled the gap between flexibility and performance: reconfigurable computing combines the high speed of application specific integrated circuits with the flexibility of programmable processors. Reconfigurable processors have further boosted the capabilities of reconfigurable computing systems; these processors configure the most optimal and efficient hardware resources according to the demands of the running application, and the configured resources can be modified or reconfigured later according to new demands of the application. In this paper a reconfigurable processor architecture is presented for high speed applications. The proposed reconfigurable processor is based on a very long instruction word architecture and uses an efficient multi-threaded configuration controller and a multi-ported configuration memory to configure multiple reconfigurable function units concurrently with the minimum possible configuration overhead.
Automated eye disease identification systems help ophthalmologists in accurate diagnosis and treatment planning. In this paper, an automated system based on an artificial neural network is proposed for eye disease classification. Abnormal retinal images from four different classes, namely non-proliferative diabetic retinopathy (NPDR), central retinal vein occlusion (CRVO), choroidal neovascularisation membrane (CNVM) and central serous retinopathy (CSR), are used in this work. A suitable feature set is extracted from the pre-processed images and fed to the classifier. Classification of the four eye diseases is performed using a supervised neural network, namely the back propagation neural network (BPN). Experimental results show promising results for the back propagation neural network as a disease classifier. The results are compared with a statistical classifier, namely the minimum distance classifier, to justify the superior nature of neural network based classification.
This paper proposes a new segmentation approach that considers the nonextensive property of mammograms. The novel thresholding technique is based on the Tsallis entropy, characterized by an additional parameter q, which depends on the nonextensiveness of the mammogram. Mammograms are typical examples of images with fractal-type structures (nonextensiveness). The proposed approach has been tested on various images, and the results demonstrate that the proposed Tsallis fuzzy approach outperforms the 2D nonfuzzy approach and the traditional Shannon entropy partition approach. Some typical results are presented to illustrate the influence of the parameter q on the thresholding.
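A sketch of Tsallis-entropy threshold selection on an 8-bit image; the value of q below is only a placeholder, and the fuzzy extension used in the paper is omitted here.

    import numpy as np

    def tsallis_threshold(img, q=0.8):
        """Pick the grey level maximising the pseudo-additive Tsallis entropy
        of the foreground/background partition (nonextensive parameter q)."""
        hist, _ = np.histogram(img, bins=256, range=(0, 256))
        p = hist / hist.sum()
        best_t, best_s = 0, -np.inf
        for t in range(1, 255):
            pa, pb = p[:t].sum(), p[t:].sum()
            if pa <= 0 or pb <= 0:
                continue
            sa = (1.0 - ((p[:t] / pa) ** q).sum()) / (q - 1.0)
            sb = (1.0 - ((p[t:] / pb) ** q).sum()) / (q - 1.0)
            s = sa + sb + (1.0 - q) * sa * sb          # pseudo-additivity rule
            if s > best_s:
                best_t, best_s = t, s
        return best_t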
In mobile computing, caching plays a vital role owing to its ability to alleviate the performance and availability limitations of weakly-connected and disconnected operation. An efficient way to reduce query delay, save bandwidth and improve system performance is to cache the frequently accessed data objects in the local buffer of a mobile client. Owing to the disconnection and mobility of mobile clients, classical cache management strategies may be inappropriate for mobile environments. Generally, cache placement, cache discovery, cache consistency and cache replacement techniques constitute cache management in a mobile environment. In this paper, we design a distributed cache management architecture which includes all the above techniques, as well as a location update procedure for a moving mobile client. The simulation results illustrate that our proposed architecture achieves lower latency and packet loss, reduced network bandwidth consumption, and reduced data server workload.
Mining images means extracting patterns and deriving knowledge from large collections of images. Image mining follows image feature gathering, learning and retrieving procedures. This paper assesses to what extent users of self-organizing map (SOM) techniques are satisfied with their efficiency in visualizing and organizing large amounts of image data. The main contribution of the paper consists of identifying the factors that influence the quality of the SOM. The result analysis shows that the SOM learning capacity is sensitive to the initial weight vector, the learning rate, the number of training epochs and the distance measure used to select the winning neuron. The results affirm that, among all these factors, the distance measure has the highest impact on SOM clustering. The Euclidean measure is substituted by the L-infinity norm (maximum value distance) of the Minkowski r-metric. The maximum value distance based SOM exhibits both accurate functionality and image mining feasibility.
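The substitution amounts to changing only the winner-selection distance; a minimal sketch (the grid layout and learning schedule are placeholders) is:

    import numpy as np

    def winner(weights, x, metric="chebyshev"):
        """Best-matching unit; Chebyshev (L-infinity) is the Minkowski r -> infinity
        limit substituted here for the Euclidean norm."""
        diff = np.abs(weights - x)                        # weights: (units, dim)
        d = diff.max(axis=1) if metric == "chebyshev" else np.sqrt((diff ** 2).sum(axis=1))
        return int(d.argmin())

    def som_update(weights, x, bmu, lr, sigma, grid):
        """Standard SOM update pulled towards x, weighted by the grid neighbourhood."""
        dist2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
        h = np.exp(-dist2 / (2 * sigma ** 2))
        return weights + lr * h[:, None] * (x - weights)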
In present day power system planning and operation, considerable interest is being shown in contingency analysis. Contingency screening and ranking is one of the important components of on-line system security assessment; it is carried out with the help of various computer programs which employ iterative methods like the Newton-Raphson and Fast Decoupled Load Flow methods to obtain the magnitudes of the various parameters. The objective of contingency screening and ranking is to quickly and accurately select a short list of critical contingencies from a large list of potential contingencies and rank them according to their severity. Suitable preventive control actions can then be implemented for contingencies that are likely to affect power system performance. Network contingencies often contribute to overloading of network branches and unsatisfactory voltages, and may even lead to voltage collapse. To maintain security against voltage collapse, it is desirable to estimate the effect of contingencies on voltage stability. This paper presents a new approach that uses fuzzy logic to evaluate the degree of severity of the considered contingency and to eliminate the masking effect in the ranking technique. In the proposed approach, in addition to real power loadings and bus voltage violations, voltage stability indices at the load buses are also used as post-contingent quantities to evaluate the network contingency ranking of a practical IEEE 5-bus system.
To handle the variability in the writing styles of different individuals, in this paper we propose a robust scheme to segment unconstrained handwritten Bangla words into characters. Online handwriting recognition refers to the problem of interpreting handwriting input captured as a stream of pen positions using a digitizer or other pen position sensor. For online recognition of a word, segmentation of the word into basic strokes is needed. For word segmentation, at first we divide the word image into two different zones. The upper zone is taken as 1/3rd of the height of the total image. Then, based on the downward movement of the stroke in this upper zone, we segment each word into a combination of basic strokes. We segment at a pixel where the slope of six consecutive pixels satisfies a certain angular value. We tested the proposed system on 5500 Bangla words and obtained 81.13% accuracy on the word data.
The conventional TCP suffers from poor performance on high bandwidth-delay product links meant for supporting data transmission rates of multiple gigabits per second (Gbps). This is mainly because, during congestion, TCP's congestion control algorithm halves the congestion window cwnd and enters the additive increase mode, which can be slow in taking advantage of large amounts of available bandwidth. In this paper we present a modified model to overcome the drawbacks of the TCP protocol, and we propose to study the modified model on various parameters, viz., throughput, fairness, stability, performance and bandwidth utilization, for supporting data transmission across high speed networks.
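For reference, the standard additive-increase/multiplicative-decrease (AIMD) behaviour that the modified model seeks to improve can be sketched as follows (a simplified illustration of conventional TCP congestion avoidance, not the model proposed in the paper):

```python
def aimd_update(cwnd, loss_detected, mss=1.0):
    """Classic TCP congestion avoidance: halve cwnd on a loss event,
    otherwise grow by one segment per round-trip time."""
    if loss_detected:
        return max(cwnd / 2.0, mss)   # multiplicative decrease to 1/2
    return cwnd + mss                 # additive increase per RTT
```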
The research paper presents a model for investigating the impact of discount-oriented promotional offers in retail sales on shoppers. The model uses a data mining approach to find the effectiveness of a promotion, "Buy two, get one free (BTGOF)", on the customers. The model induces a decision tree based on classes defined by the quantity of purchase of the item on promotion.
The recent advances in wireless technologies and ubiquitous computing have driven an immense interest in mobile ad hoc networks (MANETs). However, the performance of MANETs is highly affected by the behaviour of their constituent nodes, which must cooperate in order to provide the basic networking functionality. To address the problem of misbehaving nodes affecting the behaviour of a MANET, we present a solution that detects and avoids misbehaving nodes, i.e., nodes which agree to route packets for other nodes and subsequently drop these packets. Such misbehaviour has a direct effect on quality of service (QoS) solutions, namely the QoS goodput metric. The solution takes a transparent layered approach and assumes no security constraints. The solution was simulated using NS2. Experiments were carried out using multiple variations of mobile ad hoc environments, varying the hostility degree, mobility scenario and traffic load. The experimental results show that the solution consistently detects and avoids misbehaving nodes, leading to improved goodput by up to 25%.
A proxy signature scheme allows one user to delegate his/her signing capability to another user, called a proxy signer, in such a way that the latter can sign messages on behalf of the former. After verification, the verifier is convinced of the original signer's agreement on the signed message. Like digital signatures, proxy signatures are also vulnerable to leakage of the proxy secret key. Forward-secure signatures enable the signer to guarantee the security of messages signed in the past even if his secret key is exposed today. By applying the concept of forward security to proxy signatures, we have come up with a forward-secure proxy signature scheme based on DSA (Digital Signature Algorithm). Compared to existing schemes, the special feature of our scheme is that an original signer can delegate his signing capability to any number of proxy signers in varying time periods. Though the original signer gives proxy information to all the proxy signers at the beginning of the protocol, the proxy signers are able to generate proxy signatures only in their allotted time periods. Further, the proxy signatures are made forward-secure. Moreover, our scheme meets the basic requirements of a proxy signature scheme along with proxy revocation. Both on-demand proxy revocation, i.e. whenever the original signer wants to revoke the proxy signer, and automatic proxy revocation, i.e. immediate revocation after the expiry of the time period of the proxy signer, are provided. Additional properties of our scheme are as follows: the identity of the proxy signer is available in the information sent by the original signer to the proxy signer, the original signer need not send the information to the proxy signer through a secure channel, a warrant on the delegated messages can be specified, the original signer cannot play the role of the proxy signer, and the verifier can determine when the proxy signature was generated.
This paper presents a new approach to enhance the contrast of microcalcifications in mammograms using a fuzzy algorithm based on Tsallis entropy. In Phase I, the image is fuzzified using an S membership function. In Phase II, using a non-uniformity factor calculated from local information, the contrast of the microcalcifications is enhanced while the background is heavily suppressed. To the best of our knowledge, this is the first enhancement algorithm in the literature based on Tsallis entropy. Tsallis entropy has an extra parameter q. The proposed approach is suitable even for dense mammograms.
This paper presents a reliable method of computation for minutiae feature extraction from fingerprint images. We present a novel fingerprint representation scheme that relies on describing the orientation field of the fingerprint pattern with respect to each minutia detail. A fingerprint image is treated as a textured image. Improved algorithms for the enhancement of fingerprint images, which use adaptive normalization based on block processing, are proposed. An orientation flow field of the ridges is computed for the fingerprint image. To accurately locate ridges, a ridge orientation based computation method is used. After ridge segmentation, a computation method is used for smoothing the ridges. The ridge skeleton image is obtained and then smoothed using morphological operators to detect the features. A post-processing stage eliminates a large number of false features from the detected set of minutiae features. A fingerprint matching algorithm, based on the proposed representation, is developed and tested with a series of experiments conducted on collections of fingerprint images. The results reveal that our method can achieve good performance on these data collections.
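A common block-wise, gradient-based estimate of the ridge orientation field is sketched below (a generic least-squares formulation under assumed block size and gradient operator, not necessarily the exact computation used by the authors):

```python
import numpy as np

def orientation_field(img, block=16):
    """Estimate ridge orientation per block from image gradients."""
    gy, gx = np.gradient(img.astype(float))       # gradients along rows and columns
    h, w = img.shape
    theta = np.zeros((h // block, w // block))
    for i in range(theta.shape[0]):
        for j in range(theta.shape[1]):
            bx = gx[i*block:(i+1)*block, j*block:(j+1)*block]
            by = gy[i*block:(i+1)*block, j*block:(j+1)*block]
            gxx, gyy, gxy = (bx*bx).sum(), (by*by).sum(), (bx*by).sum()
            # least-squares dominant gradient direction; ridges lie perpendicular to it
            theta[i, j] = 0.5 * np.arctan2(2*gxy, gxx - gyy) + np.pi/2
    return theta
```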
Nowadays, the demand for the Internet is increasing in more heterogeneous scenarios, especially on mobile platforms such as planes, trains and buses. The NEMO WG (network mobility working group), a new working group in the IETF (Internet Engineering Task Force), was formed to provide mechanisms to manage the mobility of a network as a whole, enabling that network to change its point of attachment to an IP-based fixed infrastructure without disturbing the ongoing communications or sessions. This article describes the IPv6 network mobility (NEMO) basic support protocol, analyses its limitations and suggests a route optimization technique for nested network mobility.
Web personalization is the process of customizing a Web site to the needs of each specific user or set of users, taking advantage of the knowledge acquired through the analysis of the users' navigational behaviour (usage data). The Web personalization domain has gained great momentum in both the research and the commercial areas. In this paper, we present a web personalization system, NetPersonal, which uses clusters of web usage data. Knowledge about user profiles is inferred from the web server's access logs by means of data and web mining techniques. The extracted knowledge is deployed for the purpose of offering a personalized view of the Web services to users.
In this paper, the problem of finding the optimal number of cluster partitions in the fuzzy domain is addressed. This motivated us to develop an algorithm based on differential evolution for automatic cluster detection from an unknown data set. Here, assignments of points to different clusters are made based on the Xie-Beni index, which takes the Euclidean distance into consideration. The cluster centers are encoded in the vectors, and the Xie-Beni index is used as a measure of the validity of the corresponding partition. The effectiveness of the proposed technique is demonstrated for two synthetic and two real life data sets. The superiority of the new method is demonstrated by comparing it with the variable length genetic algorithm based fuzzy clustering and the well known fuzzy c-means algorithm.
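The Xie-Beni validity index referred to above can be computed as in the following sketch (standard definition with Euclidean distances; the fuzzifier m, the data and the membership matrix are placeholders, and the differential evolution search itself is not reproduced):

```python
import numpy as np

def xie_beni(X, centers, U, m=2.0):
    """X: (N, d) data, centers: (c, d) cluster centers, U: (c, N) fuzzy memberships."""
    N = X.shape[0]
    dist2 = ((X[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2)      # (c, N)
    compactness = np.sum((U ** m) * dist2)
    sep = ((centers[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2)  # (c, c)
    np.fill_diagonal(sep, np.inf)
    return compactness / (N * sep.min())   # smaller value => better partition
```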
Peer-to-peer (P2P) networks are heavily used for content distribution applications and are becoming increasingly popular for Internet file sharing. Generally, the download of a file can take from minutes up to several hours depending on the level of network congestion or the service capacity fluctuation. In this paper, we consider two major factors that have a significant impact on the average download time, namely, the spatial heterogeneity of service capacities of different source peers and the temporal fluctuation in the service capacity of a single source peer. We prove that both spatial heterogeneity and temporal correlations in service capacity increase the average download time in P2P networks, and then analyze a simple, distributed algorithm to minimize the file download time. We have designed a new distributed algorithm, namely dynamically distributed parallel periodic switching (D2PS), that effectively removes the negative factors of the existing parallel downloading, chunk based switching and periodic switching approaches, thus minimizing the average download time. There are two schemes in our dynamically distributed parallel periodic switching (D2PS) method: (i) parallel permanent connection, and (ii) parallel random periodic switching. In parallel permanent connection, the downloader randomly chooses multiple source peers and divides the file randomly into chunks; the download happens in parallel for a fixed time slot t, and the source selection function does not change during that fixed time slot.
Defects in underground pipeline images are indicative of the condition of buried infrastructures like sewers and water mains. This paper, entitled "Automated Assessment Tool for the Depth of Pipe Deterioration", presents a simple, robust and efficient three-step method to detect defects in underground concrete pipes. It identifies and extracts defect-like structures from pipe images whose contrast has been enhanced. We propose to use segmentation and feature extraction using structural elements. The main objective of this tool is to find the dimensions of a defect, such as its length, width and depth, and also the type of defect. The detection of defects in buried pipes is a crucial step in assessing the degree of pipe deterioration for municipal operators. Although the human eye is extremely effective at recognition and classification, it is not suitable for assessing pipe defects in thousands of miles of pipeline because of fatigue, subjectivity and cost. Our objective is to reduce the effort and labour involved in detecting defects in underground pipes.
Wavelets are mathematical functions that cut up data into different frequency components and study each component with a resolution matched to its scale. They have advantages over traditional Fourier methods in analyzing physical situations where the signal contains discontinuities and sharp spikes. Wavelets were developed independently in the fields of mathematics, quantum physics, electrical engineering, and seismic geology. Interchanges between these fields during the last ten years have led to many new wavelet applications such as image compression, turbulence, human vision, radar, and earthquake prediction. The proposed methodology provides better image compression and de-noising than existing techniques.
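As a concrete, generic example of the kind of wavelet de-noising mentioned above, the sketch below soft-thresholds detail coefficients using PyWavelets; the wavelet, decomposition level and universal-threshold rule are assumptions for illustration, not the paper's specific method:

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients and reconstruct the signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # noise level estimated from the finest detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))      # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)
```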
A protocol is secure if the parties who want to compute their inputs hand them to trusted parties. The trusted parties in turn compute the inputs using the function f and give the result to the respective parties after computation in such a way that no party can identify another party's data. During the computation of inputs, we consider the question: what if the trusted third parties are malicious? Considering different probabilities for the malicious users, we try to find out the correctness of the result and the percentage of system acceptability. We then increase the number of TTPs in order to improve the accuracy of the result. The aim of our proposed work is to identify what probability of malicious users will lead the system into an unacceptable state.
In this paper, we introduce the RSA cryptosystem and its improvements. There are many cases where there is a need to enhance the decryption/signature generation speed at the cost of the encryption/signature verification speed; e.g., in banks, a huge number of signatures may be generated in a single day while only one signature verification takes place at the receiver side over the whole day. So the main stress of this paper is on improving the decryption/signature generation cost. Many methods to this end are discussed, e.g., Batch RSA, MultiPrime RSA, MultiPower RSA, Rebalanced RSA and RPrime RSA. The proposed approach to improve decryption/signature generation speed is given in the paper. We attempt the improvement by combining MultiPower RSA and Rebalanced RSA. Theoretically, the proposed scheme (for a key length of 2048-bit moduli) is about 14 times faster than RSA with CRT and about 56 times faster than standard RSA. Tabular and graphical comparisons with other variants of RSA are also presented in the paper.
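For context, the baseline against which the proposed scheme is compared, RSA decryption with the Chinese Remainder Theorem (CRT), can be sketched as follows (textbook formulation; the proposed MultiPower/Rebalanced combination is not reproduced here):

```python
def rsa_crt_decrypt(c, p, q, d):
    """Textbook RSA-CRT decryption: two half-size modular exponentiations
    instead of one full-size exponentiation modulo n = p*q."""
    dp, dq = d % (p - 1), d % (q - 1)
    qinv = pow(q, -1, p)                 # q^{-1} mod p (Python 3.8+)
    m1 = pow(c, dp, p)
    m2 = pow(c, dq, q)
    h = (qinv * (m1 - m2)) % p
    return m2 + h * q                    # recombined plaintext modulo p*q
```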
The performance of code division multiple access (CDMA) in sensor networks is limited by multiple access interference (MAI). In this article, we propose a frequency division technique to reduce the MAI in a DS-CDMA sensor network. Our proposal also reduces the energy consumption of the network. In the model, a new clustering technique is first used over a number of randomly deployed sensor nodes to form different clusters, and an FDMA-CDMA technique is then used within the different clusters. The proposed system is simulated and compared with other systems which do not use frequency division. The study found that, by using a small number of frequency channels, the MAI can be reduced significantly. The system also has less channel contention and lower energy consumption.
Currently, there is very little research that aims at handling QoS requirements using multipath routing in a very energy constrained environment like sensor networks. In this paper, an energy efficient fault-tolerant multipath routing technique, which utilizes multiple paths between the source and the sink, has been proposed. This protocol is intended to provide a reliable transmission environment with low energy consumption by efficiently utilizing the energy availability and the available bandwidth of the nodes to identify multiple routes to the destination. To achieve reliability and fault tolerance, the protocol selects reliable paths based on the average reliability rank (ARR) of the paths. The average reliability rank of a path is based on each node's reliability rank (RR), which represents the probability that a node correctly delivers data to the destination. In case the existing route encounters an unexpected link or route failure, the algorithm selects the path with the next highest ARR from the list of selected paths. Simulation results show that the proposed protocol minimizes energy and latency and maximizes the delivery ratio.
In wireless sensor networks, most of the existing key management schemes establish shared keys for all pairs of neighbour sensors without considering the communication between these nodes, which results in huge overhead. For large scale WSNs, these schemes still require each sensor to be loaded with a large number of keys. In the many-to-one traffic pattern of sensor networks, large numbers of sensor nodes send data to a single base station or a few base stations. Thus a sensor node may communicate with only a small set of neighbours. Based on this fact, in this paper a novel traffic-aware key management (TKM) scheme is developed for WSNs, which establishes shared keys only for active sensors that participate in direct communication, based on the topology information of the network. Numerical results show that the proposed key management scheme achieves high connectivity. Simulation results show that the proposed key management scheme achieves stronger resilience against node capture and low energy consumption.
This paper introduces three embedded solutions based on the H.264 video codec on the TI DaVinci processor. These solutions are a remote video consultation system for the healthcare industry, and point-to-point video chat and place-shifting systems for the consumer industry. The main value additions of these solutions are bandwidth efficiency and, at the same time, better video quality, as these systems are built on top of the H.264 video codec.
We present a design of intelligent storage controllers in an IP SAN environment to overcome the performance issues of IP SAN. The controllers are designed with clustering features to introduce parallelism, thus increasing the performance of the IP SAN. Also, implementing the virtualization entity in the controller hides the physical storage complexity. The performance bottlenecks created by introducing storage controllers can be overcome by implementing virtualization with a packet forwarding system and a global cache, thus avoiding encapsulation-decapsulation overheads and disk accesses for frequently accessed data, respectively. The in-band approach used in the design of the controllers gives complete control over the storage network, which is necessary for high availability and guaranteed performance. The controllers in the clustering environment also provide benefits like high availability, scalability, load balancing and failover.
Battery life optimization is the most important problem in IEEE 802.16e (Mobile WiMAX). With the increasing number of mobile users in the world, there is a need for a better technique that optimizes the battery life of mobile devices. This paper throws light on the battery life optimization of mobile devices. The paper brings out a new technique called the "extended sleep model" that saves the battery life of mobiles to a greater extent. The proposed technique has been tested under various scenarios and is expected to produce better results. An algorithm has also been proposed for the implementation of this new technique.
This paper investigates different mobility protocols (the network layer mobility protocols Mobile IPv4 and Mobile IPv6, and the transport layer mobility protocol mSCTP) as approaches to achieve interworking between 3G cellular networks (such as UMTS) and IEEE 802.11 wireless LANs (WLANs). A simulation model has been developed using OPNET to support the study. Simulation results support mSCTP as the best approach for achieving the desired interworking.
Most of the image based modeling and rendering works found in the literature rely on input supplied by the user, and hence it becomes necessary to optimize the user interaction while building the 3D model. We present an interactive system for image based 3D model building from single-view uncalibrated images based on depth-cueing, which constructs an approximate wireframe from the user-specified depth information. The depth information is interpreted by drawing a gray shaded line whose intensity is highest at the vertices closer to the viewing point and decreases towards the vertices farther away. On the rendering side, the perspective distortion is rectified on each surface based on a projective transformation by exploiting the presence of symmetric objects (like circles, squares, etc.) in the images to get the fronto-parallel view. Our study shows that symmetric objects like circles get deformed into ellipses due to perspective distortion and projective transformation. We demonstrate the significance of symmetric objects in a scene/image for rectifying the perspective distortion. The rectified surfaces are used to retrieve the actual 3D coordinates of the wireframe and are also used as texture maps during rendering to get photo realistic results. We have used images containing planar surfaces. The user interaction during wireframe building requires no knowledge of the scene and camera parameters. The results are significant and convincing.
In this paper, voice signal compression and spectrum analysis (VSCSA) is a technique used to compress transparent high quality voice signals to 45%-60% of the source file size at a low bit rate of 45 kbps while keeping the same extension (i.e. .wav to .wav); voice spectrum analysis (VSA) is then carried out. Voice signal compression (VSC) is done using the adaptive wavelet packet tool of MATLAB for decomposition and psychoacoustic model implementation. The entropy and signal-to-noise ratio (SNR) of the given input voice signal are computed during VSC. A filter bank is used according to the psychoacoustic model criteria and the computational complexity of the decoder for VSC. A bit allocation method that also takes input from the psychoacoustic model is used. The purpose of VSCSA is to compress voice signals with the same extension with the help of VSC and then distinguish between constitutional and unconstitutional voices with the help of VSA according to various DSP parameters. If a voice signal is compressed first, the spectrum analysis is very fast, because the selected .wav file takes a very short time for the execution of the various DSP parameters, which gives better results. For example, if a device is locked with DSP parameters, it can be unlocked only when the device recognizes the same DSP parameters from the .wav warehouse. This work is suitable for pervasive computing, the Internet, and limited storage devices because of the reduction in file size and fast execution.
With the rapid growth of the Internet and the widespread access to the digital data, the content creators often encounter problems like illegal copy and distribution of their content. This paper addresses the problem for multimedia data. An efficient video encryption technique is proposed to protect illegal distribution of video content stored in H.264 format. This paper presents a fast video encryption algorithm that performs real-time encryption of the video in H.264 format on a commercially available DSP platform. This algorithm is applied in a real-time place-shifting solution on DSP platform.
The scale of integration is increasing exponentially, and the movement of high volumes of information among the various parts of a computing system contributes significantly to the overall power budget. In modern computing systems, the feasibility of a system depends on its power consumption. There are various ways to reduce the power requirement; the dynamic power requirement is the dominating factor among them. Power consumption due to communication on system-level buses constitutes a large share of this dynamic power. A scheme used to reduce dynamic power on buses is called a bus encoding technique. Various schemes have been proposed in the literature to encode data so as to reduce the number of bus transitions. The data values in a computing system are not all used with equal probability; usage frequencies differ and are high for only a few sets of data. The bus encoding scheme built on this observation is called the frequent value encoding (FVE) scheme. This paper provides a mathematical model of the FVE scheme. The paper aims at providing a framework for the evaluation of bus encoding algorithms for 8- to 64-bit information flowing in a computing system. A probabilistic model has been developed, and the results show the efficiency of the FVE scheme.
Lookup architectures are among the well researched subjects in networking, due to their fundamental role in the performance of Internet routers. Internet routers use a lookup method known as the longest prefix match (LPM) algorithm to determine the next hop to forward a packet to. State-of-the-art lookup designs try to achieve better search times and/or reduce storage requirements, thereby sacrificing support for high update rates. However, recent studies have shown the need for high update rates, especially in Internet core routers, due to increasing routing instabilities and anomalous traffic. This paper presents a novel architecture to obtain high update rates in forwarding devices without compromising on the speed and space advantages.
With the unprecedented growth of multimedia applications, transmission of voluminous data over heterogeneous networks is frequently required. In addition, many important services require secure exchange of data between mobile nodes and resource constrained networks. Standard encryption schemes designed for normal applications are not suitable for securing fast streaming media under these environments. We propose a new scheme that makes use of quasi group based look-up operations followed by reduced number of rounds of existing encryption algorithms like SMS4 or AES. Depending upon the transmission-rate and available resources, the scheme can be customized by adjusting its different parameters. Observations taken on images using the proposed scheme show drastic improvement with higher entropy and more uniformly scrambled image blocks. Results prove that our scheme provides adequate security for multimedia applications under different environments besides being computationally more efficient as compared to the standard schemes.
Wireless sensor networks (WSNs) consist of tiny autonomous devices called sensors, capable of sensing, processing and transmitting information. Energy consumption, routing and maximizing lifetime are important challenges in sensor networks. In this paper, we propose a multiple tree construction (MTC) algorithm to address the problem of finding a path from each node to its nearest base station so as to maximize the network lifetime. The algorithm also deals with choosing an optimal number of base stations. Analytical and simulation results show that MTC performs better than existing algorithms involving a single base station.
In this paper a code assignment scheme called Next Code Precedence High (NCPH) is proposed for UTRA-FDD (universal terrestrial radio access frequency division duplex) systems based on OVSF (orthogonal variable spreading factor) channelization codes. Using the proposed scheme, the number of codes searched before finding a suitable vacant code is minimal. Also, the external code fragmentation, which can lead to code blocking, is reduced because of the compact nature of the code assignment scheme. Simulation results are presented to compare the reduction in call blocking probability of the proposed scheme with existing novel assignment schemes.
This work presents a unique layer based, location aware scheme for the location based routing of nodes in a MANET. Our model is based on intra-layer and inter-layer modes of communication between two or more sensor nodes. We have mathematically established the velocity required for a packet sent from the source node, as well as the correct direction in which it has to be sent, so that it is received by the mobile receiver sensor node. Packet efficiency and the time taken for a packet to reach its destination are the other two important factors of our study. Through the use of location information, our proposed scheme involves no overhead of route establishment, route maintenance or network-wide searches for destination nodes, and thus exhibits superior performance.
A flat mobile ad hoc network has an inherent scalability limitation in terms of achievable network capacity. It is seen that when the network size increases, the per-node throughput of an ad hoc network rapidly decreases. This is because, in large scale networks, the flat structure results in long hop paths which are prone to breaks. These long hop paths can be avoided by using the concept of virtual nodes working as a mobile backbone network (MBN): specific virtual, power-capable nodes that are functionally more capable than ordinary nodes. In this paper, a new routing protocol for large scale networks with mobile virtual nodes has been proposed. This routing protocol combines different types of routing protocols, which makes it easily extendable to support QoS as well. To establish the structure, some of the virtual nodes are elected to act as backbone nodes (BNs), which form the higher layer. Finally, the NS-2 network simulator has been used for a realistic simulation of the proposed protocol. The simulation results for networks of different sizes and mobility speeds show that the proposed routing protocol performs better than other protocols.
Soft handoff in a CDMA cellular system increases system capacity compared to hard handoff, as interference is reduced by transmitting signals at lower power levels. A CDMA cellular system involving integrated voice and data calls is considered in this paper. The developed model is based on priority reservation for handoff voice and data calls along with a call queuing scheme for handoff data calls. Upper channel restrictions are set for new voice as well as data calls to give priority to handoff calls. For better resource management and efficient call admission of originating and handoff calls, a neuro-fuzzy call admission controller is designed. This controller uses the adaptability of the soft handoff threshold parameters to accommodate important voice and data calls by monitoring the current new-voice blocking probability, the handoff data dropping probability and the speed of handoff voice users. The proposed ASNFC controller is compared with a scheme without call admission control with respect to the required QoS of the system.
Wireless sensor networks (WSNs) usually consist of a large number of tiny sensors with limited computation capacity, memory space and power resources. WSNs are extremely vulnerable to any kind of internal or external attack, due to several factors such as resource constrained nodes and the lack of tamper-resistant packaging. To achieve security in wireless sensor networks, it is important to encrypt messages sent among sensor nodes. In this paper, we propose a scheme called the Modified Bloom's Scheme (MBS); it makes use of asymmetric matrices in place of symmetric matrices in order to establish secret keys between node pairs. With the proposed scheme, the network resilience against node capture is substantially improved.
Wireless sensor networking is envisioned as an economically viable paradigm and a promising technology because of its ability to provide a variety of services, such as intrusion detection, weather monitoring, security, tactical surveillance, and disaster management. The services provided by wireless sensor networks (WSNs) are based on collaboration among small energy-constrained sensor nodes. The large-scale deployment of WSNs and the need for energy efficient strategies necessitate efficient organization of the network topology for the purpose of balancing the load and prolonging the network lifetime. Clustering has been proven to provide the required scalability and to prolong the network lifetime. Due to the bottleneck phenomenon in WSNs, a sensor network may lose its connectivity with the base station, and the remaining energy resources of the functioning nodes are then wasted. In this paper, a new hierarchical clustering scheme is proposed to prolong the network lifetime in heterogeneous wireless sensor networks. Finally, the simulation results show that our proposed scheme is more effective in prolonging the network lifetime compared with LEACH.
There are different types of computer worms, such as email worms, IRC worms and network worms. Silent worms are network worms which have a hit-list of vulnerable hosts and limit the number of infection activities of each copy in order to suppress anomalous network activity at each infected host. There are techniques which use the aggressive nature of network worms as a clue to detect them, but these techniques are not effective against silent worms. Hence, the anomaly connection tree method (ACTM) is used to detect silent worms. ACTM exploits worm propagation behaviour expressed as tree-like structures composed of infection connections as edges. By detecting trees composed of anomalous connections, ACTM detects the worms before 10% of the hosts are infected. A comparison of ACTM with other methods, such as the anomalous connection counting method, is presented to show that the tree structure helps detect the worm faster than considering the anomalous connections alone, thereby improving the detection speed. The simulator described in this paper has been designed and implemented using Java.
Floods have always been a major cause of destruction. In a manual river flow control system, the available discharge keeps increasing or decreasing depending on the precipitation in the catchment region. In this article, with a blend of mobile agents, wireless sensor networks (WSNs) and intelligent systems, we intend to formulate a new application to manage the water flow of a river by continuously monitoring the precipitation and correspondingly changing the position of the diversion head regulators (HR) of a barrage.
This paper analyzes an embedded architecture of a torus network with the hypercube, pertinent to parallel architectures. The product generated from the torus and hypercube networks shows how a good interconnection network can be designed for parallel computation. The advantages of the hypercube network and the torus topology are combined in a product network known as the torus embedded hypercube network. A complete design analysis, data routing scheme and comparison of this network with the basic networks are given using network parameters.
In this paper, we propose a new and efficient cryptographic hash function based on random Latin squares and non-linear transformations. The developed scheme satisfies the basic as well as the desirable properties of an ideal hash function. The use of repeated lookups on Latin squares, non-linear transformations and complex shift operations further increases the strength of our cryptographic hash function at a low computational overhead. It also ensures pre-image resistance and collision resistance, as required for present day lightweight cryptographic applications.
In recent times, watermarking has become a potential solution for copyright protection, authentication and integrity verification of digital media. Among the widely used watermarking techniques, spread spectrum modulation based methods are appealing due to their inherent advantage of greater robustness and are widely used for various applications. Some watermarking applications, for example digital television broadcasting and Internet protocol television (IP-TV), essentially demand the development of low cost watermarking algorithms that can be implemented in a real-time environment. This paper proposes a block based, multiple-bit, spatial domain spread spectrum image watermarking scheme in which a gray scale watermark image is represented by a smaller number of binary digits using a novel channel coding and spatial biphase modulation principle. A VLSI implementation of the algorithm using a field programmable gate array (FPGA) has been developed, and the circuit can be integrated into the existing digital still camera framework. The proposed image watermarking algorithm may be applied for authentication as well as secured communication in a real-time environment.
In this paper we propose a game theoretic framework for routing in mobile ad hoc networks. A trust oriented, auction based packet forwarding model is developed for stimulating cooperation among the nodes and avoiding selfish behaviour. Our proposed scheme provides an incentive for cooperative behaviour of a node, and the amount of the incentive is decided by the trustworthiness of the node. To find the cost of a packet we use two auction mechanisms: the procurement auction and the Dutch auction. Trust is measured from the past behaviour of each node towards the proper functioning of the network.
Recent advances in cellular mobile communications have touched every aspect of our lives. In mobile computing, mobile hosts or mobile terminals (MTs) move randomly from one place to another within a well-defined geographical area. To provide timely services to mobile users, a challenging task is to track the location of the mobile user effectively so that the connection establishment delay is low. This can be done only if the available system resources are used optimally. In most of the available schemes, when a call arrives, the target MT is searched for using a method called paging, even when the target MT is in such a low coverage area that it cannot support a call. This wastage of network resources puts an extra burden on the network and increases the total location management (LM) cost. The proposed scheme reduces this resource wastage by a significant amount by not paging those MTs that cannot support a call due to a low received signal strength (RSS) value. The performance of the proposed scheme is evaluated using an analytical model and is compared with Sun-Jin-Oh's LM scheme [1].
In this paper a novel watermarking scheme based on the wavelet packet transform (WPT) with the best tree is explored. The basic idea behind the proposed scheme is to decompose the preprocessed host image via the wavelet packet transform and then find the best tree by an entropy based algorithm. The watermark is embedded in all frequency bands of the best tree. A reliable watermark extraction scheme is developed for the extraction of the watermark from distorted images. Experimental evaluations demonstrate that the proposed scheme is robust against a variety of attacks.
The ROI (region of interest) in an image or video signal contains important information and attracts more attention from the viewer. It is desirable to insert a robust watermark in the ROI to give better protection to the digital image. This paper describes a region specific spatial domain scheme where the watermark information is embedded into the most valuable area of the cover image. The regions are selected by quad tree decomposition of the host image. Simulation results validate watermark embedding in the most valuable areas of the image. Simulation results also show that the proposed scheme is robust against a wide range of attacks available in the StirMark 4.0 package.
In this paper, a newer version of the Walsh-Hadamard transform, namely the multiresolution Walsh-Hadamard transform (MR-WHT), is proposed for images. Further, a robust watermarking scheme is proposed for copyright protection using MR-WHT and singular value decomposition. The core idea of the proposed scheme is to decompose an image using MR-WHT and then modify the middle singular values of the high frequency sub-bands at the coarsest and the finest levels with the singular values of the watermark. Finally, a reliable watermark extraction scheme is developed for the extraction of the watermark from the distorted image. The experimental results show better visual imperceptibility and resiliency of the proposed scheme against a variety of intentional or unintentional attacks.
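A generic sketch of singular-value-based embedding of the kind described above (operating on a plain sub-band matrix; the MR-WHT decomposition, the selection of middle singular values and the scaling factor alpha are placeholders, not the authors' exact scheme):

```python
import numpy as np

def embed_svd(subband, watermark, alpha=0.05):
    """Modify the singular values of a sub-band with those of the watermark."""
    U, S, Vt = np.linalg.svd(subband, full_matrices=False)
    Sw = np.linalg.svd(watermark, compute_uv=False)
    k = min(len(S), len(Sw))
    S_mod = S.copy()
    S_mod[:k] = S[:k] + alpha * Sw[:k]      # additive modification of singular values
    return U @ np.diag(S_mod) @ Vt          # watermarked sub-band
```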
In this paper we aim at presenting an implementation of a new MA_IDS (mobile agent for intrusion detection system) model, based on the misuse approach. Through its ease of detecting simulated attacks, we show that the use of mobile agents has practical advantages for intrusion detection. Based on a set of simulated intrusions, we established a comparative experimental study of four IDSs, showing that most current IDSs are centralized and suffer from significant limitations when used in high speed networks, especially when they face distributed attacks. This leads us to use a distributed model based on the mobile agent paradigm. We believe that agents will help in collecting efficient and useful information for the IDS.
In this paper, we propose a new throughput analysis for the IEEE 802.11 distributed coordination function (DCF), considering real channel conditions and capture effects under arbitrary load conditions with the basic access method. The aggregate throughput of a practical wireless local area network (WLAN) strongly depends on the channel conditions. In a real radio environment, the received signal power at the access point from a station is subject to deterministic path loss, shadowing and fast multipath fading. We extend the multidimensional Markov chain model initially proposed by Bianchi to characterize the behaviour of the DCF, in order to account for both real channel conditions and capture effects, especially in a high interference radio environment.
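For reference, the saturation case of Bianchi's original model reduces to a two-equation fixed point in the transmission probability tau and the conditional collision probability p; a minimal solver is sketched below for the ideal channel without capture, which is precisely the baseline this paper extends (W and m are illustrative defaults, not values from the paper):

```python
def bianchi_fixed_point(n, W=32, m=5, iters=10000, tol=1e-12):
    """n: contending stations, W: minimum contention window, m: max backoff stage."""
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)                 # conditional collision probability
        tau_new = (2 * (1 - 2 * p) /
                   ((1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m)))
        if abs(tau_new - tau) < tol:
            break
        tau = tau_new
    return tau, p
```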
Detection, localization and quantification of voltage and current disturbances are important tasks in the monitoring and protection of distribution systems. The discrete wavelet transform has been incorporated as a powerful tool for feature extraction from power quality disturbance signals, enabling the detection and classification of power quality problems. This paper presents the ability of the multiresolution signal decomposition technique, by plotting the standard deviations of the decomposed signal at different resolution levels, to classify and quantify power quality disturbances. It is also shown that the curve obtained by plotting the entropy of the decomposed signal at different levels provides clear information for the detection and quantification of power quality disturbances.
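A minimal sketch of the level-wise feature described above, using PyWavelets (the wavelet family and number of decomposition levels are assumptions for illustration):

```python
import numpy as np
import pywt

def level_std_curve(signal, wavelet="db4", levels=8):
    """Standard deviation of the detail coefficients at each decomposition level;
    the shape of this curve is what is plotted to classify the disturbance."""
    coeffs = pywt.wavedec(signal, wavelet, level=levels)
    # coeffs = [approximation, detail_level_L, ..., detail_level_1]
    return [float(np.std(d)) for d in coeffs[1:]]
```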
Automatic extraction of opinions on products from the Web has been receiving increasing interest. Such extracted knowledge helps to find out what other people think about a particular product or service. With the growing availability of resources like online review sites and personal blogs, new opportunities and challenges arise as people can, and do, actively use information technologies to seek out and understand the opinions of others. The sudden growth of the area of opinion mining, which deals with computational techniques for opinion extraction and understanding, has created a pressing need to understand and view the Web from a different perspective. In this paper, we demonstrate an opinion-mining framework that extracts the opinions and views of consumers/customers and analyzes them to provide a concrete picture of the market flow along with supporting statistical data. The software uses classification, clustering and lingual knowledge-based opinion mining to provide these features.
In the modern world, digital images can be easily and rapidly transferred over networks. This gives rise to the possibility of illegal copying or reproduction. Securing digital images while transferring them through networks and later extracting them in their original form is a very challenging task. Several advanced computational techniques have been developed to provide security to digital media during transmission through open networks. To prevent digital content from being illegally copied or reproduced, digital watermarking has evolved. There have been several proposals from various researchers to hide and extract a watermark from a digital image using spatial and frequency domain techniques. However, in most of the methods, there is a gradual reduction in the fidelity of the original cover image with the increase in embedded information content. This paper discusses a new technique which utilizes the insignificant portion of the fractional part of the cover image pixel intensity values to hide the watermark and then later successfully extract it in its original form. The technique preserves a high level of fidelity of the watermarked image.
In today's fast growing Internet world, the number of distributed denial of service (DDoS) attacks is increasing at an alarming rate. Countering these attacks has attracted a lot of attention from researchers. A number of monitoring and filtering devices have been developed to verify the authenticity of packets based on the packet payload data in intrusion detection systems (IDSs). However, the methods used for IDSs cannot be deployed in DDoS filters since, in DDoS attacks, a lot of packets arrive in a short span of time and deriving packet payload patterns becomes cumbersome with these IDS algorithms. This paper presents a three-level mechanism to distinguish attack packets from legitimate ones by scanning the payload of the packet. Packet patterns are derived using the eigenvector concept, and the obtained patterns are compared using an optimal string matching algorithm. This three-level filter was tested in the ANTS active network toolkit with the 1999 DARPA IDS dataset as the back end. The results validate the proposed scheme's efficiency, and the time complexity of the proposed filter is smaller than that of IDS payload scanning methodologies.
This paper proposes a lightweight quasigroup based encryption scheme for providing message confidentiality. A fast and efficient method of generating a practically unlimited number of quasigroups of an arbitrary order is presented. The scheme first generates a random quasigroup of order 256 out of the extremely large number of available options. A new method using shift and lookup operations on the quasigroup is then used for encryption/decryption of the data. Experiments conducted on text and visual data prove that enhanced security may be provided using computationally efficient schemes involving simple operations on quasigroups.
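The standard quasigroup string transformation that underlies such schemes can be sketched as follows (an illustrative construction of a quasigroup of order 256 as an isotope of modular addition, with a leader element l; the actual generation method and the shift/lookup details of the proposed scheme are not reproduced here):

```python
import random

def random_quasigroup(n=256, seed=None):
    """Latin square Q[i][j] = sigma((pi(i) + rho(j)) mod n), an isotope of (Z_n, +)."""
    rnd = random.Random(seed)
    pi, rho, sigma = list(range(n)), list(range(n)), list(range(n))
    for perm in (pi, rho, sigma):
        rnd.shuffle(perm)
    return [[sigma[(pi[i] + rho[j]) % n] for j in range(n)] for i in range(n)]

def qg_encrypt(Q, leader, data):
    out, prev = [], leader
    for x in data:                       # y_i = Q(y_{i-1}, x_i), with y_0 = leader
        prev = Q[prev][x]
        out.append(prev)
    return bytes(out)

def qg_decrypt(Q, leader, data):
    n = len(Q)
    inv = [[0] * n for _ in range(n)]
    for a in range(n):                   # left-division table: inv[a][Q[a][b]] = b
        for b in range(n):
            inv[a][Q[a][b]] = b
    out, prev = [], leader
    for y in data:
        out.append(inv[prev][y])
        prev = y
    return bytes(out)
```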
Orthogonal variable spreading factor (OVSF) codes are employed as channelization codes in wideband CDMA. The quantized rate handling feature of OVSF codes leads to wastage of code capacity, reducing the utilization and spectral efficiency of OVSF based systems. This paper presents a multi-code assignment scheme for quantized and non-quantized data rates to reduce code rate wastage. Multi-code assignment is made possible by the multiple rake combiners equipped in the base station and the user equipment. Two multi-code assignment schemes are discussed. The first scheme uses the least number of rakes to minimize code wastage and hence has the lowest complexity. The second scheme, which aims to minimize code scattering, provides a further reduction in code rate wastage and blocking probability for a specified number of rake combiners. Simulation results are presented to demonstrate the superiority of the proposed multi-code design.
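For reference, OVSF channelization codes are generated recursively; the sketch below builds the code set for a given spreading factor (generic construction, independent of the proposed assignment schemes):

```python
def ovsf_codes(sf):
    """Return the OVSF codes of spreading factor sf (a power of 2).
    Each level doubles: a code c spawns [c, c] and [c, -c]."""
    codes = [[1]]
    while len(codes[0]) < sf:
        nxt = []
        for c in codes:
            nxt.append(c + c)
            nxt.append(c + [-x for x in c])
        codes = nxt
    return codes

# Example: the 4 mutually orthogonal codes of spreading factor 4
for c in ovsf_codes(4):
    print(c)
```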
A mobile ad hoc network (MANET) basically consists of a collection of mobile hosts incorporating wireless interfaces. These mobile nodes form a transitory autonomous network and do not rely on fixed infrastructure or central administration; rather, they work on the principle of self-organized interconnections among mobile nodes. The idea at the core of MANET technology is that when a moving node comes within the transmission range of neighbouring nodes, they can detect each other and communicate directly. However, for communication outside this range they have to depend on other nodes to relay the messages. In fact, MANET nodes rely on the resources and services of other participating nodes. When these services are offered by many nodes, the choice of which one to use plays an important role in ad hoc networks. Additionally, the dynamic and vulnerable nature of an ad hoc network itself presents many new security and privacy challenges, and securing the process of service discovery is one of them. Although much previous research has concentrated on service discovery in MANETs, not much effort has been devoted to the security side of service selection. This paper emphasizes that secure service selection in MANETs has an intense effect on network performance. Specifically, we stress that in an ad hoc network where all the participants are self-controlled, security is an essential aspect and effective service selection can improve network throughput.
In spite of continuous research in the field of ad hoc networks by the research community, they are still quite far from wide scale use by the common masses. We have developed a protocol named the feedback dependent multicast routing protocol (FDMRP) to implement video conferencing in ad hoc networks. The protocol uses mobility prediction and a feedback mechanism as its principal tools and hence has less control packet overhead and high immunity to error propagation. It also avoids delay during transmission and hence minimizes the freezing of video signals at the receiver. In this paper we describe an algorithm to implement our protocol for the purpose of video conferencing. We have compared our protocol's merits with those of a well known protocol named ODMRP (on demand multicast routing protocol). FDMRP has clear advantages over existing routing protocols for video conferencing.
The technique considers a message as a binary string on which the Fibonacci based position substitution (FBPS) method is applied. A block of n bits is taken as an input stream from a continuous stream of bits. The decimal equivalent value of a source block is obtained and its position on the Fibonacci series is found, either on a number or in between two numbers. The source value is mapped to a previous number of the series, called the target number. For a proper one-to-one mapping, a scheme is applied to the target number. The target number is again projected onto a previous number, and so on, until the target number reaches 0 or 1. Each projection produces a 0 or 1. The plain text is encrypted with different block sizes as per the specification of the session key of a session to generate the final encrypted stream. Comparisons of the proposed technique with the existing and industrially accepted RSA and triple DES have also been made in terms of frequency distribution and chi-square value. Tests of avalanche and bit ratio have also been performed for the proposed technique.
In this paper the theory of the carry value transformation (CVT) is designed and developed on a pair of n-bit strings and is used to produce many interesting patterns. One of them is found to be a self-similar fractal whose dimension is the same as the dimension of the Sierpinski triangle. Different construction procedures for this fractal, such as L-systems, cellular automata rules and tiling, are obtained, which signifies that, like other tools, CVT can also be used for the formation of self-similar fractals. Finally it is shown that CVT can also be used for the production of periodic as well as chaotic patterns.
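One common way to express the carry value transformation is shown below (a hedged sketch under the assumption that CVT is the carry vector produced when adding two n-bit strings, with the XOR giving the carry-free sum; the zero/non-zero pattern of the CVT table then exhibits the Sierpinski-like structure discussed in the paper):

```python
def cvt(a, b):
    """Carry value of a + b: the bitwise AND shifted left by one position."""
    return (a & b) << 1

def carry_free_sum(a, b):
    return a ^ b        # identity: a + b == cvt(a, b) + carry_free_sum(a, b)

# Print a small CVT table; '.' marks zero carries (Sierpinski-like pattern)
N = 16
for i in range(N):
    print("".join("." if cvt(i, j) == 0 else "#" for j in range(N)))
```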
In this paper we propose different channel quality indication (CQI) feedback schemes for orthogonal frequency division multiple access (OFDMA) based future wireless broadband systems. The CQI information considered is the signal to noise ratio (SNR) values of the individual frequency subbands. The feedback schemes are based on linear regression methods that fit a quadratic polynomial to the SNR feedback values in an efficient manner. The Best-M feedback reporting method and the Max SNR scheduler are employed in the system used to evaluate the performance of the schemes. The simulations show that a considerable gain in uplink throughput is obtained due to the reduced feedback bit requirement of the proposed schemes, especially for future broadband networks that are designed to support a large number of users.
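A minimal sketch (assumed parameter names, ignoring quantization) of the quadratic fit at the heart of this kind of feedback compression: instead of reporting one SNR value per subband, only the three polynomial coefficients are fed back and the per-subband values are reconstructed at the receiver:

```python
import numpy as np

def compress_cqi(snr_db):
    """Fit a quadratic polynomial to the per-subband SNR values (in dB)."""
    k = np.arange(len(snr_db))
    return np.polyfit(k, snr_db, deg=2)     # 3 coefficients instead of len(snr_db) values

def reconstruct_cqi(coeffs, num_subbands):
    """Recover an approximate SNR profile from the fed-back coefficients."""
    return np.polyval(coeffs, np.arange(num_subbands))
```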
This paper proposes a method to cluster documents of variable length. The main idea is to (a) automatically identify 1-, 2- and 3-grams (to reduce the dependency on huge background vocabulary support, learning, or complex probabilistic approaches), (b) order them by a measure of relevance developed with the help of TF-IDF and term-weighting approaches, and finally (c) use them (instead of a bag-of-words approach) to create a vector space model and apply known clustering methods, i.e. bisecting K-means, K-means, a hierarchical method (single link) and a graph based method. Our experimental results on publicly available text datasets (Cogprints and NewsGroup20) show remarkable improvements in the performance of these clustering algorithms with this new approach.
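A compact sketch of steps (a)-(c) using scikit-learn (plain K-means shown; the bisecting, hierarchical and graph based variants, as well as the paper's specific relevance ordering, are not reproduced; the corpus and k are placeholders):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_documents(docs, k=5):
    # (a) + (b): 1- to 3-gram terms weighted by TF-IDF
    vec = TfidfVectorizer(ngram_range=(1, 3), stop_words="english")
    X = vec.fit_transform(docs)
    # (c): cluster the resulting vector space model
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    return labels
```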
The Radio Frequency Identification (RFID) system has become one of the most popular systems for identifying and tracking objects in an RFID network [2]. The RFID tracking simulator is a real-time RFID environment simulation tool for tracking objects/persons in the RFID network. This simulator tracks the path of the RFID tag movement in the network. The presented work is the part of the RFID tracking simulator that generates Electronic Product Codes (EPCs) for various classes of RFID tags, viz. Class-0 (64), Class-1G1 (96), Class-1G2 (128-256), Class-III (256-1 KB), and Class-IV (4 KB), up to a length of 4 KB depending on the type of RFID tag. The presented work exploits the potential of the Secure Hash Algorithm (SHA-256), computed with 32-bit words, for generating random patterns of n-bit EPCs with a periodicity of 2^256. The EPC generator is based on an n-tier architecture (client, RFID middleware, RFID database) whose tiers communicate through TCP/IP sockets. The EPC generator has tremendous application in field simulation and in the conversion of data formats (SGTIN, GID, DoD, GRAI, SGLN) to EPC.
Mobile agent technology promises to be a powerful mechanism to improve the flexibility and customizability of applications with its ability to dynamically deploy application components across the network. But none of the present mobile agent prototype systems satisfies all the requirements for a secure and reliable architecture suitable for any mobile agent based distributed application. This paper presents an architecture for a mobile agent system that provides a reliable agent tracking mechanism from the perspective of the owner of the agent, protection for the host from a malicious agent, and protection for the agent from a malicious host when the agent migrates to different nodes with an assigned task. This framework uses various encryption mechanisms to provide the required host security and agent security.
This work aims at a multi-step feedback centric Web search engine ensuring the retrieval of relevant fresh live results instead of those existing in the indexes. The methodology is based on the new concept called "micro search" which in turn creates the "micro indexes". These micro-indexes are the key factors utilized in re-ranking the selected documents. A prototype of the system called FEAST (freshness enriched active searching technology) has been implemented. The results of experiments conducted on FEAST are encouraging and they confirm the improved quality of information retrieved.
Essential services such as banking, transportation, medicine, education and defense are being progressively replaced by cheaper and more efficient Internet-based web applications. Therefore, the availability of Web services is very critical for the socio-economic growth of society. Distributed denial-of-service (DDoS) attacks pose an immense threat to the availability of these services. The services are severely degraded, and hence a lot of business losses are incurred due to these attacks. To objectively evaluate a DDoS attack's impact, its severity and the effectiveness of a potential defense, we need precise, quantitative and comprehensive DDoS impact metrics that are applicable to web services. In this paper an attempt has been made to analyze the impact of DDoS attacks on Web services. The experiments are conducted in NS-2. The attack traffic is mixed with legitimate traffic at different strengths, and the impact of the DDoS attacks is measured in terms of throughput, response time, ratio of average serve-to-request rate, percentage link utilization, and normal packet survival ratio (NPSR).
In a typical local area network (LAN), the global security policies, often defined in abstract form, are implemented through a set of access control rules (ACLs) placed in a distributed fashion across the access switches of its sub-networks. Proper enforcement of the global security policies of the network demands a well-defined policy specification as a whole as well as correct implementation of the policies at the various interfaces. However, ensuring correctness of the implementation manually is hard due to the complexity of security policies and the presence of hidden access paths in the network. This paper presents a formal verification framework to verify the security implementations in a LAN with respect to a defined security policy. The proposed framework stems from formal models of network security policy specifications and device-specific security implementations, and deploys verification supported by SAT-based procedures. The novelty of the work lies in the analysis of the hidden access paths, which play a significant role in correct security implementations.
A major problem in medical science is attaining the correct diagnosis of disease in precedence of its treatment. This paper presents the diagnosis of thyroid disorders using artificial neural networks (ANNs). The feed-forward neural network has been trained using three ANN algorithms; the Back propagation algorithm (BPA), the radial basis function (RBF) Networks and the learning vector quantization (LVQ) networks. The networks are simulated using MATLAB and their performance is assessed in terms of factors like accuracy of diagnosis and training time. The performance comparison helps to find out the best model for diagnosis of thyroid disorders.
A new and improved model of dynamic pricing scheme for handling of congestion vis-a-vis providing improved QoS for mobile networks is proposed in this paper. The user calls are divided into multiple priority levels and the call requests are scheduled by developing a tree structure. A very effective scheduling algorithm has been developed and analyzed and in this process a unique path sequence for each cell could also be identified.
The prevalence of hypertension in recent years has raised questions about the shape of the association between blood pressure (BP) and various factors such as alcohol consumption, age and physical activity level (PAL) in assessing overall cardiovascular risk. However, the risk complexity and the process dynamics are difficult to analyze. This study presents modeling of Systolic Blood Pressure (SBP), Diastolic Blood Pressure (DBP) and Pulse Pressure (PP) variations with biological parameters like age, pulse rate, alcohol addiction and PAL by statistical multiple regression and feed-forward Artificial Neural Network (ANN) approaches. Different algorithms like Quasi-Newton, gradient descent and the Genetic Algorithm were used in training the ANN to improve the training performance and better optimize the predicted values. The statistical analyses clearly indicate that the variation of SBP, DBP and PP with the parameters is statistically significant. ANN approaches give a much more flexible and non-linear model for prognosis and prediction of the blood pressure parameters than classical statistical algorithms. Moreover, evolutionary algorithms like Genetic Algorithms (GA) provide many advantages over usual training algorithms like gradient descent or Quasi-Newton methods. Although all measures of blood pressure were strongly related to the variables, the analyses indicated that SBP was the best single predictor of cardiovascular events for the present study. A more detailed study with the varied parameters that influence BP is required for exhaustive modeling.
The integrated use of telecommunications and informatics is known as telematics. Telematics has been applied specifically to the use of global positioning system technology integrated with computers. Most narrowly, the term has evolved to refer to the use of such systems within road vehicles, in which case the term vehicle telematics may be used. Vehicle telematics is a term used to describe connected vehicles exchanging electronic data. Advances in wireless inter-vehicle communication systems enable the development of vehicular ad-hoc networks (VANETs) and create significant opportunities for the deployment of a wide variety of vehicular applications and services. This paper analyzes the connectivity of a vehicular telematics network to provide a test bed for network design and algorithms, based on real-world movement history data of many vehicles.
The last fifteen years have witnessed a resurgence of interest in asynchronous digital design techniques, as they promise to liberate VLSI systems from clock skew problems, offer the potential for low power and high performance, and encourage a modular design philosophy which makes incremental technological migration much easier. One of the main reasons for using asynchronous design is that it offers the opportunity to exploit the data-dependent latency of many operations in order to achieve low power, high performance, or low area. This paper describes a novel power-aware 8-bit asynchronous arithmetic and logic unit (ALU). The designed ALU is targeted for low power. The 8-bit asynchronous ALU has been designed entirely using Balsa, an advanced asynchronous hardware description language and synthesis tool developed at the University of Manchester, UK.
This research is intended to evaluate particle swarm optimization (PSO) algorithms for solving complex problems of water resources management. To achieve this goal, the standard particle swarm optimization algorithm and a modified method named Elitist-Mutation particle swarm optimization (EMPSO) are used to determine the optimal operation of a single-reservoir system with 504 decision variables. The two methods were compared and contrasted with other meta-heuristic methods such as the Genetic Algorithm (GA) and the original and modified Ant Colony Optimization for continuous domains (ACO_R). The results indicated that the use of EMPSO on complex problems is remarkably superior to the PSO in terms of run time and the optimal value of the objective function. Moreover, EMPSO was found comparable to the other above-stated meta-heuristic methods.
A group key agreement (GKA) protocol is a mechanism to establish a cryptographic key for a group of participants, based on each one's contribution, over a public network. Security of various group-oriented applications for ad-hoc groups requires a group secret shared between all participants. In ad hoc networks, the movement of the network nodes may quickly change the topology, resulting in increased messaging overhead for topology maintenance; region-based schemes for ad hoc networks therefore aim at handling topology maintenance, managing node movement and reducing overhead. When the group composition changes, the group controller can employ supplementary GKA protocols to derive a new key. Thus, they are well suited to the key establishment needs of dynamic peer-to-peer networks such as ad hoc networks. While many of the proposed GKA protocols are too expensive to be employed by the constrained devices often present in ad hoc networks, others lack a formal security analysis. In this paper, a simple, secure and efficient region-based GKA protocol using elliptic curve cryptography, well suited to dynamic ad hoc networks, is presented. The paper introduces a region-based contributory group key agreement that achieves the performance lower bound by utilizing a novel group elliptic curve Diffie-Hellman (GECDH) protocol and a tree-based group elliptic curve Diffie-Hellman (TGECDH) protocol. Both theoretical and simulation studies show that the proposed scheme achieves much lower communication, computation and memory cost than the existing group Diffie-Hellman and tree-based contributory group key agreement schemes.
Recent advancements in wireless communication and microchip techniques have accelerated the development of wireless sensor networks (WSNs). Key management in WSNs is a critical and challenging problem because of the inherent characteristics of sensor networks: deployment in hostile environments, limited resources and ad hoc nature. In this paper we investigate the constraints and special requirements of key management in the sensor network environment, along with some basic evaluation metrics.
Routing protocols typically establish paths over which packets will be sent. Mobility breaks those paths and disrupts communication, imposing additional load, overhead and an increased rate of link failures on routing protocols. However, mobility can be exploited during route establishment to improve route longevity. Moreover, protocols relying on external sources of information (e.g. GPS location, which is unavailable in tunnels and underground) will fail to operate correctly when this information is not available. Hence, self-contained information should be available to perform the task of routing. The heading direction angle (HDA) of the nodes is one such alternative, self-contained source of information used to perform routing. Based on the heading direction angle of a mobile node, only selected nodes in the network are utilized in broadcasting the messages to find the route to the destination. The way the nodes are selected exploits mobility to establish a long-lived route to the destination. The multicast ad-hoc on-demand distance vector routing protocol (MAODV) is made to use the HDA to establish a long-lived route to a destination and also to limit the scope of route requests. The results show that this modified MAODV (M-MAODV) reduces the overhead, increases route longevity and improves performance when compared to MAODV.
Sports coaches today have access to a wide variety of information sources that describe the performance of their players. Cricket match data is highly available and rapidly growing in size, far exceeding human abilities to analyze it. Our main intention is to model an automated framework to identify specifics and correlations among play patterns, so as to extract knowledge which can further be represented as useful information for modifying or improving coaching strategies and methodologies, targeting performance enrichment at the team level as well as for individuals. With this information, a coach can assess the effectiveness of certain coaching decisions and formulate game strategy for subsequent games. Since real-time cricket data is complex, an object-relational model is used to employ a more sophisticated structure to store such data. Frequent pattern evaluation is imperative for sports like cricket, as it facilitates recognition of the main factors accounting for variances in the data. Using simple Apriori for interrelationship analysis is less time efficient because the raw data set is too large and complex. By integrating association mining with Principal Component Analysis, the efficiency of the mining algorithm is improved, since Principal Component Analysis generates frequent patterns through statistical analysis and summarization rather than by repeated searching as in other frequent pattern generation techniques. As the size and dimension of the annotation database is large, Principal Component Analysis acts as a compression mechanism. The frequent patterns are then analyzed for their interrelationship in order to generate interesting and confident association rules.
In this paper, traffic characteristics have been studied by collecting traces from a CDMA2000 cellular wireless network that provides services like messaging, video streaming, e-mail, Internet access and song downloading. The traces record call activities including call initiation time, termination time, originating node identification number, packet size, home station id, foreign station id (when roaming), and handoffs. Traffic parameters, namely call inter-arrival times and call holding times, were estimated using statistical methods. The results show that the call inter-arrival time distribution in this CDMA cellular wireless network is heavy-tailed, can be modeled by gamma as well as Weibull distributions, and is asymptotically long-range dependent. It is also found that the call holding times are best fitted with a lognormal distribution and are not correlated. An analytical model based on our observations for performance measures of a circuit-switched cellular wireless network with multiclass traffic sources is also proposed.
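As a hedged illustration of this kind of fitting (not the actual trace analysis), the Python sketch below fits gamma, Weibull and lognormal distributions to synthetic inter-arrival times with SciPy and compares them with a Kolmogorov-Smirnov statistic.

```python
import numpy as np
from scipy import stats

# Synthetic inter-arrival times standing in for the CDMA2000 trace data.
rng = np.random.default_rng(0)
inter_arrivals = rng.gamma(shape=0.7, scale=2.0, size=5000)

# Fit candidate distributions (location fixed at 0, as is usual for durations).
gamma_params = stats.gamma.fit(inter_arrivals, floc=0)
weibull_params = stats.weibull_min.fit(inter_arrivals, floc=0)
lognorm_params = stats.lognorm.fit(inter_arrivals, floc=0)

# Compare fits with a Kolmogorov-Smirnov statistic (smaller is better).
for name, dist, params in [("gamma", stats.gamma, gamma_params),
                           ("weibull", stats.weibull_min, weibull_params),
                           ("lognormal", stats.lognorm, lognorm_params)]:
    ks = stats.kstest(inter_arrivals, dist.cdf, args=params).statistic
    print(f"{name:10s} KS = {ks:.4f}")
```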
A scalable QoS-aware multicast deployment in DiffServ networks has become an important area of research. Although multicasting and differentiated services are two complementary technologies, the integration of the two technologies is a non-trivial task due to architectural conflicts between them. A popular solution proposed is to extend the functionality of the DiffServ components to support multicasting. In this paper, we propose an algorithm to construct an efficient QoS-driven multicast tree, taking into account the available bandwidth per service class. We also present an efficient method to provide the limited available bandwidth for supporting heterogeneous users. The proposed mechanism is evaluated using simulated tests. The simulated results reveal that our algorithm can effectively minimize the bandwidth use and transmission cost.
This paper proposes a face recognition method using the FERET face database. Facial images of two classes and three classes with different expressions and angles are used for classification. The Fisher Discriminant method is used to compare the results of two classes with the results of three classes, and the Euclidean distance method is used as the similarity measure. The experimental results demonstrate that the performance of Fisher Discriminant Analysis for three classes is the same as the performance for two classes.
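A minimal sketch of this pipeline, assuming synthetic data in place of FERET images, is shown below: Fisher discriminant projection via scikit-learn, followed by a Euclidean nearest-class-mean decision.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-in for flattened face images of three classes (FERET not included).
rng = np.random.default_rng(1)
X_train = np.vstack([rng.normal(loc=c, scale=1.0, size=(20, 50)) for c in range(3)])
y_train = np.repeat([0, 1, 2], 20)
X_test = rng.normal(loc=1, scale=1.0, size=(5, 50))

# Fisher discriminant projection, then Euclidean nearest class mean.
lda = LinearDiscriminantAnalysis(n_components=2).fit(X_train, y_train)
Z_train, Z_test = lda.transform(X_train), lda.transform(X_test)
class_means = np.array([Z_train[y_train == c].mean(axis=0) for c in range(3)])

for z in Z_test:
    d = np.linalg.norm(class_means - z, axis=1)   # Euclidean distances to class means
    print("predicted class:", int(np.argmin(d)))
```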
In this paper, a space-time trellis code (STTC) concatenated with a space-time block code (STBC) for a multi-carrier code-division multiple-access (MC-CDMA) system in a multi-path fading channel is considered, and the performance of the system is evaluated through simulations. The corresponding bit error rate (BER) of the concatenated STTC-STBC-MC-CDMA system is compared with the STTC-MC-CDMA system. The results show that improved performance can be achieved by the concatenated STTC-STBC-MC-CDMA system with a 16-state STTC.
The paper presents an image authentication and secure message transmission technique that embeds a message/image into color images. Authentication is done by embedding the message/image into image blocks of size 3 x 3, called masks, taken from the source image in row-major order. The position of insertion within the mask is chosen according to the formulae k % s and (k + 1) % s + 1, where k is any number between 0 and 7 and s is any number between 2 and 7. The dimension of the authenticating image, followed by an MD-5 key and then the content of the authenticating message/image, are also embedded. This is followed by an XOR operation of the embedded image with another self-generated MD-5 key obtained from the source image. Decoding is done by applying the reverse algorithm. The result has been tested with the aid of histogram analysis, noise analysis and standard deviation computation of the source image against the embedded image, and has been compared with popular existing steganographic algorithms like S-Tools; the proposed IAHLVDDSMTTM is capable of hiding a larger volume of data than S-Tools and shows better performance.
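The position formula can be illustrated directly; the helper below is ours and purely illustrative, enumerating the two insertion positions selected inside a mask for given k and s.

```python
def insertion_positions(k: int, s: int):
    """Positions used inside each 3x3 mask, following the stated formulae.

    k may range over 0..7 and s over 2..7; the two positions are k % s and
    (k + 1) % s + 1, interpreted here as indices into the mask's pixel bytes.
    """
    assert 0 <= k <= 7 and 2 <= s <= 7
    return k % s, (k + 1) % s + 1

# A few sample (k, s) pairs and the positions they select.
for k, s in [(0, 2), (3, 5), (7, 7)]:
    print((k, s), "->", insertion_positions(k, s))
```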
IEEE 802.16, popularly known as WiMAX, is one of the most innovative methods developed in recent times to address the growing demand for high-speed wireless broadband networks. This paper addresses the problems concerning the delivery of video packets in video conferencing and other multimedia application services over WiMAX. Multiple competing traffic sources over a point-to-multipoint WiMAX topology are modeled. A performance analysis of the capacity of the WiMAX equipment to handle VoIP and video traffic flows was conducted. Parameters that indicate quality of service, such as throughput, packet loss, average jitter and average delay, are analyzed for the different types of service flows defined in WiMAX.
Machine simulation of human functions like recognition of text is a challenging task. Off-line handwritten character recognition requires more research to reach the ultimate goal of machine recognition of text. Attempts for the English language have been made by a large number of researchers over the past six decades, but for Indian languages it is still a distant goal. We propose a method for offline isolated English characters; the method is also applied to Marathi vowels. The acquired image is preprocessed to remove all unwanted details so that it is suitable for feature extraction. Feature extraction plays an important role in handwriting recognition. Two feature extraction methods based on directional features are considered: the first uses the stroke distribution of a character, and the second uses contour extraction. The two directional features are compared with two different correlation techniques separately to check the suitability of the recognition method. The first correlation technique calculates the dissimilarity between the reference pattern and the test pattern, and the other calculates the similarity between them. The result of the comparison classifies the character under consideration to a class if there is a hit; on a miss, confusion information is extracted for analysis.
Recently, a macroblock-based technique for backward playback of MPEG video streaming has been discussed in which the network bandwidth and buffer size requirements are reduced. In this technique, the macroblocks are divided into two categories, backward macroblocks (BMB) and forward/backward macroblocks (FBMB), based upon their motion characteristics. In this paper, we propose a new method in which the network bandwidth requirement is reduced further without incurring any cost on the buffer. We predict the required I- or P-frame from the currently decoded frame which resides in the frame buffer at the client system. The motion vector information of the current frame is used to find the position of its various pixels in the previous frame, and the prediction errors of the current frame are used to find their exact values. We performed extensive simulations on various videos having slow, medium and high motion characteristics. Experimental results show that on average 93.4% of pixels in the previous I- or P-frame can be reverse-predicted from the current P-frame. This provides a considerable saving in the network bandwidth requirement as fewer pixels are requested from the server.
The Web is one of the most popular Internet services in today's world, and web servers and web-based applications have become popular corporate applications and hence targets of attackers. A large number of Web applications, especially those deployed by companies for e-business operations, demand high reliability, efficiency and confidentiality. Such applications are written in script languages like PHP embedded in HTML, allowing connections to databases to be established and data to be retrieved and placed on WWW sites. In order to detect known attacks, misuse detection of web-based attacks relies on attack rules and descriptions. As misuse detection considers predefined signatures for intrusion detection, we have proposed a two-phase intrusion detection mechanism. In the first phase we use web-host-based intrusion detection with a matching mechanism using the Hamming edit distance; the web-layer log file is considered for matching. This phase has been tested with our university intranet web server's log file, and we have successfully tested SQL injection for unauthorized access. In the second phase we propose a 'query-based projected clustering' for unsupervised anomaly detection and also a 'packet arrival factor' for intrusion detection. We tested this phase using KDD CUP99; while testing our scheme, we extracted the feature dataset with protocol 'tcp' and service 'http'. Both phases of our scheme were found to work successfully, and an evaluated threshold has been proposed for better results.
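As a simple illustration of the first-phase matching idea (not the actual detection rules), the sketch below computes a Hamming-style distance between a hypothetical attack signature and web-log entries and flags entries within an assumed threshold.

```python
def hamming_distance(a: str, b: str) -> int:
    """Hamming distance between two strings, padded to equal length."""
    width = max(len(a), len(b))
    a, b = a.ljust(width), b.ljust(width)
    return sum(ch1 != ch2 for ch1, ch2 in zip(a, b))

# Hypothetical attack signature and web-layer log entries.
signature = "GET /login.php?user=admin' OR '1'='1"
log_lines = [
    "GET /login.php?user=admin' OR '1'='1",
    "GET /login.php?user=alice&pass=secret",
]
THRESHOLD = 5  # flag entries within this distance of a known signature (assumed)
for line in log_lines:
    d = hamming_distance(signature, line)
    print(f"distance={d:3d}  suspicious={d <= THRESHOLD}  {line}")
```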
Multiple cooperating robots hold the promise of improved performance and increased fault tolerance for large-scale problems such as planetary survey and habitat construction. Multi-robot coordination, however, is a complex problem. The problem is cast in the framework of multi-robot dynamic task allocation and then two methods are described that follow general guidelines for task allocation strategies. In the first method, four distinct task allocation strategies are considered using a simulated grid world. The data from the simulations show that there is no single strategy that produces best performance in all cases, and that the best task allocation strategy changes as a function of the noise in the system. This result is significant, and shows the need for further investigation of task allocation strategies. In the second method, a multi-criteria assessment model capable of evaluating the suitability of individual robots is considered for a specified task according to their capabilities, and existing tasks. Candidates are ranked based on their suitability scores to support administrators in selecting appropriate robots to perform the tasks. The proposed assessment models overcome the lack of role-based task assignment in workflow management systems.
In this paper, we present the approximated CRB for time delay estimator in multipath channel operating at low SNR ranges. In general, evaluation of true CRB is difficult to obtain. Instead, here we derive a simple closed form expression, which is an approximation of true CRB, at low SNR ranges. The computation of this approximated CRB involves less complexity and is reasonably accurate.
Wireless sensor networks are a nascent type of ad-hoc network that has drawn the interest of the research community in the last few years. Their nodes are required to work together on distributed tasks, and accurate and reliable time is a necessity for some of their applications. The use of traditional time synchronization protocols designed for wired networks is restricted by the severe energy constraints in sensor networks. The realization of a time-synchronized network poses many challenges, which are the subject of active research in the field. In this paper we argue that clock synchronization introduces a lot of overhead and can be eliminated in some cases. We advocate that sensors should be allowed to run unsynchronized: when any event of interest occurs, the sensor node records that event in its memory with a timestamp from its cluster head's clock instead of its local clock.
With the extremely high volume of traffic carried on wavelength division multiplexing (WDM) networks, survivability becomes increasingly critical in managing high-speed networks. In a WDM network, the failure of a network element (i.e. fiber links and cross-connects) may cause the failure of several optical channels, thereby leading to large data loss. Here, different approaches to protect mesh-based WDM optical networks against such failures are investigated and, subsequently, a mathematical model of capacity utilization for unicast traffic is presented. In this paper, survivability schemes such as path protection/restoration and link protection/restoration are reviewed for unicast as well as multicast traffic. In addition, shared-link-risk-group (SLRG) based shared path protection (SLRG-SPP) and SLRG-based shared link protection (SLRG-SLP) are also discussed. It is observed that shared-path protection is more efficient in terms of capacity utilization than dedicated-path protection and shared-link protection schemes for random traffic demands. For a low optical cross-connect (OXC) configuration time (10 ns), the shared-link protection scheme offers better protection-switching time than the path-protection scheme. For a high OXC configuration time (10 ms), the dedicated-path protection scheme has a better protection-switching time than the shared-path and shared-link protection schemes.
In this paper, we analyze the performance, security and attack aspects of cryptographic techniques and also investigate the performance-security tradeoff for mobile ad-hoc networks. We propose the KK' cryptographic technique and analyze the dominant issues of security, attacks and various information-theoretic characteristics of the cipher texts for DES, substitution and the proposed KK' cryptographic technique. It is found that the security and information-theoretic characteristics of the proposed KK' and DES algorithms are much better than those of the substitution algorithm. The packet delivery fraction for the KK' and substitution algorithms is much better than for the DES algorithm. The end-to-end delay for the normal AODV protocol is very low, for the substitution and KK' algorithms it is moderate, and for the DES algorithm it is quite high. The security of the KK' algorithm is almost equal to DES while its network performance is almost equal to the substitution algorithm. Finally, we benchmark the proposed KK' cryptographic algorithm in the search for a better cryptographic algorithm for security in MANETs.
In this paper, we provide the first comprehensive comparison of methods for part-of-speech tagging and chunking for Hindi. We present an analysis of the application of three major learning algorithms (viz. maximum entropy models [2] [9], conditional random fields [12] and support vector machines [8]) to part-of-speech tagging and chunking for the Hindi language using datasets of different sizes. The use of language-independent features makes this analysis more general and capable of yielding important conclusions for similar South and South East Asian languages. The results show that CRFs outperform SVMs and Maxent in terms of accuracy. We are able to achieve an accuracy of 92.26% for part-of-speech tagging and 93.57% for chunking using the conditional random fields algorithm. The corpus we have used had 138177 annotated instances for training. We report results for the three learning algorithms by varying various conditions (clustering, BIEO notation vs. BIES notation, multiclass methods for SVMs, etc.) and present an extensive analysis of the whole process. These results will give future researchers an insight into how to shape their research keeping in mind the comparative performance of major algorithms on datasets of various sizes and under various conditions.
Through this paper we present a comparative study of two sequential learning algorithms, viz. conditional random fields (CRF) and support vector machines (SVM), applied to the task of named entity recognition in Hindi. Since the features used are language independent, the same procedure can be applied to tag named entities in other Indian languages like Telugu, Bengali, Marathi, etc. We have used CRF++ for implementing the CRF algorithm and YamCha for implementing the SVM algorithm. The results show a superiority of CRF over SVM and are only a little lower than the highest results achieved for this task, which is due to the non-usage of any pre-processing and post-processing steps. The system makes use of the contextual information of words along with various language-independent features to label the named entities (NEs). We first present the two systems (CRF and SVM) and then compare their results on the same data.
This paper discusses the mitigation of Distributed Denial of Service (DDoS) attacks as well as the preservation of computational time on wireless networks. DDoS attacks affect QoS through the loss of bandwidth and of the resources available at the server. The uncertainty of a distributed denial-of-service attack can best be simulated with the help of a probabilistic model. The simple hop-count method is used to calculate the hop count from the Time-To-Live (TTL) field of each packet. In this approach, calculating the hop count for every packet is not required to detect malicious packets; the number of packets we need to examine depends upon the probabilistic approach. This method mitigates DDoS attacks while reducing the computational time and memory spent processing a packet.
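A minimal sketch of hop-count computation from the TTL field, assuming the usual set of standard initial TTL values, is given below; the function name and threshold set are ours.

```python
# Common initial TTL values set by major operating systems (assumed set).
INITIAL_TTLS = (32, 64, 128, 255)

def hop_count(observed_ttl: int) -> int:
    """Estimate hops travelled from the TTL seen at the victim.

    The initial TTL is taken as the smallest standard value not below the
    observed TTL; the hop count is their difference.
    """
    initial = min(t for t in INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

# Example: packets arriving with various TTLs.
for ttl in (57, 113, 243):
    print(f"observed TTL {ttl:3d} -> approx. {hop_count(ttl)} hops")
```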
Biometrics is a verification approach that identifies a person based on physiological and behavioral features. Signature verification is the biometric identification method which is legally accepted and used in many commercial fields such as e-business, access control and so on. In this paper we propose a robust off-line signature verification based on global features (ROSVGF) for skilled and random forgeries. In this model, prior to extracting the features, we preprocess the signatures in the database. Preprocessing consists of (i) normalization, (ii) noise reduction and (iii) thinning and skeletonization, followed by extraction of a feature set consisting of global features such as the signature height-to-width ratio (aspect ratio), maximum horizontal histogram and maximum vertical histogram, horizontal center and vertical center of the signature, end points of the signature, and signature area. It is observed that our proposed model gives better Type I and Type II error rates compared to existing models.
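The global features named above can be sketched as follows for a binarised signature image; this is our illustrative code (end-point detection is omitted), not the ROSVGF implementation.

```python
import numpy as np

def global_features(binary_sig: np.ndarray) -> dict:
    """Global features of a binarised signature image (1 = ink, 0 = background)."""
    rows = np.where(binary_sig.any(axis=1))[0]
    cols = np.where(binary_sig.any(axis=0))[0]
    sig = binary_sig[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]  # tight crop
    h, w = sig.shape
    ys, xs = np.nonzero(sig)
    return {
        "aspect_ratio": h / w,
        "max_horizontal_histogram": int(sig.sum(axis=1).max()),
        "max_vertical_histogram": int(sig.sum(axis=0).max()),
        "horizontal_center": float(xs.mean()),
        "vertical_center": float(ys.mean()),
        "signature_area": int(sig.sum()),
    }

# Tiny synthetic "signature" for illustration.
img = np.zeros((10, 30), dtype=int)
img[4, 5:25] = 1
img[2:8, 15] = 1
print(global_features(img))
```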
A distinguishing characteristic of wireless sensor networks is the opportunity to exploit characteristics of the application at lower layers. This paper reports the results of a simulation comparison, using the J-Sim simulator, of proposed WSN data dissemination protocols: forwarding diffusion data dissemination (FDDDP), decentralized data dissemination (DDDP), credit broadcast data dissemination (CBDDP), and energy-aware and geographical data dissemination (EAGDDP). Our performance study provides useful insights for the network designer, such as which protocols (and design choices) scale control traffic well, improve data delivery, reduce overall energy consumption, reduce routing overhead and maximize bandwidth utilization. The static pre-configuration of the cell size in DDDP is one of the reasons why DDDP exhibits larger routing overhead than FDDDP, by 74.2% on average. Although CBDDP produces approximately 94.6% smaller overhead than DDDP and 90.7% smaller than FDDDP, because of its statically configured credit amount CBDDP delivers on average 7.5 times more redundant data packets than DDDP and FDDDP. EAGDDP improves delivery by 80% on average and balances energy consumption. We suggest that making these protocols truly self-learning could significantly improve their performance.
The infrastructure-based IEEE 802.11 wireless LANs (WLANs) are open in nature and are prone to a variety of attacks. Present security mechanisms are not comprehensive enough to overcome these attacks. In this paper we discuss the possible attacks in WLANs and the drawbacks of the existing security mechanisms. We provide a security architecture which uses mobile agents as a security facilitator. Our architecture assures secured communication through two-way authentication between the access point and the wireless nodes.
This paper looks at two new bus arbitration algorithms for use in multi-processor and multi-core systems, where different processors must share the same bus to access main memory. These algorithms try to improve upon existing algorithms in terms of latency caused by contention among the processors. Both the algorithms take into account characteristics of arbitration which are normally ignored, or given less importance to. The Request-Service bus arbitration algorithm attempts to remove all forms of starvation among the competing processors. Arbitration takes place in two stages: the Request stage, where all requests from processors are latched onto the bus, and the Service stage, where all these requests are served. This algorithm works well under conditions of light load. The age-based bus arbitration algorithm gives more priority to processors that have recently acquired the bus, thus leading to greater throughput. To control starvation, this scheme is used only as long as there are few processors with active requests. This algorithm is suitable in cases where processors have to transfer large blocks of data.
This paper evaluates the performance of OFDM-BPSK and OFDM-QPSK systems under the alpha-mu distribution. A fading model based on the non-linearity present in the propagation medium is utilized here for the generation of alpha-mu variates. Different combinations of alpha and mu provide various fading distributions, one of which is Weibull fading. Here, simulations of OFDM signals are carried out with this Weibull-faded signal to understand the effect of channel fading.
The usage of sensor networks is growing rapidly due to their small size and easy deployment: such networks can easily be expanded and shrunk, so sensor networks are more flexible than wired networks. Due to this flexibility, such networks have applications in various fields, and object tracking is one of the most important of them. Wireless sensor networks are ad-hoc networks containing a set of nodes with limited computational power and limited power resources. As the energy resources of a sensor node are limited, full utilization of the resources with minimum energy remains the main consideration when a wireless sensor network application is designed. Power is supplied by batteries fitted to the sensor node and is not easily replaceable. As energy is one of the major constraints of such networks, we take this issue up for further consideration. In order to maximize the lifetime of sensor networks, the system needs aggressive energy optimization techniques, ensuring that energy awareness is incorporated not only into individual sensor nodes but also into groups of cooperating nodes and into the entire sensor network. In this paper we suggest an energy saving scheme called Maximize the Lifetime of Object tracking sensor Network with node-to-node Activation Scheme (MLONAS), in which a minimum number of nodes are involved in tracking an object while the other nodes remain in sleep mode. When an object is about to enter the region of another node, the current node activates that node, and when the newly activated node starts tracking the object the previous one goes back to the sleep state. This scheme can increase the life of the sensor network as few nodes are involved in tracking the moving object while the others remain in the sleep state.
An approach involving a new ant colony optimization (ACO) and fuzzy derivatives is presented to tackle the image edge detection problem. Ant colony optimization is inspired by the foraging behavior of some ant species which deposit pheromone on their way. Ant colonies, and more generally social insects, act as a distributed system presenting a highly structured social organization; they communicate with each other by modifying the environment (stigmergy). The number of ants acting on the image is decided by the variation of a fuzzy probability factor calculated from fuzzy derivatives, which establishes a pheromone matrix. To avoid movements of ants due to intensity variations caused by noise, we use the fuzzy derivative approach to make sure that only intensity variation due to an edge is reflected in the probabilistic transition matrix. Finally, a binary decision is made on the pheromone matrix by calculating a threshold adaptively.
A combinatorial approach for protecting Web applications against SQL injection is discussed in this paper; it is a novel idea that incorporates the uniqueness of a signature-based method and an auditing method. A major issue in web application security is SQL injection, which can give attackers unrestricted access to the databases that underlie Web applications and has become increasingly frequent and serious. From the signature-based standpoint, the approach presents a detection model for SQL injection using pairwise sequence alignment of an amino acid code formulated from the Web application form parameters sent via the Web server. From the auditing standpoint, it analyzes the transactions to find malicious access. The signature-based method uses the Hirschberg algorithm, a divide-and-conquer approach that reduces the space complexity of the alignment. The system was able to stop all of the successful attacks and did not generate any false positives.
A reversible method for converting color images into gray images using the YCbCr color space and the Discrete Wavelet Transform (DWT) has recently been proposed. The method embeds the colors of the original color image into the gray image using the DWT. The gray image becomes textured because of the embedded color information. This textured gray image can either be printed or sent to the receiver with a size smaller than the original color image. After receiving the textured gray image, the receiver applies the DWT on it and extracts the hidden color information to reconstruct the color image. A printed textured gray image can be scanned for digitization, and the techniques discussed here can then extract the colors to recreate the original color image. Using Kekre's LUV color space for the technique improves the quality of the recreated color image; the matted gray image also has better quality and visual appearance than with the YCbCr color space. Applications include compressing image data for transmission, and printing color images using only the black cartridge and regenerating the color images afterwards. The improvements in the recovered color image and the matted gray image are shown in the results and supported by statistical data.
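A minimal sketch of the embed/extract round trip, assuming a Haar DWT on a luminance plane and half-resolution chroma planes (not Kekre's LUV pipeline), is shown below using PyWavelets; ALPHA and the synthetic data are assumptions.

```python
import numpy as np
import pywt

# Synthetic luminance (Y) and two chroma planes standing in for a real image;
# the chroma planes are half-size so they fit the DWT detail subbands.
rng = np.random.default_rng(2)
Y = rng.uniform(0, 255, size=(128, 128))
U_half = rng.uniform(-128, 127, size=(64, 64))
V_half = rng.uniform(-128, 127, size=(64, 64))

# Embed: replace the horizontal/vertical detail subbands of Y with scaled chroma.
ALPHA = 0.05                                 # embedding strength (assumed value)
cA, (cH, cV, cD) = pywt.dwt2(Y, "haar")
textured_gray = pywt.idwt2((cA, (ALPHA * U_half, ALPHA * V_half, cD)), "haar")

# Extract: a forward DWT of the textured gray image recovers the chroma planes.
_, (rH, rV, _) = pywt.dwt2(textured_gray, "haar")
U_rec, V_rec = rH / ALPHA, rV / ALPHA
print("max chroma reconstruction error:",
      float(np.max(np.abs(U_rec - U_half))), float(np.max(np.abs(V_rec - V_half))))
```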
The easy availability of the Internet, together with relatively inexpensive digital recording and storage peripherals, has created an era in which duplication, unauthorized use and maldistribution of digital content have become easier. To prevent unauthorized use, misappropriation and misrepresentation, authentication of multimedia content has received broad attention in recent days. In this regard we have already introduced a technique for invisible image watermarking for color image authentication. In this paper we propose a method for embedding a color watermark image into a color host image in a more efficient manner, introducing a new scheme of embedding the color watermark image at different positions of the color host image. The host image is divided into blocks so that, as per our proposed technique, we are able to embed the watermark in the LSBs of all such blocks, and perceptually it is entirely invisible to the human visual system. The proposed watermarking framework allows a user with an appropriate secret key and a hash function to verify the authenticity, integrity and ownership of an image. If the user performs the watermark extraction with an incorrect key and an inappropriate hash function, the user obtains an image that resembles noise, so authenticity is not preserved even if a single pixel of the image is changed. This embedding method is beneficial at the watermark extraction end: we use a blind extraction method, and at the time of extraction the watermark can easily be resolved by combining the LSBs from the different blocks of the watermarked image.
Content matching based algorithms form the core of many network security devices. Content matching is a critical component because it allows decisions to be made based on the actual content flowing through the network. The most important parameters that go into the design of a content matching algorithm are its performance and its accuracy of detection. Although this topic has received significant attention in the literature over the past decade, much of the work was focused on improving performance, and the accuracy of detection was limited to within a packet instance. Protocols like TCP do not guarantee that message boundaries are preserved, which can result in a pattern being segmented across packets. This paper demonstrates a novel flow-aware content matching algorithm that solves this limitation without compromising performance.
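One simple way to picture flow-aware matching (our sketch, not the paper's algorithm) is to keep the trailing bytes of each flow's previous segment so a pattern split across two packets is still found.

```python
class FlowAwareMatcher:
    """Match a byte pattern across TCP segment boundaries (illustrative sketch).

    A small tail of each segment (pattern length minus one bytes) is kept per
    flow so that a pattern split between two packets is still detected.
    """

    def __init__(self, pattern: bytes):
        self.pattern = pattern
        self.tails = {}          # flow id -> trailing bytes of previous segment

    def scan(self, flow_id, payload: bytes) -> bool:
        data = self.tails.get(flow_id, b"") + payload
        hit = self.pattern in data
        self.tails[flow_id] = data[-(len(self.pattern) - 1):]
        return hit

# The pattern "attack" is split across two segments of the same flow.
m = FlowAwareMatcher(b"attack")
print(m.scan("flow-1", b"...benign data...att"))   # False: only a prefix so far
print(m.scan("flow-1", b"ack payload continues"))  # True: completed across packets
```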
In this paper, we introduce a new topology of optimal polynomial fuzzy swarm net (OPFSN) that is based on a swarm-optimized multilayer perceptron with fuzzy polynomial neurons. The study offers a comprehensive design methodology involving mechanisms of particle swarm optimization (PSO). The design of the conventional PNN uses the extended group method of data handling (GMDH) with a fixed scheme for the network; it also considers a fixed number of input nodes in each layer, and the resulting architecture does not guarantee an optimal network architecture. Here, the development of OPFSN gives rise to a structurally optimized topology and comes with a substantial level of flexibility, which becomes apparent when contrasted with what we encounter in the conventional PNN. To evaluate the performance of the swarm-optimized OPFSN, we experimented with benchmark data sets. A comparative analysis reveals that the proposed OPFSN exhibits higher classification accuracy in comparison to the PNN.
The most critical factors responsible for bottlenecks in the design and implementation of high-speed AES (Advanced Encryption Standard) architectures for any resource-constrained target platform, such as an FPGA, are the SubstituteByte/InverseSubstituteByte and MixColumn/InverseMixColumn operations. Most implementations conventionally use the memory-intensive look-up table approach for the SubstituteByte/InverseSubstituteByte (SB/ISR) block, resulting in an unavoidable memory-access delay. The proposed work employs a memory-less combinatorial design for the implementation of SB/ISR as an alternative, achieving higher speeds by eliminating memory access delays while retaining or enhancing the overall area efficiency. The work also explores the use of sub-pipelining to further enhance the speed and throughput of the suggested implementation. The architecture employs optimization in both the inverter design and the isomorphic mapping using composite field arithmetic to reduce the area requirements. The proposed design replicates the very compact SB/ISR reported in [6] and [13] with an overall reduction in area requirement of 18% and 14%, respectively. The optimum composite-field construction for the AES S-Box is selected based on the complexity of the subfield operations in the design of the inverter in GF(2^8), considering the effects of the irreducible polynomial coefficients and the isomorphic mappings, so as to minimize gate count and critical path. This decreased size of the SB/ISR design helps area-limited hardware implementations and also allows more copies of SB/ISR for parallelism and/or pipelining of AES. The proposed decomposition method for the integrated MixColumn/InverseMixColumn (MC/IMC) optimizes the area and path delay.
Ad-hoc networks may exhibit varying characteristics in different environments, which may call for the use of various physical layers, network topologies, and nodal mobility patterns. Using a simulator, it is possible to model and simulate different physical layers, link/MAC layers, and multi-routing schemes, to compare end-to-end statistics (end-to-end delay, throughput and energy efficiency), and finally to determine the most energy-efficient solution. This paper analyzes the energy consumption aspect of emergency ad-hoc networks. Hence, we propose an energy-awareness scheme (Energy Efficient Routing, EER) that assimilates the data link layer and the physical layer with respect to path loss, fading and inter-symbol interference (ISI) at the destination receiver.
Information security is an indispensable requirement in any network environment; compromising on security may lead to serious technical and social ill effects. Security is achieved by encryption and decryption of data. Traditionally, a symmetric key is initially shared between the two communicating nodes via public key cryptography, and the session is then secured using the shared key for encryption and decryption of messages. However, the security of the system can be enhanced if the single shared key is replaced by a bunch of keys together with a mechanism to determine the sequence in which the keys are used. In this way, the session is not compromised just by cracking a single key, a high level of security is assured, and the number of communications between the two parties needed to exchange the symmetric keys is decreased. Our work explains the concept of the key bunch, how it works, the protocol for key-bunch exchange in unicast and multicast scenarios, and its resistance against attacks.
The proliferation of the World Wide Web and the pervasiveness of the related technologies have increased the demand of high performance clusters and distributed resources. Load balancing of the systems in which heterogeneous servers and resources are distributed becomes a key factor in achieving a rapid response time, efficient resource utilization and consequently higher throughput. In a large scale distributed environment, load balancing using mobile agents holds appeal in comparison with pure message passing protocol scheme. In this paper, the integration of a genetic algorithm with the sender initiated approach for server selection serves as a promising possibility for efficient load balancing and effective resource utilization in distributed systems.
Security remains a concern in MANETs; in the case of multicast MANETs it is even more challenging, as most routing protocols have been proposed assuming that the nodes in a MANET behave in a secure manner. In this paper we outline a secure novel multicast routing protocol named Secure Hypercube Based Team Multicast Routing Protocol (S-HTMRP) that deals with both multicast routing and packet forwarding. Significant attention has recently been devoted to developing secure on-demand routing protocols that defend against a variety of possible attacks on routing. This paper develops and proposes a trust model to secure the network from the rushing attack, a variant of denial-of-service attack that may exist in MANETs. An alternative idea of implementing a distributed security mechanism, by computing trust levels from the inherent knowledge of the network, has been proposed here, along with a randomized packet forwarding mechanism, and combined with a scalable routing architecture to obtain a routing protocol for multicast MANETs that is more secure. The routes calculated through this mechanism may not be optimal but certainly have an accurate measure of reliability in them. The proposed routing protocol, S-HTMRP, outperforms the existing hypercube-based team multicast routing protocol in the presence of rushing attacks.
Validations of MANET simulations are meaningful only when they use realistic mobility models. If the movement is unrealistic, the simulation results obtained may not correctly reflect the true performance of the protocols, and the majority of existing mobility models for MANETs do not provide realistic movement. Since most mobile nodes are directly or indirectly handled by human beings, a mobility model based on social network theory predicts node movements more realistically. Major challenges of these models are identifying the communities and predicting the behavior of each node and each community. Our paper reinforces the model and overcomes these challenges by using a Unified Relationship Matrix, which identifies the community structure more accurately and helps to predict the behaviour of a MANET. The Unified Relationship Matrix represents the inter- and intra-type relationships of nodes among the various community groups within the given terrain. It resolves node duplication among heterogeneous community groups and identifies the correct community structure.
Collaborative applications are feasible nowadays and are becoming more popular due to advances in internetworking technology. Typical collaborative applications in India include space research, military applications, higher learning in universities and satellite campuses, state and central government sponsored projects, e-governance, e-healthcare systems, etc. In such applications, the computing resources of a particular institution/organization are spread across districts and states, and communication is achieved through internetworking. Therefore the computing and communication resources must be protected against security attacks, as any compromise on these resources would jeopardize the entire application/mission. A collaborative environment is prone to various threats, of which distributed denial-of-service (DDoS) attacks are of major concern. A DDoS attack prevents legitimate access to critical resources; a survey by Arbor Networks reveals that approximately 1,200 DDoS attacks occur per day. As the DDoS attack is coordinated, the defense against it has to be collaborative as well: all the routers need to work collaboratively by exchanging caveat messages with their neighbors. This paper analyses the security measures in a collaborative environment, identifies the popular DDoS attack tools, and surveys the existing traceback mechanisms used to trace the real attacker.
With the advent of VLSI technology, the demand for higher processing has increased to a large extent. Parallel computer interconnection topologies have been studied extensively, with particular emphasis on cube-based topologies. This paper proposes a new cube-based topology, called the Folded Metacube, with better features such as reduced diameter and cost and improved broadcast time in comparison to its parent topologies, viz. the Folded Hypercube and the Metacube. Two separate routing algorithms, one-to-one and one-to-all broadcast, have been proposed for the new network.
Security of websites and online systems is of paramount concern today. A significant threat comes from malicious automated programs designed to take advantage of online facilities, resulting in wastage of resources and breaches of web security. To counter them, CAPTCHAs are employed as a means of differentiating these bots from humans. However, highly sophisticated computer programs that have evolved over time have kept pace with all the current CAPTCHA generation schemes and rendered them ineffective. Due to the vulnerability of current CAPTCHAs, generation schemes of greater robustness are required. Therefore, in this paper we propose a novel scheme of embedding numbers in text CAPTCHAs. It incorporates two levels of testing: identification of the displayed characters and, secondly, interpretation of the logical ordering based on the embedded numbers. Our CAPTCHA can be conveniently implemented on Web sites and provides the advantages of robustness and low space requirements. The discussion and conclusion at the end of the paper justify our approach.
This paper aims at exploring short-term spectral features for Emotion Recognition (ER). Linear predictive cepstral coefficients (LPCC), mel frequency cepstral coefficients (MFCC) and log frequency power coefficients (LFPC) are explored for the classification of emotions. For capturing emotion-specific knowledge from the above short-term speech features, vector quantizer (VQ) models are used in this paper. The Indian Institute of Technology, Kharagpur Simulated Emotion Speech Corpus (IITKGP-SESC) is used for developing the emotion-specific models and validating them through an emotion recognition task. The emotions considered for the study are anger, compassion, disgust, fear, happiness, neutral, sarcasm and surprise. The recognition performance of the developed models is observed to be about 40%, whereas subjective listening tests show a performance of about 60%.
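A hedged sketch of the MFCC-plus-VQ pipeline is given below, using librosa for MFCCs and K-means codebooks as the vector quantizers; the synthetic tones merely stand in for IITKGP-SESC utterances, and the codebook size is an assumption.

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans

def mfcc_frames(y, sr, n_mfcc=13):
    """Frame-level MFCC vectors (frames as rows)."""
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_vq_models(train_utterances, codebook_size=16):
    """One K-means codebook per emotion from pooled MFCC frames."""
    models = {}
    for emotion, clips in train_utterances.items():
        frames = np.vstack([mfcc_frames(y, sr) for y, sr in clips])
        models[emotion] = KMeans(n_clusters=codebook_size, n_init=5,
                                 random_state=0).fit(frames)
    return models

def classify(y, sr, models):
    """Pick the emotion whose codebook gives the lowest average distortion."""
    frames = mfcc_frames(y, sr)
    def distortion(km):
        d = np.linalg.norm(frames[:, None, :] - km.cluster_centers_[None], axis=2)
        return d.min(axis=1).mean()
    return min(models, key=lambda e: distortion(models[e]))

# Synthetic tones stand in for corpus utterances (the corpus is not included here).
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
train = {"anger": [(np.sin(2 * np.pi * 300 * t), sr)],
         "neutral": [(np.sin(2 * np.pi * 120 * t), sr)]}
models = train_vq_models(train, codebook_size=4)
print(classify(np.sin(2 * np.pi * 290 * t), sr, models))
```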
The unpredictable movement of mobile nodes results in dynamic changes of topology as well as the breaking and re-establishment of links among the network nodes. The failure of links not only changes the network size but also creates an environment in which intruders can become members of the network, which is a major security concern. Detection of such unwanted entities requires a secure key exchange mechanism with the certificate authority (CA) node. In this paper, we propose an end-to-end security mechanism that employs CA functionality at servers for providing digital certificates to certain selected client nodes using threshold cryptography and the Diffie-Hellman key exchange method. The server nodes provide the CA functionality to any client node that wants to access the network. To provide a robust security system for the network, we employ a genetic algorithm (GA) in our model as an optimal verification tool. The simulation results of the model show that the verification of any entity requires very little time, yet no entity can obtain the key of the CA with a known equipment-id within the time limit.
The increasing demand for real-time multimedia applications in wireless environments requires stringent quality of service (QoS) provisioning to the mobile hosts (MH). The scenario becomes more complex during group communication from a single source to multiple destinations. Fulfilling user demands with respect to delay, jitter, available bandwidth, packet loss rate and the cost associated with the communication requires multi-objective optimization (MOO) with QoS satisfaction, which is an NP-complete problem. In this paper we propose a multi-objective optimization algorithm to determine a minimum-cost multicast tree with end-to-end delay, jitter, packet loss rate and blocking probability constraints. The simulation results show that the proposed algorithm satisfies the QoS requirements (such as high availability, good load balancing and fault tolerance) made by the hosts under varying topology and bursty data traffic for multimedia communication networks. The scalability performance of the algorithm is also highly encouraging.
The speedy development and commercialization of the Internet has given rise to simultaneous transmission of data to multiple receivers, so that the multicast communication model has become a regular mode of communication. Multicast communication is vulnerable due to its well-known IP addresses, and one of the main challenges in securing multicast communication is source authentication. Initially, it was provided by digital signatures, which are not efficient. Several other schemes exist for multicast source authentication, such as simple hash chaining, random hash chaining and tree hash chaining, but each has shortcomings. In this paper, we propose an adaptive scheme for multicast source authentication (AMSA); the results show that our approach is better than existing techniques.
In this paper, the mathematical models of two water distribution network systems, namely a serial network and a branched network, have been considered as nonlinear optimization problems, and the Particle Swarm Optimization (PSO) method has been used to obtain the global optimal solution in each case. PSO is a heuristic optimization method; herein, it has been used to determine the minimum cost of the water distribution network system for the serial network and the branched network. The numerical results indicate the robustness of the PSO method and suggest that it can be effectively used for any generalized water distribution network system.
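For readers unfamiliar with PSO, the sketch below implements the standard velocity/position update for a box-constrained cost; the toy cost function only stands in for the real pipe-network hydraulics and is not from the paper.

```python
import numpy as np

def pso_minimize(cost, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Standard particle swarm optimisation for a box-constrained cost function."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))          # positions
    v = np.zeros_like(x)                                      # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(cost, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(cost, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy stand-in for a pipe-sizing cost (the real network hydraulics are omitted):
# cost grows with pipe diameter but penalises head-loss violations.
def toy_network_cost(d):
    return np.sum(1.5 * d**1.3) + 50.0 * np.sum(np.maximum(0.0, 1.0 / d - 2.0))

best, best_cost = pso_minimize(toy_network_cost, dim=6, bounds=(0.1, 2.0))
print(best.round(3), round(best_cost, 3))
```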
The radio frequency identification (RFID) system is becoming one of the most popular wireless technologies. The UHF RFID tag emulator is a part of the RFID testing toolset; it imitates the behavior of an RFID tag. The UHF RFID tag emulator (860 MHz to 960 MHz) is aimed at testing RFID systems and also serves as a general-purpose communication link to other electronic devices. In this work, we present a high-level architecture of the tag emulator and the design of the FM0 encoder and the Miller encoder. Designed as finite state machines, the encoders are discussed with particular focus on using the RFID emulator as a data transport device and debugging tool. The synthesis results show that the FSM design is efficient, achieving operating frequencies of 192.641 MHz and 188.644 MHz for the FM0 and Miller encoders, respectively.
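As a small illustration of the FM0 state machine (a Python model, not the synthesized HDL), the encoder below emits two half-symbol levels per bit: the level inverts at every symbol boundary and additionally mid-symbol for a data-0.

```python
def fm0_encode(bits, start_level=1):
    """FM0 (bi-phase space) baseband encoder, two half-symbols per bit.

    The level always inverts at each symbol boundary; a data-0 adds an extra
    inversion in the middle of the symbol, while a data-1 does not.
    """
    level = start_level
    out = []
    for b in bits:
        level ^= 1                 # mandatory inversion at the symbol boundary
        first = level
        if b == 0:
            level ^= 1             # extra mid-symbol inversion for data-0
        out.extend([first, level])
    return out

# Half-symbol levels for the bit sequence 1 0 1 1 0.
print(fm0_encode([1, 0, 1, 1, 0]))
```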
In this paper we discuss an off-line signature recognition system designed using clustering techniques. The cluster-based features are mainly morphological; they include Walsh coefficients of pixel distributions, a vector-quantization-based codeword histogram, grid and texture information features, and geometric centers of a signature. We discuss the extraction and performance analysis of these features, present the FAR and FRR achieved by the system using them, and compare individual performance with overall system performance.
Complex software systems, such as telecom OSS/BSS, evolve over the years based on a stream of incremental specifications. This paper examines the raison d'être of, and an approach for, building a repository of validating prototypes against key requirements in the incremental specifications, in order to formalize the specifications in a requirements engineering knowledge base.
Upstream elements are very significant in disclosing the properties of a sequence: not only do they provide signals for various proteins to bind, they also help in locating hidden sequences and their properties, such as the TATA box. The idea behind developing this algorithm is to find upstream sequences that carry hidden properties, much like road signs that alert drivers; here, the associated proteins help the user to predict and analyse the upstream sequences. We downloaded the database file (nucleotide file) and the query file and ran nBLAST. The BLAST output was then parsed to filter out full-length sequences (sequences not truncated at the 5' or 3' end by more than 11 bases). The time complexity of the algorithm was improved from exponential to linear by using a divide-and-conquer approach, in which the large database file is divided into smaller files. The algorithm gives good hits and filters out the upstream elements, and the user can choose between gapped and ungapped alignment against the database.
This paper discusses a speech-and-speaker (SAS) identification system. The speech signal is recorded and then processed. The speech signal is treated graphically in order to extract the essential image features as a basic step in successful data mining applications for biometric techniques; the object considered here is the human-voice signal. Identification and classification are performed with Burg's estimation model, and the algorithm of Toeplitz matrix minimal eigenvalues is used as the main tool for signal-image description and feature extraction. The extracted feature-carrying image comprises the elements of Toeplitz matrices, whose minimal eigenvalues are computed consecutively to introduce a set of feature vectors within a class of voices. At the classification stage, both conventional and neural-network-based methods are used, which helps in speech recognition and speaker authentication. Some application examples and comparisons are presented. The required computations were performed in Matlab, proving speech-signal image recognition in a simple and easy-to-use way; the approach does not require any special hardware and can be used along with other biometric technologies in hybrid systems for multi-factor verification.
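To make the Toeplitz-minimal-eigenvalue idea concrete, the sketch below (NumPy/SciPy rather than the Matlab used in the paper) forms autocorrelation-based Toeplitz matrices of increasing order from a signal frame and collects their smallest eigenvalues as a feature vector; the exact matrix construction in the paper may differ.

```python
# Sketch of a Toeplitz-minimal-eigenvalue feature vector: build autocorrelation
# Toeplitz matrices of increasing order and keep the smallest eigenvalue of each.
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_min_eig_features(frame, max_order=8):
    frame = np.asarray(frame, dtype=float)
    n = len(frame)
    # biased autocorrelation estimates r[0..max_order]
    r = np.array([np.dot(frame[:n - k], frame[k:]) / n for k in range(max_order + 1)])
    features = []
    for order in range(2, max_order + 1):
        T = toeplitz(r[:order])                    # symmetric Toeplitz matrix
        features.append(np.linalg.eigvalsh(T)[0])  # minimal eigenvalue
    return np.array(features)

frame = np.sin(0.3 * np.arange(256)) + 0.05 * np.random.randn(256)
print(toeplitz_min_eig_features(frame))
```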
This paper presents an overview of low-level parallel image processing algorithms and their implementation for active vision systems. The authors demonstrate novel low-level image processing algorithms for point operators, local operators, dithering, smoothing, edge detection, morphological operators, image segmentation and image compression. The algorithms have been prepared and described as pseudocode and simulated using the Parallel Computing Toolbox (PCT) of MATLAB. The PCT provides parallel constructs in the MATLAB language, such as parallel for loops, distributed arrays and message passing, and enables rapid prototyping of parallel code through an interactive parallel MATLAB session.
The use of data mining approaches in the domain of medicine is increasing rapidly. The effectiveness of these approaches to classification and prediction has improved the performance of such systems, and they are particularly useful to medical practitioners in decision making. In this paper, we present an analysis of the prediction of survivability of burn patients. The machine learning algorithm C4.5 is used to classify the patients using the WEKA tool. The performance of the algorithm is examined using classification accuracy, sensitivity, specificity and the confusion matrix. The dataset was collected retrospectively from the data records of burn patients at Swami Ramanand Tirth Hospital, Ambajogai, Maharashtra, India. The results are found to be precise and accurate when compared with the actual information on survival or death.
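The evaluation metrics named above follow directly from the binary confusion matrix. The small sketch below computes them for hypothetical labels (e.g. 1 = survived, 0 = did not survive); it mirrors the measures reported by WEKA rather than reproducing the C4.5 classifier itself.

```python
# Sketch of accuracy, sensitivity and specificity derived from a binary
# confusion matrix (labels here are hypothetical toy values).
def confusion_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy    = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0   # true positive rate
    specificity = tn / (tn + fp) if tn + fp else 0.0   # true negative rate
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "confusion": [[tn, fp], [fn, tp]]}

print(confusion_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1]))
```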
A wireless ad-hoc network consists of wireless nodes communicating without the need for a centralized administration, in which all nodes potentially contribute to the routing process. In this paper, we analyze packet scheduling algorithms to find those that most improve performance in congested networks. Conventional packet schedulers in wireless ad hoc networks serve data packets in FIFO order; a scheduling algorithm that schedules packets based on their priorities can therefore improve the performance of the network. Here, we present a fuzzy based priority scheduler for mobile ad-hoc networks that determines the priority of packets, using Dynamic Source Routing (DSR) as the routing protocol. The performance of this scheduler has been studied using the OPNET simulator and measured in terms of packet delivery ratio, end-to-end delay and throughput. It is found that the scheduler provides an overall improvement in the performance of the system when evaluated under different load and mobility conditions. From the simulation results, the packet delivery for DSR improves by 39% over the total packets transmitted, and the end-to-end delay decreases by around 0.35 seconds.
The focus of this paper is a three-level neutral point clamped (NPC) inverter, which is a principal part of the power conditioning system. Accordingly, this paper deals with the space vector pulse width modulation (SVPWM) of three-level NPC inverters and introduces a computationally very efficient three-level SVPWM algorithm that is implemented using the MATLAB and Simulink software package.
Machine-based gender classification is one of the challenging problems for computer science researchers. The ability that a two-year-old child exhibits effortlessly demands immense computational power from computing machines. Many researchers have attempted this problem using different psychological characteristics such as handwriting, speech and query response; these computationally intensive techniques are application specific and their classification efficiency is limited. Some face identification systems have used pixel-based information, eigenfaces and the geometrical relations of facial features. We present a neural-network-based, upright-invariant frontal face detection system which can classify gender based on facial information. In our approach we combine pixel-based and geometric facial features to increase the reliability of the classification process. The use of a pi-sigma neural network and the cyclic shift invariance technique enhances the robustness of the classification process.
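As a rough illustration of the pi-sigma architecture named above, the sketch below computes the forward pass of a single pi-sigma output unit: the product of several weighted sums passed through a sigmoid. The weights and feature vector are random placeholders; training and the cyclic-shift-invariance step are not shown.

```python
# Sketch of a pi-sigma unit's forward pass: the output is the product of several
# linear (sigma) units passed through a squashing function.
import numpy as np

def pi_sigma_forward(x, weights, biases):
    """weights: (k, d) array for k sigma units; biases: (k,)"""
    sums = weights @ x + biases                  # k weighted sums
    product = np.prod(sums)                      # pi (product) layer
    return 1.0 / (1.0 + np.exp(-product))        # sigmoid output, e.g. P(male)

rng = np.random.default_rng(0)
x = rng.normal(size=16)                          # combined pixel + geometric features
W = rng.normal(scale=0.1, size=(3, 16))
b = np.zeros(3)
print(pi_sigma_forward(x, W, b))
```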
A finite element method (FEM) based forward solver is developed for solving the forward problem of 2D electrical impedance tomography (EIT). The method of weighted residuals with a Galerkin approach is used for the FEM formulation of the EIT forward problem. The algorithm is written in MATLAB 7.0 and the forward problem is studied with a practical biological phantom. The EIT governing equation is numerically solved to calculate the surface potentials at the phantom boundary for a uniform conductivity. An EIT phantom is developed with an array of 16 electrodes placed on the inner surface of the phantom tank filled with KCl solution. A sinusoidal current is injected through the current electrodes and the differential potentials across the voltage electrodes are measured. The measured data are compared with the differential potentials calculated for the known current and solution conductivity, and this comparison is used to identify sources of error so as to improve data quality for better image reconstruction.
The objective of this paper is to propose a new algorithm for classifying the concepts present in an image. The training images are segmented into fixed-size blocks and features are extracted from them using an orthogonal polynomials based transformation. From the feature vectors, a codebook is generated for each concept class, and significant class representative vectors are then calculated and used for classifying the concepts. The proposed method gives a better representation of the training images, and the results produced are promising.
Almost all bugs related to the graphical user interface (GUI) module of an application are described in terms of events associated with GUI components. In this paper, a bug mining model for discovering duplicate and similar GUI bugs is presented and an approach for detecting similar and duplicate GUI bugs is described. The resolution of similar and duplicate bugs is almost identical, so identifying them reduces the time required to fix reported GUI bugs and helps achieve faster development. A GUI bug can be transformed into a sequence of events, components and expected implementation requirements for each GUI event, and this transformation is used here to discover similar and duplicate GUI bugs. First, all GUI bugs are transformed into sequences of events, components and requirements; these sequences are then matched pairwise, and the common subsequence generated indicates the similarity between the GUI bugs.
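One simple way to realise the pairwise matching step described above is a longest-common-subsequence comparison of the event/component sequences; the sketch below uses the LCS length, normalised by the longer sequence, as an illustrative similarity score (the paper's exact scoring may differ).

```python
# Pairwise matching of GUI-bug event/component sequences via the longest common
# subsequence (LCS); the ratio to the longer sequence serves as a similarity score.
def lcs_length(a, b):
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def bug_similarity(seq1, seq2):
    return lcs_length(seq1, seq2) / max(len(seq1), len(seq2))

bug_a = ["click:SaveButton", "type:NameField", "click:OKButton"]
bug_b = ["click:SaveButton", "click:OKButton"]
print(bug_similarity(bug_a, bug_b))   # ~0.67: likely similar bugs
```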
In this paper an efficient method for data clustering is proposed. The proposed algorithm is a modified psFCM, called the pshFCM clustering algorithm, that finds better cluster centers for a given data set than the cluster centers obtained by the psFCM. The computational performance of the proposed pshFCM algorithm is comparable with that of the psFCM and the FCM.
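Since the abstract does not describe the psFCM/pshFCM modifications, the sketch below only shows the standard fuzzy c-means iteration (membership and centre updates) on which such variants are built.

```python
# Sketch of the baseline fuzzy c-means (FCM) iteration underlying psFCM/pshFCM.
import numpy as np

def fcm(X, c=3, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)                # fuzzy memberships sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))               # inverse-distance memberships
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = np.vstack([np.random.randn(50, 2) + 5,
               np.random.randn(50, 2) - 5,
               np.random.randn(50, 2)])
centers, U = fcm(X)
print(np.round(centers, 2))
```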
Multilevel security requirements introduce a new dimension to traditional database schedulers because they can cause covert channels. To prevent covert channels, a scheduler for a multilevel secure database must ensure that transactions at a low security level are never delayed by high security level transactions in the event of a data conflict. A high security level transaction may therefore be subjected to an indefinite delay if it is forced to abort repeatedly in favour of low security level transactions, making the secure scheduler unfair towards high security level transactions. This paper proposes a secure database scheduler based on both optimistic and locking techniques (SO2PL) for multilevel secure distributed database systems. The proposed database scheduler is free from covert channels without starving the high security level transactions. Through a simulation study we evaluate the performance of SO2PL and compare it with the S2PL scheduler.
A challenging real-world task for the Web master of a Web site is to match user needs and keep users' attention on the site, because it is easy for Web users to reach competing sites with a single click. The only option for the Web master is therefore to serve pages that capture the intuition of the users, and Web usage mining (WUM) can be used for this purpose. A WUM system operates on Web server logs, which contain user navigation data; hence, it can be used to forecast user navigation and make recommendations to the user. However, the accuracy of intuition capturing still does not satisfy users, especially on huge sites with many topics of interest. To capture user intuition efficiently, we propose a two-tier intelligent-agent-based architecture for the WUM system and also suggest a GCD algorithm for classifying user navigation patterns. A practical implementation of the proposed architecture and algorithm shows that the accuracy of user intuition capturing is much improved.
The proposed approach combines color and texture features for content based image retrieval (CBIR). The color and texture features are obtained by computing the mean and standard deviation on each color band of the image and on each sub-band of different wavelets. The standard wavelet and Gabor wavelet transforms are used for decomposing the image into sub-bands. The retrieval results obtained by applying the color histogram (CH) + Gabor wavelet transform (GWT) to a 1000-image database demonstrate significant improvement in precision and recall compared to the color histogram (CH), wavelet transform (WT), wavelet transform + color histogram (WT + CH) and Gabor wavelet transform (GWT) alone.
Query estimation plays an important role in query optimization by choosing a particular query plan. Performing query estimation becomes quite challenging in case of fast, continuous, online data streams. Different summarization methods like sampling, histograms, wavelets, sketches, discrete cosine series etc. are used to store data distribution for query estimation. In this paper a brief survey of query estimation techniques in view of data streams is presented.
Since the Internet has become a huge repository of information, many studies address the issue of web page categorization. For web page classification, we want to find a subset of words that helps to discriminate between different kinds of web pages, so feature selection is introduced. In this paper, we study feature selection methods such as ReliefF and Symmetrical Uncertainty (SU). Since the high-dimensional text vocabulary space is one of the main challenges of web page classification, we use Hidden Naive Bayes (HNB), Complement class Naive Bayes and other traditional techniques for the classification step. Results on a benchmark dataset show that HNB performs more satisfactorily than the other methods and that SU is more competitive than ReliefF for selecting relevant words in web page categorization.
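The Symmetrical Uncertainty score used for word selection is SU(X, Y) = 2 I(X; Y) / (H(X) + H(Y)). The sketch below computes it for a discrete word-presence feature against the page class; the data are toy values for illustration.

```python
# Symmetrical uncertainty between a word-presence feature and the page class.
import math
from collections import Counter

def entropy(values):
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def symmetrical_uncertainty(x, y):
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))
    mutual_info = hx + hy - hxy
    return 2.0 * mutual_info / (hx + hy) if hx + hy else 0.0

word_present = [1, 1, 0, 0, 1, 0, 1, 0]
page_class   = ["sport", "sport", "news", "news", "sport", "news", "sport", "news"]
print(symmetrical_uncertainty(word_present, page_class))   # 1.0: perfectly informative word
```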
Feature subset selection is of immense importance in the field of data mining. The increased dimensionality of data makes testing and training of general classification methods difficult. This paper presents the development of a model for classifying the Pima Indian diabetes database (PIDD). The model consists of two stages. In the first stage, a genetic algorithm (GA) and correlation based feature selection (CFS) are used in a cascaded fashion: the GA performs a global search over attributes, with fitness evaluation carried out by CFS. In the second stage, fine-tuned classification is done using artificial neural networks, with the feature subset elicited in the first stage as input to the network. Experimental results signify that the feature subset identified by the proposed filter, when given as input to a back-propagation neural network classifier, leads to enhanced classification accuracy.
The goal of classification learning is to develop a model that separates the data into the different classes, with the aim of classifying new examples in the future. A weak learner is one which takes labeled training examples and produces a classifier which can label test examples more accurately than random guessing. When such a weak learner is used directly for a classification task, it may not give good prediction accuracy, owing to the limitations and simplicity of a single-classifier system. On the other hand, multiple-classifier systems, often known as ensemble-based systems, have been shown to produce favorable results compared to single-classifier systems. Boosting is one of the most important recent developments in ensemble systems; it works by sequentially applying a classification algorithm to re-weighted versions of the training data and then taking a weighted majority vote of the sequence of classifiers. Our experiments demonstrate the underlying weak learner's ability to achieve a fairly low error rate on the testing data, as well as the boosting algorithm's ability to reduce the error rate of the weak learner. In our experiment we have used a decision stump as the weak learner (classifier), and the boosting approach demonstrates an improvement in the classifier's accuracy.
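The following sketch shows an AdaBoost-style boosting loop with a one-split decision stump as the weak learner, matching the description above in spirit; the exact boosting variant and datasets used in the experiments are not specified in the abstract.

```python
# Boosting with a decision stump as the weak learner: each round fits a stump to
# re-weighted data; the final prediction is a weighted majority vote of the stumps.
import numpy as np

def fit_stump(X, y, w):
    best = None
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for sign in (1, -1):
                pred = np.where(X[:, f] <= thr, sign, -sign)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, f, thr, sign)
    return best

def adaboost(X, y, rounds=10):
    w = np.full(len(y), 1.0 / len(y))
    stumps = []
    for _ in range(rounds):
        err, f, thr, sign = fit_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(X[:, f] <= thr, sign, -sign)
        w *= np.exp(-alpha * y * pred)        # up-weight misclassified examples
        w /= w.sum()
        stumps.append((alpha, f, thr, sign))
    return stumps

def predict(stumps, X):
    score = sum(a * np.where(X[:, f] <= thr, s, -s) for a, f, thr, s in stumps)
    return np.sign(score)

X = np.random.randn(200, 2)
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
stumps = adaboost(X, y)
print((predict(stumps, X) == y).mean())       # training accuracy of the ensemble
```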
Advanced data mining technologies, along with the large quantities of remotely sensed imagery now available, provide a data mining opportunity with high potential for useful results. Extracting interesting patterns and rules from data sets composed of images and associated ground data is typically used to detect the distribution of vegetation, soil classes, built-up areas, roads and water bodies such as rivers and lakes. The availability of new high spatial resolution satellite sensors permits the acquisition of large amounts of detailed digital imagery of rural environments. In this paper an approach towards the automatic segmentation of a satellite image into distinct regions, and the further extraction of a tree count from the vegetative area, is presented. Counting trees in specific geographical areas is a very complicated process; nowadays manual counting is done by the forest department in both agricultural and forest regions. Image segmentation is a very important technique in image processing, but it is a very difficult task and there is no single unified approach for all types of images. In this paper, image processing techniques have been employed for the automatic segmentation of the satellite image and the extraction of trees from the segmented image.
The proper protection of personal information is increasingly becoming an important issue in an age where misuse of personal information and identity theft are widespread. At times, however, there is a need to use personal information in aggregated form for management or statistical purposes. The k-anonymization technique has been developed to de-associate sensitive attributes and anonymize the information to a point where identities and associated details cannot be reconstructed. The protection of personal information has manifested itself in various forms, ranging from legislation to policies such as P3P and information systems such as the Hippocratic database. Unfortunately, none of these provide support for statistical data research and analysis. The traditional k-anonymity technique proposes a solution to this problem, but determining which information can be generalized and which information needs to be suppressed is potentially difficult. In this paper we propose a new idea that integrates a personal information ontology with the concept of k-anonymity in order to overcome these problems. We demonstrate the idea with a prototype in the context of healthcare data management, a sector in which maintaining the privacy of individual information is essential.
Stock market analysis and prediction has been one of the most widely studied and interesting time series analysis problems to date. Researchers have employed many different models: some are based on linear statistics, while others are based on non-linear regression, rules, ANNs, GAs and fuzzy logic. In this paper we propose a novel model that tries to predict short-term price fluctuation using candlestick analysis, a technique that has been used for short-term prediction of stock price fluctuation and market timing for many years. Our approach is a hybrid one that combines a self-organizing map with case-based reasoning to identify profitable (candlestick) patterns and predict stock price fluctuation based on the pattern consequences.
In the light of developments in technology to analyze personal data, public concerns regarding privacy are rising. Often a data holder, such as a hospital or bank, needs to share person-specific records in such a way that the identities of the individuals who are the subjects of the data cannot be determined. Generalization techniques such as k-anonymity, l-diversity and t-closeness have been given as solutions to the problem of privacy breach, at the cost of information loss; only a very few papers have dealt with personalized generalization. All these methods, however, were developed to solve the external linkage problem resulting in sensitive attribute disclosure. It is very easy to prevent sensitive attribute disclosure by simply not publishing quasi-identifiers and sensitive attributes together, but the only reason to publish generalized quasi-identifiers and sensitive attributes together is to support data mining tasks that consider both types of attributes in the database. Our goal in this paper is to eliminate the privacy breach (how much an adversary can learn from the published data) and increase the utility (accuracy of data mining tasks) of a released database. This is achieved by transforming a part of the quasi-identifier and personalizing the sensitive attribute values. Our experiments, conducted on datasets from the UCI machine learning repository, demonstrate an incremental gain in data mining utility while preserving privacy to a great extent.
Utility based data mining is a new research area concerned with all types of utility factors in data mining processes and focused on integrating utility considerations into data mining tasks. A research area within utility based data mining, known as high utility mining, is aimed at finding itemsets that yield high utility. A well-known efficient algorithm for mining high utility itemsets from large transaction databases is the UMining algorithm. We present here a novel algorithm, fast utility mining (FUM), which finds all high utility itemsets within the given utility threshold. It is faster and simpler than the original UMining algorithm. The experimental evaluation on transaction datasets showed that our algorithm executes faster than the UMining algorithm, and exceptionally faster when more itemsets are identified as high utility itemsets and when the number of distinct items in the database increases. We have also suggested a novel method of generating different types of itemsets, namely High Utility and High Frequency itemsets (HUHF), High Utility and Low Frequency itemsets (HULF), Low Utility and High Frequency itemsets (LUHF) and Low Utility and Low Frequency itemsets (LULF), using a combination of the FUM and Fast Utility Frequent mining (FUFM) algorithms.
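The utility of an itemset is the sum, over the transactions containing it, of each item's quantity multiplied by its external (per-unit) utility. The sketch below computes this by brute force and filters itemsets against a minimum-utility threshold; it stands in for FUM only conceptually, since the algorithm's pruning strategy is not given in the abstract.

```python
# Brute-force high-utility itemset enumeration (conceptual stand-in for FUM).
from itertools import combinations

transactions = [            # item -> purchased quantity
    {"A": 2, "B": 1},
    {"A": 1, "C": 3},
    {"B": 2, "C": 1, "D": 1},
]
unit_utility = {"A": 5, "B": 10, "C": 1, "D": 8}   # e.g. profit per unit

def itemset_utility(itemset):
    total = 0
    for t in transactions:
        if all(i in t for i in itemset):
            total += sum(t[i] * unit_utility[i] for i in itemset)
    return total

def high_utility_itemsets(min_utility):
    items = sorted(unit_utility)
    result = {}
    for k in range(1, len(items) + 1):
        for itemset in combinations(items, k):
            u = itemset_utility(itemset)
            if u >= min_utility:
                result[itemset] = u
    return result

print(high_utility_itemsets(min_utility=20))
```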
Web usage mining is an important area that involves providing information to the user appropriately for quicker navigation to the desired Web page. In this research work, we apply Web usage mining for quicker navigation to the desired Web page. A supervised back propagation algorithm (BPA) has been applied to learn the Web pages navigated by different users in different sessions. Online training of the BPA is done while pages are browsed and, in parallel, online testing is done to suggest the next probable Web page to the user. The inputs to the BPA are the codified forms of the Web page IDs and the target outputs are the successive pages. The topology of the network used is 12 x 3 x 1. The log records used for collecting the details contain a minimum of 6 and a maximum of 12 visited Web pages. The performance of the BPA in predicting the next possible Web page is above 90%.
In this paper, an incremental framework for feature selection and Bayesian classification for the multivariate normal distribution is proposed. The feature set can be determined incrementally using the Kullback divergence and Chernoff distance measures, which are commonly used for feature selection. The proposed integrated incremental learning is computationally efficient compared with its batch mode in terms of time. The effectiveness of the proposed method has been demonstrated through experiments on different datasets. On the basis of these experiments, the new scheme is found to have power equivalent to its batch mode in terms of classification accuracy, while the integrated incremental learning is much faster than integrated batch learning.
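The Kullback divergence between two multivariate normal class models has a closed form that is convenient for feature-subset scoring. The sketch below evaluates it for two hypothetical class distributions; the incremental update scheme of the paper is not reproduced.

```python
# Kullback-Leibler divergence KL(N0 || N1) between two multivariate normals,
# a common class-separability measure for feature selection.
import numpy as np

def kl_divergence_gaussians(mu0, cov0, mu1, cov1):
    d = len(mu0)
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(cov1_inv @ cov0)
                  + diff @ cov1_inv @ diff
                  - d
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

mu0, cov0 = np.array([0.0, 0.0]), np.eye(2)
mu1, cov1 = np.array([1.0, 2.0]), np.diag([2.0, 0.5])
print(kl_divergence_gaussians(mu0, cov0, mu1, cov1))
```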
DBSCAN is a pioneering density based clustering algorithm. It can find clusters of different shapes and sizes in large amounts of data containing noise and outliers. However, the clusters it detects may contain large density variations, as it cannot handle the local density variation that exists within a cluster. For good clustering, some density variation may be allowed within a cluster, because strictly homogeneous clustering may generate a large number of small, unimportant clusters. In this paper we propose an enhanced DBSCAN algorithm which keeps track of the local density variation within a cluster. It calculates the density variance for any core object with respect to its ε-neighborhood. If the density variance of a core object is less than or equal to a threshold value, and the core object also satisfies the homogeneity index with respect to its ε-neighborhood, then the core object is allowed to expand. The experimental results show that the proposed clustering algorithm gives optimized results.
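A minimal sketch of the expansion test described above follows: a core object is expanded only if the variance of the local densities in its ε-neighbourhood stays within a threshold. The density proxy, names and thresholds are illustrative assumptions, not the paper's exact formulation.

```python
# Local density-variance check for a core object before expansion (illustrative).
import numpy as np

def local_density(X, idx, eps):
    d = np.linalg.norm(X - X[idx], axis=1)
    return np.sum(d <= eps)              # neighbourhood count as a density proxy

def allow_expansion(X, core_idx, eps, var_threshold):
    d = np.linalg.norm(X - X[core_idx], axis=1)
    neighbours = np.where(d <= eps)[0]
    densities = np.array([local_density(X, j, eps) for j in neighbours])
    return densities.var() <= var_threshold

X = np.vstack([np.random.randn(60, 2), np.random.randn(20, 2) * 0.2 + 4])
print(allow_expansion(X, core_idx=0, eps=0.8, var_threshold=25.0))
```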
Automated discovery of decision rules is a research area of significant importance, as the discovered rules improve the decision making process in various real world situations across a wide spectrum of application fields. The rough set framework supports automated discovery of decision rules and is particularly good at handling the vagueness and uncertainty inherent in decision making situations. Though rough set theory discovers high level symbolic decision rules (If-Then rules) which are comprehensible individually, it produces a large number of decision rules even for small datasets. A large set of rules may give high predictive accuracy, but it is not comprehensible in the sense that it fails on the important criterion of manual inspection to gain insight into the application domain. This paper proposes a post-processing scheme that takes the rules produced by rough set theory and organizes and summarizes them in the form of a rule + exceptions structure consisting of default/general rules and their corresponding exceptions. The proposed scheme not only suitably organizes the decision rules for manual inspection and analysis, it also makes them more accurate and interesting by discovering exceptions.
Knowledge management is rapidly evolving as a way for companies to become more competitive and intuitive. There are many instances when there is a need for urgent and relevant information. A task may have been executed or handled in the past by someone in the enterprise; the knowledge may be in a worker's memory or on some storage device lying remotely in the company office. It is crucial for the success of a business that such useful information is shared and traceable as and when required. The information may be distributed across multiple sources such as the Internet, intranets, portals, project servers and miscellaneous storage media, and such distributed databases may be used for OLAP, OLTP and business intelligence. A popular software development and services company, XYZ InfoSystems Limited, implemented a knowledge management infrastructure to improve productivity and growth benefits that could be effectively measured in terms of value and business.
The drastic development of the World Wide Web in recent times has made the concept of Web crawling receive remarkable significance. The voluminous amounts of Web documents swarming the Web have posed huge challenges to Web search engines, making their results less relevant to users. The abundance of duplicate and near-duplicate Web documents has created additional overheads for search engines, critically affecting their performance and quality. The detection of duplicate and near-duplicate Web pages has long been recognized as an important problem in the Web crawling research community, since search engines are required to provide users with relevant, non-redundant results for their queries on the first page. In this paper, we present a novel and efficient approach for the detection of near-duplicate Web pages in Web crawling, carried out before the crawled Web pages are stored into repositories. First, keywords are extracted from the crawled pages and the similarity score between two pages is calculated based on the extracted keywords. Documents whose similarity scores are greater than a threshold value are considered near duplicates. The detection results in reduced memory for repositories and improved search engine quality.
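The keyword-based similarity step can be illustrated as below, where keyword sets are extracted from two crawled pages and a Jaccard-style score is compared against a threshold; the actual keyword extraction and scoring formula of the paper may differ.

```python
# Keyword-based near-duplicate check for two crawled pages (illustrative scoring).
import re

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "for"}

def keywords(text):
    words = re.findall(r"[a-z]+", text.lower())
    return {w for w in words if w not in STOPWORDS and len(w) > 2}

def similarity(page1, page2):
    k1, k2 = keywords(page1), keywords(page2)
    return len(k1 & k2) / len(k1 | k2) if k1 | k2 else 0.0

def is_near_duplicate(page1, page2, threshold=0.8):
    return similarity(page1, page2) >= threshold

a = "Breaking news: the election results are announced in the capital today"
b = "The election results announced today in the capital - breaking news"
print(similarity(a, b), is_near_duplicate(a, b))
```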
The wide availability of electronic data has led to vast interest in text analysis, information retrieval and text categorization methods. To provide a better service, there is a need for non-English document analysis and categorization systems, as are currently available for English text documents. This study is mainly focused on categorizing Indic language documents; the main techniques examined include data pre-processing and document clustering. The approach makes use of a transformation based on the term frequency and the inverse document frequency, which enhances the clustering performance, and is based on latent semantic analysis, k-means clustering and Gaussian mixture model clustering. A text corpus categorized by human readers is utilized to test the validity of the suggested approach. The technique introduced in this work enables the processing of text documents written in Sinhala, and empowers citizens and organizations to do their daily work efficiently.
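A minimal version of the tf-idf + latent semantic analysis + clustering pipeline can be sketched with scikit-learn as below; the toy corpus is English and the preprocessing is generic, whereas the study itself targets Sinhala documents with its own pre-processing.

```python
# Minimal tf-idf -> LSA -> k-means pipeline on a toy corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

docs = [
    "cricket match won by the national team",
    "team scores in the final cricket match",
    "government announces new tax policy",
    "parliament debates the tax policy bill",
]

tfidf = TfidfVectorizer().fit_transform(docs)        # term frequency * inverse document frequency
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(lsa)
print(labels)    # sports and politics documents fall into separate clusters
```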
Data storage capacity has grown enormously compared with earlier times. When most computers were standalone and only their own users had access to the data, security was not a big concern. All this changed when computers became linked in networks, from small dedicated networks to large LANs, WANs and the World Wide Web. With the growth of networking, the security of data became a big issue: data passes through various networks, communication protocols and devices before ultimately reaching the user, which has made data security increasingly important. Security is becoming one of the most urgent challenges in database research and industry. Instead of building walls around servers, a protective layer of encryption should be provided around specific sensitive data items. This also allows us to define which data stored in databases are sensitive, thereby focusing protection only on the sensitive data, which in turn minimizes the delays or burdens on the system that may occur with other bulk encryption methods. This paper describes a new approach to securing numeric data in databases. It presents a practical solution to the problem that arises when encryption converts numeric data to an alphanumeric type, so that the encrypted data can no longer be stored in the existing numeric field. The proposed algorithm allows transparent record-level encryption that does not change the data field type or its fixed length.
A Knowledge Framework to Search Similar Disease Patterns using Data Mining is presented in this paper. The framework termed as Doctor's Desk is a customizable, general clinical diagnosis data capturing system in a parameterized format with support for preparing the data in a suitable format for data mining purposes. Here, two classification methods are investigated and analyzed for patient classification and similar disease patterns search based on clinical diagnosis data captured through the Doctor's Desk system.
The usage of XML data in the World Wide Web and elsewhere, as a standard for the exchange of data and for representing semi-structured data, has motivated the development of various tools and techniques for performing data mining operations on XML documents and XML repositories. In recent years, several encouraging methods have been identified and developed for mining XML data. In this paper, we present an improved framework for mining association rules from XML data using XQuery and a .NET based implementation of the Apriori algorithm.
Discretization turns numeric attributes into discrete ones, while feature selection eliminates irrelevant and/or redundant attributes. Data discretization and feature selection are two important tasks performed prior to the learning phase of data mining algorithms, and they significantly reduce the processing effort of the learning algorithm. In this paper, we present a new algorithm, called Nano, that performs data discretization and feature selection simultaneously. In the feature selection process, irrelevant and redundant attributes are eliminated using a measure of inconsistency, which also determines the final number of intervals. The proposed Nano algorithm aims at keeping the minimal number of intervals with minimal inconsistency, establishing a trade-off between these measures. The empirical results demonstrate that the proposed Nano algorithm is effective in feature selection and in the discretization of numeric and ordinal attributes.
Concurrency control plays an important role in advanced database management systems (ADMS), especially in computer aided design (CAD) with a knowledge base management system (KBMS). ADMS-related databases involve long transaction times, and it is uncertain when a transaction will be committed. In such situations, data edited by more than one user is preserved using version control, locking methods, or both. Long transactions can be better controlled by intelligent methods. In this work, CPN has been implemented for transaction control of a CAD database.
Recommender systems help to overcome the problem of information overload on the Internet by providing personalized recommendations to the users. Content-based filtering and collaborative filtering are usually applied to predict these recommendations. Among these two, Collaborative filtering is the most common approach for designing e-commerce recommender systems. Two major challenges for CF based recommender systems are scalability and sparsity. In this paper we present an incremental clustering approach to improve the scalability of collaborative filtering.
Web mining is an active research area at present. Web mining is defined as the application of data mining techniques on the World Wide Web to find hidden information. This hidden information, i.e. knowledge, could be contained in the content of Web pages, in the link structure of the WWW, or in Web server logs. Based upon the type of knowledge, Web mining is usually divided into three categories: Web content mining, Web structure mining and Web usage mining. An application of Web mining can be seen in the case of search engines: most search engines rank their search results in response to users' queries to make navigation easier. In this paper, a survey of page ranking algorithms and a comparison of some important algorithms in terms of performance are presented.
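As a concrete reference point for such surveys, the sketch below implements the basic PageRank power iteration over a toy link graph, the classic link-structure-based ranking algorithm that page ranking surveys typically include.

```python
# Basic PageRank power iteration over a small directed link graph.
def pagerank(links, d=0.85, iters=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new_rank = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            if not outs:                       # dangling page: spread rank evenly
                for q in pages:
                    new_rank[q] += d * rank[p] / n
            else:
                for q in outs:
                    new_rank[q] += d * rank[p] / len(outs)
        rank = new_rank
    return rank

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(pagerank(links))    # C accumulates the highest rank
```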
Association rules discovered by association rule mining may contain some sensitive rules, which may cause potential threats to privacy and security. In this paper, we address the problem of privacy preserving association rule mining by proposing a knowledge sanitization approach to prevent the disclosure of sensitive rules. We also present a parallel approach for mining non-sensitive and sensitive rules. The key feature of our proposed solution is to provide a high degree of parallelism while preserving privacy for the data owner.
Estimating geographic information from an image is an excellent, difficult high-level computer vision problem whose time has come; the emergence of vast amounts of geographically calibrated image data is a great reason for computer vision to start looking globally, on the scale of the entire planet. In this paper, we propose a correlated association rule based framework for querying an image database. The analysis is based on geographic location estimated from a single image using a purely data-driven feature extraction approach. We apply a specific set of traditional data mining techniques, such as association rule mining, to the non-traditional domain of image datasets. Image features are selected based on the positions of image objects, using a color histogram approach and line features, and correlation analysis is applied to the image datasets using association rules. A query model based on Query by Example (QBE) is proposed, and the technique is further improved using association rule based query mining.
Object-oriented modeling has become the de-facto standard in the software development process during the last decades. A great deal of research in this area focuses on proposing modeling languages. In order to properly understand and assess an object oriented modeling language, we believe that a set of criteria or requirements is needed. This paper presents a framework to investigate and compare graphical object oriented modeling languages, based on a requirement set for an ideal object-oriented modeling language.
UML statechart based models are being used extensively during software development to describe state behavior of components and systems. However, often designers produce state models that are essentially finite state machines rather than statecharts, lacking hierarchy and concurrency. Also for legacy code, reverse engineering efforts lead naturally to finite state machines rather than statecharts, necessitating a model transformation at a later stage. We propose a method to automatically convert an FSM model to an equivalent statechart model. Our experimental studies indicate that the statechart model results in significantly reduced structural complexity on the average compared to the original FSM models.
Data gathering is a vital operation in wireless sensor network applications, which necessitates energy efficiency techniques in order to increase the lifetime of the network. Clustering is an effective technique for improving energy efficiency and network lifetime in wireless sensor networks. This paper proposes an Energy Sorting Protocol (ESP) architecture which employs clustering. The objective of the ESP architecture for wireless micro-sensor networks is to achieve low energy dissipation and latency without sacrificing application-specific quality. The ESP accomplishes this by employing (i) randomized, adaptive, self-configuring cluster formation, (ii) localized control for data transfers and (iii) application-specific data processing, such as data aggregation or compression. The cluster formation algorithm permits each node to make autonomous decisions, so as to generate good clusters as the outcome. The simulation results illustrate that the algorithm also minimizes the energy and latency of cluster formation, thereby minimizing the protocol overhead.
RFID technology is increasingly adopted and deployed in real applications, owing to the fact that it offers momentous advantages compared to traditional object-tracking technologies. An RFID system consists of RFID readers with antennas, host computers, and transponders or RF tags (which are recognized by the readers). When multiple readers are employed, duplicate readings are generated, because multiple RFID readers may track a single item. To avoid this duplication, we restrict the mobility of readers and manage them, for example by disabling readers when there are no items in their coverage. In this paper, we develop a novel, efficient algorithm which manages the RFID readers effectively in order to avoid duplication of data and hence optimize the storage space.
An efficient, reliable sensor-to-sink data transport protocol should ensure that the sink can collect enough information while minimizing the energy consumption of data transport, and it should be designed to adjust the reporting rates of sources and to adapt to wireless communication conditions. We design a congestion control mechanism at the source which reacts based on the sum of node weights: each node adds its current weight to the weight received from its downstream node and passes this information toward the upstream node. In the end, the source receives the sum of all weight information from the corresponding downstream nodes and uses it to control the rate, and each sensor node transmits data at the adjusted rate. The sink node receives the time series for each sensor node. After collecting enough data, the sink node uses a clustering algorithm to partition the sensor nodes according to their sending rates and the similarity of the data obtained. It then sends the cluster information to all sensor nodes and requires the sensor nodes within the same cluster to work alternately to save energy; the nodes within a cluster adaptively enter an energy saving mode according to a random schedule. By simulation results, we show that our protocol achieves congestion control along with energy saving.
A combined algorithm is required to select an optimal set of sensors satisfying the conditions of coverage and connectivity. Since topological changes have a significant impact on coverage quality, there is a need for dynamic coverage maintenance algorithms. In this paper, an energy efficient distributed connected coverage (EEDCC) algorithm and a set of dynamic coverage maintenance (DCM) algorithms are proposed. EEDCC aims to reduce energy consumption with a lower communication overhead. The DCM algorithms aid in tracking changes in network topology, thereby addressing the issues related to dynamic coverage and loss recovery; they assist in the adaptive maintenance of coverage either by migrating sensor nodes or by updating their sensing radii accordingly. Simulation results show that the EEDCC-DCM algorithms attain a significant reduction in energy, with strong connectivity and coverage.
Medical diagnosis is considered an art regardless of all standardization efforts, largely because it requires an expertise in coping with uncertainty that is simply not found in today's computing machinery. Advances in computer technology have encouraged researchers to develop software to assist doctors in making decisions without requiring direct consultation with specialists. Comprehensibility is very significant for any machine learning technique to be used in computer-aided medical diagnosis; since an artificial neural network ensemble is composed of multiple artificial neural networks, its comprehensibility is worse than that of a single artificial neural network. For medical problems, neural network algorithms can give reasonably high-quality solutions. In this paper, the application of artificial intelligence to the diagnosis of Hepatitis B has been investigated. An intelligent system based on logical inference together with a generalized regression neural network is presented for the diagnosis: an expert system based on logical inference decides what type of hepatitis a patient may have, i.e. whether it is Hepatitis B or not, and artificial neural networks are then used to make predictions regarding Hepatitis B. The generalized regression neural network is applied to the hepatitis data to predict the severity level of Hepatitis B in the patient. The results obtained show that the generalized regression neural network can be successfully used for diagnosing Hepatitis B, and the outcomes suggest the role of effective diagnosis and the advantages of data training in a neural-network-based automatic medical diagnosis system.
A code validation tool for RISC microcontrollers, operating at the level of the machine instruction stream, is described. This represents a methodological approach to software debugging and code validation, where the source code may be created in assembly language or a high level language. The appropriateness of instructions, as well as their sequence in a program, is validated with the help of rules governing the occurrence of illegal instructions and code sequences for exercising the CPU and integrated peripheral functions. This is achieved through static analysis of the machine code by applying the formulated rules. The validation tool can be integrated into the system development environment to detect such errors without introducing any software or run-time overhead in the resulting code. A prototype based on PIC 16F87X microcontrollers has been developed. The algorithm can encompass a wide range of RISC processors, once appropriate rules are available for those processors.
The Classification-Tree Method (CTM) is widely used to generate test cases from functional specifications and is based on the idea of partition testing. A few works have addressed the merging of two classification trees. An existing method can generate test cases by combining two classification trees based on information from the system specification and information from the COTS specification. However, the construction of the combined classification tree is rather ad hoc and may vary from one test engineer to another. To overcome this problem, we provide all the basic cases for merging two classification trees, together with their formal notations.
Reliability is a very important parameter of application software [6]. The most straightforward restriction in most software reliability models is the assumption of statistical independence among the successive software factors considered [2]. Qualitative and quantitative measurement of software quality related aspects in all stages of software development is desirable, and any measurement in this line forms an element in the set of software quality metrics. Here the factors considered are size, effort, duration, S1 (tools use), S2 (logical complexity of the software), S3 (requirements volatility), S4 (quality requirements) and S5 (efficiency requirements) [4]. In this paper four different analyses are carried out by means of principal component analysis: the first with size as the predominant factor, the second with effort as the predominant factor, the third with duration as the predominant factor, and finally one including all three of these together with the remaining factors, in relation to software reliability performance. The analysis of the variables is intended to identify latent dimensions that can be considered in the phenomenon of performance correlation and to study their effects in the developed principal component analysis approach.
As hardware components become cheaper and more powerful day by day, the services expected from modern software are increasing rapidly, and developing such software has become extremely challenging. Not only the complexity, but also the development of such software within time and budget constraints, has become the real challenge; quality concerns and maintainability add to it. Meanwhile, client requirements change so frequently that managing these changes has become extremely tough, and more often than not the clients are unhappy with the end product. Large, complex software projects are notoriously late to market, often exhibit quality problems, and don't always deliver the promised functionality. None of the existing models adequately addresses the modern software crisis; hence, a better software development process model to handle the present software crisis is badly needed. This paper suggests a new software development process model, BRIDGE, to tackle the present software crisis.
In a software development process model, the reusability of elements can help reduce project management effort and allow systems to be developed in a very short period. This paper focuses on the consecutive tasks 'Domain Analysis', 'Package Analysis' and 'System Analysis' for reusability, to minimize the required technical effort in the development area. Domain analysis provides various methods for examining project domains to gain knowledge about previously developed projects that match the current project requirements. Package analysis depends upon the result of domain analysis when an existing domain is similar to the current problem domain: the team has to find the components or packages that can be reused to satisfy requirements in the new problem domain. When specific packages are selected for reuse, they should undergo various levels of integration testing to verify that they do not violate the non-functional requirements given by the customer. When these three analysis phases are completed with positive results, the reusable packages and related deliverables can be used in the current system development.
Software visualization encompasses the development and evaluation of methods for graphically representing different aspects of software, including its structure, its execution and its evolution. Creating visualizations helps the user to better understand complex phenomena, and the software engineering community considers visualization essential and important, though not a critical activity. In order to visualize the evolution of models in model-driven software evolution (MoDSE), this paper derives and constructs a framework, with key areas (views) and key features, for the assessment of the MoDSE process, addressing a number of stakeholder concerns. The framework is derived by applying the goal-question-metric paradigm, and its application is determined by considering the different roles of the stakeholders and their concerns. The framework is applied to state how each stakeholder might satisfy their concerns and gain knowledge about the models during evolution.
The function point analysis (FPA) method is the preferred scheme of estimation for project managers to determine the size, effort, schedule, resource loading and other such parameters. The FPA method by International Function Point Users Group (IFPUG) has captured the critical implementation features of an application through fourteen general system characteristics. However, non-functional requirements (NFRs) such as functionality, reliability, efficiency, usability, maintainability, portability, etc. have not been included in the FPA estimation method. This paper discusses some of the NFRs and tries to determine a degree of influence for each of them. An attempt to factor the NFRs into estimation has been made. This approach needs to be validated with data collection and analysis.
Redocumentation and design recovery are two important areas of reverse engineering. Detection of recurring organizations of classes and communicating objects, called software patterns, supports this process. Many approaches to detecting software patterns published in past years suffer from problems concerning the necessity of a reference library, performance, and language compatibility. This paper presents a model to solve these problems in software pattern detection. The proposed model removes the need for a reference library by detecting software patterns using formal concept analysis (FCA); it addresses performance by using the efficient CMCG (Concept-Matrix Based Concepts Generation) algorithm for the construction of the concept lattice, which is the core data structure of FCA; and it addresses language compatibility by using the language-independent meta-model MOOSE for taking the input information. The validity of the model is demonstrated in theory and by experiment.
The current software development approaches that support developers in providing enterprise-centric computing solutions have been falling short of expectations in handling issues such as changes in requirements, changing technologies, multiple platforms and platform interoperability. The model driven development (MDD) approach to software development is aimed at leveraging models to address these challenges. Model driven architecture (MDA), which is based on MDD and supported by UML, MOF and other standards, is fast becoming a dominant approach for software development. This paper attempts to provide a state-of-the-art review of MDA concepts and summarizes the various advantages and disadvantages of MDA.
Different network security solutions exist and contribute to enhanced security. Among these solutions, intrusion detection systems (IDS) have become one of the most common countermeasures for monitoring safety in computer systems and networks. However, IDS typically generate large volumes of alerts, many of which are false positives. In order to address these limitations, the paper presents a fast and efficient system, based on adaptive alert correlation, that classifies alerts into true positives and false positives and formulates more general alerts from the individual true positives.
Decades after the introduction and use of agile methodologies, project managers have realized that no single methodology is sufficient by itself. Merging their principles is thus the solution, yet no formal way of doing so has been proposed. Relying on previous work, ATT provides a mathematical model that acts as a tailoring tool to formulate a new agile method based on experienced agile methods and the project specifications. It requires project managers to understand the project requirements well in terms of SDLC phases, and the new agile methodology is tailored accordingly.
The quality of a Web application plays a major role in its success, and high-quality Web applications are only really possible by following a high-quality Web engineering process. The use of a strong Web application architecture on a strong development platform not only makes Web applications robust and of high quality but also gives them the ability to meet changing and demanding customer requirements in an efficient manner. The Model View Controller (MVC) design pattern has remained a fundamental architectural design pattern even after great evolution in the architectures of user-interactive applications. In this paper, we discuss the support for achieving quality attributes for Web applications, the support for the Web application development process, and the support for meeting the demanding features of Web applications on the Java EE platform. This contribution will help both small-scale and large-scale Web application development and the moulding of a Web application into a high-quality finished product right from the inception phase.
Requirements must be pertinent and aligned to business goals. Frequently, developed requirements meet only immediate concerns, and the resulting dissonance with business goals leads to wasted opportunities. Comprehensive models exist, but their rigors demand sustained stakeholder engagement and suit process-mature environments. The proposed approach supports early alignment of requirements with business goals in common business environments and is helpful in seeking early-stage stakeholder engagement for requirement validation. It leverages goal-oriented requirements engineering methods and is usable by diverse stakeholders.
Recent trends have indicated the increased use of object oriented technology not only for the design and development of traditional software but also for safety-critical software. There has been an ongoing effort to apply traditional, well documented and well tested hardware safety and reliability analysis techniques to software. Software failure modes and effects analysis (SFMEA) is one such technique, adopted from its hardware counterpart, failure modes and effects analysis (FMEA). Despite differences in operational failure modes between hardware and software, recent research has shown the usefulness of the technique in the software development process. This paper aims to: (i) highlight the application of SFMEA in the object oriented design process and (ii) use the results of the analysis obtained from the previous step at the implementation phase to improve the robustness of the code.