Technical Papers

Software as a service (SaaS) is a type of cloud service that delivers software over the Internet. SaaS is widely used and provides several benefits to service consumers. To realize these benefits, it is essential to evaluate the quality of SaaS, manage a relatively high level of quality based on the evaluation results, and verify the consistency of artifacts. Hence, there is a high demand for a quality model to evaluate SaaS cloud services. Conventional frameworks do not effectively define quality assurance strategies for SaaS development or a traceability framework for consistency verification. In this paper, we propose quality assurance strategies for SaaS development and a traceability framework in which all artifacts (data to trace) can be correlated and their consistency can be verified. Unless traceability is treated as an essential capability, the high-quality SaaS cloud services that consumers require cannot be provided. Finally, we describe our continuing efforts to characterize the inherent trade-off between effective quality assurance strategies for development and consistency verification of artifacts, a critical problem in SaaS cloud computing. The consistency of artifacts is validated using the Z formal specification language. Using the proposed framework, SaaS services of high quality can be developed efficiently and their consistency can be verified.
A cloud storage system enables data to be stored on cloud servers efficiently and lets users work with their data without worrying about resources. In existing systems, data are stored in the cloud using dynamic data operations with computation, which forces the user to keep a local copy for further updates and for verifying data loss. An efficient distributed storage auditing mechanism is planned that overcomes the limitations in handling data loss. In this paper, a partitioning method is proposed for data storage that avoids the need for a local copy at the user side. This method ensures high cloud storage integrity, enhanced error localization, and easy identification of misbehaving servers. To achieve this, remote data integrity checking is used to enhance the performance of cloud storage. Since data in the cloud are dynamic in nature, this work aims to store the data in reduced space with less time and computational cost.
A technique is proposed for the automatic detection of spikes in electroencephalograms (EEG). The important features of the raw EEG data are extracted using two methods: wavelet transform and energy estimation. These data are normalized and given as input to a neural network trained using the backpropagation algorithm. Energy estimation is used as an amplitude threshold parameter. The wavelet transform (WT) is a powerful tool for multi-resolution analysis of non-stationary signals as well as for signal compression, recognition, and restoration; Daubechies 4 is used as the mother wavelet. The detail coefficients of wavelet decomposition levels 1, 2, and 3 and the energy estimation parameters are given as input to the neural network in order to detect spikes. The code is written in C and implemented on a Texas Instruments TMS320C5410-100 processor board with a Del Mar PWA EEG amplifier, and the waveforms are observed in MATLAB. The effectiveness of the proposed technique was confirmed with EEG data.
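As a rough illustration of the feature-extraction step described in this abstract, the sketch below is a minimal example (assuming PyWavelets, a 1-D EEG window, and a hypothetical `extract_features` helper; window length and normalization are placeholders, not values from the paper): it computes the Daubechies-4 detail coefficients for levels 1-3 and a signal-energy estimate, then normalizes them as a neural-network input vector.

```python
# Hypothetical sketch: wavelet-detail and energy features for EEG spike detection.
import numpy as np
import pywt

def extract_features(eeg_window):
    """Return normalized detail-energy features (levels 1-3) plus a signal-energy estimate."""
    # Daubechies-4 mother wavelet, 3 decomposition levels -> [cA3, cD3, cD2, cD1]
    coeffs = pywt.wavedec(eeg_window, 'db4', level=3)
    details = coeffs[1:]                                # cD3, cD2, cD1
    detail_energy = np.array([np.sum(d ** 2) for d in details])
    signal_energy = np.sum(np.asarray(eeg_window, dtype=float) ** 2)   # amplitude-threshold feature
    features = np.append(detail_energy, signal_energy)
    return features / np.max(features)                  # normalize before feeding the neural network

# Example: a 1-second window sampled at an assumed 256 Hz
window = np.random.randn(256)
print(extract_features(window))
```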
The problem of handling imprecision, vagueness, and uncertainty in data has been studied for a long time by philosophers, logicians, and mathematicians. Recently, many approaches have been explored to understand and manipulate imprecise knowledge, the most successful being the fuzzy set theory proposed by Zadeh. Rough set theory is a relatively new mathematical approach to decision making on data characterized by imprecision, vagueness, and uncertainty. This paper examines how the measure of roughness of information granules can serve as a tool for manipulating the useful knowledge hidden in uncertain and imprecise data, and how it may provide precise knowledge about an information system.
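For readers unfamiliar with the roughness measure mentioned here, the following sketch is an illustrative toy example (not code from the paper): it computes the lower and upper approximations of a set from the information granules induced by a single attribute and returns roughness as one minus the accuracy of approximation.

```python
# Toy illustration of roughness of a set X over the granules induced by one attribute.
from collections import defaultdict

def roughness(universe, attribute, X):
    """attribute: dict object -> value; X: target subset of the universe."""
    granules = defaultdict(set)
    for obj in universe:
        granules[attribute[obj]].add(obj)                             # indiscernibility classes
    lower = set().union(*(g for g in granules.values() if g <= X))    # certainly in X
    upper = set().union(*(g for g in granules.values() if g & X))     # possibly in X
    accuracy = len(lower) / len(upper) if upper else 1.0
    return 1.0 - accuracy                                             # 0 = crisp set, near 1 = very rough

U = {1, 2, 3, 4, 5, 6}
attr = {1: 'a', 2: 'a', 3: 'b', 4: 'b', 5: 'c', 6: 'c'}
print(roughness(U, attr, X={1, 2, 3}))    # lower = {1,2}, upper = {1,2,3,4} -> roughness 0.5
```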
Task scheduling is the most challenging work in a grid environment. Scheduling approaches are based on various criteria such as Quality of Service (QoS), deadline, security, task arrival time, sufferage value, etc. It is difficult to obtain an optimal solution with a unified approach. In this paper, we propose a threshold-based technique to decide the assignment of tasks to machines. It also defines a value used to balance the load. Our experimental analysis shows better results than traditional task scheduling algorithms such as Minimum Execution Time (MET), Minimum Completion Time (MCT), Min-Min, Max-Min, K-Percent Best (KPB), and the Sufferage heuristic.
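The abstract does not give the exact threshold rule, so the sketch below is only an assumed illustration of the idea: each task goes to its minimum-completion-time machine unless that machine's ready time exceeds a load threshold relative to the average, in which case the least-loaded machine is chosen instead.

```python
# Illustrative threshold-based assignment rule (assumed, not the paper's exact heuristic).
def schedule(tasks, exec_time, n_machines, threshold=1.25):
    """tasks: list of task ids; exec_time[t][m]: execution time of task t on machine m."""
    ready = [0.0] * n_machines                         # machine ready (finish) times
    assignment = {}
    for t in tasks:
        completion = [ready[m] + exec_time[t][m] for m in range(n_machines)]
        best = min(range(n_machines), key=completion.__getitem__)
        avg_ready = sum(ready) / n_machines or 1.0     # avoid a zero threshold at start
        if ready[best] > threshold * avg_ready:        # machine overloaded -> balance the load
            best = min(range(n_machines), key=ready.__getitem__)
        assignment[t] = best
        ready[best] += exec_time[t][best]
    return assignment, max(ready)                      # assignment and makespan

exec_time = {0: [3, 5], 1: [2, 4], 2: [6, 1]}
print(schedule([0, 1, 2], exec_time, 2))
```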
Applications submitted to the cloud middleware are distributed to Cloud Service Providers (CSPs) based on the CSPs available in the cloud environment. To categorize the CSPs, this work introduces a concept for finding the optimal CSP based on a rough set approach. IaaS provides a large amount of computational capacity to users in a flexible and efficient way. Various CSPs are available in the market; for example, Amazon's Elastic Compute Cloud offers virtual machines at 0.1 US dollars per hour, while Google's compute cloud offers virtual machines at 0.5 US dollars per hour, so cloud users need a rating of the various CSPs. In this research work, we propose an approach to rate CSPs based on the internal performance of their datacenters and virtual machines. At present, the number of cloud service providers is increasing drastically day by day. In this scenario, existing service-provider scheduling needs a mechanism that supplies information about the optimal service providers to the service request scheduler (SRS); using this information, the SRS can allocate each service to the respective optimal service provider. In this paper, we also study the problem of dynamic request allocation and scheduling for context-aware applications deployed in geographically distributed data centers forming a cloud.
Clock synchronization is one of the most basic building blocks for many applications in a distributed system. Synchronized clocks are particularly important because they can be used to improve the performance of a distributed system. The purpose of clock synchronization is to provide the constituent parts of a distributed system with a common notion of time. There are several algorithms for maintaining clock synchrony in a distributed multiprocessor system where each processor has its own clock. In this paper, we consider the problem of clock synchronization with bounded clock drift. We propose a clock synchronization algorithm that performs two-level synchronization to synchronize the local clocks of the nodes and also exhibits fault-tolerant behavior. Our algorithm is a combination of external and internal clock synchronization.
The quality of scheduling has a strong impact on overall application performance because of process and data affinities. This issue is becoming critical due to the variable memory access latencies in NUMA (Non-Uniform Memory Access) architectures: since local data access is significantly faster than remote access, data locality emerges as a critical criterion for scheduling threads and processes, and it becomes important to be able to migrate memory together with the tasks that access it. To perform memory migration, we present an on-demand memory migration policy that enables automatic dynamic migration of pages at low cost when they are actually accessed by a task. We set up PTE flags with the help of the madvise system call and add corresponding Copy-on-Touch code in the page-fault handler, which allocates the specific page near the accessing task.
Proxy caching is used to enhance the performance of user access to popular web content. Many systems use a multilevel cache for better performance. A multilevel cache generally operates by checking the smallest (L1) cache first; if a miss occurs there, the next larger (L2) cache is checked. This paper considers the L1 cache as the primary memory and the L2 cache as the secondary memory of a proxy server, with the LRU page replacement algorithm. It proposes a new way to enhance proxy server performance by arranging the cache content in a special manner: the L1 cache stores its own web pages as well as references to the web pages held in the L2 cache. This reduces the average memory access time of the proxy server.
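A minimal sketch of the arrangement described, under assumptions not stated in the abstract (in-memory dictionaries standing in for the proxy's primary and secondary memory, and references treated as lightweight metadata that do not count against L1 page capacity):

```python
# Assumed data structures, not the paper's implementation: an L1 cache that holds its own
# pages plus lightweight references to pages stored in L2, with LRU eviction at both levels.
from collections import OrderedDict

class TwoLevelProxyCache:
    def __init__(self, l1_size, l2_size):
        self.l1 = OrderedDict()    # url -> page (primary memory)
        self.l2 = OrderedDict()    # url -> page (secondary memory)
        self.l2_refs = set()       # references kept in L1 to pages that live in L2
        self.l1_size, self.l2_size = l1_size, l2_size

    def get(self, url):
        if url in self.l1:                      # L1 hit
            self.l1.move_to_end(url)
            return self.l1[url]
        if url in self.l2_refs:                 # reference hit: go straight to L2, skip a probe
            self.l2.move_to_end(url)
            return self.l2[url]
        return None                             # miss in both levels

    def put(self, url, page):
        if len(self.l1) >= self.l1_size:        # LRU eviction from L1, demote the page to L2
            old_url, old_page = self.l1.popitem(last=False)
            if len(self.l2) >= self.l2_size:
                evicted, _ = self.l2.popitem(last=False)
                self.l2_refs.discard(evicted)
            self.l2[old_url] = old_page
            self.l2_refs.add(old_url)           # remember in L1 where the page went
        self.l1[url] = page
        self.l2_refs.discard(url)
```

On a hit to a referenced page, the lookup goes straight to L2 without a separate L1-then-L2 probe, which is the effect the abstract credits with lowering the average memory access time.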
The current paper is an attempt to integrate privacy preservation with privacy leak detection in the context of text mining. This may be considered an extension of the Storage-as-a-Service feature of cloud computing, wherein a user in the role of content creator submits documents to be stored. A facility exists to cluster these documents based on a concept-based mining algorithm, and the clustered documents are made available in the form of a tree. When a user in the role of subscriber requests access to a document, the request must pass through an authentication procedure based on the Leakage-Free Redactable Signature Scheme. Access control information is maintained in the form of a cloud user access control list. A privacy leak detection module, which detects privacy leaks based on the pattern of previous leaks, is also proposed. This information is then used to update the cloud user access control list, and users responsible for privacy leaks are prevented from accessing the cloud service. The document being requested, together with the information from the access control list, is used to decide which parts of the redacted trees are made available to the user as a response. Thus, the combination of the authentication procedure and privacy leak detection can be used to ensure the privacy of the sensitive information stored by the user in the cloud.
In the agriculture industry, plants are prone to diseases caused by pathogens and environmental conditions, which is a prime cause of revenue loss. Overcoming this problem requires continuous monitoring of plants and environmental parameters. A mobile robotic system for monitoring these parameters over a wireless network has been envisaged here and developed on an ARM-Linux platform. The robotic platform consists of an ARM9-based S3C2440 processor from Samsung running the Linux kernel, a motor driver, and the robot's mechanical assembly. Farm environment and plant conditions such as temperature, humidity, and soil moisture content are continuously monitored through a suitable data acquisition system incorporated in the robot. A servo-motor-based robotic arm is designed for collecting soil samples and testing various soil parameters. A closed-loop feedback algorithm based on a digital PID controller has been developed for precise position and speed control of the mobile robot. Wireless control of the mobile robot and acquisition of the monitored data are accomplished using the ZigBee wireless protocol. A graphical user interface for displaying the acquired data on the host system is designed using the Qt Creator framework. For independent functioning of the mobile robot, the application program is written in C, cross-compiled with the arm-linux-gcc compiler on Ubuntu 10.04, and ported to the memory of the ARM processor.
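A compact sketch of a closed-loop digital PID law of the kind used for position and speed control (gains, sample time, and the toy plant model below are placeholders, not values from the paper):

```python
# Hedged sketch of a discrete (digital) PID controller for motor position/speed control.
class DigitalPID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # control output drives the motor (e.g. a PWM duty cycle) toward the setpoint
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = DigitalPID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
position = 0.0
for _ in range(5):
    command = pid.update(setpoint=1.0, measurement=position)
    position += 0.01 * command      # crude plant model, for illustration only
    print(round(position, 4))
```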
Conventional deadline- and budget-constrained scheduling heuristics for economics-based computational grids do not consider the goodwill of resources, which can lead to increased application cost and execution time. This paper proposes a new scheduling algorithm that considers not only resource cost and time but also goodwill when resources are selected. A resource's goodwill is calculated from its past performance. Goodwill is a broader and more representative characteristic than reliability for selecting a resource for execution: the goodwill criterion takes into account the “on time performance” of a resource calculated on a real-time basis, whereas the reliability criterion relies on a predefined static value set by the system designer. The GridSim toolkit, a standard workload model, and a standard resource configuration were used to simulate the grid environment and the application. The experimental results show that the average overall time and cost of parallel tasks are lower than those of existing approaches.
As computer and information technologies advance, intelligent computing plays an increasingly important role in industrial management. However, new phenomena continue to emerge in the evolution of many industries, limiting the effectiveness of conventional systems approaches as a basis for intelligent computing applications. It therefore becomes critical to recognize the nature and manifestation of a system of systems (SoS) environment and to develop a corresponding pathway to a holistic intelligent computing methodology. This paper discusses the intelligent computing process and suggests that the key challenges in a system-of-systems environment include SoS complexity, indistinct component boundaries, hidden system processes, holistic emergent behaviors, uncertainty, and unpredictability. Furthermore, taking tourism as an example of a typical complex industrial SoS, a technical system-of-systems engineering framework is presented as a basis for industrial analysis and the policy-making process. The framework focuses on: 1) SoS virtualization through geographical intelligent computing; 2) SoS process simulation using network algorithms; and 3) SoS evaluation through a progressive analysis approach. This integrated methodology is to be tested in an empirical study. Though SoS engineering remains full of challenges and opportunities, exploring its role and its coupling with intelligent computing in the complex task of managing industrial systems will have a tremendous influence on the management discipline.
Load balancing algorithms are used to distribute load among the various nodes in a distributed system to improve resource utilization and request response time. These algorithms mainly address the situation where one node is heavily loaded while others are idle, causing requests to fail. Many load balancing algorithms have been proposed for distributed and grid environments, but they do not take into consideration the trust and reliability of the datacenter. In this paper, a trust model is proposed, based on an existing model, that is suitable for managing trust values for cloud IaaS parameters. Based on the resulting trust values, a load balancing algorithm is proposed for better distribution of load, which further enhances the QoS of the services provided to users.
The cloud computing platform has become one of the most significant computing platforms in recent years. It dynamically allocates, deploys, redeploys, and cancels cloud services on the basis of demand. However, cloud computing inevitably poses new security challenges because the traditional security mechanisms being followed are insufficient to safeguard cloud assets. Recently, a time-bound ticket-based mutual authentication scheme was proposed for cloud computing. This paper shows that the scheme is vulnerable to a Denial-of-Service attack and has an insecure password change phase. To overcome these security pitfalls, an enhanced scheme is proposed. A performance comparison shows that the enhanced scheme is efficient.
The growing demand for computation, the large data storage needed to run a high-performance computing enterprise, and high-dimensional-data web applications increase the energy and power consumed by large infrastructures. Cloud computing provides a solution, as part of the Green IT initiative, to reduce adverse environmental impacts and save energy. Our paper describes the important metrics of cloud computing that make it greener. We discuss various power and energy models and identify the major challenges in building a model for a green cloud. We also discuss ways to reduce power and energy in cloud computing services. Our work surveys the various models and helps in understanding the road map toward a greener cloud.
The present paper focuses on privacy preserving techniques. Cloud computing is not a new technology; rather, it is a new way of delivering technology, which providers deliver in the form of services. In the computing field, security is the main concern blocking the tremendous growth of cloud computing, and it has become a huge debate area worldwide due to breaches of users' valuable information. We identify the weak service bonding of cloud providers in maintaining and preserving users' secrecy and their failure to establish a universal service level agreement. This paper focuses on a privacy mitigation methodology, proposing a privacy-preserving algorithmic approach to address the privacy issue and preserve one's confidential data stored in the cloud.
This paper highlights the extent of unknown users' access to web service users' data, and the resulting concern about data protection in cloud users' minds. Once data space in the cloud is hired, it is the responsibility of both parties to keep the stored information private and to preserve it securely. Academicians and researchers have observed that a large number of privacy breaches are dealt with globally, making this one of the hottest research topics. This work targets data privacy and its preservation by proposing an evolutionary approach to safeguard confidential data stored in the cloud. It also focuses on a study of users' privacy needs and on keeping data protected from intermediate digital data thieves.
In the next five years, people around the globe will choose open-source deployments not just because they cut costs but also because they help avoid vendor lock-in. Our research paper gives an insight into using open-source IaaS to set up your own public, private, or hybrid cloud, the reason being that it delivers value to your enterprise. Comparing these three open clouds will help researchers and other users decide which one is the better option for their enterprise.
Cloud computing is the most envisioned paradigm shift in the computing world, and its services are being applied in several IT scenarios. This unique platform has brought new security issues to contemplate. This paper proposes a homomorphic encryption scheme based on elliptic curve cryptography. It implements a provable data possession scheme to support dynamic operations on data. The application of a proof-of-retrievability scheme allows the client to challenge the integrity of the stored data. The notion of a third-party auditor (TPA), who verifies and modifies the data on behalf of the client, is considered. Data storage at the server is organized using a Merkle hash tree (MHT), enabling faster data access. The proposed scheme not only checks data storage correctness but also identifies misbehaving servers. Initial results demonstrate its effectiveness as an improved security system for data storage compared to existing ones in most respects.
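To make the MHT role concrete, here is a small sketch (an assumed textbook-style construction, not necessarily the paper's exact tree layout) that builds a Merkle root over data blocks; the client or TPA keeps only the root and later checks any block against the sibling hashes supplied by the server.

```python
# Illustrative Merkle hash tree root over data blocks, for integrity verification.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Return the MHT root over a list of data blocks (bytes)."""
    level = [sha256(b) for b in blocks]          # leaf hashes
    while len(level) > 1:
        if len(level) % 2:                       # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-0", b"block-1", b"block-2", b"block-3"]
root = merkle_root(blocks)
# The verifier stores only `root`; a possession proof for a block consists of that block
# plus its sibling hashes along the path to the root.
print(root.hex())
```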
The wide adoption of cloud computing is raising several concerns about the treatment of data in the cloud. Cloud computing is a buzzword nowadays, yet it is still in its infancy in terms of implementation at all levels due to the limitations it suffers. Most security schemes require a basic level of trust between the data owner and the cloud provider; when this trust is breached, intentionally or unintentionally, it is the data and its owner that suffer. Thus, we suggest a scheme in which trust in the service provider is not required and the security of the data is solely in the control of the data owner. It mainly comprises a tool that allows the owner of the data to decide the access rights to his or her data, handle revocation if any, and receive notification if any security breaches take place. The scheme also allows a user to search files in an encrypted database with the help of ranked keyword search, which is an improvement over conventional searching techniques.
We consider scheduling of a bag of independent mixed tasks (hard, firm, and soft) in a distributed dynamic grid environment. Recently, a general distributed scalable grid scheduler (GDS) for independent tasks was proposed to maximize the successful schedule percentage in an error-free grid environment. However, GDS does not consider constraint failure of a task during execution due to resource overload, which limits the successful schedule percentage. In this paper, we propose a novel distributed dynamic grid scheduler for mixed tasks (DDGS-MT), which takes into consideration the constraint failure of tasks during execution due to resource overload. The proposed scheduler incorporates migration and resume fault-tolerance mechanisms for computation-intensive and communication-intensive tasks, respectively. It shows improved performance in terms of successful schedule percentage and makespan in comparison with GDS. The results of our exhaustive simulation experiments demonstrate the superiority of DDGS-MT over the GDS scheduler.
Cloud computing has become a promising and popular paradigm, presenting its popular service models of infrastructure as a service, platform as a service, and software as a service. Nowadays there is a challenge in building scalable, available, and consistent cloud data stores. In this paper, data partitioning approaches, both static and dynamic, are discussed. The main objective of this paper is to bring out the issues in the area of database scalability and to discuss the opportunities open for research. A taxonomy of data management in the cloud is proposed, and a design model for the partitioning scheme is introduced.
With the development of cloud computing, security and trust management in distributed systems is changing. This paper proposes an architecture and solution for security and trust management in distributed systems that use cloud computing. Since the solution requires future technologies, an optimization of the security and trust management, including multi-path transmission, virtual personal networks, and encryption, is proposed. This optimization will enhance security and trust in distributed systems.
Protocols for wireless sensor networks are based upon the application for which the network is deployed. Individual node characteristics may also lead to different network characteristics and protocols. A centrality measure defines the importance and role of a node within a network and helps in quantifying its relationship with its neighboring nodes. Hence, centrality measures may help in resolving various inherent problems and in developing efficient algorithms for wireless sensor networks. In this paper, we present various graph centrality measures that are applicable to wireless sensor networks and discuss their utility and importance. Further, we propose a new graph centrality measure for WSNs, named cluster centrality.
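The standard centralities surveyed here can be computed directly with NetworkX, as in the sketch below over a toy connectivity graph (an assumed topology; the proposed cluster centrality itself is not reproduced here).

```python
# Degree, closeness, and betweenness centrality on a toy WSN connectivity graph.
import networkx as nx

# Nodes are sensors, edges are radio links (assumed topology, not from the paper).
G = nx.Graph([(1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (4, 6)])

degree      = nx.degree_centrality(G)
closeness   = nx.closeness_centrality(G)
betweenness = nx.betweenness_centrality(G)

for node in G.nodes:
    print(node, round(degree[node], 2), round(closeness[node], 2), round(betweenness[node], 2))
```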
In application-based WSN environments, the energy and bandwidth of the sensors are valuable resources and need to be utilized efficiently. Aggregating data at the sink from individual nodes causes flooding of data, which results in maximum energy consumption. To minimize this problem, we propose and evaluate a group-based data aggregation method, in which grouping nodes based on available data and correlation within a cluster, and grouping cluster heads at the network level, helps reduce energy consumption. In addition, the proposed method uses additive and divisible data aggregation functions at the cluster head (CH) as in-network processing to reduce energy consumption. Nodes transmit their data to the CH, and the CH transmits the aggregated information to the remote sink. Simulation results show that the proposed algorithm provides an improvement of 14.94% in energy consumption compared with the basic cluster-based protocol LEACH, which uses only one CH, and it also improves network stability.
In this study, two novel time-sample cross-correlation-based power attacks, one using a novel voting mechanism and one using a novel multi-reference-bit mechanism, are introduced. The two attack methods are applied to a Montgomery Ladder (ML) implementation of the RSA algorithm. In the target ML implementation, the use of operands from different locations, depending on whether consecutive exponent bits toggle, is the source of vulnerability. To retrieve the bit type (toggling or not toggling of consecutive bit values) of the secret key, cross-correlation values between the power trace of a fixed reference bit and the power traces of the remaining bits of the secret key are calculated. In the first method, for each key bit, if this cross-correlation value is greater than a threshold, the bit is labeled as the same type as the reference bit, otherwise as the opposite type, and the corresponding scores are increased. This procedure is repeated for each RSA run. As the number of power traces used increases, a voting mechanism is applied to the scores gathered from each RSA run to decide the final type of each key bit. By applying this method, the type of 970 bits of the 1024-bit RSA key could be retrieved correctly, and the locations of the 54 wrongly estimated bit positions can be found by examining the corresponding scores of those bits. In the second method, sums of correlation values, instead of scores, are used to decide the type of each bit; with this method the type of all 1024 key bits could be estimated correctly. It is also shown that the second attack method can be improved by using multiple reference bits together, which makes the method more flexible. Neither of the attack methods is affected by message-blinding or modulus-blinding countermeasures. For a successful attack of these types, the positions of the square and multiply operations related to each key bit must be known. However, exponent blinding can be used as a countermeasure.
GPS is currently a very popular device for tracking and navigation, but GPS data can be used for several other types of applications, such as analyzing trips and elevation profiles. There are many GPS data formats, and different GPS receivers support different formats, each with its own advantages and disadvantages. GPS receivers provide many types of information in various formats such as binary and RINEX; NMEA is one of them. In this work, we develop an NMEA interpreter for logging GPS data containing several types of information such as latitude, longitude, time, speed, elevation, and PDOP. The project is developed in VB.NET.
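For illustration only (in Python rather than the paper's VB.NET implementation), here is a parser for one common NMEA sentence type, $GPGGA, which carries the time, latitude, longitude, satellite count, HDOP, and altitude fields mentioned above.

```python
# Minimal $GPGGA parser: converts NMEA degree/minute fields to decimal degrees.
def dm_to_decimal(value, hemisphere):
    """Convert NMEA ddmm.mmmm / dddmm.mmmm to signed decimal degrees."""
    degrees = int(float(value) / 100)
    minutes = float(value) - degrees * 100
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in ('S', 'W') else decimal

def parse_gga(sentence):
    fields = sentence.split('*')[0].split(',')   # strip the checksum, split the fields
    if not fields[0].endswith('GGA'):
        return None
    return {
        'time':       fields[1],
        'latitude':   dm_to_decimal(fields[2], fields[3]),
        'longitude':  dm_to_decimal(fields[4], fields[5]),
        'satellites': int(fields[7]),
        'hdop':       float(fields[8]),
        'altitude_m': float(fields[9]),
    }

print(parse_gga("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"))
```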
In today's competitive world, maintaining the security of information is a must. Nowadays, the LAN is generally the most common and popular network domain, and network domains are always prone to a number of network attacks. One of the most dangerous forms of such attacks is ARP cache poisoning, also known as ARP spoofing. ARP is a stateless protocol, and ARP spoofing takes place mainly because ARP lacks any mechanism for verifying the identity of the sending host. It has been seen that most LAN attacks result from ARP spoofing, so prevention, detection, and mitigation of this problem can stop a number of network attacks. ARP spoofing is the act of maliciously changing the IP-MAC associations stored in the ARP cache of any network host. In this paper, we propose a probe-based technique with an Enhanced Spoof Detection Engine (E-SDE) that not only detects ARP spoofing but also identifies the genuine IP-MAC association. ARP and ICMP packets are used as probe packets. The working of the E-SDE is explained with the help of an algorithm. We also propose an attacking model to clearly show the incremental development of the E-SDE so that it works effectively against most types of attackers, and we measure the network traffic added by the proposed technique.
The rapidly growing demand for wireless transmission of video, speech, and data is driving communication technology to be more efficient and more reliable. MIMO has become one of the key technologies for wireless communication systems; it constitutes a breakthrough that offers a number of benefits that help improve the reliability of the data link. There are various techniques to improve the data rate, and one promising transmit diversity scheme is the Alamouti space-time block code. It has been regarded as an effective transmit diversity technique in existing wireless communication systems for its compatibility with orthogonal two-transmit-antenna systems and its simple maximum likelihood (ML) decoding scheme. In this paper, an approach to improve the data rate over multiple-antenna channels for reliable communication is presented. Unlike most existing techniques that aim to achieve full diversity and full rate, we aim to increase the data rate over the channel using the Dent channel model, exploiting time and space diversity simultaneously to improve the performance of the system under a mobile radio channel. Furthermore, simulations and analysis are carried out using Dent's channel model in terms of BER, and a coding method is suggested for 2×2 Alamouti STBC MIMO systems. Although the two-transmit-antenna technique is the main focus of this paper, the same idea can be applied directly to Alamouti STBC codes with more than two transmit antennas.
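A minimal numerical sketch of the underlying 2×1 Alamouti scheme (assuming flat Rayleigh fading, perfect channel knowledge, and QPSK; the Dent channel model and the paper's 2×2 coding method are not reproduced here):

```python
# 2x1 Alamouti STBC: encode over two symbol periods, combine linearly, detect by nearest symbol.
import numpy as np

rng = np.random.default_rng(0)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def alamouti_2x1(s1, s2, snr_db=10):
    h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)   # h1, h2
    noise_std = np.sqrt(10 ** (-snr_db / 10) / 2)
    n = noise_std * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
    # two symbol periods: transmit (s1, s2), then (-s2*, s1*)
    r1 = h[0] * s1 + h[1] * s2 + n[0]
    r2 = -h[0] * np.conj(s2) + h[1] * np.conj(s1) + n[1]
    # Alamouti combining
    y1 = np.conj(h[0]) * r1 + h[1] * np.conj(r2)
    y2 = np.conj(h[1]) * r1 - h[0] * np.conj(r2)
    # ML detection reduces to choosing the nearest (channel-gain-scaled) constellation point
    detect = lambda y: qpsk[np.argmin(np.abs(qpsk * np.sum(np.abs(h) ** 2) - y))]
    return detect(y1), detect(y2)

s1, s2 = qpsk[0], qpsk[3]
print(alamouti_2x1(s1, s2))
```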
With the growing sophistication of computer worms, information security has become a prime concern for individuals, communities, and organizations. Traditional signature-based IDSs, though effective for known attacks, fail to handle unknown attacks promptly. This paper describes a novel honeypot system that captures worms based on their characteristic of self-replication. We introduce a combination of unlimited and limited outbound connections to capture different payloads of single or multiple worms. The proposed system isolates the suspicious traffic, effectively controls the malicious traffic, and captures the most useful information regarding the worm's activities without the attacker's knowledge. Our system can be used for critical study of the structure and behavior of the most sophisticated worms and then forwards the necessary input to a Signature Generation Module for automatically generating signatures of unknown worms. Our attempt is to generate signatures of unknown, especially polymorphic, worms with low false positives and high coverage. The system enhances the capability of the IDS signature library and increases the probability of detecting most variants of unknown worms.
In a heterogeneous networking environment, the vertical handover decision plays a crucial role in the overall handover process. Many parameters and techniques have been proposed in the literature for selecting the best network available at a particular instant of time. Since user satisfaction is one of the ultimate aims of the vertical handover process, and different users can have different preferences for the same parameter, user preference is one of the vertical handover decision parameters. In this paper, limiting the scope to a 3G-WLAN interworking environment and using the Analytic Hierarchy Process (AHP), the problem of capturing user preferences on top of IEEE 802.11u is modeled. IEEE 802.11u is the ninth amendment to the IEEE 802.11-2007 standard and allows interworking of Wireless Local Area Networks (WLANs) with external networks. Using the proposed model, the available networks can be ranked based on user preferences alone; these ranking values can then be combined with other parameters (e.g., bandwidth, delay, jitter) to select the best network for vertical handover. We also present the results of a survey among a group of users, which suggests the services that 802.11u-compatible WLAN service providers should consider providing to users.
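The AHP step assumed by such a model can be sketched as follows (the pairwise judgments below are placeholders on Saaty's 1-9 scale, not survey results from the paper): the normalized principal eigenvector of the comparison matrix gives the user-preference weights, and the consistency ratio checks whether the judgments are usable.

```python
# AHP priority weights from a pairwise comparison matrix via the principal eigenvector.
import numpy as np

# Hypothetical user judgments comparing three criteria (e.g. cost, speed, coverage).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()                                  # normalized priority weights

# Consistency index and ratio (random index RI = 0.58 for a 3x3 matrix)
ci = (eigvals[principal].real - len(A)) / (len(A) - 1)
print(weights, 'CR =', round(ci / 0.58, 3))
```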
In this paper, the transmission/reflection characteristics of a circular ring frequency selective surface (FSS) are discussed. We also explore a method for computing the various structural parameters of a frequency selective surface. The analytical results for the proposed structure are comparable to the desired results. The computation process is also extended to the bandpass structure, and the obtained results are close to the desired ones. The computed results for the proposed structure are compared with simulated results obtained using CST Microwave Studio, a commercially available simulator based on the finite integration technique.
Due to the strong demand for operating diverse wireless technologies under a common umbrella, mobile ad hoc networks are expected to meet this requirement. Wireless technologies are complex in themselves, and when combined under one umbrella they inherit the conflicts of their parent technologies while further conflicts arise. WiMAX/UMTS infrastructure is being examined for the convenience of mobile ad hoc networks. The core research challenge in WiMAX concerns channel allocation: owing to the scarcity of channels and the huge growth in mobile users, the problem of allocating conflict-free channels becomes very important. Hence, the main objective of this paper is to reduce multilevel channel conflicts in mobile ad hoc networks. Channel allocation is an elementary resource management issue that determines the capacity and coverage of the network. Static channel allocation lacks heuristic mechanisms for allotting channels to cells; fixed channel allocation (FCA) behaves well under heavy traffic, while dynamic channel allocation (DCA) behaves well under light and reduced traffic. To obtain the benefits of both, we propose a hybrid scheme for channel allocation in which FCA and DCA work concurrently. The results show that the proposed mechanism is able to allot conflict-free channels to all cells according to their requirements, and this allocation reduces multilevel channel conflicts in ad hoc networks. For the simulation, we consider four-cell and seven-cell cluster architectures, and the results show that the allocated channels are conflict-free and based on the compatibility matrix used as the allocation methodology.
In the recent past, there has been increasing interest in wireless communication applications owing to their ease of use, cost effectiveness, maintainability, and ease of deployment. Consequently, a number of wireless systems have been developed and deployed, leading to a belief that the frequency spectrum is slowly running out of usable frequencies. The concept of Cognitive Radio (CR) has been proposed to overcome this spectrum scarcity by making use of opportunistic spectrum access. Along with CRs, new types of security threats have evolved, e.g., the Primary User Emulation Attack (PUEA) and the Spectrum Sensing Data Falsification (SSDF) attack. This paper introduces a simple yet efficient technique to counter the SSDF attack. A rigorous survey shows that only a handful of techniques have been proposed to counter the SSDF attack, and that these existing techniques fail when malicious secondary users (SUs) outnumber the genuine ones, a possible threat scenario in CR networks. We propose a technique that is independent of the number of malicious SUs in the network. It makes use of the primary user's Received Signal Strength (RSS) at an SU to localize the SU's position and compares this with the position calculated at the data fusion center using the received signal strength of the SU's transmissions.
In today's technology landscape, new attacks emerge day by day that make systems insecure even when they are wrapped in a number of security measures. An Intrusion Detection System (IDS) is used to detect intrusions; its prime function is to detect an intrusion and respond in a timely manner. In other words, the function of an IDS is limited to detection and response. The IDS is unable to capture the state of the system when an intrusion is detected; hence, it fails to preserve the evidence of the attack in its original form. To maintain the completeness and reliability of evidence for later examination, a new security strategy is needed. In this research work, an automated digital forensic technique integrated with an intrusion detection system is proposed. Once the IDS detects an intrusion, it sends an alert message to the administrator and then invokes the digital forensic tool to capture the state of the system. The captured image can be used as evidence in a court of law to prove the damage.
This paper emphasizes the necessity of fast and cost-effective services in the embedded domain and a method of accomplishing them. The service provider accesses the system from the remote end over the Internet, runs diagnostic software, and takes corrective actions such as software updates and notifications. Some of the commonly known corrective actions are preloaded by the manufacturer onto the ROM of the system, while newly occurring issues receive rapid upgrades from the remote end.
A rectangular microstrip antenna for quad-band and penta-band operation is developed using a mutual coupling technique. A single rectangular microstrip patch is split into multiple resonators along its width, which are gap-coupled to the non-radiating edges. The proposed structure gives sufficient separation between the operating frequencies for quad-band and penta-band operation. It covers the frequency range from 1900 MHz to 3 GHz, spanning the Universal Mobile Telecommunication System (UMTS, 1920-2170 MHz), Wireless Local Area Network (WLAN, 2400-2483.5 MHz), and low-band Worldwide Interoperability for Microwave Access (WiMAX, 2.5 to 2.8 GHz). Simulation results are presented and discussed.
Microstrip patch antennas are widely used due to their versatility in terms of possible geometries, which makes them applicable to many different situations. Their lightweight construction and suitability for integration with microwave integrated circuits are two more of their numerous advantages. A single-layer, single-feed compact slotted patch antenna is thoroughly simulated in this paper. The aim of this work is to design and implement a microstrip patch antenna array that meets the requirements of a microwave communication system. The resonant frequency has been reduced drastically by cutting two hexagonal slots and a circular slot from the conventional microstrip patch antenna. The antenna was designed with the simulation tool IE3D, which is based on the Method of Moments (MoM). After simulation and optimization, the single-element proposed antenna achieved satisfactory measurement results, and its size was reduced by 80.22% compared to a conventional microstrip patch antenna.
This paper presents an approach for loss reduction in the design of a maximally flat low-pass filter (LPF) using two different methods: first, a defected ground structure (DGS), and second, a series of grounded patches (SGP) for surface wave compensation. A fourth-order maximally flat low-pass filter is designed with a cut-off frequency of 3 GHz. To meet arbitrary cut-off frequency and impedance level specifications, the prototype is converted to the low-pass filter using frequency scaling and impedance transformation. Loss reduction is investigated first with the defected ground structure and then with the series-of-grounded-patches structure, consisting of small 2 mm by 2 mm square patches on the microstrip line. The simulation is performed using PUFF and IE3D software, and the simulation results are reported. The simulation results show an improvement in reflection coefficient at the 3 GHz cut-off frequency of -7.18 dB for the filter with DGS and -2.56 dB for the filter with SGP, in comparison with the simple LPF; at 2.68 GHz, the LPF with SGP shows a -20.18 dB improvement. All three filters are fabricated: without defected ground, with defected ground, and with the series of grounded patches. Measured results for all three fabricated low-pass filters, obtained using a vector network analyser, are presented. Insertion loss, radiation loss, transmission loss, and return loss are calculated on the basis of the measured and simulated parameters, and the group delay of each filter is reported. A comparative insight into the different losses, group delays, and theoretical and practical values for the three filters is presented and shows good agreement.
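The prototype-to-LPF conversion mentioned here follows the standard maximally flat design equations; the sketch below (element ordering assumed shunt-capacitor first, which may differ from the paper's ladder) computes the Butterworth prototype values g_k = 2 sin((2k-1)π/2N) and scales them to a 3 GHz cut-off in a 50-ohm system.

```python
# Maximally flat (Butterworth) prototype element values with frequency and impedance scaling.
import math

N = 4                      # filter order
fc = 3e9                   # cut-off frequency, Hz
R0 = 50.0                  # system impedance, ohms
wc = 2 * math.pi * fc

# Low-pass prototype element values g1..gN
g = [2 * math.sin((2 * k - 1) * math.pi / (2 * N)) for k in range(1, N + 1)]

# Frequency scaling and impedance transformation of the ladder elements
for k, gk in enumerate(g, start=1):
    if k % 2:   # odd elements taken here as shunt capacitors
        print(f"C{k} = {gk / (R0 * wc) * 1e12:.3f} pF")
    else:       # even elements taken here as series inductors
        print(f"L{k} = {R0 * gk / wc * 1e9:.3f} nH")
```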
The Delay Tolerant Network (DTN) is a binder for regional heterogeneous networks. Nodes in a DTN rely on intermittent contacts between themselves to deliver packets. The key to improving performance in DTNs is to create a greater number of forwarding opportunities. A considerable number of routing strategies for routing information in such decentralized DTN environments have been proposed, such as Epidemic Routing and Spray and Wait. Routing in such an environment should achieve a high delivery rate with minimal overhead. In this paper, we aim to achieve a high delivery rate while keeping the number of message duplications significantly low. We propose a routing strategy using credit distribution. This routing protocol improves delivery capability by replicating messages where it is advantageous, providing a better way of delivering messages in challenged environments and an efficient technique for addressing the key issues of cost overhead, replication, and message loss. To support our claims, we have run simulations comparing the proposed approach with several notable existing protocols, and the results show that our approach satisfies its objectives.
Cluster-based routing protocols are used to improve the performance of large-scale networks. In this paper, we propose a new approach for intra- and inter-cluster routing in different scenarios. Our proposed algorithm takes advantage of both proactive and reactive routing protocols: proactive routing is used for intra-cluster routing and reactive routing for inter-cluster routing. We assume that nodes common to multiple clusters are gateway nodes and act as intermediate nodes. The proposed algorithm enhances the performance of cluster-based routing protocols. We use an analytical model to calculate the overhead incurred while updating routing tables. Our results show the enhanced performance of the proposed technique.
Secure routing protocols based on cryptographic algorithms require many prerequisites, such as establishment, maintenance, and operational mechanisms. The nodes in an ad hoc network depend upon the trusting nature of the other nodes, and trust levels between nodes develop over a period of time. A central trust authority may be infeasible in ad hoc networks. In this paper, we propose a trust model that enforces reliability through collaboration instead of achieving trust through security mechanisms. All nodes independently execute this trust model and make their own decisions about other nodes in the network. We evaluate and analyze both existing and trust-based reactive routing protocols using the QualNet simulator.
Plants have an important role in our life, but due to environmental changes many species of plants are facing extinction. It is very important to treasure this great wealth by maintaining complete details of all types of plants, which will help in understanding aspects of their survival. A leaf plays an important role in the identification of a plant: the edge details of a leaf can be detected using edge detection algorithms and stored in the form of feature vectors. Hence, to improve the effectiveness of leaf identification, we propose to generate a clustered database. This paper introduces an approach to clustering based on eccentricity, using the K-means algorithm to cluster the database. Experimental results show the effectiveness of the formed clusters in terms of entropy and purity, and indicate that the proposed clustering approach is quite effective and would enhance retrieval efficiency.
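A small sketch of the clustering-and-purity evaluation described (synthetic eccentricity values and labels, not the paper's leaf dataset; scikit-learn's KMeans is assumed):

```python
# K-means clustering on a leaf eccentricity feature, evaluated by cluster purity.
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

# Each row: eccentricity of a leaf contour; labels are known species (used only for evaluation).
eccentricity = np.array([[0.20], [0.25], [0.22], [0.61], [0.58], [0.63], [0.90], [0.88]])
species      = np.array([0, 0, 0, 1, 1, 1, 2, 2])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(eccentricity)

def purity(clusters, labels):
    total = sum(max(Counter(labels[clusters == c]).values()) for c in np.unique(clusters))
    return total / len(labels)

print(kmeans.labels_, 'purity =', purity(kmeans.labels_, species))
```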
An ad hoc network is a dynamic-topology network with no centralized control or base station. In such a network, routing is a challenging task due to frequent changes in network topology and resource constraints. Many routing protocols have been proposed to overcome the various challenges of routing in ad hoc networks, and each has advantages and disadvantages in different situations, so it is difficult for users to choose a particular routing protocol for a particular requirement. In this performance analysis paper, we compare six existing well-known routing protocols from a user's point of view: AODV, DSR, LAR, OLSR, STAR, and ZRP. The paper demonstrates its usefulness by providing a comparative analysis of the protocols with respect to the important routing parameters: packet delivery ratio, throughput, end-to-end delay, battery power consumption, average hop count for a connection, packet drops due to the retransmission limit, and average jitter of received packets. We analyze the protocols in a realistic ad hoc network scenario using simulation. This comparative study helps an analyzer or user decide which protocol is suitable for particular requirements.
Genetic algorithms have been successfully applied in the area of software testing, and the demand for automation of test case generation in object-oriented software testing is increasing. Extensive testing can only be achieved through a test automation process, whose benefits include lowering the cost of testing and, consequently, the cost of the whole software development process. Several studies have used this technique to automate test data generation, but it is expensive and cannot be applied properly to programs with complex structures. Previous approaches in the area of object-oriented testing are also limited in terms of test case feasibility due to call dependences and runtime exceptions. This paper proposes a strategy for evaluating the fitness of both feasible and infeasible test cases, improving the evolutionary search by achieving higher coverage and evolving more infeasible test cases into feasible ones.
Nowadays, extensive research is being carried out on Wireless Sensor Networks (WSNs). A WSN is a collection of sensor nodes and one destination node, the sink. The nodes sense the environment and transmit data to the sink, and the main aim of a wireless sensor network is reliable data dissemination to the sink. When all nodes send data at the same time, congestion may occur, which is one of the most important problems in WSNs, causing arbitrary packet drops and energy wastage. This paper describes a performance analysis of existing routing protocols in WSNs. Simulation results show which protocol is most suitable for a random topology in terms of packet delivery ratio, received packets, total dropped packets, and node density.
Rapid advances in VLSI technology have increased chip density by constantly increasing the number of components on a single chip as well as decreasing the chip feature size. In such a complex scenario, the primary objective is to limit the power-delay product of the system. This can be done by reducing the interconnect delay through optimization of wire lengths, i.e., by proper interconnection of all the nodes. The minimum cost of interconnecting all nodes is given by the Rectilinear Steiner Minimal Tree (RSMT) formed by the nodes, and finding an RSMT is an NP-complete problem. Particle Swarm Optimization (PSO) is an efficient swarm intelligence algorithm that boasts fast convergence and ease of implementation and is capable of solving such a problem. This paper presents a novel discrete particle swarm optimization (DPSO) for this NP-complete problem of finding the RSMT. A modified Prim's algorithm has been adopted for finding the cost of the RSMT. A unique modification to the traditional PSO has been made by introducing the mutation operation of the Genetic Algorithm (GA), which produces up to a 20% reduction in wire length, i.e., interconnection cost. Two versions of the DPSO algorithm, one with linearly decreasing inertia weight and another with self-adaptive inertia weight, have been employed and their results compared. Comparisons have also been made between the results available from recent work and our algorithm, and the latter has established itself as superior in optimizing the interconnect lengths and thereby finding the lowest wire lengths.
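The cost evaluation inside such a DPSO loop can be illustrated with Prim's algorithm over rectilinear (Manhattan) distances, as sketched below; the terminals are hypothetical pin locations, and the Steiner points chosen by the particles would further reduce the returned wire length.

```python
# Prim's algorithm over Manhattan distances: spanning-tree wire length of a terminal set.
def rectilinear_mst_cost(points):
    """points: list of (x, y) terminals; returns the total Manhattan wire length of the MST."""
    n = len(points)
    in_tree = [False] * n
    dist = [float('inf')] * n
    dist[0] = 0
    total = 0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=dist.__getitem__)
        in_tree[u] = True
        total += dist[u]
        for v in range(n):
            if not in_tree[v]:
                d = abs(points[u][0] - points[v][0]) + abs(points[u][1] - points[v][1])
                dist[v] = min(dist[v], d)
    return total

pins = [(0, 0), (4, 1), (2, 3), (5, 5)]
print(rectilinear_mst_cost(pins))   # Steiner points found by the DPSO would reduce this further
```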
A compact reverse-G-shaped UWB antenna with a notch is presented. The proposed antenna consists of an L-shaped slot cut in the ground plane to achieve a band-notched characteristic in the WLAN (5.15-5.825 GHz) band. The antenna has an operating impedance bandwidth from 3.2 GHz to 10.9 GHz, with a notched band from 5.01 GHz to 5.96 GHz. The VSWR is less than 2 (S11 < -10 dB) over the operating frequency band except in the band-notch frequency range. The gain variation of this antenna over the ultra-wideband range is 4 to 7 dB, and the gain at the notch frequency is -0.17 dB, a significant fall in gain in the notched band region. This antenna can be used to reject the WLAN and HIPERLAN bands within the UWB.
Data dissemination among wireless nodes in an ad hoc environment mainly depends on the cooperation maintained between them. This cooperation is essential for establishing both forward and reverse routes as well as for relaying packets. Due to the limited availability of resources in an ad hoc scenario, some nodes may tend to drop packets coming from their neighbor nodes while forwarding their own packets; this behavior is known as “selfish behavior”. In this paper, we devise and propose a mathematical model that detects selfish nodes based on the split-half reliability coefficient. The split-half reliability check is a two-level consistency mechanism based on the aggregate number of packets entering and leaving a node at a given time instant. The performance of the devised mathematical model is studied through ns-2 simulations, using packet delivery ratio, control overhead, total overhead, and throughput as parameters while varying the number of selfish nodes. The results of the experimental study show that the proposed model detects selfish nodes more rapidly than other existing models in the literature.
A packet-scheduling scheme, which determines the order in which incoming packets are serviced at an intermediate node, plays a vital role in deciding the queuing delay experienced by a packet in a wireless sensor network. For real-time WSN applications, minimizing the end-to-end delay is a major concern in meeting the required bounded time constraints. Therefore, a real-time packet-scheduling scheme has to schedule incoming packets effectively based upon their deadlines, minimizing the queuing delay incurred at each node. In this paper, we present the role of a real-time packet scheduling policy for real-time wireless sensor networks and propose an effective scheduling scheme that aids real-time data communication. Simulation results show that the proposed scheduling policy performs better in terms of average delay, packet drop ratio, and packet miss ratio.
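A minimal sketch of a deadline-driven queue of the kind described (policy details assumed: earliest absolute deadline first, with already-expired packets dropped and counted toward the miss ratio):

```python
# Earliest-deadline-first packet queue for a sensor node.
import heapq

class DeadlineScheduler:
    def __init__(self):
        self.queue = []                                # min-heap ordered by absolute deadline

    def enqueue(self, packet_id, deadline):
        heapq.heappush(self.queue, (deadline, packet_id))

    def dequeue(self, now):
        """Return the next packet to transmit, skipping packets that already missed their deadline."""
        while self.queue:
            deadline, packet_id = heapq.heappop(self.queue)
            if deadline >= now:
                return packet_id
            # else: deadline missed -> drop it and count it toward the packet miss ratio
        return None

sched = DeadlineScheduler()
sched.enqueue('p1', deadline=30)
sched.enqueue('p2', deadline=12)
sched.enqueue('p3', deadline=8)
print(sched.dequeue(now=10))   # 'p2' is served first among the still-feasible packets
```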
A mobile ad hoc network is an infrastructure-less network whose nodes can move freely in any direction. Since the nodes have limited battery power, energy-efficient route discovery mechanisms are critical for the proper performance of this kind of network. The Experience-based Energy-efficient Routing Protocol (EXERP) [1] is one such protocol that intelligently addresses this issue; it requires two caches, namely a history cache (H-cache) and a packet cache (P-cache), which are dependent upon one another. In this article, we propose a fuzzy-controlled cache management (FCM) technique for EXERP in ad hoc networks. Simulation results establish that the proposed scheme achieves a higher hit ratio at lower complexity than other cache management schemes.
Mesh and torus are among the most popular interconnection topologies. Based on the 2D mesh, a new interconnection network topology, the Center-Connected Mesh (C2Mesh), is proposed. The C2Mesh performs better than the simple 2D mesh and torus interconnection networks. This paper presents an introduction to the new C2Mesh topology and analyzes its architectural potential in terms of routing algorithms. Topological properties of the n×n C2Mesh network are presented.
Mobile location-based tourism applications guide tourists on the move based on their preferences and context, such as time and location. These applications depend heavily on GPS, which is used to determine the user's location so as to deliver information that takes user preferences and context into account. However, most applications appear to lack the delivery of information that is relevant to the user in terms of preferences and contextual conditions. Semantic web technologies provide an opportunity to develop more intelligent location-based applications that deliver information that is more accurate and relevant to the user. Thus, the objectives are: (1) to identify the benefits of leveraging semantic web technologies in the framework; (2) to identify the techniques used to provide personalized activity recommendations based on user preferences; (3) to identify the methods used to build a user profile containing user preferences; and (4) to identify the key location-based components that need to be integrated into the framework. A conceptual framework for personalized mobile location-based tourism applications leveraging the semantic web to enhance the tourism experience is proposed.
IEEE 802.16 supports different scheduling services to grant Quality of Service (QoS) to multimedia applications, but the standard does not specify any scheduling algorithm for fulfilling the QoS of the different traffic classes. In this paper, we modify the maximum Signal-to-Interference Ratio (mSIR) scheduling algorithm so that bandwidth allocation to Subscriber Stations (SSs) depends not only on the Signal-to-Interference Ratio (SIR) value of an SS but also on its queue length when the Base Station (BS) provides service. The simulation results show that the proposed scheduler increases throughput by 8.08% in comparison with the mSIR scheduler. The proposed approach optimizes the usage of bandwidth and resources in WiMAX networks by enhancing the overall throughput and decreasing the mean sojourn time of the real-time Polling Service (rtPS) class. Moreover, it provides fair treatment for SSs having low SIR and large amounts of real-time data to deliver.
During path establishment, the states of the switching elements in a switching network may need rearrangement. C.-T. Lea et al. [1] first described the rearrangement behavior of switching networks and analyzed the frequency of its occurrence using stochastic methods. Notably, the rearrangement behavior of a switching network includes both rearrangement of connections and rearrangement of switching elements. Optimizing the rearrangement cost of a switching network can save both power and time, which in turn can increase switching speed. This paper determines the rearrangement cost of strictly non-blocking vertically stacked optical Banyan networks in terms of switching elements. We believe that our result will be helpful in designing, implementing, and analyzing different algorithms for these networks.
A mobile ad hoc network (MANET) is a self-starting dynamic network comprising mobile nodes, where every participating node voluntarily transmits packets destined for remote nodes using wireless (radio) transmission. Past research has highlighted the problematic behaviour of traditional TCP agents in MANET environments and has proposed various remedies across the networking stack. However, there has not been a performance evaluation of different TCP agents under varying mobility conditions that takes into account past experience in MANET evaluation. In this paper, we evaluate the performance of various TCP variants, viz. Tahoe, Reno, New Reno, and SACK, under varying node speeds using the OPNET simulation tool, and evaluate the effect on throughput, end-to-end delay, FTP download response time, and FTP upload response time.
A SAT-based detailed routing technique for island-style FPGA architectures is presented in this paper. The technique uses the graph-colouring paradigm to route multiple nets without decomposing them into 2-pin subnets for simplicity. Despite this, the proposed technique proves to be efficient and scalable, since it leverages the computing power of fast SAT solvers running in the back end, as shown by experiments on benchmark circuits.
In this paper a data-aided (DA) timing synchronization scheme for orthogonal frequency division multiplexing (OFDM) systems is proposed. It works in the time domain, and the timing metric is independent of the specific structure of the preamble. Quadrature phase shift keying (QPSK) signals and an independent Rayleigh fading multipath channel in the presence of additive white Gaussian noise (AWGN) are considered. The proposed algorithm estimates the starting point of OFDM frames by using a matched filter (MF) at the receiver. The performance is evaluated and compared with existing timing synchronization methods for OFDM systems in terms of the probability of erasure (PE), i.e. the probability of not detecting a frame, and the mean squared error (MSE). The simulation results indicate that the proposed method improves system performance significantly, and its computational time for timing synchronization is lower than that of previously existing methods.
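A matched-filter frame-start detector of the kind described can be sketched as below, assuming the receiver knows the transmitted preamble; the frame start is taken as the lag that maximizes the magnitude of the matched-filter output. Signal lengths and noise levels are illustrative only.

```python
import numpy as np

def detect_frame_start(rx, preamble):
    mf = np.conj(preamble[::-1])                      # matched-filter taps
    out = np.convolve(rx, mf, mode="valid")           # correlate with preamble
    return int(np.argmax(np.abs(out)))                # estimated start index

rng = np.random.default_rng(0)
preamble = np.exp(1j * np.pi / 2 * rng.integers(0, 4, 64))   # QPSK preamble
frame = np.concatenate([rng.standard_normal(100) * 0.1,      # noise-only head
                        preamble,
                        rng.standard_normal(200) * 0.1])
print(detect_frame_start(frame, preamble))                    # ~100
```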
The Vertical Bell Laboratories Layered Space-Time (V-BLAST) algorithm is a multi-layer symbol detection scheme used in MIMO systems to obtain multiplexing gain. Spatial multiplexing gives rise to multi-stream interference (MSI), which makes detector performance more critical. This paper investigates different V-BLAST detector algorithms, namely ZF, MMSE and ML, with and without successive cancellation and optimum ordering, for different modulation schemes, in order to overcome MSI. The work is also extended to study the effect of Rayleigh channel fading correlation and antenna cross-polarization coupling on these V-BLAST detectors. Simulation results show the trade-off between bit error rate performance and computational complexity of the various detectors. It is also observed that the lower the channel fading correlation and cross-polarization coupling, the better the performance of these detector schemes.
Unreliable wireless communication, mobility of network participants and the limited resources of mobile devices make conventional replication techniques unsuitable for MANETs. Frequent network partitions and dynamic disconnections must be handled to improve data availability. In this paper, a novel node-failure-tolerant data replication strategy is proposed. The main objective is to improve data accessibility within a mission-critical mobility group. It ensures persistent data availability by duplicating data over replica nodes, and the algorithm is designed to operate in environments with node failures. The proposed replica allocation methodology accounts for data availability in the presence of node failures together with the remaining energy and storage capacity of mobile nodes. The performance of the proposed approach is analyzed in terms of data availability and is demonstrated by simulations.
Energy efficiency is a main design issue for wireless sensor network protocols, and node clustering is an energy-efficient approach for sensor networks. In clustering algorithms, nodes are grouped into independent clusters and each cluster has a cluster head. The number of data units gathered at the base station depends on the lifetime of the network, so cluster head selection is an important issue for the energy efficiency of clustering schemes, and the intra-cluster communication distance depends on the position of the cluster head within the cluster. In this paper, a new cluster head selection scheme is proposed that can be combined with any distributed clustering scheme. In the proposed scheme, the network area is divided into two parts, a border area and an inner area, and cluster head selection is restricted to inner-area nodes only. The scheme is implemented and simulated with LEACH in NS-2. Simulations show that the proposed scheme significantly outperforms LEACH in terms of network lifetime and data gathering rate.
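The border/inner-area restriction can be illustrated with the short sketch below; the field size, border width and LEACH-style election probability are assumed values, and the sketch only shows where cluster-head candidacy is restricted, not the full LEACH round structure.

```python
import random

FIELD = 100.0          # square field side (m), assumed
BORDER = 15.0          # border-strip width (m), assumed
P_CH = 0.05            # desired cluster-head fraction, assumed

def in_inner_area(x, y):
    return BORDER <= x <= FIELD - BORDER and BORDER <= y <= FIELD - BORDER

def elect_cluster_heads(nodes):
    """nodes: list of (node_id, x, y). Only inner-area nodes may become CHs."""
    heads = []
    for nid, x, y in nodes:
        if in_inner_area(x, y) and random.random() < P_CH:
            heads.append(nid)
    return heads

nodes = [(i, random.uniform(0, FIELD), random.uniform(0, FIELD)) for i in range(200)]
print(elect_cluster_heads(nodes))
```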
A mobile ad hoc network is an autonomous network consisting of nodes that communicate with each other over wireless channels. Due to its dynamic nature and the mobility of nodes, a mobile ad hoc network is more vulnerable to security attacks than conventional wired and wireless networks. AODV is one of the principal routing protocols used in MANETs, and its security is compromised by malicious node attacks. In such an attack, a malicious node injects a fake route reply claiming to have the shortest and freshest route to the destination; however, when the data packets arrive, the malicious node discards them. To prevent malicious node attacks, this paper presents a PPN (Prime Product Number) scheme for the detection and removal of malicious nodes.
Online social networking sites have become a must-visit daily activity for the majority of people today. Individuals keep themselves abreast of the latest developments around them through these Online Social Networks (OSNs). With so many activities on OSNs, users often reveal information that may not be pleasing or morally acceptable to other users. Quite possibly, even unknowingly, a user may fall into the malpractice of spreading hatred by posting unethical and unacceptable material. Through this paper, the author attempts to address the growing issue of socially unacceptable postings by providing a new architecture for controlling users' actions on OSNs and thereby minimizing the menace of notorious activities.
The IEEE 1588 Precision Time Protocol (PTP) clock recovery algorithm is sensitive to Packet Delay Variation (PDV), which can cause jitter and wander in the recovered clock. In addition, the accuracy of phase recovery, and the time it takes for the PTP servo to acquire phase lock, depend on the percentage of packets that fall inside the "floor window", the Floor Packet Percentage (FPP). It is therefore important to understand the delay characteristics one can expect for 1588 packets as they pass through the network between the 1588 master and slave. This paper proposes mathematical methods to analyze different delay characteristics, such as lower and upper bounds on delay and the probability distribution of the delays. The paper also shows how the delay probability of 1588 packets changes when the number of links between master and slave, or the load on these links, changes. Output from a simple program that implements these methods is also given. The paper shows that even with the best possible QoS treatment of 1588 packets, namely strict-priority scheduling at the egress of each intermediate node, changes in link load alone can greatly affect the Floor Packet Percentage and the delay distribution in general.
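The kind of analysis described can be sketched as follows under assumed numbers: each hop contributes an independent queuing-delay distribution, the end-to-end distribution is their convolution, and the Floor Packet Percentage is the probability mass within a window above the minimum delay; the per-hop PMF and window width are illustrative, not taken from the paper.

```python
import numpy as np

def end_to_end_pmf(per_hop_pmfs):
    """Convolve per-hop delay PMFs (all defined on a common time grid)."""
    pmf = np.array([1.0])
    for hop in per_hop_pmfs:
        pmf = np.convolve(pmf, hop)
    return pmf

def floor_packet_percentage(pmf, bin_us, window_us):
    floor_bin = np.flatnonzero(pmf > 0)[0]          # minimum possible delay
    bins_in_window = int(window_us / bin_us) + 1
    return pmf[floor_bin:floor_bin + bins_in_window].sum()

bin_us = 1.0
# Assumed per-hop delay PMF: mostly near-minimum delay, occasional queuing.
hop = np.array([0.7, 0.2, 0.07, 0.02, 0.01])
pmf = end_to_end_pmf([hop] * 5)                      # 5 links master -> slave
print(floor_packet_percentage(pmf, bin_us, window_us=3.0))
```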
The use of multiple antennas at the source, relay and destination terminals promises a large increase in quality of service, capacity and data rate. This paper presents the mutual information of Multiple-Input Multiple-Output (MIMO) relay channels when perfect channel state information (CSI) is assumed at the relay and destination terminals; spatially correlated channels and correlated noise may also be included. The analytical expression of the mutual information (MI) generating function has been derived by applying multiple copies of the Gaussian integral. When the number of antennas at the relay terminal is larger than at the source and destination terminals, the mutual information of the channel increases significantly. It is observed that there is a significant increase in MI for (Ms, Md, Mr) equal to (3,3,4) compared with (3,3,3), whereas the response worsens for (4,4,3) compared with (4,4,4). The simulation results confirm that the analytical method produces accurate and improved results for MIMO antenna arrays with two, three, four and more antenna combinations.
Mobile technology and the Internet are becoming an integral part of our daily life. Transactions such as shopping, ticket booking and banking are done on the fly, and smartphones add portability to these activities. To manage information and applications on a smartphone, the user must provide credentials or profiles to service providers by logging onto different websites and filling in their details. As a result, the user's profile resides under the control of multiple service providers, causing data duplication that leads to inconsistency. To overcome these issues, this paper proposes Profile Translation based Proactive Adaption using Context Management (PTPACM) in smartphones, which automatically generates the user's profile according to the scenario. The proposed system keeps the user's full profile in the user domain, centralizing or exchanging the profile information while increasing its consistency. The paper presents a layered architecture for PTPACM with a Context Awareness Layer, a Proactive Analyzer Layer and Profile Translation, along with a probabilistic representation of PTPACM and pseudocode for the different operations in the functional blocks of the presented architecture.
An ID-based cryptographic scheme enables users to employ public keys without exchanging public key certificates; in these schemes, users can derive their public and private keys from their identity. The effective application of bilinear pairings over elliptic curves makes such systems simple and efficient in providing security. In this paper, we propose an ID-based signature scheme using bilinear pairings. We prove that the proposed signature scheme is secure against existential forgery under adaptively chosen message and ID attacks in the random oracle model, under the assumption that the computational Diffie-Hellman problem is intractable. We also compare the efficiency of the proposed scheme with some related ID-based signature schemes.
Live migration is an essential feature of virtualization technology, in which a running Virtual Machine (VM) is moved from one physical host to another without any service disruption. The benefits reaped from VM migration are high availability, load balancing, energy saving and disaster recovery, all of which are desirable data centre attributes. Migration is initiated by the administrator, and part of the migration procedure is to make an informed decision about which VM to migrate; otherwise good performance may not be achieved. The decision of selecting the right candidate VM depends on parameters such as total migration time, downtime, total transferred data and page dirty rate, and the nature of the application influences these parameters. This paper analyzes some of these parameters empirically and, based on these data, chooses the correct VM to migrate. In addition, the paper also discusses dynamic resource allocation for the RSA algorithm and JMeter.
This paper presents an implementation of a genetic algorithm over SVD-based signal detection in cognitive radio networks for wireless communication systems. We simulate the proposed algorithm for the detection of common MPSK signals and analyze the detector's performance for blind signal detection, where no knowledge of the primary user is available. The paper reports improved results and compares the SVD-based signal detection method with the proposed method for cognitive radio networks.
Push technology has evolved substantially since its inception, and many new features have been added to the available solutions in terms of reliability, performance and new standards. In this paper, we propose an application to push data in real time which enables bidirectional flow of data and is independent of any particular publisher or subscriber. The aim of this work is to develop a push server application that lets students and faculty access study material and keep track of last-minute notices and other important announcements, in real time, from a central file/data server of the university. File systems are usually heavily guarded by firewalls, and accessing them from a remote location poses problems; our application solves this problem and allows secure access while remaining compliant with university policies. The server application, called PushNotify, is based on the publish/subscribe model and is independent, which enables it to be easily integrated with any file server of any university and with any communication client, achieving the paradigm "any publisher, any subscriber". The communication client through which subscribers receive notifications or alerts can be a web browser extension or a mobile device application.
Cooperative caching in mobile ad hoc networks aims at improving the efficiency of information access by reducing access latency and bandwidth usage. The cache replacement policy plays a vital role in improving cache performance in a mobile node, since the node has limited memory. In this paper we propose a new key-factor-based cache replacement policy, called E-LRU, for cooperative caching in ad hoc networks. The proposed policy considers the time interval between recent references, the data size and consistency as the key factors for replacement. A simulation study shows that the proposed replacement policy can significantly improve cache performance in terms of cache hit ratio and query delay.
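One plausible way to combine the three factors into a replacement decision is sketched below; the multiplicative scoring, field names and TTL-based consistency penalty are assumptions made for illustration, not the exact E-LRU policy.

```python
import time

def eviction_score(item, now):
    inter_ref = (item["last_ref"] - item["prev_ref"]) or 1e-3   # recent-reference gap
    staleness = max(0.0, now - item["expires"])                  # consistency penalty
    return inter_ref * item["size"] * (1.0 + staleness)

def choose_victim(cache, now=None):
    """Evict the cached item with the worst combined score when the cache is full."""
    now = now or time.time()
    return max(cache, key=lambda k: eviction_score(cache[k], now))

cache = {
    "a": {"prev_ref": 10.0, "last_ref": 90.0, "size": 4.0, "expires": 200.0},
    "b": {"prev_ref": 80.0, "last_ref": 85.0, "size": 1.0, "expires": 50.0},
}
print(choose_victim(cache, now=100.0))    # "a": rarely referenced and large
```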
The topology of a MANET is dynamic in nature, which makes the routing mechanism more complicated and fragile; consequently, nodes are more vulnerable to compromise and particularly susceptible to denial of service (DoS) attacks launched by malicious nodes or intruders [6]. Reactive routing protocols such as AODV, which are more popular than table-driven routing, use flooding to discover routes, and attackers exploit this to launch DoS attacks; flooding, black hole and gray hole are well-known attacks in MANETs. In this paper we propose a novel automatic security mechanism using SVM to defend against malicious attacks on AODV. The proposed method uses machine learning to classify nodes as malicious. The system is far more resilient to the context changes common in MANETs, such as malicious nodes changing their misbehavior patterns over time or rapid changes in environmental factors, for instance movement speed and communication range. The paper introduces a new algorithm for the detection of attacks in ad hoc networks based on SVM analysis of routing-protocol behaviour, using the PMOR, PDER and PMISR metrics to evaluate the QoS of a link and to predict attacks.
TCP Vegas for ad hoc networks is an end-to-end congestion avoidance protocol that uses a conservative approach to determine and control the network state. However, this conservative scheme is not good in all conditions and can unnecessarily reduce the size of the congestion window. This paper proposes Improved TCP Vegas, an improvement over TCP Vegas in ad hoc networks that uses the round-trip time variation of packets at the sender side, together with short-term throughput and inter-delay difference at the receiver side, to measure the network state, and then controls the congestion window considering the path length and network state. Simulation results show an improvement of 5 to 15% over ad hoc TCP Vegas in high-mobility and high-traffic conditions.
In this paper, an efficient signcryption scheme based on elliptic curve cryptosystems is proposed which effectively combines the functionalities of digital signature and encryption at a comparable computational cost and communication overhead. The proposed scheme provides confidentiality, integrity, unforgeability and non-repudiation, along with encrypted message authentication, forward secrecy of message confidentiality and public verification. By forward secrecy of message confidentiality we mean that even if the sender's private key is inadvertently divulged, the confidentiality of previously stored messages is not affected. By public verification we mean that any third party can directly verify the signature of the sender of the original message without the sender's private key when a dispute occurs, which supports fair adjudication. In addition, the proposed scheme saves a great amount of computational cost, so it can be applied to devices with low computational power, such as smart-card-based applications, e-voting and many more. The proposed scheme is discussed in this paper and compared with existing schemes with respect to computational cost and the security functions it provides.
Efficient sensor deployment for target detection is one of the fundamental issues in wireless sensor networks. In this paper, we propose a deployment algorithm named the linear order deployment algorithm (LODA). LODA uses the transferable belief model (TBM) for target detection, which improves overall performance compared with probability theory. The simulation results show that LODA achieves better sensing coverage with a limited number of sensors than the random deployment method.
In a multi-hop WSN, a sensor node spends most of its energy relaying data packets, so reducing energy consumption is one of the major issues in designing a wireless sensor network to prolong the network lifetime. One solution is to shorten the hop distance that a sensor's data has to travel before reaching the sink. These distances can be reduced effectively by deploying multiple sinks instead of one, with every sensor communicating with its closest sink; to achieve the shortest distances, the sinks have to be placed carefully. In this paper we consider the problem of optimally deploying k sink nodes in a wireless sensor network so as to minimize the average hop distance between sensors and their nearest sink while maximizing the degree of each sink node, which also mitigates the hot-spot problem, another critical issue in WSN design. Given a wireless sensor network where the location of each sensor node is known, we partition the whole sensor network into k disjoint clusters and place the sink nodes optimally. We propose a multi-sink placement algorithm based on Particle Swarm Optimization. The simulation results show that our proposed optimization-based algorithm performs better than the algorithm without optimization.
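A compact sketch of PSO-based sink placement is given below, with particles encoding the (x, y) positions of the k sinks and fitness taken as the average distance from each sensor to its nearest sink (Euclidean distance standing in for hop distance); the swarm parameters are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
FIELD, K, N_PARTICLES, ITERS = 100.0, 3, 20, 100
sensors = rng.uniform(0, FIELD, size=(150, 2))

def fitness(flat_sinks):
    sinks = flat_sinks.reshape(K, 2)
    d = np.linalg.norm(sensors[:, None, :] - sinks[None, :, :], axis=2)
    return d.min(axis=1).mean()                 # avg distance to nearest sink

pos = rng.uniform(0, FIELD, size=(N_PARTICLES, 2 * K))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(ITERS):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, FIELD)
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(gbest.reshape(K, 2), fitness(gbest))      # best sink positions found
```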
Mobile devices often change their location, which triggers a handover from one access router to another. Mobility management provides a way to retain the ongoing sessions of the mobile node, so it is crucial to provide efficient handoff support for mobile devices. Mobile IPv6 (MIPv6) and its extensions have been proposed for this purpose: Fast Mobile IPv6 (FMIPv6) and Hierarchical Mobile IPv6 (HMIPv6) have been developed as host-based mobility management protocols, whereas Proxy Mobile IPv6 (PMIPv6) and Fast Proxy Mobile IPv6 (FPMIPv6) have been proposed as network-based mobility management protocols. In this paper, a survey and the detailed signaling of each protocol are presented, followed by an analysis of these protocols based on handover latency and signaling cost. Finally, numerical results are presented and discussed.
The growth of wireless communication in the last decade has led to increased demand for compact, high-gain and preferably multi-band antennas. Furthermore, the decrease in the price of handheld devices and services has made on-the-move Internet and web services available to customers. In this paper an H-shaped patch antenna is designed using an FR4 substrate. The proposed modified H-shaped antenna is designed and simulated using HFSS and caters to various wireless applications such as WiMAX, Wi-Fi, UMTS and Digital Multimedia Broadcasting (DMB).
Energy efficiency in wireless sensor networks has gained importance, and energy-efficient routing algorithms have been proposed to increase the lifetime of the network. Part of the routing energy is consumed in topology assessments, where the sink broadcasts a message and, after receiving the acknowledgements, assesses the current topology of the network. Our work studies the topological behaviour of WSNs and proposes an algorithm that can maximize the lifetime by reducing the communication overhead incurred by topology assessments.
In this paper, we propose a few new quasi-orthogonal space-time block codes (QOSTBCs) with three time slots for two transmit antennas. These codes can be decoded using ML detection. The proposed codes provide better bit error rate (BER) performance than an existing QOSTBC with three time slots and two transmit antennas, and all of the proposed codes give nearly the same performance.
Wireless communication systems are gaining much attention for reliable communication. One modern communication system is the Mobile Ad Hoc Network (MANET), which has several routing protocols, of which AODV is one of the most efficient. In this paper we propose a novel technique for modeling the AODV routing protocol based on the Markov random walk model. We calculate the probability distribution of the protocol's various operations for efficient communication. This analysis helps in understanding how various factors affect AODV and their impact on QoS. The model's effectiveness and efficiency have been checked using various parameters and further analyzed for reliability and scalability.
Every upcoming generation of wireless communication is based on the integration of different applications. This integration requires an increase in capacity or spectral efficiency, achievable either by reducing fading effects or by spectral reuse. Various diversity combining techniques have been proposed in the literature to mitigate fading effects. Both diversity combining and spectral reuse cause interference, and spectral reuse is the main cause of co-channel interference, so interference modeling has become a challenging research area for the performance analysis and improvement of wireless communication systems. Closed-form solutions for co-channel interference under a Rayleigh distribution of the received signal have been given in the past. In this paper, we verify the earlier model and give a simple analytical model for other important fading distributions, namely the Weibull and Nakagami distributions, providing a closed-form solution for co-channel interference under the assumption that the interferers are linearly located.
Energy consumption is the key design criterion for routing data in WSNs; however, some WSN applications, such as disaster management and battlefield control, also demand fast data delivery. In this paper, three data forwarding techniques are proposed. The source node or an intermediate node selects the next node to forward the data towards the destination based on different criteria, and the process repeats until the data reach the destination. In the first technique, the neighbour node nearer to the sink is chosen, using its distance from the sink node as the criterion. In the second technique, the remaining energy of the neighbouring nodes is used as the criterion to select the next node. The third technique combines these two criteria using Multiple Criteria Decision Analysis. A comparative study of the performance of these techniques has been carried out and the results are presented in this paper.
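The three next-hop rules can be illustrated as below; the weighted-sum combination in the third rule stands in for the paper's Multiple Criteria Decision Analysis step, and the node fields and 0.5/0.5 weights are assumptions.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def next_hop_by_distance(neighbors, sink):
    return min(neighbors, key=lambda n: dist(n["pos"], sink))

def next_hop_by_energy(neighbors, sink):
    return max(neighbors, key=lambda n: n["energy"])

def next_hop_combined(neighbors, sink, w_dist=0.5, w_energy=0.5):
    max_d = max(dist(n["pos"], sink) for n in neighbors) or 1.0
    max_e = max(n["energy"] for n in neighbors) or 1.0
    def score(n):                                   # higher is better
        return w_dist * (1 - dist(n["pos"], sink) / max_d) + w_energy * n["energy"] / max_e
    return max(neighbors, key=score)

sink = (100.0, 100.0)
neighbors = [{"id": 1, "pos": (60, 70), "energy": 0.9},
             {"id": 2, "pos": (80, 90), "energy": 0.2}]
print(next_hop_by_distance(neighbors, sink)["id"],   # closest to the sink
      next_hop_by_energy(neighbors, sink)["id"],     # most residual energy
      next_hop_combined(neighbors, sink)["id"])      # balanced choice
```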
In a wireless sensor network, all sensor nodes have an equal probability of failing, and therefore data delivery in sensor networks is inherently faulty and unpredictable. Most sensor network applications need reliable data delivery to the sink rather than point-to-point reliability, so it is vital to provide fault-tolerant techniques for distributed sensor network applications. This paper presents a robust mechanism for recovering from node failures in a certain region of the network during data delivery; it dynamically finds new nodes to route data from the source nodes to the sink. The proposed algorithm integrates easily into data delivery mechanisms [11] that do not consider area failure in a geographical region. The recovery mechanism is focused on multiple-sink partitioned networks. It is found that it quickly selects an alternative node from its 1-hop neighbour list when no forwarding nodes are available and establishes a route from source to sink. Simulations are carried out in the Matlab environment.
In the era of Internet technologies, social networking websites have witnessed thriving popularity, and computer-mediated communication has changed the rules of social interaction and communication. Most social networking sites, such as Orkut, Facebook, Google+ and Twitter, provide users with features such as online interaction, information sharing and the development of new relationships. Online interaction and the sharing of personal information on social networking sites have raised new privacy concerns, which calls for an exploratory insight into users' behavioural intention to share information. This research develops a research model in which security and privacy concerns are conceptualized as antecedents of trust in the social networking site and moderators of information sharing, with the aim of understanding the impact of security, trust and privacy concerns on the willingness to share information on social networking sites. Using an online questionnaire, empirical data were collected from 250 Facebook users of different age groups over a period of four months. Reliability analysis, confirmatory factor analysis and structural equation modelling were used to validate the proposed research framework. This empirical study, based on an established theoretical foundation, will help the research community gain a deeper understanding of the impact of privacy concerns in the context of Facebook. Practical implications: the paper improves the understanding of users' willingness to reveal information on a social networking site given their levels of privacy, security and trust, and the proposed ideas and discussion offer social networking site operators useful strategies for enhancing user acceptance. Findings: the privacy concerns of the respondents were found to be statistically significant, suggesting that privacy-related factors such as security and trust have a positive effect on information sharing.
In multiple-input multiple-output orthogonal frequency division multiplexing systems, a group of low-complexity subspace-based time-domain channel estimation methods is studied. These methods are based on a parametric channel model, in which the channel response is considered a collection of sparse propagation paths. Using the channel correlation matrix, the estimation of the channel parameters is translated into an unconstrained minimization problem. To solve this problem, a subspace-tracking-based Kalman filter method is proposed, which employs the constant subspace to construct the state and measurement equations. The Least Mean Square and Recursive Least Square algorithms are also applied and evaluated. These methods represent a group of low-complexity subspace schemes, and the approach can be extended to multi-carrier code-division multiple-access systems. The simulation results show that the Kalman filter method for time-domain channel estimation can track faster fading channels and is more accurate at low complexity.
This paper focuses on solving the problems of classification and clustering in social networks by using rough sets. When the data set contains missing or uncertain data, rough set theory has proved to be an efficient tool. To solve a problem in the domain of social networks, the problem must satisfy the fundamental property of rough sets, i.e. the attributes of the problem must induce an equivalence relation. Hence, before applying rough sets to a specific social network problem, the problem must be redefined so that the transitive, symmetric and reflexive properties hold. In this paper, we study the concept of Fiksel's societal network and use it to redefine the social network problem in terms of equivalence relationships. Further, we define the social network in terms of graph theory and mathematical relations, and then define Fiksel's societal network and the social network with respect to rough sets; Fiksel defined the societal network in terms of structural equivalence. We discuss the limitations of rough sets and observe that covering-based rough sets, an extension of Pawlak's rough sets, appear to be a better alternative. There are six types of covering-based rough sets; for continuity, we describe covering-based rough sets, which extend the partitioning used in rough sets to coverings of the universe and are flexible compared with the rigid equivalence relation.
In this paper, the design of a microstrip patch antenna with tunable polarization is proposed. The proposed antenna has a simple structure: a slot is created at an angle of 45 degrees at the centre of the patch to achieve circular polarization, and an RF-MEMS switch is connected across the width of this slot. The polarization can then be tuned to linear or circular by switching the RF-MEMS switch to the ON or OFF state, respectively. Therefore, the proposed patch antenna offers two different polarizations.
Wireless sensor network (WSN) MAC protocols have been an active research area in recent years because of the application-specific nature of these networks. This paper studies the popular contention-based SMAC protocol in a multi-hop scenario and analyzes its suitability for mission-critical WSN applications. Along with residual energy, throughput and packet delivery ratio are considered important parameters for mission-critical applications. Improvements to the SMAC protocol are suggested, supported by simulation results in NS-2.
Beamforming is used in sensor networks to enhance their communication range. A dual beam can be formed from two sets of sensor arrays, with each set forming a beam pattern in a different direction; when the same data need to be transmitted in two different directions, a single set of sensor arrays can be used to form the dual beam. The random distribution of sensors causes beam pattern deviation, and this random distribution has been modeled using uniform and Gaussian variables. A compensation technique to reduce the errors is proposed in this paper; it is modeled as a joint optimization problem and solved by the LS estimation method. The technique has been applied to various geometries such as the Uniform Linear Array (ULA), Uniform Circular Array (UCA), Uniform Elliptical Array (UEA) and Uniform Planar Rectangular Array (UPRA) with random perturbation, and it is observed that dual beamforming from a single set of arrays gives efficient distribution of energy as well as an enhanced data transmission rate.
One of the most widely used wireless communication standards is WLAN (IEEE 802.11 b/g). However, WLAN has a serious power consumption issue on mobile devices. This paper proposes an energy-saving approach based on clustering. A cluster is a Bluetooth Personal Area Network (PAN) consisting of a cluster head and several regular nodes; the cluster head acts as a gateway between the PAN and the WLAN. The proposed approach presents a Cooperative Clustering Protocol (CCP) which dynamically re-forms clusters according to the nodes' distance and energy use. As clustering is performed independently of WLAN access points, CCP does not require modifications to the existing wireless infrastructure.
The transmission of real-time multimedia services in wireless ad hoc networks requires an optimal multicast routing protocol that satisfies quality of service guarantees. However, a multicast routing protocol in wireless ad hoc networks must also be energy-aware, since the nodes are energy-constrained due to limited battery life. This gives rise to the need for an efficient multicast routing protocol that can determine multicast routes satisfying the quality of service guarantees while conserving energy. The design of such a protocol can be formulated as a Multiobjective Multicast Routing Problem (MMRP) that attempts to optimize the objectives simultaneously. The paper proposes a novel multiobjective algorithm based on Ant Colony Optimization (ACO) for the MMRP. Our protocol attempts to optimize end-to-end delay and total transmitted power simultaneously to obtain the Pareto-optimal solutions. The simulation results are very promising and show that our algorithm is able to find near-optimal solutions efficiently.
Orthogonal Frequency Division Multiplexing (OFDM) is a proven technology in modern wireless communication because of its high data rate and greater immunity to delay spread. In this paper, we propose a probabilistic-threshold Selective Mapping technique with low PAPR. The simulation results show that the modified technique has better PAPR reduction performance.
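A minimal Selective Mapping sketch is shown below, in which several random phase sequences are applied to the same OFDM block and the candidate with the lowest PAPR is kept; the early-stopping threshold stands in for the paper's probabilistic threshold rule, and its value is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
N, U, papr_target = 64, 8, 6.0        # subcarriers, candidates, threshold in dB

def papr_db(x):
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)   # QPSK block
best, best_papr = None, np.inf
for _ in range(U):
    phases = rng.choice([1, -1, 1j, -1j], size=N)      # random phase sequence
    candidate = np.fft.ifft(symbols * phases)
    p = papr_db(candidate)
    if p < best_papr:
        best, best_papr = candidate, p
    if p <= papr_target:                               # early stop at threshold
        break
print(round(best_papr, 2))
```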
As time passes, the world is becoming more virtual and online than physical. This is the time to re-establish businesses so that they reach more and more people throughout the world; the idea is that a business should reach the maximum number of people around the world, i.e. it needs to be global and online. The aim of this research paper was to identify the various ways and resources by which money can be saved in a traditional business while converting it into an e-business, in other words, the areas where money can be saved in a traditional business. The paper is based on personal observations, the experience of others and the authors' own reflections. Various areas were found where a good amount of money can be saved and the business can also perform better. In this paper, the possibility of reducing business costs through e-business is explored.
In wireless communication, broadcasting is one of the most suitable solutions for information dissemination, and it is very attractive for resource-limited handheld mobile devices in asymmetric communications. Access time and tuning time are the two criteria used to evaluate the performance of an air indexing technique. Indexing can reduce the tuning time of mobile devices by switching the mobile device into doze mode while waiting for the desired data to arrive and into active mode when the desired data are present. Some air indexing techniques can save limited battery power while incurring only a limited overhead on access time. In this paper, we discuss and analyze energy-efficient air indexing techniques for single- and multi-level wireless channels. The basic indexing techniques, i.e. the B+ tree, hashing and signature schemes for data broadcasting, are compared between single- and multi-level channels.
Heterogeneous wireless technologies are deployed to support user requirements, but they adversely affect society by increasing the energy consumption of the network and the electromagnetic radiation in the environment. Mobiles are connected to their nearest BS, where the nearest BS is determined by the coverage range. Under this setting, we analyze the performance of the wireless system in a heterogeneous environment in terms of outage probability and energy consumption as the coverage range is varied.
Like any other wireless communication setting, mobile ad hoc networks inherit potentially dangerous vulnerabilities in network security. Proactive security defenses such as intrusion detection systems have become a recent research topic in this area. Many intrusion detection models have been proposed by researchers, and most of them are promising. However, the problem in a mobile ad hoc environment is that communication and power resources are very limited, so any additional features implemented in this environment must be as efficient as possible. This paper presents a comparative study between the cooperative detection model and the aggregative detection model to evaluate the efficiency of resource usage. We use a case study of a disaster recovery operations system to obtain a realistic scenario of mobile ad hoc practice, and an experiment is conducted using that scenario under different treatments. The contribution of our work is to suggest an intrusion detection model that is efficient but still reliable to use.
Underwater wireless communication has been an emerging field in the last few years, with many striking results published to improve communication methods. Given the constraints and evolving problems in the field, underwater wireless communication has been discussed with great importance at conferences around the world, although the published outputs have not yet provided a convincing technique to follow as a guideline for achieving better communication. In this paper, existing networking and routing protocols are compared to gain insight into the aspects that have delivered results. TCP and Improved UDP are also discussed with a view to new ideas in this emerging field.
This paper presents a novel numerical and hardware simulation of the proposed Π-shaped microstrip antenna. The proposed antenna covers the mobile Worldwide Interoperability for Microwave Access (WiMAX) bands as well as the Wireless Fidelity (Wi-Fi) bands, and it also operates in the Industrial, Scientific and Medical (ISM) band at 2.45 GHz for biomedical applications. The proposed Π-shaped microstrip antenna covers the full bandwidth of 2.3 GHz mobile WiMAX (IEEE 802.16e-2005, 2.3-2.4 GHz) and 2.4 GHz Wi-Fi (2.4-2.5 GHz) operation with sufficient SWR and gain; it provides an excellent SWR of 1.045 and a total gain of 15.92 dBi at 2.45 GHz. The overall dimensions of the proposed antenna are 51 × 35 × 35 mm. The simulation results are obtained with an antenna simulator (4nec2X) and verified in the antenna laboratory.
Vehicular Ad hoc Networks (VANETs) have emerged as a subset of the Mobile Ad hoc Network (MANET) paradigm and are considered a substantial approach to the Intelligent Transportation System (ITS). VANETs were introduced to support drivers and improve safety and driving comfort, as a step towards constructing a safer, cleaner and more intelligent environment. At present, vehicles are exposed to many security threats. One of them is User Datagram Protocol (UDP)-based flooding, a common form of Denial of Service (DoS) attack, in which a malicious node forges a large number of fake identities, i.e. spoofed Internet Protocol (IP) addresses, in order to disrupt the proper functioning of fair data transfer between two fast-moving vehicles. Incorporating IP spoofing in DoS attacks makes them even more difficult to defend against. In this paper, an efficient method is proposed to detect and defend against UDP flooding attacks under different IP spoofing types. The method makes use of a storage-efficient data structure and a Bloom-filter-based IPCHOCKREFERENCE detection method. This lightweight approach is relatively easy to deploy, as its resource requirements are reasonably low. Simulation results consistently show that the method is both efficient and effective in defending against UDP flooding attacks under different IP spoofing types; specifically, it outperforms other methods by achieving a higher detection rate with lower storage and computational costs.
This paper presents an enhanced version of the RC6 block cipher algorithm (RC6e - RC6 enhanced version), a symmetric encryption algorithm [1] designed for 256-bit plaintext blocks. RC6 uses four w-bit registers for storing the plaintext and for data-dependent rotations [2, 3], but this enhanced version (RC6e) uses eight w-bit registers, which helps to increase performance as well as improve security. Its salient features include a two-variable algebraic expression modulo 2^w and two box-type operations, Box-Type I and Box-Type II, each of which operates on two w-bit registers. Box-Type I works much like the two-register (A & B or C & D) operation in RC6, while in Box-Type II the bitwise exclusive-or is swapped with the integer addition modulo 2^w used in Box-Type I and vice versa, which improves diffusion in each round. This enhanced version needs 2r+4 additive round keys and uses every round key twice for encrypting the file. It performs better than RC5 [4, 5] and RC6 [2, 3] when the file size is large.
In this paper a novel approach is presented to increase bandwidth by up to 122.5% with a return loss of less than -10 dB in the frequency range of 4.5 GHz to 18.7 GHz. The designed antenna is a stacked configuration which can also work in the frequency bands 19.413 GHz to 19.937 GHz, 2.99 GHz to 3.92 GHz and 461 MHz to 859 MHz. It provides a promising gain of up to 7.8 dBi and an antenna efficiency of up to 98%, so it can serve various wireless standards and microwave bands. Parameters such as return loss, gain, radiation pattern and bandwidth have been studied and plotted for the designed antenna.
Multi-server performance models are used in the modeling of advanced computing systems, communication nodes and networks, and manufacturing systems. In [15], [5], a fast method known as Spectral Expansion was developed; it was extended to finite-capacity multi-server systems in [16] and applied to networks in [19]. An extension to heterogeneous systems with breakdowns and repairs was presented in [18], in which only a few repair strategies were considered. Principles for extending the range of repair strategies were outlined in [20], but those models were neither evaluated numerically nor compared. The purpose of the present paper is to show how heterogeneous multi-server systems with FCFS, LCFS (PR) and LCFS (NPR) repair strategies can be evaluated numerically using the Spectral Expansion algorithm.
The Cluster Based Secure Routing Protocol (CBSRP) is a MANET routing protocol that ensures secure key management and communication between mobile nodes, using digital signatures and one-way hashing for secure communication. CBSRP forms a set of small clusters consisting of 4-5 nodes, after which communication takes place between the mobile nodes. Inside a cluster there is always a cluster head, but this role is not permanent: other nodes wait in a queue, and based on priority a new cluster head is elected from the remaining nodes. Inside a cluster, mobile nodes are authenticated using one-way hashing, and digital signatures are not necessary for intra-cluster communication; for cluster-to-cluster authentication we propose the use of digital signatures. CBSRP ensures secure communication that is also energy-efficient, since the whole network is segmented into a small set of clusters.
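The one-way-hashing idea for intra-cluster authentication can be illustrated with a hash-chain toy example: a node pre-publishes the chain anchor, and each later message releases the next preimage, which neighbours verify with a single hash. This shows the general mechanism only, not CBSRP's exact construction.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def make_chain(seed: bytes, n: int):
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain                      # chain[-1] is the published anchor

# Node side: build the chain and announce the anchor to the cluster.
chain = make_chain(b"node-7-secret", n=5)
anchor = chain[-1]

# Verifier side: each released value must hash to the previously known one.
known = anchor
for released in reversed(chain[:-1]):
    assert h(released) == known       # authenticates the sender's next message
    known = released
print("all chain values verified")
```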
Localization in sensor networks can be defined as the identification of a sensor node's position, and for any wireless sensor network a highly accurate localization approach is desired. Here we explore the possibility of using free-space optical (also called optical wireless) communications to solve the 3-D localization problem in ad hoc networking environments. There are two classes of methods for solving the localization problem, range-based and range-free; range-based methods require a higher node density or costly devices such as sonar. In our proposed approach, we use direction-related information provided by an optical wireless physical layer, requiring only a very low node density (2-connectedness) and no ranging technique. We analyze the localization accuracy with respect to varying node designs (e.g., an increased number of transceivers giving better direction-related information), the density of GPS-enabled and ordinary nodes, and the messaging overhead per re-localization. The method still works well in sparse networks, with little message overhead and with as few as two anchor nodes.
This paper uses location traces (from GPS, mobile signals, etc.) of past vehicle trips to develop an algorithm for predicting the end-to-end route of a vehicle; the focus is on overall route prediction rather than short-term prediction of road segments. Previous research on route prediction uses raw location traces decomposed into trips for such predictions. This paper introduces an additional step that converts trips composed of location trace points into trips of road network edges, which requires the algorithm to make use of road networks. We show that efficiency in storage and time complexity can be achieved this way without sacrificing accuracy. Moreover, it is well known that location trace data have inherent inaccuracies due to the hardware limitations of devices, which most studies do not handle; this paper presents the results of route prediction algorithms under such inaccuracies in the data.
Mobile ad hoc networks (MANETs) are susceptible to having their effective operation compromised by a variety of security attacks because of features such as the unreliability of wireless links between nodes, constantly changing topology, restricted battery power and the lack of centralized control. Nodes may misbehave either because they are malicious and deliberately wish to disrupt the network, or because they are selfish and wish to conserve their own limited resources, such as power. In this paper, we present a mechanism that enables the detection of nodes that exhibit packet forwarding misbehaviour. The approach is based on two techniques used in parallel, such that the results generated by one are further processed by the other to finally generate the list of misbehaving nodes. The first part detects misbehaving links using the 2ACK technique, and this information is fed into the second part, which uses the principle of conservation of flow (PFC) to detect the misbehaving node. The limitation of the 2ACK algorithm is that it can detect a misbehaving link but cannot decide which of the nodes associated with that link is misbehaving; hence we use the principle of conservation of flow in the second part to identify the misbehaving node associated with the misbehaving link.
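A flow-conservation check of the kind used in the second stage can be sketched as below; the way the counters are collected and the 10% drop tolerance are assumptions made for the example.

```python
DROP_TOLERANCE = 0.10

def pfc_misbehaving(counters, suspicious_link):
    """counters: {node_id: {"received_for_fwd": int, "forwarded": int}}"""
    flagged = []
    for node in suspicious_link:              # the two endpoints of the link
        c = counters[node]
        if c["received_for_fwd"] == 0:
            continue
        drop_ratio = 1 - c["forwarded"] / c["received_for_fwd"]
        if drop_ratio > DROP_TOLERANCE:       # flow not conserved at this node
            flagged.append(node)
    return flagged

counters = {"A": {"received_for_fwd": 500, "forwarded": 490},
            "B": {"received_for_fwd": 480, "forwarded": 120}}
print(pfc_misbehaving(counters, suspicious_link=("A", "B")))   # ['B']
```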
One of the most classical applications of artificial neural networks is the character recognition system, which forms the basis of many different types of applications in various fields, several of which we use in our daily lives. This paper attempts to recognize characters using a back-propagation algorithm and studies how the error percentage varies with the number of hidden layers in a neural network. First, the aim is to recognize the twenty-six characters, i.e. A to Z and a to z, and then to create a network that can also recognize numerals correctly.
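A compact back-propagation sketch (not the paper's exact network) is given below: a single hidden layer trained on one-hot character targets, with layer sizes, learning rate and epoch count chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 35, 20, 26          # e.g. 5x7 glyphs, 26 classes
W1 = rng.standard_normal((n_in, n_hidden)) * 0.1
W2 = rng.standard_normal((n_hidden, n_out)) * 0.1
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(X, Y):
    global W1, W2
    H = sigmoid(X @ W1)                     # forward pass
    O = sigmoid(H @ W2)
    dO = (O - Y) * O * (1 - O)              # output-layer delta (MSE loss)
    dH = (dO @ W2.T) * H * (1 - H)          # back-propagated hidden delta
    W2 -= lr * H.T @ dO / len(X)
    W1 -= lr * X.T @ dH / len(X)
    return np.mean((O - Y) ** 2)

X = rng.integers(0, 2, size=(26, n_in)).astype(float)   # toy glyph patterns
Y = np.eye(26)                                           # one-hot targets
for epoch in range(2000):
    err = train_step(X, Y)
print("final training MSE:", round(err, 4))
```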
An off-line signature verification system (OSVS) with a novel feature extraction procedure is described. A fusion of concentric-square geometric features with zone-based slope and slope angle features is used as the input pattern, and the strong feature set thus obtained makes the OSVS accurate. Verification was performed using the Support Vector Machine (SVM) technique with different kernels. Empirically, the Radial Basis Function (RBF) based SVM model exhibited the best results compared with those based on linear and polynomial kernels, attaining a False Acceptance Rate of 1.25% and a False Rejection Rate of 1.66%.
Web services support interoperability for collecting, storing, manipulating and retrieving data from heterogeneous environments. Wireless sensor networks consist of resource-constrained devices that are low-cost, low-power and small in size, and they are used in applications such as industrial control and monitoring, environmental sensing and health care. The main intent is to design a middleware that hides the complexity of accessing the sensor network environment and supports developing applications for sensor web enablement. This is important because integrating wireless sensor networks into IP-based systems is still a challenging issue. It is also very important to collect a patient's details during an emergency; to this end, a web service is created to manage a patient's personal data with the help of Radio Frequency Identification (RFID) tags, dedicated to collecting, storing, manipulating and making available clinical information. Context-aware services are needed to search information more accurately and to produce highly accurate output.
This paper describes a new heuristic algorithm, the ZVRS master-slave parallel task allocation algorithm using RR scheduling, for allocating n tasks to p processors on a master-slave system. Task allocation to slave processors using FCFS scheduling has already been presented, and an improved master-slave parallel task allocation scheme has also been presented in which task groups are arranged in descending order of their cost, queued using FCFS scheduling and then assigned to the slave processors. This paper presents the ZVRS master-slave parallel task allocation algorithm using RR scheduling: task groups are first arranged in descending order of their cost, then arranged in a queue using RR scheduling, and finally the master processor assigns the task groups to the slave processors. The new algorithm has the advantage that the system consumes less time and achieves better processor utilization than the previous algorithms, and it also improves efficiency.
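The allocation order described above can be sketched as follows, with assumed inputs: task groups are sorted by descending cost and then dealt out to the slave processors in round-robin fashion by the master.

```python
from collections import defaultdict

def allocate_round_robin(task_groups, n_slaves):
    """task_groups: list of (group_id, cost). Returns {slave: [group_id, ...]}."""
    ordered = sorted(task_groups, key=lambda g: g[1], reverse=True)
    schedule = defaultdict(list)
    for i, (gid, _cost) in enumerate(ordered):
        schedule[i % n_slaves].append(gid)        # RR over the slave processors
    return dict(schedule)

groups = [("G1", 12), ("G2", 30), ("G3", 7), ("G4", 21), ("G5", 16)]
print(allocate_round_robin(groups, n_slaves=2))
# {0: ['G2', 'G5', 'G3'], 1: ['G4', 'G1']}
```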
In this paper we propose a new approach for task allocation in a massively parallel system using finite automata. On the basis of a task-flow model based on finite automata, we find the turnaround time for a parallel system by treating the finite automaton as a directed acyclic graph. The second section of the paper discusses finite automata and directed acyclic graphs, after which the finite automaton is converted into a DAG for the massively parallel system. All the simulations are performed with the Intel C++ parallel compiler, and a comparison of the results with several well-known scheduling algorithms shows a better turnaround time.
This paper deals with the implementation of neural-network-controlled distributed active filters. Distributed active filtering introduces the concept of installing multiple active filters at different locations. A shunt active filter consists of a three-phase VSI and an energy source connected on the DC side. Synchronous Reference Frame theory is employed for the enhancement of power quality. A comparison and analysis of the harmonic content of the source current is carried out using a PI controller and neural network control. The proposed distributed active filter concept is simulated using the MATLAB power system toolbox.
In this paper, a parallel genetic-algorithm-based association rule mining method is proposed to discover interesting rules from a large biological database. Apriori algorithms and their variants for association rule mining rely on two user-specified threshold parameters, minimum support and minimum confidence, which is an issue to be resolved. In addition, issues such as a large search space and local optimality have led many researchers to use heuristic mechanisms. For large biological databases, and with the aim of circumventing these problems, a genetic algorithm is a suitable tool, but its computational cost is the main bottleneck; therefore, we choose parallel genetic algorithms to relieve the computational burden. The experimental results are promising and encourage further research, especially in the domain of biological science.
In mobile grid systems, automatic service deployment initially requires node discovery. Most existing security mechanisms for grid systems rarely consider the mobility of nodes, which may affect the applied security mechanisms and lead to insufficient and inaccurate security. In order to overcome these issues, in this paper we propose ant-based resource discovery and mobility-aware trust management for mobile grid systems. Initially, the super-grid nodes are selected in the network using ant colony optimization based on parameters such as distance, CPU speed, available bandwidth and residual battery power; these selected nodes are then used in the resource discovery mechanism. In order to maintain strong security together with mobility management, a proficient trust reputation collection method has been adopted. Simulation results show that the proposed approach is efficient and offers greater security.
Driver fatigue plays a major role in a large number of accidents. In this paper, a real-time machine-vision-based system is proposed for the detection of driver fatigue; it can detect driver fatigue and issue a warning early enough to avoid an accident. First, the face is located by a machine-vision-based object detection algorithm, then the eyes and eyebrows are detected and their count (four or fewer) is computed. By comparing the calculated number of dark regions with a predefined value of four (two eyes and two eyebrows) over a particular time interval, driver fatigue can be detected and a timely warning issued whenever symptoms of fatigue appear. The main advantages of this system are its fast processing time and very simple equipment: it runs at about 15 frames per second at a resolution of 320x240 pixels. The algorithm is implemented on the MATLAB platform together with a camera and is well suited to real-world driving conditions, since it is non-intrusive, using a video camera to detect changes.
In recent years, intelligent soft computing techniques such as fuzzy inference systems (FIS), artificial neural networks (ANN) and adaptive neuro-fuzzy inference systems (ANFIS) have proven to be efficient and suitable when applied to a variety of systems. In this paper we formulate an ANFIS-based sensor navigation technique for a mobile robot. The ANFIS controller uses sensor-based information such as the front obstacle distance (FOD), right obstacle distance (ROD), left obstacle distance (LOD) and heading angle (HA) to choose the optimal direction while moving towards the target. Real-time experiments were carried out under different environmental scenarios to collect the data set for modeling in the ANFIS toolbox; the mean squared error (MSE) obtained for the training data set in the current paper is 0.031. We also present simulation experiments using MATLAB, showing that ANFIS consistently performs better in navigating the mobile robot safely through terrain populated by stationary obstacles.
The constrained shortest path problem (CSPP) is a well-known NP-complete problem in which the task is to determine the shortest path that satisfies certain constraints, such as delay and cost. Beyond its straightforward application in networking, the problem also finds applications in fields such as multimedia and crew scheduling. In these kinds of applications it is important to take into account the uncertainty of the network environment in order to provide Quality of Service (QoS) assurance, which can be modeled by using fuzzy numbers to represent the parameters involved. In this paper, we extend the algorithm of Sahni et al. to deal with the fuzzy constrained shortest path problem (FCSPP) by fuzzifying one of the constraints, namely cost, and representing it as a trapezoidal fuzzy number. Since fuzzy numbers cannot be ordered like real numbers, the circumcenter-of-centroids method is used to rank the trapezoidal fuzzy numbers and determine the fuzzy constrained shortest path, also taking the delay into account.
Recently, received signal strength (RSS)-based distance estimation techniques have been proposed as low-complexity, low-cost solutions for mobile communication nodes with minimal RSSI error. After investigating existing localization algorithms, it is observed that the distribution of RSSI values at each sample point fluctuates, even at the same position, due to shadow fading. Therefore, we present a novel method for RSSI error reduction in distance estimation that adds a recursive least squares (RLS) algorithm to the existing deterministic algorithms. The proposed method collects RSSI values from the mobile communication node to build a probability model. Once the probability models are estimated for different standard deviations related to the path-loss exponent using adaptive filtering in real time, the distance between the mobile communication node and the fixed communication node can be determined accurately. Simulation results show that the accuracy of the RSSI value for the mobile communication node in real-time distance estimation is improved in changing environments.
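The general idea can be sketched as below, assuming a scalar RLS filter smooths the fluctuating RSSI samples and the smoothed value is mapped to distance through the standard log-distance path-loss model RSSI(d) = RSSI(d0) - 10*n*log10(d/d0); the path-loss exponent, reference power and forgetting factor are illustrative values, not the paper's.

```python
import numpy as np

RSSI_D0, N_PL, D0 = -40.0, 2.7, 1.0       # reference power (dBm), exponent, d0 (m)

def rssi_to_distance(rssi):
    return D0 * 10 ** ((RSSI_D0 - rssi) / (10 * N_PL))

class ScalarRLS:
    """RLS estimate of a slowly varying signal level (regressor = 1)."""
    def __init__(self, lam=0.95, delta=100.0):
        self.lam, self.P, self.w = lam, delta, 0.0
    def update(self, measurement):
        k = self.P / (self.lam + self.P)              # gain
        self.w += k * (measurement - self.w)
        self.P = (self.P - k * self.P) / self.lam
        return self.w

rng = np.random.default_rng(0)
true_rssi = RSSI_D0 - 10 * N_PL * np.log10(10.0)      # node at ~10 m
samples = true_rssi + rng.normal(0, 4.0, size=50)     # shadow-fading noise
rls = ScalarRLS()
for s in samples:
    smoothed = rls.update(s)
print(round(rssi_to_distance(smoothed), 2), "m (estimated)")
```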
In this paper we compare two intelligent route generation systems and their performance in graded networks, using the Artificial Bee Colony (ABC) algorithm and the Genetic Algorithm (GA). Both ABC and GA are important optimization techniques for determining optimal paths during routing operations in a network. The paper shows how the ABC approach is used to determine the optimal path based on the bandwidth availability of the links and yields better-quality paths than GA. The nodes participating in routing are evaluated for their QoS metric, and only nodes that satisfy a minimum threshold value of the metric are enabled to participate in routing. A quadrant is synthesized with the source as the centre and, depending on which quadrant the destination node belongs to, a search for the optimal path is performed. The simulation results show that ABC speeds up local minimum search convergence by around 60% compared to GA with respect to traffic intensity, and opens the possibility of cognitive routing in future intelligent networks.
The self-organizing map (SOM) has been extensively applied to data clustering, image analysis, dimension reduction, and so forth. The conventional SOM does not track the winning frequency of each neuron. In this study, we propose a modified SOM that calculates the winning frequency of each neuron and investigate its behaviour in detail. The learning performance is evaluated using three measurements. We apply the modified SOM to various input data sets and confirm that it obtains a more effective map reflecting the distribution of the input data.
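As a rough illustration of a SOM that tracks winning frequencies, here is a minimal Python sketch; the way the counter is folded into winner selection (a frequency-sensitive penalty on the distance) is an assumption, since the abstract does not give the exact rule.

    import numpy as np

    def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=2.0, freq_penalty=0.2):
        rng = np.random.default_rng(0)
        h, w = grid
        weights = rng.random((h * w, data.shape[1]))
        wins = np.zeros(h * w)                         # winning frequency of each neuron
        coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
        for t in range(epochs):
            lr = lr0 * (1 - t / epochs)
            sigma = sigma0 * (1 - t / epochs) + 0.5
            for x in data:
                dist = np.linalg.norm(weights - x, axis=1)
                # assumed frequency-sensitive rule: penalize neurons that win often
                bmu = np.argmin(dist * (1.0 + freq_penalty * wins / (1 + wins.sum())))
                wins[bmu] += 1
                influence = np.exp(-np.sum((coords - coords[bmu]) ** 2, axis=1) / (2 * sigma ** 2))
                weights += lr * influence[:, None] * (x - weights)
        return weights, wins

    data = np.random.default_rng(1).random((500, 3))
    w, f = train_som(data)
    print(f.astype(int).reshape(8, 8))   # winning frequency per neuron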
This paper analyses the particular characteristics of the TBM working environment and the advantages of virtual instruments for condition monitoring, and builds a virtual instrument-based TBM condition monitoring system. A method for collecting characteristic feature vectors based on the wavelet packet transform is studied and its applicability verified. A combined diagnostic method using the wavelet packet transform and a BP neural network is proposed for fault diagnosis; in applying this method, the weights of the neural network are adjusted through a second learning pass using an influence-coefficient weighting method. A TBM condition monitoring and fault diagnosis system is built using LabVIEW and MATLAB software, and the system is shared online using the web publishing tool. The technical feasibility is validated by the results of operating the system.
The threat from spammers, attackers and criminal enterprises has grown with the expansion of the Internet; thus, intrusion detection systems (IDS) have become a core component of computer networks due to the prevalence of such threats. In this paper, we present a layered framework integrated with a neural network to build an effective intrusion detection system. The system is evaluated on the Knowledge Discovery and Data Mining (KDD) 1999 dataset and compared with existing intrusion detection approaches that use either a neural network or a layered framework alone. The results show that the proposed system has high attack detection accuracy and a low false alarm rate.
This paper introduces a state-of-the-art compressor for DNA sequences that makes use of a replacement method. The replacement method introduces words, and a word-based compression scheme is used for encoding; the encoder assigns codes to words according to their frequency distribution. The designed statistical compression algorithm is efficient and effective for DNA sequence compression. Experiments show that our algorithm outperforms existing compressors on typical DNA sequence datasets.
With the advancement of time, technology is booming. In the present era, data, whether accessed over the internet or used by any application, needs to be searched effectively and then presented to the end user. Searching plays a vital role in fetching data, and search by specialization and generalization is in wide practice. The links and associations between entities are often elusive: searching over entities may yield many relationships between them, so it is important to record all the important and meaningful relations. A relationship between two entities may pass through several intermediate entities, so to distill the essential paths the user may specify one or more intermediate entities. Backward search, bi-directional search and bi-directional breadth-first search are existing approaches in which a relevant path is extracted between source and destination entities. In this paper, we propose a qualified bi-directional BFS algorithm to discover the relevant path between two entities that passes through an intermediate node specified by the user. Unlike typical searching methodologies, where all possible paths between the two entities are discovered and the paths relevant to the user are later filtered out and ranked according to the user's requirements, the qualified bi-directional BFS algorithm reduces the time needed to find the relevant path because it considers only those paths that contain the user-specified intermediate node. Once the system is developed and empirically evaluated, we expect the proposed algorithm to improve searching and to be time efficient.
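As a simplified sketch of path finding through a required intermediate node, the following Python code approximates the qualified bi-directional BFS with two plain BFS passes (source to intermediate, then intermediate to target); the actual algorithm in the paper searches from both ends simultaneously.

    from collections import deque

    def bfs_path(graph, start, goal):
        """Plain BFS returning one shortest path between start and goal, or None."""
        parent = {start: None}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            if node == goal:
                path = []
                while node is not None:
                    path.append(node)
                    node = parent[node]
                return path[::-1]
            for nxt in graph.get(node, ()):
                if nxt not in parent:
                    parent[nxt] = node
                    queue.append(nxt)
        return None

    def path_via(graph, source, intermediate, target):
        """Relevant path source -> intermediate -> target."""
        first = bfs_path(graph, source, intermediate)
        second = bfs_path(graph, intermediate, target)
        if first is None or second is None:
            return None
        return first + second[1:]

    graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
    print(path_via(graph, "A", "C", "E"))   # ['A', 'C', 'D', 'E']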
One of the challenges in the detection of data theft is the difficulty of distinguishing a copy operation from other types of access operations. Existing work in this area focuses on a stochastic model of filesystem behaviour to identify emergent patterns in MAC timestamps unique to copying. Such an approach produces many false positives because the patterns that emerge from copying are similar to those of other access operations such as searching a file in a folder, compressing a folder, or an antivirus scan of a folder. This paper proposes a technique to distinguish a copy operation from other types of operations so that a forensic analyst can concentrate on the more relevant artefacts. The paper describes a fuzzy inference system-based technique that assigns a confidence value to each cluster generated by the stochastic forensic approach. Experimental results show that the false positives generated by the stochastic forensic approach can be filtered using the cluster confidence of our technique.
In this article, mathematical expressions for the Bit Error Rate (BER) and the Inter-Carrier Interference (ICI) power of a Fractional Fourier Transform (FRFT)-based OFDM system are derived. These expressions are derived in the presence of a normalized Carrier Frequency Offset (CFO, ε). The BER performance is evaluated for BPSK and QPSK modulation schemes in an AWGN channel at different values of the FRFT angle parameter α. It is found that the ICI power of the FRFT-OFDM system improves over the DFT-OFDM system by 13.82 dB for ε = 0.1 at α = 3.14 and by 2.1 dB for ε = 0.18 at α = 6.28. For BPSK, the SNR improvement is 0.53 dB for BER 1.132 × 10^-3 at ε = 0.15, α = 9.5 and 0.76 dB for BER 2.924 × 10^-3 at ε = 0.1, α = 9.5. For QPSK, there is an SNR improvement of 6 dB for ε = 0.15 at α = 6.4.
Soft sensors play an important role in predicting the values of unmeasured process variables from knowledge of easily measured process variables. Online estimation of particle size is vital for efficient control of a grinding circuit. Owing to the high energy consumption of cement grinding processes and the unavailability of reliable hardware sensors for continuous monitoring, soft sensors have tremendous scope of application in cement mills. Modern cement plants increasingly use vertical roller mills for clinker grinding. While some work on modelling ball mills has been reported in the literature, very little research is available on vertical roller mill modelling. In the present work, a PCA-based neural network model of a cement mill is developed from actual plant data for estimation of cement fineness. Real-time data for all process variables relevant to the cement grinding process were collected from a cement plant with a clinker grinding capacity of 235 TPH. The collected raw industrial data were preprocessed for outlier removal and missing value imputation. Principal component analysis of the input data was performed to transform the original variables into a smaller number of uncorrelated principal components. The selected principal component scores were divided into a training set and a validation set using the Kennard-Stone subset selection algorithm. The training set was used to develop a back-propagation neural network model, which was subsequently tested with the validation set. Simulation results show that the developed model has better prediction capabilities than linear regression and principal component regression models.
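The PCA-plus-neural-network soft sensor pipeline can be sketched in a few lines of Python; the data here are synthetic stand-ins for the plant variables, and the Kennard-Stone subset selection is replaced by a simple random split for brevity.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for the plant data: 20 correlated process variables, one fineness target.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20))
    y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=1000)

    # Reduce the correlated inputs to a few principal components, then fit a neural network.
    pca = PCA(n_components=5)
    scores = pca.fit_transform(X)
    X_tr, X_te, y_tr, y_te = train_test_split(scores, y, test_size=0.3, random_state=0)

    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X_tr, y_tr)
    print("validation R^2:", model.score(X_te, y_te))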
Seasonality is a distinctive characteristic which is often observed in many practical time series. Artificial Neural Networks (ANNs) are a class of promising models for efficiently recognizing and forecasting seasonal patterns. In this paper, the Particle Swarm Optimization (PSO) approach is used to enhance the forecasting strengths of feedforward ANN (FANN) as well as Elman ANN (EANN) models for seasonal data. Three widely popular versions of the basic PSO algorithm, viz. Trelea-I, Trelea-II and Clerc-Type1 are considered here. The empirical analysis is conducted on three real-world seasonal time series. Results clearly show that each version of the PSO algorithm achieves notably better forecasting accuracies than the standard Backpropagation (BP) training method for both FANN and EANN models. The neural network forecasting results are also compared with those from the three traditional statistical models, viz. Seasonal Autoregressive Integrated Moving Average (SARIMA), Holt-Winters (HW) and Support Vector Machine (SVM). The comparison demonstrates that both PSO and BP based neural networks outperform SARIMA, HW and SVM models for all three time series datasets. The forecasting performances of ANNs are further improved through combining the outputs from the three PSO based models.
Clustering analysis is a widely used technique in many emerging applications. Assessment of clustering tendency is generally done by the Visual Assessment of Tendency (VAT) algorithm, which detects clustering tendency by reordering the indices of objects in the dissimilarity matrix according to the logic of Prim's algorithm; VAT therefore demands a high computational cost for large datasets. The contribution of the proposed work is a sampling technique for obtaining a good representative of the entire dataset in the form of a sub-dissimilarity matrix for VAT, which allows the clustering tendency to be assessed visually by detecting the number of square-shaped dark blocks along the diagonal of the sample-based VAT image. The proposed approach gives the same clustering tendency results as plain VAT while requiring less processing time, since it uses only the sampled dissimilarity matrix. This sample-based VAT (PSVAT) uses a set of distinguished features for random selection of progressive sample representatives. Finally, the obtained clustering tendency is used in a graph-based clustering technique (minimum spanning tree-based clustering) to achieve efficient clustering results. Comparative runtime values of PSVAT and VAT on several datasets are presented to show that PSVAT outperforms VAT in runtime, and clustering validity is also tested on the sampled data using Dunn's index.
Biometric authentication systems for verifying the identity of a person are becoming highly popular. In today's world, where online communication and transactions are a widespread reality, verification of a user's identity has become all the more challenging, and biometric authentication provides a secure and robust system for this purpose. This paper proposes a cancelable biometric approach called BioHashing. The method uses ECG features and a tokenized random number to generate inner products; products above a previously defined threshold are coded as 1 and the rest as 0, thus generating the BioHash code.
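The BioHashing step itself is compact enough to sketch: inner products between the feature vector and token-derived random directions, followed by thresholding. The ECG feature vector, threshold value and code length below are illustrative assumptions, not the paper's settings.

    import numpy as np

    def biohash(features, token_seed, n_bits=32, threshold=0.0):
        """Generate a BioHash code: inner products with token-derived random vectors, then binarize."""
        rng = np.random.default_rng(token_seed)        # tokenized random number seeds the projection
        R = rng.normal(size=(len(features), n_bits))
        Q, _ = np.linalg.qr(R)                         # orthonormal random basis
        products = features @ Q                        # inner products
        return (products > threshold).astype(int)

    ecg_features = np.random.default_rng(7).normal(size=64)   # stand-in for extracted ECG features
    code = biohash(ecg_features, token_seed=123456)
    print(code)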
The proliferation of the World Wide Web and the immense growth in Internet users and services requiring high bandwidth have increased user response times substantially; users often experience long latency while retrieving web objects. The popularity of web objects and web sites shows considerable spatial locality, which makes it possible to predict future accesses from previously accessed ones. This has motivated researchers to devise new web prefetching techniques to reduce user-perceived latency. Most research works are based on the standard Prediction by Partial Match model and its derivatives, such as the Longest Repeating Sequence and the popularity-based model, which are built into Markov predictor trees using common surfing patterns; these models require a lot of memory. Hence, in this paper, memory-efficient Prediction by Partial Match models based on the Markov model are proposed to minimize memory usage compared to the standard prediction models and their derivatives.
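A bare-bones next-page predictor in the spirit of Prediction by Partial Match can be written as an order-k Markov model over page sequences; the sketch below is a generic illustration, not the memory-efficient structure proposed in the paper.

    from collections import defaultdict, Counter

    class MarkovPredictor:
        """Order-k Markov next-page predictor (PPM-style, with fallback to shorter contexts)."""
        def __init__(self, order=2):
            self.order = order
            self.model = defaultdict(Counter)   # context tuple -> next-page counts

        def train(self, session):
            for i in range(len(session) - 1):
                for k in range(1, self.order + 1):
                    if i - k + 1 >= 0:
                        context = tuple(session[i - k + 1:i + 1])
                        self.model[context][session[i + 1]] += 1

        def predict(self, recent_pages):
            # fall back from the longest matching context to shorter ones
            for k in range(self.order, 0, -1):
                context = tuple(recent_pages[-k:])
                if context in self.model:
                    return self.model[context].most_common(1)[0][0]
            return None

    p = MarkovPredictor(order=2)
    p.train(["home", "news", "sports", "scores", "news", "sports", "scores"])
    print(p.predict(["news", "sports"]))   # "scores"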
Face localization is the first step in a face processing system. A lot of algorithmic work on face processing has already been reported, most of it implemented on computers. For better speed, the algorithms need to be implemented on embedded platforms, which requires computationally inexpensive algorithms. This paper introduces computationally light face localization and ear and neck separation algorithms for implementation on the embedded BeagleBoard-xM platform. Experimental results for the proposed algorithms are presented in the results section along with benchmarking against other contemporary algorithms.
With the help of evolutionary concepts and the behaviour of biotic components of nature, many optimization algorithms have been developed. Optimization techniques such as Particle Swarm Optimization and the Firefly Algorithm are among the latest research topics, and several advancements have been made in these algorithms. This paper presents a detailed comparative analysis of Particle Swarm Optimization (PSO) and the Firefly Algorithm (FFA), along with simulation results on some standard benchmark functions.
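For orientation, here is a minimal global-best PSO sketch in Python on the sphere benchmark function; the parameter values are common textbook choices, not necessarily those used in the comparison above.

    import numpy as np

    def sphere(x):
        return np.sum(x ** 2, axis=-1)          # standard benchmark: global minimum 0 at the origin

    def pso(f, dim=10, swarm=30, iters=200, w=0.7, c1=1.5, c2=1.5, bound=5.0):
        rng = np.random.default_rng(0)
        pos = rng.uniform(-bound, bound, (swarm, dim))
        vel = np.zeros((swarm, dim))
        pbest, pbest_val = pos.copy(), f(pos)
        gbest = pbest[np.argmin(pbest_val)].copy()
        for _ in range(iters):
            r1, r2 = rng.random((swarm, dim)), rng.random((swarm, dim))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, -bound, bound)
            val = f(pos)
            better = val < pbest_val
            pbest[better], pbest_val[better] = pos[better], val[better]
            gbest = pbest[np.argmin(pbest_val)].copy()
        return gbest, f(gbest)

    best, best_val = pso(sphere)
    print("best value found:", best_val)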
In this paper I propose that, in the future, every corporate office and shopping mall will have its own money transaction machine for its customers and employees, and will provide a unique identity card with a money transaction facility to its regular customers and employees. This Easy Money Transaction (EMT) machine is like an Automated Teller Machine (ATM), but it has additional features such as calling the organization's security in the event of a critical problem. The architecture is designed purely to make the user's life easier. The proposed architecture is realized using an HDL language.
Huge amounts of information are available in distributed databases that can be exploited for constructive use. A query posed over a distributed database may be processed against disparate data sources distributed over a network, each of which may contain data relevant to the query. The aim of a distributed database system is to provide an efficient query processing strategy for a given query. In a distributed database scenario, multiple copies of the same data may reside at different sources; as a result, there can be multiple query strategies for a given query, and finding an optimal query processing strategy is a combinatorial optimization problem. In this paper, an approach is presented that generates optimal query processing plans for a given user query. The approach uses iterative improvement and simulated annealing algorithms to determine optimal query plans, based on the cost heuristic defined in [1].
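The simulated annealing part of such an approach might look like the following Python sketch; the cost function here is a placeholder toy (per-relation site costs) standing in for the cost heuristic of [1], which is not reproduced.

    import math
    import random

    def anneal(initial_plan, cost, neighbour, t0=100.0, cooling=0.95, steps_per_t=50, t_min=1e-3):
        """Generic simulated annealing over query plans."""
        current, best = initial_plan, initial_plan
        t = t0
        while t > t_min:
            for _ in range(steps_per_t):
                candidate = neighbour(current)
                delta = cost(candidate) - cost(current)
                if delta < 0 or random.random() < math.exp(-delta / t):
                    current = candidate
                    if cost(current) < cost(best):
                        best = current
            t *= cooling
        return best

    # Toy example: choose, for each of 5 relations, one of 3 sites holding a copy.
    random.seed(0)
    comm_cost = [[random.randint(1, 9) for _ in range(3)] for _ in range(5)]
    cost = lambda plan: sum(comm_cost[r][s] for r, s in enumerate(plan))   # placeholder cost
    def neighbour(plan):
        p = list(plan)
        p[random.randrange(len(p))] = random.randrange(3)
        return tuple(p)

    print(anneal(tuple(random.randrange(3) for _ in range(5)), cost, neighbour))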
In this paper, a new authentication system using the finger knuckle surface is examined. It introduces a personal authentication system that can simultaneously extract and exploit geometrical features of the finger-back knuckle surface. Unlike existing work on hand and finger geometry, which mainly concentrates on feature extraction and recognition, this method experiments with subsets of the extracted features to achieve better performance while using fewer features. This is achieved by determining hybrid convex curves from the finger-back knuckle surface. From the identified feature curves, subsets of features such as knuckle edge points and knuckle tip points are identified, and from these contours geometrical structures such as tangents and secants are constructed to obtain feature information in terms of angles. This method reduces the critical problems that arise from extracting a large number of features and also reduces the computational complexity of the feature extraction and recognition process.
The computational and storage limitations of silicon computers have propelled computer scientists to search for new dimensions in computer science, and DNA computing has emerged as a very promising field. The use of DNA strands would enable us to do in seconds complex calculations that would otherwise need years, and the volumes of data that can be stored have reached a new limit: researchers at Harvard have recently been able to cram 700 terabytes of data into a single gram of DNA. While developing any system, a number of design issues have to be taken into consideration, and the same applies to DNA computers. This paper makes an effort to deal with the consistency issue of DNA computers. A proof of inconsistency is provided, and the ground rules of a few DNA-based cryptosystems that take advantage of this inconsistency for data security are outlined.
The aim of the study described herein was to develop and verify an efficient neural network-based method for extracting aircraft stability and control derivatives from real flight data using feed-forward neural networks. The proposed method (the Modified Delta method) draws its inspiration from the feed-forward neural network-based Delta method for estimating stability and control derivatives. The neural network is trained using differential variations of the aircraft motion/control variables and coefficients as the network inputs and outputs, respectively. For parameter estimation, the trained neural network is presented with a suitably modified input file, and the corresponding predicted output file of aerodynamic coefficients is obtained; an appropriate interpretation and manipulation of such input-output files yields the parameter estimates. The method is validated first on simulated flight data and then on real flight data obtained by digitizing analogue data from published reports. A new technique is also proposed for validating the estimated parameters using feed-forward neural networks.
The Internet is used for the exchange of information, and consequently people upload and update web pages and information on a constant basis. Due to rapid changes in the content of web pages, it has become necessary to develop a system that can detect recurrent changes within minimal browsing time. This paper devises an algorithm for detecting structural changes, defines a text-code formula to detect content changes, and presents an architecture for a web page change detection system. It also analyses and compares various web page change detection algorithms on several parameters to identify their strengths and weaknesses.
Security is an important concern for today's generation, and keystroke dynamics has emerged as a milestone. In this paper, a comparative approach is presented for user authentication using keystroke dynamics. We show the effect of dimensionality reduction techniques on performance; the misclassification rate lies between 9.17% and 9.53%, and reducing the dimensionality of the input data helps improve the performance of the system. We use three dimensionality reduction techniques: Principal Component Analysis (PCA), Multidimensional Scaling (MDS), and probabilistic PCA. PCA provides a 9.17% misclassification rate with the best performance for keystroke samples of 10 users, each with 400 samples of the same password.
The web is the largest collection of information, and plenty of pages or documents are added and deleted frequently due to its dynamic nature. The information present on the web is in great demand: the world is full of questions, and the web serves as the major source for answering a specific query made by the user. For a given query, a search engine retrieves a number of pages, and the quality of the retrieved pages is open to question. The search engine therefore applies ranking algorithms to the retrieved pages so that the most relevant documents are displayed at the top of the list. In this paper, a new page ranking algorithm known as RatioRank is discussed, in which inlink and outlink weights are used together with the number of visit counts, and it is compared with some existing algorithms using certain parameters.
There is a large amount of information available on the web that is hidden from users, because it cannot be accessed or indexed by traditional search engines. These search engines can only crawl information by following hypertext links, and forms that require login or any authorization process are ignored by them. The hidden web refers to the deepest part of the web that is not available to traditional web crawlers, and obtaining content from it is a challenging task. Today many web sites contain pages that are dynamic in nature, and this dynamic nature creates a problem for traditional web crawlers in retrieving information. The efforts made to solve this problem are discussed in brief, followed by a comparative study of earlier architectures considering various parameters. Based on this analysis, a framework is proposed that uses intelligent agent technology for accessing the hidden web.
In the present paper, attention has been given to the study of the fuzzy linear fractional programming problem (FLFPP) using the sign distance ranking method, where all the parameters and variables are characterized by triangular fuzzy numbers. A computational procedure has been presented to obtain an optimal solution by applying the simplex method, and a numerical example is given to demonstrate the algorithm for solving this FLFPP.
This paper presents a fast Multi-objective Hyper-heuristic Genetic Algorithm (MHypGA) for the solution of the Multi-objective Software Module Clustering Problem, an important and challenging problem in software engineering whose main goal is to obtain a good modular structure of the software system. Software engineers place great emphasis on a good modular structure, as such software systems are easier to comprehend, develop and maintain. In recent times the problem has been cast as a search-based software engineering problem with multiple objectives; it is NP-hard, being an instance of graph partitioning, and hence cannot be solved using traditional optimization techniques. MHypGA is a fast and effective metaheuristic search technique for suggesting software module clusters in a software system while maximizing cohesion and minimizing coupling of the software modules. It incorporates twelve low-level heuristics based on different selection, crossover and mutation operations of genetic algorithms, and the mechanism for selecting a low-level heuristic is based on reinforcement learning with adaptive weights. The efficacy of the algorithm has been studied on six real-world module clustering problems reported in the literature, and comparison of the results demonstrates the superiority of MHypGA in terms of solution quality and computational time.
In healthcare applications, there is tremendous growth in the use of computer assistance for effective and fast diagnosis. Various modalities such as magnetic resonance imaging (MRI), computed tomography (CT), digital mammography and others provide a noninvasive insight into the subject's body in order to help diagnostic stakeholders make decisions. As an important step in diagnostic imaging systems, MRI has been an active area for researchers in computational intelligence and image processing. Segmentation is one of the most important problems in image processing and analysis, and the same holds for biomedical imaging; its main objective is to separate the pixels associated with different types of tissue, such as white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF). In this paper, we attempt to optimize the feature set constructed from more than three different types of features. It is well known that a long feature vector representation can boost performance; however, irrelevant elements in a long feature vector can hinder the convergence of the classifier. The optimization of the feature vector is accomplished using a genetic algorithm (GA) with an objective function that maximizes the sum of precision and recall. In addition to eliminating feature elements, some elements are weighted to reduce their effect on the feature matching score. This overall process can also be considered a “fusion of features” for MRI segmentation.
Detecting useful patterns in given data by applying a clustering algorithm has many practical applications. In order to perform clustering, identifying a set of good exemplars is a challenging job, and the success of clustering greatly depends on the initial set of exemplars chosen as representatives. This paper proposes the use of Manhattan distance for identifying high-quality exemplars that can act as the initial set, followed by iteratively refining them on the basis of the resemblance between the different data points. The proposed algorithm has been efficiently implemented for identifying the important cities that are easily accessible from the other cities belonging to the same cluster.
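One way to read this idea in code is the following Python sketch: initial exemplars are the points with the smallest total Manhattan distance to all others, and the refinement is a simple k-medoids-style update. Both choices are assumptions made for illustration; the paper's exact refinement rule may differ.

    import numpy as np

    def exemplar_clustering(X, k=3, iters=10):
        """Pick initial exemplars by total Manhattan distance, then refine them k-medoids style."""
        D = np.abs(X[:, None, :] - X[None, :, :]).sum(axis=2)        # pairwise Manhattan distances
        exemplars = list(np.argsort(D.sum(axis=1))[:k])              # most "central" points overall
        for _ in range(iters):
            labels = np.argmin(D[:, exemplars], axis=1)
            new_exemplars = []
            for c in range(k):
                members = np.where(labels == c)[0]
                # new exemplar: the member minimizing summed distance to its own cluster
                new_exemplars.append(members[np.argmin(D[np.ix_(members, members)].sum(axis=1))])
            if new_exemplars == exemplars:
                break
            exemplars = new_exemplars
        return exemplars, labels

    X = np.vstack([np.random.default_rng(i).normal(i * 10, 1, (30, 2)) for i in range(3)])
    ex, lab = exemplar_clustering(X)
    print("exemplar indices:", ex)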
Log files are the primary source of information for identifying system threats and problems that occur at any point in time. These threats and problems can be identified by analysing the log file and finding patterns of possible suspicious behaviour. The concerned administrator can then be provided with appropriate alerts or warnings regarding these security threats and problems, generated after the log files are analysed, and based on these alerts the administrator can take appropriate actions. Many tools and approaches are available for this purpose, some proprietary and some open source. This paper presents a new approach that uses a MapReduce algorithm for log file analysis, providing appropriate security alerts or warnings. The results of this system can then be compared with the available tools.
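A MapReduce-style log analysis can be sketched with plain Python map and reduce phases; the log format, the failed-login pattern and the alert threshold below are hypothetical examples, not the paper's rules.

    from collections import defaultdict
    import re

    FAILED_LOGIN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")  # hypothetical pattern

    def map_phase(line):
        """Map: emit (ip, 1) for every failed-login line."""
        m = FAILED_LOGIN.search(line)
        return [(m.group(1), 1)] if m else []

    def reduce_phase(pairs):
        """Reduce: sum counts per key."""
        counts = defaultdict(int)
        for key, value in pairs:
            counts[key] += value
        return counts

    log_lines = [
        "Jan 12 10:01:01 sshd: Failed password for root from 10.0.0.5",
        "Jan 12 10:01:03 sshd: Failed password for admin from 10.0.0.5",
        "Jan 12 10:02:10 sshd: Accepted password for bob from 10.0.0.9",
    ]
    counts = reduce_phase(pair for line in log_lines for pair in map_phase(line))
    alerts = [ip for ip, n in counts.items() if n >= 2]       # threshold-based alert
    print("suspicious IPs:", alerts)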
In this paper, a novel, simple and efficient airport runway detection algorithm is proposed. The proposed algorithm consists of two stages. The first stage coarsely segments the runway using anisotropic diffusion and the Frangi filter: anisotropic diffusion provides noise immunity, and the Frangi filter detects all possible runway candidates. To avoid the false candidates generated in this first stage, the candidate pixels are passed through a second stage using shape- and chroma-based features. Qualitative and quantitative analysis performed on various images shows that the proposed method can detect runways effectively in both simple and complex scenarios. The proposed algorithm has a significant role in both defence and commercial applications.
Recognition of emotions from speech is one of the most important subdomains in the field of affective computing. Six basic emotional states are considered for the classification of emotions from speech in this work. Features are extracted from the audio characteristics of emotional speech using the Mel-frequency Cepstral Coefficient (MFCC) and Subband-based Cepstral Parameter (SBC) methods, and these features are then classified using a Gaussian Mixture Model (GMM). The SAVEE audio database is used for testing. In the experimental results, the SBC method outperforms MFCC, with 70% recognition compared to 51%.
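The MFCC-plus-GMM half of this pipeline can be sketched with librosa and scikit-learn as below; the file layout of the SAVEE corpus is only hinted at, and the SBC front end is not implemented here.

    import numpy as np
    import librosa
    from sklearn.mixture import GaussianMixture

    def mfcc_features(path, n_mfcc=13):
        """Frame-level MFCC features for one utterance."""
        y, sr = librosa.load(path, sr=None)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T     # (frames, n_mfcc)

    def train_emotion_models(files_by_emotion, n_components=8):
        """Fit one GMM per emotion on the pooled MFCC frames of its training files."""
        models = {}
        for emotion, paths in files_by_emotion.items():
            frames = np.vstack([mfcc_features(p) for p in paths])
            models[emotion] = GaussianMixture(n_components=n_components,
                                              covariance_type="diag").fit(frames)
        return models

    def classify(path, models):
        frames = mfcc_features(path)
        # pick the emotion whose GMM gives the highest average log-likelihood
        return max(models, key=lambda e: models[e].score(frames))

    # Hypothetical layout of the SAVEE files:
    # models = train_emotion_models({"angry": ["a01.wav"], "happy": ["h01.wav"]})
    # print(classify("test.wav", models))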
A number of techniques are available for generating low-pass filters [1] that help to remove additive noise from transmitted signals. Frequency transformation techniques [2,3,4] can subsequently be used to convert these filters into filters with different specifications; however, they will always result in high-pass, band-pass or band-reject filter types. The need may arise for designing a nonstandard filter of arbitrary specifications, including cases where signals in multiple bands need to be transmitted or rejected at different attenuation levels. In this paper, we discuss a method for designing such a filter with arbitrary frequency response characteristics. The design technique is based on an adaptation of the random search method [5]. The only inputs required for this algorithm are the desired frequency response and resolution. The algorithm creates a frequency-domain transfer function with the desired characteristics (both magnitude and phase); a uniform random number generator is required for its operation.
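To give a feel for random-search filter design, the Python sketch below perturbs FIR taps at random and keeps only improvements against an arbitrary desired magnitude response; it is a simplification (magnitude only, Gaussian rather than uniform perturbations) of the adapted random search described above.

    import numpy as np

    def design_by_random_search(desired_mag, n_taps=41, iters=20000, step=0.05, seed=0):
        """Random search over FIR taps to approximate an arbitrary magnitude response."""
        rng = np.random.default_rng(seed)
        n_points = len(desired_mag)
        taps = rng.normal(0, 0.1, n_taps)

        def error(h):
            mag = np.abs(np.fft.rfft(h, 2 * (n_points - 1)))
            return np.mean((mag - desired_mag) ** 2)

        best_err = error(taps)
        for _ in range(iters):
            candidate = taps + step * rng.normal(size=n_taps)   # random perturbation
            err = error(candidate)
            if err < best_err:                                  # keep only improvements
                taps, best_err = candidate, err
        return taps, best_err

    # Desired response on 128 frequency points: pass two separate bands at different levels.
    desired = np.zeros(128)
    desired[10:30] = 1.0
    desired[70:90] = 0.5
    taps, err = design_by_random_search(desired)
    print("final mean squared error:", err)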
Human-machine interaction and gesture recognition form a very challenging field, in which scientists and researchers are trying to make communication with machines as easy, smooth and reliable as possible. Using learning techniques, we present a vision-based simulation of a CHALK, BLACKBOARD and DUSTER game in which a robot plays with a human, identifying the moves the human makes in the form of gestures and also learning new gestures. With the presented approach we are also able to perform forearm deletion. We have verified our work using the Webots simulation platform.
Nowadays, text documents are growing rapidly over the internet, e-mail and web pages, and they are stored in electronic database formats, which makes them difficult to arrange and browse. To overcome this problem, document preprocessing, term selection, attribute reduction and maintaining the relationships between important terms using background knowledge (WordNet) become important steps in data mining. In this paper, the work is organized in stages. First, document preprocessing is performed: stop words are removed, stemming is performed using the Porter stemmer algorithm, the WordNet thesaurus is applied to maintain relationships between important terms, and globally unique words and frequent word sets are generated. Second, the data matrix is formed. Third, terms are extracted from the documents using the term selection approaches tf-idf, tf-df and tf2, based on their minimum threshold values. Each document is then preprocessed, and the frequency of each term within the document is counted for its representation. The purpose of this approach is to reduce the attributes and find the most effective term selection method using WordNet for better clustering accuracy. Experiments are evaluated on the Reuters transcription subsets wheat, trade, money, grain and ship.
Achieving security in WSNs (Wireless Sensor Networks) is still largely untapped and is one of the most crucial tasks in research and development. WSNs consist of resource-constrained sensor nodes that communicate among themselves through wireless links and have limited computational ability, memory storage and physical capabilities. The exponential growth of intrusion and eavesdropping has made secure communication a challenging task, so distributing keys among sensor nodes before establishing connections is the key issue. This paper proposes a new, robust key pre-distribution scheme using random prime numbers and functions that resolves this threat without breaching security. The proposed mechanism establishes pairwise keys between two sensor nodes using algebraic, exponential, logarithmic and discontinuous functions, with random prime number generation playing a key role, which prevents eavesdroppers from performing security hacks and makes spoofing very difficult.
With the enormous amount of information present on the web, it is very important to identify whether search engines satisfy the requirements of users through their search results, so it is necessary to evaluate search engines from the user's point of view. Essentially, the evaluation of search engines is the process of determining how well they meet the information needs of users. In this paper we present an approach to search engine evaluation based on page-level keywords, i.e. the keywords found in the individual pages of a website; page-level keywords are an important factor in measuring the relevancy of search engine results. The result sets retrieved by search engines contain a large number of useless web pages, and users may have to sift through them to find the useful ones or rethink their queries, so our work can be a basis for providing more relevant search results to users. Three search engines, Google, Yahoo and Bing, are evaluated on educational queries with respect to page-level keywords. We verify the results with precision measurements using 40 educational queries at a cut-off of 10.
The Internet has become an indispensable part of today's life, and the World Wide Web (WWW) is the largest shared information source. Finding relevant information on the WWW is challenging: in responding to a user query, it is difficult to search through the large number of documents returned by today's search engines. There is a need to organize large sets of documents into categories through clustering, where the documents may come from a user query or simply be a collection. Document clustering is the task of grouping a set of documents into clusters so that documents within a cluster are more similar to each other than to documents in other clusters. Partitioning and hierarchical algorithms are commonly used for document clustering. Existing partitioning algorithms have the limitation that the number of clusters has to be given as input, and the clustering result depends on this input; if the number of clusters is not known, the results are not acceptable. In this paper, we develop a novel algorithm that automatically determines the number of clusters for any unknown text dataset and clusters the documents appropriately based on the cosine similarity between them. We also identify the zero-clustering issue in partitioning algorithms and solve it using our algorithm.
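One simple way such an algorithm can grow the number of clusters automatically is an incremental, threshold-based assignment on TF-IDF vectors, sketched below in Python; the similarity threshold and centroid update rule are illustrative assumptions, not the paper's method.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def cluster_by_cosine(docs, threshold=0.3):
        """Incremental clustering: join the closest existing cluster centroid, else start a new cluster."""
        X = TfidfVectorizer(stop_words="english").fit_transform(docs).toarray()
        centroids, labels = [], []
        for vec in X:
            if centroids:
                sims = cosine_similarity([vec], centroids)[0]
                best = int(np.argmax(sims))
                if sims[best] >= threshold:
                    labels.append(best)
                    # update the centroid as the running mean of its members
                    members = [v for v, l in zip(X[:len(labels)], labels) if l == best]
                    centroids[best] = np.mean(members, axis=0)
                    continue
            labels.append(len(centroids))
            centroids.append(vec)
        return labels

    docs = [
        "the stock market fell sharply today",
        "the stock market dropped today",
        "the football team won the championship game",
        "a great game for the home team",
    ]
    print(cluster_by_cosine(docs))   # e.g. [0, 0, 1, 1]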
Sign language is the most natural and expressive means of communication for the hearing impaired. This paper presents a methodology that recognizes Indian Sign Language (ISL) and translates it into normal text. The methodology consists of three stages, namely a training phase, a testing phase and a recognition phase. Combined parameters of Hu invariant moments and structural shape descriptors are used to form a new feature vector for recognizing signs. A multi-class Support Vector Machine (MSVM) is used for training and recognizing signs of ISL. The effectiveness of the proposed method is validated on a dataset of 720 images. Experimental results demonstrate that the proposed system can successfully recognize hand gestures with a 96% recognition rate.
Watermarking techniques embed the watermark within the host in different ways, using different strengths (scaling factors) of the watermark when embedding it. In this paper, we optimize a single scaling factor of the watermark using Particle Swarm Optimization (PSO) to yield a watermarking scheme with the best possible robustness (highest normalised cross-correlation) while keeping the scheme as imperceptible as possible.
This paper presents our experimental work on performance evaluation of the SentiWordNet approach for document-level sentiment classification of Movie reviews and Blog posts. We have implemented SentiWordNet approach with different variations of linguistic features, scoring schemes and aggregation thresholds. We used two pre-existing large datasets of Movie Reviews and two Blog post datasets on revolutionary changes in Libya and Tunisia. We have computed sentiment polarity and also its strength for both movie reviews and blog posts. The paper also presents an evaluative account of performance of the SentiWordNet approach with two popular machine learning approaches: Naïve Bayes and SVM for sentiment classification. The comparative performance of the approaches for both movie reviews and blog posts is illustrated through standard performance evaluation metrics of Accuracy, F-measure and Entropy.
Gathering user requirements is one of the most critical tasks in almost every project development. The complete requirements of the user cannot be perceived at a given point in time, because they evolve with time and are mostly observed after system deployment. This evolutionary nature of user requirements poses difficulties in almost every phase of the software development process. This paper gives an approach to overcome this problem: software intelligent agents are proposed for a hospital environment, and the algorithm of each agent is proposed and implemented. These agents automate the gathering of user requirements and automatically evolve over time after deployment of the software; this evolving nature helps agent-based systems to enhance their capabilities automatically according to the user's ever-changing behaviour. The intelligent agents are applied in a hospital management system (HMS). Four agents and their algorithms have been proposed and implemented to gather user requirements related to the HMS after its deployment: the Patient Agent, Doctor Agent, Nurse Agent and Environment Agent.
The World Wide Web continues to grow at an exponential rate, so fetching information about a specific topic is gaining importance, which poses exceptional scaling challenges for general-purpose crawlers and search engines. This paper describes a web crawling approach based on best-first search. The goal of a focused crawler is to selectively seek out pages that are relevant to given keywords: rather than collecting and indexing all available web documents to be able to answer all possible queries, a focused crawler analyses its crawl boundary to find the links that are likely to be most relevant for the crawl and avoids irrelevant links. This leads to significant savings in hardware as well as network resources and also helps keep the crawl more up to date. To accomplish such goal-directed crawling, we select the top k most relevant documents for a given query and then expand the most promising link chosen according to its link score, in order to circumvent irrelevant regions of the web.
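The best-first frontier at the heart of such a focused crawler can be sketched with a priority queue; the link-scoring function below (keyword hits in the URL) and the in-memory "web" are toy stand-ins for the paper's richer relevance score and real HTTP fetching.

    import heapq

    def link_score(url, keywords):
        """Toy relevance score: keyword hits in the URL string."""
        return sum(url.count(k) for k in keywords)

    def focused_crawl(seeds, keywords, fetch_links, max_pages=50):
        """Best-first crawl: always expand the highest-scoring link on the frontier."""
        frontier = [(-link_score(u, keywords), u) for u in seeds]
        heapq.heapify(frontier)
        visited, order = set(), []
        while frontier and len(order) < max_pages:
            _, url = heapq.heappop(frontier)
            if url in visited:
                continue
            visited.add(url)
            order.append(url)
            for link in fetch_links(url):            # fetch_links would download and parse the page
                if link not in visited:
                    heapq.heappush(frontier, (-link_score(link, keywords), link))
        return order

    # Tiny in-memory "web" standing in for real HTTP fetching.
    web = {
        "seed": ["site/sports", "site/python-tutorial"],
        "site/python-tutorial": ["site/python-classes", "site/ads"],
        "site/sports": ["site/scores"],
    }
    print(focused_crawl(["seed"], ["python"], lambda u: web.get(u, [])))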
We propose a new hierarchical collective learning (HCL) strategy for particle swarm optimization, which shows better performance than popular PSOs on standard benchmark functions. The algorithm has three components: (a) initialization, (b) creating exemplars, and (c) flying under the guidance of the exemplars created in the previous step; the HCL strategy is applied to the first two parts. In collective learning (CL), an exemplar is created by selecting, for the fittest member, the best combination of values for every dimension from the group members. In the hierarchical part, this process is repeated over several levels, where at every level the exemplars from the previous level act as the members for the next higher level. During flying, the initial population flies towards the final exemplar in proportion to its fitness. The processes of creating exemplars and flying are repeated until a stopping criterion is met. The flying process reduces the common PSO problems of oscillation and of "two steps forward, one step back".
The aim of this paper is to propose an algorithm for the particle filter that overcomes its problem of particle impoverishment. Our approach embeds the cuckoo search via Levy flight algorithm into the standard particle filter for non-linear, non-Gaussian state estimation; the use of cuckoo search via Levy flight optimization overcomes the particle impoverishment generated during resampling. To validate the efficacy of the proposed algorithm, its performance is compared with the particle filter and the PSO particle filter (PSO-PF). Simulation results for a generic one-dimensional problem and the two-dimensional classic bearings-only tracking problem show that our novel Cuckoo-PF outperforms the other algorithms when RMSE, robustness and sample impoverishment are considered as performance metrics.
The Extensible Markup Language (XML) has been acknowledged as the de facto standard for data representation and exchange over the web, but its main drawback is its large size: the amount of information that has to be stored, transmitted and queried is often larger than with other data formats. Several XML compression techniques have been introduced to deal with these problems. In this paper, we present an experimental study of available XML compression techniques and provide guidelines to help users make an effective decision when selecting the XML compression tool most suitable to their needs.
Flexible AC Transmission Systems (FACTS) devices have been used in power systems since the 1970s to improve dynamic performance. Among these devices, the Static Synchronous Compensator (STATCOM) is a shunt-connected FACTS device capable of providing reactive power compensation to the power system; it is a multiple-input, multiple-output system. In this paper, the CSC-based STATCOM is controlled by pole placement. The best constant values for the pole placement controller's parameters are normally obtained laboriously through trial and error, which is time consuming, so a genetic algorithm (GA) is employed to find the best values for these parameters in a very short time. The methods are tested in MATLAB, and the simulation results show an improvement in the input-output response of the CSC-STATCOM.
Clustering can be classified into two categories: in K-clustering, the number of clusters K is known, while in the other category K is unknown. In this paper we consider only the first category. Features within a data set can be broadly classified as continuous or categorical; here we consider data sets with continuous features only. Clustering can be done using all features or only relevant ones. Researchers have commonly used feature selection techniques to select relevant features for clustering and then performed clustering with some clustering algorithm. Here we use a Multi-Objective Genetic Algorithm (MOGA) for simultaneous feature selection and clustering, hybridizing K-means with the GA to combine the global search abilities of the GA with the local search abilities of K-means. Considering context sensitivity, we use special crossover operators called "pairwise crossover" and "substitution". Elimination of redundant and irrelevant features increases clustering performance, as reflected in MOGA Feature Selection (H, S) compared with MOGA (H, S). The main contribution of this paper is simultaneous dimensionality reduction and optimization of objectives using MOGA.
Traditional cryptography provides powerful mechanisms to achieve information security but suffers from key management problems. Biometrics has been an alternative for user authentication and identification based on the physiological and behavioural characteristics of persons, but it still suffers from biometric variations (due to wear and tear, accidental injuries, malfunctions and pathophysiological development), improper acquisition and inconsistent representation of the biometric signal. Biometric cryptosystems blend cryptography and biometrics to reflect the combined strength of the two fields while removing some of their common drawbacks: the key is dynamically generated from the biometric data instead of being stored somewhere, and is used for authentication or as input to cryptographic algorithms. In this paper, we study a previously proposed algorithm for biometric key generation from fingerprints, analyse and improve it, and propose a new distance-based key generation algorithm for the same purpose.
In this paper we propose a novel and faster system for dynamic hand gesture recognition using Intel's image processing library OpenCV. Many hand gesture recognition methods using visual analysis have been proposed, including syntactical analysis, neural networks and the hidden Markov model (HMM); in our research, an HMM is used for hand gesture recognition. The whole system is divided into three stages: detection and tracking, feature extraction, and training and recognition. The first stage uses a less conventional approach, applying the Lαβ colour space for hand detection, while feature extraction combines Hu invariant moments and hand orientation. For training, the Baum-Welch algorithm with a Left-Right Banded (LRB) topology is applied, and recognition is achieved by the Forward algorithm with an average recognition rate above 90% for isolated hand gestures. Because of the use of OpenCV's built-in functions, the system is easy to develop, its recognition is quite fast, and it can be used in practice for real-time applications.
Nowadays many applications generate streaming data, for example real-time surveillance, internet traffic, sensor data, health monitoring systems, communication networks and online transactions in financial markets. Data streams are temporally ordered, fast changing, massive and potentially infinite sequences of data, and mining them is a very challenging problem: their tremendous volume and very high speed make it impossible to store them or scan them multiple times, and concept evolution in streaming data further magnifies the challenge. Clustering is a data stream mining task that is very useful for gaining insight into the data and its characteristics; it is also used as a preprocessing step in the overall mining process, for example for outlier detection and for building classification models. In this paper we focus on the challenges and necessary features of data stream clustering techniques, review and compare the literature on data stream clustering by example and by variable, and describe some real-world applications of and tools for data stream clustering.
Cloud computing technologies offer major benefits to the IT industry in terms of elasticity, rapid provisioning, a pay-as-you-go model, reduced capital cost, access to unlimited resources and flexibility. Job scheduling is a combinatorial optimization problem in computer science in which the ideal jobs are assigned to the required resources at particular instants of time. In this paper we propose a hybrid algorithm that combines the advantages of ACO and cuckoo search. The makespan, or completion time, can be reduced with the hybrid algorithm, since the jobs are executed within the specified time interval by allocating the required resources. The obtained results show that the hybrid algorithm performs better than the ACO algorithm in terms of performance and makespan.
Cloud is an emerging technology in the world of information technology and is built on the key concept of virtualization. Virtualization separates hardware from software and has the benefits of server consolidation and live migration. Live migration is a useful tool for migrating OS instances across distant physical hosts of data centers and clusters; it facilitates load balancing, fault management, low-level system maintenance and reduction in energy consumption. In this paper, we survey the major issues of virtual machine live migration. We discuss how the key performance metrics, e.g. downtime, total migration time and transferred data, are affected when a live virtual machine is migrated over a WAN, under heavy workload, or when several VMs are migrated together, and we classify and compare the various techniques within each class.
This work suggests parallel algorithms for solving a sparse system of N linear equations in N unknowns by the Jacobi method on the Extended Fibonacci Cube EFC_1(n) [3], where n is the degree of EFC_1(n) and N is the number of processors of EFC_1(n). Two parallel versions of the algorithm are discussed. A single pass of the first algorithm involves 2(N - 1) data communications in N steps, whereas the second algorithm achieves the same number of data communications in N/2 + log N steps. Further, each pass of both algorithms has 3N/2 + 1 additions, N/2 - 1 subtractions, N - 1 multiplications and N divisions.
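For reference, the underlying Jacobi iteration (without the Extended Fibonacci Cube mapping, which is the paper's contribution) can be written sequentially as follows; the test system is an arbitrary diagonally dominant example chosen so that the iteration converges.

    import numpy as np

    def jacobi(A, b, iters=100, tol=1e-10):
        """Sequential Jacobi iteration: x_i^(k+1) = (b_i - sum_{j!=i} a_ij x_j^(k)) / a_ii."""
        n = len(b)
        x = np.zeros(n)
        D = np.diag(A)
        R = A - np.diagflat(D)          # off-diagonal part
        for _ in range(iters):
            x_new = (b - R @ x) / D
            if np.linalg.norm(x_new - x, ord=np.inf) < tol:
                return x_new
            x = x_new
        return x

    # Diagonally dominant test system (Jacobi converges for such systems).
    A = np.array([[10.0, -1.0, 2.0],
                  [-1.0, 11.0, -1.0],
                  [2.0, -1.0, 10.0]])
    b = np.array([6.0, 25.0, -11.0])
    print(jacobi(A, b))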
Nowadays there are many book purchasing websites that claim to recommend the best books to users according to their interests. Most of these recommendations are based on conventional content-based, context-based and collaborative recommendation algorithms, and each of these algorithms alone fails to give the best and most efficient recommendations to the user. There is therefore a need for a unified algorithm that combines the features of the conventional algorithms with new ones. This paper describes NOVA, a book recommendation engine based on a unique hybrid recommendation algorithm that satisfies users by providing the best and most efficient book recommendations. The paper also presents a comparative case study of conventional recommendation algorithms against NOVA's hybrid algorithm, based on evaluation criteria such as accuracy, precision, recall and F-measure. The results of this case study are presented in tables and graphs to clearly demonstrate the need for NOVA.
Today's modern society is very dependent on computer-based information systems. This dependence has led to a need for securing these systems, which in turn has created a need for knowing how secure they are. In this paper, a comparative performance analysis of information system security using crisp and fuzzy Analytic Hierarchy Process (AHP) methods is presented: the weight indices and security values are calculated under different security domains with crisp AHP, and all the results are then compared with those of the fuzzy AHP method. The criteria for information security assessment follow the international norm ISO/IEC 27001, which includes eleven security domains.
Automation of character recognition is an open research problem. Many researchers have worked on the recognition of printed characters and numerals, but very little work is available on handwritten character recognition. Our work is a step towards checking the applicability of genetic algorithms to the recognition of handwritten Kannada characters; unfortunately, the results obtained are very poor. We first use the Euclidean distance method as the fitness function for our genetic algorithm and then the Mahalanobis distance method, and compare the results of the two to check their effect on the recognition rate of the genetic algorithm.
Economic load dispatch (ELD) is the online process of allocating generation among the available generating units to fulfil the load demand in such a way that the total generation cost is minimized while the equality and inequality constraints are satisfied. Many papers in the literature have used particle swarm optimization, a population-based optimization technique inspired by the sociological behaviour of bird flocking, to solve the economic load dispatch problem with emission constraints; it can solve the ELD problem to a wide extent but lacks global search ability in the last stages of its iterations, so it is difficult to obtain the global optimal solution of the ELD problem using PSO. When generating large amounts of power, the fossil fuel burnt at power plants produces many toxic gases and pollutes the environment. The main objective of this paper is to minimize the total generation cost of the thermal units as well as the pollutant emission of toxic gases. In this paper, a novel PSO with a moderate random search strategy, called moderate-random particle swarm optimization (MRPSO), is used to solve the ELD problem with emission constraints. MRPSO enhances the ability of particles to explore the solution space more effectively and increases their convergence rate. The proposed MRPSO algorithm is validated through its application to a six-generator system with emission constraints for various load demands.
The strength of the Self-Organizing Map (SOM) learning algorithm depends entirely on the weight adjustments made in its network. Prior to these adjustments, an important step is to initialize the weight values: the choice of initial values for the weight vectors affects the performance of SOM training when applied to clustering. This paper proposes a different approach for initializing the SOM, which relies on the Frequency Sensitive Competitive Learning (FSCL) algorithm to pre-process the weights in order to improve the results obtained from the trained input patterns in terms of better neuron utilization and lower quantization and topographic error. Two datasets are used to analyse the performance of the SOM algorithm: evenly distributed 2D Gaussian data, and data taken from a well-reputed engineering educational organization. Applying existing weight initialization approaches, results on the first dataset show that decreasing the learning rate to a specific value gives better performance, but on the second dataset the results do not improve when the learning rate is decreased. With the new approach, the results show a significant improvement over the existing weight initialization approaches.
An important step in case-based reasoning systems is the adaptation phase. This paper presents a case-based reasoning system that automatically adapts past solutions to propose a solution for new problems. The proposed method for case adaptation is based on support vector regression: first, the case base is partitioned using the SOM technique, and then a support vector regression model is constructed for each cluster using local information. To solve a new problem, its local information is computed with respect to the most similar cluster, and the corresponding support vector regression model proposes a solution. Experiments show that this approach greatly improves the accuracy of a retrieve-only CBR system while minimizing each didactic model.
The reachability problem for a graph G = (V, E) asks whether there is a path between two given nodes. This problem plays a key role in areas such as bioinformatics, the Semantic Web, computer networks and social networks, which have very large graph-structured data, and it is also used considerably in graph management and graph algorithms. In this paper, we propose a novel labeling approach for large directed graphs, called GRU (Graph Reachability indexing using United intervals), which can answer reachability queries in constant time even for large graphs. The significant point of this approach is that all the reachability information is computed at indexing time; this computation is performed with only one DFS (post-order) traversal, and the labels are calculated precisely and stored in an efficient way. Analytical and experimental results reveal that our method is more effective than other interval labeling methods, and our approach shows improved query time in comparison with GRAIL, a scalable index for reachability queries.
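The basic post-order interval labeling that such indexes build on is easy to show for a rooted tree, as in the Python sketch below; handling general DAGs (the united intervals of GRU, or the multiple random intervals of GRAIL) requires more machinery and is not shown.

    def interval_labels(tree, root):
        """Post-order interval labeling of a rooted tree: node -> (low, post)."""
        labels, counter = {}, [0]
        def dfs(u):
            low = counter[0]                      # smallest post-order number in u's subtree
            for v in tree.get(u, ()):
                dfs(v)
            labels[u] = (low, counter[0])
            counter[0] += 1
        dfs(root)
        return labels

    def reaches(labels, u, v):
        """u reaches v iff v's post-order number falls inside u's interval."""
        low_u, post_u = labels[u]
        _, post_v = labels[v]
        return low_u <= post_v <= post_u

    tree = {"r": ["a", "b"], "a": ["c", "d"], "b": ["e"]}
    labels = interval_labels(tree, "r")
    print(reaches(labels, "a", "d"), reaches(labels, "a", "e"))   # True False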
Energy is a scarce resource in WANETs (Wireless Ad Hoc Networks), since the nodes are powered by non-renewable batteries. Traditional multicast routing protocols are not energy aware and thus do not take energy conservation into consideration. One approach to energy conservation is to send the messages of a communication session along routes that minimize the total sum of the transmitter powers. Ant Colony Optimization (ACO) is a swarm-intelligence-based method widely used for network routing. The main objective of this paper is to minimize the energy in multicast routing using the ant colony approach. The experimental results are very promising and show that the proposed algorithm is capable of finding minimum-energy routes for most of the problems considered, comparable to state-of-the-art algorithms.
Quantum-dot Cellular Automata (QCA) is one of the six emerging technologies that help overcome the limitations of CMOS technology. This paper discusses the design of a 4-bit ALU for AND, OR, XOR and ADD operations using QCA. The QCA design of the 4-bit ALU is simple in structure, with significantly fewer elements than the CMOS design, and it also gives better results in terms of speed, area and power. The QCADesigner tool is used to simulate the different components of the 4-bit ALU.
Nowadays, users rely on the web for information, but currently available search engines often return a long list of results, much of which is not relevant to the user's requirement. Web logs are important information repositories which record user activities on the search results. Mining these logs can improve the performance of search engines, since a user has a specific goal when searching for information, and optimized search can provide results that accurately satisfy that goal. In this paper, we propose a web recommendation approach which learns from web logs and recommends to the user a list of pages relevant to him by comparison with the user's historic pattern. Finally, the search result list is optimized by re-ranking the result pages. The proposed system proves to be efficient, as the pages desired by the user appear at the top of the result list, thus reducing the search time.
Requirements are the basic entities that enable a project to be implemented successfully. In software development, these requirements must be measured accurately and correctly. The requirements of end users may change with individual personality, and since different requirements are given by different customers, all of them cannot be handled by a single software system. In such a case, it is essential that the software system changes automatically to fulfill all the requirements of the user; this condition is met by systems with intelligent agents. In this paper, algorithms for the intelligent agents adviser agent, personalization agent and content managing agent are proposed to automatically gather the requirements of students and fulfill them. The algorithms are structured using reinforcement learning, and they enable the agents to sense the needs of the students and evolve in the course of their operation. These intelligent agents are applied after the deployment of the e-learning software. All the algorithms have been implemented successfully.
Face recognition is emerging as one of the most significant and challenging techniques for secure identification of images in various fields such as banking, police records and biometrics, beyond an individual's thumbprint and documented identification proofs. To date, to initiate net banking one has to provide the appropriate user name and password for authentication. This project takes a step toward easier and more reliable authentication of an individual by providing a face image along with the user name and password to the system. An individual's face is identified through biometric authentication so that only the account holder can access the account. However, while this sensitive user image is transferred from the client machine to the bank server, it has to be protected from hackers and intruders, and hence it is transferred using covert communication, namely wavelet-decomposition-based steganography. The main difficulty in face recognition is that face images are affected by different expressions, poses, occlusions, illumination and aging over time, and images of the same person must be distinguished from those of different persons. When image information is jointly coordinated across image space, scale and orientation domains, it carries much stronger cues than each domain provides individually. In the proposed method, a combination of Local Binary Pattern (LBP) and Gabor features is used to increase face recognition performance significantly when comparing an individual's face presentations. Face recognition and representation of Gabor faces are performed using E-GV-LBP and CMI-LDA based feature recognition, exploiting space, scale and orientation to support accurate face recognition and making net banking easier, authentic, reliable and user friendly.
In this paper, we consider the problem of increasing the lifetime of a target-monitoring sensor network. The network consists of two types of sensor nodes, normal and advanced nodes with adjustable sensing ranges, deployed to monitor a set of targets. An advanced node has a times more energy than a normal node. A sensor can be in one of three states, namely active, deciding and sleep. We propose a mechanism to increase the lifetime of a sensor network using a heterogeneous energy model. The simulation results demonstrate that the proposed method improves the wireless sensor network lifetime significantly over the ALBPS and ADEEPS methods. We compare the results of the proposed method with those of ALBPS and ADEEPS because these methods have the best performance among the existing monitoring methods.
In today's world, traffic jams during rush hours are one of the major concerns. During rush hours, emergency vehicles like ambulances, police cars and fire brigade trucks get stuck in jams and are unable to reach their destinations in time, resulting in the loss of human lives. We have developed a system that provides clearance to any emergency vehicle by turning all the red lights to green along its path, thereby providing a complete green wave to the desired vehicle. A 'green wave' is the synchronization of the green phase of traffic signals: with a green wave setup, a vehicle passing through a green signal will continue to receive green signals as it travels down the road. Around the world, green waves are used to great effect. Often criminal or terrorist vehicles have to be identified, so in addition to the green wave path, the system will track a stolen vehicle when it passes through a traffic light. In contrast to traditional vehicle tracking systems, in which the Global Positioning System (GPS) module requires battery power, our tracking system, installed inside the vehicle, does not require any power. The information regarding the vehicle has to be updated in the system database. It is thus an autonomous two-tier system that helps in the identification of emergency vehicles or any other desired vehicle, and a novel system for implementing the concept of the green wave.
This paper presents a strategy based on the particle swarm optimization (PSO) algorithm introduced by Kennedy and Eberhart [1] for solving fractional programming problems. PSO is a population-based optimization technique which serves as an alternative to genetic algorithms (GA) and other evolutionary algorithms (EA) and has gained a lot of attention in recent years. PSO is a stochastic search technique with reduced memory requirements that is computationally effective and easier to implement compared to EA. In this paper, the possibility of using the particle swarm optimization algorithm for solving fractional programming problems is considered. The technique has been tried on a set of 12 test problems taken from the literature whose optimal solutions are known. A penalty function approach [2] is incorporated for handling the constraints of the problem. Our experience has shown that PSO can be used effectively to solve fractional programming problems as well.
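To make the constraint handling concrete, the sketch below shows a penalty-based fitness for a small linear fractional objective: each particle would minimize the objective plus a squared penalty on constraint violations. The objective, constraints and penalty weight are illustrative placeholders, not one of the 12 test problems from the literature.

import numpy as np

def fractional_objective(x):
    num = 2.0 * x[0] + 3.0 * x[1] + 1.0      # numerator  c^T x + alpha
    den = 1.0 * x[0] + 1.0 * x[1] + 2.0      # denominator d^T x + beta (kept positive)
    return num / den

def constraint_violations(x):
    g = np.array([
        x[0] + x[1] - 4.0,    # x1 + x2 <= 4
        -x[0],                # x1 >= 0
        -x[1],                # x2 >= 0
    ])
    return np.maximum(g, 0.0)

def penalised_fitness(x, R=1e4):
    # value actually minimised by each particle: objective plus squared penalty
    return fractional_objective(x) + R * np.sum(constraint_violations(x) ** 2)

print(penalised_fitness(np.array([1.0, 2.0])))   # feasible: penalty term is zero
print(penalised_fitness(np.array([5.0, 2.0])))   # infeasible: heavily penalised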
This paper proposes a signature verification system that can authenticate a signature to avoid forgery. In a real-world environment, it is often very difficult for any verification system to handle a huge collection of data and to detect genuine signatures with relatively good accuracy. Consequently, artificial intelligence techniques are used that can learn from the huge data set in a training phase and respond accurately in an application phase without consuming much storage memory or computational time. In addition, the system should also have the ability to continuously update its knowledge from real-time experience. One such adaptive machine learning technique, a Multi-Layered Neural Network Model (NN Model), is implemented for this work. Initially, a large data set is generated by collecting images of several genuine and forged signatures. The quality of the images is improved using image processing, and certain unique standard statistical features are then extracted in the feature extraction phase. This output is given as input to the proposed NN Model to further improve its decision-making capabilities. The performance of the proposed model is evaluated by calculating the false acceptance and rejection rates on a small data set. Possible further developments of this model are also outlined.
Clustering is a data mining technique for finding important patterns in unorganized and huge data collections. The likelihood approach to clustering is often used for classification because it is simple and easy to implement; it uses the Expectation-Maximization (EM) algorithm for sampling. The main focus of this research work was the classification of diabetic patients. Diabetic patients were classified by applying data mining techniques to medical data obtained from the Pima Indian Diabetes (PID) data set. The research was based on three techniques: the EM algorithm, h-means+ clustering and a Genetic Algorithm (GA). These techniques were employed to form clusters of patients with similar symptoms. Result analysis showed that the h-means+ and double-crossover genetic techniques performed better. The simulation tests for the three classification models were performed with the WEKA software tool. The hypothesis of similar patterns of diabetes cases between the PID data and local hospital data was tested and found positive, with a correlation coefficient of 0.96 between the two types of data sets. About 35% of a total of 768 test samples were found to have diabetes.
An image captured in bad weather suffers from poor contrast. Fog, one of the most common weather conditions, whitens the captured scene and decreases atmospheric visibility, which lowers the contrast of images acquired by optical equipment and gives them a fuzzy appearance. These problems create great difficulty for image information extraction, outdoor image monitoring, automatic navigation, target identification, tracking and so on. It is therefore necessary to enhance images captured in bad weather, i.e. foggy images. In this paper, a new method for foggy image enhancement is proposed that integrates multilevel wavelet decomposition, an auto-adapted LUM filter and a quadratic thresholding function. First, multilevel wavelet decomposition is applied to the image; next, the low-frequency and high-frequency components of the image are obtained; and finally, the auto-adapted LUM filter is applied to the low-frequency component. The new shrinkage function based on wavelet packet approximations turns out to be more flexible than the soft- and hard-thresholding functions, and wavelet reconstruction is finally carried out on the processed components.
The present paper describes an efficient method for detecting and segmenting salient region(s) in an image. The method uses a time-frequency tuned salient region extraction technique based on the wavelet transform (WT). The WT provides both spatial and spectral characteristics (i.e., texture information) of pixels and hence can be utilized effectively to improve the quality of salient region detection. As a result, the proposed method generates full-resolution maps with uniformly highlighted regions and well-defined boundaries that are invariant to translation, rotation and scaling, which makes it useful in applications like object segmentation/recognition and adaptive compression. The superiority of the proposed method over existing ones is demonstrated both qualitatively and quantitatively using indexes such as precision and recall on a large set of benchmark data sets.
Congestion control is a key problem in mobile ad-hoc networks. Congestion has a severe impact on throughput, routing and performance, and identifying its occurrence in a Mobile Ad-hoc Network (MANET) is a challenging task. The congestion control techniques provided by the Transmission Control Protocol (TCP) were designed specifically for wired networks, and several approaches have been built on top of TCP for detecting and overcoming congestion. This paper considers the design of link-layer congestion control for ad hoc wireless networks, where the bandwidth and delay are measured at each node along the path. Based on the accumulated values, the receiver calculates the new window size and transmits this information to the sender as feedback, and the sender's behavior is altered appropriately. The proposed technique is also compatible with standard TCP.
The incredible evolution of Internet technologies and their applications requires a high level of security for data on the communication channel. Image steganography is a digital technique for concealing information in a cover image. The Least Significant Bit (LSB) approach is the most popular steganographic technique in the spatial domain due to its simplicity and hiding capacity. Most existing steganography methods focus on the embedding strategy with little consideration of pre-processing, such as encryption of the secret image. Conventional algorithms do not provide the pre-processing required in image-based steganography for better security, as they do not offer flexibility, robustness and a high level of security. The proposed work presents a technique for image steganography based on the Data Encryption Standard (DES), using the strength of S-box mapping and a secret key. The pre-processing of the secret image is carried out by the embedding function of the steganography algorithm using two unique S-boxes. The pre-processing provides a high level of security, as extraction is not possible without knowledge of the mapping rules and the secret key of the function. Additionally, the proposed scheme not only scrambles the data but also changes the intensity of the pixels, which contributes to the strength of the encryption.
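For reference, the sketch below shows only the spatial-domain LSB embedding and extraction step on a grayscale array; the DES/S-box pre-processing of the secret data described above would be applied to the payload beforehand and is not reproduced here.

import numpy as np

def embed_lsb(cover, payload):
    # turn the payload bytes into a bit stream and write them into the LSBs
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.flatten()
    if bits.size > flat.size:
        raise ValueError("payload too large for this cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bytes):
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
stego = embed_lsb(cover, b"secret")
print(extract_lsb(stego, 6))   # b'secret'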
Digital data are transmitted over the Internet, so they must be secure, copyright protected and authenticated at the same time. This paper proposes an algorithm to protect digital data by embedding a watermark that is encrypted with the DES algorithm. A two-level discrete wavelet transform (DWT) is applied to the original image, which ensures the robustness of the proposed scheme. DES encryption of the watermark with a key and iterated operations ensure the security of the watermark information. The same key is used for encryption and decryption, so the watermark image can be extracted only if the secret key is known. The experimental results show that the watermark is robust against various attacks.
This paper presents a new technique to find the zeros of a real, linear-phase FIR filter. Properties of the Z-transform and constraints on the locations of zeros for this type of filter are used to reduce the search space for the zeros. The information obtained about the zeros is then used in WDK formulas to obtain more accurate and precise zero locations. Simulation results validating the proposed technique are also presented.
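Assuming WDK refers to the Weierstrass-Durand-Kerner simultaneous iteration, the sketch below applies it to an illustrative 5-tap symmetric (linear-phase) filter; the paper's Z-transform-based reduction of the search space is not reproduced, and the starting guesses are the usual points on a complex circle.

import numpy as np

def wdk_roots(coeffs, iters=100):
    coeffs = np.asarray(coeffs, dtype=complex)
    poly = lambda z: np.polyval(coeffs, z)
    n = len(coeffs) - 1
    z = (0.4 + 0.9j) ** np.arange(n)          # standard spread-out starting guesses
    for _ in range(iters):
        for i in range(n):
            others = np.delete(z, i)
            # Weierstrass correction: p(z_i) / (a0 * prod(z_i - z_j), j != i)
            z[i] = z[i] - poly(z[i]) / (coeffs[0] * np.prod(z[i] - others))
    return z

h = [2.0, 3.0, 5.0, 3.0, 2.0]                 # illustrative symmetric (linear-phase) FIR taps
zeros = wdk_roots(h)
print(np.sort_complex(zeros))
# for a symmetric filter, zeros appear together with their reciprocals (and conjugates)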
The Visual Cryptography Scheme is a secure method that encrypts a secret document or image by breaking it into shares. A distinctive property of the Visual Cryptography Scheme is that one can visually decode the secret image by superimposing the shares without computation. Because of this property, a third party can easily recover the secret image if the shares are transmitted in sequence over the network. This project presents an approach for encrypting visual-cryptographically generated image shares using public key encryption. The RSA algorithm is used to provide double security for the secret document, so the secret shares are not available in their actual form for alteration by adversaries who try to create fake shares. The scheme provides more secure secret shares that are robust against a number of attacks, and the system provides strong security for handwritten text, images and printed documents over the public network.
Chaos-based image encryption has recently been the subject of extensive research and implementation. Some chaos-based algorithms work well and resist many types of cryptanalysis attacks, but they take a long time for encryption and decryption; others are very fast, but their strength against attacks is questionable. This motivated us to design a cryptosystem that takes less time for encryption and decryption while resisting all types of cryptanalysis attacks. In this paper we develop an advanced image encryption scheme using block-based randomization and a chaos system. We discuss a block-based transformation algorithm in which the image is divided into a number of blocks; these blocks are transformed before going through a chaos-based encryption process, and at the receiver side, after decryption, the blocks are re-transformed into their original positions. The main advantage of this approach is that it reproduces the original image with no loss of information during encryption and decryption in a reasonable amount of time, and the sensitivity of the chaos system makes it more secure and reliable over the network.
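As a rough illustration of the two stages, the sketch below shuffles 8x8 blocks with a keyed permutation and then XORs the pixels with a logistic-map keystream. The block size, map parameters, seed and random test image are illustrative assumptions; the paper's exact transformation and chaos system are not reproduced, and the inverse (decryption) path is omitted.

import numpy as np

def logistic_keystream(n, x0=0.37, r=3.99):
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)                 # chaotic logistic map
        out[i] = int(x * 256) % 256
    return out

def encrypt(img, block=8, seed=1234):
    h, w = img.shape
    blocks = (img.reshape(h // block, block, w // block, block)
                 .swapaxes(1, 2).reshape(-1, block, block))
    perm = np.random.default_rng(seed).permutation(len(blocks))   # keyed block shuffle
    flat = blocks[perm].reshape(-1)
    cipher = flat ^ logistic_keystream(flat.size)                 # chaos-based XOR
    return cipher.reshape(h, w), perm

img = np.random.default_rng(0).integers(0, 256, (32, 32), dtype=np.uint8)
cipher, perm = encrypt(img)
print(cipher.shape, perm[:5])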
In monitoring depth of anaesthesia, a number of methods have been elucidated and developed over several decades for assessing the level of hypnosis under the effect of anaesthesia. The application of anaesthetic agents has significant effects on the electroencephalograph (EEG) waveform. In this paper we estimate the depth of anaesthesia using detrended fluctuation analysis and wavelet analysis in order to characterize the patient state. Detrended fluctuation analysis (DFA) examines the scaling behaviour of the EEG in order to assess scaling information and long-range correlations in the time series; the scaling exponent is obtained by a linear regression fit of a scale-dependent quantity versus the scale in a logarithmic representation. However, time-domain analysis of the EEG is complex, and a time-frequency approach is needed. Wavelet analysis, in particular, provides time-frequency localization of the information and improves time resolution, which allows detection of the time of occurrence of an event. Using the discrete wavelet transform, the EEG signals were decomposed into sub-bands, and the probability density of each sub-band of each EEG segment was calculated from the number of wavelet coefficients in order to obtain uniformly time-distributed atoms of energy across all scales. This second method provides more robust results and can be applied to more general models.
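For the first method, a minimal DFA sketch is given below: the signal profile is divided into windows, each window is detrended with a least-squares line, and the scaling exponent is the slope of log F(n) against log n. The window scales and the synthetic Brownian-noise test signal are illustrative, not EEG data.

import numpy as np

def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
    y = np.cumsum(x - np.mean(x))                    # integrated profile of the signal
    F = []
    for n in scales:
        n_seg = len(y) // n
        segs = y[:n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        # detrend each segment with a least-squares line, then take RMS fluctuation
        coefs = np.polyfit(t, segs.T, 1)
        trend = np.outer(coefs[0], t) + coefs[1][:, None]
        F.append(np.sqrt(np.mean((segs - trend) ** 2)))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

sig = np.cumsum(np.random.default_rng(0).standard_normal(4096))  # ~Brownian noise
print(round(dfa_exponent(sig), 2))                               # expected to be close to 1.5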
There is no substitute for visual perception when the brain solves any problem, from simple to complex, and surviving in the everyday environment demands a great deal from the brain's visual information processing. Yet a large number of human beings are deprived of visual perception due to biological or accidental causes. This paper presents a critique of vision-aided systems and suggests an inexpensive alternative for developing an aid for visually impaired persons. We propose a real-time solution that uses image processing and low-cost hardware to support the visually impaired in everyday path navigation and obstacle avoidance. A head-mounted camera captures images that are processed to identify a plausible path and obstacles. The identified path direction and obstacles are converted into a pre-defined vocabulary delivered as audio output to the user's ears. The method has been tested under varying environmental conditions and found to be effective for navigation by the visually impaired.
Several lung diseases are diagnosed by detecting patterns of lung tissue in medical images obtained from MRI, CT, US and DICOM sources. In recent years, many image processing procedures have been widely applied to medical images to detect lung patterns at early and treatment stages. Several approaches to lung segmentation combine geometric and intensity models to enhance local anatomical structure. When lung images are corrupted by noise, two difficulties are primarily associated with nodule detection: detecting nodules that are adjacent to vessels or the chest wall and have very similar intensity, and detecting nodules that appear non-spherical due to noise. In such cases, intensity thresholding or model-based methods may fail to identify those nodules. Edges characterize boundaries and are hence of fundamental importance in image processing; edge detection significantly reduces the amount of data by filtering while preserving the important structural attributes, so an understanding of edge detection algorithms is necessary. In this paper, morphology-based region-of-interest segmentation combined with the watershed transform of a DICOM lung image is performed, and a comparative analysis is carried out in noisy environments such as Gaussian, salt-and-pepper, Poisson and speckle noise. The ROI lung area, blood vessels and nodules are extracted from the major lung portion using different edge detection filters such as Average, Gaussian, Laplacian, Sobel, Prewitt, Unsharp and LoG in the presence of noise. The results help to study and analyse the influence of noise on DICOM images while extracting the region of interest and to determine how effectively the operators can detect edges despite the impact of different types of noise. The evaluation is based on parameters from which the choice of operator can be made.
While technology keeps growing, the world keeps shrinking, and genome compression is becoming a pressing need. More and more genomic data are generated every day, and their accumulation creates a major problem for processing in centralized and distributed environments. Genome data can be classified into DNA and mRNA sequences, which are encoded with the four letters A, C, G and T. Because of the massive storage of such data in public databases like GenBank and EMBL, their size is growing exponentially, and genome compression has become a major concern for ease of processing data in the network and storage in databases. Many classical algorithms fail on genomic sequences because of tandem and non-tandem repeats in DNA and mRNA. Some algorithms report performance analysis based on tandem repeats for best, average and worst cases, but the results are not ample. Our proposed technique, Genpack, uses a public 32-bit key for the encoding and decoding process. Genpack works on both tandem and non-tandem repeats, and the observed compression is 1.002 bits/character. It is among the finest of the existing techniques, and by using it, data can be managed easily by substantially reducing the infrastructure required in networks, so that quality of service (QoS) can also be reinforced.
We present a physiologically inspired adaptive algorithm for noise removal in an image while preserving a significant amount of edge detail. The algorithm is motivated by the classical lateral-inhibition-based receptive field in the visual system as well as by the holistic approach of the well-known bilateral filter. We propose an adaptive difference of Gaussian (DoG) filter whose window size varies with the edge strengths in the image. Our algorithm has advantages over similar techniques such as the simple Gaussian filter and the DoG filter, and is comparable to the bilateral filter in terms of edge enhancement. Furthermore, its time complexity is much lower than that of the bilateral filter.
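A minimal sketch of the idea is given below: two DoG responses at different scales are computed, and the per-pixel choice between them is driven by a local gradient-magnitude estimate, so stronger edges get the narrower filter. The two scale pairs and the edge threshold are illustrative assumptions, not the paper's adaptive window rule.

import numpy as np
from scipy import ndimage

def adaptive_dog(img, small=(1.0, 1.6), large=(2.0, 3.2), edge_thresh=20.0):
    img = img.astype(float)
    grad = ndimage.gaussian_gradient_magnitude(img, sigma=1.0)   # local edge strength
    dog_small = ndimage.gaussian_filter(img, small[0]) - ndimage.gaussian_filter(img, small[1])
    dog_large = ndimage.gaussian_filter(img, large[0]) - ndimage.gaussian_filter(img, large[1])
    # narrower filter near strong edges (to preserve them), wider filter elsewhere
    return np.where(grad > edge_thresh, dog_small, dog_large)

noisy = np.random.default_rng(0).normal(128, 20, (64, 64))
print(adaptive_dog(noisy).shape)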
Textures play important roles in many image processing applications, since images of real objects often do not exhibit regions of uniform and smooth intensity but rather variations of intensity with certain repeated structures or patterns, referred to as visual texture. These textural patterns or structures mainly result from physical surface properties such as roughness or oriented structure of a tactile quality. It is widely recognized that visual texture, although easily perceived, is very difficult to define; the difficulty results mainly from the fact that different people define texture in application-dependent ways or with different perceptual motivations, and there is no generally agreed single definition of texture [1]. Developments in multi-resolution analysis such as the Gabor and wavelet transforms help to overcome this difficulty [2]. This paper describes texture classification using Wavelet Statistical Features (WSF), Wavelet Co-occurrence Features (WCF) and a combination of both feature types computed from wavelet-transformed images with different feature databases, which can give better results [2]. The features are further analyzed by introducing noise (Gaussian, Poisson, salt-and-pepper and speckle) into the image to be classified. The results suggest that the Wavelet Statistical Features are more efficient for classification, even under noise, than the other features. Wavelet-based decomposition is used to classify the images with code written in MATLAB.
This paper describes a method to implement a mobility aid for blind persons that can also be used in automatic robots, self-propelling vehicles in automated production factories, and similar applications. The model contains a signal processing unit with a PIC microcontroller that receives data from an ultrasonic sensor and a temperature sensor, processes it, delivers it to a computer through a serial input/output port, and alerts the blind person using a voice processor with an earphone. The paper describes a temperature compensation method to reduce the error in distance measurement with ultrasonic sensors. The signal processing unit's PIC microcontroller is used for interfacing between the different sensors and the computer, and the received data is then verified using MATLAB.
Reconstruction of a signal in the Compressed Sensing (CS) framework relies on knowledge of the sparse basis and the measurement matrix used for sensing. While most studies so far focus on applications of CS in fields such as imaging, radar and astronomy, we present our work on the application of CS to speech/audio processing. This work gives a comparative analysis of different sparse bases and measurement matrices that can be used in speech/audio processing, along with a detailed analysis of the performance bounds, compression ratios and reconstruction errors that should be considered when designing CS-based speech applications.
Reconstruction of a signal in the Compressed Sensing (CS) framework relies on knowledge of the sparse basis and the measurement matrix used for sensing. While most studies so far focus on the prominent random Gaussian, Bernoulli or Fourier matrices, we propose the construction of an efficient sensing matrix, which we call the Grassgram matrix, using Grassmannian matrices. This work shows how to construct effective deterministic sensing matrices for any known sparse basis that can fulfill the incoherence or RIP conditions with high probability. The performance of the proposed approach is evaluated for speech signals. Our results show that these deterministic matrices outperform other popular matrices.
Contrast enhancement plays an important role in image processing systems. Enhancement improves the appearance of an image and makes it easier to interpret, understand and analyse visually. Linear stretching and histogram equalization are the most common contrast enhancement methods, but images enhanced by them tend to have overly bright and unnatural contrast. We therefore propose a method based on a genetic algorithm that enhances an image with natural contrast. In local contrast enhancement, an image can be enhanced using four constant parameters 'a', 'b', 'c' and 'k'; our method achieves contrast enhancement using these parameters with a new extension of their ranges. Local contrast enhancement increases the gray level of the original image on the basis of light and dark edges. The proposed method is applied to an m×n gray-scale image, and the local mean and local standard deviation of the entire image, together with the minimum and maximum values of the image, are used to characterize the digital image statistically.
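For context, the sketch below implements a local enhancement transform commonly used in GA-based contrast enhancement with parameters a, b, c and k: g = (k*M/(sigma_local + b))*(f - c*m_local) + m_local**a, where M is the global mean and m_local, sigma_local are the local mean and standard deviation. Whether this matches the paper's exact transform is an assumption; the window size and sample parameter values are illustrative, whereas in the paper the parameters are tuned by the genetic algorithm.

import numpy as np
from scipy import ndimage

def enhance(img, a=1.0, b=1.0, c=0.8, k=1.2, win=7):
    f = img.astype(float)
    M = f.mean()                                              # global mean
    m_local = ndimage.uniform_filter(f, win)                  # local mean
    sigma_local = np.sqrt(np.maximum(ndimage.uniform_filter(f ** 2, win) - m_local ** 2, 0.0))
    g = (k * M / (sigma_local + b)) * (f - c * m_local) + m_local ** a
    return np.clip(g, 0, 255).astype(np.uint8)

img = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
print(enhance(img).dtype, enhance(img).shape)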
Illumination variations significantly affect the performance of automatic face recognition systems. To achieve optimum contrast enhancement, contrast-limited adaptive histogram equalization (CLAHE) is used in this work. Histogram equalization (HE) modifies the histogram globally based on the intensity distribution of the entire image, whereas the features of interest in an image need enhancement locally; CLAHE is based on the intensity distribution in a neighborhood of every pixel. Further, to remove illumination variations in the face image, an appropriate number of low-frequency DCT coefficients is scaled down, since illumination variations lie mainly in the low-frequency band. After eliminating the effect of illumination variations, the data are mapped onto another feature space using Kernel PCA (KPCA), which extracts higher-order statistics; KPCA has the advantages of lower computation time and improved performance. Classification is done using a nearest neighbor classifier. Experiments are performed on the Extended Yale B database, and the results show that the performance of our method is significantly better than that of existing state-of-the-art techniques.
In this work, a discrete wavelet packet transform (DWPT) based feature extraction method is illustrated for fingerprint matching. The wavelet packet transform is applied to a small area of the fingerprint image. The performance of the wavelet packet decomposition is evaluated on the standard database available on the website of Bologna University. The redundancy of the discrete wavelet packet transform is reduced without compromising accuracy, and the reduced-redundancy DWPT gives better performance than the discrete wavelet transform (DWT), Gabor filters and minutiae-based methods.
The Hilbert transform is a well-known analytical technique widely used in demodulation of signals, shift-invariant wavelet analysis, and many other domains. Since computation of the Hilbert transform through convolution with the Hilbert kernel is cumbersome, the convolution is usually mapped to multiplication with the frequency response of the Hilbert kernel. However, computation of the Fourier transform is computationally intensive compared to other transforms, such as the Haar wavelet transform; this has led to the use of other transforms in computing the Hilbert transform. We demonstrate parameterized methods of computing the Hilbert transform using classical transforms like the DCT and Haar, as well as so-called Haar-like transforms. The efficacy of these methods in separating the positive and negative channels is demonstrated on Doppler spectra.
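As a baseline for comparison, the sketch below computes the Hilbert transform by the standard frequency-domain route mentioned above: negative frequencies of the FFT are zeroed to form the analytic signal, whose imaginary part is the Hilbert transform (the same construction used by scipy.signal.hilbert). The DCT/Haar-based methods of the paper are not reproduced here.

import numpy as np

def hilbert_fft(x):
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)                    # weights that zero the negative frequencies
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)      # analytic signal
    return analytic.imag               # Hilbert transform of x

t = np.linspace(0, 1, 256, endpoint=False)
x = np.cos(2 * np.pi * 8 * t)
print(np.allclose(hilbert_fft(x), np.sin(2 * np.pi * 8 * t), atol=1e-10))  # True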
This paper presents a robust watermarking scheme for digital videos using the YCbCr color space and wavelet-based techniques. The proposed scheme divides the given watermark into two parts: a smaller part is embedded in the Y channel with lower embedding strength, while the other, larger part is embedded in the Cr channel with higher embedding strength, to achieve a good balance of imperceptibility and robustness. Embedding in the Y channel provides robustness against compression attacks, while embedding in the Cr channel provides robustness against other types of attacks. The spread spectrum technique of watermarking is used. In the proposed scheme, the watermark is embedded in a plane prepared specifically from the temporal information of the video to achieve maximum imperceptibility. The proposed algorithm is tested against various types of attacks, and results are presented in terms of the PSNR between the original and modified video as well as the correlation between the original and extracted watermark. The proposed algorithm is shown to perform well on both counts.
Many algorithms have been proposed for detecting video shot boundaries and classifying shot and shot transition types. This paper presents a novel approach to processing encoded video sequences prior to complete decoding. Detecting gradual transitions and eliminating disturbances caused by illumination changes or fast object and camera motion are the major challenges for current shot boundary detection techniques, since such disturbances are often mistaken for shot boundaries. It is therefore a challenging task to develop a method that is insensitive to these disturbances yet sensitive enough to capture a shot change. To address these challenges, we propose an algorithm for shot boundary detection in the presence of illumination change, fast object motion and fast camera motion; this is important for accurate and robust detection of shot boundaries and in turn critical for high-level content-based analysis of video. First, the proposed algorithm extracts structure features from each video frame using the dual-tree complex wavelet transform. Then, spatial-domain structure similarity is computed between adjacent frames, and shot boundaries are declared based on carefully chosen thresholds. An experimental study is performed on a number of videos that include significant illumination changes and fast motion of camera and objects. A performance comparison of the proposed algorithm with other existing techniques validates its effectiveness in terms of better recall and precision.
This paper proposes a noise-robust technique to facilitate edge detection in color images contaminated with Gaussian and speckle noise. The proposed edge detector uses the Hilbert transform to perform edge sharpening and enhancement, while bilateral filtering smooths noisy pixels without affecting high-frequency edge content. Using bilateral filtering as a precursor to the Hilbert transform drastically improves the degree of noise robustness. Simulations have been carried out on medical images, and the results have been validated in Gaussian and speckle noise environments.
A robust watermarking scheme based on the multi-resolution property of the Discrete Wavelet Transform (DWT) is proposed for copyright protection. The cover image is divided into sub-bands of varying detail, and Singular Value Decomposition (SVD) is applied to an appropriate sub-band; its singular values are then altered with the singular values of the watermark, which is also a 512×512 color image of the same size as the original cover image. The imperceptibility and robustness of the scheme are measured with Peak Signal-to-Noise Ratio (PSNR) and Normalized Cross-Correlation (NCC) values in the YIQ and YUV color spaces. High values of these metrics validate the efficiency of the proposed scheme against various attacks.
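The sketch below illustrates the embedding principle on a single grayscale channel: a one-level Haar DWT, SVD of the LL sub-band, and mixing of singular values with a scaling factor. The sub-band choice, the one-level Haar decomposition, the scaling factor and the smaller grayscale watermark are illustrative simplifications, not the exact settings of the scheme described above.

import numpy as np
import pywt

def embed(cover, watermark, alpha=0.05):
    LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), "haar")
    Uc, Sc, Vc = np.linalg.svd(LL)
    Sw = np.linalg.svd(watermark.astype(float), compute_uv=False)
    S_marked = Sc + alpha * Sw                       # alter the singular values
    LL_marked = Uc @ np.diag(S_marked) @ Vc
    return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (128, 128)).astype(float)
wm = rng.integers(0, 256, (64, 64)).astype(float)    # watermark sized to match the LL sub-band
print(embed(cover, wm).shape)                        # (128, 128)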
Non-linear filters have the property of enhancing and preserving the edges of lesions. Detection of tumour edges is required in applications such as evaluating the effectiveness of breast cancer treatment. The proposed algorithm uses a polynomial filtering technique to enhance the contrast of the lesion while preserving its edges and effectively suppressing background noise. The algorithm successfully detected lesion edges in digital mammograms taken from the DDSM mammogram database, and several metrics were used to establish the effectiveness of the enhancement and noise suppression.
The electrocardiogram contains a wealth of diagnostic information normally used to guide clinical decision making for proper diagnosis of cardiovascular disorders. The ECG is often contaminated by noise and artifacts that can lie within the frequency band of interest and can exhibit morphologies similar to the ECG signal itself. Baseline correction and noise suppression are the two important prerequisites for conditioning the ECG signal. This paper presents a novel morphological filtering technique for removing baseline drift using a non-flat structuring element. Further, to achieve noise suppression, an improved median filtering technique is applied using masks of variable size; depending on the degree of impulse noise contamination, the mask size may grow to a maximum of 1×11. The residual noise left after this stage is then filtered out using morphological filtering. Simulation results show noteworthy improvement in baseline correction and noise filtering compared to other morphological-filtering-based approaches in the literature.
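The sketch below shows the general two-stage pre-processing described above, but with flat structuring elements (which scipy provides directly) and a fixed 5-sample median window; the paper's non-flat structuring element and variable mask sizes are not reproduced, and the window lengths and toy signal are illustrative.

import numpy as np
from scipy import ndimage, signal

def remove_baseline(ecg, fs=360):
    w1 = int(0.2 * fs) | 1                            # ~0.2 s structuring window (odd)
    w2 = int(0.3 * fs) | 1                            # ~0.3 s structuring window (odd)
    oc = ndimage.grey_closing(ndimage.grey_opening(ecg, size=w1), size=w2)
    co = ndimage.grey_opening(ndimage.grey_closing(ecg, size=w1), size=w2)
    corrected = ecg - 0.5 * (oc + co)                 # subtract the estimated baseline drift
    return signal.medfilt(corrected, kernel_size=5)   # impulse-noise suppression

t = np.arange(0, 5, 1 / 360.0)
ecg = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 0.2 * t)   # toy signal plus drift
print(remove_baseline(ecg).shape)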
Magnetic Resonance Imaging (MRI) offers a wealth of information for medical examination, and fast, accurate and reproducible segmentation of MRI is desirable in many applications. Brain image segmentation is important from a clinical point of view for tumor detection. Brain images usually contain noise and inhomogeneity, and sometimes deviations, so accurate segmentation is a very difficult task. In this paper we present an automatic method of brain segmentation for tumor detection. MR images from T1, T2 and FLAIR sequences are used for the study, along with axial, coronal and sagittal slices. Segmentation of the MR images is performed using textural features based on the gray-level co-occurrence matrix; the textural feature used is the entropy of the image.
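For reference, the sketch below computes the entropy feature from a gray-level co-occurrence matrix of an image patch with scikit-image; the patch size, quantisation level and GLCM offsets are illustrative choices, not the paper's settings.

import numpy as np
from skimage.feature import graycomatrix

def glcm_entropy(patch, levels=32):
    q = (patch.astype(float) / 256 * levels).astype(np.uint8)        # quantise to `levels` gray levels
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)[:, :, 0, 0]
    p = glcm[glcm > 0]
    return float(-np.sum(p * np.log2(p)))                             # entropy of the co-occurrence matrix

patch = np.random.default_rng(0).integers(0, 256, (32, 32), dtype=np.uint8)
print(round(glcm_entropy(patch), 2))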
Image quality evaluation plays a very important role in quantifying the quality of an image. Most of the indices proposed so far can quantify only a single type of distortion and are also not well correlated with the Human Visual System (HVS). Contrast and sharpness are the two parameters on the basis of which the proposed quality index evaluates the performance of image enhancement algorithms. The degree of contrast is evaluated from the difference between the average gray-level values of the foreground and the background. The statistical parameters calculated for the foreground and background regions are combined mathematically using Logarithmic Image Processing (LIP) operators to ensure that images are processed from the HVS point of view. The other parameter, sharpness, is calculated using wavelet decomposition employing the Haar transform. Finally, both parameters are combined mathematically to assess the quality of the enhanced image. Simulation results illustrate the precision and efficiency of the proposed quality index in comparison with previously proposed measures for assessing image enhancement.
Choosing a suitable cost function for satellite image registration is always a challenging task. Satellite images pose a unique challenge in the image registration field, as they may contain noise, clouds, highly undulating terrain and so on, and their registration falls into the multi-temporal and sometimes multi-modal categories. To date there is no generalized solution suitable for all cases, and the choice of cost function plays an important role in the registration task. Different types of cost functions are available, such as intensity difference, correlation and entropy-based methods. Here, a detailed study of existing similarity metrics is carried out, with the conclusion that a mutual-information-based similarity metric covers most cases with satisfactory results.
In this era of climatic changes, heavy wind damages to buildings and structures have become a major issue. Impacts of such disasters can be reduced by a rapid but accurate identification of damaged location and faster reconstruction. In this paper, automatic roof-damaged building identification as well as the damaged area pattern recognition is carried out from aerial images using a novel technique, texture-wavelet analysis. Initially, wavelet features extraction, followed by feature selection using a decision tree algorithm and Support Vector Machine (SVM) classification is performed for roof-damaged building identification. After separating the damaged buildings from the non-damaged ones, the pattern of the roof-damaged area is attained using texture-wavelet analysis and finally percentage area of damaged roof is measured. Comparison is done with the conventional feature extraction methods. The validation is performed with the manually obtained data as well as the field survey information.
Modified Condition/Decision Coverage (MC/DC) is a white-box testing criterion aiming to show that every condition involved in a predicate can influence the predicate value in the desired way. Although MC/DC is a standard coverage criterion, existing automated test data generation approaches such as CONCOLIC testing do not support it. To address this issue, we present an automated approach to generate test data that helps achieve an increase in the MC/DC coverage of a program under test. We use code transformation techniques consisting of the following major steps: identification of predicates, simplification of the sum-of-products form by the Quine-McCluskey method, and generation of empty true-false if-else statements. The transformed program is fed into the CONCOLIC tester (CREST tool) to generate test data for increased MC/DC coverage. Our approach helps achieve an increase in MC/DC coverage compared to traditional CONCOLIC testing.
The amalgamation of social science and multiagent research can be quite harmonious in the domain of multiagent-based simulation (MABS), providing active interdisciplinary advantages. The pivotal role of MABS is to enable agent modelling so that the reproduced system possesses realistic behavior. The challenge addressed in this paper is to model a manually controlled railway system as a multiagent-based coordination system, in which each real object or entity of the railway system is considered an agent with its own computational capabilities. Communication and coordination between the respective agents can then substitute for manual decisions, replacing the demanding job of railway control room personnel in a more sophisticated way. The paper aims at 100% collision avoidance as well as optimised system delay in railway traffic, eliminating the manual errors that cause disasters, by proposing a robust and fully automated model.
Testing is the final phase of the product life cycle to which particular attention is paid, especially when dependability is of great importance. Modeling technology has been introduced into the software testing field; however, how to carry out test modeling effectively is still a difficulty. Based on the combination of simulation modeling technology and dependability, we propose an approach to generate test cases. In our approach, the system is first modeled in MATLAB using the Simulink/Stateflow tool. After the model is created, we verify the system and generate a dependency graph, from which test sequences are generated.
Software maintenance is the most demanding and effort-consuming phase in software development and has been recognized as a tedious step in the development process. Two basic activities in software maintenance are understanding the system and assessing the potential effects of a change. A change to a system, however small, can lead to several accidental effects that are often not obvious or easy to identify. The main purpose of impact analysis is to find the impact set not only in terms of code elements but also in terms of the effort and resources required to implement the change, so that analysts can analyze the impact of the requested change in terms of budget. The objective of this paper is to find the impact set of a change requested by a user or client and to use it to estimate the regression test effort. We illustrate our results with a case study. As a result of this work, we obtain the impact set, containing the impacted class names with respect to the requested change, the test suite, and the effort required for regression testing after the implementation of the requested change.
In this paper, we describe a user-profiling-based e-learning system with adaptive test generation and assessment. The system uses a rule-based intelligence technique and implicit user profiling to judge the proficiency level of the student and generates tests accordingly. More specifically, it is a test generation, assessment and remedial system in which the student can take a test after completing the study of a particular concept, with the difficulty level of the test decided by the expert system engine. After completion of the test, the system helps the student improve proficiency in the concept, either when the concept requires it or when the student faces difficulty in understanding it. Based on the types of errors made in the test, the test remedial system helps the student improve in that domain of understanding of the concept. Every phase transition is rule-based, considering the user's profile and the concept's importance to make sure the student does well where it is required. Preliminary experimental implementations show that with user profiling we can reduce the amount of effort required by the user to clear a concept. Moreover, with test remediation we ensure that the user actually covers all erroneous domains under a particular concept, depending on its importance.
Component-Based Software Engineering is a suitable approach for rapid software development given the maturity of available components. Estimating Component-Based Software (CBS) reliability from the reliabilities of the constituent components and the architecture is a matter of concern. In this paper we propose a Reliability Estimation Model for CBS that estimates reliability through path propagation probability and a component impact factor. The model incorporates the idea of path propagation to estimate overall system reliability after the integration of components, considering the contribution of the individual components that are activated during an execution path. It also estimates the impact factor of individual components on overall reliability; the impact factor can be used to focus efforts where the best reliability improvements can be obtained. To evaluate the Reliability Estimation Model with both factors, we implement it in JAVA, based on an adapted example case study. We conclude that the proposed model is useful for estimating the reliability of CBS and can be applied adaptively in the early stages of software development.
With the increasing number of functionally similar web services available on the Web, the trustworthiness of published QoS information and reputation in Web service discovery, for simple or complex tasks, has become a challenging issue for practitioners. QoS includes a number of non-functional parameters such as price, response time, availability, reliability and reputation. The QoS of a composite service can only be achieved by fair computation of the QoS of every component service invoked, since invoking a low-quality component service can slow down the overall performance of the composition. Most existing approaches do not consider the broker's rating during the monitoring of Web services. In this paper, a QoS evaluation and monitoring mechanism with new QoS parameters, Access Rate and Overall Aggregated Effective Quality of Service (OAEQoS), is proposed. The proposed parameters are evaluated in the trusted third-party broker's operating environment during new service registration to assure the quality of service, and the broker can assign a rank from the value of the OAEQoS parameter. With the help of the proposed parameters, the broker publishes the services with their overall quality score, which helps the service consumer retrieve the best services.
Global Software Development (GSD) has recently evolved and has been embraced by the competitive software industry today. The major attraction of GSD is due to the greater availability of human resources in distributed zones at a low cost and advances in communication technology. However, recent research reveals that expected benefits of GSD are not always realized as predicted since additional costs are often involved for the communication and coordination activities between the dispersed groups. Therefore, the main challenge of GSD today is to minimize the additional costs and maximize the benefits. A proper work distribution mechanism is particularly important to reduce the additional challenges facing GSD. In this paper, we present a method for work distribution to multiple locations. It starts with the identification of work as stages/phases in the Software Development Life Cycle (SDLC) and grouping them according to the Software Process Model (SPM) used. A final suggestion for the work distribution is based on multiple criteria such as work dependency, site dependency, site specific and work specific characteristics. The priority given to the criteria depends on the project objective.
In this paper, we have proposed a framework for representation and recognition of motion events between two or more objects. The framework combines qualitative spatial abstractions with the technique of syntactic pattern recognition in order to recognize motion patterns in an input stream. The key feature of the framework is recognition of motion events among multiple objects and handling of temporal constraints among events.
Mutation testing is considered one of the most fascinating ways to validate the software under analysis. In the last decade, many researchers have developed techniques and tools to apply mutation testing to Aspect-Oriented Programs (AOP). In this paper, the authors survey several mutation-testing-based approaches available for testing aspect-oriented programs. Along with mutation testing techniques, various AOP testing tools have also been considered and analyzed based on the essential requirements these tools need to fulfill, as discussed by several researchers. The paper analyzes the research work on mutation testing techniques and mutation tools for aspect-oriented programs, considering different parameters on which the analysis of the techniques and tools is carried out. In addition to the few other parameters considered for evaluation, some of the resulting metrics may vary slightly under modification of the basic requirements. Based on the numeric values calculated, it is finally suggested which mutation tool may be better and under what circumstances.
Assessment of object-oriented software design quality has been an important issue among researchers in the software engineering discipline. In this paper, we propose an approach for determining the design quality of an object-oriented software system. The approach makes use of a set of UML diagrams created during the design phase of the development process. Design metrics are fetched from the UML diagrams using a parser we developed, and design quality is assessed using a hierarchical model of software design quality. To validate the design quality, we compute the product quality of the same software that corresponds to the UML design diagrams using the available tools METRIC 1.3.4, JHAWK and Team In a Box. The objective is to establish a correspondence between the design quality and the product quality of object-oriented software. For this purpose, we have chosen three software systems of a priori known low, medium and high quality. This is work in progress, though the substantial part of the task has already been completed.
Experimental laboratories are an important part of education in technical institutes: they demonstrate course concepts and bring theoretical ideas alive so that students can better visualize and understand them, and natural phenomena affect real-world measurement and control algorithms. However, equipping a laboratory everywhere is costly and its maintenance can be difficult. Teaching assistants are required to give instructions, set up well-equipped laboratories and grade student reports; these time-consuming and costly tasks result in relatively poorly equipped laboratories, and experiments can be carried out only when teaching assistants and laboratory equipment are available. A lightweight web server together with a website interface has been developed that drives cost-efficient laboratory equipment and measures electronic devices as a 'virtual laboratory' for experiments by undergraduate and postgraduate students of any technical institute. It uses standard browsers without additional plug-ins to provide an interface to any user. The present interface is flexible and could be expanded to many other devices and instruments.
Complex software systems have a number of input parameters that interact with each other. As the number of input parameters in the system increases, the trade-off the system tester faces is the thoroughness of test case coverage versus the limited resources of time and expense that are available. One approach to resolving this trade-off is to constrain the set of test cases such that each pairwise combination of input parameter values is covered; this gives a well-defined level of test coverage with a reduced number of test cases. The problem becomes severe, however, if the domains of the input parameters are large and the number of generated test cases is therefore huge. To deal with this problem, a novel constraint strategy based on pairwise testing is introduced for selecting a set of test cases in software systems whose input parameters have large domains. The strategy uses a Fibonacci-series-driven test case generation approach to generate a set of intelligent test cases that are as effective as conventional test cases but fewer in number. The paper also presents experimental results that demonstrate the effectiveness of the proposed strategy in terms of the number of test cases generated and the issues uncovered by them.
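For orientation, the sketch below generates a plain pairwise-covering test suite with a simple greedy pass over the full Cartesian product; the parameter model is an illustrative example, and the Fibonacci-driven selection proposed in the paper is not reproduced.

from itertools import combinations, product

# hypothetical parameter model used only for illustration
params = {
    "os": ["linux", "windows", "mac"],
    "browser": ["chrome", "firefox"],
    "db": ["mysql", "postgres", "sqlite"],
}

names = list(params)
uncovered = {((a, va), (b, vb))
             for a, b in combinations(names, 2)
             for va, vb in product(params[a], params[b])}

tests = []
for candidate in product(*params.values()):
    case = dict(zip(names, candidate))
    new_pairs = {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}
    if new_pairs & uncovered:                 # keep only cases that add pair coverage
        tests.append(case)
        uncovered -= new_pairs
    if not uncovered:
        break

print(len(tests), "pairwise test cases instead of",
      len(list(product(*params.values()))), "exhaustive ones")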
Software enterprises develop projects with an iterative focus on yielding better business value and a rich customer experience. Since business value is the index of revenue generation, global branding and market leadership, attaining business value is vital for IT-enabled enterprises. Because projects are developed either by technology push or by market pull, top management has the mission of enabling projects to deliver the expected business value rather than just technical success. Although advanced methodologies, modern project management concepts and quality standards are employed in developing projects, enterprises are not deriving the expected business value from them. Because of this lacuna, research was undertaken in six enterprises comprising fifteen projects spread across five major domains of software development, with an intensive focus on the quality dimension. The industrial quality framework process, which does not adhere to any standard quality model, was studied, and the reasons why projects fail to deliver the expected business value were found. These reasons are addressed in the proposed customized quality model, which is promising for reaping the expected business value when implemented.
The implementation of basic logic functions based on current-mode techniques is proposed in this paper. By expanding each logic function as a power-series expression, its realization with adders, subtractors and multipliers is simplified. To illustrate the proposed technique, a CMOS circuit for simultaneous realization of the basic logic functions NOT, AND, OR, NAND and XOR is considered. In this paper, digital logic gates are realized at low voltage using analog current-mode techniques. The circuit can be operated with ±3.1 V supplies. PSPICE simulation and experimental results show good agreement with the theoretical predictions.
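The power-series expansion underlying such circuits can be checked arithmetically. The sketch below assumes the usual arithmetic encodings of the gates over {0, 1}; the circuit described in the abstract realizes the corresponding products and sums with current-mode adders, subtractors and multipliers.

```python
from itertools import product

# Arithmetic (power-series) forms of the basic logic functions over {0, 1}.
NOT  = lambda x:    1 - x
AND  = lambda x, y: x * y
OR   = lambda x, y: x + y - x * y
NAND = lambda x, y: 1 - x * y
XOR  = lambda x, y: x + y - 2 * x * y

# Verify the expressions against the Boolean truth tables.
for x, y in product((0, 1), repeat=2):
    assert AND(x, y)  == (x and y)
    assert OR(x, y)   == (x or y)
    assert NAND(x, y) == 1 - (x and y)
    assert XOR(x, y)  == (x ^ y)
assert NOT(0) == 1 and NOT(1) == 0
print("power-series forms match the truth tables")
```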
This paper presents the design and analysis of a ring oscillator using the Cadence Virtuoso tool in 45 nm technology. A ring oscillator consists of an odd number of stages with a feedback path forming a closed loop in which each stage's output depends on the previous stage. In this paper, a nine-stage ring oscillator has been designed with a capacitor of 1 fF at each stage and simulated for various parameters such as delay, noise, jitter and power consumption. Power consumption, jitter and noise have been reduced in the nine-stage ring oscillator, and its periodic steady-state response is also observed. Power consumption is reduced by 18.9%.
This paper presents a VLIW-SIMD processor-based scalable architecture with a Data Flow Control (DC) engine and a Classifier Evaluation (CE) engine for parallel classifier-node computation, accelerating the object detection algorithm for embedded applications. The underlying algorithm is the popular one developed by Viola and Jones [1] using Haar-like features. The architecture has a 4-slot very long instruction word (VLIW) processor core and internal memories to hold the integral-image data and classifier data. Each VLIW instruction packet has two load/store instruction slots and two 4-way SIMD instruction slots. Generic SIMD instructions are added to the instruction set to compute various classifier parameters in parallel, and nodes in two levels of the classifier tree are computed in parallel with the proposed instructions. Fixed-point arithmetic is used to obtain faster clock rates at less area. The performance of the architecture is tested using the training data from OpenCV to detect frontal faces in a set of images. A single instance of the proposed architecture is able to detect faces in CIF-resolution images at a rate of 8.33 fps running at a 500 MHz clock frequency, which is a 1.6X performance gain over the OpenCV software version running on a Pentium 4, 2 GHz processor. Two instances of the classifier evaluation engine give a performance of 15.47 fps. The proposed engine is designed using Verilog HDL and synthesized using Synopsys Design Compiler with 28 nm TSMC target libraries. The clock period is set to 2 ns and the timing constraints are met.
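The quantity such classifier-evaluation hardware computes repeatedly is the rectangle sum over an integral image. The sketch below is a plain software reference for that arithmetic, not the proposed VLIW-SIMD engine, with a simple two-rectangle Haar-like feature as an example.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[0:y, 0:x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w x h rectangle with top-left corner (x, y): four table lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_haar(ii, x, y, w, h):
    """A simple two-rectangle (edge) Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

if __name__ == "__main__":
    img = np.arange(36, dtype=np.int64).reshape(6, 6)
    ii = integral_image(img)
    assert rect_sum(ii, 1, 1, 3, 2) == img[1:3, 1:4].sum()
    print("feature value:", two_rect_haar(ii, 0, 0, 4, 4))
```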
Due to the fixed spectrum-allocation policy, bandwidth has become one of the scarcest resources in wireless communications. As a result, various advanced applications that would be very useful in the development of communication systems cannot be deployed. Current wireless network systems suffer from insufficient bandwidth utilization, yet the licensed or primary users (PUs) do not use their spectrum all the time. To enhance the efficiency of bandwidth usage, the concept of Cognitive Radio (CR), whose users are also called secondary users (SUs), has emerged as a new design paradigm. By detecting the spectrum holes in the PU band, and provided no intolerable interference is caused to licensed users, a new dynamic bandwidth-sharing strategy between primary and secondary users can be built on economic factors, so that bandwidth utilization and user satisfaction are enhanced dramatically. We have proposed a simulation model that allocates the free bandwidth of PUs to SUs with the aim of minimizing the overall bandwidth-allocation cost, and we have developed an algorithm for sharing the primary users' bandwidth with secondary users that minimizes this cost.
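As a toy illustration of cost-minimizing bandwidth sharing, and not the paper's simulation model, the following sketch assumes each primary user advertises a free-bandwidth amount and a per-unit cost, and serves secondary-user demands from the cheapest spectrum first.

```python
def allocate_bandwidth(pu_free, pu_cost, su_demand):
    """Greedy sketch: serve each secondary-user demand from the cheapest
    primary-user spectrum holes first, minimising total allocation cost.
    pu_free:   {pu_id: free bandwidth units}
    pu_cost:   {pu_id: cost per bandwidth unit}
    su_demand: {su_id: requested bandwidth units}"""
    free = dict(pu_free)
    allocation, total_cost = {}, 0.0
    cheapest_first = sorted(free, key=lambda p: pu_cost[p])
    for su, need in su_demand.items():
        for pu in cheapest_first:
            if need == 0:
                break
            grant = min(need, free[pu])
            if grant > 0:
                allocation.setdefault(su, {})[pu] = grant
                free[pu] -= grant
                total_cost += grant * pu_cost[pu]
                need -= grant
    return allocation, total_cost

if __name__ == "__main__":
    alloc, cost = allocate_bandwidth(
        pu_free={"PU1": 5, "PU2": 8},
        pu_cost={"PU1": 2.0, "PU2": 1.5},
        su_demand={"SU1": 6, "SU2": 4})
    print(alloc, cost)
```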
Signed-digit (SD) number systems provide the possibility of constant-time addition, in which inter-digit carry propagation is eliminated. In this paper, two classes of parallel adder are surveyed, together with an asynchronous adder, based on their delay, area and power characteristics. With the development of high-speed processors, a trade-off is always required between area and execution time to yield the most suitable implementation with low power consumption. We propose an optimized high-speed adder algorithm using signed-digit and hybrid signed-digit techniques. The resulting modified parallel hybrid signed-digit (MPHSD) adder offers higher speed and less area than conventional adders such as the ripple-carry adder and the carry-lookahead adder. The MPHSD adder requires a few more configurable logic blocks (CLBs) because of the redundant logic used to optimize execution time against area and power. The relative merits and demerits are also evaluated through a detailed analysis of cost and performance.
In this paper, low-power design techniques are proposed to minimize the standby leakage power in nanoscale CMOS very large scale integration (VLSI) systems using transistor gating technology. Low-power design reduces the power supply voltage, which requires the transistor threshold voltages to be reduced as well in order to maintain throughput and noise margins. However, this increases the subthreshold leakage current of the p- and n-MOSFETs, which begins to offset the power savings obtained from supply-voltage reduction and to raise the overall power of digital circuits. In the transistor gating technique, two sleep transistors, a PMOS and an NMOS, are inserted between the supply voltage and ground: the PMOS between the pull-up network and the network output, and the NMOS between the pull-down network and ground. During standby mode both sleep transistors are turned off. By applying this technique, leakage current is reduced by 17.58% and power by 24.38%. The tool used for schematic simulation is Cadence Virtuoso, and the simulation technology is 45 nm.
A multiplexer (MUX) is a combinational logic circuit used in applications where data must be switched from multiple sources to a single destination as a unidirectional device; it selects one of a number of input signals. This paper presents the simulation of different 2:1 multiplexer designs and their comparative analysis with respect to parameters such as power supply voltage, operating frequency, temperature and area efficiency, as well as their application in a 1-bit full adder cell. All simulations have been carried out on the Cadence Virtuoso tool at 180 nm technology.
Power consumption plays an important role in any integrated circuit and is listed as one of the top three challenges in the International Technology Roadmap for Semiconductors. In any integrated circuit, the clock distribution network and the flip-flops consume a large amount of power, as they make the maximum number of internal transitions. In this paper, various techniques for implementing flip-flops with low-power clocking systems are analyzed. Among these techniques, the clocked pair shared flip-flop (CPSFF) consumes less power than the conditional data mapping flip-flop (CDMFF), the conditional discharge flip-flop (CDFF) and the conventional double-edge-triggered flip-flop (DEFF). We propose a novel CPSFF using the multi-threshold-voltage CMOS (MTCMOS) technique, which reduces power consumption by approximately 20% to 70% compared with the original CPSFF. In addition, to build a complete clocking system, double-edge triggering and low-swing clocking can easily be incorporated into the new flip-flop.
Due to their lack of vision, blind people cannot easily access the latest information and technologies that could provide them with alternative means of communication. Modern technological enhancements are often unaffordable for visually impaired people because of their high cost and limited portability. It has therefore become necessary to develop a low-cost, portable and fast Braille system for the visually impaired. This paper introduces a new communication channel for deaf-blind and visually impaired people which consists of three subsystems providing different facilities to improve their communication skills: i) a portable, low-cost, refreshable Body-Braille system for displaying Braille characters using six micro-vibrators, ii) an easy Braille writer for writing Braille characters, and iii) a remote communication system through SMS. This new communication system is cheap, portable, fast and accurate.
Effective procurement is one of the key challenges in the supply chain management process. With increasing awareness among customers, manufacturers place greater emphasis on product quality. To produce a quality product at an economical rate, attributes other than price alone often play a more important role in deciding the procurement of raw materials. The multi-attribute reverse auction mechanism has proved very effective in addressing this challenge efficiently, but the inherent problem of optimal supplier selection within a cobweb of constraints is often very difficult to solve, and any proposed solution needs to be technologically feasible and to deliver fast, efficient and effective results. One approach that promises higher market efficiency through effective exchange of information about buyers' preferences and suppliers' offerings is the Analytic Hierarchy Process (AHP). This paper presents a framework for purchasing a single item online from multiple suppliers using the AHP method. A case study with numerical analysis is presented, illustrating the AHP method of supplier selection.
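A minimal sketch of the AHP computation referred to above is given below, assuming a standard pairwise-comparison matrix over hypothetical criteria. The priority weights come from the principal eigenvector and the consistency ratio uses Saaty's random-index values; the paper's full supplier-selection framework is not reproduced.

```python
import numpy as np

# Saaty's random consistency index for matrix sizes 1..9
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(pairwise):
    """Priority weights and consistency ratio for a pairwise-comparison matrix."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                      # normalised priority vector
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)         # consistency index
    cr = ci / RI[n] if RI[n] else 0.0    # consistency ratio (acceptable if < 0.1)
    return w, cr

if __name__ == "__main__":
    # Hypothetical buyer preferences over price, quality and delivery time
    criteria = [[1,   3,   5],
                [1/3, 1,   3],
                [1/5, 1/3, 1]]
    weights, cr = ahp_weights(criteria)
    print("criteria weights:", weights.round(3), "CR:", round(cr, 3))
```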
The goal of this paper is to design a mutual authentication scheme that supports secure data-service migration among multiple registered devices (such as a PC, laptop or smartphone), so that each user can use the most suitable device whenever he or she wishes. Single-factor authentication depends on the user's knowledge of some secret, i.e. a password or a PIN, but it is not secure enough; two-factor authentication provides a stronger alternative. This paper proposes a mutual authentication scheme for session transfer among registered devices using a smart card. Its security relies on the hardness of the discrete logarithm problem and a one-way hash function. A random nonce is employed in place of a timestamp to avoid the cost of clock synchronization between the user and the server. Security analysis shows that the scheme is immune to the presented attacks and provides the essential security features.
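The nonce-based, hash-only handshake below is a generic illustration of mutual authentication without timestamps; it is not the paper's smart-card scheme, which additionally relies on the discrete logarithm problem. The shared key and message fields are assumptions made only for the sketch.

```python
import hashlib, hmac, secrets

def h(*parts):
    """One-way hash over the concatenated byte strings."""
    return hashlib.sha256(b"|".join(parts)).hexdigest()

# Shared long-term secret established at registration (stands in for the smart-card secret).
shared_key = secrets.token_bytes(32)

# 1. User -> Server: identity and a fresh random nonce (replaces a timestamp,
#    so no clock synchronisation is needed).
user_id, n_user = b"alice", secrets.token_bytes(16)

# 2. Server -> User: its own nonce plus a proof binding both nonces to the key.
n_server = secrets.token_bytes(16)
server_proof = h(shared_key, n_user, n_server, b"server")

# 3. User verifies the server, then returns its own proof.
assert hmac.compare_digest(server_proof, h(shared_key, n_user, n_server, b"server"))
user_proof = h(shared_key, n_server, n_user, b"user")

# 4. Server verifies the user; both sides derive the same session key.
assert hmac.compare_digest(user_proof, h(shared_key, n_server, n_user, b"user"))
session_key = h(shared_key, n_user, n_server, b"session")
print("mutual authentication succeeded, session key:", session_key[:16], "...")
```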
Auto-sector companies face many problems in supply chain management (SCM), such as the need for efficient management and for a great deal of coordination between departments. The manual SCM systems that exist in these companies are prone to errors and therefore lead to mismanagement within the company. To address this problem, a framework model called MAS-SCM AS is proposed. The MAS-SCM AS model consists of multiple agents that perform the tasks to be carried out in the SCM. These agents are effective because they are automated and hence less prone to mistakes; the few mistakes they do make ultimately originate at the human end, i.e. programming mistakes (Bih-Ru Lea, 2003).
Quickest path problems (QPPs) have gained considerable attention from researchers over the last two decades due to their enormous applications in a variety of networks, such as communication and transportation networks. Most such networks are classified as stochastic flow networks because their states change with time. Algorithms have been proposed in the literature to evaluate the probability that a specified amount of data can be transmitted from a source to the sink through a stochastic flow network within a given amount of time. In order to reduce transmission time while maintaining system reliability, Lin [27, 28] proposed using multiple disjoint paths for transmission of data/items by distributing the load into two or more segments. The algorithm presented in [26] efficiently finds the most reliable pair of paths among all available pairs from a source to the sink. However, determining the most reliable pair of disjoint paths consumes a considerable amount of time, and it is practically infeasible to wait for the pair of disjoint paths with the highest probability before transmitting the data/items. In view of this, we propose a probability threshold: pairs of paths that cross the threshold are considered for communication of data. Instead of a globally optimized solution, we focus on minimizing the time needed to compute the system reliability. We also present a comparison of the performance of the proposed method with the method presented in [27] for different graphs. The results show a marked improvement in the system overhead for reliability computation without much compromise on the quality of service, since each selected pair satisfies a minimum reliability constraint. The method is particularly useful for applications involving large networks.
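A minimal sketch of the threshold idea is shown below. It assumes each candidate path already has a reliability value and that a pair's joint reliability is the product of the two (independent failures); both are simplifying assumptions for illustration, not the algorithms of [26] or [27].

```python
from itertools import combinations

def first_pair_above_threshold(paths, reliability, threshold):
    """Return the first edge-disjoint pair of paths whose joint reliability
    crosses the threshold, instead of scanning all pairs for the global optimum.
    paths:       {name: set of edges}
    reliability: {name: probability the path can carry the demand in time}"""
    for p, q in combinations(paths, 2):
        if paths[p].isdisjoint(paths[q]):          # edge-disjoint pair
            joint = reliability[p] * reliability[q]  # assumed independence
            if joint >= threshold:
                return (p, q), joint
    return None, 0.0

if __name__ == "__main__":
    paths = {"P1": {"ab", "bc", "cd"},
             "P2": {"ae", "ed"},
             "P3": {"ab", "bd"}}
    rel = {"P1": 0.92, "P2": 0.95, "P3": 0.90}
    pair, joint = first_pair_above_threshold(paths, rel, threshold=0.85)
    print(pair, round(joint, 3))
```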
In this paper we model the drain current of a nanocrystalline silicon thin-film transistor (nc-Si TFT) based on an exponential distribution of tail states. The degradation of mobility due to acoustic phonons and interface roughness is taken into account. The model has been simulated for two different aspect ratios (W/L = 400 μm / 20 μm and W/L = 400 μm / 8 μm); the shapes of the resulting curves are similar to the experimental ones, validating the developed model.
Here we propose a novel low-power 8-T SRAM cell and compare its stability with the conventional 6-T standard cell. In the proposed structure we use two voltage sources, one connected to the bit line and the other to the bit-bar line, to reduce the voltage swing during write "0" and write "1" operations. We use 65 nm CMOS technology with a 1 V power supply, and simulations are carried out in Microwind 3.1 using the BSIM4 model. We analyze the stability of the proposed SRAM cell using the write static noise margin, the bit-line voltage write margin and the word-line voltage write margin. The two extra voltage sources control the voltage swing at the output node and improve the noise margin during the write operation. The simulation results and the comparison with the conventional 6-T SRAM demonstrate the superiority of the proposed SRAM structure.
In this paper, an analytical crosstalk and delay model for RLC interconnects is developed based on the first and second moments, taking the inductance effect into account in the delay estimation for coupled interconnect lines. Crosstalk and delay estimates using the proposed model are within 1% of SPICE-computed values across a wide range of interconnect parameter values. The model improves on the accuracy of the Elmore model (which is independent of inductance), yet the estimate is as easy to compute as the Elmore delay. The proposed analytical model speeds up delay estimation by orders of magnitude compared with a simulation methodology such as SPICE, which gives the most accurate insight into arbitrary interconnect structures but is computationally expensive.
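For reference, the Elmore (first-moment) delay against which the proposed RLC model is compared can be computed as below for a simple RC ladder. This is the baseline formula only, not the authors' two-moment crosstalk/delay model, and the segment values are hypothetical.

```python
def elmore_delay_chain(R, C):
    """First-moment (Elmore) delay at the far end of an RC ladder:
    each resistor R[i] sees the total capacitance downstream of it.
    R and C are per-segment resistance and capacitance lists of equal length."""
    assert len(R) == len(C)
    delay = 0.0
    for i in range(len(R)):
        downstream_cap = sum(C[i:])   # capacitance charged through R[i]
        delay += R[i] * downstream_cap
    return delay

if __name__ == "__main__":
    # Hypothetical 4-segment interconnect: 50 ohm and 20 fF per segment
    R = [50.0] * 4
    C = [20e-15] * 4
    print("Elmore delay = %.2f ps" % (elmore_delay_chain(R, C) * 1e12))
```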
This paper presents the design and optimization of DGS-based pentagonal and circular patch antennas suitable for wireless communication systems. The obtained results are impressive due to the improved bandwidth, which covers a wide frequency range with a return loss of less than -10 dB. It is also found that the pentagonal antennas have better bandwidth than the circular antennas.
The realization of logic gates using a CMOS Gilbert multiplier cell is introduced. Power-series expressions of logic functions are implemented using the Gilbert multiplier cell and a pool circuit. To illustrate the proposed technique, a circuit for simultaneous realization of the logic functions NOT, AND, OR, XOR, NAND and NOR is considered. PSPICE simulation results with ±5 V power supplies are given to demonstrate the feasibility of the proposed circuit, which is expected to be useful in analog and digital integrated circuits.
The communication switching system enables universal connectivity, which is realized when any entity in one part of the world can communicate with any other entity in another part. In many ways telecommunication acts as a substitute for increasingly expensive physical transportation. Telecommunication links and switching were mainly designed for voice communication, and traffic-handling capacity is an important element of service quality that plays a basic role in design choices. A microprocessor/microcontroller (MPMC) system can handle sequential operations with high flexibility, while a Field Programmable Gate Array (FPGA) can handle concurrent operations at high speed in a small area, so combining the features of both can enhance system performance. A high-performance hybrid telephone network system (HTSS) is designed using a combination of stored program control (SPC) and VLSI technology. The touch-tone receiver follows the DTMF (dual-tone multi-frequency) concept, and a time-division multiplexing chip is used for call establishment for both inter- and intra-communication.
In this paper, we present the design of low-power tunable analog circuits using double-gate (DG) MOSFETs, in which the front-gate output is adjusted by a control voltage on the back gate. DG devices can improve performance and reduce power dissipation when the front and back gates are controlled independently. We analyze tunable analog circuits including a CMOS amplifier pair, a Schmitt trigger circuit and an operational transconductance amplifier, and illustrate the gain, phase and output responses of these circuits. These circuit blocks are used in low-noise, high-performance integrated circuits for analog and mixed-signal applications. The simulation results are obtained with the Cadence Virtuoso tool in 45 nm complementary metal oxide semiconductor (CMOS) technology.
In this work we investigate the influence of segment-length distribution on area and speed performance in unidirectional island-style FPGA routing architectures using the VPR 5.0.2 tool suite. Modern commercial FPGAs use a combination of segments of varying lengths in their routing architectures. We present a detailed analysis and comparison of the performance of single-segment and mixed-segment distributions, where the latter use a combination of two segment lengths per channel in the routing network. The main goal of this work is to determine the optimal distribution of segment lengths for superior performance. We also investigate the impact of process-technology scaling on the performance parameters, namely area and delay, across the various segment distributions by performing simulations at three technology nodes: 45 nm, 90 nm and 130 nm. Our experimental results show that mixed-segment and single-segment distributions deliver similar performance across technologies. However, mixed-length segment distributions show more consistency and uniformity in area and delay values, as opposed to the steep variation observed across single segment lengths. This gives FPGA vendors considerable flexibility in the choice of segment-length distribution when using mixed-segment distributions.
Occlusion of objects during spatio-temporal phenomena such as navigation is common for intelligent autonomous systems operating in a domain with real-world objects. Enabling such systems to fill gaps in observation, as humans do, can enhance their cognition and adaptability many times over. Most cognitive processes are contextual, and the role of context in everyday commonsense reasoning cannot be ignored. Furthermore, everyday spatial and temporal reasoning is qualitative rather than quantitative. Building on work in the area of qualitative spatial and temporal reasoning, we present an approach for the completion of qualitative spatio-temporal explanations using context.
Double patterning lithography (DPL) has emerged as the most promising next-generation technology according to the ITRS roadmap. Its main objective is to decompose the entire layout, placing the features of each layer onto two masks with balanced density. The technology essentially requires two features to be placed on different masks whenever they are closer than a pre-defined threshold distance. In this paper, we formulate the DPL decomposition problem as a graph-theoretic problem and, without using any heuristic function, propose a solution with the help of the dual graph. This technique not only gives a new method for solving the DPL problem but also maintains a highly balanced density. Experiments demonstrate that we achieve strong performance in resolving conflicts while maintaining a high density balance.
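The baseline graph formulation can be illustrated by a BFS two-colouring of the conflict graph, with each connected component flipped to whichever mask assignment keeps the two densities balanced. This is a generic sketch of the decomposition problem, not the dual-graph method proposed in the paper.

```python
from collections import deque

def decompose_layout(features, conflicts):
    """Assign each feature to one of two masks so that no two conflicting
    features (closer than the threshold) share a mask: BFS two-colouring.
    Returns (mask_of_feature, unresolved_conflict_edges)."""
    adj = {f: set() for f in features}
    for a, b in conflicts:
        adj[a].add(b)
        adj[b].add(a)

    mask, unresolved, counts = {}, [], [0, 0]
    for start in features:
        if start in mask:
            continue
        component, colour = [], {start: 0}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            component.append(u)
            for v in adj[u]:
                if v not in colour:
                    colour[v] = 1 - colour[u]
                    queue.append(v)
                elif colour[v] == colour[u]:
                    unresolved.append((u, v))   # odd cycle: cannot be 2-coloured
        # flip the whole component if that balances the two mask densities better
        zeros = sum(1 for f in component if colour[f] == 0)
        flip = (counts[0] + zeros) > (counts[1] + len(component) - zeros)
        for f in component:
            m = colour[f] ^ flip
            mask[f] = m
            counts[m] += 1
    return mask, unresolved

if __name__ == "__main__":
    feats = ["A", "B", "C", "D", "E"]
    confl = [("A", "B"), ("B", "C"), ("C", "D")]
    print(decompose_layout(feats, confl))
```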
Interference caused by bad weather is rather complex. It produces intensity changes in images and videos that can severely impair the performance of outdoor vision systems deployed for many applications, including human-operated transport. Subsequent processing algorithms such as segmentation, feature detection, tracking, object recognition and stereo correspondence are greatly influenced by the quality of the image data. To make outdoor vision systems robust and resilient under varying weather conditions, it is necessary to model the degradation effects and develop an appropriate methodology to account for these aberrations. This paper therefore presents a transfer-function-based approach using an atmospheric attenuation model to predict the degradation in captured images. The model computes the variations in environmental irradiance and the airlight used in the study of atmospheric scattering in the form of a transfer function. This knowledge will ultimately help to build appropriate image correction and restoration algorithms for outdoor all-weather vision applications.
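The attenuation-plus-airlight model on which such a transfer function builds can be written directly. The sketch below uses the standard single-scattering form with hypothetical depth and scattering-coefficient values; it is not the paper's transfer-function derivation.

```python
import numpy as np

def weather_degraded_irradiance(scene_radiance, depth, beta, airlight):
    """Standard single-scattering degradation model (attenuation + airlight):
        E(x) = R(x) * exp(-beta * d(x)) + A * (1 - exp(-beta * d(x)))
    scene_radiance: clear-day radiance R(x)
    depth:          per-pixel scene depth d(x) in metres
    beta:           atmospheric scattering (extinction) coefficient
    airlight:       horizon brightness A"""
    transmission = np.exp(-beta * depth)
    return scene_radiance * transmission + airlight * (1.0 - transmission)

if __name__ == "__main__":
    # Hypothetical 1-D scene: radiance 0.2..0.9, depth growing from 10 m to 500 m
    radiance = np.linspace(0.2, 0.9, 5)
    depth = np.linspace(10.0, 500.0, 5)
    foggy = weather_degraded_irradiance(radiance, depth, beta=5e-3, airlight=1.0)
    print(foggy.round(3))
```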
It is well accepted by software professionals and researchers alike that software systems have to evolve in order to survive. Software evolution is a crucial activity for software organizations, and one of the challenging parts of software change management is validating the change. Regression testing is used to validate changed software. In this paper we propose an approach for validating requested changes at an earlier stage of development: changes are validated after design modification, so that any invalid change can be corrected before the actual change implementation. This reduces the regression-test effort, since after the change is implemented we only need to verify the change itself.
With the increasing demand for spectrum, various computational methods and algorithms have been proposed in the literature. In view of these facts and of the spectrum-shaping capability of FRFT-based windows, we propose a closed-form solution for the Bartlett window in the fractional domain. This may be useful for better analysis of upcoming generations of mobile communication based on the OFDM technique, and it is also useful for real-time processing of non-stationary signals. To the best of our knowledge, the closed-form solution presented in this paper has not previously been reported in the literature.
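For context, the ordinary time-domain Bartlett (triangular) window from which the fractional-domain analysis starts is reproduced below; the closed-form fractional-domain expression itself is the paper's contribution and is not reproduced here.

```python
import numpy as np

def bartlett(N):
    """Standard time-domain Bartlett (triangular) window of length N:
        w[n] = 1 - |n - (N-1)/2| / ((N-1)/2),  n = 0 .. N-1"""
    n = np.arange(N)
    centre = (N - 1) / 2.0
    return 1.0 - np.abs(n - centre) / centre

if __name__ == "__main__":
    w = bartlett(9)
    assert np.allclose(w, np.bartlett(9))   # matches NumPy's reference window
    print(w.round(3))
```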
We developed a robotic arm for a master-slave system to support tele-operation of an anthropomorphic robot that performs remote dexterous manipulation tasks. In this paper, we describe the design and specifications of the experimental master-slave arm setup and demonstrate the feasibility of tele-operation using an exoskeleton. The paper explores the design decisions and trade-offs made in achieving this combination of price and performance. We developed a 6-degree-of-freedom slave arm and an exoskeleton master for controlling the robotic arm.
Modern organizations interact with their partners through digital supply-chain business processes for producing and delivering products and services to consumers. A partner in this supply chain can be a sub-contractor to whom work is outsourced. Each partner uses data, generates data and shares data with the other partners, and all of this collaboration contributes to producing and delivering the products or services. The main security challenge in supply chains is the unauthorized disclosure and leakage of information shared among the partners. Current approaches for protecting data in supply chains rely on standards, service-level agreements and legal contracts. We propose an auditing-based approach for protecting shared data in digital supply chains.
A major problem in the current scenario is increasing power consumption coupled with rapidly depleting energy resources, especially in developing countries like India. There is therefore a dire need to properly monitor power usage and take the necessary steps to optimize it. To achieve this goal, a proper system must be developed that can report energy requirements and usage at all levels, together with automation that reduces unnecessary usage and losses. Towards this goal, we design an architecture that generates models of power usage in real time while requiring minimal changes to existing infrastructure.
This paper presents an overview of recent developments in Adaptive Frequency Selective Surface (AFSS) integrated patch antennas. The limitations of FSS-based microstrip antennas, mainly narrow impedance and gain bandwidth, have been overcome by using adaptive FSS (AFSS), because integrating lumped-element devices into the FSS offers several advantages such as radiation-bandwidth enlargement, control of the resonance frequency and high directivity. The study covers various FSS structures loaded with different electronic components and devices such as capacitors, inductors, varactors and PIN diodes. The simulation results presented reveal the effect of these devices on the transmission and reflection characteristics as well as the overall performance of the antennas. Frequency tuning over 500-700 MHz around 2.5 GHz has been reported, along with a total antenna area reduction of 55%.
The convolutional encoder and Viterbi decoder are basic and important blocks in any Code Division Multiple Access (CDMA) system. They are widely used in communication systems due to their error-correcting capability, but performance degrades with variable constraint length. In this context, and to provide a detailed analysis, this paper deals with the implementation of a convolutional encoder and Viterbi decoder using a system on a programmable chip (SOPC). It uses variable constraint lengths of 7, 8 and 9 bits for 1/2 and 1/3 code rates. Analysis of the Viterbi algorithm shows that our implementation has a better error rate for the 1/2 code rate than for 1/3. The reduced bit error rate with increasing constraint length shows an increase in efficiency and better utilization of resources such as bandwidth and power.
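A software reference for the encoder side is sketched below. It implements a generic rate-1/2 convolutional encoder parameterized by octal generator polynomials (the widely used K = 7 pair (171, 133) is shown); it is not the SOPC implementation described in the paper, and higher constraint lengths or rate 1/3 fit the same template by changing K or adding a third generator.

```python
def conv_encode(bits, generators, K):
    """Rate-1/len(generators) convolutional encoder.
    bits:       iterable of 0/1 message bits
    generators: generator polynomials given as octal strings (MSB = current bit)
    K:          constraint length (shift-register length)"""
    taps = [int(g, 8) for g in generators]
    state = 0                                   # K-1 previous bits
    out = []
    for b in bits:
        reg = (b << (K - 1)) | state            # current bit + history
        for g in taps:
            out.append(bin(reg & g).count("1") & 1)   # parity of the tapped bits
        state = reg >> 1                        # shift the register
    return out

if __name__ == "__main__":
    message = [1, 0, 1, 1, 0, 0, 1]
    coded = conv_encode(message, generators=("171", "133"), K=7)
    print(coded)
```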
This paper describes a power- and area-efficient carry select adder (CSA). First, the CSA is one of the fastest adders used in many data-processing systems to perform fast arithmetic operations. Second, the CSA sits between the small-area but long-delay ripple carry adder (RCA) and the larger-area but shorter-delay carry look-ahead adder. Third, there is still scope to reduce the CSA's area by introducing an add-one scheme. In the modified carry select adder (MCSA) design, a single RCA and a BEC (binary to excess-1 converter) are used instead of dual RCAs to reduce area and power consumption with a small speed penalty. The area is reduced because the number of logic gates needed for a BEC is smaller than that needed for an RCA, so the BEC logic yields a large silicon-area reduction when designing MCSAs for large bit widths. MCSA architectures are designed for 8, 16, 32 and 64 bits. The design has been synthesized at 90 nm process technology targeting a Xilinx Spartan-3 device. Comparison of the modified CSA with the conventional CSA shows better results and improvements.
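The single-RCA-plus-BEC idea can be checked with a small bit-level sketch: one ripple-carry addition produces the carry-in-0 result, the BEC adds one to it to obtain the carry-in-1 result, and the block carry-in selects between them. This is an illustrative behavioural model only, not the synthesized design.

```python
def rca(a_bits, b_bits, cin=0):
    """Ripple-carry addition of two equal-length bit lists (LSB first)."""
    out, carry = [], cin
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)
        carry = (a & b) | (a & carry) | (b & carry)
    return out, carry

def bec(bits, cin=1):
    """Binary-to-excess-1 converter: adds one to the input vector (LSB first),
    using far fewer gates than a second ripple-carry adder."""
    out, carry = [], cin
    for b in bits:
        out.append(b ^ carry)
        carry = b & carry
    return out, carry

def mcsa_block(a_bits, b_bits, block_cin):
    """One modified carry-select block: a single RCA computes the cin = 0 result,
    the BEC derives the cin = 1 result, and the real block carry-in selects."""
    sum0, c0 = rca(a_bits, b_bits, 0)     # result assuming carry-in 0
    sum1, c1_extra = bec(sum0)            # result assuming carry-in 1
    c1 = c0 | c1_extra
    return (sum1, c1) if block_cin else (sum0, c0)

if __name__ == "__main__":
    # 4-bit block example: 0b1011 (11) + 0b0110 (6) with an incoming carry of 1
    a = [1, 1, 0, 1]                      # LSB first
    b = [0, 1, 1, 0]
    s, cout = mcsa_block(a, b, block_cin=1)
    value = sum(bit << i for i, bit in enumerate(s)) + (cout << len(s))
    assert value == 11 + 6 + 1
    print(s, cout, value)
```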