Technical Papers

  • Alongside the fast-growing economy of a developing country such as India, a rising number of crimes and criminal offences has been reported. In many cases the suspects escape freely because of insufficient testimony and the lack of timely communication links. Suppose, for instance, that a bank robbery takes place and no one is able to press the emergency buzzer or dial the nearest police station in time; by the time the police are informed, the gang is beyond reach, leaving no clue behind. The same is observed in many other cases: rash driving and hit-and-run, eve teasing and molestation of women on less crowded streets, pickpocketing and blackmail at roadside Automatic Teller Machines, and more. Wherever crime is involved, quality surveillance and expert actuation are needed to counter such activities with minimum human intervention. Our goal is to design an intelligent device that continuously keeps an eye on any activity happening around it and, on the administrator's request, can track any suspect, be it an object or a human being, with minimum power consumption and minimal human effort. As an actuator, it sends a signal to the nearest traffic control or police station, reporting the state of the crime along with its location and the direction in which it is heading. For a real-time realization of the concept, the flying device is designed as a robot that looks and flies like a bird, the processor is an Arduino Uno, image processing is performed with MATLAB, the communication link between the administrator and the device uses an RFID system, and the link between the device and police control is achieved with GPS/GSM technology.

  • In this research paper several interesting themes related to the Hopfield network are explored and concrete results are derived: for instance, convergence in the partially parallel mode of operation, and convergence when the synaptic weight matrix W is row diagonally dominant. A structured recurrent Hopfield network whose W is a companion matrix is also studied, as are the dynamics when the activation function is a step function and the state space is an asymmetric hypercube. Finally, several experimental results are presented (see the illustrative sketch below).
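
    The convergence results concern the standard discrete Hopfield update. Below is a minimal Python sketch (ours, not the paper's code) of that update iterated in a partially parallel mode with a step activation on {-1, +1}; the Hebbian weights, neuron groups and patterns are toy assumptions:

    ```python
    import numpy as np

    def hopfield_partial_parallel(W, b, x, groups, max_sweeps=100):
        """Iterate a discrete Hopfield network, updating one group of
        neurons at a time (partially parallel mode), until a fixed point."""
        x = x.copy()
        for _ in range(max_sweeps):
            changed = False
            for g in groups:                      # update each group synchronously
                field = W[g] @ x + b[g]
                new = np.where(field >= 0, 1, -1) # step activation on {-1, +1}
                if not np.array_equal(new, x[g]):
                    changed = True
                x[g] = new
            if not changed:
                return x                          # converged to a fixed point
        return x

    # Toy example: two stored patterns via Hebbian weights, zero diagonal.
    p1 = np.array([1, -1, 1, -1]); p2 = np.array([1, 1, -1, -1])
    W = (np.outer(p1, p1) + np.outer(p2, p2)).astype(float)
    np.fill_diagonal(W, 0.0)
    x0 = np.array([1, -1, -1, -1])
    print(hopfield_partial_parallel(W, np.zeros(4), x0, groups=[[0, 1], [2, 3]]))
    ```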

  • Species distribution models support the development of strategies and policies for species sustainability. However, predicting species distributions analyzes and produces a considerable amount of data, which can take hours or days to compute. The prediction process, including the modeling and projection stages, requires efficient computational tools and good parallelization strategies to reduce execution time. This paper presents a parallel solution for predicting species distribution through the parallelization of the BIOMOD2 platform. The parallel application can run on a multi-core computer or a cluster. Besides decreasing execution time, it aims to reduce resource idleness by optimizing the use of processing resources. The experiments demonstrate that the prediction process can be reduced from 106 minutes to 10 minutes using 20 cores of a cluster composed of five nodes.

  • Matrix multiplication is an operation used in many algorithms, with applications ranging from image processing and signal processing to artificial neural networks and linear algebra. This work showcases matrix multiplication strategies that are less time- and processor-intensive by handling memory accesses effectively. The paper also touches upon the advantages of OpenMP, a multiprocessing toolkit, to show the effect of parallelizing matrix multiplication (see the illustrative sketch below).
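
    A minimal Python sketch of the memory-access idea (the paper itself works with OpenMP in a compiled setting): tiling the multiplication so each block is reused while it is cache-resident. In an OpenMP version, the outer tile loop would typically carry a `#pragma omp parallel for`; everything below is illustrative, not the paper's code:

    ```python
    import numpy as np

    def matmul_blocked(A, B, block=64):
        """Cache-blocked (tiled) matrix multiply: operating on block x block
        tiles keeps each tile resident in cache while it is reused."""
        n, k = A.shape
        k2, m = B.shape
        assert k == k2
        C = np.zeros((n, m))
        for i0 in range(0, n, block):
            for k0 in range(0, k, block):
                for j0 in range(0, m, block):
                    C[i0:i0+block, j0:j0+block] += (
                        A[i0:i0+block, k0:k0+block] @ B[k0:k0+block, j0:j0+block]
                    )
        return C

    A = np.random.rand(256, 256); B = np.random.rand(256, 256)
    assert np.allclose(matmul_blocked(A, B), A @ B)
    ```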

  • Due to the presence of data streams in many applications such as banking, sensor networks, and telecommunications, data stream mining has gained increased attention. A data stream is a continuous, ordered sequence of data instances arriving at a rapid rate. One of the key challenges in learning from data streams is the detection of concept drift, i.e., changes over time in the data distribution underlying the stream. Since drift can be either gradual or sudden, several algorithms have been put forward to detect the different kinds, but most of them handle only one kind, and their output degrades when different types of drift are mixed. To solve this issue, a single system is needed that can handle all drifts simultaneously. In this paper, we propose a system that detects both kinds of drift efficiently by combining features of an online classifier with those of a block-based classifier. We further analyzed drifts and found missing attribute values to be a root cause; our system handles missing values in different ways for improved performance.

  • The era of networking has moved from dedicated physical network devices to the abstraction of the functional software components of those devices, which can run as software images on any hardware platform. Routers are vital components in any networking environment, routing information to different networks spread over a geographical area. VyOS is a software router developed by Vyatta that provides the functionality of a router on a generic hardware platform. It was identified that the Wi-Fi module is not integrated with the Vyatta software router, and that websites using secured protocols, such as hypertext transfer protocol secure (HTTPS), are allowed by default. In this paper, these issues are addressed: a suitable driver for the Wi-Fi module is developed and implemented, along with a control mechanism for secured protocols to allow or block access to websites. Test cases are developed to block and allow specific websites. Results show that websites are blocked successfully and that the integration of the Wi-Fi module with the Vyatta software router functions as expected.

  • To study weather phenomena over a landscape bounded by a regional boundary, a decision support system is needed for optimally placing the radars of a Weather Radar Network (WRN). Optimization is to be achieved in terms of higher area coverage by the radar beam, which is obstructed by ground elevations in mountainous regions, and lower cost of radar installations. A traditional exhaustive search is not useful here, as it would be computationally infeasible to find the optimum positions among a huge number of possible radar sites. This work applies Particle Swarm Optimization (PSO) to this complex multi-dimensional optimization problem. The approach considers several factors, such as radar beam blockage due to complex terrain, limitations of radar range for low-level ground-based coverage, radar locations confined to a regional boundary, installation cost in terms of the number of radars, and each radar's proximity to the nearest road network. The PSO-based approach has the unique strength of addressing multiple dimensions of the optimization problem together, and it provides a simple way to reach an optimal solution in a relatively short span of time (see the illustrative sketch below).
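
    A minimal, generic PSO loop (not the paper's implementation) showing the mechanics the abstract relies on; the cost function, bounds and hyperparameters are toy assumptions standing in for the actual WRN siting objective:

    ```python
    import numpy as np

    def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
            lo=-10.0, hi=10.0, seed=0):
        """Minimal particle swarm optimizer: each particle is pulled toward
        its personal best and the swarm's global best position."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(lo, hi, (n_particles, dim))
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_cost = np.apply_along_axis(cost, 1, x)
        g = pbest[np.argmin(pbest_cost)].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)            # keep particles in the region
            costs = np.apply_along_axis(cost, 1, x)
            improved = costs < pbest_cost
            pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
            g = pbest[np.argmin(pbest_cost)].copy()
        return g, pbest_cost.min()

    # Toy stand-in for the WRN objective: minimize distance to a "good" site.
    best, val = pso(lambda p: np.sum((p - 3.0) ** 2), dim=2)
    print(best, val)
    ```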

  • Organizations do not necessarily share a common view of a particular resume considered against a job description (JD). For the same role, some stress technical skills while others give importance to professional experience and domain expertise. Understanding these hiring patterns is becoming important in today's head-hunting. Traditional job search engines offer resumes that match the input keywords; as the number of search results grows, the problem of selecting the best profile surges, and the role of Human Resource (HR) staff in understanding hiring patterns and suggesting suitable, manually ranked profiles becomes more important. The proposed method learns the intelligence behind the hiring pattern and applies machine learning to accommodate it, offering a ranking system aligned with the hiring patterns. Trained models, combined with the traditional search method, predict the ranking and sorting of resumes with high accuracy and efficiently simplify the job of human resourcing.

  • Technology today has progressed to an extent where data can be collected for every granular aspect of a business in real time. Electronic devices, power grids and modern software all generate huge volumes of data, on the order of petabytes, exabytes and zettabytes. Securing existing big data environments is important due to increasing threats of breaches and leaks of confidential data and the increased adoption of cloud technologies, driven by the ability to buy processing power and storage on demand. This exposes traditional and new data warehouses and repositories to the outside world, at risk of being compromised by hackers and malicious outsiders and insiders. In this paper the current big data scenario is summarized along with the challenges faced and the security issues that need attention. Some existing approaches are also described to illustrate current and standard directions for solving these issues.

  • Phenomenal growth in the demand for Cloud Computing services has necessitated efficient and proactive management of data-center-hosted services with varied characteristics. One of the major issues concerning both cloud service providers and consumers is provisioning the highest level of Quality of Service (QoS) under unpredictable service demands while maintaining required revenue targets. Traditional Admission Control (AC) approaches, which are usually mathematical or analytical in nature, have limited performance when service types, QoS parameters and user demands become highly unpredictable. To this end, an opportunity exists to utilize the self-learning capabilities of Machine Learning (ML) to incorporate predictive and adaptive admission control of service requests without violating Service Level Agreements (SLAs) while simultaneously ensuring the providers' targeted revenue. This paper proposes, implements and evaluates a Bayesian Network based predictive modeling framework (termed BNSAC) to provide autonomic admission control of cloud service requests. In summary, the BN-based model learns the historical behavior of the system through various performance metrics (indicators) and predicts the desired unknown metric (e.g., an SLA parameter) for making admission control decisions. Simulated experimental results involving various service demand scenarios provide insights into the feasibility and applicability of the proposed approach for improving QoS in a cloud computing setup.

  • As the number of users on a cloud network increases, the consumption of compute, network and storage resources also increases, raising the cost of deployment, configuration and maintenance and hence the Capital Expenditure (CAPEX) of the organization providing the cloud network. Network Function Virtualization (NFV) is a technology that virtualizes network functionalities. This paper studies the influence of NFV on the CAPEX of cloud-based networks and compares it with a traditional implementation (without NFV) of such networks. A prototype NFV-based cloud network is developed and implemented, and based on test cases run on the prototype, the CAPEX of the resources used in both NFV-based and traditional implementations is studied and analyzed. RESTful web services are created for users of the cloud network to orchestrate and manage it. The results show that the NFV-based implementation reduces CAPEX compared with the traditional implementation, and that the orchestration mechanism reduces the complexity of managing the cloud network. A use case with a simple web server is developed to compare the performance of a system on the cloud with that of a physical system.

  • Organizing collected components for effective retrieval and search has become a fundamental problem in software reuse, and retrieval techniques and representations are interrelated. Component-based software can help achieve software quality enhancement along with efficiency analysis. Neural networks are an active research area applicable to various fields, yet only a few works have applied them to software engineering for identifying suitable reusable components. This paper throws some light on the research carried out in applying neural networks to identify reusable components and also proposes a new model for the same.

  • Nowadays, Speech Recognition has become a prominent and challenging research domain because of its vast usage. The factors affecting Speech Recognition include vocalization, pitch, tone, noise, pronunciation, frequency, finding where a phoneme starts and stops, loudness, speed, accent and so on. Research is ongoing to enhance the efficacy of Speech Recognition, which requires efficient models, algorithms and programming frameworks to analyze large amounts of real-time data. These algorithms and programming paradigms have to learn on their own to fit the model to massively evolving real-time data. Developments in parallel computing platforms open four major possibilities for Speech Recognition systems: improving recognition accuracy, increasing recognition throughput, reducing recognition latency and reducing the recognition training period.

  • This paper presents information pertaining to the damage to crops in recent years due to the growth of weeds. Weeds are among the most considerable menaces to native habitats and humankind: they destroy native habitats, crowd out native plants, decrease farm and forest yield, invade crops and can harm livestock. The solution to this problem is to prevent the growth of weeds, either by removing them or by using herbicides. In the proposed automated approach, a Support Vector Machine (SVM) classifier is utilized to determine whether a plant is a crop or a weed. Maize crops are continuously monitored by capturing images with a camera. To classify a plant as a crop or a weed, various features are extracted, among them shape, texture and color; statistical textural features such as intensity, mean, energy, entropy, standard deviation, smoothness and third moment are used. The dataset consists of 500 images of maize crop and an equal number of weed images. The accuracy of the SVM classifier, determined by K-fold cross-validation, is 82% (see the illustrative sketch below).
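
    A hedged sketch of the described pipeline using scikit-learn: the feature formulas follow standard definitions of the listed texture statistics, and the images here are synthetic stand-ins for the 500+500 maize/weed photographs (none of this is the paper's code):

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def texture_features(img):
        """Statistical texture features similar to those the paper lists:
        mean, energy, entropy, standard deviation, smoothness, third moment."""
        p, _ = np.histogram(img, bins=256, range=(0, 256), density=True)
        p = p[p > 0]
        mean, std = img.mean(), img.std()
        return np.array([
            mean,
            np.sum(p ** 2),                        # energy
            -np.sum(p * np.log2(p)),               # entropy
            std,
            1 - 1 / (1 + std ** 2),                # smoothness
            np.mean(((img - mean) / 255.0) ** 3),  # third moment
        ])

    # Hypothetical stand-in for the 500 crop + 500 weed grayscale images.
    rng = np.random.default_rng(0)
    crops = rng.normal(120, 20, (500, 32, 32)).clip(0, 255)
    weeds = rng.normal(90, 40, (500, 32, 32)).clip(0, 255)
    X = np.array([texture_features(im) for im in np.concatenate([crops, weeds])])
    y = np.array([0] * 500 + [1] * 500)            # 0 = crop, 1 = weed

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    print(cross_val_score(clf, X, y, cv=10).mean())  # K-fold accuracy estimate
    ```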

  • Healthcare data is increasingly being digitized, and the data collected from modern devices has reached a significant volume all over the world. In the US, UK and other European countries, healthcare data needs to be secured and Patient Health Records (PHR) need to be protected so that patients cannot be re-identified from basic information. Privacy of healthcare is an important aspect governed by healthcare acts (e.g., HIPAA), and hence the data needs to be kept from falling into the wrong hands or from being breached by malicious insiders. It is important to secure existing healthcare big data environments due to increasing threats of breaches and leaks of confidential data and the increased adoption of cloud technologies. In this paper the current healthcare security scenario in big data environments is summarized along with the challenges faced and the security issues that need attention, and some existing approaches are described to illustrate current and standard directions for solving them. Since healthcare governance in the US has a stronger focus on security and privacy than elsewhere today, the paper concentrates on Acts and privacy practices in the US context.

  • Requirement prioritization is very useful for making decisions about a product plan, but most of the time it is ignored; in many cases the product hardly attains its principal objectives due to improper prioritization. Increased emphasis on requirement prioritization and highly dynamic requirements make the management of composite services a time-consuming and difficult task. When a software project has rigid timelines, limited resources, but high client expectations, immediate deployment of the most vital and critical features becomes mandatory, and the problem can be solved by prioritizing the requirements. Over the past years, various requirement prioritization techniques have been presented by researchers in the software engineering domain. The proposed Adaptive Fuzzy Hierarchical Cumulative Voting (AFHCV) adds an adaptive mechanism to the existing Fuzzy Hierarchical Cumulative Voting (FHCV) technique in order to increase the coverage of events that can occur at runtime. The adaptive mechanism includes addition of new requirement sets, analysis and reallocation of requirements, assignment and alteration of priorities, and re-prioritization, the last of which is used to improve the results of AFHCV. A comparison of the proposed AFHCV technique with the existing FHCV technique shows that AFHCV yields better results.

  • A very huge quantity of data, called "Big Data", is continuously generated from a variety of sources such as IT industries, internet applications, hospital history records and social media feeds. Most data mining algorithms find interesting patterns in precise, value-based databases; it is not so easy to discover interesting patterns in big data. To avoid wasting a great deal of space and time in searching for frequent items in uncertain big data, the proposed approach permits users to express their interest in terms of a succinct anti-monotone constraint. The MapReduce technique is used to mine frequent patterns: two sets of map and reduce functions mine valid singleton and non-singleton patterns. In the proposed work, the UF-tree algorithm generates a tree structure of the dataset and UF-growth mines frequent itemsets recursively. To further reduce the search space and execution time on uncertain big data, the proposed work weights the frequency of items using weighting factors and calculates the expected support of an item on the basis of its weight (see the illustrative sketch below). This reduces the number of nodes in the first level of the tree, which leads to a reduction in the size of the tree and in execution time.
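
    The expected-support idea can be stated compactly: expSup(X) = sum over transactions t of the product over items i in X of P(i in t). A small Python sketch follows, with the weighting step modeled as scaling by the minimum item weight — an assumption for illustration; the abstract does not spell out the exact combination:

    ```python
    def expected_support(itemset, db):
        """Expected support of an itemset over an uncertain database:
        sum over transactions of the product of item existence probabilities
        (items are assumed independent)."""
        total = 0.0
        for tx in db:                    # tx maps item -> existence probability
            p = 1.0
            for item in itemset:
                p *= tx.get(item, 0.0)
            total += p
        return total

    def weighted_expected_support(itemset, db, weights):
        """Weighted variant in the spirit of the proposed work: scale the
        expected support by the minimum weight of the items in the set."""
        w = min(weights[i] for i in itemset)
        return w * expected_support(itemset, db)

    db = [{"a": 0.9, "b": 0.7}, {"a": 0.5, "c": 0.8}, {"b": 0.6, "c": 0.4}]
    weights = {"a": 1.0, "b": 0.8, "c": 0.5}
    print(expected_support({"a", "b"}, db))            # 0.9 * 0.7 = 0.63
    print(weighted_expected_support({"a", "b"}, db, weights))
    ```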

  • Neural networks are one of the most widely used soft computing techniques; they are adaptive, learn from past examples, and are used successfully in an extensive range of applications, particularly in the medical domain, because they mimic the human brain to solve problems involving non-linear and complex data such as clinical samples. Cervical cancer is a silent cancer that does not disclose any pain or symptoms, yet it becomes dangerous silently over a long period of 10-15 years; hence early diagnosis is essential to prevent it in its early stages. In this study the most commonly used neural networks, Multi Layer Perceptron (MLP), Probabilistic Neural Network (PNN), Radial Basis Function (RBF) and Learning Vector Quantization (LVQ) networks, are applied to a cervical Pap smear dataset dimensionally reduced using a fuzzy edge detection method. The four neural networks are compared and the most suitable network for classifying the dataset is determined.

  • Hadoop is one of the most popular technologies in the big data landscape, evaluating data through the Hadoop Distributed File System and MapReduce. Large problems are becoming tough for a single system to handle because their execution time on such a platform is very high; when processing is done in parallel through MapReduce instead of sequentially, better efficiency can be expected. In the present method, the Map task first decomposes the input into intermediate keys, which are then sent to the Reduce function for processing (see the illustrative sketch below). The algorithm used for matrix multiplication is cache-oblivious in nature, for better utilization of the memory hierarchy; processing with the cache-oblivious approach increases the reusability of elements and thus decreases overall execution time. The proposed matrix multiplication is fault-tolerant, as data is replicated in three places on three different data nodes.
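
    A minimal in-memory simulation (not Hadoop code) of the described round: map emits each partial product keyed by its output cell, and reduce sums the terms per key:

    ```python
    from collections import defaultdict

    def mr_matmul(A, B, n, k, m):
        """Simulation of one MapReduce round of matrix multiplication:
        map emits product terms keyed by output cell (i, j);
        reduce sums the terms for each key."""
        # Map phase: A[i][l] pairs with B[l][j] for every output column j.
        intermediate = defaultdict(list)
        for i in range(n):
            for l in range(k):
                for j in range(m):
                    intermediate[(i, j)].append(A[i][l] * B[l][j])
        # Reduce phase: sum the partial products per output cell.
        C = [[0] * m for _ in range(n)]
        for (i, j), terms in intermediate.items():
            C[i][j] = sum(terms)
        return C

    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(mr_matmul(A, B, 2, 2, 2))   # [[19, 22], [43, 50]]
    ```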

  • Cloud computing offers service delivery models that facilitate users during the development, execution and deployment of workflows. In this big data era, organizations require value out of big data; for this they need not deploy complex infrastructure, but can use services that provide value. As such, there is a need for a flexible and scalable service called Predictive Analytics as a Service (PAaaS). Predictive analytics can forecast trends, determine statistical probabilities and act upon fraud and security threats for big data applications such as business trading, fraud detection, crime investigation, banking, insurance, enterprise security, government, healthcare, e-commerce, and telecommunications. Prediction algorithms can be supervised or unsupervised with different configurations, and the optimal one may differ for each kind of data. This paper summarizes existing service frameworks for big data and proposes a PAaaS framework that businesses can use to deal with prediction in big data. The proposed framework is based upon an ensemble model that uses the best of prediction algorithms such as Artificial Neural Networks (ANN), the Auto-Regression algorithm (ARX) and Gaussian Processes (GP).

  • Given a spatial data set, we discover a set of co-location patterns using a GUI (Graphical User Interface) model in less time, as the application is implemented using a parallel MapReduce framework. The framework uses a grid-based approach to find neighboring points using Euclidean distance (see the illustrative sketch below), and a dynamic algorithm to find the spatial objects and discover co-location rules from them. Once co-location rules are identified, a threshold value is given as input and used to form clusters of similar behavior: if the threshold value is too low, more clusters are formed; if it is too high, fewer clusters are formed. Comparison of the results shows that the proposed system is computationally efficient and yields co-location patterns in less time.
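
    A small Python sketch of a grid-based neighbor search of the kind the framework describes: points are hashed into cells of side d so that Euclidean checks are limited to the same and adjacent cells; the points and distance below are toy values:

    ```python
    import math
    from collections import defaultdict

    def grid_neighbors(points, d):
        """Grid-based neighbor search: hash each point into a cell of side d,
        then compare only points in the same or adjacent cells, avoiding the
        full O(n^2) Euclidean check."""
        cells = defaultdict(list)
        for idx, (x, y) in enumerate(points):
            cells[(int(x // d), int(y // d))].append(idx)
        pairs = set()
        for (cx, cy), members in cells.items():
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for j in cells.get((cx + dx, cy + dy), []):
                        for i in members:
                            if i < j and math.dist(points[i], points[j]) <= d:
                                pairs.add((i, j))
        return pairs

    points = [(0.5, 0.5), (1.2, 0.8), (5.0, 5.0), (1.0, 1.4)]
    print(grid_neighbors(points, d=1.5))   # {(0, 1), (0, 3), (1, 3)}
    ```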

  • The safety of health care processes is inherently limited by human factors, which have contributed to increased medication errors. This has raised the necessity of developing computerized solutions that can play a supportive role in the medical decision making process by minimizing the occurrence of human error. In this study, we examine the potential of Artificial Neural Networks (ANNs) and Decision Tree (DT) algorithms for analyzing a time-series, high-dimensional clinical data set, and study the extent to which these techniques can capture the medical expert knowledge embedded within the data. Two major empirical studies were conducted on a clinical data set comprising the medical history of patients with chronic disorders receiving clinical care on a periodic basis. The first study examined the potential of supervised learning classifiers to correctly classify a patient instance into the corresponding disease category; the second examined their potential for detecting anomalous trends in a pattern of readings for a particular medical parameter. In each experiment, learning classifier systems were built using multilayer perceptron (MLP) neural networks and the C4.5 DT algorithm, and the performance of each classifier was validated using confusion matrix analysis and receiver operating characteristic (ROC) curve analysis. The results of the critical evaluation revealed that these computational approaches have substantial potential to play an assistive role in the medical decision making process.

  • Big data is a large amount of digital information. Nowadays, data security is a challenging issue that touches several areas, including computers and communication, and the security of data stored online has become a main concern, with attackers playing with the confidentiality of users. Cryptography is an approach that provides data security, yet despite huge efforts to protect sensitive data, hackers typically manage to steal it. Computing with encrypted data is a strategy for safeguarding confidential information. Partially homomorphic encryption is specialized for only one operation on encrypted data: for example, the Paillier encryption scheme performs only one mathematical operation on encrypted numerical data, successfully computing the sum of encrypted values but unable to perform multiple mathematical operations (see the illustrative sketch below). The proposed encryption algorithm computes more than one mathematical operation on encrypted numerical data, thereby further protecting the encrypted sensitive information.
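
    A toy, textbook Paillier sketch in Python illustrating the single supported operation, additive homomorphism (multiplying ciphertexts adds the plaintexts). The primes are deliberately tiny and insecure, purely for demonstration; this is not the paper's proposed scheme:

    ```python
    import random
    from math import gcd

    def lcm(a, b):
        return a * b // gcd(a, b)

    def paillier_keygen(p=1789, q=1867):
        """Tiny textbook Paillier keypair (toy primes, not secure sizes)."""
        n = p * q
        g = n + 1                         # standard simplification for g
        lam = lcm(p - 1, q - 1)
        mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
        return (n, g), (lam, mu)

    def encrypt(pk, m):
        n, g = pk
        r = random.randrange(1, n)
        while gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

    def decrypt(pk, sk, c):
        n, _ = pk
        lam, mu = sk
        return ((pow(c, lam, n * n) - 1) // n * mu) % n

    pk, sk = paillier_keygen()
    c1, c2 = encrypt(pk, 41), encrypt(pk, 17)
    # Additive homomorphism: multiplying ciphertexts adds the plaintexts.
    assert decrypt(pk, sk, (c1 * c2) % (pk[0] ** 2)) == 58
    ```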

  • To provide stable connections between vehicles and nearby roadside units, a reliable routing protocol is highly required in Vehicular Ad-hoc NETworks (VANETs). Though several routing protocols designed for MANETs could be useful for VANETs as well, the results may not always be satisfactory in the latter case due to its unique characteristics. This work proposes a multi-objective heuristic algorithm based on the Ant Colony Optimization (ACO) technique to identify optimal paths for Vehicular Ad-hoc Networks. To achieve this, it measures parameters such as Signal-to-Noise Ratio, throughput, end-to-end delay, hop count and packet loss, and from these constraints computes a route weight used to choose the best route among all possible paths. This route in turn serves the requirements of Intelligent Transportation Systems (ITS) to enhance road safety, efficiency and traveller comfort. The QualNet network simulator has been used extensively to evaluate the proposed protocol, which has been found to work satisfactorily in a number of test cases and to considerably outperform the existing technique.

  • Executing clustered tasks has proven to be an efficient method to improve the computation of Scientific Workflows (SWf) on clouds. However, clustered tasks have a higher probability of suffering from failures than single tasks, so fault tolerance in cloud computing is essential when running large-scale scientific applications. In this paper, a new heuristic called the Cluster-based Heterogeneous Earliest Finish Time (CHEFT) algorithm is proposed to enhance the scheduling and fault tolerance of SWf in highly distributed cloud environments. To mitigate the failure of clustered tasks, the algorithm uses the idle time of provisioned resources to resubmit failed clustered tasks for successful execution. Experimental results show that the proposed algorithm has a convincing impact on SWf executions and drastically reduces resource waste compared to existing task replication techniques. A trace-based simulation of five real SWf shows that the algorithm is able to sustain unexpected task failures with minimal cost and makespan.

  • The goal of Complex Event Processing (CEP) is to identify meaningful events (such as opportunities or threats) and respond to them as quickly as possible. Traditional CEP frameworks do not have built-in security features: any intruder may enter the system and send incorrect data to the CEP engine, causing patterns to be missed or wrongly predicted. Traditional CEP systems can thus be exploited in one way or another, and they malfunction when exploited. There is no built-in procedure to check the health status of traditional CEP systems, nor procedures to take appropriate action when the system is affected or about to be affected. When a CEP system identifies a pattern, it alerts the corresponding personnel about the occurrence of the event, but sometimes the alert message is undelivered or goes unnoticed, and traditional CEP systems provide no delivery assurance. These are a few of the security challenges in existing CEP systems. In this paper a Secure Complex Event Processing Framework is proposed to address the above security challenges. The proposed framework is generic and pluggable into any existing CEP system.

  • This study develops a wireless detection system for health and military applications. Among the elderly population, falling or collapsing is a common and very serious issue. It can cause severe injuries such as fractures, joint dislocations and head injuries, and when the person is immobilized or has lost consciousness it is not possible to seek immediate help or medical attention. In several cases, elderly people who live alone are not found for hours after a fall, which makes the situation more severe; the damage caused by failure to seek medical attention at the right time exceeds the damage and injuries caused by the fall itself. Similar cases occur with Border Security Force soldiers and other security personnel standing for long hours guarding the nation's borders, and VIP security staff standing in front of VIP homes are likewise vulnerable to attacks causing injuries. The objective of this paper is to develop a wireless fall-detection system that senses such situations and alerts others to seek help. The fall-detection sensor is worn by the potential victim/user; upon detection of a fall or collapse, the sensor system transmits the information wirelessly, to be received by the caretaker's mobile. The sensor is a belt-shaped wearable device consisting of a tri-axial accelerometer and a gyroscope, used to classify the posture and dynamics of the user. The main aim of the project is to develop efficient algorithms to detect falls and distinguish falls from non-falls using these sensors (see the illustrative sketch below). The sensor is part of a ZigBee node and communicates with the microcontroller over ZigBee, which provides efficient wireless communication with relatively small hardware. The microcontroller is responsible for sending information to the caretaker's mobile using a GSM module interfaced to it, and is interfaced to a PC in order to burn the program onto the microcontroller an..
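
    A deliberately simple threshold sketch (not the paper's algorithm) of accelerometer-based fall detection: a near-free-fall sample followed by an impact spike. The thresholds and window size are illustrative assumptions:

    ```python
    import math

    FREE_FALL_G = 0.4   # magnitude well below 1 g suggests free fall
    IMPACT_G = 2.5      # spike above this suggests impact

    def detect_fall(samples, window=10):
        """Flag a fall when a near-free-fall sample is followed by an impact
        spike within `window` samples (tri-axial accelerometer readings in g)."""
        mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
        for i, m in enumerate(mags):
            if m < FREE_FALL_G:
                if any(m2 > IMPACT_G for m2 in mags[i + 1:i + 1 + window]):
                    return True
        return False

    standing = [(0.0, 0.0, 1.0)] * 50
    fall = standing + [(0.0, 0.0, 0.2)] * 3 + [(0.5, 2.8, 1.2)] + standing
    print(detect_fall(standing), detect_fall(fall))   # False True
    ```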

  • Wearable antennas are extensively important given the ever-growing demand for WLAN applications. This work presents the deployment of a textile material as the substrate in the design of wearable antennas for WLAN applications. A crescent-shaped antenna is designed by combining two circular antennas of different radii in such a manner as to achieve the desired resonant frequency, compatible with WLAN. The proposed crescent-shaped antenna is designed and simulated on the Ansoft HFSS platform to evaluate return loss, VSWR, radiation pattern and current distribution.

  • The implementation of MIMO with OFDM is an effective and attractive technique for high-data-rate transmission and provides strong reliability in wireless communication. It has many advantages: it can decrease receiver complexity, provides robustness against narrowband interference, and can reduce multipath fading. The major problem of MIMO-OFDM is its high PAPR, which reduces the signal-to-quantization-noise ratio of the converters and degrades the efficiency of the power amplifier at the transmitter. In this paper we focus on one scrambling and one non-scrambling technique, iterative clipping and filtering and Partial Transmit Sequence (PTS), which together yield better performance. Once the two techniques are combined in the system, they show that, along with trimming down the PAPR value, the power spectral density also becomes smoother.

  • With the rapid increase in usage of and reliance on the vivid features of smart devices, the need for interconnecting them is genuine. Many existing systems have ventured into the sphere of home automation but have apparently failed to provide cost-effective solutions. This paper illustrates a methodology to provide a low-cost Home Automation System (HAS) using Wireless Fidelity (Wi-Fi), crystallizing the concept of internetworking of smart devices. A Wi-Fi based Wireless Sensor Network (WSN) is designed to monitor and control environmental, safety and electrical parameters of a smart interconnected home. The user can exercise seamless control over the devices in a smart home via an Android application based Graphical User Interface (GUI) on a smartphone. The overall cost of large-scale implementation of this system is about INR 6000 (USD 100).

  • The efficiency and reliability of a wireless network mainly depend upon the delivery power of its links, so it is important to measure which link is more reliable and can deliver data effectively over a long period of time. We study link quality based on the available estimators, mainly the Packet Reception Rate (PRR), the Received Signal Strength Indicator (RSSI), and the Signal to Noise Ratio (SNR) of the CC2420 radio, in a purely software-based environment. Each of these estimators has shortcomings, and the result is not obvious, as different estimators rank different links as better than others. This leads us to design a new link quality (LQ) metric that not only produces better results but also reflects sudden changes in link quality.

  • In this work the total polyphenol content in tea leaves is estimated by near-infrared reflectance (NIR) spectroscopy and the partial least squares (PLS) algorithm. During sample acquisition the number of variables per spectrum is quite high, and the whole spectral range may not play an important role in building the PLS calibration model, so selecting the proper region for a particular application is an important task. Here, the optimum wavelengths were determined by a genetic algorithm (GA) and particle swarm optimization (PSO), with the PLS algorithm producing the fitness values for both. Training and testing were done by leave-one-sample-out cross-validation during model calibration, using specific windows of wavelengths. The optimum range was determined to be from 1027.75 nm to 1104.75 nm, for which the RMSECV value was observed to be 1.05 (see the illustrative sketch below).
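
    A hedged sketch of how a wavelength window could be scored by leave-one-sample-out RMSECV with a PLS model, as a GA/PSO fitness function would do; the data, window indices and component count are synthetic assumptions, not the paper's values:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    def rmsecv(X, y, n_components=5):
        """Root-mean-square error of cross-validation (leave-one-sample-out)
        for a PLS calibration model, as used to score wavelength windows."""
        pls = PLSRegression(n_components=n_components)
        y_pred = cross_val_predict(pls, X, y, cv=LeaveOneOut())
        return float(np.sqrt(np.mean((y - y_pred.ravel()) ** 2)))

    # Hypothetical NIR data: 40 samples x 200 wavelength variables.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(40, 200))
    y = X[:, 50:60].sum(axis=1) + 0.1 * rng.normal(size=40)

    # Score a candidate wavelength window, as a GA/PSO fitness would.
    full = rmsecv(X, y)
    window = rmsecv(X[:, 45:65], y)
    print(f"RMSECV full range: {full:.3f}, window: {window:.3f}")
    ```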

  • Tropospheric scintillation is one category of transmission impairment. Although its effect is only a short-term signal fluctuation, it can cause severe degradation on satellite-earth links operating at 10 GHz and above. Most scintillation prediction models were developed for the climate of four-season countries, whereas the tropical climate, characterized by uniform temperature, high humidity and heavy rain, is quite different. Scintillation occurs in both clear-sky and rainy conditions. This work therefore aims to evaluate tropospheric amplitude scintillation in a tropical region under clear-sky conditions: to estimate the statistics of tropospheric scintillation in the Indian region based on the parameters of the ITU-R model, and to observe the relationship between scintillation intensity and local environmental parameters. Experimental satellite signal measurements need to be analysed, the statistical behaviour of the propagation effects compared with the theoretical background, and the link parameters compared with meteorological parameters. New prediction models for the propagation effects could then be developed, specifying improvements to existing models.

  • Optimizing a Wireless Sensor Network (WSN) is necessary to reduce redundancy and energy consumption. To optimize a WSN for secure data transmission, data aggregation is needed both at the cluster head and at the base station; aggregation is performed in every router while forwarding data. The lifetime of the sensor network is reduced when energy-inefficient nodes are employed for data aggregation, so the aggregation process in a WSN should be optimized in an energy-efficient manner. A trust-based protocol with weights is therefore introduced. This paper covers the relevant attacks and some methods for secure data transmission.

  • In this paper, we investigate the problem of protecting short-range wireless power from being stolen by rogue devices. We propose an architecture and a generic encryption model in the context of certificateless cryptography for secure wireless power transfer systems, and we present insightful research problems for future work in this paradigm. The model has two phases: first, an authentication phase that ensures certificateless authentication and key exchange between the power transmitter (the charging unit) and the power receiver (the device to be charged); second, an encryption phase that makes use of the session key obtained in the first phase. Simply put, the obtained key is used to generate a sequence of switching frequencies in both devices, "encrypting" the energy to allow secure energy transmission and reception ("decryption"). In this way, devices without knowledge of the session key are prevented from harvesting the energy transmitted from the transmitter to the receiver.

  • Among all the proteins of the Periplasmic C-type cytochrome A (PPCA) family, only the PPCA protein can interact with deoxycholate (DXCA), while its homologs cannot, as observed from the crystal structures. This article presents a unique encoding scheme for amino acids consisting of six-dimensional vectors, in which the first three dimensions use the chemical and physical properties of amino acids and the last three use a mathematical parameter, "Impression", which has previously been very effective in explaining the degeneracy of the codon table [14]. To obtain the "Impression", the amino acids are denoted by ternary numbers assigned in order of molecular weight. The use of chemical properties for uniquely encoding amino acids is our first agenda; second, using the embedded chemical properties together with a graph-theoretic model, we expose the reason why PPCA alone among its homologs is able to interact with DXCA.

  • In this correspondence, the authors propose a non-uniform quantized data fusion (N-QDF) rule that alleviates control channel overhead for energy-detection-based cooperative spectrum sensing in cognitive radio systems. Although softened-hard or quantized data fusion (QDF) carries a few bits of overhead from each user, it offers an improved trade-off between detection performance and complexity, and higher-bit QDF provides greater detection probability than lower-bit QDF, which loses more information. In this paper, we derive an N-QDF rule that enhances the detection probability for a given false alarm probability, approaching higher-bit QDF with minimum control channel overhead. We have conducted an extensive simulation study in which the performance of the variable-bit N-QDF technique is compared with uniform 3-, 4-, and 5-bit QDF techniques with respect to different parameters to validate the proposed scheme.

  • Satellite data concisely convey information about the positions, sizes and interrelationships of objects, but satellite images lose information due to limited sensor acquisition capability and atmospheric effects. It is very difficult to extract useful information at the intensity level with low SNR, and non-wavelet segmentation schemes lose high-frequency content, so that texture is blurred; several preprocessing steps are therefore applied to make textural images clear before segmentation. Because the DWT lacks directionality, we implement advanced image processing techniques to improve texture-based features of multispectral satellite images: discrepancies between the distributions of observed and normal regions are found using higher-order statistics (HOS) such as skewness and kurtosis, and the shape of the intensity distribution is examined using the Histogram of Oriented Gradients (HOG). To improve visualization quality, we examine features based on edges, lines and their gradients using the curvelet transform and HOG, and the intensity distribution using HOS.

  • This paper investigates the performance of hard-decision and soft-data fusion schemes for cooperative spectrum sensing (CSS) in a noisy Rayleigh-faded channel. Hard-decision fusion operates on the local binary decisions, and soft-data fusion on the energy values obtained from the different cognitive radio (CR) users; both are performed at the fusion center (FC), where a final decision on the status of the primary user (PU) is made. More precisely, the performance of CSS is analyzed with various hard-decision fusion schemes (OR-rule, AND-rule, and majority-rule) and soft-data fusion schemes (square law selection (SLS), maximal ratio combining (MRC), square law combining (SLC), and selection combining (SC)) (see the illustrative sketch below). Towards that end, novel closed-form analytic expressions are derived for the probability of detection under all soft schemes in the Rayleigh fading channel. The performance of hard-decision and soft-data fusion schemes is compared for different network parameters: time-bandwidth product, average sensing channel signal-to-noise ratio (SNR), and detection threshold. The optimal detection thresholds that minimize the total error rate for both soft and hard schemes are also indicated.
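
    The hard-decision rules have standard closed forms; a small Python sketch follows (the majority-rule version assumes identical local detection probabilities across CRs, an assumption for brevity):

    ```python
    from math import comb

    def fuse_or(pds):
        """OR-rule: FC decides PU present if any CR reports a detection."""
        q = 1.0
        for p in pds:
            q *= (1.0 - p)
        return 1.0 - q

    def fuse_and(pds):
        """AND-rule: FC decides PU present only if all CRs report detection."""
        q = 1.0
        for p in pds:
            q *= p
        return q

    def fuse_majority(p, n):
        """Majority-rule for n identical CRs with local detection prob. p."""
        k0 = n // 2 + 1
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k0, n + 1))

    pds = [0.6] * 5
    print(fuse_or(pds), fuse_and(pds), fuse_majority(0.6, 5))
    ```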

  • The last decade has seen an increasing demand for vehicle-aided data delivery, which has proven beneficial for vehicular communication. Vehicular networks provision safety, warning and infotainment applications. Infotainment applications attract drivers and passengers by providing location-based entertainment services, a value-add to the traveling experience. Infotainment messages are delivered to nearby vehicles in the form of advertisements, and for every advertisement disseminated to a neighboring vehicle, an incentive is awarded to the forwarder. This incentive-based earning foresees a security threat in the form of malicious nodes that hoard incentives, being greedy to earn them. Such insider misbehavior has an adverse effect on the incentive-based advertisement distribution approach. In this paper, we identify the malicious nodes and analyze their effect on incentive-based earning for drivers in vehicular networks.

  • A mobile ad hoc network is a dynamic wireless communication network whose reliability is vulnerable because of its changing topology, infrastructureless character, limited bandwidth and interference. The existence of interference in ad hoc networks has a significant impact on overall network performance: link existence is intensely related to interference, meaning that the creation and deletion of links are affected by it. Most researchers of the previous decade have failed to consider the effect of interference when evaluating the reliability of ad hoc networks. Hence, this work proposes a methodology to calculate the network reliability of a mobile ad hoc network with and without the effect of interference. The results clearly show that network performance degrades when interference is high. To improve ad hoc network performance, the design engineer should therefore take precautionary steps to deploy the network with an optimum network size under a considerable amount of interference without any loss of information.

  • Extracting large body-movement datasets from textual instructions could be useful for applications ranging from serious games for health to learning by demonstration in robotics. Interpreting instructions so as to automatically generate the corresponding motions (e.g., exercises) and validating these movements are difficult tasks. In this article we present a step towards automated extraction of motions from the textual instructions of physical exercises. To achieve this, we recorded 36 different exercises using an XSENS motion suit, and using the XSENS data we developed an automated physical exercise engine with three different modes, in all of which exercises are rendered in skeleton mode. The first mode is limited to the exercises built into the application; in the second mode the user can generate any combination (up to four) of those exercises; and in the last mode the user can generate any exercise via text input or by uploading an instruction sheet to the system. We are still working on the third mode, which is not yet fully efficient.

  • Verification and validation of a control system bring more confidence in the software functions under test. Testing the control functions of highly dynamic power electronics drives invariably requires a closed-loop system; the most energy-efficient and reliable methodology proven in the past decade is real-time testing using hardware-in-the-loop simulation, which is expected to provide accurate feedback signals for the control functions under test for a given set of inputs and plant model. This paper validates the control function of a single-inverter medium voltage drive against field data in a hardware-in-the-loop test system. A detailed plant model has been developed that captures the frequency response a control function requires during magnetizing, ramp-up and steady-state operation, and the model is benchmarked against the field, including the response under dynamic conditions. The results show the accuracy of the plant model compared with the actual physical system for critical control software functions throughout the operating range, thus proving the reliability of the medium voltage drive control software.

  • Rain is estimated from the formation of clouds consisting of water droplets; at higher frequencies, such as the millimetre band, signals experience degradation and reduction due to these droplets. Atmospheric gases, clouds, rain, snow, fog, cloud droplets, noise, water vapour and hydrometeors absorb electromagnetic energy, which results in signal degradation. Clouds generally consist of water droplets less than 0.10 mm in diameter, whereas raindrops generally range from 0.10 mm to 9.5 mm in diameter. All these effects lead to degradation in transmission quality and an increase in the error rate of digital transmissions; in general, the higher the frequency, the more susceptible a signal is to rain in the atmosphere. For the purpose of evaluating cloud impact, cloud cover statistics for low-level clouds were derived from earth-satellite link observations, and the extracted statistical data were used to characterize the drastic seasonal fluctuations.

  • In recent times, due to the use of avionics subsystems and the amount of data processed by them, there is increased demand for digital techniques in aircraft. MIL-STD-1553 has evolved as an international standard for military applications. To meet real-world specifications, it is required to optimize area and power and to improve the performance of the data bus; system speed is very important nowadays, but with the rapid improvements in modern avionics systems, the traditional MIL-STD-1553 bus cannot meet the needs of high-speed applications. This project aims at designing a high-performance MIL-STD-1553B bus controller compatible with Data Device Corporation (DDC) devices, which are used in most aircraft applications. The proposed system is designed using Verilog HDL and simulated in ModelSim; area and timing calculations are done in the Synopsys Design Compiler tool, and the functionality of the developed architecture is verified in Xilinx ISE 13.4.

  • In this digital era, efficient communication between different devices with minimal hardware is needed for low-power, high-speed and cost-effective design. The MIL-STD-1553B standard serves this need in aircraft equipment, whose complexity is increasing with the number of subsystems and the volume of data processed by them; the protocol is used in military and aerospace electronic systems. Even though 1553B is an old standard, it is an inevitable part of almost all aircraft, including Boeing, as it is known for its reliability and flexibility. This project aims at designing a Remote Terminal for MIL-STD-1553B, capable of receiving a valid command sent by the Bus Controller (BC), decoding it, and acknowledging accordingly. The design is developed in Verilog HDL and simulated in QuestaSim; the synthesis report is created using Xilinx ISE 13.4 and the coverage report is generated using QuestaSim.

  • It is well known that besides QoS, energy efficiency is a new key consideration in designing broadband wireless mobile communications. The energy efficiency of MIMO-OFDM mobile transmission systems is first analyzed statistically. Using the SVD of the channel matrix, subchannels are ordered by their channel characteristics. Furthermore, the multi-channel joint optimization problem in conventional MIMO-OFDM communication systems is transformed into a multi-objective single-channel optimization problem by grouping all subchannels, and a closed-form solution of the energy-efficiency optimization is derived for MIMO-OFDM mobile multimedia communication systems. As an outcome, an energy-efficiency optimized power allocation (EEOPA) algorithm is proposed to improve the energy efficiency of MIMO-OFDM mobile transmission systems. Simulation comparisons validate that the proposed EEOPA algorithm can guarantee the required QoS with high energy efficiency in MIMO-OFDM mobile transmission systems.

  • To perform emergency response activities, complex networks of emergency responders from different organizations work together to rescue affected people and mitigate property losses. To work efficiently, the responders have to rely on the data generated from heterogeneous sources during search and rescue (SAR) operations. From this abundant data, rescue teams share the needed information hidden within it with one another to make decisions, obtain situational awareness and assign tasks. Understanding and analyzing the shared information is a complex and very challenging task. Therefore, in this paper, we provide a concept for time-varying data analysis and present findings from a fire-emergency serious game.

  • Wireless Sensor Networks, a newly emerging technology, have spread rapidly into many fields such as medicine, habitat monitoring and biotechnology; their relevance is tremendous. A WSN is used to collect sensed data, store or process it, and transmit it to the appropriate central station. Agriculture is one field that has recently turned its attention to WSNs, with whose help real-time data can be transmitted quickly. The WSN system developed in this paper is used for precision agriculture, which means applying the right inputs at the right time to obtain more cultivation with less power and work. The real-time data is based on several weather characteristics such as temperature and humidity. The architecture of the developed WSN system comprises a set of sensors called a sensor node, a base station and a central station; the base station sends the sensed data to the central station.

  • A cognitive radio can access licensed as well as unlicensed spectrum, but while accessing licensed spectrum, the signals of cognitive users interfere with and disturb licensed-user signals in adjacent bands. Orthogonal Frequency Division Multiplexing (OFDM) is the enabling modulation technique for cognitive radios because of its flexible nature. The use of OFDM in cognitive radio reduces intersymbol interference, but Out-Of-Band (OOB) leakage still exists, affecting the performance of licensed users operating in adjacent bands. A spectral mask based on a precoding matrix is an efficient technique for reducing OOB radiation from the cognitive radio user. In this paper, a new method is proposed for efficient utilization of the spectral mask with a simple equalization technique, considering the presence and position of licensed users and controlling OOB radiation without using notch frequencies. The simulation results show that the proposed technique effectively reduces OOB radiation without degrading BER, offers minimal interference to adjacent licensed users, and reduces computational complexity.

  • The wireless, freely moving nodes that comprise a MANET can talk with each other without centralized administration or network infrastructure, giving the nodes unrestricted connectivity with each other and unrestricted mobility throughout the network. In a MANET there often exist multiple paths from the sender to the destination node under constrained network overhead, making the network less immune to malicious nodes and more prone to selfish behaviour and compromised nodes, which are most often undetectable. Our algorithm succeeds in achieving high immunity towards network attacks. In a clustered network one of the cluster members is chosen as cluster head [12], but if the selected node becomes selfish or malicious, it affects the performance of the entire group communication. In this paper we propose an algorithm to identify the nodes with the highest level of trust within the cluster (see the illustrative sketch below). Nodes are added to a friend list based on their calculated trust level; the friends with the highest trust level are eligible to become cluster heads, one of which is selected as the cluster head [12], while the other nodes are considered cluster members. The nodes with the least trust values are considered malicious, removed from the list, and stored in a marked unwanted list.
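
    A schematic Python sketch of the trust-based election described; the thresholds and trust values are illustrative assumptions, not values from the paper:

    ```python
    def elect_cluster_head(trust, friend_threshold=0.6, malicious_threshold=0.3):
        """Rank cluster members by trust, keep high-trust nodes as the friend
        list, elect the most trusted node as cluster head, and move low-trust
        nodes to the marked unwanted list."""
        friends = {n: t for n, t in trust.items() if t >= friend_threshold}
        unwanted = [n for n, t in trust.items() if t <= malicious_threshold]
        head = max(friends, key=friends.get) if friends else None
        members = [n for n in trust if n != head and n not in unwanted]
        return head, friends, members, unwanted

    trust = {"A": 0.92, "B": 0.75, "C": 0.40, "D": 0.15}
    head, friends, members, unwanted = elect_cluster_head(trust)
    print(head, sorted(friends), members, unwanted)   # A ['A', 'B'] ['B', 'C'] ['D']
    ```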

  • In many developing nations such as India, the majority of the population lives in rural areas, and enabling connectivity there is a main challenge of the twenty-first century. Rural connectivity enables residents to access health care services, medical facilities, education services, and more through the internet. Wireless backhaul technologies have the potential to speed up the process of providing connectivity with reasonable bandwidth and optimum coverage. This paper studies the existing wireless technologies, such as WiMAX, long-range Wi-Fi, Cognitive Radio, LTE and 2G/3G, with respect to cost, transmission range, vendor support, data rate, bandwidth requirement, latency and spectrum licensing expense. A utility function is evaluated based on these parameters, and a low-cost technology for internet access is thereby proposed for affordable rural connectivity.

  • Indian government statistics suggest that road accidents occur at regular intervals and that human error accounts for about 93% of all accidents; the total annual loss of life due to road accidents has crossed 1.18 lakh. A dynamic, intelligent on-board diagnostic system that alerts and assists the driver in efficient driving, and provides the technical expert with findings among other accessible data, is the need of the hour. This demands an intelligent safe-driving system that protects the driver as well as the passengers and assists the driver in handling situations with a sudden probability of collision. This work proposes an ARM7 controller based implementation to warn about a collision and avoid it. The controller has a built-in CAN protocol, which plays the major role in communicating with all the devices and sensors. An ultrasonic sensor acts as an obstacle detector for the front end of the vehicle, and IR sensors are used for lane changes. A motor driver acts as an interface between the controller and the motor. A GSM module is also employed to deliver information to a phone via short message service, and warnings are given through a buzzer and an LCD display.

  • In this paper, 32-bit adders are designed using existing full adders and the proposed adders. The average leakage and average power consumed across all 32 full-adder stages are smaller for the proposed 15-transistor and 13-transistor based 32-bit adders than for all existing designs.

  • The ISRO Satellite Centre of the Indian Space Research Organisation develops satellites for a variety of applications such as communication, navigation, earth observation and many more. These satellites consist of very complex, intensive systems which carry out advanced mission functions, so software is a critical constituent of mission success. For some geostationary missions the onboard software is finalized and changes for a new spacecraft are minimal. The model-based system for software change analysis for embedded systems presented here deals with managing changes to existing software items and reconfiguring them in any part of the development life cycle. The model helps in handling change management, such as maintaining a component library, predicting the impact of changes in reused modules, and analyzing the behaviour of combinations of reused modules. The use of reusable software modules has accelerated the development of embedded software for the GSAT series of satellites. The model mainly reduces the time spent in each phase of the software life cycle when changes must be implemented within a short span of time. This paper describes how complete traceability can be established between specifications and design requirements, model elements and their realization.

  • Nowadays, computer vision techniques are used for the analysis of traffic surveillance videos, which is gaining importance. This analysis can be useful for public safety and traffic management, and in recent times there has been increased scope for analyzing traffic activity automatically. Computer-based surveillance algorithms and systems extract information from the videos, which is also called video analytics. Detection of traffic violations such as illegal turns, and identification of pedestrians and vehicles in traffic videos, can be done using computer vision and pattern recognition techniques. Object detection is the process of identifying instances of real-world objects, such as persons, faces and vehicles, in images or videos; it is an increasingly important challenge because it has so many applications. Vehicle detection underpins functions such as adaptive cruise control and forward collision warning, and automatic generation of traffic signals based on traffic volume can be used for traffic control. Traffic surveillance videos of vehicles are taken as input from the MIT Traffic dataset. These videos are processed frame by frame, with background subtraction performed using a Gaussian Mixture Model (GMM). Noise in the background-subtracted result is removed with a morphological opening operation, and blob analysis is performed to detect the vehicles. The vehicles are then counted by incrementing a counter whenever a bounding box appears for a detected vehicle, and finally a signal is generated depending on the count in each frame.
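
    A minimal sketch of this pipeline using OpenCV is shown below. The MOG2 background subtractor is a standard GMM-based implementation; the video path and the blob-area threshold are placeholders, not values from the paper.

        import cv2

        # GMM-based background subtraction (MOG2), morphological opening,
        # and blob counting per frame; path and thresholds are placeholders.
        cap = cv2.VideoCapture("traffic.avi")
        subtractor = cv2.createBackgroundSubtractorMOG2()
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = subtractor.apply(frame)                         # foreground mask
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove noise
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            # Count blobs large enough to be vehicles.
            count = sum(1 for c in contours if cv2.contourArea(c) > 500)
            print("vehicles in frame:", count)
        cap.release()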

  • The objective of this paper is to design a power management unit that generates different voltage levels from a constant input voltage. The purpose of this unit is to drive peripherals that operate at different voltage levels and are embedded in a single application. The design supports a power-save mode of operation, i.e. the unit supplies voltage only when the control unit demands an operation or a peripheral requests one.

  • The growing demand for wireless connectivity has necessitated new communication techniques that exploit spectrum usage efficiently. Several techniques, including Cognitive Radio (CR) technology, provide the capability to share spectrum and deliver high bandwidth to users through dynamic access, allowing unlicensed secondary users to access licensed spectrum without causing disturbance. Cognitive Radio applications require a special antenna for spectrum sensing and communicating, and wideband antennas are considered the most suitable structures for spectrum sensing in these environments. In this paper, an attempt is made to design simple patch antenna structures using circular, semi-circular, semi-circular triangle shaped and semi-circular crown shaped antennas to achieve wideband characteristics. These structures were analyzed theoretically and simulated using CST Microwave Studio Suite. All the structures were fabricated on an RT-Duroid substrate and analyzed over the 1-18 GHz frequency range. It is found that these structures achieve an enhanced impedance bandwidth over frequency ranges beyond the Ultra Wide Band (UWB) range and up to the Ku band.

  • Memory design follows two different approaches, namely static random access memory (SRAM) and dynamic random access memory (DRAM). Traditionally SRAM has been used, but the major problems in designing a memory with SRAM are area, power and delay: memories designed with SRAM exhibit high power, high delay and large area. To overcome this, a DRAM cell can be designed which results in low power, low area and low delay. Since both designs have certain disadvantages, a new gain cell is designed in this paper to overcome their limitations. A 2Kb dynamic memory architecture has been designed using the proposed modified 3T gain cell, featuring high speed, low power and low delay. The dynamic memory architecture is implemented using the Cadence Analog Design Environment, and the proposed and conventional dynamic memory architectures are compared in terms of power and delay over a range of supply voltages and temperatures.

  • Industrial growth and increasing demand on the consumer load/distribution side place new demands on mechanisms connected with electrical motors, leading to problems in working operations due to fast dynamics and instability. System stability is essential to operate at a desired set target, but the non-linearity of a motor frequently reduces stability, which in turn reduces the controller's ability to maintain speed/position at the set points. BLDC motors are widely used in industry because of their high efficiency, low cost, rugged construction, long operating life and noiseless operation; their difficulties, however, are speed control using sensored and sensorless controllers, and large torque ripples and torque oscillations. This paper presents an assessment and evaluation of the BLDC motor with a suitable voltage controller method (the back-EMF controller method); the analysis is carried out in MATLAB/SIMULINK. The parameters of the controlled BLDC motor are analyzed and compared with a BLDC motor drive without any controller.

  • The IP-BTS (Internet Protocol Based Base Transceiver Station) is a small computing hardware module designed for integration into an industry-standard tower PC housing, providing a complete GSM access point. It is used to optimize the transmission-line cost of high-density multi-band base stations. This paper describes the detailed procedure for developing system-level boot loader software (the universal boot loader, u-boot) for the IP-BTS, a customized hardware platform with a PowerPC-based processor board running the RTLinux operating system. The paper also briefly presents the customized hardware platform at system level and the boot loader development procedure for this embedded telecom application. Designing suitable boot software at the system level is a complex and challenging task in the development of an embedded telecom system. The boot loader connects the application software to the customized hardware and the Real-Time Operating System (RTOS); it is responsible not only for initializing the processor board and the other modules of the IP-BTS software, but also for upgrading operating system images and IP-BTS software versions. The early initialization code is always developed in the processor's native assembly language. In this paper we present the detailed procedure for porting the boot loader (u-boot) to the PowerPC-based customized board, which was tested successfully, and report the results.

  • Securing aggregated data in a wireless sensor network is a challenging task because of the network's limitations in computation, resources, battery power, transmission capacity and so forth. Such networks are liable to be made of many tiny sensor nodes working autonomously without access to a renewable energy source. As wireless sensor networks edge closer to broad deployment, security issues become a focal concern. Aggregation is applied to preserve energy, and to mitigate Denial of Service attacks and secure the aggregated data, a client-puzzle approach with RSA authentication is used.

  • A Wireless Sensor Network (WSN) is mainly composed of a number of sensor nodes whose prime responsibility is to sense various events in the surroundings, process the readings, and propagate the meaningful information to the observer through multiple intermediate nodes. Area coverage is one of the issues in WSN: it requires that the active nodes selected from all the deployed nodes cover every point of the deployed area. The objective of complete area coverage is to find redundant nodes and deactivate them, so that the remaining active nodes still cover the deployed area. Among the various existing approaches, grid-based approaches provide a better way to select active nodes or deactivate redundant ones from a grid rather than choosing nodes from the whole deployed area. The existing approach has certain limitations, such as how many grids should be made in a given deployed area with a given number of nodes, and what percentage of nodes should be selected from each grid. To avoid these limitations, in this paper we propose a randomized grid-based approach that splits the intended area into small grids depending on the density of nodes in different locations of the deployed area. At the same time, rather than selecting a certain percentage of nodes from each grid, we select only one node per grid and repeat the process until the selected nodes satisfy whole-area coverage. The Matlab simulator is used to study the proposed work, and it is found that the proposed randomized grid-based approach outperforms the existing grid-based approach with respect to both energy and throughput.

  • Sensor nodes deployed in remote wireless sensor networks (WSNs) are mainly constrained by battery life and communication range. Single-path routing mechanisms in WSNs lead to drastic power utilization and path failure. A multipath routing scheme using a cooperative neighboring-node concept is presented in this paper: multipath routes are identified based on the energy available at the neighboring nodes. Our simulation results demonstrate that the proposed scheme enhances performance in terms of multipath identification delay, end-to-end delay, packet delivery ratio and network lifetime.

  • The Internet of Things is one of the most rapidly growing fields for delivering social and economic benefits to emerging and developing economies. IoT is expanding its wings in all domains, including medicine, industry, transportation, education and mining. Nowadays, with the advancement of integrated on-chip computers like the Arduino and Raspberry Pi, the technology is reaching the ground level through applications in agriculture and aquaculture. Water quality is a critical factor when culturing aquatic organisms; it depends on several parameters like dissolved oxygen, ammonia, pH, temperature, salt, nitrates and carbonates. The quality of the water is monitored continuously with sensors to ensure the growth and survival of aquatic life, and the sensed data is transferred to the aqua farmer's mobile phone through the cloud. As a result, preventive measures can be taken in time to minimize losses and increase productivity.

  • The deployment of biosensors is increasing with advances in bio-electronics. Owing to the challenging conditions in which biosensors operate, present applications are capable of capturing only certain types of signals. Detection also depends on the type of biotransducer used for signal generation, so the signals generated by biosensors cannot be considered error-free. This paper reviews existing research contributions towards biosensor validation and finds that there is no computational framework that performs validation efficiently, as the majority of techniques use either a clinical or an experimental approach, which limits the validation of biosignal performance. Therefore, this paper presents a novel computational framework that uses an enhanced version of the auto-associative neural network and significantly improves the validation performance of biosensors compared with other conventional optimization techniques.

  • Code Division Multiple Access (CDMA) is a channelization protocol that provides a degree of privacy: only the intended receiver is able to recover the data. Multi-Carrier CDMA (MC-CDMA) is one of the promising technologies for present and future wireless communications and a 4G component. The performance of a CDMA system depends on the characteristics of the spreading sequence: the sequences must be unique and have good autocorrelation and cross-correlation values, otherwise Multiple Access Interference (MAI) arises and limits the system capacity. In this paper an MC-CDMA system is implemented with maximal-length, Gold and ZCZ (Zero Correlation Zone) sequences [4]. Multiple-input multiple-output (MIMO) technology provides high transmission rate, reliability and diversity for communication systems, so the MC-CDMA system is implemented with 4 transmitting and 4 receiving antennas (4x4). BER (Bit Error Rate) performance is measured with the three sequences in AWGN and Rayleigh fading channels, and the simulated results are shown.
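
    As a small illustration of why spreading-sequence correlation matters, the sketch below generates a maximal-length sequence with a linear feedback shift register and checks its periodic autocorrelation. The feedback taps correspond to a known primitive polynomial for a 5-stage register; everything else is illustrative.

        import numpy as np

        def m_sequence(taps, n_stages):
            """One period of a maximal-length (m-) sequence from an LFSR.

            taps: feedback tap positions (1-indexed) of a primitive polynomial.
            """
            state = [1] * n_stages
            seq = []
            for _ in range(2 ** n_stages - 1):
                seq.append(state[-1])
                fb = 0
                for t in taps:
                    fb ^= state[t - 1]
                state = [fb] + state[:-1]
            return np.array(seq)

        # x^5 + x^3 + 1 is primitive, so taps (5, 3) give a length-31 m-sequence.
        seq = 1 - 2 * m_sequence((5, 3), 5)   # map {0,1} -> {+1,-1}
        # Periodic autocorrelation: 31 at zero lag, -1 at every other lag.
        for lag in range(3):
            print(f"lag {lag}:", int(np.dot(seq, np.roll(seq, lag))))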

  • This paper addresses a common problem experienced in day-to-day life: the cooking gas container running empty. We aim to create awareness of the reducing weight of gas in the container and to place a gas order using IoT. Gas booking is done through IoT, while continuous weight measurement is performed using a load cell interfaced with a microcontroller (which compares the reading against an ideal value). For convenience, RF TX and RX modules are added to convey the same information. For the security of the kit as well as the gas container, an MQ-2 gas sensor and an LM35 temperature sensor monitor the surrounding environment for any anomaly. Whenever a change is registered by any of the sensors (load cell, LM35, MQ-2), a 60 dB siren is triggered.

  • Cost-effective routing is one of the major operations in WSNs. A Mamdani fuzzy inference system is used to estimate the chance of a sensor node becoming a cluster head (CH) on the basis of the input parameters distance and energy of the sensor nodes. The heuristic search algorithm A* is then used to find the minimum path length from the source to the sink node, and the aggregated data packets are routed from the source CH to the sink along the selected path.
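
    A compact sketch of the A* search step on a small node graph is given below; the graph, coordinates and cost values are illustrative stand-ins for the paper's CH topology, not data from it.

        import heapq
        import math

        def a_star(graph, pos, start, goal):
            """A* shortest path; graph maps node -> {neighbor: edge_cost},
            pos maps node -> (x, y) used by the Euclidean heuristic."""
            def h(n):
                return math.dist(pos[n], pos[goal])

            frontier = [(h(start), start, [start])]
            g = {start: 0.0}
            while frontier:
                _, node, path = heapq.heappop(frontier)
                if node == goal:
                    return path
                for nbr, cost in graph[node].items():
                    new_g = g[node] + cost
                    if new_g < g.get(nbr, float("inf")):
                        g[nbr] = new_g
                        heapq.heappush(frontier,
                                       (new_g + h(nbr), nbr, path + [nbr]))
            return None

        graph = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5},
                 "C": {"D": 1}, "D": {}}
        pos = {"A": (0, 0), "B": (1, 0), "C": (2, 0), "D": (3, 0)}
        print(a_star(graph, pos, "A", "D"))   # ['A', 'B', 'C', 'D']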

  • Text mining, also referred to as text data mining, is the process of extracting interesting and non-trivial patterns or knowledge from text documents. It uses algorithms to transform free-flowing (unstructured) text into data that can be analyzed (structured) by applying statistical, machine learning and Natural Language Processing (NLP) techniques. Text mining is an evolving technology that allows enterprises to understand their customers well and helps them redefine customer needs. As e-commerce becomes more established, the number of customer reviews and items of feedback a product receives has grown rapidly; for a popular asset, the number of review comments can run into the thousands or more. This makes it difficult for the manufacturer to read all of them in order to make informed decisions about improving product quality and support, and likewise difficult to track and manage all customer opinions. This article attempts to derive meaningful information from asset reviews, to be used in enhancing asset features from an engineering point of view and in improving support quality and customer experience.

  • In this research, linear regression models (ordinary least squares and principal component) and non-linear regression models (standard and least-squares support vector) are developed for predicting output quality from a sulphur recovery unit. The hyperparameters of the standard SVR and LS-SVR are determined analytically using guidelines proposed in the literature. The relevant input-output data for the process variables are taken from open-source literature, and the training and validation sets are statistically designed from the total data: the training data are used to build the process model and the remaining validation data are used to evaluate model performance. Simulation results show superior performance of the standard SVR model over the other models.
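
    A minimal sketch of fitting and validating a standard SVR model with scikit-learn is shown below; the synthetic data, the random split and the hyperparameter values stand in for the process data, the statistically designed split and the analytically chosen settings in the paper.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error
        from sklearn.svm import SVR

        # Synthetic stand-in for the process input-output data.
        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, size=(200, 4))
        y = X @ np.array([1.5, -2.0, 0.5, 1.0]) + 0.1 * rng.standard_normal(200)

        X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

        # C, epsilon and gamma are placeholders for the analytically
        # determined hyperparameters used in the paper.
        model = SVR(kernel="rbf", C=10.0, epsilon=0.01, gamma=0.5)
        model.fit(X_tr, y_tr)
        print("validation MSE:",
              mean_squared_error(y_val, model.predict(X_val)))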

  • The demand for electricity has been gradually increasing over the past years, causing imbalances in the existing utility power grid, which lead to problems like load shedding and unbalanced voltage that ultimately affect end consumers. To overcome this situation, non-conventional energy sources have become a research hotspot over the past decade. In this context, autonomous hybrid systems integrating PV and wind have been found to be an economically viable alternative for supplying energy continuously to numerous deficit customers. Individual solar or wind configurations lead to fluctuations in the system; combined, they provide a reliable and sustainable source with a constant flow of energy. The PV and wind energy systems are combined at the DC bus, whose voltage is stepped up to the required level using a DC-DC step-up converter. This paper deals with common grid perturbations such as balanced voltage dips, unbalanced voltages and harmonic distortions. The power generated by the PV/wind system is fed to all connected loads, and under surplus conditions the excess power is fed to the utility grid. The integrated hybrid system is modelled in the MATLAB/SIMULINK environment.

  • This report compares the performance of a brushless DC (BLDC) motor fed by a multilevel inverter with that of a BLDC motor fed by a voltage source inverter (VSI). The total harmonic distortion (THD) of the stator currents and back EMFs of the BLDC motor is compared for the two inverter topologies. Over the past years, multilevel inverters have been employed for high-power and medium-voltage control applications, and different types are used to obtain higher voltage levels; among them, cascaded H-bridge multilevel inverters are most widely used because of their advantages over the others. Here, experimental results for a VSI-fed BLDC motor are compared with those for a cascaded H-bridge multilevel inverter driven BLDC motor.

  • This paper addresses major power quality issues, mainly voltage quality [2], in the power system, which massively affect sensitive loads [10]. These loads can be protected by introducing a custom power device [3] called the Dynamic Voltage Restorer (DVR), connected in series with the line, which can inject or absorb voltage [8][9] with the help of a self-supported capacitor [6]: it injects voltage in quadrature with the line current during a sag [2], and by adding a battery energy storage system (BESS) it injects voltage in phase with the supply voltage Vs, so that the voltage source converter rating is reduced. The gate pulses are generated with the help of synchronous reference frame (SRF) theory, and the BESS-supported network is connected to IEEE 11-, 33- and 69-bus distribution systems.

  • Ordinary feedback controllers may not perform well because of variations in the process or plant due to nonlinear actuators and changes in environmental conditions. This paper presents the design of a controller for speed control of a DC motor using a Model Reference Adaptive Control (MRAC) scheme with the MIT rule as the adaptation mechanism. The controller gives reasonable results, but it is very sensitive to changes in the amplitude of the reference signal: the simulation work in this paper shows that the adaptive system becomes oscillatory if the adaptation gain or the amplitude of the reference signal is sufficiently large. The paper therefore also employs the MIT rule together with a normalized algorithm to handle variations in the reference signal; this adaptation law is referred to as the modified MIT rule. The modelling of the MRAC scheme is demonstrated through simulation in MATLAB.
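
    For readers unfamiliar with the MIT rule, the sketch below simulates its textbook form, dtheta/dt = -gamma * e * ym, for adapting a feedforward gain on a static plant; the plant gain, model gain and adaptation gain are illustrative values, not the paper's.

        import numpy as np

        # MIT-rule adaptation of a feedforward gain theta so that the plant
        # output y = k * theta * uc tracks the reference model ym = k0 * uc.
        k, k0 = 2.0, 1.0          # unknown plant gain, reference-model gain
        gamma, dt = 0.5, 0.01     # adaptation gain, time step
        theta = 0.0

        t = np.arange(0.0, 20.0, dt)
        uc = np.sign(np.sin(0.5 * t))       # square-wave reference
        for u in uc:
            y = k * theta * u               # plant output
            ym = k0 * u                     # model output
            e = y - ym                      # tracking error
            theta += -gamma * e * ym * dt   # MIT rule: dtheta/dt = -gamma*e*ym
        print("converged theta:", round(theta, 3), "(ideal k0/k =", k0 / k, ")")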

  • Multipliers are among the most commonly used elements in today's digital devices, and hardware multiplication is an important factor in achieving high data throughput in digital signal processing systems. Various types of multipliers have emerged depending on the application; among them, the basic one is the array multiplier. This paper presents the design of an optimized, low-power, high-speed 4-bit array multiplier using a proposed Modified Gate Diffusion Input (MOD-GDI) technique. With this technique the total propagation delay, power dissipation and number of transistors required are much lower than with the Gate Diffusion Input (GDI) and CMOS techniques. Simulations are carried out on Mentor Graphics tools with 90nm process technology.

  • The simulation of blind adaptive beamforming using the Normalized Constant Modulus Algorithm (NCMA) for smart antenna systems is presented. The significance and fundamentals of smart antenna design are discussed in terms of a mathematical model. In this work, 16-point QAM (Quadrature Amplitude Modulation) data is considered for simulation, this being one of the preferred modulation formats in the design of modems and other fixed-location wireless applications in industry. The simulation results, in terms of array factor and antenna array response, show significant improvement compared with other related work.
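
    The core of the normalized CMA is a stochastic-gradient weight update whose step size is scaled by the instantaneous input power. A bare-bones sketch is below, with the array reduced to a toy single-weight equalizer, a constant-modulus (QPSK) source instead of 16-QAM, and all constants illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 5000
        # Constant-modulus source through an unknown complex gain plus noise.
        s = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, N)))
        x = 0.7 * np.exp(1j * 1.1) * s + 0.01 * (rng.standard_normal(N)
                                                 + 1j * rng.standard_normal(N))

        R2 = 1.0          # constant-modulus target |y|^2
        mu = 0.05         # base step size
        w = 1.0 + 0j      # single complex weight (stands in for a weight vector)

        for n in range(N):
            y = np.conj(w) * x[n]                  # beamformer output
            err = y * (np.abs(y) ** 2 - R2)        # CMA gradient term
            # Normalization: step scaled by instantaneous input power.
            w -= (mu / (np.abs(x[n]) ** 2 + 1e-6)) * x[n] * np.conj(err)
        print("|w* h| ->", abs(np.conj(w) * 0.7 * np.exp(1j * 1.1)))  # near 1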

  • Emerging sensor-based electronic gadgets seek high levels of energy conservation by adopting extreme low-power techniques in combination with traditional ones. In this study the authors examine memory units with data-retention capability in the energy-delay space for an emerging application, namely transiently powered systems, at three levels of power and performance optimization. The study presents a novel Dual Edge Triggered Retention Flip-Flop (DETRFF) with a retention latch that is suitable for ultra-low-power applications with dynamic voltage switching between super- and sub-threshold levels. The DETRFF designs are simulated in 45nm NCSU CMOS technology using Cadence, and the proposed design excels in the EDP and leakage-energy metrics compared with existing DETFF designs.

  • In modern IC chips, on-chip interconnects play a dominant role because of device parasitics. Several analytical and mathematical models are used in interconnect analysis for delay reduction, but as technology scales towards nanometres, interconnect length keeps increasing, so more accurate and efficient interconnect analysis techniques are essential to reduce propagation delays and improve overall circuit speed. Here a new interconnect scheme based on the transmission line model is proposed: it has a resistance and conductance at the input and output and a lossless T-type LC network between them. A closed-form delay model of the interconnect is developed as a third-order approximation to achieve better accuracy and lower delays. The lumped delay models RC, RLC and RLGC are compared with the proposed interconnect model in terms of propagation delay and average power; the comparison shows that the proposed model has fewer transients and lower average and propagation delays than the other delay models.

  • Transient stability analysis has recently become a major issue in power system operation because of the increasing stress on power systems; for a large power system network it is a very difficult and highly non-linear problem. In this paper, different faults (3-phase faults, LG faults and sudden removal of a generator) are considered at different bus levels in a combined-cycle power plant, and the fault clearing times for stable system operation (critical clearing times) are calculated using the ETAP (Electrical Transient Analyzer Program) software.

  • Diabetes is one of the life-threatening diseases in the world, and the number of diabetes patients is increasing due to improper monitoring of blood glucose levels. Diabetic patients conventionally check their glucose level using an invasive method: a drop of blood is drawn from the body and the glucose level measured, so that the required amount of insulin can be injected. To overcome the difficulties of the invasive method, this prototype uses a non-invasive methodology. The main objective of this work is to design a portable non-invasive blood glucose monitoring device using near-infrared sensing; the device includes an infrared LED, a photodiode and an ATMEGA328 microcontroller. Besides detecting the glucose concentration in blood, the device also displays the required insulin dose based on the glucose level and the Body Mass Index (BMI) of the user.

  • Over the past few decades, designers have concentrated on different techniques for designing low-power chips. Power consumption can be reduced by minimizing the leakage power and leakage current in a given design. Power consumption is a main criterion in digital memory circuits, and many techniques are available to reduce and recover power. Stacked Keeper Body Bias (SKBB) is one such technique, applied here to the conditional circuitry of the memory block, where the modifications and replacements were made. A bit line, a write-line decoder and a priority encoder were used to design an efficient 2-byte CAM. The results show that it dissipates 50% less power than the conventional CAM design.

  • A traffic light control unit can be designed as a synchronous sequential machine with a finite number of states. An explicit finite state model is used to write the necessary code for the control system in Verilog HDL. The machine is modelled with only six states, chosen according to the traffic control algorithm; in each state the necessary delay is provided, and for that delay the appropriate traffic lights are set ON and OFF. For illustration, only two roads are chosen, and the control algorithm controls the traffic lights of those roads. This paper proposes a flexible framework that provides the per-state delay using a clock divider, and discusses the issues of modelling the state machine in a synthesis-friendly manner and of choosing a state encoding scheme. The design targets a Xilinx Spartan-6 XC6SLX16-CS324 FPGA.

  • Networking and communication systems must perform data operations that preserve data integrity in secured states. For this reason, block ciphers and hash functions play an important role in providing data integrity and security. Authenticated Encryption (AE) is a technique that performs both encryption and authentication with a single algorithm and aids high-speed implementation in FPGAs. For security purposes, AES-GCM circuits are utilized in many applications. A key-synthesized technique is described for use with VPNs (Virtual Private Networks).
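
    As a reference point for what AES-GCM provides at the algorithm level (confidentiality plus an authentication tag in one pass), here is a minimal sketch using the Python cryptography package; it illustrates the mode itself, not the paper's FPGA circuit.

        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        key = AESGCM.generate_key(bit_length=128)
        aesgcm = AESGCM(key)
        nonce = os.urandom(12)                   # 96-bit nonce, never reused per key
        aad = b"header-bound-but-not-encrypted"  # additional authenticated data

        # Encrypt-and-authenticate in one call; ciphertext includes the GCM tag.
        ct = aesgcm.encrypt(nonce, b"secret payload", aad)
        # Decryption verifies the tag and raises InvalidTag on any tampering.
        pt = aesgcm.decrypt(nonce, ct, aad)
        assert pt == b"secret payload"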

  • Testing a system for the detection of faults is a big challenge once the circuit has been designed. The system may consist of sub-blocks and modules, and the performance of the entire system depends on the performance of the individual modules; hence testing the system requires testing each individual module with various test inputs. Design-for-testability techniques such as scan-based techniques and Built-In Self Test (BIST) have limitations, including the lack of at-speed testing and large area and delay overheads. VLSI circuit designers are therefore attempting to automate the synthesis of RTL circuits from a behavioural description. The test generation scheme presented here is independent of bit width, and as a result it can handle convoluted control/data path circuits. The design-for-testability hardware and test generation algorithms are self-contained, so they too are independent of the data path bit width. It is proposed to analyze the performance of analog signal circuits through behavioural synthesis, by which parameters like response and delay can be analyzed.

  • This study first proposes a new parallel adder circuit model in the Carry Value Transformation (CVT)-Exclusive OR (XOR) paradigm. Second, an efficient multiplication algorithm is discussed along with an analysis of its performance on various input selections. The proposed model for adding many integer pairs using parallel Cellular Automata Machines (CAMs) can perform the addition much more effectively by incorporating a preprocessing testing logic. CVT and XOR operations together can efficiently add two non-negative integers for any bulk of inputs using a CAM. Multiplication is a repetitive addition process, which can be designed through recursive use of the CAM. Our analysis over all integer pairs of up to 10 bits suggests that the recursive use of CAMs for multiplication is much faster in real-life scenarios for all types of inputs. Exponentiation, which is highly needed in various fields of computer science, is also described in this paradigm.
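
    The CVT-XOR identity behind such adders is that for non-negative integers a and b, a + b = CVT(a, b) + XOR(a, b), where CVT(a, b) is the bitwise AND shifted left by one; iterating until the carry part vanishes yields the sum. A small sketch of that iteration:

        def cvt_xor_add(a: int, b: int) -> int:
            """Add two non-negative integers using only CVT and XOR operations.

            CVT(a, b) = (a & b) << 1 captures the carries; a ^ b is the
            carry-free partial sum. Iterate until no carries remain.
            """
            while b:
                carry = (a & b) << 1   # CVT: carry value transformation
                a = a ^ b              # XOR: bitwise sum without carries
                b = carry
            return a

        assert cvt_xor_add(27, 38) == 65
        assert all(cvt_xor_add(i, j) == i + j
                   for i in range(64) for j in range(64))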

  • Water is a precious resource vital for healthy living; most infectious diseases are due to contaminated water, which leads to millions of deaths every year. There is a need for a water quality monitoring system that verifies whether the measured water quality is suitable for the intended use. This paper presents the application of Wireless Sensor Network (WSN) technology for real-time online water quality monitoring, including the details of the system design and implementation. The WSN for water quality monitoring is composed of a number of sensor nodes with networking capability, deployed at different overhead tanks and water bodies in an area. Each sensor node consists of an Arduino microcontroller, an Xbee module and water quality sensors; the sensor probes continuously measure parameters such as pH, temperature and conductivity. The parameters are measured in real time and the data sent to the data centre. A solar panel powers each node, and the data collected from the remote nodes is displayed on the user's PC. The developed system demonstrates online sensor data analysis and has the advantages of power optimization, portability and easy installation.

  • Memories are among the most universal components of day-to-day real-time applications: almost all application system chips contain some type of embedded memory, such as ROM, static RAM, dynamic RAM or flash. The goal of semiconductor technology is to continue scaling for overall performance, and memories have more complex design structures than any other core in an SoC. Due to higher levels of integration and huge memory sizes, the manufacturing cost per device is falling while the testing cost is rising. Testing is required to guarantee fault-free products, but exercising an exhaustive set of bit patterns takes too much time, so test algorithms are necessary to optimize the testing time. In this paper a memory is designed and tested for different fault models. We have simulated and analyzed the memory design using ChipScope Pro, with synthesis performed on a Spartan-3E FPGA.
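
    March algorithms are the usual way such fault models are covered in far fewer operations than exhaustive patterns. The sketch below runs a simplified March C- sequence over a small memory model with one injected stuck-at-0 cell, purely as an illustration of the idea; it is not the paper's test architecture.

        class FaultyMemory:
            """Tiny memory model with an optional stuck-at-0 fault at one address."""
            def __init__(self, size, stuck_at0=None):
                self.cells = [0] * size
                self.stuck = stuck_at0

            def write(self, addr, val):
                self.cells[addr] = 0 if addr == self.stuck else val

            def read(self, addr):
                return self.cells[addr]

        def march_c_minus(mem, n):
            """Simplified March C-:
            up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); up(r0)."""
            fails = set()
            for a in range(n):                       # up(w0)
                mem.write(a, 0)
            elements = [(range(n), 0, 1), (range(n), 1, 0),
                        (range(n - 1, -1, -1), 0, 1),
                        (range(n - 1, -1, -1), 1, 0)]
            for order, exp, wr in elements:          # read expected, write complement
                for a in order:
                    if mem.read(a) != exp:
                        fails.add(a)
                    mem.write(a, wr)
            for a in range(n):                       # up(r0)
                if mem.read(a) != 0:
                    fails.add(a)
            return sorted(fails)

        mem = FaultyMemory(16, stuck_at0=5)
        print("faulty addresses:", march_c_minus(mem, 16))   # [5]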

  • Multiplication plays an essential role in arithmetic operations; in particular, it is central to operations like Multiply and Accumulate (MAC). The Baugh-Wooley multiplier is an extended form of the Braun multiplier. This work proposes the design of a low-power Baugh-Wooley multiplier with CMOS full adders of different topologies (10T, 14T, 17T) as well as with CNTFET full adders of the same topologies. All circuits are designed and simulated using the HSPICE tool.

  • Independent mobility is an important aspect of self-esteem and plays a pivotal role in "aging in place". The smart wheelchair is an effort to provide independent mobility to persons who are unable to travel freely. A smart wheelchair is an amalgamation of standard power driving modules and a collection of sensors with an interfacing section. The input control signal may be given by a wireless joystick or a touchpad that can also be mounted on the armrest. This makes it useful for disabled persons who are able to control the joystick or touchpad, and also helpful for a caregiver such as a nurse. Ultrasonic and infrared sensor systems are integrated into the wheelchair to provide the best possible and safest movement.

  • Nowadays vehicle recognition has become a wide area of research, with applications such as automatic parking and toll gate management. Because of these numerous applications, vehicle recognition has become a computer vision research area within Intelligent Transport Systems (ITS). There are many different vehicles from different manufacturers, and their number is increasing day by day; analyzing the attributes of these vehicles and recognizing them is a complex task. The main objective of this project is vehicle identification. Vehicles can be identified by recognizing the iconic license plate, but license plate recognition fails when the plate is manipulated, missing or covered. Another important attribute of a vehicle is its logo, which carries important information about the car and cannot easily be changed, so the logo plays an important role in vehicle recognition. Here, vehicle recognition based on logos is used to determine the manufacturer's name or brand: the logo is extracted using ROI selection, features are extracted using the gray-level co-occurrence matrix method, and classification is performed with a probabilistic neural network.
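
    A small sketch of the gray-level co-occurrence matrix (GLCM) feature step using scikit-image is below (function names as in skimage 0.19 and later); the logo ROI is faked with a random patch, and the chosen distances, angles and properties are illustrative.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        # Stand-in for an extracted logo ROI (grayscale, 8-bit).
        rng = np.random.default_rng(0)
        roi = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

        # Co-occurrence matrices at distance 1 for four orientations.
        glcm = graycomatrix(roi, distances=[1],
                            angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                            levels=256, symmetric=True, normed=True)

        # Texture descriptors that would feed the classifier.
        features = np.hstack([graycoprops(glcm, prop).ravel()
                              for prop in ("contrast", "homogeneity",
                                           "energy", "correlation")])
        print("GLCM feature vector length:", features.size)   # 16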

  • The rapid growth of the World Wide Web has resulted in massive information from varied sources rising at an exponential rate. The high availability of such disparate information has precipitated the need for automatic text categorization for managing and organizing huge data and for knowledge discovery. The main challenges of text classification are the high dimensionality of the feature space and classification accuracy; to make classifiers more accurate and efficient, feature selection is needed. Genetic algorithms have gained much attention over traditional methods due to their simplicity, their robustness in solving optimization problems, and their strong search ability over exponential spaces. This paper therefore focuses on using a Genetic Algorithm (GA) for feature selection to obtain optimal features for classifying unstructured data. We build a fuzzy rule-based classifier that automatically generates fuzzy rules for classification. Experiments are conducted on two datasets, 20-Newsgroup and Reuters-21578, and the results indicate that the GA outperforms Principal Component Analysis (PCA).

  • This paper presents a new optimized DWT-SVD based watermarking technique using a Genetic Algorithm. The singular value component of the original image is modified by adding the singular component of the watermark image scaled by a suitable factor. This scaling factor is optimized by the GA using PSNR values as the fitness criterion, in order to achieve high robustness without compromising the transparency of the watermark. Further application-oriented analysis is performed using the noise correlation as the fitness function, to test for better robustness.
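
    The embedding rule itself is compact: if S and S_w are the singular values of the host and the watermark, the modified values are S' = S + alpha * S_w. A minimal NumPy sketch follows; alpha and the image sizes are placeholders, and the DWT stage is omitted for brevity.

        import numpy as np

        rng = np.random.default_rng(0)
        host = rng.random((64, 64))   # stand-in for a DWT subband of the host
        mark = rng.random((64, 64))   # stand-in for the watermark image
        alpha = 0.05                  # scaling factor the GA would optimize

        # SVD of host and watermark.
        U, S, Vt = np.linalg.svd(host)
        Sw = np.linalg.svd(mark, compute_uv=False)

        # Embed: modify host singular values, then reconstruct.
        S_mod = S + alpha * Sw
        watermarked = (U * S_mod) @ Vt

        # The singular values of the watermarked image match the embedded ones.
        S_rec = np.linalg.svd(watermarked, compute_uv=False)
        print("max |recovered - embedded|:", np.max(np.abs(S_rec - S_mod)))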

  • Worldwide statistics indicate that breast cancer is the second-leading cause of cancer mortality among women. Detecting the disease in its early, symptomatic stage is important for treatment and helps internists and radiologists in their diagnosis. In the proposed module, nuclei locations are obtained using the Hough Transform, and nuclei segmentation of the pre-processed Hematoxylin and Eosin stained breast cancer histopathological images is done using the proposed Modified Marker-Controlled Watershed Approach (MMCWA). A small fixed structuring element (SE) removes bright and dark details during morphological opening and closing, while a large SE removes large contour details of the input image; so in the proposed MMCWA an adaptive SE size map is obtained via a weighted variance method to preserve all details in the image. A total of 20 features, including 5 shape-based and 15 texture features, were extracted for classification using Decision Tree, SVM and KNN classifiers. Performance evaluation shows that the proposed integrated MMCWA provides better results than the traditional marker-controlled watershed. The module was trained on 96 images and tested on 24 images taken from the digital database.

  • The Steady State Visually Evoked Potential (SSVEP) is one of the most popular signals in Brain Computer Interface (BCI) applications. A new method is proposed to detect SSVEP signals at three different frequencies (6Hz, 8Hz and 15Hz), using the Fast Walsh Hadamard Transform (FWHT) for feature extraction and a Naive Bayes Classifier (NBC) for feature classification. The FWHT and NBC algorithms consume very little memory and keep the method computationally simple; the proposed method also has a short execution time, making it suitable for real-time BCI applications.
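
    The FWHT is attractive here because it needs only additions and subtractions, O(N log N) of them. A compact in-place radix-2 sketch is shown below; the input length must be a power of two, and the test signal is illustrative.

        import numpy as np

        def fwht(x):
            """In-place Fast Walsh-Hadamard Transform (Hadamard order).

            Only additions and subtractions are used: O(N log N) butterflies,
            with N a power of two.
            """
            x = np.asarray(x, dtype=float).copy()
            n = len(x)
            h = 1
            while h < n:
                for i in range(0, n, 2 * h):
                    for j in range(i, i + h):
                        a, b = x[j], x[j + h]
                        x[j], x[j + h] = a + b, a - b
                h *= 2
            return x

        sig = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
        coeffs = fwht(sig)
        # The transform is self-inverse up to a factor of N.
        assert np.allclose(fwht(coeffs) / len(sig), sig)
        print(coeffs)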

  • This paper presents an image histogram-based image manipulation detection method on Android. The method uses a mathematical-morphology-based algorithm to extract pictorial feature information from an original digital image. This is useful in situations where it is important to find evidence of specific events, such as crime investigation, or simply for image comparison. The morphological pattern spectrum was implemented on the Android platform using Java and OpenCV. Analysis of manipulated images indicated that the proposed detection method was able to clearly identify differences from the original images, and the results show that the technique can distinguish manipulations as slight as one pixel in size.

  • Wheelchair usage in India is increasing rapidly due to the ageing population, road accidents, and other factors. In nuclear families, elderly people who are unable to carry out their regular activities without assistance are often left alone at home. Manual propulsion is difficult for the aged, who therefore use motorized wheelchairs, but their cost is quite high, making them difficult for middle-income people to procure. To overcome this challenge, we propose a novel wheelchair design: a low-cost, local-map-navigation based intelligent wheelchair. In this paper we implement the design of a voice control module for a motorized wheelchair based on speech processing and a local-map navigation system. For speech processing we employ mel-frequency cepstral coefficients (MFCCs) and the mean squared error (MSE). The simulation results show the robustness of the voice recognition module. A microphone is provided with the wheelchair; if the user utters the destination, i.e. where he or she wants to go, the wheelchair automatically travels to the destination. Based on the results, we expect the proposed wheelchair to improve the quality of life of this population and help them accomplish their daily routine tasks in comfort.

  • In this paper, the weighted water-filling (WeWF) problem is solved with lower computational complexity. Unlike previous algorithms, the proposed algorithm first finds lower and upper bounds on the number of positive powers (L), and then computes L using these bounds. The proposed algorithm calculates the bounds from a small number of noise levels, whereas existing algorithms consider all noise levels when computing L; this reduces the number of floating-point operations. Moreover, the L powers are computed from one another without computing the water level itself, which again lowers the complexity.
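
    For context, classical water-filling allocates power p_i = max(0, mu - n_i) over channels with noise levels n_i, with the water level mu chosen so the powers meet the total budget. The sketch below solves that unweighted baseline by sorting, which is the kind of reference procedure the paper's algorithm improves upon; the weights are omitted for brevity.

        def water_fill(noise, budget):
            """Classical water-filling: p_i = max(0, mu - n_i), sum(p) = budget.

            Sort the noise levels, then find the largest L such that the water
            level computed over the L quietest channels stays above the L-th
            noise level (so all L powers are non-negative).
            """
            n = sorted(noise)
            for L in range(len(n), 0, -1):
                mu = (budget + sum(n[:L])) / L   # candidate water level
                if mu >= n[L - 1]:
                    break
            return [max(0.0, mu - ni) for ni in noise]

        powers = water_fill([0.1, 0.5, 1.2, 2.0], budget=1.0)
        print([round(p, 3) for p in powers])     # [0.7, 0.3, 0.0, 0.0]
        print("total:", round(sum(powers), 3))   # equals the budget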

  • This paper proposes an idea for authenticating actual ownership by embedding one share of a secret image invisibly into the host (original) image. Frequency-domain techniques, namely DWT and SVD, are used for embedding the share into the host image. In the frequency domain, the watermarked image is less distorted relative to the original, which in turn makes the embedded share of the secret image unavailable for any kind of misuse. The remaining share of the secret image is used to prove actual ownership. The technique is observed to be robust against various common image processing attacks.

  • Content Based Image Retrieval (CBIR) deals with the automatic extraction of images from a database based on a query. For efficient retrieval, CBIR requires the support of scene classification algorithms. Cognitive psychology suggests that basic-level classification works well with global features, whereas detailed classification requires a combination of global and local features. In this paper, we propose a decision fusion of the classification results based on local and global features. The proposed algorithm is a multi-stage approach: in stage 1 it separates the complete database into natural and artificial images using spectral features, and in stage 2 texture and colour features are used to further classify the image database into subcategories. The proposed decision fusion algorithm gives 5% better classification accuracy than the single best classifier.

  • Most diseases are triggered not by a single gene but by a combination of genes acting together. Sequences occurring more frequently in diseased samples than in healthy samples indicate the genetic factors of the disease. DNA has therefore become an extremely useful tool for predicting disease: by allowing medical professionals to identify genes that are markers for diseases, it enables a person to make appropriate lifestyle or similar modifications to help lower the risk of disease. We propose a system that provides this knowledge by determining the probability of a disease occurring if the causal gene or the associated genes are mutated.

  • A video classification algorithm based on the Hidden Markov Model (HMM) is designed with mobile applications as the target. The DM3730 processor on the BeagleBoard-xM is chosen for the implementation, since this type of processor suits the design requirements of many mobile devices. To address the dimensionality problem of video content representation, key frame extraction, wavelet transforms and clustering are applied. The feature set is derived from the colour and edge information in the video, and a symbol sequence is generated by clustering the feature vectors; this sequence is the input to the HMM. A separate HMM is used for each class, and the outputs of the HMM classifiers decide the class label assigned to an unknown video. The algorithm is tested on three genres of video (sports, news and cartoon) and achieves a classification accuracy of 92%.

  • Video quality assessment aims to compute a formal measure of perceived video degradation when video is passed through a video transmission/processing system. Most of the existing video quality measures extend image quality measures by applying them on each frame and later combining the per-frame quality values to get the quality of the entire video. When combining the quality values of frames, a simple average or, in very few metrics, a weighted average has traditionally been used. In this work, the saliency of a frame is used to compute the weight given to each frame when obtaining the quality value of the video. The goal of every objective quality metric is to correlate as closely as possible with the perceived quality, and the objective of saliency is parallel to this, since saliency values should match human perception; hence we experiment with using saliency to obtain the final video quality. The idea is demonstrated using a number of state-of-the-art quality metrics on several benchmark datasets.

  • A performance- and time-efficient 2.1D sketch extraction from a given monocular image is proposed in a global optimization framework that exploits the DIviding RECTangles (DIRECT) algorithm, where the extraction has otherwise been performed with heuristic global optimization methods such as genetic algorithms, particle swarm optimization and simulated annealing. The appeal of those algorithms is that they are guaranteed to reach the global minimum in a probability-one sense; their flip side is that they are usually time-consuming and may not reproduce results consistently. In contrast, we formulate 2.1D sketch extraction using the DIRECT algorithm, which not only aims at a better global minimum but is also deterministic. The extracted 2.1D sketches are found to be comparatively better than the results obtained by the hybrid differential evolution algorithm [1]. The proposed algorithm is also superior in that it takes far fewer computations to converge to the same or a better global minimum value for the 2.1D sketch extraction.

  • In every educational institution, attendance is compulsory for students to gain good knowledge of the lessons taught in the classroom, as well as wisdom and personality development. A biometric attendance system is designed and implemented for efficient real-time monitoring and transparency in managing students' genuine attendance, using GSM-based wireless fingerprint terminals (WFTs); the system is suitable for deployment at educational institutions. Each individual's unique fingerprint pattern motivates its use for biometric authentication, and fingerprints are verified to establish the presence of students in the institute. In this paper, two approaches to authenticating the captured fingerprint are discussed: the first uses a database created by the organization itself, while the second uses the Aadhaar Central Identification Repository (CIDR). The wireless fingerprint terminals capture and store the attendance records of students in the device database and update them to the server database. SMS alerts are sent to students and their parents in case of irregularity, absence or shortage of attendance. The same system can be extended to prevent malpractice in examinations as part of an examination monitoring system.

  • Preventing silver nanoparticles from aggregating is a very difficult task. Stabilization is one of the most important factors, because nanoparticles may easily interact with highly acidic or basic solutions. Zeta potential can reveal particle stability once the particles are kept in a gel-like capping solution. The main challenge is to stabilize an ion-bound particle, since even a small impurity destabilizes the nano form. A capping agent such as polyacrylamide gel provides encapsulation, with partial dual layers, to store nanoparticle clusters in nano form. The particle size can be studied through image processing techniques.

  • Many previous works have addressed the addition and subtraction of two integers in the CVT-XOR paradigm, and it has been seen that a Cellular Automata Machine (CAM) performs addition and subtraction in the CVT-XOR paradigm much faster. In the current study, we mainly focus on the division algorithm and its associated complexity in this paradigm. We also propose a block diagram model of the division algorithm, which could be very helpful for VLSI implementation using a recursive CAM.

  • With the advent of technology, there has been a rapid increase in data acquisition. Among the several types of data gathered, interpreting multimedia data by a machine without human intervention is a challenge, and extracting meaningful content from videos can provide better solutions in various domains. With video processing as our rudimentary concept, this paper aims to detect the expressiveness of an individual. Many traditional approaches exist for this task, but deep learning is used in this work; LSTM (Long Short-Term Memory) is the chosen implementation construct. The network is trained on the MIT Affectiva dataset, which comprises videos of individuals responding to Super Bowl commercials. Extending this network, two front ends are provided for testing a video sample: a web page for uploading videos and displaying the results, and an IoT device that records the video of an individual's response and sends it to the processing server. The responses of individuals to a particular commercial are recorded and the system is tested on them. The results are examined and the scope for various interpretations is shown. The detected expressiveness becomes pivotal feedback for the makers of commercials, paving the way for improvement.

  • The digitalization of music has grown deep into people's daily lives. Derived services of digital music, such as recommendation systems and similarity tests, have become essential for online services and marketing. As a building block of these systems, music genre classification is necessary to support all such services. Previously, researchers mostly focused on low-level features; few viewed the problem in a more interpretable way, i.e. through a musicological approach. As a result, the intermediate stages of the classification process are hardly interpretable, and little of music professionals' domain knowledge can be used in the process. This paper approaches genre classification in a musicological way: the proposed method considers high-level features that have clear musical meanings, so that music professionals will find the classification results interpretable. To examine musicological elements rather than additional statistical information, we use a dataset of symbolic piano works only, including more than 200 records of classical, jazz and ragtime music. Feature extraction and an n-gram text classification algorithm are performed. The proposed method proves its concept with experimental results achieving a prediction accuracy averaging above 90%, with a peak of 98%. We believe this novel method opens a door for music professionals to contribute their expert knowledge meaningfully to genre classification, and that the approach can contribute significantly to future music classification and recommendation systems.
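
    To make the n-gram text-classification step concrete, the sketch below treats each piece as a "sentence" of symbolic tokens (for example, chord symbols) and classifies it with word n-grams and naive Bayes via scikit-learn. The token strings and labels are invented placeholders, not the paper's features.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Each "document" is a sequence of symbolic music tokens (placeholders).
        train_docs = ["I IV V I I IV V I", "ii V I vi ii V I",
                      "I I7 IV iv I V7 I", "ii7 V13 I6 ii7 V13 I6"]
        train_labels = ["classical", "jazz", "ragtime", "jazz"]

        # Unigrams and bigrams over tokens, then multinomial naive Bayes.
        clf = make_pipeline(
            CountVectorizer(ngram_range=(1, 2), token_pattern=r"\S+"),
            MultinomialNB(),
        )
        clf.fit(train_docs, train_labels)
        print(clf.predict(["ii V I ii V I"]))   # likely 'jazz' on these toy data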

  • The fractal dimension was first introduced by Mandelbrot [1]. It describes the shape and appearance of objects that have the property of self-similarity, and the fractal dimension of such objects is calculated using this self-similarity: fractal objects are self-similar to the original object, and the dimension varies slightly with the scale length. Our main purpose is to measure the smoothness and roughness of images for image analysis. Various methods have been proposed to estimate the fractal dimension of grey-scale images, and several existing methods that use the fractal dimension for measuring image roughness and smoothness are described here. Many experiments have been performed with the existing fractal dimension methods, with varying results. In this report we describe a proposed approach for finding the fractal dimension of colour images, and we compute the fractal dimension of both grey-scale and colour images using the Differential Box Counting (DBC) method, the cell counting method, and our proposed approach.
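
    The box-counting family of estimators fits the slope of log N(s) versus log(1/s), where N(s) is the number of boxes of side s needed to cover the object. A plain binary box-counting sketch (a simplification of DBC, run on a synthetic image) is given below.

        import numpy as np

        def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16, 32)):
            """Estimate the fractal dimension of a binary image by box counting.

            For each box size s, count boxes containing at least one foreground
            pixel, then fit log N(s) against log(1/s).
            """
            counts = []
            for s in sizes:
                h, w = img.shape[0] // s, img.shape[1] // s
                blocks = img[:h * s, :w * s].reshape(h, s, w, s)
                counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
            slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                                  np.log(counts), 1)
            return slope

        # A filled square should have dimension close to 2.
        img = np.zeros((128, 128), dtype=bool)
        img[32:96, 32:96] = True
        print("estimated dimension:", round(box_counting_dimension(img), 2))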

  • In this article, we create a capsule of a lecture video, an abstract of the video built using visual quality techniques. We first perform temporal segmentation to divide the video into a number of shots. The activities in a lecture video can be divided into three categories: slide show, talking head and writing hand. We have developed a new and different approach for determining the quality of the non-content and content frames that are crucial in the lecture video: different features of the frame histogram are exploited to derive a quality factor for the detected frames. We are thus able to extract high-quality frames with high information content, as well as a few non-content frames (talking-head frames), for video clip selection. Finally, to recreate the media and produce the capsule of the video, we select video segments around these selected non-content and content frames. Using this technique we obtain a preview of the video that can be viewed in less time, which is beneficial to viewers.

  • Local Binary Pattern (LBP) is one of the most successful texture analysis methods; however, it suffers from poor noise robustness and limited rotation invariance. This paper proposes a novel noise-insensitive texture descriptor, the Adjacent Evaluation Local Ternary Count (AELTC), for rotation-invariant texture classification. Unlike LBP, AELTC uses an adjacent evaluation window to change the thresholding scheme. It is further enhanced into the Adjacent Evaluation Completed Local Ternary Count (AECLTC), with three operators, to improve texture classification performance. In the performance evaluation, experiments are conducted on the Outex and CUReT databases using seven existing LBP variants and the proposed AECLTC; the results demonstrate the superiority of AECLTC over the other LBP variants.
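
    For reference, the LBP baseline that such descriptors modify can be computed in a few lines with scikit-image. The sketch below builds a rotation-invariant uniform LBP histogram for a random stand-in texture patch; P and R are the usual neighbour count and radius.

        import numpy as np
        from skimage.feature import local_binary_pattern

        rng = np.random.default_rng(0)
        patch = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)

        P, R = 8, 1.0   # 8 neighbours on a circle of radius 1
        # 'uniform' gives the rotation-invariant uniform LBP (P + 2 codes).
        codes = local_binary_pattern(patch, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
        print("LBP histogram:", np.round(hist, 3))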

  • Chromosome images are captured by microscopic imaging, and the analysis of these chromosomes is therefore a very difficult task. To analyse chromosomes correctly, segmentation of the chromosome images is necessary, and such segmentation is needed for identifying genetic diseases. For accurate diagnosis, the M-FISH labelling technique is used; it helps in correctly identifying genetic diseases by using five colors to label each of the chromosomes along with a DNA stain. In this project, chromosome images are segmented using the fuzzy c-means algorithm and the watershed algorithm. The segmentation results of the two algorithms are compared by computing segmentation accuracy. The algorithms were tested on the M-FISH database, which is available online.
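A minimal sketch of fuzzy c-means on raw pixel intensities, one of the two segmentation algorithms compared in the abstract, is given below; the cluster count, fuzzifier m, and random test data are illustrative assumptions.

```python
# A minimal fuzzy c-means sketch: alternate between weighted center updates
# and membership updates until the memberships stabilize.
import numpy as np

def fuzzy_cmeans(x, c=3, m=2.0, iters=50):
    x = x.reshape(-1, 1).astype(float)                  # N pixel intensities
    u = np.random.dirichlet(np.ones(c), size=len(x))    # N x c memberships
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]  # c x 1 centers
        d = np.abs(x - centers.T) + 1e-9                # N x c distances
        u = d ** (-2 / (m - 1))                         # standard FCM update
        u /= u.sum(axis=1, keepdims=True)               # renormalize rows
    return u.argmax(axis=1), centers

pixels = np.random.randint(0, 256, 10_000)
labels, centers = fuzzy_cmeans(pixels)
print(centers.ravel())
```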

  • In medical diagnostic applications, early defect detection is a crucial task as it provides critical insight into diagnosis. Medical imaging is an actively developing field of engineering, and Magnetic Resonance Imaging (MRI) is one of the reliable imaging techniques on which medical diagnosis is based. Manual inspection of these images is a tedious job, as the amount of data and the minute details are hard for a human to recognize, so automating these techniques is crucial. In this paper, we propose a method that makes tumor detection easier. MRI analysis deals with the complicated problem of brain tumor detection, and due to its complexity and variance, achieving good accuracy is a challenge; using the AdaBoost machine learning algorithm we can improve on this accuracy issue. The proposed system consists of three parts: preprocessing, feature extraction, and classification. Preprocessing removes noise from the raw data, feature extraction uses the GLCM (Gray Level Co-occurrence Matrix), and classification uses a boosting technique (AdaBoost).
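The pipeline above can be sketched end to end as follows: a hand-rolled horizontal GLCM, two classic Haralick-style features (contrast and energy, an assumed choice), and scikit-learn's AdaBoost classifier on toy data.

```python
# A minimal GLCM + AdaBoost sketch; the random images and binary labels are
# placeholders for real MRI slices and ground truth.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def glcm_features(img, levels=8):
    q = (img.astype(float) / 256 * levels).astype(int)      # quantize gray levels
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):   # offset (0, 1)
        glcm[i, j] += 1
    p = glcm / glcm.sum()
    ii, jj = np.indices(p.shape)
    contrast = ((ii - jj) ** 2 * p).sum()
    energy = (p ** 2).sum()
    return [contrast, energy]

rng = np.random.default_rng(0)
X = np.array([glcm_features(rng.integers(0, 256, (32, 32))) for _ in range(40)])
y = rng.integers(0, 2, 40)                                  # tumor / no-tumor
clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
print(clf.predict(X[:5]))
```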

  • Scene text in images is finding many applications in real life. When text in images is extracted efficiently, it can be used for guiding tourists and visually impaired people, license plate detection, providing location information, and more. Extracting scene text faces many problems, as images are subject to degradations such as noise, uneven lighting, and blur. In our work, we propose an algorithm to extract and analyze text from complex scene images efficiently. In the first stage, edges are detected using the DWT. Then connected-component clustering and an AdaBoost classifier are used to localize the text regions. In the next stage, morphological operations and heuristic rules are used for character extraction. Finally, the extracted text is analyzed using OCR. The proposed algorithm, evaluated on different database images, has achieved good results in spite of variations in font type, style, orientation, and complex backgrounds.

  • This paper proposes a strategic approach for generating a number of variable-length keys from a selected image. In any symmetric key algorithm, the same key cannot be used for long, which forces the user to change the key frequently. Images have more features than text, such as color, edges, and ridges; these features allow us to generate many keys of variable length. To strengthen key security, we apply the Chinese Remainder Theorem (CRT) to values selected from the image. Our experimental results show that variable-length keys suitable for any algorithm can be generated securely from a feature-rich image using the Chinese Remainder Theorem.
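A minimal sketch of the CRT step is shown below; the residues stand in for the values selected from the image, which the abstract does not specify, and the moduli are small primes chosen for illustration.

```python
# A minimal CRT sketch: combine selected values into one key whose length
# depends on the moduli chosen (hence variable-length keys).
from math import prod

def crt(residues, moduli):
    """Return x with x = r_i (mod m_i) for pairwise coprime moduli."""
    N = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Ni = N // m
        x += r * Ni * pow(Ni, -1, m)     # modular inverse of Ni mod m
    return x % N

residues = [23, 44, 17]                  # e.g. selected image feature values
moduli = [101, 103, 107]                 # pairwise coprime moduli
key = crt(residues, moduli)
print(key, key.bit_length(), "bits")     # key length varies with the moduli
```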

  • The main aim of the project is to capture the natural movement of the eye and apply it as an aid to mute or paralyzed patients. To accomplish this, electrooculography (EOG) is applied: a method based on observing eye movements, also called ocular movements. Voltages are acquired as output based on changes in the orientation of the eye, and the differences in voltage levels obtained are divided into four different ocular movements: saccades, smooth-pursuit movements, vergence movements, and vestibulo-ocular movements, described in later sections. The EOG acquisition system for detecting eye movements, intended to restore communication abilities to patients with nonfunctional limbs and facial muscles, has been developed, and an accuracy of 78% has been achieved.

  • In this paper, we present a novel algorithm for multi-instance multi-label classification of attributes in user-submitted restaurant photos. Features are extracted from images using a seven-layer convolutional neural network and are then represented using the bag-of-words model, with the earth mover's distance as the metric for computing the distance between bags. For classification, we employ custom-kernel support vector machines, trained per attribute to obtain binary classifiers that are then used to predict the labels of an unknown business. Central to our approach, however, is mining association rules between different labels to further increase the accuracy of the proposed system. The method is evaluated on the Yelp Kaggle database, with performance measured by F1 score. The experimental results show that our algorithm is effective in classifying restaurant images.

  • Fourier transform infrared (FTIR) spectroscopy has been used widely for understanding the structure and organization of membranes. The data obtained from FTIR spectroscopy are large, and preprocessing is an essential step. Accomplishing this usually requires various non-open-source software packages, and the data processing is time consuming and rigorous. This paper discusses a novel approach for the analysis of multiple FTIR spectra and describes our tool, developed using MATLAB Image Processing Toolbox functions, for analyzing multiple FTIR spectra.

  • Spam is one of the major problems for the quality of Internet services, especially electronic mail. Classifying emails into spam and ham without misclassification is the area of study concerned here. The objective is to find the best feature set for spam email filtering. For this work, four categories of features are extracted: Bag-of-Words (BoW), bigram BoW, PoS tags, and bigram PoS tags. Rare features are eliminated based on a Naive Bayes score. We chose Information Gain as the feature selection technique and constructed a feature occurrence matrix weighted by Term Frequency-Inverse Document Frequency (TF-IDF) values, with Singular Value Decomposition used as the matrix factorization technique. AdaBoost-J48, Random Forest, and the popular linear Support Vector Machine (SVM) trained with Sequential Minimal Optimization (SMO) are used as classifiers for model generation. The experiments are carried out on individual feature models as well as ensemble models. A high ROC of 1 and a low FPR of 0 were obtained for both the individual feature models and the ensemble models.
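A minimal sketch of the matrix pipeline described above, using scikit-learn's TF-IDF vectorizer, truncated SVD, and a linear SVM (LinearSVC here, rather than Weka's SMO used in the paper); the toy mails and component sizes are placeholders.

```python
# A minimal TF-IDF -> SVD -> linear SVM sketch of the spam pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

mails = ["win a free prize now", "meeting agenda attached for review"] * 10
labels = ["spam", "ham"] * 10

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # BoW + bigram BoW features
    TruncatedSVD(n_components=5),          # low-rank matrix factorization
    LinearSVC(),                           # linear SVM classifier
)
model.fit(mails, labels)
print(model.predict(["free prize meeting"]))
```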

  • OCR is a technical approach to analyzing handwritten text and turning it into a structure that the system can process more effectively for searching, storing, retrieval, and indexing. Many innovations have appeared in the field of OCR, but many challenges still await solutions, such as recognizing characters on the back side of a document along with the front-side characters: recognizing handwritten and printed text layered on the back of a page is much more difficult than recognizing front-side characters. A fuzzy logic system processes the data with the help of a base of primary knowledge and delivers a high recognition rate by using fuzzy classification functions and inference rules with decision-making procedures. Neural networks are well suited to pattern and text recognition tasks; the learning process starts from imprecise data, and the algorithmic steps are formed through neuron-based learning and observation, but on its own this is not sufficient to meet users' decision-making requirements. The combination of the two into a neural-fuzzy closed-loop system offers effective solutions to significant problems in OCR. This research proposes a mirror-image reflection approach for recognizing document-backside-layer characters using a neural-fuzzy hybrid system: it divides a single page into two sub-layers, the first containing the front-side text and the second containing the back-side text with its pictures, characters, numerals, and figures. The front sub-layer text is completely bypassed in favour of the backside layer. The document-backside-layer input characters are converted into neural inputs, transformed into fuzzy sets, fuzzy rules are applied based on the knowledge in fuzzification, and the related output responses are g...

  • The literature reviewed suggests that Technology Enhanced Learning and the related domains of education, technology, and business interact in a complex and contextual manner as a newly emerging episteme. It faces numerous challenges in effectively managing the introduction of a change in pedagogical practice that involves the ubiquitous use of computing and ICT, across a global user base influenced by known and unknown barriers to technology adoption. Given that organizational change has historically been a predominantly strategic, business-related construct, while technology-related change has been a predominantly project-managed or service-model construct, a traditional change management or project approach may not adequately address many barriers to technology adoption. As has occurred in other business/IT interactions, such as the information systems strategy triangle, learning with technology requires a clear marriage of the technological, managerial, and pedagogical domains; organizational, business, IS, and education strategy all need to interact synergistically to move technology use for education forward. The purpose of this project was to conduct a study focused on the broad context of technology barriers faced by students using a Moodle-based online learning environment. The underlying objective was to examine the interaction between change management activity and student perceptions of both technology barriers and change, focusing on encouraging the adoption of technology for learning. The project created and presented a typical open-source Moodle-based Virtual Learning Environment (VLE) in a BYOD situation to a group of 35 second-year university students, in the form of activities and a quantitative and qualitative survey of the barriers and interactions.

  • The maritime industry is one of the oldest industries known to man. From time immemorial, human beings have navigated the waterways as a transport medium for trade, leisure, basic transport, and a myriad of other purposes, including war between nations over territorial rights. The waterways also shape international relations, with countries vying for control of water-locked mineral resources; nations' interest in who controls the waterways and the resources therein plays a critical role in conflict and high politics [1]. The impact of Information Technology on organisations that provide services to the maritime sector is reviewed with the aim of identifying the criticality of IT as a strategic business tool: for positioning the organisation for growth in its chosen marketplace, for reducing the cost of doing business, and for organisational survival. Literature review and information gathering constitute the bulk of the data for this research, and surveys were used to collate responses from industry practitioners; the responses were analysed and the relevant details ascertained. The results of the survey and the studies conducted clearly establish that Information Technology infrastructure is critical to sustaining the LNG shipping industry, and that retaining seafarers, the key human resources of a shipping organisation, depends heavily on the provision of reliable IT infrastructure.

  • The heterogeneity and complexity of distributed computing increase rapidly as high-speed processors become widely available. In a modern computing environment, resources are dynamic, heterogeneous, geographically spread over different computational domains, and connected through high-speed communication links of varying capacity. In a large distributed environment, a modular program can be considered a set of loosely coupled interacting modules/tasks (all of which are simultaneously and independently executable) and represented by the task interaction graph (TIG) model. Parallel execution of these interacting tasks is preferred to reduce the overall completion time of a program, but the communication overhead due to message passing may increase the cost of parallel execution. Parallel execution is therefore chosen if and only if the parallel execution cost together with the communication overhead is less than the serial execution cost (see the sketch below), and resources must be allocated so that the advantage of parallel execution is maintained. In this paper, for any task and resource graph, we propose a heuristic approach to find the optimal number of tasks that can be executed in parallel on a given set of resources.
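A minimal sketch of that cost test, with illustrative placeholder numbers: tasks are run in parallel only when the parallel cost plus communication overhead beats the serial cost.

```python
# A minimal parallel-vs-serial decision sketch; the cost figures are
# placeholders, not measurements from the paper.
def prefer_parallel(exec_costs, comm_overhead):
    serial = sum(exec_costs)                    # tasks run one after another
    parallel = max(exec_costs) + comm_overhead  # tasks overlap, messages add
    return parallel < serial

tasks = [40, 35, 30]       # per-task execution costs on their resources
print(prefer_parallel(tasks, comm_overhead=20))   # True: 60 < 105
print(prefer_parallel(tasks, comm_overhead=70))   # False: 110 >= 105
```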

  • In comparative genomics, the study of genome rearrangement evolution is an important effort, and genome transformation via different sorting operations is the major problem in the field. Transforming one sequence into another and finding an optimal solution is a useful tool for analyzing real evolutionary scenarios, but it is better still to find all possible solutions: since there is a large number of different optimal sorting sequences, several solutions should be considered to obtain a more accurate result. Reversal and translocation are the two common genome sorting operations in the evolution of mammalian species, and the sorting problem is to find the shortest sequence of reversals and translocations that transforms a source genome A into a target genome B. Currently the question is resolved by reducing it to sorting by reversals and sorting by translocations separately, but here we apply both operations together. In this paper we present an algorithm that explicitly treats the two sorting operations as distinct, and that finds multiple solutions, which is better both theoretically and practically than finding a single one: with a single solution we cannot decide whether it is the best, whereas with several solutions we can select the best among them. We also present an example demonstrating that this approach improves on the previous one.

  • Sentiment Analysis (SA) for Kannada documents has been explored only recently; a recent study [8] examined sentiment analysis for Kannada text using a Naive Bayes classifier. The objective of this work is to improve on the performance of that sentiment analyzer for Kannada [8]. We propose an ensemble of classifiers based on the Random Forest technique to identify sentiment polarity and test its performance. We also address some limitations of [8], such as handling multi-class labels and identifying the sentiment polarity of comparative and conditional statements. The overall accuracy improves from 65% to 72%, indicating that our Random Forest based approach is more effective for Kannada SA.

  • Solving large-scale problems in a variety of scientific and engineering fields requires efficient hierarchical methods to exploit parallelism. In this paper we present optimizations that enhance the performance of parallel N-body simulations (NBS) using the Barnes-Hut approximation on a 60-core MIC accelerator. We focus on two sources of performance degradation in NBS: (1) semi-static parallelism, which leads to dynamic load imbalance, and (2) the processing of very large data sets exceeding cache capacity. The first proposed optimization dynamically balances the load by using the measured load in one iteration as an estimate of the load in the next, helping to distribute the next iteration's work evenly. The second subdivides the data into well-adjusted chunks to enhance data reuse in shared caches. The proposed optimizations are tested on a 60-core MIC accelerator; evaluation results show that the optimized NBS achieves speedups of up to 33% from dynamic load balancing and 260% from enhanced cached data reuse.
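A minimal sketch of the first optimization under stated assumptions: last iteration's measured per-body cost is used as the estimate for the next iteration, and the body list is cut into per-thread chunks of roughly equal estimated cost.

```python
# A minimal dynamic load-balancing sketch; the gamma-distributed costs are
# stand-ins for per-body timings measured in the previous iteration.
import numpy as np

def balanced_chunks(costs, n_threads):
    """Split indices 0..N-1 into contiguous chunks of ~equal total cost."""
    bounds = np.searchsorted(np.cumsum(costs),
                             np.linspace(0, costs.sum(), n_threads + 1)[1:-1])
    return np.split(np.arange(len(costs)), bounds)

prev_iter_costs = np.random.gamma(2.0, 1.0, size=1000)  # measured last step
for t, chunk in enumerate(balanced_chunks(prev_iter_costs, n_threads=4)):
    print(f"thread {t}: {len(chunk)} bodies, "
          f"est. cost {prev_iter_costs[chunk].sum():.1f}")
```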

  • In this note, we first review Simon's algorithm, an early quantum algorithm, which can be used to search for a string among a number of input streams. We then enhance its search strategy to search for a string across multiple sources, using n-qubit Hadamard gates. This is further extended to search for an entity across multiple databases of dynamic size. Finally, a single-entity result is generated by removing redundant entities and cancelling out entities of opposite sign.

  • Kernel tracing demonstrates the various activities running inside the operating system. Kernel tracing tools such as LTT, LTTng, DTrace, and FTrace provide details about processes and their resources, but they lack the ability to extract knowledge from that data. Pattern recognition is a major field of data mining and knowledge discovery. This paper surveys widely used algorithms for finding frequent patterns over a database, such as Apriori, Tree-Projection, FP-growth, and Eclat, presents a comparative study of these frequent pattern mining algorithms, and suggests that the FP-growth algorithm is the most suitable for finding patterns in kernel trace data.
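For reference, a minimal Apriori-style frequent itemset sketch over toy trace-event transactions is given below; FP-growth, which the survey favours, avoids the repeated candidate scans performed here.

```python
# A minimal Apriori sketch: grow candidate itemsets level by level, keeping
# only those whose support meets the threshold.
from itertools import combinations

def apriori(transactions, min_support):
    items = {i for t in transactions for i in t}
    frequent, k = {}, 1
    prev = [frozenset([i]) for i in sorted(items)]
    while prev:
        counts = {c: sum(c <= t for t in transactions) for c in prev}
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        k += 1
        prev = list({a | b for a in level for b in level if len(a | b) == k})
    return frequent

traces = [frozenset(t) for t in
          [{"open", "read", "close"}, {"open", "read"}, {"open", "write"}]]
for itemset, n in apriori(traces, min_support=2).items():
    print(set(itemset), n)
```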

  • In the era of information burst, optimizing processes for information retrieval, text summarization, and text and data analytics systems is of utmost importance. To achieve accuracy, redundant words with little or no semantic meaning, known as stopwords, must be filtered out. Stopword lists have been developed for languages such as English, Chinese, Arabic, and Hindi, but a standard stopword list is still missing for Sanskrit. Identifying stopwords manually in Sanskrit text is a herculean task, so this paper presents an automated stopword generator algorithm based on word frequency, together with its implementation. Fine-tuning the generated list still requires manual intervention by a language expert, making this a hybrid approach. The paper presents the first list of its kind: seventy-five generic Sanskrit stopwords extracted from a corpus of nearly seventy-six thousand words.
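A minimal sketch of the frequency-based generation step: rank words by corpus frequency and take the head of the list as candidate stopwords, leaving expert pruning (the hybrid step) to a human; the tokens are a stand-in for real Sanskrit text.

```python
# A minimal frequency-based stopword generator sketch.
from collections import Counter

def candidate_stopwords(corpus_tokens, top_n=75):
    freq = Counter(corpus_tokens)
    return [w for w, _ in freq.most_common(top_n)]

tokens = "ca eva tu hi ca na tu ca eva".split()   # stand-in for Sanskrit text
print(candidate_stopwords(tokens, top_n=3))       # ['ca', 'eva', 'tu']
```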

  • The vast growth and broad accessibility of data on the Web have driven a surge of research activity in Web information retrieval. Topic and trust are important factors for a retrieval framework: because the Web is so large, it is difficult to satisfy the user's need, and the Web offers a rich store of information that communicates through hyperlinks. This paper presents an idea for improving PageRank, an "Upgraded PageRank with Topic using a Trust Component", which has greater capacity than the conventional PageRank algorithm. In the PageRank algorithm, the PageRank score of a page ignores whether the page is relevant to a query, which is a drawback; this paper presents a solution by taking trust into consideration along with topical aspects.
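One plausible reading of the trust component is to bias PageRank's teleport vector with per-page trust/topic scores, sketched below; the 4-page link matrix and trust values are invented for illustration and are not the paper's exact formulation.

```python
# A minimal trust-biased PageRank sketch via power iteration.
import numpy as np

def biased_pagerank(A, trust, d=0.85, iters=100):
    """A[i, j] = 1 if page j links to page i; trust sums to 1."""
    out = A.sum(axis=0)
    out[out == 0] = 1                     # guard against dangling pages
    M = A / out                           # column-stochastic link matrix
    r = np.full(len(trust), 1 / len(trust))
    for _ in range(iters):
        r = d * M @ r + (1 - d) * trust   # teleport to trusted/topical pages
    return r

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 0, 1, 0]], float)
trust = np.array([0.4, 0.4, 0.1, 0.1])    # trust/topic relevance scores
print(biased_pagerank(A, trust))
```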

  • Medical image processing is a highly challenging field. Medical imaging techniques are used to image the inner portions of the human body for diagnosis, and MR images are widely used in the diagnosis of brain tumors. In this paper, we present an automated method to detect and segment brain tumor regions. The proposed method consists of three main steps: initial segmentation, modeling of an energy function, and optimization of that energy function. To make the segmentation more reliable, we use the information present in both T1 and FLAIR MRI images, combining them in the probabilistic domain within a conditional random field (CRF) based framework. The main advantages of the CRF framework are that complex shapes can be modeled easily and the observations can be incorporated into the energy function.

  • In this paper, eye tracking is employed in studies to examine education and learning processes. Additionally, classrooms and labs are being equipped with this technology in order to teach tomorrow's workforce how to use eye tracking in many fields. From our corporate clients, our team hears that there is a growing demand for people who are experts in all aspects of eye tracking. Universities have a unique opportunity to meet this growing demand by equipping students with eye tracking tools. This expands their future options by teaching them how visual attention can be analyzed and applied to many research fields. To give both the university and its students a more competitive edge in the marketplace, we can help develop curricula that demonstrate to students how eye tracking can be integrated as a tool to answer research questions, solve business problems, and even build businesses. Eye tracking can capture interesting insights about student learning behavior and teaching strategies across a broad range of educational situations. The data from an eye tracker can reveal different learning strategies, helping researchers better understand students' cognitive workload, and can also be used to evaluate teacher performance and instruction methods. By understanding classroom dynamics, including the interaction between students and teachers, researchers can define appropriate training programs and instructions to improve education. Eye tracking shows the immediate reactions of users, as well as the distribution of their attention across an interface; testing software and applications during development is vital to ensuring they are effective for the user.

  • An efficient and accurate method is proposed in this manuscript for identifying an Output Error (OE) structure based Wiener model with the Craziness based Particle Swarm Optimization (CRPSO) algorithm. The accuracy and precision of the identification scheme are justified by the bias and variance achieved, respectively, for the estimated parameters. The mean square error (MSE) of the output is taken as the performance measure, i.e., the fitness, for the CRPSO algorithm, and the statistical measures associated with the MSE confirm the superior performance of the proposed CRPSO based identification. Accurate identification of the parameters of both the linear and the nonlinear block in a noisy environment ensures the robustness and stability of the overall system.

  • Earlier, when a bug was detected (manually or with the help of testing tools), it was corrected manually; because this is a time-consuming process, automatic debugging techniques were introduced, and they have tremendous value in software development. As a step in this direction, we experimented with our theoretical knowledge of automated debugging and propose a new idea for ranking bug fixes. When fix schemas are used to repair bugs, selecting a suitable fix for the detected bug is very important, so we rank the fixes according to parameters such as reliability, time, cost, and priority.

  • Rapid changes in application and Internet technology have driven the evolution of the Internet. The Internet of Things (IoT) describes a global network of nomadic devices; it integrates ubiquitous and pervasive computing with digital intelligence. The fundamental concept of IoT is to interconnect physical "things" using wired or wireless digital communication media such as Ethernet, Wi-Fi, and Bluetooth, and to let these things exchange information with each other so that they can make smart decisions themselves. In this paper, we discuss the major requirements for incorporating the IoT paradigm in organizations where a large number of people use shared electrical devices, and we propose a cost-effective IoT framework for an organization to manage and monitor its electrical power consumption, preventing the wastage of electrical energy.

  • One of the major tasks of data mining is association rule mining, which finds interesting relationships among the items in the itemsets of a huge database. Apriori is the familiar association rule mining algorithm for generating frequent itemsets; it uses a minimum support threshold to find frequent items. In this paper, a hybridization of the ABC and BAT algorithms is proposed for optimizing association rules: instead of the onlooker bee phase of ABC, the random walk of BAT is used in order to increase exploration. The hybridized ABC-BAT algorithm is applied to the rules generated by the Apriori algorithm to optimize them. Experiments performed on datasets taken from the UCI repository show that the proposed methodology can optimize association rules more effectively than existing algorithms, and we also show that the rules generated by the proposed method are simple and comprehensible.

  • Various parameters affect the performance of genetic algorithms in terms of the accuracy of the optimal solution achieved and the convergence rate. In this paper, the effect of one such important parameter, the elite count, on the behavior of genetic algorithms is meticulously analyzed. The standard benchmark Rastrigin function is used for the study (a minimal experiment is sketched below), and the results indicate that extremely high values of elite count cause premature convergence to local minima, while low values of elite count yield much better solutions, close to the global optimum.
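A minimal version of that experiment, assuming a deliberately basic real-coded GA (truncation selection, Gaussian mutation) on the 2-D Rastrigin function, with elite_count individuals copied unchanged each generation:

```python
# A minimal elitism experiment on Rastrigin; operators and parameters are
# illustrative, not the paper's exact GA configuration.
import numpy as np

def rastrigin(x):
    return 10 * x.shape[-1] + (x**2 - 10 * np.cos(2 * np.pi * x)).sum(-1)

def ga(elite_count, pop_size=50, gens=200, dim=2,
       rng=np.random.default_rng(0)):
    pop = rng.uniform(-5.12, 5.12, (pop_size, dim))
    for _ in range(gens):
        order = np.argsort(rastrigin(pop))
        elite = pop[order[:elite_count]]           # survive unchanged
        parents = pop[order[:pop_size // 2]]       # truncation selection
        kids = parents[rng.integers(0, len(parents), pop_size - elite_count)]
        kids = kids + rng.normal(0, 0.1, kids.shape)   # Gaussian mutation
        pop = np.vstack([elite, kids])
    return rastrigin(pop).min()

for ec in (2, 25, 45):       # low vs extreme elite counts
    print(f"elite_count={ec}: best f = {ga(ec):.4f}")
```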

  • Clustering is a useful exploratory data analysis method with wide applicability across many fields. However, data clustering relies greatly on the initialization of cluster centers, which can result in large intra-cluster variance and dead centers, leading to sub-optimal solutions. This paper proposes a novel variance-based version of the conventional Moving K-Means (MKM) algorithm, called Variance Based Moving K-Means (VMKM), which can partition data into optimal homogeneous clusters irrespective of cluster initialization. The algorithm utilizes a novel distance metric and a unique data element selection criterion to transfer selected elements between clusters, achieving low intra-cluster variance and thereby avoiding dead centers. Quantitative and qualitative comparisons with various clustering techniques are performed on four datasets drawn from image processing, bioinformatics, remote sensing, and the stock market. An extensive analysis highlights the superior performance of the proposed method over the other techniques.

  • Time series analysis has drawn the attention of many researchers, as the findings from such analysis are of great importance and value. Although many models exist for time series prediction, no single model can accurately predict all data, because of the wide range of applications. Artificial neural networks are widely used for forecasting because of their strong ability to capture nonlinear relationships in data, and an appropriate combination of neural algorithms can provide better prediction results for time series data. This paper hybridizes a traditional Back Propagation Neural Network (BPNN) with a Genetic Algorithm (GA) to achieve better prediction accuracy, with the Levenberg-Marquardt Algorithm (LMA) chosen as the back propagation algorithm. The proposed model is tested on L&T stock market data, air quality data, surface roughness data, and concrete strength data, and the performance of both BPNN and GA-BPNN is evaluated using standard error measures: Mean Squared Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE). The proposed model produces better prediction results than existing BPNN networks on both univariate and multivariate data sets.

  • Investigating the evolutionary relationships between species through similarity/dissimilarity analysis is a fundamental method. In the present work, the 20 canonical amino acids and 3 stop codons (terminations) are first classified into five classes based on their frequency mapping onto the 64 codons of the genetic table. Secondly, each DNA sequence is represented by a weighted directed multigraph based on that classification. Thirdly, the procedure is applied to find the evolutionary relationships between various species using alpha-globin and beta-globin genes. A new mathematical tool is constructed to derive the similarity/dissimilarity matrix and obtain suitable phylogenetic trees for each data set. The approach is completely alignment-free, so the time complexity is directly proportional to the sequence length, i.e., O(N); moreover, the classification rule decreases the complexity of the graph constructions.

  • The Internet is growing with a huge amount of information, through blogs, Twitter tweets, reviews, social media networks, and other content, and most of the text on the Internet is unstructured and anonymous. Author profiling is a text classification technique used to predict profiling characteristics of authors, such as gender, age, country, native language, and educational background, by analyzing their texts. Researchers have proposed different types of features, such as lexical, content-based, structural, and syntactic features, to identify authors' writing styles, and most existing approaches in author profiling use a combination of features to represent a document vector for classification. In this paper, a new model is proposed in which document weights are calculated from a combination of POS n-grams and the most frequent terms; these document weights are used to represent the document vectors for classification. The experiment was carried out on the reviews domain to predict the gender of the authors, and the results achieved are promising compared with existing approaches in author profiling.

  • Privacy preservation of personal data is achieved by techniques such as k-anonymity, l-diversity, and t-closeness, but the proposed techniques have been implemented only where a laptop or desktop computer is available. Nowadays people prefer to carry a mobile phone instead of a laptop, because much of what a laptop does, such as sharing files, images, and videos, can also be done on a mobile; however, such sharing may compromise the privacy of the data. To protect the information, the k-anonymity technique is used, which chooses a k-value such that the data of at least k-1 other individuals also appears in the release. The technique is implemented using the Android SDK: whenever a user requests information, the data is sent in an anonymized form rather than the original. This paper presents the implementation of this technique and its results.

  • This paper introduces the use of the magnitude component of the Local Binary Pattern (LBP) in addition to the sign component (the conventional method). We applied this Completed Local Binary Pattern (CLBP) to plant leaf classification, using randomly taken divergent blocks from each texture data set. The approach is also useful for identifying quality leaves, enabling automation of the grading process in commercial crops such as tobacco. By combining the center-pixel CLBP (CCLBP), the sign component of CLBP (SCLBP), and the magnitude component of CLBP (MCLBP), considerable improvement can be achieved in rotation-invariant texture classification.

  • Rating- and review-based algorithms such as collaborative filtering face the cold start problem when new items or new users are added to the recommender system, because of the lack of ratings and reviews. The primary aim of this work is to solve the cold start problem faced by rating-based recommendation systems and to increase the accuracy of recommender systems. We use a content-based filtering algorithm along with cross-domain data taken from Facebook, so that the problems of adding a new user or a new item to the recommender system can be solved. The combination of content-based filtering and cross-domain data obtains better results than collaborative filtering and offers a good way to deal with the cold start problem.

  • This paper reports a novel approach to heterogeneous face identification that fuses MB-LBP (Multi-Block Local Binary Pattern) and MLBP (Modified Local Binary Pattern) descriptors at the feature extraction level. Face images obtained from different sources, visible, sketch (viewed and forensic), and NIR (Near Infrared), are used for the experiment. To achieve a robust heterogeneous system, MB-LBP and MLBP are applied to the individual heterogeneous face images to extract local features from the visible, sketch, and NIR images; these local features capture the structural description of a face image. The feature vectors obtained from the two local descriptors are fused by concatenation, and the fused feature vector is passed to the classifier. Experimental results are obtained on two well-known heterogeneous face databases, LDHF and IIIT, with rank order statistics used to rank the face images according to their identification probability. The identification results in the heterogeneous environment are found to be very consistent and convincing.

  • Scaling databases is one of the important aspects of testing database engines. Traditionally, most databases are scaled with respect to size: given a scaling factor, the row cardinality of each relation is scaled up by that factor. In this paper, we describe a new method of scaling called cost based scaling, where cost is the time taken to execute a query workload on the scaled database. The goal of cost based scaling is to scale the database so that the time taken to execute queries on the scaled database is a chosen multiple of the time taken on the original database (a search-based sketch follows below). Cost based scaling can be used as an alternative to traditional size based scaling for testing database engines.
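One way to read cost based scaling is as a search problem over the size scale factor, sketched below; run_workload is a hypothetical timing hook, and the power-law cost model is an assumption standing in for real query execution.

```python
# A minimal cost-based-scaling sketch: binary-search the scale factor until
# workload runtime hits the target multiple of the original runtime.
def run_workload(scale):                 # stand-in: pretend cost grows ~ s^1.3
    return 10.0 * scale ** 1.3           # seconds on a database scaled by s

def cost_scale_factor(target_multiple, lo=1.0, hi=1000.0, tol=1e-3):
    base = run_workload(1.0)
    while hi - lo > tol:                 # assumes cost is monotone in scale
        mid = (lo + hi) / 2
        if run_workload(mid) < target_multiple * base:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(cost_scale_factor(target_multiple=4.0))   # ~ 4**(1/1.3) ≈ 2.9
```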

  • Here we propose a new approach based on neutrosophic logic to help patients make proper decisions through pathological-report-based analysis. Neutrosophic sets are used for uncertain data: fuzzy data handles incomplete data with a truth value only, and vague data handles uncertainty with truth and falsity values, but neither can handle the uncertainty required by an analytical system. Neutrosophic sets are therefore used to handle uncertain data, in the form of neutrosophic data, for decision making based on pathological test reports.

  • Most existing algorithms for Markov network structure learning are either restricted to learning interactions among a small number of variables or are extremely slow because of the large number of possible structures, as reported in the literature. In this paper, we propose a novel method that uses Alternating Decision (AD) trees for learning Markov network structures. The advantage of AD trees is that complex interactions among many variables can be represented and high prediction accuracy obtained. Given a data set, structure learning with AD trees involves learning an AD tree for the prediction of each variable, converting each tree to a set of conjunctive features, and learning weights; the set of conjunctive features defines the Markov network structure. We empirically compare the Decision Tree Structure Learning (DTSL) method of Lowd and Davis with the AD tree approach over five data sets and find the AD tree approach to be more accurate on two of them, although it is slower than DTSL.

  • In general, opinion mining is used to learn what people think and feel about products and services on social media platforms. Millions of users share opinions on different aspects of life every day, and spurred by that growth, companies and media organizations are increasingly seeking ways to mine this information, which requires efficient techniques for collecting large amounts of social media data and extracting meaningful information from it. This paper provides an interactive automatic system that predicts the sentiment of reviews/tweets posted on social media using Hadoop, which can process huge amounts of data. A few distinct problems predominate in this research community: sentiment classification, feature based classification, and handling negation. A precise method is used for predicting sentiment polarity, which helps to improve marketing strategies. The paper addresses the challenges that appear in the process of sentiment analysis; real-time tweets are considered, as they are rich sources of data for opinion mining and sentiment analysis. The focus is on sentiment analysis, feature based sentiment classification, and opinion summarization, and the main objective is to perform real-time sentiment analysis on tweets extracted from Twitter and to provide time-based analytics to the user.

  • Agility is change in requirements driven by environmental factors. In an enterprise solution, different applications serve the organization's different business areas, and based on the domain, many enterprises have developed insubstantial enterprise software solutions whose business processes can face problems as business conditions change. This paper examines the different types of agility in a supply chain management (SCM) system: analyzing the different business processes at the architectural level helps identify where agility is affected, and identifying the types of agility in an enterprise solution helps find the agile parameters that are affected. The case study also shows how the kaizen concept can improve process agility and thereby business agility, and how enterprise solution complexity can be managed by resolving processes per domain.

  • Today people are more interconnected through the Internet than ever before. The Internet has brought not only advantages but also security threats, cyberattacks, phishing, and the like, and to mitigate cybercrime, governments are enforcing cyberlaws. In this context, authentication of digital objects is essential, and many watermarking schemes have been proposed to provide it. Among them, edge based watermarking schemes form a special category because of their low distortion; however, present edge based schemes suffer from a smoothing effect, and their reversibility is uncertain. In this paper we propose a Reference Image and Edge (RIE) based watermarking scheme to overcome the smoothing effect of existing edge based schemes. The RIE scheme also considers cover content information while embedding the watermark pattern. Compared with existing edge based data hiding schemes, the proposed RIE watermarking scheme improves visual perception with roughly the same embedding capacity.

  • This paper addresses the lack of a semantic factor in recommendation systems and describes the different recommendation techniques employed in current e-commerce websites. Recommendation systems can be broadly classified into three categories: content-based, collaborative, and hybrid approaches. Content-based systems consider the properties of the items to be recommended; for instance, if an Amazon user has purchased many romantic novels, a content-based system recommends novels tagged in the database with the "romantic" genre. Collaborative filtering systems recommend items based on similarity measures between like-minded users and/or items: the items recommended to a user are those preferred by similar users. The paper also emphasizes the need for semantics in current recommendation systems to recommend products accurately, describes various limitations of current recommendation methods, and suggests possible solutions to improve the recommendation systems used in e-commerce websites. It includes a survey of popular e-commerce websites such as Amazon, eBay, Flipkart, Snapdeal, and Paytm, rating them on different parameters and comparing them, and focuses on how graph algorithms can be used to improve recommendation in e-commerce. The proposed system compares flickr.com's image recommendations with the proposed method, which performs semantic recommendation using a graph-based overlap technique.

  • The past decade has shown us the power of cyberspace, and we are growing ever more dependent on it. The exponential evolution of the domain has attracted attackers and defenders of technology equally, and this inevitable domain has also raised average human awareness and knowledge. As attack sophistication grows, the protectors have so far stayed a step ahead in mitigating attacks. A study of various threat detection, protection, and mitigation systems revealed a common trait: either users are ignored entirely, or the systems rely heavily on user input for correct functioning. Building on this, we designed a study in which user input was gathered alongside independent detection and prevention systems to identify and mitigate risks. The study led us to conclude that involving users exponentially enhances machine learning and segments the data sets faster for a more reliable output.

  • Retailing is not only an important part of the economic structure but very much a part of our lives. Its current contribution of 10% of the country's GDP proves its significance for GDP growth. For centuries, India has operated within its own unique concept of retailing. In the words of David Gilbert, retail is a business that directs its marketing effort toward final consumer satisfaction, based primarily on organizing the selling of goods and services as a means of distribution. It is certain that India will be a major destination for retailing, as the purchasing power of the Indian middle class is increasing at a rapid pace, and rural India can be considered the heartland of retailing. Yet according to an Indian Market Research Bureau (IMRB) study, 0.6 million villages in India have no retail outlet of any kind, and outlet density in rural India is lower than in urban India, indicating a lot of untapped potential. Retail companies are growing in India at a remarkable rate year after year, and they have brought a striking change in the shopping attitudes of Indian customers. Major international retail enterprises such as TESCO, METRO AG, and WAL-MART have already recognised the scope for growth in the Indian retail industry. As the second fastest growing economy in the world, India provides a plethora of opportunities and increasing scope for the retail market, and investors have outstanding investment options in the Indian retail segment and allied sectors. India has among the highest shop densities in the world, with around 1,20,000 shops across the country, and has been rated among the top 10 FDI destinations of the world.

  • A data warehouse (DW) is a repository of strategic data gathered from many sources over a long period of time. Traditional DW operations mainly comprise extracting data from multiple sources, transforming the data into a compatible form, and finally loading it into the DW schema for further analysis. These extract-transform-load (ETL) functions need to be incorporated into appropriate tools that organisations can use efficiently, and a wide variety of such tools is available on the market. In this paper, we compare different aspects of some popular ETL tools (Informatica, DataStage, Ab Initio, Oracle Data Integrator, SSIS) and analyse their advantages and disadvantages. We also highlight some of their salient features (performance optimisation, data lineage, real-time data analysis, cost, language binding, etc.) in a comparative study. Apart from the review of the ETL tools, the paper provides feedback from the data science industry on the market value and practical relevance of the tools. The traditional DW concept is expanding rapidly with the advent of big data, cloud computing, and real-time data analysis, and with the growing need to parse information from structured and unstructured data sources; we also identify these factors, which are transforming the definition of data warehousing.
