
Technical Papers

In this modern era of globalization, e-commerce has become one of the most convenient ways to shop. Every day people buy many products online and post reviews about the products they have used. These reviews play a vital role in determining how a product is placed in consumers' minds, so the manufacturer can modify the features of the product as required; they also help new consumers decide whether or not to buy the product. However, it would be a tedious task to manually extract the overall opinion from such enormous unstructured data. This problem can be addressed by an automated system called 'Sentiment Analysis and Opinion Mining' that can analyze and extract users' perceptions from the reviews. In our work we have developed an overall process of 'Aspect or Feature based Sentiment Analysis' using a Support Vector Machine (SVM) classifier in a novel approach. It proves to be one of the most effective ways to analyze and extract users' overall views about a particular feature as well as the whole product.
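The abstract does not give implementation details; as a rough illustration of the SVM classification step only, here is a minimal scikit-learn sketch. The review snippets and sentiment labels are hypothetical placeholders, and the paper's actual aspect extraction pipeline is not reproduced.

```python
# Minimal sketch: sentiment classification of aspect-level review snippets with an SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

reviews = ["battery life is excellent", "the camera is blurry and disappointing",
           "screen quality is great", "battery drains too fast"]   # hypothetical data
labels = ["positive", "negative", "positive", "negative"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(reviews, labels)
print(clf.predict(["camera quality is great"]))
```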
Spam is an unsolicited message, usually sent in bulk. It is an unwanted activity performed to deceive people, steal their personal information, inject viruses into their systems, or redirect them to malicious sites. On online social networks (OSNs), spammers share malicious links disguised as genuine ones, place discount messages on users' walls, develop malicious apps, and sometimes create fake accounts. On blog sites, intruders use the portal to spread rumors, mislead readers about certain campaigns, and overload forums with off-topic comments. When readers are interested only in strictly on-topic information, unrelated comments create confusion, so it is necessary to analyze unrelated content on online social media. In this paper, we apply two text mining approaches to measure the relatedness between posts and comments over two public Facebook pages, India-forum.com and Wikipedia.
Road convenience and safety are crucial to society. Statistics on accidental deaths show a growing trend in India. Road accidents involving vehicle collisions can lead to injuries or even deaths and cause economic loss. Reasons for accidents include over-speeding, drunken driving, vehicle defects, and bad road conditions. Places near residential areas, pedestrian crossings, schools, colleges, and other educational institutions are the major accident zones. Traffic in Indian mega cities is increasing day by day due to the growing population. In this paper we identify the factors influencing accidents and determine which factor is most accident-prone using the Info Gain Attribute Evaluator function in the WEKA tool. We have also performed association-rule classification using the Apriori algorithm and identified the best rules for the accident dataset.
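For readers unfamiliar with information-gain attribute ranking, the following is a small sketch of the idea behind WEKA's InfoGainAttributeEval, written in Python. The tiny accident "dataset" is hypothetical and is only there to make the computation concrete.

```python
# Sketch of information-gain attribute ranking (entropy of the class minus
# conditional entropy given the attribute); data below is hypothetical.
import numpy as np
import pandas as pd

def entropy(series):
    p = series.value_counts(normalize=True)
    return -(p * np.log2(p)).sum()

def info_gain(df, attribute, target):
    h_target = entropy(df[target])
    h_cond = sum(len(g) / len(df) * entropy(g[target])
                 for _, g in df.groupby(attribute))
    return h_target - h_cond

df = pd.DataFrame({
    "road_condition": ["bad", "good", "bad", "good", "bad"],
    "over_speeding":  ["yes", "yes", "no", "no", "yes"],
    "severity":       ["fatal", "minor", "minor", "minor", "fatal"],
})
for attr in ["road_condition", "over_speeding"]:
    print(attr, round(info_gain(df, attr, "severity"), 3))
```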
Online Social Networks (OSNs) have become a popular and de facto portal for digital information sharing. Preserving information privacy is indispensable in such applications, as the shared information may be sensitive. The issue becomes more challenging when multiple parties participate in the shared resources or data. In this paper, we propose an effective access control technique to allow or disallow access to shared resources while considering the authorization requirements of all the participating parties. A logical representation of the proposed access control technique is prepared to analyse the privacy risk.
Extracting customers who share similar interests and are connected via a set of relationships in a telecom social network is a challenging task. This paper describes an efficient method of building a multi-relational, heterogeneous social network for telecom customers and identifying the social structures present in the telecommunication network. The telecom social network is constructed by considering multiple attributes and the different services provided by the telecom industry, using an adjacency matrix over all customers. The approach finds the social position of each customer through measures such as centrality, betweenness, density, and closeness. The centrality measures are analyzed to identify the central and most influential customers in the network, so that customized services can be provided to retain them. The paper also describes the extraction of dynamic patterns and future structures, which aids customer retention and helps manage market requirements more efficiently.
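As a quick illustration of the network measures named above, here is a minimal sketch using networkx. The toy customer graph is hypothetical; a real telecom network would be built from call, SMS, and data interactions as the abstract describes.

```python
# Minimal sketch of centrality, betweenness, closeness, and density on a toy graph.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")])

print("degree centrality:     ", nx.degree_centrality(G))
print("betweenness centrality:", nx.betweenness_centrality(G))
print("closeness centrality:  ", nx.closeness_centrality(G))
print("graph density:         ", nx.density(G))
```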
Feature Selection (FS) is an important step in enhancing classification accuracy. Combined with a lazy learning classification algorithm, feature selection methods calculate feature relevancy to reduce storage. Dimensionality reduction on scientific data is a popular way to understand the underlying scientific knowledge in a dataset resulting from scientific experiments. This paper presents a review and systematic comparative study of the methods and techniques used in scientific data mining. The performances of the techniques are compared and a meaningful direction is arrived at. Several techniques, such as Sequential Forward Selection (SFS), Sequential Floating Forward Selection (SFFS), and Random Subset Feature Selection (RSFS), are used in combination with a Nearest Neighbor classifier to minimize the storage space of scientific datasets. The paper explains these approaches by identifying various dimensionality reduction techniques that improve performance.
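One of the combinations the review discusses, sequential forward selection paired with a nearest-neighbor classifier, can be sketched with scikit-learn as below. The bundled wine dataset stands in for scientific data; the number of features to keep is an arbitrary choice for illustration.

```python
# Sketch: sequential forward selection (SFS) wrapped around a k-NN classifier.
from sklearn.datasets import load_wine
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

X, y = load_wine(return_X_y=True)
knn = KNeighborsClassifier(n_neighbors=3)
sfs = SequentialFeatureSelector(knn, n_features_to_select=5, direction="forward")
sfs.fit(X, y)
print("selected feature indices:", sfs.get_support(indices=True))
```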
Mining frequent and maximal periodic patterns is a challenging task for data scientists due to the unstructured, dynamic, and huge raw data generated from the web. Big Data is a recent trend concerned with examining datasets from large, complex databases; developers cannot manage them with conventional algorithms or knowledge discovery software tools. Big Data mining is the ability to extract valuable information from these huge datasets or data streams, characterized by the three V's (Volume, Velocity, and Variety). Previous studies focus on finding frequent patterns in large traffic and sensor data applications. However, mining maximal and useful periodic patterns from spatiotemporal datasets is still an open research problem in weather monitoring, fraud detection, and forest fire prevention applications. We propose two algorithms, an Enhanced Tree-based pattern Mining Algorithm (ETMA) and an Extended Frequent Pattern Mining Algorithm (EFPMA), which generate all frequent and maximal periodic patterns from spatiotemporal databases. A novel framework is introduced to mine spatiotemporal patterns from Big Data. Existing algorithms are adequate for computing essential patterns but become tedious when applied to Big Data. The proposed framework and algorithms produce better results than the UDDAG and STNR algorithms in terms of scalability, and decreased run time compared to the DBSCAN and ECLAT algorithms.
Semantic textual similarity measures the semantic equivalence between a pair of sentences. The lexical overlap approach evaluates the similarity of a sentence pair based on the number of terms the two sentences share. The similarity can be measured at the same level of abstraction or at multiple levels. This paper presents the influence of token similarity measures in the lexical overlap semantic similarity method, used in conjunction with quadratic assignment problem (QAP) alignment. The lexical overlap method is used to determine whether a sentence pair is semantically equivalent or not.
Nearest Neighbor classifiers demand high computational resources, i.e., time and memory. Researchers in pattern recognition follow two distinct methods to reduce this computational burden: reducing the reference (training) set, and dimensionality reduction, referred to as Prototype Selection and Feature Reduction (a.k.a. Feature Extraction or Feature Selection), respectively. In this paper, we cascade the two methods to reduce the dataset in both directions, thereby reducing the computational burden of the Nearest Neighbor classifier. The experiments are performed on benchmark datasets and the results obtained are satisfactory.
Image classification plays a vital role in the field of computer science. It can be defined as the processing technique that applies quantitative methods to the values in a remotely sensed scene to group pixels with the same digital number values into attribute classes or categories. The classified data thus obtained may then be used to create thematic maps of the land cover present in an image. Classification involves determining an appropriate classification system, selecting training sample data, image pre-processing, extracting features, selecting appropriate classification techniques, post-classification processing, and accuracy validation. The aim of this study is to assess the Support Vector Machine, a contemporary machine intelligence technique, in terms of efficiency and prediction for pixel-based image classification. The Support Vector Machine is a classification procedure based on kernel methods that has proved very effective in solving intricate classification problems in many different application fields. Analysis of the latest generation of remote sensing data with Support Vector Machines shows them to be among the most effective classifiers.
Lectures are one of the media used in teaching-learning strategies. However, the amount of knowledge gained by a student is not always equal to the amount of knowledge shared in the lecture. This can be due to several reasons, such as long lecture hours, lack of student concentration, and many others. If a concept that is not clearly understood is a prerequisite to understanding the next or another concept or topic, there is a high chance the student will not understand that either. In such a scenario, to ensure that all students have understood each and every concept taught, there is a need to identify which concepts the students have not understood properly. To fulfill this requirement, an online exam evaluation system is proposed that will not only assess competence but also highlight the concepts that the student has not understood properly or is confused about, through custom reports generated using Machine Learning and Rule Based Reasoning.
In this paper we propose a proficient approach to using flat files or synthetic files as a database. Recognizing that there are many disadvantages to using a text file as a database, in this work we strive to reduce some specific hindrances with the help of regular expressions (regex). Regular expressions form an unbelievably powerful language that is no longer just for programmers; it shows up in all sorts of places today. In this paper, we use regex to recommend a productive method of data or text manipulation in flat files. An improved data manipulation procedure in text files can have a significant impact on upgrading the flat file system.
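To make the idea of regex-driven manipulation of a flat file concrete, here is a minimal sketch. The file name and the record format (comma-separated name, age, city per line) are hypothetical and not taken from the paper.

```python
# Sketch: update the age field of every record whose city is "Pune" in a flat file.
import re

pattern = re.compile(r"^(?P<name>[^,]+),(?P<age>\d+),(?P<city>Pune)$", re.MULTILINE)

with open("people.txt", "r", encoding="utf-8") as f:   # hypothetical flat-file "table"
    text = f.read()

updated = pattern.sub(
    lambda m: f"{m.group('name')},{int(m.group('age')) + 1},{m.group('city')}",
    text)

with open("people.txt", "w", encoding="utf-8") as f:
    f.write(updated)
```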
Itemset mining identifies groups of frequent itemsets that may signify relevant information. Constraints are usually enforced to focus the analysis on the most interesting itemsets. In this paper we propose unique constraint-based mining on a relational dataset. Constraint-based mining helps us merge all itemsets that are interrelated with each other. Specifically, it chooses itemsets with the same consequent part of an association rule and evaluates the highest-ranked itemsets with minimum coverage in the relational database. The paper mainly concentrates on proposing a new Apriori-based algorithm that satisfies certain properties of constrained itemset mining, such as anti-monotonicity.
Database management systems are used to store data and to perform various operations on the stored data, such as insertion, deletion, and updating. To access the data, a structured query language is required, which is difficult for naive users to learn. In this paper, the design of a generic natural language interface to database management systems for retrieving knowledge through structured query language is attempted. The proposed system can retrieve data from single or multiple columns. It also works on various types of queries that join multiple tables with selected attributes under multiple conditions. The proposed system is independent of the underlying database tables and the relations among the tables, and is also independent of the natural language used to query the database.
While training a model with data from a dataset, we have to think of an ideal way to do so. The training should be done in such a way that the model has enough instances to train on without over-fitting; at the same time, if there are not enough instances to train on, the model will not be trained properly and will give poor results when used for testing. Accuracy is important in classification, and one must always strive to achieve the highest accuracy, provided there is no trade-off with inexcusable runtime. When working on small datasets, the ideal choices are k-fold cross-validation with a large value of k (but smaller than the number of instances) or leave-one-out cross-validation, whereas on colossal datasets the first thought is generally to use hold-out validation. This article studies the differences between the two validation schemes and analyzes the possibility of using k-fold cross-validation instead of hold-out validation even on large datasets. Experiments were performed on four large datasets, and the results show that up to a certain threshold, k-fold cross-validation with the value of k varied with respect to the number of instances can indeed be used instead of hold-out validation for quality classification.
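The two validation schemes being compared can be sketched in a few lines of scikit-learn. The dataset and classifier below are illustrative stand-ins, not the four large datasets used in the article.

```python
# Sketch contrasting hold-out validation with k-fold cross-validation.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
clf = LogisticRegression(max_iter=2000)

# Hold-out: a single train/test split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
holdout_acc = clf.fit(X_tr, y_tr).score(X_te, y_te)

# k-fold: every instance is used for both training and testing across the folds.
kfold_acc = cross_val_score(clf, X, y, cv=10).mean()

print(f"hold-out accuracy: {holdout_acc:.3f}, 10-fold accuracy: {kfold_acc:.3f}")
```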
To solve heterogeneity and scale problems, cloud computing offers an abundance of services to users. Users often use these services without knowing how good a service is and without any awareness of its Quality of Service (QoS), and end up perturbed and unsatisfied. To avoid user dissatisfaction and annoyance with a service, it is very important to give users awareness of the QoS of cloud services before they use them. For any cloud service provider, to accumulate profit, compete with other service providers, and stay in business successfully, it is imperative to earn customer satisfaction; cloud service providers should therefore improve the QoS of their services to increase customer satisfaction. This paper presents and designs how awareness of QoS is useful for users who will use services in the future and how improving QoS is useful for service providers in the cloud. Learning the QoS of one service from one user's feedback is quick, but it is very demanding and time-consuming to gain awareness of the QoS of all services in the cloud by collecting feedback from users who have already used them; to overcome this, a clustering technique is used. Clustering is one of the important tasks in data mining and is beneficial to many users, so by using clustering, users who want to use a service in the future can quickly gain awareness of the QoS of cloud services through the Intuitionistic Fuzzy C-means clustering algorithm. Multiple agents are used to carry out this process, so a multi-agent system is introduced to perform the work. The K-means, Hard C-means, and Fuzzy C-means clustering algorithms are not efficient or suitable enough for clustering QoS values of services, so in this paper the Intuitionistic Fuzzy C-means algorithm is used to give awareness of the QoS of cloud services.
Image segmentation assigns a label to every pixel in an image such that pixels with the same label share certain similar characteristics. Segmentation simplifies the representation of an image into something that is more meaningful and easier to analyze. It plays a crucial role in medical diagnosis and the treatment of diseases by cutting out the Region Of Interest (ROI) from an image. The ROI differs with the application, and thus image segmentation remains a challenging area of research. This paper compares K-Means and K-Nearest Neighbor segmentation techniques for segmenting slides of syringocystadenoma papilliferum, a sweat gland tumor appearing at birth or puberty. Segmentation is used by pathologists to distinguish different types of tissues and focus on the region of interest. The evaluated results show that the K-NN segmentation technique yields higher mutual information, proving it to be a comparatively better algorithm.
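As an illustration of the K-Means side of the comparison, the following is a minimal color-based segmentation sketch. The image file name is a placeholder, and no K-NN comparison or mutual-information evaluation is attempted here.

```python
# Sketch: K-Means segmentation of an RGB image by clustering pixel colors.
import numpy as np
from skimage import io
from sklearn.cluster import KMeans

img = io.imread("slide.png")                      # H x W x 3 RGB image (placeholder path)
pixels = img.reshape(-1, 3).astype(float)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
segmented = kmeans.cluster_centers_[kmeans.labels_].reshape(img.shape).astype(np.uint8)

io.imsave("segmented.png", segmented)
```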
One of the distinguished objectives of science and technology is to provide scientific relevance to non-scientific approaches. This has opened up many new avenues in the field of research and development. One such domain is the quantification of human intelligence and its related qualities. Intelligence may be connected to the main and most complex organ of the human body, the brain. This paper is one such attempt to study the functioning of the brain by quantifying an Intelligence Index (a combination of EQ and IQ in varied proportions) through the computation of the Emotional Quotient and Intelligence Quotient using the K-means clustering technique.
Diabetic retinopathy is an important branch of ophthalmology. In the non-proliferative stage of diabetic retinopathy, microaneurysms can be detected early. Microaneurysms are verified through fundus images, wherein fine red dots near the blood vessels confirm the defect. Conventional methods, with their weak resolution, can seldom identify them to such accuracy. In this work, we present a procedure to identify microaneurysms with higher accuracy. The retinal vessels are extracted from the collected fundus image using a Gabor wavelet, which delivers a high-accuracy output. For accurate analysis, the image is subdivided into two regions, vessel neighborhood and non-vessel neighborhood, to expedite support vector machine (SVM) analysis. The SVM engine is then trained on positive and negative samples of identified fundus image regions. Using a sliding-window technique, the entire test image is analyzed, limiting SVM analysis to the near-vessel region. This improves the overall performance of the analysis and leaves time for a deeper sensitivity analysis of near-vessel areas. The logic and the code have been tested on sample images and the results have been satisfactory.
Enterprise Resource Planning (ERP) systems are widely used across organizations regardless of size. The need for making rapid decisions based on analysis of complex data led to the use of ERP systems. In the beginning, ERP systems were limited to large organizations; this trend changed with the advent of open source ERP systems, which have gained the confidence of small, medium, and large industries. There are many open source ERP systems currently available in the market, and OpenERP is one of the prominent ones. The beauty of its framework and the open source technologies used in the system make OpenERP unique and efficient. A detailed comparison between OpenERP and other prominent ERP systems makes it clear that OpenERP exceeds the expectations set by its competitors. With the introduction of Odoo/OpenERP 8.0, OpenERP has taken the ERP system to a new dimension, incorporating a CMS, an e-commerce system, and business intelligence. At the same time, the complexity of the system has risen, which should be analyzed and addressed accordingly, and current users should be able to upgrade to the new version.
We present our preliminary research work intended to lay the foundation for providing a cost effective, secure and reliable system for real time clinical data acquisition required for analysis. We identify a specific problem, namely the need to achieve interoperability by applying a standards based data modeling approach to achieve a common platform that serves to improve the health data mapping of unstructured data and addresses ambiguity issues when dealing with health data from heterogeneous systems. We propose an original algorithm directed at mapping unstructured data and performing semantic integration to form a uniform interoperable system. In order to achieve this, efficient data modeling techniques are introduced for improving data storage and extraction, as demonstrated by our preliminary results.
In this paper we propose an SVM algorithm for the MNIST dataset, using the fringe distance map and its complementary version, the inverse fringe, as features. The MNIST dataset consists of 60,000 training examples and 10,000 test examples. In our experiments, using the fringe distance map as the feature, the accuracy of the system is 99.99% on the training data and 97.14% on the test data; using the inverse fringe distance map, the accuracy is 99.92% on the training data and 97.72% on the test data; and using the combination of the two features, the accuracy is 100% on the training data and 97.55% on the test data.
Music Information Retrieval (MIR) focuses on retrieving useful information from collections of music. The objective of the research work in this paper is to explore clustering approaches that can be useful for automatically mining content from Carnatic instrumental music. The content to be retrieved is the instrument primarily used to play the song. Carnatic music songs with ten different instruments, namely Flute, Harmonium, Mandolin, Nagaswara, Santoor, Saxophone, Sitar, Shehnai, Veena, and Violin, are considered as input. Mel Frequency Cepstral Coefficients (MFCC) and Linear Predictive Coefficients (LPC) features are used to represent the music information. In the first step, a visualization technique is used to explore the capability of different features in distinguishing Carnatic music played with different instruments. Different clustering techniques are then used to understand the natural grouping among these instrumental pieces. A discussion comparing the instrument clustering results of different algorithms, combined with various features, is also presented.
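A rough sketch of the MFCC-plus-clustering idea is given below using librosa and scikit-learn. The audio file paths are hypothetical, only MFCC (not LPC) features are shown, and averaging the MFCCs per clip is a simplification for illustration.

```python
# Sketch: extract MFCC features per clip and cluster the clips with K-Means.
import numpy as np
import librosa
from sklearn.cluster import KMeans

files = ["flute_01.wav", "sitar_01.wav", "veena_01.wav"]   # placeholder paths
feats = []
for path in files:
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)     # 13 x n_frames
    feats.append(mfcc.mean(axis=1))                        # one vector per clip

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(np.vstack(feats))
print(dict(zip(files, labels)))
```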
About 35% of India's gross national product comes from the agricultural sector, so any losses in gross agricultural product also affect the Indian economy. The percentage of crop yield lost to pests and diseases is considerable. As grapes are grown in most states in India, they play an important role in gross crop yield, and a computerized system for managing the pests and diseases occurring in grapes will help increase the total yield. The paper proposes an expert system for pest and disease management of grapes that forecasts probable pests and diseases, taking into account current weather conditions at the grape farm location. The knowledge base for pests and diseases is generated by extracting information from the internet and storing it as an OWL document. The inference engine of the grape expert system is based on fuzzy logic, since weather conditions can be logically represented as fuzzy variables, and it uses a rule base developed by experts.
The increase in online applications is leading to exponential growth of web content. Most business organizations are interested in understanding web user behavior to enhance their business. In this context, users' navigation in static and dynamic web applications plays an important role in understanding their interests. Static mining techniques may not be suitable as-is for dynamic web log files and decision making. Traditional web log preprocessing approaches and web log usage patterns have limitations in analyzing the relationship of content with browsing history. This paper focuses on various static web log preprocessing and mining techniques and their limitations when applied to dynamic web mining.
Word Sense Disambiguation (WSD) is the process of identifying the proper sense of an ambiguous word depending on the particular context: finding the correct sense s_i among the set of senses {s_1, s_2, ..., s_n}. The task is motivated by its role in various Natural Language Processing (NLP) applications such as IR, MT, QA, TC, and SP. In this paper, a machine learning technique, the Naive Bayes classifier, is used for the automatic disambiguation task. Training data was prepared with sense-annotated features, with the help of a sense inventory. Currently, about 160 ambiguous words are present in the sense inventory, derived from 18K and 25K words of the Assamese corpus and WordNet. The system is implemented in two phases. In the first phase, a total of 2.7K sense-annotated training instances and 800 test instances were taken, and an accuracy of 71% was obtained. Analysis of the results shows that accuracy improves as the training data size gradually increases and as the model learned in the previous iteration is reused. In the second phase, we manually validate the outcomes of the first phase and add the clean sense-tagged data to the previous training set. We then retrain the system with the enlarged training data (3.5K), which enhances the accuracy. An iterative learning scheme is adopted and a further 7% accuracy is achieved. This paper implements an Assamese WSD system using a Naive Bayes classifier with lexical features, and the enhancement of the baseline method improves the classifier accuracy to 78%.
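The Naive Bayes approach to WSD can be sketched as below with bag-of-words context features. The example contexts, the English ambiguous word "bank", and the sense labels are hypothetical; they are not the Assamese sense-annotated corpus used in the paper.

```python
# Sketch: Naive Bayes word sense disambiguation from context words.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

contexts = ["deposited money in the bank", "fished along the river bank",
            "the bank approved the loan", "sat on the grassy bank of the stream"]
senses = ["finance", "river", "finance", "river"]   # hypothetical sense tags

wsd = make_pipeline(CountVectorizer(), MultinomialNB())
wsd.fit(contexts, senses)
print(wsd.predict(["opened an account at the bank"]))
```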
This work proposes a system that perceives hand signs and gestures via a computer vision system and extracts a sufficient number of images from them. After applying image processing and extracting image features, the system uses an algorithm to recognize the hand signs and gestures. In the recognition process, an Artificial Neural Network (ANN) is trained with specific datasets obtained from the feature extraction of the images. The system then applies instance-based learning with a back-propagated neural network and pattern matching techniques to make the ANN learn and classify the signs and gestures. This recognition approach decreases the error rate, which increases the efficiency of the system.
Cloud computing is a booming service-oriented architecture that supports multiple tenants and users, allowing clients to use its infrastructure on a pay-per-use model. With the increasing popularity of cloud systems, cyber crimes on cloud systems are also on the rise, yet there are no standardized methods for performing digital forensic investigation on the cloud. Our paper focuses on forensic analysis of cloud systems with specific emphasis on the analysis of cloud service logs. One of the major issues with service logs in the cloud is that there is no segregation: events related to all tenants and users are logged together. Hence it is essential to group events relevant to the specific users or tenants who are of interest to the investigation. This paper discusses event correlation techniques to group events logged by tenants of interest. An OpenStack cloud is used for testing and evaluating the solution.
Online product reviews provide data about users' perspectives on the features they have experienced. Product features and the corresponding opinions form a major part of analyzing online product reviews. Extracting features from a huge number of reviews falls into three main categories: language rules, sequence labeling, and topic modeling. Latent Dirichlet Allocation (LDA) is one such topic model, which clusters document words into unsupervised learned topics using Dirichlet priors. In the product review domain, the clustered words are the features and opinion words, but the clusters also contain words that are not features of the product. To identify appropriate product features from these clusters, a hierarchical, domain-independent Feature Ontology Tree (FOT) is applied to the LDA clusters. This improves the accuracy of the features extracted from the LDA topic clusters.
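The LDA step described above can be sketched with scikit-learn as follows. The toy reviews are placeholders, and the FOT filtering stage from the paper is not shown.

```python
# Sketch: extract LDA topic clusters from product reviews.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = ["battery life is long and charging is fast",
           "camera takes sharp photos in low light",
           "battery overheats while charging",
           "photos from the camera look grainy"]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]   # top words per topic
    print(f"topic {k}: {top}")
```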
Feature extraction from time series data, due to its sequential nature and missing values, is essential for any decision process and control. Many researchers have carried out this process in the past using various techniques, but a comparative study and analysis along with its application has been missing. The present work carries out this process for wind speed time series data and establishes the best-suited type of wavelet transform for this data. The study will further help in selecting a feature extraction technique.
Statistical Machine Translation (SMT) is a part of Natural Language Processing that translates text from one language to another. SMT consists of a Language Model (LM), a Translation Model (TM), and a decoder; the decoder uses the LM and TM to generate the translation. The LM computes the probability of target-language sentences, the TM computes the probability of the target sentence given the source sentence, and Moses maximizes the probability of the translated text. An English-Telugu parallel corpus has been used for training the system, and translation quality mainly depends on the quality and quantity of the corpus. A model is proposed for translating the Telugu language into English.
Space-borne sensors are limited in their ability to acquire images with wide swath, high temporal resolution, and high spatial resolution simultaneously. Fine details of the earth's surface features can be captured using high-resolution (HR) remote sensors in remote sensing (RS) data collection. Fusing data from two or more sensors can provide enough information to satisfy the observation needs of a specific application. This work aims to predict an HR image for a corresponding low-resolution (LR) image using an LR-HR image pair. Quality assessment is performed on the original HR and predicted HR images. Simulation results prove the strength of the proposed HR image prediction method.
In this paper, a random search algorithm based on multiple solution vectors is presented. The conventional random search method uses a single solution vector and has difficulties with local optima in higher-dimensional numeric problems; this paper therefore introduces multiple solution vectors into the random search method for solving optimization problems. The proposed algorithm is compared with evolutionary algorithm (EA) and particle swarm optimization (PSO) methods using standard test functions, and the performance results are presented.
Swarm robotics is a field of research inspired by how biological systems coordinate in a distributed and decentralized fashion. This distributed autonomous control mechanism makes swarm robotics a good alternative for mobile surveillance applications. This paper presents a swarm intelligence algorithm to detect intrusion on land under surveillance. The system consists of hundreds of robots governed by a set of decentralized control laws. Social Potential Fields, a popular approach to distributed autonomous control, are used for controlling the motion of every robot in the swarm. Since intrusion is highly dynamic in nature, a parameter optimization approach using online gradient descent is introduced to make intrusion detection and response fast and effective. An artificial simulation environment consisting of guard robots, a castle, and an invader is created to verify the proposed approach. Computer simulation results are presented for illustration, and a comparison is made with the results obtained when the system lacks the ability to adapt to the changing environment.
The cloud provides convenient, on-demand network access to computing resources over the internet. With cloud services, individuals and organizations can easily access remotely located software and hardware such as networks, storage, servers, and applications. The tasks and jobs submitted to the cloud environment need to be executed on time using the available resources so as to achieve proper resource utilization, efficiency, and lower makespan, which in turn requires an efficient task scheduling algorithm for proper task allocation. In this paper, we introduce an Optimized Task Scheduling Algorithm that adapts the advantages of various existing algorithms according to the situation, while considering the distribution and scalability characteristics of cloud resources.
Integrated structural analysis and design software packages, which generally use the finite element method for analysis and design, have been gaining popularity in the field of design, since they reduce a tedious calculation process to the simple act of giving input values. However, the result is generated from the entered values without consideration of feasibility. Moreover, optimization of structures has been a little-used concept in day-to-day practice and is treated independently of the design and analysis of structures. In this paper, an attempt is made to integrate analysis and design with optimization for a 3-bar truss. The objective functions for optimization are weight minimization and minimization of vertical displacement, with the design and analysis requirements included as constraints. Optimization is performed by two different methods, Genetic Algorithm and Particle Swarm Optimization, to compare which algorithm gives better results and is less time consuming. The objective functions, weight and vertical displacement, are evaluated individually as single-objective functions and combined as a multi-objective function; the Pareto-optimal method is used to obtain a non-dominated set in the multi-objective optimization. The inputs are the range of the bar area and the number of generations. Finally, the results of Particle Swarm Optimization and Genetic Algorithm are compared.
The increasing burden on conventional energy sources and environmental problems are motivating the world toward the use of solar energy, which offers advantages such as no greenhouse gas emissions, everlasting solar energy, and low maintenance cost. The performance of a solar system is influenced by partial shading conditions (PSC), which reduce the power from the photovoltaic (PV) system. To extract more power from the solar system under varying weather conditions, maximum power point tracking (MPPT) is essential. PSC lead to multiple peaks in the PV characteristics, which reduce the effectiveness of traditional MPPT techniques. This paper introduces a particle swarm optimization (PSO) approach for extracting the peak power of the PV structure. The recommended technique provides robustness, high efficiency, and reliability toward the maximum power point (MPP). The accuracy of the intended algorithm is validated in MATLAB SIMULINK with a DC-DC boost converter, and the results are compared with the perturb and observe (P&O) approach.
Successful software systems must be developed to evolve, or they die. Although object-oriented software systems are built to last, they degrade as much as any legacy software system. As a consequence, one can identify various reengineering patterns that capture best practice in reverse engineering and reengineering object-oriented legacy systems. Software reengineering basically focuses on re-implementing older systems to improve them or make them more maintainable, and refactoring is one kind of reengineering within an object-oriented context. In this paper, given a set of object-oriented refactoring opportunities, the cost of refactoring is estimated using RCE. The opportunities considered are class misuse, violation of the principle of encapsulation, lack of use of inheritance, misuse of inheritance, and misplaced polymorphism.
The technique of Dynamic Programming (DP) is used to solve a wide variety of combinatorial search and optimization problems. For a subset of these problems, each update to a cell in the DP table is a function of the contents of the previously computed cells belonging to the immediately previous row. We call these problems Horizontally Local Dynamic Programming (HLDP) problems. In this paper, we develop a parallel GPU based framework for solving HLDP problems. We categorize them depending upon the distribution and position of dependencies. We propose suitable techniques for each of these categories. Finally, we consider several case study problems to show how our framework can be used.
Diagnosis of dental diseases is conventionally carried out with the help of radiographic films, but noise and other environmental interferences introduce errors when radiographic films are used. This paper presents an algorithm that uses digital periapical radiographic images to detect enamel caries and interproximal caries, helping dental practitioners identify carious lesions with ease. The study uses MATLAB and performs caries detection in three stages. The first stage is the preprocessing phase, where the image is rotated if necessary and a histogram study is carried out to understand the intensity of the caries. The second phase is image segmentation, where individual tooth areas are separated. The third phase is identification, where caries are detected; if no caries are detected, the tooth is considered healthy.
Cloud computing provides a computing environment where different resources, infrastructures, development platforms, and software are delivered as a service to customers virtually, on a pay-as-you-use basis. In cloud computing, a job request may require m types of resources to complete its tasks, and scheduling cloud resources for end users is an important task. In this paper we propose Multi Stage scheduling in cloud computing to schedule Virtual Machines (VMs) for the jobs received from customers, considering a model in which a job requires m different types of VMs in sequence to complete its task. The model is also extended to deadline-aware Multi Stage scheduling with respect to response time and waiting time. We developed and analyzed a model for evaluating average turnaround time, average waiting time, and deadline violations, comparing First Come First Serve (FCFS), Shortest Job First (SJF), and Multi Stage scheduling strategies.
The extraction of multi-scale pyramid spatio-temporal interest points (STIPs) is widely applied in various vision applications, especially pedestrian detection. The traditional algorithm's computational cost is too high, so a new method based on prediction is proposed in this paper. First, the pyramid layers are divided into several groups on the basis of the original pyramid. Then the relationship between adjacent layers is calculated. Finally, the low-level features of each adjacent layer are obtained from those of the calculated layers. In this way, a lot of time can be saved by this prediction method when extracting the spatio-temporal interest points in the multi-scale pyramid.
An ad hoc wireless network is a type of wireless network with no fixed infrastructure; devices in an ad hoc network can move around within a given range. Currently most transactions are performed through computer networks, so they are more susceptible to many security threats. One of the major DoS attacks that degrades the performance of a whole MANET is the black hole attack: in its presence, malicious nodes do not forward packets but drop them instead. In this work, the black hole attack is detected and eliminated by implementing digital signatures with the Twofish algorithm. We modified the on-demand routing protocol Temporally Ordered Routing Algorithm (TORA) and named it STORA. The proposed STORA performs better than the original TORA both under normal conditions and under black hole attack.
This paper predicts diabetes disease using data mining classification techniques; classification algorithms and tools may reduce the heavy workload on doctors. In this paper, classification algorithms are evaluated for classifying diabetes patient datasets. Classification is one of the main data mining techniques. The classification algorithms examined are the decision tree algorithm, the Bayes algorithm, and a rule-based algorithm. These algorithms are evaluated on their error rates, and patients are identified using an evaluation function that measures the accuracy of the results.
Advances in microarray technology enable researchers in computational biology to apply various bioinformatics paradigms to gene expression data. However, microarray gene expression data, obtained as a matrix, have a much larger number of genes (rows) than samples (columns representing time points or disease states), which makes many bioinformatics tasks difficult. To address this, a subset of genes is selected from the large, noisy dataset so that their features can distinguish between different types of samples (normal/diseased). In this paper, the gene selection process is performed through sample classification using the k-Nearest Neighbor (k-NN) method. Comparing normal and preeclampsia-affected microarray gene expression samples collected from human placentas, we select a set of genes that may be termed critical for preeclampsia, a common complication during pregnancy causing hypertension and proteinuria. Both the normal and diseased datasets contain 25,000 genes (rows) and 75 samples (columns), and we select 30 genes as disease-critical. We apply two meta-heuristic algorithms, Simulated Annealing (SA) and Particle Swarm Optimization (PSO). Sample classification of normal and preeclampsia samples shows high fitness (the number of samples properly classified): out of 150 (75 normal + 75 diseased) samples, 80-90 are properly classified, so the achieved solutions are of good quality. In our experiments, PSO outperformed SA in best fitness, while SA beat PSO in average fitness.
Evolutionary algorithms provide solutions to optimization problems, and their suitability for eye tracking is explored in this paper. We compare the evolutionary methods Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) using deformable template matching for eye tracking, addressing eye tracking challenges such as head movement, eye movement, eye blinking, and zooming that affect the efficiency of the system. GA- and PSO-based eye tracking systems are presented for real-time video sequences. Eye detection is done with Haar-like features; for eye tracking, GAET and PSOET use deformable template matching to find the best solution. The experimental results show that PSOET achieves a tracking accuracy of 98% in less time. The eye predicted by GAET has a high correlation with the actual eye, but its tracking accuracy is only 91%.
The genetic code is the set of rules by which information encoded in genetic material (DNA or RNA sequences) is translated into proteins (amino acid sequences) by living cells. The code maps tri-nucleotide sequences, called codons, to corresponding amino acids. Since there are 20 amino acids and 64 possible tri-nucleotide sequences, more than one of these 64 triplets can code for a single amino acid, which gives rise to the problem of degeneracy. This manuscript explains the underlying logic of the degeneracy of the genetic code from a mathematical point of view using a parameter named "Impression". Classification of protein families is also a long-standing problem in biochemistry and genomics; proteins belonging to a particular class share bio-chemical properties that are of utmost importance for new drug design. Using the same parameter "Impression" and graph-theoretic properties, we also devise a new way of classifying protein families.
Recent advances in remote sensing have made it possible to capture hyperspectral images with more than a hundred bands, covering spectrum beyond the visible range. This increased number of spectral dimensions gives detailed information about objects and hence increases classification accuracy, but it also increases computational complexity. Reducing the number of bands without significantly compromising the information content is therefore a challenge in hyperspectral image classification. This paper addresses a correlation-based approach to band selection, which entails calculating the correlation among the bands of the hyperspectral image and then selecting those bands whose correlation is below a threshold value. The experimental results show that with only a very limited number of bands we can achieve accuracy close to that obtained using all the bands.
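The correlation-threshold selection described above can be sketched in NumPy as follows. The random cube and the threshold value are placeholders for a real hyperspectral image and a tuned parameter.

```python
# Sketch: keep a band only if its correlation with every already-kept band is
# below a threshold.
import numpy as np

def select_bands(cube, threshold=0.95):
    """cube: H x W x B hyperspectral image."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b).astype(float)
    corr = np.abs(np.corrcoef(flat, rowvar=False))   # B x B band-to-band correlations
    selected = []
    for band in range(b):
        if all(corr[band, s] < threshold for s in selected):
            selected.append(band)
    return selected

cube = np.random.rand(64, 64, 100)                   # placeholder data
print("bands kept:", len(select_bands(cube)))
```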
In the present scenario, when all data resides on networks, in the cloud, or in data centers, the security and protection of data are a major concern, and encryption is one of the techniques used for this purpose. Image encryption is applied to protect images from different kinds of attacks. Encryption of color or 3D images is possible through a kind of transposition that displaces the RGB components of the image. This paper presents a method for the encryption and decryption of color images using RGB pixel displacement. In the proposed method, the original plain image is split into its three basic components, the RGB components, and the key image is likewise split into its RGB components. The cipher image is then generated by applying an XOR operation and scrambling the three components. This method is suitable for encrypting 3D images. The algorithm is implemented in the MATLAB environment and tested on various color images.
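A minimal sketch of the XOR-with-a-key-image idea (without the scrambling step) is shown below in Python/NumPy, since the paper's MATLAB code is not available here. File names are placeholders, and the plain and key images are assumed to have the same size.

```python
# Sketch: per-channel XOR encryption and decryption of an RGB image with a key image.
import numpy as np
from skimage import io

plain = io.imread("plain.png")[:, :, :3]        # H x W x 3 (placeholder path)
key   = io.imread("key.png")[:, :, :3]          # same size as the plain image

cipher = np.bitwise_xor(plain, key)             # XOR encrypts each RGB component
io.imsave("cipher.png", cipher.astype(np.uint8))

recovered = np.bitwise_xor(cipher, key)         # XOR with the key again decrypts
assert np.array_equal(recovered, plain)
```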
To address the lack of classification accuracy in pattern recognition, this paper proposes a new algorithm for pattern analysis based on the symbolic pattern method. The proposed algorithm is constructed using a symbolic method and a finite state automaton model and is used to classify textures based on their patterns. The algorithm symbolizes the data and partitions the texture images into two-dimensional data; features are extracted from the symbolic image and the finite automaton transition model. A probabilistic classifier is designed to classify the texture images. The experimental study shows better classification accuracy of the proposed system for texture images over varying numbers of samples.
The work presented in this paper is a detailed analysis of various transforms, such as the Discrete Cosine transform, Singular Value Decomposition, Discrete Hadamard transform, Slant transform, and Discrete Haar transform, applied to a set of biomedical images to achieve image compression. The operations on the images are performed in the transform domain, where the DC coefficients are stored and a truncation operation is performed with a corresponding threshold to achieve the desired PSNR and maintain reconstruction quality. In this paper, the biomedical images are subjected to all the compression schemes mentioned, with the PSNR set to 25 dB and 30 dB. The reconstruction qualities for both settings (25 dB and 30 dB) are tabulated, and a detailed analysis based on the quality of reconstruction throws light on the optimal transform. The metrics used for the analysis are Mean Square Error, Peak Signal to Noise Ratio, Structural Similarity Index, Compression Ratio, Energy Compaction, and Auto-Correlation.
Over the past ten years, remarkable progress has been made in the field of science and technology in terms of speed, throughput, and computation, and technology has given scientists the ability to elucidate information from advanced computing systems. Efforts have been put forth for a new ciphering scheme using common division and a genetic algorithm for the image encryption process. This approach is unique in that it is applicable to any type of file, and it provides data security and feasibility for practical applications. In this paper a binary image is considered for the evaluation process. The results show that the proposed method is a computationally efficient ciphering scheme for image encryption, and the security of the encryption process is further enhanced by using the genetic algorithm.
An exact replica of a high dynamic range image, which embeds a wide range of luminance values, is impossible to achieve on a standard display device (e.g., an LCD monitor) without significant loss of information, as display devices are designed to handle a very low range of intensity values compared to the high dynamic range scene. We propose an algorithm whose output is visually pleasing and preserves the maximum information from the input image. In addition, the proposed method is fast and easy to implement. It uses the local and global histogram information of the image and produces the best tone-mapped image. Results verify the efficiency of the algorithm, and it requires no parameter adjustment, which makes it useful for any image dataset.
In today's society, a major concern is securing information and providing personal privacy. Research over the past decade has shown that biometric authentication is spreading commercially and is widely used for large-scale authentication systems, which has led to immense growth in biometric authentication systems for data protection. This paper is mainly concerned with protecting the biometric templates generated and stored in the system, to prevent fraudulent users from misusing these templates to access the data of legitimate users. We make use of chaotic logistic maps, which produce highly random keys that are highly sensitive to initial conditions, to provide security for iris templates.
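As background, key-stream generation with a chaotic logistic map can be sketched as below. The parameters (r, x0, key length, burn-in) are illustrative choices, not the ones used in the paper.

```python
# Sketch: derive a byte key stream from the logistic map x_{n+1} = r * x_n * (1 - x_n).
import numpy as np

def logistic_keystream(x0=0.3641, r=3.99, length=1024, burn_in=1000):
    x = x0
    for _ in range(burn_in):          # discard transient iterations
        x = r * x * (1 - x)
    out = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1 - x)
        out[i] = int(x * 256) % 256   # quantize the chaotic value to a byte
    return out

key = logistic_keystream()
print(key[:16])   # a tiny change in x0 yields a completely different stream
```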
Reservoirs are porous and permeable rocks that contain commercial deposits of hydrocarbons, and porosity is a very important property of reservoirs. This paper presents a method for determining the porosity of different types of rocks based on image analysis. Before image analysis, stereological research on porosity was conducted with traditional methods, which were time consuming and lacked accuracy. The proposed method determines the porosity by computing the fraction of the whole sample that the pores account for. The steps involved are a series of contextual, non-contextual, and morphological operations commonly used in image processing and analysis. The procedure was tested on thin sections of sandstone and limestone rock samples. The results were computed as total porosity, which includes all types of porosity observed in rocks, including isolated and connected porosity. The porosity obtained can also be called visual porosity, as it is determined from photomicrographs. The values obtained show that the proposed method can lead to satisfying results, and the porosity values can be used further to determine other important reservoir properties such as permeability.
Segmentation of an image provides a meaningful basis for further analysis and is used in medical imaging, remote sensing, and military object detection. Due to poor resolution, segmentation becomes complex, especially if an image is blurred or contains mixed pixels. The active contour technique is popular for obtaining a smooth curve around the boundary of 2D images, but due to weak image gradients this technique can fail to detect object boundaries. In this paper, an entropy-threshold-based method is first used to find the region of interest, and then a contour model is applied to extract efficient external features. Using this method we extract the external features of an image, whereas traditionally only internal features could be extracted. The results reveal that, compared with the traditional segmentation method, the proposed method gives more enhanced and accurate results.
Fingerprint recognition is used in various types of applications for different purposes and is a challenging pattern recognition problem. The fingerprint recognition system is divided into four stages. The first is the acquisition stage, which captures the fingerprint image. The second is the pre-processing stage, which enhances, binarizes, and thins the fingerprint image. The third is the feature extraction stage, which extracts features from the thinned image using minutiae extractor methods to find ridge endings and ridge bifurcations. The fourth is the matching stage (identification, verification), which matches minutiae points using a minutiae matcher method based on similarity and distance measures. The algorithm is tested for accuracy and reliability using fingerprint images from different databases; in this paper the FVC2000 and FVC2002 databases are used, and the FVC2002 database yields better results than FVC2000. The recognition system is evaluated with two factors, FAR and FRR: the FAR is 0.0154 and the FRR is 0.0137, with an accuracy of 98.55%.
Image inpainting is an ancient art. Different factors can cause the deterioration of images, including environmental factors, chemical processing, improper storage, and more; inpainting is the process that restores the deteriorated parts of an image. This paper discusses two major types of inpainting approaches, 2D image inpainting and depth map inpainting. Image inpainting recovers corrupted parts of images by applying structural methods, textural methods, or both simultaneously, whereas depth map inpainting is mainly used to improve the 3D visualization effect by improving the depth map associated with a scene. Some image inpainting approaches can also be applied to depth maps. This paper gives an overview of such approaches as well as those specifically meant for depth maps.
This work extracts hand tracks and hand shape features from continuous sign language videos for gesture classification using a backpropagation neural network. Horn-Schunck optical flow (HSOF) extracts tracking features and Active Contours (AC) extract shape features; a feature matrix characterizes the signs in continuous sign videos. A neural network trained with the backpropagation algorithm classifies the signs into word sequences in digital format. The digital word sequences are translated into text by matching, and the resulting text is converted to voice using the Windows application programming interface (Win-API). Ten signers, each signing sentences 30 words long, test the performance of the algorithm by computing the word matching score (WMS). The WMS varies between 88 and 91 percent when executed on different platforms and processors, such as Windows 8, Windows 8.1, and Windows 10 with Intel i3, running MATLAB 13(a).
Classification of Land Use Land Cover (LULC) data from satellite images is extremely important for designing thematic maps for the analysis of natural resources such as forest, agriculture, water bodies, and urban areas. The process of satellite image classification involves grouping pixel values into significant categories and estimating areas by counting the pixels of each category. Manual classification by visual interpretation is accurate but time consuming and requires field experts. To overcome these difficulties, the present research work investigates efficient and effective automation of satellite image classification. Automated classification approaches are broadly classified into i) supervised classification, ii) unsupervised classification, and iii) object-based classification. This paper presents the classification capabilities of the K-Means, Parallelepiped, and Maximum Likelihood classifiers for multispectral spatial data. Using statistical inference, the classified results are validated with reference data collected from field experts. Among the three, the Maximum Likelihood Classifier (MLC) earns considerable credit in terms of overall accuracy and Kappa factor.
Some image processing algorithms are very costly in terms of operations and time. To use these algorithms in a real-time environment, optimization and vectorization are necessary. In this paper, approaches are proposed to optimize and vectorize an algorithm and to fit it into a small memory footprint. Specifically, an optimized anisotropic-diffusion-based fog removal algorithm is proposed. The fog removal algorithm removes fog from an image and produces an image with better visibility. The algorithm has several phases, including anisotropic diffusion, histogram stretching and smoothing. Anisotropic diffusion is an iterative process that takes nearly 70% of the time complexity of the whole algorithm, so its optimization and vectorization are proposed for better performance. The optimization techniques cost some accuracy, but the loss is negligible compared with the significant improvement in performance. For memory-constrained environments, a method is proposed to process the image block by block while maintaining the integrity of the operations. Results confirm that with the proposed optimization and vectorization approaches, performance increases to approximately 90 fps for a VGA image on an image processing DSP simulator. Even if the system does not have vector operations, the proposed optimization techniques can be used to achieve better performance (about 2x faster).
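To make the iterative step concrete, here is a minimal vectorized Perona-Malik anisotropic diffusion sketch in NumPy; it only illustrates the kind of loop the paper optimizes, and the parameter names and values (kappa, lam, n_iter) are assumptions, not the paper's settings.

    import numpy as np

    def anisotropic_diffusion(img, n_iter=10, kappa=30.0, lam=0.2):
        u = img.astype(np.float64)
        for _ in range(n_iter):
            # Finite differences toward the four neighbours (vectorized with np.roll;
            # borders wrap, which is acceptable for a sketch).
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            # Edge-stopping conduction coefficients.
            cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
            ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
            u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
        return u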
Target tracking is an important requirement on the battlefield. The proposed target classification algorithm is part of a video-based surveillance system, which is very useful for tracking and classifying targets. Target area and tyre detection are the key steps of the algorithm used for target detection. A target classification algorithm based on area and tyre axle spacing was formulated and deployed in an embedded system for field testing, and the comparative experimental results are provided in tabular form. To overcome the disadvantages of traditional methods, the target area and tyres are determined using image processing in MATLAB. The algorithm distinguishes between the major classes, i.e. two wheelers, four wheelers and heavy vehicles (e.g. tanks). Target acquisition videos recorded by a day camera, and by a thermal camera at night, can be used. The percentage error of the proposed method is reported and analysed.
In mobile speech communication, noise interfering with speech at one end degrades the intelligibility of the speech at the other end. In this paper, we focus on suppressing noise produced by vehicular and automobile engines, in whose presence the intelligibility of speech deteriorates when it is transmitted to the other end or recorded. We propose a single-microphone (single-channel) method to detect and reduce high-frequency, non-stationary vehicular engine noise and thereby improve speech quality considerably. Simulation results demonstrate that the proposed method reduces vehicular engine noise better than conventional noise reduction techniques. The parameters of the proposed algorithm are tuned and adapted so that an optimal trade-off is achieved between noise reduction and speech retention, which is also shown experimentally.
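For context, the following is a hedged sketch of a conventional single-channel baseline (spectral subtraction), not the paper's proposed method; it assumes the first few STFT frames contain engine noise only, and the frame length and floor value are illustrative.

    import numpy as np
    from scipy.signal import stft, istft

    def spectral_subtraction(x, fs, noise_frames=10, floor=0.05):
        f, t, X = stft(x, fs=fs, nperseg=512)
        mag, phase = np.abs(X), np.angle(X)
        # Estimate the noise magnitude spectrum from the leading frames.
        noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)
        # Subtract and apply a spectral floor to limit musical noise.
        clean_mag = np.maximum(mag - noise_mag, floor * noise_mag)
        _, y = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=512)
        return y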
Mandibular fracture, also known as fracture of the lower jaw, is a break in the mandibular bone and typically occurs due to facial injury. After clinical observation, doctors suggest a radiograph to view the position of the fracture; an orthopantomogram (OPG) is recommended to view a mandible fracture and the direction of the fracture lines. For an engineer, automating the process of locating the fracture is a challenging task, with the aim of providing useful assistance to medical practitioners. This paper presents an algorithm to detect the location of a fracture in an OPG image; a fracture in the mandible bone of the lower jaw is located using the proposed algorithm. A fracture line in an image is a break in continuous texture, and it is difficult to detect because such breaks are minor changes in the intensities of the grayscale structures in the image. The algorithm presented in this paper uses image processing techniques and texture analysis to detect the fracture location in the mandible bone. A mask is used to find a break in the mandible structure. Detecting such a break is critical, as it requires separating the fracture from other features such as the teeth and gum area present in the OPG image.
In this paper a new method is proposed for automatic segmentation of the Optic Disk (OD) in retinal images. The proposed OD segmentation method is based on the Discrete Wavelet Transform (DWT) and morphological operations, and is evaluated on the publicly available standard databases DIARETDB0 and DIARETDB1. The method attains localization accuracies of 89.88% on DIARETDB0 and 91.53% on DIARETDB1. For OD segmentation, it achieves average sensitivity and specificity of 71.54% and 99.53% on DIARETDB0, and 71.11% and 99.56% on DIARETDB1, respectively. Experimental results show better performance with lower computation time than existing OD segmentation methods.
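The pipeline below is an illustrative sketch only (the paper's exact DWT and morphology steps are not reproduced here): it localizes a bright optic-disk candidate from a wavelet approximation, thresholds it and cleans it up morphologically. The percentile threshold, wavelet choice and structuring-element size are assumptions.

    import numpy as np
    import pywt
    from scipy import ndimage

    def localize_od(green_channel):
        # The low-frequency approximation suppresses vessels and fine texture.
        approx, _ = pywt.dwt2(green_channel.astype(float), 'haar')
        # Keep the brightest pixels as optic-disk candidates.
        mask = approx > np.percentile(approx, 99)
        # Morphological closing and hole filling to obtain a compact region.
        mask = ndimage.binary_closing(mask, structure=np.ones((5, 5)))
        mask = ndimage.binary_fill_holes(mask)
        # Return the centroid of the largest connected component.
        labels, n = ndimage.label(mask)
        if n == 0:
            return None
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        return ndimage.center_of_mass(mask, labels, int(np.argmax(sizes)) + 1)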
This paper presents an approach to help differently abled people operate a digital camera, enabling them to pursue photography and videography as a profession. The approach is to design a standard interface that can be controlled through speech or through customized hand gestures that a differently abled person can make. The speech commands or hand gestures are mapped to camera settings and menu navigation, e.g. setting the exposure triangle, enabling digital filters, adjusting the white balance and selecting the various shooting modes of a digital camera.
Code clones are code fragments that are identical or similar to each other. Simple code clones are easily detected by automated clone detection tools, but these tools report huge numbers of simple clones, and careful observation is required to extract useful information from them. Recurring patterns of simple clones indicate design-level similarities called structural clones, which can be detected from co-located simple clones by applying data mining techniques. An integrated architecture is proposed to detect structural clones from simple clones. Several types of clones fall under the category of higher-level clones; in this paper, the various higher-level clone terminologies are highlighted and a hierarchy of clones is formulated. The proposed integrated architecture combines data mining with simple code clone detection techniques, which helps determine the search path for detecting clones. Three data mining techniques are also discussed: FIM (Frequent Itemset Mining), FCIM (Frequent Closed Itemset Mining) and WFIM (Weighted Frequent Itemset Mining). Among these, WFIM is the most efficient, because it further reduces the number of frequent itemsets of recurring patterns reported by FCIM.
Region of interest (ROI) based image compression is an intelligent technique for medical image storage and classification applications, where a combination of lossless and lossy compression methods is used. Many wavelet-transform-derived techniques have been proposed in this regard; Embedded Zerotree Wavelet (EZW) coding is among them and is simple and efficient. In this paper, MRI medical images are considered for compression and ROI-based image compression is reported. The results show good image quality in terms of several metrics. In addition, a comparison of lossless compression using two different wavelets is presented to analyse the performance of each technique.
In the disk diffusion antibiotic sensitivity test (the Kirby-Bauer test), a thin film of bacteria applied to a plate is subjected to various antibiotics. The zone of inhibition is a circular area around the spot of the antibiotic in which bacterial colonies do not grow, and its size can be used to measure the susceptibility of the bacteria to the antibiotic. The process of measuring the diameter of this zone of inhibition can be automated using image processing. In this work an algorithm is developed, using computer vision, that detects the zones of inhibition of the bacteria. The work demonstrates an effective approach to measuring the zone of inhibition by calculating the radius of the zone, drawing contours and setting an appropriate threshold value. It also determines whether a particular bacterium is susceptible or resistant to the applied antibiotic by comparing the calculated zone of inhibition with the prescribed standard values.
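A hedged OpenCV sketch of the general idea (threshold, contours, enclosing-circle radius) follows; the fixed threshold, minimum radius and mm-per-pixel calibration are assumptions for illustration, not values from the paper.

    import cv2
    import numpy as np

    def inhibition_zone_diameters(gray_plate, mm_per_pixel, min_radius_px=20):
        # Inhibition zones are clearer than the bacterial lawn; a fixed
        # threshold is used here purely for illustration.
        _, binary = cv2.threshold(gray_plate, 100, 255, cv2.THRESH_BINARY_INV)
        # OpenCV 4 signature: returns (contours, hierarchy).
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        diameters = []
        for c in contours:
            (x, y), r = cv2.minEnclosingCircle(c)
            if r >= min_radius_px:
                diameters.append(2.0 * r * mm_per_pixel)
        return diameters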
Manufacturing strategies have been modernised and made autonomous through robotic control and voice control systems. Factors influencing the effectiveness of the present system are analysed for the welding application. Pre-operational setups and the control variables of the welding process are used in planning the motion of the robot. Robots are considerably complicated electromechanical systems with mutual interactions between the robot mechanics and the drives. Parametric studies are performed for the motion of the welding robot. A virtual prototype of a 6-DOF robot is modelled and optimized using SolidWorks, MATLAB and SimMechanics, and voice control technology is implemented to control the robot. This paper presents the kinematic solution in SimMechanics and the modelling of complex mechatronic systems; using a similar robot prototype, it is possible to obtain an optimized controller for the actual robot. Speech recognition is increasingly appearing in standard software applications, and the problem addressed here is the integration of graphical user interfaces with speech processing. A prototype model is developed in this paper for studying the integration of speech dialogue into graphical interfaces designed for programming industrial robot arms. The aim of the prototype is to develop a speech dialogue system for solving simple relocation tasks in a robot work cell using an ABB IRB 1410 industrial robot arm.
In this paper an effective and reliable method for the automated classification of cardiac arrhythmia is proposed. The results are encouraging and produce a confident and efficient arrhythmia classification that is readily applicable in a diagnostic decision support system. The authors employ three classifiers to classify three types of ECG beats, namely Normal (N) beats and two abnormal beats, Right Bundle Branch Block (RBBB) and Premature Ventricular Contraction (PVC). The classifiers used are K-Nearest Neighbour (KNN), the Naive Bayes Classifier (NBC) and the Multi-Class Support Vector Machine (MSVM). The performance of the classifiers is evaluated using five parametric measures, namely Sensitivity (Se), Specificity (Sp), Precision (Pr), Bit Error Rate (BER) and Accuracy (A). The MSVM classifier using Crammer's method proves to be very effective for ECG beat classification.
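As a hedged sketch of the classifier comparison (the ECG feature extraction is not shown; X is assumed to be a beat-feature matrix and y the labels N / RBBB / PVC), a scikit-learn version might look like the following; the 70/30 split and hyperparameters are assumptions.

    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import LinearSVC
    from sklearn.metrics import classification_report

    def compare_classifiers(X, y):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  stratify=y, random_state=0)
        models = {
            "KNN": KNeighborsClassifier(n_neighbors=5),
            "NBC": GaussianNB(),
            # Crammer-Singer multi-class SVM, matching the method named above.
            "MSVM": LinearSVC(multi_class="crammer_singer", max_iter=5000),
        }
        for name, model in models.items():
            model.fit(X_tr, y_tr)
            print(name)
            print(classification_report(y_te, model.predict(X_te)))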
The most preferred method for early diagnosis of breast cancer is mammography. Subjective diagnosis sometimes leads to wrong conclusions, and in some cases normal tissue can be diagnosed as malignant. We propose a method to reduce human error by providing an objective analysis that can aid the subjective analysis and increase efficiency. First, we extract the region of interest marked by a radiologist, perform feature extraction and train a classifier on the features. In the next stage, random data are taken from the database and processed to remove the X-ray annotation and the pectoral muscle. The processed image is divided into small grids, features are extracted from every grid, and each grid is tested for suspicion, either malignant or benign. We tested our algorithm on 36 data sets: 100% efficiency was observed for removal of the X-ray annotation and 94.4% for the pectoral muscle, and the grid-based textural analysis classified the suspicious regions with an efficiency of 91.67%. The efficiency of the grid analysis was verified manually based on prior knowledge of the presence of tumours provided by the radiologist annotations in the MIAS database.
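The sketch below illustrates one plausible way to compute simple grey-level co-occurrence (GLCM) texture features per grid cell with NumPy; the grid size, quantization level and feature set are assumptions, not the paper's actual choices.

    import numpy as np

    def glcm_features(patch, levels=16):
        # Assumes an 8-bit grayscale patch; quantize and build a
        # horizontal-offset co-occurrence matrix.
        q = (patch.astype(float) / 256.0 * levels).astype(int).clip(0, levels - 1)
        glcm = np.zeros((levels, levels))
        np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
        glcm /= max(glcm.sum(), 1)
        i, j = np.indices(glcm.shape)
        contrast = np.sum(glcm * (i - j) ** 2)
        energy = np.sum(glcm ** 2)
        homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))
        return contrast, energy, homogeneity

    def grid_features(image, grid=32):
        h, w = image.shape
        return [glcm_features(image[r:r + grid, c:c + grid])
                for r in range(0, h - grid + 1, grid)
                for c in range(0, w - grid + 1, grid)]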
Segmentation of the pelvic bone from computed tomography (CT) images is a challenging task, as CT images are noisy, have wide inhomogeneities in the intensity distribution, and the anatomical structure of the pelvis is quite complex. In this paper a framework is proposed for automatic segmentation and 3D visualization of the pelvic bone from CT images to assist physicians in making medical decisions. The results are compared with active-contour-based segmentation and are found to be satisfactory. The CT data set used is the PELVIX data set from OSIRIX. The proposed segmentation framework is implemented in MATLAB and the results are reported.
Logo recognition is an important issue in applications such as complex document image processing, advertisement analysis and intelligent transportation. Although there are many approaches to studying logos in these fields, logo recognition remains an essential sub-process. In this paper a robust method for logo recognition is proposed that uses a K-nearest neighbours (KNN) distance classifier and a Support Vector Machine (SVM) classifier to evaluate the similarity between test images and trained images. For the test images, eight versions of each logo image with rotation angles of 0°, 45°, 90°, 135°, 180°, 225°, 270° and 315° are considered. Dual-Tree Complex Wavelet Transform features are used as the feature descriptors, and the final result is obtained by measuring the similarity between the feature vectors of the trained image and the image under test. A total of 31 classes of logo images from different organizations are considered in the experiments. An accuracy of 87.49% is obtained using the KNN classifier and 92.33% using the SVM classifier.
Gesture identification plays a vital role in today's human-computer interaction. In this paper, we propose a sensor-based gesture recognition system that allows a teacher to write in the Telugu language on a digital board from anywhere within the classroom. The classification algorithms k-Nearest Neighbour (KNN), Support Vector Machine (SVM) and Decision Tree are used individually for hand-gesture-based Telugu character recognition, and their performance is compared on 16 different Telugu vowel gestures. Each gesture is collected using an inertial-sensor-based embedded device; the dataset contains 16 gestures, each repeated eleven times by three different people. The gesture identification accuracy is 97.2% for k-Nearest Neighbour, 92.8% for SVM and 86.5% for the Decision Tree.
A complex background is a well-known problem in any vision-based sign language recognition system. This paper presents an Arabic sign language alphabet recognition system that works in complex backgrounds using the Microsoft Kinect. The proposed system passes through three phases. The first phase, signer segmentation, focuses on rapidly isolating the active signer from the background; the system assumes that the active signer is the person closest to the Kinect sensor and isolates this person from any other people or skin-like objects that may exist in the scene. Next, hand segmentation is performed using the RGB-ratio colour model. A Histogram of Oriented Gradients (HOG) is extracted from the image and Principal Component Analysis (PCA) is applied to it, so that HOG-PCA features are used to train a Support Vector Machine (SVM) classifier. The system is able to recognize the 30 Arabic alphabet signs with an accuracy of 99.2%, and the proposed online system can recognize the Arabic alphabet almost correctly within a reasonable response time.
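A hedged sketch of the HOG, PCA and SVM stage only (Kinect-based signer and hand segmentation are assumed to have already produced fixed-size grayscale hand crops); the HOG parameters and PCA dimensionality are assumptions.

    import numpy as np
    from skimage.feature import hog
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import FunctionTransformer
    from sklearn.svm import SVC

    def hog_features(images):
        # images: array of equal-sized grayscale hand crops.
        return np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                             cells_per_block=(2, 2)) for img in images])

    def build_classifier():
        return make_pipeline(
            FunctionTransformer(hog_features),  # HOG descriptor per crop
            PCA(n_components=50),               # HOG-PCA dimensionality reduction
            SVC(kernel="rbf"),                  # SVM classifier
        )

    # Usage (hypothetical data): clf = build_classifier(); clf.fit(train_crops, train_labels)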
This paper describes a method that intentionally downsamples the video sequence as a pre-processing step before compression and applies super-resolution (SR) techniques as a post-processing step on the compressed sequence. This scheme can be used as a bitrate control mechanism to optimize the overall quality of the reconstructed high-resolution (HR) video. The main difference between conventional SR methods and the SR-for-compression methods described here is that, in the latter case, the original sequence is available and hence the performance of the reconstruction can be evaluated. Our key areas of interest are wireless and real-time applications. In this paper we present a method for applying SR concepts and techniques to the design of new video compression codecs, and in this context we propose a novel SR-based video decoding technique. The proposed method downsamples all (not just selected) frames of the input video sequence to produce a low-resolution sequence, which is then encoded using H.264/MPEG-2. On the decoding side, the first low-resolution frame is converted to the original resolution with the proposed SR method, and the other low-resolution frames are super-resolved using a novel motion-interpolation or key-frame-based technique. This method reduces the bit rate compared with H.264/MPEG-2, since H.264/MPEG-2 is applied to spatially reduced video frames.
Extensive analysis has been done on optical character recognition (OCR), and numerous works have been reported for English, Chinese, Devanagari, Malayalam, Arabic and other scripts. Segmentation is an important phase in OCR, and various articles have been published on segmentation methods such as thinning and histogram-based approaches for different scripts over the last few years; however, little work has been done on overlapped and touching characters. This article implements recent methods for overlapped character recognition. Segmentation of overlapped characters is an extremely strenuous task due to the large variety of characters and their shapes and fonts within a script, and it is an important step because inaccurate segmentation causes recognition errors. Normalization, binarization and thinning are used as pre-processing steps in handwritten character recognition. The proposed method uses thresholding and blob analysis for segmentation and detects overlapped regions using the Freeman chain code. Finally, a Support Vector Machine (SVM) is applied to the resulting feature vectors to obtain the classification performance of the character recognition scheme. The overall performance of 93% on line-and-curve sets of overlapped images is better than existing methods. We empirically evaluate the segmentation of overlapped characters with the help of different overlapped images.
Most of the human-computer interaction interfaces designed today require explicit instructions from the user in the form of keyboard taps or mouse clicks. As the complexity of these devices increases, the sheer number of such instructions can easily disrupt, distract and overwhelm users. A novel method to recognize hand gestures for human-computer interaction, using computer vision and image processing techniques, is proposed in this paper. The proposed method can successfully replace devices such as the keyboard or mouse for interacting with a personal computer. The method uses a commercial depth + RGB camera called the Senz3D, which is cheap and easy to buy compared with other depth cameras. The proposed method analyses 3D data in real time and uses a set of classification rules to map the number of convexity defects to gesture classes. This results in real-time performance and removes the need for any training data. The proposed method achieves commendable performance with very low processor utilization.
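The following hedged OpenCV sketch illustrates the rule-based idea: count the deep convexity defects of the hand contour and map that count to a gesture class. The depth threshold and the defect-to-gesture mapping are assumptions, not the paper's rules.

    import cv2
    import numpy as np

    def count_convexity_defects(hand_mask, min_depth=10000):
        contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return 0
        contour = max(contours, key=cv2.contourArea)
        hull = cv2.convexHull(contour, returnPoints=False)
        defects = cv2.convexityDefects(contour, hull)
        if defects is None:
            return 0
        # Keep only deep defects (fingertip valleys); OpenCV stores the depth
        # as a fixed-point value (pixels * 256).
        return int(np.sum(defects[:, 0, 3] > min_depth))

    def classify_gesture(n_defects):
        # Simple illustrative rule: n deep defects ~ n+1 extended fingers.
        return {0: "point/fist", 1: "two fingers", 2: "three fingers",
                3: "four fingers", 4: "open palm"}.get(n_defects, "unknown")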
This paper presents a VLSI implementation of a handwritten digit recognition system based on an analog neural network. The recognition system is based on a least-Hamming-distance neural network that performs both learning and classification. The circuit is simulated using a SPICE tool in 180 nm CMOS technology. The proposed circuit is a modified version of an existing circuit: initially the first winner output voltage is set to logic one and all the others to logic zero, and the circuit also determines the next subsequent winner having the minimum Hamming distance. Such a circuit can be used by a visual tracking system, giving it a backup recognition capability in case the first recognized pattern proves to be incorrect. The design shows a low power consumption of 34 mW.
Scanned documents are increasing day by day as old literature, especially Telugu literature, is preserved digitally. During scanning, considerable noise enters the scanned documents, and in addition geometric distortions and scale changes can occur. In this scenario, general optical character recognition (OCR) systems fail to extract the text, so an efficient mechanism is needed to extract Telugu text. Towards this, an efficient algorithm to search for Telugu text is proposed in this work, which uses Speeded Up Robust Features (SURF) to extract descriptors and a KD-tree algorithm for matching. The presented results demonstrate the robustness of the system under different kinds of noise, partial occlusion by strip lines, and orientation changes such as rotation.
The cognitive learning process of a human being normally begins with sensory inputs, and this process is hampered in persons with one or more perceptual disabilities. Vision being one of the most important senses, the largest share of learning happens through it. The absence of vision in humans is often compensated by tactile or auditory perception during cognitive learning, yet colour identification by a visually impaired person using such sensory substitution still remains a challenge for researchers. This paper discusses an RGB-HSI based mathematical model, supported by experimental studies, for perceiving colours via auditory substitution. The colour-space transformation from {R, G, B} to {H, S, I} is used to establish a mapping from colour to sound using hue values. The experimental results indicate that this model provides auditory perception as an effective substitute for identifying colours by visually impaired learners.
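A hedged numeric sketch of the {R, G, B}-to-hue step and a hue-to-tone mapping follows; the standard HSI hue formula is used, while the audible frequency range (200-2000 Hz) and the linear mapping are assumptions for illustration, not the paper's calibrated mapping.

    import math

    def rgb_to_hue(r, g, b):
        # Standard RGB-to-HSI hue in degrees, inputs in 0-255.
        r, g, b = r / 255.0, g / 255.0, b / 255.0
        num = 0.5 * ((r - g) + (r - b))
        den = math.sqrt((r - g) ** 2 + (r - b) * (g - b)) or 1e-9
        theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        return theta if b <= g else 360.0 - theta

    def hue_to_frequency(hue, f_low=200.0, f_high=2000.0):
        # Map the hue angle (0-360 degrees) linearly onto an audible tone range.
        return f_low + (hue / 360.0) * (f_high - f_low)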
In India, the voting procedure strictly adheres to Electronic Voting Machines (EVMs), known as offline e-voting. EVMs have desirable characteristics such as a simple design, ease of use, reliability and fast access. Unfortunately, EVMs have been criticized following reports of irregularities in elections; these criticisms undermine the confidence of voters, and the Election Commission faces the arduous task of conducting free and fair elections. To address these criticisms, much research has focused on identifying the legitimate voter. Due to the lack of photo clarity on identity cards, or for other reasons such as hardware problems in EVMs or malfunctioning officers, invalid votes are cast. This paper provides a conceptual solution based on multimodal biometrics which helps to enhance security, eradicate fraud and provide high-level authentication by linking with the Aadhaar card database. Higher accuracy can be achieved by fusing face and fingerprint recognition compared with the present EVM system.
In WSNs there is always scope for improvement in efficient energy utilization and network lifetime. Efficient energy utilization is an important aspect of WSNs because energy consumption directly affects the overall performance of the network. A number of protocols have been designed to enhance network lifetime and prevent the batteries from draining early. Most of these protocols assume that all sensor nodes remain in the same place after deployment and that the energy levels of the nodes are similar. In this paper we introduce a multihop protocol with some movable, rechargeable nodes on the boundaries of the network. The results of the proposed scheme show better performance compared with the basic SEP protocol.
Many online businesses expose their technological products and service offerings as web services. To offer the desired functionality, a number of simple web services often need to be combined to create a composite web service. Traditionally, services were implemented as SOAP-based services, but RESTful services are now becoming more popular, so there is a large pool of services available to users for inclusion in a composite service. BPEL (Business Process Execution Language) has been used for the creation of complex services according to a workflow. At present, the BPEL standard does not support the inclusion of RESTful web services in a business process, which prevents the inclusion of many useful services in complex business use cases where many web services are required to achieve the desired functionality. In this paper, a method called RESO Co is proposed that enables a conventional BPEL engine to consume both SOAP-based and RESTful services in a composition. The presented method thus helps to increase the number of services available for composition while preserving the lightweight nature of RESTful services.
The network mobility standard released by the Internet Engineering Task Force (IETF) allows the deployment of TCP/IP networks onboard a vehicle and maintains continuous network connectivity to the Internet through a mobile router. It enables passengers travelling on public transport to avail themselves of efficient mobile computing on the move. Ensuring network performance is a challenging task due to the vulnerable nature of the wireless link and network mobility. In this paper, a performance analysis of onboard networks is conducted with different transport protocols, and it is shown that a high bit error rate and disruptions degrade network throughput. The performance of the protocols was obtained from ns-2 simulations designed to test the protocols in non-congested environments, and was measured in terms of throughput.
Due to its on-demand, ubiquitous and shared-resource facilities, cloud computing attracts more and more users to its services. Cloud services are provided through the Internet, so there is a possibility of attacks over the Internet; user-to-root, denial of service (DoS) and port scanning attacks are examples of such network intrusions. There is a need for a machine-learning-based model with a high detection rate and a minimal false alarm rate. For this purpose, an intrusion detection model using the AdaBoost algorithm is proposed. Decision stumps are used as the weak classifiers in the algorithm, and a strong classifier is built by combining the weak classifiers.
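A hedged scikit-learn sketch of AdaBoost with decision stumps (depth-1 trees) as weak learners; dataset loading and feature preprocessing are assumed to have produced numeric features X and normal/attack labels y, and the train/test split and number of estimators are assumptions.

    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    def train_intrusion_detector(X, y):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  stratify=y, random_state=0)
        # Each weak classifier is a decision stump; boosting combines them
        # into a single strong classifier. (Older scikit-learn versions name
        # the parameter base_estimator instead of estimator.)
        clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                                 n_estimators=100)
        clf.fit(X_tr, y_tr)
        print(classification_report(y_te, clf.predict(X_te)))
        return clf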
This research paper proposes a hybrid combination of semi-fragile Singular Value Decomposition (SVD) and Binocular Notification Difference (BND) for a stereoscopic image watermarking scheme. The proposed model consists of two basic segments. In the initial stage, intrinsic algebraic properties are used to generate a content-dependent watermark through the Mersenne Twister algorithm. Once the watermark content is generated, the BND scheme directs the watermark embedding process; BND captures the trade-off between the total watermark content and visual clarity. The approach is further enriched by a dual authentication method based on a chaotic map to ensure security and credibility. The proposed hybrid model is compared with other models in the experimental results and analysis, and it ensures optimal performance in terms of tamper detection and authentication.
Digital information is a precious asset in the modern period, as everything is offered in digital form. A prime concern for digital data is to guard it from leaking to an eavesdropper, and in this regard many cryptosystems have been explored to prevent data disclosure. In this paper, we explore a new cryptosystem that is implemented in three stages on plaintext blocks. Our method applies simple operations on the blocks based on keys that are constructed randomly. Our approach achieves three goals: first, it creates a high-defence cryptosystem; second, it offers low-complexity encryption and decryption; finally, all actions are randomized, which avoids repetitiveness and makes it hard for an assailant to break the system.
Digital watermarking is the act of hiding data within a digital image in order to resolve issues related to authentication and copyright. Reversible watermarking is suitable for sensitive applications such as medical imaging, as it can recover the original image after extraction of the watermark. Many reversible watermarking schemes have been proposed, but very few hardware implementations are available; furthermore, they are all synchronous, clock-based designs in which the clock consumes more than 40% of the total power. As power-efficient designs are in demand, an asynchronous, clock-less VLSI architecture for the hardware implementation of the reversible image watermarking method of Dragoi et al. is proposed in this paper. Results show that, by incorporating asynchronous pipelining, the power consumption is reduced considerably.
A network intrusion detection system (IDS) plays a major role in any security architecture. Various IDSs have been developed to detect the intrusions that occur in the real world; the most commonly used network security tool is the Snort IDS. Snort is a rule-based system that generates alerts for matching network patterns, but many of the rules stored in the Snort database fail to generate alerts for real network traffic, so it is necessary to create rules that detect attacks efficiently. In this paper we make an attempt to generate rules autonomously using an evolutionary approach. The rules produced were tested on DARPA 1999, ISCX 2012 and ICMP network packets and were able to detect attacks with a high detection rate.
Due to the limited computational resources and communication bandwidth in wireless communication systems, certificateless public key cryptography came into the light and reduced a significant amount of data load on the network. Certificateless cryptographic schemes not only eliminate the certificate management complexities of traditional Public Key Cryptography (PKC) but also resolve the key escrow problem, which is a major drawback of identity-based PKC. This paper proposes a new certificateless signature scheme with a formal security proof using pairings. It improves computational and communication efficiency along with security against the known adversaries.
Security is one of the main challenges in Mobile Ad Hoc Networks (MANETs). The natural characteristics of MANETs make these networks highly vulnerable to many attacks, from the physical layer up to the application layer. There are various algorithms and protocols to deal with these threats, and they all share a common element: the use of encryption. Among the cryptographic systems found in the literature, identity-based schemes seem to be best adapted to the MANET paradigm; their main advantages are low computational cost and reduced overhead. This work presents an evaluation of a principal identity-based cryptographic scheme for MANETs, Identity Key Management (IKM). The evaluation considers the renewal and revocation of keys and the presence of false accusation attacks. The results show that IKM is vulnerable to this attack and that partitioning of the network may lead it to an unstable state.
Computer networks are becoming more and more important because of the variety of services provided through them, and the threats to these networks are growing at the same pace. Traditional approaches to network security, such as packet filtering, IDS and the more advanced IPS, suffer from various problems: they are not aware of the resources they are protecting, they are independent of the context of their application, they work the same way in every kind of environment, they do not adapt at run time to a changing environment (network configuration and changing scenarios), and they do not take a holistic view of the security situation. The concept of network security situational awareness has been proposed to tackle network security in a holistic manner. For such a holistic view, the biggest challenge for the computing community is efficient interoperability among heterogeneous data sources. Ontologies have proved to play an important role in resolving semantic heterogeneity by providing a formalization of knowledge in a particular domain: an ontology represents the concepts of interest in a domain, and the relations among them, in a machine-processable format. Separate ontologies may be developed for related domains and then integrated; individual ontologies allow the ontology for one information source to be developed independently, making it possible to add new information sources and remove obsolete ones. This feature makes ontologies suitable for areas where the configuration of a system is continuously changing, as in a computer network. In this paper we implement OntoSecure, a semantic-web-based tool for network security status prediction, and demonstrate its effectiveness on various test cases.
Quantum cryptography offers a cryptographic solution that is imperishable, as it guarantees prime secrecy when applied to quantum key distribution. It is a prominent technology in which two entities can communicate securely based on the principles of quantum physics. In classical cryptography, bits are used to encode information, whereas quantum cryptography uses photons or other quantum particles and encodes information in their quantized properties, such as a photon's polarization; the corresponding unit of information is the qubit. The transmissions are secure because security rests on the inviolable laws of quantum mechanics. The emphasis of this paper is on the rise of quantum cryptography, its elements, quantum key distribution protocols and quantum networks.
In the present era of cloud computing and its security issues, identifying and restricting malicious or unknown users is critical, as they tend to hack or attack cloud tenants' data. The proposed work protects data pertaining to the seller, retailer and carrier from attackers by classifying the various cloud users. The Bilinear Computational Diffie-Hellman (BCDH) and Bilinear Decisional Diffie-Hellman (BDDH) assumptions are used to implement the analytical cloud model. The classification is performed based on the processing time at the Virtual Machine (VM) and Physical Machine (PM), using threshold values calculated from the lifetime of the arriving job in the Instance Allocation Pool Access Tree.
Peer-to-peer and distributed systems are generally susceptible to Sybil attacks. Online social networks (OSNs), due to their vast user base and open-access nature, are also prone to such attacks. Current state-of-the-art algorithms for Sybil attack detection make use of the social graph created among the registered users of an OSN service and rely on the inherent trust relationships among these users; no effort is made to combine other characteristic behaviour of Sybil users with the properties of the OSN social graph to improve detection accuracy. Sybil identities are also used as gateways for spreading spam content in OSNs, and the proposed approach exploits this behaviour to improve the detection accuracy of existing Sybil detection algorithms. In the proposed approach, the content generated or published by each user is used along with the topological properties of the social graph of registered users. A machine learning model assigns a fractional value called the "trust value", which denotes the amount of legitimate content generated by the user, and a modification to the Sybil detection algorithm is proposed that uses this trust value to improve the accuracy of detecting Sybil identities. A real dataset crawled from Facebook is used for analysis and experiments, and analytical results show the superiority of the proposed solution. Compared with SybilGuard and SybilShield, the results show an approximately 14% decrease in false positive rates with very minimal effect on the acceptance rate (false negative rate) of the Sybil detection algorithms. Moreover, the proposed modification does not affect the performance of existing Sybil detection algorithms and can be implemented in a distributed manner.
Trust is a soft security solution for ad hoc wireless networks such as Mobile Ad Hoc Networks (MANETs) and Wireless Sensor Networks (WSNs). Due to the openness of these networks, a malicious node can easily join and disrupt the network. Trust models are used to discriminate between legitimate and malicious nodes in the network. Because the characteristics of MANETs and WSNs differ, there are also some significant differences in their trust mechanisms. In this paper we briefly explain some of the trust models in MANETs and WSNs, highlight important issues to be considered when designing a new trust model, and discuss the major differences between trust models for MANETs and WSNs.
Wireless Sensor Networks are energy-constrained and therefore require energy-efficient protocols to maximize the network lifetime. Since many sensors have closely related readings, correlated data gathering has emerged as an efficient approach for achieving energy conservation. The objective of this work is to study the existing literature on correlated data gathering in WSNs and to propose an improvement. Many existing approaches are based on the traditional layered architecture, which optimizes the functions of individual layers independently; this strict layering results in unnecessary overhead in resource-scarce sensor networks. We therefore focus on adaptive cross-layer design that jointly optimizes the activities of the various layers. Existing cross-layer solutions optimize the activities of the routing and MAC layers only, whereas RMC is an energy-aware cross-layer protocol that also considers the clustering activity, in addition to the routing and MAC layer functions, for joint optimization. This work studies the performance of the RMC protocol in terms of energy consumption and network lifetime. We propose an enhancement to the basic RMC protocol, named Enhanced-RMC (E-RMC), and show that it improves the network lifetime significantly. A thorough simulation study is carried out using the Avrora simulator on the TinyOS platform.
Wireless sensor networks consist of low-power, lightweight sensor nodes that are often placed remotely for applications such as wildlife monitoring, rainforest monitoring, military surveillance and forest fire detection. Energy is a critical issue here, as nodes cannot be recharged or replaced frequently. In this paper we propose an energy-efficient hierarchical routing protocol for wireless sensor networks to increase network lifetime. The cluster head is selected based on residual energy, the distance of the node from the base station, how recently the node was selected as cluster head, and similar factors. In addition, the proposed solution can detect malicious nodes in the network and prevent them from becoming cluster heads. Simulation results show that the proposed solution performs well in terms of extending network lifetime and selecting cluster heads uniformly, which prevents a single node from draining its power through repeated selection as cluster head.
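The sketch below captures the cluster-head eligibility idea described above in code form; the weights, the scoring formula and the Node fields are illustrative assumptions rather than the paper's exact criteria.

    from dataclasses import dataclass

    @dataclass
    class Node:
        node_id: int
        residual_energy: float      # joules remaining
        distance_to_bs: float       # metres to base station
        rounds_since_ch: int        # rounds since last served as cluster head
        is_malicious: bool = False  # flagged by the misbehaviour detector

    def cluster_head_score(n, w_energy=0.5, w_dist=0.3, w_recency=0.2):
        if n.is_malicious:
            return float("-inf")    # malicious nodes are never eligible
        return (w_energy * n.residual_energy
                - w_dist * n.distance_to_bs
                + w_recency * n.rounds_since_ch)

    def elect_cluster_head(nodes):
        return max(nodes, key=cluster_head_score)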
The popularity of Online Social Networking sites (OSNs) is increasing rapidly. However, an information overload problem troubles many users, as thousands of groups are created each day. Cohesion is one of the widely adopted approaches in social networking sites to ensure better accuracy in recommendation systems. However, implementing recommender systems publishes social network data in one way or another, so preserving privacy when publishing social network data becomes an important concern: with some local knowledge about individuals in a social network, an adversary can attack easily. Existing work on identity privacy protection in social networks assumes that all user entities are private and ignores the fact that there is a considerable number of entities whose identity is public. In this paper we propose a novel approach to group recommendation that preserves the identity of a user from unauthorized attackers, based on the concept of k-anonymity.
The Internet of Things is a network of interconnected objects embedded with computational power and digital communications. The objects have an addressable and locatable counterpart on the Internet, can open a communication channel with any other entity at any time and in any place, and generate data that many users consider private. Hence, one of the requirements of ubiquitous applications is privacy preservation. After collecting the data, event filtering and complex event processing are needed to process simple events in the data; this is achieved through data aggregation. In this paper we first discuss the challenges in the security and privacy of the Internet of Things and provide a comprehensive review of existing data aggregation techniques as applied to wireless sensor networks. We then propose an integrated approach for data aggregation in the Internet of Things environment. Our approach takes into account the computational limitations and communication constraints associated with the Internet of Things network while also incorporating security features, with the aim of designing a full-fledged secure data aggregation technique.
Wireless sensor networks are envisaged for various purposes in all fields of technology. Random deployment poses a fundamental problem in achieving coverage and/or connectivity in sensor networks. A Path-Trace-Back protocol is proposed to efficiently preserve data for a specific geographic coverage area in a network with mobile nodes. Coverage can become more difficult in isolated parts of the area where only a low density of sensor nodes exists. To tackle this problem, the path-trace-back scheme first allows a mobile node to periodically report, back to a fixed location, data about the trail coverage area through which packets from the source region travel, and then uses the reported trail to enhance the revisited path. Based on optimality conditions, we devise distance-based and trail-based approaches for an explicit area using the Path-Trace-Back protocol, which shows an improvement of 80 percent over the distance-based and Max Propagation protocols. The obtained results achieve higher performance when the coverage region is sparsely connected.
Wireless networks (WNs) have surged over the last few years, capturing the entire networking domain as new devices such as tablets and smartphones emerge in the market. A key idea of wireless networking is self-configuration, which can be achieved through neighbour discovery. This paper proposes a neighbour discovery method that consumes a minimal amount of device energy, making it energy efficient. The method finds the 2-hop neighbours of each node in the network and calculates the shortest path from source to destination, updating this information as and when a node changes its position.
The basic objective of a Wireless Sensor Network is to extend the network lifetime by efficiently using the energy available at every node in the system. In this paper, a distance-based Multi/Single-hop Low Energy Adaptive Clustering Hierarchy (MS-LEACH) protocol for wireless sensor networks is introduced to improve the serviceable area of the network, which ultimately increases the survivability of the system through efficient energy usage. Based on the distance between the cluster head (CH) and the sink node, the protocol chooses either single-hop or multi-hop communication. Simulation results show that MS-LEACH outperforms the existing LEACH protocol.
Forests are one of the most valuable and indispensable natural resources. Forest fire is a natural disaster that destroys forests not only in India but also in countries such as Australia and the USA, where it is called bush fire and wildfire respectively. When deploying Wireless Sensor Networks for forest fire monitoring and detection, there is a need to investigate which routing protocol fits best. Many routing protocols are available for Wireless Sensor Networks, classified as flat, data-centric, hierarchical, location-based and so on. In the forest fire scenario, the wireless sensors depend on battery power and have no other power source, so the energy efficiency of the routing protocol deployed for continuous monitoring and analysis must be examined. Accordingly, we compare the DSDV, LEACH and APTEEN routing protocols for forest fire monitoring. Further, the APTEEN routing protocol is simulated using ns-2 under forest fire scenarios for high, moderate and low fire-prone zones, and the results are compared in terms of packet delivery and zone-wise energy consumption.
A Mobile Ad Hoc Network is a dynamic environment in which, due to frequently moving wireless nodes, communication failures occur because of network partitioning and node failures, the latter exhibiting different temporary or long-lasting faulty behaviours arising from hardware or software glitches. As the mobile nodes are mostly resource-constrained, packet forwarding through faulty nodes can lead to further complications; hence fault tolerance plays a major role in designing a robust mobile ad hoc network. Due to the presence of faulty nodes, routing performance degrades, and the cause of the faults has to be identified so that routing can exploit network redundancies. In our previous work [13], we devised a Genetic Algorithm (GA) based Energy-Efficient QoS Routing (GAEEQR) protocol. As an extension of that work, we devise a Fault Tolerant QoS Routing protocol, which has the capability to send data over an alternative route when a route break occurs. This protocol gives better results than GAEEQR in terms of delay, packet delivery ratio, throughput and energy consumption.
Mobile sinks have proven effective at aiding data transfer across groups of static networks in recent research. A major problem is the design of mobility paradigms for the collector node (mobile sink) for different WSN applications; most existing methods generate mobile sink routes merely to reduce the energy expenditure in the network. In this paper, we design and implement a strategy to regulate load while also improving the energy efficiency of the network. Each node has a threshold load level beyond which, if there is further inflow of data, the node automatically sends an MC_REQ message requesting aid from the mobile collector (MC). Based on the requests received from all the nodes, the MC estimates the most suitable position to move to among a set of static WSN nodes deployed to monitor an agricultural area. Simulations show the efficiency of the proposed MCER protocol.
The lack of infrastructure in mobile ad hoc networks has opened up many research opportunities in relaying data to a destination. The mobility of nodes in such an environment poses many challenges due to link failures. Different routing protocols consider specific aspects of such a network, notably energy consumption, bandwidth and geographic location, for efficiently routing packets; when security is considered in such protocols, they concentrate mainly on the effects of different types of attacks. In this paper a novel approach for securely routing packets is proposed to support the volatile nature of the network. With the membership history count as its foundation, the routing decision is made according to the number of registrations a node has made with the source node, and a private trusted path is established to route packets securely. The proposed protocol operates safely to ensure data security and end-to-end connectivity. The simulation results reveal that the proposed algorithm provides a significant improvement in terms of delivery ratio and level of security.
A Passive Optical Network (PON) is an optical network architecture that offers higher data rates than conventional copper-based access networks. At the same time, bandwidth requirements are increasing rapidly day by day. To address this, the bandwidth of a PON can be increased by deploying Wavelength Division Multiplexing (WDM) so that multiple wavelengths are supported in the upstream and downstream directions; such PONs are termed Wavelength Division Multiplexed Passive Optical Networks (WDM-PONs). We propose a static traffic grooming algorithm, based on the light-trail approach, to minimize the bandwidth blocking probability of hybrid optical-wireless networks. The proposed algorithm is simulated on the hybrid network and its performance is compared with an existing algorithm. The simulation results demonstrate that the proposed algorithm performs better than the existing algorithm in terms of bandwidth blocking probability.
In the past, it was difficult to monitor the health of a patient suffering from a disease or physiological disorder. Nowadays, patients are monitored continuously through wireless networks. In ICUs, nurses or other caretakers may not be available for constant monitoring, and as a result a patient's health can reach a critical condition. The device described here is developed to avoid such situations by continuously monitoring patient health over a wireless network. The goal of the system is to monitor a patient's blood pressure, heart rate, body temperature and body position. All of these health data are read continuously by an ARM Cortex-M3 processor, which is connected to the different sensors; the sensor values are read continuously and displayed on an LCD and on a remote PC. If the patient's sensor values change to an abnormal level, the readings are sent as a message to the doctor's mobile phone via GSM. This helps to monitor the patient continuously, anywhere and at any time.
The term "big spectrum data" refers to big data in the wireless domain, containing information for radio environment awareness. Radio environment awareness is essential for regulating the use of radio frequencies for social benefit, and it is an important step towards the efficient and effective utilization of the spectrum, known as spectrum management. Near-real-time big spectrum data analysis is crucial for effective spectrum management. In this paper, we present a system architecture for big spectrum data analysis in Dynamic Spectrum Access (DSA) enabled Long Term Evolution Advanced (LTE-A) networks. The proposed architecture is based on the open-source Elasticsearch, Logstash and Kibana (ELK) stack. The contributions of the paper also include an experimental setup to validate the proposed architecture, involving the generation of data sets for DSA-enabled LTE-A networks, the setup of the ELK stack for spectrum analysis of LTE-A log data, and sample visualizations of the spectral data analysis.
The aim of this paper is to discuss and analyse recent solutions based on coordinated Inter-Cell Interference (ICI) techniques proposed by researchers. The pressing need for improved spectral density and user data throughput demands dense deployment of base stations and simultaneous resource allocation in the same geographical area; this approach eases the problem of spectrum scarcity but increases ICI drastically. Newer generations of wireless networks must support real-time applications such as video streaming, interactive online gaming, on-demand data streaming and many more, and supporting these applications requires wide bandwidth. The 4th generation (4G) based on Long Term Evolution (LTE), the advanced LTE releases (Rel 12 and 13, LTE-A) and the proposed 5th generation (5G) wireless networks focus on the key parameter of higher system spectral efficiency to support simultaneous users with high peak bit rates. ICI is the result of dense channel assignment in a multi-layered network and causes drastic degradation in channel throughput. Being the most urgent need, enhanced ICI coordination (eICIC) methods encourage researchers to meet new technical challenges. This paper emphasises frequency-time coordination-based ICI avoidance methods for downlink transmission, i.e. cross-layer interference between femtocell base stations (HeNBs) and the macrocell base station (MeNB). We also present other interference coordination methods to convey the multifaceted views of researchers. At the end of the paper, a performance parameter matrix for ICIC methods is described.
Elections cast an ennobling influence on the minds of people and form a prominent feature of our country, India, in which every individual participates vigilantly and the populace remains the sovereign power. The world has been experimenting with diverse technologies to conduct controversy-free elections and to fulfil the elementary needs of the people; it must be ensured that no elector is disenfranchised from their voting rights. Our main objective is to design a model that provides equitable elections. In this project, the baseline information stored in the RFID tag provides local reference data that moderate the intensive computational prerequisites; it also provides an easy match-on-card feature and meets the constraint that only eligible voters can cast their vote. Moreover, an added tier of security is enforced through biometrics. A noticeable decline in voter turnout, especially among the youth, has been observed. To foster participation among all demographic groups, our design considers multiple dimensions, addresses all security requirements, and provides an inexpensive solution based on RFID, GSM and a fingerprint module in the quest for election legitimacy.
Acquisition is the important front-end operation of GPS receiver signal processing. It estimates the Doppler shift of the carrier frequency and the delay of the PRN code of the GPS signal and passes them on for signal tracking. Tracking refines the Doppler shift and code-phase delay, removes the carrier and PRN code from the received GPS signal, and recovers the navigation data. The performance of tracking depends directly on the performance of the acquisition algorithm, and acquisition performance is a compromise between computation time and Doppler frequency resolution. This paper therefore presents fast acquisition of the Doppler shift and code phase from the GPS Intermediate Frequency (IF) signal using radix-4 and radix-2 FFT algorithms. The performance of these acquisition algorithms is compared with time-domain correlation and with acquisition using a radix-2 FFT on decimated, zero-padded data. Computation time and correlation magnitude are taken as the metrics to evaluate the performance of the algorithms.
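A hedged NumPy sketch of FFT-based (parallel code phase search) acquisition follows: for each candidate Doppler bin, the incoming IF samples are mixed down and circularly correlated with the local PRN replica via the FFT. The sampling rate, IF value, Doppler bin list and prn_replica are assumptions for illustration.

    import numpy as np

    def acquire(if_samples, prn_replica, fs, f_if, doppler_bins):
        n = len(prn_replica)
        t = np.arange(n) / fs
        code_fft_conj = np.conj(np.fft.fft(prn_replica))
        best = (0.0, None, None)  # (peak magnitude, Doppler, code phase)
        for fd in doppler_bins:
            # Wipe off the carrier plus the candidate Doppler shift.
            carrier = np.exp(-2j * np.pi * (f_if + fd) * t)
            x = if_samples[:n] * carrier
            # Circular correlation over all code phases in one FFT/IFFT pass.
            corr = np.abs(np.fft.ifft(np.fft.fft(x) * code_fft_conj))
            k = int(np.argmax(corr))
            if corr[k] > best[0]:
                best = (corr[k], fd, k)
        return best  # peak, Doppler estimate (Hz), code phase (samples)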
As dependency on the Internet increases, service providers are launching numerous web applications to facilitate users. Due to the threat of unauthorized access, they want to identify their users accurately; likewise, the leakage of sensitive information makes clients careful to verify whom they are dealing with. This leads to the requirement of a Centralized Authentication System (CAS). The involvement of a CAS introduces single sign-on capability at the client side: once an individual proves his or her identity to the CAS, that identity is asserted to all the services that trust the CAS. Several existing solutions, such as Kerberos and SAML, address this problem. This paper introduces a single sign-on protocol that combines the protocol architecture of Kerberos with the functionality of SAML; having identified deficits in the SAML architecture, we try to improve it by adopting a stronger protocol architecture. As a proof of concept, we perform a formal analysis of the proposed security protocol using tools such as CASPER and SCYTHER.
Intrusion behavior and detection analysis rely particularly on the type of data. Most of the datasets used in intrusion analysis are heterogeneous and imbalanced. In these datasets, the feature values vary widely both within and between features, and this variation strongly influences decision making, especially in supervised learning. To analyze the intrusion problem, the support vector machine uses distance calculations, but the dataset contains both numerical and nominal values. Here, we have replaced the nominal data points with appropriate numerical values and balanced the dataset. This method enhances the detection rate of the anomaly detection system.
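A minimal sketch of the general preprocessing idea described here, assuming a KDD-style record layout: nominal columns are mapped to numeric values and the classes are rebalanced before training an SVM. The file name, column names and the downsampling strategy are illustrative assumptions, not the paper's exact replacement scheme.

```python
# Hedged sketch: encode nominal intrusion features numerically and balance
# the classes before training an SVM. Column names are illustrative only.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("intrusion_records.csv")          # hypothetical dataset
nominal_cols = ["protocol_type", "service", "flag"]

# Replace nominal values with one-hot numeric columns.
df = pd.get_dummies(df, columns=nominal_cols)

# Crude balancing: downsample every class to the minority-class size.
counts = df["label"].value_counts()
df = (df.groupby("label", group_keys=False)
        .apply(lambda g: g.sample(counts.min(), random_state=0)))

X = StandardScaler().fit_transform(df.drop(columns=["label"]))
clf = SVC(kernel="rbf").fit(X, df["label"])
```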
With recent technological advances, RFID systems are efficient for finding the position of, or identifying, an object in an environment where direct human access is difficult or impossible. In this paper, we introduce 2-sensing covered wireless networks of anchor-free RFID readers, in which each point of the application field falls within the sensing zone of at least two readers. We show that in a 2-sensing covered network every target (RFID tag) can be located with at most two possible positions, which is useful in real applications. We derive a characterization to identify a 2-sensing covered network and provide an efficient algorithm to recognize such networks. We also adapt the algorithm to track the tags in a distributed environment.
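The "at most two possible positions" claim rests on a simple geometric fact: under distance-only ranging, a tag sensed by two readers lies at an intersection point of two circles, and two circles intersect in at most two points. A small sketch of that computation follows; the reader coordinates and ranges are illustrative.

```python
# Sketch: with range estimates from two readers, the tag lies at one of the
# (at most two) intersection points of two circles -- the geometric fact
# behind the "at most two possible positions" claim.
import math

def circle_intersections(p0, r0, p1, r1):
    (x0, y0), (x1, y1) = p0, p1
    d = math.hypot(x1 - x0, y1 - y0)
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        return []                                  # no usable intersection
    a = (r0**2 - r1**2 + d**2) / (2 * d)
    h = math.sqrt(max(r0**2 - a**2, 0.0))
    xm, ym = x0 + a * (x1 - x0) / d, y0 + a * (y1 - y0) / d
    if h == 0:
        return [(xm, ym)]                          # tangent circles: one point
    dx, dy = h * (y1 - y0) / d, h * (x1 - x0) / d
    return [(xm + dx, ym - dy), (xm - dx, ym + dy)]

# Example: readers at (0,0) and (4,0) both ranging the tag at ~2.5 m.
# print(circle_intersections((0, 0), 2.5, (4, 0), 2.5))
```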
Sub-pixel motion estimation and compensation is a high-precision algorithm with very high complexity in HEVC/MPEG-H/H.265 video codecs. This is because moving objects do not move by integer pixel locations between successive video frames. Typically, fractional-pixel accuracy is obtained by means of bilinear interpolation, producing a spatially blurred predicted signal. In this paper, motion estimation and compensation is improved by means of a very effective spatial digital low-pass FIR filter, which allows motion to be detected at very high precision using fractional motion estimation. Fractional-pixel accuracy was achieved using a total of 112 8-tap digital FIR filters for one-eighth-pixel precision, which includes half- and quarter-pixel accuracy. The design has been implemented on a 28nm foundry process at a speed of 1.101 GHz, achieving 2262 GOPS and outputting data at a rate of 1.8 terabits per second for one-eighth-pixel accuracy. Computational complexity, memory and I/O bandwidth have been reduced by inputting the mean-square-error map of the pixels to the fractional-pixel estimator and then searching in the sub-pixel grid. The design is targeted for 8K Ultra High Definition Television (UHDTV). The 8K HDTV format is 8192 x 4320 pixels, which amounts to 35.4 million pixels per frame. The incoming video pixel rate for 8K HDTV at 60 frames per second (fps) is 2.12 billion pixels per second, or 2.12 gigapixels/second, giving an incoming video data rate of 51 billion bits per second, or 51 Gbps. At 120 frames per second the incoming video pixel rate is 4.24 billion pixels per second, or 4.24 gigapixels/second, giving an incoming video data rate of 102 Gbps. For quarter-pixel motion estimation, three sub-pixels are added for every integer pixel in each dimension, so the pixel count increases by a factor of 16. For 4K HDTV this becomes 142 million pixels per frame. At 60 fps, the pixel rate is 8.5 billion pixels p...
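A small worked check of the pixel-rate arithmetic quoted in the abstract. The 24 bits/pixel figure is my assumption; it is the value that reproduces the stated 51 Gbps data rate from 2.12 gigapixels/second.

```python
# Worked check of the pixel-rate figures above; 24 bits/pixel is an assumption
# that reproduces the stated 51 Gbps data rate.
def rates(width, height, fps, bits_per_pixel=24, subpel_factor=1):
    pixels_per_frame = width * height * subpel_factor
    pixels_per_sec = pixels_per_frame * fps
    gbps = pixels_per_sec * bits_per_pixel / 1e9
    return pixels_per_frame, pixels_per_sec, gbps

print(rates(8192, 4320, 60))     # ~35.4 Mpixels/frame, ~2.12 Gpixels/s, ~51 Gbps
print(rates(8192, 4320, 120))    # ~4.24 Gpixels/s, ~102 Gbps
print(rates(4096, 2160, 60, subpel_factor=16))   # ~142 Mpixels/frame, ~8.5 Gpixels/s
```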
Electronic transaction security is becoming more and more important in all aspects of Automated Teller Machines (ATMs). As the number of persons using ATMs grows, ATMs are becoming more desirable targets for attacks. These attacks pose security risks in the form of card cloning, PIN disclosure, etc. It has been observed that secure electronic transactions have become a top priority to avoid ATM fraud. In this research article, the tools and techniques of ATM fraud are examined, and a secure-layer electronic transaction mechanism for ATMs is developed to prevent ATM fraud. Through this mechanism, cardholder identification, authentication, authorization and security clearances are strengthened. Against shoulder surfing (a fraud technique), two technical security tools are proposed to enhance ATM transaction security.
Cybersecurity has become a top concern and an important topic for public and private companies, financial institutions, regulators and law enforcement due to ever-increasing cyber risks. However, the governance of cybersecurity remains a challenge: current approaches to cybersecurity governance in enterprises fail to consider its multidisciplinary nature and complexity. In this paper, we explore the governance aspects of cybersecurity in enterprises, which are multidisciplinary in nature, and propose a cybernetic model for continuous and good governance of cybersecurity that leverages the multidisciplinary collaboration, conversation, goal-directedness and dynamic feedback control aspects of cybernetics. The implementation of the cybernetic model in a large telecom enterprise improved its systems and processes of cybersecurity governance.
In this modern and fast-moving world, human safety and security have become important issues. In the past few years, crimes against school-going children have grown rapidly. In this paper, a prototype Children Location Monitoring System (CLMS) is implemented using Global Positioning System (GPS) and Global System for Mobile communications (GSM) technologies. The system is built on an ARM7 LPC2148 microcontroller board and uses a commercial GPS receiver to compute the position of the child continuously. The child's position information is periodically sent through GSM to the parent's smartphone as a Google Maps link. A school monitoring database developed in Visual Basic 6.0 is used to monitor the child's location from the school. Sample experimental results obtained from the CLMS are presented. This system can help parents and school authorities monitor children when they leave the school or go missing.
Wireless Sensor Networks (WSNs) have several inherent limitations. In several applications of WSNs, each point in the field is required to be sensed by some sensor; a network of this type is called a sensing-covered network. In random deployment, avoiding holes (regions with no point within the sensing range of any node) is very difficult. We develop an efficient distributed technique to detect the boundary nodes of a hole, and we also propose an efficient distributed technique to detect a hole, if one exists, in the region.
Brain-Computer Interface (BCI) systems allow individuals with neuromuscular impairments to communicate. Multi-frequency stimuli based on steady-state visually evoked potentials (SSVEP) elicit practical, continuous brain responses. Most studies used a unique stimulus frequency per target, so increasing the number of targets requires a large number of stimulus frequencies, yet the human brain shows good evoked responses only in a limited range of stimulus frequencies. This study presents a new technique that uses low/high duty cycles of the visual flicker frequency band to evoke user responses. Flicker frequencies between 2 and 35 Hz produce transient visual evoked potentials that overlap, resulting in a steady-state response. The proposed design uses ordinary light-emitting diodes (LEDs) driven by flickering sequences consisting of rhythmic stimulus cycles with fixed durations in between, and a comfortable flicker is adopted in a phase-tagged trigger (PTT). A questionnaire survey accompanies offline analyses of SSVEPs induced by flickers of different frequencies and duty cycles. The amplitude of the spectral characteristics obtained with the fast Fourier transform (FFT) is used to identify the strongest SSVEP response, and the resulting brain-wave dynamics demonstrate distinct differences between frequencies and their effects on brain activity.
Recommender systems have become extremely popular and widely applied in recent years, and much work has been done on developing recommender systems for social networks. However, the recommendation algorithm itself remains a challenging problem in practice. In this paper, we address the problem of recommending both friends and products simultaneously in a social network. Recommendation systems have been widely researched in social networks with methods such as Collaborative Filtering, yet most methods recommend only relationships or only products. To address this problem, we propose a User-Item (UI) bipartite graph with a Topic Model, which simultaneously incorporates relationship and interest information to model the complex relations among users and products. We then apply a Random Walk on the UI bipartite graph to measure the relevance between users and products, as well as the relevance among users. Finally, we evaluate our approach on the CiteULike and last.fm datasets. Experiments show the effectiveness of our approach, and comparison with other methods on the two datasets indicates that our approach performs better.
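For intuition, here is a minimal sketch of the random-walk relevance idea on a user-item bipartite graph: a random walk with restart from a query user scores both items (for product recommendation) and other users (for friend recommendation). The edge weights, restart probability and tiny example graph are illustrative assumptions, not the paper's topic-model weighting.

```python
# Hedged sketch of random walk with restart on a User-Item bipartite graph.
import numpy as np

def random_walk_with_restart(adj, start, alpha=0.15, iters=100):
    """adj: (n x n) symmetric adjacency over users+items; start: node index."""
    col_sums = adj.sum(axis=0, keepdims=True)
    P = np.divide(adj, col_sums, out=np.zeros_like(adj), where=col_sums > 0)
    restart = np.zeros(adj.shape[0])
    restart[start] = 1.0
    p = restart.copy()
    for _ in range(iters):
        p = (1 - alpha) * P @ p + alpha * restart
    return p                                   # relevance of every node to `start`

# Users 0-2 and items 3-5; edge weights could come from topic/interest similarity.
A = np.zeros((6, 6))
for u, i, w in [(0, 3, 1.0), (0, 4, 0.5), (1, 4, 1.0), (2, 5, 1.0), (1, 5, 0.3)]:
    A[u, i] = A[i, u] = w
print(random_walk_with_restart(A, start=0))    # scores usable for friend/item ranking
```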
Web-based applications have gained universal acceptance in every sector of life, including social, commercial, government, and academic communities. Even with the recent emergence of cloud technology, most cloud applications are accessed and controlled through web interfaces. Web security has therefore continued to be fundamentally important and extremely challenging. One major security issue of web applications is SQL-injection attacks. Most existing solutions for detecting these attacks use log analysis and employ either pattern matching or machine learning methods. Pattern matching methods can be effective and dynamic; however, they cannot detect new kinds of attacks. Supervised machine learning methods can detect new attacks, yet they rely on an off-line training phase. This work proposes a multi-stage log analysis architecture that combines both pattern matching and supervised machine learning methods. It uses logs generated by the application during attacks to detect attacks effectively and to help prevent future attacks. The architecture is described in detail, and a proof-of-concept prototype is implemented and hosted on Amazon AWS, using Kibana for pattern matching and a Bayes Net for machine learning. It is evaluated on 10,000 logs for detecting SQL-injection attacks. Experimental results show that the two-stage system combines the advantages of both approaches and substantially improves detection accuracy. The proposed work is significant in advancing web security, and the multi-stage log analysis concept is highly applicable to many intrusion detection applications.
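A toy sketch of the two-stage idea, assuming log lines as plain strings: a cheap signature stage catches known SQL-injection patterns (the role Kibana plays in the prototype), and a supervised classifier handles unseen variants. Naive Bayes stands in here for the paper's Bayes Net, and the signatures and training lines are illustrative.

```python
# Hedged sketch: signature matching first, supervised classifier second.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

SIGNATURES = [r"union\s+select", r"or\s+1\s*=\s*1", r"';--", r"sleep\(\d+\)"]

def stage1_signature(log_line):
    return any(re.search(p, log_line, re.IGNORECASE) for p in SIGNATURES)

# Stage 2: classifier trained offline on labelled log lines (illustrative data).
train_logs = ["GET /item?id=5", "GET /item?id=5%20OR%201=1", "POST /login user=bob"]
train_labels = [0, 1, 0]
stage2 = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
                       MultinomialNB()).fit(train_logs, train_labels)

def is_attack(log_line):
    if stage1_signature(log_line):               # cheap, known patterns
        return True
    return bool(stage2.predict([log_line])[0])   # new or obfuscated variants
```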
A real-time system should operate correctly within deadlines: a failure to respond in time can lead to loss of human life or major damage to property. Such systems are sometimes only mission critical, with the mission itself being very expensive. A mission-critical system requires real-time software that is highly complex and vital to the success of the mission; the mission capability depends on this software, and any failure proves catastrophic. The software design of real-time systems is therefore very complex, and to keep pace with today's modern computing technology, real-time software should adopt new software design methodologies. This paper presents a comparison of various design methodologies that have evolved over time, the shortcomings of each method in satisfying the requirements of real-time systems, and the need for a generalized approach in the form of design patterns.
Cubic spline interpolation is frequently used to analyze data sets in various areas of engineering and science. For a large set of data points spanning a very large range, interpolation using a traditional sequential algorithm is very difficult. In this paper, we propose a more systematic approach built around a parallel component known as a skeleton, which can be implemented in various parallel paradigms such as OpenMP, MPI, and CUDA. The skeleton approach is combined with a pipelining technique, which gives better results than previous studies. This approach is applied to compute the cubic spline interpolating polynomial on a large data set. Experimental results using the parallel skeleton technique on multi-core CPUs and GPUs show better performance than other parallel methods.
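A minimal sketch of the data-parallel skeleton idea in Python, as a stand-in for the OpenMP/MPI/CUDA skeletons the paper discusses: fit the spline once, then evaluate it over a large query range in parallel chunks. The synthetic data, chunk count and use of a multiprocessing pool are illustrative assumptions.

```python
# Hedged software analogue of the parallel skeleton: split the evaluation
# range into chunks and let workers evaluate the fitted spline concurrently.
import numpy as np
from multiprocessing import Pool
from scipy.interpolate import CubicSpline

x = np.linspace(0, 100, 10_000)            # large data set (illustrative)
y = np.sin(x) + 0.01 * x
spline = CubicSpline(x, y)

def evaluate(chunk):
    return spline(chunk)                   # each worker evaluates one chunk

if __name__ == "__main__":
    queries = np.linspace(0, 100, 1_000_000)
    with Pool(processes=4) as pool:
        parts = pool.map(evaluate, np.array_split(queries, 4))
    result = np.concatenate(parts)
```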
Traditionally, Cognitive Radio Networks (CRNs) have been designed to support real-time secondary users (SUs), such as VoIP. Hidden Markov Model (HMM) based prediction, which uses sensing information about primary user (PU) arrivals prior to SU transmission, can provide non-real-time channel access (e.g., for Wireless Body Area Networks) to cognitive users even in the presence of high PU activity. However, the HMM is generally computationally intensive, causing high prediction time. In this paper, an advanced HMM predictor is designed on a reconfigurable FPGA platform that takes advantage of parallel processing and a pipelined architecture. The strength of this work is the design architecture of the HMM predictor, named the H2M2 engine, which is a Hardware Co-simulation (HW-CoSim) based design taking a novel edge-trigger based input. The H2M2 engine runs autonomously with minimal interaction with the embedded processor of the cognitive radio and without any setup or hold time violations. The predictor is 75% more time-efficient than a normal HMM predictor, making the whole cognitive radio transmission system energy efficient.
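To make the prediction step concrete, here is a minimal software sketch of HMM-based channel-occupancy prediction (the computation the FPGA engine accelerates, not its hardware design): the forward algorithm turns a window of sensing outcomes into a state belief, and one transition step predicts the next slot. The transition and emission probabilities are illustrative assumptions.

```python
# Hedged sketch: one-step-ahead PU occupancy prediction with a 2-state HMM.
import numpy as np

A = np.array([[0.9, 0.1],     # transitions: idle->idle/busy, busy->idle/busy
              [0.3, 0.7]])
B = np.array([[0.95, 0.05],   # emissions: P(observe idle/busy | true state)
              [0.10, 0.90]])
pi = np.array([0.5, 0.5])

def predict_next_busy(observations):
    """observations: recent sensing outcomes, 0 = idle, 1 = PU busy."""
    alpha = pi * B[:, observations[0]]
    for o in observations[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()              # normalize to keep a proper belief
    next_belief = alpha @ A               # one-step-ahead state distribution
    return next_belief[1]                 # probability the next slot is busy

print(predict_next_busy([0, 0, 1, 1, 0]))
```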
As apps on smartphones become more complex, they tend to take more time to execute and consume more power. This paper describes a system (APPS) that supports both class-level code offloading and thread migration of Android applications to a remote server connected through Wi-Fi or 3G, in order to reduce execution time and power consumption. APPS performs better than previous approaches due to: 1) invocation of separate threads in the server to handle multiple clients, 2) grass-roots-level binary serialization used for thread migration, making the system faster, more dynamic and robust, 3) transfer of execution state from server to client to resume computation on the client on the fly in case of deteriorating network health, and 4) drastic reduction of the average state transfer size between server and client. APPS monitors dynamic network conditions to automatically offload classes or threads at runtime, as guided by a decision-maker module. The system was tested on the classical N-Queens problem and achieved an improvement of one order of magnitude in performance and three orders of magnitude in power consumption for N=14 compared with execution on the smartphone alone.
Predistortion is a well-known method used in the linearization of high-power amplifiers. In this paper, a steganographic approach is introduced into digital predistortion. The proposed stego predistorter offers two advantages over a conventional LUT predistorter: (1) reduction of the bit error rate, and (2) transfer of secret data between sender and receiver. The work has been implemented in an OFDM environment using MATLAB.
Carry Value Transformation (CVT) is one of the modified structures of Integral Value Transformations (IVTs) and falls under the category of discrete dynamical systems. Earlier, in [5], it was proved that the addition of two non-negative integers equals the sum of their CVT and XOR values, and that this holds in any base of the number system. In the present study, this phenomenon is extended to perform CVT and XOR operations on many non-negative integers in any base. To achieve this, the definitions of both CVT and XOR are extended from two integers to a set of multiple integers, and some important properties of these operations are studied. With the help of cellular automata, the adder circuit designed in [14] using the CVT-XOR recurrence formula is used to design a parallel adder circuit for multiple numbers in the binary number system.
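For the binary case, the two-operand identity a + b = CVT(a, b) + XOR(a, b) can be checked directly: XOR gives the bitwise sum without carries and CVT is the shifted carry word. The sketch below iterates that recurrence to sum several non-negative integers; it illustrates the identity only, not the paper's cellular-automata adder circuit.

```python
# Sketch of the two-operand CVT-XOR identity and its iterated use for
# summing several non-negative integers in binary.
def cvt(a, b):
    return (a & b) << 1        # carries, shifted into their weighted positions

def xor(a, b):
    return a ^ b               # bitwise sum without carries

def add_many(numbers):
    """Fold the list through the CVT-XOR recurrence until all carries vanish."""
    total = 0
    for n in numbers:
        a, b = total, n
        while b:               # repeat until the carry word becomes zero
            a, b = xor(a, b), cvt(a, b)
        total = a
    return total

assert add_many([13, 7, 21, 4]) == 13 + 7 + 21 + 4
```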
In this paper, an architecture is proposed to calculate the histogram of an image. It is faster than previous serial methods: the architecture achieves parallelism and gives better performance, at the cost of requiring sufficient resources. If resources are not an issue, this is one of the best methods for histogram calculation on an FPGA (Field Programmable Gate Array). Variants that use the same architecture with fewer resources, at the cost of some reduction in speed, are also proposed.
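As a software analogue of the parallel-histogram structure (not the paper's FPGA design), one can split the image into stripes, build partial histograms concurrently, and merge them; this mirrors the resource-versus-speed trade-off of replicating bin-update units. The image size and worker count below are illustrative.

```python
# Hedged analogue: partial histograms computed in parallel, then merged.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def partial_hist(stripe, bins=256):
    return np.bincount(stripe.ravel(), minlength=bins)

def parallel_histogram(image, workers=4, bins=256):
    stripes = np.array_split(image, workers, axis=0)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(partial_hist, stripes)
    return sum(parts)                      # element-wise merge of partial histograms

img = np.random.randint(0, 256, size=(1080, 1920), dtype=np.uint8)
assert np.array_equal(parallel_histogram(img), np.bincount(img.ravel(), minlength=256))
```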
The performance of multimodal biometric systems usually improves as the number of modalities increases. A multimodal biometric system is designed to integrate extensively used, reliable physiological traits, namely face, fingerprint and palmprint, at the feature level. The accuracy of the proposed multimodal system is enhanced by incorporating a novel and efficient optimal content matcher based on the Monge property and the North-West Corner transportation method. This optimal matcher is highly significant for large-scale biometric identification systems in terms of maximizing performance, reducing response time with fewer iterations, and being cost effective. Fusion at the feature level is chosen to consolidate the evidence of face, fingerprint and palmprint, in contrast to the matching or decision levels, because the feature vector contains richer data about the biometric origins. The experiments are carried out with real datasets of physiological traits acquired from engineering graduates. The results show that the optimal content matcher combined with feature-level fusion achieves higher recognition accuracy than various multimodal systems employing other fusion levels and matchers for identification.
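For readers unfamiliar with the transportation method mentioned above, here is a sketch of the North-West Corner rule in isolation: it builds an initial feasible allocation for a transportation problem by filling cells from the top-left corner. How the paper derives supplies and demands from feature-level match scores is not reproduced here; the numbers are purely illustrative.

```python
# Hedged sketch of the North-West Corner rule with illustrative data.
def north_west_corner(supply, demand):
    supply, demand = supply[:], demand[:]
    i = j = 0
    allocation = [[0] * len(demand) for _ in supply]
    while i < len(supply) and j < len(demand):
        q = min(supply[i], demand[j])          # allocate as much as possible at (i, j)
        allocation[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            i += 1                             # row exhausted: move down
        else:
            j += 1                             # column exhausted: move right
    return allocation

print(north_west_corner([20, 30, 25], [10, 35, 30]))
```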
The performance of a neural network as a classifier depends on several factors, such as the initialization of weights, its architecture, between-class imbalance in the dataset, and the activation function. Though a three-layered neural network is able to approximate any non-linear function, the number of neurons in the hidden layer plays a significant role in the performance of the classifier. In this study, the importance of the number of hidden-layer neurons is analyzed for the classification of ECG signals. Five different arrhythmias and the normal beat are classified for different numbers of hidden-layer neurons to examine the performance of the classifier; the best number was found to be 35. After training the neural network with the optimized number of hidden-layer neurons, we tested its performance on three different datasets. The average sensitivity, specificity and accuracy achieved are 94.91%, 99.69% and 99.46%, respectively.
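A minimal sketch of the hidden-layer-size search the abstract describes: train one-hidden-layer networks of increasing width and keep the width with the best cross-validated accuracy. The candidate range, scoring scheme and the assumption that beat features are already extracted are all illustrative, not the paper's exact protocol.

```python
# Hedged sketch: sweep hidden-layer widths for a one-hidden-layer classifier.
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def best_hidden_size(X, y, candidates=range(5, 61, 5)):
    scores = {}
    for n in candidates:
        clf = MLPClassifier(hidden_layer_sizes=(n,), max_iter=500, random_state=0)
        scores[n] = cross_val_score(clf, X, y, cv=5).mean()
    return max(scores, key=scores.get), scores

# X: beat feature vectors, y: one of six beat classes (five arrhythmias + normal)
# n_best, all_scores = best_hidden_size(X, y)
```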
The Electroencephalogram (EEG) is a recording of the electrical activity of the brain from the scalp; the recorded waveforms reflect cortical electrical activity. EEG activity is quite small, measured in microvolts (μV), and EEG is a non-invasive method of recording the electrical activity of the brain. It is most commonly used to identify the type and location of abnormal activity in the brain during a seizure, and it is also helpful in diagnosing sleep disorders, encephalitis and brain death. The main design aim of this system is to observe the behavior of, and analyze, the electrical activity of the brain. The EEG signals are acquired by signal conditioning designed exclusively for this purpose. In order to analyze the behavior of the brain, its functioning must be studied, and this is achieved by analyzing its electrical activity. Brain signals are very low-voltage signals, normally ranging from 0.5 to 100 μV in amplitude; although the spectrum is continuous, the signal frequencies range from 0.65 Hz to 65 Hz. The results and waveforms are compared with threshold values to determine disorders in the brain of the subject. The system has an amplification of 5000, which can be varied up to 10,000, and the resultant output has been implemented using LabVIEW. A Discrete Wavelet Transform (DWT) decomposition and reconstruction program has been developed in LabVIEW, and output results are acquired through the computer as well as a cathode-ray oscilloscope (CRO) using offline data.
Modern growth in electronic systems, communication systems, and information technology assists in designing canal automation systems. Canal irrigation is a widely used source of water for irrigation. A microcontroller-based system is very flexible for any modification required at the site, is cheaper than a PLC, and can interface with different modules easily. The system involves an intelligent microcontroller-based Remote Terminal Unit (RTU) which can communicate with different sensors, communication modems, memory, ADCs and other modules. In this paper we propose a microcontroller-based design of a flow control system for gates in canal automation. The flow control system consists of several sub-systems: the RTU, a solar power system, a level measurement system, a flow measurement system, a gate actuator system, and a communication system. The paper focuses mainly on the flow control activities of distributaries, laterals and Direct Pipe Outlets (DPOs). The Remote Terminal Unit monitors upstream level, downstream level, downstream flow, power status, gate opening, gate health and security. All system components are designed to operate on solar power with battery backup. The conventional operational system has several drawbacks and inaccuracies; the proposed system helps to improve irrigation operational efficiency, power use, measurement accuracy, water distribution and reaction to imbalance, and controls the flow at the gate location continuously. It also helps to reduce water wastage and labour dependency.
In recent years, the usage of computer-based courses all over the globe has increased drastically in different domains of engineering education and its application development. There are more openings for fresh engineers with computer expertise, and manpower with these skills is in high demand in industry all over the world. Designing a computer-based teaching methodology for various courses in engineering education is certainly a challenging task and should meet the requirements of engineering education standards. Today, exploring different ideas and innovating new techniques are mandatory for a fresh engineering graduate, and computer-based active-learning teaching methodology is essential in the area of electronics and computer-based engineering education at the undergraduate level. This paper describes the procedure to develop a Computer Based Teaching Methodology (CBTM), successfully implemented for courses such as Linear Digital Integrated Circuit Applications, Microprocessors & Microcontrollers, and Embedded Systems for engineering graduates. It presents the procedure to implement CBTM for different engineering courses and shows how it is useful for developing applications and projects in the area of electronics and computer engineering education. The paper also discusses the effectiveness of CBTM, with results and case-study courses, for better attainment of Course Outcomes (COs) and Program Outcomes (POs) in Outcome-Based Education (OBE).
Outcome-Based Education (OBE) focuses on the abilities acquired by learners. It assists the progress of learners by creating abilities, shaping qualities, attitudes and determination, and expanding expertise. Quality education that empowers society can be achieved through Information and Communication Technologies (ICT) in the teaching-learning process to accomplish the agreed learners' level of ability. The rapid increase in student enrolment, the knowledge explosion, advances in ICTs, globalisation, economic restructuring, and financial constraints have all contributed to reforms in higher education. ICT as an artifact facilitates information gathering and exchange and makes learning easier and more productive. It also offers students tools for knowledge construction, reflection, sharing and collaboration in project work, while connecting them to communities, networks and resources of their own interest. With globalization, there is a great need for higher education to provide a platform for gradual integration of its degrees with the best available in the world. The implementation of ICT in OBE can effectively accomplish the goals of quality education, a process that reduces the consumption of resources and increases learning outcomes. Quality can be ensured only if institutions can face competition to attract talented students, provide choices and innovate subject combinations. This paper demonstrates the role of ICT in restructuring higher education around an outcome-based model and its effective usage to improve the quality and accessibility of education.
Engineering education has a major impact on a country's development in facets such as productivity, services and skilled personnel. It is the engineering institutes' responsibility to provide an interactive environment for the all-round development of students. Students who join engineering education differ in their attitudes, background knowledge, intellectual levels and learning approaches. To improve students' learning capabilities, and in turn make them more employable, it is necessary to consider the individual characteristics of students when choosing the type of instruction and teaching methods. To develop the creative and critical thinking skills of students, it is necessary to have them participate in social and working environments, and to support students with different learning approaches, the curriculum has to be reviewed and redesigned. In this paper we review four different learning models and propose a new integrated learning model that combines five learning approaches.
Bloom's Taxonomy describes the classification of learning into various domains. The "cognitive domain" separates the learning process into different levels based on a person's ability to think, which is very helpful in detecting whether a given question is memory-based or application-based. The primary aim of this paper is to demonstrate the use of Bloom's Taxonomy to grade a given paragraph and the use of prediction models over that grading. The paper describes a technique that uses a student's past marks and the question paper's contents to classify the question paper to a particular level using the taxonomic principles of the cognitive domain, and applies linear regression to predict the total marks that the student may score. It also describes various techniques in which Bloom's Taxonomy can be applied and analyses the accuracy of each of those techniques.
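One plausible instantiation of the pipeline described above, offered only as a hedged sketch: map question verbs to cognitive levels with a keyword table, summarize a paper by its average level, and regress expected total marks on past marks plus that level. The keyword table, default level and feature choice are illustrative assumptions, not the paper's grading rules.

```python
# Hedged sketch: keyword-based Bloom levelling plus linear-regression prediction.
import numpy as np
from sklearn.linear_model import LinearRegression

BLOOM_KEYWORDS = {
    1: ["define", "list", "state", "recall"],         # remember
    2: ["explain", "summarize", "describe"],           # understand
    3: ["apply", "solve", "compute", "implement"],     # apply
    4: ["analyze", "compare", "differentiate"],        # analyze
}

def question_level(question):
    q = question.lower()
    hits = [lvl for lvl, words in BLOOM_KEYWORDS.items() if any(w in q for w in words)]
    return max(hits) if hits else 2                    # default to "understand"

def paper_level(questions):
    return np.mean([question_level(q) for q in questions])

# past_marks: per-paper marks history, past_levels: levels of those papers,
# totals: totals actually scored (all hypothetical training data).
# model = LinearRegression().fit(np.column_stack([past_marks, past_levels]), totals)
# predicted = model.predict([[latest_marks, paper_level(new_paper_questions)]])
```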
In the wake of the refurbishment of the basic teaching-learning paradigm owing to the renaissance of online media, we report empirical investigations relating to curriculum framing. We argue that, no matter how well an instructional model is designed, unless equal consideration is paid to the curriculum, the overall transformational effect will be void. Based on a survey of the scholarly literature, we show that few studies have been undertaken on using the curriculum as a change agent in higher education. We put forth a new model of curriculum framing that takes student academic performance metrics as input and produces a meaningful student-centric curriculum as output. Grounded in the conception of the Choice Based Credit System (CBCS) and the prerequisites for choice, our results show an association between student performance across various subjects as one of the vital parameters for curriculum framing. We apply the designed framework to a real case study of a postgraduate program in Computer Science.
Cloud technology has established itself as a scientifically, commercially and industrially important technology worldwide on the basis of its huge storage capacity and scalability. Online examinations have gained nationwide popularity and demand among students, guardians and faculty at all academic levels, but they have a serious drawback: after an examinee logs into an online examination system, there is a strong possibility of accidental loss of the internet connection. In this situation, without an internet connection, the examinee cannot submit the e-answer paper and the online examination is cancelled. In this paper, we propose an advanced online examination system, developed on a cloud platform, which successfully overcomes this drawback. While appearing for an examination through this system, even if the internet connection is accidentally lost, the examinee will still be able to continue answering the paper fully and finally submit the e-answer sheet, even in the absence of an internet connection.
This paper focuses on resolving an issue that occurs in a large IT industry where thousands of associates work in different locations of the organization. Recruitment happens at the respective locations, and project requirements belonging to a particular technology are similar across locations. Hence, making the associates competent enough to deliver quality projects depends on the training provided to junior resources. To conduct technical training for the associates' project readiness, we need good hands-on training material and sufficient faculty willing to travel to the respective locations to review the developed code and guide the trainees. We have good hands-on material, but the issue lies with faculty availability, which imposes a huge cost on the organization in terms of (a) travel arrangements, (b) deputation allowance, (c) learning-department time spent finding enough faculty from different projects, and (d) impact on the faculty's current project delivery due to their absence. Hence, our online evaluation and guidance solution allows faculty to review code and provide guidance from their own work locations and at times convenient to them.
Inquiry-based inductive learning is one of the best methodologies, especially for engineering students who are expected to solve real-world problems. However, it is very difficult to standardize a particular learning methodology for an institution whose students have diverse attitudes, characteristics, languages, financial backgrounds and cultural scenarios, and variable educational standards. As the teaching methodology depends on the content and time as well as learners' varied styles of learning, the same learner may prefer inductive teaching for one topic and a deductive approach for another. This paper presents a sample study undertaken for a particular institution, in a particular environment, for a particular batch and set of students. The sample data are collected from a classroom by distributing a questionnaire, attempted by two different batches of students, with questions pertaining to inquiry-based and deductive learning. The system is developed and tested twice after teaching the content using the inductive method, and is implemented using attribute relevance and discriminant rules of class-discrimination mining. The results are visualized through bar charts and show that the two batches of learners from different years have different learning characteristics.