Technical Papers

  • A membership function (MF) is a curve that defines how each point in the input space is mapped to a membership value (or degree of membership) between 0 and 1. The input space is sometimes referred to as the universe of discourse. This paper further develops the fuzzy-based algorithm to add the feature of automatic membership function generation to the fuzzy logic module of the algorithm. In this context, a short review of related work on membership function generation is given, and the rules associated with it have been incorporated. Going one step further in the fuzzy logic-based design, a fitness-finding method is proposed. This paper also evaluates the proposed algorithm for deriving membership functions based on association rules using control parameters, along with its implementation. The algorithm is applied to a case study of share market data, and the results are analyzed and compared with the intuitive cases. A worked illustration of the membership-function concept appears below.
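
    As a concrete illustration of the opening definition, here is a minimal sketch of a triangular membership function, one common MF shape; the parameter values and the share-price example are illustrative assumptions, not the functions generated by the paper's algorithm.

    ```python
    # A minimal sketch of a triangular membership function, a common MF shape;
    # the parameters a, b, c (left foot, peak, right foot) are illustrative choices.
    def triangular_mf(x: float, a: float, b: float, c: float) -> float:
        """Map a point x in the universe of discourse to a membership degree in [0, 1]."""
        if x <= a or x >= c:
            return 0.0
        if x < b:
            return (x - a) / (b - a)   # rising edge
        if x == b:
            return 1.0
        return (c - x) / (c - b)       # falling edge

    # Example: degree to which a share price of 105 belongs to the fuzzy set "around 100"
    print(triangular_mf(105, a=90, b=100, c=120))  # -> 0.75
    ```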

  • This paper provides the details of the Quality Assurance practices and techniques to be followed by the QA professional (also called SQA, Software Quality Assurance) in the continuous delivery mode of software development. QA professionals are responsible for process definition, audits, training, and other assurance activities in the project. The paper provides a QA model named the 'ACID-QA' model, which comprises key practices that can be used by the QA professional in the continuous delivery mode of software development. The objective of the 'ACID-QA' model is to provide a working model for the SQA that can be used during the planning, requirement, design, coding, testing, continuous integration, audit, and release activities of the project. The paper provides an overview of each of the practice areas of the model in the later sections. The model was implemented in a Big Data Hadoop File System and MapReduce project, and it was found that the number of product quality issues found by SQA professionals improved by 100%. The audit findings are detailed further in the paper.

  • Dams provide us with a wide range of social, economic, and environmental benefits: they help control the flow of water, generate hydroelectric power, provide flood control, support waste management and navigation, and act as habitats for aquatic life. India has progressed a lot in the construction of dams and water reservoirs since Independence, and we are now among the best dam builders in the world. India has around 4,300 dams, and many more are already under construction. But even today most of these dams use conventional methods of dam management for controlling the dam gates and carrying out maintenance. In the current fast-paced modern world, where we are trying to automate all the processes around us, it is high time that we revamp the management of our dams using the Internet of Things. In this paper we propose and implement a novel idea for automating the process of dam management, from collecting water-level data to controlling the dam gates. This idea will help streamline the control of dams throughout the country and reduce the manpower needed for dam maintenance.

  • Due to their perceptual advantages, pie charts are frequently used in digital documents to represent meaningful numerical data. Automatic extraction of the underlying slice data of pie charts is necessary for further processing of chart data. In this paper, a novel technique is presented for the identification of pie charts in document images, followed by the extraction of chart data. To identify pie charts in documents, a Region-based Convolutional Neural Network (RCNN) model has been trained with 2D and 3D pie chart images. Then, the different slices of a pie chart are analyzed using image gradients as one of the primary features, and the values of the different slices are computed. The algorithm also identifies the 3D structural elements of a 3D pie chart, which serve only the 3D rendering of such charts and are excluded from processing. To demonstrate its effectiveness, the algorithm has been tested on a number of 2D and 3D pie chart images.

  • Rapid advancement in the development of Internet of Things (IoT) based smart wearable devices has motivated us to develop a device that can remotely monitor the performance and analyze the shooting form of basketball players. In this paper, we present the design of a system that can measure and analyze, in real time, the free-throw shooting action of a professional basketball player. A new heuristic tool has also been developed to analyze every phase of the shooting action and segment out an ideal shooting action for individual players. The developed tool proves more efficient than the conventional k-map clustering approach.

  • Due to the varied applications of wireless sensor networks (WSNs), data is required by users whenever and wherever they need it. Usually, the base station gathers all information from the sensors and sends it periodically to the user, but for real-time applications, mutual authentication among the communicating nodes is required. During user authentication, the base station checks that the user is authorized to gather the collected information from the sensor node over an insecure channel. In this paper, we propose an efficient authentication scheme that provides user anonymity in WSNs using a Markov chain. A Markov chain is a stochastic process suited to a system that follows a chain of linked events, where the next event depends only on the current state of the system. A stationary limiting distribution of the transition matrix is created by the base station to help users keep their passwords and identities safe; a sketch of this standard computation follows. The security analysis verifies that the proposed scheme is safe against various attacks such as forgery, parallel session, and user impersonation attacks.
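
    For readers unfamiliar with the Markov-chain machinery, the sketch below shows the standard computation of a stationary limiting distribution from a transition matrix; the 3-state matrix is an illustrative assumption, not the paper's construction.

    ```python
    # A minimal sketch of computing the stationary limiting distribution pi of a
    # Markov chain, i.e. the vector with pi P = pi; the transition matrix P is
    # an illustrative assumption.
    import numpy as np

    P = np.array([[0.7, 0.2, 0.1],
                  [0.3, 0.4, 0.3],
                  [0.2, 0.3, 0.5]])

    # The stationary distribution is the left eigenvector of P for eigenvalue 1.
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    pi = pi / pi.sum()           # normalize so the probabilities sum to 1
    print(pi)                    # approximately [0.457, 0.283, 0.261]

    assert np.allclose(pi @ P, pi)  # pi is invariant under one more step
    ```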

  • Convolutional Neural Networks have been the standard neural network architecture for image classification and object segmentation. A CNN involves a fundamental feature-learning operation on images called convolution, where a kernel is convolved with the corresponding pixel values of the image. Various types of kernels exist, serving different purposes from pixel mapping to edge detection and image blurring. Gaussian-Gabor kernels have been the standard filters for edge detection. This paper presents new robust edge detection filters that produce sharper edge representations than the traditional Gaussian-Gabor filters.
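
    As a sketch of the baseline the paper compares against, the snippet below convolves an image with a single Gabor kernel using OpenCV; the kernel parameters and file names are illustrative assumptions.

    ```python
    # A minimal sketch of Gabor-kernel edge filtering with OpenCV; the kernel
    # parameters (size, sigma, theta, lambd, gamma) are illustrative assumptions.
    import cv2
    import numpy as np

    img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file

    kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=np.pi / 4,
                                lambd=10.0, gamma=0.5, psi=0)
    response = cv2.filter2D(img, ddepth=cv2.CV_32F, kernel=kernel)

    # Strong responses mark edges oriented near theta; combining several
    # orientations gives an orientation-independent edge map.
    edges = cv2.normalize(np.abs(response), None, 0, 255, cv2.NORM_MINMAX)
    cv2.imwrite("gabor_edges.png", edges.astype(np.uint8))
    ```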

  • This paper introduces a new methodology to raise a journal's impact metrics. The method depends on finding clusters in the SCImago database, creating datasets using a modified k-means clustering algorithm. Further, linear regression analysis is performed on these datasets, treating the index values as dependent variables and the citation parameters as independent variables, to assess the factors that contribute to increasing the bibliometric indices of a journal. In the next step, cluster quality metrics such as the homogeneity score, completeness score, V-measure, adjusted Rand score, and silhouette coefficient are applied to evaluate the goodness of fit of the clusters. The output of the modified k-means algorithm on a dataset of 1445 journals resulted in 3 clusters (k=3), with the data in each cluster grouped by title. The regression analysis indicates that a publisher who wants to enhance a journal's bibliometric indices should consider the advice given in this work, especially attracting a larger number of paper submissions to the journal. Four indices of main importance in the publishing industry have been used. The analysis offers a strong advantage in that the testing of the produced output, including the regression parameters, is clarified through the identification of outliers via relative error calculation. Accordingly, by observing the suggestive features alongside increases or decreases in the TD3, TC3, CD3, CD2, and RD values, publishers can profit by raising their respective bibliometric indices.
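
    A minimal sketch of the described pipeline, using standard k-means and linear regression from scikit-learn rather than the paper's modified k-means; the synthetic features standing in for the citation parameters (e.g. TD3, TC3, CD3, CD2, RD) are assumptions.

    ```python
    # Cluster journals on citation parameters, then regress the bibliometric
    # index on those parameters within each cluster; data here is synthetic.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)
    X = rng.random((1445, 5))            # stand-ins for TD3, TC3, CD3, CD2, RD
    y = X @ np.array([0.8, 0.5, 0.3, 0.2, 0.1]) + rng.normal(0, 0.05, 1445)

    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print("silhouette:", silhouette_score(X, km.labels_))

    for c in range(3):
        mask = km.labels_ == c
        reg = LinearRegression().fit(X[mask], y[mask])
        print(f"cluster {c}: coefficients {np.round(reg.coef_, 2)}")
    ```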

  • Consumers are going through a huge transition in terms of their choices as well as their propensity to spend. People increasingly travel outside the country and understand the spectrum of products and services available in other countries. This has given a huge impetus to E-commerce companies and start-ups offering a variety of products and services, and the continuous development of E-commerce platforms and the convenience of purchasing goods and services have continuously increased the customer base. The broad objective of the study is to extract information from consumer searches and use it analytically for driving the business in the future. The purpose of the research is to use supervised classification techniques to categorize product-related search queries into category (level 1) and subcategory (level 2), which is further required to derive shopping patterns and trends among consumers. In this paper, we explore various multiclass classification techniques, such as Naïve Bayes, Random Forests, and SVM. The Naïve Bayes classifier at both category (level 1) and subcategory (level 2) outperformed the other algorithms, achieving the maximum accuracy of search query classification.
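
    The sketch below illustrates the level-1 step with a Naïve Bayes pipeline; the tiny query/category corpus is an illustrative assumption, not the study's data.

    ```python
    # A minimal sketch of multiclass search-query classification with Naive Bayes.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    queries = ["red running shoes", "wireless headphones", "cotton bed sheets",
               "bluetooth speaker", "men leather sandals", "king size mattress"]
    level1 = ["fashion", "electronics", "home", "electronics", "fashion", "home"]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
    clf.fit(queries, level1)

    print(clf.predict(["leather shoes for men"]))  # -> ['fashion']
    # A second model trained on level-2 labels would give the subcategory.
    ```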

  • Short Message Service (SMS) is one of the most well-known and reliable communication services, in which messages are sent electronically. In the current era, the steadily declining cost per SMS across all telecom operators in India has encouraged the extended use of SMS. This rise has attracted attackers, resulting in the SMS spam problem. Spam messages include advertisements, free services, promotions and marketing, awards, etc. The popularity of mobile devices is growing day by day as telecom giants offer a vast variety of new and existing services at reduced cost, and the high demand for the SMS service has prompted a growth in mobile phone attacks such as SMS spam. In our proposed approach, we present a general model that can identify and filter spam messages using existing machine learning classification algorithms. Our approach builds a generalized SMS spam-filtering model that can filter messages from various backgrounds (Singaporean, American, Indian English, etc.). Preliminary results based on publicly available Singaporean and Indian English datasets are reported below. Our approach shows promise in accomplishing high precision on large Indian English SMS datasets as well as on datasets from other backgrounds.

  • This paper aims to develop a method to extract 3D information from the surrounding space in real time, together with a control system to track a target object continuously. We use two cameras and apply the concepts of ray optics, epipolar geometry, and image processing to identify the target and find its world coordinates with reference to the cameras.
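
    A minimal sketch of the triangulation step: given two calibrated cameras and a matched pixel pair, recover the target's world coordinates. The projection matrices, baseline, and pixel coordinates are illustrative assumptions standing in for a real calibration.

    ```python
    # Recover a 3D point from two calibrated views by triangulation.
    import cv2
    import numpy as np

    # 3x4 projection matrices P = K [R | t] for a stereo pair with 0.1 m baseline
    K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=float)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0], [0]])])

    # Matched pixel coordinates of the target in the left and right images
    pt1 = np.array([[350.0], [260.0]])
    pt2 = np.array([[310.0], [260.0]])

    X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)    # homogeneous 4-vector
    X = (X_h[:3] / X_h[3]).ravel()                   # camera-1 frame coordinates
    print("target at", X)                            # approx [0.075, 0.05, 2.0]
    ```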

  • Document classification, particularly in biomedical research, plays a vital role in extracting knowledge from medical literature, journals, articles, and reports. To extract meaningful information by classification, such as the signs, symptoms, diagnoses, and treatments of a disease, the context needs to be considered. The need to automatically extract key information from medical text has been widely accepted, and search-based approaches have proven limited in their ability to do so. This paper presents a novel method of information identification for a particular disease using Gaussian Naïve Bayes and a feature-weighting approach, with the result then classified by context. It enhances the effectiveness of analytics by considering the importance of each term as well as the probability of every feature of the disease during classification. Experimental results show that our method upgrades the performance of the classification system and improves upon the traditional classification system.

  • Wireless sensor networks (WSNs) are often deployed remotely; hence, typical disposable chemical batteries with limited lifetimes may not be suitable for powering the network. In such cases, photovoltaic (PV) systems that generate electricity from sunlight can serve as a better alternative energy source. The intensity of sunlight varies over time, and thus the rates at which the batteries in the PV system charge also vary. Monitoring the charging and discharging currents and voltages of the batteries enables us to modify the operation of the system to improve its overall efficiency; moreover, it enables us to detect any fault in the solar panel, battery, or network node. We have designed an independent, low-cost, ultra-low-power, microcontroller-based wireless solar power monitor that can be plugged easily into a PV system. The monitor measures the currents and voltages across the panels, batteries, and load, and periodically transmits these values through an independent wireless interface to a control center for observation and analysis. We have performed a power analysis of the monitor and characterized its power consumption in its various states. The use of this power monitor should extend the overall life of the PV system and minimize power failures in the WSN nodes powered by the PV system. This paper reports the design of the power monitor as well as the results of our analyses.

  • Research has been carried out in past and recent years on the automation of examination systems, but most of it targets online examinations with either choice-based or, at best, very short descriptive answers. The primary goal of this paper is to propose a framework in which textual papers set for subjective questions are supplemented with model answer points to facilitate the evaluation procedure in a semi-automated manner. The proposed framework also accommodates provisions for reward and penalty schemes. In the reward scheme, additional valid points provided by the examinees earn them bonus marks as rewards; by incrementally upgrading the question case-base with these extra answer points, the examiner can incorporate automatic fairness into the checking procedure. In the penalty scheme, unfair means adopted amongst neighboring examinees can be detected by maintaining seat plans in the form of a neighborhood graph, and the degree of penalization can then be impartially ascertained by computing the degree of similarity amongst adjoining answer scripts. The main question bank as well as the model answer points are all maintained using Case-Based Reasoning strategies.

  • Imaging from space involves certain complications that are quite different from those of airborne platforms such as MAVs, UAVs, and drones. All these platforms require mathematical models to represent the geometry of image acquisition and, further, to georeference the acquired image. Conventionally, a Rigorous Sensor Model (RSM) involving mission-critical parameters and a sequence of rotations serves the purpose; alternatively, Rational Function Models (RFMs) are developed that empirically mimic the RSM to an acceptable degree of accuracy. In this paper, a machine learning approach to the georeferencing of satellite images is proposed, and its results are compared with the RFM and RSM.

  • Elliptic curve cryptography (ECC) is an emerging and efficient cryptographic technique applicable in various fields such as sensor networks, network security, authentication, signature verification, and different applications of the Internet of Things (IoT). ECC is lightweight, efficient, and more secure compared to other public-key cryptosystems. Different methods have been proposed in the literature to convert an input message to an elliptic curve point, but they lack security and scalability and are computationally inefficient for large input sizes, so a scalable and computationally efficient algorithm is highly desirable. In this paper, we propose two different algorithms for converting an input message to an elliptic curve point that reduce the communication and computational costs of encryption and decryption. The experimental results also show that the proposed algorithms give better performance and are best suited for large input texts compared to existing algorithms.
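
    For context, here is a sketch of the classical trial-and-increment (Koblitz) message-to-point encoding that such work builds on; this is the textbook baseline, not the paper's proposed algorithms, and the small curve and K value are toy choices.

    ```python
    # Map a message unit m to a point on y^2 = x^3 + ax + b (mod p) by trying
    # x = m*K + j for j = 0..K-1 until the right-hand side is a square mod p.
    p, a, b = 751, -1, 188          # small illustrative curve y^2 = x^3 - x + 188
    K = 20

    def encode(m: int):
        for j in range(K):
            x = (m * K + j) % p
            rhs = (x**3 + a * x + b) % p
            y = pow(rhs, (p + 1) // 4, p)        # sqrt mod p, valid since p % 4 == 3
            if (y * y) % p == rhs:               # rhs was a quadratic residue
                return (x, y)
        raise ValueError("no curve point found")

    def decode(point) -> int:
        return point[0] // K                     # invert: m = floor(x / K)

    m = 13
    P = encode(m)
    assert decode(P) == m
    print(m, "->", P)
    ```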

  • A large part of video surveillance systems involves applying face detection techniques to unlabeled faces. We define several classes of faces, to be detected from surveillance footage, using different clustering algorithms. In this paper, the authors propose a facial clustering technique for low-resolution facial datasets obtained from video surveillance footage with the help of a Haar cascade classifier. Models such as ResNet-50 and Inception-ResNet-v2 were used for feature extraction, with weights pre-trained on the ImageNet dataset. Further, several combinations of scaling and dimensionality reduction techniques were applied before the features were fed into the clustering algorithms, and finally the accuracy was calculated on the obtained clusters.

  • Nowadays, various image processing methods are broadly used in biomedical fields. It is crucial for radiologists to diagnose the disease and classify its specific stage in order to give suitable treatment to patients. Lung cancer is the most widely recognized cancer among individuals and can be classified as small cell and non-small cell. In this paper, we propose a model for the detection of pulmonary lesions at the initial and advanced stages of lung disease in CT (Computed Tomography) images. The proposed framework consists of four stages: conversion of the RGB image to grey scale; smoothing with a median filter to lessen the effect of noise; segmentation using thresholding and watershed techniques; and, finally, feature extraction from the processed image. The framework has been tested on 12,645 images from a dataset of 50 patients. We observed that the proposed model performs better than existing techniques, with zero false-positive acceptances.
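
    A minimal sketch of the four-stage pipeline in OpenCV; kernel sizes, the Otsu threshold choice, and the marker construction are illustrative assumptions rather than the paper's exact settings.

    ```python
    # grayscale -> median filter -> threshold -> watershed, with standard OpenCV calls.
    import cv2
    import numpy as np

    img = cv2.imread("ct_slice.png")                       # hypothetical CT slice
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)           # stage 1: RGB -> grayscale
    smooth = cv2.medianBlur(gray, 5)                       # stage 2: noise reduction

    # stage 3a: Otsu thresholding separates candidate regions from background
    _, binary = cv2.threshold(smooth, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # stage 3b: watershed on the distance transform splits touching regions
    dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
    _, peaks = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
    n, markers = cv2.connectedComponents(peaks.astype(np.uint8))
    markers = cv2.watershed(img, markers + 1)              # boundaries marked -1

    # stage 4 (feature extraction, e.g. area/shape per region) would follow here.
    print("segmented regions:", n)
    ```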

  • Cloud computing provides various services to consumers on demand, on a pay-per-use basis. To improve system performance (such as energy efficiency, resource utilization (RU), etc.), more than one virtual machine (VM) can be deployed on a server. An efficient VM placement policy increases system performance by utilizing all computing resources up to their maximum threshold limits, and it reduces the probability of a server becoming overloaded or underloaded; such servers consume more energy and incur more VM migrations than servers in a normal state. In this paper, an Energy and Resource-Aware VM Placement (ERAP) algorithm is presented. The algorithm considers both energy and central processing unit (CPU) utilization when deploying VMs on servers. The CloudSim toolkit is used to analyze the behavior of the ERAP algorithm, and its effectiveness is tested on real workload traces from PlanetLab. Results show that the ERAP algorithm performs better than the existing algorithm with respect to the number of VM migrations, total energy consumption, number of server shutdowns, and average service level agreement (SLA) violation rate; on average, 13.12% of energy consumption is saved in comparison to the existing algorithm.

  • Vehicular ad hoc networks (VANETs) have attracted great interest in both industry and research owing to the highly mobile nature and randomly changing topology exhibited by these networks. These characteristics make them susceptible to frequent disconnections and to contention- and collision-related problems, and designing a set of protocols that caters to the characteristic features of VANETs is a daunting task. This paper presents a detailed survey of a wide variety of position-based routing (PBR) protocols. PBR protocols exploit on-board global positioning receivers to acquire the location information of vehicles. Moreover, on-board maps are used to fetch details of the road layout, thereby eliminating the need to set up and maintain routes between the vehicular nodes and making these protocols highly desirable for VANETs. Further, a novel classification methodology for the protocols under study, along with a comparative analysis depicting their similarities and dissimilarities, is presented.

  • Optical image data have been used by the Remote Sensing community to study land use and land cover, since such data are easily interpretable. The aim of this study is to perform land use classification of optical data using maximum likelihood (ML) and support vector machine (SVM) classifiers. Essential geo-corrections were applied to the images at the pre-processing stage. To appraise the accuracy of the two familiar supervised algorithms, the overall accuracy and kappa coefficient metrics were used. The assessment results demonstrate that the SVM algorithm, with an overall accuracy of 88.94% and a kappa coefficient of 0.87, is more accurate than the ML algorithm. Therefore, the SVM algorithm is suggested as an image classifier for high-resolution optical Remote Sensing images due to its higher accuracy and better reliability.
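
    The two reported metrics are easy to reproduce from a confusion matrix, as the sketch below shows; the 3-class matrix values are illustrative assumptions, not the study's results.

    ```python
    # Overall accuracy and Cohen's kappa from a confusion matrix.
    import numpy as np

    cm = np.array([[50,  3,  2],    # rows: reference class, cols: predicted class
                   [ 4, 45,  1],
                   [ 2,  2, 41]])

    total = cm.sum()
    overall_accuracy = np.trace(cm) / total

    # kappa corrects overall accuracy for the chance-agreement probability p_e
    p_o = overall_accuracy
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2
    kappa = (p_o - p_e) / (1 - p_e)

    print(f"overall accuracy = {overall_accuracy:.4f}, kappa = {kappa:.4f}")
    ```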

  • Research in assistive technology has been on the rise over the last decade. Numerous solutions and consumer products have flooded the market to guide the visually impaired, making use of beacon technology, depth cameras, and more. Though certain products and solutions are available for highly structured and regular indoor environments, we are still a long way from an industrial-level product for unstructured, dynamic, and irregular outdoor environments. Our work harnesses the decision-making power of the sighted individuals and crowds surrounding the visually impaired. This information extraction from the crowd, along with coarse terrain mapping of major obstacles such as footpath edges, walls, and large potholes, helps the subject navigate dynamic and irregular environments. This out-of-the-box approach gives us the margin to use low-grade equipment and to develop algorithms with low computational complexity. The paper explains the theoretical aspects of this approach along with its proof of concept and some remarkable results achieved in a real-life implementation.

  • With the increasing applications of 3D printing, podiatric research has received considerable attention from researchers worldwide. 3D-printed customized soles have come into use to mitigate foot pain and improve comfort. The presented work aims to provide a customized foot sole with variable infills and appropriate depths in order to provide adequate pressure and comfort on the precise nerve areas that are the origins of pain. In the proposed work, a 3D sole is reconstructed using variable infill density and appropriate depth fitting based on foot plantar pressure measurements. The work comprises four phases: attaining foot plantar pressure readings, data processing, infill density distribution, and 3D printing of the sole. Initially, the foot plantar data is obtained from a platform using an array of 32 × 32 piezoelectric sensors. Secondly, the input data is corrected by removing the rigid pattern from the foot sole via median filtering and is interpolated via bicubic interpolation to obtain a smooth surface. Thereafter, modifiers are created to assign different densities to distinct portions of the model. At last, the model is 3D-printed using fused deposition modeling (FDM) technology. This novel work can be highly relevant in various medical and commercial applications.
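
    A minimal sketch of the data-processing phase (median filtering followed by bicubic interpolation of the 32 × 32 pressure grid); the random array stands in for real sensor readings, and the upsampling factor is an assumption.

    ```python
    # Denoise a 32x32 plantar pressure map and upsample it with bicubic interpolation.
    import numpy as np
    from scipy.ndimage import median_filter, zoom

    pressure = np.random.default_rng(1).random((32, 32))   # raw 32x32 sensor grid

    denoised = median_filter(pressure, size=3)             # suppress spiky outliers
    smooth = zoom(denoised, 8, order=3)                    # bicubic (order-3) upsample

    print(pressure.shape, "->", smooth.shape)              # (32, 32) -> (256, 256)
    # 'smooth' would then drive the infill-density modifiers for 3D printing.
    ```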

  • Enhancing the security of two-party computation is considered a primitive for establishing secure multiparty computation in geographically distributed networks. With the advent of various distributed paradigms like cloud computing, IoT, and fog computing, securing ubiquitous computation that involves multiparty collaboration is an open research area that has attracted researchers to develop novel protocols. Addressing the problem of secure computation over the network, this paper presents a novel hybrid quantum protocol in which quantum key distribution (QKD) is integrated with 3DES to secure computations through a quantum channel within the cloud infrastructure. Simulation results show that the proposed protocol outperforms many security protocols developed on the basis of quantum resources.

  • The Internet is growing rapidly, with a huge amount of data arriving mainly through social media, and most of the text on the World Wide Web is anonymous. In recent years, learning the details behind anonymous text has become a hot research area, and author profiling is one such area, attracting several researchers who want to extract information about anonymous text. Author profiling is a technique for predicting demographic characteristics such as the gender, age, and location of authors by analyzing their written texts. The field of stylometry is one area used by researchers to discriminate authors' writing styles, and author profiling approaches have proposed various types of stylistic features for this purpose. However, the accuracies obtained for the demographic characteristics of authors are not satisfactory when stylometric features are used, so researchers later experimented with different types of term weight measures to improve the accuracies. In this work, we concentrate on two demographic characteristics, gender and age. The experimentation is performed on the 2014 PAN competition reviews corpus in the English language. A new profile-specific supervised term weight measure is proposed to predict the gender and age of the author of an anonymous text. The experimental results of the proposed measure are compared with different weight measures, and the proposed weight measure obtained the best results for predicting gender and age.

  • Biometric systems play an important role in identifying a person, thus contributing to global security. There are many possible biometrics, for example height, DNA, handwriting, etc., but computer vision based biometrics have found an important place in the domain of human identification. Computer vision based biometrics include identification of the face, fingerprints, iris, etc., using their characteristics to create efficient authentication systems. In this paper, we work on a dataset [1] of iris images and make use of deep learning to identify and verify the iris of a person. Hyperparameter tuning for deep networks and optimization techniques have been taken into account in this system. The proposed system is trained using a combination of Convolutional Neural Networks and a softmax classifier to extract features from localized regions of the input iris images, followed by classification into one of the 224 classes of the dataset. From the results, we conclude that the choice of hyperparameters and optimizers affects the efficiency of our proposed system. Our proposed approach outperforms existing approaches, attaining a high accuracy of 98 percent.

  • Inter-satellite optical wireless communication (IsOWC) uses light, which makes it feasible to achieve long-haul communication at high data rates. In this paper, the proposed work aims to achieve high-speed, long-haul communication by using the continuous phase frequency shift keying modulation technique in combination with orthogonal frequency division multiplexing in IsOWC. The system's performance is investigated in terms of received power and Q-factor for different transmission ranges and bit rates. It is observed that the system can successfully transmit a 10 Gbps data stream over a range of 8,000 km with a high received power and an acceptable Q-factor.

  • The objective of this analysis is to identify gender-specific interaction patterns in primary school children, and a social network approach is taken for the purpose. Dyadic relationships formed as a result of face-to-face interaction between two children were analyzed quantitatively within the boundary of social network research. The strength of dyads was a key consideration in measuring various temporal interaction behavior patterns such as the dyadic churn rate, persistence rate, retention rate, and new dyads. The analysis was also motivated by determining differentiating patterns with respect to students' mobility and their ability to collaborate socially with students of the same and the other gender. The variations in degree centrality measures for each node were also suggestive of a preference for gender- and grade-specific ties. The outcome of the analysis is also fundamental to the phenomena of social contagion and information diffusion.

  • Wireless sensor network (WSN) communication has attracted a lot of attention from research scholars due to features such as high-rate wireless data transmission, and a large number of techniques have been developed to achieve an energy-efficient network. Clustering and cluster head selection are the major and most difficult tasks to perform in a network, and LEACH serves as the basis for most energy-efficient clustering protocols. This study takes LEACH-Mobile-Fuzzy (LEACH-MF) as the base for the proposed work: a Fuzzy Inference System (FIS) is combined with LEACH along with a threshold-based data transmission concept. The major objective of this work is to utilize the energy allotted to the sensor nodes in an effective way. The proposed model comes in two forms, Modified Parameter LEACH-MF (MP-LEACH-MF) and Limited Communication LEACH-MF (LC-LEACH-MF); LC-LEACH-MF is a reactive protocol, whereas the former is periodic. To assess the performance of the proposed work, parameters such as the Packet Delivery Ratio (PDR), Last Node Dead (LND), Half Node Dead (HND), First Node Dead (FND), and the energy consumption of the network are evaluated, and a comparative analysis has been done against traditional LEACH, LEACH-Mobile (LEACH-M), and LEACH-MF. The obtained results show that LC-LEACH-MF outperforms the other traditional energy-efficient clustering techniques.
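
    For orientation, the sketch below shows the classic LEACH cluster-head election threshold that the LEACH-MF variants build on; the fraction p and node count are illustrative, and the eligibility bookkeeping (excluding nodes already elected in the current epoch) is omitted.

    ```python
    # Node n elects itself cluster head in round r if a uniform random draw
    # falls below T(n) = p / (1 - p * (r mod 1/p)), where p is the desired
    # fraction of cluster heads per round.
    import random

    def leach_threshold(p: float, r: int) -> float:
        return p / (1 - p * (r % int(1 / p)))

    p, nodes = 0.05, 100
    for r in range(3):                                  # a few rounds of election
        heads = [n for n in range(nodes)
                 if random.random() < leach_threshold(p, r)]
        print(f"round {r}: cluster heads {heads}")
    ```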

  • Finding communities in a complex network is a tedious task. In this paper, we propose a Fast Cosine Shared Link (FCSL) method for unveiling and analyzing the concealed behavior of communities in a network. We use the cosine similarity measure to compute node similarity, and we evaluate the time taken to identify the communities in the network. Substantial experiments show the potential of the proposed method to successfully find communities in real-world network datasets. The experiments we carried out exhibit that our method outperforms other techniques, improving on the results of existing methods and producing reliable results. The performance of the methods is evaluated in terms of the detected communities, the modularity value, and the time taken to detect the communities in the network.

  • Potholes make road transportation slower and more costly. India has a big network of roads connecting villages and cities, and the responsible authorities cannot travel across every region to identify holes. Given the recent advancements in machine learning, we can use this technology for identifying and patching potholes. As per a recent survey, around 400 million people in India have a smartphone. We can use smartphone sensors (such as the accelerometer and gyroscope) to identify potholes on the road and GPS to locate each pit. The major challenge of this problem is capturing and annotating the data. We have developed an Android app for capturing displacement values while travelling on the road, and we have applied different classification algorithms to the raw sensor data; SVM proved the most suitable classification technique for this problem. The Android app sounds an alarm when a pothole is detected.
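
    A minimal sketch of the classification step: simple per-window accelerometer features fed to an SVM. The synthetic smooth-road and pothole signals are illustrative stand-ins for the app's recorded data.

    ```python
    # Windowed accelerometer features -> SVM pothole classifier, on synthetic data.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    def window_features(z):                  # per-window features of vertical accel.
        return [z.mean(), z.std(), z.max() - z.min()]

    smooth = [window_features(rng.normal(0, 0.2, 50)) for _ in range(100)]
    pothole = [window_features(rng.normal(0, 0.2, 50) +
                               np.where(np.arange(50) == 25, 3.0, 0))  # jolt spike
               for _ in range(100)]

    X = np.array(smooth + pothole)
    y = np.array([0] * 100 + [1] * 100)      # 0 = smooth road, 1 = pothole

    clf = SVC(kernel="rbf").fit(X, y)
    print(clf.predict([window_features(rng.normal(0, 0.2, 50))]))  # likely [0]
    ```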

  • Utility pattern mining addresses the current challenges of e-business by analyzing market behavior and customer trends in transactional data. However, frequent pattern analysis of customer transactions has important limitations, as purchased quantities are not taken into account: an item is considered to appear only once or zero times in a transaction, and all items are given the same importance, which leads to inaccurate analysis. To address these confines, the problem of identifying frequent item sets as patterns in e-business has been redefined as High Utility Pattern Mining (HUPM). The focus of this paper is finding high utility patterns using the weighted utilization value of each product. This is implemented in two modules: finding the top-k high utility patterns by constructing a UP-Growth tree with the TKU algorithm, and finding the top-k utilities in a one-phase approach with the TKO algorithm, which mines HUPs without any assumption of a minimum utility threshold. Experimental results on standard datasets show that the proposed algorithms incur a smaller computational cost and are thus more efficient than other existing methods.

  • Person authentication using footprints is still a neglected field, even though footprints offer both physiological and behavioral features, largely due to the unavailability of datasets. To examine the credibility of the footprint, we have collected a footprint dataset. The collection was done in two phases: 1) we collected 2 footprint samples of each foot from 110 persons, and 2) we collected 5 footprint samples of each foot from 80 people. A paper scanner was used for the data collection, and the whole footprint was captured. The collected samples were taken at different orientations and positions, and sometimes the scanner was not aligned and introduced noise; to overcome these problems, a footprint image requires extensive preprocessing. To make an image invariant to translation and rotation, we use Hu's 7 moment invariant features, which can efficiently check whether an input image belongs to a particular person even after translation, scaling, and rotation. The probability of translation and scaling is very low for footprints, but a slight rotation of the foot image is noticeable, which could result in different geometric features for the same person. This technique is not suitable for authentication by itself, but it can surely reduce the sample space by rejecting samples: if the difference between the 3rd-order moment invariant values of two samples is more than the decided threshold, then the samples surely do not belong to the same person. The reduced sample set can then be used for authentication, reducing time complexity and computation cost. We tested the method on 1,320 images, obtaining an FMR of 4.52% and an FNMR of 5.18%. This leads us to the conclusion that the 3rd-order moment is enough to make an image rotation invariant.
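
    The screening test is straightforward to sketch with OpenCV's Hu moments; the file names and threshold below are hypothetical placeholders for the dataset and its tuned value.

    ```python
    # Compare a 3rd-order Hu moment invariant of two footprint images against a
    # threshold; a large difference rules out a same-person match.
    import cv2

    def third_hu_moment(path: str) -> float:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        hu = cv2.HuMoments(cv2.moments(binary)).ravel()   # 7 invariants
        return hu[2]                                      # built from 3rd-order moments

    THRESHOLD = 1e-6   # hypothetical rejection threshold, tuned on the dataset

    h1 = third_hu_moment("footprint_a.png")
    h2 = third_hu_moment("footprint_b.png")
    if abs(h1 - h2) > THRESHOLD:
        print("reject: samples cannot be from the same person")
    else:
        print("keep: pass to the full authentication stage")
    ```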

  • Graphs are discrete objects with myriad applications in science and engineering. Several graph-theoretic problems are known to be hard. However, for restricted versions of graphs, depending on the type of restriction, problems that are hard to solve on a general graph become tractable. Layered graphs have been defined and shown to have applications in social networks and computational molecular biology. We define a new class of graphs, called cyclic layered graphs, that are related to layered graphs. We pose three problems that can be modeled as graph-theoretic problems on cyclic layered graphs, and we design efficient algorithms for these problems.

  • A layered graph G = (V, E) is defined as a graph containing several subgraphs, also called layers, G_1 = (V_1, E_1), G_2 = (V_2, E_2), ..., G_q = (V_q, E_q), where the edges incident on V_i are restricted to the vertices from V_{i-1} ∪ V_i ∪ V_{i+1}. Layered graphs have applications in computational molecular biology and social networks. Several hard graph-theoretic problems, such as Maximum Independent Set (MIS), Minimum Vertex Cover (MVC), and Minimum Dominating Set (MDS), are known to be computationally tractable on layered graphs when a corresponding upper bound is imposed on the number of vertices a layer can have. We present algorithmic improvements and design parallel algorithms for computing these measures.

  • An experiential study is conducted to solve a binary classification problem on the big dataset of the European Survey of Schools: ICT in Education (ESSIE) using IBM Modeler version 18.1. The survey was conducted by ESSIE at various ISCED (International Standard Classification of Education) school levels [1]-[3]. To predict the gender of teachers based on their answers, the authors applied 4 supervised machine learning algorithms, filtered out of 12 classifiers using auto classifiers, on the ISCED-1 and ISCED-2 school levels. Out of 158 total attributes, self-reduction and the auto classifier stabilized 134 attributes for the Bayesian Network (BN) and Random Tree (RT) at level 1, and 134 attributes for logistic regression and 41 attributes for the Decision Tree (C5) at level 2. The MissingValue filter of the Weka 3.8.1 tool handled 55,641 missing values at the ISCED-2 level and 19,415 at the ISCED-1 level, and normalization was applied as well. The outcomes of the study reveal that the Decision Tree (C5) classifier outperformed logistic regression (LR) after feature extraction at ISCED-2 level schools, and the Random Tree classifier predicted the gender of teachers more accurately than the Bayesian Network at level-1 schools. Further, the presented predictive models stabilized 134 attributes with 2,926 instances for predicting the gender of teachers in level-1 schools and 134 attributes with 7,542 instances for level-2 schools.

  • The availability of publication data is significant for research development, and global publication data on the semantic web would help the research community greatly. The W3C has provided a semantic web standard termed the RDB to RDF Mapping Language (R2RML), which allows customized mappings from relational databases to RDF datasets to be expressed. This paper discusses a convergence approach, PubWorld, that uses R2RML to generate R2RML mapping files from three disparate relational databases: two for publications and one for a world database. Publication data is made shareable directly from the mapping files by converting it into local ontologies and merging the local ontologies, along with one existing ontology, into a single global ontology. The Header-Dictionary-Triples (HDT) compression technique is used to store the global ontology, achieving large spatial savings. SPARQL (Simple Protocol and RDF Query Language) queries using Jena ARQ (And RDF Query) on both the RDF and HDT versions show similar running times.

  • Nowadays, the use of digital content and media is increasing rapidly, so there is a need to secure digital documents from both unauthorized and authorized users. In this paper, a secure technique for image fusion using a hybrid domain for copyright protection and data distribution is proposed. The proposed method provides security for digital content in a cloud environment. Two cloud services are used to develop this work, eliminating the role of the trusted third party (TTP) on which users and content providers previously had to rely. The first is an Infrastructure as a Service (IaaS), used to store different images with encryption to speed up the image fusion process and save storage. The second is a Platform as a Service (PaaS), used to give the digital content great computation power and to increase bandwidth. These two cloud services play a very important role because they reduce the communication overhead in the image fusion process. Imperceptibility and robustness measures are used to evaluate the performance of the proposed approach.

  • Nowadays, social networks play a significant role in users' decision-making. In a social network, an opinion leader is a critical person who influences the behavior of others with their own knowledge and skills. The major contribution of this paper is an advanced approach to discovering opinion leaders in a social network using fuzzy logic and a trust generation model. In the first step, we evaluate fuzzy trust rules based on users' trust. Next, these fuzzy trust rules are applied to the online social network, and a de-fuzzification process is applied to find the trust value for each user. At last, the top-N users are identified according to their prominence values, which are obtained directly from their trust values. We demonstrate our approach on a synthesized dataset and show results that are better than the standard social network analysis measures with respect to accuracy, precision, F1-score, and recall.

  • Analysis of the motion of the lower limbs is required in different fields, including health monitoring, robotics, rehabilitation sciences, biometrics, and consumer electronics. Motion sensors such as accelerometers are prominently used in such analysis since they are non-invasive and readily available at low cost. However, it is evident from the literature that fusing accelerometer data with data recorded from other types of sensors improves the recognition of human activities. In this paper, the use of surface electromyogram (sEMG) along with accelerometers is explored to recognize nine activities of daily living. The effect of the placement of the sEMG sensor on two of the most popularly reported muscle locations on the leg, namely the soleus and the tibialis anterior, is studied in detail to determine the appropriate positioning of such sensors for human activity recognition and, hence, to reduce the number of sensors required for classification. It is demonstrated using actual data that the use of sEMG along with an accelerometer improves the overall classification accuracy to 98.2% from around 94.5%, which is obtained if only the accelerometer is used. In particular, the classification of stationary activities is improved with the inclusion of sEMG. Moreover, placing the sEMG sensor on the soleus muscle aids classification more than placing it on the tibialis anterior muscle.

  • A robot needs to predict an ideal rectangle for optimal object grasping based on the intent of the grasp. As described in this paper, a Mask Regional Convolutional Neural Network (Mask R-CNN, via Detectron) can be used to generate the object mask and perform object detection, and a Convolutional Neural Network (CNN) can be used to predict the ideal grasp rectangle according to the supplied intent. The masked image obtained from Detectron, along with metadata on the intent type, is fed to the fully connected layers of the CNN, which generates the desired optimal rectangle for the specific intent and object. Before settling on a CNN for optimal rectangle prediction, conventional neural networks with different hidden layers were tried, and the accuracy achieved was low. A CNN was then developed and tested with different layers and pool and stride sizes to settle on the final CNN model discussed here. The predicted optimal rectangle is then fed to a robot, a ROS simulation of the Baxter robot in this case, to perform the actual grasping of the object at the predicted location.

  • A Cochlear Implant (CI) system is an auditory prosthesis that provides the perception of hearing to sensorineurally deafened people by surgically implanting electrodes and electrically stimulating the auditory nerve. Four to six weeks after surgical implantation of the CI, an audiologist activates the implant and systematically stimulates the auditory nerve through dedicated software, usually called 'clinical programming software (CPS)', to fine-tune the electrodes so that the patient perceives sound. Every cochlear-implanted child needs to visit the audiologist periodically, up to the age of 18, to undergo routine evaluations. This paper presents a new smartphone-based CPS developed to address the requirements of audiologists, with the added advantage that patients can carry the CPS with the previous history of stimulation values, commonly referred to as the 'MAP' in CI programming. Besides, it provides flexibility to the patient, especially in an emergency or when visiting the nearest doctor to create a new MAP or attend routine evaluations, which is not offered by contemporary CI manufacturers. The Smartphone-based CI Programme (SCIP) was developed using the Android Studio IDE. SCIP assists the audiologist in performing audiological evaluations such as (i) finding the electrode status (active, short, or open), (ii) impedance measurement, (iii) fine-tuning the threshold and maximum audible level values for each electrode, and (iv) updating the speech processor with the fine-tuned values. The performance of the SCIP software application was validated against a standard electrode impedance tester device during in vitro testing of the developed Indian cochlear implant prosthesis, which is expected to undergo human clinical trials soon.

  • HLA matching is conventionally assessed by counting the number of mismatches (MM) in the class I and class II antigens of the donor and recipient, but matching scores overlap in the MM scoring method. A numerical method has already been developed to quantify the degree of HLA matching for low-resolution HLA matching. This study formulates an algorithm for high-resolution HLA matching, with a parameter named the HLA Matching (HM) score. Mathematically, 4,096 discrete values of the HM score between 0 and 1 are obtained in a 3-loci comparison for renal transplantation, and each value of the HM score is unique for every possible matching combination. The donor with the highest percent HM score is considered the best HLA-matched donor. This method overcomes all the disadvantages of the conventional MM scoring method, and the algorithm is useful in donor and recipient selection and in graft survival studies.

  • Analysis of EEG (electroencephalography) signals provides an alternative, ingenious approach to emotion recognition. Nowadays, Gradient Boosting Machines (GBMs) have emerged as state-of-the-art supervised classification techniques for the robust modeling of various standard machine learning problems. In this paper, two GBMs (XGBoost and LightGBM) were used for emotion classification on the DEAP dataset. Furthermore, a participant-independent model was fabricated by excluding the participant number from the features. The proposed approach performed well, with high accuracies and faster training speed.
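
    A minimal sketch of the two boosting libraries named above applied to generic feature vectors; the random arrays stand in for DEAP EEG features and labels, and the hyperparameters are left at library defaults.

    ```python
    # Fit XGBoost and LightGBM classifiers on stand-in feature vectors.
    import numpy as np
    from lightgbm import LGBMClassifier
    from xgboost import XGBClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.random((1000, 40))                 # e.g. per-channel EEG band powers
    y = rng.integers(0, 2, 1000)               # e.g. high/low valence label

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for model in (LGBMClassifier(), XGBClassifier()):
        model.fit(X_tr, y_tr)
        print(type(model).__name__, "accuracy:", model.score(X_te, y_te))
    ```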

  • This paper proposes the use of artificial neural networks (ANNs) to classify human postures, using an invasive (intrusive) approach, into 6 categories, namely standing, sitting, sleeping, and bending forward and backward. Human posture recognition has numerous applications in healthcare analysis, such as patient monitoring, lifestyle analysis, and elderly care. Most importantly, our solution is capable of classifying the aforementioned postures in real time by wirelessly (over Wi-Fi) acquiring and processing the sensor data on a Raspberry Pi device with minimal lag. A dataset of 44,800 samples was collected from 3 subjects and used to train and test the neural network. After experimenting with a plethora of network architectures, an optimal neural network architecture (6-9-6) with suitable hyper-parameters was determined, which gave an overall accuracy of 97.589%.

  • Hyperspectral images (HSIs) are satellite images that provide spectral and spatial detail of a given region, making them uniquely suitable for classifying objects in a scene. Classification of hyperspectral images can be performed efficiently using a Convolutional Neural Network (CNN). In this research, a framework is proposed that leverages transfer learning and a CNN to classify the crop distributions of horticulture plantations. The hyperspectral dataset consists of images and known labels, also known as ground truth. However, some of the HSIs are unlabelled due to the lack of available ground truth; hence, the proposed method adopts the transfer learning technique to overcome this. The model was trained on a publicly available, labelled hyperspectral dataset and then tested on field samples from the Chikkaballapur district of Karnataka, India, provided by the Indian Space Research Organisation (ISRO). The CNN leverages both the spectral and the spatial correlations of the HSIs. Due to the amount of detail in HSIs, they are fed into the convolutional layers of the network as patches, and the diverse information provided by these images is exploited by deploying a three-dimensional kernel. This joint representation of spectral and spatial information provides higher discriminating power, allowing a more accurate classification of the crop distributions in the field. The experimental results prove that feeding images as patches trains the CNN better and that applying transfer learning gives the method a more generic and wider scope.

  • A scene is a view that contains various objects in a real-world environment, and recognizing the global view of an image is called scene classification. Scene classification is very challenging for computers, as it is very difficult for a computer to recognize the global view of an image; it is therefore one of the challenging tasks in the computer vision area. The object classification task has been drastically improved by deep learning with the AlexNet Convolutional Neural Network. Highly motivated by this work, we used the pre-trained AlexNet Convolutional Neural Network to extract the features of an input image automatically, and then applied the transfer learning approach for the classification task to reduce the overall computational complexity of the neural network. We then performed the scene classification task with various classifiers and computed their accuracies, which come out to be greater than the state-of-the-art methods. For the scene classification task, we used the Places dataset, containing 2.5 million real-world images and 201 scene classes.
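
    The transfer-learning step can be sketched with the pre-trained AlexNet from torchvision used as a fixed feature extractor; the input file is hypothetical, and the downstream classifier is left open.

    ```python
    # Reuse pre-trained AlexNet convolutional layers as a fixed feature extractor.
    import torch
    from torchvision import models, transforms
    from PIL import Image

    alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

    preprocess = transforms.Compose([
        transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    img = preprocess(Image.open("scene.jpg").convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        feats = alexnet.features(img)                    # convolutional feature maps
        vec = torch.flatten(alexnet.avgpool(feats), 1)   # 9216-dim feature vector
    print(vec.shape)   # torch.Size([1, 9216]) -- input to any downstream classifier
    ```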

  • Images have become a standard for information consumption and storage, largely replacing text in various domains such as museums, news stations, medicine, and remote sensing. Such images constitute the majority of the data consumed on the Internet today, and the volume is constantly increasing day by day. Most of these images are unlabeled and devoid of any keywords. The swift and continuous increase in the use of images and their unlabeled nature have created the need for efficient and accurate content-based image retrieval systems. A considerable number of such systems have been designed for the task, deriving features from a query image and showing the most similar images. One such efficient and accurate system is attempted in this paper: it makes use of the color and texture information of the images and retrieves the best possible results based on this information. The proposed method uses a Color Coherence Vector (CCV) for color feature extraction and Gabor filters for texture features. The results were found to be significantly better, easily exceeding those of several popular studies.

  • Machine Translation is a branch of research under Computational Linguistics that deals with the automatic/semi-automatic translation of a natural/human language into another. The language being translated is termed the Source Language (SL), and the language into which translation is done is termed the Target Language (TL). This paper presents an English to Indian Languages machine translation technique based on the rules of grammar, namely word declensions or inflections, and the sentence formation rules of the target languages, i.e. Indian languages. Declensions are variations or inflections of words in a language, and Indian languages are richly declensional or inflectional. This study is about generating noun declension case markers for English to Indian languages in declension rule based machine translation. This paper also describes the various approaches to machine translation along with their system architectures. The proposed declension-based RBMT is explained with its architecture, and each of the modules and their functionalities is elaborated in detail. The input and output, to and from the system, are also described with an example. Research works similar to the proposed system, such as ANUSAARAKA and ANGLABHARATI, are also explored.

  • A pattern and polarization reconfigurable antenna (PPRA) for WLAN applications is presented in this paper. The proposed antenna has a square ring patch fed by a T-shaped microstrip line through gap coupling on the upper layer. A defected ground structure has been deployed in the ground plane to increase the gain of the antenna. Two diodes, PD1 and PD2, have been incorporated in the upper diagonal gap of the square ring to achieve polarization reconfigurability, and two more switches, PD3 and PD4, have been inserted in the ground plane to achieve pattern diversity. Depending on the diode (ON/OFF) conditions, the PPRA can switch among different polarization states, viz. horizontal, vertical, S(-45), and Z(45) linear polarization. It is also able to switch its pattern by 180° in the E-plane. Extensive simulations have been performed in CADFEKO for antenna design and optimization. The HPND-4005 PIN diode equivalent circuit for the ON and OFF states has been used for simulation purposes. The proposed antenna covers the entire (2.412-2.484) GHz band for the IEEE 802.11b/g/n standard.

  • This work attempts to characterize and classify speech emotions using the spectrogram. Initially, it extracts the individual red, green, and blue parameters from the raw speech spectrogram image of each emotional utterance. Further, it computes the statistical parameters of the individual RGB components to characterize the chosen emotional states. Utterances of the anger, happiness, neutral, and sad emotional states from the standard Berlin (EMO-DB) database have been used for this purpose. The individual statistical R, G, and B spectrogram parameters are found to differ within an emotion as well as across emotional states. Thus, these values have been used as different feature sets to classify the designated emotional states using the popular Multilayer Perceptron Neural Network (MLPNN).

  • Colon cancer is one of the most common types of cancer, and its treatment is planned depending on the grade or stage of the cancer. One of the preconditions for grading colon cancer is segmenting the glandular structures of tissues. Manual segmentation is very time-consuming, and the resulting delay can put patients at risk. The principal objective of this project is to assist the pathologist in the accurate detection of colon cancer. In this paper, the authors propose an algorithm for the automatic segmentation of glands in colon histology using local intensity and texture features. The dataset images are cropped into patches with different window sizes, and the intensity of those patches is taken along with calculated texture-based features. A random forest classifier is used to classify these patches into different labels, in a multilevel, hierarchical random forest technique. The solution is fast and accurate, and it is very much applicable in a clinical setup.
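
    A minimal sketch of the patch-classification idea: crop fixed-size windows, compute simple intensity/texture features, and train a random forest. The random image/mask, window size, and features are illustrative assumptions, and the multilevel hierarchy is omitted.

    ```python
    # Patch features (intensity + a crude texture measure) -> random forest labels.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    image = rng.random((256, 256))            # stand-in for a histology slide
    labels = (rng.random((256, 256)) > 0.5)   # stand-in for a gland/non-gland mask

    W = 32                                    # patch window size

    def patch_features(p):
        return [p.mean(), p.std(), np.abs(np.diff(p, axis=0)).mean()]

    X, y = [], []
    for i in range(0, 256 - W, W):
        for j in range(0, 256 - W, W):
            X.append(patch_features(image[i:i+W, j:j+W]))
            y.append(int(labels[i:i+W, j:j+W].mean() > 0.5))  # majority label

    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print("training accuracy:", rf.score(X, y))
    ```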

  • IoT and big data technologies have shaped modern data science, as nowadays a lot of data is generated by wireless sensors connected via a network. Detecting anomalous events in this large amount of data is a topic undergoing intense study among researchers, and most of the existing solutions for detecting anomalous events in big data are based on machine learning models. The proposed technique is a hybrid approach to detecting outliers in weather sensor data. The approach comprises three phases: initially, to handle big data efficiently, dimensionality reduction is performed; in the second phase, anomalous events are detected using multiple classifiers; finally, in the third phase, the results of the different classifiers are combined for the final classification. With the aid of the proposed approach, we can extract meaningful information from a complex dataset. It can be perceived from the experimental results that the proposed approach outperforms various state-of-the-art algorithms for outlier detection.

  • Motif discovery, also known as motif finding, is a challenging problem in the field of bioinformatics that deals with various computational and statistical techniques to identify short patterns, often referred to as motifs, that correspond to transcription factor binding sites in DNA sequences. Owing to the recent growth of bioinformatics, a good number of algorithms have come into the limelight. This paper proposes a competent algorithm that extracts transcription factor binding sites from a set of DNA sequences using successive iterations over the provided sequences. The motifs we work on are of unknown length, un-gapped, and non-mutated. The algorithm uses a suffix trie to find such sites: the first sequence is used as the base for constructing the suffix trie and is matched against the other sequences, which results in the extraction of the motif. Additionally, this algorithm can also be applied to related problems in data mining, pattern detection, etc.
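
    The core trie idea can be sketched compactly: index every suffix of the base sequence in a trie, then test candidate substrings from the other sequences against it. The toy sequences are illustrative; the paper's algorithm adds its own iteration scheme on top.

    ```python
    # Suffix-trie substring matching for a shared un-gapped motif.
    def build_suffix_trie(s):
        """Trie over all suffixes of s; membership test = substring test."""
        root = {}
        for i in range(len(s)):
            node = root
            for ch in s[i:]:
                node = node.setdefault(ch, {})
        return root

    def in_trie(trie, pattern):
        node = trie
        for ch in pattern:
            if ch not in node:
                return False
            node = node[ch]
        return True

    seqs = ["ACGTACGGTA", "TTACGGTCCA", "GGACGGTATT"]
    trie = build_suffix_trie(seqs[0])          # base sequence indexed once

    # candidate motifs: substrings (length >= 3) of the second sequence that also
    # occur in the base sequence (checked via the trie) and in every other sequence
    cands = {seqs[1][i:j] for i in range(len(seqs[1]))
             for j in range(i + 3, len(seqs[1]) + 1)}
    shared = [m for m in cands if in_trie(trie, m) and all(m in s for s in seqs[2:])]
    print(max(shared, key=len))                # -> 'ACGGT'
    ```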

  • Network connectivity and sensing coverage are the most important quality-of-service issues in the design of a wireless sensor network. Monitoring or tracking of the target area by the sensors is called coverage. In hostile environments where human access is difficult or impossible, sensors are dropped from airplanes; in this situation, sensor density cannot be uniform over the whole area, so some areas may be covered or uncovered and some sensors may overlap. These redundant sensors improve coverage and connectivity but increase energy depletion. Monitoring coverage holes is an important task because of their harmful, denying effect on WSNs. In the present paper, we propose a model to extend the network lifetime and maximize the coverage rate using a multi-objective optimization approach. This model can achieve maximum coverage, minimum energy consumption, and maximum network lifetime. The paper uses the non-dominated sorting genetic algorithm (NSGA-II) for optimizing the coverage problem. The simulation results show that the proposed method can improve the coverage probability and lifetime of the network while maintaining network connectivity.

  • Quite often, when the problem of study involves binary classification, we face a situation of unbalanced class labels: the negative class dominates the positive class, so the model cannot learn enough complexity to correctly classify the minority label. Bagging and boosting classifiers have recently gained popularity due to their robustness against unbalanced class labels; both use the notion of an ensemble to generalize the model and predict on unseen data. Through this paper, we aim to explore the improvement in classification performance from bagging and boosting classifiers on an unbalanced binary classification dataset.

  • Image denoising plays an important role in a wide variety of applications and is one of the critical operations in image processing. Denoising an image without losing important features is a difficult and challenging task: many techniques have been proposed, but most of them fail to preserve the fine details in the image. In this work, a fractional anisotropic model is presented that not only removes noise but also preserves the fine details present in the image. Qualitative and quantitative analyses have been performed, and the proposed method is found to be superior at image denoising.
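
    As a reference point, here is a sketch of the classical integer-order Perona-Malik anisotropic diffusion that fractional anisotropic models generalize; the iteration count, time step, and edge threshold are illustrative values, and this is not the paper's fractional scheme.

    ```python
    # Classical Perona-Malik anisotropic diffusion: smooth flat regions, keep edges.
    import numpy as np

    def anisotropic_diffusion(img, n_iter=20, kappa=30.0, dt=0.2):
        u = img.astype(float).copy()
        for _ in range(n_iter):
            # differences toward the four neighbours
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u,  1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u,  1, axis=1) - u
            # conduction c(|grad u|) is ~1 in flat areas and ~0 across strong
            # edges, so smoothing happens inside regions while edges survive
            c = lambda d: np.exp(-(d / kappa) ** 2)
            u += dt * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
        return u

    noisy = np.random.default_rng(0).normal(0, 20, (64, 64)) + 100
    print(anisotropic_diffusion(noisy).std(), "<", noisy.std())  # noise reduced
    ```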

  • Bill Gates was once quoted as saying, "You take away our top 20 employees and we [Microsoft] become a mediocre company". This statement drew our attention to one of the major problems at workplaces: employee attrition. Employee attrition (turnover) causes a significant cost to any organization, which may later affect its overall efficiency. As per CompData surveys, total turnover has increased from 15.1 percent to 18.5 percent over the past five years. For any organization, finding a well-trained and experienced employee is a complex task, but it is even more complex to replace such employees. This not only increases the Human Resource (HR) cost significantly but also impacts the market value of an organization. Despite these facts and the ground reality, the problem has received little attention in the literature, which has seeded many misconceptions between HR and employees. Therefore, the aim of this paper is to provide a framework for predicting employee churn by analyzing employees' precise behaviors and attributes using classification techniques.

  • Image classification techniques analyze an image and its features to unmask the underlying facts. Age estimation from faces is an area of prime research relevance that poses several challenges because of its rapidly growing range of real-world applications. In this paper, a classifier is built that scans the upper-body image, i.e. the facial image of a person, to classify the image into an age group, namely child, adult, or old. The purpose of the research is to detect underage people in order to enhance security systems. Taking into account geometric features along with wrinkle features, underage subjects are detected using three classification techniques, namely KNN (k-nearest neighbor), ANN (Artificial Neural Network), and SVM (Support Vector Machine).

  • Owing to changing climatic conditions, crops often get affected, as a result of which agricultural yield decreases drastically. If conditions worsen, crops may become vulnerable to infections caused by fungal, bacterial, viral, and other disease-causing agents. Plant loss can be prevented through real-time identification of plant diseases. Our proposed model provides an automatic method to determine leaf disease in a plant using a trained dataset of pomegranate leaf images. The test set is used to check whether an image entered into the system contains disease or not: if not, the leaf is considered healthy; otherwise, the disease of that leaf is predicted, and prevention measures for the plant disease are proposed automatically. Further, the disease-causing agent is identified through image analysis performed on images certified by biologists and scientists. The model reports the accuracy of the results generated using different cluster sizes, optimized experimentally, with image segmentation, and it provides useful estimation and prediction of the disease-causing agent along with the necessary precautions.