Technical Papers
The growth of e-commerce has led to an abundant growth of opinions on the web, thereby necessitating the task of Opinion Summarization, which in turn has great commercial significance. Feature extraction in Opinion Summarization is crucial, as selection of relevant features reduces the feature space and thereby the complexity of the classification task. The paper suggests an extensive pre-processing technique and an algorithm for extracting features from reviews and blogs. The proposed feature extraction technique is unsupervised, automated and domain independent. The improved effectiveness of the proposed approach is demonstrated on a real-life dataset crawled from several review websites such as CNET and Amazon.
The rapid growth and development of lifestyles as well as communication media in recent years has brought considerable change, and the present era is called the information age. In this information age, many changes have taken place in all domains, such as the speed, data capacity and distance of communication; hacking and tracing of communicating parties are also in place. News channels and cameras play a major role in communication. Advances in TV news channels make recent information from around the world available almost instantaneously, largely through the moving text line associated with a TV news channel; however, some viewers treat this moving text as a disturbance while watching the video. We therefore plan to detect and extract the moving text from news video using a hybrid technique combining edge detection and connected-component detection.
Nowadays, monitoring objects (human beings, animals, buildings, vehicles, etc.) in a video is a major issue in areas such as airports, banks and military installations. Classification of objects in a video involves the processes of searching, retrieval and indexing, implemented by extracting features such as color, texture and shape; this approach is difficult and has limitations in various situations. Techniques such as edge detection using various filters and edge-detection operators, CBIR (Content-Based Image Retrieval) and Bag-of-Visual-Words are used to classify videos into fixed broad classes, which assists searching and indexing using semantic keywords. The proposed approach extracts three types of features: color features using RGB and HSV histograms; structure features using HoG, DHoG, Harris, Prewitt and LoG operators; and texture features using LBP, Fourier and wavelet transforms. Additionally, BoV is used for improving classification performance and accuracy. SVM, Bagging, Boosting and J48 classifiers are used for classification.
Nowadays, large amounts of data are generated by various stakeholders, such as data from sensors and satellites regarding environment and climate, data from social networking sites (messages, tweets, photos, videos) and data from telecommunications. This big data, if processed in real time, helps decision makers make timely decisions when an event occurs. When source data sets are large (in velocity, variety and veracity), traditional ETL (Extract, Transform, Load) is a time-consuming process. This paves the way to extend traditional data management techniques for extracting business value from big data. This paper extends the Hadoop framework to perform entity resolution in two phases. In Phase 1, MapReduce generates rules for matching two real-world objects based on their similarity: the greater the similarity, the more likely the objects refer to the same entity. Similarity is calculated using domain-dependent and domain-independent natural language processing measures. In Phase 2, these rules are used for matching stream data. Our proposed approach uses 13 semantic measures for resolving entities in stream data. Stream data such as tweets, messages and e-catalogues are used for testing the proposed system.
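As a hedged illustration of the similarity-based matching step described in the abstract above (not the paper's actual 13 semantic measures), the Python sketch below combines two common string-similarity measures, token-level Jaccard similarity and a normalized edit-distance similarity, into a simple match rule; the measure choice and the 0.8 threshold are illustrative assumptions.

# Illustrative sketch of similarity-based matching for entity resolution.
# The measures and the 0.8 threshold are assumptions, not the paper's exact rules.

def jaccard_similarity(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """Average of Jaccard and normalized edit-distance similarity."""
    norm_edit = 1.0 - edit_distance(a.lower(), b.lower()) / max(len(a), len(b), 1)
    return 0.5 * (jaccard_similarity(a, b) + norm_edit)

def is_match(a: str, b: str, threshold: float = 0.8) -> bool:
    return similarity(a, b) >= threshold

if __name__ == "__main__":
    print(is_match("Samsung Galaxy S5 16GB", "Galaxy S5 16 GB by Samsung"))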
Ciphertext-policy attribute-based encryption (CP-ABE) is becoming very significant in distributed computing environments. A distributed computing environment is a very popular way of storing and distributing information, because it is easy to store information at one place and distribute it from there. In a CP-ABE scheme, every piece of information is encrypted under an access structure, which defines who can access the encrypted information. However, in the CP-ABE scheme the access structure is stored along with the encrypted information, and it can sometimes reveal a great deal about the plaintext and the decryptor, so an adversary can learn many things. It is therefore desirable to have a CP-ABE scheme in which the access structure is not public, i.e., it is hidden from users. We propose a scheme in which users do not store the access structure with the encrypted information (ciphertext). We use composite-order bilinear groups in our scheme.
A Vehicular Ad-Hoc Network is an ad hoc network connecting mobile vehicles; its dynamic topology leads to low link stability in terms of connectivity. Routing is the critical concern in such ad hoc networks and is handled via various routing solutions. This paper focuses on improving I-AODV routing by considering both V2V and V2I communication. I-AODV (Infrastructure-based AODV) is a routing protocol that facilitates communication among vehicles through RSUs and is broadcast in nature. This paper discusses prediction-based multicasting, which helps reduce delay and improves other performance metrics. Broadcasting does not utilize network resources efficiently, which makes the network inefficient and reduces its throughput; applying multicasting ensures proper utilization of resources, while the prediction technique helps reduce localization overhead. The technique improves network performance metrics such as end-to-end delay, throughput, fuel emission, packet overhead and Packet Delivery Ratio.
The advancement of wireless communication has led researchers to conceive and develop the idea of vehicular networks, also known as vehicular ad hoc networks (VANETs). In a Sybil attack, the network is destabilized by a malicious node which creates innumerable fraudulent identities in order to disrupt network protocols. In this paper, a novel technique is proposed to detect and isolate Sybil attacks on vehicles, improving network proficiency. It works in two phases. In the first phase, the RSU registers the nodes by verifying the credentials they offer. If they are successfully verified, the second phase starts and allots identification to the vehicles; the RSU then gathers information from neighboring nodes, defines a threshold speed limit for them and verifies whether the threshold value exceeds the defined speed limit. The multiple identities generated by a Sybil attack are very harmful to the network and can be misused to flood wrong information over the network. Simulation results show that the proposed detection technique increases the possibility of detection and reduces the percentage of Sybil attacks.
Heuristic redundancy optimization with the ST (source-terminal) reliability measure for complex networks, an extensively studied problem, has been modified to heuristic redundancy optimization with the SAT (source-to-all-terminal) reliability measure.
Computer-based automated systems are among the important diagnostic tools in the medical field. Diabetic retinopathy is an eye disorder in which red lesions due to blood leakage can be spotted on the retinal surface. The disease is commonly observed in long-term diabetic patients, and ignoring it can result in permanent blindness. The early-stage signs of diabetic retinopathy, viz. microaneurysms and hemorrhages, are called red lesions. This paper presents a unique methodology for automatic detection of red lesions in fundus images. The proposed methodology employs a modified matched-filtering approach for extraction of the retinal vasculature and detection of candidate lesions. Features of all candidate lesions are extracted and used to train a Support Vector Machine classifier, which in turn classifies each input image object into the lesion or non-lesion category. The method is tested on 89 fundus images from the DIARETDB1 database. The proposed algorithm achieves a sensitivity of 96.42%, specificity of 100% and accuracy of 96.62%.
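The classification step in the abstract above, training an SVM on candidate-lesion features, can be sketched generically with scikit-learn; the feature values, labels and RBF-kernel settings below are illustrative assumptions and not the paper's actual features or parameters.

# Hedged sketch: SVM classification of candidate-lesion feature vectors.
# Feature values, labels and kernel settings are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Each row describes one candidate lesion by, e.g., area, mean intensity,
# compactness and matched-filter response (hypothetical features).
X_train = np.array([[12.0, 0.31, 0.85, 0.42],
                    [45.0, 0.12, 0.40, 0.10],
                    [ 9.0, 0.35, 0.90, 0.51],
                    [60.0, 0.08, 0.33, 0.07]])
y_train = np.array([1, 0, 1, 0])          # 1 = lesion, 0 = non-lesion

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)

candidate = np.array([[11.0, 0.30, 0.82, 0.45]])
print("lesion" if clf.predict(candidate)[0] == 1 else "non-lesion")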
Ad hoc sensing and data routing are important research directions in sensor networks. Security work in this area has focused primarily on denial of communication at the medium access control or routing levels. In this paper, attacks that target the routing protocol layer are referred to as resource depletion attacks; such an attack persistently disables the network by drastically draining the nodes' battery power. Protocols have been established to protect against DoS attacks, but perfect protection is not possible, and the Vampire attack is one such DoS attack. Vampire attacks rely on characteristics shared by many well-known classes of routing protocols and are not specific to any particular protocol; they can be executed by even a single malicious intruder sending only protocol-compliant messages, which makes them destructive and very hard to detect. In the worst case, an individual attacker can increase the energy usage of the network by a factor of O(N), where N is the number of nodes in the network. A new proof-of-concept protocol is discussed as a method to mitigate these kinds of attacks by limiting the damage caused by Vampires during packet forwarding. Approaches based on PLGP-a, which identifies malicious attacks, are also discussed to diminish Vampire attacks.
An effective content-based image retrieval system is essential to locate required medical images in huge databases. This paper proposes an effective approach to improve retrieval effectiveness. The proposed scheme first detects the boundary of the image based on an intensity gradient vector image model, and then describes the content inside the boundary with multiple features: Gabor features, Local Line Binary Pattern and moment-based features. The Euclidean distance is used as the similarity measure, and the resulting distances are sorted and ranked. As a result, the recall rate is enormously improved and the error rate is decreased compared to existing retrieval systems.
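The retrieval step described above reduces to computing Euclidean distances between the query's feature vector and each database vector and ranking the results; a minimal sketch follows, with made-up feature vectors standing in for the concatenated Gabor, Local Line Binary Pattern and moment features.

# Minimal sketch of Euclidean-distance ranking for content-based retrieval.
# Feature vectors here are made up; real vectors would concatenate Gabor,
# local line binary pattern and moment-based features.
import numpy as np

database = {
    "img_001": np.array([0.12, 0.80, 0.33, 0.05]),
    "img_002": np.array([0.90, 0.10, 0.45, 0.62]),
    "img_003": np.array([0.15, 0.75, 0.30, 0.10]),
}
query = np.array([0.14, 0.78, 0.31, 0.07])

# Euclidean distance of the query to every database image, sorted ascending.
ranking = sorted((np.linalg.norm(query - feat), name)
                 for name, feat in database.items())
for dist, name in ranking:
    print(f"{name}: {dist:.4f}")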
The main purpose of optimizing relay coordination in an extensive electrical network is to enhance selectivity while reducing the fault-clearing time, thereby improving the reliability of the system. The relays are set to function properly under normal as well as abnormal conditions. In this paper, the main focus is to find the minimum Time Dial Setting (TDS) for relays connected in any configuration using the Flower Pollination Algorithm; the algorithm is compared with a Linear Programming technique. The algorithm has been implemented in MATLAB and tested on a radial feeder fed from one end as well as on a parallel feeder system. The innovative feature of the paper is the application of a nature-inspired algorithm to one of the major optimization problems in the field of power system protection.
Network-on-Chip (NoC) is a new approach for designing the communication subsystem among IP cores in a System-on-Chip (SoC). NoC applies networking theory and related methods to on-chip communication and brings notable improvements over conventional bus and crossbar interconnections. NoC offers a great improvement on issues such as scalability, productivity, power efficiency and the signal integrity challenges of complex SoC design. In a NoC, communication among different nodes is achieved by routing packets through a pre-designed network according to different routing algorithms; therefore, the architecture and the associated routing algorithm play an important role in the overall performance of a NoC. The technique presently used in a node is priority-based packet routing, which leads to packet stacking and in turn to performance degradation. This paper proposes a modified random arbiter combined with the deterministic XY routing algorithm to be used in the NoC router. In this method the router contains a random arbiter along with a priority encoder, which provides a fast way to transfer packets via a specific path between the nodes of the network without stacking. This in turn optimizes the packet storage area and avoids collisions, because the node arbiter services the packets randomly without any priority. In addition, this method ensures that a packet always reaches its destination through the shortest possible path without deadlock or livelock.
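Deterministic XY routing, which the proposal above combines with a random arbiter, routes a packet along the X dimension until the destination column is reached and then along the Y dimension; this fixed dimension order is what guarantees deadlock freedom on a 2-D mesh. A minimal sketch of that baseline routing decision (the random-arbiter part is omitted):

# Minimal sketch of deterministic XY routing on a 2-D mesh NoC.
# Route along X until the destination column is reached, then along Y;
# this ordering is what makes XY routing deadlock-free on a mesh.

def xy_route(src, dst):
    """Return the list of (x, y) hops from src to dst under XY routing."""
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                      # X dimension first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                      # then Y dimension
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

if __name__ == "__main__":
    print(xy_route((0, 0), (3, 2)))
    # [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]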
In the twenty-first century, wireless communication is used as an essential means of data transfer, video streaming and multimedia applications, and new technologies are being explored to support applications suited to the growing demand for data transfer on smartphones. To satisfy this demand, two-hop Orthogonal Frequency Division Multiple Access (OFDMA) relay networks are used in combination with multicasting, which forms a promising communication model for many applications including data transfer. Prior work shows that association of relay networks and cooperation between them is a proper way of optimization. For effective use of the relay network for retransmission, a cross-layer optimization technique is applied in which OFDMA scheduling and a network coding mechanism are jointly incorporated. The aim of this paper is to understand how a wireless node automatically selects the channel in the presence of network coding and how OFDMA scheduling assigns network resources to it. We formulate the optimization problem for subcarriers with different channel gains and present a solution that dynamically assigns subchannels to relay nodes using a code-aware allocation scheme. Since the problem is NP-hard, a heuristic algorithm is applied to implement it. The proposed algorithm combines scheduling and coding at the relay, thereby increasing the throughput and reliability of the network and decreasing the overhead of the coordinated nodes.
A sincere and ingenious effort has been made towards addressing the localization problem in the modern world by marrying the primary and well-established inter-vehicular communication assisted localization (IVCAL) method using GPS to the newly devised previous path detection (PPD) technology. The outcome is a new, robust methodology combining the standalone benefits of the two systems as absolute and relative location methods, respectively. The proposed system includes GPS, a Kalman filter, IVCAL and PPD as basic building blocks, and its architecture is presented in a stepwise manner. The pitfalls of earlier localization methods using GPS, and the suggested improvements using PPD, have been demonstrated with the help of a model framework developed within the MATLAB simulation environment. Results are presented for an assumed localization problem similar to one reported in the literature, constructed so as to challenge and effectively test the basic capabilities of the proposed system.
Cognitive Radio Networks (CRNs) implement dynamic spectrum sensing and management techniques to address the underutilization of the limited available spectrum in wireless networks. Both dynamic and opportunistic spectrum sensing techniques are frequently used by cognitive radios (CRs) to identify spectral holes in the licensed spectrum of the primary users (PUs) and allocate these bands to the unlicensed secondary users (SUs). In this paper we present a state-based probabilistic sensing model for primary and secondary user nodes to access the communication channel. Simulations are performed to measure the uniform and cumulative distribution functions of the degree of the primary nodes, and the channel-access probabilities for the secondary nodes.
As technology improves, it is possible to integrate a larger number of transistors on a single die, which means any complex system can be designed on a single chip. Networking in such a system is much more difficult than in less complex systems that use switching and bus techniques. To overcome this, a special networking technique called Network on Chip is used. In this paper, a technique is proposed to develop a platform-level design that receives error-free packets, which improves the performance of the Network on Chip. By receiving error-free packets, the network does not need to check the packet again for correctness, thereby reducing the time taken for a packet to travel from source to destination and increasing efficiency. This helps improve performance, since erroneous packets are eliminated at the platform level before being sent to the cluster level. The packets are stored in RAM and classified with the help of a packet classifier, which sends the packets to the respective cluster and node agents at the lower hierarchical level. This improves performance by keeping erroneous packets out of the network.
A magnetic field affects charge transport in a quantum dot. The grain (quantum dot) I-V characteristics change under a magnetic field, and the extent of variation in the quantum dot I-V characteristics is a function of the magnetic field strength and the size and type of the quantum dot. We present a study of the I-V characteristics of the grain under non-equilibrium and steady-state conditions in a magnetic field, using a non-equilibrium Green's function based technique for the computations.
Green computing is the environmentally responsible use of computers and related resources. Such practices include the implementation of energy-efficient computing solutions. Multi-core and General-Purpose Graphics Processing Unit (GPGPU) computing has become the trend for high-performance and energy-efficient processing. A video processing technique like moving object detection, which is a computationally intensive task, can be made to exploit the multi-core architecture to extract information more efficiently. Implementing moving object detection on a GPU using CUDA or other platforms provides greater speedup and scalability in terms of input size. Different types of input videos, including videos with different types of effects, are tested. The output obtained from the system is compared with the ground truth to verify the correctness of the system.
Femtocell technology has gradually evolved in the field of wireless technology. Femtocell base stations (FBS) are installed mainly indoors to improve signal coverage for user equipment. Interference management is of prime concern in FBS installation. This work focuses on understanding Dynamic Assignment of Transmit Power for interference avoidance in co-tier femtocell networks. The drawback of dropping Femtocell User Equipment (FUE) has been identified, and a solution is achieved by introducing a handoff mechanism. Several handoff strategies have been categorized in a survey. The work proposes a handoff strategy appended to the existing Dynamic Assignment of Transmit Power system. The resulting formulation, Dynamic Assignment of Transmit Power-Handoff (DATP-H), improves the throughput of the existing system.
Vehicle crime has long been a menace to the world, with a gamut of felonies that go as far as terrorism. Fortunately, with the advent of Global Positioning System (GPS) technology, tracking a vehicle has become remarkably simpler. However, tracking multiple vehicles and scaling such a system to an enormous number of vehicles is a rigorous task, and the demand for such an inexpensive system is high. In this paper, a cloud-based multi-vehicle tracking and locking system is presented that lets the owner track any of their vehicles in real time. In the event of malicious activity such as burglary, the owner can lock the system. The paper also describes a cost-effective mechanism that informs the owner of instances of accidents or possible drink-driving cases.
Information assimilation and dissemination is a major task and challenge in the new era of technology, and there are many means of storing information. Underwater videos and images reveal much information when analyzed. The proposed method uses a Mixture of Gaussians as the basic model to segment moving objects under dynamic conditions. It removes the motion of underwater algae or plants, which exist as background, by checking the status of each foreground pixel in each frame and deciding in the post-processing stage whether it should be present in the output. Finally, the output is compared with a validated ground truth.
Airplane cockpit security is a crucial function of flight management. A key aspect of cockpit security is the identification of the persons occupying the pilot and co-pilot seats. Existing biometric techniques have certain drawbacks, and this paper presents a novel method to identify the person sitting in a seat, based on pressure patterns. In this paper, a standard machine learning tool is used to compute specific patterns in the pressure signature; these patterns are compared with existing patterns in a database to identify the correct person.
This paper deals with bimodal image segmentation with an active shape model. The method has been applied to a variety of segmentation problems where considerable information about the shape is available. The implementation is discussed on a set of both synthetic and natural bimodal images.
A wireless mobile ad hoc network is an infrastructure-less network consisting of equally distributed, self-configuring mobile nodes. Secure access to these mobile nodes is a major issue, since such devices are widely used in day-to-day life owing to their diverse capabilities, such as online transaction processing. Designing a reliable authentication technique for users of these mobile nodes, with minimum delay incurred in the authentication process, is a vital and challenging task, so that only legitimate users can access their personal data and communicate with the other mobile devices in the network. In this paper we present an approach for authenticating mobile users with minimum time delay in the authentication process, explained through a scenario of setting up a call session during an emergency; unlike traditional techniques, it reduces the average delay caused by setting up a call session after authenticating the user. Performance evaluation indicates that this approach achieves reliable security for nodes with minimum time overhead.
This paper describes an experimental setup for generating a database of model aircraft images. The database thus obtained is useful in validating image processing algorithms used in aircraft image recognition. The physical models are prepared from three-dimensional (3D) Computer-Aided Design (CAD) models of aircraft by using a Rapid Prototyping (RP) Fused Deposition Modeling (FDM) machine, with Acrylonitrile Butadiene Styrene (ABS) plastic as the material. Five types of models are printed for the sample database, viz. Mirage 2000, P51 Mustang, F-16 Fighting Falcon, G1-Fokker and MIG25-F. The experimental setup consists of an AT89C2051 microcontroller-based 3D-movement controlling mechanism for capturing images of the aircraft models. Three stepper motors are used in the system to simulate the yaw, pitch and roll movements of the aircraft model; they can be rotated clockwise or anticlockwise independently to any desired angular position. A program has been developed in the assembly language of the AT89C2051 microcontroller using Keil software to activate the stepper motors. A digital camera mounted at a predefined location captures the 3D maneuvering of the aircraft models. Finally, both phase correlation and colour-based image detection algorithms are validated using this experimental setup.
An effective mechanism to improve the performance of a computing device is to include multiple processing units on a single integrated die. To exploit the performance gain of this mechanism, developing parallel programs is necessary. However, many existing programs were developed for serial execution, and manually redesigning all such programs is tedious; hence, automatic parallelization of existing serial programs is advantageous. One way to execute programs in parallel is to make use of parallel computing platforms, and with a myriad of such platforms available, abstracting them from the developer is propitious. In this paper we propose an algorithm capable of detecting potential `for' loops in C code that can be parallelized using the OpenMP platform. The proposed algorithm ensures the correctness of the program and performs the required parallelization without post-execution analysis, which avoids both executing the code and monitoring the resources accessed by the program.
Cloud data centers have to accommodate many users with isolated and independent networks in a distributed environment to support multi-tenancy and integrity. User applications are stored separately in virtual networks. To support huge network traffic, data center networks require software to turn physically connected devices into virtual networks. Software-defined networking is a new paradigm that makes networks easily programmable and helps in controlling and managing virtual network devices; with it, flow decisions are made based upon real-time analysis of network consumption statistics. Managing these virtual networks is the real challenge for network administrators. In this paper, we propose a novel network management approach between the controller and virtual switches that provides QoS for virtual LANs in a distributed cloud environment. The approach provides a protocol that can be deployed in cloud data center network environments using OpenFlow-based switches. Our approach delivers two features: dynamic network configuration and a virtual management protocol between the controller and Open vSwitch. This technique is very useful for cloud network administrators to quickly provide better network services to multi-tenant users in the cloud data center.
A wireless sensor network (WSN) is considered a highly trusted technology because of its wide range of applications in fields such as healthcare, industry, military and agriculture. A WSN consists of several sensor nodes, each having the capacity to sense data, process the sensed data and communicate the processed data. Usually, sensor nodes are densely and randomly deployed in the region of interest. This kind of deployment leads to the generation of an enormous amount of redundant sensor data, and routing such redundant data consumes more energy and saturates the network resources. Hence, data fusion is used to reduce redundant transmissions in the network by fusing redundant data packets, so that the network lifetime is enhanced. Different data fusion techniques perform fusion at a single level or at two levels. In this paper we propose a multilevel hierarchical data aggregation technique which handles redundant transmissions in an efficient manner.
The rapid increase in demand for scientific, business and web applications has led to large-scale computation. Cloud computing has emerged as a scalable, reliable, affordable and flexible platform for such applications. Managing these applications requires proper load balancing and scheduling techniques, which differ from the algorithms used in distributed computing, mainly because of the high scalability and high availability of the cloud environment. A load-balancing algorithm based on the principles of time scheduling and priority is presented in this paper. The approach divides time into multiple slices and allocates each process to a particular time interval based on priority. The processor serves the user request within the allotted time slot; at the end of the time slice, the next queued user request is ready for execution. A user exits the queue upon completion of the request; otherwise the user waits for the next slot. As the waiting time increases, the time slot that the user request gets on the virtual machine increases, which reduces the overhead of context switching.
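The scheduling policy sketched in the abstract above, where requests are served for one time slice each in priority order and re-queued if unfinished, can be illustrated with a small simulation; the request set, priority values and slice length below are illustrative assumptions and omit the adaptive slot growth mentioned in the abstract.

# Hedged sketch of priority-based time-slice scheduling of user requests.
# Requests, priorities and the slice length are illustrative assumptions.
import heapq

def schedule(requests, time_slice=2):
    """requests: (priority, name, remaining_time); lower priority value = served first."""
    heap = list(requests)
    heapq.heapify(heap)
    clock = 0
    while heap:
        priority, name, remaining = heapq.heappop(heap)
        served = min(time_slice, remaining)
        clock += served
        remaining -= served
        if remaining > 0:
            # Unfinished request waits for its next slot.
            heapq.heappush(heap, (priority, name, remaining))
            print(f"t={clock}: {name} used its slice, {remaining} left")
        else:
            print(f"t={clock}: {name} completed and exits the queue")

schedule([(1, "req_A", 5), (2, "req_B", 3), (1, "req_C", 2)])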
The emerging technological demands of users call for an expanding service model which avoids the problems of purchasing and maintaining IT infrastructure and supports computation-intensive services. This has led to the development of a new computing model termed cloud computing. In cloud computing, the computing resources are distributed in various data centers worldwide and are offered to customers on demand on a pay-per-usage basis. Currently, due to increased usage of the cloud, there is a tremendous increase in workload. The uneven distribution of load among the servers results in server overloading, may lead to server crashes and affects performance. Cloud computing service providers can attract customers and maximize their profit by providing Quality of Service (QoS). Providing both QoS and load balancing among the servers is one of the most challenging research issues. Hence, in this paper, a framework is designed to offer both QoS and load balancing among the servers in the cloud. This paper proposes a two-stage scheduling algorithm in which servers with different processing power are grouped into different clusters. In the first stage, a Service Level Agreement (SLA) based scheduling algorithm determines the priority of the tasks and assigns the tasks to the respective cluster. In the second stage, an Idle-Server Monitoring algorithm balances the load among the servers within each cluster. The proposed algorithm uses response time as the QoS parameter and is implemented using the CloudSim simulator. Experimental results show that our algorithm provides better response time, waiting time, effective resource utilization and load balancing among the servers as compared to other existing algorithms.
Natural Language Processing (NLP) involves many phases, of which a significant one is Word Sense Disambiguation (WSD). WSD includes techniques for identifying a suitable meaning of words and sentences in a particular context by applying various computational procedures. WSD is an Artificial Intelligence problem that requires resolving the ambiguity of words, and it is essential for many NLP applications such as Machine Translation, Information Retrieval and Information Extraction. WSD techniques are mainly categorized into knowledge-based approaches, machine learning based approaches and hybrid approaches. This study discusses the assessment of WSD systems and compares different WSD approaches in the context of Indian languages.
Vehicular Ad hoc NETworks (VANETs), a special category of Mobile Ad hoc Networks (MANETs), are networks formed by vehicles which help the vehicles communicate with one another. The most challenging mode of communication in VANETs is video, which can be used to give faster and clearer information to the end users in vehicles. The transmission of video streams is prevalent in VANET applications such as infotainment and safety message dissemination. As video streams are normally large and expected to meet strict deadlines, the delay and jitter parameters play a vital role in maintaining streaming quality, and one of the popular techniques used to reduce these parameters is network coding. The mobility of the vehicles influences the network characteristics to a large extent. This paper analyses the delay and jitter parameters in VANETs by simulating network-coded video streams transmitted between vehicles that are not in range of each other, under varying mobility scenarios.
Degraded character recognition is one of the most challenging topics in the field of Kannada character recognition. Degraded characters which are broken and deformed have missing features and are difficult for any recognition method, so rebuilding the degraded character is very important for better recognition. This paper proposes a novel method to rebuild broken characters: the characters are thinned, the endpoints of the lines are obtained, and the line segments are effectively rebuilt so as to preserve the degraded character. Experimental results are presented to establish the efficiency of the method.
Energy conservation in Wireless Sensor Networks (WSN) is of great concern to modern-day researchers and has led to numerous research studies. However, despite the various energy-efficient routing protocols for WSNs, the energy of a sensor node is exhausted quickly, which in turn can lead to serious problems. To address this problem of energy depletion, a technique to replenish sensor node energy is proposed: a centralized energy accumulator node transmits energy to the nearest nodes through intermediate hops via radio frequency using the Energy Reconciling Medium Access Protocol (ERMAC). Accounting for the unevenness of energy consumption, the protocol can efficiently adapt itself to extend the lifetime of the sensor network. Simulation results show the stability of the wireless sensor network with ERMAC and the energy harvesting rate of a sensor node.
Cognitive Radio is an emerging research area in communication, and the first and foremost challenge for researchers in bringing this technology into implementation is the estimation of the spectral holes of a wideband spectrum. Spectral estimation techniques therefore play a vital role in Cognitive Radio. In this paper we propose a novel non-parametric spectral estimation technique for better spectral efficiency of the signal with minimized power errors in the estimation. The ERLS technique, a combination of a wavelet algorithm and an artificial neural network (ANN), is introduced for better power spectral estimation. The wavelet algorithm is used to extract the frequency components of the power signal; the neural network then determines the power error signals. As a result, the complexity and computational time of spectral estimation are reduced.
The Transmission Control Protocol (TCP) is a commonly used communication protocol, and congestion control is one of the hardest problems in robust networked systems. Throughput is a measured performance metric in all communication systems, and many TCP variants play an important role in controlling congestion. TCP New-Reno works comparatively better when congestion in the network is high, but its data rate is always constant, while HSTCP works optimally with a scaled data rate. A mode of switching from HSTCP to New-Reno and from New-Reno to HSTCP, depending upon the number of users in the network, is called Switching TCP (STCP). The Switching TCP variant avoids congestion and also increases the data rate in ad hoc networked systems.
As wireless sensor networks evolve, the need for high-quality and energy-efficient ways of using sensors becomes more important. This paper presents an inventive wireless sensor network based on selecting a better cluster head from among the sensor nodes. Communication in the network happens through the cluster head, so the battery power of the cluster head needs to be high. This is achieved by dynamic elimination of faraway sensors; at the same time, addition of new sensors for servicing is also possible.
In this paper, we present detection of face parts such as eyes, nose and mouth using the shared-memory programming model of OpenMP. Task parallelism is exploited across multiple threads of a core, where each thread is assigned a different, independent task. The existing Haar classifier algorithm has been used for face part detection: the face is first detected and localized, and then the parts are detected and localized. The proposed face part detection results in a reduction in execution time compared to sequential execution and an increase in speedup.
The adoption of optical networks is gaining pace to provide internet connectivity over larger geographic regions. Such systems have proved to offer effective utilization, as photons carry massive bundles of data packets, something which cannot be done in the conventional networking systems that exist today. A significant amount of research has been done to enhance the performance of optical networks, yet there is still further scope for improvement. Hence, we propose a model that aims to enhance signal quality in optical networks during peak traffic conditions. The formulation of the model is based on investigating channel power and signal noise as the prime attributes of signal quality in an optical network. Incorporating the potential features of ROADM (Reconfigurable Optical Add-Drop Multiplexer), the proposed system has been simulated in MATLAB under multi-channel conditions and a power constraint, and the results show that the proposed model outperforms the existing research benchmark.
This paper presents 3rd Generation Partnership Project-Long Term Evolution (3GPP-LTE) [10] HandOver (HO) administration to minimize the target Femto Access Point (FAP) list during macro-to-femto and femto-to-femto handover. Handover is the procedure that transfers an ongoing call from one cell to another as users move through the coverage area of a cellular system. A femtocell covers a small area (15 to 30 meters), and consequently handover between macrocell-to-femtocell and femtocell-to-femtocell will depend on the speed of the car (mobile femtocell users) or the User Equipment (UE). During the handover procedure, the femto decides cell selection using the signal strength and capacity of the target femtocell. Femto-to-macro HO has two main failure scenarios, i.e., Too Late HO (TLH) and Too Early HO (TEH): TLH results in Failure in Radio Link (FRL), while TEH causes ping-pong handovers or FRL. In this paper, we present a handover management scheme for cognitive femtocell networks that reduces the target FAP list in order to avoid overloaded femtocells and also reduces the number of handovers.
Automation testing is efficient and less time consuming than manual testing, and it does not require human intervention to execute the test steps. Test automation tools remarkably reduce human effort and errors; they ease the procedure and are faster than manual testing. Manual testing delays the release, which affects the financial growth of the company. The limitations of manual testing are overcome using automation testing, which can cover a wider range of testing possibilities. An Automation Procedure (AP) is a process executed before starting automation testing to improve its speed and efficiency. In this paper, a unique automation procedure is defined which is mainly used for upgrading remote network devices with software images, called software builds. The AP is developed using Python and provides an efficient method for automation testing. As the AP requires minimal input and less processing time, it reduces the time consumed and the human effort involved compared with manual testing.
The Hierarchical Chinese Postman Problem (HCPP) is a special type of Chinese Postman Problem (CPP). The aim is to find a shortest tour that traverses each edge of a given graph at least once, with the constraint that the arcs are partitioned into classes and a precedence relation orders the classes according to priority. Different forms of the HCPP appear in real-life applications such as snow plowing, winter gritting and street sweeping. The problem is solvable in polynomial time if the ordering relation is linear and each class is connected; Dror et al. (1987) presented an algorithm with time complexity O(kn^5). The CPP provides a lower bound for the HCPP. We give an alternative approach that uses Kruskal's method to reduce the number of edges in the graph, with time complexity O(k^2 n^2), where k is the number of layers in the graph and n is the number of nodes. The suggested Kruskal-based HCPP solution gives an average 21.64% improvement compared to the simple HCPP, and an average 13.35% improvement over the CPP when the number of hierarchies is less than 3 and the number of edges in the graph is less than 10.
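Kruskal's method, used above to reduce the number of edges, greedily adds the cheapest edges that do not form a cycle, with a union-find structure detecting cycles. A hedged sketch of that generic step (not the full HCPP procedure) on a small made-up graph:

# Sketch of Kruskal's algorithm with union-find, as used to prune graph edges.
# The example graph is made up; edge format is (weight, u, v).

def kruskal(n, edges):
    """Return the minimum-spanning-forest edges of an n-node graph."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    chosen = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                        # edge joins two components, keep it
            parent[ru] = rv
            chosen.append((w, u, v))
    return chosen

edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3), (5, 1, 3)]
print(kruskal(4, edges))   # [(1, 1, 2), (2, 2, 3), (3, 0, 2)]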
Dynamic Partial Reconfiguration (DPR) is an efficient technique in terms of power, resources and performance for achieving varying user requirements. In this paper we identify various modes of operation of an electronic stethoscope and study several existing implementations. We propose a dynamically reconfigurable design for a multi-mode electronic stethoscope with five operational modes, implemented on a Virtex-5 XC5VLX110T FF1136 device. Comparative results are presented between the proposed method and the normal FPGA implementation. The reconfiguration timings are also analyzed, and the results are compared with the ideal reconfiguration time of the Virtex-5. The proposed dynamically reconfigurable design achieves a reconfiguration time of 28.86 ms with a reconfiguration speed of 3.46 MB/sec.
The area of underwater acoustic sensor networks (UWASN) is developing expeditiously, as it plays a major role in various military and civilian applications such as disaster avoidance, diplomatic vigilance, seaward analysis, environmental monitoring, oceanographic data collection and mine reconnaissance. There are numerous challenges in UWASN posed by the underwater acoustic propagation channel, including slow propagation of acoustic waves, limited bandwidth, large and irregular propagation delay, ambient noise and transmission loss. This paper analyses the behaviour of an acoustic channel in terms of sound speed and transmission loss using MATLAB simulation. The parameters analyzed are the absorption coefficient, propagation delay, sound speed at various depths and transmission loss.
SQL Injection (SQLI) is a quotidian phenomenon in the field of network security. It is a potent and effective way of intruding into secured databases, thereby jeopardizing the confidentiality, integrity and availability of the information in them. SQL injection works by inserting malicious queries into legal queries, making it increasingly arduous for most detection systems to discern its occurrence. Hence, the need of the hour is to build a coherent and smart SQL injection detection system to make web applications safer and thus more reliable. Unlike the great majority of current detection tools and systems, which are deployed between the web server and the database server, the proposed system is deployed between the client and the web server, thereby shielding the web server from the inimical impact of the attack. The approach is novel and efficient in terms of detection, ranking and notification of the attack, and is designed using a pattern-matching algorithm based on the concept of hashing.
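A hashing-based pattern match of the kind mentioned above can be illustrated with a Rabin-Karp style rolling-hash scan of an incoming request against known injection signatures; the signatures, hash parameters and detection rule below are illustrative assumptions, not the paper's actual rule set.

# Hedged sketch: Rabin-Karp (rolling hash) scan of a request for known
# SQL-injection signatures. Signatures and hash parameters are illustrative.
BASE, MOD = 257, 1_000_000_007

def rk_contains(text: str, pattern: str) -> bool:
    """Return True if pattern occurs in text, using a rolling hash."""
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return m == 0
    high = pow(BASE, m - 1, MOD)                 # weight of the leading character
    p_hash = t_hash = 0
    for i in range(m):
        p_hash = (p_hash * BASE + ord(pattern[i])) % MOD
        t_hash = (t_hash * BASE + ord(text[i])) % MOD
    for i in range(n - m + 1):
        if p_hash == t_hash and text[i:i + m] == pattern:   # verify on hash hit
            return True
        if i + m < n:                            # roll the window one character right
            t_hash = ((t_hash - ord(text[i]) * high) * BASE + ord(text[i + m])) % MOD
    return False

SIGNATURES = ["' or '1'='1", "union select", "; drop table", "--"]

def looks_like_sqli(request: str) -> bool:
    req = request.lower()
    return any(rk_contains(req, sig) for sig in SIGNATURES)

print(looks_like_sqli("id=5' OR '1'='1"))   # True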
Security is the main concern in today's wireless network environment. However, cipher algorithms consume a lot of resources to provide the required confidentiality. Ad hoc wireless networks are one area where the devices are extremely resource constrained; therefore, computationally simple yet cryptographically strong cipher algorithms are required for such networks. In this paper a lightweight Quasigroup-based stream cipher is proposed and implemented on a Virtex-6 FPGA. It is also subjected to the NIST-STS test suite, and its performance is evaluated in MANETs using the GloMoSim simulator.
The vision of the Internet of Things (IoT) is to enable devices to collaborate with each other over the Internet. Multiple devices collaborating with each other have opened up opportunities in a multitude of areas, but have also presented a unique set of challenges in scaling the Internet, identifying devices, and designing power-efficient algorithms and communication protocols. Always-connected devices have access to private, sensitive information, and any breach in them is a huge security risk. The IoT environment is composed of hardware, software and middleware components, making it a complex system to manage and secure. The objective of this paper is to present the security challenges in IoT and recent developments in addressing them, through a comprehensive review of the literature.
Crop yield prediction is important as it can support decision makers in the agriculture sector. It also assists in identifying the attributes which significantly affect the crop yield. Wheat is one of the most widely grown crops around the world, and its accurate prediction can solve various problems related to wheat farming. This work analyses how the yield of a particular crop is determined by a few attributes. In this paper, Fuzzy Logic (FL), Adaptive Neuro-Fuzzy Inference System (ANFIS) and Multiple Linear Regression (MLR) models are used for predicting the yield of wheat, considering biomass, extractable soil water (esw), radiation and rain as input parameters. The outcome of the prediction models will assist agriculture agencies in providing farmers with valuable information on which factors contribute to high wheat yield. We compare the models based on RMSE values; results show that the ANFIS model performs better than the MLR and FL models, with a lower RMSE value.
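Of the three models compared above, the multiple linear regression baseline is simple to sketch: it fits yield as a linear combination of biomass, extractable soil water, radiation and rain. The numbers below are made-up placeholders, not the paper's dataset.

# Hedged sketch: multiple linear regression of wheat yield on four inputs.
# The training values are made-up placeholders, not the paper's dataset.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: biomass, extractable soil water (esw), radiation, rain
X = np.array([[9500, 120, 18.2, 310],
              [7200,  90, 16.5, 250],
              [8800, 110, 17.8, 290],
              [6400,  80, 15.9, 230]])
y = np.array([4.1, 3.0, 3.8, 2.6])        # yield in t/ha (placeholder values)

model = LinearRegression().fit(X, y)
pred = model.predict(np.array([[9000, 115, 17.5, 300]]))
rmse = np.sqrt(np.mean((model.predict(X) - y) ** 2))
print(f"predicted yield: {pred[0]:.2f} t/ha, training RMSE: {rmse:.3f}")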
This paper illustrates the application of a Hybrid Quantum-Inspired Evolutionary Algorithm (HQIEA) in evolving a variety of quantum equivalents of classical circuits. Taking the matrix corresponding to an oracle as input, the HQIEA designs classical circuits using quantum gates. Given a library of single-, two- and three-qubit quantum gates and the desired circuit matrix, the algorithm was able to successfully design half adder, full adder and binary-to-Gray conversion circuits, apart from circuits for two-, three- and four-qubit Boolean functions, using quantum gates. The circuits obtained compare favorably with earlier attempts in terms of the number of gates, ancillary inputs and garbage outputs required, and the time taken to evolve them.
Microarrays store the gene expression data of each cell and therefore contain thousands of features; each row represents a gene sample and each column a condition. In any classification task, selection of irrelevant or redundant features greatly reduces the performance of the classifier, so selecting an optimal number of significant features is a major challenge. Filter and wrapper approaches are mostly used for feature subset selection: filters are computationally faster, but the wrapper approach is more efficient in terms of classification accuracy. This paper proposes a hybrid approach combining filters and wrappers, which takes features from both. In the initial step, a feature subset is selected using filters, and the subset size is further reduced using the wrapper approach. The proposed method is tested on the Colon Tumor, B-Cell Lymphoma (DLBCL) and Leukemia datasets. Simulation results show that the hybrid method achieves higher accuracy with a smaller feature subset when compared to existing methods.
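The two-stage selection described above (a fast filter to shrink the gene set, then a wrapper to refine it) can be sketched with scikit-learn; the synthetic data, the ANOVA F-score filter and the RFE wrapper below are illustrative choices, not necessarily the paper's measures.

# Hedged sketch of hybrid feature selection: a filter stage (ANOVA F-score)
# followed by a wrapper stage (recursive feature elimination with an SVM).
# Data and the particular filter/wrapper are illustrative choices.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, RFE
from sklearn.svm import SVC

# Synthetic stand-in for a microarray: 60 samples, 500 "genes".
X, y = make_classification(n_samples=60, n_features=500, n_informative=10,
                           random_state=0)

# Stage 1 (filter): keep the 50 genes with the highest F-scores.
flt = SelectKBest(f_classif, k=50).fit(X, y)
X_filtered = flt.transform(X)

# Stage 2 (wrapper): recursively eliminate genes using a linear SVM
# until 10 remain.
wrapper = RFE(SVC(kernel="linear"), n_features_to_select=10).fit(X_filtered, y)
X_final = wrapper.transform(X_filtered)

print("features after filter:", X_filtered.shape[1])
print("features after wrapper:", X_final.shape[1])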
This research work is carried out on a GI/Geo/1/K queue with an N-threshold policy, in which the server resumes service when the number of consumers in the queue reaches a pre-specified value N. This article describes the modeling and a detailed analysis of the above model. The steady-state distribution probabilities are obtained at arbitrary and pre-arrival epochs, and the waiting-time distribution and various performance measures are studied. Finally, computational results are presented. This model can be applied to various networking and industrial automation systems.
In today's HPC world, numerous applications are executed at high speed, and multi-core architectures have provided wide scope for exploration towards any kind of high-end application. The paper discusses the ability to handle high-end input triggers by managing the core utilities at the lower end of the computer. Optimized memory-block utilization, debugging proliferation and data management at the configured-input level are discussed in the paper. The data proliferation framework model is named COMCO (Component for Compiler Optimization), and it elaborates how configured inputs are handled at the OS level. The paper explores handling configured inputs safely at the multi-thread level using the COMCO strategy.
In cognitive radio networks (CRNs), unlicensed nodes opportunistically access the unutilized spectrum that is owned by licensed nodes. This article presents a fuzzy logic based spectrum handoff and assignment approach that enhances channel utilization and avoids frequent channel switching. The approach considers interference as well as bit error and signal strength in order to find a quality channel. Based on the generated fuzzy patterns of channel quality, a neural network is trained to estimate the channel gain, which is used to select the efficient spectrum in a heterogeneous environment.
Software testing is a process used to examine the correctness and quality of computer software. It is important for pointing out the defects and errors made during software development and for ensuring that the developed software works well in real environments with different operating systems, devices, browsers and concurrent users. Making reliable software applications is becoming increasingly important, and thus software testing is indispensable. One of the main challenges in software testing is that many quality concerns cut across different modules in the software, so testing requires modifying the source code of various modules. In this paper, we propose the use of Aspect-Oriented Programming (AOP) to probe a program's modules without modifying their source code and to test components where bugs are suspected. Aspects in AOP can capture one or more execution points in the program using pointcuts, and advices can then be written to insert relevant code at such execution points for testing purposes. We examine the suitability of using aspects for writing testing code and perform various types of software testing.
Relay technologies have been studied and considered in the standardization process of next-generation MIMO systems such as LTE (Release 8), LTE-Advanced (Release 10), IEEE 802.16e and IEEE 802.16m. Presently, WiMAX (based on IEEE standards) and LTE are two rival technologies; they are technically very similar, but their deployments differ. This paper introduces and compares the physical-layer features of the two advanced technologies and gives a performance analysis of different modulation schemes (BPSK, QPSK and 16-QAM) in WiMAX and LTE.
Web services are emerging technologies that enable machine-to-machine communication and service reuse over the Web, providing an innovative mechanism to render services over diversified environments. Semantic web services are reusable, self-contained software components that can be used independently to fulfil needs or combined with other web services to carry out complex aggregations through web service composition. Many factors cause the methods used for web service composition to vary. Service categorization facilitates service retrieval through manual browsing of service repositories or through automatic discovery. Classification taxonomies are huge, comprising thousands of categories at multiple hierarchical levels. The Multi-Layer Perceptron Neural Network (MLPNN) is used for classification problems. In this work, a Multi-Layer Perceptron optimized with Tabu Search (MLP-TS) for learning is proposed. Experimental results demonstrate that the proposed MLP-TS outperforms the Multi-Layer Perceptron with Levenberg-Marquardt (MLP-LM) and the Multi-Layer Perceptron with Back Propagation (MLP-BPP) for web service classification.
This paper presents an HDL implementation of the Kalman filter. The Kalman filter is a mathematical tool which uses a sequence of noisy measurements taken over time to predict an unknown state vector. In this paper, the HDL implementation is done using a new method, the Chebyshev inversion method, whose aim is to reduce hardware and complexity. The Kalman filter involves very complex matrix calculations and matrix inversion, and performing matrix inversion in HDL (Hardware Description Language) using little hardware is difficult. The Chebyshev method is less complex, so the hardware for calculating the Kalman filter equations can be reduced. Simulation is done on a Virtex-6 FPGA (Field Programmable Gate Array).
The existing System-on-Chip (SoC) design will soon become a critical bottleneck in chip performance owing to its inability to scale its communication network effectively with decreasing feature sizes and increasing numbers of transistors. The Network-on-Chip (NoC) has been recognized as the next evolutionary step to tackle these issues by using an intelligent, common communication network between all the different components within the chip. In this paper we propose a new routing algorithm, the Adaptive Look Ahead algorithm, that combines a fully adaptive and a partially adaptive routing algorithm. The algorithm decides the next two hops within one node to allow quick packet transfer at the next node; hence the algorithm only periodically calculates the packet's route along the minimal path. Experimental results show that our proposed algorithm has lower latency and higher throughput than existing benchmarks.
In advanced intelligent transport systems, detecting vehicles and estimating vehicle density in a particular area have become very popular tasks. Background subtraction is identified in the literature as one of the best approaches for detecting vehicles with a static camera. An improved background subtraction model is adopted that works for real-time tracking and also handles shadow detection. In background subtraction, each pixel is updated with update equations. After background subtraction, a connected-component labeling technique is used to label the different objects so as to distinguish between them, with each region given a different label value. Moving vehicles are detected and the density of vehicles travelling within the camera's field of view is determined.
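A hedged OpenCV sketch of the pipeline described above: per-pixel background updating via a Gaussian-mixture subtractor, followed by connected-component labeling so that each detected vehicle region gets its own label. The video path and the minimum-area threshold are placeholders, and the subtractor is a stock OpenCV model rather than the paper's improved one.

# Hedged sketch: background subtraction + connected-component labeling
# for vehicle detection with a static camera (OpenCV). The video path and
# the minimum-area threshold are placeholders.
import cv2

cap = cv2.VideoCapture("traffic.avi")            # placeholder video file
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = subtractor.apply(frame)                 # per-pixel background update
    fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]  # drop shadow pixels (gray = 127)
    # Label each connected foreground region; count regions above an area threshold.
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(fg)
    vehicles = sum(1 for i in range(1, n_labels)
                   if stats[i, cv2.CC_STAT_AREA] > 500)
    print("vehicles in view:", vehicles)

cap.release()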
Bit-parallel string matching algorithms are the most efficient and the latest class of string matching algorithms. These algorithms use the intrinsic property of computers known as bit parallelism to perform bit operations in parallel within a single computer word. BNDM, TNDM and BNDMq are popular single-pattern bit-parallel string matching algorithms, whereas the Shift-OR and Shift-OR with q-grams algorithms are popular in multiple-pattern string matching. All these algorithms are restricted by a limitation on the pattern size: they do not work on patterns longer than the computer word. In this paper, we propose a generic method named WSR (Word Size Removal) for bit-parallel string matching. With the inclusion of WSR, these bit-parallel string matching algorithms can work on patterns longer than the computer word size. This paper presents the WSR method and all the modified algorithms.
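The Shift-OR algorithm mentioned above illustrates bit parallelism: each pattern position corresponds to one bit of a machine word, so the state update is a single shift and OR per text character. A minimal sketch follows, valid for patterns no longer than the word size, i.e., before the WSR method is applied.

# Minimal sketch of the Shift-OR bit-parallel matcher (patterns up to the
# word size; the WSR method in the paper lifts this length restriction).

def shift_or(text: str, pattern: str):
    """Yield start positions of all occurrences of pattern in text."""
    m = len(pattern)
    all_ones = (1 << m) - 1
    # B[c]: bit i is 0 where pattern[i] == c, 1 elsewhere.
    B = {}
    for i, c in enumerate(pattern):
        B[c] = B.get(c, all_ones) & ~(1 << i)
    D = all_ones                       # state: bit i = 0 iff prefix of length i+1 matches
    for pos, c in enumerate(text):
        D = ((D << 1) | B.get(c, all_ones)) & all_ones
        if D & (1 << (m - 1)) == 0:    # full pattern matched, ending at pos
            yield pos - m + 1

print(list(shift_or("abracadabra", "abra")))   # [0, 7]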
This paper proposes a new index and method for approximate string search in spatial databases. Specifically, the task of candidate generation is as follows: given a location name with wrong spelling, the system finds the locations in an OSM dataset which are most similar to the misspelled location name. An approximate solution is proposed using a log-linear model, defined as a conditional probability distribution over a corrected word and a rule set for the correction, conditioned on the wrong location name. An Aho-Corasick tree, referred to as the rule index, is used for storing and applying correction rules, and an efficient Aho-Corasick based algorithm is guaranteed to find the top-k candidates. Experiments on a large real OSM dataset demonstrate the accuracy of the proposed method over existing methods.
Mobile location-based services and GPS-enabled devices have gained increasing popularity, making spatial data outsourcing common over the past few years. There is an increasing trend in industry to store data on the cloud, to gain the benefit of its flexible infrastructure and affordable storage cost, in support of location-based applications. This article discusses the outsourced spatial database (OSDB) model and an efficient method, EX-VN Auth, which provides accurate and complete results. EX-VN Auth allows a client to verify the result set using neighborhood information derived from the underlying spatial dataset based on the Voronoi diagram. It supports basic spatial query types such as k nearest neighbor and range queries, as well as more advanced queries such as aggregate nearest neighbor, reverse k nearest neighbor, and spatial skyline queries. EX-VN Auth has been tested on real-world datasets using mobile devices (Android OS smartphones) as clients. Compared with Merkle hash tree approaches and VN-Auth, the experiments produce significantly smaller verification objects and higher data processing capability with lower search overhead.
Spyware is software that secretly gathers information about a computer's use and conveys that information to a third party. This paper proposes a click-based graphical CAPTCHA to counter spyware attacks. With traditional text-based CAPTCHAs, the user typically types a distorted string to solve the CAPTCHA; this input is captured by key loggers, where spyware can decode it easily. To overcome this, the click-based graphical CAPTCHA uses a different verification approach in which the user clicks on a sequence of images to solve the CAPTCHA, and that sequence is stored as pixel positions in a random predefined order. The paper also analyzes the proposed scheme in terms of usability, security and performance.
In BioWorld, a medical intelligent tutoring system, novice physicians are tasked with diagnosing virtual patient cases. Although we are often interested in considering whether learners diagnosed the case correctly or not, we cannot discount the actions that learners take to arrive at a final diagnosis. Thus, the consideration of the sequence of actions becomes important. In this preliminary study, we propose a line of research to investigate learner actions involved in diagnosing virtual patient cases using Hidden Markov Models.
Data clustering plays an important role in many areas such as pattern recognition, image segmentation, social networks and database anonymisation. Since most real-life data are imprecise by nature, many imprecision-based data clustering algorithms can be found in the literature, using individual imprecise models as well as their hybrids. Krishnapuram and Keller observed that the possibilistic approach to the basic clustering algorithms is more efficient, as it removes several drawbacks of those algorithms. This approach was used to develop possibilistic versions of the fuzzy, rough and rough-fuzzy C-Means algorithms. In this paper, we extend these algorithms further by proposing a possibilistic rough intuitionistic fuzzy C-Means algorithm (PRIFCM) and compare its efficiency with other possibilistic algorithms and with RIFCM. Experimental analysis is carried out on both numeric and image data. The DB and D indices are used for the comparison, which establishes the superiority of PRIFCM.
Residue Number Systems (RNS) are widely used in applications such as cryptoprocessor design and digital filters. For better performance of these RNS systems, the conversion module must be fast enough. This paper presents a new reverse converter architecture for a novel five-moduli-set RNS {2^n − 1, 2^n, 2^n + 1, 2^(n+1) − 1, 2^(n−1) − 1} for even values of n. It exploits the special properties of numbers of the form 2^n ± 1 and extends the dynamic range of the existing triple-moduli {2^n − 1, 2^n, 2^n + 1} based systems. The proposed moduli set has a dynamic range that can represent up to (5n − 1)-bit numbers, while keeping the moduli small enough and the converter efficient in terms of the computation required. The new five-moduli-set reverse converter design uses both the Chinese Remainder Theorem (CRT) and Mixed Radix Conversion (MRC) techniques.
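As background for the conversion step, a minimal Python sketch of reverse conversion by the Chinese Remainder Theorem is shown below; the example instantiates the moduli set above for n = 4, and the direct modular-inverse computation stands in for the paper's optimized hardware architecture.

```python
from math import prod

def crt_reverse_convert(residues, moduli):
    """Recover the integer X from its residues (plain CRT, software sketch)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m) = modular inverse
    return x % M

# Five-moduli set {2^n-1, 2^n, 2^n+1, 2^(n+1)-1, 2^(n-1)-1} with n = 4
n = 4
moduli = [2**n - 1, 2**n, 2**n + 1, 2**(n + 1) - 1, 2**(n - 1) - 1]
X = 123456
residues = [X % m for m in moduli]
print(crt_reverse_convert(residues, moduli))  # 123456
```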
Testing for Circulating Tumor Cells (CTCs) in Peripheral Blood (PB) is regarded as a significant diagnostic and promising microarray technology for breast cancer examination. A few earlier works have addressed the role of CTC detection in breast cancer, but developing a novel method for CTC identification remains difficult because hundreds of thousands of indicative genes are present. The main intention of this work is the identification of CTCs in PB during Breast Cancer (BC), with respect to Metastatic (MS), Non-Metastatic (NMS) and hybrid MS/NMS cases. The proposed method not only identifies CTCs in BC but also addresses gene selection by proposing a hybrid fuzzy online sequential Particle Swarm Genetic (PSG) kernel extreme learning machine, named FOP-KELM, for classification. The proposed FOP-KELM method calculates the mean value of each gene feature and compares it with the objective function of KELM to select and remove unimportant gene features. To reduce the fuzzy membership assumption value in ELM, it is optimized using the PSG algorithm. The impact of the features selected by FOP-KELM is investigated using a clustering method. To perform the classification task on the selected gene features, a novel Hierarchical Artificial Bee clustering algorithm (HABCA) is proposed. It distinguishes CTCs by separating tumor samples into a hierarchical tree structure in a top-down manner, where the distance between two tumor gene samples is determined using ABC. Clustering results are classified into MS, NMS, and combined MS and NMS.
In this era of technology, with increasing usage of the Internet, data security has become a major issue. Various cryptographic hash functions such as MD4, MD5, SHA-1 and SHA-2 have been defined to provide data security. In this paper we propose a new algorithm, TL-SMD (Two Layered-Secure Message Digest), for building a secure hash function that provides two levels of processing security. The construction of this algorithm uses several techniques, including a block cipher technique and a modified combination of the Merkle-Damgard construction and the fast wide-pipe construction. For computing the hash value from the input block, a combination of cipher block chaining (CBC) mode and electronic codebook (ECB) mode, with some modification, is used.
Digital transactions have permeated almost every sphere of activity in today's world. This increase in digital transactions has introduced additional and stringent requirements with regard to security and privacy. People reveal a great deal of personal information by compromising their private credentials during digital interactions. It is imperative that, in addition to securing online transactions, user credentials are also safeguarded. This has necessitated a rigorous and foolproof credential system with provision for anonymous and revocable identity management. Biometric identity management systems offer advantages over conventional knowledge-based and possession-based systems. Considerable research has been undertaken in the past to identify newer and more reliable biometrics for more efficient and secure identity management. Fusion of multiple biometrics to achieve better results is also an area of active research. However, making biometric credential systems revocable and anonymous without sacrificing efficiency and efficacy of detection still remains a challenge. This survey paper attempts to give an insight into the approaches that have been taken towards multimodal biometric fusion and into the various options that have been explored to make biometric authentication systems revocable and anonymous.
A database is a vast collection of data that helps us collect, retrieve, organize and manage data in an efficient and effective manner. Databases are critical assets: they store client details, financial information, personal files, company secrets and other data necessary for business. Today, people depend increasingly on corporate data for decision making, customer service management and supply chain management. Any loss, corruption or unavailability of data may seriously affect business performance. Database security should provide protected access to the contents of a database and should preserve the integrity, availability, consistency and quality of the data. This paper describes an architecture based on placing an elliptic curve cryptography (ECC) module inside the database management system (DBMS), just above the database cache. Using this method, only selected parts of the database are encrypted instead of the whole database. This architecture allows us to achieve very strong data security using ECC and to increase performance using the cache.
The paper proposes a novel method to analyze the correctness of the frequency correction loop in a 2G system, so that the mobile station synchronizes accurately to the base station. The setup contains an automatic frequency control (AFC) module to track and control the frequency/timing offset of the crystal. The main functionality of the AFC module is to estimate the optimum frequency correction value and minimize the frequency error of the system. It also contains a long term learning (LTL) module whose functionality is to estimate the crystal behaviour due to temperature and aging, and to maintain a learning database containing the frequency correction values for the crystal over the operating temperature range. A separate module is used to correct advance/forward time drift on the slave SIM when the oscillator is locked to the master SIM. Various COST 207 GSM/EDGE channel models for mobile radio communication are implemented to test the frequency correction loop.
Power-efficient, bandwidth-optimized and fault-tolerant sensor management for IoT in the Smart Home
Face recognition has currently reached a certain degree of maturity when operating under constrained environments. In real-time situations, however, system performance degrades sharply when handling variations such as illumination, occlusion, skin tone, cosmetics, image misalignment, age and pose inherent in the acquired face images. Hence understanding and eliminating the effects of each of these factors is crucial to any face recognition system. This paper studies the effect of variations in Eye Blink Strength (EBS) on a face image undergoing face recognition, thereby testing the efficiency of the face recognition algorithm. The study makes exclusive use of Brain Computer Interface (BCI) technology to detect eye blinks and to measure their corresponding EBS values using an Electroencephalograph (EEG) device. The face recognition algorithm under test was an amalgamation of Principal Component Analysis (PCA), Local Binary Pattern (LBP) based feature extraction and Support Vector Machine (SVM) based classification. EBS is assessed using an inexpensive, portable, non-invasive EEG device. The efficiency of the face recognition algorithm in withstanding eye blinks with varying EBS values for the given face images was determined. It was found that the proposed test-case generation methodology can effectively be used to evaluate other face recognition algorithms against varying eye blinks.
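A minimal scikit-learn/scikit-image sketch of the kind of PCA + LBP + SVM pipeline under test is shown below; the LBP parameters, PCA dimensionality and the single concatenated feature vector are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

def lbp_histogram(gray_image, points=8, radius=1):
    """Uniform LBP histogram for one grayscale face image (2-D array)."""
    lbp = local_binary_pattern(gray_image, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2))
    return hist / hist.sum()

def features(images):
    """Concatenate raw pixels (for PCA) with an LBP histogram per image."""
    return np.array([np.concatenate([img.ravel(), lbp_histogram(img)])
                     for img in images])

# images: list of equally sized grayscale face arrays; labels: person ids
def train_recognizer(images, labels):
    model = make_pipeline(PCA(n_components=50), SVC(kernel="rbf"))
    model.fit(features(images), labels)
    return model
```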
The proliferation of data mining techniques across various corporate functions helps organizations discover deeper insights for making better decisions. One such opportunity emerges in the procurement function, to streamline the process of procuring indirect materials. This paper proposes a two-step approach: 1) adaptation of association rule mining to derive the associated materials, and 2) identification of the right set of supplier(s) for the associated materials based on a supplier selection methodology, Data Envelopment Analysis (DEA). The two-step approach is used in the purchase requisition process as a recommendation engine, assisting the requester (the user who requests materials) with a list of associated materials that can be requested together and recommending the right supplier(s) for them. This significantly reduces the number of purchase requests (PRs), thus reducing the man hours in the procure-to-pay cycle and optimizing the supplier base. The approach is implemented on a sample dataset and a case study is provided for illustration.
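A minimal sketch of the first step (mining associated materials from historical purchase requisitions) using the mlxtend implementation of Apriori is given below; the toy transaction data and the support and confidence thresholds are illustrative assumptions.

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Historical purchase requisitions: each row lists materials requested together
requisitions = [
    ["safety gloves", "safety goggles", "helmet"],
    ["safety gloves", "safety goggles"],
    ["printer paper", "toner"],
    ["safety gloves", "helmet"],
]

encoder = TransactionEncoder()
onehot = pd.DataFrame(encoder.fit_transform(requisitions),
                      columns=encoder.columns_)

frequent = apriori(onehot, min_support=0.4, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.6)
# antecedent -> consequent pairs drive the "request together" recommendations
print(rules[["antecedents", "consequents", "support", "confidence"]])
```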
A highly accelerated growth of the e-market has led to a flourishing online auction scenario. Along with attracting numerous users world-wide, online auctions have also attracted multiple frauds, which periodically change in nature and strategy to adapt to proposed fraud detection and prevention approaches. As per the Internet Crime Complaint Center report of 2013, auction fraud is listed as the topmost fraud, accounting for drastic monetary losses. Among online auction frauds, shill bidding appears to be the most prominent. In this paper, we present a variable bid fee methodology as a prevention technique against shill bidders. A bidder is charged for each of his bids based on the amount he bids. The winner of an auction wins back the charges he paid as bid fees; he also gains an additional benefit to recover the bid fees he paid in the auctions he previously lost. This maintains the competitive spirit of an auction. In contrast, the inherent behaviour of a shill bidder, who bids frequently in an auction but never wins, will cause him perpetual monetary losses. We propose this methodology based on the idea that the risk of losing money will reduce the tendency to exhibit shill behaviour.
Wireless sensor networks (WSNs) form a vast area of research in the field of networking. A number of challenges are faced by such battery-operated networks. In a WSN, the sensors are attached to hardware mote devices and are responsible for sensing certain events happening in the surrounding environment. This sensed data is communicated over a wireless medium via the radio/antenna attached to the sensor nodes. Four layers are commonly discussed in WSNs, namely the physical layer, MAC layer, network layer and application layer. In this paper, we discuss some recent congestion control techniques from 2013, 2014 and 2015.
The threat of security issues in information science has now become an important subject of discussion among concerned users. E-Commerce is one part of the information science framework and its use is gradually becoming popular. Ironically, however, users are now somewhat reluctant on account of threats to security and privacy. Needless to say, E-Commerce has also opened a new era in the banking industry, but banking business through E-Commerce is fraught with risks over these issues. If these threats to privacy and security are not eliminated, users will not have trust, will not visit or shop at a site, and the sites will not be able to function properly. These two issues, security and privacy, need to be examined from social, organizational, technical and economic perspectives. This paper attempts to provide an overview of security and privacy issues in E-Commerce transactions. We also discuss the particular steps required to be taken before shopping online, discuss the purpose of security and privacy in E-Commerce, and then provide a guideline for mitigating risks and vulnerabilities while a user is involved in an E-Commerce transaction.
Diabetes mellitus (DM) is reaching possibly epidemic proportions in India. The degree of disease and damage due to diabetes and its potential complications is enormous, and creates a significant health care burden on both households and society. The concerning factor is that diabetes is now proven to be linked with a number of complications and to occur at a comparatively younger age in the country. In India, the migration of people from rural to urban areas and the corresponding change in lifestyle are all increasing the prevalence of diabetes. Lack of knowledge about diabetes causes untimely deaths in the population at large. Therefore, a facility that spreads awareness about diabetes may benefit people in India. In this work, a mobile (Android) application based solution to overcome the lack of awareness about diabetes is presented. The application uses machine learning techniques to predict diabetes levels for its users, and also provides knowledge about diabetes along with some suggestions on the disease. A comparative analysis of four machine learning (ML) algorithms was performed, and the Decision Tree (DT) classifier outperformed the other three. Hence, the DT classifier is used as the prediction engine of the mobile application, using a real-world dataset collected from a reputed hospital in the Chhattisgarh state of India.
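A minimal scikit-learn sketch of this kind of decision-tree prediction step is shown below; the feature layout, CSV file name and train/test split are illustrative assumptions, not the hospital dataset used in the paper.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical layout: clinical features plus a 'diabetes_level' label column
data = pd.read_csv("diabetes_records.csv")          # assumed file name
X = data.drop(columns=["diabetes_level"])
y = data["diabetes_level"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = DecisionTreeClassifier(max_depth=5, random_state=42)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# The fitted tree would then be exported/embedded behind the mobile app's API.
```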
In the era of big data, applications generating tremendous amounts of data have become the main focus of attention, given the wide increase in data generation and storage that has taken place in the last few years. This scenario is challenging for data mining techniques that are not adapted to the new space and time requirements. In many real-world applications, the classification of imbalanced datasets is the point of interest, yet most classification methods focus on the two-class imbalanced problem. It is therefore necessary to address the multi-class imbalanced problem, which exists in real-world domains. In the proposed work, we introduce a methodology for the classification of multi-class imbalanced data. It consists of two steps: in the first step, binarization techniques (OVA and OVO) decompose the original dataset into subsets of binary classes; in the second step, the SMOTE algorithm is applied to each imbalanced binary subset in order to obtain balanced data. Finally, a Random Forest (RF) classifier is used for classification. Specifically, the oversampling technique is adapted to big data using MapReduce so that it can handle datasets as large as needed. An experimental study is carried out to evaluate the performance of the proposed method using different datasets from the UCI repository, with the proposed system implemented on the Apache Hadoop and Apache Spark platforms. The results obtained show that the proposed method outperforms other methods.
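A minimal single-machine sketch of the per-subset balancing and classification step (using imbalanced-learn's SMOTE and scikit-learn's Random Forest) is shown below; the explicit One-vs-All loop is written for illustration and omits the MapReduce distribution described in the paper.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier

def train_ova_with_smote(X, y):
    """One-vs-All decomposition; each binary subset is balanced with SMOTE."""
    models = {}
    for cls in np.unique(y):
        y_bin = (y == cls).astype(int)           # current class vs. the rest
        X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y_bin)
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X_bal, y_bal)
        models[cls] = clf
    return models

def predict_ova(models, X):
    """Pick the class whose binary model gives the highest positive score."""
    classes = list(models)
    scores = np.column_stack(
        [models[c].predict_proba(X)[:, 1] for c in classes])
    return np.array(classes)[scores.argmax(axis=1)]
```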
Technical support service providers receive thousands of customer queries daily. Traditionally, such organizations discard this data due to lack of storage capacity. However, storing such data is valuable for better analysis and for improving the closure rate of daily customer queries. Data mining is the process of finding important and meaningful information and patterns in large amounts of data, and clustering is one of the best concepts for data analysis, using a machine learning approach with mathematical and statistical methods. Cluster analysis is widely applicable to practical applications in emerging data mining trends. In this work, clustering algorithms such as K-Means, Dirichlet, Fuzzy K-Means and Canopy are analysed through a practical approach. The performance of each algorithm is observed based on execution (computational) time, and the results are compared across the algorithms. This paper proposes a streaming K-Means algorithm that resolves queries as they arrive and analyses the data. The cosine distance measure plays an important role in clustering the dataset, and the sum of squared error is measured to check the quality of the clusters.
Translation of English documents into the Hindi language is becoming an integral part of facilitating communication. For the large population of India, where 366 million people use Hindi as their primary language, providing information in Hindi is an important task. The translation of English documents may be done manually or automatically. When translation is done manually, errors are rare, but when we use translation engines, the Hindi output has many grammatical mistakes. One of the major issues observed concerns tense. Thus, we propose to build a rule-based tense synthesizer that recognises the subject, verb and auxiliary verb, analyses the tense, modifies the verb and auxiliary verb according to the subject, and puts the sentence in the correct tense. This system could be integrated with machine translation engines to boost the quality of Hindi translation.
One of the major causes of vision loss is Diabetic Retinopathy (DR). The presence of Hard Exudates (HEs) in retinal images is one of the most prominent and reliable symptoms of Diabetic Retinopathy. Thus, it is essential to clinically examine for HEs to perform early diagnosis and monitoring of DR. In this paper, a classification-based approach using a Functional Link Artificial Neural Network (FLANN) classifier to extract HEs in retinal fundus images is illustrated. A luminosity contrast normalization pre-processing step was employed. Classification performance was compared between Multi-Layered Perceptron (MLP), Radial Basis Function (RBF) and FLANN classifiers, with better classification performance observed for the FLANN classifier. A GUI package with a Region of Interest (ROI) selection tool was developed.
The network of things is expanding day by day; with that, security, flexibility and ease of use have become user concerns. Different techniques exist to fulfil users' demands, among them Single Sign On (SSO) and cryptographic techniques such as RSA-VES and Serpent. In this paper an effort is made to provide all the mentioned facilities to the user. Single Sign On (SSO) authorizes the user only once and allows the user to access multiple services, making the system very easy to use and providing the flexibility to use multiple programs or applications. The combination of the cryptographic algorithms Serpent (symmetric encryption) and RSA-VES (asymmetric encryption), which are regarded as among the more secure cryptographic algorithms, is used together with “session time”, which makes communication very secure and reliable.
This article compares the Bit Error Rate (BER) performance of soft and hard decision decoding algorithms for LDPC codes on an AWGN channel at different code rates and Signal to Noise Ratio (SNR) levels. Even though the hard decision decoding algorithm is computationally simple, its BER performance is not appreciable. Devising soft decision decoding algorithms that are simple and good in BER performance requires comparison of probabilistic and log-domain methods. Towards this, a codeword is generated through modulo-2 arithmetic between the message bits and the generator matrix. After Binary Phase Shift Keying (BPSK) modulation, AWGN noise is added to the modulated codeword. BER performance is computed by comparing the messages decoded by the soft and hard decision algorithms with the transmitted message. The experiment is conducted in MATLAB. The soft decision decoding algorithm in the log domain provides better BER performance than the hard decision decoding algorithm regardless of the SNR level.
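As context for the measurement loop, a minimal NumPy sketch of estimating BER for BPSK over AWGN with hard-decision detection of uncoded bits is shown below; the LDPC encoding and the soft/hard iterative decoders compared in the paper are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def ber_bpsk_awgn(snr_db, n_bits=100_000):
    """Empirical BER of uncoded BPSK over AWGN at a given Eb/N0 in dB."""
    bits = rng.integers(0, 2, n_bits)
    symbols = 1 - 2 * bits                    # 0 -> +1, 1 -> -1
    ebn0 = 10 ** (snr_db / 10)
    noise_std = np.sqrt(1 / (2 * ebn0))
    received = symbols + noise_std * rng.standard_normal(n_bits)
    decided = (received < 0).astype(int)      # hard decision
    return np.mean(decided != bits)

for snr in [0, 2, 4, 6, 8]:
    print(f"Eb/N0 = {snr} dB  ->  BER = {ber_bpsk_awgn(snr):.5f}")
```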
This paper proposes a new dynamic packet scheduling scheme that guarantees the delay and jitter properties of a differentiated services (DiffServ) network for both real-time and non-real-time traffic. The proposed dynamic packet scheduling algorithm uses a new weight computation scheme known as dynamic benefit weighted scheduling (DB-WS), which is loosely based on a weighted round robin (WRR), or fair queuing, policy. The novelty of this scheduler is that it predicts the weight required by the expedited forwarding (EF) service for the current time slot (t) based on two factors: (i) the weight previously allocated to it at time slot (t-1), and (ii) the average increase in EF traffic weights over consecutive time slots. This prediction provides smoother and more balanced bandwidth allocation to EF, assured forwarding (AF) and best effort (BE) packets, by not allocating all resources to EF while also ensuring minimal packet losses for the EF service. Adopting such dynamic resource allocation effectively reduces packet loss, end-to-end delay and delay jitter. The algorithm is tested with different data rates and found to outperform other existing methods in terms of packet loss and end-to-end delay.
Nearly 70% of the earth is covered by water; hence it is appropriate to use underwater sensor networks (UWSNs) to enable oceanographic research. In a UWSN, radio waves are not suitable for communication because their propagation capability under water is very poor, so UWSNs use acoustic signals for communication. The transmission speed of an acoustic wave is much lower than that of a radio wave due to the physical parameters of the underwater acoustic channel, and the throughput of the channel is affected by this large delay. The long and uncertain delay makes many classical protocols unsatisfactory, because they depend on multiple handshakes and accurate calculation of the round-trip time (RTT) between two nodes. We design and implement a novel approach to mitigate inconsistent delays by reducing the control message exchanges in the network.
This paper deals with the study of the control mechanism and the practical control of electrical appliances using an Android phone in a Zigbee network. The system measures the voltage and current parameters of electric devices and thus helps the user view the power consumed. The proposed system is flexible and provides an efficient and effective control mechanism from a remote location. The system also supports voice-based control and thus helps consumers save on electricity expenses. Other alternatives to Zigbee are also discussed in the paper.
Authentication is one of the important security aspects for protecting critical or sensitive information in a system. The authentication system must allow only authorized users to access the critical information, so it must be strong enough to identify only valid users while remaining user friendly. Many authentication systems have been designed and used, but the most common is login-password, which suffers from shoulder surfing attacks and brute force password guessing. This work explores the strengths of different graphical password systems in avoiding shoulder surfing attacks and enhancing authentication security. We have also proposed a new graphical authentication system.
Time series analysis is one of the major prediction techniques for forecasting time-dependent variables, and is nowadays applicable to a variety of applications. In this work, time series analysis using the ARIMA model is applied to per capita disposable income for future forecasting. Per capita disposable income is the average money available per person after income taxes have been accounted for, and is an indicator of the overall state of an economy. Forecasting per capita disposable income is useful as it may help a government assess the country's economic condition in comparison with the economies of other countries, and may also help assess inflation and financially critical situations. The results obtained from this work can be used by the planning commission of a country to formulate future policies and plans.
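A minimal statsmodels sketch of fitting an ARIMA model to an income series and forecasting ahead is shown below; the (p, d, q) order, input file and forecast horizon are illustrative assumptions.

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical annual per-capita disposable income series (one value per year)
income = pd.read_csv("per_capita_income.csv", index_col="year")["income"]

model = ARIMA(income, order=(1, 1, 1))   # (p, d, q) chosen for illustration
fitted = model.fit()
print(fitted.summary())

forecast = fitted.forecast(steps=5)      # forecast the next five periods
print(forecast)
```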
In a cloud computing scenario, the efficiency of application execution depends not only on the quality and quantity of resources in the data center, but also on the underlying resource allocation approach. An efficient resource allocation technique is necessary for building an efficient system. The objective of this work is to propose an agent-based Best-Fit resource allocation scheme that increases resource utilization, lowers service cost and reduces execution time. The work employs two types of agents: the user's cloudlet agent and the provider's resource agent. The cloudlet agent is located at the client system; it collects job requirements and offers various QoS options for their execution. The resource agent at the server uses the Best-Fit approach to allocate resources to jobs received from the cloudlet agent. The proposed work is simulated and the results are compared with other agent-based resource allocation approaches using First-Come-First-Serve and Round-Robin. It is observed that the Best-Fit approach performs better in terms of VM allocation, job execution time, cost and resource utilization.
Cloud computing systems host most of today's commercial business applications, yielding high revenue, which makes them a target of cyber attacks. This emphasizes the need for a digital forensic mechanism for the cloud environment. Conventional digital forensics cannot be directly applied as a cloud forensic solution due to the multi-tenancy and virtualization of resources prevalent in the cloud. In cloud forensics, the data to be inspected include cloud component logs, virtual machine disk images, volatile memory dumps, console logs and network captures. In this paper, we present a remote evidence collection and pre-processing framework using Struts and the Hadoop distributed file system (HDFS). Collection of VM disk images, logs, etc. is initiated through a pull model when triggered by the investigator, whereas the cloud node periodically pushes network captures to HDFS. Pre-processing steps such as clustering and correlation of logs and VM disk images are carried out through Mahout and Weka to implement cross-drive analysis.
In this paper, a novel and enhanced version of the Modified Decision Based Unsymmetric Trimmed Median Filter (MDBUTMF) is proposed for removing high-density impulse noise from images. The MDBUTMF algorithm fails at high noise densities, hence a new algorithm is presented to remove high-density noise. A histogram estimation technique is used to decide whether the processed pixel is noisy or noise free. Parameters such as Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR) are used to compare the filter with its previous versions. Simulation results show that the filter works better with high-density noise and outperforms MDBUTMF.
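A minimal NumPy sketch of a generic decision-based trimmed median filter for salt-and-pepper noise is shown below as context; it illustrates the basic idea (replace only pixels at the extreme values with the median of the noise-free neighbours) and not the paper's histogram-based enhancement.

```python
import numpy as np

def decision_based_trimmed_median(img, window=3):
    """Replace suspected impulse pixels (0 or 255) with the median of the
    noise-free pixels in the surrounding window (generic sketch)."""
    pad = window // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            if img[i, j] not in (0, 255):          # pixel assumed noise free
                continue
            patch = padded[i:i + window, j:j + window].ravel()
            clean = patch[(patch != 0) & (patch != 255)]  # trim impulses
            if clean.size:
                out[i, j] = np.median(clean)
            else:                                  # all neighbours noisy:
                out[i, j] = np.mean(patch)         # fall back to window mean
    return out
```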
In Wireless Sensor Networks (WSNs), sensor nodes are placed in an environment depending on the application, and secure communication is in high demand. To ensure the privacy and safety of data transactions in the network, unique identification of nodes and secure key transportation have become major concerns. When designing secure key management to establish a secure communication channel in the network, the resource constraints of the devices and the scalability of the network must be addressed. An approach for secure communication channel establishment is made to suit the functional and architectural features of WSNs. Here a hybrid key management scheme for symmetric key cryptography is attempted to establish secure communication: an ECC- and DH-based key management and certificate generation scheme, where the key is generated to decrypt the certificates and establish a link for communication in the network. The hybrid scheme is evaluated through simulation in terms of the energy consumed and a security analysis.
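A minimal sketch of an ECC-based Diffie-Hellman exchange (using the Python cryptography library) that derives a shared symmetric key between two nodes is given below; the curve choice, HKDF parameters and the idea of using the derived key to protect certificates are illustrative assumptions rather than the paper's exact scheme.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each sensor node holds an EC key pair (curve chosen for illustration)
node_a_priv = ec.generate_private_key(ec.SECP256R1())
node_b_priv = ec.generate_private_key(ec.SECP256R1())

# Each side combines its private key with the peer's public key (ECDH)
shared_a = node_a_priv.exchange(ec.ECDH(), node_b_priv.public_key())
shared_b = node_b_priv.exchange(ec.ECDH(), node_a_priv.public_key())
assert shared_a == shared_b

# Derive a symmetric session key from the shared secret
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"wsn-link-key").derive(shared_a)
print(session_key.hex())
```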
In the image processing domain, removal of high-density impulse noise has always been a prominent area of research. In this paper, an efficient modified decision based unsymmetric trimmed median filter algorithm for the removal of impulse noise is proposed for colour images rather than grayscale images, by separating the red, green and blue planes of the colour image. The performance of the system is analysed in terms of Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), Image Enhancement Factor (IEF) and the time required to execute the algorithm for different noise densities. Simulation results show that the proposed algorithm outperforms existing algorithms even at high noise densities for colour images. Many experiments were conducted to validate the efficiency of the proposed algorithm.
Mobile application developers and users can gain a direct advantage from mobile-based cloud computing in overcoming the inherent constraints of mobile devices, be it battery life, memory space or processing power.
The field of network centric warfare seeks to use information technology to gain competitive advantage over enemies during war. An essential part of network centric warfare deals with the firing of highly destructive weapons, including determination of the exact firing direction needed to destroy the target with a given unguided shell. This can be done using different trajectory models. The major challenge is that this decision must be quick in the time-critical scenario of war, which motivates the use of previous experience via data mining.
The Session Initiation Protocol (SIP) is an IP-based signaling protocol used for establishing, modifying and terminating sessions. During the signaling process, both peers may initiate similar or conflicting signaling messages, resulting in race (glare) conditions. The SIP protocol handles glare conditions by retransmitting the signaling messages after a random interval of time within a specified range. This process reduces the chance of glare conditions occurring, but results in more signaling messages. This paper proposes a method for avoiding or reducing glare conditions for SIP BYE requests by enabling stateful SIP proxy servers with extra intelligence to identify and reduce the signaling messages transmitted end to end during glare conditions. This work analyses the bandwidth and the number of signaling messages saved with the proposed method in scenarios where the glare condition is identified between two proxy servers, between a proxy server and a UserAgent, and where the glare condition occurs at a proxy server.
Voltage pulses or current pulses can be used as the main monitoring parameter for the sparks that occur in the micro electrical discharge machining (μEDM) process. The spark gap should be monitored for many purposes, including the study of material removal characteristics, which can be followed by tool wear monitoring and compensation. A system that monitors the spark gap using gap waveforms is called a pulse discriminating (PD) system. The purpose of this research is to compare the capability of effective pulse discrimination using voltage and current pulses, by online application of an algorithm written in LabVIEW for each separately. The comparison is based on the evaluation of the following responses: effective counting of the total number of pulses, and the percentage of different kinds of pulses that exist in μEDM.
A patent is an intellectual property document that protects new inventions. It covers how things work, what they do, how they do it, what they are made of and how they are made. The owner of a granted patent has the ability to take legal action to stop others from making, using, importing or selling the invention without permission. While applying for a patent, the inventor has difficulty identifying similar patents, yet citations of related patents, referred to as the prior art, must be included in the application. We propose a patent search engine to identify related patents, together with a system to predict business trends by analysing patents. In our proposed system, we carry out query-independent clustering of patent documents to generate topic clusters using LDA. From these clusters, we retrieve query-specific patents based on relevance, thereby maximizing the query likelihood. Ranking is based on relevance and recency, performed using the BM25F algorithm. We analyse topic-company trends and forecast the future of the technology using the ARIMA time series algorithm. We evaluate the proposed methods on the USPTO patent database. The experimental results show that the proposed techniques perform well compared to the corresponding baseline methods.
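A minimal scikit-learn sketch of the query-independent LDA topic clustering step is given below; the toy corpus, vocabulary settings and number of topics are illustrative assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-in for patent abstracts
patents = [
    "battery charging circuit for electric vehicle",
    "lithium ion battery electrode material",
    "neural network for image classification",
    "convolutional network hardware accelerator",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(patents)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)          # per-document topic mixture

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
print(doc_topics.round(2))                      # cluster assignment per patent
```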
The diffusion of ICT (Information and Communication Technology) has witnessed remarkable growth in the past decade across the globe. This growth is fuelled and propelled by technological advances, economic investment, and social and cultural changes that have facilitated the integration of ICT into everyday life. As information technology in the healthcare industry evolves, the scope of information sharing is expanding beyond the four walls of individual institutions. Achieving this level of integration will require software models that overcome a host of technical obstacles and that are accessible, affordable and widely supported, primarily by the government. The Indian healthcare sector is undergoing a transformation, with improved services becoming available to a larger population. Indian healthcare is substantially shaped by government and diplomatic beliefs, interests and the state of economic affairs, rather than by the documentation and confirmation of facts and figures. There have been many attempts to improve quality and investment in healthcare, but most have been based on management fads and have been unsustainable. In this paper we evaluate the technology acceptance and financial investment of healthcare professionals based on a field survey.
In the current generation, higher performance and high computational capability are made possible by small feature sizes and the high density of transistors in integrated circuits. In CMOS circuits, scaling down both the supply voltage (Vdd) and the threshold voltage (Vt) results in increased sub-threshold leakage current and hence more power dissipation, while small feature size and the decrease in both Vdd and Vt adversely affect delay reduction. LECTOR and INDEP are CMOS circuit design techniques that mitigate leakage current without affecting dynamic power dissipation. This paper presents a comparative study of area, delay and power dissipation of a CMOS inverter for the LECTOR and INDEP techniques. The INDEP inverter and LECTOR inverter circuits are simulated with and without body biasing, and the sizing effect of the extra and conventional transistors is also addressed. The circuits are simulated using the Tanner EDA tool at 70nm with a supply voltage of 1V.
The channels for expressing opinions seem to increase daily and are therefore important sources of business insight. For product planning, marketing and customer service, it is necessary to capture and analyse these opinions. Social blogs form a massive corpus for text and opinion mining. In this competitive world, the analysis of opinions about products is a vital task for evaluating and improving product quality, and as a result these opinions are relevant to companies; they are easily accessed through social blogs. This paper discusses various methods for performing sentiment analysis. Since users can now post opinions in different languages, the paper also considers the processing of texts in different languages posted in blogs, along with the relevance of the quality of the dataset used for the analysis.
Nowadays, cloud computing is among the most popular networked paradigms in the world. Cloud computing provides resource sharing and online data storage for end users, but existing cloud computing systems have many security issues, so security becomes an essential concern for data stored in the cloud. To address this problem, this paper presents a client-side AES encryption and decryption technique using a secret key. AES encryption and decryption is a highly secure and fast technique, and client-side encryption is an effective approach for protecting both transmitted and stored data. The paper also proposes user authentication to secure the data handled by the encryption algorithm within cloud computing. Cloud computing allows users to work through a browser without application installation and to access their data from any computer using a browser. This infrastructure is designed to keep the information in the cloud server secure.
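A minimal sketch of client-side AES encryption before upload (using the Python cryptography library's AES-GCM primitive) is shown below; the key handling, nonce strategy and upload step are illustrative assumptions rather than the paper's exact design.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_upload(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt data on the client before it is sent to cloud storage."""
    nonce = os.urandom(12)                       # fresh nonce per message
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return nonce + ciphertext                    # store nonce with ciphertext

def decrypt_after_download(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)        # secret key stays client-side
blob = encrypt_for_upload(b"confidential report", key)
print(decrypt_after_download(blob, key))         # b'confidential report'
```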
With emerging advances in communication technologies, Wireless Sensor Networks (WSNs), consisting of numerous sensor nodes, are extensively used in various application areas such as vehicle tracking, agriculture, military, forest surveillance, healthcare, environment and earthquake observation. The sensor nodes have limited computing power, little memory, limited battery power and a short communication range. These sensor nodes are deployed in a particular location to monitor the environment, depending on the WSN application. The complexity of deploying a wireless sensor network depends on optimization and application requirements. WSN deployment is categorized as static, dynamic and energy-aware node placement. This work is an extension of our paper “Analogy of Static and Dynamic Node Deployment Algorithms in WSN”. The present work compares different deployment algorithms across static, dynamic and energy-aware protocols. The comparison of algorithms and protocols is carried out on the basis of parameters such as energy consumption, node coverage and average distance between nodes, and the results indicate which deployment algorithm performs better. The main goal of this paper is to provide the knowledge needed to choose the best of the static, dynamic and energy-aware deployment schemes.
Consider a cloud deployment where the organizational network pertaining to a tenant has routers and switches sharing network telemetry data on a regular basis. Among the different ways of managing networks, flow-based network monitoring is the most sought-after approach because of its accuracy and economies of scale. In the event of host compromise, the device credentials are revoked, thereby disabling its ability to read future communications. Broadcast encryption techniques, which have a strong key revocation mechanism, can be used in this context. Waters et al. [?] describe one such broadcast encryption scheme, which facilitates efficient sharing using small keys; the related Attribute-Based Encryption scheme uses a dual encryption technique and is capable of handling non-monotonic access structures, again with small keys. In this paper we experiment with broadcast encryption and attribute-based encryption schemes on real-time network telemetry data and provide a detailed analysis of performance. Though the original scheme provides smaller keys, a few changes to the algorithm improve performance and efficiency and make it acceptable for large-scale usage. We found the optimized scheme to be 20% more performant than the initial scheme.
The optical uplink, from ground station to satellite, is more susceptible to signal fading than the downlink channel due to beam wandering experienced at the transmitter in the ground station. Alamouti space-time codes have been used to partially mitigate this effect. With these codes, a bit error rate (BER) of 1 × 10^-8 can be achieved for a link margin of 6 dB. A transmitter beam radius of 47 cm and a receiver diameter of 1 m are found to be suitable for the uplink optical system with the transmitter at an altitude of 2.5 km from the ground and the satellite at a zenith angle of 0°. The performance of the Alamouti codes is simulated in MATLAB 2006b and analysed for one, two and four receivers; the 2×2 configuration is proposed in this work, since the deployment of four receivers at the satellite does not seem practical. It is demonstrated that a 2×2 Alamouti space-time code achieves a total gain of 6 dB when convolutional coding and QPSK modulation are used, whereas a gain of about 5 dB is achieved using 8PSK or 16QAM. Link budget analysis of the uplink scenario suggests that the proposed method reduces the required transmitter power from 24.883 kW to 6.2207 kW.
In this paper, we propose an adaptive receive antenna selection technique for Spatial Modulation (SM) MIMO systems. The proposed method is simple and uses the channel state information (CSI) available at the receiver to choose the best subset of active receive antennas among the available N_r receive antennas. Simulation results show that an SM MIMO system with the proposed receive antenna selection scheme provides a performance gain of approximately 5 dB over the conventional SM system (SM MIMO without antenna selection). The proposed scheme does not increase the RF chain requirement at the receiver and retains all the benefits of SM MIMO.
Facial expression analysis plays a pivotal role in all applications based on emotion recognition. Some significant applications are driver alert systems, animation, pain monitoring for patients and clinical practice. Emotion recognition is carried out in diverse ways, and facial-expression-based methods are among the most prominent in the non-verbal category of emotion recognition. The paper presents detection of all six universal emotions based on statistical moments, namely Zernike moments. The features extracted by Zernike moments are classified with a Naïve Bayes classifier. Rotation invariance, one of the important properties of Zernike moments, is also evaluated experimentally. Simulation-based experimentation yields an average detection accuracy of 81.66% and a recognition time of less than 2 seconds for frontal face images. The average precision with respect to positives is 81.85% and the average sensitivity is 80.60%. Robustness of the system is verified against rotation of images up to 360 degrees in steps of 45 degrees. The detection accuracy varies with the emotion under consideration, but the average accuracy and detection time remain on par with those for frontal face images.
The Internet of Things (IoT) is changing the way we perceive information and has inspired solutions to a variety of everyday problems. With the advent of IoT, the Internet will host several “intelligent” objects capable of making their own decisions and communicating with each other efficiently. As this is a new area of research, there is a lack of a standard framework for developing IoT-based solutions. The basic reference architecture of IoT is characterized by three distinct layers: the sensing layer, the network layer and the application layer. This paper aims at providing an efficient platform to develop solutions for the Internet of Things. In the proposed approach, an SOA layer is built on top of the network layer to manage data and information. The current IoT architecture does not offer consensus decision making; when information availability is inadequate or unevenly loaded across the various IoT edge nodes, a consensus decision-making approach is required to combine information efficiently and consistently. A cluster-based approach is used to calculate consensus locally, which is then combined to reach a global consensus.
A novel order reduction approach for continuous-time systems, utilizing the recently developed Cuckoo Search Optimization and the Routh Approximation (RA) technique, is proposed to simplify transfer functions/transfer matrices. The method may be applied to continuous-time systems and assures stability of the lower-order model when the given higher-dimension system is stable. The approach is efficient for the reduction of MIMO and SISO Linear Time Invariant (LTI) systems. Two numerical examples are solved to illustrate the proposed method, and the results are compared with other popular order reduction techniques in terms of Integral Square Error (ISE). Finally, the steady-state value of the reduced system is matched with that of the original high-dimensional system.
Agriculture is the backbone of the Indian economy, where agricultural production is estimated based on sown area. Probable reasons for the lack of accurate and transparent statistics on Indian agronomy are the existing inadequate facilities, unstable mechanisms and sluggish government functionaries. With the advent of remote sensing technologies, researchers are optimistic about addressing such problems. Classifying multi-spectral satellite images is a great challenge due to their complexity, the processing skills required, and the classification itself. This paper highlights a review of the problems and prospects of supervised and unsupervised classification techniques. The literature indicates that one such statistical learning model, the support vector machine (SVM) algorithm, is the most suitable for vegetation discrimination using remote sensing images. In this paper, an attempt is made to enhance the performance of SVM by optimizing its training stage. The approach investigates avenues for improving SVM classification for estimating agricultural area using remote sensing data and also explores future research in this domain. Several attempts have been made using supervised SVM models, but the use of an evolutionary algorithm (EA) to enhance SVM is the novelty that distinguishes this research from traditional approaches. The findings of this combination are put forward constructively so as to address crop identification, reducing the manual effort taken to measure the agricultural area covered by specific crops.
With the increasing proliferation of multicore processors, parallelization of applications has become a priority task. In order to take advantage of the multi-core architecture of modern processors, legacy serial code must be analyzed to discover the regions where the parallelization effort can be most rewarding. This paper presents a parallel implementation of the Doolittle algorithm using OpenMP, allowing users to utilize the multiple cores present in modern CPUs. The serial Doolittle algorithm for computing the solution of dense systems of linear equations is analyzed and parallelized in C using the OpenMP library, which makes it highly efficient, cross-platform compatible and scalable. The performance (speedup) of the parallel algorithm on a multi-core system is presented. The experimental results on a multi-core processor show that the proposed parallel Doolittle algorithm achieves good speedup compared to the sequential algorithm.
Recent advances in wireless sensor networks (WSNs) have made a strong impact on the development of low-cost remote information monitoring systems. This paper proposes an algorithm and simulates the performance analysis of Quality of Service (QoS) parameters of a WSN based on an IEEE 802.15.4 cluster topology for integrated automation of public utility management, such as electricity, water and gas. It is used for data fusion, data analytics, and theft and leakage detection. We investigate performance metrics including packet delivery ratio (PDR), end-to-end delay (EED), jitter, throughput and energy consumption with respect to varying network size. Simulation analysis is done using QualNet ver 5.2.
The strategy of face recognition involves examining the facial features in a picture, recognizing those features and matching them to one of the many faces in the database. There are many algorithms capable of performing face recognition, for instance Principal Component Analysis, Discrete Cosine Transform, 3D recognition methods and Gabor wavelet methods. This work centres on the Principal Component Analysis (PCA) method for face recognition in an efficient manner. There are numerous issues to take into account when choosing a face recognition method, the main ones being accuracy, time limitations, processing speed and availability. With these in mind, the PCA approach to face recognition is selected because it is the simplest and easiest approach to implement, with extremely fast computation time. PCA (Principal Component Analysis) is a process that extracts the most relevant information within a face and then tries to construct a computational model that best describes it.
This paper presents a low-power continuous-time 2nd-order low-pass Butterworth filter operating at a power supply of 0.5 V, suitably designed for biomedical applications. A 3-dB bandwidth of 100 Hz is achieved using a 0.18 μm technology node. The operational transconductance amplifier (OTA) is a significant building block in continuous-time filter design. To achieve the necessary voltage headroom, a pseudo-differential architecture is used to design a bulk-driven transconductor; in contrast to gate-driven OTAs, bulk-driven OTAs have the ability to operate over a wide input range. The output common-mode voltage of the transconductor is set by a Common Mode Feedback (CMFB) circuit. The simulation results show that the filter has a peak-to-peak signal swing of 150 mV (differential) for 1% THD, a dynamic range of 74.62 dB, and consumes a total power of 0.225 μW when operating at a supply voltage of 0.5 V. The Figure of Merit (FOM) achieved by the filter is 0.055 fJ, the lowest among similar low-voltage filters found in the literature.
In this paper, we propose an approach based on the Lattice Reduction (LR) algorithm that preserves the channel norm in the presence of estimation errors. We analyze the channel norm of perfect and imperfect channels by employing the LR algorithm on both, in MIMO systems with 2, 4, 8 and 16 antennas for various error variances. We conclude that effective detection can be achieved even with an imperfect channel by employing LR on those channels.
In this paper, we propose a hybridized Likelihood Ascent Search-Mixed Gibbs Sampling (LAS-MGS) detector for effective detection under channel estimation error. We analyze its performance in the presence of channel estimation error for 2×2 and 4×4 MIMO systems employing the BPSK modulation scheme. At low SNRs, the performance of ZF-MGS and LAS-MGS is similar, but at high SNRs LAS-MGS performs significantly better. LAS-MGS outperforms conventional Mixed Gibbs Sampling (MGS), and a similar gain is obtained even with channel estimation errors. We conclude that LAS-MGS is a worthy candidate for further research.
In recent times, cloud computing has been emerging as a new model for hosting and delivering user services over the Internet. Cloud computing offers computing resources in the form of Virtual Machines (VMs) on demand, with payment made on the basis of the amount of resources used by the user's application. These unique features have attracted more users to host their requirements with cloud providers, which has increased the number of VMs in data centers. This creates the issue of proper VM management so that resources are efficiently utilized. Efficient utilization of resources is realized through VM consolidation, which packs VMs onto as few hosts as possible and switches idle hosts (physical machines) into a power-saving mode. Noteworthy research has been done in the area of efficient VM consolidation to reduce power utilization. VM migration is a powerful utility for achieving VM consolidation, but it incurs bandwidth and resource costs between two machines, leading to a trade-off between the energy used for migration and the energy used during the workload. Our solution in this paper describes how to reduce this trade-off by efficiently migrating VMs to the proper machines, i.e. by reducing the number of migrations.
Data mining is a very effective technique for extracting useful information from large structured datasets. A number of algorithms are available that can mine useful and relevant information, and the choice of data mining algorithm has a great impact on the results obtained. An innovative classification algorithm based on History Bits is developed for extracting useful and relevant information from large structured datasets. For the implementation and testing of the History Bits based algorithm, we designed a structured criminal dataset. The algorithm analyses criminal information in less time and reduces the constraints of the manual investigation process.
Cloud based services have become an integral part of our life. These services are based on infrastructure known as a data center. As the demand for cloud based services increases, the load on the data centers also increases, and if this load is not properly managed the overall performance of the cloud degrades. This paper proposes a dynamic approach for load balancing a cloud that exploits the global presence of data centers. The proposed approach applies a strategy based on inter data center load migration. The approach illustrated in the paper tries to improve the overall performance of the cloud by minimizing service delay.
Diabetic Retinopathy (DR) is an eye disease that affects the retina and, at a severe stage, leads to vision loss. Early detection of DR helps improve the screening of patients to prevent further damage. Retinal micro-aneurysms, haemorrhages, exudates and cotton wool spots are the major abnormalities used to identify Non-Proliferative Diabetic Retinopathy (NPDR) and Proliferative Diabetic Retinopathy (PDR). The main objective of our proposed work is to detect retinal micro-aneurysms and exudates for automatic screening of DR using Support Vector Machine (SVM) and KNN classifiers. To develop the proposed system, detection of red and bright lesions in digital fundus photographs is needed. Micro-aneurysms are the first clinical sign of DR and appear as small red dots on retinal fundus images. To detect retinal micro-aneurysms, retinal fundus images are taken from the Messidor and DB-rect datasets. After pre-processing, morphological operations are performed to find micro-aneurysms and then GLCM and structural features are extracted for classification. In order to classify normal and DR images, the different classes must be represented using relevant and significant features. SVM gives better performance than the KNN classifier.
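As an illustration of the classification stage described above, the following is a minimal sketch (in Python, using scikit-image and scikit-learn, which are assumptions not stated in the abstract) of extracting GLCM texture features from a candidate lesion patch and feeding them to an SVM; the patch list, labels and parameter choices are placeholders, not the authors' exact pipeline.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.svm import SVC

    def glcm_features(patch):
        # patch: 2-D uint8 array cut from a pre-processed fundus image
        glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        props = ["contrast", "homogeneity", "energy", "correlation"]
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])

    # patches: list of candidate-lesion patches; labels: 1 = DR, 0 = normal (placeholders)
    # X = np.array([glcm_features(p) for p in patches])
    # clf = SVC(kernel="rbf").fit(X, labels)   # the same X can be given to a KNN for comparison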
The most economical operation of modern power systems is to provide power generation optimally from different units at the lowest possible cost while meeting all the system constraints. This work addresses the security constrained unit commitment (SCUC) problem with an objective function incorporating the equality and inequality constraints of the system. The problem is solved using multiple optimization functions. Constraints such as real power operating limits, power balance, minimum up and down time, emission and spinning reserve are handled while solving the problem using the BAT procedure. The proposed method is implemented in the MATLAB working platform and its performance is evaluated on 3-unit and 10-unit test systems.
The computation of gaze direction is important in modern interactive systems. The displays in real-time monitoring systems depend on the spatial and temporal characteristics of eye movement. Research studies indicate the requirement for efficient and novel techniques in human computer interaction, and there is a strong need for gaze tracking methods that eliminate initial setup and calibration procedures. The pupil, iris and eye corners provide parametric data to determine gaze direction. The gaze tracking algorithm is initiated by iris localization. The approach of iris detection using frames captured from video is significant for feature based gaze tracking. In this paper, the procedures for face and eye detection in visible light are discussed. The novel method discussed in this paper identifies a single face image appropriate for gaze tracking by eliminating multiple and non-face images. Iris detection is performed using the Hough gradient method. The correctness rate of iris detection obtained is 95%.
Online learning has gained popularity in recent years; its key success is delivering content over the internet that can be accessed by students from anywhere and at any time. In general, attraction is the quality of arousing interest, while motivation, on the other hand, supports learning. Since online learning has less control over students compared to conventional teaching methods, student engagement becomes more important in online learning. Most learning systems store learners' activities in log files and their profile-related information in a database. Log file analysis alone usually does not provide enough data to detect disengagement. Thus, we integrate the log file information with the database and develop a novel disengagement detection strategy using a quasi framework. The results of this study reveal that the quasi framework is effective in terms of quality compared to previous proposals.
Hand gestures serve as primary tools for man-machine interaction. A Hand Gesture Recognition System provides a natural, innovative and modern means of non-verbal communication. A wide range of applications in Human Computer Interaction and Sign Language Recognition has emerged over the last decade. This paper is part of a project that aims at designing a real time system that recognizes sign language accurately. The gestures are recognized through camera based and 5DT data glove based systems respectively, and these are combined to increase the recognition rate. The proposed fusion algorithm provides a high recognition rate compared to the discrete approaches.
In urban areas the voting system has become complicated because of the difficulty of verifying a person's identity: voters have only the voting card as proof of identification, so there are many chances of fake voting. To avoid this, we are developing a system that stores the identity of voters using an Android mobile through facial recognition. The system captures the faces of voters and matches them with the existing faces in the stored database. After a valid face is detected and confirmed, an OTP (One-Time Password) is generated and sent to the voter's registered mobile number. The voter is then validated and allowed to vote. This is a fast and helpful technique for verifying voters and also reduces the time voters spend standing in queues to vote.
This paper studies the bit error rate (BER) of various Ultra-wide band (UWB) pulses over different channel models using the pulse position modulation (PPM) technique. Gaussian pulses and their derivatives, the modified Hermite pulse (MHP) and the composite Hermite pulse (CHP) are used to transmit data bits in UWB communications. For the BER simulation analysis of UWB pulses, the previously proposed additive white Gaussian noise (AWGN) and Saleh-Valenzuela (S-V) channel models for UWB are considered. The results show that the performance of CHP is superior to the other pulses for all channel models due to its good spectral compatibility with the Federal Communications Commission (FCC) spectral mask and its higher fractional bandwidth. The observations made in this paper are helpful in the selection of pulses for UWB communication systems.
This paper identifies an innovative design for signature verification which is able to extract features from an individual's signatures and uses those feature sets to discriminate genuine signatures from forgeries. An innovative JAVA-PYTHON platform is used for the development. Detailed feature study, algorithm development and feature set verification is carried out during this experiment.
This paper introduces and motivates the use of artificial neural networks (ANN) for speaker-independent phoneme recognition in voice signals. It shows the utilization of the neural network's parallel and self-learning characteristics in phoneme recognition using the Kohonen learning rule, and demonstrates the utility of machine learning algorithms in signal processing by emulating biological neuron arrangements. Different types of neural networks are used at every stage of the whole process. The artificial neural network implementation has improved the performance of the feature extraction and matching stages of phoneme recognition. This solution, based on self organizing clustering of speech features on the time axis forming phonemes and unsupervised learning of these clusters, attains an accuracy of 97.77% given 3 seconds of clean speech input and an accuracy of 98.88% given 15 seconds of clean speech input. Speech samples were taken from 9 speakers.
We present an ensemble method to classify Parkinson patients and healthy people. C&R Tree, Bayes Net and C5.0 are used to build the ensemble. Using a supervised learning technique, the proposed method generates rules to distinguish Parkinson patients from healthy people. Each classifier generates rules that are used as input for the next classifier, and in this way the final rules are generated, predicting more accurately than any individual classifier used to build the ensemble. The method shows a lower number of misclassified instances than the single classifiers used to build the model, and the ensemble shows better training and testing accuracy than a single classifier.
There are many challenges in building mobile Web applications today. One of the critical challenges developers face is the performance of the mobile Web application. With ever improving hardware on mobile phones and tablets, the demand for the best performance on mobile is something developers cannot ignore. Study after study has indicated that users are unlikely to put up with a poorly performing application. This paper illustrates the different techniques developers need to apply to measure and optimize the performance of a mobile Web application in different layers such as HTML, CSS and JavaScript, as well as during deployment. The paper takes a use case application, measures and baselines its performance, and then applies various techniques to measure the gain or loss in performance after applying each technique.
In this paper a scheme for segmentation of unconstrained handwritten Devanagari and Bangla words into characters and their sub-parts is proposed. Firstly, the region above the headline is identified by counting the number of white to black transitions in each row, which is followed by its separation. Then the characters are segmented using fuzzy logic. For each column, the inputs to the fuzzy system are the location of the first white pixel, the thickness of the first black stroke, the count of white pixels, and the run length count of white pixels.
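To make the column-wise inputs concrete, the sketch below (Python/NumPy, an assumption; the paper does not specify an implementation language) computes the four features named above for one column of a binarized word with the headline already removed; the fuzzy membership functions themselves are the paper's and are not reproduced.

    import numpy as np

    def column_features(col):
        # col: 1-D array for one image column, 1 = white (background), 0 = black (ink)
        whites = np.flatnonzero(col == 1)
        first_white = int(whites[0]) if whites.size else len(col)   # location of first white pixel
        i = first_white
        while i < len(col) and col[i] == 1:                         # skip the leading white run
            i += 1
        stroke = 0
        while i < len(col) and col[i] == 0:                         # thickness of the first black stroke
            stroke += 1
            i += 1
        white_count = int((col == 1).sum())                         # count of white pixels
        padded = np.concatenate(([0], (col == 1).astype(int), [0]))
        white_runs = int((np.diff(padded) == 1).sum())              # run-length count of white pixels
        return first_white, stroke, white_count, white_runs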
This paper presents the analysis of a text based data retrieval system and opinion mining on a social networking website. Data is collected from various sources such as the local machine, email accounts and social networking accounts of the respective user. Multiple users can use the system by providing login credentials. The paper explains the significance of Inverse Document Frequency and Term Frequency in the Lucene scoring formula. Most people express their candid opinions on social networking websites rather than on other discussion forums. This paper discusses an approach to classify each tweet from Twitter into a positive, negative or neutral category. It also presents techniques to improve the performance of Lucene by modifying certain parameters of the document scoring formula; Lucene performance can also be improved by modifying the algorithm for incremental indexing and parallel processing. The purpose of developing such a system is to reduce the manual effort of searching to a great extent. The system is portable and secure.
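For readers unfamiliar with the quantities mentioned above, the sketch below reproduces (in Python, for illustration) the default term frequency and inverse document frequency factors of Lucene's classic TF-IDF similarity; overriding these factors is one way the scoring parameters referred to in the abstract can be tuned. The numbers in the example are hypothetical.

    import math

    def lucene_tf(freq):
        # Classic Lucene similarity: tf = sqrt(frequency of the term in the document)
        return math.sqrt(freq)

    def lucene_idf(doc_freq, num_docs):
        # Classic Lucene similarity: idf = 1 + ln(numDocs / (docFreq + 1))
        return 1.0 + math.log(num_docs / (doc_freq + 1))

    # A term occurring 4 times in a document and in 10 of 1000 indexed documents:
    print(lucene_tf(4) * lucene_idf(10, 1000) ** 2)   # idf is squared in the practical scoring function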
In recent times, with the increasing effect of malfunctioning software developed using conventional approaches on the world of embedded computer based systems, the use of a model based approach is often advocated as a means of increasing confidence in such systems. Designing and validating real time systems using models helps to improve system safety and reliability.
For reliable and error free transmission of data in communication systems, we require a system employing forward error correction schemes. This paper analyzes and compares the performance of convolutional codes, Reed-Solomon codes and the concatenation of convolutional and Reed-Solomon codes over optical communication links with interleaving, in terms of bit error rate and signal to noise ratio for different code rates, and identifies which code rate gives the best performance.
The human eye is a most complicated organ, having ideal and interconnected subsystems, namely the pupil, iris, lens, retina, cornea and optic nerve. Cataract is one of the major health problems that occurs mainly in old age. A protein layer develops gradually in the eye and the lens becomes cloudy over a long period of time. This reduces vision and leads to blindness. Various automatic cataract detection and classification methods are available today. All cataract detection and classification systems have three basic steps: preprocessing, feature extraction and classification. In this paper some of the recent methods are discussed and analyzed.
In general, security evaluation of communication networks has always been of prime interest and in particular the ever increasing use of mobile phones in the last decade has led to keen interest in studying the possibility of hacking cellular networks. Security comes at an overhead in terms of either CPU cycles (computational overhead), bandwidth (communication overhead) and/or memory. While it is possible to theoretically design a system that is 100% secure, the operational overhead makes it uneconomical to deploy such a system in the real world. Often compromises are made in the real world implementation of a communication system and a trade-off is made between security and cost of operation of the communication system. In this paper we build a low cost GSM testbed to evaluate the security features in the commercially deployed 2G and 2.5G cellular networks in India.
Medical imaging has grown tremendously over the years. CT and MRI are considered the most extensively used imaging modalities. MRI is less dangerous, but one cannot underrate the unsafe side effects of CT. Current studies reveal an escalating risk of cancer as a side effect for patients who undergo repeated CT scanning. Consequently, the design of low dose imaging protocols is of enormous significance in the current scenario. In this paper we present modified highly constrained back projection (M-HYPR) as a promising method to address low dose imaging. HYPR is iterative in nature and hence computationally greedy, which is the root cause of its neglect by CT developers. The weight matrix module, the main reason for the huge computation time, is modified in this work. A considerable speed up factor is recorded compared to the original HYPR (O-HYPR) on a single-threaded CPU implementation. The quality of the reconstructed image on each platform has been analyzed. The results show a substantial performance improvement by the M-HYPR algorithm, and appreciable usage of the GPU in medical imaging applications.
In this paper a multiband fractal based rectangular microstrip patch antenna is designed. FR4 substrate with a thickness of 1.58 mm is used as the substrate material for the proposed antenna, and a microstrip feed line provides the excitation. The antenna operating frequency range is from 1 to 10 GHz. The proposed antenna resonates at twelve different frequencies, namely 1.86, 2.33, 3.67, 4.57, 5.08, 6.06, 7.03, 7.75, 8.08, 8.84, 9.56 and 10 GHz, and the return losses are -15.39, -16.48, -10.02, -17.29, -13.15, -23.41, -10.22, -11.28, -17.02, -10.94, -15.15 and -15.48 dB respectively. The proposed antenna is designed and simulated using the Ansoft HFSS V13 (high frequency structure simulator) software.
Cryptography is defined as the practice and study of techniques for secure communication in the presence of third party attackers. It is a good way to protect sensitive information. Over the years, the need to protect information has increased, and confidentiality is of utmost importance, yet complete protection of information is not an easy task. In this paper, a method is proposed that consists of three different levels of encryption, accomplished using Red-Black Trees and a Linear Congruential Generator. Due to the existence of three levels, it becomes extremely difficult for an attacker to compromise the data.
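The abstract does not give the generator's parameters or how the three levels are chained, so the following is only a hedged sketch (Python) of one level: a linear congruential generator, with textbook constants, producing a keystream that is XOR-ed with the data; the Red-Black-tree levels are not shown.

    def lcg_keystream(seed, n, a=1664525, c=1013904223, m=2 ** 32):
        # Linear congruential generator x_{k+1} = (a*x_k + c) mod m; constants are illustrative.
        x = seed
        for _ in range(n):
            x = (a * x + c) % m
            yield x & 0xFF                     # take one keystream byte per step

    def xor_level(data: bytes, seed: int) -> bytes:
        # One encryption level: XOR the data with the LCG keystream (apply again to decrypt).
        return bytes(b ^ k for b, k in zip(data, lcg_keystream(seed, len(data))))

    ciphertext = xor_level(b"sensitive information", seed=12345)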
The de-facto storage model used by health-care information systems is the Relational Database Management System (RDBMS). Although the relational storage model is mature and widely used, it is ill-suited to storing and querying data with a high degree of relationships. Health-care data is heavily annotated with relationships and is hence a suitable candidate for a specialized data model - graph databases. Graph databases empower health-care professionals to discover and manage new and useful relationships and also provide speed when querying highly related data. To query related data, relational databases employ massive joins which are very expensive; in contrast, graph data-stores have direct pointers to their adjacent nodes, hence achieving the much needed scalability to handle the huge amount of medical data being generated at very high velocity. Also, healthcare data is primarily semi- or un-structured, inciting the need for a schema-less database. In this proposal, a methodology to convert a relational database to a graph database by exploiting the schema and the constraints of the source is proposed. The approach supports the translation of conjunctive SQL queries over the source into graph traversal operations over the target. Experimental results are provided to show the feasibility of the solution and the efficiency of query answering over the target database. Tuples are mapped to nodes and foreign keys are mapped to edges. Software has been implemented in Java to convert a sample medical relational database with 24 tables to a graph database; during the transformation, constraints were preserved. MySQL as the relational database and the popular graph database Neo4j were used for the implementation of the proposed system - SQL2Neo.
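The tuple-to-node and foreign-key-to-edge mapping can be pictured with the short sketch below; the paper's tool is written in Java, whereas this is a Python illustration that merely emits Cypher statements, and the table, column and relationship names are hypothetical.

    def row_to_cypher(table, row):
        # Each tuple becomes a node labelled with its table name; columns become properties.
        props = ", ".join(f"{k}: {v!r}" for k, v in row.items())
        return f"MERGE (:{table} {{{props}}})"

    def fk_to_cypher(table, pk, pk_val, ref_table, ref_pk, ref_val, rel):
        # Each foreign key becomes a relationship between the referencing and referenced nodes.
        return (f"MATCH (a:{table} {{{pk}: {pk_val!r}}}), (b:{ref_table} {{{ref_pk}: {ref_val!r}}}) "
                f"MERGE (a)-[:{rel}]->(b)")

    # Hypothetical schema: a Patient row whose ward_id references Ward
    print(row_to_cypher("Patient", {"id": 7, "name": "A. Rao"}))
    print(fk_to_cypher("Patient", "id", 7, "Ward", "id", 3, "ADMITTED_TO"))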
In this paper we propose an integrated test environment for combinatorial testing and discuss its realization at the macro level. We describe the entities that make up the integrated test environment and their interaction, highlighting the combinatorial test aspects. The proposed environment can be used for automated as well as manual testing. Test tools supplied by research institutions and commercial tool vendors need to be integrated to realize the test environment proposed in this paper. The proposed system is theoretical in nature and can be embraced by large scale industrial and research projects to achieve an efficient integrated environment for combinatorial testing. This in turn can aid in the adoption of combinatorial testing to achieve the desired quality improvements in products.
A hybrid fractal multiband antenna is designed using Koch and meander geometries and its characteristics are investigated. The proposed antenna achieves multiband behavior due to its multiple resonance characteristics. It has a planar structure and compact size, and is suitable for wireless applications. An IFS approach has been used to obtain the hybrid structure using MATLAB and the scripting method of HFSS. Perturbation of the basic structure is done to achieve quad-band behavior. The proposed antenna resonates at four different frequencies, including Bluetooth (2.12-2.95 GHz), WLAN (4.82-5.95 GHz), 4.07 GHz and 7.3 GHz. It is a low cost antenna designed on an easily available FR4 substrate. It exhibits a nearly omnidirectional radiation pattern and VSWR ≤ 2 for all resonating frequencies.
Worldwide Interoperability for Microwave Access (WiMAX) is a wireless metropolitan area network (WMAN) technology. Routing and multimedia applications play an important role in WiMAX networks. In this study, the main focus is on analyzing and comparing different routing algorithms such as AODV, DYMO, DSR, Bellman-Ford and Fish-Eye for WiMAX networks. We have considered QoS parameters such as total data received, average end-to-end delay, average jitter and throughput, and we have used the CBR application. From the obtained results, the DSR routing algorithm achieves the highest throughput and total data received, Bellman-Ford and Fish-Eye provide lower average end-to-end delay, and the Bellman-Ford routing algorithm also provides lower average jitter.
Multipliers are the major contributors to the overall throughput in most SoCs. Vedic arithmetic is a novel and simplified approach to perform complex operations. Any good design must be targeted for optimal Speed-Area Trade-off. Commercial application demands reliable and economical design which makes testability an important parameter. Stuck-at-fault model for the design is to be developed and proper metrics have to be used to measure testability. Good design implies high fault coverage also. In this paper, design of Vedic multiplier with high fault coverage is proposed. Vedic multiplier designed using Urdhva-Triyagbhyam Sutra operates faster than the conventional multipliers like Booth and Array multipliers. Comparative analysis of VLSI parameters such as throughput, area and fault coverage is done with other multipliers.
This research work focuses on the development of neural network based detection and characterization of electrocardiogram (ECG) and electroencephalogram (EEG) signals. ECG and EEG signals are of prime importance for patients under critical care; these signals have to be continuously monitored and processed as they are interdependent. In this research, the Dyadic wavelet transform (DyWT) is used to process ECG data and the Daubechies wavelet transform (DWT) is used to process EEG data. The back propagation neural network algorithm and the Hopfield algorithm are used to detect and characterize both ECG and EEG signals. Different ECG and EEG data have been collected and simultaneously processed and recognized.
Nowadays extra attention is being paid to budget management, including in sales and marketing sectors, where sales agents have to spend their budget wisely and find prospective clients. This paper presents an architecture based on the SAP HANA framework, a powerful database platform that takes advantage of large main memory and extensive parallel processing, which identifies prospective clients based on a fuzzy item set approach and improves the accuracy of identifying clients with minimal budget expenditure.
This paper presents a novel method of zooming binary images. An algorithm that utilizes zooming techniques and represents large-dimension 2D binary images on miniature monochrome display screens is presented. An effort has been made to put the display screen to best use for representing images on such displays by implementing the zooming techniques frequently required in microcontroller based industrial embedded systems. The algorithm is capable of scrolling through the large image to bring the desired content into view, and the image can also be viewed at different zoom scales during run-time. The algorithm is tested on an ARM Cortex-M3 based handheld with an embedded monochrome graphics display module. Results show the advantage of the proposed algorithm over other algorithms when the fine details of the image have to be preserved on a monochrome display.
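The core of such a viewer can be summarized by the small sketch below (Python/NumPy for illustration; the handheld implementation itself would be firmware on the Cortex-M3): nearest-neighbour zooming of a binary image combined with a scrollable window the size of the monochrome screen. Function and parameter names are assumptions.

    import numpy as np

    def viewport(image, top, left, screen_h, screen_w, zoom):
        # image: 2-D 0/1 array; (top, left): scroll position in image coordinates;
        # zoom: magnification factor; returns the screen_h x screen_w window to display.
        h, w = image.shape
        ys = (np.arange(screen_h) / zoom + top).astype(int).clip(0, h - 1)
        xs = (np.arange(screen_w) / zoom + left).astype(int).clip(0, w - 1)
        return image[np.ix_(ys, xs)]   # nearest-neighbour sampling keeps binary detail crisp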
A mobile ad hoc network (MANET) is a decentralized communication network. In MANETs, multicasting real time traffic leads to increased energy consumption and delay. In order to overcome this issue we propose to design a Node connectivity, Energy and Bandwidth Aware Clustering Routing Algorithm. An efficient ENB cluster head selection algorithm [1] based on the combination of the important metrics Residual Energy (E), Node connectivity (C) and Available Bandwidth (B) is used to elect the cluster head efficiently. The multimedia stream is split into multiple sub-streams prior to transmission using the Top-N rule selection approach algorithm [2]. A shortest path multicast tree construction algorithm [3] is used to transmit the real time traffic effectively among the nodes in MANETs. Using the cluster heads as group leaders and the cluster members as leaf nodes, a shortest path multicast tree is established. With the help of the constructed shortest path multicast tree we propose the Node connectivity, Energy and Bandwidth Aware Clustering Routing Algorithm.
Document polarity detection is a part of sentiment analysis in which a document is classified as having positive or negative polarity. The applications of polarity detection are content filtering and opinion mining. Content filtering of negative polarity documents is an important application to protect children from negativity and can be used in the security filters of organizations. In this paper, a dictionary based method using a polarity lexicon and machine learning algorithms are applied for polarity detection of Kannada language documents. In the dictionary method, a manually created polarity lexicon of 5043 Kannada words is used and compared with machine learning algorithms such as Naïve Bayes and Maximum Entropy. It is observed that the performance of Naïve Bayes and Maximum Entropy is better than the dictionary based method, with accuracies of 0.90, 0.93 and 0.78 respectively.
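A minimal sketch of the machine-learning side of such a comparison, assuming a bag-of-words representation and scikit-learn (neither is stated in the abstract), is given below; the two toy documents stand in for the actual Kannada corpus.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Placeholder corpus: real experiments would use labelled Kannada documents.
    docs = ["good product excellent service", "bad quality waste of money"]
    labels = ["positive", "negative"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(docs, labels)                          # learn word-level polarity statistics
    print(model.predict(["excellent quality product"]))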
Non-linear system identification is gaining much importance at present. The paper introduces a system identification technique based on chaos theory. We observe system output data over a specified period of time and characterize the system behavior (stable, unstable or chaotic). The algorithm is implemented on a Cortex-M3 development board.
Large scale digitization of essential services like governance, banking and public utilities has made the internet an attractive target for worm programmers to launch large scale cyber attacks with the intention of either stealing information or disrupting services. Large scale attacks continue to happen in spite of the best efforts to secure networks by adopting new protection mechanisms against them. Security comes at a significant operational cost, and organizations need to adopt an effective and efficient strategy so that the operational costs do not exceed the combined loss in the event of a widespread attack. The ability to assess damage in the event of a cyber attack and to choose an appropriate and cost effective strategy depends on the ability to successfully model the spread of a cyber attack and thus determine the number of machines that would get affected. Existing models fail to take into account the impact of deployed security techniques on worm propagation while assessing the impact of a worm on the computer network. Further, they consider the network links to be homogeneous and lack the granularity to capture the heterogeneity in security risk across the various links in a computer network. In this paper we propose a stochastic model that takes into account the fact that different network paths have different risk levels, and that also captures the impact of security defenses based on memory randomization on worm propagation.
A Hadoop cluster is specifically designed to store and analyze a large amount of data in a distributed environment. With the ever increasing use of Hadoop clusters, a scheduling algorithm is required for optimal utilization of cluster resources. The existing scheduling algorithms suffer from one or more of the following crucial problems: limited utilization of computing resources, limited applicability to heterogeneous clusters, random scheduling of non-local map tasks, and negligence of small jobs in scheduling. In this paper, we propose a novel job aware scheduling algorithm that overcomes the above limitations. In addition, we analyze the performance of the proposed algorithm using the MapReduce WordCount benchmark. The experimental results show that the proposed algorithm increases resource utilization and reduces the average waiting time compared to the existing Matchmaking scheduling algorithm.
Today's networks are complex and vast, and it is difficult to gauge their characteristics. Network administrators need information to check network behavior for capacity planning, quality of service requirements and planning for the expansion of the network. Software defined networking (SDN) is an approach where we introduce abstraction to simplify the network into two layers, one used for controlling the traffic and the other for forwarding it. Hadoop is used for distributed processing. In this paper we combine the abstraction property of SDN and the processing power of Hadoop to propose an architecture which we call the Advanced Control Distributed Processing Architecture (ACDPA), which is used to determine flow characteristics and set the priority of the flows, i.e. essentially setting the quality of service (QoS). We provide experimental details with sample traffic to show how to set up this architecture. We also show the results of traffic classification and of setting the priority of the hosts.
Present multimedia applications demand highly energy-efficient devices due to the need for intensive computation. Most filtering applications employ a 2D Gaussian smoothing filter, which is slow and severely affects performance. In this paper, we propose a novel energy efficient approximate 2D Gaussian smoothing filter. The proposed approach is based on “Mirror Short Pixel Approximation” and rounding off the Gaussian kernel coefficients. In Mirror Short Pixel Approximation, elements of the input image block are replaced by their mirror pixel values. The proposed approach is modelled in a high level language to check its efficacy on a benchmark image, which results in only minor degradation of image quality. The proposed design is realized on a Virtex 6 FPGA. The simulation results show reductions of 85%, 8% and 58% in area, power and delay respectively compared to the existing approximate 2D Gaussian smoothing filter.
Mutation testing is an effective adequacy criterion and has been researched extensively in past decades, but it lacks practical application due to its high cost. Mutation testing for test data generation has not been studied much. This paper is an up-to-date review of the technologies that have been applied with mutation testing for automatic generation of test data optimized with regard to time, cost and code coverage. The survey reveals an increase in interest in meta-heuristic techniques combined with mutation testing.
In this paper, we propose a pattern classification approach to learn and recognize human pulse signals. Differing from previous work, the signal processing approach introduced in this paper is oriented by the perspective of system analysis and aims to distinguish the pulse signals of pregnant subjects from those of non-pregnant female adults. Firstly, we apply a homomorphic deconvolution model to obtain the two types of human pulse signal curves in the cepstrum domain and extract their Mel-Frequency Cepstrum Coefficients. This learning step estimates the characteristic parameters of the human pulse signal as well as the frequency characteristics and formant parameters of the human pulse transmission system. Secondly, the Mel-Frequency Cepstrum Coefficients are processed via Dynamic Time Warping and Fuzzy C-Means Clustering, thus determining the parameter ranges of the human pulse signal and using them as the classifier in the subsequent recognition process. Instead of optimizing by applying Dynamic Time Warping alone, our approach, which combines Fuzzy C-Means Clustering and Dynamic Time Warping, tends to improve the recognition rate significantly due to its advantage of searching for the globally optimal solution.
Some of the challenges involved in current Internet routing are the limitations of the processes enabling routing techniques, handling the explosion of messages, and the absence of awareness about the environment. This paper presents a comparative analysis, using a Bayesian model, of a network having randomly distributed quality parameters when subjected to quality grading and direction-oriented optimal path determination. Optimal path determination was performed upon self aware nodes using a Memetic algorithm and ABC. The agents distributed among the nodes accumulate relevant information about their own and neighbouring nodes. The grading operation makes use of the agents to determine the quality of service information of the nodes in the network. The scheme has been simulated on various network topologies for performance analysis of the direction oriented graded network in terms of throughput and end-to-end delay. It has been found that the graded cognitive network exhibits more flexibility and adaptability for facilitating routing.
As embedded computing becomes more advanced, more and more functionality is becoming available on mobile devices. The workloads on earlier generations of mobile devices were mostly limited to chat, e-mail or Web browsing apart from their use as phones. Multimedia workloads such as video are on the rise; in addition many users play games or use apps on the latest mobile devices. The emergence of these new workloads has resulted in high performance demands on mobile devices. System level design space exploration for high performance embedded systems is a very important problem that has become very challenging due to the advent of multi cores, GPUs, FPGAs and DSPs along with a large variety of energy efficient memory systems. To perform efficient design space exploration for SoCs, a workload characterization approach is adopted. This paper shows workload characterization for a variety of heterogeneous processors such as DSPs and FPGAs.
Fault tolerant resource consumption in desktop grids is a motivating area of research. The present paper focuses on fault tolerant resource usage, especially in the area of available computational power. Desktop grid resources are responsible for the generation of computational power, and the Alchemi desktop grid middleware is useful for aggregating computational power from diverse machines in a Microsoft Windows based environment. Failures and faults on the execution side can create serious problems and have a direct impact on computational power in a real time environment. In the presence of faults, control over the available computational power is very necessary in grid middleware; this problem has not been addressed so far. The Alchemi desktop grid middleware provides only a manual procedure for controlling computational power in a real time environment, and no automated mechanism is available. This research work has proposed, designed and developed an automated framework for the Alchemi grid middleware that takes control of the available computational power in a real time environment at the time of a fault in the execution processes. The framework has been tested in a real time environment. The test results show that the framework responds quickly in controlling the available computational power; it is able to detect a defective process machine and correct the fault in milliseconds, which helps maintain the level of available computational power in a real time environment. This work thus replaces the manual procedure for controlling computational power with an automated method for quick action in the case of execution side faults.
Sentiment Analysis (SA) is a very popular research area in the field of text mining as its computational capabilities have found many research applications. Sentiment prediction, subjectivity detection, text summarization and sentiment summarization for opinions are some example applications. There are many research studies in the area of SA in different languages. However, Kannada SA has not been explored extensively, in particular for the analysis of product reviews. In this paper, a case study of Kannada SA for mobile product reviews is proposed, as there are many user generated Kannada product reviews available online. In this approach a lexicon based method for aspect extraction has been developed. Furthermore, the Naive Bayes classification model is applied to analyze the polarity of the sentiment due to its computational simplicity and stochastic robustness. This is the first attempt in Kannada to the best of the authors' knowledge. Therefore, a customized corpus has been developed. The weekly reviews from the column `Gadget Loka' by U.B Pavanaja, published in the famous Kannada newspaper `Prajavani', are considered to develop this corpus. The preliminary results indicate this approach is an efficient technique for Kannada SA.
Any given Euclidean space can be partitioned into non-overlapping regions using a Voronoi diagram, and the Delaunay triangulation connects sites in a nearest-neighbor fashion. In this context, all the edge servers scattered over the Earth's surface can be clustered using a Voronoi diagram, and selection of the nearest edge server by Delaunay triangulation over the Voronoi diagram is our prime target. Due to the large demand for Internet content coming from burst crowds, the performance of Cloud-oriented content delivery networks is drastically reduced. To address this performance degradation, nearest edge server selection is a primary goal of cloud service providers (such as Akamai Technologies, Amazon CloudFront, Mirror Image Internet, etc.). In practice, the nearest edge server is not always able to respond to the user request because of its load, so load balancing is also an important criterion for selecting a suitable edge server. In this paper, we present a Fuzzy Based Least Response Time (FLRT) dynamic load balancing algorithm that is effective for crisp inputs from different heterogeneous systems. Thus, FLRT is a novel paradigm which can select the nearest neighbor edge server from the user's current location where the response time and load of the edge server are lowest.
Normal legitimate network traffic on both LANs and wide area IP networks exhibits self-similarity, i.e. the scale invariance property. Superimposition of legitimate traffic and high intensity non-self-similar traffic results in degradation of the self-similarity of normal traffic. The rescaled range method is used to calculate the Hurst parameter and its deviation from the normal value. A two-input, one-output fuzzy logic block is used to determine the intensity of a Denial of Service (DoS) attack. In order to detect self-similarity, we have used synthetic self-similar data generated using a Fractional Gaussian Noise process, and to identify the existence of Denial of Service, the DARPA IDS evaluation dataset is used. C code for the statistical method is implemented on the DSP processor TMS320C6713 platform.
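A compact sketch of the rescaled range estimate used above is shown below (in Python for illustration; the paper's implementation is C on the TMS320C6713). The doubling chunk sizes and the log-log regression are the standard recipe; self-similar traffic gives H well above 0.5, and a drop toward 0.5 is the deviation passed to the fuzzy block.

    import numpy as np

    def hurst_rs(x, min_chunk=8):
        # Rescaled range (R/S) estimate of the Hurst parameter of the series x.
        x = np.asarray(x, dtype=float)
        n = len(x)
        sizes, rs = [], []
        size = min_chunk
        while size <= n // 2:
            vals = []
            for start in range(0, n - size + 1, size):
                chunk = x[start:start + size]
                dev = np.cumsum(chunk - chunk.mean())       # cumulative deviation from the mean
                r, s = dev.max() - dev.min(), chunk.std()   # range and standard deviation
                if s > 0:
                    vals.append(r / s)
            if vals:
                sizes.append(size)
                rs.append(np.mean(vals))
            size *= 2
        # H is the slope of log(R/S) against log(chunk size)
        return np.polyfit(np.log(sizes), np.log(rs), 1)[0]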
The Internet of Things (IoT) is a booming term nowadays. It refers to the interconnection of various objects such as RFID (radio frequency identification) tags, actuators, smart devices and sensors over the Internet, and it is an interdisciplinary approach to studying computing devices and their behavior. A lot of research is going on around IoT devices to realize their intrinsic potential. Over the last few years, we have seen great efforts from industry and academia to provide IoT solutions and develop a customer oriented market for IoT devices. In this paper, we examine various IoT solutions for Twitter data visualization and analysis. Twitter, being a popular social networking site, always gives researchers scope for innovation. A wide variety of solutions have been proposed for 3-D visualization of Twitter data, but “Twitter Mood Light” is the most popular among them.
Development of intelligent transportation systems is a need of all developing countries where urbanization and industrialization are growing rapidly. VANETs are being used as a tool for improving road safety by alerting drivers about accidents that have occurred ahead of them, or for providing internet access to passengers via gateways along the road. Due to the highly dynamic nature of nodes in VANETs, designing a routing protocol for a VANET is quite challenging compared to the MANET environment. Researchers have suggested several routing mechanisms for VANETs. Some routing decisions are based on topology based selection whereas others consider different parameters such as the location information of nodes, traffic lights, etc. As no benchmarking scheme is available for choosing a routing protocol in VANETs, this article gives an insight into how to choose a routing protocol depending on varying traffic conditions. Three popular protocols, AODV, DSR and LAR, have been chosen for analysis in varying traffic environments. All three protocols have been critically tested against different metrics such as throughput, packet delivery ratio and routing overhead during the simulation. The simulation is carried out with the help of the open-source tools NS2, a network simulator, and SUMO, a traffic simulator.
With the advent of Information Technology, large volumes of hardcopy documents are being scanned and stored as document images. Due to the age of the source document, the quality of the ink and recurring photocopies of the same source, the generated document images are degraded in quality. Degraded document images obtained from different sources are stored in different places depending on the requirement of the images for various purposes. This leads to the storage of multiple copies of the same document image with variations in degradation. Establishing equivalence between two degraded document images using an Optical Character Recognizer (OCR) is not possible, as OCR fails to recognize characters under degraded conditions. In this paper, a novel approach is proposed to establish equivalence between two degraded document images based on layout and content structure. Through projection profiles, the number of components and the occurrence of components in the document images are compared correspondingly to establish layout equivalence. The components at paragraph level are compared based on foreground density and an entropy quantifier to establish content structure equivalence. The efficacy of the proposed model is tested over a variety of degraded document images.
False data injection attacks (FDIA) on the smart grid are a popular subject of current research. The presence of FDIA and other such attacks in the smart grid is partly due to the combination of Information and Communication Technology with power systems. FDIA on the linear model of the power system have been extensively analyzed in the literature; however, the non-linear system model has not received the same amount of attention. This paper proposes the concept of balanced and unbalanced measurement sets for the purpose of corrupting the state variables in linear and non-linear power system state estimators. The effect of balanced and unbalanced measurement sets for targeted constrained and unconstrained attacks is analyzed for linear and non-linear state estimators.
The developed system is intended to generate Tamil lyrics from a sequence of images and a derived tune. The choice of tune is automatically identified from the input situation and the notes are generated using a newly devised algorithm based on Carnatic music characteristics. The system thus generates two sets of lyrics based on the input: `Context (from text) and Tune' and `Context (from text) and Image'. The image sequence helps in generating lyrics that are in accordance with the visual effects of the song. This is achieved by object extraction from the input images and location identification using a heuristic algorithm. Singable lyrics are generated by creating a tune using Carnatic raga lakshanas, which are its characteristics. An appropriate raga for the extracted emotion is determined by referring to a designed Raga-Emotion database. The raga characteristics are then used to synthesize notes leading to the tune.
Researchers believe that power reduction at the earliest stages of the system design process has a higher impact on the final result. Multiple supply voltage design is broadly acknowledged as a compelling approach to reduce the power consumption of a CMOS circuit. A SAT-based approach which targets operation scheduling with varying voltages and produces a circuit that consumes less power is proposed in this paper. Experiments with HLS benchmarks show that the proposed schemes achieve more power reduction once the number of operating voltage levels is increased (here 5 V, 3.3 V and 2.4 V).
Cloud Computing is a recent technology that is based on a shared pool of resources and provides features like ubiquitous access, multi-tenancy, flexibility, scalability and pay-as-you-use, which makes it resource efficient and cost effective. But cloud-based systems open up unfamiliar threats in authentication and authorization. Explicit authorization agreements must be defined at the smallest level, especially in multi-tenant environments. The relationship between the Cloud Service Provider and the customer must also clearly state who holds administrative rights and indirect access to privileged customer information. Moreover, the scenario of cloud adoption in the educational and research community is still developing and has some security concerns. This paper provides a brief review of cloud security concerns for the adoption of cloud computing in data sensitive research and technology aided education. This paper also proposes an ECK based framework for securing end-user data in a Community Cloud. Implications and considerations for additional research are provided as well.
Constraint based wireless sensor networks (WSNs) provide automated solutions to generic or specific problems as per the specific requirements of each application, and are available nowadays at low cost. There is always a trade-off between reliability and energy consumption, which are the main objectives of resource constrained WSNs. Considering various challenges, protocols are examined and analysis tests are performed on CODA, ECODA, PETLP, RT2 and ESRT. We identify challenges and areas of improvement where the scope could be explored or refined to address unsolved or naive issues in order to make these protocols robust and scalable within a specific context. This study reveals factors such as unreliable event detection, lack of the right decision at the right time, no collaborative work, inappropriate delay bounds, unreliable heterogeneous data transmission schemes, and incorrect routing layer procedures addressing a specific problem, which typically affect the achievement of QoS parameters such as priority based or variable reliability, congestion control and network lifetime of the sensor network. Finally, we offer suggestions for improvements to the existing work.
Smart devices offer plenty of applications which consume considerable power. The number of applications running on smart devices and the number of users are increasing day by day, but battery capacity is not improving at the same pace. This paper describes the implementation of an Android based Wakelock Tracking and Releasing System (WTRS) that can be used for reducing power consumption on Android devices. The WTRS system detects the improper behavior of buggy wakelocks and releases them in a real time environment. The main aim of the presented work is to highlight how power consumption can be reduced in smart devices. The key concept of the WTRS system is described with an analysis of power consumption and resource utilization on an Android device. The WTRS system is developed on the Android operating system using the Java programming language. Experiments performed on an Android device show a reduction in power consumption of up to 62%.
Over the last 20 years, the analysis, modeling and simulation of network traffic in different networks adopted techniques based on statistics and probability theory. We bring out the limitations of these approaches and implement an alternative approach using the long range dependence and self-similarity in network traffic, centered around the wavelet analysis method. The Hurst (H) parameter, which estimates the amount of self-similarity, is evaluated using the proposed method. The algorithm is implemented in the C programming language, with synthetic self-similar traffic with a predefined H parameter used as input. The proposed algorithm is implemented in real time on a TMS320C6713 processor.
Document image binarization is the process of converting a document image into a binary image containing text as foreground and plain white as background, or vice versa. Characters in the document image should be extracted from the binarized image in order to recognize them, so the performance of the character recognition system completely depends on the binarization quality. This paper presents a simple and efficient binarization method for degraded document images. The proposed technique tolerates the high inter and intra intensity variation in degraded document images. The proposed method is based on spatial domain techniques: the Laplacian operator, an adaptive bilateral filter and a Gaussian filter, and works well for degraded documents and palm leaf manuscript images.
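The exact way the three operators are combined is the paper's contribution and is not given in the abstract; the sketch below (Python/OpenCV, an assumption) only chains the named operators in one plausible order to show how such a spatial-domain binarizer can be assembled.

    import cv2
    import numpy as np

    def binarize(gray):
        # gray: 8-bit grayscale document image
        smooth = cv2.bilateralFilter(gray, 9, 75, 75)                 # edge-preserving smoothing
        lap = cv2.Laplacian(smooth, cv2.CV_32F, ksize=3)              # emphasise stroke edges
        lap = cv2.GaussianBlur(np.abs(lap), (3, 3), 0)                # suppress residual noise
        edges = cv2.normalize(lap, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        _, bw = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return bw                                                     # text foreground on white background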
This paper presents an enhanced architecture for integrating the cloud with wireless sensor networks to analyze weather data and notify SaaS users with alerts during weather disasters at low cost. The occurrence of natural disasters affects lives, damages property and changes our lives completely. Existing systems do not support node and network level virtualization for weather sensors. The proposed system overcomes this limitation by deploying WSN infrastructure for multiple weather applications using virtual sensor and overlay concepts. It monitors weather data, provides SaaS and social network disaster alerts based on the ID3 decision technique, and provides cloud authentication using secure shell. These factors improve the system and provide high quality disaster alerts to users and weather analysts at low cost.
Cognitive Radio technology holds great promise in solving the problem of spectrum scarcity. A plethora of routing protocols exist for Cognitive Radio networks; however, most of them rely on establishing an end-to-end path using a Common Control Channel. This paper focuses on scenarios where the Primary User traffic is very high and erratic and therefore trying to set up end-to-end paths is not feasible. A novel solution to this problem is proposed where the cognitive users form a Cognitive Delay Tolerant Network through a modification in the network stack. Well researched delay tolerant networking routing protocols designed for networks with unreliable links, configured for multiple channels, can be used for routing in high primary user traffic environments. Through extensive simulation we show that the proposed architecture provides a very high delivery ratio (close to 1) in the presence of very high primary user traffic, with negligible computational complexity and without a common control channel. We also show that relying on routing protocols that try to establish end-to-end paths, such as Multi-Channel AODV, is not feasible. The performance of Multi-Channel AODV and the proposed architecture is compared and analyzed with bundle/packet delivery ratio, end-to-end delay and hop count as performance metrics.
Studies in Wireless Mesh Networks have often focused on the comparison of various mesh protocols, or the design of new mesh protocols in a simulation environment (such as NS2, OPNET etc.). However the results obtained in a simulation environment are sometimes not valid in a real site where the network is meant to be deployed. This paper describes the development of a new large scale wireless mesh network test bed and a preliminary analysis of three mesh protocols. This experimental test bed can be used to compare various protocols, to design new protocols and perform a detailed testing and so on.
In Digital Signal Processing (DSP), the Multiply-Accumulate Computation (MAC) unit plays a very important role and lies in the critical path. The multiplier is one of the most important blocks in the MAC unit, and the overall performance of the MAC unit depends on the resources used by the multiplier. Therefore, this paper describes the design of a Partial Product Reduction Block (PPRB) that is used in the implementation of a multiplier with better area, delay and power performance. PPRB reduces the partial products row wise by using different multi-bit adder blocks instead of conventional column wise reduction. A MAC unit consisting of the multiplier realized using the proposed partial product reduction technique has a delay reduction of 46%, power consumption reduced by 39% and area requirement reduced by 17% when compared to a MAC unit realized using a conventional multiplier architecture.
In earlier days people were only information consumers, but since the advent of Web 2.0 they play a more important role in publishing information on the Web in the form of comments and reviews. This user generated content has forced organizations to pay attention to analyzing it for better visualization of the public's opinion. Opinion mining or sentiment analysis is an autonomous text analysis and summarization system for reviews available on the Web. Opinion mining aims at distinguishing the emotions expressed within reviews, classifying them into positive or negative, and summarizing them into a form that is quickly understood by users. Feature based opinion mining performs fine-grained analysis by recognizing individual features of an object upon which the user has expressed an opinion. This paper gives an insight into various methods proposed in the area of feature based opinion mining and also discusses the limitations of existing work and future directions in feature based opinion mining.
One of the most prominent applications of Wireless Sensor Networks (WSN) is for the purpose of surveillance. Here, a number of sensor nodes are deployed to monitor a particular area. But these sensors run on limited battery capacity and are also costly. Thus, the sensor node selection technique needs to be optimized so that using minimum number of sensors, maximum possible area can be covered so that the energy is used efficiently and the power consumption is reduced. This paper reviews three existing algorithms: minimax, lexicographic minimax and greedy forward. It also introduces a new algorithm, maximum coverage area algorithm and compares its performance with the three existing algorithms for optimized selection. Performance comparison in terms of coverage ratio has been made between the four algorithms. Coverage ratio is the measure of the area effectively covered with respect to the total area. According to the simulation results, the maximum coverage area algorithm outperforms minimax, lexicographic minimax and greedy forward algorithms.
In this digital world, due to the rapid growth in image processing technology and the internet, piracy of images is becoming a more and more serious problem. In order to prohibit such piracy, watermarking is a widely used approach. In conventional watermarking, the watermark is inserted in the host image by modifying its original information, which creates a trade-off between robustness and imperceptibility. To overcome this, zero watermarking is used. In this process, instead of embedding the watermark, it is created using the host image and the original watermark of ownership identification. Zero watermarking does not alter the original information of the image and provides perfect imperceptibility. In this paper we propose robust and dynamic zero watermarking using the Hessian-Laplace detector and the logistic map. Feature points of the host image are detected using the Hessian-Laplace detector and used along with the original watermark of ownership identity to construct the zero-watermark. Finally the constructed zero-watermark is scrambled using the logistic map to improve its security before storing it in the database. Our dynamic approach lets the original watermark size be decided solely by the total number of detected feature points, and all pixels of the original watermark are used for creating the zero-watermark. We have compared our algorithm with previous work and obtained better reconstruction of the original watermark under noise, filtering, compression, translation and cropping attacks.
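The scrambling step can be illustrated with the short sketch below (Python; the initial value and control parameter act as the secret key and are illustrative). The construction of the zero-watermark itself from Hessian-Laplace feature points is the paper's method and is not reproduced here.

    import numpy as np

    def logistic_scramble(bits, x0=0.37, r=3.99):
        # bits: flat 0/1 array (the constructed zero-watermark); x0, r: chaotic key parameters.
        n = len(bits)
        seq = np.empty(n)
        x = x0
        for i in range(n):
            x = r * x * (1.0 - x)        # logistic map iteration
            seq[i] = x
        perm = np.argsort(seq)           # the chaotic sequence defines a permutation
        return bits[perm], perm

    def logistic_unscramble(scrambled, perm):
        out = np.empty_like(scrambled)
        out[perm] = scrambled            # invert the permutation using the stored key
        return out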
Grid computing pools resources from various heterogeneous computers to solve a particular problem which requires huge computation. In a grid, a number of known and unknown entities from the same or different domains participate in communication, wherein every entity needs to undergo a strong authentication and authorization scheme. There is risk in communicating among untrusted entities since there is a chance of misusing resources. So, in order to avoid this problem a strong trust establishment mechanism is required. This paper demonstrates a randomized algorithm for developing a trust model which makes the user and the service provider maintain consistency among the ratings they receive from each other every time, so that they reach the eligibility criteria for communication.
Agricultural image processing is one of the most innovative and important image processing areas to have emerged in the last few years, and the vast range of associated sub-domains has drawn the attention of researchers. This paper explores the different domains associated with agricultural image processing and examines the recognition model from a broader view. It presents a generalized framework for plant disease classification and recognition, and also surveys some of the effective classification approaches, including SVM, Neural Networks, K-Means and PCA.
Functional Magnetic Resonance Imaging (fMRI) is a neuroimaging technique used to capture images of brain activity. These images have high spatial resolution and hence are very high dimensional: each scan consists of more than one hundred thousand voxels. Not all scanned voxels are activated for every stimulus, so finding the voxels that are informative with respect to the stimulus is a prerequisite for any machine learning solution using fMRI data. The specific problem attempted in this paper is decoding cognitive states from multiple-subject fMRI data. Decoding multiple-subject data is challenging owing to differences in the shape and size of the brain across subjects. A Genetic-algorithm-based technique is proposed here for selecting voxels that capture commonality across subjects. Some popular feature selection techniques are compared against Genetic algorithms, and it is observed that feature selection using Genetic algorithms performs consistently and predictably better than the other techniques.
The exponential growth in the number of users on the Internet has led to variants of the algorithms and data structures traditionally used in peer-to-peer networks. Many data structures, such as adjacency matrices, skip webs, hash tables, skip lists and skip graphs, have been proposed to represent peer-to-peer networks. This paper explores one of these data structures, the skip graph, a variant of the skip list, for peer-to-peer networks. The existing search algorithm for skip graphs has O(log n) time complexity, which can be further decreased by using the proposed approach, named Adaptive Probabilistic Skip Graph (APSG). Modifications have been proposed for the scenario where a node is repeatedly queried by a certain node, and it has been experimentally verified that, in such scenarios, the search time is reduced to O(1). The major focus has been on optimizing the search algorithm by adding a probability vector to the basic structure of the skip graph node.
With the extensive usage of multimedia databases in real-time applications, there is a great need for efficient techniques to find images in huge digital libraries. To find an image in a database, every image is represented by certain features; texture and color are two important visual features of an image. In this paper we compare and analyze the performance of image retrieval using texture and color features, and further propose and implement an efficient image retrieval technique that uses both texture and color features of an image. Experimental evaluation is carried out on the Wang image database of 1000 unique images consisting of 10 classes.
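As a rough illustration of the color part of such a retrieval pipeline, the sketch below (an assumption for illustration, not the paper's implementation) extracts per-channel RGB histograms as feature vectors and ranks database images by histogram distance to a query.

```python
import numpy as np

def rgb_histogram(image, bins=16):
    """Concatenate per-channel histograms of an HxWx3 uint8 image into one feature vector."""
    feats = []
    for c in range(3):
        h, _ = np.histogram(image[:, :, c], bins=bins, range=(0, 256))
        feats.append(h / h.sum())            # normalize so images of different sizes compare fairly
    return np.concatenate(feats)

def retrieve(query, database, top_k=5):
    """Rank database images by L1 distance between color histograms."""
    qf = rgb_histogram(query)
    dists = [np.abs(qf - rgb_histogram(img)).sum() for img in database]
    return np.argsort(dists)[:top_k]

# toy example with random "images"
db = [np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(20)]
print(retrieve(db[3], db, top_k=3))          # index 3 should rank first
```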
Wireless Sensor Networks (WSNs), which comprise nodes with constrained resources, have numerous applications such as battlefield observation, target tracking and environment monitoring. As sensor nodes are deployed in hostile or remote environments and left unattended, they are prone to different kinds of attacks. To make message transmission more secure and reliable, adopting dynamic keys is very important for secure key management. Because of the limitations of WSNs, such as limited memory, battery life and processing power, the use of a cluster-based wireless sensor network reduces system delay and energy utilization; cluster-based protocols for sensor networks achieve energy-efficient and scalable routing, and the storage issue in sensor networks can be reduced by using a dynamic key management scheme. In this context, the Efficient Session Key Management Scheme for Cluster Based Mobile Sensor Networks (ESKM) provides an improved session key by updating it periodically within a cluster, and hence avoids different types of attacks from malicious nodes. However, ESKM still has issues: it is not scalable and cluster heads (CHs) are static in each round. We improve ESKM by dynamically electing a CH for each round and reducing energy consumption by transmitting messages via the CH, making the scheme more efficient in terms of energy consumption.
Today's high-density fabrication on chip has resulted in closer interconnects. As a downside, failures due to external radiation, crosstalk coupling, supply voltage fluctuations, temperature variations, electromagnetic interference (EMI) or combinations of these have increased; of these, the major concern is crosstalk and its consequences. In the past, techniques such as shielding, the use of repeaters and guard rings were proposed to alleviate the undesired signal transitions. These techniques fell out of favour because implementing them requires additional wires, which increases routing area. These shortcomings led to innovative encoding schemes such as No Adjacent Transitions (NAT), Boundary Shift Coding (BSC) and the bus-invert technique. This paper proposes a novel Zero Crosstalk Encoding Scheme and explores its potential applications in alleviating crosstalk in critical communication modules. The Zero Crosstalk Scheme generates codes that result in a 100% reduction of undesirable signaling transitions for Type-3 and Type-4 crosstalk.
PDF (Portable Document Format) documents have become popular in recent times. PDF documents contain named destinations that reference information such as page number, zoom level and scale factor. These named destinations must be resolved when the user loads a URL containing a named destination or selects a link within the PDF. Resolving named destinations in a PDF takes considerable time, which affects the response time after user selection, or at URL load time when rendering the PDF in the Chrome browser. In this paper, we present a new dictionary-based solution for improving the time to resolve named destinations. The dictionary is a map from named destinations to the corresponding page numbers in the PDF document. We then present the results of tests comparing the proposed approach with the existing approach, showing that the proposed approach performs better, while the creation of the named destinations dictionary does not take significant time. We conclude that our algorithm performs better because asynchronous message passing is avoided and the page number corresponding to a named destination is available in constant time and in a synchronous manner.
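The core idea, a precomputed map from destination names to page numbers so that resolution becomes a constant-time lookup, can be sketched as follows. The dictionary-building step is shown with a hypothetical iter_named_destinations helper, since the actual PDF parsing code is not given in the abstract.

```python
# Minimal sketch of the dictionary-based named-destination resolution idea.
# iter_named_destinations(pdf) is a hypothetical helper that yields
# (destination_name, page_number) pairs while the document is parsed once.

def build_destination_dictionary(pdf):
    """Built once at load time; afterwards every lookup is O(1) and synchronous."""
    return {name: page for name, page in iter_named_destinations(pdf)}

def resolve(dest_dict, name, default_page=0):
    """Resolve a named destination to a page number without asynchronous round trips."""
    return dest_dict.get(name, default_page)

# Usage (illustrative):
# dest_dict = build_destination_dictionary(loaded_pdf)
# page = resolve(dest_dict, "chapter3")   # jump target for '#chapter3' in the URL
```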
Retinopathy of Prematurity (ROP) is regarded as one of the important epidemic diseases in developing countries, including India, and there is an unprecedented, alarming increase in its incidence in India. ROP needs to be accurately diagnosed, as it can lead to permanent blindness if it is overlooked. The algorithm developed here for ROP screening has been tested on 37 images, which included 5 images of healthy infants. A sensitivity of 95.83% and an accuracy of 96.55% have been achieved for the classification of stages 2, 3 and 4.
Opportunistic routing (OR) is a promising routing technique that takes advantage of the broadcast nature of wireless communication. The key idea behind OR is to utilize packets overheard from neighbouring nodes, which in traditional routing were dropped or ignored. By utilizing these packets, OR saves the cost of retransmission and achieves energy efficiency, which is one of the most challenging problems in the design of routing protocols for wireless sensor networks. This paper reviews existing opportunistic routing protocols in wireless sensor networks, and also describes the basic concept and components of OR in WSNs.
The proposed antenna is a simple, low-cost, multiband antenna. It resonates at 0.90 GHz, 2.40 GHz and 5.50 GHz, frequencies suitable for GSM, Bluetooth and Wireless Fidelity (Wi-Fi) applications, and it provides the required bandwidth with good gain. The design is a Sierpinski fractal monopole antenna with a rectangular defected ground structure; it shows proper impedance matching and gives an omnidirectional radiation pattern with low back-lobe radiation. The design is based on the Sierpinski gasket geometry with 2 iterations, and the antenna is constructed on low-cost FR4 material with permittivity 4.4.
Volatility indicates stock market movement and, in general terms, can be defined as the risk associated with stocks; it is measured as the standard deviation and variance of closing prices. Forecasting volatility has been a prime issue in financial markets, and researchers have been working on it for more than a decade. The main goal of this paper is to forecast volatility with high accuracy. Volatility is first calculated using traditional volatility estimators: the Close, Garman-Klass, Parkinson, Rogers and Yang estimating methods. The time series forecasting techniques ARIMA and ARFIMA and a feed-forward Neural Network based technique are then used to forecast volatility, and the results of the three techniques are compared to find an accurate estimation and forecasting combination. The best forecasting technique is shortlisted by comparing the errors of all forecasting techniques using measures such as ME, RMSE, MAE, MPE, MAPE, MASE and ACF1. The Garman-Klass estimator combined with the ARIMA forecasting technique yields the most accurate volatility forecasts for the next 10 days.
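For reference, two of the range-based estimators named above have simple closed forms; the sketch below computes Parkinson and Garman-Klass variance estimates from open/high/low/close series (a generic textbook formulation, not the paper's exact code; the toy data and any annualization choices are assumptions).

```python
import numpy as np

def parkinson_variance(high, low):
    """Parkinson estimator: sigma^2 = mean( ln(H/L)^2 ) / (4 ln 2)."""
    hl = np.log(np.asarray(high) / np.asarray(low))
    return np.mean(hl ** 2) / (4.0 * np.log(2.0))

def garman_klass_variance(open_, high, low, close):
    """Garman-Klass estimator: mean of 0.5*ln(H/L)^2 - (2 ln 2 - 1)*ln(C/O)^2."""
    hl = np.log(np.asarray(high) / np.asarray(low))
    co = np.log(np.asarray(close) / np.asarray(open_))
    return np.mean(0.5 * hl ** 2 - (2.0 * np.log(2.0) - 1.0) * co ** 2)

# toy OHLC data for a few trading days
o = [100.0, 101.2, 100.8]
h = [102.0, 102.5, 101.9]
l = [ 99.5, 100.4,  99.9]
c = [101.2, 100.8, 101.5]
print("Parkinson daily vol:",    np.sqrt(parkinson_variance(h, l)))
print("Garman-Klass daily vol:", np.sqrt(garman_klass_variance(o, h, l, c)))
```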
India has a multilingual society and most Indians are polyglots, capable of speaking several languages. However, many of them are not familiar with all the corresponding scripts, so the script can be a barrier to accessing content in some of those languages. This paper presents a browser plugin for Google Chrome which instantly transliterates a website written in any Indic script into Kannada. Our plugin exploits the parallelism of the Unicode blocks and also uses a rule-based approach to transliterate web pages to Kannada, enabling a polyglot user to read online documents in other Indic scripts through the Kannada script. Currently, it supports transliteration from Tamil, Telugu, Malayalam, Bangla, Gujarati, Odia, Punjabi, Sanskrit and Hindi pages. The quality of transliteration was scored by 45 users on a scale of 1 to 5, and a mean opinion score of 4.6 was achieved.
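The Unicode block parallelism the plugin relies on can be shown with a small sketch: the major Indic blocks are each 128 code points wide and largely share the same internal layout, so a character can often be mapped to Kannada by replacing its block base. This is an illustrative simplification, not the plugin's code; the rule-based layer mentioned above would handle the code points where the layouts diverge.

```python
# Block bases of some Indic scripts in Unicode (each block is 0x80 code points wide).
INDIC_BASES = {
    "devanagari": 0x0900,   # Hindi, Sanskrit, Marathi
    "bengali":    0x0980,
    "gurmukhi":   0x0A00,   # Punjabi
    "gujarati":   0x0A80,
    "tamil":      0x0B80,
    "telugu":     0x0C00,
    "malayalam":  0x0D00,
}
KANNADA_BASE = 0x0C80

def to_kannada(text):
    """Naive offset-based transliteration to Kannada using block parallelism."""
    out = []
    for ch in text:
        cp = ord(ch)
        for base in INDIC_BASES.values():
            if base <= cp < base + 0x80:
                cp = KANNADA_BASE + (cp - base)   # same offset inside the Kannada block
                break
        out.append(chr(cp))
    return "".join(out)

print(to_kannada("నమస్కారం"))   # Telugu input rendered with Kannada letters
```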
Protection and authentication of digital multimedia content during transmission has become very important with the ongoing development of communication and networking. Addressing this popular recent research topic, this paper presents a comparative analysis of different watermarking techniques implemented in MATLAB. It also presents a robust VLSI architecture for watermarking which survives most of the attacks to which data is exposed on the Internet and contributes towards anti-plagiarism measures. The paper exhibits an ASIC implementation of an efficient digital image watermarking algorithm based on the frequency-domain Discrete Wavelet Transform and spatial-domain bit plane slicing.
Recommender systems are becoming an inherent part of today's e-commerce applications; since they have a direct impact on the sales of many products, they play an important role in e-commerce. Collaborative filtering is one of the oldest techniques used in recommender systems, and a lot of work has been done towards its improvement, across its two components, user-based and item-based filtering. The basic requirements of today's recommender systems are accuracy and speed. In this work, an efficient technique for a recommender system based on hierarchical clustering is proposed. The user- or item-specific information is grouped into a set of clusters using the Chameleon hierarchical clustering algorithm, and a voting system is then used to predict the rating of a particular item. To evaluate the performance of the Chameleon-based recommender system, it is compared with an existing technique based on the K-means clustering algorithm. The results demonstrate that the Chameleon-based recommender system produces less error than the K-means-based recommender system.
EAP-AKA is used as an authentication protocol during handoff across heterogeneous systems with different underlying technologies, such as the 3GPP-WLAN internetwork. However, the protocol cannot be put to practical use due to its high authentication delay and its vulnerability to several attacks, such as user identity disclosure, man-in-the-middle attacks and DoS attacks. Moreover, the validity of the Access Point of the WLAN network is often not checked, leaving the user vulnerable to several attacks even after a heavy authentication procedure. We therefore propose a modified, secure EAP-SAKA protocol that uses Elliptic Curve Diffie-Hellman for symmetric key generation and takes the validation of the access point into consideration. Additionally, we make EAP-SAKA faster by decreasing the propagation delay of the signaling messages. The proposed protocol is supported by a detailed security analysis and performance analysis. Security validation of EAP-SAKA is also carried out using AVISPA, a widely accepted formal verification tool, and the protocol is found to be safe.
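To illustrate the Elliptic Curve Diffie-Hellman step on which the symmetric key generation relies, the sketch below derives a shared session key between two parties using the pyca/cryptography package; the curve choice, the KDF parameters and the info label are illustrative assumptions, not values specified by EAP-SAKA.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates an ephemeral EC key pair (curve choice is illustrative).
peer_priv = ec.generate_private_key(ec.SECP256R1())     # e.g. the access network side
user_priv = ec.generate_private_key(ec.SECP256R1())     # e.g. the user equipment side

# Exchange public keys and compute the same shared secret on both sides.
secret_user = user_priv.exchange(ec.ECDH(), peer_priv.public_key())
secret_peer = peer_priv.exchange(ec.ECDH(), user_priv.public_key())
assert secret_user == secret_peer

# Derive a symmetric session key from the shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"eap-saka-demo").derive(secret_user)
print(session_key.hex())
```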
Mathematical morphology (MM) helps to describe and analyze shapes using set theory. MM can be effectively applied to binary images, which are treated as sets, and the basic morphological operators are an effective tool in image processing. Morphological operators have also been developed on graphs and hypergraphs, and these operators have shown better performance in image processing applications. Bino et al. [8], [9] developed the theory of morphological operators on hypergraphs: a hypergraph structure is considered, the basic morphological operations of erosion and dilation are defined, and several further operators, opening, closing and filters, are defined on hypergraphs. Hypergraph-based filtering has shown comparatively better performance than morphological filters based on graphs. In this paper we evaluate the effectiveness of hypergraph-based alternating sequential filters (ASF) on binary images. Experimental results show that hypergraph-based ASF filters outperform graph-based ASF.
Storage virtualization is one of the most widely used terms in industry owing to its importance. Data has become critical to hold and to extract needed information from, and the datacenter, and hence its management, has become an integral part of any organization. For the best and most efficient results, as well as proper storage utilization and management, a storage area network (SAN) is needed. In a SAN environment there are compatibility issues between different vendors and their drivers, which motivates storage virtualization, applied in the SAN environment. The classical techniques [1] for achieving storage virtualization suffer from many problems such as improper disk utilization, high latency, power consumption, various attacks and security issues. In this paper we design and implement a storage virtualization technique, EC2S2, to obtain better results in terms of security, high throughput, efficient management and low latency. Through security and performance analysis we show that our method is secure and efficient.
Object recognition is a complex problem in image processing. Mathematical morphology provides shape-oriented operations that simplify image data, preserving their essential shape characteristics and eliminating irrelevancies. This paper briefly describes morphological operators on hypergraphs and their application to thinning algorithms. The hypergraph-based morphological operators are used to prevent errors and irregularities in the skeleton, an important step in recognizing line objects. The hypergraph-based morphological operators, dilation, erosion, opening and closing, form a novel approach in image processing and act as filters that remove noise and errors from images.
Existing tab-based browser applications have a limited area in which to render tabs and display tab text. Due to this size limitation, tabs in existing applications or browsers are restricted from rendering more display information, and the list or sequence of tabs becomes cluttered and hard to view. This paper describes new methods and a system for organizing browser tabs in an electronic device. It also describes a method of organizing browser tabs and rendering tab-specific contents, and explains the enhanced user experiences. The field of the invention for Smart Tabs (a new approach to intelligently organizing browser tabs) covers smartphones, laptops, desktops, devices with multiple displays, edge devices and web browsers. Many tab-management features and patents exist in the field of browser management, yet there is still room for new inventions and feature enhancements. Different tab-management features can provide software differentiation for newly introduced hardware devices, and virtualization for wider and older devices could further broaden their applicability.
Multicore systems, along with GPUs, have made it possible to increase parallelism extensively, and a few compilers have been enhanced to address emerging issues in threading and synchronization. Proper classification of algorithms and programs would greatly benefit the programming community by exposing opportunities for efficient parallelization. In this work we analyze the existing species for algorithm classification, discuss the classification of related work, and compare the kinds of problems that are difficult to classify. We selected a set of algorithms that are similar in structure but perform different specific tasks. These algorithms were tested using existing tools such as the Bones compiler and A-Darwin, an automatic species extraction tool. Access patterns are produced for the various algorithmic kernels by running them through A-Darwin, and the resulting code segments are analyzed. We found that not all algorithms can be classified using only the existing patterns, and we created a new set of access patterns.
This paper presents a system for power monitoring and scheduling in industry using power line communication. By using power line communication, an embedded system is developed without any additional wiring. An embedded server supports a web page user interface, allowing the user to control industrial equipment from a remote location. Power monitoring can also be done remotely through a GUI designed in Visual Basic.
Internet Protocol version 6 (IPv6) is intended to solve some of the issues of the present IP version, IPv4, among them delay, latency, reliability, errors, address exhaustion, testing and resilience. This paper deals with the migration from IPv4 to the next-generation IPv6 over an optical network configured with a routing table, and analyzes the flow of data such as multimedia transfers. A virtual connection path between server and client systems (as in the Java enterprise edition, J2EE) is established using TCP (Transmission Control Protocol). The proposed work implements networking over optical cables for a cost-effective IPv4-to-IPv6 migration for multimedia communication, using a pair of optical converter devices. During the experimental analysis, the tunnelling method of IPv4-to-IPv6 conversion established over the optical network with a routing table proved easy to verify. The time required to receive the data at the client end was evaluated, and the results obtained while downloading an image file (.jpeg), an audio file (.mp3) and a video file (.mpeg4) were 0.21, 3 and 10 seconds respectively; the same transfers were also carried out with streaming through a server at a bit rate of 10 Gbps. The size of each multimedia file, image, audio and video, was kept constant at 20 Mb. We have thus experimentally analyzed the transfer of multimedia data via a client-server configuration over the optical network using our own routing table.
The design of conventional protocols for wireless sensor networks (WSN) is mainly based on energy management. Layered protocol solutions are inefficient because sensor networks mainly deliver real-time content, so cross-layer communication between layers of the protocol stack is highly desirable. In this paper, a reliable cross-layer routing scheme (CL-RS) is proposed to balance energy and achieve prolonged lifetime through controlled utilization of the limited energy. CL-RS considers two adjacent layers, the MAC layer and the network layer; optimization issues are identified in these two layers and solutions are provided to reduce energy consumption and thereby increase network lifetime. MAC layer protocols typically compromise on packet latency to achieve higher energy efficiency, so it is essential to reduce end-to-end delay and energy consumption using a low-duty-cycle cross-layer MAC (CL-MAC). The joint optimization design is formulated as a linear programming problem. The network is partitioned into four request zones to increase network performance through an appropriate duty cycle and routing scheme. We demonstrate by simulation that the strategy combining the CL-RS and CL-MAC algorithms at each layer significantly increases the network lifetime, and that a relation exists between network lifetime maximization and the reliability constraint. We evaluate the performance of the proposed scheme under different scenarios using ns-2; experimental results show that the proposed scheme outperforms layered AODV in terms of packet loss ratio, end-to-end delay, control overhead and energy consumption.
Teaching-Learning-Based Optimization (TLBO) and the Artificial Bee Colony (ABC) algorithm are modern population-based optimization methods used to solve diverse, complex engineering and real-time applications. Obtaining the best solution for complex problems requires more time and results in performance degradation, so to improve the performance of population-based algorithms they are either parallelized or implemented on General Purpose Graphics Processing Units (GPGPU). In this paper, GPGPU-based implementations of the TLBO and ABC algorithms are discussed for solving unconstrained benchmark problems. The performance of both approaches is compared on the basis of standard deviation, standard error of the mean and execution time. It is observed that both approaches give good results, but the time taken by the TLBO algorithm is more than that of the ABC algorithm.
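As a reminder of what one TLBO iteration looks like (the part that is parallelized per learner on a GPGPU), the sketch below implements the standard teacher phase of TLBO in plain NumPy on a toy sphere benchmark; it is a generic textbook formulation, not the paper's GPU code, and the learner phase is omitted for brevity.

```python
import numpy as np

def sphere(x):
    return np.sum(x ** 2, axis=1)              # toy benchmark: minimize sum of squares

def tlbo_teacher_phase(pop, fitness, lo=-5.0, hi=5.0):
    """One TLBO teacher phase: move every learner towards the current best (teacher)."""
    teacher = pop[np.argmin(fitness)]
    mean = pop.mean(axis=0)
    tf = np.random.randint(1, 3)                # teaching factor, 1 or 2
    r = np.random.rand(*pop.shape)
    candidate = np.clip(pop + r * (teacher - tf * mean), lo, hi)
    cand_fit = sphere(candidate)
    improved = cand_fit < fitness               # greedy acceptance per learner
    pop[improved] = candidate[improved]
    fitness[improved] = cand_fit[improved]
    return pop, fitness

pop = np.random.uniform(-5, 5, size=(30, 10))   # 30 learners, 10-dimensional problem
fit = sphere(pop)
for _ in range(100):
    pop, fit = tlbo_teacher_phase(pop, fit)
print("best fitness:", fit.min())
```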
Cloud computing has been considered the architectural model for future-generation Information Technology. In spite of its numerous advantages in both technical and business aspects, cloud computing still poses new challenges, particularly in data storage security; the main threat is trustworthiness. The data centers that power a cloud cannot perform computations on encrypted data stored in the cloud. With advances in homomorphic encryption techniques, data stored in the cloud can be analyzed without decrypting the entire dataset. This paper discusses various homomorphic encryption schemes and their applications across domains, and proposes a homomorphic method with byte-level homomorphism.
A simple and efficient method for communication of text information using joint source-channel coding is presented in this paper. The information is encoded using a symmetric reversible variable length code and a corresponding header sequence is generated. The header information, headers and codewords form a frame. Since the header plays a crucial role in decoding the codewords at the receiver, it is channel encoded, and the coded frame is transmitted after modulation. Orthogonal Frequency Division Multiplexing (OFDM), a technique known to provide high data rates and good performance over wireless channels, is used in designing the transceiver system.
The cloud computing environment is a vast and important field subject to continual technological innovation. We contribute to this wide area by proposing an agent-based paradigm that can be applied to improve commerce between producers and consumers in any industry. Our proposed work provides a platform on a cloud environment in which agents mediate between customers and producers, analyzing the requests put forth by customers and mapping each to the best-suited producer. This work involves the creation of producer agents and consumer agents, and the proposed platform showcases how these agents interact to solve the presented problem efficiently. A mathematical model is also proposed as the underlying principle describing the working of the agents. The outcome of this work is a cloud platform in which agents interact with one another to serve user requests optimally.
Wireless Sensor Networks have emerged as one of the most promising technologies and have opened up research avenues due to their widespread applicability. They have found applications in critical information infrastructure such as military surveillance and nuclear power plants, hence the need to restrict access to the critical information of such systems. To maintain confidentiality, user authentication is required so that only legitimate users are allowed to retrieve information. Several two-factor user authentication schemes have been suggested by the research community. In this paper, a brief review of various security issues, security attacks and authentication schemes pertaining to Wireless Sensor Networks is presented.
SMILES coding is a popular, state-of-the-art representation for entering chemical reactions into a computer, while the Bond Electron (BE) matrix proposed by Ugi and his co-workers is the state-of-the-art method for representing a chemical compound in a computer. However, both SMILES coding and the BE matrix representation cover only organic compounds. This paper extends SMILES coding to Extended SMILES (ESMILES) coding and the BE matrix to an Extended BE (EBE) matrix representation, so as to include both organic and inorganic compounds. Additionally, the ESMILES coding and the corresponding EBE matrix representation presented in this paper include anions and cations, coordination compounds and addition compounds in chemical reactions. Finally, an algorithm to convert chemical reactions in ESMILES notation to the EBE matrix representation is presented.
Images are compressed using lossy and lossless compression schemes to utilize bandwidth effectively and to provide enough space for storing voluminous data. Fractal Image Compression (FIC) is one such lossy compression scheme, based on the contractive mapping theorem. Affine transforms are employed in fractal image compression to map range blocks to domain blocks, exploiting the self-similarity present in images. Since performance measures such as compression ratio (CR) and peak signal-to-noise ratio (PSNR) need to be enhanced, this paper analyzes FIC using a quad-tree partitioning technique together with the Embedded Block Coding with Optimized Truncation (EBCOT) technique for medical images. FIC with EBCOT is implemented on medical images such as CT of bone and MR images of the brain, and the performance measures CR and PSNR are evaluated at different threshold values. MATLAB simulation results for these performance measures show that both PSNR and CR are higher when FIC is performed along with EBCOT encoding.
This paper aims to determine the forest cover type from a dataset of cartographic attributes evaluated over four wilderness areas of the Roosevelt National Forest in northern Colorado. The cover type data is provided by the US Forest Service inventory, while a Geographic Information System (GIS) was used to derive cartographic attributes such as elevation, slope and soil type. The dataset was analyzed and pre-processed, and feature engineering techniques were applied to derive relevant, non-redundant features. A comparative study of several decision tree algorithms, namely CART, C4.5 and C5.0, was performed on the dataset. With the new dataset built by applying feature engineering techniques, Random Forest and C5.0 improved the accuracy by 9% compared to the raw dataset.
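This cover type dataset is publicly available, and a baseline of the kind described can be reproduced in a few lines; the sketch below (an illustrative baseline, not the paper's exact pipeline or engineered features) fits a Random Forest on the scikit-learn copy of the covertype data.

```python
from sklearn.datasets import fetch_covtype
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# 581,012 samples, 54 cartographic features, 7 cover type classes.
data = fetch_covtype()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=42, stratify=data.target)

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=42)
clf.fit(X_train, y_train)
print("baseline accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```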
Data mining is used to mine useful information from huge datasets and to find meaningful patterns in the data, and many organizations now use data mining techniques. Frequent pattern mining has become an important data mining technique and a focused research area; frequent patterns are patterns that appear most frequently in a data set. Various methods have been proposed to improve the performance of frequent pattern mining algorithms. In this paper, we provide the preliminaries and basic concepts of the frequent pattern tree (FP-tree) and present a survey of recent developments in this area, which is receiving increasing attention from the data mining community. Experimental results show that FP-tree based approaches achieve better performance than Apriori, so we concentrate on recent FP-tree modifications and other new techniques beyond Apriori. A single paper cannot be a complete review of all the algorithms; we have included only four relevant recent papers that directly use the basic concept of the FP-tree, with a brief description of each technique. This detailed literature survey is a preliminary to the proposed research, which will be carried out further.
Palm-vein biometric authentication is a physiological pattern recognition technique that uses the vein patterns of an individual's palm for authentication. This paper presents a comprehensive comparative study of basic transforms, PCA (Principal Component Analysis), DCT, DST, Walsh-Hadamard and Slant, for palm vein recognition. These transforms were implemented on a database of 576 images containing unmodified images as well as images modified by noise introduction and brightness and contrast changes. The performance evaluation metrics FAR (False Acceptance Rate), FRR (False Rejection Rate) and EER (Equal Error Rate) were obtained, and a comparative analysis of the methods was made on the basis of these metrics, the robustness of the implemented system, the feature vector size and the execution time. The results show that the Walsh-Hadamard transform performs best and can be successfully used for palm vein biometrics.
Lightweight cryptographic algorithms are intended for implementation in resource-constrained devices such as smart cards, wireless sensors and Radio Frequency Identification (RFID) tags, while aiming to provide adequate security. Hummingbird is a recent ultra-lightweight encryption algorithm whose design blends a block cipher and a stream cipher. This paper presents a design-space exploration of the algorithm and its optimisation using different architectural approaches, and provides a comparative analysis of different models of the substitution box, cipher and encryption blocks.
Image noise is unwanted information embedded within the original image; it mainly refers to variations in brightness, illumination, contrast or intensity, and is usually introduced by the circuitry of the scanner or camera sensors used during image acquisition. This paper focuses on replacing noisy pixels in a given window using a sorted, non-iterative median filter, and considers traditional median and mean filters with variable window sizes for comparison. The non-iterative median filter has shown effective results, as it replaces a noisy pixel with the best suitable alternative among the available noise-free pixels. The Image Processing Toolbox was used to design the filters in MATLAB for the experiments. Comparative analysis among the selected filters has shown that the variable-window-size median gives quite effective results compared with the others.
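For reference, the core of any window-based median denoiser looks roughly like the NumPy sketch below: slide a window over the image and replace each pixel with the median of its neighbourhood. This is a plain fixed-window median shown only to illustrate the mechanism the paper refines; the variable window sizes and the selection of noise-free pixels are not reproduced here.

```python
import numpy as np

def median_filter(img, win=3):
    """Replace every pixel with the median of its win x win neighbourhood (reflect-padded)."""
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + win, j:j + win]
            out[i, j] = np.median(window)
    return out

# toy image corrupted with salt-and-pepper noise
img = np.full((32, 32), 128, dtype=np.uint8)
noisy = img.copy()
idx = np.random.rand(*img.shape) < 0.05
noisy[idx] = np.random.choice([0, 255], idx.sum())
print("mean abs error after filtering:",
      np.abs(median_filter(noisy).astype(int) - img).mean())
```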
Currently, the menus in a web browser are fixed and cannot be changed. It would be helpful if the browser menu incorporated relevant information from the website currently being browsed, so that such information became easily accessible to the end user. In this paper we propose a method to automatically extract and display a website-specific menu as part of the browser menu for any website. Selecting a website-specific menu option launches the appropriate application for that type of option; for example, upon selecting the Contact Number menu option, the device's default calling application is launched with the website's pre-registered contact number. This information can also be presented as clickable icons on the URL bar or status bar of the browser, or via a voice interface. The system can be implemented by having specific meta tags, HTML tags or manifests for each of the extracted options, with the web developer specifying the entries by means of these tags. We present the implementation details of the system on a mobile device and describe various user interfaces.
Technology changes continually, and with these changes developments in embedded systems have grown rapidly. Over the past several years, the NHTSA (National Highway Traffic Safety Administration) has been actively involved in using Event Data Recorders (EDR) in vehicles ranging from aircraft and cars to some two-wheelers such as Kawasaki's Ninja. EDRs collect crash information, which assists real-world data collection and helps in understanding specific aspects of a crash. India ranks first in road accidents: every 3.7 minutes a road mishap claims a life. What are the root causes of so many accidents? Where are the details stored after an accident? Was there any fault with the motorcycle? What about false insurance claims? The proposed work addresses these questions and aims to collect information that aids investigation of the causes of accidents and helps improve motorcycle standards. Information from this device can be used to determine the condition of the motorcycle before the accident. An embedded system mounted on the two-wheeler records events such as brake, gear, speed, stand and congestion. The results of the analysis show that such recorders can report real-world crash data and can therefore be a powerful tool, providing useful information to crash reconstruction experts.
Search engine advertising is now a prominent component of the Web. Choosing an appropriate, relevant ad for a particular query, and positioning it well, critically affects the probability of the ad being noticed and clicked, and strategically impacts the revenue the search engine generates from that ad. Needless to say, showing the user an ad that is relevant to his or her need greatly improves user satisfaction. For all these reasons, it is of utmost importance to correctly estimate the click-through rate (CTR) of ads in a system. For frequently appearing ads, CTR is empirically measurable, but for new ads other means have to be devised. In this paper we propose a model to predict the CTR of advertisements using logistic regression as the framework for representing dependencies among variables. Logistic regression is a probabilistic statistical classification model that predicts a binary response based on one or more predictor variables; advertisements with the highest predicted probability of being clicked are chosen using this supervised machine learning approach. We tested the logistic regression model on one week of advertisement data of around 25 GB, using position and impressions as predictor variables, and with this model we were able to achieve around 90% accuracy for CTR estimation.
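A minimal version of such a CTR model can be sketched with scikit-learn; the feature set below (ad position and log-scaled impression count) and the synthetic data are illustrative assumptions standing in for the paper's 25 GB advertisement log.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for ad logs: position (1 = top slot) and impression counts.
rng = np.random.default_rng(0)
n = 10_000
position = rng.integers(1, 9, n)
impressions = rng.integers(1, 10_000, n)
p_click = 0.25 / position                       # assume clicks get rarer in lower slots
clicked = (rng.random(n) < p_click).astype(int)

X = np.column_stack([position, np.log1p(impressions)])
X_tr, X_te, y_tr, y_te = train_test_split(X, clicked, test_size=0.2, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
print("predicted CTR for a new ad in slot 1:",
      model.predict_proba([[1, np.log1p(500)]])[0, 1])
```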
Data hiding is the process of concealing information, and various techniques exist for it; data can be hidden in audio, video, images, text and pictures. This is steganography, i.e., embedding data within other data. Images, especially digital images, are the most common cover medium, and many techniques are used for embedding data in them. Some techniques cause distortion to the image when embedding, some can embed only a small amount of data, and some cause distortion during the extraction of data. This paper describes the various methods used for embedding and extracting hidden data.
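The simplest of the embedding techniques alluded to above is least-significant-bit (LSB) substitution; the sketch below (a textbook illustration, not a specific method from the paper) hides a byte string in the lowest bit of a grayscale image's pixels and recovers it, changing each pixel value by at most one.

```python
import numpy as np

def embed_lsb(image, payload: bytes):
    """Hide payload bits in the least significant bit of each pixel (capacity permitting)."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = image.flatten()
    assert bits.size <= flat.size, "payload too large for this cover image"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite only the lowest bit
    return flat.reshape(image.shape)

def extract_lsb(stego, n_bytes):
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
secret = b"hidden message"
stego = embed_lsb(cover.copy(), secret)
print(extract_lsb(stego, len(secret)))                        # b'hidden message'
print("max pixel change:", int(np.abs(stego.astype(int) - cover).max()))  # at most 1
```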
In recent years, online lecture videos have become a significant pedagogical tool for both course instructors and students. Text present in a lecture video is an important modality for retrieving videos, as it is closely related to their content. In this paper, we present a distributed system for counting the occurrences of each textual word in video frames using the Apache Hadoop framework. Since Hadoop is suited to batch processing and the processing of frames is highly concurrent, the batch operation of reading text and counting the occurrences of each word can be implemented using the MapReduce framework. We tested the text recognition and word count algorithms on Hadoop clusters of 1, 5 and 10 nodes and compared the performance of the multi-node clusters with a single-node machine. On a dataset of around 3 GB of lecture video frames, Hadoop with a cluster of 10 nodes executes 5 times faster than a single-node system. Our results demonstrate the advantage of using Hadoop for improving the computational speed of image and video processing applications.
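The word-count half of this pipeline is the canonical MapReduce example; under Hadoop Streaming it can be written as the two small Python scripts sketched below and submitted with the streaming jar's -mapper and -reducer options. The text recognition step that produces the words per frame is assumed to have run already.

```python
# mapper.py -- emits "word\t1" for every recognized word read from stdin.
import sys

for line in sys.stdin:
    for word in line.strip().lower().split():
        print(f"{word}\t1")
```

```python
# reducer.py -- Hadoop sorts mapper output by key, so counts for a word arrive contiguously.
import sys

current, count = None, 0
for line in sys.stdin:
    word, value = line.rstrip("\n").split("\t")
    if word == current:
        count += int(value)
    else:
        if current is not None:
            print(f"{current}\t{count}")
        current, count = word, int(value)
if current is not None:
    print(f"{current}\t{count}")
```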
Due to the open deployment of sensor nodes in hostile environments and the lack of physical shielding, sensor networks are exposed to different types of physical threats, including the clone attack, in which an adversary physically compromises a node, extracts all of its credentials such as keys, identity and stored code, makes hardware replicas with the captured information, and introduces them at chosen positions in the network. Replica detection has become an important and challenging issue in the field of security. This paper surveys the existing schemes for clone attack detection and concludes with a comparison of all the existing techniques in the literature.
The currently used description-based image retrieval is not suitable for effective image search over a large, unstructured repository of images. To overcome this, the concept of Content Based Image Retrieval (CBIR) arose, in which image search is done using features that can be extracted from the image content; color, edge, shape and texture are the most common. Most CBIR systems consist of a few sub-functions: feature extraction, clustering and storage, similarity matching, and display of results. The quality of the final CBIR result heavily depends on the feature extraction module, which is time consuming. The proposed solution was designed for effective CBIR by improving the efficiency of feature extraction. In most CBIR systems, a search image set is first loaded and then a query image is supplied; as output, the user retrieves a group of images similar to the query. When the image set is loaded, the system extracts the features of each image, averages them, clusters them, indexes them and stores them in the appropriate clusters. The features of the image set are extracted by the feature extraction module; before clustering, the image matrices are averaged into a one-dimensional array using a revised averaging algorithm to reduce the complexity of the calculations and improve efficiency. The averaged features are clustered using the K-means algorithm and stored appropriately. When a query image is supplied, its features are extracted, compared with the stored features, and a similarity value is calculated for each image in the nearest cluster. Finally, the result images are displayed in the order in which they match the query image.
Bioinformatics is an emerging research area in which classification of protein sequence datasets is a major challenge. This paper deals with supervised and semi-supervised classification of human protein sequences. Amino acid composition (AAC) is used for feature extraction from the protein sequences, and classification techniques such as Support Vector Machines (SVM), Naive Bayes, K-Nearest Neighbour (KNN), Random Forest and Decision Trees are used to classify the protein sequence dataset. Among these classifiers, SVM reported the best result with the highest accuracy. The limitation of SVM is that it works only with supervised (labeled) datasets; it does not work with unsupervised or semi-supervised datasets (unlabeled data, or a large amount of unlabeled data together with a small amount of labeled data). A novel semi-supervised support vector machine (SSVM) classifier is therefore proposed, which works with a combination of labeled and unlabeled data. The results show that the proposed approach gives higher accuracy on the semi-supervised dataset. Principal component analysis (PCA) is used for feature reduction of the protein sequences, and the proposed SSVM with PCA increases accuracy by about 5 to 10%.
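The amino acid composition (AAC) features used above are simply the relative frequencies of the 20 standard amino acids in a sequence, giving a fixed 20-dimensional vector per protein; a minimal sketch follows (the downstream SSVM itself is not reproduced here, and the toy sequences are illustrative).

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues

def aac_features(sequence: str):
    """Return the 20-dimensional amino acid composition vector of a protein sequence."""
    seq = sequence.upper()
    length = sum(seq.count(a) for a in AMINO_ACIDS)   # ignore ambiguous residues such as X
    return [seq.count(a) / length for a in AMINO_ACIDS]

# toy example: each protein becomes one row of a feature matrix for any classifier
proteins = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
            "MGDVEKGKKIFIMKCSQCHTVEKGGKHKTGPNLH"]
X = [aac_features(p) for p in proteins]
print(len(X[0]), sum(X[0]))            # 20 features per sequence, summing to 1.0
```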
Multichannel blind deconvolution is an ill-posed problem in which regularizers play an important role. Adaptive regularization is used to obtain a better quality restored image; it adds a penalty-weighted term along with the image regularizer. An isotropic TV image regularizer is chosen along with directional priors, which prevent over-smoothing of the edges compared with purely horizontal and vertical priors, thereby improving the Peak Signal-to-Noise Ratio (PSNR) of the reconstructed image. The blind deconvolution problem is solved using an alternating minimization algorithm, which is well suited to two unknown variables. The formulation combines an l1-regularized term with an l2 term for the geometry, resulting in a hard optimization problem. The minimization alternates between two sub-problems, each of which can be solved efficiently using the Augmented Lagrangian method (ALM), where each iteration performs Bregman variable splitting or an iterative update; this converts the unconstrained problem into a constrained one. The method can be applied to medical images such as X-rays, in order to reduce the amount of radiation penetrating the body while still observing more detail.
Automatic and semi-automatic approaches for the classification of web services have garnered much interest due to their positive impact on tasks such as service discovery, matchmaking and composition. Currently, service registries support only human classification, which, owing to keyword-based matching, results in limited recall and low precision in response to queries. The syntactic features of a service, together with certain semantics-based measures used during classification, can yield accurate and meaningful results. We propose an approach to web service classification based on converting services into a class-dependent vector by applying the concept of semantic relatedness, and on generating classes of services ranked by their semantic relatedness to a given query. We used the OWLS-TC service dataset to evaluate our approach, and the experimental results are presented in this work.
In this paper we show by simulation that an automatic system for the detection of cervical cancer, based on the spectrum obtained from a photonic crystal biosensor, is feasible. A two-dimensional photonic crystal biosensor in a static environment, under electromagnetic radiation spanning the UV to IR range, is designed to be highly sensitive to changes in the dielectric constant under the applied electric and magnetic induction for a range of analyte concentrations. Since the refractive indices of normal and cancer-affected tissue can be inferred, the sensor can readily differentiate normal tissue from cancer-affected cervical tissue. The output of the sensor exhibits distinct signatures in the frequency and amplitude spectra even for slight changes in the refractive index of the cervical cell. A machine learning technique, the Naive Bayes classifier, is used to distinguish the normal and cancerous tissue spectra based on automatically extracted parameters of the sensor output. The combination of the sensor data with the automated classification built on the photonic crystal biosensor achieves better performance in detecting cervical cancer.
A social network is a set of relationships and interactions among social entities such as individuals, organizations and groups, and social network analysis is a major topic of ongoing research. A central problem in social networks is finding the most influential entities or persons. Identifying the most influential nodes in a social network is a tedious task, as large numbers of new users join the network every day. The most common approach is to model the social network as a graph and find the most influential nodes by analyzing it. The degree centrality method is node-based and has the advantage of easily identifying influential nodes. In this paper a method called the Enhanced Degree Centrality Measure is proposed, which integrates the clustering coefficient value with node-based degree centrality. The enhanced degree centrality measure is applied to three different datasets obtained from Facebook to analyze its performance, and the response is compared with existing methods such as the degree centrality method and the SPIN algorithm. The comparison shows that the proposed method identifies 64 of the most active nodes, whereas the SPIN algorithm identifies only 55.
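One plausible reading of the proposed measure, degree centrality weighted by a node's clustering coefficient, can be sketched with NetworkX as below; the exact way the two quantities are combined is an assumption for illustration, since the abstract does not give the formula.

```python
import networkx as nx

def enhanced_degree_centrality(G, alpha=1.0):
    """Illustrative combination: degree centrality scaled by (1 + alpha * clustering coefficient)."""
    dc = nx.degree_centrality(G)
    cc = nx.clustering(G)
    return {v: dc[v] * (1.0 + alpha * cc[v]) for v in G}

# toy social graph
G = nx.karate_club_graph()
scores = enhanced_degree_centrality(G)
top = sorted(scores, key=scores.get, reverse=True)[:5]
print("top influential nodes:", top)
```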
This paper presents a novel design of Very High Frequency RF Amplifier using 180nm CMOS technology. Role of silicon in the semiconductor industry and the necessity of low power RF circuits are discussed. The function of RF amplifier along with its applications and classes are explained. Existing designs of RF amplifiers are analyzed, implemented and simulated. A new architecture with better performance, which can operate at a Very High Frequency of 15.849GHz with meager power dissipation of 0.20mW and with considerable bandwidth of 1.8068GHz, is proposed, analyzed and simulated in Synopsys HSPICE to verify the architecture. Comparison between the existing designs of RF amplifiers and the proposed RF Amplifier is carried out with respect to operating frequency, power dissipation, Bandwidth, supply voltage and Gain which are the critical parameters of RF design.
This paper presents a new automated method for sorting and grading mangoes based on computer vision algorithms. The system is intended to replace the existing manual technique of sorting and grading used in India, and was developed for Alphonso mangoes, the premium variety exported from India. The developed system was able to sort Alphonso mangoes with an accuracy of 83.3% and can identify a skin defect with a minimum area of 6.093845×10^-4 sq. cm.
Cloud computing is revolutionizing how information technology is used by organizations and individuals. It provides dynamic services with virtualized resources over the Internet and offers facilities to develop, deploy and manage applications 'on the cloud' for end users, entailing resource virtualization that maintains and manages itself. Scheduling is performed to maximize profit and increase cloud computing workload efficiency; its objective is to use resources properly and to balance load between resources with minimum execution time. The high communication cost incurred in clouds prevents task schedulers from being applied in a large-scale distributed environment. This study proposes a hybrid Particle Swarm Optimization (PSO) scheduler, which performs better in terms of execution ratio and average schedule length.
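The particle-update step at the heart of any PSO-based scheduler follows the standard velocity and position equations; the NumPy sketch below shows them on a toy continuous objective. The paper's hybridization and its mapping from particles to task-to-resource assignments are not reproduced, and the coefficient values are typical defaults rather than the paper's settings.

```python
import numpy as np

def objective(x):
    return np.sum(x ** 2, axis=1)           # stand-in cost; a scheduler would use makespan/cost

n, dim, iters = 20, 5, 200
w, c1, c2 = 0.7, 1.5, 1.5                    # inertia and acceleration coefficients
rng = np.random.default_rng(1)

x = rng.uniform(-10, 10, (n, dim))           # particle positions
v = np.zeros_like(x)                         # particle velocities
pbest, pbest_f = x.copy(), objective(x)      # personal bests
gbest = pbest[np.argmin(pbest_f)]            # global best

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
    x = x + v                                                    # position update
    f = objective(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]

print("best cost found:", pbest_f.min())
```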
Computer viruses are a rapidly evolving threat to the computing community, and they fall into different categories. It is generally believed that metamorphic viruses are extremely difficult to detect, and metamorphic virus generating kits are readily available with which potentially dangerous viruses can be created with very little knowledge or skill. Classification of computer viruses is very important for effective detection of malware using anti-virus software, and it is also necessary for building and applying the right software patch to close the security vulnerability. Recent research on Hidden Markov Model (HMM) analysis has shown it to be more effective than other machine-learning techniques for detecting computer viruses and classifying them. In this paper, we present a classification technique based on Hidden Markov Models for computer virus classification. We trained multiple HMMs with 500 malware files belonging to different virus families as well as compilers; once trained, the models were used to classify new malware of the same kinds efficiently.
Vehicular Ad hoc Networks (VANETs) provide the ability for vehicles to communicate wirelessly. Network fragmentation, frequent topology changes due to node mobility, and the limited coverage of Wi-Fi are issues in VANETs that arise from the absence of a central management entity. For these reasons, routing packets within the network is a difficult task, and providing an adept routing strategy is vital for the deployment of VANETs. Optimized Link State Routing (OLSR) is a well-known mobile ad hoc network routing protocol. In this paper, we propose an optimization strategy to fine-tune a few parameters of the OLSR protocol using metaheuristic methods, considering quality parameters such as packet delivery ratio, latency, throughput and fitness value. We then compare a genetic algorithm and particle swarm optimization using these QoS parameters. We implemented our work on the Red Hat Enterprise Linux 6 platform, and the results are shown through simulations using VanetMobiSim and NS2; the fine-tuned OLSR protocol behaves better than the original routing protocol under the intelligent, optimized configuration.
Face detection from the skull is regarded as one of the most complex and challenging tasks in computer vision, owing to the large intra-class variations caused by changes in facial perspective, expression and illumination. Several approaches have addressed this problem, yet mostly open-source implementations have been widely exercised by researchers; the best example is the Viola-Jones object detection framework, which has been used repeatedly, especially for facial processing, and provides competitive real-time object detection rates. The important stages in this work are to first extract the detected feature parts from the face, which are then correlated, through Canonical Correlation Analysis (CCA), against the features extracted from the skull. Canonical correlation analysis is largely concerned with estimating a linear combination of each of two sets of variables such that the correlation between the two combinations is maximized. Hence the combination of Viola-Jones with CCA should boost the matching accuracy as well as ease the tasks of extraction and correlation.
Light field photography, a computational photography technique, offers exciting features such as refocusing and novel view synthesis. Previous work in light field photography has focused on creating light field images with a limited field of view, which motivates research into an acquisition system that can capture light field images with a larger field of view. Recent work has acquired light fields using a cylindrical arrangement of cameras, which resolves the usual complexity of acquiring light fields and then merging them by combining the capture and merging processes into a single exercise. We use this cylindrical system to generate a novel true-zoomed light field panorama from the captured light field, which gives a wide 360° field of view, similar to traditional panoramas, with the zoom applied to the whole panorama. The captured panorama can also be manipulated in the various ways possible with other light field recordings. The zooming options commonly found in traditional cameras tend either to change the relative shape of the objects in the image or to introduce pixelation; we use true zoom to improve the zooming effect. We generate a novel "True Zoomed Light Field Panorama" using our system, which gives the user a feeling of being closer to or farther from the whole environment at the same time.
Cloud computing is a fast-growing technology offering a wide range of software and infrastructure services on a pay-per-use basis. Many small and medium businesses (SMBs) have adopted this utility-based computing model as it reduces operational and capital expenditure. Though the resource sharing adopted by cloud service providers (CSPs) enables organizations to invest less in infrastructure, it also raises concerns about the security of data stored on the CSP's premises: the data is prone to being accessed by insiders or by other customers sharing the storage space. Regulating access to protected resources requires a reliable and secure authentication mechanism, which assures that only authorized users are given access to the services and resources offered by the CSP. This paper proposes a strong two-factor authentication mechanism using a password and a mobile token. The proposed model provides Single Sign-On (SSO) functionality and does not require a password table. Besides introducing the authentication scheme, a proof-of-concept implementation is also provided.
In this paper we develop two new algorithms viz., CCP (Cluster-based Collection Point) and CRP (Cluster-based Rendezvous collection Points) that focus on i) reducing the number of data collection points to be visited by the Mobile Element (ME), and ii) determining an optimal path for ME. The algorithms follow a clustering approach, where a Cluster Head (CH) aggregates data from its cluster nodes and keeps this information ready for onward transmission to the ME. Due to this approach, the ME need only visit the CH instead of visiting each cluster node individually. The CCP algorithm determines an optimal path for ME by connecting all CH/Collection Points (CP). Taking advantage of the transmission range of CH/CPs, the CRP algorithm determines optimum number of Rendezvous Points that cover all CPs, which further reduces the tour length. Both algorithms were subjected to extensive simulations to study their efficacy. The algorithms have outperformed the best known algorithms in terms of tour length and latency.
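The tour-planning part of schemes like CCP and CRP reduces, at its simplest, to ordering the collection points for the Mobile Element; the sketch below uses a plain nearest-neighbour heuristic over cluster-head coordinates as an illustrative stand-in, since the abstract does not detail the actual path-construction steps of the two algorithms.

```python
import numpy as np

def nearest_neighbour_tour(points, start=0):
    """Greedy tour over collection points: always visit the closest unvisited point next."""
    points = np.asarray(points, dtype=float)
    unvisited = set(range(len(points)))
    tour = [start]
    unvisited.remove(start)
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(points[i] - last))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(points, tour):
    points = np.asarray(points, dtype=float)
    legs = [np.linalg.norm(points[tour[i + 1]] - points[tour[i]]) for i in range(len(tour) - 1)]
    return sum(legs) + np.linalg.norm(points[tour[0]] - points[tour[-1]])   # return to start

cluster_heads = np.random.rand(8, 2) * 100      # toy CH positions in a 100x100 field
t = nearest_neighbour_tour(cluster_heads)
print("visit order:", t, "length:", round(tour_length(cluster_heads, t), 1))
```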
This paper explores the detection of underwater acoustic signals with a MOEMS structure. MOEMS (micro-opto-electro-mechanical systems) technology is based on the movement of micro-optical elements that manipulate the light passing through them, and such devices are used to detect stress, strain and other mechanical parameters from displacement. The mechanical design of the sensing layer, the modeled device, the materials used and the properties of acoustic waves streaming across buoyant surfaces provide information about the hydrodynamics of sharp structures. The mechanical properties of the rods transform under applied energy, and these changes are converted into corresponding changes in the electrical and optical properties of the device within the closed system of observation. Using these coupled optical properties, the MOEMS-based mechanical structure can be used for flow metering, leakage detection, blood pressure monitoring and structural health monitoring, among other applications. In this paper, we investigate the design of a photonic crystal micro-displacement sensor. A theoretical model is constructed to approximate the change in refractive index induced by the application of pressure over a sensing surface. By actuating the sensing layer, a linear calibration curve is obtained by relating the location of the resonant dip to the pressure applied at a point on the surface. Combining MEMS actuation with photonic crystal sensing improves the sensitivity and stability of the sensor, which is promising for damage detection in civil and military structures under water. The simulation tools used in this paper are MEEP and MATLAB.
Digital images have become a major focus of research. Digital image forensics (DIF) is at the forefront of security techniques, aiming to restore trust in digital imagery by uncovering digital counterfeiting. Source camera identification provides several ways to identify the characteristics of the digital device used. A literature survey of these techniques was carried out, and the sensor-imperfection-based technique was chosen. Sensor pattern noise (SPN) carries abundant information over a wide frequency range and therefore allows reliable identification even in the presence of many imaging sensors. The proposed system consists of a novel technique for extracting sensor noise from the database images, followed by a feature extraction stage. The sensor-noise extraction model applies gradient-based and Laplacian operators; a hybrid system combines the best responses of the two operators into a third image containing the edges and the noise present in it. The edges are removed by thresholding, leaving the noise component of the image. This noise image is then passed to a feature extraction module based on the Gray Level Co-occurrence Matrix (GLCM), which extracts features such as Homogeneity, Contrast, Correlation and Entropy. The extracted features are used for performance evaluation on various parameters; the accuracy parameter gives the matching rate over the entire dataset. The SPN captured in the GLCM features is matched against the test set to find the exact match. The hybrid system for SPN extraction combined with GLCM feature extraction yields better results.
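A hedged sketch of the described pipeline follows: gradient and Laplacian responses are combined into a hybrid map, strong edges are suppressed by a threshold, and GLCM statistics are computed on the residual noise. The operator choices, the percentile threshold and the GLCM settings are assumptions, not the paper's exact parameters.

    # Illustrative hybrid SPN-style extraction followed by GLCM features.
    import cv2
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(gray_image):
        img = cv2.GaussianBlur(gray_image, (3, 3), 0)
        grad = cv2.magnitude(cv2.Sobel(img, cv2.CV_64F, 1, 0),
                             cv2.Sobel(img, cv2.CV_64F, 0, 1))
        lap = np.abs(cv2.Laplacian(img, cv2.CV_64F))
        hybrid = np.maximum(grad, lap)                       # keep the stronger response
        noise = np.where(hybrid < np.percentile(hybrid, 90), hybrid, 0)  # drop strong edges
        noise_u8 = cv2.normalize(noise, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        glcm = graycomatrix(noise_u8, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        feats = {p: float(graycoprops(glcm, p)[0, 0])
                 for p in ("homogeneity", "contrast", "correlation")}
        p = glcm[:, :, 0, 0]
        feats["entropy"] = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))
        return feats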
Sparse representation is one of the powerful emerging statistical techniques for modeling images. It approximates an image as a combination of codewords from an overcomplete dictionary. Recent years have seen tremendous growth in the field of sparse representation. Here a method is proposed for image restoration by iterative deblurring, based on the sparse representation of a uniformly blurred image. The key idea behind this methodology is the sparseness of natural images in some domain. The quality of the recovered image depends largely on the domain, or the dictionaries, used to represent it. In this paper, the K-SVD algorithm is used to train groups of codewords from a set of high-quality natural image patches. For each local patch of the blurred image, the best-suited sub-dictionary is selected from the trained dictionary database. In addition, a smoothness regularization constraint is added that prevents re-blurring of the image edges. For numerical stability, the sparsity weight is computed adaptively, which also improves the reconstructed image quality. A comparative study against several existing restoration algorithms shows that the proposed method outperforms them.
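To illustrate the underlying sparse-representation step, the sketch below learns an overcomplete patch dictionary and approximates each patch as a sparse combination of its codewords. It uses scikit-learn's mini-batch dictionary learning in place of the paper's K-SVD training, omits the blur operator and the smoothness regularisation, and uses a random placeholder image.

    # Minimal sparse-coding sketch: learn a patch dictionary, then reconstruct
    # each patch from a handful of codewords via OMP.
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import extract_patches_2d

    rng = np.random.default_rng(0)
    image = rng.random((64, 64))                       # placeholder "natural" image
    patches = extract_patches_2d(image, (8, 8), max_patches=500, random_state=0)
    X = patches.reshape(len(patches), -1)
    X -= X.mean(axis=1, keepdims=True)                 # remove the DC component per patch

    dico = MiniBatchDictionaryLearning(n_components=128, transform_algorithm="omp",
                                       transform_n_nonzero_coefs=5, random_state=0)
    codes = dico.fit(X).transform(X)                   # sparse coefficients per patch
    recon = codes @ dico.components_                   # sparse approximation of the patches
    print("mean nonzeros per patch:", (codes != 0).sum(axis=1).mean())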
The study of natural hazards such as earthquakes, avalanches, tsunamis and volcanic eruptions is a challenging domain in geo-spatial research. Seismology records seismograms at the earth's surface through sensors called seismometers and plays a vital role in handling disaster events. A seismogram conveys the impact of a natural disaster by capturing the bandwidth of the seismic waves through seismographs. Seismograms are handled by the Global Seismographic Network (GSN) and represent the spectral characteristics of disaster signals, or elastic waves. Seismograms are acquired through various models that eliminate unwanted frequency-domain coefficients; without auto-regressive models, the acquisition of seismograms will be wide of the mark. Characteristic functions can be used to reduce noise outside the frequency range of interest in a seismogram: a characteristic function processes the signal spectrum and removes the noise through the regressive models. This paper surveys regressive models for processing seismograms and the GSN for acquiring them, with the aim of handling natural hazards using seismic imagery. The objective of this article is to provide an overview of auto-regressive models for handling seismograms.
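As a hedged illustration of an auto-regressive characteristic function, the sketch below fits an AR model to a noise-only window of a synthetic trace by least squares and flags samples the model cannot predict; the trace, AR order and threshold are invented for illustration and do not reproduce the acquisition models surveyed in the paper.

    # Illustrative AR-based characteristic function for onset detection.
    import numpy as np

    rng = np.random.default_rng(1)
    trace = rng.normal(0.0, 1.0, 2000)
    trace[1000:] += 5.0 * np.sin(np.arange(1000) / 5.0)          # synthetic "event"

    p = 8                                                        # AR order
    train = trace[:1000]                                         # assumed noise-only window
    X = np.column_stack([train[p - i - 1:len(train) - i - 1] for i in range(p)])
    y = train[p:]
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)                # AR coefficients a_1..a_p

    pred = np.array([coefs @ trace[t - p:t][::-1] for t in range(p, len(trace))])
    cf = np.abs(trace[p:] - pred)                                # characteristic function
    onset = int(np.argmax(cf > 5 * cf[:1000 - p].mean())) + p
    print("first sample exceeding threshold:", onset)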
The Hadoop Distributed File System (HDFS) is a representative cloud storage platform with scalable, reliable and low-cost storage capability. It is designed to handle large files and hence suffers a performance penalty when handling a huge number of small files. Further, it does not exploit the correlation between files to provide a prefetching mechanism, which would improve access efficiency. In this paper, we propose a novel approach to handling small files in HDFS. The proposed approach combines correlated files into a single file to reduce the metadata stored on the Namenode, and integrates prefetching and caching mechanisms to improve the access efficiency of small files. Moreover, we analyze the performance of the proposed approach for file sizes in the range 32 KB to 4096 KB. The results show that the proposed approach reduces metadata storage compared to HDFS.
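A local, hedged illustration of the merging idea follows: correlated small files are appended into one container file and a lightweight index (name to offset and length) replaces per-file metadata, while a read of one file prefetches its correlated group into a cache. This operates on local files for clarity and does not use the HDFS API; the grouping rule is an assumption.

    # Illustrative small-file merging with an index map and group prefetching.
    import os

    def merge_small_files(paths, container_path):
        index = {}
        with open(container_path, "wb") as out:
            for path in paths:
                with open(path, "rb") as src:
                    data = src.read()
                index[os.path.basename(path)] = (out.tell(), len(data))
                out.write(data)
        return index                                   # one index entry per merged file

    def read_with_prefetch(container_path, index, name, cache):
        if name not in cache:                          # prefetch the whole correlated group
            with open(container_path, "rb") as f:
                for fname, (offset, length) in index.items():
                    f.seek(offset)
                    cache[fname] = f.read(length)
        return cache[name]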
This paper deals with the implementation of an HMM-based video classification algorithm using color feature vectors on the open-source BeagleBoard mobile platform. To simplify the development of video I/O interfaces to the processor running the algorithm, we choose the BeagleBoard-xM, a low-cost, low-power, portable computer with a 1 GHz Cortex-A8 processor. The algorithm uses a color feature vector with an HMM as the classifier to assign videos to different genres. Video classification is often a primary step for many other applications, including data organization and maintenance, search, and retrieval. Most existing work reports implementations only on general-purpose processors, which are inadequate for the performance requirements of machine-vision applications; for mobile platforms, the algorithms need to be implemented on embedded hardware to meet constraints such as size, power and cost. Optimization techniques such as key-frame extraction and feature extraction that make the algorithm executable on this hardware are discussed, leading to efficient video browsing and retrieval strategies on mobile platforms. Experimental results from implementing the video classification task on the ARM-based BeagleBoard-xM show a classification efficiency of 89.33%.
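A hedged sketch of this classification scheme is given below: a per-key-frame color histogram serves as the feature vector, one Gaussian HMM is trained per genre, and a clip is assigned to the genre whose model gives the highest log-likelihood. The bin count, number of states and data layout are assumptions, not the paper's settings.

    # Illustrative color-histogram + HMM genre classification.
    import numpy as np
    import cv2
    from hmmlearn.hmm import GaussianHMM

    def color_features(frames, bins=8):
        # One normalised RGB histogram (bins per channel, concatenated) per key frame.
        feats = []
        for frame in frames:
            hist = [cv2.calcHist([frame], [c], None, [bins], [0, 256]).ravel()
                    for c in range(3)]
            h = np.concatenate(hist)
            feats.append(h / (h.sum() + 1e-9))
        return np.array(feats)

    def train_genre_models(training_clips, n_states=4):
        # training_clips: {genre: list of key-frame lists}
        models = {}
        for genre, clips in training_clips.items():
            seqs = [color_features(c) for c in clips]
            X = np.vstack(seqs)
            models[genre] = GaussianHMM(n_components=n_states, covariance_type="diag",
                                        n_iter=20).fit(X, lengths=[len(s) for s in seqs])
        return models

    def classify(models, frames):
        feats = color_features(frames)
        return max(models, key=lambda g: models[g].score(feats))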
The Artificial Bee Colony (ABC) algorithm is a nature-inspired evolutionary algorithm that imitates the intelligent foraging behavior of a honey-bee swarm. This paper presents a bee swarm optimization for solving various numerical problems and describes how different mathematical problems, such as single integration and eigenvalue/eigenvector computation, can be solved using the ABC algorithm. Results with different numbers of bees are reported, and the range of bee counts that gives the best solution across all problems is identified. The results show that the ABC algorithm achieves a near-optimal solution to the defined problems and thus solves the mathematical problems with good accuracy.
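A minimal sketch of the ABC loop (employed, onlooker and scout phases) is shown below, minimising a simple test function; the colony size, trial limit and objective are illustrative assumptions and do not reproduce the paper's integration or eigenvalue set-ups.

    # Illustrative Artificial Bee Colony optimiser on a test function.
    import numpy as np

    def abc_minimise(f, dim=2, bounds=(-5.0, 5.0), n_bees=20, limit=30, iters=200, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        foods = rng.uniform(lo, hi, (n_bees, dim))       # one food source per employed bee
        fitness = np.array([f(x) for x in foods])
        trials = np.zeros(n_bees, dtype=int)

        def try_neighbour(i):
            k, j = rng.integers(n_bees), rng.integers(dim)
            candidate = foods[i].copy()
            candidate[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
            candidate = np.clip(candidate, lo, hi)
            if f(candidate) < fitness[i]:
                foods[i], fitness[i], trials[i] = candidate, f(candidate), 0
            else:
                trials[i] += 1

        for _ in range(iters):
            for i in range(n_bees):                      # employed bee phase
                try_neighbour(i)
            probs = np.exp(-fitness); probs /= probs.sum()
            for i in rng.choice(n_bees, size=n_bees, p=probs):   # onlooker bee phase
                try_neighbour(i)
            for i in np.where(trials > limit)[0]:        # scout bee phase
                foods[i] = rng.uniform(lo, hi, dim)
                fitness[i], trials[i] = f(foods[i]), 0

        best = int(np.argmin(fitness))
        return foods[best], fitness[best]

    sphere = lambda x: float(np.sum(x ** 2))
    print(abc_minimise(sphere))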