Technical Papers

In this paper, we discuss the problem of reoptimization of Steiner trees. We are given a graph instance together with an optimal Steiner tree for it. If the graph later changes, a new optimal Steiner tree has to be determined; this process is known as reoptimization. We consider two kinds of change: the addition of a new edge, and the deletion of an existing edge from the given graph. For both cases, we provide approximation algorithms with approximation ratio (1 + δ), where 0 < δ < 1.
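
As context, the sketch below (Python, assuming the networkx library is available) shows a naive baseline that simply recomputes a 2-approximate Steiner tree when an edge is added and keeps the lighter of the old and new trees; the paper's (1 + δ)-approximation algorithms refine this idea rather than recomputing from scratch:

    import networkx as nx
    from networkx.algorithms.approximation import steiner_tree

    def reoptimize_after_edge_add(G, T_old, terminals, u, v, w):
        # Naive baseline, not the paper's algorithm: recompute an approximate
        # Steiner tree on the updated graph and keep whichever tree is lighter.
        G.add_edge(u, v, weight=w)
        T_new = steiner_tree(G, terminals, weight='weight')
        if T_new.size(weight='weight') < T_old.size(weight='weight'):
            return T_new
        return T_old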

Computer Science and Engineering Department, Motilal Nehru National Institute of Technology, Allahabad, India.
Subhash Panwar (panwar.subhash@gmail.com)
Suneeta Agarwal (suneeta@mnnit.ac.in)

With the emergence of Grid technologies, the problem of scheduling tasks in heterogeneous systems has been attracting attention. Task scheduling is an NP-complete problem [5], and it becomes more complicated in the Grid environment. To make better use of the tremendous capabilities of a Grid system, effective and efficient scheduling algorithms are needed. In this paper, we present a new heuristic scheduling strategy for independent tasks. The strategy is based on two traditional scheduling heuristics, Min-Min and Max-Min, and also considers the overall performance of machines when deciding the scheduling sequence of tasks. We have evaluated our scheduling strategy within the GridSim grid simulator and compared the results with the existing Min-Min and Max-Min heuristics; the results show that our strategy outperforms the existing ones in many cases.
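
For reference, a minimal Python sketch of the Min-Min baseline that the strategy builds on; the expected-time-to-compute matrix etc and the machine bookkeeping are illustrative assumptions:

    def min_min(etc):
        # etc[t][m] = expected time to compute task t on machine m
        n_tasks, n_machines = len(etc), len(etc[0])
        ready = [0.0] * n_machines            # machine ready times
        unscheduled = set(range(n_tasks))
        schedule = {}
        while unscheduled:
            # For every unscheduled task find its earliest completion time,
            # then commit the task with the smallest such time (Min-Min).
            t, m, ct = min(((t, m, ready[m] + etc[t][m])
                            for t in unscheduled for m in range(n_machines)),
                           key=lambda x: x[2])
            schedule[t] = m
            ready[m] = ct
            unscheduled.remove(t)
        return schedule, max(ready)           # assignment and makespan

Max-Min differs only in committing, at each step, the task whose minimum completion time is largest.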

Electronics & Computer Engineering Department, Indian Institute of Technology Roorkee, Roorkee - 247667, India.
Sameer Singh Chauhan (chauhan.sam@gmail.com)
R. C. Joshi (rcjosfec@iitr.ernet.in)

The management of resources and the scheduling of computations is a challenging problem in a grid. Load balancing is essential for efficient utilization of resources and for enhancing the performance of a computational grid. In this paper, we propose a decentralized grid model structured as a collection of clusters. We then introduce a Dynamic Load Balancing Algorithm (DLBA) which performs intra-cluster and inter-cluster (grid) load balancing. DLBA considers a load index as well as other conventional influential parameters at each node when scheduling tasks. Simulation results show that the proposed algorithm is feasible and improves system performance considerably.

Department of Computer Science & Applications, Kurukshetra University, Kurukshetra, India.
P. K. Suri (pksuritf25@yahoo.com)
Department of Computer Engineering, M. M. Engineering College, M. M. University, Mullana, Ambala, Haryana, India.
Manpreet Singh (manpreet_nishu@yahoo.co.in)

Branch prediction is crucial to maintaining high performance in modern superscalar processors. Today's superscalar processors achieve high performance by executing multiple independent instructions in parallel. One of the major impediments to the performance of a wide-issue superscalar processor is the presence of conditional branches, which can occur as frequently as one in every 5 or 6 instructions, leading to heavy misprediction penalties in superscalar architectures. The ideal speed-up of a superscalar processor is seldom achieved due to stalls and breaks in the execution stream. These interruptions are caused by data and control hazards, which deteriorate superscalar processor performance. A branch target buffer (BTB) can reduce the performance penalty of branches by predicting the path of the branch and caching information used by the branch. No stalls are encountered if the branch entry is found in the BTB and the prediction is correct; otherwise, the penalty is at least two cycles. This paper proposes an algorithm for superscalar processors based on changing the BTB structure to eliminate the misprediction penalty. It also highlights a problem in the previous BTB algorithm (the nested branches problem) and proposes a solution to it.
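
To make the mechanism concrete, here is a toy Python model of a conventional direct-mapped BTB with 2-bit saturating counters; the sizes, indexing and prediction rule are illustrative, not the modified structure proposed in the paper:

    class BTB:
        # Toy direct-mapped branch target buffer with 2-bit saturating counters.
        def __init__(self, entries=1024):
            self.entries = entries
            self.table = {}                          # index -> (tag, target, counter)

        def predict(self, pc):
            idx, tag = pc % self.entries, pc // self.entries
            entry = self.table.get(idx)
            if entry and entry[0] == tag:
                _, target, ctr = entry
                return target if ctr >= 2 else pc + 4    # taken if counter in {2, 3}
            return pc + 4                            # miss: predict fall-through

        def update(self, pc, taken, target):
            idx, tag = pc % self.entries, pc // self.entries
            _, _, ctr = self.table.get(idx, (tag, target, 1))
            ctr = min(3, ctr + 1) if taken else max(0, ctr - 1)
            self.table[idx] = (tag, target, ctr)     # counter reused on replacement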

Research Scholar, DAVIET, Jalandhar.
Rubina Khanna
Associate Professor (IT), GCET, Greater Noida.
Sweta Verma
Professor (CSE/IT), ITM Gurgaon
Ranjit Biswas
Professor (CSE/IT), Shobhit University, Meerut
J. B. Singh

Computer vision involves image edge detection, which is crucial in outline capturing systems for decomposing and describing an object. This paper presents a scalable parallel algorithm skeleton for outline capturing and object recognition based on first-order difference chain encoding. A UNIX-based Intel Xeon two-quad-core system is used for the implementation of the parallel algorithm. The complexity of the averaging process is independent of the size of the image, and the speedup of the proposed parallel algorithm is observed to be near linear. The parallel processing approach presented here can be extended to solve similar problems such as image representation, restoration, compression and matching.
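
The sequential kernel being parallelized is small; a Python sketch of the first-order difference of an 8-directional Freeman chain code, with each parallel worker handling a slice of the chain:

    def difference_chain(chain):
        # First-order difference of an 8-connected Freeman chain code. Each
        # element is the change of direction modulo 8, which makes the code
        # invariant to the starting orientation of the contour.
        return [(chain[i] - chain[i - 1]) % 8 for i in range(1, len(chain))]

    print(difference_chain([0, 0, 1, 2, 4, 5, 7]))   # -> [0, 1, 1, 2, 1, 2]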

Sinhgad Institute of Business Administration and Research, Kondhwa (Bk.), Pune-411048, Affiliated to University of Pune, India.
Arpita Gopal (arpita.gopal@gmail.com)
Sonali Patil (sonalimpatil@gmail.com)
Amresh Nikam (amresh_n2000@gmail.com)

An efficient technique for multiplying two binary numbers using limited power and time is presented in this paper. The work mainly focuses on the speed of the multiplication operation, achieved by reducing the number of bits to be multiplied. The framework of the proposed algorithm is taken from the mathematical algorithms given in the Vedas and is further optimized by the use of some general arithmetic operations such as expansion and bit-shifting. The proposed algorithm was modeled using Verilog, a hardware description language. It was found that under a 3.3 V supply voltage, the designed 4-bit multiplier dissipates a power of 47.35 mW. The propagation time of the proposed architecture was found to be 6.63 ns.
Keywords: Multipliers, Vedic Mathematics, Bit Reduction, Binary Multiplication.
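
The paper's bit-reduction step is specific to its design, but the underlying vertically-and-crosswise (Urdhva Tiryagbhyam) column structure that such hardware parallelizes can be sketched in software as follows (Python, LSB-first bit lists; purely illustrative):

    def urdhva_multiply(a_bits, b_bits):
        # Vertically-and-crosswise binary multiplication: column `col` sums all
        # partial products a[j] * b[col - j]; the carry ripples to column col+1,
        # mirroring the parallel partial-product columns of the hardware design.
        n = len(a_bits) + len(b_bits)
        result, carry = [], 0
        for col in range(n - 1):
            s = carry + sum(a_bits[j] * b_bits[col - j]
                            for j in range(len(a_bits)) if 0 <= col - j < len(b_bits))
            result.append(s & 1)
            carry = s >> 1
        result.append(carry & 1)                 # final carry is the top product bit
        return result                            # LSB-first product bits

    a, b = [1, 1, 0, 1], [1, 0, 1, 1]            # 11 and 13, LSB first
    bits = urdhva_multiply(a, b)
    print(sum(bit << i for i, bit in enumerate(bits)))   # 143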

Lecturer - ECE & Research Member - SONA SIPRO, Advanced Research Centre, Sona College of Technology, Salem, Tamil Nadu, India.
M. E. Paramasivam (sivam@sonatech.ac.in)
Assistant Professor - ECE & Centre Head - SONA SIPRO, Advanced Research Centre, Sona College of Technology, Salem, Tamil Nadu, India.
R. S. Sabeenian (sabeenian@sonatech.ac.in)

Automatically determining the number of clusters present in a data set is a very important problem. Conventional clustering techniques assume a certain number of clusters and then try to find the cluster structure associated with that number. For very large and complex data sets it is not easy to guess this number. There exist validity-based clustering techniques, which evaluate a cluster validity measure of a clustering result while varying the number of clusters; after doing this over a broad range of possible numbers of clusters, the number for which the validity measure is optimum is selected. This method is, however, awkward and may not always be applicable to very large data sets. Recently an interesting visual technique for determining clustering tendency, abbreviated as VAT, has been developed. The original VAT and its different versions are found to determine the number of clusters very satisfactorily, before actually applying any clustering algorithm. In this paper, we propose an out-of-core VAT algorithm (o-VAT) for very large data sets.
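
For orientation, a Python/NumPy sketch of the in-core VAT reordering that o-VAT adapts to disk-resident data; D is a pairwise dissimilarity matrix:

    import numpy as np

    def vat_order(D):
        # VAT (Bezdek & Hathaway): returns a permutation such that the image of
        # D[perm][:, perm] shows dark diagonal blocks, one per apparent cluster.
        n = D.shape[0]
        i = np.unravel_index(np.argmax(D), D.shape)[0]   # endpoint of the largest dissimilarity
        perm, rest = [i], set(range(n)) - {i}
        while rest:
            # Prim-like step: the next object is the one closest to the selected set
            j = min(rest, key=lambda r: min(D[r, p] for p in perm))
            perm.append(j)
            rest.remove(j)
        return perm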

Kalyani Government Engineering College, Kalyani, West Bengal, India.
Malay K. Pakhira (malay_pakhira@yahoo.com)

Digital representation of terrain surfaces is an important research area, and a number of techniques have been proposed to represent terrain surfaces realistically. These are broadly categorized as 2D or 3D terrain models, each with its own merits and demerits. An ideal model captures the minute details of the terrain, thus requiring many sample points, whereas an optimum model requires only the dominant features to represent the terrain effectively. Hence there is a debate among researchers on the appropriate level of detail that captures the terrain features without compromising the signature of the terrain. The Very Important Point (VIP) algorithm captures the essential terrain samples which encode the dominant physical characteristics of the terrain surface, so that properties such as height, slope and aspect of the terrain are preserved accurately. The level of detail, i.e. the set of dominant points captured by the VIP algorithm, is decided through a measure of significance of each sample point with respect to its surroundings. In most point-selection techniques, this measure of significance is compared against an empirically chosen threshold value. This paper attempts to investigate and understand the optimum threshold value for sampling the dominant points of undulating terrain. The threshold value of the VIP algorithm which filters the dominant points is investigated for different types of terrain, and the range of threshold values that filters the optimum number of terrain points, so that the important physical characteristics of the terrain are preserved, is derived from the experiment.
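
One common formulation of the VIP significance measure (the quantity compared against the threshold studied here) averages, over the four opposite-neighbour pairs of a 3x3 window, the offset of the centre elevation from the chord joining the pair; a Python/NumPy sketch under that assumption:

    import numpy as np

    def vip_significance(dem, y, x):
        # Average offset of sample (y, x) from the chords joining its four
        # opposite neighbour pairs (N-S, E-W and the two diagonals). Requires
        # an interior point: 1 <= y < rows-1 and 1 <= x < cols-1.
        z = float(dem[y, x])
        pairs = [((y - 1, x), (y + 1, x)), ((y, x - 1), (y, x + 1)),
                 ((y - 1, x - 1), (y + 1, x + 1)), ((y - 1, x + 1), (y + 1, x - 1))]
        offsets = [abs(z - (float(dem[a]) + float(dem[b])) / 2.0) for a, b in pairs]
        return sum(offsets) / len(offsets)   # keep the point if this exceeds the threshold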

Center for Artificial Intelligence and Robotics (CAIR), Defence Research & Development Organization, C. V. Raman Nagar, Bangalore-560093, India.
Narayan Panigrahi (npanigrahi7@gmail.com)
G. Athithan (athithan.g@gmail.com)
Associate Professor, CSRE, IIT Bombay, Powai, Mumbai-400076, India.
B. K. Mohan (bkmohan@csre.iitb.ac.in)

The task scheduling problem in a heterogeneous system (TSPHS) is an NP-complete problem. It is a multiobjective optimization problem (MOP): objectives such as makespan, average flow time, robustness and reliability of the schedule are considered. This paper considers three objectives in the multiobjective task scheduling problem: minimizing the makespan (schedule length), minimizing the average flow time and maximizing the reliability. Multiobjective evolutionary algorithms (MOEAs) are well suited to multiobjective task scheduling in a heterogeneous environment. Two such algorithms, a Multiobjective Genetic Algorithm (MOGA) and Multiobjective Evolutionary Programming (MOEP) with non-dominated sorting, are developed and compared on various random task graphs as well as a real-time numerical application graph. The paper also demonstrates the capability of MOEAs to generate well-distributed Pareto optimal fronts in a single run.
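
At the heart of the non-dominated sorting used by both algorithms is the Pareto dominance test; a minimal Python sketch, with objective vectors arranged so that every component is minimized (e.g. reliability negated):

    def dominates(a, b):
        # a, b are objective vectors, e.g. [makespan, avg_flowtime, -reliability]
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def first_pareto_front(population):
        # Non-dominated members of a population of (schedule, objectives) pairs
        return [p for p in population
                if not any(dominates(q[1], p[1]) for q in population if q is not p)]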

Senior Lecturer, Department of Computer Science and Engineering, Thiagarajar College of Engineering, Madurai, Tamilnadu, India.
P. Chitra (pccse@tce.edu)
Final Year Student, Department of Computer Science and Engineering, Thiagarajar College of Engineering, Madurai, Tamilnadu, India.
S. Revathi
Associate Professor, Department of Computer Science and Engineering, Thiagarajar College of Engineering, Madurai, Tamilnadu, India.
P. Venkatesh
Dean CSE/IT, Department of Computer Science and Engineering, Thiagarajar College of Engineering, Madurai, Tamilnadu, India.
R. Rajaram

Parallel implementation of Principal Component Analysis (PCA) using Eigenvalue Decomposition (EVD) poses significant challenges, such as load balancing of its modules, reducing interprocessor communication, and hiding the significant memory latency incurred in its modules. It requires massive computational power while maintaining a trade-off between numerical precision and processing time. The contribution of this paper lies in presenting an optimized parallel implementation of PCA using EVD on the multi-core PowerXCell 8i without compromising numerical precision. It employs the features of this architecture, including SIMD vectorization, double buffering, high memory bandwidth and signal notification techniques. A speedup of around 40 times over a single-core processor was achieved for a 512 x 1024 matrix.
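
Independent of the Cell-specific optimizations, the underlying computation is standard; a NumPy reference sketch of PCA via EVD of the covariance matrix:

    import numpy as np

    def pca_evd(X, k):
        # PCA of data matrix X (samples x features) via eigenvalue decomposition
        Xc = X - X.mean(axis=0)              # centre the data
        cov = (Xc.T @ Xc) / (len(X) - 1)     # covariance matrix
        vals, vecs = np.linalg.eigh(cov)     # EVD of a symmetric matrix
        order = np.argsort(vals)[::-1][:k]   # top-k principal directions
        return Xc @ vecs[:, order]           # data projected onto them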

Department of Electronics & Computer Engineering, Indian Institute of Technology Roorkee, Roorkee, India.
Gautam Seshadri (gausesh@gmail.com)
Ramnik Jain (ramnik89@gmail.com)
Department of Computer Science and Engineering, College of Engineering Roorkee, Roorkee,India.
Ankush Mittal (dr.ankush.mittal@gmail.com)

This paper describes a novel algorithm for tamper-proof watermarking of 3D models. Fragile watermarking is used to detect any kind of tampering, i.e. unauthorized modifications of the model. The simplest way to do this is to insert a watermark at each and every vertex of the model, but this poses two challenges: inserting a watermark in every vertex can cause perceptible distortion, and doing so is computationally expensive. The challenge of perceptible distortion is overcome by using a measure of perceptible distortion called the Hausdorff distance; the objective of the Genetic Algorithm is thus to minimize the Hausdorff distance between the 2-ring neighbourhoods of the original and the watermarked vertex. The challenge of time complexity is overcome by running the Genetic Algorithm for just 20 generations and letting it converge prematurely, which significantly reduces the computational cost. The experimental results indicate that the algorithm effectively detects any distortion of the model.
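
The GA's fitness is built on the symmetric Hausdorff distance between point sets; a NumPy sketch of that measure, with the vertex neighbourhoods given as n x 3 arrays:

    import numpy as np

    def hausdorff(A, B):
        # Symmetric Hausdorff distance between point sets A (n x 3) and B (m x 3):
        # the largest distance from any point of one set to the other set.
        d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)   # pairwise distances
        return max(d.min(axis=1).max(), d.min(axis=0).max())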

University of Nevada, Reno, USA.
Mukesh Motwani (mukesh@cse.unr.edu)
Frederick C. Harris (fredh@cse.unr.edu)
Rakhi Motwani
Vishwakarma Institute of Technology, Pune, India.
Balaji Sridharan (balaji.sridharan@ieee.org)

This paper describes a novel approach to the modification of Genetic Algorithms. The novelty of the modified Genetic Algorithm lies in the addition of a new parameter, the age of a chromosome, which determines its ability to reproduce. The concepts of dynamic population size and dynamic elitism size are also introduced. The modified Genetic Algorithm converges to a near-optimum value at a faster rate, i.e. fewer generations are required for convergence, and due to the dynamic population size the results obtained are more accurate. The modified algorithm is thus observed to be computationally more efficient. The algorithm was tested on some standard functions and curves and the results were found to be highly satisfactory.

Vishwakarma Institute of Technology, Pune, India.
Balaji Sridharan (balaji.sridharan@ieee.org)

We propose an innovative approach for handling dynamic memory, arrays, pointers, structures and unions by an interprocedural dynamic slicing technique which combines the basic techniques from past and current trends in dynamic interprocedural slicing. First, an improved algorithm for interprocedural dynamic slicing in the presence of derived and user-defined data types is given. Second, the dynamic slices for the different derived and user-defined data types used in the respective programs are obtained. The proposed extended interprocedural dynamic slicing algorithm is more efficient than the existing algorithm, as it gives a detailed account of the slices that can be obtained for one-dimensional pointers, two-dimensional pointers, pointers and arrays, dynamic memory allocation, structures and unions. Illustrative programs are given as proof of correctness of the proposed algorithm.

School of Computer Engineering, Kalinga Institute of Industrial Technology, Bhubaneswar, India.
Santosh Kumar Pani (spanifcs@kiit.ac.in)
Mahamaya Mohanty (mahamayamohanty@yahoo.co.in)

An efficient non-orthogonal pyramid representation was proposed by Burt. However, it has been noted in the literature that the Laplacian sub-bands of the Burt pyramid carry redundant information. In this paper, we propose a modified pyramid representation to reduce the redundancy in the Laplacian sub-bands. The proposed representation makes use of a well-studied de-blurring algorithm to obtain a prediction of the blurred Gaussian images. The proposed pyramid is an improvement on the Burt pyramid as it exhibits reduced sub-band frequency overlap, cross-correlation and mutual information. The advantages of using this pyramid representation in image magnification and progressive image transmission (PIT) over noisy and unreliable networks are discussed.
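
For reference, a minimal sketch of the standard Burt-Adelson construction that the proposal modifies (Python, assuming SciPy is available); the paper replaces the plain EXPAND prediction below with a de-blurring based one:

    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def burt_pyramid(img, levels, sigma=1.0):
        # Each Laplacian band is a Gaussian level minus the expanded
        # next-coarser level: L_k = G_k - EXPAND(G_{k+1}).
        gauss = [img.astype(np.float64)]
        for _ in range(levels):
            gauss.append(gaussian_filter(gauss[-1], sigma)[::2, ::2])   # REDUCE
        laps = [g - zoom(g1, 2.0, order=1)[:g.shape[0], :g.shape[1]]
                for g, g1 in zip(gauss[:-1], gauss[1:])]                # EXPAND + subtract
        return laps, gauss[-1]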

Centre for Artificial Intelligence and Robotics, Defence Research and Development Organization, Bangalore, India.
Lakshmi
Subrata Rakshit

Radio spectrum is a limited resource in wireless mobile communication systems. A cellular system has to serve the maximum possible number of calls while the number of available channels is limited. Hence the problem of determining an optimal allocation of channels to mobile users that minimizes call-blocking and call-dropping probabilities is of paramount importance. This paper proposes a hybrid channel allocation model using an evolutionary strategy with an allocation distance, to make efficient use of the frequency spectrum.

Vishwakarma Institute of Technology, Pune, India.
S. R. Shinde (sandeep.shinde@vit.edu)
M. L. Dhore (hodcomp@vit.edu)
Dr. Babasaheb Ambedkar Technological University, Lonere, Raigad, India.
G. V. Chowdhary (girish.chowdhary@gmail.com)
Sinhagad Academy of Engineering, Pune, India.
Archana S. Shinde (jad_arch25@yahoo.co.in)

Change mining gives retailers insight into the changing purchase patterns of shoppers. The purchase pattern of shoppers sometimes depends on the other products purchased; hence, the conditional part of a pattern expressed as an association rule contains products. This paper proposes an approach to change mining where the conditional part may contain products or items.

OM & DS Area Xavier Institute of Management, Bhubaneswar, India.
Pradip Kumar Bala (p_k_bala@rediffmail.com)

In the low-power nanocomputing era, reversible and conservative logic gate design is emerging as an important area of research. In this paper, as reversible logic design research gathers momentum, we present a novel approach to designing a conservative logic gate (CLG) using a 3x3 tile nanostructure. The 3x3 tile is a novel nanostructure with diverse applications, discussed in this paper, and is applied here to implement the CLG. The basic principle of a CLG is parity preservation in both input and output. We apply a 3x3 orthogonal majority voter (MV) to implement the logic, and the crosswire is implemented with the help of a 3x3 baseline tile. The main advantage of this design is that only a single layer is required. It is also demonstrated that the proposed design requires fewer QCA cells, less area and fewer clocking zones than the existing counterparts. We also analyze logic synthesis using the proposed gate, with effective and promising results that excel all existing counterparts. We demonstrate the testability of the proposed CLG by means of a behavioural approach on both inputs and outputs.

Department of Computer Science & Engineering, West Bengal University of Technology, BF-142, Sector-1, Saltlake City, Kolkata-700064, India.
Kunal Das (kunaldasqca@gmail.com)
Debashis De (debashis.de@wbut.ac.in)

In this paper, the performance of OFDM-BPSK and OFDM-QPSK systems in a Nakagami-m channel is reported. Our approach is based on the decomposition of a Nakagami random variable into orthogonal random variables with Gaussian envelopes. Results are presented to obtain the optimum value of m based on BER and SNR.

Department of Electronics and Communication Engineering, National Institute of Technology, Jalandhar, India.
Neetu Sood (neetu.kath@gmail.com)
Department of Computer Science Engineering, National Institute of Technology, Jalandhar, India.
Ajay K. Sharma (sharmaajayk@rediffmail.com)
National Institute of Technology, Jalandhar, India.
Moin Uddin (director@nitj.ac.in)

This paper presents a clustering routing protocol for event-driven WSNs with a reduced number of reporting nodes in each cluster. We demonstrate that decreasing the number of reporting nodes in each cluster increases the number of reports that need to be sent to the sink in order to achieve energy efficiency and the desired information reliability. The algorithm also aims at even energy dissipation among the nodes in the network by alternating the possible routes to the sink and by autonomous selection of an energy-efficient cluster head. This helps to balance the load on sensor nodes while at the same time avoiding congested links. Moreover, the algorithm adopts an energy-efficient approach by choosing the high energy values of a node stored in a buffer table for each round. We discuss the implementation of our protocol and present its performance evaluation through the Network Simulator.

Assistant Professor, Department of Information Technology, PSNACET, Dindigul.
Narendra Kumar (nandhume@gmail.com)
Principal, KSR College of Technology, Thiruchengode, Tamil Nadu, India.
K. Thyagarajah
IACSIT Member.
Irrai Anbu (irrai.research@gmail.com)

Organ delineation from volumetric datasets is a frequently encountered problem in medical imaging. Numerous 3D polygonal surface mesh model based segmentation algorithms have been reported in this area. These algorithms aim for a fully automated solution for segmenting anatomical structures in volumetric image datasets, but attraction to false boundaries results in inaccurate segmentation, which can happen because of poor model initialization or weak image feature response at the correct locations. To be acceptable in clinical practice, it is crucial for a segmentation approach to allow the integration of input from the user when automatic segmentation fails to provide the desired accuracy. This work presents a new 3D triangular surface mesh editing technique to correct the results of automatic segmentation. The method directly projects the subset of mesh vertices that are closest to a user-drawn line representing the true boundary of the anatomical structure. Neighbouring vertices within a user-defined depth are deformed based on shape constraints, resulting in a smooth edited surface. The shape constraint is formulated to keep the distribution of the vertices in the edited mesh region in correspondence with the initial surface model. The results of the study show that the manually edited surface yields an accurate (1.52 ± 0.5 mm) anatomical boundary on the edited image plane while preserving the shape of the surface in the neighbouring region.

Philips Electronics India Pvt. Ltd., Philips Innovation Campus, Manyata Tech Park, Bangalore 560045, India
Yogish Mallya
Prashant Kumar

Image segmentation forms an important preliminary step in many high-level image processing and computer vision applications. Its importance necessitates the quantitative evaluation of image segmentation results. A few methods based on general principles have been developed. In this paper, we propose a novel segmentation evaluation method based on region cardinality ratio and variance, which addresses the limitations of prior methods and attempts to remove them. The results of our method are superior to prior quantitative segmentation evaluation techniques due to the explicit use of inter-cluster relations.

Centre for AI and Robotics, Bangalore 560093, India.
Nitin Kumar Sharma (nitinsharma@cair.drdo.in)
Shah Ronak (ronak@cair.drdo.in)
Malay K. Nema (malay@cair.drdo.in)
Subrata Rakshit (srakshit@cair.drdo.in)

Hybrid amplifiers with different gain bandwidths are indispensable for long-haul wavelength-multiplexed optical communication systems in the C-band and L-band. In this paper, the gain spectrum of an EDFA is broadened and flattened by cascading the EDFA with a TDFA along with a dielectric interference filter (TFF). Using this configuration we obtain an amplification bandwidth of 100 nm, ranging from 1460 nm to 1560 nm, with a ±2.5% gain deviation.

Assistant Professor, Electronics and Communication Engineering Department, R.B.I.E.B.T, Sahuran (Pb.), Mohali, India.
Inderpreet Kaur (inder_preet74@yahoo.com)
Assistant Professor (FIETE, MIEEE), Electrical & Electronics Communication Engineering Department, PEC University of Technology, Chandigarh, India.
Neena Gupta (neenagupta@ieee.org)

Script identification for handwritten document images is an open document analysis problem. In this paper, we propose an approach to script identification for documents containing handwritten text using texture features. The texture features are extracted from the co-occurrence histograms of wavelet-decomposed images, which capture information about the relationship between each high-frequency sub-band and the low-frequency sub-band of the transformed image at the corresponding level. The correlation between the sub-bands at the same resolution exhibits a strong relationship, indicating that this information is significant for characterizing a texture. The scheme is tested on seven Indian language scripts along with English. Our method is robust to the skew generated in the process of scanning a document and also to varying coverage of text. The experimental results demonstrate the effectiveness of the texture features in identifying handwritten scripts. Experiments are also performed considering multiple writers.

Department of Computer Science, Gulbarga University, Gulbarga, Karnataka, India.
P. S. Hiremath (hiremathps53@yahoo.com)
V. Mouneswara (mounivishwa@gmail.com)
Department of Computer Science, Karnatak Science College, Dharwad, Karnataka, India.
S. Shivashankar (s_shivashankar@rediffmail.com)
Department of Information Science, SDM College of Engineering, Dharwad, Karnataka, India.
Jagdeesh D. Pujari (jaggudp@yahoo.com)

RFID (Radio Frequency Identification) technology uses radio waves to transfer data between readers and movable tagged objects. In a networked environment of RFID readers, enormous volumes of data are generated. As the database becomes more pervasive in an RFID environment, various data quality issues arise regarding data legacy, data uniformity and data duplication. The raw data generated by the readers cannot be used directly by applications, so RFID data repositories must cope with a number of quality issues, including data redundancy, noise removal and synonymy, to name a few. Data generated in large volumes therefore has to be automatically filtered, processed and transformed. In this paper, we survey the existing literature on filtering techniques and then propose a dynamic threshold based sliding-window filtering technique for data generated by networked RFID readers. We present a scenario where genuine raw data occurs less often than the defined threshold value while noise occurs more often than the threshold; in this case, the existing filtering technique recognizes noise as RFID data and discards the real raw RFID data [2]. We therefore propose updating the threshold value periodically and examining the EPC data format and associated values (header information).
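
A minimal Python sketch of the fixed-threshold sliding-window smoothing that the proposal makes adaptive; the window and threshold semantics here are illustrative assumptions:

    from collections import deque

    def smooth_reads(events, window, threshold):
        # A tag is reported present only if it was read at least `threshold`
        # times within the last `window` read events; the paper argues this
        # threshold should be updated periodically rather than stay fixed.
        recent = deque(maxlen=window)
        present = []
        for tag in events:
            recent.append(tag)
            present.append(recent.count(tag) >= threshold)
        return present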

Institute of Management Studies, Ghaziabad, UP, India.
Sapna Tyagi (sapna.tyagi@imsgzb.com)
Department of Electrical Engineering, Jamia Millia Islamia, New Delhi, India.
A. Q. Ansari (aqansari@ieee.org)
Senior Member, ACEEE.
M. Ayoub Khan (softayoub@gmail.com)

Automated systems for understanding display boards find many applications, such as guiding tourists, assisting the visually challenged and providing location-aware information. Such systems require an automated method to detect and extract text prior to further image analysis. In this paper, a methodology to detect and extract text regions from low-resolution natural scene images is presented. The proposed work is texture based and uses a DCT-based high-pass filter to remove constant background. Texture features are then computed on every 50x50 block of the processed image, and potential text blocks are identified using newly defined discriminant functions. The detected text blocks are then merged and refined to extract text regions. The proposed method is robust and achieves a detection rate of 96.6% on a variety of 100 low-resolution natural scene images, each of size 240x320.

Department of Computer Science & Engineering, Basaveshwar Engineering College, Bagalkot, Karnataka, India.
S. A. Angadi (vinay_angadi@yahoo.com)
M. M. Kodabagi (malik123_mk@rediffmail.com)

As present-day technology shrinks towards the nanometer regime, interconnect delay dominates gate delay. Hence the calculation of interconnect delay is crucial and plays a major role in both performance and physical design optimization of high-speed CMOS integrated circuits. Many approaches concentrate primarily on finding the interconnect delay rather than the gate delay, so that the speed of the circuit can be enhanced simply by decreasing interconnect length. Statistical timing analysis techniques are being developed to tackle this important problem. The variations of critical dimensions in modern VLSI technologies lead to variability in interconnect performance that must be fully accounted for in timing verification; however, handling a multitude of inter-die/intra-die variations and assessing their impact on circuit performance can dramatically complicate the timing analysis. For optimizations like physical synthesis and statistical timing analysis, efficient interconnect delay computation is critical. By considering the impulse response of a linear circuit as a probability distribution function (PDF), Elmore first estimated the value of interconnect delay. Several approaches proposed after the Elmore delay metric [1], such as PRIMO [2], AWE [5] and h-gamma [4], have proven to be more accurate. Moments of the impulse response are widely used for interconnect delay analysis, from the explicit Elmore delay [1] (the first moment of the impulse response) to moment matching methods which create reduced-order trans-impedance and transfer function approximations. This paper describes an approach for fitting moments of the impulse response to probability density functions so that delay can be estimated from probability tables. The accuracy of our model is justified by comparing results with SPICE simulations and with models already proposed using other probability distribution functions.
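
For concreteness, a Python sketch of the Elmore metric that these moment-based approaches refine: the delay at a sink of an RC tree is the sum, over each resistance on the source-to-sink path, of that resistance times its total downstream capacitance.

    def elmore_delay(parent, R, C, node):
        # parent[n]: n's parent (None at the driver); R[n]: resistance of the
        # edge into n; C[n]: node capacitance.
        children = {}
        for n, p in parent.items():
            children.setdefault(p, []).append(n)
        def subtree_cap(n):                       # total capacitance at and below n
            return C[n] + sum(subtree_cap(c) for c in children.get(n, []))
        delay, n = 0.0, node
        while parent[n] is not None:              # walk sink -> driver
            delay += R[n] * subtree_cap(n)
            n = parent[n]
        return delay

    parent = {'drv': None, 'n1': 'drv', 'sink': 'n1'}
    R = {'n1': 100.0, 'sink': 50.0}               # ohms
    C = {'drv': 0.0, 'n1': 1e-15, 'sink': 2e-15}  # farads
    print(elmore_delay(parent, R, C, 'sink'))     # 100*3e-15 + 50*2e-15 = 4e-13 s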

Department of Electronics & Communication Engineering, National Institute of Technology, Durgapur, West Bengal, India.
R. Kar (rajibkarece@gmail.com)
V. Maheshwari (maheshwari_vikas1982@yahoo.com)
A. K. Mal (toakmal@gmail.com)
Md. Maqbool
A. K. Bhattacharjee

Visual interpretation of gestures can be useful in accomplishing natural Human-Computer Interaction (HCI). In this paper we propose a method for recognizing hand gestures. We have designed a system which can identify specific hand gestures and use them to convey information. At any time, a user can exhibit a specific hand gesture in front of a web camera linked to a computer. We first capture the user's hand gestures and store them on disk, then read the captured videos one by one, convert them to binary images and create a 3D Euclidean space of binary values. We use supervised training of a feed-forward neural network with the back-propagation algorithm to classify hand gestures into ten categories: hand pointing up, down, left, right and front, and the number of fingers the user is showing. We achieved up to 89% correct results on a typical test set.

Department of Computer Applications, Madhav Institute of Technology and Science, Gwalior, M.P. India.
G. R. S. Murthy (murthy.grs@gmail.com)
R. S. Jadon (rsj_mits@yahoo.com)

In this paper, a new algorithm for image indexing and retrieval using the Multi-scale Ridgelet Transform (MRT) is presented. In MRT, the ridgelet transform is implemented using Gabor wavelet sub-bands. This method captures image edge information more accurately than spectral methods such as the Gabor transform (GT). MRT is applied to the Corel database and low-order statistics are computed from the transformed images. A feature database is generated from the extracted texture features, and the retrieval results demonstrate a significant improvement in precision and average retrieval rate compared to GT.

Ph. D. Student, Instrumentation and Signal Processing Laboratory, Indian Institute of Technology Roorkee, Roorkee - 247667, Uttarakhand, India.
Anil Balaji Gonde (abgonde@gmail.com)
Professor, in Department of Electrical Engineering, Instrumentation and Signal Processing Laboratory, Indian Institute of Technology Roorkee, Roorkee - 247667, Uttarakhand, India.
R. P. Maheshwari (rpmaheshwari@gmail.com)
Assistant Professor, Department of Mathematics, Instrumentation and Signal Processing Laboratory, Indian Institute of Technology Roorkee, Roorkee - 247667, Uttarakhand, India.
R. Balasubramanian (balarfma@iitr.ernet.in)

Image descriptors encode the images in a database as feature vectors, and feature vectors play a central role in content-based image retrieval. This paper proposes a new wavelet-based feature vector. Most natural images have high frequencies of short span and low frequencies extending over a larger span. Hence our feature vector is designed to provide higher spatial localization and lower frequency resolution at higher frequencies, and the reverse at lower frequencies. The energy of the frequency content of the image at the various sub-bands and at different spatial resolutions (higher for higher frequency bands) is stored as the feature vector; the feature vector thus encodes high-frequency information as well. The superiority of the proposed algorithm over some traditional algorithms is substantiated with results.
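
A sketch of one way such a feature vector can be computed (Python, assuming the PyWavelets package); the wavelet choice and depth are illustrative:

    import numpy as np
    import pywt

    def wavelet_energy_features(img, levels=3):
        # Multilevel 2-D DWT: finer (higher-frequency) levels retain more
        # spatial coefficients, coarser levels fewer, matching the localisation
        # trade-off described above. The feature vector is the mean energy of
        # each sub-band.
        coeffs = pywt.wavedec2(img, 'db2', level=levels)
        feats = [np.mean(np.square(coeffs[0]))]          # approximation energy
        for cH, cV, cD in coeffs[1:]:                    # coarse -> fine details
            feats += [np.mean(np.square(c)) for c in (cH, cV, cD)]
        return np.asarray(feats)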

Centre for Artificial Intelligence and Robotics, Defense Research and Development Organization, Bangalore, India.
Lakshmi
Subrata Rakshit

In this paper, we introduce an efficient method to substantially increase object recognition performance by employing feature selection in a bag-of-visual-words representation. The proposed method generates a visual vocabulary from a large set of images using a visual vocabulary tree, and images are represented by a vector of weighted word frequencies. We introduce an on-line feature selection method which, for a given query image, selects the relevant features from the large weighted word vector; the learned database image vectors are likewise reduced using the selected features. This improves the classification accuracy and also reduces the overall computational cost through dimensionality reduction of the classification problem. In addition, it helps in discarding irrelevant features which, if selected, would deteriorate the classification results. We demonstrate the efficiency of our method on the Caltech dataset.

Centre for AI and Robotics(CAIR), DRDO Complex, C. V. Raman Nagar, Bangalore - 560093, India.
A. G. Faheema (faheema@cair.drdo.in)
Subrata Rakshit (subrata@cair.drdo.in)

This paper presents a study on cost analysis for Video-on-Demand (VoD) systems exploiting the advantages of the Peer-to-Peer (P2P) architecture. Our objective is to tackle the problems in analyzing cost and to come up with an efficient cost analysis model (CCAM), thereby enhancing the revenue of the system. We adopt a partially decentralized P2P architecture with a cluster-based approach to find the probability of requests for the videos in a cluster and to place the videos closer to the customers. The results of the simulation experiments demonstrate that most video requests are delivered to the customers almost immediately, with a fast response time. The results also indicate that our approach achieves maximum revenue with a lower rejection ratio and a moderate peer load when compared to the conventional approach using a random arrival pattern, contributing substantial revenue to the VoD system.

Department of Computer Science and Engineering, University Visvesvaraya College of Engineering, Bangalore University, Bangalore-560001, India.
D. N. Sujatha (suj sat@yahoo.com)
K. Girish
N. Sunil Kumar
K. R. Venugopal
Microprocessor Applications Laboratory, Indian Institute of Science, Bangalore-560012, India.
L. M. Patnaik

In the textile industry, reliable and accurate quality control and inspection is an important element. At present it is still accomplished by human experience, which is time consuming and prone to errors; hence automated visual inspection systems have become mandatory in textile industries. This paper presents a novel algorithm for fabric defect detection making use of a Multi-Resolution Combined Statistical and Spatial Frequency (MRCSF) method. Defect detection consists of two phases: training and testing. In the training phase, the reference fabric images are cropped into non-overlapping sub-windows; applying MRCSF, the features of the textile fabrics are extracted and stored in the database. During the testing phase the same procedure is applied to the test fabric and the features are compared with the database information. Based on the comparison results, each sub-window is categorized as defective or non-defective. The classification rate obtained through simulation using MATLAB was found to be 99%.

Assistant Professor - ECE & Centre Head - SONA SIPRO, Advanced Research Centre, Sona College of Technology, Salem, Tamil Nadu, India.
R. S. Sabeenian (sabeenian@sonatech.ac.in)
Lecturer - ECE & Research Member - SONA SIPRO, Advanced Research Centre, Sona College of Technology, Salem, Tamil Nadu, India.
M. E. Paramasivam (sivam@sonatech.ac.in)

This paper presents the design and performance measurement of a hardware JPEG codec on an ARM926EJ-S emulation base board. JPEG is one of the best compression algorithms for still images, preserving quality at a high compression ratio. The JPEG codec encodes and decodes coloured as well as grey image formats. The design exploits a pipelined architecture for high throughput, and the overall size of the codec is controlled by sharing common resources between the JPEG encoder and decoder. The hardware JPEG codec was synthesized for a Xilinx Virtex-II FPGA device on the ARM926EJ-S emulation base board. The paper covers all the RTL modifications done for performance measurement. FPGA resource utilization is tabulated at the post-synthesis and post-mapping stages, and real-time performance measurement is done for the encoder and decoder on coloured and grey images.

Samsung India Software Operations Pvt. Ltd., Bangalore, Karnataka, India.
Naveen Tiwari (naveen.t@samsung.com)
Sagar Chaitanya Reddy (sagarcr.a@samsung.com)

Neural networks are an effective tool in the field of pattern recognition. A neural network learns patterns from the training data and recognizes whether the testing data holds such a pattern. The classical back-propagation (BP) algorithm is generally used to train neural networks because of its simplicity, but its basic drawbacks are uncertainty, long training times, and a tendency to find local optima rather than the global optimum. To overcome these drawbacks, we use a hybrid evolutionary approach (a GA-NN algorithm) to train neural networks. The aim of this algorithm is to find optimized synaptic weights for the neural network, so as to escape local minima and overcome the drawbacks of BP. The implementation takes images as input in the “.png” and “.tif” formats.
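
A toy sketch of the evolutionary half of such a GA-NN hybrid (Python/NumPy, mutation-only for brevity; the paper's GA presumably also applies crossover): the GA searches the flattened weight vector directly, with the network's training error as the fitness to minimize.

    import numpy as np

    def ga_train(loss, dim, pop=30, gens=100, sigma=0.1):
        # loss: maps a weight vector of length dim to the training error
        P = np.random.randn(pop, dim)
        for _ in range(gens):
            fit = np.array([loss(w) for w in P])
            elite = P[np.argsort(fit)[:pop // 2]]            # keep the best half
            kids = elite[np.random.randint(len(elite), size=pop - len(elite))].copy()
            kids += sigma * np.random.randn(*kids.shape)     # Gaussian mutation
            P = np.vstack([elite, kids])
        return P[np.argmin([loss(w) for w in P])]            # fittest weight vector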

Department of Computer Science, College of Engineering & Technology, Bhubaneswar, India.
Sangita Pal (sangitapalmtech.cet@gmail.com)
Prashanta Kumar Patra (hodcomputer@yahoo.co.in)
Department of Computer Science, NIT Rourkela, Rourkela, India.
Swati Vipsita (vipsita_swati@yahoo.co.in)

A new dynamic routing protocol (CSTR) for mobile networks is proposed in this paper. The rules for mapping between a cell number and its corresponding coordinates are discussed. The routing protocol is formulated with the help of a tree structure generated in the process, from which all possible routing paths can also be enumerated in a simple manner. This method is simpler than other techniques reported so far, and the simulation study confirms the routing path analysis for destination nodes.

Department of CSE, NIT, Durgapur, Durgapur-713209, India.
P. K. Guha Thakurta (parag.nitdgp@gmail.com)
Rajarshi Poddar (rajarshi.poddar@gmail.com)
Senior Member IEEE, Department of CSE, University of Calcutta, Kolkata-700009, India.
Subhansu Bandyopadhyay (subhansu@computer.org)

The WiMAX forum has adopted the IEEE 802.16 Orthogonal Frequency Division Multiplexing (OFDM) based adaptive Physical Layer (PHY) due to its robust performance and its provision of various mechanisms to improve Quality of Service (QoS). This paper evaluates the performance of the PHY layer with respect to various QoS aspects for different coding schemes and channel conditions. A robust PHY layer with improved QoS performance is developed from the investigations.

Department of Electronics and Communication Engineering, National Institute of Technology, Jalandhar, India.
Vinit Grewal (abvinit_ab@yahoo.co.in)
Department of Computer Science and Engineering, National Institute of Technology, Jalandhar, India
Ajay K. Sharma (sharmaajayk@rediffmail.com)

We present a novel DCT technique for digital watermarking of textured images based on the concept of the gray-level co-occurrence matrix (GLCM). We provide analysis describing the behaviour of the method in terms of correlation as a function of the offset for textured images. We compare our approach with other spatial and temporal domain watermarking techniques and demonstrate its potential for robust watermarking of textured images. Results from our extensive experiments indicate that the DCT approach is robust and secure against a wide range of image processing operations such as JPEG compression, additive noise, cropping, scaling and rotation. The experimental results also show that the proposed scheme has good imperceptibility.
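
The texture statistic underlying the analysis is standard; a Python/NumPy sketch of the GLCM for a given pixel offset:

    import numpy as np

    def glcm(img, dx, dy, levels=256):
        # Gray-level co-occurrence matrix of img for offset (dx, dy),
        # normalised to a joint probability over gray-level pairs.
        g = np.zeros((levels, levels), dtype=np.int64)
        h, w = img.shape
        for y in range(max(0, -dy), min(h, h - dy)):
            for x in range(max(0, -dx), min(w, w - dx)):
                g[img[y, x], img[y + dy, x + dx]] += 1
        return g / g.sum()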

Motilal Nehru National Institute of Technology, Allahabad, India.
Sushila Kamble (sushila@mnnit.ac.in)
Suneeta Agarwal (suneeta@mnnit.ac.in)
V. K. Shrivastava (vinay@mnnit.ac.in)
Vikas Maheshkar

Although signature detection modules in IDS/IPS are accurate in pattern matching, they still produce false positives. This is due to the incompleteness of the signatures, which carry little or no information about when, where and how to match them. Signatures enriched with this information significantly bring down the false positives and at the same time enhance the performance of the signature detection module. In this paper we propose a state-based signature detection model which leverages our state-aware signatures, carrying sufficiently complete information for matching. The proposed model keeps track of the state of the connection and matches the signatures within the appropriate packets. We further classify our signatures into those that span multiple packets and those that span multiple sessions, and we introduce the notion of virtual signatures, which represent patterns distributed across packets. We demonstrate the capability of the proposed model to detect these virtual, multi-packet and multi-session patterns by leveraging our state-aware signatures.

Computer Network and Internet Engineering (CNIE), Center for Development of Advanced Computing (CDAC), Bangalore, India.
Pramod S. Pawar (pramod@cdacbangalore.in)
Mayank Pal Singh (mayank@cdacbangalore.in)
Sachin Narayanan (sachin@cdacbangalore.in)

The object calculi proposed by Abadi and Cardelli treat objects as primitive constructs and define operations on these objects directly; this approach overcomes the problem of complex encodings of objects as functions. However, the object calculi do not provide direct support for aspects and related concepts. We propose a calculus which directly supports aspects and the other constructs of the aspect-oriented programming paradigm. Our proposed calculus is an extension of the untyped imperative object calculus, a member of the family of object calculi. We have worked out the syntax and operational semantics of the proposed untyped aspect calculus, both of which are discussed in the paper. An interpreter for the calculus has also been designed and implemented, and is likewise discussed.

Assistant Professor, Department of Computer Engineering, Malaviya National Institute of Technology, Jaipur, Rajasthan, India.
Dinesh Gopalani (dg@mnit.ac.in)
Professor, Department of Computer Engineering, Malaviya National Institute of Technology, Jaipur, Rajasthan, India.
M. C. Govil (govilmc@yahoo.co.in)

This paper proposes an extended secure data communication scheme [1] using the concept of the underdetermined BSS problem and HC-128. The purpose of HC-128 is to generate the pseudorandom sequence, which in turn is used for encryption and for specific mixing. The experimental results illustrate that the proposed method offers high security performance.

Computer Science & Engineering Department, Sikkim Manipal Institute of Technology, Sikkim, India.
Anil Kumar (dahiyaanil@yahoo.com)
M. K. Ghose (headcse.smit@gmail.com)
K. V. Singh (krishnajay81@yahoo.co.in)

In this paper, a novel watermarking scheme for color images is proposed. The host image is transformed from the RGB color space to the YCbCr color space. First, the discrete cosine transform is applied to all components of the image, and then all the transformed components are further decomposed by an l-level wavelet packet transform. A gray-scale watermark is embedded in all the frequency sub-bands of the image. A reliable extraction scheme is developed to extract the watermark from distorted images. Experimental results show that the proposed watermarking algorithm provides good results in terms of imperceptibility and is also robust against a variety of attacks.
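
The colour-space step that precedes the DCT is the standard ITU-R BT.601 conversion; a Python sketch for 8-bit channels:

    def rgb_to_ycbcr(r, g, b):
        # ITU-R BT.601 full-range RGB -> YCbCr (inputs and outputs in 0..255)
        y  =  0.299    * r + 0.587    * g + 0.114    * b
        cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
        cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
        return y, cb, cr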

Department of Mathematics, Indian Institute of Technology Roorkee, Roorkee - 247667, India.
Sanjay Rawat (sanjudma@gmail.com)
Balasubramanian Raman (balaiitr@ieee.org)

Residue Number Systems (RNS), based on the Chinese Remainder Theorem (CRT), permit the representation of large integers in terms of combinations of smaller ones. The set of all integers from 0 to M-1 under the RNS representation, with component-wise modular addition and multiplication, constitutes a direct sum of smaller commutative rings. Encryption and decryption algorithms based on the properties of this direct sum of smaller rings offer distinct advantages over decimal or fixed-radix arithmetic. In this paper, the RNS representation of integers is successfully utilized in additive, multiplicative and affine stream cipher systems.

The properties of cipher systems based on RNS allow the encryption/decryption algorithms to be sped up, reduce the time complexity, and provide immunity to side channel, algebraic and known-plaintext attacks. In this paper, the characteristics of additive, multiplicative and affine stream cipher systems, along with key generation, encryption and decryption based on the RNS representation, are discussed.
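
A minimal Python sketch of the RNS representation and CRT reconstruction these ciphers build on; the moduli are an illustrative choice and must be pairwise coprime:

    from math import prod                       # Python 3.8+

    def to_rns(x, moduli):
        # Residue representation: arithmetic proceeds independently mod each m_i
        return tuple(x % m for m in moduli)

    def from_rns(residues, moduli):
        # CRT reconstruction of x in [0, M), M = prod(moduli)
        M = prod(moduli)
        x = 0
        for r, m in zip(residues, moduli):
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)        # modular inverse (Python 3.8+)
        return x % M

    moduli = (3, 5, 7)                          # M = 105
    a, b = to_rns(17, moduli), to_rns(30, moduli)
    s = tuple((x + y) % m for x, y, m in zip(a, b, moduli))   # component-wise add
    print(from_rns(s, moduli))                  # 47 = (17 + 30) mod 105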

Research Scholar, Department of Electronics and Communication, National Institute of Technology, Karnataka, Srinivasanagar, Suratkal, Mangalore, India.
Ganesh Aithal (ganeshaithal@gmail.com)
Department of Electronics and Communication, Nagarjuna College of Engineering, Venkatagiri Kote post, Bangalore 562110, Karnataka, India
K. N. Hari Bhat
Department of Electronics and Communication, National Institute of Technology Karnataka, Srinivasanagar, Suratkal, Mangalore 575025, Karnataka, India.
U. Sripathi

We propose an innovative approach for handling dynamic memory, arrays, pointers, structures and unions by an interprocedural dynamic slicing technique which combines the basic techniques from past and current trends in dynamic interprocedural slicing. First, an improved algorithm for interprocedural dynamic slicing in the presence of derived and user-defined data types is given. Second, the dynamic slices for the different derived and user-defined data types used in the respective programs are obtained. The proposed extended interprocedural dynamic slicing algorithm is more efficient than the existing algorithm, as it gives a detailed account of the slices that can be obtained for one-dimensional pointers, two-dimensional pointers, pointers and arrays, dynamic memory allocation, structures and unions. Illustrative programs are given as proof of correctness of the proposed algorithm.

Associate Professor, Department of Computer Science and Applications, Kurukshetra University, Kurukshetra, Haryana, India.
Rajender Nath (rnath_2k3@rediffmail.com)
Lecturer, MMICT & BM, MM University, Mullana(Ambala), Haryana, India.
Pankaj Kumar Sehgal (pankajkumar.sehgal@gmail.com)
Student, MMICT & BM, MM University, Mullana(Ambala), Haryana, India.
Atul Kumar Sethi (atulsethi.cool@gmail.com)

Steganography plays an important role in the field of information hiding and is used in a wide variety of applications such as internet security, authentication, copyright protection and information assurance. In Discrete Wavelet Transform (DWT) based steganography, the wavelet coefficients of the cover image are modified to embed the secret message. A DWT-based algorithm for image data hiding proposed in the recent past embeds the secret message in the CH band of the cover image. This paper observes the effect of embedding the secret message in the different bands CH, CV and CD on the performance of the stego image in terms of Peak Signal-to-Noise Ratio (PSNR). Experimentation has been done using six different attacks. The experimental results reveal that error block replacement with the diagonal detail coefficients (CD) gives better PSNR than doing so with the other coefficients.
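
The quality metric used throughout the comparison is standard; a NumPy sketch:

    import numpy as np

    def psnr(cover, stego, peak=255.0):
        # Peak signal-to-noise ratio (dB) between cover and stego images
        mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
        return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)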

Department of Computer Science & Engineering, J. C. D. M. College of Engineering, Sirsa, Haryana, India.
Vijay Kumar (vijaykumarchahar@gmail.com)
Department of Computer Science & Engineering, Guru Jambheshwar University of Science & Technology, Hisar, Haryana, India.
Dinesh Kumar (dinesh_chutani@yahoo.com)

A 2-tuple digital signature scheme has two elements: a message and a signature. A tampered message can be detected by decrypting the message digest, encrypted with the secret key of the signer, using the corresponding public key. If, on the contrary, the signature element is replaced, the signature cannot be verified; we term this the signature replacement attack, hitherto not discussed in the literature. Under a signature replacement attack, proof of origin is compromised. In this paper this attack is brought into focus for the first time, and a digital signature solution resilient to it is proposed, in which a trusted central arbiter is used as an in-line TTP. However, the central arbiter becomes the main performance bottleneck. The problem is equally true for the XML signature scheme used in Web service security today, so this paper also proposes a solution with a BPEL process acting as the central arbiter in the proposed special protocol.

Department of Computer Science, Assam University, Silchar, Assam, India.
Subrata Sinha (subra_s29@rediffmail.com)
Department of Computer Science & Engineering, Tezpur University, Napaam, Assam, India.
Smriti Kumar Sinha (smritiksinha@rediffmail.com)

Each data-centric mobile middleware has a unique abstraction of the data; they vary in data format, mode of operation, and so on. As each of these middlewares has some advantages over the others, it is important to establish interoperability among them. Application-level interconnection among these components is possible but disadvantageous. In this paper DIMM (Data-centric Interoperable Mobile Middleware) is proposed as a framework for building an interoperable middleware that handles all the complexity related to the different underlying middlewares on different devices.

Indian Institute of Technology, Kharagpur, India.
Rajarshi Pal (rajarshi@cse.iitkgp.ernet.in)
Connectiva Systems, Kolkata, India.
Souvik Mazumder (souvik.mzmdr@gmail.com)
Jadavpur University, Kolkata, India.
Samiran Chattopadhyay (samiran@it.jusl.ac.in)

In 2004, Das et al. proposed a dynamic identity based remote user authentication scheme and claimed that it is secure against different attacks. Unfortunately, many researchers have demonstrated that Das et al.'s scheme is vulnerable to various attacks; furthermore, it does not achieve mutual authentication and thus cannot resist the malicious server attack. In 2005, Liao et al. improved Das et al.'s scheme and claimed that the improved scheme achieves mutual authentication and withstands the password guessing attack and the insider attack. In 2006, Yoon and Yoo demonstrated a reflection attack on Liao et al.'s scheme that breaks its mutual authentication. In this paper, we show that Liao et al.'s scheme is also vulnerable to the malicious user attack, impersonation attack, stolen smart card attack and offline password guessing attack; moreover, it does not maintain user anonymity, and its password change phase is insecure. This paper presents a secure dynamic identity based authentication scheme using smart cards that resolves the aforementioned problems while keeping the merits of the different dynamic identity based authentication schemes.

Department of Electronics & Computer Engineering, Indian Institute of Technology, Roorkee, India.
Sandeep K. Sood (ssooddec@iitr.ernet.in)
Anil K. Sarje (sarjefec@iitr.ernet.in)
Kuldip Singh (ksconfcn@iitr.ernet.in)

Routing overhead is an important issue for any routing protocol in wireless networks. It is incurred during the broadcast of Route Request (RREQ) packets in route discovery and the broadcast of HELLO packets for link connectivity. A new routing protocol called Enhanced Ad-hoc On-demand Distance Vector (E-AODV) is proposed in this paper, which merges the Blocking Expanding Ring Search (BERS) technique with the use of routing packets as HELLO packets to reduce routing overhead. Results show that the performance of the E-AODV routing protocol is better than that of the existing AODV in wireless networks.

Lecturer, Department of Computer Science & Engineering, Baba Hira Singh Bhattal Institute of Engineering & Technology, Lehragaga, District Sangrur, Punjab, India.
Sandeep Suman (Er_sandeepsuman@rediffmail.com)
Lecturer, Department of Computer Science & Engineering, Yadavindra College of Engineering, Guru Kashi Campus, Talwandi Sabo, Punjabi University Patiala, Punjab, India.
Balkrishan (Balkrishan_76@rediffmail.com)

Network-on-Chip (NoC) has been proposed as a solution for addressing the design challenges of future high-performance nanoscale architectures. Application-specific SoC design offers the opportunity to incorporate custom NoC architectures that are more suitable for a particular application and do not necessarily conform to regular topologies. In this paper, fast deterministic methodologies are proposed for the synthesis of an energy-aware communication architecture, along with the corresponding routing tables, for an application-specific NoC whose communication traffic characteristics can be well characterized at design time.

Department of Computer Engineering, Malaviya National Institute of Technology, Jaipur, India.
Naveen Choudhary (naveenc121@yahoo.com)
M. S. Gaur (gaurms@mnit.ac.in)
V. Laxmi (vlaxmi@mnit.ac.in)
Super Computer Education and Research Center, Indian Institute of Science, Bangalore, India.
V. Singh (viren@serc.iisc.ernet.in)

Rogue Access Points (RAPs) are one of the leading security threats in current network scenarios; if not handled in time, they can lead to anything from minor network faults to serious network failure. Most current solutions for detecting rogue access points are not automated and depend on a specific wireless technology. In this paper, we propose a multi-agent based methodology which not only detects rogue access points but also completely eliminates them. The methodology has the following outstanding properties: (1) it does not require any specialized hardware; (2) the proposed algorithm detects and completely eliminates RAPs from the network; and (3) it provides a cost-effective solution. The proposed technique can block RAPs as well as remove them from the network, whether they appear as unauthorized APs or as rogue clients acting as APs.

Department of Information Technology, Birla Institute of Technology, Mesra, Ranchi, India.
V. S. Shankar Sriram (sriram@bitmesra.ac.in)
G. Sahoo (drgsahoo@yahoo.com)
Department of Computer Science & Engineering, Birla Institute of Technology, Mesra, Ranchi, India.
Krishna Kant Agrawal (krishna.agrawal@sify.com)

Many multimedia communication applications require a source to transmit messages to multiple destinations subject to delay and delay-variation constraints. To support delay-constrained multicast communications, computer networks have to guarantee an upper bound on the end-to-end delay from the source node to each of the destination nodes. On the other hand, if the same message fails to arrive at every destination node at the same time, inconsistency and unfairness problems will likely arise among users. The problem of finding a minimum cost multicast tree with delay and delay-variation constraints has been proven to be NP-complete. In this paper, we present a more efficient heuristic algorithm, the Economic Delay and Delay Variation Bounded Multicast Algorithm (EDVBMA), based on a novel heuristic function, to construct a least-cost delay and delay-variation bounded multicast tree. A noteworthy feature of this algorithm is that it has a very high probability of finding the optimal solution in polynomial time with low computational complexity.
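
To make the two constraints concrete, the following minimal Python sketch (our illustration, not the authors' EDVBMA) checks whether a candidate multicast tree satisfies a delay bound and a delay-variation bound; the tree representation and values are hypothetical.

    # `parent` maps each node to (its parent in the tree, delay of that edge);
    # the source node has no entry.
    def end_to_end_delays(parent, source, destinations):
        """Accumulate path delay from the source to each destination."""
        delays = {}
        for dest in destinations:
            total, node = 0.0, dest
            while node != source:
                node, edge_delay = parent[node]
                total += edge_delay
            delays[dest] = total
        return delays

    def within_bounds(parent, source, dests, delay_bound, variation_bound):
        d = end_to_end_delays(parent, source, dests)
        return (max(d.values()) <= delay_bound and
                max(d.values()) - min(d.values()) <= variation_bound)

    # Example tree: s->a (2), a->b (3), s->c (4); destinations b and c
    parent = {'a': ('s', 2.0), 'b': ('a', 3.0), 'c': ('s', 4.0)}
    print(within_bounds(parent, 's', ['b', 'c'], delay_bound=6.0, variation_bound=2.0))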

Department of Computer Science & Engineering, Veer Surendra Sai University of Technology, Burla, Sambalpur, Orissa, India.
Manas Ranjan Kabat (manas_kabat@yahoo.com)
Manoj Kumar Patel (patel.mkp@gmail.com)
Chita Ranjan Tripathy (cse_uce@yahoo.co.in)

Due to its object-based nature, flexible features and provision for user interaction, the MPEG-4 encoder is highly suitable for parallelization. The most critical and time-consuming operation of the encoder is motion estimation. Nvidia's general-purpose graphical processing unit (GPGPU) architecture provides a massively parallel stream processor model at a very low price (a few thousand rupees). However, synchronizing parallel calculations and repeated device-to-host data transfers are major challenges in parallelizing motion estimation on CUDA. Our solution employs optimized and balanced parallelization of motion estimation on CUDA. This paper discusses frame-based parallelization, in which parallelization is done at two levels: at the macroblock level and at the search range level. We propose a further division of the macroblock to optimize parallelization. Our algorithm supports real-time processing and streaming for key applications such as e-learning, telemedicine and video-surveillance systems, as demonstrated by experimental results.
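
As an illustration of the kernel being parallelized, the sketch below shows a plain full-search sum-of-absolute-differences (SAD) motion search for one macroblock in NumPy; the block size, search range and absence of any CUDA-specific code are our simplifications. On the GPU, each candidate offset (and, per the paper, each sub-macroblock part) would map to threads.

    import numpy as np

    def best_motion_vector(cur, ref, x, y, block=16, srange=8):
        """Exhaustive SAD search for the macroblock at (x, y) in `cur`."""
        mb = cur[y:y+block, x:x+block].astype(np.int32)
        best, best_sad = (0, 0), np.inf
        for dy in range(-srange, srange + 1):
            for dx in range(-srange, srange + 1):
                ry, rx = y + dy, x + dx
                if ry < 0 or rx < 0 or ry + block > ref.shape[0] \
                        or rx + block > ref.shape[1]:
                    continue                       # candidate falls off the frame
                cand = ref[ry:ry+block, rx:rx+block].astype(np.int32)
                sad = np.abs(mb - cand).sum()      # the cost each thread computes
                if sad < best_sad:
                    best_sad, best = sad, (dx, dy)
        return best, best_sad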

Department of Electronics & Computer Engineering, Indian Institute of Technology Roorkee, Roorkee, India.
Dishant Ailawadi (dishant.iitr@gmail.com)
Milan Kumar Mohapatra (mohapatramilan@gmail.com)
Ankush Mittal (dr.ankush.mittal@gmail.com)

Wireless Mesh Networks (WMNs) have the potential to improve network capacity by employing multiple radios and multiple channels (MRMC). Channel Assignment (CA) is a key issue that plays a vital role in determining WMN throughput by efficiently utilizing the available radios and channels, thereby minimizing network interference. The two important issues that need to be addressed by a CA algorithm are connectivity and interference. The CA problem is proven to be NP-hard [2][4] even with knowledge of the network topology and traffic load. In this paper we present improvements to the CLICA [2] algorithm, first by extending it into ECLICA, and we then propose a new method based upon a Minimum Spanning Tree, the MSTCA algorithm. Our proposed algorithms are centralized, interference- and traffic-aware, routing-independent, connectivity-preserving algorithms. The ECLICA and MSTCA algorithms run in two phases. In the first phase they temporarily assign channels to links throughout the network. In the second phase, they take feasible and necessary channel reassignment decisions to further reduce interference and improve overall network throughput. The proposed CA algorithms assume relatively stable traffic in the wireless mesh network and can be easily implemented on commodity IEEE 802.11 hardware. Our simulations demonstrate that the proposed algorithms improve upon the existing CLICA algorithm.
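
A rough sketch of the MST-first idea (our reading of the abstract, not the authors' MSTCA): assign channels along a minimum spanning tree before the remaining links, giving each link the channel least used by adjacent (interfering) links. The graph, weights and channel list below are hypothetical.

    import networkx as nx

    def mst_channel_assignment(G, channels):
        """G: graph with edge 'weight' = expected interference/traffic load."""
        assign = {}
        mst = set(map(frozenset, nx.minimum_spanning_tree(G, weight='weight').edges()))
        # Tree links first, so connectivity-critical links get the best picks.
        ordered = sorted(map(frozenset, G.edges()), key=lambda e: e not in mst)
        for link in ordered:
            u, v = tuple(link)
            # Channels already used on links sharing an endpoint with this one.
            used = [assign[frozenset(e)] for n in (u, v)
                    for e in G.edges(n) if frozenset(e) in assign]
            assign[link] = min(channels, key=used.count)   # least-used channel
        return assign

    G = nx.Graph()
    G.add_weighted_edges_from([('a', 'b', 1), ('b', 'c', 2), ('a', 'c', 3)])
    print(mst_channel_assignment(G, channels=[1, 6, 11]))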

Department of Computer Science and Engineering, JNTUH College of Engineering, Hyderabad-85, India.
Kavitha Athota (aathotakavitha@gmail.com)
Department of Computer and Information Sciences, University of Hyderabad, Hyderabad, India.
Atul Negi (atul.negi@ieee.org)
C. Raghavendra Rao (crrcs@uohyd.ernet.in)

The ZigBee (IEEE 802.15.4) standard interconnects simple, low-power and low-processing-capability wireless devices. ZigBee devices facilitate numerous applications such as pervasive computing, national security, and monitoring and control. Effective positioning of nodes in a ZigBee network is particularly important for improving network performance (e.g., throughput). In the wireless sensor network (WSN) literature, the use of a mobile sink is often recommended as an effective defense against the so-called hot-spot phenomenon, but the effects of a mobile coordinator on network performance are not given due consideration. In this paper, we perform an extensive evaluation, using OPNET Modeler, of the impact of coordinator mobility on a ZigBee mesh network. The results show that the ZigBee mesh routing algorithm exhibits significant performance differences when the routers are placed at different locations and the trajectories of the coordinator are varied. We also show that the ACK status in the packet plays a critical role in deciding network performance.

Computer Science and Engineering Department, Thapar University, Patiala, India.
Harsh Dhaka
Atishay Jain
Karun Verma

Various tools exist that are capable of evading security mechanisms such as firewalls, IDS and IPS, helping intruders send malicious traffic to a network or system. Inspection of malicious traffic and identification of anomalous activity are therefore essential to stop future intruder activity that could constitute an attack. In this paper we present a flow-based system to detect anomalous activity using IP flow characteristics with a chi-square detection mechanism. The system identifies anomalous activities such as scan and flood attacks by automatic behavior analysis of network traffic, and also gives detailed information about the attacker, victim, and type and time of the attack, which can be used for a corresponding defense. The anomaly detection capability of the proposed system is compared with the SNORT intrusion detection system, and the results show a much higher detection rate than SNORT for different scan and flood attacks. The proposed system detects various stealth scans and malformed packet scans. Since the probability of stealth scans being used in real attacks is very high, the system can identify real attacks at the initial stage so that preventive action can be taken.
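
The chi-square test at the core of such a detector can be sketched as follows; the feature (destination ports), the baseline counts and the threshold are hypothetical stand-ins, not the authors' profile. A window whose feature distribution deviates strongly from the learned baseline is flagged.

    def chi_square(observed, baseline):
        """Chi-square statistic of observed counts vs. a scaled baseline."""
        scale = sum(observed.values()) / sum(baseline.values())
        return sum((observed.get(k, 0) - e * scale) ** 2 / (e * scale)
                   for k, e in baseline.items() if e > 0)

    baseline = {'80': 600.0, '443': 300.0, '22': 50.0, 'other': 50.0}  # normal profile
    window   = {'80': 120, '443': 60, '22': 400, 'other': 420}         # scan-like mix
    THRESHOLD = 500.0   # in practice tuned from attack-free training traffic
    print("anomalous" if chi_square(window, baseline) > THRESHOLD else "normal")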

Computer Networking and Internet Engineering, Centre for Development of Advanced Computing (C-DAC), Bangalore, India.
N. Muraleedharan (murali@ncb.ernet.in)
Arun Parmar (parmar@ncb.ernet.in)
Manish Kumar (manish@ncb.ernet.in)

Performance and availability of resources cannot be guaranteed in the highly distributed and decentralized grid environment. For reliable application execution in a grid, mechanisms are needed to minimize or neutralize the effects of resource-related faults and of resources voluntarily leaving or joining the grid. In this paper a fault-tolerant application execution model for the grid is investigated. The proposed model is an efficient solution with respect to resource usage and application execution cost. An analytical study of the reliability of the proposed model is given, and illustrative examples are presented.

Department of Computer Science and Engineering, Sant Longowal Institute of Engineering and Technology, Longowal, Punjab, India.
Major Singh (mjrsingh@yahoo.com)
Department of Computer Science and Engineering, University College of Engineering, Punjabi University, Patiala, India.
Lakhwinder Kaur (mahal2k8@yahoo.com)

In a (t,n) threshold proxy signature scheme based on RSA, any t or more proxy signers can cooperatively generate a proxy signature while t-1 or fewer of them cannot. The threshold proxy signature scheme uses the RSA cryptosystem to generate the private and public keys of the signers [8]. In this article, we discuss the implementation and comparison of several threshold proxy signature schemes based on the RSA cryptosystem. The comparison covers time complexity, space complexity and communication overhead. We compare the performance of four schemes, Hwang et al. [1], Wen et al. [2], Geng et al. [3] and Fengying et al. [4], with that of a scheme proposed earlier by the authors of this article, and propose an advanced secure (t,n) threshold proxy signature scheme. In the proposed scheme, both the combiner and the secret share holder can verify the correctness of the information that they receive from each other. The proposed scheme is therefore secure and efficient against notorious conspiracy attacks.
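
The (t, n) threshold behaviour these schemes rely on can be illustrated with a plain Shamir secret-sharing sketch over a prime field; real RSA-based proxy signature schemes share signing material together with extra verification data, which is omitted here.

    import random

    P = 2**127 - 1   # a Mersenne prime, used as the field modulus

    def make_shares(secret, t, n):
        """Shares are points on a random degree t-1 polynomial with f(0)=secret."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0; needs any t valid shares."""
        total = 0
        for j, (xj, yj) in enumerate(shares):
            num = den = 1
            for m, (xm, _) in enumerate(shares):
                if m != j:
                    num = num * (-xm) % P
                    den = den * (xj - xm) % P
            total = (total + yj * num * pow(den, P - 2, P)) % P
        return total

    shares = make_shares(123456789, t=3, n=5)
    assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 suffice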

Department of Computer Science and Engineering, Dr. B. R. Ambedkar National Institute of Technology, Jalandhar, Punjab, India.
Raman Kumar
Harsh Kumar Verma

Keyword auctions are used to sell the positions alongside organic results shown by a search engine when a user types a keyword, or a query related to a keyword, into the search engine. They have been a huge revenue-generating arena for search engines over the last decade. Despite the great success of these auctions, certain research issues are still in an inchoate state and need urgent attention from the research community, e.g., how much a naive bidder should bid without resorting to complex agents, and how much he/she will minimally be charged for participation. In this paper we propose a novel scheme to compute an effective bidding range based on fuzzy logic, which has a threefold advantage. Firstly, it provides bidders with their effective range of bids, which can ensure their chances of participation and winning. Secondly, it provides the auctioneer with information about bidders' bidding behavior, which can help in predicting revenues. Lastly, it can enforce minimum reservation prices in a natural way. Experimental results are presented to illustrate the working of the proposed scheme.
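
Purely as an illustration of how a fuzzy set can yield an "effective bidding range" (the paper's actual rule base and membership functions are not reproduced here), consider a triangular membership over bid values and its alpha-cuts; all numbers are hypothetical.

    def tri(x, a, b, c):
        """Triangular membership with feet a, c and peak b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def effective_range(a, b, c, alpha):
        """Alpha-cut of the triangle: bids whose membership is >= alpha."""
        return a + alpha * (b - a), c - alpha * (c - b)

    # Hypothetical keyword: reserve price 0.4, typical winning bid 1.0, cap 2.0
    print(tri(0.7, 0.4, 1.0, 2.0))                    # membership of a 0.7 bid: 0.5
    lo, hi = effective_range(0.4, 1.0, 2.0, alpha=0.5)
    print(f"bid between {lo:.2f} and {hi:.2f}")       # 0.70 .. 1.50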

School of Computer and Systems Sciences, Jawaharlal Nehru University, New Delhi, India.
Madhu Kumari (madhu.jaglan@gmail.com)
Kamal K. Bharadwaj (kbharadwaj@gmail.com)

Ultrasonography is considered one of the most powerful techniques for imaging organs for an obstetrician and gynecologist. The first trimester of pregnancy is the most critical period in human existence. Evaluation of a first trimester pregnancy is usually indicated to confirm the presence and number of pregnancies, their location, and the well-being of the pregnancy. The first element to be measurable is the gestational sac (gsac) of the early pregnancy. The size of the gestational sac gives a measure of fetal age in early pregnancy, and from it the EDD is predicted. Today, the monitoring of the gestational sac is done manually, with human interaction. These methods involve multiple subjective decisions, which increase the possibility of interobserver error.

Because of the tedious and time-consuming nature of manual measurement, an automated, computer-based method is desirable that gives accurate boundary detection and consequently finds an accurate diameter. Ultrasound images are characterized by speckle noise and edge information that is weak and discontinuous; traditional edge detection techniques are therefore susceptible to spurious responses when applied to ultrasound imagery. The algorithm for finding the edges of the gsac is as follows. In the first step, we use contrast enhancement, followed by filtering: the image is smoothed using a lowpass filter followed by a Wiener filter. The image is then segmented using thresholding, which results in an image with a large number of gaps due to high intensity around the sac. These false regions are minimized by morphological reconstruction, and the boundaries are then detected using morphological operations. Knowledge-based filtering, using prior knowledge of the shape of the gestational sac, is applied to remove false boundaries: fragmented edges are removed first, and then the most circular shape is selected, as the sac is generally circular. Once the sac is located, its size is measured to predict the gestational age.
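
A condensed sketch of this pipeline is given below; the specific functions (CLAHE, Otsu thresholding, a morphological opening) are our stand-ins for the paper's contrast enhancement, thresholding and morphological steps, and all parameters are illustrative.

    import numpy as np
    from scipy.signal import wiener
    from skimage import exposure, filters, morphology, measure

    def find_gsac(img):
        """img: 2-D grayscale ultrasound frame as floats in [0, 1]."""
        img = exposure.equalize_adapthist(img)          # contrast enhancement
        img = filters.gaussian(img, sigma=2)            # lowpass smoothing
        img = wiener(img, mysize=5)                     # speckle suppression
        binary = img < filters.threshold_otsu(img)      # sac is dark (anechoic)
        binary = morphology.binary_opening(binary, morphology.disk(3))
        labels = measure.label(binary)
        # Knowledge-based filter: drop tiny fragments, keep the most circular
        # candidate (assumes at least one region survives).
        regions = [r for r in measure.regionprops(labels) if r.area > 200]
        best = max(regions,
                   key=lambda r: 4 * np.pi * r.area / max(r.perimeter, 1) ** 2)
        return best.equivalent_diameter                 # sac size -> gestational age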

Lecturer, Information Technology Department, Government College of Engineering, Aurangabad, Maharashtra, India.
Vrishali A. Chakkarwar (vrush.a143@gmail.com)
Head of Department, Information Technology Department, Government College of Engineering, Aurangabad, Maharashtra, India.
Madhuri S. Joshi (madhuris.joshi@gmail.com)
Head of Department, Computer Science & Engineering Department, Government College of Engineering, Aurangabad, Maharashtra, India.
Praveen S. Revankar (p_revankar@yahoo.com)

The Internet facilitates numerous services while being the most commonly attacked environment. Hackers attack vulnerabilities in the protocols used, and there is a serious need to prevent, detect and mitigate the attacks and identify their source. Network forensics involves monitoring network traffic and determining whether an anomaly in the traffic indicates an attack. Network forensic techniques enable investigators to trace and prosecute the attackers. This paper proposes a simple architecture for network forensics to overcome the problem of handling large volumes of network data and the resource-intensive processing required for analysis. It uses open source network security tools to collect and store the data. The system is tested against various port scanning attacks, and the results obtained illustrate its effectiveness in storage and processing capabilities. The model can be extended to add detection and investigation of various attacks.

Department of Electronics & Computer Engineering, Indian Institute of Technology Roorkee, Roorkee, India.
Atul Kant Kaushik (akk22pec@iitr.ernet.in)
Emmanuel S. Pilli (emshudec@iitr.ernet.in)
R. C. Joshi (rcjosfec@iitr.ernet.in)

The WWW's expansion, coupled with the high change frequency of web pages, poses a challenge for maintaining and fetching up-to-date information. Traditional crawling methods can no longer keep up with this updating and growing web. Alternative distributed crawling schemes that use migrating crawlers try to maximize network utilization by minimizing the network load, but are hampered by deficiencies in their web page refresh techniques. The absence of effective measures to verify whether a web page has changed is another challenge. In this paper, an efficient approach for computing revisit frequency is proposed: web pages that are frequently updated are detected, and the revisit frequency for each page is computed dynamically.
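
One simple way to realize such a dynamic revisit frequency (an assumption on our part, not the authors' exact formula) is to estimate a page's change rate from the fraction of past visits on which its content differed, then shrink or grow the revisit interval accordingly, within bounds.

    def next_interval(changes, visits, base_interval, lo=3600, hi=7 * 24 * 3600):
        """Return the next revisit interval in seconds for one page."""
        rate = changes / max(visits, 1)        # observed change probability
        if rate > 0.5:
            interval = base_interval * 0.5     # fast-changing page: visit sooner
        elif rate < 0.1:
            interval = base_interval * 2.0     # stable page: back off
        else:
            interval = base_interval
        return min(max(interval, lo), hi)      # clamp to sane bounds

    # Page changed on 8 of the last 10 visits, currently revisited daily:
    print(next_interval(changes=8, visits=10, base_interval=86400))  # 43200.0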

Department of Computer Engineering, YMCA Institute of Engineering, Faridabad, Haryana, India.
Ashutosh Dixit (dixit_ashutosh@rediffmail.com)
A. K. Sharma (ashokkale2@rediffmail.com)

Next generation wireless networks are expected to provide the best quality of service and minimum delays, resulting in reduced total access cost, increased coverage and more reliable wireless access for users anytime and anywhere. One of the most challenging problems facing the deployment of 4G technology is supporting quality of service (QoS). In this paper, an architecture (4UATT) is proposed, along with an algorithm to find the next possible point of attachment (PoA) for call continuation, which further reduces the packet delivery cost and location update cost.

Department of Computer Engineering, YMCA Institute of Engineering, Faridabad, India.
Sapna Gambhir (sapnagambhir@rediffmail.com)
Department of Computer Engineering, Faculty of Engineering & Technology, Jamia Millia Islamia, Delhi, India.
M. N. Doja (ndoja@yahoo.com)
Dr. B. R. Ambedkar National Institute of Technology, Jalandhar, India.
Moinuddin

Code Division Multiple Access (CDMA) is the predominant multiple access technology for future generation wireless systems. The performance of CDMA based wireless systems largely depends on the characteristics of the user-specific spreading codes. The objective of this paper is to highlight the various factors affecting the choice of these spreading codes and to present a comparative evaluation of the correlation properties of Orthogonal Gold codes, Orthogonal Golay complementary sequences and Walsh-Hadamard codes for application to next generation CDMA based wireless mobile systems.
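
The kind of correlation comparison reported can be reproduced for Walsh-Hadamard codes in a few lines (Gold and Golay sequence generation is omitted): Walsh codes are perfectly orthogonal at zero shift but behave poorly at nonzero shifts, which is one of the factors such an evaluation weighs.

    import numpy as np
    from scipy.linalg import hadamard

    H = hadamard(64)                        # rows are 64 Walsh-Hadamard codes

    def xcorr(a, b):
        """Periodic correlation of two codes at every cyclic shift."""
        return np.array([np.dot(a, np.roll(b, s)) for s in range(len(a))])

    print(np.dot(H[3], H[7]))               # zero-shift cross-correlation: 0
    print(int(np.abs(xcorr(H[3], H[7])).max()))  # large off-peak values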

Department of Electronics & Communication Engineering, Guru Jambheshwar University of Science & Technology, Hisar (Haryana), India.
Deepak Kedia
Manoj Duhan
Dean (Academics & Research), Jaypee Institute of Information Technology University, Noida (U.P.), India.
S. L. Maskara

Most students in India choose their undergraduate major solely on the basis of persisting trends in society. Due to the lack of a holistic guidance system, students often end up making choices on this basis alone, which may eventually fail to align with the student's actual interest and inherent aptitude for a particular major. In this paper we propose an expert system, SAES, which aims to provide intelligent advice to students as to which major they should opt for. SAES acquires knowledge of academic performance as well as the explicit and implicit interests of the candidate. Knowledge representation in SAES is done using a combination of case-based and rule-based reasoning. SAES draws inferences on the basis of the acquired knowledge, also taking into account the degree of dilemma faced by the candidate and the time he/she takes to decide the interest areas. SAES then recommends the most suitable majors for each candidate, classified as strong, mild or weak on the basis of calculated relative probabilities of success. Finally, we analyze the results of tests conducted on a working prototype of SAES.

Student, Computer Science and Engineering Department, Thapar University, Patiala, India.
Sourabh Deorah (sourabhdeorah@gmail.com)
Srivatsan Sridharan (srivastsan.genius@gmail.com)
Senior Lecturer, Computer Science and Engineering Department, Thapar University, Patiala, India.
Shivani Goel (shivani@thapar.edu)

Reader, Department of Information Technology, UIT RGPV, Bhopal, India.
Asmita A. Moghe (aamoghe@rgtu.net)
Assistant Professor, Department of Electronics & Communication, MANIT, Bhopal, India.
Jyoti Singhai (_singhai@rediffmail.com)
Professor, Department of Electronics & Communication, MANIT, Bhopal, India.
S. C. Shrivastava (scs_manit@yahoo.com)

Data Mining (DM) is the process of automated extraction of interesting data patterns, representing knowledge, from large data sets. Frequent itemsets are itemsets that appear in a data set frequently. Finding such frequent itemsets plays an essential role in mining associations, correlations and many other interesting relationships among itemsets in a transactional database. In this paper an algorithm, SAR (Strong Association Rule), is designed and implemented to check whether an Association Rule (AR) is strong enough. The Apriori algorithm is also implemented to generate frequent k-itemsets. A binary transactional dataset is used for implementing the algorithm in Java.
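
The test SAR performs can be stated in a few lines: a rule A => B is strong when both its support and its confidence meet user-specified thresholds. A minimal sketch (toy transactions and thresholds are ours; the paper's implementation is in Java):

    def support(itemset, transactions):
        """Fraction of transactions containing every item of the itemset."""
        return sum(itemset <= t for t in transactions) / len(transactions)

    def is_strong(A, B, transactions, min_sup=0.3, min_conf=0.6):
        sup_ab = support(A | B, transactions)
        conf = sup_ab / max(support(A, transactions), 1e-9)
        return sup_ab >= min_sup and conf >= min_conf

    T = [{'milk', 'bread'}, {'milk', 'bread', 'eggs'}, {'bread'}, {'milk', 'eggs'}]
    print(is_strong({'milk'}, {'bread'}, T))   # True: support 0.5, confidence 0.67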

M. M. Institute of Comp. Tech. and Business Management, Maharishi Markandeshwar University, Mullana-133203, Haryana, India.
G. S. Bhamra (bhamra.gs@gmail.com)
Department of Computer Science & Engineering, TIET, Thapar University, Patiala-147004, Punjab, India.
A. K. Verma (akverma@tiet.ac.in)
Department of Computer Engineering MMEC, Maharishi Markandeshwar University, Mullana-133203, Haryana, India.
R. B. Patel (patel_r_b@indiatimes.com)

In this paper, we study a fundamental property of ad hoc networks using the connectivity index. We investigate the construction of minimum cost multicast trees by selecting links with minimum connectivity index and comparing the application's required bandwidth with each link's available bandwidth minus its allocated bandwidth. We show that an increase in the total connectivity index of the entire network increases the number of spanning trees. Due to the lack of redundancy in multi-path and multicast structures, multicast routing protocols are vulnerable to failures in ad hoc networks, so a fault-tolerant solution is sorely needed. This paper proposes edge-disjoint spanning tree multicasting based on the connectivity index with a bandwidth constraint.

Department of MCA, Sir MVIT, Bangalore and Department of CS, School of Science & Technology, Dravidian University, Kuppam - Andhra Pradesh, India.
B. R. Arun Kumar (aksresearchcentre@gmail.com)
Department of CS, School of Science & Technology, Dravidian University, Kuppam - Andhra Pradesh, India.
Lokanath C. Reddy
Department of PG studies and Research Group, Gulbarga University, Gulbarga- Karnataka, India.
Prakash S. Hiremath

A scanned text image is a non-editable image: although it contains text, one cannot edit it or make any changes to the scanned document. This provides the basis for optical character recognition (OCR). OCR is the process of recognizing a segmented part of a scanned image as a character. The overall OCR process consists of three major sub-processes: preprocessing, segmentation and recognition. Of these, segmentation is the backbone of the overall OCR process: if the segmentation is incorrect, the results cannot be correct; it is a case of garbage in, garbage out. It is not an easy job, as segmentation is a complex process, and it is more difficult for handwritten documents, where only a few cues are available for segmentation. In this paper, we formulate an approach to segment a scanned document image. The approach initially considers the whole image as one large window. This large window is then broken into smaller windows giving lines; once the lines are identified, each window containing a line is used to find the words present in that line, and finally the characters. For this purpose we use the concept of a variable-sized window, that is, a window whose size can be adjusted according to need. The concept was implemented and the results analyzed; after this analysis the concept was refined and tried on different documents, with reasonably good results.
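
The variable-sized-window idea can be sketched with projection profiles: rows containing ink delimit line windows, and within each line, sufficiently long empty column runs delimit word and character windows. The sketch below is our illustration (the `min_gap` values are arbitrary) and assumes a binary NumPy-style image with 1 for ink.

    def windows(profile, min_gap=1):
        """Split indices where the profile stays zero for at least min_gap cells."""
        spans, start, gap = [], None, 0
        for i, v in enumerate(profile):
            if v:
                if start is None:
                    start = i          # a new window of ink begins
                gap = 0
            elif start is not None:
                gap += 1
                if gap >= min_gap:     # gap long enough: close the window
                    spans.append((start, i - gap + 1))
                    start, gap = None, 0
        if start is not None:
            spans.append((start, len(profile)))
        return spans

    def segment(img):                  # img: 2-D binary array, 1 = ink pixel
        for top, bot in windows(img.sum(axis=1)):             # line windows
            line = img[top:bot]
            for left, right in windows(line.sum(axis=0), min_gap=3):
                yield top, bot, left, right                   # word/char window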

SMCA, Thapar University, Patiala, Punjab, India.
Rajiv Kumar (rajiv.patiala@gmail.com)
UCoE, Punjabi University, Patiala, Punjab, India.
Amardeep Singh (amardeep_dhiman@yahoo.com)

Requiring all the nodes in a large-scale wireless sensor network to communicate their data to their respective destinations will deplete the energy of the nodes quickly, due to the long-distance and multi-hop nature of the communication, and will also result in network contention. Therefore, to increase longevity and support scalability, nodes are often grouped into disjoint and mostly non-overlapping clusters. Clustering saves energy and reduces network contention by enabling locality of communication: nodes communicate their data over shorter distances to their respective cluster heads, which aggregate the data into a smaller set of meaningful information. Only the cluster heads, not all nodes, need to communicate over long distances to their respective destinations. In this paper, we propose a distributed clustering approach for our proposed real-time data placement model for WSNs. It is assumed that the sensor nodes are aware of their locations in the deployment area and are time synchronized. For data dissemination and action in the wireless sensor network, the use of Action and Relay Stations (ARS) is proposed.

Doeacc Society, Autonomous body of Department of Information Technology, Government of India, Chandigarh, India.
Sanjeev Gupta (Sanju_anita@yahoo.com)
Department of Computer Engineering, National Institute of Technology, Kurukshetra, India.
Mayank Dave (mdave@nitkkr.ac.in)

Autonomic computing is the practice of reducing the complexity involved in integrated systems across an enterprise; the term is associated with a system capable of operating and managing itself [1]. A business establishment is composed of a variety of legacy and contemporary systems, and the operation and maintenance of such systems can be eased if we integrate them into a cohesive whole. An enterprise management infrastructure is a copybook example of a system where autonomic computing can prove vital. Such a system has the capability to take action and provide relevant information to people in various roles in an office, employing the business rules stored in it and the business intelligence capabilities of the subsystems involved. An autonomic enterprise management system will thus carry out the management of various functions related to the enterprise and its employees, with information presented through rich client interfaces as and when necessary. The paper suggests a model for enterprise management which has been implemented using autonomic computing principles.

Education and Research Department, Infosys Technologies Limited, Chandigarh, India.
Manoj Manuja (Manoj_manuja@infosys.com)
Rajender Kalra (Rajender_kalra01@infosys.com)

XML (eXtensible Markup Language) has been adopted by a number of software vendors; it has become the standard for data interchange over the web and is platform and application independent. An XML document consists of a number of attributes such as document data, structure and style sheet. Clustering is the method of creating groups of similar objects. In this paper a weighted similarity measurement approach for detecting the similarity between homogeneous XML documents is suggested, and using this similarity measurement a new clustering technique is also proposed. Methods for calculating the similarity of documents' structure and styling have been given by a number of researchers, mostly based on tree edit distances, and for calculating the distance between documents' contents there are several text similarity techniques such as cosine, Jaccard and tf-idf. In this paper both kinds of similarity are combined to propose a new distance measurement technique for calculating the distance between a pair of homogeneous XML documents. The proposed clustering model is implemented using open source Java technology and is validated experimentally: given a collection of XML documents, the distances between documents are calculated and stored in Java collections, and these distances are then used to cluster the XML documents.
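
A minimal sketch of such a weighted distance (the structural part here is a crude tag-set Jaccard stand-in rather than a true tree edit distance, and the weight is arbitrary; the paper's implementation is in Java):

    import math
    from collections import Counter, namedtuple

    Doc = namedtuple('Doc', 'tags text')

    def cosine_dist(a, b):
        """1 - cosine similarity of bag-of-words term vectors."""
        va, vb = Counter(a.split()), Counter(b.split())
        dot = sum(va[w] * vb[w] for w in va)
        norm = math.sqrt(sum(v * v for v in va.values())) * \
               math.sqrt(sum(v * v for v in vb.values()))
        return 1 - dot / norm if norm else 1.0

    def struct_dist(tags_a, tags_b):   # crude stand-in for tree-edit distance
        a, b = set(tags_a), set(tags_b)
        return 1 - len(a & b) / len(a | b) if (a | b) else 0.0

    def xml_dist(d1, d2, w=0.5):       # weighted structure + content distance
        return w * struct_dist(d1.tags, d2.tags) + \
               (1 - w) * cosine_dist(d1.text, d2.text)

    d1 = Doc(['book', 'title', 'author'], 'xml data clustering')
    d2 = Doc(['book', 'title', 'year'], 'xml document clustering')
    print(round(xml_dist(d1, d2), 3))  # 0.417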

Department of CS&E, NIT, Raipur, India.
Naresh Kumar Nagwani (nknagwani.cs@nitrr.ac.in)
Department of IT, OPJIT, Raigarh, India.
Ashok Bhansali (bhansali00@gmail.com)

Software bug estimation is an essential activity for effective software project planning. All software bug related data are kept in software bug repositories. Software bug (defect) repositories contain a lot of useful information related to the development of a project, and data mining techniques can be applied to these repositories to discover useful and interesting patterns. In this paper a predictive data mining technique is proposed to predict the software bug estimate from a software bug repository. A two-step prediction model is proposed. In the first step, the summary and description of the bug for which an estimate is required are matched against the summaries and descriptions of the bugs available in the repository; a weighted similarity model is suggested for matching the summary and description of a pair of software bugs. In the second step, the fix durations of all the similar bugs are calculated and their average is taken, which gives the predicted estimate for the bug. The proposed model is implemented using open source technologies and is explained with the help of an illustrative example.
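
The two steps can be sketched as follows; the similarity function (difflib ratio), the weight favouring summaries and the top-k cutoff are our assumptions, not the paper's exact model.

    import difflib

    def sim(a, b):
        """Crude text similarity in [0, 1]."""
        return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def estimate(new_bug, repo, w_sum=0.7, k=2):
        """Average fix duration of the k most similar repository bugs."""
        def score(old):
            return (w_sum * sim(new_bug['summary'], old['summary']) +
                    (1 - w_sum) * sim(new_bug['description'], old['description']))
        top = sorted(repo, key=score, reverse=True)[:k]
        return sum(b['fix_days'] for b in top) / len(top)

    repo = [{'summary': 'crash on save', 'description': 'NPE in editor', 'fix_days': 4},
            {'summary': 'crash on load', 'description': 'NPE in parser', 'fix_days': 6},
            {'summary': 'slow search', 'description': 'index not used', 'fix_days': 10}]
    new = {'summary': 'crash on save as', 'description': 'NPE in editor menu'}
    print(estimate(new, repo))   # averages the two similar crash bugs -> 5.0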

Department of CS&E, NIT, Raipur, India.
Naresh Kumar Nagwani (nknagwani.cs@nitrr.ac.in)
Department of Information Technology, NIT Raipur, India.
Shrish Verma (shrishverma@nitrr.ac.in)

Clustering is the process of classifying objects into different groups by partitioning a data set into a series of subsets called clusters. Clustering has its roots in algorithms such as k-means and k-medoids. However, the conventional k-medoids clustering algorithm suffers from several limitations. Firstly, it needs prior knowledge of the number-of-clusters parameter k. Secondly, it initially makes a random selection of k representative objects, and if these initial k medoids are not selected properly, the natural clusters may not be obtained. Thirdly, it is sensitive to the order of the input dataset. The first limitation is removed by using a cluster validity index. Aiming at the second and third limitations of conventional k-medoids, we propose an improved k-medoids algorithm: instead of a random selection of the initial k objects as medoids, we propose a new technique for initial representative object selection based on the density of objects. We find sets of objects that are densely populated and choose a medoid from each such set; the k data objects selected as initial medoids are then used in the clustering process. The validity of the proposed algorithm is demonstrated using the iris and diet structure datasets by finding the natural clusters in these datasets.
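
Our reading of the density-based initialization, as a sketch (the radius and the seed-spreading rule are assumptions): pick as initial medoids the k densest objects that are mutually far apart, then run ordinary k-medoids from them.

    import numpy as np

    def density_medoids(X, k, radius):
        """Return indices of k density-selected initial medoids."""
        D = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # pairwise distances
        density = (D < radius).sum(axis=1)                   # neighbours in radius
        medoids = []
        for i in np.argsort(-density):                       # densest first
            if all(D[i, m] > radius for m in medoids):       # keep seeds apart
                medoids.append(i)
            if len(medoids) == k:
                break
        return medoids

    X = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]], float)
    print(density_medoids(X, k=2, radius=2.0))   # one seed per dense group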

Department of Electronics and Computer Engineering, Indian Institute of Technology Roorkee, Roorkee-247667, India.
Bharat Pardeshi (bhaarat001@gmail.com)
Durga Toshniwal (durgafec@iitr.ernet.in)

Implementing web data extraction means that we can directly extract data from various web pages, which are mostly in an unstructured HTML format, into a new structured format such as XML or XHTML. In this paper we review the implementation of web data extraction and the stages in making a Mashup. We implement web data extraction by visually extracting targeted data from data sources (web pages). Afterward, we combine web data extraction with the stages of making a Mashup: data retrieval, data source modeling, data cleaning/filtering, data integration and data visualization. Problems arise in querying data sources due to the unstructured content of web pages (HTML); we cannot directly extract data into a new structured form. To address this problem, we propose a system, called Xtractorz, that can perform web data extraction in a Mashup format. We provide a fully visual and interactive user interface with a new technique and approach, using PHP and AJAX as the programming languages and MySQL as the data repository. Furthermore, Xtractorz enables users to do their job without writing a script or program, and even without any knowledge of computer programming. The test results show that Xtractorz requires fewer steps to make a Mashup than RoboMaker and Karma.

Department of Electrical Engineering, University of Indonesia, Depok 16424, Indonesia.
Rudy AG. Gultom (rudy.agus81@ui.ac.id)
Riri Fitri Sari (riri@ui.ac.id)
Bagio Budiardjo (bbudi@eng.ui.ac.id)

Data Warehouses (DWs) and On-Line Analytical Processing (OLAP) systems rely on a multidimensional model that includes dimensions and measures. Such a model allows users' requirements to be expressed in support of the decision-making process. Spatially related data has been used for a long time; however, spatial dimensions have not been fully exploited. To exploit the full potential of spatial and temporal data for analysis, spatial dimensions are a necessity when building a data warehouse. It has been observed that OLAP possesses a certain potential to support spatio-temporal analysis; however, without a spatial framework for viewing and manipulating the geometric component of the spatial data, the analysis remains incomplete. This paper presents a multidimensional design framework adapted for effective spatio-temporal exploration and analysis. This includes an extension of a conceptual model with spatial dimensions to enable spatial analysis. The proposed design framework addresses the problem of spatial and temporal data integration by providing information to facilitate data analysis in a Spatial Data Warehouse (SDW) that uniformly handles all types of data.

Assistant Professor, School of Computer Engineering, KIIT University, Bhubaneswar, India.
Animesh Tripathy
Research Associate, School of Computer Engineering, KIIT University, Bhubaneswar, India.
Lizashree Mishra
Professor, Department of Computer Science & Engineering, CET, Bhubaneswar, India.
Prashanta Kumar Patra

Privacy has in recent times become something akin to an oxymoron: it can be either enhanced or marred by technology, and it is receiving increasing attention in many data mining applications. We focus on information safety measures that preserve individual privacy, so that no personal information can be gained from the data by an attacker. Under the modern state of technological development, which has eroded the distinction between data kept in private and public domains, our expertise in protecting individual privacy is inadequate. With today's data strewn globally, records are incremented from various sources, which poses an even greater challenge.

In this paper we propose a new technique, a cabalistic fortuity strategize based approach, for incremental data stream based PPDM. Our technique optimizes the privacy level by hardening the re-identification of original data without compromising processing speed or data utility; it thus addresses the re-identification predicament found in conventional random projections. Here, encryption-based random projection assigns secret keys to the positions of random matrix elements rather than to the random numbers themselves (i.e., to where the random matrix will hold the random numbers). We handle two kinds of random sequences, deterministic and indeterministic, and encrypt them in a new way. We also propose a projection-based sketch for incremental data streams. We expect the proposed solution to pave the way for further investigation and to perform well according to evaluation metrics including hiding effects, data utility and time performance.
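
The key-seeded random projection underlying such approaches can be sketched as below; note that the paper's additional layer, encrypting the positions of the random matrix elements, is omitted here since its details are not given in the abstract.

    import numpy as np

    def project(X, k, secret_key):
        """Publishable low-dimensional view of X; R is regenerable from the key."""
        rng = np.random.default_rng(secret_key)        # key-determined randomness
        R = rng.standard_normal((X.shape[1], k)) / np.sqrt(k)
        return X @ R      # pairwise distances approximately preserved

    X = np.random.rand(100, 50)        # 100 records, 50 private attributes
    Y = project(X, k=20, secret_key=42)  # released data; hard to invert without R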

School of Information Technology and Engineering, V.I.T, Tamilnadu, India.
J. Gitanjali (gitanjalij@vit.ac.in)
Department of Computer Science and Engineering, Anna University, Chennai - 600 025, Tamilnadu, India.
J. Indumathi (indumathi@annauniv.edu)
School of Computer Science and Engineering, V.I.T, Tamilnadu, India.
N. Ch. Sriman Narayana Iyengar (nchsniyr@vit.ac.in)

The most time-consuming operation in Apriori-like algorithms for association rule mining is computing the frequency of occurrence of itemsets (called candidates) in the database. In this paper, a fast algorithm is proposed for generating frequent itemsets without generating candidate itemsets, and for generating association rules with multiple consequents. The proposed algorithm uses Boolean vectors with the relational AND operation to discover frequent itemsets. Experimental results show that combining Boolean vectors with the relational AND operation leads to quicker discovery of frequent itemsets and association rules than the general Apriori algorithm.
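
The core trick can be shown in miniature: with transactions stored as Boolean vectors, the support count of an itemset is simply the popcount of the AND of its item columns, so no candidate tuples need to be materialized. The toy matrix below is our illustration.

    import numpy as np

    # rows = transactions, columns = items A, B, C
    M = np.array([[1, 1, 0],
                  [1, 1, 1],
                  [0, 1, 1],
                  [1, 0, 1]], dtype=bool)

    def freq(cols):
        """Support count of the itemset given by column indices."""
        v = M[:, cols[0]]
        for c in cols[1:]:
            v = v & M[:, c]          # relational AND of Boolean vectors
        return int(v.sum())

    print(freq([0, 1]))              # support count of {A, B} -> 2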

Department of Computer Science Engineering, Sikkim Manipal Institute of Technology, East Sikkim, India.
M. Anandhavalli (anandhigautham@yahoo.com)
Sandip Jain (sunny20053@rediffmail.com)
Abhirup Chakraborti (achakraborti88@gmail.com)
Nayanjyoti Roy (nayanroy@gmail.com)
M. K. Ghos

Recently, Negative Association Rule Mining (NARM) has become a focus in the field of spatial data mining. Negative association rules are useful in data analysis for identifying objects that conflict with or complement each other. Much effort has been devoted to developing algorithms for efficiently discovering relations between objects in space. All the traditional association rule mining algorithms were developed to find positive associations between objects; by positive associations we refer to associations between frequently co-occurring objects in space, such as a city always being located near a river. Recently the problem of identifying negative associations (or "dissociations"), that is, the absence of objects, has been explored and found relevant. This paper presents an improved design approach for mining both positive and negative association rules in spatial databases. The approach extends traditional association rules to include negative association rules using a minimum support count. Experimental results show that this approach is efficient on simple and sparse datasets when the minimum support is reasonably high, and that it overcomes some limitations of previous mining methods. The proposed form will extend related applications of negative association rules to a greater extent.
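
The basic quantity involved is easy to state: the support of a negative pattern A => ¬B is the fraction of transactions containing A but not B. A minimal sketch with hypothetical spatial "transactions":

    def neg_support(A, B, transactions):
        """Support of the negative pattern A => not-B."""
        return sum((A <= t) and not (B <= t) for t in transactions) / len(transactions)

    T = [{'city', 'river'}, {'city', 'river'}, {'city'}, {'desert'}]
    print(neg_support({'city'}, {'river'}, T))   # 0.25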

Assistant Professor, School of Computer Engineering, KIIT University, Bhubaneswar, India.
Animesh Tripathy
Research Associate, School of Computer Engineering, KIIT University, Bhubaneswar, India.
Subhalaxmi Das
Professor, Department of Computer Science & Engineering, CET, Bhubaneswar, India.
Prashanta Kumar Patra

Curbing schedule slippage is a daunting task in the software industry. The problem arises mainly from poor analysis of risk factors and their management. This paper aims to handle this predicament with the help of an Influence Diagram (ID). The three main risk factors that adversely affect schedule are creeping user requirements, requirement instability and the inclusion of unnecessary features in the project. Integrating their impacts with experts' opinion and with existing databases (containing the probabilities of occurrence of the risk factors) allowed us to create an ID-based system capable of modeling schedule slippage. The system can be used by a software manager at any stage of software development.

Department of Computer Science, DAV College, Jalandhar, India.
Kawal Jeet (kawaljeet80@yahoo.com)
Vijay Kumar Mago (vijay.mago@gmail.com)
Department of Computer Science & Engineering, NIT, Jalandhar, India.
Renu Dhir (dhirr@nitj.ac.in)
Department of Computer Science, MLUDAV College, Phagwara, India.
Rajinder Singh Minhas (minhas_rajinder@yahoo.com)

Radio spectrum is a limited resource in wireless mobile communication systems. A cellular system has to serve the maximum possible number of calls with a limited number of channels, so the problem of determining an optimal allocation of channels to mobile users that minimizes call-blocking and call-dropping probabilities is of paramount importance. This paper proposes a hybrid channel allocation model using an evolutionary strategy with an allocation distance to make efficient use of the frequency spectrum.

Department of CSE, Bharath University, Chennai, India.
S. Phani Kumar (phanikumar.s@gmail.com)
Department of CS&SE, Andhra University College of Engineering, Visakhapatnam, India.
P. Seetha Ramaiah
Department of IT, Bharath University, Chennai, India.
V. Khanaa

Embedded systems can be engineered using the Cleanroom Software Engineering (CRSE) methodology, as it considers all quality issues an integral part of the CRSE life cycle model and lays stress on reducing the size and effort of testing through statistical usage testing. Both CRSE and embedded systems development methodologies are based on stimulus-response models, which are used for designing the external behavioral requirements. Thus, CRSE in a revised form can conveniently be used for the development of reliable embedded systems.

Verification and validation of one model against another, such as verifying the external behavior models (black box structures) against the requirement specifications and vice versa, are the most important built-in features of CRSE. The verification and validation methods described in the literature are manual procedures based on either intuition or experience.

CRSE suffers from a lack of formal frameworks for verifying box structures against the requirements specification. In this paper, a framework is proposed for verifying black box structures that are derived using the end-to-end processing requirements of embedded systems. The verification mechanism is built around generating stimulus-response sequences in two different ways and proving that the sequences generated are the same; the mechanism thus ensures that the system has been designed properly.

Department of Computer Science and Engineering, K. L. University, Guntur district, A.P., India.
J. K. R. Sastry (drsastry@klce.ac.in)
V. Chandra Prakash (vchandrap@rediffmail.com)

Accurate, precise and reliable estimates of effort at early stages of project development hold great significance for the industry in meeting today's competitive demands. The inherent imprecision in the inputs of algorithmic models such as the Constructive Cost Model (COCOMO) yields imprecision in the output, resulting in erroneous effort estimates. Software development is characterized by parameters that possess a certain level of fuzziness, which requires that some degree of uncertainty be introduced into the models in order to make them realistic. Fuzzy logic based cost estimation models enable linguistic representation of a model's input and output to address the vagueness and imprecision of the inputs and so make reliable and accurate effort estimates. In this paper, we present an enhanced fuzzy logic based framework for software development effort prediction. The proposed study extends intermediate COCOMO by incorporating the concept of fuzziness into the measurements of size, the mode of development of projects and the cost drivers contributing to the overall development effort. The framework tolerates imprecision, incorporates experts' knowledge, explains its prediction rationale through rules, offers transparency in the prediction system, and can adapt to changing environments as new data becomes available.
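
As a sketch of what fuzzifying one input might look like (the paper's actual membership functions and rule base are not reproduced), size can be treated as a triangular fuzzy number and the intermediate COCOMO equation E = a * KLOC^b * EAF evaluated over alpha-cuts; the coefficients below are the standard intermediate COCOMO values for organic-mode projects, and the midpoint defuzzification is our simplification.

    a, b, EAF = 3.2, 1.05, 1.0          # intermediate COCOMO, organic mode

    def effort(kloc):
        """Crisp effort in person-months."""
        return a * kloc ** b * EAF

    def fuzzy_effort(lo, peak, hi, alphas=(0.0, 0.5, 1.0)):
        """Evaluate effort on alpha-cuts of a triangular size and average."""
        cuts = [((1 - al) * lo + al * peak, (1 - al) * hi + al * peak)
                for al in alphas]
        ests = [(effort(left) + effort(right)) / 2 for left, right in cuts]
        return sum(ests) / len(ests)

    print(fuzzy_effort(8, 10, 14))      # person-months for "about 10 KLOC"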

Department of Computer Science and Engineering, Dr. B. R. Ambedkar National Institute of Technology, Jalandhar, India.
Harsh Kumar Verma (vermah@nitj.ac.in)
Vishal Sharma (vishals_1977@yahoo.com)

Patterns of software architecture help in describing the structural and functional properties of a system in terms of smaller components. The emphasis of this work is on capturing pattern descriptions and the properties of inter-component interactions, including non-deterministic behavior. Through these descriptions we capture structural and behavioral specifications as well as the properties against which the specifications are verified. The patterns covered in this paper are variants of the Proxy, Chain, MVC, Acceptor-Connector, Publisher-Subscriber and Dining Philosophers patterns. While the machines are CCS-based, the properties are described in the modal μ-calculus. The approach serves as a framework for precise architectural descriptions.

Department of Computer Science and Engineering, Indian Institute of Technology Bombay, Powai, Mumbai 400076, India.
Dharmendra K. Yadav (dharmendra@cse.iitb.ac.in)
Rushikesh K. Joshi (rkj@cse.iitb.ac.in)

Chaos throughout the development of requirements evolves from the disparity between users and developers, resulting in project devastation and termination. Business and product requirements often change as development proceeds, making a straight-line path to requirements engineering impracticable. Considering the adaptable nature of requirements and the multi-dimensional concerns of stakeholders, the proposed spiral model based framework bridges the gap between users and developers by extending an agent-oriented approach to requirements engineering. Requirements encapsulated in the form of User Story Cards signify the user-oriented view of requirements, whilst Agent Cards facilitate developers in observing the requirements of a system in terms of software agents. A negotiation process obtains an integrated view of all stakeholders over conflicting requirements, leading to a correct, prioritized and comprehensive list of requirements.

Department of Computer Science, University of Delhi, Delhi, India.
Vibha Gaur (3.vibha@gmail.com)
Anuja Soni (30.anuja@gmail.com)
Punam Bedi (pbedi@du.ac.in)