Clonal Selection Algorithm Based FLANN Model for Recognition of Odia Vowels

Pushpalata Pujari and Babita Majhi
Department of CSIT, Guru Ghasidas Vishwavidyalaya, Bilaspur, India
e-mail: [email protected], [email protected]

Abstract— Recognition of handwritten characters is still a challenging problem due to the many variations found in writing styles. This paper aims to develop a robust model for the recognition of handwritten characters using a functional link artificial neural network (FLANN) as the classifier.

The weights of the FLANN classifier are further optimized using the clonal selection algorithm (CSA), which is inspired by the clonal selection theory of acquired immunity. The discrete wavelet transform (DWT) is used to extract features from the handwritten characters, and principal component analysis (PCA) is used to further reduce the dimensionality of the features. The proposed model is applied to an Odia dataset containing 1200 samples of handwritten Odia vowels, and a recognition accuracy of 85.75% is achieved on the test dataset.

Keywords— Handwritten character recognition; Discrete wavelet transform (DWT); Functional link artificial neural network (FLANN); Clonal selection algorithm (CSA)

I. Introduction

A hybrid system is the integration of two or more techniques to solve a problem. The objective of a hybrid system is to provide a better solution by combining techniques so that the weakness of one is compensated by the strength of another. Several research works using hybrid techniques for the recognition of characters have been reported in the literature. Most of the reported works have used multilayer neural networks or SVMs as the classifier. However, the complexity of a multilayer neural network increases as the number of input features grows. Hence, in this paper a functional link artificial neural network (FLANN), having a single layer and a single neuron, is proposed for the recognition of handwritten Odia vowels. FLANN has been successfully applied in many applications such as channel equalization [25], [26], stock market prediction [27], [28], detection of impulse noise in images [29] and classification of microarray data [30].
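As a concrete illustration, the sketch below pairs a trigonometric functional expansion (one common FLANN basis) with a toy clonal-selection loop searching the weight space, echoing the single-layer, single-neuron classifier and the CSA optimization named in the title. All function names, the expansion order, population size, mutation scales and the two-pattern toy data are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def trig_expansion(x, order=2):
    """Trigonometric functional expansion used in FLANN: each feature x_i
    is expanded to [x_i, sin(k*pi*x_i), cos(k*pi*x_i)] for k = 1..order."""
    terms = [x]
    for k in range(1, order + 1):
        terms.append(np.sin(k * np.pi * x))
        terms.append(np.cos(k * np.pi * x))
    return np.concatenate(terms)

def flann_out(x, w):
    """Single-layer, single-neuron FLANN output on the expanded features."""
    return np.tanh(trig_expansion(x) @ w)

def csa_optimize(err, dim, pop=20, gens=100, clones=5, seed=0):
    """Toy clonal-selection loop: antibodies are weight vectors, affinity is
    negative error; every antibody is cloned and mutated, with weaker
    (higher-rank) antibodies mutated more strongly. Parameters are hypothetical."""
    rng = np.random.default_rng(seed)
    abs_ = rng.normal(size=(pop, dim))
    for _ in range(gens):
        abs_ = abs_[np.argsort([err(a) for a in abs_])]      # best (lowest error) first
        nxt = []
        for rank, ab in enumerate(abs_):
            sigma = 0.05 + 0.25 * rank / pop                 # rank-proportional mutation
            cand = np.vstack([ab, ab + rng.normal(scale=sigma, size=(clones, dim))])
            nxt.append(cand[np.argmin([err(c) for c in cand])])
        abs_ = np.array(nxt)
    return abs_[np.argmin([err(a) for a in abs_])]

# Toy usage: fit the FLANN to map two 4-feature patterns to targets +/-0.5
X = [np.array([0.1, -0.3, 0.5, 0.9]), np.array([-0.7, 0.2, -0.1, 0.4])]
t = [0.5, -0.5]
dim = trig_expansion(X[0]).size                              # 4 * (1 + 2*2) = 20
mse = lambda w: sum((flann_out(x, w) - y) ** 2 for x, y in zip(X, t)) / len(X)
w = csa_optimize(mse, dim)
```

Because selection here is elitist (the parent survives unless a clone beats it), the error is non-increasing per antibody, which is what lets CSA sidestep the local-minima sensitivity of pure gradient (delta-rule) training.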

Generally, FLANN has its own update rule, known as delta learning. To reduce the chance of getting trapped in local minima, the weights of the FLANN are instead optimized using the clonal selection algorithm (CSA). The wavelet transform has been applied in applications such as tumour tissue identification [19], character recognition [20], [24], license plate localization [21], [22] and analysis of protein sequences [23]. Motivated by these results, the features of the Odia vowels are extracted using the wavelet transform in this paper. The paper is organized as follows: Section I presents the introduction and related work on hybrid models for character recognition.

Section II discusses the dataset, preprocessing, feature extraction and feature reduction steps. Section III describes the recognition phase. Section IV presents the simulation study and experimental results. The conclusion and future scope of the research are discussed in Sections V and VI.

II. Dataset, Preprocessing and Feature Extraction

Dataset

The dataset used in this paper is taken from the Computer Vision and Pattern Recognition Centre of NIT Rourkela. The database contains 1200 samples of handwritten Odia vowels. All the samples in the database belong to twelve classes (1-12).

Each vowel (1-12) appears 120 times in the database. 80% of the dataset is used for training and the rest for testing. A few samples of the Odia vowels are shown in Fig. 1.

Fig. 1. Samples of handwritten Odia vowels

Preprocessing

Preprocessing is a series of operations performed on the input image, including background noise reduction, filtering and restoration of the original image. This step is carried out to improve the quality of the image before the other character recognition steps are applied.

In this paper, each image is first normalized to a standard size of 64×64 pixels. A grayscale image is then generated using the mean filtering method. The normalized images of the vowels are shown in Fig. 2.

Fig. 2. Images of the Odia vowels after normalization

Feature Extraction Using the Discrete Wavelet Transform

Feature extraction is carried out to find important features to be used in the recognition phase. In this paper a DWT-based approach is used for feature extraction.
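The normalization and smoothing steps can be sketched with plain NumPy. A nearest-neighbour resize stands in for whatever interpolation the authors used, and a 3×3 mean filter is one simple realization of "mean filtering"; both choices are assumptions for illustration.

```python
import numpy as np

def resize_nearest(img, size=(64, 64)):
    """Nearest-neighbour resize to the fixed 64x64 grid (size normalization).
    (Assumed interpolation; the paper does not specify the method.)"""
    h, w = img.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return img[rows[:, None], cols]

def mean_filter3(img):
    """3x3 mean filter: each pixel becomes the average of its neighbourhood,
    a basic form of background-noise smoothing."""
    padded = np.pad(img.astype(float), 1, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out += padded[1 + dr : 1 + dr + img.shape[0],
                          1 + dc : 1 + dc + img.shape[1]]
    return out / 9.0

# Example: a random "scanned character" of arbitrary size -> 64x64 smoothed image
sample = np.random.default_rng(0).integers(0, 256, size=(90, 70))
norm = mean_filter3(resize_nearest(sample))
print(norm.shape)  # (64, 64)
```

The same two calls applied to every sample yield a uniformly sized, denoised input for the DWT stage that follows.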

In the DWT, a time-scale representation of a digital signal is obtained using digital filtering techniques [31]-[34]. Due to the good energy compaction property of the wavelet transform, it provides a substantial improvement in picture quality at high compression ratios. To obtain the 2-D DWT decomposition, a 1-D wavelet transform is first applied along the rows of the image; the results are then decomposed along the columns, splitting the input image into four sub-band images, denoted the LL, LH, HL and HH frequency bands. The 2-D DWT thus decomposes the image into two parts: an approximation part and a detail part.

The approximation part contains the low-frequency sub-band LL, and the detail part contains the three high-frequency sub-bands LH, HL and HH. The process is repeated on the LL sub-band to obtain a multi-scale wavelet decomposition. Any given signal can be decomposed by the DWT into a set of basis functions called wavelets, which are realized from a single prototype (mother) wavelet ψ(t) by dilations and shifts. The 2-D DWT requires a two-dimensional scaling function φ(u, v) and three two-dimensional wavelet functions ψ^H(u, v), ψ^V(u, v) and ψ^D(u, v).

They are separable, i.e. products of two one-dimensional functions:

φ(u, v) = φ(u)φ(v)
ψ^H(u, v) = ψ(u)φ(v)
ψ^V(u, v) = φ(u)ψ(v)
ψ^D(u, v) = ψ(u)ψ(v)        (1)

where φ(·) is a one-dimensional scaling function and ψ(·) is a one-dimensional wavelet function. These wavelets measure intensity variations along different directions: ψ^H measures variations along columns, ψ^V measures variations along rows and ψ^D measures variations along diagonals. The two-dimensional discrete scaled and translated basis functions are defined as

φ_{j,m,n}(u, v) = 2^{j/2} φ(2^j u − m, 2^j v − n)        (2)
ψ^i_{j,m,n}(u, v) = 2^{j/2} ψ^i(2^j u − m, 2^j v − n),  i ∈ {H, V, D}        (3)

where j is the scale and m, n are the translation quantities. The approximation coefficients of an image f(u, v) of size M × N are expressed as

W_φ(j₀, m, n) = (1/√(MN)) Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} f(u, v) φ_{j₀,m,n}(u, v)        (4)
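Using the Haar wavelet as a stand-in (this excerpt does not fix the mother wavelet), the row-then-column filtering described above can be sketched for one decomposition level as:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT: low/high-pass filter along rows, then
    along columns, producing the LL (approximation) and LH, HL, HH (detail)
    sub-bands, each half the size of the input. Haar is an assumed choice."""
    a = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)   # row low-pass
    d = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)   # row high-pass
    LL = (a[0::2, :] + a[1::2, :]) / np.sqrt(2)      # low-low: approximation
    LH = (a[0::2, :] - a[1::2, :]) / np.sqrt(2)      # variations along columns
    HL = (d[0::2, :] + d[1::2, :]) / np.sqrt(2)      # variations along rows
    HH = (d[0::2, :] - d[1::2, :]) / np.sqrt(2)      # diagonal variations
    return LL, LH, HL, HH

img = np.arange(64.0).reshape(8, 8)
LL, LH, HL, HH = haar_dwt2(img)
print(LL.shape)  # (4, 4)
```

Applying `haar_dwt2` again to `LL` yields the next scale of the multi-scale decomposition; the resulting sub-band coefficients (after PCA reduction, per the abstract) would form the feature vector fed to the classifier. Because the Haar transform is orthonormal, the four sub-bands together conserve the energy of the input image.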