International Journal of Graphics and Multimedia (IJGM), ISSN 0976-6448 (Print), ISSN 0976-6456 (Online), Volume 4, Issue 1, January-April 2013, pp. 09-19, © IAEME: www.iaeme.com/ijgm.asp. Journal Impact Factor (2013): 4.1089 (Calculated by GISI), www.jifactor.com

CHARACTER RECOGNITION OF KANNADA TEXT IN SCENE IMAGES USING NEURAL NETWORK

M. M. Kodabagi (1), S. A. Angadi (2), Chetana R. Shivanagi (3)
1, 2 Department of Computer Science and Engineering, Basaveshwar Engineering College, Bagalkot-587102, Karnataka, India
3 Department of Information Science and Engineering, Basaveshwar Engineering College, Bagalkot-587102, Karnataka, India

ABSTRACT

Character recognition in scene images is one of the most fascinating and challenging areas of pattern recognition, with many practical applications. It can contribute immensely to the advancement of automation processes and can improve the interface between man and machine in many applications. Practical applications of character recognition systems include reading aids for the blind, traffic guidance systems, tour guide systems, location-aware systems and many more. In this work, a novel method for recognizing basic Kannada characters in natural scene images is proposed. The method uses zone-wise horizontal and vertical profile based features of character images and works in two phases. During training, zone-wise vertical and horizontal profile based features are extracted from training samples and a neural network is trained. During testing, the test image is processed to obtain features, which are recognized using the neural network classifier.
The method has been evaluated on 490 Kannada character images captured with 2 megapixel mobile phone cameras at resolutions of 240x320, 600x800 and 900x1200. The dataset contains samples of different sizes and styles with various degradations, and the method achieves an average recognition accuracy of 92%. The system is efficient and insensitive to variations in size and font, noise, blur and other degradations.

Keywords: Character Recognition, Display Boards, Low Resolution Images, Neural Network Classifier, Zone Wise Profile Features.

1. INTRODUCTION

In recent years, hand-held devices with increased computing and communication capabilities have become widespread and are used for various purposes such as information access, mobile commerce, mobile learning, multimedia streaming, and many more. One new application that can be integrated into such devices is a text understanding and translation system for low resolution natural scene images of display boards. Every day, many people visit places across the world for business and other activities, and they often face problems with the language of the region they travel to. This is especially true in multilingual countries like India. For these reasons, there is a demand for an automated system that understands text in low resolution natural scene images and provides translated information in the local language. Natural scene display board images contain text that often needs to be automatically recognized and processed. Scene text may be any textual part of a scene image, such as street names, institute names, shop names, building names, company names, road signs, traffic information, warning signs, etc. Researchers have therefore focused their attention on developing techniques for understanding text on such display boards.
There is a spurt of activity in the development of web based intelligent hand-held systems for such applications. Among the reported works [1-10] on intelligent systems for hand-held devices, few pertain to understanding written text on display boards, so scope exists for exploring such possibilities. Text understanding involves several processing steps: text detection and extraction; preprocessing for line, word and character separation; script identification; text recognition; and language translation. Text recognition at the character level is therefore one of the most important processing steps in the development of such systems, and recognition at the word/character level is a premise for the later stages of a text understanding system. The recognition of text in low resolution images of display boards is a difficult and challenging problem due to issues such as variability in font size, style and spacing between characters, skew, perspective distortion, viewing angle, uneven illumination, script specific characters and other degradations. The current work investigates the use of zone-wise statistical features for recognition of Kannada characters in scene images. The proposed method uses zone-wise horizontal and vertical profile based features of character images and works in two phases. During training, zone-wise horizontal and vertical profile based features are extracted from training samples and a neural network is trained. During testing, the test image is processed to obtain features, which are recognized using the neural network classifier. The method has been evaluated on 490 Kannada character images captured with 2 megapixel mobile phone cameras at resolutions of 240x320, 600x800 and 900x1200, containing samples of different sizes and styles with various degradations, and achieves an average recognition accuracy of 92%.
The system is efficient and insensitive to variations in size and font, noise, blur and other degradations. The rest of the paper is organized as follows: a detailed survey of character recognition of text in scene images is given in Section 2; the proposed method is presented in Section 3; experimental results and discussion are given in Section 4; and Section 5 concludes the work and lists future directions.

2. RELATED WORKS

Character recognition of text in low resolution natural scene images is a necessary step in the development of the various tasks of a text understanding system, and a substantial amount of research has gone into it. Some of the related works are summarized in the following. A robust approach for recognition of text embedded in natural scenes is given in [11]. The method extracts features directly from image intensities and utilizes local intensity normalization to effectively handle lighting variations. Gabor transforms are then employed to obtain local features, and linear discriminant analysis (LDA) is used for feature selection and classification. The method has been applied to a Chinese sign recognition task. This work was further extended by integrating a sign detection component with recognition [12]. The extended method embeds multi-resolution and multi-scale edge detection, adaptive searching, color analysis, and affine rectification in a hierarchical framework for sign detection. The affine rectification recovers deformation of the text regions caused by an inappropriate camera view angle and significantly improves the text detection rate and optical character recognition.
A framework that exploits both bottom-up and top-down cues for scene text recognition at the word level is presented in [13]. The method derives bottom-up cues from individual character detections in the image. A Conditional Random Field model is then built on these detections to jointly model their strengths and the interactions between them. It also imposes top-down cues obtained from a lexicon-based prior, i.e. language statistics. The optimal word represented by the text image is obtained by minimizing the energy function corresponding to the random field model. The method reports significant improvements in accuracy on two challenging public datasets, namely Street View Text and ICDAR 2003, compared to other methods; however, the reported accuracy is only 73% and requires further improvement. The hierarchical multilayered neural network recognition method described in [14] extracts oriented edges, corners, and end points of color text characters in scene images. A method called selective metric clustering, which mainly deals with color, is employed in [15]. Fast lexicon-based and discriminative semi-Markov models for recognizing scene text are presented in [16, 17]. An object categorization framework based on a bag-of-visual-words representation for recognition of characters in natural scene images is described in [18]. The effectiveness of raw grayscale pixel intensities, shape context descriptors, and wavelet features for character recognition is evaluated in [19]. A method for unconstrained handwritten Kannada vowel recognition based on invariant moments is described in [20]. The technique presented in [21] extracts stroke density, stroke length, and number of strokes for handwritten Kannada and English character recognition. The method in [22] uses modified invariant moments for recognition of multi-font/size Kannada vowels and numerals.
A model employed in [23] calculates features from connected components and obtains 3k-dimensional feature vectors for memory based recognition of camera-captured characters. A character recognition method described in [24] uses local features for recognition of multiple characters in a scene image. A thorough study of the literature shows that some of the reported methods [18, 12, 23, 14] work with limited datasets, other cited works [18, 17, 16] report low recognition rates in the presence of noise and other degradations, and very few works [18-22] pertain to recognition of Kannada characters from scene images. Hence, more research is desirable to obtain a new set of discriminating features suitable for Kannada text in scene images. In the current work, zone-wise statistical features are employed for recognition of Kannada characters in low resolution images. The detailed description of the proposed methodology is given in the next section.

3. PROPOSED METHODOLOGY FOR CHARACTER RECOGNITION

The proposed method uses zone-wise horizontal and vertical profile based features for recognition of Kannada characters in mobile camera based images. It comprises several phases: preprocessing, feature extraction, construction of a knowledge base for training the neural network, and training and character recognition with the neural network classifier. The block diagram of the proposed model is given in Fig 1. Each phase is described in the following subsections.

3.1 Preprocessing

The input character image is preprocessed: it is binarized, noise is removed, a bounding box is generated, and the image is resized to a constant resolution of 30x30 pixels. Further, the image is thinned.
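The preprocessing steps of Section 3.1 (binarization, bounding-box generation, and resizing to 30x30) can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' code: a fixed global threshold and nearest-neighbour resizing are assumptions, since the paper does not specify its binarization or interpolation methods, and noise removal and thinning are omitted.

```python
import numpy as np

def preprocess(gray, out_size=30, thresh=128):
    """Binarize a grayscale character image, crop to the bounding box of
    the character, and resize to out_size x out_size. Noise removal and
    thinning (the remaining steps of Section 3.1) are omitted here."""
    # Binarization: assume dark text on a lighter background, so pixels
    # below the threshold become "on" (1).
    binary = (gray < thresh).astype(np.uint8)
    # Bounding box generation from the on-pixel coordinates.
    ys, xs = np.nonzero(binary)
    box = binary[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # Nearest-neighbour resize to the constant 30x30 resolution.
    rows = np.arange(out_size) * box.shape[0] // out_size
    cols = np.arange(out_size) * box.shape[1] // out_size
    return box[np.ix_(rows, cols)]
```

The final thinning step could then be applied with, for example, scikit-image's `skeletonize`.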
[Fig. 1. Block Diagram of the Proposed Model: training samples and the test sample are preprocessed; zone-wise horizontal and vertical profile features are extracted from both; the training features are organized into the knowledge base used to train the neural network, and the test features are classified by the neural network to yield the recognized character.]

3.2 Feature Extraction

In this phase, each image is divided into 15 vertical zones and 15 horizontal zones, where each horizontal zone is 2x30 pixels and each vertical zone is 30x2 pixels. The sum of all on-pixels in every zone is taken as the feature value for that zone. Finally, the 30 features computed from all zones are stored in a feature vector X as described in equations (1) to (5):

X = [VFeatures, HFeatures]                                      (1)
VFeatures = [Vf_i], 1 <= i <= 15                                (2)
HFeatures = [Hf_i], 1 <= i <= 15                                (3)
Hf_i = sum over x = 1..2, y = 1..30 of g_i(x, y)                (4)
Vf_i = sum over x = 1..30, y = 1..2 of g_i(x, y)                (5)

where Hf_i is the feature value of the i-th horizontal zone, computed as in (4); Vf_i is the feature value of the i-th vertical zone, computed as in (5); and g_i is the i-th zone, i.e. the chosen region of the character image. The dataset of such feature vectors obtained from the training samples is used for construction of the knowledge base.

3.3 Construction of Knowledge Base for Training Neural Network

For knowledge base construction, images were captured from display boards of Karnataka Government offices, street names, institute names, shop names, building names, company names, road signs, and traffic direction and warning signs, using 2 megapixel mobile phone cameras. The images were captured at resolutions of 240x320, 600x800 and 900x1200, at distances of 1 to 6 meters.
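Equations (1) to (5) can be realized directly on the 30x30 preprocessed image. The sketch below is an illustrative reconstruction, assuming a binary image with 0/1 pixel values, and computes the 30-dimensional feature vector X:

```python
import numpy as np

def zone_profile_features(img):
    """Compute the feature vector X of equations (1)-(5) from a 30x30
    binary (0/1) character image: 15 vertical zones of 30x2 pixels and
    15 horizontal zones of 2x30 pixels, each contributing the count of
    its on-pixels."""
    assert img.shape == (30, 30)
    # Vf_i, eq. (5): sum of on-pixels in the i-th 30x2 vertical zone.
    vfeatures = [int(img[:, 2 * i:2 * i + 2].sum()) for i in range(15)]
    # Hf_i, eq. (4): sum of on-pixels in the i-th 2x30 horizontal zone.
    hfeatures = [int(img[2 * i:2 * i + 2, :].sum()) for i in range(15)]
    # X = [VFeatures, HFeatures], eq. (1).
    return vfeatures + hfeatures
```

Applied to every training sample, these vectors form the dataset from which the knowledge base of equation (6) is built.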
All these images are used for evaluating the performance of the proposed model. Images captured at 240x320 from 1 to 3 meters are clear when the viewing angle is parallel to the text plane; perspective distortion and other degradations occur beyond 3 meters or at other viewing angles. Images captured from 1 to 6 meters at the other stated resolutions are clear, though perspective distortion still occurs when the viewing angle is not parallel. The images in the database are characterized by variable font size and style, uneven thickness, minimal information context, small skew, noise, perspective distortion and other degradations. The image database consists of 490 basic Kannada character images of varying resolutions. From this database, 50% of the samples are used for training. During training, features are extracted from all training samples and the knowledge base is organized as a dataset of feature vectors as depicted in (6). The stored information in the knowledge base sufficiently characterizes all variations in the input. Testing is carried out on all samples, comprising 50% trained and 50% untrained samples. Some sample images captured from display boards using 2 megapixel mobile phone cameras are shown in Fig 2.

KB = [X_j], 1 <= j <= N                                         (6)

where KB is the knowledge base comprising the feature vectors of the training samples, X_j is the feature vector of the j-th image in the KB, and N is the number of training sample images.

[Fig. 2. Sample Images Captured from 2 Mega Pixels Cameras on Mobile Phones]

3.4 Training and Recognition with Feed Forward Neural Network

After the dataset is obtained and organized into the knowledge base of basic Kannada character images, training and recognition are carried out using feed forward neural networks.
The details of training and recognition are as follows. Before network design, the data in the knowledge base is prepared to cover the range of inputs for which the network will be used. A feed forward neural network cannot accurately extrapolate beyond the range of its inputs, so the training data is chosen to span the full range of the input space. A normalization step is then applied to both the input vectors and the target vectors in the dataset, so that the network output always falls into a normalized range. Once the data is ready, the feed forward neural network object is created with 30 neurons in the input layer and 15 neurons in the hidden layer, and configured with default weights and biases for the prepared dataset in the knowledge base. The network uses tan-sigmoid transfer functions in the input and hidden neurons, linear transfer functions for the output neurons, and the Levenberg-Marquardt and Gradient Descent with Momentum learning algorithms. The performance function is the default for feed forward networks, mean squared error. The learning rate and minimum performance parameters are initialized to 0.01. Training is terminated based on the magnitude of the gradient and the number of validation checks. The number of validation checks is set to 10 and represents the number of successive iterations for which the validation performance fails to decrease. After the network weights and biases are initialized and the other training parameters configured, the network is ready for training. The multilayer feed forward network is trained for function approximation (nonlinear regression) or pattern recognition with network inputs and target outputs. The training process tunes the values of the weights and biases of the network to optimize network performance, as defined by the network performance function.
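A simplified numerical sketch of this training loop is given below. It follows the stated configuration (30 inputs, 15 tan-sigmoid hidden neurons, linear outputs, mean squared error, learning rate 0.01, gradient descent with momentum), but the number of output classes is an assumption, as the paper does not state it, and Levenberg-Marquardt, normalization, and validation-based stopping are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Architecture from the paper: 30 input neurons, 15 tan-sigmoid hidden
# neurons, linear output neurons, MSE loss, learning rate 0.01.
# n_out (number of character classes) is an assumption for this sketch.
n_in, n_hid, n_out = 30, 15, 10
W1 = rng.normal(0.0, 0.1, (n_hid, n_in)); b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 0.1, (n_out, n_hid)); b2 = np.zeros(n_out)
lr, momentum = 0.01, 0.9
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

def forward(X):
    H = np.tanh(X @ W1.T + b1)      # tan-sigmoid hidden layer
    return H, H @ W2.T + b2         # linear output layer

def train_step(X, T):
    """One epoch of gradient descent with momentum on the MSE loss."""
    global W1, b1, W2, b2, vW1, vb1, vW2, vb2
    H, Y = forward(X)
    E = Y - T
    n = len(X)
    gW2 = E.T @ H / n; gb2 = E.mean(axis=0)
    dH = (E @ W2) * (1.0 - H ** 2)  # backpropagate through tanh
    gW1 = dH.T @ X / n; gb1 = dH.mean(axis=0)
    vW2 = momentum * vW2 - lr * gW2; W2 = W2 + vW2
    vb2 = momentum * vb2 - lr * gb2; b2 = b2 + vb2
    vW1 = momentum * vW1 - lr * gW1; W1 = W1 + vW1
    vb1 = momentum * vb1 - lr * gb1; b1 = b1 + vb1
    return float((E ** 2).mean())
```

With normalized 30-dimensional feature vectors as inputs and one-hot class targets, `train_step` would be called in a loop until the gradient-magnitude or validation criterion halts training; recognition then takes the arg-max of the linear outputs.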
After the network is trained, its performance is verified using several trained and test character images. The neural network classifier gives an average recognition accuracy of 92%.

4. EXPERIMENTAL RESULTS AND ANALYSIS

The proposed methodology has been evaluated on 490 low resolution basic Kannada character images of varying font size and style, uneven thickness and other degradations. The experimental results of processing a sample character image are described in Section 4.1. The results of processing several other character images dealing with various issues, the overall performance of the system, and comparison results with other methods are reported in Section 4.2.

4.1. An Experimental Analysis for a Sample Kannada Character Image

The character image with uneven thickness, uneven lighting conditions, and other degradations given in Fig. 3a is first preprocessed: it is binarized, resized to a constant size of 30x30 pixels, and thinned, as shown in Fig. 3b.

[Fig. 3. a) A Sample Character Test Image b) Preprocessed Image]

The image is then divided into 15 vertical zones and 15 horizontal zones, and the zone-wise statistical features are computed from all zones and organized into a feature vector T as in (1) to (5). The experimental values of all zones are shown in Table 1.

TABLE 1. Zone Wise Vertical and Horizontal Features of the Sample Input Image in Fig. 3b

VFeatures = (4 3 13 5 6 6 6 8 6 7 6 9 13 13 4)
HFeatures = (2 2 3 6 3 4 9 5 5 6 4 4 5 9 15)
T = [4 3 13 5 6 6 6 8 6 7 6 9 13 13 4 2 2 3 6 3 4 9 5 5 6 4 4 5 9 15]

The experimental values in Table 1 clearly depict the distribution of pixels in the various segments/primitives of the character image.
These distributions differ from character to character because of the varying positions and shapes of the segments/primitives of basic Kannada characters, as demonstrated for two sample images in Table 2.

TABLE 2. Vertical and Horizontal Features of Two Sample Images Demonstrating Pixel Distribution Patterns (character images not reproduced)

Character 1: 9 5 6 2 3 2 4 3 11 7 8 11 21 10 2 | 13 1 5 11 4 4 4 13 9 4 8 5 2 3 5
Character 2: 4 12 8 6 6 6 6 14 18 8 6 6 6 9 14 | 10 3 2 2 6 8 22 2 2 17 17 9 7 12 10

The values in Table 2 show that the feature values in most of the corresponding zones of the two characters are distinct. For example, the feature values 9, 5, 6, 2 in vertical zones 1 to 4 of the character in the first row of Table 2 are distinct from the feature values 12, 8, 6, 6 in the corresponding zones of the character in the second row. Similar differences exist in the other zones. Arranging these features into a feature vector creates a pixel distribution pattern that makes samples distinguishable. It is also observed that the proposed zone-wise features accommodate uncertainty in the appearance of the primitives of a character image. After extracting features from the test input image in Fig. 3a, the neural network classifier is used to recognize the character.

4.2. An Experimental Analysis Dealing with Various Issues

The proposed methodology has produced good results for low resolution images containing Kannada characters of different size, font, and alignment with varying backgrounds. Its advantage lies in the low computational cost of the feature extraction and recognition phases. During experiments it was noticed that the zone-wise features made samples separable in the feature space.
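This separability can be checked numerically. The sketch below compares the zone-wise feature vectors of two different characters (values transcribed from the two value rows of Table 2 as printed) and shows they are far apart in Euclidean distance:

```python
import numpy as np

# Zone-wise feature vectors of two different characters, transcribed
# from the two value rows printed in Table 2.
a = np.array([9, 5, 6, 2, 3, 2, 4, 3, 11, 7, 8, 11, 21, 10, 2,
              13, 1, 5, 11, 4, 4, 4, 13, 9, 4, 8, 5, 2, 3, 5])
b = np.array([4, 12, 8, 6, 6, 6, 6, 14, 18, 8, 6, 6, 6, 9, 14,
              10, 3, 2, 2, 6, 8, 22, 2, 2, 17, 17, 9, 7, 12, 10])
# Euclidean distance between the two pixel distribution patterns; a
# large value reflects the inter-class separability discussed above.
dist = float(np.linalg.norm(a - b))
```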
Hence, the proposed work is robust and achieves an average recognition accuracy of 92%. The overall performance of the system on the dataset is reported in Table 3. A comparison of the proposed method with other scene text recognition methods is given in Table 4.

TABLE 3. Overall System Performance (one row per character class; the character images of the original table are not reproduced)

Samples Tested | Correctly Recognized | Misclassified | Recognition Accuracy (%)
10 | 9 | 1 | 90
10 | 10 | 0 | 100
10 | 9 | 1 | 90
10 | 9 | 1 | 90
10 | 9 | 1 | 90
10 | 9 | 1 | 90
10 | 9 | 1 | 90
10 | 10 | 0 | 100
10 | 10 | 0 | 100
10 | 9 | 1 | 90
10 | 9 | 1 | 90
10 | 10 | 0 | 100
10 | 9 | 1 | 90
10 | 9 | 1 | 90
10 | 10 | 0 | 100
10 | 9 | 1 | 90
10 | 10 | 0 | 100
10 | 8 | 2 | 80
10 | 9 | 1 | 90
10 | 10 | 0 | 100
10 | 10 | 0 | 100
10 | 9 | 1 | 90
10 | 9 | 1 | 90
10 | 9 | 1 | 90
10 | 9 | 1 | 90
10 | 9 | 1 | 90
10 | 9 | 1 | 90
10 | 9 | 1 | 90
10 | 8 | 2 | 80
10 | 10 | 0 | 100
10 | 10 | 0 | 100
10 | 8 | 2 | 80
10 | 10 | 0 | 100
10 | 10 | 0 | 100
10 | 10 | 0 | 100
10 | 9 | 1 | 90
10 | 9 | 1 | 90
10 | 8 | 2 | 80
10 | 8 | 2 | 80
10 | 10 | 0 | 100
10 | 10 | 0 | 100
10 | 9 | 1 | 90
10 | 9 | 1 | 90
10 | 9 | 1 | 90
10 | 9 | 1 | 90
10 | 8 | 2 | 80
10 | 10 | 0 | 100
10 | 9 | 1 | 90
10 | 9 | 1 | 90

A closer examination of the results revealed that misclassifications arise from noise, strong similarity between character structures/primitives, and other degradations. It is also noticed that the zonal features accommodate variations in the appearance of character primitives, and that better performance can be obtained if the knowledge base is trained on all variations and degradations.

TABLE 4. Comparison of the Proposed Method with Other Scene Text Recognition Methods

Author | Approach | Features | Recognition Accuracy
Jerod J. Weinman et al. (2008) | A Discriminative Semi-Markov Model for Robust Scene Text Recognition | Wavelet features | 82.08%
Onur Tekdas et al. (2009) | Recognizing Characters in Natural Scenes: A Feature Study | Raw intensities, shape contexts, and wavelet features | 85.328%
Masakazu Iwamura et al. (2011) | Recognition of Multiple Characters in a Scene Image Using Arrangement of Local Features | Scale invariant feature transform and voting method | 76.5%
Anand Mishra et al. (2012) | Top-down and bottom-up cues for scene text recognition | Language statistics and conditional random field model | 73%
Proposed Method | Character Recognition of Kannada Text in Scene Images Using Neural Network | Zone-wise vertical and horizontal profile based features | 92%

5. CONCLUSION

In this work, a novel method for recognition of basic Kannada characters from camera based images is proposed. The method uses zone-wise horizontal and vertical profile based features and a neural network classifier. The system works in two phases, a training phase and a testing phase. Exhaustive experimentation was carried out to analyze the zone-wise horizontal and vertical profile based features with the neural network classifier. The results are encouraging, and the system is observed to be robust and insensitive to several challenges such as unusual fonts, variable lighting conditions, noise, and blur. The method was tested on 490 samples and gives an average recognition accuracy of 92%. The proposed method can be extended to character recognition with new sets of features and classification algorithms.

REFERENCES

[1] Abowd Gregory D., Christopher G. Atkeson, Jason Hong, Sue Long, Rob Kooper, and Mike Pinkerton, 1997, "Cyberguide: A mobile context-aware tour guide", Wireless Networks, 3(5), pp. 421-433.
[2] Natalia Marmasse and Chris Schamandt, 2000, "Location aware information delivery with comMotion", In Proceedings of the Conference on Human Factors in Computing Systems, pp. 157-171.
[3] Tollmar K., Yeh T., and Darrell T., 2004, "IDeixis - Image-Based Deixis for Finding Location-Based Information", In Proceedings of the Conference on Human Factors in Computing Systems (CHI'04), pp. 781-782.
[4] Gillian Leetch and Eleni Mangina, 2005, "A Multi-Agent System to Stream Multimedia to Handheld Devices", In Proceedings of the Sixth International Conference on Computational Intelligence and Multimedia Applications (ICCIMA'05).
[5] Wichian Premchaiswadi, 2009, "A mobile Image search for Tourist Information System", In Proceedings of the 9th International Conference on Signal Processing, Computational Geometry and Artificial Vision, pp. 62-67.
[6] Ma Chang-jie and Fang Jin-yun, 2008, "Location Based Mobile Tour Guide Services Towards Digital Dunhuang", International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVII, Part B4, Beijing.
[7] Shih-Hung Wu, Min-Xiang Li, Ping-che Yang, and Tsun Ku, 2010, "Ubiquitous Wikipedia on Handheld Device for Mobile Learning", 6th IEEE International Conference on Wireless, Mobile, and Ubiquitous Technologies in Education, pp. 228-230.
[8] Tom Yeh, Kristen Grauman, and K. Tollmar, 2005, "A picture is worth a thousand keywords: image-based object search on a mobile platform", In Proceedings of the Conference on Human Factors in Computing Systems, pp. 2025-2028.
[9] Fan X., Xie X., Li Z., Li M., and Ma, 2005, "Photo-to-search: using multimodal queries to search the web from mobile phones", In Proceedings of the 7th ACM SIGMM International Workshop on Multimedia Information Retrieval.
[10] Lim Joo Hwee, Jean Pierre Chevallet, and Sihem Nouarah Merah, 2005, "SnapToTell: Ubiquitous information access from camera", Mobile Human Computer Interaction with Mobile Devices and Services, Glasgow, Scotland.
[11] Jing Zhang, Xilin Chen, Andreas Hanneman, Jie Yang, and Alex Waibel, 2002, "A Robust Approach for Recognition of Text Embedded in Natural Scenes", In Proceedings of the 16th International Conference on Pattern Recognition, Volume 3, pp. 204-207.
[12] Xilin Chen, Jie Yang, Jing Zhang, and Alex Waibel, 2004, "Automatic Detection and Recognition of Signs From Natural Scenes", IEEE Transactions on Image Processing, Vol. 13, No. 1, pp. 87-99.
[13] Anand Mishra, Karteek Alahari, and C. V. Jawahar, 2012, "Top-Down and Bottom-Up Cues for Scene Text Recognition", In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Zohra Saidane and Christophe Garcia, 2007, "Automatic Scene Text Recognition using a Convolutional Neural Network", CBDAR, pp. 100-106.
[15] Céline Mancas-Thillou, 2007, "Natural Scene Text Understanding", Segmentation and Pattern Recognition, I-Tech, Vienna, Austria, pp. 123-142.
[16] Jerod J. Weinman, Erik Learned-Miller, and Allen Hanson, 2007, "Fast Lexicon-Based Scene Text Recognition with Sparse Belief Propagation", In Proceedings of the International Conference on Document Analysis and Recognition, Curitiba, Brazil.
[17] Jerod J. Weinman, Erik Learned-Miller, and Allen Hanson, 2008, "A Discriminative Semi-Markov Model for Robust Scene Text Recognition", In Proceedings of the International Conference on Pattern Recognition (ICPR), Tampa, FL, USA, pp. 1-5.
[18] Teófilo E. de Campos and Bodla Rakesh Babu, 2009, "Character Recognition in Natural Images", In Proceedings of the International Conference on Computer Vision Theory and Applications, pp. 273-280.
[19] Onur Tekdas and Nikhil Karnad, 2009, "Recognizing Characters in Natural Scenes: A Feature Study", CSCI 5521 Pattern Recognition, pp.
1-14.
[20] Sangame S. K., Ramteke R. J., and Rajkumar Benne, 2009, "Recognition of isolated handwritten Kannada vowels", Advances in Computational Research, ISSN 0975-3273, Volume 1, Issue 2, pp. 52-55.
[21] B. V. Dhandra, Mallikarjun Hangarge, and Gururaj Mukarambi, 2010, "Spatial Features for Handwritten Kannada and English Character Recognition", IJCA Special Issue on Recent Trends in Image Processing and Pattern Recognition (RTIPPR), pp. 146-151.
[22] Mallikarjun Hangarge, Shashikala Patil, and B. V. Dhandra, 2010, "Multi-font/size Kannada Vowels and Numerals Recognition Based on Modified Invariant Moments", IJCA Special Issue on Recent Trends in Image Processing and Pattern Recognition (RTIPPR), pp. 126-130.
[23] Masakazu Iwamura, Tomohiko Tsuji, and Koichi Kise, 2010, "Memory-Based Recognition of Camera-Captured Characters", In Proceedings of the 9th IAPR International Workshop on Document Analysis Systems, pp. 89-96.
[24] Masakazu Iwamura, Takuya Kobayashi, and Koichi Kise, 2011, "Recognition of Multiple Characters in a Scene Image Using Arrangement of Local Features", In Proceedings of the International Conference on Document Analysis and Recognition, pp. 1409-1413.
[25] Primekumar K. P. and Sumam Mary Idicula, 2012, "Performance of on-Line Malayalam Handwritten Character Recognition using HMM and SFAM", International Journal of Computer Engineering & Technology (IJCET), Volume 3, Issue 1, pp. 115-125, ISSN Print: 0976-6367, ISSN Online: 0976-6375.
[26] Lokesh S. Khedekar and A. S. Alvi, 2013, "Advanced Smart Credential Cum Unique Identification and Recognition System (ASCUIRS)", International Journal of Computer Engineering & Technology (IJCET), Volume 4, Issue 1, pp. 97-104, ISSN Print: 0976-6367, ISSN Online: 0976-6375.