International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367 (Print), ISSN 0976-6375 (Online), Volume 4, Issue 2, March-April (2013), pp. 632-641, © IAEME: www.iaeme.com/ijcet.asp. Journal Impact Factor (2013): 6.1302 (Calculated by GISI), www.jifactor.com

RECOGNITION OF BASIC KANNADA CHARACTERS IN SCENE IMAGES USING EUCLIDEAN DISTANCE CLASSIFIER

M. M. Kodabagi(1), Shridevi B. Kembhavi(2)
(1,2) Department of Computer Science and Engineering, Basaveshwar Engineering College, Bagalkot-587102, Karnataka, India

ABSTRACT

Character recognition in scene images is a challenging visual recognition problem. The field of scene text recognition is receiving growing attention due to the proliferation of digital cameras and the wide variety of potential applications, including robotic vision, image retrieval, intelligent navigation systems, and assistance for visually impaired persons. In this paper, a novel methodology for recognition of basic Kannada characters in scene images is proposed. It is divided into two phases, training and testing. During training, zone-wise horizontal and vertical profile based features are extracted from training samples and a knowledge base is created. During testing, the test image is processed to obtain features and recognized using a Euclidean distance classifier. The method has been evaluated on 460 Kannada character images captured with 2-megapixel mobile phone cameras at resolutions of 240x320, 600x800 and 900x1200, containing samples of different sizes, styles and degradations, and achieves an average recognition accuracy of 91%.
The system is efficient and insensitive to variations in size and font, noise, blur, dark background, slant/tilt and other degradations.

Keywords: Character Recognition, Scene Images, Zone-Wise Horizontal and Vertical Features, Euclidean Distance Classifier.

1. INTRODUCTION

Character recognition in scene images is a challenging visual recognition problem. Until a few decades ago, research in the field of Optical Character Recognition was limited to document images acquired with flatbed desktop scanners. The usability of such systems is limited: they are not portable, owing to the large size of the scanners and the need for an attached computing system, and the shot speed of a scanner is slower than that of a digital camera. Hence the field of scene text recognition is receiving growing attention due to the proliferation of digital cameras and the wide variety of potential applications, including robotic vision, image retrieval, intelligent navigation systems, and assistance for visually impaired persons.

Recognition of characters from scene images is a very complex problem. Natural scene images usually suffer from low resolution and low quality, perspective distortion, complex backgrounds, varying font style and thickness, background and foreground texture, geometric distortions introduced by camera position, shadows, non-uniform illumination, low contrast, large signal-dependent noise, and slant and tilt, as shown in Figure 1. The problem is significantly more difficult than recognizing text in scanned documents.

Figure 1. Sample Images of Display Boards

In this paper, a novel method for recognizing basic Kannada characters in natural scene images is proposed.
The proposed method uses zone-wise horizontal and vertical profile based features extracted from character images. The method works in two phases. During the training phase, zone-wise horizontal and vertical profile based features are extracted from training samples and a knowledge base is created. During testing, the test image is processed to obtain features and recognized using a Euclidean distance classifier. The method is evaluated on 460 Kannada character images captured with 2-megapixel mobile phone cameras at resolutions of 240x320, 600x800 and 900x1200, containing samples of different sizes, styles and degradations, and achieves an average recognition accuracy of 91%. The system is efficient and insensitive to variations in size and font, noise, blur, dark background, slant/tilt and other degradations.

The rest of the paper is organized as follows: a detailed survey of related work on character recognition in scene images is given in Section 2. The proposed method is presented in Section 3. Experimental results and discussion are given in Section 4. Section 5 concludes the work and lists future directions.

2. RELATED WORK

Some related works on recognition of text in scene images are summarized below. A robust approach for recognition of text embedded in natural scenes is given in [ ]. The method extracts features directly from image intensity and uses a local intensity normalization to effectively handle lighting variations. A Gabor transform is then employed to obtain local features, and linear discriminant analysis (LDA) is used for feature selection and classification. The method was applied to a Chinese sign recognition task.
This work was further extended by integrating a sign detection component with recognition [ ]. The extended method embeds multi-resolution and multi-scale edge detection, adaptive searching, color analysis, and affine rectification in a hierarchical framework for sign detection. The affine rectification recovers deformation of text regions caused by an inappropriate camera view angle and significantly improves the text detection rate and optical character recognition.

A framework that exploits both bottom-up and top-down cues for word-level scene text recognition is presented in [ ]. The method derives bottom-up cues from individual character detections in the image. A Conditional Random Field model is then built on these detections to jointly model the strength of the detections and the interactions between them. It also imposes top-down cues obtained from a lexicon-based prior, i.e. language statistics. The optimal word represented by the text image is obtained by minimizing the energy function of the random field model. The method reports significant improvements in accuracy on two challenging public datasets, Street View Text and ICDAR 2003, compared to other methods; however, the reported accuracy is only 73% and requires further improvement.

The hierarchical multilayered neural network method described in [ ] extracts oriented edges, corners, and end points of color text characters in scene images. A selective metric clustering method, which mainly deals with color, is employed in [ ]. Fast lexicon-based and discriminative semi-Markov models for recognizing scene text are presented in [16, 17]. An object categorization framework based on a bag-of-visual-words representation for recognition of characters in natural scene images is described in [ ]. The effectiveness of raw grayscale pixel intensities, shape context descriptors, and wavelet features for character recognition is evaluated in [ ].
A method for unconstrained handwritten Kannada vowel recognition based on invariant moments is described in [ ]. The technique presented in [ ] extracts stroke density, stroke length, and number of strokes for handwritten Kannada and English character recognition. The method in [ ] uses modified invariant moments for multi-font/size Kannada vowel and numeral recognition. A model employed in [ ] computes features from connected components and obtains 3k-dimensional feature vectors for memory-based recognition of camera-captured characters. A character recognition method described in [ ] uses local features for recognition of multiple characters in a scene image.

From a thorough study of the literature, it is noticed that some of the reported methods [18, 12, 23, 14] work with limited datasets, other cited works [18, 17, 16] report low recognition rates in the presence of noise and other degradations, and very few works [18-22] pertain to recognition of Kannada characters in scene images. Hence, more research is needed to obtain new discriminating features suitable for Kannada text in scene images. In the current paper, zone-wise horizontal and vertical profile based features are employed for recognition of Kannada characters in low resolution images. The detailed description of the proposed methodology is given in the next section.

3. PROPOSED METHODOLOGY

The proposed method uses zone-wise horizontal and vertical profile based features for recognition of basic Kannada characters. It comprises several phases: preprocessing, feature extraction, construction of the knowledge base, and character recognition using a Euclidean distance classifier. The block diagram of the proposed model is shown in Figure 2.
The detailed description of each phase is given in the following subsections.

3.1 Preprocessing

The input character image is binarized, a bounding box is generated, and the image is resized to a constant resolution of 30x30 pixels. Further, the image is thinned.

Figure 2. Block Diagram of the Proposed Model (training: preprocessing, feature extraction, sample database; testing: preprocessing, feature extraction, character recognition)

3.2 Feature extraction

Features are extracted from the preprocessed image. Each image is divided into 15 horizontal zones and 15 vertical zones, where the size of each horizontal zone is 2x30 pixels and the size of each vertical zone is 30x2 pixels. The sum of all pixel values in each zone is taken as a feature value. This yields 30 features, which are stored in the feature vector FV as described in equations (1) to (5):

FV = [(Vertical_Features) (Horizontal_Features)]   (1)

Vertical_Features = [VF_i], 1 ≤ i ≤ 15   (2)

Horizontal_Features = [HF_i], 1 ≤ i ≤ 15   (3)

where HF_i is the feature value of the i-th horizontal zone, computed as in (4), and VF_i is the feature value of the i-th vertical zone, computed as in (5):

HF_i = Σ_{x=1..2} Σ_{y=1..30} g_i(x, y)   (4)

VF_i = Σ_{x=1..30} Σ_{y=1..2} g_i(x, y)   (5)

where g_i(x, y) is the i-th zone of the character image. The dataset of feature vectors obtained from the training samples is used to construct the knowledge base.

3.3 Construction of knowledge base

For knowledge base construction, images are captured from display boards of Karnataka Government offices, street names, institute names, shop names, building names, company names, road signs, and traffic direction and warning signs, using 2-megapixel mobile phone cameras.
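The preprocessing and zone-wise feature extraction of Sections 3.1 and 3.2 can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: the function names and the fixed binarization threshold are hypothetical, and the thinning step is omitted for brevity.

```python
import numpy as np

def preprocess(gray, size=30, threshold=128):
    """Binarize a grayscale image, crop to the bounding box of the
    character, and resize to size x size. Thinning is omitted here."""
    binary = (gray < threshold).astype(np.uint8)  # dark text -> 1
    ys, xs = np.nonzero(binary)
    cropped = binary[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # Nearest-neighbour resize to a constant 30x30 resolution
    h, w = cropped.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return cropped[np.ix_(rows, cols)]

def extract_features(img):
    """30 zone-wise features as in equations (1)-(5): 15 vertical
    zones of 30x2 columns, then 15 horizontal zones of 2x30 rows,
    each reduced to the sum of its pixel values."""
    vertical = [int(img[:, 2 * i:2 * i + 2].sum()) for i in range(15)]
    horizontal = [int(img[2 * i:2 * i + 2, :].sum()) for i in range(15)]
    return np.array(vertical + horizontal)
```

Applied to a 30x30 preprocessed character, `extract_features` returns the 30-element vector FV of equation (1), with the vertical zone sums first.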
The images are captured at resolutions of 240x320, 600x800 and 900x1200 at a distance of 1 to 6 meters. All these images are used for evaluating the performance of the proposed model. The images in the database are characterized by variable font size and style, uneven thickness, minimal information context, small skew, noise, perspective distortion and other degradations. The image database consists of 460 basic Kannada character images of varying resolutions. From this database, 80% of the samples are used for training. During training, features are extracted from all training samples and the knowledge base is organized as a dataset of feature vectors, as depicted in (6). The stored information in the knowledge base sufficiently characterizes all variations in the input. Testing is carried out on all samples, comprising the 80% trained and 20% untrained samples.

KB = [FV_j], 1 ≤ j ≤ N   (6)

where KB is the knowledge base comprising the feature vectors of the training samples, FV_j is the feature vector of the j-th image in KB, and N is the number of training sample images, as shown in Figure 3.

Figure 3. Sample Character Images

3.4 Training and Recognition using Euclidean Distance Classifier

After the dataset is obtained and organized into the knowledge base of basic Kannada character images, training and recognition are carried out using the Euclidean distance classifier, as described in the following. In the recognition phase, the test image is processed to obtain zone-wise horizontal and vertical profile based features, which are stored in a feature vector FV1 using equation (1). The classifier then determines the minimum distance between the test image and every record in the knowledge base using the Euclidean distance measure, as in equation (7).
D_j = sqrt( Σ_{i=1..30} (FV1_i - FV_{j,i})^2 ), 1 ≤ j ≤ N   (7)

The minimum distance between the test image and a record in the knowledge base is used to recognize the character. The proposed methodology performs well under variability in font size and style and for dark background images. However, the method requires sufficient training on all variations in font size, style and other degradations.

4. EXPERIMENTAL RESULTS AND ANALYSIS

The proposed methodology has been evaluated on 460 basic Kannada character images of varying font size and style, uneven thickness, dark background and other degradations. The experimental results of processing a sample character image are described in Section 4.1, and the results of processing several other character images dealing with various issues, along with the overall performance of the system, are reported in Section 4.2.

4.1 An Experimental Analysis for a Sample Kannada Character Image

The character image with uneven thickness, uneven lighting conditions, and other degradations given in Figure 4a is first preprocessed: binarization, bounding box generation, resizing to a constant size of 30x30 pixels, and thinning, as shown in Figures 4b, 4c and 4d.

Figure 4. a) Test image b) Image with Bounding Box c) Resized image d) Thinned image

Further, the image is divided into 15 vertical zones and 15 horizontal zones. Then, the zone-wise horizontal and vertical profile based features are computed for the image and organized into a feature vector FV as in (1) to (5). The experimental values for all zones are shown in Table 1.

TABLE 1.
Zone-Wise Vertical and Horizontal Profile based Features of the Sample Input Image in Figure 4d

Feature Vector FV:
Vertical_Features: (3 12 6 6 8 10 8 4 10 8 4 4 5 14 0)
Horizontal_Features: (0 7 9 8 8 8 11 10 12 4 4 4 6 11 0)
FV = [3 12 6 6 8 10 8 4 10 8 4 4 5 14 0 0 7 9 8 8 8 11 10 12 4 4 4 6 11 0]

The experimental values in Table 1 clearly depict the distribution of pixels across the various primitives of the character image. These distributions differ from character to character because of the varying positions and shapes of the primitives of basic Kannada characters. This is demonstrated for two sample images in Table 2.

TABLE 2. Vertical and Horizontal Features of Two Sample Images Demonstrating Pixel Distribution Patterns (character images not reproduced; vertical features | horizontal features)

Sample 1: 21 5 5 8 8 16 4 4 20 4 9 14 4 5 9 | 0 25 8 8 7 9 8 8 8 11 5 4 6 7 22
Sample 2: 15 7 8 9 10 11 8 8 10 9 8 7 6 9 18 | 3 23 3 3 5 13 15 13 16 3 4 7 11 14 10

The values in Table 2 show that the feature values in most corresponding zones of the two characters are distinct. The arrangement of these features into a feature vector creates a pixel distribution pattern that makes samples distinguishable. It is also observed that the proposed zone-wise features accommodate uncertainty in the appearance of the primitives of a character image. After extracting features from the test input image of Figure 4a, the Euclidean distance classifier is used to recognize the character.

4.2 An Experimental Analysis Dealing with Various Issues

The proposed methodology has produced good results for scene images containing basic Kannada characters of different size, font, and alignment with varying background. Its advantage lies in the low computational cost of the feature extraction and recognition phases, since the feature set is reduced by summing the pixel values of the zone-wise horizontal and vertical profiles.
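The nearest-neighbour recognition step described above can be sketched as follows. This is a minimal NumPy illustration of equation (7), not the authors' code; the knowledge base is assumed to be a list of (feature vector, label) pairs.

```python
import numpy as np

def recognize(fv1, knowledge_base):
    """Return the label of the knowledge-base record whose feature
    vector has the minimum Euclidean distance to the test vector fv1,
    as in equation (7)."""
    best_label, best_dist = None, float("inf")
    for fv_j, label in knowledge_base:
        # D_j = sqrt(sum_i (FV1_i - FV_{j,i})^2)
        dist = np.sqrt(np.sum((np.asarray(fv1) - np.asarray(fv_j)) ** 2))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```

For example, with the two feature vectors of Table 2 stored as knowledge-base records, a test vector lying closer to Sample 1 is assigned Sample 1's label.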
Hence, the proposed work is robust and achieves an average recognition accuracy of 91%. The overall performance of the system on the dataset is reported in Table 3.

TABLE 3. Overall System Performance (character glyphs not reproduced; each row gives Samples Tested, Correctly Recognized, Misclassified, Recognition Accuracy % for two characters)

10 10 0 100 | 10 8 2 80
10 9 1 90 | 10 8 2 80
10 9 1 90 | 10 10 0 100
10 8 2 80 | 10 9 1 90
10 9 1 90 | 10 8 2 80
10 10 0 100 | 10 8 2 80
10 10 0 100 | 10 9 1 90
10 10 0 100 | 10 10 0 100
10 8 2 80 | 10 9 1 90
10 10 0 100 | 10 9 1 90
10 10 0 100 | 10 9 1 90
10 9 1 90 | 10 10 0 100
10 8 2 80 | 10 8 2 80
10 9 1 90 | 10 10 0 100
10 10 0 100 | 10 9 1 90
10 10 0 100 | 10 8 2 80
10 10 0 100 | 10 9 1 90
10 9 1 90 | 10 8 2 80
10 10 0 100 | 10 8 2 80
10 9 1 90 | 10 8 2 80
10 10 0 100 | 10 8 2 80
10 8 2 80 | 10 10 0 100
10 8 2 80 | 10 10 0 100

5. CONCLUSION

In this paper, a novel methodology for recognition of basic Kannada characters in scene images is proposed. The method uses zone-wise horizontal and vertical profile based features and a Euclidean distance classifier. The system works in two phases, training and testing. Exhaustive experimentation was done to analyze the horizontal and vertical profile based features.
The results obtained using zone-wise horizontal and vertical profile features with the Euclidean distance classifier are encouraging, and the system is observed to be robust and insensitive to several challenges such as unusual fonts, variable lighting conditions, noise, blur, and orientation. The method was tested on 460 samples and gives a recognition accuracy of 91%. The proposed method can be extended with new feature sets and classification algorithms, and to recognize the characters of other languages.

REFERENCES

[1] Abowd Gregory D., Christopher G. Atkeson, Jason Hong, Sue Long, Rob Kooper, and Mike Pinkerton, 1997, “CyberGuide: A mobile context-aware tour guide”, Wireless Networks, 3(5), pp. 421-433.
[2] Natalia Marmasse and Chris Schamandt, 2000, “Location aware information delivery with comMotion”, In Proceedings of Conference on Human Factors in Computing Systems, pp. 157-171.
[3] Tollmar K., Yeh T. and Darrell T., 2004, “IDeixis - Image-Based Deixis for Finding Location-Based Information”, In Proceedings of Conference on Human Factors in Computing Systems (CHI’04), pp. 781-782.
[4] Gillian Leetch, Eleni Mangina, 2005, “A Multi-Agent System to Stream Multimedia to Handheld Devices”, Proceedings of the Sixth International Conference on Computational Intelligence and Multimedia Applications (ICCIMA’05).
[5] Wichian Premchaiswadi, 2009, “A mobile Image search for Tourist Information System”, Proceedings of 9th International Conference on Signal Processing, Computational Geometry and Artificial Vision, pp. 62-67.
[6] Ma Chang-jie, Fang Jin-yun, 2008, “Location Based Mobile Tour Guide Services Towards Digital Dunhuang”, International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVII, Part B4, Beijing.
[7] Shih-Hung Wu, Min-Xiang Li, Ping-che Yang, Tsun Ku, 2010, “Ubiquitous Wikipedia on Handheld Device for Mobile Learning”, 6th IEEE International Conference on Wireless, Mobile, and Ubiquitous Technologies in Education, pp. 228-230.
[8] Tom Yeh, Kristen Grauman, and K. Tollmar, 2005, “A picture is worth a thousand keywords: image-based object search on a mobile platform”, In Proceedings of Conference on Human Factors in Computing Systems, pp. 2025-2028.
[9] Fan X., Xie X., Li Z., Li M. and Ma, 2005, “Photo-to-search: using multimodal queries to search web from mobile phones”, In Proceedings of 7th ACM SIGMM International Workshop on Multimedia Information Retrieval.
[10] Lim Joo Hwee, Jean Pierre Chevallet and Sihem Nouarah Merah, 2005, “SnapToTell: Ubiquitous information access from camera”, Mobile Human Computer Interaction with Mobile Devices and Services, Glasgow, Scotland.
[11] Jing Zhang, Xilin Chen, Andreas Hanneman, Jie Yang, and Alex Waibel, 2002, “A Robust Approach for Recognition of Text Embedded in Natural Scenes”, Proc. 16th International Conference on Pattern Recognition, Volume 3, pp. 204-207.
[12] Xilin Chen, Jie Yang, Jing Zhang, and Alex Waibel, January 2004, “Automatic Detection and Recognition of Signs From Natural Scenes”, IEEE Transactions on Image Processing, Vol. 13, No. 1, pp. 87-99.
[13] Anand Mishra, Karteek Alahari, C. V. Jawahar, 2012, “Top-Down and Bottom-Up Cues for Scene Text Recognition”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Zohra Saidane and Christophe Garcia, 2007, “Automatic Scene Text Recognition using a Convolutional Neural Network”, Proceedings of CBDAR, pp. 100-106.
[15] Céline Mancas-Thillou, June 2007, “Natural Scene Text Understanding”, in Segmentation and Pattern Recognition, I-Tech, Vienna, Austria, pp. 123-142.
[16] Jerod J. Weinman, Erik Learned-Miller, and Allen Hanson, September 2007, “Fast Lexicon-Based Scene Text Recognition with Sparse Belief Propagation”, Proc. Intl. Conf. on Document Analysis and Recognition, Curitiba, Brazil.
[17] Jerod J. Weinman, Erik Learned-Miller and Allen Hanson, December 2008, “A Discriminative Semi-Markov Model for Robust Scene Text Recognition”, Proc. Intl. Conf. on Pattern Recognition (ICPR), Tampa, FL, USA, pp. 1-5.
[18] Teófilo E. de Campos and Bodla Rakesh Babu, 2009, “Character Recognition in Natural Images”, Proc. International Conference on Computer Vision Theory and Applications, pp. 273-280.
[19] Onur Tekdas and Nikhil Karnad, 2009, “Recognizing Characters in Natural Scenes: A Feature Study”, CSCI 5521 Pattern Recognition, pp. 1-14.
[20] Sangame S.K., Ramteke R.J., and Rajkumar Benne, 2009, “Recognition of isolated handwritten Kannada vowels”, Advances in Computational Research, ISSN: 0975-3273, Volume 1, Issue 2, pp. 52-55.
[21] B. V. Dhandra, Mallikarjun Hangarge, and Gururaj Mukarambi, 2010, “Spatial Features for Handwritten Kannada and English Character Recognition”, IJCA Special Issue on Recent Trends in Image Processing and Pattern Recognition (RTIPPR), pp. 146-151.
[22] Mallikarjun Hangarge, Shashikala Patil, and B. V. Dhandra, 2010, “Multi-font/size Kannada Vowels and Numerals Recognition Based on Modified Invariant Moments”, IJCA Special Issue on Recent Trends in Image Processing and Pattern Recognition (RTIPPR), pp. 126-130.
[23] Masakazu Iwamura, Tomohiko Tsuji, and Koichi Kise, 2010, “Memory-Based Recognition of Camera-Captured Characters”, 9th IAPR International Workshop on Document Analysis Systems, pp. 89-96.
[24] Masakazu Iwamura, Takuya Kobayashi, and Koichi Kise, 2011, “Recognition of Multiple Characters in a Scene Image Using Arrangement of Local Features”, International Conference on Document Analysis and Recognition, pp. 1409-1413.
[25] Primekumar K.P and Sumam Mary Idicula, 2012, “Performance of on-Line Malayalam Handwritten Character Recognition using HMM and SFAM”, International Journal of Computer Engineering & Technology (IJCET), Volume 3, Issue 1, pp. 115-125, ISSN Print: 0976-6367, ISSN Online: 0976-6375.
[26] Lokesh S. Khedekar and A. S. Alvi, 2013, “Advanced Smart Credential Cum Unique Identification and Recognition System (ASCUIRS)”, International Journal of Computer Engineering & Technology (IJCET), Volume 4, Issue 1, pp. 97-104, ISSN Print: 0976-6367, ISSN Online: 0976-6375.
[27] M. M. Kodabagi, S. A. Angadi and Chetana R. Shivanagi, 2013, “Character Recognition of Kannada Text in Scene Images using Neural Network”, International Journal of Graphics and Multimedia (IJGM), Volume 4, Issue 1, pp. 9-19, ISSN Print: 0976-6448, ISSN Online: 0976-6456.
[28] M. M. Kodabagi, S. A. Angadi and Anuradha R. Pujari, 2013, “Text Region Extraction from Low Resolution Display Board Images using Wavelet Features”, International Journal of Information Technology and Management Information Systems (IJITMIS), Volume 4, Issue 1, pp. 38-49, ISSN Print: 0976-6405, ISSN Online: 0976-6413.