Please use this identifier to cite or link to this item: http://hdl.handle.net/20.500.11861/9007
Title: Recognition of printed Urdu script in Nastaleeq font by using CNN-BiGRU-GRU Based Encoder-Decoder framework
Authors: Zia, Sohail 
Azhar, Muhammad 
Lee, Bumshik 
Tahir, Adnan 
Ferzund, Javed 
Murtaza, Fazia 
Ali, Moazam 
Issue Date: 2023
Source: Intelligent Systems with Applications, 2023, vol.18, article no. 200194.
Journal: Intelligent Systems with Applications 
Abstract: RNN-based deep learning models have shown tremendous success on sequential and temporal data, where order is critical for achieving higher accuracy in context understanding. Members of the RNN family such as LSTM, BLSTM, GRU and BiGRU are the models most commonly used for these kinds of sequential tasks, and RNN-based encoder-decoder frameworks are widely used for the recognition of the scripts of various languages. In Urdu, however, very little research has been done, especially with deep learning models. Existing work on printed Urdu recognition has shown that current models only work for very basic Urdu sentences; for complex words and sentences, these algorithms fail in terms of both accuracy and time complexity when identifying Nastaleeq-font writing. To identify printed Urdu text in images, we propose an encoder-decoder based hybrid deep learning approach with a Convolutional Neural Network (CNN) for feature extraction, a bi-directional Gated Recurrent Unit network (BiGRU) as the encoder and a Gated Recurrent Unit network (GRU) as the decoder. The CNN layers of the algorithm extract ligature features of Urdu, which are subsequently used by the encoder (BiGRU) and decoder (GRU) to recognize sentences by accurately distinguishing characters and joiners. Experimental results show that our proposed CNN-BiGRU-GRU hybrid technique with specific hyper-parameter tuning performs well compared to other state-of-the-art algorithms in terms of training epochs (70 epochs compared to 100 for the BLSTM-LSTM based encoder-decoder), a 6% increase in Character Recognition Accuracy (86.95% compared to 81.08% for BLSTM-LSTM), a 10% increase in Word Recognition Accuracy (WRA) (89.48% compared to 79.06% for BLSTM-LSTM) and lower time complexity (18 seconds less than BLSTM-LSTM with the same system configuration).
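Note: The abstract describes a pipeline in which a CNN extracts features from a printed text-line image, a BiGRU encodes the resulting feature sequence, and a GRU decodes it into characters. The following is a minimal, illustrative PyTorch sketch of that kind of architecture; the layer sizes, the 32-pixel line height, the hidden width of 256 and the teacher-forced decoder are assumptions for illustration only, not the authors' published configuration.

    # Illustrative sketch only; shapes, widths and vocab size are assumed, not taken from the paper.
    import torch
    import torch.nn as nn

    class CNNBiGRUGRU(nn.Module):
        def __init__(self, vocab_size, hidden=256):
            super().__init__()
            # CNN: extracts column-wise (ligature-level) features from a grayscale text-line image
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            # BiGRU encoder over the width axis of the CNN feature map
            self.encoder = nn.GRU(128 * 8, hidden, bidirectional=True, batch_first=True)
            # GRU decoder that emits one character token per step
            self.embed = nn.Embedding(vocab_size, hidden)
            self.decoder = nn.GRU(hidden, 2 * hidden, batch_first=True)
            self.out = nn.Linear(2 * hidden, vocab_size)

        def forward(self, images, target_tokens):
            # images: (B, 1, 32, W); two 2x poolings reduce the height to 8
            f = self.cnn(images)                      # (B, 128, 8, W/4)
            f = f.permute(0, 3, 1, 2).flatten(2)      # (B, W/4, 1024) sequence of column features
            enc_out, enc_h = self.encoder(f)          # enc_h: (2, B, hidden), forward and backward states
            # Concatenate the forward/backward final states to initialise the decoder
            h0 = torch.cat([enc_h[0], enc_h[1]], dim=-1).unsqueeze(0)   # (1, B, 2*hidden)
            dec_in = self.embed(target_tokens)        # teacher forcing with ground-truth characters
            dec_out, _ = self.decoder(dec_in, h0)
            return self.out(dec_out)                  # (B, T, vocab_size) character logits

Such a model would typically be trained with a cross-entropy loss over the per-step character logits against the ground-truth transcription; the accuracy and timing figures quoted in the abstract refer to the authors' own experiments, not to this sketch.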
Type: Peer Reviewed Journal Article
URI: http://hdl.handle.net/20.500.11861/9007
ISSN: 2667-3053
DOI: https://doi.org/10.1016/j.iswa.2023.200194
Appears in Collections: Publication