Efficient transformer-based abstractive Urdu text summarization through selective attention pruning
Date Issued
2025
Publisher
MDPI AG
Journal
Information
ISSN
2078-2489
Citation
Information, 2025, vol. 16(11), article no. 991.
Description
Open access
Type
Peer Reviewed Journal Article
Abstract
In today’s data-driven world, automatic text summarization is essential for extracting insights from large data volumes. While extractive summarization is well-studied, abstractive summarization remains limited, especially for low-resource languages like Urdu. This study introduces process innovation through transformer-based models—Efficient-BART (EBART), Efficient-T5 (ET5), and Efficient-GPT-2 (EGPT-2)—optimized for Urdu abstractive summarization. Innovations include strategically removing inefficient attention heads to reduce computational complexity and improve accuracy. Theoretically, this pruning preserves structural integrity by retaining heads that capture diverse linguistic features, while eliminating redundant ones. Adapted from BART, T5, and GPT-2, these optimized models significantly outperform their originals in ROUGE evaluations, demonstrating the effectiveness of process innovation and optimization for Urdu natural language processing.
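The abstract describes removing inefficient attention heads but this record does not specify how heads are scored or which heads were pruned. The sketch below is a minimal illustration of the general mechanism, not the paper's method: it assumes mean attention entropy as a hypothetical "inefficiency" proxy and runs on the stock English GPT-2 checkpoint rather than the Urdu-adapted EGPT-2.

```python
# Illustrative sketch of selective attention-head pruning with Hugging Face
# Transformers. The head-selection criterion (attention entropy) is an
# assumption for demonstration, not the scoring used for EBART/ET5/EGPT-2.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_attentions=True)
model.eval()

text = "Automatic summarization condenses long documents into short texts."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Tuple of per-layer tensors shaped [batch, heads, seq, seq].
    attentions = model(**inputs).attentions

# Score each head by the entropy of its attention distribution; heads with
# near-uniform (high-entropy) attention are treated as redundant candidates.
scores = {}
for layer, attn in enumerate(attentions):
    probs = attn.clamp_min(1e-9)
    entropy = -(probs * probs.log()).sum(-1).mean(dim=(0, 2))  # [heads]
    scores[layer] = entropy

# Assumed policy: prune the single highest-entropy head in each layer.
heads_to_prune = {layer: [int(ent.argmax())] for layer, ent in scores.items()}
model.prune_heads(heads_to_prune)  # physically removes the head parameters
```

`prune_heads` deletes the corresponding rows and columns of the attention projection matrices, so the pruned model has fewer parameters and less per-layer compute, consistent with the reduced computational complexity the abstract claims; the paper's actual selection strategy and thresholds are not given in this record.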