Text Summarization for Urdu: Part 2

In the last article, I used the extractive method to summarize text. In this article, I'm going to show you how to generate an abstractive summary using a transformer-based deep learning model.


Using a Pre-trained Model

There is a model on the Hugging Face Hub that performs better for Urdu than the other models available there.

Here is how you can use it to generate an abstractive summary.
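If you haven't installed the required packages yet, the model card suggests transformers and sentencepiece (I'm assuming a PyTorch backend here):

```shell
pip install transformers sentencepiece torch
```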

import re
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Strip the text, replace newlines with spaces, then collapse whitespace runs.
WHITESPACE_HANDLER = lambda k: re.sub(r'\s+', ' ', re.sub(r'\n+', ' ', k.strip()))

article_text = """چھٹیوں کے دن ختم ہونے میں آخری دس دن باقی تھے اور میں ہمیشہ کی طرح کتابوں کا پہاڑ سامنے رکھ کر بیٹھ گئی تھی اور ہر بار کی طرح اس بار بھی میں نے چھٹیوں کے آغاز میں سوچا تھا کہ سارا کام پہلے ہی کر لوں گی،مگر ہر بار کی طرح اس بار بھی اس پر عمل نہیں کر سکی اور پھر وہی ہوا کہ آخری کے دس دنوں میں کتابوں کا پہاڑ سامنے رکھ کر بیٹھ گئی اور اپنی قسمت کو کوستی اور خود کو تسلی دیتی کہ میری کوئی غلطی نہیں ہے کام ہی بہت زیادہ ہے۔
دو مہینے کی چھٹیوں کے ان آخری دس دنوں میں کام کم کرتی اور یہ فقرہ زیادہ دہراتی تھی کہ ننھی سی جان اور اتنا سارا کام۔جیسے تیسے آدھا اور ادھورا کام کرکے سکول گئی سکول میں پورا کام نہ کرنے پر ڈانٹ پڑی۔میں نے گھر جا کر یہ فیصلہ کیا کہ آئندہ سے ہر سال چھٹیوں کے شروع میں ہی سارا کام کر لوں گی۔
اس بار بھی میں کچھ ایسا ہی فیصلہ کرنے لگی تو مجھے یاد آ گیا کہ میں اگلے سال سے اوپن یونیورسٹی میں داخلہ لوں گی،جس میں حاضری کی پابندی نہیں ہوتی،یعنی چھٹیاں ہی چھٹیاں۔"""

model_name = "csebuetnlp/mT5_multilingual_XLSum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Tokenize the cleaned article, truncating to the model's 512-token input limit.
input_ids = tokenizer(
    [WHITESPACE_HANDLER(article_text)],
    return_tensors="pt",
    padding="max_length",
    truncation=True,
    max_length=512
)["input_ids"]

# Generate the summary with beam search, blocking repeated bigrams.
output_ids = model.generate(
    input_ids=input_ids,
    max_length=84,
    no_repeat_ngram_size=2,
    num_beams=4
)[0]

summary = tokenizer.decode(
    output_ids,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)

print(summary)

Here is the output:

چھٹیوں کے دن ختم ہونے میں آخری دس دن باقی تھے اور یہ فقرہ زیادہ دہراتی تھی کہ ننھی سی جان، اتنا سارا کام کر لوں گی۔
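The WHITESPACE_HANDLER used above is a small but important preprocessing step: it strips the text, replaces newlines with spaces, and collapses repeated whitespace into single spaces before tokenization. A quick self-contained check of its behavior:

```python
import re

# Strip, replace newlines with spaces, then collapse whitespace runs.
WHITESPACE_HANDLER = lambda k: re.sub(r'\s+', ' ', re.sub(r'\n+', ' ', k.strip()))

# The two lines end up joined by single spaces.
print(WHITESPACE_HANDLER("سطر اول \n\n سطر دوم"))
```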


And that's it. You now have a very good abstractive summarization model for the Urdu language. This model supports summarization for many other languages as well.
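One caveat: the tokenizer truncates input at 512 tokens, so very long articles are only partially summarized. A rough workaround (my own sketch, not part of the model card) is to split the article on the Urdu full stop ۔ into chunks that fit the limit and summarize each chunk separately:

```python
def chunk_text(text, max_chars=1000):
    """Split Urdu text into chunks of at most max_chars characters,
    breaking on the Urdu full stop (۔) so sentences stay intact."""
    sentences = [s.strip() + '۔' for s in text.split('۔') if s.strip()]
    chunks, current = [], ''
    for s in sentences:
        # Start a new chunk when adding this sentence would exceed the limit.
        if current and len(current) + len(s) > max_chars:
            chunks.append(current)
            current = s
        else:
            current = (current + ' ' + s) if current else s
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be passed through the same tokenize, generate, and decode steps shown above, and the partial summaries concatenated. The character budget here is a crude stand-in for the real token count; tune it for your data.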

Here are the links to the model and the GitHub repository:

https://github.com/csebuetnlp/xl-sum
https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum

