
Text Summarization for Urdu: Part 1

Text summarization is an important task: it condenses a large document down to its main ideas. There are two main techniques used in NLP for text summarization.

Extractive Text Summarization: This approach's name is self-explanatory. The most important sentences or phrases are extracted from the original text, and a short summary is built from those sentences.
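
To make the idea concrete, here is a tiny, purely illustrative sketch of extractive scoring: rank sentences by the frequency of the words they contain and keep the top ones. This toy function and its Urdu full-stop splitting are my own assumptions for illustration; it is not the algorithm used later in this post (summa uses the graph-based TextRank algorithm).
from collections import Counter

def naive_extractive_summary(text, top_n=1):
    # Split on the Urdu full stop "۔" (a simplifying assumption).
    sentences = [s.strip() for s in text.split("۔") if s.strip()]
    # Score each sentence by how frequent its words are in the whole text.
    word_freq = Counter(text.split())
    ranked = sorted(sentences,
                    key=lambda s: sum(word_freq[w] for w in s.split()),
                    reverse=True)
    # Keep the top_n highest-scoring sentences as the "summary".
    return "۔ ".join(ranked[:top_n]) + "۔"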


Abstractive Text Summarization: This approach uses more advanced deep learning techniques to generate new sentences by learning from the original text. It is a complex task and requires heavy computing power, such as a GPU.
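
For completeness, here is a minimal sketch of the abstractive approach using the Hugging Face transformers pipeline. The model name is only an assumption (a multilingual mT5 checkpoint fine-tuned on XL-Sum, which covers Urdu); swap in whatever summarization model you prefer. This is separate from the summa-based example below.
from transformers import pipeline

# The checkpoint below is an assumed example of a multilingual summarizer
# that includes Urdu; replace it with any model of your choice.
summarizer = pipeline("summarization", model="csebuetnlp/mT5_multilingual_XLSum")

urdu_text = "..."  # put your Urdu document here
result = summarizer(urdu_text, max_length=48, min_length=10, do_sample=False)
print(result[0]["summary_text"])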

Let's dive into the code for generating the text summary. I'm passing Arabic as the language parameter because its contributor did an excellent job of handling things like stemming, Urdu character support, and more.
from summa.summarizer import summarize

text = """ اسلام آباد: صدر مملکت ڈاکٹر عارف علوی بھی کورونا وائرس کا شکار ہوگئے۔
سماجی رابطے کی ویب سائٹ ٹویٹر پر ڈاکٹر عارف علوی نے لکھا کہ میرا کورونا ٹیسٹ مثبت آگیا ہے،
اللہ سب کورونا متاثرین پر رحم فرمائے، ویکسین کی پہلی خوراک لی تھی جب کہ دوسری ڈوز ایک ہفتے
بعد لگنی تھی جس کے بعد اینٹی باڈیز بننا شروع ہوتی ہیں، برائے مہربانی محتاط رہیں۔"""

# When `words` is given, summa uses it as the length cap instead of `ratio`.
summary = summarize(text, ratio=0.2, language="arabic", words=15)
print(summary)
And here is the output, which roughly says that Dr. Arif Alvi announced his positive corona test on Twitter:
سماجی رابطے کی ویب سائٹ ٹویٹر پر ڈاکٹر عارف علوی نے لکھا کہ میرا کورونا ٹیسٹ مثبت آگیا ہے،
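
One note on the parameters, based on my reading of summa's behaviour: TextRank needs enough sentences to build its graph, so on very short inputs, or with a tight words cap, summarize can return an empty string. Loosening the constraints usually helps; split=True returns the selected sentences as a list instead of one string.
from summa.summarizer import summarize

# Drop the `words` cap and raise `ratio` so more sentences can be selected.
summary_sentences = summarize(text, ratio=0.5, language="arabic", split=True)
for sentence in summary_sentences:
    print(sentence)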
Isn't it easy? Let me know if you have any questions.


Comments

  1. Irfan! Sent you an email regarding the development of an Urdu sentiment analysis library.

  2. Hi Irfan, can you help with Urdu text summarization using spaCy?

  3. spaCy does not provide summarization.

  4. Which algorithm or technique are you using?

  5. This only works with the provided text. If you change the text, it shows nothing.

  6. Hey, it's only working for the given text.


