Aspect-based sentiment analysis using deep learning for Arabic language
Student name: Tasnim Ahmed AlZahrani
Supervised by Dr. Miada Almasre and Dr. Tagreed Bagies
Nowadays, many users around the world express their feelings and opinions on the web. The volume of this user-generated content is growing rapidly, resulting in massive amounts of unstructured data. These publicly shared opinions are a significant source of information for any organization because they carry great business value. Sentiment analysis aims to analyze such unstructured data to extract information about users' opinions, and it has become a growing field of research in recent years. Since an opinion posted on the web often expresses different sentiment polarities toward several aspects of the same text, aspect-level sentiment analysis has emerged as a finer-grained variant of the task and is attracting considerable research interest. In this study, we will build a Transformer-based deep learning model, since the Transformer architecture has achieved state-of-the-art performance. The proposed model will be capable of predicting the sentiment polarity of each aspect in a given Arabic text. Finally, we will evaluate our model against previous work in sentiment analysis.
The aim of this research is to improve aspect-based sentiment analysis for Arabic text. The following objectives will be fulfilled in order to achieve this aim:
- Collect and annotate data to create a dataset for Arabic aspect-based sentiment analysis using actual social media reviews.
- Use a Transformer to build an NLP deep learning model that can predict the sentiment polarity of each aspect in a given sentence for Arabic reviews.
- Evaluate and analyze the results using standard metrics, then compare them against similar models (a minimal evaluation sketch follows this list).
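As a concrete illustration of the evaluation step, the minimal sketch below computes accuracy and macro-F1 with scikit-learn. The library choice and the toy label lists are assumptions for demonstration only, not results from this project.

```python
# Hedged sketch of the evaluation step, assuming scikit-learn is available and
# that gold labels and model predictions exist as parallel lists; the lists
# below are placeholder toy data, not project results.
from sklearn.metrics import accuracy_score, f1_score, classification_report

y_true = ["positive", "negative", "neutral", "positive"]   # gold aspect polarities (toy example)
y_pred = ["positive", "negative", "positive", "positive"]  # model predictions (toy example)

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Macro-F1 :", f1_score(y_true, y_pred, average="macro"))
print(classification_report(y_true, y_pred, digits=3))
```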
In this study, we aim to determine the sentiment polarity associated with a particular aspect of a sentence. To address this classification problem, we propose an NLP model that applies a Transformer with BERT contextual word embeddings and classifies the sentiment of an aspect as positive, negative, or neutral. We will utilize an Arabic-specific BERT language model based on the original implementation of BERT. The figure illustrates the process overview of the proposed model. The BERT word-embedding layer processes the input text to generate token-level representations conditioned on the whole sentence. We then fine-tune the BERT model by adding an output layer on top of it that classifies the aspect's sentiment polarity.
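The sketch below shows one way such a setup could be wired together with the Hugging Face Transformers library, encoding the review and the target aspect as a sentence pair and mapping the classifier output to one of three polarity labels. The checkpoint name (aubmindlab/bert-base-arabertv02), the sentence-pair input scheme, and the helper function are illustrative assumptions, not the project's final implementation.

```python
# Minimal sketch of an aspect-sentiment classifier, assuming the Hugging Face
# Transformers library and an Arabic BERT checkpoint; the checkpoint name and
# the (review, aspect) sentence-pair format are assumptions for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "aubmindlab/bert-base-arabertv02"  # assumed Arabic BERT checkpoint
LABELS = ["negative", "neutral", "positive"]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# BERT encoder with a 3-way classification head on top; the head is randomly
# initialized here and would be fine-tuned on the annotated aspect dataset.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

def predict_aspect_polarity(review: str, aspect: str) -> str:
    """Encode the review and the target aspect as a sentence pair and
    return the predicted polarity label."""
    inputs = tokenizer(review, aspect, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

# Example usage (Arabic review, aspect = "الخدمة" / "service"):
# print(predict_aspect_polarity("الطعام لذيذ لكن الخدمة بطيئة", "الخدمة"))
```

In this scheme, fine-tuning would update both the encoder and the classification head on the annotated Arabic aspect-sentiment data described in the objectives before the model is used for prediction.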
