Exploiting stacked embeddings with LSTM for multilingual humor and irony detection

Abstract

Humor and irony are forms of communication that evoke laughter or convey hidden sarcasm. The opportunity to express opinions in diverse, humorous ways on social media has increased their popularity. Because humor is subjective, it may vary with gender, profession, generation, and social class. Detecting and analyzing the humorous and ironic content of informal user-generated text is crucial for various NLP and opinion-mining tasks due to its perplexing characteristics. However, the idiosyncratic nature of informal texts makes it challenging to generate an effective representation that properly captures the underlying context. In this paper, we propose a neural network architecture that couples a stacked-embeddings strategy with an LSTM layer to represent textual context effectively and determine the humorous and ironic orientation of texts efficiently. We stack various fine-tuned word embeddings and transformer models, including GloVe, ELMo, BERT, and Flair's contextual embeddings, to extract diversified contextual features of texts. An LSTM network on top of the stacked embeddings then generates a unified document vector (UDV). Finally, the UDV is passed through multiple feed-forward linear layers to obtain the final prediction labels. We present a performance analysis of our approach on benchmark datasets in English and Spanish. Experimental results show that our model outperforms most state-of-the-art methods.
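The architecture described above can be sketched in PyTorch. This is a minimal illustration, not the authors' implementation: the embedding dimensions, hidden size, and classifier widths are assumed values, and pre-computed embedding tensors stand in for the actual GloVe, ELMo, BERT, and Flair embedders.

```python
import torch
import torch.nn as nn

class StackedEmbeddingLSTM(nn.Module):
    """Sketch of the stacked-embeddings + LSTM pipeline.

    Embedding dimensions below are illustrative assumptions standing in
    for GloVe / ELMo / BERT / Flair contextual embeddings.
    """

    def __init__(self, emb_dims=(100, 1024, 768, 2048), hidden=256, num_classes=2):
        super().__init__()
        # LSTM consumes the concatenation of all per-token embeddings
        self.lstm = nn.LSTM(sum(emb_dims), hidden, batch_first=True)
        # Multiple feed-forward linear layers produce the prediction labels
        self.classifier = nn.Sequential(
            nn.Linear(hidden, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, embs):
        # embs: list of (batch, seq_len, dim_i) tensors, one per embedder
        stacked = torch.cat(embs, dim=-1)   # stack along the feature axis
        _, (h_n, _) = self.lstm(stacked)    # final hidden state acts as the UDV
        udv = h_n[-1]                       # (batch, hidden) unified document vector
        return self.classifier(udv)         # (batch, num_classes) logits
```

In this sketch the final LSTM hidden state serves as the unified document vector (UDV), which the feed-forward head maps to humor/irony labels; in practice each embedder would be fine-tuned and applied to the raw tokens before stacking.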

Type
Publication
Social Network Analysis and Mining
Radiathun Tasnia
Research Assistant (Full Time)
Nabila Ayman
Research Assistant
Afrin Sultana
Research Assistant (Full Time)
Abu Nowshed Chy
Assistant Professor