Detecting Toxicity in a Diverse Online Conversation Using Reinforcement Learning
Links to Files
Permanent Link
Author/Creator
Author/Creator ORCID
Date
2020-01-01
Type of Work
Department
Computer Science and Electrical Engineering
Program
Computer Science
Citation of Original Publication
Rights
Access limited to the UMBC community. Item may be obtained via Interlibrary Loan through a local library, pending author/copyright holder's permission.
This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please see http://aok.lib.umbc.edu/specoll/repro.php or contact Special Collections at speccoll(at)umbc.edu
Abstract
Today's online platforms, including social media sites such as Twitter, Facebook, and Reddit as well as news outlets such as CNN, host active conversations in which people post comments on published articles, videos, news, and other content. These user comments may be toxic. The threat of online abuse and harassment leads many people to stop expressing themselves and to give up on seeking different opinions, so there is a need to protect voices in conversations. This thesis aims to implement a self-learning model using reinforcement learning methods to detect toxicity in an online conversation. We designed and implemented the model in the following phases: pre-processing the data, framing the problem as a reinforcement learning task, detecting toxicity, and evaluating against a baseline. Our results show that the proposed model achieves competitive F1 score and accuracy compared to the baseline models while offering computational advantages.
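To make the abstract's framing concrete, the following is a minimal, hypothetical sketch of how toxicity detection can be cast as a reinforcement learning problem. The state/action/reward design is an illustrative assumption, not the thesis's actual formulation: the state is a toy feature vector for a comment, the action is a toxic/non-toxic label, and the reward is +1 for a correct label and -1 otherwise, learned with tabular Q-learning over single-step episodes (effectively a contextual bandit).

import random
from collections import defaultdict

ACTIONS = (0, 1)  # 0 = non-toxic, 1 = toxic

def featurize(comment):
    # Toy state representation (assumption): presence of a few trigger words.
    triggers = ("idiot", "stupid", "hate")
    return tuple(int(word in comment.lower()) for word in triggers)

def train(dataset, episodes=1000, alpha=0.1, epsilon=0.1):
    """Tabular Q-learning over single-step episodes."""
    q = defaultdict(float)  # (state, action) -> estimated reward
    for _ in range(episodes):
        comment, label = random.choice(dataset)
        state = featurize(comment)
        # Epsilon-greedy exploration over the two label actions.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        reward = 1.0 if action == label else -1.0
        # One-step episode: there is no next state, so the target is the reward.
        q[(state, action)] += alpha * (reward - q[(state, action)])
    return q

def predict(q, comment):
    state = featurize(comment)
    return max(ACTIONS, key=lambda a: q[(state, a)])

if __name__ == "__main__":
    data = [("you are an idiot", 1), ("great article, thanks", 0),
            ("i hate this stupid take", 1), ("interesting point", 0)]
    q = train(data)
    print(predict(q, "what an idiot"))  # expected: 1 (toxic)

A real system in the spirit of the thesis would replace the hand-picked trigger words with learned text representations and a function approximator for the value estimates; the sketch only shows the state-action-reward loop that a reinforcement learning formulation of toxicity detection implies.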