Detecting Toxicity in a Diverse Online Conversation Using Reinforcement Learning
dc.contributor.advisor | Oates, James T | |
dc.contributor.author | Singh, Arti | |
dc.contributor.department | Computer Science and Electrical Engineering | |
dc.contributor.program | Computer Science | |
dc.date.accessioned | 2022-02-09T15:52:30Z | |
dc.date.available | 2022-02-09T15:52:30Z | |
dc.date.issued | 2020-01-01 | |
dc.description.abstract | In today's world, there are many online platforms such as Twitter, Facebook, Reddit, and CNN where people actively participate in conversations and post comments about published articles, videos, news, and other online content. These comments may be toxic. The threat of abuse and harassment online means that many people stop expressing themselves and give up on seeking different opinions, so there is a need to protect voices in conversations. This thesis aims to implement a self-learning model using reinforcement learning methods to detect toxicity in an online conversation. We designed and implemented the model in the following phases: pre-processing of the data, formulating the problem as a reinforcement learning task, detecting toxicity, and evaluating against a baseline. Our results show that the proposed model achieves competitive F1 score and accuracy compared to the baseline models while offering computational advantages. | |
dc.format | application/pdf | |
dc.genre | theses | |
dc.identifier | doi:10.13016/m2locg-gs7y | |
dc.identifier.other | 12314 | |
dc.identifier.uri | http://hdl.handle.net/11603/24171 | |
dc.language | en | |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Computer Science and Electrical Engineering Department Collection | |
dc.relation.ispartof | UMBC Theses and Dissertations Collection | |
dc.relation.ispartof | UMBC Graduate School Collection | |
dc.relation.ispartof | UMBC Student Collection | |
dc.source | Original File Name: Singh_umbc_0434M_12314.pdf | |
dc.subject | Computational Linguistics | |
dc.subject | Deep Q-learning Network | |
dc.subject | Detecting Toxicity | |
dc.subject | Reinforcement Learning | |
dc.title | Detecting Toxicity in a Diverse Online Conversation Using Reinforcement Learning | |
dc.type | Text | |
dcterms.accessRights | Access limited to the UMBC community. Item may possibly be obtained via Interlibrary Loan through a local library, pending author/copyright holder's permission. | |
dcterms.accessRights | This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please see http://aok.lib.umbc.edu/specoll/repro.php or contact Special Collections at speccoll(at)umbc.edu |
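The abstract describes a self-learning toxicity detector trained with reinforcement learning and, per the subject keywords, a Deep Q-learning Network. Since the thesis itself is access-restricted, the sketch below is only an illustrative reading of that framing, not the author's implementation: the embedding size, network layers, epsilon value, one-step episode setup, and the +1/-1 reward scheme are all assumptions made for the example.

import torch
import torch.nn as nn
import torch.optim as optim

class QNetwork(nn.Module):
    # Maps a fixed-size comment embedding to Q-values for two actions:
    # action 0 = label the comment non-toxic, action 1 = label it toxic.
    def __init__(self, embed_dim=128, hidden=64, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def train_step(q_net, optimizer, embeddings, labels, epsilon=0.1):
    # One Q-learning update on a batch of (comment embedding, true label) pairs.
    # Each comment is treated as a one-step episode, so the Q-target is just
    # the immediate reward: +1 for a correct label, -1 for an incorrect one.
    q_values = q_net(embeddings)                       # shape (batch, 2)

    # Epsilon-greedy action selection: mostly exploit, occasionally explore.
    greedy = q_values.argmax(dim=1)
    explore = torch.randint(0, 2, greedy.shape)
    use_random = torch.rand(greedy.shape) < epsilon
    actions = torch.where(use_random, explore, greedy)

    rewards = (actions == labels).float() * 2.0 - 1.0  # +1 correct, -1 wrong

    # Regress the chosen action's Q-value toward the observed reward.
    chosen_q = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(chosen_q, rewards)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with random stand-in data; a real pipeline would embed the
# pre-processed comments (e.g. with a pretrained text encoder) instead.
q_net = QNetwork()
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
embeddings = torch.randn(32, 128)
labels = torch.randint(0, 2, (32,))
print(train_step(q_net, optimizer, embeddings, labels))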