Connecting Language and Emotion in Large Language Models for Human-AI Collaboration
Author/Creator
Author/Creator ORCID
Date
Type of Work
Department
Computer Science and Electrical Engineering
Program
Computer Science
Citation of Original Publication
Rights
This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please see http://aok.lib.umbc.edu/specoll/repro.php or contact Special Collections at speccoll(at)umbc.edu
Distribution Rights granted to UMBC by the author.
Subjects
Abstract
Large language models (LLMs) demonstrate linguistic abilities on par with humans, generating short texts, stories, instructions, and even code that is often indistinguishable from human writing. This allows humans to use LLMs collaboratively, as communication aides or writing assistants. However, humans cannot always assume an LLM will behave the same way another person would. This is particularly evident in subjective scenarios, such as those where emotion is involved. In this work, I explore to what depth LLMs perceive and understand human emotions, and examine ways of describing an emotion to an LLM for collaborative work. First, I study the problem of classifying emotions and show that LLMs perform well on their own and can also improve smaller models at the same task. Second, I focus on generating emotions, using the problem space of keyword-constrained generation and a human-participant study to see where human expectations and LLM outputs diverge and how any such misalignment can be minimized. Here, I find that describing emotions with English words or with lexical expressions of the Valence-Arousal-Dominance (VAD) scales leads to good alignment and generation quality, while numeric VAD dimensions or emojis fare worse.
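As a rough illustration of the classification setting described above, the sketch below prompts a chat LLM to assign one label from a fixed emotion set. It is a minimal sketch, not the thesis's method: the model name, the label set, the prompt wording, and the use of an OpenAI-compatible chat endpoint are all assumptions.

```python
# Illustrative sketch only: the thesis does not specify the model, API,
# or label set used; everything below is a placeholder.
from openai import OpenAI

# Hypothetical emotion label set (e.g., Ekman's six basic emotions).
EMOTIONS = ["joy", "sadness", "anger", "fear", "surprise", "disgust"]

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def classify_emotion(text: str) -> str:
    """Ask the LLM to pick the single best-fitting emotion label."""
    prompt = (
        "Classify the emotion expressed in the text below.\n"
        f"Answer with exactly one word from: {', '.join(EMOTIONS)}.\n\n"
        f"Text: {text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic labeling
    )
    return response.choices[0].message.content.strip().lower()


print(classify_emotion("I can't believe they cancelled the show again."))
```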

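The generation study compares formats for describing a target emotion to the model. The hypothetical prompt builders below contrast a lexical VAD description with raw numeric VAD coordinates, the formats the abstract reports as aligning well and poorly with human expectations; the exact prompts and scales used in the study are assumptions for illustration.

```python
# Hypothetical prompt builders contrasting two of the emotion-description
# formats compared in the abstract; the study's actual wording is not
# reproduced here.

def lexical_vad_prompt(keywords: list[str]) -> str:
    # Lexical VAD: describe each dimension with words rather than numbers.
    return (
        f"Write one sentence using the keywords {', '.join(keywords)}. "
        "The sentence should express an emotion that is unpleasant "
        "(low valence), highly activated (high arousal), and powerless "
        "(low dominance)."
    )


def numeric_vad_prompt(keywords: list[str]) -> str:
    # Numeric VAD: give raw coordinates (assumed 1-9 scale); the abstract
    # reports this format aligns less well with human expectations.
    return (
        f"Write one sentence using the keywords {', '.join(keywords)}. "
        "Target emotion (VAD, 1-9 scale): valence=2.1, arousal=7.8, "
        "dominance=2.5."
    )


print(lexical_vad_prompt(["deadline", "email"]))
print(numeric_vad_prompt(["deadline", "email"]))
```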