Wang, Xiaomeng and Sharma, Dharmendra P. and Kumar, Dinesh (2024) Cognitive Reframing via Large Language Models for Enhanced Linguistic Attributes. [Conference Proceedings]
Full text not available from this repository. (Request a copy)

Abstract
Cognitive reframing aims to reshape negative thoughts into more positive perspectives to enhance mental well-being. While previous research has highlighted the efficacy of Large Language Models (LLMs) for cognitive reframing, there has been limited focus on improving reframing quality across multiple linguistic attributes of the final output. We build ReframeGPT, which fills this gap by employing LLMs to generate and then iteratively refine reframed thoughts. In GPT-4-based evaluation, the reframed thoughts produced by our approach score higher on helpfulness, empathy, and rationality.
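The generate-then-refine loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `call_llm` is a hypothetical stand-in for any chat-completion client, and the prompts, the three evaluation attributes (helpfulness, empathy, rationality), and the fixed round count are assumptions for the sketch.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (e.g. a chat-completion endpoint)."""
    return "Setbacks are part of learning; I can try a different approach next time."

def reframe(thought: str, max_rounds: int = 3) -> str:
    """Generate an initial reframe, then iteratively refine it via LLM feedback."""
    # Step 1: generate an initial positive reframe of the negative thought.
    draft = call_llm(f"Reframe this negative thought more positively: {thought}")
    # Step 2: critique-and-revise loop targeting the desired linguistic attributes.
    for _ in range(max_rounds):
        critique = call_llm(
            "Assess this reframe for helpfulness, empathy, and rationality, "
            f"and suggest one concrete improvement: {draft}"
        )
        draft = call_llm(
            f"Revise the reframe using this feedback.\nFeedback: {critique}\nReframe: {draft}"
        )
    return draft

print(reframe("I failed the exam, so I must be useless."))
```

In a real system, the loop would typically stop early once the critique step reports no further improvement, rather than always running a fixed number of rounds.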
Item Type: | Conference Proceedings |
---|---|
Subjects: | Q Science > QA Mathematics > QA75 Electronic computers. Computer science; Q Science > QA Mathematics > QA76 Computer software |
Divisions: | School of Information Technology, Engineering, Mathematics and Physics (STEMP) |
Depositing User: | Dinesh Kumar |
Date Deposited: | 20 Jul 2025 21:54 |
Last Modified: | 20 Jul 2025 21:54 |
URI: | https://repository.usp.ac.fj/id/eprint/15031 |