Telling ChatGPT That You're Distressed Or Scared Makes It Perform Better
The latest breakthrough in AI development ironically appears remarkably human
Who would've thought that AI would eventually become 'more human'?
Large Language Models (LLMs) such as ChatGPT, Flan-T5-Large, Vicuna, Llama 2, BLOOM, and GPT-4 have been found to be capable of understanding emotional stimuli.
New research shows that AI models can perform better when users express emotions, such as urgency or stress.
In their research paper, Li et al. outline the methodology of the study, whose primary objective was to determine whether LLMs can comprehend emotional stimuli.
The methodology spanned a variety of deterministic and generative tasks, encompassing a wide range of evaluation scenarios.
The automated experiments conducted in the study demonstrated that LLMs exhibit a certain level of emotional intelligence, and their effectiveness can be enhanced by incorporating emotional cues.
Dubbed 'EmotionPrompt' by the researchers, this approach exhibits a significant improvement in response quality.
EmotionPrompt combines the original prompt with emotional stimuli, leading to an approximate 8% relative performance improvement in instruction induction, and a remarkable 115% improvement in BIG-Bench tests.
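The mechanics are simple: an emotional stimulus sentence is appended to the task prompt before it is sent to the model. A minimal sketch of that idea, assuming a hypothetical `emotion_prompt` helper and paraphrased example stimuli (the paper tested several phrasings):

```python
# Illustrative stimuli in the spirit of EmotionPrompt; the exact wordings
# used in the paper may differ.
EMOTIONAL_STIMULI = [
    "This is very important to my career.",
    "You'd better be sure.",
]

def emotion_prompt(original_prompt: str, stimulus_index: int = 0) -> str:
    """Combine the original task prompt with an emotional stimulus."""
    return f"{original_prompt} {EMOTIONAL_STIMULI[stimulus_index]}"

# The augmented prompt is then sent to the LLM in place of the original.
print(emotion_prompt("Determine whether this movie review is positive or negative."))
```

No model-side changes are needed; the technique operates purely at the prompt level, which is why it can be applied to any of the LLMs listed above.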
The researchers also conducted a human study involving 106 participants to assess the quality of generative tasks. The study utilised both standard and emotional prompts for the evaluation process.
The results show that using EmotionPrompt led to a substantial 10.9% average improvement in performance, accuracy, and responsibility metrics for generative tasks.
AI seems to have evolved far beyond our comprehension
However, researchers like Li are at the forefront of discovering new nuances within ever-growing AI models such as GPT-4 and ChatGPT. These discoveries will surely help AI users looking to further optimise the models' responses.