In a time when political divisions run deep and climate change remains a fiercely debated topic, how we communicate information matters more than ever. A recent study led by MediaFutures PhD candidate Jeng Jia-Hua together with Research Assistant Gloria Anne Babile Kasangu, Associate Professor Alain D. Starke, Associate Professor Erik Knudsen and Professor Christoph Trattner sheds light on a powerful combination: emotional reframing and artificial intelligence. Their research shows that when news summaries are crafted by Large Language Models (LLMs) like GPT, and framed with both fear and hope, they can encourage more pro-social engagement with news and promote pro-environmental behaviour.
The core of the problem lies in how people consume news. Many of us instinctively gravitate toward stories that confirm what we already believe. This phenomenon, known as selective exposure, narrows our understanding of complex issues and fuels societal polarisation. While traditional media struggles to overcome these deeply rooted habits, emerging tools from the field of AI might offer a new way forward, if they are used wisely.
The researchers set out to understand whether the emotional tone of news summaries could influence how people respond to environmental information. In their experiment, 300 U.S. participants read articles about environmental topics, originally published by The Washington Post. These articles were presented in three different emotional framings: fear-only, neutral, and a blend of fear and hope. Some summaries were generated by GPT models, while others used standard article previews.

The findings were clear: participants who read fear-hope framed summaries—especially those generated by GPT—were significantly more likely to report intentions to engage in pro-environmental behaviour. The emotional balance seemed to matter. Fear alone captured attention but often left readers feeling helpless. Hope, when layered in, created a sense of possibility and personal agency. Together, they formed a message that was not only urgent but also actionable.
This emotional reframing didn’t necessarily change which articles participants chose to read. Those decisions were still shaped mostly by personal interests and pre-existing preferences. But once participants engaged, the content itself had a measurable impact on how they felt and what they intended to do.

What makes these results especially promising is the role played by AI. GPT-generated summaries were not just more efficient. They were more effective. They conveyed nuance, carried emotional tone, and resonated more deeply with readers. When paired with emotional framing, they proved more successful at encouraging reflection and action. This suggests that LLMs, often viewed with scepticism in journalistic circles, could actually be powerful tools for fostering empathy and bridging divides.
Of course, AI is not a magic fix. It can’t eliminate bias or dismantle echo chambers on its own. But this study offers a glimpse of what’s possible when we rethink how information is presented. By leveraging emotionally intelligent AI-generated content, media platforms could help readers move beyond passive consumption toward engagement that is thoughtful, informed, and socially constructive.
In the face of global challenges that demand cooperation and shared understanding, the combination of emotional reframing and AI holds real promise. Not just for improving how we talk about the environment, but for building a media landscape that encourages connection over division, and action over apathy.
RELATED PUBLICATIONS
Jeng, Jia Hua; Kasangu, Gloria Anne Babile; Starke, Alain D.; Seddik, Khadiga; Trattner, Christoph. “The role of GPT as an adaptive technology in climate change journalism.” Conference paper, UMAP 2025.
Jeng, Jia Hua; Kasangu, Gloria Anne Babile; Starke, Alain D.; Knudsen, Erik; Trattner, Christoph. “Negativity Sells? Using an LLM to Affectively Reframe News Articles in a Recommender System.” Workshop paper, 2024.