The Future of Media Technology and AI

Technological innovation has transformed the media industry significantly over the past two decades, creating both opportunities and challenges. At MediaFutures, we track these dynamics by investing heavily in research on responsible media technology.

We focus on identifying research gaps, improving existing technologies, and ensuring responsible editorial practices. Through our dedicated efforts, we aspire to make a significant impact on society and shape a media industry that is resilient and adaptive to the needs of the future.

Here we write about our thoughts on the future of media and AI.

Our Vision

At MediaFutures, responsible media technology represents our commitment to maximizing benefits for both news organizations and society while minimizing potential harms. Acknowledging the pivotal role of artificial intelligence and machine learning in shaping the media’s future, our primary focus is on understanding their influence on the industry and society. MediaFutures’ mission is to spearhead responsible media technology that not only responds to current challenges but also shapes the future of the media landscape.

MediaFutures Overall Vision Paper

Responsible media technology and AI: challenges and research directions

AI in recommendation

At MediaFutures, our mission is to revolutionize media recommendation systems by addressing the pressing issues arising from AI-driven algorithms. These systems, while powerful, have inadvertently produced negative effects such as filter bubbles, echo chambers, popularity bias, discrimination, unfairness, and the spread of misinformation across media platforms.

In Work Package 2, we are dedicated to the development of innovative solutions to counteract these negative impacts. Our primary objective is twofold: first, to identify and mitigate the biases inherent in recommendation technologies, and second, to pioneer a new era of responsible media technology utilizing cutting-edge AI advancements. Our approach extends beyond mere algorithmic adjustments. We intervene strategically in the recommendation feedback loop, targeting three core components—user, data, and model. By intervening at different stages, we aim to diversify recommendations and minimize undesired biases. To achieve this, we adopt various methods.

These include re-ranking recommendation outputs to amplify diversity and enhancing system transparency to foster user trust. These strategies form the backbone of our pursuit to create recommendation systems that not only deliver content but also prioritize fairness, diversity, and accuracy. Our vision is forward-thinking. We aspire not only to rectify the negative impacts prevalent in current recommendation systems but also to set a precedent for a media landscape that is more inclusive, equitable, and reliable for all users. 
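To make the re-ranking idea concrete, here is a minimal sketch of one common diversification technique, greedy Maximal Marginal Relevance (MMR) re-ranking. The function name, scores, and similarity values below are illustrative assumptions, not part of MediaFutures’ deployed systems:

```python
def mmr_rerank(relevance, sim, k, lam=0.7):
    """Greedily re-rank items by Maximal Marginal Relevance (MMR).

    relevance: relevance scores from the base recommender
    sim: pairwise item-similarity matrix (values in [0, 1])
    k: number of items to return
    lam: trade-off weight; 1.0 = pure relevance, 0.0 = pure diversity
    """
    selected, candidates = [], list(range(len(relevance)))
    while candidates and len(selected) < k:
        def mmr_score(i):
            # Penalize items similar to those already selected
            redundancy = max((sim[i][j] for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Two near-duplicate top stories (items 0 and 1) and one dissimilar story (item 2):
relevance = [0.9, 0.85, 0.4]
sim = [[1.0, 0.95, 0.1],
       [0.95, 1.0, 0.1],
       [0.1, 0.1, 1.0]]
print(mmr_rerank(relevance, sim, k=2, lam=0.5))  # [0, 2]: the near-duplicate is demoted
```

With `lam=0.5`, the near-duplicate second story is pushed out of the top two in favor of a less similar one; setting `lam=1.0` recovers the original relevance-only ranking. This is only one point of intervention (the model output); comparable interventions exist at the user and data stages.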

AI in content generation

Artificial intelligence and machine learning are powerful tools for crafting high-quality journalism. In Work Package 3, we analyze in depth how AI techniques can revolutionize news production across its entire cycle, with the aim of strengthening trust. Our goal is to put AI to constructive use, helping journalists and editors deliver informative content.

We believe trustworthy AI journalism requires a balanced mix of smart machines and human input. Our vision encompasses content creators, consumers, and those who challenge trust in the media. Ensuring quality media requires robust models, editorial independence, and ethical values.

One significant challenge in developing advanced AI platforms for journalism is the potential for misuse by malevolent actors such as rogue media, political groups, or governments. Our strategy revolves around fostering evidence-based journalism rooted in democratic values and transparency. This approach makes it harder for malicious content to persist: unreliable evidence becomes more apparent, while content lacking evidence raises suspicion.

While AI isn’t a complete solution to today’s media challenges, harnessing its potential is crucial to the survival of reputable news sources. Tools that automate background research, recommend sources, facilitate verification and fact-checking, and support content creation aim to enhance journalists’ efficiency. We believe the primary aim of AI in journalism is to relieve journalists of monotonous tasks, leaving them more time for creativity and critical thinking. Entirely AI-generated news may be neither practical nor desirable, except in specific settings where its role is clearly defined, but it is essential to encourage journalists and editors to embrace and learn to use new AI-driven tools in their daily work.

Explainable AI

Everyone is concerned about the ethical use of artificial intelligence (AI). How can we develop AI tools that benefit society and uphold our values?

Those who create models and principles for AI tools often focus only on the technical process and rarely spend time understanding what the technology means for its users. Instead of producing black boxes, computer scientists must also understand the societal effects of the processes they build. They have tools for many tasks, but what is missing are guidelines that describe what those tools mean for society.

Our researchers looked to journalism for an answer. Journalism has adapted to technological change while retaining ethical principles that benefit society. Journalists have codified guidelines of a kind that does not yet exist for those who develop AI models. The solution is industry-specific guidelines grounded in the shared values of the relevant industry. Computer scientists, engineers, and everyone else who creates AI models can take these industry-specific values as the starting point for the design, engineering, and evaluation of responsible AI systems. Guidelines adhered to throughout the creation process, in any industry, can help us achieve ethical AI.

Leveraging Professional Ethics for Responsible AI: Applying AI techniques to journalism

Find us

Lars Hilles gate 30
5008 Bergen

Responsible Editor:
Centre Director Prof. Dr. Christoph Trattner


Copyright © University of Bergen 2024