Our Vision
AI in recommendation
At MediaFutures, our mission is to revolutionize media recommendation systems by addressing the pressing issues arising from AI-driven algorithms. These systems, while powerful, have inadvertently led to negative effects such as filter bubbles, echo chambers, popularity bias, discrimination, unfairness, and the spread of misinformation across media platforms.
In Work Package 2, we are dedicated to developing innovative solutions that counteract these negative impacts. Our objective is twofold: first, to identify and mitigate the biases inherent in recommendation technologies, and second, to pioneer a new era of responsible media technology built on cutting-edge AI advances. Our approach extends beyond mere algorithmic adjustments. We intervene strategically in the recommendation feedback loop, targeting three core components: the user, the data, and the model. By intervening at different stages of this loop, we aim to diversify recommendations and minimize undesired biases.
To achieve this, we adopt a range of methods, including re-ranking recommendation outputs to increase diversity and making systems more transparent to foster user trust. These strategies form the backbone of our pursuit of recommendation systems that not only deliver content but also prioritize fairness, diversity, and accuracy. Our vision is forward-looking: we aspire not only to rectify the negative impacts prevalent in current recommendation systems but also to set a precedent for a media landscape that is more inclusive, equitable, and reliable for all users.
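For illustration, consider the re-ranking intervention mentioned above. The sketch below applies maximal marginal relevance (MMR), a standard diversity-aware re-ranking heuristic, to a scored candidate list; the function names, parameters, and similarity measure are our illustrative assumptions, not MediaFutures code.

```python
from typing import Callable, Dict, List, Tuple

def mmr_rerank(
    candidates: List[Tuple[str, float]],      # (item_id, relevance score) pairs
    similarity: Callable[[str, str], float],  # pairwise item similarity in [0, 1]
    k: int = 10,
    lam: float = 0.7,                         # trade-off: 1.0 means pure relevance
) -> List[str]:
    """Greedily select k items, trading relevance against redundancy (MMR)."""
    pool: Dict[str, float] = dict(candidates)
    selected: List[str] = []
    while pool and len(selected) < k:
        def score(item: str) -> float:
            # Penalize items that resemble anything already selected.
            redundancy = max((similarity(item, s) for s in selected), default=0.0)
            return lam * pool[item] - (1.0 - lam) * redundancy
        best = max(pool, key=score)
        selected.append(best)
        del pool[best]
    return selected

# Toy usage: "b" nearly duplicates "a", so a lower lam surfaces "c" instead.
items = [("a", 0.9), ("b", 0.85), ("c", 0.4)]
sim = lambda x, y: 1.0 if {x, y} == {"a", "b"} else 0.0
print(mmr_rerank(items, sim, k=2, lam=0.5))  # ['a', 'c']
```

Lowering lam shifts the output toward diversity at the cost of raw relevance; tuning that trade-off is one concrete form the model-stage intervention can take.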
AI in content generation
Artificial intelligence and machine learning are powerful tools for crafting high-quality journalism. In Work Package 3, we thoroughly analyze how AI techniques can transform news production across its entire cycle, with the aim of strengthening trust. Our goal is to use AI for positive purposes, aiding journalists and editors in sharing informative content.
We believe trustworthy AI journalism requires a balanced mix of smart machines and human input. Our vision encompasses content creators, consumers, and the actors who challenge trust in media. Ensuring quality media requires robust models, editorial independence, and ethical values.
One significant challenge in developing advanced AI platforms for journalism is the potential for misuse by malevolent actors such as rogue media, political groups, or governments. Our strategy revolves around fostering evidence-based journalism rooted in democratic values and transparency. This approach makes it harder for malicious content to persist: unreliable evidence becomes more apparent, while content lacking evidence raises suspicion.
While AI isn’t a complete solution to today’s media challenges, harnessing its potential is crucial for the survival of reputable news sources. Tools that automate background research, recommend sources, facilitate verification and fact-checking, and support content creation aim to make journalists more efficient. We believe the primary aim of AI in journalism is to relieve journalists of monotonous tasks, giving them more time for creativity and critical thinking. Fully AI-generated news may be neither practical nor desirable, except in specific settings where its role is clearly defined; even so, it’s essential to encourage journalists and editors to embrace new AI-driven tools and learn to use them in their daily work.
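To make the verification-support idea concrete, here is a deliberately simplified sketch that ranks candidate sources by their similarity to a claim. A production tool would use learned text embeddings and proper retrieval infrastructure; the bag-of-words cosine below is a self-contained stand-in, and every name in it is our own assumption rather than a MediaFutures tool.

```python
import math
from collections import Counter
from typing import List, Tuple

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(count * b[token] for token, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_evidence(claim: str, sources: List[str]) -> List[Tuple[float, str]]:
    """Order candidate sources by lexical overlap with the claim."""
    claim_vec = Counter(claim.lower().split())
    scored = [(cosine(claim_vec, Counter(src.lower().split())), src) for src in sources]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)
```

A journalist-facing tool could surface the top-ranked sources alongside a claim, so that statements without supporting evidence stand out, in line with the transparency argument above.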
Explainable AI
Everyone is concerned about the ethical use of artificial intelligence (AI). How can we develop AI tools that benefit society and uphold our values?
Those who create the models and principles behind AI tools often focus only on the technical process and rarely spend time understanding what the technology means for its users. Rather than building black boxes, computer scientists must also understand the effects of the systems they create. They have tools for almost everything, but what is missing are guidelines describing what those tools mean for society.
Our researchers looked to journalism for an answer. Journalism has adapted to technological change while retaining ethical principles that benefit society, and journalists work under professional guidelines of a kind that does not yet exist for those who develop AI models. The solution is industry-specific guidelines grounded in the shared values of the relevant industry. Computer scientists, engineers, and everyone else who creates AI models can take their industry’s values as the starting point for the design, engineering, and evaluation of responsible AI systems. Guidelines adhered to throughout the creation process, in any industry, can help us achieve ethical AI.