/ Introduction
Recommendation enables media applications to support users in discovering additional media content (e.g., news articles and videos) and to keep consumers engaged. The main challenge in this context is that some recommendation approaches offer little potential for the discovery of new types of content, and they can make already popular media content even more popular. Such problems can ultimately lead to filter bubbles, echo chambers, or groupthink. The research stream will tackle these undesired phenomena, which are likely to originate from current personalization and recommendation approaches.
This will be done by building responsible (predictive) models for fair recommendations that enhance user engagement through novel mechanisms: (i) providing explanations of recommendations to users (transparency), (ii) expanding recommendations to cover a rich spectrum of media content (diversity), and (iii) ensuring that niche or minority content is suggested to users (fairness).
In addition, as users face a media environment that is increasingly perceived as fragmented, understanding users’ trust in and use of media is crucial to democracy, since media use remains central to citizens’ information about, and engagement in, society.
New knowledge: The outcome will be novel recommendation algorithms that take into account multiple competing objectives (e.g., relevance vs. information balance). In doing so, the research stream will address the following main research questions: To what extent can we effectively and fairly model online user behavior and predict this behavior? To what extent can we personalize and engage users online to efficiently keep them informed, and at the same time do this responsibly?
Objective: To develop user modeling and personalization techniques capable of effectively eliciting user preferences in order to enhance the user experience when interacting with media content, while taking into account important competing factors (e.g., business values, societal values, individual values).
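To illustrate the kind of multi-objective trade-off described above, here is a minimal, hypothetical sketch (not MediaFutures code) of a re-ranker that penalizes item popularity so that relevant but less popular (niche) content can rise in the list. The item names, scores, and the weight `lam` are illustrative assumptions.

```python
def rerank(items, lam=0.5):
    """Rank items by relevance penalized by popularity.

    items: list of (item_id, relevance, popularity) tuples, scores in [0, 1].
    lam:   trade-off weight; lam=0 ranks purely by relevance,
           larger lam pushes popular items down to favor niche content.
    """
    # Sort descending by the combined score: relevance minus weighted popularity.
    return sorted(items, key=lambda x: x[1] - lam * x[2], reverse=True)

# Illustrative candidates: a popular hit, a niche piece, and a mid-range article.
candidates = [
    ("popular_hit", 0.90, 0.95),
    ("niche_piece", 0.80, 0.10),
    ("mid_article", 0.85, 0.50),
]
ranking = [item_id for item_id, _, _ in rerank(candidates, lam=0.5)]
```

With `lam=0.5` the niche piece outranks the popular hit despite its lower raw relevance; with `lam=0` the ranking reduces to pure relevance ordering. Real systems balance many more objectives (editorial values, exposure diversity, user models), but the mechanism is the same: a single ranking score combining competing criteria.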
/ People

Bilal Mahmood
PhD Candidate
University of Bergen


Snorre Alvsvåg
Industry Leader
TV2

Dietmar Jannach
Advisor & Key Researcher
Universität Klagenfurt

Svenja Lys Forstner
PhD Candidate
University of Bergen


Erik Knudsen
Researcher
University of Bergen

Anja Svartberg
NRK
/ Publications
2026
C2PA Provenance Labels Increase Trust in News Platforms Across Western Countries Conference Forthcoming
AAAI ICWSM 2026, Forthcoming.
2025
Using Large Language Models to ‘Lighten the Mood’: Satirically Reframing News Recommendations to Reduce News Avoidance Proceedings
2025.
Evaluating Image Trust Labels in a News Recommender System Proceedings
2025.
Hope, Fear, or Anger? How Emotional Framing in a News Recommender System Guides User Preferences Working paper
2025.
Offline in the Closet, Online and Out, then Offline, Out and Proud: The Online/Offline-ness of Teenagers' Queer Worldmaking Book Chapter
In: Reynolds, Rachel R.; Pajé, Dacia; Medina, Sienna; Gigante, John (Ed.): Mediating Sex, Gender, and Sexuality in the GenZ Era, Chapter 15, pp. 14, Routledge, 2025.
Teen Boys and their Smartphones as Worldmaking Devices: In the Palm of their Hands Book
Palgrave MacMillan, 2025.
Integrating Digital Food Nudges and Recommender Systems: Current Status and Future Directions Journal Article
In: IEEE Access, 2025.
Supporting healthier food choices through AI-tailored advice: A research agenda Journal Article
In: PEC Innovation, 2025.
Computational Visual Content Verification PhD Thesis
2025.
The role of GPT as an adaptive technology in climate change journalism Conference
UMAP 2025, 2025.
Evaluating Sequential Recommendations in the Wild: A Case Study on Offline Accuracy, Click Rates, and Consumption Conference
ECIR 2025, 2025.
Understanding news experience: The resonance between content, practices, and situatedness in everyday life Journal Article
In: Journalism, 2025.
Report on NORMalize: The Second Workshop on the Normative Design and Evaluation of Recommender Systems Workshop Forthcoming
Forthcoming.
2024
Exploring the Ethical Challenges of AI and Recommender Systems in the Democratic Public Sphere Conference
NIKT, 2024.
Picture This: How Image Filters Affect Trust in Online News Conference
Norsk IKT-konferanse for forskning og utdanning, 2024.
Exploring the Ethical Challenges of AI and Recommender Systems in the Democratic Public Sphere Conference
2024.
Negativity Sells? Using an LLM to Affectively Reframe News Articles in a Recommender System Workshop
2024.
Bridging Viewpoints in News with Recommender Systems Conference
ACM RecSys2024, 2024.
Examining the Merits of Feature-specific Similarity Functions in the News Domain using Human Judgments Journal Article
In: User Modeling and User-Adapted Interaction, 2024.
Incorporating Editorial Feedback in the Evaluation of News Recommender Systems Conference
ACM UMAP 2024, 2024.
Shaping the Future of Content-based News Recommenders: Insights from Evaluating Feature-Specific Similarity Metrics Conference
ACM UMAP '24, 2024.
Perception versus Reality: Evaluating User Awareness of Political Selective Exposure in News Recommender Systems Conference
ACM UMAP 2024, 2024.
Emotional Reframing of Economic News using a Large Language Model Conference
ACM UMAP 2024, 2024.
A Survey on Popularity Bias in Recommender Systems Journal Article
In: User Modeling and User-Adapted Interaction (UMUAI), 2024.
Human Factors in User Modeling for Intelligent Systems Book Chapter
In: A Human-Centered Perspective of Intelligent Personalized Environments and Systems, pp. 3–42, 2024.
Psychologically Informed Design of Energy Recommender Systems: Are Nudges Still Effective in Tailored Choice Environments? Book Chapter
In: A Human-Centered Perspective of Intelligent Personalized Environments and Systems, pp. 221–259, 2024.
Branding or visual storytelling? How legacy media use visual journalism to reach young people in the age of digitalization Journal Article
In: Journal of Applied Journalism & Media Studies, pp. 1-24, 2024.
2023
Modeling news recommender systems’ conditional effects on selective exposure: evidence from two online experiments Best Paper Journal Article
In: Journal of Communication, 2023.
Topical Preference Trumps Other Features in News Recommendation: A Conjoint Analysis on a Representative Sample from Norway Conference
Association for Computing Machinery (ACM) RecSys ’23, 2023.
Understanding How News Recommender Systems Influence Selective Exposure Conference
Association for Computing Machinery (ACM) RecSys ’23, 2023.
Evaluating The Effects of Calibrated Popularity Bias Mitigation: A Field Study Conference
Association for Computing Machinery (ACM) RecSys ’23, 2023.
The Interplay between Food Knowledge, Nudges, and Preference Elicitation Methods Determines the Evaluation of a Recipe Recommender System Conference
Association for Computing Machinery (ACM) RecSys ’23, 2023.
Using Visual and Linguistic Framing to Support Sustainable Decisions in an Online Store Conference
Association for Computing Machinery (ACM) RecSys ’23, 2023.
Towards Attitudinal Change in News Recommender Systems: A Pilot Study on Climate Change Workshop
2023.
The Burden of Subscribing: How Young People Experience Digital News Subscriptions Journal Article
In: Journalism Studies, 2023.
How Rally-Round-the-Flag Effects Shape Trust in the News Media: Evidence from Panel Waves before and during the COVID-19 Pandemic Crisis Journal Article
In: Political Communication, 2023.
Monitoring the infection rate: Explaining the meaning of metrics in pandemic news experiences Journal Article
In: Journalism, 2023.
2022
Nudging Towards Health? Examining the Merits of Nutrition Labels and Personalization in a Recipe Recommender System Conference
2022.
Developing and Evaluating a University Recommender System Journal Article
In: Frontiers in Artificial Intelligence, 2022.
2021
Nudging Healthy Choices in Food Search Through Visual Attractiveness Journal Article
In: no. April 2021, pp. 1-18, 2021.
Framing Protest in Online News and Readers’ Comments: The Case of Serbian Protest “Against Dictatorship” Journal Article
In: International Journal of Communication, vol. 15, no. 21, pp. 82-102, 2021, (Pre SFI).
2020
Folk theories of algorithms: Understanding digital irritation Journal Article
In: Media, Culture & Society, 2020, (Pre SFI).
Changing news use. Unchanged news experiences? Book
Routledge, 2020, ISBN: 9780367485788, (Pre SFI).
The complexity landscape of outcome determination in judgment aggregation Journal Article
In: Journal of Artificial Intelligence Research, vol. 69, pp. 687–731, 2020, (Pre SFI).
Operationalizing exposure diversity. Journal Article
In: European Journal of Communication, pp. 1-2, 2020, (Pre SFI).
Addressing the New Item problem in video recommender systems by incorporation of visual features with restricted Boltzmann machines. Journal Article
In: Expert Systems, vol. e12645, pp. 1-20, 2020, (Pre SFI).
Temporal ambivalences in smartphone use: Conflicting flows, conflicting responsibilities. Journal Article
In: New Media and Society, vol. 22, no. 9, pp. 1715–1732, 2020, (Pre SFI).
Changing News Use. Unchanged news experiences? Book
1st, Routledge, London & New York, 2020, ISBN: 9781003041719, (Pre SFI).
Audiences’ Communicative Agency in a Datafied Age: Interpretative, Relational and Increasingly Prospective. Journal Article
In: Communication Theory, vol. 0, no. C, pp. 1-19, 2020, ISSN: 1050–3293, (Pre SFI).
Measuring Recommendation Explanation Quality: The Conflicting Goals of Explanations Conference
Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '20), New York, 2020, (Pre SFI).
Find us
Lars Hilles gate 30
5008 Bergen
Norway
Contact us
MediaFutures
Office@mediafutures.no
Responsible Editor:
Centre Director Prof. Dr. Christoph Trattner
Christoph.Trattner@uib.no
Copyright © University of Bergen 2024