Annual Meeting 2024

November 14 @ 09:45 - November 15 @ 15:15

MediaFutures is a centre for research-based innovation whose goal is to develop responsible media technology for the media sector, leveraging AI.

The centre is a consortium of the most important media players in Norway, hosted by the University of Bergen. User partners include NRK and TV 2, the two main TV broadcasters in Norway; Schibsted, including Bergens Tidende (BT), and Amedia, the two largest news media houses in Scandinavia/Norway; and the world-renowned Norwegian media tech companies Vizrt and Faktisk.no. The centre collaborates with renowned national research institutions, including the University of Oslo, the University of Stavanger and NORCE, and works together with high-profile international research institutions.

This year's MediaFutures Annual Meeting will be held on November 14-15 at Scandic Ørnen in Bergen, Norway. The 2024 Annual Meeting is a forum for the exchange of scientific results and industry insights within the field of responsible media technology.

This year's focus topic is Trust and Usability of Generative AI.

Following last year's success, the 2024 Annual Meeting is once again expected to bring together Norwegian and international researchers and industry practitioners for discussions on a wide range of topics.

Join us for inspirational keynote talks, prototype demonstrations, presentations by our researchers and industry partners, and a poster session.

KEYNOTE SPEAKERS

Vanessa Murdock

Sr Manager Applied Science - Amazon Web Services

Keynote Speech

Title:  Responsible AI in the Generative Era

Abstract:  Responsible AI (RAI) seeks basic guarantees of fairness, safety, privacy, robustness, controllability, explainability, transparency, and governance for traditional ML systems and generative AI systems. With recent legislation, including the EU AI Act, RAI has become a central focus of the AI/ML product development cycle. This talk provides an overview of the current practices for measuring and mitigating RAI dimensions in generative systems, and recent research in AWS AI/ML.

Bio

Vanessa Murdock leads a research group in AWS AI/ML, whose focus is Responsible AI (fairness, safety, privacy, robustness, and veracity).  In addition to doing fundamental research in RAI topics, her team builds tools for assessing aspects of responsible AI, used in AWS Services such as Bedrock, Rekognition and Transcribe.  Prior to joining AWS, she led a science team in Alexa Shopping focused on recommender systems, search and HCI. Her team provided the machine learning that backed Amazon’s Choice, and Alexa Shopping List, in addition to contributing content moderation for the generative AI system Rufus. She was previously at Microsoft, working on location inference and notifications at Bing and Cortana. Prior to Microsoft, Murdock led the Geographic Context and Experience Group at Yahoo! Research in Barcelona, which centered on geographic information retrieval and user-generated content. She has been awarded 20 patents, resulting in a Master Inventor Award from Yahoo! (2012). She received the OAA Award for Outstanding Achievement by a Young Alum from the University of Massachusetts in 2014. She is currently serving as the Chair of the ACM SIGIR Executive Committee.  Murdock received a Ph.D. in Computer Science from the University of Massachusetts Amherst Center for Intelligent Information Retrieval, advised by Bruce Croft.

Alejandro (Alex) Jaimes

Chief Scientist and Senior Vice President of AI - Dataminr

Keynote Speech

Title: Responsible AI in Critical Real-Time Applications

Abstract: Dataminr’s AI Platform discovers the earliest signals of events, risks, and threats from billions of multi-modal inputs from over one million public data sources. It uses Predictive AI for detecting events, Generative AI for describing them, and ReGenerative AI for generating live briefs that continuously update as events unfold. The events discovered by the platform help first responders quickly respond to emergencies, they help corporate security teams respond to risks (including Cyber risks), and they help news organizations discover breaking events so they can provide fast and accurate coverage. Building and deploying a large-scale AI platform like Dataminr’s is fraught with research and technical challenges. This includes tackling the hardest problem in AI (determining the real-time value of information), which requires combining a multitude of AI approaches. In this talk, I will briefly describe the main use cases of the work which I do, but I will focus specifically on Responsible AI: in the domains we work, the alerts we send out save lives, which implies the need for many levels of decision-making that can be impacted by AI. I will describe a framework on the deployment of responsible AI that is based on the types of decisions in which AI is involved, and the types of factors that need to be considered in deploying AI in critical applications.

Bio

Alex Jaimes is the Chief Scientist and Senior Vice President of AI at Dataminr. His work centers on blending qualitative and quantitative approaches to understand user behavior and drive product innovation.

With over 15 years of global experience, Alex has contributed to research with real-world impact at companies such as Yahoo, KAIST, Telefónica, IDIAP-EPFL, Fuji Xerox, IBM, Siemens, and AT&T Bell Labs. Previously, he served as Head of R&D at DigitalOcean, CTO at AiCure, and Director of Research and Video Products at Yahoo, where he led teams of scientists and engineers across New York City, Sunnyvale, Bangalore, and Barcelona.

He has also been a visiting professor at KAIST. A prolific author, Alex has published extensively in top conferences like KDD, WWW, RecSys, CVPR, and ACM Multimedia, and is a regular speaker at international academic and industry events. He earned his Ph.D. from Columbia University.

PROGRAM

Day 1

OPENING SESSION
09:45 Registration & Coffee & Event Information
10:15 Poster & Demo Exhibition Vol. 1
12:00 Lunch 
KEYNOTE SESSION
13:00 Welcome Address: Siri Gloppen (UiB), Christian Birkeland (TV2), Christoph Trattner (MediaFutures)
13:15 First Keynote Speech: Vanessa Murdock (Amazon Web Services): "Responsible AI in the Generative Era"
13:45 Moderated Q&A with Alain D. Starke (University of Amsterdam)
14:00 Coffee Break 
14:15 Second Keynote Speech: Alex Jaimes (Dataminr): "Responsible AI in Critical Real-Time Applications"
14:45 Moderated Q&A with Mehdi Elahi (MediaFutures)
15:00 Coffee break & Group Picture
TACKLING MIS- & DISINFORMATION SESSION
15:15 Presentation: Project Reynir Results: Christoph Trattner & Helge O. Svela
15:45 Panel: Kayleen Devlin (BBC Verify), Helge O. Svela (Media Cluster Norway), Vinay J. Setty (Factiverse, UiS), Sergej Stoppel (Wolftech), Morten Langfeldt Dahlback (Faktisk.no), Moderation: Bjørnar Tessem (MediaFutures)
FINAL SESSION
16:30 Interactive Poster & Demo Pitches Vol. 2
18:00 Poster & Demo Award
19:30 Conference Dinner (Scandic Ørnen)

Day 2

OPENING SESSION
09:45 Registration & Coffee & Event Information
INSIGHTS & INNOVATION SESSION 
10:15 Book Teaser: John Magnus R. Dahl (MediaFutures): "Making a World with the Smartphone", Moderation: Peder Haugfos
10:45 Coffee Break
11:00 Presentation by Sanja Šćepanović (Nokia Bell Labs, Cambridge): "Responsible AI: Innovating from Design to Deployment", Moderation: Morten Fjeld (MediaFutures)
12:00 Lunch 
AI & NEWS SESSION
13:00 Presentation by Kayleen Devlin (BBC Verify): "BBC Verify: tackling falsehoods in an age of uncertainty", Moderation: Christopher Senf (MediaFutures)
13:45 Coffee Break
14:00 Presentation by Lubos Steskal & Chris Ronald Hermansen (TV2): "Real Journalist, Virtual Avatar: What We Learned from Peeking into Pandora's Box with KI-Kjetil", Moderation: Samia Touileb (MediaFutures)
14:30 Presentation by Damian Trilling (University of Amsterdam): "Responsible Recommender Systems for News", Moderation: Erik Knudsen (MediaFutures)
15:15 End. 

The program is tentative and subject to change.

SPEAKERS

Kayleen Devlin

Senior Journalist at BBC Verify

Bio

Kayleen Devlin is a senior journalist at BBC Verify. She has extensive experience working on open-source investigations and covering disinformation from the wars in Ukraine and Israel-Gaza. She has also covered topics ranging from climate denial around COP26 to election-related disinformation in the Philippines and in the US midterm and presidential elections.

BBC Verify: tackling falsehoods in an age of uncertainty

In a year filled with global conflicts and elections, communicating credible information in a timely manner has never felt more important. BBC Verify senior journalist, Kayleen Devlin, joins us to discuss some of the approaches her team takes when it comes to tackling misleading posts online. What are some of the main themes that have cropped up this year? And how much has generative AI disrupted the landscape?

Sanja Šćepanović

Senior Research Scientist, Nokia Bell Labs

Bio

Sanja is an applied mathematician with an MSc in cybersecurity and cryptography and a PhD in data science. An alumna of the International Space University and formerly of ICEYE, she has a keen interest in space technology and research (e.g., AI for Earth Observation).

Her professional experience spans government institutions, two startups, CERN, and Bell Labs. During her doctoral studies with EIT Digital, she also took courses in business, innovation and entrepreneurship, working on startup ideas in some of them.

Examples of her research projects include public and population health studies using social networks, human dynamics using mobile phone data, and urban vitality using satellite data.

Responsible AI: Innovating from Design to Deployment

Abstract: This talk presents the work of the Responsible AI (RAI) team at Nokia Bell Labs, Cambridge. It covers the six RAI pillars and solutions for designing, deploying, and monitoring responsible AI systems. Discover AI Design, a collaborative approach for holistic AI system design, and ExploreGen, which helps foresee and manage potential uses and risks of AI technology. See how NLPGuard prevents over-reliance on protected attributes when monitoring a toxicity classifier. Finally, gain insights into how AI incidents are portrayed in the news media.

Damian Trilling

Professor at Vrije Universiteit Amsterdam

Bio

Damian Trilling is a full professor at Vrije Universiteit Amsterdam and holds the Chair for Journalism Studies.

Damian Trilling, with a background in social sciences and a role in a humanities department, integrates multiple perspectives, especially in computational methods. His research focuses on how citizens engage with news and current affairs amid today’s media landscape, examining the roles of journalists, news media, social platforms, and technology.

Initially using surveys, Trilling now leverages digital trace data, such as browser history donations, to study news consumption and sharing across formats, including high- and low-quality news and misinformation. He has also contributed insights on shareworthy news and large-scale sharing on platforms like Facebook.

Trilling investigates feedback loops in media: how popular content reinforces itself, and how personalization and audience metrics affect news production. He also critiques the "filter bubble" concept, preferring a more nuanced view of personalized news flows.

A proponent of computational methods in communication research, Trilling co-founded the journal Computational Communication Research and co-authored a book on computational analysis. His interests include machine learning, event identification across media, and experimental tools for recommender systems and data donation.

Responsible recommender systems for news

News organizations increasingly use recommender systems on their websites, and recommender systems are also essential for streaming platforms that offer news and current-affairs content. From a user perspective, such systems can help surface relevant content; from a commercial perspective, they can lead to higher click-through rates and revenues. At the same time, there are growing concerns that too much emphasis on clicks may be detrimental to delivering a responsible journalistic product. In this talk, I show recent developments that make it possible to leverage recommendation techniques to achieve desirable outcomes, such as broadening users' horizons, without sacrificing user satisfaction.

Siri Gloppen

Dean of SV-faculty at UiB

Bio

Siri Gloppen is a Professor of Political Science and Vice Dean at the Faculty of Social Sciences. She directs LawTransform, the CMI-UiB Centre on Law & Social Transformation, and heads the Bergen School of Global Studies. Her work focuses on the intersections of law, politics, and social change, and she is actively involved in global development and policy research initiatives.

Christian Birkeland

Chief Digital Officer at TV2

Bio

Birkeland holds a Master of Science in Engineering from NTNU in Trondheim and has a background as CEO of RiksTV.

Christian Birkeland is part of the executive management team at TV 2 and the chairman of the steering board in SFI MediaFutures.

Christoph Trattner

Director, SFI MediaFutures

Bio

Christoph Trattner is currently appointed as Lead Professor (1404) by the University Board of the University of Bergen (UiB). At UiB, he serves as the Founder and Center Director of the Research Centre for Responsible Media Technology & Innovation, known as SFI MediaFutures, which has secured funding and in-kind contributions totaling approximately 300 million NOK.

Additionally, he is the Founder and Leader of the DARS research group, Norway’s largest research group specializing in Recommender Systems. He holds a PhD (with distinction), MSc (with distinction), and BSc in Computer Science and Telematics from Graz University of Technology in Austria and is an ACM Senior Member. 

Helge O. Svela

CEO at Media Cluster Norway

Bio

Svela is an award-winning investigative journalist and editor. As a journalist at Bergens Tidende, he won a SKUP diploma for the fact-checking initiative faktasjekk.no. Over 11 years at Bergens Tidende he held various leadership positions and was in charge of the newspaper's coverage of the 2011 terrorist attack. He has also led several innovation initiatives at Bergens Tidende, among them New digital formats, which won a silver medal for newsroom innovation in the 2022 INMA Awards. He became CEO of Media Cluster Norway, a media and media tech cluster with around 90 member companies, in September 2022. Svela chairs the ITPC working group on Provenance Best Practices and Implementation.

Chris Ronald Hermansen

Project Manager for editorial AI at TV 2

Bio

Chris Ronald Hermansen is the project manager for editorial AI at TV 2. Over the past 15 years, he has worked as a journalist, news editor, and project manager in various media companies. In recent years, he served as the editorial manager for TV 2’s news department in Bergen. Hermansen holds a law degree.

Lubos Steskal is a data scientist and AI developer on TV 2's editorial AI team, and he is also the industry lead of the MediaFutures language technology work package. Combining academic and industry experience, he has worked extensively with natural language processing and machine learning across various sectors. His background includes positions at the University of Bergen, Sbanken, and several startups, bringing a unique perspective to the intersection of AI, journalism and media.

Real Journalist, Virtual Avatar: What We Learned from Peeking into Pandora's Box with KI-Kjetil

‘KI-Kjetil’, the first AI avatar of a news personality in Norwegian media, is an interactive chatbot focusing on U.S. presidential election coverage, modelled on journalist and news anchor Kjetil H. Dale. This talk presents the journey of developing and deploying KI-Kjetil, from evaluating editorial questions and challenges through technical implementation to operational monitoring. We explore the editorial rationale behind creating an AI clone of a news anchor, discuss our project objectives, and evaluate their outcomes. The presentation opens a discussion about the broader implications of AI avatars in journalism, examining their impact on public trust and the ethical dimensions of deploying AI-powered representations of real journalists.

John Magnus R. Dahl

Postdoctoral Researcher in SFI MediaFutures

Bio

John Magnus R. Dahl is a postdoc in WP1 Understanding Media Experiences at MediaFutures. He holds an MA in Rhetoric, Argumentation and Philosophy from the University of Amsterdam and a PhD from the Department of Information Science and Media Studies at the University of Bergen. Dahl is interested in the relationship between culture, communication, and politics in a broad sense, as well as in the development of ethnographic methods within media studies and rhetoric. He is currently working on a project on how Norwegian public broadcasters relate to, or fail to relate to, the media experiences and the social and cultural needs of young people.

Making a world with the smartphone

In this talk, postdoc John Magnus R. Dahl presents insights from his forthcoming book, the first book from MediaFutures – In The Palm of Their Hands: Teenage Boys and their Smartphones as Worldmaking Devices (expected spring 2025).

Based on ethnographic fieldwork in which Dahl observed six teenage boys online and offline over 18 months, the book seeks to answer how the smartphone impacts the lives of young people. The central argument is that the smartphone gives teenagers agency – agency to find out who they want to be, to connect with the people and communities that matter to them, and to engage with the wider world. This is why the smartphone is conceptualised as a worldmaking device.

In addition, Dahl has found that smartphone use is fundamentally gendered – used to enact masculinities, different ways of being a man – and that it is used differently by those who are "different" – ethnic minorities and queer people. For them, worldmaking through the smartphone is even more important.

Sergej Stoppel

Chief Innovation Officer at Wolftech

Bio

Sergej is the Chief Innovation Officer at Wolftech, where he leads a culture of innovation across interdisciplinary teams, transforming industry standards for a platform serving over 20,000 professionals globally. He holds a PhD in Computer Science and received the EuroVis Best Dissertation Award, and he combines research-driven strategic planning with a passion for customer-centric solutions. He is a recognized AI expert and thought leader, regularly speaking at key industry events and driving sustainable growth through strategic partnerships and groundbreaking AI solutions for media professionals.

Morten Langfeldt Dahlback

Head of Innovation and Technology at Faktisk.no

Bio

Morten Langfeldt Dahlback leads the technology and development efforts at Faktisk.no. He also heads the EU project NORDIS, a Nordic collaboration aimed at countering misinformation and disinformation. Dahlback holds a PhD in philosophy and has previously worked as a commentator for Adresseavisen and as an analyst for The Economist Intelligence Unit.

Vinay Setty

Associate Professor at the University of Stavanger

Bio

Vinay Setty is the founder and CTO of Factiverse and an associate professor at the University of Stavanger. His research broadly encompasses NLP, information retrieval and deep neural networks for language technologies. He specializes in fact-checking, question answering, and conversational search. He has published in several top-tier conferences in information retrieval and web mining, such as SIGIR, The Web Conference, WSDM and CIKM.

He also won the 2020 SR-Bank Innovation Prize in Norway for commercializing neural network technology for fake news detection. His startup Factiverse has won the Digital Trust Challenge and the NORA AI Startup Award, and has secured a US patent on deep neural networks for false claim detection. Setty holds a PhD in Computer Science from the University of Oslo and completed a postdoc at the Max Planck Institute for Informatics in Germany.

EXHIBITION AND DEMO PITCHES

Demos

Name – Title
Khadiga Seddik – Beyond Political Personalization: Enhancing News Recommendation with Headline Style Customization Using ChatGPT
Bilal Mahmood – Large Language Models as Editors Picking Related News Articles
Snorre Alvsvåg – Sequential Recommender systems in action in the Video Domain: A TV 2 Demonstration
Fazle Rabbi & Svenja Forstner – AI Conflict Analysis Tool
Pete Andrews – AI Co-Moderator: Enhancing Broadcasted Political Debates
Huiling You & Svenja Forstner – Event Extractor Model

Poster

Name – Title
Jeng Jia-Hua – Negativity Sells? Using an LLM to Affectively Reframe News Articles in a Recommender System
Sindre Berg Sæter – Metadata Analysis of Images and Videos
Adane N. Tarekegn – CSAI: New Cluster Validation Index based on Stability Analysis
Beatrix Chik Wu – News Report Adaptation for Synthetic Voice Presentation
Sohail Khan – CLIPing the deception: Adapting Vision-Language Models for Universal Deepfake Detection
Jørgen Eknes-Riple – Emotional Reframing Recommended News Articles
Bjørn Kjartansson Mørch – Analysis of Popularity Bias Effect in Media Recommendation
Tord Berget Monclair – Personalised News Recommendation in the Sports Domain
Snorre Åldstedt – Investigating and Measuring Bias in Generative Language Models
Peter Røysland Aarnes – NumPert: When Numbers Shift, does Prediction Hold?
Martin Salterød Sjåvik – Subtler biases in LLMs
Marianne Borchgrevink-Brækhus – News experience: understanding the resonance between content, practices & situatedness in everyday life
Tobias Jovall Wessel – Empowering Real-Time Media Research With NewsCatcher API
Bilal Mahmood – Can Large Language Models Support Editors Pick Related News Articles?
Ayoub El Majjodi – Advancing Visual Food Attractiveness Predictions for Healthy Food Recommender Systems
Thorstein Lium Fougner – Enhancing Enterprise streaming platforms with contextual post-filtering
Gloria Anne Babile Kasangu – Picture this: How Image Filters affect trust in online news
Ingunn Statle Nævdal – Personalised news summarisation

LOCATION

Both days of the Annual Meeting will take place at the Conference Centre at Scandic Ørnen.

Scandic Ørnen is located at Lars Hilles Gate 18, right next to the main bus terminal; the closest public transport stop is “Bergen Busstasjon”.

https://maps.app.goo.gl/oHBxMur84ReqdFVx7

Details

Start:
November 14 @ 09:45
End:
November 15 @ 15:15

Organizer

MediaFutures

Venue

Scandic Ørnen, Bergen