BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//MediaFutures - ECPv6.15.13.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://mediafutures.no
X-WR-CALDESC:Events for MediaFutures
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Oslo
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20260329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20261025T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20251003T090000
DTEND;TZID=Europe/Oslo:20251003T120000
DTSTAMP:20260510T123758Z
CREATED:20250818T082952Z
LAST-MODIFIED:20250818T083409Z
UID:21426-1759482000-1759492800@mediafutures.no
SUMMARY:Research Ethics Day at the UiB 2025
DESCRIPTION:It is fundamental for a knowledge-based society that the research underpinning policy is trustworthy. It is thus crucial that research is conducted to the highest ethical standards. To increase awareness of the importance of research ethics at the University of Bergen\, we have established an annual research ethics day. This year’s conference has the main theme of ‘Research Ethics After AI.’ We look forward to a day of meaningful discussion on this important topic. \nRegister here by 02.10.2025\, 16:00 at the latest.\n\nRESEARCH ETHICS AFTER AI\nPROGRAMME\n\n08.30\nLight breakfast buffet\n\n09.00\nIntroduction of today’s programme: Marit Bakke\, Professor\, Dean\, Faculty of Medicine\, UiB\, and Chair of the Programme Committee\n\n09.05\nWelcome address: Kjell Morten Myhr\, Professor\, Vice Rector for Research and Innovation\, UiB\n\n09.10\nSession 1: Big\, Biased\, and Synthetic: Data after AI\nHow is AI changing and challenging practices of data collection\, generation and validation?\nKeynote: Anna Feigenbaum\, Professor of Media and Digital Storytelling\, School of Social & Political Sciences\, University of Glasgow\nTitle: Dancing with Robots? AI\, Participation and Ethical Interactivity\nA conversation:\n\nBudhaditya Chattopadhyay\, Postdoctoral Fellow\, Faculty of Fine Art\, Music and Design\, UiB\n\nAnna Feigenbaum\, Professor of Media and Digital Storytelling\, School of Social & Political Sciences\, University of Glasgow\n\nTorger Kielland\, Professor\, Faculty of Law\, UiB\n\nHelge Ræder\, Professor\, Vice Dean for Innovation\, Faculty of Medicine\, UiB\n\nModerator: Gabriele de Seta\, Researcher and Project Leader\, Department of Linguistic\, Literary and Aesthetic Studies\, Faculty of Humanities\, UiB\n\n10.00\nBreak\n\n10.20\nSession 2: Trust in the Black Box: Data Analysis after AI\nHow is AI changing and challenging practices of data analysis?
\nA conversation:\n\nJulien Brajard\, Senior Researcher\, Climate Modeling\, NERSC\n\nPekka Parviainen\, Associate Professor\, Machine Learning Group\, Department of Informatics\, Faculty of Science and Technology\, UiB\n\nAnne Sigrid Refsum\, Postdoctoral Fellow\, AI STORIES project\, Department of Linguistic\, Literary and Aesthetic Studies\, Faculty of Humanities\, UiB\n\nModerator: Nathalie Reuter\, Professor\, Department of Chemistry\, Faculty of Science and Technology\, UiB\n\n11.10\nSession 3: Building Ethical Literacy: Towards a New Era of Research Ethics\nHow do we prepare researchers for an AI-driven research landscape?\nPresentation by: Lina Harder\, PhD Candidate\, Project: Extending Digital Narrative\, Center for Digital Narrative\, Department of Linguistic\, Literary and Aesthetic Studies\, Faculty of Humanities\, UiB\nA conversation:\n\nAnders Goksøyr\, Professor\, Department of Biological Sciences\, Faculty of Science and Technology\, UiB\, and Deputy Member of the National Committee for Research Ethics in Science and Technology – NENT\nKari Steen-Johnsen\, Research Professor\, Institute for Social Research\, UiO\, and Deputy Chair of the National Committee for Research Ethics in the Social Sciences and the Humanities – NESH\nIngrid Miljeteig\, Professor\, Department of Global Public Health and Primary Care\, Faculty of Medicine\, UiB\, and Deputy Member of the National Research Ethics Committees – NEM\n\nModerator: Ragna Aarli\, Professor\, Faculty of Law\, UiB\n\n12.00\nClosing remarks: Marit Bakke\, Professor\, Dean\, Faculty of Medicine\, UiB\, and Chair of the Programme Committee\n\n12.05\nLunch
URL:https://mediafutures.no/event/research-ethics-day-at-the-uib-2025/
LOCATION:Storsalen\, Nygårdsgaten 5\, University of Bergen\, 5015 Bergen
CATEGORIES:Events
ATTACH;FMTTYPE=image/jpeg:https://mediafutures.no/wp-content/uploads/forskningsetiskdagokt2024_v1_0.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20251014T140000
DTEND;TZID=Europe/Oslo:20251014T150000
DTSTAMP:20260510T123758Z
CREATED:20251006T113725Z
LAST-MODIFIED:20251006T113725Z
UID:21696-1760450400-1760454000@mediafutures.no
SUMMARY:Disinformation and Trust in Science in the U.S. and Beyond
DESCRIPTION:2025 has been a year of unprecedented cuts to global development aid\, significant political tensions\, and increased attacks on science and academia. What are the implications of these developments for health globally\, for health research\, and for trust in science in general? \nThe Bergen Centre for Ethics and Priority Setting in Health (BCEPS) has invited Professor Ezekiel Emanuel from the University of Pennsylvania\, a world-leading bioethicist\, to shed light on recent changes and the current impacts of this challenging political climate on disinformation and trust in science\, especially in the context of health research and policy. \nThis is a unique chance to hear directly from a leading American researcher\, physician\, and writer\, who has also been instrumental in US health policy under the Obama and Biden administrations. He contributes actively to the public debate about current US policies in the press and on social media. After the presentation\, there will be time for a Q&A session with the audience. \nThe event will be held in English and is free and open to the public. \nTIME: 14 October\, 14:00 – 15:00 \nPLACE: Alrek (Årstadveien 17)\, room Midgard \nAbout Professor Ezekiel (Zeke) Emanuel \nProfessor Emanuel received his MD from Harvard Medical School and his PhD in Political Philosophy from Harvard University. He is a practicing breast oncologist and currently serves as Vice Provost for Global Initiatives and Chair of the Department of Medical Ethics and Health Policy at the University of Pennsylvania. He is a special advisor to WHO Director-General Dr. Tedros and previously served as a Special Advisor for Health Policy under President Obama\, where he was instrumental in drafting the Affordable Care Act. Later\, he served on the Biden-Harris Transition COVID-19 Advisory Board. Professor Emanuel has published widely in leading scientific journals and has been described as the most cited bioethicist ever.
He also contributes regularly to the New York Times. Professor Emanuel is a research team leader within the Bergen Centre for Ethics and Priority Setting in Health at the University of Bergen.
URL:https://mediafutures.no/event/disinformation-and-trust-in-science-in-the-u-s-and-beyond/
LOCATION:Årstadveien 17
CATEGORIES:Events
ATTACH;FMTTYPE=image/png:https://mediafutures.no/wp-content/uploads/thumbnail_Zeke-Emanuel-Oct.-14_screens.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20251018T080000
DTEND;TZID=Europe/Oslo:20251022T170000
DTSTAMP:20260510T123758Z
CREATED:20251010T081224Z
LAST-MODIFIED:20251010T082102Z
UID:21731-1760774400-1761152400@mediafutures.no
SUMMARY:The 28th ACM SIGCHI Conference on Computer-Supported Cooperative Work & Social Computing
DESCRIPTION:CSCW is the premier venue for research on the design and use of technologies that affect groups\, organizations\, communities\, and networks. Bringing together top researchers and practitioners\, CSCW explores the technical\, social\, material\, and theoretical challenges of designing technology to support collaborative work and life activities. \nThe 28th ACM SIGCHI Conference on Computer-Supported Cooperative Work & Social Computing (CSCW) will be held in Bergen\, Norway\, on October 18–22\, 2025. MediaFutures is one of the sponsors of the conference. WP4 leader Morten Fjeld is one of the industry and sponsorship chairs\, and Frode Guribye is one of the local organisation chairs. \nOpening Keynote: Kari Kuutti\nKari Kuutti is a professor emeritus (HCI & CSCW) at the INTERACT research unit at the University of Oulu\, Finland. Back in 1996\, his professorship was the first one in Finland dedicated to HCI and CSCW. He has also served as a professor at the Department of Computer Science of Helsinki University of Technology (currently Aalto University)\, and as an adjunct professor at the University of Art and Design Helsinki (currently part of Aalto University) and at the Department of Education at the University of Helsinki. He has been a visiting scholar at the School of Design at the Polytechnic University of Hong Kong and at the Interaction Design Centre at the University of Limerick. In 2020 he received the Lifetime Achievement Award from the European Society for Socially Embedded Technologies (EUSSET). \nKuutti has published over 120 research articles and is recognized for his efforts to bring practice-based approaches into the analysis and design of cooperative systems\, especially for his promotion of cultural-historical activity theory (CHAT) for this purpose. He has been actively involved in both national and European research collaboration.
A large part of his work has been linked to developing support for collaborative and distributed design efforts\, not only academically but also with industrial partners such as Airbus and Nokia Mobile Phones. \nClosing Keynote: Gina Neff\nGina Neff is Professor of Responsible AI at Queen Mary University of London. She runs the Minderoo Centre for Technology & Democracy at the University of Cambridge. She is the Deputy Chief Executive Officer of UKRI Responsible AI UK and Associate Director of the ESRC Digital Good Network. \nProfessor Neff serves on the boards of the Social Science Research Council\, the Institute for the Future of Work\, and Reset.tech. She holds a doctorate in sociology and undergraduate degrees in Economics and Middle Eastern Languages and Cultures\, all from Columbia University. Her books include Venture Labor (MIT Press\, 2012)\, Self-Tracking (MIT Press\, 2016)\, and Human-Centered Data Science (MIT Press\, 2022). \nHer academic research has won both engineering and social science awards. Oxford awarded her a 2019 Teaching Excellence Award for her leadership of doctoral programmes at the Oxford Internet Institute. She led the team that won the 2021 Webby Award for the best educational website on the Internet for the A to Z of AI\, which reached over one million people in 17 languages as part of Google’s AI skills training. \nThe programme information is available on the SIGCHI web app\, which you can use for easier planning. \nFor workshops\, see: https://cscw.acm.org/2025/index.php/workshops/ \nNote: All presentations in the main room (Peer Gynt-salen) will be livestreamed and viewable by both in-person and virtual attendees.
URL:https://mediafutures.no/event/the-28th-acm-sigchi-conference-on-computer-supported-cooperative-work-social-computing/
LOCATION:Bergen
ATTACH;FMTTYPE=image/jpeg:https://mediafutures.no/wp-content/uploads/Skjermbilde_10-10-2025_10133_cscw.acm_.org_.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20251022T091500
DTEND;TZID=Europe/Oslo:20251022T140000
DTSTAMP:20260510T123758Z
CREATED:20251001T091623Z
LAST-MODIFIED:20251002T084206Z
UID:21639-1761124500-1761141600@mediafutures.no
SUMMARY:Talks by Google DeepMind researcher Nitesh Goyal
DESCRIPTION:SFI MediaFutures hosts two talks with Google DeepMind researcher Nitesh Goyal\, invited and introduced by work package 4 co-leader Professor Morten Fjeld. \nTesh (Nitesh) Goyal leads research at the intersection of AI and safety at Google DeepMind. His work at Google has led to the launch of ML-based tools such as SynthID to enable AI literacy\, AI Studio and MakerSuite to enable creatives to leverage AI to bring their ideas to life\, Harassment Manager to empower targets of online harassment\, ML-based moderation to reduce the production of toxic content on platforms such as OpenWeb\, and multiple NLP-based tools that reduce biased sensemaking. He received his MSc in Computer Science from UC Berkeley and RWTH Aachen before receiving his PhD in Information Science from Cornell University. His research has been supported by the German government and the National Science Foundation. Frequently collaborating with industry (Google Research\, Yahoo Labs\, HP Labs\, Bloomberg Labs)\, he has published in top-tier HCI venues (e.g.\, CHI\, CSCW\, FAccT)\, received three best paper honorable mention awards (CHI\, CSCW)\, and his work is frequently covered in the press. Tesh also serves on the ACM SIGCHI Steering Committee\, as an appointed Adjunct Professor at New York University and Columbia University\, and as an ACM Distinguished Speaker. \n\nTalk 1: Wednesday 22 October\, 09:15 – 10:00: Designing AI Responsibly | Case Studies from Practice \nLocation: Egget/UiB Auditorium \nAs an HCI researcher\, my work pushes boundaries for inclusive AI/ML models. In this talk I will share case studies about building these models and the challenges in their large-scale adoption. Some of these models are commonly used to detect toxicity in online conversations. They are trained on datasets annotated by human raters and require relatively large datasets. In the first case study\, I will explore how raters’ self-described identities impact how they annotate toxicity in online comments.
In the second case study\, I will share how our collective scholarship leaves a gap in evaluating the Responsible AI tools that inspect such AI/ML models. I will end with recommendations for an inclusive and equitable RAI practice. \n\nTalk 2: Wednesday 22 October\, 13:00 – 14:00: Designing for Sensemaking Translucence | A Crime-Solving Case Study \nLocation: Room Stortinget\, UiB \nSolving crimes correctly is a critical and life-altering problem in which intelligence analysts are constantly struggling against their biases. Despite recurring themes in 50+ years of scholarship on how AI should be designed responsibly to support these use cases and users\, we have barely started to scratch the surface. In this lecture\, I introduce the notion of Sensemaking Translucence as a way to address challenges related to bias\, fairness\, and equity. I then provide examples of how AI can support Sensemaking Translucence. Finally\, my work makes the case that it is important to design from a human-centered perspective by leveraging AI to support these human–AI collaboration workflows. \nFor questions\, please contact: Morten.Fjeld@uib.no
URL:https://mediafutures.no/event/21639/
LOCATION:UiB Bergen\, Norway
CATEGORIES:Seminar
ATTACH;FMTTYPE=image/png:https://mediafutures.no/wp-content/uploads/Frame-136-3.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20251023T091500
DTEND;TZID=Europe/Oslo:20251023T150000
DTSTAMP:20260510T123758Z
CREATED:20251008T145134Z
LAST-MODIFIED:20251013T094227Z
UID:21716-1761210900-1761231600@mediafutures.no
SUMMARY:Recommender Systems and Nudges for Healthier Food Choice
DESCRIPTION:We are pleased to invite you to the trial lecture and public defence of Ayoub el Majjodi\, PhD candidate at SFI MediaFutures. \nDate: 23 October\nVenue: Auditorium 2\, Jussbygget\, UiB\nTime: 9:15\nPhD Thesis: Recommender Systems and Nudges for Healthier Food Choices\nThesis Summary:\nRecommender systems are widely used to address the challenge of information overload by presenting users with the most relevant content through personalization techniques. In the food domain\, decision-making is particularly complex due to the multifaceted nature of food choices\, which are influenced by a range of individual\, contextual\, and environmental factors. Despite this complexity\, recommender systems have shown considerable promise in modeling real-world food preferences and supporting users in navigating food-related decisions. Their adoption in food applications has been steadily increasing\, reflecting the growing importance and difficulty of making informed\, personalized\, and health-conscious dietary choices. Nonetheless\, these systems have often been shown to generate predominantly popular food options\, which tend to be less healthy. As users interact with such systems\, they are repeatedly exposed to unhealthy choices\, which in turn reinforces their preferences for these items. This feedback loop causes algorithms to prioritize popular yet nutritionally poor options\, ultimately amplifying unhealthy eating behaviors with potential negative implications for public health. At the same time\, digital nudging has emerged as a promising strategy for influencing user behavior in subtle and non-intrusive ways. However\, limited research has investigated how digital nudges and recommender systems can be effectively combined\, particularly in user-centered settings aimed at supporting informed decision-making and promoting behavioral change.
To address this gap\, this thesis adopts a Design Science Research methodology to design\, implement\, and evaluate food recommender systems augmented with digital nudges. The research is documented across several peer-reviewed manuscripts and supported by both offline algorithmic evaluations and online user experiments. These studies examine how various preference elicitation methods\, nudging techniques\, and user characteristics such as food knowledge and dietary goals interact to shape user experience and behavior. The findings reveal that several nudging techniques warrant further investigation in the context of food recommender systems\, particularly through user-centric evaluation approaches. Moreover\, while digital nudges can support healthier food choices\, their effectiveness varies depending on personalization\, user familiarity\, and system design. Interestingly\, non-personalized recommendations with clear nutritional labeling were often more effective in encouraging healthy decisions than personalized options. Additionally\, the interplay between preference elicitation methods\, user knowledge\, and nudging strategies significantly influenced user choices\, interactions\, and overall experience. This thesis contributes to the fields of recommender systems and persuasive technologies by demonstrating how digital nudges and system design features jointly influence health-related decision-making. It emphasizes the importance of user-centric evaluation and lays the foundation for future research on adaptive nudging\, long-term behavior change\, and real-world deployments in food-related digital platforms.
\nOpponents:\nAssociate Professor Alan Said\, University of Gothenburg\, Sweden\nAssistant Professor Julia Neidhardt\, TU Wien\, Austria\nChair of the Committee: Associate Professor Erik Knudsen\, Department of Information Science and Media Studies\, University of Bergen\nDefense Chair: Associate Professor Samia Touileb\, Research Leader at the Department of Information Science and Media Studies\, University of Bergen
URL:https://mediafutures.no/event/recommender-systems-and-nudges-for-healthier-food-choice/
LOCATION:Auditorium 2 Jussbygget
CATEGORIES:Events
ATTACH;FMTTYPE=image/png:https://mediafutures.no/wp-content/uploads/Frame-131-7.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20251024T120000
DTEND;TZID=Europe/Oslo:20251024T130000
DTSTAMP:20260510T123758Z
CREATED:20251020T081632Z
LAST-MODIFIED:20251024T080239Z
UID:21787-1761307200-1761310800@mediafutures.no
SUMMARY:Beyond Accuracy: Exploring Fairness and Generative AI in (News) Recommender Systems
DESCRIPTION:We would like to invite you to a lunch seminar with Thomas E. Kolb\, PhD candidate at TU Wien (Austria) and member of the CDL-RecSys. \nThomas is visiting MediaFutures for two months (until November 14) as part of an Erasmus traineeship. His research focuses on news recommender systems\, particularly on fairness and bias over time. \nOn Friday\, 24 October\, Thomas will present his recent work and share insights from his ongoing research in this area. \nBio: \nThomas is conducting his PhD research on the long-term dynamics of bias and fairness in cross-domain recommender systems. To analyse these dynamics in a real-world environment\, his lab collaborates with a company in the news\, books\, and lifestyle domains. The exploration of long-term dynamics in this field has immense potential for the development of fairer recommender systems. He firmly believes in the significance of providing the research community with fresh insights to foster the creation of responsible and fair recommender systems. \nAbstract: \nRecommender systems have become a key technology in digital media environments\, yet their success cannot be measured by accuracy alone. In this talk\, Thomas E. Kolb will first provide an overview of his lab’s current research activities across domains such as e-commerce\, fashion\, and news. He will then present his past and current work on evaluating and designing recommender systems from a beyond-accuracy perspective\, including insights into what makes a “good reading recommendation” in news contexts\, based on the lab’s industry collaborations. The talk concludes with an outlook on recent trends in conversational and generative recommender systems\, based on insights from his tutorial at the ACM Recommender Systems Conference. \nYou can follow the talk live by joining Zoom: https://uib.zoom.us/j/69085222716?pwd=t7lotTdLgtpTLRzcWnmB4DgTNUiNd6.1
URL:https://mediafutures.no/event/beyond-accuracy-exploring-fairness-and-generative-ai-in-news-recommender-systems/
LOCATION:SFI MediaFutures\, MCB
CATEGORIES:Seminar
ATTACH;FMTTYPE=image/png:https://mediafutures.no/wp-content/uploads/Frame-136-8.png
END:VEVENT
END:VCALENDAR