BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//MediaFutures - ECPv6.15.13.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:MediaFutures
X-ORIGINAL-URL:https://mediafutures.no
X-WR-CALDESC:Events for MediaFutures
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Oslo
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T030000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T030000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20260329T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20261025T030000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20251022T091500
DTEND;TZID=Europe/Oslo:20251022T140000
DTSTAMP:20260420T113033Z
CREATED:20251001T091623Z
LAST-MODIFIED:20251002T084206Z
UID:21639-1761124500-1761141600@mediafutures.no
SUMMARY:Talks by Google DeepMind researcher
DESCRIPTION:SFI MediaFutures hosts two talks by Google DeepMind researcher Nitesh Goyal\, invited and introduced by work package 4 co-leader Professor Morten Fjeld. \nTesh (Nitesh) Goyal leads research at the intersection of AI and safety at Google DeepMind. His work at Google has led to the launch of ML-based tools such as SynthID to enable AI literacy\, AIStudio and MakerSuite to enable creatives to leverage AI to bring their ideas to life\, Harassment Manager to empower targets of online harassment\, ML-based moderation to reduce the production of toxic content on platforms like OpenWeb\, and multiple NLP-based tools that reduce biased sensemaking. He received his MSc in Computer Science from UC Berkeley and RWTH Aachen before receiving his PhD in Information Science from Cornell University. His research has been supported by the German Government and the National Science Foundation. Frequently collaborating with industry (Google Research\, Yahoo Labs\, HP Labs\, Bloomberg Labs)\, he has published in top-tier HCI venues (e.g. CHI\, CSCW\, FAccT)\, received three best paper honorable mention awards (CHI\, CSCW)\, and his work is frequently covered in the press. Tesh also serves on the ACM SIGCHI Steering Committee\, as an appointed Adjunct Professor at New York University and Columbia University\, and as an ACM Distinguished Speaker. \n  \nTalk 1: Wednesday 22 October\, 09:15 – 10:00: Designing AI Responsibly | Case Studies from Practice \nLocation: Egget/UiB Auditorium \nAs an HCI researcher\, my work pushes the boundaries of inclusive AI/ML models. In this talk I will share case studies about building these models and the challenges in their large-scale adoption. Some of these models are commonly used to detect toxicity in online conversations; they are trained on relatively large datasets annotated by human raters. In the first case study\, I will explore how raters’ self-described identities affect how they annotate toxicity in online comments.
 In the second case study\, I will show how our collective scholarship reveals a gap in evaluating the Responsible AI tools that inspect such AI/ML models. I will end with recommendations for an inclusive and equitable RAI practice. \n  \nTalk 2: Wednesday 22 October\, 13:00 – 14:00: Designing for Sensemaking Translucence | A Crime-Solving Case Study \nLocation: Room Stortinget\, UiB \nSolving crimes correctly is a critical and life-altering problem in which intelligence analysts constantly struggle against their biases. Despite recurring themes in 50+ years of scholarship on how AI should be designed responsibly to support these use cases and users\, we have barely scratched the surface. In this lecture\, I introduce the notion of Sensemaking Translucence for bias-\, fairness-\, and equity-related challenges. I then provide examples of how AI can support Sensemaking Translucence. Finally\, my work makes the case for designing from a human-centered perspective\, leveraging AI to support these human–AI collaboration workflows. \nFor questions\, please contact: Morten.Fjeld@uib.no
URL:https://mediafutures.no/event/21639/
LOCATION:UiB Bergen\, Norway
CATEGORIES:Seminar
ATTACH;FMTTYPE=image/png:https://mediafutures.no/wp-content/uploads/Frame-136-3.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20251024T120000
DTEND;TZID=Europe/Oslo:20251024T130000
DTSTAMP:20260420T113033Z
CREATED:20251020T081632Z
LAST-MODIFIED:20251024T080239Z
UID:21787-1761307200-1761310800@mediafutures.no
SUMMARY:Beyond Accuracy: Exploring Fairness and Generative AI in (News) Recommender Systems
DESCRIPTION:We would like to invite you to a lunch seminar with Thomas E. Kolb\, a PhD candidate from TU Wien (Austria) and member of the CDL-RecSys. \nThomas is visiting MediaFutures for two months (until November 14) as part of an Erasmus traineeship. His research focuses on news recommender systems\, particularly on fairness and bias over time. \nOn Friday\, 24 October\, Thomas will present his recent work and share insights from his ongoing research in this area. \nBio: \nThomas is conducting his PhD research on the long-term dynamics of bias and fairness in cross-domain recommender systems. To analyse these dynamics in a real-world environment\, his lab works with a company in the news\, books\, and lifestyle domains. The exploration of long-term dynamics in this field has immense potential for the development of fairer recommender systems. He firmly believes in the significance of providing the research community with fresh insights to foster the creation of responsible and fair recommender systems. \nAbstract: \nRecommender systems have become a key technology in digital media environments\, yet their success cannot be measured by accuracy alone. In this talk\, Thomas E. Kolb will first provide an overview of his lab’s current research activities across domains such as e-commerce\, fashion\, and news. He will then present his past and current work on evaluating and designing recommender systems from a beyond-accuracy perspective\, including insights on what makes a “good reading recommendation” in news contexts\, based on the lab’s industry collaborations. The talk concludes with an outlook on recent trends in conversational and generative recommender systems\, based on insights from his tutorial at the ACM Recommender Systems Conference. \nYou can follow the talk live by joining Zoom: https://uib.zoom.us/j/69085222716?pwd=t7lotTdLgtpTLRzcWnmB4DgTNUiNd6.1
URL:https://mediafutures.no/event/beyond-accuracy-exploring-fairness-and-generative-ai-in-news-recommender-systems/
LOCATION:SFI MediaFutures\, MCB
CATEGORIES:Seminar
ATTACH;FMTTYPE=image/png:https://mediafutures.no/wp-content/uploads/Frame-136-8.png
END:VEVENT
END:VCALENDAR