BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//MediaFutures - ECPv6.15.13.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://mediafutures.no
X-WR-CALDESC:Events for MediaFutures
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Oslo
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20260329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20261025T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20250603T091500
DTEND;TZID=Europe/Oslo:20250603T150000
DTSTAMP:20260510T185207Z
CREATED:20250324T070120Z
LAST-MODIFIED:20250521T090812Z
UID:20568-1748942100-1748962800@mediafutures.no
SUMMARY:PhD Defense of Sohail Ahmed Khan
DESCRIPTION:On Tuesday\, June 3rd\, PhD candidate Sohail Ahmed Khan will defend his PhD thesis “Computational Visual Content Verification”. The trial lecture starts at 09.15\, and the defense starts at 10.30. \nIn his dissertation\, Sohail Khan explores how newsrooms verify visual user-generated content (UGC) and examines growing challenges related to manipulated media content such as deepfakes and cheapfakes. The dissertation analyses current verification practices in journalism\, maps existing tools and workflows\, and uncovers a clear gap between technological advances and newsroom practices. \nThrough the thesis\, Sohail contributes both a critical review of the current verification landscape and new AI-based methods for detecting deepfakes and cheapfakes. The research also emphasises the need for closer collaboration between researchers and journalists\, and highlights initiatives to build a research community around these challenges\, including through international media verification competitions. \nOpponents: \n\nProfessor Giulia Boato\, Department of Information Engineering and Computer Science\, University of Trento\nAssociate Professor Nhien-An Le-Khac\, School of Computer Science\, University College Dublin\n\nChair of the committee: Professor Bjørnar Tessem \nChair of the defense: Professor Knut Helland \n 
URL:https://mediafutures.no/event/sohail-phd-defense/
LOCATION:Ulrike Pihls Hus\, Ulrikes aula\, Professor Keysers gate 1\, Bergen\, Norway
CATEGORIES:Events,WP3 Media Content Production & Analysis
ATTACH;FMTTYPE=image/png:https://mediafutures.no/wp-content/uploads/sohailphddefense.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20250604T100000
DTEND;TZID=Europe/Oslo:20250604T170000
DTSTAMP:20260510T185207Z
CREATED:20250530T083115Z
LAST-MODIFIED:20250602T084718Z
UID:20998-1749031200-1749056400@mediafutures.no
SUMMARY:Japan-Norway Encounters 日本とノルウェーの出会い x Bergen HCI summer seminar
DESCRIPTION:On the 4th of June\, MediaFutures professor Morten Fjeld and the HCI research group present the HCI summer seminar\, with a special program for attendees and three PhD defense rehearsal talks in HCI. Among the candidates is MediaFutures PhD candidate Peter Andrews (see program below). \nWe are proud to announce that we also have visiting researchers from Tohoku University in Japan holding a presentation\, as well as members of the UiB HCI research group presenting their work. \nHuman-Computer Interaction (HCI) is a subject with implications for research and development (R&D) in areas such as education\, health\, engineering\, architecture\, and media. While innovation is key in advancing HCI itself\, innovation is also needed to advance research and industry in these areas. \nDuring this event you will be able to see selected results of the UiB HCI infrastructure project. Presentations will show cutting-edge research demonstrations including conversational agents (Peter)\, motion capture (Miroslav)\, gaze tracking (Yuki)\, AR/VR technology (Paulina\, Floris)\, and digital biomarkers (Vegard). Some of these projects are partially supported by the UiB digital accessibility initiative. \nThe Program: 
\nWednesday June 4th \nLocations: \n9:45–13:00 – Auditorium Egget\, Studentsenteret \n13:00–17:00 – Nordre Allmenning 3\, Nygårdsgaten 5 (conference center) \n\n9:00 – Picking up at Terminus entrance \n9:15 – Quick guidance at the Museum Garden \n9:45 – Welcome and opening remarks of the HCI summer seminar / Announcements \n10:00 – PhD defense rehearsal talk: Human-AI Interaction for Video Content: Designing and Engineering Multimodal Conversational Agents \nPeter Andrews\, PhD candidate\, UiB\, Bergen \nAbstract: This thesis unifies the second screening experience with Computer Vision (CV) and Deep Learning (DL)\, thereby building an interactive video framework following the From Video to Data → From Data to Narrative → From Narrative to Interaction paradigm. The result is a Multimodal Conversational Agent (MCA) that can hyper-contextualize video content. This video framework encompasses three research questions: \n1) How can recent advances in computer vision and artificial intelligence facilitate interaction with video content? \n2) How can interactive video increase subjective understanding of the content? \n3) How do young adults perceive the user experience of interactive video for news broadcasts? \nAnswering these questions gives a better grasp of what is needed to build an end-to-end interactive video framework with AI. At the same time\, empirical research can show how the capabilities of the framework can improve user experience and comprehension. \nTo address these questions\, I develop prototypes for interactive video in sports (football) and politics. I approached the video framework in a modular manner with four in-house design prototypes – FootyVision\, the Automated Commentary System (ACS)\, AiCommentator\, and AiModerator. 
Collectively\, these four prototypes demonstrate how CV- and NLP-based event detection and LLM-powered MCAs can synchronize and facilitate real-time interaction with video content. I tested the prototypes in lab-based mixed-methods studies and found that interactive video with an MCA can enhance engagement\, immersion\, and subjective understanding. However\, a Human-AI Interaction (HAI) trade-off between automation and user control occurs. While a high degree of automation can tightly synchronize the experience\, it comes at the cost of user control. The affordances of MCAs include multimodal feedback and remediation. Multimodal feedback supports subjective understanding\, which aligns with the Cognitive Theory of Multimedia Learning (CTML). Remediation involves repurposing traditional roles in innovative ways. MCAs achieve this by transforming sports commentators and political moderators into remediated personas\, thus leading to increased engagement. Moreover\, MCAs can also push the user into a more objective viewing state\, highlighting a trade-off between objectivity and emotional involvement. Finally\, trust is paramount for high-stakes environments where transparency is crucial. \nTest opponent: Shlomo Berkovsky\, Macquarie U.\, Australia \n11:00 – Keynote: Synthetic versus Real Media: From the age of signal processing to the battle between AI models \nProf. Giulia Boato\, University of Trento\, Italy \nAbstract: Over the last few decades\, the realism of synthetic media has increased dramatically. The multimedia research community has developed techniques to distinguish real and fake media\, initially focusing on images and\, more recently\, videos. Early methods relied heavily on signal processing and artefact detection. However\, in recent years\, AI-generated media has become so hyper-realistic that it is perceived as “more real than real.” This has sparked the AI-versus-AI battle towards new defensive models. 
\nBio: Giulia Boato is a Full Professor at the Department of Information Engineering and Computer Science\, University of Trento\, Italy. Since 2012\, she has pioneered research at the intersection of signal processing\, physiological signal analysis\, and\, more recently\, advanced deep learning methods to distinguish between virtual and real humans. Her work also addresses various forms of digital media manipulation\, with a recent focus on deepfake detection and forensic analysis in open-world scenarios such as social media. She has authored over 140 publications in international journals. Her research spans image and signal processing\, multimedia data protection\, and digital forensics. She is an elected member of both the IEEE Multimedia Signal Processing Technical Committee (MMSP TC) and the IEEE Information Forensics and Security Technical Committee (IFS TC). \n12:00 – Lunch\, Cafe Smauet \n13:00 (40 mins) – Short presentations: Tohoku University ICD lab \nProf. Yoshifumi Kitamura (10 min): On the ICD Lab \nAssist. Prof. Miao Cheng (5 min): Understanding emotion from bodily movements: database and cultural influence \nManato Abe\, PhD student (5 min): Force Sensor Data Feedback Method for Industrial Robot-Arm Operation \nRyo Ooka\, PhD student (5 min): Robotics-Enabled Spatial Information Experiences: Novel Presentation with Interactive Displays and Comfortable Furniture \nHongyue Xu\, Master student (5 min): From Vision to Emotion: The Future of Human Health in the Age of AI \nYuhui Wang\, Master student (5 min): Toward Practical VR: Designing Human-Centered Systems for Training and Well-being \nAkira Murakami\, Master student (5 min): Robotic Partitioning System for Adaptive Workspaces \n13:50 (50 mins) – Short presentations: UiB HCI group \nProf. Morten Fjeld (5 min): From interactive tabletops to in-motion UIs \nProf. Frode Guribye (5 min): Research topic tbd \nAssoc. Prof. Miroslav Bachinski (5 min): Simulating Users for Human-Computer Interaction \nYong Ma\, Postdoc (5 min): Emotion-aware voice UIs \nPavel Okopnyi\, Postdoc (5 min): Design Automation \nYuki Onishi\, Postdoc (5 min): Production control room optimization with eye tracking technology \nPaulina Becerril Palma\, PhD student (5 min): Accessible Mixed Reality \nMahya Jahanshahikhabisi\, PhD student (5 min): A Digital Approach to Dementia Research: Integrating Digital Tools\, AI\, and Personalized Interventions in Dementia Management \nVegard Bolstad\, Master student (5 min): Drawn to Mind: Instrumenting fine motor hand movement for cognitive assessment and identification of digital biomarkers for dementia \nAndreas Tjeldflaat\, Bachelor student (5 min): Tangible Privacy and Privacy Perception \n14:40 – Coffee break \n15:00 – PhD defense rehearsal talk: AR for Maritime Collaboration: Using Augmented Reality to Facilitate Team Decision-Making\, Team Situation Awareness\, and Communication in Maritime Operations \nFloris Hendrikus Johannes van den Oever\, PhD candidate\, UiB\, Bergen \nAbstract: High-quality collaboration is crucial for safe and efficient maritime operations such as ship navigation\, port construction\, and maintenance of offshore units. A challenge for collaboration is that crewmembers have to share their different perspectives and information. Augmented reality (AR) has the potential to improve maritime collaboration by facilitating team decision-making\, team situation awareness (TSA)\, and communication. This PhD project investigated the potential of AR in facilitating collaboration within maritime operations. It comprised three core studies: a systematic literature review\, a laboratory study using virtual reality (VR)\, and a field study employing AR. 
The literature review examined current AR applications across various maritime operations\, including ship navigation\, construction\, and maintenance. The laboratory and field studies focused on the use of AR for collaborative ship navigation\, emphasizing three key constructs of collaboration – team decision-making\, TSA\, and communication – along with user experience and the advantages and disadvantages of AR. Findings indicate that AR can aid communication by simplifying information gathering\, displaying the same information to multiple crewmembers\, displaying complementary information to different crewmembers\, and providing visual tools like point-of-interest highlighting and crosshairs. \nTest opponent: Prof. Yoshifumi Kitamura\, Tohoku University\, Japan \n15:40 – PhD defense rehearsal talk: Nature–Machine Entanglement: Interacting with Biologically-Informed Flying Robots \nZiming Wang\, PhD candidate\, Chalmers/Luxembourg/Stanford \nAbstract: Nature and humanity have engaged in continuous\, evolving interactions throughout history and across technological epochs. I hypothesize that integrating natural characteristics into robot design can enrich HCI by leveraging our deep-rooted familiarity and affinity with the natural world. To explore this\, I conducted investigations focusing on close-range interactions with flying robots across various proxemic conditions\, employing a mixed-methods approach. \nTest opponent: Prof. Yoshifumi Kitamura\, Tohoku University\, Japan \n17:00 – Closing remarks \n19:00 – Informal discussion and food/drinks\, Amundsen Bar\, Terminus \n\nThursday June 5th \n10:00 – Picking up at Terminus entrance \n10:15 – Guided tour of MediaFutures office/research facility\, MediaFutures\, Media City Bergen \n\nFriday June 6th \n12:25 – Pick up in the atrium of Media City Bergen \n12:30–13:30 – Guest lecture: Boosting User Trust to Increase the Uptake of Recommendations\, Shlomo Berkovsky\, Macquarie U.\, Australia\, MediaFutures\, Media City Bergen
URL:https://mediafutures.no/event/bergen-hci-summer-seminar-2025-visit-from-tohoku-university/
LOCATION:Morning: Auditorium Egget\, Studentsenteret\; afternoon: Nordre Allmenning 3\, Nygårdsgaten 5
ATTACH;FMTTYPE=image/png:https://mediafutures.no/wp-content/uploads/screenshot_2023-09-23_at_21.15.03.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20250606T123000
DTEND;TZID=Europe/Oslo:20250606T133000
DTSTAMP:20260510T185207Z
CREATED:20250210T133803Z
LAST-MODIFIED:20250602T081115Z
UID:20235-1749213000-1749216600@mediafutures.no
SUMMARY:Boosting User Trust to Increase the Uptake of Recommendations
DESCRIPTION:We’re excited to welcome Professor Shlomo Berkovsky to MediaFutures for a special guest talk on June 6! Professor Berkovsky leads the Clinical AI and Sensing Technologies research stream at the Centre for Health Informatics\, Macquarie University. His work explores the intersection of AI\, health\, and media technologies\, including research on food recommender systems and user trust in recommendation interfaces. \nAbstract of the talk: \nThe level of trust a user places in a recommender is crucial to the success of recommendations. Although prior work established factors that build and sustain user trust\, their comparative power and impact on the uptake of recommendations received less attention. We conducted a user study examining the impact of various recommendation interfaces and content selection strategies on user trust. Following this\, we conducted another study that evaluated the impact of including proponents (people or avatars) in the recommendation interface. We will discuss these results and their implications for use and misuse in future recommendation interfaces. \nAbout the Speaker: \nShlomo Berkovsky is the leader of the Interactive Medical AI research stream at Macquarie University. The stream focuses on the use of Artificial Intelligence and Machine Learning methods to develop usable patient models and personalised predictions of diagnosis and care. The stream also studies how clinicians and patients interact with health technologies and how Large Language Models can improve patient care. His other areas of expertise include user modelling\, online personalisation\, and behaviour change technologies. \nWe’ll meet you at 12:25 in the atrium of Media City Bergen and take you up to the MediaFutures office on the 3rd floor together. \nFood will be served! \nIf you cannot join in person\, you can follow the presentation on Zoom: https://uib.zoom.us/j/65067583768?pwd=tJ39Ta5UIRjZsuwYtbVbULafFOAbUV.1
URL:https://mediafutures.no/event/shlomo-berkovsky/
LOCATION:SFI MediaFutures\, MCB
CATEGORIES:Seminar
ATTACH;FMTTYPE=image/png:https://mediafutures.no/wp-content/uploads/Frame-21-3.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20250612T090000
DTEND;TZID=Europe/Oslo:20250612T120000
DTSTAMP:20260510T185207Z
CREATED:20250523T075240Z
LAST-MODIFIED:20250523T075349Z
UID:20979-1749718800-1749729600@mediafutures.no
SUMMARY:fAIrgov-sluttkonferanse
DESCRIPTION:Artificial intelligence is often described as the electricity of our time\, and its development is accelerating. But what does this mean for the public sector\, and what is needed to maintain public trust?\n\nPublic Fairness Perceptions of Algorithmic Governance (fAIrgov) is a research project at NORCE that\, since 2018\, has been exploring Norwegian attitudes toward the use of AI in the public sector. The project team now invites you to its final conference: a morning of reflections\, insights\, and discussion\, featuring contributions from both researchers and public sector representatives. \nBoth academic and administrative staff at the Department of Information Science and Media Studies are welcome to attend the conference! \nRegistration form (limited number of seats!) \nDetails of the fAIrgov Final Conference:\n\nDate: June 12\nTime: 09:00–12:00\nVenue: Lillesalen\, Kulturhuset i Bergen\, Vaskerelven 8\, 5014 Bergen\nAfterwards: Lunch and informal research meetups from 12:00–14:00\n\nPreliminary Program:\n\n09:00–09:30 – Doors open\, coffee and mingling\n09:30–09:45 – Opening remarks by Camilla Stoltenberg\n09:45–10:05 – Case: AI in the Norwegian Tax Administration\, by Nina Serdarevic\n10:05–10:25 – Case: AI in NAV (Labour and Welfare Administration)\, by Robindra Prabhu\n10:25–10:45 – Break\n10:45–11:05 – Public support for AI\, by Mikael Poul Johannesson\, NORCE\n11:05–11:50 – Panel debate: What does it take for the Norwegian public sector to be ready for the AI revolution? – with Camilla Stoltenberg\, Annette Fagerhaug Stephansen\, Sveinung Arnesen\, Nina Serdarevic\, Robindra Prabhu\n11:50–12:00 – Closing remarks by Sveinung Arnesen\, fAIrgov project leader\n12:00–14:00 – Lunch and informal networking with researchers\n\nThe conference is free and open to everyone. It is particularly relevant for public sector employees\, researchers\, policymakers – and anyone curious about how AI might transform the public sector and affect us as citizens. 
\nFollow the event “Is the Norwegian Public Sector Ready for the AI Revolution?” on Facebook for updates and a link to the livestream if you’d like to attend digitally.
URL:https://mediafutures.no/event/fairgov-sluttkonferanse/
LOCATION:Lillesalen\, Kulturhuset i Bergen Vaskerelven 8\, 5014 Bergen
CATEGORIES:Seminar
ATTACH;FMTTYPE=image/jpeg:https://mediafutures.no/wp-content/uploads/Skjermbilde_23-5-2025_95155_bookdown.org_.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20250623T160000
DTEND;TZID=Europe/Oslo:20250623T200000
DTSTAMP:20260510T185207Z
CREATED:20250519T110045Z
LAST-MODIFIED:20250519T110045Z
UID:20967-1750694400-1750708800@mediafutures.no
SUMMARY:AI\, rights and development of Norwegian language models
DESCRIPTION:Artificial intelligence (AI) and language technology have developed rapidly in recent years. The language models used in AI services are trained on vast amounts of text\, often without permission from or compensation to the rights holders. \nIn Norway\, there is broad political agreement that we need Norwegian language models. Public and private AI services should reflect the Norwegian and Sámi languages\, history\, and culture\, and there is a need for Norwegian alternatives to international services. \nMuch is at stake. AI services are already weakening the economic foundation for\, among others\, translators\, illustrators\, and those who write and publish educational materials. At the same time\, the National Library has documented that language models improve when trained on protected content\, such as books and newspapers. \nDo Norwegian authors and publishers have a social responsibility to contribute content to Norwegian models if they are compensated? \nWould such agreements drain an already vulnerable cultural sector by accelerating a shift where human creativity loses ground? \nPerhaps this is a unique opportunity to set a precedent: that training on protected material must be paid for. New income streams for the cultural field could enable the creation and publication of new works of great value to society. \nPanel discussion featuring experts in the field: \nTrine Skei Grande\, CEO of the Norwegian Publishers Association \nEspen Ytreberg\, author and professor of media studies \nLilja Øvrelid\, professor and head of the Language Technology Group at the Department of Informatics\, University of Oslo\, MediaFutures Work Package 5 member \nHege Munch Gundersen\, CEO of Kopinor \nThe event is free and open to all. Welcome!
URL:https://mediafutures.no/event/ai-rights-and-development-of-norwegian-language-models/
LOCATION:Union Scene\, Drammen
CATEGORIES:Events
ATTACH;FMTTYPE=image/jpeg:https://mediafutures.no/wp-content/uploads/norske-sprakmodeller-event-drammen.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20250626T190000
DTEND;TZID=Europe/Oslo:20250626T210000
DTSTAMP:20260510T185207Z
CREATED:20250616T100736Z
LAST-MODIFIED:20250618T123503Z
UID:21065-1750964400-1750971600@mediafutures.no
SUMMARY:Pint of Science x MediaFutures
DESCRIPTION:Had Enough of Office Talk and Formal Seminars? \nYeah\, us too. \nThat’s why we’re shaking things up and teaming up with Pint of Science to bring research out of the lecture halls and into your local bar! Join us for our very first Pint of Science night\, right in the cosy atmosphere of Staatsraaden bar in Bergen. \nPint of Science is a global\, non-profit movement where thousands of researchers in 500+ cities share their work in pubs\, cafés\, and public spaces. No slides packed with jargon. No academic gatekeeping. Just real science\, served with a drink. \nThis evening’s theme: The Future of Media Technology \nExpect three short and super accessible talks from our MediaFutures researchers\, each just 10–15 minutes long. They’ve boiled down their work to the juicy bits so you can sip your beer and still get smarter. \nThe vibe? Casual\, fun\, and all about conversation. You’ll get to meet the scientists\, ask questions\, and even take part in a quick quiz to wrap up the night. \nSo whether you’re a media geek\, techie\, curious mind\, or just someone looking for a different kind of night out\, grab a friend or come solo and join us for a pint of science (or soda). \nWe can’t wait to meet you and hear what you think the future of media should look like. \nSpeakers: \nSamia Touileb\, Associate Professor – Talk title: Bias in Large Language Models \nYuki Onishi\, Researcher – Talk title: Can eye gaze show us what future TV production galleries look like? \nPeter Andrews\, PhD Candidate – Talk title: Enhancing Debates with an AI-Powered Political Co-Pilot
URL:https://mediafutures.no/event/pint-of-science-x-mediafutures/
LOCATION:Staatsraaden bar\, Bergen\, Bradbenken 2\, Bergen\, 5003\, Norway
CATEGORIES:Events
ATTACH;FMTTYPE=image/png:https://mediafutures.no/wp-content/uploads/Slice-3.png
END:VEVENT
END:VCALENDAR