
Japan-Norway Encounters 日本とノルウェーの出会い x Bergen HCI summer seminar
June 4 @ 10:00 - 17:00

On the 4th of June, MediaFutures professor Morten Fjeld and the HCI research group present the HCI summer seminar, featuring a special program for attendees and three PhD test defenses in HCI. Among the candidates is MediaFutures PhD candidate Peter Andrews (see program below).
We are proud to announce that visiting researchers from Tohoku University in Japan will also give presentations, alongside members of the UiB HCI research group presenting their work.
Human-Computer Interaction (HCI) is a subject with implications for research and development (R&D) in areas such as education, health, engineering, architecture, and media. While innovation is key in advancing HCI itself, innovation is also needed to advance research and industry in these areas.
During this event you will be able to see selected results of the UiB HCI infrastructure project. Presentations will show cutting-edge research demonstrations including conversational agents (Peter), motion capture (Miroslav), gaze tracking (Yuki), AR/VR technology (Paulina, Floris), and digital biomarkers (Vegard). Some of these projects are partially supported by the UiB digital accessibility initiative.
The Program
The program for each day is listed below.
Wednesday June 4th
Locations:
9:45-13:00 – Auditorium Egget, Studentsenteret
13:00-17:00 – Nordre Allmenning 3, Nygårdsgaten 5, Conference center
Time | Wednesday June 4th |
9:00 | Pick-up at Terminus entrance |
9:15 | Brief orientation at the Museum Garden |
9:45 | Welcome and opening remarks of the HCI summer seminar / Announcements |
10:00 |
PhD defense rehearsal talk: Peter Andrews, PhD candidate, UiB, Bergen
Abstract: This thesis unifies the second-screening experience with Computer Vision (CV) and Deep Learning (DL), building an interactive video framework that follows the From Video to Data → From Data to Narrative → From Narrative to Interaction paradigm. The result is a Multimodal Conversational Agent (MCA) that can hyper-contextualize video content. The framework encompasses three research questions; answering them gives a better grasp of what is needed to build an end-to-end interactive video framework with AI, while empirical research shows how the framework's capabilities can improve user experience and comprehension. To address these questions, I developed prototypes for interactive video in sports (football) and politics, approaching the framework in a modular manner with four in-house design prototypes: FootyVision, the Automated Commentary System (ACS), AiCommentator, and AiModerator. Collectively, these four prototypes demonstrate how CV- and NLP-based event detection and LLM-powered MCAs can synchronize and facilitate real-time interaction with video content. I tested the prototypes in lab-based mixed-methods studies and found that interactive video with an MCA can enhance engagement, immersion, and subjective understanding. However, a Human-AI Interaction (HAI) trade-off between automation and user control emerges: while a high degree of automation can tightly synchronize the experience, it comes at the cost of user control. The affordances of MCAs include multimodal feedback and remediation. Multimodal feedback supports subjective understanding, in line with the Cognitive Theory of Multimedia Learning (CTML). Remediation involves repurposing traditional roles in innovative ways: MCAs transform sports commentators and political moderators into remediated personas, leading to increased engagement. Moreover, MCAs can also push the user into a more objective viewing state, highlighting a trade-off between objectivity and emotional involvement. Finally, trust is paramount in high-stakes environments where transparency is crucial.
Test opponent: Shlomo Berkovsky, Macquarie U., Australia |
11:00 |
Keynote: Synthetic versus Real Media: From the Age of Signal Processing to the Battle between AI Models
Prof. Giulia Boato, University of Trento, Italy
Abstract: Over the last few decades, the realism of synthetic media has increased dramatically. The multimedia research community has developed techniques to distinguish real and fake media, initially focusing on images and, more recently, videos. Early methods relied heavily on signal processing and artefact detection. However, in recent years, AI-generated media has become so hyper-realistic that it is perceived as “more real than real.” This has sparked the AI-versus-AI battle towards new defensive models.
Bio: Giulia Boato is a Full Professor at the Department of Information Engineering and Computer Science, University of Trento, Italy. Since 2012, she has pioneered research at the intersection of signal processing, physiological signal analysis, and, more recently, advanced deep learning methods to distinguish between virtual and real humans. Her work also addresses various forms of digital media manipulation, with a recent focus on deepfake detection and forensic analysis in open-world scenarios such as social media. She has authored over 140 publications in international journals. Her research spans image and signal processing, multimedia data protection, and digital forensics. She is an elected member of both the IEEE Multimedia Signal Processing Technical Committee (MMSP TC) and the IEEE Information Forensics and Security Technical Committee (IFS TC). |
12:00 | Lunch, Cafe Smauet |
13:00 (40 mins) |
Short presentation: Tohoku University ICD Lab
Prof. Yoshifumi Kitamura (10 min): On the ICD Lab
Assist. Prof. Miao Cheng (5 min): Understanding emotion from bodily movements: database and cultural influence
Manato Abe, PhD student (5 min): Force Sensor Data Feedback Method for Industrial Robot-Arm Operation
Ryo Ooka, PhD student (5 min): Robotics-Enabled Spatial Information Experiences: Novel Presentation with Interactive Displays and Comfortable Furniture
Hongyue Xu, Master student (5 min): From Vision to Emotion: The Future of Human Health in the Age of AI
Yuhui Wang, Master student (5 min): Toward Practical VR: Designing Human-Centered Systems for Training and Well-being
Akira Murakami, Master student (5 min): Robotic Partitioning System for Adaptive Workspaces |
13:50 (50 mins) |
Short presentation: UiB HCI group
Prof. Morten Fjeld (5 min): From interactive tabletops to in-motion UIs
Prof. Frode Guribye (5 min): Research topic tbd
Assoc. Prof. Miroslav Bachinski (5 min): Simulating Users for Human-Computer Interaction
Yong Ma, Postdoc (5 min): Emotion-aware voice UIs
Pavel Okopnyi, Postdoc (5 min): Design Automation
Yuki Onishi, Postdoc (5 min): Production control room optimization with eye-tracking technology
Paulina Becerril Palma, PhD student (5 min): Accessible Mixed Reality
Mahya Jahanshahikhabisi, PhD student (5 min): A Digital Approach to Dementia Research: Integrating Digital Tools, AI, and Personalized Interventions in Dementia Management
Vegard Bolstad, Master student (5 min): Drawn to Mind: Instrumenting fine motor hand movement for cognitive assessment and identification of digital biomarkers for dementia
Andreas Tjeldflaat, Bachelor student (5 min): Tangible Privacy and Privacy Perception |
14:40 | Coffee break |
15:00 |
PhD defense rehearsal talk: Floris Hendrikus Johannes van den Oever, PhD candidate, UiB, Bergen
Abstract: High-quality collaboration is crucial for safe and efficient maritime operations such as ship navigation, port construction, and maintenance of offshore units. A key challenge for collaboration is that crewmembers have to share their different perspectives and information. Augmented reality (AR) has the potential to improve maritime collaboration by facilitating team decision-making, team situation awareness (TSA), and communication. This PhD project investigated the potential of AR to facilitate collaboration within maritime operations through three core studies: a systematic literature review, a laboratory study using virtual reality (VR), and a field study employing AR. The literature review examined current AR applications across various maritime operations, including ship navigation, construction, and maintenance. The laboratory and field studies focused on the use of AR for collaborative ship navigation, emphasizing three key constructs of collaboration (team decision-making, TSA, and communication) along with user experience and the advantages and disadvantages of AR. Findings indicate that AR can aid communication by simplifying information gathering, displaying the same information to multiple crewmembers, displaying complementary information to different crewmembers, and providing visual tools such as point-of-interest highlighting and crosshairs.
Test opponent: Prof. Yoshifumi Kitamura, Tohoku University, Japan |
15:40 |
PhD defense rehearsal talk: Ziming Wang, PhD candidate, Chalmers/Luxembourg/Stanford
Abstract: Nature and humanity have engaged in continuous, evolving interactions throughout history and across technological epochs. I hypothesize that integrating natural characteristics into robot design can enrich HCI by leveraging our deep-rooted familiarity and affinity with the natural world. To explore this, I conducted investigations focusing on close-range interactions with flying robots across various proxemic conditions, employing a mixed-methods approach.
Test opponent: Prof. Yoshifumi Kitamura, Tohoku University, Japan |
17:00 | Closing remarks |
19:00 | Informal discussion and food/drinks, Amundsen Bar, Terminus |
Thursday June 5th
Time | Thursday June 5th | Venue |
10:00 | Pick-up at Terminus entrance | |
10:15 | Guided tour of MediaFutures office/research facility | MediaFutures, Media City Bergen |
Friday June 6th
Time | Friday June 6th | Venue |
12:25 | Pick-up in the atrium of Media City Bergen | MediaFutures, Media City Bergen |
12:30-13:30 | Guest lecture: Boosting User Trust to Increase the Uptake of Recommendations, Shlomo Berkovsky, Macquarie U., Australia | MediaFutures, Media City Bergen |
