BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//MediaFutures - ECPv6.15.13.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:MediaFutures
X-ORIGINAL-URL:https://mediafutures.no
X-WR-CALDESC:Events for MediaFutures
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Oslo
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20200329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20201025T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20210328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20211031T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20220327T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20221030T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20230326T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20231029T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20240604T110000
DTEND;TZID=Europe/Oslo:20240604T112000
DTSTAMP:20260423T153216Z
CREATED:20240603T122232Z
LAST-MODIFIED:20240603T122232Z
UID:18885-1717498800-1717500000@mediafutures.no
SUMMARY:One size fits one - accessibility through preferences
DESCRIPTION:Yngvar Nordberg\, representing TV 2 Skole AS and WP4 on Media Content Interaction & Accessibility\, will be an invited presenter at the COST LEAD-ME Seminar on “Media Accessibility in the Age of Artificial Intelligence”\, taking place on June 4\, 2024\, at the St. Raphael Resort\, Limassol\, Cyprus.\n\n \nRemote participants are welcome: \nVideo call link: https://meet.google.com/mdr-wyvp-ccm \nYngvar Nordberg\, TV 2 Skole AS and WP4\, title: \nOne size fits one – accessibility through preferences \n\nIn 1992\, all state-run special schools were closed\, with the exception of schools for sign language students. The ideology was that special education should take place in a classroom setting together with peers at the local school. In the compulsory primary school\, achieving a ‘school for all’\, or one that is fully inclusive\, is an important policy goal and part of the official aims behind the system of education for all children in Norway. TV 2 Skole’s educational web service – elevkanalen.no – has for more than 10 years developed an API that governs a very successful preference dashboard. Teachers set preferences for their pupils: eye-gaze navigation\, one-button navigation\, two-button navigation\, etc.; symbol support; background colors\, font color. It is a user-centric design backed up by national activities in Standards Norway Committee 607 and ongoing work in ISO/IEC JTC1/SC 36/WG 7 – metadata for individualized accessibility. To achieve true accessibility\, AI is obviously the road ahead. Machine learning that automatically detects the user’s needs is already a component in our one-size-fits-one approach\, but we need more. This goes for navigation – AI-driven eye-gaze calibration – and for AI-driven tasks like alt text for pictures and captions for film for the blind; text levels\, etc. We believe our activities already have proven potential outside our own organisation in Norway and would love to present them in a LEAD-ME context. 
\nMore on COST LEAD-ME: \n\nLEAD-ME aims to help European stakeholders in the field of Media Accessibility to meet legal milestones requested by European legislation. Researchers\, engineers and scholars as well as businesses and policy makers will be empowered by LEAD-ME with a common and unique platform which will collect\, create\, and disseminate innovative technologies and solutions\, best practices and guidelines.\nhttps://lead-me-cost.eu/
URL:https://mediafutures.no/event/one-size-fits-one-accessibility-through-preferences/
LOCATION:Remote
CATEGORIES:Events,WP4 Media Content Interaction & Accessibility
ATTACH;FMTTYPE=image/png:https://mediafutures.no/wp-content/uploads/Frame-1-5-2.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20230511T124500
DTEND;TZID=Europe/Oslo:20230511T134500
DTSTAMP:20260423T153216Z
CREATED:20230607T085016Z
LAST-MODIFIED:20230607T085215Z
UID:15334-1683809100-1683812700@mediafutures.no
SUMMARY:Visualization for Mobile Devices and Embedded Experiences
DESCRIPTION:Welcome to a new department seminar! \n  \nAbstract: \nMobile devices\, such as smartwatches\, fitness bands\, and phones\, record a variety of data with the goal of making it immediately available to wearers. Smartwatch faces\, for example\, have become mini data dashboards that can give an overview of data such as step counts\, heart rates\, locations\, sleep information or even device-external data such as the current temperature or weather predictions. Due to their small screen size and usage context\, mobile device screens pose several novel and interesting challenges to visualization: visualizations need not only to be small and glanceable but also often to be read in motion. In this talk\, I will present a summary of our past work on mobile device visualization and outline open research opportunities. \n  \nBio:  \nPetra Isenberg is a research director (DR) at the Inria Saclay Centre at Université Paris-Saclay\, France\, in the Aviz team\, and part of the Computer Science Laboratory (LISN) of Université Paris-Saclay. Her main research areas are visualization and visual analytics with a focus on visualization for non-desktop devices\, interaction\, and evaluation. Petra’s research papers have received awards at premiere venues in Visualization and HCI such as IEEE VIS\, IEEE VAST\, ACM CHI\, or EuroVis. She is currently vice-chair of the IEEE VIS steering committee\, AEiC at IEEE CG&A\, and AE at Computer Graphics Forum. Prior to joining Inria\, Petra received her PhD from the University of Calgary in 2010 on collaborative information visualization\, and a Diplom Engineer degree in Computational Visualistics from the University of Magdeburg (2004).
URL:https://mediafutures.no/event/visualization-for-mobile-devices-and-embedded-experiences/
CATEGORIES:Events,WP4 Media Content Interaction & Accessibility
ATTACH;FMTTYPE=image/png:https://mediafutures.no/wp-content/uploads/petra_coll.png
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20230509
DTEND;VALUE=DATE:20230513
DTSTAMP:20260423T153216Z
CREATED:20230508T100836Z
LAST-MODIFIED:20230611T181423Z
UID:15130-1683590400-1683935999@mediafutures.no
SUMMARY:Bergen HCI and Visualisation Days 2023
DESCRIPTION:MediaFutures invites you to HCI and Visualisation Days 2023\nDAY 1: Tue 9th May 2023 (host: Morten) \n10:15: Interacting with AI: A visualization perspective: talk by Michael Sedlmair \nMeeting in smaller groups: \nWe invite you to present your respective PhD/PostDoc projects\, followed by discussion\, thereby aiming to: \n\ni) make your ongoing work known to experts in your field\nii) provide you with qualified feedback\niii) potentially open doors for future collaboration \n14.00: Floris van den Oever\, PhD student\, Psychology and InfoMedia\, Maritime Initiative\, UiB \nAR for ship bridges \n14.45: Ingar Arntzen\, PhD student\, Norce Research\, UiT and MediaFutures WP4 \nBridging the gap between Interactive and Linear Media Experiences by creating a programming model for timed application state \nDAY 2: Wed 10th May 2023 (smaller groups continued; host: Morten) \n10:00: Oda Elise Nordberg\, PhD student\, InfoMedia\, UiB \nConversational news interactions \n10:45: Peter Okopny\, PhD student\, InfoMedia\, UiB \nWhy don’t we have “Google Docs” for Video? \n12:15: Live demo of BSc students’ AR prototypes; link to event \n14:00: Rikke Aas\, PhD student\, Informatikk\, UiB \nResponsible Engagement in Humanized Visualizations \n14:45: Yong Ma\, PostDoc\, Medical faculty (K1)\, ANeED project\, UiB \nFrom Speech Emotion to Dementia with Lewy Bodies (DLB) Detection \nDAY 3: Thu 11th May 2023 (host: Helwig) \n12.45: Visualization for Mobile Devices and Embedded Experiences: talk by Petra Isenberg \nDAY 4: Fri 12th May 2023 (host: Helwig) \nAfternoon: A hike\, if the weather allows! Details to follow. \nWelcome! \nMorten Fjeld\, InfoMedia\, UiB\, and Helwig Hauser\, Informatikk\, UiB
URL:https://mediafutures.no/event/bergen-hci-and-visualisation-days-2023/
CATEGORIES:Events,WP4 Media Content Interaction & Accessibility
ATTACH;FMTTYPE=image/jpeg:https://mediafutures.no/wp-content/uploads/Untitled-design.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20230126T120000
DTEND;TZID=Europe/Oslo:20230126T130000
DTSTAMP:20260423T153216Z
CREATED:20230116T112909Z
LAST-MODIFIED:20230123T124853Z
UID:14147-1674734400-1674738000@mediafutures.no
SUMMARY:MediaFutures Seminar: Public expectations of transparency and accountability in citizens’ councils on data-driven media personalisation: Findings and methodological reflections with Ranjana Das\, Professor at the University of Surrey
DESCRIPTION:Ranjana Das\, Professor at the University of Surrey and currently visiting Erasmus fellow at the University of Bergen\, will give a seminar on January 26th. \nTITLE: Public expectations of transparency and accountability in citizens’ councils on data-driven media personalisation: Findings and methodological reflections \nWHEN: 26 January\, 12:00 – 13:00 \nWHERE: MediaFutures \nABSTRACT: \n\nThis talk presents findings and reflections across a set of three journal papers which emerged from a rigorous\, three-wave series of qualitative research into public expectations of data-driven media technologies\, conducted across locations in England\, by a team bringing together sociology\, engineering and media practice. Located within the context of the AI4ME project\, through a range of carefully chosen scenarios and deliberations around the risks and benefits afforded by data-driven media personalisation technologies\, we paid close attention to citizens’ voices\, as our multi-disciplinary team sought to engage the public on what ‘good’ might look like in the context of media personalisation. We paid particular attention to risks and opportunities\, examining practical use-cases and scenarios\, and our three-wave councils culminated in citizens producing recommendations for practice and policy. In this talk\, I will focus on citizens’ ethical assessment\, critique and improvements proposed on media personalisation methods in relation to benefits\, fairness\, safety\, transparency and accountability\, particularly within the broader contexts of their expectations around algorithms and algorithmic systems. 
I will conclude with some reflections on our citizens’ council methodology\, particularly the use of “vignettes” as scenarios as a method of data collection in user-centric algorithm studies\, in terms of their potential in inviting users’ contextual experiences of algorithms but also enabling more normative reflections on what “good” looks like in contemporary datafied societies. \nNote: The talk draws upon three journal papers from the team that are currently under review\, and the results presented should not be circulated further. \nBIO: \nRanjana Das is Professor in Media and Communication\, in the Department of Sociology\, at the University of Surrey. Her research interests span families\, parenting and parenthood\, mental health\, digital technologies and media audiences and users. Her upcoming book Anticipating algorithms: Algorithmic literacies of datafied parenthood will be published by Rowman & Littlefield. Link to project website: https://ai4me.surrey.ac.uk/  \nMore information on the speaker: https://www.surrey.ac.uk/people/ranjana-das \n  \nThe seminar is arranged by WP1 Understanding media experiences in SFI MediaFutures. At the seminar we will serve a light lunch on a first-come\, first-served basis.
URL:https://mediafutures.no/event/mediafutures-seminar-public-expectations-of-transparency-and-accountability-in-citizens-councils-on-data-driven-media-personalisation-findings-and-methodological-reflections-with-ranjana-da/
CATEGORIES:Events,Seminar,WP1 Understanding Media Experiences,WP4 Media Content Interaction & Accessibility
ATTACH;FMTTYPE=image/jpeg:https://mediafutures.no/wp-content/uploads/thumbnail_VPL_8679.web_-e1673868982390.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20230118T141500
DTEND;TZID=Europe/Oslo:20230118T151500
DTSTAMP:20260423T153216Z
CREATED:20221221T101326Z
LAST-MODIFIED:20230117T124156Z
UID:13942-1674051300-1674054900@mediafutures.no
SUMMARY:MediaFutures Seminar: Designing for Collaborative Video Editing with Pavel Okopnyi\, PhD Candidate at the University of Bergen
DESCRIPTION:Pavel Okopnyi\, a PhD Candidate at the University of Bergen\, will give a seminar on January 18th. \nTITLE: Designing for Collaborative Video Editing \nWHEN: 18 January\, 14:15 – 15:15 \nWHERE: MediaFutures & Zoom: \nhttps://uib.zoom.us/j/69273144707?pwd=cFdCKzlwZ0h6YXd6MVdTTmFZeWNvdz09 \nMeeting ID: 692 7314 4707\nPassword: daKA5X52\n \nABSTRACT: \n\nThe seminar will explore the design space of collaborative video editing through a series of design workshops with video editors. Collaborative video editing can be supported by adding awareness features or other well-known collaborative features found in existing software\, and by introducing new features designed specifically for video editing software. We identify different design concepts that illustrate how such collaborative features can be included in non-linear video editing software and discuss the challenges of introducing such features. Some design concepts are explicitly inspired by existing collaborative tools. However\, we suggest that introducing such features might not be straightforward. In other cases\, alternative abstract representations of time-based media might be necessary to support collaborative video editing. \nBIO: \nPavel Okopnyi is a PhD Candidate at the University of Bergen. His PhD thesis is focused on collaborative aspects of video production workflows. He has master’s degrees in Sociology and Human-Computer Interaction. Pavel’s interests include media production tools\, education\, software engineering\, and video games.
URL:https://mediafutures.no/event/mediafutures-seminar-designing-for-collaborative-video-editing-with-pavel-okopnyi-phd-candidate-at-the-university-of-bergen/
CATEGORIES:Events,Seminar,WP4 Media Content Interaction & Accessibility
ATTACH;FMTTYPE=image/jpeg:https://mediafutures.no/wp-content/uploads/photo1-e1671617906952.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20221212T131500
DTEND;TZID=Europe/Oslo:20221212T141500
DTSTAMP:20260423T153216Z
CREATED:20221202T122334Z
LAST-MODIFIED:20221212T120346Z
UID:13779-1670850900-1670854500@mediafutures.no
SUMMARY:MediaFutures Seminar: Maleficent Neural Networks with Michael Riegler\, Chief Research Scientist at SimulaMet
DESCRIPTION:Michael Riegler\, a Chief Research Scientist at SimulaMet and an Adjunct Associate Professor at the University of Tromsø – The Arctic University of Norway\, will give a seminar on 12 December. \nTITLE: Maleficent Neural Networks \nWHEN: 12 December\, 13:15-14:15 \nWHERE: MediaFutures and Zoom: \n\nhttps://uib.zoom.us/j/69019718716?pwd=TFd0WlFWZWZYekdrZEdQZ0ZQa2VjQT09\n\nMeeting ID: 690 1971 8716\nPassword: SpRcB0CD\nABSTRACT: \n\nNeural networks are nowadays used almost everywhere\, from our phones to smart devices such as fridges. Due to their complexity\, neural networks can not only solve the task at hand but can also be manipulated to perform other tasks. In this talk I will present a new method that allows hiding malicious code in neural networks and running this code on target systems. I will focus on the basic ideas and methods and provide an overview of methods that can help to counter these types of attacks. \nBIO: \nMichael Alexander Riegler received the Ph.D. degree from the Department of Informatics\, University of Oslo\, Oslo\, Norway. He is currently working as a Chief Research Scientist at SimulaMet\, Oslo\, and an Adjunct Associate Professor at the University of Tromsø – The Arctic University of Norway. His research interests are machine learning in combination with biomedical and social sciences. He is also a member of the Young Academy of Norway.
URL:https://mediafutures.no/event/mediafutures-seminar-maleficent-neural-networks-with-michael-riegler-an-adjunct-associate-professor-at-the-university-of-tromso-the-arctic-university-of-norway/
CATEGORIES:Events,WP3 Media Content Production & Analysis,WP4 Media Content Interaction & Accessibility
ATTACH;FMTTYPE=image/jpeg:https://mediafutures.no/wp-content/uploads/Unknown-e1669984694954.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20221208T110000
DTEND;TZID=Europe/Oslo:20221208T120000
DTSTAMP:20260423T153216Z
CREATED:20221121T084514Z
LAST-MODIFIED:20221202T122321Z
UID:13643-1670497200-1670500800@mediafutures.no
SUMMARY:MediaFutures Seminar: Heads-up Computing: Towards the next interaction paradigm with Shengdong Zhao\, an Associate Professor at the National University of Singapore
DESCRIPTION:Shengdong Zhao\, Associate Professor at the National University of Singapore\, will give a seminar on 8 December. \nTITLE: Heads-up Computing: Towards the next interaction paradigm \nWHEN: 8 December\, 11:00-12:00 \nWHERE: MediaFutures & Zoom: \nhttps://uib.zoom.us/j/66175341824?pwd=VU91bGJ2clovczNZY3lPTVpjT2JTZz09 \nMeeting ID: 661 7534 1824\nPassword: N4fMaE7q \n\nABSTRACT: \nInteraction paradigms (the style of interaction between humans and computers) can significantly change the way we work and live. However\, as much as we are empowered by interaction paradigms\, we are also significantly constrained by them. Existing interaction paradigms limit our movements and activities\, which can negatively affect our overall well-being. Desktop computing\, described as “sitting at a desk\, interpreting and manipulating symbols”\, isolates human beings from interacting with other human beings and nature. Mobile computing\, although it frees us from the office environment\, demands constant eye-and-hand engagement\, leading to the notorious phenomenon called “smartphone zombies”. We need a new style of interaction that can better support human activities in nature and with other people\, as well as reduce cognitive load by blending reactive operations with appropriately designed proactive initiatives that can offer just-in-time assistance. By carefully redesigning and integrating some of the latest technologies\, including wearable computing\, sensors\, voice-based multimodal I/O\, data analysis and prediction\, and distributed networking\, the new Heads-up Computing interaction paradigm can be designed to work seamlessly with our everyday movements and activities. It allows humans to interact with information while engaging in a variety of activities\, thereby facilitating a holistic lifestyle where essential human needs in work and life can be more seamlessly blended and fulfilled. 
\n \nBIO: \nShengdong Zhao is an associate professor in the Computer Science Department at the National University of Singapore (NUS). I am also a member of the NUS Graduate School for Integrative Sciences & Engineering. My PhD degree is in computer science from the University of Toronto. My master’s degree is from the School of Information Management & Systems\, University of California\, Berkeley. I also had a dual major in computer science and biology from Linfield College\, Oregon\, USA. I founded the NUS-HCI Lab at the National University of Singapore in January 2009. I am passionate about developing new interface tools and applications that can simplify and enrich people’s lives (e.g.\, Draco\, which won best iPad App of the year in 2016). I publish regularly in top HCI conferences and journals (ToCHI\, CHI\, UbiComp\, CSCW\, UIST\, IUI\, etc.). I am interested in connecting academic results with industry\, and have served as a senior consultant with the Huawei Consumer Business Group. I frequently participate in program committees of top HCI conferences\, and have worked as the paper co-chair for the ACM SIGCHI 2019 and 2020 conferences. In my leisure time\, I love to read\, run\, and explore nature. 
URL:https://mediafutures.no/event/mediafutures-seminar-heads-up-computing-towards-the-next-interaction-paradigm-with-shengdong-zhao-an-associate-professor-at-the-national-university-of-singapore/
CATEGORIES:Events,WP4 Media Content Interaction & Accessibility
ATTACH;FMTTYPE=image/jpeg:https://mediafutures.no/wp-content/uploads/zhaosd_2-e1669021155686.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20221020
DTEND;VALUE=DATE:20221022
DTSTAMP:20260423T153216Z
CREATED:20221009T101718Z
LAST-MODIFIED:20221009T102410Z
UID:13241-1666224000-1666396799@mediafutures.no
SUMMARY:Hackathon
DESCRIPTION:When: \nDate and time: 20 October from 13:00; 21 October from 09:00 \nLocation: Media City Bergen \nWhat: \nWe are arranging a hackathon in Bergen on 20–21 October. We will have one and a half days (and an evening with pizza & coding)\, and we’ll try to create some new and interesting experiments or demonstrations. \nIf you have any ideas\, please summarize them quickly and send them to njbo@norceresearch.no. \nIt’s also important that we have the necessary datasets as well\, so have that in mind! \nCurrent ideas: \n– Automatic subtitles using Whisper. We’ve already done some work on this (both TV and podcasts)\, and will try to focus more on visualizations and end-user experience. What could be products using this technology\, and for what content? \n– Football visualizations of automatically detected information. Pete creates datasets tracking and positioning players and the ball – how can we use this to make added-value services that are interesting for viewers? Games? Additional information? \n– Compare different subtitles. Are summarized subtitles better than verbatim? Are “fancysubs” that provide information about who speaks easier to read or comprehend? We have at least one good eye tracker available for this. \nContact: Njål Borch\, WP 4 leader\, njbo@norceresearch.no \nRegister: https://skjemaker.app.uib.no/view.php?id=13533119 \nPhoto credit: Be-novative
URL:https://mediafutures.no/event/hackathon/
CATEGORIES:Events,WP4 Media Content Interaction & Accessibility
ATTACH;FMTTYPE=image/jpeg:https://mediafutures.no/wp-content/uploads/1g3ARV1u4v9b9AKEIyPB7zw-e1665310575577.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20221004T130000
DTEND;TZID=Europe/Oslo:20221004T140000
DTSTAMP:20260423T153216Z
CREATED:20221001T143716Z
LAST-MODIFIED:20221003T073231Z
UID:13206-1664888400-1664892000@mediafutures.no
SUMMARY:MediaFutures Seminar: Media Accessibility: Current Solutions and Future Challenges with Pilar Orero\, Professor from the Universitat Autònoma de Barcelona
DESCRIPTION:Pilar Orero\, Professor at the Universitat Autònoma de Barcelona (Spain)\, will give a seminar on 4 October\, at 13:00. \nTITLE: Media Accessibility: Current Solutions and Future Challenges \nWHEN: Tuesday 4 October\, 13:00-14:00. There will be a lunch served at 12:30. \nWHERE: MediaFutures & Online \nZoom Meeting: https://uib.zoom.us/j/64942261686?pwd=eFBPb0lmM3FHN2xHK1pKNmFiMWwzQT09 \nMeeting ID: 649 4226 1686\nPassword: MB926L47 \nABSTRACT: \nThe presentation will start by describing the current situation of media accessibility\, which is at an unprecedented moment. The EU legal framework has forced national transpositions with concrete requirements. The European Accessibility Centre will open next year\, and EC funding for research is at an all-time high. Accessibility has moved from being an object of study\, as a silo\, to a horizontal prerequisite in any call where humans are involved. This presentation will describe existing funded solutions and identify possible challenges that may need funding. \nBIO: \nProfessor Pilar Orero\, PhD (UMIST\, UK)\, works at the Universitat Autònoma de Barcelona (Spain) in the TransMedia Catalonia Lab. She has written and edited many books\, nearly 100 academic papers and almost the same number of book chapters – all on Media Accessibility. She has led and participated in numerous EU-funded research projects focusing on media accessibility. She works in standardisation and participates in the UN ITU IRG-AVA – Intersector Rapporteur Group Audiovisual Media Accessibility\, ISO and ANEC. She has been working on Immersive Accessibility for the past 4 years\, first in a project called ImAc\, whose results are now being further developed in TRACTION\, MEDIAVERSE and MILE\, and she has just started to work on green accessibility in GREENSCENT. She leads the EU network LEAD-ME on Media Accessibility. \nFor more info please go to https://gent.uab.cat/pilarorero
URL:https://mediafutures.no/event/mediafutures-seminar-media-accessibility-current-solutions-and-future-challenges-with-pilar-orero-professor-from-the-universitat-autonoma-de-barcelona/
LOCATION:MediaFutures\, Media Futures HQ\, 3rd floor\, Bergen\, 5008
CATEGORIES:Seminar,WP4 Media Content Interaction & Accessibility
ATTACH;FMTTYPE=image/jpeg:https://mediafutures.no/wp-content/uploads/Picture-1-e1664648179569.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20220615T120000
DTEND;TZID=Europe/Oslo:20220615T130000
DTSTAMP:20260423T153216Z
CREATED:20220610T112016Z
LAST-MODIFIED:20220610T134101Z
UID:12233-1655294400-1655298000@mediafutures.no
SUMMARY:MediaFutures Seminar: Augment the Vision: To Help Users Deal with Different Domain Tasks. PhD Candidate\, Chalmers University of Technology\, Sweden.
DESCRIPTION:Yuchong Zhang\, PhD candidate at the Chalmers University of Technology in Sweden\, will give a seminar on 15 June\, at 12:00. \nTITLE: Augment the Vision: To Help Users Deal with Different Domain Tasks \nWHEN: Wednesday 15 June\, 12:00-13:00 \nWHERE: MediaFutures \n\nABSTRACT: \nAugmented reality (AR)\, a variation of virtual reality (VR) in which virtual objects are superimposed on the real world\, is a cutting-edge technique that has been demonstrated and applied in numerous fields due to its capability of providing interactive interfaces for visualized digital content. Moreover\, AR can provide functional tools that support users undertaking domain-related tasks\, especially facilitating them in data visualization and interaction because of its ability to jointly augment the physical space and the user’s perception. How to fully exploit the advantages of AR\, especially techniques that augment human vision to help users perform different domain tasks\, is the central part of my PhD research. \nBIO: \nYuchong Zhang is currently a Ph.D. candidate at the Chalmers University of Technology\, Sweden. His research interests include augmented reality\, interactive visualization and human-centred design. He received his MSc. degree from Nanyang Technological University\, Singapore in 2017.
URL:https://mediafutures.no/event/mediafutures-seminar-augment-the-vision-to-help-users-deal-with-different-domain-tasks-phd-candidate-chalmers-university-of-technology-sweden/
CATEGORIES:Events,Seminar,WP4 Media Content Interaction & Accessibility
ATTACH;FMTTYPE=image/jpeg:https://mediafutures.no/wp-content/uploads/IMG_3041-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20220504T141500
DTEND;TZID=Europe/Oslo:20220504T150000
DTSTAMP:20260423T153216Z
CREATED:20220425T120800Z
LAST-MODIFIED:20220705T065715Z
UID:11901-1651673700-1651676400@mediafutures.no
SUMMARY:Human-Computer Interaction lecture series: UX Research Design in Practice: Some Examples from Safety Critical Industrial Environments. Dr Duy Le\, senior research scientist at VNUHCM University of Science\, Vietnam
DESCRIPTION:Dr. Duy Le\, a senior research scientist and head of the Human-Computer Interaction division of SELab\, VNUHCM University of Science\, Vietnam\, will give a seminar on May 4\, at 14:15. \nTITLE: UX Research Design in Practice: Some Examples from Safety Critical Industrial Environments \nWHEN: Wednesday 4 May\, 14:15-15:00 \nWHERE: MediaFutures and Zoom: \nhttps://uib.zoom.us/j/69626023560?pwd=cVdKWjR1VithNUo4a1g0NTQ1OGpFdz09\n\nMeeting ID: 696 2602 3560\nPassword: icrxk07s\n \nABSTRACT: \nA typical user experience (UX) research and design process is a sequence of user empathization\, problem definition\, solution ideation\, designing\, and evaluation. However\, how this sequence is executed in practice can have several variants\, heavily depending on the resources and the constraints of the environment where the UX work is performed. In this talk\, we will explore some exemplary UX research and design projects targeting manufacturing plants\, which are examples of safety-critical industrial environments. The talk will highlight some particular contextual constraints in this kind of environment and then present how UX practitioners flexibly applied the typical design process to comply with the constraints while still adequately ensuring the quality of a user-centered design work. Besides that\, the talk will also demonstrate how different types of creative media such as storyboards\, paper-sketched user interfaces\, animated mockups\, and games can be flexibly used in industrial UX research and design projects. \nBIO: \nDr. Duy Le is currently a senior research scientist and head of the human-computer interaction division of SELab\, VNUHCM University of Science\, Vietnam. He is an HCI researcher with extensive working experience in both academia and industry. He obtained a PhD degree in human-computer interaction from Chalmers University of Technology\, Sweden. 
Prior to joining VNUHCM University of Science\, he spent two years as a research scientist in the UX research group of ABB Research Sweden\, where he performed several UX research projects spanning human-robot interaction\, augmented reality (AR)\, virtual reality (VR)\, and human-automation interaction. In his current position\, Duy is leading research on intelligent interactive systems\, which aim to combine user-centered design\, artificial intelligence and cutting-edge interactive technologies such as interactive surfaces\, AR\, VR\, and embodied interfaces to improve user efficacy and provide novel experiences.
URL:https://mediafutures.no/event/mediafutures-seminar-ux-research-design-in-practicesome-examples-from-safety-critical-industrial-environments-dr-duy-le-senior-research-scientist-and-head-of-the-human-computer-interaction-divisio/
CATEGORIES:Events,Seminar,WP4 Media Content Interaction & Accessibility
ATTACH;FMTTYPE=image/jpeg:https://mediafutures.no/wp-content/uploads/DuyLe_1-e1657004223713.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20220504T111500
DTEND;TZID=Europe/Oslo:20220504T120000
DTSTAMP:20260423T153216Z
CREATED:20220429T134009Z
LAST-MODIFIED:20220610T134834Z
UID:11919-1651662900-1651665600@mediafutures.no
SUMMARY:Human-Computer Interaction lecture series: Visual Attention in Face-to-Face and Computer-Mediated Interactions. Katarzyna Wisiecka\, PhD Candidate in Psychology and Informatics at SWPS University & Polish-Japanese Academy of Information Technology.
DESCRIPTION:Katarzyna Wisiecka\, PhD candidate in psychology and informatics at SWPS University & Polish-Japanese Academy of Information Technology\, will give a seminar on May 4th\, at 11:15. \nTITLE: Visual Attention in Face-to-Face and Computer-Mediated Interactions \nWHEN: Wednesday 4 May\, 11:15-12:00 \nWHERE: MediaFutures and Zoom: \nhttps://uib.zoom.us/j/61410167655?pwd=WFJXbGdMVXVuYk5sTDJ2bllPZ3Yydz09\n\nMeeting ID: 614 1016 7655\nPassword: 9VA5L6D1\n  \nABSTRACT: \nComputer-mediated interaction has become an integral part of our daily routines. Despite decreased non-verbal communication and face-to-face contact with collaboration partners\, people have learned how to work together remotely. The consequences of decreased non-verbal signals\, such as gaze communication\, for collaboration quality in remote settings have\, however\, not been fully investigated. The present PhD project intends to examine\, in four eye-tracking experiments\, the role of visual attention during face-to-face and computer-mediated interaction. The project has three interrelated aims: (1) examining the relationship between interaction quality and gaze patterns in remote and face-to-face collaboration; (2) facilitating workspace awareness among collaborators by visualization of the partner’s gaze direction; (3) investigating whether gaze communication enhances physiological synchronization measured by heart rate variability (HRV) in computer-mediated collaboration. Current results suggest that remote collaboration is challenging for participants and that its quality benefits from gaze visualizations during task solving. Enhancing gaze communication during remote collaboration has the potential to increase physiological synchronization between collaborators. Broadening the knowledge about physiological correlates of computer-mediated collaboration is a step towards developing gaze-based solutions tailored to remote interactions. \nBIO: \nKatarzyna Wisiecka\, M.A. 
in clinical psychology\, is currently a PhD candidate in psychology and informatics at SWPS University & Polish-Japanese Academy of Information Technology. She is a member of the Eye Tracking Research Center at SWPS University. Her main research interests include social synchronization and gaze communication in computer environments. She holds scholarships within grants funded by the National Science Center and the National Centre for Research and Development in Poland. She also takes part in numerous international projects on media accessibility and human-computer interaction\, such as LEAD-ME\, funded by the Horizon 2020 Framework Programme of the EU.
URL:https://mediafutures.no/event/human-computer-interaction-lecture-series-visual-attention-in-face-to-face-and-computer-mediated-interactions-phd-candidate-ph-d-candidate-in-psychology-and-informatics-at-swps-university-polish-j/
CATEGORIES:Events,Seminar,WP4 Media Content Interaction & Accessibility
ATTACH;FMTTYPE=image/jpeg:https://mediafutures.no/wp-content/uploads/kw.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20210603T130000
DTEND;TZID=Europe/Oslo:20210603T140000
DTSTAMP:20260423T153216
CREATED:20210325T080321Z
LAST-MODIFIED:20210520T112711Z
UID:5417-1622725200-1622728800@mediafutures.no
SUMMARY:Seminar: AI in the social sciences AND a taxonomy of fake news: Two research themes. Rich Ling\, Nanyang Technological University\, Singapore.
DESCRIPTION:Welcome to a seminar with Rich Ling\, Shaw Foundation Professor of Media Technology at Nanyang Technological University\, Singapore\, on Thursday\, 3 June. \nTITLE: AI in the social sciences AND a taxonomy of fake news: Two research themes.\nWHEN: Thursday\, 3 June 2021\, 13:00-14:00\nWHERE: https://uib.zoom.us/j/66200674584?pwd=a2lUZ1dNZEZNNzdycVI2c1Z3aHhuQT09\nMeeting ID: 662 0067 4584\nPassword: KC8KX7zi \nABSTRACT: In this talk\, Rich Ling will examine the role of AI in social science research. In addition\, he will examine a taxonomy of fake news. In the case of AI in the social sciences\, Ling will examine how this technology is emerging as a new tool that will shape social science research in the coming years. When considering fake news\, Ling will review how this phenomenon has been seen over the past decade and how researchers have approached it. \nBIO: Rich Ling is the Shaw Foundation Professor of Media Technology at Nanyang Technological University\, Singapore\, where he studies the social consequences of mobile communication. Ling has written The Mobile Connection (2004)\, New Tech\, New Ties (2008)\, and Taken for Grantedness (2012). He edits the Journal of Computer-Mediated Communication and is a founding co-editor of both Mobile Media and Communication and the Oxford University Press series Studies in Mobile Communication. He is a member of Det Norske Videnskaps-Akademi (The Norwegian Academy of Science and Letters) and Academia Europaea\, and a fellow of the International Communication Association. \nWelcome to all!
URL:https://mediafutures.no/event/seminar-rich-ling-nanyang-technological-university-singapore/
LOCATION:Online
CATEGORIES:Events,Seminar,WP4 Media Content Interaction & Accessibility
ATTACH;FMTTYPE=image/jpeg:https://mediafutures.no/wp-content/uploads/Rich-Ling-1-e1625469857856.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20210520T130000
DTEND;TZID=Europe/Oslo:20210520T140000
DTSTAMP:20260423T153216
CREATED:20210415T070650Z
LAST-MODIFIED:20210430T074213Z
UID:5566-1621515600-1621519200@mediafutures.no
SUMMARY:Seminar: Should we have PETs in "smart" homes? Tomasz Kosinski\, Chalmers University of Technology
DESCRIPTION:MediaFutures invites you to join us for a talk with PhD candidate Tomasz Kosinski\, Chalmers University of Technology. The topic of his talk is privacy controls for the IoT systems we are surrounded by. \nWelcome to all! \nTITLE: Should we have PETs in “smart” homes?\nWHEN: 20 May 2021\, 13:00-14:00\nWHERE: https://uib.zoom.us/j/66870039042?pwd=TElra0ZlL0dFaFlaMFh5TjJ3dklmdz09\nMeeting ID: 668 7003 9042\nPassword: 2vDBE%p% \nABSTRACT: This talk is not a lecture. The goal is to use plain English. Why? To simply convey some practical information and insights. About what? Privacy Enhancing Technologies\, PETs for short. What for? So that you can answer the question in the title for yourself. (Correct\, I won’t do this one for you.) Why should you care to listen? Will it matter if you don’t? To whom? And what are PETs for? Can they be applied to an Amazon Echo or Google Home? A “smart” lightbulb or your “smart” TV? All of them? These questions I’ll strive to answer. And I hope you will have more. Especially since\, in this popular-science format\, I will touch upon topics that should resonate with each of you and that are not confined to dark\, dusty\, narrow university corridors or ivory towers. Tangible examples include reports by the Norwegian Consumer Council (Forbrukerrådet) on consumer-unfriendly practices. Similarly\, recent NRK reports on location tracking through smartphone apps illustrate some of the issues that will be brought up in the talk. \nBIO: The guy who will insist on not answering the question in the title is currently a pre-graduation PhD candidate. He’s based at Chalmers\, a Swedish technical university located in Gothenburg. His core background is in Computer Science. His pre-PhD excursions involved software engineering\, embedded systems\, quadcopters\, AI methods\, and human-robot and human-computer interaction. 
During his PhD\, he has learned a bit about human studies (with mixed methods)\, scrutinizing smartphone apps (on Android)\, and analyzing data showing what IoT devices tend to send over networks. Oh\, and we didn’t bother asking about his off-work interests\, since his PhD is on privacy. You’d better follow him on social media for that.
URL:https://mediafutures.no/event/seminar-tomasz-kosinski-chalmers-university-of-technology/
LOCATION:Online
CATEGORIES:Events,Seminar,WP4 Media Content Interaction & Accessibility
ATTACH;FMTTYPE=image/jpeg:https://mediafutures.no/wp-content/uploads/Kosinski-170.jpg
END:VEVENT
END:VCALENDAR