BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//MediaFutures - ECPv6.15.13.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://mediafutures.no
X-WR-CALDESC:Events for MediaFutures
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Oslo
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20210328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20211031T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20220327T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20221030T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20230326T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20231029T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20220609T093000
DTEND;TZID=Europe/Oslo:20220609T120000
DTSTAMP:20260513T221752Z
CREATED:20220513T101817Z
LAST-MODIFIED:20220610T134305Z
UID:12108-1654767000-1654776000@mediafutures.no
SUMMARY:Future Week. MediaFutures Workshop: Towards Responsible Media Technology & Innovation
DESCRIPTION:As part of the Media City Bergen Future Week\, MediaFutures will hold a workshop on June 9. \nTITLE: Towards Responsible Media Technology & Innovation \nWHEN: Thursday 9 June\, 09:30-12:00 \nWHERE: Media City Bergen \nREGISTRATION: https://mediacitybergen.no/future-week-22/ \nIn this workshop\, you will learn how MediaFutures’ young researchers across five work packages are developing new media technologies for media user engagement\, media content production\, media content interaction and accessibility\, and audience understanding\, and advancing them to the next level. Whether you are a journalist\, a media practitioner\, a developer\, a researcher\, or simply a curious individual\, the workshop will open new horizons on responsible media technology and innovation.\n\nThe topics we’ll touch upon during the workshop are:\n– Understanding media experience\n– User Modeling\, Personalization & Engagement\n– Media Content Analysis and Production\n– Media Content Interaction & Accessibility\n– Norwegian Language Technologies\n\nProgram:\n1. Christoph Trattner (Center Director): Opening and welcome speech\n2. Mehdi Elahi (Associate Prof./WP Leader\, WP2): “Research on Responsible Recommendation”\n3. Jonathan Geffen (PhD Candidate\, WP4): “Playing the News – Newsgames 101”\n4. Ana Milojevic (Postdoc Fellow\, WP1): “Connecting Media and Users – Analytics and Metrics”\n5. Coffee Break\n6. Sohail Ahmed Khan (PhD Candidate\, WP3): “Visual Content Verification in Era of Deepfakes”\n7. Huiling You (PhD Candidate\, WP5): “Event Extraction from News Articles”\n8. Round table discussion\n9. Wrap-up
URL:https://mediafutures.no/event/future-week-mediafutures-workshop-towards-responsible-media-technology-innovation/
CATEGORIES:Events
ATTACH;FMTTYPE=image/png:https://mediafutures.no/wp-content/uploads/Screenshot-2022-05-13-at-12.11.27.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20220615T100000
DTEND;TZID=Europe/Oslo:20220615T130000
DTSTAMP:20260513T221752Z
CREATED:20220610T115242Z
LAST-MODIFIED:20220610T134203Z
UID:12241-1655287200-1655298000@mediafutures.no
SUMMARY:MediaFutures WP1 DIGSSCORE-Workshop
DESCRIPTION:WP1 DIGSSCORE-Workshop \nOpen for all MediaFutures Partners. The first part will be in English\, and the second part will be in Norwegian/Scandinavian. \nWHERE: Zoom – https://uib.zoom.us/j/65597060274?pwd=Q0huZm9EY2s4VGJCSWdMdVRwTGRkZz09 \nPart 1 (10:00-11:00): Introduction to DIGSSCORE \n10:00-10:05: Welcome \n10:05-10:45: Introduction to DIGSSCORE by Erik Knudsen: What is it and what can MediaFutures partners use it for? (20 minutes presentation and 20 minutes for questions) \n10:45-11:00: Coffee break \nPart 2 (11:00-13:00): WP1 workshop in Scandinavian (with WP1 partners) \n11:00-11:15: Introduction to Schibsted’s research interests on hard-to-reach audiences and constructive journalism (10 minutes presentation and 5 minutes for questions) \n11:15-11:30: Introduction to NRK’s research interests on constructive journalism (10 minutes presentation and 5 minutes for questions) \n11:30-11:45: Discussing and finding overlapping interests for research \n11:45-12:00: Coffee break \n12:00-12:15: Opportunities for researching constructive journalism and hard-to-reach audiences using DIGSSCORE \n12:15-12:45: Designing the study \n12:45-13:00: Summing up and the road ahead
URL:https://mediafutures.no/event/mediafutures-wp1-digsscore-workshop/
CATEGORIES:Events,Seminar,WP1 Understanding Media Experiences
ATTACH;FMTTYPE=image/png:https://mediafutures.no/wp-content/uploads/digsscore-logo-m-tekst-under-hires_0.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20220615T120000
DTEND;TZID=Europe/Oslo:20220615T130000
DTSTAMP:20260513T221752Z
CREATED:20220610T112016Z
LAST-MODIFIED:20220610T134101Z
UID:12233-1655294400-1655298000@mediafutures.no
SUMMARY:MediaFutures Seminar: Augment the Vision: To Help Users Deal with Different Domain Tasks. Yuchong Zhang\, PhD Candidate\, Chalmers University of Technology\, Sweden.
DESCRIPTION:Yuchong Zhang\, PhD candidate at the Chalmers University of Technology in Sweden\, will give a seminar on 15 June at 12:00. \nTITLE: Augment the Vision: To Help Users Deal with Different Domain Tasks \nWHEN: Wednesday 15 June\, 12:00-13:00 \nWHERE: MediaFutures \nABSTRACT: \nAugmented reality (AR)\, a cutting-edge variation of virtual reality (VR) in which virtual objects are superimposed on the real world\, has been demonstrated and applied in numerous fields thanks to its capability to provide interactive interfaces for visualized digital content. Moreover\, AR can provide functional tools that support users undertaking domain-related tasks\, particularly in data visualization and interaction\, because of its ability to jointly augment the physical space and the user’s perception. How to fully exploit the advantages of AR\, especially techniques that augment human vision to help users perform different domain tasks\, is the central part of my PhD research. \nBIO: \nYuchong Zhang is currently a PhD candidate at the Chalmers University of Technology\, Sweden. His research interests include augmented reality\, interactive visualization\, and human-centred design. He received his MSc degree from Nanyang Technological University\, Singapore\, in 2017.
URL:https://mediafutures.no/event/mediafutures-seminar-augment-the-vision-to-help-users-deal-with-different-domain-tasks-phd-candidate-chalmers-university-of-technology-sweden/
CATEGORIES:Events,Seminar,WP4 Media Content Interaction & Accessibility
ATTACH;FMTTYPE=image/jpeg:https://mediafutures.no/wp-content/uploads/IMG_3041-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20220616T140000
DTEND;TZID=Europe/Oslo:20220616T150000
DTSTAMP:20260513T221752Z
CREATED:20220603T195225Z
LAST-MODIFIED:20220610T134015Z
UID:12215-1655388000-1655391600@mediafutures.no
SUMMARY:MediaFutures Seminar: Mind the Gaps and Normal Accidents with Dirk Hovy. Associate Professor\, Bocconi University in Milan\, Italy.
DESCRIPTION:Dirk Hovy\, Associate Professor of computer science at Bocconi University in Milan\, Italy\, will give a seminar on 16 June at 14:00. \nTITLE: Mind the Gaps and Normal Accidents \nWHEN: Thursday 16 June\, 14:00-15:00 \nWHERE: Zoom – https://uio.zoom.us/j/61874196241?pwd=RHJFZ1FYRzArdHkvbW9salZkd20yUT09 \nMeeting ID: 618 7419 6241\nPassword: 577113\nABSTRACT: \nNLP is now stable enough to be used in production systems\, and it will soon become even more pervasive. However\, even today’s systems are already highly complex and unpredictable. As they become more ubiquitous\, different algorithms will interact with each other directly\, leading to tightly coupled systems whose capacity to cause harm we will be unable to predict. In his book Normal Accidents\, the sociologist Charles Perrow proposed a framework for analyzing technologies and their risks according to their complexity and the interdependence of their components. He showed that accidents were nigh on unavoidable due to those two features. We apply Perrow’s framework to NLP to assess its potential risks. We argue that under the current paradigm\, “normal accidents” are built into the system\, and that it is only a matter of time before they emerge. Some issues in current NLP practice that contribute to this development are: \n– the early adoption of methods without sufficient understanding or analysis; \n– the preference for computational methods regardless of the risks associated with their limitations; \n– the dangers of unexplainable methods. \nIf these issues are not addressed\, we risk a loss of reproducibility\, reputability\, and subsequently public trust in our field. However\, awareness of these factors can help us plan for making our systems safer and more reliable. \nBIO: \nDirk Hovy is Associate Professor of computer science at Bocconi University in Milan\, Italy. Before that\, he was faculty and a postdoc in Copenhagen\, received his PhD from USC\, and earned a master’s degree in linguistics in Germany. He is interested in the interaction between language\, society\, and machine learning\, or what language can tell us about society\, and what computers can tell us about language. He has authored over 70 articles on these topics\, including three best-paper awards. He has organized one conference and several workshops (on abusive language\, ethics in NLP\, and computational social science). Dirk recently received an ERC Starting Grant for a project on demographic factors and bias in NLP. Outside of work\, Dirk enjoys cooking\, running\, and leather-crafting. For updated information\, see https://www.dirkhovy.com
URL:https://mediafutures.no/event/mediafutures-seminar-mind-the-gaps-and-normal-accidents-with-dirk-hovy-associate-professor-bocconi-university-in-milan-italy/
CATEGORIES:Events,Seminar,WP5 Norwegian Language Technologies
ATTACH;FMTTYPE=image/jpeg:https://mediafutures.no/wp-content/uploads/portrait_small20180221200643.jpeg
END:VEVENT
END:VCALENDAR