BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//MediaFutures - ECPv6.15.13.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:MediaFutures
X-ORIGINAL-URL:https://mediafutures.no
X-WR-CALDESC:Events for MediaFutures
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Oslo
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20220327T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20221030T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20230326T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20231029T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20230605T120000
DTEND;TZID=Europe/Oslo:20230607T150000
DTSTAMP:20260406T145347Z
CREATED:20230418T132148Z
LAST-MODIFIED:20230512T092638Z
UID:14985-1685966400-1686150000@mediafutures.no
SUMMARY:Media City Bergen's Future Week: Workshop on Disinformation and Fake News
DESCRIPTION:In the wake of Russia’s invasion of Ukraine in February 2022\, there has been a marked increase in the spread of disinformation and propaganda by the Kremlin. The abundance of fabricated narratives has renewed attention to the potential of automated fact-checking technologies in combating false information online. \nAs part of MCB Future Week\, MediaFutures is organizing a workshop on June 5 to address the issue of disinformation and fake news. The workshop will bring together media tech professionals and scientists to discuss the latest research and innovations in the field. \nProgram\n12:00 – Welcome – Christoph Trattner \n12:10 – Faktisk.no/IJ – Henrik Brattli Vold \n12:40 – NORDIS – Laurence Dierickx \n13:10 – Coffee and snacks \n13:25 – MediaFutures – Duc-Tien Dang-Nguyen and Sohail Ahmed Khan \n14:05 – Factiverse/UiS – Vinay Setty \n14:35 – Discussion – Led by Duc-Tien Dang-Nguyen \n15:00 – End \nDescriptions of the talks and speakers\n \nFaktisk.no/IJ – How Faktisk Verifiserbar verifies the war \nSince the start of the war in Ukraine\, Faktisk Verifiserbar has been developing methods and workflows to counter propaganda and get precise and verified information out of social media and into Norwegian newsrooms. Over the course of these months\, the newsroom has trained more than 30 journalists in open-source intelligence (OSINT) gathering\, and their knowledge is being used to raise awareness of these methods in many large newsrooms all over Norway. Henrik Brattli Vold will show you how they succeeded\, and how they taught themselves to work with free or inexpensive tools to unmask the Ukrainian stories. \nHenrik Brattli Vold works at the Institute of Journalism in Pressens Hus\, and he is also a fellow of the Faktisk Verifiserbar newsroom. He has experience working in very different fields and mediums\, and he is now a journalism trainer. 
\n \nNORDIS – Fact-checking the Ukraine war \nAs part of the Nordic Observatory for Digital Media and Information Disorder (NORDIS)\, the University of Bergen focused on the tools and technologies likely to support or augment fact-checking practices. The research in this context encompasses state-of-the-art fact-checking technologies\, a study on Nordic fact-checkers’ user needs\, and the design and development of a set of multimedia forensic tools to support a human-in-the-loop approach\, taking end-user requirements into account. \nProfessional fact-checking activities can be schematized as a three-stage pipeline\, which consists of identifying claims\, verifying claims\, and providing a verdict. However\, this process is challenged by the complex application domains to which it relates – scientific\, economic\, political\, cultural\, social – and by the nature of the fact to check\, whether textual or audiovisual. Hence\, to better understand fact-checkers’ user needs and requirements in context\, we studied the challenges of the Russian-Ukrainian war\, which relates to attempts to manipulate public opinion through propaganda. What particular challenges do fact-checkers face? Does the socio-professional context affect the difficulties that fact-checkers encounter? Do fact-checkers perceive that they have adequate resources to perform their job efficiently? \nTo answer these questions\, the research method included structured interviews with fact-checkers and an online questionnaire distributed during the Global Fact 9 conference\, organized in June 2022 in Oslo. 85 fact-checkers from 46 countries participated in this survey. Initial results showed that the main challenges they face concern access to reliable sources on either side of the conflict. They also reported struggling to verify information presented in a manipulated context. Being part of a global fact-checking network is viewed as an asset for exchanging information. 
Fact-checkers had mixed views on the sufficiency of the tools at their disposal. However\, they agreed on the need for new technological tools to provide context and accurate translations. \nLaurence Dierickx has a professional background in data and computational journalism. She holds a master’s degree in information and communication science and technology and a PhD in information and communication science. She is a researcher at the Department of Information Science and Media Studies at the University of Bergen and a data journalism teacher at the Université Libre de Bruxelles (Belgium). \n  \n\nMediaFutures: The Future of Mis/Disinformation: The Risks of Generative Models and Detection Techniques for Countering Them \n  \nIn this talk\, we will explore the current progress of generative models and their potential for spreading misinformation and disinformation. Recently\, new generative models\, including GANs\, diffusion models\, and large language models\, have demonstrated remarkable progress in generating realistic fake (deepfake) content such as visuals\, audio\, and text. We will highlight the potential dangers that come with these powerful models\, as well as provide insight into the research efforts under way to detect fake content generated using these models. The challenges associated with detecting such content\, and the various efforts being devised to combat it\, will also be briefly discussed. Overall\, the talk will highlight the importance of remaining vigilant against the spread of mis/disinformation and the critical role that detection models might play in mitigating its impact. \n  \nSohail is a PhD candidate at MediaFutures and the University of Bergen. He holds an MSc in Cybersecurity and Artificial Intelligence from the University of Sheffield\, UK. Prior to joining MediaFutures\, Sohail worked as a research assistant at Mohamed bin Zayed University of AI\, Abu Dhabi\, UAE. 
Before that\, he worked as a remote research assistant at CYENS Centre of Excellence\, Nicosia\, Cyprus. His research interests intersect deep learning\, computer vision\, and multimedia forensics. Sohail is currently associated with MediaFutures’ Work Package 3\, i.e.\, Media Content Production and Analysis. \n  \n\n  \nMediaFutures: Detecting Cheapfakes – Lessons Learned from Three Years of Organizing the Grand Challenge \nThis talk discusses the challenges and lessons learned from three years of organizing a grand challenge focused on detecting out-of-context (OOC) images. Cheapfakes\, which refer to non-AI manipulations of multimedia content\, are more prevalent than deepfakes and can be created using editing software or by altering the context of media through misleading claims. Detecting OOC media is much harder than detecting fake media because the images and videos are not tampered with. \nOur challenge aims to develop and benchmark models that can detect whether a given news image and its associated captions are OOC\, based on the recently compiled COSMOS dataset. Participants have developed state-of-the-art methods\, and we will discuss the evaluation metrics used in the challenge. We have also learned valuable lessons on the complexities and nuances of detecting OOC images and the importance of creating diverse and representative datasets. \nAdditionally\, we will share insights on the interdisciplinary collaboration needed to combat cheapfakes effectively. The talk highlights the significance of detecting OOC media in news items\, specifically the misuse of real photographs with conflicting captions. \n  \nDuc-Tien Dang-Nguyen is an associate professor of computer science at the Department of Information Science and Media Studies\, University of Bergen. His main areas of expertise are multimedia forensics\, lifelogging\, multimedia retrieval\, and computer vision. 
He is a member of MediaFutures WP3 – Media Content Analysis and Production in Journalism – and the Nordic Observatory for Digital Media and Information Disorder (NORDIS). He is the author or co-author of more than 150 peer-reviewed and widely cited research papers. He is a PC member of a number of conferences in the fields of lifelogging\, multimedia forensics\, and pattern recognition. He has co-organised over 40 special sessions\, workshops\, and research challenges at ACM MM\, ACM ICMR\, NTCIR\, ImageCLEF\, and MediaEval during the last 10 years. He is General Chair of MMM 2023 and TPC Co-Chair of ACM ICMR 2024. \n  \n\n  \nFactiverse/UiS: Explainable AI for Automated Fact-Checking \nAutomated fact-checking using AI models has shown promising results in combating misinformation\, thanks to the availability of several large-scale datasets. However\, most models are opaque and do not provide the reasoning behind their predictions. Moreover\, with the recent popularity of LLMs such as GPT-3/4 by OpenAI\, Llama by Meta\, and Bard by Google\, there is renewed worry about misinformation. In this talk\, I will enumerate the existing approaches to explainable AI (XAI) for fact-checking and discuss the latest trends in this topic. The talk will also delve into what makes a good explanation in the context of fact-checking\, and identify potential avenues for future research to address the current limitations. \nVinay Setty is a co-founder and the CTO of Factiverse. He is also an Associate Professor at the University of Stavanger. His research broadly includes natural language understanding (NLU)\, information retrieval (IR)\, and text mining involving unstructured textual documents as well as structured knowledge graphs – specifically\, automated fact-checking\, question answering\, and conversational search over knowledge graphs. These days he is spending more time on his startup Factiverse\, with the mission of automating fact-checking using cutting-edge AI and NLP. 
He won the SR-Bank Innovation Prize for 2020 for using deep neural networks for fake news detection. He holds a PhD from the University of Oslo and was a postdoctoral researcher at the Max Planck Institute for Informatics in Germany. \n  \nYou can sign up for the workshop here.
URL:https://mediafutures.no/event/media-city-bergens-future-week-workshop-on-disinformation-and-fake-news/
CATEGORIES:Events
ATTACH;FMTTYPE=image/png:https://mediafutures.no/wp-content/uploads/Screenshot-2023-04-18-at-15.20.20.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20230609T120000
DTEND;TZID=Europe/Oslo:20230609T130000
DTSTAMP:20260406T145347Z
CREATED:20230602T165739Z
LAST-MODIFIED:20230602T170327Z
UID:15301-1686312000-1686315600@mediafutures.no
SUMMARY:MediaFutures Seminar: Hidden in Plain Sight: Microcelebrities navigating visibility and surveillance on Twitter with Özlem Demirkol Tønnesen\, Postdoctoral Fellow at the University of Bergen
DESCRIPTION:Özlem Demirkol Tønnesen from the University of Bergen will give a seminar on June 9th. \nTITLE: Hidden in Plain Sight: Microcelebrities navigating visibility and surveillance on Twitter\nWHEN: 9 June\, 12:00 – 13:00 \nWHERE: MediaFutures \nABSTRACT: \nOn social media\, microcelebrities and influencers play various roles and hold immense power as “focusers of attention” (Tufekci\, 2013). This project was formed as an intervention into the growing interest these content creators receive in academic research for their ability to guide their audiences towards the products and lifestyles they promote\, while the similar effects they may produce when sharing political opinions\, viewpoints\, and behaviors are neglected. \nThis talk is based on Özlem’s PhD research into Turkish Twitter microcelebrities and how they express antigovernment viewpoints and navigate the tensions between self-expression and self-preservation under an authoritarian regime. Drawing on an analysis of tweets by 97 microcelebrity accounts in the three months leading up to the 2018 Turkish elections\, she discusses how these accounts consider sharing political opinions and news a duty despite apparent risks\, and how they express their thoughts through a narration of everyday life and daily events\, formulating a language understood by subgroups acclimated to the platform culture. \nBIO: \nÖzlem Demirkol Tønnesen is a postdoctoral fellow on the ERC project PREPARE – Distributed and prepared: A new theory of citizens’ public connection networks in the age of datafication. She completed her PhD at the University of Southampton.
URL:https://mediafutures.no/event/mediafutures-seminar-hidden-in-plain-sight-microcelebrities-navigating-visibility-and-surveillance-on-twitter-with-ozlem-demirkol-tonnesen-postdoctoral-fellow-at-the-university-of-bergen/
CATEGORIES:Events,Seminar,WP1 Understanding Media Experiences
ATTACH;FMTTYPE=image/jpeg:https://mediafutures.no/wp-content/uploads/picture-406028-1683025124.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20230613
DTEND;VALUE=DATE:20230614
DTSTAMP:20260406T145347Z
CREATED:20230426T124333Z
LAST-MODIFIED:20230426T125336Z
UID:15078-1686614400-1686700799@mediafutures.no
SUMMARY:Symposium - Towards a Fairer Future of Education: Algorithmic Inequality and Learning Analytics
DESCRIPTION:Where & When: Scandic Hotel Ørnen in Bergen\, Wednesday June 13th. \nInvitation: SLATE Research Centre and MediaFutures invite you to the symposium “Towards a Fairer Future of Education: Algorithmic Inequality and Learning Analytics”! \nThe emerging applications of Learning Analytics and AI in education that employ student data to improve learning and teaching are gaining worldwide attention. How can data and analytics be used to empower learners\, and what is their impact on education? How can we ensure that AI and Learning Analytics are used to support learning while preserving their responsible usage? What are the challenges associated with striking this balance? What can constitute an equitable and fairer use of AI systems in education and Learning Analytics? \nThese are just some examples of the thought-provoking questions that we will highlight during the symposium! \nThe event will be held in person at Scandic Hotel Ørnen in Bergen on Wednesday June 13th. This is a physical symposium\, not a digital webinar\, and lunch\, coffee\, tea\, and snacks will be served. \nThe symposium program will be announced as we get closer to the event itself\, but the registration form is already open! \nThe registration deadline is June 8th. \nORGANISING COMMITTEE:\n\nMohammad Khalil\n\nMohammad Khalil is a senior researcher at the Centre for the Science of Learning & Technology (SLATE) and the project leader of the European Erasmus+ project Remote Intelligent Access to Labs in Higher Education (RIALHE). Khalil has a PhD degree in Engineering Sciences with distinction from Graz University of Technology. He worked as a postdoc at Delft University of Technology in Technology Enhanced Learning. Khalil is the PI of three Peder Sather projects in joint collaboration with the University of California\, Berkeley on Learning Analytics and recommendation systems in higher education. 
He has also been involved in several European projects: Scaling up Educational Innovation in Schools (SEIS)\, Open Educational Resources in Computational Biomedicine (OERCompBiomed)\, and European Network for Virtual lab & Interactive SImulated ONline learning 2027 (ENVISION2027). \nKhalil is currently an associate editor of the International Journal of Emerging Technologies in Learning (iJET) and has served as a guest editor at the Journal of Computing in Higher Education\, the Journal of Learning Analytics\, and the British Journal of Educational Technology. Khalil has more than 75 publications on ICT & learning. His key research interests include Technology Enhanced Learning\, AI in Education\, Learning Analytics\, and Privacy and Ethics. \n  \n\nMehdi Elahi\n\nMehdi Elahi is an Associate Professor at the University of Bergen (UiB)\, Norway. He obtained his PhD degree in Computer Science in 2014 and has since published more than 90 peer-reviewed journal and conference publications. His current citation count is 2900+ and his h-index is 25. His research has mainly focused on AI\, Data Science\, and Cognitive Science\, with an emphasis on their potential industrial applications\, such as Recommender Systems. He has also co-invented and co-owns an AI-related US patent. Mehdi Elahi has been involved in the authorship of several EU grant proposals\, including large-scale grants recently funded with a budget of 30 million euros\, where he will serve as a WP leader for 8 years. Before that\, he received prestigious research credits from major IT companies (i.e.\, Amazon and Google). His research findings have been published in some of the most prestigious reference literature of the field\, including a recently authored book. He has organized international data challenges together with top companies (i.e.\, Spotify and XING). 
\n\nBarbara Wasson\n\nBarbara Wasson\, Director of the Centre for the Science of Learning and Technology (SLATE)\, University of Bergen\, Norway\, is a full Professor in the Department of Information Science & Media Studies. She was one of the founders of Kaleidoscope\, a European Network of Excellence on Technology Enhanced Learning\, sat on its executive committee\, and was leader of its CSCL SIG with over 400 members. She is currently co-leader of the Learning Analytics Community Europe SIG (LACE). Wasson is an expert evaluator for the European Commission. Wasson is/has been a PI for numerous national and international projects\, including the recent Trond Mohn project ‘AI and Education: Layers of Trust’. She has over 120 publications in the field of Technology Enhanced Learning. \nIn addition to her work for the Council of Europe\, she is a member of the Norwegian Government’s expert group on learning analytics\, which is looking at the technical\, pedagogical\, legal\, and ethical aspects of the implementation of learning analytics in the Norwegian educational sector. \n\nFor more information click here. \n 
URL:https://mediafutures.no/event/symposium-towards-a-fairer-future-of-education-algorithmic-inequality-and-learning-analytics/
CATEGORIES:Events
ATTACH;FMTTYPE=image/png:https://mediafutures.no/wp-content/uploads/Screenshot-2023-04-26-at-14.40.36.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20230616T120000
DTEND;TZID=Europe/Oslo:20230616T130000
DTSTAMP:20260406T145347Z
CREATED:20230530T110452Z
LAST-MODIFIED:20230530T113510Z
UID:15252-1686916800-1686920400@mediafutures.no
SUMMARY:MediaFutures Seminar: Metrics for Measuring Normative Diversity in News Recommendations with Sanne Vrijenhoek from the University of Amsterdam
DESCRIPTION:Sanne Vrijenhoek from the University of Amsterdam will give a seminar on June 16th. \nTITLE: Metrics for Measuring Normative Diversity in News Recommendations \nWHEN: 16 June\, 12:00 – 13:00 \n \nABSTRACT: News recommenders have the potential to fulfill a crucial role in a democratic society\, directing news readers towards the information that is most important to them. However\, while much attention has been given to optimizing user engagement and enticing users to click\, much less research has been done on incorporating editorial values into news recommender systems. I will talk about our interdisciplinary work on defining normative diversity for news recommender systems\, challenges for implementation\, and the way forward. \n\n  \nBIO: Sanne Vrijenhoek is a PhD candidate at the University of Amsterdam and a member of the AI\, Media and Democracy Lab. Her work focuses on translating normative notions of diversity into quantifiable concepts that can be incorporated into news recommender system design.
URL:https://mediafutures.no/event/mediafutures-seminar-with-sanne-vrijenhoek-from-the-university-of-amsterdam/
CATEGORIES:Events,Seminar,WP1 Understanding Media Experiences
ATTACH;FMTTYPE=image/jpeg:https://mediafutures.no/wp-content/uploads/SVrijenhoek_1000.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20230629T150000
DTEND;TZID=Europe/Oslo:20230629T160000
DTSTAMP:20260406T145347Z
CREATED:20230613T214314Z
LAST-MODIFIED:20230814T070211Z
UID:15382-1688050800-1688054400@mediafutures.no
SUMMARY:MediaFutures Seminar: Reaching New Audiences with Generative AI with Lydia Chilton\, an Assistant Professor from Columbia University
DESCRIPTION:Lydia Chilton from Columbia University will give a seminar on June 29th. \n  \nTITLE: Reaching New Audiences with Generative AI \nWHEN: 29 June\, 15:00 – 16:00 \nPresentation: \nhttps://mediafutures.no/wp-content/uploads/Chilton_ReachingNewAudiences_June2023.pdf \nABSTRACT: \nWriting a news article is a significant investment of time\, energy\, and intellect. For an article to have the impact it deserves\, it needs to reach the right audiences. We demonstrate that generative AI tools can help journalists broaden their audience by transforming their content. Unlike previous technology\, generative AI is flexible enough to modify the length\, tone\, message\, and medium of content. But it needs human values and contextual understanding to guide it. We show how to turn articles into news illustrations for visual appeal on social media. We show how language models (like GPT-4) help find hooks to motivate topics so that they appeal to different audiences. And we show how to transform a traditional news article into TikTok-style reels. We then reflect on how generative AI can breathe new life into existing content through transformation\, reuse\, and retargeting of information. \n  \nBIO: \nLydia Chilton is an Assistant Professor in the Computer Science Department at Columbia University. Her research is in computational design – how computation and AI can help people with design\, innovation\, and creative problem-solving. Applications include creating media for journalism\, developing technology for public libraries\, improving risk communication during hurricanes\, helping scientists explain their work\, and improving mental health in marginalized communities. Dr. Chilton received her bachelor’s degree in computer science from MIT in 2007\, her Master’s in Engineering from MIT in 2009\, and her PhD from the University of Washington in 2016. She was a postdoc at Stanford for one year before joining Columbia Engineering in 2017. \n 
URL:https://mediafutures.no/event/mediafutures-seminar-reaching-new-audiences-with-generative-ai-with-lydia-chilton-an-assistant-professor-from-the-columbia-university/
CATEGORIES:Events,Seminar,WP3 Media Content Production & Analysis
ATTACH;FMTTYPE=image/jpeg:https://mediafutures.no/wp-content/uploads/Chilton.Lydia_.jpg
END:VEVENT
END:VCALENDAR