BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//MediaFutures - ECPv6.15.13.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://mediafutures.no
X-WR-CALDESC:Events for MediaFutures
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Oslo
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20220327T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20221030T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20230326T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20231029T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20230605T120000
DTEND;TZID=Europe/Oslo:20230607T150000
DTSTAMP:20260406T165208Z
CREATED:20230418T132148Z
LAST-MODIFIED:20230512T092638Z
UID:14985-1685966400-1686150000@mediafutures.no
SUMMARY:Media City Bergen's Future Week: Workshop on Disinformation and Fake News
DESCRIPTION:In the wake of Russia’s invasion of Ukraine in February 2022\, there has been a marked increase in the spread of disinformation and propaganda by the Kremlin. The abundance of fabricated narratives has renewed attention to the potential that automated fact-checking technologies have in combating false information online. \nAs part of MCB Future Week\, MediaFutures is organizing a workshop on June 5 to address the issue of disinformation and fake news. The workshop will bring together media tech professionals and scientists to discuss the latest research and innovations in the field. \nProgram\n12:00 – Welcome – Christoph Trattner \n12:10 – Faktisk.no/IJ – Henrik Brattli Vold \n12:40 – NORDIS – Laurence Dierickx \n13:10 – Coffee and snacks \n13:25 – MediaFutures – Duc Tien Dang Nguyen and Sohail Ahmed Khan \n14:05 – Factiverse/UiS – Vinay Setty \n14:35 – Discussion – Led by Duc Tien Dang Nguyen \n15:00 – End \nDescriptions of the talks and speakers\n \nFaktisk.no/IJ – How Faktisk Verifiserbar verifies the war \nSince the start of the war in Ukraine\, Faktisk Verifiserbar has been developing methods and workflows to counter propaganda and get precise and verified information out of social media and into Norwegian newsrooms. Over the course of these months\, the newsroom has trained more than 30 journalists in open-source intelligence (OSINT) gathering\, and their expertise is being used to raise awareness of these methods in many large newsrooms all over Norway. Henrik Brattli Vold will show you how they succeeded\, and how they taught themselves to work with free or inexpensive tools to unmask the Ukrainian stories. \nHenrik Brattli Vold works at the Institute of Journalism in Pressens Hus\, and he is also a fellow of the Faktisk Verifiserbar newsroom. He has experience working in very different fields and media\, and he is now a journalism trainer.
\n \nNORDIS – Fact-checking the Ukraine war \nAs part of the Nordic Observatory for Digital Media and Information Disorder (NORDIS)\, the University of Bergen focused on the tools and technologies likely to support or augment fact-checking practices. The research in this context encompasses state-of-the-art fact-checking technologies\, a study on Nordic fact-checkers’ user needs\, and designing and developing a set of multimedia forensic tools to support a human-in-the-loop approach\, considering the end-user requirements. \nProfessional fact-checking activities can be schematized in a three-stage pipeline\, which consists of identifying claims\, verifying claims\, and providing a verdict. However\, this process is challenged by the complex application domain to which it relates – scientific\, economic\, political\, cultural\, social – and by the nature of the fact to check\, either textual or audiovisual. Hence\, to better understand fact-checkers’ user needs and requirements in context\, we studied the challenges of the Russian-Ukrainian war\, which relates to the attempts to manipulate public opinion through propaganda. What particular challenges do fact-checkers face? Does the socio-professional context affect the difficulties that fact-checkers encounter? Do fact-checkers perceive that they have adequate resources to perform their job efficiently? \nTo answer these questions\, the research method included structured interviews with fact-checkers and an online questionnaire distributed during the Global Fact 9 Conference\, organized in June 2022 in Oslo. 85 fact-checkers from 46 countries participated in this survey. Initial results showed that the main challenges they face concern access to reliable sources on either side of the conflict. They also reported struggling to verify information presented in a manipulated context. Being part of a global fact-checking network is viewed as an asset for exchanging information.
Fact-checkers had mixed views on the sufficiency of the tools at their disposal. However\, they agreed on the need for new technological tools to provide context and accurate translations. \nLaurence Dierickx has a professional background in data and computational journalism. She holds a master’s degree in information and communication science and technology and a PhD in information and communication science. She is a researcher at the Department of Information Science and Media Studies at the University of Bergen and a data journalism teacher at the Université Libre de Bruxelles (Belgium). \n  \n\nMediaFutures: The Future of Mis/Disinformation: The Risks of Generative Models and Detection Techniques for Countering Them \n  \nIn this talk\, we will explore the current progress of generative models and their potential in spreading misinformation and disinformation. Recently\, new generative models\, including GANs\, diffusion models\, and large language models\, have demonstrated remarkable progress in generating realistic fake (deepfake) content such as visuals\, audio\, and text. We will highlight the potential dangers that come with these powerful models\, as well as provide insight into the research efforts being devised to detect fake content generated using these models. The challenges associated with detecting such content and the various efforts being devised to combat it will also be briefly discussed. Overall\, the talk will highlight the importance of being vigilant against the spread of mis/disinformation and the critical role that detection models might play in mitigating its impact. \n  \nSohail is a PhD candidate at MediaFutures and the University of Bergen. He holds an MSc in Cybersecurity and Artificial Intelligence from the University of Sheffield\, UK. Prior to joining MediaFutures\, Sohail worked as a research assistant at Mohamed bin Zayed University of AI\, Abu Dhabi\, UAE.
Before that\, he worked as a remote research assistant at CYENS Centre of Excellence\, Nicosia\, Cyprus. His research interests lie at the intersection of deep learning\, computer vision\, and multimedia forensics. Sohail is currently associated with MediaFutures’ Work Package 3\, i.e.\, Media Content Production and Analysis. \n  \n\n  \nMediaFutures: Detecting Cheapfakes – Lessons Learned from Three Years of Organizing the Grand Challenge \nThis talk discusses the challenges and lessons learned from three years of organizing a grand challenge focused on detecting out-of-context (OOC) images. Cheapfakes\, which refer to non-AI manipulations of multimedia content\, are more prevalent than deepfakes and can be created using editing software or by altering the context of a media item through misleading claims. Detecting OOC media is much harder than detecting fake media because the images and videos are not tampered with. \nOur challenge aims to develop and benchmark models that can detect whether a given news image and its associated captions are OOC\, based on the recently compiled COSMOS dataset. Participants have developed state-of-the-art methods\, and we will discuss the evaluation metrics used in the challenge. We have also learned valuable lessons on the complexities and nuances of detecting OOC images and the importance of creating diverse and representative datasets. \nAdditionally\, we will share insights on the interdisciplinary collaboration needed to combat cheapfakes effectively. The talk highlights the significance of detecting OOC media in news items\, specifically the misuse of real photographs with conflicting captions. \n  \nDuc-Tien Dang-Nguyen is an associate professor of computer science at the Department of Information Science and Media Studies\, University of Bergen. His main areas of expertise are multimedia forensics\, lifelogging\, multimedia retrieval\, and computer vision.
He is a member of MediaFutures WP3 – Media Content Analysis and Production in Journalism – and the Nordic Observatory for Digital Media and Information Disorder (NORDIS). He is the author or co-author of more than 150 peer-reviewed and widely cited research papers. He is a PC member of a number of conferences in the fields of lifelogging\, multimedia forensics\, and pattern recognition. He has co-organized over 40 special sessions\, workshops\, and research challenges at ACM MM\, ACM ICMR\, NTCIR\, ImageCLEF\, and MediaEval during the last 10 years. He is General Chair of MMM 2023 and TPC Co-Chair of ACM ICMR 2024. \n  \n\n  \nFactiverse/UiS: Explainable AI for Automated Fact-Checking \nAutomated fact-checking using AI models has shown promising results in combating misinformation\, thanks to the several large-scale datasets that are available. However\, most models are opaque and do not provide reasoning behind their predictions. Moreover\, with the recent popularity of LLMs such as GPT-3/4 by OpenAI\, Llama by Meta\, and Bard by Google\, there is renewed worry about misinformation. In this talk\, I will enumerate the existing approaches to explainable AI (XAI) for fact-checking and discuss the latest trends in this topic. The talk will also delve into what makes a good explanation in the context of fact-checking and identify potential avenues for future research to address the current limitations. \nVinay Setty is a co-founder and the CTO of Factiverse. He is also an associate professor at the University of Stavanger. His research area broadly includes natural language understanding (NLU)\, information retrieval (IR)\, and text mining involving unstructured textual documents as well as structured knowledge graphs\, specifically automated fact-checking\, question answering\, and conversational search over knowledge graphs. These days he is spending more time on his startup Factiverse\, with the mission to automate fact-checking using cutting-edge AI and NLP.
He won the SR-Bank Innovation Prize in 2020 for using deep neural networks for fake news detection. He holds a PhD from the University of Oslo and was a postdoctoral researcher at the Max Planck Institute for Informatics in Germany. \n  \nYou can sign up for the workshop here.
URL:https://mediafutures.no/event/media-city-bergens-future-week-workshop-on-disinformation-and-fake-news/
CATEGORIES:Events
ATTACH;FMTTYPE=image/png:https://mediafutures.no/wp-content/uploads/Screenshot-2023-04-18-at-15.20.20.png
END:VEVENT
END:VCALENDAR