Opening of SFI MediaFutures

We invite you to the digital opening of SFI MediaFutures – Research Centre for Responsible Media Technology and Innovation, on Tuesday 2nd of February, 14.00–15.30 CET. Together with 13 other R&D partners from the media and tech industry and academia, we are creating the first research centre in Norway developing responsible media technology, and […]

Seminar: Multimedia Verification, with Duc Tien Dang Nguyen, UiB

Online

ABSTRACT: In this seminar, Duc Tien Dang Nguyen will give a broad overview of how researchers are advancing methods that detect and reveal modified and manipulated images and videos, and how advanced multimedia verification algorithms can help build trust in online media. He will also discuss how AI can be used and misused in the era of deep networks.

Steering Board meeting

Online

MediaFutures' first steering board meeting will take place on 25th of March from 12:00 to 14:00. This is an internal event and will be held online.

Seminar: Are Filter Bubbles Real? Axel Bruns, QUT

Online

ABSTRACT: The success of political movements that appear to be immune to any factual evidence that contradicts their claims – from the Brexiteers to the ‘alt-right’ and neo-fascist groups supporting Donald Trump – has reinvigorated claims that social media spaces constitute so-called ‘filter bubbles’ or ‘echo chambers’. But while such claims may appear intuitively true to politicians and journalists – who have themselves been accused of living in filter bubbles – the evidence that ordinary users experience their everyday social media environments as uniform and homophilous spaces is far more limited. For instance, a 2016 Pew Center study showed that only 23% of U.S. users on Facebook and 17% on Twitter say with confidence that most of their contacts’ views are similar to their own, while 20% have changed their minds about a political or social issue because of interactions on social media. Similarly, large-scale studies of follower and interaction networks on social media show that such networks are often thoroughly interconnected and facilitate the flow of information across boundaries of personal ideology and interest, except for a few especially hardcore partisan communities. This talk explores the evidence for and against echo chambers and filter bubbles, moving the present debate beyond a merely anecdotal footing and offering a more reliable assessment of this purported threat.

Seminar: Reflections of Ourselves – Mobile Psychological Assessment with Smartphones. Clemens Stachl, Stanford University

Online

ABSTRACT: The increasing digitization of our society is radically changing how we use digital media, exchange information, and make decisions. This development also changes how social scientists collect data on human behavior and experience in the field. One new form of data comes from in-vivo, high-frequency mobile sensing via smartphones. Mobile sensing allows for the investigation of formerly intangible psychological constructs with objective data. In particular, it enables fine-grained, longitudinal data collection in the wild and at large scale. Combining mobile sensing with state-of-the-art machine learning methods further opens a perspective for the direct prediction of psychological traits and behavioral outcomes from these data. In this talk, I will give an overview of my work combining machine learning with mobile sensing and discuss the opportunities and limitations of this approach. Finally, I will offer an outlook on where the routine use of mobile psychological sensing could take research and society alike.
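As a purely illustrative example of the fine-grained behavioral features such sensing yields, raw smartphone event logs can be aggregated into daily summaries. The event format and feature names below are assumptions made for this sketch, not the actual pipeline discussed in the talk:

```python
def screen_features(events):
    # Aggregate raw (minute_timestamp, "on"/"off") screen events into
    # two daily features: total screen-on minutes and session count.
    total_minutes, sessions, turned_on_at = 0, 0, None
    for t, state in events:
        if state == "on" and turned_on_at is None:
            turned_on_at = t
            sessions += 1
        elif state == "off" and turned_on_at is not None:
            total_minutes += t - turned_on_at
            turned_on_at = None
    return {"screen_minutes": total_minutes, "sessions": sessions}

# One simulated day: two usage sessions (minutes 0-30 and 100-160).
day = [(0, "on"), (30, "off"), (100, "on"), (160, "off")]
features = screen_features(day)
```

Daily features like these, collected over weeks per participant, are the kind of longitudinal inputs that machine learning models can then relate to psychological traits.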

Seminar: DeepFact: Deep Learning for Automated Fact Checking. Vinay Setty, University of Stavanger

Online

ABSTRACT: Interest in automated fact-checking has increased as misinformation has become a major problem online. A typical pipeline for an automated fact-checking system consists of four steps: (1) detecting check-worthy claims, (2) retrieving relevant documents, (3) selecting the snippets most relevant to the claim, and (4) predicting the veracity of the claim. In this talk, I will discuss the use of state-of-the-art deep neural networks such as LSTMs and Transformer architectures for these steps – specifically, how deep hierarchical attention networks can be used to predict the veracity of claims, and how their attention weights can be used to extract the evidence for those claims. I will also cover check-worthy claim detection using Transformer models. Using several benchmarks from political debates and manual fact-checking websites such as Politifact and Snopes, we show that these models outperform strong baselines. I will conclude by summarizing the state of the art in automated fact-checking and outlining the challenges and open problems remaining in this area.
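The four-step pipeline above can be sketched with trivial stand-ins for each stage. The function names and keyword/word-overlap heuristics below are illustrative assumptions; the talk's systems replace each stage with LSTM or Transformer models:

```python
def is_check_worthy(claim):
    # Step 1: a keyword heuristic standing in for a Transformer-based
    # check-worthiness classifier.
    cues = {"percent", "million", "billion", "never", "always", "first"}
    return any(tok in cues for tok in claim.lower().split())

def retrieve(claim, corpus, k=2):
    # Step 2: rank documents by word overlap with the claim
    # (standing in for a real retrieval model).
    claim_toks = set(claim.lower().split())
    ranked = sorted(corpus,
                    key=lambda d: len(claim_toks & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def select_snippets(claim, docs):
    # Step 3: keep the sentence of each document that best matches the claim.
    claim_toks = set(claim.lower().split())
    best = []
    for doc in docs:
        sents = [s.strip() for s in doc.split(".") if s.strip()]
        best.append(max(sents,
                        key=lambda s: len(claim_toks & set(s.lower().split()))))
    return best

def predict_veracity(claim, snippets):
    # Step 4: a toy overlap score; the talk's systems instead run
    # hierarchical attention networks over the evidence.
    claim_toks = set(claim.lower().split())
    support = sum(len(claim_toks & set(s.lower().split())) for s in snippets)
    return "supported" if support >= len(claim_toks) else "not enough info"

corpus = [
    "Unemployment fell by two million last year. The economy grew.",
    "The weather in Bergen was rainy all week.",
]
claim = "Unemployment fell by two million"
evidence = select_snippets(claim, retrieve(claim, corpus))
verdict = predict_veracity(claim, evidence)
```

Each toy function maps one-to-one onto a pipeline stage, so swapping a heuristic for a learned model changes one function without touching the rest of the flow.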