ABSTRACT: In this seminar, Duc Tien Dang Nguyen will give a broad overview of how researchers are advancing methods that detect and reveal modified and manipulated images and videos, and how trust in online media can be built on advanced multimedia verification algorithms. He will also discuss how AI can be used and misused in the era of deep networks.
ABSTRACT: The success of political movements that appear to be immune to any factual evidence contradicting their claims – from the Brexiteers to the ‘alt-right’, neo-fascist groups supporting Donald Trump – has reinvigorated claims that social media spaces constitute so-called ‘filter bubbles’ or ‘echo chambers’. But while such claims may appear intuitively true to politicians and journalists – who have themselves been accused of living in filter bubbles – the evidence that ordinary users experience their everyday social media environments as uniform and homophilous spaces is far more limited. For instance, a 2016 Pew Center study showed that only 23% of U.S. users on Facebook and 17% on Twitter say with confidence that most of their contacts’ views are similar to their own, while 20% have changed their minds about a political or social issue because of interactions on social media. Similarly, large-scale studies of follower and interaction networks on social media show that such networks are often thoroughly interconnected and facilitate the flow of information across boundaries of personal ideology and interest, except for a few especially hardcore partisan communities. This talk explores the evidence for and against echo chambers and filter bubbles, moving the present debate beyond a merely anecdotal footing and offering a more reliable assessment of this purported threat.
ABSTRACT: The increasing digitization of our society is radically changing how we use digital media, exchange information, and make decisions. This development also changes how social scientists collect data on human behavior and experience in the field. One new form of data comes from in-vivo, high-frequency mobile sensing via smartphones. Mobile sensing allows for the investigation of formerly intangible psychological constructs with objective data. In particular, it enables fine-grained, longitudinal data collection in the wild and at large scale. Combining mobile sensing with state-of-the-art machine learning methods further opens a perspective for the direct prediction of psychological traits and behavioral outcomes from these data. In this talk I will give an overview of my work combining machine learning with mobile sensing and discuss the opportunities and limitations of this approach. Finally, I will offer an outlook on where the routine use of mobile psychological sensing could take research and society alike.
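To make the sensing-to-prediction idea concrete, the following is a minimal, self-contained sketch: predicting a self-reported trait score from a single hypothetical smartphone feature (daily screen-time hours) with univariate least squares. The feature, the data, and the function names are all illustrative assumptions, not the speaker's actual pipeline, which uses rich multi-sensor features and far stronger models.

```python
from statistics import mean

# Toy sketch: fit a univariate least-squares model mapping one
# smartphone-derived feature (hypothetical daily screen-time hours)
# to a self-reported trait score.

def fit_univariate_ols(xs, ys):
    """Return (slope, intercept) minimizing squared prediction error."""
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def predict(model, x):
    """Predict a trait score for a new feature value."""
    slope, intercept = model
    return slope * x + intercept

# Hypothetical training data: screen-time hours -> trait score.
hours = [1.0, 2.0, 3.0, 4.0]
scores = [2.0, 4.0, 6.0, 8.0]
model = fit_univariate_ols(hours, scores)
```

In real mobile-sensing studies the interesting questions start where this sketch ends: feature engineering across many sensors, within- vs. between-person variance, and out-of-sample validation.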
ABSTRACT: Interest in automated fact-checking has grown as misinformation has become a major problem online. A typical pipeline for an automated fact-checking system consists of four steps: (1) detecting check-worthy claims, (2) retrieving relevant documents, (3) selecting the snippets most relevant to the claim, and (4) predicting the veracity of the claim. In this talk, I will discuss the use of state-of-the-art deep neural networks, such as LSTMs and Transformer architectures, for these steps. Specifically, I will show how deep hierarchical attention networks can be used to predict the veracity of claims, how their attention weights can be used to extract evidence for those claims, and how Transformer models can be used for check-worthy claim detection. Using several benchmarks from political debates and manual fact-checking websites such as Politifact and Snopes, we show that these models outperform strong baselines. I will also summarize the state of the art in automated fact-checking and conclude with a set of open challenges and problems in this area.
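The four-step pipeline above can be sketched end-to-end in a few lines. Everything here is an illustrative placeholder: the function names are mine, and each step uses a crude keyword/overlap heuristic where the talk's systems use trained neural models (attention networks, Transformers).

```python
# Toy sketch of the four-step fact-checking pipeline. All names and
# heuristics are illustrative stand-ins for the neural models the talk
# actually describes.

def detect_checkworthy(sentences):
    """Step 1: keep sentences that look like factual claims (toy cue list)."""
    cues = ("percent", "million", "always", "never", "increased")
    return [s for s in sentences if any(c in s.lower() for c in cues)]

def retrieve_documents(claim, corpus):
    """Step 2: rank documents by naive word overlap with the claim."""
    cw = set(claim.lower().split())
    return sorted(corpus, key=lambda d: -len(cw & set(d.lower().split())))

def select_snippets(claim, documents, k=1):
    """Step 3: pick the k sentences most relevant to the claim."""
    cw = set(claim.lower().split())
    snippets = [s for d in documents for s in d.split(". ")]
    return sorted(snippets, key=lambda s: -len(cw & set(s.lower().split())))[:k]

def predict_veracity(claim, snippets):
    """Step 4: toy verdict -- real systems use trained classifiers."""
    return "SUPPORTED" if snippets and "not" not in snippets[0].lower() else "UNVERIFIED"

def fact_check(sentences, corpus):
    """Run all four steps and return (claim, verdict) pairs."""
    results = []
    for claim in detect_checkworthy(sentences):
        docs = retrieve_documents(claim, corpus)
        snippets = select_snippets(claim, docs)
        results.append((claim, predict_veracity(claim, snippets)))
    return results
```

The structure, not the heuristics, is the point: each stage narrows the input for the next, which is why errors in early stages (e.g. missed check-worthy claims) propagate through the whole pipeline.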
ABSTRACT: This seminar will introduce JECT.AI, a new digital product for newsrooms that has emerged from previous research and development work. The use of AI technologies in newsrooms remains contentious. The JECT.AI developers therefore worked closely with journalists to design a product that augments journalists’ existing capabilities and ensures that journalists direct the product’s use. The seminar will demonstrate a series of JECT.AI features in the context of newsroom activities, to show how the product augments rather than inhibits how journalists work, and how it can enable newsrooms to operate more effectively.
ABSTRACT: Recommendation and ranking systems are known to suffer from popularity bias: the tendency of the algorithm to favor a few popular items while under-representing the majority of other items. Prior research has examined various approaches for mitigating popularity bias and enhancing the recommendation of long-tail, less popular items. The effectiveness of these approaches is often assessed using metrics that evaluate the extent to which over-concentration on popular items is reduced. However, little attention has been given to the user-centered evaluation of this bias: how users with different levels of interest in popular items (e.g., niche vs. blockbuster-focused users) are affected by such algorithms. In this talk, I first give an overview of the popularity bias problem in recommender systems. Then, I show the limitations of existing metrics for evaluating popularity bias mitigation from the users’ perspective, and I propose a new metric that addresses these limitations. In addition, I present an effective approach that mitigates popularity bias from a user-centered point of view. Finally, I examine several state-of-the-art mitigation approaches proposed in recent years and evaluate their performance both with the existing metrics and from the users’ perspective. Using two publicly available datasets, I show that many existing popularity bias mitigation techniques ignore users’ tolerance for popular items, whereas the proposed user-centered method tackles popularity bias effectively for different users while also improving the existing metrics.
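To illustrate what a user-centered popularity-bias evaluation can look like, here is a minimal sketch: for each user, compare the share of popular items in their interaction profile (their revealed tolerance for popular items) with the share in their recommendation list, and average the gap. The function names, the top-20% popularity cutoff, and the metric itself are my illustrative assumptions, not the specific metric proposed in the talk.

```python
from statistics import mean

# Toy user-centered popularity-bias metric: 0 means each user's
# recommendations contain popular items in the same proportion as
# that user's own profile; larger values mean a bigger mismatch.

def popular_items(interactions, top_share=0.2):
    """Items in the top `top_share` fraction by interaction count."""
    counts = {}
    for user_items in interactions.values():
        for item in user_items:
            counts[item] = counts.get(item, 0) + 1
    ranked = sorted(counts, key=counts.get, reverse=True)
    cutoff = max(1, int(len(ranked) * top_share))
    return set(ranked[:cutoff])

def popularity_share(items, popular):
    """Fraction of a list made up of popular items."""
    return sum(1 for i in items if i in popular) / len(items)

def user_popularity_deviation(interactions, recommendations):
    """Mean absolute gap, per user, between profile and
    recommendation-list popularity shares."""
    popular = popular_items(interactions)
    gaps = [abs(popularity_share(recommendations[u], popular)
                - popularity_share(interactions[u], popular))
            for u in interactions]
    return mean(gaps)
```

Unlike aggregate long-tail coverage metrics, this quantity is computed per user before averaging, so a recommender that floods niche-focused users with blockbusters is penalized even if its overall catalog coverage looks healthy.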
ABSTRACT: This talk is not a lecture. The goal is to use plain English. Why? To simply convey some practical information and insights. About what? On Privacy Enhancing Technologies; PETs for short. What for? So that you can answer the question in the title for yourself. (Correct, I won't do this one for you.) Why should you care to listen? Will it matter if you don't? To whom? And what to use PETs for? Can they be applied to an Amazon Echo or Google Home? A "smart" lightbulb or your "smart" TV? All of them? These questions I'll strive to answer, and I hope you will have more. Especially since, in this popular-science format, I will touch upon topics that should resonate with each of you and that are not limited to dark, dusty and narrow university corridors or ivory towers. Tangible examples include the reports of the Norwegian Consumer Council, Forbrukerrådet, on consumer-unfriendly practices. Similarly, recent NRK reports on location tracking through smartphone apps illustrate some of the issues that will be brought up in the talk.
ABSTRACT: In this talk, Rich Ling will examine the role of AI in social science research, as well as a taxonomy of fake news. On the first topic, Ling will discuss how AI is emerging as a new tool that will shape social science research in the coming years. On fake news, he will review how the phenomenon has been understood over the past decade and how researchers have approached it.
ABSTRACT: Recommender systems help users make decisions in situations where there is an abundance of choices; we encounter them in everyday life, for example in online shops. State-of-the-art research in recommender systems has shown the benefits of behavioural modeling, i.e., using past ratings, purchases, clicks, etc. to model user preferences. However, behavioural modeling cannot capture certain aspects of user preferences. In this talk I will show how complementary research in computational psychology, such as the detection of personality and emotions, can benefit recommender systems.
ABSTRACT: In today’s world, innovation appears to have replaced quality as the dominant concept in metajournalistic discourse. Innovation shapes the distribution of financial resources (more investment in technology) and working conditions (more freelance journalists as a flexible workforce). Innovation also works as a distinctive mark of professional status and sits at the center of antagonistic labor relations, e.g. the introduction of robot journalism.
The shift from quality discourse to innovation discourse involved a change in the journalistic perception of audiences: from being irrelevant (if not a negative concern) to being the main targets.
Although the question of how to reach audiences still seems dominant, news organizations appear to be becoming more open and sensitive to finding out how to become valuable to audiences: how to open their minds, broaden their horizons, and provide them with a quality experience that enlightens them with reliable information they consider worthwhile.
In this talk I will address the question of how innovation discourse and quality discourse may meet, by focusing on what audiences experience as valuable journalism. I will demonstrate how valuable journalism crystallized over the years into three key experiences: Learning something new, Getting recognition and Increasing mutual understanding.
MediaFutures is pleased to announce that Leo Leppänen, a computer science doctoral student in the Discovery Research Group at the University of Helsinki, Finland, will give a seminar on the topic of natural language generation. Welcome to all! TITLE: Natural Language Generation, Automated Journalism and Finding the Middle Road WHEN: Friday, 24 September […]
Carl-Gustav Lindén, Associate Professor in Data Journalism at the University of Bergen, will give a talk to the MediaFutures community on 15 October at 12:00 about the new project titled "Nordis - The Nordic Observatory for Digital Media and Information Disorder". The Nordis project is funded by the European Commission for a duration of two […]