Last Updated on September 7, 2021 by Kata Urban

Can you give us a short introduction about yourself and your background?

I’m an Associate Professor in Communication Studies and Computer Science at Northwestern University. There I direct the Computational Journalism Lab as well as the Technology and Social Behavior PhD Program. Although my PhD is in Computer Science, with a focus on Human-Computer Interaction, I’m quite interdisciplinary in my work. Over the years I’ve worked in Information Studies, Journalism, and now Communication Studies departments, but have always had an eye out for how computation is creating new opportunities and changes in adjacent information disciplines. Much of my research now is on computational journalism including aspects of automation and algorithms in news production, algorithmic accountability and transparency, and social media in news contexts. 

Do you think that automating the news is the future? What do you think about the challenges (ethical, social and economic) that our society is facing because of algorithms in the media industry?

In my book “Automating the News: How Algorithms are Rewriting the Media” I argue that automation has some clear advantages for speed, scope, scale, and personalization in news media. But I also argue that algorithms, automation, and AI need to be tightly coupled to humans with editorial expertise to create hybrid systems that leverage complementary human capabilities. The end goal should be to enhance the quality of news media and increase benefits to society by bringing algorithms and people together in smart ways. One of the challenges here is how to design these kinds of hybrid systems so that they embody important journalistic values and commitments to accuracy and public service. Ethical issues are important to keep front of mind when deploying algorithms in the media, whether that’s in how to integrate evidence from an uncertain machine learning system, how to label automation so that end-users are aware of its use, or how to check the bias of training datasets and model outputs so that those models don’t perpetuate biases or shape news media in unproductive ways. Designing appropriate ethical values into algorithmic media is essential to their responsible adoption by industry.

How did your work with MediaFutures start? What are your future plans at MediaFutures?

I’m grateful for the opportunity to participate in MediaFutures via my affiliation with the University of Bergen, Department of Information and Media Studies, where I am Associate Professor II. I will collaborate with my colleagues on, among other things, the question of how algorithms and AI in media systems create ethical issues related to bias and representation. I am also curious to explore the role of algorithmic prediction in news media and what it means for how news is produced by journalists and understood by the audience. Finally, I am keen to work with other MediaFutures researchers and partners on questions of design, including how to practice value-sensitive design for AI-driven media, and how to evaluate designed hybrid AI media systems across a range of use-contexts using simulations.