Last Updated on January 20, 2022 by Anna Pacholczyk

In our Meet the Team series, we introduce our audience to the brilliant people working at MediaFutures. Get to know us better by discovering who we are and what we do.

This week we are interviewing Pete Andrews, PhD Candidate in WP 4 – Media Content Interaction & Accessibility.

Can you introduce yourself and your scientific background?

My name is Peter Andrews and I started my Ph.D. at the University of Bergen and MediaFutures in December 2021. I am originally from North London but grew up in a small medieval town in central England called Warwick. This town is most famous for a castle built by William the Conqueror in 1068 and its Tudor-style architecture developed during King Henry VIII’s reign. When I am not working or going for long walks in the countryside, I enjoy developing games in Unity, creating apps with Xamarin, and playing the guitar. Before pursuing my Master of Science (MSc) in Multimedia Design and 3D Technologies, I worked as a medical multimedia and ophthalmic imaging practitioner in the National Health Service (NHS). My MSc dissertation, Computational 3D Image Reconstruction in Medical Imaging, was concerned with reconstructing 3D images of pre-surgical orthodontic/orthognathic patients for surgical planning purposes using stereo, multi-view, and holoscopic images. After completion, I studied data science and data mining at the Chinese Academy of Sciences in Beijing. There, I was involved in projects using Deep Learning (DL) to classify different types of bacteria collected from biopsies using Raman Spectroscopy.

Tell us about your research at MediaFutures and your role in Work Package 4.

At MediaFutures I am working within Work Package 4, which is concerned with creating adaptable systems that can enhance and personalize Human-Computer Interaction (HCI). Currently, most HCI technology is developed around a ‘one-size-fits-all’ model, which is detrimental and inefficient for many users. Therefore, there is a need to personalize methods of interaction for a more user-centric approach that supports users in their tasks. My research aims to collect intrinsic user data from multiple devices to build a user profile that visualizations can adapt to accordingly. We plan to build a state-of-the-art Artificial Intelligence (AI) system that mines the user profile to generate hyper-personalized visualizations, improving the user’s understanding of content.

What do you appreciate most about MediaFutures?

Everyone at MediaFutures is very friendly and approachable, which creates a really nice working environment. Working in a team of people from varied scientific backgrounds and cultures is extremely valuable. Our different backgrounds give rise to unique perspectives on identifying and solving problems. In turn, this can lead to interesting discussions that offer alternative viewpoints and a more holistic view of the topic at hand. Another advantage of working at MediaFutures is having industry partners close by, which helps us identify real-world problems and gives us access to original datasets.