PhD halfway presentation by Peter Daniel Andrews
28 November 2023, 10:00–12:00
On November 28th, MediaFutures PhD candidate Peter Daniel Andrews will give his halfway presentation on his PhD work. Anyone is welcome to drop by to listen and ask questions.
Abstract
Young adults rank among the least engaged consumers of digital news media. Generation Z encounters news predominantly through social media platforms, significantly altering traditional patterns of news consumption. While watching video content, they often engage simultaneously with social media and other online resources, redefining traditional viewing habits. This cross-device multitasking serves mainly to find relevant information across platforms, adding context to the media being watched, while social media turns an individual experience into a collaborative one. This project aims to bring that cross-device experience into a single platform where young adults can interact with video content to explore information, improving both their understanding of digital news media and its accessibility.

Integrating an interactive layer on top of video content provides a more immersive and engaging experience. To make video interactive, however, contextual information must first be extracted so that users can easily interact with the content. Recent advances in Computer Vision (CV) and Artificial Intelligence (AI) allow for more sophisticated video content analysis. Using Object Detection and Multi-Object Tracking (MOT), the system draws contextual data from the video and links it to the relevant identities. By connecting these identities to an external dataset resource, information about and surrounding the content in the video becomes accessible, and users are free to explore whatever is relevant to their understanding and experience. A dynamic interactive layer provides the interface through which users engage directly with the content extracted by the CV and AI backend. Using Multimodal Conversational Agents (MCAs), users can interact seamlessly with the video through natural-language input while receiving feedback across multiple sensory modalities.

The project uses quantitative and qualitative methods to assess the framework's usability. The analysis evaluates the feasibility of such systems for helping young adults comprehend news media video content, and whether they enhance user engagement and immersion enough to justify continued and future application. By offering an interactive layer that supports deeper understanding of and engagement with content topics than traditional methods allow, and by leveraging MCAs to add a fun, immersive element to digital news consumption, this research aims to transform how young adults interact with and comprehend news media, countering current trends of low engagement and presenting a new paradigm for how they engage with the news.
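The abstract does not name specific models or data sources, so the following is only a minimal sketch of the detection-and-tracking stage under illustrative assumptions: an off-the-shelf Ultralytics YOLO detector with its built-in tracker stands in for the Object Detection/MOT backend, `news_clip.mp4` is a hypothetical input file, and `link_identity` is a placeholder for the entity-linking step against an external resource (Wikidata is used here purely as an example).

```python
# Sketch of the detection-and-tracking stage described in the abstract.
# Model choice and the entity-linking stub are illustrative assumptions,
# not the project's confirmed implementation.
from collections import defaultdict

from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # any pretrained detection checkpoint

# Track objects across frames; each track id is a candidate "identity"
# that can later be linked to an external knowledge resource.
tracks = defaultdict(list)  # track_id -> list of (frame_idx, class_name, bbox)
for frame_idx, result in enumerate(
    model.track(source="news_clip.mp4", persist=True, stream=True)
):
    if result.boxes.id is None:  # no confirmed tracks in this frame
        continue
    for box, track_id, cls in zip(
        result.boxes.xyxy, result.boxes.id.int(), result.boxes.cls.int()
    ):
        class_name = model.names[int(cls)]
        tracks[int(track_id)].append((frame_idx, class_name, box.tolist()))


def link_identity(class_name: str) -> str:
    """Placeholder for entity linking: map a tracked identity to an
    external knowledge resource (here, a Wikidata search URL) so the
    interactive layer can surface contextual information about it."""
    return f"https://www.wikidata.org/w/index.php?search={class_name}"


for track_id, observations in tracks.items():
    first_seen, class_name, _ = observations[0]
    print(track_id, class_name, f"first seen at frame {first_seen}",
          link_identity(class_name))
```

In the system described above, the track-to-identity table produced by a stage like this would feed the dynamic interactive layer, where the MCA answers natural-language queries about the linked identities.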