Senior Researcher Njål Borch
Work Package Leader
2024
Arntzen, Ingar M; Borch, Njål; Andersen, Anders
Control-driven Media. A unifying model for consistent, cross-platform multimedia experiences Journal Article
In: FTC 2024 International Journal of Advanced Computer Science and Applications (IJACSA), 2024.
@article{controldrivingar24,
title = {Control-driven Media. A unifying model for consistent, cross-platform multimedia experiences},
author = {Ingar M Arntzen and Njål Borch and Anders Andersen},
url = {https://mediafutures.no/preprint_cdm/},
year = {2024},
date = {2024-11-24},
journal = {FTC 2024 International Journal of Advanced Computer Science and Applications (IJACSA)},
abstract = {Targeting a diverse consumer base, many media providers offer complementary products on different platforms. Online sports coverage for instance, may include professionally produced audio and video channels, as well as Web pages and native apps offering live statistics, maps, data visualizations, social commentary and more. Many consumers are also engaging in parallel usage, setting up streaming products and interactive interfaces on available screens, laptops and handheld devices. This ability to combine products holds great promise, yet, with no coordination, cross-platform user experiences often appear inconsistent and disconnected.
We present \emph{Control-driven Media (CdM)}, a new media model adding support for coordination and consistency across interfaces, devices, products and platforms, while also remaining compatible with existing services, technologies and workflows. CdM promotes online media control as an independent resource type in multimedia systems. With control as a driving force, CdM offers a highly flexible model, opening up for further innovations in automation, personalization, multi-device support, collaboration and time-driven visualization. Furthermore, CdM bridges the gap between continuous media and Web/native apps, allowing the combined powers of these platforms to be seamlessly exploited as parts of a single, consistent user experience.
CdM is supported by extensive research in time-dependent, multi-device, data-driven media experiences. In particular, State Trajectory, a unifying concept for online, timeline-consistent media control, has recently been proposed as a generic solution for media control in CdM. This paper makes the case for CdM, bringing a significant potential to the attention of research and industry.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Andrews, Peter; Nordberg, Oda Elise; Guribye, Frode; Fjeld, Morten; Borch, Njål
Designing for Automated Sports Commentary Systems Conference
IMX'24, 2024.
@conference{designing_for_automated24,
title = {Designing for Automated Sports Commentary Systems},
author = {Peter Andrews and Oda Elise Nordberg and Frode Guribye and Morten Fjeld and Njål Borch},
url = {https://mediafutures.no/designing_for_automated_sports_commentary_systems-2/},
year = {2024},
date = {2024-06-12},
booktitle = {IMX'24},
abstract = {Advancements in Natural Language Processing (NLP) and Computer Vision (CV) are revolutionizing how we experience sports broadcasting. Traditionally, sports commentary has played a crucial role in enhancing viewer understanding and engagement with live games. Yet, the prospects of automated commentary, especially in light of these technological advancements and their impact on viewers’ experience, remain largely unexplored. This paper elaborates upon an innovative automated commentary system that integrates NLP and CV to provide a multimodal experience, combining auditory feedback through text-to-speech and visual cues, known as italicizing, for real-time in-game commentary. The system supports color commentary, which aims to inform the viewer of information surrounding the game by pulling additional content from a database. Moreover, it also supports play-by-play commentary covering in-game developments derived from an event system based on CV. As the system reinvents the role of commentary in sports video, we must consider the design and implications of multimodal artificial commentators. A focused user study with eight participants aimed at understanding the design implications of such multimodal artificial commentators reveals critical insights. Key findings emphasize the importance of language precision, content relevance, and delivery style in automated commentary, underscoring the necessity for personalization to meet diverse viewer preferences. Our results validate the potential value and effectiveness of multimodal feedback and derive design considerations, particularly in personalizing content to revolutionize the role of commentary in sports broadcasts.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Andrews, Peter; Borch, Njål; Fjeld, Morten
FootyVision: Multi-Object Tracking, Localisation, and Augmentation of Players and Ball in Football Video Conference
ACM ICMIP, 2024.
@conference{Footyvision1,
title = {FootyVision: Multi-Object Tracking, Localisation, and Augmentation of Players and Ball in Football Video},
author = {Peter Andrews and Njål Borch and Morten Fjeld},
url = {https://mediafutures.no/peterandrews-footyvision-icmip24-final/},
year = {2024},
date = {2024-04-20},
booktitle = {ACM ICMIP},
abstract = {Football video content analysis is a rapidly evolving field aiming to enrich the viewing experience of football matches. Current research often focuses on specific tasks like player and/or ball detection, tracking, and localisation in top-down views. Our study strives to integrate these efforts into a comprehensive Multi-Object Tracking (MOT) model capable of handling perspective transformations. Our framework, FootyVision, employs a YOLOv7 backbone trained on an extended player and ball dataset. The MOT module builds a gallery and assigns identities via the Hungarian algorithm based on feature embeddings, bounding box intersection over union, distance, and velocity. A novel component of our model is the perspective transformation module that leverages activation maps from the YOLOv7 backbone to compute homographies using lines, intersection points, and ellipses. This method effectively adapts to dynamic and uncalibrated video data, even in viewpoints with limited visual information. In terms of performance, FootyVision sets new benchmarks. The model achieves a mean average precision (mAP) of 95.7% and an F1-score of 95.5% in object detection. For MOT, it demonstrates robust capabilities, with an IDF1 score of approximately 93% on both ISSIA and SoccerNet datasets. For SoccerNet, it reaches a MOTA of 94.04% and shows competitive results for ISSIA. Additionally, FootyVision scores a HOTA(0) of 93.1% and an overall HOTA of 72.16% for the SoccerNet dataset. Our ablation study confirms the effectiveness of the selected tracking features and identifies key attributes for further improvement. While the model excels in maintaining track accuracy throughout the testing dataset, we recognise the potential to enhance spatial-location accuracy.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Andrews, Peter; Nordberg, Oda Elise; Guribye, Frode; Fujita, Kazuyuki; Fjeld, Morten; Borch, Njål
AiCommentator: A Multimodal Conversational Agent for Embedded Visualization in Football Viewing Conference
Intelligent User Interfaces (IUI), 2024.
@conference{AIComment,
title = {AiCommentator: A Multimodal Conversational Agent for Embedded Visualization in Football Viewing},
author = {Peter Andrews and Oda Elise Nordberg and Frode Guribye and Kazuyuki Fujita and Morten Fjeld and Njål Borch},
url = {https://mediafutures.no/acm_iui_24_aicommentator_peterandrews-1/},
year = {2024},
date = {2024-03-18},
urldate = {2024-03-18},
booktitle = {Intelligent User Interfaces (IUI)},
journal = {Intelligent User Interfaces (IUI)},
abstract = {Traditionally, sports commentators provide viewers with diverse information, encompassing in-game developments and player performances. Yet young adult football viewers increasingly use mobile devices for deeper insights during football matches. Such insights into players on the pitch and performance statistics support viewers’ understanding of game stakes, creating a more engaging viewing experience. Inspired by commentators’ traditional roles and to incorporate information into a single platform, we developed AiCommentator, a Multimodal Conversational Agent (MCA) for embedded visualization and conversational interactions in football broadcast video. AiCommentator integrates embedded visualization, either with an automated non-interactive or with a responsive interactive commentary mode. Our system builds upon multimodal techniques, integrating computer vision and large language models, to demonstrate ways for designing tailored, interactive sports-viewing content. AiCommentator’s event system infers game states based on a multi-object tracking algorithm and computer vision backend, facilitating automated responsive commentary. We address three key topics: evaluating young adults’ satisfaction and immersion across the two viewing modes, enhancing viewer understanding of in-game events and players on the pitch, and devising methods to present this information in a usable manner. In a mixed-method evaluation (n=16) of AiCommentator, we found that the participants appreciated aspects of both system modes but preferred the interactive mode, expressing a higher degree of engagement and satisfaction. Our paper reports on our development of AiCommentator and presents the results from our user study, demonstrating the promise of interactive MCA for a more engaging sports viewing experience. 
Systems like AiCommentator could be pivotal in transforming the interactivity and accessibility of sports content, revolutionizing how sports viewers engage with video content.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
2023
Tessem, Bjørnar; Tverberg, Are; Borch, Njål
The future technologies of journalism Journal Article
In: Procedia Computer Science, vol. 239, pp. 96-104, 2023.
@article{CENTERIS,
title = {The future technologies of journalism},
author = {Bjørnar Tessem and Are Tverberg and Njål Borch},
url = {https://mediafutures.no/centeris/},
year = {2023},
date = {2023-11-10},
urldate = {2023-11-10},
booktitle = {Centeris},
journal = {Procedia Computer Science},
volume = {239},
pages = {96-104},
abstract = {The practice of journalism has undergone many changes in the last few years, with changes in technology being the
main driver of these changes. We present a future study where we aim to get an understanding of what technologies
will become important for the journalist and further change the journalist’s workplace. The new technological
solutions will have to be implemented in the media houses’ information systems, and knowledge about what
technologies will have the greatest impact will influence IS strategies in the media house. In the study we
interviewed 16 experts on how they envision the future technologies of the journalist. We analyzed the interviews
with a qualitative research approach. Our analysis shows that technologies for multi-platform news production,
automated news content generation, cloud services for flexible production, content search, and content verification
are the most important in terms of needs and competitiveness.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2021
Trattner, Christoph; Jannach, Dietmar; Motta, Enrico; Meijer, Irene Costera; Diakopoulos, Nicholas; Elahi, Mehdi; Opdahl, Andreas L.; Tessem, Bjørnar; Borch, Njål; Fjeld, Morten; Øvrelid, Lilja; Smedt, Koenraad De; Moe, Hallvard
Responsible media technology and AI: challenges and research directions Journal Article
In: AI and Ethics, 2021.
@article{cristin2000622,
title = {Responsible media technology and AI: challenges and research directions},
author = {Christoph Trattner and Dietmar Jannach and Enrico Motta and Irene Costera Meijer and Nicholas Diakopoulos and Mehdi Elahi and Andreas L. Opdahl and Bjørnar Tessem and Njål Borch and Morten Fjeld and Lilja Øvrelid and Koenraad De Smedt and Hallvard Moe},
url = {https://app.cristin.no/results/show.jsf?id=2000622, Cristin
https://link.springer.com/content/pdf/10.1007/s43681-021-00126-4.pdf},
doi = {10.1007/s43681-021-00126-4},
year = {2021},
date = {2021-12-20},
urldate = {2021-12-20},
journal = {AI and Ethics},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Arntzen, Ingar M; Borch, Njål; Andersen, Anders
Unify Media and UX with timed variables Working paper
2021.
@workingpaper{cristin1959749,
title = {Unify Media and UX with timed variables},
author = {Ingar M Arntzen and Njål Borch and Anders Andersen},
url = {https://app.cristin.no/results/show.jsf?id=1959749, Cristin},
year = {2021},
date = {2021-01-01},
urldate = {2021-01-01},
keywords = {},
pubstate = {published},
tppubtype = {workingpaper}
}
2018
Borch, Njål; Arntzen, Ingar Mæhlum
Mediasync Report 2015: Evaluating timed playback of HTML5 Media Technical Report
2018, (Pre SFI).
@techreport{Borch2015,
title = {Mediasync Report 2015: Evaluating timed playback of HTML5 Media},
author = {Njål Borch and Ingar Mæhlum Arntzen},
url = {https://norceresearch.brage.unit.no/norceresearch-xmlui/bitstream/handle/11250/2711974/Norut_Tromso_rapport_28-2015.pdf?sequence=2},
year = {2018},
date = {2018-12-18},
note = {Pre SFI},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Arntzen, Ingar Mæhlum; Borch, Njål; Daoust, François
Media Synchronization on the Web. In: MediaSync Book Chapter
In: MediaSync, 2018, (Pre SFI).
@inbook{Arntzen2018,
title = {Media Synchronization on the Web. In: MediaSync},
author = {Ingar Mæhlum Arntzen and Njål Borch and François Daoust},
url = {https://www.w3.org/community/webtiming/files/2018/05/arntzen_mediasync_web_author_edition.pdf},
year = {2018},
date = {2018-05-07},
note = {Pre SFI},
keywords = {},
pubstate = {published},
tppubtype = {inbook}
}
Arntzen, Ingar Mæhlum; Borch, Njål; Daoust, François; Hazael-Massieux, Dominique
Multi-device Linear Composition on the Web, Enabling Multi-device Linear Media with HTMLTimingObject and Shared Motion Conference
Media Synchronization Workshop, Brussels, 2018, (Pre SFI).
@conference{Arntzen2018b,
title = {Multi-device Linear Composition on the Web, Enabling Multi-device Linear Media with HTMLTimingObject and Shared Motion},
author = {Ingar Mæhlum Arntzen and Njål Borch and François Daoust and Dominique Hazael-Massieux},
url = {https://www.researchgate.net/publication/324991987_Multi-device_Linear_Composition_on_the_Web_Enabling_Multi-device_Linear_Media_with_HTMLTimingObject_and_Shared_Motion},
year = {2018},
date = {2018-01-01},
address = {Brussels},
organization = {Media Synchronization Workshop},
abstract = {Composition is a hallmark of the Web, yet it does not fully extend to linear media. This paper defines linear composition as the ability to form linear media by coordinated playback of independent linear components. We argue that native Web support for linear composition is a key enabler for Web-based multi-device linear media, and that precise multi-device timing is the main technical challenge. This paper proposes the introduction of an HTMLTimingObject as basis for linear composition in the single-device scenario. Linear composition in the multi-device scenario is ensured as HTMLTimingObjects may integrate with Shared Motion, a generic timing mechanism for the Web. By connecting HTMLMediaElements and HTMLTrackElements with a multi-device timing mechanism, a powerful programming model for multi-device linear media is unlocked.},
note = {Pre SFI},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
2017
Borch, Njål
Økt samvirke og beslutningsstøtte – Case Salten Brann IKS Technical Report
2017, (Pre SFI).
@techreport{Borch2017,
title = {Økt samvirke og beslutningsstøtte – Case Salten Brann IKS},
author = {Njål Borch},
url = {https://norceresearch.brage.unit.no/norceresearch-xmlui/handle/11250/2647818},
year = {2017},
date = {2017-08-17},
note = {Pre SFI},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Borch, Njål; Daoust, François; Arntzen, Ingar Mæhlum
Timing - small step for developers, giant leap for the media industry, IBC 2016 Conference
2017, (Pre SFI).
@conference{Borch2016,
title = {Timing - small step for developers, giant leap for the media industry, IBC 2016},
author = {Njål Borch and François Daoust and Ingar Mæhlum Arntzen},
url = {https://www.w3.org/community/webtiming/files/2016/09/Borch_IBC2016-final.pdf},
year = {2017},
date = {2017-02-11},
note = {Pre SFI},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
2016
Arntzen, Ingar Mæhlum; Borch, Njål
Data-independent sequencing with the timing object: a JavaScript sequencer for single-device and multi-device web media Proceedings
Proceedings of the 7th International Conference on Multimedia Systems (MMSys '16), 2016, (Pre SFI).
@proceedings{Arntzen2016,
title = {Data-independent sequencing with the timing object: a JavaScript sequencer for single-device and multi-device web media. In Proceedings of the 7th International Conference on Multimedia Systems (MMSys '16)},
author = {Ingar Mæhlum Arntzen and Njål Borch},
url = {https://www.w3.org/community/webtiming/files/2016/05/mmsys2016slides.pdf},
year = {2016},
date = {2016-05-12},
note = {Pre SFI},
keywords = {},
pubstate = {published},
tppubtype = {proceedings}
}
2015
Borch, Njål; Arntzen, Ingar Mæhlum
Mediasync Report 2015: Evaluating timed playback of HTML5 Media Journal Article
In: Norut, 2015, ISBN: 978-82-7492-319-5, (Pre SFI).
@article{Borch2015b,
title = {Mediasync Report 2015: Evaluating timed playback of HTML5 Media},
author = {Njål Borch and Ingar Mæhlum Arntzen},
url = {https://norceresearch.brage.unit.no/norceresearch-xmlui/bitstream/handle/11250/2711974/Norut_Tromso_rapport_28-2015.pdf?sequence=2&isAllowed=y},
isbn = {978-82-7492-319-5},
year = {2015},
date = {2015-12-08},
journal = {Norut},
abstract = {In this report we provide an extensive analysis of timing aspects of HTML5 Media, across a variety of browsers,
operating systems and media formats. Particularly we investigate how playback compares to the progression of
the local clock and how players respond to time-shifting and adjustments in playback-rate.
Additionally, we use the MediaSync JS library to enforce correctly timed playback for HTML5 media, and indicate
the effects this has on user experience. MediaSync is developed based on results from the above analysis.
MediaSync aims to provide a best effort solution that works across a variety of media formats, operating systems
and browser types, and does not make optimizations for specific permutations.
},
note = {Pre SFI},
keywords = {},
pubstate = {published},
tppubtype = {article}
}