2022
|
RedirectedDoors: Redirection While Opening Doors in Virtual Reality Conference Morten Fjeld; Yukai Hoshikawa; Kazuyuki Fujita; Kazuki Takashima; Yoshifumi Kitamura 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2022. @conference{Fjeld2022,
title = {RedirectedDoors: Redirection While Opening Doors in Virtual Reality},
author = {Morten Fjeld and Yukai Hoshikawa and Kazuyuki Fujita and Kazuki Takashima and Yoshifumi Kitamura },
year = {2022},
date = {2022-03-12},
urldate = {2022-03-12},
booktitle = {2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
abstract = {We propose RedirectedDoors, a novel technique for redirection in VR focused on door-opening behavior. This technique manipulates the user's walking direction by rotating the entire virtual environment at a certain angular ratio of the door being opened, while the virtual door's position is kept unmanipulated to ensure door-opening realism. Results of a user study using two types of door-opening interfaces (with and without a passive haptic prop) revealed that the estimated detection thresholds generally showed a higher space efficiency of redirection. Following the results, we derived usage guidelines for our technique that provide lower noticeability and higher acceptability.},
keywords = {New, Virtual Reality, WP4: Media Content Interaction and Accessibility},
pubstate = {published},
tppubtype = {conference}
}
We propose RedirectedDoors, a novel technique for redirection in VR focused on door-opening behavior. This technique manipulates the user's walking direction by rotating the entire virtual environment at a certain angular ratio of the door being opened, while the virtual door's position is kept unmanipulated to ensure door-opening realism. Results of a user study using two types of door-opening interfaces (with and without a passive haptic prop) revealed that the estimated detection thresholds generally showed a higher space efficiency of redirection. Following the results, we derived usage guidelines for our technique that provide lower noticeability and higher acceptability. |
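The manipulation itself reduces to a per-frame rotation of the scene around the door hinge. The following is a minimal, hypothetical sketch (not the authors' implementation), assuming a yaw-only redirection gain k; all names and the gain handling are illustrative.

```typescript
// Illustrative sketch only (not the authors' code): the environment is rotated
// about the door hinge by a fixed angular ratio k of the door's opening angle,
// while the door itself is rendered at its unmanipulated angle.

interface Vec2 { x: number; z: number; }   // ground-plane position, metres

/** Rotate point p around pivot by `angle` radians (yaw only). */
function rotateAround(p: Vec2, pivot: Vec2, angle: number): Vec2 {
  const dx = p.x - pivot.x, dz = p.z - pivot.z;
  const c = Math.cos(angle), s = Math.sin(angle);
  return { x: pivot.x + c * dx - s * dz, z: pivot.z + s * dx + c * dz };
}

/**
 * Apply one frame of door-driven redirection.
 * k           – assumed redirection gain (ratio of scene rotation to door rotation)
 * doorDelta   – change in the door's opening angle this frame, radians
 * hinge       – hinge position of the virtual door
 * scenePoints – positions of environment objects (the door is deliberately excluded)
 */
function redirectOnDoorOpen(k: number, doorDelta: number, hinge: Vec2,
                            scenePoints: Vec2[]): Vec2[] {
  const sceneDelta = k * doorDelta;          // scene rotates at ratio k of the door
  return scenePoints.map(p => rotateAround(p, hinge, sceneDelta));
}
```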
2021
|
VXSlate: Exploring Combination of Head Movements and Mobile Touch for Large Virtual Display Interaction Proceeding Khanh-Duy Le; Tanh Quang Tran; Karol Chlasta; Krzysztof Krejtz; Morten Fjeld; Andreas Kunz Association for Computing Machinery, New York, NY, USA, 2021, ISBN: 978-1-4503-8476-6. @proceedings{Kunz2021,
title = {VXSlate: Exploring Combination of Head Movements and Mobile Touch for Large Virtual Display Interaction},
author = {Khanh-Duy Le and Tanh Quang Tran and Karol Chlasta and Krzysztof Krejtz and Morten Fjeld and Andreas Kunz},
doi = {10.1145/3461778.3462076},
isbn = {978-1-4503-8476-6},
year = {2021},
date = {2021-06-28},
booktitle = {DIS '21: Designing Interactive Systems Conference 2021},
pages = {283–297},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
keywords = {WP4: Media Content Interaction and Accessibility},
pubstate = {published},
tppubtype = {proceedings}
}
|
VXSlate: Combining Head Movement and Mobile Touch for Large Virtual Display Interaction Conference Khanh-Duy Le; Tanh Quang Tran; Karol Chlasta; Krzysztof Krejtz; Morten Fjeld; Andreas Kunz 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). IEEE The Institute of Electrical and Electronics Engineers, Inc., 2021. @conference{Le2021b,
title = {VXSlate: Combining Head Movement and Mobile Touch for Large Virtual Display Interaction},
author = {Khanh-Duy Le and Tanh Quang Tran and Karol Chlasta and Krzysztof Krejtz and Morten Fjeld and Andreas Kunz},
url = {https://conferences.computer.org/vrpub/pdfs/VRW2021-2ANNoldm4A10Ml9f63uYC9/136700a528/136700a528.pdf
https://www.youtube.com/watch?v=N8ZJlKWj4mk&ab_channel=DuyL%C3%AAKh%C3%A1nh},
doi = {10.1109/VRW52623.2021.00146},
year = {2021},
date = {2021-02-12},
pages = {528-529},
publisher = {IEEE The Institute of Electrical and Electronics Engineers, Inc.},
organization = {2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW).},
abstract = {Virtual Reality (VR) headsets can open opportunities for users to accomplish complex tasks on large virtual displays, using compact setups. However, interacting with large virtual displays using existing interaction techniques might cause fatigue, especially for precise manipulations, due to the lack of physical surfaces. We designed VXSlate, an interaction technique that uses a large virtual display, as an expansion of a tablet. VXSlate combines a user’s head movements, as tracked by the VR headset, and touch interaction on the tablet. The user’s head movements position both a virtual representation of the tablet and of the user’s hand on the large virtual display. The user’s multi-touch interactions perform finely-tuned content manipulations.},
keywords = {Human computer interaction, Human-centered computing, Interaction techniques, SFI MediaFutures, Virtual Reality, WP4: Media Content Interaction and Accessibility},
pubstate = {published},
tppubtype = {conference}
}
Virtual Reality (VR) headsets can open opportunities for users to accomplish complex tasks on large virtual displays, using compact setups. However, interacting with large virtual displays using existing interaction techniques might cause fatigue, especially for precise manipulations, due to the lack of physical surfaces. We designed VXSlate, an interaction technique that uses a large virtual display, as an expansion of a tablet. VXSlate combines a user’s head movements, as tracked by the VR headset, and touch interaction on the tablet. The user’s head movements position both a virtual representation of the tablet and of the user’s hand on the large virtual display. The user’s multi-touch interactions perform finely-tuned content manipulations. |
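The technique maps onto a two-stage pointing scheme: head orientation coarsely places a virtual tablet region on the large display, and touch input refines manipulation within that region. The sketch below is an illustrative approximation under assumed comfort ranges and gains, not the published VXSlate code.

```typescript
// Illustrative two-stage mapping (hypothetical names and gains, not VXSlate's code):
// the head places a virtual tablet region; touch refines within it.

interface Point { x: number; y: number; }

/** Coarse stage: map head yaw/pitch (radians) to a region centre on a display of
 *  width x height virtual pixels, assuming a limited comfortable head range. */
function headToRegionCentre(yaw: number, pitch: number,
                            width: number, height: number): Point {
  const maxYaw = Math.PI / 4, maxPitch = Math.PI / 6;   // assumed comfort limits
  const nx = Math.max(-1, Math.min(1, yaw / maxYaw));
  const ny = Math.max(-1, Math.min(1, pitch / maxPitch));
  return { x: ((nx + 1) / 2) * width, y: (1 - (ny + 1) / 2) * height };
}

/** Fine stage: offset a cursor inside the region by a touch drag, scaled by the
 *  region-to-tablet size ratio (control-display gain). */
function touchToCursor(regionCentre: Point, drag: Point,
                       regionSize: number, tabletSize: number): Point {
  const gain = regionSize / tabletSize;
  return { x: regionCentre.x + drag.x * gain, y: regionCentre.y + drag.y * gain };
}
```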
2020
|
Unpacking Editorial Agreements in Collaborative Video Production Conference Pavel Okopnyi; Oskar Juhlin; Frode Guribye IMX '20: ACM International Conference on Interactive Media Experiences, New York, 2020, (Pre SFI). @conference{Okopnyi2020,
title = {Unpacking Editorial Agreements in Collaborative Video Production},
author = {Pavel Okopnyi and Oskar Juhlin and Frode Guribye},
url = {https://www.researchgate.net/publication/342251635_Unpacking_Editorial_Agreements_in_Collaborative_Video_Production},
doi = {10.1145/3391614.3393652},
year = {2020},
date = {2020-06-01},
booktitle = {IMX '20: ACM International Conference on Interactive Media Experiences},
pages = {117–126},
address = {New York},
abstract = {Video production is a collaborative process involving creative, artistic and technical elements that require a multitude of specialised skill sets. This open-ended work is often marked by uncertainty and interpretive flexibility in terms of what the product is and should be. At the same time, most current video production tools are designed for single users. There is a growing interest, both in industry and academia, to design features that support key collaborative processes in editing, such as commenting on videos. We add to current research by unpacking specific forms of collaboration, in particular the social mechanisms and strategies employed to reduce interpretive flexibility and uncertainty in achieving agreements between editors and other collaborators. The findings contribute to the emerging design interest by identifying general design paths for how to support collaboration in video editing through scaffolding, iconic referencing, and suggestive editing.},
note = {Pre SFI},
keywords = {WP4: Media Content Interaction and Accessibility},
pubstate = {published},
tppubtype = {conference}
}
Video production is a collaborative process involving creative, artistic and technical elements that require a multitude of specialised skill sets. This open-ended work is often marked by uncertainty and interpretive flexibility in terms of what the product is and should be. At the same time, most current video production tools are designed for single users. There is a growing interest, both in industry and academia, to design features that support key collaborative processes in editing, such as commenting on videos. We add to current research by unpacking specific forms of collaboration, in particular the social mechanisms and strategies employed to reduce interpretive flexibility and uncertainty in achieving agreements between editors and other collaborators. The findings contribute to the emerging design interest by identifying general design paths for how to support collaboration in video editing through scaffolding, iconic referencing, and suggestive editing. |
“We in the Mojo Community” – Exploring a Global Network of Mobile Journalists Journal Article Anja Salzmann; Frode Guribye; Astrid Gynnild In: Journalism Practice, pp. 1-18, 2020, (Pre SFI). @article{Salzmann2020,
title = {“We in the Mojo Community” – Exploring a Global Network of Mobile Journalists},
author = {Anja Salzmann and Frode Guribye and Astrid Gynnild},
url = {https://www.tandfonline.com/doi/epub/10.1080/17512786.2020.1742772?needAccess=true},
doi = {10.1080/17512786.2020.1742772},
year = {2020},
date = {2020-04-03},
journal = {Journalism Practice},
pages = {1-18},
abstract = {Mobile journalism is a fast-growing area of journalistic innovation that requires new skills and work practices. Thus, a major challenge for journalists is learning not only how to keep up with new gadgets but how to advance and develop a mojo mindset to pursue their interests and solidify future work options. This paper investigates a globally pioneering network of mojo journalism, the Mojo Community, that consists of journalists and practitioners dedicated to creating multimedia content using mobile technologies. The study is based on empirical data from interviews with and the observation of the participants of the community over a two-year period. The analysis draws on Wenger’s concept of “communities of practice” to explore the domain, structure, and role of this communal formation for innovation and change in journalistic practices. The community’s core group is comprised of journalists mainly affiliated with legacy broadcast organizations and with a particular interest in and extensive knowledge of mobile technologies. The participants perceive their engagement with the community as a way of meeting the challenges of organizational reluctance to change, fast-evolving technological advancements, and uncertain job prospects.},
note = {Pre SFI},
keywords = {community of practice, digital culture, mobile content creation, Mobile journalism, mobile technologies, mojo, mojo community, smartphone reporting, WP4: Media Content Interaction and Accessibility},
pubstate = {published},
tppubtype = {article}
}
Mobile journalism is a fast-growing area of journalistic innovation that requires new skills and work practices. Thus, a major challenge for journalists is learning not only how to keep up with new gadgets but how to advance and develop a mojo mindset to pursue their interests and solidify future work options. This paper investigates a globally pioneering network of mojo journalism, the Mojo Community, that consists of journalists and practitioners dedicated to creating multimedia content using mobile technologies. The study is based on empirical data from interviews with and the observation of the participants of the community over a two-year period. The analysis draws on Wenger’s concept of “communities of practice” to explore the domain, structure, and role of this communal formation for innovation and change in journalistic practices. The community’s core group is comprised of journalists mainly affiliated with legacy broadcast organizations and with a particular interest in and extensive knowledge of mobile technologies. The participants perceive their engagement with the community as a way of meeting the challenges of organizational reluctance to change, fast-evolving technological advancements, and uncertain job prospects. |
Learn with Haptics: Improving Vocabulary Recall with Free-form Digital Annotation on Touchscreen Mobiles Journal Article Morten Fjeld; Smitha Sheshadri; Shengdong Zhao; Yang Cheng In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20), pp. 1-13, 2020, (Pre SFI). @article{Fjeld2020,
title = {Learn with Haptics: Improving Vocabulary Recall with Free-form Digital Annotation on Touchscreen Mobiles},
author = {Morten Fjeld and Smitha Sheshadri and Shengdong Zhao and Yang Cheng},
url = {https://dl.acm.org/doi/pdf/10.1145/3313831.3376272
https://www.youtube.com/watch?v=WY_T0fK5gCQ&ab_channel=ACMSIGCHI},
year = {2020},
date = {2020-04-01},
urldate = {2020-04-01},
journal = {Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20)},
pages = {1-13},
abstract = {Mobile vocabulary learning interfaces typically present material only in auditory and visual channels, underutilizing the haptic modality. We explored haptic-integrated learning by adding free-form digital annotation to mobile vocabulary learning interfaces. Through a series of pilot studies, we identified three design factors: annotation mode, presentation sequence, and vibrotactile feedback, that influence recall in haptic-integrated vocabulary interfaces. These factors were then evaluated in a within-subject comparative study using a digital flashcard interface as baseline. Results using an 84-item vocabulary showed that the 'whole word' annotation mode is highly effective, yielding a 24.21% increase in immediate recall scores and a 30.36% increase in the 7-day delayed scores. Effects of presentation sequence and vibrotactile feedback were more transient; they affected the results of immediate tests, but not the delayed tests. We discuss the implications of these factors for designing future mobile learning applications.},
note = {Pre SFI},
keywords = {Haptics for Learning, Intersensory reinforced learning, Mobile Vocabulary Learning, Motoric Engagement, Multimodal Learning, WP4: Media Content Interaction and Accessibility},
pubstate = {published},
tppubtype = {article}
}
Mobile vocabulary learning interfaces typically present material only in auditory and visual channels, underutilizing the haptic modality. We explored haptic-integrated learning by adding free-form digital annotation to mobile vocabulary learning interfaces. Through a series of pilot studies, we identified three design factors: annotation mode, presentation sequence, and vibrotactile feedback, that influence recall in haptic-integrated vocabulary interfaces. These factors were then evaluated in a within-subject comparative study using a digital flashcard interface as baseline. Results using an 84-item vocabulary showed that the 'whole word' annotation mode is highly effective, yielding a 24.21% increase in immediate recall scores and a 30.36% increase in the 7-day delayed scores. Effects of presentation sequence and vibrotactile feedback were more transient; they affected the results of immediate tests, but not the delayed tests. We discuss the implications of these factors for designing future mobile learning applications. |
2019
|
Participatory Design of VR Scenarios for Exposure Therapy Conference Eivind Flobak; Jo Dugstad Wake; Joakim Vindenes; Smiti Kahlon; T. Nordgreen; Frode Guribye Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19), no. Paper 569, New York, 2019, (Pre SFI). @conference{Flobak2019,
title = {Participatory Design of VR Scenarios for Exposure Therapy},
author = {Eivind Flobak and Jo Dugstad Wake and Joakim Vindenes and Smiti Kahlon and T. Nordgreen and Frode Guribye},
url = {https://www.researchgate.net/publication/330205387_Participatory_Design_of_VR_Scenarios_for_Exposure_Therapy},
doi = {10.1145/3290605.3300799},
year = {2019},
date = {2019-05-01},
booktitle = {Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19)},
number = {Paper 569},
address = {New York},
abstract = {Virtual reality (VR) applications for exposure therapy predominantly use computer-generated imagery to create controlled environments in which users can be exposed to their fears. Creating 3D animations, however, is demanding and time-consuming. This paper presents a participatory approach for prototyping VR scenarios that are enabled by 360° video and grounded in lived experiences. We organized a participatory workshop with adolescents to prototype such scenarios, consisting of iterative phases of ideation, storyboarding, live-action plays recorded by a 360° camera, and group evaluation. Through an analysis of the participants' interactions, we outline how they worked to design prototypes that depict situations relevant to those with a fear of public speaking. Our analysis also explores how participants used their experiences and reflections as resources for design. Six clinical psychologists evaluated the prototypes from the workshop and concluded they were viable therapeutic tools, emphasizing the immersive, realistic experience they presented. We argue that our approach makes the design of VR scenarios more accessible.},
note = {Pre SFI},
keywords = {WP4: Media Content Interaction and Accessibility},
pubstate = {published},
tppubtype = {conference}
}
Virtual reality (VR) applications for exposure therapy predominantly use computer-generated imagery to create controlled environments in which users can be exposed to their fears. Creating 3D animations, however, is demanding and time-consuming. This paper presents a participatory approach for prototyping VR scenarios that are enabled by 360° video and grounded in lived experiences. We organized a participatory workshop with adolescents to prototype such scenarios, consisting of iterative phases of ideation, storyboarding, live-action plays recorded by a 360° camera, and group evaluation. Through an analysis of the participants' interactions, we outline how they worked to design prototypes that depict situations relevant to those with a fear of public speaking. Our analysis also explores how participants used their experiences and reflections as resources for design. Six clinical psychologists evaluated the prototypes from the workshop and concluded they were viable therapeutic tools, emphasizing the immersive, realistic experience they presented. We argue that our approach makes the design of VR scenarios more accessible.
2018
|
Mediasync Report 2015: Evaluating timed playback of HTML5 Media Technical Report Njål Borch; Ingar Mæhlum Arntzen 2018, (Pre SFI). @techreport{Borch2015,
title = {Mediasync Report 2015: Evaluating timed playback of HTML5 Media},
author = {Njål Borch and Ingar Mæhlum Arntzen},
url = {https://norceresearch.brage.unit.no/norceresearch-xmlui/bitstream/handle/11250/2711974/Norut_Tromso_rapport_28-2015.pdf?sequence=2},
year = {2018},
date = {2018-12-18},
note = {Pre SFI},
keywords = {WP4: Media Content Interaction and Accessibility},
pubstate = {published},
tppubtype = {techreport}
}
|
AdapTable: Extending Reach over Large Tabletops Through Flexible Multi-Display Configuration. Proceeding Yoshiki Kudo; Kazuki Takashima; Morten Fjeld; Yoshifumi Kitamura 2018, (Pre SFI). @proceedings{Kudo2018,
title = {AdapTable: Extending Reach over Large Tabletops Through Flexible Multi-Display Configuration},
author = {Yoshiki Kudo and Kazuki Takashima and Morten Fjeld and Yoshifumi Kitamura},
url = {https://dl.acm.org/doi/pdf/10.1145/3279778.3279779
https://www.youtube.com/watch?v=HG_4COsWGDM},
year = {2018},
date = {2018-11-17},
urldate = {2018-11-17},
note = {Pre SFI},
keywords = {WP4: Media Content Interaction and Accessibility},
pubstate = {published},
tppubtype = {proceedings}
}
|
WristOrigami: Exploring foldable design for multi-display smartwatch Proceeding Kening Zhu; Morten Fjeld; Ayca Ülüner 2018, (Pre SFI). @proceedings{Zhu2018,
title = {WristOrigami: Exploring foldable design for multi-display smartwatch},
author = {Kening Zhu and Morten Fjeld and Ayca Ülüner},
url = {https://dl.acm.org/doi/pdf/10.1145/3196709.3196713
https://www.youtube.com/watch?v=1_2D79zntIk},
year = {2018},
date = {2018-06-09},
urldate = {2018-06-09},
note = {Pre SFI},
keywords = {WP4: Media Content Interaction and Accessibility},
pubstate = {published},
tppubtype = {proceedings}
}
|
Movespace: on-body athletic interaction for running and cycling Journal Article Velko Vechev; Alexandru Dancu; Simon T. Perrault; Quentin Roy; Morten Fjeld; Shengdong Zhao In: 2018, (Pre SFI). @article{Vechev2018,
title = {Movespace: on-body athletic interaction for running and cycling},
author = {Velko Vechev and Alexandru Dancu and Simon T. Perrault and Quentin Roy and Morten Fjeld and Shengdong Zhao},
url = {https://dl.acm.org/doi/pdf/10.1145/3206505.3206527
https://www.youtube.com/watch?v=1_u4Zm4F7I0},
year = {2018},
date = {2018-05-29},
urldate = {2018-05-29},
note = {Pre SFI},
keywords = {WP4: Media Content Interaction and Accessibility},
pubstate = {published},
tppubtype = {article}
}
|
Media Synchronization on the Web. In: MediaSync Book Chapter Ingar Mæhlum Arntzen; Njål Borch; François Daoust In: 2018, (Pre SFI). @inbook{Arntzen2018,
title = {Media Synchronization on the Web},
booktitle = {MediaSync: Handbook on Multimedia Synchronization},
author = {Ingar Mæhlum Arntzen and Njål Borch and François Daoust},
url = {https://www.w3.org/community/webtiming/files/2018/05/arntzen_mediasync_web_author_edition.pdf},
year = {2018},
date = {2018-05-07},
note = {Pre SFI},
keywords = {WP4: Media Content Interaction and Accessibility},
pubstate = {published},
tppubtype = {inbook}
}
|
Multi-device Linear Composition on the Web, Enabling Multi-device Linear Media with HTMLTimingObject and Shared Motion Conference Ingar Mæhlum Arntzen; Njål Borch; François Daoust; Dominique Hazaël-Massieux Media Synchronization Workshop Brussels, 2018, (Pre SFI). @conference{Arntzen2018b,
title = {Multi-device Linear Composition on the Web, Enabling Multi-device Linear Media with HTMLTimingObject and Shared Motion},
author = {Ingar Mæhlum Arntzen and Njål Borch and François Daoust and Dominique Hazaël-Massieux},
url = {https://www.researchgate.net/publication/324991987_Multi-device_Linear_Composition_on_the_Web_Enabling_Multi-device_Linear_Media_with_HTMLTimingObject_and_Shared_Motion},
year = {2018},
date = {2018-01-01},
address = {Brussels},
organization = {Media Synchronization Workshop},
abstract = {Composition is a hallmark of the Web, yet it does not fully extend to linear media. This paper defines linear composition as the ability to form linear media by coordinated playback of independent linear components. We argue that native Web support for linear composition is a key enabler for Web-based multi-device linear media, and that precise multi-device timing is the main technical challenge. This paper proposes the introduction of an HTMLTimingObject as basis for linear composition in the single-device scenario. Linear composition in the multi-device scenario is ensured as HTMLTimingObjects may integrate with Shared Motion, a generic timing mechanism for the Web. By connecting HTMLMediaElements and HTMLTrackElements with a multi-device timing mechanism, a powerful programming model for multi-device linear media is unlocked.},
note = {Pre SFI},
keywords = {Linear Media, Multi-device, Shared Motion, WP4: Media Content Interaction and Accessibility},
pubstate = {published},
tppubtype = {conference}
}
Composition is a hallmark of the Web, yet it does not fully extend to linear media. This paper defines linear composition as the ability to form linear media by coordinated playback of independent linear components. We argue that native Web support for linear composition is a key enabler for Web-based multi-device linear media, and that precise multi-device timing is the main technical challenge. This paper proposes the introduction of an HTMLTimingObject as basis for linear composition in the single-device scenario. Linear composition in the multi-device scenario is ensured as HTMLTimingObjects may integrate with Shared Motion, a generic timing mechanism for the Web. By connecting HTMLMediaElements and HTMLTrackElements with a multi-device timing mechanism, a powerful programming model for multi-device linear media is unlocked. |
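The programming model the paper argues for can be approximated today, in the absence of a native HTMLTimingObject, by slaving each HTMLMediaElement to an external clock. The sketch below assumes a generic getSharedTime callback standing in for a timing object or Shared Motion source; it is not the timingsrc or HTMLTimingObject API itself.

```typescript
// Sketch of the pattern using only standard DOM APIs: an HTMLMediaElement is
// slaved to an external clock. `getSharedTime` stands in for a timing object /
// Shared Motion source and is an assumed interface, not the proposed API.

function syncMediaToClock(video: HTMLMediaElement,
                          getSharedTime: () => number,   // seconds on the shared timeline
                          threshold = 0.25): void {
  setInterval(() => {
    const skew = video.currentTime - getSharedTime();
    if (Math.abs(skew) > threshold) {
      video.currentTime = getSharedTime();               // large error: hard seek
    } else {
      // small error: nudge playback rate so the element drifts back into sync
      video.playbackRate = 1 - Math.max(-0.1, Math.min(0.1, skew));
    }
  }, 500);
}
```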
2017
|
Økt samvirke og beslutningsstøtte – Case Salten Brann IKS Technical Report Njål Borch 2017, (Pre SFI). @techreport{Borch2017,
title = {Økt samvirke og beslutningsstøtte – Case Salten Brann IKS},
author = {Njål Borch},
url = {https://norceresearch.brage.unit.no/norceresearch-xmlui/handle/11250/2647818},
year = {2017},
date = {2017-08-17},
note = {Pre SFI},
keywords = {WP4: Media Content Interaction and Accessibility},
pubstate = {published},
tppubtype = {techreport}
}
|
Timing - small step for developers, giant leap for the media industry, IBC 2016 Conference Njål Borch; François Daoust; Ingar Mæhlum Arntzen 2017, (Pre SFI). @conference{Borch2016,
title = {Timing - small step for developers, giant leap for the media industry, IBC 2016},
author = {Njål Borch and François Daoust and Ingar Mæhlum Arntzen},
url = {https://www.w3.org/community/webtiming/files/2016/09/Borch_IBC2016-final.pdf},
year = {2017},
date = {2017-02-11},
note = {Pre SFI},
keywords = {WP4: Media Content Interaction and Accessibility},
pubstate = {published},
tppubtype = {conference}
}
|
2016
|
The changing ecology of tools for live news reporting Journal Article Frode Guribye; Lars Nyre In: Journalism Practice, vol. 10, no. 11, pp. 1216-1230, 2016, ISSN: 1751-2794, (Pre SFI). @article{Guribye2016,
title = {The changing ecology of tools for live news reporting},
author = {Frode Guribye and Lars Nyre},
url = {https://www.tandfonline.com/doi/pdf/10.1080/17512786.2016.1259011?needAccess=true},
doi = {10.1080/17512786.2016.1259011},
issn = {1751-2794},
year = {2016},
date = {2016-12-05},
journal = {Journalism Practice},
volume = {10},
number = {11},
pages = {1216-1230},
abstract = {Broadcast news channels provide fresh, continuously updated coverage of events, in sharp competition with other news channels in the same market. The live moment is a valuable feature, and broadcasters have always relied on teams that can react quickly to breaking news and report live from the scene. Technology plays an important role in the production of live news, and a number of tools are applied by skilled actors in what can be called an ecology of tools for live news reporting. This study explores new video tools for television news, and the tinkering conducted by the reporting teams to adapt to such tools. Six journalists and photographers at broadcaster TV 2 in Norway were interviewed about their everyday work practices out in the field, and we present the findings in an analysis where six aspects of contemporary live news reporting are explored: (1) from heavy to light equipment, (2) more live news at TV 2, (3) the practice of going live, (4) the mobility of live reporters, (5) tinkering to go live, and (6) quicker pace of production. In the concluding remarks we summarize our insights about live news reporting.},
note = {Pre SFI},
keywords = {broadcast news, ecology of tools, journalism, live reporting, mobile interaction, video applications, video journalism, visual technology, WP4: Media Content Interaction and Accessibility},
pubstate = {published},
tppubtype = {article}
}
Broadcast news channels provide fresh, continuously updated coverage of events, in sharp competition with other news channels in the same market. The live moment is a valuable feature, and broadcasters have always relied on teams that can react quickly to breaking news and report live from the scene. Technology plays an important role in the production of live news, and a number of tools are applied by skilled actors in what can be called an ecology of tools for live news reporting. This study explores new video tools for television news, and the tinkering conducted by the reporting teams to adapt to such tools. Six journalists and photographers at broadcaster TV 2 in Norway were interviewed about their everyday work practices out in the field, and we present the findings in an analysis where six aspects of contemporary live news reporting are explored: (1) from heavy to light equipment, (2) more live news at TV 2, (3) the practice of going live, (4) the mobility of live reporters, (5) tinkering to go live, and (6) quicker pace of production. In the concluding remarks we summarize our insights about live news reporting. |
Data-independent sequencing with the timing object: a JavaScript sequencer for single-device and multi-device web media. In Proceedings of the 7th International Conference on Multimedia Systems (MMSys '16) Proceeding Ingar Mæhlum Arntzen; Njål Borch 2016, (Pre SFI). @proceedings{Arntzen2016,
title = {Data-independent sequencing with the timing object: a JavaScript sequencer for single-device and multi-device web media},
booktitle = {Proceedings of the 7th International Conference on Multimedia Systems (MMSys '16)},
author = {Ingar Mæhlum Arntzen and Njål Borch},
url = {https://www.w3.org/community/webtiming/files/2016/05/mmsys2016slides.pdf},
year = {2016},
date = {2016-05-12},
note = {Pre SFI},
keywords = {WP4: Media Content Interaction and Accessibility},
pubstate = {published},
tppubtype = {proceedings}
}
|
RAMPARTS: Supporting sensemaking with spatially-aware mobile interactions Journal Article Pawel Wozniak; Nitesh Goyal; Przemyslaw Kucharski; Lars Lischke; Sven Mayer; Morten Fjeld In: 2016, (Pre SFI). @article{Wozniak2016,
title = {RAMPARTS: Supporting sensemaking with spatially-aware mobile interactions},
author = {Pawel Wozniak and Nitesh Goyal and Przemyslaw Kucharski and Lars Lischke and Sven Mayer and Morten Fjeld},
url = {https://dl.acm.org/doi/10.1145/2858036.2858491
https://www.youtube.com/watch?v=t01yLj3xhVc},
year = {2016},
date = {2016-05-01},
urldate = {2016-05-01},
note = {Pre SFI},
keywords = {WP4: Media Content Interaction and Accessibility},
pubstate = {published},
tppubtype = {article}
}
|
HaptiColor: Interpolating color information as haptic feedback to assist the colorblind Proceeding Marta G. Carcedo; Soon H. Chua; Simon Perrault; Pawel Wozniak; Raj Joshi; Mohammad Obaid; Morten Fjeld; Shengdong Zhao 2016, (Pre SFI). @proceedings{Carcedo2016,
title = {HaptiColor: Interpolating color information as haptic feedback to assist the colorblind},
author = {Marta G. Carcedo and Soon H. Chua and Simon Perrault and Pawel Wozniak and Raj Joshi and Mohammad Obaid and Morten Fjeld and Shengdong Zhao},
url = {https://dl.acm.org/doi/10.1145/2858036.2858220
https://www.youtube.com/watch?v=qjoH6eNNZBU},
year = {2016},
date = {2016-05-01},
urldate = {2016-05-01},
note = {Pre SFI},
keywords = {WP4: Media Content Interaction and Accessibility},
pubstate = {published},
tppubtype = {proceedings}
}
|
2015
|
Mediasync Report 2015: Evaluating timed playback of HTML5 Media Journal Article Njål Borch; Ingar Mæhlum Arntzen In: Norut, 2015, ISBN: 978-82-7492-319-5, (Pre SFI). @article{Borch2015b,
title = {Mediasync Report 2015: Evaluating timed playback of HTML5 Media},
author = {Njål Borch and Ingar Mæhlum Arntzen},
url = {https://norceresearch.brage.unit.no/norceresearch-xmlui/bitstream/handle/11250/2711974/Norut_Tromso_rapport_28-2015.pdf?sequence=2&isAllowed=y},
isbn = {978-82-7492-319-5},
year = {2015},
date = {2015-12-08},
journal = {Norut},
abstract = {In this report we provide an extensive analysis of timing aspects of HTML5 Media, across a variety of browsers, operating systems and media formats. Particularly we investigate how playback compares to the progression of the local clock and how players respond to time-shifting and adjustments in playback-rate. Additionally, we use the MediaSync JS library to enforce correctly timed playback for HTML5 media, and indicate the effects this has on user experience. MediaSync is developed based on results from the above analysis. MediaSync aims to provide a best effort solution that works across a variety of media formats, operating systems and browser types, and does not make optimizations for specific permutations.},
note = {Pre SFI},
keywords = {HTML5, MediaSync, WP4: Media Content Interaction and Accessibility},
pubstate = {published},
tppubtype = {article}
}
In this report we provide an extensive analysis of timing aspects of HTML5 Media, across a variety of browsers, operating systems and media formats. Particularly we investigate how playback compares to the progression of the local clock and how players respond to time-shifting and adjustments in playback-rate. Additionally, we use the MediaSync JS library to enforce correctly timed playback for HTML5 media, and indicate the effects this has on user experience. MediaSync is developed based on results from the above analysis. MediaSync aims to provide a best effort solution that works across a variety of media formats, operating systems and browser types, and does not make optimizations for specific permutations.
|
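The report's core measurement — how HTMLMediaElement playback progresses relative to the local clock — can be reproduced with a few lines of browser code. The sketch below is an assumed, simplified harness (field names and sampling interval are illustrative), not the MediaSync test suite.

```typescript
// Assumed, simplified harness: sample how an HTMLMediaElement's currentTime
// advances relative to the local clock while the element plays.

interface DriftSample { wallClock: number; mediaTime: number; drift: number; }

function measurePlaybackDrift(video: HTMLMediaElement,
                              durationMs = 10_000,
                              intervalMs = 200): Promise<DriftSample[]> {
  return new Promise(resolve => {
    const samples: DriftSample[] = [];
    const t0 = performance.now();
    const m0 = video.currentTime;
    const timer = setInterval(() => {
      const elapsed = (performance.now() - t0) / 1000;   // wall-clock seconds
      const advanced = video.currentTime - m0;           // media-time seconds
      samples.push({ wallClock: elapsed, mediaTime: advanced, drift: advanced - elapsed });
      if (performance.now() - t0 >= durationMs) {
        clearInterval(timer);
        resolve(samples);
      }
    }, intervalMs);
  });
}
```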