2021
Khanh-Duy Le; Tanh Quang Tran; Karol Chlasta; Krzysztof Krejtz; Morten Fjeld; Andreas Kunz: VXSlate: Combining Head Movement and Mobile Touch for Large Virtual Display Interaction. Conference: IEEE VR 2021, SFI MediaFutures, 2021.
@conference{Le2021,
title = {VXSlate: Combining Head Movement and Mobile Touch for Large Virtual Display Interaction},
author = {Khanh-Duy Le and Tanh Quang Tran and Karol Chlasta and Krzysztof Krejtz and Morten Fjeld and Andreas Kunz},
url = {https://mediafutures.no/vxslate_ieee_vr_2021-2/
https://www.youtube.com/watch?v=N8ZJlKWj4mk&ab_channel=DuyL%C3%AAKh%C3%A1nh},
year = {2021},
date = {2021-02-12},
pages = {1-2},
organization = {IEEE VR 2021},
series = {SFI MediaFutures},
abstract = {Virtual Reality (VR) headsets can open opportunities for users to accomplish complex tasks on large virtual displays, using compact setups. However, interacting with large virtual displays using existing interaction techniques might cause fatigue, especially for precise manipulations, due to the lack of physical surfaces. We designed VXSlate, an interaction technique that uses a large virtual display, as an expansion of a tablet. VXSlate combines a user’s head movements, as tracked by the VR headset, and touch interaction on the tablet. The user’s head movements position both a virtual representation of the tablet and of the user’s hand on the large virtual display. The user’s multi-touch interactions perform finely-tuned content manipulations.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
2020
Pavel Okopnyi; Oskar Juhlin; Frode Guribye: Unpacking Editorial Agreements in Collaborative Video Production. Conference: IMX '20: ACM International Conference on Interactive Media Experiences, New York, 2020.
@conference{Okopnyi2020,
title = {Unpacking Editorial Agreements in Collaborative Video Production},
author = {Pavel Okopnyi and Oskar Juhlin and Frode Guribye},
url = {https://www.researchgate.net/publication/342251635_Unpacking_Editorial_Agreements_in_Collaborative_Video_Production},
doi = {10.1145/3391614.3393652},
year = {2020},
date = {2020-06-01},
booktitle = {IMX '20: ACM International Conference on Interactive Media Experiences},
pages = {117–126},
address = {New York},
abstract = {Video production is a collaborative process involving creative, artistic and technical elements that require a multitude of specialised skill sets. This open-ended work is often marked by uncertainty and interpretive flexibility in terms of what the product is and should be. At the same time, most current video production tools are designed for single users. There is a growing interest, both in industry and academia, to design features that support key collaborative processes in editing, such as commenting on videos. We add to current research by unpacking specific forms of collaboration, in particular the social mechanisms and strategies employed to reduce interpretive flexibility and uncertainty in achieving agreements between editors and other collaborators. The findings contribute to the emerging design interest by identifying general design paths for how to support collaboration in video editing through scaffolding, iconic referencing, and suggestive editing.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Anja Salzmann; Frode Guribye; Astrid Gynnild: “We in the Mojo Community” – Exploring a Global Network of Mobile Journalists. Journal Article, Journalism Practice, pp. 1-18, 2020.
@article{Salzmann2020,
title = {“We in the Mojo Community” – Exploring a Global Network of Mobile Journalists},
author = {Anja Salzmann and Frode Guribye and Astrid Gynnild},
url = {https://www.tandfonline.com/doi/epub/10.1080/17512786.2020.1742772?needAccess=true},
doi = {10.1080/17512786.2020.1742772},
year = {2020},
date = {2020-04-03},
journal = {Journalism Practice},
pages = {1-18},
abstract = {Mobile journalism is a fast-growing area of journalistic innovation that requires new skills and work practices. Thus, a major challenge for journalists is learning not only how to keep up with new gadgets but how to advance and develop a mojo mindset to pursue their interests and solidify future work options. This paper investigates a globally pioneering network of mojo journalism, the Mojo Community, that consists of journalists and practitioners dedicated to creating multimedia content using mobile technologies. The study is based on empirical data from interviews with and the observation of the participants of the community over a two-year period. The analysis draws on Wenger’s concept of “communities of practice” to explore the domain, structure, and role of this communal formation for innovation and change in journalistic practices. The community’s core group is comprised of journalists mainly affiliated with legacy broadcast organizations and with a particular interest in and extensive knowledge of mobile technologies. The participants perceive their engagement with the community as a way of meeting the challenges of organizational reluctance to change, fast-evolving technological advancements, and uncertain job prospects.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Morten Fjeld; Smitha Sheshadri; Shengdong Zhao; Yang Cheng: Learn with Haptics: Improving Vocabulary Recall with Free-form Digital Annotation on Touchscreen Mobiles. Journal Article, CHI 2020 Paper, pp. 1-13, 2020.
@article{Fjeld2020,
title = {Learn with Haptics: Improving Vocabulary Recall with Free-form Digital Annotation on Touchscreen Mobiles},
author = {Morten Fjeld and Smitha Sheshadri and Shengdong Zhao and Yang Cheng},
url = {https://dl.acm.org/doi/pdf/10.1145/3313831.3376272
https://www.youtube.com/watch?v=WY_T0fK5gCQ&ab_channel=ACMSIGCHI},
year = {2020},
date = {2020-04-01},
journal = {CHI 2020 Paper},
pages = {1-13},
abstract = {Mobile vocabulary learning interfaces typically present material only in auditory and visual channels, underutilizing the haptic modality. We explored haptic-integrated learning by adding free-form digital annotation to mobile vocabulary learning interfaces. Through a series of pilot studies, we identified three design factors: annotation mode, presentation sequence, and vibrotactile feedback, that influence recall in haptic-integrated vocabulary interfaces. These factors were then evaluated in a within-subject comparative study using a digital flashcard interface as baseline. Results using an 84-item vocabulary showed that the 'whole word' annotation mode is highly effective, yielding a 24.21% increase in immediate recall scores and a 30.36% increase in the 7-day delayed scores. Effects of presentation sequence and vibrotactile feedback were more transient; they affected the results of immediate tests, but not the delayed tests. We discuss the implications of these factors for designing future mobile learning applications.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2019
Eivind Flobak; Jo Dugstad Wake; Joakim Vindenes; Smiti Kahlon; T. Nordgreen; Frode Guribye: Participatory Design of VR Scenarios for Exposure Therapy. Conference: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19), Paper 569, New York, 2019.
@conference{Flobak2019,
title = {Participatory Design of VR Scenarios for Exposure Therapy},
author = {Eivind Flobak and Jo Dugstad Wake and Joakim Vindenes and Smiti Kahlon and T. Nordgreen and Frode Guribye},
url = {https://www.researchgate.net/publication/330205387_Participatory_Design_of_VR_Scenarios_for_Exposure_Therapy},
doi = {10.1145/3290605.3300799},
year = {2019},
date = {2019-05-01},
booktitle = {Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19)},
number = {Paper 569},
address = {New York},
abstract = {Virtual reality (VR) applications for exposure therapy predominantly use computer-generated imagery to create controlled environments in which users can be exposed to their fears. Creating 3D animations, however, is demanding and time-consuming. This paper presents a participatory approach for prototyping VR scenarios that are enabled by 360° video and grounded in lived experiences. We organized a participatory workshop with adolescents to prototype such scenarios, consisting of iterative phases of ideation, storyboarding, live-action plays recorded by a 360° camera, and group evaluation. Through an analysis of the participants' interactions, we outline how they worked to design prototypes that depict situations relevant to those with a fear of public speaking. Our analysis also explores how participants used their experiences and reflections as resources for design. Six clinical psychologists evaluated the prototypes from the workshop and concluded they were viable therapeutic tools, emphasizing the immersive, realistic experience they presented. We argue that our approach makes the design of VR scenarios more accessible.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
2018
Njål Borch; Ingar Mæhlum Arntzen: Mediasync Report 2015: Evaluating timed playback of HTML5 Media. Technical Report, 2018.
@techreport{Borch2015,
title = {Mediasync Report 2015: Evaluating timed playback of HTML5 Media},
author = {Njål Borch and Ingar Mæhlum Arntzen},
url = {https://norceresearch.brage.unit.no/norceresearch-xmlui/bitstream/handle/11250/2711974/Norut_Tromso_rapport_28-2015.pdf?sequence=2},
year = {2018},
date = {2018-12-18},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Y. Kudo; K. Takashima; Morten Fjeld; Y. Kitamura: AdapTable: Extending Reach over Large Tabletops Through Flexible Multi-Display Configuration. Proceedings, 2018.
@proceedings{Kudo2018,
title = {AdapTable: Extending Reach over Large Tabletops Through Flexible Multi-Display Configuration},
author = {Y. Kudo and K. Takashima and Morten Fjeld and Y. Kitamura},
url = {https://dl.acm.org/doi/pdf/10.1145/3279778.3279779
https://www.youtube.com/watch?v=HG_4COsWGDM},
year = {2018},
date = {2018-11-17},
keywords = {},
pubstate = {published},
tppubtype = {proceedings}
}
K. Zhu; Morten Fjeld; A. Ülüner: WristOrigami: Exploring foldable design for multi-display smartwatch. Proceedings, 2018.
@proceedings{Zhu2018,
title = {WristOrigami: Exploring foldable design for multi-display smartwatch},
author = {K. Zhu and Morten Fjeld and A. Ülüner},
url = {https://dl.acm.org/doi/pdf/10.1145/3196709.3196713
https://www.youtube.com/watch?v=1_2D79zntIk},
year = {2018},
date = {2018-06-09},
keywords = {},
pubstate = {published},
tppubtype = {proceedings}
}
V. Vechev; A. Dancu; S. Perrault; Q. Roy; Morten Fjeld; S. Zhao: Movespace: on-body athletic interaction for running and cycling. Journal Article, 2018.
@article{Vechev2018,
title = {Movespace: on-body athletic interaction for running and cycling},
author = {V. Vechev and A. Dancu and S. Perrault and Q. Roy and Morten Fjeld and S. Zhao},
url = {https://dl.acm.org/doi/pdf/10.1145/3206505.3206527
https://www.youtube.com/watch?v=1_u4Zm4F7I0},
year = {2018},
date = {2018-05-29},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Ingar Mæhlum Arntzen; Njål Borch; François Daoust: Media Synchronization on the Web. In: MediaSync. Book Chapter, 2018.
@inbook{Arntzen2018,
title = {Media Synchronization on the Web. In: MediaSync},
author = {Ingar Mæhlum Arntzen and Njål Borch and François Daoust},
url = {https://www.w3.org/community/webtiming/files/2018/05/arntzen_mediasync_web_author_edition.pdf},
year = {2018},
date = {2018-05-07},
keywords = {},
pubstate = {published},
tppubtype = {inbook}
}
Ingar Mæhlum Arntzen; Njål Borch; François Daoust; Dominique Hazael-Massieux: Multi-device Linear Composition on the Web, Enabling Multi-device Linear Media with HTMLTimingObject and Shared Motion. Conference: Media Synchronization Workshop, Brussels, 2018.
@conference{Arntzen2018b,
title = {Multi-device Linear Composition on the Web, Enabling Multi-device Linear Media with HTMLTimingObject and Shared Motion},
author = {Ingar Mæhlum Arntzen and Njål Borch and François Daoust and Dominique Hazael-Massieux},
url = {https://www.researchgate.net/publication/324991987_Multi-device_Linear_Composition_on_the_Web_Enabling_Multi-device_Linear_Media_with_HTMLTimingObject_and_Shared_Motion},
year = {2018},
date = {2018-01-01},
address = {Brussels},
organization = {Media Synchronization Workshop},
abstract = {Composition is a hallmark of the Web, yet it does not fully extend to linear media. This paper defines linear composition as the ability to form linear media by coordinated playback of independent linear components. We argue that native Web support for linear composition is a key enabler for Web-based multi-device linear media, and that precise multi-device timing is the main technical challenge. This paper proposes the introduction of an HTMLTimingObject as basis for linear composition in the single-device scenario. Linear composition in the multi-device scenario is ensured as HTMLTimingObjects may integrate with Shared Motion, a generic timing mechanism for the Web. By connecting HTMLMediaElements and HTMLTrackElements with a multi-device timing mechanism, a powerful programming model for multi-device linear media is unlocked.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
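The abstract above describes coordinating playback of independent components against a shared timing mechanism. As a purely illustrative sketch (the `Motion` class and `syncStep` function below are hypothetical, not the actual HTMLTimingObject or Shared Motion API), the core idea is a deterministic "motion" — position progressing from a known (position, velocity, timestamp) vector — that media elements correct themselves against:

```javascript
// Hypothetical sketch: a timing-object-like motion. Position is never
// stored directly; it is computed from the last update vector and a clock,
// so any number of components can query a consistent timeline.
class Motion {
  constructor(clock = () => Date.now() / 1000) {
    this.clock = clock;
    this.vector = { position: 0, velocity: 0, timestamp: clock() };
  }
  update({ position, velocity }) {
    const now = this.clock();
    const cur = this.query();
    this.vector = {
      position: position !== undefined ? position : cur.position,
      velocity: velocity !== undefined ? velocity : cur.velocity,
      timestamp: now,
    };
  }
  query() {
    const dt = this.clock() - this.vector.timestamp;
    return {
      position: this.vector.position + this.vector.velocity * dt,
      velocity: this.vector.velocity,
      timestamp: this.vector.timestamp + dt,
    };
  }
}

// One correction step for an HTMLMediaElement-like player: seek on large
// skew, nudge playbackRate on small skew to close the gap gradually.
function syncStep(player, motion, { seekThreshold = 1.0 } = {}) {
  const target = motion.query();
  const skew = player.currentTime - target.position;
  if (Math.abs(skew) > seekThreshold) {
    player.currentTime = target.position; // hard correction: seek
    player.playbackRate = target.velocity;
  } else {
    player.playbackRate = target.velocity - skew * 0.5; // soft correction
  }
  return skew;
}
```

Because the motion is just a vector plus a clock, the same state can be shared across devices (the Shared Motion idea): each device applies the same `syncStep` loop against its local clock-synchronized copy.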
2017
Njål Borch: Økt samvirke og beslutningsstøtte – Case Salten Brann IKS [Increased cooperation and decision support – the Salten Brann IKS case]. Technical Report, 2017.
@techreport{Borch2017,
title = {Økt samvirke og beslutningsstøtte – Case Salten Brann IKS},
author = {Njål Borch},
url = {https://norceresearch.brage.unit.no/norceresearch-xmlui/handle/11250/2647818},
year = {2017},
date = {2017-08-17},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Njål Borch; François Daoust; Ingar Mæhlum Arntzen: Timing - small step for developers, giant leap for the media industry. Conference: IBC 2016, 2017.
@conference{Borch2016,
title = {Timing - small step for developers, giant leap for the media industry, IBC 2016},
author = {Njål Borch and François Daoust and Ingar Mæhlum Arntzen},
url = {https://www.w3.org/community/webtiming/files/2016/09/Borch_IBC2016-final.pdf},
year = {2017},
date = {2017-02-11},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
2016
Frode Guribye; Lars Nyre: The changing ecology of tools for live news reporting. Journal Article, Journalism Practice, 10 (11), pp. 1216-1230, 2016, ISSN: 1751-2794.
@article{Guribye2016,
title = {The changing ecology of tools for live news reporting},
author = {Frode Guribye and Lars Nyre},
url = {https://www.tandfonline.com/doi/pdf/10.1080/17512786.2016.1259011?needAccess=true},
doi = {10.1080/17512786.2016.1259011},
issn = {1751-2794},
year = {2016},
date = {2016-12-05},
journal = {Journalism Practice},
volume = {10},
number = {11},
pages = {1216-1230},
abstract = {Broadcast news channels provide fresh, continuously updated coverage of events, in sharp competition with other news channels in the same market. The live moment is a valuable feature, and broadcasters have always relied on teams that can react quickly to breaking news and report live from the scene. Technology plays an important role in the production of live news, and a number of tools are applied by skilled actors in what can be called an ecology of tools for live news reporting. This study explores new video tools for television news, and the tinkering conducted by the reporting teams to adapt to such tools. Six journalists and photographers at broadcaster TV 2 in Norway were interviewed about their everyday work practices out in the field, and we present the findings in an analysis where six aspects of contemporary live news reporting are explored: (1) from heavy to light equipment, (2) more live news at TV 2, (3) the practice of going live, (4) the mobility of live reporters, (5) tinkering to go live, and (6) quicker pace of production. In the concluding remarks we summarize our insights about live news reporting.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Ingar Mæhlum Arntzen; Njål Borch: Data-independent sequencing with the timing object: a JavaScript sequencer for single-device and multi-device web media. Proceedings of the 7th International Conference on Multimedia Systems (MMSys '16), 2016.
@proceedings{Arntzen2016,
title = {Data-independent sequencing with the timing object: a JavaScript sequencer for single-device and multi-device web media. In Proceedings of the 7th International Conference on Multimedia Systems (MMSys '16)},
author = {Ingar Mæhlum Arntzen and Njål Borch},
url = {https://www.w3.org/community/webtiming/files/2016/05/mmsys2016slides.pdf},
year = {2016},
date = {2016-05-12},
keywords = {},
pubstate = {published},
tppubtype = {proceedings}
}
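The sequencer named in the title above activates timed cues as a timing object's position advances. A minimal, data-independent sketch of that idea (function names here are illustrative, not the published library's API): cues carry only an interval, and the sequencer diffs the active set between successive positions to emit enter/exit events.

```javascript
// Hypothetical sketch of data-independent sequencing: cues are plain
// {id, start, end} records; the sequencer does not care what data they
// reference, only which intervals cover the current timeline position.
function activeCues(cues, position) {
  return cues.filter(c => c.start <= position && position < c.end);
}

// Diff two active sets to find which cues entered or exited — the basis
// for firing enter/exit callbacks as playback position moves.
function diffActive(prev, next) {
  const prevIds = new Set(prev.map(c => c.id));
  const nextIds = new Set(next.map(c => c.id));
  return {
    enter: next.filter(c => !prevIds.has(c.id)),
    exit: prev.filter(c => !nextIds.has(c.id)),
  };
}
```

Driving this from a shared timing object rather than a single media element's clock is what makes the same sequencing logic work in both single-device and multi-device scenarios.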
M.G. Carcedo; S.H. Chua; S. Perrault; P. Wozniak; R. Joshi; M. Obaid; Morten Fjeld; S. Zhao: Hapticolor: Interpolating color information as haptic feedback to assist the colorblind. Proceedings, 2016.
@proceedings{Carcedo2016,
title = {Hapticolor: Interpolating color information as haptic feedback to assist the colorblind},
author = {M.G. Carcedo and S.H. Chua and S. Perrault and P. Wozniak and R. Joshi and M. Obaid and Morten Fjeld and S. Zhao},
url = {https://dl.acm.org/doi/10.1145/2858036.2858220
https://www.youtube.com/watch?v=qjoH6eNNZBU},
year = {2016},
date = {2016-05-01},
keywords = {},
pubstate = {published},
tppubtype = {proceedings}
}
P. Wozniak; N. Goyal; P. Kucharski; L. Lischke; S. Mayer; Morten Fjeld: RAMPARTS: Supporting sensemaking with spatially-aware mobile interactions. Journal Article, 2016.
@article{Wozniak2016,
title = {RAMPARTS: Supporting sensemaking with spatially-aware mobile interactions},
author = {P Wozniak and N. Goyal and P. Kucharski and L. Lischke and S. Mayer and Morten Fjeld},
url = {https://dl.acm.org/doi/10.1145/2858036.2858491
https://www.youtube.com/watch?v=t01yLj3xhVc},
year = {2016},
date = {2016-05-01},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2015
Njål Borch; Ingar Mæhlum Arntzen: Mediasync Report 2015: Evaluating timed playback of HTML5 Media. Journal Article, Norut, 2015, ISBN: 978-82-7492-319-5.
@article{Borch2015b,
title = {Mediasync Report 2015: Evaluating timed playback of HTML5 Media},
author = {Njål Borch and Ingar Mæhlum Arntzen},
url = {https://norceresearch.brage.unit.no/norceresearch-xmlui/bitstream/handle/11250/2711974/Norut_Tromso_rapport_28-2015.pdf?sequence=2&isAllowed=y},
isbn = {978-82-7492-319-5},
year = {2015},
date = {2015-12-08},
journal = {Norut},
abstract = {In this report we provide an extensive analysis of timing aspects of HTML5 Media, across a variety of browsers, operating systems and media formats. In particular, we investigate how playback compares to the progression of the local clock and how players respond to time-shifting and adjustments in playback-rate. Additionally, we use the MediaSync JS library to enforce correctly timed playback for HTML5 media, and indicate the effects this has on user experience. MediaSync is developed based on results from the above analysis. MediaSync aims to provide a best effort solution that works across a variety of media formats, operating systems and browser types, and does not make optimizations for specific permutations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}