Publications
Publications from 2020 and earlier are not direct results of SFI MediaFutures, but key results from our team members' work on topics related to MediaFutures.
2020
Alain D. Starke; Martijn C. Willemsen; Chris C.P. Snijders
With a little help from my peers: depicting social norms in a recommender interface to promote energy conservation Conference
pp. 1-11, March 2020.
Tags: Decision Support System, Human computer interaction, Human-centered computing, Information Systems, User studies
@conference{Starke2020b,
title = {With a little help from my peers: depicting social norms in a recommender interface to promote energy conservation},
author = {Alain D. Starke and Martijn C. Willemsen and Chris C.P. Snijders},
url = {https://dl.acm.org/doi/10.1145/3377325.3377518},
doi = {10.1145/3377325.3377518},
year = {2020},
date = {2020-03-17},
number = {March 2020},
pages = {1-11},
abstract = {How can recommender interfaces help users to adopt new behaviors? In the behavioral change literature, nudges and norms are studied to understand how to convince people to take action (e.g. towel re-use is boosted when stating that `75% of hotel guests' do so), but what is advised is typically not personalized. Most recommender systems know what to recommend in a personalized way, but not much research has considered how to present such advice to help users to change their current habits. We examine the value of presenting normative messages (e.g. `75% of users do X') based on actual user data in a personalized energy recommender interface called `Saving Aid'. In a study among 207 smart thermostat owners, we compared three different normative explanations (`Global', `Similar', and `Experienced' norm rates) to a non-social baseline (`kWh savings'). Although none of the norms increased the total number of chosen measures directly, we show evidence that the effect of norms seems to be mediated by the perceived feasibility of the measures. Also, how norms were presented (i.e. specific source, adoption rate) affected which measures were chosen within our Saving Aid interface.},
keywords = {Decision Support System, Human computer interaction, Human-centered computing, Information Systems, User studies},
pubstate = {published},
tppubtype = {conference}
}
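As a minimal illustration of the explanation styles compared in the study, the sketch below formats a social norm message and the non-social baseline. The adoption counts, measure, and exact wording here are invented for illustration, not taken from the paper:

```python
def norm_message(adopters: int, audience: int, source: str) -> str:
    """Format a normative explanation in the style of the norm
    conditions compared in the study ('Global', 'Similar', 'Experienced')."""
    rate = round(100 * adopters / audience)
    return f"{rate}% of {source} adopted this measure"

def kwh_baseline(savings_kwh: float) -> str:
    """Non-social control condition: plain expected savings."""
    return f"Saves about {savings_kwh:.0f} kWh per year"

# Invented adoption counts for one energy-saving measure
print(norm_message(155, 207, "users similar to you"))  # a 'Similar'-style norm
print(norm_message(120, 207, "all users"))             # a 'Global'-style norm
print(kwh_baseline(340.0))                             # the non-social baseline
```

The study's finding that the norm's source and adoption rate affected which measures users chose suggests both arguments matter, not just whether a norm is shown at all.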
2019
Christoph Trattner; Dietmar Jannach
Learning to Recommend Similar Items from Human Judgements Journal Article
In: User Modeling and User-Adapted Interaction Journal, pp. 1-50, 2019, (Pre SFI).
Tags: Content-based recommender systems, Similar item recommendations, Similarity measures, User studies
@article{Trattner2020,
title = {Learning to Recommend Similar Items from Human Judgements},
author = {Christoph Trattner and Dietmar Jannach},
url = {https://www.christophtrattner.info/pubs/UMUAI2019.pdf},
doi = {10.1007/s11257-019-09245-4},
year = {2019},
date = {2019-09-20},
journal = {User Modeling and User-Adapted Interaction Journal},
pages = {1-50},
abstract = {Similar item recommendations—a common feature of many Web sites—point users to other interesting objects given a currently inspected item. A common way of computing such recommendations is to use a similarity function, which expresses how much alike two given objects are. Such similarity functions are usually designed based on the specifics of the given application domain. In this work, we explore how such functions can be learned from human judgments of similarities between objects, using two domains of “quality and taste”—cooking recipe and movie recommendation—as guiding scenarios. In our approach, we first collect a few thousand pairwise similarity assessments with the help of crowdworkers. Using these data, we then train different machine learning models that can be used as similarity functions to compare objects. Offline analyses reveal for both application domains that models that combine different types of item characteristics are the best predictors for human-perceived similarity. To further validate the usefulness of the learned models, we conducted additional user studies. In these studies, we exposed participants to similar item recommendations using a set of models that were trained with different feature subsets. The results showed that the combined models that exhibited the best offline prediction performance led to the highest user-perceived similarity, but also to recommendations that were considered useful by the participants, thus confirming the feasibility of our approach.},
note = {Pre SFI},
keywords = {Content-based recommender systems, Similar item recommendations, Similarity measures, User studies},
pubstate = {published},
tppubtype = {article}
}
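The pipeline described in the abstract, collecting pairwise human similarity judgments and fitting a model that then serves as the similarity function, can be sketched minimally. This is not the authors' actual models (they train and compare several machine-learned models over domain-specific item characteristics); it is a toy linear model fitted by gradient descent over invented feature data:

```python
def pair_features(a, b):
    """Represent an item pair by the absolute per-dimension differences
    of its feature vectors (e.g. recipe ingredients, movie metadata)."""
    return [abs(x - y) for x, y in zip(a, b)]

def train_similarity(pairs, ratings, lr=0.1, epochs=500):
    """Fit weights so that sim(a, b) = bias - w . |a - b| approximates
    the crowd's similarity ratings (plain SGD on squared error)."""
    w = [0.0] * len(pairs[0])
    bias = 0.0
    for _ in range(epochs):
        for feats, target in zip(pairs, ratings):
            pred = bias - sum(wi * fi for wi, fi in zip(w, feats))
            err = pred - target
            bias -= lr * err
            for i, fi in enumerate(feats):
                w[i] += lr * err * fi  # gradient of the -w.f term
    return w, bias

def similarity(w, bias, a, b):
    return bias - sum(wi * fi for wi, fi in zip(w, pair_features(a, b)))

# Toy "crowd" data: invented ratings that fall off with feature distance
items = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [1.0, 2.0]]
pairs, ratings = [], []
for i in range(len(items)):
    for j in range(i + 1, len(items)):
        f = pair_features(items[i], items[j])
        pairs.append(f)
        ratings.append(5.0 - f[0] - 0.5 * f[1])

w, bias = train_similarity(pairs, ratings)
# The closer pair should come out as more similar than the farther one
print(similarity(w, bias, items[0], items[1]) > similarity(w, bias, items[0], items[3]))
```

Once trained, such a function ranks candidate items against a currently inspected item, which is how the paper's user studies evaluated the learned models against human-perceived similarity.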