A Systematic Review and Taxonomy of Explanations in Decision Support and Recommender Systems Journal Article Ingrid Nunes; Dietmar Jannach In: User Modeling and User-Adapted Interaction, vol. 27, no. 3-5, pp. 393-444, 2017, (Pre SFI). @article{Nunes2020,
title = {A Systematic Review and Taxonomy of Explanations in Decision Support and Recommender Systems},
author = {Ingrid Nunes and Dietmar Jannach},
url = {https://arxiv.org/pdf/2006.08672.pdf},
doi = {10.1007/s11257-017-9195-0},
year = {2017},
date = {2017},
journal = {User Modeling and User-Adapted Interaction},
volume = {27},
number = {3-5},
pages = {393-444},
abstract = {With the recent advances in the field of artificial intelligence, an increasing number of decision-making tasks are delegated to software systems. A key requirement for the success and adoption of such systems is that users must trust system choices or even fully automated decisions. To achieve this, explanation facilities have been widely investigated as a means of establishing trust in these systems since the early years of expert systems. With today's increasingly sophisticated machine learning algorithms, new challenges in the context of explanations, accountability, and trust towards such systems constantly arise. In this work, we systematically review the literature on explanations in advice-giving systems. This is a family of systems that includes recommender systems, which is one of the most successful classes of advice-giving software in practice. We investigate the purposes of explanations as well as how they are generated, presented to users, and evaluated. As a result, we derive a novel comprehensive taxonomy of aspects to be considered when designing explanation facilities for current and future decision support systems. The taxonomy includes a variety of different facets, such as explanation objective, responsiveness, content and presentation. Moreover, we identified several challenges that remain unaddressed so far, for example related to fine-grained issues associated with the presentation of explanations and how explanation facilities are evaluated.},
note = {Pre SFI},
keywords = {Artificial Intelligence, Decision Support System, Expert System, Explanation, Knowledge-based system, Machine Learning, Recommender systems, Systematic review, Trust, WP2: User Modeling Personalization and Engagement},
pubstate = {published},
tppubtype = {article}
}
Verifying information with multimedia content on twitter: A comparative study of automated approaches Journal Article Christina Boididou; Stuart Middleton; Zhiwei Jin; Symeon Papadopoulos; Duc-Tien Dang-Nguyen; G. Boato; Ioannis (Yiannis) Kompatsiaris In: Multimedia Tools and Applications, vol. 77, no. 12, pp. 15545-15571, 2017, (Pre SFI). @article{Boididou2017,
title = {Verifying information with multimedia content on twitter: A comparative study of automated approaches},
author = {Christina Boididou and Stuart Middleton and Zhiwei Jin and Symeon Papadopoulos and Duc-Tien Dang-Nguyen and G. Boato and Ioannis (Yiannis) Kompatsiaris},
url = {https://www.researchgate.net/publication/319859894_Verifying_information_with_multimedia_content_on_twitter_A_comparative_study_of_automated_approaches},
doi = {10.1007/s11042-017-5132-9},
year = {2017},
date = {2017-09-01},
urldate = {2017-09-01},
journal = {Multimedia Tools and Applications},
volume = {77},
number = {12},
pages = {15545-15571},
abstract = {An increasing amount of posts on social media are used for disseminating news information and are accompanied by multimedia content. Such content may often be misleading or be digitally manipulated. More often than not, such pieces of content reach the front pages of major news outlets, having a detrimental effect on their credibility. To avoid such effects, there is profound need for automated methods that can help debunk and verify online content in very short time. To this end, we present a comparative study of three such methods that are catered for Twitter, a major social media platform used for news sharing. Those include: a) a method that uses textual patterns to extract
},
note = {Pre SFI},
keywords = {Credibility, Fake Detection, Multimedia, Social Media, Trust, Twitter, Veracity, Verification, WP3: Media Content Production and Analysis},
pubstate = {published},
tppubtype = {article}
}