| Left context | Word | Right context |
| --- | --- | --- |
| Sec. | AI | for Human Learning and Behavior |
|  | ai | .2022.750763 |
| artificial intelligence ( | AI | ) have been, and will continue |
| 2020). | AI | has also been useful in |
|  | ai | , 2020). Though |
|  | artificial intelligence | is already applied to a |
| considered true | artificial intelligence | that is comparable to human |
|  | ai | , 2020). Humans are not capable |
| artificial intelligence” (X | AI | ; Gunning et al., 2019 |
|  | AI | , but, rather, discuss |
| the autonomous capabilities of | AI | systems advance (Lyons, 2013 |
| lead to misuse of the | AI | system. Further, over-reliance on |
| complacency and failing to detect | AI | system errors |
|  | AI | , and resulting inappropriate reliance on |
| noted above, trust in | AI | systems is closely related to |
| collaboration between computer science, | AI | , and machine learning |
| developing | AI | systems that effectively interact with |
| the future behavior of the | AI | system more accurately (Riedl |
| learning, explanation in AI, or interpretability in | AI | , nor |
| apply theories to | artificial intelligence | . From this, we lay the |
| foundation for computer science and | AI | researchers to understand what |
| interactions, failures can occur because | AI | has neglected the |
| intensive problem-solving at which | AI | is already adept |
| has great implications for human- | AI | interactions |
| Related to social signal processing, | AI | capabilities are still |
| traditional evaluation of | AI | systems that are primarily based |
| teammate. We know that current | AI |  |
| understanding of how | AI | systems come to their decisions |
| human teammates to reject the | AI | system (Akula et al |
|  | ai | , 2020; Papagni and Koeszegi, 2021 |
|  | AI | system within a human-agent |
| factors that can affect an | AI | system's |
| As | AI | progresses, interdisciplinary research is needed |
| structure. | AI | , then, can only be expected |
| emergence to explainable and empathetic | AI |  |
| you: theory of mind in | AI | . Psychol. Med. 50 |
| DARPA (2016). Explainable Artificial Intelligence (X | AI | ). Technical |
|  | artificial intelligence | : why, when, what for, and |
|  | ai | .2021.104541 |
| Z. (2019). XAI-Explainable | artificial intelligence | . Sci. Robot. 4, 37 |
| Intelligence ( | AI | ) applications in orthopaedics: an innovative |
|  | AI | Soc. 34, 535–543. doi |
| Formalizing trust in | artificial intelligence | : prerequisites, causes |
| goals of human trust in | AI | ,” in Proceedings of the 2021 |
|  | artificial intelligence | : Nonverbal social signal prediction in |
| implications of | artificial intelligence | . Bus. Horizons 62, 15–25 |
|  | AI | Spring Symposium Series |
| Miller, T. (2019). Explanation in | artificial intelligence | : insights |
| A. (2020). “The use of | AI | in |
| Rai, A. (2020). Explainable | AI | : from black box to glass |
| M. O. (2019). Human-centered | artificial intelligence | and machine |
| 2021). Symbolic behaviour in | artificial intelligence | . arXiv preprint |