In the prediction literature, Algorithm Aversion refers to the tendency to stop using an algorithm once you realize it can make mistakes, even when it outperforms human prediction. For example, with AI interviews, disease prediction, autonomous driving, and AI judges, people remain reluctant to rely on the system even if all technical difficulties were resolved and its average performance exceeded humans'. Several studies have been conducted, and two trends were observed: [Dietvorst, Simmons, and Massey, 2014, 2016 - link in attachment]
- Even when the mistakes are identical, people tend to judge algorithms more harshly than humans.
- Among algorithms with similar performance, people tend to prefer ones that a human can control, even in some small part.
The first trend is intuitive; the second is harder to explain, but it appears to reflect a fear of black-box algorithms.
Most current deep learning algorithms are black boxes, so they are subject to Algorithm Aversion. Since the black-box algorithm itself is hard to make controllable, the linked study instead investigated the effect of adding an explanation - in short, whether eXplainable AI (XAI) mitigates the Algorithm Aversion effect. (The task is a graduate-admissions interview task from Kaggle.)
On average the difference is not large, but the results consistently favor XAI. Interestingly, when participants were split into groups with and without basic knowledge of probability, machine learning, and computational science, those without such background preferred XAI by a clearly significant margin (beyond the error range), whereas those familiar with the field showed little or no preference for XAI. (The more you know, the less you need the explanation? ^^) cf. AI industry workers (pros) very slightly preferred XAI.
If you want to use AI to enhance or replace an existing service, you should consider introducing XAI. It seems almost essential for AI interviews in particular, which are a hot topic these days: applicants will want to know why they were rejected, and an explanation is what makes the decision convincing.
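As a concrete illustration of what such an explanation could look like (this is my own minimal sketch, not the model from the linked study - the feature names and weights are entirely hypothetical), one simple form of XAI is an additive per-feature attribution over a linear scoring model, so a rejected applicant can see which factor drove the decision:

```python
# Minimal sketch of an additive feature-attribution explanation for a
# hypothetical graduate-admissions score. Feature names and weights are
# illustrative only, not taken from the linked study.

WEIGHTS = {"gpa": 0.5, "test_score": 0.3, "research": 0.2}

def score_with_explanation(applicant):
    """Return (total score, per-feature contributions).

    Because the model is linear, each feature's contribution is exactly
    weight * value, and the contributions sum to the total score.
    """
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"gpa": 3.6, "test_score": 0.8, "research": 0.1}
)

# Show the applicant which factors mattered most, largest first.
for feature, contrib in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: {contrib:+.2f}")
print(f"total: {total:.2f}")
```

A deep model would need a post-hoc attribution method instead, but the point of the sketch is the same: the explanation turns "you scored 2.06" into "your GPA contributed +1.80, your test score +0.24", which is the kind of answer a rejected applicant would find convincing.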