Evaluating the Utility of Conformal Prediction Sets for AI-Advised Image Labeling
ACM Human Factors in Computing Systems (CHI) 2024 Best Paper Honorable Mention

Abstract
As deep neural networks are more commonly deployed in high-stakes domains, their black-box nature makes uncertainty quantification challenging. We investigate the effects of presenting conformal prediction sets—a distribution-free class of methods for generating prediction sets with specified coverage—to express uncertainty in AI-advised decision-making. Through a large online experiment, we compare the utility of conformal prediction sets to displays of Top-1 and Top-k predictions for AI-advised image labeling. In a pre-registered analysis, we find that the utility of prediction sets for accuracy varies with the difficulty of the task: while they yield accuracy on par with or below that of Top-1 and Top-k displays for easy images, prediction sets excel at assisting humans in labeling out-of-distribution (OOD) images, especially when the set size is small. Our results empirically pinpoint practical challenges of conformal prediction sets and offer implications for how to incorporate them into real-world decision-making.
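To make the idea concrete, the following is a minimal sketch of split conformal prediction for classification, the family of methods the abstract refers to. It uses a synthetic toy classifier (Dirichlet-sampled softmax scores) rather than any model from the paper, and the `1 - p(true class)` score is one common choice among several; all names and parameters here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy softmax outputs for a 3-class problem, standing in for a trained
# classifier's predicted probabilities on a held-out calibration split.
n_cal, n_classes = 500, 3
cal_probs = rng.dirichlet(np.ones(n_classes) * 2.0, size=n_cal)
cal_labels = np.array([rng.choice(n_classes, p=p) for p in cal_probs])

alpha = 0.1  # target miscoverage: sets should contain the true label >= 90% of the time

# Conformal score for each calibration example: 1 minus the probability
# assigned to the true class (higher score = model was more "surprised").
scores = 1.0 - cal_probs[np.arange(n_cal), cal_labels]

# Quantile threshold with the standard finite-sample correction.
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
qhat = np.quantile(scores, q_level, method="higher")

def prediction_set(probs):
    """Return indices of all classes whose score falls under the threshold."""
    return np.where(1.0 - probs <= qhat)[0]

# A test-time prediction set: may contain one label, several, or all of them.
test_probs = rng.dirichlet(np.ones(n_classes) * 2.0)
print(prediction_set(test_probs))
```

Under exchangeability of calibration and test data, sets built this way contain the true label with probability at least 1 - alpha on average; the set size then acts as the uncertainty signal shown to the human labeler.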
Citation
BibTeX
@inproceedings{zhang2024evaluating,
  title={Evaluating the utility of conformal prediction sets for {AI}-advised image labeling},
  author={Zhang, Dongping and Chatzimparmpas, Angelos and Kamali, Negar and Hullman, Jessica},
  booktitle={Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems},
  pages={1--19},
  year={2024},
  doi={10.1145/3613904.3642446}
}