Talks and presentations

Trustworthy AI for Computer Vision

January 18, 2026

Talk, Department of Statistics & Computer Science, University of Kelaniya, Kelaniya, Sri Lanka

Organized by the IEEE Student Branch of University of Kelaniya.

Abstract: Trustworthy AI in computer vision requires models that are not only accurate but also reliable and aware of their limitations. Two critical pillars for achieving this trust are out-of-distribution (OOD) detection and model calibration. OOD detection aims to identify inputs that differ significantly from the training data, preventing overconfident and erroneous predictions on unfamiliar or anomalous samples. Calibration ensures that a model's predicted confidence scores accurately reflect its true probability of being correct, separating predictive performance from reliability assessment. This talk explores the synergy and challenges at the intersection of these two fields. We review contemporary methods for OOD detection, including approaches based on density estimation, distance metrics, logit analysis, and energy scores, as well as calibration techniques such as temperature scaling and ensemble-based approaches. Achieving robust OOD detection and calibration is essential for the safe deployment of vision systems in real-world, open-ended environments, forming the foundation for trustworthy AI that can recognize and express its own uncertainty.
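To make two of the techniques mentioned above concrete, here is a minimal NumPy sketch of an energy-based OOD score and post-hoc temperature scaling, operating on hypothetical classifier logits (the example logit values and the temperature are illustrative, not taken from the talk):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def energy_score(logits, T=1.0):
    """Energy-based OOD score: E(x) = -T * logsumexp(logits / T).
    Lower energy suggests in-distribution; higher energy suggests OOD."""
    z = logits / T
    m = z.max(axis=-1)
    return -T * (m + np.log(np.exp(z - m[..., None]).sum(axis=-1)))

def temperature_scale(logits, T):
    """Post-hoc calibration: divide logits by a scalar temperature T
    (fitted on a held-out validation set) before the softmax."""
    return softmax(logits / T)

# Hypothetical logits: a confident in-distribution sample vs. a flat,
# ambiguous one of the kind an OOD input often produces.
id_logits = np.array([8.0, 1.0, 0.5])
ood_logits = np.array([1.1, 1.0, 0.9])

print(energy_score(id_logits))   # low energy: likely in-distribution
print(energy_score(ood_logits))  # higher energy: likely OOD
print(temperature_scale(id_logits, T=2.0))  # softened, less overconfident probs
```

Thresholding the energy score then separates in-distribution from OOD inputs, while temperature scaling leaves the predicted class unchanged and only softens the confidence, which is why it is a popular calibration baseline.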