Loginova, Olga and Bezrukov, Oleksandr and Shekhar, Ravi and Kravets, Alexey (2025) Addressing Blind Guessing: Calibration of Selection Bias in Multiple-Choice Question Answering by Video Language Models. In: The 63rd Annual Meeting of the Association for Computational Linguistics, 2025-07-27 - 2025-08-01, Vienna, Austria.
Abstract
Evaluating Video Language Models (VLMs) is a challenging task. Due to its transparency, Multiple-Choice Question Answering (MCQA) is widely used to measure the performance of these models through accuracy. However, existing MCQA benchmarks fail to capture the full reasoning capabilities of VLMs due to selection bias, whereby models disproportionately favor certain answer options based on positional patterns observed during training. In this work, we conduct a comprehensive empirical analysis of several VLM architectures across major datasets designed to assess complex video-focused reasoning. We identify where the bias is most pronounced and demonstrate to what extent model responses reflect genuine understanding of video content and related questions, as opposed to reliance on arbitrary patterns or superficial cues, such as answer position. By decomposing the MCQA task and adapting fairness bias metrics to VLMs, we introduce a post-processing calibration technique, BOLD, to balance this bias. Our results show that reducing selection bias improves not only debiasing metrics but also overall model performance, including Accuracy and F1 Mean score. Our method, by suppressing "blind guessing", offers a more cost- and time-effective approach to mitigating selection bias compared to existing techniques. This study represents the first focused investigation of selection bias in video-to-text LLM-powered models.
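The record does not detail the BOLD calibration itself, but the positional selection bias it targets can be illustrated with a minimal sketch. The function below (a hypothetical helper, not the paper's implementation) quantifies bias as the spread between the most- and least-favored answer positions across a model's predictions, where 0 indicates uniform use of positions:

```python
from collections import Counter

def selection_bias(predicted_positions, num_options=4):
    """Spread between the most- and least-chosen answer positions.

    predicted_positions: list of 0-indexed option positions a model selected.
    Returns a value in [0, 1]; 0 means positions are used uniformly,
    1 means the model always picks a single position ("blind guessing").
    """
    counts = Counter(predicted_positions)
    rates = [counts.get(i, 0) / len(predicted_positions)
             for i in range(num_options)]
    return max(rates) - min(rates)

# A model that always answers option A is maximally position-biased:
print(selection_bias([0, 0, 0, 0]))  # 1.0
# A model spreading its choices evenly shows no positional bias:
print(selection_bias([0, 1, 2, 3]))  # 0.0
```

In practice, such a score would be computed after shuffling answer-option order across prompts, so that a nonzero value reflects positional preference rather than the dataset's answer-key distribution.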
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Divisions: | Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
| SWORD Depositor: | Unnamed user with email elements@essex.ac.uk |
| Depositing User: | Unnamed user with email elements@essex.ac.uk |
| Date Deposited: | 21 Apr 2026 16:16 |
| Last Modified: | 21 Apr 2026 16:17 |
| URI: | http://repository.essex.ac.uk/id/eprint/42492 |
Available files
Filename: 2025.acl-long.162.pdf
Licence: Creative Commons: Attribution 4.0