Asking Again and Again: Exploring LLM Robustness to Repeated Questions
Sagi Shaier
Department of Computer Science
University of Colorado Boulder
sagi.shaier@colorado.edu
Abstract
This study examines whether large language models (LLMs), specifically GPT-4o-mini, one of the latest ChatGPT models, are sensitive to repeated questions and whether repeating a question within a prompt can improve response accuracy. We hypothesize that reiterating a question within a single prompt might sharpen the model's focus on the key elements of the query. To test this, we evaluate ChatGPT's performance on samples from two reading comprehension datasets under both open-book and closed-book settings, repeating each question 1, 3, or 5 times per prompt. Our findings indicate that the model is not sensitive to repeated questions, highlighting its robustness and consistency in this setting. Code is available at https://github.com/Shaier/question_repeat.git.
1 Introduction
Large language models (LLMs) have become indispensable tools across various fields, excelling in tasks such as natural language understanding (Li et al., 2023; OpenAI, 2023a,b; Anthropic), content creation (Ma et al., 2024; Bae and Kim, 2024), and question answering (OpenAI, 2023a,b). Their capability to generate coherent, human-like responses has made them particularly valuable in applications ranging from chatbots to research support. Among these applications, question answering stands out as a key area, highlighting the models' strengths in reasoning, information retrieval, and contextual comprehension.
Interactions with LLMs typically involve providing context and posing questions. Prior research has shown that LLMs are sensitive to input variations, such as the order of questions and context (Shaier et al., 2024a), conflicting information (Longpre et al., 2021; Zhou et al., 2023; Neeman et al., 2022; Shaier et al., 2024b,c; Hong et al., 2023; Chen et al., 2022), and minor adversarial perturbations (Jia and Liang, 2017; Cao et al., 2022; Alexandrov et al., 2023). These variations can lead to unstable behavior, such as degraded performance or the introduction of biases (Levy et al., 2024; Shaier et al., 2023). This body of work highlights the importance of input structure and presentation in shaping response quality and relevance. However, the specific effect of question repetition within prompts remains under-explored.
This study investigates whether repeating a question within a prompt can improve LLM performance. By systematically increasing the frequency of question repetition, we evaluate its impact on overall performance. Specifically, we test one of the latest ChatGPT versions, GPT-4o-mini, on two reading comprehension datasets under open-book and closed-book settings, repeating each question 1, 3, or 5 times per prompt.
Our results indicate that repeating questions within a single prompt does not improve model performance. Interestingly, this finding contrasts with prior research (Mekala et al., 2024), which highlights the benefits of instructing models to restate questions in their responses. These results suggest that further research is needed to explore the nuanced relationship between question repetition and LLM behavior.
2 Experiments
In our experiments, each dataset consists of triples $(q, c, \mathcal{A})$, where $q$ is a question, $c$ is the context document, and $\mathcal{A}$ is the set of gold answers. We follow prior work (Brown et al., 2020; Chowdhery et al., 2022) by concatenating the question and its context into a single string, which is then input to the model.
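To make the setup concrete, the following is a minimal sketch of the prompt construction (the exact template and function names are our assumptions for illustration; the paper only states that the question and context are concatenated into one string):

```python
def build_prompt(question: str, context: str | None, repeats: int = 1) -> str:
    """Build a single prompt string with the question repeated `repeats` times.

    `context` is included in the open-book setting and set to None in the
    closed-book setting. The template below is illustrative only.
    """
    repeated_question = "\n".join([question] * repeats)
    if context is not None:
        return f"Context: {context}\n\nQuestion:\n{repeated_question}\n\nAnswer:"
    return f"Question:\n{repeated_question}\n\nAnswer:"
```

In the Qx3 and Qx5 conditions described below, `repeats` is simply set to 3 or 5.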
2.1 Metrics
To evaluate accuracy, we follow prior work (Liu et al., 2023; Mallen et al., 2023; Kandpal et al., 2023) and use substring matching, i.e., checking whether any of the gold answers appears in the model's output.
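A minimal sketch of this substring-matching accuracy follows (the normalization step, i.e., lowercasing and stripping punctuation, is our assumption; the paper does not detail it):

```python
import string


def normalize(text: str) -> str:
    """Lowercase and strip punctuation so matching is not overly strict (an assumed choice)."""
    return text.lower().translate(str.maketrans("", "", string.punctuation)).strip()


def substring_match(output: str, gold_answers: list[str]) -> bool:
    """Return True if any gold answer appears as a substring of the model output."""
    normalized_output = normalize(output)
    return any(normalize(answer) in normalized_output for answer in gold_answers)


def accuracy(outputs: list[str], gold: list[list[str]]) -> float:
    """Fraction of examples where at least one gold answer is contained in the output."""
    return sum(substring_match(o, g) for o, g in zip(outputs, gold)) / len(outputs)
```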
2.2 Datasets
Due to the cost of querying ChatGPT and limited funding, we used a sample of 500 questions from each dataset. Across our 12 experimental settings (2 datasets × 3 levels of question repetition × 2 conditions: open-book and closed-book), this amounts to a total of 6,000 evaluated prompts.
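For illustration, the full grid of runs can be enumerated as in the sketch below (variable names and the sampling procedure are ours, not the released code):

```python
import itertools
import random

DATASETS = ["SQuAD", "HotPotQA"]
REPEATS = [1, 3, 5]               # Qx1, Qx3, Qx5
CONTEXT_SETTINGS = [True, False]  # open-book (with context) vs. closed-book
SAMPLE_SIZE = 500


def sample_questions(examples: list[dict], k: int = SAMPLE_SIZE, seed: int = 0) -> list[dict]:
    """Draw a fixed random sample of k questions from a dataset split."""
    return random.Random(seed).sample(examples, k)


# 2 datasets x 3 repetition levels x 2 context settings = 12 settings;
# with 500 sampled questions each, this yields 6,000 evaluated prompts.
grid = list(itertools.product(DATASETS, REPEATS, CONTEXT_SETTINGS))
assert len(grid) == 12
```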
Stanford Question Answering Dataset (SQuAD)
SQuAD (Rajpurkar et al., 2016) is a large-scale reading comprehension dataset containing over 100,000 questions crafted by crowdworkers based on a diverse set of Wikipedia articles. Each question is designed such that the answer is a specific text segment from the corresponding article, encouraging models to closely analyze and comprehend the context of the passage. The dataset includes a wide range of question types, requiring reasoning at various levels of complexity, from simple fact retrieval to more nuanced understanding of dependencies within the text.
HotPotQA
HotPotQA (Yang et al., 2018) is a complex multi-hop question-answering dataset comprising 113,000 question-answer pairs grounded in Wikipedia articles. It is specifically designed to address limitations in existing question answering datasets by emphasizing multi-hop reasoning, where answering a question requires synthesizing information from multiple supporting documents. The dataset stands out for its diversity, as it is not restricted to predefined knowledge bases or schemas, ensuring that questions reflect realistic and open-ended scenarios. Additionally, HotPotQA provides annotated sentence-level supporting facts, enabling strong supervision for reasoning and facilitating explainable predictions.
2.3 Models
We conduct our experiments using one of the latest versions of ChatGPT, specifically GPT-4o-mini (gpt-4o-mini-2024-07-18). Like many other closed-source models, the specifics of its training data and parameter count remain undisclosed. However, it achieves impressive results on the MMLU benchmark and, at the time of writing, surpasses GPT-4 in chat preference ratings on the LMSYS leaderboard, highlighting its strong performance and user alignment.
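As a rough sketch, such queries can be issued through the OpenAI Python client as shown below (the helper function and decoding parameters are assumptions, not the exact configuration used in this paper):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_model(prompt: str, model: str = "gpt-4o-mini-2024-07-18") -> str:
    """Send a single prompt to GPT-4o-mini and return the text of its reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic-style decoding; an assumed setting
    )
    return response.choices[0].message.content
```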
3 Results
Dataset | Context Setting | Qx1 Accuracy | Qx3 Accuracy | Qx5 Accuracy
---|---|---|---|---
HotPotQA | With Context | 0.58 | 0.58 | 0.59
HotPotQA | Without Context | 0.42 | 0.42 | 0.43
SQuAD | With Context | 0.99 | 0.99 | 0.98
SQuAD | Without Context | 0.49 | 0.49 | 0.49

Table 1: GPT-4o-mini accuracy on HotPotQA and SQuAD, with and without context, when the question is repeated 1, 3, or 5 times per prompt (Qx1, Qx3, Qx5).
Our results are summarized in Table 1. This section examines the performance of GPT-4o-mini across the two datasets, HotPotQA and SQuAD, under both open-book (with context) and closed-book (without context) settings, and evaluates the impact of varying levels of question repetition (Qx1, Qx3, Qx5).
Open-Book (With Context) Performance
In the open-book setting, where the model is provided with relevant contextual information, performance remained stable across different levels of question repetition. For HotPotQA, the accuracy was 0.58 when the question was asked once (Qx1) and remained unchanged when repeated three times (Qx3). A slight increase to 0.59 was observed when the question was repeated five times (Qx5), although this improvement is negligible and within the margin of error. Similarly, for SQuAD, the model performed exceptionally well, achieving an accuracy of 0.99 for both Qx1 and Qx3, with only a minimal decline to 0.98 for Qx5. These findings indicate that repeating questions does not significantly affect performance in open-book settings, underscoring the model’s robustness to such input variations.
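As a rough sanity check (our own back-of-the-envelope calculation, not an analysis reported in the experiments), the binomial standard error of an accuracy of 0.58 measured on 500 questions is about 0.022, so the 0.01 difference between Qx1 and Qx5 on HotPotQA is well within sampling noise:

```python
import math

# Standard error of a proportion p estimated from n independent questions.
n = 500
p = 0.58
standard_error = math.sqrt(p * (1 - p) / n)
print(f"standard error ~= {standard_error:.3f}")  # ~0.022, larger than the 0.01 change
```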
Closed-Book (Without Context) Performance
In the closed-book setting, where the model must rely solely on its internal knowledge, performance was consistently lower than in the open-book setting. For HotPotQA, the model achieved an accuracy of 0.42 for both Qx1 and Qx3, with a slight increase to 0.43 for Qx5. Similarly, for SQuAD, the accuracy was uniform at 0.49 across all levels of question repetition (Qx1, Qx3, Qx5). These results demonstrate that question repetition has no discernible impact on the model’s performance in closed-book scenarios, further reinforcing the idea that repetition neither improves nor harms the output quality.
Comparison Between Datasets
When comparing performance across datasets, a clear distinction emerges. In the open-book setting, the model performs significantly better on SQuAD than on HotPotQA, with accuracies nearing 1.0 for the former and around 0.58–0.59 for the latter. This disparity highlights the greater complexity of HotPotQA, which requires multi-hop reasoning and synthesis of information across multiple documents. In contrast, SQuAD questions are typically answerable from a single passage, making them less challenging for the model.
In the closed-book setting, the performance gap between the datasets persists, albeit with lower overall accuracies. SQuAD accuracy remains relatively higher at 0.49 across all repetitions, while HotPotQA accuracy hovers around 0.42–0.43. This trend further illustrates the challenge posed by HotPotQA’s multi-hop reasoning tasks when contextual information is unavailable.
Effect of Question Repetition
Across both datasets and settings, the results indicate that repeating a question within a prompt neither improves nor degrades model performance. The consistent accuracies observed across Qx1, Qx3, and Qx5 suggest that GPT-4o-mini processes the information effectively without being influenced by repeated phrasing. This stability reflects the robustness of the model to redundant input structures, contrasting with prior work (Mekala et al., 2024) that highlights the benefits of instructing models to restate questions in their responses.
Summary
Overall, our findings show that question repetition does not meaningfully affect performance, regardless of dataset, context availability, or repetition level. These results highlight the stability of GPT-4o-mini in handling repeated input structures and provide valuable insights into its interaction with prompt design variations.
4 Future Work
This study highlights the stability of GPT-4o-mini’s performance under question repetition, but it also opens avenues for further exploration. Future work could examine the effects of question repetition on other LLMs, particularly those with differing architectures or training paradigms. Additionally, it would be valuable to assess whether more complex or nuanced forms of repetition, such as rephrased or semantically varied questions, could elicit improved performance or deeper reasoning. Expanding the evaluation to include more diverse datasets, including those with open-ended or subjective answers, may reveal whether these findings generalize across broader tasks. Finally, a more detailed investigation into the interaction between prompt structure, repetition, and interpretability could provide actionable insights into optimizing LLM prompts for specific applications.
5 Conclusion
In this study, we explored the impact of repeating questions within prompts on the performance of GPT-4o-mini using the HotPotQA and SQuAD datasets. Our results demonstrate that question repetition neither improves nor degrades performance, highlighting the model’s robustness to redundant input structures. This finding contrasts with prior work which highlights the benefits of instructing models to restate questions in their responses. These insights contribute to a deeper understanding of how LLMs interact with prompt design, offering a foundation for future research in question-answering systems.
References
- Alexandrov et al. (2023) Dmitriy Alexandrov, Anastasiia Zakharova, and Nikolay Butakov. 2023. Does noise really matter? Investigation into the influence of noisy labels on BERT-based question answering system. In 2023 IEEE 17th International Conference on Semantic Computing (ICSC), pages 33–40.
- Anthropic. Model Card and Evaluations for Claude Models.
- Bae and Kim (2024) Minwook Bae and Hyounghun Kim. 2024. Collective critics for creative story generation. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 18784–18819, Miami, Florida, USA. Association for Computational Linguistics.
- Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc.
- Cao et al. (2022) Yu Cao, Dianqi Li, Meng Fang, Tianyi Zhou, Jun Gao, Yibing Zhan, and Dacheng Tao. 2022. TASA: Deceiving question answering models by twin answer sentences attack. Preprint, arXiv:2210.15221.
- Chen et al. (2022) Hung-Ting Chen, Michael Zhang, and Eunsol Choi. 2022. Rich knowledge sources bring complex knowledge conflicts: Recalibrating models to reflect conflicting evidence. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2292–2307, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
- Chowdhery et al. (2022) Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. PaLM: Scaling language modeling with pathways. Preprint, arXiv:2204.02311.
- Hong et al. (2023) Giwon Hong, Jeonghwan Kim, Junmo Kang, Sung-Hyon Myaeng, and Joyce Jiyoung Whang. 2023. Discern and answer: Mitigating the impact of misinformation in retrieval-augmented models with discriminators. Preprint, arXiv:2305.01579.
- Jia and Liang (2017) Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021–2031, Copenhagen, Denmark. Association for Computational Linguistics.
- Kandpal et al. (2023) Nikhil Kandpal, Haikang Deng, Adam Roberts, Eric Wallace, and Colin Raffel. 2023. Large language models struggle to learn long-tail knowledge. Preprint, arXiv:2211.08411.
- Levy et al. (2024) Sharon Levy, Tahilin Sanchez Karver, William Adler, Michelle R. Kaufman, and Mark Dredze. 2024. Evaluating biases in context-dependent sexual and reproductive health questions. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 5801–5812, Miami, Florida, USA. Association for Computational Linguistics.
- Li et al. (2023) Dacheng Li, Rulin Shao, Anze Xie, Ying Sheng, Lianmin Zheng, Joseph E. Gonzalez, Ion Stoica, Xuezhe Ma, and Hao Zhang. 2023. How long can open-source LLMs truly promise on context length?
- Liu et al. (2023) Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2023. Lost in the middle: How language models use long contexts. Preprint, arXiv:2307.03172.
- Longpre et al. (2021) Shayne Longpre, Kartik Perisetla, Anthony Chen, Nikhil Ramesh, Chris DuBois, and Sameer Singh. 2021. Entity-based knowledge conflicts in question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7052–7063, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
- Ma et al. (2024) Yan Ma, Yu Qiao, and Pengfei Liu. 2024. MoPS: Modular story premise synthesis for open-ended automatic story generation. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2135–2169, Bangkok, Thailand. Association for Computational Linguistics.
- Mallen et al. (2023) Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Hajishirzi. 2023. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9802–9822, Toronto, Canada. Association for Computational Linguistics.
- Mekala et al. (2024) Rajasekhar Reddy Mekala, Yasaman Razeghi, and Sameer Singh. 2024. EchoPrompt: Instructing the model to rephrase queries for improved in-context learning. Preprint, arXiv:2309.10687.
- Neeman et al. (2022) Ella Neeman, Roee Aharoni, Or Honovich, Leshem Choshen, Idan Szpektor, and Omri Abend. 2022. DisentQA: Disentangling parametric and contextual knowledge with counterfactual question answering. arXiv preprint.
- OpenAI (2023a) OpenAI. 2023a. ChatGPT: Optimizing language models for dialogue.
- OpenAI (2023b) OpenAI. 2023b. GPT-4 technical report. Preprint, arXiv:2303.08774.
- Rajpurkar et al. (2016) Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics.
- Shaier et al. (2023) Sagi Shaier, Kevin Bennett, Lawrence Hunter, and Katharina Kann. 2023. Emerging challenges in personalized medicine: Assessing demographic effects on biomedical question answering systems. In Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 540–550, Nusa Dua, Bali. Association for Computational Linguistics.
- Shaier et al. (2024a) Sagi Shaier, Lawrence Hunter, and Katharina von der Wense. 2024a. It is not about what you say, it is about how you say it: A surprisingly simple approach for improving reading comprehension. In Findings of the Association for Computational Linguistics: ACL 2024, pages 8292–8305, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
- Shaier et al. (2024b) Sagi Shaier, Lawrence Hunter, and Katharina Wense. 2024b. Desiderata for the context use of question answering systems. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 777–792, St. Julian's, Malta. Association for Computational Linguistics.
- Shaier et al. (2024c) Sagi Shaier, Ari Kobren, and Philip V. Ogren. 2024c. Adaptive question answering: Enhancing language model proficiency for addressing knowledge conflicts with source citations. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 17226–17239, Miami, Florida, USA. Association for Computational Linguistics.
- Yang et al. (2018) Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369–2380, Brussels, Belgium. Association for Computational Linguistics.
- Zhou et al. (2023) Wenxuan Zhou, Sheng Zhang, Hoifung Poon, and Muhao Chen. 2023. Context-faithful prompting for large language models. Preprint, arXiv:2303.11315.