Findings from recent studies suggest that variations in performance across reading comprehension tests may be a product of differences among assessment dimensions (e.g., response format, genre) or child skills (e.g., Francis, Fletcher, Catts, & Tomblin, 2005; Keenan, 2013). The purpose of the current study was to investigate sources of variation in reading comprehension across three response formats (i.e., open-ended questions, multiple choice, retell) in relation to text genres and child skills. Participants included 79 fourth graders recruited from six classrooms within one elementary school. All participants read six passages (three narrative and three expository texts) from the Qualitative Reading Inventory-Fifth Edition (QRI-5; Leslie & Caldwell, 2011) and completed a brief comprehension assessment for each passage, with response format varying across passages. In addition, measures of word reading, linguistic and cognitive skills, and learning strategies were administered to each student across two 60-min testing sessions. Item-response crossed random effects models revealed statistically significant differences between open-ended and multiple-choice questions. Moreover, across the three response formats, five covariates were statistically significant predictors of reading comprehension: (a) genre, (b) listening comprehension, (c) working memory, (d) attention, and (e) word recognition. Further exploratory analyses identified three two-way interactions: (a) Response Format (i.e., open-ended and multiple-choice questions) × Genre, (b) Response Format × Listening Comprehension, and (c) Response Format × Attention. Results of this study offer evidence that the use of different response formats may lead to variations in student performance across reading comprehension tests. Given these findings, directions for future research and implications for using comprehension tests in research, policy, and practice are discussed.