DOI: 10.7763/IJCTE.2025.V17.1374
Assessing AI-Generated Questions’ Alignment with Cognitive Frameworks in Educational Assessment
Antoun Yaacoub 1,*, Jérôme Da-Rugna 1, and Zainab Assaghir 2
1. ESIEA, France
2. Faculty of Science, Lebanese University, Beirut, Lebanon
Email: antoun.yaacoub@esiea.fr (A.Y.); jerome.darugna@esiea.fr (J.D.R.); zainab.assaghir@ul.edu.lb (Z.A.)
*Corresponding author
Manuscript received August 30, 2024; revised October 4, 2024; accepted April 2, 2025; published July 9, 2025
Abstract—This study evaluates the integration of Bloom’s Taxonomy into OneClickQuiz, an Artificial Intelligence (AI)-driven plugin for automating Multiple-Choice Question (MCQ) generation in Moodle. Bloom’s Taxonomy provides a structured framework for categorizing educational objectives into hierarchical cognitive levels. Our research investigates whether incorporating this taxonomy can improve the alignment of AI-generated questions with specific cognitive objectives. We developed a dataset of 3691 questions categorized according to Bloom’s levels and employed various classification models—Multinomial Logistic Regression, Naive Bayes, Linear Support Vector Classification (SVC), and a Transformer-based model (DistilBERT)—to evaluate their effectiveness in categorizing questions. Our results indicate that higher Bloom’s levels generally correlate with greater question length, Flesch-Kincaid Grade Level (FKGL), and Lexical Density (LD), reflecting the complexity of higher cognitive demands. Multinomial Logistic Regression showed varying accuracy across Bloom’s levels, performing best on “Knowledge” and worst on higher-order levels. Merging the higher-level categories improved accuracy for complex cognitive tasks. Naive Bayes and Linear SVC also classified lower levels effectively but struggled with higher-order tasks. DistilBERT achieved the highest performance, substantially improving classification of both lower- and higher-order cognitive levels and reaching an overall validation accuracy of 91%. This study highlights the potential of integrating Bloom’s Taxonomy into AI-driven assessment tools and underscores the advantages of advanced models such as DistilBERT for enhancing educational content generation.
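To make the feature analysis and the classical baselines described above concrete, the following sketch shows one possible way to compute question length, FKGL, and a lexical-density proxy, and to fit a TF-IDF + Linear SVC classifier with scikit-learn and textstat. This is an illustrative sketch only, not the authors' pipeline: the file name bloom_questions.csv and the column names question and level are hypothetical, and the lexical-density measure here is a rough stopword-based approximation.

# Illustrative sketch (assumed inputs, not the authors' exact pipeline):
# compute simple text features per question and train a TF-IDF + Linear SVC
# baseline for Bloom-level classification.
import pandas as pd
import textstat  # readability metrics, including Flesch-Kincaid Grade Level
from sklearn.feature_extraction.text import TfidfVectorizer, ENGLISH_STOP_WORDS
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report

def lexical_density(text: str) -> float:
    """Rough proxy for lexical density: share of non-stopword alphabetic tokens."""
    tokens = [t for t in text.lower().split() if t.isalpha()]
    if not tokens:
        return 0.0
    content = [t for t in tokens if t not in ENGLISH_STOP_WORDS]
    return len(content) / len(tokens)

# Hypothetical dataset file with columns "question" (text) and "level" (Bloom label)
df = pd.read_csv("bloom_questions.csv")
df["length"] = df["question"].str.split().str.len()
df["fkgl"] = df["question"].apply(textstat.flesch_kincaid_grade)
df["lex_density"] = df["question"].apply(lexical_density)

X_train, X_val, y_train, y_val = train_test_split(
    df["question"], df["level"], test_size=0.2, random_state=42, stratify=df["level"]
)

# TF-IDF bag-of-words features feeding a Linear SVC, one of the baselines named above
baseline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
baseline.fit(X_train, y_train)
print(classification_report(y_val, baseline.predict(X_val)))

A per-class report of this kind is what reveals the pattern summarized in the abstract: strong performance on lower Bloom levels and weaker performance on higher-order ones for the classical models.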
Keywords—artificial intelligence, machine learning, educational technology, Bloom’s taxonomy, education system and its application, natural language processing
Cite: Antoun Yaacoub, Jérôme Da-Rugna, and Zainab Assaghir, "Assessing AI-Generated Questions’ Alignment with Cognitive Frameworks in Educational Assessment," International Journal of Computer Theory and Engineering, vol. 17, no. 3, pp. 114-125, 2025.
Copyright © 2025 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.