ABSTRACT
This research examines the use of Explainable Artificial Intelligence (XAI) in educational assessment systems, focusing on its potential to detect and mitigate bias in automated grading and predictive analytics. The study aims to evaluate how XAI frameworks, such as SHAP, LIME, and counterfactual explanations, enhance transparency, fairness, and accountability in student evaluation processes. The research problem arises from the growing reliance on AI in education, where opaque decision-making can lead to unintended discrimination, inaccuracies, or unequal treatment of students from diverse backgrounds. Key research questions include: How do XAI frameworks reveal sources of bias in educational AI systems? To what extent can these frameworks support equitable assessment practices? The study hypothesizes that implementing XAI in educational assessment improves fairness and interpretability, allowing educators to make more informed decisions while reducing the risk of bias and enhancing student trust in automated evaluation tools.
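As a minimal illustration of how a framework such as SHAP can surface the kind of bias signal discussed above, the sketch below fits a grading model on synthetic data and checks whether a protected attribute receives substantial attribution. The data, feature names, and model choice are assumptions for demonstration only and are not drawn from this study.

```python
# Illustrative sketch (synthetic data, not from the study): use SHAP to see
# which features drive an automated grading model's predictions.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: essay length, rubric score, and a protected attribute.
essay_length = rng.normal(600, 150, n)
rubric_score = rng.uniform(0, 10, n)
group = rng.integers(0, 2, n)  # protected attribute (e.g., demographic group)
grade = 0.5 * rubric_score + 0.002 * essay_length + 0.8 * group + rng.normal(0, 0.5, n)

X = np.column_stack([essay_length, rubric_score, group])
feature_names = ["essay_length", "rubric_score", "group"]

model = GradientBoostingRegressor().fit(X, grade)

# SHAP attributions: per-feature contribution to each predicted grade.
explainer = shap.Explainer(model)
shap_values = explainer(X)

# If the protected attribute carries substantial attribution, the model is
# relying on it directly -- one concrete bias signal an educator can act on.
mean_abs = np.abs(shap_values.values).mean(axis=0)
for name, val in zip(feature_names, mean_abs):
    print(f"{name}: mean |SHAP| = {val:.3f}")
```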
Keywords: Explainable Artificial Intelligence (XAI); AI Frameworks; Bias Detection; Algorithmic Bias; Educational Assessment; Assessment Systems.
Received: Oct 22, 2025
Revised: Oct 24, 2025
Accepted: Nov 30, 2025
Walaa Rahim Gouda
| Acknowledgment | None |
|---|---|
| Author Contribution | All authors contributed equally to this paper. All authors read and approved the final paper. |
| Conflicts of Interest | The authors declare no conflict of interest. |
| Funding | This research received no external funding. |
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Copyright (c) 2025 The Authors