| Issue | E3S Web Conf., Volume 699 (2026): 11th International Conference on Energy and City of the Future (EVF’2024) |
|---|---|
| Article Number | 05003 |
| Number of page(s) | 13 |
| Section | E-Health, Professions of the Future and Related Training Courses |
| DOI | https://doi.org/10.1051/e3sconf/202669905003 |
| Published online | 20 March 2026 |
Explainable Artificial Intelligence for Diabetes Diagnosis
1 Laboratory of Systems and Applications of Information and Telecommunications Technologies (SATIT), Department of Industrial Engineering, ABBES Laghrour University, Khenchela, Algeria.
2 Laboratoire des Telecommunications (LT), Faculty of Sciences and Technologies, 8 Mai 1945 University, Guelma, Algeria.
3 Department of Computer Sciences, Faculty of Sciences and Technologies, ABBES Laghrour University, Khenchela, Algeria.
* Corresponding author:
Abstract
Whether young or old, type 1, type 2, or gestational, newly diagnosed or long-time sufferers, caretakers or loved ones, millions of people are afflicted and affected by diabetes. The World Health Organization (WHO) predicts that by 2030 diabetes will be the seventh leading cause of death worldwide, and estimates that more than 422 million adults are living with diabetes, with millions more affected by prediabetes. Machine learning models have shown promising results in correctly identifying the presence of diabetes, which is essential for providing efficient treatment; however, their decision-making process is often considered a “black box” that lacks transparency and interpretability. In this study, we explore the use of Shapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), two popular explainable AI techniques, to generate local and global explanations for machine learning models. The datasets used in the study were gathered from Kaggle and split into training and test sets, on which several machine learning algorithms were trained; accurate prediction at this stage can in turn boost the success rate of therapy. Categorical Boosting (CatBoost), Extreme Gradient Boosting (XGBoost), Support Vector Machine (SVM), Random Forest (RF), Adaptive Boosting (AdaBoost), Logistic Regression (LR), Light Gradient Boosting Machine (LightGBM), and Decision Tree (DT) are well-known models for predicting diabetes and supporting therapy management. Explainable AI techniques were then applied to explain the models’ predictions on the test sets. Our results demonstrate that SHAP and LIME can effectively identify patterns in patients’ symptoms and suggest a potential diagnosis or recommend further courses of action. In addition, this study presents a comparative analysis of these algorithms based on performance metrics such as accuracy, recall, AUC-ROC, and F1 score on the test set, indicating the potential of combining machine learning and explainable AI to improve diabetes diagnosis and treatment.
Key words: Machine Learning / Diabetes Prediction / SHAP / LIME / XAI
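To make the pipeline described in the abstract concrete, the following is a minimal Python sketch of one configuration: training XGBoost (one of the eight models compared), computing the reported metrics on a held-out test set, and generating SHAP and LIME explanations. The file name `diabetes.csv` and the `Outcome` label column are assumptions based on common Kaggle diabetes datasets, not details confirmed by the paper; hyperparameters are illustrative.

```python
# Sketch only: dataset file name, column names, and hyperparameters are assumed.
import pandas as pd
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score, roc_auc_score, f1_score
from xgboost import XGBClassifier

# Load a Kaggle-style diabetes dataset (hypothetical file and label column).
df = pd.read_csv("diabetes.csv")
X, y = df.drop(columns=["Outcome"]), df["Outcome"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Train one of the compared models (XGBoost shown here).
model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)

# Performance metrics used in the comparative analysis.
pred = model.predict(X_test)
proba = model.predict_proba(X_test)[:, 1]
print("accuracy:", accuracy_score(y_test, pred))
print("recall:  ", recall_score(y_test, pred))
print("AUC-ROC: ", roc_auc_score(y_test, proba))
print("F1 score:", f1_score(y_test, pred))

# Global explanations with SHAP (TreeExplainer supports XGBoost models).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)  # global feature-importance view

# Local explanation for a single test patient with LIME.
lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=list(X.columns),
    class_names=["no diabetes", "diabetes"],
    mode="classification",
)
exp = lime_explainer.explain_instance(X_test.values[0], model.predict_proba)
print(exp.as_list())  # per-feature contributions to this prediction
```

The same loop can be repeated over the other classifiers listed in the abstract (CatBoost, SVM, RF, AdaBoost, LR, LightGBM, DT) to reproduce the comparative analysis; for non-tree models, `shap.KernelExplainer` would replace `TreeExplainer`.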
© The Authors, published by EDP Sciences, 2026
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.