Status: Completed

DengueStackX-19

Explainable AI for Dengue Diagnosis

An interpretable machine-learning model for dengue fever diagnosis using ensemble stacking and explainability techniques.

Accuracy: 96.38%

Journal Rank: Q1

Publisher: Elsevier

1. The Problem

Dengue fever affects millions worldwide, and early, accurate diagnosis is crucial for effective treatment. Traditional machine learning models often act as 'black boxes,' making it difficult for healthcare professionals to trust and understand their predictions. There was a need for a model that not only achieves high accuracy but also provides interpretable explanations for its predictions.

2. The Approach

I developed DengueStackX-19, a stacking ensemble model that combines multiple base classifiers to achieve robust predictions. To address class imbalance in the dataset, I applied the SMOTEENN resampling technique, which combines SMOTE oversampling with Edited Nearest Neighbours cleaning. The key innovation was integrating SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) to provide global feature-importance rankings and local explanations for individual predictions. This makes the model's decisions transparent and interpretable for medical professionals.

3. Technologies Used

Python · Scikit-learn · SHAP · LIME · Pandas · NumPy · Matplotlib · Seaborn

4. The Outcome

The model achieved 96.38% accuracy on the test dataset, outperforming several baseline models. The research was published in Healthcare Analytics, an Elsevier Q1 journal, demonstrating its scientific rigor and contribution to the field. The explainability features were particularly well-received, as they bridge the gap between AI predictions and clinical decision-making.

5. Lessons Learned

This project taught me the importance of model interpretability in healthcare applications. I learned that achieving high accuracy is only part of the solution: making AI trustworthy and understandable for end users is equally critical. I also gained deep experience with ensemble methods and advanced resampling techniques for imbalanced datasets.