Utilizing Explainable AI for Interpreting Machine Learning Model Results in Ceria Credit Scoring
Abstract
This study aims to improve the transparency of machine learning models in credit scoring through several Explainable Artificial Intelligence (XAI) methods. SHAP, BRCG, ALE, Anchor, and ProtoDash are used to explain the predictions of three models, namely logistic regression, XGBoost, and random forest. The study applies a quantitative, comparative approach in which Ceria loan application data from Bank Rakyat Indonesia (BRI) are analyzed with each machine learning model and the resulting explanations are evaluated using the Explanation Consistency Framework (ECF). The results show that XAI methods improve understanding of model decisions: SHAP and ALE are effective for global explanations, while Anchor and ProtoDash provide in-depth insight at the level of individual applicants. Evaluation with the ECF shows that the post-hoc methods are highly consistent, although Anchor falls short on the identity axiom. In conclusion, XAI methods can help increase trust and transparency in credit scoring at BRI.
Keywords: Explainable Artificial Intelligence; Credit Scoring; Machine Learning; Model Interpretability; Explanation Consistency Framework
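To make the pipeline summarized above concrete, the sketch below (Python) trains a gradient-boosted classifier and extracts both global and local SHAP explanations, mirroring the global/local distinction drawn in the abstract. Since the Ceria loan data are confidential, a small synthetic dataset with hypothetical feature names (monthly_income, bureau_score, debt_to_income) stands in; this is an illustrative sketch, not the study's actual implementation.

```python
# Minimal sketch of the SHAP explanation step described in the abstract.
# The Ceria loan data are confidential, so a synthetic dataset with
# hypothetical feature names stands in for them.
import numpy as np
import shap
import xgboost as xgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(5_000_000, 1_500_000, n),  # monthly_income (hypothetical)
    rng.integers(300, 850, n),            # bureau_score (hypothetical)
    rng.uniform(0.0, 1.0, n),             # debt_to_income (hypothetical)
])
feature_names = ["monthly_income", "bureau_score", "debt_to_income"]
# Synthetic default label loosely tied to the features.
y = ((X[:, 2] > 0.6) & (X[:, 1] < 550)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = xgb.XGBClassifier(n_estimators=100, max_depth=3)
model.fit(X_train, y_train)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global view: mean |SHAP| per feature approximates overall importance.
global_importance = np.abs(shap_values).mean(axis=0)
for name, imp in zip(feature_names, global_importance):
    print(f"{name}: {imp:.4f}")

# Local view: each feature's contribution to one applicant's prediction.
print("Applicant 0 contributions:",
      dict(zip(feature_names, shap_values[0])))
```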
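The ECF's identity axiom, on which the abstract reports Anchor falling short, requires that identical instances receive identical explanations. A minimal check in that spirit, reusing the explainer from the sketch above, could look like the following; the study's actual ECF scoring may differ.

```python
# Sketch of an identity-axiom check in the spirit of the Explanation
# Consistency Framework (ECF): the same input explained twice must
# yield the same explanation. Reuses `explainer` and `X_test` from the
# previous sketch; the study's actual ECF metric may differ.
def identity_consistency(explain_fn, X, atol=1e-8):
    """Fraction of instances whose explanation is reproduced exactly
    when the same instance is explained a second time."""
    hits = 0
    for x in X:
        e1 = explain_fn(x.reshape(1, -1))
        e2 = explain_fn(x.reshape(1, -1))
        hits += np.allclose(e1, e2, atol=atol)
    return hits / len(X)

score = identity_consistency(explainer.shap_values, X_test[:50])
print(f"Identity-axiom consistency: {score:.2%}")
```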