Explainable AI: A Case Study on a Citizen's Complaint Text Classification Model
Machine Learning, XAI, LIME, Explainability, Explainable AI, Artificial Intelligence
Present-day society is deeply influenced by Artificial Intelligence (AI) systems in numerous contexts, such as healthcare, industry, and marketing. Although AI has made many valuable contributions, such as enabling early disease diagnosis, monitoring health, improving the quality of industrial processes, forecasting client demand, and enhancing energy efficiency, it is important to build an accountable, responsible, and transparent approach to AI models so that people can benefit from these contributions whilst preventing societal harm.
There is no single ideal methodology or framework for interpreting or explaining machine learning models. Studies of different explainability methods exist, including work on their limitations in explaining a model across all of its aspects. Recent studies have indicated that integrating explainability tools into the use of artificial intelligence models brings advantages to the decision-making process and to the monitoring of model behavior, with the aim of preventing bias. However, there is still a lack of concrete case studies in the field of explainable AI (XAI), particularly concerning Natural Language Processing (NLP). The present study contributes to this domain by examining the advantages of employing XAI in the context of public administration, through a qualitative evaluation of XAI in the classification of textual complaints.
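As a minimal illustrative sketch (not the authors' actual model or data), the snippet below shows how a LIME explanation of the kind evaluated in this study might be produced for a toy complaint classifier. The TF-IDF plus logistic-regression pipeline, the example complaint texts, and the class names "infrastructure" and "waste" are all hypothetical assumptions introduced for illustration only.

```python
# A minimal sketch, assuming a hypothetical two-class complaint classifier;
# this is NOT the paper's pipeline or dataset.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy data standing in for citizen complaints.
texts = [
    "The streetlight on my block has been broken for weeks",
    "Garbage was not collected on the scheduled day",
    "Potholes on the main road are damaging cars",
    "Trash bins are overflowing in the park",
]
labels = [0, 1, 0, 1]  # 0 = infrastructure, 1 = waste collection (assumed classes)

# Train a simple text-classification pipeline.
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

# LIME perturbs the input text and fits a local surrogate model to
# attribute the prediction to individual words.
explainer = LimeTextExplainer(class_names=["infrastructure", "waste"])
explanation = explainer.explain_instance(
    "The road near the school is full of potholes",
    pipeline.predict_proba,
    num_features=5,
)
for word, weight in explanation.as_list():
    print(f"{word}: {weight:+.3f}")  # per-word contribution to the prediction
```

The word weights printed at the end are the kind of local, human-readable evidence a public-administration analyst could inspect when deciding whether to trust an automated routing of a complaint.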