Title: Explainable Artificial Intelligence (XAI) Techniques - A Review and Case Study
Author: Lee Kaijen, Kaijen (TU Delft Electrical Engineering, Mathematics and Computer Science)
Contributors: Lal, C. (mentor); Conti, M. (mentor); P. Gonçalves, Joana (graduation committee)
Degree granting institution: Delft University of Technology
Programme: Computer Science and Engineering
Project: CSE3000 Research Project
Date: 2022-06-24

Abstract: The significant progress of Artificial Intelligence (AI) and Machine Learning (ML) techniques such as Deep Learning (DL) has led to their successful adoption for a wide variety of problems. However, this success has been accompanied by increasing model complexity, resulting in a lack of transparency and trustworthiness. Explainable Artificial Intelligence (XAI) has been proposed as a solution to the need for trustworthy AI/ML systems. A large number of studies on XAI have been published in recent years, the majority of which discuss the specifics of XAI. This work therefore aims to formalize the existing XAI literature from a high-level perspective in terms of (1) benefits, (2) requirements, (3) challenges, and (4) the underlying building blocks involved. Additionally, this paper presents a case study of XAI in the medical image analysis domain, followed by future work and research directions both within the field and from a general perspective, all serving as a foundation and reference point to make the topic more accessible to novices.

Subjects: XAI; Explainable Artificial Intelligence; Medical Image Analysis; review
To reference this document use: http://resolver.tudelft.nl/uuid:3c283c1c-d2bd-4062-845a-490efc8ff4d4
Part of collection: Student theses
Document type: bachelor thesis
Rights: © 2022 Kaijen Lee Kaijen
Files: Kaijen_LEE_CSE3000_XAI_Re ... _Final.pdf (PDF, 1.11 MB)