Explainable AI: A Review of Machine Learning Interpretability Methods
Authors: Pantelis Linardatos, Vasilis Papastefanopoulos, Sotiris Kotsiantis
Affiliation: Department of Mathematics, University of Patras, 26504 Patras, Greece
Abstract: Recent advances in artificial intelligence (AI) have led to its widespread industrial adoption, with machine learning systems demonstrating superhuman performance in a significant number of tasks. However, this surge in performance has often been achieved through increased model complexity, turning such systems into "black box" approaches and causing uncertainty regarding the way they operate and, ultimately, the way they come to decisions. This ambiguity has made it problematic for machine learning systems to be adopted in sensitive yet critical domains, such as healthcare, where their value could be immense. As a result, scientific interest in the field of Explainable Artificial Intelligence (XAI), which is concerned with the development of new methods that explain and interpret machine learning models, has been strongly reignited in recent years. This study focuses on machine learning interpretability methods; more specifically, a literature review and taxonomy of these methods are presented, along with links to their programming implementations, in the hope that this survey will serve as a reference point for both theorists and practitioners.
Keywords: XAI; machine learning; explainability; interpretability; fairness; sensitivity; black-box
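
To make the notion of a post-hoc, model-agnostic interpretability method concrete, the sketch below shows permutation feature importance applied to an otherwise opaque ensemble model, using scikit-learn. This is a generic illustration under assumed choices of dataset and model, not an example taken from the paper or its linked implementations.

```python
# A minimal sketch of one model-agnostic interpretability method:
# permutation feature importance. Dataset and model choices here are
# illustrative assumptions, not drawn from the surveyed paper.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Fit a "black-box" model on a training split.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time on held-out data and
# measure how much the model's score drops; larger drops indicate features
# the model relies on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the most influential features by mean importance.
order = result.importances_mean.argsort()[::-1]
for idx in order[:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.4f} "
          f"+/- {result.importances_std[idx]:.4f}")
```

Because it only requires query access to the fitted model, this kind of method can be applied to any predictor regardless of its internal structure, which is why such techniques feature prominently in taxonomies of interpretability methods.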