International Journal of Advance Computational Engineering and Networking (IJACEN)

  Journal Paper


Paper Title : Illuminating The Black Box: Explainable AI (XAI)

Author : Ansh Tandon, Rajeel Ansari, Chahat Tandon, Rohan Shah, Ashok Saranya

Article Citation : Ansh Tandon, Rajeel Ansari, Chahat Tandon, Rohan Shah, Ashok Saranya (2024), "Illuminating The Black Box: Explainable AI (XAI)", International Journal of Advance Computational Engineering and Networking (IJACEN), pp. 21-27, Volume-12, Issue-3

Abstract : In recent years, the field of artificial intelligence (AI) has witnessed unprecedented growth, leading to significant advancements in sectors such as healthcare, finance, and autonomous systems. However, this growth has also brought to light the complexity and opacity of AI models, especially deep learning algorithms, which often function as 'black boxes.' Explainable Artificial Intelligence (XAI) has emerged as a crucial subfield, aiming to make AI decisions transparent, understandable, and trustworthy for human users. This paper presents a comprehensive survey of the current state of XAI. It begins by exploring the fundamental concepts and methodologies underpinning XAI, including feature attribution, model visualization, and local and global explanations. The paper then delves into domain-specific applications of XAI, highlighting how explainability is being integrated into areas such as healthcare diagnostics, financial decision-making, and legal systems. Furthermore, the survey addresses the ethical implications and challenges of implementing XAI, such as balancing transparency with model complexity and maintaining privacy and security. In the latter part, the paper forecasts future trends and potential avenues in XAI research, including the development of standardized evaluation metrics for explanations, the integration of causal inference for more insightful explanations, the rise of user-centric explanation interfaces, and the potential regulatory landscape shaping the adoption of XAI. By providing a holistic view of current achievements and potential future directions, this paper aims to guide researchers, practitioners, and policymakers in the evolving landscape of explainable AI.

Keywords : AI Transparency, Machine Learning Interpretability, Explainable Artificial Intelligence (XAI)
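The abstract names feature attribution as one of the core XAI methodologies surveyed. As an illustration only (not a method from this paper), the sketch below shows one common attribution technique, permutation importance: shuffle one input feature at a time and measure how much the model's error grows. The toy `model` function and all names here are hypothetical stand-ins for a trained black-box model.

```python
import random

# Hypothetical "black-box" model: output depends strongly on x0,
# weakly on x1, and not at all on x2. In practice this would be a
# trained model whose internals we cannot inspect.
def model(row):
    x0, x1, x2 = row
    return 3.0 * x0 + 0.5 * x1 + 0.0 * x2

def mse(y_true, y_pred):
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Attribute importance to each feature by shuffling its column
    and measuring the resulting increase in prediction error."""
    rng = random.Random(seed)
    base_error = mse(y, [predict(r) for r in X])
    importances = []
    for j in range(len(X[0])):
        deltas = []
        for _ in range(n_repeats):
            col = [r[j] for r in X]
            rng.shuffle(col)  # break the link between feature j and y
            Xp = [r[:j] + [col[i]] + r[j + 1:] for i, r in enumerate(X)]
            deltas.append(mse(y, [predict(r) for r in Xp]) - base_error)
        importances.append(sum(deltas) / n_repeats)
    return importances

rng = random.Random(42)
X = [[rng.random() for _ in range(3)] for _ in range(200)]
y = [model(r) for r in X]
imp = permutation_importance(model, X, y)
print(imp)  # x0 dominates; x2 contributes nothing, so its score is ~0
```

This is a global explanation (one score per feature over the whole dataset); local methods such as LIME or SHAP instead attribute a single prediction, which is the local/global distinction the abstract refers to.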

Type : Research paper

Published : Volume-12, Issue-3


DOIONLINE NO - IJACEN-IRAJ-DOIONLINE-20639

Copyright: © Institute of Research and Journals

Published on : 2024-06-26