Definition – Explainable AI

AI Definitions

Explainable AI refers to the development and deployment of
artificial intelligence systems that are ethical, reliable, and
aligned with the values and interests of the society they serve.
Such systems should adhere to principles and standards that
ensure they are transparent, explainable, fair, robust, respectful
of privacy, safe, secure, and accountable.

Explainable AI systems should also allow human users to
comprehend and trust the outputs produced by machine
learning algorithms, and to understand how and why an
algorithm arrived at a specific decision or prediction.
Explainable AI is important for debugging and improving model
performance, meeting regulatory and ethical requirements,
fostering end-user trust and confidence, and enabling human-AI
collaboration. Explainable AI can be achieved using various
methods and techniques, such as feature attributions, example-based
explanations, model analysis, and interactive visualization tools.
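
As a minimal sketch of one such technique, the example below computes
permutation-based feature attributions with scikit-learn. The dataset,
model choice, and top-five reporting are illustrative assumptions, not
part of any particular explainable AI standard.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Train a model whose behaviour we want to explain.
    # (Dataset and model are illustrative choices.)
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Permutation importance: shuffle one feature at a time on held-out
    # data and measure how much the model's score drops; larger drops
    # indicate features the model relies on more heavily.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)

    # Report the five most influential features as a simple global explanation.
    for idx in result.importances_mean.argsort()[::-1][:5]:
        print(f"{X.columns[idx]}: {result.importances_mean[idx]:.4f} "
              f"+/- {result.importances_std[idx]:.4f}")

Permutation importance is a model-agnostic, global form of feature
attribution; local methods such as SHAP or LIME pursue the same goal
at the level of a single prediction.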