Definition – Trustworthy AI

AI Definitions

Trustworthy AI refers to the development and deployment of artificial
intelligence systems that are ethical, reliable, and aligned with the
values and interests of the society they serve. Trustworthy AI systems
should adhere to principles and standards that ensure they are transparent, explainable, fair and impartial, robust and reliable, respectful of privacy, safe and secure, and responsible and accountable.

Trustworthy AI also requires governance and regulatory compliance throughout the AI lifecycle, from ideation and design through development, deployment, and operation. It is a goal shared by many AI researchers, practitioners, policymakers, and other stakeholders who want to ensure that AI can be trusted by humans and can benefit humanity without causing harm or injustice.