Explainable AI: Transparency and Trust in AI Systems

August 20, 2024 – Reading time: 2 minutes

In today’s tech-driven landscape, Artificial Intelligence (AI) is omnipresent and plays a pivotal role in shaping our daily lives. AI underpins many technologies across application areas, including critical sectors such as autonomous driving, healthcare, and manufacturing, where safety, ethics, and trustworthiness are essential for delivering reliable outcomes. The recent introduction of the EU AI Act by the European Commission underscores the urgency of scrutinizing such high-risk AI systems. Consequently, there is a pressing need to examine high-risk domains more closely in order to deepen our understanding and foster transparency within AI systems.

Explainable Artificial Intelligence (XAI) emerges as a pivotal tool in addressing these needs: it is specifically designed to facilitate transparency, make AI decisions more understandable, and foster trust in AI utilization. Within the KARLI project, we have investigated various XAI methods for safety-relevant functions. KARLI is a funded project focused on developing adaptive, responsive, and level-compliant interactions in the vehicle of the future, with the aim of advancing safety-relevant assistance systems for automated vehicles.

However, the variety of available techniques makes it challenging to determine the most suitable explanation method for a specific use case. Meeting this challenge requires guidelines for selecting and evaluating explanation methods, tailored to the needs of specific contexts and stakeholders. Such guidelines can contribute significantly to the development of more responsible and accountable AI systems. In the ever-evolving landscape of AI technologies, ensuring transparency and trustworthiness is not only aligned with regulatory frameworks like the EU AI Act but also a foundational element for the responsible deployment of high-risk AI systems.
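To give a flavor of what such explanation methods do, here is a minimal, self-contained sketch of one common model-agnostic technique, feature ablation: a feature's importance is measured as the increase in prediction error when that feature is neutralized (here, replaced by its column mean). The toy dataset and the "fitted" model below are purely illustrative and not taken from the KARLI project.

```python
# Minimal sketch of a model-agnostic explanation technique (feature
# ablation). All data and the toy model are illustrative assumptions.

def mean(xs):
    return sum(xs) / len(xs)

# Toy dataset: each row is (feature_0, feature_1); the target depends
# almost entirely on feature_0.
X = [(1.0, 5.0), (2.0, 3.0), (3.0, 8.0), (4.0, 1.0)]
y = [2.1, 4.0, 6.2, 7.9]

def model(x):
    # Hypothetical fitted model: y ≈ 2 * feature_0 + 0.01 * feature_1
    return 2.0 * x[0] + 0.01 * x[1]

def mse(X, y):
    # Mean squared error of the model on dataset (X, y).
    return mean([(model(x) - t) ** 2 for x, t in zip(X, y)])

def ablation_importance(X, y, feature):
    """Error increase when `feature` is replaced by its column mean."""
    col_mean = mean([x[feature] for x in X])
    X_ablated = [tuple(col_mean if i == feature else v
                       for i, v in enumerate(x)) for x in X]
    return mse(X_ablated, y) - mse(X, y)

importances = [ablation_importance(X, y, f) for f in range(2)]
# feature_0 dominates the model, so ablating it degrades the error
# far more than ablating feature_1.
```

Methods like SHAP or LIME follow the same basic idea of probing a model's behavior under controlled input changes, but with more principled attribution schemes; which method is appropriate depends on the use case, which is exactly why selection guidelines are needed.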

For more information, download our whitepaper on Explainable AI below.

Download Explainable Artificial Intelligence Whitepaper

Contact person

  • Dr. Marc Großerüschkamp

    Head of Software & Data Technologies

Authors

  • Trang Nguyen

    Technology Consultant

  • Dr. Stephan Kremser

    Technology Consultant
