Trustworthy AI for real-world decision making

Enhancing AI transparency with Explainable AI and Visual Analytics

24 April 2025

Vidya Prasad's research helps make powerful AI systems more understandable, empowering people to use them safely and responsibly in everyday life.

As AI becomes more common in fields such as medicine and creative work, concerns are growing about how much we can trust these systems. Many AI models are difficult to understand and are often described as operating like “black boxes.” PhD researcher Vidya Prasad addressed this issue by combining Explainable AI and Visual Analytics to reveal how complex image generation models make decisions. She defended her thesis on Wednesday, April 23.

Understanding complex AI systems

With the growing use of AI in vital fields like healthcare and the arts, understanding how these systems arrive at their decisions is increasingly important.

Models that generate or modify images are especially challenging, as their results are harder to interpret than simpler outcomes, such as identifying a specific object.

When the decision process behind an AI’s output is unclear, the system becomes harder to trust, especially when the technology is used in areas with a strong impact on human lives.

Bridging Explainable AI and Visual Analytics

Prasad’s research focused on making these advanced systems more understandable. She used Explainable AI to break down how decisions were made and paired it with Visual Analytics, which offers interactive tools for exploring large and complex datasets.

By combining these two approaches, she created a way for researchers to explore the inner workings of AI and uncover the patterns and signals that influenced how decisions were made.

Addressing the challenge of image generation models

Most tools that explain AI behavior are built for simpler tasks like object recognition, where the outcome is often a clear yes or no.

Prasad focused on a more difficult problem: understanding systems that generate entire images.

These include outputs such as medical scans or artistic creations, where the result is more fluid and detailed. Her work required more advanced tools that could explain decisions at the level of individual pixels.
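To give a sense of what a pixel-level explanation looks like in practice, the sketch below computes a simple gradient-based saliency map, a generic attribution technique from the Explainable AI literature. It is purely illustrative: the model, input, and method are placeholders for this article and are not taken from Prasad’s thesis.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)  # reproducible dummy example

    # Placeholder model: any differentiable image model would work here.
    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
    )
    model.eval()

    # Dummy 64x64 RGB image; requires_grad lets gradients reach the pixels.
    image = torch.rand(1, 3, 64, 64, requires_grad=True)

    score = model(image).sum()  # scalar summary of the model's output
    score.backward()            # backpropagate from the output to the pixels

    # Saliency: gradient magnitude per pixel, taking the max over channels.
    saliency = image.grad.abs().max(dim=1).values.squeeze(0)
    print(saliency.shape)  # torch.Size([64, 64]); larger = more influential

For full image generation models, attribution is considerably harder than in this toy setup, which is precisely the gap Prasad’s work addresses.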

Building trust in AI

Prasad’s research provided valuable resources for scientists working with deep learning. These tools allow experts to examine, test, and improve AI systems with greater confidence.

By applying her methods to fields like generative art and medical imaging, she showed how transparency can lead to stronger and more trustworthy AI.

Her contributions offer real progress in ensuring that AI can be used responsibly in fields where accuracy and trust are essential.

PhD researcher Vidya Prasad. Photo: Vincent van den Hoogen 

Paving the way for transparent AI

Vidya Prasad’s research sets the stage for a future where AI models are not only powerful, but also transparent and understandable.

By improving how AI systems are interpreted and analyzed, her work helps ensure that they can be trusted in life-critical applications, leading to safer and more reliable use of AI across industries.

  • Supervisors

    Anna Vilanova, Nicola Pezzotti (external)

Written by

Danai Bouri
(Communications Advisor M&CS)
