
SEMINAR: A Causal View on Explainability and Robustness of Neural Networks

Guest: Hichem Debbi, University of M’sila

Title: A Causal View on Explainability and Robustness of Neural Networks (CS, DSA, EE, IE)

Date/Time: November 13, 2024, 13:40

Location: FENS L030

Abstract: Deep neural networks (DNNs) have shown great success in many applications, such as image classification, natural language processing, and two-player games. DNNs can achieve high levels of precision, but they are black boxes by nature. As a result, interpreting and explaining their decisions has become crucial, especially when extending their application to critical fields.

In this presentation, we will investigate the explainability and robustness of Convolutional Neural Networks (CNNs) through the lens of causality. We will first show how causal reasoning helps deliver consistent, robust, and stable explanations for CNNs. We will then show how causal reasoning achieves promising results on related tasks, such as Weakly Supervised Object Localization (WSOL) and fine-grained similarity. We will also show that causal reasoning can serve as a helpful tool for increasing the robustness of CNNs. Finally, we will show that causal reasoning can provide good explanations for Graph Neural Networks (GNNs) as well, which helps improve their reliability and robustness.


Bio: Hichem Debbi is a Professor in the Computer Science Department at the University of M’sila, Algeria. He obtained his PhD degree in computer science from the University of M’sila in 2015. His research interests include, but are not limited to, deep learning, computer vision, and explainable Artificial Intelligence (XAI). He is interested in every aspect of the explainability, robustness, and trustworthiness of intelligent systems. His previous research on explainability included explaining and debugging probabilistic systems.