Explainable AI for Decision Making (Research Seminar)

Organization of the course: see here

Course description: Explainable AI is a recent and growing subfield of machine learning (ML) that aims to bring transparency to ML models without sacrificing their predictive accuracy. This seminar will explore current research on using Explainable AI to build models whose decisions are more trustworthy. We will cover techniques for verifying existing models and for correcting flaws that users identify from explanations. Students will select a few papers from a pool of thematically relevant research papers, which they will read and present over the course of the semester.

Kick-off meeting: 26 April 2023, 2:15pm-3pm, room A7/SR 140 Seminarraum (Hinterhaus), Arnimallee 7.

Presentations:

  • Block 1: 5 July 2023, 10am-12pm, via webex (link)
    • 10:00-10:30: Arpi Hunanyan: Debugging tests
    • 10:30-11:00: Henrik Strangalies: Autonomous driving
    • 11:00-11:30: Niklas Pauli: Right for the right scientific reasons
       
  • Block 2: 5 July 2023, 2pm-4pm, SR 140, Arnimallee 7
    • 14:00-14:30: Weihang Li: Attention is not explanation
    • 14:30-15:00: Oussama Bouanani: Unmasking Clever Hans
    • 15:00-15:30: Michael Migacev: Explainability for GNNs
       
  • Block 3: 12 July 2023, 8:15am-10am, via webex (link)
    • 08:15-08:45: Kaan Dönmez: Unmasking Clever Hans
    • 08:45-09:15: David Knaack: Debugging tests
    • 09:15-09:45: Jonas Heinemann: Unmasking Clever Hans
       
  • Block 4: 12 July 2023, 10am-12pm, SR 119, Arnimallee 3, and via webex (link)
    • 10:00-10:30: Jan Kroissenbrunner: Right for the right reasons
    • 10:30-11:00: Georgi Lazarov: Finding and removing Clever Hans
    • 11:00-11:30: Jim Neuendorf: Preemptively pruning Clever Hans
    • 11:30-12:00: Manuel Welte: Preemptively pruning Clever Hans
       
  • Block 5: 12 July 2023, 2pm-4pm, SR 140, Arnimallee 7, and via webex (link)
    • 14:00-14:30: Leo Hauser: Towards robust explanations
    • 14:30-15:00: Mahmoud Kozae: Attention is not explanation
    • 15:00-15:30: Julian Hesse: Post hoc explanations ineffective