In this proseminar, we delve into the field of Interactive Intelligent Systems (IIS), which uses artificial intelligence technologies such as natural language processing. We focus on the design, implementation, and evaluation of these systems, taking into account both the AI technology employed and the user's interaction with the system. Interactive intelligent systems encompass a wide range of approaches, including AI-infused systems, recommender systems, mixed-initiative user interfaces, and intelligent agent-centered approaches.

Throughout the seminar, we adopt a human-centered perspective, emphasizing the interaction between the technical system and the human user. By exploring existing capabilities and limitations in the field of human-computer interaction, we aim to gain insights into how to improve the user experience.

At the beginning of each semester, we provide a comprehensive introduction to the specific topic for that year. Building on this foundation, students will have the opportunity to present and discuss existing approaches, methods, and implementations related to the topic. Working together, we will create a mind map that visually captures the collective understanding of the topic.

Each participant is expected to independently prepare, present, and discuss their chosen topic in the area of interactive intelligent systems with the class. These presentations will serve as a platform for sharing insights, exchanging ideas, and fostering critical thinking among peers.

Based on the results of these discussions, students will be required to produce a written scientific article that synthesizes the knowledge and insights gained from the seminar. This article will be evaluated, giving students an opportunity to demonstrate their understanding and analytical skills.

Here you can find our Code of Conduct.

Paper Selection List

# | Topic Reference | Research Method | Method Reference | Student Name
1

V. Lai, C. Chen, A. Smith-Renner, Q. V. Liao, and C. Tan, “Towards a Science of Human-AI Decision Making: An Overview of Design Space in Empirical Human-Subject Studies,” in 2023 ACM Conference on Fairness, Accountability, and Transparency, Chicago, IL, USA: ACM, June 2023, pp. 1369–1385. doi: 10.1145/3593013.3594087.

 

literature review

Evropi Stefanidi, Marit Bentvelzen, Paweł W. Woźniak, Thomas Kosch, Mikołaj P. Woźniak, Thomas Mildner, Stefan Schneegass, Heiko Müller, and Jasmin Niess. 2023. Literature Reviews in HCI: A Review of Reviews. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI '23). Association for Computing Machinery, New York, NY, USA, Article 509, 1–24. https://doi.org/10.1145/3544548.3581332

https://www.prisma-statement.org/

Wladimir Belucha
2

J.D. Zamfirescu-Pereira, Heather Wei, Amy Xiao, Kitty Gu, Grace Jung, Matthew G Lee, Bjoern Hartmann, and Qian Yang. 2023. Herding AI Cats: Lessons from Designing a Chatbot by Prompting GPT-3. In Proceedings of the 2023 ACM Designing Interactive Systems Conference (DIS '23). Association for Computing Machinery, New York, NY, USA, 2206–2220. https://doi.org/10.1145/3563657.3596138

case study

J. Lazar, J. H. Feng, and H. Hochheiser, Research Methods in Human-Computer Interaction, 2nd ed. Morgan Kaufmann, 2017. https://www.sciencedirect.com/book/9780128053904/research-methods-in-human-computer-interaction#book-description

Chapter 7 (case study)

Helia Dadkhah
3

J. Schaffer, J. O’Donovan, J. Michaelis, A. Raglin, and T. Höllerer, “I can do better than your AI: Expertise and explanations,” in Proc. ACM Int. Conf. Intell. User Interfaces, 2019, pp. 240–251.

experiment

J. Lazar, J. H. Feng, and H. Hochheiser, Research Methods in Human-Computer Interaction, 2nd ed. Morgan Kaufmann, 2017. https://www.sciencedirect.com/book/9780128053904/research-methods-in-human-computer-interaction#book-description

Chapters 2 and 3

https://www.nngroup.com/articles/attitudinal-behavioral/

Aaron Ehrlich
4

X. Wang and M. Yin, “Are explanations helpful? A comparative study of the effects of explanations in AI-assisted decision-making,” in Proc. ACM Int. Conf. Intell. User Interfaces, 2021, pp. 318–328.

experiment

J. Lazar, J. H. Feng, and H. Hochheiser, Research Methods in Human-Computer Interaction, 2nd ed. Morgan Kaufmann, 2017. https://www.sciencedirect.com/book/9780128053904/research-methods-in-human-computer-interaction#book-description

Chapters 2 and 3

https://www.nngroup.com/articles/attitudinal-behavioral/

 
5

H. Kaur, H. Nori, S. Jenkins, R. Caruana, H. Wallach, and J. Wortman Vaughan, “Interpreting interpretability: Understanding data scientists’ use of interpretability tools for machine learning,” in Proc. SIGCHI Conf. Hum. Factors Comput. Syst., 2020, pp. 1–14.

contextual inquiry and survey

J. Lazar, J. H. Feng, and H. Hochheiser, Research Methods in Human-Computer Interaction, 2nd ed. Morgan Kaufmann, 2017. https://www.sciencedirect.com/book/9780128053904/research-methods-in-human-computer-interaction#book-description

Chapter 5 (surveys)

https://www.nngroup.com/articles/open-ended-questions/

 
6

J. Rebanal, J. Combitsis, Y. Tang, and X. Chen, “XAlgo: A design probe of explaining algorithms’ internal states via question-answering,” in Proc. ACM Int. Conf. Intell. User Interfaces, 2021, pp. 329–339.

Wizard of Oz

D. Maulsby, S. Greenberg, and R. Mander. 1993. Prototyping an intelligent agent through Wizard of Oz. In Proceedings of the INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems (CHI '93). Association for Computing Machinery, New York, NY, USA, 277–284. https://doi.org/10.1145/169059.169215

https://www.nngroup.com/articles/wizard-of-oz/

 
7

M. Szymanski, M. Millecamp, and K. Verbert, “Visual, textual or hybrid: The effect of user expertise on different explanations,” in Proc. ACM Int. Conf. Intell. User Interfaces, 2021, pp. 109–119.

user study

J. Lazar, J. H. Feng, and H. Hochheiser, Research Methods in Human-Computer Interaction, 2nd ed. Morgan Kaufmann, 2017. https://www.sciencedirect.com/book/9780128053904/research-methods-in-human-computer-interaction#book-description

Chapter 10 (usability studies)

Jan Altmeyer
8

A. Balayn, N. Rikalo, C. Lofi, J. Yang, and A. Bozzon, “How can explainability methods be used to support bug identification in computer vision models?,” in Proc. SIGCHI Conf. Hum. Factors Comput. Syst., 2022, pp. 1–16.

design probe

T. Mattelmäki, Design probes. Aalto University. https://aaltodoc.aalto.fi/bitstream/handle/123456789/11829/isbn9515582121.pdf?sequence=1

https://www.interaction-design.org/literature/topics/cultural-probes

https://www.interaction-design.org/literature/topics/technology-probes

 
9

F. Hohman, A. Head, R. Caruana, R. DeLine, and S. M. Drucker, “Gamut: A design probe to understand how data scientists understand machine learning models,” in Proc. SIGCHI Conf. Hum. Factors Comput. Syst., 2019, pp. 1–13.

design probe

T. Mattelmäki, Design probes. Aalto University. https://aaltodoc.aalto.fi/bitstream/handle/123456789/11829/isbn9515582121.pdf?sequence=1

https://www.interaction-design.org/literature/topics/cultural-probes

https://www.interaction-design.org/literature/topics/technology-probes

Anna Fey Winkler
10

K. Z. Gajos and L. Mamykina, “Do people engage cognitively with AI? Impact of AI assistance on incidental learning,” in Proc. ACM Int. Conf. Intell. User Interfaces, 2022, pp. 794–806.

experiments

J. Lazar, J. H. Feng, and H. Hochheiser, Research Methods in Human-Computer Interaction, 2nd ed. Morgan Kaufmann, 2017. Accessed: Mar. 31, 2023. [Online]. Available: https://www.sciencedirect.com/book/9780128053904/research-methods-in-human-computer-interaction#book-description

Chapters 2 and 3

Deliah Duckstein
11

J. D. Zamfirescu-Pereira, R. Y. Wong, B. Hartmann, and Q. Yang, “Why Johnny Can’t Prompt: How Non-AI Experts Try (and Fail) to Design LLM Prompts,” in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23). New York, NY, USA: Association for Computing Machinery, Apr. 2023, pp. 1–21. doi: 10.1145/3544548.3581388.

 

design probe

T. Mattelmäki, Design probes. Aalto University. https://aaltodoc.aalto.fi/bitstream/handle/123456789/11829/isbn9515582121.pdf?sequence=1

https://www.interaction-design.org/literature/topics/cultural-probes

https://www.interaction-design.org/literature/topics/technology-probes

Markus Schmidt
12

Z. Buçinca, P. Lin, K. Z. Gajos, and E. L. Glassman, “Proxy tasks and subjective measures can be misleading in evaluating explainable AI systems,” in Proc. ACM Int. Conf. Intell. User Interfaces, 2020, pp. 454–464.

experiment

J. Lazar, J. H. Feng, and H. Hochheiser, Research Methods in Human-Computer Interaction, 2nd ed. Morgan Kaufmann, 2017. https://www.sciencedirect.com/book/9780128053904/research-methods-in-human-computer-interaction#book-description

Chapters 2 and 3

 

Vinzent Jörß