
Seminar: Ethics of AI - Topics

A significant subset of the articles is available only from within the university network. To download them, you must use a university computer or a VPN connection.

Area A: Meta-Ethics

A1: Machines as artificial moral agents

Supervisor: Prof. Dr. Oliver Müller

  • Floridi, L., Sanders, J.W.: On the Morality of Artificial Agents. Minds and Machines 14 (2004), 349--379. (PDF)

A2: Machines as artificial moral patients

Supervisor: Prof. Dr. Felix Lindner (Junior Professor)

  • Gunkel, D.J.: The Machine Question. Critical Perspectives on AI, Robots, and Ethics. Cambridge, MA 2017, 93--157. (PDF)

A3: Are there robot rights?

Supervisor: Prof. Dr. Oliver Müller

  • Coeckelbergh, M.: Robot rights? Towards a social-relational justification of moral consideration. Ethics and Information Technology 12:209--221 (2010) (PDF)

A4: Should robots have no rights?

Supervisor: Prof. Dr. Oliver Müller

  • Bryson, J.J.: Robots Should be Slaves. Close Engagements with Artificial Companions: Key social, psychological, ethical and design issues, Chapter 11. John Benjamins Publishing Company, 2010, pp. 63--74. (PDF)

Area B: Descriptive Ethics

B1: Dataset diversity in emotion detection

Supervisor: Dr. Robert Mattmüller

  • Bryant, D., Howard, A.: A comparative analysis of emotion-detecting AI systems with respect to algorithm performance and dataset diversity. In: The Second AAAI / ACM Annual Conference on AI, Ethics, and Society (2019) (PDF)

B2: Cultural differences in the perception of a domestic robot's behavior

Supervisor: Prof. Dr. Oliver Müller

  • Li, H., Milani, S., Krishnamoorthy, V., Lewis, M., Sycara, K.: Perceptions of domestic robots' normative behavior across cultures. In: The Second AAAI / ACM Annual Conference on AI, Ethics, and Society (2019) (PDF)

B3: Cultural differences in the perception of an autonomous car's behavior

Supervisor: Prof. Dr. Bernhard Nebel

  • Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F., Rahwan, I.: The moral machine experiment. Nature 563:59--65 (2018) (PDF)

Area C: Cognitive Science, AI and Ethics

C1: Descriptive aspects of human moral reasoning and decision-making in light of cognitive theories

Supervisor: Julia Wertheim

  • Bucciarelli, M., Khemlani, S., & Johnson-Laird, P. N. (2008). The psychology of moral reasoning. Judgment and Decision Making, 3(2), 121--139. (PDF)
  • Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2014). The role of analytic thinking in moral judgements and values. Thinking & Reasoning, 20(2), 188-214. (PDF)

C2: Influence of emotions and value judgements on moral decision-making

Supervisor: Prof. Dr. Oliver Müller

  • Gubbins, E., & Byrne, R. M. (2014). Dual processes of emotion and reason in judgments about moral dilemmas. Thinking & Reasoning, 20(2), 245--268. (PDF)
  • Manfrinati, A., Lotto, L., Sarlo, M., Palomba, D., & Rumiati, R. (2013). Moral dilemmas and moral principles: When emotion and cognition unite. Cognition & Emotion, 27(7), 1276--1291. (PDF)

C3: How people apply different moral norms to human and robot agents

Supervisor: Barbara Kuhnert

  • Malle, B. F., Scheutz, M., Arnold, T., Voiklis, J., & Cusimano, C. (2015). Sacrifice one for the good of many?: People apply different moral norms to human and robot agents. In Proceedings of the tenth annual ACM/IEEE international conference on human-robot interaction (pp. 117--124). ACM. (PDF)
  • Malle, B. F., Scheutz, M., Forlizzi, J., & Voiklis, J. (2016). Which robot am I thinking about?: The impact of action and appearance on people's evaluations of a moral robot. In The Eleventh ACM/IEEE International Conference on Human Robot Interaction (pp. 125--132). IEEE Press. (PDF)

Area D: Machine Ethics

D1: What is machine ethics?

Supervisor: Dr. Thorsten Engesser

  • Anderson, M., Anderson, S. L.: Machine ethics: Creating an ethical intelligent agent. AI Magazine 28(4):15--26 (2007) (PDF)

D2: Programming and implementing ethics in machines

Supervisor: Prof. Dr. Oliver Müller

  • Wallach, W., Allen, C.: Moral Machines. Teaching Robots Right from Wrong. Oxford 2009, 13--36. (PDF)
  • Allen, C., Wallach, W., Smit, I.: Why Machine Ethics? IEEE Intelligent Systems 21(4):12--17, 2006. (PDF)

D3: Embedding ethics in a robot architecture

Supervisor: Prof. Dr. Oliver Müller

  • Arkin, R.: Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture. Technical Report GIT-GVU-07-11, 2007 (PDF)

D4: Utilitarian machines

Supervisor: Dr. David Speck

  • Cloos, C.: The Utilibot Project. An autonomous mobile robot based on utilitarianism. American Association for Artificial Intelligence 2005. (PDF)

D5: Kantian machines

Supervisor: Dr. Tim Schulte

  • Powers, T.M.: Prospects for a Kantian Machine. In: Anderson, M., Anderson, S.L. (eds.): Machine ethics. Cambridge 2011, 464--475. (PDF)

Area E: Core Concepts in AI Ethics

E1: Responsibility

Supervisor: Prof. Dr. Oliver Müller

  • Gunkel, D. J.: Mind the gap: responsible robotics and the problem of responsibility. Ethics and Information Technology 1--14 (2017) (PDF)

E2: Trust

Supervisor: Dr. Tim Schulte

  • Coeckelbergh, M.: Can we trust robots? Ethics and Information Technology 14 (2012), 53--60. (PDF)

Area F: Applied AI Ethics

F1: User autonomy

Supervisor: Prof. Dr. Oliver Müller

  • Susser, D.: Invisible influence: Artificial intelligence and the ethics of adaptive choice architectures. In: The Second AAAI / ACM Annual Conference on AI, Ethics, and Society (2019) (PDF)

F2: Biases in machine learning

Supervisor: Dr. Robert Mattmüller

  • Benthall, S., Haynes, B. D.: Racial categories in machine learning. In: Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 289--298 (2019) (PDF)

F3: Autonomous weapons systems

Supervisor: Prof. Dr. Bernhard Nebel

  • Lim, D.: Killer robots and human dignity. In: The Second AAAI / ACM Annual Conference on AI, Ethics, and Society (2019) (PDF)

F4: Autonomous cars

Supervisor: Prof. Dr. Oliver Müller

  • Bundesministerium für Verkehr und digitale Infrastruktur: Bericht der Ethik-Kommission Automatisiertes und vernetztes Fahren. Berlin 2017. (PDF)

Area G: Outlook

G1: Explainable AI

Supervisor: Prof. Dr. Oliver Müller

  • Mittelstadt, B., Russell, C., Wachter, S.: Explaining explanations in AI. In: Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT*19), pp. 279--288 (2019) (PDF)