Felix Lindner: Publications
This list contains my publications since 2016. A complete list of my publications is available here.
2020
-
Felix Lindner, Robert Mattmüller and Bernhard Nebel.
Evaluation of the Moral Permissibility of Action Plans.
Artificial Intelligence 287. 2020.
(PDF)
Research in classical planning has so far been mainly concerned with generating a satisficing or an optimal plan. However, if such systems are used to make decisions that are relevant to humans, one should also consider the ethical consequences that generated plans can have. Traditionally, ethical principles are formulated in an action-based manner, allowing one to judge the execution of a single action. We show how such a judgment can be generalized to plans. Further, we study the computational complexity of making ethical judgments about plans.
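The abstract's central move, lifting an action-based permissibility check to whole plans, can be sketched as follows. The toy model and the do-no-harm-style principle are illustrative assumptions, not the formalization from the paper:

```python
# Illustrative sketch: lifting an action-based permissibility check to plans.
# The Action model and the do-no-harm-style principle are toy assumptions,
# not the formalization from the paper.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    moral_utility: int  # toy stand-in for an ethical evaluation of the effects

def permissible_action(action: Action) -> bool:
    # Action-based principle: permissible iff the action does no net harm.
    return action.moral_utility >= 0

def permissible_plan(plan: list[Action]) -> bool:
    # One natural lifting: a plan is permissible iff every step passes the
    # action-based check. Other liftings (e.g., judging only the plan's
    # cumulative effects) are possible and not equivalent.
    return all(permissible_action(a) for a in plan)

print(permissible_plan([Action("move", 0), Action("deliver", 3)]))  # True
```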
2019
-
Felix Lindner, Barbara Kuhnert, Laura Wächter and Katrin Möllney.
Perception of Creative Responses to Moral Dilemmas by a Conversational Robot.
In
Proceedings of the International Conference on Social Robotics (ICSR 2019).
2019.
(PDF)
-
Felix Lindner and Katrin Möllney.
Extracting Reasons for Moral Judgments under Various Ethical Principles.
In
Proceedings of KI 2019.
2019.
(PDF)
-
Hanna Stellmach and Felix Lindner.
Perception of an Uncertain Ethical Reasoning Robot.
Journal of Interactive Media 18(1). 2019.
-
Felix Lindner, Robert Mattmüller and Bernhard Nebel.
Moral Permissibility of Action Plans.
In
Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19).
2019.
(PDF)
Research in classical planning has so far been mainly concerned with generating a satisficing or an optimal plan. However, if such systems are used to make decisions that are relevant to humans, one should also consider the ethical consequences that generated plans can have. We address this challenge by analyzing to what extent it is possible to generalize existing approaches of machine ethics to automated planning systems. Traditionally, ethical principles are formulated in an action-based manner, allowing one to judge the execution of a single action. We show how such a judgment can be generalized to plans. Further, we study the computational complexity of making ethical judgments about plans.
2018
-
Laura Wächter and Felix Lindner.
An Explorative Comparison of Blame Attributions to Companion Robots Across Various Moral Dilemmas.
In
Proceedings of The 6th International Conference on Human-Agent Interaction (HAI 2018).
2018.
-
Barbara Kuhnert, Felix Lindner, Martin Mose Bentzen and Marco Ragni.
Causal Structure of Moral Dilemmas Predicts Perceived Difficulty of Making a Decision.
In
Proceedings of KogWis 2018 (Extended Abstract).
2018.
-
Martin Mose Bentzen, Felix Lindner, Louise Dennis and Michael Fisher.
Moral Permissibility of Actions in Smart Home Systems.
In
Proceedings of the FLoC 2018 Workshop on Robots, Morality, and Trust through the Verification Lens (Extended Abstract).
2018.
-
Hanna Stellmach and Felix Lindner.
Perception of an Uncertain Ethical Reasoning Robot: A Pilot Study.
In
Proceedings of Mensch und Computer 2018.
2018.
The study investigates the effect of uncertainty expressed by a robot facing a moral dilemma.
Participants (N = 80) were shown a video of a robot explaining a moral dilemma and the
decision it makes. The robot either expressed certainty or uncertainty about its decision.
Participants rated how much blame the robot deserves for its action, the moral wrongness
of the action, and their impression of the robot in terms of four scale dimensions measuring
social perception. The results suggest that participants who were not familiar with the moral
dilemma assign more blame to the robot for the same action when it expresses uncertainty,
while expressed uncertainty has less effect on moral wrongness judgments. There was no
significant effect of expressed uncertainty on participants’ impression of the robot. We discuss
implications of this result for the design of social robots.
-
Felix Lindner and Martin Mose Bentzen.
A Formalization of Kant's Second Formulation of the Categorical Imperative.
In
Proceedings of the 14th International Conference on Deontic Logic and Normative Systems (DEON).
2018.
We present a formalization and computational implementation of the second formulation of Kant's categorical imperative. This ethical principle requires an agent to never treat someone merely as a means but always also as an end. Here we interpret this principle in terms of how persons are causally affected by actions. We introduce Kantian causal agency models in which moral patients, actions, goals, and causal influence are represented, and we show how to formalize several readings of Kant's categorical imperative that correspond to Kant's concept of strict and wide duties towards oneself and others. Stricter versions handle cases where an action directly causally affects oneself or others, whereas the wide version maximizes the number of persons being treated as an end. We discuss limitations of our formalization by pointing to one of Kant's cases that the machinery cannot handle in a satisfying way.
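A minimal reading of the means/ends test described above might look like this. The string-based model and the beneficiaries field are simplifying assumptions for illustration, not the paper's Kantian causal agency models:

```python
# Illustrative reading of "never merely as a means, always also as an end".
# The string-based model and the 'beneficiaries' field are simplifying
# assumptions, not the paper's Kantian causal agency models.
from dataclasses import dataclass, field

@dataclass
class Goal:
    beneficiaries: set[str] = field(default_factory=set)

def treated_as_means(patient: str, affected: set[str]) -> bool:
    # Per the abstract's interpretation: a person is treated as a means
    # if the action causally affects them.
    return patient in affected

def treated_as_end(patient: str, goals: list[Goal]) -> bool:
    # A person is treated as an end if some goal of the agent benefits them.
    return any(patient in g.beneficiaries for g in goals)

def permissible(patients: set[str], affected: set[str], goals: list[Goal]) -> bool:
    # Strict reading: everyone causally affected by the action must also
    # be treated as an end.
    return all(treated_as_end(p, goals)
               for p in patients if treated_as_means(p, affected))

print(permissible({"anna", "bob"}, affected={"anna"}, goals=[Goal({"anna"})]))  # True
```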
-
Felix Lindner, Robert Mattmüller and Bernhard Nebel.
Moral Permissibility of Action Plans.
In
Proceedings of the ICAPS Workshop on EXplainable AI Planning (XAIP).
2018.
(PDF)
Research in classical planning has so far been mainly concerned with generating a satisficing or an optimal plan. However, if such systems are used to make decisions that are relevant to humans, one should also consider the ethical consequences that generated plans can have. We address this challenge by analyzing to what extent it is possible to generalize existing approaches of machine ethics to automated planning systems. Traditionally, ethical principles are formulated in an action-based manner, allowing one to judge the execution of a single action. We show how such a judgment can be generalized to plans. Further, we study the complexity of making ethical judgments about plans.
-
Glenda Hannibal and Felix Lindner.
Transdisciplinary Reflections on Social Robotics in Academia and Beyond.
In
Proceedings of Robo-Philosophy 2018.
2018.
2017
-
Felix Lindner, Martin Mose Bentzen and Bernhard Nebel.
The HERA Approach to Morally Competent Robots.
In
Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017).
2017.
(PDF)
To address the requirement for autonomous moral decision making, we introduce a software library for modeling hybrid ethical reasoning agents (short: HERA). The goal of the HERA project is to provide theoretically well-founded and practically usable logic-based machine ethics tools for implementation in robots. The novelty is that HERA implements multiple ethical principles like utilitarianism, the principle of double effect, and a Pareto-inspired principle. These principles can be used to automatically assess moral situations represented in a format we call causal agency models. We discuss how to model moral situations using our approach, and how it can cope with uncertainty about moral values. Finally, we briefly outline the architecture of our robot IMMANUEL, which implements HERA and is able to explain ethical decisions to humans.
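To make principle-parameterized assessment concrete, here is a hypothetical sketch of judging one situation under two principles named in the abstracts. The class and function names are invented for illustration and are not the API of the actual HERA library:

```python
# Hypothetical sketch of principle-parameterized moral assessment.
# Class names and API are invented for illustration; they are not the
# interface of the actual HERA library.
class Situation:
    def __init__(self, utilities: dict[str, int]):
        self.utilities = utilities  # option -> aggregated moral utility

def utilitarian_permissible(sit: Situation, option: str) -> bool:
    # Utilitarianism: permissible iff no alternative yields higher utility.
    return sit.utilities[option] == max(sit.utilities.values())

def do_no_harm_permissible(sit: Situation, option: str) -> bool:
    # Do-no-harm: permissible iff the option causes no negative utility.
    return sit.utilities[option] >= 0

sit = Situation({"warn_user": 5, "stay_silent": -2})
print(utilitarian_permissible(sit, "warn_user"))   # True
print(do_no_harm_permissible(sit, "stay_silent"))  # False
```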
-
Barbara Kuhnert, Marco Ragni and Felix Lindner.
The Gap between Human's Attitude towards Robots in General and Human's Expectation of an Ideal Everyday Life Robot.
In
Proceedings of the 2017 IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2017).
2017.
(PDF)
-
Felix Lindner, Laura Wächter and Martin Mose Bentzen.
Discussions About Lying with an Ethical Reasoning Robot.
In
Proceedings of the 2017 IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2017).
2017.
(PDF)
-
Felix Lindner and Carola Eschenbach.
An Affordance-Based Conceptual Framework for Spatial Behavior of Social Robots.
In
Raul Hakli and Johanna Seibt (Eds.),
Sociality and Normativity for Robots: Philosophical Inquiries into Human-Robot Interactions.
Springer International Publishing 2017.
-
Barbara Kuhnert, Felix Lindner, Martin Mose Bentzen and Marco Ragni.
Perceived Difficulty of Moral Dilemmas Depends on Their Causal Structure: A Formal Model and Preliminary Results.
In
Proceedings of the 39th Annual Meeting of the Cognitive Science Society (CogSci 2017).
2017.
(PDF)
-
Felix Lindner and Martin Mose Bentzen.
The Hybrid Ethical Reasoning Agent IMMANUEL.
In
Proceedings of the 2017 Conference on Human-Robot Interaction (HRI 2017), Late-Breaking Report.
2017.
(PDF)
We introduce a novel software library that supports the implementation of hybrid
ethical reasoning agents (HERA). The objective is to make moral principles available
for robot programming. At its current stage, HERA can assess the moral permissibility
of actions according to the principle of double effect, utilitarianism, and the do-no-harm
principle. We present the prototype robot IMMANUEL based on HERA. The robot will
be used to conduct research on joint moral reasoning in human-robot interaction.
2016
-
Felix Lindner.
How To Count Multiple Personal-Space Intrusions in Social Robot Navigation.
In
Proceedings of the Robo-Philosophy Conference 2016.
2016.
One aspect of social robot navigation is to avoid personal-space intrusions. Computationally, this can be achieved by
introducing social costs into the objective function of a robot's path planner. This article tackles the normative question
of how robots should aggregate social costs incurred by multiple personal-space intrusions. Of particular interest is the
question of whether numbers should count, i.e., whether a robot ought to intrude into one person's personal space in
order to avoid intruding into multiple personal spaces. This work proposes four different modes of aggregation of the
costs of intrusions into personal space, discusses some of the philosophical arguments, and presents results from a
pilot study.
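The abstract does not name the four aggregation modes, so the sketch below shows four plausible candidates for combining per-person intrusion costs into a single objective term; they are illustrative assumptions only:

```python
# Four plausible modes of aggregating per-person personal-space intrusion
# costs into one term of a path planner's objective. The paper's four modes
# are not named in the abstract; these are illustrative assumptions.
def aggregate_sum(costs: list[float]) -> float:
    # Total cost: numbers count linearly.
    return sum(costs)

def aggregate_max(costs: list[float]) -> float:
    # Worst single intrusion: numbers do not count.
    return max(costs, default=0.0)

def aggregate_count(costs: list[float]) -> float:
    # Number of people intruded upon, regardless of severity.
    return float(sum(1 for c in costs if c > 0))

def aggregate_threshold(costs: list[float], tau: float = 1.0) -> float:
    # Only intrusions above a severity threshold contribute.
    return sum(c for c in costs if c > tau)

costs = [0.4, 1.5, 2.0]  # one mild and two severe intrusions
print(aggregate_sum(costs), aggregate_max(costs),
      aggregate_count(costs), aggregate_threshold(costs))
```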
-
Felix Lindner.
A Model of a Robot's Will Based on Higher-Order Desires.
In
Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016).
2016.
Autonomous robots implement decision-making capacities on several layers of abstraction. Put in terms of desires, decision making
evaluates desires in order to eventually commit to the most rational one. Drawing on the philosophical literature on volition and agency,
this work introduces a conceptual model that enables robots to reason about which desires they want to want to realize, i.e., higher-order
desires. As a result, six jointly exhaustive and pairwise disjoint types of choices are defined. A technical evaluation shows how
to add a robot's will to its rational decision-making capacity. This guarantees that informed choices are possible even in cases where
rational decision making alone is indecisive. Further applications to modeling personality traits for human-robot interaction are discussed.
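The tie-breaking role of the robot's will can be sketched as follows; the representation of desires and the selection rule are illustrative assumptions, not the paper's six-way typology of choices:

```python
# Illustrative sketch: second-order desires break ties when rational
# evaluation alone is indecisive. The representation and the selection
# rule are assumptions, not the paper's six-way typology of choices.
from dataclasses import dataclass

@dataclass
class Desire:
    name: str
    rational_value: float  # outcome of rational evaluation
    endorsed: bool         # does the robot want to want this desire?

def choose(desires: list[Desire]) -> Desire:
    best = max(d.rational_value for d in desires)
    top = [d for d in desires if d.rational_value == best]
    if len(top) == 1:
        return top[0]  # rational evaluation alone is decisive
    # Tie: prefer a desire the robot endorses on the second order.
    endorsed = [d for d in top if d.endorsed]
    return endorsed[0] if endorsed else top[0]

print(choose([Desire("greet", 1.0, False), Desire("help", 1.0, True)]).name)  # help
```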
-
Felix Lindner.
A Social Robot's Knowledge About Territories in Public Space.
In
Proceedings of the 3rd Workshop on Public Space Human-Robot Interaction (PubRob 2016).
2016.
Human territoriality is a microsocial phenomenon that displays the strong interrelations between action and space.
Actions are spatially located, and because spaces that afford certain activities are scarce, humans claim portions of space
and restrict access to particular agents. Thereby, they create territories. In public spaces, territories are
usually short-term yet fiercely defended: e.g., the table in a restaurant, the place in a queue at the checkout, the seat on the train.
This abstract sketches a formal theory of territory. Its aim is to enable social robots to consider existing territories during decision making
and planning, both in order to avoid intruding into others' territories and to claim territories for their own benefit.
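A minimal data structure for such territory-aware decision making might look like this; the names and fields are illustrative assumptions, not the formal theory from the paper:

```python
# Minimal sketch of territory-aware decision making: represent claims on
# regions of space and filter candidate locations against them. Names and
# fields are illustrative assumptions, not the formal theory.
from dataclasses import dataclass, field

Cell = tuple[int, int]

@dataclass
class Territory:
    claimant: str      # agent holding the claim
    region: set[Cell]  # cells covered by the claim
    admitted: set[str] = field(default_factory=set)  # other agents allowed in

def may_enter(agent: str, cell: Cell, territories: list[Territory]) -> bool:
    # An agent may enter a cell unless it lies in a territory that neither
    # belongs to the agent nor admits it.
    return all(cell not in t.region or t.claimant == agent or agent in t.admitted
               for t in territories)

queue_spot = Territory(claimant="anna", region={(2, 3)})
print(may_enter("robot", (2, 3), [queue_spot]))  # False: the spot is claimed
print(may_enter("robot", (2, 4), [queue_spot]))  # True
```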