Developing moral machines: Hybrid implementation of ethical theories
By Beate Sohrmann
1971 · 1993: Diploma in computer science · 2008: B.Sc. in psychology · 2021: M.A. in philosophy · Working as a software engineer since 1993
Intelligent and especially autonomous machines that increasingly interact with humans must have some degree of independent moral decision-making capability. If humans are to trust and accept sophisticated machines as interaction partners, they must be convinced that the machines share their essential values and concerns, or at least act accordingly.
But how can ethical theories be implemented in machines? And what moral standards should they be based on?
For moral machines to be accepted, it is important that they behave similarly to humans, because this increases trust in the moral machine. Moral psychology distinguishes models in which the basis of moral action is rationalistic or intuitionistic, and models that combine the two. The approaches of Daniel Kahneman and of Dreyfus and Dreyfus both assume a circular bottom-up/top-down structure of experiential learning and the use of rules. Ethical theories can be divided into generalist, particularist, and coherentist theories. The coherentist approach, a combination of generalist and particularist theories, has the same circular structure found in human moral action. Among the implementation approaches in AI, the hybrid approach likewise reflects this circular structure by combining logic-based top-down approaches with sub-symbolic bottom-up approaches.
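The hybrid architecture described above might be sketched in a minimal, purely illustrative way: an explicit rule layer (top-down) vetoes impermissible actions, and a scoring function standing in for a learned sub-symbolic component (bottom-up) ranks the rest. All names, rules, and weights here are hypothetical assumptions for illustration, not part of the poster's proposal.

```python
# Hypothetical sketch of a hybrid AMA decision loop.
# Top-down: explicit norms as hard constraints.
# Bottom-up: a toy weighted-feature score standing in for a
# learned "moral intuition" model. Rules and weights are illustrative.

RULES = [
    lambda a: not a["harms_human"],  # norm: never harm a human
    lambda a: a["consent"],          # norm: require consent
]

def intuition_score(action):
    # Stand-in for a trained sub-symbolic model: higher = more acceptable.
    return 0.7 * action["benefit"] - 0.3 * action["risk"]

def choose(actions):
    # Filter by the top-down rules, then rank by the bottom-up score.
    # (In a coherentist loop, rules and scorer would be revised in
    # light of each other over time; that step is omitted here.)
    permitted = [a for a in actions if all(rule(a) for rule in RULES)]
    if not permitted:
        return None  # no morally permissible option
    return max(permitted, key=intuition_score)

candidates = [
    {"name": "assist", "harms_human": False, "consent": True,  "benefit": 0.9, "risk": 0.2},
    {"name": "coerce", "harms_human": False, "consent": False, "benefit": 1.0, "risk": 0.1},
    {"name": "ignore", "harms_human": False, "consent": True,  "benefit": 0.1, "risk": 0.0},
]
best = choose(candidates)
```

In this toy run, "coerce" is vetoed by the consent rule despite its high score, and "assist" wins the ranking among the permitted options, showing how the two layers interact.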
This circular structure is thus found in all three areas considered here: in the models of moral psychology, in the approaches of ethical theories, and in the implementation approaches of AI. Because of this isomorphism, the hybrid approach to implementing a coherentist ethical theory in an artificial moral agent (AMA) is a viable method for creating a convincing simulacrum of the human capacity for abstract moral reasoning.