(By Jen Schellinck)
At our lab meeting this week, Steve showed us a preliminary version of his thesis presentation on moral machines, titled Towards an Artificial Moral Agent: One Step at a Time.
Some Preliminary Thoughts
The Terminator, Ex Machina – these movies might give us some inspiration, but let's take this further. Issues raised by these movies include the relationship between consciousness and morality, and whether each is necessary or merely optional in A.I. Are the two conflated in these movies? Are they both necessary? Kate's example: the self-driving car, the old person and the puddle. Does the car need consciousness to (considerately) avoid splashing the old person while driving through the puddle? Rob's remark: people tend to distinguish between emotion and cognition – but are they really separate?
Steve’s Research
Steve's goal is to develop a framework for morally acceptable machine behavior that is also consistent with social norms. To show that his framework is more than just theoretical, he has implemented a basic moral agent that knows, for example, that it should not kill in situations like the trolley problem. The framework is rule-based and motivated by Hume's conceptualization of ethics, with the addition of a compassion component. A key aspect of the framework, as implemented in Java, is the ongoing refactoring of the code that handles a particular ethical problem until, ultimately, it generalizes to a host of related ethical problems (a minimal sketch of the rule-based idea follows below).
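To make the rule-based idea concrete, here is a minimal Java sketch of how such an agent might work. This is not Steve's actual implementation: the class names (MoralAgent, DoNotKill), the Action representation, and the suffering-minimizing "compassion" tie-breaker are all hypothetical illustrations of the general approach.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/** A candidate action the agent could take in a scenario (hypothetical representation). */
record Action(String name, int livesLost, double sufferingCaused) {}

/** A moral rule that can veto candidate actions. */
interface MoralRule {
    boolean permits(Action a);
}

/** Hard rule: never choose an action that kills. */
class DoNotKill implements MoralRule {
    public boolean permits(Action a) { return a.livesLost() == 0; }
}

/** A toy rule-based moral agent with a compassion-style component. */
class MoralAgent {
    private final List<MoralRule> rules = new ArrayList<>();

    void addRule(MoralRule r) { rules.add(r); }

    /** Among rule-permitted actions, prefer the one causing the least suffering. */
    Action choose(List<Action> candidates) {
        return candidates.stream()
                .filter(a -> rules.stream().allMatch(r -> r.permits(a)))
                .min(Comparator.comparingDouble(Action::sufferingCaused))
                // If every action is vetoed (a genuine trolley-style dilemma),
                // fall back to minimizing lives lost, then suffering.
                .orElseGet(() -> candidates.stream()
                        .min(Comparator.comparingInt(Action::livesLost)
                                .thenComparingDouble(Action::sufferingCaused))
                        .orElseThrow());
    }
}

public class TrolleyDemo {
    public static void main(String[] args) {
        MoralAgent agent = new MoralAgent();
        agent.addRule(new DoNotKill());

        // Trolley-style scenario: both options are bad, one is worse.
        List<Action> options = List.of(
                new Action("do nothing", 5, 5.0),
                new Action("pull the lever", 1, 1.0));

        System.out.println("Chosen: " + agent.choose(options).name());
    }
}
```

Run on this trolley-style scenario, the hard rule vetoes both options (each costs lives), so the compassion-style fallback picks the lesser harm and prints "Chosen: pull the lever". Refactoring in Steve's sense would then mean reworking rules like DoNotKill so the same agent handles a growing family of related dilemmas.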
Steve’s Motivation
People spend a great deal of time developing, for example, the sensory abilities of A.I.s, and then five minutes worrying about the morality (oh yeah, we should worry about that too!). These days, what might be called 'proto-A.I.s' abound – consider, for example, the growing use of automated personal assistants (e.g. Siri) and autonomous vehicles. This type of technology raises many immediate and long-term ethical issues. For example: all of this data is going off to the cloud. What are the ethics of this? Right now there are few ethical boundaries imposed on these and other types of automation. Resolving the relevant issues in this area will require work at the intersection of Cognitive Psychology, Cognitive Modeling and Philosophy.
Despite its growing relevance, relatively little research has been carried out on implementing moral robots. But see: Moral Machines, Machine Ethics, Robot Ethics. See also the NAO robot programmed to be a nursing aid (paper: Be Nice Robot). Once again, Asimov's Laws of Robotics come into play.
Finding ways to develop morality in A.I.s, and developing methods and frameworks that let them grow, adapt and generalize as they encounter novel situations, will be a key part of this ongoing research.
Some Other Lab Meeting Tidbits
- Finding names for your A.I.s – it’s fun!
- Real time decision making with the trolley problem – it’s a little different when you can’t think about it forever!
- Can you define consciousness in 5 words or less?