Towards an Artificial Moral Agent: One Step at a Time

(By Jen Schellinck)

At our lab meeting this week, Steve showed us a preliminary version of his thesis presentation on moral machines: Towards an Artificial Moral Agent: One Step at a Time.

Some Preliminary Thoughts

The Terminator, Ex Machina – these movies might give us some inspiration, but let’s take this further. Issues raised by these movies include the relationship between consciousness and morality, and whether each is necessary or optional in A.I. Are they conflated in these movies? Are they both necessary? Kate’s example: the self-driving car, the old person and the puddle. Does the car need consciousness to (considerately) avoid splashing the old person while driving through the puddle? Rob’s remark: people tend to distinguish between emotion and cognition – but are they really separate?

Steve’s Research

Steve’s goal is to develop a framework for morally acceptable machine behavior that is also consistent with social norms. To show that his framework is more than just theoretical, he has implemented a basic moral agent that knows, for example, that it should not kill in situations like the trolley problem. The framework is rule-based and motivated by Hume’s conceptualization of ethics, with the addition of a compassion component. A key aspect of the framework, as implemented in Java, is the ongoing refactoring of a particular ethical problem until, ultimately, it is generalizable to a host of related ethical problems.
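Steve’s actual implementation wasn’t shown in detail, but to give a flavour of the rule-based idea, here is a minimal sketch in Java. Everything here is hypothetical – the names (MoralRule, Action, isPermissible) and the single “do not kill” rule are illustrative stand-ins, not Steve’s API: a prioritized list of rules is checked against a proposed action, and any rule that forbids it vetoes the action.

```java
import java.util.*;

/** A minimal, hypothetical sketch of a rule-based moral agent.
 *  All names and rules are illustrative, not Steve's actual framework. */
public class TrolleySketch {

    /** A rule forbids certain actions; lower priority number = stronger rule. */
    record MoralRule(String name, int priority, java.util.function.Predicate<Action> forbids) {}

    /** A candidate action with a crude measure of its harm. */
    record Action(String description, int livesLost) {}

    /** An action is permissible only if no rule, checked in priority order, forbids it. */
    static boolean isPermissible(Action a, List<MoralRule> rules) {
        return rules.stream()
                .sorted(Comparator.comparingInt(MoralRule::priority))
                .noneMatch(r -> r.forbids().test(a));
    }

    public static void main(String[] args) {
        List<MoralRule> rules = List.of(
            new MoralRule("do-not-kill", 1, a -> a.livesLost() > 0));

        Action pull = new Action("pull lever, diverting trolley onto one person", 1);
        Action wait = new Action("do nothing", 0);

        System.out.println("pull lever permissible: " + isPermissible(pull, rules));
        System.out.println("do nothing permissible: " + isPermissible(wait, rules));
    }
}
```

The refactoring idea from the talk would then amount to gradually generalizing the rule set and the Action representation so the same agent can handle a whole family of related dilemmas, not just one hard-coded trolley case.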

Steve’s Motivation

People spend a great deal of time developing, for example, the sensory abilities of A.I.s, and then five minutes worrying about their morality (oh yeah, we should worry about that too!). These days, what might be called ‘proto-A.I.s’ abound – consider, for example, the growing use of automated personal assistants (e.g. Siri) and autonomous vehicles. This type of technology raises many immediate and long-term ethical issues. For example: all of this data is going off to the cloud – what are the ethics of that? Right now there are few ethical boundaries imposed on these and other types of automation. Resolving the relevant issues in this area will require an intersection of Cognitive Psychology, Cognitive Modeling and Philosophy.

Despite its growing relevance, not a lot of research has been carried out on the implementation of moral robots. But see: Moral Machines, Machine Ethics, Robot Ethics. See also the NAO robot programmed to be a nursing aid (Paper: Be Nice Robot). Once again, Asimov’s Laws of Robotics come into play.

Finding ways to develop morality in A.I., and developing methods and frameworks that let A.I.s grow, adapt and generalize as they encounter novel situations, will be a key part of this ongoing research.

Some Other Lab Meeting Tidbits

  • Finding names for your A.I.s – it’s fun!
  • Real time decision making with the trolley problem – it’s a little different when you can’t think about it forever!
  • Can you define consciousness in 5 words or less?

It’s All About Flow

(By Jen Schellinck)

Some tidbits from this week’s lab round-table:

  • Representations and truth value – how many definitions can we find? Can we get away without representations? Can philosophy help us by teasing apart many competing definitions of this word?
  • Musical cognition: A fascinating topic! Music is one of the few human universals. And even some non-human species are happy to bop along to a tune (although not all have a decent sense of rhythm). But what role does music really play in cognition? Is it important? Is it a side-effect of some other cognitive feature? The jury is still out.
  • Books the lab is currently reading: Against the Grain, Catching Fire
  • Coming from the discussion of books we are reading, some further thoughts on the psychological state of flow:
    • How does flow relate to Expertise and SGOMS – Is expertise a flow state?
    • Interruptions – SGOMS handles this, but flow is about *not* perceiving interruptions
    • Cognitive tunneling – can this be a part of expertise?

Reinforcement Learning For The Win?

(By Jen Schellinck)

Game-playing computers are getting better! It’s interesting that reinforcement learning (RL) is being used as a base strategy for these systems, as described in Silver et al.’s (2017) recently published paper on AlphaGo Zero, arguably the most successful Go-playing program to date. Perhaps most importantly, it was able to learn very quickly, without being exposed to large numbers of example games played by humans.
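For readers new to RL: AlphaGo Zero’s actual method combines self-play with deep networks and tree search, but the core trial-and-error idea can be shown with something far simpler. The sketch below is a generic tabular Q-learning agent on a made-up five-state “chain” world (nothing from the paper itself) – it learns, purely from a reward signal at the goal, to always move right:

```java
import java.util.Random;

/** Toy tabular Q-learning on a 5-state chain: the agent starts at state 0
 *  and earns reward 1 only for reaching state 4. A generic RL illustration,
 *  not AlphaGo Zero's algorithm. */
public class QLearningSketch {

    static final int STATES = 5, ACTIONS = 2;      // actions: 0 = left, 1 = right
    static final double ALPHA = 0.5, GAMMA = 0.9, EPSILON = 0.1;

    /** Deterministic chain dynamics, clamped at both ends. */
    static int step(int s, int a) {
        return a == 1 ? Math.min(s + 1, STATES - 1) : Math.max(s - 1, 0);
    }

    static double reward(int next) { return next == STATES - 1 ? 1.0 : 0.0; }

    public static double[][] train(int episodes, long seed) {
        double[][] q = new double[STATES][ACTIONS];
        Random rng = new Random(seed);
        for (int ep = 0; ep < episodes; ep++) {
            int s = 0;
            while (s != STATES - 1) {
                // epsilon-greedy: mostly exploit the current Q-values, sometimes explore
                int a = rng.nextDouble() < EPSILON ? rng.nextInt(ACTIONS)
                        : (q[s][1] >= q[s][0] ? 1 : 0);
                int next = step(s, a);
                // Q-learning update: nudge Q(s,a) toward r + gamma * max_a' Q(next,a')
                double target = reward(next) + GAMMA * Math.max(q[next][0], q[next][1]);
                q[s][a] += ALPHA * (target - q[s][a]);
                s = next;
            }
        }
        return q;
    }

    public static void main(String[] args) {
        double[][] q = train(500, 42);
        for (int s = 0; s < STATES - 1; s++)
            System.out.println("state " + s + ": prefers "
                    + (q[s][1] > q[s][0] ? "right" : "left"));
    }
}
```

After a few hundred episodes the learned Q-values prefer “right” in every non-goal state – the same learn-from-reward principle, minus the deep networks and self-play that make AlphaGo Zero work at scale.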

Chad pointed out that if you want to try out or learn more about reinforcement learning, you can check out the two RL platforms released by OpenAI (Gym and Universe).