Rise of the Research Blogs

(By Jen Schellinck)

Back in the day, researchers would engage in lengthy correspondence to hash out their ideas prior to publishing them formally – some of the more famous of these exchanges have been preserved as compelling examples of behind-the-scenes research collaboration.

In a discussion about our lab blog, Adam brought up the interesting point that researchers are now using blog posts as a way to both informally publish their ideas and at the same time engage other academics in public discussions about these ideas. As an intermediary between private correspondence and formal publication, blog posts are perhaps an important new force in the research world.

Research labs can also take advantage of blogs as a means of giving the world a window into their research efforts and activities. All it takes is a website, a clear set of instructions, and some encouragement. On your mark, get set… blog…

Deadline Parties and Cognitive History

(By Jen Schellinck)

We started our lab meeting this week with our deadline party, which went very well – clearly our lab is a fan of social support on the deadline front. We also got a sneak peek at Brendan’s current research project, which argues that we’ve been looking at history from a cognitively naive point of view. His main assertion is that we are at a unique point in time when we can now look back at history through a cognitive lens and obtain a novel view of the human story. In connection with this, Rob brought up an interesting reference relevant to this research, connected with Jeremy’s work, about a predecessor to Marr.

To round out the lab meeting, we also briefly chatted about the celebrity culture within science, and came up with 18 different ways to put up a whiteboard – never say our lab isn’t multi-talented 🙂

The Value of A Deadline

(By Jen Schellinck)

This week was the kick-off for our summer lab meetings, and we decided that next week we’ll be having a ‘deadline party’. Being cognitive scientists, in addition to pragmatically setting our goals for next week’s deadline party, we also contemplated the value – and cognition – behind deadlines, group work and accountability.

A few more links that came up during the discussion, which eventually turned to creativity and writing: taming the muse and gardeners and architects.

Pragmatism, Instrumentalism… Functionalism?

(By Jen Schellinck)

At this week’s lab meeting, Jeremy presented his work on pragmatism and the knowledge layer. He discussed the philosophical underpinnings and implications of the knowledge layer, as well as its tangible role in the development of autonomous systems.

Following his very interesting and information-dense talk, a discussion broke out about the relationships between pragmatism, instrumentalism and scientific realism. Rob also threw functionalism into the mix.

Although at times these discussions of ‘isms’ can seem a bit esoteric, they do have real implications for the development of new theories and techniques – in this case techniques for the development of autonomous systems – as well as the ways in which science is conducted more broadly. Related to this, a newly published essay in Aeon has a very nice discussion about the role of scientific realism and the ways in which it arguably impedes scientific progress.

Towards an Artificial Moral Agent: One Step at a Time

(By Jen Schellinck)

At our lab meeting this week, Steve showed us a preliminary version of his thesis presentation on moral machines: Towards an Artificial Moral Agent: One Step at a Time.

Some Preliminary Thoughts

The Terminator, Ex Machina – these movies might give us some inspiration, but let’s take this further. Issues raised by these movies include the relationship between consciousness and morality, and how necessary or optional the presence of each of these is in A.I. Are they conflated in these movies? Are they both necessary? Kate’s example: the self-driving car, the old person and the puddle. Does the car need consciousness to (considerately) not splash the old person while driving through the puddle? Rob’s remark: people tend to distinguish between emotion and cognition – but are they really separate?

Steve’s Research

Steve’s goal is to develop a framework for machine behavior that is both morally acceptable and consistent with social norms. To show that his framework is more than just theoretical, he has implemented a basic moral agent that knows, for example, that it should not kill in situations like the trolley problem. The framework is rule-based and motivated by Hume’s conceptualization of ethics, with the addition of a compassion component. A key aspect of the framework, as implemented in Java, is the ongoing refactoring of a particular ethical problem until, ultimately, it is generalizable to a host of related ethical problems.
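To make the idea of a rule-based moral agent with a compassion component a bit more concrete, here is a minimal sketch of what such an agent could look like. To be clear, this is not Steve’s actual implementation – all class, rule and action names here are hypothetical – just an illustration of the general shape: rules forbid actions in certain situations, and a compassion score picks among the actions that remain permitted.

```java
import java.util.*;

// Hypothetical sketch of a rule-based moral agent (not Steve's actual code).
// A rule forbids an action whenever its triggering facts all hold in the
// current situation; a compassion score breaks ties among permitted actions.
public class MoralAgentSketch {
    record Rule(String forbiddenAction, Set<String> triggeringFacts) {}

    private final List<Rule> rules = new ArrayList<>();
    private final Map<String, Integer> compassion = new HashMap<>();

    void addRule(String action, Set<String> facts) { rules.add(new Rule(action, facts)); }
    void setCompassion(String action, int score) { compassion.put(action, score); }

    // Filter out forbidden actions, then choose the most compassionate one.
    String choose(Set<String> situation, List<String> candidateActions) {
        return candidateActions.stream()
            .filter(a -> rules.stream().noneMatch(
                r -> r.forbiddenAction().equals(a)
                     && situation.containsAll(r.triggeringFacts())))
            .max(Comparator.comparingInt(a -> compassion.getOrDefault(a, 0)))
            .orElse("do-nothing");
    }

    public static void main(String[] args) {
        MoralAgentSketch agent = new MoralAgentSketch();
        // Trolley-style rule: never take an action that kills.
        agent.addRule("pull-lever", Set.of("lever-kills-person"));
        agent.setCompassion("warn-bystanders", 2);
        System.out.println(agent.choose(
            Set.of("lever-kills-person"),
            List.of("pull-lever", "warn-bystanders"))); // prints warn-bystanders
    }
}
```

Even in this toy form, you can see where the refactoring step comes in: the situation facts and rules start out problem-specific, and generalizing the agent means reworking them until they cover a family of related ethical problems.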

Steve’s Motivation

People spend a great deal of time developing, for example, the sensory abilities of A.I.s, and then five minutes worrying about the morality (oh yeah, we should worry about that too!). These days, what might be called ‘proto-A.I.s’ abound – consider, for example, the growing use of automated personal assistants (e.g. Siri) and autonomous vehicles. This type of technology raises many immediate and long-term ethical issues. For example: all of this data is going off to the cloud. What are the ethics of this? Right now there are few ethical boundaries imposed on these and other types of automation. Resolving the relevant issues in this area will require an intersection of cognitive psychology, cognitive modeling and philosophy.

Despite its growing relevance, not a lot of research has been carried out with respect to the implementation of moral robots. But see: Moral Machines, Machine Ethics, Robot Ethics. See also the NAO robot programmed to be a nursing aid (paper: Be Nice Robot). Once again, Asimov’s Laws of Robotics come into play.

Finding ways to develop morality in A.I., and developing methods and frameworks that let A.I. grow, adapt and generalize as they encounter novel situations will be a key part of this ongoing research.

Some Other Lab Meeting Tidbits

  • Finding names for your A.I.s – it’s fun!
  • Real time decision making with the trolley problem – it’s a little different when you can’t think about it forever!
  • Can you define consciousness in 5 words or less?

It’s All About Flow

(By Jen Schellinck)

Some tidbits from this week’s lab round-table:

  • Representations and truth value – how many definitions can we find? Can we get away without representations? Can philosophy help us by teasing apart many competing definitions of this word?
  • Musical cognition: A fascinating topic! Music is one of the few human universals. And even some non-human species are happy to bop along to a tune (although not all have a decent sense of rhythm). But what role does music really play in cognition? Is it important? Is it a side-effect of some other cognitive feature? The jury is still out.
  • Books the lab is currently reading: Against the Grain, Catching Fire
  • Coming from the discussion of books we are reading, some further thoughts on the psychological state of flow:
    • How does flow relate to Expertise and SGOMS – Is expertise a flow state?
    • Interruptions – SGOMS handles this, but flow is about *not* perceiving interruptions
    • Cognitive tunneling – can this be a part of expertise?

Reinforcement Learning For The Win?

(By Jen Schellinck)

Game-playing computers are getting better! It’s interesting that reinforcement learning (RL) is being used as a base strategy for these solutions, as described in Silver et al.’s (2017) recently published paper on AlphaGo Zero, arguably the most successful Go-playing program to date. Perhaps most importantly, it was able to learn very quickly, without being exposed to large numbers of example games – it learned entirely through self-play.
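For a feel of the basic RL idea – learn action values from reward signals rather than from labeled examples – here is a tiny tabular Q-learning sketch. This is an assumed illustration, nothing like AlphaGo Zero’s actual method (which combines self-play RL with deep networks and tree search); the agent simply learns to walk right along a five-state chain to reach a reward.

```java
import java.util.Random;

// Minimal tabular Q-learning sketch (illustration only, not AlphaGo Zero).
// States 0..4 in a chain; action 0 moves left, action 1 moves right;
// reward 1 for reaching the final state, 0 otherwise.
public class QLearningSketch {
    static final int STATES = 5, ACTIONS = 2;
    static double[][] q = new double[STATES][ACTIONS];

    static void train(long seed) {
        Random rng = new Random(seed);
        double alpha = 0.5, gamma = 0.9, epsilon = 0.1;
        for (int episode = 0; episode < 500; episode++) {
            int s = 0;
            while (s != STATES - 1) {
                // Epsilon-greedy action selection.
                int a = rng.nextDouble() < epsilon ? rng.nextInt(ACTIONS)
                        : (q[s][1] >= q[s][0] ? 1 : 0);
                int next = Math.max(0, Math.min(STATES - 1, s + (a == 1 ? 1 : -1)));
                double reward = (next == STATES - 1) ? 1.0 : 0.0;
                // Q-learning update toward reward plus discounted best next value.
                double best = Math.max(q[next][0], q[next][1]);
                q[s][a] += alpha * (reward + gamma * best - q[s][a]);
                s = next;
            }
        }
    }

    public static void main(String[] args) {
        train(42);
        // After training, "right" should dominate in every non-terminal state.
        for (int s = 0; s < STATES - 1; s++)
            System.out.println("state " + s + ": prefer right = " + (q[s][1] > q[s][0]));
    }
}
```

The key point the sketch shares with the big systems is that no example games are needed: the agent bootstraps its own value estimates purely from the rewards its actions produce.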

Chad pointed out that if you want to try out or learn more about reinforcement learning, you can check out two RL platforms released by OpenAI.