HED Research Program + Blog

(By Jen Schellinck)

As mentioned in So Many Summer Projects, Ken and I spent the summer getting the Human Exploitation Dynamics (HED) Research Program off the ground. Things have been going well, and we had our first user test of the simulated environment in mid-August.

To avoid cluttering up the Cognitive Modeling Lab blog with too many posts about simulation mechanics, I’ll direct folks who would like more technical details to the HED Research Program blog. I’ll also cross-post relevant articles here from time to time, in case people want to follow along.

Thank You


In other news, I am honoured to be receiving the Institute of Cognitive Science’s departmental Teaching Assistant Excellence Award.

It’s been an incredibly rewarding experience helping so many enthusiastic students. Thank you to all my supportive and dedicated colleagues!

-Brendan C-S

A Thesis for Brendan


After an unusually warm summer full of more work than play, I have finally settled on my thesis topic (cue trumpets). The topic involves a lifelong interest of mine – metacognition.

My thesis will involve modelling metacognition within the macrocognitive architecture SGOMS (developed by Robert L. West). It will discuss the overall philosophical structure of metacognition within a cognitive architecture, providing a broad yet strong foundation for the topic – an approach affirmed in my discussions with Dr. West.

The applications of a metacognitive model hold many exciting possibilities, for both human and artificial agents. While this topic extends far beyond my humble thesis, I intend to help lay some of the necessary philosophical groundwork. After all, a blueprint is needed before building in stone – or silicon, anyway.

-Brendan C-S

So Many Summer Projects

(By Jen Schellinck)

Summer project work is in full swing in the cognitive modelling lab. In brief…

  • Kate is working to submit multiple abstracts and papers on Rachel, her A.I. model of human cognition,
  • David is updating and rejuvenating his wikimergic/wikisilo site and working on his book on open vs closed thought forms,
  • Brendan is putting the finishing touches on his paper on cognitive history,
  • Phoebe is undertaking the massive task of bringing us into the modern world of citation and paper management and taking psych courses,
  • Liz is beta-testing a new cognitive task app that will be used to collect data for upcoming experiments,
  • Chad submitted a conference paper,
  • Ken and I are rocketing along in developing the Islands game, which will be used to study exploitation dynamics,
  • Emily is gearing up to enter the world of cognitive science (via the Master’s program),
  • Adam is preparing to plow through a massive reading list of articles,
  • AND everyone else is so busy they didn’t have a chance to update us this week!

So much for a quiet summer!

Rise of the Research Blogs

(By Jen Schellinck)

Back in the day, researchers would engage in lengthy correspondences to hash out their ideas prior to publishing them formally – some of the more famous of these have been preserved as compelling examples of behind-the-scenes research collaboration.

In a discussion about our lab blog, Adam brought up the interesting point that researchers are now using blog posts as a way to both informally publish their ideas and at the same time engage other academics in public discussions about these ideas. As an intermediary between private correspondence and formal publication, blog posts are perhaps an important new force in the research world.

Research labs can also take advantage of blogs as a means of giving the world a window into their research efforts and activities. All it takes is a website, a clear set of instructions, and some encouragement. On your mark, get set… blog…

Deadline Parties and Cognitive History

(By Jen Schellinck)

We started our lab meeting this week with our deadline party, which went very well – clearly our lab is a fan of social support on the deadline front. We also got a sneak peek at Brendan’s current research project, which argues that we’ve been looking at history from a cognitively naive point of view. His main assertion is that we are at a unique point in time when we can now look back at history through a cognitive lens and obtain a novel view of the human story. In connection with this, Rob brought up an interesting reference relevant to this research, connected with Jeremy’s work, about a predecessor to Marr.

To round out the lab meeting, we also briefly chatted about the celebrity culture within science, and came up with 18 different ways to put up a whiteboard – never say our lab isn’t multi-talented 🙂

The Value of A Deadline

(By Jen Schellinck)

This week was the kick-off for our summer lab meetings, and we decided that next week we’ll be having a ‘deadline party’. Being cognitive scientists, in addition to pragmatically setting our goals for next week’s deadline party, we also contemplated the value – and cognition – behind deadlines, group work, and accountability.

A few more links that came up during the discussion, which eventually turned to creativity and writing: taming the muse and gardeners and architects.

Pragmatism, Instrumentalism… Functionalism?

(By Jen Schellinck)

At this week’s lab meeting, Jeremy presented his work on pragmatism and the knowledge layer. He discussed the philosophical underpinnings and implications of the knowledge layer, as well as its tangible role in the development of autonomous systems.

Following his very interesting and information dense talk, a discussion about the relationships between pragmatism, instrumentalism and scientific realism broke out. Rob also threw functionalism into the mix.

Although at times these discussions of ‘isms’ can seem a bit esoteric, they do have real implications for the development of new theories and techniques – in this case techniques for the development of autonomous systems – as well as the ways in which science is conducted more broadly. Related to this, a newly published essay in Aeon has a very nice discussion about the role of scientific realism and the ways in which it arguably impedes scientific progress.

Towards an Artificial Moral Agent: One Step at a Time

(By Jen Schellinck)

At our lab meeting this week, Steve showed us a preliminary version of his thesis presentation on moral machines: Towards an Artificial Moral Agent: One Step at a Time.

Some Preliminary Thoughts

The Terminator, Ex Machina – these movies might give us some inspiration, but let’s take this further. Issues raised by these movies include the relationship between consciousness and morality, and how necessary or optional the presence of each of these is in A.I. Are they conflated in these movies? Are they both necessary? Kate’s example: the self-driving car, the old person, and the puddle. Does the car need consciousness to (considerately) not splash the old person while driving through the puddle? Rob’s remark: people tend to distinguish between emotion and cognition – but are they really separate?

Steve’s Research

Steve’s goal is to develop a framework for morally acceptable machine behavior, also consistent with social norms. To show that his framework is more than just theoretical, he has implemented a basic moral agent that knows, for example, that it should not kill in situations like the trolley problem. The framework is rule-based and motivated by Hume’s conceptualization of ethics, with the addition of a compassion component. A key aspect of the framework, as implemented in Java, is the ongoing refactoring of a particular ethical problem, until, ultimately, it is generalizable to a host of related ethical problems.
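To make the idea of a rule-based moral agent concrete, here is a purely hypothetical sketch – the names (`MoralAgent`, `addRule`, `isPermitted`) and the design are my own illustration, not Steve’s actual code. It shows the general shape of such a framework: an ordered list of rules, each mapping a condition on a proposed action to a permit/forbid verdict, with the first matching rule winning.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical illustration of a rule-based moral agent.
// Rules are checked in order; the first rule whose condition
// matches the proposed action determines the verdict.
class MoralAgent {
    // A rule pairs a name, a condition on an action description,
    // and a verdict (true = permitted, false = forbidden).
    record Rule(String name, Predicate<String> applies, boolean permitted) {}

    private final List<Rule> rules = new ArrayList<>();

    void addRule(String name, Predicate<String> applies, boolean permitted) {
        rules.add(new Rule(name, applies, permitted));
    }

    // Verdict of the first matching rule; actions no rule covers
    // are permitted by default.
    boolean isPermitted(String action) {
        for (Rule r : rules) {
            if (r.applies().test(action)) {
                return r.permitted();
            }
        }
        return true;
    }
}
```

In this sketch, the refactoring step Steve describes would correspond to replacing an action-specific rule (say, one hand-written for the trolley problem) with a condition general enough to cover a whole family of related ethical problems.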

Steve’s Motivation

People spend a great deal of time developing, for example, the sensory abilities of A.I.s, and then five minutes worrying about the morality (oh yeah, we should worry about that too!). These days, what might be called ‘proto-A.I.s’ abound – consider, for example, the growing use of automated personal assistants (e.g. Siri) and autonomous vehicles. This type of technology raises many immediate and long-term ethical issues. For example: all of this data is going off to the cloud – what are the ethics of this? Right now there are few ethical boundaries imposed on these and other types of automation. Resolving the relevant issues in this area will require an intersection of cognitive psychology, cognitive modeling, and philosophy.

Despite its growing relevance, not a lot of research has been carried out with respect to the implementation of moral robots. But see: Moral Machines, Machine Ethics, Robot Ethics. See also the NAO robot programmed to be a nursing aide (paper: Be Nice Robot). Once again, Asimov’s Three Laws of Robotics come into play.

Finding ways to develop morality in A.I., and developing methods and frameworks that let A.I. grow, adapt and generalize as they encounter novel situations will be a key part of this ongoing research.

Some Other Lab Meeting Tidbits

  • Finding names for your A.I.s – it’s fun!
  • Real time decision making with the trolley problem – it’s a little different when you can’t think about it forever!
  • Can you define consciousness in 5 words or less?