cognitive science

The Turing test’s insight into humanness.

Illustration from Dean Burnett’s Guardian spoof of the June 2014 reissued ‘Turing Test Passed’ story

I’ve heard people react in two ways to the hyped announcement about Eugene passing the Turing Test. Some claim the test should be harder: longer and more complex. Others say it doesn’t show machines actually thinking. I disagree with both complaints. I think the test is a brilliant one, and very insightful and informative about what it means to be a language machine.

Turing (1950) wrote:

“I believe that in about fifty years’ time it will be possible to programme computers… [to] play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning”.

So however hyped, the basic facts of the story are more or less correct, and I find it quite amazing (given that the Eugene chat bot was first written in 2001) that Turing got the timing spot on. However, I do agree that the news story and most interpretations of the meaning of the Turing Test are nonsensical from a scientific standpoint.
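Turing’s criterion above can be read as a simple threshold: the machine “passes” if an average interrogator’s chance of a correct identification after five minutes is no more than 70 per cent. A minimal sketch of that reading, with made-up illustrative verdict counts (not data from the 2014 event):

```python
# Toy reading of Turing's 1950 criterion: the machine "passes" if judges
# correctly identify it at most 70% of the time, i.e. it fools at least 30%.
# The verdict counts below are hypothetical, purely for illustration.

def passes_turing_criterion(correct_identifications, total_trials):
    """Return True if judges identified the machine at most 70% of the time."""
    return correct_identifications / total_trials <= 0.70

# e.g. judges spot the bot in 20 of 30 five-minute conversations:
print(passes_turing_criterion(20, 30))  # ~66.7% correct, so it passes
```

Note that on this reading the bar is low by design: a bot only has to confuse three judges in ten, which is why the “passed” headline is less remarkable than it sounds.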

It seems likely to me that since 2001 many 13 year olds, along with a great many other humans, would fail the test as described, and equally likely that many more advanced chatbots would be able to pass it quite easily. This wasn’t the case in 1950, when the social meaning of computing would have been unrecognisable to contemporary judges, and vice versa.

Given that Turing’s computing challenge was passed, quite trivially, some time ago, the research challenge posed by the test as a socio-historical milestone, and the challenge for cognitive science in general since then is figuring out how, when and in what ways humanness is an ascribable quality.

There is a nice discussion of exactly this problem in QM’s very own CS4FUN – although I’m not sure who (or what) wrote it.


What can cognitive science tell us about art, and vice versa?

How do people make sense of Turner Prize nominee Tino Sehgal’s These Associations? And what can cognitive scientists learn from the way they do it?

The result of the Turner prize 2013 has been reported worldwide as a shock win – mostly because this year, the chosen artwork is less shocking than usual.

French artist Laure Prouvost’s madcap films overturned both critical expectations and the bookies’ 6/1 odds against her to win. While William Hill and Ladbrokes had David Shrigley’s mischievous peeing sculptures as a 2/1 favourite, the critics had fancied Tino Sehgal’s live conceptual/performance artworks.

The Turner prize and its contestants have become famous for creating controversy and public discussion about the limits of what artists, galleries and critics consider worthy of aesthetic judgement. However, new research from Queen Mary University of London’s Cognitive Science Group suggests that audiences are generally unfazed by this kind of issue. In ordinary conversations between visitors to the Tate Modern, one of the most supposedly ‘experimental’ artworks in this year’s Turner Prize was immediately and unproblematically subjected to complex processes of aesthetic judgement by the viewing public.

To find out how (and if) people made sense of Tino Sehgal’s Turner Prize-nominated artwork These Associations, I recorded and analysed over two hundred ordinary conversations between visitors to the Tate Modern’s Turbine Hall.


I collected recordings of visitors’ conversations over the duration of Sehgal’s performance piece for which the artist trained 300 participants (including the researcher himself) to engage in a series of coordinated movements on the floor of the 3400m² Turbine Hall.

Throughout gallery opening hours in the summer of 2012, up to 70 of these participants at a time would blend into the crowds of tourists, gallery-goers and school children that usually fill the hall. Sometimes Sehgal’s participants would engage visitors in one-to-one conversations, at others they would break out into songs or chants, run in a flocking pattern, or slow-walk through the hall as a large group. For the visitors on the balcony overlooking the hall, this was quite a spectacle, and they would often stand in couples or small groups talking and watching.

Although the But is it art? question is always in the headlines when the Turner Prize is announced, visitors to the Turbine Hall seemed not to care one way or the other. While the question was frequently invoked, and many guests simply assumed that what they were witnessing was an unauthorized and spontaneous ‘flashmob’, most conversations quickly moved on to discussing and describing the action unfolding in front of them—more like sports commentary or a nature documentary voice-over than art criticism.

People’s commentaries were often funny, insightful and playful. “Standing… Standing’s really contemporary right now” was a young American woman’s description of one of Sehgal’s living tableau scenes. “A bit like watching paint dry isn’t it” was one older English woman’s assessment, although she and her friend then discussed what they were observing in detail for half an hour. Several groups of children also learned to play ‘Pooh sticks’ with the piece: as Sehgal’s participants marched under the viewing bridge, they would pick favourites and then run to see whose would walk out first on the other side.

Even negative assessments of the work were then justified in discussion of the details of the piece: how it worked, what it looked like, who the trained participants were and how to tell them apart from ordinary gallery visitors, and what underlying rationale might account for different patterns, behaviours or movements.

Many visitors who arrived on the balcony talking to each other would lapse into long comfortable silences (quite unusual in normal conversation), while others would make ‘oohing’ and ‘aahing’ noises like people watching firework displays. However, both noisy and silent watchers would then explain their reactions to each other in terms of their analysis of the piece. Most striking was how people would seamlessly switch between talking about the artwork and talking about other aspects of their lives: work, music, London, the events of their day, etc., then back to the piece. Often assessments of the artwork were bound up in practical issues about whether to move on or stay watching, what to eat for lunch or what to view next.

The initial findings of this research suggest that seeing something as art—whether good or bad—is an ordinary, everyday social activity. Aesthetic judgements of Sehgal’s work did not come out as individuals’ lofty ‘judgements of taste’, but were embedded in people’s everyday social activities. So the humour and skill with which people explained what they were experiencing to one another was central to their enjoyment of Sehgal’s work – whether or not they categorized it as art.

The study of aesthetics in psychology, neuroscience and artificial intelligence has tended to concentrate on people’s reactions to formal properties of traditional artistic objects or images, on survey data, or on tests of people’s basic perceptual or cognitive capabilities. These approaches tend to avoid dealing with artworks which—like many that are nominated for the Turner Prize—use non-conventional art forms, because they may not be perceived ‘correctly’ as art outside of a gallery context.

But by looking at how people spontaneously explain their own perceptions of new and unfamiliar art forms to each other while in the process of experiencing them, this research explores how judgements of taste constantly adapt to changing social contexts. Finding out how interaction shapes the contexts in which aesthetic judgements ordinarily happen may be key to a more general understanding of how human cognition and perception adapt to constantly changing social situations and norms.

Identifying Emotions on the Basis of Manual Activation

Arash Eshghi started what turned out to be a very productive fight on our CogSci listserv with this press release: Carnegie Mellon Researchers Identify Emotions Based on Brain Activity and its attendant paper: Identifying Emotions on the Basis of Neural Activation.

I came up with a press release of my own; I might at some point get round to doing the Atlantic Salmon paper on the subject.


Press Release: University Researchers Identify Emotions Based on Finger Activity

New Study Extends “Palm Reading” Research to Feelings by Applying Machine Learning Techniques to Keyboard Data

For the first time, scientists at a university have identified which emotion a person is experiencing based on finger activity.

------------------

 :)      happy

 :(      sad

-----fig 1--------

The study combines keyboards and machine learning to measure finger signals to accurately read emotions in individuals. The findings illustrate how the finger categorizes feelings, giving researchers the first reliable methods to evaluate them.

“Our big breakthrough was the idea of testing typists, who are experienced at expressing emotional states digitally. We were fortunate, in that respect, that EECS has so many superb typists,” said a professor.

For the study, typists were shown the words for 9 emotions: anger, disgust, envy, fear, happiness, lust, pride, sadness and shame, and were recorded typing them multiple times in random order.

------------------
   :§      8(>_<)8
      8^O
            =)
x-(    ;-b...
 _ _ 
( " )   :-c
             :-@
    (-.-)

 DX        >:(
     :-S     
             ;-)
:0=     :-)

  :(     *:-}

  !-}       (-_-)
-----fig 2--------

The computer model, using statistical information to analyse keyboard activation patterns for 18 emotional words, was able to guess the emotional content of photos being viewed using only the finger activity of the viewers.
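In the spirit of the spoof, here is a minimal sketch of what such a “keyboard activation” emotion classifier might look like: a nearest-profile classifier over character counts of emoticons. The training data, feature scheme and function names are all hypothetical inventions for illustration, not anything from the press release:

```python
from collections import Counter

# Hypothetical training data: emoticon "keyboard activation patterns"
# labelled with emotions, loosely in the spirit of fig. 1.
TRAIN = [
    (":)", "happy"), (":-)", "happy"), ("=)", "happy"),
    (":(", "sad"), (":-c", "sad"), (":'(", "sad"),
    (">:(", "angry"), ("x-(", "angry"), (":-@", "angry"),
]

def features(emoticon):
    """Represent an emoticon as a bag of its characters."""
    return Counter(emoticon)

def train(data):
    """Sum character counts per emotion to build a profile per label."""
    profiles = {}
    for emoticon, label in data:
        profiles.setdefault(label, Counter()).update(features(emoticon))
    return profiles

def classify(profiles, emoticon):
    """Pick the emotion whose character profile overlaps most with the input."""
    feats = features(emoticon)
    def overlap(label):
        return sum(min(feats[ch], profiles[label][ch]) for ch in feats)
    return max(profiles, key=overlap)

profiles = train(TRAIN)
print(classify(profiles, ":-)"))  # happy
print(classify(profiles, ">:("))  # angry
```

The joke, of course, is that “manual encoding of emotions” here is just string matching on punctuation—which is roughly the level of inference the spoof attributes to the original study.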

“Despite manifest differences between people’s psychology, different people tend to manually encode emotions in remarkably similar ways,” noted a graduate student.

A surprising finding from the research was that almost equivalent accuracy levels could be achieved even when the computer model made use of activation patterns in only one of a number of different subsections of the keyboard.

“This suggests that emotion signatures aren’t limited to specific regions such as the qwerty parentheses cluster, but produce characteristic patterns throughout a number of keyboard regions,” said a senior research programmer.