Arash Eshghi started what turned out to be a very productive fight on our CogSci listserv with this press release: Carnegie Mellon Researchers Identify Emotions Based on Brain Activity and its attendant paper: Identifying Emotions on the Basis of Neural Activation.
I came up with a press release of my own; I might at some point get round to doing the Atlantic Salmon paper on the subject.
Press Release: University Researchers Identify Emotions Based on Finger Activity
New Study Extends “Palm Reading” Research to Feelings by Applying Machine Learning Techniques to Keyboard Data
For the first time, scientists at a university have identified which emotion a person is experiencing based on finger activity.
------------------ :) happy :( sad -----fig 1--------
The study combines keyboards with machine learning, measuring finger signals to accurately read emotions in individuals. The findings illustrate how the finger categorizes feelings, giving researchers the first reliable methods to evaluate them.
“Our big breakthrough was the idea of testing typists, who are experienced at expressing emotional states digitally. We were fortunate, in that respect, that EECS has so many superb typists,” said a professor.
For the study, typists were shown the words for 9 emotions: anger, disgust, envy, fear, happiness, lust, pride, sadness and shame, and were recorded typing them multiple times in random order.
------------------ :§ 8(>_<)8 8^O =) x-( ;-b... _ _ ( " ) :-c :-@ (-.-) DX >:( :-S ;-) :0= :-) :( *:-} !-} (-_-) -----fig 2--------
The computer model, using statistical information to analyse keyboard activation patterns for 18 emotional words, was able to guess the emotional content of photos being viewed using only the finger activity of the viewers.
“Despite manifest differences between people’s psychology, different people tend to manually encode emotions in remarkably similar ways,” noted a graduate student.
A surprising finding from the research was that almost equivalent accuracy levels could be achieved even when the computer model made use of activation patterns in only one of a number of different subsections of the keyboard.
“This suggests that emotion signatures aren’t limited to specific regions such as the qwerty parentheses cluster, but produce characteristic patterns throughout a number of keyboard regions,” said a senior research programmer.
3 thoughts on “Identifying Emotions on the Basis of Manual Activation”
On Mon, Jun 24, 2013 at 06:38:26AM +0100, Dr X wrote:
> It might not; that’s fine. The space of algorithms defined by the word
> frustration is large, enormous. We all experience frustration
> differently. We give the word to a set of characteristics under the
> prototype frustration.
> – or, indeed, nothing of the sort).
> Again, not a problem for the idea that emotions have an important
> algorithmic basis, which is what I’m saying.
Similarly, there are lots of potential expressions of frustration, e.g.
=P :-P X[ >< B/ >.< *-_-* which will be variously common and recognizable within and between different cultural groups.
If we train a classifier on this data set: http://pc.net/emoticons/ it seems likely that we could come up with statistically significant correlates between words and emoticons:
pride: !-) ( " ) *:-}
anger: x-( x( :-@ >:(
disgust: DX :§ :-s :O= :&
envy: (V) 8(>_<)8
fear: 8^O =:O
happiness: =) :) :-)
lust: ;-) ;-b... ,-}
sadness: :( :-( >:[ :-c :c
shame: (-.-) (-_-)
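As a purely illustrative sketch of the kind of classifier meant here, a character-unigram Naive Bayes over a hand-transcribed subset of the lists above; the miniature data set, the add-one smoothing and the choice of character unigrams as features are all my assumptions, not the pc.net inventory or any published method:

```python
# Toy sketch: character-level Naive Bayes over a few of the emoticon lists
# above. The DATA dict just transcribes a subset of those lists; the real
# pc.net inventory is far larger, so treat the output as illustrative only.
from collections import Counter
import math

DATA = {
    "pride":     ["!-)", "( \" )", "*:-}"],
    "anger":     ["x-(", "x(", ":-@", ">:("],
    "disgust":   ["DX", ":-s", ":O=", ":&"],
    "fear":      ["8^O", "=:O"],
    "happiness": ["=)", ":)", ":-)"],
    "sadness":   [":(", ":-(", ">:[", ":-c", ":c"],
}

def train(data):
    """Per-emotion character counts; vocabulary is the union of all characters."""
    counts = {label: Counter("".join(icons)) for label, icons in data.items()}
    vocab = {c for ctr in counts.values() for c in ctr}
    return counts, vocab

def classify(icon, counts, vocab):
    """Most likely emotion for an emoticon under a smoothed character unigram model."""
    best, best_lp = None, float("-inf")
    for label, ctr in counts.items():
        total = sum(ctr.values()) + len(vocab)  # add-one smoothing denominator
        lp = sum(math.log((ctr[c] + 1) / total) for c in icon)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

counts, vocab = train(DATA)
print(classify(":-)", counts, vocab))  # → happiness
print(classify(":(", counts, vocab))   # → sadness
```

Even this toy version shows the point made below: the model picks up on which characters are common in which lists, nothing more.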
This could tell us some pretty interesting things about the structure of the emoticons, i.e. that colons and parentheses are pretty important, and that some can be sub-typed by the prevalence of certain features. It doesn’t tell us that individual colons, parentheses or ampersands are any more relevant to people’s emotional lives than any other character per se. It also doesn’t provide satisfying explanations for how these emoticons are understood and responded to.
To find out about this, we could look at how emoticons were used in dialogue. When a chat participant doesn’t understand one, or mistypes or misreads it, what kind of work do they do to understand and make themselves understood? This analysis doesn’t yield statistical
correlates, but it tells us something interesting about people’s step-by-step procedures for understanding and responding to emoticons or certain features of them as emotionally relevant (or not) in their own terms.
> Maybe we have different ideas of what an ‘algorithm’ is? I notice the
> word used very differently, e.g. Esther Thelen used it very strangely
> to refer to only symbolic operations. I just mean it as any mindless
> operational procedure that can be unambiguously specified, in the
> standard computer science sense.
I think ‘algorithm’ in the latter sense is probably a useful metaphor for describing both of these analytical approaches. There are plenty of metaphors from CS (recursion, state, typed variables, bootstrapping) that are tremendously useful for specifying operational procedures in human interaction and language use concisely (‘mindless’ seems strong).
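To make the ‘state’ metaphor concrete: the repair work described earlier (a chat participant not understanding an emoticon and getting it clarified) can be sketched as a small finite state machine. The states, event names and transitions below are entirely hypothetical, a toy illustration rather than any published coding scheme:

```python
# Toy illustration of the 'state' metaphor from CS: other-initiated repair
# of a misunderstood emoticon, modelled as a tiny finite state machine.
# All state and event names here are invented for illustration.
TRANSITIONS = {
    ("emoticon_sent", "understood"): "sequence_closed",
    ("emoticon_sent", "repair_initiated"): "repairing",  # e.g. "what's 8(>_<)8 ?"
    ("repairing", "gloss_offered"): "emoticon_sent",     # sender re-does or explains it
}

def run(events, state="emoticon_sent"):
    """Step through a sequence of dialogue events, returning the final state."""
    for ev in events:
        state = TRANSITIONS.get((state, ev), state)  # unrecognised events: no change
    return state

print(run(["repair_initiated", "gloss_offered", "understood"]))  # → sequence_closed
```

The point of the metaphor is only that such sequences can be specified unambiguously step by step, not that participants are running this machine.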
It’s a pity that when researchers and PR people make incautious claims that conflate the representation, the metaphor and the phenomenon, it turns into a brawl for or against an ontology that confuses the issue and discards potentially valuable insights on each side.
“Look at a face: what is important is its expression, not its colour, size, etc.” “Well, give us the expression without the face.” The expression is not an effect of the face, on me or anyone. You could not say that if anything else had this effect, it would have the expression on this face. I want to make you sad, I show you a picture, and you are sad. This is the effect of this face.
Wittgenstein, L. (1966). Lectures and conversations on aesthetics, psychology and religious belief.
On 24/06/13 13:05, Matthew Purver wrote:
> On 24/06/2013 10:46, Saul Albert wrote:
> If we train a classifier on this data set:http://pc.net/emoticons/ it
> seems likely that we could come up with statistically significant
> correlates between words and emoticons:
this has been tried, and indeed significant correlations exist:
> This could tell us some pretty interesting things about the structure of
> the emoticons, ie. that colons and parentheses are pretty important, and
and this has also been tried, although not with Western-style emoticons
which are a bit boring structurally, but with the much richer
But whether any of these has anything at all to tell us about our
brains, or what emotions are, is not a question I think these papers
claim to address.
On Mon, Jun 24, 2013 at 8:09 AM, Arash Eshghi wrote:
I think all of us would (more or less) agree that:
1) emotions are not mental states (whether fuzzy or determinate).
2) brains are necessary but not sufficient.
3) that nobody really knows how to deal with the phenomenology (the Wittgensteinian approach ignores it altogether, I think, but I think one can have a sufficient theory without reference to “experience” as such; I mean, who cares, it’s only me that has them anyhow).
4) emotions do feel a certain way, but they are not identical with that experience (I can’t point inwards). So there will be several of them that have an identical “feel” to them, but they are nevertheless distinct (and I can tell, albeit not during: I won’t know until I’ve ended up responding to another’s response to their initial expression).
> Saul: “It’s a pity that when researchers and PR people make incautious claims that
> conflate the representation, the metaphor and the phenomenon, it turns into a brawl
> for or against an ontology that confuses the issue and discards potentially valuable
> insights on each side.”
It is a pity, yes, but this stuff is really quite confusing! Language misleads and hides, à la Ludwig.