
The pragmatics of showing off your new artwork to friends

In Anita Pomerantz’s canonical paper on preference for agreement and disagreement with assessments in conversation, I found two fascinating examples of precisely the phenomena I was looking for in the ways people talk about art.

Pomerantz uses the example below to demonstrate what she calls ‘second-assessment
productions: agreement dispreferred’, by which she means that when someone
produces a self-deprecating assessment in conversation, it invites an agreement
or a disagreement with that self-deprecation – but generally a disagreement is
preferred.

As is often the case with this kind of conversational analysis, the evidence
for one action or response being preferred is found by observing what happens
when the ‘dispreferred’ action or response is supplied. In Pomerantz’s
examples of assessments, ‘dispreferred’ conversational responses, often
contradictions and disagreements, are characterised by delays, pauses, and
other avoidances or ‘softenings’ of the dispreferred response. It is in this
context that Pomerantz produces the example below, as evidence for the structure of
how conversational participants tend to withhold what she calls ‘coparticipant
criticism’ – basically, worming their way around insulting each other’s
sensibilities.

However, this extract also shows some of the features of aesthetic
conversations that I intuitively shaped into my research question about how the
criteria for an assessment become relevant referents for sub-assessments in an
aesthetic evaluation.

Pomerantz identifies this entire exchange only as a series of deferment turns,
in which D’s (dispreferred) critical assessment is delayed, softened or
otherwise minimised.

With my question about the criteria for judgement being negotiated in mind,
these exchanges seem to support the idea that a feature of these deferments of
dispreferred responses is that they include a negotiation of the referents of
the assessments they precede:

In the exchange highlighted above, A offers up a print for assessment, D gives
a positive assessment (softening an imminent critical assessment, Pomerantz
suggests), and C interrupts, identifying one criterion for judgement: the
question of authorship. A, C and D then negotiate claims to knowledge of this
referent (the author of the print).

Two further criteria are then raised by A: first, the assessment that the print
is monetarily valuable, which is met with a silence (which Pomerantz sees as an
implicit marker of a dispreferred second assessment – in this case, a
disagreement). A then raises the rarity of the print (“only a hundred of’m”),
which D acknowledges with ‘Hrm’ after a further pause.

Pomerantz observes in this paper that acknowledgements of prior assessments do
not imply claims of access to a referent: by acknowledging A’s claim of the
rarity and value of the piece without agreeing or disagreeing, D acknowledges
only the claim, but (marked by a further pause) does not participate in
claiming to have that knowledge.

E then requests a clarification of the referent being assessed: “Which picture
is that.”, then interrupts D’s negative assessment, seemingly to raise a
further criterion for assessment of the print: the spelling of the word ‘Life’
in the print that A points out. E then seems to claim that “That’s all I wd
loo(hh)k fo(h)” in the print.

Finally, D delivers a critical assessment, raising several further criteria:

  • that this print belongs to a type of art that can be more or less
    ‘realistic’, and that D assesses this print to be ‘less realistic’; this is
    coupled with the assessment that D likes the ‘more realistic’ of this ‘type
    of art’
  • that the print belongs to a “magazine advertisement” type, which can be
    more or less “great”, and D implies that this print is less great.

I am provisionally thinking of aesthetic judgements as assessment sequences in
cases where the referent of the assessment is evidently ambiguous.

Although Pomerantz is only concerned with the overall structures of preference
in agreement and disagreement with assessments, this extract does seem to bear
out some of the assumptions in my research question: namely that in aesthetic
judgements, conversational participants seem to offer up candidates for
assessment criteria along with their assessments.

Other examples of assessment sequences in Pomerantz’s text that deal with more
self-evident referents do not seem to exhibit this characteristic, suggesting
that it may be useful to look at aesthetic judgements as a special case of
conversational interaction.

Another assumption in my research question is that as conversational
participants offer up candidate criteria as referents for their assessments,
the opportunity for ‘topical drift’ is extended. The discussion of one referent
may lead to another, and yet another, potentially replacing the subject of an
initial assessment with a sequence of second and third assessments of different
referents altogether.

In a note on a section about ‘upgrades’ (described in her paper as strong
agreements with assessments on sequential grounds), Pomerantz picks out an
exception to the upgrades she finds in the corpus that reinforce prior
assessments of the same referent: upgrades that also slightly modify the
referent, and then reinforce it:

In this extract A assesses as ‘nice’ the way two things appear together. B
upgrades the assessment to ‘lovely’, but generalises the referent to the two
things – not their appearance together.

Pomerantz identifies this topic shift as part of a softening of a later
dispreferred disagreement with an assessment: B eventually emphasises the
niceness being that the two pieces are separate – contradicting A’s initial
assessment.

Once again, in the interim between assessment and dispreferred disagreement, B
offers up the colours of the pieces (“blue en grey, en white”) as candidate
criteria for assessment, which A agrees with in this sequence.

Again, this seems to bear out some of the intuitions in my research question –
not only that a component of the conversational pragmatics of aesthetic
judgements is the proffering of multiple criteria for assessment, but also that
in this process, there is an opportunity for a shift in the referent being
assessed.

The qualification for these intuitions that I can take from Pomerantz’s paper
is that this proffering of alternate candidate criteria may be seen as a
specific case of deferring or delaying a dispreferred second assessment in a
sequence.


‘Blame’ and ‘dispreference’ in aesthetics and conversation analysis

I’ve started my literature review with Anita Pomerantz’s “Agreeing
and disagreeing with assessments: some features of preferred/dispreferred turn shapes
” to try and get a flavour of how ‘judgements of taste’ might look
through the lens of conversation analysis (CA).

Pomerantz begins her paper with the assertion that assessments are a
fundamental conversational feature of participation in an activity. She finds
evidence for this claim in the way that people decline to make assessments of things they haven’t participated in.

In the example below, this claim is demonstrated by B declining to deliver an expected assessment of the “dresses” by claiming lack of access to them: “I haven’t been uh by there-”.

Pomerantz then points out that a statement of having participated in something is somehow incomplete without a correlating assessment.

The above example intuitively bears this out: if L’s completion of this
sequence with “it’s really good” is missed out, the conversation would seem
stilted.

Reading Pomerantz’s detailed CA account of the coordination of assessment patterns in speech alongside Kant’s and Hume’s philosophical
discourses on aesthetics and judgement offers some compelling but potentially incompatible insights.

Pomerantz’s observation of how preference structures work in assessments – namely, that there are ‘preferred’ and ‘dispreferred’ responses to assessments –
seems to map onto the way Kant uses ‘blame’ in judgements of taste.

Kant’s critique of aesthetic judgement asserts that an aesthetic judgement is
distinctive because we ‘blame’ others for not agreeing with us. Kant uses this
as a way of differentiating between judgements of the ‘agreeable’ and
judgements of taste: when something is ‘agreeable’ (such as, for
example, a cute puppy), we don’t argue the point with people who don’t like
dogs because, even when most people seem to find them agreeable, not everyone
has to like all kinds of puppies. However, when discussing aesthetic
judgements, we argue the point. For Kant, that willingness to argue marks out
an aesthetic judgement from other forms of judgement: a judgement that is (at
least potentially, in the mind of the judger) a universal judgement.

Pomerantz bases her claim that there are ‘preferred’ and ‘dispreferred’
responses to assessments on the regularly observable structures of how people
negotiate assessments in everyday conversation.

She identifies preferences for ‘second’ assessments that affirm their prior
assessments (or negate them, in the case of self-deprecations). She describes
what she calls a ‘preferred-action turn shape’: marked by an immediate
response and an absence of explanations, delays, or requests for repetition or
clarification.

By contrast, she demonstrates ‘dispreferred-action turn shapes’ being
consistently marked by conversational phenomena such as pauses, explanations,
laughter and seeming agreements: ‘Yes but… no but…’, (to borrow from Vicky Pollard), softening a contradiction (or the ‘dispreferred’ affirmation of a self-deprecation).

Could Kant’s idea of ‘blame’, the expectation or demand that others should
agree with us, be equated in some way to the idea of a ‘preferred’ and a
‘dispreferred’ assessment? The problem with mapping an essentialist idea onto a phenomenological framework like CA is that everything starts to look like either a chicken or an egg. Are the conversational phenomena products of some underlying rule, or are Kant’s observations of cases in which we ‘blame’ each other for disagreement and argue the point based on observation of certain degrees of dispreference in talk?

This may point to a fundamental problem in how I am constructing my question: philosophical discourses such as Kant’s and Hume’s may simply be incompatible with analyses based on the phenomena of dialogue. I suspect I need to read a lot more philosophy and conversation analysis to sharpen my questions up to the point that these very different kinds of source materials can be brought into play in a useful way.


Install Dropbox On Your Server.

Start Dropbox Automatically On Boot

Dropbox provides a handy little service management script that makes it easy to start, stop and check the status of the Dropbox client.

Create a new file for the service management script

sudo vi /etc/init.d/dropbox

 

Paste the following script into the new file

#!/bin/sh
# dropbox service
# Replace with linux users you want to run Dropbox clients for
DROPBOX_USERS="user1 user2"

DAEMON=.dropbox-dist/dropbox

start() {
    echo "Starting dropbox..."
    for dbuser in $DROPBOX_USERS; do
        HOMEDIR=`getent passwd $dbuser | cut -d: -f6`
        if [ -x "$HOMEDIR/$DAEMON" ]; then
            HOME="$HOMEDIR" start-stop-daemon -b -o -c $dbuser -S -u $dbuser -x $HOMEDIR/$DAEMON
        fi
    done
}

stop() {
    echo "Stopping dropbox..."
    for dbuser in $DROPBOX_USERS; do
        HOMEDIR=`getent passwd $dbuser | cut -d: -f6`
        if [ -x "$HOMEDIR/$DAEMON" ]; then
            start-stop-daemon -o -c $dbuser -K -u $dbuser -x $HOMEDIR/$DAEMON
        fi
    done
}

status() {
    for dbuser in $DROPBOX_USERS; do
        # pgrep may return several pids; quote so the test doesn't break
        dbpid=`pgrep -u $dbuser dropbox`
        if [ -z "$dbpid" ] ; then
            echo "dropboxd for USER $dbuser: not running."
        else
            echo "dropboxd for USER $dbuser: running (pid $dbpid)"
        fi
    done
}

case "$1" in

    start)
        start
        ;;

    stop)
        stop
        ;;

    restart|reload|force-reload)
        stop
        start
        ;;

    status)
        status
        ;;

    *)
        echo "Usage: /etc/init.d/dropbox {start|stop|reload|force-reload|restart|status}"
        exit 1

esac

exit 0

 

Make sure you replace the value of DROPBOX_USERS with a space-separated list of the Linux users on your machine that you want to run the Dropbox client for (the script iterates over the list with a plain shell for loop, so the names must be separated by spaces, not commas). Each user in the list should have a copy of the Dropbox files and folders that you extracted from the archive available under their home directory.

Make sure the script is executable and add it to default system startup run levels

sudo chmod +x /etc/init.d/dropbox
sudo update-rc.d dropbox defaults

 

Control the Dropbox client like any other Ubuntu service

sudo service dropbox start|stop|reload|force-reload|restart|status

 

 

Dropbox Delorean By Dropbox Artwork Team

Depending upon the number of files you have on Dropbox and the speed of your internet connection it may take some time for the Dropbox client to synchronize everything.

Check Status with Dropbox CLI

Dropbox has a command line python script available separately to provide more functionality and details on the status of the Dropbox client.

Download the dropbox.py script and adjust the file permissions

wget -O ~/.dropbox/dropbox.py "http://www.dropbox.com/download?dl=packages/dropbox.py"
chmod 755 ~/.dropbox/dropbox.py

 

You can download the script anywhere you like, I’ve included it along with the rest of the Dropbox files.

Now you can easily check the status of the Dropbox client

~/.dropbox/dropbox.py status
Downloading 125 files (303.9 KB/sec, 1 hr left)

 

Get a full list of CLI commands

~/.dropbox/dropbox.py help

Note: use dropbox help <command> to view usage for a specific command.

 status       get current status of the dropboxd
 help         provide help
 puburl       get public url of a file in your dropbox
 stop         stop dropboxd
 running      return whether dropbox is running
 start        start dropboxd
 filestatus   get current sync status of one or more files
 ls           list directory contents with current sync status
 autostart    automatically start dropbox at login
 exclude      ignores/excludes a directory from syncing

 

Use the exclude command to keep specific files or folders from syncing to your server

~/.dropbox/dropbox.py help exclude

dropbox exclude [list]
dropbox exclude add [DIRECTORY] [DIRECTORY] ...
dropbox exclude remove [DIRECTORY] [DIRECTORY] ...

"list" prints a list of directories currently excluded from syncing.  
"add" adds one or more directories to the exclusion list, then resynchronizes Dropbox. 
"remove" removes one or more directories from the exclusion list, then resynchronizes Dropbox.
With no arguments, executes "list". 
Any specified path must be within Dropbox.

 

Once the Dropbox service is running and fully synchronized you can access all your Dropbox files and easily share files on your server with all your other Dropbox-connected gadgets!

For more resources and troubleshooting tips visit the Text Based Linux Install page on the Dropbox wiki and the Dropbox forums. Happy syncing!

via Install Dropbox On Your Ubuntu Server (10.04, 10.10 & 11.04) | Ubuntu Server GUI.


Access-Control-Allow-Origin XMLHttpRequest day. What fun.

XMLHttpRequest cannot load Origin http://mydomain.net is not allowed by Access-Control-Allow-Origin

Courtesy of Toby’s code for his BBC Stories visualisation for a demo we’re doing of our joint work at the DTC all hands conference, I had a day of cross-domain Ajax woe.

It was particularly annoying to run into this issue because I wasn’t even really trying to do cross-site AJAX, I just wanted to call some data from a SPARQL server running on a high port of my own server! But no, a different port, as far as the browser is concerned, is a different server.

After spending hours trying to “do it properly” and get Cross Origin Resource Sharing to work on my ISPconfig 2 debian lenny server, I just gave up.

I got it in principle, and I discovered that by adding Apache Directives like this:


Header add Access-Control-Allow-Origin "http://myserver.net"
Header add Access-Control-Allow-Origin "http://myserver.net:8080"
Header set Access-Control-Allow-Headers "X-Requested-With"
Header set Access-Control-Max-Age "60"
Header set Access-Control-Allow-Credentials true
Header set Access-Control-Allow-Headers "Content-Type, *"

To ISPConfig’s site control panel (instead of directly to Apache VirtualHosts), I did manage to get my headers doing the right thing:


saul@ni ~ $ curl -i -X OPTIONS http://mydomain.net/mydemo/

HTTP/1.1 200 OK
Date: Tue, 01 Nov 2011 14:38:56 GMT
Server: Apache (Debian) mod_python Python mod_ruby Ruby mod_ssl OpenSSL
Allow: GET,HEAD,POST,OPTIONS,TRACE
Vary: Accept-Encoding
Access-Control-Allow-Origin: http://mydomain.net
Access-Control-Allow-Origin: http://mydomain.net:8080
Access-Control-Allow-Headers: Content-Type, *
Access-Control-Max-Age: 60
Access-Control-Allow-Credentials: true
Content-Length: 0
Content-Type: text/html

at least as described in the various how-tos I was reading.

But after plenty of attempts, I just couldn’t get it working. Maybe it was something on the client-side that I just didn’t get. I’m no Javascript person…

Anyway, after battling hard to do it the right way, I caved and did it the sysadminny way, following the advice from Steve Harris I found on the 4store-support site in the first place and just set up a proxy to port 8080 so that the script could just request /whatever/ and get http://mydomain.net:8080/whatever/.
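For anyone wanting to replicate that workaround, the proxy amounts to a couple of mod_proxy directives along these lines (the /sparql/ path here is a hypothetical example, since the post doesn’t name the endpoint; adjust it to whatever path your script requests):

```apache
# Sketch: forward same-origin requests for /sparql/ to the SPARQL server
# listening on port 8080, so the browser never makes a cross-origin request.
# Requires mod_proxy and mod_proxy_http to be enabled.
ProxyRequests Off
ProxyPass        /sparql/ http://mydomain.net:8080/sparql/
ProxyPassReverse /sparql/ http://mydomain.net:8080/sparql/
```

With this in place, the Javascript can request /sparql/whatever/ from the main origin and Apache quietly fetches it from port 8080.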

Bah.


Building & Installing 4store on Debian Lenny

It took a good few attempts to get 4store installed on my Debian Lenny box, even after reading a very useful guide by Richard Reynolds.

For anyone following that guide, here are the modifications I had to make:

Firstly, I had to install Raptor (it complains that there’s no Rasqal otherwise). That was fairly straightforward; I was able to follow Richard Reynolds’ guide:


wget http://download.librdf.org/source/raptor2-2.0.2.tar.gz
tar -xzvf raptor2-2.0.2.tar.gz
cd raptor2-2.0.2
./configure
make
sudo make install

Then I was able to build Rasqal:


wget http://download.librdf.org/source/rasqal-0.9.25.tar.gz
tar -xzvf rasqal-0.9.25.tar.gz
cd rasqal-0.9.25
./configure
make
sudo make install

When it came to building 4store, I couldn’t get the sources from github. This line:

git clone https://github.com/garlik/4store.git

Got me:


Initialized empty Git repository in /home/blah/4store-v1.1.4/4store/.git/
warning: remote HEAD refers to nonexistent ref, unable to checkout.

Which wasn’t very useful, and created an empty 4store directory that I had to delete. A bit of googling indicated that the maintainers need to issue a few commands to push the default branch to the server. I couldn’t do anything about that, so I tried other methods of getting hold of the sources.

Then I tried several times to download auto-zipped up sources from github, unzipped them, and struggled with building the Makefile using the included automake.sh script, which I never got to work.

So finally I downloaded the sources from the 4store website here, unzipped them, found a nice Makefile and followed the INSTALL instructions from there.

It was a bit of a mission getting 4store to compile, I had to apt-get install:

  • libglib2.0-dev (Make complained about not having glib-2.0)
  • libxml++-dev
  • libreadline-dev

But I finally got it configured, made and installed. Next: configuration!


How to insert a special character in Vim

In insert mode, press Ctrl-K, then type a base character followed by one of these characters:

Character	Meaning
--------------------------------
!		Grave
'		Acute accent
>		Circumflex accent
?		Tilde
-		Macron
(		Breve
.		Dot above
:		Diaeresis
,		Cedilla
_		Underline
/		Stroke
"		Double acute (Hungarumlaut)
;		Ogonek
<		Caron
0		Ring above
2		Hook
9		Horn
=		Cyrillic
*		Greek
%		Greek/Cyrillic special
+		Smalls: Arabic, caps: Hebrew
3		Some Latin/Greek/Cyrillic
4		Bopomofo
5		Hiragana
6		Katakana
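For example, with Vim’s default digraphs, combining a base letter with one of the characters above produces the accented form:

```
Ctrl-K e '   inserts é  (acute)
Ctrl-K a :   inserts ä  (diaeresis)
Ctrl-K o >   inserts ô  (circumflex)
```

You can list every digraph Vim knows about with the :digraphs command.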

from http://vim.runpaint.org/typing/inserting-accented-characters/


3 Representations of Dr Who

Three representations of Dr Who? Script, RDF and Chat

I have three representations of Dr Who S4E1 sitting in front of me:

  1. A semantic annotation of the episode based on the BBC Stories Ontology, by Michael O. Jewell, Paul Rissen, and Toby Harris{{1}}.
  2. The script for the episode, by Russell T Davies
  3. A transcript of a couple of very rowdy screenings of the episode I organised at The People Speak HQ during which people heckled at the screen using short messages, images and video.

What’s hurting my brain at the moment is a question of representation. In this triple, if ‘represents’ is the predicate, which is the subject and which is the object?

  • Is the Semantic annotation a representation of Dr Who S4E1: Partners in Crime the TV show, or is it a representation of the experience and interpretation of the person watching and annotating it? Or both?
  • In the same way, is the transcript of the conversation a representation of people’s experience of watching the episode and making social sense of it together, but with a lot more context?
  • Is the episode itself a representation of the shooting script?

Which philosophical texts can I turn to to help me make sense of this?

But most crucially (for my purposes), how can I best understand the similarities and differences between 1 (the semantic annotation) and 3 (the conversational transcript)?

I had a few ideas about this, mostly based on text-mining the conversation transcript via concept-extraction services such as LUpedia or Alchemy API to see if snatches of conversation can identify related entities within the annotation’s timeline, but feedback from the wonderful Jo Walsh was sceptical of this approach.

Basically, her critique was that

  1. Using text-mining/concept extraction privileges text, whereas the heckle stream is very visual, and that seems important.
  2. Entity-recognition/tagging services will yield a very variable quality of metadata. They’re designed to look for something specific in text and match it, and tend to require quite a bit of context (more than 140 characters of text)
  3. Asking the question “to what extent can this be considered metadata” will get very inconclusive answers, which will question the point of asking the question in the first place.

I think I agree with point 3 – which questions the point of this blog post, but I think I still need some kind of bottom-up analysis of the relatedness of the data, and although I’d like to just disregard the slightly solipsistic question of what is representing what, it would be nice to be able to attribute any philosophical assertions to someone other than myself!

[[1]] Here’s the OWL for the episode. Here’s the n3 formatted annotation of the episode [[1]]


Conversational Scenario Design


This scenario is designed to elicit and capture conversation between a group of people who are watching a specific episode of Dr. Who together.

The aim is to be able to compare existing formal metadata for this episode with this speculative ‘conversational metadata’, and evaluate it as an alternative representation of the same media object: Dr Who, Season 4, Episode 1, Partners in Crime.

The Setup

Two groups of eight people are invited to watch an episode of Dr Who together on a large screen, during which they use their laptops and a simple text/image/video annotation interface to type short messages or send images onto the screen, where they are visible as an overlay on top of the video of Dr Who.

The room is laid out in a ‘living room’ arrangement to support co-present viewing and interaction between participants, with comfortable seating arranged in a broad semi-circle, oriented towards a large projected video screen about ten feet away. Each participant is asked to bring their own laptop, tablet PC, or other wifi-enabled device with a web browser.

After making sure that all participants are on the network, there is an introductory briefing where they are given a presentation explaining the aims of the project and that they are free to walk around, use their laptops or just talk, and help themselves to food and drink during the screening.

The Annotation Tool

The system that the participants are using on their laptops/tablets or mobile phones has a simple web-based client, enabling viewers to choose a colour to identify themselves on the screen, and then type in 140 characters of text or search for images and video, before sending them to the main screen.

Users are asked to choose a colour
The ‘red’ user’s annotation interface with image search
Search results for ‘knitted adipose’ before posting to screen

The Display Screen

The video of Dr Who is projected on a ‘main’ screen, alongside text, images and video clips sent by viewers in a fullscreen browser window. The images and videos sent by users have a coloured outline, and text-bubbles are coloured to indicate who posted them.

Dr Who layered with text, image and video annotations.

Images and videos run underneath the video in a ‘media bar’, while text bubbles posted by users drop onto the screen in random positions, but can be re-arranged on the screen or deleted by a ‘facilitator’.

Rationale

This ‘conversational scenario’ is a hybrid of various methods in which researchers have contrived situations to elicit data from participants. Before making any claims about the data gathered, some clarification of the purpose and methods of the scenario are necessary.

Ethnographic Studies of Social TV have tended to use audiovisual recordings of TV viewers in naturalistic settings as their primary source, and analytical methods such as Conversation Analysis and participant observation have been used to deepen their understanding of how people use existing TV devices and infrastructures in a social context.

HCI approaches to designing Social TV systems have built novel systems and undertaken user testing and competitive analysis of existing systems in order to better understand the relationship between people’s social behaviours around TV, and the heuristics of speculative Social TV{{1}} devices and services.

Semantic Web researchers have opportunistically found ways to ‘harvest’ and analyse communications activity from the Social Web, as well as new Social TV network services that track users’ TV viewing activity as a basis for content recommendations and social communication.

All of these approaches will be extremely useful in developing better conversational annotation systems, and improving understanding and design of Social TV for usability, and for making better recommendations.

Although the conversational scenario described borrows from each of these methods, its primary objective is to gather data from people’s mediated conversations around a TV in order to build a case for seeing and using it as metadata.

System design, usability, viewer behaviour, user profiles, choices of video material, and the effect those issues have on the quality and nature of the captured metadata are a secondary concern to this first step in ascertaining whether conversations can be captured and treated as metadata pertaining to the video in the first place.

[[1]]I am using the term Social TV, following one of the earliest papers to coin the phrase, by Oehlberg et al. (2006), to refer to Interactive TV systems that concentrate on the opportunities for viewer-to-viewer interaction afforded by the convergence of telecoms and broadcast infrastructures. Oehlberg, L., Ducheneaut, N., Thornton, J. D., Moore, R. J., & Nickell, E. (2006). Social TV: Designing for distributed, sociable television viewing. Proc. EuroITV (Vol. 2006, pp. 25–26). Retrieved from http://best.berkeley.edu/~lora/Publications/SocialTV_EuroITV06.pdf [[1]]


Conversational Annotation

Annotation of a conversation would usually be a post-hoc chore undertaken by someone charged with watching a documentary or ethnographic video and ‘making sense’ of the diffuse multifariousness of a conversation. Heckle‘s approach, that each visual/textual interjection might be used as an annotation, attempts to turn annotation into an augmentation of the experience of the conversation. Because it is concurrent and live, the participants who heckle may notice and incorporate all kinds of contextual markers outside the view of the video camera, as well as bring their own diverse interpretations and experiences into the heckled conversation.

Most crucially, the variety of ways that people use the Heckle system mirrors the diversity of people’s verbal and non-verbal contributions to the live conversation. The stream of images, video, text and links that result can be seen as a parallel conversation that ‘annotates’ the conversation around the Talkaoke table, but also interacts with it in real time: the representation of the conversation itself becomes conversational.

Research Strategy

There are so many questions to be asked about this approach: about the user interfaces, about how and whether Heckle does really ‘augment’ the experience, lead to further engagement, and how it influences people’s interpretation and behaviour. However, with the time and resources available at this stage, the goals will have to be very limited and specific.

My research task at hand is to enquire about this ‘conversational metadata’: what is it? To what extent can it be considered ‘metadata’? What objects does its metadata relate to: the conversation around the Talkaoke table, or the people heckling? And to what extent does it correlate (or not) with other forms of annotation and representation of these objects?

Asking this question will involve re-purposing the Heckle system to create scenarios in which this correlation can be measured.

To be specific, rather than using Heckle to annotate a live conversation around the Talkaoke table, I will be using it to annotate a group of people watching Dr. Who Season 4 Episode 1, ‘Partners in Crime’.

This is an opportunistic choice of programme, suggested to me by Pat Healey because he happens to have supervised my MAT colleague Toby Harris on the BBC Stories project, with Paul Rissen and Michael Jewell to annotate this episode of Dr. Who in an exemplary ‘top-down’ fashion, developing and then using the BBC stories ontology.

The plan, then, is to gather a group of people to sit and watch TV together, and to provide them with The People Speak’s Heckle system as a means of interacting with each other, layered on top of the Dr Who video. The resulting conversational metadata can then be compared to the detailed semantic annotations provided by the BBC Stories project.

Evaluation Strategy

There are a number of possible methods to use to make this comparison, although it will be hard to tell which to use before being able to look at the data.

It may be useful to simply look at mentions of characters, plot developments, and other elements in the BBC Stories ontology, and see whether they appear at equivalent moments in the BBC Stories annotation and the heckled conversation. A basic measurement of correlation could be gathered from that kind of comparison, and would indicate whether the two forms of metadata are describing the same thing.
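As a sketch of what that comparison might look like, the snippet below counts heckles whose text mentions an entity that the formal annotation marks as present at the same moment. The data structures, timestamps and entity names are invented for illustration; the real BBC Stories annotation and Heckle logs would first need parsing into this shape.

```python
# Hypothetical sketch of the "equivalent moments" comparison.

# Formal annotation: (start_sec, end_sec, entity) intervals.
annotation = [
    (0, 120, "Donna Noble"),
    (60, 300, "Adipose"),
    (240, 400, "The Doctor"),
]

# Heckles: (timestamp_sec, text) pairs from the conversational stream.
heckles = [
    (90, "donna is back!"),
    (100, "those adipose are so cute"),
    (350, "classic doctor move"),
    (380, "nothing to do with the plot"),
]

def matched_heckles(annotation, heckles):
    """Count heckles that name an annotated entity while it is 'on screen'."""
    hits = 0
    for t, text in heckles:
        words = set(text.lower().split())
        for start, end, entity in annotation:
            # ignore 'the' so "The Doctor" only matches on "doctor"
            names = {w.lower() for w in entity.split()} - {"the"}
            if start <= t <= end and names & words:
                hits += 1
                break
    return hits

print(matched_heckles(annotation, heckles))  # 3 of the 4 heckles co-occur
```

The ratio of matched to unmatched heckles would give the crude correlation measure described above, with the unmatched remainder pointing at exactly the contextual annotations (the space, the food, the sofa) that the formal annotation lacks.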

Similarly, it might be useful to demonstrate the differences between the conversational metadata and the BBC Stories version by looking for conversational annotations that relate to the specific context of the experience of watching the episode: the space, the food, the sofa. These would (of course) be absent from the BBC Stories annotation.

Another strategy is one that Toby Harris and I concocted while playing with LUpedia, a semantic ‘enrichment’ service that takes free text, attempts to match it with semantic web resources such as DBpedia, and returns structured metadata.

If it is possible to feed LUpedia the BBC Stories ontology, and then give it the episode of Doctor Who in question as a dataset, it should be possible to submit people’s heckles to it and see whether LUpedia returns relevant structured data.

If LUpedia can enrich people’s heckles with metadata from the BBC Stories dataset, that should indicate that the heckles are pertinent to the same object (in this case, the episode of Doctor Who), and might therefore be seen as conversational metadata for it{{1}}.
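The shape of that check could be sketched as below. Since I am not assuming anything about LUpedia’s actual endpoint or parameters, the enrichment call is passed in as a function, with a naive substring matcher standing in for the real service:

```python
from typing import Callable, List, Set

def pertinent_heckles(heckles: List[str],
                      enrich: Callable[[str], Set[str]],
                      stories_entities: Set[str]) -> List[str]:
    """Return the heckles whose enrichment result overlaps the entities
    in the BBC Stories dataset, i.e. the heckles that plausibly refer to
    the episode itself rather than to the sofa or the snacks."""
    return [h for h in heckles if enrich(h) & stories_entities]

# Purely illustrative stand-in for LUpedia: match known entity names
# by substring. The real service would do proper semantic matching.
KNOWN_ENTITIES = {"Adipose", "Donna Noble"}

def toy_enrich(text: str) -> Set[str]:
    return {e for e in KNOWN_ENTITIES if e.lower() in text.lower()}
```

The proportion of heckles that come back ‘pertinent’ would then be a rough measure of how much of the conversational metadata refers to the episode at all.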

[[1]]My conversational metadata will probably also describe the interactional experience of watching the show, and other contextual references that will be absent from the BBC Stories annotation. However, it is important to show that the two types of metadata relate to at least one of the same objects. If this is not demonstrable, it creates some ‘fun’ philosophical problems for my research, such as what conversational metadata *does not* refer to. That one might be harder to answer.[[1]]

 


Heckle

Since 2007, our art collective The People Speak have been working on ways to make the 13+ years of conversational oral history in our archive public and searchable.

The conversations between people who meet around the Talkaoke table{{1}}, on street corners, at festivals, schools, or conferences, have been recorded and archived on every format going, from Digibeta to Hi8, RealMedia (oh God, the ’90s), and MiniDV. For the last two years we have finally moved to digital-only recording, but the archival backlog is intimidating.

As challenging as the digitisation and archival issues are, the real problem is figuring out what people are talking about in this mountain of data. All the conversations facilitated by The People Speak are spontaneous, off the cuff, and open to people changing tack at any point. This has made it almost impossible to provide a thematically structured archive.

And this problem is not unique to this rather specialised context. Aren’t all conversations, question-and-answer sessions, and, in fact, pretty much anything that involves people interacting with each other on video subject to the same contingencies of meaning?

If my early-stage training in Conversation Analysis has shown me anything, it’s that the apparent ‘content’ of a conversation is impossible to represent in any way other than through further conversations, and through observations of how people work to repair their misunderstandings.

The Heckle System

The People Speak’s response to this problem has been the ‘Heckle’ system.

Using ‘Heckle’, an operator or multiple participants in a conversation can search for and post Google images, videos, web links, Wikipedia articles, or 140 characters of text, which then appear overlaid on a projected live video of the conversation.

Here is a picture of Heckle in use at the National Theatre, after a performance of Greenland.

Heckle in action at the National Theatre

As you can see, the people sitting around the Talkaoke table aren’t focused on the screens on which the camera view is projected live. The aim of the Heckle system is not to compete with the live conversation as such – but to be a backchannel, throwing up images, text and contextual explanations on the screen that enable new participants to understand what’s going on and join in the conversation.

The Heckle system also has a ‘cloud’ mode, in which it displays a linear representation of the entire conversation so far, including snapshots from the video at the moment that a heckle was created, alongside images, keywords, ‘chapter headings’ and video.

Heckle stream from Talkaoke

This representation of the conversation is often used as part of a rhetorical device by the Talkaoke host to review the conversation so far for the benefit of people who have just sat down to talk. A ‘Heckle operator’ can temporarily bring it up on a projection or other nearby display and the host then verbally summarises what has happened so far.

It also often functions as a modifier for what is being said. Someone is talking about a subject, and another participant or viewer posts an image which may contradict or ridicule their statement; someone notices and laughs, everyone’s attention is drawn to the screen momentarily, then returns to the conversation with this new interjection in mind. Some people use the Heckle system because they are too shy to take the microphone and speak. It may illustrate and reinforce, or undermine and satirise. Some ‘heckles’ are made in reply to another heckle, some in reply to something said aloud, and vice versa.

If keywords are mentioned in the chat, they can be matched to a timecode in the video; in effect, the heckled conversation becomes an index for the video-recorded conversation: the conversation annotates the video{{2}}.
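That indexing step can be sketched as a simple inverted index from keywords to timecodes. The data shapes here are my assumption for illustration, not Heckle’s actual internals:

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def build_index(heckles: List[Tuple[float, str]]) -> Dict[str, List[float]]:
    """Map each word used in a heckle to the video timecodes (in seconds)
    at which heckles mentioning it were posted."""
    index: Dict[str, List[float]] = defaultdict(list)
    for timecode, text in heckles:
        for word in text.lower().split():
            index[word].append(timecode)
    return dict(index)
```

Looking up ‘adipose’ would then return every moment in the recording at which someone heckled about it.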

[[1]]Talkaoke, if you’ve never seen it before, is a pop-up talk-show invented by Mikey Weinkove of The People Speak in 1997. It involves a doughnut-shaped table, with a host sitting in the middle on a swivelly chair, passing the microphone around to anyone who comes and sits around the edge to talk. Check out the Talkaoke website if you’re curious.[[1]]

[[2]]People don’t just post keywords. It’s quite important that they can post images and video too. The search terms they use to find these resources can also be recorded and used as keywords to annotate the video. A further possibility is that a corpus of pre-annotated images, such as those catalogued using the ESP Game, could be used to annotate the video. This would provide a second level of annotation: the annotations of the images used could be considered ‘nested’ annotations of the Talkaoke conversation.[[2]]

 
