
‘Blame’ and ‘dispreference’ in aesthetics and conversation analysis

I’ve started my literature review with Anita Pomerantz’s “Agreeing and disagreeing with assessments: some features of preferred/dispreferred turn shapes” to try and get a flavour of how ‘judgements of taste’ might look through the lens of conversation analysis (CA).

Pomerantz begins her paper with the assertion that assessments are a
fundamental conversational feature of participation in an activity. She finds
evidence for this claim in the way that people decline to make assessments of things they haven’t participated in.

In the example below, this claim is demonstrated by B declining to deliver an expected assessment of the “dresses” by claiming lack of access to them: “I haven’t been uh by there-”.

Pomerantz then points out that a statement of having participated in something is somehow incomplete without a correlating assessment.

The above example intuitively bears this out: if L’s completion of the sequence with “it’s really good” were missed out, the conversation would seem stilted.

Reading Pomerantz’s detailed CA account of the coordination of assessment patterns in speech alongside Kant’s and Hume’s philosophical discourses on aesthetics and judgement offers some compelling but potentially incompatible insights.

Pomerantz’s observation of how preference structures work in assessments, namely that there are ‘preferred’ and ‘dispreferred’ responses to assessments, seems to map onto the way Kant uses ‘blame’ in judgements of taste.

Kant’s critique of aesthetic judgement asserts that an aesthetic judgement is
distinctive because we ‘blame’ others for not agreeing with us. Kant uses this
as a way of differentiating between judgements of the ‘agreeable’ and
judgements of taste: when something is ‘agreeable’ (a cute puppy, for
example), we don’t argue the point with people who don’t like
dogs because, even when most people seem to find them agreeable, not everyone
has to like all kinds of puppies. However, when discussing aesthetic
judgements, we argue the point. For Kant, that willingness to argue marks out
an aesthetic judgement from other forms of judgement: a judgement that is (at
least potentially, in the mind of the judger) a universal judgement.

Pomerantz bases her claim that there are ‘preferred’ and ‘dispreferred’
responses to assessments on the regularly observable structures of how people
negotiate assessments in everyday conversation.

She identifies preferences for ‘second’ assessments that affirm the prior
assessment (or negate it, in the case of self-deprecations). She describes
what she calls a ‘preferred-action turn shape’, marked by an immediate
response and the absence of explanations, delays, or requests for repetition
or clarification.

By contrast, she demonstrates that ‘dispreferred-action turn shapes’ are
consistently marked by conversational phenomena such as pauses, explanations,
laughter and seeming agreements: ‘Yes but… no but…’ (to borrow from Vicky Pollard) softening a contradiction (or the ‘dispreferred’ affirmation of a self-deprecation).

Could Kant’s idea of ‘blame’, the expectation or demand that others should agree with us, be equated in some way with the idea of a ‘preferred’ and a ‘dispreferred’ assessment? The problem with mapping an essentialist idea onto a phenomenological framework like CA is that everything starts to look like either a chicken or an egg. Are the conversational phenomena products of some underlying rule, or are Kant’s observations, of cases in which we ‘blame’ each other for disagreement and argue the point, themselves based on observation of certain degrees of dispreference in talk?

This may point to a fundamental problem in how I am constructing my question: philosophical discourses such as Kant’s and Hume’s may simply be incompatible with analyses based on the phenomena of dialogue. I suspect I need to read a lot more philosophy and conversation analysis to sharpen my questions up to the point that these very different kinds of source materials can be brought into play in a useful way.


Install Dropbox On Your Server.

Start Dropbox Automatically On Boot

Dropbox provides a handy little service management script that makes it easy to start, stop and check the status of the Dropbox client.

Create a new file for the service management script

sudo vi /etc/init.d/dropbox


Paste the following script into the new file

#!/bin/sh
# dropbox service
# Replace with linux users you want to run Dropbox clients for
DROPBOX_USERS="user1 user2"

DAEMON=.dropbox-dist/dropbox

start() {
    echo "Starting dropbox..."
    for dbuser in $DROPBOX_USERS; do
        HOMEDIR=`getent passwd $dbuser | cut -d: -f6`
        if [ -x $HOMEDIR/$DAEMON ]; then
            HOME="$HOMEDIR" start-stop-daemon -b -o -c $dbuser -S -u $dbuser -x $HOMEDIR/$DAEMON
        fi
    done
}

stop() {
    echo "Stopping dropbox..."
    for dbuser in $DROPBOX_USERS; do
        HOMEDIR=`getent passwd $dbuser | cut -d: -f6`
        if [ -x $HOMEDIR/$DAEMON ]; then
            start-stop-daemon -o -c $dbuser -K -u $dbuser -x $HOMEDIR/$DAEMON
        fi
    done
}

status() {
    for dbuser in $DROPBOX_USERS; do
        dbpid=`pgrep -u $dbuser dropbox`
        if [ -z "$dbpid" ] ; then
            echo "dropboxd for USER $dbuser: not running."
        else
            echo "dropboxd for USER $dbuser: running (pid $dbpid)"
        fi
    done
}

case "$1" in

    start)
        start
        ;;

    stop)
        stop
        ;;

    restart|reload|force-reload)
        stop
        start
        ;;

    status)
        status
        ;;

    *)
        echo "Usage: /etc/init.d/dropbox {start|stop|reload|force-reload|restart|status}"
        exit 1

esac

exit 0


Make sure you replace the value of DROPBOX_USERS with a space-separated list of the Linux users on your machine that you want the Dropbox client to run for (the script loops over the list with a plain for, so the names are separated by spaces, as in "user1 user2"). Each user in the list should have a copy of the Dropbox files and folders that you extracted from the archive available under their home directory.

Make sure the script is executable and add it to default system startup run levels

sudo chmod +x /etc/init.d/dropbox
sudo update-rc.d dropbox defaults


Control the Dropbox client like any other Ubuntu service

sudo service dropbox start|stop|reload|force-reload|restart|status


Dropbox Delorean by Dropbox Artwork Team

Depending upon the number of files you have on Dropbox and the speed of your internet connection, it may take some time for the Dropbox client to synchronize everything.

Check Status with Dropbox CLI

Dropbox has a command-line Python script, available separately, that provides more functionality and detail on the status of the Dropbox client.

Download the dropbox.py script and adjust the file permissions

wget -O ~/.dropbox/dropbox.py "http://www.dropbox.com/download?dl=packages/dropbox.py"
chmod 755 ~/.dropbox/dropbox.py


You can put the script anywhere you like; I’ve included it along with the rest of the Dropbox files.

Now you can easily check the status of the Dropbox client

~/.dropbox/dropbox.py status
Downloading 125 files (303.9 KB/sec, 1 hr left)


Get a full list of CLI commands

~/.dropbox/dropbox.py help

Note: use dropbox help <command> to view usage for a specific command.

 status       get current status of the dropboxd
 help         provide help
 puburl       get public url of a file in your dropbox
 stop         stop dropboxd
 running      return whether dropbox is running
 start        start dropboxd
 filestatus   get current sync status of one or more files
 ls           list directory contents with current sync status
 autostart    automatically start dropbox at login
 exclude      ignores/excludes a directory from syncing


Use the exclude command to keep specific files or folders from syncing to your server

~/.dropbox/dropbox.py help exclude

dropbox exclude [list]
dropbox exclude add [DIRECTORY] [DIRECTORY] ...
dropbox exclude remove [DIRECTORY] [DIRECTORY] ...

"list" prints a list of directories currently excluded from syncing.  
"add" adds one or more directories to the exclusion list, then resynchronizes Dropbox. 
"remove" removes one or more directories from the exclusion list, then resynchronizes Dropbox.
With no arguments, executes "list". 
Any specified path must be within Dropbox.
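
For example (hypothetical paths, assuming your Dropbox folder lives at ~/Dropbox), to stop a couple of large folders syncing to the server and then confirm the exclusion list:

~/.dropbox/dropbox.py exclude add ~/Dropbox/Photos ~/Dropbox/Video
~/.dropbox/dropbox.py exclude list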


Once the Dropbox service is running and fully synchronized you can access all your Dropbox files and easily share files on your server with all your other Dropbox connected gadgets!

For more resources and troubleshooting tips visit the Text Based Linux Install page on the Dropbox wiki and the Dropbox forums. Happy syncing!

via Install Dropbox On Your Ubuntu Server (10.04, 10.10 & 11.04) | Ubuntu Server GUI.


Access-Control-Allow-Origin XMLHttpRequest day. What fun.

XMLHttpRequest cannot load Origin http://mydomain.net is not allowed by Access-Control-Allow-Origin

Courtesy of Toby’s code for his BBC Stories visualisation for a demo we’re doing of our joint work at the DTC all hands conference, I had a day of cross-domain Ajax woe.

It was particularly annoying to run into this issue because I wasn’t even really trying to do cross-site AJAX; I just wanted to fetch some data from a SPARQL server running on a high port of my own server! But no: a different port, as far as the browser is concerned, is a different origin.

After spending hours trying to “do it properly” and get Cross-Origin Resource Sharing (CORS) to work on my ISPConfig 2 Debian Lenny server, I just gave up.

I got it in principle, and I discovered that by adding Apache Directives like this:


Header add Access-Control-Allow-Origin "http://myserver.net"
Header add Access-Control-Allow-Origin "http://myserver.net:8080"
Header set Access-Control-Allow-Headers "X-Requested-With"
Header set Access-Control-Max-Age "60"
Header set Access-Control-Allow-Credentials true
Header set Access-Control-Allow-Headers "Content-Type, *"

Added via ISPConfig’s site control panel (instead of directly to the Apache VirtualHosts), these did get my headers doing the right thing:


saul@ni ~ $ curl -i -X OPTIONS http://mydomain.net/mydemo/

HTTP/1.1 200 OK
Date: Tue, 01 Nov 2011 14:38:56 GMT
Server: Apache (Debian) mod_python Python mod_ruby Ruby mod_ssl OpenSSL
Allow: GET,HEAD,POST,OPTIONS,TRACE
Vary: Accept-Encoding
Access-Control-Allow-Origin: http://mydomain.net
Access-Control-Allow-Origin: http://mydomain.net:8080
Access-Control-Allow-Headers: Content-Type, *
Access-Control-Max-Age: 60
Access-Control-Allow-Credentials: true
Content-Length: 0
Content-Type: text/html

at least as described in the various how-tos I was reading.

But after plenty of attempts, I just couldn’t get it working. Maybe it was something on the client-side that I just didn’t get. I’m no Javascript person…

Anyway, after battling hard to do it the right way, I caved and did it the sysadminny way, following the advice from Steve Harris that I’d found on the 4store-support site in the first place, and just set up a proxy to port 8080 so that the script could request /whatever/ and get http://mydomain.net:8080/whatever/.
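
Something like the following mod_proxy directives would do it (a sketch, assuming mod_proxy and mod_proxy_http are enabled; the path is illustrative):

ProxyPass /whatever/ http://mydomain.net:8080/whatever/
ProxyPassReverse /whatever/ http://mydomain.net:8080/whatever/

With that in place the browser only ever talks to http://mydomain.net, so the same-origin policy never comes into play.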

Bah.


Building & Installing 4store on Debian Lenny

It took a good few attempts to get 4store installed on my Debian Lenny box, even after reading a very useful guide by Richard Reynolds.

For anyone following that guide, here are the modifications I had to make:

Firstly, I had to install Raptor (the build complains that there’s no Rasqal otherwise). That was fairly straightforward; I was able to follow Richard Reynolds’ guide:


wget http://download.librdf.org/source/raptor2-2.0.2.tar.gz
tar -xzvf raptor2-2.0.2.tar.gz
cd raptor2-2.0.2
./configure
make
sudo make install

Then I was able to build Rasqal:


wget http://download.librdf.org/source/rasqal-0.9.25.tar.gz
tar -xzvf rasqal-0.9.25.tar.gz
cd rasqal-0.9.25
./configure
make
sudo make install

When it came to building 4store, I couldn’t get the sources from github. This line:

git clone https://github.com/garlik/4store.git

Got me:


Initialized empty Git repository in /home/blah/4store-v1.1.4/4store/.git/
warning: remote HEAD refers to nonexistent ref, unable to checkout.

Which wasn’t very useful, and created an empty 4store directory that I had to delete. A bit of googling indicated that the maintainers need to issue a few commands to push the default branch to the server. I couldn’t do anything about that, so I tried other methods of getting hold of the sources.

Then I tried several times to download auto-zipped sources from GitHub, unzipped them, and struggled to build the Makefile using the included automake.sh script, which I never got to work.

So finally I downloaded the sources from the 4store website here, unzipped them, found a nice Makefile and followed the INSTALL instructions from there.

It was a bit of a mission getting 4store to compile; I had to apt-get install the following (see the one-liner after this list):

  • libglib2.0-dev (the build complained about not having glib-2.0)
  • libxml++-dev
  • libreadline-dev
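
In one line, that’s:

sudo apt-get install libglib2.0-dev libxml++-dev libreadline-dev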

But I finally got it configured, made and installed. Next: configuration!


How to insert a special character in Vim

In insert mode, press Ctrl-K, then the character you want to accent, then one of these characters:

Character	Meaning
--------------------------------
!		Grave
'		Acute accent
>		Circumflex accent
?		Tilde
-		Macron
(		Breve
.		Dot above
:		Diaeresis
,		Cedilla
_		Underline
/		Stroke
"		Double acute (Hungarumlaut)
;		Ogonek
<		Caron
0		Ring above
2		Hook
9		Horn
=		Cyrillic
*		Greek
%		Greek/Cyrillic special
+		Smalls: Arabic, caps: Hebrew
3		Some Latin/Greek/Cyrillic
4		Bopomofo
5		Hiragana
6		Katakana
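
For example (assuming Vim’s default digraphs): Ctrl-K e ' gives é, Ctrl-K a : gives ä, and Ctrl-K o > gives ô. You can list every available digraph with :digraphs.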

from http://vim.runpaint.org/typing/inserting-accented-characters/


3 Representations of Dr Who

Three representations of Dr Who? Script, RDF and Chat

I have three representations of Dr Who S4E1 sitting in front of me:

  1. A semantic annotation of the episode based on the BBC Stories Ontology, by Michael O. Jewell, Paul Rissen, and Toby Harris{{1}}.
  2. The script for the episode, by Russell T Davies
  3. A transcript of a couple of very rowdy screenings of the episode I organised at The People Speak HQ, during which people heckled at the screen using short messages, images and video.

What’s hurting my brain at the moment is a question of representation. In this triple, if ‘represents’ is the predicate, which is the subject and which is the object?

  • Is the Semantic annotation a representation of Dr Who S4E1: Partners in Crime the TV show, or is it a representation of the experience and interpretation of the person watching and annotating it? Or both?
  • In the same way, is the transcript of the conversation a representation of people’s experience of watching the episode and making social sense of it together, but with a lot more context?
  • Is the episode itself a representation of the shooting script?
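
To make the question concrete, here is a minimal sketch in Turtle of the two competing readings (all URIs are hypothetical, not drawn from the actual annotation data):

@prefix ex: <http://example.org/> .

# Reading 1: the annotation represents the broadcast episode.
ex:storiesAnnotation ex:represents ex:PartnersInCrime .

# Reading 2: the annotation represents the annotator's viewing experience.
ex:storiesAnnotation ex:represents ex:annotatorsInterpretation .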

Which philosophical texts can I turn to to help me make sense of this?

But most crucially (for my purposes), how can I best understand the similarities and differences between 1 (the semantic annotation) and 3 (the conversational transcript)?

I had a few ideas about this, mostly based on text-mining the conversation transcript via concept-extraction services such as LUpedia or Alchemy API to see if snatches of conversation can identify related entities within the annotation’s timeline, but feedback from the wonderful Jo Walsh was sceptical of this approach.

Basically, her critique was that

  1. Using text-mining/concept extraction privileges text, whereas the heckle stream is very visual, and that seems important.
  2. Entity-recognition/tagging services will yield a very variable quality of metadata. They’re designed to look for something specific in text and match it, and tend to require quite a bit of context (more than 140 characters of text)
  3. Asking the question “to what extent can this be considered metadata” will get very inconclusive answers, which will question the point of asking the question in the first place.

I think I agree with point 3 – which questions the point of this blog post, but I think I still need some kind of bottom-up analysis of the relatedness of the data, and although I’d like to just disregard the slightly solipsistic question of what is representing what, it would be nice to be able to attribute any philosophical assertions to someone other than myself!

[[1]] Here’s the OWL for the episode. Here’s the n3 formatted annotation of the episode [[1]]


Conversational Scenario Design


This scenario is designed to elicit and capture conversation between a group of people who are watching a specific episode of Dr. Who together.

The aim is to be able to compare existing formal metadata for this episode with this speculative ‘conversational metadata’, and evaluate it as an alternative representation of the same media object: Dr Who, Season 4, Episode 1, Partners in Crime.

The Setup

Two groups of eight people are invited to watch an episode of Dr Who together on a large screen, during which they use their laptops and a simple text/image/video annotation interface to type short messages or send images onto the screen, where they are visible as an overlay on top of the video of Dr Who.

The room is laid out in a ‘living room’ arrangement to support co-present viewing and interaction between participants, with comfortable seating arranged in a broad semi-circle, oriented towards a large projected video screen about ten feet away. Each participant is asked to bring their own laptop, tablet PC, or other wifi-enabled device with a web browser.

After making sure that all participants are on the network, there is an introductory briefing where they are given a presentation explaining the aims of the project and that they are free to walk around, use their laptops or just talk, and help themselves to food and drink during the screening.

The Annotation Tool

The system that the participants are using on their laptops/tablets or mobile phones has a simple web-based client, enabling viewers to choose a colour to identify themselves on the screen, and then type in 140 characters of text or search for images and video, before sending them to the main screen.
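
In sketch form, each client-to-screen message would carry something like this (a hypothetical shape written as a Python dict, not the actual protocol):

# Hypothetical shape of a client-to-screen message (not the actual protocol).
message = {
    "user_colour": "red",        # the colour the viewer chose to identify themselves
    "kind": "text",              # or "image" / "video" for posted search results
    "body": "knitted adipose!",  # up to 140 characters for text posts
}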

Users are asked to choose a colour
The ‘red’ user’s annotation interface with image search
Search results for ‘knitted adipose’ before posting to screen

The Display Screen

The video of Dr Who is projected on a ‘main’ screen, alongside text, images and video clips sent by viewers in a fullscreen browser window. The images and videos sent by users have a coloured outline, and text-bubbles are coloured to indicate who posted them.

Dr Who layered with text, image and video annotations.

Images and videos run underneath the video in a ‘media bar’, while text bubbles posted by users drop onto the screen in random positions, but can be re-arranged on the screen or deleted by a ‘facilitator’.

Rationale

This ‘conversational scenario’ is a hybrid of various methods in which researchers have contrived situations to elicit data from participants. Before making any claims about the data gathered, some clarification of the purpose and methods of the scenario is necessary.

Ethnographic Studies of Social TV have tended to use audiovisual recordings of TV viewers in naturalistic settings as their primary source, and analytical methods such as Conversation Analysis and participant observation have been used to deepen their understanding of how people use existing TV devices and infrastructures in a social context.

HCI approaches to designing Social TV systems have built novel systems and undertaken user testing and competitive analysis of existing systems in order to better understand the relationship between people’s social behaviours around TV, and the heuristics of speculative Social TV{{1}} devices and services.

Semantic Web researchers have opportunistically found ways to ‘harvest’ and analyse communications activity from the Social Web, as well as new Social TV network services that track users’ TV viewing activity as a basis for content recommendations and social communication.

All of these approaches will be extremely useful in developing better conversational annotation systems, and improving understanding and design of Social TV for usability, and for making better recommendations.

Although the conversational scenario described borrows from each of these methods, its primary objective is to gather data from the mediated conversations people have around a TV in order to build a case for seeing and using it as metadata.

System design, usability, viewer behaviour, user profiles, choices of video material, and the effect those issues have on the quality and nature of the captured metadata are a secondary concern to this first step in ascertaining whether conversations can be captured and treated as metadata pertaining to the video in the first place.

[[1]]I am using the term Social TV, following one of the earliest papers to coin the phrase, by Oehlberg et al. (2006), to refer to Interactive TV systems that concentrate on the opportunities for viewer-to-viewer interaction afforded by the convergence of telecoms and broadcast infrastructures. Oehlberg, L., Ducheneaut, N., Thornton, J. D., Moore, R. J., & Nickell, E. (2006). Social TV: Designing for distributed, sociable television viewing. Proc. EuroITV (Vol. 2006, pp. 25–26). Retrieved from http://best.berkeley.edu/~lora/Publications/SocialTV_EuroITV06.pdf [[1]]


Conversational Annotation

Annotation of a conversation would usually be a post-hoc chore undertaken by someone charged with watching a documentary or ethnographic video and ‘making sense’ of the diffuse multifariousness of a conversation. Heckle‘s approach, that each visual/textual interjection might be used as an annotation, attempts to turn annotation into an augmentation of the experience of the conversation. Because it is concurrent and live, the participants who heckle may notice and incorporate all kinds of contextual markers outside the view of the video camera, as well as bring their own diverse interpretations and experiences into the heckled conversation.

Most crucially, the variety of ways that people use the Heckle system mirrors the diversity of people’s verbal and non-verbal contributions to the live conversation. The stream of images, video, text and links that result can be seen as a parallel conversation that ‘annotates’ the conversation around the Talkaoke table, but also interacts with it in real time: the representation of the conversation itself becomes conversational.

Research Strategy

There are so many questions to be asked about this approach: about the user interfaces, about whether Heckle really does ‘augment’ the experience or lead to further engagement, and about how it influences people’s interpretation and behaviour. However, with the time and resources available at this stage, the goals will have to be very limited and specific.

My research task at hand is to enquire about this ‘conversational metadata’: what is it? To what extent can it be considered ‘metadata’? What objects does this metadata relate to: the conversation around the Talkaoke table, or the people heckling? And to what extent does it correlate (or not) with other forms of annotation and representation of these objects?

Asking this question will involve re-purposing the Heckle system to create scenarios in which this correlation can be measured.

To be specific, rather than using Heckle to annotate a live conversation around the Talkaoke table, I will be using it to annotate a group of people watching Dr. Who Season 4 Episode 1, ‘Partners in Crime’.

This is an opportunistic choice of programme, suggested to me by Pat Healey because he happens to have supervised my MAT colleague Toby Harris on the BBC Stories project, in which Toby worked with Paul Rissen and Michael Jewell to annotate this episode of Dr. Who in an exemplary ‘top-down’ fashion, developing and then using the BBC Stories Ontology.

The plan, then, is to gather a group of people to sit and watch TV together, and to provide them with The People Speak’s Heckle system as a means of interacting with each other, layered on top of the Dr Who video. The resulting conversational metadata can then be compared to the detailed semantic annotations provided by the BBC Stories project.

Evaluation Strategy

There are a number of possible methods to use to make this comparison, although it will be hard to tell which to use before being able to look at the data.

It may be useful to simply look at mentions of characters, plot developments, and other elements in the BBC Stories ontology, and see whether they appear at equivalent moments in the BBC Stories annotation and the heckled conversation. A basic measurement of correlation could be gathered from that kind of comparison, and would indicate whether the two forms of metadata are describing the same thing.
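
As a rough sketch of what that correlation measurement might look like (hypothetical data shapes; the 30-second window is an arbitrary choice):

def cooccurrence(a, b, window=30.0):
    """Fraction of (seconds, entity) mentions in `a` that also occur in `b`
    within `window` seconds. Both arguments are lists of (seconds, entity) pairs."""
    if not a:
        return 0.0
    hits = sum(
        1 for (ta, ea) in a
        if any(eb == ea and abs(tb - ta) <= window for (tb, eb) in b)
    )
    return hits / len(a)

A result of, say, 0.4 would mean that 40% of the entity mentions in the heckled conversation line up with a mention of the same entity in the BBC Stories annotation.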

Similarly, it might be useful to demonstrate the differences between the conversational metadata and the BBC Stories version by looking for conversational annotations that relate to the specific context of the experience of watching the episode: the space, the food, the sofa. These would (of course) be absent from the BBC Stories annotation.

There is, however, another strategy, which Toby Harris and I concocted while playing with LUpedia, a semantic ‘enrichment’ service that takes free text, attempts to match it with semantic web resources such as DBPedia, and returns structured metadata.

If it were possible to feed LUpedia the BBC Stories ontology, and then feed it the episode of Dr Who in question as a dataset, it should be possible to submit people’s heckles to it and see if LUpedia returns relevant structured data.

If LUpedia can enrich people’s Heckles with metadata from the BBC Stories dataset, that should indicate that the heckles are pertinent to the same object (in this case, the episode of Dr Who), and might therefore be seen as conversational metadata for it{{1}}.

[[1]]My conversational metadata will probably also describe the interactional experience of watching the show, and other contextual references that will be absent from the BBC Stories annotation. However, it is important to show that the two types of metadata relate to at least one of the same objects. If this is not demonstrable, it does create some ‘fun’ philosophical problems for my research, such as what conversational metadata *does not* refer to. That one might be harder to answer.[[1]]



Heckle

Since 2007, our art collective The People Speak have been working on ways of trying to make the 13+ years of conversational oral history we have on archive public and searchable.

The conversations between people who meet around the Talkaoke table{{1}}, on street corners, at festivals, schools, or conferences have been recorded and archived on every format going, from Digi-Beta to Hi-8, RealMedia (oh God, the ’90s), and miniDV. For the last two years, we have finally moved to digital only, but the archival backlog is intimidating.

As challenging as the digitisation and archival issues are, the real problem is figuring out what people are talking about in this mountain of data. All the conversations facilitated by The People Speak are spontaneous, off the cuff, and open to people changing tack at any point. This has made it almost impossible to provide a thematically structured archive.

And this problem is not unique to this rather specialised context. Aren’t all conversations, question and answer sessions, and in fact pretty much anything that involves people interacting with each other on video, subject to the same contingencies of meaning?

If my early-stage training in Conversation Analysis has shown me anything, it’s that the apparent ‘content’ of a conversation is impossible to represent in any way other than through further conversations, and observations of how people work to repair their misunderstandings.

The Heckle System

The People Speak’s response to this problem has been the ‘Heckle’ system.

Using ‘Heckle’, an operator, or multiple participants in a conversation, may search for and post Google images, videos, web links, Wikipedia articles or 140 characters of text, which then appear overlaid on a projected live video of the conversation.

Here is a picture of Heckle in use at the National Theatre, after a performance of Greenland.

Heckle in action at the National Theatre

As you can see, the people sitting around the Talkaoke table aren’t focused on the screens on which the camera view is projected live. The aim of the Heckle system is not to compete with the live conversation as such – but to be a backchannel, throwing up images, text and contextual explanations on the screen that enable new participants to understand what’s going on and join in the conversation.

The Heckle system also has a ‘cloud’ mode, in which it displays a linear representation of the entire conversation so far, including snapshots from the video at the moment that a heckle was created, alongside images, keywords, ‘chapter headings’ and video.

Heckle stream from Talkaoke

This representation of the conversation is often used as part of a rhetorical device by the Talkaoke host to review the conversation so far for the benefit of people who have just sat down to talk. A ‘Heckle operator’ can temporarily bring it up on a projection or other nearby display and the host then verbally summarises what has happened so far.

It also often functions as a modifier for what is being said. Someone is talking about a subject, and another participant or viewer posts an image which may contradict or ridicule their statement; someone notices and laughs, everyone’s attention is drawn to the screen momentarily, then returns to the conversation with this new interjection in mind. Some people use the Heckle system because they are too shy to take the microphone and speak. It may illustrate and reinforce or undermine and satirize. Some ‘heckles’ are made in reply to another heckle, some in reply to something said aloud, and vice versa.

If keywords are mentioned in the chat, those keywords can be matched to a timecode in the video; in effect, the heckled conversation becomes an index for the video-recorded conversation: the conversation annotates the video{{2}}.
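
A minimal sketch of that indexing idea (hypothetical data shapes and field names, not the actual Heckle implementation), assuming each heckle is logged with an offset in seconds from the start of the recording:

from collections import defaultdict

# Hypothetical log of heckles: offset into the video (seconds) plus the text posted.
heckles = [
    {"offset": 142.0, "text": "knitted adipose!"},
    {"offset": 803.5, "text": "the adipose are marching"},
]

def build_index(heckles):
    """Map each word used in the chat to the timecodes where it was posted."""
    index = defaultdict(list)
    for heckle in heckles:
        for word in heckle["text"].lower().split():
            index[word.strip("!?,.")].append(heckle["offset"])
    return index

index = build_index(heckles)
print(index["adipose"])  # [142.0, 803.5]: candidate jump points into the video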

[[1]] Talkaoke, if you’ve never seen it before, is a pop-up talk-show invented by Mikey Weinkove of The People Speak in 1997. It involves a doughnut-shaped table, with a host sitting in the middle on a swivelly chair, passing the microphone around to anyone who comes and sits around the edge to talk. Check out the Talkaoke website if you’re curious.[[1]] [[2]] People don’t just post keywords. It’s quite important that they can post images and video too. The search terms they use to find these resources can also be recorded and used as keywords to annotate the video. A further possibility for annotation is that a corpus of pre-annotated images, such as those catalogued using the ESPgame could be used to annotate the video. This would then provide a second level of annotation: the annotations of the images used could be considered to be ‘nested’ annotations of the Talkaoke conversation. [[2]]



Conversational Metadata

In my last post about the Social TV research context I explained that I decided to focus on generating and evaluating conversational video metadata by eliciting mediated conversations through SocialTV.

Before drilling down into that choice to distill a set of research questions, however, there’s a more basic question about the context of research into SocialTV metadata: why gather SocialTV metadata at all?

A slide showing a graphic from one of the Notube.tv project’s presentations

The Notube project has a diagram on this issue, which shows the progression of a piece of TV content along a timeline from pre-broadcast to media archive. The assertion is that having more metadata about user preferences enhances the value of TV content because it provides many more opportunities to recommend programmes. Although the Notube project adds a great deal to existing research on recommendation systems, its central focus, like most existing research, is on developing more accurate and sophisticated user profiles (in this case by aggregating media consumption habits from heterogeneous sources on the web).

The way the diagram is re-used in my slide above emphasises the corollary of the point it is intended to illustrate: that having more TV programme metadata would also enhance the frequency and accuracy of recommendation for content (and thereby its value) throughout its lifecycle{{2}}.

If metadata (about profiles or programme content) can enhance the value of a TV show and facilitate production, discovery and delivery of TV programmes because it increases the likelihood of that programme being recommended, then it should follow that the greater the richness and referential diversity of that metadata, the more ‘recommendable’ the programme becomes.

This suggestion presupposes that the metadata in question is relevant, rather than a random spamming of references, intended to maximise recommendation in all possible contexts. So the question then becomes: what determines relevance in this context, and, even more importantly, relevance to what?

It may seem self-evident that the metadata about users should be relevant to their consumption habits, and that these habits, recorded and aggregated, should constitute their preferences or dispreferences. Even if this widespread assumption holds true, what should programme metadata relate to? Are broadcasters’ assertions about their programmes necessarily relevant to the way those programmes are discovered and interpreted? And crucially, in SocialTV, which almost by definition is about the interaction between viewers brokered via a networked TV infrastructure, how do those metadata correlate with the way TV programmes are used as a prop, touch-point, or stimulus to conversation between viewers{{1}}?

Current content discovery heuristics tend to rely on user profiles and broadcaster-provided metadata, without necessarily taking into account the context and quality of interactions between users.

The hypothesis of this research project is that metadata derived from conversations between viewers via SocialTV can provide a crucial additional component to support the interactional possibilities of SocialTV.

To test this, an experimental scenario will be developed to involve concurrent, co-present viewers of a TV programme in a public multimedia chat system, designed to elicit metadata from their conversation. The transcript of their interactions will then undergo Conversation Analysis. This analysis will provide a baseline for evaluating the extent to which conversations around a SocialTV experience can be correlated with a detailed and highly granular ‘top down’ semantic annotation of a TV programme.

An analysis of the data may also be used to test several related hypotheses:

  • that conversational metadata are likely to have more divergent subject matter and more external references than a priori programme data about actors, characters and plot developments
  • that conversational metadata are likely to be more responsive to ways in which the context of the conversation changes{{3}}.

Social TV Research Context – constrained to areas relevant to this project

So in terms of my earlier exploration of SocialTV as a research context, I can start to narrow my focus onto a few areas.

To achieve the study outlined above, I am proposing to deploy a multimedia discussion interface that will allow co-present concurrent viewers of a TV programme to interact and converse as freely as possible. Although it may have significant design issues, the state of the art in this context is definitely the ‘hot topic’ of Second Screen/Companion Devices. This is not the central point of the study, however, so I will be approaching this part of the project as a design process – building on prior art – which is abundant at the moment – and iterating out a customised version as quickly as possible to find something basic that works well enough for me to get the conversational data I’m looking for.

The other research objectives in the slide above are already in order of priority: eliciting and capturing discussion is the most pressing need for the system. Other functions and features are interesting, but probably out of scope for the time being. However, I might well post some ideas to this blog about how conversational metadata might underpin new approaches to searching, filtering, annotating and segmenting video.

[[1]] There’s still the thorny issue of what determines ‘relevant’ in this context. There may be perceptual studies comparing recommendation engines or other ways of trying to determine the relevance of metadata statistically. However, for the purpose of this study, the question is moot. Relevance, especially in terms of conversational metadata, is subjective unless based on evidence of how it is used to broker or support an interpersonal interaction via SocialTV.[[1]] [[2]] This is also core to Notube’s aims and methods: using highly granular metadata about programmes to create ‘serendipitous’ trails through content that can break through the sameness of personalised recommendation systems, which tend to channel users into relatively static, homogenous clusters of content. [[2]] [[3]]For example, a programme might be broadcast 50 times over 15 years. Conversational metadata associated with the programme might change significantly over time and in the different contexts in which it is watched, accumulating qualified layers of annotations, whereas broadcaster-provided metadata is likely to remain static.[[3]]
