Conversational Annotation
Annotation of a conversation is usually a post-hoc chore, undertaken by someone charged with watching a documentary or ethnographic video and ‘making sense’ of the diffuse multifariousness of a conversation. Heckle’s approach, in which each visual/textual interjection can serve as an annotation, attempts to turn annotation into an augmentation of the experience of the conversation. Because it is concurrent and live, the participants who heckle may notice and incorporate all kinds of contextual markers outside the view of the video camera, as well as bring their own diverse interpretations and experiences into the heckled conversation.
Most crucially, the variety of ways that people use the Heckle system mirrors the diversity of people’s verbal and non-verbal contributions to the live conversation. The stream of images, video, text and links that result can be seen as a parallel conversation that ‘annotates’ the conversation around the Talkaoke table, but also interacts with it in real time: the representation of the conversation itself becomes conversational.
Research Strategy
There are many questions to be asked about this approach: about the user interfaces; about how, and whether, Heckle really ‘augments’ the experience and leads to further engagement; and about how it influences people’s interpretation and behaviour. However, with the time and resources available at this stage, the goals will have to be very limited and specific.
My research task at hand is to enquire into this ‘conversational metadata’: what is it? To what extent can it be considered ‘metadata’? What objects does it relate to: the conversation around the Talkaoke table, or the people heckling? And to what extent does it correlate (or not) with other forms of annotation and representation of these objects?
Asking this question will involve re-purposing the Heckle system to create scenarios in which this correlation can be measured.
To be specific, rather than using Heckle to annotate a live conversation around the Talkaoke table, I will be using it to annotate a group of people watching Dr. Who Season 4 Episode 1, ‘Partners in Crime’.
This is an opportunistic choice of programme, suggested to me by Pat Healey, who supervised my MAT colleague Toby Harris on the BBC Stories project. Working with Paul Rissen and Michael Jewell, Toby annotated this episode of Dr. Who in an exemplary ‘top-down’ fashion, developing and then applying the BBC Stories ontology.
The plan, then, is to gather a group of people to sit and watch TV together, and to provide them with The People Speak’s Heckle system as a means of interacting with each other, layered on top of the Dr Who video. The resulting conversational metadata can then be compared to the detailed semantic annotations provided by the BBC Stories project.
Evaluation Strategy
There are a number of possible methods to use to make this comparison, although it will be hard to tell which to use before being able to look at the data.
It may be useful to simply look at mentions of characters, plot developments, and other elements in the BBC Stories ontology, and see whether they appear at equivalent moments in the BBC Stories annotation and the heckled conversation. A basic measurement of correlation could be gathered from that kind of comparison, and would indicate whether the two forms of metadata are describing the same thing.
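The comparison described above could be sketched as a simple time-binned overlap count. This is a minimal illustration, not the actual analysis: the sample data, field layout, and 30-second window size are all assumptions made for the sake of the example.

```python
# Hypothetical sketch: count how often an entity mentioned in the
# BBC Stories annotation is also mentioned in a heckle within the
# same time window. All data below is illustrative, not real.

from collections import defaultdict

WINDOW = 30  # seconds per comparison bin (an arbitrary choice)

# (seconds into the episode, entity mentioned) pairs
bbc_annotations = [(12, "Donna Noble"), (45, "The Doctor"), (70, "Adipose")]
heckles = [(15, "Donna Noble"), (48, "Adipose"), (72, "Adipose")]

def bin_mentions(stream):
    """Group each stream's entity mentions into WINDOW-sized time bins."""
    bins = defaultdict(set)
    for t, entity in stream:
        bins[t // WINDOW].add(entity)
    return bins

bbc_bins = bin_mentions(bbc_annotations)
heckle_bins = bin_mentions(heckles)

# Fraction of BBC-annotated mentions echoed by a heckle in the same window
matches = sum(len(bbc_bins[b] & heckle_bins.get(b, set())) for b in bbc_bins)
total = sum(len(entities) for entities in bbc_bins.values())
print(f"overlap: {matches}/{total}")  # → overlap: 2/3
```

A higher overlap fraction would suggest the two streams are describing the same object; the window size would need tuning, since heckles plausibly lag the on-screen events they respond to.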
Similarly, it might be useful to demonstrate the differences between the conversational metadata and the BBC Stories version by looking for conversational annotations that relate to the specific context of the experience of watching the episode: the space, the food, the sofa. These would (of course) be absent from the BBC Stories annotation.
However, there is another strategy, which Toby Harris and I concocted while playing with LUpedia: a semantic ‘enrichment’ service that takes free text, attempts to match it with semantic web resources such as DBpedia, and returns structured metadata.
If it were possible to feed LUpedia the BBC Stories ontology, and then feed it the episode of Dr Who in question as a dataset, it should be possible to submit people’s heckles to it and see whether LUpedia returns relevant structured data.
If LUpedia can enrich people’s Heckles with metadata from the BBC Stories dataset, that should indicate that the heckles are pertinent to the same object (in this case, the episode of Dr Who), and might therefore be seen as conversational metadata for it{{1}}.
[[1]]My conversational metadata will probably also describe the interactional experience of watching the show, and other contextual references that will be absent from the BBC Stories annotation. However, it is important to show that the two types of metadata relate to at least one of the same objects. If this is not demonstrable, it does create some ‘fun’ philosophical problems for my research, such as what conversational metadata *does not* refer to. That one might be harder to answer.[[1]]
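The enrichment check could take roughly this shape. The `enrich` function below is a stub standing in for the real LUpedia request (its actual API is not reproduced here), and the heckle texts and matched resources are invented for illustration.

```python
# Sketch of the proposed check: a heckle counts as pertinent to the
# episode if the enrichment service matches it to at least one
# structured resource. The enrich() stub and its lookup table are
# placeholders for the real LUpedia call.

def enrich(text):
    """Stub for a LUpedia enrichment request: returns any resources
    matched in the text. Replace with a real call to the service."""
    known = {
        "adipose": "dbpedia:Adipose_(Doctor_Who)",
        "donna": "dbpedia:Donna_Noble",
    }
    return [uri for key, uri in known.items() if key in text.lower()]

heckles = [
    "the Adipose are adorable!",
    "anyone want more pizza?",
    "Donna is back!",
]

pertinent = [h for h in heckles if enrich(h)]
print(f"{len(pertinent)} of {len(heckles)} heckles matched episode entities")
```

Heckles that return no match (like the pizza comment) would fall into the contextual, interactional category discussed in the footnote, rather than counting against the correlation.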