conversational technology

Re/drawing interactions: an EM/CA video tools development workshop

As part of the Drawing Interactions project (see report), Pat Healey, Toby Harris, Claude Heath, Sophie Skach, and I ran a workshop at New Directions in Ethnomethodology in London (March 2018) to teach interaction analysts how and why to draw.

Sophie Skach leading the life drawing workshop at New Directions in Ethnomethodology

Here’s the workshop abstract:

Ethnomethodological and conversation analytic (EM/CA) studies often use video software for transcription, analysis and presentation, but no such tools are designed specifically for EM/CA. There are, however, many software tools commonly used to support EM/CA research processes (Hepburn & Bolden, 2017, pp. 152-169; Heath, Hindmarsh & Luff, 2010, pp. 109-132), all of which adopt one of two major paradigms. On the one hand, horizontal scrolling timeline partition-editors such as ELAN (2017) facilitate the annotation of multiple ‘tiers’ of simultaneous activities. On the other hand, vertical ‘lists of turns’ editors such as CLAN (MacWhinney, 1992) facilitate a digital, media-synced version of Jefferson’s representations of turn-by-turn talk. However, these tools and paradigms were primarily designed to support forms of coding and computational analysis in interaction research that have been anathema to EM/CA approaches (Schegloff, 1993). Their assumptions about how video recordings are processed, analyzed and rendered as data may have significant but unexamined consequences for EM/CA research. This 2.5-hour workshop will reflect on the praxeology of video analysis by running a series of activities that involve sharing and discussing diverse EM/CA methods of working with video. Attendees are invited to bring a video they have worked up from ‘raw data’ to publication, which we will re-analyze live using methods drawn from the traditions of life drawing and still life. A small development team will build a series of paper and software prototypes over the course of the workshop week, aiming to put participants’ ideas and suggestions into practice. Overall, the workshop aims to inform the ongoing development of software tools designed reflexively to explore, support, and question the ways we use video and software tools in EM/CA research.

References

ELAN (Version 5.0.0-beta) [Computer software]. (2017, April 18). Nijmegen: Max Planck Institute for Psycholinguistics. Retrieved from https://tla.mpi.nl/tools/tla-tools/elan/

Heath, C., Hindmarsh, J., & Luff, P. (2010). Video in qualitative research: analysing social interaction in everyday life. London: Sage Publications.

Hepburn, A., & Bolden, G. B. (2017). Transcribing for social research. London: Sage.

MacWhinney, B. (1992). The CHILDES project: Tools for analyzing talk. Child Language Teaching and Therapy.

Schegloff, E. A. (1993). Reflections on Quantification in the Study of Conversation. Research on Language & Social Interaction, 26(1), 99–128.
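
To make the contrast between those two paradigms a little more concrete, here is a rough sketch, in Python rather than in any of the tools above, of how the same short stretch of interaction might be represented in each: as a set of simultaneous tiers, or as a single media-synced list of turns. The class and field names are illustrative, not drawn from ELAN or CLAN.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Interval:
    start_ms: int   # onset, relative to the media file
    end_ms: int     # offset, relative to the media file
    value: str      # the annotation text


@dataclass
class Tier:
    # ELAN-style: one tier per simultaneous activity (talk, gaze, gesture, ...)
    name: str
    intervals: List[Interval] = field(default_factory=list)


@dataclass
class Turn:
    # CLAN/Jefferson-style: one entry per turn at talk, in sequence
    speaker: str
    transcript: str      # Jeffersonian rendering of the turn
    media_start_ms: int  # sync point into the recording
    media_end_ms: int


# Tiered view: simultaneous streams read 'across' the timeline.
tiers = [
    Tier("talk",    [Interval(0, 1800, "so what do we look at next (0.4)")]),
    Tier("gaze",    [Interval(200, 900, "A gazes toward B")]),
    Tier("gesture", [Interval(400, 1500, "B points at the screen")]),
]

# Turn-list view: one 'vertical' list of turns, each synced to the media.
turns = [
    Turn("A", "so what do we look at next (0.4)", 0, 1800),
    Turn("B", "u:hm (.) let's see the drawing", 1800, 3600),
]
```

The tiered form makes it easy to line up concurrent activities; the turn-list form stays close to a Jeffersonian transcript while keeping each turn synced to the recording.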


Heckle

Since 2007, our art collective The People Speak have been working on ways to make the 13+ years of conversational oral history in our archive public and searchable.

The conversations between people who meet around the Talkaoke table{{1}}, on street corners, at festivals, schools, or conferences have been recorded and archived on every format going, from Digi-Beta to Hi-8, RealMedia (oh God, the ’90s), and MiniDV. We finally moved to digital-only recording two years ago, but the archival backlog is intimidating.

As challenging as the digitisation and archival issues are, the real problem is figuring out what people are talking about in this mountain of data. All the conversations facilitated by The People Speak are spontaneous, off the cuff, and open to people changing tack at any point. This has made it almost impossible to provide a thematically structured archive.

And this problem is not unique to this rather specialised context. Aren’t all conversations, question-and-answer sessions, and, in fact, pretty much anything that involves people interacting with each other on video subject to the same contingencies of meaning?

If my early-stage training in Conversation Analysis has shown me anything, it’s that the apparent ‘content’ of a conversation is impossible to represent in any way other than through further conversations, and through observations of how people work to repair their misunderstandings.

The Heckle System

The People Speak’s response to this problem has been the ‘Heckle’ system.

Using ‘Heckle’, an operator or multiple participants in a conversation can search for and post Google images, videos, web links, Wikipedia articles, or up to 140 characters of text, which then appear overlaid on a live projected video of the conversation.
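
A single ‘heckle’, in other words, can be thought of as a small timestamped record. The sketch below is just my illustration of that idea in Python, not the actual Heckle codebase; the field names, the type list, and the 140-character check are all assumptions.

```python
import time
from dataclasses import dataclass, field

# Illustrative only: these names and rules are assumptions, not Heckle's real schema.
HECKLE_TYPES = {"image", "video", "link", "wikipedia", "text"}
MAX_TEXT_LEN = 140


@dataclass
class Heckle:
    author: str                 # operator or participant handle
    kind: str                   # one of HECKLE_TYPES
    payload: str                # a URL, or the text body for a text heckle
    search_terms: str = ""      # the query used to find the resource, if any
    created_at: float = field(default_factory=time.time)

    def __post_init__(self):
        if self.kind not in HECKLE_TYPES:
            raise ValueError(f"unknown heckle type: {self.kind}")
        if self.kind == "text" and len(self.payload) > MAX_TEXT_LEN:
            raise ValueError("text heckles are capped at 140 characters")


# Posting one of these would then overlay it on the projected live video.
example = Heckle(author="audience-3", kind="text", payload="citation needed!")
```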

Here is a picture of Heckle in use at the National Theatre, after a performance of Greenland.

Heckle in action at the National Theatre

As you can see, the people sitting around the Talkaoke table aren’t focused on the screens onto which the camera view is projected live. The aim of the Heckle system is not to compete with the live conversation as such, but to act as a backchannel, throwing up images, text, and contextual explanations on the screen so that new participants can understand what’s going on and join in the conversation.

The Heckle system also has a ‘cloud’ mode, in which it displays a linear representation of the entire conversation so far, including snapshots from the video at the moment that a heckle was created, alongside images, keywords, ‘chapter headings’ and video.

Heckle stream from Talkaoke
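
Building a stream like the one above is mostly a matter of pairing each heckle with a frame grabbed from the recording at the moment it was posted. Here is a minimal sketch of that pairing step, assuming OpenCV is available; the file names and timecodes are made up, and this isn’t the production Heckle code.

```python
import cv2  # assumption: OpenCV ("opencv-python") is installed


def grab_snapshot(video_path: str, timecode_s: float, out_path: str) -> bool:
    """Save the frame nearest to timecode_s (seconds into the recording) as an image."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_MSEC, timecode_s * 1000)  # seek to the heckle's timecode
    ok, frame = cap.read()
    cap.release()
    if ok:
        cv2.imwrite(out_path, frame)
    return ok


# One stream entry per heckle: the snapshot plus whatever was posted at that moment.
heckles = [(312.4, "greenland ice sheet"), (1881.0, "sea level map")]
stream = []
for i, (timecode, text) in enumerate(heckles):
    snapshot = f"snap_{i:03d}.jpg"
    if grab_snapshot("talkaoke_session.mp4", timecode, snapshot):
        stream.append({"timecode": timecode, "snapshot": snapshot, "heckle": text})
```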

This stream representation is often used by the Talkaoke host as a rhetorical device to review the conversation so far for the benefit of people who have just sat down to talk. A ‘Heckle operator’ can temporarily bring it up on a projection or another nearby display, and the host then verbally summarises what has happened so far.

It also often functions as a modifier for what is being said. Someone is talking about a subject, and another participant or viewer posts an image which may contradict or ridicule their statement; someone notices and laughs, everyone’s attention is drawn to the screen momentarily, and then returns to the conversation with this new interjection in mind. Some people use the Heckle system because they are too shy to take the microphone and speak. A heckle may illustrate and reinforce, or undermine and satirise. Some ‘heckles’ are made in reply to another heckle, some in reply to something said aloud, and vice versa.

If keywords are mentioned in the chat, those keywords can be matched to a timecode in the video; in effect, the heckled conversation becomes an index for the video-recorded conversation: the conversation annotates the video{{2}}.
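
Here is a rough, self-contained sketch of that indexing idea, with made-up timecodes and words (again, not the actual implementation): each heckle already carries a timecode, so its words can point straight back into the recording.

```python
from collections import defaultdict
from typing import Dict, Iterable, List, Tuple

# (seconds from the start of the recording, heckle text or search terms)
HeckleEntry = Tuple[float, str]


def build_keyword_index(heckles: Iterable[HeckleEntry]) -> Dict[str, List[float]]:
    """Map each word used in the heckle chat to the timecodes where it appeared."""
    index: Dict[str, List[float]] = defaultdict(list)
    for timecode, text in heckles:
        for word in text.lower().split():
            index[word].append(timecode)
    return dict(index)


heckles = [
    (312.4, "greenland ice sheet"),
    (1881.0, "greenland sea level map"),
    (2044.7, "citation needed!"),
]
index = build_keyword_index(heckles)
print(index["greenland"])  # -> [312.4, 1881.0]
```

Searching the index then jumps straight to the points in the recording where a topic came up.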

[[1]] Talkaoke, if you’ve never seen it before, is a pop-up talk show invented by Mikey Weinkove of The People Speak in 1997. It involves a doughnut-shaped table, with a host sitting in the middle on a swivelly chair, passing the microphone around to anyone who comes and sits around the edge to talk. Check out the Talkaoke website if you’re curious. [[1]]

[[2]] People don’t just post keywords. It’s quite important that they can post images and video too. The search terms they use to find these resources can also be recorded and used as keywords to annotate the video. A further possibility for annotation is that a corpus of pre-annotated images, such as those catalogued using the ESP Game, could be used to annotate the video. This would then provide a second level of annotation: the annotations of the images used could be considered ‘nested’ annotations of the Talkaoke conversation. [[2]]

 
