Pandoc + Markdown for Conversation Analytic Transcripts


A while back I wrote a blog post detailing why I chose Pandoc and Markdown to write papers including Jeffersonian Conversation Analytic transcripts. It wasn’t very detailed though, because a full explanation of how to set up a compatible text-based writing workflow was an onerous task – one happily now completed beautifully by Dennis Tenen and Grant Wythoff’s guide to Sustainable Authorship in Plain Text using Pandoc and Markdown.

So I decided to update this how-to for anyone who wants to use Pandoc and Markdown to start including CA-style transcriptions quickly and easily.

To go along with this how-to, there is also a set of demo files you can download to try out this approach. However, before you do that you probably want to get a pandoc + markdown setup installed.

The Problem

There are great software tools out there for CA-style transcription; my favourite is CLAN, for a number of reasons. However, I can’t find any resources online about how to publish CA-style transcriptions without being forced through some eye-bleeding LaTeX diddling every time.

Of course I could just use a WYSIWYG text editor like LibreOffice – but now that I’ve experienced the power of LaTeX for document preparation and publication, I really can’t see myself going back.

When doing CA it seems particularly important to have transcriptions legible in the body of the paper and visible during the writing process, because many analytical observations emerge, or get significantly modified, at the point of writing about them: double- and triple-checking assumptions, and cross-referencing with the CA literature while tweaking citations.

The Simplest Solution: Markdown + Pandoc

Markdown is my favourite lightweight markup language: a highly readable format with which you can write a visually pleasing text file, which you can then convert into almost any other format – HTML, OpenOffice, LaTeX, RTF, etc. – using Pandoc. There are many similar systems you could use to write your text file, notably reStructuredText and Textile, and other conversion tools/toolsets, but in my experience Markdown and Pandoc are the most useful combination in an academic context.¹

There are lots of great things about markdown:

  • Just edit simple text files – no weird file formats to get corrupted or mangled.
  • Less verbose and complicated-looking than LaTeX.
  • Small files are easy to share/collaborate on with others (everyone gets to use their favourite editor).
  • There are some great pandoc plugins for my favourite text editor vim.

However, the best thing is that, used along with the XeTeX typesetting engine, it solves the problem of CA transcriptions being unreadable in LaTeX/pdflatex source.

For example, in my first CA-laced paper, my transcriptions looked like this in my LaTeX source:

\begin{table*}[!ht]
\hfill{}
\texttt{
  \begin{tabular}{@{}p{2mm}p{2mm}p{150mm}@{}}
 & D: &  0:h (I k-)= \\
 & A: &  =Dz  that  make any sense  to  you?  \\
 & C: &  Mn mh. I don' even know who she is.  \\
 & A: &  She's that's, the Sister Kerrida, \hspace{.3mm} who, \\
 & D: &  \hspace{76mm}\raisebox{0pt}[0pt][0pt]{ \raisebox{2.5mm}{[}}'hhh  \\
 & D: &  Oh \underline{that's} the one you to:ld me you bou:ght.= \\
 & C: &  \hspace{2mm}\raisebox{0pt}[0pt][0pt]{ \raisebox{2.5mm}{[}} Oh-- \hspace{42mm}\raisebox{0pt}[0pt][0pt]{             \raisebox{2mm}{\lceil}} \\
 & A: &  \hspace{60.2mm}\raisebox{0pt}[0pt][0pt]{ \raisebox{3.1mm}{\lfloor}}\underline{Ye:h} \\
  \end{tabular}
\hfill{}
}
\caption{ Evaluation of a new artwork from (JS:I. -1) \cite[p.78]{Pomerantz1984} .}
\label{ohprefix}
\end{table*}

which renders this:

A simpler way to do this in Markdown (with none of the fancy stuff) is to use Markdown’s ‘verbatim’ environment – you do this by putting four spaces or one tab before each line in your transcript (including blank lines). Here’s the messy LaTeX above re-done in simple Markdown.

(3)

STE:        U̲o̲:̲h̲ oh ugly things [he paints.] 
KAT:                            [Really?] 
        (3.0) 
STE:        (°I think s[o-])°
KAT:                   [So you wouldn't sell any?] 
STE:        U̲u̲h̲ n[o] 
KAT:              [No?] 
        (1.7)

which renders like this:

ugly_things.png

Overall, I think the Markdown version represents a significant improvement in legibility while writing. I think it might be possible to do the same in LaTeX using the {verbatim} environment, but the fact that Markdown also lets me concentrate on writing without throwing errors or refusing to compile lets me spend longer on the writing than on endless text-fiddling procrastination.

When it comes to rendering, I feed my markdown file to pandoc:

$ pandoc --latex-engine xelatex --bibliography library.bib --csl default.csl -N -o  paper_title.pdf paper_title.markdown

If you want to use the nicely stretched ceiling characters for overlap marking, or the raised full stop / bullet operator for inbreaths, you can do so, but you’ll need to run Pandoc (see below) referencing a font that has those characters. For example, you could use CAfont and add:

--variable monofont=CAfont

to the pandoc command above.

The default.csl file is a citation style language file to customise how bibliographical references are rendered.
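With Pandoc’s Markdown citation syntax, writing something like [@Pomerantz1984, p. 78] in the Markdown source (the same BibTeX key used in the LaTeX example above) produces an in-text citation formatted according to the .csl file, with the full reference appended to the end of the document – assuming, of course, that the key exists in your library.bib.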

If you’re only adding a few examples to your document, this will probably work fine. If you are writing a thesis or a longer document – read on.

For Longer Texts: Markdown + Pandoc + LaTeX

The above approach may work for a short paper with one or two examples, but for a thesis or a longer piece where you may have many examples, you’re going to have to take this a step further and use some LaTeX within your Markdown document. The bad news: you will have to use LaTeX, templates and some code to deal with:

  1. Example Layout: you probably want your examples to be graphically separated from your text in a consistent way.
  2. Document layout: you may need to make some stylistic tweaks to how your document prints out.
  3. Referencing: you will want to use labels for your examples so you can cross-reference them automatically within the text and not have to re-label them every time you make a change.
  4. Audio/video links: you may want to include links to audio/video examples in your files.

The good news: your CA transcript examples will still be easy to read/edit, and actually this is all pretty straightforward once you’ve got it set up.

What you will need

First, you need a working Pandoc + Markdown setup installed. You also need a nice monospaced font installed – I use CAfont by the amazing CHILDES project.

I’ve made a downloadable archive of the three files I use every time I create a new document. Download those. There is also a working demo (README.md) and some image files that you can use to edit/test things, or modify them to create your own.

Along with these examples, inside the camarkdown_files folder you will find:

  • template.txt: a LaTeX template that Pandoc uses when it renders PDFs – with macros etc.
  • apa.csl: a citation style language file describing how I want my APA citations rendered.
  • margins.sty: a little margins file I can use to tweak the overall page layout separately (US Letter vs. A4 etc.)

Whenever you start a new document, copy these three files into the same folder as your Markdown file.
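If it helps, here is a minimal command-line sketch of that step – the folder names are just placeholders for wherever you keep your files:

    # copy the three support files into a new paper's folder (paths are hypothetical)
    mkdir -p ~/writing/my-new-paper
    cp camarkdown_files/template.txt camarkdown_files/apa.csl camarkdown_files/margins.sty ~/writing/my-new-paper/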

A little explanation

Without getting too geeky about it, here’s a little explanation of how I use this setup:

Whenever I convert my Markdown to PDF using Pandoc, I add:

--template template.txt

to the pandoc command to make sure it uses this template. The template is based on the default LaTeX template Pandoc always uses to convert Markdown to PDF via LaTeX, but I’ve added a macro: caextract.

Basically, the caextract environment sets the default monospaced font and (optionally) creates a link to an online media file referenced in the Markdown file (see the working example below). It also formats the paragraph containing the example as a framed float to divide it from the body of the text, and changes the listings name to ‘Extract’, so references list it as ‘Extract 1’ rather than ‘Figure 1’.

Here are the relevant bits from the header section of template.txt:

    \newcommand{\medialink}[2]{
      \begin{flushright} \href{#1}{#2}\\ \end{flushright}
    }
    $if(highlighting-macros)$
    $highlighting-macros$
    $endif$
    $if(verbatim-in-note)$
    \usepackage{fancyvrb}
    $endif$
    \usepackage{listings}
    \lstnewenvironment{extract}[1][]{
        \renewcommand*{\lstlistingname}{Extract}
        \lstset{frame=single,basicstyle=\small\ttfamily,keepspaces=true,#1}
    }{}

And this bit goes into the main section of the template:

    \usepackage{float}
    \floatstyle{ruled}
    \newfloat{caextract}{htp}{lop}
    \floatname{caextract}{Extract}

A working example

Here is a full example from a paper I’m writing at the moment that you can tweak and play with. It’s all done in simple markdown, using a little bit of LaTeX embedded within the Markdown file to call the macro.

So where I want my extract to appear in my Markdown file, I add:

![Different stopping postures between dancers \label{stopping-postures}](images/stopping-postures.png)

\begin{caextract}[H]
\caption{See https://www.dropbox.com/s/jnpf5pnxcy4dg8m/lexical-features.mov}
\label{lexical-features}
\begin{small}
\begin{verbatim}

1  JIM:   ∙hhh ⌈opps sorry Hh hyeh °hyour head°, ∙HHh Hmhmhmhmhm hehheh
2  TEA:        ⌊YE::AH! KAY >>LET's TRY it AGAIN< FIve, (.) s⌈ix? (.)
3  TEA:   ↓⌈five six se::v⌉en eight? Rock st⌈ep. (.) tri:ple, (.) tri:ple.  ⌉
4  JIM:    ⌊°five six shh°⌋                 ⌊°ep (.) tri:ple, (.) tri:ple.°]⌋
5  TEA:   G O :̲ ̲:̲ ̲O̲ :⌈ : d! L̲o̲v̲e̲l̲y̲⌈̲::.    (.)    ⌉ OKA::Y!
6  JIM:              ⌊O:hhkay:̲:̲? °Hm ↑hmhmhmhmhm°⌋
7          (1.3)
8  TEA:   LETS ROTATE PA:RTNERS!

\end{verbatim}
\medialink{https://www.dropbox.com/s/e960eu94ji7ncn3/lexical-features.mov}{Watch}

\end{small}
\end{caextract}

That should render something like this:

A later paragraph refers to the figure like so:

By contrast, Sara, Paul and Anne - marked in red in figure 
\ref{stopping-postures} - step back, split their weight and 
stop dancing together with the onset of Teacher's 
"\verb|G O :̲ ̲:̲ ̲O̲ : : d!|". Without having space to analyse 
this method, it is worth noting in closing that the regularity 
of these methods and their interactional contingencies are 
shown in the [slow-motion sections of the video](https://www.dropbox.com/s/jnpf5pnxcy4dg8m/lexical-features.mov) 
by how dancers who stop like Jim are all pulled off balance 
by dancers who stop like Paul, Sara and Anne.

It should look something like this:

A few notes on how this works:

  • The main reason for the macro is to enable cross-referencing. In the Markdown file, within each caextract I use \label{my-label} to label my examples. Then I can reference them anywhere in my Markdown file with something like “See extract \ref{my-label}”.
  • If you don’t have any media, just leave out the \medialink line.
  • You can put anything in the \caption section – your example name if you have a set naming schema for your corpus.
  • Note the neat Markdown trick in the paragraph above: I use “\verb|This comes out verbatim|” for a short inline bit of monospaced text.

Rendering your CA extracts using Pandoc

Finally, make sure you have your csl file (apa.csl), your images, your template.txt file and your margins.sty file all in the same folder as your document (I find that convenient), and that you have a nice monospaced font installed (CAfont is great), then run something like this:

pandoc --latex-engine xelatex --csl apa.csl --variable monofont=CAfont --variable mainfont=Arial --variable fontsize=12pt -H margins.sty --template template.txt --bibliography /path/to/library.bib -o README.pdf README.md

You can, of course, run this command from the terminal – swapping out the relevant variables as needed, but I use vim-pandoc’s PandocRegisterExecutor function to run this whenever I type the local leader character twice (,,) followed by pdf. See https://github.com/vim-pandoc/vim-pandoc for documentation of that kind of thing.
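If you would rather not retype the whole command every time (and don’t use vim), another option is to drop it into a tiny shell script in the same folder. Here’s a minimal sketch – the file name build.sh is purely hypothetical:

    #!/bin/sh
    # build.sh – render README.md to README.pdf using the template, csl and margins files described above
    pandoc --latex-engine xelatex --csl apa.csl \
      --variable monofont=CAfont --variable mainfont=Arial --variable fontsize=12pt \
      -H margins.sty --template template.txt \
      --bibliography /path/to/library.bib \
      -o README.pdf README.md

Run it with “sh build.sh” from the document’s folder whenever you want to re-render the PDF.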

I’m happy to answer any questions here or on @saul.

Notes:

  1. Not all of these systems support bibliographical references with BibTeX – Markdown + Pandoc handles this quite elegantly.


EMCAwiki.net

A wiki and collaborative bibliography on Ethnomethodology and Conversation Analysis.

EMCA wiki screenshot

The EMCA wiki is an information resource and comprehensive bibliography built by and for the Ethnomethodology and Conversation Analysis research community.

In October 2014 Paul ten Have stepped down as the sole maintainer of the vast EMCA news bibliography and website he had been updating weekly since 1996. The EMCAwiki continues that legacy, aiming to develop it into an ongoing resource.

The site is run by a group of volunteer admins and contributors, and will be supported and managed by the International Society of Conversation Analysis from September 2015.

Popular tags in the EMCA bibliography


Schegloff Sequencing Labels Cheat Sheet

This cheat sheet (PDF version) provides all the symbols you will encounter in Schegloff (2007): a useful reminder while doing an initial sequential analysis of your data. Use with caution, and remember to re-read the last chapter, as well as Schegloff (2005) beforehand. Usages are referenced with example and page numbers.

Adjacency pair labels

  • F / FPP : First Pair Part
  • S / SPP : Second Pair Part (2.01 p. 17)

Sequence management markers:

  • 1 / 2 / 3 : subscript numbering for multi-sequence analyses e.g.: Fb1, Fb2 (5.30, p.75)
  • + : more of a FPP or SPP i.e.: +F / +S (used in combination with other labels) (7.05, p. 121)1
  • b : base pair i.e. Fb or Sb
  • pre : pre-sequence marker
    • e.g. Fpre or Spre of a pre-expansion sequence (5.32, p. 77, see note 5 p. 27)
    • can take b and / or numbering for multi-sequence analyses.
  • ins or i : insert expansion FPPins or SPPins (can take b / numbering). (6.08, p.103 / 6.01, p.105)
  • insins : nested insert expansions (can be further nested e.g.: insinsins ) (6.17, p.110)
  • post : post-expansion (p. 27 note 5)

Position-specific markers:

  • pre-S : a preliminary (e.g. anticipatory account) coming between F and S. (p. 69 ex. 5.19)
  • preSb : a preliminary to a base sequence (p. 84 ex. 5.38)
  • SCT : sequence closing third (can be used with numbering, + and design feature labels) (7.03, p.119)
  • PCM : post-completion musing (7.32, p. 143)

Action labels

Obviously actions can be described in many ways, but Schegloff (2007) only uses these ones:

  • off : offer (could also be req for requests (10.14, pp 213-214), ass for assessments etc. etc.)2
  • acc : accept prior action (5.39, p. 85)
  • rej : reject prior action (5.39, p. 85)
  • prerej : a pre-rejection (could be used for any action) (5.39, p. 85)
  • req1 / off2 / acc2 / acc1 : numbered actions for multi/nested-sequence analyses. (5.38, p. 85)
  • retr : disavowal or retraction of prior action (9.03b, p. 185)
  • alt : alternative version of prior action (7.50c, pp. 166-167)
  • again : reissuing a prior action (7.50b, pp. 165-166)
  • redo : reworking/redoing of a prior action (7.49, pp. 163-164)
  • add : addition to prior action (7.49, pp. 163-164)

Design feature labels

  • up : upgrade (7.43, p. 157)
  • hedge or hdg : hedge (7.50b, pp. 165-166)
  • agree : agreement with preference (5.32, p. 77)
  • rev : reversal of preference / type conformity (5.32, p. 77)
  • cnt : counter (2.01 p.17)

References

  • Schegloff, E. A. (2005). On integrity in inquiry… of the investigated, not the investigator. Discourse Studies, 7(4-5), 455–480.
  • Schegloff, E. A. (2007). Sequence organization in interaction: Volume 1: A primer in conversation analysis. Cambridge: Cambridge University Press.

  1. This is rather ambiguously described in passing as: “‘preferred’ or ‘+ [plus]’ second pair parts” (Schegloff 2007, p. 120). However, these are not equivalents but alternatives. Confusingly, the literature does sometimes use the + sign to indicate preference in analytic transcripts. Schegloff (2007) uses it to indicate ‘more’ of an FPP or SPP.
  2. NB: When using action labels with a b marker, separate them with a comma for clarity e.g.: Fb, req (10.14, pp. 213-214).


Traffic Island Disks

A radio show about music, people and spaces.


We walked the streets looking for people listening to headphones then talked to them about the music, their day, and the location while recording whatever they happened to be listening to. The result was a show that offered a glimpse into people’s soundtracks to the city, and how they used them to move through their day.

Host Mikey and postman Kwame in Brixton

Traffic Island Disks started out with a regular slot on London’s Resonance FM station in 2003, then grew into an internationally touring live Internet radio show. In 2004 we designed a mobile system with custom software for live mixing, editing and streaming, and teamed up with local presenters to stream the show via wifi connection from the streets of cities starting in Newcastle then touring to Bratislava, Vienna, Sofia, Aarhus and San Jose.


Heckle

Enables large groups of passers-by to engage in ongoing spontaneous conversations.

Heckle used in the foyer of the National Theatre

Heckle augments conversations by letting people catch up with what is being talked about visually – providing a user-contributed flow of relevant texts, tweets, images, videos and websites overlaid on a live video feed of the event.

Developed with The People Speak to augment their public conversational performances, Heckle lets participants use a web app to contribute to a live visual summary of the conversation so far, so that new people can get involved at any time in the course of a discussion.

Heckle also provides a post-event visual timeline of the conversation, so over the last 7 years, The People Speak have been building a searchable, participant-annotated video archive of spontaneous conversations.

A post-event visual summary
Schematic of a Heckle set-up


Heckling at Ontologies

Heckling at Ontologies explored the relationship between TV-viewers’ text chat and TV broadcaster media metadata.

Heckling at Ontologies Research Poster

Heckling at Ontologies is a collaboration with fellow MAT student Toby Harris exploring the dissonance between top-down and bottom-up approaches to media metadata generation. The project started with two datasets:

1. A semantic description of Season 4, Episode 1 of Doctor Who

This was one of the outcomes of Toby’s industrial placement at the BBC, which developed an ontology for rich description of BBC programme content and then used it to create a highly detailed summary of a specific episode of Doctor Who.

2. User-generated tweets ‘heckled’ at the screen by viewers

As part of an industrial placement at BT, I worked with The People Speak to develop a new version of Heckle, a web service that gathers tweets, images, videos and texts ‘heckled’ to a shared screen by viewers of a TV show, building up a visual summary of the mediated conversation. These two data sets were analysed to discover the dissonances and regularities between Toby’s top-down and my bottom-up descriptions of the same TV show.

The two data sets were then superimposed on a live stream of the video. You can download the source of the demo and grab copies of semantic data on the ontoheckle github page.


Conversational Annotation

Conversational Annotation explored the interactional dynamics and potential of ‘Social TV’ audiences. 

Conversational Annotation research poster

As part of an industrial placement with BT I worked with The People Speak to develop Heckle, a twitter-like web-service that captures the comments, asides, and discussion generated by an audience to annotate video content.

Analysis of these mediated conversations revealed the degree to which people’s comments related to the TV content (characters, plot developments and production entities) and to what extent they related to the organisation of activity in the local context (crisps, drinks, sofa arrangements etc.), and looked for differences in frequency, peaks and other pragmatic detail in the data indicating to what extent these comments could function as media metadata.

A closer qualitative approach assessed the amenability of this data to conversation analysis, looking for evidence of turn-taking sequences and various forms of conversational repair. ‘Conversationality’ in these empirical terms interrogated the technology and situations of ‘Social TV’, providing a critical view of the over-applied term ‘social media’ often used to refer to these forms of mediated communication.

There’s a working copy of the MSc thesis based on this research available here.


Links & Credits

  • M.Sc. project co-supervised by Pat Healey at Queen Mary University of London and Andy Gower at BT.
  • Thanks to all the participants in the Dr. Who screenings, Richard Kelly, Toby Harris, and The People Speak.
  • This research was funded by the Digital Economy programme through the Media and Arts Technology Doctoral Training Centre at Queen Mary, University of London


The Dicshunary

The Dicshunary collected and shared vocabularies used by one person, family, or other micro-language group.

Screenshot from the Dicshunary

Users added their neologisms, definitions and redefinitions, and could create sub-lexicons to host on their own websites. They could even download the code and run their own version.

The Dicshunary installed on trash technology
Installed as a kiosk on recycled computers reclaimed from local skips, dumps and store cupboards, the Dicshunary would capture local place names and micro-dialects from areas near the galleries, schools and public spaces in which it was exhibited.

 


ICCA 2014 and the Future(ology) of CA

Still from Ari Folman’s film “The Congress.”

Summary:

  • A brief account of my experience of ICCA 2014 (the 4th International Conference on Conversation Analysis).
  • Tips gleaned about how to present interactional data analysis in 20 minutes.
  • What I learned about terminology in the analysis and presentation of CA research.
  • A little reflection on what ICCA 2014 meant for the origins and future of CA.

Introduction

In The Futurological Congress (1971) by Polish science fiction author Stanislaw Lem, the protagonist attends a meeting of 70,000 researchers in the increasingly popular discipline of Futurology. There are so many Futurologists these days that they can’t possibly all give their papers in full, so they are assigned index numbers; when it’s their turn to present, they stand up and say the number. Of course there also isn’t time for questions, so all questions have to be submitted in advance and delivered by the questioners standing up and saying the number. Then the speaker may respond with the index number of their response, and so on.

ICCA 2014 at UCLA was not quite at this stage yet, but it was still an awesome experience to see 500 Conversation Analysts and Ethnomethodologists from around the world gathered to present papers to one another in one of nine concurrent sessions, each paper containing a series of line-numbered transcripts of spates of interaction each of which – in themselves – could have been the subject of an entire day’s workshop.

Having watched something like 50 presentations in 4 days, with a vast range of styles and approaches, I want to use this post to collate tips and ideas for presenting this kind of research, as well as to give those who weren’t able to be there a brief impression of what ICCA 2014 was like to attend.

Overall Impressions of ICCA 2014

Conference opening speech by John Heritage

Firstly, a brief impression, which I hope will dispel any negative implication in my referencing Lem’s sci-fi satire in the introduction. This was an extraordinarily well organised and enjoyable event. The venue, the programme, the facilities, and all the important basics were so well organised, they appeared seamless. Sessions progressed on time and with a cooperative and collegial atmosphere that everyone – especially the heroic session chairs, graduate helpers and local organisers – worked so hard to maintain. Despite its size and the diversity of approaches to the study of human interaction, my overall sense was that this conference served an unusually cohesive research community with a strong set of methodological and philosophical alignments. This became most evident to me when I realized – after fretting over having to hop between the 9 concurrent sessions to catch everything I wanted to see – that I could just as well stay put in one session, or walk into almost any other one and still find something interesting and comprehensible to me going on in every room. This is a real contrast with other conferences I’ve attended, especially in a Computer Science context where the widespread intra-field specialization means that walking into the wrong session might result in having to sit and listen to an hour of highly technical niche gibberish.

The panels I enjoyed most used this double aspect of EM/CA’s ethnographic subject-diversity and its methodological coherence to great effect. For example, Arnulf Deppermann’s excellent sessions on ‘Disjunct and convergent temporalities and the coordination of action’ brought together Jürgen Streeck’s work on the postural configurations of car mechanics at work, Dirk vom Lehn’s work on gallery and museum visiting, as well as Deppermann’s own research into driving instruction, and Sae Oshima’s analysis of the interactional dynamics of stylists and clients as they come to the evaluation-relevant endpoint of a haircut. This was one of the most powerful examples to me of how very different interactional contexts and activities, looked at together with different analytic foci, can cohere by allowing a sense of the stable underlying structure of natural human interaction and conversation to emerge from the mix.

Two approaches to presenting (and doing) interaction analysis

Having mentioned ‘methodological coherence’, I should also point out that there were very different approaches and methods too, and they were also presented quite differently. The most obvious differences in overall approach were somewhat similar to the dimensions of distinction in CA drawn by Emanuel Schegloff (1996) in his paper on person reference: between single-case, interaction-oriented analyses on the one hand, and aggregate, system-oriented analyses on the other. While these distinctions were not always entirely clear in the 20 minutes most people had to present, it became clear to me that the best presentations were the ones that chose a style to match this aspect of their analytic approach.

Here are some tips based on the best of each type of presentation that I saw. Some are common knowledge, and others are gleaned from my reading of the King’s College London group’s excellent book on video analysis (Heath, Hindmarsh & Luff 2010).

Tips for presenting single case / interaction-oriented analyses

  1. Get to the data almost immediately, show first then tell.
  2. Make the context descriptions as integrated into the presentation as possible, and always illustrated with visible examples.
  3. Make time to show the same clip as many times as possible at multiple speeds.
  4. Don’t sit still – physically demonstrate and illustrate the embodied actions that are being talked about.
  5. Show clips with and without audio, but make sure to use the volume control when speaking over the video, otherwise it’s impossible to hear.
  6. Avoid using transcripts, or if necessary use subtitles to avoid splitting audience focus between page and screen.
  7. If there are transcripts/cartoons/diagrams, show those first, then the video so the audience knows what to look for and gets the satisfaction of seeing it.

Jon Hindmarsh’s presentation was an excellent example of this approach. He showed the same clips on silent repeat while he walked around talking and gesticulating about them animatedly. He also returned to the same clips at the beginning and end of his session, giving us the opportunity to see our perception of the action in the clips changing as we heard his analysis. The final time we saw it at the end was quite powerful for this reason.

Tips for presenting aggregate / system-oriented analyses

  1. Make clips and transcripts as short and concise as possible, just focus on the phenomenon in question.
  2. If there is important stuff earlier or later on in the interaction, show it on screen using subtitles/animation.
  3. Make introductions pithy and descriptive, but don’t spend too long on them. If relationships etc. are important, do a diagram.
  4. When showing an interactional effect, show it not happening too. Standard practices should be demonstrated alongside deviant cases.
  5. When going through the analysis, only talk about details relevant to the phenomenon. A presentation is not the place for a comprehensive analysis, and it’s distracting from the main point.
  6. Be extra careful not to run out of time. Almost all the presentations ran out of time, and whereas I could still get a lot from a badly timed interaction-oriented analysis, it was really hard to make sense of aggregate analyses that never completed their narrative arc.
  7. Don’t show video if the analysis does not depend on it.

Alexa Hepburn and Paul Drew’s paper on absent apologies was a great example of an aggregate analysis of a phenomenon that – by virtue of its absence – required both deviant and non-deviant cases to be presented in a kind of drip-drip of evidence, building up to an aggregate view of the phenomenon. Along the way there were lots of mini-system insights into how the mechanisms of accounting and accountability for apologies work in different ways, and although they did run out of time a bit, they were able to skip several of the deviant cases to complete the overall picture and reach the phenomenon in question. They also had excellent transcripts which included the ethnographic glosses so you could also reconstruct the missing pieces of the puzzle from the data after the fact.

The most important thing I learned at ICCA 2014

In Lem’s novel, the academic discipline of futurology is based on a practical extrapolation of Sapir-Whorfian notions of language being a necessary prerequisite for rendering the world intelligible through thought. The practice of futurology involves future-casting by a kind of reverse-etymology. Futurologists come up with new words and phrases and evaluate them for their potential meaningfulness. The idea is that if your phrase is semantically loaded and suggestive of other, related terminologies, it may at some point come to mean something. The implications of those potential meanings are then the concern of futurologists.

Alongside a fantastic series of pre-conference workshops, graduate students were invited to sign up to have lunch with a group of CA grandees. I was thrilled to get (literally) a ticket to talk with Anita Pomerantz – whose work was the starting point for my entire PhD thesis. It was great to meet and thank her in person for that, and very interesting to hear what she had to say when I insisted she give us some sage advice (her own very gracious approach was to focus on us and our research interests). Notably, she said “don’t use the term preference”, and went on to advise against using terms like “adjacency pair” and all the other bits of terminology established analytically by her first generation of CA people.

This intrigued me, as she is often credited with initiating a whole rich seam of work on preference and dispreference with her work on compliment responses (1978) and second assessments (1984). She explained that she was telling us to avoid using these words as shortcuts to, or even replacements for a proper analysis. I have heard similar things from others in her generation, for whom using these terms must feel very different given that they had to do the analytic work first to clarify and then invent them.

As the conference proceeded, I got a very tangible sense of why this advice is so important. Very often I saw people presenting great research, but peppering their presentations with these kinds of keywords, usually needlessly. These terms are useful, especially for structuring training and disciplined analysis, and for spreading knowledge about CA and its inner workings. It’s hard to imagine a CA textbook not including a full description of adjacency pairs or preference organization. However, their use in this context mostly seemed aimed at expressing group membership rather than contributing to the analysis at hand.

So, given that this was my first CA conference, the key thing I learned at ICCA 2014 was that it’s only worth naming these analytic terms in presentations of research findings where they are absolutely salient to the analysis and phenomena at hand. The transcripts and the data are presented so that other researchers who are familiar with CA methods can challenge the conclusions drawn from them. All the CA terminology is a vital but essentially back-room business that doesn’t need to feature in the presentation of findings at all.

The Future(ology) of CA

Given the breadth and diversity of the research presented at ICCA 2014, it’s difficult to sum it up in anything other than a very partial and subjective way. Having said that, there were a few moments and aspects of the event that suggested some interesting potential directions of contemporary CA.

Firstly, there was a set of presentations in the ‘Hybrids Heretics and Converts’ category that had its own panel on the Friday. Unfortunately I missed most of this, but had the chance to speak to some of the presenters and their colleagues – many from the Max Planck Institute for Psycholinguistics in Nijmegen, and most supervised by Stephen Levinson. The presentations I did see had a distinctive style and structure, familiar to me from hypothetico-deductive models of presentation that I see in my home disciplines of Computer Science and Engineering. I was pleased to see Schegloff in these panels, respectfully engaged with (mostly) young researchers, offering constructive critique. One of his criticisms of much of the experimental work presented was that it tended to focus on moments of experimentally-salient (but interactionally isolated) conversational structure. His argument was that talk is densely layered and interconnected, and that by isolating temporal fragments of talk from continuous processes of interaction, all that contextual interconnection is lost. It was great that there was space for debate focussed on these issues that extend from long-running questions of quantification in the study of human interaction, which is especially healthy given the quantitative (though not yet widespread experimental) turn in recent prominent CA work.

Photo of Harvey Sacks from the exhibition ‘Order at all points: the work of Harvey Sacks’

Secondly, the ISCA general meeting was interesting for me, as a neophyte, to find out more about CA as an organisational project, and get a feel for its origins and future. It was lovely to see Schegloff being awarded an honour for lifetime achievement, and the speeches and presentations were moving, especially given that this is the 40th anniversary of what is often seen as the founding of the discipline with Sacks, Schegloff and Jefferson’s 1974 turn-taking paper. There was also a fantastic exhibition entitled “Order at All Points: The Work of Harvey Sacks” in the Young library featuring key papers, letters, photos and artifacts from the Sacks archive. Alongside these celebrations, ISCA chair John Heritage set out that the focus of the next four years would be on developing and disseminating more educational resources in CA – and he announced that Schegloff was very generously donating all his transcripts, course notes, assignments and recordings to ISCA for publication. While many people whooped and cheered at this bit, Schegloff turned around in his seat at the front and hooted “Yo:::u’ll be S^OrE:::e,” at the crowd.

Video still from the exhibition ‘Order at all points: the work of Harvey Sacks’

Finally, the ICCA baton was passed from ICCA 2014 lead organiser Tanya Stivers to conference chair Paul Drew at Loughborough where the 2017 conference will take place, with Lorenza Mondada also announcing that there would be an interim ISCA-sponsored conference in Basel in 2015 entitled ‘Revisiting Participation’. Given how much I enjoyed and got out of ICCA 2014, I’m very much looking forward to those.

References

  • Heath, C., Hindmarsh, J., & Luff, P. (2010). Video in qualitative research: analysing social interaction in everyday life. Sage Publications.
  • Lem, S. (1985). The Futurological Congress (from the Memoirs of Ijon Tichy). Harcourt Brace Jovanovich.
  • Schegloff, E. A. (1996). Some Practices for Referring to Persons in Talk-in-Interaction: A Partial Sketch of a Systematics. In B. Fox (Ed.), Studies in Anaphora (pp. 437–85). Amsterdam: John Benjamins Publishing Company.
  • Pomerantz, A. (1978). Compliment responses: Notes on the co-operation of multiple constraints. In J. Schenkein (Ed.), Studies in the organization of conversational interaction. Academic Press.
  • Pomerantz, A. (1984). Agreeing and disagreeing with assessments : some features of preferred / dispreferred turn shapes. In J. M. Atkinson (Ed.), Structures of social action: Studies in Conversation Analysis (pp. 57–101). London: Cambridge University Press.


Two forms of silent contemplation – talk at ICCA 2014

For the International Conference on Conversation Analysis 2014 I gave a talk on some work derived from my PhD: Respecifying Aesthetics. It looked at two forms of silent contemplation – and two sequential positions for bringing off silences as accountable moments for subjective contemplation and aesthetic judgement.

The talk looked at where the conventional notion of aesthetic judgment as an internal, ineffable phenomenon might come from in practical terms. In philosophical terms the idea comes from Kant, who gets it from Hume, who draws on Shaftesbury. I think Hume puts it best.


But this talk isn’t about philosophical aesthetics – it’s about the practical production of contemplation in interaction. It points to the kinds of practical phenomena that we can observe in people’s interactional behaviors that might have inspired philosophers to hypothesise that aesthetic judgments are ineffable, internal, psychological activities.

The empirical crux points to two positions in sequences of talk that people can use to present something as arising from contemplation. The first is done as an initial noticing or assessment, launched from first position without reference to prior talk or action. The second is produced as a subsequent noticing – launched in first position as though responsive to some tacit prior ‘first’.

By studying the practical structure of these ostensibly internal, ineffable events, we can develop more plausible hypotheses about how aesthetic experiences function in theoretical or psychological terms.

References 

  • Coulter, J., & Parsons, E. (1990). The praxiology of perception: Visual orientations and practical action. Inquiry, 33(3).
  • Eriksson, M. (2009). Referring as interaction: On the interplay between linguistic and bodily practices. Journal of Pragmatics, 41(2), 240–262. doi:10.1016/j.pragma.2008.10.011
  • Goodwin, C. (1996). Transparent vision. In E. A. Schegloff & S. A. Thompson (Eds.), Interaction and Grammar (pp. 370–404). Cambridge: Cambridge University Press.
  • Goodwin, C., & Goodwin, M. (1987). Concurrent Operations on Talk: Notes on the Interactive Organization of Assessments. Papers in Pragmatics, 1(1).
  • Goffman, E. (1981). Forms of Talk. Philadelphia: University of Pennsylvania Press.
  • Heath, C., & vom Lehn, D. (2001). Configuring exhibits. The interactional production of experience in museums and galleries. In H. Knoblauch & H. Kotthoff (Eds.), Verbal Art across Cultures. The aesthetics and proto-aesthetics of communication (pp. 281–297). Tübingen: Gunter Narr Verlag.
  • Heritage, J. (2012). Epistemics in Action: Action Formation and Territories of Knowledge. Research on Language & Social Interaction, 45(1), 1–29.
  • Heritage, J., & Raymond, G. (2005). The Terms of Agreement: Indexing Epistemic Authority and Subordination in Talk-in-Interaction. Social Psychology Quarterly, 68(1), 15–38.
  • Kamio, A. (1997). Territory of information. J. Benjamins Publishing Company.
  • Leder, H. (2013). Next steps in neuroaesthetics: Which processes and processing stages to study? Psychology of Aesthetics, Creativity, and the Arts, 7(1), 27–37.
  • Pomerantz, A. (1984). Agreeing and disagreeing with assessments: Some features of preferred/dispreferred turn shapes. In J. M. Atkinson & J. Heritage (Eds.), Structures of social action: Studies in Conversation Analysis (pp. 57–102). Cambridge: Cambridge University Press.
  • Schegloff, E. A. (1996). Some Practices for Referring to Persons in Talk-in-Interaction: A Partial Sketch of a Systematics. In B. Fox (Ed.), Studies in Anaphora (pp. 437–85). Amsterdam: John Benjamins Publishing Company.
  • Schegloff, E. A. (2007). Sequence organization in interaction: Volume 1: A primer in conversation analysis. Cambridge: Cambridge University Press.
  • Schegloff, E. A., & Sacks, H. (1973). Opening up closings. Semiotica, 8(4), 289–327.
  • Stivers, T., & Rossano, F. (2010). Mobilizing Response. Research on Language & Social Interaction, 43(1), 3–31.
  • Vom Lehn, D. (2013). Withdrawing from exhibits: The interactional organisation of museum visits. In P. Haddington, L. Mondada, & M. Nevile (Eds.), Interaction and Mobility. Language and the Body in Motion (pp. 1–35). Berlin: De Gruyter.
