June 2014

Two forms of silent contemplation – talk at ICCA 2014

For the International Conference on Conversation Analysis 2014 I gave a talk on some work derived from my PhD: Respecifying Aesthetics. It looked at two forms of silent contemplation – and two sequential positions for bringing off silences as accountable moments for subjective contemplation and aesthetic judgement.

The talk asked where the conventional notion of aesthetic judgement as an internal, ineffable phenomenon might come from in practical terms. In philosophical terms, the idea comes from Kant, who gets it from Hume, who draws on Shaftesbury. I think Hume puts it best.


But this talk isn’t about philosophical aesthetics – it’s about the practical production of contemplation in interaction. It points to the kinds of practical phenomena that we can observe in people’s interactional behaviors that might have inspired philosophers to hypothesise that aesthetic judgments are ineffable, internal, psychological activities.

The empirical crux points to two positions in sequences of talk that people can use to present something as arising from contemplation. The first is done as an initial noticing or assessment, launched from first position without reference to prior talk or action. The second is produced as a subsequent noticing – launched in first position as though responsive to some tacit prior ‘first’.

By studying the practical structure of these ostensibly internal, ineffable events, we can develop more plausible hypotheses about how aesthetic experiences function in theoretical or psychological terms.

References 

  • Coulter, J., & Parsons, E. (1990). The praxiology of perception: Visual orientations and practical action. Inquiry, 33(3).
  • Eriksson, M. (2009). Referring as interaction: On the interplay between linguistic and bodily practices. Journal of Pragmatics, 41(2), 240–262. doi:10.1016/j.pragma.2008.10.011
  • Goffman, E. (1981). Forms of Talk. Philadelphia: University of Pennsylvania Press.
  • Goodwin, C. (1996). Transparent vision. In E. A. Schegloff & S. A. Thompson (Eds.), Interaction and Grammar (pp. 370–404). Cambridge: Cambridge University Press.
  • Goodwin, C., & Goodwin, M. (1987). Concurrent Operations on Talk: Notes on the Interactive Organization of Assessments. Papers in Pragmatics, 1(1).
  • Heath, C., & vom Lehn, D. (2001). Configuring exhibits. The interactional production of experience in museums and galleries. In H. Knoblauch & H. Kotthoff (Eds.), Verbal Art across Cultures. The aesthetics and proto-aesthetics of communication (pp. 281–297). Tübingen: Gunter Narr Verlag.
  • Heritage, J. (2012). Epistemics in Action: Action Formation and Territories of Knowledge. Research on Language & Social Interaction, 45(1), 1–29.
  • Heritage, J., & Raymond, G. (2005). The Terms of Agreement: Indexing Epistemic Authority and Subordination in Talk-in-Interaction. Social Psychology Quarterly, 68(1), 15–38.
  • Kamio, A. (1997). Territory of information. J. Benjamins Publishing Company.
  • Leder, H. (2013). Next steps in neuroaesthetics: Which processes and processing stages to study? Psychology of Aesthetics, Creativity, and the Arts, 7(1), 27–37.
  • Pomerantz, A. (1984). Agreeing and disagreeing with assessments: Some features of preferred/dispreferred turn shapes. In J. M. Atkinson & J. Heritage (Eds.), Structures of social action: Studies in Conversation Analysis (pp. 57–102). Cambridge: Cambridge University Press.
  • Schegloff, E. A. (1996). Some Practices for Referring to Persons in Talk-in-Interaction: A Partial Sketch of a Systematics. In B. Fox (Ed.), Studies in Anaphora (pp. 437–85). Amsterdam: John Benjamins Publishing Company.
  • Schegloff, E. A. (2007). Sequence organization in interaction: Volume 1: A primer in conversation analysis. Cambridge: Cambridge University Press.
  • Schegloff, E. A., & Sacks, H. (1973). Opening up closings. Semiotica, 8(4), 289–327.
  • Stivers, T., & Rossano, F. (2010). Mobilizing Response. Research on Language & Social Interaction, 43(1), 3–31.
  • vom Lehn, D. (2013). Withdrawing from exhibits: The interactional organisation of museum visits. In P. Haddington, L. Mondada, & M. Nevile (Eds.), Interaction and Mobility. Language and the Body in Motion (pp. 1–35). Berlin: De Gruyter.


Normativity: outside in/inside out

I’m not sure these two were really as far apart as all that on the sources of normativity in aesthetics and ethics.


“For Kant the moral order “within” was an awesome mystery; for sociologists the moral order “without” is a technical mystery. From the point of view of sociological theory the moral order consists of the rule governed activities of everyday life. A society’s members encounter and know the moral order as perceivedly normal courses of action – familiar scenes of everyday affairs, the world of daily life known in common with others and with others taken for granted.”

Garfinkel, H. (1967). Studies in Ethnomethodology. Englewood Cliffs, NJ: Prentice-Hall. p. 35.


“The propaedeutic for all beautiful art, so far as it is aimed at the highest degree of its perfection, seems to lie not in precepts, but in the culture of the mental powers through those prior forms of knowledge that are called humaniora, presumably because humanity means on the one hand the universal feeling of participation and on the other hand the capacity for being able to communicate one’s inmost self universally, which properties taken together constitute the sociability that is appropriate to humankind, by means of which it distinguishes itself from the limitation of animals.”

Kant, I. (2000). Critique of the Power of Judgment (P. Guyer, Ed.). Cambridge: Cambridge University Press. p. 229.


The Data Session

The ‘data session’ has become my favourite research activity since starting to work with ethnomethodology (EM) and conversation analysis (CA). However, this crucial bit of analytic trade craft seems poorly documented as a research process – with minimal references scattered throughout textbooks, articles and course materials. This post pulls together some of the descriptions and tips I’ve found relating to the practical activity of doing data sessions, followed by a short account of why I am so fond of this wonderful research practice.

Early CA work

I can’t find any direct references to data session practices in the early CA literature from Sacks, Schegloff or Jefferson. However, the following two papers contain some very interesting methodological discussions that provide insight into how data was prepared and collected before principled collections were established:

I can only assume that the trade craft of CA was still being established at this time, so the practice of the data session had not yet become stable, well understood and ready to be written up for instructional purposes. I’ve heard stories (but can’t find any write-ups) of how Gail Jefferson was particularly involved in its development as a pedagogical/analytic practice. I would be very interested in reading these stories – and particularly in learning about any rules/procedures for doing data sessions that may have been established in those early days.

First instructional descriptions

Paul ten Have’s “Doing Conversation Analysis”, first published in 1999, provides one of the earliest instructional descriptions of the data session I can find, and lays out its essentials very clearly:

“The data session can be seen both as a kind of playground to mutually inspire one’s understanding of the data, and as an environment that requires a rather specific ‘discipline’. A ‘data session’ is an informal get-together of researchers in order to discuss some ‘data’ – recordings and transcripts. The group may consist of a more or less permanent coalition of people working together on a project or in related projects, or an ad hoc meeting of independent researchers. The basic procedure is that one member brings in the data, for the session as a whole or for a substantial part of it.”

He then provides – as far as I can find – the first description of the actual practical activity in the data session, and how it functions as a pedagogical as well as an analytic practice:

“This often involves playing (a part of) a tape recording and distributing a transcript, or sometimes only giving a transcript. The session starts with a period of seeing/hearing and/or reading the data, sometimes preceded by the provision of some background information by the ‘owner’ of the data. Then the participants are invited to proffer some observations on the data, to select an episode which they find ‘interesting’ for whatever reason, and formulate their understanding, or puzzlement, regarding that episode. Then anyone can come in to react to these remarks, offering alternative, raising doubts, or whatever. What is most important in these discussions is that the participants are, on the one hand, free to bring in anything they like, but, on the other hand, required to ground their observations in the data at hand, although they may also support them with reference to their own data-based findings or those published in the literature. One often gets, then, a kind of mixture, or coming together, of substantial observations, methodological discussions, and also theoretical points. Data sessions are an excellent setting for learning the craft of CA, as when novices, after having mastered some of the basic methodological and theoretical ideas, can participate in data sessions with more experienced CA researchers. I would probably never have become a CA practitioner if I had not had the opportunity to participate in data sessions with Manny Schegloff and Gail Jefferson.”

ten Have, P. (2007). Doing Conversation Analysis: A Practical Guide (2nd ed.). London: Sage Publications. pp. 140–141.

He also mentions that these sessions are poorly documented, writing (in both the 1999 and 2007 editions of his book) that he can find only one real description, in Jordan & Henderson (1995), quoted below. They also note that the data session – which they call the “Interaction Analysis Laboratory” – is both vitally important and difficult to describe in formal or procedural terms:

“Group work is also essential for incorporating novices because Interaction Analysis is difficult to describe and is best learned by doing. Much in the manner of apprentices, newcomers are gradually socialized into an ongoing community of practice in which they increasingly participate in the work of analysis, theorizing, and constructing appropriate representations of the activities studied.”

They also provide a great description of the actual mechanics of presenting data, and of how specific heuristics in the organisation of the data session can guard against rambling, ungrounded theoretical speculation:

“The tape is played with one person, usually the owner, at the controls. It is stopped whenever a participant finds something worthy of remark. Group members propose observations and hypotheses about the activity on the tape, searching for specific distinguishing practices within a particular domain or for identifiable regularities in the interactions observed. Proposed hypotheses must be of the kind for which the tape in question (or some related tape) could provide confirming or disconfirming evidence. The idea is to ground assertions about what is happening on the tape in the materials at hand. To escape the ever-present temptation to engage in ungrounded speculation, some groups have imposed a rule that a tape cannot be stopped for more than 5 min. This means in practice that rambling group discussions are discouraged and that no single participant can speculate for very long without being called upon to ground her or his argument in the empirical evidence, that is to say, in renewed recourse to the tape.”

Jordan, B., & Henderson, A. (1995). Interaction analysis: Foundations and practice. The Journal of the Learning Sciences, 4(1), 39–103.

Recent accounts and empirical work

More recent instructional publications have included tips on data sessions. For example, Heath, Hindmarsh & Luff (2010) include a tip section and an appendix on data sessions. They introduce the basic idea, mention its pedagogic function, and highlight the opportunity for its use in interdisciplinary/workplace studies that involve practitioners from other fields:

“[Data sessions] can also be used to introduce new members of a team or research group into a particular project and can be very important for training students in video based analysis. On occasions it can also be helpful to have ‘practitioners’, personnel from the research domain, participate in data sessions, as they can provide distinctive insights, and can often help to clarify events that have proved difficult to understand. In data sessions it is important to avoid overwhelming participants with too much material. A small number of brief extracts of, say, no more than 20 seconds or so is fine. It is also helpful to provide transcripts of the talk as well as any other materials that may be useful for understanding the extracts in question.”

They also add a number of key points about the distinct benefits and caveats of running data sessions, paraphrased here:

  • Identifying candidate phenomena for more detailed study.
  • Enforcing evidential demonstration of analytic claims.
  • Revealing issues/challenges in demonstrating analytic findings.
  • Eliciting alternative/complementary perspectives.
  • Generating new analytic ideas/issues and suggesting improvements for future data collection.
  • Keeping one’s ‘hand in’, i.e. practising analysis on other people’s data to maintain a fresh eye/ear for your own research.

Then they say something explicit about the data session as a collaborative practice which I haven’t seen anyone else mention, but which seems absolutely crucial to me. The fact that this almost never comes up also reinforces my sense that EM/CA and its research practices are generally much less fraught by this particular problem than many other research contexts, which speaks extremely well for the community and its empirical/epistemic commitments:

“Data sessions are a collegial activity and are based on mutual trust. They should be treated as such and discussions of intellectual property and the like should be avoided. It is up to individual participants to reveal or withhold ideas that they have, if they do or do not want others to use those ideas in future analytic work.”

The appendix on data sessions (pp. 156–157) contains more very useful practical advice. To paraphrase:

  • Limit the numbers to no more than 20 or so.
  • Presenters should select 3-6 clips, ideally under 30s each.
  • Do bring transcripts – even rough ones are helpful.
  • Bring any supplementary material that is relevant/necessary for understanding the action.
  • Look at one fragment of data at a time – spend roughly 20–30 minutes on each 5 seconds of recording.
  • Don’t cheat by looking ahead, or rely on the analyst’s information exogenous to the clip itself.
  • When it’s done, sum up, take notes and get general reflections.

Heath, C., Hindmarsh, J., & Luff, P. (2010). Video in qualitative research: analysing social interaction in everyday life. Sage Publications. pp. 102-103.

There is a wonderfully reflexive EM/CA analysis of a data session in a chapter by Harris, Theobald, Danby, Reynolds & Rintel (2012), in a volume on postgraduate pedagogical practices, which presents an analysis of a data session by the authors and data session participants themselves. They focus on the collaborative/peer pedagogical aspects of the session, and highlight the “fluidity of ownership of ‘noticing’”, grounding this in clear evidence of how such noticings are done.

Harris, J., Theobald, M. A., Danby, S. J., Reynolds, E., & Rintel, S. (2012). “What’s going on here?” The pedagogy of a data analysis session. In A. Lee & S. J. Danby (Eds.), Reshaping doctoral education: International Approaches and Pedagogies (pp. 83–96). London: Routledge.

A participant-observer account

Finally, my favourite description of work practices in the data session was written by John Hindmarsh in the affectionate and humorous Festschrift publication he and his colleagues edited for Christian Heath. In an uncharacteristically participant-observer style, he nonetheless describes the detail of both pedagogical and analytic processes of “Heath’s natural habitat: The data session” very vividly. He includes:

  • Delicate interrogations: where researchers are subtly probed as to why they selected specific clips.
  • Occasioned exclamations: in which the seasoned analyst will hoot with infectious laughter or joy at a clip – infectious partly because it can leave less experienced researchers either shamefully nonplussed or scrambling to find a grounding for the source of the laughter.
  • Transcription timings: opportunities to (delicately) rectify transcription errors.
  • Re-characterisations: moments where a banal, if well-targeted observation is picked up and re-packaged as an elegant and insightful analysis – a form of agreement with some extra pedagogical/analytic impetus.
  • Troubled re-characterisations: same as above, but done as an (initially veiled) disagreement, demonstrating poor targeting or a flawed analysis – again, always analytically useful and instructive, but less pleasantly so.

Hindmarsh, J. (2012). Heath’s natural habitat: The data session. In P. Luff, J. Hindmarsh, D. vom Lehn, & B. Schnettler (Eds.), Work, Interaction and Technology: A Festschrift for Christian Heath. (pp. 21–23). London: Dept. of Management, Kings College London.

To close: some of my own reflections on the data session, and why it constitutes such an important methodological and pedagogical practice.

Why I love data sessions and why you should too

The last description – of the trade craft of a particular researcher’s data session – is my favourite because it shows what an excellent apprenticeship situation this is. In instructional environments where empirical data is less straightforwardly ready-to-hand, there is a latency between the teaching moment and the understanding moment that is frustratingly difficult to bridge. In the data session, the data is really doing the teaching, but the skilled analyst elicits both the observation and its pedagogical thrust from the same few seconds of interaction that have been in plain sight all along.

Furthermore, this public availability of the data as a mutually assessable resource provides a constant check on authoritative hubris. More than once I’ve seen a junior analyst grasp and hold onto a powerful observation that provides irrefutable counter-evidence to a more experienced analyst’s position on some piece of data. Honesty and accountability flow in both directions in the data session, which is what makes it such a wonderful occasion for learning, analysis and – literally – serious fun.

I also like Jon Hindmarsh’s description because it really captures what it’s like to attend data sessions with different people who love the practice. I’m new to it, but thanks to the generosity of my supervisor Pat Healey and his enthusiasm for this work I’ve had the great pleasure of analysing data with pros such as Steven Clayman, Chuck Goodwin, Christian Heath, John Heritage, Yuri Hosoda, Shimako Iwasaki, Celia Kitzinger, Dirk vom Lehn, Gene Lerner, Rose McCabe, Tanya Stivers, Liz Stokoe and Sandy Thompson, not to mention my fellow students in these sessions from whom – given the peer pedagogical structure of the data session – I was able to learn just as much.

My experience has been that everyone approaches data very differently, and each person has a very distinctive style, analytic focus and approach. Nonetheless, the dynamics and epistemic arrangements of the situation allow for an amazingly rich exchange of ideas and empirical observations between disciplines, across interactional contexts, cultures, languages and focal phenomena. I am convinced that it is one of the most crucial factors in how EM/CA projects have made such robust findings in studies of interaction, language and culture, and that there is a great deal more to be understood and appreciated about how data sessions function.

I am also convinced that the data session has a very important place in disseminating EM/CA findings and practices beyond its traditional sociological/anthropological/linguistic contexts of study. There are sure to be ways of adapting some of its pedagogical/analytical dynamism to other contexts and types of recorded material – although it’s debatable whether this form of analysis would really work with anything other than interactional data. In any case, as I mentioned at the beginning, I am very curious about other data session practices and would like to know more about the similarities and differences in how people run theirs, so I would be very grateful if you would send me your data session experiences, tips, formats and training materials.


The Turing test’s insight into humanness.

Illustration from Dean Burnett’s Guardian spoof of the June 2014 reissued ‘Turing Test Passed’ story

I’ve heard people react in two ways to the hyped announcement about Eugene passing the Turing Test. Some claim the test should be harder – longer-term and more complex – while others say it doesn’t show machines doing thinking. I disagree with both complaints. I think the test is a brilliant one, and very insightful and informative about what it means to be a language machine.

Turing (1950) wrote:

“I believe that in about fifty years’ time it will be possible to programme computers… [to] play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning”.

So, however hyped, the basic facts of the story are more or less correct, and I find it quite amazing (given that the Eugene chatbot was first written in 2001) that Turing got the timing spot on. However, I do agree that the news story and most interpretations of the meaning of the Turing Test are nonsensical from a scientific standpoint.

It seems likely to me that, since 2001, many 13-year-olds – along with a great many other humans – would fail the test as described, and equally likely that many more advanced chatbots would be able to pass it quite easily. This wasn’t the case in 1950, when the social meaning of computing would have been unrecognisable to contemporary judges, and vice versa.

Given that Turing’s computing challenge was passed, quite trivially, some time ago, the research challenge posed by the test as a socio-historical milestone – and the challenge for cognitive science in general since then – is figuring out how, when and in what ways humanness is an ascribable quality.

There is a nice discussion of exactly this problem in QM’s very own CS4FUN – although I’m not sure who (or what) wrote it.

Refs:


How to prepare for an EM/CA data session

Participants in the first EMCA DN Meeting

At the inaugural EMCA Doctoral Network meeting (my write-up here), where there was a mix of researchers with different levels of familiarity with EM/CA, I realised that the process of preparing for a data session – one of the most productive and essential tools of interaction analysis – is really poorly documented. There are some guidelines in textbooks and on websites, usually covering how to do the analysis/transcription itself, but nowhere is there a simple guide for how to actually get ready to contribute your data to a data session. This short primer is intended to fulfil that function, and to invite others to contribute their own tips and best practices.

I was more or less in this situation (having crammed my head full of CA literature without having had a great deal of hands-on data session practice) last year when I went to the Centre for Language Interaction and Culture at UCLA. There I had the chance to witness and participate in four separate weekly data sessions run by Steven Clayman, Chuck Goodwin, John Heritage, Gene Lerner, Tanya Stivers and Sandy Thompson and their students. It was a bit of a baptism of fire, but I learned a lot from it.

Each of the pros had interestingly different approaches to preparing data for the sessions, all useful in slightly different ways, so I decided to write up my synthesis of best practice for preparing your data for a data session. Feedback and comments are very welcome!

What/who this guide is for:

This guide is intended for researchers interested in participating in data sessions in the tradition of Ethnomethodology and Conversation Analysis (EM/CA), who already have data and want to figure out how to present it.

This is not about gathering data, specific analytic approaches or about actually doing detailed analysis or any of the meat and potatoes of EM/CA work which is amply covered in many books and articles including:

This guide is intended to help researchers who may not have had much experience of data sessions to prepare their data in such a way that the session will be fun and analytically useful for them and everyone else who attends.

This is also not intended to be a primer in the use of specific audio/video/text editing or transcription software – there are so many out there. I will recommend some that are freely available, but pretty much any will do. I do plan to write that kind of guide, but that’s not what this article is for.

Selecting from your data for the session

Doing a data session obviously requires making some kind of data selection, so it helps to have a focal phenomenon of some sort. Since the data session is exploratory rather than findings-driven, it doesn’t really matter what the phenomenon is.

That’s the great thing about naturally occurring data – you might not find what you’re looking for, but you will find something analytically interesting. Negative findings about your focal phenomenon are also useful: you might discover that the assumptions behind your clip selection are not borne out by the interaction analysis. That is still a useful finding, and will make for a fun and interesting data session.

Example phenomena for a rough data-session-like collection of extracts might focus on any one or on a combination of lexical, gestural, sequential, pragmatic, contextual, topical etc. features. E.g.:

  • Different sequential or pragmatic uses of specific objects such as ‘Oh’, ‘wow’ or ‘maybe’.
  • Body orientation shifts or specific patterns of these shifts during face-to-face interaction.
  • Word repeats by speaker and/or recipient at different sequential locations in talk.
  • Extracts from interactions in a particular physical location or within a specific institutional context.
  • Extracts of talk-in-interaction where speakers topicalize something specific (e.g.: doctors/teapots/religion/traffic).

At this stage your data doesn’t have to be organised into a principled ‘collection’ as such. Having cases that are ostensibly the same or similar, and then finding out how they are different is a tried and tested way of finding out what phenomenon you are actually dealing with in EM/CA terms.

There are wonderful accounts of this data-selection / phenomenon discovery process with notes and caveats about some of the practical and theoretical consequences of data selection in these two papers:

Pre-data session selection: how to focus on a specific phenomenon

You can bring any natural data at all to a session and it will be useful as long as it’s prepared reasonably well. However if you want the session to focus on something relevant to your overall project, it is helpful to think about what kind of analysis will be taking place in the session in relation to your candidate phenomenon and select clips accordingly.

There are proper descriptions of how to do this detailed interaction analysis in the references linked above. However, here is a paraphrase of some simple tips on data analysis that Gene Lerner and Sandy Thompson give in their wonderful Language and the Body course, when introducing an interdisciplinary data session where many people are doing it for the first time:

  1. Describe the occasion and current situation being observed (where/when/sequence/location etc.).
  2. Limit your observations to those things you can actually point to on the screen/transcript.
  3. Then, pick out for data analysis features/occasions that are demonstrably oriented to by the participants themselves.
  4. That is your ‘target’, then zoom in to line-by-line, action-by-action sequences and describe each.
  5. Select a few targets where you can specify what is being done as the sequence of action unfolds.

Then, in the data session itself, you and other researchers can look at how all interactional resources (bodily movements, prosody, speech, environmental factors, etc.) are involved in these processes, and make observations about how these things are being done.

Providing a transcript

I find it very hard to focus on analysis without a printed transcript, but there are a few different approaches, each with different advantages and disadvantages. Chuck Goodwin, for example, recommends putting Jeffersonian transcription subtitles directly onto the video/audio clips so you don’t have to split focus between screen and page. Most researchers, however, produce a Jeffersonian transcript and play their clips separately.

Advantages of printed transcriptions

  • You and other participants have something convenient to write notes on.
  • You can capture errors or issues in the transcription easily.
  • Participants can refer to line numbers that are off-screen when they make their observations.

Advantages of subtitles on-screen

  • You don’t miss the action looking up and down between page and screen.
  • Generally easier to understand immediately than a multi-line transcript, especially when presenting data in a language your session participants might not understand.
  • You can present this data in environments where you don’t have the opportunity to print out and distribute paper transcripts.

In either case you will need to take the time to do a Jeffersonian transcription, so why not do both?

Jeffersonian transcription

There are lots of resources for learning Jeffersonian transcription, here are some especially useful ones:

Visual/graphical transcripts

Chuck and Candy Goodwin often also present carefully designed illustrations alongside their final analyses. Some people also present their data, usually at a later stage of research with detailed multi-modal transcripts incorporating drawings, animations, film-strip-like representations etc. (see Eric Laurier’s paper for a great recent overview):

  • Laurier, E. (2014). The Graphic Transcript: Poaching Comic Book Grammar for Inscribing the Visual, Spatial and Temporal Aspects of Action. Geography Compass, 8(4), 235–248.

How much work you want to do on your transcript before a data session is up to you but it is probably premature to work on illustrations etc. until you have some analytic findings to illustrate.

Transcription issues vs. errors

It’s inevitable that other people will hear things differently, so the data session is a legitimate environment for improving a transcript-in-progress. In fact, analytic findings often hinge on how something is heard, and then how it is transcribed – a useful thing to discuss in a data session, and instructive for everyone. However, it is important to capture the obvious stuff as accurately as possible, to give people a basic resource for doing analysis together without getting hung up on simple transcription errors rather than the interesting transcription questions.

Introducing your data

It is useful to give people some background to your data before you present it. This does not have to be a full background to all your research and the study you are undertaking. In fact, it’s useful to omit most of this kind of information, because the resource you have access to in the data session is fresh eyes and ears that aren’t yet contaminated by assumptions about what is going on.

In terms of introducing your study as a whole, it’s useful to have a mini presentation prepared about your study (5 mins max for a 1.5h data session) with two or three key points that give people an insight into where and what you are studying. Once you’ve made one of these for each study, you can re-use it in multiple data sessions.

In terms of introducing each clip, have a look at Schegloff’s (2007) descriptions of his data extracts. They have a brilliantly pithy clarity that provides just enough information to understand what is going on without giving away any spoilers or showing any bias.

Preparing your audio/video data

Assuming you already have naturalistic audio/video data of some kind, make some short clips using your favourite piece of audio/video editing software. The shorter the better (under 30s ideally) – longer clips, especially complex or busy ones, may need to be broken down into smaller chunks for analysis.

It can be time-consuming to search through longer clips for specific sections, so I recommend making clips that correspond precisely to your transcript, while noting down where each clip is located in the larger video/audio file in case someone wants to see what happens next or previously.
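If you work at the command line, clip-making like this is easy to script – here’s a minimal sketch using ffmpeg, assuming it is installed and with placeholder filenames and timestamps; the command is echoed as a dry run so you can check it before removing the `echo` and running it for real:

```shell
# Cut a short clip from a longer recording without re-encoding.
# Filenames and timestamps below are placeholders for your own data.
SRC="session_recording.mp4"        # the full recording
START="00:01:23"                   # where the clip starts in the source
DUR="30"                           # clip length in seconds
OUT="clip01_at_${START//:/-}.mp4"  # filename records the source offset

# Echoed as a dry run; remove 'echo' to actually cut the clip.
# '-c copy' copies the streams rather than re-encoding, so it's fast.
echo ffmpeg -ss "$START" -t "$DUR" -i "$SRC" -c copy "$OUT"
```

Encoding the source offset into the clip’s filename means you can always locate the surrounding context in the full recording later.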

Copy these clips into a separate folder on your computer that is specifically for this data session – finding them if they’re buried in your file system can waste time.

If possible, test the audio and video projection/display equipment in the room where you’re running the data session, to make sure that your clips are audible and visible without headphones and on other screens. If in doubt, use audio editing software (such as Audacity) to make the audio in your files as loud as possible without clipping. You can always turn a loud sound system down – but if your data or the playback system is too quiet, you’re stuck in the data session without being able to hear anything.
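ffmpeg can also handle this loudness adjustment in a pinch – a sketch, again with placeholder filenames, using its loudnorm filter (which normalises audio towards the EBU R128 broadcast loudness standard rather than just maximising volume, so it avoids clipping), echoed as a dry run:

```shell
# Normalise a clip's audio loudness; the video stream is copied untouched.
# Filenames are placeholders for your own clips.
IN="clip01.mp4"
OUT="clip01_loud.mp4"

# 'loudnorm' targets a standard loudness without clipping.
# Remove 'echo' to run the command for real.
echo ffmpeg -i "$IN" -af loudnorm -c:v copy "$OUT"
```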

There are many more useful tips about sound and lighting etc. in data collection in Heath, Hindmarsh & Luff (2010).

The mechanics of showing your data

I find it useful to think of this as a kind of presentation – just like at a conference or workshop – so I recommend using presentation software to cue up and organise your clips for display rather than struggling through files and folders with inconsistent names.

Make sure each clip/slide is clearly named and/or numbered to correspond with sections of the transcript, so that people can follow it easily, and make sure you can get each clip to play – and pause/rewind/control it – with a minimum of fuss.

The data session is probably the most useful analytic resource you have after the data itself, so make sure you use every second of it.

Feedback / comments / comparisons very welcome

I hope this blog-post-grade guide is useful for those just getting into EMCA. I know that data session conventions vary widely, but I hope these recommendations are broadly applicable enough to make sense in most contexts.

In general I am very interested in different data session conventions and would very much welcome tips, advice, recommendations and descriptions of specialized data session practices from other researchers/groups.

More very useful tips (thanks!):

Dr Jo Meredith adds: “because data sessions can be a bit terrifying the temptation’s to take some data you can talk about in an intelligent way; the best data sessions I’ve been to have been with new pieces of data, and I’ve got inspired by other people’s observations”


Report from the first EMCA Doctoral Network meeting

Poster drawing session

I’m on my way back from the inaugural meeting of the EMCA Doctoral Network in Edinburgh this weekend, which has been one of the best PhD-related events I’ve ever had the pleasure of attending. The last word on the meeting, given by Anca Sterie (one of the participants) at the summing-up, got it absolutely right: the openness, intellectual curiosity and thoughtful care of the organisers and the meeting as a whole were unusual and extremely encouraging.

In any case, I thought it would be useful to document how the workshop was put together, because the format and approach was well worth replicating, especially in a field like EM/CA that can only really progress if people have ways of practising and becoming skilled collaborators in data sessions.

Update: Thanks to Eric for the nice photos!

Before the workshop

The organisers sent us a full timetable before the workshop, including a list of readings to have a look at. The readings were two methodology-focussed papers:

  • Lynch, M. (2000). The ethnomethodological foundations of conversation analysis. Text – Interdisciplinary Journal for the Study of Discourse, 20(4), 517–532. doi:10.1515/text.1.2000.20.4.517
  • Stokoe, E. (2012). Moving forward with membership categorization analysis: Methods for systematic analysis. Discourse Studies, 14(3), 277–303. doi:10.1177/1461445612441534

They also included other authors’ responses to both papers, highlighting methodological differences and challenges.

The sign-up sheet that the workshop organisers mailed around had asked us in advance whether we wanted to do a research presentation or use our data in a data session or both. The timetable made it clear where, when and to whom we were going to be presenting.

Tim Smith introducing the workshop

Day 1

10:00–11:00: Icebreaker

After coffee and name-badging, we made our way into the old library and were allocated seats in small groups of 5 or 6. It was nice to have our names on the tables, like at a wedding or something – not a biggie, but it made me feel welcome and individually catered for immediately.

Tables at the EMCA DN wedding

An orientation talk from the organisers Tim Smith and Eric Laurier got us ready to go, then we did a particularly inspired ice-breaker: 20 minutes to use the flip-chart paper and coloured markers on the tables to create a hand-drawn poster about our research, summarising what we were doing in our PhDs and noting down things we would like help with or wanted to collaborate on. Then the posters were hung up on a long piece of string wound all the way around the room, and we had the chance to mingle, grab a coffee and circulate.

Posters hanging up after the ice-breaker

The string and posters stayed up throughout and had the effect of making the library feel fun, colourful and informal.

11:00–12:30: Reading Session

We then went with our groups to a few different rooms for a reading session, where we discussed one of the texts we’d been sent. In my group we discussed the Stokoe text and the responses to it. Having a common text to discuss was a very useful way to meet each other: it helped us see that we all had quite different responses and were coming from different perspectives.

The choice of methodology papers and responses was a great idea for this reason: rather than doing the obvious thing of giving us some foundational EMCA papers, it provided participants from many disciplines with a way to get involved in issues and debates within EMCA.

13:30–15:00: Data Session 1

After lunch we got our teeth into the first data session. We broke up into different groups of 5 or 6, and those of us who had signed up to present data got a chance to gather feedback and ideas through the wonderful collaborative practice of looking at natural data together.

Some participants hadn’t done data sessions before, but there were sufficient numbers of experienced analysts, and in my session Liz Stokoe (the day’s plenary speaker) was on hand to facilitate.

In any case, it was fascinating to see the work that people were doing. Geraldine Bengsch had some great multi-lingual data from hotel check-in counters that really reminded me of episodes of Fawlty Towers. It showed how the humour of that situation comedy comes in part from the interactional contradictions of working in the ‘hospitality industry’. The clue’s in the name, I guess: staff somehow have to balance the imperatives of making customers feel welcome and comfortable with needing to extract full names, credit card details, passports and other formalities and securities from them.

15:00–16:00: Walk

This was a nice moment to have time to chat informally while having a look around the city or (in my case) sheltering from the pouring rain in a nearby cafe.

16:00–17:00: Plenary

Liz Stokoe presenting CARM

Liz Stokoe then gave a plenary talk that was unusual in that it focussed mostly on her methods of explaining EMCA to non-academics and on the challenges of making EMCA research both public and open to commercial opportunities in communication training with CARM. This kind of talk just doesn’t get done often enough in academia in general. In many fields academics treat discussing ‘impact’ as a necessary evil reserved for review boards, grant justifications and job applications – but never even mention it as part of PhD student training. Stokoe’s talk was open and honest about the challenges and conflicts in this process and it was really useful to see how someone had learned – through repeated efforts – to explain this kind of work to people effectively in non-academic workshop environments.

Although the talk didn’t really relate to the papers we’d read in preparation for the meeting, I actually thought this was a much more useful talk in the context than a purely academic presentation. Also, we still had plenty of time to ask Liz questions about her academic research afterwards and over a very nice dinner for all participants.

Day 2

After meeting up for a coffee at around 9, we split up into different groups of 6 again, this time for the presentation sessions.

09:00–10:00: Presentation Sessions

Chandrika Cycil presenting her data

I was presenting, so I only got to see one other presenter: Chandrika Cycil, who had some fantastic multi-screen data from her research on car-based interactions, focussing particularly on mobile uses of media technologies. There were some lovely recordings of a family occupying very differently configured but overlapping interactional environments (i.e. front-passenger seat / back-seat / driver) together. It was fascinating to see how they worked with and around these constraints in their interactions. For example, the ways the driver could use the stereo were really constrained by having to split focus with the road, whereas the child in the front passenger seat could exploit unmitigated access to the stereo to do all kinds of cheekiness.

I also got some really nice feedback and references from my presentation on rhythm in social interaction that I’ll be posting soon.

10:00–11:30: Data Session 2

I was also presenting in the final data session. This session – and the meeting as a whole – strongly reaffirmed my affection for the data session as a research practice. There’s no academic environment I’ve found to be so consistently collaborative, principled, and generous in terms of research ideas generated and shared. So – as always in data sessions – I got some amazing analytic ideas from Mengxi Pang, Yulia Lukyanova, Anna Wilson and Lorenzo Marvulli that I can’t wait to get working on.

11:30–12:00: Closing review

In the last session we had a chance to give feedback and start planning the next meeting – I think the dates we arrived at were the 27th/28th of October. If you’re a PhD student working with (or interested in working with) EMCA and didn’t make it to this one, I strongly recommend putting the dates in your diary for the next one!


References: Interactional Choreography

Saussure's Dancers

Here are the references from my talk at the 5th Ethnography and Qualitative Research Conference, University of Bergamo, June 5–7, 2014:

Interactional Choreography: rhythms of social interaction in the co-production of an aesthetic practice.

  • De Saussure, F. (1959). Course in general linguistics. (C. Bally & A. Sechehaye, Eds.). New York: Philosophical Library.
  • Iwasaki, S. (2011). The Multimodal Mechanics of Collaborative Unit Construction in Japanese Conversation. In J. Streeck, C. Goodwin, & C. LeBaron (Eds.), Embodied Interaction: Language and Body in the Material World (pp. 106–120). Cambridge: Cambridge University Press.
  • Jefferson, G. (1988). Preliminary notes on a possible metric which provides for a ‘standard maximum’ silence of approximately one second in conversation. In D. Roger & P. Bull (Eds.), Conversation: An interdisciplinary perspective. Clevedon, UK: Multilingual Matters.
  • Kirsh, D., Muntanyola, D., Jao, J., Lew, A., & Sugihara, M. (2009). Choreographic methods for creating novel, high quality dance. Proceedings of DESFORM, 5th International Workshop on Design and Semantics of Form and Movement, 188–195.
  • Lerner, G. (2002). Turn-sharing: The choral co-production of talk-in-interaction. In C. E. Ford, B. A. Fox, & S. A. Thompson (Eds.), The language of turn and sequence (pp. 225–257). New York: Oxford University Press USA.
  • Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379–423, 623–656.
  • Stivers, T., Enfield, N., Brown, P., Englert, C., Hayashi, M., Heinemann, T., … Levinson, S. C. (2009). Universals and cultural variation in turn-taking in conversation. Proceedings of the National Academy of Sciences of the United States of America, 106(26), 10587–92.
  • Whalen, J., Whalen, M., & Henderson, K. (2002). Improvisational choreography in teleservice work. The British Journal of Sociology, 53(2), 239–58.
