How do people with dementia and their carers use Alexa-type devices in the home?

We have a fully funded PhD position available (deadline 6th March 2020) to work with myself, Prof. Charles Antaki and Prof. Liz Peel in collaboration with The Alzheimer’s Society to explore the opportunities, risks and wider issues surrounding the use of AI-based voice technologies such as the Amazon Echo and home automation systems in the lives of people with dementia.

Voice technologies are often marketed as enabling people’s independence. For example, Amazon’s “Sharing is Caring” advert for its AI-based voice assistant Alexa shows an elderly man being taught to use the ‘remind me’ function of an Amazon Echo smart speaker by his young carer. But how accessible are these technologies in practice? How are people with dementia and carers using them in creative ways to solve everyday access issues? And what are the implications for policy given the consent and privacy issues?

The project will combine micro and macro-levels of analysis and research. On the micro-level, the successful applicant will be trained and/or supported to use video analysis to study how people with dementia collaborate with their assistants to adapt and use voice technologies to solve everyday access issues. On the macro-level, the project will involve working on larger scale operations and policy issues with Ian Mcreath and Hannah Brayford at The Alzheimer’s Society and within the wider Dementia Choices Action Network (#DCAN).

Through this collaboration, the research will influence how new technologies are used, interpreted and integrated into personalised care planning across health, social care and voluntary, community and social enterprise sectors.

The deadline is the 6th March 2020 (see the job ad for application details). All you need to submit for a first round application is a CV and a short form, with a brief personal statement. We welcome applications from people from all backgrounds and levels of research experience (training in specific research methods will be provided where necessary). We especially welcome applications from people with first-hand experience of disability and dementia, or with experience of working as a formal or informal carer/personal assistant.

This research will form part of the Adept at Adaptation project, looking at how disabled people adapt consumer AI-based voice technologies to support their independence across a wide range of impairment groups and applied settings.

The successful applicant will be supported through the ESRC Midlands Doctoral Training Partnership, and will have access to a range of highly relevant supervision and training through the Centre for Research in Communication and Culture at Loughborough University.

Feel free to contact me on s.b.albert@lboro.ac.uk with any informal inquiries about the post.


Adept at Adaptation

AI and voice technologies in disability and social care

There is a crisis in social care for disabled people, and care providers are turning to AI for high-tech solutions. However, research often focuses on medical interventions rather than on how disabled people adapt technologies and work with their carers to enhance their independence.

This project explores how disabled people adapt consumer voice technologies such as the Amazon Alexa to enhance their personal independence, and the wider opportunities and risks that AI-based voice technologies may present for future social care services.

We are using a Social Action research method to involve disabled people and carers in shaping the research from the outset, and conversation analysis to examine how participants work together using technology (in the broadest sense – including language and social interaction), to solve everyday access issues.

The project team includes myself, Elizabeth Stokoe, Thorsten Gruber, Crispin Coombs, Donald Hislop, and Mark Harrison.

Background

Voice technologies are often marketed as enabling people’s independence.

For example, a 2019 Amazon ad entitled “Morning Ritual” features a young woman with a visual impairment waking up, making coffee, then standing in front of a rain-spattered window while asking Alexa what the weather is like.

Many such adverts, policy reports and human-computer interaction studies suggest that new technologies and the ‘Internet of Things’ will help disabled people gain independence. However, technology-centred approaches often take a medicalized approach to ‘fixing’ individual disabled people, which can stigmatize them by presenting them as ‘broken’, and tend to offer high-tech, lab-based solutions over more realistic adaptations.

This project explores how voice technologies are used and understood by elderly and disabled people and their carers in practice. We will use applied conversation analysis – a method designed to show, in procedural detail, how people achieve routine tasks together via language and social interaction.

A simple example: turning off a heater

Here’s a simple example of the kind of process we are interested in.

In the illustration below, Ted, who is about to be hoisted out of his bed, gives a command to Alexa to turn off his heater (named ‘blue’) while his carer, Ann, moves around his bed, unclipping the wheel locks so she can move it underneath the hoist’s ceiling track. Before Ann can move the bed, she has to put away the heater. Before she can put it away, it must be switched off.


Ann leaves time and space for Ted to use Alexa to participate in their shared activity.

While Ann could easily have switched off the heater herself before moving it out of the way and starting to push the bed towards the hoist, she pauses her activity while Ted re-does his command to Alexa – this time successfully. You can see this sequence of events as it unfolds in the video below.

Here are a few initial observations we can make about this interaction.

Firstly, Ann is clearly working with Ted, waiting for him to finish his part of the collaborative task before continuing with hers. By pausing her action, she supports the independence of his part of their interdependent activity.

Secondly, using conversation analytic transcriptions and the automated activity log of the Amazon device, we can study the sequences of events that lead up to such coordination problems. For example, we can see that when Ted says the ‘wake word’ Alexa, it successfully pauses the music and waits for a command. We can see how Alexa mishears the reference to the heater name ‘blue’, then in lines 7 and 8, it apologizes and gives an account for simply giving up on fulfilling the request and unpausing the music.

Alexa mishears Ted’s reference to the heater ‘blue’, then terminates the request sequence

These moments give us insight both into the conversation design of voice and home automation systems, and also into how interactional practices and jointly coordinated care activities can support people’s independence.
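As a toy illustration of this kind of analysis, a transcript and a device activity log can be merged into a single timeline ordered by timestamp. Everything below (the timestamps, the utterances, the log entries and their format) is invented for illustration, not drawn from a real Amazon activity log:

```python
from datetime import datetime

# Hypothetical, simplified data illustrating the alignment of a
# conversation-analytic transcript with a device activity log.
transcript = [
    ("12:04:01", "Ted", "Alexa"),
    ("12:04:03", "Ted", "turn off blue"),
    ("12:04:06", "Alexa", "Sorry, I didn't find a device called bloo"),
    ("12:04:12", "Ted", "Alexa, turn off blue"),
]
device_log = [
    ("12:04:01", "wake word detected; music paused"),
    ("12:04:05", "request failed: unknown device 'bloo'"),
    ("12:04:06", "music resumed"),
    ("12:04:13", "smart plug 'blue' switched off"),
]

def merged_timeline(transcript, device_log):
    """Interleave transcript lines and device-log entries by timestamp."""
    events = [(t, f"{who}: {what}") for t, who, what in transcript]
    events += [(t, f"[device] {entry}") for t, entry in device_log]
    return sorted(events, key=lambda e: datetime.strptime(e[0], "%H:%M:%S"))

for t, event in merged_timeline(transcript, device_log):
    print(t, event)
```

Sorting the combined stream makes the sequential structure of the trouble visible at a glance: the failed reference, the device ‘giving up’, and the successful re-doing of the command.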

Thanks to

The British Academy/Leverhulme Small Research Grants scheme for funding the pilot project.


The EMCAwiki reached a milestone in 2019

I recently wrote this email to the wonderful admins of the Ethnomethodology and Conversation Analysis Wiki (http://emcawiki.net) to congratulate them on reaching a real milestone in this community project. We don’t really have a place to share these things yet so I’m putting it here.

If you are reading this and would like to get involved in the wiki or related projects mentioned here, please drop me an email or message me on Twitter.

Dear Paul and the EMCA wiki team,

I can’t quite believe it’s been six years since we started the EMCAwiki project – when Paul sent out an email via the languse mailing list asking for help with his original EM/CA news site and we began the discussions that led to the lovely bibliography wiki we now run.

Towards the end of 2019 we finally completed the transfer of all remaining legacy bibliography entries to the new wiki format from Paul’s original very long PDF files. We now host a grand total of 8537 entries – from our first entry: (Harold Garfinkel, (1949), “Research Note on Inter- and Intraracial Homicides”, Social Forces, vol. 27, no. 4, pp. 369–381.) – to a host of new papers published as recently as this first week of 2020. We can now begin consolidating and standardizing our work (as Andrei Korbut has been doing brilliantly over the last few months) – making sure things are consistent, and then thinking about how to explore, analyze and share the EM/CA bibliometric data we now have at our disposal. I’ll write more about that below – but this is quite an achievement, and I’m very grateful to all of you, for putting in such incredibly generous and dedicated work.

Before I say anything more or propose any new projects or initiatives, I should say that one of the things I like most about the EM/CA wiki is the almost total lack of administrative overheads. So many things in academic life are bogged down with committees, meetings, action items etc… I love the fact that from the beginning we’ve not really done that, but have mostly just got on with the tasks we thought necessary to the best of our individual and collective abilities. We’ve sometimes made efforts to meet up at conferences, which has been fun, and have continued to take the pragmatic approach of just doing what we can when possible without undue pressure or overarching expectations. This is outstanding, and long may it continue.

Having said that, I did take on a new role in 2019 – that of ISCA communications & information officer, and I now participate in more admin meetings than I would usually aim for. These are great fun, and some have included ideas that involve EMCA wiki. I wanted to share some of those ideas with you now, and leave it open to you all to respond (or not) in what is now a time-honored laid-back tradition of the EMCAwiki admins.

Firstly, I am aware that I was elected to the ISCA board because of this project and all of your work. I would like to acknowledge that publicly by adding a page to the new ISCA website I’m currently developing – aiming to launch it towards the end of January 2020. I have kept a list of admins here: http://emcawiki.net/The_EMCA_wiki_Admins – I like the fact that we have all done different things at different times – and some of us have been more active than others. I hope that continues. If you would really prefer not to be acknowledged for what you’ve done – or what you may do in the future – let me know.

Secondly, I am working with Lucas Seuren and a great group of ECRs from around the world on an exciting new ISCA project. This will draw on the content in the EMCA wiki and promote it to a wider audience, as well as inviting contributions beyond bibliography entries (e.g. lists of up-to-date equipment, cross-cultural ethics frameworks for data recording, shared syllabi, useful YouTube videos etc.). I hope that this will contribute positively to the wiki, without increasing any administrative overhead. Of course if any of you would like to contribute to that project too, please let me know.

Thirdly, I am aware that there are lots of features of the wiki that I have long promised to implement – and I have long delayed that implementation. I’ve written many of them down here: http://emcawiki.net/User:SaulAlbert. There is also a list of ‘known issues’ that have been pointed out as problems over the years: http://emcawiki.net/Known_Current_Emcawiki_Issues – I’m going to acknowledge now that I doubt I’ll ever have time to implement any of these myself. I have fewer and fewer of the kind of uninterrupted stretches of code-hacking time that software development requires. Instead, I’m going to try to raise funds to pay professional programmers and systems administrators to do this. I think it’s something I could find a funder to support, and I’ll work on this – with your consent – and (if you have any ideas/time/funders) your involvement and collaboration.

I hope all of that sounds OK. I’m just going to get on with it slowly, and will welcome any thoughts/feedback/initiatives/and ideas that you all have over the next decade.

All the best, and happy 2020,


Ask Alexa Anything (a discussion tool)

For the Adept at Adaptation project, we have been developing interview protocols for discussing the potential benefits and harms of voice technologies and smart homecare systems.

Initially, we designed a set of very traditional social science-style questions:

  • Draft interview Schedule (paired interviews with disabled person and their carer)
  • General
    • Can you describe your disability, and give an overview of how it affects your day-to-day life?
    • Can you describe your living arrangements (i.e. do you live alone and independently, live with others, etc.)?
    • Do you have support, in day to day living, from family members?
    • What type of activities do you need care support with, and how regularly do carers provide support?
    • How important is care support to you?
    • Are there any problems/challenges/limitations with your care support package?
  • Technology (non-specific)
    • Can you describe the main/most important technologies that facilitate your day-to-day living (including wheelchairs, cars, communication technology, computers, technologies in the home ..), and the capabilities they provide for you?
      • Who is responsible for providing/servicing/supporting these technologies (i.e. self, hospital, local authority ..)
      • To what extent have these technologies had to be adapted for you?
      • Can you give examples of the type of adaptations required?
      • Who does these adaptations, and how simple/difficult were they to do?
      • Are there adaptations that would be beneficial, but that you are not able to make?
    • In what ways, if at all, do carers need to use these technologies in providing day-to-day care?
      • Are carers involved in purchasing/organising/adapting/servicing the technologies you use?
  • Are there technologies you would like, but are not able to have (due to funding limitations etc.)?
  • Technology (AI/Voice)
    • To what extent, if at all, do you use Alexa-type AI voice-technologies?
    • What activities do they help you with?
    • In what way, if at all, has their use affected the type of care you receive, and the work of your carers?
    • Are there challenges/limitations in using these technologies?
    • Has it been necessary/useful to adapt these technologies, and have you been able to do that?

However, after initial discussions, many members of our project reference group felt that these questions were very impairment-focused and too restrictive in terms of the research agenda. We then decided to add a further step in the research process by asking our initial group of participants how they would ask these questions. The result of this process was a discussion format based on a more conversational, playful, and imaginative way to talk about technologies, which we ended up calling ‘Ask Alexa Anything’.

This format enabled us to explore the possible meanings and uses of voice technologies in people’s lives without necessarily loading our questions with presuppositions. People came up with all kinds of unexpected responses (hard to achieve with a more directive survey format!), and this experience of a Social Action Research process began to guide our thinking about the project as a whole.

For more information about Social Action Research, see Mark Harrison’s publications on http://socialaction.info.


Drawing, multimodality and interaction analytics

Image from the 2018 Drawing Interactions workshop at the University of Liverpool, London

On the 28th November 2019 I’m running this workshop in London for the National Centre for Research Methods with Pat Healey, Matthew Tobias Harris, Claude Heath, and Sophie Skach, which focuses on drawing as a method in interaction analysis. It’s open to any researcher and/or draftsperson – regardless of experience with conversation analysis or drawing. The aim is to introduce artists and social scientists to each other’s methods for visual analysis, inductive observation and inscription of research objects. Places are limited, so please sign up at the link below:

https://www.swdtp.ac.uk/event/ncrm-training-drawing-multimodality-and-interaction-analytics/

Workshop abstract

Analysing embodied interaction enables researchers to study the qualitative details of communication and to do reliable coding of interaction for quantification. Some researchers use video stills and word processing software to add arrows and highlights. Others use simple sketches or tracings to present their research findings in their final published results. However, until now, no dedicated courses have been offered that teach drawing as a method for the transcription and analysis of social interaction.

This one-day course will introduce researchers to the theory and method of conversation analysis, and to new graphical tools, transcription methods, and software systems that are available for multimodal analysis of audio-visual data. It will involve short presentations, group discussions and practical work including video data gathering, transcription and analysis. No special equipment is required, although we encourage participants to bring some means of recording video (e.g. a phone or other digital camera).

This course is aimed at researchers across disciplines with an interest in face-to-face social interaction and communication (human or animal, face-to-face or video-mediated). No prior experience of drawing or conversation and discourse analysis is necessary, since we will cover the basics required to learn independently.

Learning outcomes

This course will introduce you to methods, techniques and tools for analysing embodied social interaction.

The course covers:

  • Conversation analytic methods for collecting, transcribing and analysing video data.
  • Drawing techniques for use in field notes and in exploratory data analysis sessions.
  • How to create and use multimodal transcripts for data analysis and presentation of results.
  • Software tools for creating and sharing computer-readable graphical transcriptions.
  • Future directions for multimodal interaction analytics e.g. automation and open science.


Drawing Interactions

Read our (2019) OA paper based on this project: Drawing as transcription: how do graphical techniques inform interaction analysis?

The Drawing Interactions project aims to develop new graphical techniques and tools for the transcription, analysis and presentation of research into social interaction.

The Drawing Interactions Prototype App (& source code)

In conversation analytic research, Jeffersonian transcripts of talk are usually used with traced outlines or video stills, and these techniques primarily focus on presenting polished research findings for finished publications. But what about the exploratory phases of research, such as initial transcription or collaborative inspection at data sessions? The Drawing Interactions project uses traditional artistic still life and figure drawing techniques and detailed studies of analysts’ work practices as key starting points to inform the development of graphical tools and techniques for the transcription, analysis and presentation of social interaction.

The project team includes myself, Pat Healey, Toby Harris, Claude Heath, and Sophie Skach. We have created a software prototype and a workshop/training format to support the use of drawing for interaction research and the social sciences more generally.

The idea grew out of The Fine Art of Conversation CogSci workshop which explored artistic methods for depicting interaction in classical painting and sculpture. Here is a demo video of the current prototype and a detailed project report outlining developments so far.


Re/drawing interactions: an EM/CA video tools development workshop

As part of the Drawing Interactions project (see report), Pat Healey, Toby Harris, Claude Heath, Sophie Skach and I ran a workshop at New Developments in Ethnomethodology in London (March 2018) to teach interaction analysts how and why to draw.

Sophie Skach leading the life drawing workshop at New Directions in Ethnomethodology

Here’s the workshop abstract:

Ethnomethodological and conversation analytic (EM/CA) studies often use video software for transcription, analysis and presentation, but no such tools are designed specifically for EM/CA. There are, however, many software tools commonly used to support EM/CA research processes (Hepburn & Bolden, 2017 pp. 152-169; Heath, Hindmarsh & Luff 2010 pp. 109-132), all of which adopt one of two major paradigms. On the one hand, horizontal scrolling timeline partition-editors such as ELAN (2017) facilitate the annotation of multiple ‘tiers’ of simultaneous activities. On the other hand, vertical ‘lists of turns’ editors such as CLAN (MacWhinney, 1992) facilitate a digital, media-synced version of Jefferson’s representations of turn-by-turn talk. However, these tools and paradigms were primarily designed to support forms of coding and computational analysis in interaction research that have been anathema to EM/CA approaches (Schegloff 1993). Their assumptions about how video recordings are processed, analyzed and rendered as data may have significant but unexamined consequences for EM/CA research. This 2.5 hour workshop will reflect on the praxeology of video analysis by running a series of activities that involve sharing and discussing diverse EM/CA methods of working with video. Attendees are invited to bring a video they have worked up from ‘raw data’ to publication, which we will re-analyze live using methods drawn from traditions of life drawing and still life. A small development team will build a series of paper and software prototypes over the course of the workshop week, aiming to put participants’ ideas and suggestions into practice. Overall, the workshop aims to inform the ongoing development of software tools designed reflexively to explore, support, and question the ways we use video and software tools in EM/CA research.

References

ELAN (Version 5.0.0-beta) [Computer software]. (2017, April 18). Nijmegen: Max Planck Institute for Psycholinguistics. Retrieved from https://tla.mpi.nl/tools/tla-tools/elan/

Heath, C., Hindmarsh, J., & Luff, P. (2010). Video in qualitative research: analysing social interaction in everyday life. London: Sage Publications.

Hepburn, A., & Bolden, G. B. (2017). Transcribing for social research. London: Sage.

MacWhinney, B. (1992). The CHILDES project: Tools for analyzing talk. Hillsdale, NJ: Lawrence Erlbaum.

Schegloff, E. A. (1993). Reflections on Quantification in the Study of Conversation. Research on Language & Social Interaction, 26(1), 99–128.


Vocalizations as evaluative assessments in a novice partner dance workshop

I’m presenting with Dirk vom Lehn on a panel organized by two fantastic EM/CA scholars, Richard Ogden and Leelo Keevallik, on ‘non-lexical vocalizations’. We’re using some great video data we collected featuring novice dancers in a Swing Patrol ‘Dance in a Day’ workshop as part of the dance as interaction project.

CA studies of assessments as distinct, sequentially organized social actions (Pomerantz, 1984) have tended to define assessments for the purposes of data selection (Ogden, 2006, p. 1758) as “utterances that offer an evaluation of a referent with a clear valence” (Stivers & Rossano, 2010). However, this definition may exclude evaluative practices where the ‘valenced’ terms of assessment are more equivocal. It also obscures how the valences that mark out an utterance as an assessment are produced interactionally in the first place. This paper follows Goodwin & Goodwin’s (1992) proposal that assessment ‘segments’ (words like ‘good’ or ‘beautiful’), and assessment ‘signals’ (vocalizations like “mmm!” or “ugh!”) are organized into sequential ‘slots’ that render both ‘segments’ and ‘signals’ reflexively accountable as evaluative ‘assessment activities’. Data are drawn from recordings of a novice partner dance workshop at moments where teachers’ pro-forma terminal assessments marking the completion of a dance practice session co-occur with students’ evaluative assessment activities. Analysis shows how students use non-lexical vocalizations as evaluative assessments after imitating the bodily-vocal demonstrations (Keevallik, 2014) of the teachers and completing an unfamiliar dance move together. Extract 1 shows one example of these non-lexical vocalizations as dance partners Paul and Mary complete a new dance movement while the teachers call out rhythms and instructions.

Extract 1
(video: http://bit.ly/CADA_SP_03)


1 Tch1: tri:ple and ⌈rock step (0.8) BRINGING I::n. a::n rock step
2 Tch2:             ⌊rock step tri:ple an tri:ple a::n ro̲c̲k step
3 Tch1: tri:ple (.) tri:ple.≈
4 Mary: ≈⌈So̲rry. <(I’m a) little AUa:⁎U:h⁎ ((Shifts arm down Paul’s shoulder))
5 Tch2:  ⌊(a::nd then sto:p?)
6 Paul: Ye:: sHheh a:̲h⌈- yeh. (.) ∙HEh UhUH ->
7 Mary:               ⌊it⌈'s li- Au̲h- uh. ((Re-does and emphasizes arm-shift))
8 Tch1:                  ⌊ROTATE P::̲↑ARTne::::r::s::.
9        (0.8)
10 Mary:  ⌈Eya̲a̲::: ((Makes a clawing gesture))
11 Paul:  ⌊The bh- the bi̲:cep clench (°>dy'a know wha' I mean<°)≈ ->
12 Mary: ≈↑Y e̲a̲h̲h̲.⌈ it's- it's b- hh((Re-does and emphasizes clawing gesture))
13 Paul:          ⌊HAH hah Ha::h °hah hah° ∙HHh Heh heh ∙hh
14 Tch1: SO: WITH YOUR NE̲W̲ P:̲A̲R̲TNE:⌈:r.
15 Paul:                           ⌊That's an odd way of descri:bing it.

The analysis suggests that non-lexical vocalizations provide a useful resource for evaluating the achievement of as-yet-unfamiliar joint actions and managing and calibrating subtle degrees and dimensions of individual and mutual accountability for troubles encountered in learning a new, unfamiliar partner dance movement.

References

  • Goodwin, C., & Goodwin, M. H. (1992). Context, activity and participation. In P. Auer & A. Di Luzio (Eds.), The contextualization of language (pp. 77–100). Amsterdam: John Benjamins.
  • Keevallik, L. (2014). Turn organization and bodily-vocal demonstrations. Journal of Pragmatics, 65, 103–120.
  • Ogden, R. (2006). Phonetics and social action in agreements and disagreements. Journal of Pragmatics, 38(10), 1752–1775.
  • Pomerantz, A. (1984). Agreeing and disagreeing with assessments: Some features of preferred/dispreferred turn shapes. In J. M. Atkinson & J. Heritage (Eds.), Structures of social action: Studies in conversation analysis (pp. 57–102). Cambridge: Cambridge University Press.
  • Stivers, T., & Rossano, F. (2010). Mobilizing Response. Research on Language & Social Interaction, 43(1), 3–31.


Noticings as actions-in-conversation, an ICCA 2018 panel

Mick Smith and I are organizing this panel on noticings at ICCA 2018. We’re really excited to have submissions from some amazing EM/CA scholars to help us explore these questions of action formation / ascription, embodiment, multiactivity, and reference across at least three languages.

Noticings as actions-in-conversation are a ubiquitous, versatile, but under-researched phenomenon (Keisanen, 2012). Schegloff (2007b, p. 218) suggests that noticings “put on offer a line of talk” that renders something optionally relevant for subsequent interaction, although Stivers & Rossano’s (2010) study of the diminished ‘response-relevance’ of noticings leads some analysts to question whether noticings function as social actions (Thompson, Fox, & Couper-Kuhlen, 2015, p. 141) formed from prospectively paired ‘action types’ (Levinson, 2013), or whether they are organised—as Schegloff (2007b, p. 219) suggests—as a generic retro-sequence pointing backwards to a prior ‘noticeable’. Alongside these debates, C. Goodwin & Goodwin (2012) focus on how noticings point “outside of talk”, drawing as-yet-unnoticed resources into embodied social action. Without pre-specifying any one analytic characterization, this panel brings together research that explores the ambiguities of noticings as social actions alongside a range of mobile and embodied practices where describing (Sidnell & Barnes, 2009), referring (Hindmarsh & Heath, 2000), and categorizing may also be at issue (Schegloff, 2007a). Alongside empirical studies, contributors also address theoretical questions that arise from treating noticings as conversational devices. How are researchers’ noticings and participants’ noticings differently constitutive of interactional phenomena (Laurier, 2013)? Do noticings emerge reflexively as part of a particular interactional environment and work towards particular interactional ends (Schegloff, 2007a, p. 87 note 17), or are analytic invocations of ‘noticing’ in CA flawed descriptions that obscure more of the action than they clarify? Drawing together diverse approaches to noticings, this panel asks how understanding noticings as actions-in-conversation may open up new empirical and theoretical questions and challenges.

References

  • Goodwin, C., & Goodwin, M. H. (2012). Car talk: Integrating texts, bodies, and changing landscapes. Semiotica, 191(1/4), 257–286.
  • Hindmarsh, J., & Heath, C. (2000). Embodied reference: A study of deixis in workplace interaction. Journal of Pragmatics, 32(12), 1855–1878.
  • Keisanen, T. (2012). “Uh-oh, we were going there”: Environmentally occasioned noticings of trouble in in-car interaction. Semiotica, 191(1/4), 197–222.
  • Laurier, E. (2013). Noticing: Talk, gestures, movement and objects in video analysis. In R. Lee, N. Castree, R. Kitchin, V. Lawson, A. Paasi, C. Philo, … C. W. Withers (Eds.), The SAGE Handbook of Human Geography (2nd ed., Vol. 31, pp. 250–272). London: Sage.
  • Levinson, S. C. (2013). Action formation and ascription. In J. Sidnell & T. Stivers (Eds.), The Handbook of Conversation Analysis (pp. 101–130). Oxford: John Wiley & Sons.
  • Schegloff, E. A. (2007a). A tutorial on membership categorization. Journal of Pragmatics, 39(3), 462–482.
  • Schegloff, E. A. (2007b). Sequence organization in interaction: Volume 1: A primer in conversation analysis, Cambridge: Cambridge University Press.
  • Sidnell, J., & Barnes, R. (2009). Alternative, subsequent descriptions. In J. Sidnell, M. Hayashi, & G. Raymond (Eds.), Conversational repair and human understanding (pp. 322–342). Cambridge: Cambridge University Press.
  • Stivers, T., & Rossano, F. (2010). Mobilizing Response. Research on Language & Social Interaction, 43(1), 3–31.
  • Thompson, S. A., Fox, B. A., & Couper-Kuhlen, E. (2015). Grammar in everyday talk: Building responsive actions. Cambridge: Cambridge University Press.


Getting a backchannel in wordwise: using “big data” with CA

Here’s the abstract to an ICCA 2018 paper I’m working on with J.P. de Ruiter at the Human Interaction Lab at Tufts. The goal is to use computational linguistic methods (that often use the term ‘backchannel’) to see if all these responsive particles really belong in one big undifferentiated ‘bucket’.

Many studies of dialogue use the catch-all term ‘backchannel’ (Yngve, 1970) to refer to a wide range of utterances and behaviors as forms of listener-feedback in interaction. The use of this wide category ignores nearly half a century of research into the highly differentiated interactional functions of ‘continuers’ such as ‘uh huh’ or ‘wow’ (Schegloff, 1982, Goodwin, 1986), acknowledgement tokens such as ‘yeah’, ‘right’ or ‘okay’ (Jefferson, 1984; Beach, 1993) and change-of-state markers such as ‘oh’ or ‘nå’ (Heritage, 1984; Heinemann, 2017). These studies show how participants use responsive particles as fully-fledged, individuated, and distinctive words that do not belong in an undifferentiated functional class of ‘backchannels’ (Sorjonen, 2001). For this paper we use the Conversation Analytic British National Corpus (CABNC) (Albert, L. de Ruiter & J. P. de Ruiter, 2015) – a 4.2M word corpus featuring audio recordings of interaction from a wide variety of everyday settings that facilitates ‘crowdsourced’ incremental improvements and multi-annotator coding. We use Bayesian model comparison to evaluate the relative predictive performance of two competing models. In the first of these, all ‘backchannels’ imply the same amount of floor-yielding, while the second, CA-informed model assumes that different response tokens are more or less effective in ushering extended turns or sequences to a close. We argue that using large corpora together with statistical models can also identify candidate ‘deviant cases’, providing new angles and opportunities for ongoing detailed, inductive conversation analysis. We discuss the methodological implications of using “big data” with CA, and suggest key guidelines and common pitfalls for researchers using large corpora and statistical methods at the interface between CA and cognitive psychology (De Ruiter & Albert, 2017).
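The kind of model comparison described above can be sketched with toy data: under the pooled model all response tokens share one probability of the sequence closing, while the token-differentiated model fits one probability per token. Everything in the sketch below – the counts, and the use of BIC as a rough stand-in for the Bayesian evidence – is an illustrative assumption, not the paper’s actual analysis:

```python
import math

# Hypothetical counts of (continuations, closings) following each response
# token. These numbers are made up for illustration, not drawn from the CABNC.
counts = {
    "uh huh": (90, 10),   # continuer: the speaker usually keeps the floor
    "yeah":   (60, 40),
    "okay":   (30, 70),   # often ushers the sequence to a close
    "oh":     (55, 45),
}

def binom_loglik(k, n, p):
    """Binomial log-likelihood, dropping the n-choose-k constant,
    which cancels when comparing models on the same data."""
    p = min(max(p, 1e-9), 1 - 1e-9)
    return k * math.log(p) + (n - k) * math.log(1 - p)

# Model 1: every 'backchannel' shares a single closing probability.
total_close = sum(close for _, close in counts.values())
total_n = sum(cont + close for cont, close in counts.values())
p_pooled = total_close / total_n
ll1 = sum(binom_loglik(close, cont + close, p_pooled)
          for cont, close in counts.values())

# Model 2: each response token gets its own closing probability (MLE).
ll2 = sum(binom_loglik(close, cont + close, close / (cont + close))
          for cont, close in counts.values())

# BIC as a large-sample approximation to the evidence; lower is better,
# and the per-token model pays a penalty for its extra parameters.
bic1 = 1 * math.log(total_n) - 2 * ll1
bic2 = len(counts) * math.log(total_n) - 2 * ll2
print(f"pooled BIC: {bic1:.1f}   per-token BIC: {bic2:.1f}")
```

With counts as skewed as these, the per-token model wins despite its penalty; if every token yielded the floor at roughly the same rate, the pooled model would be favoured instead.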

References (including references for the final talk, which cites many more works than this abstract):

  • Albert, S., De Ruiter, L., & De Ruiter, J. P. (2015). The CABNC. Retrieved 9 September 2017 from https://saulalbert.github.io/CABNC/
  • Albert, S., & De Ruiter, J.P. (2018, in press), Ecological grounding in interaction research. Collabra: Psychology.
  • Beach, W. A. (1990). Searching for universal features of conversation. Research on Language & Social Interaction, 24(1–4), 351–368.
  • Bolden, G. B. (2015). Transcribing as Research: ‘Manual’ Transcription and Conversation Analysis. Research on Language and Social Interaction, 48(3), 276–280. https://doi.org/10.1080/08351813.2015.1058603
  • de Ruiter, J. P., & Albert, S. (2017). An Appeal for a Methodological Fusion of Conversation Analysis and Experimental Psychology. Research on Language and Social Interaction, 50(1), 90–107. https://doi.org/10.1080/08351813.2017.1262050
  • Goodwin, C. (1986). Between and within: Alternative sequential treatments of continuers and assessments. Human Studies, 9(2), 205–217. https://doi.org/10.1007/BF00148127
  • Greiffenhagen, C., Mair, M., & Sharrock, W. (2011). From Methodology to Methodography: A Study of Qualitative and Quantitative Reasoning in Practice. Methodological Innovations Online, 6(3), 93–107. https://doi.org/10.4256/mio.2011.009
  • Hayashi, M., & Yoon, K. (2009). Negotiating boundaries in talk. Conversation Analysis: Comparative Perspectives, 27, 250.
  • Hepburn, A., & Bolden, G. B. (2017). Transcribing for social research. London: Sage.
  • Heritage, J. (1984). A change-of-state token and aspects of its sequential placement. In M. Atkinson & J. Heritage (Eds.), Structures of social action: Studies in conversation analysis (pp. 299–345). Cambridge: Cambridge University Press.
  • Heritage, J. (1998). Oh-prefaced responses to inquiry. Language in Society, 27(3), 291–334. https://doi.org/10.1017/S0047404500019990
  • Heritage, J. (2002). Oh-prefaced responses to assessments: A method of modifying agreement/disagreement. In C. E. Ford, B. A. Fox, & S. A. Thompson (Eds.), The Language of Turn and Sequence (pp. 1–28). New York: Oxford University Press.
  • Hoey, E. M., & Kendrick, K. H. (2017). Conversation Analysis. In A. M. B. de Groot & P. Hagoort (Eds.), Research Methods in Psycholinguistics: A Practical Guide (pp. 151–173). Hoboken, NJ: Wiley-Blackwell.
  • Housley, W., Procter, R., Edwards, A., Burnap, P., Williams, M., Sloan, L., … Greenhill, A. (2014). Big and broad social data and the sociological imagination: A collaborative response. Big Data & Society, 1(2). https://doi.org/10.1177/2053951714545135
  • Jefferson, G. (1981). On the Articulation of Topic in Conversation. Final Report. London: Social Science Research Council.
  • Jefferson, G. (1984). Notes on a systematic deployment of the acknowledgement tokens ‘Yeah’ and ‘Mm hm’. Papers in Linguistics, 17(2), 197–216. https://doi.org/10.1080/08351818409389201
  • Kendrick, K. H. (2017). Using Conversation Analysis in the Lab. Research on Language and Social Interaction, 1–11. https://doi.org/10.1080/08351813.2017.1267911
  • MacWhinney, B. (1992). The CHILDES project: Tools for analyzing talk. Child Language Teaching and Therapy, (2000).
  • Nishizaka, A. (2015). Facts and Normative Connections: Two Different Worldviews. Research on Language and Social Interaction, 48(1), 26–31. https://doi.org/10.1080/08351813.2015.993840
  • Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences, 115(11), 2600–2606. https://doi.org/10.1073/pnas.1708274114
  • Ochs, E. (1979). Transcription as theory. In E. Ochs & B. B. Schieffelin (Eds.), Developmental pragmatics (pp. 43–72). New York: Academic Press.
  • Potter, J., & te Molder, H. (2005). Talking cognition: Mapping and making the terrain. In J. Potter & D. Edwards (Eds.), Conversation and cognition (pp. 1–54).
  • Sacks, H. (1963). Sociological description. Berkeley Journal of Sociology, 1–16.
  • Schegloff, E. A. (1982). Discourse as an interactional achievement: Some uses of ‘uh huh’ and other things that come between sentences. In D. Tannen (Ed.), Analyzing discourse: Text and talk (pp. 71–93). Georgetown University Press.
  • Schegloff, E. A. (2007). Sequence organization in interaction: Volume 1: A primer in conversation analysis. Cambridge: Cambridge University Press.
  • Steensig, J., & Heinemann, T. (2015). Opening Up Codings? Research on Language and Social Interaction, 48(1), 20–25. https://doi.org/10.1080/08351813.2015.993838
  • Stivers, T. (2015). Coding Social Interaction: A Heretical Approach in Conversation Analysis? Research on Language and Social Interaction, 48(1), 1–19. https://doi.org/10.1080/08351813.2015.993837
  • Rühlemann, C. (2017). Integrating Corpus-Linguistic and Conversation-Analytic Transcription in XML: The Case of Backchannels and Overlap in Storytelling Interaction. Corpus Pragmatics, 1(3), 201–232.
  • Rühlemann, C., & Gee, M. (2018). Conversation Analysis and the XML method. Gesprächsforschung–Online-Zeitschrift Zur Verbalen Interaktion, 18.
  • Wittenburg, P., Brugman, H., Russel, A., Klassmann, A., & Sloetjes, H. (2006). ELAN: a professional framework for multimodality research. In 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 1556–1559).
  • Yngve, V. (1970). On getting a word in edgewise. Chicago Linguistic Society, 6th Meeting, 566–579. Retrieved from http://ci.nii.ac.jp/naid/10009705656/

Getting a backchannel in wordwise: using “big data” with CA