
An artificial turn in social interaction research?

Jakub Mlynář, Andreas Liesenfeld, Renata Topinková, Wyke Stommel, Lynn de Rijk, and Saul Albert for the 6th Copenhagen Multimodality Day: Interacting with AI

The turn towards multimodality and embodiment in interaction research has yielded new terminology and representational schemas in key publications (Nevile 2015). At the intersections of multidisciplinary fields – e.g., ethnomethodological and conversation analytic (EMCA) research exploring interactions between humans and ‘AI’, social robots, and conversational user interfaces – such methodological changes are even harder to track. How do these approaches to the meticulous, naturalistic study of technologies in (and of) social interaction reframe the key terms, schemas and practices that constitute AI as a field of technosocial activity? Largely grounded in the EMCA Wiki bibliography, we map this emerging field and report on a bibliometric review of 90 publications directly relevant to EMCA studies of AI (broadly defined), including social robots and their components such as voice interfaces.

We found that the works most frequently cited in the EMCA+AI corpus are classics from the canon of human interaction research (Garfinkel, Sacks, Schegloff, Goffman), including multimodality (Goodwin, Heath), human-machine interaction (Suchman), and STS (Latour). The most frequently cited texts are Sacks, Schegloff and Jefferson’s (1974) ‘turn-taking paper’ (in 45% of items from the corpus), Garfinkel’s (1967) Studies (40%), and Suchman’s (1987) book (31%). Among texts dealing specifically with AI from an EMCA perspective, Porcheron et al.’s (2018) paper on voice user interfaces is the most cited (11%). Apart from this one, two other texts feature as citation hubs: Alač’s (2016) and Pitsch et al.’s (2013) papers on social robots and embodiment. The study aims to provide a starting point for discussion about how concepts such as embodiment, agency and interaction are shared, used and understood through the practice of academic citation.

References 

Nevile, M. (2015). The Embodied Turn in Research on Language and Social Interaction. Research on Language and Social Interaction, 48(2), 121–151.

The interactional coordination of virtual and personal assistants in a homecare setting

Saul Albert, Magnus Hamann & Elizabeth Stokoe (for the 6th Copenhagen Multimodality Day), October 2021.

Policymakers and care service providers are increasingly looking to technological developments in AI and robotics to augment or replace health and social care services in the context of a demographic ageing crisis (House of Lords, 2021; Kingston et al., 2018; Topol, 2019, pp. 54–55). However, there is still little evidence as to how these technologies might be applied to everyday social care situations (Maguire et al., 2021). This paper uses conversation analysis of ~100 hours of video recorded interactions between a disabled person, their virtual assistant (Alexa), and their (human) personal assistant to explore how routine care tasks are organized in a domestic setting. We focus on how the human participants organize conversational turn-space around ‘turns-at-use’ with the virtual assistant – specifically, how turns-at-use ostensibly designed for the virtual assistant can recruit overhearing others. Further, we show how participants include the virtual assistant in their shared taskscape by, for example, putting ongoing activities and conversations on hold, visibly reorienting their bodies, or explicitly making themselves available for – or requesting – assistance when coordination trouble emerges between the machine-human dyad. Our findings show that virtual assistants expand the affordances of a homecare environment but do not replace the work of personal assistants.

References

House of Lords. (2021). Ageing: Science, Technology and Healthy Living (p. 132). House of Lords Science and Technology Select Committee. https://publications.parliament.uk/pa/ld5801/ldselect/ldsctech/183/183.pdf

Kingston, A., Comas-Herrera, A., & Jagger, C. (2018). Forecasting the care needs of the older population in England over the next 20 years: Estimates from the Population Ageing and Care Simulation (PACSim) modelling study. The Lancet Public Health, 3(9), e447–e455. https://doi.org/10.1016/S2468-2667(18)30118-X

Maguire, D., Honeyman, M., Fenney, D., & Jabbal, J. (2021). Shaping the future of digital technology in health and social care. The King’s Fund. https://www.kingsfund.org.uk/publications/future-digital-technology-health-social-care

Topol, E. (2019). The Topol Review: Preparing the healthcare workforce to deliver the digital future (p. 103). Health Education England. https://topol.hee.nhs.uk/wp-content/uploads/HEE-Topol-Review-2019.pdf

Putting wake words to bed

Magnus Hamann and I wrote a provocation paper for the third Conference on Conversational User Interfaces (CUI 2021).

In it, we argue (hopefully provocatively) that voice user interface designers should stop using wake words like “Alexa” and “Hey Siri”, which are crowding each other out of the audible environment of the smart home. Our point is that, as interface elements, wake words are misleading: users seem to treat them like fully-fledged interactional summons, when they’re really little more than glorified ‘on’ buttons.

We got a surprisingly positive response from the technically-inclined audience at the conference. I found it surprising mostly because wake words are so ubiquitous and central to the branding and functionality of today’s voice interfaces that it seems hard to imagine them being phased out in favour of something more prosaic.

You can read the full paper on the ACM site, or a preprint here.

References

  1. Charles Goodwin. 2007. Interactive footing. In Reporting Talk, Elizabeth Holt and Rebecca Clift (eds.). Cambridge University Press, Cambridge, 16–46. DOI:https://doi.org/10.1017/CBO9780511486654.003
  2. Alexa Hepburn and Galina B Bolden. 2017. Transcribing for social research. Sage, London.
  3. William Housley, Saul Albert, and Elizabeth Stokoe. 2019. Natural Action Processing. In Proceedings of the Halfway to the Future Symposium 2019 (HTTF 2019), Association for Computing Machinery, Nottingham, United Kingdom, 1–4. DOI:https://doi.org/10.1145/3363384.3363478
  4. Razan Jaber, Donald McMillan, Jordi Solsona Belenguer, and Barry Brown. 2019. Patterns of gaze in speech agent interaction. In Proceedings of the 1st International Conference on Conversational User Interfaces – CUI ’19, ACM Press, Dublin, Ireland, 1–10. DOI:https://doi.org/10.1145/3342775.3342791
  5. Seung-Hee Lee. 2006. Second summonings in Korean telephone conversation openings. Language in Society 35, 2. DOI:https://doi.org/10.1017/S0047404506060118
  6. Gene H Lerner. 2003. Selecting next speaker: The context-sensitive operation of a context-free organization. Language in Society 32, 2, 177–201. DOI:https://doi.org/10.1017/S004740450332202X
  7. Ewa Luger and Abigail Sellen. 2016. “Like Having a Really Bad PA”: The Gulf between User Expectation and Experience of Conversational Agents. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16), Association for Computing Machinery, New York, NY, USA, 5286–5297. DOI:https://doi.org/10.1145/2858036.2858288
  8. Robert J. Moore and Raphael Arar. 2019. Conversational UX design: A practitioner’s guide to the natural conversation framework. Association for Computing Machinery, New York, NY, USA.
  9. Clifford Nass and Youngme Moon. 2000. Machines and Mindlessness: Social Responses to Computers. Journal of Social Issues 56, 1 (2000), 81–103. DOI:https://doi.org/10.1111/0022-4537.00153
  10. Hannah R. M. Pelikan and Mathias Broth. 2016. Why That Nao? In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems – CHI ’16, ACM Press. DOI:https://doi.org/10.1145/2858036.2858478
  11. Danielle Pillet-Shore. 2018. How to Begin. Research on Language and Social Interaction 51, 3 (July 2018), 213–231. DOI:https://doi.org/10.1080/08351813.2018.1485224
  12. Martin Porcheron, Joel E Fischer, Stuart Reeves, and Sarah Sharples. 2018. Voice Interfaces in Everyday Life. In Proceedings of the 2018 ACM Conference on Human Factors in Computing Systems – CHI’18, ACM Press. DOI:https://doi.org/10.1145/3173574.3174214
  13. Stuart Reeves, Martin Porcheron, and Joel Fischer. 2018. “This is not what we wanted”: designing for conversation with voice interfaces. Interactions 26, 1, 46–51. DOI:https://doi.org/10.1145/3296699
  14. Harvey Sacks. 1995. Lectures on conversation. Wiley-Blackwell, London.
  15. Emanuel A Schegloff. 1968. Sequencing in Conversational Openings. American Anthropologist 70, 6, 1075–1095. DOI:https://doi.org/10.1525/aa.1968.70.6.02a00030
  16. Emanuel A Schegloff. 1988. Presequences and indirection: Applying speech act theory to ordinary conversation. Journal of Pragmatics 12, 1 (1988), 55–62.
  17. Emanuel A Schegloff. 2007. Sequence organization in interaction: Volume 1: A primer in conversation analysis. Cambridge University Press, Cambridge.

Digital transcription for EM/CA research

I have put my introduction to digital transcription workshop materials and tutorials online. Here’s a little blog post outlining some of the reasons I started developing the workshop, and how I hope researchers will use it.

There are very few – if any – software tools designed specifically for conversation analytic transcription, partly because so few conversation analysts use such tools that there’s not really a ‘market’ for software developers to cater to.

Instead, we have to make do with tools that were designed for more generic research workflows, and which often build in analytic assumptions, constraints and visual metaphors that don’t necessarily correspond with EM/CA’s methodological priorities.

Nonetheless, most researchers who use digital transcription systems choose between two main paradigms (contrasted in the sketch after this list):

  1. the ‘list-of-turns’-type system represents interaction much like a Jeffersonian transcript: a rendering of turn-by-turn talk, line by line, laid out semi-diagrammatically so that lines of overlapping talk are vertically aligned on the page.
  2. the ‘tiers-of-timelines’ system uses a horizontal scrolling timeline like a video editing interface, with multiple layers or ‘tiers’ representing e.g., each participant’s talk, embodied actions, and other types of action annotated over time.
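To make that contrast concrete, here’s a minimal sketch of the two schemas as simple data structures, using an invented two-second fragment. The participants, timings, and tier names are illustrative assumptions, not any real tool’s file format:

```python
# A minimal sketch contrasting the two transcription schemas.
# The fragment, timings, and tier names are invented for illustration.

# 1. 'List-of-turns': a Jeffersonian-style rendering, one line per turn,
#    with overlap marked semi-diagrammatically within the lines themselves.
list_of_turns = [
    "A: so are you [coming?",
    "B:            [yeah yeah",
    "   (0.4)",
    "A: okay.",
]

# 2. 'Tiers-of-timelines': one tier per participant (or per modality),
#    with each annotation anchored to a (start, end) interval in seconds.
tiers_of_timelines = {
    "A-talk": [(0.0, 1.1, "so are you coming?"), (1.9, 2.3, "okay.")],
    "B-talk": [(0.8, 1.5, "yeah yeah")],
    "B-gaze": [(0.5, 2.3, "gazing at A")],
}
```

The same fragment is legible at a glance in the first form, while the second preserves exactly when each stretch of talk and gaze starts and ends.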


A key utility of both kinds of digital transcription systems is that they allow researchers to align media and transcript, and to use very precise timing tools to check the order and timing of their analytic observations.
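For instance, with timestamped tiers like those sketched above, an observation about timing becomes a simple calculation rather than a judgment by eye (again a minimal sketch with invented values):

```python
# Minimal sketch: checking the order and timing of two events using
# timestamped annotations (invented values, in seconds).
a_turn = (0.0, 1.1)  # A's turn: (start, end)
b_turn = (0.8, 1.5)  # B's turn: (start, end)

gap = b_turn[0] - a_turn[1]
if gap < 0:
    print(f"B overlaps A by {-gap:.1f}s")  # prints: B overlaps A by 0.3s
else:
    print(f"{gap:.1f}s of silence before B starts")
```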

I used these terms to describe this distinction between representational schemas in a short ‘expert box’ entitled “How to choose transcription software for conversation analysis” in Alexa Hepburn and Galina Bolden’s excellent (2017) book Transcribing for Social Research, where I tried to explain what is at stake in choosing one or the other type of system.

For the most part, researchers choose lists-of-turns tools when their analysis focuses on conversation and audible turn-space, and tiers-of-timelines tools when it focuses on video analysis of visible bodily action.

The problem for EM/CA researchers working with both these approaches, however, is that neither representational schema on its own (nor any schema, save whatever schema may have been constituted through the original interaction itself) is ideal for exploring and describing participants’ sense-making processes and resources.

Tiers-of-timelines representations are great for showing the temporal unfolding of simultaneous action, but it is hard to read more than a few seconds of activity at a glance. By contrast, lists-of-turns draw on the same basic schema as our well-practiced, mundane reading abilities – we can scan a page of text and take in the overall structure of a conversation – but they flatten the fine-grained timing and multi-activity organization of complex embodied activities.

In any case, neither of these representational schemas, nor any currently available transcription tool, adequately captures the dynamics of movement in the way that, for example, specialized graphical methods and life drawing techniques were developed to achieve (although our Drawing Interactions prototype points to some possibilities).

The reason I put this digital transcription workshop together was to combine existing, well-used software tools from both major paradigms, and to show how to work on a piece of data using both approaches. It’s not intended as a comprehensive ‘solution’, and there are many unresolved practical and conceptual issues, but I think it gives researchers the best chance to address their empirical concerns and to break away from the conceptual and disciplinary constraints that come from analyzing data using one uniform type of user interface.

The workshop materials include slides (so people can use them to teach collaborators/students) as well as a series of short tutorial videos accompanying each practical exercise in the slides, along with some commentary from me.

My hope is that researchers will use and improve these materials, and possibly extend them to include additional tools (e.g., EXMARaLDA project tools, with which I’m less familiar). If you do, and you find ways to improve them with additional tips, hacks, or updated instructions that take into account new versions, please do let me know.

Moving into step: The embodiment of social structures of action

The abstract for a forthcoming article by Dirk vom Lehn and myself, soon to be liberated from the stalled pandemic-year R&R cycle. Draft available if you’re willing to give feedback!

Abstract 

While dance has often featured in sociological theory, there are relatively few empirical studies that explore the social practices through which people learn to dance together. This paper takes as its point of departure the way that partner dance is often invoked as a metaphor to illustrate theories about social order and interaction. We examine a corpus of video data gathered as part of a day-long workshop and explore how novice dancers learn to perform some of the basic steps of a social dance in time with their partner and with the rhythmical environment. The analysis shows how dancers use rhythm, bodies, language and other resources to organize their social interactions, and how ethnomethodology and conversation analysis provide a critical standpoint for examining sociological theories about the relationship between the body and the social.

Keywords: ethnomethodology, conversation analysis, multimodality, dance, culture

Three meeting points between CA and AI

I gave this keynote at the first European Conference on Conversation Analysis (ECCA 2020), which, due to COVID-19, had to be delivered as a video instead of a stand-up talk.

I tried to make a mix between a film essay and a research presentation of work in progress, so it didn’t always work to put references on every slide. I’ve added them below, with links to the data used where available.

Abstract

Sacks’ (1963) first published paper on ‘sociological description’ uses the metaphor of a mysterious ‘talking-and-doing’ machine, where researchers from different disciplines come up with incompatible, contradictory descriptions of its functionality. We may soon find ourselves in a similar situation to the one Sacks describes as AI continues to permeate the social sciences, and CA begins to encounter AI either as a research object, as a research tool, or more likely as a pervasive feature of both.

There is now a thriving industry in ‘Conversational AI’ and AI-based tools that claim to emulate or analyse talk, but both the study and use of AI within CA are still unusual. While a growing literature is using CA to study social robotics, voice interfaces, and conversational user experience design (Pelikan & Broth, 2016; Porcheron et al., 2018), few conversation analysts even use digital tools, let alone the statistical and computational methods that underpin conversational AI. Similarly, researchers and developers of conversational AI rarely cite CA research and have only recently become interested in CA as a possible solution to hard problems in natural language processing (NLP). This situation presents an opportunity for mutual engagement between conversational AI and CA (Housley et al., 2019). To prompt a debate on this issue, I will present three projects that combine AI and CA very differently and discuss the implications and possibilities for combined research programmes.

The first project uses a series of single case analyses to explore recordings in which an advanced conversational AI successfully makes appointments over the phone with a human call-taker. The second revisits debates on using automated speech recognition for CA transcription (Moore, 2015) in light of significant recent advances in AI-based speech-to-text, and includes a live demo of ‘Gailbot’, a Jeffersonian automated transcription system. The third project both uses and studies AI in an applied CA context. Using video analysis, it asks how a disabled man and his care worker interact while using AI-based voice interfaces and a co-designed ‘home automation’ system as part of a domestic routine of waking, eating, and personal care. Data are drawn from a corpus of ~500 hours of video data recorded by the participants using a voice-controlled, AI-based ‘smart security camera’ system.

These three examples of CA’s potential interpretations and uses of AI’s ‘talking-and-doing’ machines provide material for a debate about how CA research programmes might conceptualize AI, and use or combine it with CA in a mutually informative way.

Videos (in order of appearance)

The Senster. (2007, March 29). https://www.youtube.com/watch?v=wY85GrYGnyw

MIT AI Lab. (2011, September 25). https://www.youtube.com/watch?v=hp9NHNKTV-M

Keynote (Google I/O ’18). (2018, May 9). https://www.youtube.com/watch?v=ogfYd705cRs

Online Data

Linguistic Data Consortium. (2013). CABank CallHome English Corpus [Data set]. Talkbank. https://ca.talkbank.org/access/CallHome/eng.html

Jefferson, G. (2007). CABank English Jefferson NB Corpus [Data set]. TalkBank. https://doi.org/10.21415/T58P4Z

Bibliography

Agre, P. (1997). Toward a critical technical practice: Lessons learned in trying to reform AI. In Social Science, Technical Systems and Cooperative Work: Beyond the Great Divide. Erlbaum.

Alač, M., Gluzman, Y., Aflatoun, T., Bari, A., Jing, B., & Mozqueda, G. (2020). How Everyday Interactions with Digital Voice Assistants Resist a Return to the Individual. Evental Aesthetics, 9(1), 51.

Berger, I., Viney, R., & Rae, J. P. (2016). Do continuing states of incipient talk exist? Journal of Pragmatics, 91, 29–44. https://doi.org/10.1016/j.pragma.2015.10.009

Bolden, G. B. (2015). Transcribing as Research: “Manual” Transcription and Conversation Analysis. Research on Language and Social Interaction, 48(3), 276–280. https://doi.org/10.1080/08351813.2015.1058603

Brooker, P., Dutton, W., & Mair, M. (2019). The new ghosts in the machine: “Pragmatist” AI and the conceptual perils of anthropomorphic description. Ethnographic Studies, 16, 272–298. https://doi.org/10.5281/zenodo.3459327

Button, G. (1990). Going Up a Blind Alley: Conflating Conversation Analysis and Computational Modelling. In P. Luff, N. Gilbert, & D. Frohlich (Eds.), Computers and Conversation (pp. 67–90). Academic Press. https://doi.org/10.1016/B978-0-08-050264-9.50009-9

Button, G., & Dourish, P. (1996). Technomethodology: Paradoxes and possibilities. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. http://dl.acm.org/citation.cfm?id=238394

Button, G., & Sharrock, W. (1996). Project work: The organisation of collaborative design and development in software engineering. Computer Supported Cooperative Work (CSCW), 5(4), 369–386. https://doi.org/10.1007/BF00136711

Casino, T., & Freenor, M. (2018). An introduction to Google Duplex and natural conversations. WillowTree. https://willowtreeapps.com/ideas/an-introduction-to-google-duplex-and-natural-conversations

Duca, D. (2019). Who’s disrupting transcription in academia? SAGE Ocean. https://ocean.sagepub.com/blog/whos-disrupting-transcription-in-academia

Fischer, J. E., Reeves, S., Porcheron, M., & Sikveland, R. O. (2019). Progressivity for voice interface design. Proceedings of the 1st International Conference on Conversational User Interfaces – CUI ’19, 1–8. https://doi.org/10.1145/3342775.3342788

Garfinkel, H. (1967). Studies in ethnomethodology. Prentice-Hall.

Goodwin, C. (1996). Transparent vision. In E. A. Schegloff & S. A. Thompson (Eds.), Interaction and Grammar (pp. 370–404). Cambridge University Press.

Heath, C., & Luff, P. (1992). Collaboration and control: Crisis management and multimedia technology in London Underground Line Control Rooms. Computer Supported Cooperative Work (CSCW), 1(1–2), 69–94.

Heritage, J. (1984). Garfinkel and ethnomethodology. Polity Press.

Heritage, J. (1988). Explanations as accounts: A conversation analytic perspective. In C. Antaki (Ed.), Analysing Everyday Explanation: A Casebook of Methods (pp. 127–144). Sage Publications.

Hoey, E. M. (2017). Lapse organization in interaction [PhD thesis, Max Planck Institute for Psycholinguistics, Radboud University, Nijmegen]. http://bit.ly/hoey2017

Housley, W., Albert, S., & Stokoe, E. (2019). Natural Action Processing. In J. E. Fischer, S. Martindale, M. Porcheron, S. Reeves, & J. Spence (Eds.), Proceedings of the Halfway to the Future Symposium 2019 (pp. 1–4). Association for Computing Machinery. https://doi.org/10.1145/3363384.3363478

Kendrick, K. H. (2017). Using Conversation Analysis in the Lab. Research on Language and Social Interaction, 50(1), 1–11. https://doi.org/10.1080/08351813.2017.1267911

Lee, S.-H. (2006). Second summonings in Korean telephone conversation openings. Language in Society, 35(02). https://doi.org/10.1017/S0047404506060118

Leviathan, Y., & Matias, Y. (2018). Google Duplex: An AI System for Accomplishing Real-World Tasks Over the Phone [Blog]. Google AI Blog. http://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html

Local, J., & Walker, G. (2005). Methodological Imperatives for Investigating the Phonetic Organization and Phonological Structures of Spontaneous Speech. Phonetica, 62(2–4), 120–130. https://doi.org/10.1159/000090093

Luff, P., Gilbert, N., & Frohlich, D. (Eds.). (1990). Computers and Conversation. Academic Press.

Moore, R. J. (2015). Automated Transcription and Conversation Analysis. Research on Language and Social Interaction, 48(3), 253–270. https://doi.org/10.1080/08351813.2015.1058600

Ogden, R. (2015). Data Always Invite Us to Listen Again: Arguments for Mixing Our Methods. Research on Language and Social Interaction, 48(3), 271–275. https://doi.org/10.1080/08351813.2015.1058601

O’Leary, D. E. (2019). Google’s Duplex: Pretending to be human. Intelligent Systems in Accounting, Finance and Management, 26(1), 46–53. https://doi.org/10.1002/isaf.1443

Pelikan, H. R. M., & Broth, M. (2016). Why That Nao? Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems – CHI ’16. https://doi.org/10.1145/2858036.2858478

Pelikan, H. R. M., Broth, M., & Keevallik, L. (2020). “Are You Sad, Cozmo?”: How Humans Make Sense of a Home Robot’s Emotion Displays. Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, 461–470. https://doi.org/10.1145/3319502.3374814

Porcheron, M., Fischer, J. E., Reeves, S., & Sharples, S. (2018). Voice Interfaces in Everyday Life. Proceedings of the 2018 ACM Conference on Human Factors in Computing Systems (CHI’18).

Reeves, S. (2017). Some conversational challenges of talking with machines. Talking with Conversational Agents in Collaborative Action, Workshop at the 20th ACM Conference on Computer-Supported Cooperative Work and Social Computing. http://eprints.nottingham.ac.uk/40510/

Relieu, M., Sahin, M., & Francillon, A. (2019). Lenny the bot as a resource for sequential analysis: Exploring the treatment of Next Turn Repair Initiation in the beginnings of unsolicited calls. https://doi.org/10.18420/muc2019-ws-645

Robles, J. S., DiDomenico, S., & Raclaw, J. (2018). Doing being an ordinary technology and social media user. Language & Communication, 60, 150–167. https://doi.org/10.1016/j.langcom.2018.03.002

Sacks, H. (1963). Sociological description. Berkeley Journal of Sociology, 8, 1–16.

Sacks, H. (1984). On doing “being ordinary.” In J. Heritage & J. M. Atkinson (Eds.), Structures of social action: Studies in conversation analysis (pp. 413–429). Cambridge University Press.

Sacks, H. (1987). On the preferences for agreement and contiguity in sequences in conversation. In G. Button & J. R. Lee (Eds.), Talk and social organization (pp. 54–69). Multilingual Matters.

Sacks, H. (1995a). Lectures on conversation: Vol. II (G. Jefferson, Ed.). Wiley-Blackwell.

Sacks, H., Schegloff, E. A., & Jefferson, G. (1974). A simplest systematics for the organization of turn-taking for conversation. Language, 50(4), 696–735. https://doi.org/10.2307/412243

Sahin, M., Relieu, M., & Francillon, A. (2017). Using chatbots against voice spam: Analyzing Lenny’s effectiveness. Proceedings of the Thirteenth Symposium on Usable Privacy and Security, 319–337.

Schegloff, E. A. (1988). On an Actual Virtual Servo-Mechanism for Guessing Bad News: A Single Case Conjecture. Social Problems, 35(4), 442–457. https://doi.org/10.2307/800596

Schegloff, E. A. (1993). Reflections on Quantification in the Study of Conversation. Research on Language & Social Interaction, 26(1), 99–128. https://doi.org/10.1207/s15327973rlsi2601_5

Schegloff, E. A. (2004). Answering the Phone. In G. H. Lerner (Ed.), Conversation Analysis: Studies from the First Generation (pp. 63–109). John Benjamins Publishing Company.

Schegloff, E. A. (2010). Some Other “Uh(m)s.” Discourse Processes, 47(2), 130–174. https://doi.org/10.1080/01638530903223380

Soltau, H., Saon, G., & Kingsbury, B. (2010). The IBM Attila speech recognition toolkit. 2010 IEEE Spoken Language Technology Workshop, 97–102. https://doi.org/10.1109/SLT.2010.5700829

Stivers, T. (2015). Coding Social Interaction: A Heretical Approach in Conversation Analysis? Research on Language and Social Interaction, 48(1), 1–19. https://doi.org/10.1080/08351813.2015.993837

Stokoe, E. (2011). Simulated Interaction and Communication Skills Training: The ‘Conversation-Analytic Role-Play Method’. In Applied Conversation Analysis (pp. 119–139). Palgrave Macmillan UK. https://doi.org/10.1057/9780230316874_7

Stokoe, E. (2013). The (In)Authenticity of Simulated Talk: Comparing Role-Played and Actual Interaction and the Implications for Communication Training. Research on Language & Social Interaction, 46(2), 165–185. https://doi.org/10.1080/08351813.2013.780341

Stokoe, E. (2014). The Conversation Analytic Role-play Method (CARM): A Method for Training Communication Skills as an Alternative to Simulated Role-play. Research on Language and Social Interaction, 47(3), 255–265. https://doi.org/10.1080/08351813.2014.925663

Stokoe, E., Sikveland, R. O., Albert, S., Hamann, M., & Housley, W. (2020). Can humans simulate talking like other humans? Comparing simulated clients to real customers in service inquiries. Discourse Studies, 22(1), 87–109. https://doi.org/10.1177/1461445619887537

Turing, A. (1950). Computing machinery and intelligence. Mind, 59, 433–460.

Walker, G. (2017). Pitch and the Projection of More Talk. Research on Language and Social Interaction, 50(2), 206–225. https://doi.org/10.1080/08351813.2017.1301310

Wong, J. C. (2019, May 29). “A white-collar sweatshop”: Google Assistant contractors allege wage theft. The Guardian. https://www.theguardian.com/technology/2019/may/28/a-white-collar-sweatshop-google-assistant-contractors-allege-wage-theft

Collecting data from streaming cameras with youtube-dl

Since the start of the lockdown on 23rd March 2020, I’ve been fascinated by a live camera stream showing a UK street, because it has shown how pedestrians interpret the 2m physical distancing rule.

Some of the data from this camera was incorporated into a very nice ROLSI blog post about the emergence of the ‘social swerve’ by Eric Laurier, Magnus Hamann and Liz Stokoe, which I helped with.

I thought others might find it useful to read a quick how-to about grabbing video from live cameras – it’s a great way to get a quick and dirty bit of data to test a working hunch or do some rough analysis.

There are thousands of live cameras that stream to YouTube, but it can be a bit cumbersome to capture more than a few seconds via more straightforward screen capture methods.

NB: before doing this for research purposes, check that doing so is compliant with relevant regional/institutional ethical guidelines.

Step 1: download and configure youtube-dl

youtube-dl is a command-line utility, which means you run it from the terminal window of your operating system of choice – it works fine on any Unix, on Windows, or on macOS.

Don’t be intimidated if you’ve never used a command line before – you won’t have to do much beyond some copying and pasting.

I can’t do an installation how-to, but there are plenty online (and if you already use Python, youtube-dl can also be installed with pip):

Mac:

https://www.youtube.com/watch?v=NhOkYXB2_QQ

Windows:

https://www.youtube.com/watch?v=xyPAaYq3H9E

I’ll assume that if you’re a Unix user, you know how to do this.

Step 2: copy and paste the video ID from the stream

Every YouTube video has a video ID that you can copy from the address bar of your browser – it’s the string of characters that follows ‘v=’ in the URL. For the blog post mentioned above I used the ID of a stream we’ve affectionately nicknamed the ‘kebab corpus’.
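If you’d rather script this step than copy the ID by hand, here’s a minimal Python sketch for pulling it out of a standard watch URL (an illustrative assumption: it only handles the common ‘watch?v=…’ form, not youtu.be short links):

```python
# Minimal sketch: extract the video ID from a standard YouTube watch URL.
# Only handles the common 'watch?v=...' form, not youtu.be short links.
from urllib.parse import urlparse, parse_qs

def video_id(url: str) -> str:
    return parse_qs(urlparse(url).query)["v"][0]

print(video_id("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))  # dQw4w9WgXcQ
```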

Step 3: use youtube-dl to begin gathering your video data

This bit is a little hacky – as in, not really using the software as intended or documented – so I’ve created a short how-to video. There might be better ways. If so, please let me know!

As I mention in that video, it’s probably best not to leave youtube-dl running for too long on a stream, as you might end up losing your video if something happens to interrupt the stream. I’ve captured up to half an hour at a time.
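If you prefer a script to a raw terminal session, youtube-dl can also be driven from Python. Here’s a minimal sketch, with a placeholder video ID you’d substitute from Step 2 – it does the same thing as running youtube-dl on the stream’s URL in a terminal, and you stop it with Ctrl+C, bearing in mind the caveat above about interrupted captures:

```python
# Minimal sketch: capture a stream with youtube-dl's Python API.
# Equivalent to running `youtube-dl <url>` in a terminal; stop with Ctrl+C.
import youtube_dl

options = {
    "format": "best",                     # single best audio+video format
    "outtmpl": "capture-%(id)s.%(ext)s",  # output filename template
}

with youtube_dl.YoutubeDL(options) as ydl:
    # Placeholder URL: substitute the video ID you copied in Step 2.
    ydl.download(["https://www.youtube.com/watch?v=VIDEO_ID"])
```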

It’s possible to create scripts and automated actions for a variety of operating systems to do all this for you on a schedule – but if you need extensive video archives, I’d recommend contacting the owner of the stream to see if they can simply send you their high-quality YouTube archives.

How do people with dementia and their carers use Alexa-type devices in the home?

We have a fully funded PhD position available (deadline 6th March 2020) to work with myself, Prof. Charles Antaki and Prof. Liz Peel in collaboration with The Alzheimer’s Society to explore the opportunities, risks and wider issues surrounding the use of AI-based voice technologies such as the Amazon Echo and home automation systems in the lives of people with dementia.

Voice technologies are often marketed as enabling people’s independence. For example, Amazon’s “Sharing is Caring” advert for its AI-based voice assistant Alexa shows an elderly man being taught to use the ‘remind me’ function of an Amazon Echo smart speaker by his young carer. But how accessible are these technologies in practice? How are people with dementia and carers using them in creative ways to solve everyday access issues? And what are the implications for policy given the consent and privacy issues?

The project will combine micro- and macro-levels of analysis and research. On the micro-level, the successful applicant will be trained and/or supported to use video analysis to study how people with dementia collaborate with their assistants to adapt and use voice technologies to solve everyday access issues. On the macro-level, the project will involve working on larger-scale operations and policy issues with Ian McCreath and Hannah Brayford at The Alzheimer’s Society and within the wider Dementia Choices Action Network (#DCAN).

Through this collaboration, the research will influence how new technologies are used, interpreted and integrated into personalised care planning across health, social care and voluntary, community and social enterprise sectors.

The deadline is the 6th March 2020 (see the job ad for application details). All you need to submit for a first-round application is a CV and a short form with a brief personal statement. We welcome applications from people of all backgrounds and levels of research experience (training in specific research methods will be provided where necessary). We especially welcome applications from people with first-hand experience of disability and dementia, or with experience of working as a formal or informal carer/personal assistant.

This research will form part of the Adept at Adaptation project, looking at how disabled people adapt consumer AI-based voice technologies to support their independence across a wide range of impairment groups and applied settings.

The successful applicant will be supported through the ESRC Midlands Doctoral Training Partnership, and will have access to a range of highly relevant supervision and training through the Centre for Research in Communication and Culture at Loughborough University.

Feel free to contact me on s.b.albert@lboro.ac.uk with any informal inquiries about the post.

The EMCAwiki reached a milestone in 2019

I recently wrote this email to the wonderful admins of the Ethnomethodology and Conversation Analysis Wiki (http://emcawiki.net) to congratulate them on reaching a real milestone in this community project. We don’t really have a place to share these things yet, so I’m putting it here.

If you are reading this and would like to get involved in the wiki or related projects mentioned here, please drop me an email or message me on Twitter.

Dear Paul and the EMCA wiki team,

I can’t quite believe it’s been six years since we started the EMCAwiki project – when Paul sent out an email via the languse mailing list asking for help with his original EM/CA news site and we began the discussions that led to the lovely bibliography wiki we now run.

Towards the end of 2019 we finally completed the transfer of all remaining legacy bibliography entries from Paul’s original very long PDF files to the new wiki format. We now host a grand total of 8537 entries – from our first entry, Harold Garfinkel (1949), “Research Note on Inter- and Intraracial Homicides”, Social Forces, vol. 27, no. 4, pp. 369–381, to a host of new papers published as recently as this first week of 2020. We can now begin consolidating and standardizing our work (as Andrei Korbut has been doing brilliantly over the last few months) – making sure things are consistent, and then thinking about how to explore, analyze and share the EM/CA bibliometric data we now have at our disposal. I’ll write more about that below – but this is quite an achievement, and I’m very grateful to all of you for putting in such incredibly generous and dedicated work.

Before I say anything more or propose any new projects or initiatives, I should say that one of the things I like most about the EM/CA wiki is the almost total lack of administrative overheads. So many things in academic life are bogged down with committees, meetings, action items, etc. I love the fact that from the beginning we’ve not really done that, but have mostly just got on with the tasks we thought necessary to the best of our individual and collective abilities. We’ve sometimes made efforts to meet up at conferences, which has been fun, and have continued to take the pragmatic approach of just doing what we can when possible without undue pressure or overarching expectations. This is outstanding, and long may it continue.

Having said that, I did take on a new role in 2019 – that of ISCA communications & information officer, and I now participate in more admin meetings than I would usually aim for. These are great fun, and some have included ideas that involve EMCA wiki. I wanted to share some of those ideas with you now, and leave it open to you all to respond (or not) in what is now a time-honored laid-back tradition of the EMCAwiki admins.

Firstly, I am aware that the reason I was elected to the ISCA board was because of this project and all of your work. I would like to acknowledge that publicly by adding a page to the new ISCA website I’m currently developing – aiming to launch it towards the end of January 2020. I have kept a list of admins here: http://emcawiki.net/The_EMCA_wiki_Admins – I like the fact that we have all done different things at different times – and some of us have been more active than others. I hope that continues. If you would really prefer not to be acknowledged for what you’ve done – or what you may do in the future – let me know.

Secondly, I am working with Lucas Seuren and a great group of ECRs from around the world on an exciting new ISCA project. This will draw on the content in the EMCA wiki and promote it to a wider audience, as well as inviting contributions beyond bibliography entries (e.g. lists of up-to-date equipment, cross-cultural ethics frameworks for data recording, shared syllabi, useful youtube videos etc.). I hope that this will contribute positively to the wiki, without increasing any administrative overhead. Of course if any of you would like to contribute to that project too, please let me know.

Thirdly, I am aware that there are lots of features of the wiki that I have long promised to implement – and I have long delayed that implementation. I’ve written many of them down here: http://emcawiki.net/User:SaulAlbert. There is also a list of ‘known issues’ that have been pointed out as problems over the years: http://emcawiki.net/Known_Current_Emcawiki_Issues – I’m going to acknowledge now that I doubt I’ll ever have time to implement any of these myself. I have fewer and fewer of the kind of uninterrupted stretches of code-hacking time that are required for software development. Instead, I’m going to try to raise funds to pay professional programmers and systems administrators to do this. I think it’s something I could find a funder to support, and I’ll work on this – with your consent – and (if you have any ideas/time/funders) your involvement and collaboration.

I hope all of that sounds OK. I’m just going to get on with it slowly, and will welcome any thoughts/feedback/initiatives/and ideas that you all have over the next decade.

All the best, and happy 2020,

Drawing, multimodality and interaction analytics

Image from the 2018 Drawing Interactions workshop at the University of Liverpool, London

On the 28th November 2019 I’m running this workshop – which focuses on drawing as a method in interaction analysis – in London for the National Centre for Research Methods, with Pat Healey, Matthew Tobias Harris, Claude Heath, and Sophie Skach. It’s open to any researcher and/or draftsperson, regardless of experience with conversation analysis or drawing. The aim is to introduce artists and social scientists to each other’s methods for visual analysis, inductive observation and inscription of research objects. Places are limited, so please sign up at the link below:

https://www.swdtp.ac.uk/event/ncrm-training-drawing-multimodality-and-interaction-analytics/

Workshop abstract

Analysing embodied interaction enables researchers to study the qualitative details of communication and to do reliable coding of interaction for quantification. Some researchers use video stills and word processing software to add arrows and highlights. Others use simple sketches or tracings to present their research findings in their final published results. However, until now, no dedicated courses have been offered that teach drawing as a method for the transcription and analysis of social interaction.

This one-day course will introduce researchers to the theory and method of conversation analysis, and to new graphical tools, transcription methods, and software systems that are available for multimodal analysis of audio-visual data. It will involve short presentations, group discussions and practical work including video data gathering, transcription and analysis. No special equipment is required, although we encourage participants to bring some means of recording video (e.g. a phone or other digital camera).

This course is aimed at researchers across disciplines with an interest in face-to-face social interaction and communication (human or animal, face-to-face or video-mediated). No prior experience of drawing or conversation and discourse analysis is necessary, since we will cover the basics required to learn independently.

Learning outcomes

This course will introduce you to methods, techniques and tools for analysing embodied social interaction.

The course covers:

  • Conversation analytic methods for collecting, transcribing and analysing video data.
  • Drawing techniques for use in field notes and in exploratory data analysis sessions.
  • How to create and use multimodal transcripts for data analysis and presentation of results.
  • Software tools for creating and sharing computer-readable graphical transcriptions.
  • Future directions for multimodal interaction analytics e.g. automation and open science.