Analysing Talk In Interaction
Lecture / Seminar 12: Applying CA (2)
Applying the findings of CA work on 'institutional' settings
Last lecture I talked about what CA could do in explicating practices in various applied settings like medicine, education and the law.
Description or prescription?
I mentioned that CA might be used to change what people do. Close inspection and analysis, you might think, ought to help doctors handle medical consultations, reporters conduct news interviews, or teachers achieve educational goals in the classroom. And indeed it ought to help patients, interviewees and pupils to the same degree.
Admittedly, CA people have no medical, educational or other specialist expertise. They can see how things are done, but they can't comment on why they are done. CA can't advise a doctor what diagnosis to give the patient. Nevertheless, the argument would be that CA might recommend how to give it - for example, how to make bad news easier for the patient to take.
That seems to be uncontroversial. But it isn't as simple as it looks. What, for CA, counts as a 'bad' way of doing something, and what as a 'good' way? I think there are three criteria, varying in how much CA takes the lead.
1. Relying on experts' identifications of bad practices
The most cautious criterion is the (expert) participants' own. We would look at the tape alongside the people involved, and listen to what they say. For example, an experienced practitioner can look at a tape with us and point to what he or she sees as 'mistakes' in what a trainee is doing. They may say that something about the way the trainee is asking questions (etc) seems wrong, but they can't say exactly why. Our job would then be to analyse what is happening, and make recommendations to change it. This is the case where CA takes the least initiative.
2. Identifying practices which depart from the institution's own ideals
We might act independently of the expert and see things which can't be 'right'. The criterion would be 'going against the principles or ideals of the institution involved'. These can be quite subtle. For example, psychological assessments (like IQ tests) must be performed in an utterly standard way, yet if we looked into their actual administration, we might see subtle practices which distort the test questions. That would go against the stated ideals of the institution.
If we brought these things to the attention of the participants, they would probably agree that something was wrong. (If they didn't agree, we would probably be in the territory of point 3 below.) This is the case where CA takes the initiative in spotting something, and is willing to consult with practitioners about what it sees, though it would probably blow the whistle anyway.
3. Seeing practices that we don't like - even if they are okay with the institution
This is where CA would be taking the most initiative - but would be uncomfortable about it. We'd have to be clear what it was that we didn't like. For us not to like it on CA grounds, it would have to be something that only emerges when you do CA. You don't need CA to object if, for example, your tape shows a teacher obviously humiliating a child in front of the class, or a doctor making a patently insulting remark to a patient.
What if CA does show something nasty, but not obvious? Say you discover a practice which you think is in fact humiliating or insulting, but the institution doesn't agree, or gives a justification for it. Suppose you do research on police interrogations, and discover subtle practices of questioning that you think are humiliating or insulting. The police do not agree. Or they agree, but justify it on the grounds of efficient investigation. Do you have a case on CA grounds alone, or would you have to go outside CA, and make a more socially-grounded argument? Very tricky.
Let's stick to the middle line, seeing something which arguably ought not to happen, by the institution's own principles. I'll take you through two extracts which seem to show some deviation from an institutional ideal. But I'll also ask how we judge that.
Here is an extract from a telephone survey. In such surveys, the callers are told explicitly that they must ask the questions in an absolutely standard format. Otherwise the respondents' answers will not be comparable. One thing they should certainly not do is ask 'leading questions'.
From Houtkoop-Steenstra (2000) p 80 (I= telephone interviewer, R=respondent)
It may seem understandable enough, and CA would explain why it is understandable that the interviewer should pose her questions like that, but it is strictly forbidden in standardised interviews. What she says in lines 1 to 3 is in the script, but then the script goes on to give a series of alternatives (something like "elementary school, secondary school, college, or university?"). Line 4, and again line 6, show that she makes two 'mistakes': one, she assumes an answer; and two, she poses a question as a yes/no question, rather than offering the alternatives that are laid out on her script.
Here is a more extreme case of a distortion of a script, this time from a psychological assessment. The official question on the script is:
Q: How did you decide to do the job or other daily activities you do now?
From Antaki (1999) pp. 440-441, transcription simplified.
We can track the same sorts of editing and distortion, now more pronounced than in the first example. But could a case be made that these distortions were - in spite of going against the traditional prescriptions of testing - actually more appropriate than the literal questions would have been? A point to discuss.
Houtkoop-Steenstra, H. (2000) Interaction and the standardized survey interview. Cambridge: CUP.
Antaki, C. (1999) Interviewing persons with a learning disability. Qualitative Health Research, 9, 437-454.
We'll talk about the rights and wrongs of applying CA.
We'll review the course as a whole, so bring your thoughts and questions.
A course for the University of Southern Denmark, Odense 2003