An invited foreword to the special issue « Interprétations et méthodes qualitatives » of the Revue Internationale de Psychosociologie (issue coordinated by Martine Hlady Rispal), 17 March 2009
‘The value of qualitative research, or how researchers legitimize the way they work’ – this was the announced theme for this special issue. The special issue is exciting in the ways that it showcases the imagination of scholars committed to pushing the boundaries of what it means to do empirical research, from the innovative team research of Lièvre and Rix-Lièvre, who accompanied a polar expedition, to Sachet-Milliat’s analysis of the special demands facing researchers of “sensitive” topics, such as corruption. In this brief foreword to the Special Issue, we distinguish between ‘qualitative’ and ‘interpretive’ research in order to facilitate a more nuanced discussion between the scholars who have contributed to this special issue and among readers interested in the topic. What the contributors have in common is their varied use of in-depth, unstructured or semi-structured interviewing, case studies, participant observation, and archival research, as well as their use of word-based modes of analyzing data and narrative forms of communicating their findings. Where they may not always agree is on the varied philosophical rationales that underlie their research and the associated criteria for evaluating its quality. Surfacing and clarifying areas of agreement and disagreement, and the methodological-philosophical grounding for these, can, we believe, spur further refinements of research methods and practices and improve the presentation and assessment of research. In the process of developing our argument, we will touch on a few of the papers in this Special Issue that raise points that are key in our own thinking, but this essay should not be read as a commentary on the collected papers.
We are aware that French scholarship and US-based scholarship may use the same terms with different meanings. As we anticipate, eagerly, more contacts between these scholarly communities, we hope to clarify where the North American debate now stands. This, at least, is our interpretation of the kind invitation to write this foreword, an opportunity for which we are most grateful.
Why ‘Interpretive’?
Given the ways in which qualitative methods and methodologies have been developing in the US in recent years, many researchers doing what has traditionally been called ‘qualitative’ research have adopted the term ‘interpretive’ instead. The sort of research they are referring to has its origins in early 20th century anthropology-sociology – before those fields split into different departments and associations – as conducted at the University of Chicago, primarily. This is not to say that such research was not done elsewhere, but that the ‘Chicago School’ name forms a benchmark for more explicitly methodological writings either developed there or written in reference to those works. Although this distinction may be more reflective of present US (and perhaps Canadian) debates, given that ontological and epistemological questions, mostly lost to the debate on that side of the Atlantic, have continued to be discussed on the European and UK side, we think they are worth articulating, especially for globalizing fields such as organizational and management studies that are increasingly coming into contact with those views.
We can identify three reasons for this shift in terminology. Chief among these is the methodological (or ‘philosophical’) explanation. (A note on terminology: we use ‘method/s’ to refer to the tools or techniques for accessing/generating data, whereas ‘methodology’ denotes the philosophical rationales that underpin methods. In this sense, methodology could well be understood as ‘applied’ ontology and epistemology.) As Continental philosophies (so-called from a North American perspective: phenomenology, hermeneutics, critical theory) and their US methodological counterparts (pragmatism, symbolic interaction, ethnomethodology) became more widely known, a so-called ‘interpretive turn’ began to take place. Some of the key works that helped to advance this development appeared in the philosophy of science (Kuhn 1970), social theory and social philosophy (e.g., Berger and Luckmann 1966, Rabinow and Sullivan 1979, 1985; Hiley et al. 1991), and anthropology (Geertz 1973). As the linkages between these more theoretical arguments and their methods enactments became clear, those methods increasingly became designated ‘interpretive’ (e.g., Prasad 2005, Part I – ‘The Interpretive Traditions’; Yanow and Schwartz-Shea 2006).
Second is a pragmatic reason. ‘Qualitative’ research has increasingly been coming under pressure over the last 15 years to adopt ‘large “n”’ research logic (King, Keohane, and Verba 1994), rather than the ‘single “n”’ – single setting, single case – logic that characterizes it. The term increasingly refers, then, to research informed by realist-objectivist ontological and epistemological presuppositions, rather than the more constructivist-subjectivist presuppositions that characterize traditional qualitative research. To distinguish semiotic, ethnomethodological, and other forms of meaning-focused ‘qualitative’ methods from survey, interview, and other research modes that attempt to mirror or capture the real social world, methodologists and researchers who engage in the former have adopted the language of ‘interpretive’ methods.
And then there is a linguistic reason. ‘Qualitative’ came into play in response to the term ‘quantitative’, used to designate those methods using numerical data: statistical analyses of various sorts, such as analysis of variance or multiple regression. But if ‘qualitative’ increasingly refers to surveys, focus groups, case study research (e.g., Gerring 2007), and the like, which assume very different ontological-epistemological points of departure, then another term is needed to designate ‘old-style’ Chicago-school methods. That is where ‘interpretive’ increasingly comes in.
In encountering quantitative methods in a research article, rarely does one find an explication of the ontological-epistemological presuppositions on which they rest. Similarly, in methods courses and textbooks (at least, the ones we are familiar with from our own teaching and research; Schwartz-Shea and Yanow 2002), rarely are quantitative methods discussed in terms of their logically prior suppositions about the reality status of the world being studied or the ways in which humans can have or generate knowledge of that world. These silences reflect the hegemonic position of quantitative methods in the social science world, throughout the US and in those countries and/or disciplines that have fallen under the sway of US methods. Increasingly, however, researchers are being called on, in their methods statements in articles and books, to make their presuppositions explicit. These expectations are being voiced both within specific disciplines and on the pages of methods texts. We see this Special Issue as one example of this trend.
Legitimating Interpretive Methods
If interpretive methods rest on different methodological grounds from quantitative and qualitative ones, it stands to reason that the legitimation of their ‘truth claims’ would also rest on different criteria. To wit: if one no longer holds that social scientific theories ‘mirror’ the social world they claim to understand and explain (see Rorty 1979), or that it is possible to develop a science of human activity that is ‘objective’ (because the social scientist does not and cannot stand outside of that which she or he is studying; Yanow 2006), then criteria that are designed to articulate the closest, best capturing of that world – reliability, validity – are not useful measures of the trustworthiness of scientific claims. ‘Bias’ – in the sense of ‘distortion’ caused by ‘les influences exercées par les interactions entre le chercheur et les acteurs au cours de la collecte de données qualitatives’ [‘the influences exerted by the interactions between the researcher and the actors in the course of collecting qualitative data’], discussed by Céline Barredy (this volume) – can only take place within a world in which a social reality external to the researcher is seen as possible. Only then can social reality be distorted by inaccurate ‘reflections’ of it. We would make this same argument with respect to the position held by Robert Yin, cited by Martine Hlady, concerning internal validity. In his body of work on case study method, Yin clearly takes a realist position, which is not in keeping with interpretive methodologies.
By contrast, an interpretive methodology holds that there is no direct, unmediated access to reality (a basic claim in interpretive epistemology), which means, in turn, that humans’ interactions with their external worlds are always already mediated by the historical, cultural contexts in which they find themselves. More than this, humans do not simply respond to external stimuli but actively make and remake their understandings of those stimuli (a constructivist ontology). From the perspective of these presuppositions, reliability and replicability should never be assumed to be possible within the social world (e.g., as a standard); their ontological status is, instead, an open question to be settled by empirical research. A more appropriate starting point is the notion that “one can never step in the same river twice,” due to the continual flow of time and history. Reification of the social world is a means for coping with this complexity, but interpretive researchers seek to problematize reifications and processes of reification, seeking out instead the human roots of accepted routines and institutional forms and the tacit knowledge that forms the a priori background for all interpretations.
Methodologists since Lincoln and Guba (1985) have been looking for alternative standards that would be more in keeping with interpretivist presuppositions (for an overview, see Schwartz-Shea 2006). They began by seeking parallel equivalents (e.g., credibility in lieu of internal validity; transferability in lieu of generalizability). Subsequent scholars pointed out how this “parallel-ist” approach to the generation of interpretive standards retained a positivist-informed realism, and they sought to correct this by creating new sets of criteria for assessing interpretive research based on its own presuppositional grounding (e.g., Lather 1993; Lincoln 1995).
This move raised scholars’ awareness of the need for greater methodological transparency and shared assessment criteria across the disciplines, even as some (e.g., Smith and Deemer 2003) argued that universal, unchanging, ahistorical standards were inconsistent with the interpretive project. In this spirit, working inductively from the “standards” literature and methods texts, Schwartz-Shea (2006) identified several terms that name common interpretive research practices (e.g., “thick description,” “member-checking”), arguing that these can serve as reasonable starting points for assessing interpretive research studies (so long as a “check-list” mentality does not overwhelm the substantive issues unique to each research study). However, awareness of these criteria, as well as ongoing debates, varies across disciplines and epistemic communities.
We therefore disagree with Julien Cusin that ‘reliability and rigour’ are the criteria that all researchers should be meeting. As the methodology literature points out, neither of these criteria is consistent with interpretive presuppositions. (Another terminological note: We speak here of methodological rigor, not philosophical rigor. That is, we do hold that scholarly argumentation – of whatever methodological orientation – needs to be rigorous in the sense of thorough and meticulous; but that is not the sense in which ‘rigor’ is used when critiquing the approach of a study or the methods used in it.) Interpretive research methods are not ‘rigorous’ – they cannot be, as they require the researcher to respond with flexibility in the field (on rigor, see Yanow 2006; for an example, see Zirakzadeh 2009) – but interpretive research is systematic. At the same time, we concur with Cusin that an emphasis in research design on ‘les « meilleurs » choix méthodologiques possibles’ – choices of the ‘best’ setting, the ‘best’ interview participants, and so forth – does not do justice to the improvisational character of interpretive field research. Moreover, Hlady is correct, in our view, to point to the fact that research participants’ views are themselves the ‘data’ that interpretive researchers (want to) work with, as it is their ‘local knowledge’ – their situated meaning- or sense-making of their own circumstances – that is of research interest to us. Many methodologists, including John Van Maanen (2001), have pointed out that the researcher’s person – we would add, her body – is the primary instrument of research in ethnographic and other interpretive projects. Yet neither of these points in and of itself necessarily undermines the ‘trustworthiness’ of interpretive research – the criterion that is increasingly being articulated as the baseline standard for interpretive research (in lieu of ‘rigor’).
Felix et al. pose the ultimate question from this perspective: is (participatory) action research ([P]AR) scientific? This is a much bigger issue than we can engage here (see, e.g., Berg and Eikeland 2008; Greenwood and Levin 1998/2007; Sykes and Treleaven 2009), but let us point to one way of thinking about it. (P)AR is deemed unscientific because the researcher crosses from a position outside the research to one in which she or he collaborates with those with – not for! – whom the research is being conducted. In an objectivist methodology, this is no longer scientific. But in a subjectivist methodology, such a collaborative research design can still be systematic and trustworthy. In fact, from an interpretivist perspective, non-AR might be seen as unethical, as it uses participants for the researcher’s own ends (see, e.g., American Anthropological Association 2002).
Implications for Publication
In our view, this has very clear implications for both writers and editors. To the extent that we are no longer writing for readers within single epistemic communities – organizational and management studies, educational studies, public policy studies, and the like are increasingly cross-disciplinary, if not inter-disciplinary, and their research is informed by quantitative, qualitative, and interpretive methods – we, meaning all scientists, need to make our methodological presuppositions more explicit. We also need to be more transparent about our methods and our justifications for choosing them. This is what authors need to do (for further elaboration, see Schwartz-Shea and Yanow 2009).
Journal editors and reviewers have the responsibility of familiarizing themselves with methods and methodologies that draw on presuppositions other than those that characterize their own research – and of knowing that interpretive research is not ‘sloppy’ quantitative research: it is neither non-rigorous nor impressionistic. Science of any sort rests on two hallmarks: its systematicity and its attitude of doubt (what in the philosophy of science is called ‘testability’). Each of these forms of research needs to be judged on its own methodological criteria: interpretive methods will never measure up to the criteria of quantitative research – but neither can quantitative research measure up to the standards of interpretive methods!
Acknowledgements
Our thanks to Martine Hlady Rispal for inviting us to write this foreword and for comments on an earlier draft, and to Marie-José Avenier for initiating the idea.
References
- American Anthropological Association. 2002. El Dorado task force papers. Vol. I. Submitted to the Executive Board as a final report, May 18.
- Berg, Anne Marie, and Olav Eikeland, eds. 2008. Action research and organization theory. Frankfurt am Main: Peter Lang.
- Berger, Peter L., and Thomas Luckmann. 1966. The social construction of reality. New York: Anchor Books.
- Geertz, Clifford. 1973. The interpretation of cultures. New York: Basic Books.
- Gerring, John. 2007. Case study research: Principles and practices. Cambridge: Cambridge University Press.
- Greenwood, Davydd, and Morten Levin. 1998/2007. Introduction to action research: Social research for social change, 2nd ed. Thousand Oaks, CA: Sage.
- Hiley, David R., James F. Bohman, and Richard Shusterman, eds. 1991. The interpretive turn. Ithaca, NY: Cornell University Press.
- King, Gary, Robert Keohane, and Sidney Verba. 1994. Designing social inquiry. Princeton, NJ: Princeton University Press.
- Kuhn, Thomas S. 1970. The structure of scientific revolutions, 2nd ed. Chicago: University of Chicago Press.
- Lather, Patty. 1993. Fertile obsession: Validity after poststructuralism. Sociological Quarterly 34: 673–93.
- Lincoln, Yvonna S. 1995. Emerging criteria for quality in qualitative and interpretive research. Qualitative Inquiry 1: 275–89.
- Lincoln, Yvonna S., and Egon G. Guba. 1985. Establishing trustworthiness. In Naturalistic inquiry, chapter 11 (289-331). Thousand Oaks, CA: Sage.
- Prasad, Pushkala. 2005. Crafting qualitative research: Working in the postpositivist traditions. Armonk, NY: M. E. Sharpe.
- Rabinow, Paul, and William M. Sullivan, eds. 1979. Interpretive social science. Berkeley: University of California Press.
- Rabinow, Paul, and William M. Sullivan, eds. 1985. Interpretive social science, 2nd ed. Berkeley: University of California Press.
- Rorty, Richard. 1979. Philosophy and the mirror of nature. Princeton, NJ: Princeton University Press.
- Schwartz-Shea, Peregrine. 2006. Judging quality: Evaluative criteria and epistemic communities. In Dvora Yanow and Peregrine Schwartz-Shea, eds. Interpretation and method: Empirical research methods and the interpretive turn, 89-113. Armonk, NY: M.E. Sharpe.
- Schwartz-Shea, Peregrine, and Dvora Yanow. 2002. ‘Reading’ ‘methods’ ‘texts’: How research methods texts construct political science. Political Research Quarterly 55: 457–86.
- Schwartz-Shea, Peregrine, and Dvora Yanow. 2009. Reading and writing as method: In search of trustworthy texts. In Sierk Ybema, Dvora Yanow, Harry Wels, and Frans Kamsteeg, eds., Organizational ethnography: Studying the complexities of everyday life, 56-82. London: Sage.
- Smith, John K., and Deborah K. Deemer. 2003. The problem of criteria in the age of relativism. In Norman K. Denzin and Yvonna S. Lincoln, eds., Collecting and interpreting qualitative materials, 2nd ed., 427–57. Thousand Oaks, CA: Sage.
- Sykes, Chris, and Lesley Treleaven. 2009. Critical action research and organizational ethnography. In Sierk Ybema, Dvora Yanow, Harry Wels, and Frans Kamsteeg, eds., Organizational ethnography: Studying the complexities of everyday life, 215-230. London: Sage.
- Van Maanen, John. 2001. Afterword. In David N. Gellner and Eric Hirsch, eds., Inside organizations: Anthropologists at work. Oxford: Berg.
- Yanow, Dvora. 2006. Neither rigorous nor objective? Interrogating criteria for knowledge claims in interpretive science. In Dvora Yanow and Peregrine Schwartz-Shea, eds., Interpretation and method: Empirical research methods and the interpretive turn, 67-88. Armonk, NY: M.E. Sharpe.
- Yanow, Dvora, and Peregrine Schwartz-Shea, eds. 2006. Interpretation and method: Empirical research methods and the interpretive turn. Armonk, NY: M.E. Sharpe.
- Zirakzadeh, Cyrus Ernesto. 2009. When nationalists are not separatists: Discarding and recovering academic theories while doing fieldwork in the Basque region of Spain. In Edward Schatz, ed., Political ethnography: What immersion brings to the study of power. Chicago: University of Chicago Press (in press).