The goal of this article is to provide a first systematic review of the connection between public policy analysis and QCA techniques, with a core emphasis on the state of the art in QCA empirical applications1. First, we present QCA both as an approach and as a set of techniques (crisp-set, multi-value and fuzzy-set QCA), stressing their specific characteristics. In a second section, we argue that, in several ways, there exists a preferential connection between QCA and public policy analysis: in terms of research design, but also in terms of the actual goals and needs of policy-oriented research.
Then, in the bulk of the article (sections 3 to 7), we provide an exhaustive survey of empirical applications published so far. To do so, we develop a typology of applications along two dimensions: the stages in the policy process (from agenda-setting and policy initiation to policy evaluation) and the level at which the 'cases' or units of analysis are empirically defined (from micro to macro). Finally, we wrap up this inventory by taking stock of what has been achieved so far. On that basis, we discuss some remaining limitations as well as some of the most promising avenues for further developments.
Some precautionary notes must be made. To start with, this review is circumscribed to the field of public policy analysis, focusing on the action of public/state bodies with regard to concrete issues or societal demands - as indeed the domain of 'policy analysis' is much broader and could also encompass decision-making processes in private firms2 and other non-public players (e.g. Gill & Saunders 1992). Further, the purpose of this article is to focus on the concrete ways in which QCA has been used in the field. This requires that we provide some short indications on the research questions and research results for most of the applications; quite large sections of the article are therefore rather descriptive in nature. Finally, this article does not aim at surveying the numerous technical issues and innovations in QCA (see rather Wagemann & Schneider 2010a, 2010b; Schneider & Wagemann forthcoming; Rihoux & Ragin 2009). Neither does it aim to provide 'quantitative' information on all the surveyed applications (e.g. number of cases, number of condition variables, ...) - this will be the purpose of a forthcoming article focusing on the broader population of QCA applications (Rihoux et al. 2012 forthcoming).
1 QCA in a nutshell
1.1 QCA as an approach
QCA designates both an approach and an umbrella term for three specific techniques. The whole approach, as well as the first technique (csQCA - crisp-set QCA, first referred to simply as QCA), was launched by Charles Ragin's seminal volume (1987). QCA is first and foremost comparative in nature - more precisely, it was initially geared towards multiple case studies, in a small- or intermediate-N research design. It thus strives to meet two apparently contradicting goals: gathering in-depth insight into the different cases and capturing their complexity (gaining 'intimacy' with the cases), while also producing some level of generalization (Ragin 1987). The whole intention of Ragin (1987, 1997) was to develop an original "synthetic strategy" as a middle way between the case-oriented (or 'qualitative') and the variable-oriented (or 'quantitative') approaches, which would "integrate the best features of the case-oriented approach with the best features of the variable-oriented approach" (Ragin 1987: 84).
On the one hand, indeed, QCA embodies some key strengths of the case-oriented approach (Ragin 1987; Berg-Schlosser et al. 2009). To start with, it is a holistic approach, in the sense that each individual case is considered as a complex whole which needs to be comprehended and which should not be forgotten in the course of the analysis. Thus, QCA is in essence a case-sensitive approach. Furthermore, QCA develops a conception of causality that leaves room for complexity: multiple conjunctural causation (Ragin 1987; Berg-Schlosser et al. 2009). This implies that: 1) most often, it is a combination of conditions (independent or "explanatory" variables) that eventually produces a phenomenon - the outcome (dependent variable, or phenomenon to be explained); 2) several different combinations of conditions may produce the same outcome; and 3) depending on the context, a given condition may very well have a different impact on the outcome. Thus different causal paths - each path being relevant, in a distinct way - may lead to the same outcome. Like J.S. Mill, Ragin rejects any form of permanent causality, since causality is context- and conjuncture-sensitive. Bottom line: by using QCA, the researcher is urged not to specify a single causal model that fits the data best, as one usually does with standard statistical techniques, but instead to "determine the number and character of the different causal models that exist among comparable cases" (Ragin 1987).
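The three features of multiple conjunctural causation can be made concrete with a small sketch. The conditions A, B, C, the cases, and the Boolean solution below are all invented for illustration; they do not come from any actual QCA application.

```python
# A minimal sketch of multiple conjunctural causation with crisp-set
# conditions A, B, C (1 = set membership, 0 = non-membership).
# Hypothetical solution formula: A*B + a*C -> OUTCOME
# (uppercase = presence, lowercase = absence), i.e. two distinct
# causal paths lead to the same outcome.

def outcome(a: int, b: int, c: int) -> int:
    """Hypothetical Boolean solution: (A AND B) OR (NOT A AND C)."""
    return int((a and b) or ((not a) and c))

# Path 1: A combined with B produces the outcome ...
assert outcome(1, 1, 0) == 1
# Path 2: ... and so does the absence of A combined with C.
assert outcome(0, 0, 1) == 1
# Context-dependence: the very same condition A is part of one causal
# path, yet blocks the other path when combined differently.
assert outcome(1, 0, 1) == 0
```

The third assertion illustrates point 3) above: whether A contributes to the outcome depends on the conjunction it appears in, not on A taken in isolation.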
On the other hand, QCA indeed embodies some key strengths of the quantitative, or analytic-formalized, approach. First, it allows one to analyze more than just a handful of cases, which is seldom done in case-oriented studies. This is a key asset, as it opens up the possibility to produce generalizations. Moreover, its key operations rely on Boolean algebra and set logic, and require that each case be reduced to a series of variables (conditions and an outcome). Hence, it is an analytic approach which allows replication (Berg-Schlosser et al. 2009). This replicability enables other researchers to eventually corroborate or falsify the results of the analysis, a key condition for progress in scientific knowledge (Popper 1963). This being said, QCA is not radically analytic, as it leaves some room for the holistic dimension of phenomena. This is linked to another fundamental feature of QCA: it establishes set connections, which are asymmetric by design, by contrast with correlational connections (and most other measures of association on which mainstream statistics are based), which are symmetric by design (Ragin 2006, 2008). Indeed, set-theoretic analysis, like qualitative research more generally, focuses on uniformities and near-uniformities - taking into consideration several combined properties of the 'cases' considered as whole configurations - and not on general patterns of association (Ragin 2008). Finally, the QCA algorithms allow one to identify (causal) regularities that are parsimonious, i.e. that can be expressed with the fewest possible conditions within the whole set of conditions considered in the analysis - though a maximum level of parsimony should not be pursued at all costs.
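The asymmetry of set connections can be illustrated with a toy example. Reading sufficiency as a subset relation (a condition X is sufficient for an outcome Y when the X-cases form a subset of the Y-cases), the relation holds in one direction without holding in the other; the case names below are hypothetical.

```python
# Sufficiency as an asymmetric subset relation (toy illustration).
# Cases where hypothetical condition X is present:
x_cases = {"Case1", "Case2"}
# Cases where the outcome Y is present:
y_cases = {"Case1", "Case2", "Case3"}

# X -> Y holds: every X-case is also a Y-case.
assert x_cases <= y_cases
# But the connection is asymmetric: Y is not a subset of X,
# unlike a correlation, which would be the same in both directions.
assert not (y_cases <= x_cases)
```

A correlation coefficient between X and Y would carry no such direction, which is the contrast the paragraph above draws between set-theoretic and correlational connections.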
1.2 QCA as a set of techniques
QCA using conventional Boolean sets (i.e. variables can be coded only "0" or "1", and thus have to be dichotomized) was developed first, which is why the label "QCA" has often been used to name this first technique. However, the standard practice (following Schneider & Wagemann 2007, and Ragin & Rihoux 2009) is now to distinguish between three labels: (1) when referring to the original Boolean version of QCA, we use csQCA (where "cs" stands for "crisp set"); (2) when referring to the version that allows multiple-category conditions, we use mvQCA (where "mv" stands for "multi-value"); (3) when referring to the fuzzy-set version, which also links fuzzy sets to truth table analysis, we use fsQCA (where "fs" stands for "fuzzy set").
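The difference between the three variants is easiest to see in how a single condition is coded. The condition, cases, and values below are invented for illustration; only the coding schemes themselves (dichotomous sets, ordered categories, graded membership in [0, 1]) reflect the three techniques. The min/max/1-x operations shown for fuzzy sets are the standard fuzzy-logic operators used in fsQCA.

```python
# Hypothetical coding of one condition ("degree of state intervention")
# for a handful of invented cases, under the three QCA variants.

crisp = {"Case1": 1, "Case2": 0}                    # csQCA: 0 or 1 only
multi_value = {"Case1": 2, "Case2": 0, "Case3": 1}  # mvQCA: ordered categories
fuzzy = {"Case1": 0.8, "Case2": 0.1, "Case3": 0.4}  # fsQCA: membership in [0, 1]

# Standard fuzzy-set operators: negation, intersection (AND), union (OR).
a, b = fuzzy["Case1"], fuzzy["Case3"]
assert round(1 - a, 1) == 0.2   # NOT a: 1 minus membership
assert min(a, b) == 0.4         # a AND b: minimum of memberships
assert max(a, b) == 0.8         # a OR b: maximum of memberships
```

Crisp sets are thus the special case of fuzzy sets in which memberships take only the values 0 and 1.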
The QCA protocol is similar across all three techniques, with some specificities and enrichments for mvQCA and fsQCA (Rihoux & De Meur 2009; Cronqvist & Berg-Schlosser 2009; Ragin 2008; Ragin 2009a). The more formalized steps, based on the formal logic of Boolean or set-theoretic algebra and implemented by computer programs, aim at identifying so-called "prime implicants" in a truth table. The key philosophy of csQCA is to "[start] by assuming causal complexity and then [mount] an assault on that complexity" (Ragin 1987: x).
One must first produce a data table, in which each case displays a specific combination of conditions (expressed in terms of set membership for all the conditions) and an outcome (also expressed in terms of set membership). The software then produces a truth table that displays the data as a list of configurations. A configuration is a given combination of some conditions and an outcome. A specific configuration may correspond to several observed cases, thereby producing a first step of synthesis of the data.
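This step from data table to truth table can be sketched in a few lines. The cases, conditions, and values below are invented; the point is only the grouping operation that collapses several cases into one configuration.

```python
from collections import defaultdict

# A toy data table: each hypothetical case is a combination of two
# crisp-set conditions (A, B) and an outcome (OUT).
cases = {
    "Case1": {"A": 1, "B": 0, "OUT": 1},
    "Case2": {"A": 1, "B": 0, "OUT": 1},
    "Case3": {"A": 0, "B": 1, "OUT": 0},
    "Case4": {"A": 1, "B": 1, "OUT": 1},
}

# Build the truth table: each observed configuration appears once,
# together with the list of cases that instantiate it.
truth_table = defaultdict(list)
for name, row in cases.items():
    config = (row["A"], row["B"], row["OUT"])
    truth_table[config].append(name)

# Case1 and Case2 share the configuration A=1, B=0 -> OUT=1, so the
# truth table synthesizes four cases into three configurations.
assert truth_table[(1, 0, 1)] == ["Case1", "Case2"]
assert len(truth_table) == 3
```

If two cases with the same conditions had different outcomes, the shared configuration would be a contradiction, whose detection is one of the uses of QCA discussed below.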
The following key step of the analysis is Boolean minimization - that is, reducing the long Boolean expression, which consists of the full description of the truth table, to the shortest possible expression (the minimal formula, which is the list of the prime implicants) that unveils the regularities in the data. It is then up to the researcher to interpret this minimal formula, possibly in terms of causality.
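The core of Boolean minimization can be sketched as the pairwise reduction rule on which Quine-McCluskey-style algorithms are built: two configurations that differ in exactly one condition are merged, dropping that condition. This is a simplified single pass, not the full algorithm (which iterates to a fixed point and then selects prime implicants against the original configurations); the terms below are invented.

```python
def combine(t1: str, t2: str):
    """Merge two terms differing in exactly one literal, e.g. '11' and '10' -> '1-'.
    Terms are strings over '0'/'1'/'-' ('-' = condition eliminated)."""
    diff = [i for i in range(len(t1)) if t1[i] != t2[i]]
    if len(diff) == 1:
        i = diff[0]
        return t1[:i] + "-" + t1[i + 1:]
    return None  # terms differ in more than one position: no merge

def minimize(terms):
    """One round of pairwise reduction (simplified Quine-McCluskey pass)."""
    merged, used = set(), set()
    for i, t1 in enumerate(terms):
        for t2 in terms[i + 1:]:
            m = combine(t1, t2)
            if m is not None:
                merged.add(m)
                used.update({t1, t2})
    survivors = [t for t in terms if t not in used]
    return sorted(merged) + survivors if merged else terms

# Two conditions (A, B): configurations '11' (A*B) and '10' (A*b) both
# lead to the outcome; B is redundant, so minimization yields '1-' (A).
assert minimize(["11", "10"]) == ["1-"]
```

In QCA notation this is the reduction A*B + A*b = A: since the outcome occurs whether B is present or absent, B can be eliminated from this pair of configurations.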
As a set of techniques, QCA can be used for at least five different purposes (De Meur and Rihoux 2002: 78-80; Berg-Schlosser et al. 2009). The most basic use is simply to summarize data, i.e. to describe cases in a synthetic way by producing a truth table, as a tool for data exploration and typology-building. This use is basic in the sense that it does not rely on a more elaborate, stepwise design of typology-building, such as the one recently developed by George and Bennett (2005). QCA can also be used to check coherence within the data: the detection of contradictions allows one to learn more about the individual cases. The third use is to test existing theories or hypotheses, in order to corroborate or refute them - QCA is hence a particularly powerful tool for theory-testing (e.g. Sager 2004; Goertz & Mahoney 2004). Fourth, it can be used to test some new ideas or propositions formulated by the researcher and not embodied in an existing theory; this can also be useful for data exploration. Finally, QCA allows one to elaborate new hypotheses...