Politics: Comparative Government Methodology: Literature Summaries and Exam Tips

Exam questions

What is the exam question asking you to do?
What key theoretical developments do you need to discuss?
How can you evaluate these theories?
3 main types:
- conceptual framework (eg parl vs pres)
- causal mechanisms (electoral vs party systems)
- modeling actor strategies (party systems)

Use:
- operationalization
- internal/external validity
- correlation/causation
- non-linear relationships; multiplicative/interaction effects (see the sketch below)
- multivariate analysis

Are any paradigms sufficiently successful in integrating statics and dynamics?
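The interaction-effects point is easy to see in simulation. A minimal sketch (my own toy example, not from the notes; all names and numbers invented): a purely additive regression misses a multiplicative term, while a model that includes x1*x2 recovers it.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10_000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
# True data-generating process includes a multiplicative term:
y = x1 + x2 + 2.0 * x1 * x2 + rng.normal(size=n)

X_add = np.column_stack([np.ones(n), x1, x2])           # additive model
X_int = np.column_stack([np.ones(n), x1, x2, x1 * x2])  # with interaction
for X in (X_add, X_int):
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    r2 = 1 - res[0] / np.sum((y - y.mean()) ** 2)
    # The additive fit finds roughly (1, 1) but a poor R^2; the
    # interaction fit also recovers the coefficient of 2.0 on x1*x2.
    print(beta.round(2), round(r2, 2))
```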
2 key (interrelated) areas: theory construction and empirical testing.

Me: need to ask why we care - broader implications = democratic theory. So should care about processes as well as substantive results. Historical institutionalism can be better for the former.

Comparative Politics: Mill's methods (1874)

Systematic search for the necessary and sufficient conditions of political phenomena:
- Necessary condition: without it the event cannot occur
- Sufficient condition: if it is present the event must occur
- Necessary and sufficient
- Method of agreement: compare cases with the same dependent variable.
- Method of difference: compare cases with different dependent variables, conditioning on a vector of other variables.

Critique. Assumptions too strong (see the sketch below):
- Causal processes must be deterministic
- No interaction effects
- Only a single cause
- All possible causes must be identified
- All the instances of the phenomenon that could ever occur have been observed by us, or all the unobserved instances must be like the instances we have observed

But, not definitive of comparative politics.
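To make the mechanical logic of Mill's methods concrete, here is a minimal sketch over toy boolean cases (the condition names and cases are hypothetical, invented for illustration). Note how it embodies exactly the strong assumptions criticized above: deterministic causes, no interaction effects, a single cause.

```python
# Hypothetical cases: each maps candidate conditions to an outcome.
cases = [
    {"mass_protest": True,  "elite_split": True,  "outcome": True},
    {"mass_protest": True,  "elite_split": False, "outcome": True},
    {"mass_protest": False, "elite_split": True,  "outcome": False},
]
conditions = ["mass_protest", "elite_split"]

def method_of_agreement(cases, conditions):
    """Conditions shared by every positive case: candidate necessary conditions."""
    positives = [c for c in cases if c["outcome"]]
    return [k for k in conditions if all(c[k] for c in positives)]

def method_of_difference(cases, conditions):
    """Conditions present in every positive case and absent in every
    negative case: candidate sufficient conditions."""
    positives = [c for c in cases if c["outcome"]]
    negatives = [c for c in cases if not c["outcome"]]
    return [k for k in conditions
            if all(c[k] for c in positives)
            and not any(c[k] for c in negatives)]

print(method_of_agreement(cases, conditions))   # ['mass_protest']
print(method_of_difference(cases, conditions))  # ['mass_protest']
```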
Elements of good theory

- Evidence to support it
- Clear derivable hypotheses that can be tested. Eg Marxism isn't falsifiable (Popper).
- Internal consistency
- Parsimony - eg Duverger's law
- Clear microfoundations - how it affects actors and their decisions. Eg how presidentialism affects choices of presidents.
- Clear causal mechanisms
- Avoids bad problems - eg the endogeneity problem. Potential outcomes framework to resolve... or instrumental variables (see the sketch below). Can I think of any?
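A minimal simulation of the instrumental-variables route mentioned in the last item (my example, numbers invented; it assumes a valid instrument z that moves x but affects y only through x):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
u = rng.normal(size=n)                 # unobserved confounder
z = rng.normal(size=n)                 # instrument
x = z + u + rng.normal(size=n)         # endogenous regressor
y = 2.0 * x + u + rng.normal(size=n)   # true causal effect of x is 2.0

ols = np.cov(x, y)[0, 1] / np.var(x)          # biased: x correlates with u
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]  # Wald/IV estimator
print(f"OLS ~ {ols:.2f} (biased), IV ~ {iv:.2f} (close to 2.0)")
```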
Elements of good research

- Public about methods
- Uncertainty
- Replicable
- Falsifiable
- Maximum leverage
- Internally consistent
- Relevant to the real world

1) Hypothesis
2) Causal mechanism
3) Definition of variables. What do you mean by economic development? What is development? Unit of analysis.
4) Evaluation of evidence/adjudication
5) Rival explanations

Smaller n/case studies:
- Not too narrow-visioned
- Analysis not description
- Grounded in references in the field
- Data is accurate/measurement error

Large n studies/rational choice (often synonymous with game theory) arguments:
- Good assumptions. All the strength of the argument is going to be built into the assumptions.
- Measurement error and potential for bias
- Enough similarity in cases
- Direct/valid measurement
- Reliability of the collection
- 'Controlled for'

KKV: DSI (1994)

1. The goal of scientific research is inference - ie attempting to infer beyond the immediate data to something broader that is not directly observed (whether descriptive or causal).

2. The procedures are public.

3. The conclusions are uncertain

4. The content is the method.

Descriptive Inference

1. Generalizing from a sample to a universe of cases.

2. Inferences from observations to concepts; from particular facts to broader social structure.

3. Separate the systematic and random components of any phenomenon.
= The process of understanding an unobserved phenomenon on the basis of a set of observations.

Causal inference

A counterfactual understanding of causation. 3 major assumptions:

1. Causal homogeneity (highly questionable)

2. IID

3. Conditional independence
- Random variables
- Unbiased estimators
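A minimal potential-outcomes sketch of why conditional independence matters (my example, not KKV's; all numbers invented): with random assignment the difference in means is an unbiased estimator of the causal effect; with confounded assignment it is not.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
ability = rng.normal(size=n)        # background trait
y0 = ability + rng.normal(size=n)   # potential outcome if untreated
y1 = y0 + 1.0                       # potential outcome if treated; effect = 1.0

# Random assignment: treatment independent of potential outcomes.
t = rng.random(n) < 0.5
y = np.where(t, y1, y0)
print(y[t].mean() - y[~t].mean())   # ~1.0: unbiased

# Confounded assignment: high-ability units select into treatment.
t = ability + rng.normal(size=n) > 0
y = np.where(t, y1, y0)
print(y[t].mean() - y[~t].mean())   # well above 1.0: biased
```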

Defining the problem

1. Important problem

2. Contribute to scholarly literature

3. Must permit valid inference

4. Construct falsifiable hypotheses
   a. Max observable implications
   b. Be concrete

5. Build theories that are logically consistent

6. Increase leverage
   a. Parsimony
   b. Observable outcomes

7. Range of variation

8. Sufficient number

9. Causal homogeneity - the assumption that all units with the same value of the explanatory variable have the same EV of the dependent variable

10. Avoid selection bias (see the sketch after this list)

11. Select cases non-randomly in small N

12. If observations aren't independent, recognize and address the cause of interdependence

Social science research should be both general and specific.
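A small simulation of the selection-bias warning in point 10 (my example, numbers invented): selecting only the cases with high values of the dependent variable attenuates the estimated relationship.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=100_000)
y = x + rng.normal(size=x.size)   # true slope = 1.0

def slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x)

print(slope(x, y))                # ~1.0 on the full sample
keep = y > 1.0                    # keep only "successful" cases
print(slope(x[keep], y[keep]))    # well below 1.0: attenuated
```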
Causality and causal inference
- Impossible to know a cause for certain
- Unit homogeneity rarely possible. Instead, key is understanding the degree of homogeneity

Criteria for Judging Causal Inferences
- Define causality in terms of RVs
- Use the same criteria to judge descriptive inferences: unbiasedness and efficiency

Rule 1: Falsifiability

Popper 1968. Popper's view: falsification (not verification) is key to science. Testable hypotheses are generated from theories, emphasizing that the value of these theories lies in their ability to defend themselves from disconfirmation through rigorous testing.

Rule 2: Internal consistency

Eg through formal modeling/RCT. But, need to be aware of assumptions.

Rule 3: Select Dependent Variables Carefully

1. Dependent variables should be dependent (avoid endogeneity problem)

2. Do not select observations based on the dependent variable such that the dependent variable is constant.

3. Choose a dependent variable that represents the variation we wish to explain.

Rule 4: Maximize Concreteness

Observable rather than unobservable concepts where possible. Concreteness and specificity in language and thought. Omitted variable bias if unobservable.

Rule 5: State Theories in as Encompassing a Way as Feasible

Problem of endogeneity: values of explanatory variables are sometimes a consequence, not a cause, of the dependent variable. KKV: a single observation is generally not a useful technique because of measurement error/OV bias. Falsification from a single case study is not really science.

Maximise leverage

- Try to explain as much as possible with as little as possible
- If we can accurately explain what at first appears to be a complicated effect with a single causal variable or a few variables, the leverage we have over a problem is v.high
- If we can explain many effects on the basis of one or a few variables, we also have high leverage
- Leverage is low in the social sciences in general
- Areas conventionally studied qualitatively are often those in which leverage is low
- Maximising leverage is so important and so general that KKV strongly recommend that researchers routinely list all possible observable implications of their hypotheses that might be observed in their data or in other data

Report uncertainty of inference

Descriptive inference

Assumptions underlying causal inference
- Unit homogeneity
  - EV of the dependent variables from each unit are the same when our explanatory variable takes on a particular value
  - A weaker but also fully acceptable version of unit homogeneity is the constant effect assumption
    - Constant causal effect
- Conditional independence
  - Values are assigned to explanatory variables independently of the values taken by the dependent variables
  - Assignment to the treatment group or the control group is independent of expected outcome, once controlled for the independent variables
- Endogeneity
- Random selection and assignment: help us to make causal inferences because they automatically satisfy three assumptions under CIA
  - The process of assigning values to the explanatory variables is independent of the dependent variable
  - Selection bias absent
  - OV bias absent
- Causal theories can originally be conceived as deductive or inductive
- Should be internally consistent
- Falsifiable
  - Process = searching for the bounds of their applicability

Overall:
- The same logic of inference underpins all good inquiry. NB qualitative research can be just as good as long as you follow the guidelines - KKV don't say quantitative research is inherently better! So people criticizing KKV on these grounds are missing the point.
- Main recommendations:
  - 1. Increase the number of observations (intertemporally and cross-nationally), though problem of effort/time/money
  - 2. Make data public
*Rogowski (2004), 'How inference in the social (but not the physical) sciences neglects theoretical anomaly'

KKV attends insufficiently to the importance of problematization and deductive theorizing. 3 complementary routes of inference:

1. Making clear the essential model, or process, that one hypothesizes to be at work

2. Teasing out the deductive implications of that model

3. Rigorously testing those implications against empirical reality.

Contends that KKV emphasizes the third almost to the exclusion of the first two. Argues single-observation studies can be v.useful. Eg TRUMAN (1951): widely accepted theory of 'cross-cutting cleavages'. Truman argued that mutually reinforcing social cleavages impeded social agreement. Only where each deep cleavage is orthogonal to another (eg Switzerland) could social peace endure. BUT...
LIJPHART: the Netherlands as a serious empirical challenge: 1) virtually no cross-cutting cleavages, 2) stable democracy.

McKeown (1997):
- DSI assumes similar problems of causal inference
- Forms the basis for analyzing problems as if for parameter estimation or significance testing
- Assumption is problematic
- Inconsistencies arise between their practical advice and their philosophical position
- DSI: see qualitative research as back up when quantitative research not feasible/too costly to pursue.
- Don't address question of adequacy of operationalization
- DSI: case studies beset by the degrees-of-freedom problem
- Tools for comparative case study research:
  - Detailed contextual knowledge: to assess the appropriateness of the methods employed in hypothesis testing, and to provide the practical understanding that is the basis for theorizing
  - Bayesian inference: evaluate new data in the light of prior empirical/theoretical knowledge (see the sketch below)
  - Crucial case analysis: select cases that offer valuable tests because we strongly expect them to confirm/disconfirm prior hypotheses
  - Counterfactual analysis
  - Process tracing: locating causal mechanisms
    - DSI's apparent criticism: infinitely long. Reply: the game tree is finite.
    - Better criticisms: inattentiveness to selection bias; failure to specify counterfactual claims with enough accuracy. But analysis of a subgame still provides a useful result.
    - Me: the whole tree thing is not incompatible with quantitative research, nor specific to qualitative research? Pretty dodgy argument.
  - Iterated dialogue among theory, data and research design: a good case is not necessarily a 'typical' case but rather a 'telling' case in which the...
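A minimal sketch of the Bayesian-inference tool above (my example; the likelihoods are invented): a prior belief in a hypothesis is updated case by case as evidence arrives.

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) by Bayes' rule."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

belief = 0.5             # agnostic prior about the hypothesis
for _ in range(3):       # three moderately confirming cases
    belief = update(belief, p_e_given_h=0.8, p_e_given_not_h=0.4)
print(round(belief, 3))  # 0.889: belief strengthened but not certain
```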
BRADY and COLLIER: RSI

- Too many variables - appearance of statistical significance without any causal explanation

Critique the 'quantitative template' as a basis for all social science research:
- KKV: tacit assumption that large-N research is superior. Seek to prescribe such tools for qualitative analysts.
- Preoccupation with hypothesis testing inhibits theory building and development. Important trade-off between theoretical innovation and rigorous testing.


KKV ignores various criticisms of quant approaches:
- Regression analysis depends on the model. If the model is wrong, so is the analysis:
  - Multicollinearity (R^2 may still be high - see the sketch below)
  - Heteroskedasticity
  - OV bias
  - Non-linear effects
  - Interaction effects
- KKV emphasise evaluating uncertainty, but significance tests evaluate specific kinds of uncertainty - not intended for general purposes
- Increasing move towards an untenable level of generality and loss of contextual knowledge
- KKV overstates the warning against post hoc hypothesis formation
- KKV emphasises deduction at the cost of induction
- Concern with context is a prerequisite for achieving descriptive and causal inference that is rigorous and valid. Required to understand the implications of homogeneity assumptions.
- Iterated refinement of models in quant testing (eg null vs alternative hypothesis) and adoption is similar to inductive practices and procedures, just less explicit.
- Trade-offs in research design: should recognize the strengths and weaknesses of different methods and be prepared to justify the choices made. Traditions should learn from each other. Hard to do either well.
- Any study based on observational data faces the challenge of eliminating rival explanations
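The multicollinearity point in a minimal simulation (my example, numbers invented): with two nearly collinear regressors the fit looks excellent while the individual coefficients are poorly pinned down.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)     # nearly collinear with x1
y = x1 + x2 + 0.1 * rng.normal(size=n)  # true coefficients (1, 1)

X = np.column_stack([np.ones(n), x1, x2])
beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = 1 - res[0] / np.sum((y - y.mean()) ** 2)
# Individual coefficients may land far from (1, 1); their sum and R^2
# stay stable, so the model "fits" while the parts are unidentified.
print(beta[1:].round(2), round(r2, 3))
```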

How far does the quantitative template get us?
- KKV: error in the dependent variable does not bias regression results; error in the independent variable biases regression coefficients (see the sketch at the end of this section)
- Substantial defects:
  - Equating explanation with causal inference (eg high R^2)
  - Narrow definition of causality
  - Lacks appreciation of the importance of measurement and concept formation
  - Mainstream regression analysis (frequentist approach) ignores supplementary data from outside the sample. Bayesian approach: prior knowledge about the world is deeply important; data are important only to the extent they modify our prior assumptions or theories.
  - Homogeneous causality unlikely. Often nonlinear. Qualitative methods helpful here. Isolating individual variables and testing them across time doesn't capture how they interact - it is inconsistent with the ontology of complex causality. Critical junctures/systems effects/path dependency. Snapshot approaches focusing on microfoundations can neglect this.
- Strengths:
  - Convo between quant and qual researchers
  - Counterfactuals (eg potential outcomes framework). Eg GOODIN (1992) (though possible in a qualitative framework too)
  - Selection bias
  - Increasing the number of observations - crucial
    - But, danger of spatial and temporal autocorrelation that can thwart innovative attempts to increase observations
  - Tarrow: whenever possible should use qualitative data to interpret quantitative findings. The important thing is the relations between quantitative and qualitative data.
  - Tools for bridging the quantitative-qualitative divide:
    - 1. Process tracing: focusing on processes of change within cases to uncover the underlying causal mechanisms
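KKV's measurement-error claim above in a minimal simulation (my example, numbers invented): noise in the dependent variable leaves the slope unbiased but noisier; noise in the independent variable attenuates it toward zero.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)   # true slope = 2.0

def slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x)

print(slope(x, y + rng.normal(size=n)))  # ~2.0: error in y, unbiased
print(slope(x + rng.normal(size=n), y))  # ~1.0: error in x, attenuated
```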
