Statistical Power in Content Analysis Designs
How Effect Size, Sample Size and Coding Accuracy Jointly Affect Hypothesis Testing – A Monte Carlo Simulation Approach
Keywords:
Content analysis; Power analysis; Sample size; Effect size; Intercoder reliability; Intercoder agreement; Hypothesis testing; Monte Carlo simulation

Abstract
Analogous to power calculations for experimental designs, this study uses Monte Carlo simulation techniques to estimate the minimum required levels of intercoder reliability in content analysis data for testing correlational hypotheses, depending on sample size, effect size, and coder behavior under uncertainty. For the most widespread sample sizes, the simulation results support the rule of thumb that chance-adjusted agreement should be ≥ .80 or ≥ .667, yielding acceptable α and β error rates. However, in studies with low sample sizes and/or low expected effect sizes, statistical power will be too low even if coding agreement is decent. Conversely, in studies with high sample sizes and/or high expected effect sizes, coder agreement below .667 may still result in reasonable statistical power. Beyond its use in evaluating existing studies, this framework can inform study design: particularly in pre-registered research, larger sample sizes may be used to compensate for low expected effect sizes and/or borderline coding reliability (e.g., when constructs are hard to measure). Equations and a set of tables demonstrate how the framework can be applied in designing, reporting, and evaluating content analysis studies. A web-based calculator and R functions are available to facilitate its use.
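To illustrate the general logic of the simulation approach, the following minimal R sketch shows one way such a power estimate can be obtained: data with a known true correlation are generated, the coded variable is degraded according to an assumed coding accuracy, and the proportion of simulation runs in which the hypothesis test reaches significance is recorded. The function name simulate_power, the dichotomous coding scheme, and all parameter values are illustrative assumptions and do not correspond to the authors' published R functions.

# Minimal illustrative sketch (assumed names and parameters, not the authors' code):
# estimate statistical power for a correlational hypothesis when one dichotomous
# content-analytic variable is measured with imperfect coding accuracy.
set.seed(42)

simulate_power <- function(n, true_r, accuracy, n_sims = 2000, alpha = .05) {
  significant <- replicate(n_sims, {
    x <- rnorm(n)                                               # error-free predictor
    y_latent <- true_r * x + rnorm(n, sd = sqrt(1 - true_r^2))  # latent outcome with true effect
    y_true <- as.integer(y_latent > 0)                          # true dichotomous category
    miscoded <- runif(n) > accuracy                              # unit miscoded with prob. 1 - accuracy
    y_coded <- ifelse(miscoded, 1L - y_true, y_true)             # observed (coded) variable
    cor.test(x, y_coded)$p.value < alpha                         # does the test reach significance?
  })
  mean(significant)                                              # share of significant runs = power
}

# Example: true effect r = .30, n = 200 units, coding accuracy = .85
simulate_power(n = 200, true_r = .30, accuracy = .85)

In such a sketch, lowering the assumed coding accuracy, the sample size, or the true effect size reduces the share of significant runs, which is the estimated statistical power that the framework tabulates.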