# Chapter 9 Hypothesis testing

In the first chapter we discussed the three major goals of statistics:

• Describe
• Decide
• Predict

In this chapter we will introduce the ideas behind the use of statistics to make decisions – in particular, decisions about whether a particular hypothesis is supported by the data.

## 9.1 Null Hypothesis Statistical Testing (NHST)

The specific type of hypothesis testing that we will discuss is known (for reasons that will become clear) as null hypothesis statistical testing (NHST). If you pick up almost any scientific or biomedical research publication, you will see NHST being used to test hypotheses, and in their introductory psychology textbook, Gerrig & Zimbardo (2002) referred to NHST as the “backbone of psychological research”. Thus, learning how to use and interpret the results from hypothesis testing is essential to understand the results from this research.

It is also important for you to know, however, that NHST is deeply flawed, and that many statisticians and researchers (including myself) think that it has been the cause of serious problems in science, which we will discuss in Chapter 17. For more than 50 years, there have been calls to abandon NHST in favor of other approaches (like those that we will discuss in the following chapters):

• “The test of statistical significance in psychological research may be taken as an instance of a kind of essential mindlessness in the conduct of research” (Bakan, 1966)
• Hypothesis testing is “a wrongheaded view about what constitutes scientific progress” (Luce, 1988)

NHST is also widely misunderstood, largely because it violates our intuitions about how statistical hypothesis testing should work. Let’s look at an example to see how it works.

## 9.2 Null hypothesis statistical testing: An example

There is great interest in the use of body-worn cameras by police officers, which are thought to reduce the use of force and improve officer behavior. However, in order to establish this we need experimental evidence, and it has become increasingly common for governments to use randomized controlled trials to test such ideas. A randomized controlled trial of the effectiveness of body-worn cameras was performed by the Washington, DC government and DC Metropolitan Police Department in 2015/2016 in order to test the hypothesis that body-worn cameras are effective. Officers were randomly assigned to wear a body-worn camera or not, and their behavior was then tracked over time to determine whether the cameras resulted in less use of force and fewer civilian complaints about officer behavior.

Before we get to the results, let’s ask how you would think the statistical analysis might work. Let’s say we want to specifically test the hypothesis of whether the use of force is decreased by the wearing of cameras. The randomized controlled trial provides us with the data to test the hypothesis – namely, the rates of use of force by officers assigned to either the camera or control groups. The next obvious step is to look at the data and determine whether they provide convincing evidence for or against this hypothesis. That is: What is the likelihood that body-worn cameras reduce the use of force, given the data and everything else we know?

It turns out that this is not how null hypothesis testing works. Instead, we first take our hypothesis of interest (i.e. whether body-worn cameras reduce use of force), and flip it on its head, creating a null hypothesis – in this case, the null hypothesis would be that cameras do not reduce use of force. Importantly, we then assume that the null hypothesis is true. We then look at the data, and determine whether the data are sufficiently unlikely under the null hypothesis that we can reject the null in favor of the alternative hypothesis which is our hypothesis of interest. If there is not sufficient evidence to reject the null, then we say that we “failed to reject” the null.

Understanding some of the concepts of NHST, particularly the notorious “p-value”, is invariably challenging the first time one encounters them, because they are so counter-intuitive. As we will see later, there are other approaches that provide a much more intuitive way to address hypothesis testing (but have their own complexities). However, before we get to those, it’s important for you to have a deep understanding of how hypothesis testing works, because it’s clearly not going to go away any time soon.

## 9.3 The process of null hypothesis testing

We can break the process of null hypothesis testing down into a number of steps:

1. Formulate a hypothesis that embodies our prediction (before seeing the data)
2. Collect some data relevant to the hypothesis
3. Specify null and alternative hypotheses
4. Fit a model to the data that represents the alternative hypothesis and compute a test statistic
5. Compute the probability of the observed value of that statistic assuming that the null hypothesis is true
6. Assess the “statistical significance” of the result

For a hands-on example, let’s use the NHANES data to ask the following question: Is physical activity related to body mass index? In the NHANES dataset, participants were asked whether they engage regularly in moderate or vigorous-intensity sports, fitness or recreational activities (stored in the variable $$PhysActive$$). They also measured height and weight and computed Body Mass Index:

$BMI = \frac{weight(kg)}{height(m)^2}$
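As a quick check of the formula, here is how BMI would be computed in R for a single hypothetical participant (the weight and height values below are made up purely for illustration):

# compute BMI for one hypothetical participant (illustrative values only)
weight_kg <- 80
height_m <- 1.75
BMI <- weight_kg / height_m^2
BMI # approximately 26.1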

### 9.3.1 Step 1: Formulate a hypothesis

For step 1, we hypothesize that BMI should be greater for people who do not engage in physical activity, compared to those who do.

### 9.3.2 Step 2: Collect some data

For step 2, we collect some data. In this case, we will sample 250 individuals from the NHANES dataset. Figure 9.1 shows an example of such a sample, with BMI shown separately for active and inactive individuals.

# sample 250 adults from NHANES and compute mean BMI separately for active
# and inactive individuals

sampSize <- 250

# draw the sample (this assumes a data frame of adult NHANES participants,
# here called NHANES_adult, has already been created)
NHANES_sample <-
  NHANES_adult %>%
  sample_n(sampSize)

sampleSummary <-
  NHANES_sample %>%
  group_by(PhysActive) %>%
  summarize(
    N = length(BMI),
    mean = mean(BMI),
    sd = sd(BMI)
  )

# calculate the mean difference in BMI between active
# and inactive individuals; we'll use this later to calculate the t-statistic
meanDiff <-
  sampleSummary %>%
  select(
    PhysActive,
    mean
  ) %>%
  # spread the group means into separate No/Yes columns so we can subtract them
  spread(PhysActive, mean) %>%
  mutate(
    meanDiff = No - Yes
  ) %>%
  pull(meanDiff)

# calculate the summed variances in BMI for active
# and inactive individuals; we'll use this later to calculate the t-statistic
sumVariance <-
  sampleSummary %>%
  select(
    PhysActive,
    N,
    sd
  ) %>%
  gather(column, stat, N:sd) %>%
  unite(temp, PhysActive, column) %>%
  # spread into No_N, No_sd, Yes_N, and Yes_sd columns
  spread(temp, stat) %>%
  mutate(
    sumVariance = No_sd**2 / No_N + Yes_sd**2 / Yes_N
  ) %>%
  pull(sumVariance)

# print sampleSummary table
pander(sampleSummary)

| PhysActive | N   | mean  | sd   |
|------------|-----|-------|------|
| No         | 135 | 30.25 | 8.2  |
| Yes        | 115 | 28.6  | 6.88 |

### 9.3.3 Step 3: Specify the null and alternative hypotheses

For step 3, we need to specify our null hypothesis (which we call $$H_0$$) and our alternative hypothesis (which we call $$H_A$$). $$H_0$$ is the baseline against which we test our hypothesis of interest: that is, what would we expect the data to look like if there was no effect? The null hypothesis always involves some kind of equality (=, $$\le$$, or $$\ge$$). $$H_A$$ describes what we expect if there actually is an effect. The alternative hypothesis always involves some kind of inequality ($$\ne$$, >, or <). Importantly, null hypothesis testing operates under the assumption that the null hypothesis is true unless the evidence shows otherwise.

We also have to decide whether to use directional or non-directional hypotheses. A non-directional hypothesis simply predicts that there will be a difference, without predicting which direction it will go. For the BMI/activity example, a non-directional null hypothesis would be:

$$H_0: BMI_{active} = BMI_{inactive}$$

and the corresponding non-directional alternative hypothesis would be:

$$H_A: BMI_{active} \neq BMI_{inactive}$$

A directional hypothesis, on the other hand, predicts which direction the difference would go. For example, we have strong prior knowledge to predict that people who engage in physical activity should weigh less than those who do not, so we would propose the following directional null hypothesis:

$$H_0: BMI_{active} \ge BMI_{inactive}$$

and directional alternative:

$$H_A: BMI_{active} < BMI_{inactive}$$

### 9.3.4 Step 4: Fit a model to the data and compute a test statistic

For step 4, we want to use the data to compute a statistic that will ultimately let us decide whether the null hypothesis is rejected or not. To do this, the model needs to quantify the amount of evidence in favor of the alternative hypothesis, relative to the variability in the data. Thus we can think of the test statistic as providing a measure of the size of the effect compared to the variability in the data. In general, this test statistic will have a probability distribution associated with it, because that allows us to determine how likely our observed value of the statistic is under the null hypothesis.

For the BMI example, we need a test statistic that allows us to test for a difference between two means, since the hypotheses are stated in terms of mean BMI for each group. One statistic that is often used to compare two means is the t-statistic, first developed by the statistician William Sealy Gosset, who worked for the Guinness Brewery in Dublin and wrote under the pen name “Student” - hence, it is often called “Student’s t-statistic”. The t-statistic is appropriate for comparing the means of two groups when the sample sizes are relatively small and the population standard deviation is unknown. The t-statistic for comparison of two independent groups is computed as:

$t = \frac{\bar{X_1} - \bar{X_2}}{\sqrt{\frac{S_1^2}{n_1} + \frac{S_2^2}{n_2}}}$

where $$\bar{X}_1$$ and $$\bar{X}_2$$ are the means of the two groups, $$S^2_1$$ and $$S^2_2$$ are the estimated variances of the groups, and $$n_1$$ and $$n_2$$ are the sizes of the two groups. The t-statistic is distributed according to a probability distribution known as a t distribution. The t distribution looks quite similar to a normal distribution, but it differs depending on the number of degrees of freedom, which for this example is the number of observations minus 2, since we have computed two means and thus given up two degrees of freedom. When the degrees of freedom are large (say 1000), then the t distribution looks essentially like the normal distribution, but when they are small then the t distribution has longer tails than the normal (see Figure 9.2).
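To make the formula concrete, we can plug the rounded group summaries from the table above directly into this equation; the result should closely match the t statistic computed from the full-precision sample values below:

# hand-compute the t statistic from the rounded group summaries shown above
x1 <- 30.25; s1 <- 8.2; n1 <- 135 # inactive group (PhysActive == "No")
x2 <- 28.6; s2 <- 6.88; n2 <- 115 # active group (PhysActive == "Yes")

(x1 - x2) / sqrt(s1^2 / n1 + s2^2 / n2) # approximately 1.73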

### 9.3.5 Step 5: Determine the probability of the data under the null hypothesis

This is the step where NHST starts to violate our intuition – rather than determining the likelihood that the null hypothesis is true given the data, we instead determine the likelihood of the data under the null hypothesis - because we started out by assuming that the null hypothesis is true! To do this, we need to know the probability distribution for the statistic under the null hypothesis, so that we can ask how likely the data are under that distribution. Before we move to our BMI data, let’s start with some simpler examples.

##### 9.3.5.0.1 A very simple example

Let’s say that we wish to determine whether a coin is fair. To collect data, we flip the coin 100 times, and we count 70 heads. In this example, $$H_0: P(heads)=0.5$$ and $$H_A: P(heads) \neq 0.5$$, and our test statistic is simply the number of heads that we counted. The question that we then want to ask is: How likely is it that we would observe 70 heads if the true probability of heads is 0.5? We can imagine that this might happen very occasionally just by chance, but it doesn’t seem very likely. To quantify this probability, we can use the binomial distribution:

$P(X \le k) = \sum_{i=0}^k \binom{N}{i} p^i (1-p)^{(N-i)}$

This equation tells us the probability of observing $$k$$ heads or fewer, given a particular probability of heads $$p$$ and number of flips $$N$$. However, what we really want to know is the probability of a certain number of heads or more, which we can obtain by subtracting from one:

$P(X \ge k) = 1 - P(X \le k - 1)$

We can compute the probability for our example using the pbinom() function in R as follows:

# compute the probability of 69 or fewer heads, when P(heads)=0.5
p_lt_70 <- pbinom(69, 100, 0.5)
sprintf("probability of 69 or fewer heads given P(heads)=0.5: %0.6f", p_lt_70)
## [1] "probability of 69 or fewer heads given P(heads)=0.5: 0.999961"
# the probability of 70 or more heads is simply the complement of p_lt_70
p_ge_70 <- 1 - p_lt_70
sprintf("probability of 70 or more heads given P(heads)=0.5: %0.6f", p_ge_70)
## [1] "probability of 70 or more heads given P(heads)=0.5: 0.000039"

This computation shows us that the likelihood of getting 70 heads if the coin is indeed fair is very small. Now, what if we didn’t have the pbinom() function to tell us the probability of that number of heads? We could instead determine it by simulation – we repeatedly flip a coin 100 times using a true probability of 0.5, and then compute the distribution of the number of heads across those simulation runs. Figure 9.3 shows the result from this simulation.

# simulate tossing of 100,000 flips of 100 coins to identify empirical
# probability of 70 or more heads out of 100 flips

# create function to toss coins
tossCoins <- function() {
flips <- runif(100) > 0.5
return(sum(flips))
}

# use a large number of replications since this is fast
coinFlips <- replicate(100000, tossCoins())

p_ge_70_sim <- mean(coinFlips >= 70)
sprintf(
  "empirical probability of 70 or more heads given P(heads)=0.5: %0.6f",
  p_ge_70_sim
)
## [1] "empirical probability of 70 or more heads given P(heads)=0.5: 0.000020"

Here we can see that the probability computed via simulation (0.000020) is very close to the theoretical probability (.00004).

Let’s do the analogous computation for our BMI example. First we compute the t statistic using the values from our sample that we calculated above:

tStat <-
meanDiff / sqrt(sumVariance)

sprintf("t statistic = %0.3f", tStat)
## [1] "t statistic = 1.735"

The question that we then want to ask is: What is the likelihood that we would find a t statistic of this size, if the true difference between groups is zero or less (i.e. the directional null hypothesis)?
We can use the t distribution to determine this probability. Our sample size is 250, so the appropriate t distribution has 248 degrees of freedom. We can use the pt() function in R to determine the probability of finding a value of the t-statistic greater than or equal to our observed value. Note that we want to know the probability of a value greater than our observed value, but by default pt() gives us the probability of a value less than the one that we provide it, so we have to tell it explicitly to provide us with the “upper tail” probability (by setting lower.tail = FALSE).

pvalue_tdist <-
pt(tStat, df = 248, lower.tail = FALSE)

sprintf("p(t > %0.2f, df = 248) = %0.3f", tStat, pvalue_tdist)
## [1] "p(t > 1.74, df = 248) = 0.042"

This tells us that our observed t-statistic value of 1.74 is relatively unlikely if the null hypothesis really is true.

In this case, we used a directional hypothesis, so we only had to look at one end of the null distribution. If we wanted to test a non-directional hypothesis, then we would need to be able to identify how unexpected the size of the effect is, regardless of its direction. In the context of the t-test, this means that we need to know how likely it is that the statistic would be as extreme in either the positive or negative direction. To do this, we multiply the observed t value by -1, since the t distribution is centered around zero, and then add together the two tail probabilities to get a two-tailed p-value:

pvalue_tdist_twotailed <-
pt(tStat, df = 248, lower.tail = FALSE) +
pt(-1 * tStat, df = 248, lower.tail = TRUE)

sprintf(
"p(t > %0.2f or t< %0.2f, df = 248) = %0.3f",
tStat,
-1 * tStat, pvalue_tdist_twotailed
)
## [1] "p(t > 1.74 or t< -1.74, df = 248) = 0.084"

Here we see that the p value for the two-tailed test is twice as large as that for the one-tailed test, which reflects the fact that an extreme value is less surprising since it could have occurred in either direction.

How do you choose whether to use a one-tailed versus a two-tailed test? The two-tailed test is always going to be more conservative, so it’s always a good bet to use that one, unless you had a very strong prior reason for using a one-tailed test. In that case, you should have written down the hypothesis before you ever looked at the data. In Chapter 17 we will discuss the idea of pre-registration of hypotheses, which formalizes the idea of writing down your hypotheses before you ever see the actual data. You should never make a decision about how to perform a hypothesis test once you have looked at the data, as this can introduce serious bias into the results.

#### 9.3.5.1 Computing p-values using randomization

So far we have seen how we can use the t-distribution to compute the probability of the data under the null hypothesis, but we can also do this using simulation. The basic idea is that we generate simulated data like those that we would expect under the null hypothesis, and then ask how extreme the observed data are in comparison to those simulated data. The key question is: How can we generate data for which the null hypothesis is true? The general answer is that we can randomly rearrange the data in a specific way that makes the data look like they would if the null was really true. This is similar to the idea of bootstrapping, in the sense that it uses our own data to come up with an answer, but it does it in a different way.

##### 9.3.5.1.1 Randomization: a simple example

Let’s start with a simple example. Let’s say that we want to compare the mean squatting ability of football players with cross-country runners, with $$H_0: \mu_{FB} \le \mu_{XC}$$ and $$H_A: \mu_{FB} > \mu_{XC}$$. We measure the maximum squatting ability of 5 football players and 5 cross-country runners (which we will generate randomly, assuming that $$\mu_{FB} = 300$$, $$\mu_{XC} = 140$$, and $$\sigma = 30$$).

# generate simulated data for squatting ability across football players
# and cross country runners

# reset random seed for this example
set.seed(12345678)

# create a function to round values to the nearest multiple of 5,
# to keep the example simple
roundToNearest5 <- function(x, base = 5) {
return(base * round(x / base))
}

# create and show data frame containing simulated data
squatDf <- tibble(
group = as.factor(c(rep("FB", 5), rep("XC", 5))),
squat = roundToNearest5(c(rnorm(5) * 30 + 300, rnorm(5) * 30 + 140))
)

pander(squatDf)

| group | squat |
|-------|-------|
| FB    | 335   |
| FB    | 350   |
| FB    | 230   |
| FB    | 290   |
| FB    | 325   |
| XC    | 115   |
| XC    | 115   |
| XC    | 170   |
| XC    | 175   |
| XC    | 215   |

From the plot in Figure 9.4 it’s clear that there is a large difference between the two groups. We can do a standard t-test to test our hypothesis, using the t.test() command in R:

# compute and print t statistic comparing two groups

tt <-
t.test(
squat ~ group,
data = squatDf,
alternative = "greater",
var.equal = TRUE
)

sprintf("p(t > %0.2f, df = 8) = %0.5f", tt$statistic, tt$p.value)
## [1] "p(t > 5.14, df = 8) = 0.00044"

This shows that the likelihood of such a difference under the null hypothesis is very small, using the t distribution to define the null. Now let’s see how we could answer the same question using randomization. The basic idea is that if the null hypothesis of no difference between groups is true, then it shouldn’t matter which group one comes from (football players versus cross-country runners) – thus, to create data that are like our actual data but also conform to the null hypothesis, we can randomly reorder the group labels for the individuals in the dataset, and then recompute the difference between the groups. The results of such a shuffle are shown in Figure 9.5.

# create a scrambled version of the group membership variable

dfScram <-
squatDf %>%
mutate(
scrambledGroup = sample(group)
) %>%
select(-group)

pander(dfScram)

| squat | scrambledGroup |
|-------|----------------|
| 335   | FB             |
| 350   | XC             |
| 230   | FB             |
| 290   | XC             |
| 325   | XC             |
| 115   | FB             |
| 115   | FB             |
| 170   | XC             |
| 175   | FB             |
| 215   | XC             |

After scrambling the labels, we see that the two groups are now much more similar, and in fact the cross-country group now has a slightly higher mean. Now let’s do that 10000 times and store the t statistic for each iteration; this may take a moment to complete.

# shuffle data 10,000 times and compute distribution of t values

nRuns <- 10000

shuffleAndMeasure <- function(df) {
dfScram <-
df %>%
mutate(
scrambledGroup = sample(group)
)
tt <- t.test(
squat ~ scrambledGroup,
data = dfScram,
alternative = "greater",
var.equal = TRUE
)
return(tt$statistic)
}

shuffleDiff <- replicate(nRuns, shuffleAndMeasure(squatDf))

sprintf("mean t value across shuffles = %0.3f", mean(shuffleDiff))
## [1] "mean t value across shuffles = -0.004"

We can now look at the distribution of mean differences across the shuffled datasets. Figure 9.6 shows the histogram of the group differences across all of the random shuffles. As expected under the null hypothesis, this distribution is centered at zero. We can see that the distribution of t values after shuffling roughly follows the theoretical t distribution under the null hypothesis (with mean = 0), showing that randomization worked to generate null data. We also see something interesting if we compare the shuffled t values to the actual t value:

# compute number of runs on which the t statistic for the shuffled data was
# equal to the observed t statistic
equalSum <- sum(shuffleDiff == tt$statistic)
sprintf("Number of runs on which shuffled t == observed t: %d", equalSum)
## [1] "Number of runs on which shuffled t == observed t: 33"
# compute number of runs on which t statistic for shuffle data was
# equal to observed t statistic times -1

equalSumMinus <- sum(shuffleDiff == tt$statistic * -1)
sprintf("Number of runs on which shuffled t == observed t*-1: %d", equalSumMinus)
## [1] "Number of runs on which shuffled t == observed t*-1: 28"

There are 33 shuffle runs on which the t statistic for the shuffled data was exactly the same as the observed data – which means that the shuffle resulted in the same labeling as the actual data! This is unlikely, but not that unlikely, and we can actually compute its likelihood using a bit of probability theory. The number of possible permutations of 10 items is $$10!$$, which comes out to 3,628,800. The number of possible rearrangements of each set of 5 is $$5!$$, which comes out to 120, so the number of possible rearrangements of two sets of five is $$5! * 5!$$, or 14,400. Thus, we expect that 0.0039 of the random labelings should come out exactly the same as the original, which is fairly close to the 0.0033 that we see in our simulation. We have a similar expectation for the number of times that the labeling will be exactly opposite of the true labeling, giving us the negative of the observed t value.
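We can check this arithmetic directly in R:

# probability that a random shuffle exactly reproduces the original labeling
factorial(5) * factorial(5) / factorial(10) # 0.00397, i.e. about 0.0039 as noted above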

We can compute the p-value from the randomized data by measuring how many of the shuffled values are at least as extreme as the observed value:

# compute p value using randomization
pvalRandomization <- mean(shuffleDiff >= tt$statistic)

sprintf(
  "p(t > %0.2f, df = 8) using randomization = %0.5f",
  tt$statistic, pvalRandomization
)
## [1] "p(t > 5.14, df = 8) using randomization = 0.00330"

This p-value is very similar to the p-value that we obtained using the t distribution, and both are quite extreme, suggesting that the observed data are very unlikely to have arisen if the null hypothesis is true – and in this case we know that it’s not true, because we generated the data.

##### 9.3.5.1.2 Randomization: BMI/activity example

Now let’s use randomization to compute the p-value for the BMI/activity example. In this case, we will randomly shuffle the PhysActive variable and compute the difference between groups after each shuffle, and then compare our observed t statistic to the distribution of t statistics from the shuffled datasets.

# create function to shuffle BMI data
shuffleBMIstat <- function() {
  bmiDataShuffled <-
    NHANES_sample %>%
    select(BMI, PhysActive) %>%
    mutate(
      PhysActive = sample(PhysActive)
    )
  # compute the difference
  simResult <- t.test(
    BMI ~ PhysActive,
    data = bmiDataShuffled,
    var.equal = TRUE
  )
  return(simResult$statistic)
}

# run function 5000 times and save output

nRuns <- 5000
meanDiffSimDf <-
data.frame(
meanDiffSim = replicate(nRuns, shuffleBMIstat())
)

Let’s look at the results. Figure 9.7 shows the distribution of t values from the shuffled samples, and we can also compute the probability of finding a value as large or larger than the observed value:

# compute the empirical probability of t values larger than observed
# value under the randomization null
bmtTTest <-
t.test(
BMI ~ PhysActive,
data = NHANES_sample,
var.equal = TRUE,
alternative = "greater"
)

bmiPvalRand <-
mean(meanDiffSimDf$meanDiffSim >= bmtTTest$statistic)

sprintf(
  "p(mean > %0.2f, df = 248) using randomization = %0.5f",
  bmtTTest$statistic, bmiPvalRand
)
## [1] "p(mean > 1.71, df = 248) using randomization = 0.04380"

sprintf(
  "p(mean > %0.2f, df = 248) using parametric t-test = %0.5f",
  bmtTTest$statistic,
  bmtTTest$p.value
)
## [1] "p(mean > 1.71, df = 248) using parametric t-test = 0.04413"

Again, the p-value obtained from randomization (0.044) is very similar to the one obtained using the t distribution (0.044). The advantage of the randomization test is that it doesn’t require us to assume that the data from each of the groups are normally distributed, though the t-test is generally quite robust to violations of that assumption. In addition, the randomization test can allow us to compute p-values for statistics when we don’t have a theoretical distribution like we do for the t-test.

We do have to make one main assumption when we use the randomization test, which we refer to as exchangeability. This means that all of the observations are distributed in the same way, such that we can interchange them without changing the overall distribution. The main place where this can break down is when there are related observations in the data; for example, if we had data from individuals in 4 different families, then we couldn’t assume that individuals were exchangeable, because siblings would be closer to each other than they are to individuals from other families. In general, if the data were obtained by random sampling, then the assumption of exchangeability should hold.

### 9.3.6 Step 6: Assess the “statistical significance” of the result

The next step is to determine whether the p-value that results from the previous step is small enough that we are willing to reject the null hypothesis and conclude instead that the alternative is true. How much evidence do we require? This is one of the most controversial questions in statistics, in part because it requires a subjective judgment – there is no “correct” answer.

Historically, the most common answer to this question has been that we should reject the null hypothesis if the p-value is less than 0.05. This comes from the writings of Ronald Fisher, who has been referred to as “the single most important figure in 20th century statistics” (Efron, 1998):

“If P is between .1 and .9 there is certainly no reason to suspect the hypothesis tested. If it is below .02 it is strongly indicated that the hypothesis fails to account for the whole of the facts. We shall not often be astray if we draw a conventional line at .05 … it is convenient to draw the line at about the level at which we can say: Either there is something in the treatment, or a coincidence has occurred such as does not occur more than once in twenty trials” (Fisher, 1925)

However, Fisher never intended $$p < 0.05$$ to be a fixed rule:

“no scientific worker has a fixed level of significance at which from year to year, and in all circumstances, he rejects hypotheses; he rather gives his mind to each particular case in the light of his evidence and his ideas” (Fisher, 1956)

Instead, it is likely that it became a ritual due to the reliance upon tables of p-values that were used before computing made it easy to compute p-values for arbitrary values of a statistic. All of the tables had an entry for 0.05, making it easy to determine whether one’s statistic exceeded the value needed to reach that level of significance.

The choice of statistical thresholds remains deeply controversial, and recently it has been proposed that the standard threshold be changed from .05 to .005 (Benjamin et al., 2018), making it substantially more stringent and thus more difficult to reject the null hypothesis.
In large part this move is due to growing concerns that the evidence obtained from a significant result at $$p < .05$$ is relatively weak; we will discuss this in much more detail in our later discussion of reproducibility in Chapter 17.

#### 9.3.6.1 Hypothesis testing as decision-making: The Neyman-Pearson approach

Whereas Fisher thought that the p-value could provide evidence regarding a specific hypothesis, the statisticians Jerzy Neyman and Egon Pearson disagreed vehemently. Instead, they proposed that we think of hypothesis testing in terms of its error rate in the long run:

“no test based upon a theory of probability can by itself provide any valuable evidence of the truth or falsehood of a hypothesis. But we may look at the purpose of tests from another viewpoint. Without hoping to know whether each separate hypothesis is true or false, we may search for rules to govern our behaviour with regard to them, in following which we insure that, in the long run of experience, we shall not often be wrong” (Neyman & Pearson, 1933)

That is: We can’t know which specific decisions are right or wrong, but if we follow the rules, we can at least know how often our decisions will be wrong.

To understand the decision-making framework that Neyman and Pearson developed, we first need to discuss statistical decision making in terms of the kinds of outcomes that can occur. There are two possible states of reality ($$H_0$$ is true, or $$H_0$$ is false), and two possible decisions (reject $$H_0$$, or fail to reject $$H_0$$). There are two ways in which we can make a correct decision:

• We can decide to reject $$H_0$$ when it is false (in the language of decision theory, we call this a hit)
• We can fail to reject $$H_0$$ when it is true (we call this a correct rejection)

There are also two kinds of errors we can make:

• We can decide to reject $$H_0$$ when it is actually true (we call this a false alarm, or Type I error)
• We can fail to reject $$H_0$$ when it is actually false (we call this a miss, or Type II error)

Neyman and Pearson coined two terms to describe the probability of these two types of errors in the long run:

• P(Type I error) = $$\alpha$$
• P(Type II error) = $$\beta$$

That is, if we set $$\alpha$$ to .05, then in the long run we should make a Type I error 5% of the time. Whereas it’s common to set $$\alpha$$ as .05, the standard value for $$\beta$$ is .2 – that is, we are willing to accept that 20% of the time we will fail to detect a true effect. We will return to this below when we discuss statistical power in Section 10.3, which is the complement of the Type II error rate.
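To see what this long-run error rate means in practice, here is a small simulation sketch (the group size of 25 per group and the number of simulated experiments are arbitrary choices for illustration) in which the null hypothesis is true by construction, so that every rejection is a Type I error; the proportion of experiments rejected at p < .05 should come out close to the nominal $$\alpha$$ of 5%:

# simulate many two-group experiments in which H0 is true (no true difference)
set.seed(123456)
nSims <- 10000
nullPvals <- replicate(nSims, t.test(rnorm(25), rnorm(25))$p.value)

# long-run proportion of Type I errors should be close to alpha = .05
mean(nullPvals < 0.05)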
### 9.3.7 What does a significant result mean?

There is a great deal of confusion about what p-values actually mean (Gigerenzer, 2004). Let’s say that we do an experiment comparing the means between conditions, and we find a difference with a p-value of .01. There are a number of possible interpretations.

#### 9.3.7.1 Does it mean that the probability of the null hypothesis being true is .01?

No. Remember that in null hypothesis testing, the p-value is the probability of the data given the null hypothesis ($$P(data|H_0)$$). It does not warrant conclusions about the probability of the null hypothesis given the data ($$P(H_0|data)$$). We will return to this question when we discuss Bayesian inference in a later chapter, as Bayes’ theorem lets us invert the conditional probability in a way that allows us to determine the latter probability.

#### 9.3.7.2 Does it mean that the probability that you are making the wrong decision is .01?

No. This would be $$P(H_0|data)$$, but remember as above that p-values are probabilities of data under $$H_0$$, not probabilities of hypotheses.

#### 9.3.7.3 Does it mean that if you ran the study again, you would obtain the same result 99% of the time?

No. The p-value is a statement about the likelihood of a particular dataset under the null; it does not allow us to make inferences about the likelihood of future events such as replication.

#### 9.3.7.4 Does it mean that you have found a meaningful effect?

No. There is an important distinction between statistical significance and practical significance. As an example, let’s say that we performed a randomized controlled trial to examine the effect of a particular diet on body weight, and we find a statistically significant effect at p < .05. What this doesn’t tell us is how much weight was actually lost, which we refer to as the effect size (to be discussed in more detail in Chapter 10). If we think about a study of weight loss, then we probably don’t think that the loss of ten ounces (i.e. the weight of a bag of potato chips) is practically significant. Let’s look at our ability to detect a significant difference of 10 ounces as the sample size increases.

# create simulated data for weight loss trial

weightLossTrial <- function(nPerGroup, weightLossOz = 1) {
  kgToOz <- 35.27396195 # conversion constant for Kg to Oz
  # mean and SD in Kg based on NHANES adult dataset
  meanOz <- 81.78 * kgToOz
  sdOz <- 21.29 * kgToOz
  # create data
  controlGroup <- rnorm(nPerGroup) * sdOz + meanOz
  expGroup <- rnorm(nPerGroup) * sdOz + meanOz - weightLossOz
  ttResult <- t.test(expGroup, controlGroup)
  return(c(
    nPerGroup,
    weightLossOz,
    ttResult$p.value,
    diff(ttResult$estimate)
  ))
}

nRuns <- 1000
sampSizes <- 2**seq(5,17) # powers of 2

simResults <- c() ## create an empty list to add results onto
for (i in 1:length(sampSizes)) {
tmpResults <- replicate(
nRuns,
weightLossTrial(sampSizes[i], weightLossOz = 10)
)
summaryResults <- c(
tmpResults[1, 1], tmpResults[2, 1],
sum(tmpResults[3, ] < 0.05),
mean(tmpResults[4, ])
)
simResults <- rbind(simResults, summaryResults)
}

simResultsDf <-
as.tibble(simResults) %>%
rename(
sampleSize = V1,
effectSizeLbs = V2,
nSigResults = V3,
meanEffect = V4
) %>%
mutate(pSigResult = nSigResults / nRuns)

Figure 9.8 shows how the proportion of significant results increases as the sample size increases, such that with a very large sample size (about 262,000 total subjects), we will find a significant result in more than 90% of studies when there is a 10 ounce weight loss. While these are statistically significant, most physicians would not consider a weight loss of ten ounces to be practically or clinically significant. We will explore this relationship in more detail when we return to the concept of statistical power in Section 10.3, but it should already be clear from this example that statistical significance is not necessarily indicative of practical significance.

## 9.4 NHST in a modern context: Multiple testing

So far we have discussed examples where we are interested in testing a single statistical hypothesis, and this is consistent with traditional science which often measured only a few variables at a time. However, in modern science we can often measure millions of variables per individual. For example, in genetic studies that quantify the entire genome, there may be many millions of measures per individual, and in brain imaging we often collect data from more than 100,000 locations in the brain at once. When standard hypothesis testing is applied in these contexts, bad things can happen unless we take appropriate care.

Let’s look at an example to see how this might work. There is great interest in understanding the genetic factors that can predispose individuals to major mental illnesses such as schizophrenia, because we know that about 80% of the variation between individuals in the presence of schizophrenia is due to genetic differences. The Human Genome Project and the ensuing revolution in genome science have provided tools to examine the many ways in which humans differ from one another in their genomes. One approach that has been used in recent years is known as a genome-wide association study (GWAS), in which each individual’s genome is characterized at one million or more locations to determine which letters of the genetic code (which we call “variants”) they have at each location. After these have been determined, the researchers perform a statistical test at each location in the genome to determine whether people diagnosed with schizophrenia are more or less likely to have one specific variant at that location.

Let’s imagine what would happen if the researchers simply asked whether the test was significant at p<.05 at each location, when in fact there is no true effect at any of the locations. To do this, we generate a large number of simulated t values from a null distribution, and ask how many of them are significant at p<.05. Let’s do this many times, and each time count up how many of the tests come out as significant (see Figure 9.9).

# simulate 1500 studies with 10,000 tests each, thresholded at p < .05

nRuns <- 1500 # number of simulated studies to run
nTests <- 10000 # number of simulated genes to test in each run

uncAlpha <- 0.05 # alpha level

uncOutcome <- replicate(nRuns, sum(rnorm(nTests) < qnorm(uncAlpha)))

sprintf("mean proportion of significant tests per run: %0.2f", mean(uncOutcome) / nTests)
## [1] "mean proportion of significant tests per run: 0.05"
# compute proportion of studies with at least one false positive result,
# known as the familywise error rate
sprintf("familywise error rate: %0.3f", mean(uncOutcome > 0))
## [1] "familywise error rate: 1.000"

This shows that about 5% of all of the tests were significant in each run, meaning that if we were to use p < .05 as our threshold for statistical significance, then even if there were no truly significant relationships present, we would still “find” about 500 genes that were seemingly significant (the expected number of significant results is simply $$n * \alpha$$). That is because while we controlled for the error per test, we didn’t control the familywise error, or the error across all of the tests, which is what we really want to control if we are going to be looking at the results from a large number of tests. Using p<.05, our familywise error rate in the above example is one – that is, we are pretty much guaranteed to make at least one error in any particular study.
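If we assume that the tests are independent, the familywise error rate can also be computed analytically as the probability of making at least one Type I error across all of the tests, $$1 - (1 - \alpha)^n$$, using the values already defined in the simulation above:

# analytic familywise error rate for n independent tests at the uncorrected alpha
1 - (1 - uncAlpha)^nTests # effectively 1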

A simple way to control for the familywise error is to divide the alpha level by the number of tests; this is known as the Bonferroni correction, named after the Italian statistician Carlo Bonferroni. Using the data from our example above, we see in Figure 9.10 that only about 5 percent of studies show any significant results using the corrected alpha level of 0.000005 instead of the nominal level of .05. We have effectively controlled the familywise error, such that the probability of making any errors in our study is controlled at right around .05.

# compute Bonferroni-corrected alpha
corAlpha <- 0.05 / nTests

corOutcome <- replicate(nRuns, sum(rnorm(nTests) < (qnorm(corAlpha))))

sprintf("corrected familywise error rate: %0.3f", mean(corOutcome > 0))
## [1] "corrected familywise error rate: 0.046"