Significance Levels
If a test of significance gives a p-value lower than or equal to the significance level, the null hypothesis is rejected at that level.
Learning Objective

Outline the process for calculating a p-value and recognize its role in measuring the significance of a hypothesis test.
Key Points
 Significance levels may be used either as a cutoff mark for a p-value or as a desired parameter in the test design.
 To compute a p-value from the test statistic, one must simply sum (or integrate over) the probabilities of more extreme events occurring.
 In some situations, it is convenient to express the complementary statistical significance (so 0.95 instead of 0.05), which corresponds to a quantile of the test statistic.
 Popular levels of significance are 10% (0.1), 5% (0.05), 1% (0.01), 0.5% (0.005), and 0.1% (0.001).
 The lower the significance level chosen, the stronger the evidence required.
Terms

Student's t-test
Any statistical hypothesis test in which the test statistic follows a Student's t-distribution when the null hypothesis is true.

p-value
The probability of obtaining a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true.
Full Text
A fixed number, most often 0.05, is referred to as a significance level or level of significance. Such a number may be used either as a cutoff mark for a p-value or as a desired parameter in the test design.
p-Value
In brief, the (left-tailed) p-value is the quantile of the value of the test statistic, with respect to the sampling distribution under the null hypothesis. The right-tailed p-value is one minus the quantile, while the two-tailed p-value is twice whichever of these is smaller. Computing a p-value requires a null hypothesis, a test statistic (together with a decision between a one-tailed and a two-tailed test), and data. The key preparatory computation is the cumulative distribution function (CDF) of the sampling distribution of the test statistic under the null hypothesis, which may depend on parameters of the null distribution and on the number of samples in the data. The test statistic is then computed for the actual data, and its quantile is obtained by inputting it into the CDF. An example of a p-value computation is shown in the figure below.
p-Value Graph
Example of a p-value computation. The vertical coordinate is the probability density of each outcome, computed under the null hypothesis. The p-value is the area under the curve past the observed data point.
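The left-, right-, and two-tailed relationships described above can be sketched directly from a CDF. This minimal example assumes a standard normal test statistic (so the CDF is available through the standard library's error function); the function names are illustrative:

```python
import math

def normal_cdf(z):
    # CDF of the standard normal distribution, expressed via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def p_values(z):
    # Left-tailed p-value: the quantile of the test statistic under H0
    left = normal_cdf(z)
    # Right-tailed p-value: one minus the quantile
    right = 1 - left
    # Two-tailed p-value: twice whichever of the two is smaller
    two = 2 * min(left, right)
    return left, right, two

left, right, two = p_values(1.96)
```

For z = 1.96 the two-tailed p-value comes out near the conventional 0.05 cutoff, which is why 1.96 is the familiar critical value for a two-sided test at the 5% level.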
Hypothesis tests, such as Student's t-test, typically produce test statistics whose sampling distributions under the null hypothesis are known. For instance, in the example of flipping a coin, the test statistic is the number of heads produced. This number follows a known binomial distribution if the coin is fair, and so the probability of any particular combination of heads and tails can be computed. To compute a p-value from the test statistic, one must simply sum (or integrate over) the probabilities of more extreme events occurring. For commonly used statistical tests, test statistics and their corresponding p-values are often tabulated in textbooks and reference works.
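As a sketch of that summation, the right-tailed p-value for the coin example can be computed by adding up the binomial probabilities of every outcome at least as extreme as the observed head count (the helper names here are illustrative, not from any particular library):

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    # Probability of exactly k heads in n flips of a coin with head-probability p
    return comb(n, k) * p**k * (1 - p)**(n - k)

def right_tail_p(k_obs, n, p=0.5):
    # Sum the probabilities of all outcomes at least as extreme as k_obs
    return sum(binom_pmf(k, n, p) for k in range(k_obs, n + 1))

# Observing 14 or more heads in 20 flips of a fair coin
p = right_tail_p(14, 20)
```

Here the summed tail probability is roughly 0.058, so at the 5% significance level this result would (narrowly) fail to reject the hypothesis that the coin is fair.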
Using Significance Levels
Popular levels of significance are 10% (0.1), 5% (0.05), 1% (0.01), 0.5% (0.005), and 0.1% (0.001). If a test of significance gives a p-value lower than or equal to the significance level, the null hypothesis is rejected at that level. Such results are informally referred to as statistically significant (at the p = 0.05 level, etc.). For example, if someone argues that "there's only one chance in a thousand this could have happened by coincidence", a 0.001 level of statistical significance is being stated. The lower the significance level chosen, the stronger the evidence required. The choice of significance level is somewhat arbitrary, but for many applications, a level of 5% is chosen by convention.
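The decision rule itself is a single comparison. A minimal sketch, assuming the conventional 5% default:

```python
def reject_null(p_value, alpha=0.05):
    # Reject H0 when the p-value is at or below the chosen significance level
    return p_value <= alpha

reject_null(0.03)              # rejected at the conventional 5% level
reject_null(0.03, alpha=0.01)  # not rejected at the stricter 1% level
```

Note that a p-value exactly equal to the significance level still counts as rejection, since the rule is "lower than or equal to."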
In some situations, it is convenient to express the complementary statistical significance (so 0.95 instead of 0.05), which corresponds to a quantile of the test statistic. In general, when interpreting a stated significance, one must be careful to make precise note of what is being tested statistically.
Different cutoff levels trade off countervailing effects. Lower levels, such as 0.01 instead of 0.05, are stricter and increase confidence in a determination of significance, but they run an increased risk of failing to reject a false null hypothesis. Evaluating a given p-value requires a degree of judgment; rather than applying a strict cutoff, one may simply regard lower p-values as more significant.
Sources
Boundless vets and curates high-quality, openly licensed content from around the Internet. This particular resource used the following sources:
Source: Boundless. “Significance Levels.” Boundless Statistics. Boundless, 21 Jul. 2015. Retrieved 22 Jul. 2015 from https://www.boundless.com/statistics/textbooks/boundlessstatisticstextbook/estimationandhypothesistesting12/hypothesistestingonesample54/significancelevels2652716/