The Normal Approximation to the Binomial Distribution
The process of using the normal curve to estimate the shape of the binomial distribution is known as normal approximation.
Learning Objective

Explain the origins of the central limit theorem for binomial distributions
Key Points

Originally, to solve a problem such as the chance of obtaining 60 heads in 100 coin flips, one had to compute the probability of 60 heads, then the probability of 61 heads, 62 heads, and so on, and add up all these probabilities.

Abraham de Moivre noted that when the number of events (coin flips) increased, the shape of the binomial distribution approached a very smooth curve.

Therefore, de Moivre reasoned that if he could find a mathematical expression for this curve, he would be able to solve problems such as finding the probability of 60 or more heads out of 100 coin flips much more easily.

This is exactly what he did, and the curve he discovered is now called the normal curve.
Terms

central limit theorem
The theorem stating that if independent, identically distributed random variables have finite variance, then their sum will be approximately normally distributed, with the approximation improving as the number of variables grows.

normal approximation
The process of using the normal curve to estimate the shape of the distribution of a data set.
Full Text
The binomial distribution can be used to solve problems such as, "If a fair coin is flipped 100 times, what is the probability of getting 60 or more heads?" The probability of exactly x heads out of N flips is computed using the formula:

P(x) = \frac{N!}{x!\,(N-x)!}\,\pi^{x}(1-\pi)^{N-x}
where x is the number of heads (60), N is the number of flips (100), and π is the probability of a head (0.5). Therefore, to solve this problem, you compute the probability of 60 heads, then the probability of 61 heads, 62 heads, and so on, and add up all these probabilities.
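As a sketch, the brute-force summation described above can be carried out directly in Python for the article's 100-flip example (the function name `binomial_pmf` is chosen here for illustration):

```python
from math import comb

def binomial_pmf(x, n, pi):
    # P(exactly x heads in n flips), where pi is the probability of a head
    return comb(n, x) * pi**x * (1 - pi)**(n - x)

# The lengthy route de Moivre faced: add up P(60), P(61), ..., P(100)
p_60_or_more = sum(binomial_pmf(x, 100, 0.5) for x in range(60, 101))
print(round(p_60_or_more, 4))
```

Each term matches the formula above; summing 41 of them gives the exact probability of 60 or more heads, roughly 0.028.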
Abraham de Moivre, an 18th-century statistician and consultant to gamblers, was often called upon to make these lengthy computations. De Moivre noted that when the number of events (coin flips) increased, the shape of the binomial distribution approached a very smooth curve. Therefore, de Moivre reasoned that if he could find a mathematical expression for this curve, he would be able to solve problems such as finding the probability of 60 or more heads out of 100 coin flips much more easily. This is exactly what he did, and the curve he discovered is now called the normal curve. The process of using this curve to estimate the shape of the binomial distribution is known as normal approximation, illustrated graphically in the accompanying figure.
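A minimal sketch of de Moivre's shortcut, using Python's standard-library `statistics.NormalDist`: the binomial with N = 100 and π = 0.5 has mean Nπ = 50 and standard deviation √(Nπ(1−π)) = 5, and "60 or more heads" is approximated by the area under that normal curve above 59.5 (the usual continuity correction):

```python
from math import sqrt
from statistics import NormalDist

n, pi = 100, 0.5
mu = n * pi                       # mean of the binomial: 50
sigma = sqrt(n * pi * (1 - pi))   # standard deviation: 5

# Continuity correction: "60 or more heads" becomes the area above 59.5
approx = 1 - NormalDist(mu, sigma).cdf(59.5)
print(round(approx, 4))
```

A single evaluation of the normal curve replaces the 41-term sum, and the answer agrees with the exact binomial probability to about three decimal places.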
The importance of the normal curve stems primarily from the fact that the distributions of many natural phenomena are at least approximately normal. One of the first applications of the normal distribution was to the analysis of errors of measurement made in astronomical observations, errors that occurred because of imperfect instruments and imperfect observers. Galileo in the 17th century noted that these errors were symmetric and that small errors occurred more frequently than large errors. This led to several hypothesized distributions of errors, but it was not until the early 19th century that it was discovered that these errors followed a normal distribution. Independently, the mathematicians Adrain (in 1808) and Gauss (in 1809) developed the formula for the normal distribution and showed that errors were fit well by this distribution.
This same distribution had been discovered by Laplace in 1778, when he derived the extremely important central limit theorem. Laplace showed that even if a distribution is not normally distributed, the means of repeated samples from the distribution would be very nearly normal, and that the larger the sample size, the closer the distribution of means would be to a normal distribution. Most statistical procedures for testing differences between means assume normal distributions. Because the distribution of means is very close to normal, these tests work well even if the underlying distribution itself is only roughly normal.
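Laplace's observation can be sketched in a short simulation: draw many samples from a decidedly non-normal distribution (here an exponential, chosen purely for illustration) and check that the sample means cluster tightly and symmetrically around the population mean:

```python
import random
from statistics import mean, stdev

random.seed(42)  # fixed seed so the sketch is reproducible

# Repeated samples of size 50 from Exp(1), a skewed, non-normal distribution
sample_size, n_samples = 50, 2000
sample_means = [mean(random.expovariate(1.0) for _ in range(sample_size))
                for _ in range(n_samples)]

# The population mean of Exp(1) is 1; the CLT says the sample means should
# cluster near it with spread close to 1/sqrt(sample_size) ≈ 0.141
print(round(mean(sample_means), 3), round(stdev(sample_means), 3))
```

Increasing `sample_size` tightens the cluster further, matching Laplace's claim that larger samples bring the distribution of means closer to normal.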
Sources
Boundless vets and curates high-quality, openly licensed content from around the Internet. This particular resource used the following sources:
Cite This Source
Source: Boundless. “The Normal Approximation to the Binomial Distribution.” Boundless Statistics. Boundless, 02 Jul. 2014. Retrieved 17 Apr. 2015 from https://www.boundless.com/statistics/textbooks/boundlessstatisticstextbook/continuousrandomvariables10/normalapproximation40/thenormalapproximationtothebinomialdistribution1942642/