Sampling distribution is the probability distribution of a particular sample statistic (such as mean) obtained by drawing all possible samples of a particular sample size ‘n’ from the population and calculating their statistics.
Sampling Distribution of Sample Mean and the Central Limit Theorem
If you draw samples of size, let’s say, ‘n’ from a population, calculate the sample mean for each sample and then draw the probability distribution for the random variable X̅ (where X̅ denotes the mean of a sample), the resulting probability distribution is called the ‘sampling distribution of sample means’. The mean of the sample means is denoted by μx̅. The standard deviation of the sampling distribution of the sample means is denoted by σx̅.
Central Limit Theorem: In simple words, the central limit theorem can be stated as follows:
When you draw a sampling distribution of sample means, where the sample size is sufficiently large, the sampling distribution of the sample means will look like a normal distribution.
When is the sample size (n) considered sufficiently large?
For a non-normally distributed population, ‘n’ should be greater than or equal to 30. (The value 30 is a rule of thumb rather than a strict cutoff; highly skewed populations may need larger samples.) For a normally distributed population, the sample size can be anything.
Significance of the central limit theorem: The central limit theorem states that for a sufficiently large sample size, the sampling distribution of sample means is approximately normal, and the approximation improves as the sample size increases. Because this distribution is normal, it has its own normal variate (Z). In the next section, you will see how this Z is used to estimate population parameters.
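The theorem can be checked with a quick simulation. The sketch below uses a hypothetical population (exponential with mean 1 and standard deviation 1, which is heavily right-skewed) and shows that the means of many samples of size 40 still cluster around the population mean with spread close to σ/√n:

```python
import random
import statistics

random.seed(42)

# Hypothetical population: exponential with rate 1, so mu = 1 and sigma = 1.
POP_MEAN, POP_SD = 1.0, 1.0
n = 40              # sample size (>= 30, per the rule of thumb above)
num_samples = 20000 # number of samples drawn

# Draw many samples and record each sample's mean.
sample_means = [
    statistics.fmean(random.expovariate(1.0) for _ in range(n))
    for _ in range(num_samples)
]

# CLT predictions: mean of sample means ~ mu, spread ~ sigma / sqrt(n).
print(round(statistics.fmean(sample_means), 2))   # close to 1.0
print(round(statistics.stdev(sample_means), 3))   # close to 1/sqrt(40), about 0.158
```

Plotting a histogram of `sample_means` would show the familiar bell shape even though the underlying population is strongly skewed.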
Important property: Mean of the sample means (μx̅) = Mean of the population (μ)
Standard deviation of the sample means (σx̅) = σ / √n, where σ is the population standard deviation and n is the size of each sample.
So, the normal variate, or the Z-score, for the sampling distribution of sample means is:
Z = ( x̅ - μ ) / ( σ / √n )
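As a worked example with made-up numbers: suppose a population has μ = 100 and σ = 15, and a sample of n = 36 observations has mean x̅ = 104. The Z-score for that sample mean is:

```python
import math

# Hypothetical values: population mu = 100, sigma = 15, and one sample
# of size n = 36 whose observed mean is 104.
mu, sigma, n, x_bar = 100.0, 15.0, 36, 104.0

standard_error = sigma / math.sqrt(n)   # sigma / sqrt(n) = 15 / 6 = 2.5
z = (x_bar - mu) / standard_error       # (104 - 100) / 2.5 = 1.6
print(z)
```

So this sample mean lies 1.6 standard errors above the population mean.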
The process of drawing inferences about a population using the information from its samples is known as estimation.
Types of estimation
Point estimate: Here, a statistic obtained from a sample is used directly to estimate a population parameter, so its accuracy depends on how well the sample represents the population. Point estimates derived from different samples may vary, which is why an interval estimate is preferred to a point estimate.
Interval estimate: Here, the lower and upper limits of values (that is, the confidence interval) within which a population parameter will lie are estimated along with a certain level of confidence.
The mathematics involved in interval estimate:
As discussed above, the normal variate of the sampling distribution of sample means is:
Z = (X̅ - μ)/(σ/√n)
Rearranging the equation above, you get:
(X̅ - μ) = Z*(σ/√n)
Since Z can be both positive and negative (negative when X̅ is smaller than the population mean μ), you have:
(X̅ - μ) = ± Z*(σ/√n)
The equation above can be rearranged to:
μ= X̅ ± ( Z*(σ/√n))
So, you can say that the population mean μ will lie between:
X̅ - (Z*(σ/√n)) < μ < X̅ + ( Z*(σ/√n))
The formula above is used to calculate the upper and the lower limits of μ for a certain level of confidence (a certain value of Z), where the value of σ is known.
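As a sketch of this calculation with hypothetical numbers: suppose σ is known to be 15, a sample of n = 36 observations has mean X̅ = 104, and you want a 95% confidence interval, for which the Z value is approximately 1.96:

```python
import math

# Hypothetical values: known population sigma and one observed sample.
x_bar, sigma, n = 104.0, 15.0, 36
z = 1.96                                # Z value for a 95% confidence level

margin = z * (sigma / math.sqrt(n))     # 1.96 * 2.5 = 4.9
lower, upper = x_bar - margin, x_bar + margin
print(lower, upper)                     # roughly 99.1 and 108.9
```

That is, you can be 95% confident that μ lies between roughly 99.1 and 108.9.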
What if the value of σ is not known? In that case, you use the t-distribution.
Properties of T-distribution:
It applies when the samples are drawn from a normally (or approximately normally) distributed population.
It is flatter than a normal distribution, with heavier tails.
Degrees of freedom = Sample size - Number of unknown parameters
Here, one parameter, the population mean, is estimated from the sample (via X̅) in order to compute s. So, the degrees of freedom for a t-distribution are given by ‘sample size (n) - 1’.
Test statistic for the t-distribution: t = (X̅ - μ)/(s/√n),
where ‘s’ is the sample standard deviation.
The formula to find the confidence interval is:
X̅ - ( tα/2*(s/√n)) < μ < X̅ + ( tα/2*(s/√n)),
where 1-α is the confidence level associated with it.
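As a sketch with hypothetical data: for a sample of 10 measurements where σ is unknown, s is computed from the sample itself, and the critical value tα/2 comes from a t-table. For a 95% confidence level (α = 0.05) with n − 1 = 9 degrees of freedom, tα/2 ≈ 2.262:

```python
import math
import statistics

# Hypothetical sample of n = 10 measurements; sigma is unknown, so the
# sample standard deviation s and the t-distribution are used instead.
sample = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7, 12.5, 12.1]
n = len(sample)
x_bar = statistics.fmean(sample)    # sample mean = 12.1
s = statistics.stdev(sample)        # sample standard deviation

t_crit = 2.262                      # t-table value for alpha/2 = 0.025, df = 9

margin = t_crit * s / math.sqrt(n)
lower, upper = x_bar - margin, x_bar + margin
print(round(lower, 2), round(upper, 2))   # roughly 11.92 and 12.28
```

With a library such as SciPy available, the hard-coded table value could instead be computed as `scipy.stats.t.ppf(0.975, df=9)`.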