There are generally two basic types of inferential statistics:
Confidence Interval Estimation
It is used when the issue under investigation involves learning the value of an unknown population parameter. In confidence interval estimation, a random sample is collected from the population, and the resulting sample statistics are used to determine the lower limit L and upper limit U of an interval intended to contain the actual value of the unknown population parameter.
The idea of confidence interval estimation will focus on two aspects:
The confidence - This refers to the likelihood that the population parameter will be contained within the interval; that is, the population parameter is estimated to lie between L and U with (1 - α) × 100% confidence, where the notation (1 - α) × 100% represents what is called the confidence level. The confidence level is the probability that the confidence interval accurately estimates the population parameter.
So to have a great deal of confidence in the estimate, one must set the confidence level very high. In statistics, the customary value used for α, which is called the level of significance, is 0.05.
Thus, the customary confidence level used in statistics is (1 - 0.05) × 100%, or 95%. Confidence intervals constructed at this level accurately estimate the value of the population parameter 95% of the time.
Even though the value of the population parameter is usually unknown, the actual value is fixed. The exact value can be determined if you conduct a census.
On the other hand, since random sample results are used to calculate the confidence interval, the resulting lower limit L and upper limit U produce varying results determined by chance due to random sampling.
Each random sample produces its own confidence interval: it is the sample's actual data values that determine the resulting lower and upper limits.
The interval - A confidence interval is a range of values from some lower limit L to upper limit U such that the actual value of the population parameter is estimated to fall somewhere within it.
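As a concrete sketch of how such an interval can be computed, the following assumes a normally distributed population with a known standard deviation, so a simple z-interval applies; the sample values and sigma here are invented for illustration:

```python
import math
import statistics

def confidence_interval(sample, sigma, z=1.96):
    """Return (L, U) for the population mean, assuming known sigma.

    Uses the z-interval  xbar +/- z * sigma / sqrt(n), where
    z = 1.96 is the critical value for alpha = 0.05 (95% confidence).
    """
    n = len(sample)
    xbar = statistics.mean(sample)
    margin = z * sigma / math.sqrt(n)
    return xbar - margin, xbar + margin

# Hypothetical measurements with an assumed known sigma of 0.3
sample = [9.8, 10.2, 10.1, 9.7, 10.4, 10.0, 9.9, 10.3]
L, U = confidence_interval(sample, sigma=0.3)
print(f"95% CI: ({L:.3f}, {U:.3f})")
```

In practice the population standard deviation is rarely known, in which case the sample standard deviation and a t critical value are used instead of 1.96.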
So when it comes to confidence interval estimation, the confidence intervals themselves are random and take on varying results due to chance arising from random sampling.
What each confidence interval is trying to do is estimate the actual value of the population parameter. This population parameter value, although unknown, is fixed or constant. It's the target that each of these confidence intervals is trying to hit.
A confidence interval is accurate (1 - α) × 100% of the time. Each time a confidence interval is constructed, it has this (1 - α) × 100% probability of hitting its target, that is, of having the actual population parameter contained within the interval.
Thus, 95% of confidence intervals constructed with the customary confidence level accurately estimate the population parameter's value.
On the other hand, only 5% of confidence intervals constructed with the customary confidence level do not accurately estimate the value of the population parameter.
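This coverage property can be checked by simulation. The sketch below repeatedly draws samples from a population with a made-up true mean and known standard deviation, builds a 95% z-interval from each sample, and counts how often the interval actually contains the true mean:

```python
import math
import random
import statistics

random.seed(42)

MU, SIGMA, N = 50.0, 5.0, 30  # assumed true mean, known sd, sample size
Z = 1.96                      # z critical value for 95% confidence

trials = 10_000
hits = 0
for _ in range(trials):
    sample = [random.gauss(MU, SIGMA) for _ in range(N)]
    xbar = statistics.mean(sample)
    margin = Z * SIGMA / math.sqrt(N)
    # Did this interval capture the fixed (but normally unknown) target MU?
    if xbar - margin <= MU <= xbar + margin:
        hits += 1

coverage = hits / trials
print(f"Coverage over {trials} intervals: {coverage:.1%}")
```

The reported coverage comes out close to 95%, matching the chosen confidence level: the parameter is fixed, and it is the intervals that vary from sample to sample.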
Hypothesis Testing Procedure
It is used when the issue under investigation involves assessing the validity of an assumed value of a particular population parameter. In this case, one has an idea of the value of the population parameter ahead of time, and that idea is either correct or incorrect.
In hypothesis testing procedures, a random sample is collected from the population. If the resulting sample statistics are consistent with the assumed value of the population parameter, the assumed (hypothesised) value is considered valid. Alternatively, if the resulting sample statistics contradict the assumed value of the population parameter, the assumed value is considered invalid.
In hypothesis testing procedures, the value of the population parameter is assumed to take on a certain value, then a sample is collected, and the corresponding sample statistics are calculated.
If the sample statistic is consistent with the hypothesised value of the population parameter, it can be concluded that this value is valid. But if the resulting sample statistic differs from the hypothesised population parameter, this gives evidence that the hypothesised value may not be valid.
For example, there is roughly a 50% chance that the sample mean will come out larger than the hypothesised population mean due to chance alone. Just because the resulting sample statistic differs somewhat from the hypothesised parameter, it is not necessary to conclude that the hypothesised value is invalid. Hence, sample data are used to formally test the idea in a hypothesis testing procedure.
Now, in hypothesis testing, one typically wants the sample evidence to contradict the hypothesised value; only if the resulting sample statistic is sufficiently different from the hypothesised population parameter can one say there is enough evidence to contradict it. This leads to the conclusion that the hypothesised population value is invalid.
In hypothesis testing, the probability of obtaining a sample statistic at least as extreme as the one actually observed is calculated under the assumption that the population parameter equals the hypothesised value. This probability is referred to as the p-value.
This calculated p-value, the probability of obtaining a result at least as extreme as the sample result due to chance alone, is what drives the decision. When the resulting p-value is 0.05 or less, it would be considered unusual to obtain these sample results by chance alone.
Therefore, a more likely explanation of these sample results is that the assumed value of the population parameter is invalid. Thus, when the resulting sample statistic is considered an unusual outcome, the hypothesised value of the population parameter is rejected in favour of an alternative explanation that is much more consistent with the sample results.
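The decision rule above can be sketched as a simple two-sided z-test, assuming a normal population with a known standard deviation; the sample values, hypothesised mean, and sigma here are invented for illustration:

```python
import math
import statistics

def one_sample_z_test(sample, mu0, sigma):
    """Two-sided z-test of H0: mu = mu0, assuming known sigma.

    Returns the z statistic and its p-value, computed from the
    standard normal distribution via math.erfc:
    p = P(|Z| >= |z|) = erfc(|z| / sqrt(2)).
    """
    n = len(sample)
    xbar = statistics.mean(sample)
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# Hypothetical data, testing the assumed value mu0 = 10.5
sample = [10.9, 11.2, 10.8, 11.5, 11.1, 11.3, 10.7, 11.4, 11.0, 11.2]
z, p = one_sample_z_test(sample, mu0=10.5, sigma=0.5)
decision = "reject H0" if p <= 0.05 else "fail to reject H0"
print(f"z = {z:.2f}, p-value = {p:.4f} -> {decision}")
```

Here the sample mean sits well above the hypothesised value, the p-value falls below 0.05, and the hypothesised mean of 10.5 would be rejected. With sigma unknown, a one-sample t-test would be used instead.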
To summarise, these are the two basic types of inferential statistics methods: confidence interval estimation and hypothesis testing.