Why is 30 the minimum sample size?
A sample size of 30 is fairly common across statistics. A sample of 30 often narrows the confidence interval around your estimate enough to support assertions about your findings. The higher your sample size, the more likely the sample will be representative of your population.
The logic behind the rule of 30 is based on the Central Limit Theorem (CLT). The CLT states that the distribution of sample means approaches a normal distribution as the sample size increases.
It's not that "30 in a sample group should be enough" for a study. It's that you need at least 30 before you can reasonably expect an analysis based upon the normal distribution (e.g., a z-test) to be valid. That is, it represents a threshold above which the sample size is no longer considered "small".
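A quick simulation makes this CLT behavior concrete. This is an illustrative sketch (the exponential population is an arbitrary choice of a skewed distribution): as n grows, the sample means cluster around the population mean with spread close to sigma/sqrt(n).

```python
import numpy as np

# Simulation sketch of the CLT: sample means drawn from a skewed
# (exponential, mean = 1) population concentrate around the population
# mean, and their spread shrinks like sigma / sqrt(n).
rng = np.random.default_rng(42)

def sample_mean_spread(n, n_trials=5_000):
    """Std. dev. of the sample mean across many repeated samples."""
    means = rng.exponential(scale=1.0, size=(n_trials, n)).mean(axis=1)
    return means.std()

spread_small = sample_mean_spread(5)    # n = 5: wide and still skewed
spread_large = sample_mean_spread(30)   # n = 30: close to 1 / sqrt(30)
```

At n = 30 the measured spread lands near the theoretical sigma/sqrt(30), which is the sense in which 30 is "large enough" for normal-based analysis.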
If the research has a relational survey design, the sample size should not be less than 30. Causal-comparative and experimental studies require more than 50 samples. In survey research, 100 samples should be identified for each major sub-group in the population and between 20 to 50 samples for each minor sub-group.
Most statisticians agree that the minimum sample size to get any kind of meaningful result is 100. If your population is fewer than 100, then you really need to survey all of them.
Academia tells us that 30 seems to be an ideal sample size for the most comprehensive view of an issue, but studies with as few as 10 participants can yield fruitful and applicable results (recruiting excellence is even more important here!).
Sampling ratio (sample size to population size): Generally speaking, the smaller the population, the larger the sampling ratio needed. For populations under 1,000, a minimum ratio of 30 percent (300 individuals) is advisable to ensure representativeness of the sample.
Why is 30 the minimum sample size? The rule of thumb is based on the idea that 30 data points should provide enough information to make a statistically sound conclusion about a population. This is known as the Law of Large Numbers, which states that the results become more accurate as the sample size increases.
For your research report, you can effectively justify your study sample size by these four factors - statistical power, effect size and precision, type and complexity of analysis, and study population variability and homogeneity.
By convention, we consider a sample size of 30 to be “sufficiently large.” When n < 30, the central limit theorem doesn't apply. The sampling distribution will follow a similar distribution to the population. Therefore, the sampling distribution will only be normal if the population is normal.
Is 30 the minimum sample required for a research study?
Some researchers do, however, support a rule of thumb when using the sample size. For example, in regression analysis, many researchers say that there should be at least 10 observations per variable. If we are using three independent variables, then a clear rule would be to have a minimum sample size of 30.
Dworkin (2012) points out that most authors suggest sample sizes of 5 to 50. This leaves a lot of room for error and does not, in advance, propose a reasonable estimate. He also reminds us that in qualitative research of the “grounded theory” type, having 25 to 30 participants is a minimum to reach saturation.

Based on studies that have been done in academia on this very issue, 30 seems to be an ideal sample size for the most comprehensive view, but studies with as few as 10 total participants can still yield extremely fruitful, and applicable, results.
For example, when we are comparing the means of two populations, if the sample size is less than 30, then we use the t-test. If the sample size is greater than 30, then we use the z-test.
If the population is normal, then the result holds for samples of any size (i.e., the sampling distribution of the sample means will be approximately normal even for samples of size less than 30).
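That decision rule can be sketched as follows (the data here is simulated for illustration, and the z-test is computed by hand since SciPy does not ship a one-sample z-test):

```python
import numpy as np
from scipy import stats

# Sketch of the n < 30 rule of thumb for choosing between t and z.
rng = np.random.default_rng(0)
small = rng.normal(loc=5.2, scale=1.0, size=15)   # n < 30  -> t-test
large = rng.normal(loc=5.2, scale=1.0, size=100)  # n >= 30 -> z-test

# t-test: population sigma unknown and n is small
t_stat, t_p = stats.ttest_1samp(small, popmean=5.0)

# z-test: for large n, the sample std is a reasonable stand-in for sigma
z_stat = (large.mean() - 5.0) / (large.std(ddof=1) / np.sqrt(len(large)))
z_p = 2 * stats.norm.sf(abs(z_stat))
```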
Appropriate sample sizes are critical for reliable, reproducible, and valid results. Evidence generated from small sample sizes is especially prone to error, both false negatives (type II errors) due to inadequate power and false positives (type I errors) due to biased samples.
However, when the sample size is a minimum of 30, the distribution of sample means is, in general, approximately normal regardless of the population's shape: uniform, left-skewed, right-skewed, or U-shaped. You may need a sample size higher than 30 in the case of highly skewed distributions.
When a study's aim is to investigate a correlational relationship, however, we recommend sampling between 500 and 1,000 people. More participants in a study will always be better, but these numbers are a useful rule of thumb for researchers seeking to find out how many participants they need to sample.
The size of the sample is usually small when compared to the size of the population. When the target population is less than approximately 5,000, or the sample size is a significant proportion of the population size (such as 20% or more), the standard sampling and statistical analysis techniques need to be adjusted.
Whatever the aim, one can draw a precise and accurate conclusion only with an appropriate sample size. A smaller sample will give a result that may not be sufficiently powered to detect a difference between the groups, and the study may turn out to be falsely negative, leading to a type II error.
What is too small sample size in research?
A study of 20 subjects, for example, is likely to be too small for most investigations. For example, imagine that the proportion of smokers among a particular group of 20 individuals is 25%. The associated 95% CI is 9% to 49%, too wide to say anything precise about the true proportion.
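The 9–49% interval quoted above can be reproduced with SciPy's exact (Clopper–Pearson) interval. A sketch, assuming SciPy ≥ 1.7 (where `binomtest` was introduced):

```python
from scipy.stats import binomtest

# The smoker example: 5 smokers out of 20 individuals (25%).
# Exact (Clopper-Pearson) 95% interval; matches the quoted 9%-49%.
result = binomtest(k=5, n=20)
ci = result.proportion_ci(confidence_level=0.95, method="exact")
low, high = ci.low, ci.high  # about 0.087 and 0.491
```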
However, you will have better results should you understand some key concepts. The larger the sample size is the smaller the effect size that can be detected. The reverse is also true; small sample sizes can detect large effect sizes.
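The effect-size/sample-size trade-off can be made concrete with a back-of-the-envelope power calculation. This is a sketch using the normal approximation for a two-sided, two-sample comparison; the function name is illustrative, not from any particular library:

```python
import math
from scipy.stats import norm

# Per-group n needed to detect a standardized effect size d at
# significance level alpha with the given power (normal approximation).
def required_n(d, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

n_large_effect = required_n(0.8)  # large effect  -> small sample
n_small_effect = required_n(0.2)  # small effect  -> large sample
```

Detecting a small effect (d = 0.2) demands an order of magnitude more participants per group than a large one (d = 0.8), which is the trade-off described above.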
In this overview article six approaches are discussed to justify the sample size in a quantitative empirical study: 1) collecting data from (almost) the entire population, 2) choosing a sample size based on resource constraints, 3) performing an a-priori power analysis, 4) planning for a desired accuracy, 5) using ...
Case Study Sample Size
Typically, a case study has a sample of one (i.e., the bounded case, but note that sampling can also occur within the case), unless the research project is a multiple-case study.
While there are no hard and fast rules around how many people you should involve in your research, some researchers estimate between 10 and 50 participants as being sufficient depending on your type of research and research question (Creswell & Creswell, 2018).
These should, however, provide some guidance and a starting point for thinking about your own sample. At a minimum, you probably want to begin with a sample of 12-15 participants. Plan to add more as needed if you do not believe you have reached saturation in that amount.
Roscoe's (1975) set of guidelines for determining sample size has been a common choice in the last several decades. Roscoe suggested that a sample size greater than 30 and less than 500 is suitable for most behavioural studies, while a sample size larger than 500 may lead to a Type II error (Sekaran & Bougie, 2016).
The Student's t-test is widely used when the sample size is reasonably small (less than approximately 30). In these cases the sample distribution of the mean is known to follow a t-distribution.
The "rule of 30" is a rule of thumb about how large a sample has to be so the distribution of the sample estimates of the mean tends to a normal distribution, not about how close to the true parameter, μ, are the estimates.
The confidence interval for the population mean µ, when the population standard deviation σ is unknown and the sample size is small (n ≤ 30), is based on the t-distribution.
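A minimal sketch of that t-based interval (the data values are invented for illustration):

```python
import numpy as np
from scipy import stats

# t-based 95% CI for the mean: sigma unknown, n <= 30.
data = np.array([4.8, 5.1, 5.3, 4.9, 5.0, 5.2, 4.7, 5.4, 5.1, 5.0])
n = len(data)
mean = data.mean()
sem = data.std(ddof=1) / np.sqrt(n)  # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, df=n - 1, loc=mean, scale=sem)
```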
When the sample size is at least 30 the sample mean is normally distributed?
The Central Limit Theorem says that no matter what the distribution of the population is, as long as the sample is “large,” meaning of size 30 or more, the sample mean is approximately normally distributed.
Z-tests are closely related to t-tests, but t-tests are best performed when the data consists of a small sample size, i.e., less than 30. Also, t-tests assume the standard deviation is unknown, while z-tests assume it is known.
If the sample size is at least 30, the studentized sampling distribution approximates the standard normal distribution, so, per the central limit theorem, assumptions about the shape of the population distribution carry far less weight.
A small sample is generally regarded as one of size n < 30. A t-test is needed for small samples because the sampling distribution of the standardized sample mean is not normal; it follows a t-distribution. If the sample is large (n ≥ 30), then statistical theory says that the sample mean is approximately normally distributed and a z-test for a single mean can be used.
Hypothesis testing on samples with fewer than 30 participants can be carried out by using the t-value for the corresponding level of significance in place of the z value when setting up the decision rule.
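That substitution amounts to looking up a t critical value instead of a z value when setting up the decision rule, e.g. for a two-sided test at alpha = 0.05:

```python
from scipy import stats

# Decision-rule sketch: for n < 30, substitute the t critical value
# for the z value at the same significance level (two-sided test).
alpha = 0.05
n = 15
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)  # wider cutoff for small n
z_crit = stats.norm.ppf(1 - alpha / 2)
```

The t cutoff is always wider than the z cutoff, reflecting the extra uncertainty of estimating σ from a small sample; as n grows, t_crit shrinks toward z_crit.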