Parametric and Nonparametric Analysis
By Nathan B. Smith
Numerous instances occur when data acquired from an organization do not meet the assumptions of parametric analysis. In such cases, a practitioner cannot run a t-test or an F-test (ANOVA) on the data. Professional practitioners and academics should therefore be conversant with the common nonparametric alternatives, including the chi-square test, the Mann-Whitney U test, the Wilcoxon signed-rank test, and the Kruskal-Wallis one-way analysis of variance. Each of these tests is appropriate in particular circumstances, and it is essential to know when to use each, what each test measures, and why it would be preferred over t-tests and ANOVA (Huck, 2012).
Discussion
The t-test is used to assess whether two populations are statistically distinct, while ANOVA (the F-test) is used to determine whether three or more populations are statistically distinct. Both tests examine differences in means relative to the spread of the distributions (the variance) across groups, although their methods for determining statistical significance differ. These tests are used when the samples are independent and exhibit (roughly) normal distributions; alternatively, they can be applied when the sample size is large (more than 35 per sample group). While more observations are preferable, the tests may be conducted with as few as three per condition. Both the t-test and ANOVA generate a test statistic ("t" or "F") that is converted to a p-value: the probability of observing a difference at least as large as the one in the data if the null hypothesis were true, that is, if both (or all) populations were identical. A lower p-value therefore suggests that the observed difference across groups is less likely to be due to chance. P-values below 0.05 are generally taken to indicate significant differences in distributions between sample groups (Levine, Ramsey, & Smidt, 2001).
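Both tests are available in common statistical software. As a minimal sketch in Python, assuming the scipy.stats module and invented measurements purely for illustration (neither appears in the sources cited above):

    # Minimal sketch: independent-samples t-test and one-way ANOVA
    # on hypothetical data (scipy.stats and these numbers are
    # assumptions for illustration, not from the cited sources).
    from scipy import stats

    group_a = [21.1, 19.8, 23.4, 20.7, 22.0, 18.9]
    group_b = [24.3, 25.1, 22.8, 26.0, 23.7, 24.9]
    group_c = [27.2, 26.4, 28.1, 25.9, 27.8, 26.7]

    # Two groups: independent-samples t-test.
    t_stat, p_t = stats.ttest_ind(group_a, group_b)
    print(f"t = {t_stat:.3f}, p = {p_t:.4f}")

    # Three or more groups: one-way ANOVA (F-test).
    f_stat, p_f = stats.f_oneway(group_a, group_b, group_c)
    print(f"F = {f_stat:.3f}, p = {p_f:.4f}")

    # Reject the null hypothesis of equal means when p < 0.05.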
Chi-square test
A chi-square test is a method for assessing hypotheses about categorical data. Two types of chi-square tests are often used: the chi-square goodness-of-fit test and the chi-square test of independence. Both tests use variables to categorize the sampled data. The chi-square goodness-of-fit test determines whether one variable follows a hypothesized distribution, while the chi-square test of independence determines whether two variables may be related (Green & Salkind, 2017).
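A brief sketch of both tests, again assuming scipy.stats and made-up counts:

    # Minimal sketch of both chi-square tests (invented counts).
    from scipy import stats

    # Goodness of fit: do 120 die rolls match a fair-die distribution?
    observed = [25, 17, 15, 23, 24, 16]   # counts per face
    expected = [20, 20, 20, 20, 20, 20]   # fair die, 120 rolls
    chi2, p_fit = stats.chisquare(f_obs=observed, f_exp=expected)
    print(f"goodness of fit: chi2 = {chi2:.2f}, p = {p_fit:.4f}")

    # Independence: is preference (rows) related to region (columns)?
    table = [[30, 10],
             [20, 40]]
    chi2, p_ind, dof, _ = stats.chi2_contingency(table)
    print(f"independence: chi2 = {chi2:.2f}, df = {dof}, p = {p_ind:.4f}")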
Mann-Whitney U test
The Mann-Whitney U test assesses differences between two independent groups when the dependent variable is ordinal or continuous but does not exhibit a normal distribution. Unlike the independent-samples t-test, the Mann-Whitney U test does not require normality; the conclusions a researcher can draw depend instead on what is assumed about the shapes of the two groups' distributions (Green & Salkind, 2017).
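A sketch under the same assumptions (scipy.stats, with invented ordinal ratings from two independent groups):

    # Minimal sketch: Mann-Whitney U test on two independent groups
    # of ordinal ratings (hypothetical data).
    from scipy import stats

    ratings_a = [3, 4, 2, 5, 4, 3, 2, 4]
    ratings_b = [5, 4, 5, 3, 5, 4, 5, 4]

    u_stat, p_val = stats.mannwhitneyu(ratings_a, ratings_b,
                                       alternative="two-sided")
    print(f"U = {u_stat}, p = {p_val:.4f}")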
Wilcoxon signed-rank test
The Wilcoxon signed-rank test is the nonparametric counterpart of the dependent (paired-samples) t-test. Because the Wilcoxon signed-rank test makes no assumption about the normality of the data, it may be employed when that assumption is violated and the dependent t-test is inappropriate. It compares two sets of scores obtained from the same people, as when we want to analyze a change in scores between two time points or when the same people are exposed to two conditions (Huck, 2012).
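A sketch with hypothetical before-and-after scores for the same eight subjects (again assuming scipy.stats):

    # Minimal sketch: Wilcoxon signed-rank test on paired scores
    # from the same subjects at two time points (invented data).
    from scipy import stats

    before = [72, 65, 80, 58, 77, 69, 74, 61]
    after  = [78, 70, 79, 66, 82, 75, 73, 68]

    w_stat, p_val = stats.wilcoxon(before, after)
    print(f"W = {w_stat}, p = {p_val:.4f}")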
Kruskal-Wallis one-way analysis of variance
The Kruskal-Wallis H test (also known as one-way ANOVA on ranks) is a nonparametric rank-based test that evaluates whether there are statistically significant differences between two or more groups of an independent variable on a continuous or ordinal dependent variable (Johnson & Bhattacharyya, 2014).
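A final sketch, with three invented groups of skewed (non-normal) scores:

    # Minimal sketch: Kruskal-Wallis H test across three independent
    # groups with skewed scores (hypothetical data).
    from scipy import stats

    low    = [12, 15, 14, 11, 39, 13]
    medium = [18, 22, 17, 21, 45, 19]
    high   = [25, 28, 24, 27, 61, 26]

    h_stat, p_val = stats.kruskal(low, medium, high)
    print(f"H = {h_stat:.3f}, p = {p_val:.4f}")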
Conclusion
The most commonly used methods for inference about the means of quantitative response variables assume that the sampling distributions of the sample means are approximately normal. This condition is met when researchers collect experimental data from normally distributed populations (Johnson & Bhattacharyya, 2014).
In practice, no distribution is perfectly normal. Nevertheless, the standard methodology for inference about population means (one- and two-sample t procedures and analysis of variance) remains effective: the results are relatively insensitive to modest departures from normality, especially when the samples are sufficiently large.
References
Green, S. B., & Salkind, N. J. (2017). Using SPSS for Windows and Macintosh: Analyzing and understanding the data (8th ed.). Upper Saddle River, NJ: Pearson.
Huck, S. W. (2012). Reading statistics and research. Boston, MA: Pearson Education, Allyn & Bacon.
Johnson, R. A., & Bhattacharyya, G. K. (2014). Statistics: Principles and methods. Boston, MA: Wiley.
Levine, D. M., Ramsey, P. P., & Smidt, R. K. (2001). Applied statistics for engineers and scientists. Upper Saddle River, NJ: Prentice-Hall.