Hi all, this blog is an English archive of my PhD experience at Imperial College London, mainly logging my research and working process, as well as some visual records.

Monday, 3 September 2007

Choosing the Proper Statistical Test


Let's finish our discussion of inferential statistics with a summary of the statistics we have covered and the conditions under which we would use each of them. Generally, if we know the number of groups or samples in our research design and the level of measurement of the dependent variable, we will know which inferential statistic to use.

First, let us look at statistical hypotheses in research designs where the dependent variable is at the interval or ratio level. These statistics are known as parametric statistics, and we have used the following (a short code sketch illustrating these tests follows the list):

  • If we are testing a statistical hypothesis involving a single score (we are comparing the score with the population mean), we use the z-score test (see lesson 9).
  • If we are testing a statistical hypothesis involving a single group (we are comparing the mean of the group with the population mean) and the standard deviation of the population is known, we use the z test (see lesson 10).
  • If we are testing a statistical hypothesis involving a single group (we are comparing the mean of the group with the population mean) and the standard deviation of the population is not known, we use the single-sample t-test (see lesson 10).
  • If we are testing a statistical hypothesis involving two groups of subjects (we are comparing the means of the two groups) and the two groups are independent of one another, we use the independent t-test (see lesson 11).
  • If we are testing a statistical hypothesis involving two groups of subjects (we are comparing the means of the two groups) and the two groups are dependent on one another (pretest/posttest or matched samples), we use the dependent t-test (see lesson 12).
  • If we are testing a statistical hypothesis involving three or more groups of subjects (we are comparing the means of three or more groups) and there is a single dependent variable in the study, we use one-way analysis of variance (see lesson 13).
  • If we are testing a statistical hypothesis involving the relationship between two variables for one sample (we are measuring the relationship between the two variables) and the data is at the interval or ratio level of measurement, we use the Pearson product-moment correlation coefficient.
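
To make these choices concrete, here is a minimal sketch of the parametric tests above in Python with scipy.stats. The population mean, population standard deviation, and group scores are made-up values for illustration only, not data from the course.

    import numpy as np
    from scipy import stats

    # Hypothetical population parameters and sample scores (made up for illustration)
    population_mean, population_sd = 100.0, 15.0
    group_a = np.array([104, 98, 110, 107, 95, 102, 99, 108])
    group_b = np.array([96, 101, 93, 99, 97, 100, 94, 98])
    group_c = np.array([105, 109, 111, 103, 107, 110, 106, 108])
    pretest = np.array([12, 15, 11, 14, 13, 16, 12, 15])
    posttest = np.array([14, 17, 13, 15, 15, 18, 13, 17])

    # z-score test: compare a single score with the population mean
    z_score = (120 - population_mean) / population_sd

    # z test: compare a group mean with the population mean, population SD known
    z_stat = (group_a.mean() - population_mean) / (population_sd / np.sqrt(len(group_a)))
    p_z = 2 * stats.norm.sf(abs(z_stat))

    # single-sample t-test: population SD not known
    t_single, p_single = stats.ttest_1samp(group_a, population_mean)

    # independent t-test: two independent groups
    t_ind, p_ind = stats.ttest_ind(group_a, group_b)

    # dependent (paired) t-test: pretest/posttest or matched samples
    t_dep, p_dep = stats.ttest_rel(pretest, posttest)

    # one-way analysis of variance: three or more groups
    f_stat, p_f = stats.f_oneway(group_a, group_b, group_c)

    # Pearson product-moment correlation: two interval/ratio variables, one sample
    r, p_r = stats.pearsonr(pretest, posttest)

    print(z_score, p_z, p_single, p_ind, p_dep, p_f, r)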

We also looked at two other statistics we could use with data that is not at the interval or ratio level of measurement. These statistics are called non-parametric statistics, and a short code sketch for them follows the list below.

  • If we are testing a statistical hypothesis for one, two, or more groups with one or two variables where the data is categorical (frequencies), the data is at the nominal level of measurement. For this type of study we use chi-square (see lesson 14). We have discussed three variants of the chi-square statistic:
    1. one-variable chi-square with equal expected frequencies
    2. one-variable chi-square with unequal (predetermined) expected frequencies
    3. two-variable chi-square
  • If we are testing a statistical hypothesis involving the relationship between two variables for one sample (we are measuring the relationship between the two variables) and the data is at the ordinal level of measurement (ranks), we use the Spearman rank-difference correlation coefficient (see lesson 15).
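
As with the parametric tests, here is a minimal sketch of the three chi-square variants and the Spearman rank correlation in Python with scipy.stats; the frequency counts and ranks are made-up values for illustration only.

    import numpy as np
    from scipy import stats

    # 1. one-variable chi-square with equal expected frequencies (hypothetical counts)
    observed = np.array([30, 45, 25])
    chi_1, p_1 = stats.chisquare(observed)          # expected frequencies default to equal

    # 2. one-variable chi-square with unequal (predetermined) expected frequencies
    expected = np.array([0.5, 0.3, 0.2]) * observed.sum()
    chi_2, p_2 = stats.chisquare(observed, f_exp=expected)

    # 3. two-variable chi-square on a contingency table of frequencies
    table = np.array([[20, 15],
                      [10, 30]])
    chi_3, p_3, dof, expected_table = stats.chi2_contingency(table)

    # Spearman rank-difference correlation: two ordinal (ranked) variables, one sample
    ranks_x = [1, 2, 3, 4, 5, 6, 7, 8]
    ranks_y = [2, 1, 4, 3, 6, 5, 8, 7]
    rho, p_rho = stats.spearmanr(ranks_x, ranks_y)

    print(p_1, p_2, p_3, rho, p_rho)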

The information we have discussed above can be put into the following table. The table also includes other statistics that we did not cover in this course. If you think your research design may need one of those statistics, please send an e-mail to the instructor and I will give you a reference to the calculation and interpretation of that statistic. I wish you the best as you complete the final examination for this course and as you apply the information from this course to your own research design.

Selecting a Statistical Test

Level of Measurement | One-Sample Statistical Tests | Two-Sample Statistical Tests (Independent Samples) | Two-Sample Statistical Tests (Non-independent Samples) | Multiple-Sample Statistical Tests | Measures of Association (one sample, more than one variable)
Nominal or Categorical (frequencies) | Chi-Square | Chi-Square | McNemar Change Test | Chi-Square | Phi Coefficient
Ordinal (Ranks) | Kolmogorov-Smirnov One-Sample Test | Mann-Whitney U-Test | Wilcoxon Matched-Pairs Signed-Rank Test | Kruskal-Wallis One-Way Analysis of Variance | Spearman rho (rS)
Interval or Ratio | Z test, One-Sample t-Test | Independent t-Test | Dependent t-Test | Simple Analysis of Variance, Factorial Analysis of Variance, Scheffé Tests, Analysis of Covariance | Pearson r, Multiple Regression