The FREQ Procedure
When selecting statistics to analyze your data, consider the study design (which indicates whether the row and column variables are dependent or independent), the measurement scale of the variables (nominal, ordinal, or interval), the type of association that the statistics detect, and the assumptions for valid interpretation of the statistics. For example, the Mantel-Haenszel chi-square statistic requires an ordinal scale for both variables and detects a linear association. On the other hand, the Pearson chi-square is appropriate for all variables and can detect any kind of association, but is less powerful for detecting a linear association. Select tests and measures carefully, choosing those that are appropriate for your data. For more information on when to use a statistic and how to interpret the results, refer to Agresti (1996) and Stokes et al. (1995).
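For example, the following statements request chi-square tests and measures of association for a two-way table. The data set and variable names here are hypothetical; this is only a sketch of the syntax.

   proc freq data=survey;                       /* hypothetical data set                     */
      tables gender*opinion / chisq measures;   /* chi-square tests and measures of          */
   run;                                         /* association for the two-way table         */

The CHISQ option produces the chi-square tests described in this section, and the MEASURES option produces the measures of association.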
Definitions and Notation
For a two-way table, $n_{ij}$ denotes the frequency of the table cell in the $i$th row and $j$th column. The following notation is used throughout this section:
$n_{i \cdot} = \sum_j n_{ij}$  (row totals)
$n_{\cdot j} = \sum_i n_{ij}$  (column totals)
$n = \sum_i \sum_j n_{ij}$  (overall total)
$p_{ij} = n_{ij} / n$  (cell percentages)
$p_{i \cdot} = n_{i \cdot} / n$  (row percentages)
$p_{\cdot j} = n_{\cdot j} / n$  (column percentages)
$R_i$ = score for row $i$
$C_j$ = score for column $j$
$\bar{R} = \sum_i n_{i \cdot} R_i / n$  (average row score)
$\bar{C} = \sum_j n_{\cdot j} C_j / n$  (average column score)
$A_{ij} = \sum_{k > i} \sum_{l > j} n_{kl} + \sum_{k < i} \sum_{l < j} n_{kl}$
$D_{ij} = \sum_{k > i} \sum_{l < j} n_{kl} + \sum_{k < i} \sum_{l > j} n_{kl}$
$P = \sum_i \sum_j n_{ij} A_{ij}$  (twice the number of concordances)
$Q = \sum_i \sum_j n_{ij} D_{ij}$  (twice the number of discordances)
For numeric variables, TABLE scores are the values of the row and column levels. If the row or column variables are formatted, then the TABLE score is the internal numeric value corresponding to that level. If two or more numeric values are classified into the same formatted level, then the internal numeric value for that level is the smallest of these values. For character variables, TABLE scores are defined as the row numbers and column numbers (that is, 1 for the first row, 2 for the second row, and so on).
RANK scores, which you can use to obtain nonparametric analyses, are defined by
$R_i^1 = \sum_{k < i} n_{k \cdot} + (n_{i \cdot} + 1)/2$  (row scores)
$C_j^1 = \sum_{l < j} n_{\cdot l} + (n_{\cdot j} + 1)/2$  (column scores)
Note that RANK scores yield midranks for tied values.
RIDIT scores (Bross 1958; Mack and Skillings 1980) also yield nonparametric analyses, but they are standardized by the sample size. RIDIT scores are derived from RANK scores as
$R_i^2 = R_i^1 / n$   and   $C_j^2 = C_j^1 / n$
Modified ridit (MODRIDIT) scores (van Elteren 1960; Lehmann 1975), which also yield nonparametric analyses, represent the expected values of the order statistics for the uniform distribution on (0,1). Modified ridit scores are derived from RANK scores as
$R_i^3 = R_i^1 / (n + 1)$   and   $C_j^3 = C_j^1 / (n + 1)$
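The following statements show how you might request one of these score types. The data set and variable names are hypothetical; this is only a sketch.

   proc freq data=pain;                              /* hypothetical data set                */
      weight count;                                  /* COUNT holds the cell frequencies     */
      tables treatment*response / chisq measures scores=rank;  /* rank scores for            */
   run;                                              /* nonparametric analyses               */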
Chi-Square Tests and Measures
For one-way frequency tables, PROC FREQ performs a chi-square goodness-of-fit test when you specify the CHISQ option. See Chi-Square Test for One-Way Tables for information. The other chi-square tests and statistics described in this section are defined only for two-way tables, and so are not computed for one-way frequency tables.
All the two-way test statistics described in this section test the null hypothesis of no association between the row variable and the column variable. When the sample size is large, these test statistics are distributed approximately as chi-square when the null hypothesis is true. When the sample size is not large, exact tests may be useful. PROC FREQ computes exact tests for the following chi-square statistics when you specify the corresponding option in the EXACT statement: Pearson chi-square, likelihood-ratio chi-square, and Mantel-Haenszel chi-square. See Exact Statistics for more information.
Note that the Mantel-Haenszel chi-square statistic is appropriate
only
when both variables lie on an ordinal scale. The other chi-square tests and
statistics in this section are appropriate for either nominal or ordinal variables.
The following sections give the formulas that PROC FREQ uses to compute the
chi-square tests and statistics. For further information on the formulas and
on the applicability of each statistic, refer to Agresti (1996), Stokes et
al. (1995), and the other references cited for each statistic.
The one-way chi-square goodness-of-fit statistic is computed as
$Q_P = \sum_{i=1}^{C} \frac{(f_i - E_i)^2}{E_i}$
where $f_i$ is the observed frequency for class $i$, $C$ is the number of classes, and $E_i$ is the expected frequency for class $i$ under the null hypothesis.
In the test for equal proportions, which is the default for the CHISQ option, the null hypothesis specifies equal proportions of the total sample size for each class. Under this null hypothesis, the expected frequency for each class equals the total sample size divided by the number of classes, $E_i = n / C$.
In the test for specified frequencies, which PROC FREQ computes when you input null hypothesis frequencies using the TESTF= option, the expected frequencies are the TESTF= values. In the test for specified proportions, which PROC FREQ computes when you input null hypothesis proportions using the TESTP= option, the expected frequencies are determined from the TESTP= proportions $p_i$ as $E_i = p_i \times n$.
Under the null hypothesis (of equal proportions, specified frequencies, or specified proportions), this test statistic has an asymptotic chi-square distribution with $C - 1$ degrees of freedom. In addition to the asymptotic test, PROC FREQ computes the exact one-way chi-square test when you specify the CHISQ option in the EXACT statement.
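The following statements sketch how you might request both the asymptotic and exact one-way tests against specified null proportions. The data set, variable, and proportions are hypothetical.

   proc freq data=colors;                            /* hypothetical one-way data            */
      weight count;                                  /* COUNT holds the class frequencies    */
      tables shade / chisq testp=(0.25 0.50 0.25);   /* one TESTP= value per class           */
      exact chisq;                                   /* exact goodness-of-fit test           */
   run;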
For two-way tables, the Pearson chi-square statistic involves the differences between the observed and expected frequencies and is computed as
$Q_P = \sum_i \sum_j \frac{(n_{ij} - e_{ij})^2}{e_{ij}}$
where $e_{ij}$ is the expected frequency for the table cell in row $i$ and column $j$ under the null hypothesis of independence, $e_{ij} = n_{i \cdot} \, n_{\cdot j} / n$.
When the row and column variables are independent, $Q_P$ has an asymptotic chi-square distribution with $(R-1)(C-1)$ degrees of freedom. For large values of $Q_P$, this test rejects the null hypothesis in favor of the alternative hypothesis of general association. In addition to the asymptotic test, PROC FREQ computes the exact chi-square test when you specify the PCHI option or the CHISQ option in the EXACT statement.
For a 2×2 table, the Pearson chi-square is also appropriate for testing the equality of two binomial proportions or, for $R \times 2$ and $2 \times C$ tables, the homogeneity of proportions. Refer to Fienberg (1980).
When the row and column variables are independent, the likelihood-ratio chi-square statistic has an asymptotic chi-square distribution with $(R-1)(C-1)$ degrees of freedom. In addition to the asymptotic test, PROC FREQ computes the exact test when you specify the LRCHI option or the CHISQ option in the EXACT statement.
Under the null hypothesis of independence, this statistic has an asymptotic chi-square distribution with $(R-1)(C-1)$ degrees of freedom.
The Mantel-Haenszel chi-square statistic is computed as $Q_{MH} = (n - 1) r^2$, where $r$ is the Pearson correlation between the row variable and the column variable. For a description of the Pearson correlation, see Pearson Correlation Coefficient. The Pearson correlation, and thus the Mantel-Haenszel chi-square statistic, use the scores you specify in the SCORES= option in the TABLES statement.
Under the null hypothesis of no association, $Q_{MH}$ has an asymptotic chi-square distribution with 1 degree of freedom. In addition to the asymptotic test, PROC FREQ computes the exact test when you specify the MHCHI option or the CHISQ option in the EXACT statement.
Refer to Mantel and Haenszel (1959) and Landis et al. (1978).
For a two-sided alternative hypothesis, A is the set of tables with probability less than or equal to the probability of the observed table. A small two-sided p-value supports the alternative hypothesis of association between the row and column variables.
One-sided tests are defined in terms of the frequency of the cell in the first row and first column (the (1,1) cell). For a left-sided alternative hypothesis, A is the set of tables where the frequency in the (1,1) cell is less than or equal to that of the observed table. A small left-sided p-value supports the alternative hypothesis that the probability of an observation being in the first cell is less than that expected under the null hypothesis of independent row and column variables.
Similarly, for a right-sided alternative hypothesis, A is the set of tables where the frequency in the (1,1) cell is greater than or equal to that of the observed table. A small right-sided p-value supports the alternative that the probability of the first cell is greater than that expected under the null hypothesis.
Because the (1,1) cell frequency completely determines the 2×2 table when the marginal row and column sums are fixed, these one-sided alternatives can be stated equivalently in terms of other cell probabilities or ratios of cell probabilities. The left-sided alternative is equivalent to an odds ratio less than 1, and the right-sided alternative is equivalent to an odds ratio greater than 1, where the odds ratio equals $n_{11} n_{22} / (n_{12} n_{21})$. Additionally, the left-sided alternative is equivalent to the column 1 risk for row 1 being less than the column 1 risk for row 2 ($p_{1|1} < p_{1|2}$). Similarly, the right-sided alternative is equivalent to the column 1 risk for row 1 being greater than the column 1 risk for row 2 ($p_{1|1} > p_{1|2}$). Refer to Agresti (1996).
Fisher's exact test was extended to general $R \times C$ tables by Freeman and Halton (1951), and this test is also known as the Freeman-Halton test. For $R \times C$ tables, the two-sided p-value is defined the same as it is for 2×2 tables: A is the set of all tables with probability less than or equal to the probability of the observed table, and a small p-value supports the alternative hypothesis of association between the row and column variables. For $R \times C$ tables, Fisher's exact test is inherently two-sided. The alternative hypothesis is defined only in terms of general, and not linear, association. Therefore, PROC FREQ does not compute right-sided or left-sided p-values for general $R \times C$ tables.
For $R \times C$ tables, PROC FREQ computes Fisher's exact test using the network algorithm of Mehta and Patel (1983), which provides a faster and more efficient solution than direct enumeration. See Exact Statistics for more information.
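The following statements sketch a request for Fisher's exact test on a general table, assuming the FISHER keyword in the EXACT statement requests the test. The data set and variables are hypothetical, and the MAXTIME= value is arbitrary.

   proc freq data=trial;                   /* hypothetical R x C data                   */
      weight count;
      tables center*outcome;               /* general two-way table                     */
      exact fisher / maxtime=600;          /* Freeman-Halton test; stop after 600 sec.  */
   run;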
Refer to Fleiss (1981, pp 59-60).
Refer to Kendall and Stuart (1979, pp 587-588).
Refer to Kendall and Stuart (1979, p. 588).
Measures of Association
The Pearson correlation coefficient and the Spearman rank correlation coefficient are also appropriate for ordinal variables. The Pearson correlation describes the strength of the linear association between the row and column variables, and is computed using the row and column scores specified by the SCORES= option in the TABLES statement. The Spearman correlation is computed with rank scores. The polychoric correlation (requested by the PLCORR option) also requires ordinal variables, and assumes that the variables have an underlying bivariate normal distribution. The following measures of association do not require ordinal variables, but are appropriate for nominal variables: lambda asymmetric and symmetric, and the uncertainty coefficients.
PROC FREQ computes estimates of the measures according to the formulas
given in the discussion of each measure of association. For each measure,
PROC FREQ computes an asymptotic standard error, which is the square root
of the asymptotic variance denoted by var in the following sections.
PROC FREQ computes asymptotic confidence limits for the measures of association as
$\text{est} \pm z_{\alpha/2} \cdot \text{ASE}$
where est is the estimate of the measure, $z_{\alpha/2}$ is the $100(1-\alpha/2)$ percentile of the standard normal distribution, and ASE is the asymptotic standard error of the estimate.
For each test that you request in the TEST statement, PROC FREQ computes a standardized test statistic $z = \text{est} / \sqrt{\mathrm{var}_0(\text{est})}$, where est is the estimate of the measure and $\mathrm{var}_0(\text{est})$ is the variance of the estimate under the null hypothesis. Formulas for $\mathrm{var}_0(\text{est})$ are given in the discussion of each measure of association.
Note that the ratio of est to $\sqrt{\mathrm{var}_0(\text{est})}$ is the same for the following measures: gamma, Kendall's tau-b, Stuart's tau-c, Somers' $D(C|R)$, and Somers' $D(R|C)$. Therefore, the tests for these measures are identical. For example, the p-values for the test of $H_0$: gamma = 0 equal the p-values for the test of $H_0$: tau-b = 0.
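For example, the following statements sketch a request for the measures together with asymptotic tests for several of them; the data set and variable names are hypothetical.

   proc freq data=ratings;                 /* hypothetical data set                               */
      weight count;
      tables row*col / measures;           /* estimates and asymptotic standard errors            */
      test gamma kentb scorr;              /* tests that gamma, Kendall's tau-b, and the Spearman */
   run;                                    /* correlation equal zero                              */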
PROC FREQ computes one-sided and two-sided p-values for each of these tests. When the test statistic z is greater than its null hypothesis expected value of zero, PROC FREQ computes the right-sided p-value, which is the probability of a larger value of the statistic occurring under the null hypothesis. A small right-sided p-value supports the alternative hypothesis that the true value of the measure is greater than zero. When the test statistic is less than or equal to zero, PROC FREQ computes the left-sided p-value, which is the probability of a smaller value of the statistic occurring under the null hypothesis. A small left-sided p-value supports the alternative hypothesis that the true value of the measure is less than zero. The one-sided p-value can be expressed as
$P_1 = \mathrm{Prob}(Z > z)$ if $z > 0$, and $P_1 = \mathrm{Prob}(Z < z)$ if $z \le 0$,
where $Z$ has a standard normal distribution. The two-sided p-value is computed as
$P_2 = \mathrm{Prob}(|Z| > |z|)$
The gamma statistic is estimated by $G = (P - Q) / (P + Q)$ with
The variance of the estimator under the null hypothesis that gamma equals zero is computed as
For 2×2 tables, gamma is equivalent to Yule's Q.
Refer to Goodman and Kruskal (1963; 1972), Brown and Benedetti (1977), and
Agresti (1990).
with
where
The variance of the estimator under the null hypothesis that tau-b equals zero is computed as
Refer to Kendall (1955) and Brown and Benedetti (1977).
with
where
The variance of the estimator under the null hypothesis that tau-c equals zero is the same as in the above equation.
Refer to Brown and Benedetti (1977).
with
where
The variance of the estimator under the null hypothesis that Somers' $D(C|R)$ equals zero is computed as
Refer to Somers (1962) and Goodman and Kruskal (1972).
with
The row scores $R_i$ and the column scores $C_j$ are determined by the SCORES= option in the TABLES statement, and $\bar{R}$ and $\bar{C}$ are the average row and column scores, as defined in Definitions and Notation. Refer to Snedecor and Cochran (1989) and Brown and Benedetti (1977).
To compute an asymptotic test for the Pearson correlation, PROC FREQ uses the standardized test statistic $r / \sqrt{\mathrm{var}_0(r)}$, which has an asymptotic standard normal distribution under the null hypothesis, where $r$ is the Pearson correlation and $\mathrm{var}_0(r)$ is its variance under the null hypothesis.
This asymptotic variance is derived for multinomial sampling in a contingency table framework, and it differs from the form obtained under the assumption that both variables are continuous and normally distributed. Refer to Brown and Benedetti (1977).
PROC FREQ also computes the exact test for the hypothesis that the Pearson
correlation equals zero when you specify the PCORR option in the EXACT statement.
See Exact Statistics for more information on exact tests.
with
where
Refer to Snedecor and Cochran (1989) and Brown and Benedetti (1977).
To compute an asymptotic test for the Spearman correlation, PROC FREQ uses the standardized test statistic $r_s / \sqrt{\mathrm{var}_0(r_s)}$, which has an asymptotic standard normal distribution under the null hypothesis, where $r_s$ is the Spearman rank correlation and $\mathrm{var}_0(r_s)$ is its variance under the null hypothesis.
where
This asymptotic variance is derived for multinomial sampling in a contingency table framework, and it differs from the form obtained under the assumption that both variables are continuous. Refer to Brown and Benedetti (1977).
PROC FREQ also computes the exact test for the hypothesis that the Spearman
rank correlation equals zero when you specify the SCORR option in the EXACT
statement. See Exact Statistics for more information.
To estimate the polychoric correlation, PROC FREQ iteratively solves
the likelihood equations by a Newton-Raphson algorithm. Iteration stops when
the convergence measure falls below the convergence criterion, or when the
maximum number of iterations is reached, whichever occurs first. The CONVERGE=
option sets the convergence criterion, and the default is 0.0001. The MAXITER=
option sets the maximum number of iterations, and the default is 20.
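The following statements sketch a request for the polychoric correlation with modified convergence settings; the data set and variable names are hypothetical.

   proc freq data=grades;                                      /* hypothetical data set        */
      weight count;
      tables rater1*rater2 / plcorr converge=1e-6 maxiter=50;  /* tighter criterion and more   */
   run;                                                        /* Newton-Raphson iterations    */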
with
where
Also, let be the unique value of such that , and let be the unique value of such that .
Because of the uniqueness assumptions, ties in the frequencies or in the marginal totals must be broken in an arbitrary but consistent manner. In case of ties, is defined here as the smallest value of such that . For a given , if there is at least one value such that then is defined here to be the smallest such value of . Otherwise, if , then is defined to be equal to . If neither condition is true, then is taken to be the smallest value of such that . The formulas for lambda asymmetric can be obtained by interchanging the indices.
Refer to Goodman and Kruskal
(1963).
with
where
Refer to Goodman and Kruskal (1963).
with
where
Refer to Theil (1972, pp 115-120) and Goodman and Kruskal
(1972).
with
Refer to Goodman and Kruskal (1972).
Binomial Proportion
The binomial proportion is computed as $\hat{p} = n_1 / n$, where $n_1$ is the frequency for the first level and $n$ is the total frequency for the one-way table. The standard error for the binomial proportion is computed as $\mathrm{se}(\hat{p}) = \sqrt{\hat{p}(1 - \hat{p}) / n}$.
Using the normal approximation to the binomial distribution, PROC FREQ constructs asymptotic confidence limits for the proportion according to $\hat{p} \pm z_{\alpha/2} \cdot \mathrm{se}(\hat{p})$, where $z_{\alpha/2}$ is the $100(1-\alpha/2)$ percentile of the standard normal distribution. The confidence level is determined by the ALPHA= option, which by default equals 0.05 and produces 95 percent confidence limits. Additionally, PROC FREQ computes exact confidence limits for the binomial proportion using the F distribution method given in Collett (1991) and also described by Leemis and Trivedi (1996).
PROC FREQ computes an asymptotic test of the hypothesis that the binomial proportion equals $p_0$, where the value of $p_0$ is specified by the P= option in the TABLES statement. If you do not specify a value for P=, PROC FREQ uses $p_0 = 0.5$ by default. The asymptotic test statistic is
$z = \frac{\hat{p} - p_0}{\sqrt{p_0 (1 - p_0) / n}}$
PROC FREQ computes one-sided and two-sided p-values for this test. When the test statistic z is greater than its null hypothesis expected value of zero, PROC FREQ computes the right-sided p-value, which is the probability of a larger value of the statistic occurring under the null hypothesis. A small right-sided p-value supports the alternative hypothesis that the true value of the proportion is greater than $p_0$. When the test statistic is less than or equal to zero, PROC FREQ computes the left-sided p-value, which is the probability of a smaller value of the statistic occurring under the null hypothesis. A small left-sided p-value supports the alternative hypothesis that the true value of the proportion is less than $p_0$. The one-sided p-value can be expressed as
$P_1 = \mathrm{Prob}(Z > z)$ if $z > 0$, and $P_1 = \mathrm{Prob}(Z < z)$ if $z \le 0$,
where $Z$ has a standard normal distribution. The two-sided p-value is computed as
When you specify the BINOMIAL option in the EXACT statement, PROC FREQ also computes an exact test of the null hypothesis that the proportion equals $p_0$. To compute this exact test, PROC FREQ uses the binomial probability function
where the variable $X$ has a binomial distribution with parameters $n$ and $p_0$. To compute $\mathrm{Prob}(X \le n_1)$, PROC FREQ sums these binomial probabilities over $x$ from zero to $n_1$. To compute $\mathrm{Prob}(X \ge n_1)$, PROC FREQ sums these binomial probabilities over $x$ from $n_1$ to $n$. Then the exact one-sided p-value is
and the exact two-sided p-value is
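The following statements sketch a request for the asymptotic and exact binomial analyses described above, assuming the BINOMIAL option of the TABLES statement requests them. The data set, variable, and null proportion are hypothetical.

   proc freq data=cures;                   /* hypothetical one-way data                  */
      tables response / binomial p=0.6;    /* test H0: proportion = 0.6                  */
      exact binomial;                      /* exact confidence limits and exact test     */
   run;

The proportion is computed for the first level of the one-way table, as described above.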
Risks and Risk Differences
Let the frequencies of the 2×2 table be represented as follows:
The column 1 risk for row 1 is the proportion of row 1 observations classified in column 1
This estimates the conditional probability of the column 1 response, given the first level of the row variable.
The column 1 risk for row 2 is the proportion of row 2 observations classified in column 1,
and the overall column 1 risk is the proportion of all observations classified in column 1,
The column 1 risk difference compares the risks for the two rows, and it is computed as the column 1 risk for row 1 minus the column 1 risk for row 2,
The risks and risk difference are defined similarly for column 2.
The standard error of the column 1 risk estimate for row i is computed as
The standard error of the overall column 1 risk estimate is computed as
If the two rows represent independent binomial samples, the standard error for the column 1 risk difference is computed as
The standard errors are computed similarly for the column 2 risks and risk difference.
Using the normal approximation to the binomial distribution, PROC FREQ constructs asymptotic confidence limits for the risk and risk differences according to
where est is the estimate, $z_{\alpha/2}$ is the $100(1-\alpha/2)$ percentile of the standard normal distribution, and se(est) is the standard error of the estimate. The confidence level is determined from the value of the ALPHA= option, which, by default, equals 0.05 and produces 95 percent confidence limits.
PROC FREQ computes exact confidence limits for the column 1, column 2, and overall risks using the F distribution method given in Collett (1991), and also described by Leemis and Trivedi (1996). PROC FREQ does not provide exact confidence limits for the risk differences. Refer to Agresti (1992) for a discussion of issues involved in constructing exact confidence limits for differences of proportions.
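The following statements sketch a request for these statistics, assuming the RISKDIFF option in the TABLES statement produces the risks and risk differences; the data set and variables are hypothetical.

   proc freq data=exposure;                /* hypothetical 2 x 2 data                      */
      weight count;                        /* COUNT holds the cell frequencies             */
      tables treatment*outcome / riskdiff; /* risks and risk differences, with confidence  */
   run;                                    /* limits                                       */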
Odds Ratio and Relative Risks for 2×2 Tables
The odds of a positive response (column 1) in row 1 are $n_{11} / n_{12}$, and the odds of a positive response in row 2 are $n_{21} / n_{22}$. The odds ratio is formed as the ratio of the row 1 odds to the row 2 odds. The odds ratio for a 2×2 table is defined as
$OR = \frac{n_{11}/n_{12}}{n_{21}/n_{22}} = \frac{n_{11}\, n_{22}}{n_{12}\, n_{21}}$
The odds ratio can be any nonnegative number. When the row and column variables are independent, the true value of the odds ratio equals 1. An odds ratio greater than 1 indicates that the odds of a positive response are higher in row 1 than in row 2. Values less than 1 indicate the odds of positive response are higher in row 2. The strength of association increases with the deviation from 1.
The transformation $G = (OR - 1)/(OR + 1)$ transforms the odds ratio to the range $(-1, 1)$ such that $G = 0$ when $OR = 1$, $G = -1$ when $OR = 0$, and $G$ is close to 1 for very large values of $OR$. $G$ is the gamma statistic, which PROC FREQ computes when you specify the MEASURES option.
The asymptotic $100(1-\alpha)\%$ confidence limits for the odds ratio are
$\left( OR \cdot \exp(-z\sqrt{v}),\;\; OR \cdot \exp(z\sqrt{v}) \right)$
where
$v = \mathrm{var}(\ln OR) = \frac{1}{n_{11}} + \frac{1}{n_{12}} + \frac{1}{n_{21}} + \frac{1}{n_{22}}$
and $z$ is the $100(1-\alpha/2)$ percentile of the standard normal distribution. If any of the four cell frequencies are zero, the estimates are not computed.
When you specify the OR option in the EXACT statement, PROC FREQ computes exact confidence limits for the odds ratio using an iterative algorithm based on that presented by Thomas (1971). Because this is a discrete problem, the confidence coefficient for these exact confidence limits is not exactly $1 - \alpha$ but is at least $1 - \alpha$. Thus, these confidence limits are conservative. Refer to Agresti (1992).
The column 1 relative risk is the ratio of the column 1 risk for row 1 to the column 1 risk for row 2. The column 1 risk for row 1 is the proportion of the row 1 observations classified in column 1,
$p_{1|1} = n_{11} / n_{1 \cdot}$
Similarly, the column 1 risk for row 2 is
$p_{1|2} = n_{21} / n_{2 \cdot}$
The column 1 relative risk is then computed as
$RR_1 = p_{1|1} / p_{1|2}$
A relative risk greater than 1 indicates that the probability of positive response is greater in row 1 than in row 2. Similarly, a relative risk that is less than 1 indicates that the probability of positive response is less in row 1 than in row 2. The strength of association increases with the deviation from 1.
The asymptotic $100(1-\alpha)\%$ confidence limits for the column 1 relative risk are
where
and $z$ is the $100(1-\alpha/2)$ percentile of the standard normal distribution. If either $n_{11}$ or $n_{21}$ is zero, PROC FREQ does not compute the relative risks.
The column 2 relative risks are computed similarly.
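The following statements sketch a request for the odds ratio and relative risks for a 2×2 table, together with exact confidence limits for the odds ratio; the data set and variables are hypothetical.

   proc freq data=casecontrol;             /* hypothetical 2 x 2 data                      */
      weight count;
      tables exposure*disease / measures;  /* includes the odds ratio and relative risks   */
      exact or;                            /* exact confidence limits for the odds ratio   */
   run;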
Cochran-Armitage Test for Trend
The trend test is based upon the regression coefficient for the weighted linear regression of the binomial proportions on the scores of the levels of the explanatory variable. Refer to Margolin (1988) and Agresti (1990). If the contingency table has two columns and R rows, the trend test statistic is computed as
where
The row scores are determined by the value of the SCORES= option in the TABLES statement. By default, PROC FREQ uses TABLE scores. For character variables, the TABLE scores for the row variable are the row numbers (for example, 1 for the first row, 2 for the second row, and so on). For numeric variables, the TABLE score for each row is the numeric value of the row level. When you perform the trend test, the explanatory variable may be numeric (for example, dose of a test substance), and these variable values may be appropriate scores. If the explanatory variable has ordinal levels that are not numeric, you can assign meaningful scores to the variable levels. Sometimes equidistant scores, such as the TABLE scores for a character variable, may be appropriate. For more information on choosing scores for the trend test, refer to Margolin (1988).
The null hypothesis for the Cochran-Armitage test is no trend, which means the binomial proportion is the same for all levels of the explanatory variable. Under this null hypothesis, the trend test statistic is asymptotically distributed as a standard normal random variable. In addition to this asymptotic test, PROC FREQ can compute the exact test for trend, which you request by specifying the TREND option in the EXACT statement. See the EXACT Statement for information on exact tests.
PROC FREQ computes one-sided and two-sided p-values for the trend test. When the test statistic is greater than its expected value of zero, PROC FREQ computes the right-sided p-value, which is the probability of a larger value of the statistic occurring under the null hypothesis. A small right-sided p-value supports the alternative hypothesis of increasing trend in column 1 probability from row 1 to row R. When the test statistic is less than or equal to zero, PROC FREQ computes the left-sided p-value. A small left-sided p-value supports the alternative of decreasing trend. The one-sided p-value can be expressed as
The two-sided p-value is computed as
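The following statements sketch a request for the asymptotic and exact trend tests, assuming the TREND option of the TABLES statement requests the test; the data set and variables are hypothetical.

   proc freq data=doseresp;                /* hypothetical R x 2 data                      */
      weight count;
      tables dose*response / trend;        /* Cochran-Armitage test with TABLE scores      */
      exact trend;                         /* exact test for trend                         */
   run;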
Jonckheere-Terpstra Test
The Jonckheere-Terpstra test is appropriate for a contingency table where an ordinal column variable represents the response. The row variable, which can be nominal or ordinal, represents the classification variable. The levels of the row variable should be ordered according to the ordering you want the test to detect. The order of variable levels is determined by the ORDER= option in the PROC FREQ statement. The default is ORDER=INTERNAL, which orders by unformatted value. If you specify ORDER=DATA, PROC FREQ orders values according to their order in the input data set. For more information on how to order variable levels, see the ORDER= option .
The Jonckheere-Terpstra test statistic is computed by first forming Mann-Whitney counts , where , for pairs of rows in the contingency table,
where is response in row . Then the Jonckheere-Terpstra test statistic is computed as
This test rejects the null hypothesis of no difference among classes for large values of the test statistic. Asymptotic p-values for the Jonckheere-Terpstra test are obtained by using the normal approximation for the distribution of the standardized test statistic. The standardized test statistic is computed as
where and are the expected value and variance of the test statistic under the null hypothesis.
where
In addition to this asymptotic test, PROC FREQ can compute the exact Jonckheere-Terpstra test, which you request by specifying the JT option in the EXACT statement. See the EXACT Statement for information on exact tests.
PROC FREQ computes one-sided and two-sided p-values for the Jonckheere-Terpstra test. When the standardized test statistic is greater than its expected value of 0, PROC FREQ computes the right-sided p-value, which is the probability of a larger value of the statistic occurring under the null hypothesis. A small right-sided p-value supports the alternative hypothesis of increasing order from row 1 to row R. When the standardized test statistic is less than or equal to 0, PROC FREQ computes the left-sided p-value. A small left-sided p-value supports the alternative of decreasing order from row 1 to row R. The one-sided p-value, , can be expressed as
The two-sided p-value, , is computed as
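The following statements sketch a request for the asymptotic and exact Jonckheere-Terpstra tests, assuming the JT option of the TABLES statement requests the test; the data set and variables are hypothetical.

   proc freq data=severity order=data;     /* row levels ordered as they appear in the data */
      weight count;
      tables treatment*pain / jt;          /* Jonckheere-Terpstra test                      */
      exact jt;                            /* exact version of the test                     */
   run;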
Tests and Measures of Agreement
PROC FREQ computes the kappa coefficients (simple and weighted), their asymptotic standard errors, and their confidence limits when you specify the AGREE option in the TABLES statement. If you also specify the KAPPA option in the TEST statement, then PROC FREQ computes the asymptotic test of the hypothesis that simple kappa equals zero. Similarly, if you specify WTKAP in the TEST statement, PROC FREQ computes the asymptotic test for weighted kappa.
In addition to the asymptotic tests that are described in this section, PROC FREQ also computes the exact p-value for McNemar's test when you specify the keyword MCNEM in the EXACT statement. For the kappa statistic, PROC FREQ computes an exact test of the hypothesis that kappa (or weighted kappa) equals zero when you specify KAPPA (or WTKAP) in the EXACT statement. See Exact Statistics for more information about these tests.
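For example, the following statements sketch a request for the agreement statistics together with the asymptotic kappa tests and the exact McNemar test; the data set and variables are hypothetical.

   proc freq data=diagnoses;               /* hypothetical square table of two ratings     */
      weight count;
      tables rater1*rater2 / agree;        /* McNemar or Bowker test and kappa statistics  */
      test kappa wtkap;                    /* asymptotic tests that the kappas equal zero  */
      exact mcnem;                         /* exact p-value for McNemar's test             */
   run;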
The discussion of each test and measure of agreement provides
the formulas
that PROC FREQ uses to compute the AGREE statistics. For information about
the use and interpretation of these statistics, refer to Agresti
(1990), Agresti (1996), Fleiss (1981), and the references that follow.
Under the null hypothesis,
has an asymptotic chi-square distribution with one degree
of freedom. Refer to McNemar (1947), as well as the references cited in
the preceding section. PROC FREQ also computes an exact p-value
for McNemar's test when you specify MCNEM in the EXACT statement.
For large samples,
has an asymptotic chi-square distribution with
degrees of freedom under the null hypothesis of symmetry
of the expected counts. Refer to Bowker (1948). For two categories, this test
of symmetry is identical to McNemar's test.
The simple kappa coefficient is computed as
$\hat{\kappa} = \frac{P_o - P_e}{1 - P_e}$
where $P_o = \sum_i p_{ii}$ and $P_e = \sum_i p_{i \cdot}\, p_{\cdot i}$. Viewing the two response variables as two independent ratings of the subjects, the kappa coefficient equals +1 when there is complete agreement of the raters. When the observed agreement exceeds chance agreement, the kappa coefficient is positive, with its magnitude reflecting the strength of agreement. Although unusual in practice, kappa is negative when the observed agreement is less than chance agreement. The minimum value of kappa is between -1 and 0, depending on the marginal proportions.
The asymptotic variance of the simple kappa coefficient is estimated by the following, according to Fleiss et al. (1969):
where
and
PROC FREQ computes confidence limits for the simple kappa coefficient according to
where $z_{\alpha/2}$ is the $100(1-\alpha/2)$ percentile of the standard normal distribution. The value of $\alpha$ is determined by the value of the ALPHA= option, which by default equals 0.05 and produces 95 percent confidence limits.
To compute an asymptotic test for the kappa coefficient, PROC FREQ uses the standardized test statistic $\hat{\kappa} / \sqrt{\mathrm{var}_0(\hat{\kappa})}$, which has an asymptotic standard normal distribution under the null hypothesis that kappa equals zero, where $\mathrm{var}_0(\hat{\kappa})$ is the variance of the kappa coefficient under the null hypothesis.
Refer to Fleiss (1981).
In addition to the asymptotic test for kappa, PROC FREQ computes an
exact test when you specify the KAPPA option or the AGREE option in the EXACT
statement. See Exact Statistics for more information on exact tests.
where
and
For 2×2 tables, the weighted kappa coefficient is identical to the simple kappa coefficient. Therefore, PROC FREQ displays only the simple kappa coefficient for 2×2 tables. The asymptotic variance of the weighted kappa coefficient is estimated by the following, according to Fleiss et al. (1969):
where
and
PROC FREQ computes confidence limits for the weighted kappa coefficient according to
where $z_{\alpha/2}$ is the $100(1-\alpha/2)$ percentile of the standard normal distribution. The value of $\alpha$ is determined by the value of the ALPHA= option, which by default equals 0.05 and produces 95 percent confidence limits.
To compute an asymptotic test for the weighted kappa coefficient, PROC FREQ uses the standardized test statistic $\hat{\kappa}_w / \sqrt{\mathrm{var}_0(\hat{\kappa}_w)}$, which has an asymptotic standard normal distribution under the null hypothesis that weighted kappa equals zero, where $\mathrm{var}_0(\hat{\kappa}_w)$ is the variance of the weighted kappa coefficient under the null hypothesis.
Refer to Fleiss (1981).
In addition to the asymptotic test for weighted kappa, PROC FREQ computes the exact test when you specify the WTKAP option or the AGREE option in the EXACT statement. See Exact Statistics for more information on exact tests.
PROC FREQ computes kappa coefficient weights using the column scores and one of two available weight types. The column scores are determined by the SCORES= option in the TABLES statement. The two available weight types are Cicchetti-Allison and Fleiss-Cohen. By default, PROC FREQ uses the Cicchetti-Allison type. If you specify WT=FC in the AGREE option, then PROC FREQ uses the Fleiss-Cohen weight type to construct kappa weights. To display the kappa weights, specify the PRINTKWT option in the TABLES statement.
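The following statements sketch a request for Fleiss-Cohen weights and for displaying the kappa weights; the data set and variables are hypothetical.

   proc freq data=diagnoses;                          /* hypothetical square table            */
      weight count;
      tables rater1*rater2 / agree(wt=fc) printkwt;   /* Fleiss-Cohen weights; display the    */
   run;                                               /* kappa weights                        */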
PROC FREQ computes Cicchetti-Allison kappa coefficient weights using a form similar to that given by Cicchetti and Allison (1971).
where $C_i$ is the score for column $i$, and C is the number of categories. You can specify the type of score using the SCORES= option in the TABLES statement. If you do not specify the SCORES= option, PROC FREQ uses TABLE scores. For numeric variables, TABLE scores are the numeric values of the variable levels. You can assign numeric values to the categories in a way that reflects their level of similarity. For example, suppose you have four categories and order them according to similarity. If you assign them values of 0, 2, 4, and 10, these values determine the weights that PROC FREQ uses for computing the weighted kappa coefficient.
If you specify WT=FC with the AGREE option in the TABLES statement, PROC FREQ computes Fleiss-Cohen kappa coefficient weights using a form similar to that given by Fleiss and Cohen (1973).
An estimate of the overall weighted kappa is computed similarly.
A similar test is done for weighted kappa coefficients.
where
is the number of positive responses for variable
,
is the total number of positive responses over all variables,
and
is the number of positive responses for subject
. Under the null hypothesis, Cochran's Q is
an approximate chi-square statistic with
degrees of freedom. Refer to Cochran (1950). When there
are two variables (
), Cochran's Q simplifies to McNemar's statistic.
When there are more than two response categories, you can test for marginal
homogeneity using the repeated measures capabilities of the CATMOD procedure.
To include a variable level with no observations in the analysis, you can assign an extremely small weight (such as 1E-8) to an observation with that variable level. Then the analysis includes this variable level, but the statistic value remains unchanged because the weight is so small. For example, suppose you need to compute a kappa coefficient for data for two raters. One rater uses all possible ratings (say, 1, 2, 3, 4, and 5), but another rater uses only four of the available ratings (1, 2, 3, and 4). You can create an observation where the second rater uses the rating level 5, and assign it a weight of 1E-8. This forms a 5×5 square table for the analysis.
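The following statements sketch this technique; the data set, variables, and rating values are hypothetical.

   data extra;                         /* one artificial observation for the unused rating level */
      rater1 = 1;                      /* any valid rating for the first rater                   */
      rater2 = 5;                      /* the level that the second rater never used             */
      count  = 1e-8;                   /* weight small enough not to change the statistics       */
   run;

   data ratings2;
      set ratings extra;               /* RATINGS is the hypothetical two-rater data set         */
   run;

   proc freq data=ratings2;
      weight count;                    /* COUNT holds the cell frequencies                       */
      tables rater1*rater2 / agree;    /* kappa is now computed for a 5 x 5 square table         */
   run;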
Cochran-Mantel-Haenszel Statistics
   proc freq;
      tables a*b*c*d / cmh;
   run;

The CMH option in the TABLES statement gives a stratified statistical analysis of the relationship between C and D, controlling for A and B. The stratified analysis provides a way to adjust for the possible confounding effects of A and B without being forced to estimate parameters for them. The analysis produces Cochran-Mantel-Haenszel statistics, and for 2×2 tables, it includes estimation of the common odds ratio, common relative risks, and the Breslow-Day test for homogeneity of the odds ratios.
Let the number of strata be denoted by $q$, indexing the strata by $h = 1, 2, \ldots, q$. Each stratum contains a contingency table with X representing the row variable and Y representing the column variable. For table $h$, denote the cell frequency in row $i$ and column $j$ by $n_{hij}$, with corresponding row and column marginal totals denoted by $n_{hi \cdot}$ and $n_{h \cdot j}$, and the overall stratum total by $n_h$.
Because the formulas for the Cochran-Mantel-Haenszel statistics are more easily defined in terms of matrices, the following notation is used. Vectors are presumed to be column vectors unless they are transposed (′).
Assume that the strata are independent and that the marginal totals of each stratum are fixed. The null hypothesis, $H_0$, is that there is no association between X and Y in any of the strata. The corresponding model is the multiple hypergeometric, which implies that, under $H_0$, the expected value and covariance matrix of the frequencies are, respectively,
and
where
and where $\otimes$ denotes Kronecker product multiplication and $D_{\mathbf{a}}$ is a diagonal matrix with the elements of the vector $\mathbf{a}$ on the main diagonal.
The generalized CMH statistic (Landis, Heyman, and Koch 1978) is defined as
where
and where
is a matrix of fixed constants based on column scores and row scores . When the null hypothesis is true, the CMH statistic has an asymptotic chi-square distribution with degrees of freedom equal to the rank of . If is found to be singular, PROC FREQ displays a message and sets the value of the CMH statistic to missing.
PROC FREQ computes three CMH statistics using this formula for the generalized CMH statistic, with different row and column score definitions for each statistic. The CMH statistics that PROC FREQ computes are the correlation statistic, the ANOVA (row mean scores) statistic, and the general association statistic. These statistics test the null hypothesis of no association against different alternative hypotheses. The following sections describe the computation of these CMH statistics.
The alternative hypothesis is that there is a linear association between X and Y in at least one stratum. If either X or Y does not lie on an ordinal (or interval) scale, then this statistic is meaningless.
To compute the correlation statistic, PROC FREQ uses the formula for the generalized CMH statistic with the row and column scores determined by the SCORES= option in the TABLES statement. See Scores for more information on the available score types. The matrix of row scores has dimension , and the matrix of column scores has dimension .
When there is only one stratum, this CMH statistic reduces to $(n-1) r^2$, where $r$ is the Pearson correlation coefficient between X and Y. When you specify nonparametric (RANK, RIDIT, or MODRIDIT) scores, the statistic reduces to $(n-1) r_s^2$, where $r_s$ is the Spearman rank correlation coefficient between X and Y. When there is more than one stratum, the CMH statistic becomes a stratum-adjusted correlation statistic.
The matrix of column scores has dimension , and the scores, one for each column, are specified in the SCORES= option. The matrix has dimension which PROC FREQ creates internally as
where is an identity matrix of rank , and is an vector of ones. This matrix has the effect of forming independent contrasts of the mean scores.
When there is only one stratum, this CMH statistic is essentially an analysis-of-variance (ANOVA) statistic in the sense that it is a function of the variance ratio F statistic that would be obtained from a one-way ANOVA on the dependent variable Y. If nonparametric scores are specified in this case, then the ANOVA statistic is a Kruskal-Wallis test.
If there is more than one stratum, then this CMH statistic corresponds
to a stratum-adjusted ANOVA or Kruskal-Wallis test. In the special
case where there is one subject per row and one subject per column in the
contingency table of each stratum, then this CMH statistic is identical to
Friedman's chi-square. See Computing Friedman's Chi-Square Statistic for an illustration.
For the general association statistic, the matrix is the same as the one used for the ANOVA statistic. The matrix is defined similarly as
PROC FREQ generates both score matrices internally. When there is only one stratum, the general association CMH statistic reduces to $Q_P (n-1)/n$, where $Q_P$ is the Pearson chi-square statistic. When there is more than one stratum, the CMH statistic becomes a stratum-adjusted Pearson chi-square statistic. Note that a similar adjustment is made by summing the Pearson chi-squares across the strata. However, the latter statistic requires a large sample size in each stratum to support the resulting chi-square distribution with $q(R-1)(C-1)$ degrees of freedom. The CMH statistic requires only a large overall sample size because it has only $(R-1)(C-1)$ degrees of freedom.
Refer to Cochran
(1954); Mantel and Haenszel (1959); Mantel (1963);
Birch (1965); and Landis et al. (1978).
   proc freq;
      tables a*b*c*d / cmh;
   run;

In this example, if the row and column variables C and D both have two levels, PROC FREQ provides odds ratio and relative risk estimates, adjusting for the confounding variables A and B.
The choice of an appropriate measure depends on the study design. For case-control (retrospective) studies, the odds ratio is appropriate. For cohort (prospective) or cross-sectional studies, the relative risk is appropriate. See Odds Ratio and Relative Risks for 2×2 Tables for more information on these measures.
Throughout this section, $z$ denotes the $100(1-\alpha/2)$ percentile of the standard normal distribution.
It is always computed unless the denominator is zero. Refer to Mantel and Haenszel (1959) and Agresti (1990).
Using the estimated variance for the logarithm of the Mantel-Haenszel odds ratio estimator given by Robins et al. (1986), PROC FREQ computes the corresponding $100(1-\alpha)\%$ confidence limits for the odds ratio as
where
Note that the Mantel-Haenszel odds ratio estimator is less sensitive to small $n_h$ than the logit estimator.
and the corresponding $100(1-\alpha)\%$ confidence limits are
where $OR_h$ is the odds ratio for stratum $h$, and
Refer to Woolf (1955).
If any cell frequency in a stratum $h$ is zero, then PROC FREQ adds 0.5 to each cell of the stratum before computing the stratum estimates (Haldane 1955), and displays a warning.
It is always computed unless the denominator is zero. Refer to Mantel and Haenszel (1959) and Agresti (1990).
Using the estimated variance for the logarithm of the Mantel-Haenszel relative risk estimator given by Greenland and Robins (1985), PROC FREQ computes the corresponding $100(1-\alpha)\%$ confidence limits for the relative risk as
where
The adjusted logit estimate of the common relative risk for column 1 is computed as
and the corresponding $100(1-\alpha)\%$ confidence limits are
where $RR_h$ is the column 1 relative risk estimator for stratum $h$, and
If or is zero, then PROC FREQ adds 0.5 to each cell of the stratum before computing and , and displays a warning.
Refer to Kleinbaum, Kupper, and Morgenstern (1982, Sections 17.4, 17.5)
and Breslow and Day (1994).
The Breslow-Day statistic is computed as
where E and var denote expected value and variance, respectively. The summation does not include any table with a zero row or column. If the Mantel-Haenszel estimate of the common odds ratio equals zero or if it is undefined, then PROC FREQ does not compute the statistic and displays a warning message.
Refer to Breslow and Day (1993).
Exact Statistics
In addition to computation of exact p-values, PROC FREQ provides the option of estimating exact p-values by Monte Carlo simulation. This can be useful for problems that are so large that exact computations require a great amount of time and memory, but for which asymptotic approximations may not be sufficient.
PROC FREQ provides exact p-values for the following tests for two-way tables: Pearson chi-square, likelihood-ratio chi-square, Mantel-Haenszel chi-square, Fisher's exact test, Jonckheere-Terpstra test, Cochran-Armitage test for trend, and McNemar's test. PROC FREQ can also compute exact p-values for tests of hypotheses that the following statistics are equal to zero: Pearson correlation coefficient, Spearman correlation coefficient, simple kappa coefficient, and weighted kappa coefficient. Additionally, PROC FREQ can compute exact confidence limits for the odds ratio for 2×2 tables. For one-way frequency tables, PROC FREQ provides the exact chi-square goodness-of-fit test (for equal proportions, or for proportions or frequencies that you specify). Also for one-way tables, PROC FREQ provides exact confidence limits for the binomial proportion, and an exact test for the binomial proportion value.
If the procedure does not complete the computation within the specified time, use MAXTIME= to increase the amount of clock time that PROC FREQ can use to compute the exact p-values directly or with Monte Carlo estimation.
The following sections summarize the computational algorithms, define
the p-values that PROC FREQ computes, and discuss the computational
resource requirements.
The reference set for a given contingency table is the set of all contingency tables with the observed marginal row and column sums. Corresponding to this reference set, the network algorithm forms a directed acyclic network consisting of nodes in a number of stages. A path through the network corresponds to a distinct table in the reference set. The distances between nodes are defined so that the total distance of a path through the network is the corresponding value of the test statistic. At each node, the algorithm computes the shortest and longest path distances for all the paths that pass through that node. For statistics that can be expressed as a linear combination of cell frequencies multiplied by increasing row and column scores, PROC FREQ computes shortest and longest path distances using the algorithm given in Agresti et al. (1990). For statistics of other forms, PROC FREQ computes an upper limit for the longest path and a lower limit for the shortest path following the approach of Valz and Thompson (1994).
The longest and shortest path distances or limits for a node are compared to the value of the test statistic to determine whether all paths through the node contribute to the p-value, none of the paths through the node contribute to the p-value, or neither of these situations occurs. If all paths through the node contribute, the p-value is incremented accordingly, and these paths are eliminated from further analysis. If no paths contribute, these paths are eliminated from the analysis. Otherwise, the algorithm continues, still processing this node and the associated paths. The algorithm finishes when all nodes have been accounted for, incrementing the p-value accordingly, or eliminated.
In applying the network algorithm, PROC FREQ uses full precision to represent all statistics, row and column scores, and other quantities involved in the computations. Although it is possible to use rounding to improve the speed and memory requirements of the algorithm, PROC FREQ does not do this because it can result in reduced accuracy of the p-values.
PROC FREQ computes exact confidence limits for the odds ratio according to an iterative algorithm based on that presented by Thomas (1971). Refer also to Gart (1971). Because this is a discrete problem, the confidence coefficient is not exactly $1 - \alpha$ but is at least $1 - \alpha$. Thus, these confidence limits are conservative.
For one-way tables, PROC FREQ computes the exact chi-square goodness-of-fit test by the method of Radlow and Alf (1975). PROC FREQ generates all possible one-way tables with the observed total sample size and number of categories. For each possible table, PROC FREQ compares its chi-square value with the value for the observed table. If the table's chi-square value is greater than or equal to the observed chi-square, PROC FREQ increments the exact p-value by the probability of that table, which is calculated under the null hypothesis using the multinomial frequency distribution. By default, the null hypothesis states that all categories have equal proportions. If you specify null hypothesis proportions or frequencies using the TESTP= or TESTF= option in the TABLES statement, then PROC FREQ calculates the exact chi-square test based on that null hypothesis.
For binomial proportions in one-way tables, PROC FREQ computes exact confidence limits using the F distribution method given in Collett (1991) and also described by Leemis and Trivedi (1996). PROC FREQ computes the exact test for a binomial proportion by summing binomial probabilities over all alternatives. See Binomial Proportion for details. By default, PROC FREQ uses 0.5 as the null hypothesis proportion. Alternatively, you can specify the null hypothesis proportion with the P= option in the TABLES statement.
There are other tests for which it may be appropriate to test against either a one-sided or a two-sided alternative hypothesis. For example, when you test the null hypothesis that the true parameter value equals zero, the alternative of interest may be one-sided or two-sided. Such tests include the Pearson correlation coefficient, Spearman correlation coefficient, Jonckheere-Terpstra test, Cochran-Armitage test for trend, simple kappa coefficient, and weighted kappa coefficient. For these tests, PROC FREQ computes the right-sided p-value when the observed value of the test statistic is greater than its expected value. The right-sided p-value is the sum of probabilities for those tables having a test statistic greater than or equal to the observed test statistic. Otherwise, when the test statistic is less than or equal to its expected value, PROC FREQ computes the left-sided p-value. The left-sided p-value is the sum of probabilities for those tables having a test statistic less than or equal to the one observed. The one-sided p-value can be expressed as
where t is the observed value of the test statistic, and is the expected value of the test statistic under the null hypothesis. PROC FREQ computes the two-sided p-value as the sum of the one-sided p-value and the corresponding area in the opposite tail of the distribution of the statistic, equidistant from the expected value. The two-sided p-value can be expressed as
No general formula exists that can determine in advance how much time or memory PROC FREQ needs to compute an exact p-value for a given problem. The time and memory requirements depend on several factors, including the test that is performed, the total sample size, the number of rows and columns, and the specific arrangement of the observations into table cells. Generally, larger problems (in terms of total sample size, number of rows, and number of columns) tend to require more time and memory. Additionally, for a fixed total sample size, time and memory requirements tend to increase as the number of rows and columns increases, because this corresponds to an increase in the number of tables in the reference set. Also, for a fixed sample size, time and memory requirements increase as the marginal row and column totals become more homogeneous. Refer to Agresti et al. (1992) and Gail and Mantel (1977).
At any time while PROC FREQ computes exact p-values, you can terminate the computations by pressing the system interrupt key sequence (refer to the SAS Companion for your operating environment) and choosing to stop computations. After you terminate exact computations, PROC FREQ completes all other remaining tasks that the procedure specifies. The procedure produces the requested output, reporting missing values for any exact p-values that were not computed by the time of termination.
You can also use the MAXTIME= option in the EXACT statement to limit
the amount of clock time PROC FREQ uses for exact computations. You specify
a MAXTIME= value that is the maximum amount of time (in seconds) that PROC
FREQ can use to compute an exact p-value. If PROC FREQ does not
finish computing an exact p-value within that time, it terminates
the computation and completes all other remaining tasks.
To compute a Monte Carlo estimate of an exact p-value, PROC FREQ generates a random sample of tables with the same total sample size, row totals, and column totals as the observed table. PROC FREQ uses the algorithm of Agresti et al. (1979), which generates tables in proportion to their hypergeometric probabilities, conditional on the marginal frequencies. For each sample table, PROC FREQ computes the value of the test statistic and compares it to the value for the observed table. When estimating a right-sided p-value, PROC FREQ counts all sample tables for which the test statistic is greater than or equal to the observed test statistic. Then the p-value estimate equals the number of these tables divided by the total number of tables sampled.
PROC FREQ computes left-sided and two-sided p-value estimates similarly. For left-sided p-values, PROC FREQ evaluates whether the test statistic for each sampled table is less than or equal to the observed test statistic. For two-sided p-values, PROC FREQ examines the sample test statistics according to the expression for the two-sided p-value given in Definition of p-Values. The number of sampled tables counted in this way is a binomially distributed variable, with the number of trials equal to the number of sampled tables and success probability equal to the true p-value. It follows that the asymptotic standard error of the Monte Carlo estimate is
PROC FREQ constructs asymptotic confidence limits for the p-values according to
where $z_{\alpha/2}$ is the $100(1-\alpha/2)$ percentile of the standard normal distribution, and the confidence level is determined by the ALPHA= option in the EXACT statement.
When the Monte Carlo estimate equals 0, then PROC FREQ computes the confidence limits for the p-value as
When the Monte Carlo estimate equals 1, then PROC FREQ computes the confidence limits as
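The following statements sketch a request for a Monte Carlo estimate of an exact p-value with a time limit, assuming the MC, N=, and SEED= options of the EXACT statement control the simulation; the data set, variables, and option values are hypothetical.

   proc freq data=sparse;                                 /* hypothetical sparse R x C table   */
      weight count;
      tables row*col;
      exact pchi / mc n=10000 seed=48573 maxtime=1200;    /* Monte Carlo estimate of the exact */
   run;                                                   /* Pearson chi-square p-value        */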