Inconsistencies in Item Parameter Estimates Across Seemingly Parallel Forms

June 24, 2017 | Author: Ki Matlock Cole | Category: Item Response Theory



Ki L. Matlock, College of Education, Oklahoma State University, Stillwater, OK
Ronna C. Turner, College of Education and Health Professions, University of Arkansas, Fayetteville, AR

Sample

Abstract

When constructing multiple test forms, the number of items and the total test difficulty are often equivalent across forms, but not all test developers match the number of items and/or average item difficulty within sub-content areas. In this simulation study, six test forms were constructed having an equal number of items and equal average item difficulty overall. The manipulated variables were the number of items and the average item difficulty within subsets of items primarily measuring one of two dimensions. Datasets were simulated at four levels of correlation (0, .3, .6, and .9). Item parameters were estimated using the unidimensional Rasch and 2PL IRT models. Estimated discrimination and difficulty were compared across forms and within subsets of items. The average unidimensional estimated discrimination was consistent across forms having the same correlation. Forms having a larger set of easy items measuring one dimension were estimated as being more difficult than forms having a larger set of hard items. Estimates were also investigated within subsets of items, and measures of bias were reported. This study encourages test developers to maintain consistent test specifications not only across forms as a whole, but also within sub-content areas.

Four two-dimensional true-ability datasets of 1,000 examinees, each following a multivariate standard normal distribution, were simulated, one at each of the following levels of correlation between dimensions: 0, .3, .6, and .9.
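This sampling step can be sketched as follows. The study itself used R; this is an illustrative Python/NumPy sketch, and the function name is mine.

```python
import numpy as np

def simulate_abilities(n_examinees=1000, rho=0.6, seed=0):
    """Draw two-dimensional true abilities from a multivariate standard
    normal distribution with correlation rho between the dimensions."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])  # unit variances, correlation rho
    return rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n_examinees)

theta = simulate_abilities(rho=0.9)
print(theta.shape)                            # (1000, 2)
print(round(np.corrcoef(theta.T)[0, 1], 2))   # close to 0.9
```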

Results

Discrimination
• Across forms at the same level of correlation, changes in item specifications within sets of items had little effect on the bias between the estimated discrimination and the true measures that take into account the discrimination on both dimensions, i.e., the average of a1 and a2, a1 + a2, and MDISC.
• The estimated discrimination was less biased with respect to a1 than to a2. This was likely due to a larger number of items having a1 > a2.
• As forms became more balanced, the magnitudes of the differences between the estimate and a1 or a2 became more equal, and the estimate converged to the average of a1 and a2.
• Difficulty had little effect on all measures of bias.
• As the correlation increased, the estimate converged closer to MDISC when ρ = .3 or .6 and closer to a1 + a2 when ρ = .9.

Discussion

The results of this study may be used to guide test developers to design forms with more consistent average item difficulty within subsets of items. It is also advised to evaluate the correlation between the multiple dimensions. If data are highly correlated, a 2PL model may be preferred over a Rasch model because of the stability of the estimated total test difficulty. Once the correlation has been established, the unidimensional estimates from a Rasch model and a 2PL model may represent the true parameters somewhat differently.
• The estimated average item discrimination across forms (at the same level of correlation) tends to be consistent regardless of changes in difficulty and/or numbers of items within sets.
• When data are uncorrelated, the estimated discrimination may be a closer approximation to the average of the true values, as reported by Ansley and Forsyth (1985), Reckase et al. (1988), and Song (2010), or to the discrimination of the larger set of items.
• As the correlation increases (ρ = .3 and .6), the estimate tends to be closer to MDISC.
• When data are highly correlated (ρ = .9), the estimated discrimination is a close estimate of the sum of the true values, consistent with the report from Way, Ansley, and Forsyth (1988).
• The average unidimensional estimated difficulty is affected by the confounding of specifications within dimensions, the correlation, and the model applied (Rasch or 2PL IRT model).
• The unidimensional difficulty tends to be a close estimate of MDIFF when the 2PL model is used, and the estimate becomes more stable as the correlation increases.
• The estimate tends to be closer to −d more often across forms when the Rasch model is used, yet the stability of the estimate was similar at all levels of correlation.

Data

Item specifications for the first test form were taken from Form 24B of the ACT Mathematics Usage Test (Reckase & McKinley, 1991). Five additional forms were created that had the same average item difficulty and total number of items overall, but differed in average item difficulty and/or number of items within sets of items defined as follows:
• Set 1: items discriminating primarily on the first dimension (α_i < 30)
• Set 2: items discriminating on each dimension somewhat equally
• Set 3: items discriminating primarily on the second dimension (α_i > 60)

Table 1. Number of items, average discrimination, and difficulty across test forms and within sets of items. The number of items within sets and the average difficulty within sets of items (easy items or hard items) were manipulated.

Introduction

The purpose of this study was to investigate the effects of estimating unidimensional item parameters across test forms that have the same number of items and the same total test difficulty overall, but confounded length and difficulty within sub-content areas. Consider two forms of a mathematics test with items measuring both algebra and geometry. Overall, the forms have the same number of items and average item difficulty. Form A has more algebra items at a high difficulty level and fewer easy geometry items; Form B has fewer easy algebra items and more hard geometry items. Though these forms are similar overall, they differ within sub-content areas.
• If the same set of examinees interacted with all test forms, and the data were analyzed with the same model using the same software, how might item parameters be estimated differently across these forms?
• Would the average estimated total test difficulty be similar across all forms?
• Would the average estimated difficulty within subsets of items be affected by the confounding of the number of items and true item difficulty within dimensions?
Previous studies have reported biased estimates (Ackerman, 1987a, 1987b, 1989; Ansley & Forsyth, 1985; Reckase et al., 1988; Way et al., 1988), while others have reported unbiased estimates of the true parameters (Ackerman, 1987a; Reckase et al., 1988; Song, 2010). These studies did not compare estimates across multiple forms having equal true average item difficulty; additionally, estimated parameters were seldom compared to the true multidimensional parameters, MDISC and MDIFF.
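MDISC and MDIFF here follow Reckase's standard definitions for the compensatory model: MDISC is the Euclidean norm of the item's discrimination vector, and MDIFF = −d / MDISC. As a quick reference, a minimal Python sketch (the function names are mine; the study itself used R):

```python
import numpy as np

def mdisc(a):
    """Multidimensional discrimination (Reckase): the Euclidean norm
    of the item's discrimination vector."""
    return float(np.linalg.norm(a))

def mdiff(a, d):
    """Multidimensional difficulty (Reckase): signed distance from the
    origin to the point of steepest slope, -d / MDISC."""
    return -d / mdisc(a)

# Average parameters of Form 1, Set 1 from Table 1: a1 = 1.35, a2 = 0.28, d = 0.16
a = np.array([1.35, 0.28])
print(round(mdisc(a), 3))        # 1.379
print(round(mdiff(a, 0.16), 3))  # -0.116
```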

๐‘›

๐‘Ž1

๐‘Ž2

๐‘‘

Form 1: Set 1 Set 2 Set 3

40 20 11 9

1.04 1.35 1.01 0.39

0.71 0.28 1.05 1.25

-0.32 0.16 -0.43 -1.28

Form 2: Set 1 Set 2 Set 3

40 20 11 9

1.04 1.35 1.01 0.39

0.71 0.28 1.05 1.25

-0.32 -0.81 -0.22 0.63

Form 3: Set 1 Set 2 Set 3

40 18 11 11

0.98 1.36 1.01 0.33

0.77 0.30 1.05 1.26

-0.32 0.21 -0.43 -1.09

Form 4: Set 1 Set 2 Set 3

40 18 11 11

0.98 1.36 1.01 0.33

0.77 0.30 1.05 1.26

-0.32 -0.86 -0.22 0.44

Form 5: Set 1 Set 2 Set 3

40 15 11 14

0.93 1.46 1.01 0.29

0.82 0.34 1.05 1.17

-0.32 0.31 -0.43 -0.92

Form 6: Set 1 Set 2 Set 3

40 15 11 14

0.93 1.46 1.01 0.29

0.82 0.34 1.05 1.17

-0.32 -0.96 -0.22 0.28

The 'mirt' package (Chalmers, 2014) in R 3.1.2 was used to simulate two-dimensional dichotomous data following the 2PL compensatory model. Five hundred replications were simulated.
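Under the 2PL compensatory model, the probability of a correct response is P(X = 1 | θ) = 1 / (1 + exp(−(a1·θ1 + a2·θ2 + d))). A minimal sketch of this generation step (Python for illustration; the study used 'mirt' in R, and the function name here is mine):

```python
import numpy as np

def simulate_responses(theta, a, d, seed=0):
    """Dichotomous responses under the compensatory 2PL (M2PL) model:
    P(X = 1 | theta) = 1 / (1 + exp(-(a1*theta1 + a2*theta2 + d)))."""
    rng = np.random.default_rng(seed)
    z = theta @ a.T + d                 # (n_examinees, n_items) logits
    p = 1.0 / (1.0 + np.exp(-z))
    return (rng.random(p.shape) < p).astype(int)

# Two illustrative items using the Set 1 and Set 3 averages from Table 1
a = np.array([[1.35, 0.28],
              [0.39, 1.25]])
d = np.array([0.16, -1.28])
rng = np.random.default_rng(1)
theta = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=1000)
X = simulate_responses(theta, a, d)
print(X.shape)  # (1000, 2)
```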

Analysis

Parameters were estimated with the Rasch model and the 2PL model using the 'ltm' package (Rizopoulos, 2006) in R.

[Figure 1 appears here: line graph of average bias (y-axis) across Forms 1-6 (x-axis) at correlations 0, .3, .6, and .9, with one series for the bias of the unidimensional estimate relative to each true measure: a1, a2, the average of a1 and a2, a1 + a2, and MDISC.]

Figure 1. Line graph of the average bias between true and estimated measures of discrimination across six forms at four levels of correlation. A value greater than zero indicates the true value was underestimated; a value less than zero indicates the true value was overestimated.
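Given the caption's sign convention (positive means the true value was underestimated), the quantity plotted is an average of true minus estimated values. A minimal sketch of this computation (Python for illustration; the function name is mine):

```python
import numpy as np

def average_bias(true_vals, estimates):
    """Average bias, true minus estimated, over items (and replications);
    positive values mean the true value was underestimated on average."""
    return float(np.mean(np.asarray(true_vals) - np.asarray(estimates)))

print(round(average_bias([1.0, 0.5], [0.8, 0.5]), 3))  # 0.1 -> underestimation
```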

Difficulty
• When either model was applied, forms with a larger number of easy items (odd-numbered forms) were estimated as being more difficult than forms with a larger number of hard items.
• When the Rasch model was applied, the effects of the differences in subset difficulty remained constant at all levels of correlation. Across forms at the same level of correlation, the estimated difficulty from the Rasch model was most often a better estimate of the true difficulty.
• The changes in the difficulty of subsets of items had a strong effect on the 2PL estimates when ρ = 0, and the effect weakened as the correlation increased.

[Figure 2 appears here: line graph of difficulty (y-axis) across Forms 1-6 (x-axis) at correlations 0, .3, .6, and .9, with series for the true values −d and MDIFF and for the estimated difficulty from the Rasch and 2PL models.]

Figure 2. Line graph of the true difficulty (−d and MDIFF) and the estimated difficulty when the Rasch and 2PL models were used across all forms and at all levels of correlation.

• The estimated difficulty from the Rasch model tended to be closer to the true difficulty (−d). The estimated difficulty from the 2PL model tended to follow the pattern of MDIFF more closely than did the estimated difficulty from the Rasch model.

Future Studies
• Future research may include the 3PL model and examine the effects of confounding specifications on estimated ability, equating procedures, and implications in the CAT setting.

References

Ackerman, T. A. (1987a). A comparison study of the unidimensional IRT estimation of compensatory and noncompensatory multidimensional item response data. ACT Research Report Series (No. 87-12).
Ackerman, T. A. (1987b). ACT Research Report Series (No. 87-13).
Ackerman, T. A. (1989). Unidimensional IRT calibration of compensatory and noncompensatory multidimensional items. Applied Psychological Measurement, 13(2), 113-127.
Ansley, T. N., & Forsyth, R. A. (1985). An examination of the characteristics of unidimensional IRT parameter estimates derived from two-dimensional data. Applied Psychological Measurement, 9(1), 37-48.
Chalmers, P. (2014). mirt: A multidimensional item response theory package for the R environment. Journal of Statistical Software, 48(6), 1-29.
Reckase, M. D., Ackerman, T. A., & Carlson, J. E. (1988). Building a unidimensional test using multidimensional items. Journal of Educational Measurement, 25(3), 193-203.
Reckase, M. D., & McKinley, R. L. (1991). The discriminating power of items that measure more than one dimension. Applied Psychological Measurement, 15(4), 361-373.
Rizopoulos, D. (2006). ltm: An R package for latent variable modeling and item response theory analyses. Journal of Statistical Software, 17(5), 1-25.
Song, T. (2010). The effect of fitting a unidimensional IRT model to multidimensional data in content-balanced computer adaptive testing (Doctoral dissertation). Retrieved from ProQuest (UMI No. 3435117).
Way, W. D., Ansley, T. N., & Forsyth, R. A. (1988). The comparative effects of compensatory and noncompensatory two-dimensional data on unidimensional IRT estimates. Applied Psychological Measurement, 12(3), 239-252.
