Articles related to the keyword

Test Bias


1.

Applying IRT Model to Determine Gender and Discipline-based DIF and DDF: A Study of the IAU English Proficiency Test (scholarly article accredited by the Ministry of Science)

Keywords: Differential Distractor Functioning (DDF), Differential Item Functioning (DIF), English Proficiency Test (EPT), Item Response Theory (IRT), Test Bias

The purpose of this study was to examine gender and discipline-based Differential Item Functioning (DIF) and Differential Distractor Functioning (DDF) on the Islamic Azad University English Proficiency Test (IAUEPT). The study evaluated DIF and DDF across genders and disciplines using the Rasch model. To conduct the DIF and DDF analyses, the examinees were divided into two groups: Humanities and Social Sciences (HSS) and Non-Humanities and Social Sciences (N-HSS). The DIF analysis showed that four out of 100 items exhibited gender DIF and two items exhibited discipline DIF. Additionally, the gender DDF analysis identified one item each for Options A, B, and C, and four items for Option D. Similarly, the discipline DDF analysis revealed one item for Option A, three items for Option B, four items for Option C, and three items for Option D. The findings of this study have significant implications for test developers. Identifying potential biases in high-stakes proficiency tests can help ensure fairness and equity for all examinees. Furthermore, identifying gender DIF can shed light on potential gender-based gaps in the curriculum, highlighting areas where male or female learners may be disadvantaged or underrepresented in terms of knowledge or skills.
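For readers unfamiliar with DIF screening, the short sketch below illustrates the general idea on a single dichotomous item. It is not the Rasch-based procedure reported in the abstract; it uses the widely known logistic-regression (likelihood-ratio) approach to uniform DIF instead, and every dataset, group coding, and sample size in it is hypothetical.

```python
# Minimal, illustrative sketch of uniform DIF screening for one item using the
# logistic-regression DIF procedure (not the study's Rasch-based analysis).
# All data below are simulated; column meanings and group coding are hypothetical.

import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

def uniform_dif_test(item_scores, total_scores, group):
    """Likelihood-ratio test for uniform DIF on a single 0/1-scored item.

    item_scores  : 0/1 responses to the studied item
    total_scores : matching criterion (e.g., total or rest score)
    group        : 0 = reference group, 1 = focal group (e.g., gender)
    """
    # Reduced model: item response explained by the ability proxy only
    X_reduced = sm.add_constant(np.column_stack([total_scores]))
    m_reduced = sm.Logit(item_scores, X_reduced).fit(disp=0)

    # Full model adds the group term; a significant improvement flags uniform DIF
    X_full = sm.add_constant(np.column_stack([total_scores, group]))
    m_full = sm.Logit(item_scores, X_full).fit(disp=0)

    lr_stat = 2 * (m_full.llf - m_reduced.llf)   # chi-square statistic, 1 df
    p_value = chi2.sf(lr_stat, df=1)
    return lr_stat, p_value

# Hypothetical usage: 500 simulated examinees, one item with built-in group DIF
rng = np.random.default_rng(0)
ability = rng.normal(size=500)
group = rng.integers(0, 2, size=500)              # e.g., 0 = male, 1 = female
p_correct = 1 / (1 + np.exp(-(ability - 0.2 * group)))
item = rng.binomial(1, p_correct)
total = ability - ability.min()                   # crude stand-in for a total score
print(uniform_dif_test(item, total, group))
```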
2.

Investigating the Reliability of the Reading Module of the Iranian Ministry of Science, Research, and Technology's English Proficiency Test (scholarly article accredited by the Ministry of Science)

Keywords: MSRT Reading, Test Reliability, Topic Effect, Item Type, Test Bias

As a national high-stakes test of English proficiency, the MSRT requires further scrutiny of its reliability. Thus, the present study aimed to investigate different sources of variation that may affect MSRT test-takers' reading comprehension. Accordingly, several factors, including reading topic, item type, and the participants' general proficiency, were examined based on the scores obtained from 60 prospective MSRT candidates. After a sample of the reading subtest taken from a recent version of the MSRT was administered, the collected data were dichotomously scored and then analyzed in terms of internal consistency, inter-correlations, and causal patterns. The results showed an overall reliability of 0.86 for the reading module, while moderate interrelationships were obtained among the passages (r = 0.47) and the item types (r = 0.44). Furthermore, the mixed ANOVA results demonstrated that topic and item type significantly affected reading performance, whereas general proficiency did not play a conspicuous role in differentiating the participants' reading achievement. Both theoretically and operationally, the results reported in this study stress the need to reconsider the influence of test-method facets such as topic and item type in the MSRT reading subtest in order to improve the test's unidimensionality and fairness.
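As an illustration of the internal-consistency figure reported above (0.86), the sketch below computes a coefficient-alpha style reliability index for a dichotomously scored item matrix. It assumes the reported coefficient is Cronbach's alpha (equivalent to KR-20 for 0/1 items); the response matrix, its dimensions, and the simulation parameters are hypothetical, not the study's data.

```python
# Minimal sketch: Cronbach's alpha for a dichotomously scored reading subtest.
# The score matrix below is simulated; the real study used 60 examinees' responses.

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = examinees, columns = items (0/1)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of examinees' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical usage: 60 simulated examinees answering 40 items
rng = np.random.default_rng(1)
ability = rng.normal(size=(60, 1))
difficulty = rng.normal(size=(1, 40))
p_correct = 1 / (1 + np.exp(difficulty - ability))   # simple IRT-style response model
responses = (rng.random((60, 40)) < p_correct).astype(int)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```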