VALIDATION OF THE INTERNAL STRUCTURE AND LATENT SUB-CLASSES OF THE INDONESIAN VERSION OF THE ACADEMIC SELF-REGULATED LEARNING SCALE USING THE RASCH MIXTURE MODEL

Sandra Arviyenna, Ni Putu Rahayu Eka Putri, Ananta Yudiarso

Abstract


This study aimed to examine the validity of the Academic Self-Regulated Learning Scale (A-SRL-S; Magno, 2010) using the Rasch model and the Rasch mixture model. A survey method with non-random sampling was used, involving 401 respondents. The results showed that the unidimensionality assumption was met on all A-SRL sub-scales after dropping items MS4, MS5, SA32, SA37, O53, and O54, leaving 48 items. Item reliability was excellent on all sub-scales, whereas person reliability (0.46-0.79) and the person separation index (0.92-1.94) were below acceptable levels. One item, LR46, misfit the Rasch model. All respondents were able to distinguish the rating categories from STS (strongly disagree) to SS (strongly agree). In the DIF analysis, items LR44 and LR46 showed gender bias, with Welch probabilities of 0.0065 (LR44) and 0.0037 (LR46). The Wright maps of all sub-scales showed that item difficulty did not adequately reach high-ability respondents. In the Rasch mixture model analysis, the learning responsibility sub-scale contained a latent structure of two sub-classes. These findings imply the need to revise misfit and multidimensional items, to renorm the scale based on the latent classes, and to replicate the study with more heterogeneous participants in order to improve the scale's sensitivity.
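The number of latent classes retained in a Rasch mixture analysis is conventionally decided by comparing information criteria such as AIC (Bozdogan, 1987) and BIC (Schwarz, 1978) across candidate models, keeping the solution with the lowest value. A minimal Python sketch of that comparison follows; the log-likelihoods and parameter counts are illustrative placeholders, not the values estimated in this study.

```python
import math

def bic(log_lik: float, n_params: int, n_obs: int) -> float:
    """Bayesian Information Criterion (Schwarz, 1978): lower is better."""
    return n_params * math.log(n_obs) - 2.0 * log_lik

# Hypothetical fits for a 401-respondent sample: an ordinary Rasch model
# (one class) versus mixture solutions with two and three latent classes.
n_obs = 401
candidates = {
    1: {"log_lik": -5210.0, "n_params": 12},
    2: {"log_lik": -5150.0, "n_params": 25},
    3: {"log_lik": -5140.0, "n_params": 38},
}

scores = {k: bic(v["log_lik"], v["n_params"], n_obs) for k, v in candidates.items()}
best = min(scores, key=scores.get)
for k in sorted(scores):
    print(f"{k} class(es): BIC = {scores[k]:.1f}")
print(f"BIC favours the {best}-class solution")
```

With these placeholder values the two-class model wins: the three-class model improves the log-likelihood only slightly, so the BIC penalty of 13 extra parameters outweighs the gain. This mirrors the logic behind retaining two sub-classes on the learning responsibility sub-scale.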

Keywords


A-SRL-S, Rasch analysis, Rasch mixture analysis

References


Andiani, S. (2017). Hubungan prestasi akademik dan strategi regulasi diri dalam belajar pada mahasiswa tunarungu. Calyptra: Jurnal Ilmiah Mahasiswa Universitas Surabaya, 6(2), 1–10.

Andrich, D. (1978). A rating formulation for ordered response categories. Psychometrika, 43(4), 561–573. https://doi.org/10.1007/BF02293814

Boone, W. J., Staver, J. R., & Yale, M. S. (2014). Rasch analysis in the human sciences. Springer. https://doi.org/10.1007/978-94-007-6857-4

Bond, T. G., & Fox, C. M. (2015). Applying the Rasch model: Fundamental measurement in the human sciences (3rd ed.). Routledge.

Bozdogan, H. (1987). Model selection and Akaike's information criterion (AIC): The general theory and its analytical extensions. Psychometrika, 52(3), 345–370. https://doi.org/10.1007/BF02294361

Burnham, K. P., & Anderson, D. R. (2004). Multimodel inference: Understanding AIC and BIC in model selection. Sociological Methods & Research, 33(2), 261–304. https://doi.org/10.1177/0049124104268644

Camilli, G., & Shepard, L. A. (1994). Methods for identifying biased test items. Sage Publications.

Christensen, K. B., Kreiner, S., & Mesbah, M. (Eds.). (2012). Rasch models in health. Wiley. https://doi.org/10.1002/9781118574454

Clauser, B. E., & Mazor, K. M. (1998). Using statistical procedures to identify differentially functioning test items. Educational Measurement: Issues and Practice, 17(1), 31–44.

Duncan, T. G., & McKeachie, W. J. (2005). The making of the Motivated Strategies for Learning Questionnaire. Educational Psychologist, 40(2), 117–128. https://doi.org/10.1207/s15326985ep4002_6

Erikson, E. H. (1968). Identity: Youth and crisis. W. W. Norton & Company.

Etikan, I., Musa, S. A., & Alkassim, R. S. (2016). Comparison of convenience sampling and purposive sampling. American Journal of Theoretical and Applied Statistics, 5(1), 1–4. https://doi.org/10.11648/j.ajtas.20160501.11

Fisher, W. P., Jr. (n.d.). Rating scale instrument quality criteria. Retrieved April 11, 2025, from https://www.winsteps.com/facetman/reliability.htm

Fraenkel, J. R., Wallen, N. E., & Hyun, H. H. (2012). How to design and evaluate research in education (8th ed). McGraw-Hill Humanities/Social Sciences/Languages.

Gliem, J. A., & Gliem, R. R. (2003). Calculating, interpreting, and reporting Cronbach's alpha reliability coefficient for Likert-type scales.

Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2019). Multivariate data analysis (8th ed.). Cengage.

Havighurst, R. J. (1972). Developmental tasks and education (3rd ed.). Longman.

Holland, P. W., & Wainer, H. (Eds.). (1993). Differential item functioning. Lawrence Erlbaum Associates.

Linacre, J. M. (1994). Sample size and item calibration stability. Rasch Measurement Transactions, 7(4), 328.

Linacre, J. M. (2002). What do infit and outfit, mean-square and standardized mean? Rasch Measurement Transactions, 16(2), 878.

Linacre, J. M. (2007). A User’s Guide to WINSTEPS: Rasch Model Computer Program. Winsteps.com.

Linacre, J. M. (2007). Sample size and item calibration stability. Rasch Measurement Transactions, 21(1), 1095.

Linacre, J. M. (2009). A User’s Guide to WINSTEPS. Winsteps.com.

Linacre, J. M. (2012). Reliability and separation of measures. https://www.winsteps.com/winman/reliability.htm

Linacre, J. M. (2023). Reliability and separation of measures. https://www.winsteps.com/winman/reliability.htm

Magno, C. (2010). Assessing academic self-regulated learning among Filipino college students: The factor structure and item fit. The International Journal of Educational and Psychological Assessment, 5.

Magno, C. (2011). The predictive validity of the academic self-regulated learning scale. The International Journal of Educational and Psychological Assessment, 9(1), 45–58.

Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). McGraw-Hill.

Nylund, K. L., Asparouhov, T., & Muthén, B. O. (2007). Deciding on the number of classes in latent class analysis and growth mixture modeling: A Monte Carlo simulation study. Structural Equation Modeling: A Multidisciplinary Journal, 14(4), 535–569. https://doi.org/10.1080/10705510701575396

Padilla, J.-L., & Benítez, I. (2014). Validity evidence based on response processes. Psicothema, 1(26), 136–144. https://doi.org/10.7334/psicothema2013.259

Penfield, R. D., & Camilli, G. (2007). Differential item functioning and item bias. In C. R. Rao & S. Sinharay (Eds.), Handbook of Statistics: Vol. 26. Psychometrics (pp. 125–167). Elsevier.

Raosoft. (2024). Sample size calculator. http://www.raosoft.com/samplesize.html.

Rost, J. (1990). Rasch models in latent classes: An integration of two approaches to item analysis. Applied Psychological Measurement, 14(3), 271–282. https://doi.org/10.1177/014662169001400305

Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6(2), 461–464. https://doi.org/10.1214/aos/1176344136

Sireci, S., & Faulkner-Bond, M. (2014). Validity evidence based on test content. Psicothema, 1(26), 100–107. https://doi.org/10.7334/psicothema2013.256

Smith, E. V. (2002). Detecting and evaluating the impact of multidimensionality using item fit statistics and principal component analysis of residuals. Journal of Applied Measurement, 3(2), 205–231.

Streiner, D. L. (2003). Starting at the beginning: An introduction to coefficient alpha and internal consistency. Journal of Personality Assessment, 80(1), 99–103. https://doi.org/10.1207/S15327752JPA8001_18

Sumintono, B. (2016). Aplikasi Model Rasch dalam Penelitian Pendidikan dan Psikologi. Penerbit Unika Soegijapranata.

Sumintono, B., & Widhiarso, W. (2013). Aplikasi model rasch: Untuk penelitian ilmu-ilmu sosial. Trim Komunikata Publishing House.

The Jamovi Project. (2024). Jamovi (Version 2.6) [Computer software]. https://www.jamovi.org

Van De Vijver, F. J. R., & Leung, K. (2021). Methods and Data Analysis for Cross-Cultural Research (V. H. Fetvadjiev, J. R. J. Fontaine, & J. He, Eds.; 2nd ed.). Cambridge University Press. https://doi.org/10.1017/9781107415188

Van De Vijver, F. J. R., & Poortinga, Y. H. (1997). Towards an integrated analysis of bias in cross-cultural assessment. European Journal of Psychological Assessment, 13(1), 29–37. https://doi.org/10.1027/1015-5759.13.1.29

Wright, B. D., & Linacre, J. M. (1989). Rasch measurement: Transactions of the Rasch Measurement SIG. MESA Press.

Wright, B. D., & Masters, G. N. (1982). Rating scale analysis. MESA Press.

Wright, B. D., & Stone, M. H. (1979). Best test design. MESA Press.

Zimmerman, B. J. (2002). Becoming a Self-Regulated Learner: An Overview. Theory Into Practice, 41(2), 64–70. https://doi.org/10.1207/s15430421tip4102_2

Zimmerman, B. J., & Martinez-Pons, M. (1986). Development of a structured interview for assessing student use of self-regulated learning strategies. American Educational Research Journal, 23(4), 614-628.

Zumbo, B. D. (1999). A Handbook on the Theory and Methods of Differential Item Functioning (DIF): Logistic Regression Modeling as a Unitary Framework for Binary and Likert-type (Ordinal) Item Scores. Ottawa, Canada: Directorate of Human Resources Research and Evaluation.

Zumbo, B. D. (2007). Three generations of DIF analyses: Considering where it has been, where it is now, and where it is going. Language Assessment Quarterly, 4(2), 223–233.




DOI: https://doi.org/10.36269/psyche.v7i2.3049



This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
