TRANSITIONING TO AN ALTERNATIVE ASSESSMENT: COMPUTER-BASED TESTING AND KEY FACTORS RELATED TO TESTING MODE

Hooshang Khoshsima, Seyyed Morteza Hashemi Toroujeni

Abstract


Computer-Based Testing (CBT) is becoming widespread due to its many recognized merits, including efficient item development, flexible test delivery, self-selection options for test takers, immediate feedback, results management, standard setting, and so on. The transition to CBT has raised concerns over the effect of administration mode on test takers’ scores compared with Paper-and-Pencil-Based Testing (PPT). In this comparability study, we compared the effects of the two media (CBT vs. PPT) by investigating the score comparability of a General English test taken by Iranian graduate students at Chabahar Maritime University, to determine whether the scores obtained from the two testing modes differed. To this end, two versions of the same test were administered to 100 intermediate-level test takers, organized in a single testing group, on two separate testing occasions. A paired-sample t-test comparing the means revealed an advantage of CBT over PPT, with a difference of .01 at p < .05. ANOVA results indicated that two external moderator factors, prior computer familiarity and attitudes toward computers, had no significant effect on test takers’ CBT scores. Furthermore, the largest percentage of test takers preferred the test features presented in the computerized version of the test.
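The statistical procedure described above (a paired-sample t-test on mode means, followed by a one-way ANOVA on an external moderator) can be sketched as follows. This is a minimal illustration with synthetic data, not the study’s actual scores; the group split and score distributions are assumptions for demonstration only:

```python
# Illustrative sketch (synthetic data, NOT the study's scores): a paired-sample
# t-test comparing CBT and PPT scores from the same test takers, and a one-way
# ANOVA testing whether a moderator grouping (e.g., computer familiarity)
# affects CBT scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical scores for 100 test takers who took both modes.
ppt_scores = rng.normal(loc=14.0, scale=2.0, size=100)
cbt_scores = ppt_scores + rng.normal(loc=0.5, scale=1.0, size=100)

# Paired-sample t-test: do the two mode means differ significantly?
t_stat, p_paired = stats.ttest_rel(cbt_scores, ppt_scores)

# One-way ANOVA: do (hypothetical) familiarity groups differ on CBT scores?
low, medium, high = np.array_split(cbt_scores, 3)
f_stat, p_anova = stats.f_oneway(low, medium, high)

print(f"paired t = {t_stat:.2f}, p = {p_paired:.4f}")
print(f"ANOVA  F = {f_stat:.2f}, p = {p_anova:.4f}")
```

Because the same examinees take both versions, the paired test is the appropriate choice here: it analyzes the within-subject score differences rather than treating the two modes as independent samples.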

 


Keywords


computer-based testing, paper-and-pencil-based testing, computer familiarity, computer attitude, test preference





DOI: http://dx.doi.org/10.46827/ejel.v0i0.499



Copyright © 2015 - 2023. European Journal of English Language Teaching (ISSN 2501-7136) is a registered trademark of Open Access Publishing Group. All rights reserved.

This journal is a serial publication uniquely identified by an International Standard Serial Number (ISSN) certificate issued by the Romanian National Library (Biblioteca Nationala a Romaniei). All research works are uniquely identified by a CrossRef DOI digital object identifier supplied by indexing and repository platforms.

All research works published in this journal meet the Open Access Publishing requirements and can be freely accessed, shared, modified, distributed, and used for educational, commercial, and non-commercial purposes under a Creative Commons Attribution 4.0 International License (CC BY 4.0).