DETERMINING THE RELEVANCE OF AUTOMATIZED FEEDBACK ON STUDENTS’ E-TEST RESULTS IN SUPPORTING TEACHERS TO PROMOTE COMPONENTS OF SCIENTIFIC LITERACY IN THE CLASSROOM
University of Tartu (ESTONIA)
About this paper:
Appears in: EDULEARN19 Proceedings
Publication year: 2019
Pages: 5528-5534
ISBN: 978-84-09-12031-4
ISSN: 2340-1117
doi: 10.21125/edulearn.2019.1355
Conference name: 11th International Conference on Education and New Learning Technologies
Dates: 1-3 July, 2019
Location: Palma, Spain
Abstract:
Background:
The overall goal of science education is seen as promoting Scientific Literacy (SL) in students. Important components of SL are seen as subject-related scientific knowledge and skills, inquiry skills, and problem-solving and decision-making skills, all of which are emphasized in the national curriculum. Unfortunately, a recent Estonian study showed that students’ problem-solving and decision-making skills are relatively weak, which can be attributed to the fact that science teaching in Estonia emphasizes the development of lower-level cognitive thinking skills (such as recall and understanding). To examine this further, a computer-based e-testing system was developed for science education.
Summative assessment undertaken in a written format is limited in assessing wider SL components (e.g. decision-making, where students’ personal attitudes are involved) and does not provide meaningful feedback that can assist further teaching and learning. Therefore, a model for science e-testing has been developed in Estonia for assessing students’ gains. Through this model it is possible to determine a range of SL components and to give automatic feedback on the performance of a particular student group, providing information about gaps in specific skills and about learning needs that require more attention in the future.

Aim:
The aim of this study is to explore, from a science teacher’s perspective, the relevance of the developed e-testing approach with automatic feedback. The focus is on particular components of SL and on how the feedback can assist teachers in guiding students by implementing relevant teaching strategies.

Method:
The e-testing model is based on a set of contextual tasks which include subject-specific cognitive items (both computer-assessed and open-ended, manually assessed responses) in an interdisciplinary context and identifies the levels of students’ knowledge and skills related to the perceived attributes promoting SL.
Test design. Each SL component is assessed at four levels, according to the complexity of the thinking processes exhibited by students in undertaking the task. The four levels are arbitrarily labelled baseline, average, high, and excellence and indicated as (a, b, c, d). The automatic feedback is designed to describe student gains for each SL component while also indicating the level achieved.
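The four-level mapping described above can be sketched in code. This is a minimal illustration only: the component names, maximum scores, and equal-width score bands below are assumptions made for the sketch, not the actual cut-offs or components used in the Estonian e-test.

```python
# Illustrative sketch: map raw per-component scores to the four levels
# (baseline, average, high, excellence) and generate per-component feedback.
# Thresholds (equal-width quarters) and component names are hypothetical.

LEVELS = ["baseline (a)", "average (b)", "high (c)", "excellence (d)"]

def level_for(score, max_score):
    """Map a raw component score to one of the four levels (equal-width bands)."""
    ratio = score / max_score
    if ratio < 0.25:
        return LEVELS[0]
    elif ratio < 0.5:
        return LEVELS[1]
    elif ratio < 0.75:
        return LEVELS[2]
    return LEVELS[3]

def feedback(component_scores):
    """Produce one feedback line per SL component from {name: (score, max)}."""
    return [
        f"{name}: {level_for(score, max_score)}"
        for name, (score, max_score) in component_scores.items()
    ]

# Example with made-up scores for one student group
scores = {
    "subject knowledge": (18, 20),
    "inquiry skills": (9, 20),
    "problem-solving": (4, 20),
}
for line in feedback(scores):
    print(line)
```

In this sketch the feedback names the achieved level per component, mirroring the idea that the report should show which SL components need more attention.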
Test administration. As a pilot study, the test was administered to 9th grade students (15-16 years old, N=1617) in March 2019. Students took the test on personal computers, with up to 120 minutes permitted.

Interviews.
Following administration of the e-test and the automatic or centrally organised assessment, semi-structured interviews are held with 10 volunteer science teachers taking part in the pilot study. The interview questions concern how teachers interpret the automatically generated feedback and the degree to which they feel the feedback can help them modify their teaching and support their students’ learning. The interviews also determine whether, and in what respects, the teachers suggest changes to the format of the given test feedback.

Data analysis and preliminary results: The interviews are to be conducted and analysed using qualitative content analysis in spring 2019.

Based on the results, the format of the automatized test feedback is to be improved.
Keywords:
Scientific Competences, Feedback, Evaluation.