Australian Journal of Educational Technology 2003, 19(1), 87-99.
For almost a decade we have been providing a large group of first year, undergraduate biology students with both offline (paper based) and online assessment resources to support them in their learning. This paper reports on an investigation of the students' use of these assessment resources, as well as their perceptions of the usefulness of these resources to their learning. The research plan enabled us to investigate any correlations between use or non-use of the assessment resources and final performance in the course. The results show that while the majority of students use, and find useful, both offline and online assessment resources, use has no differential impact on final learning outcomes.
Some of these issues have been addressed by the development of online assessment resources, particularly when dealing with large student numbers. Such resources are available for use by students at any time and in any place. In this way large groups of students can be provided with the feedback they are now requesting, and this may help them in their final assessment tasks. Online assessment resources have a number of advantages over offline, pen and paper tasks: they can be easily marked, provide instant feedback, and can be taken repeatedly by students in order to assess and improve performance. Online tests can also be taken unsupervised in students' own time. Clariana (1993) has shown that online assessments allow students to tailor their use to their own learning style, while Zakrzewski & Bull (1999) emphasise the advantages of online assessment in terms of fast feedback to large numbers of students with no staff involvement. They also indicate that the use of online formative assessment prior to summative tests reduces student anxiety. The contribution of formative computer based assessment to improvements in student learning outcomes is documented by Buchanan (2000), who found that undergraduate psychology students who used an online formative assessment package providing instant feedback performed better in the end of course summative assessment than those who did not use the package.
In First Year Biology at the University of Sydney, the cohort of students is large and we are managing the learning environment in a culture of reducing resources. This has led us to develop a mix of online and offline (paper based) assessment activities aimed at enhancing student learning. These activities include both formative and summative items, some with the provision of extensive feedback. The online assessment materials, and students' perceptions of their usefulness to learning, have previously been described in Peat & Franklin (2002). The gateway to our online resources is the first year biology Virtual Learning Environment, which allows easy access for students to all available learning resources (http://FYBio.bio.usyd.edu.au/VLE/L1/). This site provides access to both summative and formative online assessment resources, as well as other learning materials and synchronous and asynchronous communications. This development has previously been described (Franklin & Peat, 2001).
We are interested in how our students have used the learning opportunities we have provided and whether these have helped them in their learning. A previous study indicated that 15-20% of our first year biology students choose not to use online resources (Peat, Franklin, Lewis & Sims, 2002). We also have unpublished data indicating that some students are not using all the learning resources, and that not using these resources may influence final performance. In the current study we investigate what contribution formative assessment activities make towards final grades, so that we can advise our students on strategies we believe could enhance their performance.
This paper will report on an investigation of the relationship between the use and perceived usefulness of a variety of online and offline assessment resources to the final performance of students in their first semester first year biology course. The assessment resources are described below in Table 1.
Table 1: The summative and formative assessment resources and how they are taken

| Type of assessment | Online | Offline (paper based) | How taken |
|---|---|---|---|
| Summative | -- | Individual laboratory report; group poster and oral presentation | Supervised |
| Summative | Weekly quizzes | -- | Non-supervised |
| Formative | -- | Mid course practice exam | Supervised |
| Formative | Self assessment modules; marking of mid course practice exam | Weekly self test quizzes | Non-supervised |
Weekly summative quizzes consist of eight multiple choice questions randomly selected from a large bank of questions. Student performance is available as soon as the quiz is completed, the mark is recorded on a database, and cohort histograms are available online. A commercial provider, WebMCQ Pty Ltd, is used to deliver and mark the online quizzes.
Individual laboratory report - students carry out an experiment in class which is then written up as a scientific report as part of their summative assessment tasks. Students are given the opportunity to have a member of staff critique the report prior to preparing the final report. Both content and literacy skills are assessed.
Group work on poster and oral presentation - students work in peer groups of five to choose a topic, do the research required and prepare and present a poster. The poster is presented in class time to the rest of the class and the staff. Students assess the poster presentation by giving formative feedback while the staff member gives summative assessment on the poster and the oral performance.
Self assessment modules are designed to draw together related parts of a course to help students make connections between topics in biology and to promote a deeper learning strategy, whilst providing an enjoyable feedback and reinforcement session. They allow students to identify their level of understanding and consolidate learning whilst working at their own pace in their own time. The most recent discussion about the development and evaluation of these modules is in Peat (2000), Franklin & Peat (2001) and Peat & Franklin (2002).
The mid course practice examination is paper based, taken in class time and administered under examination conditions, in order to give the students as close to the "real" examination experience as possible. The students mark their scripts in their own time from an online version. To gain feedback on their answers, students use the online version interactively, entering their answers and having the program mark their performance and give feedback where appropriate. The feedback is aimed at helping students identify their understanding of course concepts, which in turn might indicate the need for some remedial action. This also helps reduce stress about end of course examinations and, hopefully, allows students to achieve at a high level in the final assessment. Students perceived to be "at risk" are encouraged to use our online revision materials, designed to enhance student understanding of major topic areas.
Weekly paper based self test quizzes are provided at the end of each week's practical notes. Each short test consists of multiple choice and short answer questions and is designed to help students self appraise their performance. The questions are similar to those in the formal examinations and weekly summative quizzes. Answers are provided both online and offline (in a student resources room).
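The mechanics of the weekly summative quizzes described above (eight questions drawn at random from a large bank, marked instantly, with marks recorded and cohort histograms published) can be sketched in a few lines. This is an illustrative sketch only: the question bank contents and function names are hypothetical, and the paper does not describe WebMCQ's actual implementation.

```python
import random
from collections import Counter

# Hypothetical question bank mapping each question to its correct option;
# in the course this role is played by a large bank delivered by WebMCQ.
QUESTION_BANK = {f"Q{i}": f"option_{i % 4}" for i in range(1, 101)}

def draw_quiz(bank, n_questions=8):
    """Randomly select the week's quiz questions from the bank."""
    return random.sample(sorted(bank), n_questions)

def mark_quiz(bank, responses):
    """Instant marking: one point per correct answer."""
    return sum(bank[q] == answer for q, answer in responses.items())

quiz = draw_quiz(QUESTION_BANK)
answers = {q: QUESTION_BANK[q] for q in quiz}   # a perfect-score attempt
print(mark_quiz(QUESTION_BANK, answers))        # -> 8

# A cohort histogram of recorded marks (illustrative marks out of 8).
recorded_marks = [5, 7, 8, 6, 7, 7, 4]
print(sorted(Counter(recorded_marks).items()))
```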
The students (n=1300) enrolled in the first year biology courses are randomly timetabled into 21 laboratory sessions by the university timetabling computer. In 2002, quantitative and qualitative evaluation of student perceptions of their experiences of using the assessment resources was carried out during randomly selected laboratory sessions before the end of the semester. Participation was voluntary. The survey collected student demographics, including university entry score and prior experience of biology at secondary school. Perceptions of the usefulness of resources were investigated using a four point scale, with students indicating for each resource whether they had not used it, or had found it not useful, useful or extremely useful. Open ended questions asked students why they had not used a resource (if relevant) and in what way the resource had helped them in their learning (if they had used it). These open ended responses were thematically analysed and categorised (Denzin & Lincoln, 1994). Student success was measured by the final mark, a compilation of continuous assessment and a final examination.
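To make the coding of the four point scale concrete, the sketch below tabulates some made-up responses in the style later used in Table 2: non-use as a percentage of all respondents, and the three usefulness ratings as percentages of users only. The column names and response codes are assumptions for illustration, not the study's actual coding scheme.

```python
import pandas as pd

# Made-up survey responses, one row per student, coded on the four point
# scale described above ("did not use" plus three usefulness ratings).
responses = pd.DataFrame({
    "weekly_quiz": ["useful", "extremely useful", "did not use", "useful"],
    "self_assessment_modules": ["did not use", "useful", "useful", "not useful"],
})

def summarise(col):
    """Non-use as % of all respondents; usefulness as % of users only."""
    used = col[col != "did not use"]
    return pd.Series({
        "did not use (%)": 100 * (col == "did not use").mean(),
        "not useful (%)": 100 * (used == "not useful").mean(),
        "useful (%)": 100 * (used == "useful").mean(),
        "extremely useful (%)": 100 * (used == "extremely useful").mean(),
    })

print(responses.apply(summarise).round(1))
```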
The implementation of the survey complied with the University of Sydney's Ethics Committee Guidelines for research with humans, and this enabled us to seek permission to correlate performance with usage/non-usage of materials and with perceptions of usefulness.
Students were asked if they had used the summative and formative assessment materials and how useful they had found them in supporting their learning in the course. Table 2 shows the level of use and perceptions of usefulness for the various assessment materials.
Participation rates in all compulsory assessment tasks were high (97-99%). Student participation in the formative assessment opportunities was much lower, with approximately 20% of students not using the SAMs or the weekly self test quizzes. While 97% of the students took the mid course practice exam (a supervised, timetabled class activity), at the time of the survey only 28% had marked it in their own time using the online or offline materials. This non-use of both offline and online resources mirrors data previously obtained on the use of online resources by Oliver & Omari (2001) and by first year biology students (Peat et al., 2002), and on our first year students' use of offline materials (unpublished). Generally speaking, most of the students who had attempted or completed the various assessment resources found them to be at least useful, if not extremely useful. However, students responded less positively to the summative resources (weekly quiz, report and poster presentation) than to the formative resources (mid course exam, self assessment modules and weekly self test quiz).
Table 2: Student use and perceived usefulness of the assessment resources

| Resource | Did not use/did not do (%) | Not useful (%)† | Useful (%)† | Extremely useful (%)† |
|---|---|---|---|---|
| *Summative* | | | | |
| Weekly quiz | 2 | 10 | 64 | 26 |
| Water quality report | 3 | 23 | 68 | 9 |
| Group poster/oral presentation | 1 | 25 | 63 | 12 |
| *Formative* | | | | |
| Taking mid course practice exam | 3 | 4 | 52 | 44 |
| Marking mid course practice exam | 72 | n/a | n/a | n/a |
| Self assessment modules | 21 | 5 | 63 | 32 |
| Weekly self test quiz | 18 | 8 | 75 | 17 |

† Percentage of responses from students who completed/attempted the materials.
Students were asked to indicate which of three categories best represented their use of the formative assessment resources and this is shown in Figure 1. Students mainly indicated that they used these formative assessment resources for revision and consolidating knowledge rather than learning new knowledge.

Figure 1: Perceived usefulness of formative assessment
resources for supporting learning and understanding
Open ended survey questions asked students who had used the formative assessment resources how these resources had helped them in their learning. Each set of responses was independently thematically categorised, and the results are reported in Table 3.
The data in Table 3 support the information presented in Figure 1, indicating that students found the self assessment modules and the self test quizzes most useful for revision and for consolidating knowledge/enhancing understanding.
In addition, the open ended responses indicated that the self assessment modules and the self test quizzes were useful in raising students' awareness of their understanding, or lack of understanding, of the course content. It is also interesting to note from Figure 1 and Table 3 that students report that use of the self assessment modules is also associated with gaining new knowledge. This was not an overt design feature of these resources (Peat, 2000), but it has previously been reported by students (Peat & Franklin, 2002). Figure 1 indicates that students taking the mid course practice exam (97%) perceived it as most useful for revision and consolidating knowledge, whereas those students who had actually marked the practice exam (27%) found marking it most useful for indicating their understanding, or lack of understanding, of the course and for giving them an idea of the final exam structure and expectations (Table 3). Thus it seems that both taking and then marking the mid course practice exam assist multiple aspects of student learning. We need to put in place a mechanism that encourages students to mark the practice exam, to maximise its benefit.
Table 3: How the formative assessment resources helped students in their learning (percentage of responses)

| Category | Mark practice exam %* | Use self assessment modules % | Use weekly self test quizzes % |
|---|---|---|---|
| Awareness of own understanding | 54 | 18 | 17 |
| Consolidate/ enhance understanding | 9 | 28 | 41 |
| Revision | 6 | 30 | 32 |
| Exam structure/ expectations | 27 | 6 | 4 |
| Measure of ability | 3 | -- | -- |
| Gain new information | -- | 10 | -- |
| Find out what they need to learn | -- | -- | 6 |
| Novelty of using different resource | -- | 9 | -- |
| Total | 100 | 100 | 100 |
* Whilst nearly all the students did the practice exam (it was supervised in a lab class), only 27% had marked it at the time of the survey.
Students who did not use a particular formative assessment resource were asked to explain their reasons for non-usage; Table 4 summarises their responses. Students indicated that lack of time, lack of motivation or unawareness of the resources were the primary reasons for non-use; however, some indicated that they would use the resources later for revision.

Table 4: Reasons given by students for not using the formative assessment resources

| Category | Self assessment modules | Weekly self test quizzes | Marking practice exam |
|---|---|---|---|
| No time (yet) | 43% | 37% | 34% |
| No motivation/ lazy | 21% | 38% | 11% |
| Do later for revision | 15% | 12% | 18% |
| No Internet/ computer access | 9% | -- | 22% |
| Prefer other resources | 7% | -- | -- |
| Did not know it existed | 5% | 13% | -- |
| Did not get it back | -- | -- | 15% |
Students' responses to the more general question "How did the formative assessment resources help in your learning?" are captured in the following quotes:

They test what you do and don't know, but in a pressure-free environment.

I feel that because it's non-compulsory we have more freedom of choosing when to do them. This is good because everyone studies at a different pace.

They made it fun being non-compulsory.
Pearson's correlation was used to investigate links between student use and perceptions of usefulness of the various offline and online assessment resources provided in the course and final performance (Table 5).
Table 5: Pearson's correlations between final mark, student characteristics and perceived usefulness of the assessment resources. Each cell shows r with the two-tailed p value in parentheses.

| | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|
| 1. Final mark (n = 338) | .006 (.917) | .383** (.000) | .106 (.052) | -.002 (.966) | -.001 (.982) | .104 (.057) | -.025 (.652) | -.025 (.651) | -.058 (.285) |
| 2. Age range | | -.020 (.738) | .021 (.685) | .053 (.314) | .045 (.391) | .017 (.753) | .114* (.030) | .068 (.196) | -.046 (.380) |
| 3. University entry score | | | .012 (.841) | -.057 (.330) | -.029 (.625) | .077 (.191) | -.033 (.572) | -.025 (.664) | .042 (.472) |
| 4. Sum-quiz useful | | | | .080 (.128) | .195** (.000) | .113* (.032) | .058 (.270) | .180** (.001) | .010 (.842) |
| 5. Sum-water report useful | | | | | .221** (.000) | .077 (.145) | .079 (.131) | .127* (.015) | .020 (.710) |
| 6. Sum-group poster/oral useful | | | | | | .158** (.003) | .067 (.206) | .102 (.053) | .066 (.209) |
| 7. Form-practice exam useful | | | | | | | .268** (.000) | .202** (.000) | .166** (.002) |
| 8. Form-SAMs useful | | | | | | | | .508** (.000) | .177** (.001) |
| 9. Form-self test quiz useful | | | | | | | | | .135** (.010) |
| 10. Marked practice exam | | | | | | | | | |

** Correlation is significant at the 0.01 level; * correlation is significant at the 0.05 level (2-tailed); n = 358-366. Sum = summative; Form = formative. Diagonal entries (r = 1.0000) are omitted.
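For readers wishing to reproduce this style of analysis, a Pearson correlation with a two-tailed p value, as reported in Table 5, can be computed with scipy. The data below are simulated for illustration; they are not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
final_mark = rng.normal(65, 12, size=340)                    # simulated final marks
entry_score = 0.4 * final_mark + rng.normal(0, 8, size=340)  # a correlated predictor

r, p = pearsonr(final_mark, entry_score)                     # p is two-tailed by default
print(f"r = {r:.3f}, p = {p:.3f}")
```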
There is a significant positive correlation between students' final mark in the course and their university entry score, which is normally expected for a first year science cohort. There is no significant correlation between final marks and student use of the formative assessment resources. There is, however, a significant correlation among the various formative assessment resources: students who found any one of them useful were likely to have found all of them useful.
The relationship between end of course student performance and use of formative assessment resources was investigated further. Student performance was clustered into three categories: students with a credit or above for the final mark (65% and over, i.e. excelling students), students with a pass (50-64%), and failing students (less than 50%). Table 6 shows the percentage of students within each performance category who used the formative assessment resources. Interestingly, a greater proportion of the students who failed the course had taken advantage of the formative assessment resources than of the students who passed.
Table 6: Percentage of students in each performance category who used the formative assessment resources

| Formative assessment resource | Failing students (n = 33) | Pass grade students (n = 152) | Excelling students (n = 153) |
|---|---|---|---|
| Taking mid course practice exam | 100% | 97% | 97% |
| Marking mid course practice exam | 33% | 26% | 26% |
| Self assessment modules | 85% | 80% | 76% |
| Weekly self test quiz | 88% | 83% | 78% |
Table 7: Mean final mark and range (%) by use of formative assessment resources within each performance category

| Formative assessment resource | Usage | Failing students | Pass grade students | Excelling students |
|---|---|---|---|---|
| Marking mid course practice exam | Marked | 43 (23-49) | 58 (50-64) | 74 (65-91) |
| Marking mid course practice exam | Not marked | 46 (37-49) | 58 (50-64) | 74 (65-91) |
| Self assessment modules | Used | 45 (23-49) | 58 (50-64) | 74 (65-91) |
| Self assessment modules | Not used | 44 (37-49) | 58 (50-64) | 73 (65-87) |
| Weekly self test quiz | Used | 45 (23-49) | 59 (51-64) | 75 (66-91) |
| Weekly self test quiz | Not used | 45 (37-49) | 59 (52-64) | 75 (66-91) |
Within each student performance category, the use and non-use of formative assessment resources was compared with the mean mark for that category. Table 7 shows that there is no apparent difference in final performance, within any of the student categories, between students who did and did not use the various formative assessment resources. Thus using the formative assessment resources, and finding them useful, is not a predictor of learning outcomes for any of the three performance categories.
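The comparisons in Tables 6 and 7 amount to binning final marks into the three performance categories and aggregating within bins, split by resource use. The following sketch, on simulated data, uses the paper's category thresholds; everything else (usage rates, column names) is invented for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "final_mark": rng.integers(23, 92, size=338),   # simulated integer marks
    "used_sams": rng.random(338) < 0.8,             # roughly 80% usage, cf. Table 6
})

# The paper's performance categories: fail (<50), pass (50-64), credit+ (>=65).
df["category"] = pd.cut(df["final_mark"], bins=[0, 49, 64, 100],
                        labels=["failing", "pass grade", "excelling"])

# Mean mark and range within each category, split by resource use (cf. Table 7).
summary = (df.groupby(["category", "used_sams"], observed=True)["final_mark"]
             .agg(["mean", "min", "max"]).round(1))
print(summary)
```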
It appears that higher achieving students used the formative assessment opportunities less frequently than lower achieving students, possibly because they do not need the extra support. It has been shown (Entwistle, Hounsell, Macaulay, Situnayake & Tait, 1989) that an important contributing cause of failure among first year students is an absence of feedback on progress, and this is also cited as a reason to discontinue (McInnis et al., 1995). As we are currently providing a variety of assessment resources with what we believe to be relevant feedback, we are concerned that these resources are not having the desired impact on student learning. The worrying aspect of our results is that, although the poorer students are trying hard, and more of them (compared with the more successful students) are using the formative assessment resources provided, we do not appear to be helping them. This contrasts with the reports of Buchanan (2000) and Zakrzewski & Bull (1999), who showed that the use of formative tests before summative examinations increases student grades. We, as teachers, need to demonstrate to our students how to use our resources to their advantage. Perhaps to do this we need to review our feedback and ask ourselves whether it is good enough.
We need to stop and reflect on why students believe the online and offline assessment materials help them in their learning, even though our current findings seem not to support this. Is it possible that our method of data analysis was not sufficiently sensitive? Perhaps some re-analysis of the data using different approaches is indicated. In particular, we intend to correlate the examination mark, rather than the overall course mark (a compilation of the final examination and a variety of continuous assessment items), with use of resources.
Is it possible that our students (the majority in transition from secondary school to university) are not able to fully utilise our materials, perhaps because they have not had sufficient prior experience of such assessment items? Do they need more support in learning how to learn? We need to determine whether transition is an issue before making decisions about how to deal with it. We therefore intend, in 2003, to repeat our survey in both first and second semester, and to use student interviews as part of our methodology, in order to probe deeper into this issue.
Has student learning been improved by the use of online and offline formative assessment opportunities? We are not sure we have any answers yet.
Buchanan, T. (2000). The efficacy of a World Wide Web mediated formative assessment. Journal of Computer Assisted Learning, 16, 193-200.
Clariana, R. B. (1993). A review of multiple-try feedback in traditional and computer based instruction. Journal of Computer Based Instruction, 20(3), 67-74.
Denzin, N. K. & Lincoln, Y. S. (Eds) (1994). Handbook of Qualitative Research. Thousand Oaks, CA: Sage Publications.
De Vita, G. (2001). Learning styles, culture and inclusive instruction in the multicultural classroom: A business and management perspective. Innovation in Education and Training International, 38(2), 165-174.
Entwistle, N. J., Hounsell, C. J., Macaulay, C., Situnayake, G. & Tait, H. (1989). The Performance of Electrical Engineers in Scottish Education. Report to the Scottish Education Department, Centre for Research on Learning and Instruction Department of Education, University of Edinburgh.
Fowell, S. L., Southgate, L. J. & Bligh, J. G. (1999). Evaluating assessment: The missing link? Medical Education, 33(4), 276-281.
Franklin, S. & Peat, M. (2001). Managing change: The use of mixed delivery modes to increase learning opportunities. Australian Journal of Educational Technology, 17(1), 37-49. http://www.ascilite.org.au/ajet/ajet17/franklin.html
Heffler, B. (2001). Individual learning style and the learning style inventory. Educational Studies, 27(3), 307-316.
Lim, K. F. & Lee, J. (2000). IT skills of university undergraduate students enrolled in a first year unit. Australian Journal of Educational Technology, 16(3), 215-238. http://www.ascilite.org.au/ajet/ajet16/lim.html
Macdonald, J., Mason, R. & Heap, N. (1999). Refining assessment for resource based learning. Assessment and Evaluation in Higher Education, 24(3), 345-354.
McInnis, C., James, R. & McNaught, C. (1995). First Year on Campus: Diversity in the initial experiences of Australian undergraduates. A commissioned project for the Committee for the Advancement of University Teaching. Canberra: AGPS.
McInnis, C., James, R. & Hartley, R. (2000). Trends in the First Year Experience in Australian Universities. A project funded by the Evaluations and Investigations Programme, Higher Education Division, DETYA. Canberra: AGPS. [11 Sep 2002] http://www.detya.gov.au/archive/highered/eippubs/eip00_6/fye.pdf
Oliver, R. & Omari, A. (2001). Student responses to collaborating and learning in a web-based environment. Journal of Computer Assisted Learning, 17(1), 34-47.
Peat, M. (2000). On-line self-assessment materials: Do these make a difference to student learning? Association for Learning Technology Journal, 8(2), 51-57.
Peat, M. & Franklin, S. (2002). Supporting student learning: The use of computer-based formative assessment modules. British Journal of Educational Technology, 33(5), 517-526.
Peat, M., Franklin, S., Lewis, A. & Sims, R. (2002). Learning human biology: Student views on the usefulness of IT materials in an integrated curriculum. Australian Journal of Educational Technology, 18(2), 255-274. http://www.ascilite.org.au/ajet/ajet18/peat.html
Seale, J., Chapman, J. & Davey, C. (2000). The influence of assessments on students‘ motivation to learn in a therapy degree course. Medical Education, 34(8), 614-621.
Wiliam, D. & Black, P. (1996). Meanings and consequences: A basis for distinguishing formative and summative functions of assessment? British Educational Research Journal, 22, 537-548.
Zakrzewski, S. & Bull, J. (1999). The mass implementation and evaluation of computer-based assessments. Assessment & Evaluation in Higher Education, 23(2), 141-152.
This article was nominated for an Outstanding Paper Award at ASCILITE 2002, gaining the additional recognition of publication as a revised version in AJET. The reference for the conference version is: Peat, M. and Franklin, S. (2002). Has student learning been improved by the use of online and offline formative assessment opportunities? In A. Williamson, C. Gunn, A. Young and T. Clear (Eds), Winds of Change in the Sea of Learning: Proceedings of the 19th Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education, pp 505-513. Auckland, New Zealand: UNITEC Institute of Technology.

Authors: Mary Peat and Sue Franklin, School of Biological Sciences, The University of Sydney, Australia. Email: maryp@bio.usyd.edu.au

Please cite as: Peat, M. and Franklin, S. (2003). Has student learning been improved by the use of online and offline formative assessment opportunities? Australian Journal of Educational Technology, 19(1), 87-99. http://www.ascilite.org.au/ajet/ajet19/peat.html