For the next post in my ALT-C series I’m going to highlight a session I didn’t actually attend but immediately regretted missing when comments started filtering in on Twitter.
The session was based around a paper by Rodway-Dyer, Dunne and Newcombe from the University of Exeter, which summarises a study of audio and visual feedback used in two first-year undergraduate classes. Click here for the paper and abstract.
Comments I picked up on this paper via Twitter appeared to show that audio feedback was not well received. Issues highlighted were:
- the finding that “76% of students wanted face-to-face from a tutor in addition to other forms of feedback” [@adamread, @JackieCarter]
- students found negative comments harder to receive as audio than in writing [@adamread, @ali818, @narcomarco], although this is still open to debate, as @gillysalmon said that the “duckling project at Leicester has found human voice easier to give negative feedback by audio than text”
Obviously there are issues with making assumptions based on a few 140-character tweets, and it should be noted that the authors conclude that “overall, it seems that there is considerable potential in using audio and screen visual feedback to support learning”, although students did express concerns in a number of areas.
Having had a chance to digest the paper, the question I’m left with is how much of the negative experience was a result of the wider assessment design rather than the use of audio feedback in itself. For example, reading the focus group discussions for audio feedback in geography I noted that:
- students were not notified that they would be receiving audio feedback;
- despite the tutor’s best attempts, students hadn’t engaged with the assessment criteria; and
- this was the first essay students had submitted at university level and they were unclear about the expected standards.
Similar issues were addressed in the Re-Engineering Assessment Practices (REAP) project, which produced an evolving set of assessment principles. Principles which could be usefully applied to the geography example include:
Help clarify what good performance is – this could be achieved in a number of ways, including creating an opportunity for the tutor to discuss the criteria with students, or perhaps providing an exemplar of previous submissions with associated audio feedback.
Provide opportunities to act on feedback – as this was the students’ first submission, providing feedback on a draft version of their essay would give them a chance to act on it (it’s not surprising that students ignore feedback when they have no opportunity to use it).
Facilitate self-assessment and reflection – one of the redesigns piloted during REAP was the Foundation Pharmacy class, in which students submitted a draft using a pro-forma similar to that used by tutors to grade the final submission. Students were required to reflect on distinct sections of their essay, which again allowed them to engage with the assessment criteria.
Encourage positive motivational beliefs – using the staged feedback described above would perhaps also address the issue of students becoming disillusioned.
Talking to a friend during the lunch break, I also discussed the research methodology used by the authors, in particular the use of ‘stimulated recall’. For this, the authors played back examples of audio feedback to the tutor, asking him to explain his thought processes and reflect on how his students would have responded to his comments. This methodology seems particularly appropriate for evaluating the use of audio feedback, and is something I want to take a closer look at.
A moment of serendipity
Whilst searching the Twitter feed for comments on the session I noticed a tweet by @newmediac promoting a free webinar in which “Phil Ice shares research on benefits of audio feedback” (here’s the full tweet). The session has already passed, but the recording for this event is here.
The presenter, Phil Ice, has been working on audio feedback in the US for a number of years and has a number of interesting findings (and research methodologies) I haven’t seen in the UK.
For example, Ice and his team report that:
- “students used content for which audio feedback was received approximately 3 times more often than content for which text-based feedback [was] received”
- “students were 5 to 6 times more likely to apply content for which audio feedback was received at the higher levels of Bloom’s Taxonomy than content for which text-based feedback was received”.
These results come from a small-scale study of approximately 30 students, so they aren’t conclusive. Ice has also conducted a larger study with over 2,000 students which used the Community of Inquiry Framework Survey. Positive differences were found across a number of indicators, although excessive use of audio to address feedback at lower levels is perceived by students as a barrier.
Ice has also conducted studies which break audio feedback into four types: global – overall quality; mid-level – clarity of thought/argument; micro – word choice/grammar/punctuation; and other – scholarly advice. These indicate that students prefer a combination of audio and text for global and mid-level comments.
Findings from Ice have been submitted for publication in the Journal of Educational Computing Research (which will soon feature a special issue on ‘Technology-Mediated Feedback for Teaching and Learning’).
Finally, I would like to mention the method Ice uses for audio feedback. He uses the audio comment tool within Acrobat Pro 8 to record comments ‘inline’. This appears to be particularly useful in helping students relate comments to particular sections of their submitted work. Click here for a sample PDF document with audio feedback (this isn’t compatible with all PDF readers – I’ve tested it in Acrobat Reader and Foxit Reader).
Hopefully this post has not only stimulated some ideas on the use of audio feedback, but also highlighted a range of methodologies for evaluating it effectively.