Can machines respond to students’ feedback needs?

by Chrissi Nerantzi, University of Leeds, UK.

Universities are greenhouses for and of experimentation, intellectual and human connection, and learning that pushes the boundaries of what is possible. Not everything will work, not everything will succeed. While we love learning to be fun and filled with happiness, it is also messy, confusing, disorientating, discomforting. Hard. Lonely at times. Sometimes a struggle. Learning also means unlearning and re-learning, and learning from (our own) mistakes. Sometimes we wish learning were easier, much easier, and that the pain of learning would go away.

What role do learning relationships play in celebrating the ups and helping students get through the downs? How can we learn from (our own) mistakes? We often talk about creating safe spaces and about the need to be brave, but Ahenkorah warns us against such narratives. For her, accountable spaces are the way forward, where each of us takes full responsibility for our actions.

How can we foster and nurture diverse learning relationships to create a respectful learning culture characterised by openness and by humane and intellectual connection?

These learning relationships between educators and students are also experienced through feedback, which, as research shows, can create challenges. Feedback is not always understood and can create anxiety; at other times students don’t know what to do with it or how it is useful for them. Feedback satisfaction seems to remain low in higher education. But is it about feedback satisfaction, or actually about recognising the value of feedback for learning? For some years now, educators have been diversifying their feedback approaches, also using the affordances of digital and networked technologies, including multimodal formats.

However, one element that seems to matter in how feedback is experienced is the strength and nature of the learning relationship. This is no surprise, as we are social beings. Carless, for example, speaks about the value of feedback partnerships, and Robson recognises the importance of dialogic feedback. We recognise the value of multidirectional feedback and know that when the tutor is the exclusive source of feedback it can create dependency. While we talk about the value of feedback partnerships and mean the involvement of others, Nicol and Kushwah draw our attention to the importance of students first engaging critically with their own work, through what they call self-feedback, before reaching out for feedback from peers or their tutor. Does this mean that a feedback partnership with themselves is perhaps equally important?

New feedback practice is emerging which, I think, links with the above and with how we would like our students to have agency and engage deeply and critically with their own work. Yes, students are proactively seeking feedback on their work-in-progress from the machine, GenAI tools, to support their learning. An opportunity for human-to-machine feedback conversations, always, however, initiated by the human (at least for now), switched on 24/7 and available on demand and command. Is this a good thing? Does it create new dependencies? Voice chatbots are here too, and some talk about synthetic relationships and the associated dangers. However, could we also see these conversations as a form of what Boud and Molloy called Feedback Mark 2, where the student takes responsibility for their learning and (pro-)actively seeks ways to engage critically and creatively with their own work in order to learn? Could this move reveal agency? An inventive and resourceful way to learn through questioning and from one’s own mistakes? A less exposing way to share the work-in-progress students are often reluctant to share? Could it therefore make students feel less vulnerable, perhaps? Students often articulate the prompt or question with precision, specifying which aspects of their work they wish the machine to give feedback on (the Innovating Pedagogies 2024 report, published by the Open University, notes a resemblance to Socratic questioning and recognises the conversational nature of student–GenAI interactions for learning). This detail and sharpness is something students are perhaps less used to including in their message when they seek feedback from their peers, tutors and others.

Are new feedback practices emerging that have the potential to transform how students engage with feedback, deepen their learning and get some of the support they feel they need? When they need it? How can we help students develop AI literacy so that they use it responsibly and are aware of the pitfalls? How can we nurture accountable spaces where we are all responsible and respectful towards each other, and also towards ourselves? How will human-to-human and human-to-machine learning relationships evolve with GenAI as a new study buddy? Is this even possible?

Author

Chrissi Nerantzi (NTF, CATE, PFHEA) is a Professor in Creative and Open Education in the School of Education, a Senior Lead of the Knowledge Equity Network and the Academic Lead for Discover and Explore at the University of Leeds in the United Kingdom.