From traffic lights to roundabouts: learning with AI

Both literally and metaphorically, traffic lights signal strict rules. Dignan (2019) uses traffic lights to describe how legacy organisations cling to past practices, and this accurately captures how universities responded when Generative AI (referred to here as AI) disrupted higher education in late 2022. The rapid adoption of assignment traffic lights in universities was an understandable attempt to provide stability during uncertainty. Yet academics and students were left confused by processes that prioritised policing over learning.

The traffic-light system provided rules at a moment of crisis: Red means do not use AI; Amber means ‘ask your tutor’; Green means use it responsibly and transparently. Yet beneath this apparent clarity, the system generated confusion. Amber became a wide and inconsistent grey zone, while Green was often misunderstood as permission for uncritical ‘copy and paste’, which it was never intended to be.

A major limitation of the traffic-light system is that it regulates assessment, not learning. AI traffic lights attempt to regulate full, partial or no use of AI in assignments, and only Red is unambiguous: “stay clear of it!” Classifying assignments in this way ignores the use of AI in learning itself. Do we need to regulate this as well, with another traffic-light system for learning? What happens when students use AI in their learning, as they do (see the open book Learning with AI, edited by students, University of Leeds, 2025), and are then told not to use it in their assignments? Does this mean that assessment is completely detached from learning? What about assessment as learning? Is assessment not part of the learning process rather than an add-on?

Despite the adoption of the traffic-light system, concerns about academic integrity and the bypassing of critical and creative thinking remain significant. Three years on, cases of academic misconduct continue to rise.

This raises a broader question: should we continue policing students’ AI use through rigid rules, or has the traffic-light model become an unsustainable and unhelpful solution? By focusing on restriction, we risk stagnation and missed opportunities for innovation in learning and assessment.

The persistence of the traffic-light model masks a deeper challenge: our reluctance to redesign assessments and reimagine curricula for a world where AI is ubiquitous. What is the real problem we seem reluctant to solve? We had the opportunity during the pandemic to radically change and future-proof assessments and curricula. It didn’t really happen beyond “emergency measures” and small pockets of innovation. Now AI has come along, and we seem to be kicking the assessment problem further down the road. Is this wise? Could we reduce academic integrity cases by investing time in reimagining how we learn and assess, so that both are meaningful and purposeful and boost intellectual curiosity and motivation? Should we invest more in building trust, learning relationships and partnership working with our students? Should we focus our efforts on developing criticality, judgement and decision-making, and finally dare to change assessment practices radically, instead of relying on surveillance and the suspicion that our students are cheating? What about reframing what we understand by originality? If the motivation and curiosity to learn are there, cheating is not really relevant, or is it?

This gap reinforces why a rules-based model is insufficient. Many academics are new to AI and may lag behind students in experimenting with it. This is a unique opportunity for academics and students to come together to learn with and from each other, to experiment together and to develop their critical and creative capacities within a framework promoting ethical and responsible use. One such example is the Education in a Digital Society module. AI is now everywhere, and a university education that is not open and adaptable to change, innovation and ongoing learning will not be useful going forward.

Calls for radical curriculum change are growing. Higher education must now move beyond restrictive rulesets and towards principled, trust-based frameworks such as the Roundabout. Unlike traffic lights, which stop and regulate, the Roundabout fosters flow, judgement and shared responsibility. It supports learners in developing the critical, creative and ethical capacities needed to learn, work and live with AI. This shift extends beyond assessment reform: it represents a broader transition towards education that prepares people to navigate, shape and thrive in an AI-enabled world.

References

Chrissi Nerantzi, University of Leeds, UK.

John Palfreyman, Leeds University Business School, UK.