by Somayeh Aghnia, London School of Innovation (LSI), UK.
There is a familiar pattern in higher education. A new technology arrives. It is framed as transformative. It promises to reshape teaching and learning. And over time, it is absorbed into existing systems. The structure holds. The day-to-day reality changes only marginally.
We’ve seen this with virtual learning environments, MOOCs, lecture capture. Each wave generated its own version of hype and anxiety. Each ultimately became part of the background.
It is tempting to assume this pattern will continue, but it may not. Not because AI is more powerful. But because the conditions under which it is arriving are fundamentally different.
Frictionless access to capability
Previous waves of educational technology entered universities through institutional pathways. They arrived slowly, requiring procurement decisions, infrastructure investment, policy approval, and academic and organisational alignment. Crucially, students did not encounter them before universities chose to adopt them.
This gave universities time and control. They could delay adoption, shape implementation, or ignore the technology altogether. Change happened on institutional terms.
AI does not follow this pattern. It has diffused at a speed and scale that is materially different, reaching widespread, usable adoption in a fraction of the time. And critically, it has done so through tools that are free or low-cost, requiring no institutional mediation and no infrastructural upgrade.
But the deeper shift is not speed alone. What is being accessed matters as much, if not more. Students are not just accessing information or tools. They are accessing capability: the ability to generate essays, solve problems, produce code, and create outputs that previously required significant effort, skill, or time.
A system out of sync
This creates a more fundamental tension. Universities operate at a particular tempo:
- Programme and curriculum cycles measured in years
- Assessment models designed for stability and comparability
- Governance processes built on deliberation, committees, and consensus
AI operates at a different tempo:
- Rapid iteration
- Continuous improvement
- Immediate availability at scale
The result is a growing misalignment between the system (universities) and the disrupting force (AI), between institutional structures and lived practice. In my recent piece for the Higher Education Policy Institute on tempo conflict, I explored how universities are being pulled into a pace of change they did not design for. Universities risk losing their “organising power over practice” when real behaviour moves faster than institutional control.
The actual shifts
At the surface level, much may appear unchanged: lectures still take place, assignments are still submitted, degrees are still awarded. But beneath that surface, more fundamental tensions begin to accumulate:
- Assessment credibility is strained: what does it mean to demonstrate knowledge when capability is always available?
- Effort and authorship become harder to define
- Equity dynamics shift: access is widespread, but effective use is uneven
- Academic authority is challenged: if capability sits outside the institution, what remains inside it?
These tensions are gradual, cumulative, and often ambiguous, but they are not insignificant. And they bring us back to the central question: whether adaptation can occur within the existing models of higher education institutions, or whether those models themselves begin to shift.
Absorption depends on certain conditions: that institutions can mediate access, that they can observe and regulate use, and that they can define how capability is demonstrated. These conditions are now under strain. Students have independent, continuous access to capability.
This is why some emerging models are not attempting to retrofit AI into existing systems, but instead are designing institutions around these conditions from the outset.
London School of Innovation is one such example, an AI-native institution designed with capability, assessment, and operations built for this environment rather than adapted to it. More on LSI’s thinking can be found here: The case for an AI-native university, which argues that the question is not how to adopt AI within existing models, but whether those models still hold under new conditions.
The future of HE is not predetermined
There is a temptation to see this as inevitable. It isn’t. Universities still have choices, but each path leads to a different kind of future.
– Ignore the shift and drift under increasing pressure
In this path, universities maintain existing models and assumptions, treating AI as peripheral or temporary. Over time, the gap between institutional design and lived practice widens. Assessment credibility becomes harder to defend, and student behaviour continues to evolve outside institutional visibility.
These universities do not disappear overnight, but they gradually lose authority and relevance. Their credentials may persist, but with increasing questions around what they actually signify. They become more dependent on legacy reputation than current coherence, slowly drifting toward diminished trust and value.
– React with fragmented policies and inconsistent practices
Here, universities respond, but without alignment. Policies are introduced, revised, and contested. Different departments, faculties, and institutions adopt divergent approaches. Some restrict, others integrate, most swing in between.
Over time, these institutions may continue to operate, but as increasingly fragmented systems. Internal coherence weakens, institutional identity becomes less clear, and the value proposition harder to articulate consistently.
Some will persist, often where geographic demand or structural factors sustain them, but in a more unstable and less legible form. Others may find themselves under increasing competitive pressure from institutions with stronger reputational legacies or from those that have more deliberately redesigned their models to align with these new conditions.
– Redesign intentionally, rethinking core assumptions
In this path, universities treat AI not as a tool to integrate, but as a condition to design around. Assessment shifts toward judgement, process, and context. Pedagogy evolves to emphasise interpretation, critique, and the use of capability rather than its substitution. The role of human judgement becomes more explicit, not less.
These institutions are more likely to redefine their relevance. They may start looking different, operating differently, and challenging existing norms, but they will retain (and potentially strengthen) their legitimacy by aligning with the realities of an AI-augmented world.
AI in higher education may not arrive as a clean, designed transformation. It is more likely to emerge in a messy fashion, through frictionless access to capability, a mismatch in tempo, and continuous external pressure on institutional boundaries.
Not a revolution. But not a continuation either.

Somayeh Aghnia is Co-Founder of the London School of Innovation (LSI), where she works on new models of higher education for an AI-augmented society. She sits at the intersection of technology, education, and business, with a lens shaped by anthropology and philosophy. Her work focuses on how universities, and institutions more broadly, can imagine their futures in the AI-augmented world, and act with genuine agency and conscientiousness to navigate the path ahead, rather than simply surviving it.
LinkedIn: linkedin.com/in/somayeh
Email: somayeh@lsi.ac.uk