by Andrei Poama, Leiden University, the Netherlands
When ChatGPT was launched in November 2022, some universities decided to quickly ban students from using it, not least because they feared that GenAI would kill assessment integrity. Bans like these are now rare, but some universities still ask students to fill in disclosure statements about their GenAI use, and emphasize that some assessment tasks – for instance, asking a GenAI tool to write your BA thesis – are strictly prohibited. Other university administrations suggest that teachers should schedule oral examination moments to check if suspicions about some students’ impermissible GenAI use are true.
Many of these policies are meant to prevent cheating behaviour and protect assessment integrity, but are they fair? This is a critical but neglected question. It asks whether the educational policies that we are putting in place in reaction to GenAI technology – for instance, devices that block GenAI access, mandatory disclosure forms, GenAI detection software, or follow-up oral examinations – are imposing fair or unfair burdens on students and teachers, and whether these policies bring any benefits to our assessment practices. For instance, compliance with general bans on GenAI for take-home assignments might differ across gender lines, and thus affect the distribution of short-term benefits that come with undeserved grades. Also, of any two students who have used GenAI in equally impermissible ways, a dishonest, bluff-prone student seems more likely than a sincerely repentant one to convince teachers of the contrary. Worse, students who did not use GenAI might be wrongly labelled as cheats and unfairly sanctioned as a result.
What are the burdens and benefits that GenAI technologies and the policies designed around these technologies create, and are these burdens and benefits fairly distributed among students, on the one hand, and teachers, on the other? More generally, can assessment be fair or be made to be more fair in the age of GenAI? These are the kind of questions that the Fair Educational Assessment in the Age of GenAI (FAIR-ASSESS) project at Leiden University tries to answer.
FAIR-ASSESS: an action research university network
The FAIR-ASSESS project was initiated by Dr. Andrei Poama, an assistant professor in political philosophy and public policy ethics at the Faculty of Governance and Global Affairs, Leiden University. Together with Dr. Josette Daemen, who co-coordinates the project, and 20 other academics and support staff from five Leiden faculties, he has generated research that maps current GenAI uses by students and teachers in higher-education assessment practices, and constructed a normative, fairness-based framework for thinking about and guiding such uses.
FAIR-ASSESS is not a closed expert group. It is an open, action research intra-university network. Our ambition is to use educational science, AI expertise, public policy scholarship and philosophical analysis to identify and amplify what students and teachers think should be done about GenAI use throughout the various assessment stages, from design (e.g. generating exam questions or formats), through completion (e.g., helping students with ideation or formative feedback) to evaluation (e.g. assisting teachers with generating summative feedback).
A deliberative assembly
To make student and teacher voices heard, we convened a deliberative assembly composed of 47 participants from all seven Leiden faculties. A deliberative assembly is a diverse group where people discuss a complex problem, and propose ways to solve or mitigate it. In organizing the assembly, we took inspiration from democratic theory and similar initiatives in academia, and worked with dembrane, a young start-up company specialized in stakeholder consultation and citizen forums.
Participants in the deliberative assembly came together during four half-day sessions from January to February 2026. The first session informed participants about how GenAI works, presented them with a review of findings about GenAI uses and experiments in higher-education assessment, and familiarized them with basic assessment concepts and principles. The second session presented participants with cases of GenAI use inspired by assessment practice, and invited them to reflect on the benefits and burdens that such uses might create. The third and fourth sessions asked participants to propose, formulate, and vote on actionable rules and recommendations for regulating GenAI access and use in assessment at Leiden University. Throughout the deliberative process, students and teachers could contact experts in educational science, educational technology, and higher-education policy sciences.

Voting on actionable rules and recommendations for regulating GenAI in the last session of the assembly
An advisory report: rules and recommendations for fair GenAI use
The deliberative assembly produced an advisory report aimed at helping educational policy-makers better regulate GenAI use in assessment practices at Leiden University and potentially beyond. The report contains 32 rules and 27 recommendations. Some of these rules and recommendations are very specific – for instance, they call for a ban on all AI-use detectors for assessment purposes, ask that no student be required to use GenAI in their assessments, or propose that human-AI interaction become a university-wide transferable skill. Other rules are broader – for instance, they demand that all final course grades be based on at least one assessment method that is GenAI-proof, and that the university minimize any dependence on commercial GenAI providers.
The FAIR-ASSESS report is part of a growing landscape of similar frameworks, guidelines, and protocols. What sets our report apart is the deliberative, bottom-up process that produced it by bringing students and teachers together, the vibrant academic network that prepared it, and a shared conviction that assessment principles supported by rigorous educational research and refined by philosophical reflection can help higher-education assessment not just survive, but thrive in the age of GenAI. To make sure that this happens, we remain open to collaboration ideas, so feel free to get in touch with us by sending a message to a.poama@fgga.leidenuniv.nl.

Andrei Poama is an Assistant Professor at Leiden University and currently leads the FAIR-ASSESS project. His research spans various disciplines and topics, and includes the ethics of public policy, theories of public value(s), normative theories of punishment and criminal sentencing, criminal justice ethics, democratic theory, the ethics of voting and elections, the ethics and epistemology of AI technologies in the public sector, and experimental legal and political philosophy.
