The European Commission’s 2024 mid-point review of its strategy for high-quality, inclusive, and accessible digital education and training provides an important opportunity to evaluate the progress made and to refine the plan for the future. As part of this process, the Commission has called on organisations to submit position papers assessing the effectiveness of the Digital Education Action Plan and offering insights on how it can be improved for the next phase of implementation.
This position paper responds to that call, addressing the question: "In your experience and expertise with the Digital Education Action Plan, what actions/policy areas have been effective in achieving their objectives, and which actions/policy areas should be strengthened in the next phase of implementation?" By drawing on practical experience, we aim to provide constructive feedback on both the successes and the areas that require further development, to ensure the strategy continues to meet its goals in an evolving digital landscape.
In this position paper, we reflect on "Priority 1: Fostering the development of a high-performing digital education ecosystem" and in particular "Action 6: Ethical guidelines on the use of AI and data in teaching and learning for educators".
According to the action description, these guidelines aim to clarify how AI is utilised in schools, assist teachers and students with their educational activities, enhance administrative processes, and outline the ethical considerations associated with AI use.
The "Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators" developed under this action primarily address the school level, which is undoubtedly significant but represents only part of the broader digital education landscape. A more comprehensive approach is needed, one that includes higher education, vocational training, and lifelong learning, in line with the Digital Education Action Plan's aim to address all levels of digital education and training. The AI Act, published on 12 July 2024, underscores the critical importance of addressing AI use across all levels of education, as it classifies AI use in "educational or vocational training that may determine the access to education and professional course of someone's life (e.g., scoring of exams)" as high-risk. Although the recently published guidelines are transferable across educational levels and contexts, there would be significant benefits in providing greater focus and tailored support for higher education institutions, vocational training programmes, and lifelong learning initiatives. Strengthening these areas will help ensure a more inclusive and effective digital education ecosystem across all levels of learning.
A survey conducted by the Media & Learning Association between 17 November and 4 December 2023 revealed a significant gap in the adoption of AI policies across educational institutions. According to the survey, 80% of respondents from participating institutions across six countries (Portugal, Germany, Finland, the Netherlands, Croatia, and Denmark) reported either that their institution has no AI policy in place or that they are uncertain whether one exists. Despite the relatively limited sample size, the diversity in institution size and geographic spread makes the findings indicative of a need for support and guidance. This lack of institutional AI policies and guidelines is concerning, particularly given the rapid pace at which AI is being integrated into education. The survey also identified several barriers to AI policy adoption, including insufficient resources, resistance to change, and the complexity of AI implementation.
Following consultations and discussions with our members at the Media & Learning annual conference, as well as through online events and meetings, it has become evident that there is a pressing need for more comprehensive ethical guidance on the use of AI in higher education. Given the rapid pace of technological advancement, such guidelines must also be updated regularly to stay relevant. For example, some of our members are currently experimenting with AI avatars, which raises important questions: Can we create avatars of lecturers and then use AI to generate content? What happens if the lecturer leaves the institution? Guidelines on consent and transparency of use, privacy and security (protecting personal data, including the possibility to withdraw consent), equity (avoiding bias), and personal accountability (ensuring that systems do not replace teachers), among others, would be welcome. Addressing these and similar concerns is crucial for ensuring the responsible and ethical use of AI in higher education, and for fostering the sector's sustainable, inclusive, and ethical development.
Financial, technical, governmental, and EU-level support would help institutions develop AI policies that benefit all stakeholders.
When it comes to educational technology, the EC, like many governments, seems to adopt a reactive rather than a proactive stance. A clear example is the introduction of smartphones: they were widely adopted in society before schools and governments (e.g., Wallonia, Sweden) responded, often through restrictive measures such as bans. This pattern leaves education systems perpetually one step behind technological innovation. Instead of allowing AI to shape education and then reacting, the EC could proactively guide the development and implementation of AI for educational purposes. By setting a forward-thinking agenda for technology companies, the EC could avoid the kind of delayed response that has historically hindered effective technological integration in education. Leading the conversation on AI policy would enable education systems to better anticipate and adapt to the evolving digital landscape, ensuring a smoother transition for educators and learners.
The Media & Learning Secretariat thanks the members of our SIG on AI in Higher Education for their valuable input and feedback on this position paper.