by Tom Farrelly, Munster Technological University, Ireland & Nick Baker, University of Windsor, Canada.
In crafting this article, we initially drew on our journal paper, “Generative Artificial Intelligence: Implications and Considerations for Higher Education Practice,” published last November, as a starting point. Nevertheless, given how rapidly Generative AI’s capabilities and impact are evolving, even a five-month interval offers us further room for reflection and exploration.
While acknowledging the proliferation of thousands of Generative AI (GenAI) providers, it is notable that ChatGPT has swiftly attained the status of a proprietary eponym, symbolising the broader use of GenAI. Much as “Hoover” has become synonymous with vacuum cleaners, and Google with internet searches, ChatGPT exemplifies a brand name that encapsulates the broader concept in popular discourse. Regardless of the brand used, such is the range and intensity of the discourse and discord “that one can be swept along at times alternating between hand-wringing portents of doom and the joyous embrace of potentialities”.
Created with DALL·E 2
It is certainly true that artificial intelligence is currently developing at such a phenomenal pace that it is almost impossible to keep up. Generative AI tools, such as ChatGPT, Google’s Gemini, Microsoft’s Copilot, and the myriad others built on large language models, are now capable of both accepting multi-modal content as a prompt and creating it. OpenAI’s Sora has been hailed as a significant breakthrough for its ability to create hyper-realistic video content from text prompts.
When we wrote our paper back in November 2023, one year on from the release of ChatGPT, we knew that it took considerable effort and skill to craft prompts that would elicit the response you were looking for. As an assistant, ChatGPT needed a lot of help; now, just a few months later, our machine assistants can create content that is virtually indistinguishable from human-created content.
Within higher education, a longstanding tradition has celebrated the significance of individual expertise and knowledge. Yet a profound existential dilemma now grips academia as it grapples with the implications of this shift for higher learning.
AI is rapidly becoming ubiquitous across all the tools of our daily work, from websites to office software; it would be almost impossible to avoid interacting with AI in one form or another even if we wanted to. We are at an incredible moment in history, when most of the world has been handed the keys to an unbelievably powerful suite of tools with the potential to change our very relationship with knowledge. Used inappropriately, however, these tools are also powerful weapons that can harm society.
So what does this mean for students? First, it means that they have access to a wide range of assistants that can help them learn and express themselves: from language learning, to personal tutors trained specifically on the content of their courses, to data analysis and research assistants, and, of course, writing assistance. For the millions of international students studying in a language other than their first, the language support alone is being welcomed as a great leveller.
For students with disabilities, the ability to create AI bots that provide exactly the assistance they need will be life-changing. Of course, the academic world rarely looks favourably on students using assistance in a system that privileges normativity and individualism, but that system is coming under increasing pressure as demographics and technology evolve, and as society questions the value of a university credential when knowledge is abundantly and freely available.
Created with DALL·E 2
For higher education, one of the critical questions we must ask now is: how do we prepare our students for the world they are about to enter, where AI is all around them, incorporated into their jobs, and, in many cases, forcing a re-evaluation of the role of human skill and knowledge in the workforce? Theirs is a world where machines will work alongside them, assisting with ever more complex problems. For example, the new Devin agentic AI system, released just a few days ago, has already proven capable of independently solving complex programming problems drawn from real GitHub issues: it resolved 13.86% of the issues in its benchmark set without human intervention, where previous tools such as ChatGPT could solve only a fraction of those problems, and none without human support.
Ultimately, humanity is at an inflection point in history, scrambling to put boundaries and guidelines around these ever more capable systems to ensure that they are built and used responsibly, ethically, and safely. This is also a time when we must reconsider what it means to be human. For too long, we have defined our humanity in terms of the bucket of skills, knowledge, and emotions we imagine only humans can demonstrate. But as we barrel towards agentic AI, and on to generalised AI that can act independently of its human masters and that may surpass human intellectual capacity, we must consider what it is that makes us uniquely human, and ensure that we can hold onto that precious humanity.
Tom Farrelly is an Academic Developer and Senior Lecturer with the N-TUTORR project at Munster Technological University, Ireland.
Nick Baker is the Director of the Office of Open Learning and co-chair of the sub-committee on artificial intelligence at the University of Windsor in Canada.