by Fiona Concannon, University of Galway; Leigh Wolf, University College Dublin; Tom Farrelly, Munster Technological University and Orna Farrell, Dublin City University, Ireland.
“It seems likely that generative AI is here to stay and will develop, so journals will need to find ways of figuring out how to work with them” (Zohny et al., 2023, p. 79).
Following the initial surge of global interest in GenAI in November 2022, it was clear that written scholarship would be irrevocably changed. In the intervening period, higher education institutions have scrambled to adjust academic integrity policies and enact major changes to how learning is assessed to assure their certification and accreditation.
But what about the impact on our knowledge exchange practices through our scholarly research? Our research inquiry advances what we know, and our social practices around sharing our results are central to the very existence and advancement of higher education. The peer-reviewed journal is the cornerstone of maintaining standards of excellence, ensuring quality assurance and integrity. How is this impacted by recent advances in GenAI – a technology that can now produce coherent and contextually relevant human-like text? If educators are attending to learning processes with existential reflection and adjustment, what editorial action is needed with respect to publishing?
The publish or perish machine
Peer-reviewed scholarship, as the sole mechanism for validating research, has increasingly faced challenges and threats in the contemporary higher education landscape. Over many years, the neoliberal context has resulted in pressure to publish for tenure or career advancement (Watermeyer et al., 2023). In this context, various unethical publishing practices pre-date our current GenAI-dominated era. These include the misrepresentation of journal article contributions through practices such as: guest authorship, whereby individuals are named who have not made substantial contributions to the research; ghost authorship, where authors fail to acknowledge others who have made significant contributions; and direct plagiarism, where ideas or words are presented as one’s own without proper citation or attribution. Each of these unethical practices undermines the transparency and credibility of authorship attribution.
Ethics and authorship with GenAI
The use of AI in scholarship has presented well-documented ethical concerns about amplifying biases through the algorithmic processing of the data on which GenAI is trained. Training on data from unknown sources may in and of itself constitute plagiarism on a grand scale. Whilst these are certainly major issues, it is the use of GenAI as a writing aid – for prompting, summarising and supporting the text in various unknown ways – that raises the greatest concern for journal editors and reviewers. Without proper attribution and transparency in how authors are working with GenAI when presenting their work, how can fair editorial and peer review judgements be made? Over the longer term, research scholarship that is heavily reliant on GenAI challenges notions of authorship, plagiarism, and citation. If GenAI is, in turn, used by peer reviewers, this introduces a further complication into the evaluative cycle of human-written scholarship. What constitutes acceptable use, and where do authors, reviewers and editors cross a line?
Editorial practices have, on the whole, come to an understanding that authorship cannot be attributed to GenAI large language models (Thorp, 2023), despite some early missteps in grappling with this high-level attribution. As we take a more granular look at journal text, how can we, editorially, know whether work was created by the author or by generative AI? Does it matter, if the author takes responsibility for the output? A parallel argument can be made: data analysis uses increasingly sophisticated tools to support researchers, and we rarely ask whether findings were manually computed or calculated using a software application. But text is so fundamental to the author’s voice and meaning. For many, heavy use of GenAI, with all its inherent risks, is a corruption of this messaging, and of research integrity.
Anson (2022) notes how authorship and notions of plagiarism are socially constructed and context specific. If we are aware that GenAI-produced text is increasingly used within research practices, do we need to accept that authors may be incorporating it within their writing cycle, refining and adjusting AI-generated text for accuracy (if it is of any assistance at all)?
Whilst we need to be concerned with authorship, if the main purpose of publication is knowledge dissemination and confirmation of research quality – the originality, novelty and veracity of research findings – then editorial teams need to be more fully aware of the likely reliance on GenAI, in its various guises, within the author’s writing process.
An open creative experiment and critical reflection
With the technology evolving, along with our understanding of what constitutes responsible use, increased digital (AI) literacy is often posed as the solution. Calls for increasing digital literacy are wise and well-intentioned (Ciampa et al., 2023); however, many literacy practices (digital or otherwise) are hidden. How can we surface these within a journal publication?
An open-access journal, the Irish Journal of Technology Enhanced Learning (IJTEL), was willing to invite risk and experimentation and to encourage authors to be transparent about their writing practice. As editors, we invited authors to undertake a scaffolded examination of using GenAI within a familiar research topic. The result was a special edition comprising five position papers, thirteen short reports and two book reviews (Wolf et al., 2023). Overwhelmingly, the contributions pointed out the shortcomings of GenAI-generated text. We encourage you to look at each submission within the special edition, as each is unique in its approach. Common to all was an honest, open account and critical reflection.
Whilst we are still far from a firm understanding of what constitutes responsible use of GenAI, it is clear that transparency, authenticity and honest reflection remain as important for us as a community of scholars as they have been for us as educators, as we openly work together to build consensus on standards of rigour, accuracy and credibility to advance and disseminate our knowledge. The longer-term direction of editorial policies and journal publication as a whole is anyone’s guess.
Journal URL: https://journal.ilta.ie/
Notes:
Anson, C. M. (2022). AI-based text generation and the social construction of “fraudulent authorship”: A revisitation. Composition Studies, 50(1), 37-46. https://compositionstudiesjournal.files.wordpress.com/2022/07/anson.pdf
Ciampa, K., Wolfe, Z. M., & Bronstein, B. (2023). ChatGPT in education: Transforming digital literacy practices. Journal of Adolescent & Adult Literacy, 67(3), 186-195. https://doi.org/10.1002/jaal.1310
Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313-313. https://doi.org/10.1126/science.adg7879
Watermeyer, R., Phipps, L., Lanclos, D., & Knight, C. (2023). Generative AI and the automating of academia. Postdigital Science and Education. https://doi.org/10.1007/s42438-023-00440-6
Wolf, L., Farrelly, T., Farrell, O., & Concannon, F. (2023). Reflections on a collective creative experiment with genAI: Exploring the boundaries of what is possible. Irish Journal of Technology Enhanced Learning, 7(2), 1-7. https://doi.org/10.22554/ijtel.v7i2.155
Zohny, H., McMillan, J., & King, M. (2023). Ethics of generative AI. Journal of Medical Ethics, 49(2), 79-80. https://doi.org/10.1136/jme-2023-108909
Authors
Fiona Concannon (University of Galway), Leigh Wolf (University College Dublin), Tom Farrelly (Munster Technological University) and Orna Farrell (Dublin City University) are all co-editors of the Irish Journal of Technology Enhanced Learning (IJTEL), and they welcome submissions on any aspect of technology use in education.