Large Language Models (LLMs): Smart Work or Academic Doping?

The Academy of Science of South Africa hosted a webinar on Artificial Intelligence titled Large Language Models (LLMs): Smart Work or Academic Doping? as part of its series on polemics in AI.

LLMs are transforming academic research and publishing by significantly increasing scholars’ productivity. A 2023 Nature survey revealed that nearly a third of scientists use generative AI for manuscript preparation, with LLMs aiding in tasks such as coding, brainstorming, and literature reviews. However, LLMs raise serious concerns, including bias and exploitation in their training processes and the generation of errors or inaccurate information. This outsourcing of thought, and the facilitation of outright cheating by students and scholars, risks overburdening journal editors, peer reviewers, and course administrators alike.

Prof Thokozani Majozi from the University of the Witwatersrand facilitated the webinar, at which Prof Anne Verhoef (North-West University), Dr Nicky Tjano (University of South Africa), Prof Lynn Morris (University of the Witwatersrand) and Prof David Walwyn (University of Pretoria) presented their findings on the topic. Prof Verhoef’s presentation focused on the ethical and responsible use of LLMs and other AI tools, and emphasised the importance of universities having clear guidelines on the use of AI by students to prevent plagiarism. Dr Tjano noted that more than 5,000 research papers were retracted in 2023 due to integrity concerns, and argued that more investment is needed in educating and training academics on the use of new technologies in research. Prof Morris acknowledged that LLMs can accelerate scientific discoveries and drive innovation in multiple fields, but cautioned that without proper regulation the outcomes can have a negative impact. Prof Walwyn gave a presentation on the use of artificial neural networks (ANNs) as a way of optimising biological systems.

Issues of academic integrity seem to dampen the uptake of LLMs, and more policies are needed to regulate the use of AI tools in research. The quality of research output is paramount in advancing development, and caution must be exercised when using LLMs.

The recording of the webinar can be accessed here: https://www.youtube.com/watch?v=GV1V12sZ9qw and the slides here: https://hdl.handle.net/20.500.11911/410