New ethical framework to help navigate use of AI in academic research

Researchers from the University of Oxford, University of Cambridge, University of Copenhagen, National University of Singapore, and other leading institutions have devised philosophically grounded ethical guidelines for using Large Language Models in academic writing.

 


As Large Language Models (LLMs) become more prevalent and easier to access, academics across the globe are using them to assist with academic manuscript writing, in particular to develop ideas and content. However, their probabilistic nature raises concerns about plagiarism, authorship attribution, and the integrity of academia as a whole. As AI tools become increasingly sophisticated, clear ethical guidelines are therefore crucial to maintaining the quality and credibility of scholarly work.

The new research, published in Nature Machine Intelligence, outlines three essential criteria that maximise the beneficial impacts of LLMs on scientific advancement and academic equity:

  • Human vetting to guarantee accuracy and integrity
  • Ensuring substantial human contribution to the work
  • Appropriate acknowledgment and transparency of LLM use

The authors provide a template LLM Use Acknowledgement, which researchers can use when submitting manuscripts. This practical tool will streamline adherence to ethical standards in AI-assisted academic writing and provide greater transparency about LLM use. Speaking about the guidelines, co-author Prof Julian Savulescu of The Uehiro Oxford Institute says:

Large Language Models are the Pandora's Box for academic research. They could eliminate academic independence, creativity, originality and thought itself. But they could also facilitate unimaginable co-creation and productivity. These guidelines are the first steps to using LLMs responsibly and ethically in academic writing and research.

This publication marks a crucial step in managing the relationship between human academic work and machine intelligence. By empowering researchers to use AI technology ethically, the guidelines aim to boost productivity and innovation while preserving academic integrity. Co-author Dr Brian Earp of The Uehiro Oxford Institute notes:

It's appropriate and necessary to be extremely cautious when faced with new technological possibilities, including the ability for human writers to co-create academic material using generative AI. This is especially true when things are scaling up and moving quickly. But ethical guidelines are not only about reducing risk; they are also about maximizing potential benefits.

The guidelines present great opportunities for academic communities worldwide and can be applied across all academic disciplines. Professor Timo Minssen of the University of Copenhagen outlines their significance:

Guidance is essential in shaping the ethical use of AI in academic research, and in particular concerning the co-creation of academic articles with LLMs. Appropriate acknowledgment based on the principles of research ethics should ensure transparency, ethical integrity, and proper attribution. Ideally, this will promote a collaborative and more inclusive environment where human ingenuity and machine intelligence can enhance scholarly discourse.
