Author Guidelines on Using Generative AI and Large Language Models
Sage recognizes the value of large language models (LLMs) and generative artificial intelligence (AI) as productivity tools with the potential to radically change how the scholarly community carries out many tasks and activities. We are all now learning about the strengths, weaknesses, and risks of these emerging technologies in research and in the classroom, from K-12 through college and beyond. As publishers, we are working to understand how these tools may change the roles we play in scholarly communications. As a publisher with a reputation for innovation and openness to new technologies and the benefits they can bring our community, we are committed to collaborating with our authors, editors, and reviewers to learn how AI can best support their work while ensuring that human creativity and expertise remain at the core of the content we publish.
We know many academics and scholars are already using tools such as ChatGPT to generate research ideas, create draft structures for presentations or proposals, or speed up the writing process by reworking or summarizing text. Other uses are emerging rapidly as we learn more about the capabilities and limitations of these technologies.
We developed these guidelines to support those who choose to experiment with these new tools to create or adapt teaching materials or academic content. We will update these guidelines regularly as our organization learns more about the tools’ limits and opportunities, and we encourage transparency and openness between you and your Sage Editors about what you may be experimenting with in this rapidly changing space.
If you are working as an author on content for Sage Learning Resources and using LLMs or generative AI tools as part of your process, we ask that you:
- Verify the accuracy, validity, and appropriateness of any content, citations or references generated by a large language model or generative AI tool. Please be aware that such models sometimes generate plausible but fictional references (sometimes called hallucinations), and that their training data is incomplete and not always up to date. As an author, you are responsible for the accuracy of all content you submit to Sage for publication, including content developed with these tools.
- Evaluate the risk of bias within any materials generated. Content generated by LLMs sometimes perpetuates biases and stereotypes because previously published content that contains racist, sexist, or other biases is present in the data training sets. Further, a truly broad and diverse range of viewpoints may not be well-represented in the training data.
- Check any materials generated or revised by these tools for plagiarism. LLMs sometimes reproduce text from other sources. Authors must check the original sources to be sure they are not plagiarizing someone else’s work or infringing copyright.
- Disclose your use of AI tools, including chatbots, in any aspect of the creation of work for publication by Sage.
- Provide a list of any sources used to generate content and citations, including those edited using or generated by language models. Authors should closely check all citations to ensure they are accurate and properly referenced.
When inputting text into ChatGPT and other generative AI tools, be aware that any information you share with such platforms is collected for the business purposes of their owners and can be reshared with other users. Any content you input or upload to ChatGPT (or other LLM tools) may be used in ways you did not intend. Never share with an AI platform like ChatGPT any sensitive or personal information, or any proprietary information that Sage will publish or to which Sage will own the copyright.
We encourage any Sage authors who are experimenting with LLM and AI tools to stay up to date on new developments and potential implications and check back here for updates on our guidelines. Our guidelines will evolve as we better understand how these emerging tech tools can be used responsibly by the scholarly community.
Your editors at Sage look forward to a continued dialogue about generative AI tools and their place within scholarly communication, and we will strive for continued transparency and openness. If in doubt, authors should consult their editor with any specific queries if they plan to use generative AI tools as part of their authoring process. We are also eager to hear about the experiences of those who choose to experiment with new AI tools, so do reach out to your editor if you’d like to share your thoughts.