AI Use Policy


The SARES AI Policy aims to give writers, editors, and reviewers clear, manageable guidance on the use of generative AI and AI-assisted technologies. SARES will closely monitor developments in this area and will revise or improve the policy as needed. Reviewers should pay close attention to the following guidance.


Generating any portion of an article with a generative AI tool or large language model (LLM), such as GPT, for example producing the abstract or the literature review, is unacceptable. Doing so violates SARES authorship criteria, which stipulate that authors bear responsibility for the work and are liable for its integrity, validity, and accuracy.


Using generative AI tools or LLMs to generate or report results is strictly prohibited under SARES authorship criteria. Under these criteria, authors are held accountable for the development and interpretation of their work, as well as its precision, integrity, and validity. Submitting or publishing images generated by AI tools or large-scale generative models is likewise prohibited.


Because of concerns about the authenticity, integrity, and validity of generated data, statistics produced by a generative AI tool or LLM may not be reported in the text; however, such tools may be used to assist in the analysis of the work.


Using a generative AI tool or LLM to copyedit an article, improving its language and readability, is acceptable. This proofing use is comparable to standard tools that correct spelling and grammar, operating on content the authors have already created. Because it does not generate entirely new content, this approach preserves the authors' ownership of the original work.
