Generative Artificial Intelligence (AI) Policy
To uphold transparency and integrity in scholarly publishing, the European Research Studies Journal (ERSJ) follows the best practices recommended by Scopus for the responsible use of generative artificial intelligence (GenAI) and AI-assisted technologies in the research, writing, peer-review, and publication processes.
Responsible Use of AI and AI-Assisted Tools
Authors who use generative AI or AI-assisted tools must do so responsibly and in compliance with ethical publishing standards. Authors should carefully review the terms and conditions of any AI tool they employ to ensure the privacy and confidentiality of their data and inputs, including unpublished manuscripts and research materials.
Particular care must be taken when handling personally identifiable data. AI tools must not be used to generate images or content that replicate copyrighted material, depict identifiable individuals, reproduce recognizable voices, or reference identifiable products or brands without appropriate authorization.
Authors remain fully responsible for verifying the accuracy, validity, and originality of all content produced with the assistance of AI tools. They must review outputs carefully to identify and correct factual inaccuracies, potential biases, or misleading information.
Authors should also ensure that the AI tools used are granted only the limited rights necessary to provide the requested service and do not obtain rights to reuse, store, or train on the authors’ materials in ways that could compromise intellectual property or confidentiality. The use of AI tools must not impose restrictions that could limit the subsequent publication of the manuscript.
Disclosure of AI Use
To ensure transparency, authors must disclose any use of generative AI or AI-assisted technologies in the preparation of their manuscript. A statement titled “Declaration of Generative AI and AI-Assisted Technologies in the Writing Process” should be included at the end of the manuscript, immediately before the references section.
The declaration should clearly specify:
- the name of the AI tool used,
- the purpose for which it was used (e.g., language editing, idea organization, data analysis support),
- and the extent of human oversight and verification.
This declaration will appear in the published article. Basic automated checks of grammar, spelling, or punctuation do not require disclosure. If AI tools were used during the research process itself, their use should be described in detail in the Methods section.
Authorship and Accountability
Generative AI tools cannot be listed as authors or co-authors and should not be cited as authors in scholarly work. Authorship carries responsibilities that can only be fulfilled by human contributors, including accountability for the accuracy, integrity, and originality of the research.
All listed authors remain responsible for:
- approving the final version of the manuscript,
- ensuring that the work is original and has not been previously published,
- confirming that all listed authors meet authorship criteria,
- and ensuring that the manuscript does not infringe on third-party rights.
The use of AI tools does not diminish the responsibility of authors for the content of their work.
Use of AI in Editorial and Peer Review Processes
ERSJ also encourages transparency regarding the use of AI tools in editorial management and peer review processes. Editors and reviewers should not upload confidential manuscript materials to AI systems that may store, reuse, or train on such content.