AI Policy
The journal adheres to the principles of responsible and transparent use of generative artificial intelligence (AI) in scholarly publishing. The journal’s policy is based on best practices and the recommendations of leading scientometric databases, in particular Web of Science and Scopus (see: https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals).
For Authors
- Generative AI may be used only as an auxiliary tool (e.g., to improve language and readability).
- Authors bear full responsibility for the entire content of the manuscript and must verify the accuracy of materials generated by artificial intelligence.
- AI tools cannot be listed as authors or co-authors.
- If AI has been used (beyond basic language editing), authors must provide an appropriate disclosure specifying the tool and the purpose of its use.
- If AI constitutes part of the research methodology, this must be clearly described in the Methods section.
- The use of generative AI to create or modify images is prohibited. The only exception is the use of AI or AI-assisted tools as an integral component of the research project or methodology. In such cases, authors must clearly describe the created or modified content, explain how AI tools were used in the creation or modification process, and identify the model or tool, its version number, any extension (if applicable), and the developer.
For Reviewers and Editors
- Manuscripts are confidential documents.
- Reviewers and editors are prohibited from uploading manuscripts or any parts thereof to generative AI services.
- Manuscript evaluation and editorial decisions must be performed exclusively by humans.
Policy Violations
Failure to comply with this policy may result in rejection of the manuscript or retraction of the published article.