Policy on the use of artificial intelligence

Policy of the scientific journal "Visnyk of Kherson National Technical University"
on the use of artificial intelligence (AI) tools

1. General provisions

The policy covers all aspects of the editorial process, including manuscript preparation by authors, manuscript review by reviewers, and the editorial work carried out by editors. It applies to all artificial intelligence tools available to participants in the publication process, including large language models (LLMs) and tools that generate images, code, data, or other content, or that perform data analysis or text editing.

2. Policy objectives:

- ensuring transparency in the use of AI tools throughout the entire process of an article from submission to publication;

- establishing responsibility for authors and reviewers for AI-generated content;

- protecting the confidentiality of manuscripts and data during the editorial process;

- maintaining the integrity and authenticity of research results.

3. For authors

Authors are fully responsible for the content of their manuscript, regardless of the use of artificial intelligence tools, in particular:

- ensuring the originality of the content submitted for publication and preventing plagiarism;

- checking the reliability and accuracy of data, quotes and other information generated or found by artificial intelligence;

- respecting copyrights and third-party rights;

- providing results (including prompts and data generated or found by artificial intelligence) upon request by the editorial board if verification becomes necessary.

AI tools may be used to edit the text of the manuscript in order to improve readability and eliminate grammatical errors, provided that the essence, data, scientific results, and quotations remain unchanged and that all generated content is thoroughly verified. When processing statistical results, AI may be used for data analysis, provided that its use is carefully described and properly justified in the manuscript. It is also permissible to use AI tools for data visualization, including graph generation, if their creation complies with the ethics of scientific publications.

It is the authors’ responsibility to be fully transparent about the use of AI tools in the preparation of the manuscript. Any use of AI must be clearly stated in the section dedicated to the research methodology. This section should describe the parameters and limitations of the AI tools used, including:

- the purpose of using the AI tools;

- the name of the AI tool (e.g., ChatGPT, Grok, Google Gemini);

- the version of the AI tool;

- a brief description of how the AI tools were used in the work (e.g., graphing, statistical analysis).

The use of AI tools by authors is prohibited for the following purposes:

- creating significant parts of the manuscript, including the introduction, discussion of results, conclusions;

- modifying images (e.g., photographs, drawings, diagrams), unless this is an explicit part of the research methodology and is properly described in the relevant section;

- creating fictitious results or falsifying data, as this undermines the reliability of the study.

4. For reviewers

To maintain the integrity of the review process, reviewers are required to ensure maximum confidentiality of all submitted manuscripts and are strictly prohibited from sharing data received from the editorial office with AI tools. Submitting an unpublished manuscript to an AI system poses a risk of violating:

- the confidentiality of the review process and data protection;

- the author's intellectual property rights.

Reviewers are permitted to use AI tools to improve the linguistic quality of their reviews, including text editing, provided that the following conditions are met:

- the editorial board is informed of this fact in the comments to the review;

- the content of the manuscript being reviewed is not fed into the artificial intelligence tool.

AI may not be used for the scientific analysis or substantive evaluation of a manuscript. Such evaluation requires critical thinking and expertise that only a qualified specialist can provide, and the use of AI tools here may lead to errors or bias.

Reviewers are fully responsible for the content of their reviews, which includes the accuracy and ethical integrity of any recommendations or observations. Furthermore, the use of AI does not exempt a reviewer from the obligation to adhere to the review policy.

5. For editors

Editors play a key role in maintaining the integrity and quality of scholarly publications. Given the advances in AI, editors should use these tools ethically and transparently, while ensuring the protection of data and intellectual property. Editors should refrain from making editorial decisions based solely on AI output, from transferring confidential material to unsecured AI systems, and from using AI for substantive evaluation. Editors bear primary responsibility for ensuring the confidentiality of materials submitted to the journal.