AI Use Policy

1. Introduction

This document sets out the principles and rules for the use of artificial intelligence (AI)–based tools in the preparation, peer review, and publication of materials in the scholarly periodical Bulletin of Taras Shevchenko National University of Kyiv (hereinafter the Journal). We acknowledge the potential benefits of AI for improving the efficiency and quality of scholarly publications, while emphasizing the need for its ethical, responsible, and transparent application in accordance with academic integrity and standards of research ethics.

2. Scope

This policy applies to all participants in the Journal’s editorial process, including:

  1. Authors submitting manuscripts to the Journal.
  2. Reviewers conducting expert evaluations of submitted materials.
  3. Members of the editorial board and editorial staff.

3. Use of AI Tools by Authors

Authors may use AI tools to improve the quality of their manuscripts, subject to the following mandatory conditions:

Responsibility for content. Authors bear full responsibility for the accuracy, originality, and reliability of all content in their manuscript, including parts created or edited using AI tools. Use of AI does not absolve authors from responsibility for plagiarism, fabrication, data falsification, or other breaches of academic integrity.

Transparency and disclosure. Authors must clearly disclose the use of AI tools in the Acknowledgements section or in a separate subsection titled Use of AI Tools. Disclosures must include the name of the AI tool, its version, and the purpose of its use (e.g., text editing, grammar checking, data analysis, or image generation).

Authorship limitations. AI tools may not be listed as authors. Authorship is reserved for individuals who have made a substantial intellectual contribution to the research and the preparation of the publication.

Verification of generated content. Authors must carefully verify any text, data, or images generated by AI tools for accuracy, contextual appropriateness, and absence of bias or errors.

Use for data analysis. When AI tools are used for data analysis, authors must describe the analysis methodology, including the AI tools used, in the Materials and Methods section.

Use for image generation. If AI tools were used to generate images or other visual materials, authors must indicate this in the captions of the relevant items and in the Use of AI Tools subsection.

4. Use of AI Tools by Reviewers

Reviewers may use AI tools to enhance the efficiency of the review process, subject to the following conditions:

Confidentiality. Reviewers must not upload confidential manuscript materials (full text, data, figures, etc.) to AI tools that do not guarantee data confidentiality and security.

Objectivity. Use of AI tools must not compromise the objectivity or impartiality of the review. Reviewers remain fully responsible for the content of their reviews.

Disclosure. Reviewers are encouraged to inform the editorial board if they used AI tools in preparing their review.

5. Use of AI Tools by the Editorial Board and Staff

The editorial board and Journal staff may use AI tools to optimize editorial workflows, such as:

  1. Plagiarism checking using specialized tools.
  2. Formatting checks and verification of compliance with Journal requirements.
  3. Assistance in selecting potential reviewers.
  4. Analysis of trends in scholarly publications.

The editorial board and staff must strictly adhere to principles of confidentiality, data security, and avoidance of biases that may be inherent in some AI tools. Decisions to accept or reject manuscripts are always made by the editorial board based on scientific merit and conformity with the Journal’s standards, not solely on results produced by AI tools.

6. Ethical Aspects and Responsibility

Plagiarism. Use of AI‑generated text without proper disclosure and/or without substantial author revision may be considered plagiarism.

Bias. AI tools may produce content that reflects biases present in their training data. Authors, reviewers, and editors must critically assess AI outputs and avoid disseminating biased content.

Transparency. Maximum transparency about the use of AI tools is a key element of responsible scholarly publishing.

Responsibility. Final responsibility for compliance with the Journal’s ethical norms and standards rests with the authors, reviewers, and members of the editorial board.

7. Handling Violations

Any detected cases of improper or unethical use of AI tools will be reviewed by the editorial board in accordance with the Journal’s procedures for handling breaches of publication ethics. Measures may include requesting clarifications from the author/reviewer, rejecting the manuscript, retracting a published article, or other actions.

8. Policy Updates

This policy may be reviewed and updated by the Journal’s editorial board to reflect developments in AI technologies and changes in standards within the scholarly community.