AI POLICY

1. Introduction and Scope

Problems of Economy acknowledges the growing role of artificial intelligence (AI) tools in scholarly work and seeks to strike a balance between embracing technological opportunities and upholding academic integrity and the reliability of published research.

This Policy governs the use of AI tools and generative AI technologies by all participants in the publication process – authors, reviewers, and editors. It has been developed with reference to the guidelines of the Committee on Publication Ethics (COPE), leading international publishers (Elsevier, Springer Nature, Taylor & Francis, Wiley, SAGE), and current standards in scholarly publishing.

Core principle: Responsibility for the content, accuracy, and integrity of a publication always rests with the human author. AI tools are aids, not substitutes for scientific thinking, analysis, and authorial accountability.

2. Definitions

2.1. Types of AI Tools

Assistive AI – tools that enhance human-generated content without replacing it: grammar, spelling, and punctuation checkers (Grammarly, Microsoft Editor), basic translation, formatting. These tools do not require disclosure.

Generative AI (GenAI) – tools that generate new text, image, or other content in response to user prompts: large language models (ChatGPT, Claude, Gemini, Copilot, LLaMA, etc.), image-generation tools (DALL-E, Midjourney, Stable Diffusion), data synthesis tools, etc. Use of GenAI requires mandatory disclosure.

AI Research Tools – specialised scholarly systems for literature search and analysis (Semantic Scholar, Elicit, Research Rabbit, Consensus, etc.). If these tools are used to synthesise or formulate content included in the manuscript, disclosure is required.

3. For Authors

3.1. General Principles

Authors bear full responsibility for the content, accuracy, and integrity of any submitted manuscript, regardless of which tools – including AI – were used in its preparation. The use of AI does not release an author from any obligation imposed by authorship criteria.

✅ Permitted without disclosure:

  • grammar and spelling checks (Grammarly etc.);
  • basic style and punctuation checks;
  • reference/bibliography formatting via dedicated managers;
  • basic machine translation followed by human editing;
  • spell-checking in a foreign language.

⚠️ Permitted with disclosure:

  • editing and paraphrasing text using GenAI;
  • generating or substantially rewriting manuscript sections;
  • synthesising literature or reviewing publications using GenAI;
  • generating code for data analysis;
  • generating ideas or a manuscript structure using GenAI.

🚫 Prohibited:

  • listing AI as an author or co-author;
  • submitting AI-generated text without disclosure or verification;
  • generating or manipulating figures, charts, or tables without disclosure;
  • using AI to falsify or fabricate data;
  • uploading confidential manuscripts to public AI services.

3.2. Authorship and AI

AI tools may not be listed as authors or co-authors. Authorship entails:

  • intellectual accountability for the content of the work;
  • the ability to answer questions about any aspect of the research;
  • legal responsibility for copyright compliance and originality;
  • signing publishing agreements and affirming consent to their terms.

None of these criteria can be fulfilled by an AI system. This position is consistent with the stance of COPE, ICMJE, and leading international publishers.

3.3. Mandatory Disclosure

When to disclose: whenever generative AI has been used to generate, substantially edit, or reformulate text; synthesise literature; generate code; or create visualisations or other materials that appear directly in the manuscript.

Where to disclose: in a dedicated AI Use Statement – a separate subsection of the manuscript placed after the body text and before the reference list. Where a Methods section is present, detailed information about AI tools should also be included there.

What to include: the name and version of the tool; the purpose of use; the specific tasks for which AI was employed; how the outputs were verified and confirmed for accuracy.

Example of a correct disclosure:  

"In preparing this article, the authors used ChatGPT-4o (OpenAI, version September 2024) for style editing and grammar checking in selected paragraphs of the introduction. All scientific content, data analysis, conclusions, and interpretation of results are solely the authors' own work. All AI-generated changes were reviewed and edited by the authors."

3.4. Author Responsibilities When Using GenAI

  • Verify and fact-check all AI-generated material, including the accuracy of cited sources (AI can generate non-existent references – so-called hallucinations).
  • Ensure that the use of the AI tool does not infringe copyright on input materials and does not grant the AI service rights to use the manuscript for model training.
  • Do not upload confidential or personal data to public AI services.
  • Ensure that the final manuscript reflects the author's own analysis, interpretation, and ideas and is not predominantly AI-generated text.
  • Retain logs or documentation of AI tool use in case requested by the editorial office.

3.5. Use of AI for Figures and Visualisations

The journal prohibits the use of generative AI to create or manipulate figures, tables, charts, or other illustrative materials in the article, unless such generation is itself the subject of the research.

Exception: if AI-based image generation is part of the scientific methodology (e.g. research on generative AI itself), this must be clearly described in the Methods section with specification of the tool, version, and manner of use.

Important: AI tools should not be cited as primary sources in the reference list. Authors must cite the original scholarly sources underlying any conclusions drawn.

4. For Reviewers

4.1. Manuscript Confidentiality

Reviewers receive manuscripts as confidential documents. Uploading any part of a manuscript to a public AI service – even for the purpose of improving the quality of the review – constitutes a breach of confidentiality and may violate the authors' intellectual property rights and applicable data privacy regulations.

Prohibited:  Reviewers must not upload the manuscript or any part thereof, or their own peer review report, to public AI systems (ChatGPT, Claude, Gemini, etc.) for any purpose, including improving the language quality of the review.

4.2. Scientific Assessment of the Manuscript

Peer review requires critical scholarly thinking and expert judgement, which are exclusively human responsibilities. Reviewers must not use generative AI to prepare the substantive content of a review, formulate scientific conclusions, or make publication recommendations.

Assistive use of AI for grammar and spell-checking of the reviewer's own text is permitted – provided that no part of the manuscript is uploaded to the AI system.

4.3. Detecting AI-Generated Content in Manuscripts

If a reviewer suspects that a manuscript contains undisclosed or inadequately documented AI-generated content, they are required to flag this concern to the editorial office in the review report. The editorial office will conduct the relevant verification independently.

5. For Editors

5.1. Confidentiality and Decision-Making

Editors are responsible for the editorial process, the final decision regarding the manuscript, and communicating that decision to authors. These functions must be performed by humans and may not be delegated to AI tools.

Editors must not upload manuscripts, peer review reports, or correspondence with authors to public AI services. This requirement also extends to decision letters and any other correspondence that may contain confidential information.

5.2. Permissible Use of AI by Editors

Editors may use authorised internal tools that comply with confidentiality and data protection requirements – for example, for plagiarism screening, initial manuscript screening, or reviewer identification – provided that such tools do not violate authors' rights or disclose confidential information to third parties.

6. Policy Violations and Consequences

6.1. Violations at the Submission Stage

If undisclosed or inadequately documented use of generative AI that affected the content of the manuscript is identified, the editorial office may:

  • reject the manuscript;
  • require authors to make corrections and provide detailed disclosure;
  • conduct additional checks for originality and academic integrity.

6.2. Violations Identified after Publication

If a violation of this Policy is identified after publication, the editorial office will act in accordance with COPE guidelines, which may include: publication of a Corrigendum, an Expression of Concern, or a Retraction of the article.

7. Policy Review and Updates

The use of AI in scholarly publishing is evolving rapidly. The editorial board is committed to reviewing this Policy at least once per year, in light of new COPE guidance, leading international publisher standards, and regulatory developments. The date of the most recent update is indicated in the footer of this document.

Questions regarding the application of this Policy may be sent to the editorial office at: pe.ua.kh@gmail.com.


Last updated: March 2026

Problems of Economy, 2009-2026. The site and its metadata are licensed under CC-BY-SA.