AI POLICY

1. Introduction and Scope

Problems of Economy acknowledges the growing role of artificial intelligence (AI) tools in scholarly work and seeks to strike a balance between embracing technological opportunities and upholding academic integrity and the reliability of published research. This Policy governs the use of AI tools and generative AI technologies by all participants in the publication process – authors, reviewers, and editors. It has been developed with reference to the guidelines of the Committee on Publication Ethics (COPE), leading international publishers (Elsevier, Springer Nature, Taylor & Francis, Wiley, SAGE), and current standards in scholarly publishing.

Core principle: Responsibility for the content, accuracy, and integrity of a publication always rests with the human author. AI tools are aids, not substitutes for scientific thinking, analysis, and authorial accountability.

2. Definitions

2.1. Types of AI Tools

Assistive AI – tools that enhance human-generated content without replacing it: grammar, spelling, and punctuation checkers (Grammarly, Microsoft Editor), basic translation, formatting. These tools do not require disclosure.

Generative AI (GenAI) – tools that generate new text, images, or other content in response to user prompts: large language models (ChatGPT, Claude, Gemini, Copilot, LLaMA, etc.), image-generation tools (DALL-E, Midjourney, Stable Diffusion), data synthesis tools, etc. Use of GenAI requires mandatory disclosure.

AI Research Tools – specialised scholarly systems for literature search and analysis (Semantic Scholar, Elicit, Research Rabbit, Consensus, etc.). If these tools are used to synthesise or formulate content included in the manuscript, disclosure is required.

3. For Authors

3.1. General Principles

Authors bear full responsibility for the content, accuracy, and integrity of any submitted manuscript, regardless of which tools – including AI – were used in its preparation.
The use of AI does not release an author from any obligation imposed by authorship criteria.
3.2. Authorship and AI

AI tools may not be listed as authors or co-authors. Authorship entails:

– making a substantial contribution to the conception or design of the work, or to the acquisition, analysis, or interpretation of data;
– drafting the work or revising it critically for important intellectual content;
– approving the final version to be published;
– agreeing to be accountable for all aspects of the work.
None of these criteria can be fulfilled by an AI system. This position is consistent with the stance of COPE, ICMJE, and leading international publishers.

3.3. Mandatory Disclosure

When to disclose: whenever generative AI has been used to generate, substantially edit, or reformulate text; synthesise literature; generate code; or create visualisations or other materials that appear directly in the manuscript.

Where to disclose: in a dedicated AI Use Statement – a separate subsection of the manuscript placed after the body text and before the reference list. Where a Methods section is present, detailed information about AI tools should also be included there.

What to include: the name and version of the tool; the purpose of use; the specific tasks for which AI was employed; and how the outputs were verified and confirmed for accuracy.

Example of a correct disclosure: "In preparing this article, the authors used ChatGPT-4o (OpenAI, version September 2024) for style editing and grammar checking in selected paragraphs of the introduction. All scientific content, data analysis, conclusions, and interpretation of results are solely the authors' own work. All AI-generated changes were reviewed and edited by the authors."

3.4. Author Responsibilities When Using GenAI
3.5. Use of AI for Figures and Visualisations

The journal prohibits the use of generative AI to create or manipulate figures, tables, charts, or other illustrative materials in the article, unless such generation is itself the subject of the research.

Exception: if AI-based image generation is part of the scientific methodology (e.g. research on generative AI itself), this must be clearly described in the Methods section, with specification of the tool, version, and manner of use.

Important: AI tools should not be cited as primary sources in the reference list. Authors must cite the original scholarly sources underlying any conclusions drawn.

4. For Reviewers

4.1. Manuscript Confidentiality

Reviewers receive manuscripts as confidential documents. Uploading any part of a manuscript to a public AI service – even for the purpose of improving the quality of the review – constitutes a breach of confidentiality and may violate the authors' intellectual property rights and applicable data privacy regulations.

Prohibited: Reviewers must not upload the manuscript or any part thereof, or their own peer review report, to public AI systems (ChatGPT, Claude, Gemini, etc.) for any purpose, including improving the language quality of the review.

4.2. Scientific Assessment of the Manuscript

Peer review requires critical scholarly thinking and expert judgement, which are exclusively human responsibilities. Reviewers must not use generative AI to prepare the substantive content of a review, formulate scientific conclusions, or make publication recommendations.

Assistive use of AI for grammar and spell-checking of the reviewer's own text is permitted, provided that no part of the manuscript is uploaded to the AI system.

4.3. Detecting AI-Generated Content in Manuscripts

If a reviewer suspects that a manuscript contains undisclosed or inadequately documented AI-generated content, they are required to flag this concern to the editorial office in the review report.
The editorial office will conduct the relevant verification independently.

5. For Editors

5.1. Confidentiality and Decision-Making

Editors are responsible for the editorial process, the final decision regarding the manuscript, and communicating that decision to authors. These functions must be performed by humans and may not be delegated to AI tools.

Editors must not upload manuscripts, peer review reports, or correspondence with authors to public AI services. This requirement also extends to decision letters and any other correspondence that may contain confidential information.

5.2. Permissible Use of AI by Editors

Editors may use authorised internal tools that comply with confidentiality and data protection requirements – for example, for plagiarism screening, initial manuscript screening, or reviewer identification – provided that such tools do not violate authors' rights or disclose confidential information to third parties.

6. Policy Violations and Consequences

6.1. Violations at the Submission Stage

If undisclosed or inadequately documented use of generative AI that affected the content of the manuscript is identified, the editorial office may:
6.2. Violations Identified after Publication

If a violation of this Policy is identified after publication, the editorial office will act in accordance with COPE guidelines, which may include publication of a Corrigendum, an Expression of Concern, or a Retraction of the article.

7. Policy Review and Updates

The use of AI in scholarly publishing is evolving rapidly. The editorial board is committed to reviewing this Policy at least once per year, in light of new COPE guidance, leading international publisher standards, and regulatory developments. The date of the most recent update is indicated in the footer of this document.

Questions regarding the application of this Policy may be sent to the editorial office at: pe.ua.kh@gmail.com.

Last updated: March 2026
Problems of Economy, 2009-2026. The site and its metadata are licensed under CC-BY-SA.