Generative AI Policy

Indonesian Journal of Artificial Intelligence and Multimodal Systems (IJAIMS) recognizes that generative artificial intelligence tools (e.g., large language models, code assistants, and image generators) may support research and writing workflows. This policy ensures transparency, accountability, and research integrity when such tools are used in preparing manuscripts submitted to the journal.

Authors remain fully responsible for the accuracy, originality, and integrity of all content in their manuscripts, including any content produced with the assistance of generative AI tools.

1. Permitted Uses

Generative AI tools may be used to support productivity, provided their use does not compromise scientific integrity or violate privacy or copyright. Permitted uses include:

  • Language editing and improving clarity of writing (with author verification)
  • Assisting with code refactoring, documentation, or debugging (with reproducibility checks)
  • Supporting literature mapping or drafting outlines (with proper citation of sources)
  • Generating non-substantive content such as cover letters or summaries (where appropriate)

2. Prohibited or Restricted Uses

The following uses are either prohibited outright or permitted only with special caution:

  • Using generative AI to fabricate data, results, references, or citations
  • Submitting AI-generated text as original scholarly contribution without disclosure
  • Uploading confidential, proprietary, or sensitive data (e.g., patient data) into public AI tools
  • Using AI-generated images/figures that misrepresent results or lack appropriate permissions
  • Using AI tools as a substitute for ethical approvals, informed consent, or research governance

3. Disclosure Requirements

Authors must disclose the use of generative AI tools whenever such tools materially contribute to manuscript writing, analysis, code, or figure creation. The disclosure should appear in one of the following:

  • A dedicated “Generative AI Statement” in the manuscript (recommended)
  • The Acknowledgments section (if applicable)
  • The cover letter to the editor (as supplementary information)

Recommended wording: “The authors used [Tool Name, Version] to assist with [e.g., language editing / code refactoring]. All outputs were reviewed and verified by the authors, who take full responsibility for the content.”

4. Authorship and Accountability

  • Generative AI tools cannot be listed as authors.
  • All listed authors must meet authorship criteria and are accountable for the work.
  • Authors must verify that AI assistance did not introduce factual errors, bias, or hallucinated references.

5. Data, Privacy, and Confidentiality

Authors must not input confidential, proprietary, or personally identifiable information into generative AI systems unless they have explicit permission and the tool is approved for such use. For research involving sensitive data, authors must describe in the manuscript how privacy and regulatory compliance were maintained.

6. Reproducibility and Documentation

If generative AI tools contribute to code, experiments, or analysis, authors should document:

  • The tool name and version (or provider/model)
  • Key settings relevant to the output (when applicable)
  • Validation steps taken to ensure correctness and reproducibility

7. Editorial Assessment

The editorial team may request additional clarification regarding generative AI usage during editorial screening or peer review. Failure to disclose material use of AI tools may be treated as a breach of publication ethics and may result in rejection or retraction.


Last updated: 2026