Blockchain in Healthcare Today (Letter). 2025 Aug 27;8(2). doi: 10.30953/bhty.v8.440

Integrating AI with Integrity at Blockchain in Healthcare Today: Introducing BHTY’s AI Policy for Authors, Reviewers, and Editors

Jennifer Hinkel 1, Umit Cali 2
PMCID: PMC12860434  PMID: 41623343

To the BHTY community:

Whether you read BHTY as a blockchain developer, a healthcare executive, or an academic, Artificial Intelligence (AI) has likely become part of your daily vocabulary. AI, Machine Learning, Large Language Models (LLMs), Transformers, and Self-Attention are terms with which we are all gaining familiarity and building skills.

These technologies are powerful tools. Used judiciously, they accelerate discovery; used poorly, they amplify bias, erode privacy, and undermine trust.1 BHTY therefore introduces the AI Policy below to codify a central principle: AI augments scholarship but never replaces human expertise or accountability.

We are embedding mandatory disclosure, bias mitigation, and data-protection measures into our submission, peer-review, and editorial workflows to advance innovation while upholding rigorous scientific integrity.

Some highlights of the policy include:

  1. Full transparency on AI use (tools, versions, dates, verification, human responsibility).

  2. Bias vigilance and documented mitigation.

  3. Strict data-privacy safeguards.

  4. Human verification and accountability across all roles.

  5. Graduated sanctions, from resubmission to retraction, for violations.

The full policy is below.

We plan to implement this policy swiftly, commit to reviewing it with the BHTY Editorial Board at least annually, and welcome your feedback.

As a closing thought, scientific progress depends on both ingenuity and integrity, traits that remain rooted in human judgement. AI tools will continue to reshape how we interact with technology, data, and knowledge; we are just at the beginning. No matter how innovative or exciting these tools are, they serve the healthcare field only when wielded with transparency and discipline. This policy enables thoughtful AI use while safeguarding the credibility of our collective work. By setting this benchmark, BHTY affirms its role as a thought leader in responsible scholarship.

Policy Overview

BHTY, its editors, and publishers recognize that AI tools, including LLMs, can support legitimate research activities when used ethically and transparently.

Academic integrity is of the utmost importance in scientific research, and healthcare-related science carries specific sensitivities around bias, privacy, and accuracy. To balance the interest in using these new technologies with integrity, rigour, and legitimacy, we are updating our guidelines for AI use across activities including submissions, reviews, and editorial functions.

The core principle is that AI tools are simply tools, and cannot replace human expertise, critical thinking, or scholarly rigour. All authors, reviewers, and editors are fully responsible for all actions and words they put their names to, regardless of the tools used to achieve that output.

Ethical and reasonable uses of AI tools include:

  • Literature review organisation and screening

  • Data visualisation and basic statistical analysis (with human verification)

  • Language editing and grammar checking

  • Code documentation and commenting

  • Initial draft structuring and outlining

  • Translation assistance for non-native English speakers

  • Code optimization and debugging

  • Smart contract testing and validation

  • Data preprocessing and cleaning (while maintaining compliance with all data privacy regulations)

  • Figure and diagram creation

  • Reference formatting and bibliography management

  • Suggestions for improvement, feedback on ideas and writing, and proofreading/grammar checks

Requirements When Using AI

1. Full Disclosure and Documentation

All AI use must be explicitly declared through an AI Use Declaration Statement, required in all submissions, which must specify:

  • Specific AI tools used (name, version, provider)

  • Exact purposes for which AI was employed

  • Date(s) of AI tool usage

  • Methods used to verify AI-generated content

  • Statement of human oversight and final responsibility

Example Declaration:

“This research utilized ChatGPT-4.5 (OpenAI, accessed March 2025) for initial literature review organization and Claude 3 (Anthropic, accessed March 2025) for code commenting. All AI-generated content was independently verified against primary sources. The authors take full responsibility for the accuracy and integrity of all content, including any errors or omissions that may have originated from AI assistance.”

2. Bias Assessment and Ethical Review

Authors must demonstrate:

  • Active consideration of potential AI bias in healthcare contexts

  • Recognition of limitations in AI training data representation

  • Assessment of cultural, demographic, and clinical bias risks

  • Documentation of steps taken to mitigate identified biases

  • Acknowledgement of AI limitations in healthcare-specific contexts

  • For more information on bias, reference: University of Oxford Catalog of Bias.2

3. Data Privacy and Security Compliance

The following actions pose high privacy- and security-compliance risks:

  • Uploading protected health information (PHI) to any AI system

  • Sharing proprietary blockchain implementations or sensitive code with AI systems

  • Using AI tools hosted outside permitted jurisdictions without explicit data agreements

  • Processing patient data through public or commercial AI platforms

  • Sharing institutional or collaborative partner confidential information

  • Sharing BHTY journal confidential information with public AI systems or with systems hosted outside permitted jurisdictions, including uploading confidential submissions to commercially available AI tools without privacy protections

Required Safeguards:

  • Use only AI tools with appropriate data handling certifications that you have evaluated and verified as a researcher

  • Implement local AI solutions (i.e. on your own server or computer) when processing sensitive data

  • Maintain audit trails of all AI interactions involving research data

  • Comply with the Health Insurance Portability and Accountability Act of 1996 (HIPAA), the General Data Protection Regulation (GDPR), and other applicable privacy regulations

  • Obtain necessary institutional approvals for AI tool usage, including any relevant ethics or Institutional Review Board (IRB) approvals, before commencing
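
The policy does not prescribe a format for audit trails; the sketch below is one illustrative approach (a JSON-lines log with hypothetical field names, not a BHTY requirement) that records each AI interaction while storing only a hash of the prompt, so sensitive text such as PHI never enters the log itself.

```python
import hashlib
import json
import time

def log_ai_interaction(log_path, tool, version, purpose, prompt_text):
    """Append one audit-trail record for an AI interaction.

    Only a SHA-256 hash of the prompt is stored, so the log file
    never contains sensitive text such as PHI or proprietary code.
    """
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "tool": tool,
        "version": version,
        "purpose": purpose,
        "prompt_sha256": hashlib.sha256(prompt_text.encode("utf-8")).hexdigest(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: record a language-editing session with a hypothetical tool
entry = log_ai_interaction("ai_audit.jsonl", "ExampleLLM", "1.0",
                           "language editing", "Draft abstract text ...")
```

A log of this shape also makes the disclosure requirements in Section 1 easier to satisfy, since tool names, versions, dates, and purposes are captured as the work happens rather than reconstructed at submission time.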

4. Human Accountability and Verification

Author Responsibilities:

  • Personal review and verification of all AI-generated content

  • Independent fact-checking of all claims, statistics, and references

  • Critical evaluation of AI recommendations and outputs

  • Final approval and sign-off on all submitted content

  • Acceptance of full liability for errors, hallucinations ("false positives"), or inaccuracies

Human authors are expected to:

  • Ensure that any AI use in research or referencing does not lead to false references, citations that do not reflect the content of the original source, or similar AI-generated errors

  • Independently validate all analyses, data interpretations, and figure generation

  • Confirm accuracy of technical implementations and code

  • Verify technical details and specifications

Prohibited Uses of AI for the Purposes of BHTY

The following activities are not permissible for the journal and may be interpreted as violations of research ethics and academic integrity:

  • Using AI to write substantial portions of a submission/manuscript without explicit declaration of this use

  • Submitting AI-generated or AI-derived content that is not verified, e.g. references that do not exist, or references that do not reflect the cited material

  • Submitting AI-generated content as original work without human oversight/verification

  • Using AI for peer review activities or editorial decisions in lieu of careful human expert review and feedback

  • Generating fake data, references, or experimental results, including the generation of synthetic data for analysis without clear indication of AI use

  • Creating fabricated case studies or patient scenarios

  • Bypassing human oversight in critical analysis or conclusions

  • Plagiarism through undisclosed AI assistance

  • Fabrication of research findings using AI

  • Falsification of methodology descriptions

  • Misrepresentation of AI capabilities or limitations

  • Failure to disclose AI use when required

Editorial Review Process

The editorial team acknowledges that no "AI detection" tools have been demonstrated to have a high level of validity. However, peer reviewers and editors are within their professional scope to question whether AI has been used and to request documentation of research processes, data, and methods if they have questions about the integrity of a submission's methods, data, analysis, or writing.

To that end, reviewers and/or editors may:

  • Request additional documentation of research processes

  • Require raw data and methodology verification

  • Conduct enhanced review for AI-assisted submissions

  • Reject submissions with inadequate AI disclosure

Also, submissions declaring AI use will undergo:

  • Additional scrutiny of methodology and data analysis

  • Verification of bias mitigation strategies

  • Assessment of AI appropriateness for the specific research context and use

  • Evaluation of transparency and disclosure adequacy

  • Evaluation of appropriate ethics approval where required

Consequences of Policy Violations

The Editors and the publisher may apply any of the following measures, proportionate to the violation:

First Offense:

  • Manuscript rejection with opportunity for resubmission

  • Required completion of research integrity training

  • Enhanced scrutiny of future submissions

Repeated or Severe Violations:

  • Temporary or permanent publication ban with the journal and affiliated journals

  • Notification to author’s institutional research integrity office for severe violations

  • Retraction of published articles if violations are discovered post-publication

  • Public correction or editorial expression of concern

  • Potential exclusion from editorial board participation or peer review activities

References


Articles from Blockchain in Healthcare Today are provided here courtesy of Partners in Digital Health, LLC
