Publication:
Comparing ChatGPT and Human Expertise: Exploring New Avenues in Peer Review of Single-Case Experimental Research


Abstract

Peer review plays a pivotal role in validating research and upholding academic excellence. However, it grapples with problems such as reviewer reluctance, variable review durations and extended publication decision timelines. Artificial intelligence (AI), particularly ChatGPT, holds promise for augmenting the peer review process by enhancing efficiency and objectivity. This study compared peer reviews by human experts and ChatGPT for 18 single-case research design (SCRD) manuscripts in special education and psychology. Human reviewers and ChatGPT were evaluated for concordance in manuscript quality assessments and publication decisions. Findings reveal substantial agreement in quality assessments, suggesting ChatGPT's potential to assist when guided by structured rubrics. However, low agreement in publication recommendations highlights the nuanced nature of these decisions, which are influenced by subjectivity, domain expertise and contextual understanding. This underscores the need for a balanced approach that leverages AI's strengths while respecting human expertise in peer review practices. Implications for practice and recommendations for future research are provided.

WoS Quartile

Q1

Scopus Quartile

Q1

Source

European Journal of Special Needs Education

Volume

40

Issue

5

Start Page

809

End Page

823
