Cureus. 2023 Jul 20;15(7):e42214. doi: 10.7759/cureus.42214

Table 2. Mean readability scores of the texts written by humans and by ChatGPT, analyzed with Readable.com, were compared using Student's t-test; all means differed significantly between the two groups.

| Index | Reviewer | N | Mean | Std. Error of Mean | p-value |
|---|---|---|---|---|---|
| Flesch-Kincaid Grade Level | Human | 9 | 10.8011 | 0.41134 | 0.002 |
| | ChatGPT | 9 | 8.9111 | 0.30887 | |
| Gunning Fog Index | Human | 9 | 13.6656 | 0.4042 | 0.001 |
| | ChatGPT | 9 | 11.2389 | 0.2914 | |
| Coleman-Liau Index | Human | 9 | 12.7367 | 0.29822 | 0.001 |
| | ChatGPT | 9 | 9.9056 | 0.31639 | |
| SMOG Index | Human | 9 | 13.3322 | 0.3746 | 0.001 |
| | ChatGPT | 9 | 9.4556 | 0.3338 | |
| Automated Readability Index | Human | 9 | 11.0811 | 0.45869 | 0.007 |
| | ChatGPT | 9 | 9.4667 | 0.24944 | |
| FORCAST Grade Level | Human | 9 | 11.3356 | 0.13822 | 0.001 |
| | ChatGPT | 9 | 8.6722 | 0.1706 | |
| Flesch Reading Ease | Human | 9 | 46.6433 | 1.83603 | 0.001 |
| | ChatGPT | 9 | 64.5167 | 0.93775 | |
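
The per-index p-values follow from a two-sample comparison of group means. The sketch below (Python with SciPy, not part of the original article) shows how the reported p-value for the Flesch-Kincaid Grade Level row can be reproduced from the summary statistics alone, assuming the "Std. Error of Mean" column is the standard error (SD/√n) and a two-sided, pooled-variance Student's t-test as stated in the caption.

```python
# Minimal sketch: reproduce a Table 2 p-value from the reported summary statistics.
# Assumes "Std. Error of Mean" = SD / sqrt(n) and a two-sided Student's (pooled-variance) t-test.
import math
from scipy import stats

n = 9  # texts per group, as reported in the table

# Flesch-Kincaid Grade Level row: (mean, standard error of the mean)
human_mean, human_se = 10.8011, 0.41134
chatgpt_mean, chatgpt_se = 8.9111, 0.30887

# Convert standard errors back to standard deviations: SD = SE * sqrt(n)
human_sd = human_se * math.sqrt(n)
chatgpt_sd = chatgpt_se * math.sqrt(n)

t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=human_mean, std1=human_sd, nobs1=n,
    mean2=chatgpt_mean, std2=chatgpt_sd, nobs2=n,
    equal_var=True,  # Student's t-test with pooled variance
)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # p ≈ 0.002, matching the table
```

Substituting the means and standard errors from any other row gives that row's p-value in the same way; the same reconstruction under these assumptions yields p-values consistent with those reported above.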