Proc Natl Acad Sci U S A. 2024 Sep 5;121(37):e2318296121. doi: 10.1073/pnas.2318296121

Table 1.

Comparing the accuracy of SDMs on unseen examples. Bolded entries refer to the top-performing model for a given accuracy metric.

| Model name | Data | Res. | Loss | AUCROC | AUCPRC | Recall_obs | Recall_spp | Prec_spp | F1_spp | Pres. Acc. | Top 100_obs | Top 100_spp |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Deepbiosphere | Remote sensing + Climate | 256 m | Sampling-aware BCE | **0.9496** [0.89 to 0.98] | **0.0398** [0.01 to 0.11] | **1.0** [0.89 to 1.0] | 0.9583 [0.5 to 1.0] | **0.0131** [0.004 to 0.04] | **0.0258** [0.01 to 0.07] | **0.8918** | **0.7613** | **0.6667** [0.0 to 0.93] |
| Bioclim MLP | Climate | ~1,000 m | Sampling-aware BCE | 0.9346 [0.86 to 0.98] | 0.0346 [0.01 to 0.10] | **1.0** [0.86 to 1.0] | **0.9643** [0.43 to 1.0] | 0.0111 [0.002 to 0.03] | 0.0218 [0.005 to 0.06] | 0.8820 | 0.7035 | 0.5 [0.0 to 0.86] |
| RS TResNet | Remote sensing | 256 m | Sampling-aware BCE | 0.9268 [0.86 to 0.97] | 0.0265 [0.01 to 0.08] | **1.0** [0.83 to 1.0] | 0.8958 [0.47 to 1.0] | 0.01 [0.003 to 0.03] | 0.0198 [0.01 to 0.05] | 0.8645 | 0.6779 | 0.5 [0.0 to 0.8] |
| Inception V3 (34) | Remote sensing | 256 m | CE | 0.9391 [0.88 to 0.99] | 0.0359 [0.01 to 0.10] | 0.0 [0.0 to 0.0] | 0.0 [0.0 to 0.0] | 0.0 [0.0 to 0.0] | 0.0 [0.0 to 0.0] | 0.0013 | 0.7533 | 0.625 [0.0 to 0.92] |
| Maxent (33) | Climate | ~1,000 m | N/A | 0.8825 [0.78 to 0.95] | 0.018 [0.004 to 0.07] | 0.0 [0.0 to 0.5] | 0.1348 [0.0 to 0.57] | 0.0048 [0.0 to 0.03] | 0.0089 [0.0 to 0.06] | 0.2761 | 0.2910 | 0.0417 [0.0 to 0.5] |
| Random Forest (33) | Climate | ~1,000 m | N/A | 0.882 [0.76 to 0.95] | 0.0237 [0.004 to 0.09] | 0.2821 [0.0 to 0.88] | 0.3684 [0.0 to 0.82] | 0.0086 [0.0 to 0.04] | 0.0166 [0.0 to 0.07] | 0.3943 | 0.3709 | 0.2857 [0.0 to 0.60] |
| Random | N/A | N/A | N/A | 0.4995 [0.48 to 0.52] | 0.0022 [0.001 to 0.006] | 0.5 [0.4 to 0.6] | 0.5 [0.47 to 0.53] | 0.0016 [0.001 to 0.01] | 0.0031 [0.001 to 0.01] | 0.5005 | 0.0451 | 0.0333 [0.0 to 0.07] |
| Frequency | N/A | N/A | N/A | 0.5 [0.5 to 0.5] | 0.0016 [0.001 to 0.01] | 0.0 [0.0 to 0.0] | 0.0 [0.0 to 0.0] | 0.0 [0.0 to 0.0] | 0.0 [0.0 to 0.0] | 0.0656 | 0.1952 | 0.0 [0.0 to 0.0] |

Median [IQR] is reported for each accuracy metric and for each species distribution model, along with baseline random and frequency-based estimates. Examples used for evaluation were sampled from across all of California and were at least 1.3 km away from any training point (SI Appendix, SM 1.3.1 and Fig. S4A). For additional accuracy metrics, see SI Appendix, Table S8. Res. = Resolution; MLP = multilayer perceptron; BCE = binary cross-entropy; CE = cross-entropy; spp = per-species; obs = per-observation; AUCROC = area under the receiver operating characteristic curve; AUCPRC = area under the precision–recall curve; Prec = precision; Pres. Acc. = Presence Accuracy.
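The footnote implies an evaluation pattern of two steps: keep only test examples that lie at least 1.3 km from any training point, then summarize each per-example accuracy metric as a median with interquartile range. The Python sketch below illustrates that pattern under stated assumptions; the array names (`train_coords`, `test_coords`, `y_true`, `y_score`), the per-observation AUC definition, and the use of SciPy/scikit-learn are illustrative choices, not the authors' evaluation code.

```python
# A minimal sketch of the evaluation described in the table footnote:
# test examples are kept only if they lie >= 1.3 km from every training
# point, and each accuracy metric is summarized as median [IQR].
# Array names, shapes, and the per-observation AUC definition are
# illustrative assumptions, not taken from the paper's code.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.metrics import roc_auc_score, average_precision_score


def spatially_separated_mask(train_coords, test_coords, min_dist_m=1300.0):
    """Boolean mask of test points at least `min_dist_m` from any training point.

    Coordinates are assumed to be in a projected CRS with units of meters.
    """
    tree = cKDTree(train_coords)                     # index training locations
    nearest_dist, _ = tree.query(test_coords, k=1)   # distance to closest training point
    return nearest_dist >= min_dist_m


def median_iqr(values):
    """Median and [25th, 75th] percentiles of a metric across test examples."""
    q25, med, q75 = np.percentile(np.asarray(values, dtype=float), [25, 50, 75])
    return med, (q25, q75)


def per_observation_auc(y_true, y_score):
    """Per-observation AUCROC and AUCPRC over a multilabel test set.

    y_true:  (n_examples, n_species) binary presence labels
    y_score: (n_examples, n_species) predicted probabilities
    Rows containing a single class are skipped because AUC is undefined there.
    """
    aucrocs, aucprcs = [], []
    for labels, scores in zip(y_true, y_score):
        if labels.min() == labels.max():             # all-absent or all-present row
            continue
        aucrocs.append(roc_auc_score(labels, scores))
        aucprcs.append(average_precision_score(labels, scores))
    return aucrocs, aucprcs


# Usage with random placeholder data (shapes chosen only for illustration).
rng = np.random.default_rng(0)
train_coords = rng.uniform(0, 100_000, size=(500, 2))    # meters
test_coords = rng.uniform(0, 100_000, size=(200, 2))
keep = spatially_separated_mask(train_coords, test_coords)

y_true = rng.integers(0, 2, size=(200, 2_000))[keep]      # presence labels
y_score = rng.random(size=(200, 2_000))[keep]             # model probabilities
aucrocs, aucprcs = per_observation_auc(y_true, y_score)
print("AUCROC median [IQR]:", median_iqr(aucrocs))
print("AUCPRC median [IQR]:", median_iqr(aucprcs))
```

The same median [IQR] reduction would apply to the other per-observation and per-species metrics in the table (recall, precision, F1, top-100 accuracy), whose exact definitions are given in the paper's SI Appendix.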