Front Artif Intell. 2023 Jul 26;6:1124553. doi: 10.3389/frai.2023.1124553

Table 2.

Overview of methods for robust decision tree learning.

| Method | Ensemble | Complexity | Norm | Guarantees? | Code available? |
|---|---|---|---|---|---|
| Adversarial boosting (Kantchelian et al., 2016) | GB | n log n | l_0 | n | n |
| RobustTrees (Chen et al., 2019a) | RF+GB | n log n | l_∞ | n | y |
| RobustStumps (Andriushchenko and Hein, 2019) | GB | n^2 | l_∞ | y | y |
| TREANT (Calzavara et al., 2020b) | RF | n^2 | l_p^a | y | y^b |
| MetaSilvae (Ranzato and Zanella, 2021) | RF | ?^c | l_∞ | n | y |
| Feat. Part. Forests (Calzavara et al., 2021) | RF | n log n | l_0 | y | n |
| GROOT (Vos and Verwer, 2021) | RF | n log n | l_∞ | n | y |
| CostAwareRobust (Chen et al., 2021) | RF+GB | n log n | l_∞^d | n | y |
| ROCT (Vos and Verwer, 2022) | Single | exp(n) | l_∞^d | y | y |
| Relabeling (Vos and Verwer, 2023) | RF+GB | n^2.5 | l_∞^e | y | y |
| FPRDT (Guo et al., 2022) | Single | n log n | l_∞ | y | n |
| PRAdaBoost (Guo et al., 2022) | Ada | n log n | l_∞ | y | n |

Some methods robustify a single decision tree (Single); others learn ensembles: random forests (RF), gradient boosting (GB), or AdaBoost (Ada). This is indicated in the Ensemble column. The Complexity column gives the runtime of the learning algorithm in the number of training examples n; the number of features and the size of the models are ignored. The Norm column lists the attack model considered by the method, and the Guarantees column is yes (y) when the learned models are guaranteed to be robust under that attack model.
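To make the Norm column concrete, the sketch below (function and parameter names are illustrative, not taken from any of the cited implementations) checks whether an l_∞-bounded attacker with budget epsilon can move a sample's feature value across a decision-stump threshold, which is the core question robust split-finding methods must answer:

```python
def can_cross_threshold(x_value: float, threshold: float, epsilon: float) -> bool:
    """Under an l-infinity attack model with budget epsilon, the attacker can
    move the feature value anywhere in [x_value - epsilon, x_value + epsilon].
    The sample can be pushed to the other side of a stump's threshold exactly
    when that interval straddles the threshold (using the convention that a
    sample goes left when x <= threshold)."""
    return x_value - epsilon <= threshold < x_value + epsilon

# A sample at 0.45 with budget 0.1 can cross a threshold at 0.5:
print(can_cross_threshold(0.45, 0.5, 0.1))   # True
# With budget 0.04 it cannot:
print(can_cross_threshold(0.45, 0.5, 0.04))  # False
```

Samples whose perturbation interval straddles the split threshold are exactly the ones a robust learner must account for on both sides of the split.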

^a TREANT has a flexible attack model in the form of rewriting rules, allowing asymmetric perturbations (e.g., only positive) and a maximum budget (e.g., an l_1-norm).

^b Vos and Verwer (2021) provide an alternative implementation.

^c The paper includes experiments showing that the genetic algorithm converges within 50-70 iterations on the tested datasets.

^d An asymmetric attack model is supported, i.e., larger positive than negative perturbations can be allowed, but the perturbation set is still a box constraint.
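Such an asymmetric box constraint can be sketched as a per-feature reachable interval; the function and parameter names below are illustrative, not from the cited method:

```python
def reachable_interval(x_value: float, delta_neg: float, delta_pos: float) -> tuple[float, float]:
    """Asymmetric box-constraint attack model: the attacker may decrease the
    feature value by at most delta_neg and increase it by at most delta_pos.
    The reachable set is still an axis-aligned interval per feature (a box in
    the multivariate case), just not centered on the original value."""
    return (x_value - delta_neg, x_value + delta_pos)

# Allow larger positive than negative perturbations:
print(reachable_interval(0.5, 0.05, 0.2))  # (0.45, 0.7)
```

Setting delta_neg = delta_pos = epsilon recovers the symmetric l_∞ ball as a special case.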

^e Other norms are possible but are not evaluated.