
Figure 2:

UNet++-type architecture [69] used for all the problems in this paper. The black arrows represent a residual block consisting of two convolutions in series, where the output of each convolution is summed with the preceding tensor, and the resulting feature maps are concatenated before a non-linear activation (ELU) [74] is applied. The example output of the network is color-scaled from 0 to 1 and represents the probability of introgression at a given allele for a given individual. The loss function (represented by the bold symbol in the figure) is computed with the ground truth from the simulation and is the weighted binary cross entropy function (Equation 3). The weights and biases of the convolution operations are updated via gradient descent during training. The architecture we use for the problems discussed actually contains four down- and up-sampling operations rather than the three portrayed here.
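To make the caption's description concrete, below is a minimal PyTorch sketch of the residual block (two convolutions, each summed with the preceding tensor, with the intermediate outputs concatenated before a single ELU) together with a weighted binary cross entropy loss. All class, function, and parameter names here are illustrative assumptions rather than the paper's code; Equation 3 is not reproduced in this excerpt, so the loss shown is the conventional weighted form, which may differ in detail from the paper's exact expression.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Sketch of the residual block described in the caption: two
    convolutions in series, each summed with the previous tensor,
    with both intermediate outputs concatenated before one ELU.
    Channel counts and the 1x1 projection are assumptions."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # Assumed 1x1 convolution to restore channel width after concatenation.
        self.project = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.act = nn.ELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h1 = self.conv1(x) + x      # first convolution summed with its input
        h2 = self.conv2(h1) + h1    # second convolution summed with the previous output
        cat = torch.cat([h1, h2], dim=1)  # concatenate before the non-linearity
        return self.act(self.project(cat))

def weighted_bce(pred: torch.Tensor, target: torch.Tensor,
                 w_pos: float, w_neg: float) -> torch.Tensor:
    """Conventional weighted binary cross entropy; stands in for the
    paper's Equation 3, which is not shown in this excerpt."""
    eps = 1e-7
    pred = pred.clamp(eps, 1.0 - eps)  # avoid log(0)
    loss = -(w_pos * target * torch.log(pred)
             + w_neg * (1.0 - target) * torch.log(1.0 - pred))
    return loss.mean()

# Usage: features for per-site, per-individual introgression probabilities.
block = ResidualBlock(channels=16)
x = torch.randn(1, 16, 64, 64)
y = block(x)  # same shape as x: (1, 16, 64, 64)
```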