Author manuscript; available in PMC: 2024 Aug 7.
Published in final edited form as: Neuroimage. 2020 Oct 1;224:117414. doi: 10.1016/j.neuroimage.2020.117414

Tailored haemodynamic response function increases detection power of fMRI in awake dogs (Canis familiaris)

Magdalena Boch a,b, Sabrina Karl c, Ronald Sladky a, Ludwig Huber c, Claus Lamm a,#,*, Isabella C Wagner a,#,*
PMCID: PMC7616344  EMSID: EMS196378  PMID: 33011420

Abstract

Functional magnetic resonance imaging (fMRI) of awake and unrestrained dogs (Canis familiaris) has been established as a novel opportunity for comparative neuroimaging, promising important insights into the evolutionary roots of human brain function and cognition. However, data processing and analysis pipelines are often derivatives of methodological standards developed for human neuroimaging, which may be problematic due to profound neurophysiological and anatomical differences between humans and dogs. Here, we explore whether dog fMRI studies would benefit from a tailored dog haemodynamic response function (HRF). In two independent experiments, dogs were presented with different visual stimuli. BOLD signal changes in the visual cortex during these experiments were used for (a) the identification and estimation of a tailored dog HRF, and (b) the independent validation of the resulting dog HRF estimate. Time course analyses revealed that the BOLD signal in the primary visual cortex peaked significantly earlier in dogs compared to humans, while being comparable in shape. Deriving a tailored dog HRF significantly improved the model fit in both experiments, compared to the canonical HRF used in human fMRI. Using the dog HRF yielded significantly increased activation during visual stimulation, extending from the occipital lobe to the caudal parietal cortex, the bilateral temporal cortex, into bilateral hippocampal and thalamic regions. In sum, our findings provide robust evidence for an earlier onset of the dog HRF in two visual stimulation paradigms, and suggest that using such an HRF will be important to increase fMRI detection power in canine neuroimaging. By providing the parameters of the tailored dog HRF and related code, we encourage and enable other researchers to validate whether our findings generalize to other sensory modalities and experimental paradigms.

Keywords: Dog, Cognition, Vision, fMRI, Haemodynamic Response function (HRF)

1. Introduction

Animal research involving domesticated dogs (Canis familiaris) yields important insights into non-invasive comparative neuroscience (Andics, Gácsi, Faragó, Kis, & Miklósi, 2014; Bunford, Andics, Kis, Miklósi, & Gácsi, 2017; Fitch, Huber, & Bugnyar, 2010), and allows researchers to study the neural correlates of cognitive abilities, i.e., how dogs perceive or process their environment (e.g. Andics & Miklósi, 2018; Thompkins, Deshpande, Waggoner, & Katz, 2016 for review). For example, recent work has used functional magnetic resonance imaging (fMRI) to study the neural representations during auditory stimulation or lexical processing (Andics et al., 2016, 2014; Prichard et al., 2019; Prichard, Cook, Spivak, Chhibber, & Berns, 2018), face perception (Cuaya, Hernández-Pérez, & Concha, 2016; Dilks et al., 2015; Hernández-Pérez, Concha, & Cuaya, 2018; Szabó et al., 2020; Thompkins et al., 2018), olfactory processing (Berns, Brooks, & Spivak, 2015; Jia et al., 2014), sense for numeracy (Aulet et al., 2019), jealousy (Cook, Prichard, Spivak, & Berns, 2018), and reward processing (Berns, Brooks, & Spivak, 2012; Berns, Brooks, Spivak, & Levy, 2017; Berns, Brooks, & Spivak, 2013; Cook, Prichard, Spivak, & Berns, 2016; Cook, Spivak, & Berns, 2014; Prichard, Chhibber, Athanassiades, Spivak, & Berns, 2018) in dogs. So far, dog fMRI studies have relied on methodological standards originally developed for human (f)MRI, but it has been proposed that hardware as well as data analysis approaches tailored to dogs might be more suitable (Huber & Lamm, 2017). Although the majority of fMRI pre-processing steps are readily transferable from humans to dogs (e.g., slice timing correction, realignment, smoothing), humans and dogs might differ in many aspects other than apparent differences in neuroanatomy (Hecht et al., 2019; Horschler et al., 2019; Schoenebeck & Ostrander, 2013), such as differences in vascular and neuronal physiology. 
Here, we critically examined the state of the art in canine neuroimaging methodology and aimed to optimize data processing and analysis pipelines to improve fMRI sensitivity and specificity. fMRI-based neuroimaging commonly uses the general linear model (GLM) to describe voxel-wise haemodynamic response time courses by convolving the regressors of the experimental conditions with a haemodynamic response function (referred to as “human HRF” throughout the text). This typically involves a double-gamma function to account for the delayed peak at approx. 5 s after stimulus onset and the post-stimulus undershoot (Friston, Fletcher, et al., 1998; Friston et al., 1995; Friston, Jezzard, & Turner, 1994; Worsley & Friston, 1995). So far, canine neuroimaging studies have used the standard human HRF (e.g., Andics et al., 2014; Cuaya, Hernández-Pérez, & Concha, 2016), a model of the human HRF based on a single gamma function (e.g., Cook et al., 2014; Dilks et al., 2015), or a Fourier basis set (Aguirre et al., 2007). However, assumptions about the (canonical) human HRF shape and its temporal dynamics might not apply to dogs. An accurate HRF model is crucial, as even minor deviations can lead to substantial loss of power (Handwerker, Ollinger, & D’Esposito, 2004), thus not only reducing the chance of detecting true effects but also increasing the likelihood of false-positive results (Lindquist, Meng Loh, Atlas, & Wager, 2009) and inflated effect sizes (e.g., Ioannidis, 2005; Simmons, Nelson, & Simonsohn, 2011). Additionally, fMRI studies with small sample sizes are often considered underpowered (Button et al., 2013; Cremers, Wager, & Yarkoni, 2017; Poldrack et al., 2017; Simmons et al., 2011), which is a ubiquitous problem in canine research due to the complexity of the experiments (median of approx. 12.5 dogs, although sample sizes are increasing).
Under these circumstances, it is particularly crucial to test whether the BOLD response in dogs is adequately captured with the canonical human HRF, or some variations of it.
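To make the convolution step concrete, the following Python sketch mirrors the double-gamma parametrization of SPM's canonical HRF and convolves it with a block-design boxcar (the study itself used SPM/MATLAB; this is an illustration, and the block timings follow experiment 1 described below):

```python
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(tr=1.0, p1=6.0, p2=16.0, p3=1.0, p4=1.0, p5=6.0, length=32.0):
    """Double-gamma HRF in the style of SPM's canonical HRF:
    p1/p2 = delays of response/undershoot (s), p3/p4 = dispersions,
    p5 = response-to-undershoot ratio, length = kernel duration (s)."""
    t = np.arange(0.0, length, tr)
    h = gamma.pdf(t, p1 / p3, scale=p3) - gamma.pdf(t, p2 / p4, scale=p4) / p5
    return h / h.sum()

# Block design of experiment 1: six 10 s baseline / 10 s checkerboard cycles (TR = 1 s)
boxcar = np.tile(np.r_[np.zeros(10), np.ones(10)], 6)
regressor = np.convolve(boxcar, double_gamma_hrf())[:len(boxcar)]  # GLM task regressor
```

With the default (human) parameters, the modelled response peaks roughly 5 s after stimulus onset, which is the assumption the present study puts to the test for dogs.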

The shape of the human HRF has been discussed extensively since its adoption in fMRI data analysis (Aguirre, Zarahn, & D’Esposito, 1998; Boynton, Engel, Glover, & Heeger, 1996; Glover, 1999). Numerous factors causing HRF variability have been identified, e.g., developmental changes (Arichi et al., 2012), and clinical conditions (Ford, Johnson, Whitfield, Faustman, & Mathalon, 2005). A frequent approach to account for potential HRF variability within a participant sample (used twice for a dog sample, Jia et al., 2014, 2016) is to add temporal and/or dispersion derivatives (TDD) along with the HRF regressor when applying the GLM, used to calculate a so-called informed basis set (Friston, Fletcher, et al., 1998; Friston, Josephs, Rees, & Turner, 1998; Henson, Price, Rugg, Turner, & Friston, 2002). Despite the increased flexibility in the model, the basis function depends on prior knowledge about the average shape of the underlying BOLD signal, which is currently not available in canine neuroscience research.
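A minimal sketch of such an informed basis set, approximating the temporal and dispersion derivatives by finite differences (Python for illustration; the parameter values are the SPM defaults, and the simplified HRF below is an assumption, not the exact SPM implementation):

```python
import numpy as np
from scipy.stats import gamma

def hrf(t, delay=6.0, disp=1.0, u_delay=16.0, ratio=6.0):
    # simplified double-gamma HRF (SPM-like parametrization)
    return gamma.pdf(t, delay / disp, scale=disp) - gamma.pdf(t, u_delay, scale=1.0) / ratio

t = np.arange(0.0, 32.0, 1.0)
h = hrf(t)
# temporal derivative: sensitivity of the model to small onset shifts
td = (hrf(t) - hrf(t + 0.1)) / 0.1
# dispersion derivative: sensitivity to small changes in response width
dd = (hrf(t) - hrf(t, disp=1.01)) / 0.01
basis = np.column_stack([h, td, dd])  # informed basis set for the GLM
```

Adding `td` and `dd` as extra columns of the design matrix lets the fit absorb modest latency and width deviations, but, as noted above, the canonical column still encodes a prior about the average response shape.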

Previous studies using invasive recordings indeed demonstrated that the HRF varies across mammalian species. In comparison to humans, the HRF was shown to peak earlier in rats (De Zwart et al., 2005; Lambers et al., 2020; Silva, Koretsky, & Duyn, 2007) and mice (Chen et al., 2020), while the HRF in macaque monkeys appears similar (Baumann et al., 2010; Goense & Logothetis, 2008; Koyama et al., 2004; Logothetis, Pauls, Augath, Trinath, & Oeltermann, 2001; Nakahara, Hayashi, Konishi, & Miyashita, 2002; Patel, Cohen, Baker, Snyder, & Corbetta, 2018). Deviations from the human HRF in terms of shape and temporal dynamics seem to decrease in species with closer common ancestry to humans (Upham, Esselstyn, & Jetz, 2019) and with increasing absolute brain size (e.g., Roth & Dicke, 2005 for review). Considering the variations across species and potential differences in underlying neurophysiology, it seems plausible that the human HRF might deviate from the average BOLD signal in dogs. However, precise conclusions are currently not possible, as systematic investigations of the BOLD signal have not yet been performed in dogs.

Here, we aimed to close this gap and used non-invasive fMRI in awake dogs that were specifically trained for this approach. In two independent experiments, we used different visual stimulation experiments and a step-wise analysis approach to establish and validate our results, respectively. In the first experiment, dogs viewed a flickering checkerboard interspersed with a baseline condition (flickering checkerboard experiment, experiment 1). The experiment employed a block design, aimed at achieving a robust measure of the average BOLD signal in the primary visual cortex (V1). Based on the resulting V1 BOLD signal data, we identified and estimated a tailored dog HRF, compared its model fit to the one based on the human HRF, and compared whole-brain activation differences between the two HRFs. We also tested whether adding time and dispersion derivatives to the human HRF could sufficiently account for potential deviations of the dog HRF from the human HRF. Data from a second experiment, which had employed an event-related visual stimulation design (face processing experiment, experiment 2), were then used to validate the results from the flickering checkerboard experiment. We opted for visual stimulation as V1 can be easily located (see e.g., Langley & Grünbaum, 1890; Marquis, 1934; Uemura, 2015; Wing & Smith, 1942), thus ameliorating the problem of a common three-dimensional coordinate system in canines. Finally, to encourage reproducibility, we openly share our data and provide a detailed description of the processing and analysis pipeline (see also for similar challenges on reproducibility in human fMRI: Carp, 2012b, 2012a; Nichols et al., 2017; Poldrack et al., 2017, 2008). Together, our results provide a first investigation of whether the human HRF model appropriately fits the average BOLD signal in dogs and whether estimating a novel dog HRF can increase fMRI specificity and detection power.

2. Materials and methods

2.1. Sample

Seventeen pet dogs participated in the flickering checkerboard experiment (experiment 1; 10 females, age range = 3–11 years, mean = 7.24 years, SD = 2.33 years), consisting of 12 border collies, 2 Australian shepherds, 1 border collie Australian shepherd mix, 1 Labrador retriever and 1 mixed-breed dog (weight range = 15–27 kg, mean = 19.67 kg, SD = 3.87). A subsample of fourteen dogs also participated in the face processing experiment (experiment 2; 8 females, age range = 3–11 years, mean = 7.21 years, SD = 2.46 years) within the same session or at most two months apart, consisting of 10 border collies, 1 Labrador retriever, 1 Australian shepherd, 1 border collie Australian shepherd mix and 1 mixed-breed dog (weight range = 15–27 kg, mean = 19.25 kg, SD = 4.03).

All dogs passed an initial medical examination concerning eyesight and general health. The human caregivers gave written informed consent to their dogs’ participation and did not receive any monetary compensation. The dogs were fully awake and unrestrained, and were able to exit the MR scanner at any time. To achieve this, they received extensive training prior to the MRI sessions in order to habituate them to the MRI environment (see Karl, Boch, Virányi, Lamm, & Huber, 2019 for a detailed description of the training procedure, and Berns & Cook, 2016; Strassberg, Waggoner, Deshpande, & Katz, 2019 for similar procedures). The study was approved by the institutional ethics and animal welfare commission in accordance with Good Scientific Practice (GSP) guidelines and national legislation at the University of Veterinary Medicine Vienna (ETK-06/06/2017), based on a pilot study conducted at the University of Vienna. The current study complies with the ARRIVE Guidelines (Kilkenny, Browne, Cuthill, Emerson, & Altman, 2010).

2.2. Experimental setup

2.2.1. Preparation

Together with the dog trainer, the dog entered the MR scanner room wearing earplugs and an additional head bandage to secure optimal earplug positioning and to enhance noise protection. The dog then accessed the scanner bed via a custom-made ramp and positioned the head inside the coil, seated in sphinx position (Fig. 1A). The dog trainer then moved the dog into the scanner bore, and visual tasks were presented on a 32-inch MR-compatible computer screen placed at the end of the scanner bore. Additionally, we used the camera of an eye-tracker (Eyelink 1000 Plus, SR Research, Ontario, Canada) to monitor whether the dogs stayed awake, closed their eyes during stimulus onsets, or looked away from the visual stimulation (i.e., downward gaze during stimulus presentation; see supplementary material A, figure S1 for an example of the monitoring setup), and to monitor overall movement (N = 5 dogs in experiment 2 were not monitored due to later implementation of the camera). The dog trainer remained in the MR-scanner room throughout the entire scan session but left the dog’s visual field before task onset. The majority of the dogs first participated in the flickering checkerboard experiment, followed by the face processing experiment in a subsequent MR-session (Fig. 1B). Data acquisition was aborted if the dog moved extensively (as observed using eye-tracking, see above) or if the dog exited the scanner bore during the task. Data collection was then repeated within the same or a subsequent session, depending on the dog’s motivation. Following the scan session, we evaluated the re-alignment parameters and re-invited the dog to repeat the experiment in a subsequent session if head motion exceeded a threshold of 3 mm (Fig. 1C).
On average, two scan sessions were necessary to complete the experiment below the motion threshold for both experiments; 12 out of 17 dogs and 9 out of 14 dogs succeeded in their first scan session for experiment 1 and experiment 2, respectively. After completing a run, the dog exited the MR scanner and received a food reward.

Fig. 1. Overview of experimental approach to explore the average BOLD signal in dogs and estimate a tailored dog haemodynamic response function (HRF).


(A) All dogs were trained to position their head in a 15-channel human knee coil and to stay motionless during data acquisition. (B) We acquired data in two different visual stimulation experiments. In (1), we extracted the average primary visual cortex (V1) BOLD signal using data from a flickering checkerboard experiment, and estimated a tailored dog HRF. We compared this dog HRF to the canonical human HRF, and to the human HRF with time and dispersion derivatives (TDD). In (2), we validated the results using a face processing experiment, whose data served as an independent test data set. (C) Structural scans were acquired in a session prior to functional data acquisition of the visual stimulation experiments; functional tasks were acquired in separate sessions. Movement parameters were assessed after successful completion of a task. If motion exceeded 3 mm, we repeated the task in additional sessions. (D) We created individual tailor-made brain masks using itk-SNAP (Yushkevich et al., 2006) to skull-strip the structural images and consequently improve co-registration and normalization of the canine neuroimaging data.

2.2.2. Flickering checkerboard experiment (experiment 1)

The task used in this experiment alternated between blocks of visual stimulation (flickering checkerboard covering the whole screen and green cross in the centre for 10 s) and a visual baseline with a green cross presented on a black screen for 10 s. The total task duration was 2.2 min, including six blocks of visual stimulation and six blocks of baseline in a fixed order, starting with the visual baseline condition (see Fig. 1B). We chose this experiment for the dog HRF estimation because a block design can be expected to be more robust and predictable, even if the human and dog HRFs and the actual BOLD signal time courses differed (see supplementary material A, 2 Flipping experiment 1 and 2 for a flipped study design, using the face processing experiment as HRF estimation data set, which yielded similar results).

2.2.3. Face processing experiment (experiment 2)

The task for experiment 2 alternated between short events of visual stimulation (3 s clips of varying conditions, showing smooth transitions between two facial expressions from different human models, all on white background; 500 × 500 pixels) and a visual baseline with a black cross on a white screen, jittered between 3 and 7 s (see Fig. 1B). Within the scope of the present methodological study, we focused on visual responses compared to baseline, irrespective of the different conditions (results of this will be reported elsewhere). The total task comprised 60 trials of visual stimulation split into two runs. Each run took 5 min, with a short break outside the MR scanner if both runs were acquired in the same session.

2.3. MRI data acquisition

Data were collected using a 3T Siemens Skyra MR-system with a 15-channel coil developed for structural imaging of the human knee. Functional imaging data for both tasks were obtained from 24 axial slices (interleaved acquisition; descending order, covering the whole brain) using a 2-fold multiband-accelerated echo planar imaging (EPI) sequence and a voxel size of 1.5 × 1.5 × 2 mm3 (TR/TE = 1000/38 ms, field of view (FoV) = 144 × 144 × 58 mm3, flip angle = 61°, 20% gap). The task from experiment 1 (flickering checkerboard experiment) consisted of a single run comprising 134 scans, and the task employed in experiment 2 (face processing experiment) comprised two runs of 270 scans each. The dogs had multiple attempts to complete the task in case of excessive head motion (see 2.2 Experimental setup). For one dog, we truncated the second run to 190 scans due to excessive motion (change of head position) and the dog’s unavailability for repeating the session. The structural image was obtained using a voxel size of 0.7 mm isotropic (TR/TE = 2100/3.13 ms, FoV = 230 × 230 × 165 mm3) and was acquired in a prior scan session, separated from the functional imaging sessions.

2.4. Data processing and statistical analysis

2.4.1. MRI data preprocessing

All imaging data were analysed using SPM12 (https://www.fil.ion.ucl.ac.uk/spm/software/spm12/) and Matlab 2014b (MathWorks; see Fig. 2 for an overview of the workflow). After slice timing correction (referenced to the middle slice, Sladky et al., 2011) and image realignment, the functional images were manually reoriented to match the orientation of the canine breed-averaged template (Nitzsche et al., 2017) with the rostral commissure as a visual reference, using the SPM module “Reorient images / Set origin”. We then manually skull-stripped the structural image using an individual binary brain mask for each dog, created using itk-SNAP (Yushkevich et al., 2006). Based on preliminary analyses, skull-stripping canine imaging data proved to be essential for successful automatic co-registration. This way, the co-registration algorithm successfully detects brain borders, rather than incorrectly relying on the large muscles that surround the dog brain and have similar image intensity (see Fig. 1D). The structural image, the individual binary brain mask, and the functional imaging data were then co-registered to the mean image of each run. Next, the structural image was segmented (“Old Segmentation” module of SPM12) into grey matter, white matter, and cerebrospinal fluid, using the tissue probability maps provided by the canine breed-averaged template (Nitzsche et al., 2019). We then normalized (using the “Old Normalization” module of SPM12) the functional and structural imaging data, along with the individual binary brain mask. Lastly, functional images were resliced (1.5 mm isotropic) and smoothed using a 3 mm Gaussian kernel (full-width-at-half-maximum, FWHM).

Fig. 2.


Schematic description of the tailored data processing workflow for the canine neuroimaging data, including (A) functional images and (B) the structural image. For the first-level analysis (step 10, First level (GLMs)), functional data are masked using anatomical boundaries (normalized binary mask). Illustrative structural and functional images as well as the binary mask were derived from one dog in the sample; tissue probability maps (TPMs) were from the canine breed-averaged atlas (Nitzsche et al., 2019). Numbers (in bold) indicate the sequence of processing steps. Est., estimated; Res., resliced; GLM, general linear model.

To additionally account for head motion, we performed motion scrubbing by calculating the scan-to-scan motion for each dog, referring to the framewise displacement (FD) between the current scan t and its preceding scan t-1. For each scan that exceeded the FD threshold of 0.5 mm, we entered an additional motion regressor to the first-level GLM design matrix (Power, Barnes, Snyder, Schlaggar, & Petersen, 2012; Power et al., 2014). For the checkerboard experiment (experiment 1), on average 7.8% of the scans were removed (~10/134 scans, ranging from 0 to 36 scans; mean FD: 0.23 mm, 90th percentile: 0.39 mm). For the face processing experiment (experiment 2), on average 3.5% (run 1) and 5.5% (run 2) of the scans were removed (run 1: ~10/270 scans; run 2: ~15/270 scans; ranging from 0 to 52 across runs; mean FD run 1: 0.18 mm, 90th percentile run 1: 0.28 mm; mean FD run 2: 0.22 mm, 90th percentile run 2: 0.34 mm).
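The scrubbing procedure can be sketched as follows (Python for illustration; the 50 mm rotation radius is the human convention from Power et al., 2012, and would plausibly be reduced for dog head sizes):

```python
import numpy as np

def framewise_displacement(rp, radius=50.0):
    """FD per Power et al. (2012): sum of absolute scan-to-scan differences of the
    six realignment parameters (3 translations in mm, 3 rotations in radians),
    with rotations converted to arc length on a sphere of the given radius."""
    diffs = np.abs(np.diff(rp, axis=0))
    diffs[:, 3:] *= radius            # rotations -> mm
    return np.r_[0.0, diffs.sum(axis=1)]

def scrub_regressors(fd, thresh=0.5):
    """One spike regressor (column) per scan exceeding the FD threshold,
    to be appended to the first-level GLM design matrix."""
    bad = np.where(fd > thresh)[0]
    reg = np.zeros((len(fd), len(bad)))
    reg[bad, np.arange(len(bad))] = 1.0
    return reg
```

Each spike regressor absorbs the signal of one high-motion scan, effectively removing it from the parameter estimation without changing the temporal structure of the run.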

2.4.2. Template normalization

We attempted to provide a unified coordinate system by combining two available templates, (1) based on a canine breed-average (Nitzsche et al., 2019) combined with (2) the normalized labels from another canine template based on a single male Golden Retriever (Czeibert, Andics, Petneházy, & Kubinyi, 2019). First, we segmented (“Old Segmentation”) the structural template (Czeibert et al., 2019) using the tissue probability maps provided by the breed-averaged template (Nitzsche et al., 2019). Then, we normalized (“Old Normalization”) both the structural template and the NIfTI-file containing the atlas labels.

2.5. fMRI data analysis

We now provide an overview of the analysis approach, followed by more details on each analysis step in the following section (see also Fig. 3). For the exploratory investigation of the average BOLD signal and estimation of the tailored dog HRF, we first analysed activation changes in V1 during experiment 1 (contrast flickering checkerboard > visual baseline) in the following steps: (1) we extracted the average V1 time course of the BOLD signal employing a finite impulse response (FIR) model (exploration and estimation analysis step 1, extraction V1 BOLD signal); (2) we estimated a tailored dog HRF based on the FIR data above (exploration and estimation analysis step 2, dog HRF estimation); (3) we then compared the human canonical HRF (i.e., the default HRF parameters provided by SPM12) with the dog HRF using model fit analysis and Wilcoxon signed ranks tests (exploration and estimation analysis step 3, model fit comparison). Then, to expand comparisons to the whole brain, (4) we performed first-level analyses using the human HRF, the human HRF with time and dispersion derivatives, and the tailored dog HRF (exploration and estimation analysis step 4, first-level GLMs), and (5) analysed the neuroimaging data at the group level using one-sample and paired-sample t-tests (exploration and estimation analysis step 5, group-level activation comparisons).

Fig. 3. Overview of analyses underpinning the exploration of the average V1 BOLD signal in dogs, estimation of the tailored dog haemodynamic response function (HRF), and validation of the HRF in a second independent data set.


(A) Data from the flickering checkerboard experiment served for the exploratory and estimation analysis (1) to extract the average V1 BOLD signal in dogs and visually compare it to the canonical human HRF model (i.e., the default HRF parameters provided by SPM12) using a finite impulse response (FIR) model, (2) to estimate a tailored dog HRF based on the empirical data, and (3) to compare model fits of the human and dog HRF in the visual cortex. On the whole-brain level, (4) we then performed first-level analyses using the human HRF, the human HRF along with time and dispersion derivatives (TDD) and the tailored dog HRF in order to (5) perform whole-brain group comparisons using one-sample and paired-sample t-tests across HRF models. (B) Results from (A) were then validated using the data from the face processing experiment as an independent validation data set. All analysis steps were identical to above, except for the dog HRF estimation. GLM, general linear model; BOLD, Blood Oxygenation Level Dependent

Next, we validated the results from experiment 1, which revealed an earlier peak of the V1 BOLD signal in dogs, by analysing V1 activation changes during the face processing experiment (contrast faces > visual baseline), using a similar but modified approach: (1) we extracted the average time course of the V1 BOLD signal during the face processing experiment using a FIR model (validation step 1, extraction V1 BOLD signal); (2) we compared the HRF models based on their model fit and using Wilcoxon signed ranks tests (validation step 2, model comparison); (3) we performed univariate activation analyses using the human HRF, the human HRF along with time and dispersion derivatives (TDD), and the dog HRF (validation step 3, first-level GLMs); lastly, (4) we performed group activation analyses along with paired-sample t-tests (validation step 4, group-level activation comparisons).

2.5.1. Exploration and estimation analysis: Flickering checkerboard experiment (experiment 1)

Step 1: Extraction average V1 BOLD signal. We used a finite impulse response (FIR) model to measure the average V1 time course of the BOLD signal in dogs. This flexible approach makes minimal assumptions about the shape of the BOLD signal and thus results in independent response estimates for a predetermined number of time bins (in the present case, one time bin per TR). We estimated FIRs covering the visual stimulation blocks (starting at stimulus onset (0 s) until 10 s after stimulus offset), yielding a duration of 20 s. The 20 s were then divided into 20 time bins (TR = 1 s), each modelled with a separate regressor using an impulse response function. We then extracted the average V1 time course, based on V1 coordinates obtained from the group-level comparison using the human HRF (exploratory and estimation analysis step 5; see also Table 1, section “human HRF”), using (a) a 4 mm sphere placed around the local maximum of the cluster that covered the occipital lobe (Fig. 4A) and (b) a mask covering V1 as determined by our atlas labels (Czeibert et al., 2019; Nitzsche et al., 2019). Finally, we extracted each dog’s average BOLD time series and calculated the time course of activation induced by the visual stimulation block across all dogs.
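The FIR design used in this step can be sketched as follows (Python for illustration; the onsets follow the block timing of experiment 1, i.e., six 10 s checkerboard blocks every 20 s after an initial 10 s baseline):

```python
import numpy as np

def fir_design(onsets, n_scans, n_bins=20, tr=1.0):
    """FIR design matrix: one regressor (column) per post-onset time bin
    (here 20 bins of 1 TR each), making no assumption about the HRF shape."""
    X = np.zeros((n_scans, n_bins))
    for onset in onsets:
        start = int(round(onset / tr))
        for b in range(n_bins):
            if start + b < n_scans:
                X[start + b, b] = 1.0
    return X

# Experiment 1: 134 scans, stimulation onsets at 10, 30, ..., 110 s
X = fir_design(onsets=np.arange(10, 130, 20), n_scans=134, n_bins=20)
# regressing the V1 signal on X yields one beta per bin, i.e. the average response shape
```

The 20 resulting betas per dog correspond to the average BOLD signal time course that is plotted in Fig. 4 and used for the HRF estimation in step 2.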

Table 1. Flickering checkerboard experiment: Task-related activation during visual stimulation.
Contrast, brain region & HRF — x, y, z coordinates (breed-averaged template), z-value, cluster size
Human HRF: Flickering checkerboard > visual baseline (k = 14)
L caudal splenial gyrus (O) -1 -29 16 5.71 610
L hippocampus (T) -9 -18 1 4.47 15
Human HRF+TDD: Flickering checkerboard > visual baseline (k = 10)
L caudal splenial gyrus (O) 1 -26 19 6.05 246
R medial suprasylvian gyrus (T) 13 -20 18 4.61 11
Dog HRF: Flickering checkerboard > visual baseline (k = 15)
R caudal splenial gyrus (O) 1 -27 18 6.23 823
L medial suprasylvian gyrus (T) -16 -18 19 4.82 30
L caudal suprasylvian gyrus (T) -19 -24 7 4.17 18
L hippocampus (T) -9 -17 3 4.08 23
R hippocampus (T) 8 -15 6 4.04 19
Human HRF+TDD > human HRF: Flickering checkerboard > visual baseline (k = 10)
L caudal splenial gyrus (O) -3 -30 16 5.58 175
Human HRF > dog HRF: Flickering checkerboard > visual baseline (k = 14)
L insular cortex (T) -18 -11 -2 4.58 14
Dog HRF > human HRF: Flickering checkerboard > visual baseline (k = 14)
R caudal splenial gyrus (O) 1 -27 18 5.78 316
R medial suprasylvian gyrus (T) 17 -18 21 4.71 54
L medial suprasylvian gyrus (T) -16 -20 21 4.17 14
Human HRF + TDD > dog HRF: Flickering checkerboard > visual baseline (k = 10)
L caudal splenial gyrus (O) -3 -30 16 6.00 162

Effects were tested for significance with a cluster-defining threshold of p < 0.001 and a cluster probability of p < 0.05 FWE-corrected for multiple comparisons. Critical cluster sizes (k) are reported along with the results for each one-sample t-test, each haemodynamic response function (HRF) model, and for the paired-sample t-tests between all HRF models. The first local maximum within each cluster is reported; coordinates represent the location of peak voxels and refer to the canine breed-averaged template (Nitzsche et al., 2019). This template, along with a single-dog template (Czeibert et al., 2019), served to determine the anatomical nomenclature for all tables. TDD, time and dispersion derivatives; O, occipital lobe; T, temporal lobe; L, left; R, right.

Fig. 4.


Visual comparison reveals an earlier peak of the BOLD signal in dogs than when modelled using the canonical human haemodynamic response function (HRF) for both independent data sets, leading to the estimation of a tailored dog HRF. After calculating the finite impulse response (FIR) models, we extracted individual response estimates from the maximal response in primary visual cortex (V1) using coordinates from (A) the standard human HRF for the flickering checkerboard experiment (exploration and estimation analysis, step 5; x = -1, y = -29, z = 16, 4 mm) and (B) the standard human HRF along with time and dispersion derivatives for the face processing experiment (validation analysis, step 4; x = -1, y = -29, z = 19, 4 mm). Based on the extracted data, we calculated the averaged BOLD signal time course for the visual stimulation across trials and dogs for both (A) the flickering checkerboard experiment and (B) the face processing experiment (both runs separately). The dog HRF was estimated based on the average BOLD signal time course from the flickering checkerboard experiment (exploration and estimation analysis, step 2), while the face processing experiment served as an independent test data set to validate the results derived from the exploration and estimation analysis. The tailored dog HRF and the human HRF are plotted in addition to the extracted BOLD signal time course to display the fit of the HRF models for both experiments. For illustration purposes, the dog and human HRFs were scaled by the parameter estimates (arbitrary units, a.u.) from the respective GLMs. SEM, standard error of the mean.

Step 2: Estimation of the dog HRF. Based on the results from step 1, which upon visual inspection revealed the need for a tailored dog HRF with earlier onset, we estimated a new parametrization for SPM’s canonical HRF, yielding a tailored dog HRF model. The spm_hrf function uses seven optional parameters to specify the shape of the HRF: the delay of the response (relative to onset, p1 = 6 s), the delay of the undershoot (relative to onset, p2 = 16 s), the dispersion of the response (p3 = 1), the dispersion of the undershoot (p4 = 1), the ratio of the response to the undershoot (p5 = 6), the onset (p6 = 0 s), and the length of the kernel (p7 = 32 s). We used MATLAB’s fminsearch function, a multidimensional unconstrained nonlinear minimization method, to optimize the model fit of the regression analysis (R2-statistics of MATLAB’s regress function) by varying the values of p1, p2, p5, p6. The assumed plausible ranges for the haemodynamic parameters were: p1 = [1 10 s], p2 = [1 20 s], p5 = [1 10], p6 = [0 5 s], and the regression analysis was identical to a standard SPM first-level analysis (see above, step 1). We chose not to deviate from the default values for response (p3) or undershoot dispersion (p4), or the overall kernel length (p7), to prevent overfitting.
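The estimation procedure can be sketched in Python (scipy's Nelder-Mead method is the simplex algorithm behind MATLAB's fminsearch); the simplified HRF parametrization and the range penalty below are illustrative stand-ins for the SPM/MATLAB code actually used, not a reproduction of it:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

def spm_hrf_like(p1, p2, p5, p6, p3=1.0, p4=1.0, length=32.0, tr=1.0):
    """Double-gamma HRF in the style of spm_hrf; p6 shifts the onset (s)."""
    t = np.clip(np.arange(0.0, length, tr) - p6, 0.0, None)
    h = gamma.pdf(t, p1 / p3, scale=p3) - gamma.pdf(t, p2 / p4, scale=p4) / p5
    return h / np.abs(h).sum()

def neg_r2(params, y, boxcar):
    """Negative R^2 of a first-level-style regression of signal y on the
    boxcar convolved with the candidate HRF (to be minimized)."""
    p1, p2, p5, p6 = params
    # fminsearch is unconstrained, so enforce the assumed plausible ranges via a penalty
    if not (1 <= p1 <= 10 and 1 <= p2 <= 20 and 1 <= p5 <= 10 and 0 <= p6 <= 5):
        return 1.0
    x = np.convolve(boxcar, spm_hrf_like(p1, p2, p5, p6))[:len(boxcar)]
    X = np.column_stack([x, np.ones_like(x)])
    _, res, _, _ = np.linalg.lstsq(X, y, rcond=None)
    ss_tot = ((y - y.mean()) ** 2).sum()
    return -(1.0 - res[0] / ss_tot)

# demo with a synthetic "dog-like" signal (true peak delay p1 = 4 s, hypothetical value)
boxcar = np.tile(np.r_[np.zeros(10), np.ones(10)], 6)
y_demo = 2.0 * np.convolve(boxcar, spm_hrf_like(4.0, 12.0, 6.0, 0.0))[:len(boxcar)] + 1.0
result = minimize(neg_r2, x0=[6.0, 16.0, 6.0, 0.0], args=(y_demo, boxcar),
                  method="Nelder-Mead")  # starts from the human defaults
```

In the study itself, `y_demo` would be the empirical average V1 BOLD signal from step 1, and the optimized parameter vector defines the tailored dog HRF.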

Step 3: Model fit comparison. We then calculated the individual single-subject R2-statistics of each GLM with the different HRF parameters and compared the model fit to the extracted V1 BOLD signal between human and dog HRF using a Wilcoxon signed ranks test.

Step 4: Human HRF. Using the GLM approach implemented in SPM12, we estimated contrast images for each dog that reflected task-related activation (contrast checkerboard > baseline). The first-level design matrix of each dog contained a task regressor modelling visual stimulation, time-locked to the onset of each block (duration 10 s) and convolved with the human (canonical) HRF. The six realignment parameters along with regressors modelling framewise displacement (see above) were added to the design matrix to account for head motion. Normalized and individually created binary masks (see above and Figure 2) were used as explicit masks, and a high-pass filter with a cut-off at 128 s was applied.
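The task regressor described here is a boxcar (one "on" period per 10-s block) convolved with the HRF. A minimal sketch of that construction, in Python rather than SPM, where the gamma kernel stands in for the canonical HRF and the onsets and run length are invented for illustration:

```python
import numpy as np
from scipy.stats import gamma

def block_regressor(onsets, duration, n_scans, tr, hrf):
    """Boxcar (one 'on' block per onset) convolved with an HRF kernel,
    both sampled at the TR; the result is truncated to the run length."""
    boxcar = np.zeros(n_scans)
    for onset in onsets:
        i0 = int(round(onset / tr))
        i1 = min(n_scans, i0 + int(round(duration / tr)))
        boxcar[i0:i1] = 1.0
    return np.convolve(boxcar, hrf)[:n_scans]

# a crude double-gamma kernel standing in for SPM's canonical HRF (TR = 1 s)
t = np.arange(0.0, 32.0, 1.0)
hrf = gamma.pdf(t, 6.0) - gamma.pdf(t, 16.0) / 6.0

# three hypothetical 10-s blocks separated by 10-s baselines
reg = block_regressor(onsets=[0.0, 20.0, 40.0], duration=10.0,
                      n_scans=60, tr=1.0, hrf=hrf)
```

In the actual analysis this regressor enters the design matrix alongside the six realignment parameters and the framewise-displacement regressors.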

Human HRF+TDD. Next, to account for variability (Friston, Fletcher, et al., 1998; Friston, Josephs, et al., 1998; Henson et al., 2002), we added temporal and dispersion derivatives (TDD) to the human HRF. The visual stimulation regressor was thus convolved with the human HRF along with its TDD. This resulted in three regression parameter estimates consisting of: (1) the human canonical HRF (β̂1), (2) the time derivative (β̂2), and (3) the dispersion derivative (β̂3). We then combined all three regressors to form one “derivative boost (H)”-regressor per dog (Calhoun, Stevens, Pearlson, & Kiehl, 2004; Lindquist et al., 2009): H = sgn(β̂1) · √(β̂1² + β̂2² + β̂3²).
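The derivative-boost combination is a one-line computation; a sketch in Python (the example beta values are hypothetical, purely to show the sign handling):

```python
import numpy as np

def derivative_boost(b1, b2, b3):
    """Combine canonical, temporal-derivative and dispersion-derivative betas
    into one magnitude estimate, signed by the canonical beta
    (derivative boost, as in Calhoun et al., 2004)."""
    return np.sign(b1) * np.sqrt(b1**2 + b2**2 + b3**2)

# e.g. a voxel with canonical beta 3.0 and derivative betas 4.0 and 0.0:
h = derivative_boost(3.0, 4.0, 0.0)   # → 5.0
```

Signing by the canonical beta preserves the direction of the effect (activation vs. deactivation), which the raw Euclidean norm would discard.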

Dog HRF. Next, we set up a first-level model (same settings as above, step 4 human HRF), but with the task regressor convolved with the newly estimated dog HRF (step 2).

Step 5: Group-level activation comparison. To test for activation differences during visual stimulation on a group level, we implemented one-sample t-tests for each HRF model (step 4; contrasting flickering checkerboard > baseline; H-regressor for the TDD model), as well as paired-sample t-tests comparing the HRF models (checkerboard > baseline). Unless stated otherwise, significance was determined using cluster-level inference with a cluster-defining threshold of p < 0.001 and a cluster probability of p < 0.05, family-wise error (FWE) corrected for multiple comparisons. Cluster extent was calculated using the SPM extension “CorrClusTh.m” (by Thomas Nichols, University of Warwick, United Kingdom, and Marko Wilke, University of Tübingen, Germany; https://warwick.ac.uk/fac/sci/statistics/staff/academic-research/nichols/scripts/spm/).

2.5.2. Validation analysis: Face processing experiment (experiment 2)

Independent data obtained during the face processing experiment (experiment 2) were then used to validate the exploratory results and to compare all three HRF models.

Step 1: Extraction of the average V1 BOLD signal. Similar to above (exploration and estimation analysis, step 1), we used a finite impulse response (FIR) model to extract the individual BOLD signal time courses, but defined 10 time bins starting at stimulus onset (0 s) until 7 s after stimulus offset. Each time bin had a duration of 1 s (= length of TR) and was modelled with a separate regressor using impulse response functions. We then placed a 4 mm sphere around the local maximum of the cluster encompassing V1, using the coordinates emerging from the human HRF+TDD model (validation analysis, step 4; Table 2, section “human HRF+TDD”), since the human HRF did not survive the significance threshold (Fig. 4B).
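An FIR model of this kind assigns one impulse regressor per post-stimulus time bin, so the fitted betas trace out the response shape without assuming an HRF. A minimal sketch (not the SPM implementation; onsets and the response shape are invented for illustration):

```python
import numpy as np

def fir_design(onsets, n_scans, tr=1.0, n_bins=10):
    """FIR design matrix: one impulse regressor per post-stimulus time bin
    (here 1-s bins, matching TR = 1 s as in the experiment)."""
    X = np.zeros((n_scans, n_bins))
    for onset in onsets:
        start = int(round(onset / tr))
        for b in range(n_bins):
            if start + b < n_scans:
                X[start + b, b] = 1.0
    return X

# two hypothetical trials at 0 s and 20 s in a 40-scan run
X = fir_design([0.0, 20.0], n_scans=40)

# the per-bin betas recover a (noise-free) response shape directly
true_curve = np.array([0.0, 0.2, 0.6, 1.0, 0.8, 0.5, 0.2, 0.0, -0.1, 0.0])
y = X @ true_curve
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Averaging such per-bin estimates across trials and dogs yields the model-free time courses shown in Fig. 4.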

Table 2. Face processing experiment: Task-related activation during visual stimulation.
Contrast, brain region & HRF; coordinates x, y, z (breed-averaged template); z-value; cluster size
Human HRF+TDD: Visual stimulation > visual baseline (k = 21)
R lateral olfactory gyrus (T) 13 3 -4 4.65 21
R caudal marginal gyrus (O) 1 -29 19 4.29 26
Dog HRF: Visual stimulation > visual baseline (k = 14)
R medial suprasylvian gyrus (T) 16 -23 19 4.88 225
R rostral ectosylvian gyrus (T) 17 -8 10 4.54 53
R caudal splenial gyrus (O) 2 -32 18 4.53 130
L caudal composite gyrus (T) -22 -20 -4 4.49 204
Dog HRF > human HRF: Visual stimulation > visual baseline (k* = 10)
R medial suprasylvian gyrus (T) 17 -23 18 4.19 28
L medial suprasylvian gyrus (T) -18 -24 15 3.70 24
Dog HRF > human HRF+TDD: Visual stimulation > visual baseline (k* = 10)
R ectomarginal gyrus (P) 8 -18 19 4.04 22

Effects were tested for significance with a cluster-defining threshold of p < 0.001 and a cluster probability of p < 0.05, FWE-corrected for multiple comparisons. Critical cluster sizes (k) are reported along with the results for each one-sample t-test per haemodynamic response function (HRF) model. For the one-sample t-test based on the human HRF GLM, no cluster survived the threshold. None of the paired-sample t-tests across HRF models survived the critical cluster threshold (k); therefore, the significance level was lowered to p < 0.005 with an arbitrary cluster threshold (k*) of 10 voxels. The paired-sample t-test human HRF vs. human HRF with time and dispersion derivatives (TDD) did not survive the lowered threshold either, nor did the contrasts human HRF > dog HRF and human HRF+TDD > dog HRF. The first local maximum within each cluster is reported; coordinates represent the location of peak voxels and refer to the canine breed-averaged template (Nitzsche et al., 2019); this template, along with another single-dog template (Czeibert et al., 2019), served to determine the anatomical nomenclature for all tables. O, occipital lobe; P, parietal lobe; T, temporal lobe; L, left; R, right.

Step 2: Model fit comparison. This step was almost identical to above (exploration and estimation analysis, step 3) but was performed based on the FIR data from experiment 2 (validation analysis, step 1).

Step 3: Human HRF. Analysis was identical to above (exploration and estimation analysis, step 4 human HRF), but visual stimulation was modelled with one task regressor time locked to the event onset (duration of 3 s), contrasted against visual baseline (contrast faces > baseline).

Human HRF+TDD. Analysis was identical to above (exploration and estimation analysis, step 4 human HRF+TDD) using the task regressor from experiment 2 (validation analysis, step 3 human HRF), but resulted in two informed basis sets as this task contained two separate runs. We first calculated the mean of each parameter estimate across both runs (i.e., β̂1_mean = (β̂1_run1 + β̂1_run2) / 2) and then, as above, combined all three averaged regressors to one “derivative boost (H)”-regressor per dog.

Dog HRF. We defined the same first-level model as described above (validation analysis, step 3 human HRF) but the task regressor was convolved with the newly estimated dog HRF.

Step 4: Group-level activation comparison. This step was performed based on the first-level results from experiment 2 but otherwise identical to above (exploration and estimation analysis, step 5).

2.6. Data and code availability statement

Unthresholded statistical maps from the exploration and estimation analysis, the MATLAB-based code to calculate the HRF model fits, the FIR data for both experiments, and a spm_my_defaults.m script containing the dog HRF parameters have been added as supplementary material.

3. Results

3.1. Exploration and estimation analysis: Flickering checkerboard experiment (experiment 1)

FIR model and dog HRF estimation. To investigate the time course of the BOLD response in dogs, we used a model-free analysis (FIR model; exploration and estimation analysis, step 1). Results suggested a temporal difference between the standard (canonical) human HRF and the average response in our canine sample. Visual inspection of the results revealed an earlier peak after visual stimulation onset compared to convolution with the human HRF and, consequently, an earlier decline and return to baseline (Fig. 4A). The estimation based on the FIR data (exploration and estimation analysis, step 2) therefore resulted in the following parameter changes relative to the (canonical) human HRF: a shorter response delay (p1 = 4.3 s), a shorter delay of the undershoot (p2 = 6.6 s), and a lower ratio of the response to the undershoot (p5 = 3). This newly estimated dog HRF peaked around 2–3 s earlier than the human HRF (Fig. 4A).
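Plugging the reported parameters (p1 = 4.3 s, p2 = 6.6 s, p5 = 3) into a double-gamma of the same form as spm_hrf illustrates the shift in peak latency. This is a sketch assuming that form, not SPM's implementation; amplitudes are unnormalised, which does not affect the peak location:

```python
import numpy as np
from scipy.stats import gamma

def double_gamma(t, p1, p2, p5, p3=1.0, p4=1.0):
    """Unnormalised double-gamma HRF of the same form as SPM's spm_hrf."""
    return gamma.pdf(t, p1 / p3, scale=p3) - gamma.pdf(t, p2 / p4, scale=p4) / p5

t = np.arange(0.0, 32.0, 0.1)                      # fine grid for peak latency
human = double_gamma(t, p1=6.0, p2=16.0, p5=6.0)   # SPM default parameters
dog = double_gamma(t, p1=4.3, p2=6.6, p5=3.0)      # reported dog parameters

human_peak = t[np.argmax(human)]                   # ~5 s
dog_peak = t[np.argmax(dog)]                       # roughly 2 s earlier
```

The earlier and relatively larger undershoot (p2, p5) pulls the dog HRF's peak earlier than the change in p1 alone would suggest.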

3.1.1. Determining the HRF model fits

R2-statistics of both GLMs, calculated individually (exploration and estimation analysis, step 3), revealed a better model fit of the average activation time course when using the dog HRF, with a mean R2 of 0.64 (SD = 0.21), almost doubling the fit compared to the model using the human HRF (mean R2 = 0.35, SD = 0.20). This substantial increase in explained variance was statistically significant (z = 142, p = 0.002).
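A paired, non-parametric comparison of per-dog model fits of this kind can be reproduced with scipy.stats.wilcoxon. The R² values below are purely illustrative placeholders, not the study's data:

```python
import numpy as np
from scipy.stats import wilcoxon

# hypothetical per-dog R^2 values for the two HRF models (n = 8 dogs;
# illustrative numbers only)
r2_dog = np.array([0.81, 0.62, 0.55, 0.74, 0.90, 0.48, 0.66, 0.72])
r2_human = np.array([0.40, 0.31, 0.28, 0.45, 0.52, 0.20, 0.34, 0.39])

# Wilcoxon signed ranks test on the paired differences
stat, p = wilcoxon(r2_dog, r2_human)
```

Note that scipy reports the smaller of the two signed rank sums as its statistic, so its value is not directly comparable to the rank-based z-values reported above.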

3.1.2. Visual activation: Human HRF / human HRF+TDD

Expanding to whole-brain comparisons (exploration and estimation analysis, step 5), we performed standard whole-brain GLM analyses similar to other canine neuroimaging papers (e.g., Andics et al., 2016; Cuaya et al., 2016) and localized visual processing areas by convolving the task regressor with the human HRF (exploration and estimation analysis, step 4 human HRF). Results revealed increased activation within the occipital lobe (V1) and within the left hippocampal area (Table 1, section “human HRF”). When accounting for HRF variability (exploration and estimation analysis, step 4, human HRF+TDD), we found similar activation within V1 during visual stimulation (but only about half the size compared to the human HRF) as well as within the right dorsal temporal lobe (Table 1, section “human HRF+TDD”). Additionally, V1 clusters stemming from both analysis types expanded from the occipital lobe to portions of the parietal and right temporal lobe (Fig. 5A). Thus, analyses based on the standard human HRF, with and without accounting for its variability, yielded comparable activation increases in V1 during visual stimulation.

Fig. 5.

Fig. 5

Flickering checkerboard experiment: Comparison of brain activation across haemodynamic response functions (HRF) illustrates increased detection performance using a tailored dog HRF in both primary and higher order visual processing areas (exploration and estimation analysis). Results are displayed at p < 0.05, FWE-corrected at cluster-level, and using a cluster-defining threshold of p < .001 (see Table 1), overlaid onto the mean structural image. Coordinates refer to the canine breed-averaged atlas (Nitzsche et al., 2019). The first axial plane (A, first row, left) shows the anatomical locations caudal (C), rostral (R), and left hemisphere (LH); all axial planes displayed have the same orientation. The sagittal plane displays the cut coordinates and the anatomical locations dorsal (D), ventral (V). (A) Group-based activation for visual stimulation > baseline (one-sample t-tests) indicates that the analysis using the dog HRF shows the highest sensitivity for the canine neuroimaging data, with the analysis using the human HRF resulting in smaller activation clusters, and the analysis using the human HRF combined with time and dispersion derivatives resulting in even smaller activation clusters. (B) Comparisons of visual stimulation > visual baseline contrasts between all three HRF models (paired-sample t-tests) resulted in similar significant activation changes in the occipital lobe for the human HRF with time and dispersion derivatives (TDD) in contrast to both the human and dog HRF. Comparing the human and dog HRF revealed stronger activation in the primary visual cortex and temporal regions for the dog HRF compared to the human HRF, and activation in the insular cortex for the reverse contrast (not depicted, see Table 1 for details).

3.1.3. Visual activation: Dog HRF

We now report in more detail the brain areas revealing significant group-level activation using the tailored dog HRF, since it significantly improved the model fit in V1 compared to the human HRF (exploration and estimation analysis, steps 2-3). We observed five clusters with stronger activation during visual stimulation compared to baseline (Table 1, section “dog HRF”, Fig. 5), more than double the number of significant clusters, as well as the cluster sizes, compared to the remaining models (exploration and estimation analysis, step 4; Table 1). The largest cluster expanded from V1 to bilateral parietal and temporal lobe regions, followed by smaller clusters in the right temporal lobe (see Table 1 and Fig. 6 for details).

Fig. 6. Increasing the detection power by using the tailored dog haemodynamic response function (HRF) in the flickering checkerboard experiment allows detailed description of primary and higher-order visual processing areas.

Fig. 6

(A) Visual stimulation against baseline elicited activation in a large region of the occipital lobe, peaking at the rostral occipital lobe and expanding to the caudal parietal lobe and bilateral dorsal portions of the temporal lobe. In addition, activation in bilateral hippocampal areas increased in response to visual stimulation compared to baseline. Results are displayed at p < 0.05, FWE-corrected at cluster-level, and using a cluster-defining threshold of p < .001 (see Table 1, section “dog HRF”), plotted onto the mean structural image. Atlas maps, coordinates and the anatomical nomenclature refer to the canine breed-averaged atlas (Nitzsche et al., 2019) and additional normalized labels from a single-dog template (Czeibert et al., 2019). Images are accompanied by the anatomical locations caudal (C), rostral (R), dorsal (D), ventral (V), left hemisphere (LH) and right hemisphere (RH). (B) For easier interpretation of the activated anatomical structures, blue-shaded outlines of anatomical regions are displayed together with contours of the activated clusters shown in Panel A.

3.1.4. Activation differences during visual stimulation across HRF models

To test for whole-brain differences in activation, we compared the human HRF, human HRF+TDD and dog HRF GLMs using paired-sample t-tests (contrast checkerboard > visual baseline; exploration and estimation analysis, step 5). Results revealed significant clusters for all models. However, the analysis using the dog HRF was the only one that resulted in significant activation differences in both V1 and bilateral temporal regions (dog HRF > human HRF); the human HRF+TDD increased activation only in a caudal V1 region (human HRF+TDD > human HRF; human HRF+TDD > dog HRF). In sum, the human HRF proved to be the least sensitive model (see Fig. 5B, Table 1 for details).

3.2. Validation: Face processing experiment (experiment 2)

Next, we validated our novel results in an independent data set and compared all three HRF models.

3.2.1. FIR model and comparison of HRF model fits

Visual inspection of the average activation time course based on the FIR model (validation analysis, step 1) confirmed the results of the exploration and estimation analysis, as it again revealed an earlier BOLD signal peak (see Fig. 4B). In line with the exploratory results, comparing the average HRF model fits (i.e., R2-statistics) for both runs separately (validation analysis, step 2) revealed that the dog HRF resulted in an R2 eight times higher for the first run (human HRF: mean R2_run1 = 0.06, SD = 0.11; dog HRF: mean R2_run1 = 0.5, SD = 0.31) and almost three times higher for the second run (human HRF: mean R2_run2 = 0.15, SD = 0.22; dog HRF: mean R2_run2 = 0.44, SD = 0.31). Again, Wilcoxon signed ranks tests indicated that the dog HRF model fit was significantly higher than that of the human HRF in both runs (Run 1: z = 100, p = 0.001; Run 2: z = 67, p = 0.012), confirming the advantage of using the tailored dog HRF in a data set independent of the dog HRF estimation.

3.2.2. Visual activation during visual stimulation across HRF models

In line with the results from the exploration and estimation analysis, modelling the dog HRF resulted in the highest number of activated clusters, with cluster sizes increasing twelve-fold in comparison to the model including the human HRF+TDD. Furthermore, the dog HRF was the only model that detected activation beyond V1 in bilateral temporal regions, while no cluster withstood the cluster threshold correction when modelling the human HRF (see Table 2, Fig. 7A for details; validation analysis, step 4). Performing paired-sample t-tests between the dog HRF, human HRF and human HRF+TDD (validation analysis, step 4) resulted in no significant differences at the initial strict threshold, but lowering the threshold to p = 0.005 uncorrected indicated that using the dog HRF improved the sensitivity to detect visual processing areas (see Table 2, Fig. 7B for details), thus confirming the exploratory results.

Fig. 7.

Fig. 7

Face processing experiment: Comparison of brain activation in an independent data set confirms increased detection performance using a tailored dog haemodynamic response function (HRF) compared to other HRF models (validation analysis). For display purposes, results are shown at p < .005 (for results at p < 0.05, FWE-corrected at cluster-level, with a cluster-defining threshold of p < .001, see Table 2), overlaid on the mean structural image. Coordinates refer to the canine breed-averaged atlas (Nitzsche et al., 2019). The first axial plane (A, first row, left) shows the anatomical locations caudal (C), rostral (R) and left hemisphere (LH); all axial planes displayed have the same orientation. The sagittal plane displays the cut coordinates and the anatomical locations dorsal (D), ventral (V). (A) Group-based activation for visual stimulation > baseline (one-sample t-tests) indicates that the human HRF results in almost no activation, the human HRF combined with time and dispersion derivatives (TDD) results in larger activation clusters, and again that the dog HRF shows the highest sensitivity for the canine neuroimaging data. (B) Group comparisons of visual stimulation > visual baseline contrasts between all three HRF models. Group-based activation (paired-sample t-tests) resulted in trends of activation changes in temporal regions for the dog HRF in comparison to both the human HRF and human HRF+TDD model (see Table 2 for detailed results).

4. Discussion

The aim of this study was to explore whether the typically used human haemodynamic response function (HRF) fits the average BOLD signal in dogs and whether detection power for canine neuroimaging data can be improved using a tailored dog HRF. Our results indicate that the human HRF does not fit the average BOLD signal in dogs. We provide initial evidence that the average time course of the primary visual cortex (V1) BOLD signal in dogs peaks 2-3 s earlier than the human HRF, and that the model fit for V1 can be significantly improved using a tailored dog HRF. Expanding to whole-brain analyses, the tailored dog HRF again resulted in increased detection power.

We used two independent visual experiments, serving as the exploration and estimation data set (flickering checkerboard experiment, experiment 1) and as an independent validation set (face processing experiment, experiment 2). We estimated a tailored dog HRF based on the empirical data from experiment 1, since the V1 BOLD signal indicated an earlier peak compared to the human HRF. Following this, we were able to confirm the earlier peak when investigating the V1 BOLD signal in the independent experiment 2. Further, the model fit (i.e., R2-statistics) for V1 significantly improved (and almost doubled) in experiment 1 and was between eight (run 1) and almost three (run 2) times higher in experiment 2 when compared to the human HRF. Expanding to whole-brain comparisons, our results provide evidence that the human HRF, compared to the tailored dog HRF, resulted in significantly less activation being detected. Adding time and dispersion derivatives (TDD) led to significantly increased activation in both experiments, but only within occipital areas. For experiment 1, the human HRF+TDD even led to increased V1 signal compared to the dog HRF. Overall, however, the human HRF+TDD was less sensitive in detecting secondary visual areas, resulting in fewer significant clusters, while the dog HRF detected both primary and secondary visual areas in both experiments. These are important findings when considering the small sample sizes in most canine neuroimaging studies. In contrast to human studies, it is more difficult to increase power by increasing the sample size, primarily due to the limited availability of canine participants and the extensive dog training required prior to MR scanning. Thus, increasing the model fit of the HRF to the average BOLD signal time course is an important alternative tool to increase the power, and therefore the reproducibility, of future studies.

Our findings are consistent with research in rodents, which suggested that using the human HRF degrades the model fit and, thus, the overall detection performance (Lambers et al., 2020). As in our sample, Lambers and colleagues (2020) observed an earlier peak of the average BOLD signal in rats, proposing differences in brain and vessel size, smaller distances within the brain or a higher capillary and venous flow velocity as potential reasons for the observed patterns (see also De Zwart et al., 2005; Silva et al., 2007). Absolute brain size cannot sufficiently explain why the human HRF does not fit the average BOLD signal in dogs. Although dog brains have a smaller absolute size than human and, on average, macaque brains (e.g., DeFelipe, 2011; Yáñez et al., 2005), the dog breeds in our sample seem to have a similar brain size to rhesus macaques (Horschler et al., 2019). However, relative size (brain size/body weight) could potentially explain our findings, since the dog brains in our sample (just as rodent brains) seem to have a smaller relative size than those of humans and rhesus macaques (e.g., Baumann et al., 2010; Logothetis et al., 2001 for average body weight in macaques; Roth & Dicke, 2005 for review). Although evolutionary relatedness to humans also seems to track the fit of the human HRF across species, the underlying neurovascular mechanisms remain somewhat unclear. Additionally, skull shapes and sizes vary within the dog species (i.e., across different breeds), resulting in substantial variance in underlying neuroanatomy (Hecht et al., 2019; Horschler et al., 2019; Schoenebeck & Ostrander, 2013). Since our sample was rather homogenous (70% border collies; all mesocephalic skull shapes) and small, we did not have enough variance to test for potential differences between breeds, skull shapes or sizes. Further, the human HRF parameters provided by SPM have been estimated based on data from a 1.5 Tesla (T) MR scanner (Friston, Josephs, et al., 1998).
Although this has never been tested, MR at 3 T or higher field strength could potentially influence BOLD signal measurements (i.e., increased sensitivity to microvasculature). Reviewing the published literature, dog fMRI labs working with unrestrained and fully awake dogs have so far used 3 T MR scanners. Thus, in terms of magnetic field strength, our dog HRF estimate should be comparable to other canine neuroimaging data. Additionally, differences in heart rate (i.e., Chang, Cunningham, & Glover, 2009), breathing rate (i.e., Birn, Murphy, Handwerker, & Bandettini, 2009), as well as the distance to draining veins (i.e., Bianciardi, Fukunaga, van Gelderen, de Zwart, & Duyn, 2011; Krings, Erberich, Roessler, Reul, & Thron, 1999; Turner, 2002) can also modulate the BOLD signal time course. In line with the observed earlier peak of the BOLD signal in dogs, Manzo, Ootaki, Ootaki, Kamohara, & Fukamachi (2009) report a higher heart rate in dogs (with body weights similar to those in our sample) compared to humans. Unfortunately, due to the lack of physiological measurements, we cannot test the influence of heart and breathing rate, or the distance to draining veins, in the present sample. Additionally, besides body weight, heart rate measurements were also shown to correlate with factors such as age and breed (Hezzell, Humm, Dennis, Agee, & Boswood, 2013). Here, we estimated the dog HRF based on dogs ranging between 4–11 years (Mdn = 8), covering a wide range of ages. Taken together, the average BOLD signal might deviate from the tailored dog HRF across breeds and at different body weights. This could be accounted for by adding time and dispersion derivatives to the dog HRF in future studies.

Our results do not confirm earlier reports of an average BOLD signal time course in dogs similar to that in humans (Berns et al., 2012). Unlike Berns et al. (2012), our results suggest that the human HRF does not optimally fit the average time course of the BOLD signal in dogs. However, Berns et al. (2012) studied the subcortical caudate nucleus, while we focused on the cortical BOLD signal in dogs, extracting data from V1. Previous research in other species (i.e., humans) showed that the average BOLD signal time course differs between cortical and sub-cortical regions (Handwerker et al., 2004; Lewis, Setsompop, Rosen, & Polimeni, 2018). Thus, our findings do not necessarily contradict the results of Berns et al. (2012) but might be related to the different areas analysed, as well as their neural and vascular characteristics.

While it is possible to accurately estimate HRF parameters from block designs (Shan et al., 2014), studies investigating the shape of the BOLD signal time course typically employ an event-related design (i.e., Friston, Jezzard, & Turner, 1994; Handwerker, Ollinger, & D’Esposito, 2004; Lindquist, Meng Loh, Atlas, & Wager, 2009). A disadvantage of experiment 1 is the fixed on-off cycle (10 s): Dogs might have anticipated the next stimulus onset, and we might have missed a possible BOLD signal undershoot that continued into the next block (i.e., longer than the 10 s baseline), which altogether could have affected the observed BOLD signal shape. Nevertheless, we chose experiment 1 for estimating the HRF because of the robustness of the design itself, and because of the salient visual stimulation (flickering checkerboard). Such stimulation is typically used to elicit solid activation in our V1 target region (i.e., Moradi et al., 2003; Sladky et al., 2011 for examples in humans). This allowed us to achieve increased detection power as well as a reliable dog HRF estimate. We then validated our results using the event-related face processing experiment (experiment 2; jittered baseline). This confirmed our initial results, again yielding a better model fit for the dog HRF compared to the human HRF. In addition, we estimated HRF parameters based on the face processing experiment, which yielded similar results and numerically almost identical model fits. Although flipping the exploration and validation experiments led to comparable results, future research should employ block and event-related designs with a jittered resting period, combining the strengths of both experiments (i.e., short events of flickering checkerboards with jittered baseline).

Exploring their visual environment, humans and non-human animals perform rapid eye movements (saccades), ranging from small to larger movements depending on the visual stimulus. These saccades could have influenced the observed V1 BOLD signal time course. First, if the dogs gazed away from the visual stimulus, the V1 BOLD signal would have decreased. We did not record the dogs’ eye gaze since the dogs were not trained to perform the eye tracker calibration inside the MR scanner (for information on the extensive training procedure for a setting outside the scanner, please see Karl, Boch, Virányi, Lamm, & Huber, 2019). However, we monitored their eye movements via a camera from an eye tracker throughout the data collection to ensure that the dogs were awake and generally attending to the task. If a dog had consistently looked away from the MR screen (looking to the side, top or bottom for extended periods), we would have stopped the data collection. In addition, we chose experiment 1 as the exploration and estimation data set because the flickering checkerboard expanded over the entire MR screen and the MR screen itself covered the scanner bore. This made it almost impossible to look away from the visual stimulation. However, frequent saccades across a visual stimulus can also lead to a decreased V1 signal (saccadic suppression; i.e., Sylvester, Haynes, & Rees, 2005 or Wurtz, 2008 for review) or a pre-saccadic activity increase (~100 ms, so far detected with single cell recordings in monkeys, i.e., Supèr, Van Der Togt, Spekreijse, & Lamme, 2004). Especially in the flickering checkerboard experiment, the dogs typically did not make frequent saccades, since no moving object or agent was present that they could follow with their gaze.
The face processing experiment contained dynamic stimuli (i.e., morphed faces), but we positioned the visual stimuli, with a size of 500 × 500 pixels, in the centre of the MR screen and the dogs’ visual field to ensure that the dogs could observe the entire stimulus without performing frequent eye movements. Future canine neuroimaging studies using visual stimuli could also explore incorporating an eye-tracking protocol to record eye gaze data to further strengthen their results.

Overall, our findings provide first evidence that the human HRF does not optimally fit the BOLD response observed in the visual cortex of dogs. Despite being based on two independent experiments allowing for cross-validation, this evidence should be treated as preliminary, awaiting independent validation in other samples, experimental paradigms, and brain regions. We hope that our approach will encourage future research to test the reproducibility and generalizability of our findings, and to explore whether this could help to increase model fit and detection power in their own canine fMRI datasets. For this reason, we adopted the established and recommended (Carp, 2012b; Nichols et al., 2017; Poldrack et al., 2008) standards from human neuroimaging analyses, provided a detailed description of our workflow and parameters, and made our imaging data and code openly available. Using a simple but salient sensory stimulation experiment also allowed quality assessment of the processing pipeline we developed and will help us validate future changes to it, preventing potentially biased decisions. Additionally, a short (visual) localizer experiment can be used for dog training and for getting dogs accustomed to the experimental setup.

Transparent reporting also allows us to build on previous research and facilitates the comparison of results. Based on previous research (e.g., Aguirre et al., 2007; Langley & Grünbaum, 1890; Marquis, 1934; Uemura, 2015; Willis, Quinn, McDonell, Gati, Parent, et al., 2001; Willis, Quinn, McDonell, Gati, Partlow, et al., 2001; Wing & Smith, 1942) we are certain about the location of V1, but less is known about other higher-order visual association areas. Similar to the human and rhesus macaque visual system (e.g., Orban, Van Essen, & Vanduffel, 2004; Tootell, Tsao, & Vanduffel, 2003 for comparative reviews), we found activation within the dorsal visual stream, extending from the occipital lobe to the caudal parietal lobe, and within the ventral stream, including bilateral regions in the temporal lobes, bilateral hippocampus and caudal thalamus. We did not find significant activation in the lateral geniculate body (LGB): (1) given the small size of the region, detection might require smaller voxel sizes, or (2) differences in individual anatomy might have led to anatomical imprecision; atlases based on larger samples (the Nitzsche et al., 2019 template is based on N = 16 dogs) could help disentangle this question. Unfortunately, there is still no agreement on a shared template space; publicly available templates (Czeibert et al., 2019; Datta et al., 2012; Liu et al., 2020; Nitzsche et al., 2019) are not in the same space and vary in orientation and origin, thus coordinates from one template cannot be applied to another. Taken together, these findings can serve as a next step in further characterizing the visual system of dogs, hopefully aiding future investigations or studies focusing on visual paradigms (e.g., face processing; Cuaya et al., 2016; Dilks et al., 2015; Hernández-Pérez et al., 2018; Szabó et al., 2020; Thompkins et al., 2018).

4.1. Conclusions

We present the first evidence that the average visual-cortical BOLD signal in dogs peaks earlier than predicted by the human HRF model. The significantly lower model fit of the human HRF suggests that analysing canine neuroimaging data with it leads to a loss of power that cannot be compensated for by adding time and dispersion derivatives. We provide a first estimate of the cortical dog HRF, which yielded a significant increase in activation compared to the human HRF, and we validated our results using an independent task. We hope that our findings spark new research that challenges or extends our results. To increase transparency, we applied open-science practices throughout, and we hope this will motivate and facilitate future investigations by other labs, leading to a joint effort to improve detection power in canine neuroimaging research.
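To make the idea of a "tailored" HRF concrete: SPM-style canonical HRFs are double-gamma functions, and shifting the peak-delay parameter moves the modelled response earlier in time. The sketch below is illustrative only; the parameter values are the canonical human defaults and a hypothetical earlier-peaking variant, not the published dog HRF estimates (those are provided in the supplementary spm_my_defaults.m script).

```python
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(t, peak_delay=6.0, undershoot_delay=16.0,
                     peak_disp=1.0, undershoot_disp=1.0, ratio=6.0):
    """Double-gamma HRF (SPM-style parameterization).

    Defaults mirror the canonical human HRF; a tailored (e.g. dog)
    HRF would use a shorter peak_delay. Values here are illustrative,
    not the estimates reported in the paper.
    """
    peak = gamma.pdf(t, peak_delay / peak_disp, scale=peak_disp)
    undershoot = gamma.pdf(t, undershoot_delay / undershoot_disp,
                           scale=undershoot_disp)
    hrf = peak - undershoot / ratio
    return hrf / hrf.max()  # normalize to unit peak amplitude

t = np.arange(0, 32, 0.1)                      # seconds post-stimulus
human = double_gamma_hrf(t)                    # canonical, peaks ~5 s
earlier = double_gamma_hrf(t, peak_delay=4.0)  # hypothetical earlier peak
print(t[np.argmax(human)], t[np.argmax(earlier)])
```

Convolving the stimulus boxcar with such an earlier-peaking kernel, rather than the canonical one, is what shifts the GLM predictors to match an earlier observed BOLD response.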

Supplementary Material

Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.neuroimage.2020.117414.

Acknowledgements

We thank Lukas Lengersdorff for his helpful comments on the analysis plan, Morris Krainz for his help collecting the data and all the dogs and their caregivers for taking part in our study. This project was supported by the Austrian Science Fund (FWF): W1262-B29 and by the Vienna Science and Technology Fund (WWTF), the City of Vienna and ithuba Capital AG through project CS18-012, and the Messerli Foundation (Sörenberg, Switzerland). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Footnotes

Declaration of Competing Interest

The authors declare no competing financial interests.

Credit authorship contribution statement

Magdalena Boch: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data curation, Writing - original draft, Writing - review & editing, Visualization, Project administration. Sabrina Karl: Investigation, Writing - review & editing. Ronald Sladky: Methodology, Formal analysis, Software, Writing - review & editing. Ludwig Huber: Resources, Writing - review & editing, Supervision, Funding acquisition. Claus Lamm: Conceptualization, Methodology, Resources, Writing - review & editing, Supervision, Funding acquisition. Isabella C. Wagner: Conceptualization, Methodology, Writing - original draft, Writing - review & editing, Supervision.

References

  1. Aguirre GK, Komáromy AM, Cideciyan AV, Brainard DH, Aleman TS, Roman AJ, et al. Jacobson SG. Canine and human visual cortex intact and responsive despite early retinal blindness from RPE65 mutation. PLoS Med. 2007;4(6):e230. doi: 10.1371/journal.pmed.0040230.
  2. Aguirre GK, Zarahn E, D'Esposito M. The variability of human, BOLD hemodynamic responses. Neuroimage. 1998;8(4):360–369. doi: 10.1006/nimg.1998.0369.
  3. Andics A, Gábor A, Gácsi M, Faragó T, Szabó D, Miklósi Á. Neural mechanisms for lexical processing in dogs. Science. 2016;353(6303):1030–1032. doi: 10.1126/science.aaf3777.
  4. Andics A, Gácsi M, Faragó T, Kis A, Miklósi Á. Voice-sensitive regions in the dog and human brain are revealed by comparative fMRI. Curr Biol. 2014;24(5):574–578. doi: 10.1016/j.cub.2014.01.058.
  5. Andics A, Miklósi Á. Neural processes of vocal social perception: dog-human comparative fMRI studies. Neurosci Biobehav Rev. 2018;85:54–64. doi: 10.1016/j.neubiorev.2017.11.017.
  6. Arichi T, Fagiolo G, Varela M, Melendez-Calderon A, Allievi A, Merchant N, et al. Edwards AD. Development of BOLD signal hemodynamic responses in the human brain. Neuroimage. 2012;63(2):663–673. doi: 10.1016/j.neuroimage.2012.06.054.
  7. Aulet LS, Chiu VC, Prichard A, Spivak M, Lourenco SF, Berns GS. Canine sense of quantity: evidence for numerical ratio-dependent activation in parietotemporal cortex. Biol Lett. 2019;15(12). doi: 10.1098/rsbl.2019.0666.
  8. Baumann S, Griffiths TD, Rees A, Hunter D, Sun L, Thiele A. Characterisation of the BOLD response time course at different levels of the auditory pathway in non-human primates. Neuroimage. 2010;50(3):1099–1108. doi: 10.1016/j.neuroimage.2009.12.103.
  9. Berns GS, Brooks AM, Spivak M. Functional MRI in awake unrestrained dogs. PLoS One. 2012;7(5):e38027. doi: 10.1371/journal.pone.0038027.
  10. Berns GS, Brooks AM, Spivak M. Scent of the familiar: an fMRI study of canine brain responses to familiar and unfamiliar human and dog odors. Behav Processes. 2015;110:37–46. doi: 10.1016/j.beproc.2014.02.011.
  11. Berns GS, Brooks AM, Spivak M, Levy K. Functional MRI in awake dogs predicts suitability for assistance work. Sci Rep. 2017;7:43704. doi: 10.1038/srep43704.
  12. Berns GS, Brooks A, Spivak M. Replicability and heterogeneity of awake unrestrained canine fMRI responses. PLoS One. 2013;8(12):e81698. doi: 10.1371/journal.pone.0081698.
  13. Berns GS, Cook PF. Why did the dog walk into the MRI? Curr Dir Psychol Sci. 2016;25(5):363–369. doi: 10.1177/0963721416665006.
  14. Bianciardi M, Fukunaga M, van Gelderen P, de Zwart JA, Duyn JH. Negative BOLD-fMRI signals in large cerebral veins. J Cereb Blood Flow Metab. 2011;31(2):401–412. doi: 10.1038/jcbfm.2010.164.
  15. Birn RM, Murphy K, Handwerker DA, Bandettini PA. fMRI in the presence of task-correlated breathing variations. Neuroimage. 2009;47(3):1092–1104. doi: 10.1016/j.neuroimage.2009.05.030.
  16. Boynton GM, Engel SA, Glover GH, Heeger DJ. Linear systems analysis of functional magnetic resonance imaging in human V1. J Neurosci. 1996;16(13):4207–4221. doi: 10.1523/jneurosci.16-13-04207.1996.
  17. Bunford N, Andics A, Kis A, Miklósi Á, Gácsi M. Canis familiaris as a model for non-invasive comparative neuroscience. Trends Neurosci. 2017;40(7):438–452. doi: 10.1016/j.tins.2017.05.003.
  18. Button KS, Ioannidis JPA, Mokrysz C, Nosek BA, Flint J, Robinson ESJ, Munafò MR. Power failure: why small sample size undermines the reliability of neuroscience. Nat Rev Neurosci. 2013;14(5):365–376. doi: 10.1038/nrn3475.
  19. Calhoun VD, Stevens MC, Pearlson GD, Kiehl KA. fMRI analysis with the general linear model: removal of latency-induced amplitude bias by incorporation of hemodynamic derivative terms. Neuroimage. 2004;22(1):252–257. doi: 10.1016/j.neuroimage.2003.12.029.
  20. Carp J. On the plurality of (methodological) worlds: estimating the analytic flexibility of fMRI experiments. Front Neurosci. 2012a;6:149. doi: 10.3389/fnins.2012.00149.
  21. Carp J. The secret lives of experiments: methods reporting in the fMRI literature. Neuroimage. 2012b;63(1):289–300. doi: 10.1016/j.neuroimage.2012.07.004.
  22. Chang C, Cunningham JP, Glover GH. Influence of heart rate on the BOLD signal: the cardiac response function. Neuroimage. 2009;44(3):857–869. doi: 10.1016/j.neuroimage.2008.09.029.
  23. Chen X, Tong C, Han Z, Zhang K, Bo B, Feng Y, Liang Z. Sensory evoked fMRI paradigms in awake mice. Neuroimage. 2020;204. doi: 10.1016/j.neuroimage.2019.116242.
  24. Cook PF, Prichard A, Spivak M, Berns GS. Awake canine fMRI predicts dogs' preference for praise vs food. Soc Cogn Affect Neurosci. 2016;11(12):1853–1862. doi: 10.1093/scan/nsw102.
  25. Cook PF, Prichard A, Spivak M, Berns GS. Jealousy in dogs? Evidence from brain imaging. Anim Sentience. 2018;22(1):1–14.
  26. Cook PF, Spivak M, Berns GS. One pair of hands is not like another: caudate BOLD response in dogs depends on signal source and canine temperament. PeerJ. 2014;2:e596. doi: 10.7717/peerj.596.
  27. Cremers HR, Wager TD, Yarkoni T. The relation between statistical power and inference in fMRI. PLoS One. 2017;12(11):e0184923. doi: 10.1371/journal.pone.0184923.
  28. Cuaya LV, Hernández-Pérez R, Concha L. Our faces in the dog's brain: functional imaging reveals temporal cortex activation during perception of human faces. PLoS One. 2016;11(3):e0149431. doi: 10.1371/journal.pone.0149431.
  29. Czeibert K, Andics A, Petneházy Ö, Kubinyi E. A detailed canine brain label map for neuroimaging analysis. Biol Futur. 2019;70(2):112–120. doi: 10.1556/019.70.2019.14.
  30. Datta R, Lee J, Duda J, Avants BB, Vite CH, Tseng B, et al. Aguirre GK. A digital atlas of the dog brain. PLoS One. 2012;7(12):e52140. doi: 10.1371/journal.pone.0052140.
  31. De Zwart JA, Silva AC, Van Gelderen P, Kellman P, Fukunaga M, Chu R, et al. Duyn JH. Temporal dynamics of the BOLD fMRI impulse response. Neuroimage. 2005;24(3):667–677. doi: 10.1016/j.neuroimage.2004.09.013.
  32. DeFelipe J. The evolution of the brain, the human nature of cortical circuits, and intellectual creativity. Front Neuroanat. 2011;5:1–29. doi: 10.3389/fnana.2011.00029.
  33. Dilks DD, Cook PF, Weiller SK, Berns HP, Spivak M, Berns GS. Awake fMRI reveals a specialized region in dog temporal cortex for face processing. PeerJ. 2015;3:e1115. doi: 10.7717/peerj.1115.
  34. Fitch WT, Huber L, Bugnyar T. Social cognition and the evolution of language: constructing cognitive phylogenies. Neuron. 2010;65(6):795–814. doi: 10.1016/j.neuron.2010.03.011.
  35. Ford JM, Johnson MB, Whitfield SL, Faustman WO, Mathalon DH. Delayed hemodynamic responses in schizophrenia. Neuroimage. 2005;26(3):922–931. doi: 10.1016/j.neuroimage.2005.03.001.
  36. Friston KJ, Fletcher P, Josephs O, Holmes A, Rugg MD, Turner R. Event-related fMRI: characterizing differential responses. Neuroimage. 1998;7:30–40. doi: 10.1006/nimg.1997.0306.
  37. Friston KJ, Holmes AP, Poline J-B, Grasby PJ, Williams SCR, Frackowiak RSJ, Turner R. Analysis of fMRI time-series revisited. Neuroimage. 1995;2:45–53. doi: 10.1006/nimg.1995.1007.
  38. Friston KJ, Jezzard P, Turner R. Analysis of functional MRI time-series. Hum Brain Mapp. 1994;1(2):153–171. doi: 10.1002/hbm.460010207.
  39. Friston KJ, Josephs O, Rees G, Turner R. Nonlinear event-related responses in fMRI. Magn Reson Med. 1998;39(1):41–52. doi: 10.1002/mrm.1910390109.
  40. Glover GH. Deconvolution of impulse response in event-related BOLD fMRI. Neuroimage. 1999;9(4):416–429. doi: 10.1006/nimg.1998.0419.
  41. Goense JBM, Logothetis NK. Neurophysiology of the BOLD fMRI signal in awake monkeys. Curr Biol. 2008;18(9):631–640. doi: 10.1016/j.cub.2008.03.054.
  42. Handwerker DA, Ollinger JM, D'Esposito M. Variation of BOLD hemodynamic responses across subjects and brain regions and their effects on statistical analyses. Neuroimage. 2004;21(4):1639–1651. doi: 10.1016/j.neuroimage.2003.11.029.
  43. Hecht EE, Smaers JB, Dunn WJ, Kent M, Preuss TM, Gutman DA. Significant neuroanatomical variation among domestic dog breeds. J Neurosci. 2019;39(39):0303–0319. doi: 10.1523/JNEUROSCI.0303-19.2019.
  44. Henson RNA, Price CJ, Rugg MD, Turner R, Friston KJ. Detecting latency differences in event-related BOLD responses: application to words versus nonwords and initial versus repeated face presentations. Neuroimage. 2002;15(1):83–97. doi: 10.1006/nimg.2001.0940.
  45. Hernández-Pérez R, Concha L, Cuaya LV. Decoding human emotional faces in the dog's brain. bioRxiv. 2018. doi: 10.1101/134080.
  46. Hezzell MJ, Humm K, Dennis SG, Agee L, Boswood A. Relationships between heart rate and age, bodyweight and breed in 10,849 dogs. J Small Anim Pract. 2013;54(6):318–324. doi: 10.1111/jsap.12079.
  47. Horschler DJ, Hare B, Call J, Kaminski J, Miklósi Á, MacLean EL. Absolute brain size predicts dog breed differences in executive function. Anim Cogn. 2019;22(2):187–198. doi: 10.1007/s10071-018-01234-1.
  48. Huber L, Lamm C. Understanding dog cognition by functional magnetic resonance imaging. Learn Behav. 2017;45(2):101–102. doi: 10.3758/s13420-017-0261-6.
  49. Ioannidis JPA. Why most published research findings are false. PLoS Med. 2005;2(8):e124. doi: 10.1371/journal.pmed.0020124.
  50. Jia H, Pustovyy OM, Waggoner P, Beyers RJ, Schumacher J, Wildey C, et al. Deshpande G. Functional MRI of the olfactory system in conscious dogs. PLoS One. 2014;9(1):e86362. doi: 10.1371/journal.pone.0086362.
  51. Jia H, Pustovyy OM, Wang Y, Waggoner P, Beyers RJ, Schumacher J, et al. Deshpande G. Enhancement of odor-induced activity in the canine brain by zinc nanoparticles: a functional MRI study in fully unrestrained conscious dogs. Chem Senses. 2016;41:53–67. doi: 10.1093/chemse/bjv054.
  52. Karl S, Boch M, Virányi Z, Lamm C, Huber L. Training pet dogs for eye-tracking and awake fMRI. Behav Res Methods. 2019:1–19. doi: 10.3758/s13428-019-01281-7.
  53. Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG. Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. PLoS Biol. 2010;8(6):e1000412. doi: 10.1371/journal.pbio.1000412.
  54. Koyama M, Hasegawa I, Osada T, Adachi Y, Nakahara K, Miyashita Y. Functional magnetic resonance imaging of macaque monkeys performing visually guided saccade tasks: comparison of cortical eye fields with humans. Neuron. 2004;41(5):795–807. doi: 10.1016/S0896-6273(04)00047-9.
  55. Krings T, Erberich SG, Roessler F, Reul J, Thron A. MR blood oxygenation level-dependent signal differences in parenchymal and large draining vessels: implications for functional MR imaging. Am J Neuroradiol. 1999;20(10).
  56. Lambers H, Segeroth M, Albers F, Wachsmuth L, van Alst TM, Faber C. A cortical rat hemodynamic response function for improved detection of BOLD activation under common experimental conditions. Neuroimage. 2020;208. doi: 10.1016/j.neuroimage.2019.116446.
  57. Langley JN, Grünbaum AS. On the degeneration resulting from removal of the cerebral cortex and corpora striata in the dog. J Physiol. 1890;11:606.
  58. Lewis LD, Setsompop K, Rosen BR, Polimeni JR. Stimulus-dependent hemodynamic response timing across the human subcortical-cortical visual pathway identified through high spatiotemporal resolution 7T fMRI. Neuroimage. 2018;181:279–291. doi: 10.1016/j.neuroimage.2018.06.056.
  59. Lindquist MA, Loh JM, Atlas LY, Wager TD. Modeling the hemodynamic response function in fMRI: efficiency, bias and mis-modeling. Neuroimage. 2009;45(1). doi: 10.1016/j.neuroimage.2008.10.065.
  60. Liu X, Tian R, Zuo Z, Zhao H, Wu L, Zhuo Y, et al. Chen L. A high-resolution MRI brain template for adult Beagle. Magn Reson Imaging. 2020;68:148–157. doi: 10.1016/j.mri.2020.01.003.
  61. Logothetis NK, Pauls J, Augath M, Trinath T, Oeltermann A. Neurophysiological investigation of the basis of the fMRI signal. Nature. 2001;412(6843):150–157. doi: 10.1038/35084005.
  62. Manzo A, Ootaki Y, Ootaki C, Kamohara K, Fukamachi K. Comparative study of heart rate variability between healthy human subjects and healthy dogs, rabbits and calves. Lab Anim. 2009;43(1):41–45. doi: 10.1258/la.2007.007085.
  63. Marquis DG. Effects of removal of the visual cortex in mammals, with observations on the retention of light discrimination in dogs. Assoc Res Nerv Ment Dis. 1934:558–592.
  64. Moradi F, Liu LC, Cheng K, Waggoner RA, Tanaka K, Ioannides AA. Consistent and precise localization of brain activity in human primary visual cortex by MEG and fMRI. Neuroimage. 2003;18(3):595–609. doi: 10.1016/S1053-8119(02)00053-8.
  65. Nakahara K, Hayashi T, Konishi S, Miyashita Y. Functional MRI of macaque monkeys performing a cognitive set-shifting task. Science. 2002;295(5559):1532–1536. doi: 10.1126/science.1067653.
  66. Nichols TE, Das S, Eickhoff SB, Evans AC, Glatard T, Hanke M, et al. Yeo BTT. Best practices in data analysis and sharing in neuroimaging using MRI. Nat Neurosci. 2017;20(3):299–303. doi: 10.1038/nn.4500.
  67. Nitzsche B, Boltze J, Ludewig E, Flegel T, Schmidt MJ, Seeger J, et al. Schulze S. A stereotaxic breed-averaged, symmetric T2w canine brain atlas including detailed morphological and volumetrical data sets. Neuroimage. 2019;187:93–103. doi: 10.1016/j.neuroimage.2018.01.066.
  68. Orban GA, Van Essen D, Vanduffel W. Comparative mapping of higher visual areas in monkeys and humans. Trends Cogn Sci. 2004;8:315–324. doi: 10.1016/j.tics.2004.05.009.
  69. Patel GH, Cohen AL, Baker JT, Snyder LH, Corbetta M. Comparison of stimulus-evoked BOLD responses in human and monkey visual cortex. bioRxiv. 2018:345330. doi: 10.1101/345330.
  70. Poldrack RA, Baker CI, Durnez J, Gorgolewski KJ, Matthews PM, Munafò MR, et al. Yarkoni T. Scanning the horizon: towards transparent and reproducible neuroimaging research. Nat Rev Neurosci. 2017;18(2):115–126. doi: 10.1038/nrn.2016.167.
  71. Poldrack RA, Fletcher PC, Henson RN, Worsley KJ, Brett M, Nichols TE. Guidelines for reporting an fMRI study. Neuroimage. 2008;40(2):409–414. doi: 10.1016/j.neuroimage.2007.11.048.
  72. Power JD, Barnes KA, Snyder AZ, Schlaggar BL, Petersen SE. Spurious but systematic correlations in functional connectivity MRI networks arise from subject motion. Neuroimage. 2012;59(3):2142–2154. doi: 10.1016/j.neuroimage.2011.10.018.
  73. Power JD, Mitra A, Laumann TO, Snyder AZ, Schlaggar BL, Petersen SE. Methods to detect, characterize, and remove motion artifact in resting state fMRI. Neuroimage. 2014;84:320–341. doi: 10.1016/j.neuroimage.2013.08.048.
  74. Prichard A, Chhibber R, Athanassiades K, Spivak M, Berns GS. Fast neural learning in dogs: a multimodal sensory fMRI study. Sci Rep. 2018;8:14614. doi: 10.1038/s41598-018-32990-2.
  75. Prichard A, Chhibber R, King J, Athanassiades K, Spivak M, Berns GS. Decoding odor mixtures in the dog brain: an awake fMRI study. bioRxiv. 2019:754374. doi: 10.1101/754374.
  76. Prichard A, Cook PF, Spivak M, Chhibber R, Berns GS. Awake fMRI reveals brain regions for novel word detection in dogs. Front Neurosci. 2018;12:737. doi: 10.3389/fnins.2018.00737.
  77. Roth G, Dicke U. Evolution of the brain and intelligence. Trends Cogn Sci. 2005;9(5):250–257. doi: 10.1016/j.tics.2005.03.005.
  78. Schoenebeck JJ, Ostrander EA. The genetics of canine skull shape variation. Genetics. 2013;193(2):317–325. doi: 10.1534/genetics.112.145284.
  79. Shan ZY, Wright MJ, Thompson PM, McMahon KL, Blokland GGAM, De Zubicaray GI, et al. Reutens DC. Modeling of the hemodynamic responses in block design fMRI studies. J Cereb Blood Flow Metab. 2014;34(2):316–324. doi: 10.1038/jcbfm.2013.200.
  80. Silva AC, Koretsky AP, Duyn JH. Functional MRI impulse response for BOLD and CBV contrast in rat somatosensory cortex. Magn Reson Med. 2007;57(6):1110–1118. doi: 10.1002/mrm.21246.
  81. Simmons JP, Nelson LD, Simonsohn U. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci. 2011;22(11):1359–1366. doi: 10.1177/0956797611417632.
  82. Sladky R, Friston KJ, Tröstl J, Cunnington R, Moser E, Windischberger C. Slice-timing effects and their correction in functional MRI. Neuroimage. 2011;58(2):588–594. doi: 10.1016/j.neuroimage.2011.06.078.
  83. Strassberg LR, Waggoner LP, Deshpande G, Katz JS. Training dogs for awake, unrestrained functional magnetic resonance imaging. J Vis Exp. 2019;(152). doi: 10.3791/60192.
  84. Sylvester R, Haynes JD, Rees G. Saccades differentially modulate human LGN and V1 responses in the presence and absence of visual stimulation. Curr Biol. 2005;15(1):37–41. doi: 10.1016/j.cub.2004.12.061.
  85. Szabó D, Gábor A, Gácsi M, Faragó T, Kubinyi E, Miklósi Á, Andics A. On the face of it: no differential sensitivity to internal facial features in the dog brain. Front Behav Neurosci. 2020;14:25. doi: 10.3389/fnbeh.2020.00025.
  86. Thompkins AM, Deshpande G, Waggoner P, Katz JS. Functional magnetic resonance imaging of the domestic dog: research, methodology, and conceptual issues. Comp Cogn Behav Rev. 2016;11:63–82. doi: 10.3819/ccbr.2016.110004.
  87. Thompkins AM, Ramaiahgari B, Zhao S, Gotoor SSR, Waggoner P, Denney TS, et al. Katz JS. Separate brain areas for processing human and dog faces as revealed by awake fMRI in dogs (Canis familiaris). Learn Behav. 2018;46(4):561–573. doi: 10.3758/s13420-018-0352-z.
  88. Tootell RBH, Tsao D, Vanduffel W. Neuroimaging weighs in: humans meet macaques in "primate" visual cortex. J Neurosci. 2003;23:3981–3989. doi: 10.1523/jneurosci.23-10-03981.2003.
  89. Turner R. How much cortex can a vein drain? Downstream dilution of activation-related cerebral blood oxygenation changes. Neuroimage. 2002;16(4):1062–1067. doi: 10.1006/nimg.2002.1082.
  90. Uemura EE. Fundamentals of Canine Neuroanatomy and Neurophysiology. Ames, Iowa: John Wiley & Sons; 2015.
  91. Upham NS, Esselstyn JA, Jetz W. Inferring the mammal tree: species-level sets of phylogenies for questions in ecology, evolution, and conservation. PLoS Biol. 2019;17(12):e3000494. doi: 10.1371/journal.pbio.3000494.
  92. Willis CKR, Quinn RP, McDonell WM, Gati J, Parent J, Nicolle D. Functional MRI as a tool to assess vision in dogs: the optimal anesthetic. Vet Ophthalmol. 2001;4(4):243–253. doi: 10.1046/j.1463-5216.2001.00183.x.
  93. Willis CKR, Quinn RP, McDonell WM, Gati J, Partlow G, Vilis T. Functional MRI activity in the thalamus and occipital cortex of anesthetized dogs induced by monocular and binocular stimulation. Can J Vet Res. 2001;65(3):188–195.
  94. Wing KG, Smith KU. The role of the optic cortex in the dog in the determination of the functional properties of conditioned reactions to light. J Exp Psychol. 1942;31(6):478–496.
  95. Worsley KJ, Friston KJ. Analysis of fMRI time-series revisited—again. Neuroimage. 1995;2:173–181. doi: 10.1006/nimg.1995.1023.
  96. Wurtz RH. Neuronal mechanisms of visual stability. Vision Res. 2008;48(20):2070–2089. doi: 10.1016/j.visres.2008.03.021.
  97. Yáñez IB, Muñoz A, Contreras J, Gonzalez J, Rodriguez-Veiga E, DeFelipe J. Double bouquet cell in the human cerebral cortex and a comparison with other mammals. J Comp Neurol. 2005;486(4):344–360. doi: 10.1002/cne.20533.
  98. Yushkevich PA, Piven J, Hazlett HC, Smith RG, Ho S, Gee JC, Gerig G. User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability. Neuroimage. 2006;31(3):1116–1128. doi: 10.1016/j.neuroimage.2006.01.015.


Data Availability Statement

Unthresholded statistical maps from the exploratory and estimation analyses, the Matlab-based code to calculate the HRF model fits, the FIR data for both experiments, and an spm_my_defaults.m script containing the dog HRF parameters are available as supplementary material.
