Environmental Health Perspectives. 2025 Apr 23;133(3–4):047010. doi: 10.1289/EHP15100

Ultrafine Particle Mobile Monitoring Study Designs for Epidemiology: Cost and Performance Comparisons

Sun-Young Kim,1,2 Amanda J Gassett,2 Magali N Blanco,2 Lianne Sheppard2,3
PMCID: PMC12036699  PMID: 40042987

Abstract

Background:

Given the difficulty of collecting air pollution measurements for individuals, researchers use mobile monitoring to develop accurate models that predict long-term average exposure to air pollution, allowing the investigation of its association with human health. Although recent mobile monitoring studies have focused on predictive model performance when selecting optimal designs, cost is also an important consideration.

Objectives:

This study aimed to compare costs to predictive model performance for different mobile monitoring designs.

Methods:

We used data on ultrafine particle stationary roadside mobile monitoring and associated costs collected by the Adult Changes in Thought Air Pollution (ACT-AP) study. By assuming a single instrument, local monitoring, and constant costs of equipment and investigator oversight, we focused on the incremental cost of staff work days, composed mostly of sampling drives and quality control procedures. The ACT-AP complete design included data collection from 309 sites, with 29 visits per site, during four seasons and every day of the week. We considered alternative designs by selecting subsets with fewer sites, visits, seasons, days of week, and hours of day. We then developed exposure prediction models from each alternative design and calculated cross-validation (CV) statistics using all observations from the complete design. Finally, we compared CV R-squared values and numbers of staff work days from alternative designs to those from the complete design and demonstrated this exercise in a web application.

Results:

For designs with fewer visits per site, costs (numbers of work days) were lower but model performance (CV R2) worsened, with only mild decline above 12 visits per site. Costs were also lower for designs with fewer sites when at least 100 sites were included, although the reduction in performance was minimal. For temporally restricted designs constrained to have the same number of work days, and thus the same cost, restrictions on the number of seasons, days of week, and/or hours of the day adversely impacted model performance.

Discussion:

Our study provides practical guidance for future mobile monitoring campaigns whose ultimate goal is assessing the health effects of long-term air pollution exposure. Temporally balanced designs with 12 visits per site are a cost-effective option that provides relatively good prediction accuracy at reduced cost. https://doi.org/10.1289/EHP15100

Introduction

Although mobile monitoring has been applied to measuring air pollution emitted from traffic or wildfires for decades,1–6 more recent mobile monitoring studies have focused on exposure assessment to investigate the association between long-term exposure to air pollution and human health.7,8 Specifically, recent mobile monitoring campaigns aimed to achieve high-quality estimates of people's long-term average exposures to traffic-related air pollution (TRAP), given the increasing epidemiological evidence of adverse health effects associated with TRAP.9–11 To achieve this goal, investigators designed or leveraged existing mobile monitoring campaigns and subsequently used air pollution prediction models to provide estimates of individual-level long-term average exposures.

Because mobile monitoring allows substantial flexibility regarding spatial domain and temporal frequency and timing of sampling, there are a multitude of available choices in designing a monitoring campaign. Some recent studies focused on quantifying the accuracy of individual long-term exposure estimates.8,12–15 Cost is another important factor that drives monitoring design, particularly given the high expense of payroll and equipment in mobile monitoring.16 Cost can vary dramatically as a function of logistical features; some designs optimized to achieve high accuracy in exposure estimates may not be able to meet a target budget. Thus, an investigation of the trade-off between the increase in cost and the improvement of prediction model performance can help investigators design their mobile monitoring campaigns in the future. Our previous study suggested that greater spatial and temporal coverage in mobile monitoring designs can improve prediction accuracy.13 This study expands our focus to include cost to explore cost-effective design options for future air pollution mobile monitoring campaigns.

Focusing on the goal of providing practical guidance, this study aimed to compare costs of input resources with resulting exposure model performance by comparing and contrasting characteristics of mobile monitoring designs for application to environmental epidemiology. We assume that the ultimate exposure assessment goal in this setting is to produce high-quality predictions of annual average concentrations at study participants' residences to assess their health effects in a single geographically defined area. We specifically focus on ultrafine particles (UFPs) quantified using a spatial prediction model. Because UFP monitoring equipment can be very expensive and no regulatory monitoring networks include UFP measurements, a mobile platform carrying a few instruments has been widely considered to be a cost-effective option.17–19 Despite the increasing attention, there has been limited guidance on effective monitoring designs for epidemiological applications. To quantify costs and compare them with model performances, we leverage our experience in monitoring and modeling short-term stationary roadside measurements of UFP that were produced under the auspices of the Adult Changes in Thought Air Pollution (ACT-AP) study.

Methods

ACT-TRAP Study Overview and Assumptions for the Present Study

As a part of the ACT-AP study that has been investigating the association between long-term exposure to air pollution and brain health for more than 5,400 adults over the age of 65 y in the greater Seattle area, the Adult Changes in Thought Traffic-Related Air Pollution (ACT-TRAP) study specifically focused on an extensive mobile monitoring campaign with high temporal and spatial resolution.20 The mobile monitoring included stationary roadside and nonstationary (on-road) measurements of particulate and gaseous pollutants from multiple instruments loaded on a single vehicle that drove along nine fixed routes with a total length of 1,069 km in the 1,200-km2 study area. Driving hours were between 0500 hours and 2300 hours (5 A.M. and 11 P.M.), including business and nonbusiness hours, weekdays and weekends, and all four seasons from March 2019 through March 2020. The monitoring platform did not operate from midnight to 5 A.M. because of logistical difficulty. We used the canonical calendar seasons: spring from 20 March 2019 to 20 June 2019, summer from 21 June to 22 September, autumn from 23 September to 20 December, and winter from 21 December 2019 to 19 March 2020 (average temperatures of each season: 8°, 19°, 12°, and 5°C, respectively). The stationary measurements, at 1 s to 1 min depending on the pollutant, were collected at 309 roadside locations for 2 min at each site, with 29 repeat visits per site. The study team members also carried out make-up sampling for missing data due to instrumentation malfunctions, inclement weather, car trouble, or driver illness or error. Because this campaign included most hours and all seasons and days of week at each sampling site, we considered the annual average of measurements at each site without additional temporal adjustment as the representative annual average concentration.
For particle number concentration (PNC) often used to characterize UFP, the ACT-TRAP study computed the median concentration of each 2-min visit, winsorized medians across visits within site using the 5% threshold level, and computed averages at each site. The instrumentation details, including specifications and limits of quantification as well as quality control procedures, are provided below and in our previous published study.20

Design Components

Using the available ACT-TRAP data, the present study quantified features of the complete and alternative (subsampled) monitoring campaign designs for PNC sampled from an unscreened P-Trak 8525 (TSI Inc.). We chose a single-instrument, single-location scenario to focus more directly on design components and to omit from consideration cost differences driven by instrument selection and travel to/from multiple study areas. We selected the P-Trak because it has been commonly used in previous mobile monitoring campaigns and produces high-quality data at a 1-s time scale, as we have documented previously.13,20,21 The limit of quantification was 1 pt/cm3, with an operating temperature range of 0° to 38°C and a sample inlet flow rate of 700 mL/min.22 Each P-Trak was factory-recalibrated in January 2019 prior to study data collection, and we confirmed their data precision by comparing the measurements from collocated instruments every few weeks.

We considered five design components in this investigation: number of sites, number of visits per site, days of the week, hours of the day, and seasons (Table 1). These five components indicate key spatial and temporal characteristics that contribute to assessing long-term average exposure of air pollution for cohort participants but are commonly limited in previous monitoring campaigns given logistical and/or financial constraints. The complete all-data design included 309 sites; 29 visits per site; all 7 d of the week; most hours of the day, between 0500 hours and 2300 hours (5 A.M. and 11 P.M.); and all four seasons. Alternative designs reduced the number of sites to 100, 150, 200, or 250 (i.e., spatially reduced) or reduced the number of visits per site to 4, 6, 12, 16, 20, or 24 (i.e., temporally reduced), to provide a wide range of values that could be used to visualize the shape of the association between amount of monitoring and model performance. In temporally restricted designs, we restricted the days of the week and the 18 sampling hours from 0500 hours to 2300 hours (5 A.M. to 11 P.M.) to develop three overlapping subsets: a) all 18 h on weekdays only, b) business hours from 0900 hours to 1700 hours (9 A.M. to 5 P.M.) on weekdays, and c) rush hours from 0700 hours to 1000 hours (7 A.M. to 10 A.M.) and 1500 hours to 1800 hours (3 P.M. to 6 P.M.) on weekdays. These temporally restricted designs were intended to be logistically feasible by restricting sampling to regular work hours and days; they may not provide representative long-term average exposure. The alternative designs for season included two or three seasons with no other restrictions. The designs that restricted the numbers of seasons, days, or hours used consistent numbers of sites and visits per site—309 and 12, respectively—to separate the effect of selective timing from the effect of the total amount of monitoring.
We did not consider the following designs: weekends only; one season; unbalanced number of visits per site. The first two are unrealistic options for assessing long-term average exposure to air pollution; and the last, although common in the literature, is left for future work. To assess the representative model performance of each alternative design, we sampled 30 campaigns of each design by random sampling from the complete all-data design.
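Subsampling an alternative design from the complete design amounts to filtering the complete set of visit records to the allowed hours and days and then drawing a fixed number of visits per site. The study's resampling was implemented in R; the sketch below is a hypothetical Python version with made-up visit records, where the `site`/`hour`/`weekday` keys are illustrative assumptions.

```python
import random
from collections import defaultdict

def sample_design(visits, n_visits=12, hours=range(5, 23), weekdays=None, seed=0):
    """Draw one alternative campaign from complete-design visit records.

    `visits` is a list of dicts with keys 'site', 'hour', 'weekday' (0=Mon).
    Visits outside the allowed hours/weekdays are excluded first; then up to
    `n_visits` of the remaining visits are sampled per site without replacement.
    """
    rng = random.Random(seed)
    eligible = defaultdict(list)
    for v in visits:
        if v['hour'] in hours and (weekdays is None or v['weekday'] in weekdays):
            eligible[v['site']].append(v)
    campaign = []
    for site, vs in eligible.items():
        campaign.extend(rng.sample(vs, min(n_visits, len(vs))))
    return campaign

# Hypothetical complete design: 3 sites x 29 visits at random times.
rng = random.Random(1)
complete = [{'site': s, 'hour': rng.randrange(5, 23), 'weekday': rng.randrange(7)}
            for s in range(3) for _ in range(29)]
# Temporally restricted design: business hours (9 A.M.-5 P.M.), weekdays only.
business = sample_design(complete, n_visits=12, hours=range(9, 17), weekdays=range(5))
```

Repeating such a draw 30 times with different seeds gives the 30 campaigns per design used to summarize model performance.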

Table 1.

Complete and alternative designs for ultrafine particle mobile monitoring to assess model performance and costs.

| Design component | Complete all-data design^a | Alternative monitoring design changes | Alternative monitoring design options |
| --- | --- | --- | --- |
| Number of sites | 309 | Fewer sites | 100, 150, 200, or 250 |
| Number of visits | 29^b | Fewer visits per site | 4, 6, 12, 16, 20, or 24 |
| Day of the week | All days | Fewer days | Weekdays only^c |
| Hour | Most hours, 0500 hours to 2300 hours (5 A.M.–11 P.M.) | Fewer hours | Business or rush hours only^c |
| Season | 4 seasons | Fewer seasons | 2 or 3 seasons^d |
a

The ultrafine particle monitoring campaign of the Adult Changes in Thought Traffic-Related Air Pollution study; we used the complete all-data design as the reference for all alternative designs except the temporally restricted designs with fewer days, hours, or seasons, for which a reduced design with 12 visits (309 sites and 12 visits) served as the "complete all-data" design.

b

Median with a range of 26–35.

c

Monday to Friday only.

d

Six combinations of two seasons and four combinations of three seasons.

Approach to Cost Estimation

We assumed that the total cost of the mobile monitoring campaign is composed of two types: up-front and per-drive-day costs. Table S1 shows the characteristics and examples of up-front and per-drive-day costs. The up-front cost was defined as one-time or long-term costs mostly incurred before the beginning of regular monitoring. Examples included the purchase of instruments and various preparation efforts such as fabrication of the manifold, software development, protocol development, maintenance, and establishment of sampling locations. Together, these activities required about 40 work days of one full-time and three to five part-time staff members in ACT-TRAP. The actual number of days varied, depending on the experience and training status of staff as well as the presence of existing materials and protocols. Most of these elements did not vary by monitoring design. The exception to this was a small amount of incremental time needed to establish sampling locations, which included site-specific selection, review, and documentation, plus time needed for pilot drives of each route.

The per-drive-day cost included the cost of staff time for planned driving, make-up sampling, related in-laboratory quality control activities, vehicle use, and data management. The per-day vehicle costs were based on our university’s long-term or short-term rental, mileage, and fuel rates for fleet services and the monthly parking rate for our building’s lab. Because the P-Trak provided direct real-time measurements that were downloaded after each day of sampling, laboratory data management activities were limited to data review and cleaning as described in Table S2. We primarily assumed that the staff cost of a single work day was constant regardless of the hours of day or days of week worked, because we funded multiple full-time staff members instead of paying overtime. We quantified the staffing effort by using the number of work days as a unit of cost measurement, which can also be expressed as a fraction or multiplier of a year of effort from a full-time staff person.

We estimated that a single full-time staff person should work 229 d in a year, based on the assumptions that a person works 5 d per week and has 3 wk of paid vacation, 12 paid holidays, and 5 sick days. The calculations of required work days for the full single-instrument single-city monitoring design based on ACT-TRAP are shown in Table 2. Given a single instrument, we estimated that 203 d of regular driving would be required to make 29 visits to all sites on seven routes. Each route was designed to be driven in 1 d (7.5 h driving plus 15 min each for setup and takedown) and included an average of 44 sites. We estimated 2 d per season for rescheduled drives to accommodate sick time, car trouble, inclement weather, or other unforeseen circumstances that prevent driving per the original plan. We also assumed that drivers spent 1 d in every 10 sampling days for in-lab quality-control activities, including calibration, lab-based colocation, administrative tasks, meetings, and car or instrument repair. After aggregating all of these work days in the complete design, we obtained a total between 232 and 244 work days, or between 1.0 and 1.1 (232/229–244/229) field technicians/drivers needed for a year. For alternative designs, the number of work days was reduced according to the difference from the complete design. Our cost estimation did not consider analysis work for exposure model development or collaboration efforts with expert groups, because these would vary depending on the scientific aims proposed for the use of the data.
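The work-day accounting above can be expressed as a small function. This is a sketch of the paper's arithmetic (regular drives = routes × visits, plus make-up drives, plus roughly one quality-control day per 10 sampling days); the parameter names are our own.

```python
def work_days(n_routes=7, n_visits=29, resched_per_season=2, n_seasons=4,
              qc_every=10):
    """Per-drive-day staffing in work days, following the ACT-TRAP accounting:
    regular drives, rescheduled (make-up) drives, and roughly one in-lab
    quality-control day per `qc_every` sampling days."""
    regular = n_routes * n_visits
    rescheduled = resched_per_season * n_seasons
    qc = round((regular + rescheduled) / qc_every)
    return regular + rescheduled + qc

print(work_days())             # complete design: 232 work days
print(work_days(n_visits=12))  # reduced 12-visit design: 101 work days
```

With 229 available work days per person per year, 232 work days corresponds to slightly more than one full-time field technician/driver.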

Table 2.

Calculation for the expected number of work days needed for the full single-instrument and single-city monitoring design of the Adult Changes in Thought Traffic-Related Air Pollution study.

| Type of cost | Number of work days | Equation |
| --- | --- | --- |
| Available work days (per person) | 229 | 365 d − (52 wk × 2 weekend days^a + 12 paid holidays + 3 wk × 5 paid vacation days + 5 sick days) |
| Required work days: up-front^b | 40^c | — |
| Required work days: per-drive-day^d, total | 232–244 | — |
|  Regular drives | 203 | 7 routes × 29 visits |
|  Rescheduled drives | 8–20 | 2–5 d × 4 seasons |
|  Quality control | 21 | (203 + 8) drive days × 1/10^e |
Note:

—, no data.

a

Replaced weekend days with two weekdays for the field staff who performed weekend driving.

b

One-time or long-term costs mostly incurred before the beginning of regular monitoring for study preparation.

c

Protocol development, fabrication/assembly of the platform, and pilot testing, which specifically include writing standard operating procedures, developing the instrument setup checklist, testing and troubleshooting the instruments and the setup inside the vehicle, working with information technology personnel to develop the data organization procedure, and other preparation activities.

d

Costs related to data collection based on daily driving.

e

Quality control activities every 10 d.

We estimated monetary values corresponding to the number of work days for alternative and complete designs based on actual expenditures in the ACT-TRAP study as shown in our web application (https://airhealth.shinyapps.io/myshinyapp/). In monetary value estimation, we also highlighted monitoring elements that could incur additional costs: multiple instruments and the potential for a shift premium. For the multiple instruments scenario, we added two additional instruments for PNC (NanoScan and DiSCmini) and four instruments for black carbon (BC), nitrogen dioxide (NO2), carbon dioxide, and carbon monoxide (microAeth MA200, CAPS NO2, LI-850, and Langan, respectively). Although additional PNC instruments are useful to validate PNC measurements, other pollutants can help adjust localized on-road plumes to assess representative residential exposure to UFP as demonstrated in the ACT-TRAP study.21 A temporally varying shift premium scenario illustrates the impact of 30%, 50%, and 100% higher staff cost rates for weekend and evening driving than for weekday daytime driving.

Exposure Model Performance

We computed R-squared (R2) values from 10-fold cross-validation (CV) of the spatial prediction model for the complete all-data design as well as alternative designs sampled using the observations collected under the complete all-data design of the ACT-TRAP study. The annual average PNC spatial prediction model was based on the previously published universal kriging with partial least squares (UK-PLS) framework that reduces the dimension of 188 geographic covariates to a few predictors estimated by partial least squares, followed by spatial smoothing based on universal kriging.23 The R code and details of model specification and implementation were previously published.13 In short, the spatial prediction model is:

\[
\ln(\mathrm{PNC}_i) = \beta_0 + \sum_{m=1}^{2} \beta_m X_{mi} + \varepsilon_i, \qquad (1)
\]

where PNCi is the annual average PNC concentration at the ith location; Xm are the first two partial least squares component scores estimated from 188 geographic covariates, including traffic, land use, and other potential air pollution sources; β0 and βm are estimated regression coefficients; and εi is the residual term with mean zero and a modeled geostatistical structure. The R code used for this model can be found at https://github.com/magali17/hei_sims.

In our CV, we computed annual average predictions at all 309 sites for the complete design and all alternative designs except those with fewer sites. In each alternative design, we divided the 309 sites into 10 groups, developed the prediction model using nine groups of sites after holding out one group as test sites, predicted annual average PNC concentrations at the test sites, and repeated this procedure for each of the remaining nine groups. For designs with fewer sites, we obtained predictions using a combination of CV at the 100 to 250 sites included in that design and external validation for the rest of the sites not included. For example, in the 250-site design, we computed CV predictions at 250 sites and then applied the model based on those 250 sites to obtain external predictions at the remaining 59 sites. For CV statistics, we always calculated mean-square error (MSE)-based R2 values for each alternative design from predictions at all 309 sites in comparison with the annual average observations from the complete all-data design, instead of comparing to the observations from the corresponding alternative design. We considered the complete all-data design, with observations at 309 sites, as the gold standard against which we validated all alternative designs. We computed MSE-R2s to evaluate whether predictions and observations are the same (i.e., near the one-to-one line). MSE-R2s are computed as 1 minus the ratio of MSE to the data variance (scaled by n, not n minus 1), instead of the more common regression-based R2s, calculated as squared Pearson correlation coefficients, which assess whether pairs of observations are linearly associated.13 Because we constructed each alternative design by resampling 30 campaigns, we present the median CV MSE-R2 and its fifth and 95th percentiles across the 30 campaigns for each design.
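The MSE-based R2 defined above (1 minus MSE over the n-scaled variance of the observations) can be written directly; this short Python sketch also shows why it differs from a correlation-based R2 when predictions are biased.

```python
import numpy as np

def mse_r2(obs, pred):
    """MSE-based R2: 1 - MSE / variance of observations (scaled by n), so
    agreement with the one-to-one line is rewarded, not just correlation."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    mse = np.mean((obs - pred) ** 2)
    return 1.0 - mse / np.var(obs)   # np.var divides by n, as in the text

# A uniformly biased predictor has Pearson correlation 1 but a low MSE-R2.
obs = np.array([1.0, 2.0, 3.0, 4.0])
print(mse_r2(obs, obs + 1.0))  # -> 0.2
print(mse_r2(obs, obs))        # -> 1.0
```

Unlike the squared Pearson correlation, this statistic can be negative when predictions are worse than simply predicting the mean of the observations.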

Comparison of Costs and Model Performance between Different Designs

We compared costs (computed as work days) to model performance (presented as CV R2s) across various alternative designs relative to the relevant reference design (Table 3). This comparison indicates the trade-off between the deterioration in model performance and the reduction in cost. For the reference design, we used the complete all-data design (309 sites and 29 visits) to compare to alternative designs that were spatially reduced with fewer sites (100, 150, 200, or 250) or temporally reduced with fewer visits per site (4, 6, 12, 16, 20, or 24). For all other alternative designs with temporal restriction(s), i.e., those with fewer days (weekdays only), hours (business or rush hours only), or seasons (two or three seasons), the reference design was a reduced design with 12 visits (309 sites and 12 visits), serving as the "complete all-data" design for temporally restricted designs. We selected this reference based on feasibility as well as reasonably good model performance in our previous study.13 For example, two seasons provide only 114 available work days (229 available work days/2), and it would not be physically possible to drive 7 routes more than 12 times over two seasons using a single-instrument platform (12 repeat visits × 7 routes of regular driving + 8 make-up driving days + 21 quality control days = 113 days). Therefore, a realistic two-season design inherently requires fewer repeat visits, with a realistic maximum of 12, in comparison with the complete all-data design, avoiding much higher up-front costs to purchase more equipment, hire drivers, and set up multiple platforms for additional sampling in each season.
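The feasibility argument above reduces to a small search: find the largest number of repeat visits whose total work days fit in the available time. This sketch follows the paper's fixed allowances (8 make-up days, 21 quality-control days) on a single platform; the function name is our own.

```python
def max_visits(available_days, n_routes=7, resched=8, qc=21):
    """Largest number of repeat visits per site that fits in the available
    work days on a single platform, using the paper's fixed allowances for
    make-up driving and quality control."""
    v = 0
    while n_routes * (v + 1) + resched + qc <= available_days:
        v += 1
    return v

print(max_visits(229 // 2))  # two seasons (114 available days) -> 12 visits
```

With 13 visits, regular driving alone would need 91 days, for a total of 120, which exceeds the 114 work days available in two seasons.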

Table 3.

Model performance and estimated per-drive-day costs across different alternative sampling designs based on the data from the ACT-TRAP mobile monitoring campaign given a single instrument for UFP monitoring (P-Trak) in a single study area.

| Design | Version | n of sites | n of visits per site^a | n of routes^b | n of seasons | Regular drive (work days) | Rescheduled drive^c (work days) | Quality control (work days) | Total (work days) | CV R2^d | Reference design^e |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Complete | All^f | 309 | 29 | 7 | 4 | 203 | 8 | 21 | 232 | 0.77 | — |
| Alternative: fewer sites | 250 | 250 | 29 | 6 | 4 | 174 | 8 | 18 | 200 | 0.76 | Complete |
| | 200 | 200 | 29 | 5 | 4 | 145 | 8 | 15 | 168 | 0.74 | Complete |
| | 150 | 150 | 29 | 3 | 4 | 87 | 8 | 10 | 105 | 0.70 | Complete |
| | 100 | 100 | 29 | 2 | 4 | 58 | 8 | 7 | 73 | 0.67 | Complete |
| Alternative: fewer visits | 24 | 309 | 24 | 7 | 4 | 168 | 8 | 18 | 194 | 0.72 | Complete |
| | 20 | 309 | 20 | 7 | 4 | 140 | 8 | 15 | 163 | 0.72 | Complete |
| | 16 | 309 | 16 | 7 | 4 | 112 | 8 | 12 | 132 | 0.71 | Complete |
| | 12 | 309 | 12 | 7 | 4 | 84 | 8 | 9 | 101 | 0.70 | Complete |
| | 6 | 309 | 6 | 7 | 4 | 42 | 8 | 5 | 55 | 0.63 | Complete |
| | 4 | 309 | 4 | 7 | 4 | 28 | 8 | 4 | 40 | 0.60 | Complete |
| Alternative: fewer days^g | Weekday | 309 | 12 | 7 | 4 | 84 | 8 | 9 | 101 | 0.65 | Reduced |
| Alternative: fewer hours^h | Business | 309 | 12 | 7 | 4 | 84 | 8 | 9 | 101 | 0.55 | Reduced |
| | Rush | 309 | 12 | 7 | 4 | 84 | 8 | 9 | 101 | 0.64 | Reduced |
| Alternative: fewer seasons^g,i | 3 | 309 | 12 | 7 | 3 | 84 | 6 | 9 | 99 | 0.69 | Reduced |
| | 2 | 309 | 12 | 7 | 2 | 84 | 4 | 9 | 97 | 0.67 | Reduced |
Note: 

—, no data; ACT-TRAP, Adult Changes in Thought Traffic-Related Air Pollution; CV, cross-validated; UFP, ultrafine particle.

a

Maximum number of visits feasible given the restrictions of the reduced designs on a single mobile platform.

b

A single route is a 1-d driving schedule covering, on average, 35 sites; the number of routes determines the number of driving days.

c

Make-up driving for at least 2 d per season.

d

Median cross-validation (CV) R2 across campaigns (n=30) for each design using predictions of UFP annual averages at all 309 sites; for the designs with fewer sites, predictions were obtained from cross-validation for 100 to 250 sites and external validation for the rest of sites not included in a given campaign.

e

Application of the complete reference design ("Complete") to alternative designs with spatial reduction (fewer sites) or temporal reduction (fewer visits), and of the reduced reference design with 12 visits ("Reduced"; 309 sites but 12 visits instead of 29) to alternative designs with temporal restrictions (fewer days, hours, or seasons).

f

A total of 309 sites, 29 visits per site, all days, most hours from 0500 hours to 2300 hours (5 A.M.–11 P.M.), and four seasons.

g

We did not consider the designs of weekend-only and one season because they are not realistic.

h

Business hours: weekdays 0900 hours to 1700 hours (9 A.M.–5 P.M.); Rush hours: weekdays 0700 hours to 1000 hours (7 A.M.–10 A.M.) and 1500 hours to 1800 hours (3 P.M.–6 P.M.).

i

Specific combinations such as summer–winter and summer–spring.

All statistical analyses including exposure model performance and comparison to cost were performed in R (version 4.1.2; R Development Core Team).

Results

Table 3 shows the summary of design components, costs, and model performances. The complete all-data design showed the highest CV R2 of 0.77. CV R2s were lower as we reduced the number of sites, visits, seasons, days, and hours, with the lowest performance for designs with limited repeat visits and hours (median CV R2=0.60 for four visits and 0.55 for business hours only). The number of work days was also largest in the complete all-data reference design (232 d), with linear decreases for designs with fewer sites or visits (73 and 40 d for 100 sites and four visits, respectively).

The relationships between cost and model performance varied across design components (Table 3; Table S2; Figure 1A–D). In particular, we found a contrast in this relationship between the spatially and temporally reduced designs and the temporally restricted designs. As shown in Figure 1A–D, cost was continuously higher and model performance better as we included more sites or visits in the design, whereas costs were constant despite improving model performance as we removed temporal restrictions by adding more seasons, days, or hours. In comparison with the complete all-data design, the number of work days decreased linearly as the number of sites or visits decreased. Model performance, however, declined at different rates depending on the number of sites or visits per site. CV R2s were lower by 3–7 percentage points in comparison with the complete all-data reference design until the number of sites or visits dropped to 200 sites or 12 repeat visits, with all other design components identical to those of the complete all-data design (median R2s of 0.70–0.74 vs. 0.77) (Table 3; Table S2). When the monitoring design had fewer than 200 sites or 12 visits, CV R2s markedly decreased, by 7–17 percentage points (median R2s of 0.60–0.70 vs. 0.77). In contrast, restricting the numbers of seasons, days, or hours decreased CV R2s even though the number of work days was roughly the same as for the reduced 12-visit reference design. When we included two or three seasons in the monitoring campaign, CV R2s were lower by up to 5 percentage points in comparison with the reduced reference design (median R2s of 0.67–0.69 vs. 0.70). The fewer-days design restricted to weekdays also gave an approximately 5-percentage-point lower CV R2 (median R2 of 0.65 vs. 0.70). The fewer-hours designs showed the most notable decrease in CV R2, down to a median of 0.55 when monitoring was restricted to business hours only.

Figure 1.

[Figure 1: four panels (A–D) titled Spatial reduction (fewer sites), Temporal reduction (fewer visits), Temporal restriction (fewer seasons), and Temporal restriction (fewer days or hours), each plotting cost (number of work days; left y-axis) and model performance (cross-validated R2; right y-axis) across the corresponding design component (x-axis).]

Relationships between the number of work days and median CV R2s across complete and alternative mobile monitoring designs according to the number of sites (A), visits (B), seasons (C), and days and hours (D) from the ACT-TRAP mobile monitoring campaign. The “ref” and “ref*” indicate the complete all-data reference and reduced reference designs, respectively. The open red circles and blue diamonds represent cost (number of work days) and CV R2s, respectively, from the complete reference design that serves as the gold standard for comparing spatially reduced alternative designs with fewer sites (A) or temporally reduced alternatives with fewer visits (B). The red stars and blue diamonds with cross represent work days and CV R2s, respectively, from the reduced reference design that is equivalent to the complete all-data design except reduced to 12 visits instead of 29 and is used as the “complete” design for temporally restricted alternative designs with fewer seasons, days, or hours (C and D). In (B), the black dotted horizontal and vertical lines highlight the CV R2 for the reduced reference design. Note: ACT-TRAP, Adult Changes in Thought Traffic-Related Air Pollution; CV, cross-validated.

The estimated monetary values for up-front and per-drive-day costs combined showed similar relationships with model performance as those for workday costs. Although CV R2s decreased in the temporally restricted alternative designs with fewer seasons, days, or hours (median R2s of 0.55–0.69 vs. 0.70), the costs were similar to those for the reduced 12-visit reference design (USD $155,000–$162,000 vs. USD $162,000) (Table S2). However, the spatially or temporally reduced alternative designs with fewer sites or visits showed much lower costs (USD $124,000–$205,000) in comparison with the complete all-data reference design (USD $219,000), as well as a reduction in model performance (median R2s of 0.60–0.70 vs. 0.77). When we compared these estimated costs from our primary monitoring scenario with a single instrument and fixed staff cost to those from additional monitoring scenarios, the total cost of the complete design slightly increased with the temporally varying shift premium (USD $232,000–$264,000 vs. USD $219,000) and increased by more than 80% with multiple instruments (USD $398,000 vs. USD $219,000) (Table S3; https://airhealth.shinyapps.io/myshinyapp/). However, the spatially and temporally reduced designs showed much lower costs, whereas the costs across temporally restricted designs were similar to the reference design (Figures S1 and S2).

Discussion

Our study investigated the trade-offs between model performance and cost in alternative mobile monitoring designs and attempted to identify optimal monitoring designs that reduce costs while maintaining good-quality predictions of air pollution. We found that the better alternative designs reduced the number of sites or repeat visits without temporal restrictions, rather than restricting sampling temporally by season, day, or hour. Designs that restricted the number of seasons, days of the week, and/or hours of the day adversely impacted model performance without reducing cost in comparison with designs requiring the same number of work days. The design with the optimal cost–performance trade-off included 12 visits per site, at least two seasons, both weekdays and weekends, and most hours, including business, early morning, and nighttime hours [to 2300 hours (11 P.M.)]. Further, although cost increases were driven by the number of sites or visits, the cost–performance trade-off became less favorable above 12 repeat visits because of the relatively slower increase in performance. This finding held when we extended the monitoring campaign scenarios to multiple instruments and temporally varying staff costs.

Our study extends and improves on previous studies of monitoring design guidance by considering cost and leveraging a yearlong monitoring campaign with balanced sampling over all days of the week and hours of the day. The literature includes a few recent studies of mobile monitoring designs that provide insights into the relationship between design components and model performance, with the ultimate goal of epidemiological application (Table S4). Like our work, these studies sampled from their original campaign to create new campaigns with specific designs and compared the performance of prediction models developed from these subsets to that of the complete design. Two studies in Oakland, California, in the United States leveraged a campaign that collected BC and nitric oxide (NO) during weekday business hours in the period 2015–2017.8,14 Using a land use regression model to predict annual median concentrations on all 19,149 30-m road segments, the first study showed that at least four repeat visits gave model performance comparable to the complete dataset of 10–41 visits, depending on the road segment.8 The second study applied a mixed-effects model to predict average concentrations using the same data and reported CV R2s of 0.8 and 0.9 for 5 and 15 repeat visits per road segment, respectively.14 In Montreal, Canada, another mobile monitoring study collected PNC for 34 d over three seasons and showed that reduced campaigns with 12 repeat visits per site gave a median adjusted R2 close to 0.7, lower than the 0.74 for the complete data of at least 16 visits.24,25 Consistent with our findings, all of these studies pointed to the important role of repeat visits in model performance, although they used nonstationary on-road data as opposed to our stationary roadside data.
In contrast to our work, their insights into optimal monitoring designs were based on temporally limited monitoring campaigns lasting <40 d or conducted only during daytime hours on weekdays; this may limit their generalizability.26 A recent review also pointed out the limited temporal coverage of mobile monitoring studies, noting that only 15% of studies lasted for at least 1 y; it also reported a median of 10 repeat visits among 41 studies that developed spatial regression models.16

Our focus on costs provides a different optimization of monitoring designs from those that focused exclusively on prediction accuracy. Our previous study used the ACT-TRAP study data to explore the relationship between design components and model performance and showed improved performance with more sites and repeat visits in a design that covered most hours and days.13 In the present study, we showed that although costs increased linearly with the number of sites or visits, performance increased much more steeply from 4 to 12 visits per site than from 12 to 26 visits per site in a design with 29 total visits per site. We also showed that designs including multiple seasons, weekdays and weekends, and most day and night hours improved prediction accuracy with no cost increase, assuming the per-drive-day cost did not vary by time of day or day of week. Thus, leveraging the temporally and spatially resolved monitoring data and extending experiments using various subsampled datasets, we conclude that temporally balanced designs with 12 visits per site are the most cost-effective.

For the purposes of this study, we considered a simple monitoring environment by assuming a single-city, single-pollutant, and single-instrument condition. This approach allowed us to focus on the changes in incremental costs driven by per-drive-day sampling. Extending monitoring to more than one city introduces travel costs, including car rentals, flights, additional travel days, housing and per diem, and shipping of instruments, when the primary institution takes charge of monitoring in multiple cities. These additional costs can vary widely depending on the target city. Multicity campaigns also need to plan for additional coordination time to find lab space at a partner institution and, possibly, subcontract costs to that institution. Additional instruments require more setup and takedown time each drive day, more data management and quality control, and more up-front time to develop protocols. This additional workload will likely limit the length of the driving routes and thus the number of locations that can be visited in one day. For example, the ACT-TRAP study involved multiple particulate and gaseous instruments that required an hour to set up and take down at the beginning and end of each driving day, and thus data collection was limited to 6 h of driving (vs. 7.5 h in our primary monitoring scenario with a single instrument), resulting in more routes to visit all 309 sites (9 vs. 7), fewer sites on each route (35 vs. 44 on average), and more total driving days (261 vs. 233). In addition, each instrument has unique up-front, consumable, and per-drive-day costs. The P-Trak, chosen in this study, is a USD $7,000 instrument that requires low-cost consumables and does not need special calibration.
Another instrument used for PNC monitoring in the ACT-TRAP study, the NanoScan, costs USD $30,000 up front, approximately the equivalent of 56 d of per-drive-day costs for the simplified single-instrument design. The manufacturer also recommends factory maintenance and calibration once per year, costing USD $2,000 for service and shipping. Although this additional cost results in much higher total costs in comparison with the single-instrument scenario, the increase is mostly driven by up-front costs, which are constant across designs, as shown in Table S3. The relationship between per-drive-day costs and design components in the multi-instrument scenario was consistent with our findings based on a single instrument (Figures S1 and S2).
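The up-front vs. per-drive-day equivalence quoted above implies an approximate per-drive-day cost, which can be recovered with a one-line calculation. This back-of-the-envelope derivation is ours, inferred from the figures in the text rather than taken from the study's cost tables.

```python
# The text equates the NanoScan's USD $30,000 up-front cost to roughly 56 days
# of per-drive-day costs, implying a per-drive-day cost on the order of USD $536.
upfront_usd = 30_000.0
equivalent_days = 56
per_drive_day_usd = upfront_usd / equivalent_days
print(round(per_drive_day_usd, 2))  # approximately 535.71
```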

We employed the simplifying assumption of identical staffing costs for business-hours vs. nonbusiness-hours sampling, as well as weekday vs. weekend sampling, which resulted in roughly constant costs across all designs requiring the same number of work days. We based this simplification on our experience using full-time staff and relying on a constant staff cost rate across days of the week and hours of the day; however, this could vary between institutions and countries. When we applied higher staff costs to evening and weekend sampling, the estimated cost was slightly higher, as shown in Figure S1. However, the increases were relatively small and do not affect our overall interpretation that the temporally unrestricted designs are the most cost-effective (Table S3; Figures S1 and S2).

Our study has several limitations, and future studies could develop alternative cost estimation approaches to represent the complex reality of mobile monitoring campaigns. There are different costs and logistical considerations associated with funding graduate students vs. staff and full- vs. part-time workers. For example, students might need to accommodate class schedules or may not be available for 8-h shifts. A variable shift schedule could also require extra up-front planning to match the design to the available staff, or additional time to hire staff willing and able to work an unusual schedule. Such a staffing plan is also more vulnerable to staff turnover or disruption if field technicians develop unexpected obligations outside of work, such as family duties, potentially causing delays or data loss. In addition, it is important to note that all our comparisons were constrained by the full dataset available for analysis. Had we collected additional data, e.g., at more sites or with more visits per site, the changes in performance statistics may have differed for the numbers of sites and visits per site that we considered. Finally, all our exposure assessments focused on ambient exposure, which could affect exposure misclassification and health inference, although consideration of indoor air quality or personal exposure was outside the scope of our study.

We focused on stationary roadside mobile monitoring with 2-min measurements and excluded nonstationary (second-by-second) sampling measurements, to characterize residential rather than on-road pollution levels. Our previous analysis of minute-level data from two regulatory monitoring sites in Seattle showed that multiple visits of 1-min duration minimized the error in estimating the long-term average concentrations of NO and NO2 (see Figure S2 in Blanco et al.).19 This finding supports our focus on 2-min stationary roadside data when the goal is long-term average prediction at residential locations. In addition, we have shown that on-road data require additional sampling and analysis costs to characterize residential off-road exposures. Specifically, additional pollutants must be measured to account for on-road sources such as tailpipe pollution.21 Our multi-instrument scenario indicates a possible and realistic attempt to account for on-road plumes, using other gaseous pollutants, to assess representative residential exposures to UFP. Furthermore, nonstationary measurements could be highly correlated, given adjacent sampling locations. These unique features of nonstationary data could result in inconsistent long-term average concentrations in comparison with those from stationary roadside data when the two are combined for developing a single exposure prediction model. Future studies should examine nonstationary on-road data alone as well as on-road data combined with stationary roadside data to provide guidance for exposure model development with combined data.
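The value of repeat short-term visits for estimating a long-term average can be illustrated with a small simulation: the error of an n-visit mean shrinks roughly as 1/sqrt(n), which is consistent with the steep performance gains from 4 to 12 visits and the diminishing returns thereafter. The concentration distribution below is invented for illustration, not ACT-TRAP data.

```python
import random
import statistics

# Simulated illustration: averaging more short visits gives a more stable
# estimate of a site's long-term mean. All values are hypothetical, not study data.
rng = random.Random(42)
TRUE_MEAN, SD = 5000.0, 2000.0  # invented long-term mean PNC and visit-level SD

def mean_abs_error(n_visits: int, n_sims: int = 2000) -> float:
    """Mean absolute error of the n-visit average across simulated campaigns."""
    errors = []
    for _ in range(n_sims):
        visits = [rng.gauss(TRUE_MEAN, SD) for _ in range(n_visits)]
        errors.append(abs(statistics.fmean(visits) - TRUE_MEAN))
    return statistics.fmean(errors)

# Error should drop substantially from 4 to 12 visits, more slowly thereafter.
e4, e12, e29 = mean_abs_error(4), mean_abs_error(12), mean_abs_error(29)
```

Under these assumptions, the expected error at 29 visits is less than half that at 4 visits, while the improvement from 12 to 29 visits is comparatively modest.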

Some of the performance statistics in our results may be influenced by randomly resampling the same observations in designs with restricted sites, visits, seasons, days, and/or hours. We summarized model performance using the median across 30 campaigns to avoid reporting extreme performance statistics. In addition, our resampling from the temporally balanced data could also reduce this impact. The ACT-TRAP study scheduled its drive days for operational and technical feasibility, as seen in other monitoring campaigns, but rotated the monitoring schedule on a biweekly basis to accomplish its overall temporally balanced data collection. Specifically, ACT-TRAP consistently monitored all routes during a specific 8-h time slot in the morning, midday, or evening over a 2-wk period before transitioning to a new 8-h time slot over the next 2-wk period. Monitoring over 8-h time blocks reduced technician burden in comparison with a less consistent schedule and accommodated battery charging, which takes at least 8 h. Furthermore, ACT-TRAP’s temporally balanced data collection allowed all performance statistics to be evaluated against a consistent set of observations, which facilitated comparisons between designs.27 Note that the random resampling we applied could give our performance statistics an advantage that would not be seen in real monitoring campaigns. For example, studies that use fewer-site or fewer-visit designs could preferentially include sites selected for practical or convenience reasons rather than randomly. There is also inherent spatial and temporal correlation in the measurement approach of any realistic mobile monitoring study that is not preserved in our random resampling approach. In spite of this feature and the resulting optimistic performance statistics, our approach still provides an overall perspective and general guidance.
Future assessments could further restrict their sampling approach to account for the inherent correlation in mobile campaigns. It is possible that mobile monitoring designs for characterizing UFP that supplement longer-term, fixed-site monitoring campaigns or emerging monitoring options could be even less expensive. More research is needed to establish best practices for integrating monitoring data sampled by different instruments as well as for monitoring designs. New monitoring technology such as low-cost sensors, commonly defined as devices costing <USD $2,500, can also reduce data collection costs for fine particulate matter with aerodynamic diameter ≤2.5 μm (PM2.5) and gases, but to date there are no validated low-cost sensors for UFP.28 Different mobile monitoring platforms based on walking, biking, and public transportation, as used in some previous studies, may also reduce costs.29–31
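The subsampling-and-median procedure described above (random campaigns with restricted visits, summarized by the median across 30 draws) can be sketched as follows. This is a schematic of the scheme only: `evaluate_design` is a placeholder for the actual model-fitting and cross-validation pipeline, which we do not reproduce, and the data are invented.

```python
import random
import statistics

# Sketch of the design-resampling scheme: draw 30 random campaigns, each keeping
# k visits per site, score each with a user-supplied evaluation function, and
# report the median to damp extreme draws. evaluate_design is a stand-in for
# the real model-fitting and cross-validation pipeline.
def subsample_visits(visits_by_site, k, rng):
    """Randomly keep k visits per site (all visits if a site has fewer than k)."""
    return {site: rng.sample(v, min(k, len(v))) for site, v in visits_by_site.items()}

def median_performance(visits_by_site, k, evaluate_design, n_campaigns=30, seed=0):
    rng = random.Random(seed)
    scores = [evaluate_design(subsample_visits(visits_by_site, k, rng))
              for _ in range(n_campaigns)]
    return statistics.median(scores)

# Toy usage: 5 sites with 29 visits each, scored by the fraction of visits kept
# (a stand-in metric, not CV R2).
visits = {f"site{i}": list(range(29)) for i in range(5)}
score = median_performance(visits, k=12,
                           evaluate_design=lambda d: sum(map(len, d.values())) / 145)
```

Reporting the median rather than a single draw is what keeps one unlucky campaign from dominating the comparison between designs.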

Although the present study focused on the impact of design choices on exposure assessment, our findings also provide some insight into the impact on health inference. In related work, we applied PNC predictions obtained from the same mobile monitoring data and from alternative mobile monitoring designs similar to those discussed here to cognitive function in the ACT cohort and compared the health estimates.32 In general, although the pattern of health estimates was less conclusive than the prediction model performances, we found disadvantages with the temporally restricted designs similar to those we found for cost. Whereas health estimates from designs with fewer visits or seasons differed little from those of the reference designs, most health estimates from the business- and rush-hour-only designs were less comparable to the reference health estimates. For these temporally restricted designs, we observed many biased and some unbiased health estimates, with bias in both directions. Temporally restricted monitoring designs that exclude nonbusiness hours have been common choices in previous mobile monitoring campaigns. Our findings indicate that future studies can improve the accuracy of both exposure prediction and health estimation at little additional cost by using a temporally unrestricted monitoring design.

In summary, despite emerging interest in and increasing deployment of mobile monitoring campaigns to assess long-term exposure to air pollution for epidemiological applications, few studies have provided guidance on monitoring designs. The few studies that have considered how much sampling is enough have focused exclusively on the accuracy of air pollution prediction and have not also considered cost, which is a significant determinant of mobile monitoring design given its high expense and flexible design options. By focusing on the trade-off between cost and predictive exposure model performance, we have provided practical guidance to help future mobile monitoring studies optimize their designs to assess the health effects of traffic-related air pollution.

Supplementary Material

ehp15100.s001.acco.pdf (601.7KB, pdf)

Acknowledgments

Research described in this article was conducted under contract to the Health Effects Institute (HEI), an organization jointly funded by the US Environmental Protection Agency (US EPA) (Assistance Award No. CR-83998101) and certain motor vehicle and engine manufacturers. The contents of this article do not necessarily reflect the views of HEI or its sponsors, nor do the contents necessarily reflect the views and policies of the US EPA or motor vehicle and engine manufacturers. This work was also supported by the ACT-AP Study, funded by the National Institute of Environmental Health Sciences and the National Institute on Aging (R01ES026187), as well as the National Research Foundation of Korea (2022R1A2C2009971) and the National Cancer Center of Korea (NCC-2310220, NCC-24H1720).

Conclusions and opinions are those of the individual authors and do not necessarily reflect the policies or views of EHP Publishing or the National Institute of Environmental Health Sciences.

References

  • 1.Austin E, Xiang J, Gould TR, Shirai JH, Yun S, Yost MG, et al. 2021. Distinct ultrafine particle profiles associated with aircraft and roadway traffic. Environ Sci Technol 55(5):2847–2858, PMID: 33544581, 10.1021/acs.est.0c05933. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Boanini C, Mecca D, Pognant F, Bo M, Clerico M. 2021. Integrated mobile laboratory for air pollution assessment: literature review and cc-TrAIRer design. Atmosphere 12(8):1004, 10.3390/atmos12081004. [DOI] [Google Scholar]
  • 3.Larson T, Su J, Baribeau AM, Buzzelli M, Setton E, Brauer M. 2007. A spatial model of urban winter woodsmoke concentrations. Environ Sci Technol 41(7):2429–2436, PMID: 17438796, 10.1021/es0614060. [DOI] [PubMed] [Google Scholar]
  • 4.Loeppky JA, Cagle AS, Sherriff M, Lindsay A, Willis P. 2013. A local initiative for mobile monitoring to measure residential wood smoke concentration and distribution. Air Qual Atmos Health 6(3):641–653, 10.1007/s11869-013-0203-1. [DOI] [Google Scholar]
  • 5.Pirjola L, Lähde T, Niemi JV, Kousa A, Rönkkö T, Karjalainen P, et al. 2012. Spatial and temporal characterization of traffic emissions in urban microenvironments with a mobile laboratory. Atmos Environ 63:156–167, 10.1016/j.atmosenv.2012.09.022. [DOI] [Google Scholar]
  • 6.Wagstaff M, Henderson SB, McLean KE, Brauer M. 2022. Development of methods for citizen scientist mapping of residential woodsmoke in small communities. J Environ Manage 311:114788, PMID: 35255327, 10.1016/j.jenvman.2022.114788. [DOI] [PubMed] [Google Scholar]
  • 7.Kerckhoffs J, Hoek G, Messier KP, Brunekreef B, Meliefste K, Klompmaker JO, et al. 2016. Comparison of ultrafine particle and black carbon concentration predictions from a mobile and short-term stationary land-use regression model. Environ Sci Technol 50(23):12894–12902, PMID: 27809494, 10.1021/acs.est.6b03476. [DOI] [PubMed] [Google Scholar]
  • 8.Messier KP, Chambliss SE, Gani S, Alvarez R, Brauer M, Choi JJ, et al. 2018. Mapping air pollution with google street view cars: efficient approaches with mobile monitoring and land use regression. Environ Sci Technol 52(21):12563–12572, PMID: 30354135, 10.1021/acs.est.8b03395. [DOI] [PubMed] [Google Scholar]
  • 9.Boogaard H, Patton AP, Atkinson RW, Brook JR, Chang HH, Crouse DL, et al. 2022. Long-term exposure to traffic-related air pollution and selected health outcomes: a systematic review and meta-analysis. Environ Int 164:107262, PMID: 35569389, 10.1016/j.envint.2022.107262. [DOI] [PubMed] [Google Scholar]
  • 10.Kerckhoffs J, Hoek G, Portengen L, Brunekreef B, Vermeulen RCH. 2019. Performance of prediction algorithms for modeling outdoor air pollution spatial surfaces. Environ Sci Technol 53(3):1413–1421, PMID: 30609353, 10.1021/acs.est.8b06038. [DOI] [PubMed] [Google Scholar]
  • 11.Weichenthal S, Ryswyk KV, Goldstein A, Bagg S, Shekkarizfard M, Hatzopoulou M. 2016. A land use regression model for ambient ultrafine particles in Montreal, Canada: a comparison of linear regression and a machine learning approach. Environ Res 146:65–72, PMID: 26720396, 10.1016/j.envres.2015.12.016. [DOI] [PubMed] [Google Scholar]
  • 12.Apte JS, Messier KP, Gani S, Brauer M, Kirchstetter TW, Lunden MM, et al. 2017. High-resolution air pollution mapping with google street view cars: exploiting big data. Environ Sci Technol 51(12):6999–7008, PMID: 28578585, 10.1021/acs.est.7b00891. [DOI] [PubMed] [Google Scholar]
  • 13.Blanco MN, Bi J, Austin E, Larson TV, Marshall JD, Sheppard L. 2023. Impact of mobile monitoring network design on air pollution exposure assessment models. Environ Sci Technol 57(1):440–450, PMID: 36508743, 10.1021/acs.est.2c05338. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Kerckhoffs J, Hoek G, Vermeulen R. 2024. Mobile monitoring of air pollutants; performance evaluation of a mixed-model land use regression framework in relation to the number of drive days. Environ Res 240(pt 2):117457, PMID: 37865326, 10.1016/j.envres.2023.117457. [DOI] [PubMed] [Google Scholar]
  • 15.Klompmaker JO, Montagne DR, Meliefste K, Hoek G, Brunekreef B. 2015. Spatial variation of ultrafine particles and black carbon in two cities: results from a short-term measurement campaign. Sci Total Environ 508:266–275, PMID: 25486637, 10.1016/j.scitotenv.2014.11.088. [DOI] [PubMed] [Google Scholar]
  • 16.Wang A, Paul S, deSouza P, Machida Y, Mora S, Duarte F, et al. 2023. Key themes, trends, and drivers of mobile ambient air quality monitoring: a systematic review and meta-analysis. Environ Sci Technol 57(26):9427–9444, PMID: 37343238, 10.1021/acs.est.2c06310. [DOI] [PubMed] [Google Scholar]
  • 17.Gozzi F, Della Ventura G, Marcelli A. 2016. Mobile monitoring of particulate matter: state of art and perspectives. Atmos Pollut Res 7(2):228–234, 10.1016/j.apr.2015.09.007. [DOI] [Google Scholar]
  • 18.Presto AA, Saha PK, Robinson AL. 2021. Past, present, and future of ultrafine particle exposures in North America. Atmos Environ X 10:100109, 10.1016/j.aeaoa.2021.100109. [DOI] [Google Scholar]
  • 19.Vallabani NVS, Gruzieva O, Elihn K, Juárez-Facio AT, Steimer SS, Kuhn J, et al. 2023. Toxicity and health effects of ultrafine particles: towards an understanding of the relative impacts of different transport modes. Environ Res 231(pt 2):116186, PMID: 37224945, 10.1016/j.envres.2023.116186. [DOI] [PubMed] [Google Scholar]
  • 20.Blanco MN, Gassett A, Gould T, Doubleday A, Slager DL, Austin E, et al. 2022. Characterization of annual average traffic-related air pollution concentrations in the greater Seattle area from a year-long mobile monitoring campaign. Environ Sci Technol 56(16):11460–11472, PMID: 35917479, 10.1021/acs.est.2c01077. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Doubleday A, Blanco MN, Austin E, Marshall JD, Larson TV, Sheppard L. 2023. Characterizing ultrafine particle mobile monitoring data for epidemiology. Environ Sci Technol 57(26):9538–9547, PMID: 37326603, 10.1021/acs.est.3c00800. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.TSI Incorporated. P-Trak Ultra Particle Counter Model 8525. https://tsi.com/getmedia/f30434e0-ccea-4eb8-a15a-8d6bf42fbd0e/PTrakSpec2980197?ext=.pdf [accessed 11 April 2015].
  • 23.Sampson PD, Richards M, Szpiro AA, Bergen S, Sheppard L, Larson TV, et al. 2013. A regionalized national universal kriging model using Partial Least Squares regression for estimating annual PM2.5 concentrations in epidemiology. Atmos Environ (1994) 75:383–392, PMID: 24015108, 10.1016/j.atmosenv.2013.04.015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Hatzopoulou M, Valois MF, Levy I, Mihele C, Lu G, Bagg S, et al. 2017. Robustness of land-use regression models developed from mobile air pollutant measurements. Environ Sci Technol 51(7):3938–3947, PMID: 28241115, 10.1021/acs.est.7b00366. [DOI] [PubMed] [Google Scholar]
  • 25.Levy I, Mihele C, Lu G, Narayan J, Hilker N, Brook JR. 2014. Elucidating multipollutant exposure across a complex metropolitan area by systematic deployment of a mobile laboratory. Atmos Chem Phys 14(14):7173–7193, 10.5194/acp-14-7173-2014. [DOI] [Google Scholar]
  • 26.Kim SY, Blanco MN, Bi J, Larson TV, Sheppard L. 2023. Exposure assessment for air pollution epidemiology: a scoping review of emerging monitoring platforms and designs. Environ Res 223:115451, PMID: 36764437, 10.1016/j.envres.2023.115451. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Blanco MN, Doubleday A, Austin E, Marshall JD, Seto E, Larson TV, et al. 2023. Design and evaluation of short-term monitoring campaigns for long-term air pollution exposure assessment. J Expo Sci Environ Epidemiol 33(3):465–473, PMID: 36045136, 10.1038/s41370-022-00470-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Morawska L, Thai PK, Liu X, Asumadu-Sakyi A, Ayoko G, Bartonova A, et al. 2018. Applications of low-cost sensing technologies for air quality monitoring and exposure assessment: how far have they gone? Environ Int 116:286–299, PMID: 29704807, 10.1016/j.envint.2018.04.018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Farrell W, Weichenthal S, Goldberg M, Valois MF, Shekarrizfard M, Hatzopoulou M. 2016. Near roadway air pollution across a spatially extensive road and cycling network. Environ Pollut 212:498–507, PMID: 26967536, 10.1016/j.envpol.2016.02.041. [DOI] [PubMed] [Google Scholar]
  • 30.Hankey S, Marshall JD. 2015. Land use regression models of on-road particulate air pollution (particle number, black carbon, PM2.5, particle size) using mobile monitoring. Environ Sci Technol 49(15):9194–9202, PMID: 26134458, 10.1021/acs.est.5b01209. [DOI] [PubMed] [Google Scholar]
  • 31.Mueller MD, Hasenfratz D, Saukh O, Fierz M, Hueglin C. 2016. Statistical modelling of particle number concentration in Zurich at high spatio-temporal resolution utilizing data from a mobile sensor network. Atmos Environ 126:171–181, 10.1016/j.atmosenv.2015.11.033. [DOI] [Google Scholar]
  • 32.Blanco MN, Szpiro A, Crane P, Sheppard L. 2024. Impact of Roadside Mobile Monitoring Design on Epidemiologic Inference – A Case Study of Ultrafine Particles and Cognitive Function. ChemRxiv, 10.26434/chemrxiv-2024-np70t. [DOI] [Google Scholar]
