Abstract
This perspective article is a product of a workshop of experts convened by the Institute for the Advancement of Food and Nutrition Sciences (IAFNS), a nonprofit organization that brings together scientists from government, academia, and industry to catalyze science relevant to food and nutrition for public benefit. An expert group was convened in March 2022 to discuss the current issues surrounding cognitive task selection in nutrition research, with a focus on solutions toward informing dietary guidance for cognitive health, to address a gap identified in the 2020 United States Dietary Guidelines Advisory Committee report, specifically the “considerable variation in testing methods used, [and] inconsistent validity and reliability of cognitive testing methods.” To address this issue, we first conducted an umbrella review of the relevant reviews already published; these indicate agreement on some of the issues that affect heterogeneity in task selection, and on many of the fundamental principles underlying the selection of cognitive outcome measures. However, resolving the points of disagreement is critical to ensuring a meaningful impact on the issue of heterogeneity in task selection; these issues hamper the evaluation of existing data for informing dietary guidance. This summary of the literature is therefore followed by the expert group’s perspective in the form of a discussion of potential solutions to these challenges, with the aim of building on the work of previous reviews in the area and advancing dietary guidance for cognitive health.
Registered on PROSPERO: CRD42022348106. Data described in the manuscript, code book, and analytic code will be made publicly and freely available without restriction at doi.org/10.17605/OSF.IO/XRZCK
Keywords: cognitive test selection, nutrition, umbrella review, methodological heterogeneity, dietary guidance
Statement of Significance.
Despite several high-quality reviews in this field over the last 2 decades, there has been little in the way of substantive change in the methods being used to conduct studies, hampering the harmonization of the evidence and thus, its utility for informing dietary guidance. The present article comprehensively updates the field by first providing an umbrella review of the published reviews, followed by the IAFNS expert group’s perspective on how to move the field forward by addressing the challenges and areas of disagreement in the existing reviews.
Introduction
There has been much interest over the last several decades in the effects of diet and nutrition on cognitive outcomes. Despite much promising research in several areas of nutrition, firm conclusions applicable to the United States dietary guidance are still limited. The Dietary Guidelines Advisory Committee’s (DGAC) 2020 scientific report states that “Limited evidence suggests that dietary patterns containing vegetables, fruits, unsaturated vegetable oils and/or nuts, legumes, and fish or seafood consumed during adulthood are associated with a lower risk of age-related cognitive impairment and/or dementia.” (p. 30) [1]. The report specifically highlights “considerable variation in testing methods used, [and] inconsistent validity and reliability of cognitive testing methods” (p. 31) as a major limitation preventing the drawing of strong conclusions. As the Dietary Guidelines for Americans (DGA) informs federal nutrition programs and policies, the inclusion of recommendations that impact cognitive health can have significant implications for health and health care costs for the United States population. This article provides the perspectives of an expert group convened by the Institute for the Advancement of Food and Nutrition Sciences (IAFNS) in March 2022 to discuss the issues in cognitive task selection in nutrition research, with a focus on solutions toward informing dietary guidance for cognitive health.
Cognition refers to a collection of mental processes, including attention, learning, memory, and executive functions such as planning and reasoning. Optimizing nutrition provides a non-invasive approach to correct and/or maintain cognitive functions known to be affected by aging, such as memory and processing speed [2]. There is existing evidence that nutrition is associated with changes in cognition from the prenatal period across the lifespan into old age [3]. Age-associated cognitive decline has been the focus of much nutrition research: for example, a recent review of the literature found that increasing ω-3 FA intake (either through diet or supplementation) benefits cognition and can reduce inflammatory markers believed to have a role in cognitive decline [4]. However, the results of studies in healthy populations are more mixed, with many studies reporting null findings [5]. Polyphenols are another area of nutrition where improvements have been found in people with cognitive impairment [6]. In whole-diet research, the Mediterranean dietary pattern has been shown in a randomized trial to improve cognition in a sample of people aged 60–80 y [7]. There has been considerably less research on healthy adults, particularly in middle adulthood, despite their comprising most of the working population, who may well benefit from nutritional intervention.
Cognitive functions can be objectively assessed by a wide variety of neuropsychological tests, including tests of reaction time, recall, and other skills mapped to often overlapping cognitive abilities. However, the results of cognitive trials are often difficult to interpret because cognition itself is so complex. This problem is intensified by wide heterogeneity in cognitive task selection in nutrition trials and a lack of standardization in reporting [8]. Developing a better understanding of which nutritional interventions affect which cognitive functions—and in what populations, under what conditions—is a key task for researchers. The lack of consensus over the cognitive domains themselves, and over the appropriateness of different cognitive tests, poses a major problem. Given the disagreement over taxonomy and the mechanisms underlying many of the available tests, it is almost impossible to determine whether the failure to reproduce an effect in a given study is due to the inefficacy of the nutritional intervention, or simply a function of which facet of cognition is being measured by a specific test or test battery. Further, even when the same area of cognitive functioning is targeted, different measures of “the same thing” often measure different things. For example, 7 well-known measures purporting to assess “working memory” were found to correlate only moderately [9].
In terms of health claims related to cognitive benefits, various government authorities take different approaches to data evaluation and substantiation requirements. Here the approaches of the United States FDA, Center for Devices and Radiological Health (FDA-CDRH), the European Food Safety Authority (EFSA), and Health Canada are summarized as presented during the IAFNS dialog. At present, the FDA Center for Food Safety and Applied Nutrition has approved one qualified health claim relevant to cognitive function, specific to phosphatidylserine and cognitive function and dementia [10]. Claims for this dietary component must be “qualified” with specific language regarding limitations because of the lack of significant scientific agreement that evidence supports the indicated effect. Structure/function claims can describe the role of a nutrient or dietary ingredient and its effect on the normal structure or function of the human body and must be “truthful and not misleading” [11]. For any of these claim types, there is no specific guidance related to cognitive performance measure selection or study methods, but rather, each new relationship claim would be evaluated on its own merit. The FDA-CDRH oversees the approval of medical devices that are used to measure cognitive performance. In this area, FDA-CDRH emphasized test-retest reliability, which is a persistent challenge in cognitive performance testing. The FDA-CDRH has supported the use of test batteries with available normative data, but primarily to verify that no cognitive change has occurred (for example, during drug administration) rather than where small effects are expected, as in nutrition science. Other authorities are more focused on principles than the specific test used when it comes to approving health claims. For example, the EFSA does not have a list of accepted cognitive tests but instead looks at previous research validating the measure when considering the evidence for submitted health claims [12]. They do not, however, accept bespoke measures, which are common practice among psychological scientists. Rather, measures must be validated and preferably normed by other studies to be considered as evidence for a health claim. Health Canada emphasized the need to consider subdomains when making health claims; for example, to make a general health claim about cognitive health, improvements would be needed across several domains of cognitive function (for example, memory, attention), or the claim would need to be revised for specificity. The DGAC emphasized the need for more scientifically rigorous research in the area in general, with randomized controlled trials much in need. They also stated that measures need to be objective; the research they had reviewed for the 2020 dietary guidelines report [1] often included self-reports by parents, in lieu of objective cognitive tests. This, combined with a lack of harmonization and validity in cognitive tests used, meant that they were not able to generate firm recommendations. As the 2025 DGAC committee begins deliberations in the coming years, the same challenges will likely be identified.
The challenge of heterogeneity in this field of research is not a new one. However, despite several high-quality reviews over the last 2 decades (for example, a 2014 review by International Life Sciences Institute Europe) [13], there has been little in the way of substantive change in the methods being used to conduct studies, and hence in the utility of the studies for supporting dietary guidance. As the number of reviews increases, umbrella reviews are becoming increasingly important to the advancement of various fields [14], to avoid the continual replication of review topics without moving the area forward. The present article aims to advance this area of literature by providing the perspective of the IAFNS expert group. As the basis of this perspective, a review of reviews—or umbrella review—of the articles that have previously addressed the issue of cognitive task selection in nutrition research in healthy adolescents, adults, and older adults, was presented. Healthy populations were selected because although much of the research in this area has been undertaken in older adults with some progression of cognitive decline, the science indicates that nutritional interventions may be more effective as a preventative measure before the onset of decline [15]. Additionally, the COcoa Supplement and Multivitamin Outcomes Study for the Mind (COSMOS-Mind) trial [16] recently found the benefits of multivitamin-mineral supplementation for cognition in healthy older adults, demonstrating that preventative measures can be taken at any point in the lifespan. This means that understanding the impact of nutritional interventions in healthy people of all ages, before the onset of any cognitive decline, is of the utmost importance to help elucidate which nutritional interventions are likely to provide the most protection. Conducting this initial umbrella review will allow us to identify areas of agreement, disagreement, and progress, before moving on to suggest solutions for the development of authoritative dietary guidance.
Methods
Selection criteria
The present review was conducted using the PRISMA [17] process for systematic reviews. Study selection criteria were defined a priori (with one post hoc change; see below) and the protocol was pre-registered on PROSPERO before searches were completed (CRD42022348106) and is also registered on the Open Science Framework (DOI: 10.17605/OSF.IO/V82QX). Only articles written or available in English and published in full in peer-reviewed journals were included; abstracts and conference reports were excluded. Included articles were required to be peer-reviewed publications that focused specifically on all of the following elements: human nutrition, human cognition, and cognitive test selection. There were no restrictions on the types of reviews that were eligible to be included; all review types (for example, perspective articles, systematic reviews, expert reviews, and guidance documents) were eligible for inclusion in the present review. The population of interest was healthy adolescents, adults, and older adults, as defined by the DGA [18] age ranges (9–18, 19–59, and >59 y); reviews focusing only on children or people with cognitive impairments were excluded. Older adults were initially excluded, but a post hoc decision was made to include them based on a lack of scientific reason for exclusion, given that the scope of the present review is to enhance dietary guidance for healthy populations.
Search strategy
A systematic literature search of PubMed/Medline and Scopus was conducted to find reviews that were published up to 22 July, 2022.
Medical Subject Headings (MeSH) terms were used to search PubMed/Medline, and expanded search terms were generated for Scopus based on pilot searches. Sets of search and MeSH terms included terms related to 1) publication type (for example, “review,” “perspective”), 2) nutrition (for example, “nutrition,” “diet”), 3) cognition (for example, “cognition,” “attention”), 4) age (for example, “adults,” “adolescents”) in Scopus only, and 5) test selection (for example, “methodology,” “test selection”). See Supplemental Information 1 for replicable searches for each database. Terms related to age were not included in the PubMed/Medline search, as the MeSH term age ranges do not match the age ranges specified by our inclusion criteria. The MeSH term “Adults” refers to people aged 18–44 y, as opposed to the DGA range of 19–59 y, upon which our inclusion criterion was based. We therefore screened Medline articles manually for the population rather than limiting the search using MeSH terms.
Selection of reviews
The review selection was completed using the PRISMA process [17]. Two authors (ARR and HY) completed the initial screening, using titles, abstracts, and keywords to exclude citations that were clearly irrelevant. If there was any doubt about inclusion, the article was included in the next stage. After the initial screening, all remaining full-text articles were screened independently by 2 authors (ARR and HY) to assess their eligibility for inclusion in the review. The authors of the present review were not blinded to the authors, journals, results, or conclusions of the included articles. Any disagreement was resolved by discussion.
Synthesis
Because the scope of this article was a review of different types of reviews (for example, perspective articles, guidance documents, systematic reviews) rather than combining meta-analyses, a narrative synthesis of the results was performed with no quantitative analysis.
Results
Results of the search
The results of the search are represented in the PRISMA [17] flowchart in Figure 1. In total, 6610 abstracts were retrieved from electronic database searches. After duplicates were removed, 6356 individual abstracts remained, of which 6334 were clearly irrelevant. The remaining articles were sought as a full-text version; 21 out of 22 were available as full-text articles in English, but after full-text review, 9 were excluded for being outside the scope of this review (see Figure 1 for the breakdown of reasons for exclusion). Based on the screening process, 12 reviews met the criteria for inclusion.
FIGURE 1.
PRISMA flow diagram.
Included reviews
Table 1 [19–30] shows the characteristics of the included reviews. All included reviews were published within the last 20 y. The included articles were a mixture of different types of reviews. Six of the included reviews, termed “guidance documents” in the present review, were geared toward providing guiding principles for cognitive test selection in nutrition science [13,19–23]. One of the included reviews was systematic [24]; one was a perspective article [8]; and one was a short “primer” for non-experts [25]. Three of the reviews are included together in the table because they are part of a dialog that was published in one journal [26–28]. Most focused on a variety of domains, although Benton et al. [19] focused on memory and Kallus et al. [20] focused on attention. All reviews had some focus on prospective harmonization, that is, reducing the heterogeneity in the task selection in future studies so that data can be more easily combined and compared across studies. Some also discussed methods for retrospective harmonization, that is, ways to retrospectively compare data from different studies, for example, in reviews and meta-analyses. All reviews defined the domains considered, although the chosen taxonomy varied. Most relied on an expert group or the primary author’s opinion on the criteria to determine the validity of tests and domains. Some reviews also included a systematic or non-systematic review of specific tasks used in relation to specific nutrients. The level and type of analysis varied: for example, one review focused on determining a useful guiding cognitive taxonomy and the principles for mapping this taxonomy to commonly used tests [8], whereas the PASSCLAIM (Process for the Assessment of Scientific Support for Claims on Foods) report further discussed how to map tasks and domains to potential health claims [21], which was also an aim of the article by Martini et al. [23].
TABLE 1.
Details of included reviews, including reference, type of review, objective, domains considered, population, review method, and outcomes. Presented with guidance documents first, followed by systematic reviews, non-systematic reviews, and finally opinions and editorials
Reference | Type of review | Objective | Domains | Population | Methods | Outcomes |
---|---|---|---|---|---|---|
De Jager et al. (2014) [13] | Guidance document | Provide guidelines for those planning to study the effects of nutrition on cognition; identify/apply criteria for validation of tests | Memory (including verbal, visual, spatial, and verbal working memory), selective and sustained attention, executive function, information processing speed, and global cognitive function | Adults | Expert group; review of domain/paradigm sensitivity to polyphenol, B vitamin, and n3FA intervention trials | Criteria should include: everyday functional or behavioral relevance; neural mechanisms; appropriate target populations; the paradigm’s utility, validity, and reliability; and established sensitivity to nutraceuticals (acute or long-term effects). The domain with the most evidence for an effect of nutrition is verbal memory. |
Benton et al. (2005) [19] | Guidance document | To make recommendations for the assessment of memory | Memory (details the different types using the basic “textbook” model); also reviews changes in different aspects of memory in aging and dementia | Adults | Expert opinion | Gives examples of commonly used tests in mainstream psychology research (for example, Wechsler, MMSE, word association/paired associates); considers reliability, validity, and normative data. |
Kallus et al. (2005) [20] | Guidance document/review | To give an overview of changes in different facets of attention and psychomotor functions beyond 50 y, as well as assessment methods for attention and psychomotor performance | Attention span (aka working memory), selective attention, vigilance, focused attention, shifting attention, and divided attention; evaluates mediators/moderators of task performance that might be dependent on age (for example, high performance variance) and age-related confounding variables such as health status | Adults over 50 y | Expert opinion/review of attentional aging studies and effects of specific nutrients | Gives examples of commonly used tests in mainstream psychology research, for example, Stroop, Trail Making, Digit Symbol Substitution Test, Vigilance Test. |
Westenhoefer et al. (2004) [21] | Review and guidance document | Review existing methodologies that may be used to substantiate and validate claims of desirable effects of foods on mental state and performance | 1) mood; 2) arousal, activation, vigilance, attention, and sleep; 3) motivation and effort; 4) perception; 5) memory; and 6) intelligence. Potential claims related to the area of mental state and performance are listed; scientific constructs and concepts related to this field are defined and methods of assessment are reviewed | Adults | Non-systematic review and expert opinion | Validated methodologies do exist to generate scientific evidence in this area. Factors that should be considered include language and culture, size of the effect, amount of active substance, and subpopulations. |
Wesnes (2010) [22] | Guidance document | Present a set of criteria that any test or test battery should fulfill before being considered “fit for the purpose” | Attention, working memory, episodic memory, motor control, and aspects of executive function | All | Expert opinion | Utility, reliability, sensitivity, and validity are the independent minimum requirements; clinical relevance, everyday behavioral relevance, and normative databases are also highly desirable. |
Martini et al. (2018) [23] | Guidance document | To improve the quality of applications provided by applicants to the European Food Safety Authority, through an appropriate choice of outcome variables and methods of measurement | Global cognitive function, attention and sustained attention, alertness, memory, problem solving, abstract reasoning, intelligence, learning, language, implicit memory | All | Expert group review | To successfully substantiate health claims through EFSA, it is necessary to select appropriate outcome variables and methods of measurement. Gives guidance on selection of appropriate measures, including a critical evaluation of individual tests based on the review. |
Macready et al. (2010) [24] | Review | Review the cognitive methods used in existing randomized controlled studies that have explored the effects of nutrition on human cognition, with a view to identifying domains and individual tasks within those domains that have shown greatest sensitivity to chronic supplementation | Executive function (focused attention, sustained attention, inhibition, switching/shifting, updating, decision-making, planning, visual search, and verbal fluency), memory (working, episodic, semantic, procedural, implicit, prospective, short-term, long-term, recognition, verbal, visuospatial, and numerical), motor (psychomotor processing speed and motor function), perception, IQ (crystallized intelligence and fluid intelligence) | Adults | Systematic review | “Scattergun” approach to task selection limits the ability to make reliable comparisons across studies. Some important aspects of cognition are under-represented, for example, prospective memory. Executive function and spatial working memory may be most sensitive. Task demand is important. Need to 1) pay closer attention to animal studies and to previous human work when identifying appropriate cognitive tests, 2) consider whether tasks are appropriate to the target population, 3) take greater care to avoid statistical artifacts likely to bring about null findings, such as lack of power and type I errors, and 4) include more than a single task within a domain (for example, 2 executive function tasks) to determine whether a null effect for a particular nutrient is a real finding or reflects a lack of task sensitivity to the nutrient. |
Pase and Stough (2014) [8] | Perspective | To review and describe current theories of cognitive ability and explain, with working examples, how such theories can guide the handling of cognitive outcomes in nutrition research | Carroll’s cognitive model [29] and the Cattell–Horn model [30] are the main focus | All | Non-systematic review; application of the Cattell–Horn–Carroll (CHC) model to group cognitive tests from a collection of nutrition research clinical trials | Agreement on a taxonomy is needed; Carroll’s cognitive model [29] and the Cattell–Horn model [30] can be used as a guide; avoid combining cognitive test data based on arbitrary rules. |
Schmitt et al. (2005) [25] | Primer for non-experts | Provide those individuals who lack a background in experimental cognitive science with a basic overview of the main concepts, issues, and pitfalls of human cognitive research | Executive functions (reasoning, planning, concept formation, evaluation, and strategic thinking), memory functions (short-term and long-term encoding, storage and retrieval functions, and working memory), attention functions (selective, divided, and sustained), perceptual functions, psychomotor functions, and language skills | All | Expert opinion | Discusses some general principles for task selection, including basing decisions on prior research, measuring confounding factors such as arousal or mood, considering speed/accuracy trade-offs, length of the test battery (fatigue), test-retest variability, population appropriateness and homogeneity, and ecological validity. |
Dangour and Allen (2013) [26]; Pase and Stough (2013) [27]; Kennedy (2013) [28] | Dialog | In the original editorial, Dangour and Allen [26] aim to review the evidence for n3FA affecting brain function; this is followed by 2 replies via letters to the editor | Pase and Stough [27] suggested taxonomy: language, reasoning, memory and learning, visual perception, auditory perception, idea production, cognitive speed, knowledge and achievement, and miscellaneous abilities | All | Expert opinion | In the original editorial, Dangour and Allen [26] suggest that a lack of preregistration of primary end points and heterogeneity of tests used in cognitive intervention trials affect the reliability of outcomes. Pase and Stough [27] aimed to identify an agreeable cognitive taxonomy and suggested Carroll’s “Three Stratum Theory” [29]. Kennedy [28] disagreed on the basis that 1) it would be hard to map task outcomes onto Carroll’s factor-analyzed domains, and 2) cognitive taxonomies, including Carroll’s, are fluid and evolving. |
In the population column, the term “All” has been used to classify reviews that have a more general focus of cognition and are therefore not limited to a specific population. n3FA, ω-3 FAs; EFSA, European Food Safety Authority; IQ, Intelligence Quotient; MMSE, Mini-Mental State Examination.
Overall findings of the included reviews
Areas of agreement
Considering that these reviews span 2 decades of cognitive nutrition research, there were many areas where all reviews that covered a particular subject matter were in agreement (Table 2). All the included reviews agreed that validated tests that can be used to examine the effects of nutrition on cognition do exist, and all reviews that mentioned specific tests gave similar well-known examples of good practice. They also agreed that a paradigm’s utility, validity, and reliability are the minimal requirements for deciding to use a particular test (see Wesnes [22] for a particularly detailed exploration of these concepts). In addition, all reviews agreed that test sensitivity to diet or nutraceuticals (acute or long-term effects) is critical, and it was suggested in several reviews that gaining an understanding of which tests are likely to be sensitive to changes caused by nutritional interventions (for example, from previous literature or animal research) should be a paramount consideration in test selection, over and above the availability and ease of administration. All reviews agreed that tasks should be tailored to the target population to avoid ceiling and floor effects and to ensure that the task difficulty is appropriate for the study population to avoid loss of motivation; one particular example that was mentioned as problematic was the use of screening tests such as the Mini-Mental State Examination (MMSE) in healthy populations, where the tests may not be useful as they do not have sufficient sensitivity to change.
TABLE 2.
Areas of agreement and disagreement among the included reviews
Areas of agreement | Areas of disagreement |
---|---|
All agree that validated tests that can be used to examine the effects of nutrition do exist. | Discrepant findings regarding the most sensitive domain. |
All give similar “well-known” tasks as examples. | Disagreement over whether it is possible to establish a “guiding taxonomy,” and the usefulness of this approach. |
All agree that a paradigm’s utility, validity, and reliability are the minimal requirements. | Different taxonomies were used for the purpose of different reviews. |
All agree that test sensitivity to diet/nutraceuticals (acute or long-term effects) (for example, from previous literature/animal research) is critical and should be a paramount consideration in test selection. | No agreement over how cognitive tests should be combined. |
All agree that tasks should be tailored to the target population (that is, avoid ceiling/floor effects, for example MMSE). | No agreement on the number of cognitive tests to be used: due to overlap between domains, some suggest that a comprehensive battery be used; others emphasize the risk of false positives. |
| No agreement on the usefulness of “global cognition” measures. |
Other factors | |
Specific issues discussed in some papers are ignored by others (for example, neural mechanisms, biological plausibility, ecological validity, normative data, whether animal research is a useful basis, power, under-represented domains, speed/accuracy trade-off, simultaneous measurement of mood/arousal/motivation, test-retest variability, practice, preregistration, other individual differences such as culture/language, and conceptual confusion). | Only one factor seemed to change in the literature over time: computerized testing was mentioned as a primary way forward in earlier reviews, but this was not mentioned in later reviews. |
MMSE, Mini-Mental State Examination.
Areas of disagreement
Among the reviews, there were discrepant findings regarding the domains most likely to be sensitive to nutritional intervention; for example, verbal memory was suggested by de Jager et al. [13], whereas executive functioning and spatial working memory were suggested by Macready et al. [24]. There was disagreement over whether it is possible to establish a “guiding taxonomy” to help with task selection, and over the usefulness of this approach more generally [8,26–28]; different taxonomies were even used for the purposes of the various review articles [8,13,19,24]. Pase and Stough [8] put forward a strong case that defining a guiding taxonomy would allow researchers to easily combine test outcomes into broader domains, allowing a more meaningful comparison between studies, essentially providing a “universal cognitive language” (p. 236). However, several robust objections to the taxonomy chosen had been brought forward in the course of the dialog the previous year [28]. There was also no agreement among the reviews more generally over whether and how cognitive tests should be combined, although several reviews agreed that it was of primary importance to combine tests in a theoretically based manner, if they were to be combined at all. There was also some argument about the number of cognitive tests that should be used in a given study. Because of the overlap between domains, some suggested that a comprehensive battery be used [19], whereas other reviews emphasized the risk of type 1 error and the increased task demands of larger test batteries [24,25]. However, despite concerns over type 1 error, Macready et al. [24] recommended that researchers use more than a single task to measure each domain, to help determine whether a null effect for a given nutritional intervention is real, or if it simply reflects insufficient task sensitivity. There was no agreement on the potential usefulness of “global cognition” measures. Specific issues discussed in some articles were not covered by others, including neural mechanisms, biological plausibility, ecological validity, normative data, whether animal research is a useful basis, power, under-represented domains, speed-accuracy trade-off, simultaneous measurement of mood/arousal/motivation, test-retest variability, practice effects, preregistration, conceptual confusion, and other individual differences such as culture and language.
Changes over time
Only one factor seemed to change in the literature over time: computerized testing. In the earliest of the included reviews, computerized testing was held up as a primary way forward for improving cognitive testing in nutrition research. However, this is not mentioned or focused on in the same way in later reviews, perhaps indicating that computerized cognitive testing was becoming the norm. Earlier articles, published before 2010 [22,25], extol the advantages of computerized testing (“Computerized tests have the advantage of a standardized presentation and accurate and detailed response capture” [25] p. 460, and “The administrators and the participants rated the computerized tests as more acceptable, and only 91% of the sample could complete the pencil-and-paper tasks, whereas 100% were able to complete the computerized tests.” [22] p. 527). In contrast, in a later article from 2014, aside from mentioning specific computerized tests by name, de Jager et al. [13] mentioned only in the past tense “…the move from traditional and established paper-and-pencil tests to automated computerized test batteries that offer customized tests tailored to the user” (p. 163), and by 2018, Martini et al. [23] did not mention computerized testing at all. This was the only clearly identifiable change that occurred through the 2 decades of reviews in the field.
Discussion
This article has so far focused on providing an overview of the previous reviews of cognitive task selection in nutrition science. Most of the previous reviews in the area focused on prospective harmonization, which is perhaps unsurprising because reducing the heterogeneity in the task selection still appears to be one of the biggest issues in cognitive nutrition science. Overall, the reviews in this area, undertaken over the last 2 decades, agree on some of the issues that affect the heterogeneity in task selection, and on many of the fundamental principles that should be followed to select appropriate cognitive measures, including reliability, validity, sensitivity, utility, appropriateness for the study population, and comparability to existing supporting literature. However, the previous reviews disagree on many points that would need to be agreed upon to have a meaningful impact on the heterogeneity issue in the area, such as deciding on a taxonomy (or even agreeing whether a guiding taxonomy would be helpful), and whether and how tasks should be combined into composite scores. Here, the key challenges in the area are outlined and some possible solutions are offered, based on the outcome of the discussions among the IAFNS expert group and supported by published literature, to help existing and future cognitive nutrition research inform dietary guidance.
Key challenges
There are several key challenges that are the most problematic when it comes to evaluating evidence to inform United States dietary guidance; these are summarized in Table 3 and expanded upon below alongside the potential solutions outlined by the expert group.
TABLE 3.
An overview of the challenges and potential solutions in this area.
Challenges | Potential solutions |
---|---|
Existing evidence synthesis | Use retrospective harmonization approaches (for example, standardizing similar measures to Z-scores); examine individual test items rather than instrument labels to judge whether constructs match. |
Biological plausibility and clinical relevance | Select tests with a plausible diet-brain-behavior link; draw mechanistic evidence from animal studies, previous human research, and neural measures; consider that mechanisms, and therefore outcomes, may differ for adolescents, adults, and older adults. |
Understanding test sensitivity, particularly to nutrition | Use tests with good test-retest reliability and demonstrated sensitivity to nutritional intervention in the target population; hold the nutritional intervention constant while systematically varying cognitive measures; ensure adequate power and clinically meaningful effects. |
Understanding what is “normal” and for whom | Consider normative databases where appropriate; report how normative data were collected and analyzed; recognize that matched-group and repeated-measures designs can be informative without normed tests. |
Do we need a guiding taxonomy? | Work toward an agreed taxonomy that is regularly tested and updated; provide a theoretical rationale for test choice; address under-represented domains such as procedural memory. |
Composite scores | Combine tests in a theoretically based manner consistent with previous trials; report individual test scores alongside composites. |
Issues in reporting | Adopt open science practices (preregistration, a priori defined primary outcomes, intent-to-treat analysis); develop reporting guidelines for cognitive nutrition research and for cognitive test parameters; share de-identified data in repositories. |
Prospective harmonization | Agree on a set of recommended tests that can be pooled across studies; establish written guidance for creating and reporting new test batteries; plan homogeneous approaches for newly researched domains. |
Test availability | Make test batteries freely available where possible, or construct an advisory database describing tests, their availability, and associated costs. |
Translation of test results into health claims/substantiated product benefits | Use objective measures; match the breadth or specificity of the claim to the evidence provided. |
Applicability of evidence to the dietary guidelines committee questions | Encourage research on dietary patterns as well as components; advocate for inclusion of well-supported individual nutrients and of under-represented life stages in committee questions. |
Existing evidence synthesis
An important question in this area is how to combine and evaluate the existing data in a meaningful way, given the prevailing heterogeneity issues. When conducting a systematic review or meta-analysis, combining data from heterogeneous tasks across different studies is very challenging, and in many cases may not even be practical as it dramatically decreases the reliability of the review itself.
Retrospective harmonization approaches may be one way to enable better use of more of the previous research to inform current dietary guidance. These approaches involve placing disparate measures on a common scale; for example, by standardizing scores for similar, but not identical, measures into Z-scores, allowing them to be aggregated. A recent example of this comes from the Environmental Influences on Child Health Outcomes (ECHO) study [31], which combined 9 different (but highly correlated) measures of maternal depression to allow for direct comparison across measures and time points. How well this type of approach might work for cognition research may well depend on the domain under consideration and the sample for which the test was validated; although mood seems well suited to the process, it may be more difficult to aggregate executive function scores because executive function is a relatively poorly defined concept. One way around this issue may be, instead of taking the labeling of instruments as measuring “executive functioning” at face value, to look at the individual items to see whether they are measuring the same construct. Being able to successfully harmonize data across studies in this way would be invaluable to advancing dietary guidance, given that one reason much of the existing research cannot currently be used to inform conclusions is task heterogeneity.
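As a minimal sketch of the Z-score approach (using invented study names, test names, and scores rather than any real dataset, and not the ECHO data themselves), the following snippet standardizes scores from two hypothetical studies that used different memory measures so that they can be pooled:

```python
# Minimal sketch of retrospective harmonization by Z-scoring (illustrative only).
# Study names, test names, and scores are hypothetical.
import pandas as pd

# Each study used a different (but conceptually related) memory test.
study_a = pd.DataFrame({"study": "A", "test": "word_list_recall", "score": [12, 15, 9, 14]})
study_b = pd.DataFrame({"study": "B", "test": "paired_associates", "score": [28, 31, 22, 35]})

def standardize(df: pd.DataFrame) -> pd.DataFrame:
    """Convert raw scores to Z-scores within each study/test."""
    out = df.copy()
    out["z_score"] = (out["score"] - out["score"].mean()) / out["score"].std(ddof=1)
    return out

# After standardization the scores share a common scale and can be pooled,
# provided the tests plausibly index the same construct.
pooled = pd.concat([standardize(study_a), standardize(study_b)], ignore_index=True)
print(pooled)
```

In practice, harmonization of this kind would also need to account for differences in samples, test properties, and administration conditions, not just the measurement scale.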
Biological plausibility and clinical relevance
A lack of biological plausibility may affect the rating of directness of evidence; therefore, cognitive assessment tools should be selected based on their ability to demonstrate biological plausibility, meaning that there should be a plausible relationship between diet (or its components), brain function, and the behavior under consideration. Evidence for biological plausibility could come, for example, from animal studies, previous human research, or from measures of neural activation in humans while performing cognitive tasks. However, although electrophysiological measurements (for example, electroencephalography [EEG]) and neuroimaging (for example, functional magnetic resonance imaging [fMRI]) may provide insight into the biological plausibility of nutrition effects on cognition, the relationship between neuronal activity, brain topography, and cognitive functioning is at present relatively poorly understood, so these methods require cautious interpretation [25]. Nevertheless, a thorough understanding of the brain regions involved in cognitive task performance is key to relating findings from animal studies to human trials [24]; therefore, gathering these data is likely to be beneficial to the advancement of dietary guidance. In addition, it is likely that the mechanisms of action of nutritional interventions on cognitive functioning differ among healthy adolescents, adults, and older adults. Nutritional interventions in adolescence may be efficacious by contributing to ongoing neurodevelopment [32], whereas efficacy of interventions later in life could be due to effects on adult neurogenesis, helping to preserve cognitive function during aging [33]. These differences in mechanisms of action across the lifespan will inevitably lead to variability in outcomes for different populations: it is important that we understand which nutritional interventions have biologically plausible mechanisms for which populations, across the different cognitive domains. For example, because glucose metabolism is altered with aging, there has been a consistent finding that although healthy elderly individuals are susceptible to cognitive enhancement by glucose on some cognitive tests, young adults do not demonstrate the same effect [34]. Therefore, even among healthy individuals, the age of the population should be carefully considered when designing studies to provide mechanistic evidence.
Understanding test sensitivity, particularly to nutrition
It is also noted that the effects in nutrition studies are likely to be very small; so small, in fact, that the error on most cognitive measures is bigger than the change we would expect to see in short-term nutrition trials. This point echoes the test sensitivity issue that recurred throughout the included reviews; therefore, tests should only be included in nutrition trials when they have good test-retest reliability, and it can be reasonably assumed that they will be sufficiently sensitive to detect changes caused by a nutritional intervention in the population that is being tested. Using tests that have not demonstrated appropriate sensitivity to nutritional intervention increases the risk of type 2 error, which can misinform dietary guidance by skewing the results of systematic reviews to the negative.
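To make the link between test-retest reliability and detectable change concrete, the sketch below computes the standard error of measurement (SEM) and the corresponding smallest detectable change (SDC) for a hypothetical test; the reliability and SD values are assumptions for illustration, not properties of any specific instrument.

```python
# Sketch: how test-retest reliability bounds the change a test can detect.
# The reliability (r) and SD values below are hypothetical, for illustration only.
import math

def smallest_detectable_change(sd: float, test_retest_r: float, z: float = 1.96) -> float:
    """Smallest change exceeding measurement error at ~95% confidence.

    SEM = SD * sqrt(1 - r); SDC = z * sqrt(2) * SEM.
    """
    sem = sd * math.sqrt(1.0 - test_retest_r)
    return z * math.sqrt(2.0) * sem

# Example: a memory test scored with SD = 10 points.
for r in (0.70, 0.85, 0.95):
    print(f"r = {r:.2f}: SDC = {smallest_detectable_change(10.0, r):.1f} points")
# With r = 0.70 the SDC is ~15 points, likely larger than any plausible
# short-term nutrition effect; with r = 0.95 it shrinks to ~6 points.
```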
One way to determine which tests are likely to be sensitive to nutrition may be to flip the current model on its head. Many nutrition researchers are, understandably, most interested in the nutrition element of their research, and therefore tend to hold cognition constant (that is, use the same cognitive test) while manipulating the nutritional factors (for example, testing different individual micronutrients for memory). However, to really narrow down which tests are most sensitive to nutrition, we could move toward holding the nutritional intervention constant and focusing on the methods we used to evaluate cognitive factors instead. This has been done quite extensively with glucose as the nutritional intervention, with tightly controlled experiments aiming to understand the effects of glucose on many different aspects of cognition in many populations across the lifespan [35]. Further research streams of this kind to explore test sensitivity to other nutritional components and dietary patterns would be beneficial.
As suggested by the DGAC, larger randomized controlled trials can also ameliorate error problems, as error is randomized out between the intervention and comparator groups. Although it is not debatable that studies need to be appropriately powered—and must be reported in such a way that it is evident that the study power is adequate—effects must also be clinically meaningful, rather than simply statistically significant, to be useful for informing dietary guidance. Although this may be easier to establish in other populations (for example, older adults with cognitive impairments), it is harder to establish what is a clinically meaningful change for a healthy individual of any age, young or old.
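To illustrate the sample sizes implied by small effects, the sketch below uses the statsmodels power module to estimate participants per arm for a simple two-arm comparison at conventional alpha and power; the effect sizes are hypothetical, and the calculation assumes an independent-samples t test rather than any particular trial design or outcome measure.

```python
# Illustrative power calculation for a parallel-group nutrition RCT.
# Effect sizes are hypothetical; nutrition effects on cognition are often small.
import math

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.1, 0.2, 0.3):
    n_per_arm = analysis.solve_power(
        effect_size=d, alpha=0.05, power=0.80, alternative="two-sided"
    )
    print(f"Cohen's d = {d:.1f}: ~{math.ceil(n_per_arm)} participants per arm")
# A small effect (d = 0.2) already requires roughly 400 participants per arm
# at 80% power, underscoring the call for larger randomized trials.
```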
As mentioned above, all the existing reviews agreed that cognitive tests should be selected specifically for the target population. For healthy populations, researchers must ensure that the tests selected have sufficient sensitivity to change to detect the smaller effects seen in these populations. In addition, even among healthy individuals with no cognitive decline, the appropriate cognitive test battery will be different for adolescents, adults, and older adults. For example, as mentioned above, the MMSE is unlikely to have sufficient sensitivity to detect changes in cognition in samples of young, healthy individuals; research has suggested that the Montreal Cognitive Assessment may be more appropriate for detecting normal (asymptomatic) changes in cognition across the lifespan of healthy adults [36].
Understanding what is “normal”
The existence of normative databases for cognitive tools is also a factor that should be considered in tool selection and may be particularly important for understanding what constitutes a meaningful change for a healthy adult or adolescent. However, although many of the included reviews mentioned the importance of normed tests, there are many study designs applicable to nutrition research that would not necessarily benefit from the use of normed tests. For some studies, comparing the performance of 2 groups (ideally 2 matched groups), or using before/after repeated-measures designs in the same persons, can provide meaningful data without using a normed test. In addition, how normative data are collected (for example, the clinical workup completed to establish a “normal” population, the inclusion/exclusion criteria, etc.) and the method of statistical analysis for the normality of the data itself (depending on how comparisons are made) should also be considered, and these are often not well described.
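As an illustration of what referencing a score to a normative database typically involves, the sketch below converts a raw score to a z score and percentile using invented age-band norms; the test, age bands, and normative means/SDs are placeholders, not real normative data.

```python
# Sketch: referencing an individual's score to age-stratified normative data.
# The normative means/SDs below are invented placeholders, not real norms.
from scipy import stats

NORMS = {  # age band -> (mean, SD) for a hypothetical test
    "19-39": (52.0, 8.0),
    "40-59": (49.0, 8.5),
    "60+":   (44.0, 9.0),
}

def normed_score(raw, age_band):
    """Return (z score, percentile) relative to the age band's norms."""
    mean, sd = NORMS[age_band]
    z = (raw - mean) / sd
    return z, stats.norm.cdf(z) * 100

z, pct = normed_score(raw=50.0, age_band="60+")
print(f"z = {z:.2f}, percentile = {pct:.0f}")
```

How the normative sample was recruited and screened determines whether such a comparison is meaningful, which is why reporting those details matters.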
Do we need a guiding taxonomy?
The issue of taxonomy remains prominent. There is more than one plausible taxonomy, and the domains we seek to study are artificial constructs and relatively abstract concepts, which presents a terminology challenge. In addition, psychology is a relatively young science, and our understanding of brain function and cognition is continually evolving; therefore, any guiding taxonomy would need to be constantly tested and updated. Linked to this is the considerable overlap between domains. Psychology is, at present, experiencing something of a paradigm change, moving away from a “modular” view of the brain; therefore, some previous considerations, such as neural mechanisms, may need regular revision. For example, connectivity in the Default Mode Network has recently been implicated in changes in episodic memory and processing speed among healthy older adults [37]. Individuals may also vary in the extent of the modularity of their cognitive functioning, potentially adding yet more noise to the measurement of individual cognitive function. A somewhat related issue (discussed above) is that cognitive tests are prone to measuring slightly different but related functions [9], and that cognitive functions are interdependent (for example, it is necessary to maintain sustained attention to complete many working memory tasks). Interchanging terms and mislabeling constructs are common, so perhaps developing a guiding taxonomy may be more useful than was previously thought.
Investigators in nutrition research also rarely present a theoretical rationale supporting their cognitive test choice. There appears to be a disconnect from the advances in basic cognitive science, with a tendency to uncritically reproduce arguments and practices that have been applied previously, especially within the same research group; however, the argument that a task “has been used before” may not be an adequate or acceptable rationale. We also believe that this practice contributes to the underrepresentation of some domains; for example, procedural memory is an important cognitive domain known to be impacted by aging [2], but it is not represented in nutrition research.
Composite scores
In nutrition research, it is common for researchers to create composite scores by grouping data from several individual cognitive tests into a broader domain. This can be helpful as it reduces the risk of type 1 error resulting from multiple comparisons, as outlined above, and it increases the reliability of the scores, which may be particularly helpful for poorly defined areas, such as executive function. However, because cognitive nutrition science is in its relative infancy, this standard practice might be obscuring positive effects on nutrition-sensitive measures, which could slow progress in the field. Therefore, if researchers want to create these composite scores, it should be done in a theoretically based manner consistent with previous trials, as emphasized by Pase and Stough [8] (a guiding taxonomy would help with this). It would also be good to see individual test scores included with study reports, even if only in supplemental materials. This would allow evaluation across studies of what kinds of tests are most promising for nutrition research and give a more nuanced look at how nutritional interventions may be affecting different cognitive functions.
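As a minimal sketch of a theoretically grouped composite (with hypothetical test names and an assumed domain grouping, not a recommended battery), the snippet below z-scores individual tests, averages them within domains, and keeps the test-level scores available for reporting:

```python
# Sketch: building a theoretically grounded composite score.
# Test names and the domain grouping are hypothetical; in practice the grouping
# should follow an agreed taxonomy and be consistent with prior trials.
import pandas as pd

scores = pd.DataFrame({
    "digit_span": [6, 8, 5, 7],
    "n_back_accuracy": [0.71, 0.83, 0.64, 0.77],
    "category_fluency": [18, 22, 15, 20],
})

DOMAINS = {
    "working_memory": ["digit_span", "n_back_accuracy"],
    "executive_function": ["category_fluency"],
}

z = (scores - scores.mean()) / scores.std(ddof=1)  # put all tests on a common scale
composites = pd.DataFrame({domain: z[tests].mean(axis=1) for domain, tests in DOMAINS.items()})

# Report composites alongside the individual (z-scored) test results so that
# reviewers can still examine test-level effects.
print(pd.concat([z, composites], axis=1).round(2))
```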
Issues in reporting
Reporting issues exist that affect the quality of the research in nutritional cognitive science. These include general reporting issues such as not providing sufficient methodological details to allow for replication, not providing full ingredient lists for supplements and placebos, or not providing effect sizes or sufficient data to allow for a post hoc calculation. Some of these issues were mentioned in the included reviews, including a lack of specificity in the reporting of the cognitive tests. Many researchers in this field may not present the tasks in sufficient detail, nor include measures of test-retest reliability or evidence of periodic certification of test administrators. Often, readers may only be directed to the test manual. In other cases, the tests may be described, but in insufficient detail, with key test parameters missing from the report.
The open science movement (for example, the Center for Open Science, https://www.cos.io/) already provides much guidance on how to report study outcomes in a transparent manner, including preregistration, a priori defined primary outcomes, and intent-to-treat analysis, all of which are highly recommended for researchers in this area. An additional set of standard reporting guidelines specifically for cognitive nutrition research would ensure that the reported research in the area is sufficiently detailed to allow systematic reviewers to use the evidence. This would offer more clarity and detail on what has been done in the field to date and would increase the reliability of future systematic reviews, which could feed directly into informing dietary guidance. Furthermore, a set of guidelines specifically for reporting cognitive test parameters could make a valuable addition.
More usage of open science methodologies in this area would also pave the way for a central, de-identified database of cognitive test outcomes after nutritional intervention because data sharing is an important part of the movement toward transparency in science. Data repositories are available for these purposes, and we strongly recommend that cognitive nutrition researchers make use of these to help advance the field.
Prospective harmonization
Going forward, it would be beneficial to have a set of recommended tests for nutrition research that can be combined using data harmonization approaches. These could be administered alongside the researchers’ own choice of tests, meaning that large sets of pooled data could be created to answer questions that inform dietary guidance. Creating “new” cognitive tests compounds existing challenges, given that 2 of the main authorities stated that these would not be accepted as evidence for a health claim. Thus, it may also be beneficial to establish written guidance for the creation and reporting of new cognitive test batteries, so that, at minimum, new tests can be compared with existing tests (given sufficient methodological uniformity and detail of reporting).
When researching domains that have not yet been represented in cognitive nutrition research (for example, procedural memory, social cognition), researchers are well positioned to start with homogeneous approaches, avoiding the issues currently facing other domains. Some forward planning regarding test selection for these under-represented domains could help to avoid replicating the heterogeneity issue across newly researched domains.
Test availability
Another issue in cognitive test selection is that most test batteries are behind a paywall. If these were made freely available, adoption by researchers unfamiliar with the tests could be facilitated. Another option would be the construction of an advisory database, describing cognitive tests and their availability and associated costs.
Translation of test results into health claims/substantiated product benefits
As noted above, for the translation of test results into health claims to be successful, tests must be objective (that is, researchers should avoid self- and parent-report wherever possible) and the broadness or specificity of the health claim should fit the evidence provided. For example, for a broad claim about improving memory, improvements should be shown in several areas of memory. If this is not the case, the health claim should be revised to be more specific.
Applicability of evidence to dietary guideline questions
There are broader issues that may affect the applicability of some cognitive nutritional research to the DGA. First, although most of the research into the heterogeneity issue and the appropriateness of cognitive tests for nutrition research seems to have been conducted in normal populations, many of the systematic reviews conducted by the DGAC (supported by the Nutrition Evidence Systematic Review team within the USDA) are conducted in special populations (for example, children from birth to 2 y), and insufficient evidence is available to provide more specific guidance for particular subgroups across the lifespan (including in early and middle adulthood). Second, the previous round (2020–2025) of DGAC questions was based on dietary patterns (for example, the Mediterranean diet) rather than nutritional components (for example, micronutrients, ω-3 FAs). Although studying dietary patterns makes sense from the standpoint of ecological validity, much of the research base in the area tends to isolate dietary components to test them in a controlled manner. Thus, much of the existing laboratory-based research may not be applicable to the questions posed by dietary guidelines committees.
The draft DGAC 2025–2030 questions were released for public comment in April 2022, and cognitive health is again emphasized [38]. Although the questions are not yet finalized, when they were released for public comment, there were 2 questions relating to cognition: “What is the relationship between dietary patterns consumed and risk of cognitive decline, mild cognitive impairment, dementia, and Alzheimer’s disease?” and “What is the relationship between dietary patterns consumed before and during pregnancy and lactation and developmental milestones, including neurocognitive development, in the child?” It is noteworthy that these questions focus once again on dietary patterns rather than dietary components. This could mean that much of the laboratory-based research that has been conducted in healthy adults may once again not be applicable to the questions posed. Solutions to this could include encouraging cognitive nutrition researchers to focus more on the dietary patterns or lobbying for the inclusion of the most well-supported individual nutrients in the review questions posed by the dietary guidelines committees. It should also be noted that the questions focus primarily on the beginning and end of the lifespan. Dietary guidance varies significantly for different stages of life; therefore, the absence of questions relating to adolescence and middle adulthood is concerning.
In conclusion, this article has endeavored to provide an overview of the previous reviews of issues in cognitive task selection in nutrition science, with a focus on advancing dietary guidance for cognitive health, followed by the perspective of the IAFNS expert group on addressing challenges and areas of disagreement in the field. Most of the reviews included in the umbrella review focused on prospective harmonization: reducing the heterogeneity in the task selection. However, after 2 decades of high-quality reviews, heterogeneity in the task selection still appears to be one of the biggest issues in cognitive nutrition science. Overall, the existing reviews in this area agree on some of the issues that affect the heterogeneity in task selection and on many of the fundamental principles that should be followed to select appropriate cognitive measures; they also, however, disagree on many points that would need to be agreed to have a meaningful impact on the heterogeneity issue, to allow existing and future cognitive nutrition research to be used to inform dietary guidance.
The final section of this article presented potential solutions put forward by the IAFNS expert working group, including techniques for both prospective and retrospective harmonization, several sets of guidelines and other documents that would be useful if developed, and areas where future research could help to ameliorate the heterogeneity issue. If these methods and practices were implemented universally by researchers in the field, and large datasets could subsequently be pooled to examine questions relating to nutrition and cognition across the lifespan, this could go a long way toward enabling firm recommendations from dietary guidelines committees. The work that has been done, and is still being undertaken, in this area is groundbreaking and could improve the lives of millions more people if it were translated into firm dietary recommendations for the general public. Bridging this gap is of the utmost importance and should be the common goal of cognitive nutrition researchers worldwide.
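As a minimal, hedged illustration of what retrospective harmonization might look like in practice, the sketch below converts raw scores from two hypothetical studies that used different episodic memory tests into within-study standardized (z) scores before pooling; the study labels, column names, and values are invented for illustration and are not drawn from any dataset discussed here. Real harmonization efforts would typically use more sophisticated methods, such as the score-linking approach of Blackwell et al. [31], but the underlying logic of mapping heterogeneous tests onto a common metric is the same.

```python
import pandas as pd

# Two hypothetical studies that measured episodic memory with different tests
# (and therefore on different raw scales). All values are illustrative only.
study_a = pd.DataFrame({"study": "A", "age": [68, 72, 75, 70],
                        "memory_raw": [12, 9, 7, 11]})    # e.g., word-list recall
study_b = pd.DataFrame({"study": "B", "age": [66, 74, 71, 69],
                        "memory_raw": [88, 70, 76, 92]})  # e.g., paragraph recall

def standardize_within_study(df: pd.DataFrame, col: str) -> pd.DataFrame:
    """Convert a raw cognitive score to a within-study z-score (common metric)."""
    out = df.copy()
    out[col + "_z"] = (out[col] - out[col].mean()) / out[col].std(ddof=1)
    return out

# Harmonize each study onto the common metric, then pool for a joint analysis.
pooled = pd.concat(
    [standardize_within_study(study_a, "memory_raw"),
     standardize_within_study(study_b, "memory_raw")],
    ignore_index=True,
)
print(pooled[["study", "age", "memory_raw_z"]])
```

Simple within-study standardization of this kind assumes that the pooled studies sample broadly comparable populations; when item-level data are available, psychometric linking to a common metric is generally preferable.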
Acknowledgments
We thank the other attendees of the expert meeting who contributed to the ideas within this article, including Julie Nevins, Nutrition Evidence Systematic Review, Center for Nutrition Policy and Promotion, USDA. The authors report no conflicts of interest. AR received a small honorarium for the production of the manuscript. All authors contributed intellectually to the development of the manuscript. AR and HY undertook the systematic search and wrote the manuscript draft. All authors provided input on the first draft and reviewed the final draft before submission. Data described in the manuscript are readily available in the references reviewed.
Footnotes
Supplementary data to this article can be found online at https://doi.org/10.1016/j.advnut.2023.03.010.
Funding
RG, JG, LM, LS, CT, PW, and HY received no compensation for this work. MEL is an employee of the Institute for the Advancement of Food and Nutrition Sciences (IAFNS), a nonprofit research organization. ARR was supported by IAFNS via grant IAFNS-ROMIJNAMY-20220201 from the Cognitive Health Committee. IAFNS also supported the publication costs for this work. IAFNS is a nonprofit science organization that pools funding from industry and advances science through in-kind and financial contributions from private and public sector members.
Author disclosures
The authors report no conflicts of interest.
References
- 1. Dietary Guidelines Advisory Committee. Scientific report of the 2020 Dietary Guidelines Advisory Committee: advisory report to the Secretary of Agriculture and the Secretary of Health and Human Services. US Department of Agriculture, Agricultural Research Service; 2020.
- 2. Christensen H. What cognitive changes can be expected with normal ageing? Aust. N. Z. J. Psychiatry. 2001;35(6):768–775. doi: 10.1046/j.1440-1614.2001.00966.x.
- 3. Goyal M.S., Iannotti L.L., Raichle M.E. Brain nutrition: a life span approach. Annu. Rev. Nutr. 2018;38:381–399. doi: 10.1146/annurev-nutr-082117-051652.
- 4. Singh J.E. Dietary sources of omega-3 fatty acids versus omega-3 fatty acid supplementation effects on cognition and inflammation. Curr. Nutr. Rep. 2020;9(3):264–277. doi: 10.1007/s13668-020-00329-x.
- 5. Mazereeuw G., Lanctôt K.L., Chau S.A., Swardfager W., Herrmann N. Effects of omega-3 fatty acids on cognitive performance: a meta-analysis. Neurobiol. Aging. 2012;33(7):1482.e17–1482.e29. doi: 10.1016/j.neurobiolaging.2011.12.014.
- 6. Krikorian R., Shidler M.D., Nash T.A., Kalt W., Vinqvist-Tymchuk M.R., Shukitt-Hale B., et al. Blueberry supplementation improves memory in older adults. J. Agric. Food Chem. 2010;58(7):3996–4000. doi: 10.1021/jf9029332.
- 7. Martínez-Lapiscina E.H., Clavero P., Toledo E., Estruch R., Salas-Salvadó J., San Julián B., et al. Mediterranean diet improves cognition: the PREDIMED-NAVARRA randomised trial. J. Neurol. Neurosurg. Psychiatry. 2013;84(12):1318–1325. doi: 10.1136/jnnp-2012-304792.
- 8. Pase M.P., Stough C. An evidence-based method for examining and reporting cognitive processes in nutrition research. Nutr. Res. Rev. 2014;27(2):232–241. doi: 10.1017/S0954422414000158.
- 9. Waters G.S., Caplan D. The reliability and stability of verbal working memory measures. Behav. Res. Methods Instrum. Comput. 2003;35(4):550–564. doi: 10.3758/bf03195534.
- 10. Qualified health claim: final decision letter - phosphatidylserine and cognitive dysfunction and dementia [Internet]. FDA; 2003 [cited 30 August, 2022]. Available from: http://wayback.archive-it.org/7993/20171114183737/https:/www.fda.gov/Food/IngredientsPackagingLabeling/LabelingNutrition/ucm072999.htm
- 11. Label claims for conventional foods and dietary supplements [Internet]. FDA; 2016 [cited 30 August, 2022]. Available from: http://wayback.archive-it.org/7993/20171114183635/https://www.fda.gov/Food/IngredientsPackagingLabeling/LabelingNutrition/ucm111447.htm
- 12. Aguilar F., Crebelli R., Dusemund B., Galtier P., Gilbert J., Gott D., et al. Guidance for submission for food additive evaluations. EFSA J. 2012;10(7):276.
- 13. de Jager C.A., Dye L., de Bruin E.A., Butler L., Fletcher J., Lamport D.J., et al. Criteria for validation and selection of cognitive tests for investigating the effects of foods and nutrients. Nutr. Rev. 2014;72(3):162–179. doi: 10.1111/nure.12094.
- 14. Fusar-Poli P., Radua J. Ten simple rules for conducting umbrella reviews. Evid. Based Ment. Health. 2018;21(3):95–100. doi: 10.1136/ebmental-2018-300014.
- 15. Spencer S.J., Korosi A., Layé S., Shukitt-Hale B., Barrientos R.M. Food for thought: how nutrition impacts cognition and emotion. npj Sci. Food. 2017;1(1):7. doi: 10.1038/s41538-017-0008-y.
- 16. Baker L.D., Manson J.E., Rapp S.R., Sesso H.D., Gaussoin S.A., Shumaker S.A., et al. Effects of cocoa extract and a multivitamin on cognitive function: a randomized clinical trial. Alzheimers Dement. 2022:1–12.
- 17. Page M.J., McKenzie J.E., Bossuyt P.M., Boutron I., Hoffmann T.C., Mulrow C.D., et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Int. J. Surg. 2021;88. doi: 10.1016/j.ijsu.2021.105906.
- 18. Dietary Guidelines for Americans, 2020–2025. 9th Edition. US Department of Agriculture and US Department of Health and Human Services; 2020 [cited 12 December, 2022]. Available from: DietaryGuidelines.gov.
- 19. Benton D., Kallus K.W., Schmitt J.A. How should we measure nutrition-induced improvements in memory? Eur. J. Nutr. 2005;44(8):485–498. doi: 10.1007/s00394-005-0583-6.
- 20. Kallus K.W., Schmitt J.A., Benton D. Attention, psychomotor functions and age. Eur. J. Nutr. 2005;44(8):465–484. doi: 10.1007/s00394-005-0584-5.
- 21. Westenhoefer J., Bellisle F., Blundell J.E., de Vries J., Edwards D., Kallus W., et al. PASSCLAIM—mental state and performance. Eur. J. Nutr. 2004;43(2):II85–II117. doi: 10.1007/s00394-004-1204-5.
- 22. Wesnes K.A. Evaluation of techniques to identify beneficial effects of nutrition and natural products on cognitive function. Nutr. Rev. 2010;68(Suppl 1):S22–S28. doi: 10.1111/j.1753-4887.2010.00328.x.
- 23. Martini D., Innocenti A., Cosentino C., Bedogni G., Zavaroni I., Ventura M., et al. Claimed effects, outcome variables and methods of measurement for health claims proposed under Regulation (EC) 1924/2006 and related to cognitive function in adults. Arch. Ital. Biol. 2018;156(1–2):64–86. doi: 10.12871/00039829201817.
- 24. Macready A.L., Butler L.T., Kennedy O.B., Ellis J.A., Williams C.M., Spencer J.P. Cognitive tests used in chronic adult human randomised controlled trial micronutrient and phytochemical intervention studies. Nutr. Res. Rev. 2010;23(2):200–229. doi: 10.1017/S0954422410000119.
- 25. Schmitt J.A., Benton D., Kallus K.W. General methodological considerations for the assessment of nutritional influences on human cognitive functions. Eur. J. Nutr. 2005;44(8):459–464. doi: 10.1007/s00394-005-0585-4.
- 26. Dangour A.D., Allen E. Do omega-3 fats boost brain function in adults? Are we any closer to an answer? Oxford University Press; 2013. pp. 909–910.
- 27. Pase M.P., Stough C. Describing a taxonomy of cognitive processes for clinical trials assessing cognition. Am. J. Clin. Nutr. 2013;98(2):509–510. doi: 10.3945/ajcn.113.065532.
- 28. Kennedy D.O. Reply to MP Pase and C Stough. Am. J. Clin. Nutr. 2013;98(2):510–511. doi: 10.3945/ajcn.113.065730.
- 29. Carroll J.B. Human cognitive abilities: a survey of factor-analytic studies. Cambridge University Press; 1993.
- 30. Schneider W.J., McGrew K.S. The Cattell-Horn-Carroll model of intelligence. In: Flanagan D., Harrison P., editors. Contemporary intellectual assessment: theories, tests and issues. 3rd Edition. Guilford Press; New York: 2012.
- 31. Blackwell C.K., Tang X., Elliott A.J., Thomes T., Louwagie H., Gershon R., et al. Developing a common metric for depression across adulthood: linking PROMIS depression with the Edinburgh Postnatal Depression Scale. Psychol. Assess. 2021;33(7):610–618. doi: 10.1037/pas0001009.
- 32. Norris S.A., Frongillo E.A., Black M.M., Dong Y., Fall C., Lampl M., et al. Nutrition in adolescent growth and development. Lancet. 2022;399(10320):172–184. doi: 10.1016/S0140-6736(21)01590-7.
- 33. Poulose S.M., Miller M.G., Scott T., Shukitt-Hale B. Nutritional factors affecting adult neurogenesis and cognitive function. Adv. Nutr. 2017;8(6):804–811. doi: 10.3945/an.117.016261.
- 34. Korol D.L. Enhancing cognitive function across the life span. Ann. N. Y. Acad. Sci. 2002;959(1):167–179. doi: 10.1111/j.1749-6632.2002.tb02091.x.
- 35. Sünram-Lea S.I., Owen L. The impact of diet-based glycaemic response and glucose regulation on cognition: evidence across the lifespan. Proc. Nutr. Soc. 2017;76(4):466–477. doi: 10.1017/S0029665117000829.
- 36. Gluhm S., Goldstein J., Loc K., Colt A., Van Liew C.V., Corey-Bloom J. Cognitive performance on the Mini-Mental State Examination and the Montreal Cognitive Assessment across the healthy adult lifespan. Cogn. Behav. Neurol. 2013;26(1):1–5. doi: 10.1097/WNN.0b013e31828b7d26.
- 37. Staffaroni A.M., Brown J.A., Casaletto K.B., Elahi F.M., Deng J., Neuhaus J., et al. The longitudinal trajectory of default mode network connectivity in healthy older adults varies as a function of age and is associated with changes in episodic memory and processing speed. J. Neurosci. 2018;38(11):2809–2817. doi: 10.1523/JNEUROSCI.3067-17.2018.
- 38. Proposed scientific questions, Dietary Guidelines for Americans [Internet]. USDA; 2022 [cited 30 August, 2022]. Available from: https://www.dietaryguidelines.gov/work-under-way/view-proposed-scientific-questions