2024 Jun 18;28(6):1362–1376. doi: 10.1111/jiec.13509

Embed systemic equity throughout industrial ecology applications: How to address machine learning unfairness and bias

Joe F Bozeman III 1,2, Catharina Hollauer 1, Arjun Thangaraj Ramshankar 1, Shalini Nakkasunchi 3, Jenna Jambeck 4, Andrea Hicks 5, Melissa Bilec 6, Darren McCauley 7, Oliver Heidrich 3
PMCID: PMC11667658  PMID: 39722860

Abstract

Recent calls have been made for equity tools and frameworks to be integrated throughout the research and design life cycle —from conception to implementation—with an emphasis on reducing inequity in artificial intelligence (AI) and machine learning (ML) applications. Simply stating that equity should be integrated throughout, however, leaves much to be desired as industrial ecology (IE) researchers, practitioners, and decision‐makers attempt to employ equitable practices. In this forum piece, we use a critical review approach to explain how socioecological inequities emerge in ML applications across their life cycle stages by leveraging the food system. We exemplify the use of a comprehensive questionnaire to delineate unfair ML bias across data bias, algorithmic bias, and selection and deployment bias categories. Finally, we provide consolidated guidance and tailored strategies to help address AI/ML unfair bias and inequity in IE applications. Specifically, the guidance and tools help to address sensitivity, reliability, and uncertainty challenges. There is also discussion on how bias and inequity in AI/ML affect other IE research and design domains, besides the food system—such as living labs and circularity. We conclude with an explanation of the future directions IE should take to address unfair bias and inequity in AI/ML. Last, we call for systemic equity to be embedded throughout IE applications to fundamentally understand domain‐specific socioecological inequities, identify potential unfairness in ML, and select mitigation strategies in a manner that translates across different research domains.

Keywords: artificial intelligence, justice, machine learning, machine learning bias, social equity, unfairness

1. INTRODUCTION

There is a clear movement to embed equity throughout innovative research and design processes across industry sectors and scientific disciplines. Recent calls have been made for equity tools and frameworks to be integrated throughout the research and design life cycle—from conception to implementation—with an emphasis on fairness and responsibility in artificial intelligence (AI) and machine learning (ML) applications (Wailoo et al., 2023). Simply stating that equity should be integrated throughout, however, leaves much to be desired as researchers, practitioners, decision‐makers, and the like attempt to employ equitable practices within their respective activities. The industrial ecology (IE) community is a transdisciplinary collection of innovative scholars and practitioners active in developing and deploying equity‐centered praxis (Bozeman Iii, Chopra et al., 2022; Illsley et al., 2007; Liu et al., 2022; Sullivan et al., 2018). Nevertheless, specific methods, frameworks, and tools must be further refined, promulgated, and socially accepted by the broader IE community for a more just and equitable future to be realized. The current study helps to address this need by providing apt methods, frameworks, and tools for IE stakeholders.

It is important to establish that ML is a subset of AI. AI refers to the computational emulation of human thought and task performance (e.g., the development and deployment of human‐like robots, super‐human computers, and “smart” devices), whereas ML encompasses data‐driven algorithms and technologies that enable pattern identification and decision‐making at speeds and scales that ideally surpass human capabilities. ML algorithms have been applied in varied domains such as disease detection and diagnosis (Chen et al., 2017; Fatima & Pasha, 2017), automated driving (Grigorescu et al., 2020; Nascimento et al., 2020), criminal justice (Berk & Hyatt, 2015; Diyasa et al., 2021), and financial services (Baudry & Robert, 2019; Roy & George, 2017). Despite the various domains ML has been applied to, algorithms and datasets used for ML can contain inconsistencies that create or reinforce unfair bias or inequity. These types of inequities are also influenced by broader societal factors. For instance, pervasive societal conditions including historical colonialism (e.g., racism, sexism, and the implementation of inequitable laws) currently affect matters of AI/ML bias and inequity (Mohamed et al., 2020). Furthermore, spaces where AI/ML applications are tested and administered frequently—such as academic institutions—are no exception to the influences of these societal factors given that “research as well as the social systems that facilitate research and design are inextricably linked” (Bozeman, 2024). It is, therefore, necessary to develop proper guidelines to help ensure fairness in ML decision‐making (Kaur et al., 2023).

Establishing equity‐centered priorities and guidance for transdisciplinary research activities that involve ML applications is a step toward embedding systemic equity throughout. For instance, in 2022, an international group of transdisciplinary scholars established three research priorities for just and sustainable urban systems—social equity and justice, circularity, and digital twins—where the social equity and justice priority was established as fully cross‐functional (Bozeman Iii, Chopra et al., 2022). This means that social equity and justice must be fully integrated into circularity and digital twins activities, since those two priorities rely heavily on data‐driven ML applications (Awan et al., 2021; Bozeman Iii, Chopra et al., 2022). Although establishing equity‐centered research priorities and guidance is helpful, effectively addressing unfair bias in ML—or ML inequity—would benefit from more refined strategies.

A major challenge in providing meaningful strategies to address ML unfair bias in IE applications is matching equity‐centered tools with evolving IE methodology (e.g., input–output, life cycle assessment [LCA], and material flow analysis). Of the IE approaches commonly used to date, LCA provides a methodological landscape comprehensive enough to represent ML bias and inequity at each stage—from cradle‐to‐grave for linear applications or cradle‐to‐cradle for circular ones, while allowing for refined enough scenarios to be unveiled for corresponding strategies to be proposed. These are the primary reasons why, in the current study, LCA was chosen as the IE methodology used to exemplify how to address ML inequity in IE.

There is a multitude of potential research subjects within LCA. Since the aim of the current study is to provide meaningful tools to address inequity in LCA‐inspired ML applications, it is also important to identify an appropriate subfield of study. The food–energy–water nexus is an apt domain to explore ML inequity given its clear connections between social and ecological (socioecological) components. For instance, previous works have found that human dietary choice has significantly different impacts on environmental media (e.g., greenhouse gas [GHG] emissions, land, and water impacts) across sociodemographic subgroups (e.g., race, ethnicity, and income class) (Bozeman et al., 2019, 2020).

The current study is primarily intended to provide tangible concepts, tools, strategies, and frameworks for addressing ML inequity in LCA through a critical review. We first provide an important framework for understanding inequity more holistically, insights into example socioecological inequities that occur within the food system, and examples of related inequities that are worth highlighting. Next, we provide tools and strategies for addressing unfair ML bias in IE applications. Then, we conclude with an overview of ways that bias and inequity in AI/ML implicate other IE research and design domains to inspire future research directions.

2. UNDERSTANDING SOCIOECOLOGICAL INEQUITY IS FOUNDATIONAL

One of the major issues in framing equity investigations is the variation in how core equity and justice concepts are delineated. For instance, some researchers have delineated justice and equity to include concepts such as cosmopolitan and restorative justice (Figueroa & Waitt, 2010; Minguet, 2021; Romero‐Lankao & Nobler, 2021), whereas others have simply used the concept of recognitional justice, which overlaps with and can effectively represent cosmopolitan and restorative concepts. These evolving distinctions serve important purposes when it comes to exploring varying types of equity and justice applications in research. However, too many concept distinctions can undermine practical implementation efforts in applied, transdisciplinary, or community‐based contexts.

The systemic equity framework—which requires the simultaneous, effective, and long‐term administration of resources and policies, together with meeting the cultural needs of systematically marginalized sociodemographic subgroups—was developed to help address this sprawl in justice and equity distinctions, especially for energy and environmental researchers working toward transdisciplinary effect. It establishes three core equity concepts that effectively encompass all to‐date equity and justice concept variations (see Figure 1) (Bozeman Iii, Nobler et al., 2022). Unlike previous three‐tenet frameworks (Jenkins et al., 2016; McCauley & Heffron, 2018), this framework clarifies the difference between justice and equity, terms often used interchangeably. That is, equity refers to being fair and unbiased as a function of an organization or system, whereas justice primarily involves removing barriers that prevent the implementation of equity. Just as importantly, this framework provides terminology for when equity efforts are ineffective in achieving systemic equity (i.e., ostensible, aspirational, and exploitational equity) (Bozeman Iii, Nobler et al., 2022).

FIGURE 1.

FIGURE 1

Venn diagram of the systemic equity framework. Source: From Bozeman Iii, Nobler et al. (2022).

We use the systemic equity framework to exemplify socioecological inequity in the food system (see Figure 2). Four life cycle stage delineations were used in alignment with food‐system LCA literature (Bozeman et al., 2020): production, consumption, human and ecosystem impacts, and governance and policy. Typical life cycle stages tend to follow a material extraction, material processing, manufacturing, use, and waste management flow with potential disposition pathways (i.e., recycle, remanufacture, and reuse) (Matthews et al., 2014). The food system life cycle stages of the current study align with this traditional format, where the production stage encompasses material extraction, material processing, and manufacturing; consumption encompasses use; and human and ecosystem impacts and governance and policy effectively represent waste management. Disposition pathways are not considered in the current study given our primary study aim.

FIGURE 2.

FIGURE 2

An overview of distributive, procedural, and recognitional equity factors of the food system across four life cycle stages.

It is important to emphasize that the food system exemplified herein is primarily considered a highly industrialized, US‐based system where international imports and exports, profit‐driven decision‐making, data‐driven or precision agriculture, and pesticide use are key features. The AI/ML technologies involved in such a food system include automation to assess and manage soil, precision technology in fertilizer application decision‐making, informed genetics to increase agricultural yields (i.e., gene‐edited crops), multi‐scale climatic resources for geo‐spatial analysis, and ML‐driven policy analysis in the development of pro‐environmental agro‐climate and economic interventions (Basso & Antle, 2020; Clapp & Ruder, 2020). The following subsections contextualize and highlight some of the inequities of the four food‐system life cycle stages and their associated AI/ML technologies.

2.1. Production inequity

Understanding how inequity might present itself in the production life cycle stage of the food system requires familiarity with what activities this stage involves. To help contextualize this content, we highlight inequity centered on a US perspective but with international implications. The production stage includes raw material acquisition, processing, and markets. Raw material acquisition and processing, in this context, may involve fertilizer use, pesticide use, livestock feed production, land use, labor, and capital for items such as equipment and infrastructure (Forbord & Vik, 2017). Markets encompass activities and factors such as food supply chain, labor markets (i.e., the supply and demand of employment availability), food distribution, product pricing, and revenue distribution (Busch & Spiller, 2016; Flies et al., 2018; Forbord & Vik, 2017; Mejía & García‐Díaz, 2018; Stevanović et al., 2017).

Inequity can emerge in a plethora of forms in this life cycle stage (see Figure 2). Nonetheless, a limited set of inequity examples are highlighted here for each of the core equities of the systemic equity framework (i.e., distributive, procedural, and recognitional). Distributive inequity can emerge when practical and monetary access to arable land and state‐of‐the‐art equipment are systematically inhibitive for marginalized farmers and laborers. For example, the trend of agricultural digitalization in North America—which involves the use of advanced technologies such as sensors and robotics primarily for increased, cost‐effective food production—reveals tradeoffs in rising land costs and a deepening divide in a labor market dominated by high‐skill and low‐skill farm workers, thereby exacerbating the plights of an already marginalized labor force (e.g., low‐income, immigrant, Indigenous, women, and persons with disabilities) (Rotz et al., 2019). As for procedural inequity, matters of human gender and sexuality have been associated with inhibited access to farm land and subsidies (Leslie et al., 2019). Furthermore, the US Farm Bill has been shown to adversely impact low‐income farmers internationally due to the lack of effective integration of the concerns of historically marginalized farmers (Schmitz et al., 2006). This latter point is a form of recognitional inequity.

2.2. Consumption inequity

The consumption life cycle stage includes distribution, acquisition, food preparation, and consumption factors. In this stage, distribution and acquisition activities involve food safety and storage practices during transport (Hadi & Block, 2014; Roccato et al., 2017). Preparation and consumption encompass activities such as calorie intake from retail foodstuff, food choice factors in purchasing less nutritious versus more nutritious foodstuff, food preparation effects (e.g., differences in time availability for home‐cooked meals vs. fast food purchasing), and human consumer acquisition in the form of dietary preference (Alkon et al., 2013; Hadi & Block, 2014; Poti et al., 2017; Trubek et al., 2017).

We highlight economic, transport, and regulatory inequities for the consumption stage (see Figure 2). For example, the inequitable distribution of effective economic and transport resources can inhibit the purchase and consumption of healthier, environmentally friendly foods—win‐win foods (Willett et al., 2019)—for the Latinx subgroup in the United States (Bozeman et al., 2019). Furthermore, systemic health disparities and procedural inequities can emerge when zoning and tax laws facilitate the development of fast‐food restaurants rather than healthy food stores in lower‐income regions (Sushil et al., 2017).

2.3. Human and ecosystem impact inequity

The human and ecosystem impact life cycle stage includes waste management and ecological, social, and health effects. The food system yields untenable amounts of food waste in industrialized countries (Dou et al., 2016; Gustavsson et al., 2011; Scholz et al., 2015), while unhealthy food consumption increases health risks such as chronic kidney disease, diabetes, and inhibited childhood development (Ahola et al., 2016; Banerjee et al., 2017; Rose‐Jacobs et al., 2008; Sellers et al., 2009). The ecosystem impacts that the food system creates are well established and include anthropogenic GHG emissions, reactive nitrogen from agricultural practices, and the mismanagement of land and freshwater resources (Bozeman et al., 2020; Forbord & Vik, 2017; Lin & Lei, 2015; WallisDeVries & Bobbink, 2017).

Inequity can emerge here, for example, when the distribution of effective food and municipal waste programming is not enjoyed by all community types (see Figure 2). Income inequality has been found to have an increasingly adverse effect on municipal solid waste management, and wastes from the food system are typically part of solid waste streams (Kocak & Baglitas, 2022).

2.4. Governance and policy inequity

The governance and policy life cycle stage encompasses policies, edicts, and taxes that food systems are affected by. The US Farm Bill, for instance, is typically renewed every 5 years and has implications on farm commodity pricing, trade, and rural development (Yan et al., 2015). Food inspection and intervention practices also have serious implications for government oversight and human health outcomes (Eyles et al., 2012; Gittelsohn et al., 2017; Powell et al., 2011).

The inequitable enforcement of laws is a distributive challenge in governance and policy. For example, per‐ and polyfluoroalkyl substances (PFAS) exposure, which can occur in several food system activities (e.g., fertilizer land application and the use of non‐stick coating on pots and pans in food preparation), has been associated with increased risk of liver disease in the elderly (Huang et al., 2024; Wu et al., 2023). Establishing policy and structure for paying for PFAS measurement, safely managing PFAS‐containing waste, and identifying PFAS hotspots in the physical environment remains an evolving challenge (Ng et al., 2021). The ineffective governance of agrochemicals, which are used to increase agricultural yields, has been linked to soil and water contamination and adverse farm worker health outcomes such as acute poisoning and chronic health effects. There is an urgent need to holistically address these types of agrochemical toxicity challenges by balancing environmental and human health concerns for socioecological benefit (Anjaria & Vaghela, 2024). Each of these can yield procedural and recognitional inequities when decision‐making power for the historically marginalized is not effectively integrated, or when the social norms of these historically marginalized groups are stigmatized as not being meaningful or legitimate (e.g., undervaluing the cultural norm of storytelling, relative to institutional quantitative measures, as a meaningful decision‐making tool).

3. IDENTIFYING AND ADDRESSING ML INEQUITY IN LIFE CYCLE APPLICATIONS

In the previous section, the connection between socioecological inequity and the food system was established. This is important since the food system is the primary mechanism of the current study to exemplify how ML's unfair bias and inequity can be addressed. Nevertheless, it is difficult to directly address a problem that has not been properly identified, especially in the context of ML (Lee & Singh, 2021). This section, therefore, serves as guidance on how to systematically identify bias in ML with an explanation for associated tools.

As an initial step toward embedding systemic equity in IE and ML applications, it is recommended to employ an accessible and user‐friendly questionnaire to help preliminarily assess ML models or research designs. In Table 1, we build on the Wells‐Du Bois protocol for addressing unfair bias in AI/ML (Monroe‐White & Lecy, 2022). It leverages several sources to provide a simple question‐style format that allows researchers, designers, and practitioners to explore eight component questions, broken into three categories (i.e., data bias [DB], algorithmic bias [AB], and selection and deployment bias [SDB]) (Zhang et al., 2018; Buolamwini & Gebru, 2018; Caton & Haas, 2023; Celis et al., 2021; Chiril et al., 2020; Hastie et al., 2009; Kohli et al., 2021; Monroe‐White & Lecy, 2022; Rennie et al., 2003; Zliobaite, 2015). Answering the types of questions in Table 1 before fully designing and employing an AI/ML or data‐driven model may reveal inequities that would otherwise go unnoticed or underappreciated. Even responding to questions that a user asserts as inapplicable is a meaningful use of this tool if rationale for the inapplicability is provided. This inapplicability rationale could serve as important content for others to critically identify, assess, and adapt their AI/ML efforts to reduce unfair bias in the future (e.g., effective and more refined study limitation and future research direction content).

TABLE 1.

Questionnaire for identifying bias and inequity in machine learning (ML).

Data bias (DB)
DB.1. Does the data overlook, erroneously represent, or systemically exclude a sociodemographic subgroup?
DB.2. Does the data represent the subjectivity or impartiality of humans? How does this bias affect the intended outcomes?
DB.3. Does the data represent the true distribution of your model's target population, whether human or not? What errors or limitations were there in the data collection method(s)?
Algorithmic bias (AB)
AB.1. Could the model treat a particular demographic differently, even without explicit identity markers?
AB.2. Are algorithmic outcomes disparate across respective subgroups?
AB.3. If the models are predictive, have you examined their accuracy by sociodemographic subgroup to ensure performance is not significantly different? Specifically, what is your value orientation and what are the public/social implications of this work?
Selection and deployment bias (SDB)
SDB.1. What are your goals and intended outcomes? Is there any ill intent involved?
SDB.2. What are the unintended consequences of your work? How can your results be potentially manipulated to abuse or harm?

Note: Example and consolidated results from this questionnaire are provided in Figure 3 for the food system life cycle.
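To illustrate how a questionnaire of this kind can be operationalized, the sketch below encodes a subset of the Table 1 questions as a simple checklist structure that records responses and, importantly, requires a documented rationale whenever a question is marked inapplicable. The class and field names are our own illustrative choices, not part of the Wells‐Du Bois protocol.

```python
# Hypothetical sketch: Table 1 as a reusable checklist so that responses
# (including "not applicable" rationales) travel with a model or design.
from dataclasses import dataclass, field

@dataclass
class BiasQuestion:
    qid: str          # e.g., "DB.1"
    category: str     # "data", "algorithmic", or "selection/deployment"
    text: str
    response: str = ""
    applicable: bool = True
    rationale: str = ""  # required when marked not applicable

@dataclass
class BiasChecklist:
    questions: list = field(default_factory=list)

    def unanswered(self):
        # A question counts as open if it is applicable but unanswered,
        # or marked inapplicable without a documented rationale.
        return [q.qid for q in self.questions
                if (q.applicable and not q.response)
                or (not q.applicable and not q.rationale)]

checklist = BiasChecklist([
    BiasQuestion("DB.1", "data",
                 "Does the data overlook, erroneously represent, or "
                 "systemically exclude a sociodemographic subgroup?"),
    BiasQuestion("SDB.2", "selection/deployment",
                 "What are the unintended consequences of your work?"),
])
checklist.questions[0].response = "Checked sample coverage by subgroup."
print(checklist.unanswered())  # the SDB.2 question remains open
```

A practitioner could run such a checklist before model deployment and archive the responses as study‐limitation content, in line with the inapplicability rationale discussed above.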

We provide example inequities across each of these three categories by leveraging the life cycle and systemic equity framing of Section 2. Figure 3 provides examples of how bias might emerge in food systems for the DB, AB, and SDB categories. Next, we highlight how each of the life cycle stages links to bias or inequity.

FIGURE 3.

FIGURE 3

An overview of how food system life cycle stages link to the data bias, algorithmic bias, and selection and deployment bias categories of Table 1.

The DB category shows a commonality across each food system life cycle stage (see Figure 3). Each stage has the potential to incorporate skewed or erroneously representative existing data. This is not an uncommon experience for data scientists across scientific disciplines and industry sectors. For example, a total product life cycle framework was proposed to help address healthcare equity in AI/ML applications and medical devices by, in part, challenging the assumption that retrospectively collected data—or representative existing data—are perfectly correct (i.e., ground truth) (Abràmoff et al., 2023). Such an assumption can exacerbate inequities whether intentional or not. Other health outcomes are affected by the food system as previously established (refer to Section 2.3). Furthermore, data‐driven modeling has and will continue to have a significant impact on farming resource allocation and supply chain dynamics (Wolfert et al., 2017). These points help to explain why answering the questions of the DB category is so important in moving toward systemic equity.
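As a concrete, hedged illustration of the DB questions (particularly DB.1 and DB.3), the sketch below flags sociodemographic subgroups whose share in a training sample deviates from a reference population distribution by more than a chosen tolerance. The function name, tolerance, and all numbers are invented for illustration; real audits would use domain‐appropriate reference distributions and statistical tests.

```python
# Illustrative sketch (not from the paper): comparing subgroup shares in
# a sample against a reference population distribution.
from collections import Counter

def representation_gaps(sample_labels, population_shares, tol=0.05):
    """Return subgroups under- or over-represented beyond `tol`."""
    n = len(sample_labels)
    counts = Counter(sample_labels)
    gaps = {}
    for group, pop_share in population_shares.items():
        sample_share = counts.get(group, 0) / n
        gap = sample_share - pop_share  # positive = over-represented
        if abs(gap) > tol:
            gaps[group] = round(gap, 3)
    return gaps

# Hypothetical numbers for illustration only.
sample = ["A"] * 70 + ["B"] * 20 + ["C"] * 10
population = {"A": 0.50, "B": 0.30, "C": 0.20}
print(representation_gaps(sample, population))
# group A is over-represented; B and C are under-represented
```

A check of this kind would not establish ground truth, but it can surface the skewed or erroneously representative data that the DB category warns about.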

The AB category presents notable inequity examples across the food system life cycle. For AB, biased training datasets could adversely impact food system clustering representations and decision‐making dynamics in ML applications for the production and governance and policy life cycle stages. Previous work has found that AI/ML outcomes can be skewed in the forms of training DB and transfer context bias given that marginalized human populations tend to be underrepresented in new sources of digital data (Galaz et al., 2021). Another work suggests that US Farm Bill agricultural loan projections can introduce specification bias when attempting to account for uncertainty in economic relationships (Batarseh et al., 2021).

The consequences that the SDB category may yield are wide ranging. For the food system, this category includes examples that center on omission and failure to include explicit bias mitigation strategies. ‘Explicit’, in this context, means employing strategies and tools that clearly articulate and are direct in their intent to address unfair bias or inequity in ML applications, as opposed to tools that leave such intent implicit.

Attaining and reporting on ML results are only part of what needs to be performed when using these applications. Trust in ML can be adversely affected when fairness, explainability, and security measures are lackluster (Choraś et al., 2020). Table 1 helps to identify issues in this regard so that appropriate tools can be employed to address these matters. Some specific tools and strategies to help further conceptualize and address unfair bias in ML are explained in the following subsection.

3.1. Specific concepts, tools, and strategies

The steady increase in the use of AI/ML brings the need to address the concept of uncertainty (Hüllermeier & Waegeman, 2021). Uncertainty relates to the standard probabilities and probabilistic scenarios fundamental to deriving predictions and patterns from data. Biases in data and algorithms can introduce uncertainty in ML, which can make it difficult to distinguish actual patterns in the data from the effects of bias (Mehrabi et al., 2021). In the context of sampling, bias relates not only to the type of bias that leads to discriminatory or unfair decisions but also to the discrepancy that arises when the data do not accurately represent the true population or the distribution that a model is learning from. For example, ML accuracy and identification can be undermined when the statistical properties that determine a sample class change over time, a phenomenon called concept drift (Fernando & Komninos, 2024; Palli et al., 2024). Addressing unfair bias is, thus, crucial in reducing AI/ML uncertainty and improving the accuracy of predictions and inferences (Barredo Arrieta et al., 2020; Kaur et al., 2023; Ntoutsi et al., 2020).
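The idea of monitoring statistical properties over time can be sketched as follows. This minimal example assumes drift manifests as a shift in a feature's mean between a reference window and a recent window; production systems typically rely on dedicated detectors (e.g., DDM or ADWIN), and the function name, threshold, and data are our own illustrative assumptions.

```python
# Minimal concept-drift check: flag drift when the recent window's mean
# moves more than k standard errors away from the reference mean.
import statistics

def mean_shift_drift(reference, recent, k=3.0):
    """Return True if `recent` appears to have drifted from `reference`."""
    ref_mean = statistics.mean(reference)
    ref_sd = statistics.stdev(reference)
    standard_error = ref_sd / len(reference) ** 0.5
    return abs(statistics.mean(recent) - ref_mean) > k * standard_error

# Hypothetical sensor-style readings for illustration only.
reference = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.3, 10.1]
stable    = [10.0, 10.1, 9.9, 10.2]
shifted   = [12.5, 12.8, 12.6, 12.7]
print(mean_shift_drift(reference, stable))   # False
print(mean_shift_drift(reference, shifted))  # True
```

Even a crude monitor like this makes the point of the surrounding text: when the statistical properties of a class change, a model trained on the reference distribution can quietly lose accuracy unless the change is detected.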

There are mainly two types of uncertainty that should be addressed when developing ML solutions: aleatoric and epistemic uncertainty (Hüllermeier & Waegeman, 2021). Both types are important to ML decision‐making, as they can impact the reliability of an ML model's output and inform decisions about how to improve the model or associated data.

The first type of uncertainty—aleatoric uncertainty—is often referred to as data uncertainty, and it represents the inherent variability or randomness in the data itself. It arises from factors such as measurement errors, natural variations, or intrinsic uncertainty as a fundamental property of the data. For instance, in weather predictions, uncertainties can come from random variation in temperature due to chaotic atmospheric processes. In the case of a model tailored to normal temperature conditions, this aleatoric uncertainty can interfere with the decision criteria, distorting confidence in the outcome and leading to inaccurate predictions.

Bias in the data can potentially amplify and interact with aleatoric uncertainties, especially when it affects the distribution and variability of data. In addition, some ML algorithms have inherent biases in the way they process data (Mehrabi et al., 2021). These biases can be systematic and affect the model's ability to understand the underlying data distribution and variability and to capture the full range of expected outcomes. In other words, in such cases, the model consistently favors certain outcomes, and its predictions become less reliable.

Epistemic uncertainty—the second type of uncertainty—arises from incomplete or imperfect information in data and inappropriate choice of algorithms. It is the type of uncertainty that can be reduced as you gather additional data and make improvements in the model. When the data are biased or incomplete, they might not contain enough variability and representative examples for the model to understand the underlying patterns and distribution of the data, leading to models with epistemic uncertainty. Bias in algorithms can also contribute to epistemic uncertainty when it poses challenges to the model's understanding of the data attributes. It could be that the model predictions are inaccurate because either the model choice is inappropriate or the data do not meet quality and quantity requirements.
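One common way to separate these two uncertainty types is with an ensemble of models, each predicting a mean and a variance: the average of the member variances approximates aleatoric (data) uncertainty, while the spread of the member means approximates epistemic (model) uncertainty. The sketch below uses invented toy numbers; it is an illustration of the decomposition idea, not a method from the paper.

```python
# Hedged illustration: ensemble-based decomposition of uncertainty for
# a regression setting. Each ensemble member predicts (mean, variance).
import statistics

def decompose_uncertainty(member_means, member_vars):
    aleatoric = statistics.mean(member_vars)        # avg predicted noise
    epistemic = statistics.pvariance(member_means)  # member disagreement
    return aleatoric, epistemic

# Members agree closely, but each predicts noisy data:
a1, e1 = decompose_uncertainty([5.0, 5.1, 4.9], [2.0, 2.1, 1.9])
print(a1, e1)  # high aleatoric, low epistemic

# Members disagree strongly on nearly noise-free data:
a2, e2 = decompose_uncertainty([3.0, 7.0, 11.0], [0.1, 0.1, 0.1])
print(a2, e2)  # low aleatoric, high epistemic
```

The second case mirrors the text above: member disagreement signals epistemic uncertainty, the component that gathering more or better data can reduce, whereas the first case reflects irreducible noise in the data itself.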

Uncertainty is a prevalent aspect of ML, and addressing it requires distinct approaches to modeling based on various contexts and types of learning problems. A reliable representation of uncertainty is desirable and should be included in any ML application to allow for adequate decision‐making, especially in domain‐sensitive or safety‐critical applications such as medical, justice, or social‐oriented systems (Proceedings of the Conference on Fairness, Accountability & Transparency, 2019; De‐Arteaga et al., 2019; Ganz et al., 2021; Hüllermeier & Waegeman, 2021).

Next, we focus on supervised ML approaches to help illustrate some of the strategies that handle uncertainty and approaches to reduce it when exacerbated by bias. In general terms, supervised learning involves learning from labeled data to make predictions or classifications, whereas unsupervised learning entails discovering patterns and structures in unlabeled data. Supervised approaches have model and data aspects that need to be addressed to handle unfair bias (Proceedings of the Conference on Fairness, Accountability & Transparency, 2019; Barocas & Selbst, 2016). Data in this setting consist of annotated examples, organized around a set of classes or groups, for training and validating the model. Data collection should yield a representative dataset that reflects the characteristics and distribution of the target classes, which helps mitigate bias.

Associated data annotation procedures should follow guidelines that include these three factors: (1) the development of a manual for annotator training to allow for a proper understanding of the data and domain, which ensures consistency and quality; (2) proper metrics for annotator agreement and bias; and (3) a sufficient number of annotators to account for the variability in human judgment. These guidelines help to reduce the impact of individual annotator biases and make the dataset more robust and representative. Models trained on representative data are also more likely to generalize well to new and unseen data, which improves robustness to adversarial examples and, thus, reliability. Taken together, these approaches and guidance can help enhance the quality and reliability of the annotated data, which in turn leads to better learning and more accurate results in supervised ML settings.
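Factor (2) above can be made concrete with a standard agreement metric. The sketch below computes Cohen's kappa for two annotators, which corrects raw agreement for agreement expected by chance; the labels are hypothetical annotation examples, and multi-annotator settings would typically use Fleiss' kappa or Krippendorff's alpha instead.

```python
# Pairwise annotator agreement via Cohen's kappa:
# kappa = (observed agreement - chance agreement) / (1 - chance agreement)
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    # Chance agreement from the two annotators' marginal label frequencies.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical labels from two annotators on eight items.
ann1 = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "neg"]
ann2 = ["pos", "pos", "neg", "pos", "pos", "neg", "pos", "neg"]
print(round(cohens_kappa(ann1, ann2), 3))  # 0.75
```

Low kappa values on a pilot batch would suggest revisiting the annotation manual or recruiting additional annotators before full-scale labeling, consistent with the guidelines above.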

The choice of model type and proper metrics for model evaluation and bias quantification also plays a major role in developing ML applications (Czarnowska et al., 2021; Hardt et al., 2016; Hastie et al., 2009). In general, ensuring representativity, reproducibility, and transparency is important for addressing fairness and ethical matters. An algorithm design or DB can be responsible for discriminatory outcomes that reproduce or even magnify patterns of discrimination (Proceedings of the Conference on Fairness, Accountability, & Transparency, 2019; Barocas & Selbst, 2016). This may result in discrimination that reinforces and exacerbates existing inequity. That is, the human beliefs, biases, values, and assumptions involved in the data selection and training process can be propagated or compounded through feedback loops if not systematically addressed.

For these reasons, the capacity of ML to yield fair outcomes has been extensively studied (Chen et al., 2018; Chouldechova & Roth, 2018; Hardt et al., 2016; Lo Piano, 2020). As automated decision‐making systems become increasingly normalized, it is crucial that they are adopted in a transparent, fair, and accountable manner (Rudin, 2019; Selbst & Barocas, 2018). Unfair outcomes can arise from both models and data, so mechanisms are needed to circumvent the potential harm this poses to decision making when using ML. Transparency in data collection, curation, and handling processes is necessary to allow better accountability of uncertainties. Where transparency, representativity, and reproducibility are not primary considerations, decision making can be ineffective and may lead to negative consequences and fairness issues. In Table 2, we build upon these concepts and tools to provide effective strategies for addressing unfair bias in ML.

TABLE 2.

Sources and mitigation strategies for addressing unfair machine learning (ML) bias.

DB (data bias)
Sources: Data contain pre‐existing societal, cultural, or historical (inherited) biases; data do not represent the true distribution of the population the model is intended to learn from; errors or limitations in data collection methods.
Select mitigation strategies: Careful data collection, sampling, and curation; data augmentation techniques; re‐sampling or re‐weighting of data; diverse and inclusive training data; ensuring balanced examples across classes and training for label annotation in supervised learning settings. Revisit data collection and evaluate the model carefully to understand the limitations of the data and to account for any uncertainty or errors found during model selection and evaluation.

AB (algorithmic bias)
Source: The design and algorithms used in ML.
Select mitigation strategies: Develop fair and unbiased algorithms and frameworks; regularly audit and evaluate model outputs for bias and uncertainties; implement algorithmic fairness techniques and metrics to account for any inequity or uncertainty exacerbated by bias.

SDB (selection and deployment bias)
Sources: Model selection, choice of evaluation metrics, and deployment issues such as user interface bias; human prejudices or stereotypes that affect model design and training and propagate to model results.
Select mitigation strategies: Choose appropriate evaluation metrics and validate outcomes with proper robustness tests; consider the bias‐variance trade‐off when selecting models and avoid overfitting. Promote diversity and inclusivity in ML teams, with diversity and inclusion guidelines for model development and decision making; apply robust ethical principles and guidelines with regular assessments of the impact of ML on users; adjust system behavior based on feedback and fairness assessments. Ensure transparent and interpretable frameworks and decision‐making processes; regularly update and retrain models when fresher, more recent data become available; monitor model performance and run robustness checks to account for any uncertainties in model performance.
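
One of the data-bias mitigations listed in Table 2, re‐weighting of data, can be sketched as inverse group-frequency weighting: each example receives a weight such that every group carries equal total weight during training (the function name and example groups are ours; many ML libraries accept such weights through a sample-weight style argument):

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Per-example weights so each group contributes equally in training:
    over-represented groups are down-weighted and under-represented
    groups up-weighted, instead of discarding data outright."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

# Hypothetical: group "A" has three examples, group "B" only one.
weights = inverse_frequency_weights(["A", "A", "A", "B"])
# Each "A" example gets weight 2/3 and the "B" example weight 2, so both
# groups carry the same total weight.
```

Re-sampling (duplicating or dropping examples) achieves a similar effect; weighting is often preferred because it keeps every collected example in the training set.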

4. OTHER IE RESEARCH AND DESIGN DOMAIN IMPLICATIONS

It has been established that ML inequity has implications not only for computational outputs but also for the real‐world environments of affected stakeholders, through decision making and socioecological impact. So far, we have largely used the food system to frame the related tools and guidance. In this section, we highlight other IE research and design domains that can be implicated, namely living labs and circularity.

4.1. Living labs

Living labs allow for the deployment of open innovation through coproduction and user‐centric mechanisms (Nyström et al., 2014). These labs support the development of sustainable technologies and services and their testing in real‐world environments (Bulkeley & Castán Broto, 2013; Evans et al., 2016). Living labs provide solutions for complex sustainability challenges spanning technological, social, ecological, and environmental aspects (Díaz et al., 2019; Köhler et al., 2019), and they do this by leveraging the skills and experiences of the community, other stakeholders, and convergent science (Kiemen & Ballon, 2012; Leal Filho et al., 2023). These labs operate by adopting five principles of innovation (i.e., openness, influence, sustainability, realism, and value), although openness may be limited or prohibited in some cases where intellectual property rights apply or personal data must be protected (Bergvall‐Kåreborn et al., 2009).

In data‐driven and AI/ML applications, living labs have made effective use of ML. This is especially the case in data mining, where living lab stakeholders seek to understand how innovation is trending, from research activities and the establishment of conceptual foci (e.g., ecosystems, cities, universities, and users) to practical applications in design and management (Westerlund et al., 2018). Some of these applications are presented in Table 3.

TABLE 3.

Select living lab artificial intelligence/machine learning (AI/ML) applications by research discipline.

Transportation: UbiGo smart mobility app (Fluidtime, 2022; Marvin et al., 2018; Menny et al., 2018)
Health and childcare: Safety system for the elderly and childcare (Nishdia et al., 2017); chronic care management (Burbridge et al., 2017; Lee et al., 2011); mental health improvement for young individuals (Rauschenberg et al., 2021)
Water and solid waste management: Water saving study in Ireland (Davies, 2018)
Housing and infrastructure: Environmental pollutant monitoring (Nesti, 2018; WaagFutureLab, 2021); energy monitoring (Andresen et al., 2007; Ståhlbröst & Holst, 2013)
Agriculture and forestry: Tool for rural agriculture development (CORDIS, 2012; Mabrouki et al., 2010); enhancing farming practices (Banson et al., 2016; ILVO, 2023; McPhee et al., 2021)
Tourism and others: A single tool with information on tourism, parking, noise, environment, waste, and safety (Shin, 2019; USIgnite, 2023)

Living labs are themselves exemplary spaces for testing the efficacy of strategies to address unfair bias and inequity in ML, as is currently done for new technologies such as indoor environmental monitoring systems (Kim et al., 2022). Living labs reflect complex micro‐physical and social systems that allow us to understand real‐world interactions between innovation and users (Huang & Thomas, 2021). We posit that living lab experiments should be consciously designed both for proof of concept and for the optimization of tools and strategies that address unfair bias in AI/ML. Many other AI/ML applications have been and will be used in living labs given their capacity to investigate a wide range of subject matter. It follows that AI/ML inequity and bias can emerge in living labs if equity‐centered approaches are not employed.

4.2. Circularity

Traditional economies are linear in structure, following a take–make–dispose paradigm, whereas the circular economy—a primary aspect of circularity—shifts toward a recycle–make–repurpose model (Bozeman III, Chopra et al., 2022; Calisto Friant et al., 2020; Fullerton et al., 2022). Comprehensive circular economy research and design requires meaningful and convergent contributions from IE, community, and practitioners, among other stakeholders. Addressing circular economy challenges relies on understanding and investigating supply chains, values, energy transitions, waste management, and sustainable transport, whether explicitly stated or not (Bozeman III, Chopra et al., 2022; Heffron & McCauley, 2014; Kontar et al., 2021; Ramshankar et al., 2023). AI/ML applications have been employed to yield insights in this regard. For example, biomass energy conversion—a form of energy transition—was investigated using predictive neural network modeling to satisfy Industry 4.0 principles and to help address circular economy challenges (Sakiewicz et al., 2020). Digital technologies—such as AI/ML or digital twins—are anticipated to play a significant role in transitioning to a circular economy (Bozeman III, Chopra et al., 2022; Pagoropoulos et al., 2017). Taking these factors together, it is apparent that reducing AI/ML bias must be integrated into these proceedings if we are to move toward systemic equity.

Additionally, while interest in the circular economy has increased significantly, research has largely focused on technology and products, and human‐centered design and community‐based participatory research are missing as key drivers (Ali et al., 2008; Balazs & Morello‐Frosch, 2013). Communities, which often bear the socioecological burden of waste impacts, are interested in implementing circular economy strategies.

Each community offers unique opportunities and challenges, yet approaches that integrate community‐centered design around the circular economy, drawing on quantitative, qualitative, and community‐level data while providing actionable guidance on circular strategies, are missing. In the context of the current study, ownership and curation of data are also central elements of the conversation. The circularity assessment protocol (CAP) was thus developed and, at the time of this writing, has been implemented in over 50 cities and 20 countries (Maddalene et al., 2023). CAP is a standardized assessment protocol that informs decision makers by collecting community‐level data on material usage and management. It consists of seven spokes: input, community, material and product design, use, collection, end of cycle, and leakage. At the center, the system is driven by policy, economics, and governance, with key influencers including non‐governmental organizations, industry, and government. CAP is currently being expanded to converge circularity across four categories (i.e., molecules, plastics, organic materials, and the built environment), while further integrating social equity, data accessibility, and community‐wide training. Overall, CAP aims to bring people together through collaborative data collection and analysis across transdisciplinary stakeholder groups and domains. Taken together, we posit that the ML tools, approaches, and strategies described in the current study have utility in these IE domains and beyond.
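
For illustration only, the seven CAP spokes named above could be encoded as a simple completeness check on community‐level assessment data; the dictionary shape and the 0-to-1 score range are our assumptions, not part of the published protocol (Maddalene et al., 2023):

```python
# The seven CAP spokes named in the text. The dict-of-scores shape and
# the 0-to-1 score range are illustrative assumptions.
CAP_SPOKES = ("input", "community", "material and product design", "use",
              "collection", "end of cycle", "leakage")

def validate_cap_scores(scores):
    """Check that an assessment covers every spoke with a score in [0, 1];
    raise ValueError listing what is missing or out of range."""
    missing = [s for s in CAP_SPOKES if s not in scores]
    out_of_range = [s for s, v in scores.items() if not 0.0 <= v <= 1.0]
    if missing or out_of_range:
        raise ValueError(f"missing spokes: {missing}; "
                         f"scores outside [0, 1]: {out_of_range}")
    return dict(scores)
```

A check of this kind reflects the paper's broader point: standardized, complete community‐level data collection is a precondition for equitable downstream analysis.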

5. CONCLUSION AND FUTURE DIRECTIONS

In the current study, we provided a framework for understanding inequity from a more holistic point of view (i.e., the systemic equity framework) and offered insights by presenting examples of socioecological inequities that occur across the food system life cycle. We also provided tools for addressing unfair ML bias and inequity in IE applications: an eight‐component, three‐category questionnaire to preliminarily identify bias and inequity in ML (Table 1) and mitigation strategies for addressing them (Table 2). We then concluded with an overview of ways that bias and inequity in AI/ML implicate other research and design domains, to inspire future research directions (Section 4). On this latter point, we encourage future researchers and designers to adapt the critical review and framing approach used in the current study to further explore socioecological inequity in the affected domains highlighted herein (i.e., living labs and circularity) and beyond. In conclusion, addressing unfair bias and inequity in ML requires understanding socioecological inequity and embedding systemic equity throughout.

CONFLICT OF INTEREST STATEMENT

The authors declare no conflict of interest.

ACKNOWLEDGMENTS

During the 2023 International Society for Industrial Ecology conference, a special session titled "Experiences and impacts of user‐centric research that can lead to much‐needed transition" was held. The current study was inspired, in part, by the content facilitated and presented there by Shalini Nakkasunchi and Drs. Oliver Heidrich, Joe F. Bozeman III, Melissa Bilec, Andrea Hicks, Vimi Dookhun, and Darren McCauley. We thank the Georgia Institute of Technology's Renewable Bioproduct Institute, the National Science Foundation (Grant #: 2236080), and the United Kingdom Ministry of Defence's Defence Innovation Fund Top‐Level Budget Ideas Scheme (61182036) for their financial support, which helped to make the current study possible.

Bozeman III, J. F. , Hollauer, C. , Ramshankar, A. T. , Nakkasunchi, S. , Jambeck, J. , Hicks, A. , Bilec, M. , McCauley, D. , & Heidrich, O. (2024). Embed systemic equity throughout industrial ecology applications: How to address machine learning unfairness and bias. Journal of Industrial Ecology, 28, 1362–1376. 10.1111/jiec.13509

Editor Managing Review: Göran Finnveden

DATA AVAILABILITY STATEMENT

Data sharing is not applicable—no new data were generated.

REFERENCES

  1. Abràmoff, M. D. , Tarver, M. E. , Loyo‐Berrios, N. , Trujillo, S. , Char, D. , Obermeyer, Z. , Eydelman, M. B. , & Maisel, W. H. (2023). Considerations for addressing bias in artificial intelligence for health equity. NPJ Digital Medicine, 6(1), 170–170. 10.1038/s41746-023-00913-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Ahola, A. J. , Freese, R. , Mäkimattila, S. , Forsblom, C. , & Groop, P. H. (2016). Dietary patterns are associated with various vascular health markers and complications in type 1 diabetes. Journal of Diabetes and its Complications, 30(6), 1144–1150. 10.1016/j.jdiacomp.2016.03.028 [DOI] [PubMed] [Google Scholar]
  3. Ali, R. , Olden, K. , & Xu, S. (2008). Community‐Based participatory research: A vehicle to promote public engagement for environmental health in China. Environmental Health Perspectives, 116(10), 1281–1284. 10.1289/ehp.11399 [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Alkon, A. H. , Block, D. , Moore, K. , Gillis, C. , Dinuccio, N. , & Chavez, N. (2013). Foodways of the urban poor. Geoforum, 48, 126–135. 10.1016/j.geoforum.2013.04.021 [DOI] [Google Scholar]
  5. Andresen, S. H. , Krogstie, J. , & Jelle, T. (2007). Lab and research activities at Wireless Trondheim. ISWCS, 7, 385–389. [Google Scholar]
  6. Anjaria, P. , & Vaghela, S. (2024). Toxicity of agrochemicals: Impact on environment and human health. Journal of Toxicological Studies, 2(1), 250. 10.59400/jts.v2i1.250 [DOI] [Google Scholar]
  7. Awan, U. , Shamim, S. , Khan, Z. , Zia, N. U. , Shariq, S. M. , & Khan, M. N. (2021). Big data analytics capability and decision‐making: The role of data‐driven insight on circular economy performance. Technological Forecasting & Social Change, 168, 120766. 10.1016/j.techfore.2021.120766 [DOI] [Google Scholar]
  8. Balazs, C. L. , & Morello‐Frosch, R. (2013). The three R's: How community based participatory research strengthens the rigor, relevance and reach of science. Environmental Justice, 6(1), 9–16. 10.1089/env.2012.0017 [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Banerjee, T. , Crews, D. C. , Wesson, D. E. , Dharmarajan, S. , Saran, R. , Ríos Burrows, N. , Saydah, S. , Powe, N. R. , Powe, N. R. , Banerjee, T. , Hsu, C. Y. , Bibbins‐Domingo, K. , Mcculloch, C. , Crews, D. , Grubbs, V. , Peralta, C. , Shlipak, M. , Rubinsky, A. , Hsu, R. , … Waller, L. (2017). Food insecurity, CKD, and subsequent ESRD in US adults. American Journal of Kidney Diseases, 70(1), 38–47. 10.1053/j.ajkd.2016.10.035 [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Banson, K. E. , Nguyen, N. C. , & Bosch, O. J. H. (2016). Systemic management to address the challenges facing the performance of agriculture in Africa: Case study in Ghana. Systems Research and Behavioral Science, 33(4), 544–574. [Google Scholar]
  11. Barocas, S. , & Selbst, A. D. (2016). Big data's disparate impact. California Law Review, 104(3), 671–732. 10.15779/Z38BG31 [DOI] [Google Scholar]
  12. Barredo Arrieta, A. , Díaz‐Rodríguez, N. , Del Ser, J. , Bennetot, A. , Tabik, S. , Barbado, A. , Garcia, S. , Gil‐Lopez, S. , Molina, D. , Benjamins, R. , Chatila, R. , & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. 10.1016/j.inffus.2019.12.012 [DOI] [Google Scholar]
  13. Basso, B. , & Antle, J. (2020). Digital agriculture to design sustainable agricultural systems. Nature Sustainability, 3(4), 254–256. 10.1038/s41893-020-0510-0 [DOI] [Google Scholar]
  14. Batarseh, F. A. , Gopinath, M. , Monken, A. , & Gu, Z. (2021). Public policymaking for international agricultural trade using association rules and ensemble machine learning. Machine Learning with Applications, 5, 100046. 10.1016/j.mlwa.2021.100046 [DOI] [Google Scholar]
  15. Baudry, M. , & Robert, C. Y. (2019). A machine learning approach for individual claims reserving in insurance. Applied Stochastic Models in Business and Industry, 35(5), 1127–1155. 10.1002/asmb.2455 [DOI] [Google Scholar]
  16. Bergvall‐Kåreborn, B. , Eriksson, C. I. , Ståhlbröst, A. , & Svensson, J. (2009). A milieu for innovation: defining living labs. In ISPIM Innovation Symposium: 06/12/2009‐09/12/2009.
  17. Berk, R. , & Hyatt, J. (2015). Machine learning forecasts of risk to inform sentencing decisions. Federal Sentencing Reporter, 27(4), 222–228. 10.1525/fsr.2015.27.4.222 [DOI] [Google Scholar]
  18. Bozeman, J. F. (2024). Bolstering integrity in environmental data science and machine learning requires understanding socioecological inequity. Frontiers of Environmental Science & Engineering, 18(5), 65. 10.1007/s11783-024-1825-2 [DOI] [Google Scholar]
  19. Bozeman, J. F. , Ashton, W. S. , & Theis, T. L. (2019). Distinguishing environmental impacts of household food‐spending patterns among U.S. demographic groups. Environmental Engineering Science, 36(7), 763–777. 10.1089/ees.2018.0433 [DOI] [Google Scholar]
  20. Bozeman, J. F. , Bozeman, R. , & Theis, T. L. (2020). Overcoming climate change adaptation barriers: A study on food–energy–water impacts of the average American diet by demographic group. Journal of Industrial Ecology, 24(2), 383–399. 10.1111/jiec.12859 [DOI] [Google Scholar]
  21. Bozeman, J. F. , Chopra, S. S. , James, P. , Muhammad, S. , Cai, H. , Tong, K. , Carrasquillo, M. , Rickenbacker, H. , Nock, D. , Ashton, W. , Heidrich, O. , Derrible, S. , & Bilec, M. (2022). Three research priorities for just and sustainable urban systems: Now is the time to refocus. Journal of Industrial Ecology, 27(2), 382–394. 10.1111/jiec.13360 [DOI] [Google Scholar]
  22. Bozeman, J. F. , Nobler, E. , & Nock, D. (2022). A path toward systemic equity in life cycle assessment and decision‐making: Standardizing sociodemographic data practices. Environmental Engineering Science, 39, 759–769. 10.1089/ees.2021.0375 [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Bulkeley, H. , & Castán Broto, V. (2013). Government by experiment? Global cities and the governing of climate change. Transactions of the Institute of British Geographers, 38(3), 361–375. [Google Scholar]
  24. Buolamwini, J. , & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of machine learning research: Vol. 81. Conference on fairness, accountability and transparency (pp. 77–91). PMLR.
  25. Burbridge, M. , Morrison, G. M. , van Rijn, M. , Silvester, S. , Keyson, D. V. , Virdee, L. , Baedeker, C. , & Liedtke, C. (2017). Business models for sustainability in living labs. In Living labs (pp. 391–403). Springer. [Google Scholar]
  26. Busch, G. , & Spiller, A. (2016). Farmer share and fair distribution in food chains from a consumer's perspective. Journal of Economic Psychology, 55, 149–158. 10.1016/j.joep.2016.03.007 [DOI] [Google Scholar]
  27. Calisto Friant, M. , Vermeulen, W. J. V. , & Salomone, R. (2020). A typology of circular economy discourses: Navigating the diverse visions of a contested paradigm. Resources, Conservation and Recycling, 161, 104917. 10.1016/j.resconrec.2020.104917 [DOI] [Google Scholar]
  28. Caton, S. , & Haas, C. (2023). Fairness in Machine Learning: A Survey. ACM Computing Surveys, 56, 1–38. 10.1145/3616865 [DOI] [Google Scholar]
  29. Celis, L. E. , Mehrotra, A. , & Vishnoi, N. K. (2021). Fair classification with adversarial perturbations. arXiv.org, 10.48550/arxiv.2106.05964 [DOI] [Google Scholar]
  30. Chen, I. , Johansson, F. D. , & Sontag, D. (2018). Why is my classifier discriminatory? arXiv.org, 10.48550/arxiv.1805.12002 [DOI] [Google Scholar]
  31. Chen, M. , Hao, Y. , Hwang, K. , Wang, L. , & Wang, L. (2017). Disease prediction by machine learning over big data from healthcare communities. IEEE Access, 5, 8869–8879. 10.1109/ACCESS.2017.2694446 [DOI] [Google Scholar]
  32. Chiril, P. , Moriceau, V. , Benamara, F. , Mari, A. , Origgi, G. , & Coulomb‐Gully, M. (2020). An annotated corpus for sexism detection in French tweets. In Proceedings of the Twelfth Language Resources and Evaluation Conference (pp. 1397–1403). European Language Resources Association. [Google Scholar]
  33. Choraś, M. , Pawlicki, M. , Puchalski, D. , & Kozik, R. (2020). Machine learning—The results are not the only thing that matters! What about security, explainability and fairness? In Computational science—ICCS 2020 (pp. 615–628). Springer International Publishing. 10.1007/978-3-030-50423-6_46 [DOI] [Google Scholar]
  34. Chouldechova, A. , & Roth, A. (2018). The frontiers of fairness in machine learning. arXiv.org, 10.48550/arxiv.1810.08810 [DOI] [Google Scholar]
  35. Clapp, J. , & Ruder, S. L. (2020). Precision technologies for agriculture: Digital farming, gene‐edited crops, and the politics of sustainability. Global Environmental Politics, 20(3), 49–69. 10.1162/glep_a_00566 [DOI] [Google Scholar]
  36. CORDIS . (2012). Collaboration@Rural : A collaborative platform for working and living in rural areas.
  37. Czarnowska, P. , Vyas, Y. , & Shah, K. (2021). Quantifying social biases in NLP: A generalization and empirical comparison of extrinsic fairness metrics. Transactions of the Association for Computational Linguistics, 9, 1249–1267. 10.1162/tacl_a_00425 [DOI] [Google Scholar]
  38. Davies, A. (2018). HomeLabs: Domestic living laboratories under conditions of austerity. In Urban living labs (pp. 126–146). Routledge. [Google Scholar]
  39. De‐Arteaga, M. , Romanov, A. , Wallach, H. , Chayes, J. , Borgs, C. , Chouldechova, A. , Geyik, S. , Kenthapadi, K. , & Kalai, A. T. (2019). Bias in bios: A case study of semantic representation bias in a high‐stakes setting. In Proceedings of the conference on fairness, accountability, and transparency (pp. 120–128). Association for Computing Machinery. 10.1145/3287560.3287572 [DOI] [Google Scholar]
  40. Díaz, S. M. , Settele, J. , Brondízio, E. , Ngo, H. , Guèze, M. , Agard, J. , & Butchart, S. (2019). The global assessment report on biodiversity and ecosystem services: Summary for policy makers.
  41. Diyasa, G. S. M. , Fauzi, A. , Idhom, M. , & Setiawan, A. (2021). Multi‐face recognition for the detection of prisoners in jail using a modified cascade classifier and CNN. Journal of Physics. Conference Series, 1844(1), 012005. 10.1088/1742-6596/1844/1/012005 [DOI] [Google Scholar]
  42. Dou, Z. , Ferguson, J. D. , Galligan, D. T. , Kelly, A. M. , Finn, S. M. , & Giegengack, R. (2016). Assessing U.S. food wastage and opportunities for reduction. Global Food Security, 8, 19–26. 10.1016/j.gfs.2016.02.001 [DOI] [Google Scholar]
  43. Evans, J. , Karvonen, A. , & Raven, R. (2016). The experimental city: New modes and prospects of urban transformation. In The experimental city (pp. 1–12). Routledge. [Google Scholar]
  44. Eyles, H. , Ni Mhurchu, C. , Nghiem, N. , & Blakely, T. (2012). Food pricing strategies, population diets, and non‐communicable disease: A systematic review of simulation studies. PLoS Medicine, 9(12), e1001353–e1001353. 10.1371/journal.pmed.1001353 [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Fatima, M. , & Pasha, M. (2017). Survey of machine learning algorithms for disease diagnostic. Journal of Intelligent Learning Systems and Applications, 09(1), 1–16. 10.4236/jilsa.2017.91001 [DOI] [Google Scholar]
  46. Fernando, D. W. , & Komninos, N. (2024). FeSAD ransomware detection framework with machine learning using adaption to concept drift. Computers & Security, 137, 103629. 10.1016/j.cose.2023.103629 [DOI] [Google Scholar]
  47. Figueroa, R. M. , & Waitt, G. (2010). Climb: Restorative justice, environmental heritage, and the moral terrains of Uluṟu‐Kata Tjuṯa National Park. Environmental Philosophy, 7(2), 135–163. 10.5840/envirophil20107219 [DOI] [Google Scholar]
  48. Flies, E. J. , Brook, B. W. , Blomqvist, L. , & Buettel, J. C. (2018). Forecasting future global food demand: A systematic review and meta‐analysis of model complexity. Environment International, 120, 93–103. 10.1016/j.envint.2018.07.019 [DOI] [PubMed] [Google Scholar]
  49. Fluidtime . (2022). The MaaS project UbiGo became the starting point for the new mobility paradigm that is becoming a reality in cities and regions around the world. Fluidtime. https://www.fluidtime.com/en/ubigo/
  50. Forbord, M. , & Vik, J. (2017). Food, farmers, and the future: Investigating prospects of increased food production within a national context. Land Use Policy, 67, 546–557. 10.1016/j.landusepol.2017.06.031 [DOI] [Google Scholar]
  51. Fullerton, D. , Babbitt, C. W. , Bilec, M. M. , He, S. , Isenhour, C. , Khanna, V. , Lee, E. , & Theis, T. L. (2022). Introducing the circular economy to economists. Annual Review of Resource Economics, 14(1), 493–514. 10.1146/annurev-resource-101321-053659 [DOI] [Google Scholar]
  52. Galaz, V. , Centeno, M. A. , Callahan, P. W. , Causevic, A. , Patterson, T. , Brass, I. , Baum, S. , Farber, D. , Fischer, J. , Garcia, D. , Mcphearson, T. , Jimenez, D. , King, B. , Larcey, P. , & Levy, K. (2021). Artificial intelligence, systemic risks, and sustainability. Technology in Society, 67, 101741. 10.1016/j.techsoc.2021.101741 [DOI] [Google Scholar]
  53. Ganz, M. , Holm, S. H. , & Feragen, A. (2021). Assessing bias in medical ai. Workshop on Interpretable ML in Healthcare at International Conference on Machine Learning (ICML).
  54. Gittelsohn, J. , Trude, A. C. B. , & Kim, H. (2017). Pricing strategies to encourage availability, purchase, and consumption of healthy foods and beverages: A systematic review. Preventing Chronic Disease, 14, E107. 10.5888/pcd14.170213 [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Grigorescu, S. , Trasnea, B. , Cocias, T. , & Macesanu, G. (2020). A survey of deep learning techniques for autonomous driving. Journal of Field Robotics, 37(3), 362–386. 10.1002/rob.21918 [DOI] [Google Scholar]
  56. Gustavsson, J. , Cederberg, C. , Sonesson, U. , van Otterdijk, R. , & Meybeck, A. (2011). Global food losses and food waste: Extent, causes and prevention . Save Food Congress.
  57. Hadi, R. , & Block, L. (2014). I take therefore I choose? The impact of active vs. passive acquisition on food consumption. Appetite, 80, 168–173. 10.1016/j.appet.2014.05.003 [DOI] [PubMed] [Google Scholar]
  58. Hardt, M. , Price, E. , & Srebro, N. (2016). Equality of opportunity in supervised learning. 10.48550/arxiv.1610.02413 [DOI]
  59. Hastie, T. , Tibshirani, R. , & Friedman, J. H. (2009). The elements of statistical learning data mining, inference, and prediction (2nd ed.). Springer. [Google Scholar]
  60. Heffron, R. J. , & Mccauley, D. (2014). Achieving sustainable supply chains through energy justice. Applied Energy, 123, 435–437. 10.1016/j.apenergy.2013.12.034 [DOI] [Google Scholar]
  61. Hong Huang, J. , & Thomas, E. (2021). A review of living lab research and methods for user involvement. Technology Innovation Management Review, 11(9/10), 88–107. 10.22215/TIMREVIEW/1467 [DOI] [Google Scholar]
  62. Huang, W. , Focker, M. , Van Dongen, K. C. W. , & Van der Fels‐Klerx, H. J. (2024). Factors influencing the fate of chemical food safety hazards in the terrestrial circular primary food production system—A comprehensive review. Comprehensive Reviews in Food Science and Food Safety, 23(2), e13324. [DOI] [PubMed] [Google Scholar]
  63. Hüllermeier, E. , & Waegeman, W. (2021). Aleatoric and epistemic uncertainty in machine learning: An introduction to concepts and methods. Machine learning, 110(3), 457–506. 10.1007/s10994-021-05946-3 [DOI] [Google Scholar]
  64. Illsley, B. , Jackson, T. , & Lynch, B. (2007). Promoting environmental justice through industrial symbiosis: Developing pelletised wood fuel to tackle Scottish rural fuel poverty. Progress in Industrial Ecology, An International Journal, 4(3‐4), 219–232. [Google Scholar]
  65. ILVO . (2023). ILVO Living Lab Agrifood Technology. https://ilvo.vlaanderen.be/en/living‐labs/living‐lab‐agrifood‐technology
  66. Jenkins, K. , McCauley, D. , Heffron, R. , Stephan, H. , & Rehner, R. (2016). Energy justice: A conceptual review. Energy Research & Social Science, 11, 174–182. 10.1016/j.erss.2015.10.004 [DOI] [Google Scholar]
  67. Kaur, D. , Uslu, S. , Rittichier, K. J. , & Durresi, A. (2023). Trustworthy artificial intelligence: A review. ACM Computing Surveys, 55(2), 1–38. 10.1145/3491209 [DOI] [Google Scholar]
  68. Kiemen, M. , & Ballon, P. (2012). Living labs & stigmergic prototyping: Towards a Convergent Approach [Conference presentation]. ISPIM Conference Proceedings, Barcelona, Spain.
  69. Kim, J. , Kim, S. , Bae, S. , Kim, M. , Cho, Y. , & Lee, K.‐I. (2022). Indoor environment monitoring system tested in a living lab. Building and Environment, 214, 108879. 10.1016/j.buildenv.2022.108879 [DOI] [Google Scholar]
  70. Kocak, E. , & Baglitas, H. H. (2022). The path to sustainable municipal solid waste management: Do human development, energy efficiency, and income inequality matter? Sustainable Development (Bradford, West Yorkshire, England), 30(6), 1947–1962. 10.1002/sd.2361 [DOI] [Google Scholar]
  71. Köhler, J. , Geels, F. W. , Kern, F. , Markard, J. , Onsongo, E. , Wieczorek, A. , Alkemade, F. , Avelino, F. , Bergek, A. , Boons, F. , Fünfschilling, L. , Hess, D. , Holtz, G. , Hyysalo, S. , Jenkins, K. , Kivimaa, P. , Martiskainen, M. , Mcmeekin, A. , Mühlemeier, M. S. , … Wells, P. (2019). An agenda for sustainability transitions research: State of the art and future directions. Environmental Innovation and Societal Transitions, 31, 1–32. [Google Scholar]
  72. Kohli, G. , Kaur, P. , & Bedi, J. (2021). Arguably at comma@ icon: Detection of multilingual aggressive, gender biased, and communally charged tweets using ensemble and fine‐tuned IndicBERT. In Proceedings of the 18th International Conference on Natural Language Processing: Shared Task on Multilingual Gender Biased and Communal Language Identification (pp. 46–52). NLP Association of India. [Google Scholar]
  73. Kontar, W. , Ahn, S. , & Hicks, A. (2021). Autonomous vehicle adoption: Use phase environmental implications. Environmental Research Letters, 16(6), 064010. 10.1088/1748-9326/abf6f4 [DOI] [Google Scholar]
  74. Leal Filho, W. , Ozuyar, P. G. , Dinis, M. A. P. , Azul, A. M. , Alvarez, M. G. , Da Silva Neiva, S. , Salvia, A. L. , Borsari, B. , Danila, A. , & Vasconcelos, C. R. (2023). Living labs in the context of the UN sustainable development goals: State of the art. Sustainability Science, 18(3), 1163–1179. 10.1007/s11625-022-01240-w [DOI] [Google Scholar]
  75. Lee, C. K. , Lee, J. , Lo, P. W. , Tang, H. L. , Hsiao, W. H. , Liu, J. Y. , & Lin, T. L (2011). Taiwan perspective: Developing smart living technology. International Journal of Automation and Smart Technology, 1(1), 93–106. [Google Scholar]
  76. Lee, M. S. A. , & Singh, J. (2021). Risk identification questionnaire for detecting unintended bias in the machine learning development life cycle. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (pp. 704–714). Association for Computing Machinery. 10.1145/3461702.3462572 [DOI]
  77. Leslie, I. S. , Wypler, J. , & Bell, M. M. (2019). Relational agriculture: Gender, sexuality, and sustainability in U.S. farming. Society & Natural Resources, 32(8), 853–874. 10.1080/08941920.2019.1610626 [DOI] [Google Scholar]
  78. Lin, B. , & Lei, X. (2015). Carbon emissions reduction in China's food industry. Energy Policy, 86, 483–492. 10.1016/j.enpol.2015.07.030 [DOI] [Google Scholar]
  79. Liu, Y. , Cheng, P. , & Hu, L. (2022). How do justice and top management beliefs matter in industrial symbiosis collaboration: An exploratory study from China. Journal of Industrial Ecology, 26(3), 891–906. 10.1111/jiec.13235 [DOI] [Google Scholar]
  80. Lo Piano, S. (2020). Ethical principles in machine learning and artificial intelligence: Cases from the field and possible ways forward. Humanities & Social Sciences Communications, 7(1), 1–7. 10.1057/s41599-020-0501-9 [DOI] [Google Scholar]
  81. Mabrouki, O. , Chibani, A. , Amirat, Y. , Fernandez, M. V. , & de la Cruz, M. N. (2010). Context‐aware collaborative platform in rural living labs. International Workshop on Cooperation and Interoperability, Architecture and Ontology. 10.1007/978-3-642-13048-9_5 [DOI]
  82. Maddalene, T. , Youngblood, K. , Abas, A. , Browder, K. , Cecchini, E. , Finder, S. , Gaidhani, S. , Handayani, W. , Hoang, N. X. , Jaiswal, K. , Martin, E. , Menon, S. , O'Brien, Q. , Roy, P. , Septiarani, B. , Trung, N. H. , Voltmer, C. , Werner, M. , Wong, R. , & Jambeck, J. R. (2023). Circularity in cities: A comparative tool to inform prevention of plastic pollution. Resources, Conservation and Recycling, 198, 107156. 10.1016/j.resconrec.2023.107156 [DOI] [Google Scholar]
  83. Marvin, S. , Bulkeley, H. , Mai, L. , McCormick, K. , & Palgan, Y. V. (2018). Urban living labs: Experimenting with city futures. Routledge. [Google Scholar]
  84. Matthews, H. S. , Hendrickson, C. T. , & Matthews, D. H. (2014). Life cycle assessment: Quantitative approaches for decisions that matter. Open Access Textbook. [Google Scholar]
  85. McCauley, D. , & Heffron, R. (2018). Just transition: Integrating climate, energy and environmental justice. Energy Policy, 119, 1–7. 10.1016/j.enpol.2018.04.014 [DOI] [Google Scholar]
  86. McPhee, C. , Bancerz, M. , Mambrini‐Doudet, M. , Chrétien, F. , Huyghe, C. , & Gracia‐Garza, J. (2021). The defining characteristics of agroecosystem living labs. Sustainability, 13(4), 1718. [Google Scholar]
  87. Mehrabi, N. , Morstatter, F. , Saxena, N. , Lerman, K. , & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1–35. 10.1145/3457607 [DOI] [Google Scholar]
  88. Mejía, G. , & García‐Díaz, C. (2018). Market‐level effects of firm‐level adaptation and intermediation in networked markets of fresh foods: A case study in Colombia. Agricultural Systems, 160, 132–142. 10.1016/j.agsy.2017.06.003 [DOI] [Google Scholar]
  89. Menny, M. , Palgan, Y. V. , & McCormick, K. (2018). Urban living labs and the role of users in co‐creation. GAIA‐Ecological Perspectives for Science and Society, 27(1), 68–77. [Google Scholar]
  90. Minguet, A. (2021). Environmental justice movements and restorative justice. The International Journal of Restorative Justice, 4(1), 60–80. 10.5553/TIJRJ.000067 [DOI] [Google Scholar]
  91. Mohamed, S. , Png, M. T. , & Isaac, W. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology, 33(4), 659–684. 10.1007/s13347-020-00405-8 [DOI] [Google Scholar]
  92. Monroe‐White, T. , & Lecy, J. (2022). The Wells‐Du Bois Protocol for machine learning bias: Building critical quantitative foundations for third sector scholarship. VOLUNTAS: International Journal of Voluntary and Nonprofit Organizations, 34, 170–184. 10.1007/s11266-022-00479-2 [DOI] [Google Scholar]
  93. Nascimento, A. M. , Vismari, L. F. , Molina, C. B. S. T. , Cugnasca, P. S. , Camargo, J. B. , Almeida, J. R. D. , Inam, R. , Fersman, E. , Marquezini, M. V. , & Hata, A. Y. (2020). A systematic literature review about the impact of artificial intelligence on autonomous vehicle safety. IEEE Transactions on Intelligent Transportation Systems, 21(12), 4928–4946. 10.1109/TITS.2019.2949915 [DOI] [Google Scholar]
  94. Nesti, G. (2018). Co‐production for innovation: The urban living lab experience. Policy and Society, 37(3), 310–325. [Google Scholar]
  95. Ng, C. , Cousins, I. T. , DeWitt, J. C. , Glüge, J. , Goldenman, G. , Herzke, D. , Lohmann, R. , Miller, M. , Patton, S. , Scheringer, M. , Trier, X. , & Wang, Z. (2021). Addressing urgent questions for PFAS in the 21st century. Environmental Science & Technology, 55(19), 12755–12765. 10.1021/acs.est.1c03386 [DOI] [PMC free article] [PubMed] [Google Scholar]
  96. Nishida, Y. , Kitamura, K. , Yamamoto, H. , Takahashi, Y. , & Mizoguchi, H. (2017). Living function resilient service using a mock living lab and real living labs: Development of balcony‐IoT and handrail‐IoT for healthcare. Procedia Computer Science, 113, 121–129. [Google Scholar]
  97. Ntoutsi, E. , Fafalios, P. , Gadiraju, U. , Iosifidis, V. , Nejdl, W. , Vidal, M.‐E. , Ruggieri, S. , Turini, F. , Papadopoulos, S. , Krasanakis, E. , Kompatsiaris, I. , Kinder‐Kurlanda, K. , Wagner, C. , Karimi, F. , Fernandez, M. , Alani, H. , Berendt, B. , Kruegel, T. , Heinze, C. , … Staab, S. (2020). Bias in data‐driven artificial intelligence systems—An introductory survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10(3), e1356. 10.1002/widm.1356 [DOI] [Google Scholar]
  98. Nyström, A. G. , Leminen, S. , Westerlund, M. , & Kortelainen, M. (2014). Actor roles and role patterns influencing innovation in living labs. Industrial Marketing Management, 43(3), 483–495. 10.1016/j.indmarman.2013.12.016 [DOI] [Google Scholar]
  99. Pagoropoulos, A. , Pigosso, D. C. A. , & McAloone, T. C. (2017). The emergent role of digital technologies in the circular economy: A review. Procedia CIRP, 64, 19–24. 10.1016/j.procir.2017.02.047 [DOI] [Google Scholar]
  100. Palli, A. S. , Jaafar, J. , Gilal, A. R. , Alsughayyir, A. , Gomes, H. M. , Alshanqiti, A. , & Omar, M. (2024). Online machine learning from non‐stationary data streams in the presence of concept drift and class imbalance: A systematic review. Journal of ICT, 23(1), 105–139. 10.32890/jict2024.23.1.5 [DOI] [Google Scholar]
  101. Poti, J. M. , Yoon, E. , Hollingsworth, B. , Ostrowski, J. , Wandell, J. , Miles, D. R. , & Popkin, B. M. (2017). Development of a food composition database to monitor changes in packaged foods and beverages. Journal of Food Composition and Analysis, 64, 18–26. 10.1016/j.jfca.2017.07.024 [DOI] [PMC free article] [PubMed] [Google Scholar]
  102. Powell, D. A. , Jacob, C. J. , & Chapman, B. J. (2011). Enhancing food safety culture to reduce rates of foodborne illness. Food Control, 22(6), 817–822. 10.1016/j.foodcont.2010.12.009 [DOI] [Google Scholar]
  103. Proceedings of the Conference on Fairness, Accountability, and Transparency . (2019). Atlanta, GA, USA. Association for Computing Machinery.
  104. Ramshankar, A. T. , Desai, A. G. , De La Villarmois, J. A. , & Bozeman III, J. F. (2023). Sustainability analysis of overhead cable line powered freight trucks: A life cycle impact and techno‐economic assessment toward transport electrification. Environmental Research: Infrastructure and Sustainability, 3(1), 015010. 10.1088/2634-4505/acc273 [DOI] [Google Scholar]
  105. Rauschenberg, C. , Goetzl, C. , Schick, A. , Koppe, G. , Durstewitz, D. , Krumm, S. , & Reininghaus, U. (2021). Living lab AI4U‐artificial intelligence for personalized digital mental health promotion and prevention in youth. European Journal of Public Health, 31(Supplement_3), ckab164.746. [Google Scholar]
  106. Rennie, J. D. , Shih, L. , Teevan, J. , & Karger, D. R. (2003). Tackling the poor assumptions of naive bayes text classifiers. In Proceedings of the 20th International Conference on Machine Learning (ICML‐03) (pp. 616–623). AAAI Press. [Google Scholar]
  107. Roccato, A. , Uyttendaele, M. , & Membré, J. M. (2017). Analysis of domestic refrigerator temperatures and home storage time distributions for shelf‐life studies and food safety risk assessment. Food Research International, 96, 171–181. 10.1016/j.foodres.2017.02.017 [DOI] [PubMed] [Google Scholar]
  108. Romero‐Lankao, P. , & Nobler, E. (2021). Energy justice: Key concepts and metrics relevant to EERE transportation projects. National Renewable Energy Laboratory. https://www.nrel.gov/docs/fy21osti/80206.pdf
  109. Rose‐Jacobs, R. , Black, M. M. , Casey, P. H. , Cook, J. T. , Cutts, D. B. , Chilton, M. , Heeren, T. , Levenson, S. M. , Meyers, A. F. , & Frank, D. A. (2008). Household food insecurity: Associations with at‐risk infant and toddler development. Pediatrics, 121(1), 65. 10.1542/peds.2006-3717 [DOI] [PubMed] [Google Scholar]
  110. Rotz, S. , Gravely, E. , Mosby, I. , Duncan, E. , Finnis, E. , Horgan, M. , Leblanc, J. , Martin, R. , Neufeld, H. T. , Nixon, A. , Pant, L. , Shalla, V. , & Fraser, E. (2019). Automated pastures and the digital divide: How agricultural technologies are shaping labour and rural communities. Journal of Rural Studies, 68, 112–122. 10.1016/j.jrurstud.2019.01.023 [DOI] [Google Scholar]
  111. Roy, R. , & George, K. T. (2017). Detecting insurance claims fraud using machine learning techniques. In 2017 International Conference on Circuit, Power and Computing Technologies (ICCPCT). IEEE.
  112. Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. 10.1038/s42256-019-0048-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  113. Sakiewicz, P. , Piotrowski, K. , & Kalisz, S. (2020). Neural network prediction of parameters of biomass ashes, reused within the circular economy frame. Renewable Energy, 162, 743–753. 10.1016/j.renene.2020.08.088 [DOI] [Google Scholar]
  114. Schmitz, A. , Schmitz, T. G. , & Rossi, F. (2006). Agricultural subsidies in developed countries: Impact on global welfare. Applied Economic Perspectives and Policy, 28(3), 416–425. 10.1111/j.1467-9353.2006.00307.x [DOI] [Google Scholar]
  115. Scholz, K. , Eriksson, M. , & Strid, I. (2015). Carbon footprint of supermarket food waste. Resources, Conservation and Recycling, 94, 56–65. 10.1016/j.resconrec.2014.11.016 [DOI] [Google Scholar]
  116. Selbst, A. , & Barocas, S. (2018). The intuitive appeal of explainable machines. Fordham Law Review, 87(3), 1085. [Google Scholar]
  117. Sellers, E. A. , Moore, K. , & Dean, H. J. (2009). Clinical management of type 2 diabetes in indigenous youth. Pediatric Clinics of North America, 56(6), 1441–1459. 10.1016/j.pcl.2009.09.013 [DOI] [PubMed] [Google Scholar]
  118. Shin, D. (2019). A living lab as socio‐technical ecosystem: Evaluating the Korean living lab of internet of things. Government Information Quarterly, 36(2), 264–275. [Google Scholar]
  119. Ståhlbröst, A. , & Holst, M. (2013). The living lab: Methodology handbook. Vinnova. [Google Scholar]
  120. Stevanović, M. , Popp, A. , Bodirsky, B. L. , Humpenöder, F. , Müller, C. , Weindl, I. , Dietrich, J. P. , Lotze‐Campen, H. , Kreidenweis, U. , Rolinski, S. , Biewald, A. , & Wang, X. (2017). Mitigation strategies for greenhouse gas emissions from agriculture and land‐use change: Consequences for food prices. Environmental Science & Technology, 51(1), 365–374. 10.1021/acs.est.6b04291 [DOI] [PubMed] [Google Scholar]
  121. Sullivan, K. , Thomas, S. , & Rosano, M. (2018). Using industrial ecology and strategic management concepts to pursue the Sustainable Development Goals. Journal of Cleaner Production, 174, 237–246. 10.1016/j.jclepro.2017.10.201 [DOI] [Google Scholar]
  122. Sushil, Z. , Vandevijvere, S. , Exeter, D. J. , & Swinburn, B. (2017). Food swamps by area socioeconomic deprivation in New Zealand: A national study. International Journal of Public Health, 62(8), 869–877. 10.1007/s00038-017-0983-4 [DOI] [PubMed] [Google Scholar]
  123. Trubek, A. B. , Carabello, M. , Morgan, C. , & Lahne, J. (2017). Empowered to cook: The crucial role of ‘food agency’ in making meals. Appetite, 116, 297–305. 10.1016/j.appet.2017.05.017 [DOI] [PubMed] [Google Scholar]
  124. USIgnite . (2023). Bukchon IoT Project: Leveraging IoT to resolve urban issues of Seoul through public‐private partnership (Seoul). US Ignite. https://www.us-ignite.org/apps/ubwvwaujtuh4vvprtsbvjf/
  125. WaagFutureLab . (2021). Amsterdam Smart Citizens Lab. WAAG Future Lab. https://waag.org/en/project/amsterdam-smart-citizens-lab/
  126. Wailoo, K. A. , Dzau, V. J. , & Yamamoto, K. R. (2023). Embed equity throughout innovation. Science, 381(6662), 1029. 10.1126/science.adk6365 [DOI] [PubMed] [Google Scholar]
  127. Wallisdevries, M. F. , & Bobbink, R. (2017). Nitrogen deposition impacts on biodiversity in terrestrial ecosystems: Mechanisms and perspectives for restoration. Biological Conservation, 212, 387–389. 10.1016/j.biocon.2017.01.017 [DOI] [Google Scholar]
  128. Westerlund, M. , Leminen, S. , & Rajahonka, M. (2018). A topic modelling analysis of living labs research. Technology Innovation Management Review, 8(7), 40–51. 10.22215/timreview/1170 [DOI] [Google Scholar]
  129. Willett, W. , Rockström, J. , Loken, B. , Springmann, M. , Lang, T. , Vermeulen, S. , Garnett, T. , Tilman, D. , Declerck, F. , Wood, A. , Jonell, M. , Clark, M. , Gordon, L. J. , Fanzo, J. , Hawkes, C. , Zurayk, R. , Rivera, J. A. , De Vries, W. , Majele Sibanda, L. , … Murray, C. J. L. (2019). Food in the Anthropocene: The EAT‐Lancet Commission on healthy diets from sustainable food systems. Lancet, 393(10170), 447–492. 10.1016/S0140-6736(18)31788-4 [DOI] [PubMed] [Google Scholar]
  130. Wolfert, S. , Ge, L. , Verdouw, C. , & Bogaardt, M. J. (2017). Big data in smart farming—A review. Agricultural Systems, 153, 69–80. 10.1016/j.agsy.2017.01.023 [DOI] [Google Scholar]
  131. Wu, Z. , Ouyang, T. , Liu, H. , Cao, L. , & Chen, W. (2023). Perfluoroalkyl substance (PFAS) exposure and risk of nonalcoholic fatty liver disease in the elderly: Results from NHANES 2003–2014. Environmental Science and Pollution Research International, 30(23), 64342–64351. 10.1007/s11356-023-26941-2 [DOI] [PubMed] [Google Scholar]
  132. Yan, R. , Bastian, N. D. , & Griffin, P. M. (2015). Association of food environment and food retailers with obesity in US adults. Health & Place, 33, 19–24. 10.1016/j.healthplace.2015.02.004 [DOI] [PubMed] [Google Scholar]
  133. Zhang, B. H. , Lemoine, B. , & Mitchell, M. (2018). Mitigating unwanted biases with adversarial learning. arXiv. 10.48550/arxiv.1801.07593 [DOI] [Google Scholar]
  134. Žliobaitė, I. (2015). On the relation between accuracy and fairness in binary classification. arXiv. 10.48550/arxiv.1505.05723 [DOI]

Associated Data


Data Availability Statement

Data sharing is not applicable—no new data were generated.


Articles from Journal of Industrial Ecology are provided here courtesy of Wiley