Abstract
Life cycle assessment (LCA) practitioners face many challenges in their efforts to describe, share, review, and revise their product system models, and to reproduce the models and results of others. Current life cycle inventory modeling techniques have weaknesses in describing model structure, documenting the use of proxy or non-ideal data, specifying allocation, and including the modeler's observations and assumptions, all of which affect how a study is interpreted and limit the reuse of models. Moreover, LCA software systems manage modeling information in different and sometimes incompatible ways. Practitioners must also deal with licensing, privacy and confidentiality of data, and other data-access issues that affect how a model can be shared. The aim of this SETAC North America working group is to define a roadmap of the technical advances needed to make LCA model sharing easier and to improve the replicability of LCA results among different users, in a way that is independent of the LCA software used to compute the results and does not infringe on licensing restrictions or confidentiality requirements.
Introduction and Current Status
The “product system” is a fundamental concept in life cycle assessment (LCA). According to ISO 14044 (2006), the product system is “[the] collection of unit processes with elementary and product flows, performing one or more defined functions, and which models the life cycle of a product.” Conceptually, the product system model (PSM), also called an inventory model or LCI system model, is the main work product of the LCA practitioner. The model describes which unit processes, including aggregated datasets, are used and how they are linked together. The model is then used to generate a life cycle inventory (LCI) and compute life cycle impact assessment (LCIA) results. The main purpose of the LCA report is to describe in detail how the model was constructed. At a basic level, computing LCIA results can be described as a set of linear algebra operations on input matrices (Heijungs and Suh 2002), but this representation omits a great deal of information, such as data quality, parameter uncertainty, use of proxy data, or the modeling of multifunctional activities. Furthermore, some data cannot be shared due to licensing restrictions or confidentiality concerns.
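The matrix computation referenced above (after Heijungs and Suh) can be sketched in a few lines of NumPy. The system below is invented purely for illustration: two processes, one elementary flow, and one impact category.

```python
import numpy as np

# Technology matrix A: columns are processes, rows are intermediate flows.
# Entry a_ij is the amount of flow i produced (+) or consumed (-) by process j.
A = np.array([[1.0, 0.0],    # process 1 makes 1 unit of the reference product
              [-0.5, 1.0]])  # process 1 uses 0.5 kWh; process 2 makes 1 kWh

# Intervention matrix B: elementary flows per unit operation of each process.
B = np.array([[0.2, 1.1]])   # e.g. kg CO2 emitted per unit of each process

# Final demand vector f: one unit of the reference product.
f = np.array([1.0, 0.0])

s = np.linalg.solve(A, f)    # scaling vector s = A^-1 f
g = B @ s                    # life cycle inventory g = B s
Q = np.array([[1.0]])        # characterization factors (kg CO2-eq per kg CO2)
h = Q @ g                    # LCIA result h = Q g
```

Everything the matrices do not carry (data quality, uncertainty, proxy substitutions, allocation choices) is exactly the information the text identifies as omitted.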
Despite the importance of the product system to the practice of LCA, there is no agreement within the LCA community on what exactly a PSM is from the computational point of view or how it is best described. Community efforts to develop data formats and interoperability standards have typically focused on describing unit processes. A centerpiece of this work is the ISO 14048 standard (2002), which lays out detailed requirements for documentation of complete databases and unit processes, focusing on metadata and descriptive information. A number of ISO 14048-compliant data formats have been developed (Rydberg and Palsson 2009). However, these formats do not consider how collections of datasets are used to make PSMs. The UNEP-SETAC Life Cycle Initiative’s Global Life Cycle Data Access (GLAD) project (UN Environment), which seeks to create a worldwide, software-independent repository for LCI data, similarly focuses on datasets without considering models. Significantly, ISO 14044 and 14048 do not recognize a distinction between the foreground and the background of an LCA model, even though these two parts are prepared and maintained very differently (Clift et al. 1998).
Partly as a result of the lack of standard guidance, practitioners face many challenges regarding documentation and sharing of PSMs. Evaluation of data quality is challenging, and the use of proxy or non-ideal data leads to results that are difficult to interpret (Canals et al. 2011; Hetherington et al. 2014; Edelen and Ingwersen 2017). Multi-functional processes are also a common source of ambiguity in reviewing models and interpreting results. While the ISO 14048 standard includes requirements for the documentation of the treatment of co-products via allocation or substitution, the standard does not provide guidance for how such documentation should be written, and it remains a challenging issue (Muench and Guenther 2013; Heath et al. 2014). Different databases and software systems have different characteristic approaches for elementary flow modeling (Edelen et al. 2017) and LCIA implementation (Herrmann and Moltesen 2015; Speck et al. 2016), confounding reproducibility of results. Finally, nearly all models contain some form of confidential or proprietary information, which constrains how models may be shared and reviewed (Kuczenski et al. 2017).
In other instances where people construct computational models to answer scientific questions, it is imperative to allow those results to be reproduced for verification or reuse (Mesirov 2010). This invariably requires that the computation procedure can be expressed distinctly from the input data (Buckheit and Donoho 1995; Fomel and Claerbout 2009), but this is generally not possible in LCA. Currently, although there are common elements to most PSMs, the actual model construction is completely dependent on the software used and the modeling choices of the practitioner, and there is no easy way to share models with collaborators or reviewers if so desired (Vandepaer and Gibon, in press). A considerable amount of metadata, such as temporal representativeness or system boundaries, is encoded as free text or graphical images that require human interpretation, which adds uncertainty and imprecision when models are shared and adapted, and imposes a significant burden in terms of cost and labor. In addition, there is no standard way to describe changes to a PSM. Past efforts to conduct meta-analyses of LCA studies were hampered by insufficient reporting of modeling assumptions, data, and results (Heath and Mann 2012; Price and Kendall 2012). Some scholars make robust attempts to document their modeling choices graphically to meet explicit objectives for transparency or reusability (e.g. (Steubing et al. 2015), supporting information; (Miller et al. 2016), Figure 2; (Cheung et al. 2017), Supporting Information), but even these approaches still demand significant labor from another researcher who wishes to adapt the models.
Some of these challenges are evident in the use of product category rules (PCRs) for writing environmental product declarations (EPDs). Currently a PCR describes the system boundaries, data quality, and other attributes of the LCA used to compute the results (Fet and Skaar 2006). However, PCRs have many of the same shortcomings found in other LCA reports. Preparing an EPD from a PCR still presents technical challenges similar to a conventional LCA (Mukherjee and Dylla 2017), and EPD results show great sensitivity to modelling decisions (Modahl et al. 2013) and the choice of PCR (Subramanian et al. 2012).
Scope
The proposed roadmap module addresses how PSMs themselves, and particularly the foreground, are described, shared, reviewed, and revised. The SETAC North America Product System Model Description and Revision working group has identified the need for a common structure for foreground modeling, independent of any particular LCA software or database, that would in principle allow study authors to describe their models unambiguously, facilitating revision, validation, and reuse of inventories. Given the complexity of PSMs, the description should be machine-readable to be efficient. This in turn would enable data users and decision makers to better understand how modeling choices influence the results, and could also illustrate how models change in revision. This direction would also support recommendations on best practices and future directions for PCR development as described in the PCR Guidance (Ingwersen and Subramanian 2013, sections 7.5 and 7.7).
The work described in this roadmap is intended to build on current practices and ongoing research. Existing resources like ISO 14048, and efforts like GLAD, are important as an adjunct to the current work. Where the work described in this roadmap differs is that it extends a set of recommendations for PSMs that build on the availability of LCI datasets (sharable, transportable, or open data) by defining the next logical step: functional and documentary requirements for a precise mathematical description of the product system as it is modeled by the practitioner. The envisioned PSM description contains only as much data as is necessary to unambiguously describe the unit processes used in its construction, and to create explicit linkages between LCI unit processes.
Definition of “Product System Model”
The portion of an LCA study that is specific to the product system being modeled is often called the foreground of the model, while the parts that reflect the industrial economy as a whole and are drawn from reference databases are the background. Without advancing a formal distinction between these two components, in simple terms the foreground portions are designed by the modeler, whereas the background processes are selected and adapted. A description of the PSM should cover both of these activities, but should exclude the preparation of the background databases.
The problem of model description is distinct from the problem of data interoperability (Ingwersen et al. 2015), although they are related. Interoperability generally refers to the ability to “effectively describe” data resources, enable users to locate and retrieve datasets, and ensure that the contents of the datasets are interpreted in a manner consistent with their intended meaning. In contrast, model description and revision concerns what the data user does with the datasets, i.e., how they are linked together to represent a product system.
Thus, this document distinguishes “datasets” from PSMs according to the following definitions:
- Datasets include information on the elementary or intermediate exchanges associated with specific industrial processes or activities.
  - A dataset can describe a process and/or flows related to a process.
  - Metadata about geographic locale, reference year and/or time period of applicability, level of review, administrative information, etc. belong to datasets, and are already well handled by existing data exchange formats.
  - Numeric input data pertaining to flow properties or process inventories belong to datasets.
  - Datasets can be (but are not necessarily) derived from PSMs.
- PSMs include information on which datasets are used in a study and how datasets are connected to one another.
  - PSMs are a structure to contain datasets.
  - Numeric output data, such as LCI results or impact category indicator results, cannot in general be computed from individual datasets and so result from the use of a PSM (an exception is the computed impact category scores of single-process datasets).
  - Aggregated datasets in particular represent the outputs of PSMs. Ideally, aggregated datasets would include precise descriptions of the models from which they were derived.
Some aspects of inventory model design are associated with particular software systems or modes of analysis. These include: 1) the basic computational approach (matrix inversion vs. sequential or iterative computation), details of representation, and visualizations; and 2) software-specific classification systems, parameterization systems, and methods for handling numerical uncertainty (Cooper et al. 2012). Such aspects are not universally applied in LCA and thus may need further development before being included in a universal description of the PSM.
A PSM is also distinct from a database system model, such as the different system models of ecoinvent (Wernet et al. 2015). Analyzing co-production modeling in background databases may involve thousands of processes and is considerably complex. The ecoinvent system models present alternative co-production strategies that are consistently applied to an entire database, and projects such as Ocelot (Mutel 2017) construct database system models by systematically applying such strategies to background processes. Review of co-production strategies is much more straightforward if it is limited to foreground processes.
Areas for improvement
The roadmapping group has identified a number of features that any framework for describing and sharing PSMs should have in order to ensure effective interpretation, critical review, and data reuse.
A description of the PSM should enable a reviewer, data user, or practitioner to:

1. identify unambiguously the specific datasets used in a PSM;
2. relate process references to their geographic/temporal/technological scope (e.g. “hydrogen produced from steam cracking” as modeled in Germany for the reference year 2015);
3. understand how multi-output and multi-functional processes were transformed into single-output processes.
   - When handling co-production by allocation or partitioning:
     - demonstrate that the sum of the allocated datasets is equal to the un-allocated dataset;
     - evaluate continuity errors such as mass, energy, or element balances.
   - When handling co-production by substitution:
     - identify the datasets that were used to implement the substitution;
     - if substitution requires complex product system modeling, such as “allocation at the point of substitution” (Wernet et al. 2015) or system expansion beyond simple substitution (Weidema 2000), the required modeling steps should be included in the PSM.
Performing any of the checks in item 3 requires access to the original multi-functional dataset. Equivalently, one could have access to the allocated datasets for all co-products together with sufficient information about how allocation was performed (e.g. the partitioning coefficients), because the un-allocated dataset could then be recreated. This may not be possible for highly aggregated industry data or for datasets involving a large number of allocated processes. Thorough documentation of allocation can satisfy some description and sharing objectives but would not allow the partitioning to be verified or altered.
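The allocation balance check in item 3 can be sketched as a short numerical test. The process exchanges and partitioning coefficients below are invented for illustration only:

```python
# Un-allocated multi-functional dataset: exchanges of a process that
# produces two co-products (all flow names and values are hypothetical).
unallocated = {"crude_input_kg": 100.0, "co2_kg": 12.0}

# Partitioning coefficients, e.g. from economic allocation (revenue shares).
alloc_coeffs = {"product_a": 0.7, "product_b": 0.3}

def allocate(exchanges, coeff):
    """Scale every exchange of the un-allocated dataset by one coefficient."""
    return {flow: amount * coeff for flow, amount in exchanges.items()}

allocated = {p: allocate(unallocated, c) for p, c in alloc_coeffs.items()}

def allocation_balances(unallocated, allocated, tol=1e-9):
    """Check that the allocated datasets sum back to the un-allocated one."""
    return all(
        abs(sum(d[flow] for d in allocated.values()) - amount) <= tol
        for flow, amount in unallocated.items()
    )
```

With access to the un-allocated exchanges and the coefficients, a reviewer can both verify the balance and re-run the partitioning with alternative coefficients; with documentation alone, neither is possible.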
For understanding the structure of PSMs:

4. follow linked input/output flows to identify connected processes; and
5. identify and extract any available information from the model description to perform uncertainty analysis, especially when using the model in comparisons.
For testing model sensitivity, or in cases where a model user would like to alter the model:

6. substitute one reference to a specific LCI dataset for another in the same model;
7. add, remove, or alter elements of the model to adapt it for new purposes;
8. review and modify co-production approaches; measure the sensitivity of results to co-production modeling decisions by testing alternative methods.
For collaboration:

9. easily obtain access to any publicly available datasets referenced in a model;
10. if the specific version of a dataset is not available, understand the differences between the datasets used in the model and those currently available;
11. share the structure of a model without including proprietary inventory data;
12. view a model that has been shared, using free software, without exposing private or proprietary data;
13. given a model, verify that a reported LCIA result is correct;
14. describe how a model has been changed between successive versions.
Whenever possible, progress toward this goal should be compatible with existing data formats and interoperable with the major LCA modeling software, particularly with respect to standardized allocation and nomenclature, or the ability to readily adjust those aspects.
Objectives
In order to achieve the requirements listed above, this working group has identified the following objectives for the LCA research community:
Establish a method for formally describing PSMs. The method should allow a study author to describe the contents and structure of their model to a colleague, client, or critical reviewer, without ambiguity.
Develop guidance for publishing information about inventory models that protects datasets. Many foreground unit processes are confidential, and non-confidential datasets are often subject to licensing restrictions or other legal restrictions on their distribution. Any approach for model publishing has to protect the privacy and security of data.
Establish a framework for open publication of PSMs. The framework should use existing formats and standards wherever possible, and should permit adaptation by existing LCA software packages without depending on a specific LCA software.
Milestones
Short-term milestones require near-term research by the academic and general LCA community. Medium-term and long-term milestones may depend on completion of the short-term milestones or may require more extensive research. Each milestone should be completed within the term indicated in the milestone tables below.
1. Describing Model Contents
This section deals with documenting the use of standard datasets from major data providers.
Publicly available LCI datasets, including those available for license, are regularly used in models as background data. The accessibility of these datasets, that is, the ease of assessment, acquisition, comprehension, and modification, is an important aspect of the success of this roadmap. The working group's first three improvement areas listed in Section 2 pertain to datasets; addressing them will better enable the seamless sharing and review of inventory models.
| ID | Milestone | Short term (0–3 years) | Medium term (3–7 years) | Long term (>7 years) |
|---|---|---|---|---|
| 1.1 | Freely and commercially available datasets should be unambiguously identified using a stable, standard reference format, such as a uniform resource identifier (URI, also called a “hyperlink”). | | | |
| 1.2 | Freely and commercially available datasets should describe how allocation, substitution, or system expansion was performed for multi-functional processes, including parameter values (e.g. price, mass) used to perform allocation. | | | |
| 1.3 | Freely and commercially available datasets should provide sufficient metadata for potential users to assess fitness for purpose before purchasing the dataset or database license. | | | |
| 1.4 | Freely and commercially available unit process datasets should allow users to modify allocation, substitution, or system expansion wherever possible. | | | |
| 1.5 | Freely and commercially available datasets used in Environmental Product Declarations should be disclosed by explicit reference. | | | |
2. Describing Model Structure
This section deals with the “process-flow diagram”: the description of how the processes contained in a model are connected to one another.
A PSM is made up of a collection of datasets. The description of the model should enable a reader to identify what datasets are used and how they are linked together, even if nothing else about the datasets is known. This section addresses items 4–8 in the areas for improvement.
| ID | Milestone | Short term (0–3 years) | Medium term (3–7 years) | Long term (>7 years) |
|---|---|---|---|---|
| 2.1 | Reach community agreement on the contents of a minimal description for PSMs. | | | |
| 2.2 | Describe the model foreground boundary in a machine-readable way. Define cut-off flows as intermediate flows that cross the system boundary. | | | |
| 2.3 | Research the protection of confidential information in models. | | | |
| 2.4 | PCRs should be supplemented with formal descriptions of expected model structure. | | | |
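As one sketch of what milestones 2.1 and 2.2 might produce, a minimal machine-readable foreground description could list the foreground processes and the links between them, with cut-off flows computed as the intermediate flows that cross the declared boundary. All process and dataset identifiers below are invented for illustration:

```python
# Hypothetical minimal PSM description: named foreground processes plus
# links that connect processes via intermediate flows. Background datasets
# are referenced by identifier but not otherwise described.
psm = {
    "foreground": {"widget_assembly", "widget_use"},
    "links": [
        # (source, flow, target)
        ("widget_assembly", "widget", "widget_use"),
        ("background:electricity_DE_2015", "electricity", "widget_assembly"),
    ],
}

def cutoff_flows(psm):
    """Cut-off flows: intermediate flows with exactly one end in the foreground."""
    fg = psm["foreground"]
    return [flow for src, flow, dst in psm["links"] if (src in fg) != (dst in fg)]
```

Even this small structure already supports items 4 and 6 of the improvement areas: the links can be traversed to find connected processes, and a background reference can be swapped without touching the rest of the model.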
3. Collaborative Use of Models
This section deals with model sharing, review, revision, and visualization.
The ability of critical reviewers and collaborators to view and update models is of equal importance to the ability to share them. While some software systems enable users to track changes internally, it would be valuable for this ability to extend beyond single software systems. This section addresses items 9–14 in the areas for improvement.
| ID | Milestone | Short term (0–3 years) | Medium term (3–7 years) | Long term (>7 years) |
|---|---|---|---|---|
| 3.1 | Obtain and review datasets used in a model. | | | |
| 3.2 | Develop a set of requirements (not limited to any specific LCA software) to describe changes to PSMs. | | | |
| 3.3 | Develop requirements for software that will provide automatic accuracy checking of model computations or LCIA results. | | | |
| 3.4 | Develop IT infrastructure to enable practitioners to share or publish models. | | | |
| 3.5 | Develop software (see section A.1.2), as well as new functionality in existing software, to view and interpret models. | | | |
| 3.6 | Develop a consistent interface, such as an API specification or a domain-specific scripting language (see section A.1.2), for communicating information about PSMs. | | | |
Cross Cutting Issues
In addition to the main areas of development, the final roadmap report presents a list of cross-cutting issues, expected to be addressed in other sections of the roadmap, that would help in creating a consensus approach for describing PSMs.
A basic requirement for describing model contents is that datasets can be unambiguously identified, as noted in Milestone 1.1. Although most datasets are already given unique identifiers that allow them to be retrieved within a given database, this level of identification is not sufficient because these identifiers are not resolvable: an identifier by itself does not tell a reader what is being identified. A uniform resource identifier (URI), also called a “hyperlink,” is a form of unique identifier that can be followed to a particular resource using domain name resolution, which is part of the core infrastructure of the Internet. Data providers that already maintain unique identifiers for datasets are encouraged to make those identifiers stable, persistent, and resolvable via the Web, which will allow different parties to agree on which datasets are being referenced.
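As a concrete illustration of the distinction, the sketch below contrasts a bare database identifier with a resolvable URI. The provider host, dataset UUID, and version parameter are all hypothetical:

```python
from urllib.parse import urlparse

# A hypothetical resolvable dataset reference: the host names the data
# provider, the path carries the dataset identifier, and a query parameter
# pins the version. Unlike a bare UUID, it can be followed via the Web.
dataset_uri = ("https://data.example-provider.org/datasets/"
               "0a1b2c3d-4e5f-6789-abcd-ef0123456789?version=3.4")

def is_resolvable_reference(uri):
    """True only if the identifier names a scheme and a host to resolve."""
    parts = urlparse(uri)
    return bool(parts.scheme) and bool(parts.netloc)
```

A bare UUID fails this test because it names nothing outside its home database; the URI passes because any party can follow it to the same resource.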
Whether or not a model is accompanied by an LCA study report that has been critically reviewed is important for anyone planning to reuse a model. The review history of the model should be included in both the GLAD effort and elsewhere in the Roadmap, together with a hyperlink or other contact information for obtaining a copy of the LCA study report.
At this time, the GLAD effort is developing a platform for sharing LCI data at the unit process level. These efforts and those of the future should be coordinated with the Product System Model Description and Revision roadmap milestones.
The GLAD Initiative and the LCA Uncertainty roadmap should address data uncertainty in a way that enables sharing between tools, reproducibility of data quality assessments, and modification to suit a different purpose. Achieving these milestones will support the development of PCRs and EPDs and the critical review of those documents. They should also support better comparisons of results, particularly if uncertainty is captured. This should be taken into consideration in ongoing and future sections of the Roadmap.
External models are increasingly used to improve LCI, and databases are growing in complexity. Linking PSMs to the outputs of other large databases and data sources is therefore a concern. Specifying how this linkage is documented, and how external data can be integrated into LCI computation, should be addressed in future work.
Much of the goal of these milestones is to enable reproducibility of LCIA results. Ways of tying the impact assessment method to the model need to be addressed. Further research is required to render LCIA computations more transparent and reviewable. LCIA methods are, in principle, independent of LCA software and of inventory data, so impact assessment data (characterization factors) should be excluded from both datasets and PSMs. However, in practice LCIA methods must be integrated with inventory data sources before they can be used, and implementations vary across software systems. In addition, there is a strong relationship between the PSM and the use of spatially regionalized characterization models (Verones et al. 2017). There is a need for standard formats in the reporting of spatially differentiated models and choices regarding the spatiotemporal scales of both inventory and impact modeling. Complexities will undoubtedly arise in the case of site-specific fate, transport, and exposure modeling that must be further considered.
Once the model sharing requirements have been developed, software tools should be developed and/or adapted to support the review and sharing of models.
Conclusions
It is our goal that the objectives and milestones laid out in this document will provide guidance to researchers, data providers, and software makers on the steps required to facilitate the description and sharing of product system models. If the LCA community is successful in creating a consensus approach for describing PSMs, this approach would serve as a major stepping stone to overcome three issues that arise often in LCA: transparency, reproducibility, and extensibility.
While “transparency” in ISO 14044 today refers to the disclosure of all relevant information in an LCA report, a common description of inventory models would shift that focus from text and tables in a report to the model as a digital object itself. One way to provide “strong transparency” would be to allow another LCA practitioner to automatically “read” an LCI model and understand the system described and inspect the flows that link the various processes. Note this is to understand, not necessarily to agree; but a common understanding can lead to a discussion which produces agreement.
Reproducibility, which follows from transparency, is the ability of another LCA practitioner to replicate the results from an understanding of the PSM. It is also increasingly important for the use of LCA in the scientific domain, in which an understanding of the provenance of results is vital. A related idea to reproducibility is the ability to test sensitivity to model parameters, data sources, and dataset selections, in order to evaluate the decisions of the modelers. Reproducibility will serve the LCA community by increasing the defensibility of the results of individual LCA studies and of LCA practice as a whole, which will encourage more widespread use of LCA in decision support.
The third issue, extensibility, refers to the ability of practitioners to build upon one another’s work. Having a precise description of a product model will enable LCA studies to be adapted, extended, and used by reference in other studies. We envision a landscape in which a complete LCA by one group (say, on lithium batteries) can be embedded within another study by an unaffiliated group (say, of an electric vehicle) without reproducing the study results by hand. While licensing and confidentiality, as well as technical interoperability, present strong challenges to this vision, having a framework for describing and sharing models is a necessary first step.
Acknowledgments
The authors would like to acknowledge the contributions from the participants of the workshop in the 2016 SETAC-North America Meeting and the support of SETAC. We greatly appreciate the community members who provided anonymous survey feedback. Working group members included Mourad Ben Amor (University of Sherbrooke), Miguel Astudillo (University of Sherbrooke), Bill Bernstein (NIST), Paula Bernstein (PRe Sustainability), Marcos Esterman (Rochester Institute of Technology), David Evers (Hexion), Karl Haapala (Oregon State University), Troy Hawkins (Eastern Research Group), Wesley Ingwersen (US EPA), Christoph Koffler (thinkstep), Brandon Kuczenski (University of California, Santa Barbara), Lise Laurin (EarthShift Global), Antonino Marvuglia (Luxembourg Institute of Science and Technology), David Meyer (US EPA), KC Morris (NIST), Christopher Mutel (Paul Scherrer Institut), Tomas Navarrete (Luxembourg Institute of Science and Technology), Massimo Pizzol (Aalborg University), Devarajan Ramanujan (Massachusetts Institute of Technology), Barclay Satterfield (Eastman Chemical).
Footnotes
Publisher's Disclaimer
The research presented was not performed or funded by EPA and was not subject to EPA’s quality system requirements. The views expressed in this article are those of the authors and do not necessarily represent the views or the policies of the U.S. Environmental Protection Agency.
References
- Buckheit J, Donoho D (1995) WaveLab and Reproducible Research. In: Antoniadis A, Oppenheim G (eds) Wavelets and Statistics. Springer, New York, NY, pp 55–81
- Canals LMI, Azapagic A, Doka G, et al. (2011) Approaches for Addressing Life Cycle Assessment Data Gaps for Bio-based Products. J Ind Ecol 15:707–725. doi: 10.1111/j.1530-9290.2011.00369.x
- Cheung CW, Berger M, Finkbeiner M (2017) Comparative life cycle assessment of re-use and replacement for video projectors. Int J Life Cycle Assess 1–13. doi: 10.1007/s11367-017-1301-3
- Clift R, Frischknecht R, Huppes G, et al. (1998) Towards a coherent approach to life cycle inventory analysis. In: SETAC Europe, Brussels
- Cooper JS, Noon M, Kahn E (2012) Parameterization in Life Cycle Assessment inventory data: review of current use and the representation of uncertainty. Int J Life Cycle Assess 17:689–695. doi: 10.1007/s11367-012-0411-1
- Edelen A, Ingwersen WW (2017) The creation, management, and use of data quality information for life cycle assessment. Int J Life Cycle Assess. doi: 10.1007/s11367-017-1348-1
- Edelen A, Ingwersen WW, Rodríguez C, et al. (2017) Critical review of elementary flows in LCA data. Int J Life Cycle Assess 1–13. doi: 10.1007/s11367-017-1354-3
- Fet AM, Skaar C (2006) Eco-labeling, product category rules and certification procedures based on ISO 14025 requirements. Int J Life Cycle Assess 11:49–54. doi: 10.1065/lca2006.01.237
- Fomel S, Claerbout JF (2009) Reproducible Research. Comput Sci Eng 11:5–7. doi: 10.1109/MCSE.2010.113
- Heath GA, Mann MK (2012) Background and Reflections on the Life Cycle Assessment Harmonization Project. J Ind Ecol 16:8–11. doi: 10.1111/j.1530-9290.2012.00478.x
- Heath GA, O’Donoughue P, Arent DJ, Bazilian M (2014) Harmonization of initial estimates of shale gas life cycle greenhouse gas emissions for electric power generation. Proc Natl Acad Sci U S A 111:E3167–E3176. doi: 10.1073/pnas.1309334111
- Heijungs R, Suh S (2002) The computational structure of life cycle assessment. Int J Life Cycle Assess 7:314. doi: 10.1007/BF02978899
- Herrmann IT, Moltesen A (2015) Does it matter which Life Cycle Assessment (LCA) tool you choose? A comparative assessment of SimaPro and GaBi. J Clean Prod 86:163–169. doi: 10.1016/j.jclepro.2014.08.004
- Hetherington AC, Borrion AL, Griffiths OG, McManus MC (2014) Use of LCA as a development tool within early research: Challenges and issues across different sectors. Int J Life Cycle Assess 19:130–143. doi: 10.1007/s11367-013-0627-8
- Ingwersen WW, Hawkins TR, Transue TR, et al. (2015) A new data architecture for advancing life cycle assessment. Int J Life Cycle Assess 20:520–526. doi: 10.1007/s11367-015-0850-6
- Ingwersen WW, Subramanian V (eds) (2013) Guidance for Product Category Rule Development. Product Category Rule Guidance Development Initiative, version 1.0. http://www.pcrguidance.org/. Accessed 20 Dec 2017
- ISO (2006) Environmental management -- Life cycle assessment -- Requirements and guidelines (ISO 14044:2006)
- ISO (2002) ISO/TS 14048:2002 -- Environmental management -- Life cycle assessment -- Data documentation format
- Kuczenski B, Sahin C, El Abbadi A (2017) Privacy-preserving aggregation in life cycle assessment. Environ Syst Decis 37:13–21. doi: 10.1007/s10669-016-9620-7
- Mesirov JP (2010) Accessible reproducible research. Science 327:415–416. doi: 10.1126/science.1179653
- Miller SA, Billington SL, Lepech MD (2016) Influence of carbon feedstock on potentially net beneficial environmental impacts of bio-based composites. J Clean Prod 132:266–278. doi: 10.1016/j.jclepro.2015.11.047
- Modahl IS, Askham C, Lyng KA, et al. (2013) Comparison of two versions of an EPD, using generic and specific data for the foreground system, and some methodological implications. Int J Life Cycle Assess 18:241–251. doi: 10.1007/s11367-012-0449-0
- Muench S, Guenther E (2013) A systematic review of bioenergy life cycle assessments. Appl Energy 112:257–273. doi: 10.1016/j.apenergy.2013.06.001
- Mukherjee A, Dylla H (2017) Challenges to Using Environmental Product Declarations in Communicating Life-Cycle Assessment Results. Transp Res Rec J Transp Res Board. doi: 10.3141/2639-11
- Mutel CL (2017) Ocelot: an open source linking framework for life cycle assessment. https://osf.io/apg8j/. Accessed 20 Dec 2017
- Price L, Kendall A (2012) Wind Power as a Case Study: Improving Life Cycle Assessment Reporting to Better Enable Meta-analyses. J Ind Ecol. doi: 10.1111/j.1530-9290.2011.00458.x
- Rydberg T, Palsson A-C (2009) Towards a Nordic Guideline for Nomenclature and Data exchange format for Life Cycle Assessment -- NorDEX
- Speck R, Selke S, Auras R, Fitzsimmons J (2016) Life Cycle Assessment Software: Selection Can Impact Results. J Ind Ecol 20:18–28. doi: 10.1111/jiec.12245
- Steubing B, Mutel C, Suter F, Hellweg S (2015) Streamlining scenario analysis and optimization of key choices in value chains using a modular LCA approach. Int J Life Cycle Assess 21:510–522. doi: 10.1007/s11367-015-1015-3
- Subramanian V, Ingwersen W, Hensler C, Collie H (2012) Comparing Product Category Rules from Different Programs: Learned Outcomes Towards Global Alignment. pp 1–14
- UN Environment. The Global LCA Data Access network. http://web.unep.org/resourceefficiency/what-we-do/assessment/life-cycle-thinking/global-lca-data-access-network. Accessed 22 Dec 2017
- Vandepaer L, Gibon T (in press) The integration of energy scenarios into LCA: LCM 2017 conference workshop, Luxembourg, September 5, 2017. Int J Life Cycle Assess
- Verones F, Bare J, Bulle C, et al. (2017) LCIA framework and cross-cutting issues guidance within the UNEP-SETAC Life Cycle Initiative. J Clean Prod 161:957–967. doi: 10.1016/j.jclepro.2017.05.206
- Weidema BP (2000) Avoiding Co-Product Allocation in Life-Cycle Assessment. J Ind Ecol 4:11–33. doi: 10.1162/108819800300106366
- Wernet G, Bauer C, Steubing B, et al. (2015) The ecoinvent database version 3 (part I): overview and methodology. Int J Life Cycle Assess 21:1218–1230. doi: 10.1007/s11367-016-1087-8
