Abstract
This paper proposes a classification scheme for performance metrics for smart manufacturing systems. The discussion focuses on three such metrics: agility, asset utilization, and sustainability. For each of these metrics, we discuss classification themes, which we then use to develop a generalized classification scheme. In addition to the themes, we discuss a conceptual model that may form the basis for the information necessary for performance evaluations. Finally, we present future challenges in developing robust performance-measurement systems for real-time, data-intensive enterprises.
Keywords: Agility, Asset utilization, Metrics classification, Performance evaluation, Smart manufacturing, Sustainability
1. Introduction
Rapidly changing global markets have increased the competitive pressure on businesses across the world. Businesses in countries with a high Gross Domestic Product (GDP) must overcome high labor costs and increased regulations while continuing to provide innovative, quality products and services. Manufacturing companies, in particular, must continue to develop and implement new technology solutions to remain competitive. These solutions, when implemented, must enable manufacturing systems to function at continually higher performance levels typified by lower operating costs, higher quality products, and better decision making. The consistent need for improved performance in manufacturing has fueled the development of a number of generalized production concepts [1]. Some examples include intelligent manufacturing [2–4], agent-based manufacturing [5–7], agile manufacturing [8], next-generation manufacturing [9], advanced manufacturing [10], and digital manufacturing [11]. While these concepts have been implemented with varying degrees of success, opportunities still exist to increase performance by using advanced hardware and software systems that leverage sensor inputs in near real time.
Sensor-enabled manufacturing equipment and controller software have provided a new, data-rich environment that can help improve system performance. We generally refer to such systems as smart manufacturing systems (SMS) [12], which take advantage of the availability of affordable sensors, computing, and data storage; Internet-of-Things architectures; machine learning and data-mining algorithms; and cloud-based storage and data-management systems. Such systems are highly adaptive and have some level of autonomy. More importantly, SMS also have the capabilities to enable rapid realization of products, dynamic response to changing demand, and real-time performance optimization of production and supply-chain networks [13]. This paper reports on an effort to support the adoption and use of the SMS paradigm in industry through the development of structured approaches for selecting and computing metrics using the above technologies.
2. Performance Metrics for Smart Manufacturing Systems
Smart manufacturing systems use information and communications technology (ICT) to implement the capabilities described in Section 1 [13]. SMS provide information to decision makers by collecting, monitoring, and analyzing data collected from relevant sources within and beyond the manufacturing system itself. In this way, SMS can help manufacturers identify and respond to system-level manufacturing inefficiencies, which have been growing in importance given increasing market demands for improved flexibility, productivity, and sustainability [14]. The key to the success of SMS is to identify the right set of data needed to assure the desired performance of the system. In general, we can define the performance of a SMS using three inter-related criteria: agility, asset utilization, and sustainability [15].
Agility is typically understood as the ability of a system to respond rapidly, efficiently, and effectively to changes in the environment and/or requirements [16–17]. Asset utilization refers to the efficient and effective use of manufacturing resources to create value. When considered together, an agile manufacturing system that best utilizes its assets is one that efficiently and effectively responds to external and internal changes and consistently provides value using minimal time, effort, and cost. It is important to differentiate efficiency from effectiveness in this definition since they refer to two separate characteristics of a manufacturing system. Duflou et al. [18] define efficiency as “the amount of resources required to produce a given level of output” and effectiveness as “making wise choices with respect to how resources are used.” Alternatively, we can rely on the following definitions to clarify efficiency and effectiveness further for smart manufacturing: efficiency is the proficiency of a system to convert inputs to outputs while effectiveness is the ability of a system to provide specific value using given resources. Ideal performance demands that a smart manufacturing system be both efficient and effective.
Sustainability is becoming a global concern. The Brundtland Commission defined sustainability as “development that meets the needs of the present without compromising the ability of future generations to meet their own needs” [20]. There are three pillars of sustainability, generally referred to as the triple bottom line: the environment, the economy, and society. In this context, we can argue that sustainability includes agility and asset utilization considerations since both of the latter categories affect the environmental and economic impacts of manufacturing systems.
Much of the technical literature on sustainable manufacturing focuses on those processes and systems that minimize environmental and social impacts while simultaneously maximizing economic benefits [21–26]. For example, Rachuri et al. [22] argue that sustainable manufacturing can be viewed as a systems approach for the creation and distribution (supply chain) of innovative products and services that minimizes resources (e.g., materials, energy, water, and land); eliminates toxic substances; and produces zero waste across the entire life cycle of products and services. In all definitions, sustainability within a manufacturing system must be considered in the context of a closed system using spatially- and temporally-defined boundaries [18], [27]. This and other requirements and characteristics of performance metrics selection and evaluation are the focus of Section 3.
2.1 Performance Metrics Landscape
A large number of metrics and indicators have been defined to capture the performance of manufacturing systems from multiple perspectives – all related to the three pillars of sustainability. For example, there has been a significant amount of work to expand or consolidate the three pillars into more detailed or easily applicable ideas, e.g., expanding the categories to include specific subcategories, such as agility or flexibility [16], [28–29], and asset utilization [30], or consolidating the broad categories into a smaller number of easily comparable algorithmic metrics [31–32]. While these specific and consolidated metrics are important and useful, they are not well suited for capturing the details needed to understand where improvements may be needed and should be used primarily in conjunction with more detailed performance metrics [24].
Metrics that attempt to address all three pillars of sustainability are called 3-D metrics. These metrics are very useful in conducting an initial performance evaluation, but they are not usually detailed enough to find the source of a specific inefficiency. In these cases, we can use more specific performance metrics by considering 2-D and 1-D metrics, which cover two or one of the pillars of sustainability, respectively [24–25]. It is important to assure that any selected performance metric captures useful and relevant information efficiently and effectively. Characteristics of ideal metrics include comprehensiveness, controllability, cost effectiveness, manageability, meaningfulness, robustness, and timeliness [33].
Singh et al. [34] reviewed 41 sustainability indicators and reported that only a few deal with all three pillars of sustainability. There are even fewer that focus on the product or process; the most notable of these is the Ford Product Sustainability Index (FORD PSI) [35]. This index lists eight product-specific indicators to account for the three pillars of sustainability: life-cycle global-warming potential, life-cycle air-quality potential, sustainable materials, restricted substances, drive-by-exterior noise, safety, mobility capability, and life-cycle ownership costs [36]. Other works have suggested more general performance metrics for process, equipment, and cell levels, but they are not considered complete catalogs [26].
A more detailed assessment can be conducted when evaluating specific processes and cells, but these assessments are based on indicators related to emission, energy, material, and water intensities. These indicators are associated with only two of the pillars: environment and economy [37–40]. There is still a significant need to expand these lower-level assessments to include a more comprehensive list of society-related and manufacturing-specific metrics (e.g., agility). Creating a comprehensive list of multi-level performance metrics that adequately accounts for all of the sustainability pillars remains elusive. A significant obstacle is the many types of manufacturing processes and the fact that many metrics are process or cell specific. An exhaustive list would be useful if an accompanying selection manual allowed users to choose appropriate performance metrics for specific stakeholders and across hierarchical levels. A careful balance is needed when considering the appropriate number of performance metrics for analysis since each additional metric can require significant time, money, and resources to determine.
Jain [41] points out that the utility of available sustainability metrics and specific indicators is limited in making project choices and policy decisions. He proposed the use of a multi-attribute utility framework for measuring sustainability progress. Gutowski et al. [42] proposed thermodynamic metrics for sustainability and accepted that a single aggregated metric cannot offer a completely satisfactory solution for all situations. Jain and Rachuri [43] considered the dimension of maturity for sustainable manufacturing and proposed a basic set of metrics to use for manufacturing systems within small and medium enterprises.
2.2 Supply Chain Metrics: An Example of the Challenge
SMS extend across the manufacturing enterprise and include the supply chain. Choosing metrics at the supply-chain level has been the focus of the Supply Chain Council, an industrial consortium with almost 1000 member companies. The Council has used its own Supply Chain Operations Reference (SCOR) model, which provides a standard perspective for supply-chain planning and execution, to choose those metrics. To date, the SCOR model includes over 150 metrics along five attributes: reliability, responsiveness, agility, cost, and asset-management efficiency. A variety of additional metrics have also been developed outside of the SCOR model.
The literature includes other sets of supply-chain metrics, some of which overlap those defined in the SCOR model. For example, Houda and Said [44] define a supply-chain sustainability index that measures organizational performance by integrating an organization’s social, economic, and environmental goals. This approach allows systematic coordination of key, inter-organizational, business processes to improve the long-term economic performance of the company’s supply chain. Lin and Li [45] define defects per opportunity, yield, and rolled-throughput yield as performance measures for supply chain based on the Six-Sigma philosophy.
Clearly, there is no shortage of metrics. What is needed is an organizing scheme to assess and use appropriate metrics for any proposed performance evaluation.
2.3 Metrics Repositories and Hierarchies
There have been many attempts to develop an organizing scheme for this wide variety of metrics. These attempts have been motivated by the difficulty that manufacturers have had in using these metrics. A number of repositories for performance metrics have been created, such as the Sustainable Manufacturing Indicators Repository [46]. The metrics in such repositories cover a range of aspects of organizational operations and associated impacts.
Graedel and Allenby [47] recognize the challenge of grouping sustainability metrics to communicate aggregated achievement. They support setting up a hierarchical grouping scheme across local, national, and global levels. Jin and High [48] propose a hierarchy of “stressor–status–effect–integrality–well-being” to classify sustainability metrics. In addition to the causal chain represented by the hierarchy, Jin and High [48] suggest using multiple criteria “across the causal chains” at each level of the hierarchy.
Joung et al. [49] use five dimensions to categorize sustainability indicators: environmental stewardship, economic growth, social well-being, technological advancement, and performance management. Fiksel et al. [50] also present a classification scheme for sustainability indicators. While both approaches present useful classification schemes, they address only sustainability-related metrics.
The SCOR model discussed earlier presents a model for linking metrics. That linking is based on a three-level hierarchical structure, where lower-level metrics serve as diagnostics for higher-level metrics (SCC 2012). Level 1 metrics are high-level measures that span SCOR processes. The lower-level metrics generally focus on a subset of SCOR processes.
2.4 Metrics Landscape Observations
The efforts described above to develop an organizing scheme, such as repositories and hierarchies, offer a promising direction of work. These efforts have helped provide structure for evaluating agility, asset utilization, and sustainability, as well as for better managing processes, facilities, and supply chains. However, further effort is required to provide a similar scheme for SMS metrics. Such a scheme can in turn support the implementation of infrastructure for the collection of metrics and their use for real-time decision making in SMS [2–4]. We now present our approach to developing such a scheme.
3. Proposed Classifications of Metrics
Classification schemes or frameworks for manufacturing metrics have been presented in the past, but most schemes or frameworks do not focus on the three criteria described in Section 2 for smart manufacturing systems: agility, asset utilization, and sustainability. Blackburn and Valerdi [51] recommend an open-minded, value-focused approach to understanding performance before identifying metrics and setting up measurement processes to collect data to support performance characterization and analysis. SMS must account for measurement throughout the manufacturing enterprise, including the process, machine, and facility levels. In this work, we develop a classification scheme to be used in creating a repository of metrics for smart manufacturing. This section describes the motivation behind the classification, reviews existing classifications, and presents the data structure for the repository, illustrated with example metrics.
3.1 Motivation
Competitive priorities can differ significantly between manufacturing companies; hence, a suitable set of performance metrics for one company might not be suitable for another. Additionally, the metrics-selection process should capture the proper definitions of the performance metrics to improve their selection and use [52–53]. A significant challenge in this regard is the development of classification methods for an increasingly large number of existing performance measures [54]. The ability of an organization to choose a specific metric for a particular situation is hindered when the purpose for the performance measure is not captured anywhere. Many of the performance measures contain information on when they should be used but not where they would be efficient and effective measurements. The unavailability of classification schemes that deal with these aspects leaves organizations with little practical guidance on how to decide what performance measures are suitable for their specific needs [55].
Another important driver for classifications is the ability to provide a performance measurement system with contextual information. Every company must deal with its own unique environment, and the most important factors that affect a company’s performance can vary greatly. These factors are in turn interrelated and change over time, which makes the task of classification challenging. The classification schemes should capture the contextual information that supports the company’s strategic objectives. It should also focus on short-term as well as long-term results. The content information for performance measurement should include the formulas used to compute the metrics. The requirements for simplicity and accuracy of performance measures are not always compatible with each other, which makes compromises unavoidable. For example, an exact definition of a performance measure can lead to a very complex formula when a simple formula would be easier to measure, comprehend, and calculate (although it may not be as precise). This is also related to multi-level modeling of manufacturing systems. Finally, it is important to note that these metrics are normally stochastic in nature and hence require a good understanding of and ability to quantify uncertainty.
3.2 Key Characteristics of Performance Metrics
Performance metrics are important since they “integrate organizational activities across various managerial levels and functions” [56]. The key characteristics of any successful metric are that it be objective, unbiased, equitable, computable, and operational. Three important technical issues impact these characteristics: the notion of functional units for normalization, spatial boundaries for analysis, and temporal boundaries for analysis. Understanding the historical and/or relative performance of a manufacturing system is primarily enabled by the choice of the metrics used in a performance assessment. Each such metric provides a measure by which two items or alternatives are compared. The selection of a metric must satisfy two requirements: it must allow for a comparison based on equal utility and it must address stakeholder demands. We can address the first requirement by defining metrics that reflect effectiveness. For example, comparing a rough machining operation with a finish machining operation using the energy consumed per-unit-volume machined provides minimal insight. This happens because the selected metric does not reflect equal utility since the operations do not provide the same value to the finished part [57–58]. The second requirement can be addressed by defining a selection of metrics that may be applicable across an array of stakeholders. A machinist, for example, may find energy per unit time most useful to identify potentially abnormal variations in the process while a designer may find energy per part feature most useful to gain better insights into the impacts of his/her design.
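The stakeholder example above can be made concrete with a short sketch that derives both views from the same measured energy. All function names and numeric values here are illustrative assumptions, not data from the paper.

```python
# Hypothetical sketch: two stakeholder views computed from the same
# measured energy for one machined part. All values are assumed.

def energy_per_unit_time(energy_joules: float, duration_s: float) -> float:
    """Machinist's view: mean power draw, useful for spotting abnormal variation."""
    return energy_joules / duration_s

def energy_per_feature(energy_joules: float, n_features: int) -> float:
    """Designer's view: energy attributable to each machined part feature."""
    return energy_joules / n_features

total_energy = 180_000.0  # J consumed machining one part (assumed)
cycle_time = 120.0        # s per part (assumed)
features = 6              # machined features on the part (assumed)

print(energy_per_unit_time(total_energy, cycle_time))  # 1500.0 (W)
print(energy_per_feature(total_energy, features))      # 30000.0 (J per feature)
```

The same raw measurement thus yields different, equally valid metrics depending on the stakeholder's question.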
3.3 Boundary of Analysis
An equally important consideration when defining metrics for SMS is the boundary of analysis of system performance. This boundary consists of both a spatial and a temporal component. Fig. 1 shows one example of the spatial and temporal nature of manufacturing decision making that focuses on the product life cycle from design to operation. The vertical axis represents the hierarchy within the manufacturing enterprise with the addition of product as the smallest spatial unit that interacts with the machine. While the figure shows activities in parallel, they might not occur in that manner in practice. Each row should be considered independently for the development that takes place over time. This example highlights how decisions made for one specific portion of the manufacturing system and/or at one specific time of the product life cycle can affect other spatial and temporal perspectives [37].
Figure 1.
Spatial and temporal perspectives of design and manufacturing decision making (Reich-Weiser et al. 2008).
The temporal perspective in Fig. 1 can be extended to the end-of-life phase to complete the product life cycle. The spatial perspective can be extended from the enterprise to the larger supply chain. The supply-chain layer can be viewed as orthogonal to other spatial views because it focuses on a temporal flow across multiple manufacturing and logistics enterprises. For example, the ARC Advisory Group’s collaborative manufacturing model (ARC CMM) identifies enterprise operations, product and asset life cycle, and value chain as three broader domains for manufacturing enterprises [59]. The ARC CMM is analogous to the spatial and temporal perspectives presented in Fig. 1: the enterprise operations domain can be summarized in the facility and machine layers, the product and asset life cycle domain aligns well with the temporal perspective, and the value chain domain is orthogonal to the spatial perspectives captured in Fig. 1.
Focusing on the spatial component of the analysis boundary, we can generalize our perspective to that shown in Fig. 2. Here, manufacturing activities are divided into a hierarchy of six different levels: enterprise, factory or facility, line, cell or machine, tooling and setup, and process (e.g., physical interactions that process material to create a product, such as material removal at the tool-chip interface in machining) [14]. These levels correspond well to other existing models and standards in the literature, such as IEC 62264-1 [60], which is based upon ANSI/ISA-95 [61]. Considerations at the enterprise and facility levels typically focus on supply chains and corporate goals, whereas considerations from the line to the process levels tend to focus on shop-floor operations. Again, any adjustments made at one level may affect other levels, and so all defined metrics should maintain a “systems” perspective to assure performance throughout a smart manufacturing system and its environment. Metrics along the spatial dimension should be aligned to ensure that they reflect the same strategic objectives and that they can be aggregated. Some metrics may be aggregated simply by addition. For example, resource consumption and emissions at the machine and equipment level may be added to generate those at line level, those at line level may be added to generate those at the factory level, and so on. Others may require averaging (e.g., utilizations) while others may require more complex analysis (e.g., throughput).
Figure 2.
Six levels of the manufacturing hierarchy (Dornfeld et al. 2009).
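The three aggregation patterns described above (addition, averaging, and more complex analysis) can be sketched as follows. The machine data and the serial-line throughput simplification are assumptions for illustration only.

```python
# Hedged sketch of spatial aggregation (machine -> line). The machine
# data and the serial-line throughput simplification are assumptions.

machines = [
    {"energy_kwh": 120.0, "utilization": 0.85, "rate_parts_per_hr": 40.0},
    {"energy_kwh": 95.0,  "utilization": 0.72, "rate_parts_per_hr": 35.0},
    {"energy_kwh": 140.0, "utilization": 0.90, "rate_parts_per_hr": 50.0},
]

# Additive metric: line energy is the simple sum of machine energies.
line_energy = sum(m["energy_kwh"] for m in machines)

# Averaged metric: line utilization as the mean of machine utilizations.
line_utilization = sum(m["utilization"] for m in machines) / len(machines)

# More complex metric: for an unbuffered serial line, throughput is
# limited by the slowest machine (a deliberate simplification; real
# lines typically need queueing or simulation analysis).
line_throughput = min(m["rate_parts_per_hr"] for m in machines)

print(line_energy)                 # 355.0
print(round(line_utilization, 3))  # 0.823
print(line_throughput)             # 35.0
```

The same pattern repeats at the next level up: line-level values are in turn summed, averaged, or analyzed to produce factory-level metrics.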
Metrics play an important role as the subject entity (product or manufacturing system) develops along the temporal dimension from concept to implementation, as partially represented in Fig. 1. When developing a concept during the design phase, the metrics are used to define the requirements objectively. For a manufacturing system concept, these may include metrics for throughput rate, utilization levels, and emission levels. In the case of a product, the metrics will depend on the product of interest. For example, metrics may be included for mileage, crash performance, and cost for an automobile. These metrics then become design targets throughout the remainder of the design phase.
Analytical tools may be used during the early design phase to evaluate the potential product’s performance using the selected metrics. As the design matures, more detailed evaluation tools, such as simulation, may be used to ensure that the desired performance, based on the selected metrics, can be met with the design. The initial physical realization of the design (a prototype product or the pilot manufacturing system) can then be evaluated for its ability to meet the desired performance, and adjustments can be made as needed to meet these performance goals. The performance of the final realization (manufactured product or fully implemented manufacturing system) is then tracked using the same set of metrics that was used to specify the requirements. Over time, the requirements may change, and modifications to the subject entity may be needed, triggering another cycle of design and implementation. The key aspect to note is that the same set of metrics is used across the temporal dimensions for the entire product life cycle. Some of these metrics may also be used for end-of-life decisions for the product and its components.
The intersection of spatial and temporal domains helps determine the measurement and aggregation methods for the metrics. For example, if we focus on the agility criterion at the machine level, then the changeover time from one product to another may be aggregated from time elements required to set up the selected tooling coupled with those required for the machine settings. At the prototype stage, one can measure the changeover time using a time study of the operator. The domains are thus useful in identifying the transition of purpose and collection method of metrics. They also help in identifying the need for linkage among the metrics at different spatial and temporal stages. The spatial and temporal domains can be used as two of the major classification themes for metrics.
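As a concrete illustration of the machine-level aggregation just described, the sketch below sums tooling and machine-setting time elements into a changeover time. The element names and durations are hypothetical.

```python
# Illustrative only: aggregating machine-level changeover time from its
# time elements. Element names and durations (minutes) are assumed.

tooling_elements = {
    "remove_fixture": 4.0,
    "mount_fixture": 6.0,
    "load_tools": 3.5,
}
machine_setting_elements = {
    "load_program": 1.0,
    "set_offsets": 2.5,
    "first_article_check": 5.0,
}

changeover_time = (sum(tooling_elements.values())
                   + sum(machine_setting_elements.values()))
print(changeover_time)  # 22.0 minutes for this assumed changeover
```

At the prototype stage the same quantity might instead be measured directly by a time study; the aggregation formula and the direct measurement should agree once the element data are reliable.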
3.4 Classification themes for metrics
The preceding sections have discussed multiple aspects that may guide the identification of themes to classify metrics. Several different themes for metrics classifications exist in the performance management domain; some of these are highlighted in this section. The classification themes listed in this section are not intended to be exhaustive but merely examples to guide discussion.
The criteria of interest defined earlier in this paper for smart manufacturing performance provide some potential classification themes as shown below:
Category – Identifies the smart manufacturing criteria of interest as agility, asset utilization, and/or sustainability. The subcategories for agility may be environmental, quality, flexibility, and adaptability. Subcategories for asset utilization may be economic, flexibility, and adaptability. Finally, the subcategories for sustainability may be identified as environmental, economic, and social following the discussion earlier in the paper.
Audience – Identifies the stakeholder group typically using this metric, which could be internal and/or external.
The following classification themes are related to the spatial and temporal dimensions discussed in Section 3.3:
Spatial Domains – Multiple spatial domains may be identified: one along the hierarchy presented in Fig. 2 and the other along functional lines. The hierarchy view is indicated with manufacturing hierarchy with subdomains of enterprise, factory or facility, line, cell or machine, tooling and setup, and process, as shown in Fig. 2. The functional view is indicated as enterprise functions with subdomains of production, inventory, quality, maintenance, and business.
Temporal Domains – Multiple temporal domains may be identified, one along the product life cycle and another along the supply chain. The temporal domain along the product life cycle is indicated with subdomains of design, manufacture, use, and end of life. The temporal domain along the supply chain is indicated with subdomains of plan, source, make, and deliver.
Target – This dimension identifies the target whose performance is being assessed by the metric. A target may be a product or process.
Relationship to process – This dimension identifies whether the metric serves as an input to some process or captures the outcome of some process.
Decision type (strategic/tactical/operational) – This dimension focuses on the kind of decision the measure is meant to support. A decision type may be strategic, tactical, or operational.
Aggregation level – This dimension indicates if the measure is of overall or partial nature.
Calculation timing – This dimension focuses on information about when the performance measure is calculated. It may be real time (calculated at the actual time during which an event occurs), periodical (calculated at certain intervals of time), or on-demand (calculated after a specific data selection request).
Orientation – This dimension identifies whether the measure leads (relates to the potential performance of future events) or lags (describes the performance of past events).
Additional examples of classification themes are listed below.
Source of data – Internal (data from sources within organization) or external (data from sources outside the organization)
Type of data – Subjective (based on perception or opinion) or objective (based on observable facts not involving opinion)
Reference – Benchmark (compares an organization with others) or self-referenced (does not involve any comparison with another organization)
Measurement unit – Monetary, physical, or dimensionless; this dimension relates to which unit the measure is expressed in.
Metric value type – Quantitative (ability to describe the performance measure numerically) or qualitative (ability to describe the performance measure non-numerically)
Desired trend of the metrics – This dimension identifies the improvement direction, i.e., whether higher or lower is better.
Type of measurement unit – This dimension focuses on the types of units of measure, e.g., a ratio is a relation between two units of measure while a utilization is a relation between two elements that both have time as a unit of measure.
Uncertainty – This dimension identifies the level of uncertainty that may vary from low to high. It can be considered as the inverse of accuracy. In general, measures with high uncertainty (low accuracy) may be simpler to measure and/or calculate.
In an ad-hoc manner, performance measures can be classified into cost-related and non-cost-related performance measurements; however, the usefulness of such a classification is limited. As described previously, one can classify a group of individual performance measures in terms of manufacturing performance goals:
Zero defects – Quality-related metrics
Zero backorders – Speed-related metrics
Zero breakdown – Utilization-related metrics
Zero stocks – Agility-related metrics
Zero waste – Productivity-related metrics
Zero emissions – Sustainability-related metrics
There are many examples of different performance measures that can be found under the categories listed above. We have identified several categories and subcategories of metrics that are particularly relevant to smart manufacturing. We have also discussed a number of themes that may be used for the classification of the metrics. These themes can be broadly organized into three groups as shown in Fig. 3: content, scope, and context. The content information focuses on performance-evaluation methodology. The scope information identifies the functional and physical hierarchical levels (described in IEC 62264-1 [60] and IEC 62264-3 [62], respectively) where the performance measure is applicable. The context information details the different characteristics, as shown in Fig. 3. If the target of a metric is a process, then production methodology identifies the relation of the metric to a particular type of production, such as batch, continuous, or discrete. If the target of a metric is a product, then the production methodology identifies the flow of the products through single path, multiple path, or a network structure as described in ISO 22400-1 [63].
Figure 3.
Classification themes for performance measure repository.
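One way to realize the content, scope, and context grouping in a repository is a structured record per metric. The sketch below is a minimal, non-normative schema; all field names and the example values are assumptions drawn from the themes discussed above, not a format defined by any standard.

```python
# Minimal sketch of a repository record organized by the three theme
# groups. This is an assumed schema, not one defined by any standard.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Content:
    name: str
    description: str
    unit_of_measure: str
    unit_type: str            # e.g., physical, monetary, dimensionless
    trend: str                # e.g., "lower is better"
    formula: Optional[str] = None

@dataclass
class Scope:
    hierarchy_levels: List[str]  # e.g., machine, line, facility
    functions: List[str]         # e.g., production, quality

@dataclass
class Context:
    category: str             # agility, asset utilization, sustainability
    calculation_timing: str   # real time, periodical, on-demand
    decision_type: str        # strategic, tactical, operational
    orientation: str          # leading or lagging

@dataclass
class MetricRecord:
    content: Content
    scope: Scope
    context: Context

# Example record for a changeover-time metric (values are illustrative).
changeover = MetricRecord(
    content=Content(
        name="Changeover time",
        description="Time to convert a machine from one product to another",
        unit_of_measure="minutes",
        unit_type="physical",
        trend="lower is better",
    ),
    scope=Scope(hierarchy_levels=["machine", "line"], functions=["production"]),
    context=Context(
        category="agility",
        calculation_timing="real time",
        decision_type="operational",
        orientation="lagging",
    ),
)
print(changeover.content.trend)  # lower is better
```

Separating the three groups lets a repository query by any theme independently, e.g., all operational, real-time agility metrics at the machine level.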
After identifying metrics, themes, and groupings, we have generated a classification scheme that is demonstrated with a sample set of metrics in Tables 1–3, which represent the three theme groups: content, scope, and context. These themes are neither exhaustive nor widely accepted; however, in the absence of a standard systematization, they give structure to the classification.
Table 1.
Metrics – Content.
| Name | Order Lead Time | Manufacturing Agility | Water Discharged | Set up Time | Total Cost | Processing energy consumption per year | Volume flexibility - range | Changeover time | Performance efficiency |
|---|---|---|---|---|---|---|---|---|---|
| Description | The time that elapses between the receipt of the customer’s order and the delivery of the goods | A performance index that measures the capacity of a company to maintain its competitiveness in an environment characterized by constant and unpredictable change | Water sent back out into the world, including heated and treated water | Time required to set up a process | Total cost associated with the process being performed | Total energy consumed during all processing periods over one year | The range of output volumes at which a process can run profitably | Process of converting a line or machine from running one product to another | A measurement of the actual operating speed of a machine relative to its designed operating speed, irrespective of availability or quality |
| Formula | - | - | - | - | - | - | - | - | - |
| Unit of Measure | Time unit | (Capability index) | Volume per time | Time per process | Currency unit | Kilowatt hours | Parts/time | Time unit | Speed unit |
| Unit of Measure Type | Physical | Dimensionless | Physical | Physical | Monetary | Physical | Physical | Physical | Physical |
| Trend | Lower is better | Higher is better | Lower is better | Lower is better | Lower is better | Lower is better | Higher is better | Lower is better | Higher is better |
Table 3.
Metrics – Context.
| Name | Order Lead Time | Manufacturing Agility | Water Discharged | Set up Time | Total Cost | Processing energy consumption per year | Volume flexibility - range | Changeover time | Performance efficiency |
|---|---|---|---|---|---|---|---|---|---|
| Time | Real time | Real time, periodical | Real time | Real time, periodical | On-demand, periodical | Periodical | Periodical | Real time | Periodical |
| Lead/Lag | Lag | Lag | Lead, lag | Lag | Lead, lag | Lead, lag | Lead, lag | Lag | Lag |
| Metric Value Type | Quantitative | Quantitative | Quantitative | Quantitative | Quantitative | Quantitative | Quantitative | Quantitative | Quantitative |
| Audience | Internal, external | Internal | Internal, external | Internal | Internal, external | Internal | Internal | Internal | Internal |
| Target | Process | Process, product, other | Process, product | Process, product, other | Process, product, other | Process | Process, product | Process | Process, product |
| Production Methodology | Batch, continuous, discrete | Batch, continuous, discrete; single path, multiple path, network structure | Batch, continuous, discrete; single path, multiple path, network structure | Batch, continuous, discrete; single path, multiple path, network structure | Batch, continuous, discrete | Batch, continuous, discrete | Batch, continuous, discrete; single path, multiple path, network structure | Batch, continuous, discrete | Batch, continuous, discrete; single path, multiple path, network structure |
| Reference | Self-referenced | Self-referenced | Benchmark | Self-referenced, benchmark | Self-referenced | Self-referenced, benchmark | Self-referenced | Self-referenced | Self-referenced |
4. Conceptual Model for Performance Metrics
A conceptual model describing the input data and the unit models underlying performance metrics is needed to explain system-level requirements for smart manufacturing. Such a model includes data composition, measurement bandwidth, storage requirements, and computational resources. As previously noted, the performance of SMS can be based on three inter-related criteria: agility, asset utilization, and sustainability. These criteria inherently involve evaluating a subset of metrics according to the specific characteristics of the organization and/or industry involved. Because these criteria may draw upon similar notions of resource utilization, productivity, quality, and so on, individual metrics may be classified under more than one criterion. It is also important to recognize that the data required for a specific performance metric may be shared among multiple metrics. For example, the cycle time of a process may affect agility, asset utilization, and sustainability in different ways. This observation is important when developing metrics: a metric may not map to a single, unique criterion. The system data can come in the form of information obtained at multiple levels of the spatial hierarchy shown in Fig. 2. These data need to be aggregated, scaled, normalized, and analyzed to support decision making at various levels of the enterprise hierarchy.
The system data needed for the computational models underlying smart manufacturing performance metrics can be obtained from a number of sources. These data are stored digitally in databases and systems used to monitor and control operations at the enterprise, factory, cell, and process levels, and they are typically heterogeneous. At the highest system level (enterprise, or physical level 6), information pertaining to the performance of the supply chain and manufacturing system is stored in enterprise resource planning (ERP) systems. Example data at this level include order placement/fulfillment dates, on-hand/backorder inventory levels, material transit information, labor activity, and defect/rework levels. Data pertaining to the manufacturing process at the lowest system level (process, or physical level 1) are available as raw or processed information from deployed sensing equipment, such as dynamometers and thermal and displacement sensors. Computational models are needed to integrate these various data elements into readily interpretable performance metrics, and these levels can be mapped onto the hierarchy shown in Fig. 2.
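As a minimal sketch of the aggregation just described, the following rolls process-level energy readings up the physical hierarchy to line and factory totals. The key structure and all numbers are invented for illustration; a real system would draw on the heterogeneous sources discussed above.

```python
# Process-level sensor readings, keyed by (factory, line, machine).
# All identifiers and values are hypothetical.
process_energy_kwh = {
    ("factory_A", "line_1", "m1"): 120.0,
    ("factory_A", "line_1", "m2"): 95.5,
    ("factory_A", "line_2", "m3"): 210.0,
    ("factory_B", "line_1", "m4"): 80.0,
}

def roll_up(readings, depth):
    """Aggregate leaf readings up to the first `depth` key components."""
    totals = {}
    for key, value in readings.items():
        totals[key[:depth]] = totals.get(key[:depth], 0.0) + value
    return totals

line_totals = roll_up(process_energy_kwh, depth=2)     # per line
factory_totals = roll_up(process_energy_kwh, depth=1)  # per factory

print(factory_totals)
# {('factory_A',): 425.5, ('factory_B',): 80.0}
```

Scaling and normalization (e.g., energy per unit produced) would follow the same pattern, dividing each total by a production count drawn from the corresponding hierarchy level.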
5. Discussion
5.1 Selection and use of metrics for smart manufacturing
The selection of metrics for smart manufacturing needs to be based on the value drivers for manufacturing. Three of those drivers discussed in this paper are agility, asset utilization, and sustainability. The entire manufacturing organization needs to clearly focus on these value drivers and such a focus can be achieved via careful selection and use of performance metrics. The metrics need to integrate across temporal and spatial levels to maintain focus on the defined goals over time and across the hierarchy of the organization. The vision and strategy identified by the top levels of the organization should continually support tracking and improvements in the value drivers and their balance with other measures, such as the financial performance.
The classification scheme developed in this paper can help in selecting performance metrics based on the strategic goals related to the value drivers. An initial selection of metrics can be made by filtering on the classification dimensions set to the desired context. For example, decision makers can identify metrics for agility and the desired subcategory of flexibility as a first step. The selected group of metrics can then be filtered further by domain (e.g., supply chain) and by subdomain (e.g., quality and source). An integrated set can then be selected by moving successively through the physical hierarchy, starting from system data at the process level up through the enterprise and supply chain levels. The metrics set can be narrowed further based on the content and context dimensions. For example, the context for a selected source supplier can be defined using the capabilities of their information systems to track data in real time or periodically, the goals of the stakeholders across the organization hierarchy, the target process, or the production methodology used in the organization. The filtered set based on content, scope, and context can then be analyzed further using tools that implement the conceptual model for metrics. This successive filtering can be viewed as a top-down approach relating top-level objectives to a subset of metrics.
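The successive filtering can be sketched in a few lines. The repository rows and attribute values below are hypothetical examples, not entries from an actual metrics database:

```python
# A toy metrics repository; attribute names mirror the classification
# dimensions, values are illustrative only.
repository = [
    {"name": "Volume flexibility - range", "category": "agility",
     "subcategory": "flexibility", "domain": "enterprise"},
    {"name": "Order Lead Time", "category": "agility",
     "subcategory": "adaptability", "domain": "supply chain"},
    {"name": "Water Discharged", "category": "sustainability",
     "subcategory": "environmental", "domain": "life cycle"},
]

def filter_metrics(metrics, **criteria):
    """Keep metrics whose attributes match every given criterion."""
    return [m for m in metrics
            if all(m.get(k) == v for k, v in criteria.items())]

# Step 1: filter by value driver; step 2: narrow by subcategory.
agile = filter_metrics(repository, category="agility")
flexible = filter_metrics(agile, subcategory="flexibility")

print([m["name"] for m in flexible])  # ['Volume flexibility - range']
```

Further steps (domain, hierarchy level, content, context) would chain additional `filter_metrics` calls in the same way.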
The conceptual model can take the filtered set, developed based on the classification scheme, and select the integrated set of metrics based on additional information required. The additional information may include data at the lowest level collected from sensors and tracking systems and the linkages to performance metrics and decision factors at successive higher levels. Some metrics may be immediately filtered out due to the lack of available data. One can build a conceptual model that can identify the algebraic operators and computational models that can link the available system data to a subset of filtered metrics. In this sense, the conceptual model can proceed from the bottom-up using available system data to correspond with the top-down approach used earlier to move from decision-maker objectives to the filtered subset of metrics. The resulting metrics from these combined top-down and bottom-up approaches can then be linked together with tools to support the monitoring and control across the organizational hierarchy. Such monitoring and associated analytics can enable appropriate and timely control actions, which improves the overall system performance. The selection of the right set of integrated metrics can thus enable smart manufacturing.
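A minimal sketch of the bottom-up screening, assuming hypothetical data-requirement sets for three of the sample metrics; a metric is dropped when any of its required inputs is absent from the data actually collected:

```python
# Hypothetical mapping from metric to the system data it requires.
required_data = {
    "Order Lead Time": {"order_receipt_date", "delivery_date"},
    "Processing energy per year": {"machine_power_draw", "processing_time"},
    "Water Discharged": {"effluent_flow_meter"},
}

# Data streams this (invented) organization actually collects.
available_data = {"order_receipt_date", "delivery_date", "machine_power_draw"}

# Metrics whose full requirement set is available.
computable = [name for name, needs in required_data.items()
              if needs <= available_data]

# For the rest, report what is missing - this identifies needed
# sensors or tracking systems, as discussed below.
missing = {name: needs - available_data
           for name, needs in required_data.items()
           if not needs <= available_data}

print(computable)  # ['Order Lead Time']
print(missing)
```

The `missing` map is exactly the kind of output that can guide investment in additional data collection infrastructure.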
The selection process can also identify the need for data collection systems, sensors, and computational tools. While the above example assumed that a feasible set existed that allowed the linking of decision objectives to metrics and system data, it is quite possible that such a feasible set cannot be found due to a lack of one or more of the components in this scheme. The desired data collection system and sensors may not have been implemented to generate the data required. Alternatively, the data might exist, but the organization may be lacking the computational tools to generate the metrics from the data. Another scenario is the lack of decision-support systems requiring human intervention and discipline for processing the information provided by multiple metrics. The metrics selection approach can thus identify the infrastructure required to support decision making to help move the organization towards the identified goals and thus towards smart manufacturing.
5.2 Future research
The proposed approach for selecting and using performance metrics based on the presented classification scheme requires further research. It needs to be implemented on a comprehensive set of metrics, and collecting and classifying such a set may have to be carried out successively for different sectors within manufacturing. Similarly, developing the conceptual model using process algebra, an algebraic approach to the study of concurrent processes, will be beneficial. Such a conceptual model will support a systematic implementation of a data-to-decision platform for smart manufacturing.
There are larger research issues beyond those involved in further development and implementation of the classification scheme and conceptual model. The rapid advances in information technology in manufacturing are leading to the availability of a large amount of data, which has inundated personnel across the enterprise and product life cycle. The lack of systems that process and present the data in a manner supportive of decision making means that a large amount of data may go unprocessed. More importantly, managing such large datasets is unnecessary when smaller amounts of targeted data may yield equally (if not more) valuable information. Research is needed to identify the pertinent and potentially useful information within the large amount of data being collected. This will require efforts in the following major areas:
Identification of approaches that point to the causal relationships between system data and metrics
This requires going beyond calculations of data quantifying the effect (such as inventory levels) to the connection with data on factors causing the effect (such as a faulty dispatching policy due to a degraded sensor).
Identification of the right amount of data to calculate metrics that in turn can support decision making
This requires developing approaches to determine the 6V characteristics of data (Volume, Variety, Velocity, Veracity, Volatility, and Validity) needed to generate different metrics. For example, do inventory levels need to be monitored in real time, or periodically? Do they need to be monitored for each individual part type, or can they be monitored per part family? Do they need to be monitored while parts move from one workstation to the next? Approaches are needed to identify the appropriate level of each of the 6Vs for the data required by different metrics. While the tendency is to collect as much data as possible as technology costs fall, large amounts of data may impede an organization’s ability to refine its data because of the distraction caused by trivial details.
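As a small illustration of one such choice (the Volume dimension), the snippet below aggregates per-part inventory into per-family counts, trading detail for a smaller monitored dataset; all part names and quantities are invented:

```python
# Invented per-part inventory and a part-to-family mapping.
part_inventory = {"bolt_m6": 420, "bolt_m8": 310,
                  "nut_m6": 515, "nut_m8": 290}
part_family = {"bolt_m6": "bolts", "bolt_m8": "bolts",
               "nut_m6": "nuts", "nut_m8": "nuts"}

# Monitoring per family halves the number of tracked series here,
# at the cost of losing per-part detail.
family_inventory = {}
for part, qty in part_inventory.items():
    fam = part_family[part]
    family_inventory[fam] = family_inventory.get(fam, 0) + qty

print(family_inventory)  # {'bolts': 730, 'nuts': 805}
```

Whether the coarser series still supports the decisions at hand is precisely the research question posed above.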
Assuring data quality
Since the performance metrics discussed here are based on data, it will be necessary to specifically address any issues associated with the “quality” of that data. By quality, we mean, at least, that the data are 1) statistically or reliably representative of the process or system being characterized (i.e., repeatable), 2) sufficiently rich in content to be valid over the defined ranges of observation, and 3) independently verified. Research is required to determine means for assuring data quality consistent with current statistical practice.
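Two of the three quality conditions can be sketched as simple checks; the threshold, range, and readings below are illustrative assumptions, not recommended values:

```python
import statistics

def is_repeatable(readings, max_rel_stdev=0.05):
    """Condition 1: relative standard deviation below a chosen threshold."""
    mean = statistics.mean(readings)
    return statistics.stdev(readings) / mean <= max_rel_stdev

def in_valid_range(readings, low, high):
    """Condition 2: every observation inside the defined observation range."""
    return all(low <= r <= high for r in readings)

# Invented repeated cycle-time measurements, in seconds.
cycle_times = [61.8, 62.1, 61.9, 62.4, 62.0]

print(is_repeatable(cycle_times), in_valid_range(cycle_times, 50, 80))
# True True
```

Condition 3 (independent verification) is organizational rather than computational, e.g., comparing against a second, independently collected data stream.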
Development of verification and validation approaches for metrics
Research is required to verify and validate the selections made using the classification schemes, the software tools, and the conceptual model. How does one ensure that the decisions made using the selected metrics were the best possible? Could there have been alternate metrics that would have led to better performance? Another type of verification and validation that is less challenging is to ensure that the metrics are being correctly generated. Approaches are required for alternate ways of calculating the metrics to allow users to cross check their results. Further, such approaches need to be set up for automated verification and validation in the context of smart manufacturing.
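The cross-checking idea can be illustrated by computing an availability-style metric two independent ways and flagging disagreement; the time values are invented, and the two formulas are standard algebraic identities rather than a prescribed method:

```python
planned_time = 480.0   # minutes in the shift (invented)
downtime = 36.0        # stoppage minutes from the maintenance log
uptime = 444.0         # running minutes from an independent machine log

# Route A: directly from logged running time.
availability_a = uptime / planned_time
# Route B: from planned time minus recorded downtime.
availability_b = (planned_time - downtime) / planned_time

# Automated cross-check: the two routes should agree within tolerance;
# a mismatch points to inconsistent source data or a calculation error.
consistent = abs(availability_a - availability_b) <= 1e-6

print(round(availability_a, 3), consistent)  # 0.925 True
```

In an automated verification-and-validation setting, such paired calculations would run continuously and raise an alert whenever the routes diverge.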
Standardization of metrics
Is it possible to standardize metrics associated with high-level objectives across manufacturing? Is it possible to do so across segments, such as automotive manufacturing? Are there unique characteristics of an organization and its environment that prevent the use of a standard set of metrics?
Finally, we propose to demonstrate the concept using case studies with selected metrics. The selected metrics will combine 1) the top-down selection of metrics that support strategic decisions with 2) the bottom-up filtering based on the available data infrastructure in the manufacturing organization. This approach not only helps select metrics based on available data, but it also helps identify the constraints on metrics and directions for investment in additional data collection, storage, and analytic capabilities. This will allow populating a database for the selected environments and decision contexts rather than comprehensively across manufacturing. Similarly, the ontology and computational models will be developed for the selected decision contexts. A demonstration of the approach is intended to encourage other researchers and practitioners to contribute to populating the database and to developing the ontology and computational models.
The above are important research areas that need to be explored. The development and implementation of the classification scheme and conceptual model for performance metrics proposed can help provide the experience and knowledge needed to support such research.
Table 2.
Metrics – Scope.
| Name | Order Lead Time | Manufacturing Agility | Water Discharged | Set up Time | Total Cost | Processing energy consumption per year | Volume flexibility - range | Changeover time | Performance efficiency |
|---|---|---|---|---|---|---|---|---|---|
| Domain – Spatial | Enterprise | Enterprise | - | Enterprise | Life cycle | - | Enterprise | Enterprise | Enterprise |
| Domain – Temporal | Supply chain | Supply chain | Life cycle | Life cycle | Life cycle, supply chain | Life cycle | Life cycle, supply chain | - | - |
| SubDomain – Spatial | Production, inventory, quality, maintenance, business | Production, inventory, quality, maintenance, business | - | Production, inventory, quality, maintenance, business | Production, inventory, quality, maintenance, business | Production, inventory, quality, maintenance, business | Production, inventory, quality, maintenance, business | Production, inventory, quality, maintenance, business | - |
| SubDomain – Temporal | Plan, source, make, deliver | Plan, source, make, deliver | Design, manufacture, use, end of life | Design, manufacture | Design, manufacture, use, end of life; plan, source, make, deliver | Design, manufacture, use, end of life | Design, manufacture, use, end of life; plan, source, make, deliver | - | - |
| Category | Asset utilization | Agility | Sustainability | Agility | Asset utilization | Sustainability, asset utilization | - | - | - |
| SubCategory | Economic, flexibility, adaptability | Environmental, quality, flexibility, adaptability | Environmental | Economic | Economic | Environmental, economic | Adaptability | Economic | Economic, flexibility |
| Physical | Enterprise, factory, line, cell/machine, process | Enterprise, factory, line, cell/machine, process | Enterprise, factory, line, cell/machine, process | Line, cell/machine, tooling and setup, process | Enterprise, factory, line, cell/machine, process | Line, cell/machine, tooling and setup, process | Line, cell/machine, tooling and setup, process | Line, cell/machine, process | Line, cell/machine, process |
| Functional | Business planning and logistics | Manufacturing management, business planning and logistics | Manufacturing management, production process, business planning and logistics | Manufacturing management, business planning and logistics | Production process, manufacturing management | Manufacturing management, business planning and logistics | Production process | Manufacturing management | - |
Acknowledgments
The work described was funded by the United States Government and is not subject to copyright. The authors would like to acknowledge the contributions of KC Morris and Paul Witherell of NIST in reviewing earlier versions of the paper.
Footnotes
Disclaimers
The findings expressed or implied in this report do not necessarily reflect the official view or policy of the U.S. Department of Commerce or the United States Government. Some software products may have been identified in context in this report. This does not imply a recommendation or endorsement of the software products by the authors or NIST, nor does it imply that such software products are necessarily the best available for the purpose.
References
- 1.Jovane F, Koren Y, Boër CR. Present and future of flexible automation: towards new paradigms. CIRP Annals-Manufacturing Technology. 2003;52(2):543–560. [Google Scholar]
- 2.Tian GY, Yin G, Taylor D. Internet-based manufacturing: A review and a new infrastructure for distributed intelligent manufacturing. Journal of Intelligent Manufacturing. 2002;13(5):323–338. [Google Scholar]
- 3.Teti R, Kumara SRT. Intelligent computing methods for manufacturing systems. Ann CIRP. 1997;46(2):629. [Google Scholar]
- 4.Yoshikawa H. Manufacturing and the 21st century — Intelligent manufacturing systems and the renaissance of the manufacturing industry. Technological Forecasting and Social Change. 1995;49(2):195–213. [Google Scholar]
- 5.Kumara S, Kashyap R, Soyster A, editors. AI in Manufacturing Theory and Practice. Atlanta, GA: Institute of Industrial Engineers; 1989. [Google Scholar]
- 6.Balakrishnan A, Kumara S, Sundaresan S. Manufacturing in the Digital Age: Exploiting Information Technologies for Product Realization. Information Systems Frontiers. 1999 Jul;1(1):25–50. [Google Scholar]
- 7.Monostori L, Vancza J, Kumara S. Agent Based Manufacturing Systems. Annals of CIRP. 2006;2 [Google Scholar]
- 8.Nagel R, Dove R. 21st Century Manufacturing Enterprise Strategy. 1 & 2. Iacocca Institute, Lehigh University; 1991. [Google Scholar]
- 9.NGM. Next-Generation Manufacturing - A Framework for Action, Vol. I – Summary Report. NGM Project Office; Bethlehem, PA: 1997. [Google Scholar]
- 10.PCAST. Report to the President on Capturing Domestic Competitive Advantage in Advanced Manufacturing. President’s Council of Advisors on Science and Technology; 2012. [Accessed April 5, 2014]. via: http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast_amp_steering_committee_report_final_july_17_2012.pdf. [Google Scholar]
- 11.Chryssolouris G, Mavrikios D, Papakostas N, Mourtzis D, Michalos G, Georgoulias K. Digital manufacturing: history, perspectives, and outlook. Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture. 2009;223(5):451–462. [Google Scholar]
- 12.SMLC. SMLC Forum: Priorities, infrastructure, and collaboration for implementation of smart manufacturing: Workshop Summary Report. [Accessed April 6, 2014];Smart Manufacturing Leadership Coalition (SMLC) 2012 via: https://smartmanufacturingcoalition.org/sites/default/files/smlc_forum_report_vf_0.pdf.
- 13.NIST. EL Program: Smart Manufacturing Systems Design and Analysis. National Institute of Standards and Technology; 2014. [Accessed March 3, 2015]. via: http://www.nist.gov/el/msid/syseng/upload/SMSDAFY2014.pdf. [Google Scholar]
- 14.Dornfeld DA, Wright PK, Vijayaraghavan A, Helu M. Enabling manufacturing research through interoperability. Transactions of the NAMRI/SME. 2009;37:443–450. [Google Scholar]
- 15.Kumaraguru S, Kulvatunyou B, Morris KC. Integrating real-time analytics and continuous performance management in smart manufacturing systems. Proceedings of Advances in Production Management Systems; 2014. [Google Scholar]
- 16.D’Souza DE, Williams FP. Toward a taxonomy of manufacturing flexibility dimensions. Journal of Operations Management. 2000;18(5):577–593. [Google Scholar]
- 17.Koren Y, Heisel U, Jovane F, Moriwaki T, Pritschow G, Ulsoy G, Van Brussel H. Reconfigurable manufacturing systems. CIRP Annals – Manufacturing Technology. 1999;48(2):527–540. [Google Scholar]
- 18.Duflou JR, Sutherland JW, Dornfeld DA, Herrmann C, Jeswiet J, Kara S, Hauschild M, Kellens K. Towards energy and resource efficient manufacturing: A process and systems approach. CIRP Annals – Manufacturing Technology. 2012;61(2):587–609. [Google Scholar]
- 20.WCED. Report of the World Commission on Environment and Development: Our Common Future. [Accessed Jun 6, 2014];Technical Report Annex to A/42/427, UN World Commission on Environment and Development. 1987 via: http://www.un-documents.net/wced-ocf.htm.
- 21.Lee JY, Lee YT. A framework for a research inventory of sustainability assessment in manufacturing. Journal of Cleaner Production. 2014;79:207–218. [Google Scholar]
- 22.Rachuri S, Lee JH, Narayanan A, Sarkar P, Lyons K, Sriram R, Kemmerer S. Sustainable manufacturing: Metrics, standards, and infrastructure - NIST Workshop Report, NIST Interagency/Internal Report (NISTIR) – 7683. 2010. [Google Scholar]
- 23.Feng SC, Joung CB. Development overview of sustainable manufacturing metrics. Proceedings of the 17th CIRP International Conference on Life Cycle Engineering; 2010. [Google Scholar]
- 24.Sikdar SK. Sustainable development and sustainability metrics. AlChE Journal. 2003;49(8):1928–1932. [Google Scholar]
- 25.Martins AA, Mata TM, Costa CAV, Sikdar SK. Framework for sustainability metrics. Industrial and Engineering Chemistry Research. 2006;46(10):2962–2973. [Google Scholar]
- 26.Lu T, Gupta A, Jayal AD, Badurdeen F, Feng SC, Dillon OW, Jawahir IS. A framework of product and process metrics for sustainable manufacturing. Proceedings of the 8th International Conference on Sustainable Manufacturing; 2010. [Google Scholar]
- 27.Helu M, Dornfeld DA. Principles of Green Manufacturing. In: Dornfeld DA, editor. Green Manufacturing: Fundamentals and Applications. Springer; New York: 2013. [Google Scholar]
- 28.Hallgren M, Olhager J. Lean and agile manufacturing: external and internal drivers and performance outcomes. International Journal of Operations & Production Management. 2009;29(10):976–999. [Google Scholar]
- 29.Sethi AK, Sethi SP. Flexibility in manufacturing: a survey. International Journal of Flexible Manufacturing Systems. 1990;2(4):289–328. [Google Scholar]
- 30.Davis J, Edgar T, Porter J, Bernaden J, Sarli MS. Smart manufacturing, manufacturing intelligence and demand-dynamic performance. Computers & Chemical Engineering. 2012;47:145–156. [Google Scholar]
- 31.Huang SH, Dismukes JP, Shi J, Qi S, Razzak MA, Bodhale R, Robinson DE. Manufacturing productivity improvement using effectiveness metrics and simulation analysis. International Journal of Production Research. 2003;41(3):513–527. [Google Scholar]
- 32.Muchiri P, Pintelon L. Performance measurement using overall equipment effectiveness (OEE): literature review and practical application discussion. International Journal of Production Research. 2008;46(13):3517–3535. [Google Scholar]
- 33.Fiksel J, McDaniel J, Mendenhall C. Measuring progress towards sustainability principles: process and best practices. Ohio: Battelle Memorial Institute; 1999. [Google Scholar]
- 34.Singh RK, Murty HR, Gupta SK, Dikshit AK. An overview of sustainability assessment methodologies. Ecological Indicators. 2009;9(2):189–212. [Google Scholar]
- 35.Joung CB, Carrell J, Sarkar P, Feng SC. Categorization of indicators for sustainable manufacturing. Ecological Indicators. 2013;24:148–157. [Google Scholar]
- 36.Schmidt W-P, Taylor A. Ford of Europe’s product sustainability index. Proceedings of 13th CIRP International Conference on Life Cycle Engineering; 2006. [Google Scholar]
- 37.Reich-Weiser C, Vijayaraghavan A, Dornfeld DA. Metrics for sustainable manufacturing. ASME 2008 International Manufacturing Science and Engineering Conference collocated with the 3rd JSME/ASME International Conference on Materials and Processing. American Society of Mechanical Engineers; 2008. [Google Scholar]
- 38.Dahmus JB, Gutowski TG. An environmental analysis of machining. Proceedings of 2004 international mechanical engineering congress and exposition. American Society of Mechanical Engineers; 2004. [Google Scholar]
- 39.Benkedjouh T, Medjaher K, Zerhouni N, Rechak S. Health assessment and life prediction of cutting tools based on support vector regression. Journal of Intelligent Manufacturing. 2013:1–11. [Google Scholar]
- 40.Robinson S. Doctoral dissertation. University of California; Berkeley: 2013. An Environmental and Economic Trade-Off Analysis of Manufacturing Process Chains to Inform Decision Making for Sustainability. [Google Scholar]
- 41.Jain R. Sustainability: metrics, specific indicators and preference index. Clean Technologies and Environmental Policy. 2005;7(2):71–72. doi: 10.1007/s10098-005-0273-3.2005. [DOI] [Google Scholar]
- 42.Gutowski TG, Sekulic DP, Bakshi BR. Preliminary thoughts on the application of thermodynamics to the development of sustainability criteria. Proceedings of IEEE International Symposium on Sustainable Systems and Technology (ISSST ‘09); [DOI] [Google Scholar]
- 43.Jain S, Rachuri S. NIST Interagency/Internal Report (NISTIR) – 7989. 2014. Maturity model concepts for sustainable manufacturing. [Google Scholar]
- 44.Houda M, Said T. Sustainability metrics for a supply chain: The case of small and medium enterprises. Proceedings of the 4th International Conference on Logistics; 2011. [Google Scholar]
- 45.Lin LC, Li TS. An integrated framework for supply chain performance measurement using six-sigma metrics. Software Quality Journal. 2010;18(3):387–406. [Google Scholar]
- 47.Graedel TE, Allenby BR. Hierarchical metrics for sustainability. Environmental Quality Management. 2002 Winter;:21–30. doi: 10.1002/tqem.1006.2002. [DOI] [Google Scholar]
- 48.Jin X, High KA. Application of hierarchical life cycle impact assessment in the identification of environmental sustainability metrics. Oklahoma State University; 2004. [Accessed December 12, 2016]. via: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.583.6856&rep=rep1&type=pdf. [Google Scholar]
- 49.Fiksel J, Eason T, Frederickson H. A framework for sustainability indicators at EPA. National Risk Management Research Laboratory, Office of Research and Development, U.S. Environmental Protection Agency; 2012. [Accessed December 3, 2014]. Report EPA/600/R/12/687. via: http://epa.gov/sustainability/docs/framework-for-sustainability-indicators-at-epa.pdf. [Google Scholar]
- 50.SCC. Supply Chain Operations Reference Model, Revision 11.0. Supply Chain Council; 2012. [Google Scholar]
- 51.Blackburn C, Valerdi R. Navigating the metrics landscape: An introductory literature guide to metric selection, implementation, & decision making. Proceedings of 7th Annual Conference on Systems Engineering Research 2009 (CSER 2009); Loughborough University; 2009. [Google Scholar]
- 52.Tangen S. An overview of frequently used performance measures. Work study. 2003;52(7):347–354. [Google Scholar]
- 53.Tangen S. Improving the performance of a performance measure. Measuring Business Excellence. 2005;9(2):4–11. [Google Scholar]
- 54.Medori D, Steeple D. A framework for auditing and enhancing performance measurement systems. International Journal of Operations & Production Management. 2000;20(5):520–533. [Google Scholar]
- 55.Tangen S. Performance measurement: from philosophy to practice. International Journal of Productivity and Performance Management. 2004;53(8):726–737. [Google Scholar]
- 56.McNair CJ, Mosconi W, Norris JF. Beyond the bottom line - Measuring world class performance. Business One Irwin; Homewood, IL: 1989. [Google Scholar]
- 57.Helu M, Vijayaraghavan A, Dornfeld DA. Evaluating the relationship between use phase environmental impacts and manufacturing process precision. CIRP Annals – Manufacturing Technology. 2011;60(1):49–52. [Google Scholar]
- 58.Helu M, Behmann B, Meier H, Dornfeld D, Lanza G, Schulze V. Total cost analysis of process time reduction as a green machining strategy. CIRP Annals – Manufacturing Technology. 2012;61(1):55–58. [Google Scholar]
- 59.ARC. Collaborative Manufacturing Management, Reference Sheet CMMREF-V0904-a. ARC Advisory Group; 2014. [Accessed February 17, 2015]. via: http://www.arcweb.com/brochures/Collaborative%20Manufacturing%20Management%20Ref%20Sheet.pdf. [Google Scholar]
- 60.International Standards Organization (ISO) IEC 62264-1:2013 Enterprise-Control System Integration -- Part 1: Models and Terminology. 2013. [Google Scholar]
- 61.International Society of Automation (ISA) ANSI/ISA-95.00.01-2010 (IEC 62264-1 Mod) Enterprise-Control System Integration - Part 1: Models and Terminology. 2010. [Google Scholar]
- 62.International Standards Organization (ISO) IEC 62264-3:2007 Enterprise-Control System Integration – Part 3: Activity Models of Manufacturing Operations Management. 2007. [Google Scholar]
- 63.International Standards Organization (ISO) ISO 22400-1:2014(en). Automation systems and integration -- Key performance indicators (KPIs) for manufacturing operations management -- Part 1: Overview, concepts and terminology. 2014. [Google Scholar]