Journal of Research of the National Bureau of Standards
1985 Jul-Aug;90(4):305–317. doi: 10.6028/jres.090.019

Metrics and Techniques to Measure Microcomputer Productivity

Wilma M Osborne 1, Lynne Rosenthal 1
PMCID: PMC6692284  PMID: 34566158

Abstract

While it is generally assumed that the use of microcomputers helps to improve productivity in an office environment, quantitative measures in this area are lacking. This paper addresses the measurement of the effect on productivity in an end user, office environment as a result of the introduction of microcomputer-based technology. It is concerned with defining how productivity can be measured in such an environment and with current efforts to measure changes in productivity. It identifies and assesses the various techniques and measures used to describe the magnitude of productivity improvements that result from the use of microcomputers in the workplace, and makes recommendations for ways in which changes in productivity may be measured.

Keywords: added-value, automation, efficiency, measure, methodology, metrics, microcomputers, office environment, productivity, qualitative measurement, quantitative measurement

Introduction

The role of information processing has changed with the introduction of the microcomputer into the workplace. The microcomputer has become a “tool” that often enables the user to directly control information processing needs without the assistance of the professional ADP staff. The relatively low cost of microcomputers, their adaptability to various applications, and the availability of software which is useful to non-computer professionals have resulted in the proliferation and use of microcomputers in the office environment.

While there is general agreement that microcomputers do increase office productivity, there is little hard, quantifiable data to actually support this claim. Many companies have published reports of substantial improvements to productivity, but few have attempted to actually measure the changes, and fewer still have achieved any reliable measurements. By and large, the published reports have drawn the conclusion of productivity gains based on the perception of the managers and workers in the affected office environments.

The uses of microcomputers by professional and clerical staffs overlap but are functionally different. The clerical staff primarily utilizes word processing, which can be measured at the individual level (e.g., number of words typed within a specific period). It appears, however, that the primary benefits to the professional staff are not in speeding up the information flow, but in improving the depth of analysis and understanding of the available information. Therefore, professional staff productivity should be measured in terms of the information handled and processed. Through spread sheet analysis and accounting and financial systems, the professional can better understand the significance of the information and thus, can make more informed decisions which result in improved performance of the entire organization.

This paper addresses the measurement of the effect on productivity in an end user, office environment as a result of the introduction of microcomputer-based technology. It analyzes and assesses the various measurement techniques presented and makes recommendations on how to measure productivity gains in the functional workforce. Measuring productivity changes in an office environment primarily involves the assessment of the impact of new technology on qualitative factors which cannot be measured directly. Thus, the inability to assign a numerical value to a factor such as “efficiency” limits the measurement and makes it highly subjective. In general, if the product or service is completed faster; if the quality and performance are improved; if there are noticeable, desirable differences in the process; then there are productivity gains for the organization.

Productivity is often equated with efficiency (reducing unit costs, improving output per hour, reducing errors, etc.) or quantity of work performed. Quality of output may, in fact, be far more important than quantity when measuring and assessing traditional white collar worker performance since accurate communication of information is the primary function of such workers. The usefulness and effectiveness of this output is determined by such qualitative characteristics as “efficiency,” “completeness,” and “effectiveness.” Although these attributes are discernible to those involved, they are not directly measurable and thus, an assessment of changes in productivity is much more difficult to make. Productivity measurement can be facilitated if the input properties and the output properties are clearly defined. This is more difficult to do in a white collar environment since most managerial and professional work does not have well-defined, measurable inputs or well-defined outputs. Thus, the usefulness of any productivity measurement is dependent upon the accuracy of the perceptions of the qualitative factors which are used.

The introduction and use of microcomputers have increased the productivity of users, but measuring these increases has not been easy. Much of the difficulty is due to the fact that techniques for measuring productivity are neither well-known nor well-defined. Further, the benefits do not always appear as single, discernible entities; rather, they are often small gains across many tasks. As a result, few organizations have initiated productivity measurements, citing the lack of money, time, knowledge, or an inclination to perform a comprehensive, scientific study. The situation is further complicated by the lack of information needed to measure productivity.

One of the perennial problems is the difficulty in measuring the productivity of value-added activities. Inevitably when a manual system is automated, many new functions and products are generated, usually without additional resources and at negligible cost. These intangible byproducts of automation, which did not exist previously, can enhance a manager’s decision making ability. Another problem in measuring productivity gains is the omission of some of the cost of converting from manual to automated methods, and a comparison of these costs against the benefits. Some of the costs typically omitted are site preparation, training, and ongoing software maintenance. Quantifying professional or managerial productivity has proven to be difficult, particularly for such activities as: preparation for and attending meetings, reading and writing reports, responding to telephone calls, etc.

Prior to implementing a productivity measurement program, there are a number of factors to be considered. While the methodology (presented in table 5) is recommended for measuring changes in productivity, it may not be feasible depending on organizational constraints. These constraints include:

Table 5.

Productivity measurement techniques.

Questionnaire/Survey
- Before and after measurement¹
- Assessment of need
- Quantitative metrics
- Qualitative measures
- Methodology/formula

Empirical Analysis
- Before and after measurement
- Intuitive
- Quantitative metrics
- Qualitative measures

¹ The establishment of a baseline level of productivity prior to the introduction of the new technology is essential to the success of a “before and after” assessment.

  • the unavailability of measurement data/information. It is difficult to measure changes in productivity in the absence of baseline data. If the information needed to conduct such a study is incomplete or unavailable, it may be better to rely on the judgment of the users of the microcomputers.

  • the lack of well-defined productivity measurement techniques. Measures of productivity that are appropriate for the different levels and functions in the organization must be well-defined.

  • the cost, time, and effort to effectively measure productivity.

  • the lack of a means to capture much of what is done in an office environment.

  • the size of the organization or project, and the type of applications. If the environment is small, the cost of such a study could outweigh the benefits.

  • the lack of (active) participation in the productivity measurement program by all impacted by the introduction of microcomputers.

Once the decision has been made to measure changes in productivity, it is often desirable to implement a productivity improvement program to maximize productivity gains resulting from microcomputer use.

Quite simply, organizations are as concerned with what a productivity program is going to cost as with what it is going to return. Knowing how to spend funds where the most benefit can be realized, however, is sometimes difficult. Before productivity can be effectively measured, there must be a thorough understanding of what is being done, the activities performed, for whom, and why. While the old maxim, “you can’t improve what you can’t measure” is not entirely true, it is important to develop a strategy for defining what to measure, when to measure, and how to use the measurement data.

As the activities performed in the functional workplace become more complex and technical in nature, and ready access to reliable information becomes more critical, it is essential to make better use of each individual’s time. The effectiveness of any productivity measure is determined by how accurately it reflects what is taking place with respect to the ability of the individual, and ultimately the organization, to perform tasks easier, faster, and better.

Measuring Changes in Productivity

Techniques for measuring changes in productivity rely on two types of measurements: quantitative and qualitative. Quantitative techniques generally measure quantities of work over some unit of time such as “pieces per hour,” “person-hours per completed product,” or “defects per unit of time.” These types of measurements are fairly easy to quantify and are typical of measurements made when investigating productivity changes in a production, blue-collar environment. Qualitative techniques are those which address less tangible attributes such as “quality,” “effectiveness,” and “efficiency.” Measurement of such attributes is very difficult because it is highly subjective.

Assessing productivity and changes resulting from the introduction of microcomputers primarily requires the measuring or estimating of qualitative attributes rather than quantitative factors. While it is not possible to obtain highly definitive qualitative measurements, it is possible to assess the relative changes in productivity through a careful and consistent assessment of selected qualitative attributes before and after the introduction of the new technology.

Measuring changes in productivity can be done at the global (organization), local (functional unit), or individual level. Measurement of quantitative factors can be successfully performed at the lowest individual levels since exact counts can be obtained for specific factors (number of letters typed, hours worked, number of forms processed, etc.). Assessment of qualitative attributes, however, is more reliable at a higher level within an organization since the individual variances and inconsistencies will tend to balance out.
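As a simple hypothetical illustration of why qualitative assessments are more reliable at higher levels, the sketch below averages invented 1-to-5 survey ratings of a subjective attribute; individual scores vary widely, but unit and organizational means are far less sensitive to any single respondent. The unit names and ratings are assumptions for illustration, not data from this study.

```python
# Hypothetical survey ratings (1-5) of perceived "efficiency".
# Individual scores vary, but aggregation at the unit and
# organizational level tends to balance out those variances.
from statistics import mean

unit_a = [4, 2, 5, 3, 4, 3]   # assumed individual ratings, unit A
unit_b = [3, 4, 3, 4, 2, 5]   # assumed individual ratings, unit B

unit_a_score = mean(unit_a)           # unit-level assessment
unit_b_score = mean(unit_b)
org_score    = mean(unit_a + unit_b)  # organizational assessment
```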

Based on the literature search which was conducted as part of this task, there is a great deal of interest in productivity measurement in general and the effects of microcomputers on productivity in particular. This literature search included more than 200 sources, of which 47 appear at the end of this paper as a selected bibliography. Many organizations have initiated efforts to determine the effects of microcomputers on individual and organizational productivity. However, there is a dearth of information on any actual quantifiable changes. These reports consistently discuss the difficulties in performing such measurement and usually conclude that there has been improvement in productivity on a global level as a result of the introduction of microcomputers.

Some of the techniques discussed later in this section have been proposed to provide quantifiable measurements. Organizations have been reluctant, however, to embark on the effort required to make such measurements, citing the cost, lack of time, lack of information, and lack of proof that these techniques will actually provide usable results. Thus, the consensus is that microcomputers have improved productivity, but there is little actual documentation to support that conclusion.

What Should be Measured

Productivity measurement, particularly of qualitative items, can be a costly endeavor. In a small organization with limited resources, it may be unreasonable to undertake a comprehensive study either to determine productivity gains or to determine how to achieve them. Regardless of the size of the organization, it may be determined after a careful evaluation of organization goals, objectives, and requirements that a productivity measurement program simply would not be cost effective. Another instance in which a program to measure changes in productivity may be unnecessary is when it is obvious that the use of microcomputers has resulted in improved productivity.

The attributes and factors to be measured must be selected and carefully defined. Since each organization is unique, the attributes and factors selected for measurement will differ. After the technology has been introduced, sufficient time should be allowed to elapse before conducting the second productivity measurement. This is important because there may be a short-term decline in productivity which occurs while the users are learning to use the new technology properly. The baseline and new productivity levels should then be carefully evaluated and an assessment made on the relative productivity changes which have been realized.

Factors that complicate measurement

Several factors frequently complicate the measurements:

  • Value-added activities may not be adequately measured. When a manual system is automated, many new functions and products may be generated without requiring additional resources or cost. These activities, while difficult to measure, should be taken into consideration.

  • A number of cost factors are frequently omitted when measuring productivity gains. Included are those costs associated with the conversion from manual to automated procedures such as: site preparation, training, and maintenance of both the hardware and the software systems.

  • The introduction of new technology frequently results in substantial changes to the office environment. Personnel become responsible for different or additional parts of the process, duties shift, and some work may be eliminated while new work is created. Thus, attempts to measure individual productivity changes are often a case of comparing apples and oranges. For this reason, the assessment should be made at the organizational level rather than the individual level.

Factors and attributes for measuring productivity

The following three tables list some of the more commonly used factors and attributes used in evaluating productivity. Table 1 identifies tangible, quantitative factors which can be measured directly. Table 2 and table 3 identify less tangible, more subjective, qualitative attributes which should be assessed in attempting to evaluate the productivity of an organization. Section three provides a discussion of several case studies which make use of many of the factors and attributes identified in the tables below.

Table 1.

Factors that can be measured and quantified to determine productivity gains.

- workload
- schedules
- cost/budget
- end products
- training cost
- size of staff
- methods/techniques
- response/turnaround time
- time to perform a specific task
- number of new requests/alternatives examined
- outputs before and after using microcomputers
- amount of data handled, sorted, and calculated

Table 2.

Attributes which can be measured but not easily quantified to assess productivity gains.

- accuracy
- efficiency
- reliability
- completeness
- user acceptance
- data accessibility
- value added capabilities
- improved analysis (budget, trends, etc.)
- timeliness of reports/tickler files/information

Table 3.

Attributes which are not easily measured nor quantifiable to determine productivity gains.

- control
- flexibility
- communication
- attitude and morale
- quality of decisions
- new insights and learning
- better understanding of business
- effectiveness (of team work, etc.)
- quality of presentations (graphic displays, etc.)

The productivity improvements most frequently mentioned as a result of using microcomputers are increased workload, new or more work accomplished in a shorter time, and cost savings. Improved accuracy, efficiency, quality, and attitude and morale are also cited as benefits of microcomputer use. Not all of the factors and attributes identified may be appropriate to a specific situation, while others not found in these tables may be critical in specific environments.

Criteria to measure desired outcomes

The ultimate objective of introducing microcomputers is to increase the productivity of an organization. This may be accomplished by reducing costs, avoiding increases in costs, increasing value-added activities/products, increasing employee satisfaction, or becoming more competitive. Our findings indicate that the identification of desired outcomes and the selection of appropriate measurement criteria are essential to the success of any productivity measurement program. The desired outcome, more than any other factor, influences the choice of criteria for measuring the outcome and determining if goals have been achieved. Table 4 identifies some possible criteria for different objectives and goals.

Table 4.

Criteria to measure desired outcomes for OA projects.

Desired Outcome / Possible Criteria
- To increase organizational productivity: Total output in number of units produced as a function of labor, investment, etc., measured in dollars
- To reduce or avoid costs: Cost of labor, materials, and overhead
- To increase value-added with products/services: Contribution to profits from improved products/services
- To increase managerial productivity: Time required to complete tasks and level of individual, unit, and organization productivity
- To increase timeliness of information: Average and variance of time to prepare/distribute information
- To increase quality of information: Quality, accuracy, and completeness of information used to generate products
- To provide more job satisfaction: Turnover or absenteeism

Measurement Techniques

Measurable improvements in productivity generally can be attributed to a combination of human resource and technological factors. Therefore, any effort to determine changes in productivity as a result of the introduction of microcomputers should consider both the work environment (equipment and tools) and the employee effectiveness (training, education, attitude).

There are few effective measurement techniques available for measuring productivity in the office environment. The qualitative measurements which are most useful in assessing changes in office productivity are the most difficult to obtain. The quantitative measuring techniques can be characterized as a comparison of INPUT/OUTPUT before and after microcomputers are used. Although the INPUT is generally well defined, the difficult aspect in performing this type of measurement in the functional workplace is quantifying the OUTPUT.

The most commonly used techniques for measuring changes in productivity are Questionnaires and Empirical Analysis (see table 5). The primary difference between these two approaches is that the latter relies more on intuitive knowledge and information gained through experience and less on a systematic, structured methodology. Both of these techniques make use of before and after information concerning the qualitative and quantitative aspects of the process and the products; both are relatively easy to employ; both can be administered formally or informally; and both can be used for almost any size and type of environment/organization.

Questionnaire technique

The questionnaire/survey method of assessing changes in productivity resulting from the introduction of microcomputers appears to be one of the most useful. One of the advantages provided by the technique is that it can readily be adapted to measure global, as well as localized changes in productivity. The questionnaire can be used to gather information about characteristics and functions of the organization; projected requirements and desired features; work process, equipment, and products; and profile data on individual performance. It can also be used to isolate problem areas, determine employee attitudes, and solicit suggestions.

Another advantage provided by the questionnaire/survey method is that it can be used to obtain information from multiple levels of the organization. This approach makes it possible to obtain information about perceived qualitative, as well as quantitative, changes in productivity. This is essential since the perceptions of the changes may differ. In fact, everyone who either uses microcomputers, or is impacted by changes in the environment as a result of their use, should be surveyed. Properly employed, it is one of the most effective techniques for obtaining, comparing, evaluating, and measuring changes in productivity within the organization.

Empirical analysis measuring technique

Empirical analysis is employed to measure productivity before and after the introduction of microcomputers in the workplace. Empirical analysis relies primarily on experience and observations about a particular area or environment, and does not generally make use of systematic methods or methodologies. It may be performed by weighing and measuring the applicable, quantitative or tangible factors such as those identified in table 1. The weights are generally assigned to the variables to be measured based on the value and function of that variable within the specific environment. Qualitative factors may also be taken into consideration. Using empirical analysis, a baseline is established against which all changes are measured.
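The weighted computation described above might be sketched as follows. The paper does not prescribe a specific formula, so the factor names, weights, and measurements here are all hypothetical; each tangible factor (of the kind listed in table 1) is normalized against its baseline value and weighted by its importance within the environment, giving an index of 1.0 for the baseline period.

```python
# Hedged sketch of a weighted empirical-analysis index.  Factors,
# weights, and values are assumptions, not data from the paper.

def productivity_index(current, baseline, weights):
    # Normalize each factor against its baseline, then apply the
    # weight assigned to that factor within this environment.
    # An index of 1.0 means no change from the baseline period.
    return sum(w * current[k] / baseline[k] for k, w in weights.items())

weights  = {"forms_processed": 0.5, "tasks_completed": 0.3, "requests_handled": 0.2}
baseline = {"forms_processed": 120, "tasks_completed": 40, "requests_handled": 10}
after    = {"forms_processed": 150, "tasks_completed": 44, "requests_handled": 15}

index = productivity_index(after, baseline, weights)   # > 1.0 indicates a gain
```

The weights should sum to 1.0 so that the index reads directly as a multiple of baseline productivity.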

Recommendations on measurement techniques

When using either the questionnaire or the empirical analysis technique to measure productivity, it is important to focus on the extent of productivity changes at the global level, as opposed to the short term or incremental increases at the local level. The introduction of any new technology involves a learning period during which there may actually be a decline in productivity. Consequently, sufficient time must be allowed after microcomputers are introduced into the organization to permit a stable state of operation to be established before the productivity impact is assessed. The usefulness of these techniques is dependent upon the accuracy and completeness of the information collected, and the skill and knowledge of the surveyor or evaluator.

Table 6 contains a set of recommended steps which should be followed in measuring productivity changes in an office environment.

Table 6.

Steps in measuring changes in productivity.

A. Planning and Preparing to Measure Change
 1. Determine the feasibility of measuring productivity changes in the specific environment.
 2. Develop a definition of productivity for the specific organizational environment.
 3. Identify the attributes to be used in measuring productivity and define a method of quantifying the subjective qualitative attributes.
 4. Develop and implement productivity measures at the global or organizational level rather than at an individual level to ensure a reliable assessment.
B. Performing Measurements
 1. Perform a measurement to establish the baseline productivity level.
 2. Introduce the new technology.
 3. Permit a sufficient period of time to elapse to allow any short term decreases in productivity to be eliminated.
 4. Re-measure the productivity levels.
 5. Carefully evaluate the results to ensure that the results are interpreted within the context of the organizational environment. The evaluation should be performed at the highest, global level possible to avoid local aberrations and biases.
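The before-and-after comparison in part B of the table can be sketched as follows; the measurement values are invented for illustration, and the settling period of step B.3 is assumed to have already elapsed before the second measurement is taken.

```python
# Hypothetical sketch of Table 6, part B: measure a baseline,
# introduce the technology, allow the learning period to pass,
# re-measure, and evaluate the relative change.

def relative_change(baseline, remeasured):
    # Fractional change from the baseline productivity level.
    return (remeasured - baseline) / baseline

baseline_output = 200   # assumed units produced per week, before
settled_output  = 230   # assumed units per week, after settling in

gain = relative_change(baseline_output, settled_output)   # fraction of baseline
```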

Methodology for Measuring Productivity

There are a number of approaches for determining a methodology for measuring productivity changes. One approach is to evaluate attempts by other organizations to measure productivity, while another is to determine the estimated cost of such a program, and then compare this cost with the expected gains in productivity. Whether or not either of these methods is employed, it is essential to determine the goals or desired outcomes of a productivity measurement program for the specific office environment. See table 4 for some desired outcome measurement criteria.

Preparing to measure change

The first step in preparing to measure changes in productivity after defining the goals is to determine the feasibility of such a program. This requires that those involved have a thorough understanding of what is meant by productivity, of the activities to be measured, and of the areas likely to benefit most from the application of productivity measures. Since productivity has different meanings in different environments, it is essential to establish a definition that is suitable for the environment that is to be measured.

Each environment is unique and the attributes and factors useful in one may not be appropriate for another. This is especially true in the case of qualitative, subjective attributes such as reliability, accessibility, and efficiency. Therefore, it is necessary to define not only a set of factors and attributes (productivity indicators), but a method for quantifying the qualitative attributes. It is also essential that changes in productivity be measured at the global or organizational level, as well as the local or individual level to ensure the least amount of bias. If there is little or no assurance that a program to measure changes in productivity can be justified from a budgetary standpoint, the costs are likely to outweigh the gains. Finally, if it is determined that the implementation of such a program is not feasible, then planning should cease.

Performing Measurements

Once the decision has been made to proceed, it is essential to establish and measure the baselined activities before and after microcomputers are introduced. After a sufficient time has elapsed, there should be a careful assessment and evaluation of any changes in productivity. This process should be repeated over a period of time at unscheduled intervals and at sufficiently high levels within the organization to preclude constrained or otherwise unrepresentative conclusions. Table 6 describes a methodology for both preparing for and performing productivity measurements.

Case Studies

As part of our study, we surveyed a number of organizations to determine their experiences in measuring productivity. We found that:

  • Virtually every organization reported substantial gains in productivity.

  • Most organizations and studies which reported gains in productivity from the use of microcomputers based that claim on a perceived improvement and the subjective judgment of management.

  • Actual, quantitative measurement studies were either not conducted or did not yield quantifiable results.

This section presents the findings of eight case studies. Five of these are based on reports which are listed at the beginning of each case study, and three on information obtained directly from the organization. Table 7 provides a matrix which identifies the factors and attributes referenced in each of the following case studies.

Table 7.

Case Study Matrix.

Firm / Table 1 Factors Measured / Table 2/3 Attributes Assessed

1. NSRDC: workload, staff time, etc. / timeliness, efficiency, quality
2. GAO: workload, manpower cost, number of hours, data handled / improved analysis, efficiency
3. GSA: manpower cost, workload, etc. / qualitative benefits, timeliness, etc.
4. USS: costs, workload, etc. / users/mgrs. best judge of how much productivity is achieved (no detail)
5. American Productivity Center
   Bethlehem Steel: (none listed) / improved morale, timeliness, communication, reliability (errors reduced), quality of life, teamwork
   Polaroid: costs, workload / (none listed)
6. Banking: cost / value-added capabilities
7. Data Processing and Research: time, workload / (none listed)
8. Brokerage Firm: cost / user acceptance

While these eight case studies provide some insight on the different approaches used to determine changes in productivity, they demonstrate the difficulty in identifying and measuring subjective attributes. Each of the case studies references increased workload, new or more work accomplished in a shorter time, and cost savings as a result of using microcomputers. In many environments, microcomputers are credited with increasing productivity if schedules are met in a timely manner, if the workload is handled without additional staff, and if the products are acceptable and of high quality, without the benefit of any “formal” measurements. Table 7 lists factors or indicators identified in the selected case studies.

As indicated, very little quantitative data is available and most of the conclusions of improved productivity are based on intuitive belief, not on firm, scientific measurement.

1. Naval Ship Research And Development Center

Van Eseltine, R.T., “The Scientific/Engineering Workstation Experiment: Plans and Progress,” Proceedings of the 22nd Annual Technical Symposium of the Washington, DC, Chapter of the ACM, Cosponsored by the National Bureau of Standards, June 1983. (Questionnaire and Empirical Analysis)

As part of the Technical Office Automation and Communication (TOFACS) project, the David W. Taylor Naval Ship Research and Development Center has undertaken a study of the effects of individual scientific/engineering workstations (SEWs) on the productivity of scientists and engineers. A prototype network of SEWs was developed to assess the changes in productivity which could result from the introduction of SEWs. Initial results of this research indicate that workstations are viable tools which aid productivity in a scientific and engineering environment.

The technique used in this study was to have the subjects being studied perform the evaluation and assessment of their own before and after productivity levels. A rather complex formula was used which basically involved applying values to attributes before and after the introduction of the workstations, and then calculating the change in the ratio of the Output to Input totals. Quantitative data was gathered by having the technical staff assess the changes in how long it took to complete a typical task with and without the scientific and engineering workstation. Most of the other data was more subjective (qualitative) in nature and less easy to quantify. The use of the subjects to evaluate their changes in productivity resulted in more consistent assessments on an individual basis, but may have also introduced biases which could affect the findings. In general, this method appeared to work satisfactorily in this R&D laboratory environment and the general methodology could be applicable to other similar measurement attempts.
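The paper does not publish the NSRDC formula or data, but the computation it describes (totalling attribute values for outputs and inputs before and after the workstations were introduced, then taking the change in the Output/Input ratio) might be sketched as follows, with all attribute values invented for illustration.

```python
# Hedged sketch of the Output/Input ratio change described above.
# All attribute values are hypothetical.

def oi_ratio(outputs, inputs):
    # Ratio of total output value to total input value.
    return sum(outputs) / sum(inputs)

before = oi_ratio(outputs=[30, 20, 10], inputs=[40, 20])  # pre-workstation
after  = oi_ratio(outputs=[45, 25, 10], inputs=[40, 24])  # post-workstation

change = (after - before) / before   # fractional change in productivity
```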

2. General Accounting Office

“Workstation Project Report To Information Policy Committee,” Directed by Kenneth Pollock, Associate Director of Information Management Systems, General Accounting Office, Internal Study, 1982. (Questionnaire)

The GAO initiated an electronic workstation project to determine if the installation of workstations “could be cost effective at GAO in performing various auditor functions.” The auditing (workload) functions are described and activities are divided into categories for the purpose of measurement. Benefits and problems are discussed. A matrix of automatable and non-automatable activities was defined and provided the basis for determining how best to utilize the workstations.

This report discusses the study and the methods employed and concludes that an approximate 25% increase in the capacity to perform audit functions was realized as a result of the introduction of the electronic workstations. The basic measurement unit utilized was the number of staff hours actually needed versus the estimated number of hours which would have been required if the electronic workstation had not been introduced. The report contains little information on the actual collection and analysis of data.
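The GAO measurement unit amounts to a simple hours-saved calculation. The figures below are hypothetical, chosen only to illustrate how a 25% capacity gain would arise from this metric.

```python
# GAO-style measurement unit: compare staff hours actually needed with the
# hours estimated for the same audits without the workstation. The numbers
# here are hypothetical illustrations, not data from the GAO study.

actual_hours = 800       # hours spent with the electronic workstation
estimated_hours = 1000   # hours judged necessary without it

# Hours saved, expressed as added audit capacity relative to hours spent.
capacity_increase = (estimated_hours - actual_hours) / actual_hours * 100
print(f"Added audit capacity: {capacity_increase:.0f}%")
```

Note that the metric depends entirely on the quality of the estimated hours, which is why the report's lack of detail on data collection matters.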

3. General Services Administration

“Final Report on the GSA End User Computer Pilot Project,” Prepared by the General Services Administration’s End User Computer Support Staff (KGS-1), September 28, 1983. (Empirical Analysis)

The General Services Administration conducted a pilot project to study the effects of the introduction of microcomputers within GSA. The project involved 500 GSA employees using 53 microcomputers and consisted of the automation of 175 applications. The report provides information on the experiences during the project and identifies the actions completed or initiated to facilitate end user computing. The report summarizes both the qualitative and quantitative benefits encountered by the end users. Most improvements were a result of automating manual operations. Specific examples of productivity gains are given in terms of cost savings, man-hours, and increased workloads/tasks.

In discussing the findings, the report notes that in some cases productivity increases were measured “in terms of staff hours, or dollars,” but in others they were not quantifiable because “the microcomputers satisfied requirements that were previously postponed due to staffing shortages.” Direct changes in productivity were measured in terms of changes in staff hours or dollars whenever possible.

However, most of the conclusions drawn in this report are based on qualitative estimates of perceived improvement to the process. Nevertheless, there is a very strong indication that “the use of microcomputers can pay for themselves in less than one year” and can “help provide better, more timely products and more in-depth analysis.”

4. United States Senate

“The Pilot Test of Office Automation Equipment in the Offices of United States Senators,” Committee on Rules and Administration, United States Senate, S-Prt. 98–120, November 1983. (Questionnaire, Empirical Analysis, and Before/After Measurement)

This pilot study focused on office automation in the U.S. Senate offices. The method used to measure productivity gains was a before and after analysis of the functional requirements and the day-to-day office workload. Participants were asked to complete forms which identified the areas that could be improved most by automation. They were also asked to rank each of the areas in terms of importance to the performance of their responsibilities. Guidelines were developed to ensure that everyone recorded the same types of information in assessing changes as a result of automation. The key aspect of the productivity measurement program was the requirement that productivity goals and cost justification be established for each workstation to be installed.

The test demonstrated that the staff could quickly learn to use the equipment and put it to productive use. The report does not present any detailed information on measuring productivity changes but does state that “in the final analysis, the actual users and office managers are the best judges of how much improvement has been achieved.”

5. American Productivity Center

“White-Collar Productivity: The National Challenge and Case Studies,” sponsored by Steelcase Inc., Grand Rapids, Michigan, 1982. (Questionnaire and Empirical Analysis)

Steelcase Inc. commissioned the American Productivity Center, a nationally recognized expert on productivity issues, to conduct a study on productivity in the workplace. During this six month study, the Productivity Center sent survey questionnaires to 600 U.S. firms and received 140 responses. The study is based on those responses and includes 25 case studies selected on the basis of the techniques used to assess and measure productivity gains resulting from the introduction of office automation. While this study does not identify any unique measurement techniques, it does suggest that almost any productivity improvement program, no matter how unstructured, can result in increased productivity.

Word processing was found to be the most effective factor in improving office productivity. Other factors consistently cited were team building and the work environment. The productivity measurement programs of two of the most representative case studies are described and evaluated below:

a). Bethlehem Steel (Methodology/Intuitive)

Bethlehem Steel initiated a Productivity through Office Systems (PROS) effort to improve productivity of the 400-person sales force and their support personnel. The thrust of PROS, which was aimed at the secretarial force, involved the introduction of both an office automation system and a “Quality of Work Life” (QWL) methodology. The QWL methodology program encourages greater employee participation, and provides for training in all aspects of an office that contribute to the “quality of life” in the office. As part of this effort, monthly PROS meetings were held to address team building, problem solving, and other issues which can strengthen employee capabilities. No formal techniques or measurements were undertaken either before or after the office automation system was installed. There is a strong perception, however, that there were substantial gains in productivity which could be attributed to both office automation and to the QWL methodology program. The reported subjective estimates of the productivity improvements were:

  • increased output 20%

  • more timely delivery 80%

  • credibility of offices 20%

  • morale improved 20%

  • task difficulty reduced 20%

  • communication improved 50%

  • space more effectively used 25%

  • response time reduced 80%

  • errors reduced 5%

  • quality of service enhanced 50%

b). Polaroid (Needs Assessment)

Polaroid, spurred by a reduction of 4000 employees between 1978 and 1982, established an Office Technology Council to determine how to “better manage and utilize emerging office technologies, reduce cost and enhance the effectiveness and productivity of personnel.” As a first task, the Council developed and implemented a seven-step “Needs Assessment Methodology” to justify the acquisition of new technology. An examination was made of the personnel, workload, and tools needed within the organization to accomplish its mission; data was gathered and evaluated; and a summary of the qualitative benefits expected from the application of electronic technology was made. Polaroid considers this type of assessment helpful in identifying methods for improving productivity prior to the introduction of new technology. The seven steps of the methodology are:

  1. Orientation (Overall mission, functions, needs, equipment, costs)

  2. Professional activity profile

  3. Administrative profile

  4. Administrative reporting

  5. Detailed workload

  6. Word processing benefits summary

  7. Financial Analysis worksheet

6. Banking (Empirical Analysis)

A large northeastern bank is currently using 75 microcomputers for such varied applications as: budget and financial analysis, gas and oil studies, balance sheet reporting, and custom tailored accounts. Although a formal study has not been conducted, this firm believes that significant gains in productivity have been achieved and that microcomputers have proven to be very cost effective.

7. Data Processing and Research (Empirical Analysis)

A large data processing firm provides services for 50 major U.S. banks. Some of the applications handled by the firm include a significant amount of file transfer (from micro to mainframe), extensive word processing, budget analysis, and value-added processing. A much greater workload (volume of transactions) is now being handled in a shorter period of time due to the use of microcomputers. No formal study has been conducted, but the firm accepts that the use of microcomputers has resulted in increased productivity.

8. Brokerage Firm (Empirical Analysis)

A large New York brokerage firm makes extensive use of microcomputers to handle customer accounts around the country. While no formal studies have been undertaken, customers have indicated that the use of microcomputers generally results in productivity gains and that the replication of successful microcomputer applications would increase both productivity and cost effectiveness. Overall, this firm has been successful in the introduction of microcomputers. However, concerns were expressed that there may not always be sufficient control and coordination of this process.

As stated in the introduction to this section, two distinct conclusions can be drawn from the available information on the measurement of the effect on productivity from the introduction of microcomputer-based technology into the office environment.

  • Nearly everyone claims to have obtained significant improvements in productivity.

  • Virtually no one has successfully measured and quantified those changes.

There is strong, anecdotal, circumstantial evidence that most of the claims of increased productivity are correct. There is, however, little reason to accept the specific percentages which are cited. Nevertheless, the strong perception of improved productivity has been sufficient to justify the further acquisition of microcomputers in many organizations.

Improving Productivity: Management Considerations

Once the meaning of productivity is understood, the next step is to ensure that the environment is conducive to the performance of high quality work, and that the responsibilities for a high level of productivity are well-defined. Individuals should have a clear understanding of their job responsibilities and should be held accountable for results. They must be made to feel that they make a difference. If productivity is a major organizational concern, then the individuals within the organization must also be a major concern. This implies that there must be a feeling engendered among the workers that they are: valued, trusted, challenged, making a contribution, and involved in decisions affecting them.

Increases in productivity and profitability, however, cannot be achieved simply with the acquisition of new and better technology. Such acquisitions should be accompanied by a management commitment to implement a cost-justified, strategic, system integration approach which addresses the human or social aspects of automation on both the individual and the organization. Too often, productivity has little or no meaning for the individual since it is viewed as an attempt by management to impose more procedures and controls, and ultimately, more work with little additional remuneration. There is likely to be little incentive to use new tools and techniques or to improve performance, if a feeling exists that productivity measures are only being taken to increase the organization’s image or profits at the expense of the worker.

Improvements in productivity require upper management to play an active role in the productivity improvement program and also requires that the affected individuals realize benefits from the changes. The introduction and use of microcomputer and related hardware and software must be accompanied by proper and adequate training and regard for the individual’s working environment.

In order to achieve significant productivity gains, there must be an integrated, cost-justified, program designed to achieve improvements in predefined areas which would benefit most by the improved productivity. The areas most frequently cited as being in need of improvement are: management, incentives for individuals, work environments, tools, training, software quality and software maintenance. Therefore, the initiation of a comprehensive productivity improvement program can be both a costly and long term effort which requires careful planning, coordination, and the cooperation of all concerned.

It is essential to plan for some recognizable gain within the first two years, substantially more within the next two or three, and depending upon the extent and cost of the effort to achieve an increase in productivity, there should be still more gain within the next five years. A well-planned productivity program should have a substantial payoff within three to ten years.

Implementing a productivity program, however, is not without some risks. If it is not well-planned, there is likely to be excessive optimism and overestimation of potential productivity gains. If the efforts are not well-coordinated, there may be only spotty success, resulting in little overall benefit to the organization. If it is viewed negatively, the workers may decide to thwart its implementation, causing it to fail. And finally, unless there is cooperation between each level of management, professionals, clericals, end users, and others for whom the program is intended, there is little likelihood for success.

Table 8 outlines the basic steps which should be taken to establish a Productivity Improvement Program within an organization. Adherence to this program will help ensure that the measurement of changes in productivity, using the metrics discussed earlier in this paper, will result in definite gains in productivity from the introduction of current technology into an office environment.

Table 8.

Establishing a productivity improvement program.

1. Determine how the tasks/jobs are currently done.
2. Determine where performance can be improved.
3. Define the level of productivity gain expected.
4. Perform a cost-benefit analysis to determine if this is a feasible expectation.
5. Understand that introducing automation requires significant up front costs.
6. Take future inflation into account.
7. Determine the amount to be committed to achieve productivity gains.
8. Determine for whom equipment will be used. In white collar domains, there are clearly two sets of users: professional and clerical.
9. Determine for which applications equipment will be used.
10. Determine what kind of equipment will be used.
11. Evaluate large, difficult to maintain programs for possible replacement with off the shelf packages.
12. Evaluate activities that require substantial resources, i.e., OR, simulation, job scheduling, for possible replacement by more efficient ones.
13. Run a pilot to substantiate the expected improvements.
14. If the pilot is successful, go forward with the whole program.
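Steps 4 through 7 of the table above involve a cost-benefit judgment: up-front automation costs must be weighed against expected savings, with future inflation taken into account. A minimal sketch of that calculation follows; all dollar figures and the inflation rate are hypothetical assumptions, not values from the paper.

```python
# Minimal cost-benefit sketch for steps 4-7 of the program: up-front
# automation costs recovered from annual productivity savings, with each
# year's savings deflated to today's dollars at an assumed inflation rate.
# All figures are hypothetical.

def payback_years(up_front_cost, annual_saving, inflation_rate):
    """Years until cumulative inflation-adjusted savings cover the cost."""
    cumulative, year = 0.0, 0
    while cumulative < up_front_cost:
        year += 1
        # Express this year's saving in today's dollars.
        cumulative += annual_saving / (1 + inflation_rate) ** year
    return year

years = payback_years(up_front_cost=50_000, annual_saving=20_000,
                      inflation_rate=0.05)
print(f"Investment recovered in year {years}")
```

Under these assumptions the investment is recovered in the third year, comfortably inside the three-to-ten-year payoff window the paper suggests for a well-planned program.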

Summary and Conclusions

While no definitive techniques for measuring productivity changes in a functional workplace were identified, this study did find overwhelming support for the idea that the introduction of microcomputers to the functional workforce will result in significant improvements in productivity. The “measurement” techniques employed have usually been highly subjective and sensitive to biasing factors which can make the cited statistics highly suspect. The most that can be concluded is that microcomputers appear to increase productivity and that this perception is very widely held.

A few organizations have identified a set of measures unique to their environments. Some of these, as in the case of NSRDC, may be conceptually transferred to other environments. However, with the exception of the traditional method of measuring inputs and outputs (before and after), there are virtually no universally accepted productivity measures for use in an office environment.

Summary

Our findings indicate that there are several key factors which should be clearly understood when attempting to evaluate and measure the productivity within an organization. These findings are summarized below:

• Few Effective Measures

The measurement of changes in productivity in a production environment is a well-understood process; however, the measurement of such changes in an office environment is much more difficult to quantify. While various methods have been proposed for measuring changes in productivity resulting from the introduction of microcomputers, very few are effective.

The useful measures are those which primarily address qualitative aspects of the environment and the work produced. These measures are highly subjective and thus, must also be evaluated in light of the methods and techniques used to produce the measurements.

• Qualitative Measures Should Be Global

Too often, an attempt is made to measure productivity at the atomic or detailed level. The introduction of microcomputers may affect how and what individuals in an organization do to varying degrees. Indeed, the actual work (throughput) of some individuals may appear to decrease. While this would seem to contraindicate the use of microcomputers in this case, an examination at a higher, macro level, may show that there has been a resultant overall improvement in productivity.

The key to a successful productivity improvement program is to define what is expected in terms of productivity gains, specifically, what and how to measure, and then proceed. The best measurements of productivity changes in an office environment are qualitative. These measures are the most accurate when employed at an organizational level. It should be clearly understood that the goal of any productivity program is to improve the overall productivity of the organization. Thus, the measurements should be made at the global level, and indications of increases or decreases in productivity at individual levels should be evaluated within the context of the productivity changes of the entire organization.

• Measurements Must Be Made Against an Established Baseline

Results of measurement techniques can be highly suspect since the items being measured or counted are often subject to various conflicting interpretations. Regardless of the method or technique used, it is essential to measure changes in productivity against an established baseline. The questionnaire method appears to be the most useful in obtaining, comparing, evaluating, and measuring relative changes in productivity against a baseline. This method provides for gathering information on qualitative, as well as quantitative changes in productivity.
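The questionnaire method described above compares the same scored questions before and after the change. A minimal sketch of that comparison follows; the questions, the 1-to-5 scoring scale, and all ratings are hypothetical illustrations.

```python
# Sketch of questionnaire-based measurement against a baseline: the same
# questions are scored (here on a hypothetical 1-5 scale) before and after
# microcomputers are introduced, and relative change is computed per
# question. Questions and ratings are invented for illustration.

baseline = {"timeliness": 2.8, "report quality": 3.1, "workload handled": 2.5}
followup = {"timeliness": 3.9, "report quality": 3.8, "workload handled": 3.6}

changes = {
    question: (followup[question] - baseline[question])
              / baseline[question] * 100
    for question in baseline
}
for question, pct in changes.items():
    print(f"{question}: {pct:+.0f}% relative to baseline")
```

Because each question is scored the same way at both points in time, qualitative impressions become relative changes that can be compared across respondents, which is the baseline discipline the text calls essential.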

• Allow Sufficient Time Before Measuring Productivity Changes

The introduction of any new technology involves a learning period during which there may actually be a decline in productivity. On the other hand, the excitement of being involved in an “experiment,” such as the introduction of microcomputers in an organization, may also lead to an increase in productivity. The effect, however, may be more pronounced immediately after the introduction of microcomputers than later. Consequently, sufficient time must be allowed after microcomputers are introduced into the organization to permit a stable state of operation to be established before the productivity impact is assessed.

• Measurements Should Be Carefully Evaluated

Improper or undisciplined use of microcomputers can be counterproductive due to the wrong work being done or to new work being created which is not needed. Thus, the measurement of the effects of microcomputer use should be carefully evaluated. Care should be taken to evaluate the impact on the entire organization, not just the directly affected individual. In addition, measurements must be made over a long enough time period to balance any short term drops or rises in productivity.

Microcomputer use within the organization decentralizes the computing resources and enables users (ADP professionals as well as those without previous ADP experience) to be in direct control of their information processing activities. The user has a wide range of information on which to base decisions and can make those decisions quickly and accurately. Consequently, the user can perform his job more efficiently and effectively. While microcomputers are frequently used as stand-alone, general purpose tools, they are rapidly becoming a means of accessing large scale systems and other microcomputers. The link between the microcomputer and the mainframe provides the user with a broader range of capabilities and greater potential for productivity gains. The result is that work is done faster, the quality of reports and other documents is improved, and work is performed which was previously not possible.

Conclusions

The primary purpose for measuring changes in productivity resulting from the use of microcomputers is to provide evidence of their cost-effectiveness and impact on the organization. This can be accomplished in a number of ways ranging from a cursory examination of activities and products before and after the introduction of microcomputers, to a plan which encompasses the recommended methodology outlined in table 6. We strongly recommend the use of this methodology to determine first, whether to initiate a measurement program; and secondly, how to go about performing the measurements.

A significant number of Federal Government and private sector organizations have introduced microcomputers and subsequently have been able to identify gains in overall productivity. Although many of these organizations introduced microcomputers without the benefit of a prior productivity measurement study or program, evidence of their cost effectiveness and impact on the organization has come from both managers and end users. The management of these organizations is convinced that improvements in products, services, efficiency, morale, and numerous other areas occurred as a result of using microcomputers.

Prior to initiating a study or program to measure changes in productivity, some consideration should be given to the feasibility and cost benefits of such a program (see step 1 of table 6). If it is determined that a productivity measurement program is not feasible, or if there is little or no assurance that such a program could be justified from a budgetary standpoint, then it may be better to rely on the judgement of the managers and users of the microcomputers.

Therefore, while a study or program to measure changes in productivity as a result of using microcomputers may be a worthwhile endeavor, it is not necessary or appropriate for every environment.

Biography

About the Authors: Wilma M. Osborne and Lynne Rosenthal are with NBS’ Institute for Computer Sciences and Technology.

Selected Bibliography

  1. Abrams, Marshall D., “Using the Desktop Computer for Project Management,” Proceedings of the 22nd Annual Technical Symposium of the Washington, DC, Chapter of the ACM, co-sponsored by the National Bureau of Standards, June 1983.
  2. Bartino, Jim, “Study Takes Exception To Belief That Firms Don’t Control Micros,” Computerworld, Volume 17, Number 51, December 19, 1983, pp 1, 6.
  3. Basili, V., and Zelkowitz, M., “Analyzing Medium Scale Software Development,” Proceedings of the 3rd International Conference on Software Engineering, IEEE, 1978, pp 116.
  4. Boczany, William J., “Justifying Office Automation,” Journal of Systems Management, July 1983, pp 15–19.
  5. Boehm, Barry W., “The TRW Software Productivity System,” September 1983.
  6. Brown, Bruce R., “Productivity Measurement in Software Engineering,” prepared by Social Security Administration (SSA), Office of System Integration, Software Technology and Engineering Center Staff, June 1983.
  7. Brown, Gary D., and Sefton, Donald H., “The Micros vs The Applications Logjam,” Datamation, January 1984, pp 96–104.
  8. Clucas, Richard, “Are Your Computers Paying Off?,” Personal Computing, December 1983, pp 118–122, 231, 232.
  9. Cochran, Henry, “Fourth-Generation Languages,” Computerworld, Office Automation Issue, June 15, 1983, Volume 17, Number 24A, pp 47–52.
  10. COMP83a, “Corporate Moves With Micros,” Computerworld, Office Automation Issue, Volume 17, Number 41A, October 12, 1983, pp 13–15.
  11. COMP83b, “DP Managers Say DSS Needed To Tie Together Corporate Micros,” Computerworld, Volume 17, Number 42, October 12, 1983, pp 32.
  12. COMP83c, “Workstation Rules Out Paperwork for Court,” Computerworld, Volume 17, Number 46, November 14, 1983, pp 50.
  13. COMP83d, “Most Senior Accountants Found Using Micro,” Computerworld, Volume 17, Number 46, November 14, 1983, pp 26.
  14. DATA83, “Micros at Big Firms: A Survey,” conducted by Data Decisions, Datamation, November 1983, pp 161–174.
  15. EDP83b, “Future Effects of End User Computing,” EDP Analyzer, Volume 21, Number 11, November 1983, pp 1–12.
  16. Feezor, Betty, “Microcomputers: A Delicate Balance,” Computerworld, Office Automation Issue, Volume 17, Number 32A, August 17, 1983, pp 9–10.
  17. Ferris, David, “The Micro-Mainframe Connection,” Datamation, November 1983, pp 127–138.
  18. Fleming, Maureen, and Silverstein, Jeffrey, “Microcomputers and Productivity: An Analysis of Microcomputer Hardware and Software Usage in Business,” Knowledge Industry Publications, White Plains, New York, 1984.
  19. Fried, Louis, “Nine Principles for Ergonomic Software,” Datamation, November 1983, pp 163–166.
  20. GAO82, “Strong Central Management of Office Automation Will Boost Productivity,” Comptroller General, General Accounting Office report AFMD 82–54, September 1982.
  21. GAO79, “Federal Productivity Suffers Because Word Processing Is Not Well Managed,” General Accounting Office, FGMSD-79–17, Report to Congress of the United States, April 6, 1979.
  22. Gillin, Paul, “One Unified Strategy,” Computerworld, Office Automation Issue, Volume 17, Number 42, October 17, 1983, pp 22.
  23. Goldfield, Randy J., “Achieving Greater White-Collar Productivity in the New Office,” BYTE, May 1983, pp 154–172.
  24. Grabow, Paul C., Noble, William, and Huang, Cheng-Chi, “Reusable Software Implementation Technology Reviews,” prepared by Hughes Aircraft Company, Ground Systems Group, N66001–83-D-0095, FR84–17-660, October 1984.
  25. Horwitt, Elisabeth, “Creating Your Own Solutions,” Business Computer Systems, June 1983, pp 130–141.
  26. IDC84, “Strategies for Microcomputers and Office Systems, Cost Justification of Office Systems,” prepared by IDC Corporate Headquarters for Continuous Information Service Clients, IDC No. 2533, Framingham, MA 01701, July 1984.
  27. Keen, Peter G. W., “Decision Support Systems and Managerial Productivity Analysis,” Sloan School of Management, Massachusetts Institute of Technology, September 1980.
  28. Lambert, G. N., “A Comparative Study of System Response Time On Program Developer Productivity,” IBM Systems Journal, Volume 23, Number 1, 1984, pp 36–43.
  29. Lochovsky, Fred, “Improving Office Productivity: A Technology Perspective,” Proceedings of the IEEE, Volume 71, Number 4, April 1983, pp 512–518.
  30. Lyons, Gordon, “Microcomputers and the Writing of Programs,” Proceedings of Trends and Applications, sponsored by the IEEE Computer Society and the National Bureau of Standards, May 1982, pp 65–68.
  31. Marcus, M. Lynne, “The New Office: More Than You Bargained For,” Computerworld, Office Automation Issue, February 23, 1983, Volume 17, Number 8A, pp 37–44.
  32. Martin, James, Application Development Without Programmers, Prentice-Hall, Inc., New Jersey, 1982, pp 161–177.
  33. Osborne, Wilma, and Rosenthal, Lynne, Metrics and Microcomputer Productivity Bibliography, DOD/ICST Information Technology Productivity Study, National Bureau of Standards, Institute for Computer Sciences and Technology, 1984.
  34. Plasket, Richard, and Wilneff, Paula, “Productivity and DP Management: Losing Control?,” Journal of Systems Management, October 1983, pp 30–33.
  35. Powers, Dick, “Conquering Microphobia,” Computerworld, Office Automation Issue, Volume 17, Number 32A, August 17, 1983, pp 49–50.
  36. Pressman, Roger S., Software Engineering: A Practitioner’s Approach, McGraw-Hill Book Company, New York, NY, 1982, pp 66–69, 164, 173, 329.
  37. Regan, Harry J., “Executive Workstations: Efficiencies and Opportunities for Management,” Proceedings of the 22nd Annual Technical Symposium of the Washington, DC, Chapter of the ACM, co-sponsored by NBS, June 1983.
  38. Rubin, Charles, “Computing In High Places,” Personal Computing, November 1983, pp 77–85.
  39. Ryan, Hugh, “End-User Game Plan,” Datamation, December 1983, pp 241–244.
  40. Scharer, Laura, “User Training: Less is More,” Datamation, July 1983, pp 175–182.
  41. Teger, Sandra L., “Factors Impacting the Evolution of Office Automation,” Proceedings of the IEEE, Volume 71, Number 4, April 1983, pp 503–511.
  42. Thadhani, A. J., “Factors Affecting Programmer Productivity During Application Development,” IBM Systems Journal, Volume 23, Number 1, 1984, pp 19–35.
  43. Tharrington, James M., “How Microcomputers Can Aid in Applications,” Computerworld, November 14, 1983, pp 80.
  44. Walton, William B., “New Support for the End User,” Computerworld, Office Automation Issue, Volume 17, Number 32A, August 17, 1983, pp 27–32.
  45. Young, Arthur, “The Impact of Low Cost Computing Technology on the Department of Defense,” report by Arthur Young and Co., February 8, 1982.
  46. Zack, Robert, and Guthrie, Steven, “The Micro-to-Mainframe Link,” Computerworld, Office Automation Issue, Volume 17, Number 48A, November 30, 1983, pp 11–15.
