CMAJ : Canadian Medical Association Journal. 2008 Jan 29;178(3):273–274. doi: 10.1503/cmaj.071820

Testing the functionality of new medical devices

Stephen Strauss
PMCID: PMC2211343  PMID: 18227439

It's a scenario likely repeated daily across the country. A hospital needs new pieces of equipment that will cost millions of dollars and have to remain in use for a decade or more.

Competing vendors each proclaim their product is not only the best technically, but also the easiest and safest to operate. In the past, to sort out such claims, a hospital might have asked a staff expert to try out the new machinery or sent someone to observe it in use.

But for the last 3 years, institutions have increasingly been turning to the Healthcare Human Factors Group of Toronto's University Health Network for an objective evaluation of the “usability” of competing devices.

Their success at identifying which of several similar machines is most likely to lead to medical error — particularly when used in an often frenetic hospital setting — has allowed the group to become the world's largest hospital-based usability laboratory (the terms usability, ergonomics and human factors are used interchangeably).

Housed in a $6-million facility, the lab now employs 10 full-time staff and 5 graduate students. The not-for-profit Healthcare Human Factors Group claims that one of its great strengths is its access to 3000 University Health Network nurses and 1000 doctors as test subjects.

One classic example of the group's work involved a deliberation by several Toronto-area hospitals over which of 4 competing automated external defibrillators to buy. All the machines were theoretically so simple to operate that manufacturers had been promoting them as an ideal technology for ordinary people responding to cardiac arrests in airports and schools.

But the reality was starkly different. In a simulated emergency, simply getting a machine out of its case proved an embarrassing complication. During the test, nurses who were unfamiliar with one device couldn't find the latch that unhooked its carrying case. Others couldn't figure out which of 2 zippers to unzip to take a different machine out of its case.

This fumbling could have potentially fatal consequences, points out Anjum Chagpar, manager of the Healthcare Human Factors Group. “With every minute that passes, there is a 10% decrease in the likelihood of a successful resuscitation.”

Not only did the tests convince the hospitals which device to buy, they also made them aware of how subjective and flawed their initial impressions had been.

Dr. Rick Cooper, a professor of anesthesia at the University of Toronto who participated in the testing of 3 devices by Chagpar's team, says they went into the evaluation with a “bias based on the specifications of a device and our impressions when we or when experts handled the devices. After the tests were conducted, this was completely turned around,” he says. “Our first choice had previously been ranked as fourth.”

This sort of ranking is not something that all companies necessarily want. “Some have said we don't want our product evaluated, and we don't care if you purchase it,” says Chagpar.

Other vendors have had to be removed from behind the 1-way glass where they were viewing the test procedures because they became agitated watching nurses and doctors make potentially dangerous errors, says Joseph Cafazzo, director of the University Health Network's medical device informatics and health-care human factors team.

Despite the corporate concerns, the lab has become a usability test bed for hospitals and health ministries across the country, as well as for governments and manufacturers elsewhere.

A shining example of the latter is the new “smart” infusion-pump system that the facility helped develop with the American arm of Smiths Group PLC, a London-based company. The process started with pencil-and-paper drawings; 10 iterations and 2 years of work resulted in a full-fledged machine that is currently awaiting U.S. Food and Drug Administration (FDA) approval.

The cost for the group's services ranges from $10 000 to $50 000, depending on the number of devices and their sophistication. The test results are shared with clients, and Chagpar says they hope to start publishing results in peer-reviewed journals in the future.

In a larger sense, the team's efforts represent a realization that human error in operating a device can be a major cause of patient death and injury in an age of sophisticated machinery.

A driving regulatory force has been the FDA's 1997 adoption of a general principle that required medical device manufacturers to “demonstrate adherence to good design practices.”

This has since been expanded into what is known as the IEC 60601-1-6 standard, which sets a similar bar for the usability of medical devices around the world. To meet the standard, companies need both human factors expertise and objectivity about their failures.

Tom Ulseth, Smiths' worldwide marketing manager, says, “There is a lot of value to an objective perspective like that which Toronto brings. When you bring work inside, it becomes too close to you; you become too biased about it working.”

To which Cooper adds: “The devices shouldn't be evaluated by engineers, that is by the people who are designing them. They should be evaluated by the people who are using them.” — Stephen Strauss, Toronto, Ont.


Figure. The props take a break at the Toronto-based University Health Network's device usability testing laboratory. Image by: Stephen Strauss

