Digital Health. 2019 Jan 22;5:2055207619827722. doi: 10.1177/2055207619827722

Three laws for paperlessness

Harold Thimbleby
PMCID: PMC6348492  PMID: 30719323

Short abstract

We are familiar with paper and rarely think much about it, except that in healthcare there seems to be too much of it, and it is slow, inefficient, and so old. In contrast, paperlessness promises the future and freedom from paper’s obvious limitations. We need to think clearly how to ensure paperlessness really improves healthcare, hence three simple laws:

1. Keep in sight the goal of improving healthcare. Paperlessness must be first about improving clinical processes, supporting staff and patients, not about replacing paper with new ‘solutions’.

2. Only implement evidence-based change. Pursue paperlessness only where there is scientific evidence it is better for the real task. Successful paperlessness depends on user centred design and on quality implementation.

3. Plan for cultural change and moving goal posts. Culture has to change to take advantage of technology, and technology is changing at pace regardless. Paperless requires planning for monitoring, improvement, revision and, eventually, obsolescence and further innovation. Pay attention to culture, including regulation, and to developing human skills to exploit new technologies.

Keywords: Paperlessness, digitisation, health IT, patient safety, healthcare improvement, digital transformation

Introduction

Writing on physical material has been the only form of permanent recording of information for millennia, and paper is the paradigmatic material for that purpose. Paper has been around since about 200 BC, when it was invented in China – at least if we do not count its independent invention by wasps, nor papyrus and tapa, which served the same purpose but were invented thousands of years earlier. Some of the earliest paper was used for wrapping, and some of the very earliest writing on paper was for labelling medicines.1

Today, paper is a familiar and mature technology that supports numerous tasks in healthcare, from prescriptions to meeting agendas, from patient notes to patient incident reports, from notices to books.

Computers have long promised to transform our use of information, most obviously by going ‘paperless’ – an aspiration that dates back to the 1970s.2 The UK Secretary of State for Health, the Rt. Hon. Jeremy Hunt, originally promised about £4bn in 2013 for the National Health Service (NHS) to go paperless by 2018,3,4 a goal that has been frequently revised. The UK Wachter report5 explores and extols the benefits of digitisation for healthcare and for the NHS in particular. The report summarises a digital ‘maturity model’, which has a basic readiness level assessing how well an organisation can ‘plan, deliver and optimise the digital systems it needs to operate paper-free at the point of care’. This equates paperlessness with digital maturity.

Replacing paper seems simple, and other industries have not needed nudging to go paperless; both observations suggest that the inertia in healthcare has more to do with culture than with the mechanics of replacing paper.

Just going paperless seems easy, but ensuring improvement requires scientific approaches, and – for it to work as planned and realise efficiency and safety benefits – it must be aligned with culture change. If nothing changes when paperless solutions are introduced, they will be working against existing cultural habits. In particular, much information practice and governance harks back to the last century, and new paperless technologies (such as Skype and WhatsApp) undermine assumptions; staff adopting new technologies may become subversive and hence introduce unanticipated risks. Cultural change is not easy, nor is selecting and implementing paperless technologies to effect change, and hence we need clear laws that help us explicitly think how to make improvement more certain.

Structure and readership

This article samples the tensions and promises between paper and paperlessness. It is designed to stimulate wider discussion, thinking, research and informed debate, and it focuses that debate into three laws for paperlessness. The article has three parts:

  1. Comparison of paper and paperlessness;

  2. Case studies and discussion;

  3. The three laws and subsidiary points arising from them.

The article will be useful for any healthcare organisation developing a digital or paperless strategy, or for organisations in the process of quality improvement implementing digital strategies.

Definitions

Notwithstanding the wide range of meanings and associations, for this article, ‘paper’ and its twin ‘paperlessness’ will be considered to be technologies for recording, editing and sharing information – primarily, for the purposes of this article, clinical information. Consistent with the intention of NHS policy, we take paperlessness to refer to IT-based replacements of conventional paper, otherwise known as digitisation (and thus we exclude telephone, face to face and other strictly paperless forms of activity). However, technologies do not work in isolation; they are embedded in socio-technical systems, processes and cultures: what works is neither paper nor paperless, but the whole system, people, technologies and their activities, which contains paper or paperlessness as components.

Part 1. Comparison of paper and paperlessness

Paper versus paperless: a quick comparison

We grew up with paper and we understand how it works. Paper has entered our language: we read the paper and we paper over problems. Paper is physical, and makes inefficiencies obvious: we can see mountains of paper, thick patient notes and untidy desks. Perhaps we will free ourselves from paperwork by going paperless.

Encyclopaedia Britannica was a very successful company based on paper, producing a world-leading and highly profitable paper encyclopaedia. Then it tried a paperless version (on CDs, as it happened), but this was not commercially successful.6 That is, the company was aware of paperlessness as a ‘good idea’ but, culturally, it was wedded to its old business model, and that held it back. Then the rest of the world went digital, and Britannica had to reinvent itself. At first sight, we may feel sentimental about the loss of a cultural icon, but from the point of view of the rest of the world, paperlessness (at least in this case) has been an improvement for everyone else: encyclopaedic information is now available to people at negligible cost over the internet (albeit alongside a lot of fake information). The question, then, is how to help patients benefit from paperlessness without undermining the infrastructure and expertise needed to operate – which is what Britannica lost.

Another moral is that sensible people get caught out; they find it hard to change to exploit disruptive technologies, and this enables more agile innovators to leave them behind. But many innovators fail too; sensible people get caught out on both sides. There is no reason why we should be immune when considering paperlessness for healthcare.

We understand how paper works, and we have deep cultural knowledge about it. While we may not believe a signature written in block capitals, nobody is clear what a paperless signature should look like or even mean (we reluctantly share passwords, but there is a taboo against copying written signatures). Other useful properties of paper include being reliable: for instance, dropping paper, standing on it, or getting it wet rarely ruins it. Paper rarely disappears while you are writing on it, whereas computer bugs are notorious for losing information unexpectedly.

Pieces of paper are physical and in one place, and are easy to manage. There are no physical limitations on paperlessness. Paperless information may be accessed anywhere in the world. While this is an advantage, it raises security problems, as it is very hard to provide physical security for all computer screens. Paperless systems are therefore notorious for requiring passwords, which slows clinicians down. Barcodes, RFIDs and other technologies can avoid the need for typing (and memorising) passwords, but they may be lost or shared and create misleading data on who is actually using the systems. Paperless information can also be duplicated. This enables many people to have sight of it, but it can lead to problems – I was recently paid double my personal expenses because my employer duplicated my paperless expense claims by accident. Such an error would be hard to achieve with paper.

Tradeoffs and transitions

Because paperlessness is new, it is tempting to see the ‘digital transformation’ to paperlessness as improvement. But paperlessness is not a simple ‘one dimensional’ tradeoff where things can only get better; it is much more complex. Sellen and Harper7 make four useful cost/benefit distinctions:

  1. Affordance. The physical properties of paper that allow us to use it effectively. Our record-keeping and all other processes have evolved to fit closely with the affordance of paper, and paperless systems that do not mirror these properties are necessarily harder and perhaps less reliable to use.

  2. Symbolic problem. Paper symbolises the old and the old-fashioned. Paperlessness symbolises the new and exciting. Healthcare should be driven by evidence, not by symbolism.

  3. Cost problem. Paper and printing have obvious and easy to measure costs, but digital technologies have other costs, including training and obsolescence – and cybersecurity.

  4. Interactional problem. How we use paper – how it limits activities to the location where it is used, how it is costly to distribute, and how it is costly to copy and process information on it – and, for each use, whether these constraints outweigh their benefits (e.g. reliability and physical security).

Sellen and Harper argue that we should not set out to replace paper but should work towards a future that integrates paper and paperlessness, making the best of both worlds, with each working in concert with the other. To do this, we need to be clear what we are trying to do, rather than focusing on how we currently do it on paper and how that might be digitised away. Instead, we have to be very clear what we need to do, as opposed to what we do, because paper itself formed the habits for doing those things in the first place. In any event, it is going to be impossible to transition to an ‘ideal’ paperless world overnight, and therefore, for the foreseeable future, we must have systems that co-exist in concert with both paper and paperlessness.

Speed and efficiency

It has been said, ‘In a world of immediacy, we need the right information when we need it – and not a second later’,8 but this sentiment emphasises the benefits to the recipient of information; it does not consider any benefits to frontline workers, and it does not emphasise benefits to patients. Paperlessness makes it easier for policy makers to analyse huge amounts of data, but it may not improve the quality of the data. Creating data usually makes work at the front line. If systems are not carefully designed, many people will be doing urgent work to collect more information so managers can have the ‘right information when they need it’, and the ‘benefits’ will not be so clear at the point of adding value to the organisation, next to the patient. Indeed, paperlessness increases the gap between front-line staff and managers, a key reason why IT projects fail.9 Paperlessness creates a new workforce of IT specialists who tell everyone else how they must work, because computers are often inflexible.

Healthcare is work, and both paper and paperlessness are ways of supporting that work; clearly, we should only pursue paperlessness when it offers improvements or genuinely new capabilities beyond paper. Going paperless should not be about replacing paper, but about improving processes; indeed, if computers just simulate paper, they tend to make the problems worse because they fail to properly integrate how things are actually done. Furthermore, we need to be clear who benefits, and where those benefits accrue, for each paperless idea.

Comparison with pharmaceuticals

Paperlessness, and IT more generally, is an expensive healthcare intervention with high recurrent costs, and we should treat it at least as cautiously as drug translation (moving laboratory drugs into approved pharmaceutical products). Patient health depends on reliable IT as much as on reliable drugs; in fact, everything relies on IT – from reception to X rays, sterilisation to heating, beds to dispensing.

The pace of paperless innovation leads many to argue that experimental trials take too long. They say that by the time a randomised controlled trial (RCT) on some paperless intervention is completed, the technology will be obsolete. On the contrary:

  1. One does not rush into using drugs on a large scale until there is evidence they are appropriate and effective, even though new drugs may be continually developed during the trials. New drugs do not make evaluation irrelevant. Likewise, the promise of new IT does not devalue RCTs and other methodologies: whatever IT systems we use should be justified, even if we anticipate they will eventually become obsolete.

  2. When bloodletting was popular, when a patient died, the physicians claimed that the patient may have survived if only they had been able to let more blood. The popular arguments for updating IT are worryingly analogous: the old IT did not work, so we need more, newer IT. Bloodletting was popular for hundreds of years, and sensible people believed in it. Upgrading IT without RCTs or other evidence is an identical problem. (However, note that there are many types of IT, and one or two RCTs will not show that paperlessness in general is a good idea.)

We know how to evaluate drugs and translate laboratory promise into hospital practice, yet every paperless innovation is different and poses different scientific challenges to evaluate rigorously. Even with well-understood ideas like antimicrobials, drug translation from test tube to prescription involves expensive, rigorous work, typically taking a decade. In paperless technologies, each innovation changes the rules (consider cloud, blockchain, IoT…) and the appropriate scientific methodologies to evaluate these innovations rigorously in the clinical context are not available – many of the ideas are less than a decade old themselves. Perhaps paperless innovation in healthcare should be as slow as drug development, to allow for developing appropriate methodologies? If so, this slow approach would counter the pace of the consumer market and, as a side effect, reduce the continual up-front costs (including training and infrastructure) of new IT. The goal is not to have the latest paperless technology, surely, but to get the best for patient care – and until it is clinically evaluated, there is no way to know one way or the other.

We recognise that a drug newly developed in the laboratory is exciting, but (largely thanks to thalidomide) we do not let that excitement distract us from the hard work of evaluating the drug’s effectiveness and side-effects. There is an excellent book on the problems with the science of drugs;10 it is a sobering thought that we are even less aware of the quality of the science of paperlessness.

Plan for a permanent gap

There will always be a gap between what we think paperlessness promises today and what disruptive ideas will next appear in the future. This tension creates commercial drivers, but creates a permanent skill and infrastructure gap. The right balance, and the contractual terms behind it, will be hard to work out, let alone keep on top of as the technologies and opportunities develop beyond our current imagination.

Innovation not only creates new ideas that we want in healthcare – stretching the gap – but also creates obsolescence, as manufacturers eliminate products with reducing profit margins. A notorious example is the BBC Domesday Project, started in 1984 to mark the 900th anniversary of the Domesday Book, a nationwide census of life in Britain. The project decided to use laser discs and bespoke software to record its material, but these are no longer usable. Ironically, a 2011 BBC project, Domesday Reloaded, which tried to recover the information (see http://www.bbc.co.uk/history/domesday), uses Flash, a deprecated technology that is no longer supported on many platforms because of its security risks.

In stark contrast, the original vellum Domesday Book is still readable nearly a thousand years after it was recorded.

A permanent gap means that ‘paperless’ can never be 100% achieved, and therefore that successful paperless projects must interoperate with a range of technology generations. Put positively, planning for a permanent gap will help ensure that systems degrade gracefully when they fail, since there will always (by design) be older versions of the system still available for use. Paper is of course the ‘oldest’ technology that will still be running.

Part 2. Case studies and discussion

Two illustrative case studies

Abbreviations and auto-completion

Paperlessness promises less work, though it actually promises less work per job, and hence managers will expect us to get more done in the same time. As we work faster, we have less time to be aware of and to correct our errors. Since we mostly make errors when we are unaware of them (if we were aware of them, we would be much less likely to make them and far more likely to correct them), our conscious experience of the speed/error tradeoff is biased and unlikely to give us a reliable insight into the true tradeoffs.

My father died in hospital from a preventable error, and the error was reported by the cardiologist on Datix, an incident reporting system. The incident was reported on 27 August. My father died on 25 August, so he was already dead when the incident was reported. Yet the report says, ‘full recovery in up to 1 month’.

If the clinician had been writing on a paper form, it is unlikely he would have written that misleading statement. Datix tries to make paperless reporting easier by providing canned texts, but by doing so it creates an aura of certainty in what it reports. Datix provides a menu, and one choice is much the same as another (they are only millimetres apart), so perhaps the cardiologist chose the closest they could – or wanted to – for the situation. Perhaps they wanted to conceal the severity of the incident? In any case, somebody reviewing the report cannot see what the cardiologist was thinking (whatever that was) as what the computer made so easy to write down in full was, on any interpretation, incorrect.

The problem is ubiquitous. Micrograms and milligrams are a thousand times different, but the computer may make writing mg or mcg so easy that the wrong choice is also easy. Sometimes paperless systems auto-complete what is being written, so if the user types mi, the computer may immediately offer crograms or lligrams. Computers may use the context to present the most common choice first, so the order of the choices is unpredictable. Often, just typing space will then autocomplete with either, but unpredictably. As the user ‘knows’ what they typed, they will not thoroughly check what was autocompleted.
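A minimal sketch in Python (the function name and vocabulary are invented for illustration) shows why a prefix like ‘mi’ is dangerous: a naive autocomplete cannot distinguish the two units until more is typed, and which candidate appears first depends only on how the vocabulary happens to be ordered.

```python
def autocomplete(prefix, vocabulary):
    """Naive prefix matching: return every vocabulary entry that
    starts with the prefix, in whatever order the vocabulary uses."""
    return [word for word in vocabulary if word.startswith(prefix)]

units = ["milligrams", "micrograms"]  # a thousand-fold difference in dose

# 'mi' is ambiguous: both units remain candidates.
print(autocomplete("mi", units))   # ['milligrams', 'micrograms']

# Only a longer prefix disambiguates.
print(autocomplete("mic", units))  # ['micrograms']
```

The point is not the matching algorithm but the interface consequence: whichever candidate the system offers first is a single keystroke away from being accepted unchecked.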

In contrast, on paper, the abbreviation ‘mi’ is obviously inappropriate, and anyone reading it would query it. There are many dangerous clinical abbreviations, and the Institute for Safe Medication Practices (ISMP) provides protocols for avoiding them11 but, curiously, paperless systems rarely follow the ISMP’s recommendations.12 An example from ISMP’s list is that 100000 has been mistaken as both 10000 and 1000000; to reduce the likelihood of misreading, you should either use commas (writing 100,000) or write the units (as in 10 thousand). Both of these suggestions take more keystrokes, but they reduce the chances of error. Another example is that the standard Latin abbreviation ‘qn’ (nightly) may be misread as ‘qh’ (hourly). The solution is to write nightly in full – over three times longer, but achieving a much lower risk of error.
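The ISMP suggestions above are trivial to automate. A hedged sketch in Python (the function names are illustrative, not from any standard library):

```python
def with_commas(n):
    """Group digits so 100000 reads unambiguously as 100,000."""
    return f"{n:,}"

def spell_units(n):
    """Write round numbers with the unit word, e.g. '10 thousand',
    falling back to comma grouping otherwise."""
    if n % 1000 == 0:
        return f"{n // 1000} thousand"
    return with_commas(n)

print(with_commas(100000))  # 100,000
print(spell_units(10000))   # 10 thousand
```

Both renderings take more characters than the bare digit string, which is exactly the trade described above: a few extra keystrokes against a much lower risk of a hundred-fold misreading.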

Forms

Both paper and paperless computer forms are familiar, yet they have interesting differences that are easy to overlook.

For example, if a Microsoft Word document is designed as a form and printed off on paper, the form will work as expected. Indeed, what you see on the screen is what will be printed on paper. But if the form is emailed to somebody else, that is, as a paperless form, it will rarely work as expected. A tick box that is easy to tick with a pen on paper might be a square, ‘□’, and changing that square to a tick inside Word may be quite hard if not impossible. The problem is that □ is a symbol, possibly a drawing of a box (perhaps not even a box you can type into), and the user will have to change it to another symbol to fill in the form correctly – and other things may get edited by mistake as the user tries to do this. In contrast, it is really quite hard to modify a paper form by accident.

Or the form may have lines to write your data on, such as:

Name: _______________________ Date: ________

but these lines, which are easy to write over on paper, are just a series of underline characters ‘___’ (or tab lines), and trying to type over them shifts everything to the right and messes up the layout of the form. (If the lines were drawn using tabs, then the shifting is jerky as well.)

Or perhaps an Excel spreadsheet is designed as a form. One cell asks for the user’s phone number, and the user can enter their phone number in the cell next to it. It seems easy, but there are still problems. An Excel form I was sent to register attendance at an international patient safety conference let me enter my phone number, which is +44 7525191956, but it discarded the ‘+’ and displayed it obscurely as 4.47525E+11, and when I tried entering it without the country code (i.e. 07525…) it was displayed as 7525… The problem is that the poorly designed Excel form expected a number, but telephone ‘numbers’ are not mathematical numbers.12 Clearly, the person who designed the Excel form had not tested it adequately, and probably had not been aware of the need to, nor of the extensive literature that would have helped (see Develop mature attitudes to software, below). The example here may seem ‘trivial’, but it illustrates a programming oversight that in a clinical context could induce patient harm. A recent healthcare case that compromised patient care involved misinterpreting US postal codes (ZIP codes) that started with a zero:13 the programmer treated ZIP codes as numbers, so a number with or without a leading zero was the same, but that is true neither of phone numbers nor of ZIP codes (let alone ZIP+4 codes, such as NY 12201–7050, which – if misinterpreted – involves a subtraction).
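The underlying mistake is easy to reproduce in a few lines of Python (a sketch; the helper name is invented): coercing an identifier to a number silently destroys information.

```python
zip_code = "07050"         # a ZIP code with a leading zero

# Numeric coercion drops the leading zero...
as_number = int(zip_code)  # 7050
restored = str(as_number)  # "7050" -- one digit short
assert restored != zip_code

# ...so identifiers should be validated as strings, never converted.
def is_valid_zip(s):
    """Five digits exactly; leading zeros are significant."""
    return len(s) == 5 and s.isdigit()

print(is_valid_zip(zip_code))  # True
print(is_valid_zip(restored))  # False
```

The design lesson is that anything users will never do arithmetic on (phone numbers, ZIP codes, patient identifiers) belongs in a text field, however numeric it looks.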

Paper forms do not need ‘saving’. When you fill in a paper form, it is not going to disappear if you have a break and come back to it later. Yet computer forms regularly disappear because we forget to save them. Why should we have to save a computer form, when paper forms do not even need saving?

We can fill in a paper form in any order we please. For instance, we can read over the whole form to get an idea of what is wanted, then fill in the sections we are able to. There might be a bit in a form we do not yet know the answer to, and we can come back to it later after we have tracked down any information needed.

On many paperless forms, if you do not fill in everything correctly, the computer probably will not even allow you to save it. That means you may have wasted time filling in several pages of the form, only to be faced with an impasse you could not have anticipated.

Often fields are fixed sizes, and paperless fields truncate what the user types — so, say, ‘anaphylactic shock to penicillin’ might become ‘anaphy’ without either the person writing noticing the error nor the person reading it noticing its importance; at least with a paper form, the user can write outside the boxes. Often, computer forms have to be filled in in the order the computer requires: if any information is deemed missing, the computer will not reveal the rest of the form to be completed. Interestingly, paper is not immune to the same truncation problems when computers print on paper; Figure 1 gives an example.
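The truncation failure is mechanical and easy to demonstrate. A sketch in Python (the field width and function names are invented for illustration, not drawn from any real system):

```python
def store_field(text, width=6):
    """A fixed-width field that silently truncates -- the failure
    mode described above, not a recommended design."""
    return text[:width]

note = "anaphylactic shock to penicillin"
print(store_field(note))  # 'anaphy' -- the critical allergy is silently lost

# A safer design rejects over-long input so the writer notices:
def store_field_checked(text, width=6):
    if len(text) > width:
        raise ValueError(f"entry exceeds {width} characters: {text!r}")
    return text
```

The difference between the two versions is exactly the difference between an error nobody sees and an error the writer is forced to resolve at the moment they can still fix it.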

Figure 1.

Paper is not immune from computer bugs. The doctor’s name (Johnstone? Jones?…) and the name of their medical centre, shown in the lower part of the picture, have been truncated by the paper label printer. That the truncation is an unnecessary bug is proved by the label printer successfully printing the patient’s longer name and address in full, without truncation.

Paperlessness makes form creation seem so easy anybody feels they can do it. However, without careful development and testing, paperless forms are not easy to use. Best practice is to trial forms with real users with real tasks and iteratively improve them: there is an international standard, ISO 9241,14 to this effect. There is no excuse for paperless forms to be more of a hindrance than paper forms.

Sample paperless risks

Security

Conventions have been developed over centuries to make paper (and the information on it) secure and reliable. For example, paper can be locked in filing cabinets, and can have watermarks that do not photocopy. Paper can have intricate markings so that it can be used for secure applications like money and legal documents. Paper can be signed (and we have taboos about copying signatures). Paper can be mailed ‘recorded delivery’ to confirm that only intended recipients get it. Paper can be securely shredded.

The worldwide attack by the WannaCry malware is a warning that computer systems can be made inoperable by security problems that do not affect paper. It is possible to burn down one hospital and lose all its patient records, but it is not possible to destroy systems across a country on the same day, as WannaCry came close to doing.

Ironically, printing on paper is a safe and reliable form of backup, and many hospitals routinely print patient records when they are warned there will be IT upgrades, in case the computer systems are unusable at a critical moment.

Elsewhere I have documented a major problem with a paperless system that would not have come to light if it had not been for paper records. The discrepancy between paper records and paperless records led to a criminal investigation, which collapsed at trial because the paperless records were unreliable.15

Invisibility

Paper is visible. We can see what we have read and what we have not read. We can put bookmarks in and easily keep track of where we are in a large pile of paper notes. Crucially, we can easily keep track of what we have not read.

Paperless systems are more obscure: it is hard to see what you cannot see. It sounds obvious when it is said like that, but a computer screen is rarely large enough to show all the critical information, and the user has to scroll to find what they want. Sometimes a user may be unaware that there is critical information that is not visible.

Interoperability

With the notable exception of unreadable prescriptions, paper is essentially interoperable: send somebody information on paper, and it is pretty much a rule that if you could read it, they will be able to. Even if you bought your paper from Xerox and they use Hewlett-Packard paper, there will be no problems.

In contrast, paperless information made in one place may be unusable elsewhere.

Such interoperability problems are due to lack of standardisation, in turn due to human culture, diversity and innovation. For example, a date written as 4/8/18 could mean any of four different dates, which of course is a problem regardless of paperlessness. However, paperlessness increases the risk of disguising the ambiguity and acting on it (e.g. storing the wrong patient age) without anybody noticing.
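The ambiguity is mechanical. In Python, the same string parses to two different dates depending on the assumed convention, and an unambiguous interchange format (ISO 8601) removes the guesswork:

```python
from datetime import datetime

raw = "4/8/18"
uk = datetime.strptime(raw, "%d/%m/%y")  # read as 4 August 2018
us = datetime.strptime(raw, "%m/%d/%y")  # read as 8 April 2018

print(uk != us)                 # True: same string, two different dates
print(uk.strftime("%Y-%m-%d"))  # 2018-08-04, unambiguous in any locale
```

A paperless system that stores and exchanges dates in a single unambiguous format eliminates this class of error at the point of entry, rather than disguising it downstream.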

Paper is ‘inefficient’, gets around slowly and is read by thoughtful people, so it is not a pressing problem that different specialities (physiotherapy, nursing, paediatrics, etc.) use different terminology. However, when paperless information gets everywhere instantly, and is processed by the same computer programs, it is important that terminology is consistent. Paperless ‘interoperability problems’ are thus often symptoms of underlying human cultural interoperability problems. Successful paperless programmes cannot simply replace paper; the cultural differences must be resolved before faster technology spreads their consequences.

Interoperability sounds like a technical term, and hence implies it is a technical problem. No. Lack of interoperability arises because of a failure to harmonise systems and standards, in turn based on a failure to do the research to find out what the best standards might be. Then, in the absence of standards, diversity thrives. The dangers – in unnecessary harms to patients – of this are well-described by Warren.16

By definition, innovation has to be different, and continual IT transformation creates a complex tension in healthcare. We want to solve healthcare problems, but we may be merely buying technology whose new features will in turn become tomorrow’s problems.

Upgrades and bugs

Paper rarely fails, but computer systems often have bugs. Some bugs are obvious, and perhaps can be worked around. Other bugs stop work: crashes are the most obvious, but other bugs may stop things working and perhaps nobody will notice until much harm has been caused. The 2018 breast cancer screening bug17 is an example of a long-standing paperless bug that nobody noticed.

This screening bug was not noticed for a decade and had consequences for 450,000 patients. A clinical trial based on it (AgeX18) assumed the IT was correct – arguably, it was a classic case of experimenting on the clinical factors and ignoring the reliability of the whole system that affects the empirical clinical data. The system had no doubt passed empirical testing, and the bug was not apparent to any subsequent evaluations, yet the bug should have been avoided in the first place by higher quality software engineering. Just as RCTs control for variation in patients and care, rigorous construction of software controls for variation in IT.

Bugs, noticed or unnoticed, should be fixed, which leads to the upgrading problem. Paperless systems may not work while their software is being upgraded (and this can take a significant time), and once upgraded there may be knock-on effects with interoperability with other systems that have not been upgraded. When something like a handheld tablet has its operating system upgraded, many apps may also need upgrading; the process can take hours, and while it is happening full functionality will not be available.

Sample paperless opportunities

Digital rights management

With few exceptions, once you buy a piece of paper it is yours forever. Paper rarely self-destructs or does weird things.

Paperlessness is different. A paperless object can be programmed to expire or disappear when someone tries to access it outside a geographical region (geofencing), such as the hospital premises. Different people may have different views of the ‘same’ data; for example, patient data on gender reassignment may be made invisible to some users. A few unauthorised attempts to read data can result in the data being made permanently inaccessible – bricking the device and ensuring security. It is possible to make information accessible only when certain people (or roles) are present to authorise its use. Biometric security and multistep authentication can significantly reduce security breaches.

Digital rights management (DRM) controls who accesses information, for how long and for what purposes, and it can automatically delete information that is no longer relevant.
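A minimal sketch of one DRM idea, time-limited access, in Python; the class and method names are invented for illustration and do not correspond to any real DRM product or API:

```python
from datetime import datetime, timedelta, timezone

class ExpiringRecord:
    """A record whose content becomes unreadable after a deadline."""

    def __init__(self, content, lifetime_days):
        self.content = content
        self.expires = datetime.now(timezone.utc) + timedelta(days=lifetime_days)

    def read(self):
        """Return the content, or None once the lifetime has elapsed."""
        if datetime.now(timezone.utc) >= self.expires:
            return None  # access automatically withdrawn
        return self.content

record = ExpiringRecord("discharge summary", lifetime_days=30)
print(record.read())  # readable while still within its 30-day lifetime
```

Even a sketch this small raises the policy questions the text goes on to discuss: who sets the lifetime, what happens to audit trails once content is withdrawn, and how expiry interacts with other features.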

The scope and reach of DRM in healthcare is only just being explored. There are new ideas, such as smart contracts19 along with AI, which could be applied to everything from prescriptions to summarising health records (as well as billing), and to pseudonymising or anonymising data for research.

While there are considerable opportunities, there are also considerable challenges of feature interaction and manufacturer interoperability. Feature interaction means that concepts (e.g. DRM features) that separately make sense do not work as expected together. (Feature interaction is the IT analogy to drug side-effects.) More generally, all advantages of paperlessness (such as rights management), even those that on balance may be positive, also introduce new side-effects that must be thoroughly researched.

Animation and other paperless innovations

Paperless systems can do more than paper-based systems. Speech and video are easy for computers but impossible for paper (unless you hire a theatre company). Paperless systems can be shared between many users in ways that would be impractical and unreliable for paper – networked systems can give simultaneous access to paperless information to many users who can concurrently review and edit it.

Patient photographs, automatic checking of patient identity, controlling drug dispensing and analysing trends are all possible in paperless forms. All these ideas have been implemented, but not to agreed standards. So, while the opportunities of going beyond paper can improve patient care, the downside is that new ideas tend to compromise interoperability.

Finding information

Finding information in a computer is generally easier than finding paper information, and you can search with incomplete or not-quite-correct information. On the other hand, finding poorly filed information in a computer is harder than finding poorly filed paper, particularly if the information is spread over several systems. Computers can store a vast amount of information, and in the worst case there is no systematic way of searching it. Indeed, you can delete information in a computer by mistake and have no idea the information is lost, whereas it is hard to destroy paper without realising.

Sometimes, when patients make complaints and ask for their patient notes, bits are missing. Paperlessness here could work either way: intentional deletion of notes could be completely invisible, or once written, notes could be unchangeable by anyone. Paper notes have dates written in by hand, and the dates may be incorrect or added later. Paperless notes can be automatically time-stamped, providing no opportunity for deception. Unlike paper, paperless notes can be automatically (and hence reliably) backed up and be impossible to destroy.

Blockchain et cetera…

Blockchain is a new (decade old) paperless technology that promises to replace money – cleverly simulating all of the properties of paper currency (hard to forge, hard to duplicate, tied to value, etc.) and improving anonymity. Blockchain has attracted huge investment. It is not clear how blockchain might help healthcare, as money has many essential properties that are very different from clinical data. Money is simple: it obeys the laws of arithmetic. Money is fungible: my £1 can be replaced by your £1 with no consequences. Money is anonymous: your £1 does not have your name on it, and your bank account works like mine does. Money errors can be corrected easily: healthcare errors often result in non-recoverable harms. You can make money out of money (e.g. with interest on loans, dividends and derivatives): in healthcare, profit from data is controversial, and subject to complex legislation. These and many other properties that make blockchain feasible (and profitable) are inappropriate for healthcare.

However, blockchain has important lessons for paperlessness:

  1. Blockchain is a new idea that ‘changes everything’. What other new ideas will there be?

  2. Blockchain thinking (e.g. distributed ledger thinking) may help innovate in paperlessness. For example, in the complex case reported in Thimbleby,15 what was effectively a badly implemented paper/paperless ledger resulted in chaos; new thinking – and improvement – is sorely needed.

  3. Some early adopters of blockchain are in prison, and some types of cryptocurrency have had serious failures. In other words, tremendous excitement and investment does not guarantee success.

  4. The continual churn of innovations like blockchain will ensure that the promises of paperlessness will always be fresh and driving new disruptive ideas.

Sample principles for implementation

Understanding

In the 21st century, why is any part of any modern information-based organisation not already efficiently paperless? If a neurology department insists on posting a paper letter about a patient’s diagnosis (as mine does) it must be because this makes some sort of sense to them. The speed and efficiency for the patient is not their concern, nor is empowering the patient with the information, nor is the desire to integrate the patient data with other medical records to help improve their care, let alone to support a research project. Something else is going on.

If we do not identify and address this underlying culture, any paperless system will encounter the same culture unchanged. The result is likely to be resistance, workarounds as old habits continue to play out, and a gap between work-as-imagined (how managers think work is done) and work-as-done (how people actually do it) – see section Work as done versus work as imagined, below.

Nevertheless, ‘addressing’ the underlying culture does not mean bringing it into line with whatever the new paperless technologies might be offering. Rather, it means working out how to align the human culture with the technical culture. Most likely, the greatest benefits will come from both changing.

There is not so much a paperless ‘solution’ to adopt as a paperless future to adapt. Moreover, the adaptation is not just in the technology but mutual – an alignment – as old and truly bad (e.g. ad hoc) processes are replaced with ones that are more readily and more effectively supported by paperless interventions.

Make it right

The history of paperlessness is that it does not so much solve problems as make problems visible. Computers are programmed, and if healthcare does not behave the way the computers are programmed, then there will be problems, and people (both clinicians and IT staff) will work around the problems, creating further layers of obfuscation. Some of the problems are due to computers (i.e. poor engineering), some are due to misunderstanding what healthcare does, and some are due to the flexible (and often undocumented) things that clinicians do that computers find hard to support. The combined human and paperless system, the socio-technical system, has to be aligned. It follows that improvement cannot be achieved just by upgrading from paper to paperless, or by upgrading the IT considered in isolation, nor just by following a ‘digital vision’ without integrating it into the rest of the activities. Successful digital transformation is not a matter of technology and training to match it; it requires education and research on all sides – the printing press did not revolutionise information, but the literacy that followed it, albeit taking centuries to develop, did.

Paper is reliable (it can fail, but it fails in easily anticipated ways, like fire), whereas paperless systems crash and have obscure bugs (see Figure 2, which happened while writing this paper). Surprising things can happen with paperless systems, and the problems may not be noticed immediately. Paperlessness can introduce bugs that cannot exist on paper; this is a fundamental and widespread problem affecting numerous clinical IT systems.12

Figure 2.

Figure 2.

A paperless bug that stopped work on this article.

Computers can make mistakes and, unlike paper, they can make millions of unnoticed mistakes in a second. A terrorist cyber attack, a bug or an innocent operator mistake could cause chaos across the country that is impossible with paper. The safeguards for paper and paperless approaches are very different, and both have to be well understood in their own ways to properly protect against failure. (Failure will always eventually happen, so both paper and paperless systems need recovery strategies.)

So-called misfeatures are features that made sense to the designers but do not make sense to the users. Indeed, misfeatures may seem to make a lot of sense to salesmen and procurement, but both may be unaware of how systems are actually used in clinical practice. For example, paperless systems can have timeouts, which are impossible for paper. If a user does nothing to a paperless form for a minute (or whatever the timeout is set to), the paperless form locks the user out and they have to enter their password again. Sometimes the data the user has entered is also deleted. This feature makes sense to designers because after a few minutes they cannot be certain who is using the system, so there is a security risk – therefore they ‘must’ log the user out. But the user may be interacting with the patient, and that interaction may take several minutes, perhaps longer than the timeout. The result can be frustration, and by forcing the user to re-enter data, the chances of error increase.
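One way to reconcile the designers’ security concern with the users’ reality is to lock the session on timeout but preserve the partly entered data, restoring it after re-authentication rather than deleting it. A minimal sketch follows; the function and field names are invented for illustration and do not describe any real clinical system.

```javascript
// Sketch of a timeout that locks the session without discarding work.
function makeSession(timeoutMs) {
  return { timeoutMs, lastActivity: 0, locked: false, draft: null };
}

// Any keystroke or click resets the inactivity clock.
function recordActivity(session, now) {
  session.lastActivity = now;
}

// Called periodically; on timeout, keep the draft rather than deleting it.
function checkTimeout(session, now, currentDraft) {
  if (!session.locked && now - session.lastActivity >= session.timeoutMs) {
    session.locked = true;
    session.draft = currentDraft; // preserve, don't discard
  }
}

// Re-authentication restores the preserved draft.
function unlock(session, passwordOk) {
  if (!session.locked || !passwordOk) return null;
  session.locked = false;
  const draft = session.draft;
  session.draft = null;
  return draft;
}
```

The design choice is the point: the security property (nobody unauthenticated can continue) is kept, while the data-loss misfeature is removed.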

The main science of paperlessness is human–computer interaction (HCI). HCI provides concepts, theories and methodologies for developing effective and safe paperless systems and for evaluating the effectiveness of paperless systems. Core HCI concepts for both paper and paperlessness are workarounds, affordance, internationalisation, predictability, task and the user model… Central to HCI is to know and understand the user’s task, and then the new computer system is designed to support that task. Example HCI research shows that reading comprehension, speed and accuracy in the clinical context is the same for paper and paperless, but that clinicians prefer paper because of its extra features.20

Underpinning the science of HCI are formal methods; indeed, there is a sub-field within HCI, formal methods in HCI. HCI assumes that systems do what they are intended to do; formal methods ensure they really do what is intended. (Paperlessness introduces modes – which paper does not have – and modes may interact with each other. Formal methods can analyse modes and ensure they work correctly in any combination.)
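To make this concrete, here is a toy illustration of what ‘analysing modes’ can mean: enumerate every reachable combination of two modes and check an invariant over all of them. Real formal methods tools (model checkers) do this exhaustively and at scale; the modes, events and invariant below are invented purely for illustration.

```javascript
// Toy mode model: an edit mode and a lock mode that can interact.
// Invariant: the system must never be editing while timed out.
function invariantHolds([edit, lock]) {
  return !(edit === "edit" && lock === "timedOut");
}

function transition([edit, lock], event) {
  if (event === "timeout")   return ["view", "timedOut"]; // timeout forces view mode
  if (event === "login")     return [edit, "unlocked"];
  if (event === "startEdit") return lock === "unlocked" ? ["edit", lock] : [edit, lock];
  if (event === "save")      return ["view", lock];
  return [edit, lock];
}

// Breadth-first search of every reachable mode combination,
// checking the invariant in each state we can actually get to.
function allReachableStatesSafe() {
  const events = ["timeout", "login", "startEdit", "save"];
  const seen = new Set();
  const queue = [["view", "unlocked"]];
  while (queue.length > 0) {
    const state = queue.shift();
    const key = state.join("|");
    if (seen.has(key)) continue;
    seen.add(key);
    if (!invariantHolds(state)) return false;
    for (const event of events) queue.push(transition(state, event));
  }
  return true;
}
```

Because `transition` forces view mode on timeout, the check passes; delete that rule and the unsafe mode combination becomes reachable, which the exhaustive search would detect where ad hoc testing easily might not.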

It is a certainty that paperless systems designed without recourse to formal methods will not work as intended, and they certainly should not be used in safety critical applications, such as the core areas of healthcare. In contrast, the underlying science of paper (such as acid-free chemistry) is well-established, general and routine for all paper regardless of its use; such generality is not the case for any paperless system, as each is structurally different and requires its own application of both HCI and formal methods.

Mentioning formal methods may be a surprise to many healthcare developers. In aviation, where lives depend on reliable software, the key role of formal methods is taken for granted. For the same reasons, healthcare should demand no less.

Make it better

One of the core methodologies of HCI is iterative design, which links evaluation with improving development (ISO 9241 again14). HCI can help improve both paper and paperless work. (For example, there is a body of knowledge within HCI on vision and legibility, which applies to both.)

An important implication of iterative design is never to ‘buy a finished system’. The system you want will not work as well as expected, and fixing it will be frustrating, slow and expensive. Instead, contract for a continually improving system, measured against agreed monitoring and agreed benchmarks of performance and reliability.

Two fundamental insights of HCI are that what people want is not necessarily what makes them more efficient or safer, and, equally, that what manufacturers convince us we want is not necessarily what will make things better. HCI has methodologies to sort these differences out, but these insights mean that opinionated people are rarely right unless their views are based on good HCI research.

For example, the Wachter report5 mentions reducing mouse clicking, as mouse clicks take up time and too many seem unnecessarily tedious. But reducing clicking also reduces redundancy, and therefore increases the risk of unnoticed, uncorrected error. As mentioned above, my employer has made paying my expenses easy – but so easy that they sometimes pay twice without noticing. Whether the regular time saving is worth the occasional expensive error probably has not been assessed – and in healthcare, errors are harder to recover from than correcting a payment. Without thorough experimental research, it is never clear where the best tradeoffs lie.

Make it legal

Although there are legal restrictions on the use of paper (e.g. confidentiality of patient data, signatures and contracts), there are no real limits on how paper is designed, what structure forms should take and so forth, beyond certain requirements on the information required (e.g. requiring a date). In contrast, paperless systems may come under the Medical Devices and In Vitro Diagnostic Regulations (and equivalent regulations worldwide), which impose requirements such as performing risk assessments, and having processes for post-market surveillance, managing security and passwords, and so forth.

The reader is referred to the Medicines & Healthcare products Regulatory Agency (MHRA) web site (www.gov.uk/mhra) for up to date guidance, and to Health and Social Care Information Centre21 as a very useful summary.

Sample principles for thinking

We are familiar with paper and its problems (having grown up with it), but less familiar with paperlessness, not least because the technology is in continual flux. Paper is a boring, fungible commodity, whereas IT is exciting and unique, and it has skilled salesmen. These and other differences make it hard to think clearly about the relative benefits.

Healthcare interventions should be justified by science and evidence. Good science avoids problems such as success bias: poor ideas that do not work simply disappear from view, leaving only the successes visible. For example, while we can see the success of the paperless Amazon, without research it is hard to appreciate the numerous failures where similar companies disappeared. We are therefore biased to think that paperlessness is easier and more effective than it really is.

We are familiar with Amazon (and Facebook, WhatsApp, eBay and more) and tend to take them for granted. But they are huge, with resources and world-leading technical expertise that dwarf those of the NHS.

Not only do we think paperlessness is easier and more effective than it is, we think we can achieve results cheaply! The example of poor paperless form design above illustrates how such thinking is endemic and reduces the efficiency of the NHS in every area.

Paperlessness is an expensive intervention, and the rigour of the science supporting it should be commensurate with the cost–benefit it is expected to offer. Strangely, there is little science that has examined paperlessness as rigorously as simpler interventions, such as hand washing.

Be aware of implementation bias

According to the UK National Information Board, ‘Better use of data and technology has the power to improve health, transforming the quality and reducing the cost of health and care services.’22 This is misleading; it might be better expressed as ‘Use of better data and better technology…’

Implementation bias is our tendency to emphasise the thing that (hopefully) implements or achieves the behaviour we want. Instead of thinking about an abstraction like patient care or processes like handover, implementation bias means we focus on concrete things like technology or data, and then convince ourselves we just need to modernise them to get the outcomes we want. It is easier to think about buying things than to think through improving quality. Manufacturers inevitably market their products as ‘solutions’. Coincidentally, buying new computers aligns with the capitalist imperative, and leverages our consumerism. We all, as individuals, want the latest stuff! – see below on maturity.

It sounds cynical, but assuming that the latest technology promises an improvement for healthcare is, for want of science, no more than an extrapolation from our personal enthusiasm – which has been deliberately stimulated by persuasive technologies to promote addiction (and hence profit for app companies, social media in particular). Without rigorous science, there is no reason to generalise our personal excitement to clinical effectiveness.

The UK National Programme for IT (NPfIT), which set out to modernise NHS computing in the early 2000s, became the world’s largest IT failure. What we should take from that fiasco is not that today’s technology is different or that we have new development methods, but that sensible people (as there were at Britannica) walked with open eyes into problems with misplaced certainty and an astonishing lack of precaution. Technology has changed since then, but the point of NPfIT was that technology had changed then – it always will! Yet NHS culture is still largely unchanged. NHS culture has created a muddle of IT systems (see Interoperability, above): implementation bias makes us think we can fix the IT with ‘better IT’, but what actually needs fixing is the culture that creates the complexity that causes the problems. When this muddle is sorted out – how much is unavoidable complexity and how much is unnecessary? – we may see clearly enough to specify requirements for technology that supports an effective healthcare system (rather than perpetuating all of the current system, both its bad parts and its good).

Work as done versus work as imagined

One of the significant paperless changes that is happening as we approach the 2020s is that clinicians are becoming subversive – they are inventing ways to work around inefficiencies despite the systems (and regulations) that are provided. Medical apps are used to support their work, but these apps work in parallel with NHS systems. Sometimes it is easier to photograph patient data on an NHS computer screen and WhatsApp the picture, but confidential data is then on a personal device. Yet these individuals are paperless innovators. To understand the issues, we need to understand WhatsApp and its competitors, but that exercise encourages implementation bias. WhatsApp and end-to-end encryption are fascinating, but the question is: how are they helping safe and effective patient care, or helping staff to be happier?

Be aware of cognitive biases

The availability heuristic means that what is most visible seems most important: piles of paper are obviously bad, but invisible ‘piles’ of computer work seem less important. There are many such cognitive biases relevant to the paper/paperless debate.

Attribute substitution is the psychological effect where a hard decision-making process is unconsciously replaced with an easier process that is superficially equivalent.23 For example, understanding the culture and problems of healthcare, which means understanding complex clinical tasks, is hard, and if we want to improve healthcare it is easier to focus on a simple concept like ‘digital maturity’ than on understanding the deep issues. Furthermore, the latest technology is attractive (adverts ensure it seems so) and if it is attractive, then it is natural to think it is better. Indeed, why would it feel so attractive if it was not good? We want to replace our own technology with newer stuff so we want to update the NHS with newer technology, thereby substituting our simple personal preferences for a complex assessment. We thereby save ourselves a lot of complex evaluation, RCTs, analysis of formal specifications, systematic reviews and more, let alone questioning whether just being more modern is better. Simply replacing centuries-old paper with tomorrow’s technology is a no brainer…

Develop mature attitudes to software

Thanks to the high profile of cybersecurity vulnerabilities (WannaCry, Spectre NG and more), we know that programming is not always of high enough quality. Cybersecurity problems are only one symptom of immature programming and IT management.15 Poor usability, poor user experience, bugs, poor interoperability, loss of data and other problems are other consequences of poor programming standards – and the failure of healthcare to procure quality products.24

As Figure 3 suggests, improvements to programming maturity will significantly improve clinical effectiveness and safety.

Figure 3.

Figure 3.

Human Factors usually focuses on the sharp end of clinical practice, but there are both sharp-end and blunt-end human factors. Errors happen at both the clinical sharp end and at the blunt developer end; in both cases errors happen because they are neither noticed nor corrected. At the sharp end, high workload and time pressures induce error, but fortunately such errors harm patients – if at all – one at a time. In contrast, at the blunt end there is no time pressure, but unnoticed and unfixed design errors can harm, or induce harm in, thousands of patients once the systems are in use. Being aware of, avoiding, and having processes for fixing design error deserves much more attention than it currently gets.

Consumer IT is specifically designed to be addictive, raising dopamine levels, etc.;25 when we are concerned with delivering healthcare, this is misdirection. Raised dopamine levels get us addicted and thus help sell consumer products, but they are not necessarily helpful for safe clinical use – you do not want clinicians addicted to the device and its apps; you want the apps to be effective for patient care. We have considerable difficulty in thinking clearly about IT; a parody may help: just because a sports car is easier to use, more nimble and makes an ambulance look slow and antiquated does not make it better than an ambulance. Ambulances are designed to do a healthcare job, and sports cars are designed to appeal to individuals. Fortunately, few individuals confuse the two. We have many words for thinking about transport (bicycle, horse, plane, car, lorry, pickup…) but few words to think clearly about paperlessness: the generic terms IT, digital, paperlessness, cloud, et cetera do not discriminate or help clear thinking.

The British newspaper The Times26 reports teenagers now being hired straight from school by companies seeking to protect themselves from cyber attack. Our culture is that programming is easy – children can do it, and indeed they are ahead of the adults. But this is a symptom of misdirection. Consider the analogy: if jewellers were hiring children to test their security, you would ask why they did not first hire engineers to make their stores professionally secure. We would not be celebrating that children know more about breaking windows than adults.

Programming is very easy – even children can do it! – but safe programming, which is what we want in healthcare, is very difficult and requires professional expertise. Paperlessness in healthcare requires professional high-quality programming. Unfortunately, there are currently few ways to find competent programmers who can implement quality paperless systems. You could ask whether the supplier knows (and uses) ISO 924114 and has read (and uses) The art of computer programming,27 and what they think of correct by construction, SPARK Ada, PVS, Alloy, Spin, SMV and more; see Figure 4. You should require that there is a defined process to discover, monitor, report, fix and recover from inevitable bugs. Installing a system promised as a ‘solution’ with no clear plan for continual improvement is naïve.

Figure 4.

Figure 4.

Most programs are thousands of lines of code, or longer. Shown above are a few lines of very simple program code (written in JavaScript, a very popular programming language). A good programmer will be able to work out exactly what this code does without needing to run it on a computer: how many stars does it print for any n? However, most programmers are unable to work out what it does, which means there are probably many parts of their own programs that they do not understand, and which therefore must be deemed unreliable. NB: if the word ‘var’ is omitted in the code above (or the names ‘f’, ‘n’ or ‘i’ misspelt, etc.) the program will still compile and run, but will do something completely different. Perhaps omitting ‘var’ (or mistakenly typing it out of habit) was a typo? In other words, programs are both very complicated (should it do what you think?) and very fragile (does it do what you think?). Significant skills are required to develop reliable programs, and that is before you worry about managing the errors in other people’s code (e.g. in libraries) within your own programs.
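Since the figure is reproduced here as an image, the following is a hypothetical snippet in the same spirit (not the figure’s actual code), showing how omitting ‘var’ silently changes behaviour: the helper’s loop then reuses the enclosing function’s `i`, so the star count collapses.

```javascript
// Correct version: the helper's loop variable is its own.
function countStars(n) {
  var s = "";
  function pad() { for (var j = 0; j < 2; j++) { /* some padding work */ } }
  for (var i = 0; i < n; i++) { pad(); s += "*"; }
  return s; // n stars, as intended
}

// Buggy version: `var` omitted in the helper, so its loop clobbers
// the enclosing function's i. The code still compiles and runs.
function countStarsBuggy(n) {
  var s = "";
  function pad() { for (i = 0; i < 2; i++) { /* some padding work */ } }
  for (var i = 0; i < n; i++) { pad(); s += "*"; }
  return s; // one star for small n; for n >= 4 it never returns
}
```

A one-character class of typo thus turns a correct program into one that quietly produces wrong output, or loops forever, with no compiler complaint at all.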

International standards and regulation

Working knowledge of the medical device directives and the various ISO/IEC standards dealing with safety, cybersecurity and medical system software development, and the corresponding quality management requirements, is essential. The relevant standards (ISO 9241, IEC 60601, IEC 62366, ISO 14971, IEC 61508, etc.) are not just boring rules but mines of helpful information; most standards include informative bibliographies as background. The standards help you do user centred design, risk management, et cetera, as well as specifying how these processes must be done for medical systems. In turn, properly following the standards helps ensure compliance with current regulation. An inspiring place to start is the World health strategy ebook;28 it and similar resources should be consulted to ensure reference is being made to current standards and legislation.

The US Food and Drug Administration and the UK MHRA web sites both have many helpful resources (as other countries’ regulators will also), including white papers exploring future directions that developers should consider, particularly given the typical lifecycle and timescale of IT projects.

Part 3. The three laws and subsidiary points arising from them

The three laws of paperless healthcare

In our dreams, paperlessness can do many things better than paper, but that does not mean that our dreams will be realised;29 there are many forces in the world – notably, commercial pressure, proprietary protectionism and the pace of technological developments to create market churn – all of which drive the seductive story of easy, unqualified improvement.

The route from paper to a better paperlessness is not automatic; being certain about paper, as we are, does not mean we can be certain about paperlessness. The problems of paper are symptoms of poor information management; going paperless does not in itself treat the underlying disease. Moreover, going paperless introduces new risks (e.g. costs, training, interoperability, obsolescence, licensing, cyber attacks, liabilities…) as well as amazing opportunities (e.g. personal health records, telemedicine, medical apps, smart contracts, big data analytics…).

We have argued throughout that we cannot rely on intuition. We must rely on rigorous science and evaluation to establish actual long-term patient and staff benefit.

The three laws of paperless healthcare are:

1. Keep in sight the goal of improving healthcare

Paperlessness must be first about improving clinical processes, supporting staff and patients, not about replacing paper with new ‘solutions’.

While paperlessness is an attractive ideological goal, we should not lose sight of the true goal: to improve patient care and outcomes, and to improve both patient and staff satisfaction. Hence Law 1: keep in sight the goal of improving healthcare: address the disease, not the symptoms.

  • 1.1. People use paper and paperless systems as intertwined socio-technical systems. Upgrading to paperless systems without redesigning and integrating the whole system will fail.

  • 1.2. Paper problems are a symptom of the scale and type of work that healthcare does; just going paperless will be ineffective if it does not address the underlying problems. Hence we are poor at recognising and properly using paperless maturity: political decisions, management decisions, IT staff employment decisions, and even manufacturer and consultant competence are all questionable.

  • 1.3. Paperlessness has not been around long enough for there to be much expertise in the field – and certainly not enough, given the scale of the venture. Mature paperless decisions have to allow for uncertainty, despite the political pressures to invest now and take rapid action. The success of a few high-profile paperless systems encourages us to be over-confident – we are relatively unaware of the baseline and of all the failed paperless systems.

  • 1.4. Improvement raises the question cui bono (‘who benefits’)? It is easy to overlook, but in principle we want patients to benefit, supported in turn by staff who benefit – who are more satisfied, happy and accomplished in their work. But improvement could instead benefit secondary activities like billing, auditing and general administration, and become an end in itself, justifying a growing IT investment and feeding supplier growth – dangers well-documented by Seddon,9 who points out that the people who specify computer systems are generally not the people who use them. There is everything to gain from a win–win for all parties, but the focus must primarily be on patient care – which, given the quantifiable cost of IT and the unquantifiable value of patient safety, is easy to overlook.

  • 1.5. The contract with paperless system suppliers should require specified performance improvement, not merely timely delivery. See Law 2 for requirements on performance evaluation.

2. Only implement evidence-based change

Pursue paperlessness only where there is scientific evidence it is better for the real task. Successful paperlessness depends on user centred design and on quality implementation.

Our experience as IT consumers encourages us to think uncritically about IT and about paperlessness in particular. Despite everyone’s strong personal preferences about paperlessness, rigorous scientific work is needed to establish whether our intuitions are valid in the complex, pressurised clinical environment.

  • 2.1. More questions need asking, and for almost all questions, more and more careful research is needed to answer them. For example, there has been considerable research on which computer menus are easier to use, but (to my knowledge) none compare whether paperless menus are better or worse than paper, and if so, under what circumstances. Even fewer experiments have been conducted in healthcare contexts. Research funders should consider improving the quality of the available science; see Table 1.30

  • 2.2. Promises of success must be checked against evidence, and careful analysis of the cost/benefit must be performed. Typically, this will require experiments or systematic reviews. Costs must include training and obsolescence.

  • 2.3. Paperlessness has risks that must be understood and managed. Interoperability, training, usability and cybersecurity are obvious areas of risk. Paperless hazards tend to have huge and rapid impact, exacerbated by nationwide and international communication. Good paperlessness does not just supersede paper but opens new opportunities. Paper has inefficiencies that can be overcome with paperlessness, but changing how healthcare works to better fit paperlessness is better than merely going paperless and still doing things the same way. Successful paperlessness will be designed to fit the clinical tasks that need doing; it should not simply make the traditional paper-pushing faster.

  • 2.4. The methodologies widely used for assessing digital health are empirical,31 and empirical methods can only approximate the truth, for instance to some statistical confidence level. If there are flaws or bugs in the underlying implementations, empirical testing is likely to miss them, but they will eventually have clinical consequences.

  • 2.5. For almost all applications, safety cannot be achieved by testing (testing can only show there are problems; testing cannot show there are no problems). Instead, safety must be built in and assured by techniques such as ‘correct by construction’. By analogy, we do not test whether a building is fireproof by trying to set fire to it; we have a safety argument that it is fireproof by its construction.

  • 2.6. UK criminal law (Health and Safety at Work Act 1974, s. 3(1)) states that, ‘It shall be the duty of every employer to conduct his undertaking in such a way as to ensure, so far as is reasonably practicable, that persons not in his employment who may be affected thereby are not thereby exposed to risks to their health or safety.’ Failure to assess whether the introduction of paperless working will improve patient safety is therefore a criminal offence.
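The contrast drawn in points 2.4 and 2.5 can be illustrated with a small, hypothetical sketch (the function names and the dose-entry scenario are invented for illustration, not taken from any real system): a naive input parser can pass a plausible test suite while still harbouring dangerous behaviour that only a constructive restriction of the input grammar rules out.

```python
import re

def parse_dose(text):
    # Naive: delegates to float(), which accepts far more than a keypad should.
    return float(text)

def parse_dose_checked(text):
    # 'Correct by construction' of the input grammar: only plain decimal
    # numbers such as 5, 0.5 or 10.25 are accepted; everything else is rejected.
    if not re.fullmatch(r"\d+(\.\d+)?", text):
        raise ValueError("invalid dose: " + repr(text))
    return float(text)

# A test suite the naive parser passes...
assert parse_dose("5") == 5.0
assert parse_dose("0.5") == 0.5
assert parse_dose("10.25") == 10.25

# ...yet an input the tests never exercised behaves dangerously:
# float() accepts scientific notation, so one stray key turns 5 into 50.
assert parse_dose("5e1") == 50.0

# The restricted grammar makes the unsafe input impossible, not merely untested.
try:
    parse_dose_checked("5e1")
    assert False, "should have been rejected"
except ValueError:
    pass
```

The passing test suite says nothing about the inputs it never tried; restricting what the parser can accept excludes the hazardous case by construction rather than by (incomplete) testing.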

3. Plan for cultural change and moving goal posts

Culture has to change to take advantage of technology, and technology is changing at pace regardless. Paperless requires planning for monitoring, improvement, revision, and, eventually, obsolescence and further innovation. Pay attention to culture, including regulation, and to developing human skills to exploit new technologies.

Hence, to ensure that the results of scientific evaluation (Law 2) are reliable, the systems must be reliably implemented (Law 3).

  • 3.1. Before any paperless project is undertaken, a risk analysis should be performed. What are the consequences of failure, down-time and other IT problems? And, given the risk, how professional an implementation is required?

  • 3.2. Software has bugs, and in healthcare it should have as few as possible. To ensure healthcare software is safe (or safe enough) requires professional expertise that is not commonly available. Safety requires rigorously addressing cybersecurity and safety properties, formal specification, and proof of correctness.

  • 3.3. Do not buy ‘solutions’. Nothing is a perfect solution, so plan for processes that support continual improvement. Refer to the iterative, human-centred design process of ISO 9241.

  • 3.4. Any strategy must be dynamic – today’s promised solution will be tomorrow’s obsolescent problem. Paperless strategies must acknowledge and include keeping up to date with continual innovation. They must balance the pressure of ever-changing possibilities against effective rapid improvement.

  • 3.5. The contract with system developers should include participation in, and response to, training and evaluation processes.

  • 3.6. When there is evidence that anything is better, define open standards so everyone can benefit. Without enforced standards and harmonisation, paperlessness will create more and more interoperability problems. In other words, when benefits are identified, they must be made available to all, and not tied to proprietary or local solutions.

  • 3.7. To have a paperless future that gets closer and closer to our dreams, we will need a lot of well-informed wisdom – and a lot of advanced scientific knowledge (especially in HCI, formal methods and cybersecurity) integrated with healthcare. Unfortunately, such knowledge is not yet widely available, nor of adequate quality and generality.

  • 3.8. The best way to improve real IT (paperless) skills is to recruit good staff, and to do that, use external assessors (e.g. IT professors from a nearby university) in selection and interview processes. Otherwise, the IT department will be recruiting people who impress them, rather than highly skilled people whose excellence they may not recognise. Next, current staff need continual training, yet current staff are the last people to realise they are under-skilled; again, use qualified externals to help assess training needs.

Any use of this article and its laws should include expanding and tailoring them to the task in hand. The Healthtech Declaration32 provides further ideas for healthcare IT, going beyond paperlessness alone.

This article has explored some of the opportunities and pitfalls of our excitement about paperlessness, and the laws are a way to keep the tensions explicit rather than fall for the seductive simplicity of just replacing paper with new technology ‘solutions’.

‘The problems of the real world are primarily those you are left with when you refuse to apply their effective solutions.’

– EW Dijkstra33

Acknowledgements

Ann Blandford, Martin Elliott, Ross Koppel, Mark Temple, Prue Thimbleby, Martyn Thomas and John Warren provided many insightful comments. Harold Thimbleby is See Change Fellow in Digital Health, and thanks them for the funding.

Author biography

Harold Thimbleby is See Change Fellow in Digital Health at Swansea University, Wales. Harold is an internationally respected computer scientist, campaigning to improve healthcare IT and patient safety. He is an Honorary Fellow of the RCP, Expert Advisor on IT to the RCP, advocate for the Clinical Human Factors Group, member of the World Health Organization’s Global Patient Safety network, and an Expert Assessor for the MHRA. He has been an expert witness in several NHS cases involving IT. His research team won the 2014 GE Healthcare Award for Outstanding Impact in Healthcare. Harold is a popular speaker who has given over 550 invited talks in over 30 different countries. His web site is http://www.harold.thimbleby.net

Contributorship

Harold Thimbleby researched the literature and conceived the study, and wrote all drafts of the manuscript. He reviewed and edited the manuscript and approved the final version of the manuscript.

Conflict of interest

The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Ethical approval

No human or animal participants were used in this research; ethical approval was not required.

Funding

This work was supported by See Change (M&RA-P, Scotland).

Guarantor

Harold Thimbleby.

Peer review

This manuscript was reviewed by Stuart Harrison, NHS Digital Clinical Safety, Leeds, UK, and one other individual who has chosen to remain anonymous.

References

