When the Editor of the British Journal of Clinical Pharmacology commissioned this article he probably expected a scholarly step-by-step account of the development of our discipline. He also asked that the review should consider current difficulties and future prospects. A period of reflection and some background literature searches soon convinced this author that a comprehensive account would be so lengthy and take so long to research thoroughly that it was more likely to be complete for the 80th anniversary of the British Pharmacological Society (BPS) than the 75th. So this is a personal account by someone who has lived and worked through just under 50 years of the 75 years the Editor wanted to cover. To minimize personal bias I have sought advice from a substantial number of those who have been leaders in the development of the discipline. Their names are listed at the end of the article. I am profoundly grateful to all of them and if I have misrepresented their views here and there I hope that they will forgive me. For the many very active participants I have not mentioned I plead the exigencies of time and page space. The article contains no references because there would have been hundreds, but it does include many names, so those who want to know more can use Google and Medline.
At the conclusion of the article I have tried to address some of the rather pessimistic views about the future of clinical pharmacology that are current in many academic circles. They are views that I understand but do not entirely share. From the standpoint of someone who now works in industry, the challenges and opportunities for clinical pharmacologists are greater than they have ever been, but a vibrant academic clinical pharmacology community is essential for the practice of clinical medicine and the training of physicians as well as an important source of ideas for industry. At the very end of this article I have put forward some possible solutions to the current problems. To overseas readers they may seem rather UK oriented, but I believe that similar concepts, adapted to local circumstances, could be applied more widely. Academic and health service clinical pharmacology is needed and must not be allowed to wither.
Origins
The history of clinical pharmacology is very much longer than the use of that descriptive name. In a real sense it goes back to the work of Chinese, Indian and Peruvian traditional practitioners, who discovered the activity in herbal remedies that we now know as artemisinin, reserpine and quinine. However, we know little about who was responsible and what doubts and difficulties they encountered. The fully documented period of drug discovery is much more recent, for example William Withering’s publication on the purple foxglove in 1785, but it was still largely a combination of astute observation with trial and error. When the scientific era dawned the biological sciences at first lagged behind the physical sciences. Physiology was the first to develop a strong discipline and it was from physiology that pharmacology emerged. Arnold Burgen, who became professor of pharmacology in Cambridge, President of the International Union of Pharmacology (IUPHAR) and an important sponsor of clinical pharmacology, recalls being ticked off at lunchtime seminars at the Middlesex Hospital (in 1945) by the famous physiologist Samson Wright ‘because he still talked like a clinician’. Not much encouragement there to become a clinical pharmacologist, but Arnold took it as a challenge to bring clinical know-how and basic science together and was soon conducting a trial of tetraethyl pyrophosphate in myasthenia gravis!
Clinical pharmacology: who coined the name?
There is debate about who first used the term ‘clinical pharmacology’, but it was probably Harry Gold, a Professor of Pharmacology at Cornell University, who had carried out seminal work on the human pharmacology of digitalis glycosides in the late 1930s and early 1940s. An important landmark in 1941 was the publication of the first edition of Goodman and Gilman’s very influential textbook The Pharmacological Basis of Therapeutics. At this time the medical research scene was still very fluid. The British pharmacologist Sir John Gaddum once described the pharmacologist as a jack of all trades. Gaddum was referring to a period, which I can just remember, when the experimental pharmacologist built his own instruments, purified his own drugs, devised assays, anaesthetized animals, smoked his own recording drum. But in modern terms the clinical pharmacologist has also to be a jack of many trades or at least scientific disciplines, including pharmacology, biochemistry (in drug metabolism), mathematics and statistics (in pharmacokinetics and clinical trial design), experimental medicine, safety assessment and pharmacovigilance. In the modern world all these disciplines have their separate identities (and sometimes silos), but when this story begins 75 + years ago, such subdivisions scarcely existed. Take a single example: Sir Horace Smirk, the father of the modern treatment of hypertension through his introduction of the quaternary ammonium ganglion blocking drugs for malignant hypertension. Smirk graduated in medicine from Manchester and spent some time receiving research training in the early 1930s at the outstanding Department of Medicine at University College, London. By 1935 he was Professor of Pharmacology at the Egyptian University in Cairo. By 1940 he was Professor of Medicine at the University of Otago in New Zealand and it was from that apparently remote centre that he applied the basic pharmacology discoveries of Paton and Zaimis concerning the quaternary ammonium ganglion blocking drugs to severely ill hypertensive patients, with dramatic results. At that time almost all pharmacologists were medically trained, formal medical subspecialty training scarcely existed, and it was easy to move between laboratory and clinic, as Gold and Smirk did. In one sense we have spent the intervening 65 years trying to re-create that golden era of free movement of ideas and disciplines between pharmacology and medicine, albeit in a much more complex world.
Insulin, the first major breakthrough
For this article I had to choose a starting point, and although it is just over 75 years ago I selected the discovery of insulin by the team led by Banting and Best, because of its enormous impact on a deadly disease and because they did the whole thing from isolation of the active principle, good-quality experimental work in animals and introduction into man. They were clinical pharmacologists, although it was not a term they knew. Fred Banting was a young surgeon who recruited Charles Best, just finishing his medical training, to work with him in the Department of Physiology in Toronto, to try to isolate the active antidiabetic principle known to be present in the pancreas from the earlier work of von Mering and Minkowski in 1889. The problem was that proteolytic enzymes in the pancreas rapidly degraded insulin when tissue extracts were made. During 1921 they made rapid progress, helped by the finding that fetal calves had no proteolytic enzymes in their pancreas but did produce insulin. With this and other material they were able to stabilize and purify preparations of insulin and use them to lower the blood sugar concentration markedly in diabetic dogs, prolonging their lives by up to 70 days. With the help of James Collip, a biochemist from chemical pathology, the purification process from adult pancreas was greatly improved by fractional precipitation in alcohol. The first to inject crude material into man were Banting and Best – they injected themselves, but all they achieved was a local inflammatory reaction. The first patient treated with the purified extract was a 14-year-old boy, Leonard Thompson, with diabetic ketoacidosis, who received the first dose of insulin on 23 January 1922. There was very marked clinical improvement within a short time. Insulin became commercially available later in 1922, as a result of prodigious efforts by the Eli Lilly company in Indianapolis. One of the early patients treated, a Canadian lawyer, survived into his late seventies. The contrast between ‘dogs to humans in 8 months’ of the insulin programme and the lengthy steps now taken with new drugs, extending from 10 to 15 years from inception of an idea to general use, is painfully obvious.
Quinine, Goldwater Memorial, the National Institutes of Health, and Clinical Pharmacology in the USA
After Pearl Harbor the Japanese quickly occupied Indonesia, which was the main source of quinine. This created a crisis in supply for the American Army operating in malarial South Pacific islands.
James Shannon, a young renal physiologist, was recruited to solve the problem of developing synthetic antimalarials and set up a laboratory at the Goldwater Memorial Hospital in New York. Shannon was quickly convinced that the concentration a drug achieved in the blood was more important than the dose administered. Bernard Brodie and his technician, Sidney Udenfriend, were given the task of developing analytical techniques. They developed methods of extracting drugs into ‘the least polar solvent’ and then measuring them with a spectrophotometer or fluorometer.
After the end of the war Shannon became Director of the Squibb Institute, but Brodie remained at Goldwater and produced a stream of papers on drug disposition and metabolism. Brodie and Julius Axelrod studied the metabolism of acetanilide and discovered that one of its metabolites was N-acetylaminophenol. They went on to show that this metabolite, better known as acetaminophen or paracetamol, was an analgesic.
In 1949 Shannon left Squibb to become the Scientific Director of the National Institutes of Health (NIH). He persuaded Brodie to join him there and to establish the Laboratory of Chemical Pharmacology. It was said of Shannon that he only had to whistle in New York and all the scientists he wanted came running. It was the interdisciplinary approach forged during the malaria programme that Shannon used to transform the NIH. Sidney Udenfriend was also recruited and the technicians included Julius Axelrod. John Burns worked with Brodie at Goldwater and after his move to industry became a major sponsor of clinical pharmacology. Drug metabolism and the responsible cytochrome P450 enzymes became a major focus of the NIH laboratory that was later led by Jim Gillette. Among the fellows who trained there was Donald Davies, who later became head of biochemical pharmacology in my group at the Hammersmith Hospital in London. The NIH laboratory was also largely responsible for developing the concept of reactive metabolites and their role in toxicity.
Shannon recruited Albert Sjoerdsma to the NIH Clinical Center in 1953 and his alliance with Sidney Udenfriend was very productive. Sjoerdsma had trained in both pharmacology and medicine at Chicago and in 1955 he became head of a new NIH Department of Experimental Therapeutics. This became the major training ground for a generation of distinguished US clinical pharmacologists, including Jim Crout, John Oates, Leon Goldberg, R. J. Levin and Ken Melmon. Each of these was to found a major clinical pharmacology research and training centre, particularly John Oates at Vanderbilt University in Nashville, Tennessee. A major focus of Sjoerdsma’s group and of his trainees was the drug treatment of hypertension and it was during research in Sjoerdsma’s laboratory, led by John Oates, that the hypotensive activity of methyldopa was discovered. The drug treatment of hypertension was also a major driver of the development of clinical pharmacology in Europe.
Developments in Europe
In Europe there was no central driver in the shape of a James Shannon or the NIH, so development was more multicentric, often arising from research-minded physicians working with pharmaceutical companies.
In France Henri Laborit, a surgeon interested in treating shock, had noted some interesting responses from chlorpromazine. He recommended it for trial to the psychiatrist Pierre Deniker. I recall talking to Deniker at the World Health Organization (WHO), when he said that the first indication he had of something important was when his head nurse came to him and said ‘Prof. Deniker, this drug is quite unlike anything we have ever tried before in schizophrenia’.
In the UK the main drivers were two powerful professors of medicine, John McMichael at Hammersmith and Max Rosenheim at University College Hospital. Both were interested in cardiovascular disease, particularly hypertension, and established some of the first hypertension clinics. As a young senior house officer (junior resident) I had already resolved to study drug therapy and had secured a Medical Research Council (MRC) training fellowship to go to the Department of Pharmacology in Oxford. One morning, while I was making rounds at the Brompton Hospital, the phone rang. It was John McMichael asking me (it sounded more like a command) to come back to the Hammersmith Hospital as his research registrar and take responsibility for the hypertension clinic. I was exceedingly fortunate in the timing, for within a few years we studied a gamut of new drugs, such as chlorothiazide, bretylium, guanethidine, methyldopa, propranolol and clonidine. During that time accelerated hypertension changed from a disease with a prognosis as bad as lung cancer to something that any well-trained internist could manage. With MRC support I was able to recruit Donald Davies from Brodie’s laboratory at the NIH as head of biochemical pharmacology and to embark on a multidisciplinary approach to the study of drugs in man. We had the good fortune to recruit a string of very bright young physicians to the department, including Alasdair Breckenridge, Mike Rawlins, Charles George, John Reid, Peter Lewis, Morris Brown, Jim Ritter and many others from the UK and overseas. Meanwhile, at University College Max Rosenheim recruited Desmond Laurence. Desmond had come out of the army wanting to train as a hospital physician, but there was intense competition from the large number of doctors being demobilized and he took a supernumerary post in the Department of Medicine at St Thomas’ Hospital. Because of his interest in treatment he was advised to apply for a Readership that had just become vacant at University College, although Desmond felt that he had few qualifications for it. Fortunately, he was appointed (as a Senior Lecturer) and so began the first clinical pharmacology unit in the UK. Laurence took a very active role in working on many antihypertensive compounds and his textbook on clinical pharmacology became a bible for generations of medical students. Brian Prichard in his department made important contributions in a number of areas, notably in discovering the hypotensive effect of propranolol.
In Sweden the drive to develop clinical pharmacology was given a major impetus by the distinguished pharmacologist, Borje Uvnas. The different origins of clinical pharmacology, in the UK and USA mainly from internal medicine and in Scandinavia from pharmacology, account for some of the differences in practice and funding that persist to this day, although there has been considerable convergence.
The involvement of industry and the advance of regulation
In the early days of drug regulation by government agencies, such as the Food and Drug Administration (FDA) in the USA and the Committee on Safety of Medicines (CSM) in the UK, the main emphasis was on safety, dating back to the Elixir Sulfanilamide disaster of 1937 in the USA. This tragedy occurred shortly after the introduction of sulfanilamide, the first sulphonamide antimicrobial drug, when diethylene glycol was used as the diluent in a liquid formulation known as Elixir Sulfanilamide and 105 patients died from its use. Under the drug regulations then in force, premarketing toxicity testing was not required. In reaction to this calamity, the US Congress passed the 1938 Federal Food, Drug and Cosmetic Act, which required proof of safety before the release of a new drug. It is worth noting that several pivotal points in the history of drug regulation have occurred in response to drug disasters, such as those caused by sulfanilamide, thalidomide and practolol. The Kefauver–Harris Drug Control Act, passed by Congress in 1962 in response to the thalidomide catastrophe in Europe, and the establishment of the Dunlop Committee in 1963 and the UK Medicines Act of 1968, strengthened US and UK regulation. These new laws changed everything, with the requirement that drugs must have demonstrated efficacy. They were very important drivers for developing clinical pharmacology, but more about the role of the pharmaceutical industry later.
The age of excitement
The development of clinical pharmacology in the 1950s and 1960s took place in an atmosphere of high excitement. There was a flood of new and important drugs, a general recognition of the need for more systematic investigation of drug action in man, and career opportunities were opening up. Drug safety disasters such as thalidomide gave a further powerful impetus. When the Committee on Safety of Drugs was formed in the aftermath of thalidomide, it was to a clinical pharmacologist (although his chair was named Therapeutics), Derrick Dunlop, that the British Government turned for its chairman.
During this period new departments were created at the Hammersmith Hospital (Dollery) and University College (Laurence) in London, in Nashville (Oates), Kansas City (Azarnoff), San Francisco (Melmon) and Atlanta and later Chicago (Goldberg) in the United States, and in Stockholm (Sjöqvist) in Sweden. Support came from both research funding agencies (the MRC in the UK and the NIH in the USA) and some major pharmaceutical companies. The WHO held meetings and issued reports on the development of clinical pharmacology. John Burns, now in industry, organized a series of meetings in the USA that brought together the nucleus of people leading the new discipline. New journals were established, such as the British Journal of Clinical Pharmacology, with Paul Turner as its first editor. New societies or clinical pharmacology sections of existing national and international pharmacological societies, such as the BPS and the IUPHAR, were formed. Very capable young physicians and scientists were attracted into the new discipline.
From these origins clinical pharmacologists spread out in all directions. Clinical pharmacology courses were introduced into the medical undergraduate curriculum and several centres offered postgraduate workshops and training programmes for clinical pharmacologists. In the UK clinical pharmacology was recognized as a specialty within internal medicine, with its own training programme. To counter the flood of promotion by pharmaceutical companies, both governments (Prescribers’ Journal) and private agencies (Drug and Therapeutics Bulletin, The Medical Letter) produced publications containing impartial assessments of the beneficial and adverse effects of medicines. Clinical pharmacologists figured prominently on the editorial boards of these publications and were often their editors, e.g. Andrew Herxheimer and Joe Collier, successive editors of the Drug and Therapeutics Bulletin. Clinical pharmacologists were prominent in devising and promoting pharmacovigilance systems, both local and national, for reporting and analysing adverse drug reactions. The newly formed hospital formulary committees almost always included a clinical pharmacologist, if one was available. Many clinical pharmacologists, particularly in English-speaking countries, provided front-line clinical services, usually in general internal medicine with a cardiovascular bias, but clinical pharmacologists established themselves in other therapeutic areas, e.g. Malcolm Lader in psychopharmacology at the Maudsley Institute in London. When they did not run their own services, some clinical pharmacologists tried to build up hospital consulting services on drug problems. I have personal memories of going on ‘clinical pharmacology rounds’ with Ken Melmon’s team in San Francisco.
What was less apparent to the participants at the time was that the expansion of their discipline took place at a time when both research and clinical service budgets were expanding rapidly. When the squeeze on university and hospital budgets began in the 1990s, many of these services, provided on a ‘pro bono’ basis by clinical pharmacologists, much of whose support was from research funds, began to crumble – but that comes later in the story.
The next stage I have called the age of divergence, when many of the disciplines that contribute to clinical pharmacology (such as drug metabolism, pharmacokinetics and clinical trials) grew so large and important in their own right that they had their own meetings and, to some extent, the specialty of clinical pharmacology fragmented. As we shall see later, the pharmaceutical industry dealt with the problem by creating project teams, with team members from each of these disciplines, but the academic centres struggled to remain comprehensive.
The era of growth and divergence
Many disciplines contribute to understanding drug action, therapeutic efficacy and safety in man. As the number and importance of therapeutic drugs grew rapidly in the 1950s, 1960s and 1970s, so did those scientific disciplines. They formed their own scientific societies, held their own meetings and founded their own journals. It was all perfectly understandable and mirrored what was happening across the whole range of biomedical disciplines, but it had a downside – they were often focused on the same drug molecules and yet their communication with one another was far from perfect. It is for that reason that I have chosen to describe the growth of these disciplines under separate headings, because that is mostly the way it happened.
Drug metabolism and clinical pharmacology
Pharmacologists, both basic and clinical, soon realized the importance of drug metabolism in explaining large interindividual differences in response to drugs. The symposium on ‘Plasma Concentrations and Drug Effects’ organized by the BPS at the Hammersmith Hospital in 1970 attracted a registration that exceeded the capacity of a lecture theatre that seated over 500 people. As the subject developed, however, much of the fundamental groundwork was done in laboratories of biochemistry and molecular biology.
Drug-metabolizing enzymes
It is critical to the work of any clinical pharmacologist to understand the absorption, distribution, metabolism and excretion of the chemical compounds that he or she administers to patients. As most drugs are lipid-soluble entities that are only very slowly excreted by the kidneys, chemical transformation in the body to make them more water soluble is essential. Two of the great pioneers of studies of drug metabolism were Tecwyn (RT) Williams at St Mary’s Hospital in London and Bernard Brodie at the NIH. RT’s interest grew from his work on glucuronyl transferase to a wide interest in the biochemistry of drug metabolism, while Brodie’s interest was always more to do with the ways in which metabolism influenced the actions of drugs and their toxicity. Both trained a large number of postdocs who took up critical positions in the field. Brodie’s laboratory at the NIH became the world centre for drug metabolism research. Scientists there were the first to investigate inhibition and induction of drug metabolism, to purify the enzymes responsible and to recognize the importance of metabolites in drug toxicity. The researchers included John Burns, Jim Gillette, Alan Conney, Wayne Levin and Ryuchi Kato, among many others. A breakthrough was the observation in 1962, by Ryo Sato and Tsuneo Omura in Estabrook’s laboratory, that carbon monoxide inhibited the haemoprotein responsible for much of drug metabolism and that this effect was reversed by shining light of 450 nm wavelength on the reaction mixture. Hence the name cytochrome P450.
Initially it was thought that there was a single cytochrome P450 with multiple binding sites and it was Anthony Lu at Merck and Minor Coon at Ann Arbor who recognized that cytochrome P450 was a mixture of several different isoforms, the so-called mixed function oxidases.
The molecular biology of the cytochrome P450 isoenzymes was largely worked out by Daniel Nebert and Frank Gonzales at the NIH and Urs Meyer in Switzerland. It has since become standard practice in industry to express the five main human cytochrome P450s for use in the developability work-up of new molecules. This shows which isoforms of cytochrome P450 are responsible for metabolism of the compound or may be inhibited by it, and metabolism-dependent inhibition of the enzyme gives an indication of the formation of electrophilic metabolites.
Reactive metabolites
One of the major contributions of studies of drug metabolism has been the recognition of the importance of reactive metabolites in drug toxicity, particularly hepatotoxicity and so-called idiosyncratic toxicity (severe events of very low frequency). This work can be traced back as far as the work of Landsteiner and Jacobs on sensitization by simple chemicals in the 1930s, but the key compound in the progression of this understanding was paracetamol (acetaminophen), because of the frequency of its use in attempted suicide. The main features of its toxic effects were elucidated in Jim Gillette’s laboratory at the NIH in the 1970s. Paracetamol is metabolically activated by cytochrome P450 enzymes to a reactive metabolite that depletes glutathione and becomes covalently bound to proteins. Repletion of glutathione prevented the toxicity and this became the foundation of the treatment of paracetamol overdose with sulphydryl donors such as methionine, cysteamine and N-acetylcysteine, to which Laurie Prescott in the UK made important contributions. The reactive metabolite was identified later as N-acetyl-para-benzoquinone imine (NAPQI). Recently, Kevin Park’s laboratory in Liverpool has made further progress in understanding the mechanism of paracetamol toxicity, by demonstrating that paracetamol directly inhibits γ-glutamylcysteine synthetase. I end this paragraph with a quote from Stephen E. Clarke, GSK’s head of drug metabolism in the UK: ‘It was thought to be a bad idea for chemicals to form reactive species and covalently bind to proteins over 50 years ago. It still is!’.
Pharmacogenetics and drug metabolism
Werner Kalow in Canada was the real founder of pharmacogenetic studies in clinical pharmacology and his book on pharmacogenetics, published in 1962, was a landmark. UK clinical pharmacologists were early in the field of pharmacogenetics applied to drug metabolism. David Price Evans carried out extensive research on the acetylator phenotype and its effects on drug metabolism and toxicity, particularly for isoniazid. Robert Smith, working in Tecwyn Williams’s laboratory, made a seminal observation when he took a small dose of debrisoquine (supervised by Peter Sever) and became markedly hypotensive. This was a deliberate experiment. Bob Smith believed that the wide range of doses of antihypertensive drugs required to control blood pressure could not be explained by differences in target organ sensitivity. Further investigation showed that Bob’s ability to hydroxylate debrisoquine at the 4 position was very low, and from that a polymorphism of cytochrome P450, now called CYP2D6, was recognized. In current parlance, Bob was a poor metabolizer of debrisoquine. At about the same time, Michel Eichelbaum in Germany made an independent observation of the same polymorphism, using sparteine as a substrate. More recently, polymorphisms of less common drug-metabolizing enzymes have been recognized that can have very serious consequences. Both thioguanine and mercaptopurine are metabolized by thiopurine methyltransferase, an enzyme that has many polymorphisms, and marked deficiency of this enzyme can cause severe toxicity with normal doses of thioguanine and mercaptopurine. Rare polymorphisms of dihydropyrimidine dehydrogenase can result in severe 5-fluorouracil toxicity.
Despite major efforts, particularly by Sjöqvist’s team in Sweden, and others with major interests in therapeutic drug monitoring, adoption of these findings into the routine practice of clinical medicine has been very slow. On the other hand, in the pharmaceutical industry it is part of the routine work-up of new compounds to evaluate their routes of metabolism and the effects of common enzyme polymorphisms, both in vitro and in vivo. Possibly the tide of clinical application will turn now that diagnostic chips are becoming available for some of the most frequent polymorphisms of drug metabolism. What are needed, however, are large-scale clinical trials to demonstrate that it is possible to achieve more effective (and particularly cost-effective) and safer therapeutic outcomes by taking genetic polymorphisms of the drug-metabolizing enzymes into account. Was the failure to plan and carry out such trials a consequence of a ‘siloed’ approach to clinical pharmacology or of lack of interest by funding agencies and pharmaceutical companies?
Clinical pharmacology and pharmacokinetics
The origin of the term pharmacokinetics is attributed to F. H. Dost in a paper published in 1953, but as early as 1847 Buchanan in England had measured the arterial concentrations of ether during general anaesthesia and inferred that rapid recovery after short periods of anaesthesia with ether was due to redistribution into the tissues. In 1937 Teorell, a Swedish physiologist, published the first physiologically based pharmacokinetic model, using five compartments: the drug depot, the circulation, fluid volume, renal elimination and tissue inactivation. He used real volumes, but it was years before his contribution was recognized and even today we are only slowly returning to physiological models, albeit with much greater knowledge of drug distribution, membrane permeability, protein binding, transporter molecules and the like. In a 1981 review in the journal Pharmacology and Therapeutics, John Wagner gave a very detailed account of the early history of pharmacokinetics, but I must acknowledge the great help received from Malcolm Rowland in preparing the account that follows.
Although clinical pharmacologists have been heavily involved in pharmacokinetic studies – sometimes to the extent that journals of clinical pharmacology were dominated by rather dull pharmacokinetic papers – much of the work on the underlying principles was pioneered in Schools of Pharmacy, mainly in the USA. John Wagner, Eino Nelson, Gary Levy, Sidney Riegelman, Eckert Kruger-Thiemer (in Germany), William Jusko and Malcolm Rowland (who later returned to the UK) made particularly noteworthy contributions. This account concentrates on the development of concepts that came to have a major influence on clinical pharmacology and is not comprehensive. Absorption kinetics were initially worked out by Wagner and Nelson and dose-dependent kinetics by Jusko and Kruger-Thiemer, who contributed new ideas to the design of dosage regimens and an understanding of the effects of drug distribution. A development of particular importance to clinical pharmacology was the partnership between Ken Melmon in clinical pharmacology and Sidney Riegelman in pharmacokinetics at the University of California in San Francisco during the late 1960s. It was here that concepts of pharmacokinetic/pharmacodynamic (PK/PD) analysis began to develop.
The idea of applying the concept of clearance, well established in renal physiology, to drug kinetics was of singular importance and one to which Malcolm Rowland and the Vanderbilt clinical pharmacology group, headed by Grant Wilkinson, made the major contribution. From this it was a short step to understand that if the systemic clearance of a drug was a high fraction of liver blood flow, presystemic, or ‘first-pass’, clearance was the main determinant of the amount of an oral dose that reached the systemic circulation. This concept proved to be valuable in understanding the kinetics of the β-adrenergic blocking drug propranolol, work to which David Shand in clinical pharmacology at Vanderbilt made a particular contribution.
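A rough numerical illustration may help (the symbols here are introduced only for this sketch; gut-wall metabolism and incomplete absorption are ignored). For a drug eliminated entirely by the liver, with hepatic blood flow $Q_H$ and hepatic clearance $CL_H$, the hepatic extraction ratio and the fraction of an oral dose escaping first-pass extraction are approximately

$$ E_H \approx \frac{CL_H}{Q_H}, \qquad F \approx 1 - E_H = 1 - \frac{CL_H}{Q_H}. $$

Thus, for a high-clearance drug such as propranolol, with $CL_H$ approaching liver blood flow, only a small and highly variable fraction of an oral dose reaches the systemic circulation, whereas modest changes in $Q_H$ or $CL_H$ make little difference to a low-clearance drug.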
Mention must also be made of E. J. Ariens’ trenchant reminder to pharmacokineticists and clinical pharmacologists about the importance of chirality in his 1984 paper in the European Journal of Clinical Pharmacology, entitled ‘Stereochemistry, a basis for sophisticated nonsense in pharmacokinetics and clinical pharmacology’.
Currently, a major focus of interest in pharmacokinetics is the role of transporters in moving drugs across cell membranes, an area that is particularly associated with Yuichi Sugiyama in Tokyo and Les Benet in San Francisco. This has required a serious revision of ideas for those of us who were taught in pharmacology classes that movement of drugs across cell membranes involved dissolution in the lipid bilayer of cell membranes and diffusion out the other side. Amongst other outcomes, this development has led to an understanding of the mechanisms responsible for the interaction between digoxin and inhibitors of P-glycoprotein, such as verapamil. It is also of great significance in understanding the function of the blood–brain barrier, which now seems more like a dyke lined with rows of pumps than the impermeable barrier that we were once taught. The hepatocyte pumps, particularly the organic anion transporters such as OATP1B1, are of major interest in safety assessment, because of their ability to concentrate many aryl organic acids by more than 100-fold in hepatocytes over plasma. Industrial clinical pharmacologists and pharmacokineticists now routinely assess the effects of transporters when considering the kinetics of new molecules.
Pharmacokinetics by itself is interesting, but the real significance for clinical pharmacologists comes when pharmacokinetics is correlated with accurate pharmacodynamic measurements in PK/PD analysis. Lennart Paalzow from Uppsala and Lewis Sheiner and his group from San Francisco were pioneers in this area. Lew Sheiner had a major role in applying to clinical pharmacology the concept of using a response surface to portray a three-dimensional interaction, for example to display plasma concentration, drug efficacy and toxicity on the same plot. Jusko pointed out that concentration–effect relations are often nonlinear, particularly when the drug response is only indirectly related to the pharmacological action, as is often the case in therapeutics.
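Much of this PK/PD work rests on the sigmoid Emax (Hill) relation, written here simply to fix ideas, with $C$ the plasma or effect-site concentration and $E_{max}$, $EC_{50}$ and $\gamma$ the fitted parameters:

$$ E = \frac{E_{max}\,C^{\gamma}}{EC_{50}^{\gamma} + C^{\gamma}}. $$

Between roughly 20% and 80% of $E_{max}$ this curve is close to linear in $\log C$, which is why simple log-linear analyses often serve well over the therapeutic range; the response-surface and indirect-response approaches mentioned above extend this basic relation to interactions and to delayed or indirect effects.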
A major development, championed by Meindert Danhof from Leiden, has been to apply systems analysis to the long chain of events from the administration of the drug, its absorption into the body, metabolism, protein binding, gaining access to the target receptor or enzyme, signal transduction, changes in biological pathways and the effects of negative and positive feedback control loops to the eventual therapeutic effect. This is undoubtedly destined to grow and grow as the new science of systems biology brings together biomedical scientists, engineers, mathematicians and informatics specialists to address pharmacological problems.
Another developing area is the application of population PK/PD analysis to large-scale clinical trials using sparse pharmacokinetic sampling. The idea is attractive, but the relatively crude nature of the measurements made in most clinical trials creates serious difficulties.
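In outline (the notation here is generic rather than that of any particular program), a population analysis treats each patient’s pharmacokinetic parameters as draws from a population distribution, so that even two or three samples per patient contribute to the overall estimates:

$$ y_{ij} = f(t_{ij}, \theta_i) + \varepsilon_{ij}, \qquad \theta_i = \theta_{pop}\,e^{\eta_i}, \qquad \eta_i \sim N(0, \omega^2), \quad \varepsilon_{ij} \sim N(0, \sigma^2), $$

where $y_{ij}$ is the $j$th observation in patient $i$ and $f$ is the structural PK or PK/PD model. The statistical machinery is well developed; the difficulty noted above lies mainly in the quality of the pharmacodynamic observations themselves.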
Clinical pharmacology and clinical trials
Clinical trials of a sort are as old as mankind, but controlled trials are a much more recent invention. James Lind has as good a title to being the originator as anyone. In 1747 on HMS Salisbury, patrolling in the English Channel, Lind selected 12 men, all suffering from similar symptoms of scurvy, and divided them into six pairs. He gave each group different additions to their basic diet. Two men received a quart of cider a day and two others were given an unspecified elixir three times a day. One pair was treated with sea water and another was fed with a combination of garlic, mustard and horseradish. Two men were given spoonfuls of vinegar and the last two were given two oranges and one lemon every day. Four out of the six groups reported no change; the men given cider reported only a slight improvement, but the two men fed citrus fruits experienced a remarkable recovery. But to remain within my 75-year review span the palm of honour really belongs to the UK’s MRC, and particularly its first treatment trial in tuberculosis, and to Austin Bradford Hill, the statistician to the trial. Bradford Hill had set out the principles of clinical trial design in a book based on a series of articles published in the Lancet in 1937. Given the uncertain prognosis of pulmonary tuberculosis and the limited supply of the drug in 1947, Bradford Hill (who had himself wanted to study medicine but was advised not to do so when he contracted tuberculosis) proposed that it would be unethical not to assess what advantage streptomycin offered in this form of the disease compared with the current standard treatment, bed rest. This view was accepted and a fully randomized controlled trial was conducted. Patients included in the trial were limited to those aged between 15 and 30 with ‘acute progressive bilateral pulmonary tuberculosis of recent origin, bacteriologically proved and unsuitable for collapse therapy’. Both the streptomycin and control groups received the ‘standard of care’ treatment, bed rest. The result was dramatic: tuberculosis was now a curable disease. The MRC tuberculosis trials are also noteworthy for being among the first scientific studies of adherence to treatment. Patients were taking para-aminosalicylic acid (PAS) and isoniazid (INAH) combined in a cachet, because PAS was unpleasant to take and there was concern that it was most likely to be omitted if given alone. It was easy to test for PAS in the urine and the MRC trial team arranged for health visitors to make unadvertised home visits to collect urine samples. When these were tested, only about half showed any evidence of PAS. Had Bradford Hill trained in medicine, as he desired, there is little doubt that we would have welcomed him as one of the founders of clinical pharmacology.
In the last 60 years there has been an enormous growth in the number of randomized controlled trials. They are the key to the development of new drugs and the foundation stone of evidence-based medicine. Most of these trials have a simple parallel-group design and the choice of dose is crucial. When possible, trials include an inactive placebo as a control comparator. Here a clinical pharmacologist, Louis Lasagna, made a singular contribution. While at Harvard, studying anaesthesia and analgesia, he observed that when surgical patients suffering from wound pain were given a subcutaneous injection of 1 ml of sterile saline, three or four out of every 10 such patients reported satisfactory relief of pain. He concluded that study of the placebo response was essential in clinical trials.
There is no doubt that the existence of the UK National Health Service (NHS) and the support of granting agencies, particularly the MRC and Cancer Research UK (CRUK), have helped to make the UK a world centre of high-quality clinical trials. The UK was also fortunate to have a cadre of very able medical statisticians who were particularly interested in clinical trial design. To mention only a few: Austin Bradford Hill, Richard Doll, Geoffrey Rose, Bill Miall, Tom Mead and Richard Peto.
An old saying in the pharmaceutical industry is that the thing most often got wrong in clinical trials is the dose. In that context, it is surprising that most clinical pharmacologists outside industry have taken relatively little interest in issues of dose selection, individual differences in response and adherence to treatment in clinical trials. This is particularly surprising because trial design is evolving in ways that make trials more interesting to the clinical pharmacologist. One example is the use of adaptive designs. These come in different flavours, but one designed to address the dose selection problem is to start the trial with a number of different doses (as many as five or six in some cases) and hold an interim review, at which some doses will be dropped for lack of efficacy or an unacceptable burden of adverse effects. Another variant is to use a crossover design or blind placebo substitution for part of the trial, so that all patients get both active and placebo therapy. Blind placebo substitution is particularly useful for assessing symptomatic adverse effects and for evaluating diseases in which placebo effects or regression to the mean are important confounders. It is now common practice to obtain plasma concentration measurements during clinical trials and to subject the trial to a PK/PD analysis for both efficacy and adverse effects. A development of this approach is the ‘randomized concentration-controlled trial’ (RCCT), in which a deliberate attempt is made to hold the plasma concentration within a defined range. Carl Peck has been a vigorous advocate of this approach. Concentration-controlled trials have not been widely adopted, because they are much more complex to manage than fixed-dose trials and difficult to implement when a drug is marketed, but for critical compounds with a narrow therapeutic range they can be very useful. There has also been a revival of interest in a Bayesian approach to clinical trials, because new information acquired during the trial can be used to update the prior assumptions on which it was based. This is an area in which the MRC Unit in Cambridge has been particularly active, with David Spiegelhalter as a leading exponent of the Bayesian approach. Clinical trial design meetings are now often enlivened by debates between Bayesians and the more traditional frequentists.
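To make the Bayesian idea concrete with a deliberately simplified example (binary response, numbers invented purely for illustration): if prior belief about the response rate $p$ of a dose arm is expressed as a Beta$(a, b)$ distribution and an interim look shows $r$ responders among $n$ patients, the updated belief is

$$ p \mid \text{data} \sim \mathrm{Beta}(a + r,\; b + n - r). $$

A prior of Beta(2, 8), centred near 20%, combined with 12 responders in 30 patients gives Beta(14, 26), centred near 35%, and the decision to drop or continue the arm can then rest on the probability that $p$ exceeds a clinically relevant threshold.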
The contribution of clinical pharmacology to the effective use of medicines to treat tropical diseases, especially malaria, also deserves mention. The Department of Pharmacology and Therapeutics in Liverpool has been particularly active in investigating the efficacy and pharmacokinetics of new drug combinations used to treat uncomplicated falciparum malaria, e.g. chlorproguanil–dapsone–artesunate. A major contribution has also been made by Nick White’s group, based in Oxford but working in Thailand and Vietnam, whose work on artemisinin combinations is pivotal to the modern treatment of uncomplicated malaria. Clinical pharmacologists who work in comfortable clinics and laboratories in the UK and elsewhere in the developed world owe a special debt to those who work at the front line of their discipline in tropical countries with more limited facilities.
Clinical trials are in the midst of a renaissance and the opportunities for clinical pharmacologists to contribute, particularly to prediction of dose and assessment of mechanism-based adverse effects, are growing rapidly.
The rise of the pharmaceutical industry
This is not the place to write the history of the pharmaceutical industry, but it is essential to recognize that the development of academic clinical pharmacology is inextricably interwoven with the avalanche of new medicines that have been discovered and/or developed by pharmaceutical companies in the private sector. In the early days the German pharmaceutical companies that arose from the dyestuff manufacturers were dominant, but after the 1939–1945 war it was mainly US, UK and Swiss companies that made the major innovations. More recently, Japanese companies that have mainly grown out of pharmaceutical wholesalers have made important contributions. However, lest we forget, had it not been for brilliant drug hunters, such as George Hitchings and Gertrude Elion, Paul Janssen and James Black, clinical pharmacology would probably not exist in its present form and the health of the world would be much the poorer.
The flow of new compounds for hypertension, oedema, bacterial infections and psychiatric diseases was particularly strong in the period between 1950 and 1965. In those days chemistry was the main strength of pharmaceutical companies and their medical functions were not well developed. Pharmaceutical companies sought the collaboration of interested physicians when they had a new compound and the study protocol was often very brief, sometimes virtually non-existent. The conduct of the study was left largely to the investigator. Direct interaction between the discovery scientists and the clinical investigators was frequent and studies progressed rapidly. In retrospect, the quality of such studies was not very high, but they usually reached the right conclusion remarkably quickly without major safety problems. The diseases under study were often those in which it was relatively easy to see a marked response in a few days, for example, acute bacterial infections, accelerated hypertension, severe oedema, leukaemia. In some respects this was a golden age for budding clinical pharmacologists, but it did not last.
As attention shifted to more chronic and less severe diseases, a single centre rarely had sufficient patients for a clinical trial. Major academic centres with a referral practice had few of the relatively straightforward patients that were needed in clinical trials, so industrial attention moved towards large district general hospitals or family practices. Regulatory agencies started to insist on detailed formal protocols and better documentation of efficacy and safety on standard case report forms. The response of pharmaceutical companies was to recruit staff who were experts in clinical pharmacology, clinical trial design, statistical analysis and regulatory affairs, and operational staff who could organize the paperwork, clinical trial supplies, quality control of trial centres and the like. As trials grew ever larger, multiple centres were needed, often spread around the world. All the patients had to be studied using the same protocol and this was now written by the company, often with advice from external consultants. To be sure that their data would be acceptable, companies sought opinions about protocols from regulatory agencies, and the FDA in particular often required changes. After exhaustive internal and regulatory review, companies were understandably reluctant to amend protocols because of the time and cost involved. The era of the non-negotiable protocol had arrived and the relationship of the investigator and academic clinical pharmacologist to the pharmaceutical company became largely that of a contractor paid by the number of patients recruited and protocols successfully completed.
For a time, academic centres still had a substantial input to the design of very early studies up to what is now termed ‘proof of concept’, but the increasing use of healthy volunteers in early human studies required special facilities for housing the volunteers for up to 2 weeks. A few academic centres created their own in-house units for this purpose and some of the larger pharmaceutical companies built their own clinical pharmacology units, but a large number of commercial contract houses set up residential facilities and the best of these were capable of high-quality work to the exacting time-lines required by industry. Over time these contract houses have taken over the majority of early-phase work on new compounds. There were and are still opportunities for clinical pharmacologists to propose ‘investigator sponsored studies’, but these too have become more tightly regulated for compounds that are not yet on the market. Companies fear that independent investigators will carry out studies that may require a great deal of additional work, with resultant costs and loss of time. Sometimes there may have been an element of not wanting to explore the properties of a molecule that might make it appear less desirable than the image the sponsors wish to create. The role of the academic clinical pharmacologist in the development of new drugs has declined, but that of the clinical pharmacologist in industry has grown. Industry is much more effective at putting together multidisciplinary project teams than academia and in many areas of drug development the expertise in universities has fallen well behind that available in industry. This has consequences for clinical pharmacology training which are only partially addressed by seconding trainees to a period in industry. Most clinical pharmacologists now work in industry, but, with a few exceptions, their work is not well known outside the companies that employ them, because relatively little of it is published. This limits their ability to act as role models and mentors for physicians considering a career in clinical pharmacology. The Association of the British Pharmaceutical Industry (ABPI) in the UK supports a number of training posts for specialist registrars in clinical pharmacology, but unless there are vibrant academic departments behind them such schemes may not always attract the quality of candidate that is sought.
The last 10 years have seen the beginning of a further paradigm shift, driven by the very high attrition rate of new compounds and the enormous costs of drug development. Because patent life is relatively short and the development process is long, companies are under pressure to speed up development and reduce costs. One solution that is being adopted by many companies is to move later-phase clinical trials to lower-cost countries in Eastern Europe, Asia and South America, while basic work on drug discovery has migrated in the opposite direction, towards the USA, attracted by the enormous investment of the NIH in biomedical research and the large market for pharmaceuticals in the USA. Unless the UK academic base is strong, it is foreseeable that industrial clinical pharmacology will also begin to migrate overseas.
Clinical pharmacology and experimental medicine
The heart of clinical pharmacology lies in measurement of drug efficacy and safety in man. In the early days of clinical pharmacology most of the measurement techniques used methods already available in clinical medicine or human physiology. The sphygmomanometer, the spirometer, the body plethysmograph, the cardiac catheter and the electrocardiogram were pressed into service and were used most effectively in the investigation of cardiovascular and respiratory drugs. However, to use what originated as diagnostic technology in measuring drug actions required modification and standardization. Digit preference and digit avoidance were particular problems with blood pressure measurements, so the random-zero sphygmomanometer was developed. Observer bias and patient expectation had to be managed by using placebos or blinded observers.
Clinical pharmacology has been integral to experimental medicine, particularly using drugs to probe mechanisms of physiology and disease and establishing proof of concept in vivo. An early example was the use of the forearm as a model system to study the human vasculature, particularly in the UK. It involves infusing agonists, usually vasoconstrictors, into the brachial artery through a very fine needle and measuring changes in forearm blood flow by plethysmography. The method was originally developed by Tony Dornhorst in Sharpey-Shafer’s Department of Medicine at St Thomas’ Hospital in London in 1960 and since then has been used extensively to elucidate mechanisms of vascular control. The technique is now widely used in cardiovascular research and illustrates the influence that the discipline has had, particularly in developing new ways of studying drug effects in man. A recent example is in the investigation of novel endothelin antagonists.
Academic clinical pharmacologists have one advantage over their industrial colleagues – they usually do the studies themselves rather than through an intermediary contract research organization. They also often put more effort into training their subjects and habituating them to the experimental environment. Industrial contract houses almost always specify that the dose must be taken with the subject fasting, and because of subsequent measurements the period without food can stretch to hours. This is a recipe for nausea and increases the risk of fainting on standing. One lesson we learnt at the Hammersmith Hospital was that a small glass of orange juice and a slice of bread or toast does little to the pharmacokinetics of an oral dose but increases the physiological stability of the subject materially. Unfortunately, it does nothing to help that other great source of volunteer symptoms—caffeine-withdrawal headache.
Although clinical pharmacologists often use complex technology for measurements, very simple techniques have proved to be very valuable in studies in experimental medicine. An example is the measurement of peripheral β2-adrenergic effects after inhalation of a β2 agonist. Measurements of fine tremor of the extended hands and the falls in serum potassium concentration and diastolic blood pressure have proved to be excellent pharmacodynamic indices of peripheral effects. Similarly, the use of pre- and postweighed dental cotton wool rolls, placed on either side of the tongue and in the buccal pouches for a minute, have proved very useful for assessing the effects of drugs on salivary flow. Symptom questionnaires that subjects complete themselves can provide very useful information about adverse effects, such as sedation and nausea, that are otherwise difficult to measure. They are much more reliable than investigators’ recording of ‘volunteered’ symptoms.
Gradually, new technology was developed. I remember sitting in the Department of Medical Physics in the basement of the Hammersmith Hospital at midnight, watching the decatron tubes spinning as I counted carbon-14 in urine samples from an early study of the (very low) bioavailability of the quaternary ammonium adrenergic neuron-blocking drug bretylium. The counter was either the second or third liquid scintillation counter in the world; it had been constructed by the workshop glassblower and was housed in a retired ice-cream freezer. The samples were changed by carefully turning a series of glass taps. (The decatron was a stage before the Nixie tube, which displayed actual numbers.) I had administered the labelled drug, with an active carrier, had measured the lying and standing blood pressures and was trying to understand the source of the great variability in response to this drug.
For a time in the late 1960s and early 1970s practically every clinical pharmacologist in the world appeared to be studying the effect of β-adrenergic blocking drugs on heart rate, but it was Brian Prichard at University College Hospital who discovered their blood pressure-lowering effect – and that was unexpected. One of the simple discoveries was that if the plasma propranolol concentration was falling linearly on a semilogarithmic scale and the concentration was on the straight middle part of the log concentration–response curve, the effect on heart rate declined linearly with time. Yet to this day at clinical pharmacology meetings one often hears speakers referring to the plasma half-life of the drug as though it were the half-life of its action.
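The arithmetic behind that observation is worth spelling out (a back-of-envelope sketch rather than a formal model). With first-order elimination, $C(t) = C_0\,e^{-kt}$, so $\log C$ falls linearly with time; if, over the middle of its range, the effect is approximately linear in $\log C$, say $E \approx m \ln C + c$, then

$$ E(t) \approx E(0) - m\,k\,t, $$

that is, the effect wanes at a constant rate rather than exponentially, and the time taken for the effect to halve depends on its starting value and bears no fixed relation to the plasma half-life.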
Pharmacokinetics did not really take off in the era of the spectrophotometer and spectrofluorometer because of problems with sensitivity and specificity, and it was the ready availability of gas chromatography, and later liquid chromatography, which transformed practical pharmacokinetics and made the measurement of plasma concentrations a routine part of primarily pharmacodynamic clinical pharmacology studies. This development was not entirely beneficial, as clinical pharmacology meetings and journals began to be flooded with abstracts containing rather uninteresting pharmacokinetic studies. That era has not entirely ended. The problem was that it was so much easier to make accurate measurements of plasma concentrations than to make pharmacodynamic measurements of anything like comparable accuracy.
Some of the better equipped centres began to carry out much more complex studies, integrating pharmacodynamics, pharmacokinetics and drug metabolism, to gain new insights. John Oates’s discovery of the antihypertensive action of methyldopa in Al Sjoerdsma’s laboratory at the NIH and their later work on prostaglandins, Folke Sjöqvist’s work in Stockholm on antidepressants and some of the work Donald Davies and our team did at the Hammersmith Hospital on clonidine and other centrally acting drugs and on leukotrienes were of this kind. The word ‘biomarker’ had not come into general use, but studies were made using plasma and urine catecholamine concentrations to assess the effects of drugs on the sympathetic nervous system, plasma and urine hydrolysis/metabolic products of prostacyclin, thromboxane and leukotrienes to study the vascular, bronchial and platelet effects of prostanoids, and hormone concentrations to study neuroendocrine actions. These would now be hailed as the successful use of new biomarkers. The badge of a successful clinical pharmacology department came to be the number of gas-chromatography/mass spectrometry instruments they owned (Vanderbilt was the winner). It now seems that the ownership of an Accelerator Mass Spectrometer to count single atoms of labelled drug molecules may become its 21st century equivalent. A company equipped with Sciex LC/MS-MS spectrometers can often develop in a day or two a reliable drug assay that would have taken months 25 years ago.
As clinical pharmacology moved into new areas, such as oncology, psychiatry and bone and joint disease, new methods of clinical measurement had to be developed. Imaging methods have become increasingly important, especially positron emission tomography (PET) and functional magnetic resonance imaging (fMRI). PET labelling, usually with 11C, has made it possible to measure receptor occupancy in the human brain, something which must have seemed almost inconceivable to early clinical pharmacologists. fMRI gives a fairly reliable indication of the brain areas that are being activated when a subject performs test tasks such as mental arithmetic and allows the study of drug effects on these responses. Advanced imaging methods are beginning to be widely applied in later-phase trials and this is likely to prove one of the greatest growth areas of clinical pharmacology and experimental medicine.
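How such occupancy figures are usually derived can be illustrated simply (the details vary with the radioligand and the kinetic model used). If a binding measure such as the binding potential (BP) of the tracer in the target region is obtained at baseline and again after dosing with the unlabelled drug, fractional receptor occupancy is commonly estimated as

\[ \text{Occupancy} \approx \frac{BP_{\text{baseline}} - BP_{\text{drug}}}{BP_{\text{baseline}}}, \]

and plotting occupancy against plasma drug concentration gives an in vivo estimate of the concentration needed for, say, 50% occupancy – information of obvious value in choosing doses for later trials.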
Traditionally, most oncology trials use the very simple tumour size criteria set out in Response Evaluation Criteria in Solid Tumours (RECIST). RECIST has important limitations. It tells the investigator how the size of the tumour has changed, but nothing about drug penetration into it, tumour cell viability or tumour blood flow. It also often takes many weeks or even months to give a read-out. New imaging methods make it feasible to assess all of these parameters. 11C-labelled drug can be used in a PET camera to assess drug entry into the tumour mass. 18F-fluorodeoxyglucose (FDG) can be used in a similar way to measure changes in glucose utilization in vivo and a variety of isotope and non-isotope imaging methods can be used to assess blood flow. These methods also offer the potential for giving much quicker answers indicative of a tumour response when drugs that interfere with kinases, vascular receptors or growth factors (such as EGFR and VEGF) stop a tumour growing without having a dramatic effect on its size.
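To make the point concrete, the size criteria in RECIST reduce essentially to comparing the sum of target-lesion diameters with its baseline and smallest recorded (nadir) values. The sketch below (in Python, using the commonly quoted thresholds of a 30% decrease for partial response and a 20% increase for progression; the real criteria add further rules about new lesions, non-target lesions and minimum absolute changes) shows how coarse the read-out is:

def recist_size_response(baseline_sum_mm, current_sum_mm, nadir_sum_mm):
    # Illustrative classification from sums of target-lesion diameters (mm) alone.
    if current_sum_mm == 0:
        return 'complete response'
    if current_sum_mm >= 1.2 * nadir_sum_mm:      # at least 20% growth versus the smallest recorded sum
        return 'progressive disease'
    if current_sum_mm <= 0.7 * baseline_sum_mm:   # at least 30% shrinkage versus baseline
        return 'partial response'
    return 'stable disease'

# A kinase inhibitor that halts growth without shrinking the tumour:
print(recist_size_response(baseline_sum_mm=60, current_sum_mm=58, nadir_sum_mm=58))
# prints 'stable disease', saying nothing about drug penetration, cell viability or blood flow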
Psychiatry remains a difficult area for the clinical pharmacologist. Drug responses are slow to develop and very variable, and placebo responses are often large, particularly in depression. Rating scales, such as the Hamilton Depression Rating Scale (HAM-D) and the Alzheimer’s Disease Assessment Scale – Cognitive Subscale (ADAS-Cog), are amongst the mainstays in clinical trials. Often these scales were devised to aid clinical assessment, but were not necessarily sensitive or specific enough to give a good read-out on psychopharmacological effects. Efforts to develop better rating scales continue, but much attention is now being focused on imaging methods, particularly fMRI. fMRI relies upon measuring very small changes in brain blood flow as areas of the brain are activated by a challenge such as viewing cartoons of sad or smiling faces.
In skeletal disease very precise imaging techniques make it possible to apply engineering principles to calculate changes in bone strength with treatment or to measure the thickness of articular cartilage and the size of the pits in it to assess osteoarthritis.
The ability to make much more exact measurements of drug action is beginning to revolutionize experimental medicine in some of these difficult therapeutic areas. There is a great opportunity for clinical pharmacologists if they take full advantage of it.
Present concerns, future hopes
There is a widespread belief that clinical pharmacology has not fully delivered on its early promise and may be faltering, particularly in university medical centres. That there are problems is undoubted, and their causes are worth examining, but for a discipline held to be in decline, demand for its services in the pharmaceutical industry, government regulatory agencies and assessment bodies is remarkably robust.
One of the problems is fundamental – what exactly is the clinical element of clinical pharmacology? Is it a laboratory discipline dealing with biomarkers, pharmacokinetics, drug metabolism and genetics based on human samples? Or is it a desk discipline dealing with design and evaluation of clinical trials, drug utilization on a local and national level, clinical guidelines for drug use and pharmacovigilance? Or is it a hands-on clinical discipline dealing with patient care, experimental medicine studies of old and new drugs, clinical investigation of adverse reactions and interactions and consultancy services to other clinicians who have drug problems? One (correct) answer to these questions is that it is ‘all of the above’, but in the medical school and hospital setting the exact answer makes a great deal of difference to where the specialty is located, its training and promotion routes, whether it requires a medical training to do it and, critically, who pays.
If it is predominantly a laboratory discipline, generations of pharmacologists, pharmacists and biochemical pharmacologists have demonstrated their ability to perform at a high level and many are proud to call themselves clinical pharmacologists. If it is mainly a desk discipline, much of the work can be, and is, done to a high standard by epidemiologists and statisticians. If it is concerned with direct patient care there is immediate competition with clinical subspecialties such as cardiology, oncology and neurology, which have their own lengthy and demanding training programmes. But somewhere in this maelstrom of disciplines and silos there is a need for someone who can pull it all together. He or she knows the medical problem well, is pretty well informed about the pharmacology and metabolic fate of the drug in man and how it will interact with the disease from which the patient suffers, knows how to design a small-scale intensive clinical trial and interpret the results, and is a good enough physician to make sure that it is ethical and safe. Not a polymath who can do all of these things superbly, but one who is well enough informed about all of them to ask the right questions and seek appropriate advice and who is very expert in one or two of them and can take the lead. This will almost certainly be a physician with research training, including laboratory experience, for if clinical pharmacology loses its clinical input most of its value and relevance to healthcare delivery in the community will be lost. But how do they train, where do they work and who pays?
A clinical specialist is disease focused. A clinical pharmacologist is drug and disease focused, but knowledge of all the properties of the drug in relation to disease is the essential contribution. This involves a reasonable working knowledge of a number of fields, stretching from molecular pharmacology through animal pharmacology, safety assessment, pharmacokinetics and metabolism, measurement of drug action in man, PK/PD, clinical trial design, target disease morbidity and mortality, pharmacogenomics, epidemiology, pharmacovigilance and risk management, to drug economics, utilization and regulation. It is a formidable list, which makes John Gaddum’s remark about the pharmacologist as a ‘jack of all trades’ ring in one’s ears, and some may point to the danger of being ‘master of none’. That sort of gibe is the fate of those who work in translational research. The clinical pharmacologist, if he or she wants to survive as a funded researcher, has to specialize in no more than one or two of these areas. Yet it is the ability to integrate across the spectrum that makes a clinical pharmacologist so valuable to industry, to regulators and to fellow clinicians. To train in an environment in which knowledge integration of this kind is part of the currency of the group is an enormous asset, but such multidisciplinary academic centres, always few, are declining in number. Often clinical pharmacologists who work in industry are better at such integration than their academic colleagues.
The problem is in funding such multidisciplinary centres. When I consulted him, Alastair Wood from Vanderbilt made some important points concerning this issue. He noted that government and others have made great use of our services – particularly as we have become older and more experienced. However, we are paid for our research success, not for the ‘service’ we provide in improving the nation’s health through rational drug use and rational drug development. The problem is that no one has produced a career track that lets clinical pharmacologists of the ‘district hospital’ variety contribute and develop a stable career. Any clinical manager faced with a choice between an endoscope-passing gastroenterologist and a clinical pharmacologist will choose the former. Trainees are rightly sensitive to ‘what they will be doing 10 years from now’ and, while they see a small cadre of us who have survived and actually done well as clinical pharmacologists, they worry about the ‘back-up plan’. So, while a GI fellow may aspire to be a tenured professor in another prestigious institution, his parents-in-law know that if he ‘gets real’ he can become a practising gastroenterologist and pay the bills.
Alasdair Breckenridge and Michael Rawlins expressed similar opinions, from a UK perspective, when I consulted them. Both identified an additional problem in the UK, where the wholesale adoption of problem-oriented clinical training for medical students has led to the almost complete disappearance of teaching of clinical pharmacology in many medical schools. Furthermore, although hospital consultant numbers in the UK have increased rapidly, the number of designated clinical pharmacology posts has fallen. In Sweden, the passing of the 1997 Drug Reform Act has entrenched the position of clinical pharmacologists on the drug and therapeutic committees that are mandated in each of the 21 regions of Sweden, but the equivalent function in the UK has largely been assumed by NICE, albeit headed by a clinical pharmacologist, Michael Rawlins.
These problems are real enough, but they must be overcome, simply because clinical pharmacology adds substantial value – to healthcare through promoting safe and effective use of drugs, to industry as a core discipline in the move from preclinical research to man, and to regulatory bodies that must decide whether, and on what terms, new medicines should be marketed. Medicines are too important for the health of the community to manage without clinical scientists who are experts in their properties and use. How can this be done?
The way ahead
There are several possible options for consolidation of the existing skill base and future expansion. We should try to promote all of them and enlist the major pharmaceutical companies to help argue the case.
(1) Developing experimental or translational medicine: the UK Clinical Research Collaboration
There is general agreement that the study of integrative processes in health and disease is an essential component of the postgenomic era and that the necessary skills are in short supply. Great opportunities have been unleashed, using novel methods of clinical measurement (imaging, biomarkers, biosensors, etc.) allied to genetic analysis. A large part of translational medicine will involve pharmacological interventions and a knowledge of clinical pharmacology will be a critical skill, both in academia and in industry. The UK Clinical Research Collaboration (UKCRC) has been established to address this problem. It has five core activities:
developing a comprehensive infrastructure to underpin clinical research;
building an expert research workforce to support clinical research;
developing incentives for research in the NHS;
streamlining the regulatory and governance environment;
developing a coordinated approach to research funding.
It is essential that part of the new UKCRC resource is used to strengthen clinical pharmacology in the UK. Following President Bush’s recent State of the Union address on the importance of research in retaining a competitive edge for the USA, there should be little doubt that research is the key to the future economic success of the West, and the pharmaceutical industry is clearly vital to UK prosperity.
(2) Service support of pharmacotherapy
The effective and safe use of medicines requires both considerable skill and knowledge of the information sources and regulatory requirements that surround them. Hospital drug and therapeutic committees, national regulatory agencies such as the Medicines and Healthcare products Regulatory Agency, and assessment bodies such as NICE all need experts in clinical pharmacology and therapeutics. The development of intelligent clinical decision support systems requires highly expert knowledge of drug doses and potential interactions and, increasingly, of pharmacogenetics. In the UK, the central importance of drug therapy for an ageing community needs to be argued with greater force to the key managers in the NHS and in the Treasury. This means that there must be NHS jobs for physicians who really understand drugs as drugs. Too often, drugs are regarded as simply a financial drain on an overspent budget, not as the key to well-being for many patients.
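As a small illustration of the decision support point above, the fragment below (Python; the drug names, dose limit and interaction note are entirely invented) shows the kind of dose and interaction check such systems embed. The value of any such system depends wholly on the quality of the drug knowledge encoded in it, which is exactly where clinical pharmacological expertise is needed:

# Toy decision support check: all names and values are hypothetical.
# Real systems draw on curated drug knowledge bases, renal function and,
# increasingly, pharmacogenetic data.
MAX_DAILY_DOSE_MG = {'drug_a': 40}
INTERACTIONS = {
    frozenset({'drug_a', 'drug_b'}):
        'drug_b inhibits the main route of elimination of drug_a; consider a dose reduction',
}

def check_prescription(drug, daily_dose_mg, current_drugs):
    warnings = []
    limit = MAX_DAILY_DOSE_MG.get(drug)
    if limit is not None and daily_dose_mg > limit:
        warnings.append(f'{drug}: {daily_dose_mg} mg/day exceeds the usual maximum of {limit} mg/day')
    for other in current_drugs:
        note = INTERACTIONS.get(frozenset({drug, other}))
        if note:
            warnings.append(f'{drug} + {other}: {note}')
    return warnings

print(check_prescription('drug_a', 60, ['drug_b']))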
(3) Re-integration of pharmacology
The disciplines that contribute to pharmacology (molecular pharmacology, systems pharmacology, safety pharmacology, clinical pharmacology, pharmacokinetics, drug metabolism, pharmacogenetics, experimental medicine, clinical trials, pharmacovigilance, etc.) have grown up in different academic departments and communication between them is suboptimal, even in industry. Is it an impossible dream that pharmacology will become an integrated science, encompassing all of these disciplines and reaching out into translational medicine? If it could be achieved, the science would be stronger and its value in supporting industry would be much greater. The clinical pharmacologist would then become the clinical arm of a multidisciplinary undertaking. Such posts would be scientifically very rewarding, but they would have to be closely coupled to translational medicine if they were to be attractive to bright physicians. The country would be able to fund only a small number of such centres, perhaps four or five, but their influence could be extended by outreach networks. This would secure an exciting future and fulfil a national need.
(4) Reviving the teaching of clinical pharmacology to medical students and practising doctors
At a time when hospital managers complain that many new medical graduates lack practical skills, including how to treat patients with widely used medicines, the medical schools have shot themselves, or rather their students, in the foot by virtually abolishing the teaching of clinical pharmacology and therapeutics. There are signs that a rethink is beginning, but it must be given momentum by continuing pressure.
(5) Contributing to personalized medicine
To many, personalized medicine is almost equated with pharmacogenetics, but this is a gross oversimplification. Individuals vary greatly in their responses to drugs, and for a large number of reasons. Diseases vary in severity and aetiology and often different medicines are used in severe forms of a disease from those that are used in milder forms. Combination treatment is commonplace in many chronic diseases, both to increase efficacy and, by reducing the dose of each component, to minimize adverse effects. Most patients taking medications are elderly and are often being treated with medicines for several different diseases. Rates of drug metabolism, and thus exposure, vary widely for both environmental reasons (induction and inhibition of drug-metabolizing enzymes) and genetic reasons. Intercurrent disease of the liver, kidneys and heart can affect drug exposure, tissue distribution and therapeutic responses more than genetic differences in metabolism or receptor polymorphisms. Genetics has a role in many of these factors and is dominant in a few. There is an enormous opportunity for clinical pharmacologists to contribute their know-how to the design of trials to test the best strategies for individualizing the management of common diseases. It should not be left to the geneticists.
(6) Shortening the training of clinical specialists
It is still simply unacceptable that if a young doctor in Britain wants to achieve specialist registration in general internal medicine, an organ speciality and clinical pharmacology, as many considering clinical pharmacology as a career would probably wish to do, it is likely to take 7 or 8 years. The medical community and the Department of Health should hang their heads in shame that this situation has been allowed to develop. However, is our own Specialist Advisory Committee (SAC) in clinical pharmacology itself guiltless? The problem needs to be addressed not by adding up from the bottom but by counting down from the top: decide what the maximum reasonable period of interdisciplinary specialist training is, say 5 years, and then agree what should be included in the training programme.
Acknowledgments
Thanks to the following clinical pharmacologists and other medical scientists who helped me greatly with this article; the errors and omissions are my responsibility: Daniel Azarnoff, James Banting, Alasdair Breckenridge, Morris Brown, Arnold Burgen, Christie Carrico (ASPET), Stephen E. Clarke, Donald Davies, Garrett Fitzgerald, Desmond Laurence, Buhm Soon Park (Office of NIH History), Michael Rawlins, Malcolm Rowland, Albert Sjoerdsma, Folke Sjöqvist, Patrick Vallance and Alasdair Wilson.