EMBO Reports
Editorial. 2004 Nov; 5(11): 1016–1020. doi: 10.1038/sj.embor.7400284

Reductionism and complexity in molecular biology

Marc HV Van Regenmortel

Summary

Scientists now have the tools to unravel biological complexity and overcome the limitations of reductionism


The reductionist method of dissecting biological systems into their constituent parts has been effective in explaining the chemical basis of numerous living processes. However, many biologists now realize that this approach has reached its limit. Biological systems are extremely complex and have emergent properties that cannot be explained, or even predicted, by studying their individual parts. The reductionist approach—although successful in the early days of molecular biology—underestimates this complexity and therefore has an increasingly detrimental influence on many areas of biomedical research, including drug discovery and vaccine development.

The claim made by Francis Crick (1966) that “The ultimate aim of the modern movement in biology is to explain all biology in terms of physics and chemistry” epitomizes the reductionist mindset that has pervaded molecular biology for half a century. The theory is that because biological systems are composed solely of atoms and molecules, without the influence of 'alien' or 'spiritual' forces, it should be possible to explain them using the physicochemical properties of their individual components, down to the atomic level. The most extreme manifestation of the reductionist view is the belief that is held by some neuroscientists that consciousness and mental states can be reduced to chemical reactions that occur in the brain (Bickle, 2003; Van Regenmortel, 2004).

Reductionists analyse a larger system by breaking it down into pieces and determining the connections between the parts. They assume that the isolated molecules and their structure have sufficient explanatory power to provide an understanding of the whole system. As the value of methodological reductionism has been particularly evident in molecular biology, it might seem odd that, in recent years, biologists have become increasingly critical of the idea that biological systems can be fully explained using physics and chemistry. Their situation is similar to that of an art student asking about the significance of Michelangelo's David and being told that it is just a piece of marble hewn into a statue in 1504. This is certainly true, but it evades pertinent questions about the anatomy of the statue, its creation at the beginning of the Florentine Renaissance, its significance in European art history, or even the scars on its left arm that were plastered after it was broken in three places during the anti-Medici revolt of 1527. In an analogous way, the biology, development, physiology, behaviour or fate of a human being cannot be adequately explained along reductionist lines that consider only chemical composition. Anti-reductionists therefore regard biology as an autonomous discipline that requires its own vocabulary and concepts that are not found in chemistry and physics. Both sides have discussed their standpoints at several recent international meetings (Bock & Goode, 1998; Van Regenmortel & Hull, 2002; Van Regenmortel, 2004) and the main disagreement between the protagonists is about what constitutes a good scientific explanation.

The most extreme manifestation of the reductionist view is the belief that is held by some neuroscientists that consciousness and mental states can be reduced to chemical reactions that occur in the brain

Today, it is clear that the specificity of a complex biological activity does not arise from the specificity of the individual molecules that are involved, as these components frequently function in many different processes. For instance, genes that affect memory formation in the fruit fly encode proteins in the cyclic AMP (cAMP) signalling pathway that are not specific to memory. It is the particular cellular compartment and environment in which a second messenger, such as cAMP, is released that allow a gene product to have a unique effect. Biological specificity results from the way in which these components assemble and function together (Morange, 2001a). Interactions between the parts, as well as influences from the environment, give rise to new features, such as network behaviour (Alm & Arkin, 2003), which are absent in the isolated components.

Consequently, 'emergence' has appeared as a new concept that complements 'reduction' when reduction fails (Van Regenmortel, 2004). Emergent properties resist any attempt at being predicted or deduced by explicit calculation or any other means. In this regard, emergent properties differ from resultant properties, which can be predicted from lower-level information. For instance, the resultant mass of a multi-component protein assembly is simply equal to the sum of the masses of each individual component. However, the way in which we taste the saltiness of sodium chloride is not reducible to the properties of sodium and chlorine gas. An important aspect of emergent properties is that they have their own causal powers, which are not reducible to the powers of their constituents. For instance, the experience of pain can alter human behaviour, but the lower-level chemical reactions in the neurons that are involved in the perception of pain are not the cause of the altered behaviour, as the pain itself has causal efficacy. According to the principles of emergence, the natural world is divided into hierarchies that have evolved over evolutionary time (Kim, 1999; Morowitz, 2002). Reductionists advocate the idea of 'upward causation' by which molecular states bring about higher-level phenomena, whereas proponents of emergence accept 'downward causation' by which higher-level systems influence lower-level configurations (Kim, 1999).
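
To put the distinction in a simple worked form: for an assembly of n components with masses m_1, ..., m_n, the resultant mass is given by the trivial sum

\[ M_{\text{assembly}} = \sum_{i=1}^{n} m_i , \]

whereas no comparable formula can be written down in advance for an emergent property such as saltiness or the experience of pain.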

Anti-reductionists therefore regard biology as an autonomous discipline that requires its own vocabulary and concepts that are not found in chemistry and physics

Although biology has always been a science of complex systems, complexity itself has only recently acquired the status of a new concept, partly because of the advent of electronic computing and the possibility of simulating complex systems and biological networks using mathematical models (Emmeche, 1997; Alm & Arkin, 2003). Because complex systems have emergent properties, it should be clear from the preceding discussion that their behaviour cannot be understood or predicted simply by analysing the structure of their components. The constituents of a complex system interact in many ways, including negative feedback and feed-forward control, which lead to dynamic features that cannot be predicted satisfactorily by linear mathematical models that disregard cooperativity and non-additive effects. In view of the complexity of informational pathways and networks, new types of mathematics are required for modelling these systems (Aderem & Smith, 2004).
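
To make this point concrete, the minimal sketch below (an illustrative toy model, not taken from any of the cited studies) integrates a two-component negative-feedback motif in which a protein A activates a protein B while B represses A through a cooperative, Hill-type term; all names, rate constants and the Hill coefficient are assumed values chosen purely for illustration.

```python
# Illustrative toy model (assumed parameters): A activates B, and B represses
# A through a nonlinear, cooperative (Hill-type) term.
def simulate(steps=5000, dt=0.01,
             k1=1.0, k2=1.0, d1=0.1, d2=0.1, K=1.0, n=4):
    A, B = 0.0, 0.0
    trajectory = []
    for i in range(steps):
        dA = k1 / (1.0 + (B / K) ** n) - d1 * A  # nonlinear repression of A by B
        dB = k2 * A - d2 * B                     # activation of B by A, with decay
        A += dA * dt                             # simple Euler integration step
        B += dB * dt
        trajectory.append((i * dt, A, B))
    return trajectory

if __name__ == "__main__":
    t, A, B = simulate()[-1]
    print(f"t = {t:.1f}  A = {A:.3f}  B = {B:.3f}")
```

With these assumed parameters the motif approaches its steady state through damped oscillations; whether such behaviour appears at all depends on the strength and cooperativity of the feedback, which is precisely the kind of non-additive effect that a linear, component-by-component description misses.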

Another essential property of complex biological systems is their robustness (Csete & Doyle, 2002; Kitano, 2002). Robust systems tend to be impervious to changes in the environment because they are able to adapt and have redundant components that can act as a backup if individual components fail. A further characteristic of complex systems is their modularity (Alm & Arkin, 2003): subsystems are physically and functionally insulated so that failure in one module does not spread to other parts with possibly lethal consequences. This modularity, however, does not prevent different compartments from communicating with each other (Weng et al, 1999). An additional peculiarity of complex biological systems is that they are open—that is, they exchange matter and energy with their environment—and are therefore not in thermodynamic equilibrium. In the past, the reductionist agenda of molecular biologists has made them turn a blind eye to emergence, complexity and robustness, which has had a profound influence on biological and biomedical research during the past 50 years. In the following sections, I describe some of the harmful effects of reductionist thinking in drug-discovery programmes and vaccinology.

The number of new drugs that are approved by the US Food and Drug Administration has declined steadily from more than 50 drugs per annum 10 years ago to fewer than 20 in 2002. This worrying trend has persisted despite continuous mergers and acquisitions in the industry and annual research and development expenditures of approximately US$30 billion. Commentators have attributed this poor performance to a range of institutional causes, such as inefficient project management, increased regulatory requirements, a decline in the clinical science that deals with whole organisms, an overemphasis on technology-driven research and an unwillingness to concentrate on products that are not likely to generate sales of at least US$0.5–1.0 billion per annum (Drews, 2003; Gershell & Atkins, 2003; Kubinyi, 2003; Miska, 2003). Furthermore, it seems that the new strategies of drug discovery, which are based on high-throughput screening, combinatorial chemistry, genomics, proteomics and bioinformatics, are not bringing forth the new products that were anticipated (Kubinyi, 2003; Glassman & Sun, 2004). Knowledge of the genome sequences of humans and various pathogenic agents has led to the identification of only a limited number of new drug targets (Drews, 2003). Moreover, Glassman & Sun (2004) listed several biotechnological projects that have, so far, failed to live up to expectations, including gene therapy, stem-cell research, antisense technology and cancer vaccines. A common problem with many of these innovations is that their potential risks and unwanted side effects tended to be overlooked initially, as was the case for gene therapy (Williams & Baum, 2003).

It remains true that human disease is best studied in human subjects

However, there is probably a more fundamental reason for these failures: namely, that most of these approaches have been guided by unmitigated reductionism. As a result, the complexity of biological systems, whole organisms and patients tends to be underrated (Horrobin, 2001). Most human diseases result from the interaction of many gene products, and we rarely know all of the genes and gene products that are involved in a particular biological function. Nevertheless, to achieve an understanding of complex genetic networks, biologists tend to rely on experiments that involve single gene deletions. Knockout experiments in mice, in which a gene that is considered to be essential is inactivated or removed, are widely used to infer the role of individual genes. In many such experiments, the knockout is found to have no effect whatsoever, despite the fact that the gene encodes a protein that is believed to be essential. In other cases, the knockout has a completely unexpected effect (Morange, 2001a). Furthermore, disruption of the same gene can have diverse effects in different strains of mice (Pearson, 2002). Such findings question the wisdom of extrapolating data that are obtained in mice to other species. In fact, there is little reason to assume that experiments with genetically modified mice will necessarily provide insights into the complex gene interactions that occur in humans (Horrobin, 2003).

Vaccination is [...] firmly anchored in the biological realm and cannot be reduced to the level of chemistry

The disappointing results of knockout experiments are partly caused by gene redundancy and pleiotropy, and the fact that gene products are components of pathways and networks in which genes acting in parallel systems can compensate for missing ones (Morange, 2001b). As many factors simultaneously influence the behaviour of a system, one part might function only in the presence of other components. The essential contribution of other genes in achieving a particular function will therefore be missed, which will further encourage the reductionist view that a single gene has adequate explanatory power (Van Regenmortel, 2004).

Another factor that is responsible for disappointing results in drug discovery is the excessive reliance on in vitro systems. Many researchers claim that in vitro cell cultures, or even computer models, might be able to reflect accurately and reliably the functioning of an intact human. There is considerable evidence, however, for a lack of congruence between in vitro assays and the in vivo systems that they attempt to model. There is no doubt that pharmaceutical research is hampered by insufficient whole-animal studies. Furthermore, even animal models of human disease are often inadequate and are a poor surrogate for clinical studies in humans. It remains true that human disease is best studied in human subjects (Horrobin, 2003).

Another defect of reductionist thinking is that it analyses complex network interactions in terms of simple causal chains and mechanistic models. This overlooks the fact that any clinical state is the end result of many biochemical pathways and networks, and fails to appreciate that diseases result from alterations to complex systems of homeostasis. Reductionists favour causal explanations that give undue explanatory weight to a single factor. By contrast, many biologists favour functional explanations for a structure or cellular process, and emphasize the selective advantage of these features during evolutionary history—after all, evolution selects for function, not structure. Functional explanations are more useful for understanding complex biological systems with many interactions than are causal explanations that give unwarranted importance to a single factor (Van Regenmortel, 2002). Lewontin (2000) also stressed the reciprocal relationships between genes, organisms and their environment, in which all three elements act as both causes and effects.

Another area of biomedical research that has been strongly influenced by reductionist thinking is the so-called rational design of vaccines, which is based on the assumption that the principles of structure-based drug design are applicable to vaccines. However, this disregards the fact that the relationship between a drug and its receptor or target molecule is fairly specific, whereas the relationship between an antigen and an antibody is much less restricted. The binding site of an immunoglobulin molecule comprises around 50 hypervariable residues that together make up the complementarity determining regions (CDRs). Approximately 10–15 of these residues usually participate in the interaction with an individual epitope, but the full complement of all 50 hypervariable residues does not constitute an actual binding site for any epitope. This means that around 35 CDR residues can potentially bind to other epitopes that bear little or no resemblance to the first, which explains the extensive multispecificity of immunoglobulins and the occurrence of many different paratopes or binding sites in each molecule. The ability of an immunoglobulin molecule to bind various antigenic structures is further enhanced by the considerable flexibility of the CDRs, which allows the binding site to adopt various conformations (James et al, 2003). The binding reaction involves a combination of conformational selection and induced fit (Bosshard, 2001; Goh et al, 2004), and entails a mutual adaptation of the two interacting partners (Westhof et al, 1984; Tainer et al, 1985).

In recent years, rational design has become fashionable in vaccine research as opposed to empirical discovery (Van Regenmortel, 2000). The term 'rational' implies that research makes extensive use of molecular data and structural knowledge, whereas the term 'design' indicates that the biological activity of the developed products is predictable. Rational design is therefore presented as a more scientific approach than the empirical 'trial-and-error' screening and selection of molecules. The belief that a molecular-design strategy will be successful for developing new vaccines is typical of the reductionist mindset, as it assumes that a biological phenomenon, such as protection against infection, can be reduced to the level of chemistry. However, there are many reasons why a reductionist approach to vaccine development is unlikely to succeed.

The impossibility of reducing biology to chemistry is responsible for the lack of success in developing structure-based vaccines

First, the antigenic determinants, or epitopes, of an infectious agent are emergent entities that are defined by their specific antibody partners and exist only in the context of the immune system. Epitopes and paratopes are not intrinsic features of an antigen and an immunoglobulin molecule, respectively, and cannot be identified independently of a binding reaction. Furthermore, antigenic and immunoglobulin-combining sites are fuzzy recognition sites that consist of several individual epitopes and paratopes (Van Regenmortel, 1999). Whereas a molecular-design strategy for improving antigenic reactivity is applicable to a single pair of interacting molecules—for instance, one epitope and a monoclonal antibody—it cannot be applied to the numerous epitopes that are involved in the protective immune response to a given pathogen.

Second, eliciting antibodies that simply bind to the pathogen is of little value in vaccine development. What are required are antibodies that have a functional activity; namely, the ability to neutralize the infectious agent in vivo. Our ability to predict the function of proteins is limited and our capacity to predict the neutralizing activity of an antibody from its chemical structure is practically nonexistent (Van Regenmortel, 2000, 2002). Vaccination and protective immunity have a meaning only at the level of the whole organism: molecules, tissues and organs cannot be vaccinated. Vaccination is therefore firmly anchored in the biological realm and cannot be reduced to the level of chemistry.

Third, despite an unprecedented global research effort, no vaccine against human immunodeficiency virus (HIV) is in sight (Burton & Moore, 1998). Although a reductionist approach to HIV-vaccine development continues to be advocated (Burton et al, 2004), there is no evidence that this will be effective. This approach involves determining the atomic structure of monoclonal antibodies against HIV antigens using X-ray crystallography, with the aim of elucidating the structure of the HIV epitopes. The justification for these studies is the assumption that knowledge of the structure of the epitopes that are recognized by neutralizing antibodies will help to design an effective HIV vaccine. The X-ray crystallographic analysis of broadly reactive HIV-neutralizing antibodies might indeed determine the structure of epitopes inside antibody-binding pockets, but it does not tell us how to use immunization to induce antibodies with the same specificity (Van Regenmortel, 2002; Burton et al, 2004). The structures of epitopes and paratopes that are present in a complex represent the final conformation at the end of a dynamic process of conformational selection, induced fit and somatic mutation. It is not possible to infer which epitope conformation in the immunogen was ultimately responsible for the appearance of neutralizing antibodies.

In fact, immunogenicity depends on the biological potential of the host that is being immunized; in other words, on extrinsic factors, such as the immunoglobulin gene repertoire, self-tolerance, the production of cytokines, and various cellular and regulatory mechanisms. Unfortunately, we do not know how to control these aspects of the immune system to produce neutralizing antibodies (Van Regenmortel, 2001, 2002). Further difficulties are that antibodies act in a collective manner and that the neutralizing synergy between various antibodies cannot be reduced to the simple additive effect of individual molecules (Zeder-Lutz et al, 2001). Sometimes the synergy occurs because the binding of one antibody leads to a conformational change in the antigen, which then increases its accessibility for other antibodies.

The reductionist approach of using peptide fragments of a virus protein for vaccination purposes has also achieved little success (Van Regenmortel, 2001). The peptides that are used are either short sequences, which are known as continuous epitopes of the viral protein (Van Regenmortel, 1999), or so-called mimotopes, which are peptides that show little or no sequence similarity to any of the viral proteins but are believed to mimic a discontinuous epitope of the virus (Meloen et al, 2000). Discontinuous epitopes are made up of amino-acid residues from distant regions of the viral protein, which are brought together by the folding of the peptide chain. As antibodies harbour many paratope subsites—each able to bind to related or unrelated epitopes—it remains possible that the mimotope binds to a different subsite to that which interacts with the discontinuous epitope that induced the antibody. In fact, the extent of mimicry achieved by a mimotope peptide might be so limited that it might not be able to elicit antibodies that recognize and neutralize the virus (Van Regenmortel, 1999). Attempts have been made to reconstitute discontinuous epitopes by synthesis or by the selection of phage-displayed peptides. However, although such reconstituted epitopes might bind to viral antibodies, they are rarely able to elicit protective antibodies (Enshell-Seijffers et al, 2003; Oomen et al, 2003; Villen et al, 2004).

Extreme holism, according to which everything is connected, certainly does not provide a methodological alternative

Once more, it is the failure to distinguish antigenicity—that is, antigenic reactivity—from immunogenicity that leads to the unwarranted expectation that it should be relatively straightforward to design effective peptide-based synthetic vaccines. The impossibility of reducing biology to chemistry is responsible for the lack of success in developing structure-based vaccines. Moreover, it is safe to assume that vaccine development will continue to rely on the same empirical approaches that have been used successfully in the past (Van Regenmortel, 2001, 2002).

In light of these failures, it has become popular to criticize the reductionist approach that is used in the study of biological systems (Lewontin, 2000), although it is more difficult to determine what should be done instead. Extreme holism, according to which everything is connected, certainly does not provide a methodological alternative. What are needed are new experimental techniques for investigating the unique complexity of biological systems that results from the bewildering diversity of interactions and regulatory networks. Recent developments in high-throughput microarrays, nanotechnologies, bioinformatics and systems biology are providing the data that molecular biologists need to simulate the behaviour of complex biological networks and systems (Kitano, 2002; Alm & Arkin, 2003; Aderem & Smith, 2004; Blake, 2004). If these simulations make it possible to predict the reactions of a system, we will have achieved some degree of understanding, even if we cannot identify the innumerable causal interactions that are involved (Berger, 1998).

Gene ontologies that provide a standardized vocabulary for data exploration (Blake, 2004), and software programmes, such as Cytoscape (Aderem & Smith, 2004), which create visual representations of biological systems, make it possible to handle enormous amounts of data and build realistic models of complex systems. An important present limitation is the paucity of quantitative information about the kinetic parameters that underlie protein–protein and protein–DNA interactions (Alm & Arkin, 2003). However, it is undeniable that molecular biologists now have at their disposal the tools that are needed to unravel biological complexity and overcome the limitations of reductionism. Given our failures in developing drugs and vaccines against a wide range of debilitating diseases, this move away from the reductionist viewpoint and toolset is a high priority for both biological and biomedical research.
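
As an illustration of the kind of quantitative kinetic information that is still scarce, the sketch below integrates the mass-action kinetics of a single protein–protein binding reaction, A + B ⇌ AB. The rate constants and concentrations are arbitrary placeholder values rather than measured data; realistic network models would require such parameters to be determined experimentally for every interaction.

```python
# Illustrative mass-action kinetics for one protein-protein interaction,
# A + B <-> AB; all rate constants and concentrations are placeholder values.
def simulate_binding(A_total=1e-6, B_total=1e-6,   # total concentrations (M)
                     k_on=1e5, k_off=1e-3,         # association (M^-1 s^-1) and dissociation (s^-1) rates
                     dt=1e-3, steps=200_000):
    """Euler integration of d[AB]/dt = k_on*[A]*[B] - k_off*[AB]."""
    AB = 0.0
    for _ in range(steps):
        A_free = A_total - AB          # mass conservation for A
        B_free = B_total - AB          # mass conservation for B
        AB += (k_on * A_free * B_free - k_off * AB) * dt
    return AB

if __name__ == "__main__":
    AB = simulate_binding()
    K_d = 1e-3 / 1e5                   # dissociation constant K_d = k_off / k_on
    print(f"complex after ~200 s: {AB:.2e} M (K_d = {K_d:.0e} M)")
```

Even such a minimal calculation is only informative once k_on and k_off have actually been measured, which is why the paucity of kinetic data currently limits quantitative modelling of whole networks.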

References

1. Aderem A, Smith KD (2004) A systems approach to dissecting immunity and inflammation. Semin Immunol 16: 55–67
2. Alm E, Arkin AP (2003) Biological networks. Curr Opin Struct Biol 13: 193–202
3. Berger R (1998) Understanding science: why causes are not enough. Philos Sci 65: 306–332
4. Bickle J (2003) Philosophy and Neuroscience: a Ruthlessly Reductive Account. Kluwer Academic, Dordrecht, The Netherlands
5. Blake J (2004) Bio-ontologies: fast and furious. Nat Biotechnol 22: 773–774
6. Bock G, Goode J (1998) The Limits of Reductionism in Biology. Novartis Foundation Symposium no 213. Wiley, Chichester, UK
7. Bosshard HR (2001) Molecular recognition by induced fit: how fit is the concept? News Physiol Sci 16: 171–173
8. Burton DR, Moore JP (1998) Why do we not have an HIV vaccine and how can we make one? Nat Med 4: 495–498
9. Burton DR, Desrosiers RC, Doms RW, Koff WC, Kwong PD, Moore JP, Nabel GJ, Sodroski J, Wilson IA, Wyatt RT (2004) HIV vaccine design and the neutralizing antibody problem. Nat Immunol 5: 233–236
10. Crick FHC (1966) Of Molecules and Men. University of Washington Press, Seattle, WA, USA
11. Csete ME, Doyle JC (2002) Reverse engineering of biological complexity. Science 295: 1664–1669
12. Drews J (2003) Strategic trends in the drug industry. Drug Discov Today 8: 411–420
13. Emmeche C (1997) Aspects of complexity in life and science. Philosophica 59: 41–68
14. Enshell-Seijffers D, Denisov D, Groisman B, Smelyanski L, Meyuhas R, Gross G, Denisova G, Gershoni JM (2003) The mapping and reconstitution of a conformational discontinuous B-cell epitope of HIV-1. J Mol Biol 334: 87–101
15. Gershell LJ, Atkins JH (2003) A brief history of novel drug discovery technologies. Nat Rev Drug Discov 2: 321–327
16. Glassman RH, Sun AY (2004) Biotechnology: identifying advances from the hype. Nat Rev Drug Discov 3: 177–183
17. Goh CS, Milburn D, Gerstein M (2004) Conformational changes associated with protein–protein interactions. Curr Opin Struct Biol 14: 104–109
18. Horrobin DF (2001) Realism in drug discovery: could Cassandra be right? Nat Biotechnol 19: 1099–1100
19. Horrobin DF (2003) Modern biomedical research: an internally self-consistent universe with little contact with medical reality? Nat Rev Drug Discov 2: 151–154
20. James LC, Roversi P, Tawfik DS (2003) Antibody multispecificity mediated by conformational diversity. Science 299: 1362–1367
21. Kim J (1999) Making sense of emergence. Philos Stud 95: 3–36
22. Kitano H (2002) Systems biology: a brief overview. Science 295: 1662–1664
23. Kubinyi H (2003) Drug research: myths, hype and reality. Nat Rev Drug Discov 2: 665–668
24. Lewontin R (2000) The Triple Helix: Gene, Organism and Environment. Harvard University Press, Cambridge, MA, USA
25. Meloen RH, Puyk WC, Slootstra JW (2000) Mimotopes: realisation of an unlikely concept. J Mol Recognit 13: 352–359
26. Miska D (2003) Biotech's twentieth birthday blues. Nat Rev Drug Discov 2: 231–233
27. Morange M (2001a) A successful form of reductionism. The Biochemist 23: 37–39
28. Morange M (2001b) The Misunderstood Gene. Harvard University Press, Cambridge, MA, USA
29. Morowitz HJ (2002) The Emergence of Everything: How the World Became Complex. Oxford University Press, Oxford, UK
30. Oomen CJ, Hoogerhout P, Bonvin AMJJ, Kuipers B, Brugghe H, Timmermans H, Haseley SR, Van Alphen L, Gros P (2003) Immunogenicity of peptide-vaccine candidates predicted by molecular dynamics simulations. J Mol Biol 328: 1083–1089
31. Pearson H (2002) Surviving a knockout blow. Nature 415: 8–9
32. Tainer JA, Getzoff ED, Paterson Y, Olson AJ, Lerner RA (1985) The atomic mobility component of protein antigenicity. Annu Rev Immunol 3: 501–535
33. Van Regenmortel MHV (1999) In: Van Regenmortel MHV, Muller S (eds), Synthetic Peptides as Antigens, pp 1–78. Elsevier, Amsterdam, The Netherlands
34. Van Regenmortel MHV (2000) Are there two distinct research strategies for developing biologically active molecules: rational design and empirical selection? J Mol Recognit 13: 1–4
35. Van Regenmortel MHV (2001) Pitfalls of reductionism in the design of peptide-based vaccines. Vaccine 19: 2369–2374
36. Van Regenmortel MHV (2002) Reductionism and the search for structure–function relationships in antibody molecules. J Mol Recognit 15: 240–247
37. Van Regenmortel MHV (2004) Biological complexity emerges from the ashes of genetic reductionism. J Mol Recognit 17: 145–148
38. Van Regenmortel MHV, Hull D (2002) Promises and Limits of Reductionism in the Biomedical Sciences. Wiley, Chichester, UK
39. Villen J, de Oliveira E, Nunez JI, Molina N, Sobrino F, Andreu D (2004) Towards a multisite synthetic vaccine to foot-and-mouth disease: addition of a discontinuous site peptide mimic increases the neutralization response in immunized animals. Vaccine 22: 3523–3529
40. Weng G, Bhalla US, Iyengar R (1999) Complexity in biological signalling systems. Science 284: 92–96
41. Westhof E, Altschuh D, Moras D, Bloomer AC, Mondragon A, Klug A, Van Regenmortel MHV (1984) Correlation between segmental mobility and the location of antigenic determinants in proteins. Nature 311: 123–126
42. Williams DA, Baum C (2003) Gene therapy: new challenges ahead. Science 302: 400–401
43. Zeder-Lutz G, Hoebeke J, Van Regenmortel MHV (2001) Differential recognition of epitopes present on monomeric and oligomeric forms of gp160 glycoprotein of human immunodeficiency virus type 1 by human monoclonal antibodies. Eur J Biochem 268: 2856–2866
