Abstract
This article examines two influential authors who have addressed the interface between the fields of chemistry and physics and have reached opposite conclusions about whether or not emergence and downward causation represent genuine phenomena. While McLaughlin concludes that emergence is impossible in the light of quantum mechanics, Hendry regards issues connected with the status of molecular structure as supporting emergence. The present author suggests that one should not be persuaded by either of these arguments and pleads for a form of agnosticism over the reality of emergence and downward causation until further studies have been carried out.
Keywords: emergence, top-down causation, chemistry, decoherence
1. Introduction
The question of downward causation and especially that of emergence have become increasingly popular in recent years. More and more philosophers and even researchers in the hard sciences are willing to eschew a strictly reductionist approach and seem willing to embrace the view that emergence may be a real phenomenon [1]. Nevertheless, there is complete disagreement as to what emergence might be, and how it should be recognized. It is not an exaggeration to say that everybody wants to talk of emergentism but nobody knows what to say about it. In this respect, the Templeton Foundation meeting held at the Royal Society has been illuminating and yet also frustrating because of a lack of anything approaching a consensus as to what even the right questions might be about emergentism.
The present article is intended as a contribution to this preliminary debate as to the possible existence of emergence and downward causation from the perspective of the philosophy of chemistry, a discipline that has generally been unrepresented in the wider debate [2]. The article will consist of two parts. First, we conduct a critical examination of an article by the Rutgers University philosopher Brian McLaughlin. McLaughlin essentially argues that emergentism, of a form that will be explained, does not stand any possible chance of being a reality.
The second part will consist of an analysis of the papers of Robin Hendry, also a philosopher, who is at the University of Durham in the UK and who has written widely on the philosophy of chemistry. According to Hendry, a certain technical issue in molecular quantum mechanics opens the door for the existence of emergence.
These two opposing views will be considered by the present author, who will argue that these issues have been insufficiently explored and that, as things stand, one is led to a position of agnosticism as to whether emergence and downward causation constitute genuine phenomena.
2. McLaughlin's paper and British emergentism
The notion of emergentism is by no means new. In the nineteenth and early twentieth centuries, a succession of British philosophers were drawn to this idea and wrote several papers and books with the aim of setting forth their positions. Perhaps the most influential among them was C. D. Broad (1887–1971).
McLaughlin [3] has written a frequently cited paper in which he seeks to give an overview of the philosophical school that he dubs ‘British Emergentism’, which includes the work of J. S. Mill, Bain, Morgan and most recently C. D. Broad.
Emergentists held, rather uncontroversially, that the natural kinds at each scientific level are wholly composed of kinds of lower levels, and ultimately of kinds of elementary particles. However, they also maintained, according to McLaughlin, that,
some special science kinds from each special science can be wholly composed of the types of structures of material particles that endow the kinds in question with fundamental causal powers.
(McLaughlin [3], pp. 50–51)
These powers were said to ‘emerge’ from the types of structures in question. One example given repeatedly by the British emergentists was that of chemical elements, which have the power to bond to other elements by virtue of their internal microscopic structures. According to the emergentists, when these causal powers operate, they bring about the movement of particles. The striking part, as McLaughlin calls it, about the emergentist claim is that the kinds pertaining to a special science, such as chemistry, are said to have the power to influence microscopic motions of particles in ways that are not anticipated by the laws governing the microscopic particles. Emergentism is thus committed to the possibility of ‘downward causation’.
For example, emergentists such as Broad believed that chemical bonding represents an example of emergence and the operation of downward causation. Indeed, he went as far as to declare that the situation with which we are faced in chemistry
… seems to offer the most plausible example of emergent behaviour.
(Broad [4], p. 65)
Broad also stressed that if mechanistic chemistry were true it should be possible to deduce the chemical behaviour of any element from the number and arrangement of its constituent microscopic particles, without needing to observe a sample of the element in question, something that is clearly not the case. Against this position, McLaughlin maintains that the coming of quantum mechanics and the quantum mechanical theory of bonding has rendered these emergentist claims untenable. In fact, he is very categorical about the prospects for modern-day emergentism:
it is, I contend, no coincidence that the last major work in the British Emergentist tradition coincided with the advent of quantum mechanics. Quantum mechanics and the various scientific advances made possible are arguably what led to British Emergentism's downfall… quantum mechanical explanations of chemical bonding in terms of electromagneticism [sic], and various advances this made possible in molecular biology and genetics—for example the discovery of the structure of DNA—make the main doctrines of British emergentism, so far as the chemical and the biological are concerned at least, seem enormously implausible. Given the advent of quantum mechanics and these other scientific theories, there seems not a scintilla of evidence that there are emergent causal powers or laws in the sense in question… and there seems not a scintilla of evidence that there is downward causation from the psychological, biological and chemical levels.
(McLaughlin [3], pp. 54–55)
These anti-emergentist claims can be criticized on several different fronts. Granted, the quantum mechanical theory of bonding that McLaughlin appeals to does provide a more fundamental account of chemical bonding than the classical, or Lewis, theory. Nevertheless, it does not permit one to predict in advance the behaviour of elements or the properties that a compound might have once any two or more elements have combined together. Moreover, it is not as though there was a complete absence of any theoretical understanding of chemical bonding before the quantum theory was introduced. Lewis's theory, whereby covalent bonds occur when elements share pairs of electrons, gave a good account of the bonding in most compounds. In spite of these comments, I do not believe that one should draw the conclusion that emergence is necessarily a genuine phenomenon, because future theories of chemical bonding might very well do a better job of predicting the properties of compounds from those of their component elements. It is important to distinguish the apparent emergence that might occur for theoretical reasons, having to do with our current theories of chemistry and physics, from the deeper claims of what one might call ontological emergence.
Admittedly, the quantum mechanical theory (devised by Heitler, London, Pauling, Mulliken and others) goes beyond this ‘homely picture’ of pairs of electrons mysteriously holding atoms together. However, Lewis's concept of bonds as pairs of electrons is not thereby refuted but rather given a deeper physical mechanism. According to the quantum mechanical account, electrons are regarded as occupying bonding and anti-bonding orbitals. To a first approximation, if the number of bonding electrons exceeds the number of anti-bonding electrons, then the molecule is predicted to be a stable one. Moreover, the electrons occupy these orbitals, two by two, in pairs. The deeper understanding lies in the fact that the electrons are regarded as spinning in opposite directions within all such pairs. Indeed, it is the exchange energy associated with electron spin that accounts quantitatively for the bonding in any compound, and it is in this last respect that the quantum mechanical theory goes beyond Lewis's theory.
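For the reader unfamiliar with the electron-counting rule mentioned above, the standard textbook bond-order formula (included here purely by way of illustration, and not specific to any of the authors discussed) makes the idea concrete:

\[
\text{bond order} \;=\; \tfrac{1}{2}\bigl(n_{\text{bonding}} - n_{\text{antibonding}}\bigr).
\]

Thus N2, with eight bonding and two antibonding valence electrons, has bond order (8 − 2)/2 = 3 and is predicted to be stable, whereas a hypothetical He2, with two bonding and two antibonding electrons, has bond order zero and is predicted not to form.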
The discovery of the structure of DNA was driven almost entirely by the X-ray diffraction evidence that became available to Crick and Watson, courtesy of Wilkins and Franklin. It did not rest on any quantum mechanical calculations or indeed any insights provided by the theory. It involved model building and cardboard cutouts of bases. McLaughlin does not say anything whatsoever about pre-quantum mechanical theories of bonding, except to imply that they were completely inadequate. At the same time, he suggests that the quantum mechanical theory has provided a complete answer to the question of bonding. Neither of these extreme positions is correct.
Quantum mechanics cannot yet predict what compounds will actually form. Broad's complaint about the inability of mechanistic or classical chemistry to predict the properties of elements, or the outcome of chemical reactions between any two given elements, remains unanswered to this day. Why then should we accept McLaughlin's claim that pioneer quantum chemistry, or even today's version of the theory of bonding, can so decisively deal a death blow to any notions of emergence and downward causation?
3. Hendry's espousal of downward causation
Having just argued that McLaughlin has not in fact ruled out the emergence of chemistry from physics and the operation of downward causation between these two levels, I now examine the work of Hendry, who takes the opposite view. Hendry argues that, according to the current state of quantum chemistry, there is at least as much evidence for emergence as there is for the ontological reduction of chemistry, and he actually goes a little further in favouring emergence.
First of all, Hendry has done a splendid job of distinguishing between the epistemological or theoretical reduction of chemistry and its ontological reduction,
all this leads to an impasse—temperamental reductionists and non-reductionists can agree that classical inter-theoretic reductions of chemistry are not currently available, but will differ in how they interpret the situation. As long as reduction is seen as a dated inter-theoretic achievement, however, the issue is essentially future directed—both sides must wait and see, even if they would bet different ways.
(Hendry [5], p. 184)
Perhaps the answer concerns their different underlying metaphysical views. To identify those views requires that we separate the inter-theoretic and the metaphysical aspects of the reduction debate: the former concern the explanatory relationships between theories and the latter the relationships between their subject matter. This separation is necessary because there are reasons why the inter-theoretic reduction of a special science may fail that are quite independent of any metaphysical relationship between physics and the special sciences. The reasons are twofold, and can be illustrated by brief reflections on (i) how physical theories are applied to complex physical systems and (ii) the nature of scientific disciplines.
(Hendry [5], p. 184)
The first reason that Hendry gives concerns the application of a theory like quantum mechanics to actual cases. Whereas the theory is highly abstract, any case in question is rather specific and necessitates the use of approximations of all kinds. It is possible that any failure of reduction can be blamed on this move. This being the case, a reduction would have failed on epistemological or inter-theoretical grounds. One cannot conclude, Hendry argues, that there is a lack of ontological reduction.
The second reason Hendry gives is that two scientific disciplines such as chemistry and physics typically develop independently as history unfolds, and that there is no guarantee that the two sciences mesh together in such a way that reduction can be demonstrated. If this is the case, then once again any apparent lack of reduction can be attributed to inter-theoretical issues and one cannot rule out the ontological reduction of one level to another one.
The failure of reductionism on these sorts of grounds cannot be conclusive when it comes to the more general question of ontological reduction. In order to articulate a form of ontological reduction, we need to look elsewhere.
Hendry then turns to this more difficult task,
if the reduction debate is to develop beyond the impasse over inter-theoretic reduction, it must turn to the ontological relationships between the entities, processes, and laws studied by different sciences, which are fallibly and provisionally described by their theories. One obvious requirement on a criterion of ontological reduction is that whether or not it obtains must be a substantive metaphysical issue that transcends the question of what explanatory relationships exist between theories now, or might exist in the future, even though inter-theoretic relationships must continue to be relevant evidence. It should also be acceptable to both reductionists and non-reductionists.
(Hendry [5], p. 184)
This is an important point that, as I will argue, Hendry fails to adhere to when he addresses the issue more closely. Moreover, I believe that it is quite impossible to give any arguments that transcend our current explanatory schemes and theories. I believe that Hendry and other authors who claim to separate the ontological question from the inter-theoretical one may be mistaken, especially as they admit that ontological issues are to be approached via current inter-theoretical relationships. Hendry continues,
reducibility is at the strong end of the spectrum because it is the limiting case that denies the distinct existence of what is dependent—the reductionists slogan is that x is reducible to y just in case x is ‘nothing but’ its reduction base, y. One can imagine many ways to cash out this slogan, depending on the aspect under which the reduced is held to be ‘nothing but’ its reduction base, but a consensus has emerged in recent philosophy of mind that the relevant aspect should be causal. Alexander's dictum is the principle, often cited by Kim (1998, p. 119, 2005, p. 159), according to which being real requires having causal powers.
(Hendry [5], p. 184)
In this way, a connection is forged between the question of causation and that of reduction. This is important for what follows. But notice that as Hendry concedes, this is only one option. Moreover, the fact that a consensus may have arisen in the philosophy of mind may, or may not, be relevant to research in the philosophy of chemistry.
The connection between reduction and causation that has been established in the philosophy of mind, even if genuine, is of even less relevance to chemists and physicists trying to grapple with the question of whether chemistry is ontologically reduced to physics. Why, after all, should they buy the ‘consensus’ that may have emerged in the philosophy of mind? And if it comes to that, even in mainstream philosophy of science the importance of causation is far from universally accepted.1
But let us grant Hendry this connection and see where it might lead him. He writes,
the ontological reductionist thinks that special-science properties are no more than their physical bases because the causal powers they confer are a subset of those conferred by their physical bases; the emergentist sees them as distinct and non-reducible just because the causal powers they confer are not exhausted by those conferred by their physical bases. The additional causal powers are exerted in downward causation.
(Hendry [5], p. 185)
And so, at the very least, Hendry has provided us with a clear connection between downward causation and emergence.
Hendry also points out that,
emergentism invokes downward causation—the special-science properties sometimes push their physical supervenience bases around. Ontological reductionism assumes the causal closure, or completeness, of the physical—effects are brought about solely by physical causes via physical laws.
(Hendry [5], p. 185)
At this point, Hendry, like McLaughlin before him, appeals to the work of C. D. Broad on emergentism and claims that Broad's work provides ‘an account of emergence from which a model of downward causation is easily extracted’. Broad draws a contrast between ‘pure mechanism’, whereby every material object is made of fundamental particles of one kind of stuff, and emergentism, where this is not the case. On the pure mechanist view, a single physical law governs the interaction between the particles, and this law determines the behaviour of every material object. Again, quoting Hendry,
Broad's account of the disagreement between pure mechanism and emergentism is easily formulated within quantum mechanics, in which the motions are governed by Hamiltonian operators determined by the forces acting within a system.
(Hendry [5], p. 184)
In fact, much work in the philosophy of physics has aimed at identifying whether reductionism breaks down in the context of quantum mechanics and the findings have been notoriously inconclusive. Once again let us grant Hendry the benefit of the doubt in order to see how he intends to identify the operation of emergence/downward causation in the context of quantum mechanics.
Hendry claims that, whereas the reductionist posits a resultant Hamiltonian, the emergentist posits a non-resultant Hamiltonian, also called a configurational Hamiltonian.
Where does downward causation fit into this? For the emergentist, every complex system is composed of the same basic stuff, but some complex systems are covered by non-resultant or configurational Hamiltonians. In an emergent complex system, the behaviour of the basic stuff of which it is made is governed by a configurational Hamiltonian, which is different from what it would be were its behaviour governed by the resultant Hamiltonian.
(Hendry [5], p. 184)
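For orientation, the ‘resultant’ Hamiltonian that the reductionist has in mind for an isolated molecule is the familiar Coulombic one, built up from nothing but the kinetic energies of the electrons and nuclei and their pairwise electrostatic interactions (the equation is given here in its standard textbook form for the reader's convenience; it is not quoted from Hendry):

\[
\hat{H}_{\text{Coulomb}} \;=\; -\sum_{i}\frac{\hbar^{2}}{2m_{i}}\nabla_{i}^{2} \;+\; \sum_{i<j}\frac{q_{i}q_{j}}{4\pi\varepsilon_{0}\,r_{ij}},
\]

where the sums run over all the electrons and nuclei, with masses m_i, charges q_i and pairwise separations r_ij. A configurational Hamiltonian, on Hendry's usage, would depart from this form: roughly speaking, it would contain terms reflecting the configuration of the whole system that are not simply the resultant of such pairwise contributions.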
So far, so good, but where are the alleged configurational Hamiltonians in modern quantum mechanics? Here we come to what I consider to be one of the main weaknesses in Hendry's account. Needless to say, one cannot just examine the mathematical expressions in quantum mechanics and immediately conclude that a Hamiltonian is configurational, or not, as Hendry seems to be implying. What Hendry does next is to turn to molecular quantum mechanics and, in particular, the use of the Born–Oppenheimer approximation.
The Born–Oppenheimer approximation treats the nuclei in a molecule as stationary while the electrons are permitted to move. The electronic energy is minimized for that fixed, or ‘clamped’, arrangement of nuclei, and the calculation is repeated for different arrangements until the nuclear configuration of lowest energy is found. Chemists thereby arrive at the ‘structure’ of the molecule, which is given by the relative positions of the nuclei. According to Woolley, and more recently Hendry, this is not good enough. Without applying the Born–Oppenheimer approximation, or without fixing the positions of the nuclei, they claim, quantum mechanics fails to distinguish between the two isomers of C2H6O, for example. This leads Woolley and Hendry to conclude that molecular structure is somehow an alien concept, without a true quantum mechanical foundation. Woolley [6] has claimed for many years that molecular structure does not reduce to quantum mechanics. Hendry has picked up this view and continues to champion it via an elaborate argument drawing on the work of the early twentieth-century British philosopher C. D. Broad. Hendry claims categorically that molecular structure does not belong in quantum mechanics and must be ‘put in by hand’. What he is referring to is part of a bigger problem that has long plagued the foundations of quantum mechanics, namely the problem of the collapse of the wave function. This problem has gradually become clearer with the growing realization of the role of quantum decoherence in physics and other disciplines.
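Before turning to the superposition argument, it may help to make the clamped-nuclei procedure described at the start of the preceding paragraph concrete. The following minimal sketch minimizes a toy electronic energy curve over the internuclear distance of a diatomic molecule; the Morse curve and its parameters are stand-ins chosen purely for illustration, and no actual quantum-chemical calculation is being performed.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative stand-in for a Born-Oppenheimer electronic energy curve E(R):
# a Morse potential for a generic diatomic. The parameters are assumed, not computed.
D_e, a, R_e = 4.6, 1.9, 0.74   # well depth (eV), width parameter (1/angstrom), equilibrium distance (angstrom)

def electronic_energy(R):
    """Electronic energy with the nuclei clamped at separation R (Morse form)."""
    return D_e * (1.0 - np.exp(-a * (R - R_e))) ** 2 - D_e

# Repeating the clamped-nuclei calculation over different separations and
# minimizing recovers the 'structure', i.e. the equilibrium bond length.
result = minimize_scalar(electronic_energy, bounds=(0.3, 3.0), method="bounded")
print(f"equilibrium bond length ~ {result.x:.2f} angstrom, minimum energy ~ {result.fun:.2f} eV")
```

In a real calculation the function being minimized would itself come from solving the electronic Schrödinger equation at each clamped geometry, and it is precisely this step, the prior fixing of a nuclear framework, that Woolley and Hendry find problematic.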
According to quantum mechanics, molecules can be said to be in a superposition of a number of possible structures. For example, C2H6O can be regarded as a superposition of quantum states representing the structures of C2H5OH and CH3OCH3. Woolley, and now Hendry, concentrates on the fact that until an observation is carried out, neither of these structures has been actualized. The inference they draw from this state of affairs is that there is no intrinsic structure in the molecule, which, if it were true, would indeed mean that structure is not fundamental. However, the study of decoherence has shown that it is not just observations that serve to collapse the superpositions in the quantum mechanical equations [7]. The collapse can also be brought about by the molecules interacting with their environment, something that Hendry occasionally mentions but quickly dismisses.
Moreover, it was traditionally believed that any collapse of the wave function was an instantaneous process. More recent studies have shown that decoherence takes place in a finite time, depending upon many factors.2 In the case of the C2H6O molecule, the decoherence is so rapid (about one femtosecond)3 that, for all intents and purposes, the molecule collapses into either ethanol or dimethyl ether too quickly for it to be worth dwelling on the brief instant during which the structure has yet to settle on one of the isomers.4
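As a rough quantitative illustration of what a femtosecond decoherence time means, the following sketch uses the standard two-state caricature in which the off-diagonal (coherence) elements of the density matrix are damped exponentially with a characteristic time tau_d. The model and the numbers are purely illustrative rather than a calculation for the real molecule; tau_d is simply set to the one-femtosecond estimate quoted above.

```python
import numpy as np

# Two-state density matrix for an equal superposition of the two C2H6O
# 'structure' states (ethanol / dimethyl ether), with the off-diagonal
# coherences damped as exp(-t / tau_d). This is a textbook caricature of
# environmental decoherence, not a model of the actual molecule.
tau_d = 1e-15  # assumed decoherence time in seconds (~1 fs, as quoted in the text)

def density_matrix(t):
    coherence = 0.5 * np.exp(-t / tau_d)
    return np.array([[0.5, coherence],
                     [coherence, 0.5]])

for t in (0.0, 1e-15, 1e-14, 1e-12):
    print(f"t = {t:.0e} s  ->  remaining coherence = {density_matrix(t)[0, 1]:.3e}")
```

By a picosecond the coherence has been suppressed beyond any practical significance, while the diagonal entries, the classical probabilities of finding one isomer or the other, are untouched; this is the sense in which decoherence leaves only probable outcomes, as noted in the next paragraph.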
Taking account of quantum decoherence allows one to tame the effect of entanglement and appears to alleviate the concern that ontological entities such as molecules with particular structures might not exist in their own right. However, importing decoherence does not allow one to predict any particular outcomes in the sense that the decohered wave function only provides the probable outcomes for classical molecular structures. The remaining concern can be addressed by assuming that collapse of the wave function is taking place continually even in the absence of any observers.
Such an intuition is supported by the fact that the classical macroscopic world is populated by definite outcomes. Presumably, such definite outcomes also existed at times before there were any observers present in the world to notice these definite features, rather than probabilistic features about the world.5
Although Hendry does mention the possibility of interaction with the environment, he does not mention decoherence when he refers to the work of Ramsey [9], who has discussed the influence of the environment on molecules.
Hendry also cites Hans Primas, who lays the blame for the apparent ungroundedness of molecular structure on the fact that we usually regard molecules as being isolated systems. According to Primas [10], if we allow for interactions with the outside world, then the lack of structure evaporates. But Hendry is not impressed with either of these solutions. He continues to regard structure as a deep problem that deserves the attention of philosophers of chemistry [11].
On a more general point, it might be well to take account of Hendry's earlier warning to the effect that ontological considerations should not be anchored to the present state of our theoretical understanding. Why should we suppose that the failure of present-day quantum mechanics to recover molecular structure is a reflection of the ontological situation rather than just a deficiency that will be removed as the theoretical treatment is improved? By attaching so much importance to the Born–Oppenheimer approximation and its philosophical consequences, if any, Hendry falls into the very trap that he warns us against.
4. Symmetry breaking
Hendry places the Born–Oppenheimer question into the wider context of symmetry breaking. Although the equations of quantum chemistry treat a molecule such as HCl as though it were symmetrical, all the properties of the molecule, such as its having an electric dipole, point to its being asymmetrical. How does this asymmetry come about?
According to Hendry, the initial symmetry is somehow broken to yield the familiar asymmetrical HCl molecule where one end has a partial positive charge, whereas the other end bears a partial negative charge. It is this mysterious symmetry breaking that Hendry identifies with downward causation. The symmetry breaking seems to ‘come from above’ as it were, and tells the molecule what structure to adopt.
However, how can one be sure that this symmetry breaking is an ontological feature as Hendry seems to interpret it? Could it not be that molecular quantum mechanics in its present state of development is still not able to capture structure for whatever reason? After all, the asymmetry could be present right from the start, whatever the mathematical equations seem to imply according to the present state of theoretical development.
One only needs to think naively about the issue to see that this notion is not so implausible. The HCl molecule is made of two quite dissimilar atoms—one of hydrogen and one of chlorine. It is to be expected that the resulting molecule would retain some asymmetrical features to reflect the fact that it is formed from two rather dissimilar parts.
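Indeed, the asymmetry can be given a rough quantitative gloss. Using standard textbook values for HCl (a measured dipole moment of about 1.08 debye and a bond length of about 127 pm; neither figure is taken from Hendry's paper), the charge separation implied by the dipole can be estimated as follows.

```python
# Back-of-the-envelope estimate of the partial charges on HCl implied by its
# measured dipole moment. The input values are standard textbook figures and
# the point-charge picture is, of course, a crude idealization.
DEBYE = 3.336e-30       # coulomb metres per debye
E_CHARGE = 1.602e-19    # elementary charge in coulombs

dipole = 1.08 * DEBYE   # measured dipole moment of HCl
bond_length = 127.4e-12 # H-Cl internuclear distance in metres

# If the dipole arose from charges +q on H and -q on Cl separated by the bond
# length, the effective charge fraction would be:
q = dipole / bond_length
print(f"effective partial charge ~ {q / E_CHARGE:.2f} of an elementary charge")  # roughly 0.18 e
```

A charge separation of roughly a fifth of an elementary charge is hardly a marginal effect, which reinforces the naive expectation that a molecule built from two dissimilar atoms should be asymmetrical from the outset.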
Other cases of symmetry breaking discussed in modern physics are different in that the symmetry may be present in an ontological sense. For example, the four forces of nature are believed to have been unified at some moment soon after the Big Bang. This fundamental single force subsequently became separated into the four forces that we have now as a result of a process of symmetry breaking. If Hendry insists on claiming that the molecular case is analogous to this cosmological one, then we would presumably have many other instances of downward causation even within physics without having to even venture into the physics–chemistry interface.
In the molecular case, the apparent breaking of symmetry is, I suggest, entirely epistemological/theoretical. There is little doubt that the molecule of HCl should be asymmetrical, given that it is formed from two unlike atoms. The fact that the current quantum mechanical treatment does not capture this structure in an ab initio manner is, I claim, a theoretical rather than an ontological issue.
If all cases of symmetry breaking reveal downward causation, this would mean that the phenomenon is rather rampant, including the symmetry breaking that yields a predominance of matter over anti-matter [12,13], the predominance of laevo amino acids over dextro isomers in the biological world, and many other cases [14–16].
Finally, Hendry of course recognizes the possibility that a future physics may well capture molecular structure in an ab initio manner without the need to invoke downward causation, but he believes that the burden lies with the reductionist to show that there are no configurational Hamiltonians, in other words that there is no downward causation, rather than with the emergentist to show that it exists. For example, he claims,
but molecular quantum mechanics has, for good reasons, based its explanations on Coulombic Hamiltonians, and it is entirely unclear how introducing (for instance) the weak nuclear force into consideration would account for the complex and varied symmetry properties of molecules.
(Hendry [5], p. 185)
This is a rather odd statement given the ongoing research programme that seeks to invoke such effects in the case of at least one other important instance of symmetry breaking, namely the predominance of one form of amino acid isomers in Nature over the isomers of the opposite handedness.6
More generally, it would appear that Hendry has failed to heed his own warning that any ontological conclusions one draws about reduction and emergence should not be anchored to the present state of our theories. My own conclusion is that just as McLaughlin has failed to rule out the occurrence of emergence and downward causation, so Hendry has failed to make a case in their favour. I suggest that the best attitude to adopt towards the concepts of emergence and downward causation is one of agnosticism.
Although I argued against McLaughlin that, for example, the properties of compounds cannot yet be fully derived from those of the component elements, this does not oblige me to side with Hendry, even though it may yet turn out that emergence does exist. My project in this article has been the more modest one of trying to analyse the arguments of two protagonists who are willing to take up a position on the question. As I see it, we are still not in any position to pronounce on such a difficult question as the existence or otherwise of emergence or any accompanying downward causation. On the basis of the work carried out thus far, I suggest that one is led to a position of agnosticism over whether or not emergence and downward causation really play any role whatsoever.
What is required is more work on the questions of reduction, emergence and causation in the context of the borders between physics and chemistry. At present, the literature contains a few isolated studies such as the ones that have been analysed here. This is a little surprising given the frequent calls for more attention to the ‘special sciences’ made by philosophers of science. It is also surprising given that the manner in which chemistry interfaces with physics represents perhaps the ‘first step’ in the reductive hierarchy dealing with the special sciences and their relationship to the fundamental science of physics.
Footnotes
1. The re-appearance of causes in the philosophy of science is a long and difficult issue which would require at least an entire lecture to address. Suffice it to say that the symmetry between explanation and derivation that existed in the logical positivist account of science was challenged by cases in which one had a derivation while there appeared to be no explanation, such as the well-known ‘flagpole example’. The length of the shadow cast by a flagpole can be said to be caused by the flagpole, but it seems odd to claim that the length of the flagpole itself is in any sense caused by its own shadow.
2. I offer the following analogy to illustrate the importance of this change in perception. When Newton published his theory of gravitation, which involved instantaneous action at a distance, there was some well-deserved criticism of this feature. The successor theory, Einstein's general theory of relativity, dispels the mystery of instantaneous gravitational effects by replacing them with effects that are mediated at a finite velocity, that of light, and carried by a field rather than acting simply at a distance. So it is with the change from an instantaneous collapse of the wave function to a collapse that takes a finite amount of time to occur.
3. I am grateful to Benjamin Schwartz of the Department of Chemistry and Biochemistry at UCLA for discussions concerning decoherence and for his estimate of the decoherence time in this molecule.
4. I am not implying that quantum decoherence solves the collapse problem. Readers can learn more about this subtle question from the article by Bacciagaluppi [8].
5. Just as reactions may begin in a non-equilibrium state yet commonly settle down to equilibrium, so one might assume that an initial situation involving two or more possible molecular structures will invariably settle down to produce a specific equilibrium structure.
6. See the article by Cline [17].
References
- 1. Bedau M. A., Humphreys P. (eds) 2007. Emergence: contemporary readings in philosophy and science. Cambridge, MA: MIT Press.
- 2. Scerri E. R. 2008. Collected papers in the philosophy of chemistry. London, UK: Imperial College Press.
- 3. McLaughlin B. 1992. The rise and fall of British emergentism. In Emergence or reduction? Essays on the prospects of nonreductive physicalism (eds Beckermann A., Flohr H., Kim J.), pp. 49–93. Berlin, Germany: de Gruyter.
- 4. Broad C. D. 1925. The mind and its place in nature. London, UK: Kegan Paul, Trench and Trubner.
- 5. Hendry R. F. 2010. Ontological reduction and molecular structure. Stud. Hist. Philos. Mod. Phys. 41, 183–191. (doi:10.1016/j.shpsb.2010.03.005)
- 6. Woolley R. G. 1998. Is there a quantum definition of a molecule? J. Math. Chem. 23, 3–12. (doi:10.1023/A:1019144518901)
- 7. Zurek W. H. 1991. Decoherence and the transition from quantum to classical. Phys. Today 44, 36–44. (doi:10.1063/1.881293)
- 8. Bacciagaluppi G. 2007. The role of decoherence in quantum mechanics. Stanford Encyclopedia of Philosophy. See http://plato.stanford.edu/entries/qm-decoherence/.
- 9. Ramsey J. L. 1997. Molecular shape, reduction, explanation and approximate concepts. Synthese 111, 233–251. (doi:10.1023/A:1004901931804)
- 10. Primas H. 1981. Chemistry, quantum mechanics and reductionism: perspectives in theoretical chemistry. Berlin, Germany: Springer.
- 11. Hendry R. F., Needham P., Weisberg M. 2011. Philosophy of chemistry. Stanford Encyclopedia of Philosophy. See http://plato.stanford.edu/entries/chemistry/.
- 12. Cheng T. P., Li L. F. 2006. Gauge theory of elementary particle physics. Oxford, UK: Oxford University Press.
- 13. Donoghue J. F., Golowich E., Holstein B. R. 1994. Dynamics of the standard model. Cambridge, UK: Cambridge University Press.
- 14. Riehl J. P. 2009. Mirror-image asymmetry: an introduction to the origin and consequences of chirality. New York, NY: John Wiley and Sons.
- 15. Ulbricht T. L. 1975. The origin of optical asymmetry on earth. Orig. Life 6, 303–315. (doi:10.1007/BF01130336)
- 16. Viglione R. G. 2004. Theoretical determination of parity-violating vibrational frequency differences between the enantiomers of chiral molecules. J. Chem. Phys. 121, 9959. (doi:10.1063/1.1807815)
- 17. Cline D. 2005. On the physical origin of the homochirality of life. Eur. Rev. 13(Suppl. 2), 49–59. (doi:10.1017/S1062798705000657)