Is space both finite and infinite?

This is an interesting diagram, for at least a couple of reasons. Firstly, as its author, cosmologist Anthony Aguirre, explains in his paper Eternal Inflation, past and future, “it may well represent the current best bet for how the observable universe actually originated.” Secondly, it demonstrates nicely how, according to general relativistic cosmology, the observable universe could be both spatially infinite and spatially finite.

Aguirre’s diagram represents the creation of our observable universe according to a certain scenario proposed by inflationary cosmology, so let’s begin by recapping the basic idea of the latter. Inflation suggests that there is a scalar field, the ‘inflaton’, whose ‘equation of state’ is such that a positive energy density corresponds to a negative pressure. In general relativity, a matter field with negative pressure generates a repulsive gravitational effect. Inflationary cosmology suggests that at some time in the early universe, the energy density of the universe came to be dominated by that of the inflaton field. A region of the universe in this so-called false vacuum state would undergo exponential expansion until the inflaton field dropped into a lower energy state. This lower energy state is conventionally considered to be the ‘true vacuum’ state, the lowest energy state of the inflaton field.
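The repulsive effect of negative pressure can be made explicit with the Friedmann acceleration equation (a standard result of general relativistic cosmology, not specific to Aguirre’s paper; units with c = 1):

```latex
% Acceleration equation for the FRW scale factor a(t):
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + 3p\right)

% A false-vacuum state has equation of state p = -\rho, hence
\frac{\ddot{a}}{a} = \frac{8\pi G}{3}\rho > 0

% giving exponential (de Sitter) expansion:
a(t) \propto e^{Ht}, \qquad H = \sqrt{\tfrac{8\pi G}{3}\rho}
```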

The diagram above represents a particular type of inflationary scenario in which inflation ends locally at a ‘nucleation point’, by quantum tunnelling from the false vacuum value φF to a value φW. An expanding bubble of lower-energy vacuum forms, its wall expanding outwards at the speed of light. The bubble wall is duly represented on Aguirre’s diagram by the vee-shape. (If one were to add an extra spatial dimension to the diagram, then one would represent the expanding bubble wall as a cone-shape).

Whilst the bubble wall possesses an inflaton field value of φW, the region inside the bubble evolves towards lower-energy inflaton field values until it reaches the true vacuum field value φT. The second diagram here simply plots the potential energy V(φ) as a function of the inflaton field value φ. (Note that Aguirre’s first diagram erroneously denotes the true vacuum as φF).

Now, in general relativistic cosmology there is no preferential way of slicing up a space-time into a family of 3-dimensional spaces. If there were a preferential slicing, it would provide a basis for absolute simultaneity, contradicting the principles of relativity. Inflationary cosmology is just general relativistic cosmology with an inflaton field, so an inflationary space-time can also be sliced up in any number of ways.

If the bubble from which our observable universe arose were nucleated at a single point, and its wall expanded at the finite speed of light, it might seem natural to think that the bubble must be finite in spatial extent, and non-uniform at each moment of time: the closer to the centre of the bubble, the smaller the value of the inflaton field φ. This would correspond to slicing up the region inside the vee-shape on the first diagram with a series of horizontal lines.

However, the conventional models of general relativistic cosmology, the Friedmann-Robertson-Walker space-times, which are purportedly preceded by the inflationary transition to a true vacuum, are considered to be spatially homogeneous and isotropic. If we carry this slicing convention back to the inflationary bubble, then we must slice it along surfaces with constant values of the inflaton field φ. These correspond to a stack of hyperboloids inside the vee-shape on the diagram, each hyperboloid being an infinite 3-dimensional space of constant negative curvature. Under this slicing, the inflaton field still evolves towards the true vacuum state, but it evolves uniformly, and blends into a spatially infinite Friedmann-Robertson-Walker space-time.
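In standard notation, each constant-φ hyperboloid is a slice of an open Friedmann-Robertson-Walker space-time; the line element (again a textbook result, not taken from Aguirre’s paper) makes the infinite extent of each slice explicit:

```latex
% Open (k = -1) Friedmann-Robertson-Walker line element:
ds^2 = -d\tau^2 + a(\tau)^2\left[\, d\chi^2 + \sinh^2\!\chi
       \left( d\theta^2 + \sin^2\!\theta \, d\phi^2 \right) \right]

% Each slice \tau = \text{const} is a hyperboloid of constant
% negative curvature -1/a(\tau)^2, with \chi ranging over
% [0, \infty): an infinite, homogeneous 3-dimensional space.
```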

Thus, seen from this perspective, an inflationary bubble, nucleated at a single point, and growing at a finite speed, is nevertheless capable of harbouring an infinite amount of space.

Published on February 26, 2011 at 8:48 pm

The Minotaur and the Monopole

In a moonlit glade of an enchanted wood, the Minotaur of Sirius-A sat and mourned the loss of his beloved Medusa, caustic tears burning a crater in the mossy ground betwixt his feet. The last survivor of a dying species, it was centuries since Daedalus-A’s labyrinth had crumbled to ruin, and millennia since the Minotaur had feasted on the final consignment of virgins from Athens-A.

Such was the solitary intensity of his grief that the Minotaur resolved to be re-united with his partner by any means possible, and began furiously researching cosmology in the deserted library of Athens-A. There, he discovered that a universe just like our own – and therefore containing his dearest, alive again – could be created, without the need for an initial singularity, if one could only find a magnetic monopole.

The Minotaur, however, had no idea where to begin searching for such an item, and immediately lapsed into a pit of mythological depression, hurling himself out of the colonnaded library, and back to his friendless forest dell. There he lay forlorn in supine agony, and many days passed until, by providence, he descried above a lofty flock of migrating birds, and pondered anew the basis for such navigational feats. Thus it was that the Minotaur commenced a five-year effort, harvesting the magnetite from a billion birds, slowly implanting the precious ferrimagnetic mineral within his bovine skull.

Eventually, the harvest complete, the Minotaur tuned his magnetoreception to the faint magnetic fields which permeate the galaxy, and embarked on a thousand-year odyssey, pursuing hidden Teslatic paths between sparkling spiral arms and gigantic dust lanes, seeking hints and traces of the fabled monopole.

Finally, the grieving beast alighted upon a dark, brooding planet where magnetic field lines converged in densely-packed radial spokes. There, in a dark sulphuric cavern, between rivers of roiling magma, the Minotaur cleaved his monopole from an obsidian peninsula.

Summoning forth his cumulative, concentrated sorrow, he crushed the pulsing topological defect within his hand, compressing it until a black hole was formed. At once, a Reissner-Nordström space-time effloresced within, and a bubble universe blossomed with false vacuum energy inside the double black hole horizons.

The Minotaur was instantly sucked inside his own creation, his mass-energy converted to surface tension in the bubble wall. The child universe expanded and cooled, stars and galaxies formed, planets coalesced, life evolved, civilisations waxed and waned, and after billions of years, in a moonlit glade of an enchanted wood, the Minotaur of Sirius-A sat and mourned the loss of his beloved Medusa, caustic tears burning a crater in the mossy ground betwixt his feet.

Published on February 5, 2011 at 12:46 pm

The Many Worlds of Hugh Everett III

Investigative reporter Peter Byrne has written a fabulous book which traces the life and career of Hugh Everett III, the inventor of the Many Worlds Interpretation of quantum theory.

Everett devised the Many-Worlds Interpretation for his 1957 PhD thesis, but the interpretation was neglected and derided at the time, and Everett himself never returned to academia. Charting Everett’s intellectual and personal adventure, Byrne has uncovered some priceless material. Historians and sociologists of science will be particularly interested to note the pressure exerted by John Wheeler, Everett’s thesis supervisor, for Everett to retract and rewrite much of the thesis, so that it would avoid antagonising Wheeler’s scientific hero and mentor, Niels Bohr.

Byrne’s account of the philosophical issues surrounding quantum theory is amongst the best to be found outside of the professional literature. The author has made a massive effort to understand and explain the concepts involved, and, crucially, has extensively consulted philosophers of physics such as Jeffrey Barrett, Simon Saunders and David Wallace. This level of scholarship is reflected in the final product, which puts most popular science accounts of quantum theory to shame. Byrne should receive huge plaudits for the diligence of his work here.

Everett is a particularly fascinating individual because after completing his PhD thesis, he disappeared into the world of US military research, initially working on the optimisation problems surrounding nuclear warfare. However, the reader seeking an informative, sober, impartial analysis of Cold War politics and strategy will be sorely disappointed here. What we get instead is an unbalanced, sub-Michael Moore caricature of the era. As just one illustration of this, consider the following claims made by Byrne:

“During much of the 1950s, the de facto strategy of the Strategic Air Command under General Curtis LeMay was to ‘preventatively’ launch everything in its nuclear arsenal,” (p74). “During the 1950s, the operating nuclear war plan of the United States was all or nothing. General Curtis LeMay, head of the Strategic Air Command, told a Gaither commissioner that a surprise attack by Soviet bombers would destroy the bulk of his B-52 bombers on the ground. He said that the official doctrine of deterrence by threatening a ‘second-strike’, or ‘massive retaliation’, was an improbable dream. He announced that SAC airplanes flew over the Soviet Union 24 hours a day picking up radio transmissions, and, ‘If I see that the Russians are amassing their planes for an attack, I’m going to knock the shit out of them before they take off the ground.’ And he intended to do this under his own recognizance, regardless of the opinions of civilian leaders, such as the president. Deterrence, for LeMay meant striking first and without warning,” (p195).

Other historical analyses suggest, however, that US strategy in the early stages of the Cold War was one of preemption rather than prevention, and there is a crucial distinction here which Byrne fails to emphasise:

“A first strike can take three forms. A preemptive attack is one made in immediate anticipation of enemy attack. A surprise attack against an enemy who is not yet preparing his own attack is either simply aggressive, or if undertaken from fear of an eventual threat posed by the enemy, preventive…the difference between the preemptive and preventive variants has often been confused, even by professional strategists.” (Nuclear blackmail and nuclear balance, Richard K. Betts, p161). “NSC 68 [a 1950 document which formed the basis of US Cold War strategy for twenty years] rejected preventive war but tentatively embraced preemption,” (ibid., p162).

Whether General Curtis LeMay privately endorsed a preventive strategy at various times is a moot point. The quote used by Byrne, however, is merely evidence that he supported a strategy of preemption, not one of prevention. Moreover, in a briefing given by SAC in March 1954 concerning its war plans, General LeMay explicitly stated: “I want to make it clear that I am not advocating a preventive war; however, I believe that if the US is pushed in a corner far enough, we would not hesitate to strike first.” (Preventive attack and weapons of mass destruction, A comparative historical analysis, Lyle J. Goldstein, p43)

To claim, as Byrne does, that the US Strategic Air Command had a de facto strategy of preventive nuclear war, is therefore quite misleading. On recognising this, one might begin to doubt the veracity of other claims made by Byrne, and that would be unfortunate, because this is otherwise a great book.

As an investigative reporter, Byrne “specializes in uncovering government and corporate corruption.” This is an important duty to society, but it is also crucial not to begin with the assumption that all government activity is corrupt. Byrne, sadly, lapses into a simplistic worldview in which most US Cold War politicians, scientists and generals are portrayed as self-serving, war-mongering maniacs. This is a serious flaw in any work which seeks to provide a definitive historical record, rather than mere propaganda.

It also has to be said that the book is peppered with typographical errors, which include frequent misuse of the apostrophe. In a £25 book, this is unacceptable, and it is time for publishers to recognise that a book suffused with typographical errors is quite literally a defective product.

Nevertheless, despite these reservations, on balance Byrne has written a fantastic account of the life of Hugh Everett, and the philosophical conundra posed by quantum theory.

Published on November 14, 2010 at 6:50 pm

Air traffic control and radiobiology

Despite their superficially disparate nature, there is a striking formal similarity between air traffic control, and the processes studied in radiobiology.

In air traffic control, the objects of attention are the aircraft flight paths across a bounded region of airspace, called a sector. In radiobiology, the objects of attention are the energetic particle tracks across the bounded region of space occupied by a biological cell.

In air traffic control, there is a relationship between the number of flights passing through a sector, and the workload of an air traffic controller, and this relationship is given by a linear-quadratic function. In radiobiology, there is a dose-response relationship between the dose of radiation inflicted on a cell, and the biological response of interest, which may be the number of chromosome aberrations, DNA mutations, or the probability of cell death. For radiation of a fixed type and energy, the dose inflicted on a cell essentially corresponds to the number of particle tracks crossing the cell. Hence, the dose-response relationship is a relationship between the number of particle tracks crossing a cell, and the biological consequences. In the case of so-called chromosome translocations, a response crucially related to the probability of subsequent carcinogenesis, the dose-response relationship is given by a linear-quadratic function.

Let us elaborate on these linear-quadratic relationships a little in order to understand the reasons for such formal similarity. In the case of air traffic, the linear component of controller workload is due to (i) the number of routine flight level and airspeed instructions issued per aircraft, and (ii) the communication required with other controllers, when an aircraft is received from, or transferred to another sector. This component of controller workload is independent of the flow-rate, the number of aircraft passing through the sector per hour.

The quadratic component of controller workload is that associated with aircraft conflict-prediction and resolution; there are regulatory separation minima between aircraft, which must not be infringed. Each aircraft could be in potential conflict with any other aircraft in that same sector in the same time-window, hence this component of workload scales with the square of the number of flights. This component of workload is clearly flow-rate dependent; at times of very low flow-rate, it will vanish.

In radiobiology, it is generally acknowledged that in those circumstances where there is a linear-quadratic dose-response relationship, the linear component arises from intra-track mechanisms, whilst the quadratic component arises from inter-track mechanisms. For example, chromosome translocations occur when genetic material is exchanged between two different chromosomes. It is generally thought that such chromosome aberrations occur because the two separate chromosomes both suffer double-strand breaks; i.e., the double-helix of DNA is thought to be broken in two separate chromosomes. The fragments from the two broken chromosomes are then exchanged, rather than spliced back to the correct chromosomes from which they originated.

There is a linear-quadratic relationship between radiation dose and the number of chromosome translocations in an irradiated cell. The linear component is due to individual particle tracks breaking two separate chromosomes. In contrast, the quadratic component is thought to be due to independent particle tracks breaking two separate chromosomes. This component scales with the square of the dose because a break caused by one particle has a chance of interacting with a break caused by any other particle which passes through the cell within the same time-frame, (a period determined by the cycle of cellular repair processes). This component of the dose-response relationship is therefore dose-rate dependent; at low dose-rates it vanishes.
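Both relationships therefore share the same algebraic form, response = αN + βN². As a minimal numerical sketch (with made-up coefficients, not real air-traffic or radiobiology data), the two components can be separated by a least-squares fit:

```python
import numpy as np

# Hypothetical linear-quadratic dose-response: y = alpha*D + beta*D**2.
# The coefficients below are purely illustrative, not measured values.
alpha, beta = 0.02, 0.05
doses = np.linspace(0.0, 5.0, 11)            # dose (arbitrary units)
response = alpha * doses + beta * doses**2   # noiseless synthetic data

# Least-squares fit of the two components. There is no constant term,
# since zero dose induces zero response.
A = np.column_stack([doses, doses**2])
coeffs, *_ = np.linalg.lstsq(A, response, rcond=None)
a_fit, b_fit = coeffs

print(a_fit, b_fit)  # recovers the linear and quadratic coefficients
```

With real, noisy data the same fit separates the intra-track (linear) from the inter-track (quadratic) contribution, which is precisely why the dose-rate dependence of the quadratic term matters experimentally.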

As yet, however, there appear to be no textbooks for those wishing to jointly specialise in air traffic control and radiobiology.

Published on April 17, 2010 at 12:06 pm

Nuclear physics in other universes

The anthropic principle claims that the universe we live in is finely-tuned to permit the existence of life. According to current mathematical physics, there are many aspects of our physical universe which are contingent rather than necessary, and these include such things as the values of the numerous free parameters in the standard model of particle physics, and the parameters which specify the initial conditions in general relativistic models of the universe. The values of these parameters cannot be theoretically derived, and need to be determined by experiment and observation. The anthropic principle is based upon analysis which shows that if the value of any one of those parameters were to be changed, even slightly, then the universe would be inhospitable to life.

There is, however, some recent theoretical work which suggests that the fine-tuning claim which supports the anthropic principle may be just a little glib.

One of the favourite parameters used in anthropic arguments is the cosmological constant Λ. This appears to have a very small, but non-zero value. It is the cosmological constant which is driving the accelerated expansion of the universe discovered in 1998. If Λ were very much larger, it is argued, then the universe would have expanded far too quickly for any stars and galaxies to form. However, this argument tacitly assumes that the values of all the other parameters of physics are held constant. As Lee Smolin points out in Scientific alternatives to the anthropic principle, one can also vary the amplitude Q of the density fluctuations (which seed the formation of galaxies). If the value of Λ is increased, and the expansion of the universe is accelerated, then one can increase Q to compensate: “one can have stars and galaxies in a universe in which both Q and Λ are raised several orders of magnitude from their present value.” Modern cosmologists hypothesize that the amplitude of the density fluctuations is a consequence of inflation, the short period of exponential expansion in the early universe’s history, which was driven by a scalar field called the inflaton. As Smolin points out, Q therefore depends upon the parameters which define the inflaton potential energy function, parameters such as the mass and self-coupling, which are free parameters.

The existence of life in our universe is also dependent upon nuclear physics, in the sense that life appears to require the existence of hydrogen, carbon and oxygen, and these chemical elements can only exist if the nuclear physics of a universe permits the stable existence of atomic nuclei of electric charge equal to 1 (hydrogen), 6 (carbon) and 8 (oxygen), and if the nuclear physics of a universe facilitates the fusion of carbon and oxygen from primordial hydrogen.

Recent research, summarised by Alejandro Jenkins and Gilad Perez in the January issue of Scientific American, suggests that these criteria can be satisfied by universes with a nuclear physics quite different from our own.

The first case considered is a so-called ‘weakless universe’. Our own universe has four forces (gravity, electromagnetism, the strong nuclear force, and the weak nuclear force). A weakless universe has only three forces, being deprived of the weak force. This is a significant omission as far as nuclear physics is concerned, because the weak force is required for neutrons to transform into protons and vice versa.

A proton consists of two up quarks and one down quark, whilst a neutron consists of one up quark and two down quarks. If one of the down quarks in a neutron emits a W− particle, (one of the so-called gauge bosons of the weak force), it will transform into an up quark, and the neutron will transform into a proton. This is called beta-decay. The W− gauge boson then decays into an electron and an anti-electron-neutrino, the conventional radioactive products of beta decay. Conversely, if one of the up quarks in a proton emits a W+ gauge boson, it will transform into a down quark, and the proton will transform into a neutron.
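Schematically, with the charges made explicit (standard particle physics):

```latex
% Neutron beta-decay at the quark level:
d \;\rightarrow\; u + W^- , \qquad W^- \;\rightarrow\; e^- + \bar{\nu}_e
% so that, overall:  n \;\rightarrow\; p + e^- + \bar{\nu}_e

% The converse process, which occurs inside nuclei and stars:
u \;\rightarrow\; d + W^+ , \qquad W^+ \;\rightarrow\; e^+ + \nu_e
% so that, overall:  p \;\rightarrow\; n + e^+ + \nu_e
```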

The latter process is required for the nuclear fusion which takes place within the stars in our own universe. The so-called PPI chain requires pairs of protons to fuse together, and for one of the protons to transform into a neutron. The result is a nucleus with one proton and one neutron, called deuterium. Further protons then fuse with such deuterium nuclei to form helium-3 nuclei, and pairs of helium-3 nuclei then fuse into helium-4, spitting out two surplus protons in the process.
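Written out, the three steps of the PPI chain are (standard stellar astrophysics):

```latex
p + p \;\rightarrow\; {}^{2}\mathrm{H} + e^{+} + \nu_e
    % proton-proton fusion, with one proton becoming a neutron
{}^{2}\mathrm{H} + p \;\rightarrow\; {}^{3}\mathrm{He} + \gamma
    % deuterium captures a proton
{}^{3}\mathrm{He} + {}^{3}\mathrm{He} \;\rightarrow\; {}^{4}\mathrm{He} + 2p
    % two helium-3 nuclei fuse, releasing two protons
```

Note that the first step is a weak-interaction process, which is why removing the weak force appears, at first sight, to shut down stellar fusion entirely.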

Remove the weak force, then, and stellar nuclear fusion can’t get started, can it? On the contrary, this conclusion only holds if all the other parameter values are held fixed. Harnik, Kribs and Perez argue, in A Universe without Weak Interactions, that if the level of asymmetry between matter and anti-matter in the early universe is also varied, then universes without the weak force can still synthesize the atomic nuclei required to support life. If the so-called baryon asymmetry is reduced, then a sufficient proportion of deuterium nuclei will be left over from big-bang nucleosynthesis, for the fusion of heavier elements to proceed within stars. Such stars will be colder and smaller than our own stars, but will still support a zone of habitability for planets orbiting at just the right distance.

The second case considered is that in which the masses of the quarks are varied. In our own universe, only the two lightest quarks, the up and the down quark, combine to form stable baryons: the proton and the neutron. The other quarks are too massive to form such baryons, hence they do not participate in the physics of atomic nuclei. The down quark in our universe is heavier than the up quark, hence the neutron is slightly heavier than the proton. Jaffe, Jenkins and Kimchi argue, in Quark Masses: An Environmental Impact Statement, that universes with other combinations of quark masses could support the existence of life.

If, for example, the up quark is set to be heavier than the down quark, and protons are therefore set to be heavier than neutrons, then whilst hydrogen itself would become unstable, the heavier isotopes of hydrogen, such as deuterium and tritium, would act as stable substitutes.

Another potential life-supporting universe is one in which the mass of the strange quark is reduced to a value close to that of the up quark, and the down quark is given a much lower mass than either of them. In such a universe, the sigma minus baryon Σ−, consisting of two down quarks and one strange quark, functions as a substitute for the proton, and can combine with neutrons to form stable isotopes of hydrogen, carbon and oxygen.

As Perez and Jenkins comment, “Physicists in such a universe might be puzzled by the fact that the up and strange quarks would have almost identical masses. They might even imagine that this amazing coincidence has an anthropic explanation, based on the need for organic chemistry. We know, however, that such an explanation would be wrong, because our world has organic chemistry even though the masses of the [up] and strange quarks are quite different.”

Published on April 2, 2010 at 12:29 am

Robert Wright and the Evolution of God

Bryan Appleyard writes an appreciative review of Robert Wright’s new book, The Evolution of God, in The Sunday Times, claiming that it constitutes “a scientifically based corrective to the absurd rhetoric of militant atheism.”

However, the logic of Appleyard’s review does little to substantiate this claim. Bryan firstly argues that “Even after 9/11, [the atheists] can’t prove [that religion is a bad thing] because, especially in the 20th century, non-religious nastiness was infinitely worse than religious.” So, the argument here is as follows: non-religious nastiness in the 20th century was worse than religious nastiness in preceding centuries, hence religion isn’t bad. That seems a rather perverse form of logic. One might conclude, instead, that religiously-inspired nastiness is a bad thing, and nastiness inspired in the 20th century by Marxism and fascism was also a bad thing.

Appleyard claims that “the persistence of religion in all human societies strongly suggests that, even in the most basic Darwinian terms, it has been good for us as a species.” This is another logical error: the persistence of a behavioural characteristic in a reproducing species does not entail that it is beneficial to survival in itself, for it may simply be a by-product of other traits which do, in combination, make a net contribution to survival.

When Appleyard then turns to the heart of Wright’s argument, we find what appears to be an attempt to equate the concept of God with some form of cosmic evolution:

What is clear, for Wright, is that there is an organising principle in the world and that this principle may well be materialistically explicable but it is, nonetheless, moral and progressive…Dawkins said Darwin [showed] how design arose through purely material means — evolution through natural selection is the ‘blind watchmaker’. Wright says this misses the point. The point is not how the watch was designed but the fact that it is designed. Some process has led to its existence and it is that process that matters because the mechanism and purpose of the watch clearly make it different in kind from, say, rocks. Equally, humans also require a different type of explanation from rocks. It may be natural selection or it may be some innate force in the universe. Either way, it is reasonable to associate this force with morality and God.

On the basis of this, Wright’s argument is simply the latest in a long line of attempts to define a pantheistic concept of God. In this case, God is equated with the physical process of cosmic evolution. Such a pantheistic concept of God is straightforwardly inconsistent with the notion of a transcendent, supernatural and personal God held by theistic religions such as Islam, Christianity and Judaism. Moreover, one also wonders how Wright manages to derive morality from the existence of evolution without committing the naturalistic fallacy, thereby deriving an ‘ought’ from an ‘is’.

Even leaving aside the inconsistency with theistic religion, there are serious problems with any pantheistic proposal to equate God with cosmic evolution. The primary problem is that evolution by natural selection cannot meet the demand for irrevocable progress which such a variety of pantheism places upon it. In particular, the notion that evolution necessarily leads to ever-greater complexity is a myth. As Michael Le Page points out:

“Evolution often takes away rather than adding. For instance, cave fish lose their eyes, while parasites like tapeworms lose their guts. Such simplification might be much more widespread than realised. Some apparently primitive creatures are turning out to be the descendants of more complex creatures rather than their ancestors. For instance, it appears the ancestor of brainless starfish and sea urchins had a brain.”

But most seriously for Wright’s argument, in cosmic terms the growth of entropy will dominate, and as the universe tends towards thermodynamic equilibrium, the energy flows which presently permit the evolution of complexity and life, will subside and ultimately cease. The evolution of complexity and life is therefore, cosmically speaking, something of a transient phenomenon, and if atheists such as Dawkins have forced religious apologists to the point where God has to be equated with an ephemeral physical process, then it seems that the atheists really have won the argument convincingly.

Published on September 1, 2009 at 5:16 pm

Karen Armstrong and the case for God

[Modern theists] give the name of ‘God’ to some vague abstraction which they have created for themselves; having done so they can pose before all the world as deists, as believers in God, and they can even boast that they have recognized a higher, purer concept of God, notwithstanding that their God is now nothing more than an insubstantial shadow and no longer the mighty personality of religious doctrines. (Freud, The Future of an Illusion).

In The Case for God: What religion really means, former Catholic nun Karen Armstrong reiterates a now-familiar line of defence against the new wave of atheism. This generally amounts to the complaint that atheists such as Richard Dawkins have a theologically uninformed, and mistakenly literalist, interpretation of religious scripture. Thus, amongst those of an educated, literary-ecclesiastical background, religion is defended by advocating a metaphorical interpretation of scripture, and an aesthetic-mytho-poetic concept of God.

Wary of the power of science to overthrow religious worldviews, as demonstrated in the Copernican and Darwinian revolutions, the modern theist ushers God into an ontological safe-zone, where he cannot be subject to refutation by empirical means. Realising, however, that even this stronghold cannot resist the barbs of logic and reason, God is blindfolded, and bundled unceremoniously into a waiting limousine, whence he is taken at breakneck speed to a supra-logical and supra-semantic realm, beyond all human understanding.

“God is, by definition, infinitely beyond human language,” writes Christopher Hart in The Sunday Times. “Yet thanks to the misapplication of science to religious faith, we remain literal-minded and spiritually immature, frightened of the silence and solitude in which the Ancient of Days, the Unnameable, might be experienced, though never understood.

“We need to think of God not as a being, but as Being. Armstrong points us towards a vast tradition in all religions in which, in essence, you can ultimately say nothing about God, since God is no thing. In Islam, all speaking or theorising about the nature of Allah is mere zannah, fanciful guesswork. Instead, try ‘silence, reverence and awe,’ she says; or music, ritual, the steady habit of compassion, and a graceful acceptance of mystery and ‘unknowing’…God is dead — but, Armstrong suggests, all we have lost is a mistaken and limited notion of God anyway: a big, powerful, invisible man who does stuff.”

All of which will come as a surprise to the majority of monotheistic religious believers in the world, who believe that the universe was created by God, that God answers prayers and performs miracles, and provides the means for an afterlife.

Hart’s proposition that God is not a being, but Being itself, is the familiar doctrine of pantheism, which is inconsistent with the personal nature of God enshrined in Christianity, Judaism and Islam. The notion espoused by these religions that God is a transcendent, supernatural, personal being, who created the natural universe, is inconsistent with the pantheistic notion that God is an immanent, non-supernatural, non-personal being, equivalent to the natural universe. But, of course, it is precisely the existence of such irritating contradictions which explains the modern theist’s desire to push God into a supra-logical realm.

To propose that the notion of God is beyond all human understanding, language and logic, is to acknowledge that there is no coherent, comprehensible content to belief in God. Not only is belief in God belief without reason or evidence, but it is a belief without coherent content. The proponent of the modern educated defence against atheism is, in effect, admitting:

‘I have a belief, without reason or evidence, in a meaningless proposition.’

At which point, I rest my case.

Published on August 9, 2009 at 9:08 am

Mathematical logic and multiverses

The concepts of mathematical logic, introduced to explain Gödel’s theorem, can also be exploited to shed further light on the question of multiverses in mathematical physics.

Recall that any physical theory whose domain extends to the entire universe (i.e. any cosmological theory) has a multiverse associated with it: namely, the class of all models of that theory. Both complete and incomplete theories are capable of generating such multiverses. The models of a complete theory may be mutually non-isomorphic, but they will nevertheless be elementarily equivalent. Two models of a theory are defined to be elementarily equivalent if they assign the same truth-values to all the sentences of the language. Whilst isomorphic models must be elementarily equivalent, elementarily equivalent models need not be isomorphic. Recalling that a complete theory T is one in which, for any sentence s, either s or its negation Not(s) belongs to T, it follows that any two models of a complete theory must be elementarily equivalent.

Alternatively, if a theory is such that there are sentences which are true in some models but not in others, then that theory must be incomplete. In this case, the theory will possess models which are mutually non-isomorphic and elementarily inequivalent.
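To make this concrete with a standard example (mine, not the author's): the first-order theory of groups is incomplete in exactly this sense, since its models can disagree on sentences of the language of groups. The sketch below model-checks the sentence 'there exists an x with x + x ≠ e' in two non-isomorphic groups of order 4, assuming nothing beyond their addition tables:

```python
from itertools import product

# Two models of the (incomplete) first-order theory of groups, both of
# order 4, written additively: Z4 and the Klein four-group Z2 x Z2.
z4_elems = list(range(4))
z4_add = lambda a, b: (a + b) % 4

k4_elems = list(product(range(2), repeat=2))
k4_add = lambda a, b: (a[0] ^ b[0], a[1] ^ b[1])

def sentence_true(elems, add, identity):
    """Model-check the group-language sentence 'Exists x: x + x != e'."""
    return any(add(x, x) != identity for x in elems)

# Z4 satisfies the sentence (1 + 1 = 2 != 0); in the Klein group every
# element is its own inverse, so the sentence is false there.  The two
# models are therefore elementarily inequivalent, and the theory of
# groups, which decides neither this sentence nor its negation, is
# incomplete.
print(sentence_true(z4_elems, z4_add, 0))        # True
print(sentence_true(k4_elems, k4_add, (0, 0)))   # False
```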

Hence, mathematical logic suggests that the application of mathematical physics to the universe as a whole can generate two different types of multiverse: classes of non-isomorphic but elementarily equivalent models; and classes of non-isomorphic and elementarily inequivalent models.

The question then arises: are there any conditions under which a theory has only one model, up to isomorphism? In other words, are there conditions under which a theory doesn’t generate a multiverse, and the problem of contingency (‘Why this universe and not some other?’) is eliminated?

A corollary of the upward Löwenheim–Skolem theorem provides an answer to this. The latter entails that any (countable) theory which has a model of some infinite cardinality will have models of every infinite cardinality. Models of different cardinality obviously cannot be isomorphic, hence any theory, complete or incomplete, which has at least one model of infinite cardinality, will have a multiverse associated with it. (In the case of a complete theory, the models of different cardinality will be elementarily equivalent, even though they are non-isomorphic). Needless to say, general relativity has models which employ the cardinality of the continuum, hence general relativity will possess models of every infinite cardinality.
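For reference, the standard statement of the result appealed to here, for a theory T in a countable first-order language:

```latex
\textbf{Upward L\"owenheim--Skolem:}\quad
\text{if } T \text{ has an infinite model, then for every infinite
cardinal } \kappa,\ T \text{ has a model of cardinality } \kappa .
```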

For a theory of mathematical physics to have only one possible model, up to isomorphism, it must therefore have no infinite models. A Theory of Everything must have a unique finite model if the problem of contingency, and the potential existence of a multiverse, are to be eliminated.

Published on July 25, 2009 at 10:57 am

Theories of everything and Gödel’s theorem

Does Gödel’s incompleteness theorem entail that the physicist’s dream of a Theory of Everything (ToE) is impossible? It’s a question which, curiously, has received scant attention in the philosophy of physics literature.

To understand the question, first we’ll need to introduce some concepts from mathematical logic. A theory T is a set of sentences, in some language, which is closed under logical implication. In other words, any sentence which can be derived from a subset of the sentences in a theory is itself a sentence in the theory. A model M for a theory T is an interpretation of the variables, predicates, relations and operations of the language in which that theory is expressed, which renders each sentence in the theory as true. Theories generally have many different models: for example, each different vector space is a model for the theory of vector spaces, and each different group is a model for the theory of groups. Conversely, given any model M, there is a theory Th(M) which consists of the sentences which are true in the model M.

Now, a theory T is defined to be complete if for any sentence s, either s or Not(s) belongs to T. A theory T is defined to be decidable if there is an effective procedure for deciding whether any given sentence s belongs to T (where an ‘effective procedure’ is generally defined to be a finitely-specifiable sequence of algorithmic steps). A theory is axiomatizable if there is a decidable set of sentences in the theory, whose closure under logical implication equals the entire theory.
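A toy illustration of these definitions (my sketch, not from the original post): in propositional logic over finitely many variables, the closure of a set of axioms under logical implication is decidable by the truth-table method, which is exactly an 'effective procedure' in the sense above:

```python
from itertools import product

# Axioms over two propositional variables, modelled as Boolean functions
# of a valuation (p, q): the axioms here are  p  and  p -> q.
axioms = [lambda p, q: p,
          lambda p, q: (not p) or q]

def in_theory(sentence):
    """Decide whether `sentence` belongs to the closure of the axioms
    under logical implication: s is in the theory iff every valuation
    satisfying all the axioms also satisfies s (a finite, hence
    effective, check over the four valuations)."""
    return all(sentence(p, q)
               for p, q in product([False, True], repeat=2)
               if all(a(p, q) for a in axioms))

print(in_theory(lambda p, q: q))       # True:  q follows from p and p -> q
print(in_theory(lambda p, q: not p))   # False: not-p does not follow
```

The contrast drawn in what follows is with Peano arithmetic, where no such exhaustive, terminating check is available.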

It transpires that the theory of arithmetic (technically, Peano arithmetic) is both incomplete and undecidable. Moreover, whilst Peano arithmetic is axiomatizable, there is a particular model of Peano arithmetic, the standard model (the natural numbers), whose theory is typically referred to as number theory, and this theory is undecidable and non-axiomatizable. Gödel obtained sentences s which are true in the standard model, but which cannot be proven from the axioms of Peano arithmetic. These sentences are of the self-referential form s = ‘I am not provable from A’, where A is a subset of sentences in the theory.

Any consistent, axiomatizable theory which includes Peano arithmetic will be incomplete, hence if a final Theory of Everything includes Peano arithmetic, then the final theory will also be incomplete. The use of Peano arithmetic is fairly pervasive in mathematical physics, hence, at first sight, this appears highly damaging to the prospects for a final Theory of Everything in physics.

In some mitigation, for the application of mathematics to the physical world one’s conscience may be fairly untroubled by the difficulties of self-referential statements. However, undecidable statements which are free from self-reference have been found in various branches of mathematics. For example, it has been proven that there is no general means of proving whether or not a pair of ‘triangulated’ 4-dimensional manifolds are homeomorphic (topologically identical).

Crucially, however, whilst the theory of a model, Th(M), may be undecidable, it is guaranteed to be complete, and it is the models of a theory which purport to represent physical reality. A final Theory of Everything might have no need of Peano arithmetic, and might well be complete and decidable. However, even if a final Theory of Everything is incomplete and undecidable, the physical universe will be a model M of that theory, and every sentence in the language of the theory will either belong or not belong to Th(M).

Published on July 5, 2009 at 12:05 pm

Lee Smolin and the multiverse

Lee Smolin argues in Physics World against the notion that there exists a multiverse of timeless universes. Smolin believes that the need to invoke a multiverse is rooted in the dichotomy between laws and initial conditions in existing theoretical physics, and suggests moving beyond this paradigm.

A choice of initial conditions, however, is merely one of the means by which particular solutions to the laws of physics are identified. More generally, there are boundary conditions, and free parameters in the equations, which have no special relationship to the nature of time. Each theory in physics represents (a part of) the physical universe by a mathematical structure; the laws associated with that theory select a particular sub-class of models with that structure; and the application of a theory to explain or predict a particular empirical phenomenon requires the selection of a particular solution, i.e., a particular model. The choice of initial conditions, or boundary conditions, or the choice of particular values for the free parameters in the equations, is simply a way of picking out a particular model of a mathematical structure. For example, in general relativity, the structure is that of a 4-dimensional Lorentzian manifold, the Einstein field equations select a sub-class of all the possible 4-dimensional Lorentzian manifolds, and the choice of boundary conditions or initial conditions selects a particular 4-dimensional Lorentzian manifold within that sub-class.
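A minimal sketch of the point (my illustration, not Smolin's or the author's): for the harmonic-oscillator 'law' x'' = -x, the general solution is the two-parameter family x(t) = A cos t + B sin t, and a choice of initial conditions picks out one member of that family, just as initial or boundary data pick out one model from the sub-class selected by the field equations:

```python
import math

# 'Law': x'' = -x.  Its general solution is the two-parameter family
#   x(t) = A*cos(t) + B*sin(t),
# analogous to the sub-class of models selected by a theory's equations.

def particular_solution(x0, v0):
    """Initial conditions x(0) = x0 and x'(0) = v0 single out one
    solution: since x(0) = A and x'(0) = B, we have A = x0, B = v0."""
    return lambda t: x0 * math.cos(t) + v0 * math.sin(t)

x = particular_solution(x0=1.0, v0=0.0)   # the particular solution cos(t)
print(round(x(0.0), 6), round(x(math.pi), 6))
```

The design point is that the 'law' never changes here; only the data identifying a particular solution do, which is the sense in which initial conditions are one route among several to selecting a model.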

As a consequence, any theory whose domain extends to the entire universe (i.e. any cosmological theory) has a multiverse associated with it: namely, the class of all models of that theory. Irrespective of whether a future theory abolishes the dichotomy between laws and initial conditions, the application of that theory will require a means of identifying particular models of the mathematical structure selected by the theory. If there is only one physical universe, as Smolin claims, then the problem of contingency will remain: why does this particular model exist and not any one of the other possibilities? The invocation of a multiverse solves the problem of contingency by postulating that all the possible models physically exist.

Published on June 14, 2009 at 2:51 pm