Is the discussion of free will an illusion?

Biologist Martin Heisenberg writes an article for Nature which purports to address the issue of free will, but ultimately does nothing of the kind.

Heisenberg describes the actual research around which the article is constructed, as follows:

My lab has demonstrated that fruit flies, in situations they have never encountered, can modify their expectations about the consequences of their actions. They can solve problems that no individual fly in the evolutionary history of the species has solved before. Our experiments show that they actively initiate behaviour. Like humans who can paint with their toes, we have found that flies can be made to use several different motor outputs to escape a life-threatening danger or to visually stabilize their orientation in space.

The ‘expectations’ of fruit flies?

Let us be generous, and accept that this term is used metaphorically. The deeper problem with Heisenberg’s article lies in the general thrust of its argument, which is merely the claim that animals are capable of adapting their behaviour, that “behavioural output can be independent of sensory input.” Yet, as Heisenberg himself admits, “the idea that animals act only in response to external stimuli has long been abandoned, and it is well established that they initiate behaviour on the basis of their internal states, as we do.” Given that this fact is well established, it is difficult to see what Heisenberg thinks has been newly discovered in his lab.

Let us accept that Heisenberg’s lab have correctly interpreted their empirical data, and that fruit flies are indeed capable of adapting to their environment. This would constitute a type of learning, but it is difficult to see how it bears upon the issue of free will. Neural networks, for example, are capable of learning, and there is a body of literature demonstrating that recurrent neural networks can be trained to behave like deterministic finite-state automata (DFAs). Fruit-fly learning and subsequent behaviour could be represented by such a neural network, yet a network which can be trained to behave like a DFA is hardly the epitome of freely-willed behaviour. Neural networks themselves can be either deterministic or stochastic (i.e., random), but both types of causation are distinct from Heisenberg’s notion of freely-willed behaviour as “self-generated” (i.e., neither determined nor random).

If fruit flies are indeed capable of adapting to their environment, then this would be inconsistent with a behaviouristic interpretation of fruit-fly behaviour (i.e., an interpretation which denies that fruit flies possess internal states), but it is perfectly consistent with a deterministic interpretation of their behaviour (as well as being quite irrelevant to the issue of free will). Without internal states, there can be no variation in the output response to input stimuli; with internal states, the response to a stimulus can vary depending upon the internal state, and the internal state can itself be the result of prior learning.
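To make the point concrete, here is a minimal sketch (in Python) of a system with internal states: the response to a given stimulus varies with the internal state, and the state is shaped by prior ‘learning’, yet the whole system is strictly deterministic. The states, stimuli and responses are purely hypothetical labels, not a model of fly neurobiology.

```python
# A deterministic finite-state machine: the response to a stimulus depends
# on the internal state, and the internal state is itself a product of the
# prior input history ('learning'), yet nothing here is free or random.

class DFA:
    def __init__(self, transitions, outputs, start):
        self.transitions = transitions  # (state, stimulus) -> next state
        self.outputs = outputs          # (state, stimulus) -> response
        self.state = start

    def respond(self, stimulus):
        response = self.outputs[(self.state, stimulus)]
        self.state = self.transitions[(self.state, stimulus)]
        return response

# Hypothetical two-state 'fly': after experiencing 'heat' it enters a
# sensitised state and thereafter responds differently to the same stimulus.
transitions = {
    ('naive', 'light'): 'naive',
    ('naive', 'heat'): 'sensitised',
    ('sensitised', 'light'): 'sensitised',
    ('sensitised', 'heat'): 'sensitised',
}
outputs = {
    ('naive', 'light'): 'approach',
    ('naive', 'heat'): 'escape',
    ('sensitised', 'light'): 'avoid',
    ('sensitised', 'heat'): 'escape',
}

fly = DFA(transitions, outputs, 'naive')
before = fly.respond('light')   # 'approach': the naive response
fly.respond('heat')             # aversive experience changes the internal state
after = fly.respond('light')    # 'avoid': same stimulus, different response
```

The same stimulus elicits different behaviour before and after the aversive experience, which is all that “behavioural output independent of sensory input” requires, and none of which requires freedom of the will.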

So Heisenberg’s lab have perhaps found evidence for the existence of internal states in fruit flies, but such a finding is of no relevance to the issue of free will.

Published in: on May 29, 2009 at 10:58 pm  Comments (2)  

Primeval music

The European Space Agency’s Planck satellite is due to launch from French Guiana on May 14th. Assuming successful deployment, Planck will measure the temperature of the cosmic microwave background radiation (CMBR) across the entire celestial sphere, with greater sensitivity and spatial resolution than was achieved by its predecessor, NASA’s WMAP satellite. The variations in the temperature of the CMBR reflect variations in the density of matter when the universe was 380,000 years old, at the time of so-called ‘recombination’, when atomic nuclei captured previously free electrons.

New Scientist duly have an article to herald the launch, which claims that “these so-called anisotropies are believed to be due to inflation…During inflation, quantum fluctuations in space-time were extended to cosmological scales: by the time the CMB was released, these fluctuations had led to variations in the distribution of matter across the universe. Denser regions of the universe produced CMB photons slightly colder than average, and vice versa.”

In fact, whilst it is claimed by cosmologists that temperature fluctuations more than a few degrees across are the imprint of fluctuations present at the end of the inflationary period, fluctuations smaller than a degree are believed to be the result of acoustic oscillations in the plasma of baryons, electrons and photons present between the end of inflation and the time of recombination. These small-scale fluctuations are therefore the visible remnant of the earliest sound waves in the universe.

For the large angular-scale fluctuations, the denser regions redshifted the light climbing out of those regions, and therefore produced cooler spots in the CMBR; in contrast, for the small angular-scale fluctuations, the denser regions were regions where the plasma was hotter, hence they produced hotter spots in the CMBR.

Published in: on May 19, 2009 at 6:24 pm  Leave a Comment  

Fluid dynamics of the local stream

The first genuinely warm day of Spring. The Sun opens up the landscape into a buzzing, multi-hued repository of beauty and intricately detailed physical process. The garden is stratified by colour: three blood-red tulips surge vertically against an emerald background of lawn, hedge and tree, themselves shouldering an aquamarine sky.

Taking a walk to the local stream, limitless complexity abounds. Where the flow is shallow, and the bed is pebbly, a series of undulations appear in the surface flow; standing waves perhaps? Fronds of vegetation protrude into the waterway, and small vortices spin off their tips, passing a short distance diagonally down the streamflow. In places, the flow is narrow, and vegetation chokes both sides; here, the vortices cross-hatch the surface.

Some parts of the stream are silent and languid; others tinkle and babble, and here the flow is turbulent. Sudden irregularities and constrictions cause small waves to break, and jets to impact the water, trapping bubbles of air; cavitation creates bubbles of water vapour where the water impacts upon rock and stone; the bubbles oscillate, creating sound waves in the water, which propagate to the surface, and thence transmit to the air as a tranquilising murmur.

Each square metre of this totally unremarkable watercourse is worthy of its own treatise; each unit area deserves its own magnum opus from a fluid dynamicist.

Published in: on May 2, 2009 at 7:05 pm  Comments (2)  

The Eureka machine and cliodynamics

Computers are incredibly fast, accurate, and stupid. Human beings are incredibly slow, inaccurate, and brilliant. Together they are powerful beyond imagination. (Einstein)

Michael Schmidt and Hod Lipson have apparently developed an automated search algorithm which discovers physical laws and conservation equations from scratch. The algorithm scrutinises the experimental data extracted from the motion capture of physical systems, and reproduces the classical laws which explain the data. Or, as The Guardian claimed, Schmidt and Lipson have developed a ‘Eureka machine’.

In a technique Schmidt and Lipson refer to as ‘symbolic regression’, their algorithm searches the space of possible mathematical expressions until it finds analytical expressions which reproduce the empirical data. Starting from algebraic operations and simple analytical functions such as sine and cosine, the algorithm randomly re-combines previous equations and parameters, and tests each candidate expression against the empirical data, until a desired level of accuracy is reached. Schmidt and Lipson’s algorithm was able to converge on the Hamiltonians, Lagrangians and force laws of classical physical systems, including non-linear systems.
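The core idea can be sketched in a few lines of Python, though this toy version is far simpler than Schmidt and Lipson’s algorithm (which evolves whole expression trees and scores them against partial derivatives of the data): fit a coefficient for each candidate functional form by least squares, and keep the form which best reproduces the data. The data here come from a hypothetical ‘law’ y = 2x², unknown to the search.

```python
import math

# Data from a hypothetical measured system obeying y = 2.0 * x**2.
# The functional form is unknown to the search procedure below.
xs = [0.1 * i for i in range(1, 50)]
ys = [2.0 * x ** 2 for x in xs]

# Candidate primitive forms; the search scores each against the data,
# in the (much simplified) spirit of symbolic regression.
primitives = {
    'x': lambda x: x,
    'x^2': lambda x: x ** 2,
    'x^3': lambda x: x ** 3,
    'sin(x)': math.sin,
    'cos(x)': math.cos,
}

def fit_coefficient(f):
    # Least-squares coefficient a for the model y ~ a * f(x)
    num = sum(f(x) * y for x, y in zip(xs, ys))
    den = sum(f(x) ** 2 for x in xs)
    return num / den

def mse(f, a):
    # Mean squared error of the fitted model against the data
    return sum((a * f(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

name, f = min(primitives.items(),
              key=lambda kv: mse(kv[1], fit_coefficient(kv[1])))
a = fit_coefficient(f)
print(f"best form: {a:.3f} * {name}")  # -> best form: 2.000 * x^2
```

The search ‘rediscovers’ the law because only one candidate form can reproduce the data; the real algorithm must additionally invent the candidate forms themselves by recombination.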

As an aside, if it is true that civilization is a non-linear classical physical system, then Schmidt and Lipson’s algorithm could perhaps be applied to the data generated by human history, to discover the fundamental laws of cliodynamics. The difficulties of extracting empirical data in this case, where there is only historical documentation rather than motion capture, are obviously not to be underestimated. Moreover, whilst Schmidt and Lipson are able to pre-specify what the state variables of their systems are – they direct their software to look at positions, velocities and accelerations – in the case of cliodynamics, a central difficulty is identifying what the state variables actually are.

Schmidt and Lipson’s work raises a number of fundamental issues, both for the philosophy of science and for physics. The fact that their algorithm converges on unique, self-consistent laws seems to undermine the purported underdetermination of theory by data, a popular bone of contention in the philosophy of science.

It also looks like this work is the first serious step down a road which will considerably alter, and perhaps reduce, the creative opportunities for physicists. There would still be, of course, the need to develop such algorithms, to prepare the input data, and to interpret the output. And it should also be emphasised that, from the perspective of mathematical physics, the primary creative task is the discovery of mathematical structures, not the discovery of the laws satisfied by the variables embedded in those structures. An algorithm which discovers the mathematical structures necessary to represent the physical world is a step beyond the work of Schmidt and Lipson. Nevertheless, whilst mathematical physicists might take this consolation, the long-term prospects may not be quite as rosy for their counterparts in theoretical physics.

Published in: on April 18, 2009 at 11:28 am  Leave a Comment  

Conor Cunningham and Darwinism

Philosopher and theologian Conor Cunningham argues that Darwinism is consistent with Christianity. His argument is that the Biblical account of the creation of man in Genesis is merely allegorical, and that it was traditional amongst the Fathers of the Christian church not to interpret Genesis literally. Augustine, he claims, would not have been perturbed had he known about Darwinian evolution.

Cunningham, however, seems to have missed a crucial point. Darwin’s account of the origin of mankind not only refuted the literal Biblical account, it also refutes the belief that God is responsible for the existence of mankind. Evolution by natural selection is not a deterministic process, hence unless one postulates that the universe is deterministic on a lower level than that at which evolutionary biology operates (a postulate which quantum theory renders problematic), the evolution of humanity by natural selection entails that the existence of humanity is a matter of pure chance; the existence of humanity is a contingent property of the universe, something which might not have happened at all.

Hence, Darwinian evolution is inconsistent with the essential Christian belief that God is responsible for the existence of mankind.

Published in: on April 11, 2009 at 11:07 am  Leave a Comment  

Super-nonsense

It’s rational to believe that irrationality is an ineliminable aspect of human mentality. As the consistently superb Paul Broks puts it,

The capacity to hold rational thoughts alongside irrational intuitions is part of the mind’s design. Even if we deny belief in the supernatural – in ghosts, say, or astrology – we are all inclined towards magical thinking and superstition. It’s a frame of mind that in one direction opens out to a dream world of myth and imagination, and in the other leads to practical creativity in the arts and sciences. The dark side is mental illness.

Psychologist Bruce Hood claims in his recent book Supersense, that such irrationality has evolved by natural selection, by virtue of contributing to our survival at some point in the past. This is a viable hypothesis, although, as Michael Brooks points out in his New Scientist review, not necessarily one which is supported by any solid evidence as yet. However, Hood also appears to extrude the following philosophically flimsy argument from this hypothesis:

There are good, scientific reasons why religion won’t disappear…Spiritual thinking is not about being simple-minded or stupid; it’s about being human. We are, [Hood] suggests, “a sacred species”…Our supersense gives us sacred values, and our sacred values create taboos. Taboos, in turn, provide a means for group cohesion. “Irrationality makes our beliefs rational because these beliefs hold society together,” Hood says. If hardened sceptics were to accept that irrationality is, well, rational insofar as it serves to hold societies together, that would constitute an important step toward a more tolerant and unified society.

The first error here lies in the conflation of the irrational with the sacred. There are many types of irrationality, some of which are necessary to maintain personal relationships and social cohesion, but which don’t involve the religious type of sacred belief. Socially-cohesive irrationality may well involve holding certain things as sacred in the sense that they are held in great reverence, but without involving the religious notion of the sacred, which explicitly requires belief in the supernatural.

Secondly, the religiously sacred brand of irrationality is demonstrably unnecessary for social cohesion. There are, for example, numerous non-religious, professional or collegiate groups, such as doctors, trades unions, and soldiers, which are not bound together by a shared belief in the sacred or supernatural, but simply by shared interests and experiences. Moreover, the existence of socially cohesive secular European states attests to the socially superfluous nature of religiously sacred belief.

Thirdly, the religiously sacred strain of irrationality is demonstrably insufficient to promote social cohesion. For example, the notoriously religious United States is beset with much higher levels of violence and homicide than secular Europe; that’s hardly a great advert for the socially cohesive power of religion.

Finally, even if it is acknowledged that there are circumstances under which religiously sacred beliefs do promote greater social cohesion, such as that to be found within the Islamic theocracies which spawned Al-Qaeda and Hezbollah, the existence of religiously-driven social cohesion seems to promote a vicious in-group/out-group mentality which leads to inter-group conflict. In this respect, it is perhaps no coincidence that the notoriously religious United States is also the notoriously war-mongering United States.

Published in: on April 3, 2009 at 5:50 pm  Leave a Comment  

Templeton prize won by epistemic structural realist

The Templeton Prize for 2009, worth a cool £1 million, has been awarded to French philosopher of physics, Bernard d’Espagnat.

D’Espagnat accepts that there is a world which exists independently of experience, observation, and measurement, and in philosophical terms he is therefore a realist. He believes, however, that whilst science enables us to “glimpse some basic structures of…reality,” it cannot provide complete knowledge of the world which exists beyond the empirical data; rather, it is a ‘veiled’ reality. D’Espagnat therefore endorses a version of what is referred to in modern philosophy of science as epistemic structural realism. (In contrast, an ontic structural realist holds that the structure of reality is the only thing which exists).

In theological terms, D’Espagnat’s epistemic structural realism then enables him to advocate a pantheistic, noumenal concept of God. In other words, God is equated with the noumenal world, the unknowable world beyond our empirical experience and observation. Such a proposal is distinct from pantheistic notions which equate God with the natural world, because D’Espagnat relegates the natural world – the world of space, time and matter – to what Kant referred to as the ‘phenomenal’ world, the world produced by the modus operandi of our minds upon the noumenal world.

Last year’s winner of the Templeton Prize, Michael Heller, can perhaps be classified as an ontic structural realist, hence it seems that all the philosophical bases are being covered here.

New Scientist‘s very own embedded philosopher of physics, Amanda Gefter, concludes:

It would be nonsensical to paint [D’Espagnat’s God] with the figure of a personal God or attribute to it specific concerns or commandments.

The ‘veiled reality’, then, can in no way help Christians or Muslims or Jews or anyone else rationalise their specific beliefs. The Templeton Foundation – despite being headed up by John Templeton Jr, an evangelical Christian – claims to afford no bias to any particular religion, and by awarding their prize to d’Espagnat, I think they’ve proven that to be true.

I happen to believe that drawing any spiritual conclusions from quantum mechanics is an unfounded leap in logic – but if someone out there in the world is willing to pay someone £1 million for pondering the nature of reality, that’s a world I’m happy to live in.

Published in: on March 25, 2009 at 8:47 pm  Leave a Comment  

What is an elementary particle?

Democritus famously proposed that all matter consists of microscopic particles, called ‘atoms’, which are not themselves composed of other particles. By historical accident, it is the chemical elements which are referred to as atoms, despite the fact that they are composed of smaller particles. Nevertheless, it is currently believed that there are particles which are not composed of other particles, and these entities, which include electrons, photons and quarks, are dubbed ‘elementary particles’. The Democritean vision of elementary particles as miniature snooker balls, however, has been somewhat vitiated by quantum theory, and it is not merely the classical notion of a particle as a localisable entity which has been undermined, but the mereological notion that a composite system has a unique decomposition into elementary entities.

According to modern theoretical physics, the fundamental types of things which exist are quantum fields, and particles are merely excited states of the underlying quantum field. Given that these modes of excitation satisfy the principles of quantum theory, they are often dubbed ‘excitation quanta’. Even when there are no particles present, the quantum field is simply in its lowest-energy state, and this non-zero energy of the so-called ‘vacuum state’ duly has a detectable effect.

Because particles are excitation quanta of an underlying field, their identity conditions are more akin to those of waves or vibrations in a continuous medium than miniature snooker balls. For example, if one begins with a number of separate travelling waves on the surface of a body of water, and they merge together to form a standing wave, then the individual identities of the original constituent waves would be lost. This has some similarity with quantum phenomena: for example, there are conditions under which one can say that there is an N-particle state of a quantum field, but in which it appears to be impossible to individuate N distinguishable particles; there are states of a quantum field in which there are simply an indefinite number of particles present; and a state of a quantum field in which no particles are present in one reference frame can be a state in which many particles are present in an accelerated reference frame.

The analogy with vibrations or oscillations in a continuous medium also leads to a better understanding of what an elementary particle might be. On a classical level, Fourier analysis treats each wave as an element in a vector space. By selecting a basis for a vector space, one can decompose any element as a linear combination of basis vectors; select a different basis, and one can decompose the same element as a different linear combination. In Fourier analysis, plane waves are selected as the preferential basis vectors, and in a sense, these are the elementary waves. As Roberto Torretti points out, telecommunication companies use Fourier analysis to “literally superpose the electromagnetic renderings of many simultaneous long-distance messages in a single wave train that is echoed by satellite and then automatically analyzed at the destination exchange into its several components, each one of which is transmitted over a separate private telephone line…the signal could also be split into other, meaningless components if the analysis were not guided by human interests and aims.” (The Philosophy of Physics, p393).
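The superposition-and-analysis procedure Torretti describes can be illustrated numerically: superpose two sinusoidal ‘messages’ into a single signal, then recover each by projection onto the plane-wave (Fourier) basis. The frequencies and amplitudes below are arbitrary illustrations, not an engineering specification.

```python
import numpy as np

# Superpose two 'messages' (sine waves at different frequencies) into a
# single signal, then analyse the signal into its plane-wave components --
# the sense in which plane waves serve as the 'elementary' waves.
n = 1024
t = np.arange(n) / n
signal = 3.0 * np.sin(2 * np.pi * 5 * t) + 1.5 * np.sin(2 * np.pi * 12 * t)

# Project onto the Fourier basis; rescale to recover component amplitudes.
spectrum = np.fft.rfft(signal)
amplitudes = 2 * np.abs(spectrum) / n

# The two messages separate cleanly at frequency bins 5 and 12:
print(round(amplitudes[5], 3), round(amplitudes[12], 3))  # -> 3.0 1.5
```

Had a different basis been chosen, the same signal would have decomposed into different, and (for the purposes of telephony) meaningless, components; the ‘elementarity’ of the plane waves lies in the choice of basis, not in the signal itself.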

This clearly undermines the Democritean mereological concept of elementarity, in which a composite entity has a unique decomposition into a set of indivisible entities.

It is also well-known amongst physicists that the classical description of the oscillations in a continuous medium can be quantized. When the vibrations in a crystalline solid are so quantized, one obtains elementary modes of excitation called phonons. (These are also considered to be the elementary modes of the sound waves in a solid). Similarly, when the oscillations of the electrons in a plasma are subjected to a quantum description, the elementary modes of excitation are called plasmons. In this context, elementarity seems to correspond to nothing more than the choice of a particularly convenient decomposition of oscillatory behaviour.

Richard Feynman once described the function of high-energy particle colliders as akin to smashing watches together, and then looking at the gears, cogs and springs which fly out, in order to better understand how the watches are put together. Given the considerations above, a better analogy might be to imagine high-energy incoming waves, which collide and merge, and then split apart into different, smaller, outgoing waves. Certainly, when elementary particles and anti-particles collide, and transform into different types of outgoing particles, the use of an analogy which employs composite systems, such as watches, breaks down somewhat.

So these are the mereological questions which beset elementary particles, but even if we successfully elucidate the quantum concepts of parts and wholes, we are still left with the question, ‘What is an elementary particle?’. Attempts to answer this question have employed the notions of intrinsic properties and extrinsic properties.

An intrinsic property of an object can be defined to be a property which the object possesses independently of its relationships to other objects. In contrast, an extrinsic property can be defined to be a property which an object possesses depending upon its relationships with other objects. Thus, one might deem that a particle’s mass and charge are intrinsic properties, whilst its velocity is an extrinsic property, depending as it does upon the reference frame chosen.

Now, in terms of these concepts, classical physics offers a nice clear definition of an elementary particle: it is a system which has a unique intrinsic state. Souriau and Cushman-de Vries define an elementary system to be one in which the restricted Poincare group acts transitively upon its ‘space of motions’, (Structure of Dynamical Systems: A Symplectic View of Physics‎, p173). The space of motions here is the set of all possible histories of a system. Within special relativity, the Poincare group provides the group of all possible transformations between those reference frames which are unaccelerated, and therefore free from the influence of forces. (One also says that the Poincare group is the space-time symmetry group of special relativity). Those properties which change under the action of the Poincare group, must be extrinsic properties, and histories which are related by a Poincare transformation are the same intrinsic history. Now, if the (restricted) Poincare group acts ‘transitively’ upon the space of histories of a system, it entails that any two histories, v and w, are related by a Poincare transformation g, v=gw, and therefore there is only one intrinsic history. In classical physics, an elementary particle has a unique intrinsic state, and a unique intrinsic history.
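Written out, the transitivity condition and its consequence can be stated compactly:

```latex
% Transitive action of the restricted Poincare group G on the space of
% motions M: every pair of histories is related by some group element,
\forall\, v, w \in M \quad \exists\, g \in G : \; v = g\,w ,
% equivalently, M consists of a single orbit,
M = G\,w = \{\, g\,w : g \in G \,\} \quad \text{for any } w \in M ,
% so there is exactly one intrinsic history.
```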

However, the situation changes in a subtle fashion in quantum theory. Here, Wigner established that each type of elementary particle corresponds to an irreducible Hilbert space representation of the restricted Poincare group. (An irreducible representation is one in which there are no subspaces invariant under the group action, apart from the zero subspace and the entire Hilbert space). However, the irreducibility of a group representation does not entail the transitivity of the group action, and neglect of this fact has a tendency to lead some authors astray.

For example, the mathematician J.M.G. Fell adopted Wigner’s notion that the irreducibility of a representation is the defining characteristic of an elementary particle representation, and argued that the ensuing group action is “essentially” transitive upon the state space of such a representation. He argued from this that an elementary particle has only one intrinsic state:

“It can never undergo any intrinsic change. Any change which it appears to undergo (change in position, velocity, etc.) can be ‘cancelled out’ by an appropriate change in the frame of reference of the observer. Such a material system is called an elementary system or an elementary particle. The word ‘elementary’ reflects our preconception that, if a physical system undergoes an intrinsic change, it must be that the system is ‘composite’, and that the change consists in some rearrangement of the ‘elementary parts’,” (Fell, J.M.G., and Doran, R.S. (1988). Representations of *-Algebras, Locally Compact Groups, and Banach *-Algebraic Bundles, p31).

Unfortunately, Wigner’s irreducible representations are representations upon infinite-dimensional Hilbert spaces, whilst the Poincare group has only 10 dimensions. A group can act transitively only upon a space whose dimension is no greater than that of the group itself, hence in quantum theory the restricted Poincare group has many different ‘orbits’ upon the state spaces of elementary particles. Each orbit corresponds to a different intrinsic state of the elementary particle. Essentially, this is because a particle is represented in quantum theory by a wave-function, a field-like object, and the multitude of possible, locally-varying, intrinsic changes in such an object cannot be cancelled out by the rigid transformations of the Poincare group.

One needs to carefully distinguish the false notion that the space-time symmetry group acts transitively upon the quantum state space of an elementary system, from the correct notion that any vector in such a state space is ‘cyclic’ with respect to the action of the space-time symmetry group. If one takes the orbit of the action of the space-time symmetry group upon an arbitrary vector, and if one then takes the set of all superpositions of the elements in that orbit (technically, if one takes the topological closure of the complex linear span of all the elements in the orbit), then one obtains the entire state space. The vector chosen is said to be a cyclic vector, and the representation is said to be cyclic. In the case of an irreducible representation of the space-time symmetry group, the orbit of a single state takes one through a sufficient number of orthogonal states to span the entire infinite-dimensional state space. However, this doesn’t entail that there is only one intrinsic state! The mathematical operation of taking a linear combination of a set of states does not correspond to a change of physical reference frame.
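The distinction between cyclicity and transitivity can be written out explicitly, with U denoting the representation of the space-time symmetry group G upon the Hilbert space H:

```latex
% psi is a cyclic vector iff the closed linear span of its orbit is the
% whole state space:
\overline{\mathrm{span}}\,\{\, U(g)\,\psi : g \in G \,\} = \mathcal{H} ,
% whereas transitivity would require the orbit itself to exhaust the space:
\{\, U(g)\,\psi : g \in G \,\} = \mathcal{H} .
% Irreducibility guarantees the former for every non-zero psi, but not the
% latter: taking linear combinations is not a change of reference frame.
```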

In the Stanford Encyclopedia of Philosophy, Meinard Kuhlmann asserts that:

The physical justification for linking up irreducible representations with elementary systems is the requirement that “there must be no relativistically invariant distinction between the various states of the system” (Newton & Wigner 1949). In other words the state space of an elementary system shall have no internal structure with respect to relativistic transformations. Put more technically, the state space of an elementary system must not contain any relativistically invariant subspaces, i.e., it must be the state space of an irreducible representation of the relevant invariance group. If the state space of an elementary system had relativistically invariant subspaces then it would be appropriate to associate these subspaces with elementary systems. The requirement that a state space has to be relativistically invariant means that starting from any of its states it must be possible to get to all the other states by superposition of those states which result from relativistic transformations of the state one started with. (2006, Section 5.1.1).

It is indeed true, by definition, that under an irreducible representation of the space-time symmetry group, there can be no non-trivial subspace which is invariant under the action of the symmetry group. However, for the reasons explained above, this does not entail that there can be no “relativistically invariant distinction between the various states of the system”. There can indeed be such a distinction, defined by the different orbits of the symmetry group. Note also that Kuhlmann conflates an irreducible group representation with a cyclic representation; irreducibility is not the same thing as cyclicity.

In modern physics, then, elementarity is far from elementary.


Published in: on March 1, 2009 at 11:32 pm  Leave a Comment  

The savage Darwinian

When Bryan Appleyard isn’t moonlighting as The Priest Who Kicks Ass, he can be found writing eclectic articles for The Sunday Times. Remarkably, yours truly even receives a mention in Bryan’s latest piece, a recollection and analysis of Bryan’s personal blogging experiences, as an introduction to the 100 Best Blogs.


Published in: on February 23, 2009 at 11:00 pm  Leave a Comment  

Amanda Gefter

I’ve recently stumbled upon Amanda Gefter, an editor for the Opinion section of New Scientist. Amanda studied the philosophy of physics at the London School of Economics, and writes about cosmology, so I guess there is a certain similarity of background. Moreover, Amanda is also very interested in science and religion. A couple of months ago she wrote a timely article which drew attention to the latest tactic of the creationists, (and their apologists, some of whom, it must be said, write for British newspapers):

“They are attempting to resurrect Cartesian dualism – the idea that brain and mind are two fundamentally different kinds of things, material and immaterial – in the hope that it will make room in science both for supernatural forces and for a soul.”

Amanda also spoke to Michael Heller earlier in the year, and concluded that

Heller comes across as a contemplative, kind and brilliant man with an impressive intellectual range, flitting easily between talk of complex philosophical ideas and sophisticated mathematical physics. (I was intrigued that his current work is focused on ridding physics of the big bang singularity – despite the fact that many Catholics have latched on to the idea of the singularity as the space left for God and his creative power.)

I wonder if Amanda also gets asked “What on Earth is the philosophy of physics?”


Published in: on February 22, 2009 at 12:00 pm  Leave a Comment