The Fabric of Reality
This is possible because understanding does not depend on knowing a lot of facts as such, but on having the right concepts, explanations and theories.
One comparatively simple and comprehensible theory can cover an infinity of indigestible facts.
Being able to predict things or to describe them, however accurately, is not at all the same thing as understanding them.
Yet even though the formula summarizes infinitely more facts than the archives do, knowing it does not amount to understanding planetary motions. Facts cannot be understood just by being summarized in a formula, any more than by being listed on paper or committed to memory. They can be understood only by being explained.
Scientific theories explain the objects and phenomena of our experience in terms of an underlying reality which we do not experience directly. But the ability of a theory to explain what we experience is not its most valuable attribute. Its most valuable attribute is that it explains the fabric of reality itself.
Prediction–even perfect, universal prediction–is simply no substitute for explanation.
Similarly, in scientific research the oracle would not provide us with any new theory. Not until we already had a theory, and had thought of an experiment that would test it, could we possibly ask the oracle what would happen if the theory were subjected to that test.
Thus, the oracle would not be replacing theories at all: it would be replacing experiments.
To say that prediction is the purpose of a scientific theory is to confuse means with ends. It is like saying that the purpose of a spaceship is to burn fuel. In fact, burning fuel is only one of many things a spaceship has to do to accomplish its real purpose, which is to transport its payload from one point in space to another. Passing experimental tests is only one of many things a theory has to do to achieve the real purpose of science, which is to explain the world.
A new theory may be a simplification of an existing one, as when the Arabic (decimal) notation for numbers superseded Roman numerals. (The theory here is an implicit one. Each notation renders certain operations, statements and thoughts about numbers simpler than others, and hence it embodies a theory about which relationships between numbers are useful or interesting.)
What is an explanation, as opposed to a mere statement of fact such as a correct description or prediction?
This illustrates another attribute of understanding. It is possible to understand something without knowing that one understands it, or even without having specifically heard of it.
We understand the fabric of reality only by understanding theories that explain it. And since they explain more than we are immediately aware of, we can understand more than we are immediately aware that we understand.
So, even though our stock of known theories is indeed snowballing, just as our stock of recorded facts is, that still does not necessarily make the whole structure harder to understand than it used to be. For while our specific theories are becoming more numerous and more detailed, they are continually being ‘demoted’ as the understanding they contain is taken over by deep, general theories.
That is, new ideas often do more than just supersede, simplify or unify existing ones. They also extend human understanding into areas that were previously not understood at all–or whose very existence was not guessed at.
Thus the issue of whether it is becoming harder or easier to understand everything that is understood depends on the overall balance between these two opposing effects of the growth of knowledge: the increasing breadth of our theories, and their increasing depth.
Breadth makes it harder; depth makes it easier.
The reason why higher-level subjects can be studied at all is that under special circumstances the stupendously complex behaviour of vast numbers of particles resolves itself into a measure of simplicity and comprehensibility. This is called emergence: high-level simplicity ‘emerges’ from low-level complexity.
The purpose of high-level sciences is to enable us to understand emergent phenomena, of which the most important are, as we shall see, life, thought and computation.
The fabric of reality does not consist only of reductionist ingredients like space, time and subatomic particles, but also of life, thought, computation and the other things to which those explanations refer. What makes a theory more fundamental, and less derivative, is not its closeness to the supposed predictive base of physics, but its closeness to our deepest explanatory theories.
All explanations will be understood against the backdrop of universality, and every new idea will automatically tend to illuminate not just a particular subject, but, to varying degrees, all subjects.
Scientific knowledge, like all human knowledge, consists primarily of explanations.
As new theories supersede old ones, our knowledge is becoming both broader (as new subjects are created) and deeper (as our fundamental theories explain more, and become more general). Depth is winning. Thus we are not heading away from a state in which one person could understand everything that was understood, but towards it.
Borrowing the terminology of goldsmiths, one might say that light is not infinitely ‘malleable’. Like gold, a small amount of light can be evenly spread over a very large area, but eventually if one tries to spread it out further it gets lumpy.
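To put a rough number on that lumpiness (a back-of-the-envelope illustration, not a figure from the text): a torch emitting one watt of yellow light, of wavelength about 580 nanometres, pours out photons so small and so numerous that the graininess is imperceptible:

```latex
E_{\text{photon}} = \frac{hc}{\lambda}
  \approx \frac{(6.63\times10^{-34}\,\mathrm{J\,s})(3.00\times10^{8}\,\mathrm{m/s})}{580\times10^{-9}\,\mathrm{m}}
  \approx 3.4\times10^{-19}\,\mathrm{J},
\qquad
N \approx \frac{1\,\mathrm{W}}{3.4\times10^{-19}\,\mathrm{J}} \approx 3\times10^{18}\ \text{photons per second}.
```

Only when light is dimmed by many orders of magnitude do the individual lumps become detectable.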
There are no measurable continuous quantities in physics.
From everyday experience we know that it does, for we cannot see round corners. But careful experiments show that light does not always travel in straight lines. Under some circumstances it bends.
Puzzling though it may be that light rays should bend when passing through small holes, it is not, I think, fundamentally disturbing. In any case, what matters for our present purposes is that it does bend. This means that shadows in general need not look like silhouettes of the objects that cast them.
Why laser light and not torchlight? Only because the precise shape of a shadow also depends on the colour of the light in which it is cast; white light, as produced by a torch, contains a mixture of all visible colours, so it can cast shadows with multicoloured fringes.
Interference is not a special property of photons alone. Quantum theory predicts, and experiment confirms, that it occurs for every sort of particle. So there must be hosts of shadow neutrons accompanying every tangible neutron, hosts of shadow electrons accompanying every electron, and so on. Each of these shadow particles is detectable only indirectly, through its interference with the motion of its tangible counterpart.
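The textbook form of this rule (standard quantum theory, not a quotation from the text) is that the amplitudes for the two paths are added before squaring to obtain a probability:

```latex
P = \left|\psi_1 + \psi_2\right|^2
  = |\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}\!\left(\psi_1^{*}\psi_2\right).
```

The cross term can be positive or negative, brightening or darkening the pattern relative to the classical sum of the two single-path probabilities; it is this term that the shadow-particle account explains.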
A new word, multiverse, has been coined to denote physical reality as a whole.
The heart of the argument is that single-particle interference phenomena unequivocally rule out the possibility that the tangible universe around us is all that exists.
Two particular implications of those laws are relevant. First, every subatomic particle has counterparts in other universes, and is interfered with only by those counterparts. It is not directly affected by any other particles in those universes.
But what is a rare event in any one universe is a common event in the multiverse as a whole.
Tangibility is relative to a given observer.
Thus observations of ever smaller physical effects have been forcing ever greater changes in our world-view.
Thus the physical evidence that directly sways us, and causes us to adopt one theory or world-view rather than another, is less than millimetric: it is measured in thousandths of a millimetre.
There is no getting away from the fact that we human beings are small creatures with only a few inaccurate, incomplete channels through which we receive all information from outside ourselves.
We interpret this information as evidence of a large and complex external universe (or multiverse). But when we are weighing up this evidence, we are literally contemplating nothing more than patterns of weak electric current trickling through our own brains.
Bertrand Russell’s chicken noticed that the farmer came every day to feed it. It predicted that the farmer would continue to bring food every day. Inductivists think that the chicken had ‘extrapolated’ its observations into a theory, and that each feeding time added justification to that theory. Then one day the farmer came and wrung the chicken’s neck. The disappointment experienced by Russell’s chicken has also been experienced by trillions of other chickens. This inductively justifies the conclusion that induction cannot justify any conclusions!
In fact, it is impossible to extrapolate observations unless one has already placed them within an explanatory framework.
The fact that the same observational evidence can be ‘extrapolated’ to give two diametrically opposite predictions according to which explanation one adopts, and cannot justify either of them, is not some accidental limitation of the farmyard environment:
No scientific reasoning, and indeed no successful reasoning of any kind, has ever fitted the inductivist description.
What we need is an explanation-centred theory of knowledge: a theory of how explanations come into being and how they are justified.
Observational evidence about meteorology was far more readily available than in astronomy, but no one paid much attention to it, and no one induced any theories from it about cold fronts or anticyclones.
Common sense suggests that clouds move with the wind. When they drift in other directions, it is reasonable to surmise that the wind can be different at different altitudes, and is rather unpredictable, and so it is easy to conclude that there is no more to be explained. Some people, no doubt, took this view about planets, and assumed that they were just glowing objects on the celestial sphere, blown about by high-altitude winds, or perhaps moved by angels, and that there was no more to be explained. But others were not satisfied with that, and guessed that there were deeper explanations behind the wandering of planets. So they searched for such explanations, and found them.
Thus the solution, however good, is not the end of the story: it is a starting-point for the next problem-solving process (stage 5).
Whereas an incorrect prediction automatically renders the underlying explanation unsatisfactory, a correct prediction says nothing at all about the underlying explanation.
So all the theories are being subjected to variation and selection, according to criteria which are themselves subject to variation and selection. The whole process resembles biological evolution.
The new world-view that may be implicit in a theory that solves a problem, and the distinctive features of a new species that takes over a niche, are emergent properties of the problem or niche.
Obtaining solutions is inherently complex. There is no simple way of discovering the true nature of planets, given (say) a critique of the celestial-sphere theory and some additional observations, just as there is no simple way of designing the DNA of a koala bear, given the properties of eucalyptus trees. Evolution, or trial and error–especially the focused, purposeful form of trial and error called scientific discovery–are the only ways.
Popper has called his theory that knowledge can grow only by conjecture and refutation, in the manner of Figure 3.3, an evolutionary epistemology.
Both in science and in biological evolution, evolutionary success depends on the creation and survival of objective knowledge, which in biology is called adaptation.
When we succeed in solving a problem, scientific or otherwise, we end up with a set of theories which, though they are not problem-free, we find preferable to the theories we started with.
The problem in genuine science is always to understand some aspect of the fabric of reality, by finding explanations that are as broad and deep, and as true and specific, as possible.
Thus we acquire ever more knowledge of reality by solving problems and finding better explanations.
As Galileo put it, ‘the Book of Nature is written in mathematical symbols’.
Their world-view was false, but it was not illogical.
Problem-solving, after all, is a process that takes place entirely within human minds.
A prediction, or any assertion, that cannot be defended might still be true, but an explanation that cannot be defended is not an explanation.
The irreducible complexity of that story makes it philosophically untenable to deny that the objects exist.
Given a shred of a theory, or rather, shreds of several rival theories, the evidence is available out there to enable us to distinguish between them.
Anyone can search for it, find it and improve upon it if they take the trouble. They do not need authorization, or initiation, or holy texts. They need only be looking in the right way–with fertile problems and promising theories in mind. This open accessibility, not only of evidence but of the whole mechanism of knowledge acquisition, is a key attribute of Galileo’s conception of reality.
There are mathematical symbols in physical reality. The fact that it is we who put them there does not make them any less physical. In those symbols–in our planetariums, books, films and computer memories, and in our brains–there are images of physical reality at large, images not just of the appearance of objects, but of the structure of reality. There are laws and explanations, reductive and emergent. There are descriptions and explanations of the Big Bang and of subnuclear particles and processes; there are mathematical abstractions; fiction; art; morality; shadow photons; parallel universes. To the extent that these symbols, images and theories are true–that is, they resemble in appropriate respects the concrete or abstract things they refer to–their existence gives reality a new sort of self-similarity, the self-similarity we call knowledge.
The term ‘virtual reality’ refers to any situation in which a person is artificially given the experience of being in a specified environment. For example, a flight simulator–a machine that gives pilots the experience of flying an aircraft without their having to leave the ground–is a type of virtual-reality generator.
Since we experience our environment through our senses, any virtual-reality generator must be able to manipulate our senses, overriding their normal functioning so that we can experience the specified environment instead of our actual one.
Today we can do that much more accurately, using movies and sound recordings, though still not accurately enough for the simulated environment to be mistaken for the original.
One can conceive of a technology beyond virtual reality, which could also induce specified internal experiences. A few internal experiences, such as moods induced by certain drugs, can already be artificially rendered, and no doubt in future it will be possible to extend that repertoire. But a generator of specifiable internal experiences would in general have to be able to override the normal functioning of the user’s mind as well as the senses. In other words, it would be replacing the user by a different person.
Let us begin with the creation of artificial sense-impressions: image generation. What constraints do the laws of physics impose on the ability of image generators to create artificial images, to render detail and to cover their respective sensory ranges?
To distinguish between a real aircraft and a simulation, a pilot would only have to fly it in a free-fall trajectory and see whether weightlessness occurred or not.
Thus the laws of physics impose no limit on the range and accuracy of image generators. There is no possible sensation, or sequence of sensations, that human beings are capable of experiencing that could not in principle be rendered artificially.
From the foregoing discussion it seems that any virtual-reality generator must have at least three principal components: a set of sensors (which may be nerve-impulse detectors) to detect what the user is doing, a set of image generators (which may be nerve-stimulation devices), and a computer in control.
The connecting cable contributes nothing to the user’s perceived environment, being from the user’s point of view ‘transparent’, just as we naturally do not perceive our own nerves as being part of our environment.
Thus virtual-reality generators of the future would be better described as having only one principal component, a computer, together with some trivial peripheral devices.
What environments we shall be able to render will no longer depend on what sensors and image generators we can build, but on what environments we can specify. ‘Specifying’ an environment will mean supplying a program for the computer, which is the heart of the virtual-reality generator.
But in virtual reality there are usually no particular images intended: what is intended is a certain environment for the user to experience. Specifying a virtual-reality environment does not mean specifying what the user will experience, but rather specifying how the environment would respond to each of the user’s possible actions.
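A minimal sketch of that idea in code (all names, and the toy ‘corridor’ environment, are hypothetical illustrations, not from the text): the specification is a program that answers every possible user action, including actions the user never in fact performs.

```python
# A toy 'specified environment': not a fixed stream of images, but a program
# mapping each possible user action, in context, to a sensory response.

class RenderedEnvironment:
    """A hypothetical one-dimensional corridor: state plus a response rule."""

    def __init__(self):
        self.position = 0  # the user's location in the rendered world

    def respond(self, action: str) -> str:
        """Return the sense-impression this environment gives for an action."""
        if action == "step_forward":
            self.position += 1
        elif action == "step_back":
            self.position -= 1
        return f"you see marker {self.position} on the corridor wall"

env = RenderedEnvironment()
for act in ["step_forward", "step_forward", "step_back"]:
    print(act, "->", env.respond(act))
```

Note that the program defines a response for every action the user might take; most of those responses will never be computed, yet they are part of what the rendered environment is.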
A rendering that works for humans will not do so for dolphins or extraterrestrials. To render a given environment for a user with given types of sense organs, a virtual-reality generator must be physically adapted to such sense organs and its computer must be programmed with their characteristics.
A short-sighted view of science is that it is all about predicting our sense-impressions. The correct view is that, while sense-impressions always play a role, what science is about is understanding the whole of reality, of which only an infinitesimal proportion is ever experienced.
So accurately rendering a physically possible environment depends on understanding its physics. The converse is also true: discovering the physics of an environment depends on creating a virtual-reality rendering of it.
science and the virtual-reality rendering of physically possible environments are two terms denoting the same activity.
Imagination is a straightforward form of virtual reality. What may not be so obvious is that our ‘direct’ experience of the world through our senses is virtual reality too.
What we experience directly is a virtual-reality rendering, conveniently generated for us by our unconscious minds from sensory data plus complex inborn and acquired theories (i.e. programs) about how to interpret them.
So it is not just science–reasoning about the physical world–that involves virtual reality. All reasoning, all thinking and all external experience are forms of virtual reality.
Whatever improvements may be made in the distant future, the repertoire of the entire technology of virtual reality will never grow beyond a certain fixed set of environments.
That is just the general feature of virtual reality that I have already discussed, namely that experience cannot prove that one is in a given environment, be it the Centre Court at Wimbledon or an environment of the Cantgotu type.
Mathematical calculation is a physical process (in particular, as I have explained, it is a virtual-reality rendering process), so it is impossible to determine by mathematical reasoning what can or cannot be calculated mathematically.
The scope of virtual reality, and its wider implications for the comprehensibility of nature and other aspects of the fabric of reality, depends on whether the relevant computers are physically realizable.
In particular, any genuine universal computer must itself be physically realizable. This leads to a stronger version of the Turing principle.
As I have said, all our external experiences are of virtual reality, generated by our own brains. And since our concepts and theories (whether inborn or learned) are never perfect, all our renderings are indeed inaccurate. That is to say, they give us the experience of an environment that is significantly different from the environment that we are really in.
So even if you had lived within the rendered environment all your life, and did not have your own memories of the outside world to account for as well, your knowledge would not be confined to that environment. You would know that, even though the universe seemed to have a certain layout and obey certain laws, there must be a wider universe outside it, obeying different laws of physics.
I also explained (following Popper) how science does make progress: by conjecturing new explanations and then choosing between the best ones by experiment.
It is merely one of the effects of life, and the basis of life is molecular. It is the fact that there exist molecules which cause certain environments to make copies of those molecules. Such molecules are called replicators.
Not all replicators are biological, and not all replicators are molecules. For example, a self-copying computer program (such as a computer virus) is a replicator.
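A benign cousin of the computer virus makes the point concrete: a quine, a program that causes its environment (here, a Python interpreter with an output channel) to produce an exact copy of its own source. This two-line example is a standard construction, not one from the text.

```python
# A quine: run it, and it prints an exact copy of its own source code.
# Its 'environment' (the interpreter) is thereby caused to reproduce it.
s = 's = %r\nprint(s %% s)'
print(s % s)
```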
A good joke is another replicator, for it causes its listeners to retell it to further listeners. Richard Dawkins has coined the term meme (rhyming with ‘cream’) for replicators that are human ideas, such as jokes.
The gene for manufacturing insulin is present in almost every cell of the body, but it is switched on only in certain specialized cells in the pancreas, and then only when it is needed.
At the molecular level, this is all that any gene can program its cellular computer to do: manufacture a certain chemical. But genes succeed in being replicators because these low-level chemical programs add up, through layer upon layer of complex control and feedback, to sophisticated high-level instructions. Jointly, the insulin gene and the genes involved in switching it on and off amount to a complete program for the regulation of sugar in the bloodstream.
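A toy sketch of that layering (all numbers and names invented for illustration): the low-level instruction ‘manufacture insulin’, plus a regulatory switch, add up to a high-level negative-feedback program for blood sugar.

```python
# A hypothetical caricature of the insulin program: a negative feedback loop
# that holds blood sugar near a set point. The numbers are invented.

SET_POINT = 5.0   # target glucose level (illustrative units)

def insulin_gene_switched_on(glucose: float) -> bool:
    # The regulatory genes switch insulin production on only when needed.
    return glucose > SET_POINT

def step(glucose: float, meal: float) -> float:
    glucose += meal                                  # sugar entering the blood
    if insulin_gene_switched_on(glucose):
        glucose -= 0.5 * (glucose - SET_POINT)       # insulin drives sugar into cells
    return glucose

glucose = 5.0
for meal in [3.0, 0.0, 0.0, 2.0, 0.0]:
    glucose = step(glucose, meal)
    print(f"glucose = {glucose:.2f}")
```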
I shall also use the term niche for the set of all possible environments which a given replicator would cause to make copies of it.
Not everything that can be copied is a replicator. A replicator causes its environment to copy it: that is, it contributes causally to its own copying.
The accuracy of a virtual-reality rendering depends not only on the responses the machine actually makes to what the user actually does, but also on responses it does not, in the event, make to things the user does not in fact do. This similarity between living processes and virtual reality is no coincidence, as I shall shortly explain.
So an organism is the immediate environment which copies the real replicators: the organism’s genes.
The property of being a replicator is highly contextual–that is, it depends on intricate details of the replicator’s environment: an entity is a replicator in one environment and not in another.
So living processes and virtual-reality renderings are, superficial differences aside, the same sort of process.
Genes embody knowledge about their niches.
It is the survival of knowledge, and not necessarily of the gene or any other physical object, that is the common factor between replicating and non-replicating genes.
So, strictly speaking, it is a piece of knowledge rather than a physical object that is or is not adapted to a certain niche.
The point is that although all known life is based on replicators, what the phenomenon of life is really about is knowledge.
We can give a definition of adaptation directly in terms of knowledge: an entity is adapted to its niche if it embodies knowledge that causes the niche to keep that knowledge in existence.
Thus one cannot predict the future of the Sun without taking a position on the future of life on Earth, and in particular on the future of knowledge.
Life achieves its effects not by being larger, more massive or more energetic than other physical processes, but by being more knowledgeable.
Take a DNA segment, for instance. In some universes there are no DNA molecules at all. Some universes containing DNA are so dissimilar to ours that there is no way of identifying which DNA segment in the other universe corresponds to the one we are considering in this universe.
So knowledge is a fundamental physical quantity after all, and the phenomenon of life is only slightly less so.
Genes are regular across many nearby universes, while the non-gene, junk-DNA segments are irregular. As for the degree of adaptation of a gene, this is almost as easy to estimate: the better-adapted genes will have the same structure over a wider range of universes–they will have bigger ‘crystals’.
Quantum computation is therefore nothing less than a distinctively new way of harnessing nature.
The earliest inventions for harnessing nature were tools powered by human muscles. They revolutionized our ancestors’ situation, but they suffered from the limitation that they required continuous human attention and effort during every moment of their use. Subsequent technology overcame that limitation: human beings managed to domesticate certain animals and plants, turning the biological adaptations in those organisms to human ends. Thus the crops could grow, and the guard dogs could watch, even while their owners slept. Another new type of technology began when human beings went beyond merely exploiting existing adaptations (and existing non-biological phenomena such as fire), and created completely new adaptations in the world, in the form of pottery, bricks, wheels, metal artefacts and machines. To do this they had to think about, and understand, the natural laws governing the world–including, as I have explained, not only its superficial aspects but the underlying fabric of reality. There followed thousands of years of progress in this type of technology–harnessing some of the materials, forces and energies of physics. In the twentieth century information was added to this list when the invention of computers allowed complex information processing to be performed outside human brains.
Quantum computation, which is now in its early infancy, is a distinct further step in this progression. It will be the first technology that allows useful tasks to be performed in collaboration between parallel universes. A quantum computer would be capable of distributing components of a complex task among vast numbers of parallel universes, and then sharing the results.
Computational universality is the fact that a single physically possible computer can, given enough time and memory, perform any computation that any other physically possible computer can perform.
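A minimal illustration of this universality (a standard construction; the example machine is hypothetical): one fixed simulator which, given any machine’s rule table as data, performs that machine’s computation.

```python
# One fixed program that runs ANY Turing machine supplied as a rule table.

def run_turing_machine(rules, tape, state="start", steps=100):
    """rules: (state, symbol) -> (new_state, new_symbol, move in {-1, 0, +1})."""
    tape = dict(enumerate(tape))
    pos = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(pos, " ")
        state, tape[pos], move = rules[(state, symbol)]
        pos += move
    return "".join(tape[i] for i in sorted(tape)).strip()

# Example machine: append one '1' to a unary-encoded number.
increment = {
    ("start", "1"): ("start", "1", +1),   # scan right over the 1s
    ("start", " "): ("halt",  "1",  0),   # write a 1 at the end, then halt
}
print(run_turing_machine(increment, "111"))  # -> '1111'
```

The simulator itself never changes; only the rule table (the data) does, which is the essence of the universality claim.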
So we can infer that the laws of physics, in addition to mandating their own comprehensibility through the Turing principle, ensure that the corresponding evolutionary processes, such as life and thought, neither take too much time nor require too many resources of any other kind to occur in reality.
If the object in question is the Earth’s atmosphere, then a hurricane may have occurred in 30 per cent of universes, say, and not in the remaining 70 per cent. Subjectively we perceive this as a single, unpredictable or ‘random’ outcome, though from the multiverse point of view all the outcomes have actually happened. This parallel-universe multiplicity is the real reason for the unpredictability of the weather.
Unpredictability has nothing to do with the available computational resources. Classical systems are unpredictable (or would be, if they existed) because of their sensitivity to initial conditions. Quantum systems do not have that sensitivity, but are unpredictable because they behave differently in different universes, and so appear random in most universes. In neither case will any amount of computation lessen the unpredictability.
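The classical sensitivity can be seen in a few lines of code, using the logistic map, a standard chaotic system (not an example from the text): two trajectories a billionth apart soon disagree completely.

```python
# Sensitivity to initial conditions in the logistic map x -> 4x(1-x):
# the gap between two nearby trajectories roughly doubles at every step.

x, y = 0.400000000, 0.400000001   # initial conditions differing by 1e-9
for step in range(1, 41):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}  y = {y:.6f}")
```

By around step 30 the two trajectories bear no resemblance to each other; no finite improvement in measuring the initial condition postpones the divergence for long.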
Intractability, by contrast, is a computational-resource issue. It refers to a situation where we could readily make the prediction if only we could perform the required computation, but we cannot do so because the resources required are impractically large.
Intractability is in principle a greater impediment to universality than unpredictability could ever be.
It is only the strong quantum interference between the various paths taken by charged particles in parallel universes that prevents such catastrophes and makes solid matter possible.
Thus I can choose two 125-digit prime numbers and keep them secret, but multiply them together and make their 250-digit product public. Anyone can send me a message using that number as the key, but only I can read the messages because only I know the secret factors.
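Here is the scheme in miniature (toy primes instead of 125-digit ones; the numbers are the standard textbook RSA example, for illustration only):

```python
# Toy RSA: the public key is a product of two secret primes; decryption
# requires knowing the factors. Requires Python 3.8+ for pow(e, -1, phi).

p, q = 61, 53                  # the two secret primes
n = p * q                      # 3233: made public
phi = (p - 1) * (q - 1)        # 3120: computable only if you know p and q
e = 17                         # public encryption exponent, coprime to phi
d = pow(e, -1, phi)            # 2753: the private decryption exponent

message = 65
ciphertext = pow(message, e, n)    # anyone can encrypt with (n, e)
decrypted = pow(ciphertext, d, n)  # only the holder of d can decrypt
print(ciphertext, decrypted)       # -> 2790 65
```

With 250-digit moduli the same arithmetic is easy, but recovering p and q from n is intractable for any known classical computer, which is the entire basis of the scheme’s security.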
Quantum computation is a qualitatively new way of harnessing nature.
Where did Euclid obtain the knowledge of geometry which he expressed in his famous axioms, when no genuine circles, points or straight lines were available to him? Where does the certainty of a mathematical proof come from, if no one can perceive the abstract entities that the proof refers to? Plato’s answer was that we do not obtain our knowledge of such things from this world of shadow and illusion. Instead, we obtain it directly from the real world of Forms itself.
David Hilbert, the great German mathematician who provided much of the mathematical infrastructure of both the general theory of relativity and quantum theory, remarked that ‘the literature of mathematics is glutted with inanities and absurdities which have had their source in the infinite’.
Gödel proved first that any set of rules of inference that is capable of correctly validating even the proofs of ordinary arithmetic could never validate a proof of its own consistency. Therefore there is no hope of finding the provably consistent set of rules that Hilbert envisaged.
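Stated in the standard modern form (a textbook formulation, not a quotation from the text):

```latex
\text{If } T \text{ is a consistent, recursively axiomatized theory containing arithmetic, then } T \nvdash \operatorname{Con}(T),
```

where $\operatorname{Con}(T)$ is the arithmetized sentence asserting the consistency of $T$ itself.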
So explanation does, after all, play the same paramount role in pure mathematics as it does in science. Explaining and understanding the world–the physical world and the world of mathematical abstractions–is in both cases the object of the exercise. Proof and observation are merely means by which we check our explanations.
Nothing can move from one moment to another. To exist at all at a particular moment means to exist there for ever. Our consciousness exists at all our (waking) moments.
We do not experience time flowing, or passing. What we experience are differences between our present perceptions and our present memories of past perceptions.
Philosophically, the most important cause-and-effect processes are our conscious decisions and the consequent actions.
In Newtonian physics, for instance, if at any moment one knows the positions and velocities of all the masses in an isolated system, such as the solar system, one can in principle calculate (predict) where those masses will be at all times thereafter.
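A minimal sketch of such a calculation (toy units, a single planet around a fixed sun; an illustration, not a real ephemeris): given the positions and velocities now, one steps Newton’s laws forward.

```python
# Newtonian prediction in miniature: integrate a = -GM r / |r|^3 forward
# in time from known initial position and velocity (units chosen so GM = 1).

GM = 1.0
x, y = 1.0, 0.0          # initial position
vx, vy = 0.0, 1.0        # initial velocity (gives a circular orbit)
dt = 0.001

for _ in range(int(2 * 3.14159 / dt)):      # roughly one orbital period
    r3 = (x * x + y * y) ** 1.5
    vx -= GM * x / r3 * dt                  # gravitational acceleration
    vy -= GM * y / r3 * dt
    x += vx * dt
    y += vy * dt

print(f"after one period: x = {x:.3f}, y = {y:.3f}")   # near the start (1, 0)
```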
two conditions must hold for an entity to be a cause of its own replication: first, that the entity is in fact replicated; and second, that most variants of it, in the same situation, would not be replicated. This definition embodies the idea that a cause is something that makes a difference to its effects, and it also works for causation in general.
For X to be a cause of Y, two conditions must hold: first, that X and Y both happen; and second, that Y would not have happened if X had been otherwise. For example, sunlight was a cause of life on Earth because both sunlight and life actually occurred on Earth, and because life would not have evolved in the absence of sunlight.
But in the multiverse, universes are present in definite proportions, so it is meaningful to say that certain types of event are ‘very rare’ or ‘very common’ in the multiverse, and that some events follow others ‘in most cases’.
For whereas the initial, ‘spinning’ state of the coin may be experienced by an observer, the final combined ‘heads’ and ‘tails’ state does not correspond to any possible experience of the observer.
In everyday experience, however, causes always precede their effects, and this is because–at least in our vicinity in the multiverse–the number of distinct types of snapshot tends to increase rapidly with time, and hardly ever decreases. This property is related to the second law of thermodynamics, which states that ordered energy, such as chemical or gravitational potential energy, may be converted entirely into disordered energy, i.e. heat, but never vice versa.
Heat is microscopically random motion.
We exist in multiple versions, in universes called ‘moments’. Each version of us is not directly aware of the others, but has evidence of their existence because physical laws link the contents of different universes. It is tempting to suppose that the moment of which we are aware is the only real one, or is at least a little more real than the others. But that is just solipsism.
All moments are physically real. The whole of the multiverse is physically real. Nothing else is.
We should be able to cross-check this evidence, and determine the date more precisely, by observing some natural long-term ‘calendar’ such as the shapes of the constellations in the night sky or the relative proportions of various radioactive elements in rocks.
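The arithmetic of such a radioactive calendar is the standard decay law (a textbook formula, with illustrative numbers):

```latex
\frac{N}{N_0} = 2^{-t/T_{1/2}}
\quad\Longrightarrow\quad
t = T_{1/2}\,\log_2\!\frac{N_0}{N}.
```

If a quarter of the original parent isotope remains in a rock, then $t = T_{1/2}\log_2 4$: exactly two half-lives have elapsed.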
Einstein’s special theory of relativity says that, in general, an observer who accelerates or decelerates experiences less time than an observer who is at rest or in uniform motion. For example, an astronaut who went on a round-trip involving acceleration to speeds close to that of light would experience much less time than an observer who remained on Earth.
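The standard formula makes the effect quantitative: a clock moving at speed $v$ through an inertial frame accumulates proper time

```latex
\tau = t\sqrt{1 - v^{2}/c^{2}},
```

so at $v = 0.8c$ a journey lasting ten years as measured on Earth would age the astronaut only six years (an illustrative number, not one from the text).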
A more accurate way of thinking about the inter-universe ‘trade’ in knowledge is to think of all our knowledge-generating processes, our whole culture and civilization, and all the thought processes in the minds of every individual, and indeed the entire evolving biosphere as well, as being a gigantic computation.
Everyone takes it for granted that the truth is not obvious, and that the obvious need not be true;
that ideas are to be accepted or rejected according to their content and not their origin; that the greatest minds can easily make mistakes; and that the most trivial-seeming objection may be the key to a great new discovery.
But all evolution promotes the ‘good’ (i.e. the replication) of the best-replicating genes–hence the term ‘selfish gene’.
‘What is life?’ This problem was solved by Darwin. The essence of the solution was the idea that the intricate and apparently purposeful design that is apparent in living organisms is not built into reality ab initio, but is an emergent consequence of the operation of the laws of physics.
Knowledge can be understood as complexity that extends across large numbers of universes.
Thus the problem with taking any of these fundamental theories individually as the basis of a world-view is that they are each, in an extended sense, reductionist. That is, they have a monolithic explanatory structure in which everything follows from a few extremely deep ideas. But that leaves aspects of the subject entirely unexplained. In contrast, the explanatory structure that they jointly provide for the fabric of reality is not hierarchical: each of the four strands contains principles which are ‘emergent’ from the perspective of the other three, but nevertheless help to explain them.
Again, the value of a design feature is understandable only in the context of a given purpose for the designed object. But we may find that it is possible to improve designs by incorporating a good aesthetic criterion into the design criteria. Such aesthetic criteria would be incalculable from the design criteria; one of their uses would be to improve the design criteria themselves. The relationship would again be one of explanatory emergence. And artistic value, or beauty, would be a sort of emergent design.
The ends of the universe are, as Popper said, for us to choose. Indeed, to a large extent the content of future intelligent thoughts is what will happen, for in the end the whole of space and its contents will be the computer.
This justifies the use of epistemological terminology such as ‘problem’, ‘solution’, ‘reasoning’ and ‘knowledge’ in ethics and aesthetics. Thus, if ethics and aesthetics are at all compatible with the world-view advocated in this book, beauty and rightness must be as objective as scientific or mathematical truth.
Admittedly, in the limit (which no one experiences), at the instant when the universe ends, everything that is comprehensible may have been understood. But at every finite point our descendants’ knowledge will be riddled with errors. Their knowledge will be greater, deeper and broader than we can imagine, but they will make mistakes on a correspondingly titanic scale too.