1. The Reach of Explanations

How do we know? One of the most remarkable things about science is the contrast between the enormous reach and power of our best theories and the precarious, local means by which we create them. No human has ever been to the surface of a star, let alone visited the core where the transmutation happens and the energy is produced. Yet we see those cold dots in our sky and know that we are looking at the white-hot surfaces of distant nuclear furnaces.
Scientific theories are explanations: assertions about what is out there and how it behaves. Where do these theories come from? For most of the history of science, it was mistakenly believed that we ‘derive’ them from the evidence of our senses–a philosophical doctrine known as empiricism.
But, in reality, scientific theories are not ‘derived’ from anything. We do not read them in nature, nor does nature write them into us. They are guesses–bold conjectures. Human minds create them by rearranging, combining, altering and adding to existing ideas with the intention of improving upon them.
Thus, despite being quite wrong about where scientific knowledge comes from, empiricism was a great step forward in both the philosophy and the history of science.
For millennia people dreamed about flying, but they experienced only falling. Then they discovered good explanatory theories about flying, and then they flew–in that order. Before 1945, no human being had ever observed a nuclear-fission (atomic-bomb) explosion; there may never have been one in the history of the universe. Yet the first such explosion, and the conditions under which it would occur, had been accurately predicted–but not from the assumption that the future would be like the past. Even sunrise–that favourite example of inductivists–is not always observed every twenty-four hours: when viewed from orbit it may happen every ninety minutes, or not at all.
As the ancient philosopher Heraclitus remarked, ‘No man ever steps in the same river twice, for it is not the same river and he is not the same man.’
Similarly, theory tells us that if we see sunrise reflected in a mirror, or in a video or a virtual-reality game, that does not count as seeing it twice. Thus the very idea that an experience has been repeated is not itself a sensory experience, but a theory.
Discovering a new explanation is inherently an act of creativity.
So it is fallibilism, not mere rejection of authority, that is essential for the initiation of unlimited knowledge growth–the beginning of infinity.
Conjecture is the real source of all our theories.
The brain attaches those interpretations–‘head’, ‘stomach’ and ‘up there’–to events that are in fact within the brain itself.
Witness the celestial-sphere theory, as well as every optical illusion and conjuring trick. So we perceive nothing as what it really is. It is all theoretical interpretation: conjecture.
But one thing that all conceptions of the Enlightenment agree on is that it was a rebellion, and specifically a rebellion against authority in regard to knowledge.
What was needed for the sustained, rapid growth of knowledge was a tradition of criticism.
Testability is now generally accepted as the defining characteristic of the scientific method. Popper called it the ‘criterion of demarcation’ between science and non-science.
The reason that testability is not enough is that prediction is not, and cannot be, the purpose of science.
If the explanation of a conjuring trick were evident in its appearance, there would be no trick. If the explanations of physical phenomena were evident in their appearance, empiricism would be true and there would be no need for science as we know it.
Knowledge that is both familiar and uncontroversial is background knowledge. A predictive theory whose explanatory content consists only of background knowledge is a rule of thumb.
I shall call a situation in which we experience conflicting ideas a problem.
Solving a problem means creating an explanation that does not have the conflict.
Expectations are theories too.
This is another general fact about scientific explanation: if one has a misconception, observations that conflict with one’s expectations may (or may not) spur one into making further conjectures, but no amount of observing will correct the misconception until after one has thought of a better idea; in contrast, if one has the right idea one can explain the phenomenon even if there are large errors in the data.
That freedom to make drastic changes in those mythical explanations of seasons is the fundamental flaw in them.
As the physicist Richard Feynman said, ‘Science is what we have learned about how to keep from fooling ourselves.’
The quest for good explanations is, I believe, the basic regulating principle not only of science, but of the Enlightenment generally.
An entire political, moral, economic and intellectual culture–roughly what is now called ‘the West’–grew around the values entailed by the quest for good explanations, such as tolerance of dissent, openness to change, distrust of dogmatism and authority, and the aspiration to progress both by individuals and for the culture as a whole.
That is what we do today. We do not test every testable theory, but only the few that we find are good explanations.
Conjectures are the products of creative imagination. But the problem with imagination is that it can create fiction much more easily than truth.
This reach of explanations is another meaning of ‘the beginning of infinity’. It is the ability of some of them to solve problems beyond those that they were created to solve.
The better an explanation is, the more rigidly its reach is determined–because the harder it is to vary an explanation, the harder it is in particular to construct a variant with a different reach, whether larger or smaller, that is still an explanation.
If you find a nugget of gold anywhere in the universe, you can be sure that in its history there was either a supernova or an intelligent being with an explanation.
And if you find an explanation anywhere in the universe, you know that there must have been an intelligent being. A supernova alone would not suffice.
Theory-laden: There is no such thing as ‘raw’ experience. All our experience of the world comes through layers of conscious and unconscious interpretation.
The real source of our theories is conjecture, and the real source of our knowledge is conjecture alternating with criticism.
The role of experiment and observation is to choose between existing theories, not to be the source of new ones.

2. Closer to Reality

The universe is not there to overwhelm us; it is our home, and our resource. The bigger the better.
Perhaps those galaxy-cataloguing computer programs were written by those same graduate students, distilling what they had learned into reproducible algorithms. Which means that they must have learned something while performing a task that a computer performs without learning anything.
As Feynman said, we keep learning more about how not to fool ourselves.
The primary function of the telescope’s optics is to reduce the illusion that the stars are few, faint, twinkling and moving. The same is true of every feature of the telescope, and of all other scientific instruments: each layer of indirectness, through its associated theory, corrects errors, illusions, misleading perspectives and gaps.
Our minds, through the methodological criterion that I mentioned in Chapter 1, conclude that a particular thing is real if and only if it figures in our best explanation of something.
It may seem strange that scientific instruments bring us closer to reality when in purely physical terms they only ever separate us further from it. But we observe nothing directly anyway. All observation is theory-laden.

3. The Spark

Most ancient accounts of the reality beyond our everyday experience were not only false, they had a radically different character from modern ones: they were anthropocentric. That is to say, they centred on human beings, and more broadly on people–entities with intentions and human-like thoughts–which included powerful, supernatural people such as spirits and gods.
Before anything was known about how the world works, trying to explain physical phenomena in terms of purposeful, human-like thought and action may have been a reasonable approach. After all, that is how we explain much of our everyday experience even today.
For one thing, about 80 per cent of that matter is thought to be invisible ‘dark matter’, which can neither emit nor absorb light.
Moreover, we are an uncommon form of ordinary matter. The commonest form is plasma (atoms dissociated into their electrically charged components), which typically emits bright, visible light because it is in stars, which are rather hot. We scum are mainly infra-red emitters because we contain liquids and complex chemicals which can exist only at a much lower range of temperatures.
Almost all the atoms in intergalactic space are hydrogen or helium, so there is no chemistry.
Cold, dark and empty. That unimaginably desolate environment is typical of the universe–and is another measure of how untypical the Earth and its chemical scum are, in a straightforward physical sense.
There is a life-support system in Oxfordshire today, but it was not provided by the biosphere. It has been built by humans. It consists of clothes, houses, farms, hospitals, an electrical grid, a sewage system and so on. Nearly the whole of the Earth’s biosphere in its primeval state was likewise incapable of keeping an unprotected human alive for long.
That is no accident: most populations, of most species, are living close to the edge of disaster and death. It has to be that way, because as soon as some small group, somewhere, begins to have a slightly easier life than that, for any reason–for instance, an increased food supply, or the extinction of a competitor or predator–then its numbers increase. As a result, its other resources are depleted by the increased usage; so an increasing proportion of the population now has to colonize more marginal habitats and make do with inferior resources, and so on.
The biosphere only ever achieves stability–and only temporarily at that–by continually neglecting, harming, disabling and killing individuals.
Our pre-human ancestors in the Great Rift Valley used such knowledge too, and our own species must have come into existence already dependent on it for survival.
Today, almost the entire capacity of the Earth’s ‘life-support system for humans’ has been provided not for us but by us, using our ability to create new knowledge.
The Earth did provide the raw materials for our survival–just as the sun has provided the energy, and supernovae provided the elements, and so on. But a heap of raw materials is not the same thing as a life-support system. It takes knowledge to convert the one into the other, and biological evolution never provided us with enough knowledge to survive, let alone to thrive.
Any assumption that the world is inexplicable can lead only to extremely bad explanations.
People had dreamed for millennia of flying to the moon, but it was only with the advent of Newton’s theories about the behaviour of invisible entities such as forces and momentum that they began to understand what was needed in order to go there.
This increasingly intimate connection between explaining the world and controlling it is no accident, but is part of the deep structure of the world.
That is to say, every putative physical transformation, to be performed in a given time with given resources or under any other conditions, is either impossible because it is forbidden by the laws of nature, or achievable, given the right knowledge.
The ability to create and use explanatory knowledge gives people a power to transform nature which is ultimately not limited by parochial factors, as all other adaptations are, but only by universal laws.
A gene pool is carved and whittled through generations of ancestral natural selection to fit [a particular] environment. In theory a knowledgeable zoologist, presented with the complete transcript of a genome [the set of all the genes of an organism], should be able to reconstruct the environmental circumstances that did the carving. In this sense the DNA is a coded description of ancestral environments.
To be precise, the ‘knowledgeable zoologist’ would be able to reconstruct only those aspects of the organism’s ancestral environment that exerted selection pressure–such as the types of prey that existed there, what behaviours would catch them, what chemicals would digest them and so on.
For example, all primates require vitamin C. Without it, they fall ill and die of the disease scurvy, but their genes do not contain the knowledge of how to synthesize it. So, whenever any non-human primate is in an environment that does not supply vitamin C for an extended period, it dies. Any account that overlooks this fact will overestimate the reach of those species. Humans are primates, yet their reach has nothing to do with which environments supply vitamin C. Humans can create and apply new knowledge of how to cause it to be synthesized from a wide range of raw materials, by agriculture or in chemical factories. And, just as essentially, humans can discover for themselves that, in most environments, they need to do that in order to survive.
I specified robot space vehicles because all technological knowledge can eventually be implemented in automated devices.
And the more advanced technology becomes, the shorter is the gap between inspiration and automation.
In the unique case of humans, the difference between a hospitable environment and a deathtrap depends on what knowledge they have created.
Human bodies (including their brains) are factories for transforming anything into anything that the laws of nature allow. They are ‘universal constructors’.
As Einstein remarked, ‘My pencil and I are more clever than I.’ In terms of computational repertoire, our computers–and brains–are already universal.
So human reach is essentially the same as the reach of explanatory knowledge itself. An environment is within human reach if it is possible to create an open-ended stream of explanatory knowledge there.
Everything needed for the open-ended creation of knowledge is here in abundance, in the Earth’s biosphere.
Because humans are universal constructors, every problem of finding or transforming resources can be no more than a transient factor limiting the creation of knowledge in a given environment. And therefore matter, energy and evidence are the only requirements that an environment needs to have in order to be a venue for open-ended knowledge creation.
Though any particular problem is a transient factor, the condition of having to solve problems in order to survive and continue to create knowledge is permanent.
an unproblematic state is a state without creative thought. Its other name is death.
And so the maxim that I suggested should be carved in stone, namely ‘The Earth’s biosphere is incapable of supporting human life’, is actually a special case of a much more general truth, namely that, for people, problems are inevitable. So let us carve that in stone: problems are inevitable.
So a complementary and equally important truth about people and the physical world is that problems are soluble. By ‘soluble’ I mean that the right knowledge would solve them.
All people in the universe, once they have understood enough to free themselves from parochial obstacles, face essentially the same opportunities.
But either way, in the universe at large, knowledge-friendliness is the rule, not the exception. That is to say, the rule is person-friendliness to people who have the relevant knowledge. Death is the rule for those who do not.
In all cases, the class of transformations that could happen spontaneously–in the absence of knowledge–is negligibly small compared with the class that could be effected artificially by intelligent beings who wanted those transformations to happen.
(All scientific measurements involve chains of proxies.)

4. Creation

The knowledge in human brains and the knowledge in biological adaptations are both created by evolution in the broad sense: the variation of existing information, alternating with selection. In the case of human knowledge, the variation is by conjecture, and the selection is by criticism and experiment.
Human brains and DNA molecules each have many functions, but among other things they are general-purpose information-storage media: they are in principle capable of storing any kind of information. Moreover, the two types of information that they respectively evolved to store have a property of cosmic significance in common: once they are physically embodied in a suitable environment, they tend to cause themselves to remain so. Such information–which I call knowledge–is very unlikely to come into existence other than through the error-correcting processes of evolution or thought.
There are also important differences between those two kinds of knowledge. One is that biological knowledge is non-explanatory, and therefore has limited reach; explanatory human knowledge can have broad or even unlimited reach. Another difference is that mutations are random, while conjectures can be constructed intentionally for a purpose.
For instance, when people are trying to understand an idea that they hear from others, they typically understand it to mean what makes most sense to them, or what they are most expecting to hear, or what they fear to hear, and so on. Those meanings are conjectured by the listener or reader, and may differ from what the speaker or writer intended.
People rarely express ideas in exactly the same words in which they heard them.
They also translate from one language to another, and between spoken and written language, and so on. Yet we rightly call what is transmitted the same idea–the same meme–throughout.
Thus, in the case of most memes, the real replicator is abstract: it is the knowledge itself.
But one day the genes of a rare species could survive its extinction by causing themselves to be stored on a computer and then implanted into a cell of a different species.
The physicist Brandon Carter calculated in 1974 that if the strength of the interaction between charged particles were a few per cent smaller, no planets would ever have formed and the only condensed objects in the universe would be stars; and if it were a few per cent greater, then no stars would ever explode, and so no elements other than hydrogen and helium would exist outside them. In either case there would be no complex chemistry and hence presumably no life.
However–and here we are reaching Sciama’s main conclusion–that prediction changes radically if there are several constants to explain. For although any one constant is unlikely to be near the edge of its range, the more constants there are, the more likely it is that at least one of them will be. This can be illustrated pictorially as follows, with our bull’s-eye replaced by a line segment, a square, a cube… and we can imagine this sequence continuing for as many dimensions as there are fine-tuned constants in nature. Arbitrarily define ‘near the edge’ as meaning ‘within 10 per cent of the whole range from it’. Then in the case of one constant, as shown in the diagram, 20 per cent of its possible values are near one of the two edges of the range, and 80 per cent are ‘away from the edge’. But with two constants a pair of values has to satisfy two constraints in order to be ‘away from the edge’. Only 64 per cent of them do so. Hence 36 per cent are near the edge. With three constants, nearly half the possible choices are near the edge. With 100 constants, over 99.9999999 per cent of them are.
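The arithmetic behind Sciama's conclusion can be checked directly. Here is a minimal sketch (an illustration added here, not from the text), assuming each of n fine-tuned constants independently has 20 per cent of its range ‘near the edge’:

```python
# Fraction of n-tuples of fine-tuned constants with at least one value
# 'near the edge', assuming each constant independently has a 20% chance
# of lying within 10% of either end of its range (so 80% of being away).
def fraction_near_edge(n: int) -> float:
    return 1 - 0.8 ** n

for n in (1, 2, 3, 100):
    print(n, round(fraction_near_edge(n), 10))
# 1 -> 0.2, 2 -> 0.36, 3 -> 0.488, 100 -> 0.9999999998
```

The outputs reproduce the figures in the passage: 20 per cent for one constant, 36 per cent for two, nearly half for three, and over 99.9999999 per cent for a hundred.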

5. The Reality of Abstractions

Furthermore, everyday events are stupendously complex when expressed in terms of fundamental physics. If you fill a kettle with water and switch it on, all the supercomputers on Earth working for the age of the universe could not solve the equations that predict what all those water molecules will do–even if we could somehow determine their initial state and that of all the outside influences on them, which is itself an intractable task. Fortunately, some of that complexity resolves itself into a higher-level simplicity.
In other words, the behaviour of that whole class of high-level phenomena is quasi-autonomous–almost self-contained. This resolution into explicability at a higher, quasi-autonomous level is known as emergence.
Emergent phenomena are a tiny minority. We can predict when the water will boil, and that bubbles will form when it does, but if you wanted to predict where each bubble will go (or, to be precise, what the probabilities of its various possible motions are–see Chapter 11), you would be out of luck. Still less is it feasible to predict the countless microscopically defined properties of the water, such as whether an odd or an even number of its electrons will be affected by the heating during a given period. Fortunately, we are uninterested in predicting or explaining most of those properties, despite the fact that they are the overwhelming majority.
Whenever a high-level explanation does follow logically from low-level ones, that also means that the high-level one implies something about the low-level ones. Thus, additional high-level theories, provided that they were all consistent, would place more and more constraints on what the low-level theories could be.
In any case, emergent phenomena are essential to the explicability of the world. Long before humans had much explanatory knowledge, they were able to control nature by using rules of thumb.
Long before that, it was only genes that were encoding rules of thumb, and the knowledge in them, too, was about emergent phenomena. Thus emergence is another beginning of infinity: all knowledge-creation depends on, and physically consists of, emergent phenomena.
Experience provides problems only by bringing already-existing ideas into conflict.
This argument that abstractions really exist does not tell us what they exist as–for instance, which of them are purely emergent aspects of others, and which exist independently of the others.

6. The Jump to Universality

Then, instead of each new word temporarily breaking the system, the system can itself be used to coin new words, in an easy and decentralized way.
Alphabets were confined to special purposes such as writing rare words or transliterating foreign names.
Some historians believe that the idea of an alphabet-based writing system was conceived only once in human history–by some unknown predecessors of the Phoenicians, who then spread it throughout the Mediterranean–so that every alphabet-based writing system that has ever existed is either descended from or inspired by that Phoenician one.
That is a universal system of tallying. But, like levels of emergence, there is a hierarchy of universality. The next level above tallying is counting, which involves numerals. When tallying goats one is merely thinking ‘another, and another, and another’; but when counting them one is thinking ‘forty, forty-one, forty-two…’
By exploiting the universal laws of addition, those rules gave the system some important reach beyond tallying–such as the ability to perform arithmetic. For example, consider the numbers seven (VII) and eight (VIII). The rules say that placing them side by side–VIIVIII–is the same as adding them. Then they tell us to rearrange the symbols in order of decreasing value: VVIIIII. Then they tell us to replace the two V’s by X, and the five I’s by V. The result is XV, which is the representation of fifteen. Something new has happened here, which is more than just a matter of shorthand: an abstract truth has been discovered, and proved, about seven, eight and fifteen without anyone having counted or tallied anything. Numbers have been manipulated in their own right, via their numerals.
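The rules just described are mechanical enough to be executed by a program. The following toy sketch is my own illustration of them (it deliberately ignores subtractive notation such as IV, as the text's example also does):

```python
# Roman-numeral addition by symbol manipulation alone: concatenate,
# sort by decreasing value, then repeatedly apply replacement rules
# such as IIIII -> V and VV -> X. No counting or tallying involved.
VALUES = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}
REPLACEMENTS = [('IIIII', 'V'), ('VV', 'X'), ('XXXXX', 'L'),
                ('LL', 'C'), ('CCCCC', 'D'), ('DD', 'M')]

def roman_add(a: str, b: str) -> str:
    # Placing the numerals side by side is the same as adding them;
    # then rearrange the symbols in order of decreasing value.
    result = ''.join(sorted(a + b, key=lambda s: VALUES[s], reverse=True))
    # Replace groups of symbols with higher-valued ones until stable.
    changed = True
    while changed:
        changed = False
        for old, new in REPLACEMENTS:
            if old in result:
                result = result.replace(old, new)
                changed = True
    return result

print(roman_add('VII', 'VIII'))  # XV, via VIIVIII -> VVIIIII -> XV
```

Running it reproduces the steps in the passage: VIIVIII is rearranged to VVIIIII, the five I's become V, the two V's become X, and the result is XV.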
Knowledge is information which, when it is physically embodied in a suitable environment, tends to cause itself to remain so.
People consist of abstract information, including the distinctive ideas, theories, intentions, feelings and other states of mind that characterize an ‘I’. To object to being ‘controlled’ by Roman numerals when we find them helpful is like protesting at being controlled by one’s own intentions.
The only way to emancipate arithmetic from tallying is with rules of universal reach. As with alphabets, a small set of basic rules and symbols is sufficient.
The largest benefits of any universality, beyond whatever parochial problem it is intended to solve, come from its being useful for further innovation. And innovation is unpredictable.
A jump to universality that played an important role in the early history of the Enlightenment was the invention of movable-type printing. Movable type consisted of individual pieces of metal, each embossed with one letter of the alphabet. Earlier forms of printing had merely streamlined writing in the same way that Roman numerals streamlined tallying: each page was engraved on a printing plate and thus all the symbols on it could be copied in a single action. But, given a supply of movable type with several instances of each letter, one does no further metalwork. One merely arranges the type into words and sentences. One does not have to know, in order to manufacture type, what the documents that it will eventually print are going to say: it is universal.
These improvements led to a jump to universality in about 1970, when several companies independently produced a microprocessor, a universal classical computer on a single silicon chip.
It is a remarkable fact that, in that sense (that is to say, ignoring issues of speed, memory capacity and input–output devices), the human ‘computers’ of old, the steam-powered Analytical Engine with its literal bells and whistles, the room-sized vacuum-tube computers of the Second World War, and present-day supercomputers all have an identical repertoire of computations.
Information that cannot be reliably retrieved is not really being stored.
Because of the necessity for error-correction, all jumps to universality occur in digital systems. It is why spoken languages build words out of a finite set of elementary sounds: speech would not be intelligible if it were analogue. It would not be possible to repeat, nor even to remember, what anyone had said.
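A small simulation (my own, with made-up noise parameters) illustrates why repeated copying requires a digital system: analogue copies accumulate noise as a random walk, while digital copies snap back to the nearest symbol of a finite alphabet at every generation:

```python
import random

# Copy a message many times in succession. Analogue copying passes the
# noise along; digital copying error-corrects each generation by
# rounding to the nearest of 11 discrete levels (0.0, 0.1, ..., 1.0).
def copy_analogue(value, noise=0.01):
    return value + random.gauss(0, noise)

def copy_digital(value, noise=0.01):
    noisy = value + random.gauss(0, noise)
    return round(noisy * 10) / 10  # snap to the nearest level

a = d = 0.5
for _ in range(1000):
    a = copy_analogue(a)
    d = copy_digital(d)

print(abs(a - 0.5))  # typically ~0.3: the message has drifted away
print(abs(d - 0.5))  # almost certainly 0.0: the message survived intact
```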
The mysterious universality of DNA as a constructor may have been the first universality to exist. But, of all the different forms of universality, the most significant physically is the characteristic universality of people, namely that they are universal explainers, which makes them universal constructors as well.

7. Artificial Creativity

Turing did understand that artificial intelligence (AI) must in principle be possible because a universal computer is a universal simulator.
The probability that the outputs of such templates will continue to resemble the products of human thought diminishes exponentially with the number of utterances.
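The exponential decay here is just the multiplication of independent chances. On the hedged assumption that each templated utterance happens to match what a thinking being might say with some fixed probability $p < 1$, the chance that $n$ successive utterances all do is

$$p^{\,n} \longrightarrow 0 \quad \text{as } n \to \infty,$$

so that, for example, even $p = 0.9$ gives $0.9^{50} \approx 0.005$ after fifty utterances.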
If you can’t program it, you haven’t understood it.
Explaining how an AI program works in detail might well be intractably complicated. In practice the author’s explanation would always be at some emergent, abstract level. But that would not prevent it from being a good explanation.
It would just explain how it could happen, and why we should expect it to happen, given how the program works.
This is exactly the analogue of a ‘trick’ that a programmer has built into a chatbot: the chatbot responds ‘as though’ it had created some of the knowledge while composing its response, but in fact all the knowledge was created earlier and elsewhere.
The field of artificial (general) intelligence has made no progress because there is an unsolved philosophical problem at its heart: we do not understand how creativity works. Once that has been solved, programming it will not be difficult.

8. A Window on Infinity

Every room is at the beginning of infinity. That is one of the attributes of the unbounded growth of knowledge too: we are only just scratching the surface, and shall never be doing anything else.
‘P ≠ NP’. It is, roughly speaking, the conjecture that there exist classes of mathematical questions whose answers can be verified efficiently once one has them but cannot be computed efficiently in the first place by a universal (classical) computer.
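A standard way to see the asymmetry (my example; subset sum is one of the classic NP problems, not one the text names) is that checking a proposed answer is fast, while the only known general ways of finding one take exponential time in the worst case:

```python
from itertools import combinations

# Subset sum: given numbers and a target, find a subset with that sum.

def verify(numbers, target, candidate):
    # Efficient: polynomial in the size of the input.
    pool = list(numbers)
    for x in candidate:
        if x not in pool:
            return False
        pool.remove(x)
    return sum(candidate) == target

def search(numbers, target):
    # Inefficient: examines up to 2**len(numbers) subsets.
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
print(search(nums, 9))          # (4, 5) -- found by brute force
print(verify(nums, 9, (4, 5)))  # True -- checked quickly
```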
Then there are the limitations of epistemology: we cannot create knowledge other than by the fallible method of conjecture and criticism; errors are inevitable, and only error-correcting processes can succeed or continue for long.
The most important of all limitations on knowledge-creation is that we cannot prophesy: we cannot predict the content of ideas yet to be created, or their effects. This limitation is not only consistent with the unlimited growth of knowledge, it is entailed by it, as I shall explain in the next chapter.
One of them is that, if unlimited progress really is going to happen, not only are we now at almost the very beginning of it, we always shall be.

9. Optimism

Russian roulette is merely random. Although we cannot predict the outcome, we do know what the possible outcomes are, and the probability of each, provided that the rules of the game are obeyed. The future of civilization is unknowable, because the knowledge that is going to affect it has yet to be created. Hence the possible outcomes are not yet known, let alone their probabilities.
Lagrange had remarked that Isaac Newton had not only been the greatest genius who ever lived, but also the luckiest, for ‘the system of the world can be discovered only once.’ Lagrange would never know that some of his own work, which he had regarded as a mere translation of Newton’s into a more elegant mathematical language, was a step towards the replacement of Newton’s ‘system of the world’.
Blind optimism is a stance towards the future. It consists of proceeding as if one knows that the bad outcomes will not happen.
The reason for these paradoxes and parallels between blind optimism and blind pessimism is that those two approaches are very similar at the level of explanation. Both are prophetic: both purport to know unknowable things about the future of knowledge.
A probability of one in 250,000 of such an impact in any given year means that a typical person on Earth would have a far larger chance of dying of an asteroid impact than in an aeroplane crash. And the next such object to strike us is already out there at this moment, speeding towards us with nothing to stop it except human knowledge.
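A back-of-envelope check of that comparison (the aeroplane figure below is an outside estimate, not from the text): if a civilization-ending impact has probability $1/250{,}000$ in any given year, a person's chance of dying that way over an 80-year life is roughly

$$1 - \left(1 - \tfrac{1}{250{,}000}\right)^{80} \approx \tfrac{80}{250{,}000} \approx 3 \times 10^{-4},$$

about 1 in 3,000, which is indeed larger than commonly cited lifetime odds of dying in an aeroplane crash, typically of order 1 in 10,000.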
The question ‘How can we hope to detect and eliminate error?’ is echoed by Feynman’s remark that ‘science is what we have learned about how to keep from fooling ourselves’. And the answer is basically the same for human decision-making as it is for science: it requires a tradition of criticism, in which good explanations are sought–for example, explanations of what has gone wrong, what would be better, what effect various policies have had in the past and would have in the future.
But it also assumes that improving upon them is possible: problems are soluble. The ideal towards which this is working is not that nothing unexpected will go wrong, but that when it does it will be an opportunity for further progress.
Optimism is, in the first instance, a way of explaining failure, not prophesying success.
Whenever we try to improve things and fail, it is not because the spiteful (or unfathomably benevolent) gods are thwarting us or punishing us for trying, or because we have reached a limit on the capacity of reason to make improvements, or because it is best that we fail, but always because we did not know enough, in time.
He knows that, if progress is to be made, some of the opportunities and some of the discoveries will be inconceivable in advance. Progress cannot take place at all unless someone is open to, and prepares for, those inconceivable possibilities.
But if our prisoner is going to escape by creating a new idea, he cannot possibly know that idea today, and therefore he cannot let the assumption that it will never exist condition his planning.
Optimism implies all the other necessary conditions for knowledge to grow, and for knowledge-creating civilizations to last, and hence for the beginning of infinity.
we want to engage with projects that will involve creating new knowledge. And an optimist expects the creation of knowledge to constitute progress–including its unforeseeable consequences.
The first attribute that Pericles cited was Athens’ democracy. And he explained why. Not because ‘the people should rule’, but because it promotes ‘wise action’.
Problems are soluble, and each particular evil is a problem that can be solved. An optimistic civilization is open and not afraid to innovate, and is based on traditions of criticism.

10. A Dream of Socrates

The knowledge that you seek–objective knowledge–is hard to come by, but attainable. That mental state that you do not seek–justified belief–is sought by many people, especially priests and philosophers.
For knowledge held immune from criticism never can be improved!
Thought, explanation and persuasion. And now they would understand better why thievery is harmful, through their new explanations.
Nevertheless, you have conceded that even those things that you thought were the easiest to see literally are in fact not easy to see at all without prior knowledge about them. In fact nothing is easy to see without prior knowledge. All knowledge of the world is hard to come by.
What matters in all cases is the explanation you create, within your own mind, for the facts, and for the observations and advice in question.
I have no need to trust the source if the argument itself is persuasive. And no way of using any source unless I also have a persuasive argument.
I see. When we hear something being said, we guess what it means, without realizing what we are doing. That is beginning to make sense to me. Except–guesswork isn’t knowledge!
And when we can change them no more, we have understood some objective truth. And, as if that were not enough, what we understand we then control. It is like magic, only real. We are like gods!
The immediate reason is that the original sources of scientific theories are almost never good sources. How could they be? All subsequent expositions are intended to be improvements on them, and some succeed, and improvements are cumulative.
The way to converge with each other is to converge upon the truth.

11. The Multiverse

Not only do all good science-fiction plots resemble scientific explanation in this way; in the broadest sense all good art does.
One is, as with all fiction, to allow the reader to engage with the story, and the easiest way to do that is to draw on themes that are already familiar.
The opposing incentive is to explore the strongest possible version of a fictional-science premise, and its strangest possible implications–which pushes in the anti-anthropocentric direction.
A dollar is an abstraction. Indeed, it is a piece of abstract knowledge. As I discussed in Chapter 4, knowledge, once embodied in physical form in a suitable environment, causes itself to remain so.
It is meaningful that half the energy that was there has been dissipated. It turns out that, in quantum physics, elementary particles are configurational entities too.
Diversity within fungibility is a widespread phenomenon in the multiverse, as I shall explain. One big difference from the case of fungible money is that in the latter case we never have to wonder about–or predict–what it would be like to be a dollar. That is to say, what it would be like to be fungible, and then to become differentiated. Many applications of quantum theory require us to do exactly that.
Thus the outcomes of such experiments are subjectively random (from the perspective of any observer) even though everything that is happening is completely determined objectively.
That is why the larger and more complex an object or process is, the less its gross behaviour is affected by interference. At that ‘coarse-grained’ level of emergence, events in the multiverse consist of autonomous histories, with each coarse-grained history consisting of a swathe of many histories differing only in microscopic details but affecting each other through interference.
An important consequence for the construction of measuring devices (including eyes) is that no matter how far away the source is, the kick given to an atom by an arriving photon is always the same: it is just that the weaker the signal is, the fewer kicks there are.
Just as the starship crew members could achieve the effect of large amounts of computation by sharing information with their doppelgängers computing the same function on different inputs, so an algorithm that makes use of quantum parallelism does the same.
We are channels of information flow. So are histories, and so are all relatively autonomous objects within histories; but we sentient beings are extremely unusual channels, along which (sometimes) knowledge grows. This can have dramatic effects, not only within a history (where it can, for instance, have effects that do not diminish with distance), but also across the multiverse.
Since the growth of knowledge is a process of error-correction, and since there are many more ways of being wrong than right, knowledge-creating entities rapidly become more alike in different histories than other entities. As far as is known, knowledge-creating processes are unique in both these respects: all other effects diminish with distance in space, and become increasingly different across the multiverse, in the long run.

12. A Physicist’s History of Bad Philosophy

The path that the photon did not take affects the path that it did.
With hindsight, we can state the rule of thumb like this: whenever a measurement is made, all the histories but one cease to exist. The surviving one is chosen at random, with the probability of each possible outcome being equal to the total measure of all the histories in which that outcome occurs.
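Stated symbolically (the notation is mine, not the text's): if $H_x$ is the set of histories in which outcome $x$ occurs and $\mu(h)$ is the measure of history $h$, the rule of thumb assigns

$$P(x) = \sum_{h \in H_x} \mu(h), \qquad \sum_{h} \mu(h) = 1,$$

with the surviving history then chosen at random according to those probabilities.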
Its combination of vagueness, immunity from criticism, and the prestige and perceived authority of fundamental physics opened the door to countless systems of pseudo-science and quackery supposedly based on quantum theory.
And in Dublin in 1952 Schrödinger gave a lecture in which at one point he jocularly warned his audience that what he was about to say might ‘seem lunatic’. It was that, when his equation seems to be describing several different histories, they are ‘not alternatives but all really happen simultaneously’. This is the earliest known reference to the multiverse.
Error is the normal state of our knowledge, and is no disgrace.
Problems are inevitable, but they can be solved by imaginative, critical thought that seeks good explanations.
Bad philosophy has always existed too. For instance, children have always been told, ‘Because I say so.’
Instead, when the physicist Ludwig Boltzmann used atomic theory to unify thermodynamics and mechanics, he was so vilified by Mach and other positivists that he was driven to despair, which may have contributed to his suicide just before the tide turned and most branches of physics shook off Mach’s influence.
Bad philosophy cannot easily be countered by good philosophy–argument and explanation–because it holds itself immune.

14. Why are Flowers Beautiful?

‘an idea so simple, so beautiful, that when we grasp it… how could it have been otherwise?’
Elegance is the beauty in explanations.
Composers like Ludwig van Beethoven agonized through change after change, apparently seeking something that they knew was there to be created, apparently meeting a standard that could be met only after much creative effort and much failure.
Attractiveness (to a given audience) can be functional, and is a down-to-earth, scientifically measurable quantity. Art can be literally attractive in the sense of causing people to move towards it.
New art is unpredictable, like new scientific discoveries.
The whole activity of skydiving is beautiful, and part of that beauty is in the very sensations that evolved to deter us from trying it. The conclusion is inescapable: that attraction is not inborn, just as the contents of a newly discovered law of physics or mathematical theorem are not inborn.
This is the reproductive mechanism that flowering plants evolved and which most still use today: before there were insects, there were no flowers on Earth. But the mechanism could work only because insects, at the same time, evolved genes that attracted them to flowers. Why did they? Because flowers provide nectar, which is food. Just as there is co-evolution between the genes to coordinate mating behaviours in males and females of the same species, so genes for making flowers and giving them their shapes and colours co-evolved with genes in insects for recognizing flowers with the best nectar.
Criteria evolved, and means of meeting those criteria co-evolved with them. That is what gave flowers the knowledge of how to attract insects, and insects the knowledge of how to recognize those flowers and the propensity to fly towards them. But what is surprising is that these same flowers also attract humans.
With the overwhelming majority of species, we do not share any of their criteria for finding something attractive. Yet with flowers–most flowers–we do. Sometimes a leaf can be beautiful; even a puddle of water can. But, again, only by rare chance. With flowers it is reliable.
One unusual aspect of the flower–insect co-evolution is that it involved the creation of a complex code, or language, for signalling information between species.
The code had to be, on the one hand, easily recognizable by the right insects, and, on the other, difficult to forge by other species of flower–for if other species could cause their pollen to be spread by the same insects without having to manufacture nectar for them, which requires energy, they would have a selective advantage.
Thus both the criterion and the means of meeting it had to be hard to vary.
So flowers have to create objective beauty, and insects have to recognize objective beauty. Consequently the only species that are attracted by flowers are the insect species that co-evolved to do so–and humans.
The attribute we call beauty is of two kinds. One is a parochial kind of attractiveness, local to a species, to a culture or to an individual. The other is unrelated to any of those: it is universal, and as objective as the laws of physics.
Humans are quite unlike that: the amount of information in a human mind is more than that in the genome of any species, and overwhelmingly more than the genetic information unique to one person.
So human artists are trying to signal across the same scale of gap between humans as flowers and insects signal across between species.
Exactly the same is true of all our other knowledge: we can communicate with other people by sending messages predetermined by our genes or culture, or we can invent something new.
The two types of beauty are usually created to solve two types of problem–which could be called pure and applied. The applied kind is that of signalling information, and is usually solved by creating the parochial type of beauty. Humans have problems of that type too: the beauty of, say, the graphical user interface of a computer is created primarily to promote comfort and efficiency in the machine’s use. Sometimes a poem or song may be written for a similar practical purpose: to give more cohesiveness to a culture, or to advance a political agenda, or even to advertise beverages. Again, sometimes these purposes can also be met by creating objective beauty, but usually the parochial kind is used because it is easier to create.
The other kind of problem, the pure kind, which has no analogue in biology, is that of creating beauty for its own sake–which includes creating improved criteria for beauty: new artistic standards or styles. This is the analogue of pure scientific research.
The states of mind involved in that sort of science and that sort of art are fundamentally the same. Both are seeking universal, objective truth.
Poetry and mathematics or physics share the property that they develop a language different from ordinary language, in order to state efficiently things that would be very inefficient to state in ordinary language.
Applied art and pure art ‘feel’ the same. And, just as we need sophisticated knowledge to tell the difference between the motion of a bird across the sky, which is happening objectively, and the motion of the sun across the sky, which is just a subjective illusion caused by our own motion, and the motion of the moon, which is a bit of each, so pure and applied art, universal and parochial beauty, are mixed together in our subjective appreciation of things. It will be important to discover which is which. For it is only in the objective direction that we can expect to make unlimited progress.
Expression is conveying something that is already there, while objective progress in art is about creating something new. Also, self-expression is about expressing something subjective, while pure art is objective.
Real progress is difficult and involves many errors for every success.
The art of the future can create unlimited increases in beauty.
When we understand better what elegance really is, perhaps we shall find new and better ways to seek truth using elegance or beauty.

15. The Evolution of Culture

The distinction between explicit and inexplicit is not always sharp. For instance, a poem or a satire may be explicitly about one subject, while the audience in a particular culture will reliably, and without being told, interpret it as being about a different one.
Thus a culture is in practice defined not by a set of strictly identical memes, but by a set of variants that cause slightly different characteristic behaviours.
Long-lived memes are exceptional ideas that have been accurately replicated many times in succession.
Arguments by analogy are fallacies.
So the conditions are there for evolution: repeated cycles of imperfect copying of information, alternating with selection. Eventually the story becomes amusing enough to make people laugh, and a fully fledged joke has evolved.
Although we do not know exactly how creativity works, we do know that it is itself an evolutionary process within individual brains. For it depends on conjecture (which is variation) and criticism (for the purpose of selecting ideas). So, somewhere inside brains, blind variations and selections are adding up to creative thought at a higher level of emergence.
The upshot of this is that memes necessarily become embodied in two different physical forms alternately: as memories in a brain, and as behaviour.
Because of the alternating physical forms of a meme, it has to survive two different, and potentially unrelated, mechanisms of selection in every generation. The brain-memory form has to cause the holder to enact the behaviour; and the behaviour form has to cause the new recipient to remember it–and to enact it.
To be a meme, an idea has to contain quite sophisticated knowledge of how to cause humans to do at least two independent things: assimilate the meme faithfully, and enact it. That some memes can replicate themselves with great fidelity for many generations is a token of how much knowledge they contain.
And merely being expressed as behaviour does not automatically get the meme copied into a recipient along with other memes: it has to compete for the recipients’ attention and acceptance with all sorts of behaviours by other people, and with the recipient’s own ideas.
a human brain–quite unlike a genome–is itself an arena of intense variation, selection and competition. Most ideas within a brain are created by it for the very purpose of trying them out in imagination, criticizing them, and varying them until they meet the person’s preferences.
Just as genes for the eye implicitly ‘know’ the laws of optics, so the long-lived memes of a static society implicitly possess knowledge of the human condition, and use it mercilessly to evade the defences and exploit the weaknesses of the human minds that they enslave.
A remark about timescales: Static societies, by this definition, are not perfectly unchanging. They are static on the timescale that humans can notice; but memes cannot prevent changes that are slower than that.
This pushes memes in the direction of causing a finely tuned compulsion in the holder’s mind: ideally, this would be just the inability to refrain from enacting that particular meme (or memeplex).
A static society involves–in a sense consists of–a relentless struggle to prevent knowledge from growing.
But primitive societies (including tribes of hunter-gatherers) must all have been static societies, because if ever one ceased to be static it would soon cease to be primitive, or else destroy itself by losing its distinctive knowledge.
Moreover, in regard to academic knowledge, it is still taken for granted, in practice, that the main purpose of education is to transmit a standard curriculum faithfully. One consequence is that people are acquiring scientific knowledge in an anaemic and instrumental way. Without a critical, discriminating approach to what they are learning, most of them are not effectively replicating the memes of science and reason into their minds. And so we live in a society in which people can spend their days conscientiously using laser technology to count cells in blood samples, and their evenings sitting cross-legged and chanting to draw supernatural energy out of the Earth.
For example, whenever we find ourselves enacting a complex or narrowly defined behaviour that has been accurately repeated from one holder to the next, we should be suspicious.
Indeed, everything that we shall ever try to achieve from now on will never have worked before. We have, so far, been transformed from the victims (and enforcers) of an eternal status quo into the mainly passive recipients of the benefits of relatively rapid innovation in a bumpy transition period. We now have to accept, and rejoice in bringing about, our next transformation: to active agents of progress in the emerging rational society–and universe.
Biological evolution was merely a finite preface to the main story of evolution, the unbounded evolution of memes.

16. The Evolution of Creativity

Closer observation would have revealed that human languages and the knowledge for human tool use were being transmitted through memes and not genes. That made us fairly unusual, but still not obviously creative.
The advantage conferred by successive mutations that gave our predecessors’ brains slightly more creativity (or, more precisely, more of the ability that we now think of as creativity) must have been quite large, for by all accounts modern humans evolved from ape-like ancestors very rapidly by gene-evolution standards.
There is no upper limit to the value of ideas.
Real creativity is the capacity to create an endless stream of innovations.
A meme is an idea, and we cannot observe ideas inside other people’s brains.
Meme acquisition comes so naturally to us that it is hard to see what a miraculous process it is, or what is really happening. It is especially hard to see where the knowledge is coming from. There is a great deal of knowledge in even the simplest of human memes. When we learn to wave, we learn not only the gesture but also which aspects of the situation made it appropriate to wave, and how, and to whom. We are not told most of this, yet we learn it anyway. Similarly, when we learn a word, we also learn its meaning, including highly inexplicit subtleties. How do we acquire that knowledge?
Scientific observation is impossible without pre-existing knowledge about what to look at, what to look for, how to look, and how to interpret what one sees. Therefore theory has to come first. It has to be conjectured, not derived.
One needs to know the ideas before one can imitate the behaviour. So imitating behaviour cannot be how we acquire memes.
As Popper remarked, ‘It is impossible to speak in such a way that you cannot be misunderstood.’
The real situation is that people need inexplicit knowledge to understand laws and other explicit statements, not vice versa.
Now, imagine that a parrot had been present at Popper’s lectures, and learned to parrot some of Popper’s favourite sentences. It would, in a sense, have ‘imitated’ some of Popper’s ideas: in principle, an interested student could later learn the ideas by listening to the parrot. But the parrot would merely be transmitting those memes from one place to another–which is no more than the air in the lecture theatre does. The parrot could not be said to have acquired the memes, because it would be reproducing only one of the countless behaviours that they could produce. The parrot’s subsequent behaviour as a result of having learned the sounds by heart–such as its responses to questions–would not resemble Popper’s. The sound of the meme would be there, but its meaning would not. And it is the meaning–the knowledge–that is the replicator.
Humans and computers separate continuous streams of sounds or characters into individual elements such as words, and then interpret those elements as being connected by the logic of a larger sentence or program. Similarly, in behaviour parsing (which evolved millions of years before human language parsing), an ape parses a continuous stream of behaviour that it witnesses into individual elements, each of which it already knows–genetically–how to imitate.
Human beings acquiring human memes are doing something profoundly different. When an audience is watching a lecture, or a child is learning language, their problem is almost the opposite of that of parroting or aping: the meaning of the behaviour that they are observing is precisely what they are striving to discover and do not know in advance. The actions themselves, and even the logic of how they are connected, are largely secondary and are often entirely forgotten afterwards. For example, as adults we remember few of the actual sentences from which we learned to speak.
As I said, imitation is not at the heart of human meme replication.
Rather than imitating behaviour, a human being tries to explain it–to understand the ideas that caused it–which is a special case of the general human objective of explaining the world. When we succeed
They use conjecture, criticism and experiment to create good explanations of the meaning of things–other people’s behaviour, their own, and that of the world in general. That is what creativity does. And if we end up behaving like other people, it is because we have rediscovered the same idea.
what replicates human memes is creativity; and creativity was used, while it was evolving, to replicate memes. In other words, it was used to acquire existing knowledge, not to create new knowledge. But the mechanism to do both things is identical, and so in acquiring the ability to do the former, we automatically became able to do the latter. It was a momentous example of reach, which made possible everything that is uniquely human.
A person acquiring a meme faces the same logical challenge as a scientist. Both must discover a hidden explanation. For the former, it is an idea in the minds of other people; for the latter, a regularity or law of nature. Neither person has direct access to this explanation. But both have access to evidence with which explanations can be tested: the observed behaviour of people who hold the meme, and physical phenomena conforming to the law.
‘We do not acquire new memes by copying them, or by inferring them inductively from observation, or by any other method of imitation of, or instruction by, the environment.’
The transmission of human-type memes–memes whose meaning is not mostly predefined within the receiver–cannot be other than a creative activity on the part of the receiver.
When we make an explicit conjecture, it has an inexplicit component whether we are aware of it or not.
The human capacity for universal explanation did not evolve to have a universal function. It evolved simply to increase the volume of memetic information that our ancestors could acquire, and the speed and accuracy with which they could acquire it.
The value of such knowledge must have been high, so this created a ready-made niche for any adaptation that would reduce the effort required to replicate memes.
Meme evolution took place, and, like all evolution, this was always in the direction of greater faithfulness. This meant becoming ever more anti-rational.
The principal one would have been memory capacity: the more one could remember, the more memes one could enact, and the more accurately one could enact them. But there may also have been hardware abilities such as mirror neurons for imitating a wider range of elementary actions than apes could ape–for instance, the elementary sounds of a language. It would have been natural for such hardware assistance for language abilities to be evolving at the same time as the increased meme bandwidth. So, by the time creativity was evolving, there would already have been significant co-evolution between genes and memes: genes evolving hardware to support more and better memes, and memes evolving to take over ever more of what had previously been genetic functions such as choice of mate, and methods of eating, fighting and so on.
Therefore, my speculation is that the creativity program is not entirely inborn. It is a combination of genes and memes. The hardware of the human brain would have been capable of being creative (and sentient, conscious and all those other things) long before any creative program existed. Considering a sequence of brains during this period, the earliest ones capable of supporting creativity would have required very ingenious programming to fit the capacity into the barely suitable hardware. As the hardware improved, creativity could have been programmed more easily, until the moment when it became easy enough actually to be done by evolution. We do not know what was being gradually increased in that approach to a universal explainer. If we did, we could program one tomorrow.
Not only is creativity necessary for human meme replication, it is also sufficient.
people: creative, universal explainers.
We have no way of telling, at present, how likely it was for creativity to begin to evolve in apes. But, once it began to, there would automatically have been evolutionary pressure for it to continue, and for other meme-facilitating adaptations to follow in its wake.

17. Unsustainable

The idea that civilization depends on good forest management has little reach. But the broader interpretation, that survival depends on good resource management, has almost no content: any physical object can be deemed a ‘resource’. And, since problems are soluble, every disaster could in principle have been averted by the right knowledge, so every disaster whatsoever can be blamed on ‘poor resource management’. A diagnosis that fits every possible disaster explains none of them.
The Easter Islanders may or may not have suffered a forest-management fiasco. But, if they did, the explanation would not be about why they made mistakes–problems are inevitable–but why they failed to correct them.
progress is sustainable, indefinitely.
This knowledge-based approach to explaining human events follows from the general arguments of this book. We know that achieving arbitrary physical transformations that are not forbidden by the laws of physics (such as replanting a forest) can only be a matter of knowing how. We know that finding out how is a matter of seeking good explanations. We also know that whether a particular attempt to make progress will succeed or not is profoundly unpredictable. It can be understood in retrospect, but not in terms of factors that could have been known in advance.
Moreover, the Americas had not always lacked large quadrupeds. When the first humans arrived there, many species of ‘mega-fauna’ were common, including wild horses, mammoths, mastodons and other members of the elephant family. According to some theories, the humans hunted them to extinction. What would have happened if one of those hunters had had a different idea: to ride the beast before killing it? Generations later, the knock-on effects of that bold conjecture might have been tribes of warriors on horses and mammoths pouring back through Alaska and re-conquering the Old World. Their descendants would now be attributing this to the geographical distribution of mega-fauna. But the real cause would have been that one idea in the mind of that one hunter.
It would be much truer to say that the landscape we live in is the product of ideas. The primeval landscape, though packed with evidence and therefore opportunity, contained not a single idea. It is knowledge alone that converts landscapes into resources, and humans alone who are the authors of explanatory knowledge and hence of the uniquely human behaviour called ‘history’.
there are also ways of thinking that can prevent all improvement.
The sustained creation of knowledge depends also on the presence of certain kinds of idea, particularly optimism, and an associated tradition of criticism.
He thought he was prophesying decline. In fact he was prophesying the content of future knowledge. And, by envisaging a future in which only the best knowledge of 1971 was deployed, he was implicitly assuming that only a small and rapidly dwindling set of problems would ever be solved again.
colour television was a sign of the imminent collapse of our ‘consumer society’. Why? Because, first of all, he said, it served no useful purpose. All the useful functions of television could be performed just as well in monochrome. Adding colour, at several times the cost, was merely ‘conspicuous consumption’.
Optimistic opponents of Malthusian arguments are often–rightly–keen to stress that all evils are due to lack of knowledge, and that problems are soluble.
And, indeed, the deeper and more dangerous mistake made by Malthusians is that they claim to have a way of averting resource-allocation disasters (namely, sustainability). Thus they also deny that other great truth that I suggested we engrave in stone: problems are inevitable.
The only rational policy, in all three cases, is to judge institutions, plans and ways of life according to how good they are at correcting mistakes: removing bad policies and leaders, superseding bad explanations, and recovering from disasters.
But all triumphs are temporary. So to use this fact to reinterpret progress as ‘so-called progress’ is bad philosophy.
Prevention and delaying tactics are useful, but they can be no more than a minor part of a viable strategy for the future.
Problems are inevitable, and sooner or later survival will depend on being able to cope when prevention and delaying tactics have failed.
So we need the capacity to deal with unforeseen, unforeseeable failures. For this we need a large and vibrant research community, interested in explanation and problem-solving. We need the wealth to fund it, and the technological capacity to implement what it discovers.
Trying to predict what our net effect on the environment will be for the next century and then subordinating all policy decisions to optimizing that prediction cannot work.
There is a saying that an ounce of prevention is worth a pound of cure. But that holds only when one knows what to prevent. No precautions can avoid problems that we do not yet foresee. To prepare for those, there is nothing we can do but increase our ability to put things right if they go wrong.
But if we choose instead to embark on an open-ended journey of creation and exploration whose every step is unsustainable until it is redeemed by the next–if this becomes the prevailing ethic and aspiration of our society–then the ascent of man, the beginning of infinity, will have become, if not secure, then at least sustainable.

18. The Beginning

But almost no one is creative in fields in which they are pessimistic.
science claims neither infallibility nor finality.
in our infinite ignorance we are all equal.
Infinite ignorance is a necessary condition for there to be infinite potential for knowledge.
To attempt to predict anything beyond the relevant horizon is futile–it is prophecy–but wondering what is beyond it is not. When wondering leads to conjecture, that constitutes speculation, which is not irrational either.
Every one of those deeply unforeseeable new ideas that make the future unpredictable will begin as a speculation. And every speculation begins with a problem: problems in regard to the future can reach beyond the horizon of prediction too–and problems have solutions.
Whether the world ultimately does make sense will depend on how people–the likes of us–choose to think and to act.
What lies ahead of us is in any case infinity. All we can choose is whether it is an infinity of ignorance or of knowledge, wrong or right, death or life.