In Which I Talk About Writing Minecraft’s End Poem, And Describe the Problem My Next Book Is Trying To Solve
Over a decade ago now, I wrote the ending to the computer game Minecraft (before its official launch in November 2011). It’s a short narrative that, in those days, scrolled slowly up the screen for about nine minutes, after you had killed the Ender Dragon. Markus Persson, better known as Notch – the guy who developed and coded the game itself – made it hard to speed this up, or skip, because he wanted everyone to read it. (It’s easier to skip it now.) Players of the game called it the End Poem. Some of them loved it. Some of them hated it. I got marriage proposals, and death threats.
And I got a lot of questions about what the ending meant.
The ending discusses the player, and their place in the universe. And it discusses the universe itself.
It contains, in some ways, most of the wisdom I had accumulated up to that point in my life. Everything I thought it might be useful for the player to know; or be reminded of.
But at several points, the text talks about things the player DOESN’T know about themself; about the universe. And those parts are glitched, unreadable. I wrote that unreadability into the narrative; I asked Notch to code it so that some parts could not be read.
Many of the questions I get are about those parts of the End Poem, the parts where it seems about to give you some secret knowledge of the universe; but it crackles, and blurs, and is gone.
A lot of the people who write to me think that I’m hiding some important truth in there, behind the crackle and blur.
The truth is, those sections are where my own knowledge broke down. I wanted to leave space in the End Poem for all that I didn’t know. To acknowledge how small and partial were the truths I could share. Truths which, most of the time, weren’t even mine; which came from somewhere or something far bigger and wiser than me.
I wrote that ending in longhand, with a pen, on the blank pages of a scruffy orange notebook. And there were moments when the pen sped up in my hand, words poured onto paper as I watched, fascinated… and I had no idea what the words I had just written said, until I read them. At such moments, it felt as though that ending were somehow being written by the universe itself, through me. That it wanted to be understood.
I’m not saying that is what was happening; but I am saying, that is how it felt.
ELEVEN YEARS LATER…
In the decade or more since writing that ending, I have been obsessively working away on a book called The Egg And The Rock. The book is, basically, a fresh description, or explanation, of the universe. It uses the techniques of a novelist, with scientific papers as the raw material. It is, in its way, an attempt to fill in those glitches, those gaps, in my knowledge of the universe.
And, despite the fact that I've been working on it for over a decade, I've hardly mentioned it in public.
The launch of the James Webb Space Telescope, however, has changed things. I now want to publicly articulate the main ideas in the book. In particular, I want to put on record the book’s predictions about the early universe, before the James Webb telescope sends back its first proper pictures of that early universe, and proves me right, or wrong.
HIGH RISK, HIGH REWARD
If my predictions turn out to be totally wrong, I’ve wrecked my book in public, and thrown away ten years of my life.
But if my predictions turn out to be largely right, then the book, and more importantly the ideas in the book, will have earned some credibility, and will therefore reach more people and have more impact. Particularly if my predictions turn out to be more right than the predictions of most mainstream astronomers, cosmologists, and astrophysicists.
I believe my ideas (and the ideas of others that I play with in the book) are very likely to prove correct, so I like the odds. OK, let’s roll the dice…
(This post will outline the main problem I am trying to solve.
The next post will, by briefly outlining the ideas in the book, show you how I’m trying to solve that problem.
And the post after that will lay out some predictions about what the James Webb Space Telescope will see, and will not see, based on the ideas in the book.)
THERE IS A PROBLEM WITH SCIENCE
The book tries to solve a problem in science, a big problem. Maybe the biggest, because the failure of science to solve it is not only limiting science’s ability to explain the universe, but also messing up a lot of people, psychologically.
It’s not a new problem, but it is one that has grown worse over time.
Philip W. Anderson, the Nobel-Prize-winning theoretical physicist, outlined the problem brilliantly in his classic paper More Is Different (published in Science in 1972). I’m going to draw from that wonderful paper in this post (as I do in the book). Here is how he started:
The reductionist hypothesis may still be a topic for controversy among philosophers, but among the great majority of active scientists I think it is accepted without question. The workings of our minds and bodies, and of all the animate or inanimate matter of which we have any detailed knowledge are assumed to be controlled by the same set of fundamental laws which except under certain extreme conditions we feel we know pretty well.
And that’s an excellent summary of what is still the situation: most scientists believe there is one set of fundamental laws, which they basically understand, and which can basically explain everything. He then clicks that attitude forward one notch:
It seems inevitable to go on uncritically to what appears at first sight to be an obvious corollary of reductionism: that if everything obeys the same fundamental laws, then the only scientists who are studying anything really fundamental are those who are working on those laws. In practice, that amounts to some astrophysicists, some elementary particle physicists, some logicians and other mathematicians, and few others.
Anderson then points out that there is a kind of hierarchy of science, in which each science rests on all the knowledge below it.
It's a hierarchy of complexity. Starting with the simplest – that is, the most fundamental; the closest to the bedrock of the natural world – the hierarchy is roughly this: particle physics, solid state physics, chemistry, molecular biology, cellular biology… and so on up through the sciences until you get to physiology, psychology, and the social sciences.
Each new level of complexity is dependent on, rests on, all the knowledge held in the levels below it.
But, as Anderson puts it,
At each level of complexity entirely new properties appear, and the understanding of the new behaviours requires research which I think is as fundamental in its nature as any other.
In other words, you can’t predict the behaviour of a complex system simply by studying the parts that comprise it. No matter how fully you understand those parts, the behaviour of the complex whole will startle you.
And so, at each level of complexity, you have to start again, because you are in a new world, with new rules. That is a powerful and profound insight, which has devastating implications for how we are currently studying the universe.
As Anderson elaborates it,
At each stage entirely new laws, concepts, and generalisations are necessary, requiring inspiration and creativity to just as great a degree as in the previous one. Psychology is not applied biology, nor is biology applied chemistry.
Enough Anderson for now. It’s well put, but abstract, so let's put it another way. (To make this more vivid, I’ll add a dog…)
THE EMERGENT PROPERTIES OF UNIVERSES, AND DOGS
At the beginning of the universe we have a hot, undifferentiated soup of particles. But 13.8 billion years later we have a dog that can bring you your phone.
Careful, careful, don’t bite it… don’t drop it… oh, well done, good dog!
Can reductionism predict the dog from the particles? No. Can it predict you? No. The phone? No.
Why is this such a problem? Well, we know that our universe casually produces sub-units as complicated as, say, a dog, or a phone, or you. The universe-as-a-whole is clearly far more complex than the dog which makes up one of its smallest subunits.
And science has no problem seeing that it is impossible to understand, explain, or predict the behaviour of a dog using just physics, or chemistry, or even biology. When a physicist’s dog runs upstairs and they want to know why, a theory of how gravity acts on matter is not going to help them.
A dog has emergent properties. Indeed, a specific dog with a specific history has specific qualities which other dogs will lack, and which, again, cannot be predicted using merely physics, or chemistry, or biology. Nor can the dog be understood in isolation from its environment.
You might need to know that the dog’s favourite ball is upstairs. Or that its least favourite cat is downstairs. The dog’s movement could have been triggered by a sound, or a smell.
What the dog does in the next ten minutes is similarly going to be the result of all kinds of interactions between senses, objects, internal states, external events, history… It’s really complicated, and that’s just ten minutes in the life of a dog.
Yet we are studying the universe using mostly physics, and a little chemistry. We are studying it as a separate set of parts (black holes, stars, molecular clouds, quasars…). We are studying it, in other words, using an impeccably reductionist approach. But reductionism doesn’t work at the scale of the universe. If physics can’t explain a dog, then how can it explain the universe that formed the stars that fused the elements that made the chemistry that evolved the biosphere that generated the dog? Given that this universe casually generates lots of dogs – which are pretty sophisticated sub-systems – this universe is clearly a far more sophisticated system. A hyper-sophisticated system.
The universe is more complicated than a dog.
MONKS OF THE NEW DARK AGES
The failure to confront that painful truth leads to absurdities like our current reductionist model of the universe, which claims that 27% of the mass-energy of the entire universe is invisible matter made of unknowable stuff, and another 68% of it is a magical force.
If scientists were forced by law to call dark matter “Angels” until they, you know, had actually detected some, and to call dark energy “The Holy Spirit” until they could come up with even a rough account of what it was or where it came from, it might be a bit more obvious what they were doing.
Because right now, they are invoking angels and the Holy Spirit to save their beautiful but simplistic paradigm from the far more complex reality they are seeing.
This, despite the fact that they haven’t detected a single particle of dark matter, in decades of attempts, across hundreds of experiments, costing billions of dollars; and as for dark energy, there are many, many theories; which means, of course, there is no theory at all.
And yet they trundle on, firm believers in Dark Matter and Dark Energy. They live less and less like scientists, and more and more like monks of the New Dark Ages, in an ever-more faith-based universe that is, at this point, 95% invisible.
Meanwhile, the 5% of the universe we can actually see (stars, galaxies, and gas) continues to move in deeply peculiar ways that reductionist physics did not predict, and which demand a fresh approach if we are to explain them. Galaxies don’t rotate the way they should. They don’t cluster the way they should. Those clusters don’t flow the way they should. And they form larger and larger structures – walls, filaments, voids – which were, again, not predicted. Everywhere, randomness is predicted and expected, but structure and order are found.
That doesn’t mean that galaxies are breaking the laws of physics, any more than a dog is breaking the law of gravity when it runs upstairs. But it does mean that there are, clearly, powerful emergent behaviours at play here, that come from the extraordinary complexity of galaxies as systems (and the universe as a complex whole). Emergent behaviours which can’t be explained by treating the whole universe as particles… except, you know, more of them.
More is different.
Yet the monks, undaunted by fifty years of failure, continue to look for explanations in the wrong way (reductionism), at the wrong level (physics).
I mean, I get it. Their reductionist, physics-based approach is generating amazing results at its level. It’s such a powerful approach in terms of observation, and data acquisition, that it’s easy to ignore its recent stream of failures at integrating that new knowledge into a broader theory of the universe that works. After all, this approach worked for atoms, with the stunning success of the Standard Model; so why shouldn’t it work for galaxies? Aren’t galaxies just lots and lots of particles?
Well, yes. But so is a dog. So are you. And yet no formula can predict your behaviour. Evolved organisms are not simple stacks of particles. And the hypercomplex system that generates evolved organisms is clearly not a simple stack of particles either. (What it is, I will lay out in my next post. Let’s just stick with the problem for now.)
I get grumpy about this, but I shouldn’t. Looked at another way, science is simply doing its job.
YOU SAY YOU WANT A REVOLUTION
The great historian and philosopher of science, Thomas Kuhn, described our current situation beautifully in his masterpiece, The Structure of Scientific Revolutions. The book was published in 1962, yet it describes perfectly the last 50 years of cosmology and astrophysics, because the situation is universal: this is what always happens. Here’s Kuhn:
In the development of any science, the first received paradigm is usually felt to account quite successfully for most of the observations and experiments easily accessible to that science’s practitioners.
Yep. Everything can basically be explained by gravity acting on matter. It’s all just dead matter obeying eternal laws.
Further development, therefore, ordinarily calls for the construction of elaborate equipment…
Yep. The Large Hadron Collider (tasked with exploring the high-energy physics of the early universe) is 27 kilometres in circumference, cost roughly 5 billion dollars to build, plus maybe a billion a year in running costs, and discovered… absolutely nothing. To be fair, it confirmed the existence of the Higgs boson (but that particle had been predicted back in the 1960s). To be even more fair, the James Webb Space Telescope cost a mere twice as much – 10 billion dollars – and is actually useful.
OK, let’s let Kuhn finish his sentence…
…the development of an esoteric vocabulary and skills, and refinement of concepts that increasingly lessens their resemblance to their usual common-sense prototypes.
Is that true for cosmology yet? Esoteric vocabulary? Highly refined concepts? Let’s grab a recent cosmology paper from Nature and see. Here’s a good one: a paper about what goes on inside the small, dense stars called white dwarfs. OK, yes. It’s almost impossible for a naive reader to understand many of the papers now coming out of the field.
That’s not a criticism! It’s unavoidable, as scientists develop a new and useful, highly specialised, vocabulary over time. But it does make a field increasingly insular (and at risk of groupthink), once that new private vocabulary develops to the point where the field becomes unable to communicate with outsiders. (And as outsiders lose the ability to critique it knowledgeably.)
That professionalisation leads, on the one hand, to an immense restriction of the scientist’s vision and to a considerable resistance to paradigm change. The science has become increasingly rigid.
Yep. No comment needed there.
But as Kuhn points out, the seeds of the next revolution are already stirring, because…
On the other hand, within those areas to which the paradigm directs the attention of the group, normal science leads to a detail of information and to a precision of the observation-theory match that could be achieved in no other way.
And so the last few decades have seen magnificent progress in observation; highly imaginative missions have mapped the solar system, from the sun to Pluto; technically astonishing telescopes have been built; new instruments of all kinds have given us torrents of observational data, which we have analysed brilliantly… at the level of physics.
SCREAMING “EPICYCLES” DIRECTLY INTO YOUR FACE
Which gets us to the heart of the problem.
The starting point of that reductionist model is that the universe is just dead matter, blindly obeying eternal laws (like, say, gravity). And so, when the behaviour of that matter turns out to be deeply peculiar – galaxies form far faster than they should; they rotate far more coherently than they should, and so on – our current scientific paradigm can only throw more matter into its models – Can’t see it? No problem! Dark matter! – and move this magical, invisible matter around the models until it drags everything into place using gravity.
As the wonderful German theoretical physicist Sabine Hossenfelder said a couple of years ago (and the problem has only gotten worse since),
These papers typically start with a brief survey of other, previous, simulations, none of which got the structures right, all of which have been adapted over and over and over again to produce results that fit better the observations. It screams “epicycles” directly into your face.
Some of the computer models now being used to “prove” that we live in a dark-matter and dark-energy dominated universe have ten, or even a dozen, free parameters that the scientists can tweak to fine-tune the results. But when you are free to tweak that many parameters, you can get any result you like – and fit it to any observational data you are given.
As possibly the greatest mathematician of the 20th century, John von Neumann, put it,
With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.
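A toy numerical sketch makes von Neumann’s point concrete (this is entirely my own illustration, not any real cosmology code): give a model as many free parameters as there are data points, and it will “fit” pure random noise perfectly, while explaining nothing.

```python
import numpy as np

# Toy illustration of von Neumann's elephant: fit polynomials of
# increasing degree to six points of PURE NOISE. There is no pattern
# here to find -- yet with enough parameters, the fit becomes exact.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 6)
y = rng.normal(size=6)              # random data: nothing to explain

for degree in (1, 3, 5):            # i.e. 2, 4, and 6 free parameters
    coeffs = np.polyfit(x, y, degree)
    residual = np.max(np.abs(np.polyval(coeffs, x) - y))
    print(f"{degree + 1} parameters -> max residual {residual:.2e}")

# With 6 parameters through 6 points the polynomial interpolates the
# noise exactly (residual ~ 0): a "perfect fit" with zero insight.
```

The lesson transfers directly: a cosmological simulation with a dozen tunable parameters can always be steered to match the observations, which is precisely why a good fit, on its own, proves nothing.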
They’ve all lost their goddamn minds, and it has happened so slowly and incrementally that nobody has noticed.
FIFTY THREE THOUSAND MILLION DOLLARS
Which brings us back to More Is Different. Anderson’s paper is a classic: it has been cited almost five thousand times. And the rate of citation goes UP over time, as it becomes ever more urgently relevant. Yet its message has not been learned, or applied, because it cannot be applied inside science as it is currently constituted.
A physicist reads that paper, and nods, and weeps, and… goes back to doing physics. A biologist nods, and sighs… and goes back to doing biology. And so on. Many superb scientists have an uneasy feeling, deep down, that Anderson’s argument is true, and some of them will even say so. But there’s almost no way to act on it, and still get funding and a tenured position.
Oh, every university preaches the virtues of multidisciplinary approaches, but it’s almost always bullshit, because under the current system nobody can possibly learn five scientific cultures, and their specialised languages, in enough detail to outperform five specialists (and pull in more funding than five specialists can). So universities just force a biologist to talk to a physicist twice a month, and tick the multidisciplinary box. True multidisciplinary approaches have no home.
You, jumping up and down, with your hand up at the back, what? …Oh, the Santa Fe Institute. Yes, sure, places like the Santa Fe Institute do wonderful interdisciplinary work, I love the Santa Fe Institute, but their budgets are rounding errors compared to the established universities. The Santa Fe Institute’s annual budget is ten million dollars. Harvard University’s total operating revenue in 2021 was 5 billion dollars; its operating surplus was 282 million dollars; oh, and its endowment, as of June that year, was 53 billion dollars. One university, sitting on fifty-three… thousand… million… dollars. By the way, Harvard’s net assets went up 13 billion that year. Not a lot of pressure to change the current system coming from those figures.
Meanwhile, in the five decades since More Is Different was published, science has grown inexorably more reductionist. The disciplines have sub-divided, and sub-divided again, into ever-smaller fragments of… whatever the invisible whole is. The thing they are all, technically, supposed to be studying. (We don’t know what it is, because we are no longer even trying to see it.)
And this is, in some ways, almost unavoidable: the brain capacity of human beings has remained constant over those fifty years, while the amount of information in any given field has multiplied ten- or twenty-fold.
HEY KIDS, LET’S READ ALL THE SCIENCE EVER PUBLISHED!
It can be hard to understand just how extreme the situation has become, so let’s go back a bit, for perspective. Isaac Newton, it is generally agreed, had probably read ALL THE SCIENCE EVER PUBLISHED. Sweet; but in 1665, the year Newton started to develop calculus, and got his BA, there were two scientific journals on earth. (One English, one French.) And there were no back issues to read, because both were founded that year.
Now there are over 30,000 scientific journals, publishing over 2.5 million papers per year… and the number of scientific papers ever published doubles every fifteen years.
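To make that compounding concrete, here is the arithmetic implied by the doubling figure above. (The 15-year doubling period is the claim in the text; the 45-year career length is my own illustrative assumption.)

```python
# Arithmetic sketch of the doubling claim: if the total number of
# published papers doubles every 15 years, what growth rate does that
# imply, and how much does the literature multiply over one career?
doubling_period = 15                            # years (per the text)
annual_growth = 2 ** (1 / doubling_period) - 1  # compound annual rate
career = 45                                     # years (my assumption)
career_multiple = 2 ** (career / doubling_period)

print(f"implied annual growth: {annual_growth:.1%}")
print(f"over a {career}-year career: {career_multiple:.0f}x more papers")
# Roughly 4.7% per year; an 8-fold expansion within one working life.
```

So a scientist who starts reading today retires into a literature eight times larger, which is the inflationary trap in a single number.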
Under such circumstances, how can a young scientist do anything other than specialise far more tightly than the previous generation of scientists? And this of course means that they don’t, they can’t possibly, know the broad history of their own field, let alone of all the others. To learn everything the previous generation learned would leave them stranded in the past, miles from the current frontier of their discipline. They have to dump much of that broad history to make room for – to make time to learn – more recent, far narrower stuff. They are caught in a grotesque inflationary trap.
So over the past decade (in some ways, over my entire lifetime), I have tried to turn myself into a kind of scientist that hasn’t existed for a couple of hundred years. The kind of scientist who existed back before we called it science, and broke it into parts; back when it was still called natural philosophy, and dealt with everything. A kind of scientist that the university system simply can’t produce any more. A kind of scientist that can put all this shit together.
Am I qualified to do it? That’s a good question. Here’s an honest, if unsatisfactory answer: nobody is qualified to do it; but somebody needs to.
We need a way of thinking about the universe, and our place in it, that gives meaning back to the data, and thus back to existence, without sacrificing any of the wonderful benefits of reductionist science. A method that jolts electricity into the exhausted body of contemporary cosmology, and brings it back to life.
SERIOUSLY THOUGH, WHAT QUALIFIES YOU TO REDESCRIBE THE UNIVERSE?
OK; fair enough, good question: why me? Because I’m good at pattern recognition. And I am particularly good at recognising when a scientific field has lost its ability to see the bigger pattern. This is surprisingly common: all it takes is a flawed fundamental assumption. If the flaw isn’t caught early on, it gets baked into all the models, and soon everyone is wrong in the same way. The flaw quickly becomes invisible, and groupthink punishes anyone who says there is a problem.
Early scientific thinkers, for example, spent 1600 years perfecting Ptolemy’s model of the solar system, in which the sun, moon, and planets all merrily orbit the earth.
They endlessly refined the theory, adding more and more epicycles, until it worked pretty well… except for the fact that its basic assumption was totally wrong.
Phlogiston is another beautiful example of a popular mainstream theory that was elaborated and perfected by many excellent scientists over many years, while remaining magnificently – almost perfectly – wrong.
This problem isn’t confined to the past. Back in 2007, for example, I predicted, in some detail, the financial crisis of 2008 (in a blog post entitled, fairly unambiguously, “Biggest Crash In World History Coming Up”), at a time when the majority of mainstream economists were unable to see it coming.
WAIT, STOP, ISN’T YOUR BIGGEST SELLING BOOK ABOUT A RABBIT WHO EATS HIS OWN POO? AIMED AT FIVE-YEAR-OLDS?
Er, yes. Rabbit’s Bad Habits. (Illustrated by the brilliant Jim Field.) It’s now published in over 30 languages. One of the most successful books about a rabbit eating his own poo of all time. Many say it revolutionised the entire rabbit-eating-his-own-poo genre. I’m very proud of it.
Look, I’m not claiming to be a genius. The approach I’m using, and the theory I’m exploring, are brilliant, not me.
If I can use them to make better predictions than several million, far better informed, mainstream scientists… That's terrific evidence, not that I am smart, but that this approach gets results, and this theory has real explanatory power, and should be taken up and explored by the mainstream.
And of course, I could turn out to be totally wrong about all this. In which case, oh well. Back to the rabbit-poo books, having learned a valuable lesson.
Well, that’s the problem.
Next post, I’ll tell you about the book, and the theory.
(PS: As you have probably guessed, I am trying to force a paradigm shift in how we think about the universe, and our place in it. Such revolutions happen one mind at a time. If this post clicked with you, pass it on to a friend you think would also click with it. These tiny actions will make a huge difference over time. We don’t have to be passive in the face of a bad paradigm. It’s our universe too. Also, we need to save the next generation of brilliant astronomers, cosmologists, and astrophysicists from laboriously climbing a ladder that is currently up against the wrong wall.)