Prelude to Centaur Era

The human race will have a new kind of tool, a tool that will increase the power of the mind much more than optical lenses helped our eyes, a tool that will be far superior to microscopes or telescopes... What other consequences will follow from this tool are in the hands of the fates, but they can only be great and good. For although people can be made worse off by all other gifts, correct reasoning alone can only be for the good. - Leibniz in Preface to a Universal Characteristic
While I might be an optimist in the spirit of Leibniz, if reason is what separates man from beast, what’s left to separate man from machine? I want to be able to create and share ideas that others find useful and interesting, but is this a sensible ambition to have in an age where AI is advancing to the point where it will rival or surpass humans across every intellectual domain? Will machine reasoners relegate human intelligence to a minor footnote in history, or empower us to achieve more than we ever thought possible?
Comparative Intelligence
The first thing to determine is how far to go in extrapolating AI capabilities, because if human intelligence retains a comparative advantage anywhere, then it makes sense to lean into that advantage instead of trying to compete directly with AI.
If our predictions are too conservative then we risk investing in skills that AI is better suited to, like spending six months learning mental arithmetic right before the invention of the calculator. But if we don’t constrain our imagination at all then there’s no limit to the capabilities we can invent, and we end up prophesying the arrival of omnipotent artificial gods.
Hyperscaling
With that in mind, one of the main lessons I drew from my research is not to underestimate the speed of datacenter scale-out and the eagerness of the so-called "hyperscalers" to pour eye-watering sums into training runs, test-time compute and AI infrastructure.
Maintaining the historical yearly increase in training compute will become challenging as companies run into nation-level constraints like the amount of electricity the energy grid can supply. But companies are already investing in huge infrastructure projects to overcome this.
Take Amazon, for example, which just bought a nuclear-powered data centre [1], or Microsoft, which is re-opening the Three Mile Island nuclear power plant in Pennsylvania [2]. Meanwhile, Google is figuring out how to link together geographically distributed training clusters to tap into multiple regions' energy infrastructure [3], and OpenAI is plotting a $500 billion AI supercomputer called Stargate [4].
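To make the energy constraint concrete, here is a hedged back-of-envelope calculation in Python. The cluster size, per-GPU power draw and datacenter overhead below are illustrative assumptions, not figures from any of the projects above:

```python
# Back-of-envelope: electricity demand of a large GPU training cluster.
# All numbers are illustrative assumptions, not reported figures.

num_gpus = 100_000       # assumed GPU count for a frontier training cluster
watts_per_gpu = 700      # roughly the TDP of a modern datacenter GPU (e.g. H100)
pue = 1.3                # assumed overhead for cooling, networking, etc.

cluster_mw = num_gpus * watts_per_gpu * pue / 1e6
print(f"Cluster draw: ~{cluster_mw:.0f} MW")                      # ~91 MW

# A large nuclear reactor outputs roughly 1,000 MW of electricity, so a
# cluster an order of magnitude bigger would consume a reactor's worth.
print(f"Share of a ~1,000 MW reactor: ~{cluster_mw / 1000:.0%}")  # ~9%
```

At this scale the problem stops being "buy more GPUs" and becomes "where do the megawatts come from", which is exactly why these projects reach for nuclear plants and multi-region clusters.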
The justification behind this is illustrated by the blue bars in the following graph of o3’s performance on competition math problems. Full-compute o3 achieved an accuracy 40% higher than o3-mini with low compute. In other words, paying more in inference-time compute allows o3 to exhaust more of the solution space, giving it a better chance of finding a solution to a problem. What if we were to train even better models and apply even more compute to important outstanding problems, like the creation of new life-saving drugs or a solution to the Riemann Hypothesis? There is a clear incentive for the world’s wealthiest companies to put everything on the line in pursuit of systems capable of doing this.
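One way to see why extra test-time compute pays off: if a verifier can recognise a correct solution, and each independently sampled attempt succeeds with probability p, then the chance that at least one of k attempts succeeds is 1 − (1 − p)^k. A toy sketch, where the per-sample success rate is an illustrative assumption rather than o3’s actual sampling behaviour:

```python
# Toy model of test-time compute: sample k candidate solutions and
# assume a verifier can pick out a correct one if it exists.

def pass_at_k(p: float, k: int) -> float:
    """P(at least one of k independent samples is correct)."""
    return 1 - (1 - p) ** k

p = 0.05  # assumed per-sample success rate on a hard competition problem
for k in (1, 10, 100, 1000):
    print(f"k={k:>4}: {pass_at_k(p, k):.1%}")
# k=   1: 5.0%
# k=  10: 40.1%
# k= 100: 99.4%
# k=1000: 100.0%
```

The independence assumption is generous - real samples are correlated - but it captures the economic logic: on verifiable problems, accuracy can simply be bought with compute.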
So be really wary of hiding your AI capability estimates behind compute or scale bottlenecks. If there's a large but finite search space between current-day AI and the solution to a problem, it's best to assume that those bottlenecks will get vaporised through trillions of dollars and sheer force of will.
Expert Human Approximators
It's hard to say what will emerge at the end of a frontier model training run, but I think it’s sensible to plan with the expectation that, at the very least, the hyperscaling era will stretch the o1 paradigm to its limits, and that AI language and visual reasoning will reach a level relative to humans comparable to where AlphaGo Lee stood relative to professional Go players.
See the video below for a recap of the relationship between o1 and AlphaGo.
In other words, the lower bound for o1-style large language models is expert human approximators in any task where data is easy to collect, performance is easy to measure, and problems to learn from are easy to access or generate - think solving competition math problems, fixing well-specified GitHub issues, navigating the web, operating factory equipment and so on. “Expert human approximators” actually feels quite conservative - there doesn’t seem to be a barrier to superhuman performance in these areas.
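The common thread in those tasks is a cheap, automatic grader. Here is a minimal sketch of the generate-attempt-verify loop that makes a domain trainable, using generated arithmetic as a stand-in domain; the random-guessing "policy" is a placeholder for an LLM being trained with reinforcement learning:

```python
import random

def generate_problem() -> tuple[str, int]:
    """Cheaply generate a problem together with its ground-truth answer."""
    a, b = random.randint(1, 99), random.randint(1, 99)
    return f"{a} + {b} = ?", a + b

def policy(problem: str) -> int:
    """Placeholder model: a real system would sample an LLM here."""
    return random.randint(2, 198)

def reward(answer: int, truth: int) -> float:
    """Automatic grading by exact match: no human labeller in the loop."""
    return 1.0 if answer == truth else 0.0

# Collect (problem, answer, reward) triples; an RL trainer would reinforce
# whatever chain of reasoning produced the high-reward answers.
batch = []
for _ in range(1_000):
    problem, truth = generate_problem()
    answer = policy(problem)
    batch.append((problem, answer, reward(answer, truth)))

print(f"mean reward: {sum(r for *_, r in batch) / len(batch):.3f}")
```

Every task listed above - competition math, well-specified GitHub issues, web navigation - admits some version of generate_problem and reward, which is what makes the expert-human floor plausible.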
If this is the case, then what will become of humans? Will there be any meaningful intellectual work left for us to do?
Imagination Games
There is something conspicuously missing from the list of recent AI achievements. As Dwarkesh Patel put it: “if even a moderately intelligent person had this much stuff memorised, they would notice - Oh, this thing causes this symptom. This other thing also causes this symptom. There's a medical cure right here.”
Why aren’t we seeing AI autonomously making creative connections that lead to new discoveries? In the replies to Dwarkesh’s Twitter thread, commenters advanced a number of interesting hypotheses:
“Models are capable of outputting many groundbreaking ideas, they don't yet have the capacity to test them, so they can't make discoveries” [5]
“Superhuman research taste will remain out of reach for LLMs until we find and scale a new post-training paradigm analogous to the post-training that enabled high-quality CoT reasoning in math and coding” [6]
“There aren’t any discoveries to be made just by reading the literature” [7]
Regardless of the reason, at the beginning of this essay I wrote that if human intelligence has any comparative advantage whatsoever, you should go all in on it instead of trying to compete directly with AI - and idea generation certainly seems to be our biggest intellectual advantage.
As further evidence, consider the fact that even the least learned humans - children - are capable of surprising levels of creativity relative to modern AI. For example, children who haven’t read a single book are capable of spontaneously inventing new words that actually make sense.
When children play games, they invent fantasy worlds and scenarios they haven’t seen or experienced before, and are capable of resolving disputes by inventing (or re-discovering) moral rules.
Children frequently clash over the “finders-keepers” rule which grants individuals monopolies over desirable objects. It’s common for them to reinvent the principle of “sharing through turn-taking” to resolve such conflicts. - Imagination Games
Whether or not this is a “defensible moat” over the long term is obviously extremely uncertain, but the fact that children are capable of independent discovery and the world’s most intelligent AI systems are not gives me the impression that hyperscaling existing systems further along the same axes will not magically imbue them with this capability.
So where does this leave us, and what was this lengthy exposition a prelude to?
Centaur Era Starts NOW
We are entering the CENTAUR ERA, where human creativity and imagination are augmented with the raw reasoning power of AI. If you care about progress and knowledge creation, and you want to play some role in building the future: abandon anything the machine can perform on your behalf, automate away the predictable, zero-entropy bits, and free your mind to focus on the questions of what is valuable, what problems are worth solving, what is beautiful, and what it means to make progress on a civilisational scale.
Embrace your unique intellectual endowment, get weird, come up with genuinely creative insights and create companies, organisations, partnerships to see them through. Help drive the next golden age of human flourishing and advancement.
The arrival of machine reasoners has only made it more obvious what you should have been doing all along, and it has opened infinitely many more doors than it has closed. All you need is the courage to abandon your outdated plans and pick something big to work on…
1. https://www.ans.org/news/article-5842/amazon-buys-nuclearpowered-data-center-from-talen/
2. https://www.bbc.co.uk/news/articles/cx25v2d7zexo
3. https://www.theinformation.com/articles/microsoft-and-openai-plot-100-billion-stargate-ai-supercomputer
4. https://x.com/SGRodriques/status/1888622776959009244
5. https://x.com/simocristea/status/1888448327118647581
6. https://x.com/davidad/status/1888621941667303762
7. https://x.com/CalvinMccarter/status/1889026362477818052