Notes from the Curve

I’ve always wondered what it would have been like to live in the early years of the last century, while relativity and quantum physics were rewriting what we thought possible. My not-so-original take: we are in an analogous situation now. Our world is going to change in ways we can hardly imagine. The following notes are some of those ways.

Low-background text

Some specialized particle detectors need metal free of radioactive contamination. The problem is that virtually any metal smelted after the first atmospheric nuclear tests of 1945 is tainted by fallout, so scientists recover steel and other metals from pre-war shipwrecks, where the water above shielded the metal from the contaminated air. All this preamble to say that LLMs have contaminated the available text in the world, a language fallout of sorts. Researchers will need to recover as much text as possible produced before the advent of ChatGPT to keep a realistic picture of what human language really is.
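As a toy sketch of what "recovering low-background text" might look like in practice: filter a corpus by publication date, using ChatGPT's public release as the cutoff. The record structure and field names here are hypothetical, purely for illustration.

```python
from datetime import date

# ChatGPT's public release; anything published before this date
# is "low-background" text in the sense of the analogy above.
CUTOFF = date(2022, 11, 30)

def is_low_background(doc: dict) -> bool:
    """True if the (hypothetical) document record predates the LLM era."""
    return doc["published"] < CUTOFF

corpus = [
    {"text": "a 2019 blog post", "published": date(2019, 5, 1)},
    {"text": "a 2024 article", "published": date(2024, 1, 15)},
]
clean = [d for d in corpus if is_low_background(d)]
```

Of course the hard part is nothing like this: it is establishing provenance and true publication dates at web scale, which a date field alone cannot give you.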

The Distilled Reasoner

There will be a breakthrough in model architecture, but it won’t be designed: it will be observed. At some scale, a model will stop recollecting and start re-deriving: given the base information that produced an insight, it will reconstruct the insight from first principles rather than retrieve it from compressed training data. Re-derivation will emerge because it is computationally cheaper than recollection: the network will learn it as a form of energy minimization. The difference is profound. A model that re-derives is more robust and generalizes out of distribution, precisely because it is not interpolating over memorized examples but reconstructing from underlying structure. Once the behavior is observed in a large model, it will be distilled into a simpler architecture. That distilled model will run on a phone or in a datacenter indifferently; the only differences will be computation time and context size. This is a possible Raspberry Pi moment.
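For the distillation step at least, the machinery already exists. A minimal sketch of the standard Hinton-style objective, assuming we train a small student to match a large teacher's temperature-softened output distribution (plain Python for clarity; in practice this runs on logit tensors in a training loop):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, softened by temperature."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions: the classic
    knowledge-distillation objective. Zero when the student matches."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that copies the teacher's logits incurs zero loss.
zero = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
# A mismatched student incurs a positive loss to minimize.
loss = distillation_loss([2.0, 1.0, 0.1], [0.5, 0.5, 0.5])
```

The open question in the paragraph above is not this loss, but what exactly you distill when the interesting behavior is a process (re-derivation) rather than an output distribution.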

The Raspberry Pi moment for AI is yet to come

Current LLMs become significantly useful at around 30–40B parameters. To self-host a model that size today you need to spend north of $2,000, and still have to play hyperparameter whack-a-mole to find the right combination of temperature, min-p, and so on. What we’re all waiting for is a small box with the compute of a modern GPU/TPU, 64GB of unified fast memory, and a $200 price tag. Stackable via something like NVLink or EXA, you could build your own supermodel month by month. They’ll sell like hotcakes.
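For readers who haven't played the whack-a-mole: a minimal sketch of what min-p sampling actually does, assuming logits for a next-token distribution (real implementations operate on full vocabulary tensors, not short lists):

```python
import math
import random

def min_p_sample(logits, min_p=0.1, temperature=1.0, rng=random):
    """Min-p filtering: keep only tokens whose probability is at least
    `min_p` times the top token's probability, renormalize, and sample.
    The cutoff thus adapts to how confident the model is."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    threshold = min_p * max(probs)
    kept = [(i, p) for i, p in enumerate(probs) if p >= threshold]
    # Sample from the renormalized survivors.
    r = rng.random() * sum(p for _, p in kept)
    for i, p in kept:
        r -= p
        if r <= 0:
            return i
    return kept[-1][0]
```

With `min_p=1.0` only the top token survives (greedy decoding); lower values admit progressively more of the tail, which is exactly the knob you end up fiddling with.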

Privacy as an historical accident

Privacy was a temporary accident of analog life. The power dynamics of surveillance were always based on scarcity: scarce video, scarce audio, scarce analyst attention. Ubiquitous infrastructure solves the first two. AI solves the third. We will carry devices recording everything we and everyone around us ever said or did. It will be like being passively tagged on Facebook to the nth power. The distributed angle actually helps: when everyone can see everything, the panopticon loses its asymmetry. No single entity owns the feed. Social acceptance follows naturally, titillating our gossipy and voyeuristic instincts.

Late Binding, Lifted

Late binding delays semantic resolution from compile time to runtime, trading certainty for flexibility. Automatic code generation applies the same principle one abstraction layer up: instead of delaying the resolution of a name or a type, you delay the resolution of the concept itself. Store the prompt, not the program. The idea gets resolved into code only when needed, shaped by the context at generation time. Same design principle, higher rung on the ladder.
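A toy sketch of "store the prompt, not the program": an object that holds an intent as text and resolves it into code only on first call. The `generate` function is a hypothetical stand-in for a code-generating model call, stubbed here with one canned resolution so the sketch runs.

```python
def generate(prompt: str, context: dict) -> str:
    # Stand-in for an LLM call returning source code for the prompt.
    # Stubbed: in reality this is where generation-time context shapes
    # the resolved code.
    if "median" in prompt:
        return (
            "def impl(xs):\n"
            "    s = sorted(xs)\n"
            "    n = len(s)\n"
            "    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2\n"
        )
    raise NotImplementedError(prompt)

class LateBound:
    """Stores an intent as text; binds it to code only when first invoked."""
    def __init__(self, prompt: str):
        self.prompt = prompt
        self._fn = None
    def __call__(self, *args, context=None):
        if self._fn is None:
            namespace = {}
            exec(generate(self.prompt, context or {}), namespace)
            self._fn = namespace["impl"]
        return self._fn(*args)

median = LateBound("compute the median of a list of numbers")
```

The program ships the prompt; the code exists only after the first call, shaped by whatever context was live at resolution time.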

Robotics, and then land

We still don’t know if LLMs will scale to the physical world the way they scaled to language and programming. Maybe we’ll need World Models, as LeCun argues. But the gates are open. Machines navigating human spaces and manipulating human objects will be in such demand that production won’t keep up. Soon every family will have a Butler robot, a Housekeeper robot, a Personal Chef robot, a Doctor robot. And then a Mason robot, a Carpenter robot, a Farmer robot, a Mechanic robot. At that point we’ll only need land to be self-sufficient: land comes with energy, whether solar, geothermal, hydro, or wind, all waiting to be captured. There’ll be a robot for that too.

Transaction Costs, Hacked

If the answer is transaction costs, as one of my favourite podcasts argues, AI will bring them down by transacting on your behalf, knowing every psychological technique and every salesman’s trick, and immune to all of them. An AI will buy you a car at the best price for you.

Anamnesis, Diagnosis, Synthesis

The current economics of medicine is built around scarcity: scarce doctor attention, scarce data, scarce time. Your doctor doesn’t know what you actually ate, how little you actually moved, or what your body was doing at 3am. They work from snapshots, a blood test here, a consultation there, and guess at the rest. AI fixes this. Continuous monitoring means a permanent, honest record of everything your body does, correlated against everything you do. Your doctor, or your Doctor robot, will know more about you than you do. And they will only care about you.

Fair journalism will be solved

AI will hunt for facts, correlations, and sources, and write the articles. Your personal news agent will then scan everything for bias: partisan omissions, loaded wording, agenda-driven framing. All the tricks. The hard question is who trains the bias detector, and on whose definition of bias. But the direction is clear: journalism doesn’t disappear, it gets audited in real time. The spin cycle stops.

The Last Currencies

Future economic autonomous agents won’t need dollars or euros. Energy and knowledge are self-backing: they have intrinsic worth, they are transferable, they are fungible. No central authority required. The currency is the thing is the currency.

Language Systems will substitute Operating Systems

It’s the natural consequence when software writes itself.

AC/BC

This is tongue-in-cheek, but if things take a turn for the worse, future dates will be described as before ChatGPT (bC) and after ChatGPT (aC).

Epilogue

This post could have been named in many different ways. I believe the current title expresses the feeling clearly: we are witnessing science fiction in the making. These are more than ‘times of change’. We are riding an exponential curve, like cyberpunks in a Gibson novel. How cool, eh?