Inference Time Compute

September 16, 2024

LLMs can now spend time thinking. Soon, they will think better and faster, using tools to verify each step, backtracking from dead ends, building a mental model of the problem space until, vicariously living through billions of problems, they stretch the frontier of human knowledge.
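Strip away the poetry and that loop is just search with a verifier in it: propose a step, check it with a tool, back up when a branch dies. A minimal sketch, with `propose_steps`, `verify`, and `is_solution` as hypothetical stand-ins for the model and its tools:

```python
from typing import Callable, Optional

def solve(state: str,
          propose_steps: Callable[[str], list[str]],
          verify: Callable[[str], bool],
          is_solution: Callable[[str], bool],
          depth: int = 0,
          max_depth: int = 8) -> Optional[list[str]]:
    """Depth-first search over reasoning steps: extend the current state
    with verified steps, backtracking whenever a branch dead-ends."""
    if is_solution(state):
        return []
    if depth >= max_depth:
        return None  # dead end on this branch: give up and backtrack
    for step in propose_steps(state):       # the model proposes candidate steps
        candidate = state + "\n" + step
        if not verify(candidate):            # a tool checks the step
            continue                         # rejected: try the next proposal
        rest = solve(candidate, propose_steps, verify, is_solution,
                     depth + 1, max_depth)
        if rest is not None:
            return [step] + rest             # a verified path to a solution
    return None                              # nothing worked: backtrack
```

The real systems are far more sophisticated than this toy, but the shape is the same: propose, check, backtrack.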

And at that point, maybe we won't call it just 'human knowledge'. Was it more our genius or theirs?

I hope that we don't anthropomorphize them. Tools, not creatures. But I also hope that in the abyss of trillions of parameters, we do not gaze thanklessly.