Discussion about this post

Jim Amos

Speaking of rare materials like cobalt, I doubt we could ever source enough of it to build batteries for 10 billion humanoids. We know the planet is in decline and natural resources are depleting, yet these techbros seem oblivious. Do you ever get the sense that they know their robotics and AI projects are bullshit, but they're hoping to sell enough shovels and pipe dreams to get away with it?

Alejandro Piad Morffis

Great article! There are a lot of layers to this, from corporate greed to lack of innovative vision, but the technical issue that makes the sim-to-real gap extremely hard (regardless of the politics or the economics of it) is, I think, probably something similar to the issue of hallucinations. ML systems trained via backprop need to come up with differentiable approximations of the input space to be able to generalize at all. This means, for example, that LLMs need to map sentences to vectors and compare vector distances to determine what a sentence "means".
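
To make the vector-distance idea concrete, here is a toy sketch in Python. It is only an illustration: the hash-based "encoder" and the embedding width are invented stand-ins for a real learned embedding model, which would produce far better vectors.

```python
import hashlib
import numpy as np

DIM = 256  # hypothetical embedding width, chosen arbitrarily

def embed(sentence: str) -> np.ndarray:
    """Stand-in for a learned encoder: sums a fixed random vector
    per token. A real model learns this mapping end to end."""
    vec = np.zeros(DIM)
    for token in sentence.lower().split():
        seed = int(hashlib.md5(token.encode()).hexdigest(), 16) % (2**32)
        vec += np.random.default_rng(seed).normal(size=DIM)
    return vec / (np.linalg.norm(vec) + 1e-9)  # unit-normalize

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b)  # inputs are already unit-norm

a = embed("the robot picks up the cup")
b = embed("the robot grasps the mug")
c = embed("interest rates rose last quarter")
print(cosine(a, b))  # overlapping words -> higher similarity
print(cosine(a, c))  # unrelated sentence -> lower similarity
```

A trained encoder would place the first two sentences close together because of meaning rather than shared tokens, but the mechanism (compare distances between vectors) is the same.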

In the same sense, these robots need to map the world to a vector representation to determine what is going on in the kitchen. But smooth, differentiable representations can only ever be an approximate model of macroscopic reality, and it is in that tiny difference between the smooth proxy and the non-continuous reality you want to model that problems arise. For robots, this means tiny discrepancies between their prediction of the outcome of a decision and the actual outcome, discrepancies that get amplified the longer the prediction chain grows.
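
A minimal sketch of that amplification, assuming an invented one-dimensional "world" (a chaotic logistic map) and a learned proxy that is wrong by only one part in a million per step; neither is taken from any real robot stack:

```python
def real_step(x: float) -> float:
    return 3.9 * x * (1.0 - x)        # stand-in "world": chaotic logistic map

def model_step(x: float, eps: float = 1e-6) -> float:
    return 3.9 * x * (1.0 - x) + eps  # near-perfect smooth proxy

x_world = x_model = 0.5
for t in range(1, 41):
    x_world = real_step(x_world)   # what actually happens
    x_model = model_step(x_model)  # open-loop prediction, never corrected
    if t in (1, 10, 20, 40):
        print(f"step {t:2d}: |prediction - reality| = {abs(x_model - x_world):.6f}")
```

Run open-loop, the 1e-6 error becomes an order-one disagreement within a few dozen steps. Closing the loop on fresh observations is what keeps the discrepancy from compounding, and long-horizon humanoid tasks give you far fewer chances to do that.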

This is why simple, reactive agents like Roombas work so well in their restricted domain: they approximate a much smaller part of the real world (just flat floors with static obstacles), and they can self-correct fast because their model of the world is super simple, just a few parameters (plus they are not learning agents; they are hard-coded, I believe). Humanoid robots are supposed to deal with the whole chunk of reality that humans have evolved over eons to represent accurately in our brains, and I think we do need symbolic world models with causal inference rules to model it accurately. I don't know if backprop can lead us all the way there.
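
For contrast, here is a hedged sketch of a Roomba-style reactive loop; the grid world and the bump-and-turn rule are invented for illustration and are not the actual iRobot firmware. The point is that the "world model" is a couple of hard-coded rules, so any mismatch with reality gets corrected on the very next sensor reading instead of compounding:

```python
import random

MOVES = {0: (0, 1), 1: (1, 0), 2: (0, -1), 3: (-1, 0)}  # N, E, S, W
W = H = 8  # tiny room

def reactive_policy(bumped: bool, heading: int) -> int:
    """Keep going until you bump, then turn to a random new heading.
    No learned representation, no prediction chain to drift."""
    if bumped:
        return random.choice([h for h in range(4) if h != heading])
    return heading

x, y, heading = 4, 4, 0
visited = {(x, y)}
for _ in range(500):
    dx, dy = MOVES[heading]
    nx, ny = x + dx, y + dy
    bumped = not (0 <= nx < W and 0 <= ny < H)  # hitting a wall fires the bump sensor
    heading = reactive_policy(bumped, heading)
    if not bumped:
        x, y = nx, ny
        visited.add((x, y))

print(f"covered {len(visited)} of {W * H} cells")
```

The entire state this agent needs is a heading and a bump bit; a kitchen-cleaning humanoid has no comparably small summary of its task.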

(Sorry for the jargon, it's early and my brain is still too dumb to make this more intelligible.)
