🤷‍♂️ Reactions to AI 2027

AI 2027 is great in terms of format, and in terms of seeing where good forecasting and reasoning practice takes you, and I think it’s a mistake not to take an acceleration this dramatic seriously.

That said, my first reactions were (a) an inability to align my expectations with Daniel's and Scott's, and (b) unhappiness with how deterministically the US-China race was depicted.

Below is an attempt to write down the questions I feel particularly unclear on, in the hope that it will force me to better understand their claims or find real disagreements.

How do we get to a “sufficiently automated” economy by 2027 for AIs not to experience friction in managing physical production?

Original questions I wrote down

  • Whence the belief that single-digit millions of robots will be sufficient to get us to Dyson spheres? -> not actually claimed (I think)
  • Is there a specific story for how we get robots with flexible enough learning, and get the hardware coupling right, beyond “AI R&D automation magic” (not to say that that’s necessarily wrong)? -> still looking for specific claims
  • Whence the belief that converting car factories to factories producing millions of robots/year is realistic? -> some thinking here and here, which I need to engage with more. But I think that’s less of a crux than the above.

Can anyone explain how researchers get automated away in chemistry and bio, even for “easy” problems, like protein-ligand binding or biochemically accurate real-time virtual simulation of different cell types?

  • How do the AIs get feedback on their models / plans / experimental designs in chem/bio?
  • How did they get enough data / work with sparse data in domains that aren’t on track to generate enough by 2027? Or is this all happening in simulation? If so, what constrained the environment / rules?
  • If faster experimentation is imagined, how did they get a consistent, reliable, and fast interface to the lab?
  • Or is this not something we can do in 2027?

How do you transfer tacit knowledge in production and R&D?

  • Imagine all of {ASML, TSMC, NVIDIA} and {frontier bio research, big pharma} etc. are mobilized to teach LLMs and robots – do we think we’re able to get full, reliable emulation of the relevant behaviors without humans in the loop?

Is there really no realistic scenario that doesn’t involve racing with China?

  • This reaction is driven more by listening to the Dwarkesh podcast
  • My gut reaction to this is very uncharitable – I tend to ascribe it to an American inability to swallow a loss, or even a draw. Surely there are better reasons.
    • Does anyone know enough about the intentions of the Chinese ruling class to dismiss the possibility of a stable cooperative equilibrium?
    • Does anyone know enough to be confident that a US-led world will go better than a Chinese-led one?
