Interesting video talk, thanks for sharing.
“I don’t care, the advantage is too great,” is not nearly as startling a standpoint as he seems to find it. Clever people have been willing to throw out privacy and ethical concerns for far more trivial benefits.
We already knew that the subjective productivity boost from watching a simple prompt spit out lots of results is intoxicating – this video confirms it holds for some elite physicists as well as some elite software developers – but also that, objectively, actual productivity gains can be significantly undercut, or even completely undone, by the work he mentions of cross-checking and fixing errors and hallucinations.
Even when they’re real, productivity boosts in things like managing your email or even solving differential equations don’t necessarily speed us toward major breakthroughs – they can reduce academic drudgery without thereby creating “super scientists.” A lot of academic output is (at best) moderately interesting while making (at most) a marginal contribution to anything transformative.
So I await the promised “avalanche of discovery” with interest but substantial skepticism. If AI does provide the insight that gets us to cheap fusion reactors, great – that would totally be worth it. But so far, it seems like the boosts to science are being delivered by specialized machine-learning software (like protein folding, or maybe the exoplanet-hunting AI this guy worked on a decade or so ago) rather than by giant-scale LLMs. And that feels to me like a continuation of the role software has been playing in science for over half a century, rather than an exponential shift upward.
In some fields (he talks about college admissions), productivity boosts largely cancel each other out – applicants use ChatGPT to write their applications, staff use it to summarize and analyze those applications, and we ultimately end up with a lot more AI use but not a lot of actual value created. Sympathetic as I am to the vets who’ve been using GPT to access disability benefits, I’d bet on that being just move one in an arms race with the gatekeepers, in which both sides wield AI and the equilibrium ends up not far from where we started.
Unless that’s all cut off by the “explosion of cost” the video creator thinks is likely coming. I agree with him that that would be likely even if the economics of LLMs worked the way those of other subscription services do. But as we’ve discussed upthread, given that marginal costs per additional user are significantly higher for LLMs, it’s even likelier that LLM access eventually becomes something you have to pay a lot for.
His closing idea that an AI superintelligence might yield a world of magic – a world where the technologies we rely on are built on ideas and models that no human brain has ever grasped, or will ever grasp – is terrific as sci-fi. I don’t mean that in a derogatory way; sci-fi ideas are worth grappling with, and reality sometimes unfolds along those lines. But I continue to bet that we’re a long way from an AI singularity, so I’m not going to spend much actual worry on it.
PS: The folks who are handing over control of their emails and personal data to AI agents really might want to reconsider.
And here’s a good (alarming) piece on the growing opacity of AIs and the implications for security.
