I can’t say anything.
So … you earn Daily Bread to feed Baby Jesus by completing your Koine Greek lessons every day? And if you neglect your studies for 40 days and 40 nights, Satan appears and starts plying him with tempting loaf-shaped rocks?
Found an interesting paper from Oxford last year about how human-AI relationships are perceived by people who have one vs. those who don’t, via self-report on Replika.
https://arxiv.org/pdf/2311.10599
Some interesting comparisons there, like self-reports of the AI’s effect on social health; non-users stay close to a normal-ish distribution, no surprise there:
What’s more interesting is both groups’ reactions to a more human-like AI: totally polar opposites.
Also, a fascinating video from a couple of days ago by Professor David Kipping, who runs an astronomy lab over at Columbia University, giving his perspective on AI. He has other great videos unrelated to AI on his channel that are also worth checking out.
Interesting video talk, thanks for sharing.
“I don’t care, the advantage is too great,” is not nearly as startling a standpoint as he seems to find it. Being willing to throw out privacy concerns and ethical concerns is something clever people have been ready to do for some pretty trivial benefits.
We already knew that the subjective boost to productivity from seeing a simple prompt spit out lots of results is intoxicating – and this video confirms that it’s the case among some elite physicists as well as some elite software developers – but also that objectively, actual productivity gains can be significantly undercut or even completely undone by the work he mentions of cross-checking and fixing errors and hallucinations.
Even when they’re real, productivity boosts in things like managing your emails or even solving differential equations don’t necessarily speed us toward major breakthroughs – they can reduce academic drudgery without thereby creating “super scientists.” A lot of academic output is (at best) moderately interesting while making (at most) a marginal contribution to anything transformative.
So I await the promised “avalanche of discovery” with interest but substantial skepticism. If AI does provide the insight that gets us to cheap fusion reactors, great; that would totally be worth it. But so far, it seems like the boosts to science are being delivered through specialized machine learning software (like protein folding, or maybe the exoplanet-search AI software this guy worked on a decade-ish ago), rather than giant-scale LLMs. And that feels to me like a continuation of the scientific role software has been playing for over half a century now, rather than an exponential shift upward.
In some fields (he talks about college admissions) productivity boosts largely cancel each other out – applicants use ChatGPT to write their applications, staff use it to summarize and analyze those applications, and we ultimately end up with a lot more AI use but not a lot of actual value created. Sympathetic as I am to the vets who’ve been using GPT to access disability benefits, I’d bet on it just being move one in an arms race with the gatekeepers, in which AI will be wielded by both sides and the equilibrium outcome ends up not far from where we started.
Unless that’s all cut off by the “explosion of cost” the video creator thinks is likely coming. I agree with him that that would be likely even if the economics of LLMs worked the same way as those of other subscription services. But as we’ve discussed upthread, given that marginal costs per additional user are significantly higher for LLMs than for typical subscription services, it’s even likelier that access to LLMs eventually becomes something you have to pay a lot for.
His closing idea that an AI superintelligence might yield a world of magic – a world where the technologies we rely on are built on ideas and models that no human brain has ever grasped, or will ever grasp – is terrific as sci-fi. I don’t mean that in a derogatory way; sci-fi ideas are worth grappling with, and reality sometimes unfolds along those lines. But I continue to bet that we’re a long way from an AI singularity, so I’m not going to put much actual worry into it.
PS: The folks who are handing over control of their emails and personal data to AI agents really might want to reconsider.
And here’s a good (alarming) piece on the growing opacity of AIs and the implications for security.
A year ago, Starwish asked OpenAI’s Deep Research to analyze CoG games by genre. The result was fluently written but had flaws you could drive a truck through – getting genres tangled, claiming that sci-fi was a top-selling genre (bringing in HGs to bolster the case), underselling the superhero genre. It ended up as a significant data point in my skepticism about the usefulness of AI in research:
Well, this week the blog Astral Codex Ten is running an AMA – Ask Machines Anything – for Claude’s latest paid version, as an effort to convince skeptics who only have access to free versions that the best AI has become very, very good indeed. I thought I’d see how much of a difference twelve months had made, and asked a version of Starwish’s prompt.
The result was, I have to admit, entirely superior across the board. Nothing jumped out at me as wrong, nor really even any major omissions. Fair play to our robot overlords – they’re improving at a rapid clip.
Meanwhile, I’ve also read an interesting article that puts more flesh on the bones of my continuing skepticism about the idea that AI is going to lead to a new scientific revolution – not so much in this case because of AI’s dysfunctions, but because of those in how we do science.