ChatGPT’s revenues are dwarfed by its costs, which is on the one hand true for most startups but, on the other, not usually at a billions-per-year scale. I’m still influenced by the doomsayers when it comes to the prospects there:
“Assuming everything exists in a vacuum, OpenAI needs at least $5 billion in new capital a year to survive. This would require it to raise more money than has ever been raised by any startup in history, possibly in perpetuity, which would in turn require it to access capital at a scale that I can find no comparable company to in business history.”
And my impression is that academic researchers who charge $100+ an hour are (like most consultants) generally setting rates to reflect the fact that contract work is sporadic: the hourly rate needs to cover a lot of hours when they’re not working. It’s not necessarily worth it to them to pay a big chunk of their earnings to save a few hours.
It might be worth it to the people who hire researchers, if the part the AI can do is easily hived off from the rest of the contract. My brother-in-law, who used to work at Meta, said that first-draft legal boilerplate was the one clearly profitable use case he’d found for the tech so far, given how expensive legal review is.
It’s not clear to me, though, how much money this will ultimately save on research as long as the AI output needs thorough human cross-checking (as it certainly will, given hallucination rates). Contractors will set their rates at the level they need to; if their work has to systematically start with checking an AI first draft for accuracy, that’s higher-value work, and they might well bump up their rates to reflect it.
Let’s see how the market shakes out.
PS: did the CoG Lit Review include any footnotes/links to source material, or is the output exactly what you’ve given us?
I wondered about a quote that it attributed to a forum mod (out of vanity, of course, wondering if I’d said it) and found it came from Hazel, who, unless I’m much mistaken, has never been a moderator.
That’s not a big error, but reflects a level of sloppiness you’d not want in research you were paying hundreds an hour for. And if Deep Research doesn’t footnote its quotes, checking its work would be a huge pain in the neck.
I was also struck by its reliance on online talk about CoG and the absence of something a real-world lit review would be expected to include: direct references to the CoG/HG catalog. The researcher should count games of different genres themselves, or at least scan the catalog to verify what people say about e.g. Fallen Hero and Mass Mother Murderer (!) being examples of villain-protagonist games. A human scanning the CoG and HG pages will very quickly find more apt, though less talked-about, examples.
There’s no question that it’s impressive that our tech can produce this kind of output…but even a cursory review turns up major weaknesses that would hurt its usefulness as research. A broadly accurate though impressionistic and flawed picture is fine if you’re a casual wanting an intro to a topic. It’s not OK for the kind of research people pay real money for.