Consolidated AI Thread: A Discussion For Everything AI

ChatGPT’s revenues are dwarfed by its costs, which is on the one hand true for most startups but, on the other, not usually on a billions-per-year scale. I’m still influenced by the doomsayers when it comes to the prospects there:

“Assuming everything exists in a vacuum, OpenAI needs at least $5 billion in new capital a year to survive. This would require it to raise more money than has ever been raised by any startup in history, possibly in perpetuity, which would in turn require it to access capital at a scale that I can find no comparable company to in business history.”

And my impression is that academic researchers who charge $100+ an hour are (like most consultants) generally doing that to reflect the fact that research work is sporadic, and their hourly rate needs to cover a lot of hours when they’re not working. It’s not necessarily worth it to them to pay a big chunk of their earnings to save a few hours.

It might be worth it to the people who hire researchers, if the part the AI can do is easily hived off from the rest of the contract. My brother-in-law who used to work at Meta said that first-draft legal boilerplate was the one clearly profitable use case he’d found for the tech so far, given how expensive legal review is.

It’s not clear to me, though, how much money this will ultimately save on research as long as the AI output needs thorough human cross-checking (as it certainly will, given hallucination rates). Contractors will set their rates at the level they need to; if their work has to systematically start with checking an AI first draft for accuracy, that’s higher value work, and they might well bump up their rates to reflect that.

Let’s see how the market shakes out.

PS: did the CoG Lit Review include any footnotes/links to source material, or is the output exactly what you’ve given us?

I wondered about a quote that it attributed to a forum mod (out of vanity, of course, wondering if I’d said it) and found it came from Hazel, who unless I’m much mistaken has never been a moderator.

That’s not a big error, but reflects a level of sloppiness you’d not want in research you were paying hundreds an hour for. And if Deep Research doesn’t footnote its quotes, checking its work would be a huge pain in the neck.

I was also struck by its reliance on online talk about CoG and the absence of something a real world lit review would be expected to include: direct references to the CoG/HG catalog. The researcher should themselves count games of different genres, or at least scan the catalog to verify what people say about e.g. Fallen Hero and Mass Mother Murderer (!) being the examples of villain protagonist games. A human scanning the CoG and HG pages will very quickly find more apt, though less talked-about, examples.

There’s no question that it’s impressive that our tech can produce this kind of output…but even a cursory review brings up major weaknesses that would hurt its usefulness as research. A broadly accurate though impressionistic and flawed picture is fine if you’re a casual wanting to get an intro to a topic. It’s not ok for the kind of research people pay real money for.

7 Likes

Okay, as a quick note, its purpose is not to generate a profitable revenue stream. Its purpose is to generate capital and maintain a growing stock price, which has nothing to do with actually making sales and generating revenue. Basically, it’s like X/Twitter in that it has never made a profit, never will, and isn’t supposed to.

That said, the scale it looks like it’ll need is abnormal even for the tech industry.

2 Likes

With the “Isn’t supposed to” part… no one has ever explained to me how that’s meant to work, economics-wise. (Especially with interest rates no longer at zero.) Can you point me to an explanation that doesn’t boil down to “the hype train can run for a long time, and we’ll just do our best not to be the last ones off”?

You’re right but at the same time this sort of thing has never made sense to me, and maybe it’s never meant to.

I’d consider myself pretty informed when it comes to the tech industry, as I need to be for my day job, and my impression of AI (or, as it exists right now, machine learning) is that its support largely amounts to a gamble: make it so inescapable that it can’t cease to exist, even if it should based on how limited its revenue is. Capital holders want a payday after all, which means the stock needs to rise forever, and AGI is always right around the corner; just ignore that the industry has been going in circles. This isn’t a healthy business model, and rarely ever has been, but I digress.

I don’t think AI as we understand it today would’ve gotten the support it needed if its development had come twenty years or so earlier, due to the sheer amount of money it would need to burn.

3 Likes

It’s not an AI issue, so much as a financialism issue - or as a lot of Marxist and Marxist-adjacent people I know call it, “late-stage capitalism.” (I dislike this term because it’s teleological and assumes, with an absence of evidence, that it’s going to be the last form capitalism ever takes.) It’s a symptom of a greater issue with capitalism: there must be perpetual growth. Because of this, the greatest profit has always been chasing rising stock prices, not building for revenue (also, the super-rich chase stock prices because the tax structure, and the use of secured loans, favor capital gains over income). And because of that, and partly because of stock-trading algorithms becoming increasingly dominant in the trading business, what we’re seeing is a stock market that’s got nothing to do with actual productivity. The market’s distorted beyond all reason.

This, by the way, is why companies like Google and Amazon aggressively force their AI assistants on you when you use them. Their managers and engineers are not actually stupid. They aren’t so confident that you’ll love their shit that they forgot to put in an off switch. They know damn well that their intrusive chatbots and summarizers are absolutely not wanted and degrade the user experience. They don’t care. AI is the buzzword that attracts capital, and so they need to be able to point at their service and say that AI is “a core part” of it and they have a lot of people using it, because that drives stock prices up. You’re not the customer, you’re the product.

It’s like that guy at Hasbro who said that AI would be “a core part of D&D.” Again, he knew damn well that literally none of Wizards of the Coast’s customers actually wanted that. He didn’t care. It’s not even to generate modules with less writing staff. D&D needs to have AI shoved in somewhere because capital won’t even look at your company if it’s not AI.

2 Likes

Yeah, it’s frustrating how much of this AI push feels less about innovation or user benefit and more about chasing stock market trends and investor buzzwords. Companies are prioritizing short-term capital gains over long-term value or customer satisfaction, and it shows. It’s a symptom of a system that rewards growth and hype over actual utility or creativity. Really makes you wonder how sustainable this all is in the long run.

1 Like

It’s not, but when the market crashes, the government will step in to stabilize things, the current bunch of buzzwords will be discredited and the economy will move on to the next bubble.

1 Like

I mean, usually the deep research function cites its sources very well, sometimes too much, but the formatting is a mess outside of the OpenAI platform. Here’s another one I ran for fun.

A lot of professionals are definitely using it and finding it very useful, and I do wish it retained the ability to use various tools like Python as in base ChatGPT, but I guess that’s a work in progress.

But aside from this, you get 100 uses of this per month on top of the uses from other models, and this is just version 1. It’s certainly being very helpful to a professional researcher, since it seems that’s all he uses sometimes.

People are quick to point out the flaws and dismiss the concept entirely, but it’s just easier to double-check the output than to write everything yourself. Sometimes I do feel like we’re living in entirely different worlds when people say AI is pure hype, when practically everyone around me uses it.

I do think Google’s AI summary is playing catch-up and not really in a good place, but they were “late” to the race, so the rush is understandable. Gemini is still not too far behind, so they have a chance.

Also, since ChatGPT is literally the fastest-growing app in history, beaten only by Threads, I don’t find the massive investment unrealistic. Its revenue and revenue growth are also pretty massive; they could easily turn a profit if they fired all the researchers and paused all training, but then the other companies that are also spending billions would catch up.

I don’t understand why it would be useful to a researcher for ChatGPT to confidently tell them that scifi and historical games are popular genres in CoG/HG. Like, that’s entirely wrong, and someone conducting genuine research and reading would easily and quickly be able to find that out by skimming the forum themselves.

But if someone was relying on ChatGPT in order to skim and get sense of a high-level view of things, they’d have little reason to think they needed to dig further. Why would they? The LLM is touted as reliable. Thus misinformation and misconceptions spread.

I know this is an example, and it’s not a life-or-death or livelihood-threatening scenario. The question of “popular trends in ChoiceScript games” is a low-stakes subject for that kind of misinformation. But for other kinds of research it’s far more concerning.

(And if someone researching this did realise that there were glaring inaccuracies, they’d have to do the research anyway! So why not do it for real to begin with?)

7 Likes

I mean, I don’t think it said that historical games are popular? Just that there are some of them. It does rank scifi as the second most popular genre after fantasy, though, which I think is kind of accurate.

This is why I think the integration still needs work. For working with numbers, the default model is actually better since it integrates with Python code, and for researching papers it will give you an overview; if you’re interested, you can just go ahead and click through to read more and correct any accuracy problems.

I don’t think anyone is using it as-is, just like you don’t pick the first answer from a Google query, but Google is still pretty useful.

It’s not, though. Fantasy, superheroes, and romance are the bestsellers here, and scifi has been at the bottom of the genre pile for years. If someone was researching ChoiceScript games it would be easy to discover this by searching the forum.

I get that there are some use cases for the software - but workplaces are so eager to devalue skills, and to jump on the fantasy of endless quantities of fast competent labour for “cheap” (just don’t examine the exploitation of people and their work…) that I just can’t feel the same excitement and positivity about it.

8 Likes

(I agree on not calling it 'late capitalism'.)

I know you weren’t talking about an AI issue – and I’m familiar with the growth-above-quality incentives built into contemporary US capitalism (which, even more than the AI bubble, is what gets Ed Zitron frothing at the mouth).

But while I’ve seen plenty of critics saying “profitability doesn’t matter in today’s market,” that’s mostly not what I see from participants in the system.

Maybe I’m just not reading in the right places. But I see an awful lot of attempts to justify pouring money into not-yet-profitable businesses on the grounds that it will one day be profitable. “First we get a billion users, and that’s when we monetize it.” “Look at Facebook, it went through a lot of years of losing money before it became massively profitable.” “When AGI gets good enough to replace an $80,000-a-year white-collar job, it’ll be the days of wine and roses.”

I agree that the investors’ actions mostly suggest an intent to profit from short- and medium-term stock price fluctuations, rather than from the underlying value of the company. But with rare and pretty marginal exceptions, like memecoins, I don’t see many people saying that the company they invest in “isn’t supposed to” be profitable at some future point. That’s leaving the emperor a little too naked, the pyramid scheme too exposed.

You’ll find it harder to buy-low-sell-high if the next wave of suckers can’t be convinced that the company is going to survive without indefinite infusions of outside capital. Twitter wasn’t just coasting on its ubiquity, but on a few 2018-19 quarters of actual profitability. Its long subsequent stretches of unprofitability weren’t treated as a total irrelevance; they left investors queasy, and that general sense that Twitter was financially weak contributed to Musk’s takeover.

So it still seems to me that in most cases, companies with high market caps are supposed to turn a profit, and the market will (much, much later than it should) punish companies where that story of future profit becomes wholly implausible. I think this is especially going to be the case now that we’re no longer in the free money/QE/ZIRP decade.

Maybe companies still aren’t supposed to pay dividends (lest it be taken as a sign that they’ve given up on unending growth), and they’re expected to enshittify their products to get more eyeball time (for the advertising-based ones) or chargeable bells and whistles (for the subscription-based ones). But I don’t think that takes us all the way to a postmodern market where stock price is permanently disconnected from whether a company makes any money.

Google and Amazon are rolling in enough dough that they can force their AI assistants on you without necessarily charging for them, just to attract capital; but a lot of other companies are charging for the AI bells and whistles, or trying to (Microsoft, Salesforce), because software-as-a-service is their revenue stream, and if they can’t show growth in products sold, investors will be spooked about future growth.

I hear this, and thanks for sharing the version with cites and the new prompt. I think a lot depends on what you’re being asked to research, and for what purpose. I’m my wife’s PhD research assistant at the moment, and for the kind of questions she’s digging into, this level of analysis (fluent as it reads) just wouldn’t cut it.

The flaws aren’t just the quickly checkable/fixable ones like confirming that Stronghold isn’t “modern supernatural”. There’s confused analysis in the “very large epics” paragraph, where ChatGPT names a couple of games from 2017 but leaves unnamed the half-dozen substantially larger games that have come out since then (much as I might wish Rebels was still the best benchmark, a 600K game just isn’t as large after Kyle, Jeffrey, and Kreg have raised the bar).

Even worse, it treats the high price tag of those later unnamed “flagship games” as an artefact of the 2023 hike – but all CoG games had their prices go up in 2023, not just games published thereafter.

In the genre analysis, the AI puts sci-fi second to fantasy. Now, a human checking the cites might pick up on the fact that it’s only citing one, quite old CoG game as evidence, because the second example it gives is actually a Hosted Game (despite your asking it not to include HG in your parameters). But unless the human reviewer also did the work you’ve asked the AI to do – actually skimming through the whole catalog – they’d miss the fact that sci-fi is really pretty sparse, and is only “prominent” in the sense that the top-seller of all time, Robots, is sci-fi.

Meanwhile, its genre analysis totally undersells the superhero genre, which should surely be ranked as the second most prominent after fantasy, definitely well above sci-fi. It gives the genre a short para, but rolls it in with “modern drama” and Choice of the Rockstar for some reason, and it isn’t mentioned at all in the final summary paragraph, which opens, “Fantasy and sci-fi dominate the catalog.”

That conclusion would let a human reader walk away with the plausible but entirely wrong idea that because sci-fi (like fantasy) requires worldbuilding, sci-fi CoGs have long wordcounts and high price points, and co-dominate along with fantasy. Literally none of which is true.

To fix these problems, I think a CoG researcher would need to actually do the research work, not lean on an AI summary that will mislead you in ways you don’t understand until you’ve done the reading yourself.

Honestly, it saddens me to think of researchers using this as their main tool and thinking they’ve cleverly saved themselves time. What they’re skipping isn’t just some writing/reading time…it’s the learning, the taking enough time with a topic to really understand it, which ought to be the whole point, and makes it possible to sift out plausible-but-wrong hypotheses. I don’t look forward to a world where most supposed experts are resting on AI summarizations, confident that a little quick fact-checking has been enough for them to avoid any major errors.

…and while I’ve been writing this, Harris has ninja’d me and you’ve already responded, so I’ll press post and go to bed. :slight_smile:

8 Likes

What I would have liked an AI assistant for in my master’s was finding me relevant publications which I could then read - I’m sure I missed a ton of good ones, because there simply was too much to dig through.

4 Likes

I agree that would be a great use case, especially with Google Search filling up with SEO clickbait and Google Scholar not being fantastic at relevance-ranking its results. I just don’t think that’s a use case that people will spend a high monthly fee to access.

Probably not, but I would imagine it could be something universities could do for themselves if they wanted to, as a part of research if nothing else.

Would be a good subject if I were doing doctoral research in AI, in any case.
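
Purely as a thought experiment on my part, here’s the rough shape such a tool could take: pull candidate papers from a public index and rank them against the research question. Everything in this sketch is my own assumption rather than anything described above (arXiv as the source, TF-IDF cosine similarity as the relevance score, and a made-up example query).

```python
# Minimal sketch of a "find me relevant publications" helper.
# Assumptions (illustrative, not from this thread): arXiv as the paper source,
# plain TF-IDF cosine similarity as the relevance ranking.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ATOM = "{http://www.w3.org/2005/Atom}"

def fetch_arxiv(query: str, max_results: int = 50):
    """Pull candidate papers (title + abstract + link) from the public arXiv API."""
    url = ("http://export.arxiv.org/api/query?"
           + urllib.parse.urlencode({"search_query": f"all:{query}",
                                     "start": 0,
                                     "max_results": max_results}))
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    papers = []
    for entry in root.findall(f"{ATOM}entry"):
        papers.append({
            "title": entry.findtext(f"{ATOM}title", "").strip(),
            "abstract": entry.findtext(f"{ATOM}summary", "").strip(),
            "link": entry.findtext(f"{ATOM}id", "").strip(),
        })
    return papers

def rank_by_relevance(question: str, papers):
    """Rank candidates by TF-IDF cosine similarity to the research question."""
    docs = [question] + [p["title"] + " " + p["abstract"] for p in papers]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
    scores = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
    return sorted(zip(scores, papers), key=lambda x: x[0], reverse=True)

if __name__ == "__main__":
    question = "choice-based interactive fiction and reader agency"  # made-up example
    ranked = rank_by_relevance(question, fetch_arxiv("interactive fiction"))
    for score, paper in ranked[:10]:
        print(f"{score:.2f}  {paper['title']}  ({paper['link']})")
```

A real version would obviously want proper embeddings and more sources than arXiv, but even something this crude is the kind of thing a university library could stand up for itself.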

1 Like

Found out a friend of mine is feeding his writing into ChatGPT and getting suggestions back on ideas to “expand” or how to make the writing “better.” I feel kinda conflicted about it. He’s pretty new to writing fiction, so he’s still at a stage where basic advice can go a long way, but… idk, it still feels wrong somehow? Like, it’s a fairly harmless use of AI (or as harmless as generating with an LLM can be), but I’m saddened at the idea of a creative feeding their work to AI because they see no value in it being solely their own.

2 Likes

I don’t see anything wrong with it; it’s a strategy that actually works, and it definitely helps you expand on things.

2 Likes

This is terrible for him as a writer, tell him to stop immediately. You have to be able to be confident in your own words, or you’ll never develop a voice that belongs to you. That’s never going to happen if you’re constantly feeding your work to a computer to “fix”.

8 Likes

A thing to keep in mind is that if you rely on AI to “improve” your text (and on what basis does it correct it, exactly?), you’ll not develop any analysis or editing skills of your own. You’ll be completely dependent upon whatever the machine deems correct at any given moment, without any concern for consistency or personal style or any other thing it just can’t account for.

That’s honestly one of my bigger worries with the thing.
Imagine if, when voting, instead of actually researching what the candidate is about, people just went for an AI summary.
Now also imagine if the company in charge of training that AI feeds it tons of propaganda.
So a person asks the AI for a summary on candidate A, and gets what’s essentially a propaganda piece. They happily go on voting, and then… oops.
Not everything quick and easy is good for us. Sometimes, effort has to be put in. We need to think with our own heads.

11 Likes

Also, thinking is good for your brain in general, and you’re doing less of it if you use a machine to do it for you.

9 Likes