Consolidated AI Thread: A Discussion For Everything AI

Some interesting data here, especially in light of the explosion of non-consensual pornography out of Grok. Anecdotally it tracks for me as well: every AI evangelist I know is a man, and most women I know are casual users at best or swear it off completely.

8 Likes

I think it will be interesting to see if those usage trends hold for younger people.

Pew recently released a study that found no gender gap in teens’ usage of specific transformer-based chatbots[1]:

Daily use of chatbots differs somewhat by race and ethnicity as well as age:

  • Race and ethnicity: About a third of Black (35%) and Hispanic teens (33%) report using AI chatbots daily. A smaller share of White teens (22%) say the same.
  • Age: 31% of teens ages 15 to 17 say they use chatbots on a daily basis, compared with about a quarter of those ages 13 to 14 (24%).

They disaggregate usage frequency by race/ethnicity and age, but not by gender, leading to a reasonable inference they didn’t find any frequency difference by gender.


For LLMs, I've yet to see anything that would make me doubt The Simple Answers.

Did she make that claim?

Perhaps I missed it, but I saw (paraphrasing) “reading stimulates more brain activity” and “if you want to be a good book writer, read more books and watch less TV/movies/anime”.

Regardless, are you referring to a specific kind of knowledge here, like ‘book writing knowledge’ or ‘academic facts’ type knowledge?

Because for knowledge in general, the claim seems absurd to me. I love reading, but all the reading I did about cooking techniques was vastly inferior to watching YouTube videos demonstrating them.

I would propose there are as many domains where video is a more effective teacher than text as the reverse, including: pronunciation, heart transplants, sign language, competitive bowling, automotive and home repairs…

Monkey see, monkey do.

Too late.


  1. Teens, Social Media and AI Chatbots 2025 | Pew Research Center ↩︎

4 Likes

Good point, well received. Loads of things can’t be learned adequately by any means other than physically practicing them. Video will usually be a better coach than text for those things.

When it comes to ideas rather than skills, broadly speaking, I think text can generally convey information with a superior level of density, clarity, and nuance compared to video. I say that as someone whose YouTube history can confirm that I like watching videos about ideas – history, philosophy, literature, politics, economics, at both a specific and zoomed-out general level. Video is far from worthless in those areas…but someone trying to learn solely from video will have disadvantages similar to someone trying to learn heart surgery from text alone.

But “knowledge” absolutely should include practical knowledge, not just intellection, so my original comment earned your correction.

3 Likes

Recently, Neurosama, an AI VTuber chatbot run by a streamer known as Vedal, has taken the top spot on Twitch. This, coupled with reports of some game studios using AI for specific parts of development, leads me to think AI acceptance in some areas is growing.

An AI-powered VTuber is now the most subscribed Twitch streamer in the world - Dexerto

1 Like

I wouldn’t quite say acceptance is growing. E33 got a lot of criticism for using GenAI for placeholders and then not properly removing them from the released product. If I recall correctly, it led to one of their indie awards getting revoked, since they claimed no GenAI content was in the release of the game while not having fully cleaned it up.

Furthermore, Vedal and Neurosama are quite different from AI channels, customer-service chatbots, or AI “partner chatbots”. Much like with DougDoug, the AI is not a replacement for people; it’s a tool. Neurosama works much the same way DougDoug makes his content: the entertainment comes from a human creator curating an AI whose routines they partially coded themselves (at least in DougDoug’s case), and using it to create content from the actions, reactions, and experiences that emerge from the interaction between the AI and the human. Whether that’s Neurosama and Vedal, or whatever AI bot of the day it is and DougDoug.

It’s more accepted because it meets a lot of the criteria people tend to have for acceptable AI use. It’s not a replacement; it’s the main attraction. It isn’t overtaking people; it’s aiding them. And it’s carefully curated, with the entertainment being the interactions with the AI rather than coming entirely out of the AI.

If nothing else, Vedal shows that AI and content creation can be combined in a healthy, transparent, and creative way that doesn’t tear down or displace human creators while still offering entertainment not possible from humans alone.

The key part being: the AI is a selling point not for what it can “replace” but for what it can “create”.

1 Like

Which they shouldn’t have received in the first place, because they’re AA, not indie, but everyone decided to forget that detail for some reason.

It’s not. Recently Larian shared how they’re using AI for their next game, and everyone instantly went for their jugular, even after assurances that they’re not using it to replace their workforce.

1 Like

I mean, they were outright lying in that case, because clearly the release did contain GenAI content, if I’m understanding right.

If I write a script in Finnish, then only half translate it into English, then claim my translated script contains no Finnish words, I’m equally lying.

2 Likes

Recently, Neurosama, an AI VTuber chatbot run by a streamer known as Vedal, has taken the top spot on Twitch.

Which is quite funny. If you ask any of the more anti-AI supporters, no doubt they will bring up some justification to resolve their cognitive dissonance. I’ve seen “Vedal trained the model himself”, “he worked really hard for it”, “he’s an indie”, “we actually watch for Vedal”, and of course the “not a replacement” angle, etc.

At the end of the day, people will financially support things they like; morality has never stopped the majority. Of course there’s no need to worry, because no one wants AI or is using it; all the web traffic is forced or from bots.

It’s not. Recently Larian shared how they’re using AI for their next game, and everyone instantly went for their jugular, even after assurances that they’re not using it to replace their workforce.

“Everyone” and “jugular” in quotation marks, of course. The anti-AI people should feel free to boycott their next game, and I encourage them to do so; we’ll see how that works out for them.

This whole thing was a manufactured BS controversy for clicks and rage bait.

Expedition 33 has more soul than any game I’ve played in the last decade, and it breaks my fucking heart to see people dragging it because they used a small placeholder asset that was quickly removed.

There was a placeholder texture of a newspaper that accidentally got left in at launch and was replaced almost immediately with the proper texture.

Missing one temp asset by accident in a game with hundreds of thousands of textures and models, and then promptly replacing it with the correct one, doesn’t mean they lied. It means they made an incredibly minor error which they fixed immediately.

3 Likes

I got the wrong impression, then. I know nothing about the case itself. :person_shrugging:

1 Like

It happened again. Insane that OpenAI will still let you use ChatGPT 4o if you pay them.

3 Likes

Eurgh, I shouldn’t have read that. Good job triggering a trauma I thought I was over, ChatGPT.

1 Like

Enough people did that Larian publicly backed down from using AI in anything for their current game; it wasn’t worth the distrust and damage to their reputation. So I’d say it worked out well enough for 'em.

3 Likes

Enough people did that Larian publicly backed down from using AI in anything for their current game; it wasn’t worth the distrust and damage to their reputation. So I’d say it worked out well enough for 'em.

Do people actually read their AMA? If that is your takeaway, then I suppose the public statement worked, and I agree it worked out well for them.

Either way, Larian is just one studio of many that will further integrate genAI into their pipelines in different ways. As far as I can tell, BG3 and Expedition 33 are still selling like crazy, so clearly there are a lot of people who don’t care enough to change anything.

1 Like

I’m not sure about that. This is one of those things for me where, if a dev uses genAI even in a minor, meant-to-be-replaced way, it sours me on their game as a whole, because I’m not sure what else they used it for without getting caught. That’s my personal opinion, obviously, but to say it’s a manufactured controversy is a stretch.

E33 is one of those games that I loved for the first two acts, but as soon as we got to act 3, I completely lost interest. I hated the canvas reveal; it instantly made me not care about any of the party, despite having loved all the characters prior to the reveal. But that’s something I can elaborate on in the video games thread.

If the game is good, it’ll do fine. If it’s AI slop like the last CoD, people will call it out as the slop it is; that’s the main reason it got horrible sales, with the company having to bow their heads and say they’ll do better in the future. Fucked around and found out.

Very interesting news on AI.

AI was used to solve several Erdős maths problems.

What I found interesting wasn’t that AI solved these problems but how the researchers went about using it. They discovered that if you give an AI an Erdős problem, it will first go on the internet, discover that it’s an open question and difficult, and quickly come back to tell the user it can’t solve it. Thus they had to tell the AI not to go on the internet.
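For a sense of what that constraint can look like in practice, here’s a minimal sketch using the OpenAI Python SDK. It’s purely illustrative: the model name, the prompt wording, and the placeholder problem statement are my assumptions, not the researchers’ actual setup.

```python
# Hypothetical sketch of the "don't go on the internet" instruction.
# Passing no tools means the model has nothing to browse with; the system
# prompt also steers it away from giving up once it recognizes the problem.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name; the articles don't specify one
    messages=[
        {
            "role": "system",
            "content": (
                "Do not search the web, and do not answer based on the "
                "problem's reputation or open/closed status. Attempt a full "
                "solution from first principles, showing every step."
            ),
        },
        # The actual problem statement is elided here.
        {"role": "user", "content": "Erdős problem: <problem statement>"},
    ],
)
print(response.choices[0].message.content)
```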

The work process was also interesting. The AI did make mistakes, which were later fixed by other AIs or by humans to exclude trivial solutions. That shows good due diligence: they didn’t turn off their brains and accept whatever the AI spat out. There was a real back and forth.

The links go over a number of the solved problems in more detail. It can’t be overstated that the researchers verified the answers and checked for BS. Hallucinations do happen even in these powerful models. That doesn’t mean the output is therefore 100% false and can’t be trusted, but always verify.
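That review step could look something like the sketch below: a second model acts as a skeptical referee on the first model’s draft. Again, this is a guess at the shape of the workflow, not the researchers’ actual pipeline; the model name and prompts are placeholders.

```python
from openai import OpenAI

client = OpenAI()

def review_proof(problem: str, draft_proof: str) -> str:
    """Ask a reviewer model to hunt for flaws instead of accepting the draft."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder reviewer model
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a skeptical referee. List every gap, unjustified "
                    "step, trivial-case loophole, or invented citation in the "
                    "proof below. If the proof only handles a degenerate "
                    "case, say so explicitly."
                ),
            },
            {
                "role": "user",
                "content": f"Problem:\n{problem}\n\nDraft proof:\n{draft_proof}",
            },
        ],
    )
    return response.choices[0].message.content
```

The referee’s report would itself go to a human; the point is just that nothing gets accepted unreviewed.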

Anyway, these newer LLMs are more powerful than earlier ones. The ceiling hasn’t been reached. There’ll always be those who think ChatGPT 5.2 and other reasoning models are nothing more than fancy autocomplete, but AI is able to come up with novel solutions. Even when they later found that Erdős problem #397 had an existing human solution which had gone undiscovered, the AI had its own solution, which Tao stated was simpler.

For problem #333, it was likewise found that an overlooked human proof already existed.

Problem #728 stands out as likely the first problem solved by AI that doesn’t have a human proof.

LLMs shouldn’t be underestimated. For any researcher who says that LLMs are a dead end (like Yann LeCun) or that their performance gains will hit a wall, the burden of proof is on them to show that some other architecture is possible and can be more powerful than LLMs are right now.

We’re still likely far away from AI solving the Riemann Hypothesis or other notoriously difficult open questions that will likely need new mathematics to be invented. However, who knows what the next generation of AI models will be able to do.

While I’m not concerned about the persuasive power of many of these anti-AI YouTube videos, negative attitudes toward certain parts of AI development can’t be ignored. The issue around data centres, and them being blamed for everything including electricity prices, can balloon into something bigger.

So far, I’m not seeing policymakers moving in the direction of banning or limiting AI development. The courts have largely not been convinced by the argument that training on copyrighted materials infringes on the rights of the creators, at least when it comes to generative AI. So long as you don’t pirate those materials and your model doesn’t spit out large sections of copyrighted text, you’re likely safe.

What I’m expecting is more of a push for AI companies to make sure that kids don’t use AI to hurt themselves and stuff.

1 Like

https://www.bloomberg.com/graphics/2025-ai-data-centers-electricity-prices/

You make it sound like this is some fake issue like AI water consumption (which is basically fine, especially when compared to things like factory farming). Consumers who live near data centers have seen their electricity costs more than double or even triple; data centers aren’t good to live near (and that’s “near” in terms of the electrical grid, which can cover a pretty big area).

Arc Raiders also uses AI and was well received, with the CEO of Embark Studios’ parent company saying: “it’s important to assume that every game company is now using AI.”

Speaking as a programmer, I find it really hard to imagine that at least some part of a game’s code doesn’t have at least a few lines that were AI-written. Developers frequently use AI autocomplete and assistants, and the game development tools themselves may well have had AI used in their own development.

What is the difference here compared to an indie developer making their own game with AI, or someone writing their own story using AI, or creating a visual art piece? Why is Neurosama’s use acceptable compared to these?

Is it because of environmental concerns? In all of these examples, the AI can be run locally on a personal computer.
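As a minimal sketch of what “locally run” means here, assuming the llama-cpp-python package and a quantized GGUF model file already on disk (both the file path and the prompt are placeholders):

```python
# Local inference: nothing leaves the machine, no data center involved.
from llama_cpp import Llama

llm = Llama(model_path="./models/local-model.gguf", n_ctx=2048)

out = llm(
    "Write one line of flavour text for a fantasy tavern sign.",
    max_tokens=64,
)
print(out["choices"][0]["text"])
```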

Is it a problem with the training data? Because from what I read, Neuro’s training data origin is unclear, with many believing Vedal fine-tuned an existing model that was previously trained on scraped, unconsented data. I saw a page where Vedal himself recommends using OpenAI to start your own experiments with AI.

A single user who planned to create everything themselves, and creates it using AI, does not replace others either. Anyone watching and donating to Vedal could instead be watching and donating to a streamer who does not use AI in their content. There are videos out there of top streamers being angry when Neurosama surpassed them in number of subscribers.

Speaking about CoG games specifically, would it be a problem if there were an AI that assisted with ChoiceScript? Some writers can code the game by themselves, while I’ve seen others state here that they can’t make their stories because they couldn’t program in ChoiceScript at all, even when they tried.

If they had the ability to have an AI write the code for them, is that acceptable? Or are they required to commission a programmer instead?

Anyone who uses the AI could curate the result or not. Someone can simply go with what is output the first time, or heavily curate it afterwards. For example, someone could use AI to write a story and then heavily edit it, as well as providing all the characters and directing the events and plot. Does that fit the definition of heavily curated?

A specific tool doesn’t need to be completely unique to be acceptable to use. Does a word processor offer a different output compared to a typewriter or writing by hand?

That’s a lot more common than you think. I’ve seen loads of people, especially in anti-AI circles, denounce E33 and refuse to play it because of the AI use, even though that use was minimal and was removed. The other user who posted here is an example.

I believe they stated they would still use AI, but only a model trained on content that belonged to their own company.

I agree, and I believe that’s what can happen. If quality content made with AI is produced and well received, then we might see more normalization of AI as time goes by. Neurosama is probably the greatest example of this.

3 Likes

Yes, it would be a problem, as Hosted Games wouldn’t publish it:

2 Likes