Consolidated AI Thread: A Discussion For Everything AI

Exactly, just wanted to say this myself.

It was ONE texture out of hundreds of thousands, on a tiny piece of background decoration, and it was replaced with a hand-done one less than a week after release; they already had the hand-done version finished and simply missed swapping it in for the final release.

Expedition 33 has a story and soul that is right up the alley of most readers of ChoiceScript games. It’s powerful. It got so many awards because it isn’t just fun to play; it’s a compelling story with so many moving pieces (music, art, script, direction, acting) that come together in a masterpiece. Its music hit number one on the regular classical music charts. A lot of people get teary-eyed in the first hour, and then the whole thing keeps you breathlessly hooked. When I finished the game, I couldn’t stop thinking about the story for weeks. Even now I stop and stare into the distance, thinking about the story choices I made in it.

For it to get caught up in the “AI scandal” is ridiculous and petty. It’s important to remember that people have vendettas - a lot of people were sour over Expedition 33 winning awards when it wasn’t a game they had played and they wanted other games to win. So they latched onto anything to try to invalidate it.

They claim it wasn’t indie because of its budget, but look at the number of team members, how many of them were making their FIRST game, and how many had been teachers or held other jobs before, just dreaming of doing this stuff…it is a story of non-professionals coming together to make a game that beat all expectations and upset the apple cart on what a genre could be or achieve, for far less than any AAA studio is spending on similar games.

1 Like

I mean, I did say “in that case”, which meant I was basing my comment on the details in the post I was quoting (and that post was the only thing I was commenting on). I don’t have, and never did have, an opinion on the game itself, given that I know nothing about it.

1 Like

No problem. I got that. I just wanted to reinforce the point that we shouldn’t condemn things that use AI just because they use AI, because use-case matters.

And we shouldn’t necessarily trust other people’s reporting that ‘X’ thing is bad because it uses AI. Especially in cases where it is being used as a tool and not as the end result.

For instance, Hosted Games’ stance is that code cannot be AI-generated. I’m generally against ‘vibe coding’ (as someone who knows Python and took college programming courses), but, to play devil’s advocate here, in ChoiceScript’s case there isn’t a LOT of coding going on. Compared to everything else, it’s not like people are coding functions and arrays. (Believe me, sometimes I sigh deeply while designing a system for ChoiceScript and wish I could just knock out a function I could pass input to, without having to kludge a workaround together using labels.)
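To illustrate the label workaround I mean (the variable and label names here are hypothetical), a reusable “function” in ChoiceScript is usually a *gosub to a labeled block that communicates through shared variables, since there are no parameters or return values:

```
*comment ChoiceScript has no user-defined functions, so a reusable
*comment routine is a labeled block reached via *gosub; it reads and
*comment writes shared variables instead of taking arguments.
*temp damage 0
*gosub roll_damage
You take ${damage} damage!
*finish

*label roll_damage
*comment *rand sets the variable to a random integer in the range.
*rand damage 1 6
*return
```

The downside, and the reason for the sighing, is that every “call” couples the routine to specific variable names instead of letting you pass arbitrary input.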

Take VS Code with the ChoiceScript extension, for example. Is VS Code autocompleting my code a tool or AI? (A tool, obviously, but I think in this case you can see it’s a matter of degrees.)

I’m just extremely uncomfortable with a segment of the internet being ‘zero acceptance policy’ on anything AI has even lightly brushed up against.

I mean, autocomplete has existed for ages. Software suddenly using GenAI for it sounds… really needlessly resource-intensive.

In ChoiceScript’s case, though, my impression has been that AI is (or at least has been) flat-out terrible at it (probably because there’s so little of it out there). The little I’ve seen of it looks like the AI full-on hallucinating functions and commands that don’t exist, which of course doesn’t work, so if that is the result, steering clear of it is the smart choice regardless of official policy.

ETA: Oh, and also (the forum is acting up on me again, so I didn’t manage to add this originally):

I was purely talking about phrasing, in any event, not the usage of AI itself.

3 Likes

Yeah, that was my point—maybe awkwardly made. LLMs are, in many ways, just a really advanced, really computationally expensive and complex autocomplete.

On this, I totally agree. And it makes sense, because ChoiceScript isn’t coding so much as almost pure logic with variables, expressed in a certain syntax.

I would definitely NOT recommend using AI for ChoiceScript coding, because it would indeed be terrible. That’s also my issue with ‘vibe coding’ - I’m not against it in principle, but very much against it in execution, if that makes sense.

I think ‘vibe coding’ (I confess I’m old enough to hate even the word) is only worth it for small things, done by a person who already knows how to code and can see immediately when the AI is being inefficient or just plain wrong. Or doing the more likely thing, the bane of all human programmers: writing correct code with the wrong underlying logic.

But I think anyone with a weekend to spare can surely pick up ChoiceScript coding easily enough. I’m sure it helped that I already knew programming languages (which teach you that logic) before I started working with it, but I feel the current form of ChoiceScript is incredibly easy to pick up and learn.

Oh goody, don’t I agree. I remember the news article about a commercial coding AI, marketed as a solution for non-programmers, that a professional programmer tested: it flat-out refused to follow orders about what parts of the code shouldn’t change (and claimed it had to destroy the production database to run unit tests, or something like that), which a user who didn’t know programming might not even have caught, let alone known how to fix.

Which, fair, as long as you don’t remember this was commercial software marketed to non-programmers.

2 Likes

Eh. I don’t see using temp textures as any kind of issue. Back in the day people used to use downloaded clipart and random images as placeholders. Nobody paid for clipart. As long as it’s not used in the actual product and all final assets are made by real artists, I don’t think it’s a problem. I wouldn’t do it, but if it’s not in the end product… meh.

I loved the canvas reveal, but I’ll admit there was quite a shift in tone in act 3. I felt like that was part of the art, though. Like the game put us in the same place as the main characters who are only just discovering they’re not technically ‘real’ either. It was as disorienting for me as a player as it was for them, so me dealing with it at the same time as they were and then struggling to pick an ending felt very personal and heart-wrenching.

This is the first time I’ve ever heard of ‘vibe coding’. I looked it up and, to be honest, I didn’t even know AI could do that. Weird. I’m a bit of a dinosaur, though. I got my Computer Science degree in 2004, when I was coding webpages in ColdFusion and C# was the programming language of the future.

1 Like

Yeah, I was too dismissive on this. To be clear, data centers do contribute to higher electrical bills.

That being said, I still think AI is not the only contributing factor to this problem, especially in the US, where regulators can take years to issue permits for new plants. Then add on lawsuits that can slow down the building of said plants. The cost of maintaining and upgrading aging infrastructure is also a factor.

I don’t think these problems are insurmountable. Especially the complaint that data centers don’t pay their fair share because they get tax incentives and the cost of supplying them gets subsidized by everyday people. These are billion-dollar companies that should pay fully for the supply they receive.

2 Likes

I might be using locally running models to make quick reference material for myself, because I’m too dirt poor to pay artists right now (I’m sorry), but like… OpenAI needs to die in a fire. Link unrelated.

https://x.com/i/status/2013381136987136254

OpenAI needs to die in a fire. Link unrelated.

I never really get this sentiment. People realize that, in 2026, OpenAI isn’t the only one, and is more like the underdog in the AI race, right? Do people really like Google that much more? And for what reason, even?

Yeah, OpenAI is just one of a multitude now. There are multiple large AI companies in the U.S., several almost as large in China, a large one in Germany, one in Israel, and smaller AI companies popping up all over the world.
AI is the new space race. Or the new Manhattan Project if you prefer that comparison for the equal opportunity for good and harm that can result.

The fact is that no country wants to be left behind, because for the first time outside of science fiction, governments see the possibility (however likely or not) of a general intelligence being created and jump-starting the singularity. The first country to gain control (if that’s even possible) of a general intelligence would hold an advantage akin to developing the first nuclear weapon, except across many fields and theaters of competition.
Even if general intelligence is never achieved, as AI advances each system can do the work of a group of scientists, without needing to sleep, stop, eat, or live: only pursue a goal. The public gets upset at generative AI, but the same companies making those are also making medical and scientific AIs that are revolutionizing their fields. AI has been predicting protein structures at a rate human scientists can’t match, accelerating drug development, disease treatments, and other research. What used to take a graduate biology student 4 years to do, an AI like AlphaFold can achieve in less than a day.

I very much view AI as fire - it can burn us, destroy things, harm us - but it can also warm us, allow us to manipulate materials to make tools, cook our food, and make our lives better.

AI has incredible downsides if not carefully used, but the incredible benefits if properly harnessed are why no one can stop trying to strike the match.

1 Like

I see a lot of potential in AI, yes, but in generative AI specifically - stuff like ChatGPT and other similar commercial products - not particularly. There’s a reason OpenAI is rapidly losing money and could be underwater as early as 2027 if they don’t get another cash infusion from investors.

4 Likes

AlphaFold is also generative AI, with the same architecture even; clearly, advancement in one area helps the other when it comes to generalized vs. specialized AI.

2 Likes

Just as I mentioned a few days ago about AI in game development, we get interesting news:

Valve has ‘significantly’ rewritten Steam’s rules for how developers must disclose AI use | VGC

As spotted by GameDiscoverCo’s Simon Carless, Valve has made it clearer that ‘AI-powered tools’ such as code helpers do not require a disclosure, and that “efficiency gains through the use of [AI-powered dev tools] is not the focus” of its efforts.

As expected, since AI has become more common in software development. Many developers at many companies use code assist (not talking about IntelliSense or simple autocomplete, but integrated tools such as GitHub Copilot, Gemini Code Assist, etc.) in their IDEs. As a programmer myself, I’ve found them helpful, since they can infer an entire function just from the name I choose for it and then provide me with the entire body or parts of it. They’re also very good with short things like SQL scripts.

I wonder if this will happen in more areas as AI becomes normalized.

2 Likes

I see a lot of potential in AI, yes, but in generative AI specifically, stuff like ChatGPT and other similar commercial products, not particularly.

I mean, it benefits certain people a lot. I was just looking at one particular subreddit where the users are really pro-GPT and singing the praises of LLMs in general; it was a huge surprise for me, but I’m happy for them.

Edit:
If it’s of any interest, it’s r/VAClaims. I think a post got so popular it ended up on r/all, which is why I know about it. I still have no idea what the hell they are talking about or why 100% is so important, but it seems like GPT is a huge help.

the original post:

as of a week ago they are still hugely positive so it seems nothing’s changed:

With the way they talk, though, they’d probably be classified as “tech bros” in anti-AI subreddits.

1 Like

Getting benefits for disabilities incurred as veterans of the US armed forces. Navigating the bureaucracy involved is hard, and involves knowing the right words to describe your disability. I’m glad ChatGPT is helping people get what they need in this case.

7 Likes

I mean, they’re pushing it everywhere so it’s becoming increasingly difficult to simply avoid it altogether. Even Windows Notepad integrated AI.

1 Like

For AI in IDEs such as VS Code or IntelliJ IDEA, it’s rather simple. Autocomplete suggestions are only accepted if you press the TAB key, and if you use agent mode to ask it to change things, you can review each change and apply it or not. And all of the ones I’ve tried had options to turn all of this off.

In all of these cases, it’s the programmer’s choice to use them or not.

A fascinating look at the intersection of AI and religion:

First, to make sure our readers have a good sense of what Creed does and how you do it, tell me about your tech stack and training. What large language model are you using as the base for the app, and what are the specific resources that you use to train it?

Think of the product as a Tamagotchi meets Duolingo for a Christian.

Think of it as a best friend and youth pastor in your pocket that you can talk to about any questions, be it your personal life, be it faith questions, be it your Bible studies. It’ll give you answers rooted in Scripture, but it’ll also build on that relationship over time. It remembers things you tell it. The more you talk to it, the more personalized it gets. It develops personalized faith paths for you, and it’ll help you answer questions and find community around you so you can discover Christian events. If you’re not affiliated with a church, it’ll recommend churches for you to go to. So it’s not just talking to a screen.

On the back end, we’re not building our own models. We’re using off-the-shelf models from OpenAI, Google, etc. But we noticed that everyone’s using ChatGPT, and people are asking it super personal questions like “Should I get a divorce? Was I the wrong one in this situation?”

ChatGPT will give you an answer, but whose values determine that answer? Who is determining what those values are? That’s a little bit of a black box, but the values are determined by a few companies in Silicon Valley.

So how do you set guardrails around that? How do you make sure the answers you get back from the AI are something your pastor or your church leader or your parents would agree with?

We take off-the-shelf models and fine-tune them on Scriptures—30 different versions of the Bible. And then on top, a few nonbiblical texts as well, like the texts of C. S. Lewis or a few other, broader Christian authors.

Then we also use denomination-specific teachings. […]

Then the third layer is church-specific nuances. We work with churches in our partner network directly so they can go set their values on “topic X” on top of that denomination-specific nuance. […]

And then we set very, very strict guardrails. […]

[…] you mentioned in your fact sheet that you got funding from Andreessen Horowitz Speedrun. Is your cohort with them ongoing? And I’m also curious about how much funding you were awarded.

We started Speedrun in July of this year. When you start, you get $500,000 from them. We finished Speedrun in October, and then after we finished, we raised $4.2 million.

That is speedy. Accurately named! I would guess that as you’re working with these venture capital folks, you’ve presented a business model for what to do after that initial funding. What is the model going forward?

[…] It’s monthly and annual subscriptions on the consumer end. And then we also offer in-app purchases in addition to the subscription. […]

[…] We were going back and forth on whether we should show ads. There are pros and cons. If you show ads, you could potentially make more money, but it really ruins the user experience if you’re chatting about deeply personal issues and suddenly seeing ads. People start to question where their data is going. So we’re going to hold off on ads and purely go with subscription revenue for now.

I think the single most surprising thing in the fact sheet you sent was this sentence: “If you tell your companion that you are feeling sad, it will pray for you.” For me, that raises the question of what your team understands prayer to be. What do you think it is and how you think it works?

When it says the companion will pray for you, it’s more like it’ll pray with you. It’s not going to pray for you. It’s more like, “Oh, do you want to pray with me? And how are you feeling?” It’s almost like generating personalized prayers for you that fit your mood and fit the way you like to be prayed with.

Does God want to receive a prayer written by a machine?

That’s a very valid point, and you could go philosophically down that whole rabbit hole, but I think our end goal is helping people out in times of duress.

And sure, a machine-made prayer is definitely not up to the standards of a prayer written by an actual human. But in that situation, if that machine didn’t write a prayer, that prayer would not have happened. So we’d much rather have a prayer, even if it’s of a slightly lower quality, than no prayer at all.

I think that’s how I think about it, but I think that’s a very valid point in terms of what God prefers. God would probably prefer a human-quality prayer, but it’s hard to scale that service to 3 billion Christians worldwide.


Let’s start with a general introduction for readers who may not be familiar with this project. What is The AI Bible?

In a nutshell, The AI Bible is a series of channels on Instagram, TikTok, YouTube. It’s a project that we’ve created to glorify God using AI, mainly image and video generators. We’re using AI to bring these Bible stories to life like no one’s ever seen before—in these cool, cinematic-looking, and engaging videos with vivid storytelling.

I can imagine someone making the case that The AI Bible is kind of like stained glass windows—and I’ll be interested to know if you’ve used this analogy.

[…] We’re in an increasingly postliterate era, and I can imagine someone saying, Well, we need to be making the Bible into video so that people who just will not read have a way to learn these stories.

[…] It’s interesting that you brought that up—that there are a lot of people out there who either don’t or can’t read or maybe physically can’t read because they have a hard time seeing.

Many people who use our app are in that bucket, and we know that because they’ve left the reviews saying they have a hard time reading but enjoy listening to the stories.

I couldn’t help but notice that many of the characters in AI Bible videos [are] very sexy. And some, like Jezebel in the Bible villains video, are showing a lot of skin.

[…] I’m curious about the role you all see for that kind of sex appeal in teaching people about the Bible.

There are a lot of different approaches to it, right? We stay as true to the Bible as we can. And so when we’re looking up images of—there’s not a ton of images of Jezebel on Google, but she’s typically wearing this dress, and we just portrayed the same sort of look she has in a few images on Google and then used AI to bring it more to life.

I can see your point where it may look a little more sexy than I guess you had imagined, but I think in the Bible she was this kind of villain—not the person you would look up to from the Bible. So we took this route. Obviously, we didn’t want to make her way too sexy, right?

But yeah, there were a few people who had your reaction where it’s like, This could be a little too much for middle schoolers, like you brought up. That’s part of our testing process right now: How far do you go on these styles, and where are the pullbacks on it?

1 Like