Consolidated AI Thread: A Discussion For Everything AI

People have GOT to stop treating everything ChatGPT says as if it’s the ultimate arbiter of truth. How are people this involved in the tech space falling for this?

9 Likes

Because being successful doesn’t mean you are intelligent or in any way infallible.

There are many people who are very susceptible to this kind of secret-truth hogwash; it’s similar to what gets people hooked on conspiracy theories and whatnot.

Example of a similar nutjob (all images sourced from the same individual):



He thinks he’s unlocking and decoding secret truths. He goes on for hours and hours; these people are cooked.

6 Likes

I look at the outputs the chatbot can give and think “what fun fictive applications this could have”, but then I’m certain that the GPT will give me whatever type of output I want. I’ll make it give me diametrically opposed takes on the same topic as well. I don’t know why people think it has sentience or knowledge, given that you can have it argue fairly convincingly from the perspective of any thinker with a sufficiently large corpus.

I think stuff like what’s screenshotted above is very much in line with the Raven in Poe’s “The Raven”.

POV character in the poem realizes the Raven only ever says “Nevermore” and then starts asking it questions that will wreck his mental state, knowing what the Raven will always say. I think some folks are using GPTs as their own personal Raven if that analogy makes sense.

It’s concerning to think what kind of impact this can have on people who are psychologically vulnerable to this kind of turn. We need some sort of safeguarding, but I’m not sure what.

@Keller - to your point, I think we could even say intelligence can sometimes just make us better at fooling ourselves into believing what we want to be true.

5 Likes

You’re not rude, and I’m sorry to hear you found our exchange so frustrating. There’s no “win” here.

You’d seemed to me to be repeatedly misunderstanding what I was trying to say, so I tried to put a key part of it in context: why I’d personally come to respect Jean Twenge’s work, and so hesitated to throw one of her findings out on the basis of a single, albeit very credible, meta-analysis.

I didn’t expect you to receive that as a dare to disprove everything Twenge and Haidt had ever written. I expected you’d take it as, “Well, Havenstone may not be totally irrational, but he’s still probably wrong, and by 2027-28 at the latest he’ll recognize that too.” I guess I don’t know what else a “win” would have looked like here – or what noxious pattern of argumentation you think I’m replicating, from where.

If I had a meta-analysis to offer, I’d have done so. But I’m afraid the “other side” as I know it consists of lists of published studies: the one that I originally posted from the Oberleiter piece (which you’d seemed to think I was trying to misrepresent as the study’s conclusion?) and the list Twenge offers here, which may be found in a blog but comprises lots of published studies.

I don’t expect you to read through them all, unless this happens to be your second area of academic specialism. I’d kind of hoped that (like Starwish has done) you’d accept the list itself as sufficient evidence that the Narcissism Epidemic wasn’t “purely a myth,” which was the modest point I was trying to argue. I wasn’t trying to rule out the “most skeptical reading” on which all those studies’ findings turn out to be mistaken, and in which future evidence will confirm the findings of the recent meta-analysis that Starwish shared with us.

I certainly wasn’t trying to “smother response,” and I’m sorry it came across that way. I was trying to share some things I think I’ve personally learned from and found enriching. Given that you found them neither, I clearly didn’t win.

One of my closest and most intelligent friends suffered for nearly a decade from intense paranoid delusions of persecution. His theories were incredibly elaborate and thought-through – as they’d have to be to explain how everything in his life was being controlled by a malevolent cabal – but eventually the gaps got wide enough that he decided to start trying to take meds on the hypothesis that he was suffering from a mental disorder.

I wonder if he’d have ever got there if he’d had an AI to affirm his theories and help him cover over the implausibilities. :frowning:

11 Likes

Speaking of AIs speaking truth or not, I just came across this and found it a very amusing read.

1 Like

I just watched this, and despite the title being somewhat of a warning, I will still warn that it gets bad in ways you might not expect, pretty fast.

Content Warnings: discussions of the death of a real minor, murder, self-harm, suicide, grooming, gaslighting

4 Likes

Very fair and unbiased coverage, in the style of preaching to the choir. It’s not even worth responding to, since nothing is backed up; clearly this is more for the entertainment of the intended audience, which is definitely not me.

Meanwhile, OpenAI and Google both achieved gold on the IMO without using external tools, and in Google’s case while being given the same number of hours as the human contestants. But I guess anyone can get IMO gold these days, and clearly some vibe coder somewhere getting things wrong with a 7B model is more of a demonstration of the capacity of LLMs.

1 Like

Asking here since this is the only community I’m a part of that doesn’t involve Facebook, and I figured people might be able to help. If not, well, this place is for ChoiceScript, and Claude and ChoiceScript don’t really go together, apart from the no-AI thing.
So, I am writing books with Claude. Not ChoiceScript books, though they’re in the same world, a multiverse. However, with the new update, Claude is now butchering what I’m trying to write, and I am getting frustrated. It also doesn’t help that, from what I can tell, there’s no manual or patch notes on how to deal with this, and I’ve spent the last few months learning Claude by trying and failing. Anyway, does anyone have any advice on how to deal with this? Also, I don’t know why, but swearing at Claude in the prompts either works a lot better than it really should, or I’m just stupidly lucky when I get frustrated at it for messing up something it should know damn well from the massive project file I have in there.
Anyway, I hope you all who read this have a good rest of your week. It’s warm here, so I assume it’s warm there, and yeah. Writing is one of the few things I both enjoy and am (I think, though I could be wrong) pretty good at, which is why this is annoying me so much.

1 Like

Have you tried not using AI? And that’s not just my bias speaking. You think writing is something you’re good at, you say the AI consistently ruins your work, why would you keep using it?

13 Likes

Yeah, don’t write with AI.

3 Likes

I mean, you have to be more specific here about where this is going wrong, otherwise we can’t really give you advice, but you might be more interested in some pro-AI community on Reddit. There’s

but it’s a very small and close-knit community.

The most surprising thing I got from these announcements was that both OpenAI and Google used LLM-based models rather than a model designed specifically for maths.

LLMs often get dismissed as being simple next word predictors. It’s clear that LLMs plus more reasoning capacity and some modifications can solve complex and abstract problems.

As I said a long while ago, emergent abilities are powerful stuff. A single neuron in your brain doesn’t know math, physics, or even your identity. And yet a collection of them can result in a human brain that can do a lot of cool stuff.

Looking at the IMO competition, both OpenAI’s and Google’s models failed to solve Question 6 (usually the most difficult question). It would be awesome to see how the models approached Q6 and where they went wrong.

But since AI development is highly competitive, I’m not holding my breath for these companies to be fully transparent on how their AI models tried to solve question 6.

2 Likes

Interesting Gary Marcus piece here noting that the OpenAI submission was very un-LLM-like in the lack of fluency of its answers:

Its correct-but-messy responses are pretty interesting, if we’re more interested in evidence of reasoning capacity than Turing-passing language fluency.

4 Likes

The output reminds me of ChatGPT’s messy chain of thoughts when it’s trying to solve a difficult question. It usually cleans up its final answer in a very confident tone to the user.

Maybe it was shy this time around lol.

Gary highlighted something important: the lack of transparency from OpenAI. I think they should’ve formally entered the IMO competition and been subject to the same rules Google was under, instead of hiring their own judges.

I don’t know why they didn’t. Google deserves a thumbs up for coordinating and following the rules the IMO set.

So for the next IMO competition, we’ll see if Google or OpenAI can figure out a way for their AI to solve the next Question 6.

1 Like

Probably because they got wind of Google’s upcoming PR win on this front and rushed to have some kind of tie for the headline?

1 Like

Hello, I hope that you’re having a good Thursday.
That is very fair. And yeah, I probably should’ve put in some examples instead of just going “this is wrong” without specifying what “this” is. Thank you for the Reddit link; I intend to make use of it.
To fix said shortcoming of mine: my biggest gripe right now is that it doesn’t reference things correctly. For this particular story (granted, I have 80-odd percent of the project file filled; it’s a big project and it’s a book), I reference things in the project, to the point where I even put in what are, or at least what I think are, the correct file names. For example, I have one file called courtship, part one. In this particular instance, though, the thing I wanted to reference was in courtship, part four. So in the scene I put in courtship part four as a reference, and then it makes up a whole bunch of stuff, and I’m looking at it going: as far as I can tell, I put in the correct file name, so why are you writing about this when it has nothing to do with any of that? Granted, as most things tend to be, this could be as simple as me not actually putting in the correct file name, as in I’m putting in the file name but not wrapping it in whatever the codebase wants you to put around it. Like the ChoiceScript example: when you have someone’s name, if you don’t put that name in quotes, the code won’t register it as a name (there’s a tiny sketch of what I mean at the end of this post). So yeah, I hope this helps.
Once again, like most things involving code, it’s probably going to turn out that I did literally everything correctly except this one tiny thing, and that messed everything else up, because it’s code, and code is finicky like that. Logic, emotion... well, we’re all writers/readers; we all have some idea of how funny code is.
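A minimal ChoiceScript sketch of that quoting point, with a made-up variable and name purely for illustration:

*comment Illustrative sketch: quoted vs. unquoted names in ChoiceScript
*create love_interest ""

*comment Writing   *set love_interest Alex   (no quotes) makes ChoiceScript look for
*comment a variable called Alex, so it errors or stores the wrong value instead of the text.

*comment With quotes, the literal text "Alex" is stored as intended:
*set love_interest "Alex"

Your love interest is ${love_interest}.

The same logic seems to apply to file references: the name only “registers” if it matches exactly the format the tool expects.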

Yeah, it seems like a retrieval issue; you do have to use the exact file name as the reference.

The AI might also confuse part 4 with part 1, since the two are very similar textually. Adding a unique identifier like “c4” helps; the rule is that the more unique your file name is compared to your other files, the less likely it is to retrieve the wrong one.

I thought it was an interesting article.

“Against such Enlightenment faith in the post-human, we require nothing short of a change of lived understanding and a focus on deeper and more difficult questions than how much regulation to impose under the current paradigm. Questions such as, ‘What does it mean to be human?’, ‘When does defending the human per se become more important than improving quality of life with technology?’, and, ‘What is most important for human wellbeing and flourishing?’”

3 Likes

AI invented visualizing!

2 Likes