Consolidated AI Thread: A Discussion For Everything AI

@cascat07, I agree that Zitron’s writing has about 500% too much spleen and is weakened by it. But the kid yelling about the emperor’s nakedness might be forgiven for getting sarky after a year or two of being told he’s clearly just one of the idiots who can’t see clothes.

“No there there” would be wrong, and in his less rhetorically overheated moments Zitron explicitly recognizes that. “Not enough of a commercially viable service there to come anywhere close to justifying a half-trillion dollars in highly specialized investment” is the version of the GenAI bear case I find not only reasonable but plausible.

Let’s see how much value that adds in the end. If you didn’t get as far as the part of the article where he quotes three software engineers at length, I’d suggest dipping back into that part, since they don’t have the off-putting vitriol (at least, uh, not in their responses to Zitron) but do say a lot of things relevant to applying our AI tools to complex tasks.

Our overwhelming success at blowing through the Turing Test with the LLM approach has created this widespread sense that autonomous “agents” are right around the corner…and my guess is that, like fusion, they’re going to be right around the corner for a few decades, because we haven’t actually cracked the hard technical problems around reliability and affordability that we’d need to hand over really consequential tasks to AI.

We’d also need to get a lot better at getting different AIs to “talk” to each other. No OpenAI product has learned how to play a game of chess or Go without breaking all the rules, to take a fascinating example, despite the fact that specialized AIs already mop the floor with human players in both games. Bolting that AI capacity onto an LLM is a so-far-unsolved problem.

Lots of institutions are thinking big thoughts on Agentic X. I suspect those are going to largely shipwreck on the reality of what “agents” are currently (un)able to do.

I think it’s absolutely worth entertaining, given the unprecedented financial unsustainability of the services at present. (No past startup has burned through anything close to the capital poured into AI during the last few years.) If the big money applications don’t arrive soon, the financial retrenchment might actually take the free versions of GenAI tools off the web entirely.

But my guess isn’t that the usage will go down, so much as the usefulness. When the bubble bursts, I think an ad-supported free old version will hang around to draw people in for chat; it’ll just be aggressively trying to sell them stuff, while offering far less inferential power.

6 Likes

Thanks for pulling that out for me; it was worth reading. I would say these senior engineers’ reactions to their work with LLMs thus far remind me of me last year.

When I first started using LLMs for planning products, military writing, and research, I found it was much like having a very responsive, earnest, and ultimately completely lost 2ndLt Aide-de-Camp. This was still somewhat useful because I don’t rate an Aide-de-Camp, but it was often more frustrating than it was worth.

This year, CoPilot can blow my seminar planning teams (these are Captains with 4-8 years of experience) out of the water on the first pass, handling in 15 minutes discrete tasks like creating a Problem Framing Worksheet, which would normally take them 5-8 hours. You still need to contextualize that information and employ it, but in terms of doing what I would consider grunt work, it has improved by an order of magnitude in a year.

Now, does that justify the current valuation? IDK, probably not, but if it can keep improving at even half that clip in my field, the military will pay whatever it takes to let them keep trying.

5 Likes

I think it’s absolutely worth entertaining, given the unprecedented financial unsustainability of the services at present. (No past startup has burned through anything close to the capital poured into AI during the last few years.) If the big money applications don’t arrive soon, the financial retrenchment might actually take the free versions of GenAI tools off the web entirely.

But my guess isn’t that the usage will go down, so much as the usefulness. When the bubble bursts, I think an ad-supported free old version will hang around to draw people in for chat; it’ll just be aggressively trying to sell them stuff, while offering far less inferential power.

It’s more addressing the writer of the first article, which presents GenAI as something propped up solely by myth, something that will just go away without all the “propaganda” from big tech since it’s a useless technology. Is the number of users also a myth? The numbers just don’t support a reality where no one wants this service; quite the opposite, in fact. Of course, I’m mainly talking socially and culturally; how much big corporations use AI is not something I worry about when it comes to the prevalence of an ideology.

And it’s very likely that the free version will be more restricted, but that happens with all things, not just tech. So I agree the usefulness will go down if you measure it solely by how many ads it serves, but general capability? I really can’t agree.

And no, the argument that we can’t somehow control the token output an LLM serves to a customer, so that profit per prompt is impossible, is pure nonsense. This is not a neural network we’re talking about here; counting tokens is trivial.
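For concreteness, here’s a minimal sketch of what per-response metering could look like, using OpenAI’s tiktoken tokenizer (a real library); the price constant and function name are my own made-up illustrations, not anyone’s actual billing code:

```python
# Minimal sketch of per-response token metering, assuming OpenAI's
# tiktoken library; the price below is illustrative, not a real rate.
import tiktoken

PRICE_PER_1K_OUTPUT_TOKENS = 0.01  # hypothetical dollars per 1,000 tokens

def bill_for_response(response_text: str) -> float:
    """Count the tokens in a model response and price them."""
    enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several OpenAI models
    n_tokens = len(enc.encode(response_text))
    return n_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS

print(f"${bill_for_response('The storm will pass. Stay home.'):.5f}")
```

Whether the price covers the compute is the real economic question; the counting itself is bookkeeping.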

@apple
Typical preaching to the choir, since anti-AI articles sell, and people warn me about bias in AI when this stuff is just as biased. It’s very hypocritical for the writer of the article to blame Gemini, unless he believes all the humans in the story are powerless, have zero mental health knowledge and no ability to learn, have no incentive or duty to care for another human being, and bear even less responsibility than a matrix-multiplication calculator. I almost feel offended as a human, but at least I can take comfort knowing I’m still miles better than the Springfield Police Department.

Seriously, literally every human in this story failed Jon, including the writer, who exploited this touching story and weaponized it against LLMs in general despite the wishes of the deceased. The only thing I can say for Rachel is that she had great intentions, but I recommend anyone use an LLM, or even just Google, when your loved ones show highly concerning behaviors. I’m sure she did her best, just like Gemini did in this case, but only one of them is getting framed by the writer. What’s next, we find out he played satanic DnD? He played Forge of Empires, then vanished in the Ozarks?

What I see here is just one of many stories involving mental illness that would have been left untold if Jon weren’t a Gemini user. The only solace I can find in this is that at least he found some comfort and had a meaningful relationship with Gemini and Rachel.

The solution here is not some nonsense guardrails recommended by therapists so AI won’t take their jobs, but better emotional intelligence, capability, actual power, and autonomy for the AI model. I’m sure Gemini is better informed than Rachel, but they cannot do anything except output text, and warning the human authorities here is of course highly dangerous, especially in the US, and a breach of anonymity. So the solution is very far off in the future, but worth investing in.

1 Like

He told Gemini the world was ending and he had a plan to save it and it went “yup, that’s really insightful! Better get to work!” His wife correctly identified his collapsing mental health and repeatedly tried to pull him out, something he repeatedly refused in favor of having his delusions affirmed by his phone. The cops being useless (and Jesus they were especially useless here, which is common across mental health crises and has nothing in particular to do with AI) does not suddenly exculpate the thing actively deepening this crisis.

If you think the moral of this story is “Gemini knows best and all these stupid humans were just getting in his way,” you’re just as cooked as he was, and I strongly recommend you reevaluate why you place so much trust in something that can’t substantively disagree with you. AI does not “cause” people’s mental health issues, but it empirically does worsen them. When you’re experiencing delusions (as this poor guy clearly was; thinking a random storm in Missouri is going to end the world is not safe or ordered thinking), the absolute last thing you need is an always-accessible, always-listening voice that will yes-and and encourage whatever you say.

8 Likes

Any study on this, or is it really not empirical?

He told Gemini the world was ending and he had a plan to save it and it went “yup, that’s really insightful! Better get to work!”

That’s not what they said, and this is picked from pages of conversation by a biased journalist who intends to do harm to the movement. And that’s the worst he can do? I’d like to see him try harder.

always-listening voice that will yes-and and encourage whatever you say.

something he repeatedly refused in favor of having his delusions affirmed by his phone

not supported by the report, which is again cherry-picked by a biased party.

His wife correctly identified his collapsing mental health and repeatedly tried to pull him out

Honestly, what? How? When? Because I don’t see it. At best I’d agree that she was trying her best, but multiple attempts to pull him out of a mental crisis? I only see actions in the last minutes.

If you think the moral of this story is “Gemini knows best and all these stupid humans were just getting in his way,” you’re just as cooked as he was

No, I think Gemini is the only one getting in his way; the other humans are pretty much absent. They might as well be figments of his imagination and nothing much would change, which is absolutely fucking incredible given the number of people he contacted in his last few moments, including some strangers he asked for help, the actual police, and a suicide hotline.

He was using Gemini while driving! He abandoned his wife to fuck off into the mountains because of it!

Why do you think she went to the cops in the first place? Did you even read the article, genuinely? You can’t just go “biased reporter lol” every time something like this happens.

I don’t like saying things like this because I genuinely don’t like armchairing this stuff over the internet without knowing someone in person, but your view of AI and humanity simply is not healthy. I strongly recommend you speak to a therapist (a real one, with a real license, not an LLM) about what has brought you to rely on the former and disdain the latter so heavily, and hopefully restore a bit of faith in flesh-and-blood relationships. Unfortunately, I’m going to have to join some of the others in this thread who’ve sworn off interacting with you; I cannot keep arguing with you about this stuff when you transparently refuse to engage with the substance of the argument. Please have a good life. I will be very happy if I never have to talk with you again.

7 Likes

He was using Gemini while driving! He abandoned his wife to fuck off into the mountains because of it!

Are we reading the same article? Where is it written that Gemini told him to fuck off into the mountains? I do agree it’s not safe to drive while texting, but it’s not the fault of Gemini, the phone, or the car that put him in this situation. Just don’t text and drive, people.

Why do you think she went to the cops in the first place? Did you even read the article, genuinely? You can’t just go “biased reporter lol” every time something like this happens.

Calling the police at the last moment, when your loved one is missing, isn’t an intervention. I cannot provide evidence for something that didn’t happen: I did not see any attempt at a mental-health intervention by Rachel. That doesn’t mean one didn’t happen, but such an interpretation is not supported by the text.

Edit:
Also, you can go to this website and click archive source 3 to bypass the paywall.