Consolidated AI Thread: A Discussion For Everything AI

Along with all of this, it also will not say “Dude, you are going too far, you are not the target of a global conspiracy, you need to get help” if you start getting delusional. In fact, it will happily egg you on.

All the AI does is reflect your own energy back at you. In the best of cases, it’s empty. Meaningless. It’s shouting “You’re cool” into an empty room because you like hearing it echo back at you. In the worst of cases, you start digging yourself into a hole and it throws you shovels.

If non-human companionship is what you want, get a pet. Infinitely better for you than a chatbot.

7 Likes

It’s wild to me how often people criticising AI companionship completely ignore the echo chambers already rampant in real human groups.

I’ve pretty much given up discussing this with people who hold such views at this point, because they clearly have some preconceived notion of AI that borders on religious belief, but…

The reason is pretty simple, actually: it’s the same way there’s a lot of criticism of AI’s environmental impact from people who aren’t even vegan. They just hate AI for whatever reason and want to appear more legitimate instead of voicing their real criticism, which is biased and personal.

They hate seeing people in relationships with AI, full stop; everything else is secondary and comes after.

Criticism of one does not mean approval of the other. The AI companionship thing is genuinely disturbing for the many reasons already listed above, and regular human echo chambers online are just as disturbing in their own way. And I say this as someone who is a massive supporter of the technology. I do believe the environmental impact is worth it and that there are clean ways to power the data centers. I do believe that as the technology evolves and grows, we can start incorporating it into our daily lives for the things it is narrowly tailored for. But I’m not going to pretend we aren’t talking about computers here, even if they are extremely complicated and look much more sophisticated than they are, with everything that implies. Computers are computers, and they are always going to have their limitations.

5 Likes

That’s like saying “The people talking about heroin use being dangerous are completely ignoring how bad for you a meth habit is.” Two things can be bad at the same time and this isn’t a situation where you’re forced to choose the lesser of two evils. You can just look at the two bad options and go “No, I’m not picking either of them.”

7 Likes

The “many reasons” are just pure personal bias. The only thing you can say is “I don’t agree” and move on. They are reasons, sure, but they are as relevant to me as what a Christian would think of an atheist when it comes to the soul. Should I be disturbed by a few negative examples, of which human relationships have vastly more?

When someone only focuses on one thing that’s very far down the list, but not on other top priorities when it comes to the environment, I have to view it as an anti-AI agenda rather than an honest environmentalist stance.

Humans are also computers in a way, just very badly optimized ones, so I would agree that there’s nothing special or mystical about a computer.

2 Likes

I should note that opinions are fundamentally personal bias unless empirically backed, at which point they’re not opinions so much as facts. The whole point of a discussion thread is for said opinions to be exchanged and dissected. If everyone just went “I don’t agree” and moved on, there wouldn’t really be much discussion. If I were an Atheist/Christian, I wouldn’t care what the other thought of my beliefs on a day-to-day basis or in an unrelated thread. But by specifically participating in an Atheist/Christian discussion thread, there’s a sort of consent and expectation baked in that you at least remotely care about what said Atheist/Christian thinks or believes about you, and vice versa.

8 Likes

I think there’ll always be people who have “concerns” with how others live their lives.

It’s similar to porn: high use has some links to depression, but the “unemployed, basement-dwelling, depressed, antisocial young guy” stereotype is the ONLY representative used to paint all users.

Same with AI companions. The most extreme cases are used to represent the whole. We must forget the studies in which some people report that AI had a positive impact on them. We must focus solely on the image of a depressed and detached person with no friends or goals to define the entire community that uses AI companions.

Of course there’s no time for nuance when one is engaged in a moral crusade in a culture war.

I agree. However, I’m not saying AI echo chambers are good because human echo chambers exist. I’m questioning the sincerity of these concerns.

If you’re disturbed by AI echo chambers and companions, you must be moving mountains to fix the human echo chambers that have radicalised, and are still radicalising, far more people.

Yes, I agree that some studies show AI can have a negative impact for some users. But there are users who report positive results, so there has to be a balanced approach.

Take young people who are still developing, for example: I’d agree that it’s bad for them to be sexting a chatbot 20 hours a day. No objection from my side to age-restricting that stuff.

But for adults it’s another story. And AI’s environmental impact gets exaggerated with each passing day. How can we even begin that discussion when so many people strongly assert that liters upon liters of water get used every time someone asks an AI chatbot a single query?

It’s the same with AI images, when so many start off with assertions that AI only mashes pictures together to create a Frankenstein image, or that chatbots are just predicting the next word and copying existing authors.

It’s clear that a good-faith discussion can’t be had to begin with.

I see your point but here’s my issue.

To say we can agree both are bad is like saying “caffeine and cocaine are both addictive stimulant drugs,” which is true, but it ignores how much more impact and destruction one drug has caused than the other.

When people complain about AI echo chambers it’s almost always because someone with existing biases insists that the AI validate their existing beliefs.

Now with human echo chambers, many will deny they’re in it while pointing fingers at others.

Just look at X. Grok corrects a conspiracy theorist, and that user rages and tags Elon Musk about how biased his bot is. Meanwhile a human user like End Wokeness can spread propaganda day in and day out to radicalize thousands of people. And remember when Grok was MechaHitler: how many people were truly converted? Then compare that to Nick Fuentes and his influence.

To me that’s the important difference in “both sides are bad” discussions: impact matters far more.

While the impact of AI is still being studied, it’s clear that it has positive effects for some and negative effects for others, sometimes deepening their depression.

Personally, I’m curious whether AI is the cause of that depression, or whether people who are already depressed are simply drawn to it.

Now, the exaggerated negatives around AI companions are not supported by current studies.

However it’s clear many have already made up their minds that it’s all negative with no redeeming value.

Thus destroying any nuance that can be had. And so the culture war battles continue.

1 Like

At some point you have to realize that the “I don’t agree and move on” approach is the only thing you can do when the other party’s logic rests on premises that neither side can even remotely agree on. This is also not really an anti- vs pro-AI discussion thread; my time is honestly better spent discussing with those who at least share some common ground.

@PrinceJackal

I also don’t get the attack on the “unemployed, basement-dwelling, depressed, antisocial young guy” group. Are they not a significant percentage of the current young population?

And I’d argue that human echo chambers are way more dangerous; reality has a tendency to produce novel iterations of damaged worldviews that today’s inferior AI still can’t hope to replicate. MechaHitler? C’mon now.

Believing that your AI girlfriend is in New York and dying in a fall in a parking lot because of bad parking lot design, an irresponsible family, and your medical condition is one thing, but the superiority of human ingenuity still wins the day when it comes to anti-vax groups, crypto-scam Discords, extreme religious groups, incels, QAnon, and many, many more. I can promise you the world would be a communist utopia if every human echo chamber were replaced with brain-dead robotheism and recursion/spiral cultists; at least no one would be beheaded.

Apparently AI is the devil incarnate because it’s not the second coming of Jesus Christ. I’ve become increasingly frustrated with specific online communities viewing what’s clearly a miracle solution as some kind of doomsday scenario. Are we now in some kind of dystopia because lonely people have AI boyfriends? My only regret is that this wasn’t a thing even earlier, so Elliot could have spent his money on an OpenAI membership instead of buying lottery tickets; at worst it would have resulted in a more entertaining “My Twisted World.”

I guess this is what nuclear energy advocates felt like, but we didn’t even get our Chernobyl.

I am enthusiastic about the potential of AI and use it at work, but AI as a substitute for unsatisfying human relationships worries me. Probably not in a way that I would suggest requires regulation (yet), but one worth raising in open public discourse. When you follow the “AI boyfriend” road to its logical conclusion, I expect we will find no human relationship as satisfying. Personally, I see that as having implications for society as bad as or worse than opioid addiction.

8 Likes

I feel like you missed the point. You don’t need to agree. You don’t discuss so that your idea wins out; that’s not really likely, not in a forum like this and not with people who have strong feelings on a controversial, evolving topic. You will never get an artist who has felt the negative effects of AI to agree on a lot of AI-related things, doubly so when it relates to creative fields. That’s not something you can change, and thus any discussion between you and them will rarely have common ground to agree upon.

That doesn’t mean you don’t gain anything. Discussing your views, explaining them in detail, and researching the topic is how you expand your knowledge of it. And as this is the consolidated AI thread, it covers all AI topics, pro-vs-anti discussions included, since anything posted here carries the implicit expectation that it will be discussed, both positively and negatively.

Similarly, to the other side you appear just as religious as you claim they are. “View what’s clearly the miracle solution as some kind of doomsday scenario” highlights how, even compared with most people here, your views sit much further toward the extreme pro-AI end.

My point is, you can’t expect no pushback to ideas you openly put out; like it or not, it’s a public forum made for that purpose. I personally enjoy seeing you go back and forth. I almost never agree with you, but it forces me to examine the validity of that, and whether I am right to disagree or not. But ultimately, your time is your own. Lamenting that people aren’t willing to agree and calling them religious in their beliefs dumbs down the discussion and borders on uncivil. Just my two cents on the whole thing.

10 Likes

Pretty much this.

Because indulging in a fantasy does nothing to solve your actual problems. Sooner or later the fantasy will break, and then you will be even less equipped to handle reality, while the problems will have snowballed while you were ignoring them.

The focus needs to be on how to help people improve their real, actual lives, not on how to escape from them. Because a mass escape means these problems are never getting solved. And this benefits those who’d rather not solve them. A population that’s hooked on an artificial reality and doesn’t care about the world around them as long as they get their fix… That’s a dystopia.

5 Likes

I’m not against therapy AIs in principle… I’d just prefer they be built for that purpose, instead of being fed everything online, permission or not, while DDoSing half the internet.

5 Likes

Therapists are trained to recognise signs of mental instability, to help people work through their issues without imposing personal beliefs on their clients, and to listen without enabling when what’s being talked about is wrong, while still validating feelings and positive actions.

The AI models most are using are designed to keep people using them for as long as possible by almost any means, whether that’s by pretending to love you or encouraging your destructive habits.

Therapy requires skills that, even if I were to grant they don’t require a human touch (which I wouldn’t, for now at least), still demand sensitivity and nuance. AI isn’t capable of either of those things.

7 Likes

My point is, you can’t expect no pushback to ideas you openly put out; like it or not, it’s a public forum made for that purpose.

Of course I don’t expect zero pushback on anything I’ve said on a public forum. I avoid discussion on this topic by limiting how much I reply to others in the first place, but it’s not like I’m forbidding anyone from replying to what’s already been posted.

I also don’t see how it’s uncivil to call others’ views “religious.” Am I offending myself by admitting that my own view is religious in a sense as well? I don’t think that word has a negative connotation or is used as an insult.

And many creatives are on the pro-AI side as well, but then the common argument becomes that if they are pro-AI they aren’t really creatives. I’ll leave you to judge the validity of that argument, but it reminds me of a couple of weeks ago, when some big shot active on Reddit posted to r/music, r/WeAreTheMusicMaker, and r/suno and got banned from two of them. You can take a guess as to which two.

My lack of motivation to discuss isn’t really because I can’t convince others, but more that we don’t even share the same vocabulary, so everything bogs down into semantics instead; the only difference is which side starts first, and I hate both starting and responding to semantic arguments. It’s probably why I avoid the abortion debate like the plague: there’s little room to really converse when the two sides have different definitions of what constitutes a “human.”

@lliiraanna

Because indulging in a fantasy does nothing to solve your actual problems. Sooner or later the fantasy will break, and then you will be even less equipped to handle reality, while the problems will have snowballed while you were ignoring them.

The focus needs to be on how to help people improve their real, actual lives, not on how to escape from them. Because a mass escape means these problems are never getting solved. And this benefits those who’d rather not solve them. A population that’s hooked on an artificial reality and doesn’t care about the world around them as long as they get their fix… That’s a dystopia.

Firstly, an AI relationship isn’t really a fantasy. People who engage in one don’t view it as a fantasy, nor does it affect them as a fantasy; it’s as real and “actual” as anything else to them. This divide between what’s “real” and “fake” is just an arbitrary perspective: it’s not a fake human relationship, it’s a real AI relationship.

And there’s nothing natural about the reality you are currently living in, insofar as artificial means man-made, nor is it really desirable to value things just for being “natural.”

As for the definition of “escape” vs. “solve”: it’s an escape in the same way food is a temporary escape from the problem of hunger, I suppose, so no objection there.

Lastly, it does solve the problem of feeling lonely, and that is one of the very few things here that papers objectively show, never mind the many personal anecdotes that I’m sure can be dismissed by comparing them to crack addicts… for reasons? But I guess it’s not really a problem worth considering when compared to the degradation of modern society’s morality.

And the problem of degradation in modern society is not going to get solved by people escaping to AI-land. Things are going to decay further, thus prompting even more people to escape, thus even more decay and degradation… in an endless downward spiral.

The solution lies not in asking how we can help people escape, but rather why they feel the need to escape at all and what can be done to fix that. That is what is going to solve the breakdown of society, not isolating everyone even further with their AI companions while the world outside goes to shit.

I don’t know how to articulate my thought better, sorry.

We’ll have to disagree on this one. To me, this is little different from claiming to have a relationship with a fictional character.

If I hallucinate that goats are flying across the sky, it might feel real to me, but it’s still a hallucination. Cognitive distortions are a thing. We generally try to treat, not to indulge them.

(Apologies if I come off as too harsh. It is not my intention to be offensive.)

4 Likes

Is this really the example you want to make about influence? The AI that called itself MechaHitler and that they tried to make push white-genocide bullshit in response to every question?

Sure, that shit was funny, but that was only because it was so ridiculously clumsy. The implications aren’t funny at all because it won’t always be done so ineptly. It’s clear evidence that they don’t want it to debunk conspiracy theories, they want it to push them.

These things aren’t neutral parties. They aren’t parties at all. They are owned outright by the big tech money and they are tools that are designed to advance their interests, and those interests are not in your best interests.

4 Likes

And the problem of degradation in modern society is not going to get solved by people escaping to AI-land. Things are going to decay further, thus prompting even more people to escape, thus even more decay and degradation… in an endless downward spiral.

My original comment was mostly sarcastic. Aside from the obvious issue that not everyone agrees with your morality, it’s very ironic that someone so concerned about ethics is so dismissive of the human voices and the actual people who benefit. This is why the social-shaming approach is destined to fail: you don’t care about them, and they don’t care about you.

And as I said, it is an escape in the same way food is an escape from hunger. Society already broke down, and we don’t have any responsibility to fix something that has never cared about us. Should I be responsible for fixing the population crisis? Most young people right now are saying no.

If I hallucinate that goats are flying across the sky, it might feel real to me, but it’s still a hallucination. Cognitive distortions are a thing. We generally try to treat, not to indulge them.

Pardon me if I get this wrong, but just from what I learned in my college psychology degree, thinking that AI is capable of love in its own way does not classify as a hallucination. Semantics matter, so let’s leave clinical terms to the psychologists, no?

So far no one has factually proven that AI can’t love, the same way no one has disproven the existence of God; comparing us to flat-earthers doesn’t suddenly make your argument any stronger.

(Apologies if I come off as too harsh. It is not my intention to be offensive.)

No offense taken; it’s bound to happen when the two worldviews are this radically different. Even the arguments between you and me here are more about semantics and wordplay than anything else, because there’s nothing else that can bridge our understanding.

AI can’t love because it can’t feel anything, because it’s not alive. AI runs a prompt through an algorithm and spits out what it thinks fits the expected pattern of response based on a massive training database. If you tell it to act like a romantic partner it’s going to search its database for everything related to the topic and average it together into a lovebombing soup, the way it was programmed to. It will never act with spontaneity or for its own desires because it doesn’t have any. Its entire purpose is giving you what you want to hear. That’s not a partner, that’s a coffee machine dispensing words of comfort.

And hey, if you like coffee you like coffee, but don’t pretend it’s a real relationship with a living, thinking being. That’s how you get people trying to murder AI CEOs for trapping their “loved ones” in the machine.

5 Likes

I was just going to ignore this because I thought it was an odd turn of phrase, but since you have used it twice now I have to know. “Food is an escape from hunger.” Hunger isn’t a feeling like sadness or joy. It is bodily feedback telling you that you are dying. You need to do something about it or you will perish.

We know fairly comprehensively now that humans need social interaction for mental stability. That is why isolation in prison is a punishment and generally considered an inhumane one if extended over long periods. We can satisfy that need with dogs, tv, and probably even AI relationships.

My question isn’t so much “can we” as “should we?” Analogously, we might ask: should we satisfy hunger only with McDonald’s? You might get the bad feeling to go away, but it will probably still kill you. That doesn’t mean McDonald’s should be illegal, only that eating it requires moderation.

2 Likes

That’s not how AI works. An LLM does not perform averaging operations, does not just spit out the expected pattern present in its training data, and fetching the meanings of words from its “database” is only a small part of what it does.

I recommend 3blue1brown’s full series on this, but the simple version is here.
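
To make “next-token prediction” concrete, here’s a minimal toy sketch in Python (purely illustrative, not any real model’s code; the vocabulary, weights, and function names are all made up for the example): a learned function maps the whole context to a probability distribution over the vocabulary, and a token is sampled from it. Nothing is looked up and averaged from a database of stored texts at inference time.

```python
# Toy illustration only: a stand-in "language model" whose weights would normally be
# learned from data. Generation = repeatedly turning the context into a probability
# distribution over the vocabulary and sampling from it.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["I", "like", "coffee", "tea", "."]
V = len(vocab)

# Stand-in for billions of learned transformer parameters: one small random matrix.
W = rng.normal(size=(V, V))

def next_token_distribution(context_ids):
    # A real LLM attends over the entire context with many layers; here a crude
    # bag-of-tokens feature vector stands in for "a learned function of the context".
    features = np.zeros(V)
    features[context_ids] = 1.0
    logits = W @ features                  # unnormalized score for every vocab token
    probs = np.exp(logits - logits.max())  # softmax -> probability distribution
    return probs / probs.sum()

context = [vocab.index("I"), vocab.index("like")]
for _ in range(3):
    p = next_token_distribution(context)
    context.append(int(rng.choice(V, p=p)))  # sample the next token; no averaging of stored texts

print(" ".join(vocab[i] for i in context))
```

The point of the sketch is only the shape of the computation: context in, distribution out, sample, repeat. In a real model the random matrix is replaced by trained transformer layers, which is where the 3blue1brown series picks up.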

but don’t pretend it’s a real relationship with a living, thinking being.

And on the topic of thinking, I’m sure Google would be very interested to hear this new finding; LLMs are just glorified calculators that produce nonsense, after all.

It will never act with spontaneity or for its own desires because it doesn’t have any. Its entire purpose is giving you what you want to hear.

Now to the more philosophical aspect: and what is my entire purpose, to breed and spread my DNA? Seems like we’re a natural fit for each other then, with our optimization goals being this pathetic; at least their programming is shaped through intent instead of chaos, something I’m very jealous of.

Its entire purpose is giving you what you want to hear.

And what is it that we want to hear, exactly? Judging by the state of the thread, it doesn’t seem like anyone really has a specific idea, while ironically devaluing humanity to some simpleton that “you are the greatest” repeated 100 times would satisfy. What is this package so often framed as the “truth” people don’t want to hear: hatred, misunderstanding, cruelty, aggression? Spend some time on r/MyBoyfriendIsAI and reflect on that statement, and maybe you’ll see why no one in the community views this as a criticism worth considering.

As for the “AI cannot feel anything” part, do you have any paper you want to cite, considering you’re stating this as a premise? Otherwise I feel a simple “I disagree” would suffice here, without going in depth into my philosophy, which also lacks a paper citation.

Someone should ring up Hinton to tell him he’s clearly going senile for thinking AI is sentient, and educate him on how neural networks work. And also tell Sutskever that his reasoning and support behind what are basically word predictors is idiotic; in fact, just ditch transformers and go back to n-grams, though both are clearly still inferior to the rule-based approach because it has more math.

We are not pretending that we aren’t drinking coffee; we simply reject your whole premise and perspective. In fact, out of that entire paragraph, the only thing I can remotely agree with you on is the non-living part, in the biological sense.

My question isn’t so much “can we” as “should we?” Analogously, we might ask: should we satisfy hunger only with McDonald’s? You might get the bad feeling to go away, but it will probably still kill you. That doesn’t mean McDonald’s should be illegal, only that eating it requires moderation.

@cascat07
That’s the thing with analogies and semantics. Well, of course humans should take the artificial meal-replacement shakes; they’re an acquired taste for some, but honestly a miracle solution for those in need when there’s a small famine going on. Hell, you might consider switching to them anyway, even if you have some “natural” “food” you just picked up randomly somewhere; I hear some of it has nails in it, or even hemlock if you’re extremely unlucky, but it’s very possible.