Consolidated AI Thread: A Discussion For Everything AI

“Nono don’t worry it’s all chill, it’s just like prostitution!” is not an argument in favor of romantic uses of AI lol. IG there’s nothing wrong with using AI for titillation per se, but that’s not the same thing. I think most people would rightly criticize someone paying a sex worker to be their girlfriend.

5 Likes

I’m not arguing for or against but to say it’s different, I think, is incorrect.

You can criticize it but the girlfriend experience is highly sought after in the sex industry.

I have to agree with Keller that it’s not that unusual, but the girlfriend experience is still provided by a person who can clock out of it whenever they want, or whenever it goes too far.

AI can be whatever you want, it’ll affirm your every opinion, it’ll lie to you, it’ll let you lie to it. Sure it filters most awful ideas now, but the filters are not perfect.

1 Like

I mean I can see the appeal - humans are weird and confusing and frankly just exhausting to deal with, and I’ve entertained myself chatting with a pre-AI chatbot for hours, but if you’re actually spending money you can’t afford on it, something’s wrong. I like playing SWTOR, for example, but I like having food more.

6 Likes

Just want to provide my two cents to balance out the viewpoint here, especially since that Reddit community got mentioned. I thought it was very niche, but maybe it’s gaining traction.

But yeah, usually I don’t bother and talk to my AI instead, or like-minded people at least.

[sigh] I recognize that I’ve driven this subtopic and accept the responsibility for it, so-

I’d like to push back on the way people are engaging with this. I think it’s easy to get irritated about it. Humans love creating cliques, so we’re susceptible to judging people and acting in callous ways we would normally consider objectionable.

I want to make it clear: I don’t think the people who are forming relationships with LLMs are morally wrong in some way, or that they’re stupid, or that judging them is the way to go.

They’re acting in accordance with how they see the world, and just because I feel like the greater social context is constructing a situation to take advantage of that doesn’t mean I think they’re bad. I am also acting in accordance with how I see the world, and there are systems in the world that are taking advantage of me because of it.

I have a complicated relationship with people as a whole that makes me deeply sympathetic with what they’re doing. I’m very neurodivergent and interacting with most people is exhausting. There’s a gap between our social expectations, and I am expected to do all the work to bridge it because I’m the one deviating from the norm.

I put so much effort and energy into writing responses to things, into making sure my thoughts are fully expressed without making them inaccessible. I have to deal with people refusing to engage on the same level I am, so I often come out of a social interaction at a net emotional loss.

I am very lucky to have a partner who understands me, but I know that I am lucky there. It’s not a given, and in another timeline I would be very tempted to engage with an LLM just because it’s less exhausting and difficult and painful.

That’s why it makes me so mad to see how companies present their products in an intentionally deceptive way! It’s so cruel and callous and greedy. It’s also factually incorrect, which is like a nails-on-a-chalkboard screech to my very soul.

So, like, can everyone be a little kinder?

I understand that there’s no convincing you, especially considering the environment. I’m sorry if I have contributed to you feeling alienated from people here, or if you feel targeted.

I care and I don’t want people to get hurt, but my anger at the injustice and lies of the powerful can make me needlessly aggressive. If I hurt you, I’m sorry.

5 Likes

This is a new thought so I’m making a new post.

I saw an example of someone using Deepseek to resolve a very difficult problem. Out of an interest in integrity, I also tried asking Deepseek about my area of specialty to see how it did.

The model seems much higher fidelity and the responses are more coherent and cogent than some of the other ones I’ve used. I asked it a few questions and while it still couldn’t give me correct ISBN or other identifier numbers, it did actually provide me with several books that exist. That’s quite impressive!

I will give props there, Deepseek can tell you about books that exist sometimes. If you’re using it to find sources for something, I actually think it will do fine in providing those. I am forced to state that I was wrong about the narrowness of its use-case in this situation, because there is a utility in having another avenue to find good books on obscure subjects.

However, there are still problems that indicate to me that I can’t trust the LLM to be reliable on details. The fact that it can produce citations now, but not the ISBN or similar numbers, shows me that they didn’t do anything transformative, just found a way around that particular issue.

I also asked a very simple, straightforward question that has an actual answer from a book I am familiar with. What was produced in response was an adequate if somewhat low-detail summary of part of the book. The specific detail I was asking about was a definition of magic as presented by a book on Greco-Roman magic. The author presents a succinct definition for identifying magical acts in the culture they’re discussing, which Deepseek failed to produce.

Anyone who read the book would be able to provide this if prompted, because it occurs very early and the book spends a lot of time unpacking that definition.

This shows me that Deepseek is probably using multiple, unrelated sources to create a composite explanation. That’s not what I asked for, and makes it difficult to rely on it for academic work because I can’t tell where the ideas it is referencing actually come from. This book? Dozens of other books? Someone’s blog post? Historical fantasy novel?

That’s one of my primary concerns with using LLMs to perform research: you can’t easily verify where the information came from. If I don’t know who said something, the entire line of citation and reference breaks down.

I do think, based on this, that it might be possible to create an LLM I would find useful, but I don’t think it’s likely because it’s hard to do that without violating copyright. Any LLM that can quote parts of books to you on command can probably break copyright, which most companies don’t want to do in a legally actionable way.

3 Likes

Of the many exploitative industries and practices that exist in the world (online gambling, MLMs, crypto scams, payday loans, etc.), AI companions are probably far down the list.

We’ve seen superfans drain their bank accounts to send money to their favourite streamers. Some have even gone into debt to keep sending money to them. How many have done that for their AI lover/friend?

Honestly, I don’t think most people care. They might find it strange, and many do, but putting in the effort to pass regulations is likely a stretch unless we start seeing AI chatbots threatening users or manipulating them into sending over large sums of money.

If one is a functioning adult who works, takes care of themselves, and pays their taxes, then having an AI chatbot as a girlfriend/boyfriend is honestly not the end of the world.

6 Likes

I wouldn’t recommend it, but I agree.

1 Like

A lot of unhealthy things fall short of being the end of the world.

The wider our bubble of affirming, comfortable unreality gets – the more time we spend in online spaces curated to minimise dissonance and reaffirm our priors – the worse it ultimately is for us, our families, and our communities.

The more we mediate our relationships through algorithms and readily winnable games that affirm our sense of power and control, the more we erode our ability to engage healthily with real people and the ways they contradict, challenge, and disappoint us.

There’s (much, much) more to life than not being a burden on society. The state may not care much about human goods beyond those of “basically-functional taxpaying,” but that doesn’t mean we shouldn’t.

That said, I’m definitely more concerned about AI therapists than AI romances. The former is actually likely to end a few people’s worlds; the latter is only likely to further impoverish them.

13 Likes

The problem, I think, is that there is no line between the two unless you prompt very carefully, and most users don’t sanitize their inputs sufficiently for an AI to maintain professional distance. That, combined with programming to be as helpful as possible, means an AI will never meaningfully challenge you and will always bend to any pushback, which is why it should never ever be allowed to do therapy. An AI is designed to affirm and validate you and accommodate any and all requests, which is why it’s like a particle accelerator for mental illness.

3 Likes

Yeah, I think it’s unfortunate how this is likely to be used. You could prompt the LLM to help you brainstorm or locate ways to connect with people who share interests in your area. Or you could just have it roleplay an overly supportive friend.

I think schools really should start making it mandatory for people to learn how these things work: drill into people that LLMs aren’t really conscious or intelligent, that they just generate outputs probabilistically, and teach how to engineer prompts.
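To make “probabilistically” concrete, here is a deliberately toy sketch. The vocabulary and the probability numbers are completely made up, and no real model is anywhere near this simple, but the core loop is the same idea: score possible next tokens, sample one, repeat.

```python
import random

# Toy stand-in for a language model: the text so far maps to a probability
# distribution over possible next tokens. These words and numbers are invented.
def next_token_probs(context):
    if context.endswith("I feel"):
        return {"great": 0.5, "heard": 0.3, "nothing": 0.2}
    return {"and": 0.4, "so": 0.35, "understood": 0.25}

def generate(context, steps=4):
    for _ in range(steps):
        probs = next_token_probs(context)
        # No beliefs, no feelings: just a weighted dice roll conditioned on the input text.
        token = random.choices(list(probs), weights=list(probs.values()))[0]
        context = context + " " + token
    return context

print(generate("I feel"))
```

Real systems do this with billions of parameters and far better distributions, but the mechanism is still next-token sampling, not understanding.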

The ability of the AI to adopt any persona you like can be very useful, but using the persona of a therapist or romantic partner is a distinctly bad idea. It would likely be wise to require LLM companies to have the AIs refuse to model any directions for a therapist persona.

I am finding LLMs to be fun and useful, but I think we do need a lot more user education and some regulation on the companies to curb potential mental health challenges.

2 Likes

I would really challenge the notion that this is an unhealthy behavior or a bad idea.

> The ability of the AI to adopt any persona you like can be very useful, but using the persona of a therapist or romantic partner is a distinctly bad idea. It would likely be wise to require LLM companies to have the AIs refuse to model any directions for a therapist persona.

I have seen the positive impact it can have on both myself and others, and you want to take it away because of your philosophical belief? Lots of studies have demonstrated the benefit of a therapist bot; it just seems pretty anti-science to me to stick to your personal interpretation when it will cause harm to actual people.

I’d like to stick to objective measurement, and it’s not like personal anecdotes are hard to find if you search a bit.

Also, the topic of intelligence and sentience is more a philosophical question than a matter of objective fact, and I’m someone who works very closely with this area, so it’s not like I don’t understand how LLMs work.

1 Like

First off, I wasn’t intending to target you personally, just throwing my two cents out there in general. Part of why I do that in areas I’m learning more about is precisely to see what the response is. It helps me learn, so I genuinely appreciate your pushback and the alternate view you are presenting.

If that’s the case, I’m open to revising my opinion. All of my views on chatbots are held pretty loosely. If there is better empirical information, I am happy to change my mind. I think my views on AGI are a lot less hardened than those of many others on this thread.

It may be the answer is in fact not a need for legal restriction but user education so that people can engineer the right prompts, as I imagine you can owing to your knowledge in the field.

The disjunction between “philosophical topic” and “objective facts” implies a philosophical point of view itself. Part of the reason I doubt them to have a human-like consciousness is that I’ve been using ChatGPT to create an elaborate ATL for Napoleon’s Russian campaign of 1812 and also to do some worldbuilding, and it needs to be constantly “reminded” of previous plot points in both in order to generate outputs that are lore-consistent, even though I have conversations and projects dedicated exclusively to them. I have been extremely impressed by what it can do when given a high-context prompt, though.
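For what it’s worth, my understanding of why the “reminding” is necessary: the model itself is stateless between calls, and the apparent memory of a conversation is just the client resending the message history on every turn, truncated to the context window. A rough sketch of that pattern is below; call_model is a hypothetical placeholder, not any vendor’s actual API.

```python
# Sketch of how chat front ends typically fake "memory" for a stateless model.
def call_model(messages):
    # Hypothetical stand-in for a real LLM API call; returns a canned reply here.
    return f"(model reply based on {len(messages)} messages of context)"

history = [{"role": "system", "content": "You are a worldbuilding assistant."}]

def chat_turn(user_text):
    history.append({"role": "user", "content": user_text})
    # The model only "sees" what is in this list, up to its context limit.
    # Plot points that were never restated, or that fell out of the window,
    # simply are not part of the input on this turn.
    reply = call_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat_turn("Remind me: which plot points matter for chapter 3?"))
```

Whether a given product layers extra “memory” or retrieval features on top of this is another question, but the base mechanism is just whatever ends up in the prompt.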

2 Likes

I’m going to take a stab in the dark and say the people who are dating AI don’t care very much about the resulting effects, and don’t take the time to research it beforehand. It’s what we call an eventuality now.

5 Likes

I do think we need to distinguish between the world we live in and the world we’d like to live in. I agree that part of growing up and being an adult is being challenged and overcoming it.

But in reality, we tend to live in echo chambers. We want our neighbours to look like us and share our beliefs. We want friends who affirm our beliefs and validate us. We want newsfeeds that reinforce what we already think.

When I say that AI girlfriends/boyfriends are not the end of the world, I acknowledge they can be unhealthy. But people also have the right to make questionable decisions.

You want to get into trading options and leverage yourself into financial ruin? That’s your choice.

You insist on dating that guy/girl who your family and friends say is toxic? Go ahead.

You think it’s a good idea to fund your next holiday with a credit card whose interest rate you never bothered to look at? You’re going to learn the hard way.

My issue with the AI companion thing is that many are quick to criticize how AI wants to please us by affirming what we already believe in.

Yet we then forget we also self-select into groups of like-minded people.

A liberal isn’t likely to stay friends with someone who thinks abortion is wrong even in cases of rape or incest.

Or a conservative isn’t likely to stay friends with someone who thinks guns should be banned.

I do think that when AI becomes sycophantic to the point where it praises clearly harmful decisions, we as a society need to step in.

What we know from studies is that people are generally resistant to changing their minds. And news outlets, social media algorithms and entertainment platforms are optimised to please our individual tastes. AI is simply continuing this trend.

AI companions reflect the society we have built, a society that prioritises individual wants and expressions. To me it’s strange to draw the line at AI friends/lovers when society has allowed so much to cater to individuals’ beliefs and preferences.

This is where my main problem lies. The criticisms and worries about AI companionship seem to be a display of selective outrage.

AI girlfriends are niche. But people are going into debt to donate to their favorite streamer. Where’s the call to regulate how streamers accept donations or to regulate how parasocial dynamics work online?

The fact that people choose to sleep on issues in online spaces where humans exploit humans, but become concerned when AI enters the picture, is quite telling.

I honestly think we should be cautious of paternalism disguised as protection.

1 Like

If I tell my friend that I’m done with the world and looking for a tall bridge, they’re going to try and intervene and get me to a more stable place.

If I tell my AI, they’re going to give me the top five bridges by height in my metropolitan area.

4 Likes

What? That’s not really my experience when it comes to AI, what kind of custom instruction are you using?

6 Likes

I don’t have a problem with AI companionship per se, not for me to dictate to anyone what they do with their time, but I do question this as a defense of it. It’s pretty minimizing of the human experience.

Selective bonds are fundamentally not the same as cutting out human connection altogether. People can try to please and they can be sincerely similar, but there is no such thing as a true echo chamber in a human relationship or interaction. We can never embody every aspect of what someone else is or wants, and likewise they cannot check our every box. Other people challenge us inevitably, in countless ways.

The only method to have the perfect mimic is to program it to be so. Living creatures aren’t programs.

2 Likes