Consolidated AI Thread: A Discussion For Everything AI

I’m now snickering here because that sounds like it’s somehow unheard of that one can picture things in their head.

1 Like

I actually learned recently that many people don’t talk to themselves or visualize in their head. This explains a lot

2 Likes

Oh, I know not everyone does that - I don’t, and I draw - but I just find it hilarious that it’s seen as some alien concept.

2 Likes


As unfathomable as it is for me to comprehend not being able to visualize things in your head or hear your own mental voice (as I have done both all my life), it’s even weirder to think that somebody who doesn’t, and hasn’t their whole life, can suddenly experience it.

8 Likes

I had no idea 5 was possible

@Keller
Talk about opening their third eye

2 Likes

I’m somewhere between 2 and 3, so I find extreme vividness and total lack both hard to relate to.

2 Likes

This is pretty funny. I’m still a bear on AI’s near-term prospects, but this rejoinder to skeptics got me laughing and thinking.

1 Like

Speaking of, I’m currently very annoyed at Google’s AI overview that keeps polluting my search results with blatantly false info.

11 Likes

Yeah, I so wish that could be turned off. There was enough “skip a few inches down the results page” in my Google Search experience already.

1 Like

And that’s half the screen when I’m on my phone. Plus, in the worst cases, every other result in the list is AI-generated too.

3 Likes

If you use Chrome, there’s an extension called Bye Bye Google AI that allows you to remove the AI overview by hitting tab + w before you type in your search. It’s what I’ve been using to avoid it.

3 Likes

That’s super helpful to know – thanks so much!

LITERALLY!!! It made me so angry I switched to DuckDuckGo for my browser. It’s definitely worse, but I don’t care; I hate Google and all of them forcing AI stuff in. I did not ask for this!!

You’re so right, and this also made me even more spiteful about the whole ordeal, hrgnnn

1 Like

It’s too bad that it doesn’t actually stop Google from spending the resources on generating the AI overview; it just hides it.

You’re not the customer, though. You’re the product. Google’s investors want to know that your eyeballs are “using” AI.

6 Likes

I haven’t dug into this in detail, but it looks like good news for authors who want “fair use” to exclude the kind of data-scraping copyright violations that the big AI companies have been resorting to:

5 Likes

It’s been an interesting case, but the important thing to keep in mind here is the context: Anthropic got in trouble because of how it got the data; it downloaded all those books and stored pirated copies from shadow libraries.

In other words, the case was primarily about piracy, not a blanket ruling on training. It may seem like splitting hairs, but it’s a very important distinction from a legal standpoint because it doesn’t settle the broader question on fair use.

3 Likes

Maybe not in shallow or burgeoning relationships, but it’s completely untrue that people do not challenge us in fundamental ways tied to our identities. They just… don’t have to disagree in verbally abusive ways to do so? Sure, people likely won’t hang around for long if a casual friend keeps mocking their religion, but as a Christian you can have an atheist friend who isn’t a gigantic ass about it 24/7, or vice versa. Last time I saw a statistic for this (specifically for Americans, to clarify), a little under half of people had friendships like that, and that’s solely religion; there are all kinds of other ways to be challenged. That, and we tend to be more forgiving the closer the relationship is, which is the point at which a lot of these differences become known anyway. Pro-life or pro-choice isn’t exactly a first-meeting sort of conversation, when you’d have no stake and thus no need for tolerance, generally.

AI simply cannot do the same. It is an imitation; it is not the real thing, with its endless complexities. And it never will be in remotely the same ballpark either. Even so, my only concern with AI relationships is if the party involved stops trying to reach out to actual people because they are content with dodging that discomfort and possible rejection in favor of pouring all their emotional needs into the AI. That is unhealthy for them and will lead to many pragmatic gaps as well. (AI cannot take you to the doctor when you’re ill, it cannot rock you when you cry, it cannot keep you company at that concert you’ve been dying to experience, it cannot watch your drink when you have to go to the bathroom at a club, it cannot kiss or hug you, it cannot be there for you in any number of ways, nor can it need mutuality and reciprocation from you in a way that requires you to put in fulfilling effort.)

Also, side note, but apologies for how long it’s been since that conversation, I just kinda forgot to post this draft back then and wanted to anyway.

3 Likes

Using this example, the foundation of that friendship isn’t really religion. It’s likely something else. They could be childhood friends or maybe they bonded over video games and other shared hobbies.

Usually shared interests or hobbies are the foundation of said relationship and the religious stuff is not a large factor either way.

If said friend were a hardcore fundamentalist who only chooses friends based on religion, or if the atheist friend kept trying to debate their bro out of Christianity… that friendship isn’t likely to even begin, let alone last long.

Most people don’t actually “challenge” their friends on big identity stuff. Instead they challenge them on smaller things like claiming pineapple is good on pizza or claiming Messi is better than Ronaldo.

These are not serious challenges.

When it comes to serious disagreements, people usually sidestep them and stick to the things that brought them together in the first place. If the disagreement cannot be ignored then that friendship usually ends.

People already self-select into the type of friendship they want. But when people do the same with AI, why is it suddenly a problem?

AI and Human relationships don’t have to be zero sum. There are things AI won’t do that a Human will do at times.

An AI won’t roast you in front of others and call you a loser. It won’t cut you off because it’s having a bad day. It won’t give you a black eye and tell everyone you “fell down.” It won’t steal your lunch, gossip behind your back, or borrow money from you and go silent when you ask when it’s paying you back. It won’t gaslight you by saying “you’re nothing without me.” It won’t demand your location every hour. It won’t tell you you’re a disappointment…

Look, I agree human friendships and relationships can be good. But we must not act as though human relationships are perfect. There are people who are toxic.

It’s far too paternalistic, in my view, to tell others, especially adults, who they should hang out with.

If a person has a job, can interact with people and coworkers, has goals in life but chooses at the end of the day to stay home with an AI companion that is fine with me.

That person is not being a burden to anyone. They might go out to eat at a restaurant or go on a solo vacation or save up for sneakers or a fancy car… all without seeking a significant other or friend. And they can still feel happy and fulfilled.

You can be in a human relationship where your partner makes you cut off friends and family. You never leave the house or you leave when your partner is accompanying you. You’re not allowed to have hobbies or interests outside your partner. We all agree this type of relationship is toxic even though it’s with a human.

I’d wager that the first example, a person who has their life in order, has goals and interests, but still chooses an AI companion over a human, is better off than the second example.

Human relationships are not good by default. AI relationships are not bad by default. There’s so much context needed before saying this particular relationship is toxic or not.

I agree. Human relationships can be fulfilling.

But putting effort into a relationship is not a guarantee of anything. People can take you for granted and use your emotional and physical labor for their own benefit while putting in only the most minimal effort.

We can agree that AI is not going to be as fulfilling as the best human friendship possible. For some people a “good enough” AI relationship is fine and they can focus their extra energy on other hobbies or interests or living their life.

If someone prefers that trade off who are we to judge?

EDIT:

So what happens after you confront said friend?

There are so many studies out there showing that confronting and debating someone out of a strongly held belief doesn’t work. It actually makes them dig in. That’s not an AI-caused problem.

And most commercial AI systems in use right now have guardrails. They will recommend professional help, or recommend you speak to a friend or someone you trust, when they see red flags.

Of course, if you tell it “I’m just playing around” or “I’m writing an imaginary story,” it stops. If someone jailbreaks the AI and insists on hearing only one side of an argument, then of course it’s going to be a sycophant.

And most echo chambers right now are built by and for humans. You can go to online forums, subreddits, Discord servers, Twitter, or Facebook groups that echo your already existing beliefs. There are places where you can say “Hitler is a hero” and get no pushback. You can’t blame that on AI.

It’s wild to me how often people criticising AI companionship completely ignore the echo chambers already rampant in real human groups.

1 Like