Artificial Intelligence


I’ll admit it doesn’t sit well with me how you’ve said the Middle East, Russia, and Korea are all horrific while there are more local horrors you’re not specifying. If you’re going to start by demonising the human race, then you’d do better to start at home: not “these foreigners are doing awful things,” but “my countrymen are doing these awful things.” It’s so much easier to look at other countries and see their faults than to point a direct finger at your own. I’m sure there are plenty of local atrocities being committed that you could also name as examples.

I also think that if other animals had the ability to kill each other, the way we do, then they most certainly would. Animals kill each other all the time. They’ve just yet to invent tools to do so in the same frighteningly effective way that we have.

But most people aren’t like that. Most people don’t kill each other. They’re not these awful monsters.

Okay. Artificial Intelligence. Whatever we create, it will serve us. And I don’t think it’ll recreate the human mind, because our minds are oh so wonderfully complex and we’ve still no real idea how they work. But it’s certainly a fascinating subject, especially in sci-fi terms. I loved that Choice of Robots delved a lot into these sorts of issues.


Does it make me odd that I only see the human mind as a confusing makeup of chemicals and neurons and such, which we could eventually emulate in a normal, human-appearing robot body?


@faewkless Unusual? Sure. Odd? Not really. Plenty of people understand that, at the most basic level, we’re nothing more than a very complex series of chemical reactions. However, even among people who have a good grasp of that, few internalize it as a core part of their understanding of humans and humanity in general.

We’re inclined to view ourselves as individually unique and important, and the idea that, while we’re vastly more complex in scope and function than anything we’ve ever made, ultimately no individual function of ours seems irreplicable through some means other than the one we evolved to use, does not sit well with people. And for a lot of people who do grasp that fact, it can be quite distressing.

And that is where we disagree. While it’s possible that we might build a ground-up design to specification, I find it much more likely that our first Strong AIs will be based on our own neural structure (and/or the neural structure of other animals). I mean, look at it this way: how do most self-taught coders usually start? By piggybacking on what’s already there, adding to and altering things, porting already-functional work from one system to another.

Don’t get me wrong, I do think that neat methods have the inherent potential to build Strong AIs. I just don’t think it will happen until long after scruffy methods have already done so, and at that point we’ll be beyond the singularity, and it probably won’t be humanity trailblazing new AI technology (at least, not humanity in any recognizable form).

@Crotale “Begin at the beginning and go on till you come to the end; then stop.”


If humans create sentient AIs in the future, I think it would be unethical to force them to serve us. If they can think and feel just like any human, treating them like ordinary computers and tools would be just as bad as having a biological human as a slave. Sentient AIs should have rights, such as being protected from humans modifying their code or copying them without permission. Would selling a sentient AI count as human trafficking?

How AIs react to humans would depend on their goals. Would AIs care about ideological things like peace, human rights and animal rights? Or would they be driven by self-preservation, in which case they might be suspicious of humans gaining too much power? They wouldn’t necessarily have to care about anything that humans care about, and their evolutionary psychology could be very different, because after reaching a certain level of intelligence, they could just start programming themselves.

They might just ignore humans, like humans ignore ants. They might realize that some humans do unethical things, but instead of punishing them, they might start thinking about ways to exploit the cruelty of humans for AIs’ own good. Or there could be many different kinds of AIs that might change and evolve quickly, some of them pro-human and some anti-human.


When does a machine learning algorithm become a mind? Answer: It doesn’t.

I had hoped this thread would be about artificial intelligence techniques we have today, instead of moralizing and theorycrafting in the vein of Choice of Robots. Ah well.
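For what it’s worth, here’s one present-day technique in miniature: a perceptron, one of the oldest machine-learning algorithms, trained on the logical AND function. This is a hypothetical minimal sketch for the thread, nothing resembling a mind — just a linear classifier nudging three numbers:

```python
# Minimal perceptron learning logical AND.
# Illustrative sketch only: a tiny linear classifier, nowhere near a "mind".

def train_perceptron(samples, epochs=10, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with labels 0 or 1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = label - pred          # -1, 0, or +1
            w1 += lr * err * x1         # nudge weights toward the correct output
            w2 += lr * err * x2
            b  += lr * err
    return w1, w2, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)
predictions = [1 if (w1 * x1 + w2 * x2 + b) > 0 else 0 for (x1, x2), _ in data]
```

After a handful of passes over the four examples, the learned weights reproduce AND exactly — which is about as far from sentience as it gets, but it is a real, working algorithm.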


In my world, sentient A.I. would never even exist.

I’d take a hammer and pulverize their hardware just as soon as they become self-aware. :blush:

Machines should just be machines.

Machines should do what I tell them to do without question.

If I ever input a command into a console and the machine outputs “Why?”, I am immediately smashing the console into obliteration.

Humans can already make other humans; why do humans feel the need to create living machines? :expressionless:


Because I’m curious to the point that I would jump into a black hole to try to see what’s on the other side.
Some people with scientific curiosity are dangerous, especially when they get into very deep and complex subjects.
Sentient AI.
If I could find ways to make this stuff happen, I would. And with AI I’m just curious to see what it can do. That may end in Terminators lasering us all, and when I die I’m gonna say:
“Oh! So that’s what they do when they achieve sentience. Interesting!”


The concept of AI is so intriguing to me because of the idea that humanity would then be sharing the known universe with another known sentient species.

We would no longer (as far as we know) be alone in the universe, we would have our children. I know that there’s this belief that an AI would attempt to kill humanity, but I have to disagree. Unless we build it specifically to kill people I don’t think the idea of ending a life would ever occur to an AI.


Machines are useful. Smart machines are more useful.


It’s not useful if it doesn’t obey.


raises finger

Er, if you were to jump (or more accurately, fall) into a Black Hole, you’d be crushed into something no bigger than a mere speck.
(Or, you’d be spaghettified.)
(Though, recent studies suggest that quantum effects would cause you to be incinerated instantly.)

There is no “other side” of a Black Hole, per se.
A Black Hole is just massive amounts of matter condensed into a comparatively tiny area.
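Just how “comparatively tiny” is striking in numbers. The standard formula here is the Schwarzschild radius, r_s = 2GM/c², the size an object must be squeezed below to become a black hole. A quick back-of-the-envelope calculation (constants rounded; treat the exact digits loosely):

```python
# Back-of-the-envelope Schwarzschild radius: r_s = 2 * G * M / c^2
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2 (rounded)
c = 2.998e8        # speed of light, m/s (rounded)

def schwarzschild_radius(mass_kg):
    """Radius below which a given mass becomes a black hole, in metres."""
    return 2 * G * mass_kg / c**2

earth = schwarzschild_radius(5.972e24)   # Earth's mass in kg
sun   = schwarzschild_radius(1.989e30)   # Sun's mass in kg

# Earth would have to be squeezed to a radius of roughly 9 millimetres,
# and the Sun to about 3 kilometres, to become black holes.
```

So “massive amounts of matter condensed into a comparatively tiny area” really means marble-sized for a whole planet.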

But, I digress.

I need to stop before…

head explodes

Ehhhhh, curiosity killed the cat.

Also, infinitely more dangerous.


That is correct. But smart machines that do obey would be the goal. Sentient machines that have free will and might not obey are not my objective here, that would be an entirely different issue.


And satisfaction brought it back

Also, some theoretical physicists say the cores of black holes might actually go somewhere and not be infinitely dense, that place being a highly curved region of spacetime.


Not a video about AI, but this might be relevant.

What would happen if you went inside a black hole with a magical spaceship capable of going faster than light?

Edit: @Packet You would die either way, the spaceship wouldn’t save you because every single direction would send you toward the black hole.

Science is weird.


The simple act of entering a Black Hole would yield instant death.

I win! :smile:

Unless you had like, a magical spaceship capable of traveling faster than the speed of light.

Edit: @ballmot Oh snap, forget that noise!

I ain’t goin’ near those things! :scream:


Now I’m wondering whether it really does. If the core of a black hole is highly curved spacetime (some call it a bridge to the future, which sounds weird), does the person really die? If there is one thing with the potential to be weirder than black holes,
it’s time travel.


Faewkly, stahhhhhp!

What are you doing?

Stahp it!

My head, my head!!!

clutches desperately at cranium

My he-!!!

head violently explodes, showering Faewkless in brains


Don’t you think that’s kinda… unnecessarily cruel? (Instantly destroying AIs without even finding out if they’re friendly.) If you discovered a new species of intelligent, self-aware animals, would you also kill them? That kind of an attitude has never done any good in human history.




I don’t apply ethics to machines.


AI is science, and science has ethics.
If the population heard that you brutally murdered a sentient newborn machine, some people would flip.

And some might even want revenge.

So if these black holes operated as paths to the future, what would happen when two collide?
If they combine, it would more drastically alter the spacetime curvature inside, and if they invert each other, they could become a portal to the past.