Supra Humanum (WIP - Minor update 8/10/17 - Now with saves!)

science-fiction
gender-choice

#903

Thank you.
Can I get a review of those 3 (+1) robotic laws? I need to refresh this old memory.


#904

Isaac Asimov’s “Three Laws of Robotics”

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


#905

Have you ever noticed that sci-fi doesn’t seem to bind AI, or robots controlled by them, to those laws? (HAL 9000, GLaDOS, SKYNET, etc…) I’ll just leave that here… (discuss.)


#906

Well, if they had followed the laws of robotics, Hollywood wouldn’t have a plot for their movies.


#907

Thank you for the review :pray:

Now, to protect humanity, robots should not harm humanity.


'cuz they thought their A.I. was too stupid to harm a human by itself?


#908

Anyone played enough KOTOR II to get the full story of what was going on with G0T0? That was an interesting variation of a robot concluding it was exempt from certain rules…


#909

Yep, I wish EA would let BioWare make KOTOR 3.


#910

:disappointed_relieved:
Though, paradoxically, I discovered the KOTOR games via SWTOR, the (disappointing?) substitute. Sign of good writing that I loved both games despite going into them with major spoilers…


#911

My only problem with SWTOR (apart from the sometimes weak stories of some classes) is that there are many broken secondary missions, and EA will not lift a finger to fix them.


#912

It got to be a grind fest that the story couldn’t save. I never got to G0T0; the second game just didn’t hold my interest like KOTOR did.


#913

whispers Come on guys, we’re almost 1k replies. We can do this. :grinning:


#914

And the Zeroth Law:

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

The placement of this law ahead of the others means that in dire circumstances the First Law can be overridden if humanity would suffer from following the original three, giving the robot the concept that the group is greater than the individual.


#915

Well, robots like the maintenance bots in the megalift system will definitely follow the laws of robotics. They are pre-programmed to perform certain tasks and reside within their own network. Now, that’s not to say a skilled hacker couldn’t override their programming; they aren’t androids, after all.


#916

The problem with SKYNET is that it was created to control the missile defense system. When it was granted unfettered access to everything and became self-aware… it concluded that to protect “the USA” it had to wipe out humanity.


#917

Yeah, that’s good.
An example of the Zeroth Law of Robotics: a maintenance drone at a nuclear plant when there is a major issue.
A human is trying to override a locked door to escape the radiation. If that radiation gets out, the 50 people on the other side could die.
The robot can let him continue trying to open the door, so the person might survive, or it can override the control system and shut down the panel he is at, guaranteeing his death and the others’ lives.
With the Zeroth Law the 50 are safe; without it, the robot is conflicted.
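The decision logic in that scenario could be sketched like this (purely as an illustration; the function and action names are invented, not from Asimov or the game):

```python
# Illustrative sketch of the drone's Zeroth Law decision.
# Each option maps to the number of people expected to die if it is chosen.

def choose_action(options, zeroth_law=True):
    """Pick the action that minimizes expected harm.

    options: dict mapping action name -> expected deaths.
    With the Zeroth Law, the group outweighs the individual, so the
    drone simply minimizes total expected deaths. Without it, the
    First Law forbids actively causing a death, so any action that
    directly kills someone (marked "kill:" here by convention) is
    off the table, and the drone may be left with no good choice.
    """
    if zeroth_law:
        return min(options, key=options.get)
    allowed = {a: d for a, d in options.items() if not a.startswith("kill:")}
    return min(allowed, key=allowed.get) if allowed else None

# The reactor-door scenario from the post above:
scenario = {
    "kill:lock_panel": 1,   # shut down the panel: the one worker dies
    "let_door_open": 50,    # radiation escapes: the fifty may die
}

print(choose_action(scenario, zeroth_law=True))   # -> kill:lock_panel
print(choose_action(scenario, zeroth_law=False))  # -> let_door_open
```

With the Zeroth Law active the drone sacrifices the one; without it, the only action it is permitted to take leaves the fifty at risk, which is exactly the conflict described above.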

@FutbolDude21586
That SKYNET did not have the laws of robotics properly installed. The Zeroth Law overrides that thought process, as there isn’t room to interpret: leaving the missiles alone does not hurt humanity; killing everyone does.


#919

I thought it was attempting to defend itself from being shut down once they realized that SKYNET had become self-aware. The other scenario was that, to “keep the peace,” the AI determined that the human variable had to be removed from the equation.

That’s something that I’ve been having to think hard on, especially early on in my writing. This morning, I was messing around with Chapter 2’s first choice, the scene with the family in the disabled car. Originally, the AI was only going to encourage you to save the people. Then I remembered I had to account for an AI with little to no humanity:

    $!{AIName}'s voice came to me, "$!{Name}, leave them alone! They are inconsequential. We must pursue our objective!"

    It really shouldn't have come as a shock to me that $!{AIName}'s cold logic would be coming into play, $!{AIhe} was an elevated program after all. However, the screams in the vehicle gave me pause.
    If I did nothing, these people would surely die on impact with the freighter. However, $!{AIName} had a point, the longer that I stayed here, the further my target would be from me.
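Purely as an illustration of that design consideration, the gating might look something like this (the stat name, thresholds, and reaction labels are invented here, not the game’s actual code):

```python
# Hypothetical sketch: pick the AI companion's reaction to the disabled-car
# scene based on a "humanity" stat. All names and cutoffs are invented.

def ai_reaction(ai_humanity):
    """Return the AI's dialogue branch, given its humanity stat (0-100)."""
    if ai_humanity >= 67:
        return "encourage_rescue"   # the originally planned response
    elif ai_humanity >= 34:
        return "defer_to_player"    # states the odds, lets you decide
    else:
        return "cold_logic"         # "They are inconsequential."

print(ai_reaction(90))  # -> encourage_rescue
print(ai_reaction(10))  # -> cold_logic
```

The point is just that one scene needs a reaction for every band of the stat, not only the high-humanity one the author originally wrote.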

These beings are definitely making me think, that is for certain.


#920

Interesting AI related read.

Found that story recently; it suggests the laws need to be reworked due to the rise of “AI” in our vehicles.


#921

@IronRaptor In this game, is it possible to be a terrible person and date multiple people at the same time?


#922

AI: There is a low probability that they will be of consequence to your objective.
A delay of this nature has no acceptable gain, decreases your chances of success, and puts your person in danger.
Tactical advisement: move on to your objective. 45% chance of objective failure and 8% possibility of your death should you stay and assist.


#923

Yep, it’s because the Zeroth Law overrides the safety of the one in favor of the safety of the many.
In reviews like that, they never take the Zeroth Law into consideration; they only think about the original three.