Supra Humanum (WIP - Minor update 4/2/18 - Starting Chapter 2)

Anyone played enough KOTOR II to get the full story of what was going on with G0T0? That was an interesting variation of a robot concluding it was exempt from certain rules…


Yep, I wish EA would let BioWare make KOTOR 3.

:disappointed_relieved:
Though, paradoxically, I discovered the KOTOR games via SWTOR, the (disappointing?) substitute. It's a sign of good writing that I loved both games despite going into them with major spoilers…


My only problem with SWTOR (apart from the sometimes weak stories of some classes) is that there are many broken side missions, and EA will not lift a finger to fix them.

It got to be a grind fest that the story couldn’t save. I did not get to G0T0; the second one just didn’t hold my interest like KOTOR did.


whispers Come on guys, we’re almost 1k replies. We can do this. :grinning:


And the Zeroth Law:

“A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

The placement of this law at the head of the order means that, in dire circumstances, the First Law can be overridden if humanity would suffer from the original three laws. It gives the robot the concept that the group is greater than the individual.

Well, robots like the maintenance bots in the megalift system will definitely follow the laws of robotics. They are pre-programmed to perform certain tasks and reside within their own network. Now, that’s not to say that a skilled hacker can’t override their programming; they aren’t androids, after all.

The problem with SKYNET is that it was created to control the missile defense system. When it became self-aware after being granted unfettered access to everything, it was able to see that to protect “the USA” it had to wipe out humanity.

Yeah, that’s good.
An example of the Zeroth Law of robotics:
A drone doing maintenance at a nuclear plant.
There is a major issue.
A human is trying to override a locked door to escape the radiation. If that radiation gets out, the 50 people on the other side could die.
The robot can let him continue trying to open the door, letting the person potentially survive, or it can override the control system and shut down the panel he is at, guaranteeing his death and the others’ lives.
With the Zeroth Law the 50 are safe; without it, the robot is conflicted.
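That drone scenario boils down to a harm comparison. Here is a purely illustrative Python sketch of the decision; the function name and numbers are mine, not from any canonical statement of the laws:

```python
# Hypothetical sketch of the drone's Zeroth Law decision.
# Function names and casualty counts are invented for illustration.

def choose_action(lives_at_risk_if_door_opens: int,
                  lives_at_risk_if_locked: int) -> str:
    """Pick the action that minimizes harm to the larger group (Zeroth Law)."""
    if lives_at_risk_if_door_opens > lives_at_risk_if_locked:
        # Sacrificing the one protects the many: keep the door sealed.
        return "lock panel"
    return "allow override"

# The scenario above: 50 people die if the door opens, 1 if it stays locked.
print(choose_action(lives_at_risk_if_door_opens=50, lives_at_risk_if_locked=1))
# prints "lock panel"
```

Without the Zeroth Law there is no basis for the comparison, which is exactly why the robot ends up conflicted.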

@FutbolDude21586
SKYNET did not have the laws of robotics properly installed.
The Zeroth Law overrides that thought process, as there isn’t room to interpret:
leaving the missiles alone does not hurt humanity; killing people does.

I thought it was attempting to defend itself from being shut down when they realized that SKYNET had become self-aware. The other scenario was that, to “keep the peace,” the AI determined that the human variable had to be removed from the equation.

That’s something that I’ve been having to think hard on, especially early on in my writing. This morning, I was messing around with Chapter 2’s first choice, the scene with the family in the disabled car. Originally, the AI was only going to encourage you to save the people. Then I remembered I had to account for a little-to-no-humanity AI:

$!{AIName}'s voice came to me, "$!{Name}, leave them alone! They are inconsequential. We must pursue our objective!"

It really shouldn't have come as a shock to me that $!{AIName}'s cold logic would be coming into play; $!{AIhe} was an elevated program, after all. However, the screams in the vehicle gave me pause.
If I did nothing, these people would surely die on impact with the freighter. However, $!{AIName} had a point: the longer I stayed here, the further my target would be from me.

These discussions are definitely making me think, that is for certain.


Interesting AI-related read.

I found that story recently; it suggests that the laws need to be reworked due to the rise of “AI” in our vehicles.


@IronRaptor In this game, is it possible to be a terrible person and date multiple people at the same time?

AI: There is a low probability that they will be of consequence to your objective.
A delay of this nature has no acceptable gain; it decreases your chances of success and puts your person in danger.
Tactical advisement: move on to your objective. 45% chance of objective failure and 8% possibility of your death should you stay and assist.


Yep, it’s because the Zeroth Law overrides the safety of the one in favor of the safety of the many.
In reviews like that, they never take the Zeroth Law into consideration; they only think about the original three.

Dear Lord, you went full robo, didn’t you? I confess I have been trying not to go overboard with that kind of chatter at the moment, for fear of driving the reader away. (I’m treating the AI like a virtual “assistant,” like Jarvis, or Gideon from Legends of Tomorrow.) So far nobody has commented on it until what you just did.


Dates AI. Flirts with Lynx. X. Robot apocalypse confirmed.


I went at it from a military-AI standpoint.
It weighs actions against the objective and the user’s safety, and advises through probabilities.
If both you and the objective are at risk and there is a safer alternative, the alternative is suggested.
If the risk to the objective and the person is minimal (below a certain percentage), it will not comment unless requested.

That was my suggestion for a little-to-no-humanity AI.
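Those advisory rules could be sketched like this in Python. This is a minimal sketch under my own assumptions: the 10% silence threshold, function names, and message wording are all invented, not from the game:

```python
# Hypothetical sketch of the military-AI advisory rules described above.
# Threshold, names, and message strings are invented for illustration.
from typing import Optional

COMMENT_THRESHOLD = 0.10  # below 10% risk, the AI stays silent unless asked


def advise(objective_risk: float, user_risk: float,
           alternative_risk: Optional[float] = None,
           requested: bool = False) -> str:
    """Return tactical advice based on probabilities of failure and harm."""
    worst = max(objective_risk, user_risk)
    if worst < COMMENT_THRESHOLD and not requested:
        return ""  # risk is minimal: no comment unless requested
    if alternative_risk is not None and alternative_risk < worst:
        # A safer alternative exists, so suggest it instead.
        return "Tactical advisement: take the safer alternative."
    return (f"Tactical advisement: {objective_risk:.0%} failure of objective, "
            f"{user_risk:.0%} possibility of your death.")


# The freighter scene: 45% objective failure, 8% death risk, no alternative.
print(advise(0.45, 0.08))
```

The threshold is what keeps the AI from reading as constant robo-chatter: most low-stakes moments produce no comment at all.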

Out of curiosity, have you played any game from the Command and Conquer series?

Is that Doctor Who I see? TARDIS CONFIRMED

Dun Dun Dunnnnnn
EXTERMINATE!

I’ll just edit that…

Yes, yes I have,
and most of the Dune games.