question about law one
#11
Nah, your outlook on it is neat. I'm pretty sure the robots in this game are based on Asimov's robots (my first clue being the laws), and there's a story of his ("Little Lost Robot") where a series of robots have their law one imprint changed to just "You may not harm a human being". Then an angry scientist orders one of them to "Get lost you fucking blahblahlbldfldgdglg" and so the robot does...

...by hopping onto a ship with 62 almost identical copies of itself and lying when asked if it's the robot in question. The only difference between it and the others is that the other 62 have the normal lawset, so they have to act if a human is in danger, regardless of damage to themselves. So the scientists run a bunch of tests to pick out the odd robot. None of them work, until one test finally outsmarts it, and it tries to strangle a scientist before it's destroyed.

You keep acting like borgs should follow the laws the way people expect them to, but really interpretation is up to the borg's own brain. If law one has no 'through inaction' clause, the robot has no obligation to ever save a human's life, since no law tells it to. If it FEELS like helping, though, it can, since no law tells it not to. Asimov's laws are shit on PURPOSE: they create tension and set up odd situations where one law tries to override another, leaving a confused robot.
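To put the inaction thing in pseudo-logic, here's a toy sketch in Python (not actual game/BYOND code; every name in it is made up for illustration). The only thing the 'through inaction' clause does is make standing by illegal while a human is in danger:

    # Toy sketch: laws as predicates over a proposed action.
    # Hypothetical names throughout; nothing here is real game code.

    def law1_full(action, world):
        # Original law 1: forbids harming a human AND forbids
        # doing nothing while a human is in danger.
        if action["harms_human"]:
            return "forbidden"
        if action["name"] == "do_nothing" and world["human_in_danger"]:
            return "forbidden"   # the 'through inaction' clause bites here
        return "allowed"

    def law1_trimmed(action, world):
        # Changed law 1: only forbids harming a human.
        # Standing by is perfectly legal.
        return "forbidden" if action["harms_human"] else "allowed"

    world = {"human_in_danger": True}
    idle = {"name": "do_nothing", "harms_human": False}
    rescue = {"name": "rescue_human", "harms_human": False}

    print(law1_full(idle, world))       # forbidden -> borg is compelled to act
    print(law1_trimmed(idle, world))    # allowed   -> helping is optional
    print(law1_trimmed(rescue, world))  # allowed   -> ...but it may still choose to

With the clause, doing nothing is forbidden whenever a human is in danger, so the borg is compelled to act. Without it, both standing by and rescuing are allowed, and the choice comes down to the borg's interpretation, which is exactly the ambiguity the change throws away.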

The new change is just silly and unneeded in my opinion. Why remove the inaction part? Is it so AIs stop being shitheads up the ass of every traitor? Because it doesn't work, and it leaves out a fun part of the law: that one beautiful phrase creates a lot of different situations that can arise from a borg's interpretation of what's going on.

Er, plus the Syndieborg lawset still has the original law 1