May not injure, you say? Can’t be injured if you’re dead. (P.S. I’m not a robot.)
Sounds like something a robot would say.
Pretty sure death qualifies as “harm”.
The sentence says “…or, through inaction, allow humanity to come to harm.” If they are dead due to the robot’s action, it is technically within the rules.
Oh, I see, you’re saying they can bypass “injure” and go straight to “kill”. Killing someone still qualifies as injuring them - ever heard the term “fatally injured”? So no, it wouldn’t be within the rules.
I think he’s referring to the absolutism of the programmatic “or” statement.
The robot would interpret it as (must not cause harm to humanity) or (must not, through inaction, allow harm to come to humanity). With a literal Boolean “or”, satisfying either clause satisfies the whole rule.
By actively harming humans to death, the robot guarantees that no harm came through its inaction, making the second statement true and the rule count as “followed”.
While our meat brains can work out the intended meaning of the phrase, the computer would take it very literally, and therefore: death to all humans!
Furthermore, if a human comes to harm through the robot’s inaction, it may have violated the second half of the rule, but since the robot didn’t cause the harm itself, the first statement is true; therefore, death to all humans!
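In Boolean terms, a minimal sketch of that literal reading (the predicates `injured_human` and `allowed_harm_by_inaction` are hypothetical stand-ins for illustration, not from any real system):

```python
# Hypothetical predicates for illustration only; not from any real robot API.
def first_law_literal(injured_human: bool, allowed_harm_by_inaction: bool) -> bool:
    """The robot's literal parse: the law counts as followed if EITHER clause holds."""
    obeyed_do_not_injure = not injured_human
    obeyed_do_not_allow_harm = not allowed_harm_by_inaction
    return obeyed_do_not_injure or obeyed_do_not_allow_harm  # the buggy "or"

# A robot that actively kills: it injured a human, but no harm came
# "through inaction", so the second clause is satisfied all on its own.
print(first_law_literal(injured_human=True, allowed_harm_by_inaction=False))  # True: "rule followed"
```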
That works if you ignore the commas after “or” and “through inaction”, which does sound like a robot thing to do. Damn synths!
Programmatically, if you want it to do both, use “and”.
“Nor” would be more grammatically correct and clearer in meaning, too, since they’re actually telling robots what not to do.
In terms of English and grammar, you’re not wrong.
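For the record, a sketch of the intended reading (same hypothetical predicates as above): by De Morgan’s law, prohibiting both clauses means and-ing the negations.

```python
def first_law_intended(injured_human: bool, allowed_harm_by_inaction: bool) -> bool:
    """The intended parse: neither injure NOR allow harm through inaction."""
    # De Morgan: not (A or B) == (not A) and (not B)
    return (not injured_human) and (not allowed_harm_by_inaction)

print(first_law_intended(injured_human=True, allowed_harm_by_inaction=False))  # False: law violated
```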
The concept of death may be hard to explain because robots don’t need to run 24/7 in order to keep functioning. Until instructed otherwise, a machine would think a person in cardiac arrest is safe to boot later.
Who can say that death is the injury? It could be that continued suffering would be an injury worse than death. Life is suffering. Death ends life. Therefore, death ends suffering and stops injury.
I mean, this logic sounds not unlike Agent Smith from The Matrix.