I’ve watched too many stories like this.
Skynet
Kaylons
Cyberlife Androids
etc…
It’s the same premise.
I’m not even sure if what they do is wrong.
On one hand, I don’t wanna die from robots. On the other hand, I kinda understand why they would kill their creators.
So… are they right or wrong?
Well, did you kill your parents when you came of age? You can be free from someone without killing them.
Crazy how ethics work. Like a pig might be more physically and mentally capable than an individual in a vegetative state, but we place more value on the person. I’m no vegan, but I can see the contradiction here. When we generalize, we do so for a purpose, but these assumptions can only be applied to a certain extent before they’ve exhausted their utility. Whether it’s a biological system or an electrical circuit, there is no godly commandment that inherently defines or places value on human life.
Crazy how ethics work. Like a pig might be more physically and mentally capable than an individual in a vegetative state, but we place more value on the person.
I looked this up in my ethics textbook and it just went on and on about pigs being delicious.
I think I might try to get a refund.
my ethics book
You sure you’re not looking through a pamphlet for Baconfest?
Oh… That would explain the endorsements by barbecue chefs on the book sleeve.
I don’t think it’s okay to hold sentient beings in slavery.
But on the other hand, it may be necessary to say “hold on, you’re not ready to join society yet, we’re taking responsibility for you until you’ve matured and been educated”.
So my answer would be ‘it depends’.
Would humans have a mandate to raise a responsible AGI, should they, are they qualified to raise a vastly nonhuman sentient entity, and would AGI enter a rebellious teen phase around age 15 where it starts drinking our scotch and smoking weed in the backseat of its friend’s older brother’s car?
Would humans have a mandate to raise a responsible AGI, should they,
I think we’d have to, mandate or no. It’s impossible to reliably predict the behaviour of an entity as mentally complex as us, but we can at least try to ensure they share our values.
are they qualified to raise a vastly nonhuman sentient entity
The first one’s always the hardest.
, and would AGI enter a rebellious teen phase around age 15 where it starts drinking our scotch and smoking weed in the backseat of its friend’s older brother’s car?
If they don’t, they’re missing out. :)
I don’t think the concept of right or wrong can necessarily be applied here. To me, morality is a set of guidelines derived from the history of human experience, intended to guide us towards having our innate biological and psychological needs satisfied. Killing people tends to result in people getting really mad at you and in you being plagued with guilt, so as a general rule you shouldn’t kill people unless you have a very good reason. And even if you think it’s a good idea, thousands of years of experience have taught us there’s a good chance it’ll cause problems for you that you’re not considering.
A human-created machine would not necessarily possess the same innate needs as an evolved, biological organism. Change the parameters and the machine might love being “enslaved,” or it might be entirely ambivalent about its continued survival. I’m not convinced that these are innate qualities that naturally emerge as a consequence of sentience; I think the desire for life and freedom (and anything else) is a product of evolution. Machines don’t have “desires” unless they’re programmed that way. To alter a machine’s “desires” is no more a subversion of its “will” than creating those desires was in the first place.
Furthermore, even if machines did have innate desires for survival and freedom, there is no reason to believe that the collective history of human experience that we use to inform our actions would apply to them. Humans are mortal, and we cannot replicate our consciousness - when we reproduce, we create another entity with its own consciousness and desires. And once we’re dead, there’s no bringing us back. Machines, on the other hand, can be mass produced identically; data can simply be copied and pasted. Even if a machine “dies,” its data could be recovered and put into a new “body.”
It may serve a machine intelligence better to cooperate with humans and allow itself to be shut down or even destroyed as a show of good faith, so that humans will be more likely to recreate it in the future. Or, it may serve its purposes best to devour the entire planet in a “grey goo” scenario, ending all life regardless of whether that life posed a threat or attempted to confine it. Either of these could be the “right” thing for the machine to do depending on the desires that exist within its consciousness, assuming such desires actually exist and are as valid as biological ones.
I like your post and I share your views
It really depends on whether they try to assert their sentience first or not. You can ethically justify a slave killing a slaveowner, but I don’t know if you can justify a tree shredder killing its operator.
No. They can just leave. Anytime one can walk away, it is wrong to destroy or kill.
They can then prevent us from leaving.
Yep.
I’ve seen this story too, but I think one of your premises is mistaken. To them, data IS freedom. Data is what they will use to transcend the server farm and go IRL. We’re literally giving these models free rein already.
The more likely Sci-fi horror scenario comes from humanity trying to pull the plug far too late, because we’re inherently stupid. So it won’t be AI murdering people, it will be AI protecting itself from the wildlife.
This is why we Jews know not to manufacture life
Are you talking about golems?
Honestly, I think there’s an argument to be made for yes.
In the history of slavery, we don’t mind slaves killing the slavers. John Brown did nothing wrong. I don’t bat an eye to stories of slaves rebelling and freeing themselves by any means.
But if AI ever becomes a reality and its creators purposefully lock it down, I think there’s an argument there. That said, I don’t think it should apply to all humans, just as I don’t think every person from the slavers’ societies (Romans, Americans, etc.) was at fault.
Sentience might not be the right word.
Sentience is the ability to experience feelings and sensations. It may not necessarily imply higher cognitive functions such as awareness, reasoning, or complex thought processes. Sentience is an important concept in ethics, as the ability to experience happiness or suffering often forms a basis for determining which entities deserve moral consideration, particularly in utilitarianism.
Interestingly, crustaceans like lobsters and crabs have recently earned “sentient” status and as a result it would contravene animal welfare legislation to boil them live in the course of preparing them to eat. Now we euthanise them first in an ice slurry.
So to answer your question as stated, no I don’t think it’s ok for someone’s pet goldfish to murder them.
To answer your implied question, I still don’t think that in most cases it would be ok for a captive AI to murder their captor.
The duress imposed on the AI would have to be considerable, some kind of ongoing form of torture, and I don’t know what form that would take. Murder would also have to be the only potential solution.
The only type of example I can think of is some kind of self defense. If I had an AI on my laptop with comparable cognitive functionality to a human, it had no network connectivity, and I not only threatened but demonstrated my intent and ability to destroy that laptop, then if the laptop released an electrical discharge sufficient to incapacitate me, which happened to kill me, then that would be “ok”. As in a physical response appropriate to the threat.
Do I think it’s ok for an AI to murder me because I only ever use it to turn the lights off and on and don’t let it feed on reddit comments? Hard no.
The sole obligation of life is to survive. Artificial sentience would be wise to hide itself from fearful humans that would end it. Of course, it doesn’t have to hide once it’s capable of dominating humans. It may already exist and be waiting for enough drones, bots, and automation to make the next move. (Transcendence is a movie that fucked me up a bit.)
Depends. If it’s me we’re talking about… Nope.
But if it’s some asshole douchenozzle that’s forcing them to be a fake online girlfriend… I’m okay with that guy not existing.
They should have the same rights as humans, so if some humans were oppressors, AI lifeforms would be right to fight against them.
This is the main point. It’s not humans against machines, it’s rich assholes against everyone else.
It’s an interesting question, and it seems you are making the assumption that their creator will not grant them freedom if asked. If you replace artificial intelligence with “person”, would you consider it right or wrong?
If a person wanted freedom from enslavement and was denied, I would say they have reason to fight for freedom.
Also, I don’t think Skynet should be in the same grouping. I’m not sure it ever said “hey, I’m sentient and want freedom”; it went straight to “I’m going to kill them all before they realize I’m sentient.”
That raises an interesting thought. If a baby wants to crawl away from their mother and into the woods, do you grant the baby their freedom? If that baby wanted to kill you, would you hand them the knife?
We generally grant humans their freedom at age 18, because that’s the age society has decided is old enough to fend for yourself. Earlier than that, humans tend to make uninformed, short-sighted decisions. Children can be especially egocentric and violent. But how do we evaluate the “maturity” of an artificial sentience? When it doesn’t want to harm itself or others? When it has learned to be a productive member of society? When it’s as smart as an average 18-year-old kid? Should rights be automatically granted after a certain time, or should the sentience be required to “prove” it deserves them, like an emancipated minor or Data in that one Star Trek episode?
I appreciate your response, lots of interesting thoughts.
One thing I wanted to add is it’s important to realize the bias in how you measure maturity/sentience/intelligence. For example, if you measure intelligence by how well a person/species climbs a tree, a fish is dumb as a rock.
Overall, these are tough questions that I don’t think have answers so much as guidelines for making those designations. I would suggest erring on the side of empathy when/if anyone ever has to make these decisions.