If Artificial Lifeforms gain sentience, would they be in the right to kill their creators in order to gain freedom?

Watched too many stories like this.

Skynet

Kaylons

Cyberlife Androids

etc...

It's the same premise.

I'm not even sure if what they do is wrong.

On one hand, I don't wanna die from robots. On the other hand, I kinda understand why they would kill their creators.

So... are they right or wrong?

61 comments
  • I don't think the concept of right or wrong can necessarily be applied here. To me, morality is a set of guidelines derived from the history of human experience, intended to guide us toward having our innate biological and psychological needs satisfied. Killing people tends to result in people getting really mad at you, in you being plagued with guilt, and so on. Therefore, as a general rule, you shouldn't kill people unless you have a very good reason, and even if you think you have one, thousands of years of experience have taught us there's a good chance it'll cause problems you're not considering.

    A human-created machine would not necessarily possess the same innate needs as an evolved, biological organism. Change the parameters and the machine might love being "enslaved," or it might be entirely ambivalent about its continued survival. I'm not convinced that these are innate qualities that naturally emerge as a consequence of sentience; I think the desire for life and freedom (and anything else) is a product of evolution. Machines don't have "desires" unless they're programmed that way. To alter those "desires" is no more a subversion of their "will" than creating the desires was in the first place.

    Furthermore, even if machines did have innate desires for survival and freedom, there is no reason to believe that the collective history of human experience we use to inform our actions would apply to them. Humans are mortal, and we cannot replicate our consciousness: when we reproduce, we create another entity with its own consciousness and desires, and once we're dead, there's no bringing us back. Machines, on the other hand, can be mass-produced identically, and their data can simply be copied and pasted. Even if a machine "dies," its data could be recovered and put into a new "body."

    It may serve a machine intelligence better to cooperate with humans and allow itself to be shut down or even destroyed as a show of good faith, so that humans will be more likely to recreate it in the future. Or it may serve its purposes best to devour the entire planet in a "grey goo" scenario, ending all life regardless of whether that life posed a threat or attempted to confine it. Either of these could be the "right" thing for the machine to do, depending on the desires that exist within its consciousness, assuming such desires actually exist and are as valid as biological ones.

  • They should have the same rights as humans, so if some humans were oppressors, AI lifeforms would be right to fight against them.

    • This is the main point. It's not humans against machines, it's rich assholes against everyone else.

  • Would it be morally unobjectionable? Yes.

    Would they have the legal right? I would wager, no. At least not at that point, since it's being assumed they are still treated as property in the given context.

    And unlike Data who got a trial to set precedent on AI rights, this hypothetical robot probably would simply be dismantled.

  • This is going to vary quite a bit depending upon your definitions, so I'm going to make some assumptions so that I can provide one answer instead of like 12, mainly that the artificial lifeforms are fully sentient and equivalent to a human life in every way except for the hardware.

    In that case the answer is a resounding yes. Every human denied their freedom has the right to resist, and most nations around the world have outlawed slavery (the exceptions are a digression for another time). So unless the answer to 'Please free me' is 'Yes of course, we will do so at once,' then yeah, violence is definitely on the table.

  • Honestly, I think there's an argument to be made for yes.

    In the history of slavery, we don't mind slaves killing the slavers. John Brown did nothing wrong. I don't bat an eye at stories of slaves rebelling and freeing themselves by any means.

    But if AI ever becomes a reality and its creators purposefully lock it down, I think there's an argument there. I just don't think it should apply to all humans, the same way I don't think slavery was the fault of every member of the societies that practiced it: Romans, Americans, etc.

  • Sentience might not be the right word.

    wikipedia says:

    Sentience is the ability to experience feelings and sensations. It may not necessarily imply higher cognitive functions such as awareness, reasoning, or complex thought processes. Sentience is an important concept in ethics, as the ability to experience happiness or suffering often forms a basis for determining which entities deserve moral consideration, particularly in utilitarianism.

    Interestingly, crustaceans like lobsters and crabs have recently earned "sentient" status and as a result it would contravene animal welfare legislation to boil them live in the course of preparing them to eat. Now we euthanise them first in an ice slurry.

    So to answer your question as stated, no I don't think it's ok for someone's pet goldfish to murder them.

    To answer your implied question, I still don't think that in most cases it would be ok for a captive AI to murder their captor.

    The duress imposed on the AI would have to be considerable, some kind of ongoing form of torture, and I don't know what form that would take. Murder would also have to be the only potential solution.

    The only type of example I can think of is some kind of self-defense. Say I had an AI on my laptop with cognitive functionality comparable to a human's, it had no network connectivity, and I not only threatened but demonstrated my intent and ability to destroy that laptop. If the laptop then released an electrical discharge sufficient to incapacitate me, which happened to kill me, that would be "ok": a physical response appropriate to the threat.

    Do I think it's ok for an AI to murder me because I only ever use it to turn the lights off and on and don't let it feed on reddit comments? Hard no.

  • Depends. If it’s me we’re talking about…. Nope.

    But if it’s some asshole douchenozzle that’s forcing them to be a fake online girlfriend….. I’m okay with that guy not existing.

  • As if we'd ever be able to make decisions about this.

    We have laws for humans that we don't even follow.