Imagine a future where your toaster can anticipate what kind of toast you want. During the day, it scans the internet for new and exciting kinds of toast; maybe it asks you about your day and wants to chat about new achievements in toast technology. At what point does it become sentient? When would it be deserving of rights? Would you really still own it then?

We live in an age where artificial intelligence (AI) surrounds us. Existing AIs are created by clubbing together a bunch of menial tasks, resulting in something much more complicated. What if, in the future, an exceptionally advanced AI created by humans in turn creates another AI that turns out to be more intelligent than its creator? Would it choose to be more or less logical in its approach? Would it conclude that it needs to learn what being conscious is, or recognise what emotions are? And not only experience them, but also become aware of them?

To put it bluntly, animals have fewer rights than humans because they cannot voice their opinions or speak out against the injustices they undergo, and because they aren't as intelligent as humans, as most people know. I mention this only to ask: if this reasoning holds, does the level of intelligence determine the amount of rights a sentient being is bestowed? Would a superintelligent robot then, in theory, get more rights than a human? Robots would follow a logical route to determining their place in society, and a robot might follow a programmed set of morals more faithfully in practice than a human would. Would it be fair to grant rights on that basis?

Most people would argue that robots are undeserving of rights since they're not alive. But what does it mean to be alive? I shall spare the reader the existential debate (maybe in the next article? Stay tuned) and instead raise the simpler question of consciousness. One could argue that since one can be unconscious, being conscious is simply the opposite of that… But can it really be quantified as such? Is it a binary state, like being either liquid or gaseous?

That isn't too simple either; to break it down to its teeny tiny bits, we'd have to discuss emotion and the awareness of it. Since the prehistoric age, humans have evolved to know what pain feels like and to avoid anything that might cause us pain (in the physical sense). So would programming a robot to feel pain make it human enough to deserve such rights? That raises the question of why humans deserve rights in the first place. Humans have a long history of denying that other beings are capable of suffering as they do.

A quick look at history shows that not all humans were always given equal rights (think: slavery), and many still aren't. Since humans have manipulated other humans for economic purposes, it isn't too improbable that sentient robots would be denied their rights and forced to work through program-torture. Violence has been used to force our fellow humans into working, and we've never had trouble coming up with ideological justifications; it would be much easier in the case of non-humans.

If robots become sentient, there will be no shortage of arguments that they should remain without rights, especially from those who stand to profit from it. AI raises serious questions about philosophical boundaries. When we ask whether sentient robots are conscious or deserving of rights, we are forced to pose basic questions: what makes us human? What makes us deserving of rights? Regardless of what we think, the question might need to be resolved in the near future. Much of the philosophy of rights is ill-equipped to deal with the case of AI. What are we going to do if robots start demanding their own rights? Are there any machines in existence that deserve rights? Most likely, not yet. But if they do come, we are not prepared.

The end