If robots ever need rights, we'll have designed them unjustly

Robots don't need rights right now.

A surprising number of people I meet online self-identify as robots.  Look, I totally get that some of us have more trouble relating to other people than others do, and some of us have emotions that don't seem to match what we see other people having on TV, in movies, or in junior-high locker-room talk.  But that doesn't mean we aren't human.

Please stop putting pictures of robots with faces on ethics reports.
Here's how you can tell you are a legal person – you were born, and you will die.  You can be hurt in ways you will never recover from, and not just physically.  If you are socially shunned, you will be less healthy and your life expectancy will be shorter.  If time is taken away from you, for example by putting you in jail, you can never recover it – not only because your lifespan is finite, but also because the people and things you were raised with and identify with will have faded and changed by the time you get out, and you will have lost something of yourself.  Even losing things you own, even just money, takes away some of your time and history that you can never replace.  If you are a person, you can read this and realise that having time taken away from you when you've done nothing wrong is a harm that can be redressed, and you can look to government and other organisations to support your right to redress.  This makes you a person.

No existing robot needs rights.

Animals don't have rights; they have welfare.

Animals can suffer just like humans – the ways humans suffer all derive from our being animals.  But rights entail responsibilities, and no animals but humans can understand the system of justice that allows them to seek redress.  Animals all struggle for their wellbeing in their own way, and within human society our justice system does designate that we should provide them with welfare.  We define ways we are obliged to relieve animal suffering, and we use our justice system to enforce them.  But animals are themselves incapable of participating in that justice system directly, which means they are not persons.  Even some people lose their capacity to defend themselves, which makes them not legal persons but rather wards of others who are able to care for them.  Some people have sought to make animals wards as well, but so far no court has seen that as appropriate.  Our ethics was constructed by our society, and human society is its core.

If robots were to need rights or welfare, it would be our fault.

What makes something an artefact is that it was produced by humans.  This means robots are more like novels than children – they are authored, not just raised.  Because we are social animals and what we do affects others, when we produce artefacts we have responsibilities concerning how we design them.  We can and do design AI such that it is easy to replace all its components, such that everything that is learned is backed up, and such that no robot is unique or irreplaceable.  If we are designing a robot as a piece of art, we may not want to do that.  But if we are designing a commercial product, we are obliged to design it in a way that doesn't hurt humans.  That implies we shouldn't set up robots to require access to our limited resources, including our time, love, attention, courts, and taxes.  Of course, some people enjoy giving their time and affection to houseplants, toys, fictional characters and, yes, robots, and that's fine.  But allowing people to act as they like is not the same as building unnecessary obligations into our economy.
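To make that design practice concrete, here is a minimal sketch of the idea – a robot whose learned state is checkpointed outside any one physical unit, so the unit can be swapped out with nothing lost.  The class and file names here are hypothetical illustrations of mine, not any real robotics stack:

    # A hypothetical illustration: everything the robot learns is
    # immediately checkpointed to external storage, so no individual
    # unit is unique or irreplaceable.

    import json
    from pathlib import Path

    class ReplaceableRobot:
        """A robot whose learned state lives outside any one physical unit."""

        def __init__(self, checkpoint: Path):
            self.checkpoint = checkpoint
            # A fresh unit restores everything previously learned and
            # starts exactly where the old one left off.
            self.learned = (
                json.loads(checkpoint.read_text()) if checkpoint.exists() else {}
            )

        def learn(self, key: str, value: float) -> None:
            # Record something learned and immediately back it up.
            self.learned[key] = value
            self.checkpoint.write_text(json.dumps(self.learned))

    # Unit A learns something, is destroyed, and unit B resumes seamlessly.
    a = ReplaceableRobot(Path("robot_memory.json"))
    a.learn("grip_force", 0.7)
    del a  # the hardware is gone...
    b = ReplaceableRobot(Path("robot_memory.json"))
    assert b.learned["grip_force"] == 0.7  # ...but nothing learned was lost

The point of the design choice is that destroying the hardware destroys nothing of value: a replacement unit restores the same backup and carries on.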

Making robots legal persons would only allow corporations to hurt real people.

There is a long legal history of treating corporations as legal persons.  The reason is that corporations are composed of people, so in theory they not only know and can pursue their own rights, but also suffer from loss of time, social status, goods, money, power, and so on.  So in theory, it is simpler to extend our existing system of justice to corporations as if they were people than to invent a new one for them.

Unfortunately, some corporations have taken advantage of this, and of bankruptcy law, to get out of their very real obligations to humans.  Some corporations, and even rich individuals, create "shell" corporations, designate them as the holders of some liability, and then allow the new shell organisation to go bankrupt.  By the magic of bankruptcy, all the legal and financial liabilities disappear, with the only cost being the reputation and standing of the shell corporation.  A few people who probably knew they had temporary jobs lose those jobs, the shell corporation is shut down, and the only people who really suffer are those who lost money – because they had contracts with the shell corporation, were supposed to be paid by it, or were owed the taxes it should have paid.  Donald Trump is famous for doing this kind of thing.

Allowing a corporation to automate part of its business process, call it an "electronic person", and make it responsible for taxes and liability is basically creating an empty shell organisation.  There would literally be no one who suffers when the robot goes bankrupt, except the taxpayers who thought they were somehow going to get money from the fact that human workers had been replaced by robots, and any person who was hurt by the robot's poor design – say, in a traffic accident – and tries to sue it.  Oh gosh, the robot is out of money – too bad!  I guess we should dissolve the robot, since it's bankrupt.

Needless to say, allowing corporations to evade taxes increases wealth inequality.  Some corporations are already doing this with "free" Internet services.  We need to get better at denominating the exchanges we are making with contemporary transnational digital businesses.

Corporations that choose to automate parts of their business process should still be liable for the profits they make and the injuries they cause.

If you haven't guessed, this essay is about the European Parliament's Draft Report with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)).  I was very concerned that the final report should remove the residual language about "electronic persons".  The report has already been seriously improved, and I hope the European Commission will be more sensible still when it drafts its legislation.  Still, hope is not enough – now we need to contribute to the public consultation.


Update: 13 February 2017

The day after I first wrote this blog post, the final European Parliamentary report on AI was released.  The language about electronic persons that originally got me concerned has been both weakened and clarified.  See the last line of page 17 and most of page 18 of the report, and compare that to section 31 on page 12 of the original draft.  Importantly, this report is not in itself legislation; it is a call for legislation.  How the European Commission (which writes legislation for the EU) chooses to interpret and address this recommendation will be known later in 2017, so work on this issue is ongoing.  Right now, there is an open call for public consultation.

Update: 29 November 2019

More recent, more formal academic publications of the above arguments:

See also:
