How to Survive the Robot Apocalypse


David J. Gunkel Northern Illinois University [email protected]

Monday, June 20, 16

Robot Invasion!


Whether we recognize it or not, we are experiencing a robot invasion. Machines are now everywhere and doing everything.


We chat with them online, we play with them in digital games, we collaborate with them at work, and we rely on their capabilities to help us manage all aspects of our increasingly data-rich, digital lives.


As these technologies come to occupy influential positions in contemporary culture—positions where they are not just tools or instruments of human action but social actors in their own right—we will need to ask ourselves some intriguing but rather difficult questions:

Responsibility


At what point might a robot, an algorithm, or other computer system be held responsible for the decisions it makes or the actions it deploys? When, in other words, would it make sense to say “It’s the computer’s fault?”

Rights


Likewise, at what point might we have to seriously consider extending something like rights—civil, moral, or legal standing—to these socially active technologies? When, in other words, would it no longer be considered nonsense to suggest something like “the rights of robots?”

Objective: Demonstrate why it not only makes sense to address these questions, but also why avoiding this subject could have significant social consequences.


Although these questions are a staple in science fiction, we have, I believe, already passed the tipping point. And today I would like to demonstrate why it not only makes sense to talk about these things but also why avoiding this subject could have significant social consequences.

Agenda 1) The Default Setting: The Instrumental Theory of Technology

2) The New Normal: Recent Challenges to the Default Setting

3) Consequences: What This Machine Incursion Means


My investigation of this will proceed in three steps: The first will briefly review the way we typically deal with technology and the question of moral status. I call this the default setting. The second will consider the opportunities and challenges that autonomous technologies pose to this way of thinking. Finally, the third part will draw out the consequences of this material, explaining what this means for us, our world, and the other entities we encounter here.

1 The Default Setting


Initially, the very notion of “responsible machines” or “robot rights” probably sounds absurd. Don't we already have enough trouble with human beings? So why make things more confusing?

Technology = Tool


And this line of reasoning sounds intuitively correct. In fact, it seems there is little to talk about. Machines, even sophisticated information processing devices, like computers, smart phones, software algorithms, robots, etc., are technologies, and technologies are tools created and used by human beings. A mechanism or technological object means nothing and does nothing by itself; it is the way it is employed by a human user that ultimately matters.

Instrumental Theory: “The instrumentalist theory offers the most widely accepted view of technology. It is based on the common sense idea that technologies are 'tools' standing ready to serve the purposes of users.” - Feenberg 1991


This is called “the instrumental theory of technology.” “The instrumentalist theory offers the most widely accepted view of technology. It is based on the common sense idea that technologies are 'tools' standing ready to serve the purposes of users." And because an instrument is considered to be 'neutral,' it is evaluated not in and of itself, but on the basis of how it is employed by its human designer or user.

Logical error - attributing agency to an inanimate object

Moral problem - deflecting responsibility onto a mere instrument or tool


Consequently, blaming the computer is to make at least two errors. First, it wrongly attributes agency to something that is a mere instrument or inanimate object. This logical error mistakenly turns a passive object into an active subject. Second, it allows human users to deflect responsibility by putting the blame on something else. In other words, it allows human users to “scapegoat the computer,” and deflect responsibility for their own actions.

Instrumental Theory

Default setting - Summary: The instrumental theory locates responsibility in human decision making and action, and resists all efforts to defer responsibility to some inanimate object by blaming or scapegoating what are mere tools.

Consequently, the instrumental theory not only sounds reasonable, it is obviously useful. It is, one might say, instrumental for making sense of things in an age of increasingly complex technological systems. And it has a distinct advantage in that it locates responsibility in a widely-accepted and seemingly intuitive subject position, in human decision making and action, and it resists all efforts to defer responsibility to some inanimate object by blaming or scapegoating what are mere instruments or tools.

2 The New Normal


The instrumental theory has served us well, and it has helped us make sense of all kinds of technological innovation, from simple hand tools to rocket ships and personal computers.

Technology != Tool The instrumental theory, although a useful tool or instrument for understanding technology, no longer works. It is no longer a useful tool for understanding recent innovations.


But all that is over…and it is over precisely because our machines no longer function as mere instruments. In other words, the instrumental theory is no longer a useful tool for understanding technological innovation.

1. Moral Agency

2. Moral Patiency

Responsibility

Rights


So let’s consider two examples. The first addresses machine moral agency, or the question of responsibility; the second addresses machine moral patiency, or the question of rights.

1. Responsibility


We now have some rather interesting experiences with machine learning and social responsibility: Google DeepMind’s AlphaGo and Microsoft’s Tay.ai. Both AlphaGo and Tay are advanced AI systems using some form of machine learning.

1. Responsibility

“Our Nature paper, published on 28 January 2016, describes the technical details behind a new approach to computer Go that combines Monte-Carlo tree search with deep neural networks that have been trained by supervised learning, from human expert games, and by reinforcement learning from games of self-play.” - http://deepmind.com/alpha-go


AlphaGo, as Google DeepMind explains it, “combines Monte-Carlo tree search with deep neural networks that have been trained by supervised learning, from human expert games, and by reinforcement learning from games of self-play.” In other words, AlphaGo does not play the game of Go by following a set of cleverly designed moves fed into it by human programmers. It is designed to formulate its own instructions from the experience of game play.
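The core idea here, that a system derives its own policy from game experience rather than from hand-coded moves, can be shown in a toy sketch. This is not AlphaGo's architecture (no neural networks, no Monte-Carlo tree search); it is a minimal tabular self-play learner for a far simpler game, one-pile Nim, offered only to illustrate what "learning from games of self-play" means:

```python
import random

def train(episodes=20000, alpha=0.5, epsilon=0.1, seed=0):
    """Learn a policy for one-pile Nim purely from self-play.

    Rules: take 1-3 stones per turn; whoever takes the last stone wins.
    No winning move is ever programmed in; the table Q is filled only
    from the outcomes of games the learner plays against itself.
    """
    rng = random.Random(seed)
    Q = {}  # Q[(stones_left, move)] -> value estimate for the player to move
    for _ in range(episodes):
        stones = rng.randint(1, 12)
        history = []  # (state, move) pairs; players alternate each ply
        while stones > 0:
            moves = range(1, min(3, stones) + 1)
            if rng.random() < epsilon:          # explore a random move
                move = rng.choice(list(moves))
            else:                               # exploit current estimates
                move = max(moves, key=lambda a: Q.get((stones, a), 0.0))
            history.append((stones, move))
            stones -= move
        # Whoever made the last move took the final stone and won;
        # propagate +1/-1 back through the alternating plies.
        reward = 1.0
        for state, move in reversed(history):
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + alpha * (reward - old)
            reward = -reward
    return Q

def best_move(Q, stones):
    """Greedy move under the learned value table."""
    return max(range(1, min(3, stones) + 1),
               key=lambda a: Q.get((stones, a), 0.0))
```

With enough episodes, the greedy policy tends to rediscover Nim's known winning strategy (leave your opponent a multiple of four) even though no move was ever specified by the programmer, which is precisely the sense in which such a system's moves are "out of our hands."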

1. Responsibility


Although less is known about the inner workings of Tay, Microsoft explains that the system “has been built by mining relevant public data.” They trained Tay’s neural networks on anonymized data obtained from social media and then designed the system to evolve its behavior from interacting with users on Twitter, Kik, and GroupMe.

1. Responsibility

“Although we have programmed this machine to play, we have no idea what moves it will come up with. Its moves are an emergent phenomenon from the training. We just create the data sets and the training algorithms. But the moves it then comes up with are out of our hands.”


What both implementations have in common is that the engineers who designed and built them have no idea what the systems will eventually do once they are in operation. As one of the creators of AlphaGo has explained, “Although we have programmed this machine to play, we have no idea what moves it will come up with. Its moves are an emergent phenomenon from the training. We just create the data sets and the training algorithms. But the moves it then comes up with are out of our hands.” Machine learning systems, like AlphaGo, are intentionally designed to do things that we cannot anticipate or completely control.

1. Responsibility

We now have autonomous computer systems that, in one way or another, have “a mind of their own”


In other words, we now have computer systems that in one way or another have “a mind of their own.” And this is where things get really interesting, especially when we ask about responsibility.

1. Responsibility

AlphaGo takes 4 of 5 games - Who won? - Who gets the prize? - Who beat Lee Sedol?


AlphaGo was designed to play Go, and it proved its ability by beating an expert human player. So who won? Who gets the accolade? Who actually beat Lee Sedol? Following the dictates of the instrumental theory of technology, actions undertaken with the computer would be attributed to the human programmers who initially designed the machine to do what it does.

1. Responsibility


But this explanation does not necessarily hold for a machine like AlphaGo, which is intentionally designed to do things that exceed the knowledge and control of its human programmers. In fact, in most of the reporting on this event, it is not Google or the engineers at DeepMind who are credited with the victory. It is AlphaGo.

1. Responsibility

Moral questions - Who is responsible for the hateful tweets? - Who is responsible for the bigoted comments?

Things get even more complicated with Tay, Microsoft’s foul-mouthed teenage AI, when we ask the question: Who is responsible for the hateful Tweets? Who can be held accountable for the bigoted comments posted to Twitter?

1. Responsibility

Microsoft's programmers

According to the instrumentalist way of thinking, we would have to blame the programmers at Microsoft, who designed the AI to be able to do these things. But the programmers obviously did not design Tay to be racist. She developed this reprehensible behavior by learning from interactions on the Internet.


According to the instrumentalist way of thinking, we would need to blame the programmers at Microsoft, who designed the AI to be able to do these things. But the programmers obviously did not set out to design Tay to be a racist. She developed this reprehensible behavior by learning from interactions on the Internet. So how did Microsoft assign responsibility?

1. Responsibility

Blame the victim

“The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.” - Microsoft email 3/24/2016


Initially a company spokesperson sent out an email that sought to blame the victim. “The AI chatbot Tay,” the spokesperson explained, “is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.” According to Microsoft, then, it is not the programmers or the corporation who are responsible for the hate speech. It is the fault of the users (or some users) who interacted with Tay and taught her to be a bigot. Tay’s racism, in other words, is our fault.

1. Responsibility

Partial apology / excuse

“As many of you know by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we will look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values” - Peter Lee, VP of MS Research 3/25/2016


Later, Peter Lee--VP of Microsoft Research--posted an official statement on the Microsoft blog. He apologized for the “unintended offensive and hurtful tweets from Tay.” But this apology is also unsatisfying. According to Lee, Microsoft is only responsible for not anticipating the bad outcome. It does not take responsibility for the offensive Tweets. For Lee, it is Tay who is named and recognized as the source of the “wildly inappropriate and reprehensible words and images.” In other words, Tay is responsible for the racist tweets.

2. Rights

“Sociable robots are socially intelligent robot partners that interact with humans to promote social and intellectual benefits, working alongside humans as partners, learning from people as apprentices, and fostering more engaging interaction between people” – C. Breazeal 2015

Cynthia Breazeal and Jibo

Second, let’s look at situations of machine rights. We now have a new breed of robot that we call “sociable robots.” These mechanisms are currently the closest thing we have to the droids we see in Star Wars. And a good example of this is the work of Cynthia Breazeal, especially the robot you see here. This is Jibo, which Breazeal’s company is marketing as “the first family robot.” Here is how Jibo was first introduced to the world in a promotional video from 2014.

2. Rights


2. Rights

Things or Instruments

Jibo

“What”

Other Persons

“Who”


Jibo is not just an instrument. As the video explains, Jibo occupies a rather unique position somewhere in between a mere thing and another member of the family. So Jibo’s moral status is somewhat ambiguous. It is not just another piece of property, but it is not quite a full person. Jibo occupies a position that is situated somewhere in between.

3 Consequences


So where does this leave us? Let me conclude with a couple of statements that have, at this particular point in time, something of an apocalyptic tone.

1) This Is a Robot Apocalypse

First, we are living through that “robot apocalypse” that had been predicted by countless science fiction stories, novels, and films. Machines have infiltrated every aspect of our lives. They may have begun by displacing workers on the factory floor, but they now actively participate in all aspects of our intellectual, social, and cultural existence. This invasion is not some future possibility coming from a distant and alien world. It is here; it is now. And resistance is futile.

2) How do we / should we respond?

Second, since there is no escaping it, what matters is how we respond to this opportunity or challenge. In other words, what is important here and now is what we decide to do in the face of these increasingly autonomous machines. And there appear to be two options:

2) How do we / should we respond? - Instrumentalism

First, we can respond as we always have. We can treat these machines as mere instruments or tools.

“My thesis is that robots should be built, marketed and considered legally as slaves, not companion peers.” – Bryson 2010

2) How do we / should we respond? - Instrumentalism

Joanna Bryson makes a case for this approach in her essay "Robots Should be Slaves." "My thesis," Bryson writes, "is that robots should be built, marketed and considered legally as slaves, not companion peers." Although this might sound harsh, her argument is persuasive precisely because it draws on and is supported by the instrumental theory of technology.

+ Human exceptionalism: Machines are tools; only human beings have rights and responsibilities. – Slavery 2.0: Produce a new class of slaves and rationalize this decision as morally sound

2) How do we / should we respond? - Instrumentalism

This decision has both advantages and disadvantages. On the positive side, it reaffirms human exceptionalism and technological instrumentalism, making it absolutely clear that it is only human beings who have social rights and responsibilities. Technologies, no matter how sophisticated, intelligent, and influential, are and will continue to be mere tools of human action, nothing more. But this approach, for all its usefulness, has a not-so-pleasant downside—it not only ignores machine autonomy but willfully and deliberately produces a new class of slaves and rationalizes this decision as morally justified. Now the problem here is not necessarily what the machines might “feel” as a result of this. That is the wrong question. The real problem has to do with the effect this has on us and with the kind of society this could create.

2) How do we / should we respond? - Machine Ethics

Second, we can decide to entertain the rights and responsibilities of machines just as we had previously done for other non-human entities, like animals or the environment. And there is both moral and legal precedent for this decision. In fact, we already live in a world populated by artificial entities that are considered legal and moral persons—the corporation.

2) How do we / should we respond? - Machine Ethics

And there has, in the last decade, been a considerable number of books and journal articles that have begun to theorize both the moral responsibilities and rights of machines.

+ Machine ethics: Extend some level of moral consideration to these socially aware entities. – Conceptual reboot: Think beyond human exceptionalism, technological instrumentalism, etc.

2) How do we / should we respond? - Machine Ethics

Once again, this proposal sounds reasonable and justified. It extends moral consideration to these other socially aware entities and recognizes that the social relationships of the future will involve not only humans but also other kinds of entities, including machines. But this decision also has a significant cost. It requires that we rethink everything we thought we knew about ourselves, technology, and ethics. It requires that we learn to think beyond human exceptionalism, technological instrumentalism, and all the other -isms that have helped us make sense of our world and our place in it.

Slavery 2.0


Obviously, these two options define what is arguably a continuum.

Machine Ethics

Slavery 2.0

Machine Ethics


This is what I have called the Machine Question. And how we debate and ultimately decide this question will have a profound effect on how we conceptualize our place in the world, who we decide to include in the community of moral subjects, and what we exclude from such consideration and why.

More Information: http://machinequestion.org [email protected]
