
Photo credit: https://www.niu.edu/comm/contact-us/directory/gunkel-david.shtml

Do Machines Have Rights? Ethics in the Age of Artificial Intelligence

Interview by Paul Kellogg

David J. Gunkel was a keynote speaker at “Identity, Agency, and the Digital Nexus,” an SSHRC-funded international symposium hosted by Athabasca University in April 2013. His talk challenged the audience to reframe and rethink the “human-machine” binary in 21st-century understandings of ethics and agency. Later that year we met over Skype to talk about some of the points raised in his presentation, and about other key ideas embedded in his 2012 publication, The Machine Question: Critical Perspectives on AI, Robots, and Ethics. Gunkel is the Presidential Teaching Professor of Communication Studies at Northern Illinois University. The symposium was funded by a Connection Grant from the Social Sciences and Humanities Research Council of Canada and organized by Dr. Raphael Foshay, MA-IS Program Director at Athabasca University.


Aurora: Maybe you could start by telling us a few words about yourself, where you teach and your background.

Gunkel: My background is sort of a split/dual personality in media studies and philosophy. I really couldn’t decide when I was an undergraduate, so I just did both, and I have advanced degrees in media production, critical media studies, and continental philosophy. I research and work on issues having to do with the philosophy of technology and related concerns with ethics, digital media, and new media.

Aurora: I had the pleasure of hearing your presentation when you were a keynote speaker at the Digital Symposium in April in Edmonton. It was a very engaging presentation that raised an issue I hadn’t thought about before: the question of applying ethics to the world of machines. I’ve had a chance to read your book, The Machine Question, and it was the intriguing issues that you raised in the book and at that symposium that prompted the request for this interview.

First, would it be accurate to say that a key area of your research has to do with addressing the question: “Do machines have rights?” Second, if that’s true, don’t you immediately encounter scepticism or resistance from people who see rights issues as completely tied to rights for humans, and hence, there is an immediate dismissal of the notion that machines could even be considered within the realm of ethics?

Gunkel: Let me respond with a preface and then get to your questions immediately thereafter. A lot of what we think about when we think technology is from an instrumental viewpoint. That is, technology is the tool that we use, and in the field of communication that’s always seen as a kind of medium of human action or interaction. We’re communicating right now through the computer. The computer mediates our interaction with the use of Skype in this circumstance. So for a lot of our history, dealing with technology has always been dealing with something that is seen to be neutral. Technology in itself has no moral component. It is how it’s used that really matters, and it is the human being who decides to use it for good or ill, depending on how the technology is applied or not applied in various circumstances.

What is happening right now in this new century, the 21st century, is that machines more and more are moving away from being intermediaries between human beings and taking up a position as an interactive subject. So the computer and other kinds of machines like the computer – robots, machines with Artificial Intelligence (AI) and algorithms – are no longer just instruments through which we act, but are becoming “the other” with whom we interact. If you look at statistics concerning web traffic for example, right now the majority of what transpires on the web is not human-to-human interaction: it’s machine-to-machine and machine-to-human interaction. So we’re already being pushed out by a kind of machine invasion where machines are taking over more and more of what normally would be considered the human subject position in communicative exchange and other kinds of social interaction. This has led a lot of philosophers recently to think about the machine as a moral agent. That is, is the machine culpable for things it does or doesn’t do? If the machine turns you down for credit, whose fault is that? Is it the credit agency and the person who programs that algorithm or is it the algorithm? There are all kinds of questions about agency that have recently bubbled to the surface in the last decade or so.

But you’re right; my main concern is not with agency. I mean I think agency is a very important question, and I think machine moral agency is a crucial component of dealing with the new position occupied by mechanisms in our current social environment. But I want to look at the flip side, what we call moral patiency, or what might be seen as the rights issue. If indeed we have machines now that we are considering rather seriously as moral agents, and are asking whether or not they have responsibilities to us, the flip side of these questions is: what about those machines? Would we have any responsibilities to those machines? Would those machines, conversely, have any rights in a relationship with us? So you’re exactly right. My recent work, and where I’m really situating a lot of my own research currently, is on the question of machine rights.

Having said that, your second question is very pertinent because the immediate response is: “What are you talking about? How can machines have rights?” We normally think about rights as something that belongs to a conscious or at least sentient kind of creature, and machines are, for all we know, just dumb devices that we design to do certain things. And so the question of rights immediately butts heads with a long tradition in moral philosophy, which typically only assigns rights to human beings and only recently has begun to think about the non-human animal as having any kind of rights. Some key websites with information on animal rights include:

People for the Ethical Treatment of Animals (PETA) http://www.peta.org
Mercy for Animals Canada http://www.mercyforanimals.ca
Animal Justice Canada http://www.animaljustice.ca

The way that I counter that, or address that comment, is to really tie it all to what is happening in animal rights philosophy. In animal rights we start to break open the humanist, anthropocentric kind of ethics of our tradition and ask: “You know, what if non-human animals could also be moral patients?”

If you go back to the founding thinkers of the Enlightenment, in this case Descartes, he thought animals and machines were the same. He thought in terms of the bête machine, the animal-machine: machines and animals were ostensibly the same. If we begin to open up consideration to animals, and if we follow the Enlightenment tradition, there’s the flip side of that which says we should probably start thinking about the rights of machines. So I pose it as a machine question because I don’t have right now the definitive answer to that particular query, but I do think it’s a query that we have to engage with seriously. We have to ask about the rights of machines right now, at this particular moment when machines are more and more becoming socially interactive subjects that we involve ourselves with to a greater extent than we ever have previously.

Aurora: I’m really glad you raised the issue of animal rights. I can remember 20-25 years ago when the question of rights for animals was posed, even in casual conversation around the dinner table, and people would respond with incomprehension because there was such a long tradition of animals being seen as instruments, as objects, as things that we use for food, or for our own human needs. One of the things that changed this view was the visibility of emotions in animals, especially around the hunting of seals. There is an emotive recognition because seal pups have eyes, and we can look into those eyes, and it seems as if there is an emotional connection between the person and the animal. That kind of connection seems more difficult between humans and machines. So at one level there is a parallel, but then the parallel seems to me to maybe break down a bit. How does that fit in with your discussion?

Gunkel: It’s a good question. From the outside it seems really difficult to connect those dots. But when you start to look at the way moral philosophy has developed and the way the logic has been argued, it is really irrefutable in terms of following that thinking through in a consistent way: you can’t get a result other than the rights of machines. Let me explain why. Moral philosophy has traditionally been a sort of historic development of continually opening itself up to what had been prior exclusions. So for example, during the early period of Western thought, the only ones who counted as moral subjects were other people like yourself. In Athens, those who counted would have been white males, and the excluded would have been slaves, barbarians, women, and children. It was only the male figure of the family who was considered a member of the moral community, and therefore all these other things were considered property. So for example, when Odysseus returns home after his journeys, the first thing he does is hang all his slave girls, and he can do so because they are property; they are not considered human beings. What we’ve done over time is that we’ve enlarged the focus or the scope of who is considered a human being. By the Enlightenment period, who’s considered a human being are mainly white European males of any age, but who’s excluded are Aboriginal people, Africans, and still women. That slowly evolves to include these others, so with the Civil War in the United States the inclusion of African slaves, or former slaves, into the community of moral subjects takes place. Slowly, with Mary Wollstonecraft and other feminists, women begin to be considered subjects of moral consideration, and then in the 20th century you have Peter Singer and Tom Regan arguing that animals should now be included in the community of moral subjects.

Now there’s an important shift that happens with the animal, and you mentioned it, which I think is really crucial. Initially, as we tried to expand the community of moral subjects, it was always about an ability – it was about whether or not these others had the power of reason or, in the Greek tradition, the zoon logon echon, the ability to speak or use language. It was argued for a long period of time that Aboriginals didn’t have reason and therefore they were not considered full participants in the human community, or women were not rationally thinking subjects like men and so could be excluded from the moral community. With animal rights we move away from an ability to a passivity. Derrida says the big move in animal rights thinking was when Jeremy Bentham asked not can they think, or can they reason, or can they speak, but can they suffer? Suffering is not an active component; it’s a passive one: the ability to passively suffer and to be affected. This, Derrida says, is the real shift in moral thinking in the 20th century because it moves away from the possession of an active ability to a passive capability of feeling pain or pleasure for that matter. So this, as you’ve pointed out, for some people was a very difficult move because it was a real shift in the way that we focused ethical thinking, from this ability of speech, language and logos to the passivity of suffering.

In the 21st century now, as we start to look at the question of the machine as a moral patient, we are again confronted with people throwing their arms up in the air and saying: “What are you talking about? Clearly machines don’t suffer. They don’t have anything that matters to them.” It isn’t like you can hurt your iPad. It doesn’t have any emotion. And that seems like a really good argument, except for the fact that engineers are designing machines with emotion. So we have an entire period now – the last two decades – in which engineers involved in robotics, AI and other kinds of marginally sentient mechanisms are designing machines with emotional capacities. Having emotions means the machines can talk to us and interact with us much more effectively, since we are creatures with emotions.

For example, Rod Brooks has designed robots that are afraid of the light and feel pain from light. It’s an affective response designed into the mechanism to make the machine seek out dark corners and avoid light areas. You have other individuals who are working on machines that can simulate human emotion to such an extent that it is almost indistinguishable from real human pain. There is a robotics company in Japan (Morita) that makes a pain-feeling robot designed to help train dental students. So when the dental students don’t use the drill in the right way, the robot cries out in pain, telling them that they’re hurting it, and most of the students say it is beyond simulation. It’s so close to what they recognize as pain that they imbue the object with pain. So what we encounter here are two things that are really important to point out. One is the “other minds” problem. How do I know that an object that gives evidence of pain, whether it be an animal, a human being or a machine, really is in pain? I can’t feel the pain of anything other than myself, and so all I can do is read external signs and assume that those proceed from some kind of internal affectation. Whether my dog is suffering or not is not anything I can really know for certain. I can only make some educated guess based on the way it winces from being touched in a certain way or cries out, etc. And so the problem we are encountering here is that if we design machines that give evidence of pain, it’s a very difficult bar to cross to say that’s not really pain, because I could say the same of any other creature. I could say the same of the mouse: “Well, that’s not really pain.” I was just in Maine recently, and I was told it’s o.k. to boil the lobster because they don’t really feel pain. And my question is: really? Do we know that?

Aurora: Have we ever been a lobster?

Gunkel: Exactly. How are we able to decide that yeah, indeed, we can boil a lobster alive because they don’t really feel anything? And so people working in animal studies are saying, yeah animals feel pain but it still results in an “other minds” problem. We don’t know whether anything that appears to suffer really does suffer. It is a statistical conjecture that we make, but we would have to extend the same sort of decision-making with regards to machines that are programmed to evince various pains or pleasures in their external affectation.

The other thing – and let me mention this because it’s really crucial – Dan Dennett once wrote a really nice essay called “Why You Can’t Make a Computer That Feels Pain”. And what he does is go through about 20-odd pages where he tries to design, through a thought experiment, a pain-feeling robot, and at the end of the essay he says, “You know what? We really can’t design a computer that feels pain.” But he draws that conclusion not from the fact that we can never really make a mechanism that feels pain, but from the fact that we don’t know what pain is, that this thing that is a deciding factor that we call pain is so subjective a quality that we don’t even know what it would be to design a pain-feeling robot. So he says you can’t design a computer that feels pain, not because we have an engineering problem but because we have a conceptual problem, a philosophical problem. This thing that we hang everything on, pain, is such a nebulous concept that it’s difficult to define exactly what it is or what it looks like.

Aurora: Right, so you very much see your research fitting into the classic narrative about rights and the expansion of who counts as a moral subject – expanding from white males, to people of colour, to women, to animals. You’re posing an issue for inanimate objects such as machines. Now the discussion you just raised about artificial intelligence raises another question for me. Do we have to separate artificially intelligent machines from other machines that don’t have artificial intelligence? Or are you saying that as we expand artificial intelligence we can see more ethical issues being posed in terms of our responsibilities to these new units?

Gunkel: That’s a crucial question, because usually the way that we address these problems is to say: “Well yeah, of course, at some point we may have machines that are sentient enough or conscious enough to be recognized as having rights, and that’s the AI thing.” So for example, in 2001: A Space Odyssey, Hal is clearly a well-designed AI. Whether he’s sentient or not, whether he feels pain or not, those are really deep questions in the narrative of the film, and it’s the thing that viewers of the film have to grapple with. You know, O.K., so are we harming Hal when we shut him down? He says, “My mind is going. I can feel it.” O.K., we might grant that at that point rights are a crucial issue. But when we’re looking at, I don’t know, a lawn mower or our automobile or our cell phone, clearly there’s not enough AI to consider rights being an issue. This is a very good argument because it has traction in the tradition insofar as we normally decide who has and who doesn’t have rights based on some internal capability: sentience, consciousness, the ability to feel pain, whatever the case is. So the argument goes that when machines cross that sentient barrier, then we can talk rights, but until that time, they’re just instruments and we don’t need to do anything about it. And that, as I say, is a very solid philosophical argument from a traditionalist perspective.

I would like to suggest that even before we get to the point of having Hal 9000 type AI, the rights of machines are an issue, and that’s because machines are socially interactive objects, in our world, that have an effect on us and our ability to act in the world irrespective of their intelligence. And so I advocate an approach informed by the philosophy of Emmanuel Lévinas – a Lévinasian approach – which says, even before I know the cognitive capabilities of another, I’m confronted with another who confronts me as a moral problem. How should I respond? Lévinas says before we know anything about the cognitive capabilities of the other, we have to make an ethical choice. We have to choose whether to respond to these challenges as an ethical subject or as an object. This I think pushes the question downstream. In other words, we don’t have to wait for Hal 9000 before we start to answer these questions. We have to start answering them now when the machines are smart at best, maybe even dumb, but we need to begin to think about the social standing of the machine because ethics really is about us in relationship to others in a wider social capacity and not about me internally deciding how to deal with the other entities I encounter. So Mark Coeckelbergh makes the argument that the relationship is what decides ethics, not the capabilities of the object in question. I think he’s right. I think a lot of this has to do with how we relate to the things we find around us; animals, the environment, other people and machines.

Aurora: Interesting. So let me take this then to a machine in the contemporary world around which there are huge ethical issues. The machine I’m thinking of is the drone.

Now, you can raise the question of ethics and retain the drone as an instrument. Take the use of the drone in Pakistan or wherever. I can make an argument that it’s unethical, that it violates human rights, or that it violates international law, etc. But the ethical responsibilities lie with the people using the drone or the people making the decision to use the drones. That’s one way of having ethics govern or assess our actions in the world. What you’re suggesting, as I understand it, is that it’s the relationship between us and the other that structures the ethical question, and we have to make a decision about ethics before we know the ethical capabilities of the other. From that standpoint you would deal with the issue by saying we have a responsibility to the drone to make sure that we are not forcing it to do actions that are unethical and in violation of international law and human rights. Am I getting that wrong, or are those the two different ways in which we can approach the question of ethics and drones, and how would you see it?

Gunkel: Drones are interesting precisely for the reason you mention. The current configuration of the drone is that it really is a kind of tele-presence instrument, right? We have pilots who are in the western United States flying these devices in Pakistan and Somalia and elsewhere, and they are engaged in “action at a distance” in which human beings are supposedly in charge of deciding when to pull the trigger and when not to pull the trigger. But these drones are becoming more and more autonomous. We discovered that they can fly for over 24 hours, whereas you can’t have one human pilot flying for that period of time due to fatigue. Increasing amounts of autonomy are being built into these drones. The question is: at some point are we going to have drones that are going to make decisions about target acquisition and whether or not to launch a missile without human oversight, or with very little human oversight? If you talk to people who are following drone development, they say this is obviously the next step in creating an autonomous battle drone that will require very little human supervision, much less than we have currently. But that’s a question of agency, right? I mean, who then is the agent in that circumstance? From the question of patiency having to do with rights, it seems like a really odd question to ask “does a drone have rights?” And at this point, we would say, “Nah, the drone doesn’t have any rights, it’s an instrument.” Maybe if it becomes self-aware we’ll have to talk rights, and so we can kick the can down the road and not worry about that for another 50 years or so. My response to that is no, we’ve already decided that drones have rights. We just don’t do it in terms of an international conference or consortium of philosophers. We do it through engineering practice and we do it through battlefield practice. Actions speak louder than words in these sorts of situations, and we are practically deciding rights with regards to drones, whether we know it or not.

Let me just explain how this happens. Drones, as we know, are not the most accurate of weapons. They can insulate American soldiers from battlefield casualties, but inevitably there are incredibly high levels of human collateral damage, civilians who die when the drone hits the wrong target or targets the wrong automobile, and this is an outrage. But here’s the really crazy thing. If indeed human rights mattered more than machine rights, we’d stop using these things. But the fact that we still use them means we’ve already tipped the scale in the direction of the drone. We’ve already decided there is something more valuable in the drone object than there is in the other human beings at the end of that drone’s missile strike. And so even though we haven’t convened a conference of moral philosophers to decide this question, the practices of the engineering community and the military have made it so that practically we have extended to the drone a certain value, a continued right to existence, which is akin to a kind of right, a right that trumps the right to life of the people who are its collateral damage.

Aurora: So you’re saying that the issue of rights is already present, it is simply unacknowledged?

Gunkel: Yes, and that I think is the biggest problem. Rights are already being decided. They’re being decided in laboratories. They’re being decided in assembly plants. They’re being decided on the battlefield. They’re being decided in action. And our thinking about these decisions lags behind the practical necessity of having to implement these decisions to get things done.

Aurora: Were we to say that we have a responsibility towards the drone in the same way that we have a responsibility towards another human being, or a child, or in modern discourse to a non-human animal, then it would be a violation of that responsibility to design algorithms that gave the drone an unlimited right to kill civilians. That then would structure our decisions. In other words, if there are rights and ethics associated with the drone as an “other” that we are in a relationship with, this then has implications for the way in which we structure the algorithms that govern that drone’s behaviour, or is this just my traditionalist way of trying to square the circle?

Gunkel: No, I think you’re on to it. I think part of the problem is we are already making decisions concerning rights of machines without recognizing that we have made these decisions, without recognizing the consequences of these daily, seemingly mundane decisions that we make. The word decision is really important because it means to cut, right? Decision. And a decision is always an instance of “cut” which says: this is on this side, that is on that side: these are things that count; these are things that don’t count. We are continually having to do these things whether we’re an engineer, whether we’re a battlefield commander or whatever, not really knowing that we are designing the moral future. But we are. And I think the real task of academics and philosophers is to bring to the surface the discussion that needs to happen regarding these things so that we’re not doing them blindly, so that we’re not allowing these decisions to create a future that we don’t know how we got to, or that we don’t know what happened. I often say to my students, there’s a whole lot of good critical work to be done here because we have to start to get out in front of this question. We can’t lag anymore, we have to be there alongside the engineers and the military commanders and everybody else involved in robotics and AI and algorithms, and start to ask what world is this creating for us? What social obligation is this engineering for our future? What ethical, moral dilemmas are occurring because of certain decisions that have been made whether we know it or not?

Aurora: And the instrumentalization that is so often associated with science and engineering does create a situation where you can have a suspension of ethics based upon a dismissal of the machine as a non-sentient being. We now have a long history where the creations of science and engineering proceed to perform absolutely appalling things in the world.

Gunkel: Correct.

Aurora: What your argument is doing is bringing the ethics question back in an extremely forceful way. It’s not just a question of what this action of mine, as a human being with an instrument, will do down the road. It’s a question about the machine with which you are interacting and what ethical responsibilities we have to it. That structures how we proceed now, not in ten years when the machine is built and it’s being used. It structures it all the way down the line.

Gunkel: Correct, which means then that it’s not a matter of postponing the ethical question until the time that it is used. It means the ethical question begins at the very moment when we begin designing the system.

Aurora: And it’s embedded in the entire process. It’s a relationship.

I teach social movements, and whenever I approach the question of rights it is, for me, invariably deeply linked with large-scale social movements of human beings engaging in attempts to break out of old paradigms and old oppressions. So it’s not a coincidence that Thomas Paine is writing in the context of the American Revolution. Nor is it insignificant that the first European legislature to indicate that Africans are humans is in France, when delegations from Haiti arrive in the context of the great upheaval around Toussaint L’Ouverture. The Universal Declaration of Human Rights occurs in the context of the massive decolonization movement at the end of World War II, and between World War I and World War II. Is there a parallel between this narrative of social movements and the discussion that you’re engaging in in terms of machines having rights, or the ethical questions associated with robots, artificial intelligence, and machines?

Gunkel: I think there is. I think the social movement, if we were to identify it, would be posthumanism: the effort, since the 19th century, beginning with Nietzsche and continuing with Heidegger, to think through the prejudice of humanism. In other words, as we’ve said before, the human has been a moving target. It has always been a way of excluding others. Whether they were African peoples, whether they were Aboriginal peoples, whether they were women, whether they were animals, there is a way in which the concept of the human has been a way for one group in power to disempower others. Think for example of the way the Nazis were able to exterminate six million Jews by defining them as nonhuman. There’s a way in which the concept of the human has been an incredibly devastating tool for excluding others. Heidegger knew this when he wrote the Letter on Humanism and said, I do not align my philosophy with humanism because humanism has a whole lot of problems.

And so, there’s this development in the late 20th century and early 21st century now called posthumanism, embodied by people like Donna Haraway, and then Katherine Hayles and Cary Wolfe, who are all trying to think outside the restrictions of anthropocentric privilege and human exceptionalism. Animal rights philosophy is part of this. Environmental philosophy is part of this, and I think the machine question is part and parcel of the same. It’s about trying to dissolve the kind of human-centric view of the universe that is being broken open by what we can say is a Copernican Revolution, right? We are thinking about entities and their position in the world.

Aurora: I think there are people in the ecology movement who would echo many of the things you just said in terms of thinking beyond a human-centric world. In terms of the rights of machines, there is a parallel discourse emerging at the moment in terms of the rights of the planet. At the big meeting that happened at Cochabamba, Bolivia, in 2010, one part of its declaration was the declaration of the rights of Mother Earth. In other words, we can’t just think of the earth as an instrument or as an object that facilitates human development; it has rights as well. Would you agree that’s a parallel with the type of work you’re engaged in?

Gunkel: Yeah. I can say in two ways it really is, because my initial philosophical formation came with my encounter with a guy named Jim Cheney, who is one of the leading thinkers in environmental ethics. Through his work, and through the work of Thomas Birch and others in that sort of postmodern environmental ethics tradition, I was exposed to a great deal of this kind of thinking early on in my career. In fact I use a great deal of these environmental philosophical positions in my own work, because in them I think we find a thinking of otherness that is no longer tied to either human-centrism or biocentrism. I mean, there’s a way in which rights expand beyond a very limited, restricted way of looking at the other as just another organism. Now it’s soils, it’s waters, and it’s the earth itself that become objects needing some kind of response and care.

Aurora: Fascinating. I know for myself the encounter with ecology, talking about the rights of Mother Earth, and the discussion that you raised in terms of the rights of machines force me to try to “think otherwise”, because they run counter to a whole lot of the training that we encounter in the modern education system. In part, as I understand it, this challenge to think otherwise is about the social construction of the “We”. Who or what is included in the “We”, and who or what is excluded? How widely do you think we should cast the circle of inclusion when it comes to the machine question? Is there a methodology that can help us decide how to draw this circle of inclusion as we attempt to think otherwise?

Gunkel: Let me just say, with regard to this word otherwise, that it really is meant to evoke two things simultaneously. Thinking otherwise would mean thinking differently, thinking outside the box, outside the sort of established ways of thinking that we’ve grown up with, the legacy systems if you want to call them that. But thinking otherwise also gestures in the direction of Lévinas and the issue of an exposure to the other which makes thinking possible and to which thinking should respond. So I want other and otherness to be heard in that dual sense: it is not only different, but it also is in response to the exposure to the other.

In terms of the circle of inclusion and how widely it should be drawn, I would say that my effort is not necessarily to play by those rules. In other words, ethics is always characterized as drawing a circle which includes some and excludes others, and so Derrida says in Paper Machine that the big issue is the difference between the who and the what. Who is on the inside and what is on the outside? And as the circle gets drawn larger, more and more things become a who, and fewer things are a what. Luciano Floridi recently positioned himself with regard to something called information ethics, which he argues is the most universal and least exclusionary ethical theory ever developed, in which everything that is in existence is inside the circle and the only thing outside is nothing.

But notice that all of these gestures inevitably have to decide between inclusion and exclusion, insiders and outsiders. And so thinking otherwise, in my mind, is grappling with that dialectic and saying, you know what, there’s got to be a way to think outside that box. In other words, my effort has been to say, not greater inclusion, but questioning the very gesture that opposes inclusion and exclusion in the first place. This is a very Lévinasian point, because Lévinas doesn’t try to create a more inclusive ethics, but rather tries to create or design a different way of thinking ethics that doesn’t rely on inclusion and exclusion, that thinks beyond that sort of binary opposition.

If there is a method for doing this I would say it is Derrida’s deconstruction, because deconstruction is the way in which we can oppose or intervene in binary oppositions that already program us to behave in certain ways. If our ethical programming is designed in such a way that we think about things as inclusion and exclusion, we need a deconstruction of the inclusion/exclusion conceptual opposition to develop alternatives that no longer fit within that categorization, that no longer fall into one versus the other.

I think Lévinasian ethics provides us with a very good model because, in Lévinas’ sense, anything can take on face. Anything can come to be the face of the other, but there is no prior decision about what is and what is not included. In fact, it’s a moving target. And Lévinas says, yeah, that’s fine; it doesn’t have to be fully decided. At different times, something will take on face and something may not, but what is important isn’t whether something has face or not; what is important is how we respond to the evidence in front of us when that occurs. So it’s a very different kind of ethics.

I would say if I’m open to any charge, it’s the charge of relativism. But I think relativism is a really good thing because I think relativism allows us to have a very mobile way of doing ethics, something that isn’t locked down the way that Kant locks down his ethics, where everything is prescribed ahead of time and can’t respond to new and unique and novel kinds of eruptions of possibility. So I think we need to look at relativism as a very positive thing that says, you know what, we are responsible not only for behaving ethically, but for designing ethics, for deciding what is ethics and doing it again and again and again in very concrete circumstances that we encounter and not being able to rely on simple pieties, simple formulas or codes of ethics which inevitably fail us in the long run.

Aurora: Fantastic. I think that’s an excellent concluding statement, and it’s an excellent way to draw to a close a discussion that poses even more questions. Your discussion of relativism especially has gotten me thinking along a whole new line of inquiry that I think takes us into new territory.

Gunkel: Let me just say one thing that may help this. You know relativism in the human sciences is considered a bad thing, but in the hard sciences, physics in particular, relativism is actually a really good thing. For Einsteinian physics, relativism says there’s no fixed point from which to observe the world and make decisions about everything. Everything is in motion and I think the moral universe is also relative in that sense, that everything is in motion and that everything is decided from positions of power, from positions of privilege, from positions occupied by a certain subject at a certain time imbued with certain subjectivity. And we have to see this as not a negative thing, but we have to start to look at it as a positive opportunity.

Aurora: And that brings to a close the interview with David Gunkel. It’s been fantastic and I’ve really enjoyed myself. Thanks a lot, David.


Bibliography of material referred to in the text

Birch, Thomas H. 1993. “Moral Considerability and Universal Consideration.” Environmental Ethics 15 (4): 313–332.

Brooks, Rodney. 2003. Flesh and Machines: How Robots Will Change Us. New York: Knopf Doubleday Publishing Group.

Cheney, Jim. 1989. “Postmodern Environmental Ethics: Ethics of Bioregional Narrative.” Environmental Ethics 11 (2): 117–134.

Coeckelbergh, Mark. 2012. Growing Moral Relations: Critique of Moral Status Ascription. New York: Palgrave Macmillan.

Cottingham, John. 1992. “Cartesian Dualism: Theology, Metaphysics, and Science.” In The Cambridge Companion to Descartes, 236–257. New York: Cambridge University Press.

Dennett, Daniel C. 1978. “Why You Can’t Make a Computer That Feels Pain.” Synthese 38 (3): 415–456.

Dennett, Daniel C. 1981. Brainstorms: Philosophical Essays on Mind and Psychology. Cambridge: MIT Press.

Derrida, Jacques. 2005. Paper Machine. Stanford: Stanford University Press.

Derrida, Jacques. 2013. “The Animal That Therefore I Am (More to Follow).” In Signature Derrida, 380–435. Chicago: University of Chicago Press.

Derrida, Jacques. 2008. The Animal That Therefore I Am. Translated by David Wills. New York: Fordham University Press.

Floridi, Luciano. 2010. Information: A Very Short Introduction. Oxford, UK: Oxford University Press.

Gunkel, David J. 2012. The Machine Question: Critical Perspectives on AI, Robots, and Ethics. Cambridge, Massachusetts: MIT Press.

Haraway, Donna Jeanne. 2008. When Species Meet. Minneapolis: University of Minnesota Press.

Hayles, N. Katherine. 2008. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.

Heidegger, Martin. 2008. “Letter on Humanism.” In Basic Writings: From Being and Time (1927) to The Task of Thinking (1964), edited by David Farrell Krell, 213–266. New York: Harper Perennial Modern Thought.

James, C. L. R. 2001. The Black Jacobins: Toussaint L’Ouverture and the San Domingo Revolution. London: Penguin Books.

Kubrick, Stanley. 1968. 2001: A Space Odyssey.

Lévinas, Emmanuel. 1987. Time and the Other and Additional Essays. Pittsburgh: Duquesne University Press.

Lévinas, Emmanuel. 1969. Totality and Infinity: An Essay on Exteriority. Translated by Alphonso Lingis. Pittsburgh: Duquesne University Press.

Morsink, Johannes. 2011. The Universal Declaration of Human Rights: Origins, Drafting, and Intent. Philadelphia: University of Pennsylvania Press.

Paine, Thomas. 2011. Rights of Man. Peterborough, Ont.: Broadview Press.

Regan, Tom, and Peter Singer. 1989. Animal Rights and Human Obligations. 2nd ed. Englewood Cliffs: Prentice Hall.

Wolfe, Cary. 2010. What Is Posthumanism? Minneapolis: University of Minnesota Press.

Wollstonecraft, Mary. 1999. A Vindication of the Rights of Men; A Vindication of the Rights of Woman; An Historical and Moral View of the French Revolution. New York: Oxford University Press.

World People’s Conference on Climate Change and the Rights of Mother Earth. 2010. “Universal Declaration of the Rights of Mother Earth.” Climate & Capitalism (April 27). http://climateandcapitalism.com/2010/04/27/universal-declaration-of-the-rights-of-mother-earth/


David Gunkel Publications

Books

The Machine Question: Critical Perspectives on AI, Robots, and Ethics. Cambridge, Massachusetts: MIT Press, 2012.

Thinking Otherwise: Philosophy, Communication, Technology. West Lafayette, Ind.: Purdue University Press, 2007.

Hacking Cyberspace. Boulder, Colo.: Westview Press, 2001.

Edited Collection

Gournelos, Ted and David J. Gunkel, ed. Transgression 2.0: Media, Culture, and the Politics of a Digital Age. New York, NY: Continuum, 2012.

Book Chapters

“Audible Transgression: Art and Aesthetics after the Mashup.” In Transgression 2.0: Media, Culture, and the Politics of a Digital Age, edited by Ted Gournelos and David J. Gunkel, 42–56. New York, NY: Continuum, 2012.

“Source Material Everywhere [[G.]Lit/ch Remix]: A Conversation with Mark Amerika.” In Transgression 2.0: Media, Culture, and the Politics of a Digital Age, edited by Ted Gournelos and David J. Gunkel, 57–68. New York, NY: Continuum, 2012.

“To Tell the Truth: The Internet and Emergent Epistemological Challenges in Social Research.” In The Handbook of Emergent Technologies in Social Research, edited by Sharlene Hesse-Biber, 47–64. New York: Oxford University Press, 2011.

“Media.” In The Baudrillard Dictionary, edited by Richard G. Smith, 121–124. Edinburgh: Edinburgh University Press, 2010.

Links to his many other writings (articles, essays, and book chapters), including publications that appeared after this article, can be found on his website:
https://www.niu.edu/comm/contact-us/directory/gunkel-david.shtml

Interview conducted July 30, 2013, Athabasca University, Alberta, Canada

Dr. Paul Kellogg is Associate Professor in the Graduate Program (Master of Arts – Integrated Studies), Faculty of Humanities and Social Sciences, Athabasca University. His publications include Escape from the Staple Trap (forthcoming 2014, University of Toronto Press) and articles in various scholarly journals, including New Political Science, Canadian Journal of Political Science, International Journal of Žižek Studies, and Political and Military Sociology: An Annual Review.