What is it to be human?

Last night I read this utterly depressing article about organ transplantation. I used to be a huge fan of organ donation: I’ve opted in, of course, and have the little dot on my driver’s license. It just seems so … obvious. Until you read that article. The big takeaway for me in that piece is how much uncertainty there is in life, quite literally. That we even have the notion of a “beating heart cadaver” illustrates just how uncertain the whole venture really is.

But there is also an interesting comparison here to the Turing test, which of course tests for “intelligence” in a machine (or, more accurately, a machine’s ability to imitate human intelligence). Apparently in 1968, thirteen men at Harvard Medical School decided on the criteria by which human life or death can be measured: unreceptivity and unresponsivity, lack of spontaneous breathing or movement, lack of reflexes, and a flat EEG. An evaluation against these criteria, by another human, is in a sense a test for human intelligence. If an individual fails to demonstrate intelligence against these measures, but the beating heart still indicates life, he is deemed brain-dead (a living human, but only partially so) and can be evaluated as a potential organ donor without the restrictions placed on a living organ donor.

So I found it particularly interesting to read Nicklas’ thoughts on Cleverbot today. Apparently Cleverbot is partially human, which Nicklas observes is an odd conclusion for the Turing test to arrive at. He is right that examples throughout human history show we often cast the “other” as some partial form of our own humanity, and that in dehumanizing the “other” we calm our fear somewhat. But we also think of people as partially human in the context of organ donation. In the case of organ donation, though, we create this mental construct of “partial humanity” to achieve what is, in theory, a higher end: presumably, saving other lives that will be more fully lived than one that is only “partially” lived.

All of this rests against the backdrop of a society that is embracing robots as its own. One of the more interesting books I read last winter was Alone Together, in which author Sherry Turkle explores the ways humans turn to robots to fill needs that are otherwise not being met by human companions. The most fascinating example of human-robot connection (which I do not think was in Turkle’s book, but something I think I heard from Ryan Calo) is the soldier who dove in front of gunfire to protect his robot weapon.

We are unquestionably capable of emotional connection to non-humans; the family pet is the most obvious example. Researchers are demonstrating that we are also, to a degree, capable of connecting to robots, some of which may be deemed “partially human.” At some point in the future, the question of robot rights will become a subject of public discourse, and I imagine we will then revisit whether the Turing test adequately measures “humanity” for the purposes of conferring certain individual rights. Perhaps there will even be a similar set of “donation” criteria created for robot-part donations.

It interests me how the Turing test compares with our own criteria for determining brain death. On the surface, the Turing test seems the higher standard to apply, and (as a human, in 2012) that seems appropriate. I wonder, though, whether in my lifetime we will be having conversations about whether that’s a double standard, or whether a beloved robot companion deserves the same rights as a brain-dead patient before being harvested for parts.
