Are computers intelligent?

Bruce Sterling with Alan Turing bot at the Turing Centenary Symposium

At Reality Augmented Blog, I recently posted a Storify of my live tweets from Bruce Sterling’s talk at the Turing Centenary Symposium at the University of Texas. Bruce talked about Turing’s investigation into “whether or not it is possible for machinery to show intelligent behaviour” and the Turing test, which is meant to gauge how convincingly a computer can at least seem intelligent by human standards. To consider this question, you might think you’d have to define thinking (cognition, consciousness, etc.), but instead of taking on that difficult task, Turing changed the question from “Do machines think?” to “Can machines do what we (as thinking entities) can do?” That’s really a different question, less metaphysical, more about comparing manifestations of thinking than comparing processes of thinking.

Bruce noted in his talk an aspect of the Turing test that doesn’t get much attention: it was originally about gender. In his paper “Computing Machinery and Intelligence,” Turing described the game as “played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman.” He goes on to say

We now ask the question, “What will happen when a machine takes the part of A in this game?” Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, “Can machines think?”

So as Bruce notes, the actual Turing test is for a machine with a woman’s sensibility. The gist of his talk reminded me of conversations I’ve had with Sandy Stone, who is transgender and has spent years studying identity hacking online and off. I brought up the question of a man deciding to appear online as a woman, and how real that transformation can be. If you’re a man and decide to be a woman (or vice versa), you can’t quite make the authentic switch, because gender entails years of cultural and behavioral conditioning. If you’ve been contextualized as male, you don’t become female by changing your name, your voice, your dress, or even your body.

In the context of the conversations with Sandy, the subtext always seemed to be about liberation from the trappings of gender – you don’t have to be “a man” or “a woman,” you can just be you. But this has relevance, not just in terms of gender switching, but with any attempt at transformation. And it has implications for the discussion of machine intelligence. Machines can’t “become human” or be like humans, because they have no experience as humans, and you can’t program the embodied human experience. You also can’t program “consciousness” – puny humans aren’t even clear what consciousness is, and we know that things like “consciousness” and “awareness” and “thinking” can be quite subjective and hard to quantify. So when we talk about “artificial intelligence” or “machine intelligence,” that word “intelligence” can be misleading. It’s not about making a machine like a human, it’s about seeing how well a machine can simulate the human. The Turing test is really about how clever we are at programming a bot that does heuristics well and can “learn” to hold its own in a human conversation. It’s interesting to bring gender into it – to simulate the human, a bot would be one or the other.
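
Just to make the shape of that game concrete, here is a minimal sketch in Python of an imitation-game session: an interrogator puts the same questions to two hidden respondents, one human and one machine, and then has to guess which is which. This isn’t from Bruce’s talk or from Turing’s paper; the EchoBot class, the imitation_game function, and their trivial canned heuristics are hypothetical placeholders, and a bot that could actually hold its own in conversation would need far better heuristics or real learning.

```python
import random

class EchoBot:
    """A deliberately naive stand-in for the machine player (A).
    It answers with a few canned, vaguely human-sounding heuristics."""

    def reply(self, question: str) -> str:
        q = question.lower()
        if "?" not in question:
            return "That's interesting. Tell me more."
        if q.startswith("are you"):
            return "Of course I am. Why do you ask?"
        if "favorite" in q or "favourite" in q:
            return "Hard to say; it depends on my mood."
        return "I'm not sure. What would you say?"

def imitation_game(questions, human_reply, rounds=5):
    """Run one session of the imitation game.

    questions: an iterator yielding the interrogator's questions
    human_reply: a function returning the human player's answer
    Returns the two transcripts under shuffled labels, so the
    interrogator can't tell machine from human by position alone.
    """
    bot = EchoBot()
    transcripts = {"machine": [], "human": []}
    for _ in range(rounds):
        question = next(questions)
        transcripts["machine"].append((question, bot.reply(question)))
        transcripts["human"].append((question, human_reply(question)))
    labels = ["X", "Y"]
    random.shuffle(labels)
    return {labels[0]: transcripts["machine"], labels[1]: transcripts["human"]}
```

The test, in Turing’s reframing, is simply how often the interrogator guesses wrong after reading transcripts like these.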

Scene from Metropolis: Rotwang and his robot in female form, his lost-love simulation

Bruce: “Why not ask ‘can a computational system be a woman?'” This made me think of Rotwang’s remaking of Hel in Metropolis, and how she’s repurposed as a simulation of Maria… a robot designed to simulate the female form. Is she a mechano-electronic woman? Or just a bag o’ bytes? More compelling, I think, is the concept of the cyborg, originally described as a biological entity that’s manufactured and has some machine components. More recently, we’ve come to think of cyborgs as “ordinary” humans augmented by digital or other technology – e.g., anyone with a smartphone or a computer could be considered a cyborg. My colleague Amber Case writes about “cyborg anthropology,” acknowledging that synergies within human-machine interaction are transformative and require new methods and fields in the study of humanity. I think cyborgization is more interesting and more real than the Kurzweil sense of “artificial intelligence” (machines “smarter” than humans that become self-aware). HAL 9000 is a mythical beast; computers may be capable of processes that seem intelligent, but, back to Bruce’s point, computers are not anything like humans.

Turing himself said “the idea behind digital computers may be explained by saying that these machines are intended to carry out any operations which could be done by a human computer.” On the other hand, Gurdjieff said “man such as we know him, is a machine.” A very complicated machine, he noted elsewhere.

My point in all this is that humans are not machines and machines won’t become human. We’re confused on that point, likely because of a larger metaphysical confusion, a confusion about who and what we are, our place in the universe, and the nature of the various human analogs, similar but different processes, that we see in the egosystem. (That’s not a misspelling…)

Bruce Sterling: “I fear posterity will condemn us for being too clever, for failing to speak about the obvious in an immediate lucid way. We need a new aesthetic with a strong metaphysics. How we get there, I don’t know.”

David Levine

I literally grew up with David Levine’s caricatures; it never occurred to me that he was flesh and blood and would die someday. That day has come, and like many, I’m mourning the death of the artist, who produced who knows how many hundreds of caricatures for The New York Review of Books and The New Yorker. As a tribute, the former has published John Updike’s note about the artist, written 30 years ago:

“Besides offering us the delight of recognition, his drawings comfort us, in an exacerbated and potentially desperate age, with the sense of a watching presence, an eye informed by an intelligence that has not panicked, a comic art ready to encapsulate the latest apparitions of publicity as well as those historical devils who haunt our unease. Levine is one of America’s assets. In a confusing time, he bears witness. In a shoddy time, he does good work.”

The Times has a slideshow of some of Levine’s color caricatures here.

Mac Tonnies

Mac Tonnies would definitely have been part of FringeWare. Check out his bio (though I would disagree with the second sentence).

Consciousness is a potential technology; we are exquisite machines, nothing less than sentient patterns. As such, there’s no convincing technical reason we can’t eventually upload ourselves into matrices of our design and choosing. It’s likely the phenomenon we casually call “intelligence” will cease to be strictly biological as we begin to merge with our machines more meaningfully and intimately. (Philip K. Dick once wrote that “living and nonliving things are exchanging properties.” I suspect that in a few hundred years, barring disaster, separating the animate from the inanimate will probably be an exercise in futility.) Ultimately, we have two options: self-mutate by venturing off-planet in minds and bodies of our own design, or succumb to extinction.

Mac Tonnies died last month. We’ve lost one uniquely weird and compelling fringe researcher.