I've just wrapped up my time at Wiscon, the world's largest feminist speculative fiction convention. Wiscon is a well-regarded gathering of science fiction and fantasy readers, writers, fans, and academics who come together to discuss speculative fiction and, in particular, how the form relates to issues of gender inequity, power relationships, and gender identity. But Wiscon is far more than that. I am continually amazed by the high quality of the panels and presentations there, and I've been attending regularly for ten years now.
Of course, I've made many friends at the con, and was delighted to unexpectedly run into some old ones there. It's always good to reconnect, and the convention, by its very nature, is a good place to pick up tips and tricks on writing, marketing, and so forth. And I love, love, love finding new visual artists in the art room.
But what I enjoy the most (outside of my friends) is the opportunity to hear panels of incredibly talented, intelligent people speak on subjects of interest to me. Two panels stood out to me this year, one on biology and engineering and one on anarchy. I won't go into detail on these (it's Memorial Day and I have steaks to grill . . . mmm, steak . . .), but I came away from the biology and engineering panel with some questions. I had planned to attend to get some grist for the science fiction novel I'm (very slowly) working on. The panel ended up being more of a discussion of AI and consciousness than what I had hoped for, but it still proved fruitful and stimulating.
My questions, in short, are these: I hear a lot of talk about machine consciousness and the point where machines will or might become self-aware. The way the panel members were talking made it sound like they thought there was one set of criteria by which we would judge whether or not a machine had become self-conscious (either spontaneously, or by human design), but no one seemed to know what those criteria were. The fact that each panelist spoke about self-consciousness as if it were a uniform thing rubbed me the wrong way. I asked: "How does personality enter into this? Doesn't each human being have their own consciousness, and isn't that consciousness manifested through personality? Why would we expect one kind of consciousness to be the same as any other?" I have to admit that the answers never really addressed the questions directly, save for one panelist's throwaway comment that "personality is an aspect of psychology, not self-awareness." That still sits wrong with me, but I didn't want to be that obnoxious person who hogs everyone's time (she was sitting just across the walkway from me), so I let it go.
Several audience members and panelists talked about what a newly-aware, self-conscious machine would want in terms of a physical housing. Would it want to live in an anthropomorphic robot body, or would it want to be housed in, say, a factory environment, so it could do more than the average human? I question the assumption that the machine would want anything in particular at all. More importantly, if the newly self-aware machine wants something, where does this want come from? How does the self-aware machine acquire desire? And since desire largely determines action, shouldn't we be really, really concerned about what is informing this machine's desire? Furthermore, if the desire came as a result of the way the machine was programmed for self-awareness, what does this say about agency? Can the machine ever be an agent for itself, given its programming?
These are my questions. Mars needs women, I need answers.