Dreaming (or not) of Electric Sheep.

While pottering about with a chapbook I’m writing at the moment, I’ve been watching a YouTuber play through SOMA. Well, the Handsome Sidekick has been watching it and I’ve been kind of looking up now and then to see what’s going on. For those of you who might want to play the game, I’ll put the rest of this entry (and subsequent spoilers) behind a cut.

The premise behind the game is that the player character, a young man called Simon, has been in a car accident in which his girlfriend died. Simon has ongoing brain trauma and is seeing a doctor for treatment. On this particular morning, he goes to the hospital to get a brain scan and ends up in what seems to be some alternative dimension, in a kind of spaceship, underwater.

If that’s not weird enough, he finds other beings on the ship: robots which appear to believe that they’re human. Simon wrestles with the idea of whether these robots should be treated with the same respect and manners as one would treat a human. This is put to the test when he has to unplug some of them to use the power to operate parts of the ship. The robots beg him not to, and say all the things that a human might say if deprived of something necessary for life. What a moral quandary!

But this isn’t what interested me about the game (although moral quandaries are surely interesting). It turns out that the robots’ ‘personalities’ are actually scans of people who were living on the ship. I’m a little hazy about the reasons for the scans (something about the world ending and wanting to send the scans out into space to preserve memories and ideas of humanity–more about that later) as I wasn’t always paying attention, but I did start thinking about memories and identity and consciousness, and how intricately these are related.

The robots in the game are obviously not people. They’re machines which have been programmed to think they are people, and this is one of the arguments against the idea of strong AI–that it is hard to have a program which ‘thinks’ because all it is doing is simply going through the logical processes which were provided to it when it was programmed. Yet these robots ‘believe’ they are real, and so the player questions whether they can, in good conscience, do something which causes the robots pain. It brought to mind the replicant Rachael, in Blade Runner, talking to Deckard about her memories. She believes them to be true, and the idea that they’re not is upsetting.

But how do we prove our own memories? We know that they exist, and sometimes, we’re the only ones who can verify them, yet memories are an essential part of how we connect our past-selves to our present-selves. They are real to us, but we have no way of showing others what they are or how they affect us. Our handle on consciousness is so slippery–of course, if we could define it, many hundreds of philosophers would be out of a job! How, then, do we decide that the robots in SOMA are not conscious? They have memories, they are aware of themselves and their ‘bodies’. They believe in their own existence, and understandably, want to continue to exist. In that way, are we not doing them a disservice if we refuse them that right?

The issues raised in SOMA are not new, of course. We’ve been wondering what makes us human for centuries, if not millennia. We know that if someone is injured and subsequently loses a certain level of brain function, he or she may no longer seem to be the same person as before, despite looking just like the person we knew. The creators of SOMA reverse this problem: if the brain function appears to be there, but the body is not, is the identity the same? Furthermore, does saving someone’s brain scan–that is, all their memories, their personality, their thoughts–mean we’re saving the person? How much of our identity is just our memories or beliefs or desires, and how much is our physical selves? It leads us to wonder: if we were to send such scans of our brains into space in order to save what was left of humanity, could we really say that this is what humans are?

The game has a few more twists and turns which I can’t really adequately explain, given my lack of proper attention, and I’m certainly not going to play it because some of it is scary and also I’m not an FPS kind of gamer, but it did make me wonder about how we all represent our identities electronically, these days. We represent ourselves in pictures and words on a screen, forming an identity and a memory–forming a kind of collective memory, which in some ways is an interesting development in how memory defines us. Now that they’re recorded so publicly and others are intricately involved in that process, does this make our memories more reliable? Does it make consciousness more collective? And are we becoming more Borg than individual, given that we might be more likely to pre-edit or tailor our words and pictures before putting them online?

I love that this game has given rise to a discussion of some great ethical and philosophical issues, and I love that people who are playing it are talking about them. It’s philosophy in action, and it’s an argument that games have a place in entertainment, culture and art. And it’s also an argument for great story and clever writing. It would be good, in fact, to see more of this kind of thing–not only to silence critics who claim that games simply siphon time and are mindless fluff, but also because it encourages thought and dialogue and, to my delight, blows people’s minds.


2 thoughts on “Dreaming (or not) of Electric Sheep.”

  1. If AI can have memories, then the question is, what remains to separate humanity from AI? Is it free will? I’ve heard philosophers debate whether we even have free will. There’s this idea that everything that happens to us conditions us to make exactly the choices we make, so everything is predetermined. And if that’s the case, are we so very different from AI? What gives humanity worth that AI doesn’t have? A Theist might say, well, we created robots, but God created humans, and obviously the creation of God is greater than the creation of humans. But as a Theist, I’m not sure that really holds up. A Theist might also say that humans have souls and robots don’t, but if you’re not a Platonist, that doesn’t hold up, either.

    Questions like these are difficult for me.

    • Questions like this are difficult for me, too! There are certainly many arguments against the idea of free will (determinism, with or without the input of an omnipotent god), and that’s another element of what makes something conscious. And then one of the other issues about artificial intelligence is also what rights they have, if they are able to have memories and make choices… so much to consider!
