While pottering about with a chapbook I’m writing at the moment, I’ve been watching a YouTuber play through SOMA. Well, the Handsome Sidekick has been watching it and I’ve been kind of looking up now and then to see what’s going on. For those of you who might want to play the game, I’ll put the rest of this entry (and subsequent spoilers) behind a cut.
The premise behind the game is that the player, a young man called Simon, has been in a car accident in which his girlfriend died. Simon has ongoing brain trauma and is seeing a doctor for treatment. On this particular morning, he goes to the hospital to get a brain scan and ends up in some alternative dimension (it seems), in a kind of spaceship, underwater.
If that’s not weird enough, he finds other beings on the ship: robots which appear to believe that they’re human. Simon wrestles with the idea of whether these robots should be treated with the same respect and manners as one would treat a human. This is put to the test when he has to unplug some of them to use the power to operate parts of the ship. The robots beg him not to, and say all the things that a human might say if one were depriving them of something necessary for life. What a moral quandary!
But this isn’t what interested me about the game (although moral quandaries are surely interesting). It turns out that the robots’ ‘personalities’ are actually scans of people who were living on the ship. I’m a little hazy about the reasons for the scans (something about the world ending and wanting to send the scans out into space to preserve memories and ideas of humanity–more about that later) as I wasn’t always paying attention, but I did start thinking about memories and identity and consciousness, and how intricately these are related.
The robots in the game are obviously not people. They’re machines which have been programmed to think they are people, and this is one of the arguments against the idea of strong AI–that it is hard to say a program ‘thinks’ when all it is doing is going through the logical processes that were provided to it when it was programmed. Yet these robots ‘believe’ they are real, and so the player questions whether they can, in good conscience, do something which causes a robot pain. It brought to mind the replicant Rachael in Blade Runner, talking to Deckard about her memories. She believes them to be true, and the idea that they’re not is upsetting.
But how do we prove our own memories? We know that they exist, and sometimes we’re the only ones who can verify them, yet memories are an essential part of how we connect our past selves to our present selves. They are real to us, but we have no way of showing others what they are or how they affect us. Our handle on consciousness is so slippery–of course, if we could define it, many hundreds of philosophers would be out of a job! How, then, do we decide that the robots in SOMA are not conscious? They have memories, and they are aware of themselves and their ‘bodies’. They believe in their own existence and, understandably, want to continue to exist. In that case, are we not doing them a disservice if we refuse them that right?
The issues raised in SOMA are not new, of course. We’ve been wondering what makes us human for centuries, if not millennia. We know that if someone is injured and subsequently loses a certain level of brain function, he or she may no longer have the same identity as before, despite looking just like the person we knew. The creators of SOMA reverse this problem: if the brain function appears to be there, but the body is not, is the identity the same? Furthermore, does saving someone’s brain scan–that is, all their memories, their personality, their thoughts–mean we’re saving the person? How much of our identity is just our memories or beliefs or desires, and how much is our physical selves? It leads us to wonder: if we were to send such scans of our brains into space in order to save what was left of humanity, could we really say that this is what humans are?
The game has a few more twists and turns which I can’t really adequately explain, given my lack of proper attention, and I’m certainly not going to play it because some of it is scary and also I’m not an FPS kind of gamer, but it did make me wonder about how we all represent our identities electronically these days. We represent ourselves in pictures and words on a screen, forming an identity and a memory–forming a kind of collective memory, which in some ways is an interesting development in how memory defines us. Now that our memories are recorded so publicly and others are intricately involved in that process, does this make them more reliable? Does it make consciousness more collective? And are we becoming more Borg than individual, given that we might be more likely to pre-edit or tailor our words and pictures before putting them online?
I love that this game has given rise to a discussion of some great ethical and philosophical issues, and I love that people who are playing it are talking about them. It’s philosophy in action, and it’s an argument that games have a place in entertainment, culture and art. And it’s also an argument for great story and clever writing. It would be good, in fact, to see more of this kind of thing–not only to silence critics who claim that games simply siphon time and are mindless fluff, but also because it encourages thought and dialogue and, to my delight, blows people’s minds.