
Saving Our Memory for the Future: Memory in the Digital Age

The human mind has become a site of contestation once again: major power struggles are playing themselves out, and scientists and scholars are staking their claims. Our contemporary period is characterised by a multiplicity of revolutions that together are radically reshaping the context in which we think about what it means to be a human being. Globalisation, overpopulation, climate change, an ageing population, ongoing scientific breakthroughs, and the dominance of new technologies and social media in our lives are just a few of the developments that are having a major impact on our understanding of ourselves, the world, other people and nature. The current explosion of interest in, and concern about, cognition and, more specifically, memory is striking because memory is vital to forming our understanding of who we are and of how our culture and world work.

Do we, in the digital era, have the right to be forgotten after we die? Do external memory-storage facilities and tools such as Google, Facebook and GPS make our brains lazy? What happens to memory when climate change threatens to make mankind extinct and our collective future shrinks before our very eyes? How dependent is our thinking about memory on cultural factors? Does the increasingly chaotic, unpredictable world we live in undermine memory’s capacity to help us imagine future narratives for mankind? These were some of the questions that The Memory Network addressed at four events organised in partnership with Cheltenham Literature Festival 2013. Taken together, the events spelled out a clear message: the future of memory is at stake.

‘Like a bad smell in cyber space’: Memory in the Digital Age

The event ‘Re-wired: Memory in the Digital Age’ tackled questions about how, in an increasingly digital context, we are remembered after we die. We send out thousands of emails, upload photos of our loved ones onto Facebook, and broadcast a host of other communications via social media sites such as Twitter. This raises various questions. What happens to our digital selves after we die? How is our individual identity (re-)shaped in this new context? How is the relationship between man and machine changing? Identity and the mind are digitally extended and/or offloaded, and our experience is more and more a shared one, with the individual only partly controlling the process. Yet through archiving sites such as Facebook we also (appear to) have more power in shaping future memories of ourselves. And how is our conception of human nature changing when the digital world in which we are all immersed turns us all into cyborgs and makes us partly virtual? Not only has the traditional division between our private lives and public personas been blurred, but we are now restructured through the algorithmic formulas that both create and analyse our world. As Gilles Deleuze presciently said in 1990: ‘We no longer find ourselves dealing with the mass/individual pair. Individuals have become “dividuals,” and masses, samples, data, markets, and “banks”.’ This raises questions about the ethics of this context, about ownership and power. Who controls, and who has access to, our private, public and digital lives?

Socio-digital interaction specialist Wendy Moncur (Dundee University) gave a talk about Digital Inheritance and identified some of the contradictions facing us. Our physical, social and digital lives are all becoming increasingly complex, and increasingly interwoven with one another. But Moncur warned: ‘There is no off-switch for digital death; there is no mechanism by which we can switch ourselves off at the end of our digital life, and we can linger on, a bit like a bad smell in cyber space.’ Moncur looked at the different forms of value in our digital lives: people with an eBay account have ‘financial value’ invested in their online selves, posting pictures of loved ones carries ‘emotional value’, and writing blogs has ‘intellectual value’, and all these different types of value can carry on even after our physical lives end. This is the ‘post-self’, which consists of memories, mementos, memorials, reputations and other kinds of memory, and which often changes. ‘Think, for instance, of Jimmy Savile, who died some years ago, but his reputation is continuing to change and people’s memories of him are shifting. So the post-self is a dynamic being. And we may now be involved in our post-self: there are tools that allow us to send out posthumous emails, and we can be involved in our own funerals, thus shaping how the post-self is constructed. The digital self thus outlives our physical life-span.’ Literature that has responded to this context includes Lottie Moggach’s Kiss Me First (2013) and James Smythe’s The Machine (2013).

Science fiction novelist Adam Roberts confided that, although he had held out against social media for a long time, he is now addicted to Twitter, the ‘crack cocaine of the Internet’. His love for the digital now goes so deep that he reads novels on the Kindle app on his iPhone, and he wondered how this is perceived by his children and what kind of role model he sets. Roberts also identified a new, paradoxical form of human behaviour: ‘Social media are memory. Everything you write is engraved in virtual stone forever. Oral communication is different: once you speak, your words vanish into the ether, yet when you communicate face to face with people you behave more politely. The rudest communication happens on social media, which is weird because every word you say is recorded. I think this happens because, for instance, Twitter gives the illusion of fluency and there is no face-to-face contact.’

Roberts told us about the inspiration for his novel New Model Army (2010), which projects into the future the idea that an army could be dehierarchised and democratised by technology that allows all soldiers to be involved in strategic decision-making, their collective knowledge operating via wikis. Roberts got the idea partly from Apple founder Steve Jobs: ‘Jobs was fond of saying: “The entire accumulated body of knowledge generated by mankind from the beginning of time to the year 2000: we now generate that amount of data every year.” It’s not strictly speaking “knowledge”, of course, but all you need to turn that data into wisdom are algorithms and search engines that are sophisticated enough. That new situation levels and democratises human interaction as a whole, and that is a good thing: rather than the tyranny of the previous hierarchical structures, democracy is the best way of reaching decisions.’ Roberts’ novel thus explores through sf the implication that consciousness and memory are becoming less and less an isolated act contained within the individual human mind, and more and more a shared, collective activity and process. The digital revolution is, for Roberts, an age that empowers people, moving us ever closer to an opportunity to tap into, and use, the wisdom of the crowd.

One of the panellists, Stacey Pitsilides (Goldsmiths), a doctoral student working on ‘Digital Death’, quoted Professor José van Dijck (University of Amsterdam) on why people are flocking to the digital with such intensity: ‘the anxiety of forgetting is implied in the desire not to be forgotten […] the most important beneficiaries of this software product are your descendants. Immortality through software cultivation appears to be an attractive prospect in which to invest.’ Van Dijck seems to imply an (unconscious) link between our ability to remember and our desire not to be forgotten, but perhaps the irony is that the tools that grant us immortality (sites such as Facebook) might actually make us forget, because the act of storing a photo of ourselves on Facebook implicitly allows us to forget that content as well.

The discussion about digital afterlives also invoked a question about religion: with the waning of religion and the comforting idea of the afterlife, can the success of social media be explained by the digital afterlife they give us? Moncur did not agree with this assumption: ‘We’re seeing people use the digital to memorialise. You can create a memorial on Facebook. The 9/11 memorial in the States also has a digital form, so that you can visit it remotely. But we’re also seeing digital memorials with a religious element to them. It’s not about religion disappearing; it perhaps reflects how we’re behaving offline: some people are religious, some people are not, and some people are embracing new forms of spirituality. The digital is just another form of expressing how we are as humans. We have many different forms for remembering people.’ Adam Roberts added to this: ‘Atheism is a big deal at the minute, but it seems to me that digital media are the apotheosis of particular religious traditions. If you look at Christianity, Judaism and Islam, there are two aspects that unite them: they are structured around books, and religion creates a community of people congregating. Social media are exactly those two things. It’s about writing, but also about communicating on a much larger, global scale.’

Click here for a video of the panellists discussing their work and its relation to digital inheritance.

 


Andrew’s Brain

Nick Lavery

‘The great problem confounding neuroscience is how the brain becomes the mind. How that 3lb bowling ball makes you feel like a human being.’

This problem sets the tone of Andrew’s Brain, restated and reformulated throughout by the novel’s cognitive-neuroscientist narrator as he tries to come to terms with himself both as a moral agent and as a function of the titular object. It is also a problem gaining increasing influence over a more general strain of soul-searching in American culture. E.L. Doctorow’s novel fits into a literary subset long established as a distinct genre, the so-called ‘neuro-novel’, while in 2013 President Obama announced a new, decade-long scientific effort to examine the workings of the human brain. In one sense, this initiative seems to validate the hopes of Doctorow’s protagonist, for whom understanding human behaviour in such terms comes to be seen as an escape route from his entanglement with the previous administration. While the novel’s setting in the recent past might support a reading in terms of a post-Bush triumphalism, its form is structured around a constant questioning of such optimism in all its forms. Andrew himself is acutely aware that the possibility of scientific advances solving the great problem of neuroscience is blocked by another question: ‘How can I think about my brain, when it’s my brain doing the thinking?’

Commenting on the contemporary cultural ubiquity of pseudo-neuroscientific models of human behaviour in Neuromania, Paolo Legrenzi and Carlo Umiltà situate this trend as the endpoint of a cultural pendulum swing away from the anti-psychiatry movement and the theories of the ‘socially constructed’ self of the 1960s, a supposed turn back to materiality and rational humanism that is ‘less blustery and rhetorical perhaps, but no less insidious because of this’. Andrew’s frequent digressions on the subject of neuroscience and its implications for his life-story form part of an antagonistic dialogue with his fellow narrator, a psychoanalyst whose interruptions frustrate his attempts to understand himself through a coherent narrative, beginning with his initial attempts to cast the novel as the third-person recounting of the story of ‘his friend Andrew’. Andrew is trapped: by an apparent propensity to bring disaster to those around him, by guilt, and by his own consciousness, his subjective perspective barring him from any stable conclusions about the extent of his culpability in a series of disasters spiralling outward from the death of his child, to the death of his second wife on 9/11, to his involvement with the Bush administration’s response.

The novel builds on this tension by pairing a circular interrogation between Andrew and his analyst, which often takes the form of a shuffling of incongruous and surreal episodes from his life, with an escalating sense of delayed consequences, leading up to a closing revelation of the true extent of Andrew’s entrapment. Andrew’s faith in neuroscience cannot help but be seen as a form of defence, an evasion of the responsibilities of selfhood perhaps latent in the contemporary notion of the self as illusion. The psychoanalytic perspective through which such a reading can be formulated, however, also frustrates the possibility of agency and responsibility in its own way. Doctorow describes his own political outlook as ‘biblical’ – ‘you shouldn’t murder, you shouldn’t steal, that sort of thing’ – and the novel’s treatment of post-9/11 American culture seems preoccupied with the various ways in which an obsession with understanding the self covers the absence of any committed moral stance; Andrew’s descriptions of his old college roommate, in particular, refine this dynamic into a form of bleak humour: ‘His war was not going well. He’d invaded the wrong country. You can’t imagine the anxiety that produces’. For all of Andrew’s faults, his position within the administration and his eventual fate reinforce the notion that his concern with the brain and its role in behaviour is not just an evasion of responsibility, but a way of understanding the reality beyond the linguistic frameworks imposed by his analyst and by culture at large. As Doctorow acknowledges, the potential mapping of the relationship between the brain and consciousness could be a ‘glorious intellectual achievement’, but it would also represent ‘the end of the mythic world that we’ve lived in since the bronze age, with all these stories we’ve told ourselves about what human life is’. The prospect of understanding human behaviour and consciousness may itself turn out to be another one of those stories: a function both of our need to take responsibility for our actions, and of the realisation that any explanation originating in human thought cannot fully embody reality.