
On ‘Being You’

Anil Seth is Professor of Cognitive and Computational Neuroscience at the University of Sussex, where he is also Co-Director of the Sackler Centre for Consciousness Science. He is also Co-Director of the Canadian Institute for Advanced Research (CIFAR) Program on Brain, Mind, and Consciousness, and of the Leverhulme Doctoral Scholarship Programme: From Sensation and Perception to Awareness. He is Editor-in-Chief of Neuroscience of Consciousness (Oxford University Press) and a Clarivate Highly Cited Researcher (2019, 2020, 2021), a designation that recognizes the top 1% of researchers by field with sustained impact over a period of a decade.

His new book ‘Being You: A New Science of Consciousness’ is a Sunday Times Top 10 Bestseller, a New Statesman Book of the Year, a Guardian Book of the Week and a Guardian and Financial Times Science Book of the Year. His 2017 main-stage TED talk has more than 12 million views and is one of TED’s most popular science talks.

Richard Bright: Can we begin by you saying how you first became interested in consciousness and researching into it?

Anil Seth: I think I’ve always been interested in consciousness. Like many of us, it’s one of those questions that comes up when we’re kids – Who am I? What happens when I die? Where was I before I was born? Then that leads to these deeper questions, such as, how come experience happens as a result of things happening in brains? What is free will? Am I in control of my actions? So, I started thinking about these at a very early age, but it didn’t strike me at the time, that I was staking out a career in this – what’s happened is that, somehow, I’ve managed to keep on being able to be interested in it through my career and life so far.

RB: Your new book, Being You, describes itself as ‘A New Science of Consciousness’. You begin the book by discussing the various ‘isms’ of theories of consciousness before going on to discuss, what you call ‘the Real Problem of Consciousness’. What is ‘the real problem’ and can you say something about your focus on ‘level’, ‘content’ and ‘self’ as core properties of your approach?

AS: In the study of consciousness, people are often asked which of the various metaphysical ‘isms’ they subscribe to. There are various options, there’s classic Dualism, going back to Descartes, which is the idea that there’s two forms of existence in the universe, matter stuff and mind stuff. The problem then is how they interact. There’s also Panpsychism, which states that consciousness is somehow everywhere, part of everything, fundamental and ubiquitous – and Idealism, which says that mind is the nature of everything, rather than physical matter.

My preferred stance, and it’s just a stance, it’s not something that I know or could demonstrate to be true or false, is Physicalism. This is the idea that conscious experiences are either identical to or emerge from physical interactions of some sort. This is the premise upon which pretty much all of science is based and it’s been very successful in explaining other apparent mysteries of the universe. Taking this perspective, the question is ‘How do we apply it to consciousness?’ The problem is that it seems as though conscious experiences have a non-physical character. A conscious experience just doesn’t seem like the kind of thing that could ever be explained in terms of physical mechanisms. This is the intuition that motivates the Dualist perspective, that mind and matter are very different things.

The apparent intractability of explaining consciousness in physical, material terms has led to the notions of the Hard Problem and the Easy Problems of Consciousness, as described by David Chalmers. The Hard Problem is a modern version of the Cartesian divide. Everybody agrees that there is an intimate relationship between what brains do and what happens in conscious experiences, but there remains a fundamental mystery, according to Chalmers, about why and how any kind of physical processing should give rise to experience at all. As he puts it, it seems ‘objectively unreasonable’ that it should, and yet it does. So, the Hard Problem is that of explaining how and why any physical process gives rise to consciousness. The Easy Problems are not easy in the sense of being simple to solve, but they’re conceptually ‘easy’. They’re all the problems or questions about how brains work, for which we can set to one side the issue of consciousness. How brains work as complex mechanisms, how they transform sensory input into action. If you partition the problem this way, consciousness becomes very tricky, because you either ignore it completely and focus on functions and behaviours, or you are addressing what seems to be a metaphysically intractable issue.

Against this background, my stance is quite pragmatic, and builds on many similar approaches that have been around for years – such as neurophenomenology, championed in the 1990s by the Chilean neuroscientist Francisco Varela. I’m just putting a different name to it, which is the “real problem” of consciousness. The idea here is to accept that conscious experiences exist, and that they relate in lawful ways to things happening in brains and bodies. The “real problem” challenge is to explain, predict and control how these conscious experiences are manifest in terms of things happening in brains and bodies: to build explanatory bridges between what happens in brains (embodied in bodies, which are embedded in environments) and what happens in conscious experiences. The hope is that by doing this, instead of directly solving the Hard Problem, we will gradually dissolve it, it’ll become bit by bit less mysterious and eventually vanish in a puff of metaphysical smoke.

The strategy to do this is also well-worn. Instead of treating consciousness as one big scary mystery, let’s recognise that there are many different aspects of consciousness, expressed in different ways in different species, in different organisms, in different people, or even in the same person at different times. There are three broad ways to think of this. Firstly, it’s helpful to think in terms of conscious level: very broadly how conscious you are at a particular time, whether you’re asleep or under anaesthesia, or awake and aware. Secondly, conscious content: what you’re conscious of, the nature of the conscious scene that constitutes your experience at any instant. Thirdly, and this is where my book really goes, conscious self. Part of what populates consciousness at any moment, for most of us, most of the time, is the experience of being who we are, an experience of being a ‘self’. By trying to account for the distinctive properties of these kinds of consciousness in terms of mechanisms I think we stand a chance of developing a satisfying and complete science of consciousness. This is because in science, once you can explain, predict and control a phenomenon, you don’t always have to explain how and why it came to be part of the universe in the first place.

RB: You’ve talked about levels of consciousness. In Chapter 2 you talk about measuring consciousness. Can you give some examples of how you measure consciousness?

AS: In the history of science, measurement has always been important. There are some things that become fully understandable once they become measurable. Heat is a very good example of a phenomenon that scientists understood once they were able to quantify ‘hotness’ and ‘coolness’ on a scale. It was the development of effective thermometers that allowed heat to be understood in terms of – reduced to – the average speed of movement of molecules in a substance.

Now, I’m not saying that the same exact thing will be possible with consciousness. In fact, I don’t think it will be, but it helps to see how far we can go in measuring how conscious you or I are – or indeed how conscious anything is. The critical issue here is that consciousness is not the same thing as wakefulness. It’s true that they normally go together: when you fall asleep, you might lose your awareness of yourself and your surroundings, but then when you start dreaming you’re vividly conscious again, but you are still asleep. You can also be awake, but unconscious. This is less frequent, but it happens in situations like absence epilepsy seizures, in the vegetative state, and possibly other states as well. So, there’s something distinctive about conscious level that is not just tracking wakefulness, or physiological arousal.

There have been many attempts to measure consciousness in ways that separate it out from wakefulness. One of the approaches I find most exciting, and which we’ve worked on in the lab and with colleagues too, is based on measures of neural complexity. Very broadly, complexity is a sweet spot between things being completely random and things being completely ordered. Applied to consciousness, you can think of all conscious experiences as being complex, in the sense that every conscious experience is very rich, it’s composed of many different parts, but every conscious experience is also bound together in some kind of unified whole. Conscious experiences are structured but not simple. Another way to put the point is in terms of ‘Integrated Information’: every conscious experience brings together a lot of information in a very integrated way. Can we develop measures of brain dynamics that express or measure this property? What would a mechanism that exhibits high levels of integrated information look like? The strategy here is to develop quantitative measures that track integrated information – or dynamical complexity more generally – and apply them to the brain, to see if they work as measures of conscious level. For me, this is a fascinating line of research.

The main pioneers of this approach – people like Giulio Tononi and Marcello Massimini – have been working on a specific theory of consciousness, the integrated information theory of consciousness, but the approach itself can be explored without having to adopt this particular theory.  And this is what we’ve been doing in my lab and with my colleagues. We’ve been working on various measures of complexity, and finding that they do in fact usefully dissociate conscious level from wakefulness.

One interesting wrinkle that’s come out of our research is that the measures which work well in practice are rather simple, in that they don’t fully capture this interesting concept of ‘integrated information’ in its full theoretical glory. Our relatively simple measures mainly track the degree of diversity in the patterns of brain activity. These measures – generally called measures of algorithmic complexity – seem to provide a robust and reliable index of conscious level under many different manipulations. This is a nice starting point at the moment and there’s a lot more to be done here, in refining these measures and bringing them closer to a more theoretically principled concept of integrated information.
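To make the idea of tracking ‘diversity in the patterns of brain activity’ concrete, here is a toy sketch of one common algorithmic-complexity measure, Lempel-Ziv (LZ76) phrase counting, applied to a binarized signal. This is an illustration under simplifying assumptions, not the lab’s actual analysis pipeline: real studies binarize multichannel EEG or MEG recordings and normalize the counts, and the two example ‘signals’ here are invented.

```python
import random

def lz76_complexity(s: str) -> int:
    """Number of phrases in a Lempel-Ziv (1976) parsing of a string.
    Each phrase is grown until it no longer appears earlier in the string,
    so ordered signals parse into few phrases and diverse signals into many."""
    i, count, n = 0, 0, len(s)
    while i < n:
        length = 1
        # extend the current phrase while it already occurs in the preceding text
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        count += 1
        i += length
    return count

def binarize(samples) -> str:
    """Threshold a signal around its median, a common preprocessing step
    before applying LZ-style measures to neural recordings."""
    ordered = sorted(samples)
    median = ordered[len(ordered) // 2]
    return "".join("1" if x > median else "0" for x in samples)

random.seed(0)
flat = [0.0] * 256                                 # a maximally ordered 'signal'
noisy = [random.gauss(0, 1) for _ in range(256)]   # a maximally diverse one

print(lz76_complexity(binarize(flat)))    # low: very few phrases
print(lz76_complexity(binarize(noisy)))   # high: many phrases
```

Conscious wakefulness and the psychedelic state would sit toward the ‘many phrases’ end on this kind of measure, while deep anaesthesia and dreamless sleep sit toward the ‘few phrases’ end, though the real distinction is of course far subtler than this sketch suggests.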

RB: You also start the book, in the Prologue, by talking about anaesthetics, and the total loss of consciousness associated with either general anaesthesia or coma. Do we know what is happening in the switching off of consciousness in this way?

AS: If you were to come fresh to the problem of studying consciousness, anaesthesia is basically your dream intervention. Here’s a thing you can do that turns consciousness off and then allows it to return, while keeping the organism alive and stable in pretty much all other ways. For my money, anaesthesia is one of the greatest inventions of all time.

Do we understand what’s going on? Well, it depends. Anaesthesiologists of course know what they’re doing in terms of its use in operating theatres. At a neurobiological level, there’s a great deal known about what you might call the ‘local effects’ of anaesthesia. We know where in the brain anaesthetics act and which neurotransmitter pathways they work on. The challenge is to understand how anaesthetics alter global patterns of brain activity in ways that lead to the reversible loss of consciousness. There are now some very interesting hints about this. A number of studies show that connectivity between different brain regions is disrupted in distinctive ways during anaesthesia. There’s still a lot to learn about the precise ways in which connectivity is disrupted, but very broadly, different parts of the brain speak less to each other as anaesthesia kicks in. Other studies show that if you look at patterns of dynamical activity in the brain, when you lose consciousness the activity patterns tend to match the anatomical connectivity much more closely than happens in conscious states, as if activity in the brain during unconscious states becomes heavily constrained by the brain’s wiring – a more direct reflection of how the brain is wired up.

Despite these studies, precisely how anaesthetics do what they do is still a bit of an open question. There is some debate about whether it’s the breakdown of connectivity within the cortex or whether it’s something to do with the thalamus, which is this set of nuclei that sit deep within the brain – perhaps the thalamus, the thought goes, is some kind of consciousness ‘switch’. Although there’s a lot to discover about precisely how anaesthetics work, it’s still a very good thing that they do in fact work.

RB: Can we talk about the other end of the spectrum, of psychedelics and how a lot of current research into psychedelics is really about the therapeutic medicinal benefits and properties. What implications do psychedelics have to the study of consciousness and how can we understand consciousness through psychedelics?

AS: Yes, there’s a lot going on with psychedelics at the moment. They’re back in the spotlight, but perhaps it’s wise to be a little cautious too. The level of boosterism and commercialization surrounding psychedelics is a cause for concern. For sure, there is a lot of clinical therapeutic potential, I’m convinced about that, it just has to be developed carefully.

Now in terms of relevance for understanding consciousness, it’s a very similar situation – and opportunity – as we have with anaesthetics. We have a substance that intervenes in brain activity, and that changes conscious experience. Importantly, psychedelics are basically non-toxic and safe – if the context, the ‘set and setting’, are appropriate. Now, instead of abolishing conscious experience as anaesthesia does, psychedelics engender fundamental and profound alterations in conscious contents and conscious self, changes that are so pervasive that it might make sense to think of them as changes in conscious level too. And, like anaesthesia, these changes are reversible. Also like anaesthetics, we know a great deal about the local mode of action in the brain. The classic psychedelics like psilocybin, LSD, and DMT act on the serotonin 2A receptor – enhancing its activity.

The challenge, again, is how do we connect these local changes to the changes in subjective experience? How do psychedelics alter global patterns of activity in ways that explain not the loss of consciousness, but the substantively altered consciousness of psychedelia? There is quite a lot of work going on that is addressing this question, and there are some interesting discoveries. One of them comes from our lab, in collaboration with the psychedelic research group at Imperial College, London. We found that measures of brain complexity basically go in the opposite direction to what happens in sleep and anaesthesia. In sleep and anaesthesia, your brain tends to become more predictable, there’s a loss of diversity in its dynamics. By contrast, in the psychedelic state, there’s an increase in the diversity of neural dynamics. Broadly speaking, the brain becomes less predictable in its activity. This is interesting, although it shouldn’t be mischaracterized as saying this is evidence for a ‘higher’ state of consciousness. Rather, the increase in diversity that we observe makes some sense when you think about this more ‘disordered’ nature of consciousness in the psychedelic state. Another finding, this time not from our lab – but from the team of Robin Carhart-Harris – is that functional networks in the brain tend to become more ‘mixed up’ in the psychedelic state. Brain networks that may be largely anti-correlated in normal life tend to overlap more in the psychedelic state, which again makes some sense in terms of reports of things like ego dissolution in psychedelics, where people feel less of a separation between their experience of self and their experience of the world.

RB: Throughout the book you talk about the Predictive Brain and Controlled Hallucination. Could you explain what you mean by these terms?

AS: This follows nicely on from psychedelics because one of the most prominent features of the psychedelic state is that perceptual content is dramatically altered in ways that seem in some sense hallucinatory. For example, people can look at the cloudy sky and the clouds may begin to take on distinctive forms. To understand what is going on here, we have to ask what’s happening in normal perception.

In normal perception there’s the tendency to think that the world just pours itself into the mind through the transparent windows of the senses, and there’s the self within the brain, within the head somewhere, that’s receiving all this sensory input, reading it, forming a picture of the world and then using that to decide what to do next. It really seems as though the world outside actually has all these properties, like colour and shape and so on, and that these properties are objectively existing aspects of an external reality. Now, this picture, which we can call the ‘how things seem’ picture, has been repeatedly challenged by philosophers for hundreds, probably thousands of years, going back to Plato and his Allegory of the Cave, but these days – in modern neuroscience – I like to think of the brain as a prediction machine.

In the prediction machine view, perception, rather than being a reading out of the world, is always an act of construction. Sensory signals that enter the brain don’t come with labels on saying where they’re from, or what their content is, they only ever indirectly reflect things out there in the world. One way for the brain to make sense of sensory signals is to do something equivalent to what we call Bayesian Inference: the brain is trying always to figure out the most likely causes of the sensory signals that it encounters, what we can informally call a ‘best guess’. How does the brain settle on a best guess of the causes of its sensory signals? One way for this to be accomplished is for the brain to continually generate predictions about these causes and then update these predictions using the sensory signals – treating sensory information as a kind of ‘prediction error’. This is a real flip in how we typically think about things. We’ve known for a long time that there are many top-down connections in the brain, signals that go from the brain back out towards the sensory surfaces, but it’s often thought that these top-down connections merely modulate or refine the all-important bottom-up flow, a flow of information which really carries what we experience. I think it’s the other way around, that what we experience is the collection of the content of the top-down predictions and that sensory input serves primarily to calibrate the brain’s predictions, to rein them in against reality. This is the framework known, variously, as Predictive Coding or Predictive Processing. It’s a mathematically well-established way of doing best guessing, but it has this fascinating implication that what we perceive is the collective ensemble of top-down predictions, not the bottom-up sensory data. This is why I call it a Controlled Hallucination. It’s not my term, I heard it from Chris Frith, who heard it from others, and the trail goes much further back.
The word hallucination highlights that perception comes primarily from within. But the control is equally important. I’m not saying that the mind makes up reality, nor that our perceptions are arbitrary – as is sometimes misunderstood. Not at all. Our perceptual content in normal situations is very, very closely tied to a real world, but it doesn’t directly import that real world into the brain. We perceive the world not as it is, but as it is useful for us to do so.
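The Bayesian ‘best guessing’ described above can be sketched in miniature. In this toy example (the hypothesis names, probabilities, and scalar readings are all invented for illustration, not anything from the book), a discrete Bayes update shifts a prior belief toward whichever cause best explains the sensory evidence, and a Gaussian version shows the same logic rewritten as a precision-weighted prediction error:

```python
def bayes_update(prior, likelihood):
    """Posterior ∝ likelihood × prior: the revised 'best guess'
    about the hidden causes of a sensory signal."""
    unnormalised = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

def precision_weighted_update(prediction, sensory, prior_precision, sensory_precision):
    """Gaussian case: the posterior mean is the prior prediction nudged by
    the prediction error, weighted by how reliable the senses are."""
    gain = sensory_precision / (prior_precision + sensory_precision)
    prediction_error = sensory - prediction
    return prediction + gain * prediction_error

# An ambiguous signal: the brain expects a face, but the data fit 'vase' better
prior = {"face": 0.7, "vase": 0.3}
likelihood = {"face": 0.2, "vase": 0.6}
posterior = bayes_update(prior, likelihood)
print(posterior)  # the guess shifts toward 'vase', but the prior still pulls back

# Scalar version: a predicted brightness of 10.0 meets a sensory reading of 14.0
print(precision_weighted_update(10.0, 14.0, prior_precision=3.0, sensory_precision=1.0))  # → 11.0
```

In predictive processing this update runs continually and hierarchically, with the precision weighting (roughly, attention) setting how strongly prediction errors are allowed to revise the brain’s predictions.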

RB: You also you mentioned colour. There’s a famous saying by Cezanne, which you quote in the book, which is that “colour is the place where our brain and the universe meet”. It’s a wonderful phrase. What do you think he means by that?

AS: I wish I knew more about Cezanne to know why he said that, but he’s getting at something that I think is fundamentally true and which goes back even further – at least to Newtonian physics. I like colour as an example because it’s such a prominent feature of visual experience. Right now, for example, I’m looking at a tree across the canal from my hotel window in Amsterdam and the tree has green leaves. This greenness seems to be a property of the tree – a property that does not depend on me, my mind, or my brain. But we have known since Newton that greenness is not a property inherent to the tree. The tree and its leaves are really there, and they reflect light in a particular way, and this reflected light enters my eye. But this light itself is entirely colourless – it’s just electromagnetic radiation, and my eye – your eye, most eyes – is sensitive to only three bands of these wavelengths. Out of those three bands the brain generates a whole universe of colours. Greenness requires a brain and a world to exist.

The other reason I like the example of colour is because it highlights that what we experience is both less than and more than whatever the real world really is. This realisation pushes back against another tempting but false intuition, which is to think that what we perceive is some filtered-down version of a massively richer reality. In some sense, that’s true. Our perceptual experience is not sensitive to ultraviolet or infrared or X-rays or sonar: there are billions of things happening in the universe that are never reflected in our experience. So, our conscious experience is indeed less than what’s there, but in another sense it’s also more than what’s there. Out of just three wavelengths of light our brains create an infinite universe of distinct colours. You don’t even need neuroscience to get to this point, it’s just a matter of physics and philosophy.

RB: There’s a famous discussion about experiencing redness, which is a kind of a consensus, if you like, of what red is to all of us, although what we’re seeing may be different aspects of red. Another important aspect of this sense of self is memory and what is called the subjective stability of the self. Can you say something about this?

AS: This brings up another important point, indeed another kind of inversion, which has to do with how we think about the self.  It may seem as though the self is an unchanging, immaterial essence of me or you that is the recipient of perceptions, that sits inside the skull peering out through the windows of the eyes. But the self is not a thing that does the perceiving: the self is a perception too, it appears as part of conscious experience.

I think the experience of being a self can be understood in the same way as other kinds of perceptual experience – like the experience of looking at the screen in front of me or looking at the tree across the canal. The experience of ‘being a self’ is also a brain-based ‘best guess’, but of a very different kind. Again, nothing really new here, David Hume said as much nearly 300 years ago with his Bundle Theory of Self, that the self is not a single essence, it’s a collection of related experiences. For most of us, most of the time, we feel unified, we don’t notice that we’re sort of bundled together out of many different things. Of course, Psychiatry and Neurology tell a very different story. Different parts of the self can come and go. An example of part of the self that can selectively erode is memory, especially the sort of memory of events – like when I remember what I had for breakfast, or the last time I was in Amsterdam. Autobiographical memory is one part of self that can, under some circumstances, almost completely go away.

This happened in a case I talk about in the book. Clive Wearing is a musicologist, who, after a brain infection, completely lost the ability to lay down new, autobiographical memories. So, he’s always living, in the words of his wife, in a sort of ‘permanent present tense’. One aspect of his self has been substantially altered, perhaps even abolished, but – critically – this doesn’t mean that the experience of ‘what it is to be him’ is all gone. Absolutely not. One of the remarkable things about his case, and others like it, is seeing how much of the self remains, and may perhaps even be enhanced in some ways.

The self is a rich mixture of different forms of experience. You mentioned subjective stability and this is an idea that took shape for me in the writing of the book. One of the features of the experience of being a self that leads us to take it for granted is that the self seems to be this stable essence that does the perceiving – it seems to be continuous over time. The experience of being me is quite similar from day-to-day, even though my experience of the world can change a lot. I’m in Amsterdam today, two days ago, I wasn’t. That’s a big change, but it’s the same me.

But is it the same me? Am I really the same person I was yesterday? To a large extent, maybe – but not completely. Five years ago, ten years ago, the experience of being me was probably substantially different. There’s a phenomenon in perception called change blindness which has been the focus of much attention in vision science. If something changes very slowly and you’re not paying attention to it, then much of the time you don’t perceive the change at all. The lesson here is that change is not just something that happens to our perceptions, change is part of perception. Just as we infer things as having colours, we infer things as changing or not changing. Perception of change is not to be taken for granted, it’s something the brain has to figure out. If certain things change slowly, sometimes the brain makes a reasonable best guess that no change is happening and so we don’t experience change. This is typically thought about in relation to the outside world, but I think it applies probably even more so to the inner experience of being a self. The self does change slowly, but, more than this, there’s also a strong imperative for the brain to assume that it’s not changing – so that it provides a point of stability, a self-fulfilling perceptual target for regulation, both in terms of our physiology, and more generally our psychological identity. It makes sense for the brain to perceive itself as relatively stable in order to control and regulate the body, and I think that this is what underpins the experience of subjective stability. The implication, of course, is that we change more than we realise.

RB: I’ve been studying and practising Theravada Buddhism for the last 40-odd years. One of its main aspects is the idea that a permanent self is illusory and transient, as is the universe. That is fundamental to the whole teaching. Also, the American philosopher, historian, and psychologist, William James, spoke about the need to understand consciousness not only by studying behaviour and functionality, but also by introspection. The first two have been studied extensively, but not so much introspection, particularly in the West. With Eastern philosophy and religion there have been two and a half thousand years of people studying the mind through meditative practices. How much would that be of benefit to contemporary consciousness research?

AS: I think a great deal. I’ve been really heartened by the interactions between neuroscience and Eastern philosophy and practice, and I’m very glad to hear that you’re this dedicated – and much more informed than me – as a student of Buddhism. I’ve been on the margins of these interactions – primarily between neuroscience and Buddhism – which has been absolutely fascinating, for precisely some of the reasons you said. There are recognitions already in Buddhist writings about the impermanence of the self, indeed the impermanence of all things, and certainly the impermanence and ‘non-essence’ of selfhood. Buddhism also provides practical tools for recognising these things in the first person through meditation, as I’m sure you know. I meditate too, but I haven’t been doing it for 40 years. There’s a striking synergy here between the scientific and Western philosophical perspective in terms of things like Bundle Theory and self-change blindness, the collected insights from Buddhism, or certain branches of Buddhism anyway, and what any of us can achieve in our first-person lives through meditation, if we bring this practice into our daily lives.

This brings us to introspection. Introspection as a method in psychology has got a bad rap, in part for good reasons: when the method is applied loosely it is not a reliable source of evidence about the nature of subjective experience. But you don’t want to throw the baby out with the bathwater, as happened with the Behaviourist revolution in the early 20th century, which excluded the study of inner mental states almost entirely. If you want to pursue a science of consciousness you need to get good data about the nature of experience, about the ‘explanatory targets’ of the relevant psychological and neurobiological mechanisms. You can start, as we did at the beginning of this conversation, with aspects of consciousness that don’t rely on subjective reports so much: if you knock somebody out with anaesthesia you don’t really need introspection to study consciousness – or its absence. Although even here it’s still interesting as to what happens at the margins. It’s not quite right, with anaesthesia, to say that you’ve ‘gone’ and then you suddenly ‘come around’. There are very interesting aspects to the dissolution and re-emergence of consciousness in anaesthesia, especially in the coming-round phase – but these experiential aspects are very, very difficult to study because there’s a post-anaesthetic amnesia that gets in the way. It’s a bit like dreams. It’s really hard to hang on to the experience of coming around from anaesthesia for long enough to describe it.

In any case, whether in anaesthesia or more generally in psychology, introspection done right is incredibly useful. And by ‘done right’ I mean noticing what you’re paying attention to, and noticing the contents of your experience in a more systematic way – as of course happens in meditation. In Western traditions, there’s the whole approach of ‘phenomenology’, which can be thought of as a different kind of philosophy of mind. Rather than being concerned with grand claims about the relation between mental states and physical nature, it’s about examining what the nature of experience is really like. For me, the most productive approach to studying consciousness is to connect a rich phenomenology – richer descriptions of what experiences are like – to a neuroscientific understanding of the mechanisms of perception.

RB: There is actually an ex-Buddhist monk and Buddhist scholar, B Alan Wallace, who teaches and has written extensively on what he calls Contemplative Science, a discipline of first-person, subjective inquiry into the nature of the mind and its role in Nature. He is the founder of The Center for Contemplative Research in America, which provides training for dedicated contemplatives to engage in long-term meditation retreats and collaborate in rigorous research with leading scientists and philosophers from a variety of fields, including physics, neuroscience, psychology and phenomenology.

AS: Yes, this is an increasingly active area of collaboration. My own experiences in this area have been with another organisation, the Mind and Life Institute, which also brings together people across these different disciplines. A key figure for me here is Francisco Varela, who kickstarted a lot of this collaborative work and who coined the term neurophenomenology, which is very much aligned with the real problem that I was talking about earlier; indeed his approach was a major inspiration for me. Over the last few years I’ve been to a few Mind and Life retreats, where there’s always been an exciting mixture of meditation practice, workshops, scientific lectures, and lectures about Buddhism. And besides shedding light on the nature of consciousness, these workshops – and contemplative science more generally – have also emphasised the importance of all these topics for enhancing people’s well-being – something of real practical importance.

RB: I think B Alan Wallace talks about this – meditation as a system, and it’s very systematic in terms of the levels you can achieve. There’s one particular level in Theravada Buddhism, described as ‘neither perception nor non-perception’, which I’ve never quite got my head around. One last question, which is to do with the prospect of machine consciousness. I spoke with Murray Shanahan a few years ago and we were talking about the theoretical idea of copying a human brain, in all its neurological structure – that the neurons and synapses could be copied exactly, to whatever material, silicon perhaps. Would that copy be automatically conscious?

AS: It’s a good question. Actually, it’s a very timely question. The reason I’m in Amsterdam right now is to speak at the World Summit AI. One of the things that I said in my talk is that it’s a really bad idea to even try to build a conscious machine. To be successful, whether intentionally or by accident, would constitute an ethical disaster. We would have introduced the potential for new suffering in the world that might not even be recognisable as suffering. Yet a lot of people, perhaps in the tech community, seem to want to build a conscious machine, for reasons that don’t always seem very clear – perhaps because it’s considered ‘cool’ in some way. But something seeming ‘cool’ is a very bad reason to want to do anything that might have such substantial implications.

Putting that little rant aside, what are the prospects? One common idea – too common in my view – is that as you make AI smarter and smarter, perhaps at the point that it reaches a threshold that we call General AI – the kind of general intelligence characteristic of human beings – consciousness happens, that ‘the lights come on’ for that system. For me, this is a very dubious projection, because it makes two assumptions that are both weak.

One is that consciousness is intimately tied to intelligence – that as artificial systems get smarter and smarter, consciousness will just come along for the ride. But intelligence and consciousness are very different things. Consciousness is fundamentally about any kind of experience whatsoever. You don’t have to be that smart to suffer, and many non-human animals very likely have conscious experiences, even though we might not rank them as being all that intelligent – though again that could be another residue of the anthropocentrism that warps our judgement in so many situations. So that’s one thing. The other is the assumption of substrate independence: the idea that, for consciousness, it doesn’t matter what something is made out of, it only matters what it does. Is consciousness substrate independent? For me, there are no rock-solid arguments either way. Some things in the universe are substrate independent: a computer that plays Go is actually playing Go. But some things aren’t: a computer simulation of a weather system, however detailed it is, does not instantiate rain – it does not get wet. Rain is not substrate independent. So, is consciousness more like rain? Or is it more like Go? If you think consciousness is coextensive with intelligence, then you might presume it’s more like Go; but if you recognise the centrality of basic embodied experiences such as suffering or pleasure, then you might be inclined to the view that consciousness is not substrate independent. To put my cards on the table, and as I say in the book, I am inclined to think that consciousness is not substrate independent.
The reason for tending this way is partly because I think that consciousness is not the same thing as intelligence, but also – and this is where I really go further in the book – because the whole point of having a brain, and the whole point of perceiving the self and the world in the first place, is to help the body stay alive. The perceptual predictions that underpin all of our conscious experiences bootstrap themselves up from a fundamental imperative to regulate the internal physiological condition of the body.

Life is intimately tied to consciousness in this view, much more so than intelligence. And in living systems there is no sharp divide – no bright line – between what counts as substrate and what does not. Mindware and wetware are not as separated as hardware and software in present-day computers. In living systems the fundamental imperative for self-regulation goes all the way down to the level of individual cells. And in this view, it might be life – rather than information processing – that breathes fire into the equations of consciousness.

So, back to your question: what would it take to build a synthetic conscious artefact? Sure, if you copied a brain down to the level of molecules, then yes, of course, consciousness would arise – but that’s just a restatement of materialism. But if I just captured the wiring of a brain at a particular point in time and then rebuilt it in silicon, would that be a simulation of consciousness? Or would it be actually conscious? I think anyone who answers definitively one way or the other is relying on assumptions that they perhaps shouldn’t rely on. My suspicion is that it wouldn’t be conscious, but that’s not a watertight argument – it’s one based on deep intuitions about the connections between life and consciousness.

RB: To finish, what would you say are currently the most important questions, problems and challenges in understanding consciousness?

AS: That’s a great way to finish, because I can say a little bit about what we’re doing in the lab now. One thing to say here is that I get really frustrated when I hear people say things like ‘we’ve made no progress in understanding consciousness – it’s still as mysterious as ever’. This is just nonsense. There’s still a deep mystery there, sure, I’m not going to deny that, but there’s been a lot of progress. The continued existence of a deep mystery and a lot of progress are perfectly compatible. I’ve been in this business for more than 20 years now, and the amount that’s understood about consciousness is massively more than it was 20 years ago. What’s more, the whole topic of consciousness is not as fringe or as taboo as it was 20 years ago. I think the prospects are really exciting for consciousness science, not least because of the increasing numbers of smart young people getting into the game.

Back in my own research, I’ll mention three things that I’m particularly invested in at the moment. One of them is the question of what theories of consciousness can and should do. So, in our conversation, we’ve talked around this broad idea that conscious perceptions are tied to an understanding of the brain as a prediction machine. This is one theory – in the book I call it the ‘beast machine’ theory, in recognition of the importance of the predictive regulation of life processes. But there are many other – more prominent – theories out there too. There is Global Workspace Theory, the idea that consciousness is instantiated in ‘fame in the brain’ – that when a mental state is broadcast around a network in the brain, it becomes conscious. There’s Higher Order Thought Theory, which holds that consciousness has to do with higher-level mental re-representations of some form. And then there’s Integrated Information Theory, which is a very specific theory proposing that consciousness is identical to maxima of irreducible integrated information in systems. These theories are now getting to the stage where they should be pitted against each other.

One fascinating initiative in this direction has been pioneered by the Templeton World Charity Foundation in the form of ‘adversarial collaborations’. The idea is to fund research programmes to design and perform experiments with the specific goal of disambiguating between competing theories. I think this is a wonderful idea. Of course, in practice, it’s very challenging because you have to figure out where these theories meet: they often have different starting points, different assumptions, different explanatory goals, and so on. But it’s where we need to go.

Another exciting line of work is much more closely tied to the Prediction Machine view, and to the idea of building sturdier explanatory bridges between neural mechanisms and properties of phenomenology. Here, I’m keen on coupling a detailed phenomenology with some of the frontier tools of machine learning and AI. The objective is not to build a conscious machine, but to use advances in the architectures developed in machine learning to develop and test more powerful connections between mechanism and phenomenology. For instance, we’re now developing different kinds of neural network architecture that can explain the differences in phenomenology between different kinds of visual hallucinations. Psychedelic hallucinations are very different from the visual hallucinations that people experience in Parkinson’s Disease or Lewy Body Dementia. What properties of the predictive mechanisms underlying visual experience explain the differences among these different sorts of hallucination? This is the question we are trying to answer in the lab.

The final thing I’d like to mention, that I’m working on with colleagues at Sussex and elsewhere, is the topic of ‘emergence’. Emergence could be a whole new conversation in itself. It’s a concept that has been almost as confused and confusing as consciousness. What does emergence mean? Intuitively, it means that there’s a global state which is in some sense more than the sum of its parts. For example, a flock of birds might seem to be more than the sum of the birds that make it up, because it seems to have a life of its own in some way.

Emergence and consciousness relate in many interesting ways. Perhaps most obviously, conscious experiences are global and unified, and in this sense they seem to be more than the sum of their parts. So, as we’ve been doing with measures of complexity, we’re currently building mathematically rigorous measures of emergence that work in practice, that we can use to begin to characterise the global and unified character of conscious states. It’s worth noting that these measures of emergence could be broadly applicable beyond consciousness, too. One particular challenge here is how to take the observer out of the system. So, a flock of birds seems emergent. We call it emergent because, to us, there seems to be something about a flock of birds wheeling through the sky that just screams “Hey, I’m an emergent property!” It would be useful if we didn’t have to rely on what just seems emergent to us. Is there a general theory of emergence that we can apply to physical systems very broadly, that will indicate when there’s something going on for which it’s sensible to say the whole is doing more than the sum of the parts? By making progress in this area – which we are doing, with concepts and mathematical constructs such as ‘dynamical independence’ – we can make emergence a productive area of enquiry, one that moves beyond the non-scientific, spooky-sounding interpretations that rightly make people very suspicious of the whole area. With emergence, as with consciousness, there’s a very rich blend of physics and philosophy and neuroscience to be explored.
