Consider for a moment what happens when you are driving a car. Perhaps you are driving down a high street you know. The shops and houses pass by on either side. You see someone you think you know – but it isn't. It's someone who just looks like him or her.
But now you think about your friend. Perhaps they are having some problems at the moment – maybe you will call them. Then you wonder about switching your radio on – but then you think of something you heard earlier which was rather boring – and you decide not to.
Now the traffic lights ahead turn to red. All the traffic slows to a halt. The brake lights come up on the car in front, and you gently brake too. Not an emergency stop of course. No need for that; just a gentle slowing to a halt.
What is remarkable about all this is not simply that your brain thought of several different things. Your eyes scanned the road and thought they saw a friend – though it wasn't her. All the information coming in was ordered: most of it was thrown away because it wasn't important, but your friend – that attracted your interest.
What your mind was doing was sifting a great deal of information. Most of it was thrown away, but some of it was important. The important bit was brought to your conscious attention, while all the things which didn't matter remained in the background.
Your mind then went into a little soliloquy, wondering how your friend was and whether you should phone them. But this soliloquy, this conscious wondering or reasoning, did not stop you driving well. The car remained under your control, with you making minor adjustments the whole time without consciously thinking about it. Only when the lights turned red and the car in front began to brake did your driving come centre stage.
So much has happened here. There were conscious thoughts, a stream of consciousness about your friend. There were also automatic unconscious actions, like changing gear or checking the mirror.
But these things did not just happen in the mental space. You interacted with the world physically as your foot pressed harder or less hard on the accelerator, as you extended your arm to change gear or perhaps switch on the radio.
These things were physical actions by you in the world we all share. They show that while we have a conscious stream of thoughts, we also have bodies which interact with the real world. It might be said that our bodies are like a bridge between the world we all share and can see, and our minds which are private and unique.
Indeed, it is hard to stress enough how significant our bodies are. Perhaps while we were driving just now we felt momentary pangs of hunger, or pain, or boredom. Sometimes these sensations are strong enough to overpower whatever might be going on in our heads.
We have a sense of place, of not just existing in the world, but being at a particular point in it, from which all other places seem closer or further away. We also have a sense of self, of being whoever we are, partially as a result of our memories.
But our minds are not full of unnecessary clutter. We did not remember the make, model or number plate of the car in front – it wasn't relevant. We did not remember or even notice the number of pedestrians on the pavements – unless it was exceptionally busy or quiet. In fact, if you think about it, huge amounts of information were processed by your mind in real time as you drove along, but only a very small part of it – the woman who looked like your friend – was brought to your attention.
This is because our minds are skilled at sifting information – in real time if necessary. They can decide what is relevant and what isn't, and disregard the irrelevant. We have no idea how the brain achieves this ranking – and yet it does.
An artificial system of intelligence would have to do something similar if it were to be intelligent in the way we are. It would need a ranking system like ours: one which processed incoming information, kept the relatively few things that mattered in the foreground, and left everything else in the background. But how would it achieve this? At present we have no way of building such a system.
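The kind of ranking described above can at least be caricatured in code. The sketch below is a deliberately minimal illustration – the names `Percept` and `attend`, and the numeric salience scores, are invented for the example. It filters a stream of incoming items and surfaces only the most salient few; the hard, unsolved part is of course where those salience scores would come from in the first place.

```python
import heapq
from dataclasses import dataclass


@dataclass
class Percept:
    """One item in the incoming stream of information (hypothetical)."""
    description: str
    salience: float  # 0.0 = ignorable background, 1.0 = demands attention


def attend(percepts, threshold=0.8, limit=1):
    """Return only the few percepts salient enough for 'conscious' attention.

    Everything below the threshold stays in the background and is discarded.
    """
    salient = [p for p in percepts if p.salience >= threshold]
    return heapq.nlargest(limit, salient, key=lambda p: p.salience)


# A toy version of the drive down the high street:
stream = [
    Percept("car ahead, steady speed", 0.2),
    Percept("shop window display", 0.1),
    Percept("pedestrian resembling a friend", 0.9),
    Percept("brake lights ahead", 0.95),
]

for p in attend(stream, limit=2):
    print(p.description)
```

The filtering itself is trivial; the point of the sketch is that assigning the salience numbers – deciding, in real time, that the look-alike pedestrian matters and the shop window does not – is exactly the part we do not know how to build.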
Indeed, it is awareness that is the key to all this. The DeepMind system pioneered by Demis Hassabis taught itself, through self-play, to master the ancient board game Go. It then proceeded to beat the world's best human players at Go in a series of matches. The machine learned for itself how to play a complicated game of strategy – a remarkable achievement, and a giant step forward for machine learning. DeepMind's systems have subsequently mastered numerous two-dimensional computer games, defeating any human competitor and discovering powerful strategies for each game.
But stupendous though these achievements have been – a machine teaching itself a game and then playing it better than any human – there is still much to do.
In particular, Demis Hassabis's DeepMind computers have no awareness of their own achievements. They cannot understand the significance of what they have done. We were amazed, gobsmacked, that a machine could learn what to do, how to play the game, what worked and what didn't. But the machine itself is not impressed with its own work. It simply solves the problem at the level of a data stream of fantastic complexity. It does not see the big picture.
Also, remarkable though the achievements of DeepMind are, they are confined to board games and two-dimensional computer games. Consider how much more sophisticated the real world actually is.
In the real world things are loaded with significance, for all sorts of reasons. When you drive along the street, past all those shops and houses and pedestrians, you are seeing something familiar to you. You see people. Some of the people look like you and you feel a slight connection to them; some don't, and seem less significant. You see an old man spitting on the pavement, and you feel disgust. Then you see a small child throwing a tantrum at their mother, and you smile, because you used to do exactly the same thing.
You see shops. Some of the shops are interesting to you and some are not – though of course as you are driving you only see them for an instant. And all the time your mind is ranking, sorting and sieving information, and only bringing a small part of it to your conscious attention.
How then will a computer ever simulate all this? This incredibly rich experience each and every one of us has, every day?
The answer is it probably won't. Whatever form of awareness an artificial intelligence eventually achieves, it will probably be very different.
For a start, we are visual thinkers. Our whole experience of, and interaction with, the world is based around sight – around our vision. We see a familiar world, a world of shapes and colours which makes sense to us. From an early age, when our parents said: 'Look, there's a bus!', we gradually learned to interpret the world, to understand it and take it for granted.
Mathematicians like to work at a whiteboard, sketching out their theorems. Computer programmers write down their code – which might end up in a textbook for others to read.
But a computer's experience of the world might be profoundly different. A computer would not start with a colourful set of children's books, of buses and cars and butterflies; a computer would not need to write down an algorithm on a piece of paper; all that information would be electronic – a stream of data.
Also, we do not live as disembodied minds. We have bodies. We feel pain. We get hungry, feel sleepy and get tired. We have a sense of touch, of being not just in the world generally but at a particular point in it, and it is from there that we make our judgments.
But a computer would not have a sense of place. It would not interact with the world through the medium of a physical body as we do. It would be disembodied, at least compared to us.
It would probably not have that rich stream of experience from the colourful world each of us enjoys. It probably wouldn't soliloquise in the way we do, attaching significance to particular objects; but that does not mean it could not be intelligent.
It is more likely that a machine intelligence would find other ways of doing things, to get to the same result.
A machine intelligence might not appreciate a glorious summer's day; it might not feel the warm sun on its skin or derive pleasure from a swim in the crystal waters of the Aegean Sea. But it might notice that humans do like these things, and come up with innovative solutions to clean up our oceans and improve the environment.
It might devise a method of tsunami prevention, or optimise cargo movements to reduce pollution; it might deploy its vast intelligence to assist the human race, despite not being part of it.
A skilled artificial intelligence could watch the behaviour of humans and find ways to make their lot more bearable. It might co-ordinate the traffic control system of an entire city to optimise vehicle movements. It could streamline logistics chains, do the number-crunching in all sorts of areas of scientific research, and generally lift the condition of the human race.
Yet all this would be done from the outside, without having human experience itself. It could watch what humans did, mimic human behaviours, model them and draw conclusions – and all this without necessarily having full awareness, at least not as we have it.
So the question is – will a machine intelligence, an artificial intelligence – ever have full awareness of what it is doing? In the sense in which we have it? Probably not.
An artificial intelligence will probably never be able to appreciate the beauty of the Uffizi gallery in Florence – although it might notice that humans like going there, and assign the gallery a high significance value in one of its algorithms.
An artificial intelligence is unlikely ever to want to play a game of football – although it might design a series of football-playing robots of varying levels of skill, for those who have no one to play with.
An artificial intelligence might be inquisitive and research-driven, and gradually figure out everything there is to know – but it probably still wouldn't feel pleased with itself. A robot might even be programmed to celebrate after scoring a goal – but it wouldn't feel what a human feels when they score.
But that wouldn't matter. The human nature of our experiences may not be particularly significant. Provided research continues, provided the secrets of nature go on being unlocked, it may not matter whether it is a research scientist or an artificial intelligence doing the work. The particularly human nature of our inner lives, our awareness of the world, may not be essential to further progress in science and technology. And don't forget: it was AlphaGo, part of Demis Hassabis's DeepMind project, which made a daring and pivotal move to win a game of Go – a move no human had ever thought of.