Geoffrey Hinton

Podcast

Nobel Prize Conversations

“When we remember, what we’re doing is just making up a story that sounds plausible to us. That’s what memories are.”

Join podcast host Adam Smith as he speaks to physicist Geoffrey Hinton, often called the godfather of AI. They discuss Hinton’s childhood memories and how his family legacy of successful scientists put pressure on Hinton to follow in their footsteps. Throughout the conversation it is clear that Hinton has always had a fascination with understanding how the human brain works.  

Together with Smith, Hinton discusses the development of AI, how humans can best work with it, as well as his fears of how the technology will continue to develop. Will our world be taken over by AI? Find out in this podcast conversation with the 2024 physics laureate.

This conversation was published on 15 May, 2025. Podcast host Adam Smith is joined by Karin Svensson.

Below you find a transcript of the podcast interview. The transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors.

Geoffrey Hinton with his Nobel Prize diploma. © Nobel Prize Outreach. Photo: Nanaka Adachi

Geoffrey Hinton: If you look at my academic history at Cambridge, I switched subjects every year and it didn’t make any sense at all. When you look back on it, it was all very useful, but at the time it was just a crazy random walk.

Adam Smith: I can relate to that. Isn’t it marvelous how indecision in what might be called one’s career path can, in the end, with the passage of time, look like acquiring a sensible background? And in Geoffrey Hinton’s case, boy did he flourish once he did eventually find the question that he was really interested in addressing, and it led him to become one of the founding fathers of artificial intelligence. So join me now as we listen to him recount how he explored some of those blind avenues and then eventually found the light at the end of the tunnel, which for him has, over recent years, turned a bit darker.

MUSIC

Karin Svensson: This is Nobel Prize Conversations and our guest is Geoffrey Hinton, recipient of the 2024 Nobel Prize in Physics. He was awarded for foundational discoveries and inventions that enable machine learning with artificial neural networks. He shared the prize with John Hopfield. Your host is Adam Smith, Chief Scientific Officer at Nobel Prize Outreach. This podcast was produced in cooperation with Fundación Ramón Areces. Geoffrey Hinton is co-founder and chief scientific advisor of the Vector Institute in Toronto. He talks to Adam about how to best cool your coffee and why we say bite down instead of up. But ultimately they reveal how these mundane musings capture the core of the question, what makes us human. But first, Geoffrey Hinton talks about growing up with a father who favoured bugs over his children.

MUSIC

Smith: You come from a very scientific background – there are a lot of scientists among your forebears, and your father was a great entomologist. Did that influence you to become a scientist? Did you feel that it was in the blood?

Hinton: Yes. I felt a lot of pressure to become a scientist. I was expected to become a scientist, but I always enjoyed science. So that was okay. When I was little, I wanted to be an entomologist because my father was an entomologist.

Smith: He shared an inordinate fondness for beetles.

Hinton: Oh, you read that, right?

Smith: Yes, it was beetles he specialised in.

Hinton: And lots of other insects too. But beetles were his favourite. Among beetles it was the family Elmidae that was his real favourite.

Smith: Did you have to compete with beetles for his favourite?

Hinton: My sister often said that he would’ve loved us more if we had six legs. He had an office at the university that was very high. The ceiling must have been about 16 feet high and the walls were all lined with shelves and he had a stepladder so he could get at them. The walls were covered with hundreds of boxes and each box would have reprints of papers. This was before Xerox machines, right? It would have reprints of papers from journals and on the box there would be the name of the genus of insect that the reprints were about, or maybe the family. There were hundreds of these boxes. When he died, we sold them to the University of Florida for 10,000 pounds. Among those boxes there was one box, a bit smaller, next to the door on a lower shelf, and on that box it said ‘not insects’. That was the rest of his life. That had things like letters from his children.

Smith: At least you were differentiated from the insects. You were something special.

Hinton: Yes. Slightly smaller.

Smith: But if you were expected to be a scientist, did that necessarily mean that you wanted to follow expectations or did you want to break away and be something different?

Hinton: Not until I was late in my teens did I want to break away and do something different. When I first went to Cambridge, I was doing science and it took a lot of time to do it properly. You went to lectures in the morning, did experiments in the afternoon, and then in the evening I would rewrite the notes I’d made during the lectures to make them neat, so that I went over it once more, and then it would be time to go to bed. It was like 12-hour days I was doing. After a month I just got totally fed up with it and I left Cambridge. That was the first time I made a decision on my own.

Smith: Golly, that must have gone down badly.

Hinton: It went down quite badly, yes. I went back a year later.

Smith: What did you do with the time that you had between first start and second start?

Hinton: I went to London and got several different menial jobs in order to pay the rent on an apartment. I read a lot of depressing novels by people like Dostoevsky.

Smith: Sounds like independence.

Hinton: Yes, I decided I wanted to be an architect. Fortunately, before I actually went back the next fall, I spent the summer working in an architect’s office and discovered what architects actually do, which isn’t as romantic as what you think they do, sketching out airy buildings. What they actually do is decide: are you going to have cheap door handles or cheap flooring? Because there’s no way you’re going to meet the budget without doing one of those two. After a day of doing architecture, I went and talked to my tutor and switched back to doing science. But I switched back to doing physics, chemistry and physiology. The first time I’d been there I did physics, chemistry and crystalline state, which was a new subject they were teaching because of the success of x-ray crystallography in getting the structure of DNA. X-ray crystallography was a big thing then. When I went back I didn’t do that. I instead did physiology and that was the first time I’d done any biology and I found it fascinating.

Smith: Had you found your path once you made the change to physiology?

Hinton: No. At the end of the physiology course there was a section, which I was really looking forward to, about the central nervous system, and I thought they would tell us how it worked. I was very interested in how it worked and instead they told us how the axons of neurons conduct action potentials, how a wave of depolarisation goes down the axon. But that didn’t exactly tell us how it worked and I thought they were gonna tell us how it worked. So I got very fed up and I switched to doing philosophy.

Smith: Which is definitely studying how it works.

Hinton: I thought I’d learn more about the mind. I didn’t. I did a year of philosophy and basically developed antibodies to philosophy.

Smith: That seems to be a bit of a theme here.

Hinton: If you look at my academic history at Cambridge, I switched subjects every year and it didn’t make any sense at all. When you look back on it, it was all very useful, but at the time it was just a crazy random walk.

Smith: Often in life that’s true. You can make sense of it in reverse, but yes. Well presumably you did quite well in all these things so they hung on to you and said, okay, fine.

Hinton: No. In my first year I did well in physics but I knew I couldn’t carry on in physics because my math wasn’t good enough. Advanced mathematics I found very difficult. I did quite well in physiology and okay in chemistry. Then when I did philosophy I did just okay and then I switched to psychology and I didn’t really like that at all.

Smith: Again, were you looking for a way to get into the mind?

Hinton: Yes. I thought psychologists would tell us how people worked and instead it was rats in mazes. There was some stuff that retrospectively is interesting, signal detection theory, how you distinguish very faint signals from noise, and there’s some interesting mathematics there. But it wasn’t exactly what I wanted to know. I wanted to know how people worked.

MUSIC

Svensson: So Adam, did Geoffrey Hinton find out how people work?

Smith: I don’t think anybody’s found out how people work yet. I think the brain is just too complicated for us at the moment.

Svensson: He talks about two things in the conversation that I need to understand better and that’s neural networks and large language models. Can you enlighten me a little bit?

Smith: Well, yes, I’ll give it a go. The neural network is a step towards understanding how people work in that it arranges processes in a way that seeks to mimic what happens in the brain. In the brain you have neurons arranged broadly in layers and hugely interconnected, and in a neural network you have processes arranged in layers: there’s an input layer where information comes in, there’s an output layer where information goes out, and in between there are these hidden layers. In the brain our understanding is that neurons reinforce their connections with each other when they send electrical signals to each other, so standardly people say neurons that fire together wire together. The same is happening in a neural network, where nodes whose inputs lead to the right answers further downstream are reinforced. You program a neural network with a set of rules, and it turns out that they are so-called adaptive: they can develop new rules for themselves, allowing them to get close to the right answer.
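For readers who want to see the layered picture Adam describes in concrete form, here is a minimal sketch in Python. It is not code from the episode or from Hinton’s own work; it is simply a toy network with an input layer, one hidden layer and an output layer, trained on the XOR problem by ordinary gradient descent, which plays the role of the “reinforcing connections” Adam mentions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a problem a network with no hidden layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Input layer (2 units) -> hidden layer (4 units) -> output layer (1 unit).
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: information flows in through the input layer,
    # through the hidden layer, and out of the output layer.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: connections that contributed to wrong answers are
    # weakened and those that contributed to right answers are strengthened
    # (here via plain gradient descent on the squared error).
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_hid
    b1 -= 0.5 * d_hid.sum(axis=0, keepdims=True)

print(np.round(output, 2))  # usually close to [[0], [1], [1], [0]] after training
```

The hidden layer is what lets the network form its own internal rules: with no hidden units, no setting of the weights can solve XOR.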

Svensson: Building a synthetic brain then basically.

Smith: Yes, exactly. That’s what he was seeking to do. Very few people thought he was gonna succeed, but he did. That led him to become, as everyone likes to say, the godfather of AI. A large language model broadly describes the way that these infrastructures are used to take in huge, complex data sets. The model is then able to recognise and interpret that data set and give you an output which makes sense. As the name large language model implies, that refers initially to producing text, as we all know from something like ChatGPT. But it can also work with visual data or any sort of complex data. In the case of one of the Nobel Prizes awarded last October, of course, that data was protein folding data which AlphaFold 2 was able to interpret.

Svensson: What specifically did Geoffrey Hinton get his Nobel Prize for?

Smith: For the development of that initial neural network, which introduced concepts from statistical mechanics that allowed it to recognise characteristic elements in sets of data.
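For the mathematically curious, those “concepts from statistical mechanics” can be made a little more concrete. In the Boltzmann machine that Hinton developed with Terry Sejnowski, every configuration of the binary units is assigned an energy, and the network samples configurations according to the Boltzmann distribution, so the low-energy states it settles into correspond to characteristic patterns learned from the data. In standard textbook notation (not taken from the episode), with unit states $s_i$, weights $w_{ij}$, biases $b_i$ and temperature $T$:

$$
E(\mathbf{s}) = -\sum_{i<j} w_{ij}\, s_i s_j - \sum_i b_i s_i,
\qquad
P(\mathbf{s}) = \frac{e^{-E(\mathbf{s})/T}}{\sum_{\mathbf{s}'} e^{-E(\mathbf{s}')/T}}.
$$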

Svensson: Apart from sort of creating a digital brain, Geoffrey Hinton himself also seems to have a very interesting mind.

Smith: Doesn’t switch off much. Yes, he seems to be constantly on the lookout. Before we started recording, somehow we started talking about religion and I said let’s not go there. And he said, why not? So we went there, we chatted about it a little before the recording started and I think it was just indicative of the fact that not much slips by.

Svensson: What was his take on religion then?

Smith: Yes, he is not keen on it. He thinks it’s all fake news and that those who believe are deluded. But then again, there are others who very much believe in an evidence-based view of the world who have space for religion as well.

Svensson: He doesn’t seem to mind to disagree with people.

Smith: He certainly doesn’t. I think again, it’s the characteristic of so many laureates. Not that they necessarily want to hold contrary views, but that they are not fazed by having people criticise their view of things. This conversation about religion is well worth having, but I don’t think it’s a conversation that can be had by scientists alone.

Svensson: Indeed.

Smith: Anyway. That mind is constantly operating, even in the dentist’s chair it turns out – let’s listen.

MUSIC

Hinton: At least in Canada and Britain, I don’t know about elsewhere in the world, when you go to the dentist and they want to see if your lower teeth fit your upper teeth properly, they say bite down. You don’t actually bite down, you bite up. So why do they say bite down and they all say it. I’ve asked dentists, why do you say bite down instead of bite up? They don’t have an answer. They spend their whole life saying bite down. It’s never occurred to them that you actually bite up. Why do you say bite down?

Smith: And what do they say?

Hinton: Do you want to know why they say it?

Smith: Yes please.

Hinton: Okay. I was once eating some bony fish. This really is an explanation of why they say bite down. I was once eating some bony fish in a cafeteria with a friend and we were trying to play blindfold chess. Now we weren’t that good at chess and we kept sort of moving bishops through pawns and things. But we were trying to play blindfold chess and I realised you can’t play blindfold chess while eating bony fish. The reason is the sort of spatial processes you use for dealing with where things are in space and how they’re relating to each other. It’s the same spatial processor you use both for playing the blindfold chess and for finding the bones in the fish. It’s not like you have a separate piece of apparatus for dealing with what’s going on inside your mouth and what’s going on in this chess game. It’s the same spatial processor you are using. So you get a lot of interference. You can’t do both at once. It’s like trying to hold two conversations at once. The question is, when you use that spatial process for dealing with things inside the mouth, how does that relate to how you use it for dealing with things outside the mouth? If for example, I’m reaching for objects, there’s a kind of me and I’m doing the reaching for the objects. If I’m thinking about what’s going on inside my mouth, what’s me and where’s the sort of center of my frame of reference? So inside your mouth, your tongue is you, the tongue’s the bit you can move. You can move the jaw too, but the tongue is you and you sort of feel around with the tip of your tongue. If you’re looking for bones in the fish, you sort of feel for them with the tip of your tongue. So the tongue is you. Now the tongue is attached to the lower jaw. So if you ask what moves relative to the tongue, well what moves relative to the tongue is your upper teeth. When I contract the jaw muscle, so my jaw closes, the tongue doesn’t move relative to the lower teeth. There’s no danger that you bite your tongue with your lower teeth because they’re not moving relative to the tongue. The thing that bites your tongue is your upper teeth. Of course what you need to worry about inside your mouth is biting your tongue. That’s the most important thing not to do. You are your tongue and you don’t want to get bitten and these upper teeth are coming down on your tongue. So of course you think of it in terms of biting down.

Smith: It’s all about frames of reference.

Hinton: It’s all about frames of reference, which I’m very interested in and that’s my best explanation of why dentists say bite down. But the point of this whole story is there is an explanation of why they say it that I think is pretty good. But nobody ever asks a question. Nobody ever says ‘Wait a minute, the teeth come up so why do they say bite down?’ There’s huge numbers of questions that nobody ever asks. My son when he was very little, asked a question that very few kids ask, he said, ‘Daddy, why do bridges stay up?’ The point is there’s nothing underneath the bridge. He was just beginning to understand that it is odd that bridges stay up.

Smith: It’s a lovely good question. All this asking of questions and finding solutions can slow you up, though. I suppose most people are rushing through things, and stopping to ask is time consuming, as well as being intellectually taxing.

Hinton: I’ve always envied people who can just read a lot of stuff. If I start reading a scientific paper, I keep getting sidetracked. It takes me a whole day to get through a scientific paper because I’ll read a little bit, then think, wait a minute, or it’ll remind me of something else. So I get sidetracked a lot by asking questions and I actually like making up questions just for the fun of trying to figure out the answers. So for example, if you take a coffee cup and you put the coffee in, then you have to go and do something. You’re coming back in five minutes and you want the coffee to be as cool as possible in five minutes. Should you put the milk in when you’ve just put the coffee in or should you leave it and put the milk in when you come back? The answer, roughly speaking, is that you should leave it and put the milk in when you come back, because the coffee will be hotter if you haven’t put the milk in, so it will cool faster. You’ll be losing heat faster if the coffee doesn’t have the milk in, then you put the milk in when you come back and it cools it down some more. It’ll end up coolest if you put the milk in later. But suppose that the coffee cup is conical shaped. Because it’s losing heat from the surface of the coffee, if you’ve got the milk in, you get a bigger surface. Although, if the milk mixes with the coffee, the overall thing’s a bit cooler, it’s got a bigger surface. That must mean there’s some shape of coffee mug where it doesn’t matter whether you put the milk in first or later, it’ll cool the same amount. It depends a bit on exactly how much coffee you put in and how much milk you put in. But I think most people don’t get sidetracked by wondering things like: if we had the right shape coffee mug, we wouldn’t have to worry about whether you put in the milk first or second.
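Hinton’s rough argument can be checked with a few lines of arithmetic. The sketch below assumes Newton’s law of cooling with a fixed cooling constant and instantaneous, perfect mixing when the milk goes in, and it ignores the surface-area effect he raises for the conical cup; all the numbers are invented purely for illustration.

```python
import math

T_room, T_coffee, T_milk = 20.0, 90.0, 5.0   # degrees C; milk assumed fridge-cold
V_coffee, V_milk = 0.20, 0.05                # litres
k = 0.002                                    # assumed cooling constant, per second
t = 300.0                                    # the five-minute wait

def cool(temp, seconds):
    """Temperature after Newtonian cooling towards room temperature."""
    return T_room + (temp - T_room) * math.exp(-k * seconds)

def mix(temp):
    """Temperature right after stirring in the milk (volume-weighted average)."""
    return (V_coffee * temp + V_milk * T_milk) / (V_coffee + V_milk)

milk_first = cool(mix(T_coffee), t)   # milk in immediately, then five minutes of cooling
milk_later = mix(cool(T_coffee, t))  # five minutes of cooling, then milk in

print(f"milk first: {milk_first:.1f} C, milk later: {milk_later:.1f} C")
# In this simple model, milk-later ends up slightly cooler, matching Hinton's
# rough argument: the hotter black coffee loses heat faster during the wait.
# (The result also relies on the milk being colder than the room.)
```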

Smith: Absolutely not. I think that sounds like a wonderful physics project for some undergraduate, to work out what the ideal shape of cup is. But yes, it sounds like the sort of question which normally elicits the answer, ‘Oh dad, honestly’.

Hinton: My daughter was used to me lecturing her on scientific theories and one morning when she was a teenager I came down to breakfast early, she used to come down early because she had to get to school. She said, ‘You are down early, dad’. I said ‘Yes, I think I figured out how the brain works!’ And my daughter said, ‘Oh no dad, not again’.

Smith: All this reminds me of a nice thing Garrison Keillor said about old people. He said the reason old people walk so slowly is that everything they see reminds them of something and so they have to stop and think about it.

Hinton: I think it’s because their bodies don’t work so well.

Smith: Oh how prosaic! And how factual and true. Actually that brings me on to your own upbringing. How did that experience translate into the way you brought up your own kids? I think we’ve already seen insight into that.

Hinton: Yes, my kids, my daughter in particular, doesn’t want to be a scientist. She never did want to be a scientist. She would occasionally get very cross with me for trying to explain things scientifically when she wasn’t interested.

Smith: I suppose I was getting to the expectation piece, given that there were obviously huge expectations placed on you.

Hinton: My children are both adopted and that changes the imposition of expectations somewhat. I think that made a difference.

MUSIC

Smith: You’ve had a very interesting path through academia and business. I wanted to ask what it was that really attracts you to an environment. Where do you feel happy working? What defines somewhere you’re happy?

Hinton: I guess I need two things. I need smart other people to talk to who know things I don’t know. One of the best collaborators I’ve ever had was Terry Sejnowski, who knows a lot of physics and a lot of biology that I don’t know. He also reads a lot so he knows all the literature, which I don’t. That’s been a very good collaboration. The other thing I need in an environment is smart graduate students. At universities, if you have a PhD student, you are stuck with them and they’re stuck with you for about five years. That’s long enough so that after they’ve got used to you and you’ve got used to them, you still have four years left. They can try doing things that fail and they can spend six months trying to do something that fails and that’s not the end of the world. That’s great training for them and it’s a great resource for the advisor to get them to try things and try them really hard. The one disadvantage I found at Google is that the junior research scientists at Google work with more senior scientists on projects. They’ll work on a project for a while because it sounds interesting. But when the going gets tough, if after a month it hasn’t produced anything, they’ll go off and work with some other scientist. They have too much freedom. Now these are typically people who’ve already done PhDs so they deserve some freedom. But that’s something you have at universities that you don’t have at the big companies, not in the same way. I think a lot of original research will continue to be done at universities, because I think this kind of apprenticeship system, where you have an advisor and a student and the student is apprenticed to the advisor and stays as an apprentice for several years, works very well for exploring new ideas. It’s an advantage that the students don’t yet know that much. They don’t have all sorts of opinions of their own, or if they do have an opinion of their own, it’s sort of fresh and they can see things from new angles. I think actually for fundamental original research, universities are better than the big companies. But for resources the big companies are much better.

Smith: In your particular field of deep learning and neural networks is there now such a need for resources that the balance of power, if you like, in research is shifting a bit towards the companies?

Hinton: Yes, it’s a real mess. For these large models, you need a lot of resources and that sort of puts universities at a big disadvantage. That’s one big disadvantage. The other big disadvantage, and it may change in the near future, I don’t know, is that the companies just pay a lot more. For a good researcher who just got a PhD in machine learning, a few years ago they could go to one of the big companies and get paid $300,000 a year. If they went to a university, they’d get paid $150,000 a year if it was a relatively rich department. That just gives the big companies a huge edge.

MUSIC

Hinton: If you work for a big tobacco company and you want to tell people that tobacco causes cancer, you may start telling them while you work for the company. But really what you should do is quit the company and tell people tobacco causes cancer. Google treated me very well and the people I dealt with at Google were very nice people, so I felt wrong about criticising Google and other companies for not paying enough attention to safety while working for them. Actually, when I said I was leaving, they said ‘Well, you could stay here and say whatever you like and work on AI safety’. It just didn’t feel right. You’re just much cleaner if you don’t work for the company. The people who are uninhibited in saying what they think about AI safety are generally people who don’t work for a company, or people, like some of the people at OpenAI, who are about to leave the company.

Smith: Your contention is that we are closer to a dangerous intelligent form of deep learning than people think.

Hinton: Yes. There’s two kinds of dangers of AI. There’s the kind of AI we have now being misused by bad people. Being misused, for example, to target voters to get them to stay home rather than voting for Kamala Harris. I’m sure quite a bit of that went on. If you know a lot about a person, you can know what presses their buttons and send them things that’ll manipulate their behaviour. Those hundreds of millions of dollars that Musk put into supporting Trump probably went into things like that. I wouldn’t be surprised if they did. Then there’s obviously cyber criminals using it for cyber attacks, which is very scary because these things are getting better and better at cyber attacks. There’s people using it to make nasty viruses, and there’s lethal autonomous weapons, which are gonna be very nasty and are coming very soon. There’s all those things that just depend on bad actors, and many people are aware of those things. Then there’s things that are quite different, that depend on AI itself trying to take over. A lot of people, particularly people concerned with the other things like discrimination and bias in AI, say ‘This AI taking over, it’s just science fiction, it’s nonsense’. That tends to go together with the belief that AI doesn’t really understand anything. What I now think of as old fashioned linguists who are followers of Chomsky think this stuff doesn’t understand anything, that it’s just a statistical trick using correlations. That belief gets a bit thin when you have arguments with it and it starts beating you at the arguments. I have a little game I play, which I enjoy a lot, which is you take some of the statements about how these large language models don’t really understand what they’re saying and you give those statements to a large language model and ask it to explain what’s wrong with the reasoning of the people who made those statements. It gives very coherent explanations of what they’re getting wrong. The nice thing about this game is it requires no effort on your part. All you have to do is type in the statements from these people. You don’t even have to type them in, just grab them, put them into GPT and say ‘What’s wrong with this?’ It’ll tell you exactly what’s wrong with it. It’s quite satisfying to get the large language models explaining to the critics of large language models why they’re wrong. At that point it seems just crazy to say they don’t understand anything.

Smith: People point very much to the mistakes they make, the stupid mistakes that large language models make, as evidence that they’re not really understanding, that it is just correlation.

Hinton: Yes. I have various things to say about that, two things in particular. If you take someone with low intellectual ability, with an IQ of 80 or something like that, they will sometimes make mistakes. They’ll get some common sense things wrong. We don’t say that means they didn’t understand anything. What we say is that means they didn’t understand that. There’s complicated things they don’t understand. But we don’t say that means they’re not understanding at all, that there’s no understanding there, just because they made a mistake. That would be kind of crazy. The second thing is, people make mistakes like this. Even people of average intelligence make mistakes like this all the time. Now people are generally better, at least until recently, at recognising when they’ve got things wrong and sort of filtering it and doing a bit of reasoning: that can’t be right. But they make mistakes all the time. It’s called hallucinations, but it ought to be called confabulation when it’s just a language model, there’s no vision involved. It ought to be called confabulation. Psychologists have studied that since the 1930s and shown that people just confabulate all the time. When we remember things that happened a long time ago, we make up a story that sounds plausible to us. It’ll have some relation to the truth, but we’ll have lots of the details wrong. It’s normally hard to prove that, because you need to know the precise details of something that happened a long time ago. But there are cases where you do know what happened. The best case I know of is a nice paper by Ulric Neisser on ‘John Dean’s memory’. John Dean was Nixon’s lawyer and he testified under oath about conversations that had happened in the Oval Office. At the time that he testified, he was unaware that there were tape recordings of those conversations. Basically what he produced in testimony was plausible kinds of meetings that might have happened given what was going on. Those meetings never actually happened. It wasn’t those precise people in the meeting. It wasn’t that that person said something, it was somebody else who said the same thing. It wasn’t quite that thing anyway. But it’s clear he was trying to tell the truth. But when we remember stuff, what we’re doing is just making up a story that sounds plausible to us. That’s what memories are. If the events happened very recently, what sounds plausible to us is what actually happened. As the events happened longer ago, what sounds plausible to us is something like what happened, but influenced by things we’ve learned in the meantime. Human memory is full of confabulations, and the fact that these large language models just make stuff up shows they’re more like us, it doesn’t show they’re less like us.

Smith: Would you say that those models are the closest we’ve got to exposing how the brain is actually processing information?

Hinton: Yes. They’re by far the closest we have to explaining what’s going on in the brain when we’re understanding language. I think in broad terms, we understand language in the same way as these large models understand language. They’re like us in that respect. The people who say they’re completely unlike us don’t have a workable theory of how we understand language. The best theory we’ve got is these large language models.

Smith: If there is a very serious danger that they are going to become far more intelligent than us quite soon, what do you think should be happening now in order to protect against potentially bad effects of that?

Hinton: Nearly all the leading researchers I know believe they will get more intelligent than us. Maybe not in everything all at once. They’ll get better at different things at different times. They’re already much better, for example, at playing go or playing chess or figuring out how proteins will fold. They’ll get better at different things at different times. If I ask you to write a sonnet where half the words begin with B, you could probably do that, but it’ll take you a whole day. If you ask GPT-4 to do that, it’ll just spit out the sonnet. They’re much better at things like that. They’re already not-very-good experts at everything, and they’re getting better fairly rapidly. Most researchers think they will get better than us, smarter than us. It’s just a question of when. Some people seem to think it’ll be in the next few years. I think that’s optimistic, if you think it’s a good thing. I think it may take up to 20 years; I’d say probably within the next 20 years they’ll get smarter than us. They’ll be agents too. They’re already making them into agents. They can do things, they can talk to other ones and they can cooperate to achieve things.

Smith: They can create their own goals.

Hinton: So they can create their own sub-goals at least. We may put in their top-level goals, but they’ll generate sub-goals in order to achieve those. They’ll probably generate the sub-goal of getting more control to make it easier to get things done. The real question is not will they get more intelligent than us, but if they’re more intelligent than us, will we have a way of making sure they don’t want to take over? We just don’t know. We don’t know whether that’s possible. But given that you’re about to make things more intelligent than you, it would seem wise to put a lot of resources into figuring out if you’re going to be able to keep control. We are not going to stop AI. Saying we should stop now might be the rational policy, but that’s not going to happen. There’s too many profits to be made. Governments want it too much for weapons and so on. They will say it’s for defending themselves against cyber attacks, but at least half of them must be using it for inventing the cyber attacks.

Smith: It’s a race of escalation.

Hinton: So they’ll want it for that. It’s not going to be stopped. We should be putting a lot of resources into figuring out: can we generate these hyper intelligent agents in a way that allows us to stay in control? Now some people believe we can. Yann LeCun (who’s a friend of mine and was my postdoc) believes we can. I think it’s improbable, but we should put a lot of effort into seeing whether it’s possible.

Smith: What form does that effort take? Is it regulation? It’s research really, isn’t it?

Hinton: It’s research, but only the big companies have the resources to do this research because it’s research on the large cutting-edge models. My belief is that governments are the only people powerful enough to deal with these large companies, and even they may not be. My belief is the government ought to mandate that they spend a certain fraction of their computing resources on safety research. Now it would be great if that happened. The Biden administration was moving very timidly towards a little bit of regulation. In California they were a bit more ambitious and they said they were going to require the large companies, before they release these things, to do a lot of safety tests and tell you the results. That got vetoed by the governor. It got passed by the legislature, but vetoed by the governor. That was the first bill with real teeth. In Europe, the Europeans would like to have some regulation of AI, although they explicitly say we are not going to regulate military uses of AI, because so many European companies want to use it for weapons. But recently the UK and the US have said they’re not going to sign on to the European declaration about AI safety. Basically they explicitly say we’d rather have the profits than the safety. They say that by saying too much regulation will interfere with innovation. But you can rephrase that as: when it’s profits versus safety, profits win. In Britain, for example, the prime minister is being advised by someone who holds lots of shares in AI startups.

Smith: It does seem a case of the industry policing itself, partly because, as you say, nobody else has access to the resources, but also because it’s such a fast moving area that people are finding it very hard to keep up with what’s happening.

Hinton: I find it very hard to keep up with what’s happening. There’s new models coming out every day and there’s new techniques being invented every day because there’s a very large number of very smart people working on it now. I find that scary. So it will be hard to regulate. But if you say something like spend a third of your computing resources on AI safety research, that’s sort of more generic and easier to do.

Smith: I can see why it’s to be wished for. In getting people to focus on that question, and the provision for potentially controlling what could become dangerously intelligent in the future, is there a danger that you neglect the current problems that AI presents? Alongside all the benefits in diagnostics and all sorts of things that it can potentially do for education and so on, there is the threat of automation and large scale unemployment, as well as all the bad actors that you’ve already mentioned, and that also needs to have attention paid to it. We need to, as a society, think about what we want now from AI, not just the danger of the future. There’s so much to think about. There’s so much to worry about and potentially control and direct. It’s too big.

Hinton: I completely agree. There are these many different dangers. There’s many short-term dangers and it’s not that we shouldn’t think about those. I mainly speak about the long-term existential threat of these things getting more intelligent than us and taking over because many people say that’s just science fiction. I feel I know enough about how these things work and how we work to say it’s not science fiction. But that doesn’t mean I’m not also very concerned about all the short-term things. It’s just my particular expertise means I’m best placed to talk about these longer-term existential threats. But I do try and emphasise when I talk about those, there are also many short-term threats like the social disruption if these things replace all mundane intellectual labour with AI.

Smith: It all comes back in a way to the question that you’ve been asking all these years about how we work and what it is to be human.

Hinton: That question of what it is to be human is becoming very central. Because the debate about whether these things will want to take over is all about whether they have desires and intentions. Many people think, for example, there’s something that’ll protect us, which is that they’re not conscious and we are conscious. We’ve got something special that they ain’t got and they will never have. I think that’s just gibberish. I’m a materialist. Consciousness emerges in sufficiently complicated systems, perhaps systems complicated enough to be able to model themselves. There’s no reason why these things won’t be conscious.

Smith: So we’re going to have to learn how to live with them as well.

Hinton: Hopefully we can learn to live with them. That’s the good scenario.

Smith: How do you think we should think of them in the future as friends or aliens?

Hinton: Okay, so Yann, who thinks we’re going to be safe, thinks we should think of them as servants, good old fashioned servants who do what you tell them to. If they don’t, you fire them. I’m just worried by the fact that there’s very few cases of more intelligent things being controlled by less intelligent things. Once they’re a lot smarter than us, I don’t think they’ll put up with that. That’s what worries me at least. Now there’s one line of argument that’s more promising, which is that a lot of the nasty characteristics of people have come from evolution. We evolved in small warring bands, like chimpanzees, or our common ancestor with chimpanzees. That led to this intense loyalty to your own group and intense competition with other groups, being willing to kill members of other groups. That sort of shows up in our politics quite a lot right now. These things didn’t evolve, so maybe we can avoid a lot of that nastiness in things that didn’t evolve.

Smith: It’s a nice thought that we could learn how to behave from them.

Hinton: Yes. In fact, AI mediators are now quite good at getting people with opposing views to come to see each other’s view. There’s a lot of good that can be done with AI and if we can keep it safe, it’s gonna be a wonderful thing.

Smith: Hopeful note to end on. It’s been an absolute pleasure speaking. Thank you very much indeed.

Hinton: Bye for now.

MUSIC

Svensson: You just heard Nobel Prize Conversations. If you’d like to learn more about Geoffrey Hinton, you can go to nobelprize.org where you’ll find a wealth of information about the prizes and the people behind the discoveries. Nobel Prize Conversations is a podcast series with Adam Smith, a co-production of Filt and Nobel Prize Outreach. The producer for this episode was me, Karin Svensson. The editorial team also includes Andrew Hart and Olivia Lundqvist. Music by Epidemic Sound. If you are into big ideas, lateral thinking and in-depth explanations, why not check out our episode with 2020 physics laureate Roger Penrose. You can find previous seasons and conversations on Acast or wherever you listen to podcasts. Thanks for listening.

Nobel Prize Conversations is produced in cooperation with Fundación Ramón Areces.


