Transcript from an interview with Geoffrey Hinton
Interview with the 2024 physics laureate Geoffrey Hinton, recorded on 6 December 2024 during Nobel Week in Stockholm, Sweden.
How did you first learn about your Nobel Prize?
Geoffrey Hinton: I was more or less asleep in a hotel room in California, and I had my phone upside down on the bedside table with the sound turned off. I just happened to be sleeping on my side so that the phone was in my line of sight, and it got bright. I saw this little slit of brightness and it started vibrating. I was in California and almost everybody I know is on the East Coast. I wondered who on earth could be calling me. I picked it up and there was this long phone number where I didn’t recognise the area code or the country code, and then someone with a Swedish accent asked if I was Geoffrey Hinton. Then he told me I had won the Nobel Prize in Physics.
My first reaction was, well wait a minute, I don’t do physics. This could be a prank. I thought it might well be a prank. But then what he said sounded very plausible. Then other Swedish accents came on, and I was convinced it was real. Well, I was sort of convinced it was real, but for a couple of days after that, I thought it might be a dream. So I did a bit of statistical reasoning, and the statistical reasoning goes like this: What’s the chance that someone who’s really a psychologist trying to understand how the brain works will get the Nobel Prize in Physics? Well, let’s say the chance is one in two million. That’s a fairly generous estimate of the chance. What’s the chance that if it’s my dream, I get the Nobel Prize in Physics? Well, let’s say the chance is one in two. Now it’s a million times more likely that this is a dream than that it’s reality.
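A minimal sketch of that arithmetic, written as a likelihood ratio using the round numbers he quotes (the conditional-probability notation is an editorial addition, not his):

\[
\frac{P(\text{Nobel call} \mid \text{dream})}{P(\text{Nobel call} \mid \text{reality})} \;=\; \frac{1/2}{1/2{,}000{,}000} \;=\; 1{,}000{,}000
\]

Under those assumed numbers, the call is a million times more likely to occur in a dream than in reality, which is the factor he cites.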
I thought it might be like those dreams you have when you’re younger that you can fly and it’s wonderful, and then you wake up and it was only a dream. Then a month later you have a dream that you can fly again. And you remember that you had a dream that you could fly and it wasn’t true. But this time it’s real. I thought it might be one of those. So for a couple of days I was sort of waiting to see if I would wake up. I haven’t woken up yet.
What was it like growing up in a family of famous researchers?
Geoffrey Hinton: There was a lot of pressure to succeed academically. I kind of knew from a very early age that I had to be a successful academic or a failure.
What made you interested in studying AI?
Geoffrey Hinton: I had a friend at high school who was always much cleverer than me, who when we were about 16 or 17, came into school one day and started talking about memories in the brain and how they might be distributed over the brain in the same way as holograms. Holograms had just been invented. This would’ve been about 1965. He got very interested in an idea that came from a psychologist called Lashley, that memories were distributed across many neurons, and I got very interested in that too. Ever since then I’ve been thinking about how the brain might work.
What are the greatest risks posed by AI?
Geoffrey Hinton: There are two kinds of risks. There are relatively short-term risks, which are very important and very urgent. They’re mainly to do with people misusing AI. People are still in charge, but they’re misusing it. The risks include things like replacing lots of jobs and widening the gap between the rich and the poor, because when productivity increases using AI, the gains are not shared equally. Some people lose their jobs and other people get rich. That’s bad for society. That’s one kind of risk and we need to figure out what to do about that, although it’s not clear what to do. Another kind of risk is fake videos that will corrupt elections. That’s already happening.
Another kind of risk is cyber-attacks, where bad actors use these big AI models to craft better attacks. To begin with, it’s just for doing better phishing. Last year, phishing attacks went up 1,200%, probably largely because large language models can make them much more effective. Then there’s designing things like Covid, which you can do much more efficiently using AI, and it’s soon going to be relatively easy to design things like that. That means one crazy person can cause endless chaos. It gets much easier if you release the weights of a large model, because bad actors can then take that model and fine-tune it. People are now releasing the weights of large models, which I think is crazy.
There are other short-term risks. There are obviously things like discrimination and bias. Suppose you train a model to decide whether prisoners should get parole. If the historical data is that white prisoners get parole and black prisoners don’t, and you train an AI model on that historical data, it’ll say the same thing. I’m not as worried about that as other people, because with an AI model you can freeze the weights and measure the discrimination, which you can’t do with people. If you try to measure discrimination in people, they realise they’re being measured and you get the Volkswagen effect, where they behave differently when they’re being measured.
I think for discrimination and bias, it’s actually going to be much easier to measure them in AI systems than in people. I don’t think our goal should be to make things that don’t discriminate at all and aren’t biased at all. Our aim should be to make things that discriminate a lot less and are a lot less biased than the systems they replace. It’s a very important problem, but it’s one where it’s fairly clear we can make progress. Those are the short-term problems.
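The point about frozen weights lends itself to a small illustration. The sketch below is hypothetical and not from the interview: the `model`, the `load_parole_cases` helper, and the group labels are all assumptions. It shows one way you could audit a fixed classifier for a disparity in parole recommendations across groups, the kind of repeatable measurement Hinton says is hard to do with human decision-makers.

```python
# A minimal sketch, assuming a frozen binary classifier `model` with a
# .predict() method and a hypothetical dataset of past parole cases.
# Because the weights are fixed, the same probe always gets the same answer,
# so the measured disparity is a property of the model itself.

from collections import defaultdict

def demographic_parity_gap(model, cases):
    """Compare the rate of positive (grant-parole) predictions per group."""
    rates = defaultdict(lambda: [0, 0])  # group -> [positive predictions, total]
    for case in cases:
        prediction = model.predict(case["features"])
        rates[case["group"]][0] += int(prediction == 1)
        rates[case["group"]][1] += 1
    per_group = {g: pos / total for g, (pos, total) in rates.items()}
    return per_group, max(per_group.values()) - min(per_group.values())

# Hypothetical usage:
# cases = load_parole_cases("historical_parole.csv")   # assumed helper
# per_group, gap = demographic_parity_gap(model, cases)
# print(per_group, gap)  # a large gap is the kind of bias described above
```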
There are also longer-term problems of these things taking over. What we’re doing is making things more intelligent than ourselves. Researchers differ on when that will happen, but among the leading researchers there’s very little disagreement that it will happen, unless of course we blow ourselves up first. The question is what’s going to happen when we’ve created beings that are more intelligent than us, and we don’t know. We’ve never been in that situation before. Anybody who says it’s all going to be fine is crazy. Anybody who says they’re inevitably going to take over is crazy too. We really don’t know. But because we really don’t know, it makes a lot of sense to do a lot of basic research now on whether we can stay in control of things we create that are more intelligent than us.
There aren’t many examples we know of where more intelligent things are controlled by less intelligent things. The only good example I know of is a baby controlling a mother. There’s not much difference in intelligence there, and evolution had to put a lot of work into making that happen. It’s very important that the baby can control the mother. But if you look around, on the whole, more intelligent things are not controlled by less intelligent things.
Now, some people think it’ll be fine because we make them, and we’ll build them in such a way that we can always control them. But these things will be intelligent. They’ll be like us. In fact, the way they work is very like the way we work. They’re not like computer code. People sometimes refer to them as computer programs, but they’re not computer programs at all. You write a computer program to tell a simulated neural network how to learn. But once it starts learning, it extracts structure from data. The system you’ve got at the end has extracted its structure from the data. It’s not something that anybody programmed. We don’t know exactly how it’s going to work, and it’ll be like us.
So making these systems behave in a reasonable way is much like making a child behave in a reasonable way. The controls you really have are that you can reward it for good behaviour and punish it for bad behaviour. But the main control you have is demonstrating good behaviour, training it on good behaviour. That’s what it observes and that’s what it mimics. It’s the same for these systems. It’s very important we train them on the kind of behaviour that we would like to see in them. At present, the big chatbots are trained on all the data they can get, which includes things like the diaries of serial killers. Well, if you were raising a child, would you get your child to learn to read from the diaries of serial killers? I think you’d realise that was a bad idea.
How much time do we have before AI outsmarts us?
Geoffrey Hinton: Well, that’s what we don’t know. My guess is that between five and 20 years from now, there’s a good chance, a 50% chance, we’ll get AI smarter than us. It may be much longer; it’s just possible it’s a bit shorter. But I think it’s quite likely to have happened in 20 years’ time. Other researchers think it’s shorter or longer. That’s my guess. Actually, that was my guess a year ago. I suppose my guess now is between four and 19 years.
What personal qualities are important in succeeding as a scientist?
Geoffrey Hinton: I think it depends what field you’re in and whether you’re trying to do something that’s very different from the standards in the field. Neural networks were for a long time regarded as ridiculous, and it was clear to many people that they would never work. To work in a field like that, you have to be confident that you are right even when everybody else says you’re wrong. Actually, a couple of things happened when I was very young that helped. One was that my parents, who were both atheists, sent me to a Christian school, a Christian private school, from the age of seven. Everybody at the school believed in God. The teachers believed in God and the other kids believed in God. It seemed to me it was just obvious nonsense, and it turns out I was right.
That experience of everybody else around you believing one thing, and it being clear to you that they’re wrong, was very useful. As you get older, it turns out there are other people who also don’t believe in God. That was one thing that helped keep me going when everybody said neural networks were nonsense. It wasn’t everybody, but it was almost everybody in computer science.
Another experience that I haven’t talked about much was when I was probably about nine, though I don’t know exactly what age. I heard a radio programme with my father talking about continental drift. At that point there was a lot of controversy about whether the continents moved around. Nearly all geologists thought it was complete rubbish. The theory was first introduced, I think around 1920, by a climatologist called Wegener, who had lots of evidence that the continents moved around. But he wasn’t a geologist, and the geologists just thought this was complete rubbish and they pooh-poohed it. They, for example, refused to allow it in textbooks. They said it would only mislead the students and it was complete nonsense. So I saw a debate in which a theory that was regarded as complete nonsense by nearly all geologists turned out to be correct. That was also very helpful.
In fact, the most similar thing I know to what happened with neural nets is what happened with continental drift. With continental drift, there was this idea that South America fitted nicely into the armpit of Africa. But it wasn’t just that; the soil types all down the coast of South America matched the soil types down the coast of Africa. There were fossils that linked up. There were glacial scrapes on rocks in the tropics and there were coal deposits in the Arctic. There was all this evidence that the continents had moved around, but the geologists as a field completely dismissed it. They just couldn’t believe that the earth had moved.
It was the same with neural networks. We had this evidence that neural networks must be able to learn to do complicated things, because we’ve got a brain. But most people in AI said that if you take neural networks and try to learn everything in the neural network, it’s hopeless. The knowledge has to be innate, or you have to do it by learning symbolic rules. They basically refused to allow people to publish neural network stuff in their journals. It was a very similar situation, and now there’s been a more or less complete paradigm shift.
What’s your advice to young researchers?
Geoffrey Hinton: I don’t know if I’m in a good position to give advice, but the piece of advice I normally give is: if you have an idea and it seems right to you and it’s different from what everybody else believes, don’t give up on it until you’ve figured out why it’s wrong. With most ideas you have like that, you are wrong. There’s something you haven’t thought of or something you didn’t understand. Just very occasionally you have an idea that’s different from what other people believe and is actually right. You are never going to discover that unless you keep going with your beliefs until you discover why they’re wrong. You should just ignore what other people say. I’m very good at ignoring what other people say.
What responsibilities do scientists have in society?
Geoffrey Hinton: I think scientists have a much better understanding of what this technology is than politicians or the general public do. The scientists still disagree. There are still some scientists who say these big chatbots don’t really understand what they’re saying, despite all the evidence that they do understand what they’re saying. Some scientists say it’s just a statistical trick.
Looking back at your career, what could you have done differently?
Geoffrey Hinton: I would like to have been concerned about this existential threat sooner. I always thought superintelligence was a long way off and we could worry about it later, and that the problem for now was just to make these things more intelligent. I wish I’d thought sooner about what was going to happen. If you go back to Turing in the early 1950s, he talks about making things smarter than us. He has about one sentence which says, “of course, when they get smarter than us we’re finished.” He doesn’t say it quite like that, but he implies it. But most people just don’t think about that problem until it gets close, and the problem is close now. So I wish I’d thought about that sooner.
What are your plans for the prize money?
Geoffrey Hinton: Half of my share of the prize money I donated to an organisation in Canada that trains people who live in indigenous communities in the technology of producing safe drinking water. That’s good because those people will then stay in the communities and they’ll have safe drinking water. It’s ridiculous that at this time, in a rich country like Canada, 20% of the indigenous communities in Ontario, for example, do not have safe drinking water. That’s just crazy. I’m sort of sympathetic to this problem because I adopted a child in Peru and I was there for two months, and you can’t drink the tap water there. It’s poisonous. So your whole life revolves around how you get safe water. It’s a huge extra burden on everyday life. It’s crazy that people in Canada have to suffer that. So I donated half to that.
The other half goes back to the 1980s, when I worked with someone called Terry Sejnowski, who was actually a student of Hopfield’s, on the theory of Boltzmann machines. We worked on it equally. I wouldn’t have had the theory unless I’d been talking to him, and he wouldn’t have had it unless he’d been talking to me. He was a physicist originally and then went into neuroscience, and we thought that this must be how the brain works. It was such an elegant learning algorithm that we were convinced it had to be how the brain works. We thought we might get the Nobel Prize in Physiology or Medicine for discovering how the brain worked. We had an agreement back in the 1980s, which was that if they gave it to one of us and not the other, we’d split it.
So when they gave me the Nobel Prize, totally unexpectedly, and one of the reasons was Boltzmann machines, I got in touch with Terry and asked where he would like me to send his half. He said he didn’t feel right about it, because the prize wasn’t just for Boltzmann machines, it was for other subsequent work that he wasn’t so involved in. So he refused to take the money. In the end we compromised: I took that half of my share and we used it to set up a prize in his name for young researchers. It’ll be for young researchers with crazy theories of how the brain works, like we had, and it’ll be handed out at the annual conference in our field. That seemed like a good compromise. I feel he could easily have been the third person named in the Nobel Prize. I’m not complaining about that, but he could have been. This is a way of recognising that he made a huge contribution.