24 September 2008
The Nobel Prizes have rewarded a host of advances that have brought us closer to the stars, from their birth in the primordial gas and dust to their explosive deaths.
The galaxy swings round
like a wheel of lighted smoke,
a smoke composed of stars.
It is sunsmoke.
For lack of other words we call it sunsmoke,
Do you see.
I don't feel languages are equal
To what that vision comprehends.
In his epic science fiction poem Aniara, the 1974 Nobel Laureate in Literature Harry Martinson describes the futility of explaining in words the impact of being able to see all the stars in a galaxy. Martinson echoed the awestruck and humbling feeling that everyone from children to Nobel Laureates experiences when first gazing up at the stars. The numbers by themselves are incomprehensibly vast. Our Sun is thought to be one of 200 billion stars in our Galaxy, the Milky Way, and one of 100-billion-billion stars in the Universe. There are as many stars in the Universe as there are blades of grass on Earth.
The stories behind the life and death of stars are just as extraordinary. Stars are born from a primordial soup of gas and dust, and they often end their lives in stunning explosions (Figure 1). In their lifetimes they forge the chemical elements from which all life is made, and when they die they create the most destructive forces in the Universe.
But how are stars born, how do they shine, and what happens to them when their light goes out? These and many other questions have inspired generations of scientists, whose answers have produced breakthroughs that were awarded the Nobel Prize. To understand the origin of stars, we begin with the Nobel Prize-awarded discoveries that helped us to understand what occurred at the very beginning of time… or, to be more precise, in the very first moments after the very beginning of time.
The idea that the Universe was born in a sudden event was far from popular. Georges Lemaître's description in 1927 of the Universe's origin as "a day without a yesterday", in which a single primeval atom exploded and sent out the smaller atoms that we see today, was greeted with a great degree of scepticism. The consensus among the scientific community was that the Universe had always existed in a largely unchanged form, a view known as the Steady State theory.
It wasn't until Edwin Hubble discovered in 1929 that the Universe is in fact expanding that a handful of scientists began to reason that if you rewound the course of the Universe's life, then at some point it must have existed as a dense fireball. Ralph Alpher, Hans Bethe and George Gamow proposed in 1948 that this idea could explain the abundance of light elements in the Universe, such as hydrogen and helium, but the notion of an exploding and expanding Universe was generally dismissed. One of its biggest critics, the celebrated astronomer Fred Hoyle, sarcastically called this concept the 'Big Bang' theory, a phrase that would later be adopted by its proponents.
Undeterred by the sceptics, Alpher and Gamow, together with Robert Herman, predicted that if the dawn of the Universe was explosive, then a faint afterglow of this explosion should exist and be detectable. Their thoughts lay dormant for many years, until an accidental discovery in 1964 changed the way people thought about the Universe.
Arno Penzias and Robert Wilson's attempts to tune in to the microwave signals transmitted from the Milky Way using a large horn-shaped radio antenna owned by Bell Laboratories in Holmdel, New Jersey (Figure 2), were being hampered because the instrument was picking up a persistent weak hiss of radio noise. They spent around a year trying everything they could to remove the cause of this noise – from rebuilding parts of the antenna to clearing the pigeons that had taken roost in the antenna – with little luck or idea of what the noise could be.
Reprinted with permission of Lucent Technologies Inc./Bell Labs
The answer was revealed by making a phone call to Robert Dicke at Princeton University. Dicke and his colleagues were looking for Alpher, Gamow and Herman's Big Bang afterglow (though they were unaware of the original paper and had arrived at this hypothesis independently), and Dicke realised that the noise that Penzias and Wilson had tried so long to get rid of fitted the bill. Penzias and Wilson's accidental discovery of this remnant light, known as cosmic microwave background radiation, was the first key piece of evidence that led to the Big Bang theory being generally accepted as the standard model of cosmology, for which they received the 1978 Nobel Prize in Physics.
The Big Bang theory states that at some point billions of years ago all the matter in the Universe occupied a space no larger than the full stop at the end of this sentence. Why the subsequent series of events occurred is still not clear, but in an explosive burst this point, or 'singularity', expanded at an astonishing rate, like a fireball at temperatures of billions of degrees, creating space as it rapidly spread. Within the first second, gravity and all the other fundamental forces had formed. Within minutes the Universe was billions upon billions of miles across, and most of the matter that would ever exist had been created.
This infant Universe was a hot and dense mixture containing highly energetic particles of light, or photons, and components of atoms such as electrons and protons. Conditions were so intense that atoms couldn't form; any hydrogen atoms formed by an electron and a proton sticking together were instantly struck by photons and broken apart. It took around 380,000 years for the Universe to expand and cool down to the extent that the energy of photons was no longer sufficient to break apart hydrogen atoms. The photons were then free to move, in other words they were decoupled, allowing them to travel much further in all directions unimpeded to create a sea of light that bathed the entire Universe (Figure 3). As the Universe continued to expand and cool down this light lost its energy, shifting its wavelength from high-energy gamma rays to the microwave range of the electromagnetic spectrum (hence the name cosmic microwave background radiation), at a temperature of just a few degrees above absolute zero.
Credit: Adapted from 'Cosmic Calendar' by Carl Sagan
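A quick back-of-envelope check, using values that are my own additions rather than figures from the text, shows why this relic light now peaks in the microwave band: Wien's displacement law links the temperature of a radiating body to the wavelength at which its radiation peaks.

```python
# Illustrative check, not from the article: Wien's displacement law gives
# the wavelength at which radiation from a body at temperature T peaks.

WIEN_B = 2.898e-3   # Wien's displacement constant, metre-kelvins
T_CMB = 2.725       # measured temperature of the background radiation, kelvins

peak_wavelength = WIEN_B / T_CMB   # metres

# About 1.06 mm: squarely in the microwave band, hence the radiation's name.
print(f"Peak wavelength: {peak_wavelength * 1000:.2f} mm")
```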
The cosmic microwave background radiation provides scientists with the earliest snapshots of the conditions of the infant Universe, but it is difficult to examine this radiation clearly from the ground owing to interference from Earth's atmosphere. In 1989, NASA launched its Cosmic Background Explorer (COBE) satellite, under the leadership of John Mather, to get a clearer view of the cosmic microwave background radiation.
Detectors on board the COBE satellite designed by a team led by George Smoot were sensitive enough to measure minute fluctuations, or 'anisotropies', in the temperature of the cosmic background radiation that are of the right size and pattern to explain the origin of galaxies (Figure 4). These anisotropies correspond to the presence of tiny 'seeds' of matter clumping together under the influence of gravity. From these seeds celestial objects such as stars, galaxies and clusters of galaxies would eventually grow.
Credit: NASA / WMAP Science Team
Mather and Smoot received the 2006 Nobel Prize in Physics for providing their detailed view of the cosmic microwave background radiation, and this remnant of the Big Bang is the subject of continued investigation. Observations from NASA's Wilkinson Microwave Anisotropy Probe satellite, known as WMAP, have narrowed down the age of the Universe to 13.7 billion years, and have also shown that it was only a relatively short time after the Big Bang, about 200 million years, that the first stars began to form.
The tiny clumps of matter found in the earliest moments of the Universe attracted more matter through their gravitational pull. In a process that is still not fully understood, these dense regions became even denser over the course of billions of years, and eventually formed galaxies. During this time clouds of interstellar gas and dust, known as nebulae, began to form. Within these nebulae the first stars were born.
Several events can trigger the star-making process in nebulae, such as collisions between clouds of gas or shock waves generated by distant supernovae. As gravity pulls in the denser parts of a cloud, forcing them to fragment and collapse, the clumps of matter coalesce to form dense cores. Within these compressed clumps gravitational energy is converted into thermal energy, which is released as increasing amounts of heat. Eventually these so-called 'protostars' reach a mass, estimated to be a minimum of around 0.08 times the mass of the Sun, at which their centres become so hot and dense that hydrogen nuclei begin to fuse together to form helium, generating tremendous amounts of energy.
The star is now fully switched on, transforming into a blazing ball of fire. The clouds stop collapsing as the energy produced by the newly born star clears the surrounding material away from it. The star's bulk is now supported against the pull of gravity by the energy-generating reactions constantly being carried out in its core. This is the beginning of the main sequence phase of a star's life, the phase that our Sun and over 90% of its stellar companions in the Milky Way are currently in. How much material a star manages to pull in and collect at this stage is largely a matter of chance, but the amount of mass it gathers will have a significant bearing on how the star evolves, and eventually dies (Figure 5).
For millennia people have wondered how the Sun produces the vast amount of heat and light that supports life on Earth. In two papers published in 1938 and 1939, the German physicist Hans Bethe provided the long-sought-after answers. Fresh from creating the then definitive account of the central components of atoms and the manner in which these atomic nuclei interact with each other, Bethe described in great detail how the stars are powered by atomic reactions similar to those used in a hydrogen bomb, for which he received the Nobel Prize in Physics in 1967.
For stars up to several times the mass of the Sun, their supply of heat and light is generated mainly by squeezing four hydrogen nuclei together to form one helium nucleus, releasing a tremendous amount of energy in the process (Figure 6). In the more extreme temperatures and pressures found in stars that are heavier than our Sun, the energy-generating reactions also fuse hydrogen to form helium, but through a more complex cycle of nuclear reactions known as the CNO cycle in which carbon acts as a catalyst.
The key to explaining how these nuclear fusion reactions create heat and light lies in Albert Einstein's most famous equation, E = mc², which shows that mass and energy are interchangeable. As the mass of a helium nucleus is less than the combined mass of its four hydrogen constituents, the difference in mass is converted into large quantities of energy.
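The arithmetic behind this can be sketched with standard atomic masses (the numerical values below are my own additions, not figures quoted in the text):

```python
# Illustrative calculation using standard atomic masses: the energy released
# when four hydrogen nuclei end up as one helium-4 nucleus.

U_TO_MEV = 931.494      # energy equivalent of one atomic mass unit, in MeV
M_HYDROGEN = 1.007825   # mass of a hydrogen atom, atomic mass units
M_HELIUM4 = 4.002602    # mass of a helium-4 atom, atomic mass units

mass_defect = 4 * M_HYDROGEN - M_HELIUM4   # the mass that 'goes missing'
energy_mev = mass_defect * U_TO_MEV        # E = mc^2, in nuclear units

# Roughly 0.7% of the fuel's mass becomes about 26.7 MeV of energy for
# every helium nucleus formed.
print(f"Fraction of mass converted: {mass_defect / (4 * M_HYDROGEN):.2%}")
print(f"Energy released: {energy_mev:.1f} MeV")
```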
The theories of how the Sun shines predicted that the fusion reactions should also produce particles called neutrinos, which are tiny even compared with an atom. Wolfgang Pauli had predicted in 1930 that such particles would be a key by-product of these energy-generating reactions, and in 1956 the first neutrinos were detected at a nuclear reactor by Frederick Reines and Clyde Cowan, for which Reines received the Physics Prize in 1995.
According to the theories, the fusion reactions within the Sun produce an enormous number of neutrinos every second, but these particles are also highly elusive. Having almost no mass and no charge, neutrinos travelling close to the speed of light pass through the Earth and everything on it virtually unimpeded. By the time you have finished reading this sentence, billions upon billions of neutrinos will have passed through your eyes without you feeling a thing.
As there is only a one in a thousand billion chance of stopping a neutrino in its tracks on Earth, it is no surprise that proving the Sun produces neutrinos was deemed almost impossible. Undeterred by these seemingly hopeless odds, Raymond Davis Jr set up an audacious experiment in 1960 to try to capture some of these solar neutrinos and show how the Sun really works.
Davis, together with his colleague John Bahcall, reasoned that detecting something that rarely collides with matter would require an immense amount of matter for it to collide with. To create a large enough neutrino trap, Davis filled a tank one-fifth the size of an Olympic-sized swimming pool with around 400,000 litres of chlorine-rich dry-cleaning fluid and buried it almost 1.5 kilometres underground in a gold mine in South Dakota to protect it from interference from other forms of radiation (Figure 7). On the very rare occasion a neutrino hits a chlorine atom, it transforms the chlorine into a radioactive form of the element argon, which could be extracted from the tank. By counting the number of extracted argon atoms, Davis could calculate how many neutrinos had passed through the apparatus, and then see if this number agreed with the numbers predicted by theory.
Photo: Courtesy of Brookhaven National Laboratory
Over the course of 25 years the tank trapped only around 2,000 solar neutrinos, two-thirds fewer than predicted. This concerned scientists, forcing them to re-examine their understanding of how the Sun shines, with suggested explanations ranging from the Sun having a shorter lifespan than expected to its nuclear reactions temporarily shutting down. Under the leadership of Masatoshi Koshiba, the Kamiokande detector, buried deep in the Japanese Alps, was pivotal in providing experimental evidence to explain this discrepancy. By measuring the tiny flashes of light given off when neutrinos collide with atoms in water, Kamiokande not only showed that Davis' numbers were correct, but was also sensitive enough to show that these neutrinos were coming from the direction of the Sun.
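The round numbers quoted above give a feel for just how rare these captures were:

```python
# Rough arithmetic with the round numbers quoted in the text.

captured_atoms = 2000   # argon atoms extracted over the whole experiment
run_years = 25          # duration of the Homestake run

per_day = captured_atoms / (run_years * 365.25)
print(f"Average capture rate: about {per_day:.2f} argon atoms per day")
print(f"That is roughly one atom every {1 / per_day:.0f} days")
```

Extracting a handful of individual argon atoms per week from 400,000 litres of fluid is part of what made Davis' experiment so audacious.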
It would be years before these numbers were fully understood. It turns out that neutrinos have a tiny mass (they were originally thought to have none) and exist in more than one guise, not all of which the detectors could recognise. By solving this so-called solar neutrino problem, Davis' and Koshiba's experiments allowed us to peer into the nuclear reactions being carried out in the heart of a star, and in recognition of this they both received the Nobel Prize in Physics in 2002.
Hans Bethe's theories of how stars shine had shown that by converting one chemical element, hydrogen, into another, helium, stars act as cosmic alchemists. This finding had a profound influence on an American physicist called William Fowler. He decided to switch his research focus to the nuclear reactions that take place in stars, which would result in Fowler receiving the Nobel Prize in Physics in 1983. Teaming up with Fred Hoyle and the husband-and-wife team of Geoffrey and Margaret Burbidge, Fowler published a seminal paper in 1957, in which they showed that all of the elements from carbon to uranium could be produced by nuclear reactions in stars, starting from the hydrogen and helium produced in the Big Bang.
The paper, commonly known among physicists as B2FH after the first letters of the authors' surnames, showed in great detail how stars can create heavier and heavier elements from the primordial hydrogen and helium as they age. As a star begins to exhaust its hydrogen supply, the pressure in its core falls and the core begins to shrink. Gravitational energy is released, which heats the core to such an extent that new fusion reactions can take place. In stars the size of our Sun, helium is fused to make carbon, releasing more energy. Stars that are more massive than the Sun, which burn more furiously and under greater pressure, can fuse elements heavier than helium and carbon, creating all the elements up to iron as the core temperature increases.
Here the energy-releasing process of fusing lighter nuclei into heavier ones stops. Iron is so stable that creating the remaining elements in the Periodic Table, up to uranium, requires the input of enormous amounts of energy, which can only come from the death cries of a massive star. But before looking at the fate of these massive stars, a pertinent question to ask is what will happen to stars like our Sun when they run out of fuel?
According to Hans Bethe's calculations the nuclear reactions that fused hydrogen to form helium would power the Sun for around 10 billion years. When, after this time, stars with a similar mass to our Sun begin to run out of their resources of hydrogen, their cores start to collapse under the pull of their internal gravity. As the cores collapse, helium atoms are forced together to create carbon, triggering an increase in pressure and temperature that pushes the outermost layers of the star out. The star expands into space, and as it does so its surface temperature decreases, changing its colour to a deep red, hence its name 'red giant'.
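Bethe's 10-billion-year figure can be recovered with a rough estimate. The assumptions below, roughly a tenth of the Sun's hydrogen ever gets hot enough to fuse and fusion converts about 0.7% of that mass to energy, are my own textbook values, not figures from the text:

```python
# Back-of-envelope estimate of the Sun's hydrogen-burning lifetime.

M_SUN = 1.989e30    # solar mass, kg
L_SUN = 3.828e26    # solar luminosity, watts
C = 2.998e8         # speed of light, m/s

BURNABLE_FRACTION = 0.1    # fraction of the Sun's mass that will ever fuse
MASS_TO_ENERGY = 0.007     # fraction of fused mass converted to energy

energy_available = BURNABLE_FRACTION * M_SUN * MASS_TO_ENERGY * C**2  # joules
lifetime_years = energy_available / L_SUN / 3.156e7  # 3.156e7 seconds per year

# Comes out at roughly 10 billion years, matching Bethe's figure.
print(f"Estimated lifetime: {lifetime_years:.1e} years")
```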
Fusing helium to create carbon (and, in more extreme conditions, carbon to create oxygen) provides only a relatively temporary source of nuclear fuel. Once this energy supply runs out, the star sheds its outer layers of gas and plasma into space, where they form a planetary nebula (Figure 8). Left behind is a core known as a white dwarf, which cools and dims like a campfire slowly extinguishing in the morning, eventually fading into an unnoticeable black dwarf in the skies. A typical white dwarf is extraordinarily dense, containing the mass of the Sun compressed into the size of the Earth.
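Just how dense that is can be estimated with textbook values for the Sun's mass and the Earth's radius (the numbers are my own additions, not from the text):

```python
# Illustrative density estimate: the Sun's mass packed into an Earth-sized sphere.
import math

M_SUN = 1.989e30    # kg
R_EARTH = 6.371e6   # m

volume = (4 / 3) * math.pi * R_EARTH**3    # volume of an Earth-sized sphere
density = M_SUN / volume                   # kg per cubic metre

# About 1.8e9 kg/m^3: nearly two million times the density of water.
print(f"White dwarf density: {density:.1e} kg/m^3")
```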
Credit: X-ray: NASA/CXC/SAO; Optical: NASA/STScI
Our Sun is about 5 billion years old, nearly halfway through its life, so the first steps in its death march will begin in another 5 billion years' time. Larger stars may have more fuel, but this comes at a cost: they have to burn it much more quickly to support themselves against the greater pull of their own gravity, making their lifespans drastically shorter. Compared with the billions of years that the Sun will shine, stars over ten times the Sun's mass run out of their hydrogen fuel after just a few million years. It was once assumed that these larger stars also become white dwarfs, but it would become clear that they face a much more exotic and spectacular death.
On a long sea voyage from India to England in 1930, a young physicist called Subrahmanyan Chandrasekhar passed the time by thinking about what happens to more massive stars once they stop shining. A white dwarf couldn't be the fate of stars if they are above a certain critical mass, reasoned Chandrasekhar. Heavier burnt-out stars must collapse under the force of their own gravity, which implies that they face a much more spectacular end to their lives.
Chandrasekhar calculated the critical mass at this stage to be 1.4 times the mass of the Sun, which consequently became known as the Chandrasekhar limit. It would take a generation of scientists to understand the ultimate fate of these larger stars. In the meantime Chandrasekhar would be hugely influential in revealing other aspects of the physical properties of stars, including the motion of stars, the composition of their atmospheres and the mathematical theory of black holes, though it was specifically for the calculation of his eponymous limit that he was awarded the Nobel Prize in Physics in 1983.
For a star to leave behind remains that are more than 1.4 times the mass of the Sun when it runs out of nuclear fuel, it must begin its life with at least eight times the mass of the Sun. Running out of the nuclear fuel supply stimulates the fusion of carbon nuclei to form the chemical elements up to iron, as described by William Fowler and colleagues. Here, the fusion process stops as iron requires more energy to fuse it than is released from the process. Without any nuclear fusion reactions to create the temperatures and pressures needed to support the star, gravity takes over and the star collapses in a matter of seconds.
Fowler and colleagues calculated that the energy generated within the collapsing star is so great that it provides the conditions needed to create all the elements heavier than iron. As the outer layers of such a star collapse and fall inwards, they are met by a blast wave rebounding from the collapsed core. The meeting of these two intense pulses of energy creates a shock wave so extreme that iron nuclei absorb successive neutrons, building up all the heavier elements from iron to uranium.
The blast wave continues to spread outwards, and in its final and perhaps finest flourish it creates a supernova explosion that blows the star apart (Figure 9). Most of the elements that have been forged within the heart of the star are blasted across space, where they exist in clouds that can enter another round of congregation and collapse to form the next generation of stars and planets.
After billions of years and several rounds of recycling, some of these elements eventually collected and shaped everything that inhabits our planet. Everything around and within us, from the oxygen that we breathe to the calcium that makes up our bones, was created almost entirely within stars. A star's parting gift in death is to provide the building blocks for life as we know it. In other words, we are all made up of star dust.
Supernova explosions are so intense that a single one can appear as bright as an entire galaxy of stars. In the year 1054, a supernova 5,000 light years away, thought to have formed the Crab Nebula, was visible to the naked eye in daylight on Earth for 23 days. People could even read by its light at night.
On 23 February 1987, the same Kamiokande detector that had caught neutrinos from the Sun captured twelve of the trillions upon trillions of neutrinos emitted by a supernova named Supernova 1987A, about three hours before telescopes saw the first visible light from the explosion. This showed that neutrinos shoot out of the core of a star after it collapses, before the shock wave sends out visible light.
As neutrinos emerge from a supernova explosion before light, these neutrino signals can provide information about the very early stages of a supernova, typically hours before the explosion's light reaches the Earth. This has created a new way in which to look at the stars, called neutrino astronomy. Using early warning signals like neutrinos and X-rays, scientists are now beginning to witness supernova explosions as they happen, giving them the first glimpses of what happens inside a massive star during the final hours of its life.
One thing we already know is that a supernova explosion leaves at its heart one of two remnants, a neutron star or a black hole. Which of the two fates is destined for a star depends, as before, on its mass.
If the remains are up to five times the mass of the Sun, the core will have collapsed to such a degree that the protons and electrons in atoms are crushed together to form a ball of neutrons. Spinning rapidly, these neutron stars are perhaps the densest objects in the Universe, typically containing their large mass in an area no bigger than a large city. A neutron star is so dense that a teaspoon of material would weigh as much as a mountain on Earth.
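The teaspoon claim can be sanity-checked with typical assumed values of 1.4 solar masses squeezed into a 10 km radius (both numbers are my own assumptions, not from the text):

```python
# Sanity check: the mass of a teaspoon of neutron-star material.
import math

M_STAR = 1.4 * 1.989e30   # a typical neutron star mass, kg
R_STAR = 1.0e4            # a typical neutron star radius, m (10 km)
TEASPOON = 5e-6           # 5 millilitres, in cubic metres

density = M_STAR / ((4 / 3) * math.pi * R_STAR**3)   # kg per cubic metre
teaspoon_mass = density * TEASPOON

# Around 3e12 kg, i.e. billions of tonnes: comfortably mountain-sized.
print(f"Density: {density:.1e} kg/m^3")
print(f"Mass of a teaspoonful: {teaspoon_mass:.1e} kg")
```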
The existence of neutron stars was first proposed by Fritz Zwicky in the 1930s, just after James Chadwick discovered the neutron (for which Chadwick received the Nobel Prize in Physics in 1935). It would take over 30 years to prove their existence using telescopes that pick up the radio signals transmitted from stars, another breakthrough rewarded with the Nobel Prize in Physics in 1974 to Sir Martin Ryle.
In 1967 a graduate student at Cambridge University, Jocelyn Bell, spotted a star that sent out an unusual radio signal. Her supervisor, Antony Hewish, suggested that this so-called pulsar was the first sighting of a neutron star, a suggestion that later proved true. As such a star spins, it shoots out a powerful lighthouse-like beam of radio waves from its poles. If the Earth lies in the path of this beam, the star appears to pulse each time the beam sweeps around, typically several times a second (Figure 10). For his role in discovering the first pulsar, Hewish received the Physics Prize with Ryle in 1974.
Not all neutron stars are pulsars, but those that are announce themselves with characteristic radio signals, making them more likely to be spotted. In 1974, Russell Hulse and Joseph Taylor discovered a pulsar in a close orbit around another neutron star, for which they received the Nobel Prize in Physics in 1993. This so-called 'binary pulsar' was not just a curious astronomical observation; detailed examination of the way in which the two neutron stars orbit each other provided the proof for one of Albert Einstein's most important predictions.
Einstein's general theory of relativity of 1916 weaves the three dimensions of space together with time into a single four-dimensional fabric. This space-time, as it is known, can ripple: massive bodies accelerating through the cosmos send out waves of gravitational radiation. Proving that Einstein's predicted ripples in the fabric of space-time exist was difficult until Hulse and Taylor's binary pulsar provided a pair of objects massive enough, and orbiting each other quickly enough, to lose measurable amounts of energy to gravitational waves; the gradual tightening of the pair's orbit matches Einstein's prediction precisely.
Einstein's theory also correctly predicted the existence of one of the most extreme phenomena in the Universe, which is the fate of massive stars that die in supernovae explosions.
When the corpse left at the centre of a supernova explosion weighs more than five times the mass of the Sun, the pull of gravity is so strong that the remnant collapses completely in on itself to form a singularity of zero volume and infinite density called a black hole (Figure 11). The gravitational pull in a black hole is so strong that nothing can escape from it, not even light.
The idea that an object could exist in space with gravity strong enough to prevent light from escaping was first proposed as far back as 1783 by Reverend John Michell, an amateur British astronomer. It wasn't until the early 1970s that scientists began to uncover evidence that this could be true.
Riccardo Giacconi and colleagues built the first instruments to show that X-rays are emitted during some of the most violent and extreme events in the Universe, such as exploding stars. Under Giacconi's leadership, the first orbiting X-ray observatory, Uhuru, launched in 1970, discovered a fluctuating X-ray signal from a source called Cygnus X-1 that corresponded to an object with ten times the mass of the Sun occupying an area the size of an asteroid. A black hole best fitted the bill, but it took several years of observational data to strengthen the case before it was generally accepted that Cygnus X-1 is most likely a black hole.
Subsequent telescopes launched into space under Giacconi's guidance, named Einstein and Chandra, have expanded our X-ray vision of black holes (and for his leading role in all these X-ray astronomy projects Giacconi received the Nobel Prize in Physics in 2002). Light can't escape from a black hole, but X-ray telescopes can observe the light given out from material when it gets close to a black hole. As matter is drawn towards the boundary of a black hole, known as the event horizon, the friction from colliding particles heats them to extreme temperatures, which produces X-rays.
Through these and other X-ray telescopes, scientists are building a better picture of the mysterious world of black holes. These observations show that black holes fall far short of their popular portrayal as ultimate galactic destruction machines swallowing up whole galaxies in their paths. While nothing can escape from a black hole once it has crossed the event horizon, objects can orbit a black hole outside the event horizon in the same way that they can orbit a star. In fact, one of the most surprising observations is that gigantic black holes appear to lurk at the centre of almost all galaxies, including our own.
These so-called 'supermassive black holes' (as opposed to the 'stellar-sized black holes' described earlier) have the mass of millions or even billions of stars like the Sun packed into an area around the size of our Solar System. The idea of such a large black hole being in the centre of our galaxy might be a frightening thought, but its existence could be the very reason the Milky Way is here in the first place.
Supermassive black holes are believed to be a key force in shaping and regulating the process by which galaxies such as the Milky Way are formed. As matter from nearby stars and gas clouds slowly spirals in towards the event horizon of a supermassive black hole, the friction generates such extreme heat that huge jets of high-energy particles are shot into space. These jets, which can reach a few hundred thousand light years in length, are thought to seed the space between galaxies with gases rich in chemical elements, but they can also shut off the processes by which a galaxy forms, preventing it from growing too big, too fast.
In turn, giant ripples or perturbations caused by events such as collisions between galaxies create the spiral waves that allow galaxies like ours to be shaped like the 'wheels of lighted smoke' that Martinson described. So it seems that black holes are an appropriate place in which to conclude the star stories. By acting both as destroyers and creators, they illustrate how even when they are at their most destructive, stars continue to provide the foundations for the growth of the Universe.
Bahcall, J. N. How the sun shines. Nobelprize.org, published online 29 June 2000
Gribbin, J. The Universe: A Biography (Penguin, London, 2008).
Martinson, H. Aniara (Translated by Stephen Klass and Leif Sjöberg) (Vekerum, Kristianstad, 1991).
Miller, A. I. Empire of the Stars (Abacus, London, 2006).
Prinja, R. Stars: A Journey through Stellar Birth, Life and Death (New Holland, London, 2008).
Sagan, C. Cosmos (Abacus, London, 1995).
Singh, S. Big Bang (HarperCollins, New York, 2004).
Weinberg, S. The First Three Minutes (Fontana/Collins, London, 1978).