In the early 20th century, astronomer Vesto Slipher made the first radial velocity measurements of what were then called spiral nebulae. Nearly all of Slipher’s spiral nebulae — what we now know to be galaxies beyond our own Milky Way — were receding. This simple observation laid the foundation of what is today a complex undertaking: the measurement of the expansion rate of our universe.
Expanding Our Understanding
How can we even measure the universe’s expansion? The rate of expansion can be expressed as a value called the Hubble constant, which bears the unusual units of kilometers per second per megaparsec (km/s/Mpc). (The Hubble constant gets its name from Edwin Hubble, who made one of the first measurements of its value in the local universe; Georges Lemaître beat him to the punch by two years.)
Researchers have different approaches for measuring this constant in the nearby universe and in the more distant, early universe. The expansion rate in the nearby universe can be estimated by combining two measurements: 1) the distance to other local galaxies, and 2) how quickly those galaxies are moving away from us. The most precise measurement of the expansion of the early universe, on the other hand, comes from the Planck spacecraft: researchers extracted a value of the Hubble constant from measurements of the oldest light in the universe, known as the cosmic microwave background.
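The local-universe method boils down to Hubble's law, v = H0 × d: a galaxy's recession velocity is proportional to its distance, with the Hubble constant as the slope. As a toy sketch in Python (the distances and velocities below are invented for illustration, not real survey data):

```python
# Toy illustration of Hubble's law: v = H0 * d.
# Distances (Mpc) and recession velocities (km/s) are invented
# for illustration, not real measurements.
distances_mpc = [20.0, 50.0, 110.0, 180.0]
velocities_km_s = [1400.0, 3500.0, 7700.0, 12600.0]

# Least-squares slope through the origin gives H0 in km/s/Mpc.
numerator = sum(d * v for d, v in zip(distances_mpc, velocities_km_s))
denominator = sum(d * d for d in distances_mpc)
H0 = numerator / denominator
print(f"H0 ≈ {H0:.1f} km/s/Mpc")
```

With these invented numbers the fit returns exactly 70 km/s/Mpc; real measurements must also contend with each galaxy's individual "peculiar" motion on top of the cosmic expansion.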
A Cosmic Conundrum
Here’s the catch: the expansion rate measured from the cosmic microwave background can’t be compared directly to the rate measured in the local universe. That’s because the expansion of the universe is accelerating — the expansion rate today is considerably larger than when the universe was young. To compare the early-universe rate to the present-day rate, researchers use the leading cosmological model, ΛCDM, to extrapolate the early-universe value to the present day.
Researchers have produced more than a thousand estimates of the Hubble constant over the past 60 years, and today the local-universe and early-universe expansion rates have both been measured precisely — but the rates do not agree. The extrapolated early-universe expansion rate is around 67 km/s/Mpc, while the present-day value is pinned at around 74 km/s/Mpc. This mismatch is called the Hubble tension, and the solution is unknown. Are our measurement methods at fault? Or are we due for an overhaul of our leading theory of cosmology? In today’s post, we’ll take a look at five recent research articles that tackle the Hubble tension from different angles — proposing ways to alleviate it or staunchly reinforcing its existence.
What If Our Measurements Weren’t Good Enough?
The most precise measurement to date of the local-universe expansion rate hinges upon Hubble Space Telescope observations of Cepheid variable stars. Astronomer Henrietta Swan Leavitt identified the importance of Cepheids in 1912, when she showed that a Cepheid’s pulsation period is directly tied to its intrinsic luminosity. By comparing the apparent brightness of a Cepheid to its intrinsic luminosity, astronomers can measure the distance to Cepheids — and the galaxies they call home — out to about 100 million light-years.
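Here's a rough sketch of how the Leavitt law turns a pulsation period into a distance. The calibration coefficients below are approximate (loosely following published V-band calibrations), and the example Cepheid is hypothetical:

```python
import math

# Illustrative Leavitt-law calibration (coefficients approximate,
# V band): M_V ≈ -2.43 * (log10(P / days) - 1) - 4.05
def absolute_magnitude(period_days):
    return -2.43 * (math.log10(period_days) - 1.0) - 4.05

def distance_parsecs(apparent_mag, period_days):
    # Distance modulus: m - M = 5 * log10(d / 10 pc)
    mu = apparent_mag - absolute_magnitude(period_days)
    return 10.0 ** ((mu + 5.0) / 5.0)

# Hypothetical Cepheid: 10-day period, apparent magnitude 25.0
d_pc = distance_parsecs(25.0, 10.0)
print(f"distance ≈ {d_pc / 1e6:.1f} Mpc")
```

The period pins down the star's intrinsic luminosity; comparing that to how faint the star appears gives the distance via the inverse-square law.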
Picking out individual Cepheids in distant galaxies, even within the local universe, is a challenge — the light from multiple stars in crowded stellar neighborhoods can overlap, stymieing measurements of single stars. This could mean that Hubble’s measurements of Cepheid variables are less reliable for more distant stars, skewing estimates of the Hubble constant. Because JWST has better resolution than Hubble, especially at the near-infrared wavelengths necessary for studying stars in far-off, dusty environments, it can provide a valuable test of the conclusions drawn from Hubble data.
A team led by Adam Riess (Space Telescope Science Institute and Johns Hopkins University) used JWST to measure the pulsation period and brightness of more than a thousand Cepheid variable stars, yielding precise measurements of the distances to these stars. This analysis showed that while the Hubble measurements are noisier than the JWST measurements, they’re no less accurate. The team ruled out, at a significance of 8.2σ, the possibility that inaccurate measurements of Cepheids in distant, crowded galaxies are responsible for the Hubble tension.
What If Our Assumptions Are Incorrect?
In addition to using Cepheid variable stars, many estimates of the Hubble constant rely on measurements of Type Ia supernovae, which occur when the remnant core of a low- to intermediate-mass star — a white dwarf — attains a mass of roughly 1.4 solar masses and explodes. The peak brightness of a Type Ia supernova greatly outshines a Cepheid variable, extending the cosmic distance scale out to more than a billion light-years.
Because Type Ia supernovae are theorized to occur at the same limiting mass, these explosions were thought to all have the same maximum luminosity, allowing us to use them for reliable distance measurements. However, there is now evidence that not all Type Ia supernovae have the same brightness. Some are brighter than average, which may result from the collision of two white dwarfs. Some are fainter than average, which may happen when the crust of a white dwarf with a mass less than 1.4 solar masses ignites, triggering the explosion of the entire star. Other factors, like the local abundance of elements heavier than helium, might moderate a supernova’s maximum luminosity as well.
This means that instead of being limited to a single value, the maximum luminosities of Type Ia supernovae follow a distribution that is not yet well known.
Yun Chen (Chinese Academy of Sciences) and collaborators examined the impact of different Type Ia supernova luminosity distributions on estimates of the Hubble constant. The team tested five luminosity distributions, three of which are Gaussians and two of which are “top hats”: equal likelihood within a certain range of luminosities and zero chance outside that range. The result? Wide-ranging values of the Hubble constant, some of which agree with the early-universe value and others of which agree with the present-day value. Given the huge impact of the underlying luminosity distribution, Chen’s team calls for a better understanding of the intrinsic properties of Type Ia supernovae.
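This sensitivity can be illustrated with a back-of-the-envelope scaling (the numbers here are mine, not from the paper): since the distance inferred from a standard candle scales as d ∝ 10^(−M/5), the inferred Hubble constant scales as H0 ∝ 10^(M/5), so even a small shift in the assumed peak absolute magnitude M moves H0 by several percent:

```python
# How the inferred Hubble constant scales with the assumed Type Ia
# peak absolute magnitude M. With fixed apparent magnitudes,
# d ∝ 10^(-M/5), so H0 = v/d ∝ 10^(M/5).
# The magnitude shifts below are illustrative, not from the paper.

def h0_shift(delta_M):
    """Fractional change in inferred H0 for a shift delta_M (mag)
    in the assumed peak absolute magnitude."""
    return 10.0 ** (delta_M / 5.0) - 1.0

for dM in (0.05, 0.1, 0.2):
    print(f"ΔM = {dM:+.2f} mag -> H0 shifts by {h0_shift(dM) * 100:.1f}%")
```

A shift of just 0.1 magnitude in the assumed peak brightness moves the inferred Hubble constant by nearly 5%, comparable to the entire gap between the early-universe and present-day values.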
What If We Use Another Measurement Technique?
The quest to measure the Hubble constant benefits from trying a variety of measurement techniques, and George Jacoby (NSF’s NOIRLab) and collaborators have demonstrated a unique way to measure the Hubble constant: using observations of planetary nebulae. Planetary nebulae form when stars similar in mass to the Sun — up to about 8 solar masses — lose their atmospheres at the end of their lives. High-energy photons from the exposed stellar core ionize the expelled atmospheric gas, creating a beautiful but short-lived nebula.
Individual planetary nebulae can have a wide range of luminosities; we can’t use single planetary nebulae to measure the distance to another galaxy as we can with Cepheid variable stars or Type Ia supernovae. However, when looking at all of the planetary nebulae in a galaxy, a remarkably consistent pattern emerges: for the brightest planetary nebulae, the number of nebulae as a function of luminosity has the same shape regardless of the properties of the galaxy in question. Using this method, researchers can measure the distances to galaxies out to about 130 million light-years.
Jacoby’s team used archival data from the Very Large Telescope to measure the planetary nebula luminosity function for 16 galaxies. Their resulting measurement of the Hubble constant — 74.2 km/s/Mpc — is consistent with other local-universe measurements but with larger uncertainties. To measure the Hubble constant from planetary nebulae with enough precision to compare against the Type Ia supernova method, the team recommends targeted observations of a larger sample of galaxies containing at least 50 bright planetary nebulae.
What If We Could Measure the Hubble Constant from Gravitational Waves?
All of the Hubble-constant measurement techniques discussed so far have relied on electromagnetic radiation from one source or another. When the Laser Interferometer Gravitational-wave Observatory (LIGO) and Virgo detectors made the first direct measurement of gravitational waves in 2015, it opened a whole new window into the workings of the universe — and an entirely new way to measure its expansion rate.
Tonghua Liu (Yangtze University) and coauthors demonstrated how gravitational wave observations could enable measurements of the expansion rate out to a redshift of z = 5, or a little more than a billion years after the Big Bang. That time period is currently inaccessible to other measurement methods. Here’s how that would work: first, imagine a binary pair of neutron stars. As these objects circle one another, they radiate away energy as gravitational waves and sink closer together, hastening their inevitable collision. As the universe expands, the expansion imprints a phase shift in the gravitational waves from the neutron star pair. This effect is tiny, amounting to just a one-second phase shift over 10 years of monitoring a neutron star binary at a redshift of z = 1. Tiny — but not impossible for future gravitational wave observatories to measure!
The phase shift and redshift of a single neutron star binary provide a direct measurement of the expansion rate of the universe at the binary system’s redshift. By measuring phase shifts for a large sample of neutron star systems across a wide range of redshifts, astronomers could measure how the expansion rate of the universe changes across cosmic time — without invoking any assumptions from a particular cosmological model.
Liu’s team simulated phase-shift measurements from a proposed space-based observatory, the Deci-hertz Interferometer Gravitational-wave Observatory (DECIGO), which is designed to detect the low-frequency gravitational waves produced by neutron star binaries years before they merge. Researchers expect that DECIGO will detect a million neutron star binaries, about 10,000 of which would be accompanied by a detectable electromagnetic signal that allows for an estimate of the binary’s redshift. Liu’s team showed that the expansion rate could be measured to sub-percent precision, providing a valuable comparison to existing estimates. In addition to measuring the expansion rate, this method also yields a measurement of the universe’s curvature.
What If Dark Energy Is Responsible?
The universe is a mysterious place: according to the leading theory of cosmology, the matter that we see and interact with every day makes up just a tiny fraction — about 5% — of the contents of the universe. Dark matter, a hypothetical form of matter that interacts with everyday matter only through gravity, makes up another 27%. The majority of the matter–energy density of our universe comes from the most mysterious quantity of all: dark energy. Dark energy is thought to provide the outward pressure responsible for the accelerating expansion of the universe, but the exact cause of this pressure remains unknown.
The persistence of the Hubble tension has led some researchers to propose alternative theories of cosmology, many of which adjust the properties and identities of dark matter and dark energy. Recently, Mehdi Rezaei (Hamedan Research Center for Applied Meteorology) investigated the impact of oscillating dark energy on the Hubble tension. If dark energy were to oscillate between accelerating and decelerating the expansion of the universe at different points in cosmic time, it could explain the mismatch between the extrapolated early-universe expansion rate and the present-day rate. Testing eight descriptions of oscillating dark energy drawn from previous research, Rezaei found that these prescriptions reduced the Hubble tension from its current 5σ severity to 2.14–2.56σ.
In addition to making progress on the Hubble tension, oscillating dark energy could solve another problem of cosmological importance called the coincidence problem. Essentially, the coincidence problem boils down to the fact that in the present-day universe the energy densities of dark matter and dark energy are of the same order of magnitude — but in the distant past and distant future these quantities were/will be way out of balance. In the oscillating dark energy scenario, the energy densities of dark matter and dark energy ebb and flow, and they would have been equal at several points in the universe’s history. While oscillating dark energy alleviates the Hubble tension and the coincidence problem, it doesn’t fit certain observations as well as ΛCDM does — and the search for a definitive solution to the Hubble tension goes on!
Citation
“JWST Observations Reject Unrecognized Crowding of Cepheid Photometry as an Explanation for the Hubble Tension at 8σ Confidence,” Adam G. Riess et al 2024 ApJL 962 L17. doi:10.3847/2041-8213/ad1ddd
“Effects of Type Ia Supernovae Absolute Magnitude Priors on the Hubble Constant Value,” Yun Chen et al 2024 ApJL 964 L4. doi:10.3847/2041-8213/ad2e97
“Toward Precision Cosmology with Improved Planetary Nebula Luminosity Function Distances Using VLT-MUSE. II. A Test Sample from Archival Data,” George H. Jacoby et al 2024 ApJS 271 40. doi:10.3847/1538-4365/ad2166
“Model-Independent Way to Determine the Hubble Constant and the Curvature from the Phase Shift of Gravitational Waves with DECIGO,” Tonghua Liu et al 2024 ApJL 965 L11. doi:10.3847/2041-8213/ad3553
“Oscillating Dark Energy in Light of the Latest Observations and Its Impact on the Hubble Tension,” Mehdi Rezaei 2024 ApJ 967 2. doi:10.3847/1538-4357/ad3963