
illustration of a giant impact

Editor’s Note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Large Impacts onto the Early Earth: Planetary Sterilization and Iron Delivery
Authors: Robert I. Citron and Sarah T. Stewart
First Author’s Institution: University of California, Davis
Status: Accepted to PSJ

The early Earth wasn’t your typical summer break vacation destination. During the Hadean Eon (4.5–4.0 billion years ago), Earth was an extremely hostile environment and was frequently bombarded by asteroids. Yet, somehow, life on Earth could have emerged during this time.

This story begins with a noteworthy event during the Hadean Eon: a now long-gone Mars-sized planet called Theia slammed into Earth. The massive impact blew off a large amount of debris that started to circle the new Earth–Theia merger and eventually formed the Moon. This proposed sequence of events is known as the Giant Impact Hypothesis (see Figure 1). Theia’s impact melted Earth’s crust to a depth of several kilometers, creating an environment not very favorable for any form of life that we know of and very likely sterilizing anything already present on Earth. After the turmoil of the impact settled down, life on Earth could, in principle, have formed shortly after, a mere 4.5 billion years ago. Reality, as it turns out, may have run less smoothly.

illustration of the giant impact hypothesis

Figure 1: An illustration of the Giant Impact Hypothesis. The planet Theia crashed into Earth, and the debris from the collision led to the formation of the Moon. [Aparna Nathan, SITNBoston]

The origin of life is a deeply studied subject, and several hypotheses have been proposed to explain it. A popular hypothesis states that it all began with the formation of amino acids and RNA molecules. Because our present-day atmosphere and oceans are oxidizing (noticed iron rusting? Maybe some fire?), spontaneously forming these molecules is difficult. But what about all those lifeforms we see today? They needed to come from somewhere. Well, to actually form these amino acids and RNA molecules on a global scale, we would need a reducing atmosphere or ocean. And how better to create one than by slamming a huge rock full of reducing material (like iron) into Earth?

Reducing an Atmosphere 101

After the Theia impact, but still during the Hadean Eon, Earth endured a large number of impacting asteroids of varying sizes, which were slingshotted into the inner solar system by Jupiter and Saturn. The authors of today’s article wanted to know which objects, under which conditions, can actually create a (temporarily) reducing atmosphere or ocean on Earth, ultimately opening the way to forming the building blocks of life. Instead of actually slamming various rocks into Earth, the authors ran smoothed particle hydrodynamics (SPH) simulations of large objects (the projectiles) colliding with an Earth-like planet (ominously called the target). To account for several possible scenarios, illustrated in Figure 2, the authors varied the impacting object’s mass, velocity, and the angle at which it strikes Earth.

diagrams of an object impacting earth

Figure 2: Effect of different impact angles of a large object colliding with Earth, with the distinction between the atmosphere (blue), the mantle (orange), and the core (dark gray). Different angles lead to different degrees of surface melting (red). [Citron & Stewart 2022]

Now, when such an object impacts Earth, a lot happens in a short span of time. To know what goes where, the authors kept track of the mantle (made of forsterite) and core (consisting of an iron–silicon alloy) of both the simulated impacting object and Earth. Part of the iron-rich core material — the stuff we need to start large-scale reduction of Earth’s atmosphere or oceans — gets scattered in the atmosphere by the impact. How much of this iron enters Earth’s atmosphere depends strongly on how the impact occurred (which is controlled by object mass, impact velocity, and impact angle). A 24-hour time lapse of the impact in one of the simulations is shown in Figure 3.

simulation snapshots

Figure 3: Simulation of a smaller object colliding with Earth. Here, the object mass is 25% of the Moon’s mass, the impact velocity is 1.5 times Earth’s escape velocity, and the impact angle is 45°. The mantle and core materials of Earth and the impacting object are color-coded to see where they eventually land, if at all. This simulation shows that the object is shattered on Earth’s surface; the colliding object’s mantle material — forsterite — is mainly scattered around Earth or resides on its surface, while the heavier core material from the object — iron — mainly sinks in large chunks to Earth’s interior. Part of the iron from the object, however, remains scattered in the atmosphere, where it will act as a reducing agent. The spatial dimensions are expressed in Earth radii. [Citron & Stewart 2022]

By analyzing their simulations, the authors found that only the largest of the asteroids during the Hadean Eon — the Moon-sized ones — could have delivered enough iron to fully reduce Earth’s oceans and atmosphere. But there’s an additional problem: the impact of an object that huge would easily vaporize ocean-sized bodies of water (the authors find that even an object with a radius of 0.2 times the Moon’s radius could vaporize Earth’s early ocean) and would even melt most of Earth’s surface, recreating conditions much like those just after the Theia impact, which were not very life-supporting. Considering all this, it looks like these Hadean asteroids did not really help early life on its way.

However, this study has shown that it takes a larger object than previously estimated to fully sterilize the early Earth’s exterior by melting its whole surface; such an object would need to have more than 25% of the Moon’s mass. As objects this size were rare even during the Hadean Eon, the chances of complete sterilization by space rocks are lower than previously expected. Even the ocean-evaporating asteroids would not necessarily sterilize Earth if early life existed beneath the planet’s surface. Moreover, remember the Moon-sized asteroids needed to fully reduce the atmosphere and oceans? Turns out we don’t need that kind of overkill (pun intended). Multiple smaller objects slamming into Earth could reduce the atmosphere or ocean enough to create favorable conditions for spontaneous RNA formation.

In any case, if life emerged from a post-impact world, it was thanks to the right asteroids arriving at the right time. Too small, and the kick-starter for life wouldn’t occur. Too large, and any progress made so far would be wiped out. Considering that you are reading this post, it seems our very, very distant forebears weren’t out of luck!

Original astrobite edited by Sarah Bodansky.

About the author, Roel Lefever:

Roel is a first-year astrophysics PhD student at Heidelberg University. He works on massive stars and simulates their atmospheres and outflows. In his spare time, he likes to hike and bike in nature, play (a whole lot of) video games, play and listen to music (movie soundtracks!), and read (currently The Wheel of Time, but any fantasy really).

illustration of exoplanetary systems

Editor’s Note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Continuous Habitable Zones: Using Bayesian Methods to Prioritize Characterization of Potentially Habitable Worlds
Authors: Austin Ware et al.
First Author’s Institution: Arizona State University
Status: Published in ApJ

With more than 5,000 exoplanets discovered (roughly 30 of which are considered potentially habitable), how can astronomers prioritize which to study in the search for extraterrestrial life? Today’s article explores the “continuous habitable zone”: the range of planetary orbits on which water could remain liquid long enough for detectable life to develop.

Habitable Worlds

With JWST beginning observations and the Habitable Exoplanet Observatory (HabEx) and Large UV/Optical/IR Surveyor (LUVOIR) space telescopes on the horizon, transit spectroscopy from space is set to dramatically increase our ability to characterize exoplanet atmospheres. Astronomers are therefore working to prioritize which potentially habitable exoplanets are the best targets in the search for life. The habitable zone defines the region surrounding a star in which a planet could host liquid water, which is typically assumed to be a requirement for life. Stars evolve, though, and a planet that is habitable at one point in time may not have been habitable in the past or remain habitable in the future.

Life takes time to develop and become detectable. Let’s consider Earth’s history as a benchmark. The Great Oxidation Event, during which biologically produced molecular oxygen accumulated in Earth’s atmosphere, occurred roughly 2 billion years after Earth’s formation. This is adopted as the length of time needed for life to make a detectable impact on a planet’s atmosphere. Today’s article presents a method to estimate the likelihood that a planet has resided in its star’s habitable zone for at least 2 billion years, defining the region where this could occur as the 2-billion-year continuous habitable zone (CHZ2).

So, how do we determine the habitable zone of a star? The article discusses two frameworks:

  1. Optimistic habitable zone: Regions that receive less radiation from their star than Venus did 1 billion years ago and more than Mars did 3.8 billion years ago. These “recent Venus” (RV) and “early Mars” (EM) limits are chosen because observations suggest liquid water existed on those planets until roughly 1 and 3.8 billion years ago, respectively.
  2. Conservative habitable zone: Limits derived from a greenhouse effect model. The inner edge is defined by the “runaway greenhouse,” where the stellar flux is high enough to vaporize an entire ocean. The outer edge is defined by the “maximum greenhouse,” where Rayleigh scattering dominates over the greenhouse effect of carbon dioxide.
diagram of the optimistic and conservative habitable zones for a range of stellar temperatures

Figure 1: The habitable zone for a range of stellar temperatures, showing Venus, Earth, Mars, and a selection of potentially habitable exoplanets. [Chester Harman]

Statistical Modeling

Using Bayesian statistics, the authors created an equation for the probability of a planet residing in the CHZ2 as a function of its host star’s mass, metallicity, and age. They assigned ages to the host stars based on evolutionary tracks from the Tycho stellar modeling code and found that their estimates aligned well with previous measurements based on stellar spins.
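To make the idea concrete, here is a toy Monte Carlo version of that calculation — a sketch of the logic, not the authors’ actual pipeline. It folds the star’s mass and age posteriors (the real analysis also uses metallicity) through a crude luminosity history and approximate optimistic habitable-zone flux limits; all of the numbers below are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_star(n):
    """Stand-in posterior for the host star: mass (solar masses) and age (Gyr)."""
    return rng.normal(1.0, 0.05, n), rng.normal(4.6, 0.5, n)

def luminosity(mass, t_gyr):
    """Very rough Sun-like brightening: ~0.7 L_sun at birth, ~1 L_sun at 4.6 Gyr."""
    return mass**4 * (0.7 + 0.3 * t_gyr / 4.6)

def in_optimistic_hz(lum, a_au):
    s_eff = lum / a_au**2                    # incident flux in present-day Earth units
    return (s_eff < 1.78) & (s_eff > 0.32)   # approximate recent-Venus / early-Mars limits

def chz2_probability(a_au, n=100_000):
    """Fraction of posterior draws in which the planet stays in the habitable zone
    for the whole of the star's last 2 billion years."""
    mass, age = sample_star(n)
    ok = age > 2.0                           # the star must be at least 2 Gyr old
    for f in np.linspace(0.0, 1.0, 50):      # step through the last 2 Gyr
        ok &= in_optimistic_hz(luminosity(mass, age - 2.0 + 2.0 * f), a_au)
    return ok.mean()

print(chz2_probability(a_au=1.0))   # an Earth-like orbit around a Sun-like star
```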

The authors used their framework to evaluate nine potentially habitable exoplanets as well as Venus, Earth, and Mars. All stars considered were relatively Sun-like (between 0.5 and 1.1 solar masses), with Earth-like and super-Earth terrestrial planets (radii < 1.8x Earth’s and mass < 10x Earth’s). Figure 2 shows the results for the solar system and the authors’ best exoplanet candidate.

plot of solar system planets and one exoplanet and their likelihood of being within the continuous habitable zone

Figure 2: CHZ2 probabilities for two stars. Line styles indicate the habitable zone model: three conservative model versions for different planet masses and the RV/EM optimistic model. Left: The Sun, with the orbits of Venus, Earth, and Mars indicated. Right: KIC-7340288, showing the best candidate examined in the article with ~90% CHZ2 probability under all models. [Adapted from Ware et al. 2022]

What This Means for the Future

The authors conclude with proposals for future work on this topic, including extending the analysis to lower-mass stars. They also estimated ages for nearly 3,000 stars in the Transiting Exoplanet Survey Satellite (TESS) continuous viewing zone (the best candidates for TESS to find habitable-zone planets around), with the aim of applying a similar framework to them in the future. The exoplanets determined here to have a high CHZ2 probability will be ideal for follow-up with JWST, and the method will also be valuable for target selection for future exoplanet characterization missions like HabEx and LUVOIR.

As shown in Figure 2 above, the method used in this article places Mars in the CHZ2, even though we know Mars is not currently habitable. This points to the need for additional model parameters in the Bayesian analysis to improve its accuracy, namely further stellar and planetary properties important to a planet’s evolution: stellar oxygen-to-iron ratios, planetary composition, and stellar activity.

Original astrobite edited by Jana Steuer.

About the author, Macy Huston:

I am a fourth-year graduate student at Penn State University studying astronomy and astrophysics. My current work focuses on technosignatures, also referred to as the Search for Extraterrestrial Intelligence (SETI). I am generally interested in exoplanet and exoplanet-adjacent research. In the past, I have performed research on planetary microlensing and low-mass star and brown dwarf formation.

simulation of galaxies during the epoch of reionization in the early universe

Editor’s Note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Reionization with Simba: How Much Does Astrophysics Matter in Modeling Cosmic Reionization?
Authors: Sultan Hassan et al.
First Author’s Institution: Flatiron Institute and University of the Western Cape, South Africa
Status: Accepted to ApJ

While a tired trope to be sure, the hero’s journey to conquer the darkness and bring in an age of light is a memorable one. Today, our hero isn’t a person but succeeds in that illuminating quest all the same!

The authors of today’s article consider one particular question: how do we model the re-emergence of light sources in the early universe during the time of cosmic reionization? Namely, does the way we model the sources of ionizing photons (high-redshift stars and galaxies) impact observables on the large scales relevant for cosmological observations?

That’s a bit of a mouthful, but we’ll chew through it slowly and methodically in this bite!

Out of the Darkness

Before we can talk about reionization, we have to understand what brought about the dark age of the universe in the first place. After the hot Big Bang, the universe was initially fully ionized (all atoms were stripped of their electrons) up until it cooled to the point where hydrogen atoms “recombined” (free electrons paired up with lone protons) at a redshift (z) of roughly 1,000 (Figure 1). After recombination, the universe was filled with neutral hydrogen and was in a sense “dark,” since the cooling universe had no sources of ionizing photons to liberate electrons from the neutral hydrogen (HI).

However, during this dark age the seeds of revolution (ahem, structure formation) were slowly growing until eventually the first stars and galaxies formed inside dark matter halos, providing new sources of ionizing photons. These light-bringers then proceeded to make Swiss cheese out of the dark universe, creating holes filled with ionized hydrogen (HII) as illustrated by the bubbles at the center of Figure 1. You can get an instant and visceral feel for this process by watching this wonderful movie of a simulation of reionization.

diagram of the evolution of the universe over time

Figure 1: A schematic view of reionization within the larger cosmic timeline. Blue represents (opaque) neutral hydrogen, while black represents fully ionized hydrogen. The transition between these two regimes proceeds around redshift z = 10 by way of reionized bubbles around source stars and galaxies. [NAOJ]

This story, like many heroic journeys, neglects an enormous amount of real-world complexity. To accurately model reionization at the level necessary for upcoming surveys, astrophysicists need to answer a host of questions: What is the detailed morphology of reionization? How does reionization depend on the characteristics of large-scale structure? On the processes of galaxy formation? What about the nature of the ionizing sources? Today’s article explores some of these questions using the Simba suite of galaxy-formation simulations. In particular, the authors delve into different ways to model the sources of ionization.

Getting Straight to the Source (Modeling)

The authors set out to understand whether or not the details of how stars and galaxies produce ionizing photons affect observables on large (“cosmological”) scales. To do this, they used the Simba simulations, which include a host of galaxy-formation physics as well as gas hydrodynamics, and accounted for radiative transfer of photons in post-processing. Specifically, the authors tested whether it was possible to notice a difference in the morphology of reionization or in the distribution of ionized hydrogen in the simulation with different choices of source modeling. The results of this comparison are shown in Figures 2 and 3, and we’ll walk through them one at a time.

Figure 2 shows the visual morphology of reionization by displaying the spatial distribution of the ionization fraction xHII (blue is ionized, red is neutral) in the simulation. Each row of Figure 2 corresponds to a different model of ionizing photons. The models give the ionizing sources different properties, different numbers of sources, or a larger degree of scatter, but each model produces a similar total number of ionizing photons. The columns correspond to increasing time (and therefore increasing mean global ionization fraction) from left to right.

plots of ionization maps for all models tested

Figure 2: Simulation output maps of ionized hydrogen fraction (xHII) as a function of time (left to right) for several choices of reionization source models (rows). Small features are different by eye in the different models but the overall morphology on larger scales remains the same. [Hassan et al. 2022]

From a quick glance, it is clear that as we look down a single column, the Swiss-cheese structure of ionized bubbles looks broadly similar across source modeling choices. Not much changes on large scales, even though the smaller bubbles or detailed edge features may be changing significantly. The authors take this to mean that source modeling choice doesn’t have a significant impact on large scales, but for a quantitative confirmation of this finding they turn to the power spectrum of ionized hydrogen, which describes the spatial distribution of HII.
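For readers who haven’t met this statistic before, the power spectrum is obtained by Fourier transforming the ionization field and averaging the squared Fourier amplitudes in shells of constant wavenumber k, so large spatial scales live at small k. A minimal sketch of that procedure (ours, not the authors’ analysis code):

```python
import numpy as np

def power_spectrum(field, box_size, nbins=20):
    """Spherically averaged power spectrum of a periodic 3D field (arbitrary normalization)."""
    n = field.shape[0]
    delta = field - field.mean()                  # fluctuations in, e.g., the ionized fraction
    pk3d = np.abs(np.fft.fftn(delta))**2          # power on the 3D Fourier grid
    k1d = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    bins = np.linspace(kmag[kmag > 0].min(), kmag.max(), nbins + 1)
    idx = np.digitize(kmag, bins)
    pk = np.array([pk3d.ravel()[idx == i].mean() for i in range(1, nbins + 1)])
    return 0.5 * (bins[1:] + bins[:-1]), pk       # bin-centre k and shell-averaged power

# Toy example: a random "ionization field" on a 64^3 grid in a 100 Mpc/h box.
# In the article, the field comes from the radiative-transfer post-processing of Simba.
k, pk = power_spectrum(np.random.rand(64, 64, 64), box_size=100.0)
```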

Figure 3 shows the power spectra of ionized hydrogen at the redshifts considered in Figure 2, as well as their residuals in the lower panel. The different modeling choices correspond to the different curves in the figure. The curves are broadly in agreement with each other over most scales for most choices of source modeling. In particular, on large scales (log k < 0.0) the models agree quite well — quantitatively corroborating that the choice of source modeling does not impact the large-scale spatial distribution of the ionized hydrogen.

power spectra of the models in the previous figure

Figure 3: Reionization power spectra as reionization progresses in time from high to low redshift z (from left to right). For several choices of source modeling considered by today’s authors, the large-scale (low k) power is similar. [Hassan et al. 2022]

The conclusion of this article is concrete: the authors suggest that future large-scale reionization modeling can safely use more efficient methods than the expensive simulations included here. This follows from the finding, shown in Figures 2 and 3, that changes in the source modeling (and the associated scatter in the relation between ionization rate and halo mass) do not affect large scales. If large-scale reionization does not depend on the details of astrophysical source modeling, this will definitely make the lives of astrophysicists studying large-scale reionization easier — without the need to simulate these details, researchers can run less costly simulations to extract information about the high-redshift universe on scales relevant for cosmology!

Original astrobite edited by Alice Curtin.

About the author, Jamie Sullivan:

I am a third-year astrophysics PhD student at UC Berkeley and part of the Berkeley Center for Cosmological Physics. My current research focuses on measuring and modeling large-scale structure to constrain cosmological parameters. I completed my undergraduate at UT Austin, and I’m originally from the Washington, DC, area.

illustration of an exoplanet on a grazing orbit around its host star

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Accurate Modeling of Grazing Transits Using Umbrella Sampling
Author: Gregory J. Gilbert
Author’s Institution: University of Chicago
Status: Published in AJ

Today’s author uses umbrellas to accurately model the planets that “graze” their stellar hosts.

Planets That Graze on Their Stars

Roughly 75% of all known exoplanets were discovered via transit surveys. These surveys monitor many stars at once to look for dips in brightness that could be caused by a planet passing, or “transiting,” in front of a star. Although rare, some of these planets only “graze” their host stars, meaning that they only partially transit their parent star’s disk (check out this astrobite to learn more about a specific case of a grazing planet).

In astronomical terms, “grazing” planets are defined as those whose impact parameter is larger than one minus the ratio of the planet’s radius to the star’s radius, so that part of the planetary disk extends beyond the stellar limb at mid-transit. The impact parameter is the distance between the center of the stellar disk and the center of the planetary disk at conjunction, measured in units of the stellar radius, where conjunction is the point in a planet’s orbit where it is most closely aligned with its star, as viewed from Earth. A perfectly centered transit has an impact parameter of 0, while a transit in which only half of the planetary disk passes in front of the stellar disk at mid-transit has an impact parameter of 1.
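For a circular orbit, the impact parameter follows directly from the orbital geometry, so the grazing condition is simple to check. Here is a minimal sketch, assuming a circular orbit and the standard transit geometry; the example numbers are hypothetical:

```python
import numpy as np

def impact_parameter(a_over_rstar, inclination_deg):
    """Impact parameter b of a circular orbit: the sky-projected star-planet
    separation at conjunction, in units of the stellar radius."""
    return a_over_rstar * np.cos(np.radians(inclination_deg))

def is_grazing(b, r):
    """Grazing transit: part of the planet's disk misses the star at mid-transit,
    i.e. b > 1 - r, where r is the planet-to-star radius ratio."""
    return b > 1.0 - r

# Example numbers (hypothetical): a planet with r = 0.1 at a/Rstar = 10 and i = 84.5 deg
b = impact_parameter(10.0, 84.5)
print(b, is_grazing(b, r=0.1))   # b is about 0.96, so this transit is grazing
```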

plot of flux over time for different impact parameters

Figure 1: The impact parameter (the distance between the centers of the stellar and planetary disks at conjunction) changes the shape of a transiting planet’s light curve. On this plot, the flux, or brightness, of the star normalized to 1 is on the y-axis. The time before and after the transit in hours is on the x-axis. Planets that have a high impact parameter graze the disk of their host star during their transit, making it more difficult to characterize a planet using its light curve. [Gilbert 2022]

Figure 1 demonstrates how the shape of the light curve from a transiting planet changes as a function of the impact parameter. The depth of the dip in a light curve allows astronomers to estimate the planet’s radius relative to the star, but this estimation becomes more difficult if the planet is grazing. For example, the light curve of a smaller, non-grazing planet could look the same as the light curve from a larger, grazing planet. One therefore needs to simulate grazing transits even in cases where it is unlikely that the planet grazes its host star.

However, today’s author shows that standard Monte Carlo methods, which are frequently used by exoplanet scientists to model grazing planets, can lead to unreliable results! Identical runs of the same model can return differing results, or results where it is not obvious that the model is wrong (Figure 2). When dealing with a handful of planets, one can let the simulation run for a longer period of time or add additional data, such as the spectrum of the star, to the model. However, for larger samples, a more efficient method is needed. What can astronomers do instead?

plots of posterior distributions

Figure 2: Plots of the posterior distributions from four identical Monte Carlo simulations. The parameters explored are the impact parameter, b, and the ratio of the planet’s radius to the star’s radius, r. Although the simulations are identical at the start, they devolve into four wildly different scenarios. In Panel A, the simulation is mostly consistent with a non-grazing planet. In Panel B, the simulation entirely fails to explore whether the planet is grazing or not. In Panel C, the simulation gets caught at the boundary between a grazing and non-grazing planet. In Panel D, the simulation has a bimodal posterior distribution that barely explores whether the planet is grazing at all. [Gilbert 2022]

Umbrella-ella-ella

They can use umbrella sampling! Umbrella sampling is a technique that has been used in other scientific fields for decades but was not adopted by astronomers until recently (Matthews et al. 2018 were the first to introduce umbrella sampling to the field of astronomy). This technique splits a distribution into sub-regions, draws samples from each of these sub-regions independently, and recombines these samples into a single posterior distribution (Figure 3). The author finds that this technique returns more reliable results than standard Monte Carlo methods (Figure 4).
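To see the machinery in action, here is a deliberately simplified, one-dimensional toy example of umbrella sampling (not the author’s transit-fitting implementation): a bimodal “posterior” is split into overlapping windows with Gaussian biases, each window is explored with a short Metropolis chain, the relative normalizations of the windows are estimated from neighboring overlaps, and the reweighted samples are recombined into a single estimate of the full distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    """Unnormalized log density of a toy bimodal 'posterior' (a double well)."""
    return -4.0 * (x**2 - 1.0)**2

centers = np.linspace(-1.4, 1.4, 9)     # umbrella window centres
kappa = 15.0                            # strength of the harmonic bias

def log_bias(x, c):
    return -0.5 * kappa * (x - c)**2

def metropolis(logp, x0, nsteps=12000, step=0.25, burn=2000):
    """A short random-walk Metropolis chain targeting exp(logp)."""
    x, lp, out = x0, logp(x0), []
    for _ in range(nsteps):
        xp = x + step * rng.normal()
        lpp = logp(xp)
        if np.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp
        out.append(x)
    return np.array(out[burn:])

# 1) Sample each biased sub-distribution pi_i(x) ~ target(x) * bias_i(x).
chains = [metropolis(lambda x, c=c: log_target(x) + log_bias(x, c), x0=c)
          for c in centers]

# 2) Stitch neighbouring windows: z_{i+1}/z_i = E_{pi_i}[ bias_{i+1}(x)/bias_i(x) ].
z = [1.0]
for i in range(len(centers) - 1):
    z.append(z[-1] * np.mean(np.exp(log_bias(chains[i], centers[i + 1])
                                    - log_bias(chains[i], centers[i]))))

# 3) Recombine: a sample x from window i carries weight z_i / bias_i(x), so the
#    weighted histogram estimates the original, unbiased target distribution.
xs = np.concatenate(chains)
wt = np.concatenate([zi / np.exp(log_bias(ch, c))
                     for zi, ch, c in zip(z, chains, centers)])
hist, edges = np.histogram(xs, bins=80, range=(-2, 2), weights=wt, density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
print("probability in each mode:", hist[mid < 0].sum() / hist.sum(),
      hist[mid > 0].sum() / hist.sum())   # close to 0.5 and 0.5, as it should be
```

The article’s weighting scheme is more careful than the simple neighbour-stitching used here, but the split–sample–reweight–recombine structure is the same.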

plots of posterior distributions

Figure 3: On the top left, the target distribution is split into three sub-regions, each of which is assigned a function. On the top right, after sampling from each of these sub-regions independently, each sub-region is assigned a biased distribution. On the bottom left, the three unbiased sub-distributions are shown. On the bottom right, the three unbiased sub-distributions are combined into a single posterior distribution. [Gilbert 2022]

plots of posterior distributions

Figure 4: Posterior distributions of radius, impact parameter, and transit duration for a mini-Neptune orbiting a K-dwarf star. The vertical dashed lines represent ground-truth values for this system. These plots demonstrate how standard Monte Carlo methods fail to properly explore the parameters of grazing planets and show that umbrella sampling produces more robust results! [Gilbert 2022]

A good deal of math is needed to properly weight the sub-regions relative to one another; these calculations are described in detail in the article, and a step-by-step tutorial can be found on the author’s GitHub. Nonetheless, the math is worth it — this technique can be used to explore any complicated distribution, so it can be used in fields beyond exoplanet science. This means you should get out your umbrellas, ‘cause it’s gonna be raining grazing planets!

Original astrobite edited by Jana Steuer.

About the author, Catherine Clark:

Catherine Clark is a PhD candidate at Northern Arizona University and Lowell Observatory. Her research focuses on the smallest, coldest, faintest stars, and she uses high-resolution imaging techniques to look for them in multi-star systems. She is also working on a Graduate Certificate in Science Communication. Previously she attended the University of Michigan, where she studied astronomy and astrophysics as well as Spanish. Outside of research, she enjoys spending time outdoors hiking and photographing, and spending time indoors playing games and playing with her cats.

composite image of cygnus a

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Evidence for Strong Intracluster Magnetic Fields in the Early Universe
Authors: J. Xu and J. L. Han
First Author’s Institution: Chinese Academy of Sciences and University of Chinese Academy of Sciences
Status: Published in ApJ

Mysteries of the Magnetic Macrocosm

Magnetic fields are everywhere, from the vast, pristine emptiness of cosmic voids to the dense, galaxy-packed environments of massive clusters. Wherever we find plasma — the hot, ionized fluid making up 99.9% of the universe’s visible matter — we find magnetic fields shaping and stirring said plasma. Needless to say, magnetic fields have their fingers (or, rather, their field lines) in many pies. Yet, for a wide range of astrophysical situations, the question still remains: where did these fields come from?

Galaxy clusters — associations of hundreds to thousands of galaxies held together by the glue of gravity — are no exception to this magnetic mystery. Today’s article seeks to understand the origin of magnetic fields in the intracluster medium, the ultra-hot plasma permeating the space between cluster-bound galaxies. At surface level, knowledge of the magnetic fields in the intracluster medium is necessary for understanding the rich spectrum of radiation emitted by galaxy clusters (see, for instance, these three astrobites). On a grander scale, however, these clusters — as the largest gravitationally bound bodies in the universe — can also provide key insight into the history of our cosmos. By tracing the growth of intracluster fields back through cosmic time, we can probe how magnetic fields influenced the formation of structure in the infant universe and catch a glimpse of the earliest magnetic fields in existence. This is precisely what today’s authors set out to do.

Faraday Forecasts of Faraway Fields

So, how does one study magnetic fields that are millions to billions of light-years from Earth? Today’s authors leverage the power of Faraday rotation: when a polarized light wave passes through a magnetized plasma, its plane of polarization is rotated through an angle that depends on the wavelength of the light and on the electron density and magnetic field strength along its path (as illustrated in this cartoon). Therefore, by observing the change in the polarization angle of incoming light and calculating the so-called rotation measure, one can deduce the strength of the magnetic field along the light’s path. This technique is invaluable in radio astronomy and has been used extensively to study the magnetic backdrop of our universe.
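In standard notation (general background, not specific to this article), the polarization angle χ rotates in proportion to the wavelength squared, with the rotation measure given by the electron density times the line-of-sight magnetic field integrated along the path:

```latex
\chi(\lambda) = \chi_0 + \mathrm{RM}\,\lambda^{2},
\qquad
\mathrm{RM} \approx 0.81 \int
\left(\frac{n_e}{\mathrm{cm^{-3}}}\right)
\left(\frac{B_\parallel}{\mu\mathrm{G}}\right)
\left(\frac{\mathrm{d}l}{\mathrm{pc}}\right)
\;\mathrm{rad\,m^{-2}}
```

Measuring χ at several wavelengths yields the RM, and the difference between the RMs of two closely spaced sources then isolates the magnetized plasma between them — which is exactly the trick described next.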

If we’re going to be using Faraday rotation to explore intracluster magnetic fields, all we need now is some radiating object to shine light through a galaxy cluster and into our telescopes. There’s one slight complication, though: the incoming light is sensitive to magnetic fields along its entire path of propagation — if we’re looking at light from a distant galaxy cluster, the wave will be rotated not only by the intracluster fields, but also by intergalactic fields between the cluster and the Milky Way and by galactic fields within the Milky Way. How, then, do we isolate the rotation due solely to intracluster fields?

image of radio galaxy hercules A

Figure 1: Composite photo of the radio galaxy Hercules A and its two prominent radio lobes from the Hubble Space Telescope and the Very Large Array. [NASA, ESA, S. Baum and C. O’Dea (RIT), R. Perley and W. Cotton (NRAO/AUI/NSF), and the Hubble Heritage Team (STScI/AURA)]

The authors ingeniously sidestep this issue by looking at close pairs of light sources: if we take the difference in Faraday rotation between two light sources embedded in the same intracluster medium, we probe only the intracluster fields separating the two sources — the intergalactic and galactic contributions cancel out! Serendipitously, the universe has provided us with an abundance of double light sources in the form of radio galaxies, whose bright pairs of lobes naturally arise as material ejected from these galaxies interacts with the surrounding intracluster medium (see Figure 1). Figure 2 illustrates, schematically, the authors’ strategy to probe intracluster magnetic fields via the rotation measures of radio lobes.

cartoon of a person looking through the intergalactic medium toward a double-lobed radio source

Figure 2: Schematic diagram showing the observation of a radio galaxy embedded in the intracluster medium. Light emitted from the two radio lobes (labeled RM1 and RM2 to indicate their different rotation measures) passes through the magnetic fields of the intracluster medium, the intergalactic medium, and the Milky Way before reaching the observer (far left). [Xu & Han 2022]

Baffling B-fields from Bygone Bodies

Since the authors are interested in the evolution of intracluster fields across the lifetime of the universe, they comb through archived radio telescope data from both the NRAO VLA Sky Survey (NVSS) and from recent literature to obtain rotation measures for double-lobed radio galaxies across a wide range of redshifts (in this context, redshift just tells us how far into the past we’re looking). When compiling their data set of lobe pairs, the authors make careful cuts based on the distances between the lobes and the locations of the lobes relative to the Milky Way so as to minimize rotation measure contamination from intergalactic and galactic fields — when we take the difference between the rotation measures of a given pair of lobes, we want this difference to reflect only the contribution from intracluster fields.

plots of rotation measure difference as a function of redshift

Figure 3: Plots of the pairwise rotation measure (RM) differences (top two rows) and the statistical dispersion in these differences (bottom two rows, showing two different ways of quantifying the dispersion) vs. redshift for the radio lobe data set analyzed by the authors. Blue points represent pairs of lobes from the NVSS catalog, while red points represent pairs compiled from the literature. The right column shows a subset of the data with RM measurement uncertainties below a certain threshold. [Xu & Han 2022]

Ultimately, the authors select 387 pairs of lobes from NVSS and 197 pairs from the literature, with redshifts as high as 3 (meaning that the light we’re seeing from the farthest lobes is almost 11.5 billion years old). Plotting the pairwise rotation measure differences (and the statistical dispersion in these differences) yields Figure 3. With high confidence, the authors conclude that the rotation measure differences in higher-redshift clusters are statistically larger than those in lower-redshift clusters, implying that intracluster fields were stronger in the past.
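For reference, the lookback time quoted above follows directly from the redshift once a cosmology is chosen; for instance, with astropy’s built-in Planck 2018 parameters (our choice here, not necessarily the authors’):

```python
from astropy.cosmology import Planck18

# Lookback time to the most distant lobe pairs in the sample (z ~ 3);
# this comes out to roughly 11.6 billion years for these parameters
# (the exact value shifts slightly with the assumed cosmology).
print(Planck18.lookback_time(3))
```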

The authors go a step further and use these rotation measures to estimate the typical intracluster field strength for clusters that existed more than seven billion years ago (roughly half the age of the universe) — but this only leads to more confusion: there was too little time between the beginning of the universe and the formation of these clusters for their strong magnetic fields to have grown via typical channels like dynamos. Thus, the authors conclude that strong magnetic fields must have existed in the early universe, prior to the formation of these clusters. While intracluster fields will provide useful constraints on the growth of magnetic fields in the early universe, the ultimate origin of these fields continues to elude us.

And thus, the universe’s grand magnetic mystery lives on.

Original astrobite edited by Catherine Manea.

About the author, Ryan Golant:

I am a second-year astronomy Ph.D. student at Columbia University. My current research involves the use of particle-in-cell simulations to study magnetic field growth in gamma-ray burst afterglows and closely related plasma systems. I completed my undergraduate at Princeton University, and I’m originally from Northern Virginia. Outside of astronomy, I enjoy learning about art history, playing violin and video games, and watching cat videos on the internet.

hubble image of spiral galaxy UGC 2885

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Detection of a Superluminous Spiral Galaxy in the Heart of a Massive Galaxy Cluster
Authors: Ákos Bogdán et al.
First Author’s Institution: Center for Astrophysics | Harvard & Smithsonian
Status: Accepted to ApJ

Galaxy clusters contain hundreds of galaxies in a huge variety of shapes and sizes, ranging from irregular dwarf galaxies to giant ellipticals. The most luminous member of a cluster is known as the brightest cluster galaxy. Each brightest cluster galaxy is different, but there are some properties that they tend to have in common — most brightest cluster galaxies are found at the very centre of their host cluster and are large, elliptical galaxies, containing little gas and forming very few new stars.

The reason why most brightest cluster galaxies look so similar is well understood, as it is thought that these large galaxies form via a series of galaxy mergers. These are violent cosmic events that slowly increase the size of the galaxy, whilst also destroying any delicate disk or spiral arms that the galaxy may have (click here to see a simulation of two merging spiral galaxies). Additionally, mergers can lead to gas being expelled from a galaxy, resulting in the gas-poor, quenched brightest cluster galaxies that we see today.

However, today’s article adds an exciting twist to this story, presenting data from three galaxy clusters that do not appear to follow this trend, including one brightest cluster galaxy that doesn’t fit with our current theories at all.

Suspicious Spirals

The authors begin by introducing seven superluminous spiral galaxies, a recently discovered class of huge galaxies with spiral or lenticular shapes. The great size of these galaxies is what motivates the main question of today’s article: could these superluminous spiral galaxies actually be brightest cluster galaxies, despite not looking like them?

To answer this, we can look at the amount of X-ray radiation surrounding these galaxies. X-rays are emitted by the intracluster medium, a vast cloud of incredibly hot gas that fills a cluster, occupying the space between galaxies: if a cluster were a tasty chocolate chip muffin, the intracluster medium would be the cake, filled with chocolate chip galaxies. Using the X-ray telescope XMM-Newton, the authors found no X-ray emission surrounding two of their galaxies. However, as Figure 1 shows, the remaining five have large amounts of X-rays being produced nearby. This indicates the presence of the intracluster medium, meaning that these galaxies lie in or near a galaxy cluster.

x-ray observations of seven superluminous disk galaxies

Figure 1: X-ray observations of the region surrounding each of the seven superluminous disk galaxies. Regions of stronger X-ray emission are represented by lighter colour, and the centre of each X-ray region (i.e., the cluster centre) is marked by a green cross. The position of each superluminous disk galaxy is shown by the green circle. Note that the two galaxies in the bottom right (J11380 and J09354) have no associated clusters, and that the top-left galaxy (J16273) is located at the centre of its cluster. [Adapted from Bogdán et al. 2022]

It’s unusual to find spiral galaxies inside clusters, but not unheard of. However, what makes this work so exciting is that in three of these clusters, not a single other galaxy is brighter than the superluminous spiral — in other words, these spirals are the brightest cluster galaxies. Finally, one of these galaxies (J16273 in Figure 1) is not only the brightest galaxy in its cluster, but is found directly in the cluster centre, in exactly the position where we would usually expect to find a brightest cluster galaxy!

Galaxy Mergers, but Not as You Know Them

The fact that J16273 is the brightest galaxy in a cluster and lives right in the cluster centre makes it look like a fairly typical brightest cluster galaxy. However, brightest cluster galaxies are elliptical because of the large numbers of galaxy mergers that they experience. How can we explain why this one is so different from all of those that we’ve seen before?

Surprisingly, one explanation is mergers themselves. The authors suggest that J16273 was previously a regular, elliptical brightest cluster galaxy that recently merged with a smaller gas-rich galaxy. Under the right conditions, this merger could spin up the elliptical galaxy, with the remnants of the gas-rich galaxy forming a brand new spinning disk.

In order to really understand these giant spiral galaxies, future work will need to look at many more than just seven of them. The authors acknowledge this and suggest that eROSITA, an ongoing X-ray survey of the sky, will be able to look at many more of these galaxies and determine whether they live in clusters, groups, or alone. eROSITA is due to release its first data at the end of 2022 and should help us to solve the mystery of how these huge spirals ended up in places we never expected to find them.

Original astrobite edited by Katy Proctor.

About the author, Roan Haggar:

I’m a PhD student at the University of Nottingham, working with hydrodynamical simulations of galaxy clusters to study the evolution of infalling galaxies. I also co-manage a portable planetarium that we take round to schools in the local area. My more terrestrial hobbies include rock climbing and going to music venues that I’ve not been to before.

composite X-ray and optical image of the galaxy Messier 51

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: COSMOS2020: Ubiquitous AGN Activity of Massive Quiescent Galaxies at 0 < z < 5 Revealed by X-ray and Radio Stacking
Authors: Kei Ito et al.
First Author’s Institution: The Graduate University for Advanced Studies and National Astronomical Observatory of Japan
Status: Accepted to ApJ

While most passive or “dead” galaxies we see today have had fairly passive lives, distant passive galaxies in the early universe may have had a more active path to passivity. Detailed studies of nearby quiescent galaxies have revealed that they follow a simple evolutionary track: a burst of star formation early on in their lives followed by a quiet existence with low rates of star formation. In contrast, recent discoveries have uncovered a new population of quiescent galaxies that were quenched faster and earlier than should be possible along this simple evolutionary track (for example, the distant quiescent galaxies covered in this astrobite and that astrobite). The existence of so many quiescent galaxies so early on in the universe is a problem for galaxy evolution models, and the intense starburst phase and rapid suppression of star formation have been difficult to reproduce with cosmological simulations.

A big unresolved question related to this problem is how the burst of star formation gets suddenly shut off, or quenched, in these galaxies. Are the streams of gas from the cosmic web that fuel star formation getting cut off? Or is the gas still flowing in but being expelled by some feedback mechanism? One such possible feedback mechanism is triggered by the galaxy’s central supermassive black hole as it funnels in material and creates a disk of hot, luminous gas and dust around it, forming an active galactic nucleus (AGN). The AGN devours some of the gas, and radiation, winds, and jets eject the rest.

In today’s article, the authors leverage the extensive multiwavelength COSMOS2020 catalog to explore the AGN activity in quiescent galaxies across cosmic time through two primary AGN signatures: X-ray and radio emissions. However, many of these galaxies and the possible AGN within them, especially those farthest away, are faint enough that they are not individually detected in X-ray and radio surveys. To both overcome this faintness and to focus on typical (rather than extremely bright) sources, the authors use a technique called stacking to characterize the average properties of a quiescent galaxy sample and a comparison star-forming galaxy sample. Beyond comparing the stacks of quiescent galaxies and star-forming galaxies, the authors create a grid of stacks spanning stellar mass (basically, how big the galaxy is) and redshift (how far away and therefore how early on in the universe the galaxy is) to investigate trends along these axes.

Galaxy Pancakes 

To better understand the stacking technique and the grid of stacks, imagine each galaxy is a pancake. Some pancakes are regular (quiescent) and the ones that have a little more going on are buttermilk (star-forming). Now let’s say all of the pancakes have berries in them, but eating a single pancake won’t get you a full serving of fruit. So, to portion out a daily fruit intake you make stacks of pancakes on each plate, separating out regular and buttermilk.

Besides the regular and buttermilk types of pancakes, let’s say they also come in different sizes, from silver dollar to the size of the plate — this represents the stellar mass axis. And of course, the pancakes weren’t made simultaneously: the stacks of pancakes made earlier are farther down the table from where you’re sitting, and the newer ones are right in front of you, similar to how more distant (i.e., higher redshift) galaxies represent conditions earlier in the universe than nearby galaxies.

To build their grid of galaxy pancake stacks, the authors used observations at wavelengths at which the galaxies were detected individually (optical and infrared) and redshifts from the COSMOS2020 catalog to decide which galaxies were star-forming versus quiescent as well as how massive each was. The authors then used observations at wavelengths at which the galaxies were not individually detected (X-ray and radio) to place stacked observations in a grid of stellar mass and redshift. The resulting sample is the largest, highest-redshift sample of typical quiescent galaxies created so far.
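Mechanically, stacking just means averaging many aligned cutouts so that the signal shared by the sample rises above the noise of any individual image. A schematic sketch of the bookkeeping (hypothetical inputs, not the authors’ code):

```python
import numpy as np

def stack_cutouts(cutouts, method="mean"):
    """Combine aligned image cutouts (one per galaxy) into a single stacked image."""
    cube = np.array(cutouts)     # shape: (n_galaxies, ny, nx)
    return np.nanmedian(cube, axis=0) if method == "median" else np.nanmean(cube, axis=0)

def grid_of_stacks(cutouts, mstar, z, quiescent, mass_bins, z_bins):
    """Stack cutouts separately for quiescent and star-forming galaxies in each
    (stellar mass, redshift) bin. All inputs are hypothetical catalog arrays."""
    stacks = {}
    for flag in (True, False):                       # quiescent vs star-forming
        for i in range(len(mass_bins) - 1):
            for j in range(len(z_bins) - 1):
                sel = [k for k in range(len(cutouts))
                       if quiescent[k] == flag
                       and mass_bins[i] <= mstar[k] < mass_bins[i + 1]
                       and z_bins[j] <= z[k] < z_bins[j + 1]]
                if sel:
                    stacks[(flag, i, j)] = stack_cutouts([cutouts[k] for k in sel])
    return stacks

# Toy demonstration with fake data standing in for X-ray cutouts and catalog values.
rng = np.random.default_rng(0)
fake_cutouts = [rng.normal(0.0, 1.0, (25, 25)) for _ in range(200)]
fake_mstar = rng.uniform(1e10, 1e11, 200)
fake_z = rng.uniform(0.0, 5.0, 200)
fake_quiescent = rng.random(200) < 0.3
stacks = grid_of_stacks(fake_cutouts, fake_mstar, fake_z, fake_quiescent,
                        mass_bins=[1e10, 3e10, 1e11], z_bins=[0, 1, 2, 3, 5])
```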

Taking an X-ray

plot of image stacks showing brighter and fainter detections in a grid of redshift, stellar mass, and quiescent versus star-forming galaxies

Figure 1: The grid of galaxy stacks showing the average X-ray detection for two different X-ray bands. The red images show the regular pancake quiescent galaxies and the blue images show the buttermilk pancake star-forming galaxies, with redshift (z) increasing from top to bottom and stellar mass increasing from left to right in each color bin. [Ito et al. 2022]

The first stacking analysis the authors conducted was with X-ray data, with some representative stacks shown in Figure 1.

Beyond identification of some general trends, physically interpreting these stacks requires understanding what is causing the X-ray emission. X-ray emission comes from two main sources in galaxies: X-ray binaries, which contain a dense stellar remnant energetically drawing material from a star in its orbit, and AGN. Returning to our analogy, the fruit content in the pancakes could come from whole berries scattered around the pancake (X-ray binaries) or from a berry jam filling in the center (AGN).

But if you only know the average amount of fruit in each pancake stack, how can you tell whether it’s in the form of whole berries or a jam filling? Based on known relations between a galaxy’s star formation rate and stellar mass and the X-ray emission expected from its X-ray binaries, the authors determined the relative contributions from X-ray binaries and AGN. With these models, they found that X-ray binaries could explain most of the X-ray emission for the star-forming galaxy stacks. For the quiescent galaxies, on the other hand, the average X-ray emission in each stack was 5–50 times higher than expected from X-ray binaries alone, implying that much of the X-ray emission came from AGN. Additionally, they found the biggest difference between the star-forming and quiescent samples in the highest redshift bin, providing hints that AGN may have a role in quenching star formation early in the universe.
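The bookkeeping behind that comparison is straightforward: predict the X-ray luminosity expected from X-ray binaries given a stack’s average stellar mass and star formation rate, then divide the stacked luminosity by that prediction. The sketch below uses the standard linear form of such scaling relations with order-of-magnitude placeholder coefficients and made-up stack values, purely for illustration (the article uses calibrated, redshift-dependent relations):

```python
# Order-of-magnitude placeholder coefficients, not the calibrated values used in the article.
ALPHA = 1e29   # erg/s per solar mass of stars (X-ray binaries tracing the old stellar population)
BETA = 1e39    # erg/s per (solar mass/yr) of star formation (young X-ray binaries)

def expected_xrb_luminosity(mstar, sfr):
    """Linear scaling-relation estimate of the X-ray luminosity from X-ray binaries."""
    return ALPHA * mstar + BETA * sfr

# Hypothetical stacked quiescent bin: M* = 1e11 Msun, SFR = 0.5 Msun/yr, and a
# stacked X-ray luminosity of 3e41 erg/s measured from the averaged images.
l_expected = expected_xrb_luminosity(1e11, 0.5)
l_stacked = 3e41
print("excess over X-ray binaries:", l_stacked / l_expected)  # >> 1 points to an AGN contribution
```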

Tuning In to the Radio

To further verify their findings, the authors then stacked data from the other major signature of AGN: radio emission. Similar to X-rays, radio emission comes from two main sources in galaxies: one related to ongoing star formation and one related to AGN. Taking an empirically known correlation between star formation rate and radio luminosity, the authors determined that the quiescent galaxy stacks have 3–10 times higher radio emission than expected from just star formation, while the star-forming galaxy stacks could be explained primarily by star formation. Consistent with the X-ray result, this suggests that faint AGN are ubiquitous in quiescent galaxies.

How to Quench a Pancake

How does this AGN feedback mechanism work to quench galaxies? In nearby galaxies, we know that quenching tends to occur with more active AGN. This is due to two processes: quasar-mode feedback and radio-mode feedback. In quasar-mode feedback, wind from a bright AGN expels gas from the galaxy and suppresses star formation. In radio-mode feedback, a typically fainter AGN heats the gas in and around the galaxy with radio jets, which prevents gas from cooling and forming stars. In this way, radio-mode feedback maintains quiescence rather than just reducing the possible star formation by tossing out fuel. The authors note their faint, typical sample is probably mostly undergoing radio-mode feedback, with some non-AGN environmental quenching coming into play at lower redshifts.

So what do these stacks tell us about galaxy evolution? The ubiquitous AGN signatures in both X-ray and radio give us an interesting clue about quenching: everyday quiescent galaxy pancakes are often filled with AGN berry jam, and feedback from faint AGN within them are likely the culprit for shutting off star-forming buttermilk berry galaxy pancakes so suddenly and early in the universe.

Original astrobite edited by Alice Curtin.

About the author, Olivia Cooper:

I’m a second-year grad student at UT Austin studying the obscured early universe, specifically the formation and evolution of dusty star-forming galaxies. In undergrad at Smith College, I studied astrophysics and climate change communication. Besides doing science with pretty pictures of distant galaxies, I also like driving to the middle of nowhere to take pretty pictures of our own galaxy!

photograph of a globular cluster

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Detection of a 100,000 M☉ black hole in M31’s Most Massive Globular Cluster: A Tidally Stripped Nucleus
Authors: Renuka Pechetti et al.
First Author’s Institution: Liverpool John Moores University, UK
Status: Published in ApJ

Intermediate-Mass Black Holes

Stellar-mass black holes — those with masses of tens of solar masses (M☉) — are thought to result from the collapse of massive stars. The formation of supermassive black holes — those with millions to billions of solar masses — is less clear. Given their large masses, there is not enough time for stellar-mass black holes to grow into the supermassive black holes that we know existed fairly early in the history of the universe. One possibility is that the “seeds” that grow into supermassive black holes lie somewhere in between 10² and 10⁵ M☉, or what we’d call intermediate-mass black holes.

Despite their importance, intermediate-mass black holes remain elusive, and their existence has not quite been confirmed. The best way to measure black hole masses is using the motions of stars around them, but this tactic may not work for intermediate-mass black holes since they have a smaller sphere of influence than supermassive black holes. Today’s article takes a look at a possible intermediate-mass black hole in a globular cluster in the neighboring galaxy M31, also known as the Andromeda Galaxy.

Pinning Down the Mass

The globular cluster B023-G078 is the most massive cluster in Andromeda, and the velocity of stars within the cluster seems to indicate the presence of a massive central object. The authors of the article use images of the cluster from the Hubble Space Telescope and spectroscopic observations from Gemini to determine if this central mass could be an intermediate-mass black hole.

plot of root mean square velocity as a function of radius in arcseconds and parsecs

Figure 1: Root-mean-square velocity of stars in the cluster vs. radial distance to the center of the cluster. Points in red show the observations from the Gemini telescope. The black line shows the best-fit model for a massive black hole. The blue line shows the model assuming there is no black hole. [Adapted from Pechetti et al. 2022]

The authors use the Hubble images to construct models for the cluster’s mass distribution and the mass of a possible central black hole. They use a method called Jeans anisotropic modeling, which fits the Jeans equations to observations of a star cluster or galaxy. The high resolution of the Gemini data (and the proximity of the cluster) allows them to extract detailed information on the motions of stars within the cluster: using integral field spectroscopy, the authors determine the root-mean-square velocity of the stars at different distances from the center of the cluster, which depends on the central mass. The authors then compare their models to the observed velocities, shown in Figure 1.

The best-fit models give the central object a mass of 9 × 10⁴ M☉, placing it firmly in intermediate-mass black hole territory!

It is possible that the central mass is actually several stellar-mass black holes rather than one intermediate-mass black hole. The main difference between the two possibilities would be that a collection of many black holes would look more extended than a single compact object. The authors investigate this possibility using their models, but any conclusions may require higher resolution observations.

However, there is something else that can give us a clue if this is indeed an intermediate-mass black hole: the origin of the globular cluster.

Remnants of a Small Galaxy? 

Because of the wide spread of metallicity measured for stars in the cluster, the authors consider the possibility that B023-G078 is the remnant of a small galaxy that underwent a merger with Andromeda, making it a stripped nuclear star cluster. The idea is that as small galaxies merge into larger galaxies (what is known as a minor merger), tidal forces pull apart parts of the galaxy, including the nuclear star cluster at the center that houses a massive black hole, leaving behind a globular cluster.

Given the mass of the cluster (~10⁶ M☉), the authors estimate that the original galaxy had a mass of ~10⁹ M☉. (For comparison, the mass of the Milky Way is ~10¹¹ M☉.) Since the mass of a central black hole typically scales with the mass of the galaxy, this mass estimate means this nucleus is a good place to look for an intermediate-mass black hole.

The combination of the mass of the black hole from modeling and the evidence that this cluster is a stripped nuclear star cluster leads the authors of the article to favor the idea that there is indeed an intermediate-mass black hole in the cluster!

Original astrobite edited by Alex Pizzuto.

About the author, Gloria Fonseca Alvarez:

I’m a fifth-year graduate student at the University of Connecticut. My research focuses on the inner environments of supermassive black holes. I am currently working on measuring black hole properties from the spectral energy distributions of quasars in the Sloan Digital Sky Survey. As a Nicaraguan astronomer, I am also involved in efforts to increase the participation of Central American students in astronomy research.

side-by-side images of Venus's surface today and an imagining of what its surface might have looked like in the past

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Was Venus Ever Habitable? Constraints from a Coupled Interior–Atmosphere–Redox Evolution Model
Authors: Joshua Krissansen-Totton, Jonathan J. Fortney, and Francis Nimmo
First Author’s Institution: University of California, Santa Cruz
Status: Published in PSJ

Where Oh Where Did the Water Go? (And Was It There To Begin With?)

Despite sometimes being called “Earth’s twin,” Venus isn’t very similar to Earth beyond its size and composition. With a thick toxic atmosphere filled with CO2 and a volcano-laden surface, it’s definitely more like Earth’s evil twin. Even spacecraft can only survive on its surface for a maximum of 2 hours before succumbing to the high pressure and temperature of a planet plagued by the runaway greenhouse effect.

But was Venus always such a hellish place? For a long time, we’ve theorized that Venus boasted an ocean of liquid water on its surface billions of years ago, but that, because the planet orbits closer to the Sun, a runaway greenhouse effect eventually took hold: as the Sun grew brighter over time, more solar radiation hit the planet’s surface, evaporating more surface water and loading Venus’s atmosphere with water vapor. That water vapor heated the planet even further, and intense radiation from the Sun split the water molecules apart, letting the hydrogen escape into space. Carbon released from the planet’s surface could then combine with some of the leftover free oxygen to build up CO2 in the atmosphere, trapping even more heat and completing the runaway greenhouse effect.

Different models of Venus’s climate evolution have led to conflicting stories about its past. Some models that incorporate the effects of clouds responding to the warming or cooling of the planet have found it possible for habitable conditions to have existed on the planet as late as 700 million years ago. Also, unlike the Moon or Earth, whose craters are weathered or otherwise degraded, most of Venus’s craters are in pristine condition and randomly distributed across its surface. From this, we infer that most of Venus’s geological history has been erased due to resurfacing events like volcanic outbursts and lava flows that happened very recently. This means that the surface we can see is very young (<1 billion years old), which makes it difficult to use observations of the surface to uncover information about Venus’s elusive history.

However, scientists have found evidence of felsic crust: igneous rocks on Venus’s surface that are relatively rich in feldspar and quartz, whose presence may indicate past surface water. At the same time, Venus’s atmosphere is almost completely devoid of molecular oxygen (O2). But if water vapor in the atmosphere was broken down by radiation and most of the hydrogen escaped into space, there should be some leftover oxygen in the atmosphere. So what happened to all the oxygen?

Let’s Have PACMAN Eat All Our Doubts Away…

The authors of today’s article try to reconcile all the clues we have about Venus by using a coupled atmosphere–interior model called PACMAN (Planetary Atmosphere, Crust, and MANtle) to reproduce the planet’s climate conditions over time and see whether it could ever have sustained liquid water on its surface. All this means is that they keep track of the conditions in both Venus’s atmosphere and its interior while accounting for any effect one system has on the other. People have used these kinds of models to study Venus before, but none had looked at the possibility of liquid water on its surface.

The model is split into two phases (see Figure 1). Initially, Venus had a magma ocean on its surface, created by impacts with the space rocks that were abundant during the planet’s formation. The magma ocean was a giant layer of molten, bubbly rock that you definitely wouldn’t want to dip your toes in. As this ocean cooled and released gases into the atmosphere, the temperature dropped to the point where the ocean “froze” into a solid mantle, initiating phase two of the model.

schematic of the two phases that make up the authors' model

Figure 1: A simplified schematic of the PACMAN model the authors used. On the left is the magma-ocean phase, which consists of (from innermost to outermost layer) the core, a solid mantle, the magma ocean, and the atmosphere. On the right is the solid-mantle phase, which occurs after the magma ocean solidifies and consists of the core, the solid mantle, and the atmosphere/hydrosphere. Different colored arrows show what components leave and enter each layer in the model. [Adapted from Krissansen-Totton et al. 2021]

The authors calculate quantities like the surface temperature, the amount of radiation emitted and absorbed by the planet, how much water vapor is in the atmosphere, and the amount of water on the surface during both phases. They also keep track of the abundances of various molecules containing carbon, hydrogen, and oxygen (carbon dioxide, water, O2, etc.) and calculate their fluxes between the atmosphere and the interior (i.e., how much of each molecule enters or exits over time). In addition, they calculate the accumulation of ⁴⁰Ar and ⁴He in the atmosphere, which trace the total magmatic activity and more recent magmatic activity, respectively. Together, these quantities help determine whether a habitable or an uninhabitable past better reproduces Venus’s current atmosphere.
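To make the “coupled box model” idea concrete, here is a cartoon sketch in Python that tracks a single volatile (water) in just two reservoirs, the interior and the atmosphere, and exchanges it between them with two simple flux terms. The reservoir sizes and rate constants are invented purely for illustration; the real PACMAN model follows many more species, reservoirs, and processes, all coupled to the evolving climate.

```python
# Cartoon of a coupled atmosphere-interior "box model": one volatile
# (water), two reservoirs, two flux terms. All numbers are made up for
# illustration and do not come from the PACMAN model.
dt = 1.0e6           # time step in years
t_end = 4.5e9        # integrate over the age of the planet, in years
k_outgas = 2.0e-10   # fraction of interior water outgassed per year (invented)
k_escape = 5.0e-11   # fraction of atmospheric water lost to space per year (invented)

water_interior = 5.0e20    # kg of water stored in the interior (invented)
water_atmosphere = 1.0e19  # kg of water in the atmosphere (invented)

t = 0.0
while t < t_end:
    outgassing = k_outgas * water_interior * dt    # interior -> atmosphere
    escape = k_escape * water_atmosphere * dt      # atmosphere -> space
    water_interior -= outgassing
    water_atmosphere += outgassing - escape
    t += dt

print(f"Water left in the atmosphere after 4.5 billion years: {water_atmosphere:.2e} kg")
```

The point of the exercise is simply that the final atmospheric inventory depends on the competition between supply from the interior and loss to space, which is the same balance the authors track (for many more quantities) to decide whether a given history ends up looking like modern Venus.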

There are lots of unknown parameters and initial conditions in the model, such as the CO2 pressure and the planetary albedo (reflectiveness), so the authors run their model roughly 10,000 times to sample all 24 of these unknown parameters. Out of all of these runs, only about 10% ended successfully in a state that mirrors Venus’s modern atmospheric and surface conditions and chemical abundances. What’s interesting about these successful models is that they suggest Venus’s current state is compatible with two different histories: some of the models tell us Venus was never habitable in its past, while others claim that Venus was transiently habitable, meaning it could have hosted an ocean up to ~100 meters deep on its surface for anywhere between 0.04 and 3.5 billion years before succumbing to the runaway greenhouse effect. The latter scenario should have left salt or mineral deposits on the surface after all the water evaporated, leaving these materials potentially accessible to future remote sensing observations!
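The Monte Carlo bookkeeping itself is easy to sketch: draw each unknown parameter at random from a prior range, run the model, and keep only the runs whose end state looks like modern Venus. The snippet below illustrates the workflow with a trivial stand-in “model” and made-up parameter ranges and success criteria; it is not meant to reproduce the authors’ results or their 24-parameter setup.

```python
# Sketch of the Monte Carlo parameter-sampling strategy: sample unknown
# parameters, run a (here, stand-in) evolution model, and keep the runs
# that end up Venus-like. Ranges, formulas, and thresholds are invented.
import numpy as np

rng = np.random.default_rng(seed=0)
n_runs = 10_000

def stand_in_model(co2_pressure_bar, albedo):
    """A fake 'evolution model' that maps two parameters to a final surface
    temperature (K) and a final water inventory (arbitrary units)."""
    t_surface = 500.0 + 2.0 * co2_pressure_bar - 200.0 * albedo
    water_left = max(0.0, 1.0 - co2_pressure_bar / 100.0 - albedo)
    return t_surface, water_left

successes = 0
for _ in range(n_runs):
    co2 = rng.uniform(10.0, 200.0)   # initial CO2 pressure in bar (invented range)
    alb = rng.uniform(0.2, 0.8)      # planetary albedo (invented range)
    t_surf, water = stand_in_model(co2, alb)
    # "Success" = roughly Venus-like today: ~700-780 K surface, essentially dry
    if 700.0 <= t_surf <= 780.0 and water < 0.01:
        successes += 1

print(f"{successes} of {n_runs} runs ({100 * successes / n_runs:.1f}%) end up Venus-like")
```

In the actual study, each “run” is a full integration of the coupled model, and a successful run must end in a state that matches Venus’s modern atmospheric and surface conditions and chemical abundances (including the argon and helium tracers mentioned above).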

And the Winner Is…

So which model is correct? Unfortunately, there is no definitive answer, since the authors found that both models are favored under different conditions. CO2 tends to make it difficult for hydrogen to escape if the water concentration in the atmosphere is too low. Therefore, in the uninhabitable scenarios, where no surface water is ever present, the hydrogen in atmospheric H2O has a hard time escaping because CO2 continually dominates the atmosphere instead of being locked away in the surface, and these scenarios can’t reproduce the dry, oxygen-free Venus we see today. But if CO2 is allowed to radiatively cool the upper atmosphere, then water can condense on the surface and CO2 is removed from the atmosphere and stored away in the planet’s interior, giving Venus a chance at a period of enhanced water loss that can initiate the runaway greenhouse effect before the CO2 is outgassed back into the atmosphere.

On the other hand, most modern models assume that when the magma-ocean phase ends, virtually all the carbon and water from the magma (the so-called volatiles) end up in the atmosphere. But it is possible that some of these volatiles are instead trapped in the resulting solid mantle. If so, far fewer model runs allow Venus to have been habitable, because it would take longer for that water to be released back into the atmosphere, making it hard to explain Venus’s current, almost nonexistent water abundance.

The bottom line here is that either of these two scenarios is possible and consistent with modern observations; which one wins depends on our assumptions and model parameters. Though this might seem a bit anticlimactic, understanding and constraining Venus’s evolution is important for interpreting the atmospheres and histories of other exoplanets that might have gone through similar processes. JWST may be able to constrain the atmospheres of so-called exo-Venuses, like some of the planets in the TRAPPIST-1 system. Hopefully, our studies of both Venus and exo-Venuses can symbiotically help shine a light on planetary evolution!

Original astrobite edited by Ishan Mishra.

About the author, Katya Gozman:

Hi! I’m a second-year PhD student at the University of Michigan. I’m originally from the northwest suburbs of Chicago and did my undergrad at the University of Chicago. There, my research primarily focused on gravitational lensing and galaxies while also dabbling in machine learning and neural networks. Nowadays I’m working on galaxy mergers and stellar halos, currently studying the spiral galaxy M94. I love doing astronomy outreach and frequently volunteer with a STEAM education non-profit in Wisconsin called Geneva Lake Astrophysics and STEAM.

visualization of the milky way's magnetic field

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Evolution of primordial magnetic fields during large-scale structure formation
Authors: Salome Mtchedlidze et al.
First Author’s Institution: Ilia State University, Georgia; University of Göttingen, Germany; and Abastumani Astrophysical Observatory, Georgia
Status: Accepted to ApJ

Magnetic fields — often denoted by B in physics shorthand — are ubiquitous throughout the universe, playing a part in the physics of planets, stars, galaxies, and beyond. But where did these magnetic fields come from? Were they born in the Big Bang, or did they arise sometime later in cosmic history? The short answer: we don’t know! This question of cosmic magnetogenesis remains one of the most important unsolved problems in modern astronomy and is intimately connected to the underlying cosmology and fundamental physics of our universe.

The Birth of Magnetic Fields

Very broadly speaking, there are two competing avenues for cosmic magnetogenesis: the astrophysical scenario and the primordial scenario. In the astrophysical scenario, weak, small-scale magnetic fields are produced around local astronomical systems — like stars and galaxies — and are then amplified and spread across large scales; these initially tiny seed fields could be generated via naturally circulating electric currents (so-called dynamos), the turbulent flow of intergalactic or interstellar gas, or spontaneous processes in unstable plasmas. By contrast, magnetic fields in the primordial scenario are generated at the dawn of cosmic time — before stars, galaxies, or any structure in the universe came to be — and grow with the universe itself. Hypotheses of primordial magnetogenesis involve highly theoretical quantum phenomena, like the violation of fundamental symmetries of nature or the coupling and decoupling of the fundamental forces. Despite each of the primordial models taking place shortly after the Big Bang, the precise mechanism of field generation is hotly contested.

Nevertheless, the presence of magnetic fields in the early universe could have vitally important cosmological consequences. For one, these fields would tamper with the cosmic microwave background — the oldest, most distant light we can see — fundamentally affecting our inferences of the state of the infant universe. Primordial magnetic fields would also alter the thermal properties of the material between galaxies, thus shifting the time at which the universe transitioned from neutral to reionized. Recently, it’s even been suggested that early magnetic fields could explain the Hubble tension — the notorious mismatch between local and global measurements of the expansion rate of the universe — and, if these fields are sufficiently twisty (i.e., if the fields are helical), they could also explain why the universe contains so much more matter than antimatter. In other words, figuring out magnetogenesis could solve many of the universe’s biggest puzzles for the price of one!

Baby Photos of the Cosmic Magnetic Field

Evidently, primordial magnetic fields deserve some attention. As such, the authors of today’s article seek to understand how magnetic fields would evolve from the very early universe to the present day. In particular, the authors use computer simulations to trace how a primordial seed field would interact with the largest-scale structures in the universe — the components of the cosmic web, like massive galaxy clusters; long, thin filaments; and vast, empty cosmic voids — as they develop over cosmic time. By comparing current observations of large-scale magnetic fields to the patterns predicted by these simulations, we can rule out different models of primordial magnetogenesis.

The authors consider four different models for the primordial magnetic field:

  1. A completely uniform and homogeneous field that could be produced during the rapid inflation of the universe
  2. A scale-invariant field (a field possessing equal contributions from waves with small wavelengths and waves with large wavelengths) that could result from a different inflationary scenario
  3. A random, non-helical field that could originate from a phase transition in the early universe, when some fundamental force became independent from the rest
  4. A random, helical field that could also arise from a phase transition

These scenarios set the initial conditions of the authors’ simulations, and thus each model is expected to evolve in a different way.

maps of the temperature, mass density, and magnetic field strength at a redshift of z=0.02 for the four scenarios

Figure 1: Maps of the present-day cosmic web as predicted from simulations of primordial magnetic field evolution. From left to right: uniform magnetic field case, scale-invariant case, helical phase-transitional case, and non-helical phase-transitional case; from top to bottom: magnetic field, density, and temperature. [Mtchedlidze et al. 2022]

Magnetic Fields All Grown Up

Figure 1 shows the imprint of the simulated primordial magnetic fields on the present-day cosmic web with respect to field strength, density, and temperature. The authors find that the two inflationary magnetic field models develop stronger evolved fields than the two phase-transitional models, with the overall magnetization in galaxy clusters and in the bridges between clusters differing by orders of magnitude between the two field-generation scenarios. Additionally, the inflationary magnetic fields stretch to much larger scales than do the phase-transitional cases. While the helical phase-transitional fields evolve to higher strengths than the non-helical fields, the authors note that, at least according to their models, it should be difficult to distinguish between helical and non-helical fields observationally.

four-panel plot of simulated rotation measure

Figure 2: Predicted present-day rotation measure from simulations of primordial magnetic field evolution. From top to bottom: uniform magnetic field case, scale-invariant case, helical phase-transitional case, and non-helical phase-transitional case. The color bar is in units of radians per square meter. [Mtchedlidze et al. 2022]

The authors also produce simulated maps of the present-day rotation measure based on the evolved primordial fields (Figure 2). When a radio wave passes through a magnetic field on its way to an observer, its polarization is rotated by an amount proportional to the magnetic field’s strength; therefore, by measuring the degree to which an extragalactic radio wave’s polarization has been affected (quantified by the aptly named rotation measure) one can deduce the strength of astronomical magnetic fields. By comparing their rotation measure maps to recent observations, the authors find that the two inflationary magnetic field models, which produce larger magnetization levels in cosmic filaments, are favored over the phase-transitional models.
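For reference, the textbook expression for the rotation measure (the standard definition, not something specific to today’s article) is the line-of-sight integral

$$\mathrm{RM} \;=\; 0.812 \int \frac{n_e}{\mathrm{cm^{-3}}}\,\frac{B_\parallel}{\mu\mathrm{G}}\,\frac{\mathrm{d}l}{\mathrm{pc}} \;\;\mathrm{rad\,m^{-2}}, \qquad \Delta\chi = \mathrm{RM}\,\lambda^{2},$$

where n_e is the free-electron density, B_∥ is the component of the magnetic field along the line of sight, and Δχ is the rotation of the polarization angle for a wave of wavelength λ. Because both the electron density and the field strength enter the integral, turning a measured rotation measure into a field strength requires an independent estimate of the gas density along the path.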

While their modeling of magnetic field evolution over cosmic time neglects some key physical processes, such as gas cooling, chemical evolution, and high-energy outflow from stars and black holes, the authors still decisively show that different models of primordial magnetogenesis leave unique imprints on the universe’s largest scales.

Since the Low-Frequency Array has already started taking rotation measure data of distant radio waves passing through cosmic filaments, it’s only a matter of time before we can start ruling out models of early magnetic field creation. Even better, when the Square Kilometre Array comes online in the next decade, it’ll collect exquisite rotation measure data from the entirety of the cosmic web. With the power of the Square Kilometre Array at our disposal, we’ll be solving the mysteries of magnetogenesis B-fore you know it!

Original astrobite edited by Zili Shen.

About the author, Ryan Golant:

I am a second-year astronomy Ph.D. student at Columbia University. My current research involves the use of particle-in-cell (PIC) simulations to study magnetic field growth in gamma-ray burst afterglows and closely related plasma systems. I completed my undergraduate at Princeton University, and I am originally from Northern Virginia. Outside of astronomy, I enjoy learning about art history, playing violin and video games, and watching cat videos on the internet.
