
Illustration of a bright ring of material surrounding a dense, textured, reddish bubble.

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Indication of a Pulsar Wind Nebula in the hard X-ray emission from SN 1987A
Authors: Emanuele Greco et al.
First Author’s Institution: University of Palermo, Italy
Status: Accepted to ApJL

In 1987, astronomers witnessed the closest supernova in almost 400 years, subsequently dubbed SN 1987A. At only 51.4 kiloparsecs (about 167,000 light-years) away, SN 1987A lies in the Large Magellanic Cloud, and it was visible in the Southern Hemisphere with the naked eye for a few months before it faded. But one question remains unanswered: what kind of object was left behind? The original star that created SN 1987A was a blue supergiant, which would have left behind either a black hole or a neutron star. Yet even with decades of observations by many telescopes spanning the electromagnetic spectrum, the remnant’s nature has yet to be confirmed.

Why are astronomers still trying to figure out what was left behind in SN 1987A? One reason is that it would let us learn more about neutron star and black hole formation and the mechanics of supernovae. Another reason is that if this leftover object happens to be a pulsar, a neutron star that emits radio (and potentially X-ray or gamma-ray) pulses, then we would be able to observe its very early, formative years, which we know very little about. Recent work (like that discussed in this astrobite) suggests that a neutron star is the likely remnant, but we can’t say for sure. The authors of today’s paper attempt to confirm once and for all that the leftover remnant of SN 1987A is a neutron star.

Look with Your X-ray Eyes

To determine the nature of the object at the center of SN 1987A, the authors use X-ray observations taken between 2012 and 2014 by the Chandra X-ray Observatory, which observes X-ray photons between 0.1 and 10 keV, and the NuSTAR X-ray telescope, which observes X-ray photons between 3 and 79 keV (though the full range of each telescope is not necessarily used in the analysis). The images of SN 1987A from these telescopes are shown in Figure 1, with redder colors representing more photons detected.

Three panels show different X-ray views of SN 1987A and the background X-ray radiation.

Figure 1: X-ray images of SN 1987A where redder colors represent more X-rays. Left: Image from the Chandra X-ray Observatory from 0.1–8 keV. The cyan circle shows SN 1987A, and the red circle shows the noise level of the background X-rays. Since the background is almost completely black, there is very little noise. Center: A zoom-in of the left panel. The X-ray dim center of SN 1987A is shown by the black circle in the center. Right: The NuSTAR image from 3–30 keV. SN 1987A is circled again in cyan, and the slightly noisier background is circled in red. SN 1987A still clearly stands out above the background. [Greco et al. 2021]

The authors then analyzed the X-ray spectra, or number of photons observed at each energy, of SN 1987A between 0.5 and 20 keV, shown in Figure 2. By fitting different models to these spectra, they can determine what the source of the X-ray emission might be. The authors tested two primary models. The first models the X-ray emission with two thermal components, each produced by high-energy bremsstrahlung radiation. These components essentially correspond to two populations of highly energetic particles (usually electrons), each described by a characteristic temperature and energetic enough to emit X-rays.

The second model is the same as the first, but it also includes a model for a highly absorbed pulsar wind nebula (PWN). PWNs are winds of charged particles accelerated to nearly the speed of light around a pulsar, and they are known to give off high-energy X-rays. Being highly absorbed means that very few of the X-rays emitted by a PWN would escape the gas and dust that make up the supernova remnant of SN 1987A; most would be reabsorbed instead. The authors compute the residuals by subtracting these best-fit models from the X-ray spectra, shown in the bottom panels of Figure 2. The closer these residuals are to zero, the better the model. If this second model fits much better than the first, then the authors can say that there is very likely a PWN, and hence a neutron star, at the center of SN 1987A.
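To get a feel for this kind of model comparison, here is a minimal Python sketch. The functional forms and numbers are illustrative stand-ins, not the authors’ actual pipeline (real X-ray spectral fits are done on calibrated spectra in dedicated packages), but the logic is the same: fit both models to the data and compare the residuals and reduced chi-squared.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy stand-ins for the two competing models; purely illustrative.
def two_thermal(E, n1, kT1, n2, kT2):
    """Two simplified thermal (bremsstrahlung-like) components."""
    return n1 * np.exp(-E / kT1) + n2 * np.exp(-E / kT2)

def two_thermal_plus_pwn(E, n1, kT1, n2, kT2, A, gamma):
    """The same thermal components plus a power law standing in for a PWN."""
    return two_thermal(E, n1, kT1, n2, kT2) + A * E**(-gamma)

# Mock spectrum in place of the combined Chandra + NuSTAR data
E = np.linspace(0.5, 20.0, 200)                       # energy grid [keV]
rng = np.random.default_rng(0)
truth = two_thermal_plus_pwn(E, 10.0, 0.8, 3.0, 2.5, 0.5, 1.5)
err = 0.05 * truth
counts = truth + rng.normal(0.0, err)

# Fit both models; the hard-X-ray excess left behind by the thermal-only
# model mimics the residuals in the left panel of Figure 2.
for model, p0 in [(two_thermal, [10, 1, 3, 3]),
                  (two_thermal_plus_pwn, [10, 1, 3, 3, 1, 1.5])]:
    popt, _ = curve_fit(model, E, counts, p0=p0, sigma=err, maxfev=20000)
    chi2 = np.sum(((counts - model(E, *popt)) / err) ** 2)
    print(f"{model.__name__}: chi2/dof = {chi2 / (len(E) - len(popt)):.2f}")
```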

Two plots showing X-ray spectra and the two different models.

Figure 2: Combined X-ray spectra showing the number of X-ray photons observed in each energy bin of all Chandra and NuSTAR observations over the span of three years with different colors for each observation. Spectra from Chandra span 0.5–8 keV, and spectra from NuSTAR span 3–20 keV. The bottom panels show the residuals, or the spectra after the best-fit models have been subtracted off. Left: Spectra with a best-fit model containing just two thermal components. One can see that there is an excess of photons at energies higher than 10 keV in the bottom panel, as shown by the points all above the bright green zero line. Right: Same as the left, but the best-fit model has an absorbed pulsar wind nebula component in addition to the two thermal components. The excess X-rays at energies > 10 keV appear to be accounted for here. [Greco et al. 2021]

So What’s at the Center?

Unfortunately, the authors were unable to conclusively answer that question. They found that the model that includes a PWN is statistically slightly better than the one without (shown by the better residuals in Figure 2 at energies > 10 keV), but not by enough to say anything definitive. They were able to come up with a way that the higher-energy X-rays might be produced without a PWN, but it requires an extremely energetic shockwave expanding steadily outward at the fastest speed allowed, without slowing down. While this is possible, it is an unlikely physical scenario compared to simply having a neutron star at the center of SN 1987A.

Despite the uncertainty still surrounding the central object of SN 1987A, all is not lost! The authors also did some simulations showing that, if there really is a PWN at the center of SN 1987A, then by the 2030s, fewer of the lower energy X-ray photons will be absorbed, allowing these photons to be more easily detectable with Chandra or potential future X-ray observatories. So while the nature of what SN 1987A left behind remains a mystery for now, we are getting increasingly closer to solving it.

Original astrobite edited by Anthony Maue.

About the author, Brent Shapiro-Albert:

I’m a fourth year graduate student at West Virginia University studying various aspects of pulsars. I’m a member of the NANOGrav collaboration which uses pulsar timing arrays to detect gravitational waves. In particular I study how the interstellar medium affects the pulsar emission. Other than research I enjoy reading, hiking, and video games.

Lineup of five planets, including Earth, showing relative sizes of some known habitable-zone planets.

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Bridging the Planet Radius Valley: Stellar Clustering as a Key Driver for Turning Sub-Neptunes into Super-Earths
Authors: J. M. Diederik Kruijssen, Steven N. Longmore, & Mélanie Chevance
First Author’s Institution: Center of Astronomy, Heidelberg University, Germany
Status: Published in ApJL

Neptunes and Jupiters and Earths, Oh My!

Extrasolar planets, or exoplanets, have been theorized for centuries and studied firsthand since the 1990s. Much of the common classification of exoplanets is based on analogs in our own solar system: hot Jupiters, super-Earths, and super-Jupiters, just to name a few. The authors of today’s paper focus on two types of exoplanets: super-Earths (planets with more mass than Earth but less mass than Neptune) and sub-Neptunes (planets 1.7–3.9 times the size of Earth, but with a composition similar to Neptune’s).

plot of number of planets per star vs. planet size shows a distinct valley between 1.5 and 2 earth radii.

Figure 1: A histogram of planets with given radii from a sample of 900 Kepler systems. The decreased occurrence rate between 1.5 and 2.0 Earth radii is apparent. [Fulton et al. 2017]

Between these two classes of exoplanets, there is a radius “valley” in the range 1.5–2.0 Earth radii where the occurrence rate of known exoplanets is much lower. Since we can observe exoplanets above and below this radius, it’s unlikely that the valley is a result of observational limitations, so a physical mechanism is probably to blame. There are three main theories about the cause of the radius valley: photoevaporation, core-powered mass loss, and the planet forming with no gaseous outer layer to begin with (otherwise known as rocky planet formation). In a photoevaporation scenario, X-ray and/or extreme ultraviolet radiation from the host star causes the gaseous layers of a larger planet to evaporate, leaving behind only a rocky core. Photoevaporation can also disperse gas in the protoplanetary disk, which may also impact planetary formation. In core-powered mass loss, the energy radiated during the cooling of the rocky core erodes the gas envelopes of sub-Neptune-sized planets, again leaving behind the core. Rocky planet formation is exactly what it says on the label: a rocky planet is directly produced with no gaseous layers and no evolution required. All of these theories consider only properties and dynamics within the star–planet system. Today’s authors investigate the potential effects of stellar clustering on planet formation as a cause of the radius valley.

Compiling the Sample

The authors analyze a sample of exoplanets from the NASA Exoplanet Archive with radii of 1–4 Earth radii and orbital periods of 1–100 days. These radii and periods are chosen so that they only analyze planets that have had these values directly measured rather than derived from mass–radius relationships. The density of stars around the planet’s host star is part of the archival data, and the sample is split into “field” and “overdensity” subgroups that consist of low stellar density and high stellar density host star regions, respectively. In this case, what constitutes low and high densities is determined by the probability of there being many stars within 40 pc of the system: field stars have an 84% probability that there aren’t many neighboring stars, and overdensity stars have an 84% probability that there are. Additionally, only systems with ages of 1–4.5 billion years are considered, since younger systems may not be stabilized and the overdense group is too small in older systems. Finally, they constrain the host star mass to 0.7–2.0 solar masses to limit the chance of observing effects that are actually caused by mass differences rather than stellar clustering. With these cuts, the authors are left with 8 field planets and 86 overdensity planets, for a total of 94.
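In code, cuts like these amount to a handful of boolean filters. Below is an illustrative sketch assuming a CSV download from the NASA Exoplanet Archive; the column names (radius_earth, p_overdensity, and so on) are hypothetical placeholders, not the archive’s actual schema, and the 84% split is one plausible reading of the criterion described above.

```python
import pandas as pd

# Hypothetical column names standing in for the archive's actual schema
planets = pd.read_csv("exoplanet_archive.csv")

sample = planets[
    planets["radius_earth"].between(1.0, 4.0)      # radii of 1-4 Earth radii
    & planets["period_days"].between(1.0, 100.0)   # periods of 1-100 days
    & planets["age_gyr"].between(1.0, 4.5)         # system ages of 1-4.5 Gyr
    & planets["star_mass_sun"].between(0.7, 2.0)   # hosts of 0.7-2 solar masses
]

# Split on the 84%-probability criterion, reading p_overdensity as the
# probability of many neighbouring stars within 40 pc of the system
field = sample[sample["p_overdensity"] < 0.16]
overdensity = sample[sample["p_overdensity"] > 0.84]
print(len(field), len(overdensity))   # the authors are left with 8 and 86
```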

Results

Three panel plot showing properties of the planets in the authors’ sample. See caption for details.

Figure 2: Left: The orbital periods and radii of the planets. The radius valley is marked with the black line, and its uncertainty is given by the grey stripe. Center: The planetary radii versus the density of their stellar fields, with the grey line representing a constant radius. Right: A histogram of how many planets have each radius. Note that the radius is plotted on a logarithmic scale in all three panels. [Kruijssen et al. 2020]

Simply plotting the densities and radii suggests that the authors’ idea holds up (Figure 2). In the middle panel, the gray line represents a constant radius within the radius valley. The dearth of planets around this line shows that the radius valley exists, but how does that support their idea? The planets of field stars all lie above the radius valley, while a little more than half of the planets in overdensities lie below it. If residing in a dense stellar field can cause dynamical and radiative effects that shrink a planet’s radius, then finding more small planets in overdense regions is exactly what we’d expect.

But what if it’s really the effect of some other properties of the systems? Comparing the planets’ host star masses, metallicities, and ages shows no clear differences that might suggest the trend is caused by one of those characteristics. These data are compiled in Table 1. But what about the distance from Earth to the system? The farther from Earth a system is, the less likely we are to be able to observe smaller planets. Could that be a factor skewing the numbers, since it could mean we just aren’t seeing the smaller planets? On average, the field systems are closer to Earth, but all of their planetary radii lie above the valley. The authors therefore conclude that distance is probably not a contributing factor either.

Table of the characteristics of the authors' planet subsample. See caption for details.

Table 1: Characteristics of the sample planets. The authors split the sample into three groups: field planets, overdensity planets with radii above the radius valley, and overdensity planets with radii below the radius valley. The median stellar masses, metallicities, ages, and distances from Earth for each group are given with their uncertainties. The authors conclude that these values are all close enough to suggest that they are not the cause of the radius valley. [Kruijssen et al. 2020]

But what about those other mechanisms we discussed earlier? The authors consider photoevaporation within the system, core-powered mass loss, and rocky formation alongside the potential effects of densely clustered stars near the system. They conclude that stellar clustering alone can’t be responsible for the trends seen in planetary radius, but alongside one of the other three theories, clustering is certainly a potential contributor to the radius valley. The clustering would, however, affect each of the three scenarios differently. For the core-powered mass-loss scenario, it is unlikely that clustering has any direct effect, since that mechanism is purely internal to the planet. The likelihood of rocky planet formation, on the other hand, can be increased by clustering effects, since neighboring stars could cause photoevaporation within the protoplanetary disk. This would decrease the amount of gas in the disk, increase the dust-to-gas ratio — the ratio of solid particles to gaseous particles in the disk — and thus increase the likelihood of rocky formation. Additionally, clustering could cause more stellar encounters with the system, which in turn could change the orbits of the planets and the effects of photoevaporation inside the system.

In this paper, the authors conclude that, in addition to previous theories, the dynamic and photoevaporative effects of stars near planetary systems can contribute to the radius valley between super-Earth and sub-Neptune exoplanets. Although this doesn’t provide definite answers to why this valley exists, it provides another piece to the puzzle. Solving the mystery of this radius valley can give us more insight into planetary formation mechanisms in extrasolar systems.

Original astrobite edited by Mike Foley.

About the author, Ali Crisp:

I’m a third year grad student at Louisiana State University. I study hot Jupiter exoplanets in the Galactic Bulge. I am originally from Tennessee and attended undergrad at Christian Brothers University, where I studied physics and history. In my “free time,” I enjoy cooking, hiking, and photography.

Image of a galaxy with a long, streaming tail stretched out behind it.

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Spectacular HST observations of the Coma galaxy D100 and star formation in its ram pressure stripped tail
Authors: William J. Cramer et al.
First Author’s Institution: Yale University
Status: Published in ApJ

Galaxy clusters are the largest gravitationally bound structures in the universe, exceeded in size only by the vast cosmic web in which they are embedded. Clusters contain anywhere from hundreds to thousands of galaxies, which they accrete through gravity, and they can reach several megaparsecs in size. However, galaxy clusters are not gentle giants. These huge objects contain extremely hot X-ray-emitting plasma, and they can produce gravitational tidal forces strong enough to tear galaxies apart.

Because of these cluster properties, galaxies in clusters and galaxies elsewhere in the universe (called field galaxies) can differ dramatically. Galaxies that have entered a cluster environment are more often elliptical, have low star formation rates, and contain very little gas (from which new stars are formed). This so-called morphology–density relation has been well-established for decades — and although a whole host of theories exist, the specific causes of it are still unclear.

Enter Cramer et al., the authors of today’s paper.

This work presents observations of ram-pressure stripping, a mechanism that can explain the evolution of galaxies from gas-rich to gas-poor when entering a cluster. A galaxy moving through a medium (in this case, the hot intracluster plasma) can have loosely bound gas removed by drag forces from that medium. Imagine what it would look like if you poured a bag of flour over your head, and then stuck your head out of the window of a fast-moving car (not that I’d recommend this).

Along Came a Jellyfish

The authors’ evidence for ram-pressure stripping comes in the form of a jellyfish galaxy. In this case, they examine D100, a barred spiral galaxy close to the centre of the Coma cluster. Jellyfish galaxies represent an extreme example of ram-pressure stripping, where the stripped gas streams out in a long tail behind the galaxy, giving them their distinctive look. Think back to the flour-head-car-window example — you would probably expect to see something similar.

Left: composite photograph of a galaxy with a long, trailing red tail. Right: photograph of a jellyfish with a bright head and long, trailing tentacles

Figure 1: Left: Composite image of D100 galaxy, showing the stripped gas trailing the galaxy disc, which is moving from left to right in this image. Right: A jellyfish, for comparison. [Left: Cramer et al. 2019; right: Alexander Semenov]

Using new Hubble Space Telescope (HST) observations, this work examines both the galaxy and the long tail trailing behind it, which contains far fewer stars than the main galactic disc and so is much fainter. The photo of D100 in Figure 1 is a composite image, combining the HST observations of starlight with observations of the Hα emission line from the Subaru telescope that show the presence of excited hydrogen gas. This Hα emission is shown in bright red, and demonstrates the dramatic effect that the Coma cluster is having on this galaxy.

Hα emission from galaxies is often an indicator of ongoing star formation (although it can have other sources). However, it is the combination of Hα measurements and the powerful HST observations that makes this work possible. Thanks to the exceptional resolution of Hubble, and the authors’ multiple observation bands — F814W (red/near-IR wavelengths), F475W (blue), and F275W (near-UV) — Cramer and collaborators are able to study not only how much star formation is taking place, but also where in the tail it is happening.

A Tail of Three Bands

The authors’ colour analysis shows that star formation stopped long ago in the galaxy outskirts, stopped more recently closer to the centre, and is still ongoing in the core. This indicates that the star-forming gas was removed from the galaxy outskirts first, causing outside-in quenching.

Photograph of a nearly face-on spiral galaxy with a stream of dark dust extending from its center.

Figure 2: HST image of D100. Arrow is pointing to a star-forming clump, embedded in a dark region of dust that is also being stripped. [Adapted from Cramer et al. 2019]

A zoom-in on the HST image (Figure 2) also reveals a small, bright patch, located in a cloud of dust. The colour of this patch, which is bright in the blue and UV bands and fainter in red, indicates that it is a clump of ongoing star formation. In fact, the HST observations find 37 bright patches (shown in Figure 3), and analysis of their colours shows 10 of them to be clumps of star formation, all of which are found in the tail of gas. The 27 other sources are mostly background sources, such as distant galaxies.
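The colour test itself is simple arithmetic on magnitudes: brighter sources have smaller magnitudes, so a source with a smaller (brighter) UV magnitude than red magnitude has a blue colour index. A toy illustration follows, with invented magnitudes and a toy threshold rather than the paper’s measured values.

```python
# Invented example magnitudes, purely to illustrate the colour test;
# remember that brighter sources have *smaller* magnitudes.
sources = {
    "clump":      {"F275W": 23.1, "F475W": 23.6, "F814W": 24.0},
    "background": {"F275W": 25.8, "F475W": 24.2, "F814W": 22.9},
}

for name, m in sources.items():
    uv_minus_red = m["F275W"] - m["F814W"]   # negative = UV-bright = blue
    verdict = ("candidate star-forming clump" if uv_minus_red < 0
               else "likely old or background")
    print(f"{name}: F275W - F814W = {uv_minus_red:+.1f} -> {verdict}")
```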

map showing D100 and the locations of 37 other bright sources around it, as well as outlines of the streamer emitted from the galaxy's center.

Figure 3: Map of 37 bright sources around D100. Those labelled in blue/underlined are star-forming clumps [Cramer et al. 2019]

The main conclusion of the paper is that the stripped gas can form stars outside of the galactic disc, but that it doesn’t form them uniformly throughout the tail. Instead, stars form in these clumps, which are up to 100 parsecs in size. The brightness of these regions is, however, insufficient to produce all of the Hα emission that is observed. This indicates that another mechanism (such as gas shocks) must be responsible for some of this emission, but the precise nature of this mechanism remains, for now, a mystery.

Although this paper is a convincing endorsement of ram-pressure stripping, it is important to note that ram-pressure alone is not enough to explain all of the differences between cluster and field galaxies. For example, it provides no explanation of why disc galaxies are rarer in clusters. A full description of the relationship between galaxies and their environments is likely to be a complex combination of different effects, in which ram-pressure stripping will play a small, but important, role.

Original astrobite edited by Alex Gough and Kate Storey-Fisher.

About the author, Roan Haggar:

I’m a PhD student at the University of Nottingham, working with hydrodynamical simulations of galaxy clusters to study the evolution of infalling galaxies. I also co-manage a portable planetarium that we take round to schools in the local area. My more terrestrial hobbies include rock climbing and going to music venues that I’ve not been to before.

image showing a map of the Milky Way from Gaia data, with an overlaid sinusoidal stream of stars.

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: The Spur and the Gap in GD-1: Dynamical Evidence for a Dark Substructure in the Milky Way Halo
Authors: Ana Bonaca, David W. Hogg, Adrian M. Price-Whelan, and Charlie Conroy
First Author’s Institution: Center for Astrophysics | Harvard & Smithsonian
Status: Published in ApJ

Suppose I told you that a fire-breathing dragon lives in my garage, but I cannot show you the dragon because it is floating, invisible, and spits heatless fire. You would not believe that I had any dragons at all. Carl Sagan, the creator of this analogy, argues that my claim of the dragon only makes sense if there is some experiment that could disprove it. In other words, scientific claims have to be testable. Now take a look at the theory of dark matter: astronomers say that the Milky Way is full of invisible blobs of dark matter called subhalos, which only interact with normal matter through gravity. However, this claim sounds a lot like the invisible dragon in my garage unless there is some way to observe the effects of those subhalos.

You would expect the dragon in my garage to leave evidence, like footprints or signs of its fire-breathing. One way to detect the invisible subhalos, used by the astronomers of today’s paper, is by observing stellar streams. Stellar streams are groups of stars that have been stretched out along their orbits in the outer region of the Milky Way. If a subhalo flies into a stellar stream, the gravitational interactions can rip a hole in the stream. The disrupted stars then fly away from the hole, and observers on Earth see them piled up into a spur (see Figure 1). This method was previously applied to the Pal 5 stellar stream, but the evidence was not conclusive enough to prove that the stream was disturbed by a subhalo. The authors today have the advantage of a clear view of the GD-1 stream from the Gaia space telescope, as described in this astrobite. In the Gaia data, a spur and a gap are clearly visible in the stream, which points to possible interactions with a dark matter subhalo.

two plots showing the positions of the stars in the GD-1 stellar stream and a model of a subhalo-perturbed stream. two gaps are visible in the stream in both plots, as well as a parallel spur of stars above the main stream.

Figure 1: Top: Positions of stars in the GD-1 stream, observed by the Gaia space telescope. The spur and gaps are labeled with arrows. Both axes indicate the projected sky position of stars along and perpendicular to the stream orbit. Bottom: Positions of stars in a model where GD-1 was perturbed by a dark matter subhalo 495 million years ago (subhalo parameters shown in the legend). These two panels are in excellent agreement. [Bonaca et al. 2019]

The high spatial resolution and precision of the data allow the authors to create a model of the orbital history of GD-1. The motion of the stars is determined by the gravitational field throughout the orbit, which, in this case, is the well-studied Milky Way gravitational field plus any potential perturbers, such as dark matter subhalos, molecular clouds, and globular clusters. Thus, the map of the stellar stream encodes useful information about past interactions. The authors ran a suite of simulations, varying the mass and velocity of the perturber, how far away it was at closest approach, and the time of the encounter. The code used to calculate the orbits of the stars is publicly available for interested readers.

stellar stream

Figure 2: Artist’s impression of a stellar stream arcing high in the Milky Way’s halo. [NASA]

The best-fit parameters used to construct the final model are shown in the bottom panel of Figure 1. In this scenario, a dark matter subhalo of 5 million solar masses came within 15 pc of the stellar stream at a velocity of 250 km/s, and this encounter happened 495 million years ago. This dense, massive, high-velocity flyby gave the stars a velocity kick, which opened a gap. The perturber also kicked the stars perpendicular to the stream motion and set some stars on a loop around the original unperturbed orbit, producing the spur when viewed in projection. Is this excellent agreement with observational data a sign of the elusive dark matter dragon?
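For a rough sense of scale, the impulse approximation (a standard back-of-the-envelope estimate for fast flybys, not the authors’ full stream model) gives the size of such a velocity kick as Δv ≈ 2GM/(bv):

```python
# Impulse-approximation estimate of the kick, using the best-fit
# encounter parameters quoted above. Order-of-magnitude only.
G = 4.301e-3       # gravitational constant [pc (km/s)^2 / Msun]
M = 5e6            # subhalo mass [Msun]
b = 15.0           # closest-approach distance [pc]
v = 250.0          # flyby velocity [km/s]

dv = 2 * G * M / (b * v)
print(f"velocity kick ~ {dv:.0f} km/s")   # roughly 11 km/s
```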

The authors ruled out the possibility that the perturber is a known object. They traced the known orbits back in time for Milky Way globular clusters, satellite dwarf galaxies, and the Milky Way disk. No known object came close enough to GD-1 to produce the observed spur and gap. Thus, the authors conclude that a dark matter subhalo is the most probable perturber that caused the spur and the gap.

While this evidence is compelling, the authors want other, independent ways of confirming the nature of the perturber. They highlight that this hypothesis is testable by measuring the radial velocities of the stars. The authors matched their models to observations using spatial positions alone, which means the accepted models can have the stars at the same locations but moving with different radial velocities. Future observations with the Hubble Space Telescope can measure the radial velocities of stars in this stream, providing a further test of the different perturber models.

This paper used simulations to show that the observed spur and gap in GD-1 are most likely caused by a dark matter subhalo. The authors demonstrated an exciting avenue for finding the invisible subhalos, and future research may uncover more properties of these subhalos and compare them to the predictions of dark matter theory. Perhaps the dark matter dragon isn’t so elusive after all.

Original astrobite edited by Catherine Manea and Keir Birchall.

About the author, Zili Shen:

Hi! I am a PhD student in Astronomy at Yale University. My research focuses on ultra-diffuse galaxies and their globular cluster populations. Since I came to Yale, I have worked on two “dark-matter-free” galaxies NGC1052–DF2 and DF4. I have been coping with the pandemic and working from home by making sourdough bread and baking various cookies and cakes, reading books ranging from philosophy to virology, going on daily hikes or runs, and watching too many TV shows.

Illustration of the TESS satellite in front of the distant Sun.

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Exploring Trans-Neptunian Space with TESS: A Targeted Shift-Stacking Search for Planet Nine and Distant TNOs in the Galactic Plane
Authors: M. Rice, G. Laughlin
First Author’s Institution: Yale University
Status: Published in PSJ

The Transiting Exoplanet Survey Satellite (TESS), which just recently finished its primary mission to search for planets around nearby, bright stars, has also provided a treasure trove of other information for astronomers. As it stares at the sky, waiting to catch the brief flicker of a distant planet passing in front of its host star, TESS’s steady, unwavering gaze catches everything from stellar pulsations, to gamma-ray bursts, to distant solar system objects tumbling through the dark.

Illustration of a dark body in the distant outer reaches of the solar system.

Artist’s rendering of the hypothetical Planet Nine in the outskirts of our solar system. [Caltech/R. Hurt, IPAC]

It has been hypothesized that among these distant solar system bodies lies a ninth planet orbiting our Sun. To date, searches for this hypothetical world have turned up little of interest, but with an expected size not much larger than Earth and an orbit thought to be roughly ten times as distant as Neptune’s, “Planet Nine” would appear incredibly faint due to the small amount of sunlight that reaches it. Furthermore, if it happens to lie near the star-studded galactic plane on the sky, it would be incredibly difficult to pick out in images.

An image of Planet Nine could be in some of the many exposures that TESS has already taken, although likely not in plain sight. Given that TESS takes 30-minute exposures of each patch of sky, the signal from our distant solar system companion would probably be extremely weak and hard to detect. One way around this issue is to “stack” multiple exposures on top of each other. This acts to boost the signal from any faint sources in an image above any background noise from the camera. Unfortunately, even the most distant solar system bodies move across the TESS field of view between exposures. Because the object is in a different place in each image, you lose any benefit from simply stacking the images on top of each other in place.

To solve this problem, the authors of today’s paper make use of a clever technique called “shift stacking”. Although an object will appear at a different location on each exposure, one can shift the sequence of exposures in such a way that the same pixels on each image correspond to the location of the object. By doing so, the images can then be stacked and added together and an object too faint to be visible in a single image now pops out.
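A minimal numpy sketch of the core idea follows, using whole-pixel shifts (np.roll) as a crude stand-in for proper sub-pixel resampling, with an injected source far too faint to see in any single frame. All numbers are invented for illustration.

```python
import numpy as np

# frames: (N, ny, nx) image stack; offsets: per-frame (dx, dy) pixel
# positions of the source relative to frame 0.
def shift_stack(frames, offsets):
    stacked = np.zeros(frames.shape[1:])
    for frame, (dx, dy) in zip(frames, offsets):
        stacked += np.roll(np.roll(frame, -dy, axis=0), -dx, axis=1)
    return stacked

# Inject a source at half the per-frame noise level, drifting one pixel
# per frame: invisible in any single exposure.
rng = np.random.default_rng(1)
frames = rng.normal(0.0, 1.0, size=(50, 32, 32))
path = [(i % 20, 0) for i in range(50)]          # (dx, dy) for each frame
for i, (dx, dy) in enumerate(path):
    frames[i, 16 + dy, 5 + dx] += 0.5

stacked = shift_stack(frames, path)
print("stacked S/N at source:", stacked[16, 5] / stacked.std())
```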

diagram showing the successive steps involved in shift-stacking images.

Figure 1: Top: An illustration of the process by which a series of images, taken at different times, are combined to convert a faint, moving source into a much brighter, single point. Bottom: When searching for an undetected source, the path on the image is unknown. In this case, the algorithm must try out different guesses for the correct trajectory. The “correct” path is the one that produces the strongest signal on the final image. [Rice & Laughlin 2020]

A diagram detailing this technique is shown in Figure 1. The shift-stacking process described above is shown in the top panel. The tricky part, however, is when you don’t know what path the object you’re searching for actually follows. In this case, one can use a computer algorithm to try out many different possible paths for the object (shown in the lower panel). The path that produces the strongest signal on the stacked image is likely the correct one.

Trying to guess the correct path for an undetected object can be a slow ordeal. One simplification, however, makes this task much easier to conquer. Most outer solar system objects move incredibly slowly because they are so far from the Sun. They move so slowly, in fact, that their apparent motion on the sky is almost entirely dominated by the Earth’s motion. This fact greatly narrows the range of possible guesses for the path of any undetected body. Because the Earth’s motion dominates, a body’s path across the images depends only on its distance from the Sun, and not on the specific shape of its orbit.
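Continuing the sketch above (it reuses frames and shift_stack from that block), a blind search then reduces to trying trial paths, here parameterized only by a drift rate standing in for the assumed distance, and keeping whichever maximizes the stacked signal:

```python
# Trial drift rates stand in for trial distances: more distant objects
# drift more slowly across the frames. All numbers are invented.
def best_rate(frames, rates, n_frames=50):
    peaks = []
    for rate in rates:
        trial = [(int(round(i * rate)) % 20, 0) for i in range(n_frames)]
        peaks.append(shift_stack(frames, trial).max())
    return rates[int(np.argmax(peaks))]

print("best drift rate:", best_rate(frames, [0.5, 1.0, 2.0]))  # expect 1.0
```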

plots showing multiple results of shift-stacking analysis

Figure 2: An application of the shift-stacking technique to three previously known outer solar system bodies: Sedna (top), 2015 BP519 (middle), and 2007 TG422 (bottom). In the leftmost column, the known orbital parameters are used to calculate the trajectory of the object on the image. Next, the trajectory of the object is guessed using both a polynomial (middle) and a PCA (right) technique to model the baseline flux. In all cases, the objects are recovered. [Rice & Laughlin 2020]

To verify the effectiveness of this shift-stacking technique, the authors first attempt to recover an image of three known outer solar system objects: Sedna, 2015 BP519, and 2007 TG422. The resulting shift-stacked images of these bodies are shown in the left column of Figure 2. In these images, the object shows up as a bright point. Some of the shift-stacked images also contain prominent streaks. It turns out that these are caused by much closer and brighter asteroids that happened to pass through the field of view of the telescope.

Next, the authors attempt to recover these three outer solar system bodies without telling the algorithm ahead of time about the trajectories of these bodies. Instead, the algorithm tries to guess the path by maximizing the brightness of the point source in the shift-stacked images. This is shown in the middle and right hand columns of Figure 2. Here, “polynomial” and “PCA” refer to the technique used to subtract the baseline flux from the images. Although the polynomial technique is less computationally expensive, it sometimes results in the object itself being removed from the images.

Lastly, the authors apply their blind search algorithm to TESS sectors 18 and 19. Although this is only a small piece of the observing footprint of the telescope, these two sectors partially overlap with the galactic plane, which is where the shift-stacking technique is particularly useful. In total, the authors provide a list of 17 new outer solar system body candidates, which will need to be followed up with ground-based observations to confirm. From the TESS images, the distance, brightness, and size of the objects are estimated. Unfortunately, none appear anywhere near as large as what is expected for the hypothetical Planet Nine. It is, however, exciting that this technique finds so many new candidate objects from such a small search area. Presently, there are only about 100 known distant outer solar system bodies! Although this technique is quite computationally expensive to run, a more clever implementation involving convolutional neural networks could allow it to be run on the entire sky.

Original astrobite edited by Bryanne McDonough.

About the author, Spencer Wallace:

I’m a member of the UW Astronomy N-body shop working with Thomas Quinn to study simulations of planet formation. In particular, I’m interested in how this process plays out around M stars, which put out huge amounts of radiation during the pre-main-sequence phase and are known to host extremely short-period planets. When I’m not thinking about planet formation, I’m an avid hiker/backpacker and play bass for the band Night Lunch.

VLA-COSMOS

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Illuminating the dark side of cosmic star formation two billion years after the Big Bang
Authors: M. Talia et al.
First Author’s Institution: University of Bologna & INAF, Italy
Status: Accepted to ApJ

The modern terminology of galaxies is extraordinarily anthropomorphic; blue, star-forming galaxies are “alive”, and red galaxies that have ceased star formation are “dead”. So then how do galaxies “live”? In other words, why do some galaxies form lots of stars while others do not? Are the dead galaxies older, or do they simply mature faster? What role do external forces such as galaxy mergers play in the lives of galaxies? How can their internal structures (bars, arms, and bulges) or internal forces (supernovae and active supermassive black holes) work to enhance or inhibit star formation? These details have been the focus of the past two decades of galaxy studies, trying to answer the question: How and when did galaxies assemble their mass of stars?

The highest-level diagnostic we can construct to help us understand the big picture of star formation in galaxies is the cosmic star formation rate density (SFRD) diagram. It maps the average rate at which stars are formed in the universe at a given time, per unit volume. The physics, then, is a matter of both supply and efficiency: how much gas is available to be formed into stars (supply), and how well did galaxies turn that gas into stars (efficiency)? Constructing the SFRD diagram can then help us to understand the interplay between gas and the processes that can act to enhance or inhibit star formation.

star formation rate density diagram

Figure 1: The star formation rate density diagram, including many literature measurements focusing on the early universe (z > 3). The results from this paper indicate that a missing population of galaxies might account for a large portion of the SFRD at z > 4. [Talia et al. 2021]

Although one can measure the rate of star formation in a given galaxy, and then extend that study to perhaps a hundred or even a million galaxies, one will never be able to count the number of stars forming in every galaxy at every point in the history of the universe. Such a census will remain technologically impossible far into the future, not least because our own Milky Way galaxy obscures the light from many of the distant galaxies one would need to measure. Despite these challenges, astronomers have found clever means of estimating the SFRD for ~75% of the history of our universe by carefully constructing unbiased, representative samples of galaxies such that specific inferences from those samples also hold for the general, wider population of galaxies. As shown in Figure 1, the SFRD rises for the first 3 billion years before peaking at a redshift of z ~ 2, after which it declines for the remaining 10 billion years until today.

Observing star formation rates during the first 2 billion years of the universe (z > 3) is incredibly difficult. Not only were the first galaxies intrinsically smaller and fainter than the galaxies we see today, but beyond z ~ 6 the universe was pervaded by a dense fog of neutral hydrogen (from which galaxies formed!) that obscures their light. Given these difficulties, these incredibly early galaxies are only now being observed in large numbers.

The authors of today’s paper point out that the existing samples of z > 3 galaxies are not at all representative. For the most part, and almost exclusively at z > 6, these galaxies are discovered via their bright ultraviolet (UV) emission, which has been redshifted so that it is observed in the optical and infrared. Not only must these galaxies be incredibly bright to be found at such large distances, but their intense UV emission translates directly to an enormous star formation rate. That is, the feature that makes them easy to find also makes their star formation rates high. This is a huge bias in our samples! To overcome this bias, the authors turn to radio wavelengths. They used the large VLA-COSMOS radio survey to find 197 radio sources that have no counterpart at near-infrared wavelengths. These, they argue, are heavily dust-obscured galaxies without any UV emission — the missing link.

Median galaxy template

Figure 2: Median galaxy template (top) fitted to stacked observations in many broadband filters (bottom). The derived average physical parameters, as well as the redshift distribution, are also shown. [Talia et al. 2021]

The authors’ first test was to stack the broadband brightness measurements of all the galaxies together so that they could predict what the average total spectrum of these galaxies would look like, and hence their average properties. The lack of blue light on the left-hand side of the spectrum indicates that there is no luminous UV component as seen in the UV-bright galaxies of previous samples. What’s more, the authors estimate an incredibly high dust extinction of a whopping 4.2 magnitudes (nearly a factor of 50)! These galaxies are super dusty indeed.
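The “factor of 50” is just the standard magnitude-to-flux conversion: an extinction of A magnitudes suppresses the flux by a factor of 10^(A/2.5).

```python
A = 4.2                  # estimated dust extinction in magnitudes
print(10 ** (A / 2.5))   # flux suppression factor: ~47.9
```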

Using a similar approach to the stacked analysis, the authors then estimate the redshift and star formation rate for each of the 98 galaxies for which they could reliably measure an infrared brightness. Due to their unique radio selection approach, the authors are able to compile a large sample of very high redshift galaxies at z > 4.5. They estimate the redshifts and star formation rates for the remaining 99 sources as well, but with much greater uncertainty.

Lastly, the authors compute the SFRD using their sample, taking care to correct for any dusty galaxies they may have missed. This is a challenging correction to make, so the authors do so by adopting an agnostic approach, seeing how their SFRD looks depending on how complete their sample might be.

As shown by the red bars in Figure 1, it is precisely this population of highly dust-obscured galaxies at z > 3, invisible to optical and infrared surveys, that may constitute a significant portion of the star formation rate density in the early universe relative to other, less-dusty samples!

These findings highlight the surprising extent of our missing knowledge of the first galaxies, and they encourage investment in future radio surveys with ALMA and follow-up with JWST.

Original astrobite edited by William Saunders with Lukas Zalesky.

About the author, John Weaver:

I am a second year PhD student at the Cosmic Dawn Center at the University of Copenhagen, where I study the formation and evolution of galaxies across cosmic time with incredibly deep observations in the optical and infrared. I got my start at a little planetarium, and I’ve been doing lots of public outreach and citizen science ever since.

RS Puppis

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Identifying Candidate Optical Variables Using Gaia Data Release 2
Authors: Shion Andrew, Samuel J. Swihart, and Jay Strader
First Author’s Institution: Harvey Mudd College
Status: Accepted to ApJ

The Wonders of Gaia

1.5 million kilometers from Earth, at the L2 Lagrange point, a space observatory not much larger than a car traverses space and gazes deeply into the Milky Way. Launched in 2013, Gaia painstakingly constructs a three-dimensional map of our galaxy. Its primary objective is to determine the brightness, temperature, composition, and motion of over a billion astronomical objects (mostly stars) … and I thought grad school was demanding!

Unlike many spacecraft, Gaia observes each of its targets many times (roughly 70 times, on average). As a result, it offers the rare ability to expose the variability of the celestial bodies under its watchful eye. These observations provide us the opportunity to advance our understanding of stellar evolution and the dynamical nature of our galaxy’s constituents.

Isolating Variables

Well-studied variables, such as RR Lyrae stars, Cepheids, and long-period variables, provide high-quality measurements; however, sources with short-term variability are harder to detect, which limits the number of variables that can be studied. In fact, Gaia Data Release 2 (DR2), the mission’s most recent data release, contains G-band photometry measurements for ~1.7 billion sources but variability information for only 550,000 of them. To address this dearth of variability observations, the authors conduct a thorough review of variable stars that are confirmed in DR2. They contend that variable stars can be identified by targeting stars with relatively high photometric uncertainties. If so, this method may prove critical for building a robust sample of variable stars for future studies!

In Figure 1, the authors plot Gaia G-band magnitude vs. G-band magnitude uncertainty for 1,000 stars in a small region of the sky. The patch of the sky was centered on a well-studied RR Lyrae variable star, TY Hyi (G = 14.3). The “baseline curve”, where the bulk of the stars (the black dots) lie, is the expected distribution for non-variables. Away from this curve, the variable star (the red dot) has a much larger uncertainty than the stars with a similar brightness on the baseline curve.

Gaia DR2 stars

Figure 1. A G-band magnitude vs. G-band magnitude uncertainty plot of 1,000 DR2 stars showing the expected “baseline curve” along which most non-variable stars lie. The variable star (red dot) does not fall on the baseline curve, but instead has a noticeably larger G-band magnitude uncertainty than other stars of comparable magnitude. [Andrew et al. 2021]

In Figure 2, the authors expand their analysis and consider 70,680 variable stars with photometric periods < 10 days. They now also consider 2,000 random non-variable stars. In this plot, nearly all the variables lie above the baseline curve, with higher uncertainties compared to non-variable stars of similar magnitudes. Moreover, they find that stars with higher variability amplitudes feature higher uncertainties.

Notably, the authors acknowledge that the G-band magnitude uncertainty varies with the number of observations (at fixed brightness, the uncertainty decreases as the number of observations increases), and they correct for this by using the weighted average of the individual photometry measurements for each source.

variable stars in Gaia DR2

Figure 2: G-band magnitude vs. G-band magnitude uncertainty for 70,680 variable stars with periods less than 10 days, colored by their optical variability amplitude. The black points are a random sample of 2,000 stars, illustrating a baseline curve for non-variable stars. The dashed lines are the mean magnitude uncertainty of variables, in three bins from 0.0 to 1.2 mag in variability amplitude. [Andrew et al. 2021]

Exploring Other Catalogs

The authors then calculate a standard deviation, σ, from the baseline curve for sources in bins of G-band magnitude. They subsequently define a parameter, Gσ, which measures how far a given source’s G-band magnitude uncertainty in Gaia DR2 lies above the baseline curve, in units of the σ for that bin. They use this parameter to define a threshold of Gσ = 3 for identifying variable stars.
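Here is a sketch of how such a selection could be implemented, reading the definition above as “how many bin-scatters a source’s uncertainty sits above the baseline.” The array names are placeholders, not Gaia column names, and the baseline and scatter estimators are illustrative choices.

```python
import numpy as np

def g_sigma(g_mag, g_err, bin_edges):
    """Deviation of each source's uncertainty above its bin's baseline,
    in units of that bin's scatter sigma."""
    gs = np.zeros_like(g_err)
    which_bin = np.digitize(g_mag, bin_edges)
    for b in np.unique(which_bin):
        in_bin = which_bin == b
        baseline = np.median(g_err[in_bin])      # baseline curve in this bin
        sigma = np.std(g_err[in_bin]) or 1.0     # guard against one-star bins
        gs[in_bin] = (g_err[in_bin] - baseline) / sigma
    return gs

# Candidate variables are everything above the threshold:
# candidates = g_sigma(g_mag, g_err, np.arange(14.0, 20.0, 0.5)) > 3
```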

But how effective is this method at finding short-period variables in Gaia’s DR2? To address this, the authors check the reliability of their newly defined threshold by scanning a series of short-period (<10 days) variable star catalogs with photometric G-band magnitudes between 14 and 19.5. They first inspect the Catalina Real-Time Transient Survey, which contains 70,680 variables. From their analysis, they find that 96% of the variables in this catalog have Gσ values > 3; the remaining 4% were masked because of potential contamination by a nearby star, which can generate false positives. They also inspect the Zwicky variable star catalog (see here for more on the Zwicky Transient Facility), which contains 556,521 variables. Similarly, they find that a significant percentage (94%) are recovered when applying the Gσ > 3 threshold; the remaining 6% are also excluded because of neighboring stars.

Furthermore, this method also proves effective at identifying standard RR Lyrae and Cepheid variables (which can have periods up to 70 days). From Gaia DR2, they find that 100% of the Cepheids (8,465 sources) and 99.8% of RR Lyrae (107,418 sources) have Gσ > 3.

Confident in their method, they proceed to analyze the entirety of DR2, and they catalog 9.3 million candidate variable stars, a significant increase from the 550,000 sources reported in DR2 prior to this study.

Hidden No More

The authors of today’s paper provide an immensely powerful tool for identifying variable stars. They show that variable stars in Gaia’s latest data release, which contains over 1.7 billion sources, tend to have larger photometric uncertainties than non-variable stars, and that the larger a star’s variability amplitude, the larger its photometric uncertainty. They quantify this relation with the parameter Gσ, which traces how far a star lies from the baseline curve of non-variable stars. Using a threshold of Gσ = 3, they recover over 90% of the short-period variables in other variable-star catalogs.

Variable stars have significantly contributed to some of the largest advances in modern astronomy: they have helped us define cosmological parameters, enhanced our understanding of the distance scale of the universe, and provided the information needed to calculate the ages of the oldest stars. Accurately identifying and studying these objects promises to unveil even more about our universe. Fascinating instruments like Gaia will serve as the bridges to these wonderful discoveries.

Original astrobite edited by Ellis Avallone.

About the author, James Negus:

James Negus is currently pursuing his Ph.D. in astrophysics at the University of Colorado Boulder. He earned his B.A. in physics, with a specialization in astrophysics, from the University of Chicago in 2013. At CU Boulder, he analyzes active galactic nuclei utilizing the Sloan Digital Sky Survey. In his spare time, he enjoys stargazing with his 8” Dobsonian Telescope in the Rockies and hosting outreach events at the Fiske Planetarium and the Sommers–Bausch Observatory in Boulder, CO. He has also authored two books with Enslow Publishing: Black Holes Explained (Mysteries of Space) and Supernovas Explained (Mysteries of Space).

cosmic clocks

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Eppur è piatto? The cosmic chronometer take on spatial curvature and cosmic concordance
Authors: Sunny Vagnozzi, Abraham Loeb, Michele Moresco
First Author’s Institution: Kavli Institute for Cosmology, University of Cambridge, United Kingdom
Status: Submitted to ApJ

Though astronomers have been studying the universe for hundreds of years, there are still a lot of things we do not know about it. We do not know whether it is finite or infinitely large, and we cannot determine its overall shape. Nevertheless, we know that we can describe the universe with a four-dimensional spacetime, the combination of our three-dimensional space and time. This spacetime is not rigid, but can be distorted and deformed by the content of the universe, like a bowling ball distorts a spandex sheet. The matter (and energy) also changes how the space part of spacetime is curved — and we can measure this curvature.

There are three possibilities for the curvature of the universe, illustrated in Figure 1: the universe can be closed, flat, or open. A closed universe would be shaped like a sphere (although with a three-dimensional surface), meaning that if you walked along a straight line you would inevitably end up back where you started. Also, if you and a friend started walking on parallel paths, your paths would cross at some point. An open universe is the “opposite” of this: the distance between you and your friend would increase with each step, and you would never end up near each other. A flat universe is exactly in between these two cases: parallel paths stay at the same distance and never cross.

We characterize the curvature with the parameter Ωk. Using the sign convention of the authors of today’s paper, a negative Ωk indicates a closed universe, and a positive Ωk an open one. If the universe is flat, Ωk is exactly zero.

universe curvature

Figure 1: Different possibilities for curvature of the universe. The universe can be closed (top), open (middle), or flat (bottom). In the sign convention of today’s paper, a closed universe has Ωk < 0, an open universe has Ωk > 0. [NASA/WMAP]

In general, cosmologists expect the universe to be flat (Ωk = 0). This is not only suggested by a variety of measurements, but is also a key prediction of the theory of cosmological inflation. Inflation describes a brief period during which the universe expanded exponentially (see this astrobite for more on inflation). The strong expansion of the universe decreased the curvature, in the same way that inflating a small balloon to the size of the Earth makes its surface appear flatter. Still, there is an ongoing debate on this issue. The Planck satellite tried to measure Ωk using the cosmic microwave background (CMB), remnant light from the early universe that travelled through our potentially curved universe. The results suggest an Ωk between –0.095 and –0.007, so this measurement points to a closed rather than a flat universe. A reanalysis of Planck data confirmed this preference for a curved universe using the CMB.

However, the CMB on its own is not a sensitive probe for Ωk. It determines a combination of Ωk, the matter density in the universe Ωm, and the expansion rate H0, i.e., the Hubble constant. A strongly curved universe with a low value of H0 and a high value of Ωm can have the same CMB as a flat universe with a high H0 and a low Ωm. The fact that we can only measure H0, Ωm, and Ωk together and not individually from the CMB is called the geometrical degeneracy.

Cosmologists combine the Planck measurement with other probes, such as baryon acoustic oscillations (BAOs) or Type Ia supernovae. Combining the Planck data with BAO measurements from the Dark Energy Survey leads to Ωk = 0.0007 ± 0.0019, which is consistent with a flat universe.

The authors of today’s paper, though, believe that this combination of Planck and BAOs is not valid. They argue that the Ωk values inferred from each dataset on its own disagree so strongly that the results of a combination of the datasets can be unreliable. If the results of two datasets are in strong tension, this could indicate that one or both include unknown systematic errors, or that they need different models to be described. They should therefore not be combined. In the case of the curvature of the universe, a different dataset should be used to break the geometrical degeneracy. The choice of today’s authors: cosmic chronometers, the universe’s standard clocks.

Cosmic chronometers are objects whose time evolution we know (or can at least model very well), for example specific types of galaxies. We observe some of these objects at different redshifts, which indicate how far away they are. From the differences in their evolutionary state, we then infer how much time has passed between those redshifts. This time difference tells us how fast the universe has expanded in the meantime and gives the expansion rate H(z) at each redshift z. H(z) depends on the cosmological parameters, including Ωk, so from this we can infer the cosmic curvature.
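In practice, the method rests on the differential-age relation H(z) = –1/(1 + z) · dz/dt. As a concrete illustration of the arithmetic, here is a minimal Python sketch; the redshifts and stellar ages below are entirely hypothetical numbers chosen for the example, not real measurements:

```python
# Minimal sketch of the cosmic-chronometer relation H(z) = -1/(1+z) * dz/dt,
# where dt is the differential age of passively evolving galaxies between
# two nearby redshifts. All inputs below are hypothetical.

KM_PER_MPC = 3.0857e19    # kilometres per megaparsec
SEC_PER_GYR = 3.156e16    # seconds per gigayear

def hubble_from_chronometers(z1, z2, age1_gyr, age2_gyr):
    """Estimate H (km/s/Mpc) at the midpoint redshift from a differential age."""
    z_mid = 0.5 * (z1 + z2)
    dz = z2 - z1
    dt_sec = (age2_gyr - age1_gyr) * SEC_PER_GYR  # negative: galaxies are older at lower z
    h_per_sec = -dz / ((1.0 + z_mid) * dt_sec)
    return h_per_sec * KM_PER_MPC

# Hypothetical galaxy samples at z = 0.40 and z = 0.45 whose stellar
# populations are 9.00 and 8.61 Gyr old, respectively:
print(hubble_from_chronometers(0.40, 0.45, 9.00, 8.61))  # ~88 km/s/Mpc
```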

Which objects can we use as chronometers? The best candidates are passively evolving galaxies: galaxies that have exhausted their gas reservoir and form hardly any new stars. Since blue stars die earlier than red stars, these galaxies become redder with time. From the galaxies’ spectral colours (more precisely, their spectral energy distributions) and sophisticated models of stellar evolution, we can infer how much time has passed since they exhausted their gas and stopped forming stars. When we compare two galaxies that formed at the same time but sit at different redshifts, the difference in their evolution tells us how much time has passed between those redshifts. We have found our cosmic clocks!

Today’s authors use 31 measurements of H(z) from cosmic chronometers between redshift z = 1.965 (approximately 10 billion years ago) and z = 0.07 (approximately 1 billion years ago). Figure 2 shows these measurements, along with the best fit for H(z) and the prediction from the Planck measurements. Planck underpredicts H(z), but the tension between the cosmic chronometers and Planck is much smaller than that between Planck and the BAO measurements. Therefore, the authors argue, combining the Planck and cosmic chronometer datasets is justified.

Hubble parameter

Figure 2: Cosmic expansion rate (also called Hubble parameter) at each redshift. The data points show the determination of the cosmic chronometer measurements used in the paper. The red line is the fit to the cosmic chronometer data combined with Planck; the blue line is the prediction of the Planck data alone. The Planck data underpredicts H(z) on its own. [Vagnozzi et al. 2020]

When the authors do so, they find the constraints on Ωm, Ωk, and H0 shown in Figure 3. The combination of Planck and cosmic chronometers prefers a higher value of H0 than the Planck data on its own, although this is not enough to alleviate the famous Hubble tension. Most importantly, though, the combined data yield Ωk = –0.0054 ± 0.0055. This value is consistent with a flat universe (Ωk = 0), as predicted by cosmological inflation.

parameter constraints

Figure 3: Constraints on curvature of the universe (Ωk), the Hubble parameter (H0) and the matter density in the universe (Ωm) using only the Planck data (blue) or the combination of Planck with the cosmic chronometers (red). The Planck data on its own prefers a small value for H0 and an Ωk < 0. The combined dataset, however, confirms a flat universe and a higher value for H0. [Vagnozzi et al. 2020]

In conclusion, the authors of today’s paper argue that the universe is most likely not curved. Their result agrees with other measurements that combined Planck data with additional probes, such as BAOs, but their choice of cosmic chronometers produces a result that they consider more reliable, because the individual datasets did not disagree strongly. This could be a notable step toward resolving the controversy around Planck’s curvature measurement. More measurements of cosmic chronometers are sure to come, so look out for more results from the universe’s clocks.

Original astrobite edited by Haley Wahl.

About the author, Laila Linke:

I am a third year PhD Student at the University of Bonn, where I am exploring the relationship between galaxies and dark matter using gravitational lensing. Previously, I also worked at Heidelberg University on detecting galaxy clusters and theoretically predicting their abundance. In my spare time I enjoy hiking, reading fantasy novels and spreading my love of physics and astronomy through scientific outreach!

Mercury interior

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Radiogenic Heating and its Influence on Rocky Planet Dynamos and Habitability
Authors: Francis Nimmo et al.
First Author’s Institution: University of California Santa Cruz
Status: Published in ApJL

Rocky planets are thought to start as hot masses of material accreting from a disk of gas and dust around their young host star. Accretion is the primary heat source early on, and orbital dynamics can add further heating through tidal squeezing, but the ongoing thermal evolution of many rocky planets is likely controlled by radiogenic heat production. In particular, the radioactive isotopes of uranium (U) and thorium (Th) have long half-lives and so may be significant in determining the long-term geodynamic history. The authors of today’s bite argue that the exact concentrations of such elements in a planet’s mantle could decide the presence and strength of that world’s magnetic field.

Dynamo theory holds that magnetic fields are generated by circulation of conductive fluids. In the case of Earth, convection of hot liquid metals in the outer core may be generating our magnetic field (see Figure 1). This outer core dynamo shuffles heat outward from the planet’s interior and its efficiency is controlled by the temperature of the overlying mantle. Thus, our magnetic field can be linked to heat production in the mantle that is mainly due to the decay of radiogenic elements. But what would it mean if our planet happened to have more or less of these elements?

diagram of earth's interior

Figure 1: Simplified cross-section of the common interpretation of Earth’s interior. The thin layer at the Earth’s surface is the crust (brown); below that is the mantle (red), which extracts heat from the liquid outer core (yellow). The outer core convects to produce the magnetic field and gradually solidifies to form the inner core (white). [universe-review.ca]

In general, the composition of a planet should be similar to that of its host star, since they coalesced from the same stuff. Therefore, we should be able to measure the elemental abundances of a star and say something about its planets. However, concentrations of some elements can vary significantly from star to star due to the different processes that produce them. So-called r-process elements like U and Th are likely distributed unevenly throughout the galaxy because they only form under the extreme conditions of rare events like neutron star mergers. For radiogenic heat production in a planet’s mantle, what matters is the concentration of U and Th relative to the bulk mass of silicates. The ratio of europium to magnesium (Eu/Mg) serves as a good proxy for this, which is useful since U and Th are hard to detect in the spectra of stars. Given typical measurements of Eu and Mg, the authors estimate that radiogenic heat production in the mantles of otherwise Earth-like planets may vary from roughly 30% to 300% of the Earth’s 15 terawatts.
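To get a feel for how the long half-lives of U and Th shape a planet’s heat budget over time, here is a toy Python calculation (my own sketch, not the authors’ model). The half-lives are standard values; the present-day heat shares and the 15 TW normalization are rough numbers, and only U and Th are rescaled, mimicking the paper’s 0.33x–3x range:

```python
# Toy model of mantle radiogenic heat versus time, rescaling only U and Th.
# Half-lives are standard; present-day heat shares are rough estimates.
import math

# isotope: (half-life in Gyr, approx. share of heat today, counts as U/Th?)
ISOTOPES = {
    "U238":  (4.47,  0.38, True),
    "U235":  (0.704, 0.02, True),
    "Th232": (14.0,  0.42, True),
    "K40":   (1.25,  0.18, False),
}
MANTLE_HEAT_NOW_TW = 15.0  # Earth's present-day mantle radiogenic heat

def mantle_heat_tw(gyr_before_present, u_th_scale=1.0):
    """Radiogenic heat production (TW) at a given time in the past."""
    total = 0.0
    for half_life, share, is_uth in ISOTOPES.values():
        scale = u_th_scale if is_uth else 1.0
        decay = math.exp(math.log(2.0) / half_life * gyr_before_present)
        total += scale * share * decay  # more of each isotope in the past
    return MANTLE_HEAT_NOW_TW * total

# Heat production 4.5 Gyr ago for 1x and 3x Earth's U and Th concentrations:
print(mantle_heat_tw(4.5, 1.0), mantle_heat_tw(4.5, 3.0))  # ~77 TW, ~166 TW
```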

The models at the center of today’s paper are relatively simple compared to more computationally expensive 2D or 3D models, but they are sufficient to see how changing a parameter like mantle heat production could affect a planet’s evolution. The authors consider the timelines of three otherwise identical Earths that have less U and Th (Figure 2a), Earth-like concentrations (Figure 2b), or more U and Th (Figure 2c). All cases assume that plate tectonics contributes to heat transfer, because previous work suggests magnetic dynamos are more likely under conditions conducive to plate tectonics, even though its presence is not a certainty (see Venus, for example). In Figure 2, the authors use entropy production as a proxy for the likelihood and intensity of a dynamo: a dynamo operates when the total entropy production rate exceeds the adiabatic entropy rate, the rate associated with heat conducted along the core’s adiabat (its expected temperature and pressure profile). Dynamo convection is at first driven by the extraction of heat into the mantle; this driving gradually declines, but rapidly increases again once the core cools enough to begin solidifying. This extra burst of activity is due to “compositional buoyancy,” where solidification of the core releases light elements into the fluid above.

As a good starting point, the trend predicted by the model for normal Earth (Figure 2b) matches geologic observations that the Earth has had an active magnetic field for over 3.5 billion years, though it turned off or weakened at least once for a few million years. In fact, it seems that Earth was just on the threshold for having a consistently active dynamo, based on how the entropy production may have briefly dipped below the threshold around one billion years ago. In the case of less radiogenic heat than normal Earth (Figure 2a), solid core formation starts earlier and the dynamo is easily maintained. In the case of more radiogenic heat (Figure 2c), the dynamo may turn off for hundreds of millions of years because a high-temperature mantle isn’t as effective at extracting heat from the core. So, opposite to what you might expect, the authors find that more radiogenic heat in the mantle leads to less core heat flux, less dynamo, and a smaller solid core.

heat flow models

Figure 2: Model results for (a) 0.33, (b) 1, and (c) 3 times Earth’s U and Th concentrations. The upper panels show decreasing heat flow over time (solid lines) and the onset of inner core formation (dashed green line). The lower panels show the entropy production rate over time, which generally decreases until inner core formation begins. The dynamo is thought to operate only when the total entropy rate (black) is greater than the adiabatic entropy rate (red). [Nimmo et al. 2020]

A more thorough view of the effect of radiogenic heat can be seen in Figure 3. The concentration of radiogenic elements could affect the habitability of a planet based on whether the abundance is low enough to allow for a magnetic dynamo. Though some disagree, it is generally thought that a magnetic field helps shield a planet from stellar particles that may otherwise erode the atmosphere. On the other hand, higher radiogenic heat in the mantle is expected to cause more volcanism, which likely releases many of the volatiles that allow for a thick, comfy atmosphere. The authors point out that their model probably misses some of the complex feedbacks that may occur here, especially given the many unknowns about plate tectonics, but they ultimately argue that the abundance of r-process elements (as traced by stellar Eu/Mg ratios) should be seen as another important factor to consider in the search for habitable exoplanets.

Rate of entropy production

Figure 3: Rate of entropy production (indicated by color) over time for a range of radiogenic element abundances relative to normal Earth (log scale). Solid black lines indicate a reference temperature, and the dashed red lines show the trajectories through time of the three scenarios modeled in Figure 2 (the authors’ Figure 1). Note the black region, where too much radiogenic heat kills the dynamo. [Nimmo et al. 2020]

Interestingly, it has been found that lower quantities of radiogenic isotopes are present farther from the galactic center. Older stars are also found to have smaller amounts of these heavy elements. However, today’s authors expect the random distribution caused by the rarity of r-process events to ultimately have the strongest influence on U and Th abundances. The more we learn about what makes Earth’s systems work, the more we will know about what to look for in our searches of the skies for habitable worlds. This paper paves the way for future observations and modeling to expand our view of the complicated interactions that feed into planetary geodynamics and possibly life in the universe.

Original astrobite edited by Spencer Wallace.

About the author, Anthony Maue:

Anthony is a PhD student at Northern Arizona University in Flagstaff studying planetary geology. In particular, his research focuses on Titan’s fluvial processes through analyses of Cassini radar data, laboratory experiments, and terrestrial field analog studies. Outside of school, Anthony enjoys skiing, cycling, running, music and film.

Sun and Mercury

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: The Solar Wind Prevents Reaccretion of Debris after Mercury’s Giant Impact
Authors: Christopher Spalding and Fred C. Adams
First Author’s Institution: Yale University
Status: Published in PSJ

Mercury is a bit of an oddball compared to the other terrestrial planets. Because of its proximity to the Sun, Mercury doesn’t have an atmosphere, only a “surface-bound exosphere” of gas particles on ballistic trajectories. Under the surface, Mercury has an iron core that extends to more than 80% of its radius, compared with just 50% for Earth.

Many theories have been proposed to explain how Mercury ended up as the planet with the largest core relative to its size. One idea is that Mercury formed with a silicate mantle that was blasted away by asteroid impacts. Another proposes that as the planets formed from the protoplanetary disk orbiting the Sun, high temperatures sorted the silicates from the iron, so Mercury formed in a region of the disk bereft of silicates to begin with. A third theory states that high temperatures took over after Mercury formed, vaporizing its mantle but not its iron core. The fact that many close-in exoplanets with rocky mantles have been found over the last decade casts considerable doubt on this last theory.

The authors of today’s paper test a corollary of the first theory. Specifically, they hypothesize that as asteroid impacts knocked pieces of Mercury’s mantle into orbit, the powerful solar wind removed the debris before it could coalesce back onto the surface.

Every Day Is a Windy Day in Space

In 1957, Eugene Parker realized something funny happened when he tried to solve the fluid equations describing how the Sun’s atmosphere works. At very large distances, he found a discontinuity — the pressure was much lower than realistically possible. His solution was so revolutionary it took three tries to get published: the solar corona is not static but constantly expands into space. Parker’s solar wind is composed of supersonic protons traveling at around 400 km/s, and it dominates the interplanetary environment out to the heliopause. Parker’s other major discovery was the spiraling solar magnetic field.

It’s believed that the young Sun had a solar wind about 100 times stronger than today, which is what makes the work in today’s paper possible.

The Giant Impact of Giant Impacts

Mercury’s early history was likely dominated by giant impacts (similar to those that might have formed the Moon), which blasted large amounts of its silicate mantle into space. The pebble-sized debris, left to orbit, would gradually reaccrete onto the surface of Mercury within about ten million years.

But the strong solar wind from the young Sun can push on the debris just enough to modify the particles’ orbits, either accelerating the debris toward the outer solar system or dragging it in toward the Sun. Figure 1 shows a schematic of this system.

ejected material orbiting Mercury

Figure 1: Diagram of ejected material orbiting Mercury. The solar wind in this case exerts a drag, reducing the orbital semi-major axis and causing the particle to fall toward the Sun. In other cases, the solar wind can accelerate the particle, causing it to exit toward the outer solar system. [Spalding & Adams 2020]

Dual Methods of Studying Early Mercury

To test whether the solar wind could be responsible for facilitating the loss of Mercury’s mantle, the authors first looked for an analytical solution by directly solving equations of motion. Despite the simplifications required, they believed the results would be conceptually insightful. They then followed up with a detailed numerical simulation, relying on high-performance computing.

solar wind velocities

Figure 2: Radial (top) and azimuthal (bottom) velocities of the solar wind as a function of radius for the Sun at ages 3, 10, and 30 million years. Radial velocity increases monotonically but azimuthal velocity reaches a maximum close to the Sun. Super-Keplerian azimuthal winds can accelerate particles outward or inward, depending on orientation. Mercury’s semi-major axis is 0.39 AU. [Spalding & Adams 2020]

In the analytical approach, the authors calculated the acceleration the solar wind can impart to centimeter-sized debris orbiting Mercury. Close to the Sun, the solar magnetic field locks the solar wind to the solid-body rotation of the Sun. As a result, the wind has an azimuthal velocity (circulating around the equator) in addition to its outward, radial velocity. This azimuthal velocity was recently confirmed by the Parker Solar Probe.

Though the azimuthal velocity decreases with distance, at Mercury’s location it is still sufficient to exert an appreciable force on orbiting debris, as shown in Figure 2.

The authors added the solar wind acceleration to the orbital equations of motion and looked for the decay timescales of the semi-major axis and eccentricity. They varied the age of the Sun, the strength of the solar wind, the debris launch angle, and the starting orbit. In most cases, the solar wind causes the debris orbits to decay within about one million years, significantly shorter than the roughly ten million years it takes the debris to reaccrete onto the surface, a promising indication for their hypothesis.
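As a sanity check on that timescale, here is a back-of-the-envelope Python estimate in the same spirit; every input value below is an assumption of mine (a 100x present-day mass-loss rate, a 400 km/s wind with roughly a tenth of its ram pressure acting azimuthally, and 1-cm rocky grains), not a parameter taken from the paper:

```python
# Rough estimate of how fast an enhanced solar wind drains orbital angular
# momentum from cm-sized debris at Mercury's distance. All inputs assumed.
import math

AU = 1.496e11                 # m
MDOT = 100 * 1.3e9            # kg/s: ~100x the present-day solar mass loss
V_WIND = 4.0e5                # m/s: wind speed
F_AZ = 0.1                    # assumed azimuthal fraction of the wind force
R = 0.39 * AU                 # Mercury's orbital radius
V_ORB = 4.74e4                # m/s: Keplerian orbital speed at 0.39 AU
S, RHO_P = 0.01, 3000.0       # grain radius (m) and density (kg/m^3)

rho_wind = MDOT / (4 * math.pi * R**2 * V_WIND)  # wind mass density at R
mass = (4.0 / 3.0) * math.pi * S**3 * RHO_P      # grain mass
force = rho_wind * V_WIND**2 * math.pi * S**2    # ram-pressure force on grain
tau = mass * V_ORB / (F_AZ * force)              # angular-momentum drain time
print(tau / 3.156e7, "years")                    # ~0.5 Myr, well under 10 Myr
```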

debris collisions

Figure 3: Results of numerical simulations with and without solar wind. Of the 110 starting particles, many times more collide back with Mercury in the absence of a solar wind, indicating the wind’s role in stripping collisional debris. [Spalding & Adams 2020]

Many researchers would be satisfied with an analytical solution that supports the hypothesis, but these authors wanted to follow up with a computational approach. Simulations can handle more complete physics and allow for better-controlled tests. The authors run N-body simulations of centimeter-sized debris orbiting Mercury with and without the solar wind, tracking each particle to see whether it collides back with Mercury or escapes for good.
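For readers who want to experiment, here is a stripped-down toy in Python (my own sketch, not the authors’ N-body code) of a single test particle orbiting the Sun, with a simple velocity-opposing drag standing in for the wind. The real calculation also includes geometries where the wind accelerates debris outward instead:

```python
# Toy integration of a test particle around the Sun with optional drag.
# Units: AU and years, in which the Sun's GM equals 4*pi^2 (Kepler's law).
import math

GM = 4.0 * math.pi**2

def evolve(drag_per_yr, n_steps=200_000, dt=1e-4):
    """Kick-drift-kick integration over 20 yr, starting on a circular
    orbit at 0.39 AU; returns the final distance from the Sun (AU)."""
    x, y = 0.39, 0.0
    vx, vy = 0.0, math.sqrt(GM / 0.39)  # circular orbital speed
    for _ in range(n_steps):
        r = math.hypot(x, y)
        ax = -GM * x / r**3 - drag_per_yr * vx  # gravity + drag
        ay = -GM * y / r**3 - drag_per_yr * vy
        vx += 0.5 * dt * ax                     # half kick
        vy += 0.5 * dt * ay
        x += dt * vx                            # drift
        y += dt * vy
        r = math.hypot(x, y)
        ax = -GM * x / r**3 - drag_per_yr * vx
        ay = -GM * y / r**3 - drag_per_yr * vy
        vx += 0.5 * dt * ax                     # half kick
        vy += 0.5 * dt * ay
    return math.hypot(x, y)

print(evolve(0.00))  # no wind: radius stays near 0.39 AU
print(evolve(0.02))  # with drag: spirals in to roughly 0.18 AU in 20 yr
```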

Figure 3 shows the results, indicating that the presence of even a weak solar wind significantly reduces the number of particles that recoalesce onto the planet’s surface.

Beyond the Solar System

With a combination of analytical and computational methods, the authors conclude that a strong solar wind during the period of heavy impacts on Mercury could have removed ejected material from orbit within less than a million years. Over time, this resulted in Mercury’s silicate mantle being lost into the Sun or toward the outer solar system, leaving behind the iron core.

The authors offer the possibility of utilizing this work in the study of exoplanets. As space physicists learn more and more about the solar wind and heliosphere, attention has turned to astrospheres, heliospheres around stars other than the Sun. Some detections of close-in exoplanets indicate they are iron-enriched like Mercury, leading to the possibility that their composition can be used as an indirect probe of stellar wind characteristics.

Original astrobite edited by Haley Wahl and Wynn Jacobson-Galan.

About the author, Will Saunders:

I’m a third year grad student at West Virginia University and my main research area is pulsars. I’m currently working with the NANOGrav collaboration (a collaboration which is part of a worldwide effort to detect gravitational waves with pulsars) on polarization calibration. In my set of 45 millisecond pulsars, I’m looking at how the rotation measure (how much the light from the star is rotated by the interstellar medium on its way to us) changes over time, which can tell us about the variation of the galactic magnetic field. I’m mainly interested in pulsar emission and the weird things we see pulsars do! In addition to doing research, I’m also a huge fan of running, baking, reading, watching movies, and I LOVE dogs!
