
illustration of the hubble and gaia spacecraft working together

Editor’s Note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: GaiaHub: A Method for Combining Data from the Gaia and Hubble Space Telescopes to Derive Improved Proper Motions for Faint Stars
Authors: Andrés del Pino et al.
First Author’s Institution: Center for Studies of Astrophysics and Cosmology of Aragón (CEFCA) and Space Telescope Science Institute
Status: Published in ApJ

Intro: What Is Proper Motion and How to Find It

Stars in the night sky seem fixed, but they are all traveling through the Milky Way just like the Sun. Since objects in the universe move through 3D space, we can separate their velocities into three components, as shown in Figure 1: one component is the radial velocity, which points towards or away from Earth, and the other two make up the proper motion, the motion in the plane of the sky. Radial velocity is usually measured from the redshift of the object’s spectral lines, and it can be determined to an accuracy of a few kilometers per second. Proper motion is much harder to measure.
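The proper motion itself is only an angular rate, so turning it into a physical speed requires a distance. As a quick illustrative sketch (not from today's article), the tangential velocity follows from the standard relation v_tan = 4.74 × μ × d, with μ in arcseconds per year and d in parsecs:

```python
# Tangential (sky-plane) velocity from proper motion and distance.
# The constant 4.74 is one AU per year expressed in km/s, which is what
# converts (arcsec/yr) * (pc) into km/s.

def tangential_velocity_kms(pm_arcsec_per_yr, distance_pc):
    """Tangential speed in km/s for a given proper motion and distance."""
    return 4.74 * pm_arcsec_per_yr * distance_pc

# Barnard's Star, one of the fastest-moving stars on the sky:
# proper motion ~10.4 arcsec/yr at a distance of ~1.8 pc
print(tangential_velocity_kms(10.4, 1.8))  # ~89 km/s
```

The result, roughly 90 km/s, is comparable in size to typical stellar radial velocities, which is why both components matter for reconstructing a star's full motion.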

diagram illustrating radial velocity and proper motion

Figure 1: An illustration of radial velocity and proper motion. [ESA/ATG Medialab]

The measurement of accurate sky positions is called astrometry. Proper motion measurement relies on astrometry, since we are comparing observations from two epochs and calculating how much the position of the star has changed. This is the strong suit of the Gaia mission, which probes stars out to the halo of the Milky Way galaxy (see this astrobite). Gaia has led to many discoveries: new globular clusters in the Milky Way, stars moving fast enough to escape the Milky Way, groups of stars that move together, and plenty more to come.

However, Gaia data have two important shortcomings. Firstly, Gaia is a small telescope and works best for bright stars. For faint stars, the astrometric errors rise rapidly. But if we are interested in a faraway system (e.g., a satellite dwarf galaxy of the Milky Way), all the stars will be faint. Using Gaia data alone, the velocity errors far exceed the true velocity variation in the galaxy. The second issue is the time baseline. Given a constant velocity, stars will shift more if you wait longer between two observations, which is why the time baseline has a huge impact on proper motion accuracy. Gaia has only been operating and recording positions for three years. If there is a way to increase the time baseline, that can also improve the proper motion measurements.
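To see why the baseline matters so much, note that a proper motion is a position difference divided by a time difference, so the combined positional error of the two epochs gets divided by the baseline. A back-of-the-envelope sketch (the 1-milliarcsecond per-epoch error is an illustrative number, not from the article):

```python
import math

def pm_uncertainty_mas_per_yr(sigma1_mas, sigma2_mas, baseline_yr):
    """Proper-motion error from two position measurements separated by
    baseline_yr years, with the per-epoch errors added in quadrature."""
    return math.sqrt(sigma1_mas**2 + sigma2_mas**2) / baseline_yr

# Hypothetical faint star with a 1 mas position error at each epoch:
print(pm_uncertainty_mas_per_yr(1.0, 1.0, 3))   # ~0.47 mas/yr over a 3-yr Gaia span
print(pm_uncertainty_mas_per_yr(1.0, 1.0, 12))  # ~0.12 mas/yr with a Hubble epoch 12 yr earlier
```

Stretching the baseline from three years to a dozen shrinks the proper-motion error by the same factor, even with no improvement in the individual position measurements.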

How to Measure Proper Motions Better?

Prior to the launch of the Gaia space telescope, the workhorse of astrometry studies was the Hubble Space Telescope. Hubble can solve both of the issues mentioned above: it can observe much fainter stars, and it had been taking data for 10–15 years before Gaia was even launched. If there is a way to combine these datasets, the time baseline for the proper motion measurements could be extended by a factor of 4–6. As shown in Figure 2, even adding one Hubble image can push down the errors by a lot for faint stars (G magnitude > 17). That is precisely what the authors of today’s article did.

plot of proper motion uncertainties for gaia data alone versus gaia and hubble data combined

Figure 2: The expected proper motion uncertainties as a function of the magnitude of the stars. In both panels, nominal errors of Gaia Early Data Release 3 are shown by a black dashed curve. The top panel shows the impact of using one or more Hubble images, all taken at the same epoch, June 2011. The bottom panel shows the impact of using just one Hubble image taken in a different year (the typical baseline found in the data is ~11 years). [del Pino et al. 2022]

Combining Hubble and Gaia

The authors of today’s article developed software called GaiaHub, which compares the positions of stars measured with Gaia to those measured with Hubble. The first step is to measure the positions of stars in the Hubble data. This is a well-established process that takes into account the instrument’s distortions and time variations, and it achieves a typical accuracy of 0.25–0.5 milliarcseconds.

Then comes the hard part: the star positions need to be matched to Gaia measurements. Since the two datasets are more than 10 years apart, establishing a common reference frame between the two is the key challenge. The software offers three different algorithms: when there is a large number of randomly moving stars, it matches the average positions of all stars; when the stars have some coherent motion, the proper motion can be modeled iteratively so that the coherent motion is removed; or, finally, if there are many contaminant stars, the code can also set up the reference frame from co-moving stars. The improved accuracy with Gaia and Hubble data can be seen in Figure 2 as a function of the magnitude of the stars.
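The first of those algorithms, matching the average positions of a large number of randomly moving stars, can be illustrated with a toy sketch: estimate the bulk offset between the two epochs by iterative sigma-clipped averaging, so that the shared shift is removed and only relative motions remain. This is a schematic illustration only, not GaiaHub's actual implementation:

```python
import numpy as np

def align_frames(x_epoch1, x_epoch2, n_iter=5, clip=3.0):
    """Toy reference-frame alignment: estimate the bulk (x, y) offset
    between two epochs of matched star positions by iteratively averaging
    the per-star displacements, sigma-clipping outliers each pass.
    Inputs are (N, 2) arrays; returns the offset and the kept-star mask."""
    shift = np.zeros(2)
    good = np.ones(len(x_epoch1), dtype=bool)
    for _ in range(n_iter):
        resid = (x_epoch2 - x_epoch1) - shift      # per-star displacement
        shift = shift + resid[good].mean(axis=0)   # update the bulk offset
        resid = (x_epoch2 - x_epoch1) - shift
        scatter = resid[good].std(axis=0)
        # Keep stars whose residual motion is within `clip` sigma on both axes
        good = np.all(np.abs(resid) < clip * np.maximum(scatter, 1e-12), axis=1)
    return shift, good
```

Once the common shift is removed, whatever displacement remains for each star is its individual proper motion relative to the reference frame.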

Results

So how does this software perform on real data? Figure 3 shows the drastic improvement you get from combining Gaia and Hubble data. In this example, proper motions are used to identify member stars of the globular cluster Palomar 4. The stars in a cluster should move together, which means they should all have similar proper motions. The left column in Figure 3 shows the proper motion components along the on-sky coordinates, right ascension (RA) and declination (Dec), measured by Gaia alone (top panel) and by GaiaHub (bottom panel). The proper motion measurements from GaiaHub clearly have much smaller scatter and allow for a cleaner selection of member stars. This is confirmed by the right column, which shows the sky positions of the selected stars and their proper motion vectors. In the Gaia selection, the lines indicating the direction of motion point all over the place, while the GaiaHub results show very coherent motion. Given that member stars should move together, GaiaHub successfully picks out the likely members of Palomar 4.


Figure 3: Comparison between the results obtained using Gaia and GaiaHub for the Palomar 4 globular cluster. Left column: proper motion in RA vs. Dec, measured by Gaia (top panel) and GaiaHub (bottom panel). Right column: sky positions of the stars with the projected proper motion vectors. [del Pino et al. 2022]

This is a huge improvement for proper motion measurements! Large uncertainties in proper motion mean more scatter in the velocities, which leads to artificially large velocity dispersion measurements for globular clusters. With GaiaHub’s new capabilities, the artificial scatter is reduced and we can recover the real internal velocity dispersions. The authors did this exercise for 40 globular clusters and published their results in this article. Along with radial velocities, we now have the full 3D velocity information. GaiaHub opens up exciting new science, such as analyzing the velocity dispersion along each direction.

As with all research techniques, GaiaHub has its limitations. Because it relies on cross-matching, GaiaHub can only use stars that appear in both datasets. That means the field of view is limited by the smaller of the two, which is Hubble’s. The range of stellar magnitudes that can be detected by both telescopes is also limited, since bright stars are often saturated in Hubble images. Both of these factors mean that GaiaHub works best at intermediate distances, where the Hubble field of view is large enough to cover the globular cluster and the brightest stars are faint enough not to saturate.

To summarize, GaiaHub improves the proper motion measurements by a factor of ten. More precise proper motions at fainter magnitudes allow us to study the kinematics of many stellar systems around the Milky Way. This public software will be a great resource for the astronomy community!

Original astrobite edited by Katya Gozman.

About the author, Zili Shen:

Hi! I am a PhD student in Astronomy at Yale University. My research focuses on ultra-diffuse galaxies and their globular cluster populations. Since I came to Yale, I have worked on two “dark-matter-free” galaxies NGC1052-DF2 and DF4. I have been coping with the pandemic and working from home by making sourdough bread and baking various cookies and cakes, reading books ranging from philosophy to virology, going on daily hikes or runs, and watching too many TV shows.

photograph of the green bank telescope in front of rolling mountains


Title: Searching for Broadband Pulsed Beacons from 1883 Stars Using Neural Networks
Authors: Vishal Gajjar et al.
First Author’s Institution: Breakthrough Listen, University of California Berkeley
Status: Published in ApJ

The search for extraterrestrial intelligence (SETI) is perhaps humankind’s most ambitious and forward-thinking endeavor. We’ve been asking ourselves the fundamental question of “Are we alone?” since the dawn of written history, but technological advancements in the last 100 years have allowed us to take our first steps toward finding an answer. Today’s article describes a reimagination of one of the most common search techniques to look for signatures of extraterrestrials (ETs), and while we haven’t found any alien signals just yet, our search capabilities only continue to get better!

The easiest way to find ETs would be to look for their technosignatures — the light waves emitted by the technology they use (check out this Astrobite for more on technosignatures). In particular, if an alien civilization wanted to be found by other intelligent life, they would want to send out a signal that wouldn’t be deflected or absorbed by the space between us, would travel as fast as possible, and would require the least amount of energy to produce. For these and other reasons, most SETI searches have involved searching for artificial radio signals coming from the vicinity of nearby stars.

But what specific kinds of signals should we search for? Will they be transmitted over a narrow frequency range, or will they be “broadband” signals covering a large range of frequencies? Will the signal be continuously transmitted, or will it pulse on and off at specific intervals to clearly demonstrate it’s made by intelligent life? There is no one satisfactory answer to these questions, and most previous searches have looked for narrowband signals that are always being emitted, since we would need less time to detect a signal of that type than other types of signals.

However, the authors of today’s article were able to prove that for a civilization generating these signals, it would cost less energy to produce broadband pulsed signals, as long as those signals were being sent out for longer than a few hundred seconds. They made the reasonable assumption that ETs will try for longer than a few minutes to get our attention and went about searching for broadband pulsed signals in radio data from the Green Bank Telescope.

A Very Small Needle in a Turbulent Haystack

The Breakthrough Listen collaboration, of which many authors of this article are a part, chose 1,883 stars (explained in this article) as targets for their observations. They chose every star within 5 parsecs (a little more than 16 light-years) of Earth — so that the distance between us would not attenuate the signals too much — as well as all stars within 5–50 parsecs (163 light-years) that fall on the main sequence or the early part of the giant branch. Stars on these earlier segments of the stellar evolutionary track are less volatile and, if they have planets orbiting them, create environments that are the most likely to help life grow. The authors took 233 total hours of observations, broken up into 5-minute segments, since that is approximately the observation length for which a 0.3-millisecond-long broadband pulse would take less power to send than a continuous narrowband signal.

Luckily, we have lots of experience searching for repeating broadband radio pulses in the form of radio pulsars and fast radio bursts! Pulsars are useful physical tools for a wide range of astronomical applications (for more, see the astrobites here, here, here, here, here, here, here, and here), but today, we can use our experience in analyzing transient radio signals to predict how a broadband signal sent by ETs would be affected by the interstellar medium between us. Radio waves are scattered and dispersed by the interstellar medium, and broadband radio signals undergo a dispersion delay, where the lower-frequency part of a pulse will be delayed relative to the higher-frequency part due to the ionized medium it travels through. The authors of today’s article focus on this dispersion delay.

plot of a dispersed broadband pulse signal

Figure 1: The received signal from a dispersed broadband pulse, as a function of frequency and time. Note that the higher-frequency parts of the signal arrive before the lower-frequency parts. [Gajjar et al. 2022]

The “waterfall” plot in Figure 1 shows the intensity as a function of frequency and time for a single broadband pulse that has undergone dispersion. The dispersion measure of a signal, which is related to the time delay between two reference frequencies, can help us measure the amount of ionized material a signal has traveled through. Combined with detailed maps of the Milky Way, we can use the dispersion measure to estimate the distance between us and the origin of the signal!

Most importantly, natural dispersion always follows the same frequency dependence: the delay at a given frequency scales as the inverse square of that frequency, so the delay between two frequencies goes as the difference of their inverse squares. The authors of today’s article suggest that if an alien civilization were to send us a signal, the best strategy would be to artificially arrange it so that we would not see this normal dispersion trend; rather, we would see some other pattern that does not occur in nature, proving that it comes from other sentient life.
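Concretely, the delay between the low- and high-frequency edges of a band follows the standard cold-plasma dispersion formula, Δt ≈ 4.149 ms × DM × (ν_lo⁻² − ν_hi⁻²), with the frequencies in GHz and the dispersion measure DM in pc cm⁻³. A quick sketch (the example DM and band edges are illustrative, not from the article):

```python
def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
    """Arrival delay of the low-frequency edge of a pulse relative to the
    high-frequency edge, in milliseconds. `dm` is the dispersion measure
    in pc cm^-3; frequencies are in GHz. The 4.149 ms constant is the
    standard cold-plasma dispersion constant in these units."""
    return 4.149 * dm * (f_lo_ghz ** -2 - f_hi_ghz ** -2)

# An example pulse with DM = 100 pc cm^-3 observed across a 4-8 GHz band:
print(dispersion_delay_ms(100, 4.0, 8.0))  # ~19.4 ms
```

A signal whose sweep does not obey this inverse-square law, or whose sweep runs the wrong way entirely, cannot be produced by propagation through the interstellar medium.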

These other types of artificial dispersion are shown in Figure 2. The authors searched for dispersed pulses from their original dataset, and they also created artificial datasets by flipping the frequency and time axes, both independently and simultaneously. By doing this, each type of artificially dispersed pulse shown in Figure 2 would look to the single-pulse-search software as a normally dispersed pulse, allowing the team to run the same search code on all four datasets. Searching all of these datasets resulted in a staggering 133,393 candidates!

plots of artificially dispersed signals

Figure 2: The three types of artificially dispersed signals that the authors searched for. From left to right, they are made by flipping the time axis, the frequency axis, and both simultaneously, to make artificially dispersed broadband pulses that are not seen in nature. [Gajjar et al. 2022]

How to Analyze 133,000 Candidates This Century

Of course, having a human sit down and examine that many candidates would be beyond unreasonable — thankfully, machine learning and graphics processing units allow us to quickly filter out many bad options. The authors filtered out candidates that looked too much like human-made radio frequency interference or didn’t show enough of a difference between their on-pulse and off-pulse energy distributions. Many other filters were used to weed out unpromising candidates, leading to a shortlist of only 2,948 candidates.

The best candidates in each class of artificial dispersion were examined more closely, but similar-looking signals were found in other 5-minute-long pointings in completely unrelated areas of the sky. It’s hard to prove definitively that these signals come from the regions of the stars we’re pointing at rather than from human-made radio frequency interference, and it’s much more reasonable to conclude that they are bright human-made signals that have leaked into the telescope than that two extremely distant alien civilizations sent us the exact same signal.

The authors used these non-detections to place limits on the maximum signal strength any civilization in those areas could be sending, with some limits as low as a few hundred times the strength of our strongest airplane radar. That doesn’t sound like much of a limit, but that’s a signal we’d be detecting from a whole other star system — and each new search is another step towards better technology and better search methods to make a possible discovery!

Original astrobite edited by Lili Alderson.

About the author, Evan Lewis:

Evan is a third-year graduate student in astronomy at West Virginia University. His research focuses on transient radio sources, including pulsars, magnetars, and fast radio bursts. Outside of research, he enjoys playing percussion, hugging dogs, baking, and playing video games!

JWST blueprints


Title: Analysis of a JWST NIRSpec Lab Time Series: Characterizing Systematics, Recovering Exoplanet Transit Spectroscopy, and Constraining a Noise Floor
Authors: Zafar Rustamkulov et al.
First Author’s Institution: Johns Hopkins University
Status: Published in ApJL

JWST is the most powerful infrared telescope ever built — and, understandably, scientists across the world and in every sub-field of astronomy can barely contain themselves waiting for the data it will soon beam back. The gorgeous shots taken during the commissioning phase for engineering and alignment suggest the measurements will be exquisite. But how exquisite, exactly? What are the subtlest signals JWST will be able to detect, specifically in the context of exoplanet atmospheres? Today’s authors take a step towards answering that question — and, impressively, they do it before any scientific measurements have been taken!

How Good Is Great?

As capable as JWST is, you could probably guess that it can’t do everything we could possibly ask of it. Although the telescope can take images of the most distant galaxies across the largest scales of the universe, all of its measurements come with some amount of uncertainty, or noise. If that noise is small compared to the signal we’re trying to measure, then we don’t need to worry; for example, if we measure the age of a rock to be 5 million years ± 0.01 million years, we can be confident that it’s a young rock even if we aren’t exactly sure of its age. But, if we measure it instead to be 2 billion years ± 2 billion years, suddenly our measurement looks less useful, since the rock could either be very young or nearly as old as Earth itself!

In the context of JWST and astronomy, if you measure, say, 100 photons per second from a galaxy or star, what are the chances that source is actually shooting 100 photons per second in our direction? Could it be 100 ± 50 photons per second? If you’re trying to measure a very subtle signal (say, the transit of a planet) that would block only 40 photons per second, the answers to those questions could make the difference between confidently detecting your planet and not being sure it was even there! So, what are the answers? The only way to know is to analyze actual measurements taken by JWST. You would think that means we have to wait until the first science images arrive this summer. Unless…
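When the dominant noise source is photon counting, the scatter in a total of N detected photons is roughly √N (Poisson statistics), so the detectability of a dip grows with integration time. A rough sketch of this reasoning, using the illustrative photon rates above:

```python
import math

def transit_snr(rate, dip, t_sec):
    """Signal-to-noise ratio of a dip of `dip` photons/s on a source
    emitting `rate` photons/s, integrated for t_sec seconds, assuming
    pure Poisson (photon-counting) noise on the total counts."""
    signal = dip * t_sec
    noise = math.sqrt(rate * t_sec)  # Poisson scatter: sqrt of total counts
    return signal / noise

# A 40 photons/s dip on a 100 photons/s source:
print(transit_snr(100, 40, 1))    # SNR = 4 after one second
print(transit_snr(100, 40, 100))  # SNR = 40 after 100 seconds
```

Because the signal grows linearly with time while the noise grows only as its square root, the SNR improves as √t — unless an instrumental noise floor gets in the way, which is exactly what today's authors set out to constrain.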

Blinded by the (Lamp) Light

Although JWST is now in its chilly home out at L2, it somewhat infamously took many years to reach this point. One reason for this was the extensive testing of every component of the telescope. As part of this testing regimen, engineers locked the science instruments in a chamber meant to replicate the cold, vacuum-like conditions of space, then ran them through their paces to check their response. While the science instruments were in the chamber, the engineers forced the Near Infrared Spectrograph (NIRSpec) instrument to undergo the worst eye exam ever, shining a tiny lamp through its optics and onto its detector for several hours.

But ah ha! What if we just pretended that the chamber actually was space, and that little lamp was actually a star? If we could clean up these data, inject a fake but perfectly known transit signal, and then check how well we could recover that true signal, we’d have an estimate of NIRSpec’s noise performance. This is exactly what today’s authors set out to do.

This wasn’t straightforward, since lamps, unfortunately, are not stars. NIRSpec breaks light into its component wavelengths to measure its spectrum, and a glowing filament produces a very different spectrum than a star does. Even worse, lamps flicker and change over time, but contrary to what popular children’s songs would suggest, stars don’t actually twinkle in space. The authors took great care to remove each of the effects that could influence the results, sliding and smoothing each of the frames until they resembled what we’ll see from a real star in space. Figure 1 shows the brightness of the lamp after the authors applied various corrections.

plots of the lamp flux over time

Figure 1: The lamp data before (top) and after (bottom) trend removal. In both panels, each column of pixels represents one integration, then they are lined up left to right in the order they were taken. Note that in the top panel, representing data before a trend removal step, the average column intensity goes down over time due to the lamp fading slightly. In the bottom panel, showing data after their “common-mode correction,” which “mostly removes the systematics imparted by the unstable light source,” the source appears much more stable. [Adapted from Rustamkulov et al. 2022]

After that, the authors added in artificial signals of two planets, TRAPPIST-1 d and GJ 436 b, complete with full models of their atmospheres. With realistic “measurements” of these planets now fully assembled, they could pretend that we live several months in the future and that these light curves were freshly beamed in from beyond the Moon, not collected in a lab six years ago. The authors ran their data through code routines similar to what we’ll use on real data, then checked how well NIRSpec recovered the fake signals.

Noise Canceling Telescope

So, what did the authors find? Lots of very good news! Despite the lamp drifting around the image plane more than we expect stars will, the authors found that NIRSpec should be able to detect spectral features from the atmospheres of TRAPPIST-1 d and GJ 436 b, as shown in Figure 2.

plot of injected and recovered exoplanet spectra

Figure 2: The recovered spectra of two planets. In both panels, the blue curve depicts the “true” signal that the authors injected into their lamp data, and the red points are the results of their model fit to those fake measurements. The Y axis here is transit depth, or how much of the star’s light at a given wavelength is blocked by the planet, in parts per million (ppm). Note how closely the recovered points follow the true curves, especially in the case of GJ 436 b between 1.5 and 3 microns. [Rustamkulov et al. 2022]

Even better, the authors didn’t run into any “noise floor,” or fundamental uncertainty the instrument can’t get around no matter how long it measures a source. Although they couldn’t give a firm estimate for it, they were able to set an upper bound and are confident that it’s smaller than 14 parts per million. That’s a tiny, tiny value, and it implies that JWST should be able to detect dozens of spectral features given enough time.
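For scale, the transit depth plotted in Figure 2 is just the sky-projected area ratio of planet to star, (R_p/R_s)², so a sub-14-ppm floor can be compared directly against real planets. A quick illustrative calculation (the radii are round numbers for Earth and the Sun, not for the planets in the article):

```python
def transit_depth_ppm(r_planet_km, r_star_km):
    """Fraction of starlight blocked when the planet crosses the stellar
    disk, expressed in parts per million: (Rp / Rs)^2 * 1e6."""
    return (r_planet_km / r_star_km) ** 2 * 1e6

# An Earth-sized planet transiting a Sun-sized star (radii in km):
depth = transit_depth_ppm(6371, 696_000)
print(round(depth))  # ~84 ppm
```

Even that tiny signal, roughly 84 ppm, sits several times above the upper bound on the noise floor, which is what makes the prospects for exoplanet spectroscopy so encouraging.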

The excitement and anticipation for spectra of exoplanet atmospheres seems justified! Back to waiting for the first science images…

Original astrobite edited by Jessie Thwaites.

About the author, Ben Cassese:

I am a first-year Astronomy PhD student at Columbia University working on simulated observations of exomoons. Prior to joining the Cool Worlds Lab, I studied Planetary Science and History at Caltech, and before that I grew up in Rhode Island. In my free time I enjoy backpacking, spending too much effort on making coffee, and daydreaming about adopting a dog in my NYC apartment.

simulation of gas and dust in a protoplanetary disk


Title: The Prospects for Hurricane-like Vortices in Protoplanetary Disks
Authors: Konstantin Gerbig and Gregory Laughlin
First Author’s Institution: Yale University
Status: Published in ApJ

What do hurricanes have to do with planet formation? At first glance, nothing. Planets form in protoplanetary disks, whereas hurricanes occur on a planet that has already formed: Earth. However, the authors of today’s article are searching for a connection between the two.

How Planets Form

Planets form in protoplanetary disks made of dust and gas. When dust in those disks clumps together, it can form pebble-sized nuggets that stick together into boulders. The boulders can grow into kilometer-size protoplanets, which finally grow into planets. However, there are still some missing pieces in our understanding of planet formation. One of these is the meter-size barrier: it’s really hard to get from a meter-size boulder to something larger. It is fairly well established how to get from dust to meter-size boulders and how to get from kilometer-size protoplanets to planets; it’s the step in between that is missing. However, the meter-size barrier cannot be a real physical barrier: we are living on the proof that planet formation must be possible. But how?

One mechanism proposed to facilitate the growth of boulders is a dust trap where dust and pebbles can be trapped together, allowing them to effectively grow to protoplanets. The authors of today’s article investigate a possible way such a dust trap could occur: a hurricane!

Hurricanes on Earth

For hurricanes on Earth, the key ingredient is water: a hurricane can only form above an ocean. As in planet formation, big things start small: the seed of a hurricane is a small initial disturbance, a swirling flow of air.

The little seed is magnified by an interplay of strong winds and the evaporation and condensation of water. Strong winds at the surface of the ocean pick up water vapor. At some point, the air becomes saturated and the water has to condense again. This is when clouds start to form. The condensation releases heat into the air (called “latent heat”), and the heated air starts to rise. At a certain level in the atmosphere, the air is able to cool down by radiating away its energy, and it does not rise further. The air can flow away horizontally at that height, leaving behind a void at the surface of the ocean that gives rise to more winds. With the right conditions, this mechanism intensifies the little initial disturbance, and the swirling flow becomes a large circulation of air mass: a hurricane.

As depicted in Figure 1, a hurricane has a center called the eye. Within the eye, moist air continues to flow upward, maintaining the storm as long as it can pick up water vapor and rise.

diagram of a hurricane

Figure 1: A hurricane on Earth can only exist for a significant time if it is above an ocean. Warm, moist air rises, leading to a circulation of air mass; more air flowing in at the ocean surface magnifies this process until an immense storm forms. [Wikipedia user Kevinsong; CC BY 3.0]

What About Hurricanes in Protoplanetary Disks?

The authors of today’s article propose that a similar process can occur in protoplanetary disks. A layer of dust grains that are covered with ice can act as a fuel tank for hurricane-like structures similar to the ocean on Earth. When a gas layer flows over the icy dust grains, it can pick up moisture. Just like on Earth, if there is an initial turbulence in the form of a spin flow, it can be magnified by the same mechanism as a hurricane.

Previous research has shown that such swirling flows already exist in protoplanetary disks; they are called vortices. The main difference from a hurricane on Earth is that both gas and dust grains in a protoplanetary disk orbit the star. This motion around the star, known as Keplerian motion, gives rise to shear forces that tear vortices apart. This means there is something acting against the growth of vortices.

The authors construct a model to simulate hurricane-like conditions in protoplanetary disks. They seed their simulations with small initial vortices and observe whether they grow. The simulations show that the hurricane mechanism indeed can create large vortices out of small ones. Figure 2 presents the comparison between several simulations: one without the hurricane model (red line) and several with the hurricane model and different initial conditions (yellow and purple lines). The small initial vortices become larger over time and merge into a big one, possibly producing a dust trap.

Simulations with (yellow line) and without (red line) hurricane-like conditions in a protoplanetary disk

Figure 2: Simulations with (yellow and purple lines) and without (red line) hurricane-like conditions in a protoplanetary disk. The lines show the time evolution of kinetic energy, which is a measure of the strength of a vortex. The four images show snapshots of a simulation with hurricane-like conditions at different times, showing the vortices growing when hurricane-like conditions are present. [Gerbig & Laughlin 2022]

The authors find a sweet spot for sustaining and magnifying vortices. This sweet spot is located just outside the ice line, which is the location in the disk where the temperature is low enough for water to freeze.

Can Hurricanes Help to Form Planets?

We’ve seen that these hurricane-like vortices are possible, but can they actually form planets?

Prior research has shown that a vortex can trap dust within its eye. As vortices are found to be short-lived, mechanisms prolonging the lifetime of a vortex, such as the hurricane mechanism, are essential to planet formation.

However, when planets form in a vortex, they draw from the dust grains that fuel the hurricane-like vortex. If a planet eats up too much of the dust, the vortex can no longer be kept alive. The authors of today’s article therefore argue that it is not obvious if this mechanism actually supports planet formation. For now, the question must remain unanswered. However, the first step is done — we know that hurricanes can occur in protoplanetary disks. Now it is up to future investigations to see if they can enhance planet formation.

Disclaimer: The first author of today’s article is an active astrobites author but was not involved in the publication of today’s bite.

Original astrobite edited by Macy Huston.

About the author, Lina Kimmig:

I’m a first-year PhD candidate working at Heidelberg University in the exciting field of planet formation. As planets form in protoplanetary disks that exist around most young stars, I am looking at the effects of different physical processes on those disks. To investigate those effects, I run astrophysical simulations. My main interest is warped disks that have a three-dimensional twisted shape (a little bit like Pringles crisps). Outside of research, I not only like eating Pringles crisps but also love dancing, sewing, skiing, and elephants.

artist's impression of the exoplanet HR8799e

Editor’s Note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Interpreting the Atmospheric Composition of Exoplanets: Sensitivity to Planet Formation Assumptions
Authors: Paul Mollière et al.
First Author’s Institution: Max Planck Institute for Astronomy
Status: Published in ApJ

One of the biggest questions that drive astronomers is “Where did we come from?” Whether studying the earliest hours of the universe, the formation of stars and galaxies, or the compositions of planets orbiting distant stars (exoplanets), astronomers use observations of the universe to piece together a story that describes how things got to be the way they are. For the better part of two decades, exoplanet-focused astronomers have attempted to measure the molecules that make up the atmospheres of exoplanets in order to better understand how those planets formed. Understanding how many different exoplanets form can then help astronomers understand how our own solar system formed, and how Earth, and life itself, came to be.

There are a few problems that are preventing astronomers from making those connections reliably, however. Well, a lot of problems, actually. For one, exoplanet atmospheres are hard to measure, but that might change soon thanks to JWST. For another, planet formation is dynamic, and different planet formation models can produce dramatically different atmospheres. Today’s article presents a new framework for tackling this second problem and shows how different planet formation models can lead to different interpretations about how the exoplanet HR 8799e formed.

Making Planets Is Complex!

Planet formation is a dense topic (interesting astrobites include this one, this one, and an old review bite), but in the broadest strokes: planets form from disks of gas and dust called protoplanetary disks that surround young stars. Giant planets, like Jupiter, form when their rocky cores grow massive enough (by smashing into other rocks) to vacuum up gas from the protoplanetary disk.

When and where a forming planet vacuums up its gas will affect which molecules end up in its atmosphere. This is because, as you go out further from the central star in a disk, the disk gets colder, and molecules that were gaseous can condense down into ices and become solid. The locations within a protoplanetary disk where molecules condense into solids are called ice lines (see Figure 1).

About ten years ago, a study suggested that the ratio of carbon atoms to oxygen atoms in an exoplanet’s atmosphere could indicate which ice lines it formed between, since the freezing out of water, carbon dioxide, and carbon monoxide changes the ratio of those two elements in the gas of the disk. This study suggested that if the carbon-to-oxygen (C/O) ratio of an exoplanet could be measured, astronomers could tell where in the protoplanetary disk it formed.

As discussed in today’s article, things aren’t so simple. One example the article uses is the fact that as the exoplanet is forming, the disk chemistry is changing — the disk is heated by the newly born star and disrupted by the newly born planet. Figure 1 shows the C/O ratio throughout the disk for the static model introduced in the previous study, as well as snapshots in time of a disk evolving as, for instance, carbon monoxide is turned into carbon dioxide by heat from the star.

Flipping the Table (or, “Formation Model Inversion”)

This quest isn’t hopeless: today’s article presents a framework in which the different assumptions and uncertainties mentioned above could be compared with available atmospheric measurements in order to meaningfully connect observations and model predictions.

plot of the carbon to oxygen ratio as a function of distance from the host star

Figure 1: The C/O ratio within a protoplanetary disk as a function of distance and time. The x-axis plots distance (a) from the central star, marking the ice lines of water, carbon dioxide, and carbon monoxide, while the y-axis plots the C/O ratio. The color gradient of the lines, from dark blue to bright yellow, indicates the progression in time of a model that assumes that the chemicals in the disk evolve as they are heated by the central star. This plot shows that it might not be simple to predict where in a protoplanetary disk a planet formed based on its C/O ratio, as there can be different values on the x-axis for one value on the y-axis. [Mollière et al. 2022]

The real problem is that complex models of planet formation require a set of assumptions as input and give the predicted atmospheric measurement of a planet as output. Astronomers measure the output, so the models have to be “inverted” in order to determine the formation location of the planet. What the new framework in today’s article does is generate many different models with various input parameters. Then, the authors compare the outputs to the measured abundances of a given exoplanet to see which model inputs result in the closest match to the measurements. The authors can do this for different models and then compare the best matching input parameters between models, allowing them to examine what different models predict for the origin of a given exoplanet.
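In spirit, this inversion is a brute-force search: run the forward model over a grid of inputs and keep the inputs whose predicted output best matches the measurement. A minimal sketch, where `toy_disk_model` is an invented stand-in for a real planet formation model and all numbers are illustrative:

```python
import numpy as np

def toy_disk_model(formation_distance_au, snapshot_time):
    """Hypothetical forward model: maps an assumed formation location and
    disk-chemistry snapshot to a predicted atmospheric C/O ratio.
    This is an invented toy, not the authors' actual model."""
    return 0.25 + 0.1 * np.log1p(formation_distance_au) + 0.02 * snapshot_time

measured_co = 0.60  # e.g., the C/O ratio reported for HR 8799e

# "Invert" the model by brute force: evaluate it on a grid of inputs and
# keep the combination whose predicted C/O lies closest to the measurement.
distances = np.linspace(0.1, 100.0, 500)   # candidate formation distances (au)
snapshots = np.arange(0, 11)               # candidate disk-chemistry epochs
best_d, best_t, best_diff = min(
    ((d, t, abs(toy_disk_model(d, t) - measured_co))
     for d in distances for t in snapshots),
    key=lambda row: row[2],
)
print(f"best-matching inputs: d = {best_d:.1f} au, epoch = {best_t}, "
      f"|ΔC/O| = {best_diff:.4f}")
```

Comparing the best-fit inputs across different forward models — static versus chemically evolving — is then a matter of swapping `toy_disk_model` for another function and repeating the search.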

Where Did HR 8799e Get Its Carbon and Oxygen?

HR 8799e is the innermost gas giant planet in a system of four directly imaged giant planets. In today’s article, the authors use their new framework on HR 8799e and demonstrate how including the time evolution of the chemicals in the protoplanetary disk — and the movement of small rocks through the disk during planet formation — change the predicted formation history of the planet.

HR 8799e’s atmosphere was previously studied using data from the Very Large Telescope Interferometer (VLTI) GRAVITY instrument. That study found the planet’s C/O ratio to be 0.6 (that is, 6 carbon atoms for every 10 oxygen atoms). The authors of today’s article use this measurement and their new analysis framework to compare the simplistic model of a protoplanetary disk and a chemically evolving disk.

The authors find that the simple model predicts that HR 8799e formed either within the water ice line (very close to its host star) or outside the carbon monoxide ice line (far away). Either way, the planet now orbits in the middle of these two extremes, indicating that it must have migrated from where it originally formed (see Figure 2, left panel). However, the chemically evolving disk model makes a slightly different prediction, indicating that as the disk chemically evolved, the most likely formation location of HR 8799e moved inward from beyond the carbon monoxide ice line to within it (see Figure 2, right panel). This could indicate that, depending on when HR 8799e began forming relative to the disk’s chemical evolution, it might not have needed to migrate to get to its current position.

plots of formation probability density as a function of distance from the star and distance from the star and time

Figure 2: The origin location of HR 8799e’s C/O ratio. The left plot indicates how likely the solids comprising the planet are to have originated from a given location in the protoplanetary disk for the most simplistic model considered. The most probable locations are within the water ice line and outside the carbon monoxide ice line, but compared to the current location of HR 8799e, this appears to indicate the planet must have migrated far from where it formed. The right plot illustrates the chemically evolving disk case. While this model shows that early in time the most likely place for HR 8799e to form is the same as in the simple model case, the most likely formation location changes as the disk chemistry changes — it becomes more likely the planet could have formed within the CO ice line and migrated only a little bit to its current position. [Mollière et al. 2022]

Today’s astrobite presents a complex narrative of exoplanetary archaeology, exploring different assumptions that can change how astronomers infer the formation history of exoplanets. With new and improved atmospheric detections on the horizon (hello JWST!), this new framework for comparing formation models will prove a useful tool to help astronomers puzzle out how and where exoplanets form, and maybe — eventually — how we got here.

Original astrobite edited by Lynnie Saade.

About the author, William Balmer:

William Balmer (they/them) is a PhD student at Johns Hopkins University/Space Telescope Science Institute studying the formation, evolution, and composition of giant planets, brown dwarfs, and very low-mass stars. They enjoy reading, tabletop games, cycling, and astrophotography.

representative-color composite image of supernova remnant W49B

Editor’s Note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Bumpy Declining Light Curves Are Common in Hydrogen-Poor Superluminous Supernovae
Authors: Griffin Hosseinzadeh et al.
First Author’s Institution: Steward Observatory, University of Arizona
Status: Published in ApJ

Space is constantly alight with supernovae — so much so that astronomers are scrambling to keep up! As a result, interest and competitive resources can fall off after a supernova has been named and identified, making it difficult to observe superluminous supernovae months after the dazzling heights of their light curves.

Brighter than a typical supernova, superluminous supernovae reach blinding absolute magnitudes of −20 or brighter. Miraculously, their super-powered light curves remain bright for hundreds of days before decreasing to a known slope called a radioactive tail.

The authors argue that this period of time after the initial explosion is well worth the study, because some superluminous supernovae don’t cool off without a fight. Instead, they display unanticipated bumps and wiggles months after the peaks of their light curves. Shared qualities amongst the supernovae that show wiggles could even shed light on the mechanisms powering these behemoths.

What’s Haunting the Cosmic Graveyard?

The source behind monstrous superluminous supernova explosions is a topic of hot debate, especially when it comes to the subclass that have spectra devoid of emission lines of hydrogen. There are two main competing theories. The first is that there is a super-powered neutron star called a magnetar at the center. With the most extreme magnetic fields in the universe, reaching strengths of 10¹⁵ gauss, these colossal corpses would serve as a “central engine” driving the explosion’s brightness and prolonging its light curve.

plot of multiwavelength light curves for a superluminous supernova

Figure 1: An example superluminous supernova, SN2011ke. Bumps begin about 40 days after the explosion in nearly every observed wavelength. In the residual panels, the authors subtract the underlying blackbody to make this clearer. Click to enlarge. [Adapted from Hosseinzadeh et al. 2022]

You can tell that something’s not quite right with a magnetar if its cooling phase, or the decrease of its light curve, doesn’t follow a smooth, well-behaved luminosity trend of L ∝ t⁻². Observed late-time bumps in superluminous supernovae certainly disrupt this picture, as in Figure 1. To explain bumpy behavior with just a central engine, one would require that material falls back onto the magnetar surface and alights in a violent flare.
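As a toy illustration of that test, one can evaluate the smooth spin-down decline and see how a late-time bump stands out as a fractional residual. Every number here (the luminosity scale, the spin-down timescale, the Gaussian bump) is invented for illustration, not fitted to any real supernova:

```python
import numpy as np

def magnetar_luminosity(t_days, L0=1e44, t0=50.0):
    """Smooth magnetar spin-down: L ∝ (1 + t/t0)^-2, i.e., L ∝ t^-2 at
    late times. L0 (erg/s) and t0 (days) are illustrative values."""
    return L0 / (1.0 + t_days / t0) ** 2

t = np.arange(60.0, 300.0)
smooth = magnetar_luminosity(t)

# Add an invented Gaussian "bump" peaking near day 120, mimicking the
# late-time excesses reported in the observed light curves.
bump = 0.3 * magnetar_luminosity(120.0) * np.exp(-0.5 * ((t - 120.0) / 10.0) ** 2)
observed = smooth + bump

# Against the smooth model, the bump appears as a positive fractional residual.
residual = observed / smooth - 1.0
print(f"peak fractional excess: {residual.max():.2f} near day {t[np.argmax(residual)]:.0f}")
```

This residual-against-a-smooth-model view is the same trick used in the residual panels of Figure 1, where the underlying blackbody is subtracted off.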

Others posit that hydrogen-poor superluminous supernovae — subclass SLSN-I — once had hydrogen on their surfaces, but they dropped that hydrogen like a pair of lost glasses. Surely it must be around here somewhere… crunch. The unaware explosion tramples over this circumstellar material as the ejecta expands. This interaction would serve as multiple powder kegs prolonging the light curve in a manner similar to the magnetar model. Yet, when astronomers take spectra during SLSN-I bumps, they do not find the smoking-gun evidence of narrow emission lines that would indicate interaction with hydrogen-rich material.

That Ghost Has Footprints!

The authors collected a total of 34 SLSNe-I with plenty of optical data well after their peaks. Of these 34, they find that 44–76% exhibit bumps at about 50 days or more past explosion. Why the broad uncertainty range? Because, unfortunately, the consistency of data coverage limits how sure the authors can be that these bumps exist. Among their sample, they investigate what these bumps have in common and if there are relationships between the overall light curve and the bumps.

The authors reason that there are five main characteristics of a magnetar-powered explosion: magnetic field (B), spin period (P), ejecta mass (Mej), ejecta velocity (vej), and the time it takes for the explosion to reach its peak brightness (trise); and four main characteristics of a bump: duration (Δtbump), the time the bump occurs (tbump), the temperature at which it is emitting (Tbump), and amplitude. The authors cross-compare each of these characteristics by mixing and matching their axes and plotting all 34 superluminous supernovae in Figure 2. Then, they check for correlation, or a clear and obvious trend, between the quantities on each set of axes.
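That cross-comparison amounts to computing a rank correlation for every (explosion property, bump property) pair. A sketch with invented random data standing in for the measured sample — only the trise–tbump pair is constructed to correlate, echoing the one trend the authors report:

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation: the Pearson correlation of the ranks."""
    ranks = lambda x: np.argsort(np.argsort(x))
    return np.corrcoef(ranks(a), ranks(b))[0, 1]

rng = np.random.default_rng(0)
n = 34  # sample size, matching the number of SLSNe-I in the article

# Invented stand-ins for the tabulated quantities -- random draws, not real
# measurements. t_bump is built to trend weakly with t_rise.
explosion = {
    "B": rng.lognormal(0.0, 0.5, n),
    "P": rng.lognormal(0.0, 0.5, n),
    "t_rise": rng.uniform(20.0, 80.0, n),
}
bump = {"amplitude": rng.uniform(0.05, 0.3, n)}
bump["t_bump"] = 1.5 * explosion["t_rise"] + rng.normal(0.0, 15.0, n)

# Mix and match every pair of axes and compute the correlation.
for x in explosion:
    for y in bump:
        print(f"{x:>7} vs {y:>9}: rho = {spearman(explosion[x], bump[y]):+.2f}")
```

With real data the same loop would populate a grid of panels like Figure 2, one correlation coefficient per pair of axes.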

multi-panel plot of bump properties as a function of supernova properties

Figure 2: Which properties exhibit a trend? The authors believe the rise time (trise) of the explosion and the time the bump occurs (tbump) are very weakly correlated. To convince yourself, find the panel with the red text (last column, third row) and ask yourself how much you think increasing trise increases tbump, and vice versa. [Hosseinzadeh et al. 2022]

The authors find a mild correlation between how quickly the main peak rises and the time at which the bump appears (Figure 2, panel with red text). This correlation could indicate that the ejecta reaches a temperature in the range of 6,000–8,000 K — which includes the temperature at which ionized oxygen recombines. Suspiciously, oxygen is also the element that dominates the supernova ejecta mass! Could this mean that these bumps are indeed caused by recombination of ionized oxygen, since there’s so much of it? If this occurs at the site of the magnetar, it would ultimately favor the magnetar accretion model.

If the bumps are instead caused by interaction with circumstellar material, the authors determine it would be an optically thin shell with an average mass of 0.034 solar mass and a thickness of about 8.1 × 10¹⁵ cm. That’s not a whole lot of material, spread about a whole lot of space!

histogram of the number of supernova light curve bumps

Figure 3: Histogram of superluminous supernova bumps and the maximum depth from which photons would have originated. Bumps in the shaded region are sourced from a maximum depth too shallow to be consistent with a central engine origin. [Adapted from Hosseinzadeh et al. 2022]

The authors also debate explosion mechanisms using the timing of the wiggles and inner ejecta thickness (not to be confused with the farther-out circumstellar material). If bumps are caused by changes at the very center of the explosion, such as in an accreting magnetar, then those photons must climb their way out of dense material to reach us. This would require a minimum amount of time that depends on the number of photon collisions, or the opacity of the material. A comparison of these depths and bump times is illustrated in Figure 3: the shaded region indicates where observed bumps appear too quickly to be explained by a photon originating in the center of the explosion. In the unshaded region, the photon could have come from the central engine or interaction with circumstellar material.
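The escape-time argument can be made quantitative with an order-of-magnitude random-walk estimate. The opacity, ejecta mass, and velocity below are assumed round numbers for a stripped-envelope explosion, not values from the article:

```python
import math

MSUN = 1.989e33    # g
C = 2.998e10       # cm/s
DAY = 86400.0      # s

kappa = 0.1        # cm^2/g, roughly electron scattering (assumed)
m_ej = 5.0 * MSUN  # ejecta mass (assumed)
v_ej = 1.0e9       # ejecta velocity, 10,000 km/s (assumed)

def diffusion_time_days(epoch_days):
    """Time for a photon released at the center to random-walk out of
    ejecta of radius R = v_ej * t: t_diff ~ tau * R / c, with optical
    depth tau = 3 kappa M / (4 pi R^2) for a uniform sphere."""
    radius = v_ej * epoch_days * DAY
    tau = 3.0 * kappa * m_ej / (4.0 * math.pi * radius**2)
    return tau * radius / C / DAY

for epoch in (50, 100, 200):
    print(f"epoch {epoch:3d} d: ~{diffusion_time_days(epoch):5.1f} d for a central photon to escape")
```

In this toy picture, a bump that rises faster than the escape time at its epoch could not have been launched from the center — the logic behind the shaded region of Figure 3.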

What remains unanswered, however, is whether or not all 34 supernovae must be explained by the same mechanism. If the answer is yes, then those meddling supernovae in the shaded region rule out a central engine entirely! If the answer is no, then it seems that since most bumps exist in the unshaded region, one could argue that there is diversity among the superluminous supernova mechanisms; perhaps those powered without central engines are simply rarer.

Original astrobite edited by Lina Kimmig and Briley Lewis.

About the author, Lindsay DeMarchi:

Lindsay DeMarchi is currently a graduate student at Northwestern University. She is obsessed with gravity and uses multi-messenger methods to analyze the final moments of stellar collapse.

simulated image of heat flow around a hot jupiter exoplanet

Editor’s Note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Reassessing the Evidence for Time Variability in the Atmosphere of the Exoplanet HAT-P-7 b
Authors: Maura Lally and Andrew Vanderburg
First Author’s Institution: Cornell University, Northwestern University, and University of Texas at Austin
Status: Published in AJ

It would make sense, given the ever-changing atmospheres of Earth and the other solar system planets, that exoplanets would also be host to their own kinds of “weather.” But, despite the many characterised exoplanet atmospheres, confidently detecting a changing atmosphere remains extremely challenging. Previous attempts to assess the variability of exoplanet atmospheres have used phase curves — a measure of the changes in observed stellar flux over a planet’s entire orbit as it passes in front of and behind its star (see, for example, Figure 1). For tidally locked planets like hot Jupiters, different parts of the atmosphere will be visible as the planet moves around its star, so the shape of the phase curve can provide information about the planet’s atmosphere. By monitoring how the phase curve varies over time, you can decipher if the atmosphere is varying.

Figure 1: An overview of the Kepler data analysed in the article. Top: the entire light curve of HAT-P-7, split into 60 chunks each containing 10 transits. Middle: A zoom in on 70 days of observations. Bottom: An example phase curve along with the positions of the planet as it completes its orbit. [Lally & Vanderburg 2022]

The first planet to be assessed in this way was HAT-P-7b, a hot Jupiter that was observed by the Kepler mission for more than four years. The original study (Armstrong et al., 2016, which was covered by this astrobite!) found that the hottest part of the planet’s atmosphere shifts east and west by up to 41 degrees in longitude away from the point at which the star appears directly overhead, likely thanks to strong but changing wind patterns. However, phase curve analysis is hard, especially if the host star itself is changing over time, and theoretical work has struggled to explain hotspot offsets as large as HAT-P-7b’s. The authors of today’s article, therefore, take another look at the mysterious weather of HAT-P-7b.

Back to Basics

As an initial test, the authors first analysed the many phase curves in the original Kepler data to see if they obtained the same hotspot offset variability as Armstrong et al. To ensure that the result wasn’t influenced by any analysis choices, the authors repeated their methods on different outputs of the Kepler data reduction pipeline and tested different model choices to compare the results.

As seen in Figure 2, the authors’ offset measurements were comparable to those of Armstrong et al., although the offset was only seen to vary by up to 30 degrees — a result that was robust for all their analysis choices. But is this variability actually coming from the planet’s atmosphere? To determine if the host star could be causing the phase curves to vary, the authors injected simulated non-varying planetary phase curve signals into the light curves of a selection of stars similar to HAT-P-7 and extracted the resulting hotspot offsets. This test showed that similar variability could be recovered from the injected light curves, meaning that it’s possible that HAT-P-7b’s hotspot offset might not be varying after all! Given that Kepler was known to be a very well-behaved instrument, the stars themselves must be varying and contributing additional noise to the phase curves.

Figure 2: The measured hotspot offsets from HAT-P-7b’s phase curve over the course of the observations. The values calculated by Armstrong et al. are shown in pink open circles, while the values the authors calculated for a selection of phase curves are shown in red filled circles. Both analyses are a good match, indicating that the respective methodologies are unlikely to be causing the variability. [Adapted from Lally & Vanderburg 2022]

Tuning Out the Noise

To understand what sources of noise might be impacting HAT-P-7b’s phase curve, the authors produced the power spectrum of the light curve of HAT-P-7. They then compared it to a white noise spectrum, as shown in Figure 3, to identify at what frequencies significant sources of noise were occurring.

Figure 3: The power spectrum of HAT-P-7’s light curve (grey), showing how sources of noise are occurring over a range of frequencies. An equivalent white noise spectrum is shown in blue, highlighting that HAT-P-7 has significant excess noise at lower frequencies. [Lally & Vanderburg 2022]

Figure 3 shows that HAT-P-7’s light curve has a significant amount of excess noise at timescales similar to the planetary period, which could explain the observed variations in the hotspot offset. Noise due to supergranulation — changes in stellar brightness as bright warm bubbles appear on the surface of the star over long timescales — is particularly dominant around the period of HAT-P-7b, and could very easily be affecting the hotspot offset measurements.
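The comparison in Figure 3 can be mimicked with a toy light curve: correlated low-frequency noise (a crude stand-in for supergranulation) plus white measurement noise. The amplitudes, cadence, and the random-walk noise model are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Kepler-like light curve: a slow random walk (crude stand-in for
# supergranulation) plus white measurement noise. All amplitudes invented.
n = 4096
dt_days = 0.02                              # roughly 30-minute cadence
red = np.cumsum(rng.normal(0.0, 1e-5, n))   # correlated low-frequency noise
white = rng.normal(0.0, 1e-4, n)
flux = 1.0 + red + white

# Power spectrum of the mean-subtracted light curve...
freqs = np.fft.rfftfreq(n, d=dt_days)       # cycles per day
power = np.abs(np.fft.rfft(flux - flux.mean())) ** 2

# ...versus a pure white-noise spectrum with the same point-to-point scatter.
white_power = np.abs(np.fft.rfft(white)) ** 2

low = (freqs > 0) & (freqs < 0.5)   # periods longer than ~2 days
high = freqs > 10.0                 # periods shorter than ~0.1 day
print("low-frequency power excess:", power[low].mean() / white_power[low].mean())
print("high-frequency power ratio:", power[high].mean() / white_power[high].mean())
```

The red-noise light curve shows a large power excess only at low frequencies — exactly the region where a hot Jupiter’s orbital period tends to sit, which is why this kind of noise can masquerade as phase-curve variability.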

With all this evidence to hand, the authors conclude that the varying offset measurements of HAT-P-7b are likely not related to atmospheric variability, and supergranulation is the culprit of the changing phase curves. Although Kepler is now decommissioned, the analysis performed in today’s article provides a helpful tool for verifying past and future claims of exoplanet weather from it or any other telescope, including JWST.

Original astrobite edited by Yoni Brande.

About the author, Lili Alderson:

Lili Alderson is a second-year PhD student at the University of Bristol studying exoplanet atmospheres with space-based telescopes. She spent her undergrad at the University of Southampton with a year in research at the Center for Astrophysics | Harvard-Smithsonian. When not thinking about exoplanets, Lili enjoys ballet, film, and baking.

illustration of a giant impact

Editor’s Note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Large Impacts onto the Early Earth: Planetary Sterilization and Iron Delivery
Authors: Robert I. Citron and Sarah T. Stewart
First Author’s Institution: University of California, Davis
Status: Published in PSJ

The early Earth wasn’t your typical summer break vacation destination. During the Hadean Eon (4.5–4.0 billion years ago), Earth was an extremely hostile environment and was frequently bombarded by asteroids. Yet, somehow, life on Earth could have emerged during this time.

This story begins with a noteworthy event during the Hadean Eon: a now long-gone Mars-sized planet called Theia slammed into Earth. The massive impact blew off a large amount of debris that started to circle the new Earth–Theia merger and eventually formed the Moon. This proposed sequence of events is known as the Giant Impact Hypothesis. Theia’s impact melted Earth’s entire crust several kilometers deep, creating an environment not very favorable for any form of life that we know of and very likely sterilizing anything already present on Earth. After all the turmoil of the impact settled down, life on Earth (in principle) could have formed shortly after, a mere 4.5 billion years ago. Reality, as it turns out, may have run less smoothly.

As a deeply studied subject, the origin of life has been described by several proposed hypotheses. A popular hypothesis states that it all began with the formation of amino acids and RNA molecules. Because our present-day atmosphere and oceans are oxidizing (noticed iron rusting? Maybe some fire?), spontaneously forming these molecules is difficult. But what about all those lifeforms we see today? They needed to come from somewhere. Well, to actually form these amino acids and RNA molecules on a global scale, we would need a reducing atmosphere or ocean. And how better to create one than slamming a huge rock full of reducing material (like iron) into Earth?

Reducing an Atmosphere 101

After the Theia impact, but still during the Hadean Eon, Earth endured a large amount of impacting asteroids of varying sizes, which were sling-shotted to the inner solar system by Jupiter and Saturn. The authors of today’s article wanted to know which objects under which conditions can actually create a (temporarily) reducing atmosphere or ocean on Earth, ultimately opening the way to forming the building blocks of life. Instead of really slamming various rocks on Earth, the authors ran smoothed particle hydrodynamics (SPH) simulations of large objects (the projectiles) colliding with an Earth-like planet (ominously called the target). To account for several possible scenarios, illustrated in Figure 1, the authors varied the impacting object’s mass, velocity, and the angle at which it strikes Earth.

diagrams of an object impacting earth

Figure 1: Effect of different impact angles of a large object colliding with Earth, with the distinction between the atmosphere (blue), the mantle (orange), and the core (dark gray). Different angles lead to different degrees of surface melting (red). [Citron & Stewart 2022]

Now, when such an object impacts Earth, a lot happens in a short span of time. To know what goes where, the authors kept track of the mantle (made of forsterite) and core (consisting of an iron–silicon alloy) of both the simulated impacting object and Earth. Part of the iron-rich core material — the stuff we need to start large-scale reduction of Earth’s atmosphere or oceans — gets scattered in the atmosphere by the impact. How much of this iron enters Earth’s atmosphere depends strongly on how the impact occurred (which is controlled by object mass, impact velocity, and impact angle). A 24-hour time lapse of the impact in one of the simulations is shown in Figure 2.

simulation snapshots

Figure 2: Simulation of a smaller object colliding with Earth. Here, the object mass is 25% of the Moon’s mass, the impact velocity is 1.5 times Earth’s escape velocity, and the impact angle is 45°. The mantle and core materials of Earth and the impacting object are color-coded to see where they eventually land, if at all. This simulation shows that the object is shattered on Earth’s surface; the colliding object’s mantle material — forsterite — is mainly scattered around Earth or resides on its surface, while the heavier core material from the object — iron — mainly sinks in large chunks to Earth’s interior. Part of the iron from the object, however, remains scattered in the atmosphere, where it will act as a reducing agent. The spatial dimensions are expressed in Earth radii. [Citron & Stewart 2022]

By analyzing their simulations, the authors found that only the largest of the asteroids during the Hadean Eon — the Moon-sized ones — could have delivered enough iron to fully reduce Earth’s oceans and atmosphere. But there’s an additional problem now: the impact of an object so huge would easily vaporize ocean-sized bodies of water (the authors find that even an object with a radius of 0.2 times the Moon’s radius could vaporize Earth’s early ocean) and would even melt most of Earth’s surface, creating almost post-Theia impact circumstances that weren’t very life-supporting. Considering all this, it looks like these Hadean asteroids did not really help early life on its way.

However, this study has shown that it takes a larger object than previously estimated to fully sterilize the early Earth’s exterior by melting its whole surface; such an object would need to have more than 25% of the Moon’s mass. As objects this size were rare even during the Hadean Eon, the chances of a mass extinction by space rocks are lower than previously expected. Even the ocean-evaporating asteroids do not necessarily sterilize Earth if early life occurred under the planet’s surface. Moreover, remember the Moon-sized asteroids needed to fully reduce the atmosphere and oceans? Turns out we don’t need that kind of overkill (pun intended). Multiple smaller objects slamming into Earth could reduce the atmosphere or ocean enough to create favorable conditions for spontaneous RNA formation.

In any case, if life emerged from a post-impact world, it would be due to the right asteroids at the right time. Too small, and the kick-starter for life wouldn’t occur. Too large, and any progress made so far would be wiped out. Considering the fact that you are reading this post, it seems our very, very far forebears weren’t out of luck!

Original astrobite edited by Sarah Bodansky.

About the author, Roel Lefever:

Roel is a first-year astrophysics PhD student at Heidelberg University. He works on massive stars and simulates their atmospheres and outflows. In his spare time, he likes to hike and bike in nature, play (a whole lot of) video games, play and listen to music (movie soundtracks!), and read (currently The Wheel of Time, but any fantasy really).

illustration of exoplanetary systems

Editor’s Note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Continuous Habitable Zones: Using Bayesian Methods to Prioritize Characterization of Potentially Habitable Worlds
Authors: Austin Ware et al.
First Author’s Institution: Arizona State University
Status: Published in ApJ

With more than 5,000 exoplanets discovered (roughly 30 of which are potentially habitable), how can astronomers prioritize which to study in the search for extraterrestrial life? Today’s article explores the “continuous habitable zone”: planetary orbits that allow for water to be liquid long enough for detectable life to develop.

Habitable Worlds

With JWST beginning observations and the Habitable Exoplanet Observatory (HabEx) and Large UV/Optical/IR Surveyor (LUVOIR) space telescopes on the horizon, transit spectroscopy from space is set to dramatically increase our ability to characterize exoplanet atmospheres. Astronomers are working to prioritize which potentially habitable exoplanets are the best to search for life on. The habitable zone defines the region surrounding a star in which a planet could host liquid water, which is typically assumed to be a requirement for life. Stars evolve, though, and planets that are habitable at one point in time may not always be or have been.

Life takes time to develop and become detectable. Let’s consider Earth’s history as a benchmark. The Great Oxidation Event, during which biologically produced molecular oxygen accumulated in Earth’s atmosphere, occurred roughly 2 billion years after Earth’s formation. This is adopted as the length of time needed for life to make a detectable impact on a planet’s atmosphere. Today’s article presents a method to estimate the likelihood that a planet has resided in its star’s habitable zone for at least 2 billion years, defining the region where this could occur as the 2-billion-year continuous habitable zone (CHZ2).

So, how do we determine the habitable zone of a star? The article discussed two frameworks:

  1. Optimistic habitable zone: Regions that receive an amount of radiation from their star less than Venus did 1 billion years ago and more than Mars did 3.8 billion years ago. These “recent Venus” (RV) and “early Mars” (EM) limits are chosen because observations suggest liquid water existed on those planets until 1 and 3.8 billion years ago, respectively.
  2. Conservative habitable zone: A greenhouse effect model. The inner edge is defined by the “runaway greenhouse,” where stellar flux will vaporize an ocean. The outer edge is defined as the “maximum greenhouse,” where Rayleigh scattering dominates over the greenhouse effect of carbon dioxide.
diagram of the optimistic and conservative habitable zones for a range of stellar temperatures

Figure 1: The habitable zone for a range of stellar temperatures, showing Venus, Earth, Mars, and a selection of potentially habitable exoplanets. [Chester Harman; CC BY 4.0]
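The RV and EM limits above are usually expressed as thresholds on the effective stellar flux a planet receives, which depend on the star’s effective temperature. Here is a minimal sketch of that calculation, assuming the widely used polynomial fit of Kopparapu et al. (2014); today’s article does not spell out the coefficients, so the values below are quoted from memory of that work and should be checked against it:

```python
import math

# Polynomial fit for the effective flux threshold S_eff (in units of
# Earth's insolation) as a function of stellar effective temperature.
# Coefficients are assumed (Kopparapu et al. 2014 style), for illustration.
HZ_COEFFS = {
    # name: (S_eff_sun, a, b, c, d)
    "recent_venus": (1.776, 2.136e-4, 2.533e-8, -1.332e-11, -3.097e-15),
    "early_mars":   (0.320, 5.547e-5, 1.526e-9, -2.874e-12, -5.011e-16),
}

def s_eff(limit, t_eff):
    """Effective flux threshold for a habitable-zone edge."""
    s0, a, b, c, d = HZ_COEFFS[limit]
    t = t_eff - 5780.0  # offset from the solar effective temperature (K)
    return s0 + a*t + b*t**2 + c*t**3 + d*t**4

def hz_edge_au(limit, t_eff, luminosity=1.0):
    """Orbital distance (au) of a habitable-zone edge, for a star of the
    given effective temperature (K) and luminosity (solar units)."""
    return math.sqrt(luminosity / s_eff(limit, t_eff))

# For a Sun-like star the optimistic zone spans roughly 0.75-1.77 au,
# so Earth sits comfortably inside while Venus and Mars hug the edges.
inner = hz_edge_au("recent_venus", 5780.0)
outer = hz_edge_au("early_mars", 5780.0)
```

The inverse-square scaling with luminosity is what makes the habitable zone sweep outward as a star brightens over its lifetime, which is exactly why a fixed orbit may not stay habitable.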

Statistical Modeling

Using Bayesian statistics, the authors created an equation for the probability of a planet residing in the CHZ2 as a function of its host star’s mass, metallicity, and age. They assigned ages to the host stars based on evolutionary tracks from the Tycho stellar modeling code and found that their estimates aligned well with previous measurements based on stellar spins.
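The authors derive a closed-form Bayesian expression; a minimal Monte Carlo sketch of the same underlying idea (not the authors’ actual equation, and with hypothetical input numbers) might look like this:

```python
import random

def chz2_probability(age_mean, age_sigma, t_enter_hz, n=100_000, seed=0):
    """Monte Carlo sketch: probability that a planet has spent at least
    2 billion years inside its star's habitable zone.

    age_mean, age_sigma: a hypothetical Gaussian posterior on the
        stellar age, in Gyr (the article uses Tycho evolutionary tracks).
    t_enter_hz: assumed time after formation (Gyr) at which the planet's
        orbit entered the habitable zone.
    """
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n)
        if rng.gauss(age_mean, age_sigma) - t_enter_hz >= 2.0
    )
    return hits / n

# A star with a well-constrained age near 5 Gyr whose planet entered the
# habitable zone ~1 Gyr after formation almost certainly satisfies CHZ2.
p = chz2_probability(age_mean=5.0, age_sigma=0.5, t_enter_hz=1.0)
```

The key point the sketch captures is that the CHZ2 probability is driven as much by the age uncertainty as by the age itself, which is why reliable stellar ages matter so much here.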

The authors used their framework to evaluate nine potentially habitable exoplanets as well as Venus, Earth, and Mars. All stars considered were relatively Sun-like (between 0.5 and 1.1 solar masses), with Earth-like and super-Earth terrestrial planets (radii < 1.8x Earth’s and mass < 10x Earth’s). Figure 2 shows the results for the solar system and the authors’ best exoplanet candidate.

plot of solar system planets and one exoplanet and their likelihood of being within the continuous habitable zone

Figure 2: CHZ2 probabilities for two stars. Line styles indicate the habitable zone model: three conservative model versions for different planet masses and the RV/EM optimistic model. Left: The Sun, with the orbits of Venus, Earth, and Mars indicated. Right: KIC-7340288, showing the best candidate examined in the article with ~90% CHZ2 probability under all models. [Adapted from Ware et al. 2022]

What This Means for the Future

The authors conclude with proposals for future work on this topic, including extending the analysis to lower-mass stars. They also estimated ages for nearly 3,000 stars in the Transiting Exoplanet Survey Satellite (TESS) continuous viewing zones (the stars around which TESS is best positioned to find habitable-zone planets), so that a similar framework can be applied to them in the future. The exoplanets determined here to have a high CHZ2 probability will be ideal for follow-up with JWST. The method will also be valuable in target selection for future exoplanet characterization missions like HabEx and LUVOIR.

As shown in Figure 2 above, the method used in this article places Mars within the CHZ2, even though we know Mars is not currently habitable. This points to the need for additional parameters in the Bayesian analysis: further stellar and planetary properties important to their evolution, such as stellar oxygen-to-iron ratios, planetary composition, and stellar activity.

Original astrobite edited by Jana Steuer.

About the author, Macy Huston:

I am a fourth-year graduate student at Penn State University studying astronomy and astrophysics. My current work focuses on technosignatures, also referred to as the Search for Extraterrestrial Intelligence (SETI). I am generally interested in exoplanet and exoplanet-adjacent research. In the past, I have performed research on planetary microlensing and low-mass star and brown dwarf formation.

simulation of galaxies during the epoch of reionization in the early universe

Editor’s Note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Reionization with Simba: How Much Does Astrophysics Matter in Modeling Cosmic Reionization?
Authors: Sultan Hassan et al.
First Author’s Institution: Flatiron Institute and University of the Western Cape, South Africa
Status: Published in ApJ

While a tired trope to be sure, the hero’s journey to conquer the darkness and bring in an age of light is a memorable one. Today, our hero isn’t a person but succeeds in that illuminating quest all the same!

The authors of today’s article consider one particular question: how do we model the re-emergence of light sources in the early universe during the time of cosmic reionization? Namely, does the way we model the sources of ionizing photons (high-redshift stars and galaxies) impact observables on the large scales relevant for cosmological observations?

That’s a bit of a mouthful, but we’ll chew through it slowly and methodically in this bite!

Out of the Darkness

Before we can talk about reionization, we have to understand what brought about the dark age of the universe in the first place. After the hot Big Bang, the universe was initially fully ionized (all atoms were stripped of their electrons) up until it cooled to the point where hydrogen atoms “recombined” (free electrons paired up with lone protons) at a redshift (z) of roughly 1,000 (Figure 1). After recombination, the universe was filled with neutral hydrogen and was in a sense “dark,” since the cooling universe had no sources of ionizing photons to liberate electrons from the neutral hydrogen (HI).

However, during this dark age the seeds of revolution (ahem, structure formation) were slowly growing until eventually the first stars and galaxies formed inside dark matter halos, providing new sources of ionizing photons. These light-bringers then proceeded to make Swiss cheese out of the dark universe, creating holes filled with ionized hydrogen (HII) as illustrated by the bubbles at the center of Figure 1. You can get an instant and visceral feel for this process by watching this wonderful movie of a simulation of reionization.

diagram of the evolution of the universe over time

Figure 1: A schematic view of reionization within the larger cosmic timeline. Blue represents (opaque) neutral hydrogen, while black represents fully ionized hydrogen. The transition between these two regimes proceeds around redshift z = 10 by way of reionized bubbles around source stars and galaxies. [NAOJ; CC BY 4.0]

This story, like many heroic journeys, neglects an enormous amount of real-world complexity. To accurately model reionization at the level necessary for upcoming surveys, astrophysicists need to answer a host of questions: What is the detailed morphology of reionization? How does reionization depend on the characteristics of large-scale structure? On the processes of galaxy formation? What about the nature of the ionizing sources? Today’s article explores some of these questions using the Simba suite of galaxy-formation simulations. In particular, the authors delve into different ways to model the sources of ionization.

Getting Straight to the Source (Modeling)

The authors set out to understand whether or not the details of how stars and galaxies produce ionizing photons affect observables on large (“cosmological”) scales. To do this, they used the Simba simulations, which include a host of galaxy-formation physics as well as gas hydrodynamics, and accounted for radiative transfer of photons in post-processing. Specifically, the authors tested whether it was possible to notice a difference in the morphology of reionization or in the distribution of ionized hydrogen in the simulation with different choices of source modeling. The results of this comparison are shown in Figures 2 and 3, and we’ll walk through them one at a time.

Figure 2 shows the visual morphology of reionization by displaying the spatial distribution of the ionization fraction xHII (blue is ionized, red is neutral) in the simulation. Each row of Figure 2 corresponds to a different model of ionizing photons. The models contain ionizing photon sources with different properties, in different numbers, or with a larger degree of scatter, but each model produces a similar overall number of ionizing photons. The columns correspond to increasing time (and therefore increasing mean global ionization fraction) from left to right.

plots of ionization maps for all models tested

Figure 2: Simulation output maps of ionized hydrogen fraction (xHII) as a function of time (left to right) for several choices of reionization source models (rows). Small features are different by eye in the different models but the overall morphology on larger scales remains the same. [Hassan et al. 2022]

From a quick glance, it is clear that as we look down a single column, the Swiss-cheese structure of ionized bubbles looks broadly similar across source modeling choices. Not much changes on large scales, even though the smaller bubbles or detailed edge features may be changing significantly. The authors take this to mean that source modeling choice doesn’t have a significant impact on large scales, but for a quantitative confirmation of this finding they turn to the power spectrum of ionized hydrogen, which describes the spatial distribution of HII.

Figure 3 shows the power spectra of ionized hydrogen at the redshifts considered in Figure 2, as well as their residuals in the lower panel. The different modeling choices correspond to the different curves in the figure. The curves are broadly in agreement with each other over most scales for most choices of source modeling. In particular, on large scales (log k < 0.0) the models agree quite well — quantitatively corroborating that the choice of source modeling does not impact the large-scale spatial distribution of the ionized hydrogen.
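A power spectrum like those in Figure 3 can be sketched in a few lines of NumPy. This is a generic, circularly averaged 2D estimator (the article’s analysis is 3D, and this is a minimal stand-in, not the authors’ pipeline):

```python
import numpy as np

def power_spectrum_2d(field, box_size, n_bins=20):
    """Circularly averaged power spectrum of a square 2D map, a sketch
    of the statistic used to quantify the spatial distribution of the
    ionized fraction x_HII."""
    n = field.shape[0]
    delta = field - field.mean()          # fluctuations about the mean
    fk = np.fft.fftn(delta)
    # P(k) in a common convention: V * |delta_k|^2 with delta_k = FFT / N^2
    pk = (np.abs(fk) ** 2).ravel() * box_size**2 / n**4
    # Wavenumber magnitude of each Fourier mode
    kfreq = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky = np.meshgrid(kfreq, kfreq, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2).ravel()
    # Average the power in annular bins of k, dropping empty bins
    bins = np.linspace(kmag[kmag > 0].min(), kmag.max(), n_bins)
    which = np.digitize(kmag, bins)
    k_mean, p_mean = [], []
    for i in range(1, len(bins)):
        sel = which == i
        if sel.any():
            k_mean.append(kmag[sel].mean())
            p_mean.append(pk[sel].mean())
    return np.array(k_mean), np.array(p_mean)
```

Comparing such curves from maps made with different source models, bin by bin in k, is exactly the quantitative check the authors perform: if the low-k (large-scale) values agree, the modeling choice does not matter on cosmological scales.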

power spectra of the models in the previous figure

Figure 3: Reionization power spectra as reionization progresses in time from high to low redshift z (from left to right). For several choices of source modeling considered by today’s authors, the large-scale (low k) power is similar. [Hassan et al. 2022]

The conclusion of this article is concrete: the authors suggest that future large-scale reionization modeling can safely use more efficient methods than expensive simulations like the ones included here. This follows from the finding that changes in the source modeling, and the associated scatter in the relation between ionization rate and halo mass, do not affect large scales, as shown in Figures 2 and 3. If large-scale reionization does not depend on the details of astrophysical source modeling, this will definitely make the lives of astrophysicists studying large-scale reionization easier — without the need to simulate these details, researchers can run less costly simulations to extract information about the high-redshift universe on scales relevant for cosmology!

Original astrobite edited by Alice Curtin.

About the author, Jamie Sullivan:

I am a third-year astrophysics PhD student at UC Berkeley and part of the Berkeley Center for Cosmological Physics. My current research focuses on measuring and modeling large-scale structure to constrain cosmological parameters. I completed my undergraduate at UT Austin, and I’m originally from the Washington, DC, area.
