Astrobites RSS

Hubble Space Telescope image of the dwarf spiral galaxy NGC 5949

Editor’s Note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Wandering Black Hole Candidates in Dwarf Galaxies at VLBI Resolution
Authors: Andrew J. Sargent et al.
First Author’s Institution: United States Naval Observatory and The George Washington University
Status: Published in ApJ

How do you make a black hole billions of times the mass of the Sun? Even for the planet-building Magratheans, this seems like a tall order. Plenty of mechanisms have been proposed to explain the formation of these supermassive black holes found at the centers of most galaxies. Some involve the mergers of “seeds” — massive black holes weighing in at merely hundreds to hundreds of thousands of solar masses. A simple way to test these theories is to search for relic massive black holes, and low-mass dwarf galaxies are excellent targets. Since dwarf galaxies haven’t undergone many mergers, any massive black holes they harbor should have avoided being gobbled up by growing supermassive black holes.

Today’s article studies 13 possible massive black hole candidates in dwarf galaxies, some of which may have wandered to the edges of their hosts. What’s up with that — and are they really massive black holes? Let’s dive in!

Here’s a question: since supermassive black holes are usually found near the centers of their galaxies, why might we expect some massive black holes in dwarf galaxies to lie farther out? The answer has to do with gravity: since dwarf galaxies are much less massive than the galaxies that host supermassive black holes, their gravitational potential wells are shallower, making it easier for massive black holes to “wander” away from their centers. This means that if you see a radio source offset far from a dwarf galaxy’s center, it could be a massive black hole — or it could be an accreting supermassive black hole (an active galactic nucleus) in a galaxy far, far away that by chance lies behind the dwarf galaxy. These unwanted interlopers can pose a challenge for identifying massive black holes.

Another issue with finding massive black holes is that they’re faint. While massive black holes go through periods of accretion like supermassive black holes, their low masses mean that they don’t accrete as quickly, reducing their luminosities. By the early 2000s, only two accreting black holes had been found in dwarf galaxies. Fortunately, this changed with the advent of sky surveys like the now-famous Sloan Digital Sky Survey (SDSS), which has been running since 2000 and has amassed detections of close to a billion unique sources.

The 13 massive black hole candidates, shown in Figure 1, were assembled in an article from 2020 by some of the same astronomers who authored today’s article. In the 2020 article, the team sifted through 43,707 low-mass dwarf galaxies from SDSS, looking for sources that had been detected at radio frequencies by the Very Large Array. After keeping the matches and eliminating the radio sources that were background active galactic nuclei or could be explained by processes related to star formation, the team ended up with 13 massive black hole candidates, many of which aren’t aligned with the centers of their host galaxies.

optical images of the 13 dwarf galaxies in the sample

Figure 1: The 13 dwarf galaxies hosting possible massive black hole candidates, as seen by the Dark Energy Camera Legacy Survey at optical wavelengths. The red crosses show the location of the compact radio sources that may be massive black holes. While some appear close to their host’s center, others are significantly farther away. [Reines et al. 2020]

In this more recent article, the authors performed follow-up observations using the Very Long Baseline Array (VLBA). The VLBA uses radio telescopes thousands of kilometers apart to reach high angular resolution, allowing astronomers to see fine details. Unfortunately, the VLBA detected only four of the 13 candidates — and those four, because of their luminosity and position, seemed most likely to be active galactic nuclei in galaxies far beyond the dwarfs the team was targeting. The detected candidates are shown in Figure 2.

radio emission detected from four sources in the sample

Figure 2: The four sources the team was able to detect with the VLBA. Here, S is flux density, a quantity that describes the intensity of radio emission. As these sources are actually background active galactic nuclei rather than massive black holes in the targeted dwarf galaxies, the physical scales in the lower right are inaccurate. [Sargent et al. 2022]

This seems like an enormous problem! Only four detections, all of which appear to be imposters? Fortunately, the situation isn’t as dire as it might seem. While the VLBA is good at resolving sources on small scales in the configuration the team used, it may not resolve large-scale sources — and the radio emission from accreting massive black holes might be in the form of larger structures like radio lobes, rather than central point sources.

Multiwavelength observations confirmed that two of the remaining nine candidates are likely accreting supermassive black holes near the centers of their host galaxies, but the other seven remain unknown. Five of those seven candidates are too bright to be explained by star formation and, based on their positions, could be either more background active galactic nuclei or, tantalizingly, wandering massive black holes.

Where do we go next? Follow-up observations at other wavelengths could be useful. The group suggests the Hubble Space Telescope in particular as a means of figuring out what those seven sources truly are. Given the difficulties involved in detecting massive black holes, even one more could prove valuable as astronomers try to understand the formation of the largest black holes in the universe.

Original astrobite edited by Suchitra Narayanan.

About the author, Graham Doskoch:

I’m a graduate student at West Virginia University, pursuing a PhD in radio astronomy. My research focuses on pulsars and efforts to use them to detect gravitational waves as part of pulsar timing arrays like NANOGrav and the IPTA. I love running, hiking, reading, and just enjoying nature.

composite ultraviolet and infrared image of the triangulum galaxy

Title: The Panchromatic Hubble Andromeda Treasury: Triangulum Extended Region (PHATTER) II. The Spatially Resolved Recent Star Formation History of M33
Authors: Margaret Lazzarini et al.
First Author’s Institution: California Institute of Technology
Status: Published in ApJ

The Panchromatic Hubble Andromeda Treasury (PHAT) team has already done the impossible. Led by Professor Julianne Dalcanton (read our interview with her from #AAS233 here!), PHAT completely revolutionized observational astronomy by imaging over 117 million stars in the disk of the Andromeda Galaxy, otherwise known as Messier 31. Imaging Messier 31 took two weeks of Hubble Space Telescope time, which is a remarkable achievement considering many observational astronomers are lucky to get even a few precious hours on Hubble!

Now, the PHAT team is ready for round two. They have moved on to Messier 31’s neighbor and the third most massive galaxy in our Local Group: the Triangulum Galaxy, or Messier 33. And of course, this observing program wouldn’t be complete without a new, catchy acronym: the Panchromatic Hubble Andromeda Treasury: Triangulum Extended Region, or “PHATTER.” Studying Messier 33 in addition to Messier 31 is beneficial because Messier 33 has had more star formation overall and can therefore probe a parameter space left unexplored in Messier 31. Messier 33 also has a lower stellar surface density (i.e., fewer stars per unit area), so resolving individual stars is much easier in Messier 33 than in Messier 31. The PHATTER team has generously made their data publicly available, providing photometry (i.e., the measured flux from astronomical objects) for over 22 million stars covering 38 square kiloparsecs (about 400 million square light-years) of Messier 33.
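As a sanity check on the quoted survey area, the conversion from square kiloparsecs to square light-years is a one-liner, using the standard value of about 3.26 light-years per parsec:

```python
# Quick check of the quoted survey area: 38 square kiloparsecs in square light-years.
LY_PER_PC = 3.2616            # light-years per parsec
ly_per_kpc = LY_PER_PC * 1e3  # light-years per kiloparsec
area_ly2 = 38 * ly_per_kpc**2
print(f"{area_ly2:.2e} square light-years")  # ~4.0e8, i.e., about 400 million
```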

This article, the second in the PHATTER series (where the first described the observations and photometry), measured the star formation history of Messier 33. Measuring the star formation history of a galaxy can provide crucial information about the astrophysical phenomena that shape galaxy formation, such as how the structure of a galaxy changes over time.

To measure star formation rates of galaxies, astronomers have historically used two different methods. The first method involves studying ultraviolet emission from massive young stars. Because young stars primarily emit at ultraviolet wavelengths, ultraviolet flux is often used as a tracer for star formation within the last 200 million years. The second method involves studying H-alpha emission, which occurs when the electron in a hydrogen atom falls from the third energy level to the second. H-alpha emission often indicates that hydrogen is being ionized, usually by young O stars, and this emission traces star formation within the last 5 million years. However, both of these techniques are limited by dust extinction, which can be difficult to correct for.

The authors of today’s article use a novel method referred to as “CMD-based modeling” to measure the star formation history of Messier 33. The basic premise of this technique is that if you have high-accuracy photometry, you can use color–magnitude diagrams (CMDs, the observer’s version of the H-R diagram, where instead of plotting luminosity vs. temperature, you plot magnitude vs. color) to infer the star formation rates throughout history that would have produced a given observed population of stars. For example, younger stars spend less time in a given color–magnitude diagram zone than older red giant branch stars, and this information can be used to interpret the observed color–magnitude distribution of stars in a galaxy. Another useful benefit of the CMD-based modeling technique is that it simultaneously fits for the dust extinction, unlike the ultraviolet or H-alpha methods.
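The fitting idea behind CMD-based modeling can be sketched in a few lines: treat the observed CMD as a linear combination of “basis” CMDs, one per age bin, and solve for the non-negative star-formation rates that best reproduce it. The numbers below are purely illustrative; this is not the MATCH software or the article’s actual setup:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical "basis" CMDs: expected star counts per CMD bin for a population
# formed at unit star-formation rate in each age bin (young, middle, old).
basis = np.array([
    [40.0,  5.0,  1.0],   # bright/blue bins: dominated by young stars
    [10.0, 30.0,  5.0],   # intermediate bins
    [ 2.0,  8.0, 50.0],   # red giant branch bins: dominated by old stars
])

true_sfr = np.array([0.5, 1.0, 2.0])   # star-formation rate per age bin (arbitrary units)
observed_cmd = basis @ true_sfr        # noiseless "observed" CMD histogram

# Recover the star formation history: find the non-negative SFRs whose
# combination of basis CMDs best matches the observed CMD.
recovered, residual = nnls(basis, observed_cmd)
print(recovered)   # ≈ [0.5, 1.0, 2.0]
```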

To measure the star formation history in bins across the face of Messier 33, the authors split their photometry into ~2,000 regions, each of which contained 4,000 stars on average. The team then fit color–magnitude diagrams in each region using the MATCH software, which finds the combination of stellar populations that best reproduces the observed color–magnitude diagram. Using this software, the authors were able to reconstruct Messier 33’s star formation history by measuring the star formation rate in ~50-million-year bins, going back to 630 million years ago. While the CMD-based method requires high-resolution photometry, it can recover the star formation rate throughout a galaxy’s history, whereas the ultraviolet and H-alpha techniques only measure recent star formation.

The Structure of Messier 33

Detailed star formation histories can be used to measure how a galaxy’s stellar structure has changed over time. Messier 33 has typically been characterized as a flocculent spiral galaxy, meaning its spiral arms are less defined than those of a grand design spiral galaxy like Messier 101 (see Figure 1 for a comparison of the two). However, by studying the star formation rate throughout Messier 33’s history (as opposed to just the recent star formation), the authors were able to reconstruct the evolution of Messier 33’s spiral structure using the measured star formation rate in ~50-million-year time bins.

The authors found that while Messier 33 does indeed have flocculent spiral structure that formed about 79 million years ago, it previously had two distinct spiral arms. In short, the younger stellar populations (younger than 80 million years old) present as a flocculent spiral structure and the older stellar populations are primarily present in two distinct spiral arms.

In Figure 2, you can clearly see the split between these two stellar populations. The authors also clearly detect a bar in Messier 33 that is older than about 79 million years, which is significant because there has been much recent debate in the literature about whether Messier 33 has a bar. The detection of bars in galaxies has strong implications for galaxy formation histories; bars funnel large amounts of gas toward a galaxy’s center, fueling new star formation, building central bulges of stars, and feeding massive black holes. In particular for Messier 33, a small bar could explain discrepancies between models and observed gas velocities in the inner disk. The authors suggest that more modeling should be done to explain why the younger stellar populations did not form in a bar, whereas the older stellar populations did.

plots of Messier 33's star-formation rate during two time periods

Figure 2: The spiral structure clearly evolves from 79–631 million years ago to 0–79 million years ago, indicating a transition in the spiral structure of Messier 33 around 79 million years ago from a two-armed barred spiral structure (right) to the more flocculent spiral structure we observe today. [Lazzarini et al. 2022]

Finally, the authors compared their global star formation rate (which has units of solar masses per year and measures the total mass of stars being added to the galaxy each year) to that measured by the conventional methods using ultraviolet and H-alpha emission. The authors found their measured value was about 1.6 times larger than the ultraviolet/H-alpha measurement, indicating that ultraviolet/H-alpha measurements may not capture the full star formation rate of a galaxy. In the future, the authors plan to extend this analysis by focusing on measuring the age gradient of Messier 33’s spiral arms and bar.

Original astrobite edited by Isabella Trierweiler.

About the author, Abby Lee:

I am a graduate student at UChicago, where I study cosmic distance scales and the Hubble tension. Outside of astronomy, I like to play soccer, run, and learn about fashion design!

photograph of the Keck telescopes

Title: Long Dark Gaps in the Lyβ Forest at z<6: Evidence of Ultra Late Reionization from XQR-30 Spectra
Authors: Yongda Zhu et al.
First Author’s Institution: University of California, Riverside
Status: Published in ApJ

The who, what, when, and where of reionization are unresolved questions that have important implications for our understanding of the cosmos. This final phase transition of the universe from neutral to ionized encompasses a variety of dramatic changes, as large-scale structures formed and evolved and the first stars and galaxies began to light up the universe. While shining a light on the history of reionization would also help uncover the history of how objects in the universe emerged and grew, this epoch is fundamentally dark.

However, we do have the basics of the why down: before the epoch of reionization, the universe was mostly filled with neutral hydrogen gas. Then, as the first stars, galaxies, and quasars began to form and started shining, these objects emitted high-energy photons that ionized the neutral gas around them. The ionizing radiation kicked out electrons from neutral hydrogen atoms until eventually most of the gas in the universe became ionized.

One key part of gaining a complete understanding of reionization is the when — precisely when did it begin and end, and how rapidly? There are a few main methods for probing these transition points, all of which suggest the process occurred early on, with a midpoint at roughly redshift z ~ 8 (600 million years after the Big Bang) and an endpoint somewhere around z ~ 5.5–6 (1 billion years after the Big Bang).
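For readers who want to check the redshift-to-age conversions quoted above, here is a minimal sketch that integrates the Friedmann equation for an assumed flat ΛCDM cosmology (H0 = 70 km/s/Mpc, Ωm = 0.3; the article’s exact cosmological parameters may differ):

```python
import numpy as np
from scipy.integrate import quad

# Assumed flat ΛCDM parameters (not taken from the article).
H0 = 70.0              # Hubble constant, km/s/Mpc
Om, OL = 0.3, 0.7      # matter and dark-energy density parameters
KM_PER_MPC = 3.0857e19
SEC_PER_GYR = 3.156e16

def age_at_z(z):
    """Cosmic age at redshift z in Gyr: t = ∫_z^∞ dz' / [(1+z') H(z')]."""
    H = lambda zp: (H0 / KM_PER_MPC) * np.sqrt(Om * (1 + zp)**3 + OL)
    integrand = lambda zp: 1.0 / ((1 + zp) * H(zp))
    t_sec, _ = quad(integrand, z, np.inf)
    return t_sec / SEC_PER_GYR

print(f"age at z = 8:   {age_at_z(8):.2f} Gyr")   # ~0.6 Gyr: reionization midpoint
print(f"age at z = 5.5: {age_at_z(5.5):.2f} Gyr") # ~1 Gyr: reionization endpoint
```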

Absorbing It All

Many of the techniques used to trace reionization, including those used in today’s article, involve the Lyman series transitions of hydrogen, especially the Lyman-α (n=2 to n=1) transition. One such technique relies on observations of quasars (active supermassive black holes in the centers of galaxies and among the brightest objects in the universe) during the epoch of reionization. By observing distant quasars, we can understand the gas content in the universe along that line of sight using the presence and absence of Lyman-α emission and absorption compared to typical quasar spectra, which are fairly well understood and have strong signals.

As emission from a quasar travels toward the observer, some of it is intercepted by gas clouds along the way, which absorb the emission and produce a Lyman-α absorption line at a wavelength set by the redshift of each gas cloud. At redshifts before the end of the epoch of reionization, however, the largely neutral gas in the way is very opaque at these wavelengths: photons with wavelengths (energies) near Lyman-α struggle to pass through neutral hydrogen, which is optically thick enough to suppress the observed emission nearly completely. This opacity to high-energy photons produces a contiguous region of strong absorption in the spectrum known as a dark gap (see Figure 1).
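Conceptually, identifying a dark gap amounts to finding contiguous stretches of a spectrum where the flux stays below an absorption threshold. A toy sketch follows; the threshold, minimum length, and spectrum are illustrative, not the authors’ actual criteria:

```python
import numpy as np

def find_dark_gaps(wavelength, flux, threshold, min_length=5):
    """Return (start, end) wavelengths of contiguous stretches where the
    flux stays below the absorption threshold for at least min_length pixels."""
    dark = flux < threshold
    gaps, start = [], None
    for i, is_dark in enumerate(dark):
        if is_dark and start is None:
            start = i
        elif not is_dark and start is not None:
            if i - start >= min_length:
                gaps.append((float(wavelength[start]), float(wavelength[i - 1])))
            start = None
    if start is not None and len(dark) - start >= min_length:
        gaps.append((float(wavelength[start]), float(wavelength[-1])))
    return gaps

# Toy spectrum: flat continuum with one heavily absorbed stretch.
wl = np.linspace(8000.0, 8100.0, 101)   # wavelength grid in angstroms
flux = np.ones_like(wl)
flux[30:60] = 0.01                      # simulated dark gap
print(find_dark_gaps(wl, flux, threshold=0.1))   # [(8030.0, 8059.0)]
```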

diagram showing how neutral gas creates a gap in quasar spectra

Figure 1: Example spectrum of a distant quasar with ionizing radiation emitted from the quasar intercepted by neutral gas along the line of sight. Emission lines directly from the quasar are marked by dashed colored lines. An over-dense patch of neutral gas causes a gap in the Lyman-α emission as nearly all of the flux from the quasar is absorbed. [Adapted from Figure 7 in An Introductory Review on Cosmic Reionization by John H. Wise]

Filling In the Gaps

These dark gaps could be caused by several different processes within reionization. For one, the ionizing background radiation itself could fluctuate, though the authors explain that their results and other recent work disfavor this scenario because it doesn’t leave enough neutral gas at late times. Alternatively, lingering “islands” of neutral gas, which act as pockets of Lyman-photon absorption, could be the cause. Lastly, reionization could simply have ended later in the history of the universe, meaning more neutral gas was still available to absorb high-energy photons at lower redshifts.

These scenarios are difficult to disentangle with Lyman-α gaps alone. One solution, as presented in today’s article, is to use a Lyman transition with a slightly shorter wavelength that passes through neutral gas slightly more easily (i.e., the neutral gas has a lower optical depth at that wavelength). The authors use this technique of tracing long dark gaps in quasar spectra but apply it to another Lyman transition, Lyman-β (n=3 to n=1). In order to study the dark Lyman-β gaps, the authors analyze spectra of a sample of epoch of reionization quasars at z > 5.5. Within each spectrum, they map out the dark gaps of Lyman-β absorption, the length of the gaps, and the redshift evolution of the gaps. As shown in Figure 2, one quasar spectrum in particular had a uniquely long dark gap down to z ~ 5.5. Within that line of sight, the authors found a low-density region of galaxies, which supports the idea that highly opaque sight lines are associated with galaxy underdensities. This makes sense: in areas with fewer galaxies to ionize their surroundings, there is more remaining neutral gas.
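The wavelengths of the two transitions follow directly from the Rydberg formula, and a quick calculation shows why Lyman-β sits blueward of Lyman-α:

```python
# Rydberg formula for hydrogen Lyman-series wavelengths: 1/λ = R (1 - 1/n²).
R = 1.0973731568e7   # Rydberg constant, 1/m

def lyman_wavelength_nm(n):
    """Rest wavelength of the n -> 1 transition, in nanometers."""
    return 1e9 / (R * (1.0 - 1.0 / n**2))

lya = lyman_wavelength_nm(2)   # Lyman-alpha, ~121.5 nm
lyb = lyman_wavelength_nm(3)   # Lyman-beta, ~102.5 nm
print(f"Ly-alpha: {lya:.1f} nm, Ly-beta: {lyb:.1f} nm")
```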

spectrum of the quasar PSOJ025-11

Figure 2: The light blue line in the top panel shows the spectrum of the quasar, and the dashed curve shows the predicted emission from the quasar in the absence of absorption from gas in front of it. The region over which Lyman-β gaps were searched for is labeled, as is the corresponding Lyman-α forest for the same redshift range. The bottom panel shows a zoom-in of the Lyman-β gap region, with gaps (flux lower than the dashed threshold) shaded in gray. The dark gap between z = 5.53 and 5.61 is the longest gap detected in the survey. [Adapted from Zhu et al. 2022]

The authors also emphasize the uniqueness of this study: the reionization scenarios are difficult to disentangle with Lyman-α gaps, as their signatures look similar. However, given the lower optical depth of neutral gas at the wavelength of Lyman-β, Lyman-β is a more sensitive probe of neutral gas in the late intergalactic medium, and it’s a useful tool to better understand the end of reionization. By further comparing their observations of dark gaps to expectations from cosmological simulations of these scenarios, the authors determine which reionization scenarios remain possible given the evidence.

Given the distinction enabled by Lyman-β data, the authors propose the best-fit scenario for their dark gap sample is late reionization, with the epoch of reionization ending at z ~ 5.3. They demonstrate that rapid late-reionization models, specifically those with a fraction of neutral gas > 5% at z = 5.6, are consistent with the observations. Looking ahead, these dark Lyman-β gaps and future large samples of quasar spectra with gaps can help fill in the gaps in our knowledge of the timing of reionization.

Original astrobite edited by Jana Steuer.

About the author, Olivia Cooper:

I’m a second-year grad student at UT Austin studying the obscured early universe, specifically the formation and evolution of dusty star-forming galaxies. In undergrad at Smith College, I studied astrophysics and climate change communication. Besides doing science with pretty pictures of distant galaxies, I also like driving to the middle of nowhere to take pretty pictures of our own galaxy!

collapsar

Title: Radio Constraints on r-process Nucleosynthesis by Collapsars
Authors: K.H. Lee et al.
First Author’s Institution: University of Florida
Status: Published in ApJL

Most elements in the periodic table originate in stars. Elements up to iron can be formed by nuclear fusion in stellar cores, a process that releases energy. Forming still heavier elements consumes energy instead, so they require additional energy sources and more extreme environments. These elements are believed to originate in supernovae: the explosive deaths of stars with masses more than ten times that of the Sun. However, even supernovae cannot produce the heaviest elements in the periodic table — the lanthanides and the actinides. The origin of these “heavy” elements, which are forged alongside precious metals like gold and platinum, is a long-standing mystery. The authors of today’s article use radio observations of collapsars — special types of supernovae — to constrain whether these elusive heavy elements can be formed in them.

r-process Nucleosynthesis

Lanthanides and actinides are difficult to produce because they are formed by a rare process involving the rapid capture of neutrons, known as the r-process. In this process, a free neutron is captured onto the nucleus of an atom, producing a nucleus with a higher atomic mass number. The problem is that free neutrons are inherently unstable particles that undergo beta decay to form a proton, an electron, and an antineutrino on a timescale of a few minutes. To efficiently activate the r-process, neutrons need to be captured faster than they decay, so a neutron-rich environment is needed. One such environment is formed during the explosive mergers of two neutron stars. So far, only one such event has been observed in both gravitational and electromagnetic waves: GW170817. However, other astrophysical explosions have also been proposed to produce neutron-rich ejecta, creating the conditions for r-process nucleosynthesis. One such explosion is known as a collapsar.

CollapsaRs

When massive stars (heavier than about 10 solar masses) die, they can form neutron stars or black holes. If a neutron star is formed, most of the star’s outer layers are ejected at large velocities (~10,000 km/s!), producing an energetic explosion that we all know as a supernova. However, if a black hole is formed instead of a neutron star, most of the star can be gobbled up by the black hole and very little material will be ejected. This changes if the star that formed the black hole had a mass of 20–30 solar masses and was spinning really fast. Owing to the large angular momentum of this star, a larger fraction of its total mass can now be ejected. In fact, some of the ejected material can be accelerated to relativistic speeds (comparable to the speed of light), producing a beam of high-energy photons called a gamma-ray burst. In addition to the gamma-ray burst, the remaining ejected material — which is not relativistic but is still moving really fast (~20,000 km/s) — can produce a regular supernova-like explosion. This explosion of a rapidly spinning massive star is known as a collapsar.

The ejected material from collapsars can power gamma-ray bursts and supernovae. The material that falls into the black hole can also do interesting stuff. Because this material has large angular momentum, it can form a disk around the newly formed black hole. Most of this disk will be accreted by the black hole within a few seconds, destroying the disk material, but the accretion process itself can eject a fraction of the disk in the form of winds. It turns out that these disk winds are extremely rich in neutrons and are thus viable sites for r-process nucleosynthesis (i.e., formation of lanthanides and actinides). However, the signatures of this r-process are extremely difficult to observe as they can be masked by other features arising from the supernova explosion. To date, there has been no direct evidence of r-process nucleosynthesis in collapsars.

Radio

Radio observations provide one way to observe the signatures of this elusive r-process in collapsars. Several months after the supernova explosion, disk-wind ejecta that is rich in lanthanides and actinides should interact with the interstellar medium surrounding the black hole. This will produce a radio flare powered by a mechanism known as synchrotron emission, in which electrons spiral around magnetic field lines. This flare should peak several months after the explosion, evolve slowly, and last for several years (Figure 1). Observations of such a radio flare in collapsars several years post-explosion can provide constraints on the amount of r-process elements synthesized in them.

plot of theoretical radio emission from a collapsar

Figure 1: Expected radio emission from the collapsar GRB060505 that occurred in 2006. The theoretical models assume an r-process ejecta mass of 0.1 solar mass (solid lines) and 0.01 solar mass (dotted lines). Different colors represent different model parameters, specifically interstellar medium density profiles and electron distributions. The black arrow marks actual upper limits from Very Large Array observations that can be used to constrain the r-process ejecta mass. [Lee et al. 2022]

Putting It All Together!

The authors of today’s article searched for this late-time radio emission from collapsars. First, they selected 11 collapsars that exploded in the last decade based on the gamma-ray bursts detected by the Swift space satellite. They then looked for possible radio emission at the locations of these supernovae using data from the Karl G. Jansky Very Large Array radio telescopes. Unfortunately, they did not detect any late-time radio flares from these collapsars. However, based on the sensitivity of the data, the team was able to place upper limits on the observed late-time radio emission. They then used theoretical models to derive upper limits on the total r-process material ejected by the collapsars. They found that the collapsars could not have ejected more than 0.2 solar mass of r-process material. For reference, the neutron star merger GW170817 produced about 0.05 solar mass of r-process material. This means that, per event, these collapsars cannot eject dramatically more r-process material than the one neutron star merger we know of.
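The final inversion step, turning a radio non-detection into an ejecta-mass limit, can be sketched as interpolating a model grid against the flux upper limit. The grid values below are made up for illustration; they are not the models or measurements of Lee et al.:

```python
import numpy as np

# Hypothetical model grid: predicted peak radio flux density (in microjanskys)
# for several r-process ejecta masses (in solar masses), at fixed distance and
# interstellar medium density. Illustrative numbers only.
model_mass = np.array([0.01, 0.05, 0.1, 0.2, 0.5])
model_flux = np.array([2.0, 12.0, 28.0, 65.0, 190.0])

flux_upper_limit = 65.0   # microjanskys: deepest non-detection (toy value)

# The largest ejecta mass still consistent with the non-detection is where
# the model prediction crosses the observed flux upper limit.
mass_limit = np.interp(flux_upper_limit, model_flux, model_mass)
print(f"r-process ejecta mass < {mass_limit:.2f} M_sun")
```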

The authors note that the derived upper limits depend on their models’ assumptions about the interstellar medium density profiles, the energy distribution of electrons in the interstellar medium, and the velocity of the disk-wind ejecta. Despite these caveats, their observations place meaningful constraints on the amount of r-process material synthesized in collapsars. Future, more sensitive radio observations can help confirm whether or not collapsars can synthesize the heaviest lanthanides and actinides. This will be important for identifying whether collapsars and neutron star mergers can account for the observed r-process elements in the universe, or whether we need to look for more r-process factories.

Original astrobite edited by Sasha Warren.

About the author, Viraj Karambelkar:

I am a second-year graduate student at Caltech. My research focuses on infrared time-domain astronomy. I study dusty explosions and dust enshrouded variable stars using optical and infrared telescopes. I mainly work with data from the Zwicky Transient Facility and the Palomar Gattini-IR telescopes. I love watching movies and plays, playing badminton and am trying hard to improve my chess and crossword skills.

Illustration of a brown dwarf

Title: Tracing the Top-of-the-Atmosphere and Vertical Cloud Structure of a Fast-Rotating Late T-dwarf
Authors: Elena Manjavacas et al.
First Author’s Institution: Space Telescope Science Institute and Johns Hopkins University
Status: Published in AJ

While they have a reputation as “failed stars,” brown dwarfs might have more in common with their gas giant planet cousins than with stars. Brown dwarfs have swirling, patchy clouds in their atmospheres, and their light curves have been seen to vary as they rotate and bring different faces with variable cloud coverage into view. By observing brown dwarfs over their rotation periods with spectrophotometry, astronomers can simultaneously measure how much the atmosphere is changing in multiple wavelength bands. Because different wavelengths probe different pressure levels within an atmosphere, this technique can be used to build a 3D map of a brown dwarf’s atmosphere. Although most spectrophotometric observations of brown dwarfs have used the Hubble Space Telescope, the authors of today’s article employed the ground-based Keck I telescope to study 2M0050–3322, a rapidly rotating T-type brown dwarf.

Seeing One Atmosphere Through Another

Since the levels of variability in brown dwarf atmospheres can be small, it is important to characterise any non-brown-dwarf sources of noise in the data. For these ground-based observations, particular care has to be taken to account for changes in Earth’s atmosphere over the course of the observations. Using the Multi-Object Spectrometer For Infra-Red Exploration (MOSFIRE), the authors observed 2M0050–3322 for two of its rotation periods (around 2.5 hours in total). They also observed several other nearby stars to help calibrate the impacts of things like the local humidity and temperature on the measurements. The light curves of all the objects were obtained at multiple different infrared wavelengths: J band, H band, and in a wavelength range slightly redder than the H band, which the authors call the CH4–H2O band. By dividing each of 2M0050–3322’s light curves by the median light curve of the calibration stars, 2M0050–3322’s light curves could be corrected for any effects of Earth’s changing atmosphere to find the true variability of the brown dwarf, as shown in Figure 1.

Light curves of the brown dwarf in several wavelength bands

Figure 1: Light curves of 2M0050–3322 in J, H and CH4–H2O bands. The CH4–H2O band shows the biggest fluctuations, but all bands are best fit by a flat line. [Manjavacas et al. 2022]
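That division step can be sketched in a few lines of Python; the flux values below are invented purely for illustration.

```python
from statistics import median

# Raw target flux versus time, plus three calibration-star light curves that
# share the same telluric (Earth-atmosphere) trend. All numbers are made up.
target = [1.00, 0.97, 1.02, 0.95, 1.01]
cal_stars = [
    [1.00, 0.96, 1.03, 0.94, 1.00],
    [1.00, 0.98, 1.01, 0.96, 1.02],
    [1.00, 0.97, 1.02, 0.95, 1.01],
]

# The median calibration light curve at each time step captures the shared
# atmospheric variations (humidity, airmass, seeing).
cal_median = [median(fluxes) for fluxes in zip(*cal_stars)]

# Dividing removes the common trend, leaving the target's intrinsic signal.
corrected = [t / c for t, c in zip(target, cal_median)]
```

Using the median (rather than the mean) of the calibration stars makes the correction robust to a single misbehaving comparison star.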

Over the course of their observations, the authors found that 2M0050–3322 had a minimum-to-maximum fluctuation of ~1% in the J and H bands and a higher 5% amplitude in the redder CH4–H2O band. This seemingly low level of variation was also confirmed by fitting flat and sinusoidal models to the light curves, with a flat line proving to be the preferred fit for all the observations.
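As a toy version of that model comparison (not the authors’ actual pipeline), one can fit a flat line and a fixed-period sinusoid to an invented light curve and rank them with the Bayesian Information Criterion (BIC), which penalizes extra free parameters; the lower BIC wins.

```python
import math

# Toy light curve: a 0.2% sinusoid sampled over two full 1.5-hour periods,
# with assumed 1% per-point uncertainties. All values are invented.
times = [i * 0.1 for i in range(30)]                       # hours
flux = [1.0 + 0.002 * math.sin(2 * math.pi * t / 1.5) for t in times]
sigma = 0.01
n = len(flux)

def chi2(model):
    return sum(((f - m) / sigma) ** 2 for f, m in zip(flux, model))

# Flat model: one free parameter (the mean level).
mean = sum(flux) / n
bic_flat = chi2([mean] * n) + 1 * math.log(n)

# Sinusoid at a fixed 1.5-hour period: mean plus sin/cos amplitudes
# (3 free parameters), fit by projecting onto the sin/cos basis, which is
# orthogonal here because the sampling covers whole periods uniformly.
w = 2 * math.pi / 1.5
s = [math.sin(w * t) for t in times]
c = [math.cos(w * t) for t in times]
a = sum(f * si for f, si in zip(flux, s)) / sum(si * si for si in s)
b = sum(f * ci for f, ci in zip(flux, c)) / sum(ci * ci for ci in c)
model_sin = [mean + a * si + b * ci for si, ci in zip(s, c)]
bic_sin = chi2(model_sin) + 3 * math.log(n)
```

With a 0.2% signal against 1% error bars, the sinusoid’s extra parameters are not statistically justified, so the flat model wins, mirroring the article’s result.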

Models to the Rescue?

With observations in hand, the authors then sought to compare their results to models of 2M0050–3322 to see if a similar lack of variation was present. General circulation models of the thermal flux of the atmosphere predict a slightly sinusoidal light curve with almost a 1% variation, which matches the amplitude seen in the J and H band observations! Meanwhile, models of the structure of clouds in the atmosphere show that 2M0050–3322 has various types of clouds at different pressures, meaning that each of the observation bands could be probing different clouds.

Figure 2 shows that the CH4–H2O band traces similar pressure levels as those where sodium sulphide (Na2S) and potassium chloride (KCl) clouds condense. This could explain why the CH4–H2O light curves show more variability than the other bands, which reach deeper into the atmosphere and therefore do not probe these clouds. While these modelling efforts begin to explain the observations of the brown dwarf, the authors caution that longer-term monitoring is likely needed to fully explain the mysteries of 2M0050–3322.

Visual representation of the cloud structure of 2M0050–3322

Figure 2: Visual representation of the cloud structure of 2M0050–3322. Three types of clouds are seen to form in the atmosphere, each at different pressure levels. KCl and Na2S clouds are seen to form at similar pressure levels as those probed by the CH4–H2O band, possibly explaining why this band shows more variation than the J and H bands. [Manjavacas et al. 2022]

Original astrobite edited by William Balmer.

About the author, Lili Alderson:

Lili Alderson is a second-year PhD student at the University of Bristol studying exoplanet atmospheres with space-based telescopes. She spent her undergrad at the University of Southampton with a year in research at the Center for Astrophysics | Harvard-Smithsonian. When not thinking about exoplanets, Lili enjoys ballet, film, and baking.

illustration of the hubble and gaia spacecraft working together

Title: GaiaHub: A Method for Combining Data from the Gaia and Hubble Space Telescopes to Derive Improved Proper Motions for Faint Stars
Authors: Andrés del Pino et al.
First Author’s Institution: Center for Studies of Astrophysics and Cosmology of Aragón (CEFCA) and Space Telescope Science Institute
Status: Published in ApJ

Intro: What Is Proper Motion and How to Find It

Stars in the night sky seem fixed, but they are all traveling through the Milky Way just like the Sun. Since objects in the universe travel in 3D space, we can separate their velocities into three components, as shown in Figure 1: one component is the radial velocity, which points towards or away from Earth, and the other two come from the proper motion, which refers to motion in the plane of the sky. Radial velocity is usually measured from the redshift of the object’s spectral lines and can be accurate to within a few kilometers per second. Proper motion is much harder to measure.

diagram illustrating radial velocity and proper motion

Figure 1: An illustration of radial velocity and proper motion. [ESA/ATG Medialab]
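This decomposition translates into a simple conversion: a proper motion of 1 arcsecond per year at a distance of 1 parsec corresponds to a tangential velocity of 4.74 km/s. A minimal sketch (the Barnard’s Star numbers are approximate):

```python
import math

def tangential_velocity(pm_arcsec_per_yr, distance_pc):
    """Tangential (plane-of-sky) velocity in km/s from proper motion and distance."""
    return 4.74 * pm_arcsec_per_yr * distance_pc

def space_speed(v_radial_kms, pm_ra, pm_dec, distance_pc):
    """Total 3D speed combining the radial velocity with both proper-motion
    components (arcsec/yr) at a given distance (pc)."""
    v_tan = tangential_velocity(math.hypot(pm_ra, pm_dec), distance_pc)
    return math.hypot(v_radial_kms, v_tan)

# Barnard's Star: ~10.4 arcsec/yr at ~1.8 pc gives a tangential velocity of
# roughly 90 km/s, making it one of the fastest-moving stars on the sky.
v_tan_barnard = tangential_velocity(10.39, 1.83)
```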

The measurement of accurate sky positions is called astrometry. Proper motion measurement relies on astrometry, since we are comparing observations from two epochs and calculating how much the position of the star has changed. This is the strong suit of the Gaia mission, which probes stars out to the halo of the Milky Way galaxy (see this astrobite). Gaia has led to many discoveries: new globular clusters in the Milky Way, stars moving fast enough to escape the Milky Way, groups of stars that move together, and plenty more to come.

However, Gaia data have two important shortcomings. First, Gaia is a relatively small telescope and works best for bright stars. For faint stars, the astrometric errors rise rapidly. But if we are interested in a faraway system (e.g., a satellite dwarf galaxy of the Milky Way), all the stars will be faint. With Gaia data alone, the velocity errors far exceed the true velocity spread within the galaxy. The second issue is the time baseline. Given a constant velocity, stars will shift more if you wait longer between two observations. That is why the time baseline has a huge impact on proper motion accuracy. Gaia has only been operating and recording positions for three years. If there is a way to increase the time baseline, that can also improve the proper motion measurements.

How to Measure Proper Motions Better?

Prior to the launch of the Gaia space telescope, the workhorse of astrometry studies was the Hubble Space Telescope. Hubble data can solve both of the issues mentioned above: it can observe much fainter stars, and it had been taking data for 10–15 years before Gaia was even launched. If there is a way to combine these datasets, the time baseline for the proper motion measurements could be extended by a factor of 4–6. As shown in Figure 2, even adding one Hubble image can push down the errors considerably for faint stars (G magnitude > 17). That is precisely what the authors of today’s article did.

plot of proper motion uncertainties for gaia data alone versus gaia and hubble data combined

Figure 2: The expected proper motion uncertainties as a function of the magnitude of stars. In both panels, nominal errors of Gaia Early Data Release 3 are shown by a black dashed curve. The top panel shows the impact of using one or more Hubble images, all taken at the same epoch (June 2011). The bottom panel shows the impact of using just one Hubble image taken in a different year (the typical baseline found in the data is ~11 years). [del Pino et al. 2022]
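The effect of the baseline can be captured with a back-of-the-envelope estimate (all numbers below are illustrative, not the actual survey values): the proper-motion uncertainty is the quadrature sum of the two epochs’ position errors divided by the time between them.

```python
import math

# Back-of-the-envelope: a longer baseline directly shrinks the
# proper-motion error, and a sharper first-epoch position helps too.

def pm_uncertainty(sigma_pos1_mas, sigma_pos2_mas, baseline_yr):
    """Proper-motion uncertainty (mas/yr) from two position measurements."""
    return math.hypot(sigma_pos1_mas, sigma_pos2_mas) / baseline_yr

# Gaia alone: two ~1 mas positions for a faint star, 3 years apart.
gaia_only = pm_uncertainty(1.0, 1.0, 3.0)

# Adding an archival Hubble position (~0.4 mas) taken 12 years before Gaia.
with_hst = pm_uncertainty(0.4, 1.0, 12.0)
```

Here the longer baseline plus the sharper Hubble position shrink the uncertainty by roughly a factor of five, in the spirit of Figure 2.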

Combining Hubble and Gaia

The authors of today’s article developed software called GaiaHub, which compares the positions of stars measured by Gaia with those measured by Hubble. The first step is to measure the positions of stars in the Hubble data. This is a well-established process that takes into account the instrument distortions and time variations, and it achieves a typical accuracy of 0.25–0.5 milliarcseconds.

Then comes the hard part: the star positions need to be matched to Gaia measurements. Since the two datasets are more than 10 years apart, establishing a common reference frame between the two is the key challenge. The software offers three different algorithms: when there is a large number of randomly moving stars, it matches the average positions of all stars; when the stars have some coherent motion, the proper motion can be modeled iteratively so that the coherent motion is removed; or, finally, if there are many contaminant stars, the code can also set up the reference frame from co-moving stars. The improved accuracy with Gaia and Hubble data can be seen in Figure 2 as a function of the magnitude of the stars.

Results

So how does this software perform on real data? Figure 3 shows the drastic improvement you get from combining Gaia and Hubble data. In this example, proper motions are used to identify member stars of the globular cluster Palomar 4. The stars in a cluster should move together, which means they should all have similar proper motions. The left column in Figure 3 shows the proper motions in the two on-sky coordinates, right ascension (RA) and declination (Dec), measured by Gaia alone (top panel) and GaiaHub (bottom panel). The proper motion measurements from GaiaHub clearly have much smaller scatter and allow for a cleaner selection of member stars. This is confirmed by the right column, which shows the sky positions of the selected stars and their proper motion vectors. In the Gaia selection, the lines indicating the direction of motion point all over the place, while the GaiaHub results show very coherent motion. Given that member stars should move together, GaiaHub successfully picks out the likely members of Palomar 4.

Comparison between the results obtained using Gaia and GaiaHub for the Palomar 4 globular cluster.

Figure 3: Comparison between the results obtained using Gaia and GaiaHub for the Palomar 4 globular cluster. Left column: proper motion in RA vs. Dec, measured by Gaia (top panel) and GaiaHub (bottom panel). Right column: sky positions of the stars with the projected proper motion vectors. [del Pino et al. 2022]
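A cartoon version of that member selection, with made-up proper motions in milliarcseconds per year: keep the stars whose proper motions cluster around the bulk (median) motion and reject the outliers.

```python
from statistics import median

# Invented proper motions (mas/yr) for eight stars; most share the cluster's
# bulk motion, while stars 4 and 6 are field interlopers.
pm_ra = [0.10, 0.12, 0.09, 0.11, 2.50, 0.10, -1.80, 0.13]
pm_dec = [-0.20, -0.18, -0.21, -0.19, 1.10, -0.20, 0.70, -0.22]

mu_ra, mu_dec = median(pm_ra), median(pm_dec)
radius = 0.1   # a fixed selection radius stands in for a proper n-sigma cut

# Keep stars whose proper motion lies within the radius of the bulk motion.
members = [
    i for i in range(len(pm_ra))
    if (pm_ra[i] - mu_ra) ** 2 + (pm_dec[i] - mu_dec) ** 2 <= radius ** 2
]
```

Smaller measurement errors tighten the cluster’s clump in proper-motion space, which is exactly why the GaiaHub panel yields a cleaner member list.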

This is a huge improvement for proper motion measurements! Large uncertainties in proper motion add scatter to the measured velocities, which leads to artificially large velocity dispersion measurements for globular clusters. With GaiaHub’s new capabilities, the artificial scatter is reduced and we can recover the real internal velocity dispersions. The authors did this exercise for 40 globular clusters and published their results in this article. Along with radial velocities, we now have the full 3D velocity information. GaiaHub opens up exciting new science, such as analyzing the velocity dispersion along each direction.

As with all research techniques, GaiaHub has its limitations. Because it relies on cross-matching, GaiaHub can only use stars that appear in both datasets. That means the field of view is limited by the smaller of the two, which is Hubble’s. The magnitude range of stars detectable by both telescopes is also limited, since bright stars are often saturated in Hubble images. Both of these factors mean that GaiaHub works best at intermediate distances, where the Hubble field of view is large enough to cover the globular cluster and the brightest stars are faint enough not to saturate.

To summarize, GaiaHub improves the proper motion measurements by a factor of ten. More precise proper motions at fainter magnitudes allow us to study the kinematics of many stellar systems around the Milky Way. This public software will be a great resource for the astronomy community!

Original astrobite edited by Katya Gozman.

About the author, Zili Shen:

Hi! I am a PhD student in Astronomy at Yale University. My research focuses on ultra-diffuse galaxies and their globular cluster populations. Since I came to Yale, I have worked on two “dark-matter-free” galaxies NGC1052-DF2 and DF4. I have been coping with the pandemic and working from home by making sourdough bread and baking various cookies and cakes, reading books ranging from philosophy to virology, going on daily hikes or runs, and watching too many TV shows.

photograph of the green bank telescope in front of rolling mountains

Title: Searching for Broadband Pulsed Beacons from 1883 Stars Using Neural Networks
Authors: Vishal Gajjar et al.
First Author’s Institution: Breakthrough Listen, University of California Berkeley
Status: Published in ApJ

The search for extraterrestrial intelligence (SETI) is perhaps humankind’s most ambitious and forward-thinking endeavor. We’ve been asking ourselves the fundamental question of “Are we alone?” since the dawn of written history, but technological advancements in the last 100 years have allowed us to take our first steps toward finding an answer. Today’s article describes a reimagination of one of the most common search techniques to look for signatures of extraterrestrials (ETs), and while we haven’t found any alien signals just yet, our search capabilities only continue to get better!

The easiest way to find ETs would be to look for their technosignatures — the light waves emitted by the technology they use (check out this Astrobite for more on technosignatures). In particular, if an alien civilization wanted to be found by other intelligent life, they would want to send out a signal that wouldn’t be deflected or absorbed by the space between us, would travel as fast as possible, and would require the least amount of energy to produce. For these and other reasons, most SETI searches have involved searching for artificial radio signals coming from the vicinity of nearby stars.

But what specific kinds of signals should we search for? Will they be transmitted over a narrow frequency range, or will they be “broadband” signals covering a large range of frequencies? Will the signal be continuously transmitted, or will it pulse on and off at specific intervals to clearly demonstrate it’s made by intelligent life? There is no one satisfactory answer to these questions, and most previous searches have looked for narrowband signals that are always being emitted, since we would need less time to detect a signal of that type than other types of signals.

However, the authors of today’s article showed that, for a civilization generating these signals, it would cost less energy to produce broadband pulsed signals, as long as those signals were being sent out for longer than a few hundred seconds. They made the reasonable assumption that ETs will try for longer than a few minutes to get our attention and went about searching for broadband pulsed signals in radio data from the Green Bank Telescope.

A Very Small Needle in a Turbulent Haystack

The Breakthrough Listen collaboration, of which many authors of this article are a part, chose 1,883 stars (explained in this article) as targets for their observations. They chose every star within 5 parsecs (a little more than 16 light-years) of Earth — so that the distance between us would not attenuate the signals too much — as well as all stars within 5–50 parsecs (163 light-years) that fall on the main sequence or the early part of the giant branch. Stars on these earlier segments of the stellar evolutionary track are less volatile and, if they have planets orbiting them, create environments that are the most likely to aid life to grow. The authors took 233 total hours of observations, broken up into 5-minute segments, since that is approximately the observational length for which a 0.3-millisecond long broadband pulse would take less power to send than a continuous narrowband signal.

Luckily, we have lots of experience searching for repeating broadband radio pulses in the form of radio pulsars and fast radio bursts! Pulsars are useful physical tools for a wide range of astronomical applications (for more, see the astrobites here, here, here, here, here, here, here, and here), but today, we can use our experience in analyzing transient radio signals to predict how a broadband signal sent by ETs would be affected by the interstellar medium between us. Radio waves are scattered and dispersed by the interstellar medium, and broadband radio signals undergo a dispersion delay, where the lower-frequency part of a pulse will be delayed relative to the higher-frequency part due to the ionized medium it travels through. The authors of today’s article focus on this dispersion delay.

plot of a dispersed broadband pulse signal

Figure 1: The received signal from a dispersed broadband pulse, as a function of frequency and time. Note that the higher-frequency parts of the signal arrive before the lower-frequency parts. [Gajjar et al. 2022]

The “waterfall” plot in Figure 1 shows the intensity as a function of frequency and time for a single broadband pulse that has undergone dispersion. The dispersion measure of a signal, which is related to the time delay between two reference frequencies, can help us measure the amount of ionized material a signal has traveled through. Combined with detailed maps of the Milky Way, we can use the dispersion measure to estimate the distance between us and the origin of the signal!

Most importantly, the dispersion delay between two frequencies always follows the same law: the delay is proportional to the difference of the inverse squares of the two frequencies. The authors of today’s article suggest that if an alien civilization were to send us a signal, the best strategy would be to artificially arrange it in some way so that we would not see this normal dispersion trend; rather, we would see some other pattern that does not occur in nature, proving that the signal comes from other sentient life.
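In numbers, the delay of a lower frequency relative to a higher one is Δt ≈ 4.15 ms × DM × (ν_lo⁻² − ν_hi⁻²), with the dispersion measure DM in pc cm⁻³ and the frequencies in GHz. A quick sketch:

```python
# Cold-plasma dispersion: the lower-frequency part of a broadband pulse
# arrives later, by an amount set by the dispersion measure (DM).

def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
    """Delay (ms) of f_lo relative to f_hi for a DM in pc cm^-3, f in GHz."""
    return 4.149 * dm * (f_lo_ghz ** -2 - f_hi_ghz ** -2)

# A pulse with DM = 100 pc cm^-3 observed between 1.2 and 1.5 GHz arrives
# about 0.1 s later at the bottom of the band than at the top.
delay = dispersion_delay_ms(100.0, 1.2, 1.5)
```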

These other types of artificial dispersion are shown in Figure 2. The authors searched for dispersed pulses from their original dataset, and they also created artificial datasets by flipping the frequency and time axes, both independently and simultaneously. By doing this, each type of artificially dispersed pulse shown in Figure 2 would look to the single-pulse-search software as a normally dispersed pulse, allowing the team to run the same search code on all four datasets. Searching all of these datasets resulted in a staggering 133,393 candidates!

plots of artificially dispersed signals

Figure 2: The three types of artificially dispersed signals that the authors searched for. From left to right, they are made by flipping the time axis, the frequency axis, and both simultaneously, to make artificially dispersed broadband pulses that are not seen in nature. [Gajjar et al. 2022]
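The flips themselves are simple array operations. Here is a toy dynamic spectrum (rows are frequency channels from high to low, columns are time samples; the values are invented) showing how each flip turns a normal dispersion sweep into one of the artificial patterns:

```python
# A normally dispersed pulse: the high-frequency channel (top row) lights up
# first, the low-frequency channel (bottom row) last.
spectrum = [
    [1, 0, 0, 0],   # highest frequency, arrives first
    [0, 1, 0, 0],
    [0, 0, 1, 0],   # lowest frequency, arrives last
]

def flip_time(spec):
    return [row[::-1] for row in spec]   # reverse the time axis

def flip_freq(spec):
    return spec[::-1]                    # reverse the frequency axis

time_flipped = flip_time(spectrum)       # sweep runs backwards in time
freq_flipped = flip_freq(spectrum)       # low frequencies arrive first
both_flipped = flip_time(flip_freq(spectrum))
```

Applied to the real data, each flip makes one class of artificially dispersed pulse look like an ordinary dispersed sweep, so a single standard single-pulse search can be run on all four datasets.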

How to Analyze 133,000 Candidates This Century

Of course, having a human sit down and examine that many candidates would be beyond unreasonable — thankfully, machine learning and graphics processing units allow us to quickly filter out many bad options. The authors filtered out candidates that looked too much like human-made radio frequency interference or didn’t show enough of a difference between their on-pulse and off-pulse energy distributions. Many other filters were used to weed out unpromising candidates, leading to a shortlist of only 2,948 candidates.

The best candidates in each class of artificial dispersion were examined more closely, but similar-looking signals were found in other 5-minute-long pointings in completely unrelated areas of the sky. It’s not easy for us to prove definitively that these signals actually come from the region of the stars we’re pointing at, rather than human-made radio frequency interference; it’s much more reasonable to conclude that these “signals” are bright human-made signals that have made their way into the telescope, rather than two extremely distant alien civilizations sending us the exact same signal.

The authors used these non-detections to place limits on the maximum signal strength any civilization in those areas could be sending, with some limits as low as a few hundred times the power of our strongest airplane radar. That doesn’t sound like much of a limit, but that’s a signal we’d be detecting from a whole other star system — and each new search is another step towards better technology and better search methods to make a possible discovery!

Original astrobite edited by Lili Alderson.

About the author, Evan Lewis:

Evan is a third-year graduate student in astronomy at West Virginia University. His research focuses on transient radio sources, including pulsars, magnetars, and fast radio bursts. Outside of research, he enjoys playing percussion, hugging dogs, baking, and playing video games!

JWST blueprints

Title: Analysis of a JWST NIRSpec Lab Time Series: Characterizing Systematics, Recovering Exoplanet Transit Spectroscopy, and Constraining a Noise Floor
Authors: Zafar Rustamkulov et al.
First Author’s Institution: Johns Hopkins University
Status: Published in ApJL

JWST is the most powerful infrared telescope ever built — and, understandably, scientists across the world and in every sub-field of astronomy can barely contain themselves waiting for the data it will soon beam back. The gorgeous shots taken during the commissioning phase for engineering and alignment suggest the measurements will be exquisite. But how exquisite, exactly? What are the subtlest signals JWST will be able to detect, specifically in the context of exoplanet atmospheres? Today’s authors take a step towards answering that question — and, impressively, they do it before any scientific measurements have been taken!

How Good Is Great?

As capable as JWST is, you could probably guess that it can’t do everything we could possibly ask of it. Although the telescope can take images of the most distant galaxies across the largest scales of the universe, all of its measurements come with some amount of uncertainty, or noise. If that noise is small compared to the signal we’re trying to measure, then we don’t need to worry; for example, if we measure the age of a rock to be 5 million years ± 0.01 million years, we can be confident that it’s a young rock even if we aren’t exactly sure of its age. But, if we measure it instead to be 2 billion years ± 2 billion years, suddenly our measurement looks less useful, since the rock could either be very young or nearly as old as Earth itself!

In the context of JWST and astronomy, if you measure, say, 100 photons per second from a galaxy or star, what are the chances that source is actually shooting 100 photons per second in our direction? Could it be 100 ± 50 photons per second? If you’re trying to measure a very subtle signal (say, the transit of a planet) that would block only 40 photons per second, the answers to those questions could make the difference between confidently detecting your planet and not being sure it was even there! So, what are the answers? The only way to know is to analyze actual measurements taken by JWST. You would think that means we have to wait until the first science images arrive this summer. Unless…
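Photon arrivals obey Poisson statistics, so a count of N photons carries an uncertainty of roughly √N. A sketch with the invented numbers above shows why exposure time matters:

```python
import math

rate = 100.0   # photons per second from the star (invented)
dip = 40.0     # photons per second blocked during a hypothetical transit

def detection_sigma(exposure_s):
    """How many sigma the missing photons stand above the photon noise."""
    n_expected = rate * exposure_s
    n_missing = dip * exposure_s
    return n_missing / math.sqrt(n_expected)

# One second: 40 missing photons against sqrt(100) = 10 of noise, a 4-sigma
# hint. A hundred seconds: 4000 against sqrt(10000) = 100, a 40-sigma lock.
```

Because the signal grows linearly with exposure time while the photon noise grows only as its square root, longer stares steadily improve the detection, at least until systematic noise takes over.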

Blinded by the (Lamp) Light

Although JWST is now in its chilly home out at L2, it somewhat infamously took many years to reach this point. One reason for this was the extensive testing of every component of the telescope. As part of this testing regimen, engineers locked the science instruments in a chamber meant to replicate the cold, vacuum-like conditions of space, then ran them through their paces to check their response. While the science instruments were in the chamber, the engineers forced the Near Infrared Spectrograph (NIRSpec) instrument to undergo the worst eye exam ever, shining a tiny lamp through its optics and onto its detector for several hours.

But ah ha! What if we just pretended that chamber actually was space, and that little lamp was actually a star? If we could clean up this data, add in a fake but perfectly known transit signal, and then check how well we could recover that true signal, we’d have an estimate of NIRSpec’s noise level. This is exactly what today’s authors set out to do.

This wasn’t straightforward, since lamps, unfortunately, are not stars. NIRSpec breaks light into its component wavelengths to measure its spectrum, and a glowing filament produces a very different spectrum than a star. Even worse, lamps flicker and change over time, but contrary to what popular children’s songs would suggest, stars don’t actually twinkle in space. The authors took great care to remove each of the effects that could influence the results, sliding and smoothing each of the frames until they resembled something similar to what we’ll see from a real star in space. Figure 1 shows the brightness of the lamp after the authors applied various corrections.

plots of the lamp flux over time

Figure 1: The lamp data before (top) and after (bottom) trend removal. In both panels, each column of pixels represents one integration, then they are lined up left to right in the order they were taken. Note that in the top panel, representing data before a trend removal step, the average column intensity goes down over time due to the lamp fading slightly. In the bottom panel, showing data after their “common-mode correction,” which “mostly removes the systematics imparted by the unstable light source,” the source appears much more stable. [Adapted from Rustamkulov et al. 2022]
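A minimal sketch of a common-mode correction in this spirit, with invented fluxes: the light curve averaged over all wavelength channels traces the lamp’s shared drift, and dividing each channel by that trend removes it.

```python
from statistics import mean

# channels[i][j] = flux in wavelength channel i at integration j (invented);
# every channel fades by the same fractional amount as the lamp dims.
channels = [
    [1.00, 0.98, 0.96, 0.94],
    [2.00, 1.96, 1.92, 1.88],
    [0.50, 0.49, 0.48, 0.47],
]

n_int = len(channels[0])

# The channel-averaged light curve is the common mode; normalize it to its
# first point so the correction preserves each channel's flux scale.
common_mode = [mean(ch[j] for ch in channels) for j in range(n_int)]
common_mode = [c / common_mode[0] for c in common_mode]

corrected = [[ch[j] / common_mode[j] for j in range(n_int)] for ch in channels]
```

After the division, each channel is flat: the shared fading is gone, and only channel-specific variations (here, none) would remain.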

After that, the authors added in artificial signals of two planets, TRAPPIST-1 d and GJ 436 b, complete with full models of their atmospheres. With realistic “measurements” of these planets now fully assembled, they could pretend that we live several months in the future and that these light curves were freshly beamed in from beyond the Moon, not collected in a lab six years ago. The authors ran their data through code routines similar to what we’ll use on real data, then checked how well NIRSpec recovered the fake signals.
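In miniature, the injection-recovery idea looks like this (a hypothetical box-shaped transit; the actual analysis fits full transit and atmosphere models): inject a signal of known depth, then measure it back.

```python
from statistics import median

baseline = [1.0] * 20          # a flat "lamp" light curve (invented)
true_depth = 0.002             # injected transit depth: 2000 ppm

injected = list(baseline)
in_transit = range(8, 12)      # integrations that fall inside the "transit"
for j in in_transit:
    injected[j] -= true_depth

# Recover the depth as the difference between the typical out-of-transit
# and in-transit flux levels.
out = [injected[j] for j in range(len(injected)) if j not in in_transit]
inn = [injected[j] for j in in_transit]
recovered_depth = median(out) - median(inn)
```

If the recovered depth matches the injected one within the noise, the instrument and analysis pipeline are faithfully preserving signals of that size.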

Noise Canceling Telescope

So, what did the authors find? Lots of very good news! Despite the lamp drifting around the image plane more than we expect stars will, the authors found that NIRSpec should be able to detect spectral features from the atmospheres of TRAPPIST-1 d and GJ 436 b, as shown in Figure 2.

plot of injected and recovered exoplanet spectra

Figure 2: The recovered spectra of two planets. In both panels, the blue curve depicts the “true” signal that the authors injected into their lamp data, and the red points are the results of their model fit to those fake measurements. The Y axis here is transit depth, or how much of the star’s light at a given wavelength is blocked by the planet, in parts per million (ppm). Note how closely the recovered points follow the true curves, especially in the case of GJ 436 b between 1.5 and 3 microns. [Rustamkulov et al. 2022]

Even better, the authors didn’t run into any “noise floor,” or fundamental uncertainty the instrument can’t get around no matter how long it measures a source. Although they couldn’t give a firm estimate for it, they were able to set an upper bound and are confident that it’s smaller than 14 parts per million. That’s a tiny, tiny value, and it implies that JWST should be able to detect dozens of spectral features given enough time.
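The notion of a noise floor can be pictured through binning: white noise averages down as 1/√N, while a floor does not budge. Treating the two as independent terms in quadrature (the 14 ppm figure is the article’s upper bound; the 1000 ppm per-point noise is invented):

```python
import math

def binned_noise_ppm(white_ppm, floor_ppm, n_binned):
    """Noise after averaging n_binned points: white noise bins down, the floor stays."""
    return math.hypot(white_ppm / math.sqrt(n_binned), floor_ppm)

# Binning 100 points: the white noise drops to ~100 ppm and still dominates.
hundred = binned_noise_ppm(1000.0, 14.0, 100)

# Binning a million points: the white noise is ~1 ppm and the 14 ppm floor
# takes over; no amount of further binning helps beyond that.
million = binned_noise_ppm(1000.0, 14.0, 1_000_000)
```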

The excitement and anticipation for spectra of exoplanet atmospheres seems justified! Back to waiting for the first science images…

Original astrobite edited by Jessie Thwaites.

About the author, Ben Cassese:

I am a first-year Astronomy PhD student at Columbia University working on simulated observations of exomoons. Prior to joining the Cool Worlds Lab, I studied Planetary Science and History at Caltech, and before that I grew up in Rhode Island. In my free time I enjoy backpacking, spending too much effort on making coffee, and daydreaming about adopting a dog in my NYC apartment.

simulation of gas and dust in a protoplanetary disk

Title: The Prospects for Hurricane-like Vortices in Protoplanetary Disks
Authors: Konstantin Gerbig and Gregory Laughlin
First Author’s Institution: Yale University
Status: Published in ApJ

What do hurricanes have to do with planet formation? At first glance, nothing. Planets form in protoplanetary disks, whereas hurricanes occur on a planet that has already formed: Earth. However, the authors of today’s article are searching for a connection between the two.

How Planets Form

Planets form in protoplanetary disks that are made of dust and gas. When dust in those disks clumps together, it can form pebble-sized nuggets that stick together to form boulders. These boulders can grow into kilometer-size protoplanets, which finally grow into planets. However, there are still some missing pieces in our understanding of planet formation. One of these is the meter-size barrier, which refers to how hard it is to get from a meter-size boulder to something larger. It is fairly well established how to get from dust to meter-size boulders and how to get from kilometer-size protoplanets to planets. It’s the step in between that is missing. However, the meter-size barrier cannot be a real physical barrier: we are living on the proof that planet formation must be possible. But how?

One mechanism proposed to facilitate the growth of boulders is a dust trap where dust and pebbles can be trapped together, allowing them to effectively grow to protoplanets. The authors of today’s article investigate a possible way such a dust trap could occur: a hurricane!

Hurricanes on Earth

For hurricanes on Earth, the key ingredient is water: a hurricane can only form above an ocean. As in planet formation, big things start small: the seed of a hurricane is a small initial turbulence, or spinning flow of air.

The little seed is amplified by an interplay of strong winds and the evaporation and condensation of water. Strong winds at the surface of the ocean pick up water vapor. At some point, the air becomes saturated, so the water has to condense again. This is when clouds start to form. The condensation releases heat into the air (called “latent heat”) and the heated air starts to rise. At a certain level in the atmosphere, the air is able to cool down by radiating away its energy, and it does not rise further. The air can flow away horizontally at that height. It leaves behind a void at the surface of the ocean that gives rise to more winds. With the right conditions, this mechanism intensifies the little initial turbulence, and the spinning flow becomes a large circulation of air mass: a hurricane.

As depicted in Figure 1, a hurricane has a center called the eye. Within the eye, moist air continues to flow upward, maintaining the storm as long as it can pick up water vapor and rise.

diagram of a hurricane

Figure 1: A hurricane on Earth can only exist for a significant time if it is above an ocean. Warm, moist air rises, leading to a circulation of air mass. More air flowing in at the ocean surface amplifies this process until an immense storm is formed. [Wikipedia user Kevinsong; CC BY 3.0]

What About Hurricanes in Protoplanetary Disks?

The authors of today’s article propose that a similar process can occur in protoplanetary disks. A layer of ice-covered dust grains can act as a fuel tank for hurricane-like structures, much as the ocean does on Earth. When a gas layer flows over the icy dust grains, it can pick up moisture. Just like on Earth, an initial disturbance in the form of a spinning flow can then be amplified by the same mechanism that drives a hurricane.

Previous research has shown that such spinning flows, called vortices, already exist in protoplanetary disks. The main difference from a hurricane on Earth is that both the gas and the dust grains in a protoplanetary disk orbit the star. This motion around the star, known as Keplerian motion, gives rise to shear forces that tear vortices apart. This means that there is something acting against the growth of vortices.
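
How strong is that shear? A minimal sketch (illustrative only, not from the paper) of the Keplerian angular velocity, Ω = √(GM/r³), and the local shear rate, r dΩ/dr = −(3/2)Ω, whose magnitude sets how quickly differential rotation stretches a vortex apart:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def keplerian_omega(r_au, m_star=M_SUN):
    """Keplerian angular velocity (rad/s) at radius r_au (in au)."""
    r = r_au * AU
    return math.sqrt(G * m_star / r**3)

def shear_rate(r_au, m_star=M_SUN):
    """Local shear rate r * dOmega/dr = -(3/2) * Omega for a Keplerian disk."""
    return -1.5 * keplerian_omega(r_au, m_star)

# A vortex at 5 au around a Sun-like star feels shear on roughly
# the orbital timescale (about a decade), so it must be sustained
# or it will be torn apart within a few orbits.
print(f"Omega at 5 au:      {keplerian_omega(5.0):.3e} rad/s")
print(f"Shear rate at 5 au: {shear_rate(5.0):.3e} 1/s")
```

The factor −3/2 follows directly from differentiating Ω ∝ r^(−3/2); any mechanism that amplifies vortices, like the hurricane mechanism, has to win against this rate.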

The authors construct a model to simulate hurricane-like conditions in protoplanetary disks. They seed their simulations with small initial vortices and observe whether they grow. The simulations show that the hurricane mechanism can indeed create large vortices out of small ones. Figure 2 presents a comparison between several simulations: one without the hurricane model (red line) and several with the hurricane model and different initial conditions (yellow and purple lines). The small initial vortices become larger over time and merge into a big one, possibly producing a dust trap.

Simulations with (yellow line) and without (red line) hurricane-like conditions in a protoplanetary disk

Figure 2: Simulations with (yellow and purple lines) and without (red line) hurricane-like conditions in a protoplanetary disk. The lines show the time evolution of kinetic energy, which is a measure of the strength of a vortex. The four images show snapshots of a simulation with hurricane-like conditions at different times, showing the vortices growing when hurricane-like conditions are present. [Gerbig & Laughlin 2022]

The authors find a sweet spot for sustaining and magnifying vortices. This sweet spot is located just outside the ice line, which is the location in the disk where the temperature is low enough for water to freeze.

Can Hurricanes Help to Form Planets?

We’ve seen that these hurricane-like vortices are possible, but can they actually form planets?

Prior research has shown that a vortex can trap dust within its eye. Since vortices are found to be short-lived, mechanisms that prolong their lifetime, such as the hurricane mechanism, could be essential to planet formation.

However, a planet forming in a vortex draws from the very dust grains that fuel the hurricane-like vortex. If the planet eats up too much of the dust, the vortex can no longer be kept alive. The authors of today’s article therefore argue that it is not obvious whether this mechanism actually supports planet formation. For now, the question must remain unanswered. However, the first step is done: we know that hurricanes can occur in protoplanetary disks. Now it is up to future investigations to see whether they can enhance planet formation.

Disclaimer: The first author of today’s article is an active astrobites author but was not involved in the publication of today’s bite.

Original astrobite edited by Macy Huston.

About the author, Lina Kimmig:

I’m a first-year PhD candidate working at Heidelberg University in the exciting field of planet formation. As planets form in protoplanetary disks that exist around most young stars, I am looking at the effects of different physical processes on those disks. To investigate those effects, I run astrophysical simulations. My main interest is warped disks, which have a three-dimensional twisted shape (a little bit like Pringles crisps). Outside of research, I not only like eating Pringles crisps but also love dancing, sewing, skiing, and elephants.

artist's impression of the exoplanet HR8799e


Title: Interpreting the Atmospheric Composition of Exoplanets: Sensitivity to Planet Formation Assumptions
Authors: Paul Mollière et al.
First Author’s Institution: Max Planck Institute for Astronomy
Status: Published in ApJ

One of the biggest questions that drive astronomers is “Where did we come from?” Whether studying the earliest hours of the universe, the formation of stars and galaxies, or the compositions of planets orbiting distant stars (exoplanets), astronomers use observations of the universe to piece together a story that describes how things got to be the way they are. For the better part of two decades, exoplanet-focused astronomers have attempted to measure the molecules that make up the atmospheres of exoplanets in order to better understand how those planets formed. Understanding how many different exoplanets form can then help astronomers understand how our own solar system formed, and how Earth, and life itself, came to be.

A few problems are preventing astronomers from making those connections reliably, however. Well, a lot of problems, actually. For one, exoplanet atmospheres are hard to measure, but that might change soon thanks to JWST. For another, planet formation is dynamic, and different planet formation models can produce dramatically different atmospheres. Today’s article presents a new framework for tackling this second problem and shows how different planet formation models can lead to different interpretations of how the exoplanet HR 8799e formed.

Making Planets Is Complex!

Planet formation is a dense topic (interesting astrobites include this one, this one, and an old review bite), but in the broadest strokes: planets form from disks of gas and dust called protoplanetary disks that surround young stars. Giant planets, like Jupiter, form when their rocky cores grow massive enough (by smashing into other rocks) to vacuum up gas from the protoplanetary disk.

When and where a forming planet vacuums up its gas will affect which molecules end up in its atmosphere. This is because, as you move farther out from the central star, the disk gets colder, and molecules that were gaseous can condense into ices and become solid. The locations within a protoplanetary disk where molecules condense into solids are called ice lines (see Figure 1).

About ten years ago, a study suggested that the ratio of carbon atoms to oxygen atoms in an exoplanet’s atmosphere could indicate which ice lines it formed between, since the freezing out of water, carbon dioxide, and carbon monoxide changes the ratio of those two elements in the gas of the disk. This study suggested that if the carbon-to-oxygen (C/O) ratio of an exoplanet could be measured, astronomers could tell where in the protoplanetary disk it formed.
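
This idea can be sketched as a toy step model (the ice-line radii and C/O values below are illustrative placeholders, not the actual disk chemistry from the article): each time a major oxygen- or carbon-bearing molecule freezes out of the gas, the C/O ratio of the remaining gas jumps.

```python
def gas_c_to_o(r_au, r_h2o=1.0, r_co2=5.0, r_co=20.0):
    """Toy gas-phase C/O ratio vs. radius for a static disk.
    The ice-line radii (in au) and the step values are illustrative only.
    Inside the water ice line everything is gaseous, so the gas carries the
    stellar C/O (~0.55); each ice line outward removes oxygen-rich species
    (H2O, then CO2) from the gas, pushing the gas-phase C/O toward 1."""
    if r_au < r_h2o:
        return 0.55   # all species gaseous: stellar value
    elif r_au < r_co2:
        return 0.75   # H2O frozen out: gas has lost oxygen
    elif r_au < r_co:
        return 0.9    # CO2 also frozen out
    else:
        return 1.0    # essentially only CO left in the gas: C/O = 1

for r in (0.5, 2.0, 10.0, 30.0):
    print(f"r = {r:5.1f} au -> gas C/O = {gas_c_to_o(r)}")
```

Read in reverse, a measured atmospheric C/O would then point back to the region between two ice lines where the planet accreted its gas, which is exactly the inference that today's article complicates.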

As discussed in today’s article, things aren’t so simple. One example the article uses is the fact that as the exoplanet is forming, the disk chemistry is changing — the disk is heated by the newly born star and disrupted by the newly born planet. Figure 1 shows the C/O ratio throughout the disk for the static model introduced in the previous study, as well as snapshots in time of a disk evolving as, for instance, carbon monoxide is turned into carbon dioxide by heat from the star.

Flipping the Table (or, “Formation Model Inversion”)

This quest isn’t hopeless: today’s article presents a framework in which the different assumptions and uncertainties mentioned above could be compared with available atmospheric measurements in order to meaningfully connect observations and model predictions.

plot of the carbon to oxygen ratio as a function of distance from the host star

Figure 1: The C/O ratio within a protoplanetary disk as a function of distance and time. The x-axis plots distance (a) from the central star, marking the ice lines of water, carbon dioxide, and carbon monoxide, while the y-axis plots the C/O ratio. The color gradient of the lines, from dark blue to bright yellow, indicates the progression in time of a model that assumes that the chemicals in the disk evolve as they are heated by the central star. This plot shows that it might not be simple to predict where in a protoplanetary disk a planet formed based on its C/O ratio, as there can be different values on the x-axis for one value on the y-axis. [Mollière et al. 2022]

The real problem is that complex models of planet formation take a set of assumptions as input and give the predicted atmospheric measurement of a planet as output. Astronomers measure the output, so the models have to be “inverted” in order to determine the formation location of the planet. The new framework in today’s article generates many different models with various input parameters. The authors then compare the outputs to the measured abundances of a given exoplanet to see which model inputs result in the closest match to the measurements. They can do this for different models and then compare the best-matching input parameters between models, allowing them to examine what different models predict for the origin of a given exoplanet.
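
In spirit, the inversion is a search over input parameters for the output that best matches the measurement. A heavily simplified sketch (hypothetical function names and a toy step forward model, not the authors' code):

```python
def forward_model(r_form_au):
    """Toy forward model: predicted atmospheric C/O for a planet that
    accreted its gas at radius r_form_au. Step values are illustrative only."""
    if r_form_au < 1.0:
        return 0.55
    elif r_form_au < 5.0:
        return 0.75
    else:
        return 1.0

def invert(measured_co, candidate_radii):
    """Run the forward model for each candidate formation radius and
    return the radius whose predicted C/O is closest to the measurement."""
    return min(candidate_radii, key=lambda r: abs(forward_model(r) - measured_co))

candidates = [0.5, 2.0, 10.0, 30.0]
best = invert(0.6, candidates)   # measured C/O of 0.6, as reported for HR 8799e
print(f"Best-matching formation radius: {best} au")
```

The real framework explores far richer inputs (formation time, disk chemistry evolution, pebble drift) and compares full posterior distributions rather than single best-fit points, but the logic of matching model outputs to a measured abundance is the same.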

Where Did HR 8799e Get Its Carbon and Oxygen?

HR 8799e is the innermost gas giant planet in a system of four directly imaged giant planets. In today’s article, the authors use their new framework on HR 8799e and demonstrate how including the time evolution of the chemicals in the protoplanetary disk — and the movement of small rocks through the disk during planet formation — change the predicted formation history of the planet.

HR 8799e’s atmosphere was previously studied using data from the Very Large Telescope Interferometer (VLTI) GRAVITY instrument. That study found the planet’s C/O ratio to be 0.6 (that is, 6 carbon atoms for every 10 oxygen atoms). The authors of today’s article use this measurement and their new analysis framework to compare the simplistic model of a protoplanetary disk and a chemically evolving disk.

The authors find that the simple model predicts that HR 8799e formed either within the water ice line (very close to its host star) or outside the carbon monoxide ice line (far away). Either way, the planet now orbits in the middle of these two extremes, indicating that it must have migrated from where it originally formed (see Figure 2, left panel). However, the chemically evolving disk model makes a slightly different prediction, indicating that as the disk chemically evolved, the most likely formation location of HR 8799e moved inward from beyond the carbon monoxide ice line to within it (see Figure 2, right panel). This could indicate that, depending on when HR 8799e began forming relative to the disk’s chemical evolution, it might not have needed to migrate to get to its current position.

plots of formation probability density as a function of distance from the star and distance from the star and time

Figure 2: The origin location of HR 8799e’s C/O ratio. The left plot indicates how likely the solids comprising the planet are to have originated from a given location in the protoplanetary disk for the most simplistic model considered. The most probable locations are within the water ice line and outside the carbon monoxide ice line, but compared to the current location of HR 8799e, this appears to indicate the planet must have migrated far from where it formed. The right plot illustrates the chemically evolving disk case. While this model shows that early in time the most likely place for HR 8799e to form is the same as in the simple model case, the most likely formation location changes as the disk chemistry changes — it becomes more likely the planet could have formed within the CO ice line and migrated only a little bit to its current position. [Mollière et al. 2022]

Today’s astrobite presents a complex narrative of exoplanetary archaeology, exploring different assumptions that can change how astronomers infer the formation history of exoplanets. With new and improved atmospheric detections on the horizon (hello JWST!), this new framework for comparing formation models will prove a useful tool to help astronomers puzzle out how and where exoplanets form, and maybe — eventually — how we got here.

Original astrobite edited by Lynnie Saade.

About the author, William Balmer:

William Balmer (they/them) is a PhD student at Johns Hopkins University/Space Telescope Science Institute studying the formation, evolution, and composition of giant planets, brown dwarfs, and very low-mass stars. They enjoy reading, tabletop games, cycling, and astrophotography.
