Astrobites RSS

VLA-COSMOS

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Illuminating the dark side of cosmic star formation two billion years after the Big Bang
Authors: M. Talia et al.
First Author’s Institution: University of Bologna & INAF, Italy
Status: Accepted to ApJ

The modern terminology of galaxies is extraordinarily anthropomorphic; blue, star-forming galaxies are “alive”, and red galaxies that have ceased star formation are “dead”. So then how do galaxies “live”? In other words, why do some galaxies form lots of stars while others do not? Are the dead galaxies older, or do they simply mature faster? What role do external forces such as galaxy mergers play in the lives of galaxies? How can their internal structures (bars, arms, and bulges) or internal forces (supernovae and active supermassive black holes) work to enhance or inhibit star formation? These details have been the focus of the past two decades of galaxy studies, trying to answer the question: How and when did galaxies assemble their mass of stars?

The highest-level diagnostic we can construct to help us understand the big picture of star formation in galaxies is the cosmic star formation rate density (SFRD) diagram. It maps the average rate at which stars are formed in the universe at a given time, per unit volume. The physics, then, is a matter of both supply and efficiency: how much gas is available to be formed into stars (supply), and how well did galaxies turn that gas into stars (efficiency)? Constructing the SFRD diagram can then help us to understand the interplay between gas and the processes that can act to enhance or inhibit star formation.

star formation rate density diagram

Figure 1: The star formation rate density diagram, including many literature measurements focusing on the early universe (z > 3). The results from this paper indicate that a missing population of galaxies might account for a large portion of the SFRD at z > 4. [Talia et al. 2021]

Although one can measure the rate of star formation in a given galaxy, and then extend that study to perhaps a hundred or even a million galaxies, one will never be able to count the number of stars forming in every galaxy at every point in the history of the universe. Such a census will remain technologically impossible for the foreseeable future, not least because our own Milky Way galaxy obscures the light from many of the distant galaxies one would need to measure. Despite these challenges, astronomers have found clever ways of estimating the SFRD for ~75% of the history of our universe by carefully constructing unbiased, representative samples of galaxies, such that specific inferences from that sample also hold for the general, wider population of galaxies. As shown in Figure 1, the SFRD rises for the first 3 billion years before peaking at a redshift of z ~ 2, after which it declines for the remaining 10 billion years until today.

Observing star formation rates during the first 2 billion years of the universe (z > 3) is incredibly difficult. Not only were the first galaxies intrinsically smaller and fainter than the galaxies we see today, but at redshifts above z ~ 6 the universe is pervaded by a dense fog of neutral hydrogen (the very gas from which galaxies formed!) that obscures their light. Given these difficulties, these incredibly early galaxies are only now being observed in large numbers.

The authors of today’s paper point out that the existing samples of z > 3 galaxies are not at all representative. For the most part, and almost exclusively at z > 6, these galaxies are discovered via their bright ultraviolet (UV) emission, which has been redshifted so that it is observed in the optical and infrared. Not only must these galaxies be incredibly bright to be found at such large distances, but their intense UV emission translates directly to an enormous star formation rate. That is, the very feature that makes them easy to find also makes their star formation rates high. This is a huge bias in our samples! To overcome this bias, the authors turn to radio wavelengths. They used the large radio survey VLA-COSMOS to find 197 radio sources that have no counterpart at near-infrared wavelengths. These, they argue, are heavily dust-obscured galaxies without any detectable UV emission — the missing link.

Median galaxy template

Figure 2: Median galaxy template (top) fitted to stacked observations in many broadband filters (bottom). The derived average physical parameters, as well as the redshift distribution, are also shown. [Talia et al. 2021]

The authors’ first test was to stack the broadband brightness measurements of the galaxies together to predict what the average total spectrum of these galaxies would look like, and hence their average properties. The lack of blue light on the left-hand side of the spectrum indicates that there is no luminous UV component like that seen in the UV-bright galaxies of previous samples. Moreover, the authors estimate an incredibly high dust extinction of a whopping 4.2 magnitudes (nearly a factor of 50)! These galaxies are super dusty indeed.
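As a quick sanity check on that "factor of 50" (simple magnitude arithmetic, not a calculation from the paper): extinction in magnitudes is logarithmic, so A magnitudes of extinction dim a source by a factor of 10^(0.4 A).

```python
# Convert a dust extinction in magnitudes to a dimming factor.
# Magnitudes are logarithmic: every 2.5 mag corresponds to a factor of 10.
def dimming_factor(extinction_mag):
    return 10 ** (0.4 * extinction_mag)

print(round(dimming_factor(4.2), 1))  # ~47.9, i.e. "nearly a factor of 50"
```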

Using a similar approach to the stacked analysis, the authors then estimate the redshift and star formation rate for each of the 98 galaxies for which they could reliably measure an infrared brightness. Due to their unique radio selection approach, the authors are able to compile a large sample of very high redshift galaxies at z > 4.5. They estimate the redshifts and star formation rates for the remaining 99 sources as well, but with much greater uncertainty.

Lastly, the authors compute the SFRD using their sample, taking care to correct for any dusty galaxies they may have missed. This is a challenging correction to make, so the authors do so by adopting an agnostic approach, seeing how their SFRD looks depending on how complete their sample might be.

As shown by the red bars in Figure 1, it is precisely this population of highly dust-obscured galaxies at z > 3, invisible to optical and near-infrared surveys, that may constitute a significant portion of the star formation rate density in the early universe compared to other, less-dusty samples!

These findings highlight the surprising extent of our missing knowledge of the first galaxies, and they encourage investment in future radio and (sub)millimeter surveys with facilities like ALMA, as well as follow-up with JWST.

Original astrobite edited by William Saunders with Lukas Zalesky.

About the author, John Weaver:

I am a second year PhD student at the Cosmic Dawn Center at the University of Copenhagen, where I study the formation and evolution of galaxies across cosmic time with incredibly deep observations in the optical and infrared. I got my start at a little planetarium, and I’ve been doing lots of public outreach and citizen science ever since.

RS Puppis


Title: Identifying Candidate Optical Variables Using Gaia Data Release 2
Authors: Shion Andrew, Samuel J. Swihart, and Jay Strader
First Author’s Institution: Harvey Mudd College
Status: Accepted to ApJ

The Wonders of Gaia

1.5 million kilometers from Earth, at the L2 Lagrange point, a space observatory not much larger than a car traverses space and gazes deeply into the Milky Way. Launched in 2013, Gaia meticulously constructs a three-dimensional map of our galaxy. Its primary objective is to determine the brightness, temperature, composition, and motion of over a billion astronomical objects (mostly stars) … and I thought grad school was demanding!

Unlike many spacecraft, Gaia observes each of its targets many times (~70 times over its mission). As a result, it offers the rare ability to expose the variability of the celestial bodies under its watchful eye. These observations provide us the opportunity to advance our understanding of stellar evolution and the dynamical nature of our galaxy’s constituents.

Isolating Variables

Well-studied variables, such as RR Lyrae, Cepheids, and long-period variables, provide high-quality measurements; sources with short-term variability, however, are harder to detect, which limits the number of variables that can be studied. In fact, Gaia Data Release 2 (DR2), the mission’s most recent data release, contains G-band photometry for ~1.7 billion sources but variability information for only about 550,000 of them. To address this dearth of variability observations, the authors of today’s paper, testing their approach against variable stars already confirmed in DR2, contend that variable stars can be identified by targeting stars with relatively high photometric uncertainties. If so, this method may prove critical for building a robust sample of variable stars that can be used for future studies!

In Figure 1, the authors plot Gaia G-band magnitude vs. G-band magnitude uncertainty for 1,000 stars in a small region of the sky. The patch of the sky was centered on a well-studied RR Lyrae variable star, TY Hyi (G = 14.3). The “baseline curve”, where the bulk of the stars (the black dots) lie, is the expected distribution for non-variables. Away from this curve, the variable star (the red dot) has a much larger uncertainty than the stars with a similar brightness on the baseline curve.

Gaia DR2 stars

Figure 1. A G-band magnitude vs. G-band magnitude uncertainty plot of 1,000 DR2 stars showing the expected “baseline curve” along which most non-variable stars lie. The variable star (red dot) does not fall on the baseline curve, but instead has a noticeably larger G-band magnitude uncertainty than other stars of comparable magnitude. [Andrew et al. 2021]

In Figure 2, the authors expand their analysis and consider 70,680 variable stars with photometric periods < 10 days. They now also consider 2,000 random non-variable stars. In this plot, nearly all the variables lie above the baseline curve, with higher uncertainties compared to non-variable stars of similar magnitudes. Moreover, they find that stars with higher variability amplitudes feature higher uncertainties.

The authors note that the G-band magnitude uncertainty varies with the number of observations (at fixed brightness, the uncertainty decreases as the number of observations increases); they correct for this by using the weighted average of individual photometry measurements for each source.

variable stars in Gaia DR2

Figure 2: G-band magnitude vs. G-band magnitude uncertainty for 70,680 variable stars with periods less than 10 days, colored by their optical variability amplitude. The black points are a random sample of 2,000 stars, illustrating a baseline curve for non-variable stars. The dashed lines are the mean magnitude uncertainty of variables, in three bins from 0.0 to 1.2 mag in variability amplitude. [Andrew et al. 2021]

Exploring Other Catalogs

The authors then calculate, in bins of G-band magnitude, the standard deviation, σ, of sources about the baseline curve. They subsequently define a parameter, Gσ, which expresses how far a given source’s G-band magnitude uncertainty lies above the baseline curve, in units of the σ for that bin. They use this parameter to define a threshold of Gσ = 3 for identifying variable stars.
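A minimal sketch of that selection, under the "excess above the baseline, in units of the bin scatter" reading of Gσ (the binning scheme, the median baseline, and the robust scatter estimate here are illustrative choices, not the paper's exact recipe):

```python
import numpy as np

def g_sigma(g_mag, g_unc, n_bins=20):
    """Excess of each source's G-band uncertainty above the per-bin
    baseline (median), in units of a robust per-bin scatter sigma."""
    edges = np.linspace(g_mag.min(), g_mag.max(), n_bins + 1)
    which = np.clip(np.digitize(g_mag, edges) - 1, 0, n_bins - 1)
    gs = np.zeros_like(g_unc)
    for b in range(n_bins):
        m = which == b
        if not m.any():
            continue
        baseline = np.median(g_unc[m])
        sigma = 1.4826 * np.median(np.abs(g_unc[m] - baseline))  # robust MAD scatter
        gs[m] = (g_unc[m] - baseline) / max(sigma, 1e-12)
    return gs

# Synthetic demo: quiet stars along a smooth baseline, plus one mock variable
rng = np.random.default_rng(42)
mags = rng.uniform(14.0, 19.5, 1000)
uncs = 0.001 + 0.0005 * (mags - 14.0) + rng.normal(0.0, 2e-4, 1000)
mags = np.append(mags, 16.0)   # mock variable star
uncs = np.append(uncs, 0.02)   # inflated photometric uncertainty
gs = g_sigma(mags, np.abs(uncs))
print(gs[-1] > 3.0)            # the mock variable clears the threshold
```

The robust (median-based) scatter keeps a single strongly variable star from inflating its own bin's σ and hiding itself.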

But how effective is this method at finding short-period variables in Gaia DR2? To address this, the authors check the reliability of their newly defined threshold by scanning a series of short-period (<10 days) variable star catalogs with photometric G-band magnitudes between 14 and 19.5. They first inspect the Catalina Real-Time Transient Survey, which contains 70,680 variables. From their analysis, they find that 96% of the variables in this catalog have Gσ values > 3; the remaining 4% were masked because of potential contamination by a nearby star, which can generate false positives. They also inspect the Zwicky variable star catalog (see here for more on the Zwicky Transient Facility), which contains 556,521 variables. Similarly, they find that a significant percentage (94%) are recovered when applying the Gσ > 3 threshold; the remaining 6% are likewise excluded because of neighboring stars.

Furthermore, this method also proves effective at identifying standard RR Lyrae and Cepheid variables (which can have periods up to 70 days). From Gaia DR2, they find that 100% of the Cepheids (8,465 sources) and 99.8% of RR Lyrae (107,418 sources) have Gσ > 3.

Confident in their method, they proceed to analyze the entirety of DR2, and they catalog 9.3 million candidate variable stars, a significant increase from the 550,000 sources reported in DR2 prior to this study.

Hidden No More

The authors of today’s paper provide an immensely powerful tool for identifying variable stars. They show that variable stars in Gaia’s latest data release, which contains over 1.7 billion sources, tend to have larger photometric uncertainties than non-variable stars, and that the more variable the star, the larger its photometric uncertainty. They quantify this relation with the parameter Gσ, which traces how far a star lies from the baseline curve of non-variable stars. Using a threshold of Gσ = 3, they recover over 90% of short-period variables in other variable catalogs.

Variable stars have contributed significantly to some of the largest advances in modern astronomy: they have helped us define cosmological parameters, enhanced our understanding of the distance scale of the universe, and provided the information needed to calculate the ages of the oldest stars. Accurately identifying and studying these objects promises to unveil even more about our universe. Fascinating instruments like Gaia will serve as the bridges to these wonderful discoveries.

Original astrobite edited by Ellis Avallone.

About the author, James Negus:

James Negus is currently pursuing his Ph.D. in astrophysics at the University of Colorado Boulder. He earned his B.A. in physics, with a specialization in astrophysics, from the University of Chicago in 2013. At CU Boulder, he analyzes active galactic nuclei utilizing the Sloan Digital Sky Survey. In his spare time, he enjoys stargazing with his 8” Dobsonian Telescope in the Rockies and hosting outreach events at the Fiske Planetarium and the Sommers–Bausch Observatory in Boulder, CO. He has also authored two books with Enslow Publishing: Black Holes Explained (Mysteries of Space) and Supernovas Explained (Mysteries of Space).

cosmic clocks


Title: Eppur è piatto? The cosmic chronometer take on spatial curvature and cosmic concordance
Authors: Sunny Vagnozzi, Abraham Loeb, Michele Moresco
First Author’s Institution: Kavli Institute for Cosmology, University of Cambridge, United Kingdom
Status: Submitted to ApJ

Though astronomers have been studying the universe for hundreds of years, there are still a lot of things we do not know about it. We do not know whether it is finite or infinitely large, and we have yet to determine its overall shape. Nevertheless, we know that we can describe the universe with a four-dimensional spacetime, the combination of our three-dimensional space and time. This spacetime is not rigid, but can be distorted and deformed by the content of the universe, like a bowling ball distorts a spandex sheet. The matter (and energy) also changes how the space part of spacetime is curved — and we can measure this curvature.

There are three possibilities for the curvature of the universe, illustrated in Figure 1: the universe can be closed, flat, or open. A closed universe would be shaped like a sphere (although with a three-dimensional surface), meaning that if you walked along a straight line you would inevitably end up back where you started. Also, if you and a friend start walking on parallel paths, your paths will cross at some point. An open universe is the “opposite” of this: the distance between you and your friend will increase with each step and you will never end up near each other. A flat universe is exactly in between these two cases: parallel paths stay at the same distance and never cross.

We characterize the curvature with the parameter Ωk. Using the sign convention of the authors of today’s paper, a negative Ωk indicates a closed universe and a positive Ωk an open one. If the universe is flat, Ωk is exactly zero.

universe curvature

Figure 1: Different possibilities for curvature of the universe. The universe can be closed (top), open (middle), or flat (bottom). In the sign convention of today’s paper, a closed universe has Ωk < 0, an open universe has Ωk > 0. [NASA/WMAP]

Cosmologists generally expect the universe to be flat (Ωk = 0). This is not only suggested by a variety of measurements, but is also a key prediction of the theory of cosmological inflation. Inflation describes a brief period during which the universe expanded exponentially (see this astrobite for more on inflation). This rapid expansion decreased the curvature, in the same way that inflating a small balloon to the size of the Earth makes its surface appear flatter. Still, there is an ongoing debate on this issue. The Planck satellite tried to measure Ωk using the cosmic microwave background (CMB), remnant light from the early universe that travelled through our potentially curved universe. The results suggest an Ωk between –0.095 and –0.007, so this measurement points to a closed universe instead of a flat one. A reanalysis of Planck data confirmed this preference for a curved universe using the CMB.

However, the CMB on its own is not a sensitive probe of Ωk. It constrains only a combination of Ωk, the matter density of the universe Ωm, and the expansion rate H0, i.e., the Hubble constant. A strongly curved universe with a low value of H0 and a high value of Ωm can produce the same CMB as a flat universe with a high H0 and a low Ωm. The fact that we can only measure H0, Ωm, and Ωk together, and not individually, from the CMB is called the geometrical degeneracy.
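To see where the degeneracy comes from, it helps to write out the standard expansion history (not shown in the post; this is the usual ΛCDM relation with a curvature term, radiation neglected):

```latex
% Expansion rate of a curved LambdaCDM universe:
H^2(z) = H_0^2 \left[\, \Omega_m (1+z)^3 + \Omega_k (1+z)^2 + \Omega_\Lambda \,\right]
```

The CMB pins down only a particular combination of these parameters, so trading a lower H0 against a higher Ωm and a nonzero Ωk can leave the observed CMB pattern essentially unchanged; an independent measurement of H(z) is what breaks the degeneracy.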

Cosmologists combine the Planck measurement with other probes, such as baryon acoustic oscillations (BAOs) or Type Ia supernovae. Combining the Planck data with BAO measurements from the Dark Energy Survey leads to Ωk = 0.0007 ± 0.0019, which is consistent with a flat universe.

The authors of today’s paper, though, believe that this combination of Planck and BAOs is not valid. They argue that the Ωk values inferred from each dataset on its own disagree so strongly that the results of combining the datasets can be unreliable. If the results of two datasets are in strong tension, this could indicate that one or both contain unknown systematic errors, or that they require different models to be described; they should therefore not be combined. In the case of the curvature of the universe, a different dataset should be used to break the geometrical degeneracy. The choice of today’s authors: cosmic chronometers, the universe’s standard clocks.

Cosmic chronometers are objects whose time evolution we know (or can at least model very well), for example specific types of galaxies. We observe some of these objects at different redshifts, which indicate how far away they are. From the differences in their evolutionary state, we then infer how much time has passed between the redshifts. This time difference tells us how fast the universe expanded between the redshifts and gives the expansion rate H(z) at each redshift z. H(z) depends on the cosmological parameters, including Ωk, so from this we can infer the cosmic curvature.
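A toy version of that inference (the redshifts and ages below are made-up illustrative numbers, not the paper's measurements): the method rests on the differential-age relation H(z) = −(1/(1+z)) dz/dt, so a small redshift interval plus a differential age gives the expansion rate directly.

```python
# Toy cosmic-chronometer estimate: H(z) = -1/(1+z) * dz/dt.
# Redshifts and ages below are illustrative, not real measurements.
z1, z2 = 0.48, 0.52        # two nearby redshifts
t1, t2 = 8.64, 8.28        # ages of the universe there, in Gyr (assumed)
dz_dt = (z2 - z1) / (t2 - t1)      # negative: redshift falls as time advances
z_mid = 0.5 * (z1 + z2)
H_inv_gyr = -dz_dt / (1.0 + z_mid) # expansion rate in 1/Gyr
H_km_s_mpc = H_inv_gyr * 977.8     # 1 Gyr^-1 ~ 977.8 km/s/Mpc
print(round(H_km_s_mpc, 1))        # ~72.4 km/s/Mpc for these made-up numbers
```

In practice the "ages" come from the spectra of passively evolving galaxies, as described below, and each pair of redshift bins yields one H(z) point.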

Which objects can we use as chronometers? The best choice is passively evolving galaxies: galaxies that have exhausted their gas reservoir and form hardly any new stars. Since blue stars die earlier than red stars, these galaxies become redder with time. From the galaxies’ spectral colours (more precisely, their spectral energy distributions) and sophisticated models of stellar evolution, we can infer how much time has passed since they exhausted their gas and stopped forming stars. When we compare two galaxies that formed at the same time but are at different redshifts, the difference in their evolution tells us how much time has passed between the redshifts. We have found our cosmic clocks!

Today’s authors use 31 measurements of H(z) with cosmic chronometers between redshift z = 1.965 (approximately 10 billion years ago) and z = 0.07 (approximately 1 billion years ago). Figure 2 shows these measurements, along with the best fit for H(z) and the prediction from the Planck measurements. Planck underpredicts H(z), but the tension between the cosmic chronometers and Planck is much smaller than the disagreement with the BAO measurements. Therefore, the authors argue that combining the Planck and the cosmic chronometer data set is justified.

Hubble parameter

Figure 2: Cosmic expansion rate (also called the Hubble parameter) at each redshift. The data points show the cosmic chronometer measurements used in the paper. The red line is the fit to the cosmic chronometer data combined with Planck; the blue line is the prediction from the Planck data alone. On its own, the Planck data underpredicts H(z). [Vagnozzi et al. 2020]

When the authors do so, they find the constraints on Ωm, Ωk, and H0 shown in Figure 3. The combination of Planck and cosmic chronometers prefers a higher value of H0 than the Planck data on its own. However, this is not enough to alleviate the famous Hubble tension. Most importantly, though, the combined data yield Ωk = –0.0054 ± 0.0055. This value is consistent with a flat universe, for which Ωk = 0, as predicted by cosmological inflation.

parameter constraints

Figure 3: Constraints on curvature of the universe (Ωk), the Hubble parameter (H0) and the matter density in the universe (Ωm) using only the Planck data (blue) or the combination of Planck with the cosmic chronometers (red). The Planck data on its own prefers a small value for H0 and an Ωk < 0. The combined dataset, however, confirms a flat universe and a higher value for H0. [Vagnozzi et al. 2020]

In conclusion, the authors of today’s paper argue that the universe is most likely not curved. Their result agrees with other measurements that combined Planck data with probes such as BAOs, but they consider their cosmic-chronometer result more reliable, because the individual datasets did not strongly disagree. This result could be a notable step forward in resolving the controversy around Planck’s curvature measurement. More measurements of cosmic chronometers are sure to come, so look out for more results from the universe’s clocks.

Original astrobite edited by Haley Wahl.

About the author, Laila Linke:

I am a third year PhD Student at the University of Bonn, where I am exploring the relationship between galaxies and dark matter using gravitational lensing. Previously, I also worked at Heidelberg University on detecting galaxy clusters and theoretically predicting their abundance. In my spare time I enjoy hiking, reading fantasy novels and spreading my love of physics and astronomy through scientific outreach!

Mercury interior


Title: Radiogenic Heating and its Influence on Rocky Planet Dynamos and Habitability
Authors: Francis Nimmo et al.
First Author’s Institution: University of California Santa Cruz
Status: Published in ApJL

Rocky planets are thought to start as hot masses of material accreting from a disk of gas and dust around their young host star. Whereas the primary heat source early on comes from accretion, and orbital dynamics can lead to further heating through tidal squeezing, the ongoing thermal evolution of many rocky planets is likely controlled by radiogenic heat production. In particular, the radioactive isotopes of uranium (U) and thorium (Th) have long half-lives and so may be significant in deciding the long-term geodynamic history. The authors of today’s bite argue that the exact concentrations of such elements in a planet’s mantle could decide the presence and strength of that world’s magnetic field.
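To see why those long half-lives matter, here is a back-of-the-envelope sketch of how mantle radiogenic heating declines with time. The half-lives are standard textbook values, and the split of Earth's present-day ~15 TW of mantle radiogenic heat among isotopes is an assumed illustrative breakdown, not a result from the paper.

```python
# Back-of-the-envelope: mantle radiogenic heat output vs. time.
# Half-lives (Gyr) are standard values; the present-day heat split among
# isotopes is an assumed illustration, not taken from the paper.
HALF_LIFE_GYR = {"U238": 4.47, "U235": 0.70, "Th232": 14.0, "K40": 1.25}
FRACTION_TODAY = {"U238": 0.38, "U235": 0.02, "Th232": 0.44, "K40": 0.16}

def mantle_heat_tw(t_gyr, total_today_tw=15.0, age_gyr=4.5):
    """Heat output (TW) at time t_gyr after planet formation, normalized
    so the isotopes sum to total_today_tw at t_gyr = age_gyr."""
    return sum(total_today_tw * frac * 2.0 ** ((age_gyr - t_gyr) / HALF_LIFE_GYR[iso])
               for iso, frac in FRACTION_TODAY.items())

print(round(mantle_heat_tw(4.5)))                     # 15 TW today, by construction
print(mantle_heat_tw(0.0) > 3 * mantle_heat_tw(4.5))  # several-fold higher at formation
```

The short-lived isotopes dominate the steep early decline, which is why the long-lived U-238 and Th-232 end up controlling the later geodynamic history.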

Dynamo theory holds that magnetic fields are generated by circulation of conductive fluids. In the case of Earth, convection of hot liquid metals in the outer core may be generating our magnetic field (see Figure 1). This outer core dynamo shuffles heat outward from the planet’s interior and its efficiency is controlled by the temperature of the overlying mantle. Thus, our magnetic field can be linked to heat production in the mantle that is mainly due to the decay of radiogenic elements. But what would it mean if our planet happened to have more or less of these elements?

diagram of earth's interior

Figure 1: Simplified cross-section of the common interpretation for Earth’s interior. The thin layer at the Earth’s surface is the crust (brown), below that is the mantle (red), which extracts heat from the liquid outer core (yellow), which convects to produce the magnetic field and gradually solidifies to form the inner core (white). [universe-review.ca]

In general, the composition of a planet should be similar to that of its host star, since they coalesced from the same material. Therefore, we should be able to measure the elemental abundances of a star and say something about its planets. However, concentrations of some elements can vary significantly from star to star due to the different processes that produce them. So-called r-process elements like U and Th are likely distributed unevenly throughout the galaxy because they only form under the extreme conditions of rare events like neutron star mergers. For radiogenic heat production in a planet’s mantle, what matters is the concentration of U and Th relative to the bulk mass of silicates. The ratio of europium to magnesium (Eu/Mg) serves as a good proxy for this, which is useful since U and Th are hard to detect in the spectra of stars. Given typical measurements of Eu and Mg, the authors consider that radiogenic heat production in the mantles of similar planets may vary from roughly 30% to 300% of the Earth’s 15 terawatts.

The models at the center of today’s paper are relatively simple compared to more computationally expensive 2D or 3D models, but they are sufficient to show how changing a parameter like mantle heat production could affect a planet’s evolution. The authors consider the timelines of three otherwise identical Earths, with less U and Th (Figure 2a), Earth-like concentrations (Figure 2b), or more U and Th (Figure 2c). All cases assume that plate tectonics contributes to heat transfer, because previous work suggests magnetic dynamos are more likely under conditions conducive to plate tectonics, even though its presence is not a certainty (see Venus, for example). In Figure 2, the authors use entropy production as a proxy for the likelihood and intensity of a dynamo: a dynamo operates when the entropy production rate exceeds the adiabatic entropy rate, where the adiabat defines the expected temperature and pressure conditions in the core. Dynamo convection is driven at first by the extraction of heat into the mantle, which gradually declines, but it ramps up again once the core cools enough to begin solidifying. This extra burst of activity is due to “compositional buoyancy”: solidification of the core releases light elements into the fluid above.

As a good starting point, the trend predicted by the model for normal Earth (Figure 2b) matches geologic observations that the Earth has had an active magnetic field for over 3.5 billion years, though it turned off or weakened at least once for a few million years. In fact, it seems that Earth was just at the threshold for having a consistently active dynamo, based on how the entropy production may have briefly dipped below the threshold around one billion years ago. In the case of less radiogenic heat than normal Earth (Figure 2a), solid core formation starts earlier and the dynamo is easily maintained. In the case of more radiogenic heat (Figure 2c), the dynamo may turn off for hundreds of millions of years, because a high-temperature mantle isn’t as effective at extracting heat from the core. So, counter to what you might expect, the authors find that more radiogenic heat in the mantle leads to less core heat flux, a weaker dynamo, and a smaller solid core.

heat flow models

Figure 2: Model results for (a) 0.33, (b) 1, and (c) 3 times Earth’s U and Th concentrations. The upper panels show decreasing heat flow over time (solid lines) and the onset of inner core formation (dashed green line). The lower panels show the entropy production rate over time, which generally decreases until inner core formation begins. The dynamo is thought to operate only when the total entropy rate (black) is greater than the adiabatic entropy rate (red). [Nimmo et al. 2020]

A more thorough view of the effect of radiogenic heat can be seen in Figure 3. The concentration of radiogenic elements could affect the habitability of the planet based on whether they are of low enough abundance to allow for a magnetic dynamo. Though some disagree, it is generally thought that a magnetic field helps shield a planet from solar particles which may otherwise erode the atmosphere. On the other hand, higher radiogenic heat in the mantle is expected to cause more volcanism, which likely releases much of the volatiles that allow for a thick, comfy atmosphere. The authors point out that their model probably misses some of the complex feedbacks that may occur here, especially with the many unknowns about plate tectonics, but ultimately argue that the abundance of r-process elements (as seen from stellar Eu/Mg ratios) should be seen as another important factor to consider in the search for habitable exoplanets.

Rate of entropy production

Figure 3: Rate of entropy production (indicated by color) for a varying fraction of radiogenic elements compared to normal Earth (in log scale) over time. Solid black lines indicate a reference temperature and the dashed red lines show the trajectory of three modeled scenarios through time (the author’s Figure 1 is referenced as Figure 2 in this astrobite). Note the black region where too much radiogenic heat kills the dynamo. [Nimmo et al. 2020]

Interestingly, it has been found that lower quantities of radiogenic isotopes are present farther from the galactic center. Also, older stars are found to have smaller amounts of these heavy elements — however, today’s authors expect the random distribution due to r-process rarity to ultimately have the strongest influence on U and Th abundances. The more we learn about what makes Earth’s systems work, the more we will know about what to look for in our searches of the skies for habitable worlds. This paper paves the way for future observations and modeling to expand our view of the complicated interactions that feed into planetary geodynamics and possibly life in the universe.

Original astrobite edited by Spencer Wallace.

About the author, Anthony Maue:

Anthony is a PhD student at Northern Arizona University in Flagstaff studying planetary geology. In particular, his research focuses on Titan’s fluvial processes through analyses of Cassini radar data, laboratory experiments, and terrestrial field analog studies. Outside of school, Anthony enjoys skiing, cycling, running, music and film.

Sun and Mercury

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: The Solar Wind Prevents Reaccretion of Debris after Mercury’s Giant Impact
Authors: Christopher Spalding and Fred C. Adams
First Author’s Institution: Yale University
Status: Published in PSJ

Mercury is a bit of an oddball compared to the other terrestrial planets. Because of its small mass and its proximity to the Sun, Mercury doesn't have an atmosphere, only a “surface-bound exosphere” of gas particles on ballistic trajectories. Under the surface, Mercury has an iron core that extends to more than 80% of its radius, compared with just 50% for Earth.

Many theories have been proposed to explain how Mercury ended up as the planet with the largest core compared to its size. One idea is that Mercury formed with a silicate mantle that was blasted away by asteroid impacts. Another puts forth that as the planets formed from the protoplanetary disk orbiting the Sun, high temperatures sorted out the silicates and iron, so Mercury formed in a region of the disk bereft of silicates to begin with. A third theory states that high temperatures took over after Mercury formed, vaporizing its mantle but not the iron core. The fact that many close-in exoplanets have been found over the last decade with rocky mantles casts considerable doubt on this last theory.

It is, in fact, a corollary of the first theory that the authors of today’s paper tested. Specifically, they hypothesize that as asteroid impacts knocked pieces of Mercury’s mantle into orbit, the powerful solar wind removed the debris before it could coalesce back onto the surface.

Every Day Is a Windy Day in Space

In 1957, Eugene Parker realized something funny happened when he tried to solve the fluid equations to understand how the Sun's atmosphere works. At very far distances, he found a discontinuity — the pressure was much lower than realistically possible. His solution was so revolutionary it took three tries to get published: the solar corona is not static but constantly expands into space. Parker's solar wind is composed of supersonic protons traveling at around 400 km/s, and it dominates the interplanetary environment as far as the heliopause. Parker's other major discovery was the spiraling solar magnetic field.

It’s believed that the young Sun had a solar wind about 100 times stronger than today, which is what makes the work in today’s paper possible.

The Giant Impact of Giant Impacts

Mercury’s early history was likely dominated by giant impacts (similar to those that might have formed the Moon), which blasted large amounts of its silicate mantle into space. The pebble-sized debris, left to orbit, would gradually reaccrete onto the surface of Mercury within about ten million years.

But the strong solar wind from the young Sun can push on the debris just enough to modify the particles’ orbits, either accelerating the debris toward the outer solar system or dragging it in toward the Sun. Figure 1 shows a schematic of this system.

ejected material orbiting Mercury

Figure 1: Diagram of ejected material orbiting Mercury. The solar wind in this case exerts a drag, reducing the orbital semi-major axis and causing the particle to fall toward the Sun. In other cases, the solar wind can accelerate the particle, causing it to exit toward the outer solar system. [Spalding & Adams 2020]

Dual Methods of Studying Early Mercury

To test whether the solar wind could be responsible for facilitating the loss of Mercury’s mantle, the authors first looked for an analytical solution by directly solving equations of motion. Despite the simplifications required, they believed the results would be conceptually insightful. They then followed up with a detailed numerical simulation, relying on high-performance computing.

solar wind velocities

Figure 2: Radial (top) and azimuthal (bottom) velocities of the solar wind as a function of radius for the Sun at ages 3, 10, and 30 million years. Radial velocity increases monotonically but azimuthal velocity reaches a maximum close to the Sun. Super-Keplerian azimuthal winds can accelerate particles outward or inward, depending on orientation. Mercury’s semi-major axis is 0.39 AU. [Spalding & Adams 2020]

In the analytical incarnation, the authors looked for the amount of acceleration the solar wind can impart on centimeter-sized debris orbiting Mercury. Close to the Sun, the solar magnetic field locks the solar wind to the solid body rotation of the Sun. The result is the wind has an azimuthal velocity (circulating around the equator) in addition to its outward, radial velocity. The azimuthal velocity was recently confirmed by the Parker Solar Probe.

Though the azimuthal velocity decreases with distance, at Mercury's location it is still sufficient to exert a meaningful force on orbiting debris, as shown in Figure 2.

The authors added solar wind acceleration to the orbital equations of motion and looked for the decay timescale of the semi-major axis and eccentricity. They varied the age of the Sun, strength of the solar wind, debris launch angle, and starting orbit. In most cases, the solar wind causes debris to decay within about one million years, which is significantly shorter than the ten million years it takes the debris to reaccrete onto the surface, a promising indication for their hypothesis.

debris collisions

Figure 3: Results of numerical simulations with and without solar wind. Of the 110 starting particles, many times more collide back with Mercury in the absence of a solar wind, indicating the wind’s role in stripping collisional debris. [Spalding & Adams 2020]

Many researchers would be satisfied with an analytical solution that supports the hypothesis, but these authors wanted to follow up with a computational approach. Simulations can easily handle more robust physics and perform better control tests. The authors ran N-body simulations of centimeter-sized debris orbiting Mercury with and without the solar wind, tracking each particle to see whether it collided back with Mercury or escaped for good.

Figure 3 shows the results, indicating that the presence of even a weak solar wind significantly reduces the number of particles that recoalesce onto the planet’s surface.

Beyond the Solar System

With a combination of analytical and computational methods, the authors conclude that a strong solar wind during the period of heavy impacts on Mercury could have removed ejected material from orbit within less than a million years. Over time, this resulted in Mercury’s silicate mantle being lost into the Sun or toward the outer solar system, leaving behind the iron core.

The authors offer the possibility of utilizing this work in the study of exoplanets. As space physicists learn more and more about the solar wind and heliosphere, attention has turned to astrospheres, heliospheres around stars other than the Sun. Some detections of close-in exoplanets indicate they are iron-enriched like Mercury, leading to the possibility that their composition can be used as an indirect probe of stellar wind characteristics.

Original astrobite edited by Haley Wahl and Wynn Jacobson-Galan.

About the author, Will Saunders:

I am a third year Ph.D. student at Boston University, where I study planetary atmospheres. I work with Prof. Paul Withers at BU and Dr. Mike Person at MIT using stellar occultations to measure waves in the atmospheres of Mars and Uranus. I received my Bachelors in Physics from the University of Pennsylvania. I am so excited about founding and co-hosting the podcast astro[sound]bites. Check us out on astrosoundbites.com, Apple Podcasts, Google Play, SoundCloud, and Spotify. In my free (pandemic) time, I enjoy biking, outdoor dining, and walking around Boston.

A2261-BCG

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Chandra Observations of Abell 2261 Brightest Cluster Galaxy, a Candidate Host to a Recoiling Black Hole
Authors: K. Gültekin et al.
First Author’s Institution: University of Michigan
Status: Accepted to ApJ

Galaxies can come in many different shapes and sizes, from dwarf galaxies that contain only a few hundred million stars to giant spirals like the Andromeda Galaxy that house over a trillion stars. Their centers can also vary from noisy black holes emitting large jets of radiation (also known as active galactic nuclei) to two spiraling supermassive black holes from two merged galaxies. Today's paper looks at a brightest cluster galaxy and tries to figure out exactly what is at its center.

An Odd Galaxy

Mrk 739

Figure 1: Mrk 739, seen here, is an example of a galaxy merger observed before the two supermassive black holes have merged at the center of the newly-formed galaxy. [SDSS]

Though galaxies can look different, the galaxy cluster Abell 2261's brightest cluster galaxy, A2261-BCG, is particularly strange looking. Its stellar surface brightness profile (a profile that describes the brightness of the galaxy as a function of distance from the center) reveals a core that is unusually large and flat. Interestingly, the core is also offset from the center of the galaxy. Cores like this are sometimes the result of a supermassive black hole merger. When two galaxies collide, their central black holes can sink to the center of the merger and become a binary (as seen in Figure 1). Stars interacting with this supermassive black hole binary can end up being flung out of the center, taking some energy from the binary with them and causing the binary's orbit to shrink. This process is called “scouring” a core in the galaxy. As the two black holes move closer together, they emit gravitational waves, further shrinking the orbit. This energy loss eventually causes the supermassive black hole binary to merge, and the merger can give the resulting black hole a recoil kick of up to several thousand kilometers per second (!), pushing it slightly away, or offset, from the center.

Astronomers expect that A2261-BCG should host a very massive black hole (which is possibly the result of a merged binary) at the center due to the size of its core. In this paper, the authors look for evidence of a recoiling or ejected black hole in A2261-BCG, which would indicate that the black hole is a result of a merged binary.

Searching Radio and X-ray Observations

In order to test the theory that this supermassive black hole was once two, the authors focused on four stellar knots, areas of high star density, near the center. If the black hole was recoiling, it would take a clump of stars with it; the black hole would lie at the center of this clump. In previous works, this team used the Very Large Array (VLA) telescope in New Mexico to look for radio emission coming from the stellar knots, but the only activity they found was evidence of old jet activity. They then used the Hubble Space Telescope to look at the stellar velocity distributions of three of the knots to see if there was a massive object around, but didn’t find any conclusive evidence.

In this work, they use X-ray observations taken with the Chandra telescope to look for evidence of accretion onto a large black hole at the center of A2261-BCG. If found, such emission would suggest either that the black hole never recoiled or that the recoil was very slight.

Were They Abell to Figure Out What’s at the Center?

The team analyzed new Chandra observations, combining them with archival data and performing image and spectral fitting to determine the emission profile of the hot gas at the center. They found evidence of a previous dynamical disturbance, which matches their optical observations, but no point-source emission arising from the optical center of the galaxy. The observations show no 10^10-solar-mass black hole at any of the stellar knots (seen in Figure 2 as the absence of any excess emission from the knots), which raises the question of just where the black hole is. One possibility is that it sits at the center of the galaxy but is accreting at such a low rate that it cannot be detected in X-rays. Another possibility is that the black hole traveled farther than 10 kpc from the center, but a search farther out would be increasingly affected by X-ray noise.

Center of Abell A2261-BCG

Figure 2: Images of the center of the galaxy. Left: the Hubble image showing the four stellar knots in the white contours at the center, with the red circle showing the optical center and the red box showing the location of the radio emission. Middle: Chandra data showing the center. Right: the residuals (the difference between the two). The colors show the amount of X-ray emission. Each panel shows no excess emission at any of the locations. [Gültekin et al. 2020]

The evidence of a radio source implies that at some point there was enough material falling onto the black hole to produce jets, but the lack of a bright X-ray core supports the idea that this is relic emission rather than current activity.

While the center should show some evidence for a supermassive black hole, the team finds that either there is no 10^10-solar-mass black hole at the center, or that it is accreting at a very low level. Further observations with the upcoming James Webb Space Telescope would allow two-dimensional spectral characterization of the core of the galaxy to help determine whether there is a black hole at the center of A2261-BCG.

Original astrobite edited by Brent Shapiro-Albert.

About the author, Haley Wahl:

I’m a third year grad student at West Virginia University and my main research area is pulsars. I’m currently working with the NANOGrav collaboration (a collaboration which is part of a worldwide effort to detect gravitational waves with pulsars) on polarization calibration. In my set of 45 millisecond pulsars, I’m looking at how the rotation measure (how much the light from the star is rotated by the interstellar medium on its way to us) changes over time, which can tell us about the variation of the galactic magnetic field. I’m mainly interested in pulsar emission and the weird things we see pulsars do! In addition to doing research, I’m also a huge fan of running, baking, reading, watching movies, and I LOVE dogs!

elliptical galaxy

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: The Rapid Build-up of Massive Early-type Galaxies. Supersolar Metallicity, High Velocity Dispersion and Young Age for an ETG at z = 3.35
Authors: Paolo Saracco, Danilo Marchesini, Francesco La Barbera, et al.
First Author’s Institution: INAF – Osservatorio Astronomico di Brera, Italy
Status: Accepted to ApJ

For the majority of galaxies in the universe, several billions of years must pass for dramatic, measurable changes to occur in their stellar populations and morphologies. Moreover, in the standard cosmological paradigm, ΛCDM, the most massive structures typically form last from the buildup of smaller constituents (this is known as hierarchical growth). However, recent studies have found something surprising: some galaxies are able to accumulate incredible stellar mass (>10^11 M☉) and deplete their gas reservoirs — becoming “quiescent”, or no longer forming stars, within the first two billion years after the Big Bang. These massive systems must form and evolve through novel avenues in order to assemble so much mass and exhaust their gas supplies so quickly. In this astrobite, we dive into the mystery of massive quiescent galaxies at high redshift: where did they come from and how did they form?

One Galaxy Is Worth One Thousand Words

While the diversity of galaxies throughout the universe is extreme (even during a single epoch), in-depth analyses of a few representative members are capable of shedding light on entire galactic populations. For rare sub-groups, such as massive quiescent galaxies (MQGs) at high redshift, detailed studies of single objects can yield especially insightful results sparking further research. In this work, the authors sought to measure key properties of a previously discovered MQG with the catchy name of C1-23152 in order to explore more general questions about this population of galaxies, like those raised in the introduction. This galaxy is located at z = 3.35, when the universe was only 1.9 billion years old. Previous analyses of this galaxy combined imaging (one group of astronomers used the Hubble Space Telescope to measure its structural properties) with shallow spectroscopy to confirm the redshift and some basic features. Now, the team led by Paolo Saracco has devoted 17.3 hours of total effective integration time with the Large Binocular Telescope (LBT) in Arizona to obtain a detailed near-infrared spectrum. The goal in obtaining this spectrum, shown in Figure 1, was to conclusively establish this galaxy's stellar age, metallicity, and velocity dispersion, while also constraining its star-formation history.

spectrum of galaxy C1-23152

Figure 1: LBT spectrum of galaxy C1-23152. In dark grey is the observed spectrum; in black is a smoothed version of the spectrum; and in red is the best stellar population synthesis model used to infer the physical properties of the galaxy. Prominent emission and absorption lines are labeled in red, while grey vertical bars demarcate portions of the spectrum that are not used in the analysis due to the presence of strong emission lines or sky transmission. In the bottom panel, an SDSS spectrum of a local quiescent galaxy is shown for comparison. [Saracco et al. 2020]

What Was Measured?

Essentially all physical properties of galaxies are in one way or another encoded in the light they emit. The key then is to work backwards: using the light we observe through photometry and spectroscopy, what must the underlying physical properties be? Using the high-quality LBT spectrum (Figure 1) of galaxy C1-23152, the authors performed both absorption line fitting (ALF) and full spectrum fitting (FSF). In both cases, stellar population synthesis models with known physical parameters (e.g., stellar age, metallicity, mass) are compared with the observed spectrum. During ALF, as the name suggests, only the absorption lines are compared with those of models, as the strengths of these lines are easily measured and robust. Conversely, during FSF, the entire spectrum is compared with models. Combining the two approaches helps combat systematic issues arising from the model-fitting procedures and yields related but different estimates of the galaxy's physical properties. Both of these methods employ a code that tries to fit multiple stellar populations of different ages to the observed spectrum (see Figure 2 for a demonstration). Lastly, the team performed standard spectral energy distribution (SED) modeling, where archival photometry from the UltraVISTA survey was used to constrain the SED of the galaxy.
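Schematically, both ALF and FSF boil down to the same operation: compare model spectra against the data and keep the parameters that fit best. Here is a minimal chi-squared sketch of that idea; the toy "template" spectra, wavelength grid, and noise level are invented for illustration and are not real stellar population synthesis models:

```python
import math
import random

# Toy spectral fitting: pick the stellar-population "age" whose synthetic
# spectrum best matches an observed one, by minimizing chi-squared.
random.seed(0)
wavelengths = [1.0 + 1.5 * i / 499 for i in range(500)]  # microns, toy grid

def template(age_gyr, wl):
    # Stand-in for a stellar population synthesis spectrum of a given age.
    return math.exp(-wl / (0.5 + age_gyr))

sigma = 0.01  # assumed per-pixel noise
observed = [template(0.6, wl) + random.gauss(0, sigma) for wl in wavelengths]

def chi2(age_gyr):
    return sum(((obs - template(age_gyr, wl)) / sigma) ** 2
               for obs, wl in zip(observed, wavelengths))

ages = [round(0.1 * k, 1) for k in range(1, 20)]  # 0.1 ... 1.9 Gyr grid
best_age = min(ages, key=chi2)
print(f"best-fit age: {best_age} Gyr")  # recovers the input age, 0.6 Gyr
```

The real analysis fits many parameters at once (age, metallicity, mass, star-formation history) with physically motivated templates, but the comparison step is the same in spirit.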

stellar population synthesis model

Figure 2: Demonstration of a multi-component stellar population synthesis model. In this example, a galaxy is synthesized from two stellar populations: a younger population accounting for 75% of the flux (35% of the mass) and an older population accounting for 25% of the flux (65% of the mass). Using the same code to analyze C1-23152, the team attempted to tease out the different stellar populations with ALF and FSF. The right panel shows the inferred ages of the different components of this synthetic galaxy and demonstrates the robustness of the team’s method in extracting correct ages for the underlying populations. [Saracco et al. 2020]

The team’s analysis revealed C1-23152 is indeed an early-type (quiescent) galaxy that contains an active galactic nucleus (AGN) and has a total stellar mass of ~2×10^11 M☉. It attained its morphology and ceased its star formation during the last 600 million years prior to observation (at 3.35 < z < 4.6). During the 450 million years prior to observation, the galaxy was likely forming stars at a dramatic rate, creating more than 400 solar masses worth of stars per year (400× the current average star formation rate in local galaxies). The team measured a super-solar metallicity, which suggests that star formation was taking place rapidly, without time for the gas supplies to replenish. It is likely that star formation ceased roughly 150 million years prior to observation.

The Art of Inductive Reasoning

So how does this specific finding relate to massive quiescent galaxies in general? For one, the fast formation time suggested by ALF and FSF, combined with the high surface mass density as measured through its morphology, points to a dissipative stellar-mass growth cycle that did not involve mergers, which would ultimately lower the surface mass density. The authors stress that density appears to be a principal driver in quenching star formation, which has been suggested by other astronomers. Theoretical models can form galaxies like C1-23152 relatively easily so long as the progenitor galaxy is dense. The greater-than-solar metallicity also suggests a fast, dissipative growth of stars. It is likely that the AGN in the galaxy played some role in the quenching process, although it is not clear from the data what that role is. Assuming this galaxy continues to evolve through well-understood pathways, such as major and/or minor mergers, it will likely resemble the most massive systems observed today (corresponding to z = 0). A comparison of this galaxy with others like it is given in Figure 3, where different evolutionary scenarios are also illustrated.

quiescent galaxy comparison

Figure 3: Comparison of C1-23152 with other similar quiescent galaxies. From left to right, the velocity dispersion, surface-mass density, and stellar mass of each galaxy is plotted on the x-axis while its effective radius (the radius that encloses half of the light) is plotted on the y-axis. Local quiescent galaxies are shown in grey; local quiescent galaxies with high velocity dispersions are shown as black triangles; red squares illustrate quiescent galaxies that are both compact and have high velocity dispersions; two other MQGs at z > 3 are shown in green and cyan; the large black circle indicates the galaxy in question, C1-23152. The different colored arrows indicate different evolutionary paths the galaxy is likely to traverse from z = 3.35 to today. [Saracco et al. 2020]

C1-23152 provides an interesting laboratory for understanding the formation and growth of massive quiescent galaxies in the early universe. With the detailed analysis of its spectrum, the authors confirmed many of the previously reported properties of this galaxy and were able to infer possible formation scenarios that fit into our theoretical understanding of the way galaxies form and change over time, although this galaxy is quite an extreme example. Nevertheless, each in-depth study of interesting and unique galaxies, like C1-23152, brings us closer to a full understanding of galaxy formation and evolution, with all of the intricacies included.

Original astrobite edited by Wynn Jacobson-Galan.

About the author, Lukas Zalesky:

I am a PhD student at University of Hawaii’s Institute for Astronomy. I am interested in understanding the way galaxies form and evolve over billions of years, as well as gravitational lensing by galaxy clusters. Outside of research I spend my time playing music, video games, exercising, and exploring the beautiful island of Oahu.

Kepler-452b

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: The Occurrence of Rocky Habitable Zone Planets Around Solar-Like Stars from Kepler Data
Authors: S. Bryson, M. Kunimoto, R. Kopparapu et al.
First Author’s Institution: NASA Ames Research Center
Status: Accepted to AJ

When we look up on a clear night and contemplate the seemingly innumerable stars, it’s difficult not to wonder just how many other worlds like the Earth might be out there. In ancient times, long before the first unambiguous discovery of an extrasolar planet (a planet orbiting a star other than the Sun), it was believed there were essentially two possibilities, both put forward by philosophers during the Classical period in Ancient Greece: Aristotle (384–322 B.C.) declared “There cannot be more worlds than one”, while Epicurus (341–270 B.C.) held the opposing view that “There are infinite worlds both like and unlike this world of ours”. Over two millennia later, and thanks to the painstaking work of countless generations of astronomers, we now know for sure which of these views is closer to the truth. Now, a team of astronomers led by Steve Bryson of NASA’s Ames Research Center have taken us one step closer in our quest to discover other worlds like the Earth, using data obtained with the Kepler Space Telescope and the Gaia mission. Their results suggest that around half of Sun-like stars in our galaxy might host rocky, potentially habitable planets within their habitable zones.

NASA’s Kepler Space Telescope was launched in 2009, and one of its primary missions was to discover rocky planets in or near the habitable zones of their host stars (roughly, the region where liquid water can exist on a planet’s surface), and to estimate the fraction of stars that might host such planets. During the nine and a half years over which it was active, Kepler detected 2,393 confirmed exoplanets around the 530,506 stars it observed — and for most of that time it stared at a single patch of sky only about the size of a hand at arm’s length. To detect planets, Kepler made use of the transit method, which relies on measuring the tiny dip in brightness that occurs when a planet passes in front of its host star as we view it from the Earth. As can be imagined, this dip in brightness is pretty small: about 1% for a giant exoplanet similar to Jupiter, and only about 0.01% for a rocky, Earth-like exoplanet. NASA retired the space telescope in 2018 after it ran out of fuel, but that hasn’t stopped astronomers from continuing to pore over the data and make new discoveries.
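These depths follow directly from geometry: the transit depth is roughly the squared ratio of the planet's radius to the star's radius. A quick sanity check in Python, using standard round values for the radii:

```python
# Transit depth ~ (R_planet / R_star)^2: the fraction of the stellar disk
# covered by the planet during transit (ignoring limb darkening).
R_SUN_KM = 695_700
R_JUPITER_KM = 69_911
R_EARTH_KM = 6_371

def transit_depth(r_planet_km, r_star_km=R_SUN_KM):
    """Fractional dip in the star's brightness during a transit."""
    return (r_planet_km / r_star_km) ** 2

print(f"Jupiter-sized: {transit_depth(R_JUPITER_KM):.2%}")  # about 1%
print(f"Earth-sized:   {transit_depth(R_EARTH_KM):.4%}")    # about 0.008%
```

A dip of well under 0.01% for an Earth-sized planet is why Kepler needed space-based photometric precision.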

The number of potentially habitable planets per solar system in our galaxy is a key term in the Drake equation, which is a probabilistic formula used to estimate the number of detectable civilisations residing within the Milky Way. None of the terms in the equation are known exactly, and most are only rough estimates based on observation. For this reason, much of the research carried out at The SETI Institute (The Search for Extraterrestrial Intelligence) focuses on finding reliable constraints for these terms.

Drake equation

Figure 1: An illustration of the various terms in the famous Drake equation. The number of potentially habitable planets per solar system is one of the key terms in the equation. [University of Rochester]
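Because the Drake equation is just a product of factors, it is easy to experiment with. The sketch below uses purely illustrative placeholder values (none of them are measurements); the habitable-planets-per-system term, n_e, is the kind of quantity today's paper constrains:

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L, an estimate of the
# number of detectable civilizations in the Milky Way.
# All input values below are placeholder guesses for illustration only.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(
    R_star=1.5,  # rate of star formation (stars per year)
    f_p=0.9,     # fraction of stars with planets
    n_e=0.5,     # habitable planets per planetary system
    f_l=0.1,     # fraction of those planets that develop life
    f_i=0.01,    # fraction of those that develop intelligence
    f_c=0.1,     # fraction of those that become detectable
    L=10_000,    # years a civilization remains detectable
)
print(f"N = {N:.3f}")
```

Tightening the uncertainty on n_e, as this paper does, pins down one of the few terms in the equation that observation can currently constrain.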

Bryson and collaborators performed a detailed statistical analysis after combining the Kepler planet candidate catalogue with data from ESA’s Gaia mission. Previous studies that attempted to estimate similar planet occurrence rates have only considered the planet’s distance from the star, whilst this is the first study of its kind to consider the “instellation flux”, or the amount of energy falling on a planet from its host star. This was possible thanks to the inclusion of data from Gaia, which was designed to construct an ultra-precise 3D map of the positions and motions of stars in the Milky Way, and also to provide information on stellar properties — such as their luminosity and effective temperature. This allowed the researchers to carry out their analysis in an entirely new way that was more representative of the actual diversity of host stars in our galaxy.

The researchers then estimated the occurrence rates using a range of models, stellar populations and computation methods. They limited their analysis to exoplanets similar in size to the Earth (radii between 0.5 to 1.5 times that of the Earth) and therefore likely to be rocky, and stars with a similar age and temperature to the Sun (between about 4,800 K and 6,300 K). They also considered two scenarios using either a conservative or optimistic definition of the inner and outer habitable zone boundaries.

habitable zone occurrence rates

Figure 2: The resulting distributions of the habitable zone occurrence rate for a range of models and stellar populations. The medians and 68% credible intervals are shown above the distributions. The top panels incorporate uncertainties on planet radius, stellar instellation and stellar effective temperature, whilst the bottom panels do not incorporate these uncertainties. Left panels consider the conservative habitable zone; right panels, the optimistic habitable zone. The plots show how very similar results were obtained for models 1 and 2 for both stellar populations. [Bryson et al. 2020]

From their analysis, the authors estimate that the average number of planets per star with a planet radius between 0.5 and 1.5 Earth radii, and within the star’s habitable zone, is between 0.37 and 0.60 for the conservative habitable zone. For the optimistic habitable zone, they estimated between 0.58 and 0.88 planets per star. This means that, even using the most conservative estimate, there could be as many as 300 million potentially habitable planets in our galaxy, and most likely many more! They also showed that there are likely to be at least four potentially habitable planets within about 30 light-years of the Sun, and the closest is likely to be at most about 20 light-years away.
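The jump from "0.37–0.60 planets per star" to "hundreds of millions of planets" is simple multiplication. In this back-of-the-envelope sketch, only the per-star rates come from the paper; the count of Sun-like stars is an assumed round number for illustration:

```python
# Occurrence rate (planets per star) x number of Sun-like stars.
# The stellar count is a hypothetical round value, not from the paper.
N_SUNLIKE_STARS = 1_000_000_000   # assumed ~1 billion Sun-like stars

conservative_low, conservative_high = 0.37, 0.60  # conservative habitable zone
optimistic_low, optimistic_high = 0.58, 0.88      # optimistic habitable zone

low = conservative_low * N_SUNLIKE_STARS
high = optimistic_high * N_SUNLIKE_STARS
print(f"{low:,.0f} to {high:,.0f} potentially habitable planets")
```

Even the lower bound lands in the hundreds of millions, which is why the uncertainty in the occurrence rate matters far less than the sheer number of stars.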

Although the team carried out a very careful analysis of the data, the uncertainties on their estimates are still quite large due to the small number of rocky planets actually detected by Kepler. Future work will likely help to refine these estimates even further. Knowing just how common different types of exoplanets are could help guide the design of future space missions searching for potentially habitable exoplanets.

Original astrobite edited by Will Saunders.

About the author, Jamie Wilson:

I am a PhD student at the Astrophysics Research Centre, Queen’s University Belfast. My work focuses on the characterisation of exoplanet atmospheres in order to better understand their chemical compositions, formation conditions and evolutionary histories. When not doing science I can usually be found playing drums and touring with my band.

W3/W4/W5 complex


Title: Methanimine as a Key Precursor of Imines in the Interstellar Medium: The Case of Propargylimine
Authors: Jacopo Lupi, Cristina Puzzarini, and Vincenzo Barone
First Author’s Institution: Scuola Normale Superiore, Italy
Status: Submitted to ApJ

What Even Is an Imine?

Perhaps one of the biggest questions we can ask is, where does life come from? Many astrochemists seek to answer this question by investigating the history and evolution of molecules that are biologically significant. It turns out imines (pronounced like “I means”) are an important group of molecules that can eventually form DNA. Imines are distinguished by a carbon atom double-bonded to a nitrogen atom, which is in turn bonded to a hydrogen atom, or “C=NH,” where the carbon can be bonded to any other group of atoms.

propargylimine

Figure 1: Z- and E-propargylimine. The CNH bonds in red are what classify these molecules as imines. Note that the Z- and E- configurations are different molecules. The slightly different positioning of the N–H bond makes these molecules isomers. [Abygail Waggoner]

Currently, only six imines have been detected in the interstellar medium (ISM). The most recent detection, published earlier this year, is of Z-propargylimine, shown in Figure 1 (check out this website to learn how chemists name molecules), in the molecular cloud G+0.693-0.027. With an increasing number of known imines in the ISM, many astrochemists are trying to understand how they form. Traditionally, larger molecules (like Z-propargylimine) are assumed to have formed in the icy layers on dust grains. However, recent studies have shown that some imines can form in the gas phase of molecular clouds.

Today’s paper uses computational chemistry to determine if the newly detected propargylimine (PGIM) can form via a similar route in the gas phase, or if this large, complex imine is more likely to form in interstellar ices.

Chemistry with Computers

The authors of today’s paper use computational chemistry, which applies quantum mechanics to determine the structures and energies of different molecules. The software they used, Gaussian, is commonly employed to determine whether a chemical reaction is possible and exactly how a set of reactants forms a product.

Today’s paper explores many different ways to form PGIM, and the authors find that the simplest imine, methanimine (CH2NH), is a possible precursor of PGIM. Methanimine can react with either CN or CCH to form CH2NCCH, which then follows the reaction pathways presented in Figure 2 to form either E- or Z-propargylimine. As you can see in Figure 2, these pathways can get pretty complex.

imines

Figure 2: The different reaction pathways to forming PGIM from H2CNCCH and hydrogen. Carbon atoms are represented by black circles, nitrogen by blue, and hydrogen by white. Note that different reaction pathways and branching are represented by different colored lines, and both E- and Z-propargylimine can form. [Lupi et al. 2020]

The type of reaction the authors identified is known as an addition-elimination reaction. Basically, once CH2NCCH is formed from methanimine, a hydrogen atom will be “added” by reacting with CH2NCCH, then the nitrogen and terminal carbon will “switch” spots. Lastly, the hydrogen atom is lost, or “eliminated,” thus forming PGIM.

In addition to the mechanistic study described above, the authors derived individual rate constants for each step in the reaction pathways shown in Figure 2. The calculated rate constants suggest that the proposed addition–elimination reaction is indeed feasible under gas-phase interstellar conditions.
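To give a feel for what a gas-phase rate constant looks like in practice, here is a hedged sketch using the modified Arrhenius form in which astrochemical databases typically tabulate such rates. The parameter values below are illustrative placeholders, not the authors’ computed values:

```python
import math

def modified_arrhenius(T, A, n, Ea_over_k):
    """k(T) = A * (T/300)^n * exp(-Ea/(k_B*T)): the standard fit form
    for gas-phase rate constants in astrochemical databases."""
    return A * (T / 300.0) ** n * math.exp(-Ea_over_k / T)

# Illustrative parameters for a barrierless radical reaction (Ea = 0);
# NOT the paper's fitted values.
k_cold = modified_arrhenius(T=10.0, A=1e-10, n=-0.5, Ea_over_k=0.0)
print(k_cold)  # roughly 5.5e-10 cm^3 s^-1
```

A zero barrier and a negative temperature exponent are what allow reactions like these to remain fast even in ~10 K molecular clouds.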

CNH to DNA

So, why is it important that PGIM can form in the gas phase in the ISM? Well, as can be seen in Figure 2, chemical reactions and reaction pathways are very complex, and many different molecules can form in many different ways. While this study focused on the production of PGIM, the results suggest that other complex imines could form from smaller, less complex imines via similar gas-phase pathways.

amino acids

Figure 3: Imines are chemical precursors to amines, classified by carbon bonded to NH2. Amines are chemical precursors to amino acids, which are classified by the NH2 and COOH groups. Amino acids are the building blocks that make up our DNA. In this image “R” indicates any group of atoms. [Abygail Waggoner]

As we discussed at the beginning of today’s bite, imines are considered a biological precursor to DNA (Figure 3), so understanding how they form is key to understanding the origins of life in the universe. Traditionally, we assume that large carbon-based molecules, like imines, form in the ice. The discovery of a possible gas-phase formation route is therefore a new and exciting pathway that could tell us more about the origins of life as we know it.

Original astrobite edited by Huei Sears.

About the author, Abygail Waggoner:

I am a second year chemistry graduate student at the University of Virginia and NSF graduate fellow. I study time variable chemistry in protoplanetary disks. When I’m not nerding out about space, I’m nerding out about fantasy by reading or playing games like dungeons and dragons.

brown dwarf


Title: Direct radio discovery of a cold brown dwarf
Authors: H. K. Vedantham et al.
First Author’s Institution: ASTRON, Netherlands Institute for Radio Astronomy
Status: Submitted to ApJL

Brown Dwarfs: The Middle School of Celestial Objects

What do you get when you have too much mass to form a planet, but not enough mass to form a star? A brown dwarf! First theorized in the 1960s and observed in the 1990s, brown dwarfs — a subclass of ultra-cool dwarfs — are substellar objects around 13–80 times the mass of Jupiter (or 10–90 times, depending on who you ask). They are special because, though they are thought to form in a similar manner to stars, they aren’t massive enough to trigger sustained hydrogen fusion in their cores. Instead, they are thought to fuse deuterium or lithium. This means that, unlike our Sun or other stars, they will gradually cool and fade rather than becoming white dwarfs, neutron stars, or black holes.

Despite not being stars, brown dwarfs are still self-luminous — meaning they emit their own light rather than just reflecting light from a host star — and can therefore be assigned spectral classifications like stars. Depending on their luminosities and temperatures, brown dwarfs are classified as L, T, or Y type. Each class shows different dominant absorption features: L dwarfs are water- and carbon-monoxide-dominated, T dwarfs are methane-dominated, and Y dwarfs are potentially ammonia-dominated.

The Study

Like stars, some brown dwarfs are known to have strong magnetic fields, and some even show signs of aurorae. This magnetic activity allows some brown dwarfs to be detected not only with optical instruments but also in the radio and — if the activity is strong enough — X-ray bands. However, radio observations of these objects have previously been used primarily to follow up already-known brown dwarfs. The authors of today’s paper use the Low Frequency Array (LOFAR) to make the first direct radio discovery of a brown dwarf, BDR 1750+3809. They specifically searched for circularly polarized radio sources in the LOFAR Two-metre Sky Survey (LoTSS), because known brown dwarfs have highly circularly polarized radio emission. They followed up the LoTSS detection with near-infrared observations using the Wide-field Infrared Camera (WIRC) at Palomar and the NIRI imager at Gemini-North. They also obtained a spectrum using NASA’s Infrared Telescope Facility (IRTF).

Using all of these follow-up observations, the authors were able to determine several characteristics of BDR 1750+3809:

  • It has strong methane absorption bands, indicating it is likely a T dwarf
  • The approximate distance to the object, calculated using the distance modulus, is around 57–74 pc (186–241 light-years)
  • It has a larger luminosity than expected. This is likely caused by the viewing geometry or by a companion object that is either large or close to BDR 1750+3809, similar to the Jupiter–Io system.
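The distance modulus used in the second bullet relates apparent magnitude m and absolute magnitude M to distance via m − M = 5 log10(d / 10 pc). A minimal sketch with hypothetical magnitudes (the paper’s actual photometric values are not reproduced here):

```python
PC_TO_LY = 3.2616  # light-years per parsec

def distance_from_modulus(m, M):
    """Distance in parsecs from the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((m - M) / 5.0 + 1.0)

# Hypothetical magnitudes, chosen only so the result lands in the quoted 57-74 pc range.
d_pc = distance_from_modulus(m=18.0, M=14.0)
print(round(d_pc, 1), round(d_pc * PC_TO_LY, 1))  # -> 63.1 205.8
```

The absolute magnitude comes from the inferred spectral type, so the uncertainty in the type is what spreads the estimate over the 57–74 pc range.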

Most importantly, though, the detection shows that radio observations can be used to blindly discover these objects.

LOFAR observations

Figure 1: Six graphs of LOFAR radio signals (and non-detections) from BDR 1750+3809 are shown. The graphs on the left show the total intensity of the signal, while the graphs on the right show only the intensity of the circularly polarized signals. The three different observation dates are noted on the graphs. [Vedantham et al. 2020]

Why Does It Matter?

This discovery is important not only as evidence of a new way to discover brown dwarfs, but also as a potential window into the properties of exoplanet magnetospheres. Both brown dwarfs and planets are thought to have predominantly dipolar magnetic fields, meaning they have two poles of equal and opposite strength, like a bar magnet or Earth’s magnetic field. However, because of technological constraints and the fact that Earth’s ionosphere blocks many low-frequency radio waves, signals from exoplanet magnetic fields are currently hard to detect (although one was detected via its aurorae earlier this year). This low-frequency brown dwarf detection — at frequencies comparable to those expected from gas-giant exoplanets — indicates that instruments such as LOFAR do have the sensitivity necessary to make radio detections of exoplanet magnetospheres. If learning about the magnetic field itself isn’t exciting enough, keep in mind that a magnetic field strong enough to shield a planet from stellar radiation is thought to be a requirement for habitability as we know it. The more we can determine about an exoplanet’s magnetosphere, the more we can speculate about the possibility of it sustaining life!

Original astrobite edited by Aaron Pearlman.

About the author, Ali Crisp:

I’m a third year grad student at Louisiana State University. I study hot Jupiter exoplanets in the galactic bulge. I am originally from Tennessee and attended undergrad at Christian Brothers University, where I studied physics and history. In my “free time,” I enjoy cooking, hiking, and photography.
