
Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Col-OSSOS: Color and Inclination are Correlated Throughout the Kuiper Belt
Authors: Michaël Marsset, Wesley C. Fraser, et al.
First Author’s Institution: Queen’s University Belfast, UK
Status: Accepted to AJ

The outer reaches of our own solar system remain a mystery. Astronomers are only just beginning to shine (colored) light on the distant region of our solar system called the Kuiper Belt. But this region has a lot to tell us about the history of the solar system — N-body simulations predict the types of objects we should find and what their orbits should look like. Are they locked into an orbital resonance with Neptune? Have they been flung out of the plane of the solar system by a past interaction with another object? Other studies also predict the types of molecules we should see based on where the objects formed and how they have been flung around the solar system.

From Grayscale to Color

Figure 1. Demonstration of spectroscopy vs. broad-band filter imaging. The spectrum (of a galaxy, for this plot) is shown in black, while the filter transmission as a function of wavelength for the BViz filters is shown in color. The flux from all wavelengths spanned by a filter is added according to that filter’s response to produce the data points in the top panel. [STScI and Dan Coe]

Using spectroscopy to learn about the compositions of Kuiper Belt objects (KBOs) has typically been impossible because these objects are generally so faint (due to their great distances from Earth of 30–40 astronomical units or more). One solution to this problem is to take “broad-band” filter measurements — instead of separating the object’s light into individual wavelengths as in spectroscopy, astronomers take images using filters that select wider ranges of wavelengths (see Figure 1). While this technique sacrifices detailed wavelength information, it increases photon counts, making even faint KBOs measurable. The difference in flux between images in two filters gives a “color.” Unfortunately, many KBO surveys have taken images in only one filter. Thus, many of the ~2,100 currently known KBOs don’t have associated colors. Today’s paper details results from ongoing Col-OSSOS, or the Colors of the Outer Solar System Origins Survey, which uses the Gemini-North telescope in Hawaii to measure the colors of select objects discovered by OSSOS. Today’s authors compiled the largest KBO sample to date with well-measured colors — 229 KBOs in three different filters.
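As a back-of-the-envelope sketch of how a color comes out of two broad-band images (with made-up fluxes, not Col-OSSOS measurements), the color is just the magnitude difference between the two filters:

```python
import math

def color_index(flux_blue, flux_red):
    """Color as a magnitude difference between two broad-band filters:
    m_blue - m_red = -2.5 * log10(flux_blue / flux_red).
    A positive value means relatively more flux in the redder filter,
    i.e. a redder object."""
    return -2.5 * math.log10(flux_blue / flux_red)

# Hypothetical photon fluxes (arbitrary units) for a faint KBO:
print(color_index(120.0, 200.0))  # positive -> a redder object
print(color_index(200.0, 120.0))  # negative -> a grayer/bluer object
```

Measuring the flux ratio in any two of a survey’s filters yields one such color, which is why imaging in three filters is enough to characterize each object’s color.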

Today’s authors were specifically interested in how an object’s color correlates with its orbital inclination. Since color and orbital properties like inclination can tell us about an object’s dynamical history, considering both together could potentially place stronger constraints on our solar system’s complicated dynamical past than either alone. The authors selected their sample from Col-OSSOS itself and from previous surveys according to a few criteria:

  1. Previous surveys considered must have published their telescope pointing history and taken observations in filters comparable to the Col-OSSOS filters.
  2. Objects must have an orbital inclination greater than 5°, such that they are “dynamically excited.”
  3. Objects must be smaller than ~440 km, to avoid the range of sizes in which KBOs transition from large and ice-rich to small and depleted of ices. The authors were only interested in colors due to rock composition, not in colors due to the presence of ices.
  4. Objects must not belong to known families with distinct compositions/colors (like the icy collisional family of Haumea) or pass too closely to Jupiter.

When all was said and done, the authors had a sample of 229 KBOs whose colors fell into two distinct populations that the authors termed “gray” and “red.” They then examined the orbital inclination distributions of the gray and red KBOs in turn.

Colorful Results

Figure 2. The orbital inclination vs. spectral slope for the 229 gray and red KBOs, where the spectral slope is a measure of how an object’s reflectance changes with wavelength — in other words, it is another measure of color. The blue shading represents the smoothed density of data points. The red KBOs have a significantly lower inclination distribution in general than the gray KBOs. [Marsset et al. 2019]

Marsset et al. find that the inclination distributions of the two color populations are significantly different, as measured by a variety of statistical tests. Specifically, they find that the red population of dynamically excited KBOs has smaller inclinations in general than the gray population (see Figure 2). This suggests that the red KBOs have experienced less disruption than the gray KBOs over our solar system’s history. Moreover, when the gray and red populations are categorized further into specific dynamical classes (e.g., objects in nearly circular orbits, or objects actively scattering off of Neptune), the same trend emerges. This means that the overall trend is not biased by any single subpopulation of objects.

But what if the observed trend between color and orbital inclination in the gray and red KBO populations is simply due to biases in the surveys? Previous studies have shown that redder objects tend to be more reflective, making them brighter and more readily detected. The authors also note that few surveys target higher inclinations and that those surveys tend to use redder filters. They use both analytical calculations and a survey simulation code to estimate the effects of these factors. They find that the potential biasing factors would actually result in more red KBOs detected with high orbital inclinations — exactly opposite of the trend they find in the data! Furthermore, their survey simulations show that they find many fewer red objects than their models predict. This implies that the color-inclination trend observed is an intrinsic feature rather than one produced by survey bias.

So why does this trend between color and inclination exist? Prior to today’s paper, two hypotheses competed for the top spot: (1) all KBOs were originally similar, but collisions and other resurfacing processes altered both their colors and inclinations, and (2) the two color populations originally formed in different locations in the protoplanetary disk from different materials, and the gray KBOs were flung outward into the Kuiper Belt. Since collisions affect both orbital inclination and eccentricity, the authors would expect a color–eccentricity trend as well if hypothesis (1) were correct. However, no such trend exists in the data, suggesting that the two color populations did, in fact, originally form in different regions of our solar system. The results from today’s paper are suggestive of hypothesis (2), yet 229 KBOs is only a tiny fraction of all the KBOs waiting to be discovered and studied. Col-OSSOS is still taking data, and the Large Synoptic Survey Telescope (LSST), which will turn on in 2023, is expected to detect ~40,000 KBOs with well-measured colors. And that will still only be a fraction of the predicted number of KBOs (possibly more than 100,000 larger than 100 km, and even more smaller than that). There is still a lot of solar system to explore!

About the author, Stephanie Hamilton:

Stephanie is a physics graduate student and NSF graduate fellow at the University of Michigan. For her research, she studies the orbits of the small bodies beyond Neptune in order to learn more about the Solar System’s formation and evolution. As an additional perk, she gets to discover many more of these small bodies using a fancy new camera developed by the Dark Energy Survey Collaboration. When she gets a spare minute in the midst of hectic grad school life, she likes to read sci-fi books, binge TV shows, write about her travels or new science results, or force her cat to cuddle with her.



Title: Formation of carbon-enhanced metal-poor stars as a consequence of inhomogeneous metal mixing
Author: Tilman Hartwig and Naoki Yoshida
First Author’s Institution: The University of Tokyo, Japan
Status: Submitted to ApJL

“Star Light, [Supernova] Bright”

Though the Big Bang and the birth of the universe as we know it — or, at least, *think* we know it — all happened way before any of us were born, we can still piece together the universe’s history from the starlight we observe today.

Scientists believe that the very first stars in our universe were born a long, long, long, long, long time ago, when the universe was “just” a few hundred million years old (much younger than the universe’s current age of nearly 14 billion years). These first stars were huge — scientists believe they might have been a hundred or so times more massive than the Sun. They were born of gas containing only hydrogen, helium, and tiny traces of lithium — the only three elements around after the Big Bang. All elements heavier than hydrogen and helium, which astronomers refer to collectively as “metals”, were forged later, after these first stars exploded into brilliantly bright supernovae and expelled the first heavier elements out into space. These first metals can be traced in “second-generation” stars — the extremely metal-poor stars that were born after the first stars went supernova. Scientists study second-generation stars in our Milky Way to learn more about how they formed and how the first stars led to their birth.

In today’s astrobite, we consider the story told by a special class of second-generation star: the carbon-enhanced, metal-poor (CEMP) stars. CEMP stars are metal-poor but have relatively high carbonicity, which means they have enhanced amounts of carbon compared to their iron content. Specifically, we look at CEMP-no stars, which are CEMP stars that also have relatively little barium with respect to iron. These CEMP-no stars are believed to directly represent the chemical composition of the environment they formed in.

At this time, however, we’re still not sure how these carbon-enhanced stars formed. One theory says that CEMP-no stars were born after faint supernovae. Since faint supernovae are known to expel relatively less iron than more “normal” supernovae, and since CEMP-no stars have carbon that’s enhanced relative to their iron content, it’s possible that CEMP-no stars show carbon enhancements because they were born after faint supernovae. However, it’s not clear what percentage of the universe’s first stars may have exploded as faint supernovae. It’s also not clear if that percentage was enough to explain all of the CEMP-no stars we expect there to be in the universe.

Today’s authors consider a different perspective. They question if the carbon enhancements observed for these CEMP-no stars might be explained in part by inhomogeneous metal mixing, an uneven distribution of metals in the environment where they formed. So far, not much research has been done on the effects of inhomogeneous mixing of elements in the early universe. So today’s authors used theory and modeling to test their scenario, in which inhomogeneous metal mixing leads to the formation of a CEMP-no star.

To Mix and Make a Star

We can break down the authors’ proposed scenario for forming a CEMP-no star into four crucial steps, which are illustrated in Figure 2:

  1. One of the universe’s first stars explodes into a supernova.
  2. The explosion eventually leads to overdensities in the surrounding gas, such that some parts of the gas are ‘clumpier’ than others. Due to inhomogeneous mixing after the explosion, these clumps have differing elemental abundances from each other.
  3. A clump with higher carbonicity relative to another nearby clump collapses and forms stars.
  4. The newly-formed stars send out energetic photons. These photons pierce the other nearby clump, break apart the molecules contained in that nearby clump, and thus stop that clump from collapsing and making its own stars as well.

Figure 2: A graphic illustrating the authors’ scenario. In the first panel on the left, a first star explodes into a supernova that has a typical carbon abundance. In the second panel we see how, due to inhomogeneous mixing, the supernova leads to a clump of gas that has higher carbonicity (along the top row) and a clump that has less carbonicity (along the bottom row). Then in the third and fourth panels, we see that the clump with higher carbonicity produces CEMP stars. These stars send out energetic photons that prevent the clump with less carbonicity from producing stars as well. [Hartwig & Yoshida 2019]

The authors used oodles of theory to build and test a model of this scenario. Here are just a few of the many aspects of the scenario that they explored in detail:

  • How quickly the clump with higher carbonicity would need to collapse
  • How various cooling methods for the surrounding gas would affect the clump’s timescale of collapse
  • How long it would take the newly formed stars to send out energetic photons

The authors’ work culminated in an analytical, closed set of equations that related the carbonicity of the clump to the difference in carbonicity (and thus the level of inhomogeneity) between that clump and its neighboring clump. They found that this relationship depended most strongly on the difference in carbonicity and the physical distance between the two clumps.


Figure 3: Predictions of how well the authors’ inhomogeneous metal mixing scenario explains the formation of 64 CEMP-no stars that have been observed. The x-axis shows the distance between the two clumps in parsecs (pc). The y-axis represents the minimum difference in carbonicity between the two clumps (where higher numbers mean a larger difference) that would allow this scenario to produce a CEMP-no star. The black line is the 50% line. So for example, if we pick a spot along the black line, then 50% of the observed CEMP-no stars in the survey could be explained by the corresponding distance and difference-in-carbonicity values, while another 50% would require a larger difference in carbonicity. The red and green areas explain 0% and 100% of the observed CEMP-no stars, respectively. So if all of the clumps resulting from the first stars’ supernovae fell into the green region, then this scenario would explain 100% of the CEMP-no stars observed in the survey. [Hartwig & Yoshida 2019]

Figure 3 predicts how well this scenario could explain the formation of 64 CEMP-no stars that have been observed. The authors find that for their standard model, with a clump distance of 30 pc and a difference in carbonicity of about 1 unit, about 11% of observed CEMP-no stars can be explained by this scenario.

The authors stress that they’re not saying that all CEMP-no stars in the universe formed through this inhomogeneous pathway. They’re merely proposing that this is how a certain proportion of the CEMP-no stars we observe today may have formed. They look to future 3D simulations to dive deeper into inhomogeneous mixing and to investigate how efficiently the process might have occurred in the early universe. But in the meantime, if we couple this inhomogeneous pathway with the faint-supernova pathway we discussed at the beginning of this astrobite, then we may not need so many faint supernovae after all to tell the story of how these CEMP-no stars came to be.

About the author, Jamila Pegues:

Hi there! I’m a 3rd-year grad student at Harvard. I focus on the evolution of protoplanetary disks and extra-solar systems. I like using chemical/structural modeling and theory to explain what we see in observations. I’m also interested in artificial intelligence; I like trying to model processes of decision-making and utility with equations and algorithms. Outside of research, I enjoy running, cooking, reading stuff, and playing board/video games with friends. Fun Fact: I write trashy sci-fi novels! Stay tuned — maybe I’ll actually publish one someday!


Editor’s note: This article, written by AAS Media Fellow Kerry Hensley, was originally published on Astrobites.

Are All Sun-like Stars the Same?

We refer to stars with approximately the same spectral type as the Sun as “Sun-like,” but how similar are they really? One way to gauge this is by studying the stars’ magnetic activity, like their starspots (relatively cool areas of the stellar photosphere where magnetic flux bubbles out of the surface) or stellar flares (sudden releases of energy in the form of lots and lots of photons — all the way from X-ray to radio).


Figure 1: The starspots studied in this paper are generally much larger than a typical sunspot. A particularly large sunspot, spanning 80,000 miles, is shown here. [NASA/SDO]

Some Sun-like stars have been observed to unleash so-called superflares, which are thought to arise from processes similar to garden-variety solar flares but have 10,000 times the energy. Has the Sun ever set loose a superflare? Could it do so in the future? It’s not clear yet, but it’s an important question to ask, since a superflare could seriously disrupt the satellite networks we’ve come to rely on. By studying flares on other Sun-like stars, we can get a better sense of the similarities and differences between the Sun and the Sun-like stars scattered across the universe.

Superflares can also tell us something about how magnetic fields are generated and configured on other stars; superflares (and solar flares) seem to be linked to starspots (see Figure 1), which are a visible manifestation of a star’s coiled and twisted magnetic field. By studying the starspots that superflares are linked to, we can gain a better understanding of the magnetic dynamos of other stars.

However, our telescopes don’t have the resolution necessary to directly image starspots on other stars. How do we study activity on distant stars?

Kepler: Not Just for Planets!

Led by Kosuke Namekata (Kyoto University, Japan), the authors of today’s paper used Kepler space telescope (may it orbit in peace!) light curves for over 5,000 stars to study starspots on Sun-like stars. In order to identify starspots, the authors searched for repeated dips in the Kepler light curves — signaling the spots transiting the visible face of the stars as they rotate. In total, they were able to track 56 starspots as they formed and faded (see Figure 2).


Figure 2: Example Kepler light curve (a), along with the residual between the data (black) and the fit (red) in panel (b), the phase of the starspots (c), and the depth of the minima as a function of time (d) for a star from this study. [Namekata et al. 2018]

For each of the starspots, the authors calculated the area (from the depth of the brightness dip), the lifetime (from how long they were able to track the presence of the brightness dips), and the rates at which the spots emerged and decayed (from how the starspot area changed over time).

The authors found that starspots tended to emerge and decay at rates consistent with what we expect from studying spots on our own Sun, which hints that starspots on stars near and far are governed by the same processes. They also found that the lifetimes of the individual spots (10–350 days) tended to be shorter than expected given their area (0.1–2.3% of the stellar surface), but cautioned that the starspot lifetimes could be underestimated because of the difficulty of detecting the spots just as they are emerging and fading. Figure 3 shows a comparison of the areas and lifetimes of sunspots and starspots.
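The first of those estimates can be sketched with a toy model. This assumes a spot that is completely dark; real analyses must also account for the fact that spots still emit some light, so the contrast parameter here is a simplification layered on top of what today’s paper actually does:

```python
def spot_area_fraction(dip_depth, spot_contrast=1.0):
    """Fraction of the visible stellar disk covered by a starspot,
    inferred from the fractional dip it causes in the light curve.

    dip_depth: fractional drop in flux (e.g. 0.01 for a 1% dip).
    spot_contrast: 1.0 for a perfectly dark spot; smaller values mean
    the spot still emits light, so the same dip implies a larger spot.
    """
    return dip_depth / spot_contrast

print(spot_area_fraction(0.01))       # 1% dip, dark spot -> 1% of the disk
print(spot_area_fraction(0.01, 0.7))  # same dip, brighter spot -> larger area
```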


Figure 3: Starspot lifetime versus area for both Sun-like stars (filled circles) and the Sun (black and grey crosses). Sunspots tend to follow the Gnevyshev-Waldmeier (GW) law, while starspots on other stars tended to have shorter lifetimes for a given area. [Namekata et al. 2018]

The lifetimes of the largest starspots — those with areas of about 10,000 millionths of the solar hemisphere (about 30 billion square kilometers) — tended to be about a year. The rate of superflare occurrence also seems to be about once a year, suggesting that the presence of a large starspot is a strong indicator that a superflare will be released.
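For scale, the Gnevyshev–Waldmeier relation from Figure 3 can be written as lifetime T ≈ A/W, with A the spot area and W a decay constant of roughly 10 millionths of the solar hemisphere (MSH) per day (an approximate, commonly quoted sunspot value, not a number taken from today’s paper):

```python
def gw_lifetime_days(area_msh, w_msh_per_day=10.0):
    """Gnevyshev-Waldmeier estimate of a spot's lifetime, T = A / W.

    area_msh: spot area in millionths of the solar hemisphere (MSH).
    w_msh_per_day: decay constant; ~10 MSH/day is a commonly quoted
    sunspot value (illustrative here).
    """
    return area_msh / w_msh_per_day

# A 10,000-MSH starspot would last ~1,000 days under the sunspot relation:
print(gw_lifetime_days(10_000))  # 1000.0
```

That is roughly three times the ~1-year lifetimes measured for the largest starspots, in line with the shorter-than-expected lifetimes described above.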

We still have a long way to go toward understanding magnetic activity, starspots, and superflares on Sun-like stars, but today’s paper gets us one step closer. Hopefully, the wealth of Kepler data will continue to provide discoveries like this for many years to come!

Citation

“Lifetimes and Emergence/Decay Rates of Star Spots on Solar-type Stars Estimated by Kepler Data in Comparison with Those of Sunspots,” Kosuke Namekata et al. 2018 ApJ, in press. https://arxiv.org/abs/1811.10782


Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites written by a guest author; the original can be viewed at astrobites.org.

Title: Planet–Planet Tides in the TRAPPIST-1 System
Author: Jason T. Wright
First Author’s Institution: The Pennsylvania State University
Status: Published in RNAAS

In May 2016, the world was struck with sudden excitement over the discovery of the TRAPPIST-1 system. At just 12 parsecs away, the system is host to several Earth-sized planets inside the habitable zone, which makes said planets prime candidates for harbouring life (see this bite and this one for more). The discovery of this neighbouring exoplanet system ignited the curiosity not only of astronomers and exoplanet scientists, who had a new system to study, but also of the general public, who were excited to follow the search for possible signs of extraterrestrial beings.

However, as we peer more deeply into the dynamics of the system, the once idyllic scene of TRAPPIST-1 is rapidly becoming more complex. The system is unique in that all the planets are very tightly packed, with the outermost planet having an orbital period of only 12 days. While the questions of excess radiation and water loss have been well parsed, today’s paper calls attention to the role of planet–planet tides in the system.

Formalism

The tidal strain ϵ on a body p scales proportionally with the mass of the interacting object q and inversely with the cube of the distance between them. We are all familiar with the role of tides on Earth, which cause natural wonders like the Bay of Fundy. However, the tides on Earth are caused by the gravitational influence of the Moon and the Sun on Earth’s oceans (see Figure 1). These two bodies have the greatest effect because the Sun is the most massive body in our system, and the Moon, while not massive, is very close.


Figure 1: The main cause of Earth’s two tides is the Moon, but the Sun’s gravitational pull cannot be neglected. As seen above in this infographic, when the pull of the Sun and the Moon align (the blue and the orange arrows), the amplitude of tides on Earth increases, which causes a spring tide. When the gravitational forces are at right angles, the amplitude of tides on Earth is minimized, which is known as a neap tide. [Katie Harris]

While all objects in our solar system (and indeed, everywhere in the universe) gravitationally interact, the tidal effects of the other planets in our system on Earth are negligible. That is not true for TRAPPIST-1, where the planets are much closer together. Despite the insignificant mass of the planets compared to that of the host star, their proximity to one another means that they require further attention.

Wright computes the tidal strain of every planet on every other planet in the TRAPPIST-1 system and finds that each planet has at least one neighbour whose tidal strain on it is at least 10% of the strain raised by the star, meaning that planet–planet tides cannot be ignored. In fact, for planet g, the tidal effect of planet f is 2.7 times that of the host star: for that planet, the dominant tide is raised by its neighbour, not its star.
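Because the target planet’s own properties cancel when comparing two perturbers, the comparison reduces to a simple ratio, ε_planet/ε_star = (m_planet/m_star) × (d_star/d_planet)³. The sketch below uses illustrative masses and distances, not the values from Wright’s paper:

```python
def tidal_strain_ratio(m_perturber, d_perturber, m_star, d_star):
    """Ratio of the tidal strain raised on a planet by a neighbouring
    planet to the strain raised by the host star, using the scaling
    strain ~ mass / distance**3 (the target planet's size cancels).

    Masses and distances may be in any consistent units.
    """
    return (m_perturber / m_star) * (d_star / d_perturber) ** 3

# Illustrative numbers: an Earth-mass neighbour passing 0.004 au away,
# around a 0.09-solar-mass star 0.045 au away (hypothetical values).
M_EARTH_IN_MSUN = 3.0e-6
print(tidal_strain_ratio(M_EARTH_IN_MSUN, 0.004, 0.09, 0.045))
```

The cubed distance term is what lets a tiny planet compete with its star: halving the separation to a neighbour boosts its tidal strain eightfold.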

What Does this Mean?

If planet–planet tides are strong forces on the TRAPPIST-1 planets, it could have significant consequences for any hypothetical life on those planets. Lingam & Loeb (2018) suggest that stronger tides could have a positive influence on abiogenesis, biological rhythms, nutrient upwelling, and photosynthesis. So, while the inclusion of planet–planet tides might initially seem a positive thing in terms of the Search for Extraterrestrial Intelligence (SETI) and finding alternative Earths, many important factors for determining the effect of tides, such as the spin states of the planets, are still under investigation — and they may be a cause for pause before planning an interstellar trip.

About the author, Katie Harris:

This astrobites guest post was written by Katie Harris, a Master of Space Studies student at the International Space University in France. She completed her undergraduate degree in Astrophysics at the University of Toronto, where she did research on infrared spectroscopy instrumentation and Bayesian statistics. She is interested in all things space and is currently working towards a career in space medicine and crossover technology development for medicine and astronomy.


Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites that looks back at a paper from 2006; the original can be viewed at astrobites.org.

Title: Predicting the Starquakes in PSR J0537-6910
Authors: John Middleditch, Francis E. Marshall, Q. Daniel Wang, Eric V. Gotthelf, William Zhang
First Author’s Institution: Los Alamos National Laboratory
Status: Published in ApJ


Artist’s illustration of a pulsar, a fast-spinning, magnetised neutron star. [NASA]

Pulsars (rotating, magnetised neutron stars) emit radiation that sweeps periodically over the Earth (like the beam of a lighthouse sweeping across the ocean). We detect this radiation as a sequence of pulses, with the frequency of the pulse corresponding to the frequency of rotation of the star. Pulsars will typically spin down over their lifetime due to electromagnetic braking, but this is a fairly slow process. Occasionally, in some pulsars, we will detect a sudden increase in the frequency of the pulses. This is called a pulsar glitch. Essentially, the mismatch in the rotation of the fluid inside the star and the solid crust on the outside of the star causes a catastrophic event that we see as an increase in the frequency of the pulses.

The question that the paper we’re exploring today — originally published in 2006 — seeks to answer is: can you predict the next glitch in a pulsar? In general, this is a challenging task, with different pulsars exhibiting different glitching behaviours that need to be captured in your model. However, for one particular pulsar, PSR J0537-6910, this can be accomplished fairly straightforwardly, due to the strong correlation between the size of each glitch and the waiting time until the next glitch. The authors of today’s paper exploit this correlation to develop a method to predict the next starquake on PSR J0537-6910.

What Is a Pulsar Glitch?

Glitches are thought to be caused by superfluidity inside a neutron star. When a substance cools down to a temperature below a critical temperature Tc, it forms a superfluid state, i.e., a state that flows without viscosity. But neutron stars are much hotter than any substances we find on Earth (they run around 10^6 K). So how can a neutron star be cool enough to contain a superfluid? The matter inside a neutron star is extremely dense, and it has very different properties from terrestrial matter. In particular, the matter inside a neutron star has a high critical temperature — Tc ~ 10^9 K — and the neutron fluid inside the star can therefore form a superfluid even at high temperatures.

A cartoon of the angular velocity of the crust of a pulsar vs. time during a glitch. The pulsar glitch is characterised by (1) steady spin-down of the star, (2) a step-like jump in frequency and (3) an ensuing gradual relaxation back to the original spin-down rate. [Adapted from van Eysden 2011]

The pulsar spins down due to electromagnetic processes. The pulsar is a rapidly rotating magnetised body — if a rotating magnetic dipole is inclined at some angle from the rotation axis, it emits magnetic dipole radiation at the rotation frequency. The emission of this electromagnetic radiation leads to lost rotational energy. Therefore, the star spins down as a consequence, and we call this magnetic braking.
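In standard textbook notation (the generic vacuum-dipole result, not an expression taken from today’s paper), a dipole with magnetic moment $\mu$, inclined at angle $\alpha$ to the rotation axis of a star spinning at angular frequency $\Omega$, radiates energy at the rate

$$\dot{E} = -\frac{2}{3}\,\frac{\mu^2 \Omega^4 \sin^2\alpha}{c^3}.$$

Since the rotational energy is $E = \tfrac{1}{2}I\Omega^2$ for moment of inertia $I$, this implies $I\dot{\Omega} \propto -\Omega^3$: the braking torque is steepest for the fastest rotators, and the star gradually spins down.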

The crust of the neutron star spins down continuously because the magnetic field lines are locked into the crust. However, the superfluid inside the star is likely to be at least partially decoupled from the spin-down of the crust. Therefore, an angular velocity lag builds up between the crust and the superfluid as the superfluid continues to spin at the same rate for a period of time, uninhibited by the magnetic braking. Eventually, the lag builds up to a critical value. At this stage, there is an angular momentum transfer to the crust from the superfluid which causes a glitch. Because glitches are believed to be intimately connected to the behaviour of the interior superfluid, astronomers believe pulsar glitches offer a rare window into the processes occurring inside the star.

How Do Glitches Occur?

It is still not known for certain exactly why and how glitches occur in neutron stars, but a number of possible mechanisms have been proposed. For example, vortex avalanches are a possible mechanism for glitches. In the superfluid, there are many vortices (i.e. tiny whirlpools) induced by the rotation of the star. Vortices are “trapped” or “pinned” to certain locations in the crust. This just means they are fixed in that location until there is enough force to unpin them. When enough lag builds up between the superfluid and the crust, the force is sufficient to unpin them. As they unpin, they transfer angular momentum to the crust and cause a glitch. Another possible mechanism is starquakes, in which the crust of the neutron star cracks and the matter inside the star rearranges.

Glitching pulsars can be thought of as belonging to two different classes: Crab-like and Vela-like. The Vela pulsar typically has large glitches which occur fairly periodically, while the Crab pulsar has a power law distribution of glitch sizes. Therefore, it is difficult to develop a model that captures the behaviour of these two classes simultaneously. In this paper, the authors focus on a single pulsar (PSR J0537-6910) and use its unique properties to predict when it will next glitch.

Reliable Glitcher: PSR J0537-6910

PSR J0537-6910 is a 62-Hz pulsar in the Large Magellanic Cloud. The authors report on seven years of observation of this pulsar, containing 23 glitches. PSR J0537-6910 is unique among glitching pulsars. Firstly, it is the fastest-spinning young pulsar and one of the most actively glitching pulsars we know of. Secondly, its glitching properties are particularly favourable to glitch prediction due to the very strong correlation between the waiting time from one glitch to the next and the amplitude of the first glitch, shown in the figure below. The authors suggest the predictable behaviour of this pulsar’s glitches is associated with the angular velocity lag build-up causing a “cracking” in the crust as glitches occur, with the smaller glitches that precede a large glitch corresponding to more localised cracks.

Figure 3: Waiting time vs. glitch size for PSR J0537-6910. [Middleditch et al. 2006]

Impressively, we’re able to predict the waiting time for the next glitch of PSR J0537-6910 to within a few days. Predictions of this accuracy have not been achieved with any other pulsar.
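Because the waiting-time–amplitude correlation is close to linear, the prediction scheme can be sketched with an ordinary least-squares fit. The glitch amplitudes and waiting times below are hypothetical placeholder numbers for illustration, not the measured values from the paper.

```python
# Sketch of correlation-based glitch prediction: fit waiting time until the
# next glitch against the size of the preceding glitch, then extrapolate.
# All data points here are hypothetical placeholders, NOT measured values.
def fit_line(xs, ys):
    """Ordinary least-squares fit y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

sizes = [50, 120, 200, 400, 650]   # hypothetical glitch amplitudes (arbitrary units)
waits = [40, 90, 140, 260, 420]    # hypothetical waiting times to next glitch (days)

slope, intercept = fit_line(sizes, waits)
predicted_wait = slope * 300 + intercept   # days until next glitch after a size-300 glitch
```

In practice the scatter about the fitted relation sets the few-day uncertainty quoted above.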

About the author, Lisa Drummond:

I am an astrophysics PhD student with interests in compact objects and gravitational waves. I studied neutron star interiors for my Masters thesis at the University of Melbourne, Australia and now I am doing my PhD at MIT.

planet formation

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: In situ formation of icy moons of Uranus and Neptune
Authors: Judit Szulágyi, Marco Cilibrasi and Lucio Mayer
First Author’s Institution: University of Zurich, Switzerland
Status: Accepted to ApJL

With over 100 moons between them, gas giants Saturn and Jupiter host most of our solar system’s satellites. Moons are thought to form in the gaseous circumplanetary disks (CPDs) that surround giant planets during their later stages of formation; the satellites develop from the disks in much the same way as planets themselves are formed.

But what about smaller planets like Neptune and Uranus? Today’s bite delves into the world of radiative hydrodynamical simulations to see whether CPDs — and thus moons — could also form around our ice giants.

The Real Moons

Given that Uranus hosts five major moons in similar, circular orbits, this ice giant’s satellites likely formed in a circumplanetary disk. A debris disk, like the one that may have formed our own Moon, is unlikely; debris-disk satellites would have very little water, which is not what we observe for Uranus’s moons.

Figure 1: Triton, as seen by the Voyager 2 spacecraft. [NASA/Jet Propulsion Lab/U.S. Geological Survey]

Neptune, however, is home to only one major moon, Triton, which has an unusual composition and a retrograde orbit. Triton is most likely a captured Kuiper Belt object, and its capture is thought to have severely disrupted the dynamics of the pre-existing Neptunian system. In fact, previous work suggests that without the existence of satellites around Neptune, it wouldn’t have been possible for the system to capture an object like Triton.

Let’s Form Some Disks

Forming a moon isn’t an easy job for a planet. Previous studies have revealed that there are two key planetary properties that determine how likely it is for a gaseous CPD to form around a planet — mass and temperature.

  • Mass: Terrestrial planets like Venus are too small for CPDs to form; any satellites that exist around them are usually captured (as in the case of Mars) or the result of a planet–planet impact (as in the case of Earth).
  • Temperature: CPDs are more likely to form if the planet is cooler, BUT a cooler planet radiates its formation heat faster and has less time to form a disk.

The authors deployed hydrodynamical simulations to recreate the later stages of planet formation for Uranus and Neptune. This involved setting the planets as point masses in the centre of the simulation surrounded by a gas disk, and then letting simulated nature (heat transfer, ideal gas laws, gravity) take over. For more details regarding hydrodynamical simulations, see this post on simulating the entire universe (!!) and this one on gas accretion.

Figure 2: Zooming in to the circumplanetary disk around Uranus (left column) and Neptune (right column). The different rows indicate the planetary surface temperatures: 100 K (top), 500 K (middle) and 1,000 K (bottom). [Szulágyi et al. 2018]

From the gas density plots in Figure 2, in which yellow/white indicates the densest region, we see that once the simulated Neptune and Uranus cooled to below 500 K a circumplanetary disk was able to form. This conclusion is drawn visually from the disk-like structure that has formed at 100 K (top row of Figure 2); this structure is not visible at 500 K and 1,000 K (middle and bottom row of Figure 2). It makes sense that both planets require a similar temperature as they are of almost equal mass. Next, the authors created a synthetic population of satellite-forming seeds within the disk to see if these protosatellites will turn into fully fledged moons by accreting matter.

Simulated Moons vs. Reality

Figure 3: Formation timescale of moons around Uranus (left) with the distribution of their masses on the right. The red vertical lines represent Uranus’s 5 major moons. [Szulágyi et al. 2018]

In the case of the 100-K CPD around Uranus (Figure 2, top left panel), the majority of the synthetic moon population formed over a 500,000-year period, at locations in the disk where the temperature was below the freezing point of water. This means that many of these moons would be icy — just like the actual moon population observed around Uranus. The masses of the moons spanned several orders of magnitude — a range that includes the masses of the satellites we observe today (red lines in Figure 3). Around 5% of the authors’ simulations yielded 4–5 moons with 0.5–2 times the mass of the current Uranian satellites.

Figure 4: Like Figure 3, above, the formation timescale of moons around Neptune (left) with the distribution of their masses on the right. The red vertical line represents the moon Triton. [Szulágyi et al. 2018]

Similar trends were also observed for Neptune: once again, the entire population of moons formed where temperatures were below the freezing point of water, meaning Neptune is also more likely to form icy satellites. Generally, the simulated Neptune struggled to make moons as massive as Triton. This isn’t worrying, however, since Triton is likely a captured Kuiper Belt object.

So, overall, it is possible to form satellites around ice giants! This is an exciting result for exomoon lovers because Neptune-mass exoplanets are the most common mass category of exoplanet we’ve found so far. Furthermore, icy moons are the main targets in the search for extraterrestrial life in our own solar system; ice-giant satellites elsewhere in the universe could be similarly promising targets in our search for habitable worlds.

About the author, Amber Hornsby:

Third-year postgraduate researcher based in the Astronomy Instrumentation Group at Cardiff University. Currently, I am working on detectors for future observations of the Cosmic Microwave Background. Other interests include coffee, Star Trek and pizza.

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this guest post from astrobites; the original can be viewed at astrobites.org.

Title: An Extreme Protocluster of Luminous Dusty Starbursts in the Early Universe
Authors: I. Oteo et al.
First Author’s Institution: University of Edinburgh, UK
Status: Published in ApJ

The biggest structures known in the universe are galaxy clusters: they are made of hundreds or even thousands of galaxies, lots of gas, and a huge amount of dark matter. But a long time ago, these giants were babies. Right after the Big Bang, when no galaxies, stars or even molecules had yet formed, the universe was extremely homogeneous, with density fluctuations of relative amplitude of only ~10⁻⁵. As the cosmos expanded, the regions that started out slightly denser grew denser still, because mass attracts mass. Then, clumps of gas turned into stars. Due to their mutual gravitational attraction, these stars gradually moved closer to each other, growing into galaxies that, again through gravity, congregated into today’s galaxy clusters (for a deeper understanding, read this).

Since looking at distant astronomical objects means looking into the past, we may be able to see the progenitors of galaxy clusters, known as protoclusters. They should be far from us (at high redshifts, z), composed of dozens of galaxies forming lots of stars, and they should therefore contain a huge amount of dust and gas (the stars’ ingredients). From our perspective on Earth, these protoclusters should appear as distant aggregations of galaxies that are very bright and very red, visible particularly at submillimeter/millimeter wavelengths (the wavelengths at which we perceive emission from the gas and dust of distant galaxies, heated by those galaxies’ stars). Observing these systems may teach us how the universe and its structures evolve through cosmic time. In today’s paper, the authors report the discovery of a protocluster core with extreme characteristics: it is super dense, super massive, and super old.

Multiple Observations

In an attempt to find ideal protoclusters, the authors looked for sources in the H-ATLAS survey. They chose the reddest protocluster-like system and baptized it the Dusty Red Core (DRC). To find out more about DRC, they made many different kinds of observations, summarized in Figure 1.

Figure 1: From left to right: wide-field view of DRC (the A region) from APEX; DRC galaxies observed with ultra-deep ALMA continuum; high-resolution ALMA imaging shows details of DRC-1, composed of three star-forming clumps. [Oteo et al. 2018]

Getting the Information

All this data leads to several new findings about DRC. Firstly, the continuum and imaging observations show that DRC is actually composed of 11 bright, dusty galaxies instead of a single object as first thought. Its brightest component, DRC-1, is formed by three bright clumps.

The easiest way to find the protocluster’s redshift is by using the lines emitted by the molecules and atoms of the gas filling the galaxies, such as ¹²CO, H₂O and C I. Ten of the 11 galaxies in DRC are at the same redshift of z = 4.002 (the final one didn’t have enough lines for a redshift measurement). Since the expansion of the universe means that more distant objects recede faster and have higher redshift, this redshift can be converted to a luminosity distance of ~117 billion light-years, meaning we see these galaxies as they were only 1.51 billion years after the Big Bang. The authors also find that the components are concentrated into an area of 0.85 × 1.0 million light-years. This may seem like an enormous area, but by astronomical standards it qualifies as an extremely overdense region. Knowing that, it is safe to say that at least ten objects of DRC are members of a protocluster core in the initial evolutionary stages of the universe.
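The quoted cosmic age can be sanity-checked with a short numerical integration of the standard flat ΛCDM age integral. The cosmological parameters below are assumed Planck-like values (H0 = 67.7 km/s/Mpc, Ωm = 0.31), not necessarily those used by the authors.

```python
import math

# Sanity check of the quoted cosmic age at z = 4.002 in flat LambdaCDM.
# Parameters are assumed Planck-like values, not taken from the paper.
H0 = 67.7                 # Hubble constant, km/s/Mpc
OM, OL = 0.31, 0.69       # matter and dark-energy density parameters
H0_INV_GYR = 977.8 / H0   # 1/H0 converted to Gyr

def age_at(z, zmax=1e4, n=20000):
    """Age of the universe at redshift z:
    t(z) = (1/H0) * integral over u = ln(1+z') of du / E(z'),
    with E(z') = sqrt(OM*(1+z')**3 + OL), evaluated by the midpoint rule."""
    lo, hi = math.log(1.0 + z), math.log(1.0 + zmax)
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        zp = math.exp(lo + (i + 0.5) * h) - 1.0
        total += h / math.sqrt(OM * (1.0 + zp) ** 3 + OL)
    return H0_INV_GYR * total

age = age_at(4.002)   # ~1.5 Gyr, consistent with the value quoted above
```

The small difference from 1.51 Gyr comes from the choice of cosmological parameters.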

The continuum observations are also useful for calculating how much gas and dust mass is converted into stars per year (the star formation rate, or SFR). The lower limit obtained is 6,500 solar masses per year, the highest star formation rate ever found for such a distant protocluster. Furthermore, the protocluster’s molecular gas mass and total mass were estimated. The gas mass, estimated using the C I emission lines, is found to be at least 6.6 × 10¹¹ solar masses. The total mass of the protocluster was calculated in three different ways, with an outcome of as much as 4.4 × 10¹³ solar masses (for comparison, the estimated mass of the Local Group is ~2 × 10¹² solar masses). Based on these results and cosmological simulations, the authors concluded that DRC may evolve into a massive galaxy cluster in today’s universe, like the Coma Cluster.

Protoclusters like DRC are key to understanding a remote part of the universe’s history. Moreover, DRC may help us to infer information about the unknown part of the universe — which is huge, since the dark sector corresponds to 95% of the cosmos. The protocluster analyzed in this paper is bright and massive, and it was measured with accuracy by modern telescopes, despite its enormous distance from us. Those measurements may be used as parameters to test different cosmological theories, thus helping us to understand the universe’s big picture.

About the author, Natalia Del Coco:

Today’s guest post was written by Natalia Del Coco, a masters student at the University of São Paulo, Brazil. In her research, she looks for correlations between the physical properties of clusters of galaxies and the cosmic web around them. Besides being an astronomer, Natalia is also a ballerina, a shower singer, and a backpacker.

primordial black hole

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Observing the influence of growing black holes on the pre-reionization IGM
Authors: Evgenii Vasiliev, Shiv Sethi, & Yuri Shchekinov
First Author’s Institution: Southern Federal University, Russia
Status: Published in ApJ

Cosmologically important phenomena are typically discussed on scales of gigaparsecs (Gpc); to give you some idea of the sizes involved, one Gpc could fit approximately 33,000 Milky Way galaxies end to end. With that staggering scale in mind, today we’ll gain an understanding of how parsec-scale astrophysical events can have far-reaching cosmological effects.

Cosmic Dawn and Reionization

The cosmological period prior to the reionization of the universe’s hydrogen is typically described as the cosmic dawn. This stage in the history of the universe is marked by the formation of the first stars and galaxies. But it doesn’t end there — connecting these large structures throughout the universe is the intergalactic medium (IGM). The IGM during the cosmic dawn consists of mostly neutral hydrogen, and compared to galaxies, it is much less dense (10⁻²⁷ kg/m³, compared to the average density of our Milky Way of ~10⁻¹⁹ kg/m³). The ultraviolet (UV) radiation from these early stars and galaxies is what most astronomers and cosmologists believe led to reionization. Put simply, this ionizing radiation extended symmetrically from these sources, and over time the ionized regions of IGM began to overlap, leading to complete reionization of the universe. While this is our current best guess, cosmologists still wonder: what roles do other structures play during this period? One type of galaxy of interest is that hosting an active galactic nucleus (AGN): a very dense core with a supermassive black hole (SMBH) at its center accreting matter. AGN are extremely luminous and can produce a lot of X-ray and UV emission. What we’ll be exploring today is how these SMBHs influence the surrounding regions.

Early Black Holes

The very first black holes were most likely few and far between, and they should have had masses several orders of magnitude lower than what can be observed today (though in light of new detections we may need to question this). This follows from the fact that for black holes to exist, you need some form of already dense matter to collapse under extreme gravitational conditions. These conditions, however, weren’t yet ripe during the cosmic dawn, as matter was just starting to clump together to form behemoth objects such as Population III stars. During this time, AGN were fueled by accretion onto the type of early SMBHs today’s astrobite investigates.

SMBHs at the centers of AGN affect the IGM by reionizing and heating the gas: as an SMBH accretes local matter, hard non-thermal radiation is emitted. So how does the accretion rate of these early black holes influence the surrounding regions? We should expect some relationship between the rate of matter accretion and the distance of influence. Directly following from this, we should also be able to see how observable such an object might be with a radio interferometer. Luckily, we have the authors of today’s paper to help answer this for us.

The authors model the accretion as exponential growth that depends on the initial black-hole mass, MBH,t=0, the radiative efficiency ε, which tells us how easily accreted matter is converted to radiation, and the Eddington timescale of TE = 0.45 Gyr (read more here). This is then related to the resulting ionizing luminosity, which they assume to follow a power law, eventually leading to the ionizing radiation flux.
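The ingredients just listed match the standard Eddington-limited exponential growth law; the paper's exact expression isn't reproduced in this summary, so the form below is an assumption consistent with those quantities, evaluated for the 300-solar-mass seed used in the models.

```python
import math

# Standard Eddington-limited growth law (an assumed form consistent with the
# quantities described above; the paper's exact expression isn't reproduced):
#   M(t) = M0 * exp(((1 - eps) / eps) * t / TE)
TE = 0.45e9   # Eddington timescale, years

def bh_mass(t_yr, m0=300.0, eps=0.1):
    """Black-hole mass (solar masses) after t_yr years of accretion."""
    return m0 * math.exp((1.0 - eps) / eps * t_yr / TE)

m_eps10 = bh_mass(0.2e9, eps=0.1)    # growth after 0.2 Gyr with eps = 0.1
m_eps05 = bh_mass(0.2e9, eps=0.05)   # lower efficiency -> faster growth
```

Note how strongly the efficiency matters: a less efficient radiator swallows more mass per unit luminosity, which is why the paper's two panels (ε = 0.1 and ε = 0.05) differ.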

Figure 1: The relationship between brightness temperature and the distance of ‘influence’ of ionizing radiation, plotted at several redshifts. The upper plot is for ε = 0.1 and the lower plot for ε = 0.05. [Vasiliev et al. 2018]

These growing black holes are placed in a host halo of neutral hydrogen where the effects of the ionizing/heating front radius can be related to an observable differential brightness temperature, ΔTb, which measures the difference between the background Cosmic Microwave Background temperature and the neutral hydrogen 21cm line. Their results for a black hole with an initial mass of 300 solar masses starting at redshift of z ~ 20 with the radiative efficiencies of ε = 0.1 (upper) and ε = 0.05 (lower) can be seen in Figure 1. They also compare a non-growing black hole (dashed) as a point of reference.

We can see from Figure 1 that growing black holes exert influence out to larger distances than non-accreting ones (dashed). The authors show that accreting black holes in the early universe can influence scales from 10 kpc to 1 Mpc, a very large dynamic range.

These distances convert to roughly the angular scales that radio interferometers such as LOFAR might be able to probe, which is certainly an exciting prospect. This would be a huge achievement: being able to probe down to kpc scales and link them to phenomena seen at some of the largest scales could provide much-needed information on how the earliest black holes formed.

About the author, Joshua Kerrigan:

I’m a 5th year PhD student at Brown University studying the early universe through the 21cm neutral hydrogen emission. I do this by using radio interferometer arrays such as the Precision Array for Probing the Epoch of Reionization (PAPER) and the Hydrogen Epoch of Reionization Array (HERA).

BNS ejecta

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Binary Neutron Star Mergers: Mass Ejection, Electromagnetic Counterparts and Nucleosynthesis
Authors: David Radice, Albino Perego, Kenta Hotokezaka, Steven A. Fromm, Sebastiano Bernuzzi, Luke F. Roberts
First Author’s Institution: Princeton University
Status: Submitted to ApJ

Neutron star mergers are absolutely fascinating. These events are not just sources of gravitational waves but of electromagnetic radiation all across the spectrum — and of neutrinos as well. If you missed the amazing multimessenger observations last year that gave us a peek into what binary neutron star (BNS) systems are up to, please check out this bite about GW170817! The observations had major implications for many fundamental questions in astrophysics. The gravitational-wave signal from the merger was detected along with the electromagnetic radiation produced. As a result, we were able to confirm that neutron-star mergers are sites where heavy elements (those beyond iron) can be made via the r-process.

While all of this has undoubtedly been extremely cool (and we’re holding our collective breath for more data), there’s a lot of work that remains to be done. We need accurate predictions of the quantity and composition of material ejected in mergers in order to fully understand the origin of the heavy elements, and to say whether BNS mergers are the only r-process site. To investigate such questions, we require theoretical models that include all the relevant physics. Today’s paper presents the largest set of NS merger simulations with realistic microphysics to date. By realistic microphysics, we mean that the simulations also take into account what the atoms and subatomic particles are doing. This is done by using nuclear-theory-based descriptions of the matter in neutron stars, and by including composition and energy changes due to neutrinos (albeit in an approximate way).

Simulating Neutron-Star Mergers

Modeling BNS mergers is a complex multi-dimensional problem. We need to simulate the dynamics in full general relativity, along with the appropriate microphysics, magnetic fields and neutrino treatment. Remarkable progress has been made, particularly in the last decade, since the first purely hydrodynamical merger simulations were carried out. Still, the problem remains extremely computationally expensive and simulation efforts have traditionally focused either on carrying out general relativistic simulations while sacrificing microphysics, or on incorporating advanced microphysics with approximate treatments of gravity. If you care about the merger dynamics and the dynamical ejecta, i.e., material ejected close to the time of merger due to tidal interaction and shocks, you need fully general relativistic simulations, like the ones presented in today’s paper.

The authors carry out 59 high-resolution numerical relativity simulations, using binaries with different total masses and mass ratios. They also use different descriptions of the high-density matter in neutron stars. Neutrino losses are included in all cases, and some simulations include neutrino reabsorption as well. A few simulations even include viscosity. The authors systematically study the mass ejection, associated electromagnetic signals, and the nucleosynthesis from BNS mergers.

Mass Ejection

Fig 1. Electron fraction for an example simulation. The neutron stars are 1.35 Msun each and neutrino reabsorption has been included. The bulk of the ejecta lies within a ~60 degree angle of the orbital plane. [Radice et al. 2018]

Fig 1 shows the electron fraction of the material in one of the simulations over time. First, material is ejected due to tidal interactions, close to the orbital plane. Next, more isotropic, shock-heated material is ejected. This component has higher velocity and quickly overtakes the tidal component, as seen in the figure. The two components interact and the tidal component gets reprocessed to slightly higher electron fractions.

The authors also find a new outflow mechanism, aided by viscosity, that operates in unequal mass binaries. This ejecta component, called “viscous-dynamical ejecta”, is discussed in detail in a companion paper.

Using their results, the authors fit empirical formulas that predict the mass and velocity of ejecta from BNS mergers. Even more material can become unbound from the remnant object on longer timescales (“secular ejecta”), but this is not studied here due to the high computational costs of running the simulation for that long.

Nucleosynthesis

The authors study in detail how the r-process nucleosynthesis depends on the binary properties and neutrino treatment. Sample nucleosynthesis yields are presented in Fig 2. You’ll notice that the second and third r-process peaks are robustly produced while the first peak shows more variation. In fact, the first peak is quite sensitive to the neutrino treatment as well as the binary mass ratio.

Fig 2. Electron fraction (left) and nucleosynthesis yields (right) of the dynamical ejecta. “A” refers to the mass number of the nucleus. The different colored lines represent binaries with different mass ratios. The green dots show solar abundances. [Radice et al. 2018]

Electromagnetic Signatures

The radioactive decay of the freshly synthesized r-process nuclei powers electromagnetic emission referred to as a “kilonova.” Other electromagnetic signals can also be produced by the different ejecta components.

The authors compute kilonova curves for all their models. They find that binaries that form black holes immediately after merger do not have massive accretion disks and produce faint and fast kilonovae. When the remnants are long-lived neutron stars, more massive disks are formed and the kilonovae are brighter and evolve on longer timescales. Example kilonova curves are shown in Fig 3.

Fig 3. Kilonova curves in three bands for three different models: binary with prompt BH formation (left), binary forming a hypermassive neutron star (middle), binary forming a long-lived supramassive NS (right). Solid and dashed lines correspond to the viewing angle. [Radice et al. 2018]

The authors also compute the synchrotron radio signal due to interaction of the ejecta with the interstellar medium. Example radio lightcurves are shown in Fig 4 along with the afterglow in GW170817. A small fraction of the ejecta is accelerated by shocks shortly after merger to velocities >0.6c, producing bright radio flares. The flares can probe the strength with which the neutron stars bounce after merger and in turn probe matter at extreme densities. Some of the models predict that the synchrotron signal from GW170817 will rebrighten in months to years after the merger!

Fig 4. Radio light curves of the dynamical ejecta of one model at 3 GHz, compared with GW170817. The ISM number density n is a parameter of the model used for generating the curves. [Radice et al. 2018]

Looking Ahead

Systematic investigations are key to understanding complex events such as neutron-star mergers. Improved theoretical modeling, with a push towards incorporating all the relevant physics in merger models, will not only help us understand what we saw last year but also set us up for the next set of observations!

About the author, Sanjana Curtis:

I’m a grad student at North Carolina State University. I’m interested in extreme astrophysical events like core-collapse supernovae and compact object mergers.

exoplanet transit

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: A More Informative Map: Inverting Thermal Orbital Phase and Eclipse Lightcurves of Exoplanets
Authors: Emily Rauscher, Veenu Suri, Nicolas B. Cowan
First Author’s Institution: University of Michigan, Ann Arbor
Status: Submitted to AAS journals

During a recent visit to the Dutch National Maritime Museum, I came across the 1648 world map by cartographer John Blaeu. In addition to being the first world map to adhere to the Copernican worldview, Blaeu’s map is an extremely detailed document of the knowledge of the world at the time, sourced solely from information gathered by explorers through the ages on their travels to the far reaches of the world. Its errors and its bias toward a western view of the world are quite amusing, but there is still a high level of detail and surprising accuracy in many places. Through these details, it is clear that cartographers must meticulously piece together information from a large number of sources of varying reliability. Creating maps of exoplanets, a practice beginning to be known as exocartography, is an equally challenging observational and mathematical problem, and it is the one addressed by today’s paper.

Figure 1: Schematic of transits and occultations (also referred to as secondary eclipse) in an exoplanetary system. [Winn 2010]

Creating maps of exoplanets can give an idea of the presence of climatic features in the planetary atmosphere or on its surface. When compared to predictions from general circulation models, these features can give us insight into the physical processes at play in the planets’ atmospheres. What observations do we need to create these maps? Scientists continuously monitor the brightness of a planetary system as different regions of the planet come into view; a schematic of a planet’s transits and occultations as it orbits is shown in Figure 1. The disk-integrated flux of the planet (the total flux from the planet’s surface, visible as a circular disk), observed as a function of time, can then be converted into flux as a function of spatial location on the planet (more easily so for a tidally locked planet, whose regions come into view predictably as a function of time), rendering a 2D brightness map of the planet in the chosen wavelength band.
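This time-to-longitude inversion can be illustrated with a deliberately simple toy model (my own construction, not the paper's): a 1D longitudinal brightness map on a tidally locked planet, observed through a max(0, cos) visibility kernel as the orbital phase sweeps around. The hotspot longitude (0.5 rad) and map shape are hypothetical.

```python
import math

# Toy phase curve for a tidally locked planet (illustrative only):
# a 1D longitudinal brightness map observed through the visibility
# kernel max(0, cos(longitude - phase)) at each orbital phase.
N = 360
phis = [2.0 * math.pi * k / N for k in range(N)]

def bright(phi):
    """Hypothetical map: uniform base plus a Gaussian hotspot at 0.5 rad."""
    d = ((phi - 0.5 + math.pi) % (2.0 * math.pi)) - math.pi   # wrapped offset
    return 1.0 + 2.0 * math.exp(-(d / 0.3) ** 2)

def flux(alpha):
    """Disk-integrated flux at orbital phase alpha (sum over visible side)."""
    total = 0.0
    for phi in phis:
        vis = math.cos(phi - alpha)
        if vis > 0.0:
            total += bright(phi) * vis
    return total * (2.0 * math.pi / N)

curve = [flux(a) for a in phis]
peak_phase = phis[curve.index(max(curve))]   # peaks when the hotspot faces us
```

Mapping runs this logic in reverse: given the observed curve, infer the map, which is exactly where the choice of basis maps discussed below becomes critical.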

However, this technique can be sensitive to the choice of map structure and uncertainties in the orbital parameters of the system. As a result, it has only been reliably attempted for a few well-studied hot Jupiters — for example, HD189733 — and hot super-Earths to date. Today’s paper proposes a method that maximizes the information in the flux maps retrieved from phase curves and secondary eclipse observations, while also accounting for the effect of orbital parameter uncertainties.

A Closer Look at Exocartography

It may still be a few decades before we successfully send a spacecraft to get an up-close image of the nearest exoplanet. Until then, astronomers will mostly be busy extracting brightness maps of exoplanets from high-precision photometric observations. In this context, phase-curve observations alone (i.e. not including the secondary eclipse) are mostly sensitive to the longitudinal distribution of brightness, while most of the latitudinal information comes from the secondary eclipse. As for the wavelength band in which these observations are taken, infrared phase curves like those obtained using Spitzer are more sensitive to the thermal emission from the planet, while shorter (optical) wavelengths usually probe the reflected light, which can be used to produce an albedo map of the planet.

Once you have the phase-resolved or eclipse observations for a particular wavelength band, the first step in converting them to spatial flux maps of the planet conventionally involves assuming a 2D brightness map structure and fitting the observations with a disk-integrated flux-curve model (a function of the orbital phase) that has a one-to-one correspondence with the chosen map. But for a planet that could, in fact, have a very general distribution of spatial brightness, what’s the functional form for the 2D map you should choose to start with?

The best mathematical way to represent a signal with a general functional form is to decompose it into a linear combination of Fourier basis functions, or basis maps. This choice is ideal because the Fourier basis functions are orthogonal, which makes them informationally independent of each other and prevents any scrambling of information when we use a linear combination of basis functions to represent a signal. In the case of brightness maps, which are essentially functions defined on the surface of a sphere, spherical harmonics play the role of the orthogonal basis functions, making them a good choice to represent the structure of the planet’s brightness map. Herein lies the key problem addressed by today’s paper: the light curves corresponding to a set of orthogonal basis maps (like spherical harmonics) may not be orthogonal themselves, especially in the case of eclipse-only observations!
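A 1D toy version makes this loss of orthogonality concrete (my illustration, not from the paper): two Fourier modes are orthogonal over a full period, but restricting the "observation" to a sub-interval, the analogue of eclipse-only coverage, destroys that orthogonality.

```python
import math

# Toy 1D analogue (not from the paper): cos(x) and cos(2x) are orthogonal
# over a full period, but not over a restricted "eclipse-only" window.
N = 10000
xs = [2.0 * math.pi * k / N for k in range(N)]

def inner(f, g, window):
    """Discrete inner product of f and g over points where window(x) is True."""
    return sum(f(x) * g(x) for x in xs if window(x)) * (2.0 * math.pi / N)

def f1(x):
    return math.cos(x)

def f2(x):
    return math.cos(2.0 * x)

full    = inner(f1, f2, lambda x: True)              # ~0: orthogonal basis
partial = inner(f1, f2, lambda x: x < math.pi / 2.0) # nonzero: orthogonality lost
```

The partial-window inner product converges to 1/3, so fitting coefficients of the two modes against windowed data would mix their information, which is the scrambling the paper sets out to avoid.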

Orthogonalize!

The authors of today’s paper propose a principal component analysis (PCA) approach to orthogonalize the light curves (corresponding to spherical-harmonic maps for a chosen orbital realization), yielding a set of eigen-lightcurves (or eigencurves) that can then be used to construct flux-curve models to fit the thermal phase curve, eclipse, or combined observations. Finding eigencurves in this context means obtaining the eigenvectors of a matrix whose columns are the initial set of light curves, which is done by a simple linear transform of these light curves using the coefficients obtained from PCA. PCA essentially determines the axes of variance and covariance in the input matrix of initial light curves. In the context of constructing the flux-curve model, these axes are like independent directions along which a linear combination of eigencurves acts to give the final model for the signal. Determining the eigencurves means determining these independent directions.
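For just two light curves the PCA step can be written out by hand: diagonalize the 2×2 Gram matrix of the curves and use its eigenvectors as mixing coefficients. This is a deliberately minimal sketch with synthetic curves; the paper works with a much larger spherical-harmonic basis and real orbital geometry.

```python
import math

# Minimal two-curve PCA sketch (synthetic data, not the paper's basis):
# light curves that are mutually correlated become orthogonal eigencurves
# when mixed with the eigenvectors of their Gram matrix.
N = 1000
xs = [2.0 * math.pi * k / N for k in range(N)]
c1 = [math.sin(x) for x in xs]                             # "light curve" 1
c2 = [math.sin(x) + 0.5 * math.sin(2.0 * x) for x in xs]   # correlated curve 2

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

# 2x2 Gram matrix [[a, b], [b, d]] and its eigenpairs in closed form
a, b, d = dot(c1, c1), dot(c1, c2), dot(c2, c2)
disc = math.sqrt((a - d) ** 2 + 4.0 * b * b)
lam1 = 0.5 * ((a + d) + disc)   # largest eigenvalue: most "information"
lam2 = 0.5 * ((a + d) - disc)
v1 = (b, lam1 - a)              # eigenvector for lam1 (unnormalized)
v2 = (b, lam2 - a)              # eigenvector for lam2

# Eigencurves: linear combinations of the originals with eigenvector weights
e1 = [v1[0] * p + v1[1] * q for p, q in zip(c1, c2)]
e2 = [v2[0] * p + v2[1] * q for p, q in zip(c1, c2)]

overlap = dot(e1, e2)   # ~0: the eigencurves are mutually orthogonal
```

The eigenvalues also rank the components by variance, which is how the paper selects which eigencurves carry enough information to keep.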

Time-varying eigencurves

Figure 2: Time-varying eigencurves (horizontal axis showing time from the center of secondary eclipse in days). The first panel shows the case for the full orbit without secondary eclipse, and the second panel shows the eclipse-only case. [Rauscher et al. 2018]

The eigenvalue of each eigencurve intuitively represents the relative amount of information that eigencurve contributes to the total light-curve signal, so the eigencurves can be ranked and selected according to their information content. The corresponding eigenmaps are obtained as linear combinations of the initial set of maps, using the same PCA coefficients as the eigencurves. A linear combination of eigencurves (with coefficients left free to be constrained by the observations) finally forms the flux-curve model that is fit to the data. The authors perform this exercise for three cases: simulated observations of a full-orbit phase curve (without secondary eclipse), simulated observations of only the secondary eclipse (with a fraction of the orbit before and after), and real observations of the hot Jupiter HD 189733 b combining both the phase curve and the eclipse. The eigencurves and the corresponding eigenmaps used to retrieve the brightness maps for the first two cases are shown in Figures 2 and 3.
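Because the model is linear in the eigencurve coefficients, the fit itself can be as simple as a least-squares solve. Here is a minimal sketch with made-up orthogonal curves and made-up true coefficients (all values are illustrative assumptions, not numbers from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 400)

# Hypothetical, already-orthogonal "eigencurves": a constant offset plus
# two Fourier modes.
eigencurves = np.column_stack([np.ones_like(t),
                               np.cos(2 * np.pi * t),
                               np.sin(2 * np.pi * t)])
true_coeffs = np.array([1.0, 0.3, -0.1])

# Simulated noisy observations generated from the model itself.
obs = eigencurves @ true_coeffs + rng.normal(0.0, 0.01, t.size)

# The model is linear in the coefficients, so ordinary least squares
# recovers them; in practice one would keep only the eigencurves with
# the largest eigenvalues.
fit_coeffs, *_ = np.linalg.lstsq(eigencurves, obs, rcond=None)
print(fit_coeffs)  # close to [1.0, 0.3, -0.1]
```

Truncating the basis to the highest-eigenvalue eigencurves keeps the fit well-conditioned, since the discarded components contribute little signal but would soak up noise.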

2D eigenmaps

Figure 3: Spatially varying 2D eigenmaps for the two cases shown in Figure 2. ‘X’ in the bottom figure is the point permanently facing the star (assuming the planet to be tidally locked), and the green box in the eclipse-only case marks the range of longitudes probed by the observations. [Rauscher et al. 2018]

Comparing the eigencurves with their corresponding eigenmaps (Figures 2 and 3), it is evident that each eigencurve for the full-orbit, no-eclipse case encodes paired pieces of spatial information along the planet's longitudes, while most of the latitudinal information comes from the eclipse (from the second eigenmap (Z2) onward in Figure 3). The PCA approach ensures that the eigencurves used to construct the model flux curve are orthogonal even in the eclipse-only case. The authors apply the eigencurves to combined phase-curve and eclipse observations of HD 189733 b taken by Spitzer in the 8-μm channel and retrieve the longitude of the dayside hotspot (the region of peak temperature on the planetary hemisphere facing the star) and its flux contrast, both of which are consistent with the results of previous studies. Additionally, the authors investigate the effect of orbital-parameter uncertainties by checking how sensitive the PCA coefficients used to construct the eigencurves are to orbital realizations within the one-sigma uncertainties.
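That sensitivity check can be sketched as follows: regenerate the input light curves for a slightly perturbed orbital realization, redo the PCA, and compare the coefficient matrices. Everything here (the toy curves, the size of the perturbation) is an illustrative assumption rather than the paper's actual setup:

```python
import numpy as np

def pca_coefficients(time_shift):
    """PCA coefficient matrix (rows of V^T) for toy light curves
    generated under a shifted orbital realization."""
    t = np.linspace(-0.1, 0.1, 500)  # time from mid-eclipse (days)
    curves = np.column_stack([np.cos(2 * np.pi * k * (t + time_shift))
                              for k in (1, 2, 3)])
    centered = curves - curves.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt

nominal = pca_coefficients(0.0)
perturbed = pca_coefficients(1e-6)  # a tiny illustrative shift in mid-eclipse time
# Compare absolute values so an overall sign flip of a component doesn't matter.
print(np.max(np.abs(np.abs(nominal) - np.abs(perturbed))))  # small: coefficients stable
```

If this difference stayed small across realizations drawn from the orbital-parameter uncertainties, the eigencurve basis could safely be treated as fixed during the fit.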

Usually, correlated noise in the detectors used for high-precision photometric observations is corrected for together with the fit for the astrophysical parameters, which can let correlations creep back in even between the orthogonal light curves obtained from the PCA approach. This calls for extra caution when working with the high-quality mapping data that will be obtained by the James Webb Space Telescope. With JWST's planned high-precision photometric observations in multiple spectral bands, we can look forward to combining the horizontal 2D mapping discussed in today's paper with information about the vertical atmospheric structure, giving us more reliable three-dimensional maps of exoplanets in the near future.

About the author, Vatsal Panwar:

I am a PhD student at the Anton Pannekoek Institute for Astronomy, University of Amsterdam. I work on characterization of exoplanet atmospheres to understand the diversity and origins of planetary systems. I also enjoy yoga, Lindyhop, and pushing my culinary boundaries every weekend.
