
β Pictoris

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Fundamental limitations of high contrast imaging set by small sample statistics
Authors: D. Mawet, J. Milli, Z. Wahhaj, D. Pelat, et al.
First Author’s Institution: European Southern Observatory
Status: Published in ApJ

Introduction

The world’s state-of-the-art exoplanet imaging projects include the VLT’s SPHERE, Gemini’s GPI, Subaru’s SCExAO, Palomar’s Project 1640, and the LBT’s LEECH survey. As next-generation imagers come online, we need to think carefully about what the data say, as sensitivities push closer in to the host stars. This Astrobite is the first of two that look at papers that have changed the way we think about exoplanet imaging data.

Traditionally, high-contrast imaging programs calculate a contrast curve: a 1-D plot showing, as a function of separation from the host star, the smallest brightness contrast between the star and a low-mass companion that could still be detected (Fig. 1). The authors of today’s paper examine some of the statistical weirdness that happens as we get closer in to the host, and how this can have a dramatic effect on the scientific implications.

contrast curve

Fig. 1: An example contrast curve that pre-dates today’s paper, showing the sensitivity of an observation of the exoplanet GJ504 b (the black square). Note that closer in to the host star’s glare, the attainable contrast becomes less and less favorable for small planet masses. The fact that GJ504 b lies above the curve means that it is a solid detection. [Adapted from Kuzuhara et al. 2013]

Is that Blob a Planet?

With no adaptive-optics correction, the Earth’s atmosphere causes the light from the host star (that is, the point spread function, or PSF) to degenerate into a churning mess of speckles. Adaptive-optics correction removes a lot of the atmospheric speckles, but optical imperfections from the instrument can leave speckles of their own. These quasi-static speckles are quite pernicious because they can last for minutes or hours and rotate on the sky with the host star. How do we tell if a blob is a planet or just a speckle?

resolution elements at a given radius

Fig. 2: As the radius from the host star decreases, there are fewer speckle-sized resolution elements for calculating the parent population at that radius. The radius intervals are spaced apart here by λ/D, the diffraction limit of the telescope. [Mawet et al. 2014]

Consider the following reasoning. In the absence of a planet, the distribution of intensities on all speckle-sized resolution elements at a given radius from the host star (see Fig. 2) is a Gaussian centered at zero (Fig. 3a). I’ll set my planet detection threshold at an intensity equivalent to, say, 3σ of this Gaussian. If I actually find a blob with an amplitude greater than or equal to 3σ, then there is only a 1 in ~740 chance that pure speckle noise would have produced something that bright. As a trade-off, I may only recover a fraction of the planets that are actually that bright (Fig. 3b).
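To put numbers on that trade-off, here is a minimal sketch (using SciPy’s normal distribution; the 3σ threshold and the planet brightness are just the example values from the paragraph above) of the false positive and true positive fractions for Gaussian noise:

```python
# False/true positive fractions for a Gaussian noise model and a 3-sigma
# detection threshold, as described above.
from scipy.stats import norm

threshold = 3.0                      # detection threshold, in units of the noise sigma
fpf = norm.sf(threshold)             # chance that pure noise exceeds the threshold
tpf = norm.sf(threshold - 3.0)       # chance of recovering a planet whose true brightness is 3 sigma
print(f"FPF = {fpf:.2e}  (about 1 in {1/fpf:.0f})")   # ~1 in 740
print(f"TPF = {tpf:.2f}")                              # ~0.5: only half of such planets recovered
```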

detection thresholds

Fig. 3: Left: Pixel intensities that are just noise are distributed as a Gaussian centered around a value of zero. The vertical dashed line represents a chosen detection threshold of 3σ. The tiny sliver under the curve to the right of the dashed line represents false positive detections. Compare the shape of this curve to the blue one in Fig. 4. Right: A planet of true brightness 3σ will be detected above the threshold (orange) about 50% of the time. [Jensen-Clem et al. 2018]

The name of the game is to minimize the false positive fraction (FPF) and maximize the true positive fraction (TPF). Both can be calculated as integrals above the chosen detection threshold: the FPF is the integral of the intensity distribution of quasi-static speckles, and the TPF is the integral of the distribution of a true source’s measured brightness.

All well and good, if we know with certainty what the underlying speckle intensity distribution is at a given radius. (Unfortunately, for reasons of speckle physics, speckles at all radii do not come from the same parent population.) But even if the parent population is Gaussian with mean μ and standard deviation σ, we don’t magically know what μ and σ are. We can only estimate them from the limited number of samples there are at a given radius (e.g., along one of the dashed circles in Fig. 2). And at smaller radii, there are fewer and fewer samples to calculate a parent population in the first place!

The t-Distribution

Enter the Student’s t-distribution, which is a kind of stand-in for Gaussians with few measured samples. This concept was published in 1908 by a chemist using the anonymous nom de plume “Student” after he developed it for the Guinness beer brewery to compare batches of beer using small numbers of samples. The t-distribution includes both the measured mean and measured standard deviation of the parent population. As the number of samples goes to infinity, the distribution turns into a Gaussian. However, small numbers of samples lead to t-distributions with tails much larger than those of a Gaussian (Fig. 4).

By integrating over this distribution, the authors calculate new FPFs. Since the tails of the t-distribution are large, the FPFs increase for a given detection threshold, so the threshold has to be raised to keep the same confidence level. The penalty is painful. Compared to “five-sigma” detections at large radii, keeping the same (low) chance of being snookered by a speckle costs us about a factor of 2 in achievable contrast at 2λ/D. At 1λ/D, the penalty is a factor of 10!
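Here is a minimal sketch of that small-sample penalty (a rough reimplementation in SciPy, not the authors’ code; the way I count resolution elements at each radius is a simplifying assumption):

```python
# Small-sample penalty on the detection threshold, in the spirit of
# Mawet et al. (2014): at a separation of r (in lambda/D units) there are only
# ~2*pi*r speckle-sized resolution elements from which to estimate the noise.
import numpy as np
from scipy.stats import norm, t

target_fpf = norm.sf(5.0)                  # confidence level of a "5-sigma" Gaussian detection

for r in (1, 2, 5, 20):                    # separation in units of lambda/D
    n = int(round(2 * np.pi * r)) - 1      # reference elements (the test element excluded)
    tau = t.isf(target_fpf, df=n - 1)      # t-based threshold matching the same FPF
    threshold = tau * np.sqrt(1.0 + 1.0 / n)   # extra penalty for using an estimated mean and sigma
    print(f"r = {r:2d} lambda/D: required threshold ~ {threshold:.1f} sigma "
          f"(penalty factor {threshold / 5.0:.1f})")   # penalty grows sharply at small separations
```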

Gaussian vs. t distribution

Fig. 4: A comparison between a Gaussian (blue) and a t-distribution (red). If we have set a detection threshold at, say, x=3, the area under the t-distribution curve (and thus the false detection probability) is larger than in the Gaussian case. [IkamusumeFan]

What Is to Be Done?

The authors conclude that we need to increase detection thresholds at small angles to preserve confidence limits, and they offer a recipe for correcting contrast curves at small angles. But the plot thickens: it is known that speckle intensity distributions are actually not Gaussian, but another kind of distribution called a “modified Rician”, which has longer tails towards higher intensities. The authors run some simulations and find that the FPF gets even worse at small angles for Rician speckles than for Gaussian speckles! Yikes!

The authors suggest some alternative statistical tests but leave more elaborate discussion for the future. In any case, it’s clear we can’t just build better instruments. We have to think more deeply about the data itself. In fact, why limit ourselves to one-dimensional contrast curves? There is no azimuthal information, and a lot of the fuller statistics are shrouded from view. Fear not! I invite you to tune in for a Bite next month about a possible escape route from Flatland.

About the author, Eckhart Spalding:

I am a graduate student at the University of Arizona, where I am part of the LBT Interferometer group. I went to college in Illinois, was a secondary-school physics and math teacher in Kenya’s Maasailand for two years, and got an M.S. in Physics from the University of Kentucky. My out-of-office interests include the outdoors, reading, and unicycling.

Green Bank Telescope

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org!

Title: Fast Radio Burst 121102 Pulse Detection and Periodicity: A Machine Learning Approach
Authors: Yunfan Gerry Zhang, Vishal Gajjar, Griffin Foster, Andrew Siemion, James Cordes, Casey Law, Yu Wang
First Author’s Institution: University of California, Berkeley
Status: Submitted to ApJ

Today’s astrobite combines two independently fascinating topics — machine learning and fast radio bursts (FRBs) — for a very interesting result. The field of machine learning is moving at an unprecedented pace with fascinating new results. FRBs have entirely unknown origins, and experiments to detect more of them are gearing up. So let’s jump right into it and take a look at how the authors of today’s astrobite got a machine to identify fast radio bursts.

Convolutional Neural Networks

Let’s begin by introducing the technique and machinery the authors employed for finding these signals. The field of machine learning is exceptionally hot right now, and with new understanding being introduced almost daily into the best machine-learning algorithms, the diffusion into nearby fields is accelerating. Astronomy (radio or otherwise) is no exception, since its datasets grow extraordinarily large and intractable for classical algorithms. Enter the Convolutional Neural Network (CNN): the go-to machine-learning algorithm for understanding and prediction on data with spatial features (aka images).

How does one of these fancy algorithms work? A basic starting point would be the traditional neural network, but I’ll leave that explanation to someone else. A generic neural network can take in few or many inputs, but the inputs don’t necessarily have to be spatially related to each other; CNNs, in contrast, are well suited to images. (Note: you can also have one-dimensional or three-dimensional CNNs.) Images have features that, when combined, are important for identifying what is in the image. In Figure 1, for example, the dog has features such as floppy ears, or a large mouth with a tongue protruding.

A CNN learns some or all of these features from a provided training dataset with a known ground truth; in Figure 1, for instance, the prediction can be labeled dog, cat, lion, or bird. These features are learned at varying spatial scales as the input images pass through successive convolutional layers, and each prediction is compared to its known label, with any corrections propagated backwards to update those features. This latter step is the training part, which you might notice is the same process as for a non-convolutional neural network. Thus armed with this blazingly fast classifier, we can move forward to understanding what we’ll be predicting on.
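If you like seeing ideas in code, here is a minimal sketch of a CNN of this flavor, written in PyTorch. To be clear, this is not the network of Zhang et al.; the input shape, layer sizes, and the two made-up labels (“pulse” / “no pulse”) are placeholder assumptions, chosen only to illustrate the convolve–pool–classify structure described above.

```python
# A tiny, illustrative CNN for classifying time-frequency patches as
# "pulse" vs "no pulse". Architecture and sizes are placeholders.
import torch
import torch.nn as nn

class TinyFRBNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # learn small-scale features
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample: 64x256 -> 32x128
            nn.Conv2d(8, 16, kernel_size=3, padding=1),   # learn larger-scale features
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x128 -> 16x64
        )
        self.classifier = nn.Linear(16 * 16 * 64, 2)      # two labels: pulse / no pulse

    def forward(self, x):
        x = self.features(x)
        x = x.flatten(start_dim=1)
        return self.classifier(x)

net = TinyFRBNet()
dummy = torch.randn(1, 1, 64, 256)   # one fake spectrogram patch: (batch, channel, freq, time)
logits = net(dummy)
print(logits.shape)                  # torch.Size([1, 2])
```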

Figure 1: An example of a convolutional neural network. An input image is sequentially convolved over through several convolutional layers, where each successive layer learns unique features, which after training, are ultimately used to make a prediction based on a set of labels. [KDnuggets]

Fast Radio Bursts

Figure 2: Simulated FRB pulses in Green Bank Telescope (GBT) radio time-frequency data. Pulses are simulated with a variety of parameters for the purpose of making the CNN as robust as possible. [Zhang et al. 2018]

We’ve covered FRBs on astrobites in the past (1, 2, 3), and with each new post we seem to be getting closer and closer to finding the origin of these mysterious radio signals. FRBs are radio-bright millisecond bursts seen in time–frequency radio telescope data. These bursts have unique features that set them apart from other radio signals and will be important for understanding how the authors developed an FRB training dataset for the predictions in their paper. These features consist of a dispersion measure (DM), time of arrival (TOA), amplitude, and pulse width (there are more, but I’ll highlight these as the most important characteristics). The DM is one of the more interesting features of an FRB, as this is what indicates that FRBs are cosmological. The DM is measured from the dispersion of the signal in time and frequency as it travels through an ionized medium — in this case, the intergalactic medium. This is the curved sweep seen in Figure 2, which delays the signal to later times at lower frequencies. TOA is when the signal arrived in the observations, amplitude is the flux density of the signal, and pulse width is the width at 10% of the maximum amplitude.
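To get a feel for that sweep, here is a quick sketch of the standard dispersion-delay formula. The DM value (roughly that of FRB 121102) and the 4–8 GHz band are my own illustrative assumptions:

```python
# Rough sketch of the dispersion sweep that gives an FRB its curved shape
# in time-frequency data. DM and band edges are illustrative values.
DM = 560.0                 # dispersion measure, pc cm^-3 (roughly FRB 121102's)
nu_hi, nu_lo = 8.0, 4.0    # band edges in GHz

def dispersion_delay_ms(dm, nu_low_ghz, nu_high_ghz):
    """Extra arrival delay (ms) of the low frequency relative to the high one."""
    k_dm = 4.149  # ms GHz^2 (pc cm^-3)^-1, the usual dispersion constant
    return k_dm * dm * (nu_low_ghz**-2 - nu_high_ghz**-2)

print(f"{dispersion_delay_ms(DM, nu_lo, nu_hi):.1f} ms")  # ~100 ms sweep across this band
```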

Using all of these characteristics to define a training dataset, the authors simulated many different types of FRBs, all with their own unique values. This is important because having a large, robust training dataset means you’re more likely to have a neural network capable of robust predictions.

Putting the CNN to Work

We now have all the components: a convolutional neural network, a robust training dataset, and a monumental amount of Green Bank Telescope (GBT) data. The authors seek to probe archival data of the now pretty well known FRB 121102, which has a history of being a repeating FRB. This means that FRB 121102 is an amazing resource for understanding FRBs because we can take many measurements.

Feature distributions

Figure 3: Distributions of the various features for the discovered FRB 121102 pulses from the GBT archival data. Understanding how these parameters relate to each other can give us hints to the nature of FRB 121102. [Zhang et al. 2018]

Using several hours of GBT archival data, the authors set the CNN to work predicting whether there are additional FRB pulses from FRB 121102 that may have gone overlooked due to the signal being weak or just plain being missed due to the amount of data. They successfully find 72 additional pulses from FRB 121102! And interestingly enough, more than half of these newly discovered pulses happened within the first half-hour of this dataset. This brings the total tally, including the older signals, to 93 FRB pulses.

The additional detection and measurement of these pulses is certainly important. As we’ve stated in past astrobites, the origin of these bursts is almost completely speculative, and we need to build up as many measurements as we can to either rule out or constrain the potential cosmological sources. Having a repeating FRB with which we can start to collect measurements, like the distributions seen in Figure 3, is fantastic for understanding how the FRB’s environment affects these parameters. Hopefully with the continued development of these CNNs and other machine-learning techniques, we’ll see an explosion of FRB detections.

About the author, Joshua Kerrigan:

I’m a 5th year PhD student at Brown University studying the early universe through the 21cm neutral hydrogen emission. I do this by using radio interferometer arrays such as the Precision Array for Probing the Epoch of Reionization (PAPER) and the Hydrogen Epoch of Reionization Array (HERA).

gas-giant planet formation

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org!

Title: The evolution of gas giant entropy during formation by runaway accretion
Authors: David Berardo, Andrew Cumming, Gabriel-Dominique Marleau
First Author’s Institution: McGill University and University of Montréal, Canada
Status: Published in ApJ

Introduction

Direct imaging has turned up only a handful of planets. However, as observing sensitivities get better in the coming years, the technique will become a powerful probe of planet formation physics. Part of the reason planets are so challenging to image is that they don’t sustain fusion themselves, so they just slowly cool and become dimmer with time.

How, exactly, do they cool? We need to know this in order to convert the measured luminosity of a planet into meaningful data, like the planet’s mass. For that, we have to mostly rely on evolutionary models to predict the cooling curve. The authors of today’s paper do this by tackling the physics of the accretion process during its most rapid phase, when the growing protoplanet’s gravitational well consumes material as fast as the surrounding disk can supply it.

In the field of planet formation physics, “hot start” and “cold start” and a gradient of “warm” starts in between refer to the starting entropies of planets. These terms do not necessarily indicate the formation mechanism. The authors of today’s paper specifically investigate the core accretion mechanism to see what interior entropies, and by extension luminosities, it can lead to.

a growing Jupiter-mass planet

Fig. 1: Left: The mishmash of quantities the authors monitor in growing planets. At the shock boundary, material accretes at a rate M-dot, and an accretion luminosity Laccr is contributed to the planet’s luminosity. T0, P0, and S0 are the temperature, pressure, and entropy just below the shock. Material settles and compresses in the envelope before reaching the radiative-convective boundary. Sc is the entropy in the core. Right: An example plot showing the regimes of resulting entropy Sf of a 10-Jupiter-mass planet as a function of T0 and P0, after it began accreting at an initial entropy Si. The color scale is in units of Boltzmann’s constant per proton mass. [Adapted from Berardo et al. 2017]

How Shocking?

As material falls into an accreting planet, it loses gravitational potential energy. How much of that energy gets radiated away, and how much is incorporated into the planet’s internal entropy? The physics surrounding the shock is critical here. With a stew of boundary conditions, “jump” conditions around the accretion shock, assorted assumptions, and the open-source stellar evolution code Modules for Experiments in Stellar Astrophysics (MESA), the authors monitor different layers of the growing planet (Fig. 1, left).
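For a sense of the energy budget involved, here is a rough order-of-magnitude sketch of the accretion luminosity released at the shock, L_acc ~ GMṀ/R. The planet mass, radius, and accretion rate below are round numbers I’ve assumed for illustration, not values from the paper:

```python
# Back-of-the-envelope accretion luminosity for a growing giant planet.
# All planet parameters here are illustrative assumptions.
G       = 6.674e-11        # m^3 kg^-1 s^-2
M_jup   = 1.898e27         # kg
R_jup   = 7.149e7          # m
M_earth = 5.972e24         # kg
L_sun   = 3.828e26         # W
year    = 3.156e7          # s

M_planet = 1.0 * M_jup             # assumed planet mass
R_planet = 2.0 * R_jup             # assumed (inflated) radius during accretion
Mdot     = 1e-2 * M_earth / year   # assumed runaway accretion rate, kg/s

L_acc = G * M_planet * Mdot / R_planet
print(f"L_acc ~ {L_acc / L_sun:.1e} L_sun")   # of order 1e-3 L_sun for these numbers
```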

They find that the planet’s internal entropy is set by the difference between the initial entropy Si and the entropy S0 of the accreting material as set by the temperature and pressure at the shock boundary (Fig. 1, right). In some cases, core accretion can lead to “cold” starts with luminosities less than about 5×10⁻⁶ Lsun if the accretion rate is small and on the order of 10⁻³ Mearth per year, if the initial entropy is low, and if the shock temperature is close to that of chilly nebular temperatures. But many core accretion scenarios actually result in much more luminous planets. As the authors outline in their summary, the core accretion formation regimes include (see Fig. 2):

  1. The cooling regime (S0 < Si, and T0 < 500–1000 K): the planet is convective and cooling proceeds quickly. (Check out this bite for more info about convection, radiation, and entropy in planet interiors.)
  2. The stalling regime (S0 > Si, and T0 ≅ 1000–2000 K): the planet’s envelope is radiative, and the internal entropy decreases as a function of decreasing radius inside the planet. The final entropy Sf tends to settle near the initial entropy value Si.
  3. The heating regime (T0 > 2000 K): the imbalance between the initial entropy and that of the accreting material is steep enough that it forces a convective layer with a minimum Smin > Si to form on the convective core.
Luminosity curves

Fig. 2: Luminosity curves as a function of planet age, in subpanels for different combinations of initial entropies Si and accretion rates. The data points are directly imaged planets. Number 8 is 51 Eri b, the chilliest directly imaged planet found so far, and the closest contender for a “cold” start formation. Letter A is the possibly-still-accreting planet HD 100546 b. [Berardo et al. 2017]

The authors plot expected cooling curves and overlay data points of directly imaged planets. In Fig. 2, the bundles of lines corresponding to different planet masses are fairly tight, especially if the planet is a few tens of millions of years old. But if the mass of a very young planet can be independently determined, then under certain circumstances (like additional constraints on the accretion rate, for example), the luminosity can help constrain the exact formation scenario. One particularly interesting example is the young planet HD 100546 b, which is embedded in an asymmetric protoplanetary disk and is probably still undergoing accretion.

A Tricky Business

Currently, though, figuring out how well the data points in Fig. 2 agree with the models is a tricky business without measuring the planet masses in a way that does not also depend on a cooling model. Fortunately, this is beginning to change with Gaia, which can help determine masses of planets by monitoring the way they tug on the host star. Observationally, it will also be important to carry out spectroscopy of accreting protoplanets to determine how much of the luminosity comes from the shock itself (see Fig. 1), and to actually spatially resolve the accretion emission to put better constraints on the details of the accretion process.

On the theory side, the authors call for models that allow parameters that they kept fixed — like T0 and the accretion rate — to vary in time, and to incorporate more complications like the effects of dust grains, accretion asymmetries, and whatever other individual circumstances may be in play during the formation of a given planet. As the symbiotic trifecta of high-contrast imaging, Gaia data releases, and sophisticated modeling continue to advance, we may yet use the luminosity of young planets to illuminate the broader physics of massive planet formation.

About the author, Eckhart Spalding:

I am a graduate student at the University of Arizona, where I am part of the LBT Interferometer group. I went to college in Illinois, was a secondary-school physics and math teacher in Kenya’s Maasailand for two years, and got an M.S. in Physics from the University of Kentucky. My out-of-office interests include the outdoors, reading, and unicycling.

S0-2 orbit

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org!

Title: Improving Orbit Estimates for Incomplete Orbits with a New Approach to Priors – with Applications from Black Holes to Planets
Authors: K. Kosmo O’Neil, G. D. Martinez, A. Hees, et al.
First Author’s Institution: University of California, Los Angeles
Status: Accepted to ApJ

Everyone warns you: don’t make assumptions, because when you ASSUME, you look as foolish AS a SUM of Elephant seals, or however it goes.

Figure 1. I’m sorry, they’re just silly-looking animals. [Brocken Inaglory]

But assumptions are useful, as long as they’re based on facts! On the weekends, for example, I just assume, based on prior experience, that an NYC subway journey will take about twenty minutes longer than it’s supposed to because of crowds and construction. More often than not, I arrive about when I expect based on that assumption.

Of course, if the transit authority magically got its act together, I’d have to update my beliefs — I wouldn’t let an old assumption about slow subways mislead me into showing up awkwardly early for things forever. It would probably only take two or three fast train journeys before I stopped building in that extra 20 minutes. My observations, in other words, would take precedence over my assumptions.

But what if I couldn’t test my assumptions against the real world so effectively? What if I were working from very limited data? In the subway analogy, what if I had moved away from New York years ago, but I were still advising tourists about travel time based on how things used to be? My advice might be better than nothing, but still inadequate or misleading.

Today’s authors investigate: What happens when data are scarce, and you have to let your assumptions guide you? How do you choose your assumptions wisely, so you’re misled as rarely as possible?

Orbits, and the Lack Thereof

The data these authors investigate isn’t so different from a subway timetable — it’s a list of on-sky coordinates describing where a celestial body is measured to be, and when. Specifically, the authors look at the star S0-2, which orbits the black hole at the center of the Milky Way, and the four planets around HR8799, which have been directly imaged (i.e., photographed on the sky, so their positions are known). Figure 2 summarizes what we know about these objects.

Figure 2. (Left) The on-sky coordinates of star S0-2 (black points) as it traces out its 16-year orbit around the black hole at the center of the Milky Way (gray star), plus the best-fitting orbit (blue line). (Middle) S0-2’s radial velocity at various points in its orbit, plus the best-fitting orbital solution. (Right) The on-sky coordinates of the four planets orbiting HR8799 (gray star). Note that these planets take a long time to orbit their star, so we’ve only witnessed a small fraction of their orbits since the system was first photographed. [O’Neil et al. 2018]

S0-2 has been closely watched for a while — we’ve been able to see it trace out more than a complete orbit around the central black hole. Whatever pre-existing assumptions we might have made about its orbit, they’ve been well and truly tested against the experimental data, and we don’t need to rely on them anymore.

HR8799, though, is a different story. It takes roughly 45 years for the innermost planet (HR8799e, plotted in yellow) to go all the way around, and the planets were only discovered about ten years ago. There’s a surprisingly wide range of possible orbits that fit the limited observations we have so far, and so if we want to decide which possibilities are most likely, we need to rely on our assumptions about how orbits ought to work.

What Assumptions Are Best?

Traditionally, scientists who specialize in orbit-predicting have chosen their assumptions to (they hope) introduce as little bias as possible: to decide, for example, that no value of orbital eccentricity is any more likely than another, a priori. It’s a fancy way of declaring they’re as agnostic as possible about the best-fitting orbit.

But today’s authors point out that we’re not observing eccentricity, nor any other parameter of the orbit, directly — we’re actually observing on-sky coordinates, as a function of time, and trying to fit for the orbital parameters that match those coordinates best. We should choose our assumptions so that no observation is more likely than any other a priori. That’s the real way to be as agnostic as we can be.
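Here’s a toy numerical illustration of that point (my own sketch, not the authors’ machinery): draw eccentricities uniformly, as the “old” priors do, and look at what that implies for something closer to what we actually observe, the star-planet separation.

```python
# Toy illustration: a prior that is "uniform" in an orbital element is
# generally NOT uniform in observable space. Draw eccentricities uniformly,
# place each orbit at a random (time-uniform) phase, and histogram the
# implied separations r/a.
import numpy as np

rng = np.random.default_rng(42)
N = 200_000
e = rng.uniform(0.0, 1.0, N)           # "agnostic" prior: uniform in eccentricity
M = rng.uniform(0.0, 2 * np.pi, N)     # mean anomaly, uniform in time

# Solve Kepler's equation E - e*sin(E) = M by bisection (robust for all e < 1).
lo, hi = np.zeros(N), np.full(N, 2 * np.pi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    too_small = mid - e * np.sin(mid) - M < 0.0
    lo = np.where(too_small, mid, lo)
    hi = np.where(too_small, hi, mid)
E = 0.5 * (lo + hi)

r_over_a = 1.0 - e * np.cos(E)         # instantaneous separation in units of the semi-major axis
counts, edges = np.histogram(r_over_a, bins=20, range=(0, 2), density=True)
print(np.round(counts, 2))   # far from flat: the element-space prior is informative in observable space
```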

To test this hypothesis, the authors go on to simulate what happens when you make each of those assumptions and try to fit an orbit with only a few data points. Because the data are simulated, the authors know the right answer about the orbit, and they can test their results against it. As Figure 3 shows, the “old way” can really bias the results of orbit fitting, and the “new way” performs much better.

Figure 3. What happens when you try to measure the mass of the Milky Way’s black hole, based on only a few measured coordinates of S0-2? If you adopt the “old” assumptions, you get the blue distribution, which is biased — you’ll conclude that the Milky Way’s black hole is more massive than it really is. If you adopt the “new” assumptions, you’ll get a much more accurate answer. [O’Neil et al. 2018]

Unexpectedly, the worse your data is — in other words, the fewer on-sky coordinates you’ve measured, and the more heavily you have to lean on your assumptions — the more likely you are to be biased, if you stick with the old way. It goes to show that we should all re-examine our assumptions every once in a while!

About the author, Emily Sandford:

I’m a PhD student in the Cool Worlds research group at Columbia University. I’m interested in exoplanet transit surveys. For my thesis project, I intend to eat the Kepler space telescope and absorb its strength.

stellar activity

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org!

Title: Stellar Surface Magneto-Convection as a Source of Astrophysical Noise II. Center-to-Limb Parameterisation of Absorption Line Profiles and Comparison to Observations
Authors: H. M. Cegla, C. A. Watson, S. Shelyag et al.
First Author’s Institution: University of Geneva, Switzerland
Status: Accepted by ApJ

The Sun is whizzing around the Milky Way at about 500,000 miles per hour. A Galápagos tortoise, which is the opposite of the Sun, ambles along at about 0.2 miles per hour. One might reasonably conclude that the speed of a Galápagos tortoise is completely irrelevant compared to the speed of the Sun — if the Sun were to speed up or slow down by a tortoise-speed or two, what difference would it really make?

Figure 1. It’s true!

Earth, for example, pulls on the Sun to the tune of 0.4 miles per hour. Imagine an alien, far away, watching the Sun approach: at the time of year when Earth’s gravity swung the Sun forward, towards her, she would clock the Sun going 0.4 miles per hour faster than she would six months later.

Spectrographs here on Earth are getting good enough to detect velocity shifts as small as that. The newest ones — ESPRESSO and EXPRES, among others in various stages of design and construction — aim to measure tortoise-speed shifts in radial velocity (in more professional units, shifts of 10 centimeters per second), which should reveal a whole population of Earth-like planets lurking around the stars they’re aimed at.
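For the curious, here’s a quick check of those numbers; the mass ratio and Earth’s orbital speed are standard values.

```python
# Unit check: the Sun's reflex motion due to Earth, and the "tortoise-speed"
# precision goal, in cm/s and mph.
M_earth_over_M_sun = 3.003e-6
v_earth_kms = 29.78                     # Earth's orbital speed, km/s
mph_per_cms = 3600.0 / (1609.34 * 100)  # 1 cm/s expressed in miles per hour

v_reflex_cms = M_earth_over_M_sun * v_earth_kms * 1e5   # semi-amplitude, cm/s
print(f"Sun's wobble semi-amplitude: {v_reflex_cms:.1f} cm/s "
      f"({v_reflex_cms * mph_per_cms:.2f} mph)")
print(f"peak-to-peak over six months: {2 * v_reflex_cms * mph_per_cms:.1f} mph")      # ~0.4 mph
print(f"tortoise speed (0.2 mph) in spectrograph units: {0.2 / mph_per_cms:.0f} cm/s") # close to the ~10 cm/s goal
```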

The problem is that tortoise-speed velocity shifts are not necessarily the result of orbiting planets. Stellar surfaces are bubbling, boiling messes — hot “granules” of plasma bubble up from the sweltering depths, then cool and sink back down again. Meanwhile, the bubbling in the upper layers is forceful enough that the entire surface reverberates, expanding and shrinking to the bubbling beat.

All of that bubbling and oscillating can wobble the star more than an orbiting planet does, and confuse us in our attempts to find other Earths. Today’s authors ask: How well do we actually understand the radial velocity of a stellar surface? In particular, how well do we understand the radial-velocity signals coming from the center of a star, compared to those coming from the edge?

Edge-On Bubbles

Below is a snapshot of a stellar surface, simulated by the authors. In the left column is the middle of the star: our line-of-sight is at a 0° angle with a line pointing straight up from the surface of the star. In the right column is a little piece of star considerably farther out toward the edge, or the “limb.” Here, our line-of-sight makes a 60° angle with a line pointing up out of the surface.

Figure 2. Top: Maps of the intensity of light coming from a simulated stellar surface viewed straight-on (left) and at a 60° limb angle (right). Bottom: Radial-velocity signatures of the stellar surface, in kilometers per second. Blue regions are moving toward the observer, i.e. outward from the surface of the star; red regions are moving inward, away from the observer.

The maps at 0° and 60° look quite different, because of a handful of effects operating all together:

  1. Limb darkening: The outer layers of the star are somewhat transparent, and as you go from the stellar limb toward the center of the star, your gaze penetrates ever-deeper, ever-hotter, ever-brighter layers of the stellar interior.  As a result, the entire surface is brighter at 0° than at 60°.
  2. Viewing angle: The granules, or the bright bubbles of hot plasma, always rise straight up toward the surface. But as your viewing angle onto the surface changes, your perception of that motion changes, too — while a bubble at the center of the star is strongly blueshifted because it’s moving directly towards you, a bubble exactly at the stellar limb is actually moving perpendicular to your line of sight, so it imparts no radial-velocity signature.
  3. Corrugation: Granules stand out above the stellar surface, like hills. From a top-down view, that doesn’t matter, but from a side-on view, the granules block the view of the redshifted “valleys” beneath.

Together, these effects conspire to change the shape and position of absorption lines in the stellar spectrum. A line measured at the center of the star will be different from one measured at the limb. Consider, for example, the contribution to the line coming from the granules: because of the corrugation effect, the granules become the most visible piece of the stellar surface as you move toward the limb, and because of the viewing-angle effect, the granules at the limb appear less blueshifted than their counterparts at the stellar center, even as they limb-darken. So overall, a granule spectrum at the limb is dimmer and redder than its central counterpart, and since the overall stellar spectrum is made by adding up the spectra of granules and other surface features (weighted by how bright they are), the overall spectrum changes, too.
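As a cartoon of the viewing-angle and limb-darkening effects described above, here is a small sketch of how a rising granule’s apparent brightness and line-of-sight velocity change from disk center to the limb. The limb-darkening coefficient and rise speed are round numbers I’ve assumed, not values from the paper’s simulations:

```python
# Cartoon of limb darkening and line-of-sight projection for a rising granule.
# The coefficient u and the rise speed are assumed, Sun-like round numbers.
import numpy as np

u = 0.6            # assumed linear limb-darkening coefficient
v_rise = 1.0       # assumed granule rise speed, km/s

for theta_deg in (0, 30, 60, 85):
    mu = np.cos(np.radians(theta_deg))      # mu = cos(limb angle)
    intensity = 1.0 - u * (1.0 - mu)        # limb darkening: dimmer toward the limb
    v_los = -v_rise * mu                    # projection: less blueshift toward the limb
    print(f"theta={theta_deg:2d} deg  relative intensity={intensity:.2f}  v_los={v_los:+.2f} km/s")
```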

Keeping this in mind, the authors of this paper are currently working to build up a library of the spectra and radial velocity signatures we might expect from a whole host of different types of stars. Good news for conscientious planet hunters — we’re one step closer to understanding the noise in our radial-velocity observations!

About the author, Emily Sandford:

I’m a PhD student in the Cool Worlds research group at Columbia University. I’m interested in exoplanet transit surveys. For my thesis project, I intend to eat the Kepler space telescope and absorb its strength.

brown dwarf

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org!

Title: A Significant Over-Luminosity in the Transiting Brown Dwarf CWW 89Ab
Authors: Thomas G. Beatty, Caroline V. Morley, Jason L. Curtis, Adam Burrows, James R. A. Davenport, Benjamin T. Montet
First Author’s Institution: Pennsylvania State University
Status: Accepted by AJ

Our story begins on a bright and not-so-stormy brown dwarf. Brown dwarfs are, themselves, mysterious objects, but CWW 89Ab is especially odd. But let’s start at the beginning; why are brown dwarfs mysterious? It is theorized that brown dwarfs form in a similar way to main-sequence stars: a gas cloud collapses and heats up. However, brown dwarfs do not have enough mass to ignite hydrogen fusion like a star, and they are instead left fusing deuterium. Over time, these objects cool off and start to dim and fade away.

What makes these objects such a mystery is that we have very little understanding of how they evolve over time. Astronomers use brown dwarf models informed by their observations to map the formation and evolution of these objects. However, these models require independently determined masses, radii, luminosities, and ages for brown dwarfs. If one of these parameters is missing or is determined using other models and not observations, large uncertainties may occur. And up to this point, no known brown dwarf had independent measurements of all four required parameters.

Then our protagonist, CWW 89Ab, came along. It is a brown dwarf that was initially discovered by the K2 mission and orbits a Sun-like main-sequence star in the Ruprecht-147 cluster. Using transits and radial velocities, Nowak et al. 2017 were able to determine its radius (0.94 Jupiter radii) and mass (37 times that of Jupiter). Because this system also falls in an open cluster, the authors determine an extremely precise age for the brown dwarf by looking at the main-sequence turnoff for the cluster (this website includes a nice explanation and animation on how this works). The only thing left was to determine CWW 89Ab’s luminosity.

The authors of today’s paper used Spitzer to observe CWW 89Ab passing behind its star, a process known as a secondary eclipse. The depth of the eclipse represents how much light from the brown dwarf is blocked, allowing the authors to determine its luminosity at 3.6- and 4.5-µm wavelengths. From these observations, the authors discovered that this brown dwarf is 16x brighter than predicted by evolutionary models! In other words, from the secondary eclipse depths, the authors found that the brown dwarf must have a brightness temperature of 1,700 K — but brown dwarf evolution models suggest that this object should instead have an interior temperature of 850 K. Figure 1 highlights just how extreme this luminosity difference is.
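To see how an eclipse depth maps onto a brightness temperature, here is a rough sketch of the standard relation, depth ≈ (R_bd/R_*)² × B_λ(T_bd)/B_λ(T_*). The host star’s temperature and radius below are generic Sun-like values I’ve assumed for illustration; only the 0.94 Jupiter-radius size and the two temperatures come from the post:

```python
# Rough sketch: secondary-eclipse depth vs. brightness temperature.
# Stellar temperature and radius are assumed Sun-like values.
import numpy as np

h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23
R_jup, R_sun = 7.149e7, 6.957e8

def planck(lam, T):
    """Planck spectral radiance B_lambda (normalization cancels in the ratio)."""
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k_B * T))

R_bd, R_star = 0.94 * R_jup, 1.0 * R_sun   # stellar radius assumed
T_star = 5750.0                             # K, assumed Sun-like

for T_bd in (850.0, 1700.0):                # model-predicted vs. observed brightness temperature
    for lam_um in (3.6, 4.5):
        depth = (R_bd / R_star)**2 * planck(lam_um * 1e-6, T_bd) / planck(lam_um * 1e-6, T_star)
        print(f"T_bd={T_bd:.0f} K, {lam_um} um: eclipse depth ~ {depth*1e6:.0f} ppm")
```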

Luminosity vs. Age

Figure 1: The luminosity of CWW 89Ab (green point) plotted against different brown-dwarf evolution models. The different Burrows models explore varying luminosities assuming the brown dwarf is receiving large amounts of flux from the host star. The total stellar irradiance (blue dashed line) is the total luminosity the brown dwarf receives from its star. [Beatty et al. 2018]

The Game Is Afoot!

The authors, like any good detectives (or scientists), explored multiple possibilities for the cause of this over-luminosity (or temperature difference). One possibility is that the nearby main-sequence star is heating the brown dwarf, making it hotter and thus more luminous than models predict. However, even if the brown dwarf absorbed all of the stellar radiation it received and kept all the heat on its day side (i.e., didn’t transport heat to its night side), it still wouldn’t reach the 1,700 K temperature observed. Furthermore, a hotter brown dwarf would mean a larger brown dwarf, which wouldn’t match the radius observations of this object. So something must be making CWW 89Ab brighter while keeping it at the observed size. 

One way to do this is to assume there is a temperature inversion in CWW 89Ab’s atmosphere. A temperature inversion means that as you increase in altitude (and decrease in pressure) the atmosphere becomes hotter. We see this on Earth in the stratosphere, as well as in hot Jupiters orbiting close to their parent stars. However, the hot Jupiters that do have temperature inversions have temperatures higher than 2,000 K — much hotter than the 1,700 K temperature observed on CWW 89Ab. Therefore, CWW 89Ab’s potential temperature inversion is likely caused by a different mechanism than that occurring in a hot Jupiter. The authors suggest that if there is an over-abundance of carbon compared to oxygen, the brown dwarf would no longer be able to radiate its interior heat away, but it would still absorb large amounts of heat from its parent star. This would make its upper atmosphere (the level that we can detect) much hotter than models predict. Figure 2 illustrates two possible models to explain the data. One (gold line) is to assume that the brown dwarf mysteriously has an interior temperature of 1,700 K based on the observations and unexplained by the models. The other possibility is that the interior temperature of this brown dwarf is 850 K as the models suggest, but a temperature inversion has caused the atmosphere to heat up at higher altitudes (shown with a red line in Figure 2). The authors note that while the 1,700 K model does a better job fitting the data, they cannot rule out the possibility of a temperature inversion with only two data points.

Eclipse depth

Figure 2: The authors plot the eclipse depth (in green) at the two different wavelengths that they observed with Spitzer. Plotted in orange is the brown dwarf model assuming that CWW 89Ab has an internal temperature of 1,700 K, which would make it much hotter than the 850 K temperature that models predict given its other parameters. On the other hand, the red line plots the model with an 850 K internal temperature but an atmosphere that has a temperature inversion. [Beatty et al. 2018]

The Unsolved Mystery

Of course there is always the possibility that our brown dwarf models are imprecise or missing important information. But CWW 89Ab does provide the first brown dwarf with all independently observed parameters for these models. So if these models do an insufficient job capturing this brown dwarf, could it mean we are interpreting all brown dwarf observations incorrectly? The authors argue strongly against this possibility, noting that one unexplainable mystery should not be evidence that all previous brown-dwarf analysis is moot. The reality is that we need to observe more secondary eclipses of CWW 89Ab at varying wavelengths in order to solve this case of the over-luminous brown dwarf. And the best way to obtain these observations will be to use the Sherlock Holmes of telescopes, the James Webb Space Telescope. In the meantime, this unsolved mystery will just have to keep us in suspense.

About the author, Jessica Roberts:

I am a graduate student at the University of Colorado, Boulder, where I study extra-solar planets. My research is currently focused on understanding the atmospheres of the extremely low-mass low-density super-puffs. Out of the office, you will probably find me running, cross-stitching, or playing with my dog.

exoplanetary system

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org!

Title: An Excess of Jupiter Analogs in Super-Earth Systems
Authors: M. L. Bryan, H. A. Knutson, B. Fulton, E.J. Lee, K. Batygin, H. Ngo, T. Meshkat
First Author’s Institution: California Institute of Technology
Status: Submitted to ApJ

Various studies throughout the years have established the fact that the gas giants significantly influenced the formation and evolution of our solar system. Jupiter, in particular, is thought to have played a considerable role in shaping the formation of the inner solar system. Astronomers believe that during the formation of the solar system, Jupiter blocked material from flowing into the inner disk, altered the velocity distribution of this material, and disrupted planet formation within several AU of the Sun, leaving our solar system with no planets interior to 0.39 AU (Mercury’s orbit). Because the gas giants played such an integral role in the formation of our own solar system, scientists are now interested in whether similar processes occur in exoplanetary systems.

Figure 1. An artist’s representation of Gliese 832c, a super-Earth, as compared with Earth. [PHL/UPR Arecibo]

The authors of today’s paper are interested in how long-period gas giants, or Jupiter analogs, disrupt planet formation close to the host star. In particular, the authors are interested in exoplanetary systems containing both Jupiter analogs and super-Earths, or planets with masses larger than Earth’s, but significantly less than that of Uranus or Neptune (see Figure 1). In today’s paper, the authors use radial velocity (RV) observations to search for such systems.

It is difficult to constrain the effect that Jupiter analogs have on inner planetary system evolution. Transit and radial-velocity surveys require the observation of one or more complete orbits of these planets, and current surveys have not been around long enough to observe the whole orbit of Jupiter analogs in exoplanetary systems.

In order to combat this lack of observational constraints, the authors collect published RV data for 65 systems that each have at least one confirmed super-Earth. Those systems are where the researchers believe they will find the Jupiter analogs they are hunting for. Here the authors define a super-Earth as a planet with a radius of 1–4 Earth radii, and a mass of 1–10 Earth masses. A Jupiter analog has a mass of 0.5–20 Jupiter masses, and a semi-major axis of 1–20 AU.

Within the sample of 65 systems, the authors were able to recognize Jupiter analogs by analyzing long term trends in the RV data. The researchers then obtain adaptive optics (AO) imaging data to search for companions to the system’s host star, as companion stars could cause RV trends that the researchers might misattribute to Jupiter analogs. The authors determine whether the measured RV trends exhibit a correlation with the star’s emission lines to see whether any of the observed trends are due to stellar activity. Finally, the researchers account for uncertainty introduced by the inability to pinpoint the precise locations of the Jupiter analogs.
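As a flavor of what “analyzing long-term trends” looks like in practice, here is a minimal sketch of fitting a linear acceleration to an RV time series; the epochs and velocities below are fabricated placeholders, not data from the paper:

```python
# Minimal sketch of a long-term RV trend search: fit a straight line (an
# acceleration) to a star's radial-velocity time series. Data are fabricated.
import numpy as np

t_yr  = np.array([0.0, 0.8, 1.5, 2.3, 3.1, 4.0, 5.2, 6.0])       # epochs (years), assumed
rv_ms = np.array([1.2, 2.9, 4.1, 6.0, 7.8, 10.3, 13.1, 14.9])    # RVs (m/s), assumed

slope, intercept = np.polyfit(t_yr, rv_ms, 1)
print(f"RV trend ~ {slope:.2f} m/s per year")
# A significant slope sustained over many years, once stellar activity and
# stellar companions are ruled out, points to a long-period giant companion.
```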

Figure 2. The Jupiter analog occurrence rate found in this paper as compared with the rate estimates published in Wittenmyer et al. (2016) and Rowan et al. (2016). This study finds a much higher occurrence rate of long-period gas giants in systems hosting inner super-Earths than would be expected by chance alone. [Bryan et al. 2018]

After performing this analysis, the authors find nine systems with statistically significant trends indicating the presence of a Jupiter analog. They find that in systems hosting inner super-Earths, the occurrence rate of Jupiter analogs is 39±7%, which is a significantly higher rate than one would expect to see by chance alone (see Figure 2). The authors also find an occurrence rate of 44±17% for systems with an M-type host star, the smallest type of star. This indicates that the spectral type of the host star does not alter the Jupiter analog occurrence rate in systems hosting an inner super-Earth. The apparent correlation between the occurrence of inner super-Earths and outer Jupiter analogs suggests that long-period gas giants do not hinder super-Earth formation by any of the processes previously mentioned. On the contrary, these two populations of planets seem to be correlated with one another, and outer gas giants may even facilitate super-Earth formation.

This is an intriguing result, because it implies that systems with long-period gas giants identified in RV surveys likely contain an inner super-Earth as well, providing a compelling place to continue the search for these mid-sized planets. We have a lot to learn about how exoplanetary systems form, but observing and understanding correlations like the one found in this study help researchers understand what to expect in the wealth of exoplanetary systems we now know exist.

About the author, Catherine Clark:

Today’s post was written by an Astrobites guest author: Catherine Clark, a second-year graduate student at Northern Arizona University. She uses speckle imaging and long-baseline interferometry to characterize exoplanet host stars. When she’s not Fourier transforming, she enjoys yoga, climbing, and photography.

ALMA observations

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org!

Title: Multiple Disk Gaps and Rings Generated by a Single Super-Earth: II. Spacings, Depths, and Number of Gaps, with Application to Real Systems
Authors: Ruobing Dong, Shengtai Li, Eugene Chiang, & Hui Li
First Author’s Institution: Steward Observatory, University of Arizona; University of Victoria, Canada
Status: Accepted in ApJ

Taking a Closer Look at Protoplanetary Disks

It’s a little too late for anyone to witness how Earth, and its planetary neighbors, came to be — at least, not without some sort of time machine. That’s one reason scientists study protoplanetary disks out in space: to learn more about how planets form.

A protoplanetary disk is a fluffy disk of dust and gas orbiting around a young star. These disks are believed to be sites of planet formation. The solar system we live in was once a protoplanetary disk, waaay back in the day before its planets formed.

In the past, it’s been extremely difficult to directly catch disks in the act of forming planets. But powerful new instruments of this (and the next) decade are helping to change that. With the awesome might of one such instrument, known as the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile, scientists have very recently been able to observe protoplanetary disks at higher resolution than ever before. Their observations show all sorts of substructures in disks, like holes, spirals, and gaps. Figure 1 above gives a taste of these intricate substructures in a handful of disks, which were all observed with ALMA a few years ago.

It’s still not clear what exactly is causing these substructures in protoplanetary disks.  But one really, really exciting prospect is that they’re caused by planet formation.

Reading this, you might take a second look at the gaps observed in Figure 1 … and then quickly get overwhelmed, trying to imagine a planet within every single gap. But a group of researchers recently found that it’s possible for a single planet to clear out multiple gaps. Through protoplanetary disk simulations, they found that a single super-Earth-like planet, with a mass between those of Earth and Neptune, could lead to up to five dust gaps in a disk — all of which could be seen by ALMA. So orbiting planets might have caused groups of the gaps that we see in Figure 1.

Today’s authors set out to more fully explore the relationship between a growing planet and the gaps that it can form. Using simulations of disks with evolving planets, they studied how gap characteristics vary with disk and planet parameters. They also determined how scientists can use these gap characteristics to infer properties of planets from observed disk substructures — assuming that those observed substructures were caused by planets to begin with!

Putting Theory into Practice

Today’s authors performed two-dimensional simulations of both the gas and the dust in a protoplanetary disk.  In each simulation, the authors placed a single planet at a fixed orbital radius. They then ran the simulation, letting the planet orbit and grow for about 0.1 to 1 million years. They kept track of the gaps that formed due to the planet over time, through the gaps’ widths, depths, and radial locations.
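For a flavor of what “keeping track of the gaps” can look like, here is an illustrative sketch (my own, not the authors’ pipeline) that measures gap locations, depths, and widths in a toy azimuthally averaged dust surface-density profile:

```python
# Illustrative gap finder: locate local minima in a toy 1-D dust
# surface-density profile and report their locations, depths, and widths.
# The toy profile (a power law with two Gaussian gaps) is an assumption.
import numpy as np
from scipy.signal import find_peaks

r = np.linspace(0.2, 3.0, 600)                       # radius, in units of the planet's orbital radius
background = r**-1.0                                 # smooth background disk profile
sigma = background.copy()
for r_gap, depth, w in [(0.8, 0.6, 0.05), (1.3, 0.4, 0.08)]:
    sigma *= 1.0 - depth * np.exp(-0.5 * ((r - r_gap) / w)**2)

# Gaps are local minima of sigma: search for peaks in the fractional depletion.
contrast = 1.0 - sigma / background
gaps, props = find_peaks(contrast, prominence=0.1, width=5)
dr = r[1] - r[0]
for i, g in enumerate(gaps):
    print(f"gap at r = {r[g]:.2f}, depth = {contrast[g]:.2f}, width ~ {props['widths'][i] * dr:.2f}")
```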

They varied the planet masses (Mp) and disk height vs. disk radius ratios (h/r) across their simulations. Figure 2 shows an example simulation run, where Mp = 1.8 Earth masses and h/r = 0.03 (meaning that the total height of the disk at a given disk radius is always 3% of that disk radius). We can see that the planet produced many gaps in both the disk’s gas and dust over time, but that the dust gaps are much larger in magnitude than those in the gas.

Figure 2: A snapshot of one disk simulation, giving a bird’s-eye-view of the disk after 0.26 million years. The left and right plots show the gas surface density and the dust surface density of the disk, respectively. The green plus marks the star’s location, which is masked out with a large black circle. The green dot marks the planet’s location. We can see gaps carved out by the planet in the dust, and much fainter gaps carved out in the gas. [Dong et al. 2018]

The authors analyzed how variations in Mp and h/r affect characteristics of the gaps. They found a number of interesting trends from their results, including:

  • As Mp decreases or h/r increases, gaps become more widely spaced and shift away from the planet.
  • As Mp increases or h/r increases, gaps are opened more quickly.
  • As h/r decreases, gaps become narrower.

The authors also found that, among the three gap characteristics that they explored — width, depth, and location — gap location is the easiest and most robust characteristic for comparing simulations to observations. Of the three, gap location changes the least with time in their simulation runs. And for particularly narrow gaps that aren’t resolved well in observations or models, gap locations can still be more accurately determined than gap width or depth.

Figure 3: Comparisons between real disk observations on the left and the authors’ best-matching models on the right. In each of the models, the green plus marks the star’s location, which is masked out with a black circle, while the green dot marks the planet’s location. Each row corresponds to a different disk. Read from top to bottom, the disks are named HL Tau, TW Hya, and HD 163296, and the modeled planets have masses of 57, 29, and 65 Earth masses. We can see that some (though not all) of the gaps in the models match the gaps in the observations. [Dong et al. 2018]

With gap location as their major tool of analysis, the authors went on to compare their simulation results to three real protoplanetary disks, known as HL Tau, TW Hya, and HD 163296.

The authors emphasized that their models were not tailored or fitted to the unique structures of these disks. Instead, the authors used the best-matching simulations from the grid of Mp and h/r values that they’d already explored. More work would need to be done to specifically simulate each disk.

Figure 3 compares the three real protoplanetary disks to the authors’ best-matching models. They found that in each case, a simulated planet of sub-Saturn mass could produce gaps that roughly match the gaps we observe in these disks.

For the future, the authors point to observational microlensing surveys, as a way to learn about the planets possibly forming within protoplanetary disk gaps. With the help of their simulations, those surveys may have a better idea of where to look first.

About the author, Jamila Pegues:

Hi there! I’m a 2nd-year grad student at Harvard. I focus on the evolution of protoplanetary disks and extra-solar systems. I like using chemical/structural modeling and theory to explain what we see in observations. I’m also interested in artificial intelligence; I like trying to model processes of decision-making and utility with equations and algorithms. Outside of research, I enjoy running, cooking, reading stuff, and playing board/video games with friends. Fun Fact: I write trashy sci-fi novels! Stay tuned — maybe I’ll actually publish one someday!

Cas A

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org!

Title: The distribution of radioactive 44Ti in Cassiopeia A
Authors: Brian Grefenstette et al.
First Author’s Institution: California Institute of Technology
Status: Published in ApJ

NuSTAR

Artist’s concept of the NuSTAR X-ray satellite.

Massive stars die as core-collapse supernovae: the star can no longer produce the nuclear reactions that balance its strong gravity, and the star collapses onto its core. When this happens, large amounts of energy and neutrons are available to form elements heavier than iron. The distribution of elements produced in the deepest layers of the star as it goes supernova is key to understanding the mechanism by which the collapse of the star leads to an explosion. Radioactive decay powers the optical light emitted by the supernova around 50−100 days after the explosion. In fact, we can still see radioactive signatures in remnants that are hundreds of years old. In today’s paper, the authors use observations from the high energy X-ray satellite NuSTAR to study the distribution of 44Ti in the young supernova remnant Cassiopeia A (Cas A). The current distribution of radioactive elements and their decay products is linked to the local conditions in which they were synthesised when the explosion took place. Therefore, knowing where the 44Ti is now can shed light on the details of the supernova event that ended the life of Cas A’s progenitor star.

The Cas A Supernova Remnant

Cas A is known by radio astronomers as one of the brightest radio sources in the sky, although the bulk of its energy is emitted as thermal X-rays. These are produced as the ejecta from the star encounter the supernova remnant shock, which heats them to X-ray-emitting temperatures. Shocks are transition layers in which the thermodynamic properties of a plasma change rapidly; they arise whenever some material moves faster than the local speed of sound. Supernova remnants can show two kinds of shocks: the forward shock, which is the blast wave from the supernova explosion, and a reverse shock that forms as the forward shock bounces back when it encounters dense circumstellar material. We think Cas A’s explosion date is 1672, although there are no definite records of it being observed. It is one of the few historical supernova remnants found to be of type IIb from its light echoes (read this Astrobite for the details of supernova classification and this Astrobite to find out what makes light echoes such a mind-blowing astronomical technique).

Radioactive Elements and the Explosion

The production of radioactive elements is very sensitive to local conditions of density and temperature during explosive nucleosynthesis and can give us clues about the nature of the explosion mechanism. 44Ti has a half-life of ~58 years, which means a small amount of 44Ti in Cas A is still decaying today. We can see these decays as high-energy lines within the energy range covered by NuSTAR. In fact, the mass of 44Ti we measure today is directly proportional to its original mass, independent of what has happened to the ejecta in the ~340 years since the explosion. Another important radioactive product of explosive nucleosynthesis is 56Ni. Unlike 44Ti, 56Ni has a short half-life, ~6 days, which means we can only infer its original abundance from the abundance of its stable decay product, 56Fe.
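To see why the 44Ti is still visible while the 56Ni is long gone, we can plug the half-lives above into the standard radioactive decay law. The numbers below are only a rough sketch (the ~340-year age is approximate, and the exact fractions are purely illustrative):

```python
def surviving_fraction(age_years, half_life_years):
    """Standard decay law: N(t) = N0 * 2**(-t / t_half)."""
    return 2.0 ** (-age_years / half_life_years)

age = 340.0  # approximate years since the Cas A explosion

print(surviving_fraction(age, 58.0))          # 44Ti: ~1.7% still present and decaying
print(surviving_fraction(age, 6.0 / 365.25))  # 56Ni: effectively zero; only its decay product 56Fe remains
```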

Figure 1: X-ray features of Cas A. The thin rim in the outermost layer is continuum synchrotron emission from the forward shock. Note that much of the X-ray emission forms a bright circle. This is because the ejecta are being heated by the reverse shock, which was generated when the supernova blast wave ran into the surrounding interstellar medium. [Robert Hurt, NASA/JPL-Caltech]

The authors use the NuSTAR satellite to map 44Ti with 18″ resolution (see Figure 1; the size of Cas A is 5′). Their aim is to get a velocity for each 44Ti clump. The velocity vector has two components in the plane of the sky, which they can get by tracing the current position of each clump back to the center of expansion (which for Cas A has been measured quite reliably), assuming that the ejecta expand freely. The third component, along the line of sight, can be measured from the red and blue Doppler shifts of the lines coming from each spatially resolved clump — recall that an X-ray telescope labels the energy of each incoming photon.
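As a rough illustration of this bookkeeping, here is a minimal sketch of both measurements. None of the numbers come from the paper: the distance, age, angular offset, and line energy below are placeholder values chosen only to show how free expansion and the Doppler shift turn positions and photon energies into velocities.

```python
import math

# Illustrative inputs, not values from the paper.
DISTANCE_KPC = 3.4    # assumed distance to Cas A
AGE_YEARS = 340.0     # assumed time since the explosion
LINE_KEV = 67.9       # rest energy of one 44Ti decay-chain line (approximate)
C_KM_S = 299792.458

KPC_IN_KM = 3.0857e16
ARCSEC_IN_RAD = math.pi / (180.0 * 3600.0)
YEAR_IN_S = 3.156e7

def plane_of_sky_velocity(offset_arcsec):
    """Free expansion: a clump now offset_arcsec from the expansion
    center has moved that far in AGE_YEARS, so v = distance / time."""
    offset_km = offset_arcsec * ARCSEC_IN_RAD * DISTANCE_KPC * KPC_IN_KM
    return offset_km / (AGE_YEARS * YEAR_IN_S)  # km/s

def line_of_sight_velocity(observed_kev):
    """Non-relativistic Doppler shift of the decay line:
    v/c = (E_rest - E_obs) / E_rest; positive means moving away."""
    return C_KM_S * (LINE_KEV - observed_kev) / LINE_KEV

print(plane_of_sky_velocity(100.0))   # ~4,700 km/s for a clump 100" from the expansion center
print(line_of_sight_velocity(67.5))   # ~1,800 km/s receding
```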

The authors find that almost all of the 44Ti is moving in the direction opposite to Cas A's central compact object (CCO). The CCO is a neutron star that we think is the compact remnant of the supernova explosion. It is off-centre, so we believe that it received a ‘kick’ at the time of the explosion. Since the 44Ti is moving in the opposite direction, it could be part of the material that recoiled against the neutron star and gave it that kick (by momentum conservation). They also find regions where they see both 44Ti and Fe, regions with 44Ti and no Fe, and regions with Fe and no 44Ti (recall that Fe is the decay product of 56Ni). Since the ratio of 56Ni to 44Ti is sensitive to the local conditions during the explosion, these observations suggest that conditions during explosive nucleosynthesis varied from place to place: there were large-scale asymmetries in the density of the innermost ejecta when the star went supernova.

Perhaps it is not so surprising that core-collapse supernovae are complicated events. In the case of Cas A, 4–6 solar masses of material collapsed in on itself, with part of it forming a neutron star and part of it being expelled outward to interact with the medium around it. Yet despite the asymmetries evident in the element distributions, the remnant forms a shell that is surprisingly spherical. There are clearly many details left to iron out as to how all of this happens. Works such as this one can offer us a (radioactive) glimpse into how it all fits together.

About the author, Maria Arias:

Today’s post was written by an Astrobites guest author: Maria Arias, a third year PhD student at the University of Amsterdam. She studies supernova remnants at low radio frequencies with the LOFAR telescope. She’s always happy to take a break from data reduction, though, and go for a yoga class, a run, or a beer.

debris disk

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org!

Title: On the dynamics of pebbles in protoplanetary disks with magnetically-driven winds
Authors: Mohsen Shadmehri, Fazeleh Khajenabi, Martin Pessah
First Author’s Institution: Golestan University, Iran
Status: Published in ApJ

Do you take Jupiter for granted? When you were taking your dog for a walk last night, did you stop and think about how your dog could have been hit by a meteorite had Jupiter not been ejecting asteroids from the solar system for the last few billion years — conveniently protecting life on Earth from frequent giant-asteroid impacts? Or when you were eating breakfast this morning, did you stop and appreciate how you would not have been able to have breakfast if it weren’t for the fact that Jupiter migrated into the inner solar system shortly after it formed, dumping excess planetesimals into the Sun — and conveniently preventing the Earth from growing past its current size and becoming uninhabitable?

We may have Jupiter to thank for shaping the conditions that allow life to thrive on our planet. However, surveys of exoplanets in other star systems have found giant planets to be rare (though a new study is more optimistic). In smaller protoplanetary disks (such as those around smaller stars), the lack of gas giants is easy to understand: there simply may not be enough planet-forming material in the disk to build such a large planet. Even in larger disks, however, there is a longstanding open theoretical question of how planets of this size can form quickly enough, before the disk fades away after a few million years. If real protoplanetary disks cannot solve this problem, Jupiter-sized planets may indeed be few and far between.

The Pebble Solution

It was long thought that the rocky cores of gas-giant planets (5 to 10 times the mass of the Earth) formed from city-sized planetesimals (>1 km) merging together, but this takes too long. The reason this process is so slow is that there is a limited supply of planetesimals in any given area of the disk. In the last decade, it has been suggested that the largest dust particles (1 cm to 1 m), called “pebbles”, can act as a catalyst to speed up the gas-giant growth process, because they do not stay where they formed. Pebbles drift towards their stars faster than objects of any other size. This change of location makes it possible for pebbles all the way near the outer edge of the disk to reach a rocky core much further inwards and help it grow. Even though pebbles are small, they also make up a large fraction of the mass in a disk — allowing them to readily speed up a planet’s growth process.
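The claim that pebbles drift fastest can be made concrete with the standard radial-drift scaling (not taken from today's paper): the drift speed is proportional to St / (1 + St^2), where St is a particle's Stokes number, so it peaks for particles near St ~ 1 (roughly pebble-sized), while both tiny, well-coupled grains and large, drag-immune planetesimals drift far more slowly. A rough sketch, with the gas "headwind" speed chosen purely for illustration:

```python
def radial_drift_speed(stokes, headwind_m_s=50.0):
    """Standard drift scaling: v_drift = 2 * headwind * St / (1 + St**2).
    The ~50 m/s gas headwind is an illustrative, order-of-magnitude value."""
    return 2.0 * headwind_m_s * stokes / (1.0 + stokes**2)

for st in (1e-4, 1e-2, 1.0, 1e2):
    # fastest at St ~ 1: pebbles drift, while fine dust and planetesimals mostly stay put
    print(st, radial_drift_speed(st))
```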

However, today’s paper by Mohsen Shadmehri et al. suggests pebbles may not be a good solution to this problem in all cases.

Turbulence vs. Magnetic Winds

Shadmehri et al. challenge the robustness of using pebbles as a catalyst by more accurately modeling how disks accrete onto their stars. Protoplanetary disks are also called accretion disks because the gas and dust at the inner edge can fall into and be consumed by the star at the center, while material throughout the rest of the disk also tends to drift further inward (see Figure 1). There are two main explanations as to why a disk’s material accretes onto its star — turbulence and magnetic winds — but it is unclear which effect is dominant (see this bite for a detailed overview).

While most studies in the past have assumed that turbulence is primarily responsible for the disk’s accretion, the explanation of magnetic winds has gained traction in part because theoretical studies have had trouble finding a source of turbulence strong enough to account for the observed high levels of accretion onto young stars. Previous studies supporting the idea of pebbles as a catalyst for the growth of rocky cores completely neglected the role of magnetic winds in their models.

Figure 1. Diagram of a disk around a young star. Magnetic fields fling a little bit of gas out of the disk (green) as it rotates. To conserve angular momentum, the rest of the disk flows inwards. Meanwhile, dust drifts inwards for a different reason (drag forces). With more vertical diffusion (such as with strong winds), the dust will spread more out of the midplane in the up and down directions. [Scott et al. 2018]

A Pebble Problem

In order to test the effects of magnetic winds on how much pebbles can contribute to the growth of planet cores, Shadmehri et al. develop an analytic model of the evolution of a disk with magnetic winds of different strengths. They keep the total rate of accretion fixed to match observations, implying that disks with stronger magnetic winds also have less turbulence (while disks with weaker winds are more turbulent). Interestingly, in disks with stronger winds, they find that pebbles do not speed up the growth of rocky cores enough for them to ultimately grow into gas-giant planets later on.

Pebbles are a less effective catalyst with strong winds because it takes too long for the smallest dust particles (1 µm) to grow to pebble-size (1 cm to 1 m). This slower rate of growth is not because of the winds themselves, but because there is less turbulence in these disks. With less turbulence, dust particles stay closer to their usual near-circular orbits. As a result, they also have slower relative velocities, which in turn makes them less likely to collide with each other and merge into bigger particles large enough to be considered pebbles.
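The link between turbulence and collision speeds can be sketched with a commonly used approximation (again, not quoted in the paper): for small grains, the turbulence-driven relative velocity scales roughly as sqrt(3 * alpha * St) times the local sound speed, so dropping the turbulence parameter alpha by a factor of 100 cuts collision speeds by a factor of 10. All numbers below are illustrative:

```python
import math

def turbulent_collision_speed(alpha, stokes, sound_speed_m_s):
    """Approximate turbulence-driven relative velocity between small grains
    (valid for Stokes numbers well below 1): dv ~ sqrt(3 * alpha * St) * c_s."""
    return math.sqrt(3.0 * alpha * stokes) * sound_speed_m_s

c_s = 500.0  # illustrative mid-disk sound speed in m/s
St = 1e-3    # Stokes number of a small grain

print(turbulent_collision_speed(1e-2, St, c_s))  # more turbulent disk (weak winds): ~2.7 m/s
print(turbulent_collision_speed(1e-4, St, c_s))  # calmer disk (strong winds): ~0.27 m/s
```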

Additionally, the authors account for how stronger magnetic winds spread the dust towards the surface of the disk, instead of the dust staying flat in the midplane (see Figures 1 and 2). With the dust more spread out, the dust density throughout the disk drops, which in turn means fewer pebbles grow in the disk.

Figure 2. Dust diffusion coefficient (αD) as a function of distance from the central star. With the strongest winds (β0 = 103), there is more diffusion, which spreads out the dust particles in the disk — thereby making it harder for them to grow to pebbles. [Shadmehri et al. 2018]

Because stronger magnetic winds cause pebbles to grow both more slowly and in smaller numbers, the net effect is to make them less effective at feeding the growth of cores in the disk (see Figure 3).

Figure 3. Pebble accretion rates over time. The case with the strongest wind (β0 = 103) has the lowest accretion rate. Over 1 Myr, the authors calculate that this case contributes a total of only 0.1 Earth masses of pebbles to a rocky core in the disk — not nearly enough to help it grow to the 5+ Earth masses needed for it to become a gas giant. In contrast, the medium wind case contributes 56 Earth masses, which is more than enough. The red dashed line shows the accretion rate from a previous study with no winds. [Shadmehri et al. 2018]

Reconciling with Jupiter

Fortunately, pebbles can still be excellent catalysts in other cases, including some with strong magnetic winds. For example, if the total accretion rate onto the star is higher (which would imply strong turbulence in addition to strong winds), pebbles can still effectively aid cores in growing large enough to eventually become gas giants. These higher accretion rates are preferentially found around younger stars, which might suggest that pebbles are better at solving the mystery of gas-giant growth for cores that grow earlier on in a young star’s lifetime.

All in all, disks with strong magnetic winds and low accretion rates may prevent pebbles from helping Jupiter-sized planets grow, supporting observational evidence that these planets may be rare. That is all the more reason to be thankful that the Jupiter-sized planet in our own solar system is there to make our lives better.

About the author, Michael Hammer:

I am a 3rd-year graduate student at the University of Arizona, where I am working with Kaitlin Kratter on simulating planets, vortices, and other phenomena in protoplanetary disks. I am from Queens, NYC; but I’m not Spider-Man…
