Astrobites RSS

pulsar pulses

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites that looks back at a paper from 2006; the original can be viewed at astrobites.org.

Title: Predicting the Starquakes in PSR J0537-6910
Authors: John Middleditch, Francis E. Marshall, Q. Daniel Wang, Eric V. Gotthelf, William Zhang
First Author’s Institution: Los Alamos National Laboratory
Status: Published in ApJ

pulsar

Artist’s illustration of a pulsar, a fast-spinning, magnetised neutron star. [NASA]

Pulsars (rotating, magnetised neutron stars) emit radiation that sweeps periodically over the Earth (like the beam of a lighthouse sweeping across the ocean). We detect this radiation as a sequence of pulses, with the frequency of the pulse corresponding to the frequency of rotation of the star. Pulsars will typically spin down over their lifetime due to electromagnetic braking, but this is a fairly slow process. Occasionally, in some pulsars, we will detect a sudden increase in the frequency of the pulses. This is called a pulsar glitch. Essentially, the mismatch in the rotation of the fluid inside the star and the solid crust on the outside of the star causes a catastrophic event that we see as an increase in the frequency of the pulses.

The question that the paper we’re exploring today — originally published in 2006 — seeks to answer is: can you predict the next glitch in a pulsar? In general, this is a challenging task, with different pulsars exhibiting different glitching behaviours that need to be captured in your model. However, for one particular pulsar, PSR J0537-6910, this can be accomplished fairly straightforwardly, due to the strong correlation between the size of each glitch and the waiting time until the next glitch. The authors of today’s paper exploit this correlation to develop a method to predict the next starquake on PSR J0537-6910.
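
To see how simple the prediction can be, here is a minimal sketch of the idea in Python: fit a straight line to the correlation between glitch size and the waiting time until the next glitch, then extrapolate from the most recent glitch. The glitch sizes and waiting times below are made-up illustrative values, not the measurements from the paper.

```python
import numpy as np

# Hypothetical glitch history: fractional frequency jumps (delta-nu / nu)
# and the waiting time (in days) until the following glitch.
glitch_sizes = np.array([0.2e-6, 0.5e-6, 0.1e-6, 0.4e-6, 0.3e-6])
waiting_days = np.array([60.0, 160.0, 30.0, 130.0, 95.0])

# PSR J0537-6910 shows a tight, roughly linear size/waiting-time
# correlation, so a straight-line fit is enough for a prediction.
slope, intercept = np.polyfit(glitch_sizes, waiting_days, deg=1)

latest_glitch_size = 0.35e-6  # size of the most recent glitch (illustrative)
predicted_wait = slope * latest_glitch_size + intercept
print(f"Predicted waiting time until the next glitch: {predicted_wait:.0f} days")
```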

What Is a Pulsar Glitch?

Glitches are thought to be caused by superfluidity inside a neutron star. When a substance cools below a critical temperature Tc, it forms a superfluid state, i.e., a state that flows without viscosity. But neutron stars are much hotter than any substances we find on Earth (they run around 10^6 K). So how can a neutron star be cool enough to contain a superfluid? The matter inside a neutron star is extremely dense, and it has very different properties from terrestrial matter. In particular, the matter inside a neutron star has a high critical temperature — Tc ~ 10^9 K — and the neutron fluid inside the star can therefore form a superfluid even at high temperatures.

A cartoon of the angular velocity of the crust of a pulsar vs. time during a glitch. The pulsar glitch is characterised by (1) steady spin-down of the star, (2) a step-like jump in frequency and (3) an ensuing gradual relaxation back to the original spin-down rate. [Adapted from van Eysden 2011]

The pulsar spins down due to electromagnetic processes. The pulsar is a rapidly rotating magnetised body — if its magnetic dipole is inclined at some angle to the rotation axis, it emits magnetic dipole radiation at the rotation frequency. The energy carried away by this radiation comes at the expense of the star’s rotational energy, so the star spins down; we call this magnetic braking.
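
For reference, the standard textbook (vacuum) magnetic-dipole braking law, which is not quoted in the paper itself, reads as follows in cgs units, for a star with magnetic moment μ, spin frequency Ω, inclination angle α, and moment of inertia I:

```latex
\dot{E}_{\rm dipole} = -\frac{2}{3}\,\frac{\mu^{2}\,\Omega^{4}\sin^{2}\alpha}{c^{3}},
\qquad
I\,\Omega\,\dot{\Omega} = \dot{E}_{\rm dipole}
\;\Rightarrow\;
\dot{\Omega} \propto -\,\Omega^{3}.
```

The steep dependence on Ω means that fast-spinning pulsars lose rotational energy especially quickly.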

The crust of the neutron star spins down continuously because the magnetic field lines are locked into the crust. However, the superfluid inside the star is likely to be at least partially decoupled from the spin-down of the crust. Therefore, an angular velocity lag builds up between the crust and the superfluid as the superfluid continues to spin at the same rate for a period of time, uninhibited by the magnetic braking. Eventually, the lag builds up to a critical value. At this stage, angular momentum is transferred from the superfluid to the crust, which causes a glitch. Because glitches are thought to be intimately connected to the behaviour of the interior superfluid, astronomers regard them as a rare window into the processes occurring inside the star.

How Do Glitches Occur?

It is still not known for certain exactly why and how glitches occur in neutron stars, but a number of possible mechanisms have been proposed. For example, vortex avalanches are a possible mechanism for glitches. In the superfluid, there are many vortices (i.e. tiny whirlpools) induced by the rotation of the star. Vortices are “trapped” or “pinned” to certain locations in the crust. This just means they are fixed in that location until there is enough force to unpin them. When enough lag builds up between the superfluid and the crust, the force is sufficient to unpin them. As they unpin, they transfer angular momentum to the crust and cause a glitch. Another possible mechanism is starquakes, in which the neutron-star crust cracks and the matter inside the star rearranges.

Glitching pulsars can be thought of as belonging to two different classes: Crab-like and Vela-like. The Vela pulsar typically has large glitches which occur fairly periodically, while the Crab pulsar has a power law distribution of glitch sizes. Therefore, it is difficult to develop a model that captures the behaviour of these two classes simultaneously. In this paper, the authors focus on a single pulsar (PSR J0537-6910) and use its unique properties to predict when it will next glitch.

Reliable Glitcher: PSR J0537-6910

PSR J0537-6910 is a 62-Hz pulsar in the Large Magellanic Cloud. The authors report on seven years of observation of this pulsar, containing 23 glitches. PSR J0537-6910 is unique among glitching pulsars. Firstly, it is the fastest-spinning young pulsar and one of the most actively glitching pulsars we know of. Secondly, its glitching properties are particularly favourable to glitch prediction due to the very strong correlation between the waiting time from one glitch to the next and the amplitude of the first glitch, shown in the figure below. The authors suggest the predictable behaviour of the glitches of this pulsar is associated with the angular velocity lag build-up causing a “cracking” in the crust as glitches occur, with the smaller glitches that precede a large glitch corresponding to more localised cracks.

Figure 3: Waiting time vs. glitch size for PSR J0537-6910. [Middleditch et al. 2006]

Impressively, we’re able to predict the waiting time for the next glitch of PSR J0537-6910 to within a few days. Predictions of this accuracy have not been achieved with any other pulsar.

About the author, Lisa Drummond:

I am an astrophysics PhD student with interests in compact objects and gravitational waves. I studied neutron star interiors for my Masters thesis at the University of Melbourne, Australia and now I am doing my PhD at MIT.

planet formation

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: In situ formation of icy moons of Uranus and Neptune
Authors: Judit Szulágyi, Marco Cilibrasi and Lucio Mayer
First Author’s Institution: University of Zurich, Switzerland
Status: Accepted to ApJL

With over 100 moons between them, gas giants Saturn and Jupiter host most of our solar system’s satellites. Moons are thought to form in the gaseous circumplanetary disks (CPDs) that surround giant planets during their later stages of formation; the satellites develop from the disks in much the same way as planets themselves are formed.

But what about smaller planets like Neptune and Uranus? Today’s bite delves into the world of radiative hydrodynamical simulations to see whether CPDs — and thus moons — could also form around our ice giants.

The Real Moons

Given that Uranus hosts five major moons in similar, circular orbits, this ice giant’s satellites likely formed in a circumplanetary disk. A debris disk, like the one that may have formed our own Moon, is unlikely; debris-disk satellites would have very little water, which is not what we observe for Uranus’s moons.

Figure 1: Triton, as seen by the Voyager 2 spacecraft. [NASA/Jet Propulsion Lab/U.S. Geological Survey]

Neptune, however, is only home to one major moon, Triton, which has an unusual composition and retrograde orbit. Triton is more than likely a captured Kuiper Belt object that is thought to have severely disrupted the dynamics of the pre-existing Neptunian system. In fact, previous work suggests that without the existence of satellites around Neptune, it wouldn’t have been possible for the system to capture an object like Triton.

Let’s Form Some Disks

Forming a moon isn’t an easy job for a planet. Previous studies have revealed that there are two key planetary properties that determine how likely it is for a gaseous CPD to form around a planet — mass and temperature.

  • Mass: Terrestrial planets like Venus are too small for CPDs to form; any satellites that exist around them are usually captured (as in the case of Mars) or the result of a planet–planet impact (as in the case of Earth).
  • Temperature: CPDs are more likely to form if the planet is cooler, BUT a cooler planet radiates its formation heat faster and has less time to form a disk.

The authors deployed hydrodynamical simulations to recreate the later stages of planet formation for Uranus and Neptune. This involved setting the planets as point masses in the centre of the simulation surrounded by a gas disk, and then letting simulated nature (heat transfer, ideal gas laws, gravity) take over. For more details regarding hydrodynamical simulations, see this post on simulating the entire universe (!!) and this one on gas accretion.

simulated circumplanetary disks

Figure 2: Zooming in to the circumplanetary disk around Uranus (left column) and Neptune (right column). The different rows indicate the planetary surface temperatures: 100 K (top), 500 K (middle) and 1,000 K (bottom). [Szulágyi et al. 2018]

From the gas density plots in Figure 2, in which yellow/white indicates the densest region, we see that once the simulated Neptune and Uranus cooled below 500 K, a circumplanetary disk was able to form. This conclusion is drawn visually from the disk-like structure that has formed at 100 K (top row of Figure 2); this structure is not visible at 500 K and 1,000 K (middle and bottom rows of Figure 2). It makes sense that both planets require a similar temperature, as they are of almost equal mass. Next, the authors created a synthetic population of satellite-forming seeds within the disk to see if these protosatellites would turn into fully fledged moons by accreting matter.

Simulated Moons vs. Reality

Formation timescale of moons around Uranus

Figure 3: Formation timescale of moons around Uranus (left) with the distribution of their masses on the right. The red vertical lines represent Uranus’s 5 major moons. [Szulágyi et al. 2018]

In the case of the 100-K CPD around Uranus (Figure 2, top left panel), the majority of the synthetic population of moons that developed around Uranus formed over a 500,000-year period, at locations in the disk where the temperature was below the freezing point of water. This means that many of these moons would be icy — just like the actual moon population observed around Uranus. The masses of the moons spanned several orders of magnitude — a range that includes the masses of the satellites we observe today (red lines in Figure 3). Around 5% of the authors’ simulations yielded 4–5 moons with 0.5–2 times the mass of the current Uranian satellites.

formation timescale of moons around Neptune

Figure 4: Like Figure 3, above, the formation timescale of moons around Neptune (left) with the distribution of their masses on the right. The red vertical line represents the moon Triton. [Szulágyi et al. 2018]

Similar trends were also observed for Neptune: once again, the entire population of moons formed where temperatures were below the freezing point of water, meaning Neptune is also more likely to form icy satellites. Generally, the simulated Neptune struggled to make moons as massive as Triton. This isn’t worrying, however, since Triton is likely to be a captured Kuiper Belt object.

So, overall, it is possible to form satellites around ice giants! This is an exciting result for exomoon lovers because Neptune-mass exoplanets are the most common category of exoplanet we’ve found so far. Furthermore, icy moons are the main targets in the search for extraterrestrial life in our own solar system; ice-giant satellites elsewhere in the universe could be similarly promising targets in our search for habitable worlds.

About the author, Amber Hornsby:

Third-year postgraduate researcher based in the Astronomy Instrumentation Group at Cardiff University. Currently, I am working on detectors for future observations of the Cosmic Microwave Background. Other interests include coffee, Star Trek and pizza.

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this guest post from astrobites; the original can be viewed at astrobites.org.

Title: An Extreme Protocluster of Luminous Dusty Starbursts in the Early Universe
Authors: I. Oteo et al.
First Author’s Institution: University of Edinburgh, UK
Status: Published in ApJ

The biggest structures known in the universe are galaxy clusters: they are made of hundreds or even thousands of galaxies, lots of gas, and a huge amount of dark matter. But a long time ago, these giants were babies. Right after the Big Bang, when no galaxies, stars or even molecules had yet formed, the universe was extremely homogeneous, although it had density fluctuations with relative amplitude of ~10^-5. During the cosmos’ expansion, the regions that started out slightly denser became increasingly dense, because mass attracts mass. Then, clumps of gas turned into stars. Due to their mutual gravitational attraction, these stars gradually moved closer to each other, growing into galaxies that congregated further — also because of gravity — into today’s galaxy clusters (for a deeper understanding, read this).

Since looking at distant astronomical objects means looking into the past, we may be able to see the progenitors of galaxy clusters, known as protoclusters. They should be far from us (at high redshifts, z), composed of dozens of galaxies forming lots of stars, and they should therefore contain a huge amount of dust and gas (the stars’ ingredients). From our perspective on Earth, we should see these protoclusters as distant aggregations of galaxies that are very bright and very red, visible particularly at submillimeter/millimeter wavelengths (the wavelengths at which we perceive emission from the gas and dust of distant galaxies that has been heated by the galaxies’ stars). Observing these systems may teach us how the universe and its structures evolve through cosmic time. In today’s paper, the authors report the discovery of a protocluster core with extreme characteristics: it is super dense, super massive, and super old.

Multiple Observations

In an attempt to find ideal protoclusters, the authors looked for sources in the H-ATLAS survey. They chose the reddest protocluster-like system and baptized it the Dusty Red Core (DRC). To find out more about DRC, they made many different follow-up observations.

DRC

Figure 1: From left to right: wide-field view of DRC (the A region) from APEX; DRC galaxies observed with ultra-deep ALMA continuum; high-resolution ALMA imaging shows details of DRC-1, composed of three star-forming clumps. [Oteo et al. 2018]

Getting the Information

All this data leads to several new findings about DRC. Firstly, the continuum and imaging observations show that DRC is actually composed of 11 bright, dusty galaxies instead of a single object as first thought. Its brightest component, DRC-1, is formed by three bright clumps.

The easiest way to find the protocluster’s redshift is by using the lines emitted by the molecules and atoms of the gas filling the galaxies, such as 12CO, H2O, and atomic carbon (C I). Ten of the 11 galaxies in DRC are at the same redshift of z = 4.002 (the final one didn’t have enough lines to measure a redshift). Since the expansion of the universe means that more distant objects recede faster and have higher redshifts, this can be converted to a luminosity distance of ~117 billion light-years, meaning we see these galaxies as they were only 1.51 billion years after the Big Bang. The authors also find that the components are concentrated into a region of 0.85 × 1.0 million light-years. This may seem like an enormous region — but for astronomy, this qualifies as extremely overdense. Knowing that, it is safe to say that at least ten objects of DRC are members of a protocluster core in the initial evolutionary stages of the universe.
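
If you want to check the distance and age numbers yourself, here is a quick sketch with astropy; the Planck 2015 cosmology is used purely for illustration and may differ slightly from the cosmology adopted by the authors.

```python
import astropy.units as u
from astropy.cosmology import Planck15

z = 4.002  # redshift of the ten co-located DRC galaxies

# Luminosity distance and the age of the universe at that redshift.
d_L = Planck15.luminosity_distance(z).to(u.lyr)
age = Planck15.age(z).to(u.Gyr)

print(f"Luminosity distance: {d_L.value / 1e9:.0f} billion light-years")  # ~120
print(f"Age of the universe at z = {z}: {age:.2f}")                       # ~1.5 Gyr
```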

The continuum observations are also useful for calculating how much gas and dust mass is converted into stars per year (the star formation rate, or SFR). The lower limit obtained is 6,500 solar masses per year, which is the highest star formation rate ever found for such a distant protocluster. Furthermore, the protocluster’s molecular gas mass and total mass were estimated. The gas mass — estimated using the C I emission lines — is found to be at least 6.6 × 10^11 solar masses. The total mass of the protocluster was calculated in three different ways, with an outcome of as much as 4.4 × 10^13 solar masses (for comparison, the estimated mass of the Local Group is ~2 × 10^12 solar masses). Based on these results and cosmological simulations, the authors concluded that DRC may evolve into a massive galaxy cluster in today’s universe, like the Coma Cluster.

Protoclusters like DRC are key to understanding a remote part of the universe’s history. Moreover, DRC may help us to infer information about the unknown part of the universe — which is huge, since the dark sector corresponds to 95% of the cosmos. The protocluster analyzed in this paper is bright and massive, and it was measured accurately by modern telescopes despite its enormous distance from us. Those measurements may be used as parameters to test different cosmological theories, thus helping us to understand the universe’s big picture.

About the author, Natalia Del Coco:

Today’s guest post was written by Natalia Del Coco, a masters student at the University of São Paulo, Brazil. In her research, she looks for correlations between the physical properties of clusters of galaxies and the cosmic web around them. Besides being an astronomer, Natalia is also a ballerina, a shower singer, and a backpacker.

primordial black hole

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Observing the influence of growing black holes on the pre-reionization IGM
Authors: Evgenii Vasiliev, Shiv Sethi, & Yuri Shchekinov
First Author’s Institution: Southern Federal University, Russia
Status: Published in ApJ

Cosmologically important phenomena are typically discussed on scales of gigaparsecs (Gpc); to give you some idea of the sizes involved, one Gpc could fit approximately 33,000 Milky Way galaxies end to end. With that crazy scale in mind, today we’ll gain an understanding of how parsec-scale astrophysical events can have far-reaching cosmological effects.
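
As a quick sanity check on that number (assuming a Milky Way diameter of roughly 30 kpc, a value chosen here for illustration rather than taken from the paper):

```python
import astropy.units as u

# How many Milky Way diameters fit across one gigaparsec?
n = ((1 * u.Gpc) / (30 * u.kpc)).decompose().value
print(f"About {n:,.0f} Milky Way diameters per gigaparsec")  # ~33,000
```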

Cosmic Dawn and Reionization

The cosmological period prior to the reionization of the universe’s hydrogen is typically referred to as the cosmic dawn. This stage in the history of the universe is marked by the formation of the first stars and galaxies. But it doesn’t end there — connecting these large structures throughout the universe is the intergalactic medium (IGM). The IGM during the cosmic dawn consists of mostly neutral hydrogen, and compared to galaxies it is much less dense (~10^-27 kg/m^3, compared to the average density of our Milky Way, which is ~10^-19 kg/m^3). The ultraviolet (UV) radiation from these early stars and galaxies is what most astronomers and cosmologists believe led to reionization. Put simply, this ionizing radiation extended symmetrically from these sources, and over time the resulting regions of reionized IGM began to overlap, leading to complete reionization of the universe. While this is our current best guess, cosmologists typically wonder: What roles do other structures play during this period? One type of galaxy of interest is those hosting Active Galactic Nuclei (AGN) — galaxies with very dense cores in which a supermassive black hole (SMBH) at the center is accreting matter. These AGN are extremely luminous and can produce a lot of X-ray and UV emission. What we’ll be exploring today is how these SMBHs influence their surrounding regions.

Early Black Holes

The very first black holes were most likely few and far between and should have had masses several orders of magnitude less than what can be observed today (but in light of new detections we may need to question this). This follows from the fact that for black holes to exist, you need some form of already dense matter to collapse under extreme gravitational conditions. These conditions, however, weren’t as ripe during the cosmic dawn, as matter was just starting to clump together to form behemoth objects such as Population III stars. During this time, AGN were fueled by accretion onto the type of early SMBHs today’s astrobite investigates.

SMBHs at the centers of AGN affect the IGM by reionizing and heating the gas: as the SMBH accretes local matter, hard non-thermal radiation is emitted. So how does the accretion rate of these early black holes influence the surrounding regions? We should expect there to be some relationship between the rate of matter accretion and the distance of influence. Following directly from this, we should also be able to estimate how observable an object like this might be with a radio interferometer. Luckily, we have the authors of today’s paper to help answer this for us.

The authors model the accretion with a growth law that depends on the initial black-hole mass M_BH(t=0), the radiative efficiency ε, which tells us how easily accreted matter is converted to radiation, and the Eddington timescale T_E = 0.45 Gyr (read more here). This is then related to the resulting ionizing luminosity, which they assume to have a power-law relationship, and this eventually leads us to the ionizing radiation flux.
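
The equation itself did not survive the repost. For Eddington-limited accretion with these ingredients, a standard growth law takes the form below; this is an assumed expression for illustration, and the authors’ exact formula may differ:

```latex
M_{\rm BH}(t) = M_{\rm BH,\,t=0}\,
\exp\!\left[\frac{1-\epsilon}{\epsilon}\,\frac{t}{T_{E}}\right],
\qquad T_{E} \simeq 0.45\ \mathrm{Gyr}.
```

In this form, a lower radiative efficiency means faster growth, because less of the accreted rest-mass energy escapes as radiation rather than adding to the black hole’s mass.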

brightness temperatures

Figure 1: The relationship between brightness temperature and the distance of ‘influence’ of ionizing radiation plotted at several redshifts. The upper plot is for ε = 0.1 and the bottom plot for ε = 0.05. [Vasiliev et al. 2018]

These growing black holes are placed in a host halo of neutral hydrogen, where the effects of the ionizing/heating front radius can be related to an observable differential brightness temperature, ΔTb, which measures the contrast of the neutral hydrogen 21-cm line against the Cosmic Microwave Background. Their results for a black hole with an initial mass of 300 solar masses, starting at a redshift of z ~ 20, with radiative efficiencies of ε = 0.1 (upper) and ε = 0.05 (lower), can be seen in Figure 1. They also show a non-growing black hole (dashed) as a point of reference.

We can see from Figure 1 that growing black holes exert a larger influence, in terms of distance from the black hole, than non-accreting ones (dashed). The authors show that accreting black holes in the early universe can influence their surroundings on scales of 10 kpc to 1 Mpc, a very large dynamic range.

These distances convert to just about the angular scales that radio interferometers such as LOFAR might be able to probe, which is certainly an exciting prospect. This would be a huge achievement: being able to probe down to kpc scales and link them to phenomena seen at some of the largest scales could provide us some much-needed information on how the earliest black holes formed.

About the author, Joshua Kerrigan:

I’m a 5th year PhD student at Brown University studying the early universe through the 21cm neutral hydrogen emission. I do this by using radio interferometer arrays such as the Precision Array for Probing the Epoch of Reionization (PAPER) and the Hydrogen Epoch of Reionization Array (HERA).

BNS ejecta

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Binary Neutron Star Mergers: Mass Ejection, Electromagnetic Counterparts and Nucleosynthesis
Authors: David Radice, Albino Perego, Kenta Hotokezaka, Steven A. Fromm, Sebastiano Bernuzzi, Luke F. Roberts
First Author’s Institution: Princeton University
Status: Submitted to ApJ

Neutron star mergers are absolutely fascinating. These events are not just sources of gravitational waves but of electromagnetic radiation all across the spectrum — and of neutrinos as well. If you missed the amazing multimessenger observations last year that gave us a peek into what binary neutron star (BNS) systems are up to, please check out this bite about GW170817! The observations had major implications for many fundamental questions in astrophysics. The gravitational-wave signal from the merger was detected along with the electromagnetic radiation produced. As a result, we were able to confirm that neutron-star mergers are sites where heavy elements (those beyond iron) can be made via the r-process.

While all of this has undoubtedly been extremely cool (and we’re holding our collective breath for more data), there’s a lot of work that remains to be done. We need accurate predictions of the quantity and composition of material ejected in mergers in order to fully understand the origin of the heavy elements, and to say whether BNS mergers are the only r-process site. To investigate such questions, we require theoretical models that include all the relevant physics. Today’s paper presents the largest set of NS merger simulations with realistic microphysics to date. By realistic microphysics, we mean that the simulations also take into account what the atoms and subatomic particles are doing. This is done by using nuclear-theory-based descriptions of the matter in neutron stars, and by including composition and energy changes due to neutrinos (albeit in an approximate way).

Simulating Neutron-Star Mergers

Modeling BNS mergers is a complex multi-dimensional problem. We need to simulate the dynamics in full general relativity, along with the appropriate microphysics, magnetic fields and neutrino treatment. Remarkable progress has been made, particularly in the last decade, since the first purely hydrodynamical merger simulations were carried out. Still, the problem remains extremely computationally expensive and simulation efforts have traditionally focused either on carrying out general relativistic simulations while sacrificing microphysics, or on incorporating advanced microphysics with approximate treatments of gravity. If you care about the merger dynamics and the dynamical ejecta, i.e., material ejected close to the time of merger due to tidal interaction and shocks, you need fully general relativistic simulations, like the ones presented in today’s paper.

The authors carry out 59 high-resolution numerical relativity simulations, using binaries with different total masses and mass ratios. They also use different descriptions of the high-density matter in neutron stars. Neutrino losses are included in all cases, and some simulations include neutrino reabsorption as well. A few simulations even include viscosity. The authors systematically study the mass ejection, associated electromagnetic signals, and the nucleosynthesis from BNS mergers.

Mass Ejection

Fig 1. Electron fraction for an example simulation. The neutron stars are 1.35 solar masses each and neutrino reabsorption has been included. The bulk of the ejecta lies within a ~60-degree angle of the orbital plane. [Radice et al. 2018]

Fig 1 shows the electron fraction of the material in one of the simulations over time. First, material is ejected due to tidal interactions, close to the orbital plane. Next, more isotropic, shock-heated material is ejected. This component has higher velocity and quickly overtakes the tidal component, as seen in the figure. The two components interact and the tidal component gets reprocessed to slightly higher electron fractions.

The authors also find a new outflow mechanism, aided by viscosity, that operates in unequal mass binaries. This ejecta component, called “viscous-dynamical ejecta”, is discussed in detail in a companion paper.

Using their results, the authors fit empirical formulas that predict the mass and velocity of ejecta from BNS mergers. Even more material can become unbound from the remnant object on longer timescales (“secular ejecta”), but this is not studied here due to the high computational costs of running the simulation for that long.

Nucleosynthesis

The authors study in detail how the r-process nucleosynthesis depends on the binary properties and neutrino treatment. Sample nucleosynthesis yields are presented in Fig 2. You’ll notice that the second and third r-process peaks are robustly produced while the first peak shows more variation. In fact, the first peak is quite sensitive to the neutrino treatment as well as the binary mass ratio.

Fig 2. Electron fraction (left) and nucleosynthesis yields (right) of the dynamical ejecta. “A” refers to the mass number of the nucleus. The different colored lines represent binaries with different mass ratios. The green dots show solar abundances. [Radice et al. 2018]

Electromagnetic Signatures

The radioactive decay of the freshly synthesized r-process nuclei powers electromagnetic emission referred to as a “kilonova”. Other electromagnetic signals can also be produced by the different ejecta components.

The authors compute kilonova curves for all their models. They find that binaries that form black holes immediately after merger do not have massive accretion disks and produce faint and fast kilonovae. When the remnants are long-lived neutron stars, more massive disks are formed and the kilonovae are brighter and evolve on longer timescales. Example kilonova curves are shown in Fig 3.

Fig 3. Kilonova light curves in three bands for three different models: a binary with prompt BH formation (left), a binary forming a hypermassive neutron star (middle), and a binary forming a long-lived supramassive NS (right). Solid and dashed lines correspond to different viewing angles. [Radice et al. 2018]

The authors also compute the synchrotron radio signal due to interaction of the ejecta with the interstellar medium. Example radio lightcurves are shown in Fig 4 along with the afterglow in GW170817. A small fraction of the ejecta is accelerated by shocks shortly after merger to velocities >0.6c, producing bright radio flares. The flares can probe the strength with which the neutron stars bounce after merger and in turn probe matter at extreme densities. Some of the models predict that the synchrotron signal from GW170817 will rebrighten in months to years after the merger!

Fig 4. Radio light curves of the dynamical ejecta of one model at 3 GHz, compared with GW170817. The ISM number density n is a parameter of the model used for generating the curves. [Radice et al. 2018]

Looking Ahead

Systematic investigations are key to understanding complex events such as neutron-star mergers. Improved theoretical modeling, with a push towards incorporating all the relevant physics in merger models, will not only help us understand what we saw last year but also set us up for the next set of observations!

About the author, Sanjana Curtis:

I’m a grad student at North Carolina State University. I’m interested in extreme astrophysical events like core-collapse supernovae and compact object mergers.

exoplanet transit

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: A More Informative Map: Inverting Thermal Orbital Phase and Eclipse Lightcurves of Exoplanets
Authors: Emily Rauscher, Veenu Suri, Nicolas B. Cowan
First Author’s Institution: University of Michigan, Ann Arbor
Status: Submitted to AAS journals

During a recent visit to the Dutch National Maritime Museum I came across the 1648 world map by cartographer John Blaeu. In addition to being the first world map to adhere to the Copernican worldview, Blaeu’s map is an extremely detailed document of the knowledge of the world at the time, sourced solely from information gathered by explorers through the ages on their travels to the far reaches of the world. Its errors and its subjectivity to the western view of the world are quite amusing, but there is still a high level of detail and surprising accuracy in many places. Through these details, it is clear that cartographers must meticulously piece together information from a large number of sources of varying reliability. Creating maps of exoplanets, a practice beginning to be known as exocartography, is an equally challenging observational and mathematical problem, and it is the one addressed by today’s paper.

transit schematic

Figure 1: Schematic of transits and occultations (also referred to as secondary eclipse) in an exoplanetary system. [Winn 2010]

Creating maps of exoplanets can give an idea of the presence of climatic features in the planetary atmosphere or surface. When compared to predictions from general circulations models, these features can give us insight into the physical processes in play in the planets’ atmospheres. What observations do we need to create these maps? Scientists continuously monitor the brightness of a planetary system as different regions of the planet come into view; a schematic of a planet’s transits and occultations as it orbits is shown in Figure 1. The disk-integrated flux of the planet (the total flux from the planet’s surface, visible as a circular disk), observed as a function of time, can then be converted to flux (more easily so for a tidally locked planet) as a function of spatial locations on the planet (which are coming into view as a function of time) to render a 2D brightness map of the planet for the chosen wavelength band.

However, this technique can be sensitive to the choice of map structure and uncertainties in the orbital parameters of the system. As a result, it has only been reliably attempted for a few well-studied hot Jupiters — for example, HD189733 — and hot super-Earths to date. Today’s paper proposes a method that maximizes the information in the flux maps retrieved from phase curves and secondary eclipse observations, while also accounting for the effect of orbital parameter uncertainties.

A Closer Look at Exocartography

It may still be a few decades before we successfully send a spacecraft to get an up-close image of the nearest exoplanet. Until then, astronomers will mostly be busy extracting brightness maps of exoplanets from high-precision photometric observations. In this context, phase-curve observations alone (i.e., not including the secondary eclipse) are mostly sensitive to the longitudinal distribution of brightness, while most latitudinal information comes from the secondary eclipse. Considering the wavelength band in which these observations are taken, infrared phase curves like those obtained using Spitzer are more sensitive to the thermal emission from the planet, while shorter (optical) wavelengths usually probe the reflected light, which can be used to get an albedo map of the planet.

Once you have the phase-resolved or eclipse observations for a particular wavelength band, the first step in converting them to spatial flux maps of the planet conventionally involves assuming a 2D brightness map structure and fitting the observations with a disk-integrated flux-curve model (a function of the orbital phase) that has a one-to-one correspondence with the chosen map. But for a planet that could, in fact, have a very general distribution of spatial brightness, what’s the functional form for the 2D map you should choose to start with?

The best mathematical way to represent a signal with a general functional form is to decompose it into a linear combination of Fourier basis functions or basis maps. This choice is ideal because the Fourier basis functions are orthogonal, which makes them independent of one another in information content and prevents any scrambling of information when we use a linear combination of basis functions to represent a signal. In the case of brightness maps, which are essentially functions defined on the surface of a sphere, spherical harmonics play the role of the orthogonal basis functions, making them a good choice to represent the structure of the planet’s brightness map. Herein lies the key problem addressed by today’s paper: light curves corresponding to a set of orthogonal basis maps (like spherical harmonics) may not be orthogonal themselves, especially in the case of eclipse-only observations!

Orthogonalize!

The authors of today’s paper propose to use a principal component analysis (PCA) approach to orthogonalize the light curves (corresponding to spherical-harmonic maps for a chosen orbital realization) to get a set of eigen-lightcurves (or eigencurves), which can then be used to construct flux-curve models to fit the thermal phase curve, eclipse, or combined observations. Finding eigencurves in this context means getting the eigenvectors of a matrix composed of the initial set of light curves as column vectors, which is done by a simple linear transform of these light curves using the coefficients obtained from PCA. PCA essentially determines the axes of variance and covariance in the input matrix of initial light curves. These axes, in the context of constructing the flux-curve model, act like independent lines of force along which a linear combination of eigencurves acts to give the final model for the signal. Determining the eigencurves means determining these independent lines of force.
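
As a concrete, heavily simplified illustration of that PCA step (a toy sketch, not the authors’ actual pipeline), the snippet below orthogonalizes a small set of stand-in basis light curves into eigencurves and ranks them by how much of the signal each one carries.

```python
import numpy as np

# Stand-ins for light curves generated from individual spherical-harmonic
# basis maps. In the real problem each row would come from a phase-curve /
# eclipse forward model; here they are arbitrary smooth functions, with the
# last one deliberately correlated with the others.
phase = np.linspace(0.0, 1.0, 200)
basis_lightcurves = np.vstack([
    np.cos(2 * np.pi * phase),
    np.sin(2 * np.pi * phase),
    np.cos(4 * np.pi * phase),
    0.5 * np.cos(2 * np.pi * phase) + np.sin(4 * np.pi * phase),
])

# PCA via a singular value decomposition of the mean-subtracted matrix.
# The rows of vt are the orthogonal eigencurves; the singular values rank
# how much of the total variance each eigencurve carries.
centered = basis_lightcurves - basis_lightcurves.mean(axis=1, keepdims=True)
u_mat, s, vt = np.linalg.svd(centered, full_matrices=False)
eigencurves = vt

print("fraction of variance per eigencurve:", np.round(s**2 / np.sum(s**2), 3))
# Applying the same mixing coefficients to the basis *maps* would give the
# corresponding eigenmaps.
```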

Time-varying eigencurves

Figure 2: Time-varying eigencurves (horizontal axis showing time from the center of secondary eclipse in days). The first panel shows the case for the full orbit without secondary eclipse, and the second panel shows the eclipse-only case. [Rauscher et al. 2018]

The eigenvalues for each eigencurve intuitively represent the relative amount of information contributed by an eigencurve to the total light-curve signal, and hence they can be ranked and chosen according to their information content. Eigenmaps can be determined by a linear combination of the initial set of maps using the same PCA coefficients as obtained for the corresponding eigencurves. A linear combination of eigencurves (with linear coefficients free to be constrained by observations) finally forms the flux-curve model that is fit to the observations. The authors perform this exercise for three cases: simulated observations of a full-orbit phase curve (without secondary eclipse), simulated observations of the secondary eclipse only (with a fraction of the orbit before and after), and real observations of the hot Jupiter HD189733 with both phase curve and eclipse combined. The eigencurves and the corresponding eigenmaps used to retrieve the brightness maps for the first two cases are shown in Figures 2 and 3.

2D eigenmaps

Figure 3: Spatially varying 2D eigenmaps for the two cases shown in Figure 2. ‘X’ in the bottom figure is the point permanently facing the star (assuming the planet to be tidally locked), and the green box in case of eclipse-only observation marks the range of longitudes probed by the observations. [Rauscher et al. 2018]

Comparing the eigencurves with their corresponding eigenmaps (Figures 2 and 3), it is evident that each eigencurve for the full-orbit case without eclipse encodes paired pieces of spatial information along the longitudes of the planet, while most latitudinal information comes from the eclipse (second eigenmap (Z2) onwards in Figure 3). The PCA approach ensures that the eigencurves used to construct the model flux curve are orthogonal even in the eclipse-only case. The authors use the eigencurves for combined phase-curve and eclipse observations of HD189733 taken by Spitzer in the 8-μm channel and retrieve the longitude of the dayside hotspot (the region of peak temperature on the planetary hemisphere facing the star) and the flux contrast of the hotspot, both of which are consistent with the results of previous studies. Additionally, the authors investigate the effect of orbital-parameter uncertainties by checking how sensitive the PCA coefficients used to construct the eigencurves are to orbital realizations drawn from within the one-sigma uncertainties.

Usually, correlated noise in the detectors used for high-precision photometric observations is corrected for together with the fit for astrophysical parameters, which can lead to correlations creeping in even between the orthogonal light curves obtained from the PCA approach. This calls for extra caution when working with the high-quality mapping data that will be obtained from the James Webb Space Telescope in the future. With the planned high-precision photometric observations in multiple spectral bands by JWST, we can look forward to combining the horizontal 2D mapping discussed in today’s paper with information about the vertical atmospheric structure, allowing us to get more reliable three-dimensional maps of exoplanets in the near future.

About the author, Vatsal Panwar:

I am a PhD student at the Anton Pannekoek Institute for Astronomy, University of Amsterdam. I work on characterization of exoplanet atmospheres to understand the diversity and origins of planetary systems. I also enjoy yoga, Lindyhop, and pushing my culinary boundaries every weekend.

β Pictoris

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.

Title: Fundamental limitations of high contrast imaging set by small sample statistics
Authors: D. Mawet, J. Milli, Z. Wahhaj, D. Pelat, et al.
First Author’s Institution: European Southern Observatory
Status: Published in ApJ

Introduction

The world’s state-of-the-art exoplanet imaging projects include the VLT’s SPHERE, Gemini’s GPI, Subaru’s SCExAO, Palomar’s Project 1640, and the LBT’s LEECH survey. As next-generation imagers come online, we need to think carefully about what the data say, as sensitivities push closer in to the host stars. This Astrobite is the first of two that look at papers that have changed the way we think about exoplanet imaging data.

Traditionally, high-contrast imaging programs calculate a contrast curve: a 1-D plot that shows, as a function of separation from the host star, the faintest companion (relative to its host) that could still be detected (Fig. 1). The authors of today’s paper examine some of the statistical weirdness that happens as we get closer in to the host, and how this can have a dramatic effect on the scientific implications.

contrast curve

Fig. 1: An example contrast curve that pre-dates today’s paper, showing the sensitivity of an observation of the exoplanet GJ504 b (the black square). Note that closer in to the host star’s glare, the attainable contrast becomes less and less favorable for small planet masses. The fact that GJ504 b lies above the curve means that it is a solid detection. [Adapted from Kuzuhara et al. 2013]

Is that Blob a Planet?

With no adaptive-optics correction, the Earth’s atmosphere causes the light from the host star (that is, the point spread function, or PSF) to degenerate into a churning mess of speckles. Adaptive-optics correction removes a lot of the atmospheric speckles, but optical imperfections from the instrument can leave speckles of their own. These quasi-static speckles are quite pernicious because they can last for minutes or hours and rotate on the sky with the host star. How do we tell if a blob is a planet or just a speckle?

resolution elements at a given radius

Fig. 2: As the radius from the host star decreases, there are fewer speckle-sized resolution elements for calculating the parent population at that radius. The radius intervals are spaced apart here by λ/D, the diffraction limit of the telescope. [Mawet et al. 2014]

Consider the following reasoning. In the absence of a planet, the distribution of intensities on all speckle-sized resolution elements at a given radius from the host star (see Fig. 2) is a Gaussian centered at zero (Fig. 3a). I’ll set my planet detection threshold at an intensity equivalent to, say, 3σ of this Gaussian. If I actually find a blob with an amplitude greater than or equal to 3σ, then there is only a 1 in ~740 chance that the blob is just a speckle. As a trade-off, I may only recover a fraction of planets that are actually that bright (Fig. 3b).
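
Both of those numbers are easy to verify with scipy; this is just the standard Gaussian tail probability, nothing specific to the paper.

```python
from scipy import stats

threshold = 3.0  # detection threshold in units of the noise sigma

# Chance that pure speckle noise exceeds the threshold (a false positive):
fpf = stats.norm.sf(threshold)
print(f"false positives: about 1 in {1 / fpf:.0f}")   # ~1 in 740

# Chance of recovering a planet whose true brightness sits exactly at the
# threshold: the measured value scatters symmetrically about the truth,
# so it lands above the threshold only about half the time.
print(f"true positive fraction: {stats.norm.sf(0.0):.0%}")  # 50%
```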

detection thresholds

Fig. 3: Left: Pixel intensities that are just noise are distributed as a Gaussian centered around a value of zero. The vertical dashed line represents a chosen detection threshold of 3σ. The tiny sliver under the curve to the right of the dashed line represents false positive detections. Compare the shape of this curve to the blue one in Fig. 4. Right: A planet of true brightness 3σ will be detected above the threshold (orange) about 50% of the time. [Jensen-Clem et al. 2018]

The name of the game is to minimize the false positive fraction (FPF) and maximize the true positive fraction (TPF). These can be calculated as integrals over the intensity distribution of quasi-static speckles.

All well and good, if we know with certainty what the underlying speckle intensity distribution is at a given radius. (Unfortunately, for reasons of speckle physics, speckles at all radii do not come from the same parent population.) But even if the parent population is Gaussian with mean μ and standard deviation σ, we don’t magically know what μ and σ are. We can only estimate them from the limited number of samples there are at a given radius (e.g., along one of the dashed circles in Fig. 2). And at smaller radii, there are fewer and fewer samples to calculate a parent population in the first place!

The t-Distribution

Enter the Student’s t-distribution, which is a kind of stand-in for a Gaussian when only a few samples have been measured. This concept was published in 1908 by a chemist writing under the anonymous nom de plume “Student”, after he developed it for the Guinness beer brewery to compare batches of beer using small numbers of samples. The t-distribution accounts for the fact that the mean and standard deviation of the parent population must be estimated from the samples. As the number of samples goes to infinity, the distribution turns into a Gaussian. However, small numbers of samples lead to t-distributions with tails much larger than those of a Gaussian (Fig. 4).

By integrating over this distribution, the authors calculate new FPFs. Since the tails of the t-distribution are large, the FPF increases for a given detection threshold. The penalty is painful: to keep the same confidence level as a “5σ” Gaussian detection at large radii, the detection threshold at 2λ/D must be raised by a factor of about 2, and at 1λ/D the penalty is a factor of 10!
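
Here is a simplified scipy sketch of where those penalty factors come from. It uses a plain Student’s t tail with n − 2 degrees of freedom and skips the additional correction terms of the full two-sample prescription in the paper, so the numbers are only approximate.

```python
import numpy as np
from scipy import stats

# Target: the same false-positive fraction as a "5-sigma" Gaussian detection.
target_fpf = stats.norm.sf(5.0)

for radius_in_lam_over_d in (1, 2, 3, 5, 10):
    # Number of speckle-sized resolution elements in the annulus at this radius.
    n_elements = int(round(2 * np.pi * radius_in_lam_over_d))
    # Threshold (in sigma) needed when the noise statistics come from only
    # those few elements, modeled with a Student's t-distribution.
    threshold = stats.t.isf(target_fpf, df=n_elements - 2)
    print(f"r = {radius_in_lam_over_d:2d} lam/D: ~{threshold:5.1f} sigma needed "
          f"(penalty factor {threshold / 5.0:.1f})")
```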

Gaussian vs. t distribution

Fig. 4: A comparison between a Gaussian (blue) and a t-distribution (red). If we have set a detection threshold at, say, x=3, the area under the t-distribution curve (and thus the false detection probability) is larger than in the Gaussian case. [IkamusumeFan]

What Is to Be Done?

The authors conclude that we need to increase detection thresholds at small angles to preserve confidence limits, and they offer a recipe for correcting contrast curves at small angles. But the plot gets thicker: it is known that speckle intensity distributions are actually not Gaussian, but another kind of distribution called a “modified Rician”, which has longer tails towards higher intensities. The authors run some simulations and find that the FPF gets even worse at small angles for Rician speckles than for Gaussian speckles! Yikes!

The authors suggest some alternative statistical tests but leave more elaborate discussion for the future. In any case, it’s clear we can’t just build better instruments. We have to think more deeply about the data itself. In fact, why limit ourselves to one-dimensional contrast curves? There is no azimuthal information, and a lot of the fuller statistics are shrouded from view. Fear not! I invite you to tune in for a Bite next month about a possible escape route from Flatland.

About the author, Eckhart Spalding:

I am a graduate student at the University of Arizona, where I am part of the LBT Interferometer group. I went to college in Illinois, was a secondary-school physics and math teacher in Kenya’s Maasailand for two years, and got an M.S. in Physics from the University of Kentucky. My out-of-office interests include the outdoors, reading, and unicycling.

Green Bank Telescope

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org!

Title: Fast Radio Burst 121102 Pulse Detection and Periodicity: A Machine Learning Approach
Authors: Yunfan Gerry Zhang, Vishal Gajjar, Griffin Foster, Andrew Siemion, James Cordes, Casey Law, Yu Wang
First Author’s Institution: University of California, Berkeley
Status: Submitted to ApJ

Today’s astrobite combines two independently fascinating topics — machine learning and fast radio bursts (FRBs) — for a very interesting result. The field of machine learning is moving at an unprecedented pace with fascinating new results. FRBs have entirely unknown origins, and experiments to detect more of them are gearing up. So let’s jump right into it and take a look at how the authors of today’s astrobite got a machine to identify fast radio bursts.

Convolutional Neural Networks

Let’s begin by introducing the technique and machinery the authors employed for finding these signals. The field of machine learning is exceptionally hot right now, and with new understanding being introduced almost daily into the best machine-learning algorithms, the diffusion into nearby fields is accelerating. This is of course no exception for astronomy (radio or otherwise), where datasets grow to be extraordinarily large and intractable for classical algorithms. Enter the Convolutional Neural Network (CNN): the go-to machine-learning algorithm for understanding and prediction on data with spatial features (aka images). How does one of these fancy algorithms work? A basic starting point would be that of a traditional neural network, but I’ll leave that explanation to someone else. A generic neural network can take in few or many inputs, but the inputs don’t necessarily have to be spatially related to each other; CNNs, however, are well suited for images. (Note: you can also have one-dimensional or three-dimensional CNNs). These images have features that, when combined, are important for identifying what is in the image. In Figure 1, for example, the dog has features such as floppy ears, or a large mouth with a tongue protruding. A CNN learns some or all of these features from a provided training dataset with a known ground truth; in Figure 1, for instance, the prediction can be labeled dog, cat, lion, or bird. These features are learned at varying spatial scales as the input images are successively convolved over, and the prediction is compared to its known label, with any corrections propagated backwards to update those features. This latter step is the training part — which you might notice is the same process as a non-convolutional neural network. Thus armed with this blazingly fast classifier, we can move forward to understanding what we’ll be predicting on.

Figure 1: An example of a convolutional neural network. An input image is sequentially convolved over through several convolutional layers, where each successive layer learns unique features, which after training, are ultimately used to make a prediction based on a set of labels. [KDnuggets]
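
For readers who haven’t met a CNN in code before, a minimal Keras-style sketch of a small binary classifier of this general kind is shown below. The input size and layer choices are arbitrary illustrative values, not the architecture used by the authors.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Toy 2D CNN that classifies time-frequency "images" as pulse / no pulse.
model = keras.Sequential([
    keras.Input(shape=(256, 256, 1)),        # frequency x time x 1 channel
    layers.Conv2D(16, (3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # probability the image holds a pulse
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# Training on a labelled set of simulated and real dynamic spectra would then be:
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)
```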

Fast Radio Bursts

Figure 2: Simulated FRB pulses in Green Bank Telescope (GBT) radio time-frequency data. Pulses are simulated with a variety of parameters for the purpose of making the CNN as robust as possible. [Zhang et al. 2018]

We’ve covered FRBs on astrobites in the past (1, 2, 3), and with each new post we seem to be getting closer and closer to finding the origin of these mysterious radio signals. FRBs are radio-bright millisecond bursts seen in time–frequency radio telescope data. These bursts have unique features that set them apart from other radio signals and will be important for understanding how the authors developed an FRB training dataset for the predictions in their paper. These features consist of a dispersion measure (DM), time of arrival (TOA), amplitude, and pulse width (there are more, but I’ll highlight these as being the most important characteristics). The DM is one of the more interesting features of an FRB, as this is what indicates that FRBs are cosmological. The DM is measured from the dispersion of the signal in time and frequency as it traveled through an ionized medium — in this case, the intergalactic medium. This dispersion is the curved sweep seen in Figure 2, which delays the signal to later arrival times at lower frequencies. TOA is when the signal arrived in the observations, amplitude is the flux density of the signal, and pulse width is the width at 10% of the maximum amplitude.
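
The dispersive delay follows the standard cold-plasma relation, so you can get a feel for the sweep in Figure 2 with a few lines of Python. The DM and frequency range below are illustrative values of roughly the right size, not numbers quoted from the paper.

```python
K_DM = 4.149  # dispersion constant in ms GHz^2 pc^-1 cm^3

def dispersion_delay_ms(dm, freq_lo_ghz, freq_hi_ghz):
    """Extra arrival delay (ms) of the lower frequency relative to the higher
    one, for a dispersion measure dm in pc cm^-3."""
    return K_DM * dm * (freq_lo_ghz**-2 - freq_hi_ghz**-2)

# Example: a DM of ~560 pc cm^-3 swept across a 4-8 GHz band.
print(f"{dispersion_delay_ms(560.0, 4.0, 8.0):.0f} ms")  # ~109 ms across the band
```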

Using all of these characteristics to define a training dataset, the authors simulated many different types of FRBs, all with their own unique values. This is important because having a large, robust training dataset means you’re more likely to have a neural network capable of robust predictions.

Putting the CNN to Work

We now have all the components: a convolutional neural network, a robust training dataset, and a monumental amount of Green Bank Telescope (GBT) data. The authors seek to probe archival data of the now pretty well known FRB 121102, which has a history of being a repeating FRB. This means that FRB 121102 is an amazing resource for understanding FRBs because we can take many measurements.

Feature distributions

Figure 3: Distributions of the various features for the discovered FRB 121102 pulses from the GBT archival data. Understanding how these parameters relate to each other can give us hints to the nature of FRB 121102. [Zhang et al. 2018]

Using several hours of GBT archival data, the authors set the CNN to work searching for additional pulses from FRB 121102 that may have gone overlooked, either because the signal was weak or simply because of the sheer volume of data. They successfully find 72 additional pulses from FRB 121102! Interestingly enough, more than half of these newly discovered pulses arrived within the first half-hour of the dataset. This brings the total tally, including the previously known signals, to 93 pulses.

The additional detection and measurement of these pulses is certainly important. As we’ve noted in past astrobites, the origin of these bursts is still almost completely speculative, and we need to build up as many measurements as we can to rule out or constrain the potential cosmological sources. Having a repeating FRB from which we can keep collecting measurements, like the distributions seen in Figure 3, is fantastic for understanding how the FRB’s environment shapes these parameters. Hopefully, with the continued development of these CNNs and other machine-learning techniques, we’ll see an explosion of FRB detections.

About the author, Joshua Kerrigan:

I’m a 5th year PhD student at Brown University studying the early universe through the 21cm neutral hydrogen emission. I do this by using radio interferometer arrays such as the Precision Array for Probing the Epoch of Reionization (PAPER) and the Hydrogen Epoch of Reionization Array (HERA).

gas-giant planet formation

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org!

Title: The evolution of gas giant entropy during formation by runaway accretion
Authors: David Berardo, Andrew Cumming, Gabriel-Dominique Marleau
First Author’s Institution: McGill University and University of Montréal, Canada
Status: Published in ApJ

Introduction

Direct imaging has turned up only a handful of planets so far. However, as observing sensitivities improve in the coming years, the technique will become a powerful probe of planet-formation physics. Part of the reason planets are so challenging to image is that they don’t sustain fusion of their own, so they simply cool and grow dimmer with time.

How, exactly, do they cool? We need to know this in order to convert the measured luminosity of a planet into meaningful quantities, like the planet’s mass. For that, we mostly have to rely on evolutionary models to predict the cooling curve. The authors of today’s paper tackle this by working out the physics of the accretion process during its most rapid phase, when the growing protoplanet’s gravitational well swallows material as fast as the surrounding disk can supply it.
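To make the role of those evolutionary models concrete, here is a toy sketch of how a cooling-curve grid gets used in practice: given model luminosities as a function of age for several masses, invert the grid to turn a measured luminosity and age into a mass estimate. Every number in the grid below is made up for illustration; none of it comes from the paper or any published model.

```python
# Toy cooling-curve grid: log10(L/Lsun) vs. age for a few masses (all invented).
import numpy as np

ages_myr = np.array([1.0, 10.0, 100.0])       # grid ages in Myr
masses_mjup = np.array([1.0, 5.0, 10.0])      # grid masses in Jupiter masses
log_lum_grid = np.array([
    [-5.5, -6.0, -6.5],   # 1 M_Jup
    [-4.5, -5.0, -5.5],   # 5 M_Jup
    [-3.8, -4.3, -4.8],   # 10 M_Jup
])

def mass_from_luminosity(log_lum, age_myr):
    """Interpolate the toy grid to estimate a planet mass (in Jupiter masses)."""
    # Luminosity of each model mass at the requested age:
    lum_at_age = [np.interp(age_myr, ages_myr, row) for row in log_lum_grid]
    # Brighter planets are more massive, so interpolate mass against luminosity.
    return float(np.interp(log_lum, lum_at_age, masses_mjup))

print(mass_from_luminosity(log_lum=-5.0, age_myr=10.0))   # -> 5.0 on this toy grid
```

The catch, of course, is that the grid itself depends on how the planet formed, which is exactly the question the authors are after.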

In the field of planet formation physics, “hot start” and “cold start” and a gradient of “warm” starts in between refer to the starting entropies of planets. These terms do not necessarily indicate the formation mechanism. The authors of today’s paper specifically investigate the core accretion mechanism to see what interior entropies, and by extension luminosities, it can lead to.

a growing Jupiter-mass planet

Fig. 1: Left: The mishmash of quantities the authors monitor in growing planets. At the shock boundary, material accretes at a rate M-dot, and an accretion luminosity Laccr is contributed to the planet’s luminosity. T0, P0, and S0 are the temperature, pressure, and entropy just below the shock. Material settles and compresses in the envelope before reaching the radiative-convective boundary. Sc is the entropy in the core. Right: An example plot showing the regimes of resulting entropy Sf of a 10-Jupiter-mass planet as a function of T0 and P0, after it began accreting at an initial entropy Si. The color scale is in units of Boltzmann’s constant per proton mass. [Adapted from Berardo et al. 2017]

How Shocking?

As material falls into an accreting planet, it loses gravitational potential energy. How much of that energy gets radiated away, and how much is incorporated into the planet’s internal entropy? The physics surrounding the shock is critical here. With a stew of boundary conditions, “jump” conditions around the accretion shock, assorted assumptions, and the open-source stellar evolution code Modules for Experiments in Stellar Astrophysics (MESA), the authors monitor different layers of the growing planet (Fig. 1, left).

They find that the planet’s internal entropy is set by the difference between the initial entropy Si and the entropy S0 of the accreting material, which is in turn set by the temperature and pressure at the shock boundary (Fig. 1, right). In some cases, core accretion can indeed lead to “cold” starts, with luminosities below about 5×10⁻⁶ Lsun, but only if the accretion rate is small (on the order of 10⁻³ Mearth per year), the initial entropy is low, and the shock temperature is close to chilly nebular temperatures. Many core accretion scenarios, however, actually result in much more luminous planets. As the authors outline in their summary, the core accretion formation regimes include (see Fig. 2, plus the illustrative sketch after this list):

  1. The cooling regime (S0 < Si, and T0 < 500–1000 K): the planet is convective and cooling proceeds quickly. (Check out this bite for more info about convection, radiation, and entropy in planet interiors.)
  2. The stalling regime (S0 > Si, and T0 ≅ 1000–2000 K): the planet’s envelope is radiative, and the internal entropy decreases as a function of decreasing radius inside the planet. The final entropy Sf tends to settle near the initial entropy value Si.
  3. The heating regime (T0 > 2000 K): the mismatch between the initial entropy and that of the accreting material is steep enough that it forces a convective layer with minimum entropy Smin > Si to form on top of the convective core.
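To keep those three regimes straight, here is a tiny, purely illustrative classifier that applies the approximate thresholds quoted above. It is not the authors’ code, and the example entropy values are arbitrary:

```python
# Purely illustrative classification of the core-accretion regimes listed above.
# Thresholds are the approximate values quoted in the text; the real calculation
# (MESA models with shock boundary conditions) is far more involved.
def accretion_regime(T0_K, S0, Si):
    """Classify a growing planet given the shock temperature T0 and the
    entropies of the accreted material (S0) and the initial interior (Si)."""
    if T0_K > 2000.0:
        return "heating"        # forces a convective layer with Smin > Si
    if S0 > Si and 1000.0 <= T0_K <= 2000.0:
        return "stalling"       # radiative envelope, Sf settles near Si
    if S0 < Si and T0_K < 1000.0:
        return "cooling"        # convective planet, cools quickly
    return "intermediate (not captured by these simple thresholds)"

print(accretion_regime(T0_K=800.0, S0=9.0, Si=10.0))   # -> "cooling"
```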
Luminosity curves

Fig. 2: Luminosity curves as a function of planet age, in subpanels for different combinations of initial entropies Si and accretion rates. The data points are directly imaged planets. Number 8 is 51 Eri b, the chilliest directly imaged planet found so far, and the closest contender for a “cold” start formation. Letter A is the possibly-still-accreting planet HD 100546 b. [Berardo et al. 2017]

The authors plot expected cooling curves and overlay data points for directly imaged planets. In Fig. 2, the bundles of lines corresponding to different planet masses are fairly tight, especially once a planet is a few tens of millions of years old. But if the mass of a very young planet can be determined independently, then under certain circumstances (additional constraints on the accretion rate, for example) the luminosity can help pin down the formation scenario. One particularly interesting example is the young planet HD 100546 b, which is embedded in an asymmetric protoplanetary disk and is probably still accreting.

A Tricky Business

Currently, though, figuring out how well the data points in Fig. 2 agree with the models is a tricky business, unless the planet masses can be measured in a way that does not itself depend on a cooling model. Fortunately, this is beginning to change with Gaia, which can help determine planet masses by monitoring the way they tug on their host stars. Observationally, it will also be important to carry out spectroscopy of accreting protoplanets to determine how much of the luminosity comes from the shock itself (see Fig. 1), and to spatially resolve the accretion emission to place better constraints on the details of the accretion process.

On the theory side, the authors call for models that allow the parameters they kept fixed, like T0 and the accretion rate, to vary in time, and that incorporate further complications such as dust grains, accretion asymmetries, and whatever other individual circumstances may be in play during the formation of a given planet. As the symbiotic trifecta of high-contrast imaging, Gaia data releases, and sophisticated modeling continues to advance, we may yet use the luminosity of young planets to illuminate the broader physics of massive planet formation.

About the author, Eckhart Spalding:

I am a graduate student at the University of Arizona, where I am part of the LBT Interferometer group. I went to college in Illinois, was a secondary-school physics and math teacher in Kenya’s Maasailand for two years, and got an M.S. in Physics from the University of Kentucky. My out-of-office interests include the outdoors, reading, and unicycling.

S0-2 orbit

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org!

Title: Improving Orbit Estimates for Incomplete Orbits with a New Approach to Priors – with Applications from Black Holes to Planets
Authors: K. Kosmo O’Neil, G. D. Martinez, A. Hees, et al.
First Author’s Institution: University of California, Los Angeles
Status: Accepted to ApJ

Everyone warns you: don’t make assumptions, because when you ASSUME, you look as foolish AS a SUM of Elephant seals, or however it goes.

Figure 1. I’m sorry, they’re just silly-looking animals. [Brocken Inaglory]

But assumptions are useful, as long as they’re based on facts! On the weekends, for example, I just assume, based on prior experience, that an NYC subway journey will take about twenty minutes longer than it’s supposed to because of crowds and construction. More often than not, I arrive about when I expect based on that assumption.

Of course, if the transit authority magically got its act together, I’d have to update my beliefs — I wouldn’t let an old assumption about slow subways mislead me into showing up awkwardly early for things forever. It would probably only take two or three fast train journeys before I stopped building in that extra 20 minutes. My observations, in other words, would take precedence over my assumptions.

But what if I couldn’t test my assumptions against the real world so effectively? What if I were working from very limited data? In the subway analogy, what if I had moved away from New York years ago, but I were still advising tourists about travel time based on how things used to be? My advice might be better than nothing, but still inadequate or misleading.

Today’s authors investigate: What happens when data are scarce, and you have to let your assumptions guide you? How do you choose your assumptions wisely, so you’re misled as rarely as possible?

Orbits, and the Lack Thereof

The data these authors investigate isn’t so different from a subway timetable — it’s a list of on-sky coordinates describing where a celestial body is measured to be, and when. Specifically, the authors look at the star S0-2, which orbits the black hole at the center of the Milky Way, and the four planets around HR8799, which have been directly imaged (i.e., photographed on the sky, so their positions are known). Figure 2 summarizes what we know about these objects.

Figure 2. (Left) The on-sky coordinates of star S0-2 (black points) as it traces out its 16-year orbit around the black hole at the center of the Milky Way (gray star), plus the best-fitting orbit (blue line). (Middle) S0-2’s radial velocity at various points in its orbit, plus the best-fitting orbital solution. (Right) The on-sky coordinates of the four planets orbiting HR8799 (gray star). Note that these planets take a long time to orbit their star, so we’ve only witnessed a small fraction of their orbits since the system was first photographed. [O’Neil et al. 2018]

S0-2 has been closely watched for a while — we’ve been able to see it trace out more than a complete orbit around the central black hole. Whatever pre-existing assumptions we might have made about its orbit, they’ve been well and truly tested against the experimental data, and we don’t need to rely on them anymore.

HR8799, though, is a different story. It takes roughly 45 years for the innermost planet (HR8799e, plotted in yellow) to go all the way around, and the planets were only discovered about ten years ago. There’s a surprisingly wide range of possible orbits that fit the limited observations we have so far, and so if we want to decide which possibilities are most likely, we need to rely on our assumptions about how orbits ought to work.

What Assumptions Are Best?

Traditionally, scientists who specialize in orbit fitting have chosen their assumptions to (they hope) introduce as little bias as possible: deciding, for example, that no value of orbital eccentricity is any more likely than another, a priori. It’s a fancy way of declaring that they’re as agnostic as possible about the best-fitting orbit.

But today’s authors point out that we aren’t observing eccentricity, or any other orbital parameter, directly: we’re observing on-sky coordinates as a function of time, and fitting for the orbital parameters that match those coordinates best. So we should choose our assumptions such that no observation is more likely than any other a priori. That’s the real way to be as agnostic as we can be.
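To see why a prior that is flat in the orbital elements is not automatically agnostic about the data, here is a toy Monte Carlo sketch. The “observable” (a stand-in separation, a times (1 - e)) and all of the ranges are invented for illustration; this is not the authors’ method, just the underlying idea:

```python
# Toy prior-predictive check: priors that look "uninformative" on the orbital
# elements can be very informative about the observables. All values invented.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

ecc = rng.uniform(0.0, 1.0, n)          # eccentricity flat in [0, 1)
sma = 10 ** rng.uniform(0.0, 2.0, n)    # semi-major axis flat in log, 1 to 100 (arbitrary units)

observable = sma * (1.0 - ecc)          # toy stand-in for a measured separation

# If these priors were truly agnostic about the observable, this histogram would
# be roughly flat. It isn't: small separations are heavily favoured a priori.
hist, _ = np.histogram(observable, bins=10, range=(0.0, 100.0))
print(np.round(hist / n, 3))            # probability mass per bin, far from uniform
```

The authors’ fix is, roughly, to choose priors on the elements such that the implied distribution over the quantities you actually measure is as even-handed as possible.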

To test this hypothesis, the authors go on to simulate what happens when you make each of those assumptions and try to fit an orbit with only a few data points. Because the data are simulated, the authors know the right answer about the orbit, and they can test their results against it. As Figure 3 shows, the “old way” can really bias the results of orbit fitting, and the “new way” performs much better.

Figure 3. What happens when you try to measure the mass of the Milky Way’s black hole, based on only a few measured coordinates of S0-2? If you adopt the “old” assumptions, you get the blue distribution, which is biased — you’ll conclude that the Milky Way’s black hole is more massive than it really is. If you adopt the “new” assumptions, you’ll get a much more accurate answer. [O’Neil et al. 2018]

Unexpectedly, the worse your data is — in other words, the fewer on-sky coordinates you’ve measured, and the more heavily you have to lean on your assumptions — the more likely you are to be biased, if you stick with the old way. It goes to show that we should all re-examine our assumptions every once in a while!

About the author, Emily Sandford:

I’m a PhD student in the Cool Worlds research group at Columbia University. I’m interested in exoplanet transit surveys. For my thesis project, I intend to eat the Kepler space telescope and absorb its strength.
