
Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we repost astrobites content here at AAS Nova once a week. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org!

Title: Electromagnetic Chirps from Neutron Star-Black Hole Mergers
Authors: Jeremy Schnittman, Tito Dal Canton, Jordan Camp, David Tsang, and Bernard Kelly
First Author’s Institution: NASA Goddard Space Flight Center and Joint Space-Science Institute
Status: Submitted to ApJ, open access

One of the biggest scientific accomplishments of the last few years was the discovery of gravitational waves by the LIGO Collaboration, which you can read about on Astrobites here. There is, of course, still plenty of work to be done in this field. For example, no experiment has definitively detected an electromagnetic counterpart (which would give off radiation somewhere in the electromagnetic spectrum) to a gravitational wave, although the Fermi Gamma-ray Burst Monitor may have seen hints of one. Detecting such a component would be scientifically interesting for many reasons. The authors of today’s paper give us two such reasons: first, LIGO currently only has a rudimentary ability to localize where in the sky gravitational waves are coming from. Identifying the specific galaxy that produced the gravitational wave would allow us to constrain certain astrophysical models. Second, we could possibly combine an electromagnetic (EM) counterpart with a sub-threshold gravitational-wave signal (one that is not statistically significant on its own) to glean more information about astrophysical events.

Some models of black holes merging with neutron stars call for short gamma-ray bursts (GRBs; the brightest EM explosions observed anywhere in the universe) to be produced by the merger. The same merger also produces gravitational waves. For some of these GRBs, there is a “precursor” gamma-ray flare a few seconds before the peak of the GRB emission. This may seem like a short amount of time, but since the black hole and neutron star are orbiting each other very rapidly, the precursor occurs hundreds of orbits before the merger itself. The environment here includes some of the most extreme forces in the known universe, and the particles involved are highly relativistic (traveling close to the speed of light). Therefore, the light curve of the precursor flare that eventually reaches an observer on Earth will be shaped by phenomena such as relativistic Doppler beaming and gravitational lensing: the former makes the source appear brighter or fainter than it intrinsically is, and the latter bends the light on its way to us. The authors explain that all of this physics combines to give off an electromagnetic “chirp”, similar to the “chirps” seen in gravitational waves. It is therefore conceivable that algorithms similar to those used by the LIGO collaboration could be used to search for these electromagnetic chirps.

The authors used a Monte Carlo radiation transport code to calculate the light curves and spectra, as measured at Earth, of a neutron-star–black-hole merger. Free parameters included the masses of the neutron star and black hole, along with the separation between them. Figure 1 shows what the thermal emission coming from the surface of the neutron star would look like over time, from the point of view of an observer viewing the system edge-on (see the caption for details). They note that the inclination angle of the observer does affect their results, with the Einstein ring — a signature of gravitational lensing — only being visible at high angles. However, at smaller angles a modulation from the relativistic beaming is still present.

Figure 1: An illustration showing what the neutron star/black hole system would look like to an edge-on observer at different times. In a) the ring is caused by gravitational lensing effects; b) is the point of maximum blueshift; c) shows a weaker gravitational lensing effect; d) is the point of maximum redshift.

As the system gives off gravitational radiation, the orbit shrinks and the separation between the black hole and the neutron star decreases. This causes the frequency and amplitude of the light-curve modulation described above to increase, and a “chirp” — just like in the gravitational-wave signal — is observed. See the figure below for an illustration of the inspiral. The different features of the light curve (from the gravitational lensing and the beaming) dominate at different times, and from this the black-hole mass can be determined. If the light curve is measured precisely enough, the neutron star’s radius and equation of state could even be constrained.

Figure 2: The electromagnetic modulation for the inspiral of a neutron star/black hole binary merger (initial separation of 50 solar masses in geometrized units, where one solar mass corresponds to a length of about 1.5 km), with the zoomed-in portions corresponding to the beginning and the end.
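To make the idea concrete, here is a toy sketch of how an EM chirp arises. This is NOT the authors’ Monte Carlo radiation-transport calculation: it is a circular Newtonian inspiral shrinking via the leading-order gravitational-wave decay rate (Peters 1964), with the neutron star’s emission modulated by a crude Doppler-beaming factor for an edge-on observer. The masses, separations, and the simple D³ beaming law are all illustrative assumptions.

```python
import math
import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30   # SI units
m_bh, m_ns = 10.0 * Msun, 1.4 * Msun        # assumed component masses
M = m_bh + m_ns
r_g = G * M / c**2                          # gravitational radius of the total mass

def inspiral_lightcurve(a0, a_end, dt=1e-4):
    """March the orbit inward, recording a beaming-modulated flux."""
    a, t, phase = a0, 0.0, 0.0
    times, fluxes = [], []
    while a > a_end:
        omega = math.sqrt(G * M / a**3)      # Kepler's third law
        beta = omega * a * (m_bh / M) / c    # NS speed about the barycentre, in units of c
        gamma = 1.0 / math.sqrt(1.0 - beta**2)
        doppler = 1.0 / (gamma * (1.0 - beta * math.cos(phase)))
        fluxes.append(doppler**3)            # beaming boosts the flux by ~D^3
        times.append(t)
        # shrink the orbit via the quadrupole (Peters 1964) decay rate
        a -= (64.0 / 5.0) * G**3 * m_bh * m_ns * M / (c**5 * a**3) * dt
        phase += omega * dt
        t += dt
    return np.array(times), np.array(fluxes)

times, fluxes = inspiral_lightcurve(30 * r_g, 15 * r_g)
early = fluxes[: len(fluxes) // 10]          # first 10% of the inspiral
late = fluxes[-(len(fluxes) // 10):]         # last 10%
print(late.max() > early.max())              # True
```

Both the modulation frequency (set by omega) and its amplitude (set by beta) rise as the separation shrinks, which is the EM chirp in miniature.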

The authors do note that the luminosity range in which the EM “chirp” could be detected using satellites such as Fermi GBM is fairly small: if it is too bright, a fireball will occur that would mask the chirp. They end the paper by observing that there are still chirps that could be detected with current technologies, and that future gravitational-wave observatories such as LISA could potentially work as a trigger for electromagnetic experiments to target their observations.

About the author, Kelly Malone:

I am a fourth year physics graduate student at Penn State University studying gamma-ray astrophysics. I previously received bachelor’s degrees in physics and astronomy from UMass Amherst in 2013.



Title: The First Brown Dwarf Discovered by the Backyard Worlds: Planet 9 Citizen Science Project
Authors: Marc J. Kuchner, Jacqueline K. Faherty, Adam C. Schneider et al.
First Author’s Institution: NASA Goddard Space Flight Center
Status: Published in ApJL, open access

Not everyone can be a star. Brown dwarfs, for example, have failed at their attempt. These objects have masses too low to reach the core pressures and temperatures needed to fuse hydrogen into helium, and thus never earn the classification “star”. We have not known of their existence for long: they were proposed in the 1960s by Dr. Shiv S. Kumar, but the first one was only observed many years later, in 1988 — and we are not even sure it is in fact a brown dwarf! Only with the advent of infrared sky surveys, such as the Two Micron All Sky Survey (2MASS) and the Wide-field Infrared Survey Explorer (WISE), did the number of known brown dwarfs become substantial.

Discovering and characterising cold brown dwarfs in the solar neighbourhood is one of the primary science goals for WISE. There are two ways of doing that: 1) identifying objects with the colours of cold brown dwarfs; 2) identifying objects with significant proper motion. Brown dwarfs are relatively faint objects, so they need to be nearby to be detected. We can detect the movement of such nearby targets against background stars, which are so distant that they appear fixed on the sky; this movement is called proper motion. Because the signal-to-noise ratio is poor for such faint objects, the second method is preferred. However, single-exposure WISE images are not deep enough to find most brown dwarfs. This is where today’s paper enters. The authors launched a citizen science project called “Backyard Worlds: Planet 9” to search for high-proper-motion objects, including brown dwarfs and possible planets orbiting beyond Pluto, in the WISE co-add images. Co-add images are sums of the single-exposure images, corrected for any shifts between them; this increases the signal-to-noise ratio and helps detect faint targets. In today’s paper, the authors report the first new substellar discovery of their project: a brown dwarf in the solar neighbourhood, identified only six days after the project was launched!

Citizen Science: A Promising Approach

The idea behind citizen science is to engage numerous volunteers to tackle research problems that would otherwise be impractical or even impossible to accomplish. The Zooniverse community hosts lots of such projects, in disciplines ranging from climate science to history. Citizen science projects have made some remarkable discoveries in astronomy, such as KIC 8462852 (aka “Tabby’s Star”, “Boyajian’s star” or “WTF star”).

In “Backyard Worlds: Planet 9”, volunteers are asked to examine short animations composed of difference images constructed from time-resolved WISE co-adds. A difference image is obtained by subtracting the median of two subsequent images from the image being analysed. This way, if an object does not move significantly, the subtraction removes it from the analysed image, leaving only moving objects to be detected. The images are also divided into tiles small enough to be analysed on a laptop or cell-phone screen. The classification task consists of viewing one animation, composed of four images, and identifying candidates for two types of moving objects: “movers” and “dipoles”. Movers are fast-moving sources that travel more than their apparent width over the course of WISE’s 4.5-year baseline. Dipoles are slower-moving sources that travel less than their apparent width, so that a negative image appears right next to a positive image, since the subtraction of the object’s flux is only partial. An online tutorial shows how to identify such objects and distinguish them from artifacts such as partially subtracted stars or galaxies, and cosmic rays.
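The difference-imaging scheme can be sketched in a few lines. This is an illustration of the idea, not the actual Backyard Worlds / WISE pipeline: subtracting the median of later epochs makes static sources cancel, while a moving source leaves the paired positive/negative residuals (the “dipole”) that volunteers are asked to spot. All the image sizes and source positions below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_epoch(static_xy, mover_x, size=32):
    """Synthetic epoch: fixed point sources plus one source moving in x."""
    img = rng.normal(0.0, 0.01, (size, size))      # background noise
    for x, y in static_xy + [(mover_x, 16)]:
        img[y, x] += 1.0                           # crude point source
    return img

stars = [(5, 5), (20, 25)]                         # sources that don't move
epochs = [make_epoch(stars, mover_x) for mover_x in (10, 12, 14)]

# Difference image for the first epoch: subtract the median of the later ones
diff = epochs[0] - np.median(np.stack(epochs[1:]), axis=0)

print(abs(diff[5, 5]) < 0.1)    # True: the static star subtracts away
print(diff[16, 10] > 0.8)       # True: positive residual at the mover's position
print(diff[16, 12] < -0.3)      # True: negative residual where it moved to
```

The positive peak sitting next to a negative trough is exactly the “dipole” signature described above; a “mover” would instead jump by more than its own width between frames.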

The Discovery: WISEA 1101+5400

Figure 1: Two co-adds of WISE data separated by 5 years showing how WISEA 1101+5400 has moved. The region shown is 2.0” x 1.6” in size. [Kuchner et al. 2017]

Five users reported a dipole on a set of images, the first report coming only six days after the project was launched. The object, called WISEA 1101+5400, can be seen in Figure 1. This source would be undetectable in single-exposure images, but in these co-adds it is visible and clearly moving. Follow-up spectra were obtained using the SpeX spectrograph on the 3-m NASA Infrared Telescope Facility (IRTF); the average spectrum is shown in Figure 2. Both the object’s colours and the spectra are consistent with a field T dwarf, a type of brown dwarf.

Figure 2: In black, the spectrum for WISEA 1101+5400. A field T5.5 brown dwarf, SDSS J0325+0425, is shown in red for comparison. Atomic and molecular opacity sources that define the T-dwarf spectral class are indicated. [Kuchner et al. 2017]

Assuming WISEA 1101+5400 is the worst case scenario, i.e. about as faint an object as this survey is able to detect and with the minimum detectable proper motion, the authors estimate that “Backyard Worlds: Planet 9” has the potential to discover about a hundred new brown dwarfs. If WISEA 1101+5400 is not the worst case scenario, but objects even fainter or with lower proper motion can be found, this number could go up.

Although the discovery of only one brown dwarf might not seem worthy of celebration, this discovery demonstrates the ability of citizen scientists to identify moving objects much fainter than the WISE single exposure limit. It is yet more proof that science could use the help of enthusiasts. So if you’re not doing anything now, why not head over to https://www.zooniverse.org/ and help a scientist?

About the author, Ingrid Pelisoli:

I am a third year PhD student at Universidade Federal do Rio Grande do Sul, in Brazil, and currently a visiting academic at the University of Warwick, UK. I study white dwarf stars and (try to) use what we learn about them to understand more about the structure and evolution of our Galaxy. When I am not sciencing, I like to binge-watch sci-fi and fantasy series, eat pizza, and drink beer.



Title: Orbits for the Impatient: A Bayesian Rejection Sampling Method for Quickly Fitting the Orbits of Long-Period Exoplanets
Authors: Sarah Blunt, Eric L. Nielsen, Robert J. De Rosa, et al.
First Author’s Institution: Brown University
Status: Published in ApJ, open access

Discoveries of exoplanets happen quite often these days — so much so that the discovery alone is not enough to satisfy collective scientific curiosity. Discovery with direct imaging, in particular, does not usually reveal much about the planet other than its existence. However, unlike the transit method and radial-velocity measurements, direct imaging allows us to observe exoplanets with very long periods, an under-sampled population among currently known exoplanets. Still, direct imaging alone cannot give us the full set of orbital parameters of a planetary system. Since long-period exoplanets cannot be easily observed by any method but direct imaging, the question arises — how can we find the orbital properties of this planetary population with the measurements we have?


A visualization of the OFTI method sampling, scaling and rotating a randomly selected orbit of the fitted exoplanet. In the lowest image, the red lines are the accepted orbits while the gray lines show the rejected orbits. [Blunt et al. 2017]

The authors of today’s paper use a new rejection-sampling method, called Orbits for the Impatient (OFTI), to quickly find the orbits of these exoplanets. This method generates random trial orbits from astrometric measurements, scales and rotates those orbits, and then rejects the least probable ones. A visualization of this process is shown in the figure above.

This method combines astrometric observations and their uncertainties with prior probability density functions to produce posterior probability density functions for the generated orbits. The core of a rejection-sampling method goes like this: the code generates random sets of orbital parameters, calculates a probability for each set, then rejects sets with low probability. The rejection step in OFTI compares each generated probability to a number drawn uniformly at random from (0, 1): if the probability is greater than the random number, the orbit is accepted. This process repeats until the desired number of orbits has been accepted.
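The accept/reject step above can be sketched in a few lines. The likelihood here is a made-up, one-parameter stand-in (peaked at a hypothetical 100-year period, normalised so its peak is 1); OFTI’s real probabilities come from multi-parameter fits to astrometry.

```python
import numpy as np

rng = np.random.default_rng(42)

def likelihood(period_yr):
    """Illustrative likelihood, normalised so its peak value is 1."""
    return np.exp(-0.5 * ((period_yr - 100.0) / 20.0) ** 2)

def rejection_sample(n_wanted):
    accepted = []
    while len(accepted) < n_wanted:
        period = rng.uniform(10.0, 300.0)  # generate a random trial orbit
        u = rng.uniform(0.0, 1.0)          # random number in (0, 1)
        if likelihood(period) > u:         # accept if the probability exceeds it
            accepted.append(period)
    return np.array(accepted)

orbits = rejection_sample(2000)
print(90.0 < orbits.mean() < 110.0)        # True: accepted samples cluster at the peak
```

Because orbits are accepted in proportion to their probability, the accepted set directly approximates the posterior distribution, with no burn-in or convergence checks needed.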

Algorithms such as Metropolis-Hastings MCMC are usually used for orbital-fitting problems, but OFTI takes far less time. Because OFTI’s trials are independent, the fitting and rejection sampling can be repeated several times without introducing a bias in the fit. Running OFTI for several successive trials gives an unbiased estimate of the orbit up to 100 times faster than traditional Metropolis-Hastings MCMC fitting.

You may wonder how this method manages to run quickly without compromising the accuracy of its results. The answer to this musing is, of course, clever computational and statistical tricks. OFTI uses vectorized arrays rather than iterative loops where possible, and it is specifically designed to run multiple trials in parallel. Because the astrometric measurements OFTI uses to generate orbits carry errors, it first calculates the minimum χ² value of all orbits tested during an initial run, then subtracts that minimum from every generated χ² value. This way, orbits with an artificially high χ² are not unfairly rejected outright. OFTI also constrains the inclination and mass based on prior measurements, then uses the maximum, minimum, and standard deviation of the accepted array to narrow the range of values for these parameters, preventing the generation of obviously unlikely orbits.
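The χ² offset trick is easy to see numerically. The values below are illustrative, not from the paper: subtracting the minimum χ² before converting to a probability rescales the best trial orbit to probability 1, so a large overall χ² offset does not doom every orbit to rejection.

```python
import numpy as np

chi2 = np.array([52.0, 53.5, 58.0, 70.0])      # raw chi-squared of trial orbits

p_raw = np.exp(-0.5 * chi2)                    # all astronomically small
p_scaled = np.exp(-0.5 * (chi2 - chi2.min()))  # best orbit rescaled to 1

print(p_raw.max() < 1e-10)   # True: every orbit would be rejected outright
print(p_scaled[0])           # 1.0: the best orbit now has a fair chance
```

Only the relative χ² between orbits matters for ranking them, so the rescaling changes nothing about which orbits are favoured; it just puts the probabilities on a scale the accept/reject step can actually use.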

In this paper, the authors use this fitting method to find orbital parameters for 10 directly imaged exoplanets and other objects, including brown dwarfs and low-mass stars. Each object has at least two measured epochs of astrometry; however, their orbits had not previously been determined, because the measurements cover only a short arc of each orbit. Using OFTI, the authors were able to successfully solve for the orbits of each of these substellar objects. The fit for one of these objects, GJ 504 b, currently the coldest directly imaged exoplanet, is shown in the figure below.


The orbit sampling of the planet GJ 504 b around star GJ 504 A. The 100 most probable orbits are colored accordingly. The right section of the image shows the measurements made of the object in black, and the red line shows the minimum orbit. [Blunt et al. 2017]

The most obvious application of this new process is long-period exoplanets, but the authors also solve for the orbits of a variety of other systems, including trinary stars and brown-dwarf systems. OFTI is also very useful in planning follow-up observations of targets. This method is incredibly useful, not only to planetary scientists but also to all kinds of stellar specialists. Impatient scientists can now use this method to achieve quick and accurate results — which are, quite frankly, the best kind of results.

About the author, Mara Zimmerman:

Mara is working on her PhD in Astronomy at the University of Wyoming. She has done research with binary stars, including “heartbeat” stars, and currently works on modeling debris disks.



Title: Evidence against star-forming galaxies as the dominant source of IceCube neutrinos
Authors: Keith Bechtol et al.
First Author’s Institution: Wisconsin IceCube Particle Astrophysics Center and University of Wisconsin-Madison
Status: Published in ApJ, open access

The IceCube Neutrino Observatory is a giant telescope embedded deep within the South Pole ice — a trap waiting to detect elusive astrophysical neutrinos. These neutrinos have traveled from violent, often explosive astrophysical sources all the way to the Earth without interacting with another particle. Many of these neutrinos will pass straight through the Earth and out the other side without leaving a trace. However, the strings of detectors inserted a kilometer deep into the Antarctic ice that make up IceCube (Fig. 1) can detect the radiation produced by rare neutrino interactions and, in fact, the IceCube collaboration announced the first ever detection of astrophysical neutrinos back in 2013. (Check out this astrobite on their discovery!) The origin of these particles, however, remains a mystery.

Fig. 1. The IceCube telescope consists of thousands of detectors attached along strings embedded within a cubic kilometer of Antarctic ice.

Active galactic nuclei and gamma-ray bursts were early favorites for the origin of the IceCube neutrinos, but one by one potential sources have been ruled out. Today’s paper eliminates another top contender, further deepening the mystery.

Star-forming galaxies (SFGs) host constant collisions between energetic cosmic rays and interstellar gas. These collisions produce unstable particles called pions, which live for tiny fractions of a second before decaying into more sturdy particles like neutrinos. Along with these neutrinos comes a similar amount of energetic gamma-rays. Observations of the gamma-rays can therefore support or, as the case may be, undermine the argument that SFGs produce the IceCube neutrinos.

Gamma-rays observed by the Fermi Large Area Telescope can be divided into two categories — those that can be traced back to a particular source, and those that make up a diffuse fog of gamma-rays with no particular known origin. Many of the diffuse gamma-rays come from point sources too faint to be individually resolved. In order for SFGs to be a valid source of IceCube neutrinos, they must produce a significant fraction of these diffuse gamma-rays. However, recent studies of the diffuse gamma-ray background suggest that blazars make up at least 72%, leaving little room for SFG gamma-rays.

In today’s paper, Keith Bechtol and his collaborators predict the maximum number of neutrinos that could be produced by SFGs without violating the limits on the gamma-ray emission. Figure 2 shows that even with the maximum allowed gamma-ray emission, SFGs just are not going to produce enough neutrinos to account for everything we are seeing in IceCube.

Fig. 2. The constraints on the gamma-rays emitted by star forming galaxies (red line) limit the neutrinos these galaxies can produce (black line). The IceCube neutrinos represented by the black points lie significantly above the black line, still unexplained. [Bechtol et al. 2017]

So, one more promising source is eliminated, narrowing down the options, and pushing more and more unexpected possibilities to the forefront. Other recent work on the origin of IceCube neutrinos suggests that perhaps radio galaxies can explain both the IceCube neutrinos and the remaining piece of the diffuse gamma-ray emission. Others claim that we have acted too quickly in ruling out SFGs — perhaps more complex modeling of the particle interactions within these galaxies will allow for some of the gamma-rays that must be produced to be quickly absorbed, escaping our detection. Continued analysis and larger datasets will someday reveal the answer to this mystery. It is already exciting, however, that we are beyond the realm of expected answers. New observations always reveal the unexpected.

About the author, Nora Shipp:

I am a 2nd year grad student at the University of Chicago. I work on combining simulations and observations to learn about the Milky Way and dark matter.



Title: Where and when: optimal scheduling of the electromagnetic follow-up of gravitational-wave events based on counterpart lightcurve models
Authors: Om Sharan Salafia et al.
First Author’s Institution: University of Milano-Bicocca, Italy; INAF Brera Astronomical Observatory, Italy; INFN Sezione di Milano-Bicocca, Italy
Status: Submitted to ApJ, open access

The LIGO Scientific Collaboration’s historic direct detection of gravitational waves (GWs) brought with it the promise of answers to long-standing astrophysical puzzles that were unsolvable with traditional electromagnetic (EM) observations. In previous astrobites, we’ve mentioned that an observational approach that involves both the EM and GW windows into the universe can help shed light on mysteries such as the neutron star (NS) equation of state, and can serve as a unique test of general relativity. Today’s paper highlights the biggest hindrance to EM follow-up of GW events: the detection process doesn’t localize the black hole (BH) and NS mergers well enough to inform a targeted observing campaign with radio, optical, and higher-frequency observatories. While EM counterparts to GW-producing mergers are a needle that’s likely worth searching an entire haystack for, the reality is that telescope time is precious, and everyone needs a chance to use these instruments for widely varying scientific endeavors.

The first GW detection by LIGO, GW150914, was followed up by many observatories that agreed ahead of time to look for EM counterparts to LIGO triggers. The authors of this study propose to improve upon the near-aimless searches in swaths of hundreds of square degrees that have been necessary following the first few GW candidate events (see Figure 1). Luckily, there are two key pieces of information we have a priori (in advance): information about the source of the GW signal that can be pulled out of the LIGO data, and an understanding of the EM signal that will be emitted during significant GW-producing events.

Figure 1: Simplified skymaps for the two likely and one candidate (LVT151012) GW detections as 3D projections onto the Milky Way. The largest contours are 90% confidence intervals, while the innermost are 10% contours. From the LIGO Scientific Collaboration.

What Are We Even Looking For?

Mergers that produce strong GW signals include BH–BH, BH–NS, and NS–NS binary inspirals. GW150914 was a BH–BH merger, which is less likely to produce a strong EM counterpart due to a lack of circumbinary material. The authors of this work therefore focus on the two most likely signals following a BH–NS or NS–NS merger. The first is a short gamma-ray burst (sGRB), which would produce an immediate (“prompt”) gamma-ray signal and a longer-lived “afterglow” in a large range of frequencies. Due to relativistic beaming, it’s rare that prompt sGRB emission is detected, as jets must be pointing in our direction to be seen. GRB afterglows are more easily caught, however. The second is “macronova” emission from material ejected during the merger, which contains heavy nuclei that decay and produce a signal in the optical and infrared shortly after coalescence. One advantage to macronova events is that they’re thought to be isotropic (observable in all directions), so they’ll be more easily detected than the beamed, single-direction sGRBs.

(Efficiently) Searching Through the Haystack

LIGO’s direct GW detection method yields a map showing the probability of the merger’s location on the sky (more technically, the posterior probability density for sky position, or “skymap”). The uncertainty in source position is partly so large because many parameters gleaned from the received GW signal, like distance, inclination, and merger mass, are degenerate. In other words, many different combinations of various parameters can produce the same received signal.

An important dimension missing from the LIGO skymap is time. The skymap alone cannot say when, after the GW signal arrives, is the smartest time to start looking for the EM counterpart; that requires information about the progenitor system. In order to produce a so-called “detectability map” showing not only where the merger is likely located but also when we’re most likely to observe the resulting EM signal at a given frequency, the authors follow an (albeit simplified) procedure to inform their searches.

The first available pieces of information are the probability that the EM event, at some frequency, will be detectable by a certain telescope, and the time evolution of the signal strength. This information is available a priori given a model of the sGRB or macronova. Then, LIGO will detect a GW signal, from which information about the binary inspiral will arise. These parameters are combined with the aforementioned progenitor information to create a map that helps inform not only where the source will most likely be, but also when various observatories should look during the EM follow-up period. Such event-based, time-dependent detection maps will be created after each GW event, allowing for a much more responsive search for EM counterparts.
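A toy version of that combination looks like the following. Both ingredients here are made up (a random skymap and a simple rise-then-decay afterglow); the authors’ construction uses real LIGO posteriors and detailed sGRB/macronova models, but the scheduling logic — multiply “where” by “when” and observe the best cells first — is the same idea.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tiles, n_times = 50, 24

skymap = rng.dirichlet(np.ones(n_tiles))   # P(source in tile), sums to 1

t = np.arange(n_times, dtype=float)        # hours after the GW trigger
flux = t * np.exp(-t / 6.0)                # hypothetical rise-then-decay afterglow
p_detect = flux / flux.max()               # detection probability vs time

# detect_map[i, j] = chance of catching the counterpart by observing
# tile i at time j
detect_map = np.outer(skymap, p_detect)

# Greedy schedule: observe the most promising (tile, time) cells first
flat_order = np.argsort(detect_map, axis=None)[::-1]
tiles, obs_times = np.unravel_index(flat_order, detect_map.shape)

print(tiles[0] == skymap.argmax())    # True: point first at the likeliest tile...
print(obs_times[0] == flux.argmax())  # True: ...at the light curve's peak time
```

In practice the real problem also folds in telescope availability, fields of view, and slew times, which is what makes the scheduling genuinely hard.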

Figure 2: The suggested radio telescope campaign for injection 28840, the LIGO signal used to exemplify a more refined observing strategy. Instead of blindly searching this entire swath of sky, observations are prioritized by signal detectability as a function of time (see color gradient for the scheduled observation times). Figure 8 in the paper.

Using these detectability maps to schedule follow-up observations with various telescopes (and therefore at different frequencies) is complicated to say the least. The authors present a potential strategy for follow-up using a real LIGO injection (a fake signal fed into data to test their detection pipelines) of a NS–NS merger with an associated afterglow. Detectability maps are constructed and observing strategies are presented for an optical, radio, and infrared follow-up search (see Figure 2 as an example). Optimizing the search for an EM counterpart greatly increased the efficiency of follow-up searches for the chosen injection event; for example, the example radio search would have found the progenitor in 4.7 hours, whereas an unprioritized search could have taken up to 47 hours.

Conclusions

The process of refining an efficient method for EM follow-up is distressingly complicated. Myriad unknowns, like EM signal strength, LIGO instrumental noise, observatory availability, and progenitor visibility on the sky all present a strategic puzzle that needs to be solved in the new era of multimessenger astronomy. This work demonstrates that improvements in efficiency are readily available, and that follow-up searches for EM counterparts to GW events will likely be more fruitful as the process is refined.

About the author, Thankful Cromartie:

I am a graduate student at the University of Virginia and completed my B.S. in Physics at UNC-Chapel Hill. As a member of the NANOGrav collaboration, my research focuses on millisecond pulsars and how we can use them as precise tools for detecting nanohertz-frequency gravitational waves. Additionally, I use the world’s largest radio telescopes to search for new millisecond pulsars. Outside of research, I enjoy video games, exploring the mountains, traveling to music festivals, and yoga.



Title:
1. The MASSIVE survey — VII. The relationship of Angular Momentum, Stellar Mass and Environment of Early-Type Galaxies
2. The SAMI galaxy survey: mass as the driver of the kinematic morphology — density relation in clusters

Authors:
1. Melanie Veale, Chung-Pei Ma, Jenny E. Greene, et al.
2. Sarah Brough, Jesse van de Sande, Matt S. Owers, et al.

First Author’s Institution:
1. University of California, Berkeley
2. University of New South Wales, Australia

Status:
1. Submitted to MNRAS, open access
2. Submitted to ApJ, open access

Introduction

Scientific papers are a bit like buses. Sometimes you wait ages for one to take you where you want to go, then — surprise, surprise — two come along at once. This is, of course, a fundamental physical law, to which even astrophysicists are not immune.

In today’s article I’m going to break with tradition a little bit and highlight not one, but two papers, released weeks apart and with similar goals. This happens reasonably often, principally because if the science is both exciting and possible, chances are that more than one team is looking into it! It’s always interesting to see independent groups take on the same question — and of course, the replicability of results is at the core of the scientific method. So for those reasons, and in the interests of fairness, let’s look at two takes on the origin of fast and slow rotating elliptical galaxies.

Fast and Slow Rotators

In the last decade the terms ‘fast rotator’ and ‘slow rotator’ entered astrophysical parlance as detailed studies revealed important differences among nearby galaxies. At first sight all elliptical galaxies look much alike, being more-or-less featureless reddish blobs (see figure). However, a closer look reveals that they exhibit two quite distinct types of kinematic behaviour (the term kinematic in this context refers to the movement of stars within a galaxy, in other words its internal motions). This important detail has been highlighted by Astrobites before.

Elliptical galaxies: big and red, but not to be confused with buses. [Anglo-Australian Observatory]

The terminology here is not particularly imaginative: the principal difference between fast and slow rotators is, well, that the former rotate faster than the latter. But let us go into a bit more depth. Galaxies are collisionless systems, meaning that the gulf separating stars is sufficiently vast relative to their size that head-on collisions never happen in practice. Instead, all interactions are through gravity; stars whip around their host galaxy, their motions governed by its gravitational potential well. The orbits of the stars can be correlated, so that they are mostly orbiting around the same axis and in the same direction — or messy, with disordered orbits. Moreover, while all closed orbits are ellipses, there’s a big difference between a nearly circular orbit (like the Earth going around the Sun) and a highly elongated orbit (like that of a long-period comet). These extremes are sometimes respectively referred to as tangential and radial orbits.

If the orbits of stars in a galaxy are mostly correlated and tangential, the galaxy ends up as a flattened, more oblate rotating system. By contrast, disordered radial orbits give you blob-like systems without much rotation. In the first case, we might say that the system is ‘rotation supported’ (it doesn’t collapse down to a point under its own gravity because it’s rotating and can’t shed its angular momentum) and in the second that it is ‘pressure supported’ (stars falling in towards the centre are balanced by stars that have already passed through the centre and are now travelling outwards). This gets to the crux of the matter: most elliptical galaxies are rotation-supported fast rotators, but a significant fraction (about 15%) are pressure-supported slow rotators. The stark difference in their kinematics has led to suggestions that despite their apparent similarities, an alternate formation channel is required to create slow rotators.

Today’s Papers

In order to get to the bottom of this, the two teams conducted similar investigations. Both used data from large surveys of many galaxies, the MASSIVE survey and the SAMI galaxy survey respectively. Both surveys provide detailed spectroscopy of many galaxies — large samples are necessary since the aim is to draw statistical conclusions about the population of slow rotator galaxies as a whole. From this data, the kinematics of each target can be inferred (I explained how that works in some detail in a previous article, but it’s not essential to recap all that here).

Encouragingly, both studies hold some conclusions in common. As was already believed to be the case, both find that slow rotators are preferentially found among the most massive galaxies. Both teams also looked at the effect of galaxy environment (i.e. whether the galaxy is isolated or contained in a cluster with many nearby neighbours). Massive galaxies do tend to be found in clusters more often, so given the already-established dependence on mass, some environmental trend must exist. What’s important is that when mass is controlled for there is no additional dependence on environment: both teams concur on this point.

Conclusions

Galaxies tend to grow via a series of mergers — collisions — between smaller galaxies, a process that takes place faster in dense environments where the chance of encountering another galaxy is much higher. This is the explanation for the point made above, that galaxies in clusters tend to be more massive than their isolated counterparts.

In the past it has been suggested that slow rotators might form due to a major collision between two similar sized galaxies, a highly disruptive event that would of course tend to leave behind a particularly massive galaxy. This kind of event would be much more common in the centre of a cluster of galaxies. However, neither of the studies presented here find strong evidence for a ‘special’ formation channel like this!

It’s certainly true that slow rotator galaxies tend to be particularly massive, but they don’t seem to care how they were put together (i.e. by many minor mergers or one big major merger): whether minor or major, galaxy mergers will tend to add mass and (usually) decrease the angular momentum of a galaxy. The more mergers that occur (i.e. the more massive a galaxy gets), the slower it will tend to rotate. In other words, fast rotators that grow large enough will eventually transition to become slow rotators instead.

About the author, Paddy Alton:

I am a fourth year PhD student at Durham University’s Centre for Extragalactic Astronomy, where I work with Dr John Lucey and Dr Russell Smith. My research is on the stellar populations of other galaxies — with a specific focus on those of the largest elliptical galaxies, whose stars formed under radically different conditions to those in our own Milky Way. I graduated in 2013 from the University of Cambridge with an MSci in Natural Sciences, having specialised in Astrophysics. Out of the office I enjoy a variety of sports, but particularly rowing (whenever Durham’s fickle river Wear allows it).

R136 observed with WFC3


Title: Thermal Feedback in the High-mass Star and Cluster Forming Region W51
Authors: Adam Ginsburg, Ciriaco Goddi, J.M. Diederik Kruijssen, et al.
First Author’s Institution: National Radio Astronomy Observatory
Status: Accepted to ApJ, open access

Today let’s talk about massive stars! My favorite view of massive stars is the Hubble image of the star cluster R136 in the Large Magellanic Cloud, shown in cropped form in the cover photo above. All the blue shining spots in this picture are massive stars, with masses up to hundreds of solar masses, shining millions of times brighter than the Sun! Massive stars bring beauty to our night skies, as well as structure to the universe. The Hubble image shows massive stars in their magnificent adulthood. But have you ever wondered what they looked like when they were still babies?

Figure 1. W51 as seen by the radio observatories ALMA and VLA. Images from radio observations are ‘false color’, meaning that the colors represent light that cannot be seen with naked eyes. Color scheme: blue is the carbon monoxide (CO) line, orange is the methanol (CH3OH) line, purple is the cyanoacetylene (HC3N) line, green is the radio continuum, and the white haze is free-free emission of ionized gas. [Ginsburg et al. 2017]

Indeed we know very little about their babyhood because baby massive stars are very far away and are usually blocked by opaque dust; to study their births we need observations at longer wavelengths. By looking at infrared (IR) wavelengths, we can study the dust that is heated by newly formed stars, which provides clues to the embedded stars. By studying radio line emission, we can see and trace the dense gas that comes before star formation. Lastly, the radio free-free continuum shows the compact ionized (HII) regions around young stars. Today’s paper does all these, looking into the high-mass star forming region W51 (shown in Figure 1) using ALMA. ALMA’s extraordinary angular resolution never ceases to amaze me. Today’s observations were done at ~0.2″ resolution — equivalent to telling two quarters apart at a distance of ~25 km (about ten standard airport runways placed back-to-back)!
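That quarters-at-25-km comparison is easy to verify with the small-angle formula. A quick sanity check (the ~24 mm diameter of a US quarter is my assumption, not a number from the paper):

```python
ARCSEC_PER_RAD = 206265          # arcseconds in one radian
resolution_arcsec = 0.2          # the ALMA resolution quoted above
quarter_diameter_m = 0.02426     # US quarter diameter, ~24.26 mm (assumed)

# Small-angle formula: distance at which two quarters separated by one
# quarter-diameter subtend 0.2 arcseconds.
theta_rad = resolution_arcsec / ARCSEC_PER_RAD
distance_km = quarter_diameter_m / theta_rad / 1000

print(f"{distance_km:.0f} km")   # ~25 km, matching the comparison above
```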

The paper looks at three baby massive stars in W51: e2, e8, and North (Figure 1). These objects were chosen because they are forming stars vigorously and their gas clouds have not yet been destroyed by supernova explosions, the key ingredients for understanding high-mass star formation. While the authors uncovered a wealth of information through their observations, here we focus on two main aspects: the temperature and ionization structures around the baby massive stars.

Figure 2. Temperature map around the hot core e2. This map was created using the molecular emission lines of methanol around the source. We see that the baby high-mass star heats up a large volume with a radius about 5,000 AU. [Ginsburg et al. 2017]

Figure 2 shows the temperature map of the dense gas around the baby massive star e2, created by modeling eight methanol emission lines. The main takeaway is that baby massive stars heat up a large volume of surrounding gas in their early formation phase, preventing gas from fragmenting and keeping the reservoir of gas available for star formation. The contour (blue line) in the temperature map encloses the region above 350 K, encompassing a region with radius ~5,000 AU. This temperature is much higher than the ~10 K typically observed in interstellar gas.

Figure 3. Image showing the highly excited warm molecular gas (colors) and the free-free radio emission from ionized gas (contours) around e2. The legend shows the nature of different colors. The absence of enhanced heating around the ionized region suggests that ionizing radiation has little effect on the dense molecular gas. [Ginsburg et al. 2017]

What about the ionization structure? Figure 3 shows the warm molecular gas (colors) and the ionized gas (contours) around e2. Again, the bright emission in colors shows that the baby stars are responsible for heating up the nearby dense gas. There are two key features:

  1. There is no enhanced heating of dense gas (brighter colors) around the ionized region (contours). The authors conclude that ionizing radiation from already-formed massive stars has little effect on the star-forming gas;
  2. The bright dust continuum emission (left blue blob) predicts strong ionizing radiation from the embedded baby stars, but the corresponding free-free emission (white contours) is not observed. The authors proposed an explanation: rapid accretion onto the growing stars bloats them and reduces their surface temperature, making them too cold to emit ionizing radiation. This is a big deal! The working of simultaneous gas infall and outward radiation feedback is extremely hard to model even with simulations and supercomputers. Today’s paper presents the first observational insight on what actually happens to growing massive stars!

Today’s paper is a pedagogical piece showcasing how bright scientists and next-generation observatories combine to produce new insights that will guide future observations and simulations. These insights are necessary for us to understand how beautiful massive clusters like R136 came to be. Indeed, we expect that active star formation in the early universe behaved similarly to that within forming massive clusters. Understanding how massive star clusters form therefore provides a unique window onto the cosmic history of star formation.

P.S. The first author of today’s paper is also an active developer of astronomy software — check out his Github page!

About the author, Benny Tsang:

I am a graduate student at the University of Texas at Austin working with Prof. Milos Milosavljevic. Using Texas-sized supercomputers and computer simulations, I focus on understanding the effects of radiation from stars when massive star clusters are being assembled. When I am not staring at computer screens, you will find me running around Austin, exploring this beautiful city.

hot Jupiter


Title: Evidence for Two Hot Jupiter Formation Paths
Authors: Benjamin E. Nelson, Eric B. Ford, and Frederic A. Rasio
First Author’s Institution: Northwestern University
Status: Submitted to AJ, open access

Frolicking Through Fields of Data

The future of astronomy observations seems as bright as the night sky … and just as crowded! Over the next decade, several truly powerful telescopes are set to launch (read about a good number of them here and also here). That means we’re going to have a LOT of data on everything from black holes to galaxies, and beyond — and that’s in addition to the huge fields of data from the past decade that we’re already frolicking through now. It’s certainly far more data than any one astronomer (or even a group of astronomers) wants to analyze one-by-one; that’s why these days, astronomers turn more and more to the power of astrostatistics to characterize their data.

The authors of today’s astrobite had that goal in mind. They explored a widely-applicable, data-driven statistical method for distinguishing different populations in a sample of data. In a sentence, they took a large sample of hot Jupiters and used this technique to try and separate out different populations of hot Jupiters — based on how the planets were formed — within their sample. Let’s break down exactly what they did, and how they did it, in the next few sections!

Hot Jupiters Are Pretty Cool

First question: what’s a hot Jupiter, anyway?

They’re actually surprisingly well-named: essentially, they are gas-giant planets like Jupiter, but are much, much hotter. (Read all about them in previous astrobites, like this one and this other one!) Hot Jupiters orbit perilously close to their host stars — closer even than Mercury does in our own Solar System, for example. But it seems they don’t start out there. It’s more likely that these hot Jupiters formed out at several AU from their host stars, and then migrated inward into the much closer orbits from there.

Figure 1: A gorgeous artist’s impression of a hot Jupiter orbiting around its host star. [ESO/L. Calçada]

As to why hot Jupiters migrate inward … well, it’s still unclear. Today’s authors focused on two migration pathways that could lead to two distinct populations of hot Jupiters in their sample. These migration theories, as well as what the minimum allowed distance to the host star (the famous Roche separation distance, aRoche) would be in each case, are as follows:

  • Disk migration: hot Jupiters interact with their surrounding protoplanetary disk, and these interactions push their orbits inward. In this context, aRoche corresponds to the minimum distance that a hot Jupiter could orbit before its host star either (1) stripped away all of the planet’s gas or (2) ripped the planet apart.
  • Eccentric migration: hot Jupiters start out on very eccentric (as in, more elliptical than circular) orbits, and eventually their orbits morph into circular orbits of distance 2aRoche. In this context, aRoche refers to the minimum distance that a hot Jupiter could orbit before the host star pulled away too much mass from the planet.

The authors defined a parameter ‘x’ for a given hot Jupiter to be x = a/aRoche, where ‘a’ is the planet’s observed semi-major axis. Based on the minimum distances in the above theories, we could predict that hot Jupiters that underwent disk migration would have a minimum x-value of x = aRoche/aRoche = 1. On the other hand, hot Jupiters that underwent eccentric migration would instead have a minimum x-value of x = 2aRoche/aRoche = 2. This x for a given planet is proportional to the planet’s orbital period ‘P’, its radius ‘R’, and its mass ‘M’ in the following way:

x = a/aRoche ~ P^(2/3) M^(1/3) R^(−1)

And this x served as a key parameter in the authors’ statistical models!
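To make the parameter concrete, here is a toy calculation of x for an illustrative hot Jupiter. The rigid-body Roche formula and all of the numbers below are my own illustrative assumptions, not values from the paper:

```python
# Toy calculation of the authors' parameter x = a / aRoche for a
# made-up hot Jupiter around a Sun-like star.
M_SUN = 1.989e30   # kg
M_JUP = 1.898e27   # kg
R_JUP = 7.149e7    # m
AU = 1.496e11      # m

def a_roche(r_planet, m_star, m_planet):
    """Rigid-body Roche separation; the exact prefactor depends on assumptions."""
    return r_planet * (2 * m_star / m_planet) ** (1 / 3)

a = 0.02 * AU                          # a typical hot-Jupiter orbital distance
x = a / a_roche(R_JUP, M_SUN, M_JUP)
print(f"x = {x:.1f}")                  # above the eccentric-migration floor of 2
```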

Toying with Bayesian Statistics

Next question: how did today’s authors statistically model their data?

Figure 2: Probability distribution of x for each observation group, assuming that each hot Jupiter orbit was observed along the edge (like looking at the thin edge of a DVD). The bottom panel zooms in on the top one. Note how the samples have different minimum values! [Nelson et al. 2017]

Short answer: with Bayesian statistics. Basically, the authors modeled how the parameter x is distributed within their planet sample with truncated power laws — so, x raised to some power, cut off between minimum and maximum x values. They split their sample of planets into two groups, based on the telescope and technique used to observe the planets: “RV+Kepler” and “HAT+WASP”. Figure 2 displays the distribution of x for each of the subgroups.

The authors then used the Markov Chain Monte Carlo method (aka MCMC; see the Bayesian statistics link above) to explore which values of the power laws’ slopes and cutoffs best represent their data. Based on their chosen model form, they found that the RV+Kepler sample fit well with their model relating to eccentric migration. On the other hand, they found evidence that the HAT+WASP sample could be split into two populations: about 15% of those planets corresponded to disk migration, while the other 85% or so corresponded to eccentric migration.
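To illustrate the flavour of this approach (a toy sketch, not the authors’ actual model or sampler), here is how one can draw samples from a truncated power law and recover its slope by maximum likelihood; a simple grid search stands in for the full MCMC exploration:

```python
import math
import random

def sample_truncated_powerlaw(gamma, xmin, xmax, n, rng):
    """Inverse-CDF samples from p(x) ~ x**gamma on [xmin, xmax] (gamma != -1)."""
    g1 = gamma + 1
    lo, hi = xmin ** g1, xmax ** g1
    return [(lo + rng.random() * (hi - lo)) ** (1 / g1) for _ in range(n)]

def log_likelihood(xs, gamma, xmin, xmax):
    g1 = gamma + 1
    norm = (xmax ** g1 - xmin ** g1) / g1   # integral of x**gamma over [xmin, xmax]
    return sum(gamma * math.log(x) for x in xs) - len(xs) * math.log(norm)

rng = random.Random(42)
xs = sample_truncated_powerlaw(gamma=-1.5, xmin=2.0, xmax=10.0, n=5000, rng=rng)

# Grid search over the slope (skipping gamma = -1, which needs a log normalisation).
grid = [g / 100 for g in range(-300, 0) if g != -100]
best = max(grid, key=lambda g: log_likelihood(xs, g, 2.0, 10.0))
print(f"recovered slope: {best:.2f}")
```

With 5,000 samples the recovered slope lands close to the true value of −1.5; the real analysis explores slope and cutoffs jointly with MCMC rather than maximizing over a grid.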

Remember that a major goal of today’s authors was to see if they could use this statistical approach to distinguish between planet populations in their sample … and in that endeavor, they were successful! The authors were thus optimistic about using this statistical technique for a much larger sample of hot Jupiters in the future, as oodles of data stream in from telescopes and surveys like KELT, TESS, and WFIRST over the next couple of decades.

Their success joins the swelling toolbox of astrostatistics … and just in time! Telescopes of the present and very-near future are going to flood our computers with data — so unless we’re willing to examine every bright spot we observe in the sky by hand, we’ll need all the help from statistics that we can get!

About the author, Jamila Pegues:

Hi there! I’m a 1st-year grad student at Harvard. I focus on the evolution of protoplanetary disks and extrasolar systems. I like using chemical/structural modeling and theory to explain what we see in observations. I’m also interested in artificial intelligence; I like trying to model processes of decision-making and utility with equations and algorithms. Outside of research, I enjoy running, cooking, reading stuff, and playing board/video games with friends. Fun fact: I write trashy sci-fi novels! Stay tuned — maybe I’ll actually publish one someday!

first stars


Title: Modeling of Lyman-Alpha Emitting Galaxies and Ionized Bubbles at the Era of Reionization
Authors: Hidenobu Yajima, Kazuyuki Sugimura, Kenji Hasegawa
First Author’s Institution: Tohoku University, Sendai, Miyagi, Japan
Status: Submitted to ApJ, open access

About four hundred thousand years after the Big Bang, the universe settled into a pretty dull period in its history. There were no stars or galaxies, just one massive expanse of neutral hydrogen, sitting in the dark. This period in the universe’s history, known appropriately as the Dark Ages, came abruptly to an end when the first stars were born and began to shine, dumping loads of high-energy photons into their surroundings. These photons created ‘bubbles’ of ionised hydrogen around the stars, which slowly grew as more photons were pumped out by the stars. The bubbles surrounding the first stars were pretty small, but later, as stars began to group together into the first galaxies, these bubbles were blown much bigger by the combined photons from all the stars in the galaxy. Over time the bubbles from neighbouring galaxies began to overlap, until eventually all of the hydrogen in the universe was ionised (see Figure 1). This process is known as reionisation (Astrobites has written plenty about reionisation in the past; for more background, go check out some of these articles), and it’s a key period in the universe’s history.

Today’s bite is about these ionised bubbles, the baby galaxies that blew them, and how much they contributed to reionisation. We will see that there is a close relationship between the properties of a galaxy and the size of the bubble it can blow. The size of the bubble also affects how easily we can see the galaxy. Finally, we’ll also learn about two upcoming observatories that will hopefully be able to see both the bubbles and their galaxies at earlier times than ever before.

reionisation timeline

Figure 1: A timeline showing the beginning of the Dark Ages (at recombination), and its end when the first stars and galaxies were born, ionising nearby hydrogen. These ionised bubbles soon grow and overlap, until the majority of the hydrogen in the universe is ionised; this period is known as the Epoch of Reionisation. [Nature 468]

Who blew all the bubbles?

One burning question researchers would like answered is ‘What kinds of galaxies contributed the most to reionisation?’ Many researchers in the field assert that it was small galaxies; they tend to let their ionising photons escape much more easily than massive galaxies, as they have less gas to get in the way. There are also far more small galaxies than big ones: more galaxies, more high-energy photons, more reionisation! Unfortunately, such small galaxies are typically harder to detect than their big cousins since they’re less luminous.

bubble size

Figure 2: Size of ionised hydrogen bubbles (RHII) plotted against the luminosity of the Lyman-alpha emission (LLyα). The bigger the bubble, the stronger the emission. This relationship doesn’t change much with redshift.

That’s not to say that finding small galaxies is impossible. In the early universe, galaxies tend to be creating lots of new stars, and these young stellar populations emit light with a strong hydrogen spectral line, known as Lyman-alpha. Using Lyman-alpha, astronomers hope to be able to see the small galaxies that contribute to reionisation in a big way.

Unfortunately, Lyman-alpha radiation sits at exactly the right energy to be absorbed by neutral hydrogen. So how can we detect it from before the universe was ionised? The trick is to choose galaxies that have blown large bubbles. Galaxies with large enough bubbles allow newly emitted Lyman-alpha photons to travel far enough uninhibited through the ionised bubble to be redshifted by cosmic expansion. Redshifted Lyman-alpha radiation no longer has the right energy to be absorbed by the neutral hydrogen outside the bubble, so it can happily continue travelling all the way to our telescopes on Earth, 12 billion light-years away.
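As a quick back-of-the-envelope check (mine, not from the paper) of where this light ends up: the rest-frame Lyman-alpha line at about 121.6 nm is stretched by a factor of (1 + z) on its way to us, so at z ~ 10 it lands in the near-infrared:

```python
LYA_REST_NM = 121.567     # rest-frame Lyman-alpha wavelength in nanometres

def observed_wavelength_nm(z, rest_nm=LYA_REST_NM):
    """Cosmological redshift stretches wavelengths by a factor of (1 + z)."""
    return rest_nm * (1 + z)

# At z ~ 10 the line lands near 1.3 microns: near-infrared territory,
# well suited to an infrared telescope like JWST.
print(f"{observed_wavelength_nm(10) / 1000:.2f} micron")
```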

So now the question is, what galaxies blow the biggest bubbles? The authors of today’s paper use a simulated model of the early universe to investigate this. Figure 2 shows the predicted size of ionised bubbles against the luminosity of Lyman-alpha. There’s a strong correlation between bubble size and luminosity. So … what galaxies emit the most Lyman-alpha? The bottom left panel of Figure 3 shows the relationship between Lyman-alpha luminosity and stellar mass. There is a clear correlation between the size of a galaxy and the amount of Lyman-alpha radiation it’s pumping out.

ly-alpha strength against stellar mass

Figure 3: The relationships between bubble size and Lyman-alpha luminosity (y axis, top and bottom respectively) with stellar mass and star formation rate (x axis, left and right respectively). The different coloured lines are for different redshifts. The biggest galaxies emit the most Lyman-alpha, and therefore blow the biggest bubbles. The link between bubble size and star formation rate is not as strong.

What does this all tell us? For a start, the model seems to suggest that we won’t be able to see the very smallest galaxies at very high redshifts using Lyman-alpha. All is not lost, however: thanks to two upcoming observatories, we may still be able to see the most energetic Lyman-alpha emitting galaxies and their bubbles at redshifts of around z ~ 10, much higher than we’ve ever seen them before (The most distant Lyman-alpha emitter found to date is at z ~ 8.6).

The first of these new observatories will be the James Webb Space Telescope (JWST), an enormous space-based telescope scheduled to launch in 2018. It will be capable of detecting Lyman-alpha radiation out to very high redshifts: the horizontal line in Figure 2 shows the expected sensitivity of the instrument, within range of the most luminous Lyman-alpha emitters at z ~ 10 according to the model.  

The second of these enormous observatories to come online will be the Square Kilometre Array (SKA), a vast radio telescope array based in both South Africa and Australia. It will be able to ‘see’ neutral hydrogen, leaving the ionised hydrogen bubbles to stand out like holes in a cheese. The vertical dashed line in Figure 2 shows the smallest bubble size that it’s hoped the SKA will be able to see, again well within the range of the biggest bubbles at z ~ 10.

Combining these observatories, the yellow region in Figure 2 represents those galaxies with bubbles big enough to be observed with the SKA, and that allow enough Lyman-alpha to escape to be picked out by JWST. If the model is correct, these will be the most distant Lyman-alpha emitters observed, and the first-ever detection of ionised bubbles. But the smaller galaxies, thought to be responsible for the majority of reionisation, will have to wait for future generations of humongous space- and ground-based telescopes to be detected.

About the author, Christopher Lovell:

I’m a 2nd year postgrad at the University of Sussex. I model high redshift galaxies using hydrodynamical simulations. When I’m not reading for work I read for pleasure, mostly science fiction and history, and when I’m not reading I enjoy dodging London traffic on my bike.

computers


Title: A machine learns to predict the stability of tightly packed planetary systems
Authors: Daniel Tamayo, Ari Silburt, Diana Valencia et al.
First Author’s Institution: University of Toronto at Scarborough, Canada
Status: Published in ApJL, open access

Scientists are impatient people. Nobody has time to make an entire universe and watch it evolve for thirteen billion years to see what happens (do you have any idea how many emails you could answer in thirteen billion years?!), so instead, scientists simulate a smaller version that only takes a few months to mature.* Nobody has time to comb by hand through four years’ worth of Kepler telescope data to look for telltale planet shadows, so instead, scientists write a computer program that finds planets automatically. Nobody has time to manually rearrange the inside of a telescope to observe new stuff every five minutes, so scientists build robots to do it instead.

All of the above strategies work because computers are much faster than humans at doing small, repetitive tasks. But sometimes, even computers are too slow. For example, predicting the fate of a set of planets orbiting around a star can take a computer a couple of weeks. Nothing the computer is doing is complicated — it’s just tracking the motions of the planets and the star, subject to each others’ gravity. But it has to calculate the gravitational forces in question trillions of times, and, as former Federal Reserve Chair Alan Greenspan likes to say, trillions is lots.

Today’s authors wondered: is there a faster way to figure out what will happen to those planets?

Stable or Not?

“What will happen?” is kind of a broad question, and broad questions don’t lend themselves to speedy answers. So the authors decided to take their essay prompt and turn it into a true-or-false: will a given set of planets remain stable for a long time, or not? (“Or not” encompasses a few possibilities — maybe two of the planets collide with each other, or one crashes into the star, or one gets kicked out of the system entirely.)

Planetary scientists care about stability in their planet simulations because it’s a way of checking that the simulations match reality. Stable planetary systems last a long time, and unstable systems fall apart quickly. The odds of seeing an unstable exoplanetary system in real life are slim, just like the odds of looking up from your desk and catching your coworker mid-spilling coffee on his keyboard. So if you’re trying to match your simulation to a real-life, observed system of planets, your simulation should probably be stable.

An Answer Key

So how did the authors get a quick answer to “stable or not?” They handed their computer a practice test with an answer key. The practice test was a set of 5,000 three-planet systems for which they had already run a full-blown, weeks-long simulation to test for stability, so they had answers (“stable” or “unstable”) in hand. The computer’s job was to take those 5,000 systems, together with their answers, and look for patterns: do the stable systems have things in common? Are there clues in the properties of the planets that hint that a system will ultimately be stable?

They gave the computer some time to study this data, and then they tested its performance on a new set of planetary systems it had never seen before. If the computer did well, they reasoned, then they could dispense with time-consuming stability simulations in the future and just rely on the computer’s predictions. If the computer did poorly, well, then back to test prep.

What to Study

First, they only let the computer search for patterns in the bare minimum of data necessary to describe the planets — the shapes of their orbits, their distances from the star, and their distances from each other. These numbers are the equivalent of a stick-figure sketch of each planetary system. The computer did okay with this information, but it was never very confident in assigning an answer of “stable” (see Figure 1, top panel).

So they decided to help the computer out some more: instead of giving it just a bare-bones description of each system, they let it see the results of a short simulation of the planets’ orbits (one that only ran for a few minutes, instead of weeks). The bottom panel of Figure 1 shows the results: major improvement! The computer did much better at confidently sorting the unstable and stable systems.

Well done, computer!

Figure 1: The computer’s test results, given either a bare-bones description of each planetary system (upper panel) or the results of a short stability simulation (lower panel). The colors indicate the correct answer: green means that the systems are genuinely stable, and blue means unstable. “Predicted probability” on the x-axis indicates the computer’s certainty — a value close to 0 means the computer is confident that the system is unstable, and a value close to 1 means the computer is confident the system is stable. A value in the middle indicates that the computer was uncertain. To get an A+ on this test, the computer would have to predict 0 for every blue system and 1 for every green system.

What Does It Mean?

This result isn’t just a cool demonstration of computers’ ability to learn and predict on their own. It also gives us some new insight into what makes planetary systems stable or unstable. The authors investigated why the computer made the predictions it did, and noticed that strong variation in the middle planet’s distance from the star — resulting from the three planets tugging on each other gravitationally — was a good clue that the system would ultimately lose stability. Impatience gets results!
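The workflow described above (label systems with expensive simulations, train a classifier on cheap features, then predict for new systems) can be sketched in miniature. Everything below is a synthetic toy: the “middle planet distance spread” feature is generated by a made-up rule, and a one-feature logistic regression stands in for the authors’ actual machine-learning model:

```python
import math
import random

rng = random.Random(0)

def make_system():
    """Toy 'system': one feature (spread in the middle planet's distance
    during a short integration) and a stability label. The labelling rule
    is invented for illustration; in the paper the labels come from
    weeks-long N-body simulations."""
    spread = rng.random()
    unstable = 1 if spread + 0.1 * rng.gauss(0, 1) > 0.5 else 0
    return spread, unstable

data = [make_system() for _ in range(2000)]
train, test = data[:1500], data[1500:]    # held-out systems the model never saw

# One-feature logistic regression trained by batch gradient descent.
w, b = 0.0, 0.0
learning_rate = 0.5
for _ in range(2000):
    gw = gb = 0.0
    for x, y in train:
        p = 1 / (1 + math.exp(-(w * x + b)))   # predicted P(unstable)
        gw += (p - y) * x
        gb += (p - y)
    w -= learning_rate * gw / len(train)
    b -= learning_rate * gb / len(train)

accuracy = sum(
    ((1 / (1 + math.exp(-(w * x + b)))) > 0.5) == (y == 1) for x, y in test
) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The real paper feeds far richer features from a short integration into a more powerful classifier, but the loop is the same: train on simulation-labelled systems, then predict for new systems in seconds rather than weeks.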

*When run on a couple of the world’s most powerful supercomputers, working together.

About the author, Emily Sandford:

I’m a PhD student in the Cool Worlds research group at Columbia University. I’m interested in exoplanet transit surveys. For my thesis project, I intend to eat the Kepler space telescope and absorb its strength.
