A screenshot of the user interface for the WorldWide Telescope (WWT), showing clusters of objects plotted against a sky background.

One of the most rapidly evolving elements of astronomy research is how we handle data. With telescopes and computer simulations producing ever vaster quantities of data, how can we process and analyze it all? What tools can we use to turn it into new astronomical discoveries?

The future of astronomy relies on innovation on this front, and in a Special Issue of the Astrophysical Journal Supplement Series, 23 papers explore different insights and challenges related to astronomical data — presenting new workflows, software instruments, databases, and tutorials that will help astronomers generate novel and significant research results.

Here are the broad categories of data in astronomy that are touched on in this special issue:

Volume renderings from a simulation of a low-metallicity star. This is an example of the data that can be analyzed using cyberhubs, a web-browser-based tool for medium-sized collaborations. [Herwig et al. 2018]

1. Cloud-Based Research Environments for Discovery

Collaborations in astronomy are often large and broadly distributed. As a result, the astronomy community needs the infrastructure to be able to access large data sets, combine them, and collaboratively process them to make discoveries. An article by Herwig et al. presents the cyberhubs system, a package for medium-sized scientific teams to collaboratively interact with data via web browser. Williams et al. discuss the challenges inherent in reducing a large photometric data set — in their case, data from the Panchromatic Hubble Andromeda Treasury (PHAT) — on the Amazon Elastic Compute Cloud (EC2), a commercial system of virtual computers that users can rent on demand. Heidorn et al. present Astrolabe, a cyberinfrastructure project of the University of Arizona and the American Astronomical Society that aims to ensure the long-term curation of astronomical data for future reference and use.

2. Software Instruments for Transient Detection, Alerts, and Analysis

Just some of the time-variable sources that are detected and analyzed, and their characteristic timescales for variation. [Narayan et al. 2018]

Given the current boom of time-domain astronomy, the development of tools for studying transient astronomical phenomena is crucial. Necessary tools include not only those that will detect transients, but also those that provide alerting for rapid followup, and those that enable analysis of the large quantities of resulting data. Law et al. discuss realfast, a fast transient search system at the Jansky Very Large Array that will look for transients in real time as data comes in, reducing the amount of data that must be stored. Guillochon et al. introduce MOSFiT, a software package that enables rapid comparison of transient data to models. And Narayan et al. present ANTARES, an automated software system that sifts through, characterizes, annotates, and prioritizes transient events for followup, allowing for rapid alerting of the community to transients that warrant additional observations.

In addition to searching for unexpected transient events, time-domain astronomers also study the variability of single sources. He et al. describe a long-term study of magnetic-feature and flare activity of three Sun-like stars with Kepler. As for the Sun itself — studying it in detail produces terabytes-per-hour streams of data that must be captured and analyzed. Denker et al. present the challenges of managing such a stream of high-resolution observations at the GREGOR Solar Telescope, and Boubrahimi et al. explore how best to interpolate between solar data collected from a variety of ground-based and space-based solar observatories every day.

3. Statistical Properties of Data with Uncertainties or Gaps

How do we address the issue of incomplete or uncertain data? Correct application of statistical methods is an important aspect of data reduction. Hogg et al., Vianello, Huppenkothen et al., VanderPlas, Huijse et al., Ma et al., and Aggarwal et al. all present methods for careful statistical handling of astronomical data — covering topics from an overview of Markov Chain Monte Carlo methods for sampling probability density functions to a look at how we might use statistics to predict solar eruptions.
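To make the MCMC idea concrete, here is a minimal sketch of a Metropolis sampler (our own illustration, not code from any of these papers; the function and variable names are hypothetical), which draws samples from a probability density known only up to a normalization constant:

```python
import math
import random

def metropolis(log_prob, x0, n_steps, step=1.0, seed=42):
    """Minimal 1-D Metropolis sampler: propose a random step, then
    accept it with probability min(1, p(proposal)/p(current))."""
    rng = random.Random(seed)
    x = x0
    log_px = log_prob(x)
    samples = []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)
        log_pp = log_prob(proposal)
        if rng.random() < math.exp(min(0.0, log_pp - log_px)):
            x, log_px = proposal, log_pp
        samples.append(x)
    return samples

# Target: a standard normal, log p(x) = -x^2/2 up to an additive constant.
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)[1000:]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

After discarding the first 1,000 samples as burn-in, the sample mean and variance should approach the target's values of 0 and 1.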

Blue dots represent the 838 characterized OSSOS discoveries of trans-Neptunian objects from a recent data release. [Bannister et al. 2018]

4. New Database Releases

The production of vast amounts of data isn’t enough — it must also be compiled in a useful way before it can be analyzed by the community. The regular release of large, updated databases is an important driver of astronomical discovery. In this Special Issue, Bannister et al. present the Outer Solar System Origins Survey (OSSOS), a data release of more than 800 trans-Neptunian objects, and Egeland introduces sunstardb, a database useful for studying stars in analogy to the Sun.

5. Astronomy Data in Publication

The big-data boom raises many important questions in scientific publishing, like how data will be cited and classified, whether the source code of software instruments will be made available, and what impact these references might have on the future of astronomical publication. Novacescu et al. discuss the policy of data citation — in particular, using digital object identifiers (DOIs) to refer to data both analyzed and generated by research projects. Frey et al. present an update on the Unified Astronomy Thesaurus, an effort to unite astronomers under a single vocabulary to govern keywords and classification for astronomy research. Allen et al. address the issue of source code availability: can other researchers easily access the software you used, to explore or reproduce your results? Varga examines how metrics based on references or keywords can be used to predict citation impact for scientific articles.

6. Advances in Data Visualization

More screen captures of the WorldWide Telescope user interface. [Rosenfield et al. 2018]

One challenge of astronomy data echoes the challenge inherent in all of science: how can we best communicate and share it? Rosenfield et al. introduce a tool for this, the American Astronomical Society’s WorldWide Telescope (WWT). This project enables terabytes of astronomical images, data, and stories to be viewed and shared among researchers, exhibited in science museums, projected into full-dome immersive planetariums and virtual reality headsets, and taught in classrooms.

It’s evident that there are indeed many challenges raised by the production and management of vast amounts of astronomical data — but there are also many opportunities available. The articles in this Special Issue are meant to provide an introduction to some of the topics currently under consideration, but conversations will continue to evolve as we adapt to this age of big data.

Citation

Special ApJS Issue on Data

Frank Timmes and Leon Golub 2018 ApJS 236 1. doi:10.3847/1538-4365/aab770

black holes in a globular cluster

What is the distribution of sizes of black holes in our universe? Can black holes of any mass exist, or are there gaps in their possible sizes? The shape of this black-hole mass function has been debated for decades — and the dawn of gravitational-wave astronomy has only spurred further questions.

Mind the Gaps

The starting point for the black-hole mass function lies in the initial mass function (IMF) for stellar black holes — the beginning size distribution of black holes after they are born from stars. Instead of allowing for the formation of stellar black holes of any mass, theoretical models propose two gaps in the black-hole IMF:

  1. An upper mass gap at 50–130 solar masses, due to the fact that stellar progenitors of black holes in this mass range are destroyed by pair-instability supernovae.
  2. A lower mass gap below 5 solar masses, which is argued to arise naturally from the mechanics of supernova explosions.

Missing black-hole (BH) formation channels due to the existence of the lower gap (LG) and the upper gap (UG) in the initial mass function. a) The number of BHs at all scales is lowered because no BH can merge with BHs in the LG to form a larger BH. b) The missing channel responsible for the break at 10 solar masses, resulting from the LG. c) The missing channel responsible for the break at 60 solar masses, due to the interaction between the LG and the UG. [Christian et al. 2018]

We can estimate the IMF for black holes by scaling a typical IMF for stars and then adding in these theorized gaps. But is this initial distribution of black-hole masses the same as the distribution that we observe in the universe today?

The Influence of Mergers

Based on recent events, the answer appears to be no! Since the first detections of gravitational waves in September 2015, we now know that black holes can merge to form bigger black holes. An initial distribution of black-hole masses must therefore evolve over time, as mergers cause the depletion of low-mass black holes and an increase in higher-mass black holes.

A team of scientists led by Pierre Christian, a graduate student at Harvard University, has now looked into characterizing this shift. In particular, Christian and collaborators explore how black-hole mergers in the centers of dense star clusters ultimately shape the black-hole mass function of the universe.

Black Holes Today

Christian and collaborators use analytical models of coagulation — mergers of particles to form larger particles — to estimate the impact of mergers in star clusters on resulting black-hole sizes. They find that, over an evolution of 10 billion years, mergers can appreciably fill in the upper mass gap of the black-hole IMF.
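Analytic coagulation models of this kind are typically built on the Smoluchowski coagulation equation (shown here in its standard form, not necessarily the exact notation of Christian et al.), which evolves the number density $n(m,t)$ of black holes of mass $m$:

```latex
\frac{\partial n(m,t)}{\partial t}
  = \frac{1}{2}\int_0^m K(m', m-m')\, n(m',t)\, n(m-m',t)\, dm'
  - n(m,t)\int_0^\infty K(m,m')\, n(m',t)\, dm'
```

The first term counts mergers of smaller black holes that combine to produce mass $m$; the second counts mergers that remove black holes of mass $m$ by growing them further. The merger-rate kernel $K$ encodes the cluster dynamics, and the IMF gaps enter as mass ranges where $n$ is initially zero.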

An example of the black-hole mass function that can result from evolving the initial mass function — complete with gaps — over time. Two breaks appear as a result of the initial gaps: one at ~10 solar masses (LB) and one at ~60 solar masses (UB). [Christian et al. 2018]

The lower mass gap, on the other hand, leaves observable signatures in the final black-hole mass function: a break at 10 solar masses (since black holes below this mass can’t be created by mergers) and one at 60 solar masses (caused by the interaction of the upper and lower gaps). As we build up black-hole statistics in the future (thanks, gravitational-wave detectors!), searching for these breaks will help us to test our models.

Lastly, the authors find that their models can only be consistent with observations if ejection is efficient — black holes must be regularly ousted from star clusters through interactions with other bodies or as a result of kicks when they merge. This idea is consistent with many recent studies supporting a large population of free-floating stellar-mass black holes.

Citation

Pierre Christian et al 2018 ApJL 858 L8. doi:10.3847/2041-8213/aabf88

infrared galactic center

Finding planets in the crowded galactic center is a difficult task, but infrared microlensing surveys give us a fighting chance! Preliminary results from such a study have already revealed a new exoplanet lurking in the dust of the galactic bulge.

Detection Biases

UKIRT-2017 microlensing survey fields (blue), plotted over a map showing the galactic-plane dust extinction. The location of the newly discovered giant planet is marked with blue crosshairs. [Shvartzvald et al. 2018]

Most exoplanets we’ve uncovered thus far were found either via transits — dips in a star’s light as the planet passes in front of its host star — or via radial velocity wobbles of the star as the orbiting planet tugs on it. These techniques, while highly effective, introduce a selection bias in the types of exoplanets we detect: both methods tend to favor discovery of close-in, large planets orbiting small stars; these systems produce the most easily measurable signals on short timescales.
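The transit part of this bias can be seen in a back-of-envelope sketch: the transit depth scales as the square of the planet-to-star radius ratio, so the same planet produces a far larger signal around a small star (the 0.2-solar-radius M-dwarf value below is illustrative):

```python
R_SUN_KM = 695_700.0      # nominal solar radius
R_JUPITER_KM = 71_492.0   # Jupiter's equatorial radius

def transit_depth(r_planet_km, r_star_km):
    """Fractional dip in stellar flux during transit: (Rp/Rs)^2."""
    return (r_planet_km / r_star_km) ** 2

# A Jupiter-sized planet transiting a Sun-like star dims it by ~1%...
depth_sun = transit_depth(R_JUPITER_KM, R_SUN_KM)

# ...but transiting a small M dwarf (~0.2 solar radii), the dip is 25x deeper.
depth_m_dwarf = transit_depth(R_JUPITER_KM, 0.2 * R_SUN_KM)
```

The 25-fold difference in signal strength is one reason transit surveys preferentially find large planets around small stars.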

For this reason, microlensing surveys for exoplanets have something new to add to the field.

Search for a Lens

In gravitational microlensing, we observe a background star as it is briefly magnified by a passing foreground star acting as a lens. If that foreground star hosts a planet, we observe a characteristic shape in the observed brightening of the background star, and the properties of that shape can reveal information about the foreground planet.
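The smooth part of that characteristic shape is described by the standard point-lens (Paczyński) magnification curve; the planet shows up as a short-lived deviation from it. A sketch (our own illustration; the parameter values are hypothetical):

```python
import math

def magnification(u):
    """Point-lens magnification for lens-source separation u,
    in units of the Einstein radius."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def separation(t, t0, u0, tE):
    """Lens-source separation vs time, for impact parameter u0,
    time of closest approach t0, and Einstein-crossing time tE (days)."""
    return math.sqrt(u0 ** 2 + ((t - t0) / tE) ** 2)

# A close approach (u0 = 0.1 Einstein radii) brightens the source ~10x at peak.
peak = magnification(separation(t=0.0, t0=0.0, u0=0.1, tE=20.0))
```

Fitting this curve to the observed brightening, plus the anomaly from the planet, is what yields the system's mass ratio and geometry.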

A diagram of how planets are detected via gravitational microlensing. The detectable planet is in orbit around the foreground lens star. [NASA]

This technique for planet detection is unique in its ability to explore untapped regions of exoplanet parameter space — with microlensing, we can survey for planets around all different types of stars (rather than primarily small, dim ones), planets of all masses near the farther-out “snowlines” where gas and ice giants are likely to form, and even free-floating planets.

In a new study led by Yossi Shvartzvald, a NASA postdoctoral fellow at the Jet Propulsion Laboratory (JPL), a team of scientists now presents preliminary results from a near-infrared microlensing survey conducted with the United Kingdom Infrared Telescope (UKIRT) in Hawaii. Though the full study has not yet been published, the team reports on their first outcome: the detection of a giant planet in the galactic bulge.

Giant Planet Found

The light curve of UKIRT-2017-BLG-001. The inset shows a close-up of the anomaly in the curve, produced by the presence of the planet. [Shvartzvald et al. 2018]

UKIRT-2017-BLG-001 is a giant planet detected at an angle of just 0.35° from the dusty, crowded Galactic center. It suffers from a high degree of extinction, implying that this planet could only have been detected via a near-infrared survey. The mass ratio of UKIRT-2017-BLG-001 to its host star is about 1.5 times that of Jupiter to the Sun, and its host star appears to be about 80% the mass of the Sun.
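Those two numbers imply a planet mass of roughly 1.2 Jupiter masses; a quick back-of-envelope check (treating the quoted ratios as exact):

```python
# The system's mass ratio is ~1.5x the Jupiter-Sun mass ratio,
# and the host star is ~0.8 solar masses.
ratio_vs_jupiter_sun = 1.5
host_mass_solar = 0.8

# Planet mass in Jupiter masses: the Jupiter-Sun ratio cancels out.
# M_planet = 1.5 * (M_Jup / M_Sun) * (0.8 M_Sun) = 1.2 M_Jup
planet_mass_jupiters = ratio_vs_jupiter_sun * host_mass_solar
```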

The star–planet pair is roughly 20,500 light-years from us, which likely places it in the galactic bulge. Intriguingly, evidence suggests that the source star — the star that the foreground star–planet pair lensed — lies in the far galactic disk. If this is true, this would be the first source star of a microlensing event to be identified as belonging to the far disk.

Artist’s impression of the WFIRST mission. [NASA]

Looking Ahead

What’s next for microlensing exoplanet studies? The goal of the UKIRT near-infrared microlensing survey isn’t just to discover planets — it’s to characterize the exoplanet occurrence rates in different parts of the galaxy to inform future surveys.

In particular, the UKIRT survey explored potential fields for the upcoming Wide Field Infrared Survey Telescope (WFIRST) mission, slated to launch in the mid-2020s. This powerful space telescope stands to vastly expand the reach of infrared microlensing detection, broadly surveying our galaxy for planets hiding in the dust.

Citation

Y. Shvartzvald et al 2018 ApJL 857 L8. doi:10.3847/2041-8213/aab71b

Globular cluster NGC 6397

Measuring precise distances to faraway objects has long been a challenge in astrophysics. Now, one of the earliest techniques used to measure the distance to astrophysical objects has been applied to a metal-poor globular cluster for the first time.

A Classic Technique

An artist’s impression of the European Space Agency’s Gaia spacecraft. Gaia is on track to map the positions and motions of a billion stars. [ESA]

Distances to nearby stars are often measured using the parallax technique — tracing the tiny apparent motion of a target star against the background of more distant stars as Earth orbits the Sun. This technique has come a long way since it was first used in the 1800s to measure the distance to stars a few tens of light-years away; with the advent of space observatories like Hipparcos and Gaia, parallax can now be used to map the positions of stars out to thousands of light-years.

Precise distance measurements aren’t only important for setting the scale of the universe, however; they can also help us better understand stellar evolution over the course of cosmic history. Stellar evolution models are often anchored to a reference star cluster, the properties of which must be known precisely. These precise properties can be readily determined for young, nearby open clusters using parallax measurements. But stellar evolution models that anchor on the more-distant, ancient, metal-poor globular clusters have been hampered by the less-precise indirect methods used to measure distance to these faraway clusters — until now.

Top: An image of NGC 6397 overlaid with the area scanned by Hubble (dashed green) and the footprint of the camera (solid green). The blue ellipse represents the parallax motion of a star in the cluster, exaggerated by a factor of ten thousand. Bottom: An example scan from this field. [Adapted from Brown et al. 2018]

New Measurement to an Old Cluster

Thomas Brown (Space Telescope Science Institute) and collaborators used the Hubble Space Telescope to determine the distance to NGC 6397, one of the nearest metal-poor globular clusters and the anchor for one stellar-evolution model. Brown and coauthors used a technique called spatial scanning to greatly broaden the reach of the parallax method.

Spatial scanning was initially developed as a way to increase the signal-to-noise of exoplanet transit observations, but it has also greatly improved the prospects of astrometry — precisely determining the separations between astronomical objects. In spatial scanning, the telescope moves while the exposure is being taken, spreading the light out across many pixels.

Unprecedented Precision

This technique allowed the authors to achieve a precision of 20–100 microarcseconds. From the observed parallax angle of just 0.418 milliarcseconds (for reference, the moon’s angular size is about 5 million times larger on the sky!), Brown and collaborators refined the distance to NGC 6397 to 7,795 light-years, with a measurement error of only a few percent.
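The distance follows directly from the parallax angle via the standard relation d [parsecs] = 1 / p [arcseconds]. A quick check of the quoted numbers (a rough consistency check; small differences from the published value reflect rounding):

```python
LIGHT_YEARS_PER_PARSEC = 3.26156

parallax_arcsec = 0.418e-3              # 0.418 milliarcseconds
distance_pc = 1.0 / parallax_arcsec     # ~2,390 parsecs
distance_ly = distance_pc * LIGHT_YEARS_PER_PARSEC
# ~7,800 light-years -- consistent with the refined 7,795 light-year distance.
```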

Using spatial scanning, Hubble can make parallax measurements of nearby globular clusters, while Gaia has the potential to reach even farther. Looking ahead, the measurement made by Brown and collaborators can be combined with the recently released Gaia data to trim the uncertainty down to just 1%. This highlights the power of space telescopes to make extremely precise measurements of astoundingly large distances — informing our models and helping us measure the universe.

Citation

Thomas Brown et al 2018 ApJL 856 L6. doi:10.3847/2041-8213/aab55a

black hole in Milky Way

Are supermassive black holes found only at the centers of galaxies? Definitely not, according to a new study — in fact, galaxies like the Milky Way may harbor several such monsters wandering through their midst.

Collecting Black Holes Through Mergers

It’s generally believed that galaxies are built up hierarchically, growing in size through repeated mergers over time. Each galaxy in a major merger likely hosts a supermassive black hole — a black hole of millions to billions of times the mass of the Sun — at its center. When a pair of galaxies merges, their supermassive black holes will often sink to the center of the merger via a process known as dynamical friction. There the supermassive black holes themselves will eventually merge in a burst of gravitational waves.
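The sinking is usually estimated with Chandrasekhar's dynamical friction formula (the standard approximate form, omitting the velocity-distribution factor; not a calculation from this paper), in which a body of mass $M$ moving at velocity $\mathbf{v}_M$ through a stellar background of density $\rho$ is decelerated:

```latex
\frac{d\mathbf{v}_M}{dt} \simeq
  -\,\frac{4\pi G^2 M \rho \ln\Lambda}{v_M^{3}}\,\mathbf{v}_M
```

Because the drag grows linearly with $M$, supermassive black holes — far heavier than any star — sink toward the center of a merger far more efficiently than ordinary stars do.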

Spatial distribution and velocities of wandering supermassive black holes in three of the authors’ simulated galaxies, shown in edge-on (left) and face-on (right) views of the galaxy disks. [Tremmel et al. 2018]

But if a galaxy the size of the Milky Way were built through a history of many major galactic mergers, are we sure that all its accumulated supermassive black holes eventually merged at the galactic center? A new study suggests that some of these giants might have escaped such a fate — and they now wander unseen on wide orbits through their galaxies.

Black Holes in an Evolving Universe

Led by Michael Tremmel (Yale Center for Astronomy & Astrophysics), a team of scientists has used data from a large-scale cosmological simulation, Romulus25, to explore the possibility of wandering supermassive black holes. The Romulus simulations are uniquely suited to track the formation and subsequent orbital motion of supermassive black holes as galactic halos are built up through mergers over the history of the universe.

From these simulations, Tremmel and collaborators find a total of 316 supermassive black holes residing within the bounds of 26 Milky-Way-mass halos. Of these, roughly a third are wanderers within 10 kpc of the halo center (roughly the size of the Milky Way’s disk).

These wandering supermassive black holes were kicked onto wide orbits during the merger of their host galaxy with the main halo; Tremmel and collaborators find that their orbits are often tilted, lying outside of the galactic disk. Because these black holes travel through relatively deserted regions, they accumulate little mass and are rarely perturbed in their journeys, wandering for billions of years.

Finding Monsters

Cumulative fraction of simulated Milky-Way-mass halos as a function of the number of supermassive black holes they host. All of the halos host at least one SMBH within 10 kpc from halo center, but the majority host more than that. [Tremmel et al. 2018]

Tremmel and collaborators’ simulations suggest that, regardless of its merger history, a Milky-Way-mass halo will end up with an average of 5 supermassive black holes within 10 kpc of the galaxy center, and an average of 12 within its larger virial radius! This means there could be a number of supermassive black holes — just like the enormous Sgr A* at our galaxy’s core — wandering the Milky Way unseen.

So how can we find these invisible monsters? We already have some observational evidence — in the form of offset and dual active galactic nuclei — of non-central supermassive black holes in distant galaxies. Closer to home, our best bet is to look for tidal disruption events, the burps of emission that occur when an otherwise invisible black hole encounters a star or a cloud of gas.

Citation

Michael Tremmel et al 2018 ApJL 857 L22. doi:10.3847/2041-8213/aabc0a

earth-like planet

The first challenge in the hunt for life elsewhere in our universe is to decide where to look. In a new study, two scientists examine whether Sun-like stars or low-mass M dwarfs are the best bet for hosting exoplanets with detectable life.

Ambiguity of Habitability

The habitable zones of cool M-dwarf stars lie much closer in than for Sun-like stars, placing habitable-zone planets around M dwarfs at greater risk of being affected by space weather.

Most exoplanet scientists will freely admit frustration with the term “habitability” — it’s a word that has many different meanings and is easily misinterpreted when it appears in news articles. Just because a planet lies in a star’s habitable zone, for instance, doesn’t mean it’s necessarily capable of supporting life.

This ambiguity, argue authors Manasvi Lingam and Abraham Loeb (Harvard University and Harvard-Smithsonian Center for Astrophysics), requires us to take a strategic approach when pursuing the search for primitive life outside of our solar system. In particular, we risk losing the enthusiasm and support of the public (and funding sources!) when we focus on the general search for planets in stellar habitable zones, rather than specifically searching for the planets most likely to have detectable signatures of life.

Illustration of the difference between a Sun-like star and a lower-mass, cooler M-dwarf star. [NASA’s Goddard Space Flight Center/S. Wiessinger]

Weighing Two Targets

So how do we determine where best to look for planets with detectable biosignatures? To figure out which stars make the optimal targets, Lingam and Loeb suggest an approach based on standard cost-benefit analyses common in economics. Here, what’s being balanced is the cost of an exoplanet survey mission against the benefit of different types of stellar targets.

In particular, Lingam and Loeb weigh the benefit of targeting solar-type stars against that of targeting stars of any other mass (such as low-mass M dwarfs, popular targets of many current exoplanet surveys). The advantage of one type of target over the other depends on two chief factors:

  1. the probability that the targeted star hosts planets with life, and 
  2. the probability that biosignatures arising from this life are detectable, given our available technology.

Promise of Sun-Like Stars

Relative benefit of searching for signatures of life around stars with varying masses, assuming a transmission spectroscopy survey mission; results are similar for a direct-imaging mission. Green curve assumes a flat prior; red and blue curves assume priors in which habitability is suppressed around low-mass stars. [Lingam & Loeb 2018]

Taking observational constraints into account, Lingam and Loeb’s results depend on what is known in statistics as a “prior” — an assumption that goes into the calculation. The two possible outcomes are:

  1. If we assume a flat prior — i.e., that the probability of life is the same for any choice of star — then searching for life around M dwarfs proves the most advantageous, because the detection of biosignatures becomes much easier.
  2. If we assume a prior in which habitability is suppressed around low-mass stars, then it is more advantageous to search for life around solar-type stars.
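The two outcomes can be caricatured with a toy calculation (ours, not the authors'; the scalings below are hypothetical stand-ins for detectability and for a mass-suppressed habitability prior):

```python
def detectability(mass_solar):
    """Hypothetical scaling: biosignatures are easier to detect around
    smaller stars (deeper transits, closer-in habitable zones)."""
    return 1.0 / mass_solar ** 2

def flat_prior(mass_solar):
    """Flat prior: life is equally likely around any star."""
    return 1.0

def suppressed_prior(mass_solar):
    """Hypothetical suppressed prior: habitability rises steeply with mass."""
    return mass_solar ** 3

TARGETS = {"M dwarf": 0.2, "Sun-like": 1.0}  # masses in solar units

def best_target(prior):
    """Benefit of a target = prior probability of life x detectability."""
    benefit = {name: prior(m) * detectability(m) for name, m in TARGETS.items()}
    return max(benefit, key=benefit.get)
```

Under the flat prior the M dwarf wins on detectability alone; under the suppressed prior the Sun-like star wins, mirroring the two cases above.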

So which of these priors is correct? There is mounting evidence, particularly based on considerations of space weather, that the habitability of Earth-like planets around M dwarfs might be much lower than that of their counterparts around solar-like stars.

If this turns out to be true, then, Lingam and Loeb argue, exoplanet survey missions should target Sun-like stars throughout our galaxy for the best chance of efficiently detecting life beyond our solar system.

Citation

Manasvi Lingam and Abraham Loeb 2018 ApJL 857 L17. doi:10.3847/2041-8213/aabd86

chromosphere

The best-studied star — the Sun — still harbors mysteries for scientists to puzzle over. A new study has now explored the role of tiny magnetic-field hiccups in an effort to explain the strangely high temperatures of the Sun’s upper atmosphere.

Schematic illustrating the temperatures in different layers of the Sun. [ESA]

Strange Temperature Rise

Since the Sun’s energy is produced in its core, the temperature is hottest there. As expected, the temperature decreases farther from the Sun’s core — up until just above its surface, where it oddly begins to rise again. While the Sun’s surface is ~6,000 K, the temperature climbs to ~10,000 K in the outer chromosphere.

So how is the chromosphere of the Sun heated? It’s possible that the explanation can be found not amid high solar activity, but in quiet-Sun regions.

In a new study led by Milan Gošić (Lockheed Martin Solar and Astrophysics Laboratory, Bay Area Environmental Research Institute), a team of scientists has examined a process that quietly happens in the background: the cancellation of magnetic field lines in the quiet Sun.

Activity in a Supergranule

Top left: SDO AIA image of part of the solar disk. The next three panels are a zoom of the particular quiet-Sun region that the authors studied, all taken with IRIS at varying wavelengths: 1400 Å (top right), 2796 Å (bottom left), and 2832 Å (bottom right). [Gošić et al. 2018]

The Sun is threaded by strong magnetic field lines that divide it into supergranules measuring ~30 million meters across (more than double the diameter of Earth!). Supergranules may seem quiet inside, but looks can be deceiving: the interiors of supergranules contain smaller, transient internetwork fields that move about, often resulting in magnetic elements of opposite polarity encountering and canceling each other.
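The size comparison in parentheses checks out (using a mean Earth diameter of ~12,742 km):

```python
SUPERGRANULE_DIAMETER_M = 30e6   # ~30 million meters, from the text
EARTH_DIAMETER_M = 1.2742e7     # mean diameter of Earth

ratio = SUPERGRANULE_DIAMETER_M / EARTH_DIAMETER_M
# ratio ~ 2.35: indeed "more than double the diameter of Earth"
```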

For those internetwork flux cancellations that occur above the Sun’s surface, a small amount of energy could be released that locally heats the chromosphere. But though each individual event has a small effect, these cancellations are ubiquitous across the Sun.

This raises an interesting possibility: could the total of these internetwork cancellations in the quiet Sun account for the overall chromospheric heating observed?

Simultaneous Observations

To answer this question, Gošić and collaborators explored a quiet-Sun region in the center of a supergranule, making observations with two different telescopes:

  1. The Swedish 1 m Solar Telescope (SST), which provides spectropolarimetry that lets us watch magnetic elements of the Sun as they move and change, and
  2. The Interface Region Imaging Spectrograph (IRIS), a spacecraft that takes spectra in three passbands, allowing us to probe different layers of the solar atmosphere.

Simultaneous observations of the quiet-Sun region with these two telescopes allowed the scientists to piece together a picture of chromospheric heating: as SST observations showed opposite-polarity magnetic-field regions approach each other and then disappear, indicating a field cancellation, IRIS observations often showed brightening in the chromosphere.

Falling Short

SST observations, including the continuum intensity map (upper left), magnetogram showing the magnetic field elements (upper right), and intensity maps in the core of the Ca II 8542 Å line (lower left) and Hα 6563 Å line (lower right). [Gošić et al. 2018]

By careful interpretation of their observations, Gošić and collaborators were able to estimate the total energy contribution from the hundreds of field cancellations they detected. The authors determined that, while the internetwork cancellations can significantly heat the chromosphere locally, the apparent number density of these cancellations falls an order of magnitude short of explaining the overall chromospheric heating observed.

Does this mean quiet-Sun internetwork fields aren’t the cause of the strangely warm temperatures in the chromosphere? Perhaps … or perhaps we don’t yet have the telescope power to detect all of the internetwork field cancellations. If that’s the case, upcoming telescopes like the Daniel K. Inouye Solar Telescope and the European Solar Telescope will let us answer this question more definitively.

Citation

M. Gošić et al 2018 ApJ 857 48. doi:10.3847/1538-4357/aab1f0

Betelgeuse in infrared

What happens on the last day of a massive star’s life? In the hours before the star collapses and explodes as a supernova, the rapid evolution of material in its core creates swarms of neutrinos. Observing these neutrinos may help us understand the final stages of a massive star’s life — but they’ve never been detected.

A view of some of the 1,520 phototubes within the MiniBooNE neutrino detector. Observations from this and other detectors are helping to illuminate the nature of the mysterious neutrino. [Fred Ullrich/FNAL]

Silent Signposts of Stellar Evolution

The nuclear fusion that powers stars generates tremendous amounts of energy. Much of this energy is emitted as photons, but a curious and elusive particle — the neutrino — carries away most of the energy in the late stages of stellar evolution.

Stellar neutrinos can be created through two processes: thermal processes and beta processes. Thermal processes — e.g., pair production, in which a particle/antiparticle pair is created — depend on the temperature and pressure of the stellar core. Beta processes — i.e., when a proton converts to a neutron, or vice versa — are instead linked to the isotopic makeup of the star’s core. This means that, if we can observe them, beta-process neutrinos may be able to tell us about the last steps of stellar nucleosynthesis in a dying star.

But observing these neutrinos is not so easily done. Neutrinos are nearly massless, neutral particles that interact only feebly with matter; out of the whopping ~10⁶⁰ neutrinos released in a supernova explosion, even the most sensitive detectors record the passage of only a few. Do we have a chance of detecting the beta-process neutrinos that are released in the final few hours of a star’s life, before the collapse?
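The mismatch between 10⁶⁰ neutrinos released and a handful detected comes down to inverse-square dilution plus a tiny interaction cross section. A rough sketch of the arithmetic is below; the distance, cross section, and target count are illustrative assumptions (loosely SN 1987A-like), not values from the paper.

```python
import math

# Inverse-square dilution of a supernova neutrino burst.
# All numbers below are illustrative assumptions, not values from the paper.
N_NU = 1e60                 # neutrinos released in the explosion
KPC_CM = 3.086e21           # 1 kiloparsec in centimeters
d = 50.0 * KPC_CM           # assumed distance (~50 kpc, roughly SN 1987A-like)

# Number fluence at Earth: the burst spreads over a sphere of radius d.
fluence = N_NU / (4 * math.pi * d**2)   # neutrinos per cm^2

# Interactions are still rare because the cross section is minuscule.
SIGMA = 1e-43               # assumed inverse-beta-decay cross section, cm^2
N_TARGETS = 1e32            # assumed free protons in a kiloton-scale water detector

expected_events = fluence * SIGMA * N_TARGETS   # of order tens of events
```

Even with trillions of neutrinos streaming through every square centimeter of the detector, the expected event count is only of order tens, which is why supernova neutrino detections are so rare and precious.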

Neutrino luminosities

Neutrino luminosities leading up to core collapse. Shortly before collapse, the luminosity of beta-process neutrinos outshines that of any other neutrino flavor or origin. [Adapted from Patton et al. 2017]

Modeling Stellar Cores

To answer this question, Kelly Patton (University of Washington) and collaborators first used a stellar evolution model to explore neutrino production in massive stars. They modeled the evolution of two massive stars — 15 and 30 times the mass of our Sun — from the onset of nuclear fusion to the moment of collapse.

The authors found that in the last few hours before collapse, during which the material in the stars’ cores is rapidly upcycled into heavier elements, the flux from beta-process neutrinos rivals that of thermal neutrinos and even exceeds it at high energies. So now we know there are many beta-process neutrinos — but can we spot them?

Neutrino fluxes

Neutrino and antineutrino fluxes at Earth from the last 2 hours of a 30-solar-mass star’s life compared to the flux from background sources. The rows represent calculations using two different neutrino mass hierarchies. [Patton et al. 2017]

Observing Elusive Neutrinos

For an imminent supernova at a distance of 1 kiloparsec, the authors find that the presupernova electron neutrino flux rises above the background noise from the Sun, nuclear reactors, and radioactive decay within the Earth in the final two hours before collapse.

Based on these calculations, current and future neutrino observatories should be able to detect tens of neutrinos from a supernova within 1 kiloparsec, about 30% of which would be beta-process neutrinos. As the distance to the star increases, the time and energy window within which neutrinos can be observed gradually narrows, until it closes for stars at a distance of about 30 kiloparsecs.
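The shrinking detection window follows directly from inverse-square scaling: the same source yields 1/d² as many detected neutrinos at distance d. A minimal sketch, taking the text's "tens of neutrinos within 1 kiloparsec" as an assumed baseline normalization:

```python
# Expected presupernova-neutrino detections versus distance, scaling a
# 1-kpc baseline as 1/d^2. The baseline of ~30 events is an assumption
# loosely based on the "tens of neutrinos" quoted in the text.
def expected_counts(d_kpc, counts_at_1kpc=30.0):
    """Expected detections at distance d_kpc, assuming inverse-square scaling."""
    return counts_at_1kpc / d_kpc**2

near = expected_counts(1.0)     # baseline: ~30 events at 1 kpc
far = expected_counts(30.0)     # well under 1 event at 30 kpc
betelgeuse = expected_counts(650.0 / 3261.6)  # 650 light-years ~ 0.2 kpc
```

At 30 kpc the expected count drops below a single event, consistent with the window closing there, while a Betelgeuse-distance source would yield hundreds of times the 1-kpc baseline.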

Are there any nearby supergiants soon to go supernova so these predictions can be tested? At a distance of only 650 light-years, the red supergiant star Betelgeuse should produce detectable neutrinos when it explodes — an exciting opportunity for astronomers in the far future!

Citation

Kelly M. Patton et al 2017 ApJ 851 6. doi:10.3847/1538-4357/aa95c4

CR7

Thirteen billion years ago, early galaxies ionized the gas around them, producing some of the first light that brought our universe out of its “dark ages”. Now the Atacama Large Millimeter/submillimeter Array (ALMA) has provided one of the first detailed looks into the interior of one of these early, distant galaxies.

Sources of Light

reionization

Artist’s illustration of the reionization of the universe (time progresses left to right), in which ionized bubbles that form around the first sources of light eventually overlap to form the fully ionized universe we observe today. [Avi Loeb/Scientific American]

For the first roughly hundred million years of its existence, our universe expanded in relative darkness — there were no sources of light at that time besides the cosmic microwave background. But as mass started to condense to form the first objects, these objects eventually shone as the earliest luminous sources, contributing to the reionization of the universe.

To learn about the early production of light in the universe, our best bet is to study in detail the earliest luminous sources — stars, galaxies, or quasars — that we can hunt down. One ideal target is the galaxy COSMOS Redshift 7, known as CR7 for short.

Targeting CR7

CR7 is one of the oldest, most distant galaxies known, lying at a redshift of z ~ 6.6. Its discovery in 2015 — and subsequent observations of bright, ultraviolet-emitting clumps within it — have led to broad speculation about the source of its emission. Does this galaxy host an active nucleus? Or could it perhaps contain the long-theorized first generation of stars, metal-free Population III stars?
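A redshift of z ~ 6.6 can be translated into a lookback time with a short numerical integration of the Friedmann equation. The sketch below assumes flat ΛCDM with Planck-like parameters (assumed values, not from the paper) and confirms that light from CR7 left it roughly thirteen billion years ago.

```python
import math

# Lookback time to redshift z in flat Lambda-CDM.
# H0, Om, OL are assumed Planck-like values, not from the paper.
H0 = 67.7             # Hubble constant, km/s/Mpc
OM, OL = 0.31, 0.69   # matter and dark-energy density parameters
H0_INV_GYR = 977.8 / H0   # 1/H0 in Gyr (1 km/s/Mpc = 1/977.8 Gyr)

def E(z):
    """Dimensionless Hubble parameter H(z)/H0 for flat Lambda-CDM."""
    return math.sqrt(OM * (1 + z)**3 + OL)

def lookback_time(z, steps=100_000):
    """Lookback time in Gyr: (1/H0) * integral_0^z dz' / ((1+z') E(z'))."""
    dz = z / steps
    total = 0.0
    for i in range(steps):
        zp = (i + 0.5) * dz          # midpoint rule
        total += dz / ((1 + zp) * E(zp))
    return H0_INV_GYR * total

t_cr7 = lookback_time(6.6)   # roughly 13 Gyr of lookback time
```

The result, about 13 billion years, places CR7 well within the epoch of reionization described above.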

To determine the nature of CR7 and the other early galaxies that contributed to reionization, we need to explore their gas and dust in detail — a daunting task for such distant sources! Conveniently, this is a challenge that is now made possible by ALMA’s incredible capabilities. In a new publication led by Jorryt Matthee (Leiden University, the Netherlands), a team of scientists now reports on what we’ve learned peering into CR7’s interior with ALMA.

ALMA-detected metals in CR7

ALMA observations of [C II] (white contours) are overlaid on an ultraviolet image of the galaxy CR7 taken with Hubble (background image). The presence of [C II] throughout the galaxy indicates that CR7 does not primarily consist of metal-free gas, as had been previously proposed. [Matthee et al. 2017]

Metals yet No Dust?

Matthee and collaborators’ deep spectroscopic observations of CR7 targeted the far-infrared dust continuum emission and a gas emission line, [C II]. The authors detected [C II] emission in a large region in and around the galaxy, including near the ultraviolet clumps. This clearly indicates the presence of metals in these star-forming regions, and it rules out the possibility that CR7’s gas is mostly primordial and forming metal-free Pop III stars.

The authors do not detect far-infrared continuum emission from dust, which sets an unusually low upper limit on the amount of dust that may be present in this galaxy. This limit allows them to better interpret their measurements of star formation rates in CR7, providing more information about the galaxy’s properties.

Lastly, Matthee and collaborators note that the [C II] emission is detected in multiple different components that have different velocities. The authors propose that these components are accreting satellite galaxies. If this is correct, then CR7 is not only a target to learn about early sources of light in the universe — it’s also a rare opportunity to directly witness the build-up of a central galaxy in the early universe.

Citation

J. Matthee et al 2017 ApJ 851 145. doi:10.3847/1538-4357/aa9931

merging neutron stars

Now that the hubbub of GW170817 — the first coincident detection of gravitational waves and an electromagnetic signature — has died down, scientists are left with the task of taking the spectrum-spanning observations and piecing them together into a coherent picture. Researcher Iair Arcavi examines one particular question: what caused the blue color in the early hours of the neutron-star merger?

kilonova

Observations of the GW170817 kilonova by Hubble over a ~week-long span. [ESA/Hubble]

Early Color

When the two neutron stars of GW170817 merged in August of last year, they produced not only gravitational waves, but a host of electromagnetic signatures. Chief among these was a flare of emission thought to be powered by the radioactive decay of heavy elements formed in the merger — a kilonova.

The emission during a kilonova can come from a number of different sources — from the heavy-element-rich tidal tails of the disrupting neutron stars, or from fast, light polar jets, or from a wind or a disk outflow — and each of these components could reveal different information about the original neutron stars and the merger.

It’s therefore important that we understand the sources of the emission that we observed in the GW170817 kilonova. In particular, we’d like to know where the early blue emission came from that was spotted in the first hours of the kilonova.

light curve of the GW170817

The combined ultraviolet–optical–infrared light curve of the GW170817 kilonova. The rise in the emission occurs on roughly a day-long timescale. [Arcavi 2018]

Comparing Models

To explore this question, Iair Arcavi (Einstein Fellow at University of California, Santa Barbara and Las Cumbres Observatory) compiled infrared through ultraviolet observations of the GW170817 kilonova from nearly 20 different telescopes. To try to distinguish between possible sources, Arcavi then compared the resulting combined light curves to a variety of models.

Arcavi found that the light curves for the GW170817 kilonova indicate an initial ~24-hour rise of emission. This rise is best matched by models in which the emission is produced by radioactive decay of ejecta with lots of heavier elements (likely from tidal tails). The subsequent decline of the emission, however, is fit as well or better by models that include lighter, faster outflows, or additional emission due to shock-heating from a wind or a cocoon surrounding a jet.

optical and ultraviolet lightcurves

Optical and ultraviolet light curves for the first 3 days after merger, as compared to four different emission models. Observations at earlier times, where the models differ more substantially, could provide stronger constraints for future mergers. [Arcavi 2018]

Missing Ultraviolet

The takeaway from Arcavi’s work is that we can’t yet eliminate any models for the GW170817 kilonova’s early blue emission — we simply don’t have enough data.

Why not? It turns out we had some bad luck with GW170817: a glitch in one of the detectors slowed down localization of the source, preventing earlier discovery of the kilonova. The net result was that the electromagnetic signal of this merger was only found 11 hours after the gravitational waves were detected — and the ultraviolet signal was detected 4 hours after that, when the kilonova light curves are already decaying.

If we had ultraviolet observations that tracked the earlier, rising emission, Arcavi argues, we would be able to differentiate between the different emission models for the kilonova. So while this may be the best we can do with GW170817, we can hope that with the next merger we’ll have a full set of early observations — allowing us to better understand where its emission comes from.

Citation

Iair Arcavi 2018 ApJL 855 L23. doi:10.3847/2041-8213/aab267
