Spotlight on Machine Learning in Astrophysics

To meet the challenges posed by the ever-growing volume of astronomical data, researchers have devised increasingly sophisticated computing techniques. Today, we’re taking a look at three ways machine-learning techniques have been applied to astrophysical data.

Machine Learning in the Spotlight

Machine learning describes a collection of techniques in which computers learn patterns directly from data rather than following explicitly programmed rules. In astrophysics research, this often takes the form of training a model on a set of known inputs and outputs before introducing data from outside the training set and letting the model predict outputs for those new data. For example, researchers could train an algorithm on a set of stellar spectra paired with known properties of those stars (e.g., spectral type, age, metallicity) and then use the resulting model to classify other stars based on their spectra alone.
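
As a concrete illustration of that workflow, the sketch below trains a classifier on spectra with known labels and then scores it on held-out spectra. It uses scikit-learn’s RandomForestClassifier purely for convenience, and the spectra and spectral types are randomly generated placeholders rather than data from any of the studies discussed here.

```python
# Minimal sketch of the supervised-learning workflow described above: train a
# classifier on spectra with known labels, then classify spectra it has not seen.
# The "spectra" and labels are random placeholders, so the printed score only
# illustrates the workflow, not real performance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_stars, n_wavelengths = 500, 200
spectra = rng.normal(size=(n_stars, n_wavelengths))         # stand-in flux values
spectral_types = rng.choice(list("OBAFGKM"), size=n_stars)  # stand-in labels

# Hold out a test set to check how the trained model handles unseen spectra.
X_train, X_test, y_train, y_test = train_test_split(
    spectra, spectral_types, test_size=0.2, random_state=0
)

classifier = RandomForestClassifier(n_estimators=200, random_state=0)
classifier.fit(X_train, y_train)                  # learn from the training set
print("held-out accuracy:", classifier.score(X_test, y_test))
```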

Machine learning and other artificial intelligence techniques are increasingly popular in many fields of science. Here we take a brief look at three recent research articles that describe how machine learning can help us model planet-forming disks, compare observations from different telescopes, and detect fleeting cosmic events.

A Rapid Disk Predictor

A team led by Shunyuan Mao (毛顺元) from the University of Victoria used an artificial neural network to model the interactions between planets and the disks of gas and dust they form in. Planet-forming disks show a wide variety of structures, such as rings and spiral arms, that appear to be linked to the presence, movement, and growth of young planets. By modeling these features, researchers can determine the properties of the planets embedded in protoplanetary disks, but the process can take hours of computing time. Luckily, machine learning appears to offer an easier, faster way to model these disks.

One example of PPDONet’s performance, showing the actual (blue) and predicted (red) surface density profile of a gap in a protoplanetary disk. [Mao et al. 2023]

Mao’s team has introduced the Protoplanetary Disk Operator Network (PPDONet), which can predict the outcome of a disk–planet interaction in less than one second on an ordinary laptop. This enormous reduction in computing time is made possible by the team’s machine-learning methods, which recognize when modeling outcomes will resemble previous runs and jump ahead, eliminating the need to start every simulation from scratch and iterate through millions of timesteps. The team trained their model on fluid dynamics simulations of disks containing a single planet and found that the model accurately predicts the structure of the disks. The model is publicly available.
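
For readers curious what a deep operator network looks like in code, here is a hypothetical sketch of the branch-and-trunk structure that PPDONet’s approach builds on: one small network encodes the planet and disk parameters, another encodes location in the disk, and their combination yields a predicted surface density at each radius. The layer sizes, parameter names, and PyTorch implementation below are illustrative assumptions, not the published architecture.

```python
# Hypothetical sketch of a deep-operator-network surrogate in the spirit of
# PPDONet: a "branch" network encodes the disk and planet parameters, a "trunk"
# network encodes location in the disk, and their dot product gives the
# predicted gas surface density. Layer sizes, parameter names, and values here
# are illustrative assumptions, not the published architecture.
import torch
import torch.nn as nn

class DiskOperatorNet(nn.Module):
    def __init__(self, n_params=3, width=64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(n_params, width), nn.Tanh(),
                                    nn.Linear(width, width))
        self.trunk = nn.Sequential(nn.Linear(1, width), nn.Tanh(),
                                   nn.Linear(width, width))

    def forward(self, params, radius):
        b = self.branch(params)   # (n_systems, width) encoding of each system
        t = self.trunk(radius)    # (n_radii, width) encoding of each radius
        return b @ t.T            # (n_systems, n_radii) predicted surface density

model = DiskOperatorNet()
params = torch.tensor([[1.0e-3, 0.05, 1.0e-3]])      # e.g., planet mass ratio, h/r, viscosity
radius = torch.linspace(0.4, 2.5, 100).unsqueeze(1)  # radii at which to evaluate
surface_density = model(params, radius)              # one fast forward pass, once trained
```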

Matching Images Between Spacecraft

Researchers wanting to predict solar flares, coronal mass ejections, and other forms of solar activity often base their predictions on images of the Sun taken at extreme-ultraviolet wavelengths. Thanks to spacecraft like the Solar Dynamics Observatory (SDO) and the Solar and Heliospheric Observatory (SOHO), we have decades of solar images to work with, but the differences between telescopes can make it challenging to combine data from different sources into a single prediction — when different observations have different fields of view, spatial and temporal resolution, and noise levels, it’s hard to compare apples to apples.

Demonstration of the different fields of view and spatial resolution of SOHO (left) and SDO (right). [Chatterjee et al. 2023]

To make it possible to work with both SDO and SOHO data sets, Subhamoy Chatterjee (Southwest Research Institute) and collaborators trained a deep-learning model using data from the two spacecraft taken at the same time. The model translated the SOHO images to match the resolution and other characteristics of the SDO images. In an improvement over previous attempts to homogenize solar imaging data, Chatterjee’s team also used Bayesian statistical methods to estimate the uncertainty of the translated images — a critical piece of information for estimating the uncertainty of predictions based on those images.
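
The sketch below illustrates those two ingredients in very rough form: a small convolutional network that maps one instrument’s image toward another’s characteristics, and Monte Carlo dropout, one common Bayesian-flavored way to attach a per-pixel uncertainty to the translated image by averaging repeated stochastic forward passes. Every layer, size, and input here is a placeholder, not the architecture used by Chatterjee’s team.

```python
# Toy illustration of the two ingredients, not the authors' architecture:
# (1) a convolutional network that maps an image from one instrument toward the
# characteristics of another, and (2) Monte Carlo dropout, a common
# Bayesian-flavored way to estimate a per-pixel uncertainty by averaging many
# stochastic forward passes. All layers, sizes, and inputs are placeholders.
import torch
import torch.nn as nn

translator = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Dropout2d(p=0.2),                     # left active at inference for MC dropout
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)

soho_like = torch.rand(1, 1, 128, 128)       # placeholder input image

translator.train()                           # keep dropout stochastic
with torch.no_grad():
    samples = torch.stack([translator(soho_like) for _ in range(50)])

translated_mean = samples.mean(dim=0)        # "SDO-like" translated image
translated_sigma = samples.std(dim=0)        # per-pixel uncertainty estimate
```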

A Faster Way to Track Down Transients

Example of the transient search process. The template image (left) is subtracted from the search image (center), resulting in a difference image (right) that clearly shows a transient source. [Adapted from Acero-Cuellar et al. 2023]

Every time we survey the night sky, we find fleeting flashes of light from exploding stars, cosmic collisions, and more. We can learn a lot from studying these events, known as transients, but the process of tracking them down can be time intensive and computationally expensive. A typical method for finding astronomical transients in survey data involves creating reference templates from multiple observations that are then altered to match the seeing conditions and observing setup of the comparison data. The scaled template is then subtracted from the new data, and the resulting image, called a difference image, is scoured for new sources. This method is effective but time consuming, and imaging artifacts, moving stars, and variable stars can all cause false positives. Tatiana Acero-Cuellar (University of Delaware and National University of Colombia) and collaborators suggest that machine learning can make this process faster and eliminate the need for human intervention.
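
A bare-bones version of that template-subtraction step might look like the sketch below, which assumes the template is already aligned with the new image and stands in for a real pipeline’s spatially varying kernel fit with a single Gaussian blur; the image arrays are synthetic.

```python
# Bare-bones sketch of the template-subtraction step, assuming the template is
# already aligned with the new image; a real pipeline fits a spatially varying
# convolution kernel rather than the single Gaussian blur used here, and the
# image arrays below are synthetic.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
template = rng.normal(1000.0, 5.0, size=(64, 64))   # reference image, no transient
search = gaussian_filter(template, sigma=1.5)       # new image with worse seeing...
search[40, 22] += 300.0                             # ...plus a new point source

matched = gaussian_filter(template, sigma=1.5)      # degrade template to match seeing
difference = search - matched                       # the difference image

y, x = np.unravel_index(np.argmax(difference), difference.shape)
print(f"brightest residual at pixel ({y}, {x})")    # candidate transient location
```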

Using data from the Dark Energy Survey, Acero-Cuellar’s team constructed two neural networks to test the possibility of eliminating the difference image altogether. Comparing one network trained on all three images against one trained on everything except the difference image, the team found that dropping the difference image does reduce the network’s ability to identify transients, but only slightly: the accuracy fell from 96% to 91%. While these neural networks are time-consuming to train, especially when the difference image is not used, putting them into practice requires only a few seconds. This demonstrates the potential for neural networks to eliminate a time-consuming step while retaining a high level of accuracy, which could help us handle the enormous amount of data produced by current and upcoming surveys.
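
To make that setup concrete, the sketch below builds the kind of small convolutional classifier such a comparison involves: one version takes a stack of three cutouts (template, search, and difference) and another takes only two, with each outputting the probability that a candidate is a real transient. The architecture and cutout sizes shown are illustrative guesses, not those used by Acero-Cuellar’s team.

```python
# Sketch of the kind of convolutional classifier compared in the study: the
# input is a stack of image cutouts (template + search, optionally + difference)
# and the output is the probability that the candidate is a real transient.
# Layer sizes and cutout dimensions are illustrative guesses.
import torch
import torch.nn as nn

def make_classifier(n_images):
    return nn.Sequential(
        nn.Conv2d(n_images, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 1), nn.Sigmoid(),             # probability of a real transient
    )

with_difference = make_classifier(n_images=3)       # template + search + difference
without_difference = make_classifier(n_images=2)    # template + search only

cutouts = torch.rand(8, 2, 51, 51)                  # batch of placeholder two-image stacks
print(without_difference(cutouts).squeeze())        # scores in [0, 1]
```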

Citation

“PPDONet: Deep Operator Networks for Fast Prediction of Steady-state Solutions in Disk–Planet Systems,” Shunyuan Mao et al 2023 ApJL 950 L12. doi:10.3847/2041-8213/acd77f

“Homogenizing SOHO/EIT and SDO/AIA 171 Å Images: A Deep-learning Approach,” Subhamoy Chatterjee et al 2023 ApJS 268 33. doi:10.3847/1538-4365/ace9d7

“What’s the Difference? The Potential for Convolutional Neural Networks for Transient Detection without Template Subtraction,” Tatiana Acero-Cuellar et al 2023 AJ 166 115. doi:10.3847/1538-3881/ace9d8