To meet the challenges posed by our growing collection of data, researchers have devised increasingly sophisticated computing techniques. Today, we’re taking a look at three ways machine-learning techniques have been applied to astrophysical data.
Machine Learning in the Spotlight
Machine learning is a term that describes a collection of techniques in which computers explore data and develop their own algorithms. In astrophysics research, this often takes the form of training computers on a set of known inputs and outputs before introducing data from outside the training set and allowing the computer to derive outputs for those data. For example, researchers could train an algorithm using a set of stellar spectra coupled with known properties of those stars (e.g., spectral type, age, metallicity) and then use the resulting algorithm to classify other stars based on their spectra.
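The train-then-classify workflow described above can be illustrated with a deliberately tiny sketch. The toy spectra, temperatures, and nearest-centroid classifier below are all invented for illustration; real pipelines use far richer models and real spectra.

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 700, 50)  # nm, toy wavelength grid

def toy_spectrum(temp, noise=0.02):
    """Crude toy spectrum: a single bump whose position shifts with temperature."""
    peak = 3.0e6 / temp  # Wien-like peak position (illustrative only)
    spec = np.exp(-0.5 * ((wavelengths - peak) / 80.0) ** 2)
    return spec + noise * rng.standard_normal(wavelengths.size)

# Training set: spectra paired with known labels (the "known inputs and outputs")
train_temps = {"hot": 9000, "cool": 4500}
centroids = {label: np.mean([toy_spectrum(t) for _ in range(20)], axis=0)
             for label, t in train_temps.items()}

def classify(spectrum):
    """Assign the label whose mean training spectrum is closest (nearest centroid)."""
    return min(centroids, key=lambda label: np.sum((spectrum - centroids[label]) ** 2))

# Classify a star from outside the training set
print(classify(toy_spectrum(8800)))
```

The key idea is the same as in the research setting: the algorithm never sees the unknown star's label; it infers one from patterns learned on the labeled training set.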
Machine learning and other artificial intelligence techniques are increasingly popular in many fields of science. Here we take a brief look at three recent research articles that describe how machine learning can help us model planet-forming disks, compare observations from different telescopes, and detect fleeting cosmic events.
A Rapid Disk Predictor
A team led by Shunyuan Mao (毛顺元) from the University of Victoria used an artificial neural network to model the interactions between planets and the disks of gas and dust they form in. Planet-forming disks show a wide variety of structures, such as rings and spiral arms, that appear to be linked to the presence, movement, and growth of young planets. By modeling these features, researchers can determine the properties of the planets embedded in protoplanetary disks, but the process can take hours of computing time. Luckily, machine learning appears to offer an easier, faster way to model these disks: the team's tool, PPDONet, uses deep operator networks to rapidly predict steady-state solutions for disk–planet systems.

One example of PPDONet’s performance, showing the actual (blue) and predicted (red) density profile of a gap in a disk. [Mao et al. 2023]
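The logic of a fast disk predictor can be conveyed with a much simpler surrogate model: run an expensive simulator on a coarse grid of planet parameters once, fit a cheap mapping from parameter to profile, then predict profiles for unseen parameters instantly. Everything below is a toy; the Gaussian "gap" generator stands in for hours-long hydrodynamic simulations, and plain least squares stands in for PPDONet's deep operator network.

```python
import numpy as np

radii = np.linspace(0.5, 2.0, 100)  # disk radii in units of the planet's orbit

def simulate_gap(mass):
    """Stand-in for an expensive hydro simulation: a more massive planet
    carves a deeper, wider gap in the disk's surface-density profile."""
    depth = 0.9 * mass / (mass + 0.5)
    width = 0.1 + 0.1 * mass
    return 1.0 - depth * np.exp(-0.5 * ((radii - 1.0) / width) ** 2)

# "Training": run the simulator on a coarse grid of planet masses
masses = np.linspace(0.2, 2.0, 10)
profiles = np.array([simulate_gap(m) for m in masses])

# Fit a quadratic-in-mass surrogate for every radius at once (least squares)
design = np.vander(masses, 3)  # columns: m^2, m, 1
coeffs, *_ = np.linalg.lstsq(design, profiles, rcond=None)

def predict_gap(mass):
    """Near-instant surrogate prediction for a mass not in the training grid."""
    return (np.vander([mass], 3) @ coeffs)[0]

error = np.max(np.abs(predict_gap(1.1) - simulate_gap(1.1)))
print(error)  # small prediction error at an unseen mass
```

Once the surrogate is fit, each prediction costs a single matrix multiply instead of a full simulation, which is the same trade the neural-network approach makes at far greater fidelity.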
Matching Images Between Spacecraft
Researchers wanting to predict solar flares, coronal mass ejections, and other forms of solar activity often base their predictions on images of the Sun taken at extreme-ultraviolet wavelengths. Thanks to spacecraft like the Solar Dynamics Observatory (SDO) and the Solar and Heliospheric Observatory (SOHO), we have decades of solar images to work with, but the differences between telescopes can make it challenging to combine data from different sources into a single prediction. When different observations have different fields of view, spatial and temporal resolution, and noise levels, it's hard to compare apples to apples. To bridge this gap, Subhamoy Chatterjee and collaborators developed a deep-learning approach that homogenizes 171 Å images from SOHO's Extreme ultraviolet Imaging Telescope (EIT) and SDO's Atmospheric Imaging Assembly (AIA).

Demonstration of the different fields of view and spatial resolution of SOHO (left) and SDO (right). [Chatterjee et al. 2023]
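A crude baseline for putting two instruments' images on the same footing is to resample onto a common pixel grid and match the intensity scale, as this sketch does with synthetic "solar" images; the deep-learning approach goes well beyond this by learning instrument-specific corrections. All images and parameters here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def observe(n_pix, noise):
    """Toy observation: a uniform bright disk plus instrument noise."""
    y, x = np.mgrid[0:n_pix, 0:n_pix]
    r = np.hypot(x - n_pix / 2, y - n_pix / 2)
    image = (r < n_pix / 3).astype(float)  # uniform "solar" disk
    return image + noise * rng.standard_normal((n_pix, n_pix))

low_res = observe(64, noise=0.10)    # older instrument: coarse and noisy
high_res = observe(256, noise=0.02)  # newer instrument: fine and clean

# Step 1: upsample the coarse image onto the fine pixel grid (nearest neighbor)
factor = high_res.shape[0] // low_res.shape[0]
upsampled = np.kron(low_res, np.ones((factor, factor)))

# Step 2: match the intensity scales (zero mean, unit variance in both images)
def standardize(img):
    return (img - img.mean()) / img.std()

homogenized = standardize(upsampled)
reference = standardize(high_res)
print(homogenized.shape == reference.shape)  # images now share a common grid
```

Even after this resampling, the two images still differ in noise character and fine detail, which is exactly the residual mismatch a learned translation between instruments aims to remove.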
A Faster Way to Track Down Transients

Example of the transient search process. The template image (left) is subtracted from the search image (center), resulting in a difference image (right) that clearly shows a transient source. [Adapted from Acero-Cuellar et al. 2023]
Using data from the Dark Energy Survey, Acero-Cuellar's team constructed two convolutional neural networks to test whether the difference image can be eliminated altogether. One network was trained on all three images and the other on all but the difference image. The team found that dropping the difference image does reduce the network's ability to identify transients, but only slightly: accuracy fell from 96% to 91%. While these neural networks are time-consuming to train, especially when the difference image is not used, putting them into practice requires only a few seconds. This demonstrates the potential for neural networks to eliminate a time-consuming step while retaining a high level of accuracy, which could help us handle the enormous amount of data produced by current and upcoming surveys.
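The template-subtraction step that the neural networks aim to bypass is simple to sketch: subtract an earlier image of the same sky from a new one so that static stars cancel and only changing sources remain. The images, source positions, and noise levels below are invented toy data.

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (32, 32)

# Template image: the static sky (a few "stars"), observed earlier
template = np.zeros(shape)
for y, x in [(5, 7), (20, 12), (28, 25)]:
    template[y, x] = 100.0

# Search image: the same sky later, with a new transient at pixel (15, 15)
search = template.copy()
search[15, 15] = 80.0

# Each epoch carries its own independent noise
template_obs = template + rng.normal(0.0, 1.0, shape)
search_obs = search + rng.normal(0.0, 1.0, shape)

# Classic difference imaging: static stars cancel; the transient remains
difference = search_obs - template_obs
peak = np.unravel_index(np.argmax(difference), shape)
print(peak)  # pixel coordinates of the candidate transient
```

In real surveys this subtraction is far more delicate, since the two epochs must first be aligned and matched in image quality, which is why skipping the step entirely is such an attractive shortcut.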
Citations
“PPDONet: Deep Operator Networks for Fast Prediction of Steady-state Solutions in Disk–Planet Systems,” Shunyuan Mao et al 2023 ApJL 950 L12. doi:10.3847/2041-8213/acd77f
“Homogenizing SOHO/EIT and SDO/AIA 171 Å Images: A Deep-learning Approach,” Subhamoy Chatterjee et al 2023 ApJS 268 33. doi:10.3847/1538-4365/ace9d7
“What’s the Difference? The Potential for Convolutional Neural Networks for Transient Detection without Template Subtraction,” Tatiana Acero-Cuellar et al 2023 AJ 166 115. doi:10.3847/1538-3881/ace9d8