Putting Coronal Models to the Test

Researchers have created a way to measure the performance of models of the Sun’s tenuous upper atmosphere, or corona. What does this new framework tell us about some of the most common coronal models?

An illustration of the regions of the Sun’s atmosphere and interior. [NASA/Goddard]

Seeking Answers about the Solar Atmosphere

The Sun’s superheated corona plays an important role in generating space weather, launching the solar wind, and accelerating energetic particles from the Sun. Because we cannot sample the solar corona directly (even the Sun-skimming Parker Solar Probe won’t venture into the densest part of the corona during its planned closest approach in 2025), we must rely on models of complex plasma physics to interpret observations made from afar.

Astronomers have created a wide variety of models to probe the behavior of the solar corona, but while these models have been compared against data, rarely have they been compared to each other in a systematic way. Now, a team led by Samuel Badman (University of California, Berkeley) has developed a new way to assess the output of multiple models of the Sun’s corona.

Coronal Comparisons

As Badman and collaborators note, models are often assessed according to their ability to reproduce a single feature, like the structure of wispy coronal streamers seen during a solar eclipse or the strength of the solar wind magnetic field at Earth’s orbit. However, this makes it difficult to compare models to each other, and it might even obscure poor performance on other important metrics.

Development of the magnetic field structure metric. The modeled (top) and measured (middle) magnetic field directions are shown. The bottom panel marks where those quantities agree. [Badman et al. 2022]

To remedy this issue, Badman and coauthors developed a framework to compare several coronal models to data as well as to each other. Specifically, the authors compared outputs from three models — ranging from relatively simple to highly complex — to three types of data:

  1. Extreme-ultraviolet images of the Sun’s disk that reveal the locations of coronal holes (i.e., where the Sun’s magnetic field lines extend out into the solar system rather than looping back to the Sun’s surface)
  2. Visible-light images of coronal streamers captured by blocking the light from the Sun’s disk
  3. Magnetic field measurements made by spacecraft between Earth and the Sun (a minimal sketch of this comparison follows the list)
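To make the third comparison concrete, the relevant quantity is the field's polarity, the sign of its radial component. Below is a minimal Python sketch of such an agreement metric, not the authors' code; b_model and b_obs are hypothetical arrays holding modeled and measured field values sampled along a spacecraft trajectory.

```python
import numpy as np

def polarity_agreement(b_model: np.ndarray, b_obs: np.ndarray) -> float:
    """Fraction of samples where the modeled and measured magnetic field
    polarities (the signs of the radial component) agree."""
    return float(np.mean(np.sign(b_model) == np.sign(b_obs)))

# Toy example: a measured polarity series and a model that flips the sign
# on roughly 10% of the samples.
rng = np.random.default_rng(0)
b_obs = rng.choice([-5.0, 5.0], size=1000)  # measured radial field (nT)
flips = rng.random(1000) < 0.1              # where the model gets the sign wrong
b_model = np.where(flips, -b_obs, b_obs)    # modeled radial field
print(f"Polarity agreement: {polarity_agreement(b_model, b_obs):.2f}")  # ~0.90
```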


Comparison of model performance on the white-light neutral line (WL NL) metric, which measures the models’ ability to recreate the structure of the corona. The more complex Wang–Sheeley–Arge (WSA) and Magnetohydrodynamic Algorithm outside a Sphere (MAS) models perform better on this metric than the simpler potential field source surface (PFSS) models. [Adapted from Badman et al. 2022]

Optimizing Output

By comparing model predictions to these types of data, the authors were able to quantify how well the models reproduced the characteristics of the solar corona close to the Sun as well as conditions in the solar wind at Earth’s orbit. The authors’ analysis revealed that none of the three models studied performed well on all three of the tests.

For example, the least complex of the three, the potential field source surface (PFSS) model, has essentially one free parameter: the radius of its source surface. Tuning that radius to best match the structure of coronal streamers worsened the model’s performance on the other two tests (a sketch of such a parameter sweep follows this paragraph). The other, more complex models made better predictions of the positions of coronal holes, the shapes of coronal streamers, or both, but they still struggled to match the magnetic field measurements made by spacecraft farther out from the Sun.
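As an illustration of what tuning a PFSS model involves, the sketch below uses the open-source pfsspy package to compute solutions for several trial source-surface radii. This is a rough sketch under stated assumptions (pfsspy and sunpy installed, pfsspy's bundled GONG sample magnetogram as the boundary condition), not the authors' pipeline; a real tuning run would score each solution against the observational metrics described above.

```python
import sunpy.map
import pfsspy
from pfsspy.sample_data import get_gong_map

# Photospheric radial-field boundary condition: a GONG synoptic magnetogram.
gong_map = sunpy.map.Map(get_gong_map())

# Sweep the source-surface radius, the PFSS model's main free parameter.
for rss in [1.5, 2.0, 2.5, 3.0]:  # trial radii, in solar radii
    pfss_in = pfsspy.Input(gong_map, nrho=35, rss=rss)  # 35 radial grid points
    pfss_out = pfsspy.pfss(pfss_in)
    # Radial field at the source surface: its polarity inversion line is the
    # modeled neutral line that gets compared to white-light streamer images.
    ss_br = pfss_out.source_surface_br
    print(f"rss = {rss}: max |Br| at source surface = {abs(ss_br.data).max():.3f} G")
```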

Overall, the team’s results show that their framework is a valuable tool for making comparisons between models. Going forward, the authors hope to create an open-source tool to make this framework more easily accessible to researchers looking to assess the performance of their own models.

Citation

“Constraining Global Coronal Models with Multiple Independent Observables,” Samuel T. Badman et al 2022 ApJ 932 135. doi:10.3847/1538-4357/ac6610