The following tutorials will be given at ADASS XXXI. As per ADASS tradition, tutorials are given on Sunday (24 Oct) afternoon. A detailed tutorial schedule, as well as sign-up instructions, will be available soon.

Efficient and safe astronomy research with Docker

Mateusz Malenta

Modern research already relies heavily on containers, and this reliance is only going to increase over the coming years. Some of the most common software stacks are already distributed in containerised form, and many more will soon join them.
The use of containerised research software is especially important in the context of reproducible research. Here containers can be used to create and distribute consistent research software environments. They are useful both for complex installations of interconnected libraries and for relatively simple requirements, such as stable Python environments. Containers also make the software stack and research much more accessible, by removing the need for often time-consuming installation processes and by providing an isolated computing ecosystem that can be treated like a black box, where only the input data has to be provided by the user.
Despite their widespread use, many misconceptions and bad practices about the abilities, use cases and safety of containers are currently prevalent within the research community. If used incorrectly, containers can produce inconsistent results, fail to deploy, or even expose critical security vulnerabilities to potential attackers. It is therefore important to equip scientists with the skills to produce and distribute containers that are efficient, easy to use and safe, from the point of view of both the user and the developer.

Primary learning objectives:

  • Understanding of the basic ideas behind modern container technologies and the differences between container platforms.
  • Understanding of the key components of modern container development pipelines based on Docker:
    • building fast, light and safe container images – an introduction to best development practices, but also a look at some of the worst practices and their consequences.
    • container deployment – ways of sharing images with the community, from making the build recipes available to using image repositories and registries.
  • Knowledge and understanding of the common limitations of Docker containers and how to mitigate them as far as possible.
  • Good practices for developing, running and maintaining Docker images and containers.
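The image-building practices listed above can be sketched in a short Dockerfile. This is only an illustration of the kind of recipe the tutorial discusses; the base image, package file and script names are hypothetical, not part of the tutorial materials:

```dockerfile
# A minimal sketch of a small, reproducible image for a Python analysis
# tool; file names and versions are illustrative only.
FROM python:3.9-slim

# Pin dependencies in requirements.txt so the environment is reproducible
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Run as an unprivileged user rather than root (a common safety practice)
RUN useradd --create-home analyst
USER analyst
WORKDIR /home/analyst

# Copy the analysis code in last, so earlier layers stay cached
COPY --chown=analyst . .
ENTRYPOINT ["python", "analysis.py"]
```

Choosing a slim base image and installing only pinned dependencies keeps the image light; dropping root privileges addresses one of the common safety pitfalls mentioned above.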


Visualising big data: how to get the best out of CARTA

Kechil Kirkham, Kuo-Song Wang

CARTA is a major visualisation tool used by hundreds of astronomers worldwide, architected for use with big data sets. There have been seven major public releases to date, plus beta and patch releases; v2.0 was released on 4 June 2021. Further details about CARTA are available at https://cartavis.org/
In the CARTA tutorial, we plan to cover five topics that will help users master CARTA for their own science. For each topic, the first 5-10 minutes are reserved for the tutors to introduce the ideas with slides and live demonstrations, and the rest of the time is for hands-on practice. The final 10 minutes are dedicated to Q&A.

Primary learning objectives:
At the end of this CARTA tutorial learners will have grasped:

  • How to set up the CARTA environment
  • How to use the major features of CARTA to fulfil the science objectives for their data sets
  • Specifically, how to engage with the five tutorial topics set out below, whose objectives are described separately under each topic heading.
  • How to use the help facility and report bugs.


Gnuastro hands-on tutorial for astronomical data analysis

Raúl Infante-Sainz, Zahra Sharbaf, Mohammad Akhlaghi

GNU Astronomy Utilities (Gnuastro) is an official GNU package consisting of various programs and library functions for the manipulation and analysis of astronomical data, and it comes with complete and up-to-date documentation (see https://www.gnu.org/software/gnuastro/manual/html_node). Its only mandatory dependencies are CFITSIO, WCSLIB, and GSL (the GNU Scientific Library), thus avoiding the “dependency hell” present in other widely used systems with hundreds of dependencies.
Gnuastro’s command-line programs are built on the Unix philosophy (“Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features”), so they are highly modular and blend beautifully with the well-established Unix/GNU programs that everyone is already familiar with (awk, sed, grep, etc.). Modularity, running on the command line, high execution speed, and access to the source code let people use the programs inside their own pipelines and high-level analysis workflows. In addition, building new or specialised programs from its extensive library is also straightforward.
The main aim of this tutorial is to guide the audience, in a practical way, through powerful and robust data analysis with Gnuastro. In this tutorial, the user will learn how to perform operations on images and tables (arithmetic, cropping, convolution, masking, warping), simulate astronomical objects, detect sources, construct customised catalogs from images, download astronomical catalogs already available in standard databases (Gaia, NED, VizieR, etc.), make cosmological calculations, and many other things, all using only the command line. From our experience with tested tutorials held at smaller events, once users get familiar with the basic tools and Gnuastro programs, their productivity and ability to carry out much more advanced analyses is practically unlimited.
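As a taste of the command-line workflow described above, the sketch below strings together several real Gnuastro programs (Statistics, Crop, NoiseChisel, Segment, MakeCatalog and Query). The input file names and sky coordinates are placeholders, and the exact option names may differ between Gnuastro versions:

```shell
# Illustrative sketch; image.fits is a placeholder input file.

# Basic statistics of an image
aststatistics image.fits

# Crop a region around a sky position (WCS mode, width in degrees)
astcrop image.fits --mode=wcs --center=53.16,-27.78 --width=0.02 \
        --output=crop.fits

# Detect signal, segment it, and build a catalog of the detections
astnoisechisel crop.fits --output=detected.fits
astsegment detected.fits --output=segmented.fits
astmkcatalog segmented.fits --ra --dec --magnitude --output=catalog.fits

# Query the Gaia database around the same position
astquery gaia --dataset=dr3 --center=53.16,-27.78 --radius=0.05 \
         --column=ra,dec,phot_g_mean_mag --output=gaia.fits
```

Each step is a separate small program whose output feeds the next, in keeping with the Unix philosophy described above.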

Primary Learning Objectives:

  • Demonstrate that data analysis can be done on the command line without hundreds of dependencies or third-party languages.
  • Present the main Gnuastro programs and libraries by using them with real data.
  • At the end of this activity, participants will be able to use Gnuastro for their own research.


Accessing and working with gravitational-wave open data from LIGO and Virgo

A. Trovato

The breakthrough discovery of gravitational waves (GW) has opened the era of GW astronomy. A total of 50 GW signals have been detected to date in the data from the first two observing runs, O1 and O2, and the first six months of the third observing run (O3a). The first binary neutron star merger, GW170817, discovered during O2, led to an electromagnetic follow-up of the event at an unprecedented scale, paving the way for the discovery of optical, X-ray, and radio counterparts. The LIGO and Virgo data corresponding to these events have been released via the Gravitational Wave Open Science Center (GWOSC) at gw-openscience.org.
There are two types of data releases:

  1. The list of discovered GW events, along with short segments of GW strain data around the time of each event and various other science products, including sky maps and posterior samples for astrophysical parameters such as the masses, spins, distance and orientation of the compact binary systems. So far, two event catalogs, GWTC-1 and GWTC-2, have been published. The catalog for the second six-month block of O3 (O3b) is expected in the coming months.
  2. The strain data from entire observing runs, accompanied by data quality information and documentation that allows users to reproduce the results obtained by the LIGO and Virgo Collaborations (LVC). The de-noised and calibrated GW strain data for the O1 and O2 runs were released in January 2018 and February 2019, respectively. The strain data for O3a was released at the end of April 2021.

In recent years, the astronomical community has shown growing interest in GW observations, as demonstrated by the number of visitors to the GWOSC website (about 3,000 unique visitors each month), the thousands of strain file downloads, the number of scientific papers that make use of GWOSC data (about 240 at this moment), and the large participation in the regularly organized open data workshops.
GW data differs in nature from other astronomical observations. This tutorial will provide a learning path for future users who are not acquainted with the specificities of GW data.

Primary learning objectives:

  • Basics and background of gravitational-wave astronomy
  • General presentation of the Gravitational Wave Open Science Center
  • What data products are available? What software tools?
  • Hands-on:
    • How to access GW data?
    • How to visualize GW data?
    • How to simulate GW waveforms?
    • How to use the GWTC catalogs and posterior samples from Bayesian parameter estimation?


Astropy, PyVO and the Radio realm

Hendrik Heinl, Dave Morris

The tutorial will have a strong focus on the use of PyVO as an API for accessing VO services. The standards used will include:

  • Obscore (for data discovery and exploration)
  • TAP/ADQL (for data access)
  • SAMP (to send data from the scripts to client software)
  • Datalink and SODA (to remotely perform cutouts on large images).

The software used will be: Python 3.6 (or newer), astropy, PyVO, Topcat and Aladin. We will provide Python scripts in a GitHub repository.

Primary learning objectives:
After the tutorial the participants will be able:

  • to use PyVO and Obscore for data discovery
  • to use PyVO and Datalink/SODA to perform remote image cutouts
  • to use astropy to write/extend a SAMP handler for their own use case
  • and, as a side effect, to become a bit familiar with TAP/ADQL
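As a sketch of the SAMP objective, the example below uses astropy.samp to run a local hub and register a handler for table-load messages. In practice the hub would typically be provided by Topcat or Aladin, and the table URL here is a dummy placeholder:

```python
# Sketch of a custom SAMP handler; assumes astropy is installed.
import time
from astropy.samp import SAMPHubServer, SAMPIntegratedClient

# Start a local hub (normally Topcat or Aladin would provide one)
hub = SAMPHubServer(web_profile=False)
hub.start()

received = []

def on_table(private_key, sender_id, mtype, params, extra):
    """Called by the hub whenever a table.load.votable message arrives."""
    received.append(params)

listener = SAMPIntegratedClient(name="tutorial-listener")
listener.connect()
listener.bind_receive_notification("table.load.votable", on_table)

# A second client plays the role of Topcat/Aladin sending a table
sender = SAMPIntegratedClient(name="tutorial-sender")
sender.connect()
sender.enotify_all("table.load.votable",
                   url="http://example.org/table.xml",  # dummy URL
                   name="demo table")

# Give the hub a moment to deliver the notification
for _ in range(50):
    if received:
        break
    time.sleep(0.1)

if received:
    print("received table:", received[0]["name"])

sender.disconnect()
listener.disconnect()
hub.stop()
```

The same `bind_receive_notification` pattern is how a script can receive tables or spectra sent from Topcat or Aladin, which is the use case the tutorial targets.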
